

Filebeat Deployment and Usage in Production

Published: 2020-07-29 17:24:39  Source: Web  Views: 854  Author: 拎壺沖沖沖  Category: Big Data

Step 1: Install Filebeat
See: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html
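For a Linux tarball install, the steps look roughly like the sketch below; the version (6.5.1, the one used later in this post) and the /data/software install path are assumptions, so substitute your own:

```shell
# Sketch of a Linux tarball install; version and install path are assumptions.
FILEBEAT_VERSION="6.5.1"   # the version used later in this post
TARBALL="filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz"
URL="https://artifacts.elastic.co/downloads/beats/filebeat/${TARBALL}"
echo "download: ${URL}"
# curl -L -O "${URL}"                                   # fetch the release tarball
# mkdir -p /data/software && tar xzf "${TARBALL}" -C /data/software
```

RPM and DEB packages are also available; the installation link above covers every platform.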
Step 2: Filebeat directory layout

Type     Description                                       Location
home     Home of the Filebeat installation.                {extract.path}
bin      The location for the binary files.                {extract.path}
config   The location for configuration files.             {extract.path}
data     The location for persistent data files.           {extract.path}/data
logs     The location for the logs created by Filebeat.    {extract.path}/logs

Step 3: Configure Filebeat
The default configuration file is filebeat.yml. Its contents are:
###################### Filebeat Configuration Example #########################

#This file is an example configuration file highlighting only the most common
#options. The filebeat.reference.yml file from the same directory contains all the
#supported options with more comments. You can use it as a reference.

#You can find the full configuration reference here:
#https://www.elastic.co/guide/en/beats/filebeat/index.html

#For more available modules and options, please see the filebeat.reference.yml sample
#configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

#Each - is an input. Most options can be set at the input level, so
#you can use different inputs for various configurations.
#Below are the input specific configurations.

- type: log

    #Change to true to enable this input configuration.
    enabled: false

    #Paths that should be crawled and fetched. Glob based paths.
    paths:

    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

    #Exclude lines. A list of regular expressions to match. It drops the lines that are
    #matching any regular expression from the list.
    #exclude_lines: ['^DBG']

    #Include lines. A list of regular expressions to match. It exports the lines that are
    #matching any regular expression from the list.
    #include_lines: ['^ERR', '^WARN']

    #Exclude files. A list of regular expressions to match. Filebeat drops the files that
    #are matching any regular expression from the list. By default, no files are dropped.
    #exclude_files: ['.gz$']

    #Optional additional fields. These fields can be freely picked
    #to add additional information to the crawled log files for filtering
    #fields:
    #level: debug
    #review: 1

    ### Multiline options

    #Multiline can be used for log messages spanning multiple lines. This is common
    #for Java Stack Traces or C-Line Continuation

    #The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
    #multiline.pattern: ^\[

    #Defines if the pattern set under pattern should be negated or not. Default is false.
    #multiline.negate: false

    #Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
    #that was (not) matched before or after or as long as a pattern is not matched based on negate.
    #Note: After is the equivalent to previous and before is the equivalent to next in Logstash
    #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  #Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  #Set to true to enable config reloading
  reload.enabled: false

  #Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

#The name of the shipper that publishes the network data. It can be used to group
#all the transactions sent by a single shipper in the web interface.
#name:

#The tags of the shipper are included in their own field with each
#transaction published.
#tags: ["service-X", "web-tier"]

#Optional fields that you can specify to add additional information to the
#output.
#fields:
#env: staging

#============================== Dashboards =====================================
#These settings control loading the sample dashboards to the Kibana index. Loading
#the dashboards is disabled by default and can be enabled either by setting the
#options here, or by using the -setup CLI flag or the setup command.
#setup.dashboards.enabled: false

#The URL from where to download the dashboards archive. By default this URL
#has a value which is computed based on the Beat name and version. For released
#versions, this URL points to the dashboard archive on the artifacts.elastic.co
#website.
#setup.dashboards.url:

#============================== Kibana =====================================

#Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
#This requires a Kibana endpoint configuration.
setup.kibana:

#Kibana Host
#Scheme and port can be left out and will be set to the default (http and 5601)
#In case you specify an additional path, the scheme is required: http://localhost:5601/path
#IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"

#Kibana Space ID
#ID of the Kibana Space into which the dashboards should be loaded. By default,
#the Default Space will be used.
#space.id:

#============================= Elastic Cloud ==================================

#These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

#The cloud.id setting overwrites the output.elasticsearch.hosts and
#setup.kibana.host options.
#You can find the cloud.id in the Elastic Cloud web UI.
#cloud.id:

#The cloud.auth setting overwrites the output.elasticsearch.username and
#output.elasticsearch.password settings. The format is <user>:<pass>.
#cloud.auth:

#================================ Outputs =====================================

#Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  #Array of hosts to connect to.
  hosts: ["localhost:9200"]

  #Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
#The Logstash hosts
#hosts: ["localhost:5044"]

#Optional SSL. By default is off.
#List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

#Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"

#Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

#Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

#Sets log level. The default log level is info.
#Available log levels are: error, warning, info, debug
#logging.level: debug

#At debug level, you can selectively enable logging only for some components.
#To enable all selectors use ["*"]. Examples of other selectors are "beat",
#"publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
#filebeat can export internal metrics to a central Elasticsearch monitoring
#cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
#reporting is disabled by default.

#Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

#Uncomment to send the metrics to Elasticsearch. Most settings from the
#Elasticsearch output are accepted here as well. Any setting that is not set is
#automatically inherited from the Elasticsearch output configuration, so if you
#have the Elasticsearch output configured, you can simply uncomment the
#following line.
#xpack.monitoring.elasticsearch:

For a detailed explanation of the configuration file, see: https://www.cnblogs.com/zlslch/p/6622079.html
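Note that almost everything in the example above is commented out, and the input itself has enabled: false. A minimal working configuration, assuming a local Elasticsearch and an example log path, would look roughly like this:

```yaml
# Minimal sketch: one log input shipped straight to a local Elasticsearch.
# The path and host are placeholders -- adjust them to your environment.
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
output.elasticsearch:
  hosts: ["localhost:9200"]
```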
Step 4: Collect each service's logs with Filebeat and store them in Elasticsearch under an index named after the service

1. Create a file named filebeat-123.yml with the following content:
filebeat.config:
  prospectors:
    path: /data/software/filebeat-6.5.1/conf/*.yml
    reload.enabled: true
    reload.period: 10s
output.elasticsearch:
  hosts: ["IP:9200"]
  index: "%{[fields][out_topic]}"
setup.template.name: "customname"
setup.template.pattern: "customname-*"
setup.template.overwrite: true
logging:
  level: debug
2. Then create ceshi.yml under the custom conf path referenced above (/data/software/filebeat-6.5.1/conf):
- type: log
  paths:
    - /var/log/zookeeper/zookeeper.log
  tags: ["zookeeper"]
  exclude_files: [".gz$"]
  scan_frequency: 1s
  fields:
    server_name: hostname   # placeholder: this machine's hostname
    out_topic: "zookeeper_log"
  multiline:
    pattern: "^\\S"
    match: after
- type: log
  paths:
    - /var/log/nginx/access.log
  tags: ["nginx"]
  exclude_files: [".gz$"]
  scan_frequency: 1s
  fields:
    server_name: hostname   # placeholder: this machine's hostname
    out_topic: "nginx_log"
  multiline:
    pattern: "^\\S"
    match: after

The configuration above collects the ZooKeeper and Nginx logs, defining the index names zookeeper_log and nginx_log respectively.
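The intent of the multiline block above is that indented continuation lines (a Java stack trace, say) are appended to the preceding event. The awk pipeline below is only a rough illustration of that grouping, not Filebeat itself:

```shell
# Rough illustration of multiline grouping: a line beginning with a
# non-space character starts a new event; indented lines are appended
# to the event in progress.
events="$(printf 'ERROR boom\n  at Foo.bar(Foo.java:1)\n  at Main.run(Main.java:9)\nINFO ok\n' |
  awk '/^[^[:space:]]/ { if (buf != "") print buf; buf = $0; next }
       { buf = buf " " $0 }
       END { if (buf != "") print buf }')"
printf '%s\n' "$events"   # two events: the joined stack trace, then "INFO ok"
```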

Step 5: Start Filebeat and check the generated indexes in Elasticsearch

./filebeat -e -c filebeat-123.yml
Then check the indexes in Elasticsearch.

The nginx_log and zookeeper_log indexes have been created in Elasticsearch. Next, let's look at their contents in Kibana.
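You can also confirm the indexes from the command line with the _cat API (substitute your Elasticsearch host for IP). The curl call below is the real check; the sample piped into awk is canned output, purely to illustrate filtering the index-name column:

```shell
# Real check (needs a live cluster):
# curl -s 'http://IP:9200/_cat/indices?v'
# Canned sample of _cat/indices output, filtered for names ending in _log:
indices="$(awk '$3 ~ /_log$/ { print $3 }' <<'EOF'
health status index         uuid   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   nginx_log     aaa111 5   1   128        0            210kb      210kb
yellow open   zookeeper_log bbb222 5   1   342        0            512kb      512kb
EOF
)"
printf '%s\n' "$indices"
```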

We can see that real-time logs are already flowing into the zookeeper_log index. So how do we make the view refresh automatically?

Then, in Kibana, we can watch the logs update in real time at a one-minute refresh interval.
