Building a Log Platform on Linux with Filebeat + Logstash + Elasticsearch + Kibana

First, the Linux version used here: CentOS 7.0 (Red Hat kernel).

I. Filebeat

1. Install Filebeat (version filebeat-6.5.4-x86_64.rpm)

Download the package with wget from https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.4-x86_64.rpm.
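For example:

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.4-x86_64.rpm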

Then install the package.
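A minimal sketch, assuming the rpm was downloaded to the current directory (rpm -vi installs with verbose output):

rpm -vi filebeat-6.5.4-x86_64.rpm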

Default installation paths:

/usr/share/filebeat
/etc/filebeat/  

2. Configure Filebeat. The config file is /etc/filebeat/filebeat.yml; inspect the defaults first with cat /etc/filebeat/filebeat.yml.

Then edit it with vi /etc/filebeat/filebeat.yml. The modified file is as follows:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /apps/eam-service/logs/eam-service.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.59.8.48:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

3. Start Filebeat

Change into the Filebeat install directory /usr/share/filebeat and run:

./filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"

Here -e logs to stderr, -c points at the config file, and -d "publish" enables debug output for the publish selector.

4. Restart: find the PID with ps -ef | grep filebeat, then kill -9 <pid> and start Filebeat again.
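For example (<pid> is whatever the grep reports):

ps -ef | grep filebeat
kill -9 <pid>
cd /usr/share/filebeat && ./filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"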

II. Logstash

1. Official download: https://www.elastic.co/downloads/logstash (logstash-6.2.4.tar.gz)

2. Extract it: tar -zxvf logstash-6.2.4.tar.gz

3. Move the extracted directory into place: mv logstash-6.2.4 /usr/share
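Steps 1-3 as a single command sequence (the exact artifact URL is an assumption based on Elastic's usual download layout):

wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.tar.gz
tar -zxvf logstash-6.2.4.tar.gz
mv logstash-6.2.4 /usr/share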

4. Change into /usr/share/logstash-6.2.4 and create a file named logstash-default.conf with the following content:

input {
    stdin {
    }
    
    beats {
        port => "5044" 
    }
}

output {
    stdout { 
        codec => rubydebug 
        }
}

5. Check the configuration and start Logstash. From /usr/share/logstash-6.2.4, run either:

bin/logstash -f logstash-default.conf --config.test_and_exit (the --config.test_and_exit flag parses the configuration file and reports any errors)

or:
bin/logstash -f logstash-default.conf --config.reload.automatic (the --config.reload.automatic flag enables automatic config reloading, so you do not have to stop and restart Logstash after every configuration change)

6. Restart: find the PID with ps -ef | grep logstash, then kill -9 <pid> and start it again.

III. Elasticsearch

1. Official download: https://www.elastic.co/downloads/elasticsearch (elasticsearch-6.6.2.tar.gz)

2. Extract it: tar -xzvf elasticsearch-6.6.2.tar.gz

3. Move the extracted directory into place: mv elasticsearch-6.6.2 /usr/share
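The same sequence for Elasticsearch (artifact URL again assumed from Elastic's download layout):

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.2.tar.gz
tar -xzvf elasticsearch-6.6.2.tar.gz
mv elasticsearch-6.6.2 /usr/share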

4. Create a dedicated user, give it ownership of the directory, and set a password (Elasticsearch will not run as root):

adduser elasticsearch
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch-6.6.2/
passwd elasticsearch

5. Switch to the elasticsearch user, go to /usr/share/elasticsearch-6.6.2/, and edit the config file with vim config/elasticsearch.yml.
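In commands (su - starts a login shell as the new user):

su - elasticsearch
cd /usr/share/elasticsearch-6.6.2
vim config/elasticsearch.yml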

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /usr/share/elasticsearch-6.6.2/to/data
#
# Path to log files:
#
path.logs: /usr/share/elasticsearch-6.6.2/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
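One detail the file above does not show: path.data and path.logs must be writable by the elasticsearch user. The chown in step 4 covers the paths used here, but if you point them elsewhere, something like the following (as root) is needed:

mkdir -p /usr/share/elasticsearch-6.6.2/to/data /usr/share/elasticsearch-6.6.2/to/logs
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch-6.6.2/to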

6. Start Elasticsearch. As the elasticsearch user, from /usr/share/elasticsearch-6.6.2/, run:

./bin/elasticsearch
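To run it in the background instead, Elasticsearch's -d flag daemonizes the process:

./bin/elasticsearch -d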

7. Visit http://10.59.8.48:9200/ to verify that Elasticsearch is running.
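The same check from a shell; a healthy 6.6.2 node answers with a JSON banner along these lines (abbreviated):

curl http://10.59.8.48:9200/

{
  "name" : "...",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "6.6.2", ... },
  "tagline" : "You Know, for Search"
}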

8. Visit http://10.59.8.48:9200/logstash-default/_search to inspect the log entries stored in the logstash-default index.
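With curl, ?pretty makes the response readable (note this index exists only after the Elasticsearch output from section VI is configured in Logstash):

curl 'http://10.59.8.48:9200/logstash-default/_search?pretty'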

IV. Kibana

1. Official download: https://www.elastic.co/downloads/kibana (kibana-6.6.2-linux-x86_64.tar.gz)

2. Extract it: tar -xzvf kibana-6.6.2-linux-x86_64.tar.gz

3. Move the extracted directory into place: mv kibana-6.6.2-linux-x86_64 /usr/share
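And once more for Kibana (artifact URL assumed from the same download layout):

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.6.2-linux-x86_64.tar.gz
tar -xzvf kibana-6.6.2-linux-x86_64.tar.gz
mv kibana-6.6.2-linux-x86_64 /usr/share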

4. Change into /usr/share/kibana-6.6.2-linux-x86_64 and edit the config file: open config/kibana.yml and change the following settings:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "10.59.8.48"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://10.59.8.48:9200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
#i18n.locale: "en"

5. Start Kibana. From /usr/share/kibana-6.6.2-linux-x86_64, run: ./bin/kibana
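To keep Kibana running after the terminal closes, a plain-shell option is nohup (generic shell usage, not a Kibana feature):

nohup ./bin/kibana > kibana.log 2>&1 &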

6. Visit http://xxxxx:5601/ to verify that Kibana is running, then search the indexed data.

7. Click the "Create index pattern" button to add an index pattern. The first index pattern is automatically set as the default; once you have several index patterns you can choose which one is the default. (Tip: Management > Index Patterns)


Kibana is now connected to your Elasticsearch data. It shows a read-only list of the fields in the indices matched by the pattern.

8. From the Discover page you can explore your data interactively. You can access every document in every index that matches the selected index pattern, submit search queries, filter the results, and view document data. You can also see the number of documents matching a query, along with field-value statistics. If the selected index pattern has a time field configured, the distribution of documents over time is displayed in a histogram at the top of the page.

V. Pitfalls encountered

1. The Elasticsearch and Kibana versions must match, otherwise the services will not start. I used 6.6.2 for both.

2. If visiting http://localhost:9200/ in a browser does not return the expected result, Elasticsearch must be reconfigured to accept external connections. First stop Elasticsearch with Ctrl+C, then open its config file with vim config/elasticsearch.yml and find the network.host line.
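Change it to bind on all interfaces, matching the configuration shown in section III, then start Elasticsearch again:

network.host: 0.0.0.0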

3. max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
Fix: switch to root and edit sysctl.conf:
     vi /etc/sysctl.conf
Add the following setting:
     vm.max_map_count=655360
Then apply it with:
     sysctl -p


4. max number of threads [1024] for user [lish] likely too low, increase to at least [2048]
Fix: switch to root and edit the config file in the limits.d directory:
     vi /etc/security/limits.d/90-nproc.conf
Change the nproc line:
     *    soft    nproc    1024
to:
     *    soft    nproc    2048


5. ERROR: bootstrap checks failed: max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]; max number of threads [1024] for user [lishang] likely too low, increase to at least [2048]
Fix: switch to root and edit limits.conf (each entry needs a leading domain field; * applies it to all users):
     vi /etc/security/limits.conf
Add the following lines:
     *    soft    nofile    65536
     *    hard    nofile    131072
     *    soft    nproc     2048
     *    hard    nproc     4096
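After logging in again as the affected user, the new limits can be verified with standard shell builtins:

ulimit -n   # max open file descriptors; should now report 65536
ulimit -u   # max user processes/threads; should now report 2048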


The issues above are the ones I ran into while building this environment. If anything in these steps is wrong, corrections are welcome.

VI. Integrating Logstash with Elasticsearch

Modify logstash-default.conf as follows:

input {
    stdin {
    }
    
    beats {
        port => "5044" 
    }
}

output {
    elasticsearch {
        hosts => "10.59.8.48:9200" 
        index => "logstash-default"
    }
    stdout { 
        codec => rubydebug 
        }
}
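With this output block in place, restart Logstash as in section II; events shipped by Filebeat then land in the logstash-default index that section III (steps 7-8) and Kibana query:

bin/logstash -f logstash-default.conf --config.reload.automatic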