Big Data Study Notes 33: Logstash and Kibana Installation and ELK Integration Notes


1. Installing Logstash

[[email protected] elk]# tar -xzvf logstash-6.0.0.tar.gz 
[[email protected] elk]# cd logstash-6.0.0/
[[email protected] logstash-6.0.0]# ll
total 100
drwxr-xr-x. 2 root root  4096 Nov 25 14:55 bin
drwxr-xr-x. 2 root root  4096 Nov 25 14:55 config
-rw-r--r--. 1 root root  2276 Nov 11 03:59 CONTRIBUTORS
drwxr-xr-x. 2 root root  4096 Nov 11 03:59 data
-rw-r--r--. 1 root root  3959 Nov 11 04:02 Gemfile
-rw-r--r--. 1 root root 21265 Nov 11 03:59 Gemfile.jruby-2.3.lock
drwxr-xr-x. 5 root root  4096 Nov 25 14:55 lib
-rw-r--r--. 1 root root   589 Nov 11 03:59 LICENSE
drwxr-xr-x. 4 root root  4096 Nov 25 14:55 logstash-core
drwxr-xr-x. 3 root root  4096 Nov 25 14:55 logstash-core-plugin-api
drwxr-xr-x. 4 root root  4096 Nov 25 14:55 modules
-rw-rw-r--. 1 root root 26953 Nov 11 04:02 NOTICE.TXT
drwxr-xr-x. 3 root root  4096 Nov 25 14:55 tools
drwxr-xr-x. 4 root root  4096 Nov 25 14:55 vendor
[[email protected] logstash-6.0.0]# 

# Create a log-collection rule.
# The file path under input must match the path where nginx writes its log, otherwise nothing can be read.
# The filter section holds the match rules.
# The output section tells logstash where to write in elasticsearch; in ES, an index is roughly an RDBMS tablespace and a type is roughly an RDBMS table.
[[email protected] logstash-6.0.0]# vi logstash-nginx-access-log.conf
input {
    file {
        path => ["/usr/local/nginx/logs/access.log"]
        type => "nginx_access"
        start_position => "beginning"
    }
}


filter {
  grok {
    match => {
      "message" => '%{IPORHOST:remote_ip} - %{DATA:user_name} \[%{HTTPDATE:time}\] "%{WORD:request_action} %{DATA:request} HTTP/%{NUMBER:http_version}" %{NUMBER:response} %{NUMBER:bytes} "%{DATA:referrer}" "%{DATA:agent}"'
    }
  }
  date {
    match => [ "time", "dd/MMM/YYYY:HH:mm:ss Z" ]
    locale => en
  }
}


output {
  elasticsearch {
        hosts => ["192.168.137.11:9200"]
        index => "logstash-nginx-access-log"
    }
}


This article explains in detail how to use grok to parse log lines into ES:
https://www.cnblogs.com/Orgliny/p/5592186.html
Note:
If logstash and nginx are not lined up correctly, for example if the generated log format does not agree with the pattern written after `match`, duplicate records get imported into ES; a single page refresh can end up importing the same entry 5 times.
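Before wiring the config into Logstash, the grok pattern can be sanity-checked offline. The sketch below mirrors it with an ordinary Python regex (the sample log line is made up for illustration, and these regexes are stricter than grok's IPORHOST/DATA patterns):

```python
import re
from datetime import datetime

# Rough Python equivalent of the grok pattern in the filter above.
LOG_RE = re.compile(
    r'(?P<remote_ip>\S+) - (?P<user_name>\S*) \[(?P<time>[^\]]+)\] '
    r'"(?P<request_action>\w+) (?P<request>\S+) HTTP/(?P<http_version>[\d.]+)" '
    r'(?P<response>\d+) (?P<bytes>\d+) "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# Illustrative nginx combined-format access log line.
sample = ('192.168.137.1 - - [25/Nov/2017:15:02:11 +0800] '
          '"GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"')

m = LOG_RE.match(sample)
fields = m.groupdict()
print(fields['remote_ip'], fields['request_action'], fields['response'])

# The date filter's "dd/MMM/YYYY:HH:mm:ss Z" corresponds to strptime's
# "%d/%b/%Y:%H:%M:%S %z":
ts = datetime.strptime(fields['time'], '%d/%b/%Y:%H:%M:%S %z')
print(ts.year, ts.month)
```

If the pattern fails to match a real line from access.log, fix the pattern (or the nginx log_format) before starting logstash; that is exactly the mismatch that produces the duplicate-import behavior noted above.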


[[email protected] logstash-6.0.0]# nohup bin/logstash -f logstash-nginx-access-log.conf  &
[1] 3547
[[email protected] logstash-6.0.0]# nohup: ignoring input and appending output to `nohup.out'

Stop logstash:
[[email protected] logstash-6.0.0]#  kill -9 $(pgrep -f logstash)

Check the log:
[[email protected] logstash-6.0.0]# tail -f nohup.out 

At this point we have not visited the nginx home page, so no new log lines have been produced and ES holds no index and no data yet.
It is best to create the index manually first; once the index has been auto-created, number_of_shards can no longer be changed.

curl -XPUT 'http://192.168.137.11:9200/logstash-nginx-access-log' -d '
{
    "settings" : {
        "index" : {
            "number_of_shards" : 3,
            "number_of_replicas" : 1
        }
    }
}' -H 'Content-Type: application/json; charset=UTF-8' 
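The same request can also be scripted. The sketch below builds an identical PUT with Python's standard urllib; it only constructs the request (actually sending it requires the cluster at 192.168.137.11 to be up):

```python
import json
import urllib.request

# Same index settings as the curl command above.
settings = {
    "settings": {
        "index": {
            "number_of_shards": 3,    # fixed at index-creation time
            "number_of_replicas": 1,  # can be changed later via _settings
        }
    }
}

req = urllib.request.Request(
    "http://192.168.137.11:9200/logstash-nginx-access-log",
    data=json.dumps(settings).encode("utf-8"),
    headers={"Content-Type": "application/json; charset=UTF-8"},
    method="PUT",
)
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req) would send it against the live cluster.
```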
Refresh the nginx page and new documents will appear in ES.
At this point we can confirm the logstash side is configured correctly.

Clean up the ES data; here we simply delete the index:
curl -XDELETE 'http://192.168.137.11:9200/logstash-nginx-access-log' 

2. Installing Kibana
[[email protected] elk]# tar -xzvf kibana-6.0.0-linux-x86_64.tar.gz
[[email protected] elk]# chown -R root.root kibana-6.0.0-linux-x86_64/
[[email protected] elk]# cd kibana-6.0.0-linux-x86_64
[[email protected] kibana-6.0.0-linux-x86_64]# ll
total 856
drwxr-xr-x.   2 root root   4096 Nov 11 02:50 bin
drwxrwxr-x.   2 root root   4096 Nov 11 02:50 config
drwxrwxr-x.   2 root root   4096 Nov 11 02:50 data
-rw-rw-r--.   1 root root    562 Nov 11 02:50 LICENSE.txt
drwxrwxr-x.   6 root root   4096 Nov 11 02:50 node
drwxrwxr-x. 620 root root  20480 Nov 11 02:50 node_modules
-rw-rw-r--.   1 root root 799543 Nov 11 02:50 NOTICE.txt
drwxrwxr-x.   3 root root   4096 Nov 11 02:50 optimize
-rw-rw-r--.   1 root root    721 Nov 11 02:50 package.json
drwxrwxr-x.   2 root root   4096 Nov 11 02:50 plugins
-rw-rw-r--.   1 root root   4654 Nov 11 02:50 README.txt
drwxr-xr-x.  14 root root   4096 Nov 11 02:50 src
drwxrwxr-x.   5 root root   4096 Nov 11 02:50 ui_framework
drwxr-xr-x.   2 root root   4096 Nov 11 02:50 webpackShims
[[email protected] kibana-6.0.0-linux-x86_64]# cd config/
[[email protected] config]# ll
total 8
-rw-r--r--. 1 root root 4649 Nov 11 02:50 kibana.yml
[[email protected] config]# vi kibana.yml 
server.host: "192.168.137.11"
elasticsearch.url: "http://192.168.137.11:9200"
[[email protected] config]# cd ../
[[email protected] kibana-6.0.0-linux-x86_64]# bin/kibana
  log   [06:14:46.050] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [06:14:46.095] [info][status][plugin:[email protected]] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:14:46.168] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [06:14:46.209] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [06:14:46.407] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log   [06:14:46.413] [info][listening] Server running at http://192.168.137.11:5601
  log   [06:14:46.414] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow
  log   [06:14:51.198] [info][status][plugin:[email protected]] Status changed from yellow to yellow - No existing Kibana index found
  log   [06:14:52.409] [info][status][plugin:[email protected]] Status changed from yellow to green - Kibana index ready
  log   [06:14:52.410] [info][status][ui settings] Status changed from yellow to green - Ready


At this point, open kibana in a browser and take a look.

The first time kibana is opened, it asks you to create a default index pattern.


Click Create; kibana then goes to the index in ES and builds a template from the records stored in that index.
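Kibana index patterns select ES indices by name using shell-style wildcards; the matching is conceptually similar to Python's fnmatch (an illustration only, the index list here is made up):

```python
from fnmatch import fnmatch

# Hypothetical list of index names in the cluster.
indices = ["logstash-nginx-access-log", ".kibana"]

# A pattern like "logstash-*" would cover the index created earlier.
pattern = "logstash-*"
matched = [name for name in indices if fnmatch(name, pattern)]
print(matched)
```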

Refresh the nginx page and data will show up.



That completes a basic end-to-end ELK integration.