CENTOS7 ELK 5.3.2 elasticsearch logstash kibana filebeat

A simple, classic ELK scenario is shown in the figure below. Kafka and Logstash are optional; decide based on your workload whether they are necessary. The Kafka message queue can also be swapped for another MQ, or even Redis.

The figure only illustrates how the ELK components relate to each other; it is a very simplified diagram. In a normal production environment ZooKeeper, Kafka, and Elasticsearch all run as clusters, and Logstash runs on multiple nodes as well.

(figure: a simple classic ELK architecture)


This article does not configure the Kafka message queue. In fact, Beats and Logstash integrate with Kafka practically seamlessly; it only takes a few settings in the corresponding configuration files.
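For example, hooking Logstash up to Kafka only takes a kafka input block in the pipeline. A minimal sketch, where the broker address 172.20.4.132:9092 and the topic name app-logs are made-up values for illustration:

```
input {
    kafka {
        # Kafka broker(s) to consume from (hypothetical address)
        bootstrap_servers => "172.20.4.132:9092"
        # Topic(s) that Filebeat, or another producer, writes to
        topics => ["app-logs"]
    }
}
```

On the Filebeat side, the equivalent is an output.kafka section with matching hosts and topic settings.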

For a write-up that does use a message queue and clusters, here is an article I have reposted:

http://blog.****.net/stonexmx/article/details/71308393


1. Download the required resources

官网 : https://www.elastic.co/downloads

elasticsearch  logstash  kibana  filebeat

JDK 1.8 is already installed on my machine.

[[email protected] elk]# ll
total 177340
-rw-r--r-- 1 root root  33725176 May  4 13:04 elasticsearch-5.3.1.zip
-rw-r--r-- 1 root root   8767885 Apr 27 21:46 filebeat-5.3.2-linux-x86_64.tar.gz
-rw-r--r-- 1 root root  38743345 May  4 13:04 kibana-5.3.2-linux-x86_64.tar.gz
-rw-r--r-- 1 root root 100150030 May  4 13:04 logstash-5.3.1.zip

Use unzip to extract the .zip files and tar -xvzf to extract the .tar.gz files.

After extraction, copy everything to /usr/local/elk:

[[email protected] elk]# ll
total 16
drwxr-xr-x  8 root root 4096 May  4 13:19 elasticsearch-5.3.1
drwxr-xr-x  4 root root 4096 Apr 25 00:08 filebeat-5.3.2-linux-x86_64
drwxrwxr-x 12 1000 1000 4096 Apr 25 00:26 kibana-5.3.2-linux-x86_64
drwxr-xr-x 11 root root 4096 May  4 13:11 logstash-5.3.1
[[email protected] elk]#
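The extract-and-copy flow above can be sketched end to end like this. The sketch uses a scratch directory and a dummy archive so it runs anywhere; in practice the real archives and /usr/local/elk are used:

```shell
#!/bin/sh
# Sketch of the unpack-and-copy flow, against a dummy archive.
set -e
WORK=$(mktemp -d)

# Stand-in for a downloaded filebeat-5.3.2-linux-x86_64.tar.gz
mkdir -p "$WORK/dl/filebeat-5.3.2-linux-x86_64"
echo "demo" > "$WORK/dl/filebeat-5.3.2-linux-x86_64/filebeat"
tar -czf "$WORK/dl/filebeat-5.3.2-linux-x86_64.tar.gz" \
    -C "$WORK/dl" filebeat-5.3.2-linux-x86_64

# Extract the .tar.gz (add -v for verbose output) into the elk directory
mkdir -p "$WORK/elk"
tar -xzf "$WORK/dl/filebeat-5.3.2-linux-x86_64.tar.gz" -C "$WORK/elk"
ls "$WORK/elk"
```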


2. Configure Elasticsearch

[[email protected] bin]# cd /usr/local/elk/elasticsearch-5.3.1/bin
[[email protected] bin]# ./elasticsearch
[2017-05-04T13:29:55,392][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:127) ~[elasticsearch-5.3.1.jar:5.3.1]

The message above appears because Elasticsearch cannot be run as root, so create a dedicated group and user for ELK:

[[email protected] bin]# groupadd elkgroup
[[email protected] bin]# useradd -g elkgroup elkuser
[[email protected] bin]# passwd elkuser
Changing password for user elkuser.

[[email protected] bin]# su elkuser
[[email protected] bin]$ ./elasticsearch

2017-05-04 13:33:17,618 main ERROR Could not register mbeans java.security.AccessControlException: access denied ("javax.management.MBeanTrustPermission" "register")
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)

Insufficient permissions; fix the ownership and permissions:

[[email protected] local]# chown -R elkuser:elkgroup elk/
[[email protected] local]# chmod -R 755 elk/

[[email protected] bin]$ ./elasticsearch
[2017-05-04T13:41:05,712][INFO ][o.e.n.Node               ] [] initializing ...
[2017-05-04T13:41:05,806][INFO ][o.e.e.NodeEnvironment    ] [IHYB5WZ] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [19.3gb], net total_space [24.4gb], spins? [unknown], types [rootfs]
[2017-05-04T13:41:05,807][INFO ][o.e.e.NodeEnvironment    ] [IHYB5WZ] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-05-04T13:41:05,810][INFO ][o.e.n.Node               ] node name [IHYB5WZ] derived from node ID [IHYB5WZKQ4G3Ue6lDUn_ZQ]; set [node.name] to override
[2017-05-04T13:41:05,810][INFO ][o.e.n.Node               ] version[5.3.1], pid[23213], build[5f9cf58/2017-04-17T15:52:53.846Z], OS[Linux/3.10.0-327.28.3.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_121/25.121-b13]

.................

[2017-05-04T13:41:12,407][INFO ][o.e.h.n.Netty4HttpServerTransport] [IHYB5WZ] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2017-05-04T13:41:12,413][INFO ][o.e.n.Node               ] [IHYB5WZ] started
[2017-05-04T13:41:12,420][INFO ][o.e.g.GatewayService     ] [IHYB5WZ] recovered [0] indices into cluster_state

Test the connection locally:

[[email protected] elk]# curl 127.0.0.1:9200
{
  "name" : "IHYB5WZ",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "irj3hPD7SxGeHDuNz2S-6g",
  "version" : {
    "number" : "5.3.1",
    "build_hash" : "5f9cf58",
    "build_date" : "2017-04-17T15:52:53.846Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.2"
  },
  "tagline" : "You Know, for Search"
}

To allow remote connections, configure the bind address:

[[email protected] config]# vi elasticsearch.yml

network.host: 172.20.4.132
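After restarting Elasticsearch with this setting, the same check should also work from another machine, and the standard _cluster/health API gives a quick status overview (the address matches the network.host value above):

```
curl http://172.20.4.132:9200
curl 'http://172.20.4.132:9200/_cluster/health?pretty'
```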

Some errors encountered along the way, and how to fix them. (Once network.host is set to a non-loopback address, Elasticsearch 5.x treats its bootstrap checks as hard errors, which is why the limits below must be raised.)

[[email protected] bin]$ ./elasticsearch
[2017-05-04T15:17:30,381][INFO ][o.e.n.Node               ] [] initializing ...
[2017-05-04T15:17:30,420][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/local/elk/elasticsearch-5.3.1/data/elasticsearch]] with lock id [0];maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?

Delete the nodes directory under data:

[[email protected] bin]$ cd /usr/local/elk/elasticsearch-5.3.1/data/
[[email protected] data]$ ll
total 4
drwxr-xr-x 3 elkuser elkgroup 4096 May  4 13:41 nodes
[[email protected] data]$ rm -fr nodes/
[[email protected] data]$ ll
total 0

max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

Edit /etc/security/limits.conf:

[[email protected] bin]# vi /etc/security/limits.conf

# End of file
* soft nofile 65536
* hard nofile 65536
* soft nproc 2048
* hard nproc 2048


ERROR: bootstrap checks failed
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Edit /etc/sysctl.conf:
[[email protected] logs]# vi /etc/sysctl.conf

vm.max_map_count=655360

Run sysctl -p to apply the change.
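Both limits can be verified afterwards. A minimal check; the expected numbers correspond to the settings above, and limits.conf only takes effect after logging in again:

```shell
#!/bin/sh
# Kernel mmap limit (655360 once sysctl -p has been run)
cat /proc/sys/vm/max_map_count
# Open-files limit for the current shell
# (65536 for elkuser after a fresh login)
ulimit -n
```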



3. Configure Logstash

Create a new file named logstash.conf in the config directory.

It sets Filebeat as the log input and Elasticsearch as the output.

Contents:

[[email protected] config]# more logstash.conf
input {
    beats {
        port => "5043"
    }

}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}"}
    }
    geoip {
        source => "clientip"
    }
}
output {
    elasticsearch {
        hosts => [ "172.20.4.132:9200" ]
    }

}
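Starting Logstash itself is not shown above, so for reference: in 5.x the pipeline file can be syntax-checked first and then run in the foreground (flags as in the Logstash 5.x CLI):

```
# Validate the pipeline file, then exit
bin/logstash -f config/logstash.conf --config.test_and_exit
# Run the pipeline
bin/logstash -f config/logstash.conf
```

While debugging, a second output that prints every event to the console can be added alongside elasticsearch. This stdout block is an optional addition, not part of the config above:

```
output {
    elasticsearch {
        hosts => [ "172.20.4.132:9200" ]
    }
    # Optional: dump each event to the console while debugging
    stdout { codec => rubydebug }
}
```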



4. Configure Filebeat

[[email protected] filebeat-5.3.2]$ vi filebeat.yml

Set the directory the logs are read from:

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/elk/data/logs/*.log

    #- c:\programdata\elasticsearch\logs\*


Set where the logs are sent. The default is Elasticsearch; here we change it to Logstash, commenting out the Elasticsearch output and enabling the Logstash one:

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
 # hosts: ["172.20.4.130:9201","172.20.4.132:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.20.4.132:5043"]
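Filebeat also has to be started, a step not shown above. With Filebeat 5.x the usual commands are as follows (-e logs to stderr instead of the log file, -c names the config file):

```
cd /usr/local/elk/filebeat-5.3.2-linux-x86_64
./filebeat -e -c filebeat.yml        # foreground, handy for a first test
nohup ./filebeat -c filebeat.yml &   # background
```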



5. Configure Kibana

[[email protected] local]$ cd /usr/local/elk/kibana-5.3.2/config/
[[email protected] config]$ vi kibana.yml
Set the Elasticsearch address:

elasticsearch.url: "http://172.20.4.132:9200"

Set the IP used to access Kibana:

server.host: "172.20.4.132"

Start it in the background:

[[email protected] bin]$ nohup ./kibana &

Then visit http://ip:5601

If the page shows "No default index pattern. You must select or create one to continue.", the corresponding Logstash pipeline has not been set up correctly and no data has reached Elasticsearch yet.

Then configure the index pattern as shown in the figure below. Because the output in filebeat.yml is Logstash, the pattern is logstash-*.

If the output were Elasticsearch directly, the pattern would be filebeat-*.

In short, it comes down to which component acts as the index source for Elasticsearch.
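When in doubt about which pattern matches, the indices that actually exist can be listed with Elasticsearch's _cat API (the address matches the earlier configuration):

```
curl 'http://172.20.4.132:9200/_cat/indices?v'
```

By default the Logstash elasticsearch output writes daily indices named logstash-YYYY.MM.dd, while Filebeat's direct Elasticsearch output writes filebeat-YYYY.MM.dd, which is exactly why the two patterns differ.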

To see some results, I placed a few of my project's log files into /usr/local/elk/data/logs, the log collection directory configured for Filebeat earlier.

(screenshot: configuring the index pattern in Kibana)


The result looks like this:

(screenshot: the collected logs displayed in Kibana)


A few more words: Filebeat, used in this article, is only one of the Beats products from Elastic; there are also Topbeat and Packetbeat. A Beat is an agent that ships a particular type of data to Elasticsearch, either directly or via Logstash. Three typical examples: Filebeat collects log files; Topbeat collects basic system metrics such as CPU, memory, and per-process statistics; Packetbeat is a network packet analyzer that collects network statistics.

The overall ELK + Beats scenario actually looks like the following figure, borrowed from the web.

(figure: ELK + Beats architecture)