Chapter 12: Getting Started with Logstash
I. Concepts
Logstash is an open-source data-shipping engine that works like a real-time pipeline: a conduit connected to different data sources at each end, moving data from one source to another in real time. Along the way, Logstash can also cleanse, transform, and organize the data, so that it arrives ready (or nearly ready) to use, prepared for more sophisticated analysis, processing, and visualization.
Since the data has to be moved to a designated destination anyway, why not write it there directly at the moment it is produced? There are several answers. First, much data cannot be written anywhere other than local files at the time it is generated; for example, most third-party software writes its runtime logs as text to local files. Second, in distributed environments data is scattered across different containers and even different machines, and processing it usually requires collecting it in one place first. Finally, even when software can write data to a designated destination, a deeper understanding of the data and the arrival of new technologies keep producing new analysis requirements, so there will always be integration needs that the original software cannot satisfy. In short, Logstash's core value is that it decouples business systems from data-processing systems, shielding each from changes in the other, reducing their mutual dependencies, and letting each evolve independently.
Logstash can extract data from multiple sources, then cleanse and filter it before sending it on to one or more target data sources, and both the sources and the targets can be extended simply by editing the Logstash pipeline configuration. This is extremely valuable in practice, especially when the sources or destinations change. Suppose data was originally shipped only to Elasticsearch for search, but now it must also be fed to Spark for real-time analysis: without Logstash you would have to write new code to send data to Spark, whereas with Logstash already in place you only add a new output to the pipeline configuration. This greatly increases the flexibility of data transport.
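As a minimal sketch of what that looks like (the hosts and topic name below are placeholder assumptions, with Kafka used as the typical hand-off point for Spark Streaming), only one extra block inside output is needed:
output {
# existing destination: ship events to Elasticsearch
elasticsearch {
hosts => ["http://localhost:9200"]
}
# new destination: publish the same events to a Kafka topic that Spark consumes
kafka {
bootstrap_servers => "localhost:9092"
topic_id => "logs-for-spark"
}
}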
Logstash handles log collection and forwarding, supports log filtering, and can parse both plain logs and custom JSON-formatted logs.
1. Deployment Plan
We plan to deploy ELK on three CentOS 7 machines. One machine serves as the ELK server node, with IP 192.168.0.101; the other two act as client nodes, with IPs 192.168.0.102/103. The server node runs the Elasticsearch, Logstash, and Kibana components; the client nodes run Logstash, Nginx, and so on.
2. Server Node Deployment
2.1 Environment Preparation
All operations are performed as root by default; other users should prepend sudo.
1) Install the JDK (omitted)
2) Stop the firewall and disable SELinux (SELinux commands are shown after the firewall rules below)
systemctl stop firewalld
systemctl disable firewalld
Or, instead of stopping the firewall, open the required ports:
firewall-cmd --add-port=9200/tcp --permanent
firewall-cmd --add-port=9300/tcp --permanent
firewall-cmd --add-port=5601/tcp --permanent
firewall-cmd --reload
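Disabling SELinux, as called for in step 2 above, uses the standard CentOS commands:
setenforce 0     # take effect immediately (until reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     # persist across reboots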
2.2 Add the ELK Repository
Create a file with a `.repo` suffix in your `/etc/yum.repos.d/` directory, for example `elasticsearch.repo`, with the following content:
cat <<EOF | tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
Reference: https://www.voidking.com/dev-centos7-install-elk/
II. Deploying Logstash on the ES Server
Logstash is built on JVM-ecosystem languages, so a JDK must be installed before installing Logstash. Logstash 7 supports only Java 8 or Java 11, so check that the installed JDK version is correct beforehand. Logstash is distributed as DEB and RPM packages as well as tar.gz and zip archives, and it can also be started directly via Docker. Installation is straightforward; simply unpacking an archive is the easiest method.
The Logstash startup command lives in the bin directory of the installation path. Running logstash with no arguments will not work; parameters must be supplied like this: ./logstash -e "input {stdin {}} output {stdout{}}". Note that the argument after -e must be quoted. If the startup log prints something like "Successfully started Logstash API endpoint {:port=>9600}", Logstash has started successfully.
In the command above, -e supplies the configuration as a string: it defines a standard input plugin (stdin) and a standard output plugin (stdout), meaning input is read from the command line and the extracted data is written straight back to the command line. To change the input or output, just change the plugin names inside input or output, which nicely illustrates the flexibility of Logstash pipeline configuration. Start Logstash as shown and the command line will wait for input; type "Hello, World!" and a result like the following is returned:
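An event returned by the default configuration looks roughly like this (the host and timestamp values are illustrative and will differ on your machine):
{
"message" => "Hello, World!",
"@version" => "1",
"@timestamp" => 2020-07-08T12:00:00.000Z,
"host" => "localhost.localdomain"
}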
By default the stdout output plugin uses the rubydebug codec, so the output includes the version, timestamp, and other fields; the message field holds exactly what was typed on the command line. Try switching the output codec to plain or line and the printed result will change: ./logstash -e "input {stdin {}} output {stdout{codec => plain}}"
1. Installing Logstash from RPM
1.1 Download the RPM packages
1) On 192.168.0.101, run:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.0-x86_64.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.7.0-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.7.0.rpm
2) On 192.168.0.102, run:
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.7.0.rpm
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.0-x86_64.rpm
wget http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.18.0-1.el7.ngx.x86_64.rpm
1.2 Install from RPM
1) Download and install the public signing key:
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
2) Install logstash-7.7.0.rpm:
sudo rpm --install logstash-7.7.0.rpm
3) Enable Logstash to start on boot:
sudo systemctl daemon-reload
sudo systemctl enable logstash
Confirm the Logstash installation details:
rpm -qi logstash
4) Test:
/usr/share/logstash/bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
Note: the detailed configuration is described in the later steps.
2. Installing Logstash with yum
Official installation guide: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
2.1 Import the signing key for the yum repository:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
2.2 Install Logstash with yum
yum install -y logstash
Check the Logstash installation layout:
rpm -ql logstash
Create a symlink so the full installation path (by default under /usr/share) does not have to be typed for every command:
ln -s /usr/share/logstash/bin/logstash /bin/
2.3 Start Logstash and run a test command
1) Use rubydebug to print more detailed output:
logstash -e 'input { stdin { } } output { stdout {codec => rubydebug} }'
After it starts successfully, type: hello logstash
2) The stdout result:
hello logstash
{
"message" => "hello logstash",
"host" => "localhost.localdomain",
"@timestamp" => 2020-07-08T15:24:35.594Z,
"@version" => "1"
}
3. Installing Logstash from an Archive
3.1 Download the archive
Download from the official past-releases page:
https://www.elastic.co/cn/downloads/past-releases
For example: https://artifacts.elastic.co/downloads/logstash/logstash-7.7.0.tar.gz
3.2 Upload and unpack the archive
The directories used here are:
Package directory: /usr/local/elk
Unpack: tar -zxvf logstash-7.7.0.tar.gz
3.3 Modify the configuration
Add configuration under the config directory of the unpacked package:
Installation path: /usr/local/elk/logstash-7.7.0
4. Configure Logstash and Run a Test
Official guide: https://www.elastic.co/guide/en/logstash/current/configuration.html
4.1 Create the configuration file elk.conf
1) For an RPM or yum installation, create the file with:
vim /etc/logstash/conf.d/elk.conf
2) For a manual archive installation: in the config directory, copy the logstash-sample.conf file to a new file named elk.conf and edit that.
4.2 Add the following content to the file:
This test collects Elasticsearch's own log and writes it back into Elasticsearch, configured as follows:
$ vim elk.conf
# Logstash configuration for a simple
# file -> Logstash -> Elasticsearch pipeline.
input {
file {
path => "/var/log/elasticsearch/my-application.log"
type => "elasticsearch"
start_position => "beginning"
}
}
filter {
}
output {
elasticsearch {
hosts => ["http://192.168.0.101:9200"]
index => "elasticsearch-%{+YYYY.MM.dd}"
}
}
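Before starting, the file can be checked for syntax errors (RPM paths shown; adjust them for an archive install):
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/elk.conf --config.test_and_exit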
4.3 Configure logstash.yml (optional)
1) Append the following lines to the end of the /etc/logstash/logstash.yml file.
Purpose: with this configuration in place, Logstash performance metrics can be viewed in Kibana.
xpack.monitoring.elasticsearch.username: es
xpack.monitoring.elasticsearch.password: chenhuajing710
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://192.168.0.101:9200"]
2) Configure heap memory (recommended)
Edit the /etc/logstash/jvm.options configuration file:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms4g
-Xmx4g
3) Logstash installation and log directories
Configuration directory for an RPM install: /etc/logstash
Log directory: /var/log/logstash (can be customized in logstash.yml)
4.4 Run Logstash with the configuration file
1) Start a manual (archive) installation:
cd /usr/local/elk/logstash-7.7.0
./bin/logstash -f ./config/elk.conf
After it starts successfully, typed input and the corresponding stdout results are displayed.
2) Start an RPM installation
Start (or restart) Logstash:
systemctl start logstash
systemctl restart logstash
3) Start Logstash with a specific configuration file (RPM install):
Go to the main configuration directory: cd /etc/logstash/conf.d/
/usr/share/logstash/bin/logstash -f ./elk.conf
Check Logstash's running status:
systemctl status logstash
ps -ef | grep logstash
netstat -nlpt
4.5 Run Logstash as a service
Edit the startup.options file (for an RPM install it lives in /etc/logstash).
1) Review the default startup options:
vim /etc/logstash/startup.options
# Set a home directory
LS_HOME=/usr/share/logstash
# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR=/etc/logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR} --path.config /etc/logstash/conf.d"
LS_JAVA_OPTS=""
LS_USER=logstash
LS_GROUP=logstash
LS_GC_LOG_FILE=/var/log/logstash/gc.log
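Note that startup.options is read when the service definition is generated, not at startup. After changing it, regenerate the systemd unit with the installer script bundled with Logstash:
/usr/share/logstash/bin/system-install /etc/logstash/startup.options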
Next, edit the pipeline configuration file. The example below accepts events from Beats on port 5044 and tails the Elasticsearch log file, writing both to Elasticsearch:
input {
beats {
port => 5044
}
file {
path => "/var/log/elasticsearch/my-application.log"
type => "elasticsearch"
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["http://192.168.0.101:9200"]
index => "elasticsearch-%{+YYYY.MM.dd}"
}
}
2) Modify the startup service
As root, edit the service unit file:
vim /etc/systemd/system/logstash.service
Add "--path.config" "/etc/logstash/conf.d" to the ExecStart line.
Note: after modifying the unit file, systemd must be reloaded:
systemctl daemon-reload
3) Or run in the background with nohup:
nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/elk.conf > /dev/null &
nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/elk.conf -w 8 -b 1000 > /dev/null 2>&1 &
5. Test Log Collection and Output
Once installation and configuration are complete and Logstash has started successfully, check ES's index management API to confirm that the elasticsearch-* index configured above has been written to Elasticsearch.
Check the URL http://192.168.0.101:9200/_cat/indices; the output looks like the following:
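The same check can be done from the command line (the ?v flag adds column headers):
curl -s 'http://192.168.0.101:9200/_cat/indices?v'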
Once the index has been stored successfully, create an index pattern in Kibana, ES's visualization UI, as shown below:
Expand the left-hand menu, click Management, then under the Kibana section click Index Patterns, click the "Create index pattern" button, and enter elasticsearch-*.
Click "Next step", choose @timestamp as the time field, and click the "Create index pattern" button.
Expand the left-hand menu, click "Discover", and select the index pattern just created.
The log entries returned are the log messages printed by Elasticsearch.
III. Deploying Logstash on a Client Node
1. Install the Nginx Service
1.1 Install and run Nginx
Install Nginx on 192.168.0.102 (this step is optional; it only provides logs for Logstash to collect).
1) Download the package:
wget http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.18.0-1.el7.ngx.x86_64.rpm
2) Install the RPM:
rpm --install nginx-1.18.0-1.el7.ngx.x86_64.rpm
3) Enable start on boot:
systemctl daemon-reload
systemctl enable nginx
4) Start Nginx:
systemctl start nginx
Note:
Nginx can also be started and stopped from its binary:
./nginx -c /usr/local/nginx/conf/nginx.conf
If nginx.conf is not specified, the default is NGINX_HOME/conf/nginx.conf.
./nginx -s stop     # stop
./nginx -s quit     # quit gracefully
./nginx -s reload   # reload nginx.conf
Visit Nginx in a browser: http://192.168.0.102/index.html
1.2 Configure Nginx
1) Edit Nginx's default configuration file
The main goal is to enable access logging so that Logstash has log data to collect.
Edit the /etc/nginx/nginx.conf configuration and add the following inside the http block:
log_format access_log_json '{"user_ip":"$http_x_forwarded_for","lan_ip":"$remote_addr","log_time":"$time_iso8601","user_rqp":"$request","http_code":"$status","body_bytes_sent":"$body_bytes_sent","req_time":"$request_time","user_ua":"$http_user_agent"}';
access_log /var/log/nginx/access.log access_log_json;
vim /etc/nginx/nginx.conf
The default configuration looks like this:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
2) Set log file permissions:
chmod -R 755 /var/log/nginx/access.log
1.3 Restart and access Nginx
Restart Nginx, then visit: http://192.168.0.102/index.html
2. Install Logstash (omitted)
Follow the installation steps described earlier.
2.1 Create the Logstash configuration file:
vim /etc/logstash/conf.d/elk.conf
Add the following configuration:
input {
file {
path => "/var/log/nginx/access.log"
type => "nginx"
start_position => "beginning"
}
}
filter {
}
output {
elasticsearch {
hosts => ["http://192.168.0.101:9200"]
index => "nginx-%{+YYYY.MM.dd}"
user => "es"
password => "chenhuajing710"
}
}
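Because section 1.2 configured the access log in JSON format, a variant of this input can let Logstash parse the fields automatically; a sketch, assuming the log file really contains one JSON object per line:
input {
file {
path => "/var/log/nginx/access.log"
type => "nginx"
start_position => "beginning"
# parse each JSON log line into event fields
codec => json { charset => "UTF-8" }
}
}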
2.2 Restart Logstash
systemctl restart logstash
Or, for an archive install: ./bin/logstash -f ./config/elk.conf
Check the running status:
systemctl status logstash
Once the configuration is in place and Logstash has started successfully, check ES's index management API to confirm that the nginx-* index configured above has been written to Elasticsearch.
Check the URL http://192.168.0.101:9200/_cat/indices; the output looks like the following:
Then create the nginx-* index pattern on the Kibana page, and finally select the nginx index in Discover to view the log data:
3. Collect Nginx's access.log and error.log
3.1 Create the Logstash configuration
vim /etc/logstash/conf.d/nginx_log.conf
The configuration is as follows:
# cat nginx_log.conf
input {
file {
path => "/var/log/nginx/access.log"
type => "nginx-access"
start_position => "beginning"
}
file {
path => "/var/log/nginx/error.log"
type => "nginx-error"
start_position => "beginning"
}
}
filter {
}
output {
if [type] == "nginx-access"{
elasticsearch {
hosts => ["http://192.168.0.101:9200"]
index => "nginx-log-access-%{+YYYY.MM.dd}"
user => "es"
password => "chenhuajing710"
}
}
if [type] == "nginx-error"{
elasticsearch {
hosts => ["http://192.168.0.101:9200"]
index => "nginx-log-error-%{+YYYY.MM.dd}"
user => "es"
password => "chenhuajing710"
}
}
}
A reference filter configuration is shown below:
filter {
if [type] == "nginx-access"{
grok {
match => ["message","%{IPORHOST:remote_addr} - %{HTTPDUSER:remote_user} \[%{HTTPDATE:time_local}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:body_bytes}|-) %{QS:referrer} %{QS:user_agent} %{QS:x_forward_for}"]
}
}
if [type] == "nginx-error"{
grok {
match => [
"message", "(?<time_local>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:log_level}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:error_message}(?:, client: (?<client>%{IP}|%{HOSTNAME}))(?:, server: %{IPORHOST:server}?)(?:, request: %{QS:request})?(?:, upstream: (?<upstream>\"%{URI}\"|%{QS}))?(?:, host: %{QS:request_host})?(?:, referrer: \"%{URI:referrer}\")?",
"message", "(?<time_local>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:log_level}\]\s{1,}%{GREEDYDATA:error_message}"
]
}
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
mutate {
convert => [ "status", "integer" ]
convert => [ "body_bytes","integer" ]
}
ruby {
code => "event.set('log_day', event.get('@timestamp').time.localtime.strftime('%Y%m%d'))"
}
}
Validate the configuration:
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/nginx_log.conf
3.2 Test log collection and parsing
1) If Logstash is currently running, stop it first:
systemctl stop logstash
2) Run with the configuration file and check the terminal output:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx_log.conf
nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx_log.conf &
Or start Logstash via systemd (systemctl start logstash) and check its running status:
systemctl status logstash
3) Generate error logs: introduce a mistake into the Nginx configuration file, then restart Nginx:
systemctl restart nginx
4) Check how the indices are stored in ES:
Create index patterns for the access and error logs on the Kibana page, then inspect the error messages:
Restore the correct configuration, start Nginx, and request a nonexistent page in the browser to generate error-log entries:
Visit: http://192.168.0.102/inde.html (index.html deliberately misspelled as inde.html)
To generate access logs, request an existing page in the browser; normal access-log entries appear:
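A quick way to generate a batch of access-log entries is to request the page in a loop from the shell:
for i in $(seq 1 20); do curl -s http://192.168.0.102/index.html > /dev/null; done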
IV. Building a Log Collection System in Practice
1. Environment Preparation
1) First, make sure the ES cluster and Kibana are already running on the server.
2) Make sure Logstash is installed and deployed on the server.
Note: see the earlier sections for detailed installation steps.
2. Configuration and Testing
2.1 Logstash configuration
In the config directory under the Logstash installation path, create a new conf file named es_log.conf and fill in the following content:
input {
file {
path => "/opt/unimeta/uni-meta-getway/uni-meta-getway-1.0.log"
type => "uni-meta-getway"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
file {
path => "/opt/unimeta/uni-meta-auth/uni-meta-auth-1.0.log"
type => "uni-meta-auth"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
file {
path => "/opt/unimeta/uni-meta-manage/uni-meta-manage-1.0.log"
type => "uni-meta-manage"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
}
filter {
}
output {
if [type] == "uni-meta-getway"{
elasticsearch {
hosts => ["http://192.168.0.209:9200"]
index => "uni-meta-getway-%{+YYYY.MM.dd}"
}
}
if [type] == "uni-meta-auth"{
elasticsearch {
hosts => ["http://192.168.0.209:9200"]
index => "uni-meta-auth-%{+YYYY.MM.dd}"
}
}
if [type] == "uni-meta-manage"{
elasticsearch {
hosts => ["http://192.168.0.209:9200"]
index => "uni-meta-manage-%{+YYYY.MM.dd}"
}
}
stdout{}
}
2.2 Start and test with the configuration file
Run Logstash against the new configuration file (RPM paths shown):
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es_log.conf
Or in the background:
nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/es_log.conf > /dev/null &
The console prints the output, and querying from the browser shows that ES has indeed created the corresponding indices (for example uni-meta-getway-2020.07.08).
Then query the contents of the index in ES:
On close inspection of the stored documents, the entire log line has been stored in the message field. Can it be stored at a finer granularity? Looking carefully at the ES logs, they show a clear structure; each entry always has the form "[timestamp][log level][logger class][node name] the actual log message". So we can use a filter plugin to parse the text before storing it in ES. How? With the grok filter.
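For reference, an Elasticsearch log line has roughly this shape (an illustrative line; actual values vary):
[2020-07-08T15:24:35,594][INFO ][o.e.node.Node] [node-1] started
which maps naturally onto captures for the timestamp, log level, logger class, node name, and message.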
3. Using the grok Filter
3.1 Configure the grok filter
In the config directory under the Logstash installation path, create a new conf file named log_grok.conf and fill in the following content:
input {
file {
path => "/opt/unimeta/uni-meta-getway/uni-meta-getway-1.0.log"
type => "uni-meta-getway"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
file {
path => "/opt/unimeta/uni-meta-auth/uni-meta-auth-1.0.log"
type => "uni-meta-auth"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
file {
path => "/opt/unimeta/uni-meta-manage/uni-meta-manage-1.0.log"
type => "uni-meta-manage"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
}
filter {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:time}%{SPACE}%{LOGLEVEL:level}%{SPACE}%{NOTSPACE:loggerclass}%{SPACE}%{NOTSPACE:nodename}%{SPACE}%{GREEDYDATA:msg}" }
}
}
output {
if [type] == "uni-meta-getway"{
elasticsearch {
hosts => ["http://192.168.0.209:9200"]
index => "uni-meta-getway-%{+YYYY.MM.dd}"
}
}
if [type] == "uni-meta-auth"{
elasticsearch {
hosts => ["http://192.168.0.209:9200"]
index => "uni-meta-auth-%{+YYYY.MM.dd}"
}
}
if [type] == "uni-meta-manage"{
elasticsearch {
hosts => ["http://192.168.0.209:9200"]
index => "uni-meta-manage-%{+YYYY.MM.dd}"
}
}
stdout{}
}
If you're worried that your conf file has syntax problems, you can run:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/log_grok.conf -t
This checks the conf file, though only its syntax; it cannot catch problems such as an incorrect regular expression.
Then run it again:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/log_grok.conf
ps -ef | grep logstash
netstat -nlpt
If the program produces no output, the log files may already have been processed; Logstash does not process the same content twice. In that case, go to the plugins/inputs/file directory under Logstash's data path and delete the .sincedb files (they are hidden files), then run the start command again.
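For an RPM install the data path defaults to /var/lib/logstash, so (with Logstash stopped) the sincedb files can be removed with:
find /var/lib/logstash/plugins/inputs/file -name '.sincedb*' -delete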
As you can see, each log line is now parsed successfully.
3.2 Use an ES index template
Now consider how the parsed fields should be stored in ES. The original approach is not enough; a small change is needed. Create an es_template.json in the config directory with the following content:
vim /etc/logstash/es_template.json
{
"template":"es-log-text-%{+YYYY.MM.dd}",
"settings": {
"index.refresh_interval":"1s"
},
"mappings":{
"properties":{
"time":{
"type":"date"
},
"level":{
"type":"keyword"
},
"loggerclass":{
"type":"keyword"
},
"nodename":{
"type":"keyword"
},
"msg":{
"type":"text"
},
"message":{
"type":"text"
}
}
}
}
This defines the index mapping; note that the fields must correspond one-to-one to the pieces we extract from each log line.
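The template can also be installed and verified by hand through the legacy _template API (the name es_template matches the template_name used in the configuration below):
curl -X PUT 'http://192.168.0.209:9200/_template/es_template' -H 'Content-Type: application/json' -d @/etc/logstash/es_template.json
curl -s 'http://192.168.0.209:9200/_template/es_template?pretty'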
3.3 Create a new configuration file and start
Create a new conf file named elk-grok.conf with the following content:
input {
file {
path => "/opt/unimeta/uni-meta-getway/uni-meta-getway-1.0.log"
type => "uni-meta-getway"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
file {
path => "/opt/unimeta/uni-meta-auth/uni-meta-auth-1.0.log"
type => "uni-meta-auth"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
file {
path => "/opt/unimeta/uni-meta-manage/uni-meta-manage-1.0.log"
type => "uni-meta-manage"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
}
filter {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:time}%{SPACE}%{LOGLEVEL:level}%{SPACE}%{NOTSPACE:loggerclass}%{SPACE}%{NOTSPACE:nodename}%{SPACE}%{GREEDYDATA:msg}" }
}
}
output {
if [type] == "uni-meta-getway"{
elasticsearch {
hosts => ["http://192.168.0.209:9200"]
index => "uni-meta-getway-%{+YYYY.MM.dd}"
template_name => "es_template"
template => "/etc/logstash/es_template.json"
}
}
if [type] == "uni-meta-auth"{
elasticsearch {
hosts => ["http://192.168.0.209:9200"]
index => "uni-meta-auth-%{+YYYY.MM.dd}"
template_name => "es_template"
template => "/etc/logstash/es_template.json"
}
}
if [type] == "uni-meta-manage"{
elasticsearch {
hosts => ["http://192.168.0.209:9200"]
index => "uni-meta-manage-%{+YYYY.MM.dd}"
template_name => "es_template"
template => "/etc/logstash/es_template.json"
}
}
stdout{}
}
The main change is that each elasticsearch output now loads the JSON template file created above.
Run the start command:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/elk-grok.conf
You will see the console output, and in the browser you can also see the parsed logs: