Why doesn't my fluentd work with elasticsearch?
Problem description:
I want to collect docker logs with fluentd and elasticsearch. Here is the log from when I start fluentd:
2016-11-30 16:29:34 +0800 [info]: starting fluentd-0.12.19
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-mixin-config-placeholders' version '0.3.0'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-elasticsearch' version '1.7.0'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-mongo' version '0.7.11'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.3'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-s3' version '0.6.4'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-scribe' version '0.10.14'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-secure-forward' version '0.4.3'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-td' version '0.10.28'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-td-monitoring' version '0.2.1'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-webhdfs' version '0.4.1'
2016-11-30 16:29:34 +0800 [info]: gem 'fluentd' version '0.12.19'
2016-11-30 16:29:34 +0800 [info]: adding match pattern="td.*.*" type="tdlog"
2016-11-30 16:29:34 +0800 [info]: adding match pattern="debug.**" type="stdout"
2016-11-30 16:29:34 +0800 [info]: adding match pattern="docker.**" type="stdout"
2016-11-30 16:29:34 +0800 [info]: adding match pattern="*.**" type="copy"
2016-11-30 16:29:35 +0800 [info]: adding source type="forward"
2016-11-30 16:29:35 +0800 [info]: adding source type="http"
2016-11-30 16:29:35 +0800 [info]: adding source type="debug_agent"
2016-11-30 16:29:35 +0800 [info]: using configuration file: <ROOT>
<match td.*.*>
type tdlog
apikey xxxxxx
auto_create_table
buffer_type file
buffer_path /var/log/td-agent/buffer/td
<secondary>
type file
path /var/log/td-agent/failed_records
buffer_path /var/log/td-agent/failed_records.*
</secondary>
</match>
<match debug.**>
type stdout
</match>
<match docker.**>
type stdout
</match>
<match *.**>
type copy
<store>
@type elasticsearch
host localhost
port 9200
include_tag_key true
tag_key log_name
logstash_format true
flush_interval 1s
</store>
</match>
<source>
type forward
</source>
<source>
type http
port 8888
</source>
<source>
type debug_agent
bind 127.0.0.1
port 24230
</source>
</ROOT>
2016-11-30 16:29:35 +0800 [info]: listening fluent socket on 0.0.0.0:24224
2016-11-30 16:29:35 +0800 [info]: listening dRuby uri="druby://127.0.0.1:24230" object="Engine"
2016-11-30 16:29:38 +0800 docker.40271db2b565: {"log":"1:M 30 Nov 08:29:38.065 # User requested shutdown...","container_id":"40271db2b565d52fa0ab54bde2b0fa4b61e4ca033fca2b7edcf54c1a93443c19","container_name":"/tender_banach","source":"stdout"}
I am using elasticsearch's default configuration, and after I start it, it keeps logging the following:
[2016-11-30T16:49:32,154][WARN ][o.e.c.r.a.DiskThresholdMonitor] [I_hB3Vd] high disk watermark [90%] exceeded on [I_hB3VdfQ3q3hBeP5skTBQ][I_hB3Vd][/Users/it/Desktop/elasticsearch-5.0.1/data/nodes/0] free: 10gb[8.9%], shards will be relocated away from this node
My elasticsearch version is 5.0.1; the fluentd and fluent-plugin-elasticsearch versions are shown above. I am running Mac OS 10.11.6. I have tried every method I could find online. Can anyone help?
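Before blaming fluentd, it helps to query elasticsearch directly and see whether data is arriving. A minimal diagnostic sketch, assuming elasticsearch is on localhost:9200 as in the config above (run these against your own cluster; output will differ):

```shell
# Cluster health: "red" or "yellow" status points at an elasticsearch-side problem
curl -s 'localhost:9200/_cluster/health?pretty'

# Per-node disk usage, to see which node tripped the 90% watermark
curl -s 'localhost:9200/_cat/allocation?v'

# Check whether fluentd has created any logstash-* indices at all
# (logstash_format true in the config means daily logstash-YYYY.MM.DD indices)
curl -s 'localhost:9200/_cat/indices?v'
```

If `_cat/indices` shows no `logstash-*` index, fluentd's output is not making it into elasticsearch; if the indices exist and grow, ingestion works and only shard allocation is affected.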
Answer:
Forgot to say: my elasticsearch version is 5.0.1, my fluentd and fluent-plugin-elasticsearch versions are shown above, and I am using Mac OS 10.11.6.
Answer:
It looks like fluentd started up just fine. I would say your elasticsearch is the problem (especially if you have multiple nodes / data nodes). The error explains it: it looks like a disk-space issue on one of the nodes.
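If freeing disk space right away is not an option, the watermark thresholds can be raised for local testing. A hedged sketch, not from the original thread: these are elasticsearch 5.x's disk-based allocation settings, placed in elasticsearch.yml (the values here are illustrative, not recommendations for production):

```yaml
# elasticsearch.yml -- raise the disk watermarks so a nearly-full laptop
# disk does not trigger the warning and shard relocation (local testing only)
cluster.routing.allocation.disk.watermark.low: 95%
cluster.routing.allocation.disk.watermark.high: 97%
```

The safer fix is simply to free disk space on the node: the warning fired because only 10gb (8.9%) was free, below the 10% of headroom the default 90% high watermark requires.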