cool-2018-10-22-centos7-flume+kafka
Connecting Flume to the Kafka cluster
Download the apache-flume-1.6.0-bin.tar.gz package.
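If the tarball is not already at hand, it can be fetched from the Apache archive (any Apache mirror works just as well):
wget http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz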
Extract it with tar -zxvf apache-flume-1.6.0-bin.tar.gz.
Extracting Flume is the whole installation; what the agent actually does is determined by the conf/flume-conf.properties file it is started with.
Only the hadoop1 node needs Flume configured to feed Kafka.
The Kafka cluster is already up from the earlier steps, so the only remaining work is conf/flume-conf.properties, configured as follows:
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop1
a1.sources.r1.port = 41414
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = testflume
a1.sinks.k1.brokerList = 192.168.25.151:9092,192.168.25.152:9092,192.168.25.153:9092,192.168.25.154:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000000
a1.channels.c1.transactionCapacity = 10000
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Note that the source must be configured for the actual requirements: here an avro source listens on port 41414 of hadoop1 (the master). This is the agent-side configuration, and the Flume client started later must target the same host and port. In the sink section, a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink is fixed and must not be changed; a1.sinks.k1.topic = testflume is the topic name and can be renamed as needed; and a1.sinks.k1.brokerList must list the brokers of the actual Kafka cluster (192.168.25.151:9092 through 192.168.25.154:9092 above).
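One point worth checking before starting anything: Kafka brokers often auto-create topics on first write, but if auto-creation is disabled, the testflume topic must exist beforehand. A sketch using the ZooKeeper-based tooling of this Kafka generation; the partition and replication counts are only illustrative:
./bin/kafka-topics.sh --create --zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 --replication-factor 2 --partitions 3 --topic testflume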
Start ZooKeeper on every ZooKeeper node.
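On each ZooKeeper node (assuming a stock ZooKeeper layout; adjust the path to the actual installation):
./bin/zkServer.sh start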
Start Kafka on every broker node.
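On each broker (the -daemon flag runs the broker in the background; adjust paths to the actual installation):
./bin/kafka-server-start.sh -daemon config/server.properties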
Single-node Flume startup
Start Flume and connect it to Kafka:
bin/flume-ng agent -c ./conf/ -f conf/flume-conf.properties -Dflume.root.logger=DEBUG,console -n a1
Note that the trailing a1 names the agent defined in the configuration file; it is not an arbitrary value.
At this point the Flume-to-Kafka hookup is actually complete; what remains is to test it.
Get the Flume project running.
On any node of the Kafka cluster, run the following command:
./bin/kafka-console-consumer.sh --zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 --from-beginning --topic testflume
Note that the topic name here must match the one in the configuration file.
Run the main method mentioned above as the Flume client to produce data; the results show up in the consumer started above.
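If the Java client project is not available, Flume's bundled avro-client can stand in as a test producer; it sends each line read from a file or from stdin as one event to the avro source configured above:
echo "hello flume" | ./bin/flume-ng avro-client --conf conf -H hadoop1 -p 41414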
Next, start Storm and connect it to Kafka.
Start the Storm cluster: nimbus on hadoop1, the Storm UI on hadoop1, and a supervisor on each of hadoop1 through hadoop4.
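A typical startup sequence, assuming Storm's bin directory is on the PATH (run each command on the nodes indicated; nohup keeps the daemons alive after the shell exits):
nohup storm nimbus > nimbus.log 2>&1 &            # on hadoop1
nohup storm ui > ui.log 2>&1 &                    # on hadoop1
nohup storm supervisor > supervisor.log 2>&1 &    # on hadoop1 through hadoop4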
Launch the logFilterTopology from the storm project.
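Topologies are submitted with storm jar; the jar name and main class below are placeholders for this project's actual artifact and entry point, not values from the original setup:
storm jar logfilter-topology.jar com.example.storm.LogFilterTopology logFilterTopology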
Monitor the topology's DAG in the Storm UI.
Monitor the topic test on Kafka:
./bin/kafka-console-consumer.sh --zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 --from-beginning --topic test