EOFException from Kafka in Flume

Problem description:

I am trying to set up a simple data pipeline from a console Kafka producer to the Hadoop file system (HDFS). I am working on a 64-bit Ubuntu virtual machine and have created separate users for Hadoop and Kafka, as the guides I followed suggested. Consuming the produced input with the console consumer works in Kafka, and HDFS appears to be up and running.

Now I want to pipe the input to HDFS using Flume. I am using the following configuration file:

tier1.sources = source1 
tier1.channels = channel1 
tier1.sinks = sink1 

tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource 
tier1.sources.source1.zookeeperConnect = 127.0.0.1:2181 
tier1.sources.source1.topic = test 
tier1.sources.source1.groupId = flume 
tier1.sources.source1.channels = channel1 
tier1.sources.source1.interceptors = i1 
tier1.sources.source1.interceptors.i1.type = timestamp 
tier1.sources.source1.kafka.consumer.timeout.ms = 2000 

tier1.channels.channel1.type = memory 
tier1.channels.channel1.capacity = 10000 
tier1.channels.channel1.transactionCapacity = 1000 

tier1.sinks.sink1.type = hdfs 
tier1.sinks.sink1.hdfs.path = hdfs://flume/kafka/%{topic}/%y-%m-%d 
tier1.sinks.sink1.hdfs.rollInterval = 5 
tier1.sinks.sink1.hdfs.rollSize = 0 
tier1.sinks.sink1.hdfs.rollCount = 0 
tier1.sinks.sink1.hdfs.fileType = DataStream 
tier1.sinks.sink1.channel = channel1 
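
Side note: the stack trace further down shows the new Kafka consumer API (org.apache.kafka.clients.consumer.KafkaConsumer), whereas zookeeperConnect belongs to the older, ZooKeeper-based configuration of the Kafka source. On Flume 1.7 and later, the source is pointed at the brokers directly instead; a minimal sketch of the equivalent source properties, assuming Flume 1.7+ and a broker listening on localhost:9092:

tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.kafka.bootstrap.servers = localhost:9092
tier1.sources.source1.kafka.topics = test
tier1.sources.source1.kafka.consumer.group.id = flume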

Now, when I run Flume with the following command:

bin/flume-ng agent --conf ./conf -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n tier1 

I get the same exception in the console output, over and over again:

2017-10-19 12:17:04,279 (lifecycleSupervisor-1-2) [DEBUG - org.apache.kafka.clients.NetworkClient.handleConnections(NetworkClient.java:467)] Completed connection to node 2147483647 
2017-10-19 12:17:04,279 (lifecycleSupervisor-1-2) [DEBUG - org.apache.kafka.common.network.Selector.poll(Selector.java:307)] Connection with Ubuntu-Sandbox/127.0.1.1 disconnected 
java.io.EOFException 
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83) 
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) 
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153) 
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134) 
    at org.apache.kafka.common.network.Selector.poll(Selector.java:286) 
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256) 
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320) 
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213) 
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193) 
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163) 
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222) 
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311) 
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890) 
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
    at org.apache.flume.source.kafka.KafkaSource.doStart(KafkaSource.java:529) 
    at org.apache.flume.source.BasicSourceSemantics.start(BasicSourceSemantics.java:83) 
    at org.apache.flume.source.PollableSourceRunner.start(PollableSourceRunner.java:71) 
    at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:249) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) 
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
    at java.lang.Thread.run(Thread.java:748) 

The only way to stop Flume is to kill the Java process.

I thought it might have something to do with the separate users for Hadoop and Kafka, but I get the same result even when running everything as the Kafka user. I have not found anything on the EOFException either, which is strange, since I simply followed the getting-started guides and used fairly standard configurations for everything.

Maybe it has something to do with the preceding log line ("Connection with Ubuntu-Sandbox/127.0.1.1 disconnected"), and therefore with the configuration of my VM?
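
If that is the suspicion, a quick sketch for checking it, assuming the default Ubuntu /etc/hosts layout and a broker configured via config/server.properties:

# Ubuntu typically maps the machine's hostname to 127.0.1.1 in /etc/hosts:
grep 127.0.1.1 /etc/hosts

# In config/server.properties, pinning the listener to localhost keeps the
# broker from advertising the Ubuntu-Sandbox/127.0.1.1 alias to clients:
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092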

Any help is highly appreciated!

Have you considered using Kafka Connect (part of Apache Kafka) with the HDFS connector instead? It is generally considered to have superseded Flume, and it is easy to use, with a file-based configuration similar to Flume's.
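
For illustration, a minimal sink configuration along the lines of the HDFS connector's quickstart; the topic and hdfs.url values here are assumptions to adjust to your setup:

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=test
hdfs.url=hdfs://localhost:9000
flush.size=3

flush.size controls how many records are buffered before a file is committed to HDFS.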


Thanks for the suggestion, Robin. I have familiarized myself with Confluent, and it does seem to make everything easier. However, once again I could not get data written from Kafka to HDFS by simply following the quickstart guide... This time I do not even get an exception: the connect-standalone process never finishes, and the folder in HDFS, although it does get created, stays empty... It is really frustrating! – stefanS
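
One note on the connect-standalone observation: the worker is a long-running process and is not expected to exit on its own. A typical invocation, with paths assuming the Confluent quickstart layout, looks like:

bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties \
    etc/kafka-connect-hdfs/quickstart-hdfs.properties

An empty (but created) output directory can simply mean that the connector's flush.size threshold has not been reached yet, so no file has been committed.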