Fixing the ".....metadata for topic 'topicName' from broker 'localhost:9092'" error when testing a Pipeline in StreamSets

Today, while building a pipeline that exports data from an Oracle database to HBase, the validation test failed and kept throwing this warning:

2020-08-13 03:46:40,626    test_Oracle/testOracle20094215-4e64-431d-8ffd-7ba20f11706e    WARN    [Consumer clientId=consumer-15, groupId=sdcTopicMetadataClient] Connection to node -1 could not be established. Broker may not be available.      NetworkClient    *admin        preview-pool-1-thread-2


After I finally force-stopped it, it reported:

Error getting metadata for topic 'topicName' from broker 'localhost:9092' due to error: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic ....

This puzzled me: the test only moves data from Oracle to HBase in StreamSets, and I had not configured Kafka anywhere, so why was it trying to fetch the topic 'topicName' from localhost:9092?

Then I clicked a blank area of the pipeline canvas and opened the Error Records configuration. There I could see that error records are set to be written to Kafka, which is where the Kafka connection comes from: the pipeline-level error handling writes failed records to a Kafka topic, and the Broker URI was still set to localhost:9092.

Then, on the Error Records - Write to Kafka tab, change the Broker URI from localhost:9092 to cluster2-4:9092 (use whatever address matches your own Kafka configuration).
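If you want to double-check that the new broker address is reachable before re-running the pipeline, a minimal sketch using Kafka's Java AdminClient works; cluster2-4:9092 is simply the address used above, so adjust it to your cluster:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class CheckBroker {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // cluster2-4:9092 is the broker address used in this post; replace with your own
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster2-4:9092");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");

        try (AdminClient admin = AdminClient.create(props)) {
            // Listing topics forces a metadata fetch; a TimeoutException here means
            // the broker address is still wrong or the broker is not running
            System.out.println("Topics: " + admin.listTopics().names().get());
        }
    }
}
```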

Next, create the topicName topic in Kafka to collect the error records.
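The topic can be created with the scripts that ship with Kafka or programmatically; below is a sketch using Kafka's Java AdminClient, where the single partition and replication factor of 1 are assumptions you should adjust to your cluster, and the topic name must match the topic configured for the error records writer:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateErrorTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster2-4:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // "topicName" must match the topic configured for error records;
            // 1 partition and replication factor 1 are assumptions for a small test cluster
            NewTopic errorTopic = new NewTopic("topicName", 1, (short) 1);
            admin.createTopics(Collections.singleton(errorTopic)).all().get();
            System.out.println("Created topic: " + errorTopic.name());
        }
    }
}
```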

After that, running the validation test again no longer produces the error.
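To confirm that failed records really do land in the error topic, you can read it back with a plain Kafka consumer. This is only a sketch: the group id is chosen arbitrarily, and String deserializers are used just to dump the raw payload for inspection:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReadErrorRecords {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster2-4:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "error-record-inspector"); // arbitrary group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("topicName"));
            // Poll once and print whatever error records have been written so far
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
```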