A Collection of Kafka Errors and Exceptions
Copyright notice: this is the blogger's original article and may not be reproduced without permission. https://blog.csdn.net/qq_24505127/article/details/80430608


1. Starting the console producer:
[root@VM_0_16_centos config]# kafka-console-producer.sh --broker-list hadoop000:9092 --topic test
sss
[2018-05-08 22:56:08,946] WARN Error while fetching metadata with correlation id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:09,055] WARN Error while fetching metadata with correlation id 1 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:09,159] WARN Error while fetching metadata with correlation id 2 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:09,263] WARN Error while fetching metadata with correlation id 3 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:09,367] WARN Error while fetching metadata with correlation id 4 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:09,471] WARN Error while fetching metadata with correlation id 5 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:09,577] WARN Error while fetching metadata with correlation id 6 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:09,680] WARN Error while fetching metadata with correlation id 7 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:09,785] WARN Error while fetching metadata with correlation id 8 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:09,889] WARN Error while fetching metadata with correlation id 9 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:09,994] WARN Error while fetching metadata with correlation id 10 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:10,099] WARN Error while fetching metadata with correlation id 11 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:10,204] WARN Error while fetching metadata with correlation id 12 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:10,308] WARN Error while fetching metadata with correlation id 13 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-05-08 22:56:10,411] WARN Error while fetching metadata with correlation id 14 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Solution: the host that the Kafka broker should listen on was never configured. Edit the broker configuration:
vi config/server.properties
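A minimal sketch of the entries involved, assuming the broker host is hadoop000 (the hostname used in the producer command above); older Kafka releases use host.name / advertised.host.name instead of the listener properties, so check the version you are running:

# bind and advertise the broker on a name the clients can resolve (assumed: hadoop000)
listeners=PLAINTEXT://hadoop000:9092
advertised.listeners=PLAINTEXT://hadoop000:9092

Restart the broker after the change so the producer can fetch usable metadata for the topic.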
2. Startup error when developing against the API


2016-04-16 16:43:34,069 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Connected to dx.zdp.ol:9092 for producing
2016-04-16 16:43:34,069 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Disconnecting from dx.zdp.ol:9092
2016-04-16 16:43:34,069 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - kafka.utils.Logging$class.warn(Logging.scala:89)] Failed to send producer request with correlation id 2 to broker 0 with data for partitions [OtaAudit1,0]
java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:76)
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:75)
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:106)
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:106)
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:106)
    at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
    at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:105)
    at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:105)
    at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:105)
    at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:104)
    at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:259)
    at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:110)
    at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:102)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
    at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:102)
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:75)
    at kafka.producer.Producer.send(Producer.scala:77)
    at kafka.javaapi.producer.Producer.send(Producer.scala:42)
    at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:135)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
2016-04-16 16:43:34,079 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Back off for 1000 ms before retrying send. Remaining retries = 3
2016-04-16 16:43:34,522 (agent-shutdown-hook) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.stop(LifecycleSupervisor.java:79)] Stopping lifecycle supervisor 12

Flume was deployed in a Windows environment and Kafka on Linux; sending events from Flume to Kafka kept failing with the error shown above. After a long search online, the problem was finally solved.
Solution 1:
Modify the Kafka broker configuration. Find the commented-out line
#advertised.host.name=<hostname routable by clients>
remove the comment marker, and set it to the IP address of the Linux host running Kafka. It must be consistent with what the client program uses: either both sides use the IP, or both use the hostname.
advertised.host.name=140.143.236.169
Then restart Kafka.
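For illustration, a minimal sketch of the matching client side under solution 1, reusing the sink name and IP that appear elsewhere in this post; the point is only that the address in the program and advertised.host.name must be literally the same:

# Flume Kafka sink pointing at the same IP the broker advertises
producer.sinks.r.brokerList = 140.143.236.169:9092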



Solution 2 (recommended):
Add the (IP, alias) mapping of the Linux host running Kafka to the hosts file on the Windows machine:
192.168.10.10 kafka.host
Then, in the Flume configuration file consumer.properties, set:
producer.sinks.r.brokerList = kafka.host:9092
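As a quick sanity check under solution 2 (a sketch; the hosts file path is the standard Windows location, and telnet requires the optional Windows Telnet client feature), confirm from the Windows machine that the alias resolves and the broker port is reachable before starting the Flume agent:

REM hosts entry lives in C:\Windows\System32\drivers\etc\hosts
ping kafka.host
telnet kafka.host 9092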



