Copyright notice: this is purely my own understanding. https://blog.csdn.net/ggj20ss/article/details/50601847
Quickstart
Installation and usage: the official site covers this in detail.
http://kafka.apache.org/documentation.html#quickstart
On Linux:
wget http://apache.opencas.org/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz
tar -xzvf kafka_2.11-0.9.0.0.tgz
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &
Pitfalls when calling from Java
Since the Kafka I installed on the server is the latest 0.9.0 release, I hit quite a few pitfalls; they are listed below.
1. JDK version: Kafka 0.9 no longer supports Java 6 or Scala 2.9, so I upgraded the server's JDK straight to JDK 8.
2. Java demo code: everything you can find online is a pre-0.9.0 sample, so my tests kept failing. Do not validate your setup against the old demos!
3. Exception: Error while fetching metadata with correlation id 1 : {topicapi=LEADER_NOT_AVAILABLE}
If the Linux system's hostname is set to something other than localhost, a Windows client needs a matching host alias.
For example, my server's alias is configured as follows:
/etc/hostname contains master
/etc/hosts has the added line: 123.56.118.135 master
On Windows, add the same ip/alias record to C:\Windows\System32\drivers\etc\hosts:
123.56.118.135 master
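With the alias in place, the 0.9 client only needs bootstrap.servers to point at it. Below is a minimal sketch of the producer configuration, assuming the broker listens on the default port 9092; the property names are from the kafka-clients 0.9 API, and the host alias master matches the /etc/hosts entry above:

```java
import java.util.Properties;

public class ProducerProps {
    public static Properties build() {
        Properties props = new Properties();
        // Use the alias, not the raw IP: the broker reports its hostname
        // ("master") in the metadata response, so the client must be able to
        // resolve that name or it fails with LEADER_NOT_AVAILABLE-style errors.
        props.put("bootstrap.servers", "master:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("bootstrap.servers"));
    }
}
```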
To change the Java heap Kafka starts with, edit kafka-server-start.sh:
export KAFKA_HEAP_OPTS="-Xmx30m -Xms30m"
Kafka Java API
Kafka 0.9 no longer supports JDK 6 or Scala 2.9, and the old client has also been rewritten in Java.
Apache Kafka includes new Java clients (in the org.apache.kafka.clients package). These are meant to supplant the older Scala clients, but for compatibility they will co-exist for some time. These clients are available in a separate jar with minimal dependencies, while the old Scala clients remain packaged with the server.
Maven dependency:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.9.0.0</version>
</dependency>
kafka-0.9.0.0 sample
API:
http://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html
http://kafka.apache.org/documentation.html#producerapi
Code:
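The original code listing did not survive in this copy of the post. As a stand-in, here is a minimal sketch of the 0.9 new-client API (producer plus consumer), assuming the quickstart broker at master:9092 and the test topic created in the quickstart; the group id demo-group is an arbitrary choice. It needs the kafka-clients 0.9.0.0 jar from the Maven dependency above and a running broker, so treat it as an illustration rather than a drop-in program:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaSample {
    // Send a handful of string messages to the "test" topic.
    public static void produce() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092");
        props.put("acks", "all");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("test", Integer.toString(i), "message-" + i));
        }
        producer.close();
    }

    // Poll the "test" topic with the new 0.9 consumer and print each record.
    public static void consume() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092");
        props.put("group.id", "demo-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("test"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d, key=%s, value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}
```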
Common commands
> tar -xzf kafka_2.11-0.9.0.0.tgz
> cd kafka_2.11-0.9.0.0
> bin/zookeeper-server-start.sh config/zookeeper.properties &
> bin/kafka-server-start.sh config/server.properties &
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
> bin/kafka-topics.sh --list --zookeeper localhost:2181
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties
config/server-1.properties:
broker.id=1
port=9093
log.dir=/tmp/kafka-logs-1
config/server-2.properties:
broker.id=2
port=9094
log.dir=/tmp/kafka-logs-2
> bin/kafka-server-start.sh config/server-1.properties &
> bin/kafka-server-start.sh config/server-2.properties &
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
> ps | grep server-1.properties
> kill -9 7564
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
> echo -e "foo\nbar" > test.txt
> bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
> cat test.sink.txt
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic connect-test --from-beginning
> echo "Another line" >> test.txt
Configuration tuning
All of the tuning below is done by changing parameter values in server.properties.
1. Network and I/O thread tuning
# maximum number of threads the broker uses to process network requests
num.network.threads=xxx
# number of threads the broker uses for disk I/O
num.io.threads=xxx
Recommended settings:
num.network.threads mainly handles network I/O, reading and writing buffered data, with essentially no I/O waits; set it to the number of CPU cores plus 1.
num.io.threads performs disk I/O and may see some I/O waits at peak load, so it needs to be larger: 2x the number of CPU cores, and no more than 3x.
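The two sizing rules above can be written down directly. A small sketch; note that the cores+1 and 2x-cores formulas are this post's recommendations, not Kafka defaults:

```java
public class BrokerThreadSizing {
    // num.network.threads: mostly CPU-bound, so CPU cores + 1
    public static int networkThreads(int cores) {
        return cores + 1;
    }

    // num.io.threads: some disk waits, so 2x cores; 3x is the suggested ceiling
    public static int ioThreads(int cores) {
        return cores * 2;
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("num.network.threads=" + networkThreads(cores));
        System.out.println("num.io.threads=" + ioThreads(cores)
                + "  (at most " + cores * 3 + ")");
    }
}
```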
2. Log file flush policy
To substantially improve producer write throughput, data needs to be flushed to the log files periodically in batches.
Recommended settings:
# flush data to disk after every 10000 messages written by producers
log.flush.interval.messages=10000
# flush data to disk every 1 second
log.flush.interval.ms=1000
3. Log retention policy
After a Kafka server has absorbed a huge volume of messages, it ends up with many data files occupying a lot of disk space; without timely cleanup the disk can fill up. Kafka's default retention is 7 days.
Recommended settings:
# keep three days; it can be even shorter
log.retention.hours=72
# a 1 GB segment file helps reclaim disk space quickly and speeds up loading when Kafka
# restarts (if the files are too small there will be many of them, and at startup Kafka
# scans all data files under log.dir with a single thread)
log.segment.bytes=1073741824
4. Configuring the JMX service
By default the Kafka server does not open a JMX port; you have to configure it yourself:
[lizhitao@root kafka_2.10-0.8.1]$ vim bin/kafka-run-class.sh
# add one line at the very top
JMX_PORT=8060
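Once JMX_PORT is exported, any standard JMX client (jconsole, for example) can attach. As a sketch, the JDK's own javax.management.remote classes can build the service URL for the broker's JMX endpoint; the host alias master and port 8060 follow the examples above:

```java
import javax.management.remote.JMXServiceURL;

public class KafkaJmxUrl {
    // Builds the RMI service URL a JMX client would use to reach the
    // broker's JMX port opened via JMX_PORT=8060.
    public static String serviceUrl(String host, int port) throws Exception {
        return new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi")
                .toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serviceUrl("master", 8060));
    }
}
```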
Kafka reference blogs
https://github.com/ajmalbabu/kafka-clients
http://my.oschina.net/ielts0909/blog/117489
http://my.oschina.net/u/1475616/blog/374686#OSC_h1_1
http://www.open-open.com/lib/view/open1434551761926.html
http://my.oschina.net/frankwu/blog/303745#OSC_h2_1
http://blog.csdn.net/honglei915/article/details/37697655
All about Kafka's file storage mechanism
http://blog.csdn.net/opensure/article/details/46048589
Kafka in practice: real-time statistics on reported user logs, application overview
http://www.w2bc.com/Article/57680
http://kelgon.iteye.com/blog/2287985