1. Flume configuration file kfk.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/tmp/test.txt
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = test
a1.sinks.k1.brokerList = hadoop0:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
2. Start ZooKeeper
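No command is shown for this step; a common way, assuming the ZooKeeper bundled with Kafka under ~/bda/kafka is used (adjust the paths if a standalone ZooKeeper is installed):
# start the ZooKeeper instance that ships with Kafka
~/bda/kafka/bin/zookeeper-server-start.sh ~/bda/kafka/config/zookeeper.properties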
3. Start Kafka
kafka-server-start.sh ~/bda/kafka/config/server.properties
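If the broker should keep running after the terminal is closed, kafka-server-start.sh also accepts a -daemon flag:
kafka-server-start.sh -daemon ~/bda/kafka/config/server.properties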
4. Create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Check that it exists:
bin/kafka-topics.sh --list --zookeeper localhost:2181
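To check the partition count and replication of the new topic, kafka-topics.sh can also describe it:
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test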
5. Start Flume
bin/flume-ng agent --conf conf --conf-file conf/kfk.conf --name a1 -Dflume.root.logger=INFO,console
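If you do not want to keep a terminal open for the agent, an ordinary nohup works (nothing Flume-specific; the log file name here is just an example):
nohup bin/flume-ng agent --conf conf --conf-file conf/kfk.conf --name a1 > flume.log 2>&1 &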
6. Consume the messages
bin/kafka-console-consumer.sh --bootstrap-server hadoop0:9092 --topic test --from-beginning
(Since the broker address was configured as hadoop0, you must use hadoop0 here; even though it resolves to the same IP address as localhost, passing localhost causes an error.)
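To verify the whole pipeline end to end, append a line to the file that the exec source is tailing (the path comes from kfk.conf); it should show up in the console consumer within a few seconds:
# write a test line into the tailed file
echo "hello kafka from flume" >> /home/hadoop/tmp/test.txt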