Making Flume send events to Kafka evenly across partitions
2018-11-29 03:25:57 | Views: 112
Tags: flume kafka send even-distribution partition

The official documentation says that messages without a key are assigned to partitions randomly, but for whatever reason that did not happen in my setup. So I gave every event a key instead, which requires adding a source interceptor.
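The effect of keying each event can be illustrated with a small simulation: once every message carries a random UUID key, `hash(key) % numPartitions` spreads the messages nearly uniformly. This is a sketch only; Kafka's real producer uses murmur2 rather than Python's `hash`, but any uniform hash shows the same spreading behaviour:

```python
import uuid
from collections import Counter

NUM_PARTITIONS = 3  # matches a 3-node cluster like the one below

def partition_for(key: bytes, num_partitions: int) -> int:
    # Stand-in for Kafka's key-based partitioner (hash(key) % numPartitions).
    return hash(key) % num_partitions

# Send 3000 "messages", each keyed with a fresh UUID, and count
# how many land in each partition.
counts = Counter(
    partition_for(str(uuid.uuid4()).encode(), NUM_PARTITIONS)
    for _ in range(3000)
)
```

With random UUID keys, all three partitions receive roughly a third of the traffic each, which is exactly what the interceptor below achieves.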

Run the agent with: flume-ng agent --conf conf --conf-file test.sh --name a1 -Dflume.root.logger=INFO,console

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/access.log

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = test
a1.sinks.k1.brokerList = node1:9092,node2:9092,node3:9092
a1.sinks.k1.metadata.broker.list = node1:9092,node2:9092,node3:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20

# Source interceptor: stamp every event with a UUID in the "key" header,
# which the Kafka sink then uses as the message key
a1.sources.r1.interceptors = i2
a1.sources.r1.interceptors.i2.type = org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder
a1.sources.r1.interceptors.i2.headerName = key
a1.sources.r1.interceptors.i2.preserveExisting = false

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
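In plain terms, all the UUIDInterceptor does is stamp each event's headers with a fresh UUID under the configured header name (unless preserveExisting is set and the header is already there). A rough Python sketch of that behaviour, with hypothetical names since Flume's real implementation is Java:

```python
import uuid

def uuid_interceptor(event, header_name="key", preserve_existing=False):
    """Mimic Flume's UUIDInterceptor: add a random UUID to the event's
    headers under header_name, optionally keeping an existing value."""
    headers = event.setdefault("headers", {})
    if preserve_existing and header_name in headers:
        return event  # keep the key that is already present
    headers[header_name] = str(uuid.uuid4())
    return event
```

Because headerName is set to "key" and preserveExisting to false, every event leaving the source carries a distinct key, so the Kafka sink's key-based partitioning spreads the events across all partitions.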
