The source is the local directory /flume/weblogs_spooldir, which holds a large number of log files.
The channel is a memory channel.
The sink writes to HDFS and is configured to roll files by size.
Configuration file:
agent.sources = source_spool
agent.sinks = sink_hdfs
agent.channels = channel_memory
agent.sources.source_spool.type = spooldir
agent.sources.source_spool.spoolDir = /flume/weblogs_spooldir
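#Files dropped into the spooling directory must be immutable once placed there; by default Flume renames fully ingested files with a .COMPLETED suffix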
agent.sources.source_spool.channels = channel_memory
#Note: for sources the property name takes an s: channels, not channel
agent.sinks.sink_hdfs.type = hdfs
#Where the files are stored on HDFS; here they go under /loudacre/weblogs
agent.sinks.sink_hdfs.hdfs.path = /loudacre/weblogs
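#Roll by size only: rollInterval = 0 disables time-based rolling and rollCount = 0 disables
#event-count-based rolling, so files roll once they reach rollSize bytes (524288 bytes = 512 KiB)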
agent.sinks.sink_hdfs.hdfs.rollInterval = 0
agent.sinks.sink_hdfs.hdfs.rollCount = 0
agent.sinks.sink_hdfs.hdfs.rollSize = 524288
#Setting hdfs.fileType to DataStream writes raw text files (rather than the default SequenceFile format)
agent.sinks.sink_hdfs.hdfs.fileType = DataStream
agent.sinks.sink_hdfs.channel = channel_memory
agent.channels.channel_memory.type = memory
#The channel can hold up to 10,000 events
agent.channels.channel_memory.capacity = 10000
#Up to 1,000 events per transaction
agent.channels.channel_memory.transactionCapacity = 1000
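With the configuration above saved to a file, say spool-to-hdfs.conf (the file name and conf directory below are assumptions, adjust them to your setup), the agent can be started with the standard flume-ng command; note that --name must match the agent prefix used in the properties:
flume-ng agent --conf /etc/flume-ng/conf --conf-file spool-to-hdfs.conf --name agent -Dflume.root.logger=INFO,console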
The interceptor section below is optional; if used, add it to the configuration above:
agent.sources.source_spool.interceptors = i1
agent.sources.source_spool.interceptors.i1.type = regex_extractor
#The regex must expose exactly one capture group (the timestamp) so that group 1 maps to serializer s1
agent.sources.source_spool.interceptors.i1.regex = ^(\\d\\d\\d\\d-\\d\\d-\\d\\d\\s\\d\\d:\\d\\d)
agent.sources.source_spool.interceptors.i1.serializers = s1
agent.sources.source_spool.interceptors.i1.serializers.s1.type = org.apache.flume.interceptor.RegexExtractorInterceptorMillisSerializer
agent.sources.source_spool.interceptors.i1.serializers.s1.name = timestamp
agent.sources.source_spool.interceptors.i1.serializers.s1.pattern = yyyy-MM-dd HH:mm
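For illustration, given a hypothetical log line beginning like this:
2013-09-22 11:45 GET /index.html 200
the regex captures the leading date, the serializer parses it with the pattern yyyy-MM-dd HH:mm, and the resulting epoch-milliseconds value is stored in an event header named timestamp, which is exactly the header the HDFS sink's date escapes (%Y, %m, %d) read.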
Storage layout on HDFS:
#Store files in date-based directories (a lowercase %y would give a two-digit year instead).
#These escape sequences are filled from the event's timestamp header, which the interceptor above provides; without the interceptor, set hdfs.useLocalTimeStamp = true instead.
agent.sinks.sink_hdfs.hdfs.path = /loudacre/weblogs/%Y%m%d
#File name prefix
agent.sinks.sink_hdfs.hdfs.filePrefix = %Y-%m-%d
#File name suffix
agent.sinks.sink_hdfs.hdfs.fileSuffix = .log
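With these settings, a file rolled on, say, 2013-09-22 would land at a path of the form /loudacre/weblogs/20130922/2013-09-22.&lt;counter&gt;.log, where &lt;counter&gt; is the unique millisecond-based number Flume appends to every file name (a file still being written also carries the default .tmp in-use suffix).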
That about covers it; anything else will be added in a follow-up.