
Real-Time Stream Processing Learning (2) - Flume
2018-11-13 16:14:06
Tags: real-time, stream processing, Flume
Copyright notice: this is an original article by the author; reproduction without permission is prohibited. https://blog.csdn.net/victorzzzz/article/details/83043784

http://flume.apache.org/

Flume overview:

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows.

Flume design goals: reliability and scalability

Comparison with similar products:

  1. Flume: Cloudera/Apache, Java
  2. Scribe: Facebook, C/C++, no longer maintained
  3. Chukwa: Yahoo/Apache, Java, no longer maintained
  4. Fluentd: Ruby
  5. Logstash: part of the ELK stack (Elasticsearch, Logstash, Kibana)

Apache open-source issue tracker: https://issues.apache.org

Flume architecture and core components:

1) Source: collects incoming data

2) Channel: buffers the collected data somewhere until a sink drains it (memory channel, file channel, Kafka channel)

3) Sink: writes the data out (HDFS, Hive, Avro, Elasticsearch, HBase, Kafka)

Install Flume (archive.cloudera.com/cdh5/cdh/5/flume-ng-1.6.0-cdh5.7.0.tar.gz)

Set the environment variables: vi ~/.bash_profile

export FLUME_HOME=~/flume/apache-flume-1.6.0-cdh5.7.0-bin

export PATH=$FLUME_HOME/bin:$PATH

Configuration: copy flume-env.sh.template to flume-env.sh and set JAVA_HOME in it (check the current value with: echo $JAVA_HOME)
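The template-copy step can be sketched as below. This is a runnable sketch, not the literal install: a scratch directory stands in for $FLUME_HOME/conf, and the fallback JDK path is a placeholder.

```shell
# Sketch of the flume-env.sh setup; a scratch directory stands in for
# $FLUME_HOME/conf so the commands are safe to run anywhere.
FLUME_CONF=$(mktemp -d)
printf '# Flume environment settings\n' > "$FLUME_CONF/flume-env.sh.template"

# The actual steps: copy the template, then append JAVA_HOME
# (the fallback path is a placeholder; check yours with: echo $JAVA_HOME).
cp "$FLUME_CONF/flume-env.sh.template" "$FLUME_CONF/flume-env.sh"
echo "export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/java-8-openjdk}" >> "$FLUME_CONF/flume-env.sh"
grep JAVA_HOME "$FLUME_CONF/flume-env.sh"
```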

Verify the install: flume-ng version

The key to using Flume is writing the configuration file.

Configuration steps:

  1. Configure the source
  2. Configure the channel
  3. Configure the sink
  4. Wire the three components together

A single source can fan out to multiple channels.

Each sink, however, reads from exactly one channel.
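The fan-out rule can be illustrated with a hypothetical agent (the names a1, r1, c1/c2, k1/k2 are made up for illustration) that replicates one source into two channels, each drained by its own sink:

```properties
# One source fans out to two channels; each sink reads from exactly one channel.
a1.sources = r1
a1.channels = c1 c2
a1.sinks = k1 k2

a1.sources.r1.selector.type = replicating   # copy every event to both channels
a1.sources.r1.channels = c1 c2              # one source -> many channels: OK

a1.sinks.k1.channel = c1                    # each sink -> exactly one channel
a1.sinks.k2.channel = c2
```

The replicating selector shown here is Flume's default channel selector, so the selector line is optional but makes the intent explicit.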

Requirement 1: read from a network port (netcat source) and log to the console

Create example.conf in Flume's conf directory:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the agent:

flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/example.conf \
-Dflume.root.logger=INFO,console

After starting, test from another terminal (substitute your own hostname or localhost): telnet iz2zef94dnmkl8kf3l63r9z 44444

Event:{ headers:{ } body:68 65 6c 6c 6f 0D hello. }

An Event is the basic unit of data transfer in Flume: Event = optional headers + byte-array body.
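The body logged above is just the raw bytes of the telnet input, which can be verified locally without Flume (od is POSIX):

```shell
# "hello" followed by a carriage return (0d) - the same body bytes
# the logger sink printed for the telnet line above.
printf 'hello\r' | od -An -tx1
# -> 68 65 6c 6c 6f 0d
```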

Requirement 2: monitor a file for lines appended in real time

exec source + memory channel + logger sink

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /root/flume-test.log
a1.sources.r1.shell = /bin/sh -c

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory


# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Append content to the file and watch the console output:

echo hello >> flume-test.log
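To feed the exec source a steadier stream, a small loop like the one below works. This sketch writes to a temp file so it is safe to run anywhere; point it at the path from the tail -F command instead.

```shell
# Append a few timestamped lines, the way an application log would grow.
LOG=$(mktemp)          # stand-in for /root/flume-test.log
for i in 1 2 3; do
    echo "$(date '+%H:%M:%S') test line $i" >> "$LOG"
done
cat "$LOG"
```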

Requirement 3: collect logs from server A to server B (important, master this one)

Flume on machine A: exec source + memory channel + avro sink

Flume on machine B: avro source + memory channel + logger sink

(Cross-node link: the Avro sink must specify the hostname/IP and port of the remote Avro source; in the configs below both agents use localhost for demonstration.)

exec-memory-avro.conf

# Name the components on this agent
exec-memory-avro.sources = exec-source
exec-memory-avro.sinks = avro-sink
exec-memory-avro.channels = memory-channel

# Describe/configure the source
exec-memory-avro.sources.exec-source.type = exec
exec-memory-avro.sources.exec-source.command = tail -F /root/flume-test.log
exec-memory-avro.sources.exec-source.shell = /bin/sh -c

# Describe the sink
exec-memory-avro.sinks.avro-sink.type = avro
exec-memory-avro.sinks.avro-sink.hostname = localhost
exec-memory-avro.sinks.avro-sink.port = 44444

# Use a channel which buffers events in memory
exec-memory-avro.channels.memory-channel.type = memory


# Bind the source and sink to the channel
exec-memory-avro.sources.exec-source.channels = memory-channel
exec-memory-avro.sinks.avro-sink.channel = memory-channel

avro-memory-logger.conf

# Name the components on this agent
avro-memory-logger.sources = avro-source
avro-memory-logger.sinks = logger-sink
avro-memory-logger.channels = memory-channel

# Describe/configure the source
avro-memory-logger.sources.avro-source.type = avro
avro-memory-logger.sources.avro-source.bind = localhost
avro-memory-logger.sources.avro-source.port = 44444

# Describe the sink
avro-memory-logger.sinks.logger-sink.type = logger


# Use a channel which buffers events in memory
avro-memory-logger.channels.memory-channel.type = memory


# Bind the source and sink to the channel
avro-memory-logger.sources.avro-source.channels = memory-channel
avro-memory-logger.sinks.logger-sink.channel = memory-channel

(1) Start avro-memory-logger first, so it is listening on port 44444:

flume-ng agent \
--name avro-memory-logger \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/avro-memory-logger.conf \
-Dflume.root.logger=INFO,console

(2) Then start exec-memory-avro:

flume-ng agent \
--name exec-memory-avro \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/exec-memory-avro.conf \
-Dflume.root.logger=INFO,console

(3) Append content to the file and watch the output on the avro-memory-logger agent.
