Spark-Flume Integration -- Pull
2018-11-13 16:01:45
Copyright notice: this is an original article from Tu Zhihai's blog and may not be reproduced without the author's permission. https://blog.csdn.net/tuzhihai/article/details/78800681

The second way to integrate Spark Streaming with Flume: the pull approach.

On the Flume side, the agent uses a netcat source - memory channel - custom SparkSink architecture.
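
For reference, here is a minimal sketch of what the SparkPullFlume application might look like, assuming spark-streaming-flume_2.11:2.2.0 and a simple word count (the word-count logic, the 5-second batch interval, and the print output are assumptions, not taken from the actual job). The key call for the pull approach is FlumeUtils.createPollingStream, which connects to the custom SparkSink:

package com.tuzhihai.flumespark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkPullFlume {
  def main(args: Array[String]): Unit = {
    if (args.length != 2) {
      System.err.println("Usage: SparkPullFlume <hostname> <port>")
      System.exit(1)
    }
    val Array(hostname, port) = args

    // The master is supplied by spark-submit (e.g. --master local[2])
    val sparkConf = new SparkConf().setAppName("SparkPullFlume")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // Pull events from the Flume SparkSink, e.g. 192.168.145.128:10000
    val flumeStream = FlumeUtils.createPollingStream(ssc, hostname, port.toInt)

    // The event body is a ByteBuffer; decode it to a line of text, then count words
    flumeStream.map(e => new String(e.event.getBody.array()).trim)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Run it with --master local[2] for local testing and pass the SparkSink host and port as arguments, e.g. 192.168.145.128 10000.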

Local test

1. Start the Spark Streaming application locally, pulling from (192.168.145.128 10000).

2. Start the Flume agent on the server.

3. Use telnet to send data to the netcat port and watch the output in the local IDEA console.

Server test

Package with Maven: mvn clean package -DskipTests

Upload the jar to the server.

Start Flume first (using the spark-pull-flume.conf agent shown at the end of this post):

flume-ng agent \
  --name netcat-memory-spark \
  --conf $FLUME_HOME/conf \
  --conf-file $FLUME_HOME/conf/spark-pull-flume.conf \
  -Dflume.root.logger=INFO,console

Then start Spark:

spark-submit \
--class com.tuzhihai.flumespark.SparkPullFlume \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/root/soft_down/lib/sparklearn-1.0.jar \
192.168.145.128 10000

Send data to the netcat port:

telnet 192.168.145.128 9999

Then watch the Flume console output.

Why must Flume be started before Spark in the pull approach?

First of all, the pull approach is more reliable than the push approach and is used very widely in real work.

With the pull approach, Flume collects the data and parks it in the agent (buffered behind the custom SparkSink); when Spark wants data, it simply pulls it from that agent. This is clearly friendlier and a better fit for real work: instead of the data being pushed to Spark, Spark fetches it whenever it is ready for it. And because Spark is the side that opens the connection to the Flume sink, the Flume agent must already be running before the Spark application starts.
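
To make the contrast concrete, here is a hedged sketch of the two FlumeUtils calls (the object name PushVsPull and the 5-second batch interval are illustrative only, and the push line is commented out because it belongs to the first approach): createStream is the push API, where Spark runs an Avro receiver that Flume's avro sink pushes into, so Spark must start first; createPollingStream is the pull API used here, where Spark polls the custom SparkSink, so Flume must start first.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object PushVsPull {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("PushVsPull"), Seconds(5))

    // Push (approach 1): Spark opens an Avro listener and Flume's avro sink pushes to it,
    // so the Spark job has to be running before the Flume agent starts.
    // val pushStream = FlumeUtils.createStream(ssc, "192.168.145.128", 10000)

    // Pull (this post): Flume parks events in the custom SparkSink and Spark polls it,
    // so the Flume agent has to be running before the Spark job starts.
    val pullStream = FlumeUtils.createPollingStream(ssc, "192.168.145.128", 10000)

    pullStream.count().print()
    ssc.start()
    ssc.awaitTermination()
  }
}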




spark-pull-flume.conf

flume-ng agent \
  --name netcat-memory-spark \
  --conf $FLUME_HOME/conf \
  --conf-file $FLUME_HOME/conf/spark-pull-flume.conf \
  -Dflume.root.logger=INFO,console

# example netcat-memory-spark
netcat-memory-spark.sources = netcat-source
netcat-memory-spark.sinks = spark-sink
netcat-memory-spark.channels = memory-channel

# Describe/configure the source
netcat-memory-spark.sources.netcat-source.type = netcat
netcat-memory-spark.sources.netcat-source.bind = 192.168.145.128
netcat-memory-spark.sources.netcat-source.port = 9999

# Describe/configure the sink
netcat-memory-spark.sinks.spark-sink.type = org.apache.spark.streaming.flume.sink.SparkSink
netcat-memory-spark.sinks.spark-sink.hostname = 192.168.145.128
netcat-memory-spark.sinks.spark-sink.port = 10000

# Use a channel which buffers events in memory
netcat-memory-spark.channels.memory-channel.type = memory
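# (Flume's memory channel defaults to capacity = 100 and transactionCapacity = 100 events;
#  for heavier traffic, set netcat-memory-spark.channels.memory-channel.capacity explicitly)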

# Bind the source and sink to the channel
netcat-memory-spark.sources.netcat-source.channels = memory-channel
netcat-memory-spark.sinks.spark-sink.channel = memory-channel