Flume Installation and Configuration (a Distributed Log Collection Framework)
2019-04-18 14:13:30
Tags: Flume, installation, configuration, distributed, log collection, framework
Copyright: @GaoShan https://blog.csdn.net/weixin_42969976/article/details/84776794

Flume overview: a distributed, highly reliable, and highly available service for efficiently collecting, aggregating, and moving large volumes of log data.
Flume core components (an event flows Source → Channel → Sink):

  • Source (collects incoming data)
  • Channel (buffers events between source and sink)
  • Sink (writes events to the destination)

Installation
tar -zxvf <flume tarball>
Configure the environment variables (in /etc/profile):
export FLUME_HOME=/…/…/<flume install dir>
export PATH=$PATH:$FLUME_HOME/bin

source /etc/profile
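A quick way to confirm that the environment is set up correctly is to run the bundled CLI, which should print the installed version:

flume-ng version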

Configuration
cd $FLUME_HOME/conf
mv flume-env.sh.template flume-env.sh
vi flume-env.sh
export JAVA_HOME=/…/…/<jdk install dir>
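If you are unsure where the JDK is installed, one way to locate it (assuming java is on the PATH) is to resolve the real path of the java binary and take the directory above bin:

readlink -f $(which java)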

Naming convention used in the conf files below:

  1. a1 = agent
  2. r1 = source
  3. c1 = channel
  4. k1 = sink

A Flume agent is launched from a conf file; configure a different conf file for each collection scenario:

  1. Collecting data directly from a network port
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = SZ01
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
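The memory channel also accepts optional sizing properties. The values below are the ones used in the Flume user guide's example and make a reasonable starting point; capacity is the maximum number of events held in the channel, transactionCapacity the number of events per transaction:

a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100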

---Startup command
flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/example.conf \
-Dflume.root.logger=INFO,console
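Note that the value of --name must match the agent name used as the property prefix in the conf file (a1 here); with a mismatched name the agent starts but finds no components to run.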

---Send test data to the port
telnet SZ01 44444
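If telnet is not available, nc can send test data just as well; the netcat source acknowledges each received line with "OK" by default:

echo "hello flume" | nc SZ01 44444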

  2. Collecting data newly appended to a file
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /home/bigdata/data.log
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

---Startup command
flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/file-memory-logger.conf \
-Dflume.root.logger=INFO,console

---Append a line to the file and the collected event appears on the console
echo hello flume >> /home/bigdata/data.log
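One caveat with this source: tail -f stops following a file that gets rotated or recreated. If the log is rotated, tail -F (capital F), which re-opens the file by name, is usually the safer command to configure:

a1.sources.r1.command = tail -F /home/bigdata/data.log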

  3. Writing collected logs to files on HDFS
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /home/bigdata/data.log
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://SZ01:8020/flume
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
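By default the HDFS sink writes SequenceFiles. To produce plain-text files instead, and to control how often files are rolled, the sink supports additional properties such as the following (the values are illustrative, not required):

a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.rollInterval = 30
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0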

---Startup command
flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/file-memory-hdfsFile.conf
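After appending a few lines to data.log, the files produced by the sink can be checked directly on HDFS:

hdfs dfs -ls /flume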

  4. Shipping collected data to another server
**Server 1**
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /home/bigdata/data.log
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = SZ01
a1.sinks.k1.port = 44444
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

---Startup command
flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/file-memory-avro.conf

**Server 2**
# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1
# Describe/configure the source
a2.sources.r1.type = avro
a2.sources.r1.bind = SZ01
a2.sources.r1.port = 44444
# Describe the sink
a2.sinks.k1.type = logger
# Use a channel which buffers events in memory
a2.channels.c1.type = memory
# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

---Startup command
flume-ng agent \
--name a2 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/avro-memory-logger.conf \
-Dflume.root.logger=INFO,console
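Start order matters in this last setup: bring up a2 (the avro source) on server 2 first, then a1 on server 1, since the avro sink needs a listening avro source to connect to. With both agents running, a line appended to the tailed file on server 1 should appear on server 2's console:

echo hello avro >> /home/bigdata/data.log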