_00017 An Introduction to Flume's Architecture, Plus a Flume Getting-Started Example (Uploading Data to HDFS)
Signature: the greatest distance in the world is neither the ends of the earth nor the farthest sea, but that I stand right in front of you and you cannot sense that I am here
Focus: Hadoop, data analysis and mining
Repost notice: reposting is welcome, but you must credit the original source and author with a hyperlink and keep this copyright notice. Thanks!
QQ group: 214293307 (hoping to learn and improve together with you)
# Preface
I wanted to learn Flume, but most of the articles I found online were far too thin — half an explanation at best — which was infuriating. Learning something new really isn't easy. So I patiently pieced the scattered fragments together, set up one environment after another, and experimented until everything worked. That wasn't easy either, so I hope this helps you if you want to learn Flume...
# Introduction to Flume
Flume is a highly available, highly reliable, distributed system, originally from Cloudera, for collecting, aggregating, and transporting massive amounts of log data. Flume lets you plug custom data senders into a logging system to collect data, and it can perform simple processing on that data and write it to a variety of customizable data receivers.
# System Capabilities
# Log collection
Flume began as a log collection system built by Cloudera and was later donated to Apache (it has since graduated from the Incubator). Flume supports plugging custom data senders into a logging system to collect data.
# Data processing
Flume can perform simple processing on data and write it to a variety of customizable receivers. It can collect data from sources such as console, RPC (Thrift-RPC), text (files), tail (UNIX tail), syslog (the syslog logging system, supporting both TCP and UDP modes), and exec (command execution). A small sketch of one such source follows.
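For a concrete taste, here is a minimal sketch of an exec source in Flume NG's properties syntax; the agent name a1, the source name r1, and the tailed path are all illustrative:
# stream new lines of a log file into Flume as events (a1/r1/c1 are made-up names)
a1.sources = r1
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/messages
a1.sources.r1.channels = c1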
# How Flume works
Flume uses a multi-master design. To keep configuration data consistent, Flume introduced ZooKeeper to store it; ZooKeeper itself guarantees the consistency and high availability of that data and can notify the Flume master nodes whenever the configuration changes. The Flume masters synchronize data among themselves with a gossip protocol. (Note: this master/ZooKeeper design belongs to the old Flume OG; the Flume NG 1.x releases used later in this article drop the masters and ZooKeeper entirely — each agent is configured independently.)
# Flume's Design Goals
# Reliability
When a node fails, logs can be delivered to other nodes without being lost. Flume offers three levels of reliability guarantee, from strongest to weakest: end-to-end (upon receiving data, the agent first writes the event to disk and deletes it only after the transfer succeeds; if the transfer fails, it can resend), store on failure (the strategy scribe also uses: when the receiver crashes, write the data locally and resume sending after it recovers), and best effort (data is sent to the receiver without any acknowledgment).
# 可扩展性
Flume采用了三层架构,分别为agent,collector和storage,每一层均可以水平扩展。其中,所有agent和collector由master统一管理,这使得系统容易监控和维护,且master允许有多个(使用ZooKeeper进行管理和负载均衡),这就避免了单点故障问题。
# 可管理性
所有agent和colletor由master统一管理,这使得系统便于维护。多master情况,Flume利用ZooKeeper和gossip,保证动态配置数据的一致性。用户可以在master上查看各个数据源或者数据流执行情况,且可以对各个数据源配置和动态加载。Flume提供了web和shell script command两种形式对数据流进行管理。
# 功能可扩展性
用户可以根据需要添加自己的agent,collector或者storage。此外,Flume自带了很多组件,包括各种agent(file, syslog等),collector和storage(file,HDFS等)。(这里看下面的Flume架构图你就明白了)
# Flume Architecture
# The basic Flume architecture, as shown in the figure below:
This is the simplest possible flume-ng diagram. Flume NG is built out of individual agents; an agent is like a single cell.
# Flume's freely composable architecture, as shown in the figure below:
Above, two agents are linked together; now let's look at larger combinations...
# A more complex Flume architecture
Doesn't this design feel amazing? You can combine agents however you like, as if stacking building blocks — much the same design philosophy as Storm. Not merely good: ridiculously good...
# A typical Flume architecture diagram
# Anatomy of an agent
Every agent consists of three parts: a source, a channel, and a sink.
In short, the source receives data, the channel carries it, and the sink writes it to the next hop. That's all there is to it. There are many kinds of sources to pick from, many kinds of channels, and just as many kinds of sinks — and all of them can be custom-built. It's wonderfully flexible; you can play with it however you like. A minimal sketch follows.
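To make that concrete, here is a minimal sketch of a complete agent in Flume NG's standard properties format — a netcat source feeding a memory channel drained by a logger sink. All the names (a1, r1, c1, k1) are illustrative:
a1.sources = r1
a1.channels = c1
a1.sinks = k1
# source: read newline-separated events from a TCP port
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1
# channel: buffer events in memory
a1.channels.c1.type = memory
# sink: print each event to Flume's own log
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1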
Also, as shown in the earlier figure, an agent supports selectors: a single source can feed multiple channels and multiple sinks, which is how data fan-out is achieved. A sketch of that follows.
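Here is what that fan-out looks like, reusing the illustrative names from the sketch above — one source replicated into two channels, each drained by its own sink:
a1.sources = r1
a1.channels = c1 c2
a1.sinks = k1 k2
# the replicating selector (Flume's default) copies every event into all listed channels
a1.sources.r1.selector.type = replicating
a1.sources.r1.channels = c1 c2
# each sink drains exactly one channel
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2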
And that's it — flume-ng really is that simple...
From first reading it to actually using it, one day is plenty. What remains is just how you organize your agents — the building-block part...
One more point worth stressing: flume-ng offers a special way to launch (different from agent mode), namely client mode. The client is a special agent whose source is a file, whose channel is memory, and whose sink is Avro. It exists purely for convenience, to ship files directly. See the official user guide for details.
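For instance, assuming an agent with an Avro source is already listening (the host localhost and port 41414 below are illustrative), sending one file with the avro-client looks roughly like this:
bin/flume-ng avro-client -c conf -H localhost -p 41414 -F /path/to/some.log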
By this point you should have a decent feel for flume-ng... oh my god...
# Installing Flume
# Download Flume (with wget)
[root@rs229 flume]# wget -c -p http://mirrors.cnnic.cn/apache/flume/1.5.0/apache-flume-1.5.0-bin.tar.gz
# Install
[root@rs229 flume]# pwd
/usr/local/adsit/yting/apache/flume
[root@rs229 flume]# ll
total 4
drwxr-xr-x 3 root root 4096 Jun 24 17:25 mirrors.cnnic.cn
[root@rs229 flume]# cp mirrors.cnnic.cn/apache/flume/1.5.0/apache-flume-1.5.0-bin.tar.gz .
[root@rs229 flume]# ll
total 25276
-rw-r--r-- 1 root root 25876246 Jun 24 17:27 apache-flume-1.5.0-bin.tar.gz
drwxr-xr-x 3 root root 4096 Jun 24 17:25 mirrors.cnnic.cn
[root@rs229 flume]# tar -zxvf apache-flume-1.5.0-bin.tar.gz
[root@rs229 flume]# ll
total 25280
drwxr-xr-x 7 root root 4096 Jun 24 17:27 apache-flume-1.5.0-bin
-rw-r--r-- 1 root root 25876246 Jun 24 17:27 apache-flume-1.5.0-bin.tar.gz
drwxr-xr-x 3 root root 4096 Jun 24 17:25 mirrors.cnnic.cn
[root@rs229 flume]# rm -rf apache-flume-1.5.0-bin.tar.gz
[root@rs229 flume]# rm -rf mirrors.cnnic.cn/
[root@rs229 flume]# ll
total 4
drwxr-xr-x 7 root root 4096 Jun 24 17:27 apache-flume-1.5.0-bin
[root@rs229 flume]#
# Edit the flume-env.sh configuration file
[root@rs229 conf]# pwd
/usr/local/adsit/yting/apache/flume/apache-flume-1.5.0-bin/conf
[root@rs229 conf]# ll
total 12
-rw-r--r-- 1 501 games 1661 Mar 29 06:15 flume-conf.properties.template
-rw-r--r-- 1 501 games 1197 Mar 29 06:15 flume-env.sh.template
-rw-r--r-- 1 501 games 3063 Mar 29 06:15 log4j.properties
[root@rs229 conf]# cp flume-env.sh.template flume-env.sh
[root@rs229 conf]# vi flume-env.sh
# Enviroment variables can be set here.
JAVA_HOME=/usr/local/adsit/yting/jdk/jdk1.7.0_60
# Edit the flume-site.xml configuration file (this step doesn't actually seem to exist, though the file can apparently be modified too — I'll look into it and come back!)
# Verify that Flume installed successfully
[root@rs229 conf]# ../bin/flume-ng version
Flume 1.5.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 8633220df808c4cd0c13d1cf0320454a94f1ea97
Compiled by hshreedharan on Wed May 7 14:49:18 PDT 2014
From source with checksum a01fe726e4380ba0c9f7a7d222db961f
Output like this means the installation succeeded.
# Flume Getting-Started Example
# Have Flume monitor a directory for log files and upload them to HDFS
# Create an example.conf configuration file in the conf directory
Create a new file: add an example.conf file under the conf directory (any name will do, and for that matter any location will do).
Note: it's best to keep the file name consistent with the name used inside the configuration — e.g. match it to agent1 — so the name speaks for itself.
[root@rs229 conf]# pwd
/usr/local/adsit/yting/apache/flume/apache-flume-1.5.0-bin/conf
[root@rs229 apache-flume-1.5.0-bin]# cat conf/example.conf
# agent1 : yting first flume example
agent1.sources=source1
agent1.sinks=sink1
agent1.channels=channel1
# configure source1: a spooling-directory source that watches a local directory for new files
agent1.sources.source1.type=spooldir
agent1.sources.source1.spoolDir=/usr/local/yting/flume/tdata/tdir1
agent1.sources.source1.channels=channel1
agent1.sources.source1.fileHeader = false
# configure sink1: an HDFS sink that writes events as plain text and rolls the file every 4 seconds
agent1.sinks.sink1.type=hdfs
agent1.sinks.sink1.hdfs.path=hdfs://rs229:9000/yting/flumet
agent1.sinks.sink1.hdfs.fileType=DataStream
agent1.sinks.sink1.hdfs.writeFormat=TEXT
agent1.sinks.sink1.hdfs.rollInterval=4
agent1.sinks.sink1.channel=channel1
# configure channel1: a durable file channel with its checkpoint and data directories
agent1.channels.channel1.type=file
agent1.channels.channel1.checkpointDir=/usr/local/yting/flume/checkpointdir/tcpdir/example_agent1_001
agent1.channels.channel1.dataDirs=/usr/local/yting/flume/datadirs/tddirs/example_agent1_001
Note: change the directory paths above (highlighted in red in the original post) to your own.
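Incidentally, hdfs.rollInterval=4 above tells the sink to roll (close and start a new) HDFS file every 4 seconds. If you would rather roll by file size or event count, here is a sketch of the relevant HDFS sink properties; the values are illustrative, and setting one to 0 disables that roll trigger:
# roll only by size (128 MB) or by event count, never by time
agent1.sinks.sink1.hdfs.rollInterval=0
agent1.sinks.sink1.hdfs.rollSize=134217728
agent1.sinks.sink1.hdfs.rollCount=10000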
# Run Flume with example.conf
# Command-line options explained
-c conf: use conf as the configuration directory
-f conf/example.conf: use conf/example.conf as the configuration file
-n agent1: set the agent name to agent1, which must match the name used in example.conf (if they don't match, the process may just sit there doing nothing; see the Flume section of the troubleshooting notes later in these notes for the analysis, cause, and fix)
-Dflume.root.logger=INFO,console: print INFO-level log messages to the console
[root@rs229 conf]# ./bin/flume-ng agent -c conf/ -f conf/example.conf -n agent1 -Dflume.root.logger=INFO,console
-bash: ./bin/flume-ng: No such file or directory
[root@rs229 conf]# cd ..
[root@rs229 apache-flume-1.5.0-bin]# ./bin/flume-ng agent -c conf/ -f conf/example.conf -n agent1 -Dflume.root.logger=INFO,console
Info: Sourcing environment configuration script /usr/local/adsit/yting/apache/flume/apache-flume-1.5.0-bin/conf/flume-env.sh
Info: Including Hadoop libraries found via (/usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/bin/hadoop) for HDFS access
Info: Excluding /usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Including HBASE libraries found via (/usr/local/adsit/yting/apache/hbase/hbase-0.96.2-hadoop2/bin/hbase) for HBASE access
Info: Excluding /usr/local/adsit/yting/apache/hbase/hbase-0.96.2-hadoop2/bin/../lib/slf4j-api-1.6.4.jar from classpath
Info: Excluding /usr/local/adsit/yting/apache/hbase/hbase-0.96.2-hadoop2/bin/../lib/slf4j-log4j12-1.6.4.jar from classpath
Info: Excluding /usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar from classpath
….capacity-scheduler/*.jar:/conf -Djava.library.path=:/usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0/lib:/usr/local/adsit/yting/apache/hadoop/hadoop-2.2.0//lib org.apache.flume.node.Application -f conf/example.conf -n agent1
2014-06-25 10:37:45,763 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
2014-06-25 10:37:45,772 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:conf/example.conf
2014-06-25 10:37:45,781 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:sink1
2014-06-25 10:37:45,783 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:sink1
2014-06-25 10:37:45,783 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:930)] Added sinks: sink1 Agent: agent1
2014-06-25 10:37:45,783 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:sink1
2014-06-25 10:37:45,783 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:sink1
2014-06-25 10:37:45,783 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:sink1
2014-06-25 10:37:45,784 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:sink1
2014-06-25 10:37:45,809 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:140)] Post-validation flume configuration contains configuration for agents: [agent1]
2014-06-25 10:37:45,809 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:150)] Creating channels
2014-06-25 10:37:45,823 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:40)] Creating instance of channel channel1 type file
2014-06-25 10:37:45,828 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:205)] Created channel channel1
2014-06-25 10:37:45,829 (conf-file-poller-0) [INFO - org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:39)] Creating instance of source source1, type spooldir
2014-06-25 10:37:45,844 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:40)] Creating instance of sink: sink1, type: hdfs
2014-06-25 10:37:46,293 (conf-file-poller-0) [WARN - org.apache.hadoop.util.NativeCodeLoader.<clinit>(NativeCodeLoader.java:62)] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-06-25 10:37:46,572 (conf-file-poller-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink.authenticate(HDFSEventSink.java:555)] Hadoop Security enabled: false
2014-06-25 10:37:46,576 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:119)] Channel channel1 connected to [source1, sink1]
2014-06-25 10:37:46,587 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{source1=EventDrivenSourceRunner: { source:Spool Directory source source1: { spoolDir: /usr/local/yting/flume/tdata/tdir1 } }} sinkRunners:{sink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7205c140 counterGroup:{ name:null counters:{} } }} channels:{channel1=FileChannel channel1 { dataDirs: [/usr/local/yting/flume/datadirs/tddirs/example_agent1_001] }} }
2014-06-25 10:37:46,593 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:145)] Starting Channel channel1
2014-06-25 10:37:46,593 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.FileChannel.start(FileChannel.java:259)] Starting FileChannel channel1 { dataDirs: [/usr/local/yting/flume/datadirs/tddirs/example_agent1_001] }...
2014-06-25 10:37:46,617 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.Log.<init>(Log.java:328)] Encryption is not enabled
2014-06-25 10:37:46,618 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.Log.replay(Log.java:373)] Replay started
2014-06-25 10:37:46,620 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.Log.replay(Log.java:385)] Found NextFileID 0, from []
2014-06-25 10:37:46,661 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.<init>(EventQueueBackingStoreFile.java:91)] Preallocated /usr/local/yting/flume/checkpointdir/tcpdir/example_agent1_001/checkpoint to 8008232 for capacity 1000000
2014-06-25 10:37:46,663 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>(EventQueueBackingStoreFileV3.java:53)] Starting up with /usr/local/yting/flume/checkpointdir/tcpdir/example_agent1_001/checkpoint and /usr/local/yting/flume/checkpointdir/tcpdir/example_agent1_001/checkpoint.meta
2014-06-25 10:37:47,095 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.FlumeEventQueue.<init>(FlumeEventQueue.java:114)] QueueSet population inserting 0 took 0
2014-06-25 10:37:47,100 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.Log.replay(Log.java:423)] Last Checkpoint Wed Jun 25 10:37:46 CST 2014, queue depth = 0
2014-06-25 10:37:47,105 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.Log.doReplay(Log.java:507)] Replaying logs with v2 replay logic
2014-06-25 10:37:47,109 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.ReplayHandler.replayLog(ReplayHandler.java:249)] Starting replay of []
2014-06-25 10:37:47,109 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.ReplayHandler.replayLog(ReplayHandler.java:346)] read: 0, put: 0, take: 0, rollback: 0, commit: 0, skip: 0, eventCount:0
2014-06-25 10:37:47,110 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.FlumeEventQueue.replayComplete(FlumeEventQueue.java:407)] Search Count = 0, Search Time = 0, Copy Count = 0, Copy Time = 0
2014-06-25 10:37:47,119 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.Log.replay(Log.java:470)] Rolling /usr/local/yting/flume/datadirs/tddirs/example_agent1_001
2014-06-25 10:37:47,120 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.Log.roll(Log.java:932)] Roll start /usr/local/yting/flume/datadirs/tddirs/example_agent1_001
2014-06-25 10:37:47,137 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.tools.DirectMemoryUtils.getDefaultDirectMemorySize(DirectMemoryUtils.java:113)] Unable to get maxDirectMemory from VM: NoSuchMethodException: sun.misc.VM.maxDirectMemory(null)
2014-06-25 10:37:47,140 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.tools.DirectMemoryUtils.allocate(DirectMemoryUtils.java:47)] Direct Memory Allocation: Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 18874368, Remaining = 18874368
2014-06-25 10:37:47,195 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.LogFile$Writer.<init>(LogFile.java:214)] Opened /usr/local/yting/flume/datadirs/tddirs/example_agent1_001/log-1
2014-06-25 10:37:47,208 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.Log.roll(Log.java:948)] Roll end
2014-06-25 10:37:47,208 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint(EventQueueBackingStoreFile.java:214)] Start checkpoint for /usr/local/yting/flume/checkpointdir/tcpdir/example_agent1_001/checkpoint, elements to sync = 0
2014-06-25 10:37:47,211 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:239)] Updating checkpoint metadata: logWriteOrderID: 1403663867120, queueSize: 0, queueHead: 0
2014-06-25 10:37:47,235 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:1005)] Updated checkpoint for file: /usr/local/yting/flume/datadirs/tddirs/example_agent1_001/log-1 position: 0 logWriteOrderID: 1403663867120
2014-06-25 10:37:47,235 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.file.FileChannel.start(FileChannel.java:285)] Queue Size after replay: 0 [channel=channel1]
2014-06-25 10:37:47,296 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: CHANNEL, name: channel1: Successfully registered new MBean.
2014-06-25 10:37:47,296 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: CHANNEL, name: channel1 started
2014-06-25 10:37:47,297 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink sink1
2014-06-25 10:37:47,297 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source source1
2014-06-25 10:37:47,298 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.source.SpoolDirectorySource.start(SpoolDirectorySource.java:77)] SpoolDirectorySource source starting with directory: /usr/local/yting/flume/tdata/tdir1
2014-06-25 10:37:47,300 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: SINK, name: sink1: Successfully registered new MBean.
2014-06-25 10:37:47,300 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SINK, name: sink1 started
2014-06-25 10:37:47,330 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: SOURCE, name: source1: Successfully registered new MBean.
2014-06-25 10:37:47,330 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SOURCE, name: source1 started
2014-06-25 10:37:47,331 (pool-6-thread-1) [INFO - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:254)] Spooling Directory Source runner has shutdown.
2014-06-25 10:37:47,831 (pool-6-thread-1) [INFO - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:254)] Spooling Directory Source runner has shutdown.
2014-06-25 10:37:48,332 (pool-6-thread-1) [INFO - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:254)] Spooling Directory Source runner has shutdown.
If you get this far, the agent is running normally; but since no new files have appeared in the monitored directory /usr/local/yting/flume/tdata/tdir1, the message above just keeps repeating.
# Add a new file, yting_flume_example_agent1_00001.log, to the directory /usr/local/yting/flume/tdata/tdir1 that Flume is monitoring
[root@rs229 hadoop-2.2.0]# ./bin/hadoop fs -ls /yting
14/06/25 10:48:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 root supergroup 4278 2014-06-10 18:29 /yting/yarn-daemon.sh
[root@rs229 tdir1]# ll
total 0
[root@rs229 tdir1]# ll -a
total 12
drwxr-xr-x 3 root root 4096 Jun 25 10:37 .
drwxr-xr-x 3 root root 4096 Jun 24 22:25 ..
drwxr-xr-x 2 root root 4096 Jun 25 09:48 .flumespool (hidden directory)
[root@rs229 tdir1]# vi yting_flume_example_agent1_00001.log
The you smile until forever .....................
[root@rs229 tdir1]# ll
total 4
-rw-r--r-- 1 root root 50 Jun 25 10:51 yting_flume_example_agent1_00001.log.COMPLETED
# The file name now ends in .COMPLETED
This means yting_flume_example_agent1_00001.log has been processed by Flume; after processing it was renamed to yting_flume_example_agent1_00001.log.COMPLETED. Now look at the Flume console — its output should have changed.
# Check the new messages in the Flume console
2014-06-25 10:51:00,530 (pool-6-thread-1) [INFO - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:254)] Spooling Directory Source runner has shutdown.
2014-06-25 10:51:01,434 (pool-6-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:332)] Preparing to move file /usr/local/yting/flume/tdata/tdir1/yting_flume_example_agent1_00001.log to /usr/local/yting/flume/tdata/tdir1/yting_flume_example_agent1_00001.log.COMPLETED
2014-06-25 10:51:02,436 (pool-6-thread-1) [INFO - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:254)] Spooling Directory Source runner has shutdown.
2014-06-25 10:51:02,473 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:261)] Creating hdfs://rs229:9000/yting/flumet/FlumeData.1403664662360.tmp
2014-06-25 10:51:07,440 (pool-6-thread-1) [INFO - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:254)] Spooling Directory Source runner has shutdown.
2014-06-25 10:51:07,519 (hdfs-sink1-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:409)] Closing hdfs://rs229:9000/yting/flumet/FlumeData.1403664662360.tmp
2014-06-25 10:51:07,521 (hdfs-sink1-call-runner-3) [INFO - org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:339)] Close tries incremented
2014-06-25 10:51:07,549 (hdfs-sink1-call-runner-4) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:669)] Renaming hdfs://rs229:9000/yting/flumet/FlumeData.1403664662360.tmp to hdfs://rs229:9000/yting/flumet/FlumeData.1403664662360
2014-06-25 10:51:07,557 (hdfs-sink1-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink$1.run(HDFSEventSink.java:402)] Writer callback called.
2014-06-25 10:51:16,448 (pool-6-thread-1) [INFO - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:254)] Spooling Directory Source runner has shutdown.
2014-06-25 10:51:16,626 (Log-BackgroundWorker-channel1) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint(EventQueueBackingStoreFile.java:214)] Start checkpoint for /usr/local/yting/flume/checkpointdir/tcpdir/example_agent1_001/checkpoint, elements to sync = 1
2014-06-25 10:51:16,628 (Log-BackgroundWorker-channel1) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:239)] Updating checkpoint metadata: logWriteOrderID: 1403663867125, queueSize: 0, queueHead: 0
2014-06-25 10:51:16,630 (Log-BackgroundWorker-channel1) [INFO - org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:1005)] Updated checkpoint for file: /usr/local/yting/flume/datadirs/tddirs/example_agent1_001/log-1 position: 206 logWriteOrderID: 1403663867125
Note: for an explanation of these messages, see the walkthrough of the whole process below.
# Check whether Flume uploaded the data to HDFS
[root@rs229 tdir1]# hadoop fs -ls /yting
Found 2 items
drwxr-xr-x - root supergroup 0 2014-06-25 10:51 /yting/flumet
-rw-r--r-- 3 root supergroup 4278 2014-06-10 18:29 /yting/yarn-daemon.sh
[root@rs229 tdir1]# hadoop fs -ls /yting/flumet
Found 1 items
-rw-r--r-- 3 root supergroup 50 2014-06-25 10:51 /yting/flumet/FlumeData.1403664662360
[root@rs229 tdir1]# hadoop fs -cat /yting/flumet/FlumeData.1403664662360
The you smile until forever ..................... (the log content has indeed been uploaded — OK!)
[root@rs229 tdir1]#
# Walkthrough of the whole process
From the Flume console logs we can see what happens when a new file is created and saved in the monitored directory: Flume processes it and renames it by appending .COMPLETED to the original name; it then uploads the file's contents to HDFS as a temporary file (filename.tmp); once the upload succeeds it renames the temporary file on HDFS, stripping the .tmp suffix; and finally Flume records the whole operation in its own logs.
# Things for beginners to watch out for
# Naming the configuration file
# The agent1 inside the configuration file must match the -n argument passed to flume-ng
# Ideally, name the configuration file after the agent it defines, so you won't mistype the -n argument
# One last gripe about this CSDN editor: it is truly awful. I keep my notes in a Word document, and pasting them here produces all sorts of inexplicable junk that takes forever to clean up...
Date: 2014-06-25 11:08:21