2017-03-08

I am trying to set up a basic Kafka-Flume-HDFS pipeline. Kafka is up and running, but why doesn't my Flume agent start when I launch it with:

bin/flume-ng agent -n flume1 -c conf -f conf/flume-conf.properties -D flume.root.logger=INFO,console 

The agent does not seem to come up; the only console output I get is:

Info: Sourcing environment configuration script /opt/hadoop/flume/conf/flume-env.sh 
Info: Including Hive libraries found via() for Hive access 
+ exec /opt/jdk1.8.0_111/bin/java -Xmx20m -D -cp '/opt/hadoop/flume/conf:/opt/hadoop/flume/lib/*:/opt/hadoop/flume/lib/:/lib/*' -Djava.library.path= org.apache.flume.node.Application -n flume1 -f conf/flume-conf.properties flume.root.logger=INFO,console 
SLF4J: Class path contains multiple SLF4J bindings. 
SLF4J: Found binding in [jar:file:/opt/hadoop/flume/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: Found binding in [jar:file:/opt/hadoop/flume/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 

The Flume configuration file:

flume1.sources = kafka-source-1 
flume1.channels = hdfs-channel-1 
flume1.sinks = hdfs-sink-1 
flume1.sources.kafka-source-1.type = org.apache.flume.source.kafka.KafkaSource 
flume1.sources.kafka-source-1.zookeeperConnect = localhost:2181 
flume1.sources.kafka-source-1.topic = twitter_topic 
flume1.sources.kafka-source-1.batchSize = 100 
flume1.sources.kafka-source-1.channels = hdfs-channel-1 

flume1.channels.hdfs-channel-1.type = memory 
flume1.sinks.hdfs-sink-1.channel = hdfs-channel-1 
flume1.sinks.hdfs-sink-1.type = hdfs 
flume1.sinks.hdfs-sink-1.hdfs.writeFormat = Text 
flume1.sinks.hdfs-sink-1.hdfs.fileType = DataStream 
flume1.sinks.hdfs-sink-1.hdfs.filePrefix = test-events 
flume1.sinks.hdfs-sink-1.hdfs.useLocalTimeStamp = true 
flume1.sinks.hdfs-sink-1.hdfs.path = /tmp/kafka/twitter_topic/%y-%m-%d 
flume1.sinks.hdfs-sink-1.hdfs.rollCount= 100 
flume1.sinks.hdfs-sink-1.hdfs.rollSize= 0 
flume1.channels.hdfs-channel-1.capacity = 10000 
flume1.channels.hdfs-channel-1.transactionCapacity = 1000 
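One thing to note in the sink settings above: with hdfs.rollSize = 0 and hdfs.rollCount = 100, files can still roll on a timer, because hdfs.rollInterval defaults to 30 seconds in Flume 1.x. A sketch, assuming those defaults, if rolling by event count only is intended:

```properties
# Disable time-based rolling; files then roll only after hdfs.rollCount events.
# (hdfs.rollInterval defaults to 30 seconds in Flume 1.x.)
flume1.sinks.hdfs-sink-1.hdfs.rollInterval = 0
```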

Is this a configuration problem in flume-conf.properties, or am I missing something important?

EDIT

After restarting everything it seems to work better than before, and Flume is actually doing something now (the startup order seems to matter: HDFS, ZooKeeper, Kafka, then Flume and my streaming application). I now get this exception from Flume:

java.lang.NoSuchMethodException: org.apache.hadoop.fs.LocalFileSystem.isFileClosed(org.apache.hadoop.fs.path) 
... 

Answer


For the exception: edit the hdfs.path value to use the full HDFS URI,

flume1.sinks.hdfs-sink-1.hdfs.path = hdfs://namenode_host:port/tmp/kafka/twitter_topic/%y-%m-%d 
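On a standard Apache Hadoop install, the namenode host and port for that URI come from the fs.defaultFS property in core-site.xml. A minimal sketch of looking it up; the sample file contents and port 8020 below are assumptions for illustration, not values from the question:

```shell
# Write a sample core-site.xml (on a real cluster, read
# $HADOOP_HOME/etc/hadoop/core-site.xml instead).
cat > /tmp/core-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode_host:8020</value>
  </property>
</configuration>
EOF
# Extract the URI to prepend to the sink's hdfs.path
grep -o 'hdfs://[^<]*' /tmp/core-site-sample.xml
```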

For the logs: they are not printed to the console because of the space after -D. Remove the space so the option reads -Dflume.root.logger=INFO,console.

Try:

bin/flume-ng agent -n flume1 -c conf -f conf/flume-conf.properties -Dflume.root.logger=INFO,console 

Or access the logs from the $FLUME_HOME/logs/ directory.
