Error running MapReduce from Eclipse when Hadoop is not on the same host

I have seen several posts discussing this topic, on various web pages and here on Stack Overflow, but I don't understand what the real problem is or what I can do to solve it. The truth is that I am very new to Hadoop.

So, I am using the Cloudera Quickstart 5.4.2 VM. I developed the typical WordCount example in the VM's embedded Eclipse, and everything runs fine there.

Now I am trying to run the same code from another Eclipse, outside the VM, and I get a ConnectException.

There is no problem connecting to the VM, and my "output" directory gets created, but the job fails before the map()/reduce() tasks run.

To be precise, here is my setup:

Host:

  • CentOS 6.6 x64 with Oracle JDK 1.7.0_67
  • Hadoop 2.6.0 (Apache) downloaded and extracted to /opt/hadoop; $HADOOP_HOME is set and $HADOOP_HOME/bin has been added to $PATH
  • Eclipse Luna and Maven 3.0.4
  • The command hdfs dfs -ls hdfs://quickstart:8020/user/cloudera lists all the files I put in that path (a programmatic version of this check is sketched right after these lists)

VM:

  • Cloudera Quickstart 5.4.2 (Hadoop 2.6.0-cdh5.4.2)
  • HDFS running fine (checked in Cloudera Manager)
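
For reference, that last check can also be done from Java with the same client libraries the job uses. A minimal sketch, assuming the quickstart hostname resolves from the host (as it does for the hdfs command above) and that hadoop-client 2.6.0 is on the classpath:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsListCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Same check as "hdfs dfs -ls hdfs://quickstart:8020/user/cloudera".
            FileSystem fs = FileSystem.get(URI.create("hdfs://quickstart:8020"), conf);
            for (FileStatus status : fs.listStatus(new Path("/user/cloudera"))) {
                System.out.println(status.getPath());
            }
            fs.close();
        }
    }

Note that listing a directory only talks to the namenode; actually reading a file also has to reach a datanode, which matters for the failure shown below.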

What I have seen on the internet is talk of a "Hadoop client configuration" in the file mapred-site.xml. That file does not exist on my host. Should I create it? Where?
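
For what it's worth, the settings such a client-side file would carry can also be set directly on the job's Configuration. A minimal sketch, where the hostnames are assumptions about the Quickstart VM rather than verified settings:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class ClientConfig {
        // Builds a job configured the way a client-side core-site.xml /
        // mapred-site.xml would configure it.
        static Job buildJob() throws Exception {
            Configuration conf = new Configuration();
            // core-site.xml equivalent: where the namenode lives.
            conf.set("fs.defaultFS", "hdfs://quickstart:8020");
            // mapred-site.xml equivalent: submit to YARN rather than run locally.
            conf.set("mapreduce.framework.name", "yarn");
            // Assumption: the ResourceManager runs on the Quickstart VM.
            conf.set("yarn.resourcemanager.hostname", "quickstart");
            return Job.getInstance(conf, "wordcount");
        }
    }

The log below shows job_local... IDs and LocalJobRunner, which is what the client falls back to when mapreduce.framework.name is not set to yarn.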

The error I get is:

15/08/28 15:21:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
15/08/28 15:21:10 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id 
15/08/28 15:21:10 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId= 
15/08/28 15:21:10 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 
15/08/28 15:21:10 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String). 
15/08/28 15:21:10 INFO input.FileInputFormat: Total input paths to process : 1 
15/08/28 15:21:10 INFO mapreduce.JobSubmitter: number of splits:1 
15/08/28 15:21:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local13835919_0001 
15/08/28 15:21:11 INFO mapreduce.Job: The url to track the job: http://localhost:8080/ 
15/08/28 15:21:11 INFO mapreduce.Job: Running job: job_local13835919_0001 
15/08/28 15:21:11 INFO mapred.LocalJobRunner: OutputCommitter set in config null 
15/08/28 15:21:11 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1 
15/08/28 15:21:11 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 
15/08/28 15:21:11 INFO mapred.LocalJobRunner: Waiting for map tasks 
15/08/28 15:21:11 INFO mapred.LocalJobRunner: Starting task: attempt_local13835919_0001_m_000000_0 
15/08/28 15:21:11 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1 
15/08/28 15:21:11 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 
15/08/28 15:21:11 INFO mapred.MapTask: Processing split: hdfs://192.168.111.128:8020/user/cloudera/input/download/pg4363.txt:0+408781 
15/08/28 15:21:11 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584) 
15/08/28 15:21:11 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100 
15/08/28 15:21:11 INFO mapred.MapTask: soft limit at 83886080 
15/08/28 15:21:11 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600 
15/08/28 15:21:11 INFO mapred.MapTask: kvstart = 26214396; length = 6553600 
15/08/28 15:21:11 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 
15/08/28 15:21:11 WARN hdfs.BlockReaderFactory: I/O error constructing remote block reader. 
java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530) 
    at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3108) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:354) 
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:621) 
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:847) 
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:897) 
    at java.io.DataInputStream.read(DataInputStream.java:100) 
    at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180) 
    at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216) 
    at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174) 
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:143) 
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:183) 
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556) 
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80) 
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91) 
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 
15/08/28 15:21:11 WARN hdfs.DFSClient: Failed to connect to /127.0.0.1:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection refused 
java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530) 
    at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3108) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:354) 
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:621) 
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:847) 
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:897) 
    at java.io.DataInputStream.read(DataInputStream.java:100) 
    at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180) 
    at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216) 
    at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174) 
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:143) 
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:183) 
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556) 
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80) 
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91) 
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 
15/08/28 15:21:11 INFO hdfs.DFSClient: Could not obtain BP-286282631-127.0.0.1-1433865208026:blk_1073742256_1432 from any node: java.io.IOException: No live nodes contain block BP-286282631-127.0.0.1-1433865208026:blk_1073742256_1432 after checking nodes = [DatanodeInfoWithStorage[127.0.0.1:50010,DS-3299869f-57b5-40b1-9917-9d69cd32f1d2,DISK]], ignoredNodes = null No live nodes contain current block Block locations: DatanodeInfoWithStorage[127.0.0.1:50010,DS-3299869f-57b5-40b1-9917-9d69cd32f1d2,DISK] Dead nodes: DatanodeInfoWithStorage[127.0.0.1:50010,DS-3299869f-57b5-40b1-9917-9d69cd32f1d2,DISK]. Will get new block locations from namenode and retry... 
15/08/28 15:21:11 WARN hdfs.DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 1838.582183063173 msec. 
15/08/28 15:21:12 INFO mapreduce.Job: Job job_local13835919_0001 running in uber mode : false 
15/08/28 15:21:12 INFO mapreduce.Job: map 0% reduce 0% 
15/08/28 15:21:13 WARN hdfs.BlockReaderFactory: I/O error constructing remote block reader. 
java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530) 
    at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3108) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:354) 
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:621) 
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:847) 
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:897) 
    at java.io.DataInputStream.read(DataInputStream.java:100) 
    at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180) 
    at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216) 
    at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174) 
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:143) 
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:183) 
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556) 
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80) 
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91) 
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 
15/08/28 15:21:13 WARN hdfs.DFSClient: Failed to connect to /127.0.0.1:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection refused 
java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530) 
    at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3108) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:354) 
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:621) 
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:847) 
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:897) 
    at java.io.DataInputStream.read(DataInputStream.java:100) 
    at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180) 
    at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216) 
    at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174) 
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:143) 
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:183) 
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556) 
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80) 
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91) 
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 
15/08/28 15:21:13 INFO hdfs.DFSClient: Could not obtain BP-286282631-127.0.0.1-1433865208026:blk_1073742256_1432 from any node: java.io.IOException: No live nodes contain block BP-286282631-127.0.0.1-1433865208026:blk_1073742256_1432 after checking nodes = [DatanodeInfoWithStorage[127.0.0.1:50010,DS-3299869f-57b5-40b1-9917-9d69cd32f1d2,DISK]], ignoredNodes = null No live nodes contain current block Block locations: DatanodeInfoWithStorage[127.0.0.1:50010,DS-3299869f-57b5-40b1-9917-9d69cd32f1d2,DISK] Dead nodes: DatanodeInfoWithStorage[127.0.0.1:50010,DS-3299869f-57b5-40b1-9917-9d69cd32f1d2,DISK]. Will get new block locations from namenode and retry... 
15/08/28 15:21:13 WARN hdfs.DFSClient: DFS chooseDataNode: got # 2 IOException, will wait for 4651.463132320157 msec. 

I don't understand why anything is trying to connect to 127.0.0.1:50010, since my Hadoop cluster is not on this host but on the Quickstart VM (192.168.x.y, and every /etc/hosts file is up to date). I suppose I have to configure something, but I don't know what, or where...

I also tried the same thing on Windows 7 x64 (winutils configured, etc.) and I get the same exception.

Thanks!

Answer


OK, it seems to be a configuration issue in the Cloudera Quickstart VM: the very same code works fine against the Hortonworks Sandbox.

The difference is:

  • the Hortonworks Sandbox is configured to advertise its nodes using their "real" IPs;
  • the Cloudera VM is configured to advertise its nodes (here, the datanode) as "localhost".

So when I try to run the MR job from Windows against the Cloudera virtual cluster, the configuration returned by the YARN resource manager points at 127.0.0.1 instead of the real IP, and of course I have no datanode on my local machine.
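
A client-side workaround commonly suggested for this kind of NAT/localhost setup (a sketch under that assumption, not something I verified on this exact VM) is to have the HDFS client contact datanodes by hostname, with that hostname mapped to the VM's real IP in the client's /etc/hosts, instead of using the 127.0.0.1 address the namenode hands back:

    import org.apache.hadoop.conf.Configuration;

    public class DatanodeByHostname {
        static Configuration clientConf() {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://quickstart:8020");
            // Connect to datanodes by the hostname they registered with
            // (resolvable through the client's /etc/hosts) instead of the
            // 127.0.0.1 address the namenode advertises.
            conf.setBoolean("dfs.client.use.datanode.hostname", true);
            return conf;
        }
    }

The same property can also be set in a client-side hdfs-site.xml.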


I have been trying to fix this by modifying the /etc/hosts file in the Cloudera VM, but that did not solve the problem.


Restart the VM after making the change.
