Hadoop Streaming - external mapper script - file not found

I am trying to run a MapReduce job on Hadoop using Streaming. I have two Ruby scripts, wcmapper.rb and wcreducer.rb, and I am running the job as follows:
hadoop jar hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar -file wcmapper.rb -mapper wcmapper.rb -file wcreducer.rb -reducer wcreducer.rb -input test.txt -output output
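For context, a Streaming mapper simply reads records from stdin and writes tab-separated key/value pairs to stdout. The actual contents of wcmapper.rb are not shown in the question; a minimal word-count mapper sketch (hypothetical, with the per-line logic pulled into a method) might look like:

```ruby
#!/usr/bin/env ruby
# Hypothetical sketch of wcmapper.rb: a word-count mapper for Hadoop
# Streaming. The shebang line above matters, since the TaskTracker
# executes the script directly.

# Split one input line into words and emit "word<TAB>1" pairs.
def map_line(line)
  line.split.map { |word| "#{word}\t1" }
end

if __FILE__ == $0
  STDIN.each_line do |line|
    map_line(line).each { |pair| puts pair }
  end
end
```
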
This results in the following error messages on the console:
13/11/26 12:54:07 INFO streaming.StreamJob: map 0% reduce 0%
13/11/26 12:54:36 INFO streaming.StreamJob: map 100% reduce 100%
13/11/26 12:54:36 INFO streaming.StreamJob: To kill this job, run:
13/11/26 12:54:36 INFO streaming.StreamJob: /home/paul/bin/hadoop-1.2.1/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201311261104_0009
13/11/26 12:54:36 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201311261104_0009
13/11/26 12:54:36 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201311261104_0009_m_000000
13/11/26 12:54:36 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Looking at any of the failed task attempts shows:
java.io.IOException: Cannot run program "/var/lib/hadoop/mapred/local/taskTracker/paul/jobcache/job_201311261104_0010/attempt_201311261104_0010_m_000001_3/work/./wcmapper.rb": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1042)
As I understand it, Hadoop needs to copy the mapper and reducer scripts to all the nodes, and I believe that is what the -file argument is for. However, the scripts do not appear to be copied to the location where Hadoop expects to find them. The console does suggest they are being packaged, I think:
packageJobJar: [wcmapper.rb, wcreducer.rb, /var/lib/hadoop/hadoop-unjar3547645655567272034/] [] /tmp/streamjob3978604690657430710.jar tmpDir=null
I have also tried the following:
hadoop jar hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar -files wcmapper.rb,wcreducer.rb -mapper wcmapper.rb -reducer wcreducer.rb -input test.txt -output output
but this gives the same error.
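The reducer follows the same stdin/stdout contract: Streaming delivers the mapper output sorted by key, one "word<TAB>count" line at a time. Again hypothetically, since wcreducer.rb is not shown, a matching word-count reducer sketch could be:

```ruby
#!/usr/bin/env ruby
# Hypothetical sketch of wcreducer.rb: sums the counts for each word
# from the sorted "word<TAB>1" lines produced by the mapper.

# Aggregate an enumerable of "word<TAB>count" lines into totals.
def reduce_lines(lines)
  counts = Hash.new(0)
  lines.each do |line|
    word, count = line.chomp.split("\t")
    counts[word] += count.to_i
  end
  counts.map { |word, total| "#{word}\t#{total}" }
end

if __FILE__ == $0
  reduce_lines(STDIN).each { |line| puts line }
end
```
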
Can anyone tell me what the problem is?
Or where I could look to diagnose it further?
Many thanks
Paul