2012-12-19

Problem running a simple rhadoop job - broken pipe error

I have a hadoop cluster set up with the rmr2 and rhdfs packages installed. I have been able to run some sample MR jobs through the CLI and via Rscripts. For example, this works:

#!/usr/bin/env Rscript 
require('rmr2') 

small.ints = to.dfs(1:1000) 
out = mapreduce(input = small.ints, map = function(k, v) keyval(v, v^2)) 
df = as.data.frame(from.dfs(out)) 
colnames(df) = c('n', 'n2') 
str(df) 

Final output:

DEPRECATED: Use of this script to execute hdfs command is deprecated. 
Instead use the hdfs command for it. 

DEPRECATED: Use of this script to execute hdfs command is deprecated. 
Instead use the hdfs command for it. 

'data.frame': 1000 obs. of 2 variables: 
$ n : int 1 2 3 4 5 6 7 8 9 10 ... 
$ n2: num 1 4 9 16 25 36 49 64 81 100 ... 

Now I want to move on to the next step of writing my own MR job. I have a file (`/user/michael/batsmall.csv`) with some batting statistics:

aardsda01,2004,1,SFN,NL,11,11,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,11 
aardsda01,2006,1,CHN,NL,45,43,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,45 
aardsda01,2007,1,CHA,AL,25,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2 
aardsda01,2008,1,BOS,AL,47,5,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,5 
aardsda01,2009,1,SEA,AL,73,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 
aardsda01,2010,1,SEA,AL,53,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 

(batsmall.csv is an excerpt from a much larger file, but really I just want to prove that I can read and analyze a file from HDFS)

Here is the script I have:

#!/usr/bin/env Rscript 

require('rmr2'); 
require('rhdfs'); 

hdfs.init() 
hdfs.rmr("/user/michael/rMean") 

findMean = function (input, output) { 
    mapreduce(input = input, 
      output = output, 
      input.format = 'csv', 
      map = function(k, fields) { 
       myField <- fields[[5]] 
       keyval(fields[[0]], myField) 
      }, 
      reduce = function(key, vv) { 
       keyval(key, mean(as.numeric(vv))) 
      } 
    ) 
} 

from.dfs(findMean("/home/michael/r/Batting.csv", "/home/michael/r/rMean")) 
print(hdfs.read.text.file("/user/michael/batsmall.csv")) 

This fails every time, and looking at the Hadoop logs it appears to be a broken pipe error. I can't figure out what is causing it. Since other jobs work, I assume the problem is with my script rather than my configuration, but I can't work it out. I admit to being an R novice and fairly new to hadoop.

Here is the job output:

[[email protected] r]$ ./rtest.r 
Loading required package: rmr2 
Loading required package: Rcpp 
Loading required package: RJSONIO 
Loading required package: methods 
Loading required package: digest 
Loading required package: functional 
Loading required package: stringr 
Loading required package: plyr 
Loading required package: rhdfs 
Loading required package: rJava 

HADOOP_CMD=/usr/bin/hadoop 

Be sure to run hdfs.init() 
Deleted hdfs://hadoop01.dev.terapeak.com/user/michael/rMean 
[1] TRUE 
packageJobJar: [/tmp/Rtmp2XnCL3/rmr-local-env55d1533355d7, /tmp/Rtmp2XnCL3/rmr-global-env55d119877dd3, /tmp/Rtmp2XnCL3/rmr-streaming-map55d13c0228b7, /tmp/Rtmp2XnCL3/rmr-streaming-reduce55d150f7ffa8, /tmp/hadoop-michael/hadoop-unjar5464463427878425265/] [] /tmp/streamjob4293464845863138032.jar tmpDir=null 
12/12/19 11:09:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
12/12/19 11:09:41 INFO mapred.FileInputFormat: Total input paths to process : 1 
12/12/19 11:09:42 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-michael/mapred/local] 
12/12/19 11:09:42 INFO streaming.StreamJob: Running job: job_201212061720_0039 
12/12/19 11:09:42 INFO streaming.StreamJob: To kill this job, run: 
12/12/19 11:09:42 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039 
12/12/19 11:09:42 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039 
12/12/19 11:09:43 INFO streaming.StreamJob: map 0% reduce 0% 
12/12/19 11:10:15 INFO streaming.StreamJob: map 100% reduce 100% 
12/12/19 11:10:15 INFO streaming.StreamJob: To kill this job, run: 
12/12/19 11:10:15 INFO streaming.StreamJob: /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=hadoop01.dev.terapeak.com:8021 -kill job_201212061720_0039 
12/12/19 11:10:15 INFO streaming.StreamJob: Tracking URL: http://hadoop01.dev.terapeak.com:50030/jobdetails.jsp?jobid=job_201212061720_0039 
12/12/19 11:10:15 ERROR streaming.StreamJob: Job not successful. Error: NA 
12/12/19 11:10:15 INFO streaming.StreamJob: killJob... 
Streaming Command Failed! 
Error in mr(map = map, reduce = reduce, combine = combine, in.folder = if (is.list(input)) { : 
    hadoop streaming failed with error code 1 
Calls: findMean -> mapreduce -> mr 
Execution halted 

And a sample exception from the jobtracker:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362) 
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:572) 
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:136) 
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57) 
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34) 
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:393) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327) 
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:396) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332) 
    at org.apache.hadoop.mapred.Child.main(Child.java:262) 

Answer


You need to check the stderr of the failed attempts; the jobtracker web UI is the easiest way to get to it. An educated guess is that `fields` is a data frame and you are accessing it like a list, which is possible but unusual. Errors could derive indirectly from that.
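To illustrate the guess above (a local sketch, not tested on a cluster; the sample row is taken from batsmall.csv): a data frame can be indexed column by column like a list, but R indexing is 1-based, so `fields[[0]]` as used in the question's map function is invalid — the first column is `fields[[1]]`.

```r
# A data.frame can be accessed like a list of columns, but indices start at 1.
fields <- read.csv(text = "aardsda01,2004,1,SFN,NL,11,11",
                   header = FALSE, stringsAsFactors = FALSE)

player <- fields[[1]]  # first column; fields[[0]] would be an error in R
games  <- fields[[6]]  # sixth column
print(player)
print(games)
```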

Also, we have a debugging document on the RHadoop wiki with more suggestions.

Finally, we have a dedicated RHadoop google group where you can interact with a large number of enthusiastic users. Or you can be on your own here on SO.


Not sure about the "on your own" remark, which seems like an odd comment to me, but you did get it right: I wasn't accessing `fields` correctly. I changed the input.format to text, did the split myself, and everything worked. I will have to learn more about the csv input.format. Thanks! – Ilion
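A minimal sketch of the fix this comment describes (hypothetical; the commented-out `mapreduce` fragment would still need rmr2 and a running cluster): with `input.format = "text"` each record reaches the map function as a raw character line, and the fields are recovered with `strsplit`.

```r
# With input.format = "text", the map function receives each record as a
# plain character line; split it on commas and pick columns by 1-based index.
line  <- "aardsda01,2004,1,SFN,NL,11,11"
parts <- strsplit(line, ",")[[1]]

player <- parts[1]              # player id column
games  <- as.numeric(parts[6])  # games column

# Inside the mapreduce() call this would become, roughly:
#   map = function(k, line) {
#     parts <- strsplit(line, ",")[[1]]
#     keyval(parts[1], as.numeric(parts[6]))
#   }
print(player)
print(games)
```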


@Ilion: It doesn't seem like an odd comment to me. Antonio was just saying that there is a dedicated venue for questions about 'rmr' and 'RHadoop', that the question might be more productively posted there, and that you may get more useful hints by searching that group's archive. –


Ditto Dwin,謝謝 – piccolbo