
I have tested Hadoop Pig on a small cluster. How do I use a compiled C program with Hadoop Pig streaming?

I have successfully used Pig streaming with Perl, Python, shell scripts, and even jars, but not with a C binary!

As a test, I just built a simple Hello World program in C, compiled it with g++ (up to date) under Ubuntu 11.04, and ran it with ./test. The program runs perfectly in the OS.
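The exact source isn't shown here, so the following is only an illustrative sketch of what such a test program might look like:

/* Illustrative sketch only -- the original test program is not shown
 * in the question. Presumably compiled with something like:
 *   g++ -o test test.c
 */
#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}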

But when I try it through Pig streaming, it always fails!

Here is the Pig script:

a = load 'test.txt'; 
define p `./test` ship('/home/clouduser/test'); 
b = stream a through p; 
dump b; 

The test.txt file contains just a single space,

and I have successfully tested the same setup with Perl, Python, shell scripts, and Java.

grunt> a = load 'test.txt'; 
grunt> define p `./1.sh` ship('/home/clouduser/1.sh'); 
grunt> b = stream a through p; 
grunt> dump b 
2011-09-08 23:53:33,940 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: STREAMING 
2011-09-08 23:53:33,940 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - pig.usenewlogicalplan is set to true. New logical plan will be used. 
2011-09-08 23:53:34,017 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - (Name: b: Store(hdfs://cloudlab-namenode/tmp/temp-502536453/tmp-1972014919:org.apache.pig.impl.io.InterStorage) - scope-2 Operator Key: scope-2) 
2011-09-08 23:53:34,026 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false 
2011-09-08 23:53:34,048 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1 
2011-09-08 23:53:34,048 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1 
2011-09-08 23:53:34,111 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job 
2011-09-08 23:53:34,126 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3 
2011-09-08 23:53:35,938 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job 
2011-09-08 23:53:35,994 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission. 
2011-09-08 23:53:36,312 [Thread-9] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1 
2011-09-08 23:53:36,313 [Thread-9] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1 
2011-09-08 23:53:36,324 [Thread-9] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
2011-09-08 23:53:36,324 [Thread-9] WARN org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library not loaded 
2011-09-08 23:53:36,326 [Thread-9] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1 
2011-09-08 23:53:36,494 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete 
2011-09-08 23:53:37,101 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_201109051400_0283 
2011-09-08 23:53:37,101 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://172.19.1.4:50030/jobdetails.jsp?jobid=job_201109051400_0283 
2011-09-08 23:54:01,755 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_201109051400_0283 has failed! Stop running all dependent jobs 
2011-09-08 23:54:01,762 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete 
2011-09-08 23:54:01,774 [main] ERROR org.apache.pig.tools.pigstats.PigStats - ERROR 2997: Unable to recreate exception from backed error: org.apache.pig.backend.executionengine.ExecException: ERROR 2055: Received Error while processing the map plan: './1.sh ' failed with exit status: 127 
2011-09-08 23:54:01,774 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed! 
2011-09-08 23:54:01,776 [main] INFO org.apache.pig.tools.pigstats.PigStats - Script Statistics: 

HadoopVersion PigVersion  UserId StartedAt  FinishedAt  Features 
0.20.2-cdh3u1 0.8.1-cdh3u1 clouduser  2011-09-08 23:53:34  2011-09-08 23:54:01  STREAMING 

Failed! 

Failed Jobs: 
JobId Alias Feature Message Outputs 
job_201109051400_0283 a,b  STREAMING,MAP_ONLY  Message: Job failed! Error - NA hdfs://cloudlab-namenode/tmp/temp-502536453/tmp-1972014919, 

Input(s): 
Failed to read data from "hdfs://cloudlab-namenode/user/clouduser/test.txt" 

Output(s): 
Failed to produce result in "hdfs://cloudlab-namenode/tmp/temp-502536453/tmp-1972014919" 

Counters: 
Total records written : 0 
Total bytes written : 0 
Spillable Memory Manager spill count : 0 
Total bags proactively spilled: 0 
Total records proactively spilled: 0 

Job DAG: 
job_201109051400_0283 


2011-09-08 23:54:01,776 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed! 
2011-09-08 23:54:01,793 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2997: Unable to recreate exception from backed error: org.apache.pig.backend.executionengine.ExecException: ERROR 2055: Received Error while processing the map plan: './1.sh ' failed with exit status: 127 
Details at logfile: /home/clouduser/pig_1315540364239.log 

I even tried wrapping this C binary in a shell script, running that script, and shipping both the shell script and the C binary, but it still failed!

Does anyone have any ideas?

StackOverflow doesn't seem to let me post the original C code, but the code itself runs fine.

Answer


From the given log: Failed to read data from "hdfs://cloudlab-namenode/user/clouduser/test.txt"

Please make sure the file test.txt exists on the cluster at the path "hdfs://cloudlab-namenode/user/clouduser/test.txt".

From the log line: 2011-09-08 23:54:01,793 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2997: Unable to recreate exception from backed error: org.apache.pig.backend.executionengine.ExecException: ERROR 2055: Received Error while processing the map plan: './1.sh ' failed with exit status: 127

Check whether ./1.sh can actually be executed; exit status 127 usually means the shell could not find or run the command.
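If the script is found and executable, it is also worth confirming that the binary it launches behaves like a streaming executable, i.e. reads records from stdin and writes results to stdout. A minimal sketch of such a filter in C (purely illustrative, not the program from the question) might be:

/* Minimal streaming-style filter: copies stdin to stdout unchanged.
 * Purely illustrative; not the program from the question. */
#include <stdio.h>

int main(void)
{
    int c;
    while ((c = getchar()) != EOF) {
        putchar(c);
    }
    return 0;
}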