
I am trying to run the MapReduce matrix multiplication example (source code) from the following link:

http://www.norstad.org/matrix-multiply/index.html

I have set up Hadoop in pseudo-distributed mode and configured it using this tutorial:

http://hadoop-tutorial.blogspot.com/2010/11/running-hadoop-in-pseudo-distributed.html?showComment=1321528406255#c3661776111033973764

When I run my jar file, I get the following error:

Identity test 
11/11/30 10:37:34 INFO input.FileInputFormat: Total input paths to process : 2 
11/11/30 10:37:34 INFO mapred.JobClient: Running job: job_201111291041_0010 
11/11/30 10:37:35 INFO mapred.JobClient: map 0% reduce 0% 
11/11/30 10:37:44 INFO mapred.JobClient: map 100% reduce 0% 
11/11/30 10:37:56 INFO mapred.JobClient: map 100% reduce 100% 
11/11/30 10:37:58 INFO mapred.JobClient: Job complete: job_201111291041_0010 
11/11/30 10:37:58 INFO mapred.JobClient: Counters: 17 
11/11/30 10:37:58 INFO mapred.JobClient: Job Counters 
11/11/30 10:37:58 INFO mapred.JobClient:  Launched reduce tasks=1 
11/11/30 10:37:58 INFO mapred.JobClient:  Launched map tasks=2 
11/11/30 10:37:58 INFO mapred.JobClient:  Data-local map tasks=2 
11/11/30 10:37:58 INFO mapred.JobClient: FileSystemCounters 
11/11/30 10:37:58 INFO mapred.JobClient:  FILE_BYTES_READ=114 
11/11/30 10:37:58 INFO mapred.JobClient:  HDFS_BYTES_READ=248 
11/11/30 10:37:58 INFO mapred.JobClient:  FILE_BYTES_WRITTEN=298 
11/11/30 10:37:58 INFO mapred.JobClient:  HDFS_BYTES_WRITTEN=124 
11/11/30 10:37:58 INFO mapred.JobClient: Map-Reduce Framework 
11/11/30 10:37:58 INFO mapred.JobClient:  Reduce input groups=2 
11/11/30 10:37:58 INFO mapred.JobClient:  Combine output records=0 
11/11/30 10:37:58 INFO mapred.JobClient:  Map input records=4 
11/11/30 10:37:58 INFO mapred.JobClient:  Reduce shuffle bytes=60 
11/11/30 10:37:58 INFO mapred.JobClient:  Reduce output records=2 
11/11/30 10:37:58 INFO mapred.JobClient:  Spilled Records=8 
11/11/30 10:37:58 INFO mapred.JobClient:  Map output bytes=100 
11/11/30 10:37:58 INFO mapred.JobClient:  Combine input records=0 
11/11/30 10:37:58 INFO mapred.JobClient:  Map output records=4 
11/11/30 10:37:58 INFO mapred.JobClient:  Reduce input records=4 
11/11/30 10:37:58 INFO input.FileInputFormat: Total input paths to process : 1 
11/11/30 10:37:59 INFO mapred.JobClient: Running job: job_201111291041_0011 
11/11/30 10:38:00 INFO mapred.JobClient: map 0% reduce 0% 
11/11/30 10:38:09 INFO mapred.JobClient: map 100% reduce 0% 
11/11/30 10:38:21 INFO mapred.JobClient: map 100% reduce 100% 
11/11/30 10:38:23 INFO mapred.JobClient: Job complete: job_201111291041_0011 
11/11/30 10:38:23 INFO mapred.JobClient: Counters: 17 
11/11/30 10:38:23 INFO mapred.JobClient: Job Counters 
11/11/30 10:38:23 INFO mapred.JobClient:  Launched reduce tasks=1 
11/11/30 10:38:23 INFO mapred.JobClient:  Launched map tasks=1 
11/11/30 10:38:23 INFO mapred.JobClient:  Data-local map tasks=1 
11/11/30 10:38:23 INFO mapred.JobClient: FileSystemCounters 
11/11/30 10:38:23 INFO mapred.JobClient:  FILE_BYTES_READ=34 
11/11/30 10:38:23 INFO mapred.JobClient:  HDFS_BYTES_READ=124 
11/11/30 10:38:23 INFO mapred.JobClient:  FILE_BYTES_WRITTEN=100 
11/11/30 10:38:23 INFO mapred.JobClient:  HDFS_BYTES_WRITTEN=124 
11/11/30 10:38:23 INFO mapred.JobClient: Map-Reduce Framework 
11/11/30 10:38:23 INFO mapred.JobClient:  Reduce input groups=2 
11/11/30 10:38:23 INFO mapred.JobClient:  Combine output records=2 
11/11/30 10:38:23 INFO mapred.JobClient:  Map input records=2 
11/11/30 10:38:23 INFO mapred.JobClient:  Reduce shuffle bytes=0 
11/11/30 10:38:23 INFO mapred.JobClient:  Reduce output records=2 
11/11/30 10:38:23 INFO mapred.JobClient:  Spilled Records=4 
11/11/30 10:38:23 INFO mapred.JobClient:  Map output bytes=24 
11/11/30 10:38:23 INFO mapred.JobClient:  Combine input records=2 
11/11/30 10:38:23 INFO mapred.JobClient:  Map output records=2 
11/11/30 10:38:23 INFO mapred.JobClient:  Reduce input records=2 
Exception in thread "main" java.io.IOException: Cannot open filename /tmp/Matrix Multiply/out/_logs 
     at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1497) 
     at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1488) 
     at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:376) 
     at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:178) 
     at org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1437) 
     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424) 
     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1417) 
     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1412) 
     at TestMatrixMultiply.fillMatrix(TestMatrixMultiply.java:62) 
     at TestMatrixMultiply.readMatrix(TestMatrixMultiply.java:84) 
     at TestMatrixMultiply.checkAnswer(TestMatrixMultiply.java:108) 
     at TestMatrixMultiply.runOneTest(TestMatrixMultiply.java:144) 
     at TestMatrixMultiply.testIdentity(TestMatrixMultiply.java:156) 
     at TestMatrixMultiply.main(TestMatrixMultiply.java:258) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:601) 
     at org.apache.hadoop.util.RunJar.main(RunJar.java:156) 

Can someone please tell me what I am doing wrong? Thanks.

Answers


Your code is trying to read the job output directory. When you submit a job to a cluster, Hadoop adds this _logs directory to the output. Since it is a directory and not a sequence file, it cannot be read.

You have to change the code that reads the output.

I have done something similar in my own code:

// List the contents of the output directory and skip anything that is a
// directory (such as _logs), reading only the actual part files.
FileStatus[] stati = fs.listStatus(output);
for (FileStatus status : stati) {
    if (!status.isDir()) {
        Path path = status.getPath();
        // HERE IS THE READ CODE FROM YOUR EXAMPLE
    }
}

http://code.google.com/p/hama-shortest-paths/source/browse/trunk/hama-gsoc/src/de/jungblut/clustering/mapreduce/KMeansClusteringJob.java#127
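
For reference, here is a minimal, self-contained sketch of what such a filtered read could look like, assuming the output files are sequence files as in the matrix multiply example. The class name ReadJobOutput and the argument handling are my own additions, and the key/value classes are taken from each file's header via ReflectionUtils instead of being hard-coded:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

// Hypothetical helper class: reads every part file under a job output
// directory, skipping subdirectories such as _logs.
public class ReadJobOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path output = new Path(args[0]); // e.g. the job's output directory
        for (FileStatus status : fs.listStatus(output)) {
            if (status.isDir()) {
                continue; // skip _logs and any other subdirectory
            }
            SequenceFile.Reader reader =
                new SequenceFile.Reader(fs, status.getPath(), conf);
            try {
                // Instantiate the key/value types recorded in the file
                // header, so no writable classes need to be hard-coded.
                Writable key =
                    (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
                Writable value =
                    (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
                while (reader.next(key, value)) {
                    System.out.println(key + "\t" + value);
                }
            } finally {
                reader.close();
            }
        }
    }
}

Run it with the output directory as its only argument; because directories are skipped before any SequenceFile.Reader is constructed, the IOException above cannot occur.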


Thanks Thomas, you were right. Do you have any idea why the matrix multiplication example I mentioned works fine in Hadoop standalone mode, but fails in distributed mode when checking the answer? – waqas


Because when running in distributed mode, _logs is stored in HDFS, whereas in pseudo-distributed/standalone mode it ends up in the tasktracker logs. –


This may be a naive suggestion, but you may need to escape the space in the file name, i.e. /tmp/Matrix\ Multiply/out/_logs. Spaces in directory names may not be handled automatically, and I assume you are using Linux.


Thanks miette. Yes, I am using Linux. The thing is, all these log files and paths are defined by the program itself. Do you still think it could be the same problem you describe? Since I am fairly new to this area, please correct me if I am wrong. – waqas


It is not the space; it is trying to read a directory. –


There are two problems in TestMatrixMultiply.java:

  1. As Thomas Jungblut said, _logs should be excluded in the readMatrix() method. I changed the code like this (an alternative using a PathFilter is sketched after this list):

    if (fs.isFile(path)) {
        fillMatrix(result, path);
    } else {
        FileStatus[] fileStatusArray = fs.listStatus(path);
        for (FileStatus fileStatus : fileStatusArray) {
            if (!fileStatus.isDir()) // this line is added by me
                fillMatrix(result, fileStatus.getPath());
        }
    }
    
  2. At the end of the main() method, the fs.delete call should be commented out; otherwise the output directory is deleted every time, immediately after the MapReduce jobs finish:

    finally {
        //fs.delete(new Path(DATA_DIR_PATH), true);
    }
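
As a side note, and not part of the original example: the same exclusion can be done up front with a PathFilter (org.apache.hadoop.fs.PathFilter), so that _logs and other underscore- or dot-prefixed entries never appear in the listing at all. A hedged sketch, where outputPath stands for whatever Path readMatrix() actually receives:

    // Accept only regular output files; hide _logs and other
    // underscore- or dot-prefixed entries from the listing.
    PathFilter partFilter = new PathFilter() {
        public boolean accept(Path p) {
            String name = p.getName();
            return !name.startsWith("_") && !name.startsWith(".");
        }
    };
    for (FileStatus fileStatus : fs.listStatus(outputPath, partFilter)) {
        fillMatrix(result, fileStatus.getPath());
    }

FileSystem.listStatus(Path, PathFilter) is a standard Hadoop API, so this keeps the reading loop free of per-entry isDir() checks.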