
I am trying to run a clustering program using Mahout, and I am getting an IOException while running K-means with the Mahout and Hadoop jars.

When I run it, it starts executing normally but gives me an error at the end. Below is the stack trace I get when running it.

13/05/30 09:49:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
13/05/30 09:49:22 INFO kmeans.KMeansDriver: Input: /home/vishal/testdata/points Clusters In: /home/vishal/testdata/clusters Out: /home/vishal/output Distance: org.apache.mahout.common.distance.EuclideanDistanceMeasure 
13/05/30 09:49:22 INFO kmeans.KMeansDriver: convergence: 0.0010 max Iterations: 10 num Reduce Tasks: org.apache.mahout.math.VectorWritable Input Vectors: {} 
13/05/30 09:49:22 INFO kmeans.KMeansDriver: K-Means Iteration 1 
13/05/30 09:49:22 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-1 
13/05/30 09:49:23 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
13/05/30 09:49:23 INFO input.FileInputFormat: Total input paths to process : 1 
13/05/30 09:49:23 INFO mapred.JobClient: Running job: job_local_0001 
13/05/30 09:49:23 INFO util.ProcessTree: setsid exited with exit code 0 
13/05/30 09:49:23 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:23 INFO mapred.MapTask: io.sort.mb = 100 
13/05/30 09:49:23 INFO mapred.MapTask: data buffer = 79691776/99614720 
13/05/30 09:49:23 INFO mapred.MapTask: record buffer = 262144/327680 
13/05/30 09:49:23 INFO mapred.MapTask: Starting flush of map output 
13/05/30 09:49:23 INFO mapred.MapTask: Finished spill 0 
13/05/30 09:49:23 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:24 INFO mapred.JobClient: map 0% reduce 0% 
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done. 
13/05/30 09:49:26 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Merger: Merging 1 sorted segments 
13/05/30 09:49:26 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 185 bytes 
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now 
13/05/30 09:49:26 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to /home/vishal/output/clusters-1 
13/05/30 09:49:27 INFO mapred.JobClient: map 100% reduce 0% 
13/05/30 09:49:29 INFO mapred.LocalJobRunner: reduce > reduce 
13/05/30 09:49:29 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done. 
13/05/30 09:49:30 INFO mapred.JobClient: map 100% reduce 100% 
13/05/30 09:49:30 INFO mapred.JobClient: Job complete: job_local_0001 
13/05/30 09:49:30 INFO mapred.JobClient: Counters: 21 
13/05/30 09:49:30 INFO mapred.JobClient: File Output Format Counters 
13/05/30 09:49:30 INFO mapred.JobClient:  Bytes Written=474 
13/05/30 09:49:30 INFO mapred.JobClient: Clustering 
13/05/30 09:49:30 INFO mapred.JobClient:  Converged Clusters=1 
13/05/30 09:49:30 INFO mapred.JobClient: FileSystemCounters 
13/05/30 09:49:30 INFO mapred.JobClient:  FILE_BYTES_READ=3328461 
13/05/30 09:49:30 INFO mapred.JobClient:  FILE_BYTES_WRITTEN=3422872 
13/05/30 09:49:30 INFO mapred.JobClient: File Input Format Counters 
13/05/30 09:49:30 INFO mapred.JobClient:  Bytes Read=443 
13/05/30 09:49:30 INFO mapred.JobClient: Map-Reduce Framework 
13/05/30 09:49:30 INFO mapred.JobClient:  Map output materialized bytes=189 
13/05/30 09:49:30 INFO mapred.JobClient:  Map input records=9 
13/05/30 09:49:30 INFO mapred.JobClient:  Reduce shuffle bytes=0 
13/05/30 09:49:30 INFO mapred.JobClient:  Spilled Records=6 
13/05/30 09:49:30 INFO mapred.JobClient:  Map output bytes=531 
13/05/30 09:49:30 INFO mapred.JobClient:  Total committed heap usage (bytes)=325713920 
13/05/30 09:49:30 INFO mapred.JobClient:  CPU time spent (ms)=0 
13/05/30 09:49:30 INFO mapred.JobClient:  SPLIT_RAW_BYTES=104 
13/05/30 09:49:30 INFO mapred.JobClient:  Combine input records=9 
13/05/30 09:49:30 INFO mapred.JobClient:  Reduce input records=3 
13/05/30 09:49:30 INFO mapred.JobClient:  Reduce input groups=3 
13/05/30 09:49:30 INFO mapred.JobClient:  Combine output records=3 
13/05/30 09:49:30 INFO mapred.JobClient:  Physical memory (bytes) snapshot=0 
13/05/30 09:49:30 INFO mapred.JobClient:  Reduce output records=3 
13/05/30 09:49:30 INFO mapred.JobClient:  Virtual memory (bytes) snapshot=0 
13/05/30 09:49:30 INFO mapred.JobClient:  Map output records=9 
13/05/30 09:49:30 INFO kmeans.KMeansDriver: K-Means Iteration 2 
13/05/30 09:49:30 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-2 
13/05/30 09:49:30 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
13/05/30 09:49:30 INFO input.FileInputFormat: Total input paths to process : 1 
13/05/30 09:49:30 INFO mapred.JobClient: Running job: job_local_0002 
13/05/30 09:49:30 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:30 INFO mapred.MapTask: io.sort.mb = 100 
13/05/30 09:49:30 INFO mapred.MapTask: data buffer = 79691776/99614720 
13/05/30 09:49:30 INFO mapred.MapTask: record buffer = 262144/327680 
13/05/30 09:49:30 INFO mapred.MapTask: Starting flush of map output 
13/05/30 09:49:30 INFO mapred.MapTask: Finished spill 0 
13/05/30 09:49:30 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:31 INFO mapred.JobClient: map 0% reduce 0% 
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done. 
13/05/30 09:49:33 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Merger: Merging 1 sorted segments 
13/05/30 09:49:33 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 124 bytes 
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now 
13/05/30 09:49:33 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to /home/vishal/output/clusters-2 
13/05/30 09:49:34 INFO mapred.JobClient: map 100% reduce 0% 
13/05/30 09:49:36 INFO mapred.LocalJobRunner: reduce > reduce 
13/05/30 09:49:36 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done. 
13/05/30 09:49:37 INFO mapred.JobClient: map 100% reduce 100% 
13/05/30 09:49:37 INFO mapred.JobClient: Job complete: job_local_0002 
13/05/30 09:49:37 INFO mapred.JobClient: Counters: 20 
13/05/30 09:49:37 INFO mapred.JobClient: File Output Format Counters 
13/05/30 09:49:37 INFO mapred.JobClient:  Bytes Written=364 
13/05/30 09:49:37 INFO mapred.JobClient: FileSystemCounters 
13/05/30 09:49:37 INFO mapred.JobClient:  FILE_BYTES_READ=6658544 
13/05/30 09:49:37 INFO mapred.JobClient:  FILE_BYTES_WRITTEN=6844248 
13/05/30 09:49:37 INFO mapred.JobClient: File Input Format Counters 
13/05/30 09:49:37 INFO mapred.JobClient:  Bytes Read=443 
13/05/30 09:49:37 INFO mapred.JobClient: Map-Reduce Framework 
13/05/30 09:49:37 INFO mapred.JobClient:  Map output materialized bytes=128 
13/05/30 09:49:37 INFO mapred.JobClient:  Map input records=9 
13/05/30 09:49:37 INFO mapred.JobClient:  Reduce shuffle bytes=0 
13/05/30 09:49:37 INFO mapred.JobClient:  Spilled Records=4 
13/05/30 09:49:37 INFO mapred.JobClient:  Map output bytes=531 
13/05/30 09:49:37 INFO mapred.JobClient:  Total committed heap usage (bytes)=525074432 
13/05/30 09:49:37 INFO mapred.JobClient:  CPU time spent (ms)=0 
13/05/30 09:49:37 INFO mapred.JobClient:  SPLIT_RAW_BYTES=104 
13/05/30 09:49:37 INFO mapred.JobClient:  Combine input records=9 
13/05/30 09:49:37 INFO mapred.JobClient:  Reduce input records=2 
13/05/30 09:49:37 INFO mapred.JobClient:  Reduce input groups=2 
13/05/30 09:49:37 INFO mapred.JobClient:  Combine output records=2 
13/05/30 09:49:37 INFO mapred.JobClient:  Physical memory (bytes) snapshot=0 
13/05/30 09:49:37 INFO mapred.JobClient:  Reduce output records=2 
13/05/30 09:49:37 INFO mapred.JobClient:  Virtual memory (bytes) snapshot=0 
13/05/30 09:49:37 INFO mapred.JobClient:  Map output records=9 
13/05/30 09:49:37 INFO kmeans.KMeansDriver: K-Means Iteration 3 
13/05/30 09:49:37 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-3 
13/05/30 09:49:37 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
13/05/30 09:49:37 INFO input.FileInputFormat: Total input paths to process : 1 
13/05/30 09:49:37 INFO mapred.JobClient: Running job: job_local_0003 
13/05/30 09:49:37 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:37 INFO mapred.MapTask: io.sort.mb = 100 
13/05/30 09:49:37 INFO mapred.MapTask: data buffer = 79691776/99614720 
13/05/30 09:49:37 INFO mapred.MapTask: record buffer = 262144/327680 
13/05/30 09:49:37 INFO mapred.MapTask: Starting flush of map output 
13/05/30 09:49:37 INFO mapred.MapTask: Finished spill 0 
13/05/30 09:49:37 INFO mapred.Task: Task:attempt_local_0003_m_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:38 INFO mapred.JobClient: map 0% reduce 0% 
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Task: Task 'attempt_local_0003_m_000000_0' done. 
13/05/30 09:49:40 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Merger: Merging 1 sorted segments 
13/05/30 09:49:40 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 124 bytes 
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Task: Task:attempt_local_0003_r_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Task: Task attempt_local_0003_r_000000_0 is allowed to commit now 
13/05/30 09:49:40 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0003_r_000000_0' to /home/vishal/output/clusters-3 
13/05/30 09:49:41 INFO mapred.JobClient: map 100% reduce 0% 
13/05/30 09:49:43 INFO mapred.LocalJobRunner: reduce > reduce 
13/05/30 09:49:43 INFO mapred.Task: Task 'attempt_local_0003_r_000000_0' done. 
13/05/30 09:49:44 INFO mapred.JobClient: map 100% reduce 100% 
13/05/30 09:49:44 INFO mapred.JobClient: Job complete: job_local_0003 
13/05/30 09:49:44 INFO mapred.JobClient: Counters: 21 
13/05/30 09:49:44 INFO mapred.JobClient: File Output Format Counters 
13/05/30 09:49:44 INFO mapred.JobClient:  Bytes Written=364 
13/05/30 09:49:44 INFO mapred.JobClient: Clustering 
13/05/30 09:49:44 INFO mapred.JobClient:  Converged Clusters=2 
13/05/30 09:49:44 INFO mapred.JobClient: FileSystemCounters 
13/05/30 09:49:44 INFO mapred.JobClient:  FILE_BYTES_READ=9988052 
13/05/30 09:49:44 INFO mapred.JobClient:  FILE_BYTES_WRITTEN=10265506 
13/05/30 09:49:44 INFO mapred.JobClient: File Input Format Counters 
13/05/30 09:49:44 INFO mapred.JobClient:  Bytes Read=443 
13/05/30 09:49:44 INFO mapred.JobClient: Map-Reduce Framework 
13/05/30 09:49:44 INFO mapred.JobClient:  Map output materialized bytes=128 
13/05/30 09:49:44 INFO mapred.JobClient:  Map input records=9 
13/05/30 09:49:44 INFO mapred.JobClient:  Reduce shuffle bytes=0 
13/05/30 09:49:44 INFO mapred.JobClient:  Spilled Records=4 
13/05/30 09:49:44 INFO mapred.JobClient:  Map output bytes=531 
13/05/30 09:49:44 INFO mapred.JobClient:  Total committed heap usage (bytes)=724434944 
13/05/30 09:49:44 INFO mapred.JobClient:  CPU time spent (ms)=0 
13/05/30 09:49:44 INFO mapred.JobClient:  SPLIT_RAW_BYTES=104 
13/05/30 09:49:44 INFO mapred.JobClient:  Combine input records=9 
13/05/30 09:49:44 INFO mapred.JobClient:  Reduce input records=2 
13/05/30 09:49:44 INFO mapred.JobClient:  Reduce input groups=2 
13/05/30 09:49:44 INFO mapred.JobClient:  Combine output records=2 
13/05/30 09:49:44 INFO mapred.JobClient:  Physical memory (bytes) snapshot=0 
13/05/30 09:49:44 INFO mapred.JobClient:  Reduce output records=2 
13/05/30 09:49:44 INFO mapred.JobClient:  Virtual memory (bytes) snapshot=0 
13/05/30 09:49:44 INFO mapred.JobClient:  Map output records=9 
Exception in thread "main" java.io.IOException: Target /home/vishal/output/clusters-3-final/clusters-3 is a directory 
    at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:359) 
    at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:361) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:211) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163) 
    at org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:287) 
    at org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:425) 
    at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClustersMR(KMeansDriver.java:322) 
    at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClusters(KMeansDriver.java:239) 
    at org.apache.mahout.clustering.kmeans.KMeansDriver.run(KMeansDriver.java:154) 
    at com.ClusteringDemo.main(ClusteringDemo.java:80) 

What could be the reason?

Thanks


You really need to **start reading the error messages**. Apparently a directory exists that should not. –

Answer


Here is what KMeansDriver is trying to do:

// Build the path of the final clusters directory, e.g. clusters-3-final
Path finalClustersIn = new Path(output, AbstractCluster.CLUSTERS_DIR + (iteration-1) + "-final"); 
// Rename clusters-<n> to clusters-<n>-final to mark the run as finished
FileSystem.get(conf).rename(new Path(output, AbstractCluster.CLUSTERS_DIR + (iteration-1)), finalClustersIn); 

As you can see, the run has converged after 3 iterations, and the driver then tries to rename the third iteration's result directory, clusters-3, to clusters-3-final to mark that it has finished.

Now, the rename method checks, before actually renaming, that it is not about to rename onto a directory that already exists. And indeed, it looks like you already have this clusters-3-final directory, probably left over from a previous run.
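For reference, the check that fails is in FileUtil.checkDest (line 359 in your stack trace). This is a simplified sketch of the branch that throws here, not the verbatim Hadoop source:

// Simplified sketch of org.apache.hadoop.fs.FileUtil.checkDest; the real
// method handles more cases, but this is the branch that fires above.
private static void checkDest(FileSystem dstFS, Path dst) throws IOException { 
    if (dstFS.exists(dst) && dstFS.getFileStatus(dst).isDir()) { 
        // rename/copy refuses to overwrite an existing directory 
        throw new IOException("Target " + dst + " is a directory"); 
    } 
} 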

Deleting this directory should fix your problem. You can do it from the command line:

hadoop fs -rmr /home/vishal/output/clusters-3-final 

Or, since it looks like you are running your job in local mode:

rm -rf /home/vishal/output/clusters-3-final 
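You could also clear the old output programmatically before each run. A minimal sketch using Mahout's own HadoopUtil helper (the same class you can see deleting clusters-1 in your logs above); the path is the one from your run:

import org.apache.hadoop.conf.Configuration; 
import org.apache.hadoop.fs.Path; 
import org.apache.mahout.common.HadoopUtil; 

Configuration conf = new Configuration(); 
// Recursively deletes the output directory if it exists; harmless otherwise 
HadoopUtil.delete(conf, new Path("/home/vishal/output")); 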

To avoid this kind of problem, I recommend using a unique output directory each time you run your analysis. For example, you could take the current time and append it to the output Path, e.g. using System.currentTimeMillis().
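A minimal sketch of that idea; the base directory name is just an example, and the resulting Path would be passed to KMeansDriver.run as the output argument:

// Unique output directory per run, so leftovers from an earlier run can never collide 
Path output = new Path("/home/vishal/output-" + System.currentTimeMillis()); 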

EDIT: Regarding your second problem:

Exception in thread "main" java.io.IOException: wrong value class: 0.0: null is not class org.apache.mahout.clustering.WeightedPropertyVectorWritable 
    at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1932) 
    at com.ClusteringDemo.main(ClusteringDemo.java:90) 

You are actually suffering from a conflict between Mahout versions: older Mahout versions used WeightedVectorWritable, while more recent ones use WeightedPropertyVectorWritable. To fix it, just change the declaration of your variable value from:

WeightedVectorWritable value = new WeightedVectorWritable(); 

to:

WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable(); 
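With that change, reading the clustered points back would look roughly like this. A minimal sketch, assuming your results live under /home/vishal/output/clusteredPoints/part-m-00000 (adjust to your run) and the keys are the IntWritable cluster ids:

import org.apache.hadoop.conf.Configuration; 
import org.apache.hadoop.fs.FileSystem; 
import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.io.IntWritable; 
import org.apache.hadoop.io.SequenceFile; 
import org.apache.mahout.clustering.WeightedPropertyVectorWritable; 

Configuration conf = new Configuration(); 
FileSystem fs = FileSystem.get(conf); 
// Assumed location of the clustered points; adjust to your output directory 
Path path = new Path("/home/vishal/output/clusteredPoints/part-m-00000"); 
SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf); 
IntWritable key = new IntWritable(); 
WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable(); 
while (reader.next(key, value)) { 
    // value holds the vector plus distance properties; key is the cluster it was assigned to 
    System.out.println(value.toString() + " belongs to cluster " + key.toString()); 
} 
reader.close(); 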

Hi Charles, thanks. What I understood from your explanation is that it is caused by the existence of the directory that the program renames to at the end of the third clustering iteration. So I deleted that directory and ran the code again, but the same problem is still there. I think I have done something else wrong, is that it? Actually I am just a beginner at this, which is why the confusion. Thanks –


Actually, it looks like you are running your job in local mode; can you delete this directory on your local disk and then try again? –


Thanks, I got it. But now it is throwing the following exception: –