
Hadoop DistributedCache FileNotFoundException

The log shows that the cache file was created. But when I go to the location it reports, the file does not exist, and when I try to read it from my mapper I get a FileNotFoundException.

This is the code I am trying to run:

JobConf conf2 = new JobConf(getConf(), CorpusCalculator.class);
conf2.setJobName("CorpusCalculator2");

//Distributed caching of the file emitted by reducer2 is done here
conf2.addResource(new Path("/opt/hadoop1/conf/core-site.xml"));
conf2.addResource(new Path("/opt/hadoop1/conf/hdfs-site.xml"));

//cacheFile(conf2, new Path(outputPathofReducer2));

conf2.setNumReduceTasks(1);
//conf2.setOutputKeyComparatorClass()

conf2.setMapOutputKeyClass(FloatWritable.class);
conf2.setMapOutputValueClass(Text.class);

conf2.setOutputKeyClass(Text.class);
conf2.setOutputValueClass(Text.class);

conf2.setMapperClass(MapClass2.class);
conf2.setReducerClass(Reduce2.class);

FileInputFormat.setInputPaths(conf2, new Path(inputPathForMapper1));
FileOutputFormat.setOutputPath(conf2, new Path(outputPathofReducer3));

DistributedCache.addCacheFile(new Path("/sunilFiles/M51.txt").toUri(), conf2);
JobClient.runJob(conf2);
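
For reference, this is roughly how the mapper side reads the cached file in configure() with the old mapred API (a minimal sketch; the input key/value types and the map body are placeholders, only MapClass2 and the map output types come from the job setup above):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class MapClass2 extends MapReduceBase
        implements Mapper<LongWritable, Text, FloatWritable, Text> {

    @Override
    public void configure(JobConf job) {
        try {
            // getLocalCacheFiles() returns the node-local copies of the
            // files registered with DistributedCache.addCacheFile().
            Path[] cached = DistributedCache.getLocalCacheFiles(job);
            if (cached != null) {
                for (Path p : cached) {
                    BufferedReader reader =
                            new BufferedReader(new FileReader(p.toString()));
                    String line;
                    while ((line = reader.readLine()) != null) {
                        // process each line of M51.txt here
                    }
                    reader.close();
                }
            }
        } catch (IOException e) {
            System.err.println("Exception reading DistributedCache: " + e);
        }
    }

    public void map(LongWritable key, Text value,
                    OutputCollector<FloatWritable, Text> output,
                    Reporter reporter) throws IOException {
        // map logic omitted
    }
}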

Log output:

13/04/27 04:43:40 INFO filecache.TrackerDistributedCacheManager: Creating M51.txt in /tmp1/mapred/local/archive/-1731849462204707023_-2090562221_1263420527/localhost/sunilFiles-work-2204204368663038938 with rwxr-xr-x 

13/04/27 04:43:40 INFO filecache.TrackerDistributedCacheManager: Cached /sunilFiles/M51.txt as /tmp1/mapred/local/archive/-1731849462204707023_-2090562221_1263420527/localhost/sunilFiles/M51.txt 

13/04/27 04:43:40 INFO mapred.JobClient: Running job: job_local_0003 

13/04/27 04:43:40 INFO mapred.Task: Using ResourceCalculatorPlugin : o...

13/04/27 04:43:40 INFO mapred.MapTask: numReduceTasks: 1 

13/04/27 04:43:40 INFO mapred.MapTask: io.sort.mb = 100 

13/04/27 04:43:40 INFO mapred.MapTask: data buffer = 79691776/99614720 

13/04/27 04:43:40 INFO mapred.MapTask: record buffer = 262144/327680 

configure()

Exception reading DistribtuedCache: java.io.FileNotFoundException: /tmp1/mapred/local/archive/-1731849462204707023_-2090562221_1263420527/localhost/sunilFiles/M51.txt (Is a directory) 

Inside setup(): /tmp1/mapred/local/archive/-1731849462204707023_-2090562221_1263420527/localhost/sunilFiles/M51.txt 

13/04/27 04:43:41 WARN mapred.LocalJobRunner: job_local_0003 

Please help me. I have been searching for a solution to this for six hours, and I have to submit my assignment tomorrow. Thank you very much.

Which version of Hadoop are you using? – Rags 2013-04-27 17:20:37

Also share your complete code. – Rags 2013-04-27 17:23:41

Answers

I solved this problem by using copyMerge(), which merges all the files present on the various machines into a single file; that merged file could then be used successfully, whereas it was failing with the plain files. Thanks for your replies, guys.
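
A rough sketch of that merge step with FileUtil.copyMerge(), assuming that /sunilFiles/M51.txt was actually a reducer output directory full of part files, which is what the "(Is a directory)" error suggests (the merged file name below is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class MergeCacheFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // M51.txt is assumed to be a reducer output *directory* of part
        // files, which is why opening it directly failed with
        // "(Is a directory)". Merge the parts into one real file first.
        Path reducerOutputDir = new Path("/sunilFiles/M51.txt");
        Path mergedFile = new Path("/sunilFiles/M51-merged.txt"); // illustrative name

        FileUtil.copyMerge(fs, reducerOutputDir, fs, mergedFile,
                false, // keep the source directory
                conf,
                null); // no separator string between the merged files

        // Then cache the single merged file instead of the directory:
        // DistributedCache.addCacheFile(mergedFile.toUri(), jobConf);
    }
}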

You might want to try the simpler -files option. To be able to use it, the driver class needs to extend Configured and implement Tool.

For example:

hadoop jar jarname.jar driverclass -files file1.xml,file2.txt

Then in the mapper or reducer:

BufferedReader reader1 = new BufferedReader(new FileReader("file1.xml")); 
BufferedReader reader2 = new BufferedReader(new FileReader("file2.txt"));
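
To make that concrete, here is a skeleton driver that works with -files (the class name, job name and path arguments are placeholders; no mapper/reducer is wired in, so Hadoop falls back to the identity classes):

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Extending Configured and implementing Tool lets ToolRunner strip the
// generic options (-files, -libjars, -D ...) before run() sees the args;
// the named files then show up in each task's working directory.
public class MyDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        JobConf conf = new JobConf(getConf(), MyDriver.class);
        conf.setJobName("my-job");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MyDriver(), args));
    }
}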