2014-07-21
I am running Hadoop CDH 4.7 in YARN mode. There is a MapFile at hdfs://test1:9100/user/tagdict_builder_output/part-00000, consisting of the two files index and data. I get a FileNotFoundException when accessing the MapFile through the DistributedCache.

I use the following code to add it to the DistributedCache:

Configuration conf = new Configuration(); 
Path tagDictFilePath = new Path("hdfs://test1:9100/user/tagdict_builder_output/part-00000"); 
DistributedCache.addCacheFile(tagDictFilePath.toUri(), conf); 
Job job = new Job(conf); 

And I initialize a MapFile.Reader in the mapper's setup():

@Override
protected void setup(Context context) throws IOException, InterruptedException {
    Path[] localFiles = DistributedCache.getLocalCacheFiles(context.getConfiguration());
    if (localFiles != null && localFiles.length > 0 && localFiles[0] != null) {
        String mapFileDir = localFiles[0].toString();
        LOG.info("mapFileDir " + mapFileDir);
        FileSystem fs = FileSystem.get(context.getConfiguration());
        reader = new MapFile.Reader(fs, mapFileDir, context.getConfiguration());
    } else {
        throw new IOException("Could not read lexicon file in DistributedCache");
    }
}

But it throws a FileNotFoundException:

Error: java.io.FileNotFoundException: File does not exist: /home/mps/cdh/local/usercache/mps/appcache/application_1405497023620_0045/container_1405497023620_0045_01_000012/part-00000/data 
     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:824) 
     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1704) 
     at org.apache.hadoop.io.MapFile$Reader.createDataFileReader(MapFile.java:452) 
     at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:426) 
     at org.apache.hadoop.io.MapFile$Reader.<init>(MapFile.java:396) 
     at org.apache.hadoop.io.MapFile$Reader.<init>(MapFile.java:405) 
     at aps.Cdh4MD5TaglistPreprocessor$Vectorizer.setup(Cdh4MD5TaglistPreprocessor.java:61) 
     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142) 
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:756) 
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:338) 
     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:160) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:415) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438) 
     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:155) 

I have also tried /user/tagdict_builder_output/part-00000 as the path, and tried using a symlink, but neither works. How can I fix this? Many thanks.

Answer

As it says here:

The DistributedCache symlinks the cached files into the current working directory of the mapper and reducer.

So you should try to access your file through a File object:

File f = new File("./part-00000"); 
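Note, though, that a MapFile cannot be opened with java.io.File alone: part-00000 is a directory holding the index and data files, and MapFile.Reader wants a FileSystem and a path. The DistributedFileSystem frames in the stack trace suggest the real problem is that FileSystem.get(conf) returned HDFS while the cached copy lives on the node's local disk. A minimal sketch of opening the cached MapFile through the local file system instead (not a verified fix; reader, context, and LOG are the fields from the question's mapper):

```java
// Sketch: open the locally cached MapFile with the *local* FileSystem.
// FileSystem.get(conf) returns the default file system (HDFS here), so
// handing it a local cache path fails with FileNotFoundException;
// FileSystem.getLocal(conf) resolves paths on the node's local disk.
Configuration conf = context.getConfiguration();
Path[] localFiles = DistributedCache.getLocalCacheFiles(conf);
FileSystem localFs = FileSystem.getLocal(conf);
reader = new MapFile.Reader(localFs, localFiles[0].toString(), conf);
```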

EDIT1

My final suggestion is:

DistributedCache.addCacheFile(new URI(tagDictFilePath.toString() + "#cache-file"), conf); 
DistributedCache.createSymlink(conf); 
... 
// in mapper 
File f = new File("cache-file"); 
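Since MapFile.Reader takes a FileSystem and a path rather than a java.io.File, the symlinked cache entry could also be opened by its relative name in the task's working directory. A sketch, assuming the #cache-file symlink above was created:

```java
// Sketch: the "cache-file" symlink lands in the task's current working
// directory, so open it through the local file system by its relative name.
Configuration conf = context.getConfiguration();
FileSystem localFs = FileSystem.getLocal(conf);
reader = new MapFile.Reader(localFs, "cache-file", conf);
```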

I tried it. It doesn't work. – Treper
