
hbase ImportTsv FileNotFoundException

My setup is Hadoop 2.0.0 and HBase 0.96, everything running in pseudo-distributed mode.

When I run ImportTsv with the following command:

./hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,surname,name,age persons 'hdfs://localhost:9000/user/joe/persons.tsv' 

it tries to read the file hdfs://localhost:9000/home/joe/Programs/hbase-0.96.0-hadoop2/lib/hbase-client-0.96.0-hadoop2.jar, which does not exist (the jar exists only on the local filesystem; see the quick check after the stack trace).

The stack trace is below.

Thanks a lot for your help.

 
2013-10-22 19:33:52,079 INFO [main] mapreduce.TableOutputFormat: Created table instance for persons 
2013-10-22 19:33:53,253 INFO [main] mapreduce.JobSubmitter: Cleaning up the staging area file:/tmp/hadoop-joe/mapred/staging/joe1659915806/.staging/job_local1659915806_0001 
2013-10-22 19:33:53,256 ERROR [main] security.UserGroupInformation: PriviledgedActionException as:joe (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: hdfs://localhost:9000/home/joe/Programs/hbase-0.96.0-hadoop2/lib/hbase-client-0.96.0-hadoop2.jar 
Exception in thread "main" java.io.FileNotFoundException: File does not exist: hdfs://localhost:9000/home/joe/Programs/hbase-0.96.0-hadoop2/lib/hbase-client-0.96.0-hadoop2.jar 
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102) 
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288) 
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224) 
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93) 
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57) 
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264) 
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300) 
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387) 
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268) 
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491) 
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265) 
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286) 
    at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:480) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) 
    at org.apache.hadoop.hbase.mapreduce.ImportTsv.main(ImportTsv.java:484) 
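
A quick sanity check (using the paths from the command and the stack trace, so the exact locations are specific to this setup) shows the mismatch: the input TSV lives in HDFS, while the jar the job submitter is looking up exists only on the local filesystem.

# input TSV, expected to be in HDFS
hdfs dfs -ls hdfs://localhost:9000/user/joe/persons.tsv
# HBase client jar, present on the local filesystem...
ls -l /home/joe/Programs/hbase-0.96.0-hadoop2/lib/hbase-client-0.96.0-hadoop2.jar
# ...but not in HDFS, which is the path getFileStatus() complains about
hdfs dfs -ls /home/joe/Programs/hbase-0.96.0-hadoop2/lib/hbase-client-0.96.0-hadoop2.jar

The first two commands should succeed and the last one should fail, matching the FileNotFoundException above.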


Answer


In Hadoop, set the configuration parameters as follows. The local staging path in the log (file:/tmp/hadoop-joe/... with a job_local id) suggests the job was going through the local job runner while its dependency jars were being resolved against HDFS; pointing the MapReduce framework at YARN avoids that mismatch.

etc/hadoop/mapred-site.xml:

<configuration> 
    <property> 
     <name>mapreduce.framework.name</name> 
     <value>yarn</value> 
    </property> 
</configuration> 

etc/hadoop/yarn-site.xml:

<configuration> 
    <property> 
     <name>yarn.nodemanager.aux-services</name> 
     <value>mapreduce_shuffle</value> 
    </property> 
</configuration>
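
After adding these properties, restart (or start) the YARN daemons so the new settings are picked up, then re-run the import. A minimal sketch, assuming a stock Hadoop 2.x layout under $HADOOP_HOME and the same ImportTsv invocation as above:

# restart YARN so yarn-site.xml (mapreduce_shuffle) takes effect
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh
# re-run the import; it should now be submitted to YARN instead of the local job runner
./hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,surname,name,age persons 'hdfs://localhost:9000/user/joe/persons.tsv'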