cdh4 hadoop-hbase PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.FileNotFoundException

I have installed the Cloudera CDH4 release and am trying to run a MapReduce job on it. I am getting the following error ->
2012-07-09 15:41:16 ZooKeeperSaslClient [INFO] Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-07-09 15:41:16 ClientCnxn [INFO] Socket connection established to Cloudera/192.168.0.102:2181, initiating session
2012-07-09 15:41:16 RecoverableZooKeeper [WARN] Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2012-07-09 15:41:16 RetryCounter [INFO] The 1 times to retry after sleeping 2000 ms
2012-07-09 15:41:16 ClientCnxn [INFO] Session establishment complete on server Cloudera/192.168.0.102:2181, sessionid = 0x1386b0b44da000b, negotiated timeout = 60000
2012-07-09 15:41:18 TableOutputFormat [INFO] Created table instance for exact_custodian
2012-07-09 15:41:18 NativeCodeLoader [WARN] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2012-07-09 15:41:18 JobSubmitter [WARN] Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2012-07-09 15:41:18 JobSubmitter [INFO] Cleaning up the staging area file:/tmp/hadoop-hdfs/mapred/staging/hdfs48876562/.staging/job_local_0001
2012-07-09 15:41:18 UserGroupInformation [ERROR] PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: /home/cloudera/yogesh/lib/hbase.jar
Exception in thread "main" java.io.FileNotFoundException: File does not exist: /home/cloudera/yogesh/lib/hbase.jar
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:736)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:208)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:71)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:246)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:284)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:355)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1226)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1223)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1223)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1244)
at
I am able to run the sample programs shipped with hadoop-mapreduce-examples-2.0.0-cdh4.0.0.jar. But when my job is submitted to the jobtracker, I get this error. It looks like it is trying to access the local file system again (even though I have set up all the required libraries in the distributed cache for the job, it still tries to resolve a local directory). Is this issue related to user permissions?
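For reference, a minimal sketch of how dependency jars are usually shipped with a job so that the submission client resolves them on HDFS rather than against a local path; the /tools-lib target location and the JarSetupSketch class are placeholders of mine, not something from the original setup:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical driver fragment: copy a dependency jar to HDFS once and put the
// fully qualified HDFS path on the job classpath, so the submission-time
// timestamp check (ClientDistributedCacheManager) does not look for it under a
// local path on the default filesystem.
public class JarSetupSketch {
    public static void addLibJar(Job job) throws Exception {
        Configuration conf = job.getConfiguration();
        FileSystem fs = FileSystem.get(conf);  // default FS, e.g. hdfs://Cloudera:8020

        Path local = new Path("file:///home/cloudera/yogesh/lib/hbase.jar");
        Path remote = new Path("/tools-lib/hbase.jar");   // placeholder HDFS location
        if (!fs.exists(remote)) {
            fs.copyFromLocalFile(local, remote);
        }
        job.addFileToClassPath(remote);
    }
}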
I) Cloudera:~ # hadoop fs -ls hdfs://<MyClusterIP>:8020/
shows -
Found 8 items
drwxr-xr-x   - hbase hbase          0 2012-07-04 17:58 hdfs://<MyClusterIP>:8020/hbase
drwxr-xr-x   - hdfs  supergroup     0 2012-07-05 16:21 hdfs://<MyClusterIP>:8020/input
drwxr-xr-x   - hdfs  supergroup     0 2012-07-05 16:21 hdfs://<MyClusterIP>:8020/output
drwxr-xr-x   - hdfs  supergroup     0 2012-07-06 16:03 hdfs://<MyClusterIP>:8020/tools-lib
drwxr-xr-x   - hdfs  supergroup     0 2012-06-26 14:02 hdfs://<MyClusterIP>:8020/test
drwxrwxrwt   - hdfs  supergroup     0 2012-06-12 16:13 hdfs://<MyClusterIP>:8020/tmp
drwxr-xr-x   - hdfs  supergroup     0 2012-07-06 15:58 hdfs://<MyClusterIP>:8020/user
II) --- No results for the following ---
[email protected]:/etc/hadoop/conf> find . -name '**' | xargs grep "default.name"
[email protected]:/etc/hbase/conf> find . -name '**' | xargs grep "default.name"
Instead, I think that with the new API we use ->
fs.defaultFS -> hdfs://Cloudera:8020, which I have set correctly.
Whereas for "fs.default.name" I do have entries on my Hadoop 0.20.2 cluster (a non-Cloudera cluster):
[email protected]:~/hadoop/conf> find . -name '**' | xargs grep "default.name"
./core-default.xml:  <name>fs.default.name</name>
./core-site.xml:  <name>fs.default.name</name>
I feel the CDH4 default configuration should include this entry in the respective directories (if this is indeed the problem).
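As a quick sanity check (my own minimal sketch, not part of the original post), something like the following shows which default filesystem a plain new Configuration() resolves to when the program is launched with java -classpath instead of the hadoop launcher; if it prints file:/// then core-site.xml is not being picked up from the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Prints the default-filesystem settings the job client would actually use.
public class DefaultFsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        System.out.println("fs.defaultFS    = " + conf.get("fs.defaultFS"));
        System.out.println("fs.default.name = " + conf.get("fs.default.name")); // deprecated alias of fs.defaultFS
        System.out.println("resolved URI    = " + FileSystem.get(conf).getUri());
    }
}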
The command I am using to run my program ->
[email protected]:/home/cloudera/yogesh/lib> java -classpath hbase-tools.jar:hbase.jar:slf4j-log4j12-1.6.1.jar:slf4j-api-1.6.1.jar:protobuf-java-2.4.0a.jar:hadoop-common-2.0.0-cdh4.0.0.jar:hadoop-hdfs-2.0.0-cdh4.0.0.jar:hadoop-mapreduce-client-common-2.0.0-cdh4.0.0.jar:hadoop-mapreduce-client-core-2.0.0-cdh4.0.0.jar:log4j-1.2.16.jar:commons-logging-1.0.4.jar:commons-lang-2.5.jar:commons-lang3-3.1.jar:commons-cli-1.2.jar:commons-configuration-1.6.jar:guava-11.0.2.jar:google-collect-1.0-rc2.jar:google-collect-1.0-rc1.jar:hadoop-auth-2.0.0-cdh4.0.0.jar:hadoop-auth.jar:jackson.jar:avro-1.5.4.jar:hadoop-yarn-common-2.0.0-cdh4.0.0.jar:hadoop-yarn-api-2.0.0-cdh4.0.0.jar:hadoop-yarn-server-common-2.0.0-cdh4.0.0.jar:commons-httpclient-3.0.1.jar:commons-io-1.4.jar:zookeeper-3.3.2.jar:jdom.jar:joda-time-1.5.2.jar com.hbase.xyz.MyClassName
Can you post the command line you use to submit your job, or any code that references this file? Does the file exist on the local system? – 2012-07-06 14:57:12
Hi Chris, thanks for your reply. I have updated the question, please see above. – Yogesh 2012-07-09 08:01:24
job_local_0001 indicates that mapred-site.xml is not being picked up correctly; when using new Configuration() the settings have to be set there. http://hbase.apache.org/book.html#trouble.mapreduce.local – Yogesh 2012-07-13 10:32:37
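For context, a minimal sketch of what that suggestion could look like when the Configuration is built by hand; the config-file paths, host name, and the YARN (MRv2) assumption are placeholders of mine and have to match the actual cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Load the cluster's config files explicitly, or set the key properties directly,
// so job submission does not fall back to the local job runner (job_local_0001).
public class ClusterConfSketch {
    public static Configuration clusterConf() {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
        // Or set the values directly (assuming CDH4 with YARN/MRv2):
        conf.set("fs.defaultFS", "hdfs://Cloudera:8020");
        conf.set("mapreduce.framework.name", "yarn");
        return conf;
    }
}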