2016-01-22 166 views

I am trying to set up Hadoop on my laptop and have followed several tutorials on setting it up, but I am getting an "Input path does not exist" error. To create my HDFS home directory I ran:

bin/hdfs dfs -mkdir /user/<username> 

If I run that command again, it says the directory already exists.

I then tried to run the example jar with the following command:

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep input output 'dfs[a-z.]+' 

and received this exception:

16/01/22 15:11:06 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/<username>/.staging/job_1453492366595_0006 org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/<username>/grep-temp-891167560

I am not sure why; before that error I received this:

16/01/22 15:51:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
16/01/22 15:51:51 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 
16/01/22 15:51:51 INFO input.FileInputFormat: Total input paths to process : 33 
16/01/22 15:51:52 INFO mapreduce.JobSubmitter: number of splits:33 
16/01/22 15:51:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1453492366595_0009 
16/01/22 15:51:52 INFO impl.YarnClientImpl: Submitted application application_1453492366595_0009 
16/01/22 15:51:52 INFO mapreduce.Job: The url to track the job: http://Marys-MacBook-Pro.local:8088/proxy/application_1453492366595_0009/ 
16/01/22 15:51:52 INFO mapreduce.Job: Running job: job_1453492366595_0009 
16/01/22 15:51:56 INFO mapreduce.Job: Job job_1453492366595_0009 running in uber mode : false 
16/01/22 15:51:56 INFO mapreduce.Job: map 0% reduce 0% 
16/01/22 15:51:56 INFO mapreduce.Job: Job job_1453492366595_0009 failed with state FAILED due to: Application application_1453492366595_0009 failed 2 times due to AM Container for appattempt_1453492366595_0009_000002 exited with exitCode: 127 
For more detailed output, check application tracking page:http://Marys-MacBook-Pro.local:8088/cluster/app/application_1453492366595_0009Then, click on links to logs of each attempt. 
Diagnostics: Exception from container-launch. 
Container id: container_1453492366595_0009_02_000001 
Exit code: 127 
Stack trace: ExitCodeException exitCode=127: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545) 
    at org.apache.hadoop.util.Shell.run(Shell.java:456) 
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722) 
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211) 
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302) 
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 


Container exited with a non-zero exit code 127 
Failing this attempt. Failing the application. 

A stack trace follows this. I am on a Mac.
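For context on what that jar invocation computes: it runs a distributed grep over the files in the HDFS `input` directory, counting occurrences of strings that match the regex `dfs[a-z.]+`, and writes the results to `output`. A plain local grep on a made-up sample file (`/tmp/grep-demo.txt` is only for illustration, not part of the tutorial) shows the kind of match involved:

```shell
# Throwaway sample standing in for the Hadoop config files used as input.
printf 'dfs.replication\ndfs.namenode.name.dir\nyarn.resourcemanager.address\n' > /tmp/grep-demo.txt

# Same regex the examples jar is given: 'dfs' followed by lowercase letters/dots.
grep -oE 'dfs[a-z.]+' /tmp/grep-demo.txt
# prints:
#   dfs.replication
#   dfs.namenode.name.dir
```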


What does that JAR file do? `grep input output 'dfs[a-z.]+'` are the arguments, so I assume it runs a grep over the `input` directory/files for the pattern `dfs[a-z.]+` and puts the results into the `output` directory? –


It is the example provided by several tutorials. Your assumption seems correct. I am following this site: http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html – user2983836


Did you run the `bin/hdfs dfs -put etc/hadoop input` step that that link mentions? –

Answers


I use Hadoop 2.7.2, and while following the Official Docs I ran into this problem at first as well.

The reason was that I had forgotten to follow the "Prepare to Start the Hadoop Cluster" section.

I solved it by setting JAVA_HOME in etc/hadoop/hadoop-env.sh.
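In case it helps, this is roughly what that fix looks like; the JDK path below is only an example (on a Mac, take it from the output of `/usr/libexec/java_home`), not something prescribed by the docs:

```shell
# etc/hadoop/hadoop-env.sh
# YARN containers do not inherit the login shell's environment, so JAVA_HOME
# must be set here explicitly. Exit code 127 ("command not found") is the
# typical symptom when a container cannot locate the java binary.
# Example path only -- substitute your own JDK location:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home
```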


For me, it was because Hadoop was running with the wrong version of the JDK. I use Hadoop 2.6.5. At first I started Hadoop with Oracle JDK 1.8.0_131, ran the example jar, and got the error. After switching to JDK 1.7.0_80, the example worked like a charm.

There is a page about this: HadoopJavaVersions.
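As a small illustration of checking which JDK is in play, the version can be pulled out of the `java -version` banner in plain shell. The banner string here is canned for the example; in practice substitute the real output of `"$JAVA_HOME/bin/java" -version 2>&1`:

```shell
# Canned 'java -version' banner; replace with the real output of
#   "$JAVA_HOME/bin/java" -version 2>&1
banner='java version "1.7.0_80"'

# The version string sits between the first pair of double quotes.
ver=$(printf '%s\n' "$banner" | awk -F'"' '/version/ {print $2}')

# For 1.x-style versions the major release is the second dotted field.
major=$(printf '%s' "$ver" | cut -d. -f2)

echo "JDK $ver (major $major)"   # prints: JDK 1.7.0_80 (major 7)
```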
