
Hadoop/YARN (v0.23.3) pseudo-distributed mode setup :: No job nodes

I just set up Hadoop/YARN 2.x (specifically v0.23.3) in pseudo-distributed mode.

I followed the instructions from several blogs & websites, which more or less prescribe the same recipe for setting it up. I also followed the 3rd edition of the O'Reilly Hadoop book (which, ironically, was the least helpful).

Problem:

After running "start-dfs.sh" and then "start-yarn.sh", while all of the daemons 
do start (as indicated by jps(1)), the Resource Manager web portal 
(Here: http://localhost:8088/cluster/nodes) indicates 0 (zero) job-nodes in the 
cluster. So while the example/test Hadoop job I submit does indeed get 
scheduled, it pends forever because, I assume, the configuration doesn't see a 
node to run it on. 
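(A quick sanity check, assuming curl and netstat are available on the box, just to confirm the ResourceManager web UI is actually serving and that the RM ports from my config below are listening; this is only a sketch, not part of my original steps:)

    $ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088/cluster/nodes 
    $ netstat -tlnp 2>/dev/null | grep -E ':8088|:8032' 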

Below are the steps I performed, including the resultant configuration files. 
Hopefully the community can help me out... (And thank you in advance.) 

Configuration:

The following environment variables are set in the profiles of both my own and the hadoop UNIX accounts: ~/.profile:

export HADOOP_HOME=/home/myself/APPS.d/APACHE_HADOOP.d/latest 
    # Note: /home/myself/APPS.d/APACHE_HADOOP.d/latest -> hadoop-0.23.3 

export HADOOP_COMMON_HOME=${HADOOP_HOME} 
export HADOOP_INSTALL=${HADOOP_HOME} 
export HADOOP_CLASSPATH=${HADOOP_HOME}/lib 
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop/conf 
export HADOOP_MAPRED_HOME=${HADOOP_HOME} 
export YARN_HOME=${HADOOP_HOME} 
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop/conf 
export JAVA_HOME=/usr/lib/jvm/jre 
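(For completeness, a quick, hedged way to confirm these exports actually took effect in a given login shell:)

    hadoop$ env | grep -E '^(HADOOP|YARN|JAVA)' 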

hadoop$ java -version

java version "1.7.0_06-icedtea" 
OpenJDK Runtime Environment (fedora-2.3.1.fc17.2-x86_64) 
OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode) 

# Although the above shows OpenJDK, the same problem happens with Sun's JRE/JDK. 

The NameNode & DataNode directories, also specified in etc/hadoop/conf/hdfs-site.xml:

/home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/DATANODE.d/ 
/home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/NAMENODE.d/ 
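(For context, I created those directories by hand before formatting; the exact commands are reconstructed here as a sketch, using the paths above:)

    hadoop$ mkdir -p /home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/NAMENODE.d 
    hadoop$ mkdir -p /home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/DATANODE.d 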

Next, the various XML configuration files (again, YARN/MRv2/v0.23.3 here):

hadoop$ pwd; ls -l 
/home/myself/APPS.d/APACHE_HADOOP.d/latest/etc/hadoop/conf 
lrwxrwxrwx 1 hadoop hadoop 16 Sep 20 13:14 core-site.xml -> ../core-site.xml 
lrwxrwxrwx 1 hadoop hadoop 16 Sep 20 13:14 hdfs-site.xml -> ../hdfs-site.xml 
lrwxrwxrwx 1 hadoop hadoop 18 Sep 20 13:14 httpfs-site.xml -> ../httpfs-site.xml 
lrwxrwxrwx 1 hadoop hadoop 18 Sep 20 13:14 mapred-site.xml -> ../mapred-site.xml 
-rw-rw-r-- 1 hadoop hadoop 10 Sep 20 15:36 slaves 
lrwxrwxrwx 1 hadoop hadoop 16 Sep 20 13:14 yarn-site.xml -> ../yarn-site.xml 

core-site.xml

<?xml version="1.0"?> 
<!-- core-site.xml --> 
<configuration> 
    <property> 
    <name>fs.default.name</name> 
    <value>hdfs://localhost/</value> 
    </property> 
</configuration> 
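(A hedged aside, not something I have confirmed as the cause: in 0.23.x the fs.default.name key is deprecated in favor of fs.defaultFS; the old name should still work, but the newer equivalent stanza would be:)

    <property> 
    <name>fs.defaultFS</name> 
    <value>hdfs://localhost/</value> 
    </property> 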

mapred-site.xml

<?xml version="1.0"?> 
<!-- mapred-site.xml --> 
<configuration> 

    <!-- Same problem whether this (legacy) stanza is included or not. --> 
    <property> 
    <name>mapred.job.tracker</name> 
    <value>localhost:8021</value> 
    </property> 

    <property> 
    <name>mapreduce.framework.name</name> 
    <value>yarn</value> 
    </property> 
</configuration> 

hdfs-site.xml

<!-- hdfs-site.xml --> 
<configuration> 
    <property> 
    <name>dfs.replication</name> 
    <value>1</value> 
    </property> 
    <property> 
    <name>dfs.namenode.name.dir</name> 
    <value>file:/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/HDFS.d/NAMENODE.d</value> 
    </property> 
    <property> 
    <name>dfs.datanode.data.dir</name> 
    <value>file:/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/HDFS.d/DATANODE.d</value> 
    </property> 
</configuration> 

yarn-site.xml

<?xml version="1.0"?> 
<!-- yarn-site.xml --> 
<configuration> 
    <property> 
    <name>yarn.resourcemanager.address</name> 
    <value>localhost:8032</value> 
    </property> 
    <property> 
    <name>yarn.nodemanager.aux-services</name> 
    <value>mapreduce.shuffle</value> 
    </property> 
    <property> 
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name> 
    <value>org.apache.hadoop.mapred.ShuffleHandler</value> 
    </property> 
    <property> 
    <name>yarn.nodemanager.resource.memory-mb</name> 
    <value>4096</value> 
    </property> 
    <property> 
    <name>yarn.nodemanager.local-dirs</name> 
    <value>/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/TEMP.d</value> 
    </property> 
</configuration> 
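(Another hedged aside: the mapreduce.shuffle value above is, as far as I know, correct for 0.23.x; in later Hadoop 2.x releases the aux-service was renamed to mapreduce_shuffle (with an underscore), so anyone reading this on a newer release would need:)

    <property> 
    <name>yarn.nodemanager.aux-services</name> 
    <value>mapreduce_shuffle</value> 
    </property> 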

etc/hadoop/conf/slaves

localhost 
    # Community/friends, is this entry correct/needed for my pseudo-dist mode? 

Miscellaneous closing notes:

(1) As you may have gleaned from above, all files/directories are owned 
    by the 'hadoop' UNIX user. There is a hadoop:hadoop UNIX user and 
    group, respectively. 

(2) The following command was run after the NAMENODE & DATANODE directories 
    (listed above) were created (and whose paths were entered into 
    hdfs-site.xml): 

    hadoop$ hadoop namenode -format 

(3) Next, I ran "start-dfs.sh", then "start-yarn.sh". 
    Here is jps(1) output: 

hadoop$ jps 
    21979 DataNode 
    22253 ResourceManager 
    22384 NodeManager 
    22156 SecondaryNameNode 
    21829 NameNode 
    22742 Jps 
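
(4) One more diagnostic, hedged: since jps shows the NodeManager running yet the 
    ResourceManager reports zero nodes, grepping the NodeManager log for 
    registration errors seems like the next step. The log filename pattern below 
    is an assumption about where my tarball install writes its logs; adjust as needed: 

    hadoop$ grep -iE 'registered|error|exception' ${HADOOP_HOME}/logs/yarn-*-nodemanager-*.log | tail -20 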

Thank you!


Not sure, but should 'file:/' be 'file://'? – scarcer

Answer


After having no success with this issue (and believe me, I tried everything above), I got hadoop working using a different approach. Whereas above I downloaded a gzip/tar ball of the hadoop distribution from one of the download mirrors, this time I used the Cloudera CDH distribution of RPM packages, installed via their YUM repos. In the hope that this helps someone, here are the detailed steps.

Step 1:

For Hadoop 0.20.x (MapReduce version 1):

# rpm -Uvh http://archive.cloudera.com/redhat/6/x86_64/cdh/cdh3-repository-1.0-1.noarch.rpm 
    # rpm --import http://archive.cloudera.com/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera 
    # yum install hadoop-0.20-conf-pseudo 

- or -

For Hadoop 0.23.x (MapReduce version 2):

# rpm -Uvh http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.noarch.rpm 
    # rpm --import http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera 
    # yum install hadoop-conf-pseudo 

In both cases above, installing that "pseudo" package (the name stands for "pseudo-distributed Hadoop" mode) by itself conveniently triggers the installation of all the other necessary packages you'll need (via dependency resolution).
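(If you want to see exactly which packages the dependency resolution pulled in, a quick check on an RPM-based system like the one above:)

    # rpm -qa | grep -i hadoop 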

Step 2:

Install Sun/Oracle's Java JRE (if you haven't already done so). You can install it via the RPM they provide, or via the gzip/tar ball portable version. It doesn't matter which, as long as you set and export "JAVA_HOME" correctly and ensure that ${JAVA_HOME}/bin/java is in your PATH.

# echo $JAVA_HOME; which java 
    /home/myself/APPS.d/JAVA-JRE.d/jdk1.7.0_07 
    /home/myself/APPS.d/JAVA-JRE.d/jdk1.7.0_07/bin/java 

Note: I actually create a symlink named "latest" and point/re-point it at the version-specific Java directory whenever I update Java. I spelled out the full path above for the reader's understanding.
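(For illustration only, that symlink arrangement looks roughly like this; the directory layout is just my own convention, nothing Hadoop requires:)

    $ cd /home/myself/APPS.d/JAVA-JRE.d 
    $ ln -sfn jdk1.7.0_07 latest 
    $ export JAVA_HOME=/home/myself/APPS.d/JAVA-JRE.d/latest 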

Step 3: Format HDFS as the "hdfs" UNIX user (created by the "yum install" above).

# sudo su hdfs -c "hadoop namenode -format" 

Step 4:

Manually start the Hadoop daemons.

for file in /etc/init.d/hadoop* 
    do 
    ${file} start 
    done 
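(As a hedged follow-up, the same loop with "status" is a quick way to confirm they all came up; this assumes the CDH init scripts support a status action, which they did on my box:)

    for file in /etc/init.d/hadoop* 
    do 
    ${file} status 
    done 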

Step 5:

Check to see if everything is working. The following is for MapReduce v1 (which, at this superficial level, isn't much different from MapReduce v2).

root# jps 
    23104 DataNode 
    23469 TaskTracker 
    23361 SecondaryNameNode 
    23187 JobTracker 
    23267 NameNode 
    24754 Jps 

    # Do the next commands as yourself (not as "root"). 
    myself$ hadoop fs -mkdir /foo 
    myself$ hadoop fs -rmr /foo 
    myself$ hadoop jar /usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u5-examples.jar pi 2 100000 
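(Note that the examples jar path above is specific to my CDH3u5 install; if yours differs, something along these lines should locate it:)

    myself$ ls /usr/lib/hadoop*/hadoop-*examples*.jar 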

I hope this helps!


P.S. This was done on a Fedora 17 x86_64 O/S.