
I installed Hadoop on my Ubuntu 12.04 machine by following the procedure in the link below, but now there is no Namenode, Datanode, or Secondary NameNode to stop.

http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php

Everything installed successfully, but when I run start-all.sh only some of the services are running.

wanderer@wanderer-Lenovo-IdeaPad-S510p:~$ su - hduse
Password:

hduse@wanderer-Lenovo-IdeaPad-S510p:~$ cd /usr/local/hadoop/sbin

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh 
Starting namenodes on [localhost] 
hduse@localhost's password:
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out 
hduse@localhost's password:
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out 
Starting secondary namenodes [0.0.0.0] 
hduse@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out 
starting yarn daemons 
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out 
hduse@localhost's password:
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out 

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ jps
7940 Jps 
7545 ResourceManager 
7885 NodeManager 

But when I stop the services by running the stop-all.sh script:

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh 
Stopping namenodes on [localhost] 
hduse@localhost's password:
localhost: no namenode to stop 
hduse@localhost's password:
localhost: no datanode to stop 
Stopping secondary namenodes [0.0.0.0] 
hduse@0.0.0.0's password:
0.0.0.0: no secondarynamenode to stop 
stopping yarn daemons 
stopping resourcemanager 
hduse@localhost's password:
localhost: stopping nodemanager 
no proxyserver to stop 

My configuration files:

  1. Edited the .bashrc file

    vi ~/.bashrc 
    
    #HADOOP VARIABLES START 
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle/ 
    export HADOOP_INSTALL=/usr/local/hadoop 
    export PATH=$PATH:$HADOOP_INSTALL/bin 
    export PATH=$PATH:$HADOOP_INSTALL/sbin 
    export HADOOP_MAPRED_HOME=$HADOOP_INSTALL 
    export HADOOP_COMMON_HOME=$HADOOP_INSTALL 
    export HADOOP_HDFS_HOME=$HADOOP_INSTALL 
    export YARN_HOME=$HADOOP_INSTALL 
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native 
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib" 
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native 
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib" 
    #HADOOP VARIABLES END 
    
  2. hdfs-site.xml

    vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml 
    
    <configuration> 
    <property> 
        <name>dfs.replication</name> 
        <value>1</value> 
        <description>Default block replication. 
        The actual number of replications can be specified when the file is created. 
        The default is used if replication is not specified in create time. 
        </description> 
    </property> 
    <property> 
        <name>dfs.namenode.name.dir</name> 
        <value>file:/usr/local/hadoop_store/hdfs/namenode</value> 
    </property> 
    <property> 
        <name>dfs.datanode.data.dir</name> 
        <value>file:/usr/local/hadoop_store/hdfs/datanode</value> 
    </property> 
    </configuration> 
    
  3. hadoop-env.sh

    vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh 
    
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle/ 
    export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"} 
    
    for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do 
        if [ "$HADOOP_CLASSPATH" ]; then 
        export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f 
        else 
        export HADOOP_CLASSPATH=$f 
        fi 
    done 
    
    export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true" 
    export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS" 
    export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS" 
    
    export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS" 
    
    export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS" 
    export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS" 
    
    # The following applies to multiple commands (fs, dfs, fsck, distcp etc) 
    export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS" 
    export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER} 
    
    export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER} 
    export HADOOP_PID_DIR=${HADOOP_PID_DIR} 
    export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR} 
    
    # A string representing this instance of hadoop. $USER by default. 
    export HADOOP_IDENT_STRING=$USER 
    
  4. core-site.xml

    vi /usr/local/hadoop/etc/hadoop/core-site.xml 
    <configuration> 
    <property> 
        <name>hadoop.tmp.dir</name> 
        <value>/app/hadoop/tmp</value> 
        <description>A base for other temporary directories.</description> 
    </property> 
    
    <property> 
        <name>fs.default.name</name> 
        <value>hdfs://localhost:54310</value> 
        <description>The name of the default file system. A URI whose 
        scheme and authority determine the FileSystem implementation. The 
        uri's scheme determines the config property (fs.SCHEME.impl) naming 
        the FileSystem implementation class. The uri's authority is used to 
        determine the host, port, etc. for a filesystem.</description> 
    </property> 
    </configuration> 
    
  5. mapred-site.xml

    vi /usr/local/hadoop/etc/hadoop/mapred-site.xml 
    <configuration> 
    <property> 
        <name>mapred.job.tracker</name> 
        <value>localhost:54311</value> 
        <description>The host and port that the MapReduce job tracker runs 
        at. If "local", then jobs are run in-process as a single map 
        and reduce task. 
        </description> 
    </property> 
    </configuration> 
    

    $ javac -version

    javac 1.8.0_66 
    

    $ java -version

    java version "1.8.0_66" 
    Java(TM) SE Runtime Environment (build 1.8.0_66-b17) 
    Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode) 
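
For reference, a quick sanity check of the setup above (paths taken from the .bashrc and hdfs-site.xml listings; run as the hduse user, which is an assumption based on the logs) could look like this sketch:

# reload the environment and confirm the hadoop binaries are on PATH
source ~/.bashrc
hadoop version

# confirm the HDFS storage directories from hdfs-site.xml exist and are writable by hduse
ls -ld /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode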
    

I am new to Hadoop and cannot find the problem. Where can I find the JobTracker and NameNode log files so I can track down what is happening with the services?

+0

I found the problem. I made a silly mistake: the actual hadoop user is hduse, not hduser. I changed the ownership of /usr/local/hadoop_store/hdfs. Wow!! Now it works!!! ..... – Wanderer
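
For anyone hitting the same thing, a hedged sketch of the ownership fix described in that comment (the hduse user and a hadoop group are assumptions; adjust to your own setup):

# give the hadoop user ownership of the HDFS storage directories
sudo chown -R hduse:hadoop /usr/local/hadoop_store
ls -ld /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode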

Answers

0

You need to set up passwordless authentication for ssh. The hduse user should be able to ssh into localhost without being prompted for a password.
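
A minimal sketch of the usual key-based setup, assuming the hduse user and the default key path:

# run as hduse: create a key pair with an empty passphrase
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# authorize that key for logins to this machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
# this should now succeed without a password prompt
ssh localhost exit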

1

If you look carefully at the output of the start-all.sh command, you can easily see the log file paths. Each service writes to its own log file when it attempts to start:

localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out 
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out 
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out 
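
The .out files above mostly capture stdout; the reason a daemon died is usually in the matching .log file in the same directory. A sketch of checking the namenode log (the filename is assumed to follow the pattern shown above):

ls /usr/local/hadoop/logs/
tail -n 100 /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.log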
3

If it is not an SSH problem, do the following:

  1. Delete everything in the temporary directory (rm -rf /app/hadoop/tmp) and format the namenode with bin/hadoop namenode -format. Start the namenode and datanode with bin/start-dfs.sh. Type jps on the command line to check whether the nodes are running (the commands are collected in the sketch after this list).

  2. Check that hduser has write permission on the hadoop_store/hdfs/namenode and datanode directories with ls -ld.

    You can change the permissions with sudo chmod +777 /hadoop_store/hdfs/namenode/
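
A sketch of those steps in order, with paths taken from this answer and from the question's configuration (the hadoop user here is assumed to be hduse):

cd /usr/local/hadoop

# 1. wipe the temp dir from core-site.xml and reformat HDFS (this destroys any existing HDFS data)
sudo rm -rf /app/hadoop/tmp/*
bin/hadoop namenode -format

# start the HDFS daemons and check which ones are actually running
start-dfs.sh   # on PATH via the .bashrc shown in the question
jps

# 2. verify the storage directories are owned and writable by the hadoop user
ls -ld /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode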

+1

'bin/hadoop namenode -format' seems to have done it for me. –

+1

@AlperAkture Cool! –