
I am using Hadoop 2.7.3, and after configuring it the NameNode does not show up in jps. I have given the relevant files what I believe are the correct permissions, deleted the /tmp files, recreated the NameNode directory, and reformatted it. Can anyone tell me why the Hadoop daemon is not showing up in jps? Thanks in advance.

22561 Jps 
21633 DataNode 
21975 ResourceManager 
22093 NodeManager 
21821 SecondaryNameNode 
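
For reference, this is roughly how I checked ownership and permissions on the HDFS storage directories (the paths are the ones set in my hdfs-site.xml below; the chown assumes Hadoop is started as the current user):

# Check who owns the NameNode/DataNode storage directories
ls -ld /usr/local/hadoop_store/hdfs/namenode
ls -ld /usr/local/hadoop_store/hdfs/datanode
# They should be owned and writable by the user that starts Hadoop; if not:
sudo chown -R "$USER":"$USER" /usr/local/hadoop_store/hdfs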

core-site.xml:

<configuration> 
<property> 
    <name>hadoop.tmp.dir</name> 
    <value>/app/hadoop/tmp</value> 
    <description>A base for other temporary directories.</description> 
</property> 
<property> 
    <name>fs.defaultFS</name> 
    <value>hdfs://localhost:54310</value> 
    <description>The name of the default file system. A URI whose 
    scheme and authority determine the FileSystem implementation. The 
    uri's scheme determines the config property (fs.SCHEME.impl) naming 
    the FileSystem implementation class. The uri's authority is used to 
    determine the host, port, etc. for a filesystem.</description> 
</property> 
</configuration> 

hdfs-site.xml:

<configuration> 
<property> 
    <name>dfs.replication</name> 
    <value>1</value> 
</property> 
<property> 
    <name>dfs.namenode.name.dir</name> 
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value> 
</property> 
<property> 
    <name>dfs.datanode.data.dir</name> 
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value> 
</property> 
<property> 
    <name>dfs.permissions.enabled</name> 
    <value>true</value> 
</property> 
</configuration> 

mapred-site.xml:

<configuration> 
    <property> 
     <name>mapreduce.framework.name</name> 
     <value>yarn</value> 
    </property> 
    <property> 
     <name>mapred.job.tracker</name> 
     <value>localhost:54311</value> 
     <description>The host and port that the MapReduce job tracker runs 
      at. If "local", then jobs are run in-process as a single map 
      and reduce task. 
     </description> 
    </property> 
</configuration> 

yarn-site.xml:

<configuration> 
    <property> 
     <name>mapreduce.framework.name</name> 
     <value>yarn</value> 
    </property> 
    <property> 
     <name>mapred.job.tracker</name> 
     <value>localhost:54311</value> 
     <description>The host and port that the MapReduce job tracker runs 
      at. If "local", then jobs are run in-process as a single map 
      and reduce task. 
     </description> 
    </property> 
</configuration> 

NameNode log file:

2017-08-30 16:49:29,764 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 
2017-08-30 16:49:29,771 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode [] 
2017-08-30 16:49:30,131 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2017-08-30 16:49:30,246 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2017-08-30 16:49:30,246 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started 
2017-08-30 16:49:30,249 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:54310 
2017-08-30 16:49:30,250 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:54310 to access this namenode/service. 
2017-08-30 16:49:37,330 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070 
2017-08-30 16:49:37,414 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 
2017-08-30 16:49:37,426 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets. 
2017-08-30 16:49:37,432 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined 
2017-08-30 16:49:37,438 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter) 
2017-08-30 16:49:37,441 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs 
2017-08-30 16:49:37,441 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 
2017-08-30 16:49:37,441 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 
2017-08-30 16:49:37,582 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter) 
2017-08-30 16:49:37,584 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/* 
2017-08-30 16:49:37,606 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException 
java.net.BindException: Port in use: 0.0.0.0:50070 
     at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919) 
     at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856) 
     at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:753) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559) 
Caused by: java.net.BindException: Address already in use 
     at sun.nio.ch.Net.bind0(Native Method) 
     at sun.nio.ch.Net.bind(Net.java:433) 
     at sun.nio.ch.Net.bind(Net.java:425) 
     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) 
     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) 
     at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) 
     at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914) 
     ... 8 more 
2017-08-30 16:49:37,611 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system... 
2017-08-30 16:49:37,612 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped. 
2017-08-30 17:04:08,717 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 
2017-08-30 17:04:08,717 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. 
java.net.BindException: Port in use: 0.0.0.0:50070 
     at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919) 
     at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856) 
     at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:753) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559) 
Caused by: java.net.BindException: Address already in use 
     at sun.nio.ch.Net.bind0(Native Method) 
     at sun.nio.ch.Net.bind(Net.java:433) 
     at sun.nio.ch.Net.bind(Net.java:425) 
     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) 
     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) 
     at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) 
     at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914) 
     ... 8 more 

Answers:


There is another process listening on the same port, 50070. Stop it first.
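
For example, a minimal sketch of how to find the offending process (assuming netstat or lsof is available on the machine; 12345 below is a placeholder PID):

sudo netstat -tulpn | grep 50070
# or, equivalently:
sudo lsof -i :50070

# If it is a stale Hadoop daemon, stop it cleanly with stop-dfs.sh;
# otherwise kill it using the PID reported above (12345 is a placeholder):
kill 12345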


Thank you, it works now. – Yasodhara


This happens if you do not format HDFS before running it.

Run:

hdfs namenode -format 
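Note that reformatting erases all HDFS metadata, so it is only safe on a fresh or disposable installation. The usual sequence is roughly this (a sketch, assuming $HADOOP_HOME/sbin is on the PATH):

# Stop any running HDFS daemons first
stop-dfs.sh
# Reformat the NameNode metadata directory (destroys existing HDFS data!)
hdfs namenode -format
# Start HDFS again and check that the NameNode now appears
start-dfs.sh
jps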

I have already done that too. I posted the log file here; if you can make sense of it, please tell me how to fix this (see above). I have tried everything to fix it but could not solve it. – Yasodhara
