Hadoop multi-node cluster setup: I am trying to set up a multi-node Hadoop cluster, but I am getting 0 active datanodes and my HDFS shows 0 bytes allocated. However, the nodemanager daemon is running on the datanode.
Running on the master: masterhost1 172.31.100.3 # namenode (also acts as secondary namenode)
datahost1 172.31.100.4 # datanode
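For context, both hosts are configured to use the namenode RPC address that shows up in the log below (masterhost1:9000). A minimal sketch of the relevant core-site.xml property, assuming that hostname and port (an illustration of the setup, not a verbatim copy of my file):

```xml
<!-- core-site.xml (sketch): the same fs.defaultFS value is used on the
     namenode and on the datanode, so both talk to masterhost1:9000 -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://masterhost1:9000</value>
  </property>
</configuration>
```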
The datanode log is shown below:
`STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc865b490b9a6260e9611a5b8633cab885b3d247; compiled by jenkins on 2015-12-18T01:19Z
STARTUP_MSG: java = 1.8.0_71
************************************************************/
2016-01-24 03:53:28,368 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered UNIX signal handlers for [TERM, HUP, INT]
2016-01-24 03:53:28,862 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-01-24 03:53:36,454 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-01-24 03:53:37,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-01-24 03:53:37,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2016-01-24 03:53:37,132 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is datahost1
2016-01-24 03:53:37,142 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2016-01-24 03:53:37,195 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2016-01-24 03:53:37,197 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwidth is 1048576 bytes/s
2016-01-24 03:53:37,197 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number of threads for balancing is 5
2016-01-24 03:53:47,331 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-01-24 03:53:47,375 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2016-01-24 03:53:47,395 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-24 03:53:47,400 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2016-01-24 03:53:47,404 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-01-24 03:53:47,405 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-01-24 03:53:47,559 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-01-24 03:53:47,566 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
2016-01-24 03:53:47,566 INFO org.mortbay.log: jetty-6.1.26
2016-01-24 03:53:48,565 INFO org.mortbay.log: Started [email protected]:50075
2016-01-24 03:53:49,200 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hadoop
2016-01-24 03:53:49,201 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = sudo
2016-01-24 03:53:59,319 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-24 03:53:59,354 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2016-01-24 03:53:59,401 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2016-01-24 03:53:59,450 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2016-01-24 03:53:59,485 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2016-01-24 03:53:59,491 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-01-24 03:53:59,499 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (Datanode Uuid unassigned) service to masterhost1/172.31.100.3:9000 starting to offer service
2016-01-24 03:53:59,503 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-24 03:53:59,504 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2016-01-24 03:54:00,805 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:01,808 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:02,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:03,826 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:04,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
`
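The repeated WARN lines say the data directory /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI. A sketch of how I read that for the hdfs-site.xml property (the file:// form is my assumption of what the warning asks for; the path itself is the one from the log):

```xml
<!-- hdfs-site.xml (sketch): dfs.datanode.data.dir written as a file:// URI,
     as the WARN in the log asks for; the directory itself is unchanged -->
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop_tmp/hdfs/datanode</value>
  </property>
</configuration>
```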
I think I have the same problem as you. I have 3 slaves, and when I do a put it reports that no datanodes are running –