
I have a fresh install of the Hortonworks 2.3_1 sandbox for Oracle VirtualBox, and I get a java.net.SocketTimeoutException whenever I try to run a MapReduce job. I haven't changed anything except the memory and cores available to the VM.

Full output of the run:

WARNING: Use "yarn jar" to launch YARN applications. 
15/09/01 01:15:17 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/ 
15/09/01 01:15:20 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050 
15/09/01 01:16:19 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 
15/09/01 01:18:09 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-601678901-10.0.2.15-1439987491556:blk_1073742292_1499 
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.0.2.15:52924 remote=/10.0.2.15:50010] 
     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) 
     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) 
     at java.io.FilterInputStream.read(FilterInputStream.java:83) 
     at java.io.FilterInputStream.read(FilterInputStream.java:83) 
     at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280) 
     at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:749) 
15/09/01 01:18:11 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/root/.staging/job_1441069639378_0001 
Exception in thread "main" java.io.IOException: All datanodes DatanodeInfoWithStorage[10.0.2.15:50010,DS-56099a5f-3cb3-426e-8e1a-ff3b53df9bf2,DISK] are bad. Aborting... 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1117) 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:909) 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:412) 
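
(Side note: the JobResourceUploader warning above is unrelated to the timeout; it only means the job's driver class doesn't implement Tool. A minimal sketch of the pattern that warning asks for, with a hypothetical driver class name:)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Hypothetical driver showing the Tool/ToolRunner pattern the warning suggests.
    public class MyJobDriver extends Configured implements Tool {
        @Override
        public int run(String[] args) throws Exception {
            Configuration conf = getConf(); // picks up -D overrides parsed by ToolRunner
            // ... build and submit the Job here ...
            return 0;
        }

        public static void main(String[] args) throws Exception {
            // ToolRunner parses generic options (-D, -files, -libjars) before calling run()
            System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
        }
    }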
Full name of the OVA file I'm using: Sandbox_HDP_2.3_1_virtualbox.ova

My host is a Windows 7 Home Premium machine with eight threads of execution (4 hyper-threaded cores, I think).


Take a look at the following links and see whether they help you: https://issues.apache.org/jira/browse/HDFS-693 and https://issues.apache.org/jira/browse/HDFS-770 –

Answer


The problem was exactly what it seemed: a timeout error. I fixed it by going to the Hadoop config folder and raising all the timeouts as well as the retry counts (even though, judging from the logs, the retries never came into play), and by stopping unnecessary services on both the host and the guest operating system.
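
For reference, the relevant settings live in hdfs-site.xml under the Hadoop config folder (/etc/hadoop/conf on the HDP sandbox). A minimal sketch, assuming these are the timeouts the answer refers to; the values are illustrative, not necessarily what was actually used:

    <!-- hdfs-site.xml: raise HDFS client/datanode socket timeouts and retries -->
    <property>
      <!-- client-side read timeout; the 65000 ms in the log is the 60000 ms
           default plus the per-datanode pipeline extension -->
      <name>dfs.client.socket-timeout</name>
      <value>300000</value>
    </property>
    <property>
      <!-- datanode write-side socket timeout -->
      <name>dfs.datanode.socket.write.timeout</name>
      <value>300000</value>
    </property>
    <property>
      <!-- number of retries for block writes before giving up -->
      <name>dfs.client.block.write.retries</name>
      <value>10</value>
    </property>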

Thanks; sunrise76's links to those issues are what pointed me to the config folder.