
Spark: Self-suppression not permitted when writing a large file to HDFS

I am using Spark to write a large file to HDFS. Basically what I do is join three big files, convert the resulting dataframe to JSON with toJSON(), and then save it to HDFS with saveAsTextFile. The final output is about 4TB. The application runs very slowly (as I should have expected), and after 6 hours it throws the exception java.lang.IllegalArgumentException: Self-suppression not permitted. The detailed failure cause, copied from the monitoring page, is below:

Job aborted due to stage failure: Task 37 in stage 6.0 failed 4 times, most recent failure: Lost task 37.3 in stage 6.0 (TID 361, 192.168.10.149): java.lang.IllegalArgumentException: Self-suppression not permitted 
at java.lang.Throwable.addSuppressed(Throwable.java:1043) 
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1219) 
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1116) 
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1095) 
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
at org.apache.spark.scheduler.Task.run(Task.scala:88) 
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745) 
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/dawei/upid_json_all/_temporary/0/_temporary/attempt_201512210857_0006_m_000037_361/part-00037 could only be replicated to 0 nodes instead of minReplication (=1). There are 5 datanode(s) running and no node(s) are excluded in this operation. 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1562) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3245) 
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:663) 
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482) 
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619) 
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962) 
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040) 
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036) 
at java.security.AccessController.doPrivileged(Native Method) 
at javax.security.auth.Subject.doAs(Subject.java:422) 
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656) 
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034) 

at org.apache.hadoop.ipc.Client.call(Client.java:1468) 
at org.apache.hadoop.ipc.Client.call(Client.java:1399) 
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) 
at com.sun.proxy.$Proxy14.addBlock(Unknown Source) 
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399) 
at sun.reflect.GeneratedMethodAccessor119.invoke(Unknown Source) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:497) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
at com.sun.proxy.$Proxy15.addBlock(Unknown Source) 
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532) 
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349) 
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588) 

Driver stacktrace: 
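
For reference, a minimal PySpark (1.x style) sketch of the pipeline described at the top; the input paths and the join key "id" are hypothetical placeholders, and only the output path is taken from the stack trace above:

    # Minimal sketch, assuming three pre-existing inputs and a common "id" join key.
    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="join-and-write-json")
    sqlContext = SQLContext(sc)

    df1 = sqlContext.read.parquet("hdfs:///data/input1")   # placeholder paths
    df2 = sqlContext.read.parquet("hdfs:///data/input2")
    df3 = sqlContext.read.parquet("hdfs:///data/input3")

    # Join the three large inputs, serialize every row to a JSON string,
    # and write the strings as plain text files (~4TB of output).
    joined = df1.join(df2, "id").join(df3, "id")
    joined.toJSON().saveAsTextFile("hdfs:///user/dawei/upid_json_all")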

Can anyone tell me what is causing this problem, and how can I fix it?


Looks like an issue with your HDFS. Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/dawei/upid_json_all/_temporary/0/_temporary/attempt_201512210857_0006_m_000037_361/part-00037 could only be replicated to 0 nodes instead of minReplication (=1). There are 5 datanode(s) running and no node(s) are excluded in this operation. Check your ulimit and look at the DFS logs to see what is going on. – AkhlD


Are you using eclipse? –


Nah, Python on PyCharm. I guess this may be caused by insufficient disk space - only about 7TB are left on my disks, which is not enough for 3 replicas. – TheStupidOne
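
The arithmetic fits: a ~4TB output at the HDFS default replication factor of 3 needs roughly 12TB, which is more than the ~7TB reported free. One hedged workaround sketch, reusing the hypothetical sc and joined objects from the sketch in the question, is to lower the replication factor for this particular write through the Hadoop configuration that saveAsTextFile picks up:

    # Keep a single replica of this output so ~4TB of free space is enough.
    # dfs.replication is read from the client-side configuration when the
    # output files are created, so set it before the save starts.
    sc._jsc.hadoopConfiguration().set("dfs.replication", "1")
    joined.toJSON().saveAsTextFile("hdfs:///user/dawei/upid_json_all")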

Answer


From this error:

Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/user/dawei/upid_json_all/_temporary/0/_temporary/attempt_201512210857_0006_m_000037_361/ 
part-00037 could only be replicated to 0 nodes instead of minReplication (=1). 
There are 5 datanode(s) running and no node(s) are excluded in this operation. 

it seems that replication is not happening. If you manage to fix this error, things may fall into the right place.

This may be due to one of the following issues:

  1. Inconsistency in your datanodes: restart your Hadoop cluster and see if this solves the problem.
  2. Communication between the datanodes and the namenode: network connectivity problems, or permission/firewall issues related to port accessibility.
  3. Disk space may be full on the datanodes (a capacity check is sketched at the end of this answer).
  4. The datanodes may be busy or unresponsive.
  5. Invalid configuration, such as a negative block size.

Also have a look at this related SE question on the same topic:

HDFS error: could only be replicated to 0 nodes, instead of 1
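
Regarding point 3 above, a hedged diagnostic sketch: query the cluster's total and remaining HDFS capacity through PySpark's JVM gateway (this assumes a live SparkContext named sc; FileSystem.getStatus() is a standard Hadoop API):

    # Check total and remaining HDFS capacity to rule out full datanodes.
    hadoop_conf = sc._jsc.hadoopConfiguration()
    fs = sc._jvm.org.apache.hadoop.fs.FileSystem.get(hadoop_conf)
    status = fs.getStatus()  # org.apache.hadoop.fs.FsStatus
    print("capacity  (TB): %.2f" % (status.getCapacity() / 1e12))
    print("remaining (TB): %.2f" % (status.getRemaining() / 1e12))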