I am running a Flink streaming job with parallelism 1. The job fails on its own, suddenly, after about 8 hours, showing:
Association with remote system [akka.tcp://[email protected]:44863] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
2017-04-12 00:48:36,683 INFO org.apache.flink.yarn.YarnJobManager - Container container_e35_1491556562442_5086_01_000002 is completed with diagnostics: Container [pid=64750,containerID=container_e35_1491556562442_5086_01_000002] is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB physical memory used; 2.9 GB of 4.2 GB virtual memory used. Killing container.
Dump of the process-tree for container_e35_1491556562442_5086_01_000002 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 64750 64748 64750 64750 (bash) 0 0 108654592 306 /bin/bash -c /usr/java/jdk1.7.0_67-cloudera/bin/java -Xms724m -Xmx724m -XX:MaxDirectMemorySize=1448m -Djava.library.path=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native/ -Dlog.file=/var/log/hadoop-yarn/container/application_1491556562442_5086/container_e35_1491556562442_5086_01_000002/taskmanager.log -Dlogback.configurationFile=file:logback.xml -Dlog4j.configuration=file:log4j.properties org.apache.flink.yarn.YarnTaskManagerRunner --configDir . 1> /var/log/hadoop-yarn/container/application_1491556562442_5086/container_e35_1491556562442_5086_01_000002/taskmanager.out 2> /var/log/hadoop-yarn/container/application_1491556562442_5086/container_e35_1491556562442_5086_01_000002/taskmanager.err
|- 64756 64750 64750 64750 (java) 269053 57593 2961149952 524252 /usr/java/jdk1.7.0_67-cloudera/bin/java -Xms724m -Xmx724m -XX:MaxDirectMemorySize=1448m -Djava.library.path=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native/ -Dlog.file=/var/log/hadoop-yarn/container/application_1491556562442_5086/container_e35_1491556562442_5086_01_000002/taskmanager.log -Dlogback.configurationFile=file:logback.xml -Dlog4j.configuration=file:log4j.properties org.apache.flink.yarn.YarnTaskManagerRunner --configDir .
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
There are no errors on the application/code side.
I need help understanding the possible causes.
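One plausible reading of the JVM flags in the process dump: with -Xms724m -Xmx724m and -XX:MaxDirectMemorySize=1448m, heap plus direct memory alone may reach 724 + 1448 = 2172 MiB, already above the 2048 MiB container limit before PermGen, thread stacks, and native libraries are counted. YARN's physical-memory check can therefore kill the container even though the JVM itself never throws OutOfMemoryError. The minimal standalone sketch below (not taken from the job) shows how direct ByteBuffer allocations grow the process footprint while the Java heap stays flat:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectMemoryDemo {
        public static void main(String[] args) {
            List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
            Runtime rt = Runtime.getRuntime();
            // Allocate 64 MiB of direct (off-heap) memory per iteration
            // and keep it reachable so it is not freed.
            for (int i = 1; i <= 16; i++) {
                buffers.add(ByteBuffer.allocateDirect(64 << 20));
                System.out.printf("iteration %2d: heap used = %d MiB, off-heap allocated = %d MiB%n",
                        i, (rt.totalMemory() - rt.freeMemory()) >> 20, i * 64L);
            }
            // The heap figure stays roughly flat while the process RSS grows;
            // that RSS is what YARN's physical-memory limit measures.
        }
    }

Running it with, e.g., java -Xmx256m -XX:MaxDirectMemorySize=2g DirectMemoryDemo and watching the process RSS in top shows the divergence between heap usage and container memory usage.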
Could this be caused by the application, or by memory consumption in some YARN resource-management process? I am running the job with parallelism 1. – Sohi
I tried monitoring the task manager with jmap but found nothing that could be exhausting memory. There are no out-of-memory errors in the logs either. – Sohi
I tried running the container with 4 GB of memory. This time the job ran for 20 hours and then failed with the same exception. The only thing I noticed is that the PermGen space grew by 15 MB. – Sohi
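Since jmap inspects only the Java heap, it would not show direct-buffer or PermGen growth. A hedged suggestion: the JVM's standard MemoryMXBean and BufferPoolMXBean (both available in JDK 7, which the container uses) can be polled to track non-heap and direct-buffer usage over the hours before the kill. A minimal sketch of such a probe:

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.util.List;

    public class MemoryProbe {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            List<BufferPoolMXBean> pools =
                    ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
            while (true) {
                // Heap vs non-heap (PermGen, code cache) as seen by the JVM.
                System.out.printf("heap used: %d MiB, non-heap used: %d MiB%n",
                        mem.getHeapMemoryUsage().getUsed() >> 20,
                        mem.getNonHeapMemoryUsage().getUsed() >> 20);
                // Direct and mapped buffer pools, which jmap does not report.
                for (BufferPoolMXBean p : pools) {
                    System.out.printf("  buffer pool %-6s: used %d MiB (capacity %d MiB)%n",
                            p.getName(), p.getMemoryUsed() >> 20, p.getTotalCapacity() >> 20);
                }
                Thread.sleep(60000L);
            }
        }
    }

Logging these figures once a minute and comparing the sum against the container's RSS would show whether the growth is inside the JVM (heap, PermGen, direct buffers) or in native allocations outside it.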