I am running a Spark Streaming job. The Spark workers die after running for a while.
My cluster configuration:
Spark version - 1.6.1
spark node config
cores - 4
memory - 6.8 G (out of 8G)
number of nodes - 3
For my job I am giving 6 GB of memory per node and a total of 3 cores.
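For reference, the submit command looks roughly like this (the master URL, class name and jar are placeholders, not my exact values; the memory and core numbers are the ones above):

# illustrative submit command for the standalone cluster
spark-submit \
  --master spark://<master-host>:7077 \
  --executor-memory 6g \
  --total-executor-cores 3 \
  --class com.example.MyStreamingJob \
  my-streaming-job.jar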
After the job has been running for about an hour, I get the following error in the worker logs:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f53b496a000, 262144, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 262144 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/spark/sbin/hs_err_pid1622.log
However, I don't see any error in the work dir /app-id/stderr.
What are the generally recommended Xm* (-Xms/-Xmx) settings for running Spark workers?
How can I debug this issue further?
PS: I started my workers and master with the default settings.
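To be concrete about what "default settings" means here and which knobs I think the Xm* question is about, this is my understanding of where worker and executor memory is configured in standalone mode (the values below are illustrative, not what I am actually running):

# conf/spark-env.sh (per worker node)
SPARK_WORKER_MEMORY=6g     # total memory the worker may hand out to executors
SPARK_WORKER_CORES=4       # total cores the worker may hand out
SPARK_DAEMON_MEMORY=1g     # heap (-Xmx) of the master/worker daemon JVMs themselves

# conf/spark-defaults.conf (per application)
spark.executor.memory   6g
spark.driver.memory     2g

If -Xms/-Xmx should be pinned somewhere else for the worker JVMs, that is exactly what I am asking about.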
Update:
I see that my executors keep getting added and then removed because of the "cannot allocate memory" error.
Logs:
16/06/24 12:53:47 INFO MemoryStore: Block broadcast_53 stored as values in memory (estimated size 14.3 KB, free 440.8 MB)
16/06/24 12:53:47 INFO BlockManager: Found block rdd_145_1 locally
16/06/24 12:53:47 INFO BlockManager: Found block rdd_145_0 locally
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f3440743000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)
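For completeness, these are the OS-level checks I can run on a worker node right after the failure (all standard Linux tools; the hs_err path is the one reported in the error above):

# 1. The JVM fatal error log mentioned in the message
less /usr/local/spark/sbin/hs_err_pid1622.log

# 2. OS-level free memory and swap at the time of failure
free -m

# 3. Whether the kernel OOM killer was involved
dmesg | grep -i -E "out of memory|oom"

# 4. Which processes hold the most resident memory
ps aux --sort=-rss | head -n 15

# 5. Kernel overcommit policy (errno=12 from mmap is an OS-level refusal)
cat /proc/sys/vm/overcommit_memory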
(Comment) Please check this: https://support.datastax.com/hc/en-us/articles/205610783-FAQ-Why-There-different-places-to-configure-Spark-Worker-memory-