2017-06-05

Answer

When a container runs out of memory, why can't Spark try a new container?

...because Spark does not do that by default (and cannot without additional configuration).

At spark-submit time, you control the number of executors and, more importantly, the total number of CPU cores and the amount of RAM. That is what --driver-memory, --executor-memory, --driver-cores, --total-executor-cores, --executor-cores, --num-executors and the other options are for.

$ ./bin/spark-submit --help 
... 
    --driver-memory MEM   Memory for driver (e.g. 1000M, 2G) (Default: 1024M). 
    --driver-java-options  Extra Java options to pass to the driver. 
    --driver-library-path  Extra library path entries to pass to the driver. 
    --driver-class-path   Extra class path entries to pass to the driver. Note that 
           jars added with --jars are automatically included in the 
           classpath. 

    --executor-memory MEM  Memory per executor (e.g. 1000M, 2G) (Default: 1G). 
... 
Spark standalone with cluster deploy mode only: 
    --driver-cores NUM   Cores for driver (Default: 1). 
... 
Spark standalone and Mesos only: 
    --total-executor-cores NUM Total cores for all executors. 

Spark standalone and YARN only: 
    --executor-cores NUM  Number of cores per executor. (Default: 1 in YARN mode, 
           or all available cores on the worker in standalone mode) 

YARN-only: 
    --driver-cores NUM   Number of cores used by the driver, only in cluster mode 
           (Default: 1). 
    --queue QUEUE_NAME   The YARN queue to submit to (Default: "default"). 
    --num-executors NUM   Number of executors to launch (Default: 2). 
           If dynamic allocation is enabled, the initial number of 
           executors will be at least NUM. 
... 
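As an illustration, here is a hypothetical spark-submit invocation that sets these limits explicitly when running on YARN. The application jar, main class, and the concrete values are placeholders, not something from the original question:

```shell
# Request 10 executors, each with 4G of RAM and 2 cores,
# plus 2G for the driver. Adjust the numbers to your cluster.
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 2G \
  --executor-memory 4G \
  --executor-cores 2 \
  --num-executors 10 \
  --class com.example.MyApp \
  app.jar
```

With these flags the application asks YARN for a fixed amount of resources up front; if an executor container exceeds its memory limit, YARN kills it rather than the application automatically requesting a larger one.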

Some of the options depend on the deploy mode, while others depend on the cluster manager in use (which in your case would be YARN).

To summarize... it is you who decides how many resources to assign to a Spark application, using the spark-submit options.

Read Submitting Applications in Spark's official documentation.
