"Too many open files" error in Spark (sparklyr) when using more cores

I am using sparklyr in local mode with the following configuration:
conf <- spark_config()
conf$`sparklyr.cores.local` <- 28
conf$`sparklyr.shell.driver-memory` <- "1000G"
conf$spark.memory.fraction <- 0.9
sc <- spark_connect(master = "local",
                    version = "2.1.1",
                    config = conf)
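For completeness, the read itself is a plain spark_read_csv call against this connection; a rough sketch (the path and table name below are placeholders, not my real data):

# Placeholder path and table name -- the actual job reads one large CSV the same way
dat <- spark_read_csv(sc,
                      name = "my_table",           # placeholder table name
                      path = "/path/to/data.csv",  # placeholder path
                      header = TRUE)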
This works fine when I read in the CSV with spark_read_csv. However, when I use more cores, for example:
conf <- spark_config()
conf$`sparklyr.cores.local` <- 30
conf$`sparklyr.shell.driver-memory` <- "1000G"
conf$spark.memory.fraction <- 0.9
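and then connect exactly as before (repeating the call here only to be explicit):

sc <- spark_connect(master = "local",
                    version = "2.1.1",
                    config = conf)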
I get the following error:
Error in value[3L] : Failed to fetch data: org.apache.spark.SparkException: Job aborted due to stage failure: Task 10 in stage 3.0 failed 1 times, most recent failure: Lost task 10.0 in stage 3.0 (TID 132, localhost, executor driver): java.io.FileNotFoundException: /tmp/blockmgr-9ded7dfb-20b8-4c72-8a6f-2db12ba884fb/1f/temp_shuffle_e69d56ba-80b4-499f-a91f-0ae63fe4553f (Too many open files)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at org.apache.spark.storage.DiskBlockObjectWriter.initialize(DiskBlockObjectWriter.scala:102)
    at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:115)
    at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:235)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:152)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMa
I increased the ulimit from 1040 (soft and hard) to 419430, but it made no difference.
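To double-check that the new limit is actually visible to the R session that launches Spark, it can be queried from R; a small sketch (assuming system() runs a shell that inherits this R process's limits, so the numbers reflect what the Spark driver sees):

# Soft and hard open-file limits as seen by children of this R process
system("ulimit -Sn", intern = TRUE)
system("ulimit -Hn", intern = TRUE)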
My VM has 128 cores and 2 TB of memory, and I would like to be able to use all of it.

Any suggestions?