I had a working Spark-to-HAWQ JDBC connection for pulling table data, but now, two days later, there is a problem extracting data from the tables, even though nothing has changed in the Spark configuration: a PostgreSQL error comes back via JDBC from Spark.
Simple step #1 - print the schema of a simple table in HAWQ. I can create a SQLContext DataFrame and connect to the HAWQ DB:
df = sqlContext.read.format('jdbc').options(url=db_url, dbtable=db_table).load()
df.printSchema()
It prints:
root
|-- product_no: integer (nullable = true)
|-- name: string (nullable = true)
|-- price: decimal (nullable = true)
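For reference, HAWQ speaks the PostgreSQL wire protocol, so `db_url` and `db_table` in the snippet above are just an ordinary PostgreSQL JDBC URL and a table name. A minimal sketch of that setup (the host, port, database, credentials and table name below are placeholders, not the real values, and the PostgreSQL JDBC driver is assumed to be on the classpath):
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="hawq-jdbc-test")
sqlContext = SQLContext(sc)

# HAWQ is reached through a standard PostgreSQL JDBC URL pointing at the master.
# Host, port, database, user, password and table name are placeholders here.
db_url = ("jdbc:postgresql://hawq-master.example.com:5432/mydb"
          "?user=gpadmin&password=changeme")
db_table = "products"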
But when actually trying to pull the data:
df.select("product_no").show()
these errors pop up...
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost):
org.postgresql.util.PSQLException: ERROR: could not write 3124 bytes to temporary file: No space left on device (buffile.c:408) (seg33 adnpivhdwapda04.gphd.local:40003 pid=544124) (cdbdisp.c:1571)
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2182)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1911)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:173)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:615)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:465)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:350)
at org.apache.spark.sql.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:372)
at org.apache.spark.sql.jdbc.JDBCRDD.compute(JDBCRDD.scala:350)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:248)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:208)
Things I have already tried (but willing to retry if there are more precise steps):
- Ran 'df -i' on the HAWQ master node and there is only 1% utilization
- Tried dbvacuum on the HAWQ database (VACUUM ALL is not recommended on HAWQ)
- Tried creating a tiny new DB (with a single table, 3 columns), no luck
It can't really be out of space over there, so what is tripping this up?
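One additional check that might help narrow it down (purely a sketch reusing the same connection; the table name and LIMIT subquery below are placeholders, not part of the original job) would be to push a small subquery down through the JDBC options and see whether even a tiny result set still trips the temporary-file error:
# Hypothetical check: Spark's JDBC source accepts a parenthesised subquery
# with an alias in place of a table name, so only a handful of rows are pulled.
small_df = sqlContext.read.format('jdbc').options(
    url=db_url,
    dbtable='(SELECT product_no, name, price FROM products LIMIT 10) AS t'
).load()
small_df.show()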
Could be a permissions problem. Check the postgres logs; you are swimming in murky water. Wearing sunglasses. /ORM – wildplasser
Please show the complete, unmodified output of 'df -h' and 'mount', as well as 'SHOW temp_tablespaces' from 'psql'. Also +1, and thanks for showing the full stack trace. –