2015-09-30

spark-shell SQLITE_ERROR: Connection is closed when connecting to a SQLite database from Spark via JDBC

I am using Apache Spark 1.5.1 and trying to connect to a local SQLite database named clinton.db. Creating a DataFrame from a table in the database works fine, but when I perform some action on the created object, I get the error below, saying "SQL error or missing database (Connection is closed)". Interestingly, I still get the result of the operation. Any idea what I can do to solve the problem, i.e. avoid the error?

Launch command:

../spark/bin/spark-shell --master local[8] --jars ../libraries/sqlite-jdbc-3.8.11.1.jar --classpath ../libraries/sqlite-jdbc-3.8.11.1.jar 

Reading from the database:

val emails = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:sqlite:../data/clinton.sqlite", "dbtable" -> "Emails")).load() 

A simple count (which fails):

emails.count 

The error:

15/09/30 09:06:39 WARN JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
    at org.sqlite.core.DB.newSQLException(DB.java:890)
    at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
    at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
    at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:60)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77)
    at org.apache.spark.scheduler.Task.run(Task.scala:90)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

res1: Long = 7945


Looks like a problem with the path you are passing to the .sqlite database file. Have you tried using an absolute path, or at least one that doesn't start with '..'? Maybe the process you are running doesn't have permissions on the parent directory. –


@aguibert: I just tried an absolute path to the database, but the problem persists. I also changed the permissions on clinton.sqlite to 777, but that didn't help either. The problem occurs with both Spark 1.4.1 and the newly released 1.5.1. – Michael

Answer


I got the same error today, and the important line is the one just before the exception:

15/11/30 12:13:02 INFO jdbc.JDBCRDD: closed connection

15/11/30 12:13:02 WARN jdbc.JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
    at org.sqlite.core.DB.newSQLException(DB.java:890)
    at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
    at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)

So Spark successfully closes the JDBC connection, and only afterwards fails to close the JDBC statement.
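The same "operate on a closed database" failure can be reproduced outside Spark with Python's built-in sqlite3 module — a minimal sketch (names and table are illustrative, not from the question):

```python
import sqlite3

# Open a throwaway in-memory database and a cursor (the analogue of a JDBC statement).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emails (id INTEGER)")

# Close the connection FIRST, as Spark's JDBCRDD does ...
conn.close()

# ... then try to close the cursor: SQLite refuses, because its
# underlying database handle is already gone.
try:
    cur.close()
except sqlite3.ProgrammingError as e:
    print(e)
```

This mirrors the JDBC stack trace: `JDBC3Statement.close` raises once the connection backing it has been closed.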


Looking at the source, close() is called twice:

Line 358 (org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD, Spark 1.5.1):

context.addTaskCompletionListener{ context => close() } 

Line 469:

override def hasNext: Boolean = {
  if (!finished) {
    if (!gotNext) {
      nextValue = getNext()
      if (finished) {
        close()
      }
      gotNext = true
    }
  }
  !finished
}

If you look at the close() method (line 443):

def close() {
  if (closed) return

You can see that it checks the variable closed, but that value is never set to true.
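The guard bug can be reproduced in miniature, independent of Spark — a minimal Python sketch with illustrative names (BuggyCloser/FixedCloser are not Spark classes):

```python
class BuggyCloser:
    """close() checks `closed` but never sets it, like JDBCRDD.close()
    in Spark 1.5.1, so a second call re-runs the release logic against
    an already-released resource and fails."""

    def __init__(self):
        self.closed = False
        self._released = False

    def close(self):
        if self.closed:          # guard never fires: `closed` stays False
            return
        self._release()
        # missing: self.closed = True

    def _release(self):
        if self._released:
            raise RuntimeError("Connection is closed")
        self._released = True


class FixedCloser(BuggyCloser):
    def close(self):
        if self.closed:
            return
        self._release()
        self.closed = True       # the one-line fix


buggy = BuggyCloser()
buggy.close()                    # first call, as from hasNext()
try:
    buggy.close()                # second call, as from the completion listener
except RuntimeError as e:
    print("buggy:", e)           # buggy: Connection is closed

fixed = FixedCloser()
fixed.close()
fixed.close()                    # second call is now a harmless no-op
print("fixed: no error")
```

With the flag actually set, the second call from the task-completion listener becomes a no-op instead of an exception.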

If I see it correctly, this bug is still present in master. I have filed a bug report.


Thanks for the reply and for filing the bug report! – Michael


Well, I even fixed it :-) – Beryllium


Heh, nice work! :) – Michael