
How can I see the SQL statements that Spark sends to my database?

I have a Spark cluster and a Vertica database. I use

df = spark.read.jdbc(  # etc.

to load a Spark dataframe into the cluster. When I run a certain groupby aggregation,

from pyspark.sql import functions as F

df2 = df.groupby('factor').agg(F.stddev('sum(PnL)'))
df2.show()

I then get a syntax exception from Vertica:

Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811) 
    at scala.Option.foreach(Option.scala:257) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1890) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1903) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1916) 
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:347) 
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39) 
    at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2193) 
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57) 
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2546) 
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2192) 
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2199) 
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1935) 
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1934) 
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2576) 
    at org.apache.spark.sql.Dataset.head(Dataset.scala:1934) 
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2149) 
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:239) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) 
    at py4j.Gateway.invoke(Gateway.java:280) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:214) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: java.sql.SQLSyntaxErrorException: [Vertica][VJDBC](4856) ERROR: Syntax error at or near "Window" 
    at com.vertica.util.ServerErrorData.buildException(Unknown Source) 
    at com.vertica.io.ProtocolStream.readExpectedMessage(Unknown Source) 
    at com.vertica.dataengine.VDataEngine.prepareImpl(Unknown Source) 
    at com.vertica.dataengine.VDataEngine.prepare(Unknown Source) 
    at com.vertica.dataengine.VDataEngine.prepare(Unknown Source) 
    at com.vertica.jdbc.common.SPreparedStatement.<init>(Unknown Source) 
    at com.vertica.jdbc.jdbc4.S4PreparedStatement.<init>(Unknown Source) 
    at com.vertica.jdbc.VerticaJdbc4PreparedStatementImpl.<init>(Unknown Source) 
    at com.vertica.jdbc.VJDBCObjectFactory.createPreparedStatement(Unknown Source) 
    at com.vertica.jdbc.common.SConnection.prepareStatement(Unknown Source) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:400) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:379) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) 
    at org.apache.spark.scheduler.Task.run(Task.scala:86) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    ... 1 more 
Caused by: com.vertica.support.exceptions.SyntaxErrorException: [Vertica][VJDBC](4856) ERROR: Syntax error at or near "Window" 
    ... 27 more 

What I want to know is: what exactly is Spark trying to execute against the Vertica database? Is there some trace configuration I can set somewhere?

Thanks!

Answers


You can look at the query_requests system table to see what SQL has been run against your database. You can filter on user_name and start_timestamp to help narrow down the query.

Normally, when you control the SQL, you would add in a label. In this case, though, you'll have to search for it.

Also note that the retention period is determined by your data collector settings.
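For illustration, here is a minimal PySpark sketch of that lookup, reusing the same kind of JDBC connection the dataframe was loaded with (the URL, properties, and user filter below are placeholders, not values from the question); the inner SELECT can just as well be run directly in vsql:

# Minimal sketch; all connection values are placeholders.
url = "jdbc:vertica://<host>:5433/<db>"
properties = {"user": "<user>", "password": "<password>",
              "driver": "com.vertica.jdbc.Driver"}

# v_monitor.query_requests holds one row per statement Vertica executed;
# filter on user_name/start_timestamp (and request_label when you set one).
query = """(SELECT start_timestamp, user_name, request_label, request
            FROM v_monitor.query_requests
            WHERE user_name = '<user>'
              AND start_timestamp > NOW() - INTERVAL '1 hour') AS qr"""

audit = spark.read.jdbc(url=url, table=query, properties=properties)
audit.orderBy(audit.start_timestamp.desc()).show(truncate=False)

The request column holds the full SQL text that was sent, so you can search it for the statement that failed on "Window".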


This is very useful - thanks! Just to note for others, though: this is a query you run in Vertica, not Spark. – ThatDataGuy


With the Spark Web UI you can inspect the behavior and performance of your Spark application. It also shows the SQL in the Web UI's SQL tab. You can also dig through the resource manager logs for more details.

Spark web UI at http://<host ip>:4040. 

You can access the SQL tab under the /SQL URL, e.g. http://<host ip>:4040/SQL/.
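Note that 4040 is only the default; if that port is taken, Spark falls back to 4041, 4042, and so on. In recent Spark versions you can ask the running session for its actual UI address, e.g.:

# Prints the driver's Web UI address, e.g. http://<host ip>:4040
print(spark.sparkContext.uiWebUrl)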


For more details, see: https://jaceklaskowski.gitbooks.io/mastering-apache-spark/content/spark-webui-sql.html –


I think you are confusing the SQL executed by Spark on the dataframe with the SQL statements Spark is issuing to my source (external) database. I'm after the latter. – ThatDataGuy


I think Spark's WholeStageCodegen can provide some details about the code Spark generates for what will be executed against the database. Calling explain(true) on the dataframe will give the execution plan, and import org.apache.spark.sql.execution.debug._ followed by df2.debugCodegen will give the generated code. –
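For reference, a PySpark sketch of that suggestion, applied to the df2 from the question: the physical plan it prints includes the JDBC relation and any pushed-down filters, which approximates what Spark hands to Vertica.

# Print the parsed, analyzed, optimized, and physical plans;
# look for the Scan JDBCRelation node in the physical plan.
df2.explain(True)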
