
I am using spark-redshift with pyspark to query and process Redshift data, and I am hitting: [Amazon](500310) Invalid operation: Assert

The query works fine if I run it against Redshift from a workbench tool. But spark-redshift unloads the data to S3 and then retrieves it, and when I run it through Spark it throws the error below. What is the problem here, and how can I fix it?

UNLOAD ('SELECT "x","y" FROM (select x,y from table_name where 
((load_date=20171226 and hour>=16) or (load_date between 20171227 and 
20171226) or (load_date=20171227 and hour<=16))) ') TO 's3:s3path' WITH 
CREDENTIALS 'aws_access_key_id=xxx;aws_secret_access_key=yyy' ESCAPE 
MANIFEST 

py4j.protocol.Py4JJavaError: An error occurred while calling o124.save. 
: java.sql.SQLException: [Amazon](500310) Invalid operation: Assert 
Details: 
----------------------------------------------- 
    error: Assert 
    code:  1000 
    context: !AmLeaderProcess - 
    query:  583860 
    location: scheduler.cpp:642 
    process: padbmaster [pid=31521] 
    -----------------------------------------------; 
    at com.amazon.redshift.client.messages.inbound.ErrorResponse.toErrorException(ErrorResponse.java:1830) 
    at com.amazon.redshift.client.PGMessagingContext.handleErrorResponse(PGMessagingContext.java:822) 
    at com.amazon.redshift.client.PGMessagingContext.handleMessage(PGMessagingContext.java:647) 
    at com.amazon.jdbc.communications.InboundMessagesPipeline.getNextMessageOfClass(InboundMessagesPipeline.java:312) 
    at com.amazon.redshift.client.PGMessagingContext.doMoveToNextClass(PGMessagingContext.java:1080) 
    at com.amazon.redshift.client.PGMessagingContext.getErrorResponse(PGMessagingContext.java:1048) 
    at com.amazon.redshift.client.PGClient.handleErrorsScenario2ForPrepareExecution(PGClient.java:2524) 
    at com.amazon.redshift.client.PGClient.handleErrorsPrepareExecute(PGClient.java:2465) 
    at com.amazon.redshift.client.PGClient.executePreparedStatement(PGClient.java:1420) 
    at com.amazon.redshift.dataengine.PGQueryExecutor.executePreparedStatement(PGQueryExecutor.java:370) 
    at com.amazon.redshift.dataengine.PGQueryExecutor.execute(PGQueryExecutor.java:245) 
    at com.amazon.jdbc.common.SPreparedStatement.executeWithParams(Unknown Source) 
    at com.amazon.jdbc.common.SPreparedStatement.execute(Unknown Source) 
    at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:108) 
    at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:108) 
    at com.databricks.spark.redshift.JDBCWrapper$$anonfun$2.apply(RedshiftJDBCWrapper.scala:126) 
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) 
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
Caused by: com.amazon.support.exceptions.ErrorException: [Amazon](500310) Invalid operation: Assert 

That is the query that gets generated.
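
For context, the pyspark side is set up roughly as in the minimal sketch below, assuming the standard spark-redshift read options (url, query, tempdir); the JDBC URL, bucket, and credential option are placeholders, not the real values:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("redshift-read").getOrCreate()

    # The inner query from the question; spark-redshift wraps it in the
    # UNLOAD statement shown above on its own.
    inner_query = """
    select x, y from table_name where
    ((load_date = 20171226 and hour >= 16) or
     (load_date between 20171227 and 20171226) or
     (load_date = 20171227 and hour <= 16))
    """

    # Placeholder connection values; forward_spark_s3_credentials is one of
    # several credential mechanisms spark-redshift supports.
    df = (spark.read
          .format("com.databricks.spark.redshift")
          .option("url", "jdbc:redshift://host:5439/db?user=xxx&password=yyy")
          .option("query", inner_query)
          .option("tempdir", "s3n://my-bucket/tmp")
          .option("forward_spark_s3_credentials", "true")
          .load())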


Have you tried simplifying the query? You don't need the uppercase wrapper. Assert errors usually occur when there is a problem interpreting data types, e.g. for the 2 parts of a 'union' query where the Nth column in one part is varchar and the same column in the other part is integer or null. Maybe it is an assert error on data coming from different nodes. – AlexYes


Actually, the query I am using is only the inner part. The outer part (the wrapper) gets generated because the data has to be unloaded to S3; I guess it comes from spark-redshift. –


What if you run the full generated query in the workbench? Does it return the same error? – AlexYes

Answer


Assert errors usually happen when something goes wrong interpreting data types, for example in the two parts of a union query where the Nth column in one part is varchar while the same column in the other part is integer or null. Maybe your assert error happens on data coming from different nodes (as it would in a union query). Try adding an explicit cast for each column, like x::integer.
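
Applied to the inner query from the question, it could look like the sketch below; the ::integer and ::varchar targets are illustrative assumptions, so substitute the columns' actual types:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("redshift-read").getOrCreate()

    # Explicit casts on every projected column. The target types here are
    # assumptions; use the columns' real types.
    inner_query_with_casts = """
    select x::integer as x, y::varchar as y
    from table_name
    where ((load_date = 20171226 and hour >= 16) or
           (load_date between 20171227 and 20171226) or
           (load_date = 20171227 and hour <= 16))
    """

    # Same placeholder connection values as in the question's setup.
    df = (spark.read
          .format("com.databricks.spark.redshift")
          .option("url", "jdbc:redshift://host:5439/db?user=xxx&password=yyy")
          .option("query", inner_query_with_casts)
          .option("tempdir", "s3n://my-bucket/tmp")
          .option("forward_spark_s3_credentials", "true")
          .load())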