2016-08-14

WSO2 DAS Spark script fails to execute

Below is the Spark script I am running. It runs successfully in the DAS (3.0.1) batch analytics console, but it fails to execute when saved as a script in batch analytics.

insert overwrite table CLASS_COUNT select ((timestamp/120000) * 120000) as time , vin , username , classType,   
sum(acceleCount) as acceleCount , sum(decceleCount) as decceleCount 
from ACCELE_COUNTS 
group by ((timestamp/120000) * 120000) ,classType, vin, username; 

Error:

ERROR: [1.199] failure: ``limit'' expected but identifier ACCELE_COUNTSgroup found insert overwrite table X1234_CLASS_COUNT select ((timestamp/120000) * 120000) as time , vin , username , classType, sum(acceleCount) as acceleCount , sum(decceleCount) as decceleCountfrom ACCELE_COUNTSgroup by ((timestamp/120000) * 120000) ,classType, vin, username^

Before this, I executed the following without any problem:

CREATE TEMPORARY TABLE ACCELE_COUNTS 
USING CarbonAnalytics 
OPTIONS (tableName "KAMPANA_RECKLESS_COUNT_STREAM", 
    schema "timestamp LONG , vin STRING, username STRING, classType STRING, acceleCount INT,decceleCount INT"); 

CREATE TEMPORARY TABLE CLASS_COUNT 
USING org.wso2.carbon.analytics.spark.event.EventStreamProvider 
OPTIONS (receiverURL "tcp://localhost:7611", 
    username "admin", 
    password "admin", 
    streamName "DAS_RECKELSS_COUNT_STREAM", 
    version "1.0.0", 
    description "Events are published when product quantity goes beyond a certain level", 
    nickName "product alerts", 
    payload "time LONG,vin STRING,username STRING, classType STRING, acceleCount INT, decceleCount INT" 
); 

Answer


This happens because you have no space between:

1) decceleCount and from

2) ACCELE_COUNTS and group by

When the script is saved, the lines are concatenated, producing the fused tokens `decceleCountfrom` and `ACCELE_COUNTSgroup` seen in the error. So make sure there is a space between the two words, even if the second word is on a new line.
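Applying that fix, a sketch of the corrected statement looks like the following. The key point is that each line must end with a space (or the continuation keyword must start with one), so the query stays valid after the saved script's lines are joined together:

```sql
insert overwrite table CLASS_COUNT
select ((timestamp/120000) * 120000) as time, vin, username, classType,
       sum(acceleCount) as acceleCount,
       sum(decceleCount) as decceleCount -- trailing space kept before "from"
from ACCELE_COUNTS                       -- trailing space kept before "group by"
group by ((timestamp/120000) * 120000), classType, vin, username;
```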