We are building wso2am in cluster mode. Is there any documentation on building a wso2am-analytics cluster? I tried using wso2das, following this reference: https://docs.wso2.com/display/DAS310/Working+with+Product+Specific+Analytics+Profiles. How do I cluster wso2am-analytics-2.0.0?
However, I get the following error:
TID: [-1234] [] [2016-12-09 15:00:00,101] ERROR {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Error while executing the scheduled task for the script: APIM_LATENCY_BREAKDOWN_STATS {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException: Exception in executing query CREATE TEMPORARY TABLE APIMGT_PERHOUR_EXECUTION_TIME USING CarbonAnalytics OPTIONS(tableName "ORG_WSO2_APIMGT_STATISTICS_PERHOUREXECUTIONTIMES", schema " year INT -i, month INT -i, day INT -i, hour INT -i, context STRING, api_version STRING, api STRING, tenantDomain STRING, apiPublisher STRING, apiResponseTime DOUBLE, securityLatency DOUBLE, throttlingLatency DOUBLE, requestMediationLatency DOUBLE, responseMediationLatency DOUBLE, backendLatency DOUBLE, otherLatency DOUBLE, firstEventTime LONG, _timestamp LONG -i", primaryKeys "year, month, day, hour, context, api_version, tenantDomain, apiPublisher", incrementalProcessing "APIMGT_PERHOUR_EXECUTION_TIME, DAY", mergeSchema "false")
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:764)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:721)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Unknown options : incrementalprocessing
at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.checkParameters(AnalyticsRelationProvider.java:123)
at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.setParameters(AnalyticsRelationProvider.java:113)
at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.createRelation(AnalyticsRelationProvider.java:75)
at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.createRelation(AnalyticsRelationProvider.java:45)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.execution.datasources.CreateTempTableUsing.run(ddl.scala:92)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:760)
... 11 more
=======================================================
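The bottom of the trace says the root cause is "Unknown options : incrementalprocessing", so it looks like the CarbonAnalytics relation provider in DAS 3.1.0 does not recognize the incrementalProcessing option that this APIM analytics script passes. As an untested sketch on my part (it simply drops incremental processing rather than fixing it), the same query should presumably parse once that option is removed:

CREATE TEMPORARY TABLE APIMGT_PERHOUR_EXECUTION_TIME USING CarbonAnalytics
OPTIONS (tableName "ORG_WSO2_APIMGT_STATISTICS_PERHOUREXECUTIONTIMES",
         schema "year INT -i, month INT -i, day INT -i, hour INT -i, context STRING, api_version STRING, api STRING, tenantDomain STRING, apiPublisher STRING, apiResponseTime DOUBLE, securityLatency DOUBLE, throttlingLatency DOUBLE, requestMediationLatency DOUBLE, responseMediationLatency DOUBLE, backendLatency DOUBLE, otherLatency DOUBLE, firstEventTime LONG, _timestamp LONG -i",
         primaryKeys "year, month, day, hour, context, api_version, tenantDomain, apiPublisher",
         -- incrementalProcessing "APIMGT_PERHOUR_EXECUTION_TIME, DAY" removed: the DAS 3.1.0 provider rejects it
         mergeSchema "false")

If that is the case, the real question is whether these scripts are meant to run on plain wso2das at all, or only on the wso2am-analytics distribution.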
Any suggestions would be greatly appreciated!
Thanks a lot; I am using wso2am-analytics. Another question: is there any documentation on purging analytics data? We need to purge the analytics data but keep the aggregated data in the stat DB, as described in this document: https://docs.wso2.com/display/AM191/Publishing+API+Runtime+Statistics+Using+WSO2+DAS#PublishingAPIRuntimeStatisticsUsingWSO2DAS-PurgingData(optional) – Angus
What is your distribution and/or clustering mode? Are you running wso2am-analytics as a single node alongside wso2am-2.0.0, or as a cluster? Sorry, I am not sure about purging, but if I remember correctly the data is kept for two weeks by default and is then purged automatically. –
wso2am-analytics is running in HA mode per this document: https://docs.wso2.com/display/CLUSTER44x/Minimum+High+Availability+Deployment+-+DAS+3.0.1 After I ran a load test, the 'WSO2_ANALYTICS_EVENT_STORE_DB' database grew by about 500 MB. Is there a configuration that purges data so the disk does not fill up? Thanks a lot!! – Angus
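For what it is worth, the DAS 3.x docs describe a data-purging section in <DAS_HOME>/repository/conf/analytics/analytics-config.xml. A minimal sketch, assuming the DAS 3.0.1 element names (the values and the table regex below are only illustrative, and in a cluster the flag reportedly has to be enabled on every node):

<analytics-data-purging>
    <!-- enable the scheduled purging task -->
    <purging-enable>true</purging-enable>
    <!-- Quartz cron for when the task runs: here, daily at midnight -->
    <cron-expression>0 0 0 * * ?</cron-expression>
    <!-- regex patterns for the tables to purge; narrow this to the raw event tables -->
    <purge-include-tables>
        <table>ORG_WSO2_APIMGT_STATISTICS_.*</table>
    </purge-include-tables>
    <!-- records older than this many days become eligible for purging -->
    <data-retention-days>14</data-retention-days>
</analytics-data-purging>

Since the summarised statistics are written to the separate stat RDBMS, purging the analytics event-store tables should not touch them, but that is worth verifying on a test node first.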