2015-09-04 26 views
3

What is AWSRequestMetricsFullSupport, and how do I turn it off? I'm trying to save some data from a Spark DataFrame to an S3 bucket. This is simple enough:

dataframe.saveAsParquetFile("s3://kirk/my_file.parquet") 

The data is saved successfully, but the console stays busy for a very long time. I get thousands of lines like these:

2015-09-04 20:48:19,591 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[200], ServiceName=[Amazon S3], AWSRequestID=[5C3211750F4FF5AB], ServiceEndpoint=[https://kirk.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[63.827], HttpRequestTime=[62.919], HttpClientReceiveResponseTime=[61.678], RequestSigningTime=[0.05], ResponseProcessingTime=[0.812], HttpClientSendRequestTime=[0.038], 
2015-09-04 20:48:19,610 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[204], ServiceName=[Amazon S3], AWSRequestID=[709DA41540539FE0], ServiceEndpoint=[https://kirk.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[18.064], HttpRequestTime=[17.959], HttpClientReceiveResponseTime=[16.703], RequestSigningTime=[0.06], ResponseProcessingTime=[0.003], HttpClientSendRequestTime=[0.046], 
2015-09-04 20:48:19,664 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[204], ServiceName=[Amazon S3], AWSRequestID=[1B1EB812E7982C7A], ServiceEndpoint=[https://kirk.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[54.36], HttpRequestTime=[54.26], HttpClientReceiveResponseTime=[53.006], RequestSigningTime=[0.057], ResponseProcessingTime=[0.002], HttpClientSendRequestTime=[0.034], 
2015-09-04 20:48:19,675 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[404], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: AF6F960F3B2BF3AB), S3 Extended Request ID: CLs9xY8HAxbEAKEJC4LS1SgpqDcnHeaGocAbdsmYKwGttS64oVjFXJOe314vmb9q], ServiceName=[Amazon S3], AWSErrorCode=[404 Not Found], AWSRequestID=[AF6F960F3B2BF3AB], ServiceEndpoint=[https://kirk.s3.amazonaws.com], Exception=1, HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[10.111], HttpRequestTime=[10.009], HttpClientReceiveResponseTime=[8.758], RequestSigningTime=[0.043], HttpClientSendRequestTime=[0.044], 
2015-09-04 20:48:19,685 INFO [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[404], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: F2198ACEB4B2CE72), S3 Extended Request ID: J9oWD8ncn6WgfUhHA1yqrBfzFC+N533oD/DK90eiSvQrpGH4OJUc3riG2R4oS1NU], ServiceName=[Amazon S3], AWSErrorCode=[404 Not Found], AWSRequestID=[F2198ACEB4B2CE72], ServiceEndpoint=[https://kirk.s3.amazonaws.com], Exception=1, HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[9.879], HttpRequestTime=[9.776], HttpClientReceiveResponseTime=[8.537], RequestSigningTime=[0.05], HttpClientSendRequestTime=[0.033], 

I can understand that some users might be interested in logging the latency of S3 operations, but is there any way to disable any and all such monitoring and AWSRequestMetricsFullSupport logging?

When I check the Spark UI, it tells me the job completed relatively quickly, but the console is flooded with these messages for a long time afterwards.

+0

For context, I'm saving a DataFrame with 1M rows and 500 columns. The save takes about 20 seconds, but the latency messages keep appearing in my console for more than 20 minutes. –

Answers

1

The respective AWS SDK for Java source comment reads:

/** 
* Start an event which will be timed. [...] 
* 
* This feature is enabled if the system property 
* "com.amazonaws.sdk.enableRuntimeProfiling" is set, or if a 
* {@link RequestMetricCollector} is in use either at the request, web service 
* client, or AWS SDK level. 
* 
* @param eventName 
*   - The name of the event to start 
* 
* @see AwsSdkMetrics 
*/ 

As further outlined in the referenced AwsSdkMetrics Java docs, you might be able to disable it via a system property:

The default metric collection of the Java AWS SDK is disabled by default. To enable it, simply specify the system property "com.amazonaws.sdk.enableDefaultMetrics" when starting up the JVM. When the system property is specified, a default metric collector will be started at the AWS SDK level. The default implementation uploads the captured request/response metrics to Amazon CloudWatch, using AWS credentials obtained via the DefaultAWSCredentialsProviderChain.
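Per that quote, default metric collection hinges on a JVM startup flag. A sketch of how that flag would (or would not) be passed for a Spark driver, assuming the job is launched with spark-submit (the original question does not show how the job was started):

```shell
# Metric collection is OFF unless this property is present at JVM startup,
# so to keep it disabled, simply do not pass:
#   -Dcom.amazonaws.sdk.enableDefaultMetrics
#
# Conversely, explicitly enabling the default CloudWatch metric collector
# for the driver JVM would look like this (job jar name is hypothetical):
spark-submit \
  --driver-java-options "-Dcom.amazonaws.sdk.enableDefaultMetrics" \
  my_job.jar
```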

This seems to be overridable by a RequestMetricCollector hard-wired at the request, web service client, or AWS SDK level, which would presumably require respective adjustments within the client/framework in use (Spark, in this case):

Clients who need to fully customize the metric collection can implement the SPI MetricCollector, and then replace the default AWS SDK implementation of the collector via setMetricCollector(MetricCollector).

The documentation of these features seems to be a bit sparse so far; these are the two related blog posts I'm aware of:

+0

Thanks Steffen. I found the same documentation: 'AwsSdkMetrics', which (as you posted) says this should be off by default. I guess that's older documentation; turning it off doesn't seem to have any effect. I'll follow up on the blog posts you referenced at the end. –

0

The best solution I found was to configure the Java logging (i.e., turn these messages off there) via a log4j configuration file passed to the Spark context:

--driver-java-options "-Dlog4j.configuration=/home/user/log4j.properties" 

where log4j.properties is a log4j configuration file that disables INFO-level messages.
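The answer doesn't show the file itself; a minimal log4j.properties along these lines should work. The `com.amazonaws.latency` logger name is taken from the log output in the question; the console appender layout is an assumption:

```properties
# Keep a standard console appender for everything else
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Silence the AWS SDK latency metrics seen in the question,
# while still letting warnings and errors through
log4j.logger.com.amazonaws.latency=WARN
```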

1

Eliminating these logs on EMR release labels proved to be quite a challenge. In release emr-4.7.2, "an issue with Spark Log4j-based logging in YARN containers" was fixed. A working solution is to add these JSONs as configurations:

[ 
{ 
    "Classification": "hadoop-log4j", 
    "Properties": { 
    "log4j.logger.com.amazon.ws.emr.hadoop.fs": "ERROR", 
    "log4j.logger.com.amazonaws.latency": "ERROR" 
    }, 
    "Configurations": [] 
} 
] 
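For reference, one common way to supply such a classification JSON is at cluster creation time via the EMR CLI's --configurations option (the file name and the remaining cluster options shown are assumptions for illustration):

```shell
# Pass the classification JSON above when creating the cluster
aws emr create-cluster \
  --release-label emr-4.7.2 \
  --applications Name=Spark \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --configurations file://./disable-aws-latency-logs.json
```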

and, on clusters earlier than emr-4.7.2, also this JSON, which drops the "buggy" default log4j error setup for Spark:

[ 
{ 
    "Classification": "spark-defaults", 
    "Properties": { 
    "spark.driver.extraJavaOptions": "-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=512M -XX:OnOutOfMemoryError='kill -9 %p'" 
    }, 
    "Configurations": [] 
} 
] 