"Failed to update work status" Exception in Python Cloud Dataflow

I have a Python Cloud Dataflow job that works fine on smaller subsets, but fails on the full dataset for no apparent reason.

The only error I see in the Dataflow interface is this standard error message:

A work item was attempted 4 times without success. Each time the worker eventually lost contact with the service.

Digging into the Stackdriver logs, I only find this error:

Exception in worker loop: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 736, in run
    deferred_exception_details=deferred_exception_details)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 590, in do_work
    exception_details=exception_details)
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/utils/retry.py", line 167, in wrapper
    return fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 454, in report_completion_status
    exception_details=exception_details)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 266, in report_status
    work_executor=self._work_executor)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/workerapiclient.py", line 364, in report_status
    response = self._client.projects_jobs_workItems.ReportStatus(request)
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/clients/dataflow/dataflow_v1b3_client.py", line 210, in ReportStatus
    config, request, global_params=global_params)
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 723, in _RunMethod
    return self.ProcessHttpResponse(method_config, http_response, request)
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 729, in ProcessHttpResponse
    self.__ProcessHttpResponse(method_config, http_response, request))
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 599, in __ProcessHttpResponse
    http_response.request_url, method_config, request)
HttpError: HttpError accessing <https://dataflow.googleapis.com/v1b3/projects//jobs/2017-05-03_03_33_40-3860129055041750274/workItems:reportStatus?alt=json>: response: <{'status': '400', 'content-length': '360', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Wed, 03 May 2017 16:46:11 GMT', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json; charset=UTF-8'}>, content <{ "error": { "code": 400, "message": "(2a7b20b33659c46e): Failed to publish the result of the work update. Causes: (2a7b20b33659c523): Failed to update work status. Causes: (8a8b13f5c3a944ba): Failed to update work status., (8a8b13f5c3a945d9): Work \"4047499437681669251\" not leased (or the lease was lost).", "status": "INVALID_ARGUMENT" } } >

I assume this "Failed to update work status" error relates to the Cloud Runner? But since I haven't found any information about this error online, I'm wondering whether anyone has encountered it and has a better explanation. I'm using the Google Cloud Dataflow SDK for Python 0.5.5.

What are your pipeline's source(s) and sink(s)? –

The sources are Avro files on GCS, and the sink is TFRecord files on GCS. – Fematich
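For context, a job of that shape might look roughly like the sketch below. This is a minimal illustration, not the asker's actual code: it uses the current apache_beam module names (the 0.x SDK shipped under the google.cloud.dataflow namespace), and the bucket paths and the record-to-tf.train.Example encoding are hypothetical placeholders.

import apache_beam as beam
import tensorflow as tf
from apache_beam.options.pipeline_options import PipelineOptions

def to_tf_example(record):
    # Hypothetical encoder: pack each Avro record (a dict) into a
    # serialized tf.train.Example, stringifying every field for brevity.
    feature = {
        key: tf.train.Feature(bytes_list=tf.train.BytesList(
            value=[str(value).encode('utf-8')]))
        for key, value in record.items()
    }
    return tf.train.Example(
        features=tf.train.Features(feature=feature)).SerializeToString()

with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (pipeline
     | 'ReadAvro' >> beam.io.ReadFromAvro('gs://my-bucket/input/*.avro')
     | 'ToTFExample' >> beam.Map(to_tf_example)
     | 'WriteTFRecord' >> beam.io.WriteToTFRecord('gs://my-bucket/output/part'))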

Do you have a job ID you can share? And can you describe any details about what your pipeline is doing? –

Answer

One major cause of lease expirations is memory pressure on the VM. You can try running your job on machines with more memory. In particular, a highmem machine type should do the trick.

For more details on machine types, please check out the GCE Documentation.
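In practice, the worker machine type is selected through the pipeline options. A minimal sketch, assuming the current apache_beam option names (--worker_machine_type is an alias of --machine_type in the Beam worker options; the options module lived under a different path in the 0.x SDK) and placeholder project and bucket values:

from apache_beam.options.pipeline_options import PipelineOptions

# n1-highmem-8 offers 52 GB of RAM versus 30 GB on n1-standard-8,
# giving memory-hungry workers more headroom.
options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-project',                # placeholder project id
    '--temp_location=gs://my-bucket/tmp',  # placeholder bucket
    '--worker_machine_type=n1-highmem-8',
])

The same flag can equally be passed on the command line when launching the pipeline.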

The next Dataflow release (2.0.0) should be able to handle these cases better.

Thanks, I'll try running the job on a higher-memory machine (and with Dataflow version 0.6.0)! Is there a roadmap where I can find the expected release date of Dataflow 2.0.0? I haven't been able to find one. – Fematich

Dataflow 2.0.0 was released in mid-June :) – Pablo

Thanks :-). Upgrading solved the problem! – Fematich