2017-08-29 55 views

Why does starting a streaming query lead to "ExitCodeException exitCode=-1073741515"?

I have been trying to get used to the new Structured Streaming, but it keeps giving me the error below as soon as I start a .writeStream query. Any idea what could be causing this? The closest thing I could find was an ongoing Spark bug that occurs if you split the checkpoint and metadata folders between local disk and HDFS, but that does not apply here. Running on Windows 10, Spark 2.2 and IntelliJ.

17/08/29 21:47:39 ERROR StreamMetadata: Error writing stream metadata StreamMetadata(41dc9417-621c-40e1-a3cb-976737b83fb7) to C:/Users/jason/AppData/Local/Temp/temporary-b549ee73-6476-46c3-aaf8-23295bd6fa8c/metadata 
ExitCodeException exitCode=-1073741515: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582) 
    at org.apache.hadoop.util.Shell.run(Shell.java:479) 
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773) 
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:866) 
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:849) 
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733) 
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225) 
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209) 
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307) 
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296) 
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328) 
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398) 
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461) 
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:789) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:778) 
    at org.apache.spark.sql.execution.streaming.StreamMetadata$.write(StreamMetadata.scala:76) 
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$6.apply(StreamExecution.scala:116) 
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$6.apply(StreamExecution.scala:114) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.sql.execution.streaming.StreamExecution.<init>(StreamExecution.scala:114) 
    at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:240) 
    at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:278) 
    at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:282) 
    at FileStream$.main(FileStream.scala:157) 
    at FileStream.main(FileStream.scala) 
Exception in thread "main" ExitCodeException exitCode=-1073741515: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582) 
    at org.apache.hadoop.util.Shell.run(Shell.java:479) 
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773) 
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:866) 
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:849) 
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733) 
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225) 
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209) 
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307) 
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296) 
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328) 
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398) 
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461) 
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:789) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:778) 
    at org.apache.spark.sql.execution.streaming.StreamMetadata$.write(StreamMetadata.scala:76) 
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$6.apply(StreamExecution.scala:116) 
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$6.apply(StreamExecution.scala:114) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.sql.execution.streaming.StreamExecution.<init>(StreamExecution.scala:114) 
    at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:240) 
    at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:278) 
    at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:282) 
    at FileStream$.main(FileStream.scala:157) 
    at FileStream.main(FileStream.scala) 
17/08/29 21:47:39 INFO SparkContext: Invoking stop() from shutdown hook 
17/08/29 21:47:39 INFO SparkUI: Stopped Spark web UI at http://192.168.178.21:4040 
17/08/29 21:47:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 
17/08/29 21:47:39 INFO MemoryStore: MemoryStore cleared 
17/08/29 21:47:39 INFO BlockManager: BlockManager stopped 
17/08/29 21:47:39 INFO BlockManagerMaster: BlockManagerMaster stopped 
17/08/29 21:47:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 
17/08/29 21:47:39 INFO SparkContext: Successfully stopped SparkContext 
17/08/29 21:47:39 INFO ShutdownHookManager: Shutdown hook called 
17/08/29 21:47:39 INFO ShutdownHookManager: Deleting directory C:\Users\jason\AppData\Local\Temp\temporary-b549ee73-6476-46c3-aaf8-23295bd6fa8c 
17/08/29 21:47:39 INFO ShutdownHookManager: Deleting directory C:\Users\jason\AppData\Local\Temp\spark-117ed625-a588-4dcb-988b-2055ec5fa7ec 

Process finished with exit code 1 
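A side note on the exit code itself: Windows reports process failures as NTSTATUS values, which the JVM surfaces as signed 32-bit integers. Reinterpreting -1073741515 as unsigned hex gives 0xC0000135, the NTSTATUS code STATUS_DLL_NOT_FOUND, which is consistent with the missing-DLL diagnosis in the answers. A quick plain-Scala sketch (no Spark required; the helper name is illustrative):

```scala
// The JVM shows the native exit code as a signed 32-bit integer.
// Formatting the raw bits as hex recovers the original NTSTATUS value.
def toNtStatusHex(code: Int): String = f"0x$code%08X"

// -1073741515 -> 0xC0000135 (STATUS_DLL_NOT_FOUND): a required DLL,
// here MSVCR100.dll, could not be loaded.
println(toNtStatusHex(-1073741515))  // prints 0xC0000135
```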

The query was just writing to the console, to test whether it would work at all. I will try the checkpointLocation option; anything on C: should do, I guess? Your GitHub page is very good, by the way. Love the explanations and examples, great detail!

    val query = fileStreamDf.writeStream
      .format("console")
      .outputMode(OutputMode.Append())
      .start()
    query.awaitTermination()

– Trisivieta


Fixed it! msvcr100.dll was corrupted; reinstalling it solved the problem and the query now starts in streaming mode. – Trisivieta


Could you post that as an answer to your own question? Thanks! –

Answers


I actually ran into the same problem while running Spark unit tests on my local machine. It was caused by a broken winutils.exe in the %HADOOP_HOME%\bin folder:

Input: %HADOOP_HOME%\bin\winutils.exe chmod 777 %SOME_TEMP_DIRECTORY%

Output:

winutils.exe - System Error
The code execution cannot proceed because MSVCR100.dll was not found.
Reinstalling the program may fix this problem.

After searching the internet, I found an issue in Steve Loughran's winutils project: Windows 10: winutils.exe doesn't work.
In particular, it says that installing the VC++ redistributable package should solve the problem (and it worked in my case): How do I fix this error "msvcp100.dll is missing"
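Since the failure only surfaces once Spark shells out to winutils.exe, a small pre-flight check in the driver can fail fast with a clearer message. A minimal sketch in plain Scala; the helper names and the Either-based reporting are mine, not part of any Spark or Hadoop API:

```scala
import java.nio.file.{Files, Path, Paths}

// Resolve winutils.exe the way Hadoop's Shell class does:
// %HADOOP_HOME%\bin\winutils.exe
def winutilsPath(hadoopHome: String): Path =
  Paths.get(hadoopHome, "bin", "winutils.exe")

// Check that the binary exists before starting any streaming query.
// (Existence alone does not prove its DLL dependencies are intact,
// but it catches the most common misconfiguration.)
def checkWinutils(hadoopHome: String): Either[String, Path] = {
  val exe = winutilsPath(hadoopHome)
  if (Files.isRegularFile(exe)) Right(exe)
  else Left(s"winutils.exe not found at $exe - check HADOOP_HOME")
}
```

Calling something like checkWinutils(sys.env.getOrElse("HADOOP_HOME", "")) before building the session would surface the misconfiguration before StreamMetadata ever tries to write.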


This is a Windows issue:

The program can't start because MSVCP100.dll is missing from your computer. Try reinstalling the program to fix this problem.

You need to install the VC++ redistributable packages:

  • Download the Microsoft Visual C++ 2010 Redistributable Package (x86) from the official Microsoft Download Center:

    http://www.microsoft.com/en-us/download/details.aspx?id=5555

    and install vcredist_x86.exe

  • Download the Microsoft Visual C++ 2010 Redistributable Package (x64) from the official Microsoft Download Center:

    http://www.microsoft.com/en-us/download/details.aspx?id=14632

    and install vcredist_x64.exe
