I'm able to write the results of Structured Streaming to parquet files. The problem is that those files end up on the local file system, and I now want to write them to the Hadoop file system instead. Is there a way to do this?
StreamingQuery query = result // .orderBy("window")
        .repartition(1) // write a single output file per micro-batch
        .writeStream()
        .outputMode(OutputMode.Append())
        .format("parquet")
        .option("checkpointLocation", "hdfs://localhost:19000/data/checkpoints") // checkpoint directory on HDFS
        .start("hdfs://localhost:19000/data/total"); // output directory on HDFS
I'm using this code, but it throws:
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://localhost:19000/data/checkpoints/metadata, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:649)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:82)
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
at org.apache.spark.sql.execution.streaming.StreamMetadata$.read(StreamMetadata.scala:51)
at org.apache.spark.sql.execution.streaming.StreamExecution.<init>(StreamExecution.scala:100)
at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:232)
at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:269)
at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:262)
at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:206)
Thanks.
It works now, and I used: SparkSession.builder().appName("Spark Data Processing").master("local[2]").config("spark.hadoop.fs.defaultFS", "hdfs://localhost:19000").getOrCreate(); – taniGroup
Yes, that's the same thing. – zsxwing
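For reference, the "Wrong FS: ... expected: file:///" error means Spark resolved the checkpoint path against the default Hadoop filesystem, which was still file:///. Setting spark.hadoop.fs.defaultFS to the HDFS namenode when building the SparkSession, as in taniGroup's comment above, fixes this. Below is a minimal self-contained sketch putting the pieces together; the class name HdfsSinkExample and the "rate" test source (with its rowsPerSecond option) are placeholders standing in for the question's real input, which isn't shown.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.OutputMode;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;

public class HdfsSinkExample {
    public static void main(String[] args) throws StreamingQueryException {
        // Point the default Hadoop filesystem at the HDFS namenode so that
        // Spark resolves checkpoint and output paths against HDFS, not file:///.
        SparkSession spark = SparkSession.builder()
                .appName("Spark Data Processing")
                .master("local[2]")
                .config("spark.hadoop.fs.defaultFS", "hdfs://localhost:19000")
                .getOrCreate();

        // Placeholder streaming source; the original question's 'result'
        // dataset comes from elsewhere in the application.
        Dataset<Row> result = spark.readStream()
                .format("rate") // built-in test source emitting (timestamp, value) rows
                .option("rowsPerSecond", "10")
                .load();

        StreamingQuery query = result
                .repartition(1)
                .writeStream()
                .outputMode(OutputMode.Append())
                .format("parquet")
                .option("checkpointLocation", "hdfs://localhost:19000/data/checkpoints")
                .start("hdfs://localhost:19000/data/total");

        query.awaitTermination();
    }
}

Note that once fs.defaultFS is set, relative paths such as /data/total would also resolve to HDFS, so the fully qualified hdfs:// URIs become optional rather than required.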