Loading a file from the Linux FS with spark-submit

I am having a hard time loading a JSON file from the Linux file system in a Spark environment. By the way, I am using Spark 1.6.

The file is located at /home/wymeka/fields.json, and I tried this command line:

spark-submit --master yarn transform.jar --schema-file "file:///home/wymeka/fields.json" --cache 

The line in the main class responsible for loading this file is the following:

val df_schema = sqlContext.read.json(pathToSchemaFile) 

All of this gives me the following exception:

Caused by: java.io.FileNotFoundException: File file:/home/wymeka/fields.json does not exist 
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:542) 
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:755) 
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:532) 
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:425) 
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140) 
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341) 
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:778) 
    at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109) 
    at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67) 
    at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237) 
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208) 
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 

Or, when I try this second command line:

spark-submit --master yarn transform.jar --schema-file "file:\/\/\/home\/imachraoui\/fields.json" --cache 

I get another exception:

java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:%5C/%5C/%5C/home%5C/wymeka%5C/fields.json 
    at org.apache.hadoop.fs.Path.initialize(Path.java:206) 
    at org.apache.hadoop.fs.Path.<init>(Path.java:172) 
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$11.apply(ResolvedDataSource.scala:170) 
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$11.apply(ResolvedDataSource.scala:169) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108) 
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251) 
    at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:108) 
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:169) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109) 
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:244) 
    at com.nexys.spark.transform.Main$.main(Main.scala:80) 
    at com.nexys.spark.transform.Main.main(Main.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) 
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:%5C/%5C/%5C/home%5C/wymeka%5C/fields.json 
    at java.net.URI.checkPath(URI.java:1804) 
    at java.net.URI.<init>(URI.java:752) 
    at org.apache.hadoop.fs.Path.initialize(Path.java:203) 
    ... 24 more 

Any help would be very welcome.


EDIT

I then tried this command line:

spark-submit --files /home/wymeka/fields.json --master yarn transform.jar --schema-file "fields.json" --cache 

and changed my Spark code accordingly:

val df_schema = sqlContext.read.json(SparkFiles.getRootDirectory()+"/"+pathToSchemaFile) 

but still nothing!
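A variant of this pattern, sketched below but not verified here, looks the shipped file up by name through SparkFiles.get instead of joining SparkFiles.getRootDirectory with the argument (this assumes --files registers the file under its bare name, and that the resulting local path is valid wherever read.json resolves it):

import org.apache.spark.SparkFiles

// --files ships fields.json into each container's working directory;
// SparkFiles.get returns the absolute local path registered under that name.
val localSchemaPath = SparkFiles.get("fields.json")
val df_schema = sqlContext.read.json("file://" + localSchemaPath)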

Have you tried it like this: "/home/wymeka/fields.json"? – user4342532

Yes, I tried that. It seems to look for it in HDFS rather than on the Linux FS. – wymeka

That should work, which leads me to think it is not Spark-related. Have you checked whether the application has permission to read the file, etc.? –

Answers

The file should be present at the same path on all worker nodes; otherwise it should be an HDFS path.
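For example, one way to switch to an HDFS path (a minimal sketch; the /user/wymeka target directory is illustrative):

hdfs dfs -put /home/wymeka/fields.json /user/wymeka/fields.json

and then, in the application:

val df_schema = sqlContext.read.json("hdfs:///user/wymeka/fields.json")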

I thought providing it on the master node would be enough. Am I wrong? – wymeka

You are right if you run in local mode, but in cluster mode it should be an hdfs path. – SanthoshPrasad

You are submitting your application to a YARN cluster (the --master yarn parameter). Spark therefore expects the file you specified to be available locally on the cluster nodes at the path /home/wymeka/fields.json.

To run the program locally instead, change the spark-submit parameter:

--master local[*] 

Or, if you want to deploy to the YARN cluster, specify an appropriate hdfs location.
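For instance, assuming fields.json has first been uploaded to HDFS (the path below is illustrative):

spark-submit --master yarn transform.jar --schema-file "hdfs:///user/wymeka/fields.json" --cache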

Launching Applications with spark-submit