
spark.read.csv error: java.io.IOException: Permission denied

I am using Spark v2.0 and am trying to read a CSV file using:

spark.read.csv("filepath") 

However, I get the following error:

java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Permission denied 
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522) 
    at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:171) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258) 
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359) 
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263) 
    at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39) 
    at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38) 
    at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46) 
    at org.apache.spark.sql.hive.HiveSharedState.externalCatalog(HiveSharedState.scala:45) 
    at org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:50) 
    at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48) 
    at org.apache.spark.sql.hive.HiveSessionState$$anon$1.<init>(HiveSessionState.scala:63) 
    at org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63) 
    at org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62) 
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49) 
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64) 
    at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143) 
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:401) 
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:342) 
    ... 48 elided 
Caused by: java.lang.RuntimeException: java.io.IOException: Permission denied 
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:515) 
    ... 71 more 
Caused by: java.io.IOException: Permission denied 
    at java.io.UnixFileSystem.createFileExclusively(Native Method) 
    at java.io.File.createTempFile(File.java:2024) 
    at org.apache.hadoop.hive.ql.session.SessionState.createTempFile(SessionState.java:818) 
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513) 
    ... 71 more 

I have also tried using .format("csv").csv("filepath"), but that gave the same result.


Make sure your "filepath" has the appropriate permissions – Bhavesh


Hi Bhavesh, the file path has the following permissions: -rwxr-xr-x 3 pratyush04 hdfs –

Answers


If you look at the last part of the exception stack trace, you will realize that this error is not about lacking permission to access the file at "filepath".

I faced a similar issue while using the Spark shell on a Windows client. This is the error I got:

at java.io.WinNTFileSystem.createFileExclusively(Native Method) 
    at java.io.File.createTempFile(File.java:2024) 
    at org.apache.hadoop.hive.ql.session.SessionState.createTempFile(SessionState.java:818) 
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513) 

Notice how the stack trace says WinNTFileSystem (whereas yours says UnixFileSystem), which made me look at it more closely. I realized that the current user did not have permission to create a temp file locally. More specifically, org.apache.hadoop.hive.ql.session.SessionState tries to create a temp file in Hive's local scratch directory. If the current user does not have sufficient permissions to do that, you get this error.

In my case, on Windows, I realized I had to launch the command prompt used to run the Spark shell with "Run as administrator". That worked for me.

In your case, on Unix, I would guess you either need sudo, or to update the Hive configuration to point to a different local scratch directory, or to fix the permissions on the directory the existing Hive configuration uses.
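
For the configuration route, a minimal sketch: Hive's local scratch directory is controlled by the hive.exec.local.scratchdir property, which Spark picks up from a hive-site.xml on its classpath. The path below is an assumption; substitute any directory the user running the Spark shell can write to.

    <!-- hive-site.xml, e.g. in $SPARK_HOME/conf --> 
    <configuration> 
      <property> 
        <!-- Local directory SessionState uses for temp files; 
             /home/pratyush04/hive-scratch is an assumed path --> 
        <name>hive.exec.local.scratchdir</name> 
        <value>/home/pratyush04/hive-scratch</value> 
      </property> 
    </configuration> 

Make sure the directory exists and is writable by the current user before restarting the shell.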


Running as administrator worked for me, thanks! – HuckIt


Try this code, it may help.

To read data from a CSV file:

Dataset<Row> src = sqlContext.read() 
     .format("com.databricks.spark.csv") 
     .option("header", "true") 
     .load("Source_new.csv"); 

To write data to a CSV file:

src.write() 
     .format("com.databricks.spark.csv") 
     .option("header", "true") 
     .save("LowerCaseData.csv"); 
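
As a side note, since the question is about Spark v2.0: the CSV source is built into Spark 2.x, so the external com.databricks.spark.csv package is not required. Here is a minimal sketch of the same read and write using the built-in source (the file names are the same samples as above; the appName is an arbitrary placeholder):

    import org.apache.spark.sql.Dataset; 
    import org.apache.spark.sql.Row; 
    import org.apache.spark.sql.SparkSession; 

    SparkSession spark = SparkSession.builder() 
         .appName("CsvExample")   // arbitrary placeholder name 
         .getOrCreate(); 

    // Read a CSV file with the built-in source 
    Dataset<Row> src = spark.read() 
         .option("header", "true") 
         .csv("Source_new.csv"); 

    // Write the data back out as CSV 
    src.write() 
         .option("header", "true") 
         .csv("LowerCaseData.csv"); 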