2015-07-03

Why does DataFrame.saveAsTable("df") save the table to a different HDFS host?

I have Spark (1.4.0) configured with Hive (1.13.1). I can access all of the databases and tables in Hive, and my warehouse directory is hdfs://192.168.1.17:8020/user/hive/warehouse.

However, when I try to save a DataFrame into Hive using df.saveAsTable("df") from spark-shell (connected to the master), I get this error:

15/07/03 14:48:59 INFO audit: ugi=user ip=unknown-ip-addr cmd=get_database: default 
15/07/03 14:48:59 INFO HiveMetaStore: 0: get_table : db=default tbl=df 
15/07/03 14:48:59 INFO audit: ugi=user ip=unknown-ip-addr cmd=get_table : db=default tbl=df 
java.net.ConnectException: Call From bdiuser-Vostro-3800/127.0.1.1 to 192.168.1.19:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783) 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1414) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1363) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) 
    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699) 
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120) 
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398) 
    at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.run(commands.scala:78) 
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57) 
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57) 
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148) 
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87) 
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:939) 
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:939) 
    at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:332) 
    at org.apache.spark.sql.hive.execution.CreateMetastoreDataSourceAsSelect.run(commands.scala:239) 
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57) 
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57) 
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148) 
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87) 
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:939) 
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:939) 
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:211) 
    at org.apache.spark.sql.DataFrame.saveAsTable(DataFrame.scala:1517) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:22) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:27) 
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:29) 
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31) 
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:33) 
    at $iwC$$iwC$$iwC.<init>(<console>:35) 
    at $iwC$$iwC.<init>(<console>:37) 
    at $iwC.<init>(<console>:39) 
    at <init>(<console>:41) 
    at .<init>(<console>:45) 
    at .<clinit>(<console>) 
    at .<init>(<console>:7) 
    at .<clinit>(<console>) 
    at $print(<console>) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065) 
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338) 
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840) 
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871) 
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819) 
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857) 
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902) 
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814) 
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657) 
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665) 
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) 
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) 
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135) 
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945) 
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059) 
    at org.apache.spark.repl.Main$.main(Main.scala:31) 
    at org.apache.spark.repl.Main.main(Main.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664) 
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169) 
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192) 
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111) 
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
Caused by: java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493) 
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604) 
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699) 
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367) 
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1381) 
    ... 86 more 

Going through this error, I found that the program tried a different host (192.168.1.19) for the HDFS connection when saving the table.

I also tried spark-shell against different workers and got the same error.
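For diagnosis, the host in the stack trace (192.168.1.19) most likely comes from a location stored in the Hive metastore rather than from Spark's own Hadoop configuration. A minimal sketch for comparing the two from spark-shell (assuming the default `sc` and the `sqlContext` HiveContext that spark-shell creates in Spark 1.4; `some_existing_table` is a placeholder for one of your Hive tables):

```scala
// Which namenode does Spark's Hadoop configuration point at?
println(sc.hadoopConfiguration.get("fs.defaultFS"))

// Which location does the metastore record for an existing table?
// The "Location:" field in the output shows the HDFS host the
// metastore will use when writing to that table.
sqlContext.sql("DESCRIBE EXTENDED some_existing_table")
  .collect()
  .foreach(println)
```

If the two hosts disagree, the metastore's stored warehouse location (not fs.defaultFS) is what saveAsTable will use.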

Answers

With saveAsTable, the default location Spark saves to is controlled by the HiveMetastore (based on the docs). Another option is to use saveAsParquetFile and specify the path, and then register that path with your Hive metastore. Alternatively, use the new DataFrameWriter interface and specify the path option: write.format(source).mode(mode).options(options).saveAsTable(tableName)
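A hedged sketch of that DataFrameWriter approach, using the warehouse path from the question (the path and table name are placeholders, and `df` is assumed to be the DataFrame being saved):

```scala
import org.apache.spark.sql.SaveMode

// Write to an explicit HDFS path instead of the metastore's default
// location; the path is recorded in the metastore for the new table.
val tablePath = "hdfs://192.168.1.17:8020/user/hive/warehouse/df"

df.write
  .format("parquet")
  .mode(SaveMode.Overwrite)
  .option("path", tablePath) // pin the table data to the intended namenode
  .saveAsTable("df")
```

Because the path is given explicitly, the write goes to 192.168.1.17 regardless of which host the metastore's default warehouse location points at.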

Thanks for the reply. Could you tell me how to set the path as an option, i.e. what code I need to write to set a custom path? – Kaushal


Take a look at the following example:

val options = Map("path" -> hiveTablePath) 
result.write.format("orc").partitionBy("partitiondate").options(options).mode(SaveMode.Append).saveAsTable(hiveTable) 

I have explained this a little bit more in my blog.

Is there a way to mention the database name? –

Just put it before the table name, like this: .saveAsTable("dbname.hiveTable") – Nosk


You can write a Spark DataFrame to an existing Spark table.

Take a look at the following example:

df.write.mode("overwrite").saveAsTable("database.tableName") 

Perhaps you could improve your answer by explaining what the author is doing that causes the error, and how your solution fixes it? A one-line code snippet without any context is not that useful. – buruzaemon
