2017-07-03

PySpark SparkSQL problem: I am using Cloudera 10.0, which ships with Spark 1.6.

I am trying to run the following statement from the pyspark console:

sqlContext.sql("select * from /user/hive/warehouse/default.party").show() 

I get the following error while fetching the data from Hive:

Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "/usr/lib/spark/python/pyspark/sql/context.py", line 580, in sql 
    return DataFrame(self._ssql_ctx.sql(sqlQuery), self) 
    File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__ 
    File "/usr/lib/spark/python/pyspark/sql/utils.py", line 45, in deco 
    return f(*a, **kw) 
    File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling o18.sql. 
: java.lang.RuntimeException: [1.15] failure: ``('' expected but `/' found 


select * from /user/hive/warehouse/default.party 
      ^
    at scala.sys.package$.error(package.scala:27) 
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:36) 
    at org.apache.spark.sql.catalyst.DefaultParserDialect.parse(ParserDialect.scala:67) 
    at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211) 
    at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211) 
    at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114) 
    at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:113) 
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136) 
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242) 
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254) 
    at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254) 
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254) 
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222) 
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891) 
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891) 
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) 
    at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890) 
    at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110) 
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34) 
    at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208) 
    at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208) 
    at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43) 
    at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231) 
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817) 
    at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) 
    at py4j.Gateway.invoke(Gateway.java:259) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:209) 
    at java.lang.Thread.run(Thread.java:745) 

Please help me get past this roadblock.


You need to create a 'HiveContext' and then access the table using the table name rather than the table path, like 'SELECT * from default.party'. – philantrovert


Hi philantrovert, thanks for the reply. After running party = sqlContext.table("default.party") I get the error: Table not found. But the table does exist in the Hive default database –
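
A "Table not found" error here usually means the shell's context is a plain SQLContext, which cannot see the Hive metastore. A minimal diagnostic sketch, assuming sc is the active SparkContext, to confirm what Spark can actually see:

from pyspark.sql import HiveContext 
sqlContext = HiveContext(sc)             # HiveContext reads from the Hive metastore 
sqlContext.sql("show databases").show()  # the 'default' database should be listed 
sqlContext.tables("default").show()      # 'party' should appear here if the metastore is reachable 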

Answer


To query the Hive table, you need to register it as a temporary table first:

from pyspark.sql import HiveContext 

sqlContext = HiveContext(sc)                    # HiveContext can talk to the Hive metastore 
party = sqlContext.table("default.party")       # reference the table by name, not by HDFS path 
party.registerTempTable("party_temp_in_spark")  # expose it to Spark SQL as a temp table 
sqlContext.sql("select * from party_temp_in_spark").show() 

Hope it helps!