This is a follow-up to Save Spark dataframe as dynamic partitioned table in Hive. I tried to use the suggestions in the answers, but I could not get them to work in Spark 1.6.1: Spark creates the RDD partitions, but not the Hive partitions.
I am trying to create partitions programmatically from a `DataFrame`. Here is the relevant code (adapted from a Spark test):
hc.setConf("hive.metastore.warehouse.dir", "tmp/tests")
// hc.setConf("hive.exec.dynamic.partition", "true")
// hc.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
hc.sql("create database if not exists tmp")
hc.sql("drop table if exists tmp.partitiontest1")
Seq(2012 -> "a").toDF("year", "val")
.write
.partitionBy("year")
.mode(SaveMode.Append)
.saveAsTable("tmp.partitiontest1")
hc.sql("show partitions tmp.partitiontest1").show
The full file is here: https://gist.github.com/SashaOv/7c65f03a51c7e8f9c9e018cd42aa4c4a
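For illustration, one way to end up with partitions that Hive recognizes (a rough sketch only, not taken from the gist and not verified on 1.6.1; the table name tmp.partitiontest2 is made up here) is to create the partitioned table up front and load it with a dynamic-partition insert through the same HiveContext:

hc.setConf("hive.exec.dynamic.partition", "true")
hc.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
hc.sql("create table if not exists tmp.partitiontest2 (val string) partitioned by (year int) stored as parquet")
// registerTempTable is the Spark 1.6 API; the partition column must come last in the select
Seq(2012 -> "a").toDF("year", "val").registerTempTable("source")
hc.sql("insert into table tmp.partitiontest2 partition (year) select val, year from source")
hc.sql("show partitions tmp.partitiontest2").show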
The partition files get created fine on the file system, but Hive complains that the table is not partitioned:
======================
HIVE FAILURE OUTPUT
======================
SET hive.support.sql11.reserved.keywords=false
SET hive.metastore.warehouse.dir=tmp/tests
OK
OK
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Table tmp.partitiontest1 is not a partitioned table
======================
It looks like the root cause is that org.apache.spark.sql.hive.HiveMetastoreCatalog.newSparkSQLSpecificMetastoreTable always creates the table with empty partition columns.
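One way to confirm that (a sketch, assuming the same hc as above) is to ask Hive what it recorded for the table. For a table the metastore considers partitioned, describe formatted prints a "# Partition Information" section listing the partition columns; if Spark registered the table with empty partition columns, that section will be absent:

hc.sql("describe formatted tmp.partitiontest1").collect().foreach(println)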
Any help to move this forward is appreciated.
Edit: also created SPARK-14927