2017-05-06 79 views

Reading nested JSON with Spark SQL - [AnalysisException] cannot resolve column

I have JSON data like this:

{ 
    "parent":[ 
     { 
     "prop1":1.0, 
     "prop2":"C", 
     "children":[ 
      { 
       "child_prop1":[ 
        "3026" 
       ] 
      } 
     ] 
     } 
    ] 
} 

After reading the data with Spark, I get the following schema:

val df = spark.read.json("test.json") 

df.printSchema 
root 
|-- parent: array (nullable = true) 
| |-- element: struct (containsNull = true) 
| | |-- children: array (nullable = true) 
| | | |-- element: struct (containsNull = true) 
| | | | |-- child_prop1: array (nullable = true) 
| | | | | |-- element: string (containsNull = true) 
| | |-- prop1: double (nullable = true) 
| | |-- prop2: string (nullable = true) 

Now I want to select child_prop1 from df. But when I try to select it, I get an org.apache.spark.sql.AnalysisException, like this:

df.select("parent.children.child_prop1") 
org.apache.spark.sql.AnalysisException: cannot resolve '`parent`.`children`['child_prop1']' due to data type mismatch: argument 2 requires integral type, however, ''child_prop1'' is of string type.;; 
'Project [parent#60.children[child_prop1] AS child_prop1#63] 
+- Relation[parent#60] json 

    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42) 
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:82) 
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:74) 
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:310) 
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:310) 
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70) 
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:309) 
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:307) 
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:307) 
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:331) 
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188) 
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:329) 
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:307) 
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionUp$1(QueryPlan.scala:282) 
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:292) 
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2$1.apply(QueryPlan.scala:296) 
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) 
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) 
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) 
    at scala.collection.AbstractTraversable.map(Traversable.scala:104) 
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:296) 
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$7.apply(QueryPlan.scala:301) 
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188) 
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:301) 
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:74) 
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:67) 
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:128) 
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:67) 
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:57) 
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48) 
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63) 
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:2822) 
    at org.apache.spark.sql.Dataset.select(Dataset.scala:1121) 
    at org.apache.spark.sql.Dataset.select(Dataset.scala:1139) 
    ... 48 elided 

However, when I select only children from df, it works fine.

df.select("parent.children").show(false) 
+------------------------------------+ 
|children       | 
+------------------------------------+ 
|[WrappedArray([WrappedArray(3026)])]| 
+------------------------------------+ 

I don't understand why this throws an exception even though the column exists in the dataframe.
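To make the nesting concrete, the schema can be modeled with plain Scala collections (the case classes below are only an illustration, not Spark API) -- each `Seq` stands for one array level, and `child_prop1` sits behind two of them:

```scala
// Plain-Scala model of the DataFrame schema (illustrative only, not Spark API).
case class Child(child_prop1: Seq[String])
case class Parent(prop1: Double, prop2: String, children: Seq[Child])

val root: Seq[Parent] = Seq(Parent(1.0, "C", Seq(Child(Seq("3026")))))

// "parent.children" extracts one field per element -- a single array level:
val children: Seq[Seq[Child]] = root.map(_.children)

// child_prop1 is behind TWO array levels, so a single dotted path cannot
// reach it; it takes two flattening steps:
val prop1Values: Seq[String] = root.flatMap(_.children).flatMap(_.child_prop1)

println(prop1Values) // prints List(3026)
```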

Any help is appreciated!


Please update your question with the output of selecting only children. There is only one column called 'parent' in your 'dataframe', so how are you selecting 'children' or 'parent.children.child_prop1'? –


@RameshMaharjan I have updated the question with the output of 'df.select("parent.children").show(false)'. I hope this helps! – himanshuIIITian


You should try changing your json to '{"parent":{"prop1":1.0,"prop2":"C","children":{"child_prop1":["3026"]}}}'. Then it works. –

Answers


Your JSON is valid JSON, so I don't think you need to change your input data.

Use explode to get at the data:

import org.apache.spark.sql.functions.explode 

val data = spark.read.json("src/test/java/data.json") 
val child = data.select(explode(data("parent.children"))).toDF("children") 

child.select(explode(child("children.child_prop1"))).toDF("child_prop1").show() 
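The error's complaint about an "integral type" also hints at an explode-free route: supply an integer index for each array level, e.g. something like `df.select(col("parent")(0)("children")(0)("child_prop1"))` -- a hedged sketch, and note it pins you to the first element of each array. The positional idea in plain Scala collections (the case classes are illustrative only, not Spark API):

```scala
// Positional-access analogue in plain collections: one integer index
// per array level, then ordinary field access on the leaf.
case class Child(child_prop1: Seq[String])
case class Parent(prop1: Double, prop2: String, children: Seq[Child])

val root = Seq(Parent(1.0, "C", Seq(Child(Seq("3026")))))

// Index each array level explicitly, then read the leaf field:
val firstChildProp1: Seq[String] = root(0).children(0).child_prop1

println(firstChildProp1) // prints List(3026)
```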

If you can change the input data, you can follow @ramesh's suggestion.


Thanks for the answer! Using 'explode' works fine. But is there any way to select 'child_prop1' without using 'explode'? – himanshuIIITian


If you look at the schema, child_prop1 is a nested array inside the root array parent. So we need to be able to define the position of child_prop1, which is what the error is suggesting you do.
Transforming your json format should do the trick.
Changing the json to

{"parent":{"prop1":1.0,"prop2":"C","children":{"child_prop1":["3026"]}}} 

and applying

df.select("parent.children.child_prop1").show(false) 

will give

+-----------+ 
|child_prop1| 
+-----------+ 
|[3026]  | 
+-----------+ 
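The dotted path resolves here because, with parent and children as plain structs, child_prop1 is the only array left on the path. Modeled in plain Scala (case classes are illustrative only, not Spark API):

```scala
// Restructured shape: structs all the way down to the leaf array.
case class Children(child_prop1: Seq[String])
case class Parent(prop1: Double, prop2: String, children: Children)

val parent = Parent(1.0, "C", Children(Seq("3026")))

// Pure field access -- no array level to flatten or index:
val leaf: Seq[String] = parent.children.child_prop1

println(leaf) // prints List(3026)
```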

And
changing the json to

{"parent":{"prop1":1.0,"prop2":"C","children":[{"child_prop1":["3026"]}]}} 

and applying

df.select("parent.children.child_prop1").show(false) 

will result in

+--------------------+ 
|child_prop1   | 
+--------------------+ 
|[WrappedArray(3026)]| 
+--------------------+ 

I hope the answer helps.