Spark 1.6: drop column in DataFrame with escaped column names

I'm trying to drop columns in a DataFrame, but some of the column names contain dots, which I escaped.

Before escaping, my schema looked like this:

root 
|-- user_id: long (nullable = true) 
|-- hourOfWeek: string (nullable = true) 
|-- observed: string (nullable = true) 
|-- raw.hourOfDay: long (nullable = true) 
|-- raw.minOfDay: long (nullable = true) 
|-- raw.dayOfWeek: long (nullable = true) 
|-- raw.sensor2: long (nullable = true) 

If I try to drop a column, I get:

df = df.drop("hourOfWeek") 
org.apache.spark.sql.AnalysisException: cannot resolve 'raw.hourOfDay' given input columns raw.dayOfWeek, raw.sensor2, observed, raw.hourOfDay, hourOfWeek, raw.minOfDay, user_id; 
     at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42) 
     at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:60) 
     at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:57) 
     at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:319) 
     at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:319) 
     at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:53) 

Note that I'm not even trying to drop a column with a dot in its name. Since I couldn't seem to do much without escaping the column names, I converted the schema to:

root 
|-- user_id: long (nullable = true) 
|-- hourOfWeek: string (nullable = true) 
|-- observed: string (nullable = true) 
|-- `raw.hourOfDay`: long (nullable = true) 
|-- `raw.minOfDay`: long (nullable = true) 
|-- `raw.dayOfWeek`: long (nullable = true) 
|-- `raw.sensor2`: long (nullable = true) 

but that didn't seem to help. I still get the same error.

I tried escaping all column names and dropping by the escaped name, but that doesn't work either.

root 
|-- `user_id`: long (nullable = true) 
|-- `hourOfWeek`: string (nullable = true) 
|-- `observed`: string (nullable = true) 
|-- `raw.hourOfDay`: long (nullable = true) 
|-- `raw.minOfDay`: long (nullable = true) 
|-- `raw.dayOfWeek`: long (nullable = true) 
|-- `raw.sensor2`: long (nullable = true) 

df.drop("`hourOfWeek`") 
org.apache.spark.sql.AnalysisException: cannot resolve 'user_id' given input columns `user_id`, `raw.dayOfWeek`, `observed`, `raw.minOfDay`, `raw.hourOfDay`, `raw.sensor2`, `hourOfWeek`; 
     at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42) 
     at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:60) 

Is there another way to drop a column that won't fail on this kind of data?

Answers


OK, I seem to have found a solution after all:

df.drop(df.col("raw.hourOfWeek")) seems to work.
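
A minimal sketch of the workaround, assuming the df from the question (note that the schema above actually has raw.hourOfDay rather than raw.hourOfWeek): drop(String) parses its argument, so a dotted name is read as a field of a struct column, while drop(Column) resolves the literal column that df.col returns:

// drop(String) would parse "raw.hourOfDay" as field hourOfDay of a struct column raw,
// which does not resolve; drop(Column) matches the literal top-level column instead
val cleaned = df.drop(df.col("raw.hourOfDay"))
cleaned.printSchema()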


Useful answer. But I have a similar question. Suppose I have about 100 columns in a Spark DataFrame. Is there a way to select just a few columns from this dataframe and create another dataframe with those selected columns? Something like df2 = df1.select(df.col("col1","col2")) – JKC


I think this https://stackoverflow.com/questions/36131716/scala-spark-dataframe-dataframe-select-multiple-columns-given-a-sequence-of-co answers your question – MrE
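
For reference, a minimal sketch of the approach the linked answer describes, with hypothetical names df1, col1, and col2 (select takes varargs, so a sequence of names is expanded rather than passed through a single df.col call):

import org.apache.spark.sql.functions.col
// select a fixed handful of columns by name
val df2 = df1.select("col1", "col2")
// or select from a Seq of names by expanding it into varargs
val wanted = Seq("col1", "col2")
val df3 = df1.select(wanted.map(col): _*)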

val data = df.drop("Customers")

will work fine for normal columns. For a column with a dot in its name, use:

val newDf = df.drop(df.col("old.column"))
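
If the dotted names keep causing trouble, another workaround (a sketch, not part of this answer) is to rename them first, since withColumnRenamed matches the existing name literally and the string-based APIs then work as usual:

// rename the dotted column, then the plain drop(String) resolves it
val renamed = df.withColumnRenamed("old.column", "old_column")
val dropped = renamed.drop("old_column")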

The point here is columns with dots in the name. – MrE


Thanks for pointing that out @MrE –
