
I have a very large Spark SQL statement that I am trying to split into smaller chunks for better code readability. I don't want to join them, just merge the results: break the big Spark SQL query into smaller queries and union their output.

Currently working SQL statement:

val dfs = x.map(field => spark.sql(s"""
    select 'test' as Table_Name,
      '$field' as Column_Name,
      min($field) as Min_Value,
      max($field) as Max_Value,
      approx_count_distinct($field) as Unique_Value_Count,
      (
      SELECT 100 * approx_count_distinct($field)/count(1)
      from tempdftable
     ) as perc
    from tempdftable
"""))

I am trying to pull the query below out of the SQL above:

(SELECT 100 * approx_count_distinct($field)/count(1) from tempdftable) as perc 

with this logic -

val Perce = x.map(field => spark.sql(s"(SELECT 100 * approx_count_distinct($field)/count(1) from parquetDFTable)")) 

and later merge this val Perce back into the first big SQL statement in the query below, but it doesn't work -

val dfs = x.map(field => spark.sql(s"""
    select 'test' as Table_Name,
     '$field' as Column_Name,
     min($field) as Min_Value,
     max($field) as Max_Value,
     approx_count_distinct($field) as Unique_Value_Count,
     '"+Perce+ "'
    from tempdftable
"""))

How should this be written?

Answer


Why not translate the whole expression into Spark code?

import spark.implicits._ 
import org.apache.spark.sql.functions._ 

val fraction = udf((approxCount: Double, totalCount: Double) => 100 * approxCount/totalCount) 

val fields = Seq("colA", "colB", "colC") 

val dfs = fields.map(field => { 
    tempdftable 
    .select(min(field) as "Min_Value", max(field) as "Max_Value", approx_count_distinct(field) as "Unique_Value_Count", count(field) as "Total_Count") 
    .withColumn("Table_Name", lit("test")) 
    .withColumn("Column_Name", lit(field)) 
    .withColumn("Perc", fraction('Unique_Value_Count, 'Total_Count)) 
    .select('Table_Name, 'Column_Name, 'Min_Value, 'Max_Value, 'Unique_Value_Count, 'Perc) 
}) 

val df = dfs.reduce(_ union _) 

On a test example like this one:

val tempdftable = spark.sparkContext.parallelize(List((3.0, 7.0, 2.0), (1.0, 4.0, 10.0), (3.0, 7.0, 2.0), (5.0, 0.0, 2.0))).toDF("colA", "colB", "colC") 

tempdftable.show 

+----+----+----+ 
|colA|colB|colC| 
+----+----+----+ 
| 3.0| 7.0| 2.0| 
| 1.0| 4.0|10.0| 
| 3.0| 7.0| 2.0| 
| 5.0| 0.0| 2.0| 
+----+----+----+ 

we get:

df.show 

+----------+-----------+---------+---------+------------------+----+
|Table_Name|Column_Name|Min_Value|Max_Value|Unique_Value_Count|Perc|
+----------+-----------+---------+---------+------------------+----+
|      test|       colA|      1.0|      5.0|                 3|75.0|
|      test|       colB|      0.0|      7.0|                 3|75.0|
|      test|       colC|      2.0|     10.0|                 2|50.0|
+----------+-----------+---------+---------+------------------+----+
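As a side note (a sketch only, not part of the accepted answer): if you prefer to stay in SQL, the split-and-merge from the question can also be made to work by giving each per-field query a complete, triple-quoted statement with the scalar subquery inlined, and then unioning the resulting DataFrames instead of concatenating strings. This assumes x holds the column names and tempdftable is registered as a temp view:

```scala
// Sketch: the string-based approach from the question, fixed up.
// Each per-field query is one complete statement; the per-field
// DataFrames are merged with union, not string concatenation.
tempdftable.createOrReplaceTempView("tempdftable")

val x = Seq("colA", "colB", "colC")  // assumed: the list of column names

val dfs = x.map(field => spark.sql(s"""
  select 'test' as Table_Name,
         '$field' as Column_Name,
         min($field) as Min_Value,
         max($field) as Max_Value,
         approx_count_distinct($field) as Unique_Value_Count,
         (select 100 * approx_count_distinct($field) / count(1)
          from tempdftable) as perc
  from tempdftable
"""))

val df = dfs.reduce(_ union _)
```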

Thanks Glennie! It helps and I am accepting this answer, but I am better at SQL, and there are a few expressions where I have used analytic functions like RANK; in pure Spark I have no idea how to achieve those results. – sabby


Importing 'org.apache.spark.sql.functions._' will give you most (??) of the SQL functions, including 'rank' ;) –
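For instance (a sketch, not from the original thread; the window specification here is an assumed example):

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Hypothetical example: rank the rows of tempdftable by colA, highest first.
// rank() is a window function, so it needs a Window spec with an ordering.
val byColA = Window.orderBy(col("colA").desc)

val ranked = tempdftable.withColumn("colA_rank", rank().over(byColA))
```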


Thanks again Glennie! May I ask where I can get this information, any documentation I can refer to so I can become a Master :P :) – sabby