
PySpark join based on a CASE statement

I want to join two DataFrames based on the SQL CASE statement shown below. What is the best way to handle this?

FROM df1
    LEFT JOIN df2 d
        ON d."Date1" <= CASE WHEN v."DATE2" >= v."DATE3" THEN df1."col1" ELSE df1."col2" END

Answer


Personally, I would put this into a UDF that returns a boolean. That way the business logic ends up in the Python code and the SQL stays clean:

>>> from pyspark.sql.types import BooleanType

>>> def join_based_on_dates(left_date, date0, date1, col0, col1):
...     # Mirror the SQL CASE: compare the dates to decide which column to use
...     if date0 >= date1:
...         right_date = col0
...     else:
...         right_date = col1
...     return left_date <= right_date

>>> sqlContext.registerFunction("join_based_on_dates", join_based_on_dates, BooleanType())

>>> join_based_on_dates("2016-01-01", "2017-01-01", "2018-01-01", "res1", "res2")
True

>>> sqlContext.sql("SELECT join_based_on_dates('2016-01-01', '2017-01-01', '2018-01-01', 'res1', 'res2')").collect()
[Row(_c0=True)]
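
Note that for the final query to resolve df1 and df2 by name, both DataFrames have to be registered with the SQL engine first. A minimal setup sketch, assuming the Spark 1.x API that matches the sqlContext calls above (on Spark 2.x you would use createOrReplaceTempView instead):

>>> # Assumed setup so Spark SQL can reference the DataFrames by name
>>> df1.registerTempTable("df1")
>>> df2.registerTempTable("df2")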

Your query would then end up looking something like this:

FROM df1
LEFT JOIN df2 ON join_based_on_dates(df2.Date1, df1.DATE2, df1.DATE3, df1.col1, df1.col2)
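
For completeness, the same CASE logic can also be expressed directly with the DataFrame API, which avoids the Python UDF entirely and lets Spark's optimizer see the join condition. A minimal sketch, assuming the column names from the question:

>>> from pyspark.sql import functions as F

>>> # Equivalent of: CASE WHEN DATE2 >= DATE3 THEN col1 ELSE col2 END
>>> threshold = F.when(df1["DATE2"] >= df1["DATE3"], df1["col1"]).otherwise(df1["col2"])
>>> joined = df1.join(df2, df2["Date1"] <= threshold, "left")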

Hope this helps, and have fun with Spark!