I am calling the following query on sqlContext in spark-shell (this came up while working with Spark-CSV):
sqlContext.sql("select cast(ts_time as varchar(10)),cast(y as varchar(10)),cast('0' as varchar(3)),case when x0 = '' then cast(null as float) else cast(x0 as float) end from tasmaxload UNION ALL
select cast(ts_time as varchar(10)),cast(y as varchar(10)),cast('1' as varchar(3)),case when x1 = '' then cast(null as float) else cast(x1 as float) end from tasmaxload").registerTempTable("testcast");
This throws an "unclosed string literal" error.
I then figured out that if the query is written on a single line, as below, there is no error and it executes fine.
sqlContext.sql("select cast(ts_time as varchar(10)),cast(y as varchar(10)),cast('0' as varchar(3)),case when x0 = '' then cast(null as float) else cast(x0 as float) end from tasmaxload UNION ALL select cast(ts_time as varchar(10)),cast(y as varchar(10)),cast('1' as varchar(3)),case when x1 = '' then cast(null as float) else cast(x1 as float) end from tasmaxload").registerTempTable("testcast");
But is there a way to handle this without collapsing the query onto a single line?
I am asking because the original query runs to more than 150 lines, and I cannot keep rewriting it as a single line.
Can anyone help me with this?
FYI: I have also tried using :paste mode.
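Something along the lines of the sketch below is what I was hoping to be able to write (just a rough attempt using Scala's triple-quoted strings, with the same tasmaxload table and columns as above; I am not sure whether this is the right or idiomatic way):

// Sketch only: a triple-quoted Scala string can contain raw newlines,
// so the SQL can span multiple lines without an unclosed-literal error.
sqlContext.sql("""
  select cast(ts_time as varchar(10)), cast(y as varchar(10)), cast('0' as varchar(3)),
         case when x0 = '' then cast(null as float) else cast(x0 as float) end
  from tasmaxload
  UNION ALL
  select cast(ts_time as varchar(10)), cast(y as varchar(10)), cast('1' as varchar(3)),
         case when x1 = '' then cast(null as float) else cast(x1 as float) end
  from tasmaxload
""").registerTempTable("testcast");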
Thanks in advance.