Spark Redshift saving to S3 as Parquet
I'm having a problem saving a Redshift table to S3 as a Parquet file... it's coming from the date field. For now I'm going to try converting the column to a long and storing it as a unix timestamp (a sketch of that cast follows the stack trace below).
Caused by: java.lang.NumberFormatException: multiple points
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1110)
at java.lang.Double.parseDouble(Double.java:540)
at java.text.DigitList.getDouble(DigitList.java:168)
at java.text.DecimalFormat.parse(DecimalFormat.java:1321)
at java.text.SimpleDateFormat.subParse(SimpleDateFormat.java:1793)
at java.text.SimpleDateFormat.parse(SimpleDateFormat.java:1455)
at com.databricks.spark.redshift.Conversions$$anon$1.parse(Conversions.scala:54)
at java.text.DateFormat.parse(DateFormat.java:355)
at com.databricks.spark.redshift.Conversions$.com$databricks$spark$redshift$Conversions$$parseTimestamp(Conversions.scala:67)
at com.databricks.spark.redshift.Conversions$$anonfun$1.apply(Conversions.scala:122)
at com.databricks.spark.redshift.Conversions$$anonfun$1.apply(Conversions.scala:108)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at com.databricks.spark.redshift.Conversions$.com$databricks$spark$redshift$Conversions$$convertRow(Conversions.scala:108)
at com.databricks.spark.redshift.Conversions$$anonfun$createRowConverter$1.apply(Conversions.scala:135)
at com.databricks.spark.redshift.Conversions$$anonfun$createRowConverter$1.apply(Conversions.scala:135)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:241)
... 8 more
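The workaround mentioned above would look roughly like this. This is only a sketch: "created_at" is an assumed column name (substitute the actual date field), the output path is a placeholder, and df is the DataFrame loaded from Redshift as in EDIT 1 below.

// Replace the timestamp column with its unix epoch value (a long) before
// writing, so the Parquet file stores a plain number instead of a timestamp.
// "created_at" is an assumed column name, not the poster's real field.
import org.apache.spark.sql.functions.unix_timestamp

val dfEpoch = df.withColumn("created_at", unix_timestamp(df("created_at")))
dfEpoch.write.parquet("s3n://bucket/path/log.parquet")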
Here are my Gradle dependencies:
dependencies {
    compile 'com.amazonaws:aws-java-sdk:1.10.31'
    compile 'com.amazonaws:aws-java-sdk-redshift:1.10.31'
    compile 'org.apache.spark:spark-core_2.10:1.5.1'
    compile 'org.apache.spark:spark-sql_2.10:1.5.1'
    compile 'com.databricks:spark-redshift_2.10:0.5.1'
    compile 'com.fasterxml.jackson.module:jackson-module-scala_2.10:2.6.3'
}
EDIT 1: df.write.parquet("s3n://bucket/path/log.parquet") is how I'm saving the DataFrame after loading the Redshift data with spark-redshift.
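A rough sketch of the full load-then-write flow being described (the JDBC URL, table name, and tempdir are placeholders, not the real values; option names follow the spark-redshift README):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Standard Spark 1.5.x setup
val sc = new SparkContext(new SparkConf().setAppName("redshift-to-parquet"))
val sqlContext = new SQLContext(sc)

// Load the Redshift table through spark-redshift
// (URL, table name, and tempdir below are placeholders)
val df = sqlContext.read
  .format("com.databricks.spark.redshift")
  .option("url", "jdbc:redshift://host:5439/db?user=USER&password=PASS")
  .option("dbtable", "my_table")
  .option("tempdir", "s3n://bucket/tmp/")
  .load()

// This is the write that hits the exception above
df.write.parquet("s3n://bucket/path/log.parquet")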
EDIT 2: I'm running all of this on my MacBook Air; maybe there's too much data and it's corrupting the DataFrame? Not sure... it works when I add 'limit 1000', just not for the whole table... so the "query" option works, but the "table" option doesn't, in the spark-redshift parameters.
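For reference, the working "query" variant from EDIT 2 might look roughly like this (a sketch only: table name and paths are placeholders, option names per the spark-redshift README):

// Reading a limited subset via the "query" option instead of "dbtable"
val limited = sqlContext.read
  .format("com.databricks.spark.redshift")
  .option("url", "jdbc:redshift://host:5439/db?user=USER&password=PASS")
  .option("query", "select * from my_table limit 1000")
  .option("tempdir", "s3n://bucket/tmp/")
  .load()

limited.write.parquet("s3n://bucket/path/log_sample.parquet")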
Thanks for the quick response! That was it! – jackar