2016-07-01

Mapping cassandra rows to a parameterized type in a Spark RDD

I'm trying to map cassandra rows to a parameterized type using the spark-cassandra-connector. I've been trying to define the mapping with an implicitly defined columnMapper, like so:

import scala.reflect.ClassTag

import com.datastax.spark.connector.mapper.JavaBeanColumnMapper
import com.datastax.spark.connector.rdd.CassandraTableScanRDD
import com.datastax.spark.connector.rdd.reader.RowReaderFactory

class Foo[T <: Bar : ClassTag : RowReaderFactory] {
    implicit object Mapper extends JavaBeanColumnMapper[T](
        Map("id" -> "id",
            "timestamp" -> "ts"))

    def doSomeStuff(operations: CassandraTableScanRDD[T]): Unit = {
        println("do some stuff here")
    }
}

However, I'm running into the following error, which I believe is because I'm passing in a RowReaderFactory without correctly specifying its mapping. Any idea how to specify the mapping information for the RowReaderFactory?

Exception in thread "main" java.lang.IllegalArgumentException: Failed to map constructor parameter timestamp in Bar to a column of MyNamespace 
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4$$anonfun$apply$1.apply(DefaultColumnMapper.scala:78) 
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4$$anonfun$apply$1.apply(DefaultColumnMapper.scala:78) 
    at scala.Option.getOrElse(Option.scala:120) 
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4.apply(DefaultColumnMapper.scala:78) 
    at com.datastax.spark.connector.mapper.DefaultColumnMapper$$anonfun$4.apply(DefaultColumnMapper.scala:76) 
    at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722) 
    at scala.collection.immutable.List.foreach(List.scala:318) 
    at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721) 
    at com.datastax.spark.connector.mapper.DefaultColumnMapper.columnMapForReading(DefaultColumnMapper.scala:76) 
    at com.datastax.spark.connector.rdd.reader.GettableDataToMappedTypeConverter.<init>(GettableDataToMappedTypeConverter.scala:56) 
    at com.datastax.spark.connector.rdd.reader.ClassBasedRowReader.<init>(ClassBasedRowReader.scala:23) 
    at com.datastax.spark.connector.rdd.reader.ClassBasedRowReaderFactory.rowReader(ClassBasedRowReader.scala:48) 
    at com.datastax.spark.connector.rdd.reader.ClassBasedRowReaderFactory.rowReader(ClassBasedRowReader.scala:43) 
    at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.rowReader(CassandraTableRowReaderProvider.scala:48) 
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.rowReader$lzycompute(CassandraTableScanRDD.scala:59) 
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.rowReader(CassandraTableScanRDD.scala:59) 
    at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:147) 
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:59) 
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:143) 

Answers

You can define the implicit in a companion object of Foo, as follows:

object Foo {
    // T is not in scope inside an object, so expose the mapper as an
    // implicit def parameterized on T:
    implicit def mapper[T <: Bar]: JavaBeanColumnMapper[T] =
        new JavaBeanColumnMapper[T](
            Map("id" -> "id",
                "timestamp" -> "ts"))
}

Scala looks in a class's companion object when it tries to find an implicit instance for that class. You can define the implicit in the scope where it is needed, but you will probably want to put it in the companion object so you don't have to repeat the definition everywhere it is required.
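The companion-object lookup described above can be seen in a minimal, Spark-free sketch; the `Mapper` trait here is a hypothetical stand-in for the connector's column mapper:

```scala
// Hypothetical stand-in for the connector's ColumnMapper type class.
trait Mapper[T] { def columns: Map[String, String] }

class Bar

object Bar {
  // Lives in Bar's companion object, so the compiler finds it
  // automatically when a Mapper[Bar] is required — no import needed.
  implicit val barMapper: Mapper[Bar] = new Mapper[Bar] {
    def columns: Map[String, String] = Map("id" -> "id", "timestamp" -> "ts")
  }
}

def columnsOf[T](implicit m: Mapper[T]): Map[String, String] = m.columns

// Resolved via the implicit scope of the type argument Bar.
val cols = columnsOf[Bar]
println(cols)
```

The companion object is part of the implicit scope of the type, which is why no explicit import or parameter is needed at the call site.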

It turned out that the columnMapper had to be created in the scope where the instance of Foo was created, not inside Foo itself.
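In other words, the implicit is resolved at the point where the `Foo[T]` instance is constructed: a context bound captures whatever mapper is in scope there. A minimal Spark-free sketch, again with a hypothetical `Mapper` trait standing in for the connector's column mapper and a simplified `Foo`:

```scala
// Hypothetical stand-in for the connector's ColumnMapper type class.
trait Mapper[T] { def columns: Map[String, String] }

class Bar2

// The context bound `T : Mapper` captures whatever implicit Mapper[T]
// is in scope at the point where `new Foo[...]` is written.
class Foo[T: Mapper] {
  def columns: Map[String, String] = implicitly[Mapper[T]].columns
}

// Define the implicit in the scope that creates the Foo instance:
implicit val bar2Mapper: Mapper[Bar2] = new Mapper[Bar2] {
  def columns: Map[String, String] = Map("id" -> "id", "timestamp" -> "ts")
}

val foo = new Foo[Bar2]
println(foo.columns)
```

If no `Mapper[Bar2]` is in scope (or in `Bar2`'s companion object) at the construction site, the code fails to compile, which mirrors the runtime mapping failure in the question.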
