    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    val topics = "test"
    val zkQuorum = "localhost:2181"
    val group = "test-consumer-group"
    val numThreads = "1"  // was undefined in the original snippet

    val sparkConf = new SparkConf()
      .setAppName("XXXXX")
      .setMaster("local[*]")
      .set("spark.cassandra.connection.host", "127.0.0.1")  // connector keys are prefixed with "spark."
      .set("spark.cassandra.connection.port", "9042")

    val ssc = new StreamingContext(sparkConf, Seconds(2))
    ssc.checkpoint("checkpoint")
    val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap

    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)

I am receiving JSON data like the following in the DStream. How can I store this DStream data (JSON) into Cassandra?

[{"id":100,"firstName":"Beulah","lastName":"Fleming","gender":"female","ethnicity":"SpEd","height":167,"address":27,"createdDate":1494489672243,"lastUpdatedDate":1494489672244,"isDeleted":0},{"id":101,"firstName":"Traci","lastName":"Summers","gender":"female","ethnicity":"Frp","height":181,"address":544,"createdDate":1494510639611,"lastUpdatedDate":1494510639611,"isDeleted":0}] 

With the program above I get JSON data in the DStream. How do I process this DStream data and store it in Cassandra or Elasticsearch? In other words, how do I take the data out of the DStream (in JSON format) and save it to Cassandra?

Answer


You need to import com.datastax.spark.connector._ and map the elements of the stream into an appropriate case class:

case class Record(id: Int, firstName: String, ...) 
val columns = SomeColumns("id", "first_name", ...) 
val mapped = lines.map(whateverDataYouHave => functionThatReturnsARecordObject) 
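As a concrete illustration of the mapping step, here is a minimal sketch of what a `functionThatReturnsARecordObject` could look like. It assumes your JSON library of choice has already parsed each object in the array into a `Map[String, Any]`; the `Record` fields are derived from the sample payload in the question, and `toRecord` is a hypothetical helper, not part of the connector API:

```scala
// Case class mirroring one object from the sample JSON payload.
case class Record(id: Int, firstName: String, lastName: String, gender: String,
                  ethnicity: String, height: Int, address: Int,
                  createdDate: Long, lastUpdatedDate: Long, isDeleted: Int)

// Hypothetical mapping step: convert one parsed JSON object (modelled here as
// a Map[String, Any], which most Scala JSON libraries can produce) into a
// Record that saveToCassandra can write.
def toRecord(m: Map[String, Any]): Record = Record(
  id = m("id").toString.toInt,
  firstName = m("firstName").toString,
  lastName = m("lastName").toString,
  gender = m("gender").toString,
  ethnicity = m("ethnicity").toString,
  height = m("height").toString.toInt,
  address = m("address").toString.toInt,
  createdDate = m("createdDate").toString.toLong,
  lastUpdatedDate = m("lastUpdatedDate").toString.toLong,
  isDeleted = m("isDeleted").toString.toInt
)
```

You would then call this helper inside the `lines.map(...)` step after parsing each JSON string.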

and then save it with saveToCassandra, which is made available through an implicit conversion:

mapped.saveToCassandra(KEYSPACE_NAME, TABLE_NAME, columns) 

For more information, check the saving documentation: https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md
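Note that saveToCassandra writes into an existing table; it does not create one for you. Under the column names used above, a matching schema might look like the following (the keyspace and table names are placeholders, and the column types are inferred from the sample JSON):

```sql
CREATE KEYSPACE IF NOT EXISTS my_keyspace
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE IF NOT EXISTS my_keyspace.records (
  id int PRIMARY KEY,
  first_name text,
  last_name text,
  gender text,
  ethnicity text,
  height int,
  address int,
  created_date bigint,
  last_updated_date bigint,
  is_deleted int
);
```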
