
Spark consumer for Kafka API 0.10: custom Avro deserializer

I am upgrading the Kafka API of my Spark Scala app to version 0.10. Previously I created custom methods to deserialize messages that arrived in string format.

I have seen that there is a way to pass a StringDeserializer or ByteArrayDeserializer as a parameter for the key or the value.

However, I cannot find any information on how to create a custom Avro schema deserializer, so that my kafkaStream can use it when I call createDirectStream and consume data from Kafka.

Is this possible?

Answer


It is possible. You need to implement the Deserializer&lt;T&gt; interface defined in org.apache.kafka.common.serialization, and you need to point key.deserializer and value.deserializer to your custom class through the ConsumerStrategy[K, V] class that holds the Kafka parameters. For example:

import java.util

import org.apache.kafka.common.serialization.Deserializer

class AvroDeserializer extends Deserializer[Array[Byte]] {
    // Kafka calls configure() and close() on every deserializer, so leave
    // them as no-ops rather than ??? stubs, which would throw at runtime.
    override def configure(map: util.Map[String, _], isKey: Boolean): Unit = ()
    override def close(): Unit = ()
    // Decode the Avro payload here; the body is left as a stub.
    override def deserialize(topic: String, bytes: Array[Byte]): Array[Byte] = ???
}
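
If you want the stream to carry decoded records rather than raw bytes, the same interface can return Avro's GenericRecord instead. Below is a minimal sketch, assuming the producer wrote plain Avro binary (without the Confluent schema-registry wire format); the class name AvroGenericDeserializer and the inline Message schema are illustrative assumptions, not part of the original answer:

import java.util

import org.apache.avro.Schema
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}
import org.apache.avro.io.DecoderFactory
import org.apache.kafka.common.serialization.Deserializer

class AvroGenericDeserializer extends Deserializer[GenericRecord] {
    // Hypothetical schema: replace with the schema your producer actually used.
    private val schema: Schema = new Schema.Parser().parse(
        """{"type":"record","name":"Message","fields":[{"name":"body","type":"string"}]}"""
    )
    private val reader = new GenericDatumReader[GenericRecord](schema)

    override def configure(map: util.Map[String, _], isKey: Boolean): Unit = ()
    override def close(): Unit = ()

    override def deserialize(topic: String, bytes: Array[Byte]): GenericRecord = {
        // binaryDecoder(bytes, null) builds a fresh decoder over the payload
        val decoder = DecoderFactory.get().binaryDecoder(bytes, null)
        reader.read(null, decoder)
    }
}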

Then:

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import my.location.AvroDeserializer

val ssc: StreamingContext = ??? 
val kafkaParams = Map[String, Object](
    "bootstrap.servers" -> "localhost:9092,anotherhost:9092", 
    "key.deserializer" -> classOf[StringDeserializer], 
    "value.deserializer" -> classOf[AvroDeserializer], 
    "group.id" -> "use_a_separate_group_id_for_each_stream", 
    "auto.offset.reset" -> "latest", 
    "enable.auto.commit" -> (false: java.lang.Boolean) 
) 

val topics = Array("sometopic") 
val stream = KafkaUtils.createDirectStream[String, Array[Byte]](
    ssc,
    PreferConsistent,
    // K and V must match what key.deserializer and value.deserializer produce
    Subscribe[String, Array[Byte]](topics, kafkaParams)
)
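
From there, each ConsumerRecord carries whatever value type your deserializer produced. A short usage sketch (assuming the Array[Byte]-returning AvroDeserializer above; the printed fields are purely illustrative):

stream.foreachRDD { rdd =>
    rdd.foreach { record =>
        // record.value() is the Array[Byte] returned by AvroDeserializer
        println(s"key=${record.key()}, payload=${record.value().length} bytes")
    }
}
ssc.start()
ssc.awaitTermination()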