2017-07-03

This may be obvious, but I can't figure it out: Kafka Connect fails to deserialize the offset data every time.

Every time I start my source connector, it is unable to read the offsets stored in the file, failing with the following error:

21:05:01:519 | ERROR | pool-1-thread-1 | o.a.k.c.s.OffsetStorageReaderImpl | CRITICAL: Failed to deserialize offset data when getting offsets for task with namespace zohocrm-source-calls. No value for this data will be returned, which may break the task or cause it to skip some data. This could either be due to an error in the connector implementation or incompatible schema.
org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
    at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:309)
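For context on the exception: when schemas.enable is on, the JsonConverter expects every value to be wrapped in an envelope containing exactly a "schema" field and a "payload" field. As an illustration (the field values here are hypothetical, not taken from the question), a string value in that envelope format looks like:

    {"schema": {"type": "string", "optional": false}, "payload": "some-offset-value"}

Plain JSON without that envelope, e.g. {"last_record_id": 12345} (a made-up example), triggers the DataException above, which is why the message suggests schemas.enable=false for plain JSON data.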

Here are my StandaloneConfig values:

access.control.allow.methods = 
    access.control.allow.origin = 
    bootstrap.servers = [localhost:9092] 
    internal.key.converter = class org.apache.kafka.connect.json.JsonConverter 
    internal.value.converter = class org.apache.kafka.connect.json.JsonConverter 
    key.converter = class io.confluent.connect.avro.AvroConverter 
    offset.flush.interval.ms = 60000 
    offset.flush.timeout.ms = 5000 
    offset.storage.file.filename = maxoptra-data.offset 
    rest.advertised.host.name = null 
    rest.advertised.port = null 
    rest.host.name = null 
    rest.port = 8083 
    task.shutdown.graceful.timeout.ms = 5000 
    value.converter = class io.confluent.connect.avro.AvroConverter 

Here is my connector configuration:

connector.class = com.maxoptra.data.zoho.connect.ZohoCrmSourceConnector 
    key.converter = null 
    name = zohocrm-source-calls 
    tasks.max = 1 
    transforms = null 
    value.converter = null 

Please advise.

Thanks.

Answer


Set key.converter.schemas.enable=true and value.converter.schemas.enable=true. This makes the JsonConverter attempt to interpret your schema rather than deserialize with the default schema, which does not match what you have here.
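A minimal sketch of where these lines would go, assuming a standalone worker properties file (the filename connect-standalone.properties is an assumption; use whatever file you pass to the Connect worker):

    # connect-standalone.properties (illustrative)
    # Suggested fix from the answer above:
    key.converter.schemas.enable=true
    value.converter.schemas.enable=true

Restart the worker after the change so the converter configuration is re-read.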