
VoltDB Enterprise 5.1.2
Kafka 2.9.2-0.8.1.1 (also tried 2.10-0.8.2.1). I am unable to export from VoltDB to Kafka.

VoltDB is at 192.168.56.101
Kafka is at 192.168.56.102

Here is my VoltDB deployment configuration:

<deployment> 
    <cluster hostcount="1" sitesperhost="4" kfactor="0" /> 

    <commandlog enabled="true"> 
    <frequency time="400" transactions="1000" /> 
    </commandlog> 

    <export> 
    <configuration enabled="true" type="kafka" stream="archive"> 
     <property name="metadata.broker.list">192.168.56.102:9092</property> 
     <property name="producer.type">sync</property> 
     <property name="batch.mode">true</property> 
    </configuration> 
    </export> 
</deployment> 

The schema is defined as:

drop table person if exists; 
create table person (
    ic varchar(9) not null, 
    first_name varchar(20) not null, 
    last_name varchar(20) not null, 
    middle_name varchar(20), 
    gender tinyint not null, 
    dob timestamp not null, 
    date_created timestamp default now 
); 
partition table person on column ic; 

export table person to stream archive; 
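
For context, export only fires when rows are written to the table; judging from the stack trace below, VoltDB publishes them to a Kafka topic named voltdbexportPERSON. A minimal sqlcmd insert to exercise the export path might look like this (the values are placeholders, purely for illustration):

-- run in sqlcmd against the VoltDB node at 192.168.56.101
-- placeholder values, not from the original setup
insert into person (ic, first_name, last_name, middle_name, gender, dob)
values ('S1234567A', 'John', 'Doe', null, 1, '1980-01-01 00:00:00');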

And in Kafka's server.properties, the only line I added is:

auto.create.topics.enable=true 

I start Kafka first, as follows:

bin/zookeeper-server-start.sh config/zookeeper.properties 
bin/kafka-server-start.sh config/server.properties 
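
To check whether the broker is reachable from the VoltDB host at all, a console consumer can be attached to the export topic (this assumes the Kafka scripts are also available on 192.168.56.101, and that the topic name follows the voltdbexport prefix seen in the log below):

# run from the VoltDB host (192.168.56.101); sanity check only
bin/kafka-console-consumer.sh --zookeeper 192.168.56.102:2181 --topic voltdbexportPERSON --from-beginning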

When I start VoltDB, I hit this exception:

$ voltdb create catalog.jar --deployment=config.xml 
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=64m; support was removed in 8.0 
Initializing VoltDB... 

 _    __      ____  ____  ____ 
| |  / /___  / / /_/ __ \/ __ )
| | / / __ \/ / __/ / / / __  |
| |/ / /_/ / / /_/ /_/ / /_/ / 
|___/\____/_/\__/_____/_____/  

-------------------------------- 

Build: 5.1.2 voltdb-5.1.2-0-g6d05c33-local Enterprise Edition 
Connecting to VoltDB cluster as the leader... 
Host id of this node is: 0 
Starting VoltDB with trial license. License expires on May 31, 2015. 
Initializing the database and command logs. This may take a moment... 
WARN: Failed to send producer request with correlation id 2 to broker 0  with data for partitions [voltdbexportPERSON,0] 
java.nio.channels.ClosedChannelException 
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:97) 
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:72) 
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:71) 
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:102) 
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:102) 
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:102) 
    at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
    at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:101) 
    at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:101) 
    at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:101) 
    at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
    at kafka.producer.SyncProducer.send(SyncProducer.scala:100) 
    at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255) 
    at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:106) 
    at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:100) 
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80) 
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:631) 
    at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:161) 
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:194) 
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39) 
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:80) 
    at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100) 
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72) 
    at kafka.producer.Producer.send(Producer.scala:76) 
    at kafka.javaapi.producer.Producer.send(Producer.scala:42) 
    at org.voltdb.exportclient.KafkaExportClient$KafkaExportDecoder.onBlockCompletion(KafkaExportClient.java:217) 
    at org.voltdb.export.processors.GuestProcessor$2.run(GuestProcessor.java:223) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at org.voltcore.utils.CoreUtils$7$1.run(CoreUtils.java:735) 
    at java.lang.Thread.run(Thread.java:745) 

On the Kafka side, I keep getting:

[2015-05-14 00:40:08,197] INFO Closing socket connection to /192.168.56.101. (kafka.network.Processor) 

Any suggestions?


Problem solved. It had nothing to do with VoltDB. The solution came from http://stackoverflow.com/questions/28146409/kafka-unable-to-send-a-message-to-a-remote-server-using-java – David
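
For anyone hitting the same ClosedChannelException: the linked thread points at the Kafka broker advertising an address the remote producer cannot reach. A likely fix, assuming that is also what happened here, is to set the advertised address explicitly in config/server.properties on the Kafka host and restart the broker:

# config/server.properties on the Kafka host (192.168.56.102)
# advertise an address the VoltDB node can actually reach
# (assumption drawn from the linked question, not stated by David)
host.name=192.168.56.102
advertised.host.name=192.168.56.102
advertised.port=9092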

Answer