
I am running the Spark Cassandra connector and hitting a strange issue: Not enough replicas available for query at consistency LOCAL_ONE (1 required but only 0 alive). I start the Spark shell as:

bin/spark-shell --packages datastax:spark-cassandra-connector:2.0.0-M2-s_2.11

Then I run the following commands:

import com.datastax.spark.connector._ 
val rdd = sc.cassandraTable("test_spark", "test") 
println(rdd.first) 
# CassandraRow{id: 2, name: john, age: 29} 

The problem is that the following command gives an error:

rdd.take(1).foreach(println) 
# CassandraRow{id: 2, name: john, age: 29} 
rdd.take(2).foreach(println) 
# Caused by: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency LOCAL_ONE (1 required but only 0 alive) 
# at com.datastax.driver.core.exceptions.UnavailableException.copy(UnavailableException.java:128) 
# at com.datastax.driver.core.Responses$Error.asException(Responses.java:114) 
# at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:467) 
# at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1012) 
# at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:935) 
# at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) 

And the following command just hangs:

println(rdd.count) 

My Cassandra keyspace seems to have an appropriate replication factor:

describe test_spark; 
CREATE KEYSPACE test_spark WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'} AND durable_writes = true; 

How do I fix these two errors?

Answer


I assume you are hitting an issue with SimpleStrategy and multiple data centers while using LOCAL_ONE consistency (the Spark connector's default). The driver looks for a node in the local DC to send the request to, but it is possible that all of the replicas live in a different DC, in which case the requirement can never be satisfied. (CASSANDRA-12053)
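
If you want to confirm that this is what is happening, a quick check from the same spark-shell session is to print the data center of every host the driver sees; a minimal sketch, using the connector's CassandraConnector helper and the Java driver's cluster metadata:

import scala.collection.JavaConverters._ 
import com.datastax.spark.connector.cql.CassandraConnector 

// Print the data center reported by each Cassandra host the driver 
// knows about. SimpleStrategy places replicas without regard for these 
// DCs, so a LOCAL_ONE read may find zero replicas in the "local" DC. 
CassandraConnector(sc.getConf).withSessionDo { session => 
  session.getCluster.getMetadata.getAllHosts.asScala.foreach { host => 
    println(s"${host.getAddress} is in DC ${host.getDatacenter}") 
  } 
} 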

If you change your consistency level (set input.consistency.level to ONE) I think it will be resolved. You should also really consider using NetworkTopologyStrategy instead.
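
As a minimal sketch of the consistency-level fix, staying in the same spark-shell session as the question (the per-RDD ReadConf override is part of the connector's public API; the keyspace and table names are the ones above):

import com.datastax.spark.connector._ 
import com.datastax.spark.connector.rdd.ReadConf 
import com.datastax.driver.core.ConsistencyLevel 

// Override the connector's default LOCAL_ONE for this RDD only, so a 
// live replica in any data center can serve the scan. 
val rdd = sc.cassandraTable("test_spark", "test") 
  .withReadConf(ReadConf(consistencyLevel = ConsistencyLevel.ONE)) 

rdd.take(2).foreach(println) // no longer throws UnavailableException 
println(rdd.count)           // no longer hangs 

Alternatively, set it for the whole session by launching the shell with --conf spark.cassandra.input.consistency.level=ONE. For the strategy change, something along the lines of ALTER KEYSPACE test_spark WITH replication = {'class': 'NetworkTopologyStrategy', '<your_dc>': 3} makes replica placement DC-aware; substitute the data center name your cluster actually reports.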
