Spark Streaming checkpoint reads after failure

I am trying to implement a fault-tolerant Spark Streaming application that consumes from Kafka. When I restart the application, it re-reads messages that were already read before the restart, and my computations come out wrong. Please help me solve this problem.

Here is the code, written in Java.

import java.util.Map;
import java.util.Set;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function0;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

public static JavaStreamingContext createContextFunc() { 

    SummaryOfTransactionsWithCheckpoints app = new SummaryOfTransactionsWithCheckpoints(); 

    ApplicationConf conf = new ApplicationConf(); 
    String checkpointDir = conf.getCheckpointDirectory(); 

    JavaStreamingContext streamingContext = app.getStreamingContext(checkpointDir); 

    // All DStream setup must happen inside this factory: on a clean start,
    // getOrCreate() calls it to build the lineage; on restart, the lineage
    // is rebuilt from the checkpoint and this function is skipped.
    JavaDStream<String> kafkaInputStream = app.getKafkaInputStream(streamingContext); 

    return streamingContext; 
} 


public static void main(String[] args) throws InterruptedException { 

    ApplicationConf conf = new ApplicationConf(); 
    String checkpointDir = conf.getCheckpointDirectory(); 

    // Recover the context from the checkpoint if one exists; otherwise create it. 
    Function0<JavaStreamingContext> createContextFunc = () -> createContextFunc(); 
    JavaStreamingContext streamingContext = JavaStreamingContext.getOrCreate(checkpointDir, createContextFunc); 

    streamingContext.start(); 
    streamingContext.awaitTermination(); 

} 

public JavaStreamingContext getStreamingContext(String checkpointDir) { 

    ApplicationConf conf = new ApplicationConf(); 
    String appName = conf.getAppName(); 
    String master = conf.getMaster(); 
    int duration = conf.getDuration(); 

    SparkConf sparkConf = new SparkConf().setAppName(appName).setMaster(master); 
    // Note: the write-ahead log only applies to receiver-based streams; the
    // direct Kafka stream below tracks its offsets through the checkpoint instead.
    sparkConf.set("spark.streaming.receiver.writeAheadLog.enable", "true"); 

    JavaStreamingContext streamingContext = new JavaStreamingContext(sparkConf, new Duration(duration)); 
    streamingContext.checkpoint(checkpointDir); 

    return streamingContext; 
} 

public SparkSession getSession() { 

    ApplicationConf conf = new ApplicationConf(); 
    String appName = conf.getAppName(); 
    String hiveConf = conf.getHiveConf(); 
    String thriftConf = conf.getThriftConf(); 
    int shufflePartitions = conf.getShuffle(); 

    SparkSession spark = SparkSession 
      .builder() 
      .appName(appName) 
      .config("spark.sql.warehouse.dir", hiveConf) 
      .config("hive.metastore.uris", thriftConf) 
      .enableHiveSupport() 
      .getOrCreate(); 

    spark.conf().set("spark.sql.shuffle.partitions", shufflePartitions); 
    return spark; 

} 


public JavaDStream<String> getKafkaInputStream(JavaStreamingContext streamingContext) { 

    KafkaConfig kafkaConfig = new KafkaConfig(); 
    Set<String> topicsSet = kafkaConfig.getTopicSet(); 
    Map<String, Object> kafkaParams = kafkaConfig.getKafkaParams(); 

    // Create a direct Kafka stream for the configured brokers and topics 
    JavaInputDStream<ConsumerRecord<String, String>> messages = KafkaUtils.createDirectStream(
      streamingContext, 
      LocationStrategies.PreferConsistent(), 
      ConsumerStrategies.<String, String>Subscribe(topicsSet, kafkaParams)); 

    JavaDStream<String> logdata = messages.map(ConsumerRecord::value); 

    return logdata; 
} 
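
For reference, the KafkaConfig class used above is not shown in the question. A minimal sketch of what getTopicSet() and getKafkaParams() might return for a direct stream like this (the broker address, group id, and topic name are placeholder assumptions, not values from the project):

    // Hypothetical stand-in for KafkaConfig; all values are placeholders.
    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "localhost:9092");
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", StringDeserializer.class);
    kafkaParams.put("group.id", "transactions-consumer");
    kafkaParams.put("auto.offset.reset", "latest");
    // Let the checkpointed offsets drive recovery rather than Kafka auto-commit;
    // otherwise restarts may re-read records independently of the checkpoint.
    kafkaParams.put("enable.auto.commit", false);

    Set<String> topicsSet = Collections.singleton("transactions");

(This sketch uses java.util.HashMap, java.util.Collections, and org.apache.kafka.common.serialization.StringDeserializer.)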

Here is a link to the GitHub project: https://github.com/ThisaST/Spark-Fault-Tolerance

Answer


I solved this problem by adding the following configuration to the code:

sparkConf.set(「spark.streaming.stopGracefullyOnShutdown","true") 
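
With graceful shutdown enabled, the streaming context finishes the batches it has already received before stopping, so the checkpoint written to disk matches the offsets that were actually processed. A sketch of where the setting fits alongside the existing configuration in getStreamingContext above:

    SparkConf sparkConf = new SparkConf().setAppName(appName).setMaster(master); 
    sparkConf.set("spark.streaming.receiver.writeAheadLog.enable", "true"); 
    // Drain in-flight batches on shutdown instead of dropping them, so the
    // checkpoint is not left pointing at partially processed offsets.
    sparkConf.set("spark.streaming.stopGracefullyOnShutdown", "true"); 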