
Spring Boot batch partitioning JdbcCursorItemReader error

Even after following the very thorough example on Victor Jabor's blog I still cannot get this working. I followed his configuration as he describes it and used all the latest dependencies. Like Victor, I am trying to read from one database and write to another. I have this working without partitioning, but I need partitioning to improve performance, since I have to be able to read 5 to 10 million rows within 5 minutes.

The following appear to work:

1) The ColumnRangePartitioner.
2) The TaskExecutorPartitionHandler builds the correct number of step tasks based on the gridsize and spawns the correct number of threads.
3) The setPreparedStatementSetter, using the stepExecution values set by the ColumnRangePartitioner.
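For reference, my ColumnRangePartitioner is essentially the one from the Spring Batch samples that Victor's example uses. A minimal sketch of it, reconstructed here, so treat the details as approximate; it shows where the minValue/maxValue keys that the reader consumes come from:

    import java.util.HashMap;
    import java.util.Map;
    import javax.sql.DataSource;
    import org.springframework.batch.core.partition.support.Partitioner;
    import org.springframework.batch.item.ExecutionContext;
    import org.springframework.jdbc.core.JdbcOperations;
    import org.springframework.jdbc.core.JdbcTemplate;

    public class ColumnRangePartitioner implements Partitioner {

        private final JdbcOperations jdbcTemplate;
        private final String column;
        private final String table;

        public ColumnRangePartitioner(DataSource dataSource, String column, String table) {
            this.jdbcTemplate = new JdbcTemplate(dataSource);
            this.column = column;
            this.table = table;
        }

        @Override
        public Map<String, ExecutionContext> partition(int gridSize) {
            int min = jdbcTemplate.queryForObject("SELECT MIN(" + column + ") FROM " + table, Integer.class);
            int max = jdbcTemplate.queryForObject("SELECT MAX(" + column + ") FROM " + table, Integer.class);
            int targetSize = (max - min) / gridSize + 1;

            // each partition gets a minValue/maxValue pair in its ExecutionContext;
            // these are the keys the reader pulls back out in beforeStep()
            Map<String, ExecutionContext> result = new HashMap<>();
            int number = 0;
            int start = min;
            int end = start + targetSize - 1;
            while (start <= max) {
                ExecutionContext context = new ExecutionContext();
                context.putInt("minValue", start);
                context.putInt("maxValue", Math.min(end, max));
                result.put("partition" + number, context);
                start += targetSize;
                end += targetSize;
                number++;
            }
            return result;
        }
    }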

However, when I run the application I get inconsistent errors from the JdbcCursorItemReader which I do not understand. As a last resort I will have to debug the JdbcCursorItemReader, but I am hoping to get some help before that, and hopefully it will turn out to be a configuration issue.

ERROR:

Caused by: java.sql.SQLException: Exhausted Resultset
    at oracle.jdbc.driver.OracleResultSetImpl.getInt(OracleResultSetImpl.java:901) ~[ojdbc6-11.2.0.2.0.jar:11.2.0.2.0]
    at org.springframework.jdbc.support.JdbcUtils.getResultSetValue(JdbcUtils.java:160) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
    at org.springframework.jdbc.core.BeanPropertyRowMapper.getColumnValue(BeanPropertyRowMapper.java:370) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
    at org.springframework.jdbc.core.BeanPropertyRowMapper.mapRow(BeanPropertyRowMapper.java:291) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
    at org.springframework.batch.item.database.JdbcCursorItemReader.readCursor(JdbcCursorItemReader.java:139) ~[spring-batch-infrastructure-3.0.7.RELEASE.jar:3.0.7.RELEASE]

Configuration class:

@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Bean 
    public ItemProcessor<Archive, Archive> processor(@Value("${etl.region}") String region) { 
     return new ArchiveProcessor(region); 
    } 

    @Bean 
    public ItemWriter<Archive> writer(@Qualifier(value = "postgres") DataSource dataSource) { 
     JdbcBatchItemWriter<Archive> writer = new JdbcBatchItemWriter<>(); 

     writer.setSql("insert into tdw_src.archive (id) " + 
       "values (:id)"); 
     writer.setDataSource(dataSource); 
     writer.setItemSqlParameterSourceProvider(
       new org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider<>());
     return writer; 
    } 

    @Bean 
    public Partitioner archivePartitioner(@Qualifier(value = "gmDataSource") DataSource dataSource, 
              @Value("ROWNUM") String column, 
              @Value("archive") String table, 
              @Value("${gm.datasource.username}") String schema) { 
     return new ColumnRangePartitioner(dataSource, column, schema + "." + table); 
    } 

    @Bean 
    public Job archiveJob(JobBuilderFactory jobs, Step partitionerStep, JobExecutionListener listener) { 
     return jobs.get("archiveJob") 
       .preventRestart() 
       .incrementer(new RunIdIncrementer()) 
       .listener(listener) 
       .start(partitionerStep) 
       .build(); 
    } 

    @Bean 
    public Step partitionerStep(StepBuilderFactory stepBuilderFactory, 
           Partitioner archivePartitioner, 
           Step step1, 
           @Value("${spring.batch.gridsize}") int gridSize) { 
     return stepBuilderFactory.get("partitionerStep") 
       .partitioner(step1) 
       .partitioner("step1", archivePartitioner) 
       .gridSize(gridSize) 
       .taskExecutor(taskExecutor()) 
       .build(); 
    } 

    @Bean(name = "step1") 
    public Step step1(StepBuilderFactory stepBuilderFactory, ItemReader<Archive> customReader, 
         ItemWriter<Archive> writer, ItemProcessor<Archive, Archive> processor) { 
     return stepBuilderFactory.get("step1") 
       .listener(customReader) 
       .<Archive, Archive>chunk(5) 
       .reader(customReader) 
       .processor(processor) 
       .writer(writer) 
       .build(); 
    } 

    @Bean 
    public TaskExecutor taskExecutor(){ 
     return new SimpleAsyncTaskExecutor(); 
    } 

    @Bean 
    public SimpleJobLauncher getJobLauncher(JobRepository jobRepository) { 
     SimpleJobLauncher jobLauncher = new SimpleJobLauncher(); 
     jobLauncher.setJobRepository(jobRepository); 
     return jobLauncher; 
    }
}
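For completeness, the configuration above reads these properties; the values shown here are placeholders for illustration, not my real ones:

    # application.properties (placeholder values)
    etl.region=emea
    spring.batch.gridsize=10
    gm.datasource.username=geomanger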

Custom Reader:

public class CustomReader extends JdbcCursorItemReader<Archive> implements StepExecutionListener { 

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomReader.class);

    private StepExecution stepExecution;

    @Autowired 
    public CustomReader(@Qualifier(value = "gmDataSource") DataSource geomangerDataSource, 
         @Value("${gm.datasource.username}") String schema) throws Exception { 
     super(); 
     this.setSql("SELECT TMP.* FROM (SELECT ROWNUM AS ID_PAGINATION, id FROM " + schema + ".archive) TMP " + 
       "WHERE TMP.ID_PAGINATION >= ? AND TMP.ID_PAGINATION <= ?"); 
     this.setDataSource(geomangerDataSource); 
     BeanPropertyRowMapper<Archive> rowMapper = new BeanPropertyRowMapper<>(Archive.class); 
     this.setRowMapper(rowMapper); 
     this.setFetchSize(5); 
     this.setSaveState(false); 

     this.setVerifyCursorPosition(false); 
// not sure if this is needed?  this.afterPropertiesSet(); 
    } 

    @Override 
    public synchronized void beforeStep(StepExecution stepExecution) { 
     this.stepExecution = stepExecution; 
     this.setPreparedStatementSetter(getPreparedStatementSetter()); 
    } 

    private PreparedStatementSetter getPreparedStatementSetter() { 
     ListPreparedStatementSetter listPreparedStatementSetter = new ListPreparedStatementSetter(); 
     List<Integer> list = new ArrayList<>(); 
     list.add(stepExecution.getExecutionContext().getInt("minValue")); 
     list.add(stepExecution.getExecutionContext().getInt("maxValue")); 
     listPreparedStatementSetter.setParameters(list); 
     LOGGER.debug("getPreparedStatementSetter list: " + list); 
     return listPreparedStatementSetter; 
    } 

    @Override 
    public ExitStatus afterStep(StepExecution stepExecution) { 
     return null; 
    } 
} 

Removed the custom reader as a component and added it to the batch configuration instead:

    @Bean
    public ItemReader<Archive> reader(@Qualifier(value = "gmDataSource") DataSource geomangerDataSource,
         @Value("${geomanager.datasource.username}") String schema) throws Exception {
        return new CustomReader(geomangerDataSource, schema);
    }

Still getting the same error:

    Caused by: java.sql.SQLException: Exhausted Resultset
        at oracle.jdbc.driver.OracleResultSetImpl.getTimestamp(OracleResultSetImpl.java:1381) ~[ojdbc6-11.2.0.2.0.jar:11.2.0.2.0]

– user103122

Answer


I have got this all working now.

First, I needed to order the select statement in my CustomReader so that the ROWNUM values come out the same for every thread, and second, I had to scope every bean used inside the step with @StepScope. Without step scope the same reader instance, and therefore the same open cursor, was being shared by all the partition threads, which is presumably what produced the intermittent Exhausted Resultset errors.
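Roughly, those two changes look like this. This is a sketch only; note the extra nesting, since Oracle assigns ROWNUM before the ORDER BY of the same query block takes effect:

    @Bean
    @StepScope // one reader per step execution, so every partition thread
               // gets its own cursor and its own min/max parameters
    public CustomReader customReader(@Qualifier(value = "gmDataSource") DataSource gmDataSource,
         @Value("${gm.datasource.username}") String schema) throws Exception {
        CustomReader reader = new CustomReader(gmDataSource, schema);
        // order the rows first, then number them, so ROWNUM is stable across threads
        reader.setSql("SELECT TMP.* FROM (SELECT ROWNUM AS ID_PAGINATION, id FROM " +
                "(SELECT id FROM " + schema + ".archive ORDER BY id)) TMP " +
                "WHERE TMP.ID_PAGINATION >= ? AND TMP.ID_PAGINATION <= ?");
        return reader;
    }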

In reality I won't be using ROWNUM, because it forces the ordering and that costs performance, so I will be using the pk column to get the best performance.
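For illustration, assuming id is the numeric primary key of the archive table, the partitioner would then range over the pk itself and the reader's query becomes a plain indexed range scan, with no ROWNUM and no ORDER BY:

    // in BatchConfiguration: partition on the indexed pk column instead of ROWNUM
    @Bean
    public Partitioner archivePartitioner(@Qualifier(value = "gmDataSource") DataSource dataSource,
         @Value("${gm.datasource.username}") String schema) {
        return new ColumnRangePartitioner(dataSource, "id", schema + ".archive");
    }

    // in the CustomReader constructor: minValue/maxValue now bind directly to id
    this.setSql("SELECT id FROM " + schema + ".archive WHERE id >= ? AND id <= ?");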