
Groovy Sql withBatch missing records in DB

I am using Groovy Sql.withBatch to process a CSV file and load all of its data into my Postgres database.

Here is my method:

def processCSV() { 
    def logger = Logger.getLogger('groovy.sql') 
    logger.level = Level.FINE 
    logger.addHandler(new ConsoleHandler(level: Level.FINE)) 

    def fileName = "file.csv" 
    def resource = this.getClass().getResource('/csv/' + fileName) 

    File file = new File(resource.path) 

    String year = '2016' 

    char separator = ',' 

    def lines = CSV 
      .separator(separator) 
      .skipLines(1) 
      .quote(CSVParser.DEFAULT_QUOTE_CHARACTER) 
      .escape(CSVParser.DEFAULT_ESCAPE_CHARACTER) 
      .charset('UTF-8') 
      .create() 
      .reader(file) 
      .readAll() 

    def totalLines = lines.size() 

    Sql sql = getDatabaseInstance() 

    println("Delete existing rows for " + year + " if exists") 
    String dQuery = "DELETE FROM table1 WHERE year = ?" 
    sql.execute(dQuery, [year]) 

    def statement = 'INSERT INTO table1 (column1, column2, column3, column4, year) VALUES (?, ?, ?, ?, ?)' 

    println("Total lines in the CSV files: " + totalLines) 

    def batches = [] 

    sql.withBatch(BATCH_SIZE, statement) { ps -> 
     lines.each { fields -> 
      String column1 = fields[0] 
      String column2 = fields[1] 
      String column3 = fields[2] 
      String column4 = fields[3] 

      def params = [column1, column2, column3, column4, year] 

      def batch = ['params': params, 'error': false] 
      try { 
       ps.addBatch(params) 
      } 
      catch (all) { 
       batch['error'] = true 
       throw all 
      } 

      batches << batch 
     } 
    } 

    def recordsAddedInDB = sql.firstRow("SELECT count(*) FROM table1 WHERE year = ?", [year])[0] 

    sql.close() 

    println("") 
    println("Processed lines: " + line) 
    println("Batches: " + batches.size()) 
    println("Batches in error: " + batches.findAll{ it.error }.size()) 
    println("Record in DB for " + year + ": " + recordsAddedInDB) 
} 

The CSV file contains 23758 lines (excluding the header row). The output of this method is the following:

Delete existing rows for 2016 if exists 
Total lines in the CSV files: 23758 
Processed lines: 23758 
Batches: 23758 
Batches in error: 0 
Record in DB for 2016 year: 23580 

With logging enabled and a BATCH_SIZE of 500, I can see:

  • 47 times the message "Successfully executed batch with 500 command(s)"
  • 1 time the message "Successfully executed batch with 258 command(s)"

This means that all 23758 insert statements were processed (47 × 500 + 258 = 23758).

Does anyone know why the number of rows in the database is lower than the number of lines that were processed?


Just for extra forensics: it might be worth looking at the return value of sql.withBatch, e.g. 'def counts = sql.withBatch { ... }.sum()'. Adding a 'ps.executeBatch()' directly after ps.addBatch might also be useful, to see what results you get. –
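
A minimal sketch of that diagnostic, reusing sql, statement, lines, year and BATCH_SIZE from the question above. Groovy's Sql.withBatch returns the array of update counts reported by the JDBC driver, so summing it shows how many rows the batches actually changed:

// Diagnostic sketch: compare the driver-reported update counts with the CSV line count 
def updateCounts = sql.withBatch(BATCH_SIZE, statement) { ps -> 
    lines.each { fields -> 
        ps.addBatch([fields[0], fields[1], fields[2], fields[3], year]) 
    } 
} 
println("Rows reported by the driver: " + updateCounts.sum()) 
println("Lines read from the CSV: " + lines.size()) 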


sql.withBatch { ... }.sum() returns 23580 ... which is the same as the number of records I have in the database. If I add ps.executeBatch(), every insert statement is executed individually, sql.withBatch { ... }.sum() returns 0, and I still have the same number of records in the database. – Bagbyte


Is it possible that you have duplicate values for the id column in your input data? I.e. you might be inserting identical rows that overwrite each other in the database. –
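
One way to test that theory is to count duplicate key values in the parsed CSV lines before inserting. This is only a sketch: the choice of fields[0] as the candidate key is hypothetical, so adjust the index to whichever column is actually unique in table1.

// Count how many candidate-key values appear more than once in the CSV 
def keyCounts = lines.countBy { fields -> fields[0] } 
def duplicated = keyCounts.findAll { key, count -> count > 1 } 
println("Distinct keys: " + keyCounts.size() + ", duplicated keys: " + duplicated.size()) 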

Answer


Solved. The INSERT statement contains a subquery, and when the subquery returns no value the INSERT statement is ignored.
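
For readers hitting the same problem, here is a sketch of that failure mode (the lookup table and its columns are hypothetical, not taken from the original post): when the inner SELECT matches no row, the statement inserts zero rows, addBatch still succeeds, and the CSV line is silently dropped. Counting the zero entries in the update counts returned by withBatch shows how many lines were skipped.

// Hypothetical INSERT ... SELECT: nothing is inserted when lookup_table has no matching row 
def insertWithSubquery = ''' 
    INSERT INTO table1 (column1, column2, column3, column4, year) 
    SELECT ?, ?, ?, l.code, ? 
    FROM lookup_table l 
    WHERE l.name = ? 
''' 
def counts = sql.withBatch(BATCH_SIZE, insertWithSubquery) { ps -> 
    lines.each { fields -> 
        ps.addBatch([fields[0], fields[1], fields[2], year, fields[3]]) 
    } 
} 
def skipped = counts.toList().count { it == 0 } 
println("Lines skipped because the subquery returned no row: " + skipped) 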