
Processing large files with Java

I have a requirement to process records from a text file and insert/update them into a table. Below is the code I wrote. However, when the file contains 50,000 records it takes more than 30 minutes to process them, and when the record count approaches 80k an out-of-memory error is thrown. Can anyone suggest a way to optimize this code and improve its performance?

public static String insertIntoCHG_PNT_Table(String FILE_NAME) throws NumberFormatException, IOException
{
    Date DATE_INSERTED = new Date();
    String strLine = "";
    FileReader fr = new FileReader(FILE_NAME);
    BufferedReader br = new BufferedReader(fr);
    long SEQ = 0;
    double consumption = 1;
    String returnString = "";
    CHG_PNT insertObj = null;
    long KY_PREM_NO = 0;
    long KY_SPT = 0;
    String COD_COT_TYP = "";
    String DT_EFF = "";
    String TS_KY_TOT = "";
    String COD_COT = "";
    String ACL_VLE = "";
    String ACL_QTY = "";
    String WTR_VLE = "";
    String WTR_QTY = "";
    String SWG_VLE = "";
    String SWG_QTY = "";
    String CD_TYPE_ACT = "";
    String DT_TERM = "";
    String CD_STAT = "";
    String DT_STAT = "";
    String VLN_PPE_SIZ_COD = "";
    String WTR_PPE_SIZ_MTD = "";
    String SWG_PPE_SIZ_MTD = "";
    while ((strLine = br.readLine()) != null) {
        /*
         * Meter Serial No, Property No, Current Meter Index,
         * Previous meter index, Consumption needs to be added
         */
        String[] split = strLine.split("\\;");
        KY_PREM_NO = Long.parseLong(split[0].trim());
        KY_SPT = Long.parseLong(split[1].trim());
        COD_COT_TYP = split[2].trim();
        DT_EFF = split[3].trim();
        TS_KY_TOT = split[4].trim();
        COD_COT = split[5].trim();
        ACL_VLE = split[6].trim();
        ACL_QTY = split[7].trim();
        WTR_VLE = split[8].trim();
        WTR_QTY = split[9].trim();
        SWG_VLE = split[10].trim();
        SWG_QTY = split[11].trim();
        CD_TYPE_ACT = split[12].trim();
        DT_TERM = split[13].trim();
        CD_STAT = split[14].trim();
        DT_STAT = split[15].trim();
        VLN_PPE_SIZ_COD = split[16].trim();
        WTR_PPE_SIZ_MTD = split[17].trim();
        SWG_PPE_SIZ_MTD = split[18].trim();

        long counter = 0;
        long newCounter = 0;
        CHG_PNT checkRecordCount = null;
        checkRecordCount = checkAndUpdateRecord(KY_PREM_NO, KY_SPT, COD_COT_TYP, TS_KY_TOT);

        try {
            if (checkRecordCount == null)
                insertObj = new CHG_PNT();
            else
                insertObj = checkRecordCount;
            insertObj.setKY_PREM_NO(KY_PREM_NO);
            //insertObj.setSEQ_NO(SEQ);
            insertObj.setKY_SPT(KY_SPT);
            insertObj.setCOD_COT_TYP(COD_COT_TYP);
            insertObj.setDT_EFF(DT_EFF);
            insertObj.setTS_KY_TOT(TS_KY_TOT);
            insertObj.setCOD_COT(COD_COT);
            insertObj.setACL_VLE(Double.parseDouble(ACL_VLE));
            insertObj.setACL_QTY(Double.parseDouble(ACL_QTY));
            insertObj.setWTR_VLE(Double.parseDouble(WTR_VLE));
            insertObj.setWTR_QTY(Double.parseDouble(WTR_QTY));
            insertObj.setSWG_VLE(Double.parseDouble(SWG_VLE));
            insertObj.setSWG_QTY(Double.parseDouble(SWG_QTY));
            insertObj.setCD_TYPE_ACT(CD_TYPE_ACT);
            insertObj.setDT_TERM(DT_TERM);
            insertObj.setCD_STAT(Double.parseDouble(CD_STAT));
            insertObj.setDT_STAT(DT_STAT);
            insertObj.setVLN_PPE_SIZ_COD(VLN_PPE_SIZ_COD);
            insertObj.setWTR_PPE_SIZ_MTD(WTR_PPE_SIZ_MTD);
            insertObj.setSWG_PPE_SIZ_MTD(SWG_PPE_SIZ_MTD);
            insertObj.setDATE_INSERTED(DATE_INSERTED);
            if (checkRecordCount == null) {
                insertObj.setDATE_INSERTED(DATE_INSERTED);
                insertObj.insert();
            } else {
                insertObj.setDATE_MODIFIED(DATE_INSERTED);
                insertObj.update();
            }
            BSF.getObjectManager()._commitTransactionDirect(true);
        } catch (Exception e) {
            String abc = e.getMessage();
        }
    }
    fr.close();
    br.close();
    String localPath = FILE_NAME;
    File f = new File(FILE_NAME);
    String fullPath = f.getParent();
    String fileName = f.getName();
    String SubStr1 = new String("Processing");
    int index = fullPath.lastIndexOf(SubStr1);
    String path = fullPath.substring(0, index);
    String destPath = path + "\\Archive\\" + fileName;
    PMP_PROPERTIES.copyFile(new File(localPath), new File(destPath));
    File file = new File(FILE_NAME);
    file.delete();
    return null;
}

Take a look at this: http://stackoverflow.com/questions/2356137/read-large-files-in-java – user3470953


It isn't the file I/O, it's the processing. You can read millions of lines per second in Java. You could try using a 'Scanner' instead of 'String.split()'. You also shouldn't initialize all those strings to '""'; just declare them inside the loop. Your exception handling needs work. You don't need to create two separate 'File' objects for the same file name. And don't create 'String's with 'new String("literal")'. – EJP
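
For what it's worth, here is a minimal sketch of the reading loop with those comments applied, assuming Java 7+ for try-with-resources. 'ChgPntLoader.load' is a hypothetical wrapper name, and the 'CHG_PNT' persistence calls are elided; only the first few fields are shown.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ChgPntLoader {
    public static void load(String fileName) throws IOException {
        // try-with-resources closes the reader even when a line fails to parse
        try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
            String strLine;
            while ((strLine = br.readLine()) != null) {
                String[] split = strLine.split(";");
                // declare per-record variables inside the loop instead of
                // pre-initializing nineteen fields to ""
                long kyPremNo = Long.parseLong(split[0].trim());
                long kySpt = Long.parseLong(split[1].trim());
                String codCotTyp = split[2].trim();
                // ... parse the remaining fields the same way, then build or
                // update the CHG_PNT record as before; let a parse failure
                // propagate or log it, rather than assigning e.getMessage()
                // to an unused local
            }
        }
    }
}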


If you're not sure which part is the problem, you could try commenting out all the database access and the file copy first, then test the database part and the file copy separately. –
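
One quick way to do that isolation, sketched as a standalone harness (file path taken from the command line; no names here come from the question): time a pass that only reads and splits, with all database work disabled. If this pass is fast, the bottleneck is the database, not the parsing.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ParseOnlyTimer {
    public static void main(String[] args) throws IOException {
        long start = System.nanoTime();
        long lines = 0;
        try (BufferedReader br = new BufferedReader(new FileReader(args[0]))) {
            String strLine;
            while ((strLine = br.readLine()) != null) {
                String[] split = strLine.split(";"); // parse only, no DB calls
                lines++;
            }
        }
        System.out.printf("parsed %d lines in %d ms%n",
                lines, (System.nanoTime() - start) / 1_000_000);
    }
}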

Answer


There are two main problems. The first is the performance problem: perhaps counterintuitively, the bottleneck is the speed of the database inserts.

You are inserting each item in a separate transaction. You shouldn't do that if you want the inserts to be fast. Introduce a counter variable and commit only once every N inserts, and once more at the end.

int commitStep = 100;
int modCount = 0;

while ((strLine = br.readLine()) != null) {
    // ... your per-record code ...
    modCount++;
    if (modCount % commitStep == 0) {
        BSF.getObjectManager()._commitTransactionDirect(true);
    }
}
// commit the final partial batch after the loop
BSF.getObjectManager()._commitTransactionDirect(true);

You can read more about SQL insert speed here: Sql insert speed up
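
If the persistence framework allows dropping down to plain JDBC, statement batching is the usual way to get fast bulk inserts. A hedged sketch follows; the connection URL, table, and column list are placeholders, not taken from the question, and the loop stands in for the file-reading loop.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:...", "user", "pass")) {
            con.setAutoCommit(false); // one transaction per batch, not per row
            String sql = "INSERT INTO CHG_PNT (KY_PREM_NO, KY_SPT) VALUES (?, ?)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                int batchSize = 100;
                int count = 0;
                for (long i = 0; i < 1000; i++) { // stand-in for the file loop
                    ps.setLong(1, i);
                    ps.setLong(2, i);
                    ps.addBatch();
                    if (++count % batchSize == 0) {
                        ps.executeBatch(); // send accumulated rows in one round trip
                        con.commit();
                    }
                }
                ps.executeBatch(); // flush the final partial batch
                con.commit();
            }
        }
    }
}

Batching amortizes the commit and round-trip overhead across many rows; with the per-row commit in the original code, every record pays for a full transaction.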

The second problem is, possibly, the scalability of the file reading. It will work for smaller files, but not for larger ones. The question Read large files in Java has some good answers for your problem.
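
For reference, the memory-safe pattern for arbitrarily large text files is to stream one line at a time and never hold the whole file (or all parsed records) in memory. A minimal sketch, assuming Java 7+ and UTF-8 input; the file name is a placeholder:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class LargeFileScan {
    public static void main(String[] args) throws IOException {
        // memory use stays flat: only one line is held at a time
        try (BufferedReader br = Files.newBufferedReader(
                Paths.get("records.txt"), StandardCharsets.UTF_8)) {
            String line;
            long count = 0;
            while ((line = br.readLine()) != null) {
                count++; // process the record here instead of collecting it
            }
            System.out.println("lines: " + count);
        }
    }
}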