2017-02-02

WSO2 ESB FileConnector2 copy copies a file to the destination more than 200 times

Can someone please provide insight and guidance on how to resolve the following FileConnector copy problem? I am using WSO2 ESB version 4.9 and FileConnector2.

I have a sequence that uses the FileConnector2 copy operation to copy a file to an FTP location. Even though there is no concurrency issue that I am aware of, the copy operation copies the file to the destination multiple times! Interestingly, the number of copies depends on the environment it runs in: in development, a file was copied 17 times; in production, a file was copied more than 200 times. I would like to understand what drives this behavior. Have I misconfigured the FileConnector copy operation? The sequence definition and the FTP server traffic are included below. Thanks in advance for your help.

The sequence definition that performs the file copy:

<?xml version="1.0" encoding="UTF-8"?> 
<sequence name="SendFile2VendorSeq" onError="FileUploadFailSeq" 
trace="disable" xmlns="http://ws.apache.org/ns/synapse"> 
<property action="remove" name="ClientApiNonBlocking" scope="axis2"/> 
<property name="OUT_ONLY" scope="default" type="STRING" value="true"/> 
<dblookup description="Get CDR files to be uploaded"> 
    <connection> 
    <pool> 
     <dsName>jdbc/DB_DS</dsName> 
    </pool> 
    </connection> 
    <statement> 
    <sql><![CDATA[ 
     SELECT QUERY HERE 
    ]]></sql> 
    <parameter expression="get-property('vendorCode')" type="VARCHAR"/> 
    <result column="vendor_dest" name="vendorDest"/> 
    <result column="file_name" name="fileName"/> 
    <result column="vendor_id" name="vendorId"/> 
    <result column="file_path" name="fileSource"/> 
    <result column="vendor_code" name="vendorCode"/> 
    <result column="id" name="fileId"/> 
    </statement> 
</dblookup> 
<!-- CHECK TO SEE IF A ROW IS RETURNED. IF NOT THEN EXIT --> 
<filter xpath="boolean(get-property('fileSource'))"> 
    <then> 
    <property 
     expression="fn:concat(get-property('fileSource'),get-property('fileName'))" 
     name="sourceFile" scope="default" type="STRING"/> 
    <fileconnector.isFileExist> 
     <source>{$ctx:sourceFile}</source> 
    </fileconnector.isFileExist> 
    <switch source="//fileExist"> 
     <case regex="true"> 
     <log description="logEntry" level="full"/> 
     <!-- Log the transaction in Database --> 
     <log description="LogEntry" level="custom"> 
      <property 
      expression="fn:concat('Uploading ', get-property('fileName'), ' (id:', get-property('fileId'), ') to vendor ', get-property('vendorCode'))" name="message"/> 
     </log> 
     <dblookup description="LogTX"> 
      <connection> 
      <pool> 
       <dsName>jdbc/DB_DS</dsName> 
      </pool> 
      </connection> 
      <statement> 
      <sql><![CDATA[SELECT record_TX(?,?)]]></sql> 
      <parameter expression="get-property('fileId')" type="INTEGER"/> 
      <parameter expression="get-property('vendorId')" type="INTEGER"/> 
      </statement> 
     </dblookup> 
     <!-- Conduct file copy --> 
     <fileconnector.copy> 
      <source>{$ctx:fileSource}</source> 
      <destination>{$ctx:vendorDest}</destination> 
      <filePattern>{$ctx:fileName}</filePattern> 
     </fileconnector.copy> 
     <!-- Update the transaction with status --> 
     <dblookup description="LogTX"> 
      <connection> 
      <pool> 
       <dsName>jdbc/DB_DS</dsName> 
      </pool> 
      </connection> 
      <statement> 
      <sql><![CDATA[SELECT record_TX(?,?,?,?)]]></sql> 
      <parameter expression="get-property('fileId')" type="INTEGER"/> 
      <parameter expression="get-property('vendorId')" type="INTEGER"/> 
      <parameter type="CHAR" value="S"/> 
      <parameter type="VARCHAR" value="Success"/> 
      </statement> 
     </dblookup> 
     <log description="LogEntry" level="custom"> 
      <property 
      expression="fn:concat('Successfully uploaded ', get-property('fileName'), ' (id:', get-property('fileId'), ') to vendor ', get-property('vendorCode'))" name="message"/> 
     </log> 
     <drop/> 
     </case> 
     <default> 
     <log description="LogEntry" level="custom"> 
      <property 
      expression="fn:concat(get-property('fileName'), ' (id:', get-property('fileId'), ') does not exist in the source directory.')" name="message"/> 
     </log> 
     <dblookup description="LogTX"> 
      <connection> 
      <pool> 
       <dsName>jdbc/DB_DS</dsName> 
      </pool> 
      </connection> 
      <statement> 
      <sql><![CDATA[SELECT record_TX(?,?,?,?)]]></sql> 
      <parameter expression="get-property('fileId')" type="INTEGER"/> 
      <parameter expression="get-property('vendorId')" type="INTEGER"/> 
      <parameter type="CHAR" value="S"/> 
      <parameter type="VARCHAR" value="File does not exist in the source directory."/> 
      </statement> 
     </dblookup> 
     </default> 
    </switch> 
    </then> 
    <else> 
    <drop/> 
    </else> 
</filter> 
</sequence> 

Sample FTP server log output showing the behavior:

Mon Jan 30 11:46:00 2017 [pid 21385] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:00 2017 [pid 21384] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:00 2017 [pid 21386] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 66.90Kbyte/sec 
Mon Jan 30 11:46:00 2017 [pid 21388] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:00 2017 [pid 21387] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:00 2017 [pid 21389] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 67.74Kbyte/sec 
Mon Jan 30 11:46:00 2017 [pid 21392] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:01 2017 [pid 21391] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:01 2017 [pid 21393] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 31.43Kbyte/sec 
Mon Jan 30 11:46:01 2017 [pid 21395] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:01 2017 [pid 21394] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:01 2017 [pid 21396] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 64.64Kbyte/sec 
Mon Jan 30 11:46:01 2017 [pid 21398] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:01 2017 [pid 21397] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:02 2017 [pid 21399] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 64.84Kbyte/sec 
Mon Jan 30 11:46:02 2017 [pid 21401] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:02 2017 [pid 21400] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:02 2017 [pid 21402] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 47.52Kbyte/sec 
Mon Jan 30 11:46:02 2017 [pid 21404] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:02 2017 [pid 21403] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:02 2017 [pid 21405] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 58.88Kbyte/sec 
Mon Jan 30 11:46:02 2017 [pid 21407] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:03 2017 [pid 21406] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:03 2017 [pid 21408] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 52.50Kbyte/sec 
Mon Jan 30 11:46:03 2017 [pid 21410] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:03 2017 [pid 21409] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:03 2017 [pid 21411] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 63.34Kbyte/sec 
Mon Jan 30 11:46:03 2017 [pid 21413] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:03 2017 [pid 21412] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:04 2017 [pid 21414] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 69.12Kbyte/sec 
Mon Jan 30 11:46:04 2017 [pid 21416] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:04 2017 [pid 21415] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:04 2017 [pid 21417] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 56.66Kbyte/sec 
Mon Jan 30 11:46:04 2017 [pid 21419] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:04 2017 [pid 21418] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:04 2017 [pid 21420] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 66.30Kbyte/sec 
Mon Jan 30 11:46:04 2017 [pid 21422] CONNECT: Client "Some-IP-address" 
Mon Jan 30 11:46:04 2017 [pid 21421] [userName] OK LOGIN: Client "Some-IP-address" 
Mon Jan 30 11:46:05 2017 [pid 21423] [userName] OK UPLOAD: Client "Some-IP-address", "/test/file.T70130IN03.csv", 1607 bytes, 68.93Kbyte/sec 

Log entries from the system log file. It looks like the mediator takes more than two minutes to complete its operation. During those roughly two minutes it copies the file to the destination over and over, until, I suspect, some kind of timeout is reached.

[2017-02-02 15:24:58,250] DEBUG - ClassMediator Start : Class mediator 
[2017-02-02 15:24:58,250] DEBUG - ClassMediator invoking : class org.wso2.carbon.connector.FileCopy.mediate() 
[2017-02-02 15:25:46,851] DEBUG - ThreadingView Thread state summary for PassthroughHttpServerWorker threads - Blocked: 0.0%, Unblocked: 100.0% 
[2017-02-02 15:25:46,881] DEBUG - ThreadingView Thread state summary for PassthroughHttpServerWorker threads - Blocked: 0.0%, Unblocked: 100.0% 
[2017-02-02 15:26:46,855] DEBUG - ThreadingView Thread state summary for PassthroughHttpServerWorker threads - Blocked: 0.0%, Unblocked: 100.0% 
[2017-02-02 15:26:46,884] DEBUG - ThreadingView Thread state summary for PassthroughHttpServerWorker threads - Blocked: 0.0%, Unblocked: 100.0% 
[2017-02-02 15:27:29,479] DEBUG - ClassMediator End : Class mediator 
[2017-02-02 15:27:29,479] DEBUG - SequenceMediator End : Sequence <CopyFile> 

Answer

I found the solution after decompiling FileCopy.class from the FileConnector. Apparently, when filePattern is specified in the mediator configuration, a FOR loop is executed. After I removed filePattern and specified the file directly in the source path, it worked as expected.

<sequence name="CopyFile" onError="AppDefaultFailSeq" trace="disable"
    xmlns="http://ws.apache.org/ns/synapse">
    <fileconnector.copy>
        <source>c://data/out/testFile.csv</source>
        <destination>ftp://userName:{password}@ftp.co.com/test</destination>
    </fileconnector.copy>
</sequence>
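To make the answer's FOR-loop observation concrete: the symptom is consistent with a copy implementation that, when a pattern is supplied, lists the source directory and copies every matching entry inside a loop, so a re-entered or retried mediation multiplies the uploads. The sketch below is a hypothetical, simplified illustration of that pattern in plain `java.nio`; it is not the actual decompiled FileConnector source, and the `copyMatching` method name is my own.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;

public class PatternCopySketch {

    // Hypothetical sketch: with a file pattern, every matching file in the
    // source directory is copied in a loop. Each time this method runs
    // (e.g. on a retry or a re-entered mediation), every match is copied
    // again, multiplying the uploads seen at the destination.
    static List<Path> copyMatching(Path sourceDir, Path destDir, String glob)
            throws IOException {
        List<Path> copied = new ArrayList<>();
        try (DirectoryStream<Path> stream =
                Files.newDirectoryStream(sourceDir, glob)) {
            for (Path file : stream) { // the FOR loop driven by the pattern
                Path target = destDir.resolve(file.getFileName());
                Files.copy(file, target, StandardCopyOption.REPLACE_EXISTING);
                copied.add(target);
            }
        }
        return copied;
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("src");
        Path dst = Files.createTempDirectory("dst");
        Files.writeString(src.resolve("file.T70130IN03.csv"), "a,b,c");
        List<Path> copied = copyMatching(src, dst, "*.csv");
        System.out.println("copied " + copied.size() + " file(s)");
    }
}
```

Specifying a single file as the source, as in the fixed sequence above, avoids the directory listing entirely, which matches the observed fix.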