Java FUSE filesystem - JVM error: double free or corruption

I am writing a FUSE filesystem in Java using the jnr-fuse library (https://github.com/SerCeMan/jnr-fuse), which internally uses JNR for native access.
The filesystem acts as a front end to an Amazon S3 bucket, essentially enabling users to mount a bucket as a regular storage device.
While reworking my read method, I ran into the following JVM error:
*** Error in `/usr/local/bin/jdk1.8.0_65/bin/java': double free or corruption (!prev): 0x00007f3758953d80 ***
The error occurs while copying a file from the FUSE filesystem to the local FS, and it always happens on the second call to the read method (i.e. for the second 128K-byte chunk of data):
cp /tmp/fusetest/benchmark/benchmarkFile.large /tmp
The read method in question is:
public int read(String path, Pointer buf, @size_t long size, @off_t long offset, FuseFileInfo fi) {
    LOGGER.debug("Reading file {}, offset = {}, read length = {}", path, offset, size);
    S3fsNodeInfo nodeInfo;
    try {
        nodeInfo = this.dbHelper.getNodeInfo(S3fsPath.fromUnixPath(path));
    } catch (FileNotFoundException ex) {
        LOGGER.error("Read called on non-existing node: {}", path);
        return -ErrorCodes.ENOENT();
    }
    try {
        // *** important part start
        InputStream is = this.s3Helper.getInputStream(nodeInfo.getPath(), offset, size);
        byte[] data = new byte[is.available()];
        int numRead = is.read(data, 0, (int) size);
        LOGGER.debug("Got {} bytes from stream, putting to buffer", numRead);
        buf.put(offset, data, 0, numRead);
        return numRead;
        // *** important part end
    } catch (IOException ex) {
        LOGGER.error("Error while reading file {}", path, ex);
        return -ErrorCodes.EIO();
    }
}
The input stream in use is actually a ByteArrayInputStream over a buffer that I use to reduce HTTP traffic to S3. I am running FUSE in single-threaded mode for now to avoid any concurrency-related issues.
Funnily enough, I already had a working version that did no internal buffering but was otherwise exactly the same.
Unfortunately I am not really into JVM internals, so I am not sure how to get to the bottom of this - normal debugging yields nothing, as the actual error seems to happen on the C side.
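The caching scheme described above can be sketched roughly as follows. The class and method names (`CachedS3Helper`, `getInputStream`) come from my logs and code, but the body here is a simplified stand-in: `fetchFromS3` is a hypothetical placeholder for the real S3 download, and the real helper works block-wise rather than caching whole objects.

```java
import java.io.ByteArrayInputStream;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: cache an object's bytes once, then serve range reads
// from the in-memory copy via ByteArrayInputStream.
public class CachedS3Helper {
    private final Map<String, byte[]> cache = new HashMap<>();

    // Hypothetical stand-in for the actual S3 download.
    private byte[] fetchFromS3(String path) {
        byte[] data = new byte[256 * 1024]; // pretend the remote object is 256K
        for (int i = 0; i < data.length; i++) data[i] = (byte) (i & 0xFF);
        return data;
    }

    public ByteArrayInputStream getInputStream(String path, long offset, long length) {
        byte[] whole = cache.computeIfAbsent(path, this::fetchFromS3);
        int start = (int) Math.min(offset, whole.length);
        int len = (int) Math.min(length, whole.length - start);
        // The stream only ever exposes the requested range of the cached bytes.
        return new ByteArrayInputStream(whole, start, len);
    }

    public static void main(String[] args) {
        CachedS3Helper helper = new CachedS3Helper();
        ByteArrayInputStream first = helper.getInputStream("/benchmark/benchmarkFile.large", 0, 131072);
        ByteArrayInputStream second = helper.getInputStream("/benchmark/benchmarkFile.large", 131072, 131072);
        System.out.println(first.available() + " " + second.available()); // prints "131072 131072"
    }
}
```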
Here is the full console output of the read operation triggered by the above command:
2016-02-29 02:08:45,652 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Reading file /benchmark/benchmarkFile.large, offset = 0, read length = 131072
unique: 7, opcode: READ (15), nodeid: 3, insize: 80, pid: 8297
read[0] 131072 bytes from 0 flags: 0x8000
2016-02-29 02:08:46,024 DEBUG s3fs.fs.CachedS3Helper [main] - Getting data from cache - path = /benchmark/benchmarkFile.large, offset = 0, length = 131072
2016-02-29 02:08:46,025 DEBUG s3fs.fs.CachedS3Helper [main] - Path /benchmark/benchmarkFile.large not yet in cache, add it
2016-02-29 02:08:57,178 DEBUG s3fs.fs.CachedS3Helper [main] - Path /benchmark/benchmarkFile.large found in cache!
read[0] 131072 bytes from 0
unique: 7, success, outsize: 131088
2016-02-29 02:08:57,179 DEBUG s3fs.fs.CachedS3Helper [main] - Starting actual cache read for path /benchmark/benchmarkFile.large
2016-02-29 02:08:57,179 DEBUG s3fs.fs.CachedS3Helper [main] - Reading data from cache block 0, blockOffset = 0, length = 131072
2016-02-29 02:08:57,179 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Got 131072 bytes from stream, putting to buffer
2016-02-29 02:08:57,180 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Reading file /benchmark/benchmarkFile.large, offset = 131072, read length = 131072
unique: 8, opcode: READ (15), nodeid: 3, insize: 80, pid: 8297
read[0] 131072 bytes from 131072 flags: 0x8000
2016-02-29 02:08:57,570 DEBUG s3fs.fs.CachedS3Helper [main] - Getting data from cache - path = /benchmark/benchmarkFile.large, offset = 131072, length = 131072
2016-02-29 02:08:57,570 DEBUG s3fs.fs.CachedS3Helper [main] - Path /benchmark/benchmarkFile.large found in cache!
2016-02-29 02:08:57,570 DEBUG s3fs.fs.CachedS3Helper [main] - Starting actual cache read for path /benchmark/benchmarkFile.large
2016-02-29 02:08:57,571 DEBUG s3fs.fs.CachedS3Helper [main] - Reading data from cache block 0, blockOffset = 131072, length = 131072
2016-02-29 02:08:57,571 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Got 131072 bytes from stream, putting to buffer
read[0] 131072 bytes from 131072
unique: 8, success, outsize: 131088
*** Error in `/usr/local/bin/jdk1.8.0_65/bin/java': double free or corruption (!prev): 0x00007fcaa8b30c80 ***
The error message indicates that you are stomping on memory the heap uses to structure itself into a list of chunks. That memory is validly mapped, which is why you do not get a segfault, but the heap manager detects that you have overwritten the metadata linking to the previous chunk. –
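One plausible reading of this comment, given the logs above: the native `Pointer buf` that FUSE passes to `read` is only `size` (128K) bytes long and expects data at buffer offset 0, yet `buf.put(offset, data, 0, numRead)` uses the *file* offset as the *buffer* offset. On the second call (offset = 131072) that would write 128K past the end of the native buffer; a raw pointer has no bounds check, so the write could silently trample the allocator's chunk metadata until glibc aborts on a later free. This is an assumption, not a confirmed diagnosis. The hypothetical sketch below shows the same indexing bug against a plain Java array, where bounds checking catches it immediately:

```java
// Hypothetical illustration (not jnr-fuse code): using the file offset
// as the destination offset overruns a buffer that is only `size` bytes long.
public class OffsetDemo {
    // Bounds-checked analogue of buf.put(bufOffset, data, 0, numRead).
    static int copyIntoBuffer(byte[] buf, long bufOffset, byte[] data, int numRead) {
        System.arraycopy(data, 0, buf, (int) bufOffset, numRead);
        return numRead;
    }

    public static void main(String[] args) {
        byte[] fuseBuf = new byte[131072]; // FUSE hands the callback exactly `size` bytes
        byte[] chunk = new byte[131072];

        copyIntoBuffer(fuseBuf, 0, chunk, chunk.length); // first read: in bounds
        try {
            // second read: destination offset 131072 in a 131072-byte buffer
            copyIntoBuffer(fuseBuf, 131072, chunk, chunk.length);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("second read overruns the buffer");
        }
    }
}
```

With a native `Pointer` there is no such exception; the out-of-bounds write lands in adjacent heap memory instead.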