Benchmarking zlib: Java vs. C

I'm trying to speed up my TIFF encoder, originally written in Java, by switching to C. I compiled zlib 1.2.8 with Z_SOLO defined and the minimal set of C source files: adler32.c, crc32.c, deflate.c, trees.c and zutil.c. The Java side uses java.util.zip.Deflater.
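For reference, a minimal sketch of the kind of MSVC build this setup implies (my assumption, not the exact command line used; zlib_solo.lib is just an illustrative name). The /O2 flag is worth double-checking, since an unoptimized Debug build could by itself explain a large speed gap:

cl /nologo /O2 /DZ_SOLO /c adler32.c crc32.c deflate.c trees.c zutil.c
lib /nologo /OUT:zlib_solo.lib adler32.obj crc32.obj deflate.obj trees.obj zutil.obj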
I wrote a simple test program to assess performance in terms of compression level and speed, and was surprised that compression does not improve much whatever level I request, considering the extra time the higher levels cost.
Java:
Level 1 : 8424865 => 6215200 (73,8%) in 247 cycles.
Level 2 : 8424865 => 6178098 (73,3%) in 254 cycles.
Level 3 : 8424865 => 6181716 (73,4%) in 269 cycles.
Level 4 : 8424865 => 6337236 (75,2%) in 334 cycles.
Level 5 : 8424865 => 6331902 (75,2%) in 376 cycles.
Level 6 : 8424865 => 6333914 (75,2%) in 395 cycles.
Level 7 : 8424865 => 6333350 (75,2%) in 400 cycles.
Level 8 : 8424865 => 6331986 (75,2%) in 437 cycles.
Level 9 : 8424865 => 6331598 (75,2%) in 533 cycles.
C:
Level 1 : 8424865 => 6215586 (73.8%) in 298 cycles.
Level 2 : 8424865 => 6195280 (73.5%) in 309 cycles.
Level 3 : 8424865 => 6182748 (73.4%) in 331 cycles.
Level 4 : 8424865 => 6337942 (75.2%) in 406 cycles.
Level 5 : 8424865 => 6339203 (75.2%) in 457 cycles.
Level 6 : 8424865 => 6337100 (75.2%) in 481 cycles.
Level 7 : 8424865 => 6336396 (75.2%) in 492 cycles.
Level 8 : 8424865 => 6334293 (75.2%) in 547 cycles.
Level 9 : 8424865 => 6333084 (75.2%) in 688 cycles.
I was also amazed that the Java version actually performs better than the Visual Studio-compiled build (VC2010), in both compression and speed. Am I the only one witnessing such results? My guess is that the zlib inside the JVM uses assembly-level optimizations I did not include in my C project, or that I am missing an obvious configuration step when compiling zlib (or a Visual Studio compiler option).
Here are the two snippets:

Java:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.Deflater;

public class DeflateBench {
    public static void main(String[] args) throws IOException {
        byte[] pix = Files.readAllBytes(Paths.get("MY_MOSTLY_UNCOMPRESSED.TIFF"));
        int szin = pix.length;
        byte[] buf = new byte[szin * 101 / 100];
        int szout;
        long t0, t1;
        for (int i = 1; i <= 9; i++) {
            t0 = System.currentTimeMillis();
            Deflater deflater = new Deflater(i);
            deflater.setInput(pix);
            deflater.finish();             // must precede deflate() so the stream gets terminated
            szout = deflater.deflate(buf);
            deflater.end();                // release the native zlib state
            t1 = System.currentTimeMillis();
            System.out.println(String.format("Level %d : %d => %d (%.1f%%) in %d cycles.", i, szin, szout, 100.0f * szout / szin, t1 - t0));
        }
    }
}
C:
#include <stdio.h>
#include <time.h>
#include "zlib.h"

#define SZIN 9000000
#define SZOUT 10000000

int main(void)
{
    static unsigned char buf[SZIN];
    static unsigned char out[SZOUT];
    clock_t t0, t1;
    int i, ret;
    uLongf sz, szin;
    FILE* f = fopen("MY_MOSTLY_UNCOMPRESSED.TIFF", "rb");
    if (!f) return 1;
    szin = fread(buf, 1, SZIN, f);
    fclose(f);
    for (i = 1; i <= 9; i++) {
        sz = SZOUT;
        t0 = clock();
        ret = compress2(out, &sz, buf, szin, i); /* I rewrote compress2, as it's not available when Z_SOLO is defined */
        t1 = clock();
        printf("Level %d : %lu => %lu (%.1f%%) in %ld cycles.\n", i, szin, sz, 100.0f * sz / szin, (long)(t1 - t0));
    }
    return 0;
}
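A side note on buffer sizing: the fixed SZOUT guess can be replaced by a bound computed by zlib itself. compressBound() lives in compress.c, which this Z_SOLO build leaves out, but deflateBound() is defined in deflate.c and is therefore available here. A minimal sketch, assuming application-supplied allocators (which Z_SOLO requires; my_alloc and my_free are hypothetical names):

#include <stdlib.h>
#include "zlib.h"

/* Z_SOLO strips zlib's default allocators, so the application supplies them. */
static voidpf my_alloc(voidpf opaque, uInt items, uInt size) { return malloc((size_t)items * size); }
static void   my_free(voidpf opaque, voidpf address) { free(address); }

static uLong worst_case(uLong srclen, int level)
{
    z_stream strm = {0};
    uLong bound;
    strm.zalloc = my_alloc;
    strm.zfree  = my_free;
    if (deflateInit(&strm, level) != Z_OK) return 0;
    bound = deflateBound(&strm, srclen);  /* worst-case deflate output for srclen input bytes */
    deflateEnd(&strm);
    return bound;
}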
Edit:
Following @MarkAdler's remarks, I tried the different compression strategies (namely Z_FILTERED and Z_HUFFMAN_ONLY) through deflateInit2():
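compress2() takes no strategy parameter, so the strategies have to be routed through deflateInit2(). A minimal sketch of such a wrapper, not the exact code used: compress2_strategy is a hypothetical name, my_alloc/my_free stand for the application-supplied allocators Z_SOLO requires, and windowBits 15 / memLevel 8 are zlib's defaults.

/* Like compress2(), but initialized via deflateInit2() so a strategy can be chosen. */
static int compress2_strategy(Bytef *dest, uLongf *destLen,
                              const Bytef *source, uLong sourceLen,
                              int level, int strategy)
{
    z_stream strm = {0};
    int ret;
    strm.zalloc = my_alloc;  /* application-supplied allocators (Z_SOLO) */
    strm.zfree  = my_free;
    ret = deflateInit2(&strm, level, Z_DEFLATED, 15, 8, strategy);
    if (ret != Z_OK) return ret;
    strm.next_in   = (Bytef *)source;
    strm.avail_in  = (uInt)sourceLen;
    strm.next_out  = dest;
    strm.avail_out = (uInt)*destLen;
    ret = deflate(&strm, Z_FINISH);  /* single-shot: all input and output handled in one call */
    *destLen = strm.total_out;
    deflateEnd(&strm);
    return ret == Z_STREAM_END ? Z_OK : Z_BUF_ERROR;
}

With Z_DEFAULT_STRATEGY this matches what compress2() does internally, since deflateInit() uses the same defaults.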
Z_FILTERED:
Level 1 : 8424865 => 6215586 (73.8%) in 299 cycles.
Level 2 : 8424865 => 6195280 (73.5%) in 310 cycles.
Level 3 : 8424865 => 6182748 (73.4%) in 330 cycles.
Level 4 : 8424865 => 6623409 (78.6%) in 471 cycles.
Level 5 : 8424865 => 6604616 (78.4%) in 501 cycles.
Level 6 : 8424865 => 6595698 (78.3%) in 528 cycles.
Level 7 : 8424865 => 6594845 (78.3%) in 536 cycles.
Level 8 : 8424865 => 6592863 (78.3%) in 595 cycles.
Level 9 : 8424865 => 6591118 (78.2%) in 741 cycles.
Z_HUFFMAN_ONLY:
Level 1 : 8424865 => 6803043 (80.7%) in 111 cycles.
Level 2 : 8424865 => 6803043 (80.7%) in 108 cycles.
Level 3 : 8424865 => 6803043 (80.7%) in 106 cycles.
Level 4 : 8424865 => 6803043 (80.7%) in 106 cycles.
Level 5 : 8424865 => 6803043 (80.7%) in 107 cycles.
Level 6 : 8424865 => 6803043 (80.7%) in 106 cycles.
Level 7 : 8424865 => 6803043 (80.7%) in 107 cycles.
Level 8 : 8424865 => 6803043 (80.7%) in 108 cycles.
Level 9 : 8424865 => 6803043 (80.7%) in 107 cycles.
As anticipated by his comment, Z_HUFFMAN_ONLY does not change the compression across levels but runs much faster. With my data, Z_FILTERED is not faster and compresses slightly worse than Z_DEFAULT_STRATEGY.
I'm surprised that level 3 gives the smallest output. Are you sure there isn't something special about your data? – Peter Lawrey
@PeterLawrey It's a "standard" TIFF file, 2800x2900, containing 2 pages: the first is uncompressed and the second is deflate-compressed. I can see how "compressing already-compressed data inflates it". I may try compressing only the already-compressed data to see what happens (if I find some time this weekend). – Matthieu
In the Java program, note that fis.read(pix) may not read the whole file, in which case the rest of pix would stay zero. I suggest replacing the FileInputStream usage with pix = Files.readAllBytes(Paths.get("MY_MOSTLY_UNCOMPRESSED.TIFF")). – VGR