I wrote a dynamic-memory-intensive sample program in C and tried to benchmark the glibc default allocator against the Hoard allocator (in terms of time). Hoard's performance degrades severely when large blocks are allocated.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   /* for sleep() */

#define NUM_OF_BLOCKS (1 * 4096)

void *allocated_mem_ptr_arr[NUM_OF_BLOCKS];

int
main (int argc, char *argv[])
{
  void *myblock = NULL;

  int count, iter;

  int blk_sz;

  if (argc != 2)
    {
      fprintf (stderr, "Usage: ./memory_intensive <Block size (KB)>\n\n");
      exit (-1);
    }

  blk_sz = atoi (argv[1]);

  for (iter = 0; iter < 1024; iter++)
    {
      /*
       * The allocated memory is not accessed (read/write), hence the resident
       * memory size remains low since no corresponding physical pages are
       * being allocated
       */
      printf ("\nCurrently at iteration %d\n", iter);
      fflush (NULL);

      for (count = 0; count < NUM_OF_BLOCKS; count++)
        {
          myblock = malloc (blk_sz * 1024);
          if (!myblock)
            {
              printf ("malloc() fails\n");
              sleep (30);
              return -1;
            }

          allocated_mem_ptr_arr[count] = myblock;
        }

      for (count = 0; count < NUM_OF_BLOCKS; count++)
        {
          free (allocated_mem_ptr_arr[count]);
        }
    }

  return 0;
}
From this benchmarking activity I got the results below (block size, elapsed time with the default allocator, elapsed time with Hoard):
- `1K` `4.380s` `0.927s`
- `2K` `8.390s` `0.960s`
- `4K` `16.757s` `1.078s`
- `8K` `16.619s` `1.154s`
- `16K` `17.028s` `13m 6.463s`
- `32K` `17.755s` `5m 45.039s`
As can be seen, Hoard's performance degrades severely for block sizes >= 16K. What is the reason? Can we say that Hoard is not suitable for applications that allocate large blocks?
Have you looked at the source code of the allocators you mention? They are free software. –
See this question: http://stackoverflow.com/q/9204354/841108 –
I tried to replicate your test, and Hoard beat the standard allocator at every size, even more so at the larger block sizes. (Linux, x86-64, Hoard 3.8.) For example, at 32KB Hoard took 11.22 seconds while the standard allocator took 35.89 seconds. –