
I have written the following code as a test. Each processor sends an array, and I want to receive them on rank 0 and place each one in a row of a 2D array (one row per processor). The MPI send and receive fail and the program does not run.

#include <iostream> 
#include <mpi.h> 

using namespace std; 

int main(int argc, char* argv[]) 
{ 

    int *sendBuff; 
    int **table; 
    int size, rank; 
    MPI_Status stat; 
    int pass = 1; 

    MPI_Init(&argc, &argv); 
    MPI_Comm_size(MPI_COMM_WORLD, &size); 
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); 
    sendBuff = new int[10]; 
    printf("task %d passed %d\n", rank, pass); //1 
    pass++; 
    if (rank == 0) 
    { 
     table = new int*[size]; 
    } 
    for (int i = 0; i < 10; i++) 
    { 
     sendBuff[i] = rank; 
    } 

    printf("task %d passed %d\n", rank, pass); //2 
    pass++; 
    if (rank != 0) 
    { 
     MPI_Send(&sendBuff, 10, MPI_INT, 0, rank, MPI_COMM_WORLD); 
    } 

    printf("task %d passed %d\n", rank, pass); //3 
    pass++; 
    if (rank == 0) 
    { 
     table[0] = sendBuff; 
     for (int i = 1; i < size; i++) 
     { 
      MPI_Recv(&table[i], 10, MPI_INT, i, i, MPI_COMM_WORLD, &stat); 
     } 
    } 
    printf("task %d passed %d\n", rank, pass); //4 
    pass++; 
    delete[] sendBuff; 
    if (rank == 0) 
    { 
     for (int i = 0; i < size; i++) 
     { 
      delete[] table[i]; 
     } 
     delete[] table; 
    } 

    MPI_Finalize(); 
    return 0; 
} 

But when I ran it with

mpirun -np 4 a.out 

I got the following output, and it does not run:

[arch:03429] *** Process received signal *** 
[arch:03429] Signal: Aborted (6) 
[arch:03429] Signal code: (-6) 
[arch:03429] [ 0] /usr/lib/libpthread.so.0(+0xf870) [0x7fd2675bd870] 
[arch:03429] [ 1] /usr/lib/libc.so.6(gsignal+0x39) [0x7fd2672383d9] 
[arch:03429] [ 2] /usr/lib/libc.so.6(abort+0x148) [0x7fd2672397d8] 
[arch:03429] [ 3] /usr/lib/libc.so.6(+0x72e64) [0x7fd267275e64] 
[arch:03429] [ 4] /usr/lib/libc.so.6(+0x7862e) [0x7fd26727b62e] 
[arch:03429] [ 5] /usr/lib/libc.so.6(+0x79307) [0x7fd26727c307] 
[arch:03429] [ 6] a.out() [0x408704] 
[arch:03429] [ 7] /usr/lib/libc.so.6(__libc_start_main+0xf5) [0x7fd267224bc5] 
[arch:03429] [ 8] a.out() [0x408429] 
[arch:03429] *** End of error message *** 
-------------------------------------------------------------------------- 
mpirun noticed that process rank 0 with PID 3429 on node arch exited on signal 6 (Aborted). 
-------------------------------------------------------------------------- 

Any help?


When you pass a pointer variable like your 'sendBuff' to 'MPI_Send' or 'MPI_Recv', you do not need the extra '&'. –

Answer


As Hristo Iliev pointed out, the sendBuff array itself should be the argument passed to MPI_Send, without the '&'. The same goes for table[i] in MPI_Recv.

Another point: MPI_Send and MPI_Recv do not allocate memory. These functions only copy a message from one place to another, so both sendBuff and table[i] must be allocated beforehand. Also, after writing table[0] = sendBuff, deleting both sendBuff and table[0] frees the same block twice.
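
In short, the essential changes (both taken from the full listing below) are to drop the extra '&' and to allocate each row of table before receiving into it:

MPI_Send(sendBuff, 10, MPI_INT, 0, rank, MPI_COMM_WORLD);      // pass the pointer itself, no '&'

table[i] = new int[10];                                        // allocate the destination row first
MPI_Recv(table[i], 10, MPI_INT, i, i, MPI_COMM_WORLD, &stat);  // receive directly into it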

Here is a complete version of the code that should help you:

#include <iostream> 
#include <mpi.h> 

using namespace std; 

int main(int argc, char* argv[]) 
{ 
    int *sendBuff; 
    int **table; 
    int size, rank; 
    MPI_Status stat; 
    int pass = 1; 

    MPI_Init(&argc, &argv); 
    MPI_Comm_size(MPI_COMM_WORLD, &size); 
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); 

    sendBuff = new int[10]; 
    printf("first task %d passed %d\n", rank, pass); //1 
    pass++; 

    if (rank == 0) 
    { 
        table = new int*[size]; 
    } 
    for (int i = 0; i < 10; i++) 
    { 
        sendBuff[i] = rank; 
    } 

    printf("second task %d passed %d\n", rank, pass); //2 
    pass++; 

    if (rank != 0) 
    { 
        // Pass the buffer pointer itself, not its address 
        MPI_Send(sendBuff, 10, MPI_INT, 0, rank, MPI_COMM_WORLD); 
    } 

    printf("third task %d passed %d\n", rank, pass); //3 
    pass++; 

    if (rank == 0) 
    { 
        // Copy rank 0's own data instead of aliasing sendBuff, 
        // so every row of table can be deleted independently 
        table[0] = new int[10]; 
        for (int i = 0; i < 10; i++) 
        { 
            table[0][i] = sendBuff[i]; 
        } 
        for (int i = 1; i < size; i++) 
        { 
            // Allocate the row before receiving into it 
            table[i] = new int[10]; 
            MPI_Recv(table[i], 10, MPI_INT, i, i, MPI_COMM_WORLD, &stat); 
        } 
    } 

    printf("fourth task %d passed %d\n", rank, pass); //4 
    pass++; 

    if (rank == 0) 
    { 
        for (int i = 0; i < size; i++) 
        { 
            delete[] table[i]; 
            table[i] = NULL; 
        } 
        delete[] table; 
    } 

    delete[] sendBuff; 

    MPI_Finalize(); 
    return 0; 
} 

One function that could help you: MPI_Gather(...). It seems to be exactly what you are looking for! If you want to use it, pay attention to memory allocation: all the rows of the table must be allocated as one contiguous block of memory (see the sketch after the link below).

http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Gather.html
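
A minimal sketch of that approach, assuming the same 10 ints per rank as above; the gathered data sits in one contiguous block on rank 0, with row i starting at table[i * N]:

#include <mpi.h> 
#include <cstdio> 

int main(int argc, char* argv[]) 
{ 
    MPI_Init(&argc, &argv); 

    int size, rank; 
    MPI_Comm_size(MPI_COMM_WORLD, &size); 
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); 

    const int N = 10;               // elements contributed by each rank 
    int sendBuff[N]; 
    for (int i = 0; i < N; i++) 
    { 
        sendBuff[i] = rank; 
    } 

    // The root needs one contiguous block of size * N ints; 
    // row i of the gathered "table" starts at table[i * N] 
    int* table = NULL; 
    if (rank == 0) 
    { 
        table = new int[size * N]; 
    } 

    MPI_Gather(sendBuff, N, MPI_INT, table, N, MPI_INT, 0, MPI_COMM_WORLD); 

    if (rank == 0) 
    { 
        for (int i = 0; i < size; i++) 
        { 
            printf("row %d starts with %d\n", i, table[i * N]); 
        } 
        delete[] table; 
    } 

    MPI_Finalize(); 
    return 0; 
} 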

Bye,

Francis