MPI_Gather of a 2D dynamic array in C exits on signal 6 (abort)

After much searching, I have a function that allocates memory for an nD array contiguously, as a single linear block. The function is:

int malloc2dint(int ***array, int n, int m) 
{ 
    /* allocate the n*m contiguous items */ 
    int *p = (int *)malloc(n*m*sizeof(int)); 
    if (!p) return -1; 

    /* allocate the row pointers into the memory */ 
    (*array) = (int **)malloc(n*sizeof(int*)); 
    if (!(*array)) 
    { 
     free(p); 
     return -1; 
    } 

    /* set up the pointers into the contiguous memory */ 
    int i; 
    for (i=0; i<n; i++) 
     (*array)[i] = &(p[i*m]); 

    return 0; 
} 
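
For completeness, here is a matching deallocation routine (my addition, not part of the original post; the name free2dint is hypothetical) that releases both the contiguous data block and the row-pointer block allocated above:

int free2dint(int ***array)
{
    /* free the contiguous data block; it starts where the first row points */
    free(&((*array)[0][0]));

    /* free the row-pointer block itself */
    free(*array);
    *array = NULL;

    return 0;
}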

Using this function I can broadcast and also scatter the 2D dynamically allocated arrays correctly, but the problem with MPI_Gather remains.
The main function is:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int length = atoi(argv[1]);
    int rank, size, from, to, i, j, k, **first_array, **second_array, **result_array;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    //2D dynamic memory allocation
    malloc2dint(&first_array, length, length);
    malloc2dint(&second_array, length, length);
    malloc2dint(&result_array, length, length);

    //Row boundaries assigned to each task
    from = rank * length/size;
    to = (rank+1) * length/size;

    //Initializing the first and second arrays
    if (rank==0)
    {
        for (i=0; i<length; i++)
            for (j=0; j<length; j++)
            {
                first_array[i][j] = 1;
                second_array[i][j] = 1;
            }
    }

    //Broadcast the second array so all tasks will have it
    MPI_Bcast(&(second_array[0][0]), length*length, MPI_INT, 0, MPI_COMM_WORLD);

    //Scatter the first array so each task has the rows between its boundaries
    MPI_Scatter(&(first_array[0][0]), length*(length/size), MPI_INT, first_array[from], length*(length/size), MPI_INT, 0, MPI_COMM_WORLD);

    //Now each task calculates the matrix multiplication for its part
    for (i=from; i<to; i++)
        for (j=0; j<length; j++)
        {
            result_array[i][j] = 0;
            for (k=0; k<length; k++)
                result_array[i][j] += first_array[i][k]*second_array[k][j];

            //printf("\nrank(%d)->result_array[%d][%d] = %d\n", rank, i, j, result_array[i][j]);
            //this line prints the correct values
        }

    //Gather results from all tasks and place each partition into result_array
    MPI_Gather(&(result_array[from]), length*(length/size), MPI_INT, result_array, length*(length/size), MPI_INT, 0, MPI_COMM_WORLD);

    if (rank==0)
    {
        for (i=0; i<length; i++)
        {
            printf("\n\t| ");
            for (j=0; j<length; j++)
                printf("%2d ", result_array[i][j]);
            printf("|\n");
        }
    }

    MPI_Finalize();
    return 0;
}

Now when I run mpirun -np 2 xxx.out 4, the output is:

| 4 4 4 4 | ---> Good Job! 

| 4 4 4 4 | ---> Good Job! 

| 1919252078 1852795251 1868524912 778400882 | ---> Where are you baby?!!! 

| 540700531 1701080693 1701734758 2037588068 | ---> Where are you baby?!!! 

At the end, mpirun reports that the process on rank 0 exited on signal 6 (abort).
The strange part for me is that MPI_Bcast and MPI_Scatter work fine, but MPI_Gather does not.
Any help would be appreciated.

Answer

The problem is how you pass the buffers. You do it correctly in MPI_Scatter, but incorrectly for MPI_Gather.

Passing result_array via &result_array[from] will read the memory that holds the list of row pointers rather than the actual data of the matrix. Use &result_array[from][0] instead.

The same goes for the receive buffer: pass &result_array[0][0] instead of result_array, so the pointer points to where the data lies in memory.
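
To see why this matters, here is a minimal, self-contained sketch (my addition, assuming the contiguous layout produced by malloc2dint) that prints the addresses involved; &a[from][0] lands inside the data block, while &a[from] lands inside the separate pointer table:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 4, m = 4, from = 2;

    /* same layout as malloc2dint: one data block plus one pointer table */
    int *data = malloc(n * m * sizeof(int));
    int **a = malloc(n * sizeof(int *));
    for (int i = 0; i < n; i++)
        a[i] = &data[i * m];

    printf("data block starts at    %p\n", (void *)data);
    printf("&a[from][0] (correct):  %p\n", (void *)&a[from][0]);
    printf("&a[from]    (wrong):    %p\n", (void *)&a[from]);

    free(data);
    free(a);
    return 0;
}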

So instead of:

//Gather results from all tasks and place each partition into result_array
MPI_Gather (&(result_array[from]), length*(length/size), MPI_INT, result_array, length*(length/size), MPI_INT, 0, MPI_COMM_WORLD); 

be sure to do:

//Gather results from all tasks and place each partition into result_array
MPI_Gather (&(result_array[from][0]), length*(length/size), MPI_INT, &(result_array[0][0]), length*(length/size), MPI_INT, 0, MPI_COMM_WORLD); 
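
With this change the gathered partitions land in rank 0's contiguous data block, and for the all-ones 4x4 input every entry of the product should be 4, matching the two rows that already printed correctly.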

Thanks, very nice answer and explanation.