
Isend and Recv in MPI: received different values?

In my matrix addition code I use Isend with tag 1 to pass the lower bound to the other processes, but when I run the code every other slave process claims to have received the same lower bound. I don't understand why.

Output:

I am process 1 and I received 1120 as lower bound 
I am process 1 and my lower bound is 1120 and my upper bound is 1682 
I am process 2 and I received 1120 as lower bound 
I am process 2 and my lower bound is 1120 and my upper bound is 1682 
Process 0 here: I am sending lower bound 0 to process 1 
Process 0 here: I am sending lower bound 560 to process 2 
Process 0 here: I am sending lower bound 1120 to process 3 
Timings : 13.300698 Sec 
I am process 3 and I received 1120 as lower bound 
I am process 3 and my lower bound is 1120 and my upper bound is 1682 

The code:

#define N_ROWS 1682 
#define N_COLS 823 
#define MASTER_TO_SLAVE_TAG 1 //tag for messages sent from master to slaves 
#define SLAVE_TO_MASTER_TAG 4 //tag for messages sent from slaves to master 

void readMatrix(); 
int rank, nproc, proc; 
double matrix_A[N_ROWS][N_COLS]; 
double matrix_B[N_ROWS][N_COLS]; 
double matrix_C[N_ROWS][N_COLS]; 
int low_bound; //low bound of the number of rows of [A] allocated to a slave 
int upper_bound; //upper bound of the number of rows of [A] allocated to a slave 
int portion; //portion of the number of rows of [A] allocated to a slave 
MPI_Status status; // store status of a MPI_Recv 
MPI_Request request; //capture request of a MPI_Isend 

int main (int argc, char *argv[]) { 

    MPI_Init(&argc, &argv); 
    MPI_Comm_size(MPI_COMM_WORLD, &nproc); 
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); 

    double StartTime = MPI_Wtime(); 

     // -------------------> Process 0 initalizes matrices and sends work portions to other processes 
     if (rank==0) { 
      readMatrix(); 
      for (proc = 1; proc < nproc; proc++) {//for each slave other than the master 
       portion = (N_ROWS/(nproc - 1)); // calculate portion without master 
       low_bound = (proc - 1) * portion; 
       if (((proc + 1) == nproc) && ((N_ROWS % (nproc - 1)) != 0)) {//if rows of [A] cannot be equally divided among slaves 
        upper_bound = N_ROWS; //last slave gets all the remaining rows 
       } else { 
        upper_bound = low_bound + portion; //rows of [A] are equally divisable among slaves 
       } 
       //send the low bound first without blocking, to the intended slave 
       printf("Process 0 here: I am sending lower bound %i to process %i \n",low_bound,proc); 
       MPI_Isend(&low_bound, 1, MPI_INT, proc, MASTER_TO_SLAVE_TAG, MPI_COMM_WORLD, &request); 
       //next send the upper bound without blocking, to the intended slave 
       MPI_Isend(&upper_bound, 1, MPI_INT, proc, MASTER_TO_SLAVE_TAG + 1, MPI_COMM_WORLD, &request); 
       //finally send the allocated row portion of [A] without blocking, to the intended slave 
       MPI_Isend(&matrix_A[low_bound][0], (upper_bound - low_bound) * N_COLS, MPI_DOUBLE, proc, MASTER_TO_SLAVE_TAG + 2, MPI_COMM_WORLD, &request); 
      } 
     } 

     //broadcast [B] to all the slaves 
     MPI_Bcast(&matrix_B, N_ROWS*N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD); 


     // -------------------> Other processes do their work 
     if (rank != 0) { 
      //receive low bound from the master 
      MPI_Recv(&low_bound, 1, MPI_INT, 0, MASTER_TO_SLAVE_TAG, MPI_COMM_WORLD, &status); 
      printf("I am process %i and I received %i as lower bound \n",rank,low_bound); 
      //next receive upper bound from the master 
      MPI_Recv(&upper_bound, 1, MPI_INT, 0, MASTER_TO_SLAVE_TAG + 1, MPI_COMM_WORLD, &status); 
      //finally receive row portion of [A] to be processed from the master 
      MPI_Recv(&matrix_A[low_bound][0], (upper_bound - low_bound) * N_COLS, MPI_DOUBLE, 0, MASTER_TO_SLAVE_TAG + 2, MPI_COMM_WORLD, &status); 
      printf("I am process %i and my lower bound is %i and my upper bound is %i \n",rank,low_bound,upper_bound); 
      //do your work 
      for (int i = low_bound; i < upper_bound; i++) { 
       for (int j = 0; j < N_COLS; j++) { 
        matrix_C[i][j] = (matrix_A[i][j] + matrix_B[i][j]); 
       } 
      } 
      //send back the low bound first without blocking, to the master 
      MPI_Isend(&low_bound, 1, MPI_INT, 0, SLAVE_TO_MASTER_TAG, MPI_COMM_WORLD, &request); 
      //send the upper bound next without blocking, to the master 
      MPI_Isend(&upper_bound, 1, MPI_INT, 0, SLAVE_TO_MASTER_TAG + 1, MPI_COMM_WORLD, &request); 
      //finally send the processed portion of data without blocking, to the master 
      MPI_Isend(&matrix_C[low_bound][0], (upper_bound - low_bound) * N_COLS, MPI_DOUBLE, 0, SLAVE_TO_MASTER_TAG + 2, MPI_COMM_WORLD, &request); 
     } 

     // -------------------> Process 0 gathers the work 
     ... 

Answer


MPI_Isend() starts a non-blocking send. Consequently, modifying the send buffer without checking that the message has actually been sent can result in wrong values being transmitted.

This is what happens in the code you provided, in the loop over processes for (proc = 1; proc < nproc; proc++):

  1. proc = 1: low_bound is computed.

  2. proc = 1: low_bound is sent (non-blocking) to process 1.

  3. proc = 2: low_bound is modified. The message is corrupted.

Different solutions exist:

  • Use a blocking send, MPI_Send() (see the sketch right after this list).

  • Keep the non-blocking sends, but check that the messages have actually completed: create arrays of 3 requests and statuses, MPI_Request requests[3]; MPI_Status statuses[3];, and call MPI_Waitall() to wait for all requests to complete.

    MPI_Isend(&low_bound, 1, MPI_INT, proc, MASTER_TO_SLAVE_TAG, MPI_COMM_WORLD, &requests[0]); 
    MPI_Isend(..., &requests[1]); 
    MPI_Isend(..., &requests[2]); 
    MPI_Waitall(3, requests, statuses); 
    
  • Take a look at MPI_Scatter() and MPI_Scatterv().
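
As a minimal sketch of the first (blocking-send) option, assuming the same globals and tag macros as in the question, the master loop could look like this (keep in mind the deadlock caveat discussed in the comments below):

    /* Blocking-send variant of the master loop: MPI_Send() returns only
       once low_bound/upper_bound can safely be reused, so the next
       iteration no longer corrupts a message that is still in flight. */
    for (proc = 1; proc < nproc; proc++) {
        portion = (N_ROWS / (nproc - 1));
        low_bound = (proc - 1) * portion;
        if (((proc + 1) == nproc) && ((N_ROWS % (nproc - 1)) != 0)) {
            upper_bound = N_ROWS;          /* last slave takes the remainder */
        } else {
            upper_bound = low_bound + portion;
        }
        MPI_Send(&low_bound, 1, MPI_INT, proc, MASTER_TO_SLAVE_TAG, MPI_COMM_WORLD);
        MPI_Send(&upper_bound, 1, MPI_INT, proc, MASTER_TO_SLAVE_TAG + 1, MPI_COMM_WORLD);
        MPI_Send(&matrix_A[low_bound][0], (upper_bound - low_bound) * N_COLS,
                 MPI_DOUBLE, proc, MASTER_TO_SLAVE_TAG + 2, MPI_COMM_WORLD);
    }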

Indeed, the "usual" way to do this is to MPI_Bcast() the size of the matrix. Each process then computes the size of its part of the matrix, and process 0 computes the sendcounts and displs needed by MPI_Scatterv().
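
A minimal sketch of that approach, assuming every rank (including 0) takes a share of the rows and that <stdlib.h> is included for malloc; sendcounts, displs and my_part are names introduced here purely for illustration:

    /* Every rank computes the same row split, then MPI_Scatterv()
       distributes contiguous blocks of rows of A (counts in doubles). */
    int *sendcounts = malloc(nproc * sizeof(int));
    int *displs = malloc(nproc * sizeof(int));
    int base = N_ROWS / nproc, rem = N_ROWS % nproc, offset = 0;
    for (int p = 0; p < nproc; p++) {
        int rows = base + (p < rem ? 1 : 0);   /* spread the remainder */
        sendcounts[p] = rows * N_COLS;
        displs[p] = offset;
        offset += rows * N_COLS;
    }
    double *my_part = malloc(sendcounts[rank] * sizeof(double));
    MPI_Scatterv(&matrix_A[0][0], sendcounts, displs, MPI_DOUBLE,
                 my_part, sendcounts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);

Each rank then works on sendcounts[rank] / N_COLS rows of my_part, and a matching MPI_Gatherv() with the same counts can collect the results back on process 0.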


If I change all of the MPI_Isend() calls to MPI_Send(), I seem to run into a deadlock (nothing happens in the console). Do you know why that is? – Marcel


Oh yes! The deadlock comes from the fact that process 0 tries to send its messages via MPI_Send() while all the other processes are stuck in MPI_Bcast(). Yes, that is a deadlock. If you move the MPI_Bcast() of B after the MPI_Recv() operations, the deadlock should disappear. – francis
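
For reference, a minimal sketch of the reordering suggested in the comment above, using the same variables as in the question (process 0 must call MPI_Bcast() at the matching point, i.e. after its send loop):

    /* Worker side: post the point-to-point receives first,
       then join the broadcast of B. */
    if (rank != 0) {
        MPI_Recv(&low_bound, 1, MPI_INT, 0, MASTER_TO_SLAVE_TAG, MPI_COMM_WORLD, &status);
        MPI_Recv(&upper_bound, 1, MPI_INT, 0, MASTER_TO_SLAVE_TAG + 1, MPI_COMM_WORLD, &status);
        MPI_Recv(&matrix_A[low_bound][0], (upper_bound - low_bound) * N_COLS,
                 MPI_DOUBLE, 0, MASTER_TO_SLAVE_TAG + 2, MPI_COMM_WORLD, &status);
    }
    /* The broadcast of B now happens after the point-to-point exchange. */
    MPI_Bcast(&matrix_B, N_ROWS * N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD);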