
I would like to know why I cannot access the data delivered by MPI_Recv. I have an array of 100 elements that I want to divide among 8 processes. Since 100/8 leaves a remainder, the chunks have unequal lengths, so I compute the boundaries by hand (with 100 = 8 × 12 + 4, the first four ranks get 13 elements and the remaining four get 12). I then send each process the start and end indices of its chunk. Each process performs an operation on its chunk of the array, say rearranges it, and sends the result back to be merged into the original array. The program works fine until I have to combine the results of the slave processes: specifically, I cannot access the array that has just been returned by a slave, i.e. the data from MPI_Irecv():

for (i = 1; i < numProcs; i++) {
    MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Irecv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &recv_req[i]);

    // how do I access chunk here, take the part from msgsA[i] to msgsB[i],
    // and assign it to a part of a different array?
}
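
For reference, here is a minimal standalone sketch of the manual split arithmetic (the same formula used in the full code below); it shows that ranks 0-3 get 13 elements and ranks 4-7 get 12:

#include <stdio.h>

int main(void)
{
    int n = 100, numProcs = 8, i;
    for (i = 0; i < numProcs; i++) {
        /* ranks below the remainder (100 % 8 == 4) get one extra element */
        int start = (n / numProcs) * i + ((n % numProcs) < i ? (n % numProcs) : i);
        int end   = start + (n / numProcs) + ((n % numProcs) > i) - 1;
        printf("rank %d: [%d, %d]\n", i, start, end);
    }
    return 0;
}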

The complete code:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>   /* for malloc */
#define MAXPROCS 8    /* max number of processes */

int main(int argc, char *argv[])
{
    int i, j, n = 100, numProcs, myid, tag = 55;
    int msgsA[MAXPROCS], msgsB[MAXPROCS], myStart, myEnd;
    double *chunk = malloc(n * sizeof(double));
    double *K1 = malloc(n * sizeof(double));
    MPI_Request send_req[MAXPROCS], recv_req[MAXPROCS];  /* request handles for the non-blocking calls */
    MPI_Status status[MAXPROCS];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {
        /* split the array into pieces and send the starting and
           finishing indices to the slave processes */
        for (i = 1; i < numProcs; i++) {
            myStart = (n / numProcs) * i + ((n % numProcs) < i ? (n % numProcs) : i);
            myEnd = myStart + (n / numProcs) + ((n % numProcs) > i) - 1;
            if (myEnd > n) myEnd = n;
            MPI_Isend(&myStart, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &send_req[i]);
            MPI_Isend(&myEnd, 1, MPI_INT, i, tag+1, MPI_COMM_WORLD, &send_req[i]);
        }
        /* starting and finishing values for the master process */
        myStart = (n / numProcs) * myid + ((n % numProcs) < myid ? (n % numProcs) : myid);
        myEnd = myStart + (n / numProcs) + ((n % numProcs) > myid) - 1;

        for (i = 1; i < numProcs; i++) {
            MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
            MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
            MPI_Irecv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &recv_req[i]);

            /* --- access the chunk array here, take the part from msgsA[i]
               to msgsB[i] and assign it to a part of a different array --- */
        }
        /* calculate a function on fragments of K1 (returns void) */

        /* wait until all chunks have been collected */
        MPI_Waitall(numProcs - 1, &recv_req[1], &status[1]);
    }
    else {
        /* calculate a function on fragments of K1 (returns void) */

        MPI_Isend(K1, n, MPI_DOUBLE, 0, tag+2, MPI_COMM_WORLD, &send_req[0]);
        MPI_Wait(&send_req[0], &status[0]);
    }
    MPI_Finalize();
    return 0;
}

Answer


I think I found the solution. The problem was caused by MPI_Irecv(): a non-blocking receive only initiates the transfer, so I could not access the chunk buffer before the request had completed. The fix therefore seems to be simply using a blocking MPI_Recv for the chunk:

MPI_Status status[MAXPROCS];

for (i = 1; i < numProcs; i++) {
    MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Recv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &status[i]);

    // do whatever I need on the chunk[j] values
}
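
Note that the two index receives are still non-blocking, so msgsA[i] and msgsB[i] must not be read until their requests complete; also, reusing recv_req[i] for both calls loses the first request handle. A minimal sketch of one way to handle this (variable names chunk, msgsA, msgsB, K1 follow the question; copying the segment into K1 is an assumption about what "assign to a part of a different array" means):

MPI_Request reqA, reqB;   /* separate requests so neither handle is lost */
MPI_Status  st;

for (i = 1; i < numProcs; i++) {
    MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag,   MPI_COMM_WORLD, &reqA);
    MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &reqB);
    MPI_Recv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &st);

    /* the index buffers are valid only after their requests complete */
    MPI_Wait(&reqA, MPI_STATUS_IGNORE);
    MPI_Wait(&reqB, MPI_STATUS_IGNORE);

    /* copy this slave's segment into the result array */
    for (j = msgsA[i]; j <= msgsB[i]; j++)
        K1[j] = chunk[j];
}

Even then, with MPI_ANY_SOURCE the indices and the chunk can arrive from different slaves in the same iteration; matching on st.MPI_SOURCE, or receiving from a fixed rank as suggested in the comment below, is more robust.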

This is not a solution; your code still has multiple problems. I suggest that you replace all the non-blocking calls (e.g. MPI_Irecv) with blocking ones (e.g. MPI_Recv).
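
For completeness, a minimal sketch of what the commenter's all-blocking suggestion might look like on the master side, assuming each slave also sends its start and end indices back with tags tag and tag+1 (the posted slave branch does not do this yet). Receiving from rank i explicitly keeps each slave's indices and data paired:

MPI_Status st;

for (i = 1; i < numProcs; i++) {
    /* receive from rank i specifically so indices and data stay paired */
    MPI_Recv(&msgsA[i], 1, MPI_INT, i, tag,   MPI_COMM_WORLD, &st);
    MPI_Recv(&msgsB[i], 1, MPI_INT, i, tag+1, MPI_COMM_WORLD, &st);
    MPI_Recv(chunk, n, MPI_DOUBLE, i, tag+2, MPI_COMM_WORLD, &st);

    /* merge this slave's segment into the result array */
    for (j = msgsA[i]; j <= msgsB[i]; j++)
        K1[j] = chunk[j];
}

Once each rank's element count is known on the root, MPI_Gatherv could also replace this hand-rolled loop entirely.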