I'm having some trouble with the code below: MPI send and receive do not work with more than 8182 doubles.
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int id, p, n, ln, i, retCode;
    double *buffer;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    n  = strtol(argv[1], NULL, 10); /* total number of elements to be distributed */
    ln = n / p;                     /* local number of elements */
    buffer = (double*)calloc(ln, sizeof(double));

    if (id == p-1) /* process p-1 sends to the other processes */
    {
        for (i = 0; i < p-1; i++)
        {
            fprintf(stdout, "Process %d is sending %d elements to process %d\n", p-1, ln, i);
            retCode = MPI_Ssend(buffer, ln, MPI_DOUBLE, i, 0, MPI_COMM_WORLD);
            if (retCode)
                fprintf(stdout, "MPI_Ssend error at file %s, line %d code %d\n", __FILE__, __LINE__, retCode);
            fprintf(stdout, "Process %d completed sending to process %d\n", p-1, i);
        }
    }
    else /* other processes receive from process p-1 */
    {
        fprintf(stdout, "Process %d is receiving %d elements from process %d\n", id, ln, p-1);
        retCode = MPI_Recv(buffer, ln, MPI_DOUBLE, p-1, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        if (retCode)
            fprintf(stdout, "MPI_Recv error at file %s, line %d code %d\n", __FILE__, __LINE__, retCode);
        fprintf(stdout, "Process %d received from process %d\n", id, p-1);
    }

    free(buffer);
    MPI_Finalize();
    return 0;
}
The idea is that process p-1 opens the data set and then distributes it to the remaining processes. This works as long as the variable ln (the local number of elements) stays below 8182; when I raise the number of elements I get the error below:
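For context, the distribution I'm after is essentially what the collective MPI_Scatter does. The following is only a sketch of the intended pattern under my assumptions (root p-1 holds the full array of n doubles, n divisible by p), not the code that produces the error, and unlike my loop the root also receives its own chunk:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int id, p, n, ln;
    double *full = NULL, *part;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    n  = strtol(argv[1], NULL, 10); /* total number of elements (assumed divisible by p) */
    ln = n / p;                     /* elements per process */

    /* Only the root (here p-1) allocates the full data set; every rank
       allocates its local chunk of ln doubles. */
    if (id == p-1)
        full = (double*)calloc(n, sizeof(double));
    part = (double*)calloc(ln, sizeof(double));

    /* One collective call replaces the loop of point-to-point sends. */
    MPI_Scatter(full, ln, MPI_DOUBLE, part, ln, MPI_DOUBLE, p-1, MPI_COMM_WORLD);

    free(part);
    free(full);
    MPI_Finalize();
    return 0;
}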
mpiexec -np 2 ./sendreceive 16366
Process 0 is receiving 8183 elements from process 1
Process 1 is sending 8183 elements to process 0
Fatal error in MPI_Recv: Other MPI error, error stack:
MPI_Recv(224)...................: MPI_Recv(buf=0x2000590, count=8183, MPI_DOUBLE, src=1, tag=MPI_ANY_TAG, MPI_COMM_WORLD, status=0x1) failed
PMPIDI_CH3I_Progress(623).......: fail failed
pkt_RTS_handler(317)............: fail failed
do_cts(662).....................: fail failed
MPID_nem_lmt_dcp_start_recv(288): fail failed
dcp_recv(154)...................: Internal MPI error! cannot read from remote process
Where exactly is the problem?
Thanks David. I fixed the code, but my problem is not the missing MPI_Finalize(); I do have it in my original code. I also ran the code with that fix, and the problem is still there. – user2126217