EDIT #1: Fatal error in MPI_Gatherv: pending request (no error), error stack:
So the solution: the line
MPI_Gatherv(buffer, rank, MPI_INT, buffer, receive_counts, receive_displacements, MPI_INT, 0, MPI_COMM_WORLD);
has to be changed to
MPI_Gatherv(buffer, receive_counts[rank], MPI_INT, buffer, receive_counts, receive_displacements, MPI_INT, 0, MPI_COMM_WORLD);
Thanks again for your help.
Original post:
My code is from DeinoMPI.
When I run mpiexec -localonly 4 skusamGatherv.exe, everything is OK.
If I change the line
int receive_counts[4] = { 0, 1, 2, 3 };
to
int receive_counts[4] = { 0, 1, 2, };
it still compiles fine, but when I run mpiexec -localonly 4 skusamGatherv.exe I get an error.
I think it should still work.
Thanks for your help.
The error I get:
Fatal error in MPI_Gatherv: Message truncated, error stack:
MPI_Gatherv(363)........................: MPI_Gatherv failed(sbuf=0012FF4C, scount=0, MPI_INT, rbuf=0012FF2C, rcnts=0012FEF0, displs=0012FED8, MPI_INT, root=0, MPI_COMM_WORLD) failed
MPIDI_CH3_PktHandler_EagerShortSend(351): Message from rank 3 and tag 4 truncated; 12 bytes received but buffer size is 4
unable to read the cmd header on the pmi context, Error = -1
.
0. [0][0][0][0][0][0] , [0][0][0][0][0][0]
Error posting readv, An existing connection was forcibly closed by the remote host.(10054)
unable to read the cmd header on the pmi context, Error = -1
.
Error posting readv, An existing connection was forcibly closed by the remote host.(10054)
1. [1][1][1][1][1][1] , [0][0][0][0][0][0]
unable to read the cmd header on the pmi context, Error = -1
.
Error posting readv, An existing connection was forcibly closed by the remote host.(10054)
2. [2][2][2][2][2][2] , [0][0][0][0][0][0]
unable to read the cmd header on the pmi context, Error = -1
.
Error posting readv, An existing connection was forcibly closed by the remote host.(10054)
3. [3][3][3][3][3][3] , [0][0][0][0][0][0]
job aborted:
rank: node: exit code[: error message]
0: jan-pc-nb: 1: Fatal error in MPI_Gatherv: Message truncated, error stack:
MPI_Gatherv(363)........................: MPI_Gatherv failed(sbuf=0012FF4C, scount=0, MPI_INT, rbuf=0012FF2C, rcnts=0012FEF0, displs=0012FED8, MPI_INT, root=0, MPI_COMM_WORLD) failed
MPIDI_CH3_PktHandler_EagerShortSend(351): Message from rank 3 and tag 4 truncated; 12 bytes received but buffer size is 4
1: jan-pc-nb: 1
2: jan-pc-nb: 1
3: jan-pc-nb: 1
Press any key to continue . . .
My code:
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int buffer[6];
    int rank, size, i;
    int receive_counts[4] = { 0, 1, 2, 3 };
    int receive_displacements[4] = { 0, 0, 1, 3 };

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (size != 4)
    {
        if (rank == 0)
        {
            printf("Please run with 4 processes\n");
            fflush(stdout);
        }
        MPI_Finalize();
        return 0;
    }
    for (i = 0; i < rank; i++)
    {
        buffer[i] = rank;
    }
    MPI_Gatherv(buffer, rank, MPI_INT, buffer, receive_counts, receive_displacements, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0)
    {
        for (i = 0; i < 6; i++)
        {
            printf("[%d]", buffer[i]);
        }
        printf("\n");
        fflush(stdout);
    }
    MPI_Finalize();
    return 0;
}
Thanks; that rank argument confused me. I mistook it for the send address and the size of the data being sent –
OK, just to clarify: the tuple "buffer, count, datatype" shows up a lot in MPI. That tuple describes the location and size of a region of memory. –