How to find the sum of given numbers using MPI in C?

I want to find the sum of all the given numbers in an array. I have to split the array into equal-sized pieces, send a piece to each process, and compute a partial sum there. Afterwards each process sends its computed sum back to the root process to get the final answer. I know I can use MPI_Scatter for this. My problem is what happens when the list length is odd. For example: I have an array of 13 elements and 3 processes. By default MPI_Scatter divides the array by 3 and leaves the last element out, so it basically computes the sum of only 12 elements. My output when I use MPI_Scatter alone:
myid= 0 total= 6
myid= 1 total= 22
myid= 2 total= 38
results from all processors_= 66
size= 13
So I plan to combine MPI_Scatter with MPI_Send: the root takes the leftover last element, sends it with MPI_Send, and the last process receives it and adds it to its sum. But I'm running into a problem. My code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>
/* globals */
int numnodes,myid,mpi_err;
int last_core;
int n;
int last_elements[];
#define mpi_root 0
/* end globals */
void init_it(int *argc, char ***argv);
void init_it(int *argc, char ***argv) {
    mpi_err = MPI_Init(argc, argv);
    mpi_err = MPI_Comm_size(MPI_COMM_WORLD, &numnodes);
    mpi_err = MPI_Comm_rank(MPI_COMM_WORLD, &myid);
}
int main(int argc, char *argv[]) {
    int *myray, *send_ray, *back_ray;
    int count;
    int size, mysize, i, k, j, total;
    MPI_Status status;

    init_it(&argc, &argv);

    /* each processor will get count elements from the root */
    count = 4;
    myray = (int*)malloc(count * sizeof(int));
    size = (count * numnodes) + 1;
    send_ray = (int*)malloc(size * sizeof(int));
    back_ray = (int*)malloc(numnodes * sizeof(int));
    last_core = numnodes - 1;

    /* create the data to be sent on the root */
    if (myid == mpi_root) {
        for (i = 0; i < size; i++) {
            send_ray[i] = i;
        }
    }

    /* send different data to each processor */
    mpi_err = MPI_Scatter(send_ray, count, MPI_INT,
                          myray, count, MPI_INT,
                          mpi_root,
                          MPI_COMM_WORLD);

    if (myid == mpi_root) {
        n = 1;
        memcpy(last_elements, &send_ray[size - n], n * sizeof(int));
        // Send the last numbers to the last core through send command
        MPI_Send(last_elements, n, MPI_INT, last_core, 99, MPI_COMM_WORLD);
    }

    /* each processor does a local sum */
    total = 0;
    for (i = 0; i < count; i++)
        total = total + myray[i];
    //total = total + send_ray[size-1];
    printf("myid= %d total= %d\n ", myid, total);

    if (myid == last_core) {
        printf("Last core\n");
        MPI_Recv(last_elements, n, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
    }

    /* send the local sums back to the root */
    mpi_err = MPI_Gather(&total, 1, MPI_INT,
                         back_ray, 1, MPI_INT,
                         mpi_root,
                         MPI_COMM_WORLD);

    /* the root prints the global sum */
    if (myid == mpi_root) {
        total = 0;
        for (i = 0; i < numnodes; i++)
            total = total + back_ray[i];
        printf("results from all processors_= %d \n ", total);
        printf("size= %d \n ", size);
    }

    mpi_err = MPI_Finalize();
    return 0;
}
Output:
myid= 0 total= 6
myid= 1 total= 22
myid= 2 total= 38
Last core
[ubuntu:11884] *** An error occurred in MPI_Recv
[ubuntu:11884] *** on communicator MPI_COMM_WORLD
[ubuntu:11884] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:11884] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpiexec has exited due to process rank 2 with PID 11884 on
node ubuntu exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).
I know I'm doing something wrong. I would really appreciate it if you could point me in the right direction.

Thanks, Eric.
Look into using 'mpi_scatterv' to distribute the data exactly as you want, and 'mpi_reduce' to perform the sum. –
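A minimal sketch of that suggestion, assuming the same setup as in the question (size = 13 elements initialized to send_ray[i] = i, summed at rank 0). MPI_Scatterv takes a per-rank array of send counts plus displacements, so the size % numnodes leftover elements can be spread over the first ranks instead of being dropped, and MPI_Reduce with MPI_SUM replaces the manual MPI_Gather plus summing loop at the root:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int numnodes, myid, size = 13;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numnodes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* counts[i] = how many elements rank i gets, displs[i] = where they start;
       the remainder is spread over the first (size % numnodes) ranks */
    int *counts = malloc(numnodes * sizeof(int));
    int *displs = malloc(numnodes * sizeof(int));
    int base = size / numnodes, rem = size % numnodes, offset = 0;
    for (int i = 0; i < numnodes; i++) {
        counts[i] = base + (i < rem ? 1 : 0);
        displs[i] = offset;
        offset += counts[i];
    }

    /* root creates the full array; the send buffer is ignored elsewhere */
    int *send_ray = NULL;
    if (myid == 0) {
        send_ray = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            send_ray[i] = i;
    }

    /* every element lands on exactly one rank, no leftovers */
    int *myray = malloc(counts[myid] * sizeof(int));
    MPI_Scatterv(send_ray, counts, displs, MPI_INT,
                 myray, counts[myid], MPI_INT,
                 0, MPI_COMM_WORLD);

    /* each processor does a local sum */
    int total = 0;
    for (int i = 0; i < counts[myid]; i++)
        total += myray[i];
    printf("myid= %d total= %d\n", myid, total);

    /* MPI_Reduce adds the partial sums directly into grand_total at rank 0 */
    int grand_total = 0;
    MPI_Reduce(&total, &grand_total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0)
        printf("results from all processors= %d\n", grand_total);

    MPI_Finalize();
    return 0;
}

With 3 processes this hands out 5, 4 and 4 elements and prints 78 at the root: the 66 from the question plus the previously dropped element 12.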