Function call when running MPI in C++

I want to try out OpenMPI in C++, so I wrote a small piece of code to do numerical integration. My problem is that it does not seem to execute this line, where everything happens, correctly:
integral = trapezintegration(local_a, local_b, local_n);
What I am fairly sure of is that MPI works correctly up to this line. When I print out local_a, local_b, local_n, and world_rank, I get:
0 3.75 2.5e+09 0
3.75 7.5 2.5e+09 1
7.5 11.25 2.5e+09 2
11.25 15 2.5e+09 3
which is exactly what I want. But when I print integral and world_rank, I get:
17.5781 2
17.5781 3
17.5781 1
17.5781 0
This seems strange to me: only the part with world_rank == 0 should have the value integral == 17.5781. My question is, how do I make the function call under MPI so that the ranks do not all end up with the value belonging to world_rank == 0?
The full code can be seen below:
#include <mpi.h>
#include <iostream>

double f(const double x){
    return x*x;
}

double trapezintegration(const double a, const double b, const double n){
    // a = start value of the integration range
    // b = end value of the integration range
    // n = number of integration slices
    // integral = the value of the numeric integral
    // h = width of one integration slice
    double integral = 0.0, h = (b-a)/n;
    long loopbound = (long)n;
    integral = -(f(a)+f(b))/2.0;
    for (long i = 1; i <= loopbound; i++){
        integral = integral + f(i*h);
    }
    integral = integral*(b-a)/n;
    return integral;
}

int main(){
    // The MPI environment needs to be initialized
    MPI_Init(NULL, NULL);
    // The program needs to know how many processes are available
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    // The index (rank) of each process is also needed
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    // Now the execution of the program can be done.
    // If no rank is specified, the code is executed by all processes.
    const double a = 0.0, b = 15.0, n = 1e+10;
    double integral, total_integral;
    // Right now all of the processes have the same a and b;
    // next, a different sub-interval is assigned to each process.
    // The rank is the index of the process, going from 0 to
    // world_size-1, so every process gets its own local_a and local_b.
    double local_a = (b - a)/world_size*world_rank;
    double local_b = (b - a)/world_size*(world_rank+1);
    double local_n = n/world_size;
    std::cout << local_a << ' ' << local_b << ' ' << local_n << ' ' << world_rank << '\n';
    integral = trapezintegration(local_a, local_b, local_n);
    // All of the processes have now run the numerical integration
    // for their given interval. The integrated parts need to be
    // collected to get the total integral; collect them in rank 0.
    std::cout << integral << ' ' << world_rank << '\n';
    // Every rank other than rank 0 sends its partial result to rank 0
    if (world_rank != 0){
        MPI_Send(&integral, 1, MPI_DOUBLE, 0, 555+world_rank, MPI_COMM_WORLD);
    }
    if (world_rank == 0){
        total_integral = integral;
        for (int i = 1; i < world_size; i++){
            MPI_Recv(&integral, 1, MPI_DOUBLE, i, 555+i, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            total_integral = total_integral + integral;
        }
    }
    // Rank 0 now holds the total result and can print it
    if (world_rank == 0){
        std::cout << total_integral << '\n';
    }
    // The MPI environment needs to be finalized when the calculation is done
    MPI_Finalize();
}
That works; what an embarrassing error. Thanks for the hint about MPI_Reduce()!
You're welcome! Everyone needs an extra pair of eyes once in a while :-)