
I would like to run some parallel optimizations with pyOpt. The tricky part is that, inside my objective function, I want to run a C++ code with MPI, i.e. Python with an embedded call to mpirun.

My Python script is the following:

#!/usr/bin/env python  
# Standard Python modules 
import os, sys, time, math 
import subprocess 


# External Python modules 
try: 
    from mpi4py import MPI 
    comm = MPI.COMM_WORLD 
    myrank = comm.Get_rank() 
except: 
    raise ImportError('mpi4py is required for parallelization') 

# Extension modules 
from pyOpt import Optimization 
from pyOpt import ALPSO 

# Predefine the BashCommand 
RunCprogram = "mpirun -np 2 CProgram" # Parallel C++ program 


######################### 
def objfunc(x): 

    f = -(((math.sin(2*math.pi*x[0])**3)*math.sin(2*math.pi*x[1]))/((x[0]**3)*(x[0]+x[1]))) 

    # Run CProgram 
    os.system(RunCprogram) #where the mpirun call occurs 

    g = [0.0]*2 
    g[0] = x[0]**2 - x[1] + 1 
    g[1] = 1 - x[0] + (x[1]-4)**2 

    time.sleep(0.01) 
    fail = 0 
    return f,g, fail 

# Instantiate Optimization Problem 
opt_prob = Optimization('Thermal Conductivity Optimization',objfunc) 
opt_prob.addVar('x1','c',lower=5.0,upper=1e-6,value=10.0) 
opt_prob.addVar('x2','c',lower=5.0,upper=1e-6,value=10.0) 
opt_prob.addObj('f') 
opt_prob.addCon('g1','i') 
opt_prob.addCon('g2','i') 

# Solve Problem (DPM-Parallelization) 
alpso_dpm = ALPSO(pll_type='DPM') 
alpso_dpm.setOption('fileout',0) 
alpso_dpm(opt_prob) 
print opt_prob.solution(0) 

I run the code with:

mpirun -np 20 python Script.py 

However, I get the following error:

[user:28323] *** Process received signal *** 
[user:28323] Signal: Segmentation fault (11) 
[user:28323] Signal code: Address not mapped (1) 
[user:28323] Failing at address: (nil) 
[user:28323] [ 0] /lib64/libpthread.so.0() [0x3ccfc0f500] 
[user:28323] *** End of error message *** 

My guess is that the two different mpirun calls (the one launching the Python script and the one made inside the script) conflict with each other. How can I fix this?

Thanks!


Are you using MPI communication to exchange data between the Python processes, or are you just using 'mpi4py' to run several independent cases? If it is the latter, you could consider using Python's 'subprocess' module to spawn multiple threads, each of which calls its own 'mpirun' instance (using 'subprocess.Popen'). I do this regularly without any problems. If you are running 'Script.py' across several machines, this may not be possible... –
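For concreteness, here is a minimal sketch of the approach described in that comment, assuming each evaluation is an independent run of a C++ binary called CProgram (the binary name, the pool size of 4 and the 8 cases are placeholders, not part of the original question):

# Sketch: launch several independent mpirun jobs from a single plain Python
# process (no mpi4py needed), one per worker thread.
import subprocess
from multiprocessing.dummy import Pool  # thread pool

def run_case(case_id):
    # Each thread starts its own 2-process MPI job and waits for it.
    p = subprocess.Popen(["mpirun", "-np", "2", "CProgram"],
                         stdout=subprocess.PIPE)
    out, _ = p.communicate()
    return case_id, p.returncode

pool = Pool(4)  # run up to 4 mpirun jobs at a time
for case_id, rc in pool.map(run_case, range(8)):
    print "case %d exited with code %d" % (case_id, rc)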

Answer


See Calling mpi binary in serial as subprocess of mpi application: the safest way is to use MPI_Comm_spawn(). Take this manager-worker example as a starting point.
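For illustration, a minimal sketch of what the MPI_Comm_spawn() route could look like from mpi4py, assuming the worker binary is called CProgram and that it calls MPI_Comm_get_parent()/MPI_Comm_disconnect() on its side (these names are placeholders, not the code from the linked example):

# Sketch: spawn the MPI workers from inside the Python process instead of
# shelling out to a second mpirun.
from mpi4py import MPI

# Spawn 2 worker processes; this returns an intercommunicator to them.
child = MPI.COMM_SELF.Spawn("CProgram", args=[], maxprocs=2)

# ... exchange data with the workers over 'child' if needed ...

# Tear down the intercommunicator; the worker must also disconnect.
child.Disconnect()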

A quick workaround is to use subprocess.Popen, as suggested by @EdSmith. Be aware, however, that the default behavior of subprocess.Popen is to inherit the parent's environment, and my guess is that os.system() does the same. Unfortunately, some environment variables are added by mpirun, depending on the MPI implementation, such as OMPI_COMM_WORLD_RANK or OMPI_MCA_orte_ess_num_procs. To see these environment variables, type import os ; print os.environ both in the mpi4py code and in a plain Python shell. These environment variables can make the child process fail, so I had to add a line to get rid of them... which is rather dirty... It boils down to:

args = shlex.split(RunCprogram)
env = os.environ
# remove all environment variables with "MPI" in their name... rather dirty...
new_env = {k: v for k, v in env.iteritems() if "MPI" not in k}

#print new_env
# shell=True : watch out for security issues...
p = subprocess.Popen(RunCprogram, shell=True, env=new_env, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
p.wait()
result = "process myrank " + str(myrank) + " got " + p.stdout.read()
print result
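As a side note, since the command is passed to subprocess.Popen as a single string with shell=True, the shlex.split() result is never actually used; a slightly cleaner variant of the same idea (just a sketch, under the same assumptions as above) passes the split argument list and avoids the shell:

args = shlex.split(RunCprogram)  # ["mpirun", "-np", "2", "CProgram"]
p = subprocess.Popen(args, env=new_env, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
p.wait()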

Full test code, run with mpirun -np 2 python opti.py:

#!/usr/bin/env python  
# Standard Python modules 
import os, sys, time, math 
import subprocess 
import shlex 


# External Python modules 
try: 
    from mpi4py import MPI 
    comm = MPI.COMM_WORLD 
    myrank = comm.Get_rank() 
except: 
    raise ImportError('mpi4py is required for parallelization') 

# Predefine the BashCommand 
RunCprogram = "mpirun -np 2 main" # Parallel C++ program 


######################### 
def objfunc(x): 

    f = -(((math.sin(2*math.pi*x[0])**3)*math.sin(2*math.pi*x[1]))/((x[0]**3)*(x[0]+x[1]))) 

    # Run CProgram 
    #os.system(RunCprogram) #where the mpirun call occurs 
    args = shlex.split(RunCprogram) 
    env=os.environ 
    new_env = {k: v for k, v in env.iteritems() if "MPI" not in k} 

    #print new_env 
    p = subprocess.Popen(RunCprogram,shell=True, env=new_env,stdout=subprocess.PIPE, stdin=subprocess.PIPE) 
    p.wait() 
    result="process myrank "+str(myrank)+" got "+p.stdout.read() 
    print result 



    g = [0.0]*2 
    g[0] = x[0]**2 - x[1] + 1 
    g[1] = 1 - x[0] + (x[1]-4)**2 

    time.sleep(0.01) 
    fail = 0 
    return f,g, fail 

print objfunc([1.0,0.0]) 

The basic worker, compiled with mpiCC main.cpp -o main:

#include "mpi.h" 

int main(int argc, char* argv[]) { 
    int rank, size; 

    MPI_Init (&argc, &argv);  
    MPI_Comm_rank (MPI_COMM_WORLD, &rank); 
    MPI_Comm_size (MPI_COMM_WORLD, &size); 

    if(rank==0){ 
     std::cout<<" size "<<size<<std::endl; 
    } 
    MPI_Finalize(); 

    return 0; 

}