OpenMDAO 1.x: parallel recording

When running an analysis under MPI with distributed components in a ParallelGroup, I get an error when adding a DumpRecorder to the analysis. Below is a small example that demonstrates this (run against the latest master branch, commit aaa67a4d51f4081e9e41b250b0a76b077f6f0c21 from 28/10/2015):
import numpy as np

from openmdao.core.mpi_wrap import MPI
from openmdao.api import Component, Group, DumpRecorder, Problem, ParallelGroup


class Sliced(Component):
    def __init__(self):
        super(Sliced, self).__init__()
        self.add_param('x', 0.)
        self.add_output('y', 0.)

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['y'] = params['x'] * 2.


class VectorComp(Component):
    def __init__(self, size):
        super(VectorComp, self).__init__()
        self.add_param('xin', np.zeros(size))
        self.add_output('x', np.zeros(size))

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['x'] = params['xin'] * 2.


class Analysis(Group):
    def __init__(self, size):
        super(Analysis, self).__init__()
        self.add('v', VectorComp(size), promotes=['*'])
        par = self.add('par', ParallelGroup())
        for i in range(size):
            par.add('sec%02d' % i, Sliced())
            self.connect('x', 'par.sec%02d.x' % i, src_indices=[i])


if __name__ == '__main__':
    if MPI:
        from openmdao.core.petsc_impl import PetscImpl as impl
    else:
        from openmdao.core.basic_impl import BasicImpl as impl

    p = Problem(impl=impl, root=Analysis(4))

    recorder = DumpRecorder('optimization.log')
    # adding specific includes works, but leaving it out results in a crash
    # recorder.options['includes'] = ['x']
    p.driver.add_recorder(recorder)

    p.setup()
    p.run()
The error it raises is:
RuntimeError: Cannot access remote Variable 'par.sec00.x' in this process.
Since I see that the recorder dumps one file per processor, shouldn't the BaseRecorder._filter_vectors method filter out the parameters that don't exist on a given processor? I'm not familiar enough with the code yet to propose a fix, so I hope the OpenMDAO developers can easily spot what is going wrong.
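Roughly, what I imagine (a hypothetical sketch only, not OpenMDAO's actual code; I don't know where this would live internally) is that before dumping, the recorder would skip any variable that raises the RuntimeError shown above instead of letting it propagate:

# Hypothetical sketch only -- not OpenMDAO's actual implementation.
# Keep only the entries of 'vec' (assumed dict-like) that exist on
# this MPI rank, silently skipping the remote ones.
def filter_local_vars(vec):
    local = {}
    for name in vec:
        try:
            local[name] = vec[name]
        except RuntimeError:
            # remote variable owned by another process; skip it
            pass
    return local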
Manually specifying the includes works, since that excludes the Sliced parameters, but it would be great if that weren't necessary and this were handled under the hood.
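Concretely, the workaround is just to uncomment the includes line from the script above, so the recorder only touches the promoted output 'x':

recorder = DumpRecorder('optimization.log')
recorder.options['includes'] = ['x']  # record only the promoted output 'x'
p.driver.add_recorder(recorder)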
I also want to let you guys know how excited we are about the new framework. It is so much faster than the 0.x version, and the parallel FD capability is much appreciated and works like a charm!
Ok, thanks. I'll give 'SqliteRecorder' a try. – frza
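For anyone making the same swap, a minimal sketch, assuming the 1.x SqliteRecorder is constructed with an output filename the same way DumpRecorder is:

from openmdao.api import SqliteRecorder

# replace the DumpRecorder; the rest of the script stays the same
recorder = SqliteRecorder('optimization.sqlite')
recorder.options['includes'] = ['x']  # same workaround as above, if needed
p.driver.add_recorder(recorder)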