
Changing device assignment in a trained and reloaded Keras model

I have a Keras model that was trained on 8 GPUs. This means the model contains blocks such as with tf.device('gpu:0'). Now I want to apply transfer learning on another PC with 4 GPUs. However, this results in an error, most likely because the model was trained on more GPUs (error: could not set cudnn tensor descriptor: CUDNN_STATUS_BAD_PARAM). In the error log I can also see warnings that TensorFlow is trying to co-locate gradients on devices GPU 0-7. Is there a way to adapt or clear the devices configured in a trained Keras model?

FYI: I do not have a meta graph file, because the model was saved with Keras rather than with the TensorFlow saver functions.
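
For reference, one way to sidestep the baked-in device assignments is a weights-only round trip: the placements live in the graph definition, not in the weights, so rebuilding the graph for 4 GPUs and copying the weights over discards the old gpu:0-7 pins. This is a minimal sketch, not something attempted in the question, and it assumes the baseline_model factory from the script below is importable on both machines:

# Hypothetical weights-only transfer between machines.
# On the 8-GPU machine: save only the weights (no graph, no device pins).
model_8gpu = baseline_model(784, 9, 8)
# ... training happens here ...
model_8gpu.save_weights('baseline_weights.h5')

# On the 4-GPU machine: rebuild the architecture with the new GPU count
# and load the saved weights into it. Only the nested Sequential carries
# weights, so the weight layouts of the two wrappers should match.
model_4gpu = baseline_model(784, 9, 4)
model_4gpu.load_weights('baseline_weights.h5')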


Current attempts

I tried changing the layer attributes, but that did not make it work:

import numpy as np

track = 0
for i in range(len(model.layers)):
    # Look for the Lambda layers that slice the batch per GPU
    if model.layers[i].name[:6] == 'lambda':
        model.layers[i].arguments['n_gpus'] = n_gpus
        # Re-assign slices that point to a gpu id that no longer exists
        if model.layers[i].arguments['part'] > n_gpus - 1:
            model.layers[i].arguments['part'] = np.arange(n_gpus)[track]
            track += 1
            if track > n_gpus - 1:
                track = 0

I also tried setting the number of visible devices, which did not work either:

import os
# Note: this usually needs to be set before TensorFlow is imported
os.environ['CUDA_VISIBLE_DEVICES'] = "0,1,2,3"

The script used to train on 8 GPUs

""" 
to_multi_gpu & slice_batch by: https://github.com/fchollet/keras/issues/2436 
baseline_model by: http://machinelearningmastery.com/ 
""" 
from keras import backend as K 
from keras.models import Sequential, Model 
from keras.layers import Dense, Input, Lambda, merge 
import tensorflow as tf 

def slice_batch(x, n_gpus, part): 
    """ 
    Divide the input batch into [n_gpus] slices, and obtain slice no. [part] 
    i.e. if len(x)=10, then slice_batch(x, 2, 1) will return x[5:]. 
    x: input batch (input shape of model) 
    n_gpus: number of gpus 
    part: id of current gpu 

    return: sliced model per gpu 
    """ 
    sh = K.shape(x) 
    L = sh[0] // n_gpus 
    if part == n_gpus - 1: 
     return x[part*L:] 
    return x[part*L:(part+1)*L] 

def to_multi_gpu(model, n_gpus): 
    """ 
    Given a keras [model], return an equivalent model which parallelizes 
    the computation over [n_gpus] GPUs. 
    Each GPU gets a slice of the input batch, applies the model on that slice 
    and later the outputs of the models are concatenated to a single 
    tensor, hence the user sees a model that behaves the same as the original. 

    model: sequential model created with the Keras library 
    n_gpus: number of gpus 

    return: model divided over n_gpus 
    """ 
    # Only divide model over multiple gpus if there is more than one 
    if n_gpus > 1: 
     with tf.device('/cpu:0'): 
      x = Input(model.input_shape[1:])#, name=model.input_names[0] 

     towers = [] 
     # Divide model over gpus 
     for g in range(n_gpus): 
      # Work on GPU number g. 
      with tf.device('/gpu:' + str(g)): 
       # Obtain the g-th slice of the batch. 
       slice_g = Lambda(slice_batch, lambda shape: shape, 
           arguments={'n_gpus':n_gpus, 'part':g})(x) 
       # Apply model on the batch slice. 
       towers.append(model(slice_g)) 
     # Merge multi-gpu outputs with cpu 
     with tf.device('/cpu:0'): 
      merged = merge(towers, mode='concat', concat_axis=0) 

     return Model(input=[x], output=merged) 
    else: 
     return model 

def baseline_model(num_pixels, num_classes, n_gpus): 
    # create model 
    model = Sequential() 
    model.add(Dense(num_pixels, input_dim=num_pixels, init='normal', activation='relu')) 
    model.add(Dense(num_classes, init='normal', activation='softmax')) 

    model = to_multi_gpu(model, n_gpus) 
    # Compile model 
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) 
    return model 

if __name__ == '__main__': 
    model = baseline_model(784, 9, 8) 
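
For completeness, a quick smoke test of the parallel model on random data might look as follows. This is a sketch; the dummy shapes and epoch count are assumptions, and 8 visible GPUs are required:

import numpy as np

# Hypothetical smoke test with random data: 256 samples, 784 features,
# 9 one-hot classes, matching baseline_model(784, 9, 8) above.
X = np.random.rand(256, 784)
y = np.zeros((256, 9))
y[np.arange(256), np.random.randint(0, 9, 256)] = 1

model = baseline_model(784, 9, 8)
model.fit(X, y, nb_epoch=1, batch_size=64)  # Keras 1.x uses nb_epoch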

Answer


Loading the model that was created with the split above using the settings below solved it. However, the model now runs on the CPU instead of the GPUs. Since I am only fine-tuning the last layer of this model, this is not a big problem. But if you want to reload and train the whole model, this answer may be unsatisfactory.

The important settings are os.environ['CUDA_VISIBLE_DEVICES'] = "" and allow_soft_placement=True.

The first masks all GPUs; the second makes TensorFlow automatically place the model on the devices that are available (in this case the CPU).


Example code

import os
os.environ['CUDA_VISIBLE_DEVICES'] = ""  # mask all GPUs; set before importing tensorflow
import tensorflow as tf
from keras.models import load_model
from keras import backend as K

if __name__ == '__main__':
    model = load_model('baseline_model.h5')
    init = tf.global_variables_initializer()
    gpu_options = tf.GPUOptions(allow_growth=True)
    # allow_soft_placement lets TensorFlow re-assign the gpu:0-7 ops to
    # whatever devices are actually available (here: the CPU).
    with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options,
                                          allow_soft_placement=True,
                                          log_device_placement=True)) as sess:
        K.set_session(sess)
        sess.run(init)
        tf.train.start_queue_runners(sess=sess)
        # Call model.fit here; the with block closes the session on exit
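
Since only the last layer is being fine-tuned here, a sketch of one way to set that up follows. This is an assumption, not part of the original answer; note that for the multi-GPU wrapper the trained Dense layers live inside the nested Sequential, which would need to be located in model.layers first:

# Hypothetical last-layer fine-tuning: freeze everything except the
# final layer, then recompile so the trainable flags take effect.
for layer in model.layers[:-1]:
    layer.trainable = False
model.layers[-1].trainable = True

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
# model.fit(...) on the transfer-learning data goes here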