I'm trying to work through TensorFlow's Inception code for multiple GPUs (on one machine). I'm confused because, as I understand it, we get multiple losses from the different towers (i.e. the GPUs), yet the loss variable that gets evaluated seems to be only the last tower's loss rather than the sum of the losses from all towers:
for step in xrange(FLAGS.max_steps):
  start_time = time.time()
  _, loss_value = sess.run([train_op, loss])
  duration = time.time() - start_time
where loss was last defined specifically for each tower:
for i in xrange(FLAGS.num_gpus):
  with tf.device('/gpu:%d' % i):
    with tf.name_scope('%s_%d' % (inception.TOWER_NAME, i)) as scope:
      # Force all Variables to reside on the CPU.
      with slim.arg_scope([slim.variables.variable], device='/cpu:0'):
        # Calculate the loss for one tower of the ImageNet model. This
        # function constructs the entire ImageNet model but shares the
        # variables across all towers.
        loss = _tower_loss(images_splits[i], labels_splits[i], num_classes,
                           scope)
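As far as I can tell, loss here is just a Python name that is rebound on every iteration, so after the loop it refers only to the tensor built for the last tower. A minimal, runnable sketch of what I mean (made-up constants standing in for the real tower losses):

import tensorflow as tf

tower_losses = []
for i in range(2):
    # loss is rebound each iteration, just like in the tower loop above;
    # after the loop it points only at the last tower's tensor.
    loss = tf.constant(float(i + 2), name='tower_%d_loss' % i)
    tower_losses.append(loss)

with tf.Session() as sess:
    print(sess.run(loss))                    # 3.0 -- the last tower only
    print(sess.run(tf.add_n(tower_losses)))  # 5.0 -- the sum over all towers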
Could someone explain where the step is that combines the losses from the different towers? Or do we simply take one tower's loss as representative of the other towers' losses?
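From skimming the rest of the linked file, my guess is that the combination happens in gradient space rather than on the loss tensor itself: each tower's loss is differentiated separately and the per-tower gradients are averaged. A rough paraphrase of that pattern (using the file's own names like _tower_loss, _average_gradients, and opt; this is my reading, not a verbatim quote):

tower_grads = []
for i in xrange(FLAGS.num_gpus):
    with tf.device('/gpu:%d' % i):
        # Each tower gets its own loss tensor...
        loss = _tower_loss(images_splits[i], labels_splits[i], num_classes,
                           scope)
        # ...and its own gradients, which are collected across towers.
        tower_grads.append(opt.compute_gradients(loss))

# The per-tower gradients are averaged, so every tower's loss influences
# the update even though the fetched loss is only the last tower's.
grads = _average_gradients(tower_grads)
apply_gradient_op = opt.apply_gradients(grads, global_step=global_step)

If that reading is right, sess.run([train_op, loss]) trains on all towers but reports only the last tower's loss value.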
Here's the link to the code: https://github.com/tensorflow/models/blob/master/inception/inception/inception_train.py#L336