I am working on a boosting example in TensorFlow (extending a 4-layer DNN to a 5-layer DNN). I am saving the session and restoring it because the TensorFlow tutorial at https://www.tensorflow.org/how_tos/variables/ has a short paragraph: 'For example, you may have trained a neural net with 4 layers, and you now want to train a new model with 5 layers, restoring the parameters from the 4 layers of the previously trained model into the first 4 layers of the new model.' So my question is: how do I restore a subset of the variables of the new model in TensorFlow?
However, I found that nobody has asked how to use 'restore' when the checkpoint stores the parameters of 4 layers but we need to load them into 5 layers, which raised a red flag for me.
To put this into actual code, I did:
with tf.name_scope('fcl1'):
    hidden_1 = fully_connected_layer(inputs, train_data.inputs.shape[1], num_hidden)
with tf.name_scope('fcl2'):
    hidden_2 = fully_connected_layer(hidden_1, num_hidden, num_hidden)
with tf.name_scope('fclf'):
    hidden_final = fully_connected_layer(hidden_2, num_hidden, num_hidden)
with tf.name_scope('outputl'):
    outputs = fully_connected_layer(hidden_final, num_hidden, train_data.num_classes, tf.identity)
    outputs = tf.nn.softmax(outputs)
with tf.name_scope('boosting'):
    boosts = fully_connected_layer(outputs, train_data.num_classes, train_data.num_classes, tf.identity)
The variables inside the scopes 'fcl1' (so I have 'fcl1/Variable' and 'fcl1/Variable_1' for the weights and biases), 'fcl2', 'fclf', and 'outputl' were stored by saver.save() in a script that had no 'boosting' layer. However, now that the graph has the 'boosting' layer, saver.restore(sess, "saved_models/model_list.ckpt") does not work and raises:
NotFoundError: Key boosting/Variable_1 not found in checkpoint
I would really appreciate an answer to this question. Thank you. The code below is the main part of the code I am stuck on.
def fully_connected_layer(inputs, input_dim, output_dim, nonlinearity=tf.nn.relu):
    # Note: 'weights'/'biases' are passed positionally here, so they land in
    # tf.Variable's second parameter (trainable), not in name=; the variables
    # therefore keep the default names 'Variable' and 'Variable_1', which is
    # why those names show up in the checkpoint keys below.
    weights = tf.Variable(
        tf.truncated_normal(
            [input_dim, output_dim], stddev=2. / (input_dim + output_dim)**0.5),
        'weights')
    biases = tf.Variable(tf.zeros([output_dim]), 'biases')
    outputs = nonlinearity(tf.matmul(inputs, weights) + biases)
    return outputs
inputs = tf.placeholder(tf.float32, [None, train_data.inputs.shape[1]], 'inputs')
targets = tf.placeholder(tf.float32, [None, train_data.num_classes], 'targets')

with tf.name_scope('fcl1'):
    hidden_1 = fully_connected_layer(inputs, train_data.inputs.shape[1], num_hidden)
with tf.name_scope('fcl2'):
    hidden_2 = fully_connected_layer(hidden_1, num_hidden, num_hidden)
with tf.name_scope('fclf'):
    hidden_final = fully_connected_layer(hidden_2, num_hidden, num_hidden)
with tf.name_scope('outputl'):
    outputs = fully_connected_layer(hidden_final, num_hidden, train_data.num_classes, tf.identity)

with tf.name_scope('error'):
    error = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=targets, logits=outputs))
with tf.name_scope('accuracy'):
    accuracy = tf.reduce_mean(tf.cast(
        tf.equal(tf.argmax(outputs, 1), tf.argmax(targets, 1)),
        tf.float32))
with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer().minimize(error)

init = tf.global_variables_initializer()
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)
    saver.restore(sess, "saved_models/model.ckpt")
    print("Model restored")
    print("Optimization Starts!")
    for e in range(training_epochs):
        ...
    # Save model - save session
    save_path = saver.save(sess, "saved_models/model.ckpt")
    ### I once saved the variables using var_list, but that didn't work either...
    print("Model saved in file: %s" % save_path)
To be clear, the checkpoint file contains:
fcl1/Variable:0
fcl1/Variable_1:0
fcl2/Variable:0
fcl2/Variable_1:0
fclf/Variable:0
fclf/Variable_1:0
outputl/Variable:0
outputl/Variable_1:0
since the original 4-layer model does not have the 'boosting' layer.
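The mismatch can be checked directly by comparing the graph's variable names against the checkpoint keys: the two 'boosting/*' names have no counterpart in the checkpoint, which is exactly what the NotFoundError reports. A minimal sketch of that comparison (plain Python, no TensorFlow required; the names are taken from the listing above, with the ':0' output suffix dropped since checkpoint keys omit it):

```python
# Variable names present in the 4-layer checkpoint (from the listing above).
ckpt_keys = {
    "fcl1/Variable", "fcl1/Variable_1",
    "fcl2/Variable", "fcl2/Variable_1",
    "fclf/Variable", "fclf/Variable_1",
    "outputl/Variable", "outputl/Variable_1",
}

# Variable names in the new 5-layer graph: the same eight plus the new layer.
graph_vars = ckpt_keys | {"boosting/Variable", "boosting/Variable_1"}

# Variables a default Saver tries to restore but the checkpoint does not have.
missing = sorted(graph_vars - ckpt_keys)
print(missing)  # ['boosting/Variable', 'boosting/Variable_1']
```

A default tf.train.Saver() covers every variable in the current graph, so restore fails as soon as any of them is absent from the checkpoint.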
Models can be restored using the 'var_list' argument of the 'tf.Saver' constructor (https://www.tensorflow.org/api_docs/python/state_ops/saving_and_restoring_variables). After that, you will be responsible for initializing the fifth layer correctly. – drpng
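To make the comment concrete: build one Saver restricted to the variables that already exist in the checkpoint (everything outside the 'boosting' scope), run the global initializer first so the new layer gets fresh values, and only then restore over the four old layers. A sketch of the name filter, runnable without TensorFlow, with the assumed TF1 wiring shown in comments (the checkpoint path and 'boosting' scope name are taken from the question):

```python
def vars_to_restore(all_var_names, new_scopes=("boosting",)):
    """Names the restore-Saver should cover: everything that is
    not under one of the newly added scopes."""
    skip = tuple(s + "/" for s in new_scopes)
    return [n for n in all_var_names if not n.startswith(skip)]

# With the TF1 API from the question, this would plug in roughly as:
#   old_vars = [v for v in tf.global_variables()
#               if not v.name.startswith('boosting/')]
#   restore_saver = tf.train.Saver(var_list=old_vars)  # restores the 4 old layers
#   full_saver = tf.train.Saver()                      # saves all 5 layers later
#   with tf.Session() as sess:
#       sess.run(tf.global_variables_initializer())    # fresh 'boosting' weights
#       restore_saver.restore(sess, "saved_models/model.ckpt")
#       ... training loop ...
#       full_saver.save(sess, "saved_models/model_5layer.ckpt")

names = ["fcl1/Variable", "fcl1/Variable_1",
         "boosting/Variable", "boosting/Variable_1",
         "outputl/Variable"]
print(vars_to_restore(names))
```

var_list also accepts a dict mapping checkpoint keys to variables, which is useful when the names in the new graph differ from those in the checkpoint; here the old names match, so a plain list suffices.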