
How to restore a saved BiRNN model in TensorFlow so that all output neurons are correctly bound to their corresponding output classes

I am having trouble correctly restoring a saved TensorFlow model. I created a bidirectional RNN model in TensorFlow with the following code:

import os
import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn

batchX_placeholder = tf.placeholder(tf.float32, [None, time_steps, 1],
                                    name="batchX_placeholder")
batchY_placeholder = tf.placeholder(tf.float32, [None, num_classes],
                                    name="batchY_placeholder")
weights = tf.Variable(np.random.rand(2 * STATE_SIZE, num_classes),
                      dtype=tf.float32, name="weights")
biases = tf.Variable(np.zeros((1, num_classes)), dtype=tf.float32,
                     name="biases")
logits = BiRNN(batchX_placeholder, weights, biases)
with tf.name_scope("prediction"):
    prediction = tf.nn.softmax(logits)
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=batchY_placeholder))
lr = tf.Variable(learning_rate, trainable=False, dtype=tf.float32,
                 name='lr')
optimizer = tf.train.AdamOptimizer(learning_rate=lr)
train_op = optimizer.minimize(loss_op)
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
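For context, the snippet above relies on several names defined elsewhere in the post. Example values (these are assumptions, not from the original code) might be:

time_steps = 10        # assumed sequence length
num_classes = 2        # the inference code feeds [[0.0, 0.0]], suggesting two classes
STATE_SIZE = 64        # assumed LSTM state size
learning_rate = 0.001  # assumed initial learning rate
model_dir = "./model"  # assumed checkpoint directory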

The BiRNN architecture is created with the following function:

def BiRNN(x, weights, biases):
    # Unstack to get a list of 'time_steps' tensors of shape
    # (batch_size, num_input)
    x = tf.unstack(x, time_steps, 1)
    # Forward and backward direction cells
    lstm_fw_cell = rnn.BasicLSTMCell(STATE_SIZE, forget_bias=1.0)
    lstm_bw_cell = rnn.BasicLSTMCell(STATE_SIZE, forget_bias=1.0)
    outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell,
                                                 lstm_bw_cell, x,
                                                 dtype=tf.float32)
    # Linear activation, using the rnn inner loop's last output
    return tf.matmul(outputs[-1], weights) + biases
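For reference, static_bidirectional_rnn concatenates the forward and backward outputs at each time step, which is why weights has 2*STATE_SIZE rows. A quick shape check, as a sketch to run after the graph above is built:

# outputs[-1] has static shape [batch_size, 2*STATE_SIZE],
# so the matmul with weights yields [batch_size, num_classes]
print(logits.get_shape())  # expected: (?, num_classes)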

Then I train the model, saving it after every 200 steps:

with tf.Session() as sess:
    sess.run(init_op)
    current_step = 0
    for batch_x, batch_y in get_minibatch():
        sess.run(train_op, feed_dict={batchX_placeholder: batch_x,
                                      batchY_placeholder: batch_y})
        current_step += 1
        if current_step % 200 == 0:
            saver.save(sess, os.path.join(model_dir, "model"))
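As a side note (not from the original post), passing global_step to saver.save keeps numbered checkpoints such as model-200, model-400, and so on, instead of overwriting a single file:

if current_step % 200 == 0:
    # keeps one checkpoint per save point instead of overwriting
    saver.save(sess, os.path.join(model_dir, "model"),
               global_step=current_step)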

To run the saved model in inference mode, I restore the TensorFlow graph from the "model.meta" file:

graph = tf.get_default_graph()
saver = tf.train.import_meta_graph(os.path.join(model_dir, "model.meta"))
sess = tf.Session()
saver.restore(sess, tf.train.latest_checkpoint(model_dir))
weights = graph.get_tensor_by_name("weights:0")
biases = graph.get_tensor_by_name("biases:0")
batchX_placeholder = graph.get_tensor_by_name("batchX_placeholder:0")
batchY_placeholder = graph.get_tensor_by_name("batchY_placeholder:0")
logits = BiRNN(batchX_placeholder, weights, biases)
prediction = graph.get_operation_by_name("prediction/Softmax")
argmax_pred = tf.argmax(prediction, 1)
init = tf.global_variables_initializer()
sess.run(init)
for x_seq, y_gt in get_sequence():
    _, y_pred = sess.run([prediction, argmax_pred],
                         feed_dict={batchX_placeholder: [x_seq],
                                    batchY_placeholder: [[0.0, 0.0]]})
    print("Y ground truth: " + str(y_gt) + ", Y pred: " + str(y_pred[0]))

When I run this code in inference mode, I get different results each time I launch it. It seems that the output neurons of the softmax layer are randomly bound to different output classes.

So, my question is: how do I save and correctly restore a TensorFlow model, so that all neurons are correctly bound to their corresponding output classes?

Answers


There is no need to call tf.global_variables_initializer(); I think that is your problem.

I removed some operations: logits, weights and biases. You don't need them, since all of them are already loaded; use graph.get_tensor_by_name to get them.

For prediction, get the tensor instead of the operation (see this answer):

Here is the code:

graph = tf.get_default_graph() 
saver = tf.train.import_meta_graph(os.path.join(model_dir, "model.meta")) 
sess = tf.Session() 
saver.restore(sess, tf.train.latest_checkpoint(model_dir)) 

batchX_placeholder = graph.get_tensor_by_name("batchX_placeholder:0") 
batchY_placeholder = graph.get_tensor_by_name("batchY_placeholder:0") 
prediction = graph.get_tensor_by_name("prediction/Softmax:0") 
argmax_pred = tf.argmax(prediction, 1) 
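With the tensors above, inference needs no re-initialization and no label feed. A minimal loop, assuming get_sequence() yields (x_seq, y_gt) pairs as in the question:

for x_seq, y_gt in get_sequence():
    # only the input placeholder is needed for a forward pass
    y_prob, y_pred = sess.run([prediction, argmax_pred],
                              feed_dict={batchX_placeholder: [x_seq]})
    print("Y ground truth: " + str(y_gt) + ", Y pred: " + str(y_pred[0]))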

EDIT 1: I realized I wasn't clear about why you get different results.

"When I run the code in inference mode, I get different results each time I launch it."

Note that although you take the weights and biases from the loaded model, you create BiRNN again, and BasicLSTMCell also has its own weights and other variables that you do not set from the loaded model. They therefore get initialized again (with new random values), which results in an untrained model.
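One way to avoid rebuilding BiRNN at inference time is to give the logits a stable name when the graph is first constructed, so the tensor can be fetched by name after import_meta_graph. A sketch (the name "logits_out" is an assumption, not from the original code):

# at graph-construction time:
logits = tf.identity(BiRNN(batchX_placeholder, weights, biases),
                     name="logits_out")

# at inference time, after import_meta_graph and restore:
logits = graph.get_tensor_by_name("logits_out:0")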
