I am new to TensorFlow and am having trouble understanding the RNN module. I am trying to extract the hidden/cell states from an LSTM. For my code, I am using the implementation from https://github.com/aymericdamien/TensorFlow-Examples. How do I extract the cell state and hidden state from an RNN model in TensorFlow?
# Imports (TensorFlow 0.x-era API, matching the example repository);
# n_input, n_steps, n_hidden, n_classes, learning_rate are set earlier in the example
import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Define weights
weights = {'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))}
biases = {'out': tf.Variable(tf.random_normal([n_classes]))}

def RNN(x, weights, biases):
    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, n_steps, n_input)
    # Required shape: list of 'n_steps' tensors of shape (batch_size, n_input)

    # Permute batch_size and n_steps
    x = tf.transpose(x, [1, 0, 2])
    # Reshape to (n_steps*batch_size, n_input)
    x = tf.reshape(x, [-1, n_input])
    # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
    x = tf.split(0, n_steps, x)

    # Define an LSTM cell with TensorFlow
    #with tf.variable_scope('RNN'):
    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)

    # Get the LSTM cell outputs and final state
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

    # Linear activation, using the rnn inner loop's last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out'], states

pred, states = RNN(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.initialize_all_variables()
Now I want to extract the cell/hidden state at each time step of a prediction. The state is stored as an LSTMStateTuple of the form (c, h), which I can confirm by evaluating print states. However, calling print states.c.eval() (which, according to the docs, should give the values of the tensor states.c) produces an error saying that my variables are not initialized, even though I am calling it right after predicting something. The code that causes this is here:
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1
    for v in tf.get_collection(tf.GraphKeys.VARIABLES, scope='RNN'):
        print v.name
    # Keep training until reaching max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 sequences of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        print states.c.eval()  # <-- this line raises the error below
        # Calculate batch accuracy
        acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
        step += 1
    print "Optimization Finished!"
and the error message is:
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
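For context: Tensor.eval() runs the tensor in the default session, so it triggers a fresh execution of the graph, and computing states.c requires the input placeholder to be fed. Below is a minimal sketch of a call that should succeed inside the training loop above (batch_x comes from the loop; that the missing feed is the cause is consistent with the fix in the comment at the end):

# states.c is computed from the placeholder x, so the feed must be supplied;
# Tensor.eval(feed_dict=...) is shorthand for sess.run(states.c, feed_dict=...)
# on the default session. y is not needed here, since states.c does not depend on it.
c_val = states.c.eval(feed_dict={x: batch_x})
print c_val.shape  # (batch_size, n_hidden) for the BasicLSTMCell above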
Also, states does not show up in tf.all_variables(); only the trained weight/bias tensors do (as noted here: Tensorflow: show or save forget gate values in LSTM). I don't want to build the whole LSTM from scratch, since I already have the state in the states variable and only need to fetch it.
Thank you very much! I didn't know it worked that way. Two small adjustments, though, for anyone who stumbles upon this in the future: 1) the fetch arguments should go in a list, and 2) the variables I assign to must be named differently from the previously assigned ones. The syntax that worked for me was 'preds, stat, acc = sess.run([pred, states, accuracy], feed_dict={x: batch_x, y: batch_y})' – Valedra
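Putting the comment's fix together, here is a minimal sketch of the corrected inner loop body, assuming the graph from the question. Note that rnn.rnn returns only the final state; the per-time-step hidden states are the elements of outputs.

# Run the optimization op (backprop) as before
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
# Fetch predictions, final state, and batch accuracy in a single run:
# the fetches go in a list (adjustment 1), and the Python names on the left
# must differ from the graph tensors pred/states/accuracy (adjustment 2)
preds, stat, acc = sess.run([pred, states, accuracy],
                            feed_dict={x: batch_x, y: batch_y})
print stat.c  # final cell state as a numpy array, shape (batch_size, n_hidden)
print stat.h  # final hidden state as a numpy array, shape (batch_size, n_hidden)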