
How to extract the cell state and hidden state from an RNN model in TensorFlow?

I am new to TensorFlow and am having trouble understanding the RNN module. I am trying to extract the hidden/cell states from an LSTM. For my code, I am using the implementation from https://github.com/aymericdamien/TensorFlow-Examples.

import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell

# n_input, n_steps, n_hidden, n_classes, learning_rate, batch_size and
# training_iters are the hyperparameters from the linked example

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Define weights 
weights = {'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))} 
biases = {'out': tf.Variable(tf.random_normal([n_classes]))} 

def RNN(x, weights, biases): 
    # Prepare data shape to match `rnn` function requirements 
    # Current data input shape: (batch_size, n_steps, n_input) 
    # Required shape: 'n_steps' tensors list of shape (batch_size, n_input) 

    # Permuting batch_size and n_steps 
    x = tf.transpose(x, [1, 0, 2]) 
    # Reshaping to (n_steps*batch_size, n_input) 
    x = tf.reshape(x, [-1, n_input]) 
    # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input) 
    x = tf.split(0, n_steps, x) 

    # Define a lstm cell with tensorflow 
    #with tf.variable_scope('RNN'): 
    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True) 

    # Get lstm cell output 
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

    # Linear activation, using rnn inner loop last output 
    return tf.matmul(outputs[-1], weights['out']) + biases['out'], states 

pred, states = RNN(x, weights, biases) 

# Define loss and optimizer 
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) 
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) 

# Evaluate model 
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1)) 
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) 
# Initializing the variables 
init = tf.initialize_all_variables() 

Now I want to extract the cell/hidden state at every time step of a prediction. The state is stored in an LSTMStateTuple of the form (c, h), which I can confirm by evaluating print states. However, calling print states.c.eval() (which according to the documentation should give the values of the tensor states.c) produces an error saying that my variables are not initialized, even though I call it right after having predicted something. The code causing this is here:

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1
    for v in tf.get_collection(tf.GraphKeys.VARIABLES, scope='RNN'):
        print v.name
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 seq of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})

        print states.c.eval()  # <-- this call triggers the error below
        # Calculate batch accuracy
        acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})

        step += 1
    print "Optimization Finished!"

and the error message is

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float 
    [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] 

The states also do not show up in tf.all_variables(); only the trained weight/bias tensors do (as described here: Tensorflow: show or save forget gate values in LSTM). I do not want to rebuild the whole LSTM from scratch, because I already have the states in the states variable and only need to fetch it.

Answers


You can simply fetch the value of states the same way you fetch the accuracy. I guess pred, states, acc = sess.run(pred, states, accuracy, feed_dict={x: batch_x, y: batch_y}) should work fine.


Thank you very much! I did not know it worked that way. Two small adjustments for anyone who stumbles over this in the future, though: 1) the arguments should be inside a list, and 2) the variables I assign to must be named differently from the previously assigned ones. The syntax that worked for me is 'preds, stat, acc = sess.run([pred, states, accuracy], feed_dict={x: batch_x, y: batch_y})' – Valedra
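Putting the comment's correction into the question's training loop, a minimal sketch looks like this (state_val is a new name for the fetched values; everything else is the question's own):

# Fetch the state tuple in the same run call as the training op, so the
# returned states correspond exactly to the batch that was just fed in.
_, state_val, acc = sess.run([optimizer, states, accuracy],
                             feed_dict={x: batch_x, y: batch_y})
print state_val.c  # cell state after the last time step, shape (batch_size, n_hidden)
print state_val.h  # hidden state after the last time step, same shape

Because states is fetched through sess.run with a feed_dict, this avoids the unfed-placeholder error that states.c.eval() raised.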


One remark on your assumption: states really does hold only the values of the hidden state and the memory cell from the last time step.

outputs contains the hidden state of every time step, which is what you want (the size of outputs is [batch_size, seq_len, hidden_size]), so I assume you want the outputs variable, not states. See the documentation.
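If you want outputs with exactly that [batch_size, seq_len, hidden_size] layout as a single tensor, tf.nn.dynamic_rnn returns it directly; a minimal sketch, assuming the question's placeholder x (before the transpose/reshape/split) and an LSTM cell built like the one in RNN():

# dynamic_rnn consumes the (batch_size, n_steps, n_input) tensor as-is,
# so the transpose/reshape/split preprocessing is not needed.
outputs, last_state = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32)
# outputs    : shape (batch_size, n_steps, n_hidden), one hidden state per time step
# last_state : LSTMStateTuple(c, h) for the final time step only,
#              so last_state.h equals outputs[:, n_steps - 1, :]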


I have to disagree with user3480922's answer. For the code:

outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32) 

to extract the hidden state for each time_step of a prediction, you have to use outputs, because outputs holds the hidden state value for every time_step. However, I am not sure whether there is any way to also store the cell state for every time_step: the states tuple provides the cell state values, but only for the last time_step. For example, in the following sample with 5 time_steps, outputs[4,:,:] holds the hidden state values for time_step = 4 (out of time_step = 0,...,4), and the states tuple's h likewise holds only the hidden state values for time_step = 4 (the matching blocks are marked with ** in the printout below), while the states tuple's c holds the cell value at time_step = 4.

outputs = [[[ 0.0589103  -0.06925126 -0.01531546  0.06108122]
            [ 0.00861215  0.06067181  0.03790079 -0.04296958]
            [ 0.00597713  0.03916606  0.02355802 -0.0277683 ]]

           [[ 0.06252582 -0.07336216 -0.01607122  0.05024602]
            [ 0.05464711  0.03219429  0.06635305  0.00753127]
            [ 0.05385715  0.01259535  0.0524035   0.01696803]]

           [[ 0.0853352  -0.06414541  0.02524283  0.05798233]
            [ 0.10790729 -0.05008117  0.03003334  0.07391824]
            [ 0.10205664 -0.04479517  0.03844892  0.0693808 ]]

           [[ 0.10556188  0.0516542   0.09162509 -0.02726674]
            [ 0.11425048 -0.00211394  0.06025286  0.03575509]
            [ 0.11338984  0.02839304  0.08105748  0.01564003]]

           **[[ 0.10072514  0.14767936  0.12387902 -0.07391471]
            [ 0.10510238  0.06321315  0.08100517 -0.00940042]
            [ 0.10553667  0.0984127   0.10094948 -0.02546882]]**]

states = LSTMStateTuple(
    c=array([[ 0.23870754,  0.24315512,  0.20842518, -0.12798975],
             [ 0.23749796,  0.10797793,  0.14181322, -0.01695861],
             [ 0.2413336 ,  0.16692916,  0.17559692, -0.0453596 ]], dtype=float32),
    h=array(**[[ 0.10072514,  0.14767936,  0.12387902, -0.07391471],
             [ 0.10510238,  0.06321315,  0.08100517, -0.00940042],
             [ 0.10553667,  0.0984127 ,  0.10094948, -0.02546882]]**, dtype=float32))
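If you really do need the cell state c at every time_step and not only the last one, one workaround is to unroll the cell by hand instead of calling rnn.rnn, keeping each intermediate state tuple. A minimal sketch under the question's setup (x is assumed to be the list of n_steps tensors produced by tf.split, lstm_cell and batch_size are assumed from the question, and the scope name manual_rnn is arbitrary):

# Step the LSTM cell manually so every intermediate
# LSTMStateTuple (c, h) is kept, one entry per time step.
all_states = []
state = lstm_cell.zero_state(batch_size, tf.float32)
with tf.variable_scope("manual_rnn"):
    for t, x_t in enumerate(x):
        if t > 0:
            tf.get_variable_scope().reuse_variables()
        output, state = lstm_cell(x_t, state)
        all_states.append(state)

Fetching all_states with sess.run (together with the usual feed_dict) then returns the full sequence of both cell and hidden states.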