
TensorFlow: Remember LSTM state for next batch (stateful LSTM)

Given a trained LSTM model, I want to perform inference for a single timestep, i.e. seq_length = 1 in the example below. After each timestep, the internal LSTM (memory and hidden) states need to be remembered for the next "batch". At the very beginning of inference, the internal LSTM states init_c, init_h are computed from the input; they are then stored in an LSTMStateTuple object which is passed to the LSTM. During training this state is updated every timestep. For inference, however, I want the state to be saved in between batches, i.e. the initial states only need to be computed at the very beginning, and after that the LSTM state should be saved after each "batch" (n = 1).

I found this related StackOverflow question: Tensorflow, best way to save state in RNNs?. However, that only works with state_is_tuple=False, and this behavior will soon be deprecated by TensorFlow (see rnn_cell.py). Keras seems to have a nice wrapper that makes stateful LSTMs possible, but I don't know the best way to achieve this in TensorFlow. This issue on the TensorFlow GitHub is also related to my question: https://github.com/tensorflow/tensorflow/issues/2838

Any good suggestions for building a stateful LSTM model?

inputs = tf.placeholder(tf.float32, shape=[None, seq_length, 84, 84], name="inputs")
targets = tf.placeholder(tf.float32, shape=[None, seq_length], name="targets")

num_lstm_layers = 2

with tf.variable_scope("LSTM") as scope:

    lstm_cell = tf.nn.rnn_cell.LSTMCell(512, initializer=initializer, state_is_tuple=True)
    self.lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * num_lstm_layers, state_is_tuple=True)

    init_c = # compute initial LSTM memory state using contents in placeholder 'inputs'
    init_h = # compute initial LSTM hidden state using contents in placeholder 'inputs'
    self.state = [tf.nn.rnn_cell.LSTMStateTuple(init_c, init_h)] * num_lstm_layers

    outputs = []

    for step in range(seq_length):

        if step != 0:
            scope.reuse_variables()

        # CNN features, as input for LSTM
        x_t = # ...

        # LSTM step through time
        output, self.state = self.lstm(x_t, self.state)
        outputs.append(output)

Possible duplicate of [Tensorflow, best way to save state in RNNs?](http://stackoverflow.com/questions/37969065/tensorflow-best-way-to-save-state-in-rnns)

Answers


I found it easiest to save the whole state for all layers in a single placeholder.

# Shape [num_layers, 2, batch_size, state_size]; the '2' dimension
# holds the (c, h) pair of each layer.
init_state = np.zeros((num_layers, 2, batch_size, state_size))

...

state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])

Then unpack it and create a tuple of LSTMStateTuples before using the native TensorFlow RNN API:

l = tf.unpack(state_placeholder, axis=0)
rnn_tuple_state = tuple(
    [tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], l[idx][1])
     for idx in range(num_layers)]
)

Then pass it into the RNN API as usual:

cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)
cell = tf.nn.rnn_cell.MultiRNNCell([cell] * num_layers, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(cell, x_input_batch, initial_state=rnn_tuple_state)

The state returned by dynamic_rnn is then fed back in through the placeholder for the next batch.
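
For completeness, a minimal sketch of the resulting feed loop, assuming a session sess and a hypothetical get_next_batch helper (neither appears in the answer above). The state fetched from one run is fed straight back into state_placeholder on the next:

# Zero state for the very first batch.
current_state = np.zeros((num_layers, 2, batch_size, state_size))

for _ in range(num_batches):
    x_batch = get_next_batch()  # hypothetical input pipeline
    # Fetching 'state' yields a nested tuple of (c, h) numpy arrays,
    # which numpy coerces back into the [num_layers, 2, ...] layout
    # expected by state_placeholder.
    out, current_state = sess.run(
        [outputs, state],
        feed_dict={x_input_batch: x_batch,
                   state_placeholder: current_state})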


Tensorflow, best way to save state in RNNs? was actually my original question. The code below is how I use the state tuples.

with tf.variable_scope('decoder') as scope:
    rnn_cell = tf.nn.rnn_cell.MultiRNNCell([
        tf.nn.rnn_cell.LSTMCell(512, num_proj=256, state_is_tuple=True),
        tf.nn.rnn_cell.LSTMCell(512, num_proj=WORD_VEC_SIZE, state_is_tuple=True)
    ], state_is_tuple=True)

    # Zero initial state as a plain list of [c, h] lists, one per layer.
    state = [[tf.zeros((BATCH_SIZE, sz)) for sz in sz_outer]
             for sz_outer in rnn_cell.state_size]

    for t in range(TIME_STEPS):
        if t:
            last = y_[t - 1] if TRAINING else y[t - 1]
        else:
            last = tf.zeros((BATCH_SIZE, WORD_VEC_SIZE))

        y[t] = tf.concat(1, (y[t], last))
        y[t], state = rnn_cell(y[t], state)

        scope.reuse_variables()

Rather than using tf.nn.rnn_cell.LSTMStateTuple I just create a list of lists, which works fine. In this example I am not saving the state. However, you could easily have made the state out of variables and just used assign to save the values.
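
As a sketch of that last suggestion, assuming BATCH_SIZE and state_size as in the snippets above and an input tensor x_t (the names state_c, state_h, and update_state are mine, not from the answer): keep the state in non-trainable variables and group the assign ops, so one run of the update op per batch carries the state forward inside the graph.

cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)

# Non-trainable variables that persist the LSTM state across run() calls.
state_c = tf.Variable(tf.zeros([BATCH_SIZE, state_size]), trainable=False)
state_h = tf.Variable(tf.zeros([BATCH_SIZE, state_size]), trainable=False)

output, new_state = cell(x_t, tf.nn.rnn_cell.LSTMStateTuple(state_c, state_h))

# Running 'update_state' writes the new (c, h) back into the variables,
# so the next batch resumes where this one stopped.
update_state = tf.group(state_c.assign(new_state.c),
                        state_h.assign(new_state.h))

Each sess.run([output, update_state], ...) then advances the state by one step, and re-initializing the two variables resets the sequence; no feed_dict plumbing for the state is needed.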