RNN model: inference on sentences longer than those seen during training

I train an RNN model (using rnn.dynamic_rnn), and my data matrix has shape num_examples x max_sequence_length x num_features. During training I do not want to increase max_sequence_length beyond 50 or 100, because that increases training time and memory use; all sentences in my training set are shorter than 50 tokens. At test time, however, I want the model to run inference on sequences of up to 500 tokens. Is that possible, and how can I do it?
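For reference, here is a minimal sketch of the setup, assuming TF 1.x where tf.nn.rnn_cell.GRUCell and tf.nn.dynamic_rnn are available (the sizes and zero-filled batches are illustrative, not my real data). Because the time dimension of the placeholder is left as None, the same cell and weights can unroll over 50 steps during training and 500 steps at test time:

    import numpy as np
    import tensorflow as tf

    num_features = 16   # illustrative sizes
    num_units = 32

    # time dimension left as None, so batches of any sequence length can be fed
    inputs = tf.placeholder(tf.float32, [None, None, num_features])
    seq_len = tf.placeholder(tf.int32, [None])

    cell = tf.nn.rnn_cell.GRUCell(num_units)
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs,
                                             sequence_length=seq_len,
                                             dtype=tf.float32)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # training-style batch: 8 examples, 50 timesteps
        train_batch = np.zeros((8, 50, num_features), dtype=np.float32)
        sess.run(final_state, {inputs: train_batch, seq_len: [50] * 8})
        # test-time batch: 1 example, 500 timesteps, same weights
        test_batch = np.zeros((1, 500, num_features), dtype=np.float32)
        sess.run(final_state, {inputs: test_batch, seq_len: [500]})

The graph itself does not fix the unroll length; my concern is whether this is a reasonable way to get predictions on much longer test sequences.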

Answer


@sonal - Yes, this is possible. Most of the time during testing we are interested in passing a single example rather than a whole batch of data. So what you need to do is pass an array of single instances, say

    test_index = [10, 23, 42, 12, 24, 50]

to dynamic_rnn. The prediction has to be made from the final hidden state. Inside dynamic_rnn I think you can pass sentences longer than the max_length used during training. If not, you can write a custom decoder function that computes the GRU or LSTM states using the weights you obtained during training. The idea is to keep generating outputs until you reach the maximum length for the test case, or until the model generates an "EOS" special token. I prefer to run a decoder on top of the encoder's final hidden state, which also gives better results.
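As a hedged illustration of the paragraph above (test_data, inputs, seq_len, final_state and sess are assumed to come from a graph like the sketch in the question; they are not names from this answer): gather the chosen test examples, pad them to the longest one, and pass their true lengths, so the final state is taken at each sequence's real last timestep rather than at the padding.

    import numpy as np

    # Hypothetical: test_data is a list of arrays, each of shape (length_i, num_features);
    # inputs, seq_len, final_state and sess come from a graph like the sketch above.
    test_index = [10, 23, 42, 12, 24, 50]
    chosen = [test_data[i] for i in test_index]
    lengths = np.array([ex.shape[0] for ex in chosen], dtype=np.int32)

    # pad to the longest selected sequence (this can exceed the training cap of 50)
    padded = np.zeros((len(chosen), lengths.max(), chosen[0].shape[1]), dtype=np.float32)
    for i, ex in enumerate(chosen):
        padded[i, :lengths[i]] = ex

    # with sequence_length passed, final_state reflects each sequence's last real timestep
    states = sess.run(final_state, {inputs: padded, seq_len: lengths})

The snippet below shows the custom GRU decoder approach.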

    import tensorflow as tf
    from tensorflow.python.ops import tensor_array_ops

    # Assumes the following already exist from the training graph:
    # max_sequence_length, embeddings, input_ta (a TensorArray of inputs),
    # initial_state, swap, and the trained weights W_out, b_out,
    # W_z_x, W_z_h, b_z, W_r_x, W_r_h, b_r, W_c_x, W_c_h, b_c.

    # condition for the while-loop, for early stopping
    def decoder_cond(time, state, output_ta_t):
        return tf.less(time, max_sequence_length)

    # the body_builder is just a wrapper to parse feedback
    def decoder_body_builder(feedback=False):
        # the decoder body, this is where the RNN magic happens!
        def decoder_body(time, old_state, output_ta_t):
            # when validating we need the previous prediction, handled via feedback
            if feedback:
                def from_previous():
                    prev_1 = tf.matmul(old_state, W_out) + b_out
                    a_max = tf.argmax(prev_1, 1)
                    # Find the predicted token index here; you can stop the loop
                    # once you hit an EOS token index.
                    return tf.gather(embeddings, a_max)
                # read the real input at step 0, feed back the prediction afterwards
                x_t = tf.cond(tf.greater(time, 0), from_previous, lambda: input_ta.read(0))
            else:
                # otherwise we just read the next timestep
                x_t = input_ta.read(time)

            # calculate the GRU
            z = tf.sigmoid(tf.matmul(x_t, W_z_x) + tf.matmul(old_state, W_z_h) + b_z)   # update gate
            r = tf.sigmoid(tf.matmul(x_t, W_r_x) + tf.matmul(old_state, W_r_h) + b_r)   # reset gate
            c = tf.tanh(tf.matmul(x_t, W_c_x) + tf.matmul(r * old_state, W_c_h) + b_c)  # proposed new state
            new_state = (1 - z) * c + z * old_state                                     # new state

            # writing output
            output_ta_t = output_ta_t.write(time, new_state)

            # return in "input-to-next-step" style
            return (time + 1, new_state, output_ta_t)
        return decoder_body

    # set up variables to loop with
    output_ta = tensor_array_ops.TensorArray(tf.float32, size=1, dynamic_size=True, infer_shape=False)
    time = tf.constant(0)
    loop_vars = [time, initial_state, output_ta]

    # run the while-loop for decoding
    _, state, output_ta = tf.while_loop(decoder_cond,
                                        decoder_body_builder(feedback=True),
                                        loop_vars,
                                        swap_memory=swap)

This is just a code snippet; try to modify it accordingly. For more details see https://github.com/alrojo/tensorflow-tutorial
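Regarding the early-stopping hint in the comment inside from_previous, below is a hedged sketch (not part of the original answer) of one way to end the tf.while_loop once every sequence in the batch has emitted the EOS token. batch_size, eos_id, max_test_length and the cell_step() helper are hypothetical stand-ins; the other names follow the snippet above.

    # batch_size and eos_id are Python ints; cell_step() stands in for the GRU
    # update in the snippet above; the remaining names also come from that snippet.
    finished = tf.zeros([batch_size], dtype=tf.bool)

    def decoder_cond(time, state, output_ta_t, finished):
        # loop while under the test-time cap AND at least one sequence is unfinished
        return tf.logical_and(tf.less(time, max_test_length),
                              tf.logical_not(tf.reduce_all(finished)))

    def decoder_body(time, old_state, output_ta_t, finished):
        prev_logits = tf.matmul(old_state, W_out) + b_out
        a_max = tf.argmax(prev_logits, 1)
        x_t = tf.cond(tf.greater(time, 0),
                      lambda: tf.gather(embeddings, a_max),
                      lambda: input_ta.read(0))
        new_state = cell_step(x_t, old_state)        # hypothetical GRU update helper
        output_ta_t = output_ta_t.write(time, new_state)
        # mark a sequence as finished once it emits the EOS token
        finished = tf.logical_or(finished, tf.equal(a_max, eos_id))
        return time + 1, new_state, output_ta_t, finished

    _, state, output_ta, _ = tf.while_loop(decoder_cond, decoder_body,
                                           [time, initial_state, output_ta, finished],
                                           swap_memory=swap)

Because the condition also caps the loop at max_test_length, generation still terminates even if the model never produces EOS.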