Output states of a multilayered encoder to a multilayered decoder in a Seq2Seq model, TensorFlow 1.0
My question is: what shape does the encoder_state argument of tf.contrib.seq2seq.attention_decoder_fn_train expect?
Can it take the output state of a multilayered encoder?
Context:
I want to build a multilayered, bidirectional, attention-based seq2seq model in TensorFlow 1.0.
My encoder:
cell = tf.contrib.rnn.LSTMCell(n)
cell = tf.contrib.rnn.MultiRNNCell([cell] * 4)
((encoder_fw_outputs, encoder_bw_outputs),
 (encoder_fw_state, encoder_bw_state)) = tf.nn.bidirectional_dynamic_rnn(cell_fw=cell, cell_bw=cell, ...)
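For reference, here is a minimal self-contained sketch of that encoder (n, num_layers, batch_size, max_time and input_dim are placeholder values of my own, and I build a separate cell per direction to keep it runnable). With a 4-layer MultiRNNCell, each of encoder_fw_state and encoder_bw_state comes back as a 4-tuple of LSTMStateTuple(c, h):

import tensorflow as tf  # TensorFlow 1.0

n, num_layers = 20, 4                        # hidden size and layer count (placeholders)
batch_size, max_time, input_dim = 32, 10, 8  # placeholder input shape

inputs = tf.placeholder(tf.float32, [batch_size, max_time, input_dim])

def make_cell():
    # one multilayered LSTM stack per direction
    return tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.LSTMCell(n) for _ in range(num_layers)])

((encoder_fw_outputs, encoder_bw_outputs),
 (encoder_fw_state, encoder_bw_state)) = tf.nn.bidirectional_dynamic_rnn(
    cell_fw=make_cell(), cell_bw=make_cell(), inputs=inputs, dtype=tf.float32)

# prints a 4-tuple of LSTMStateTuple(c=(32, 20), h=(32, 20)) tensors
print(encoder_fw_state)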
Now, the multilayered bidirectional encoder returns the encoder cell states [c] and hidden states [h] for each layer, separately for the forward and the backward pass. I concatenate the forward-pass and backward-pass states and pass the result on as encoder_state:
self.encoder_state = tf.concat((encoder_fw_state, encoder_bw_state), -1)
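As far as I can tell, this flat tf.concat collapses the nested tuple-of-LSTMStateTuple structure into a single tensor, which looks like the first structure in the error below. For comparison, a per-layer concatenation that preserves the LSTMStateTuple nesting would look like the following sketch (my own guess; structured_encoder_state is a hypothetical name, and I have not confirmed this is what attention_decoder_fn_train actually wants):

# concatenate fw and bw states layer by layer, keeping the
# (LSTMStateTuple, ...) nesting intact; 4 layers as above
structured_encoder_state = tuple(
    tf.contrib.rnn.LSTMStateTuple(
        c=tf.concat((encoder_fw_state[i].c, encoder_bw_state[i].c), 1),
        h=tf.concat((encoder_fw_state[i].h, encoder_bw_state[i].h), 1))
    for i in range(4))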
For now I pass the flat concatenation to my decoder:
decoder_fn_train = seq2seq.simple_decoder_fn_train(encoder_state=self.encoder_state)
(self.decoder_outputs_train,
self.decoder_state_train,
 self.decoder_context_state_train) = seq2seq.dynamic_rnn_decoder(cell=decoder_cell, ...)
But this gives the following error:
ValueError: The two structures don't have the same number of elements. First structure: Tensor("BidirectionalEncoder/transpose:0", shape=(?, 2, 2, 20), dtype=float32), second structure: (LSTMStateTuple(c=20, h=20), LSTMStateTuple(c=20, h=20)).
My decoder_cell is also multilayered. Reading the error, the decoder apparently expects a tuple of LSTMStateTuples as its initial state, while my concatenated encoder_state is a single tensor.
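For completeness, decoder_cell is built the same way as the encoder stack; my (untested) assumption is that, with the per-layer concatenation sketched above, each decoder layer would need hidden size 2 * n for the shapes to line up:

# decoder stack; 2 * n per layer so that an initial state made of
# concatenated fw+bw encoder states would fit -- my assumption, untested
decoder_cell = tf.contrib.rnn.MultiRNNCell(
    [tf.contrib.rnn.LSTMCell(2 * n) for _ in range(num_layers)])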