How do I use an attention mechanism with MultiRNNCell and dynamic_decode?

I want to create a multi-layered dynamic RNN decoder that uses an attention mechanism. To do this, I first create the attention mechanism:
# TensorFlow 1.x contrib imports used throughout
import tensorflow as tf
from tensorflow.contrib.rnn import BasicLSTMCell, MultiRNNCell, ResidualWrapper
from tensorflow.contrib.seq2seq import (AttentionWrapper, BahdanauAttention,
                                        BasicDecoder, TrainingHelper,
                                        dynamic_decode)

attention_mechanism = BahdanauAttention(num_units=ATTENTION_UNITS,
                                        memory=encoder_outputs,
                                        normalize=True)
Then I wrap an LSTM cell with the attention mechanism using AttentionWrapper:
attention_wrapper = AttentionWrapper(cell=self._create_lstm_cell(DECODER_SIZE),
                                     attention_mechanism=attention_mechanism,
                                     output_attention=False,
                                     alignment_history=True,
                                     attention_layer_size=ATTENTION_LAYER_SIZE)
where self._create_lstm_cell is defined as follows:
@staticmethod
def _create_lstm_cell(cell_size):
    return BasicLSTMCell(cell_size)
Then I do some bookkeeping (e.g., creating my MultiRNNCell, creating the initial state, creating a TrainingHelper, etc.):
attention_zero = attention_wrapper.zero_state(batch_size=tf.flags.FLAGS.batch_size,
                                              dtype=tf.float32)

# define initial state
initial_state = attention_zero.clone(cell_state=encoder_final_states[0])

training_helper = TrainingHelper(inputs=self.y,                  # feed in ground truth
                                 sequence_length=self.y_lengths) # feed in sequence lengths

layered_cell = MultiRNNCell(
    [attention_wrapper] +
    [ResidualWrapper(self._create_lstm_cell(cell_size=DECODER_SIZE))
     for _ in range(NUMBER_OF_DECODER_LAYERS - 1)])

decoder = BasicDecoder(cell=layered_cell,
                       helper=training_helper,
                       initial_state=initial_state)

decoder_outputs, decoder_final_state, decoder_final_sequence_lengths = \
    dynamic_decode(decoder=decoder,
                   maximum_iterations=tf.flags.FLAGS.max_number_of_scans // 12,
                   impute_finished=True)
But I receive the following error: AttributeError: 'LSTMStateTuple' object has no attribute 'attention'.
What is the correct way to add an attention mechanism to a MultiRNNCell dynamic decoder?
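For reference, here is a minimal, untested sketch of my current guess: the error might come from handing MultiRNNCell a single AttentionWrapperState where it expects one state entry per layer, so the initial state would need to be a tuple.

# Untested sketch: give MultiRNNCell one state per layer -- the cloned
# AttentionWrapperState for layer 0 and zero LSTM states for the rest.
zero_states = layered_cell.zero_state(batch_size=tf.flags.FLAGS.batch_size,
                                      dtype=tf.float32)
initial_state = (attention_zero.clone(cell_state=encoder_final_states[0]),) \
                + tuple(zero_states[1:])

Alternatively, should I instead wrap the entire MultiRNNCell in the AttentionWrapper, so that the top-level decoder state is a single AttentionWrapperState?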