
Explanation of the GRU cell in TensorFlow

The following code from TensorFlow's GRUCell shows the typical operations performed to obtain the updated hidden state when the previous hidden state is provided together with the current input from the sequence:

def __call__(self, inputs, state, scope=None):
  """Gated recurrent unit (GRU) with nunits cells."""
  with vs.variable_scope(scope or type(self).__name__):  # "GRUCell"
    with vs.variable_scope("Gates"):  # Reset gate and update gate.
      # We start with bias of 1.0 to not reset and not update.
      r, u = array_ops.split(1, 2, _linear([inputs, state],
                                           2 * self._num_units, True, 1.0))
      r, u = sigmoid(r), sigmoid(u)
    with vs.variable_scope("Candidate"):
      c = self._activation(_linear([inputs, r * state],
                                   self._num_units, True))
    new_h = u * state + (1 - u) * c
  return new_h, new_h

But I don't see any weights or biases here. My understanding is that, for example, obtaining r and u requires multiplying the current input and/or the hidden state by weights and adding biases in order to get the updated hidden state.

I have written a GRU unit as follows:

def gru_unit(previous_hidden_state, x):
    # Reset and update gates look at both the current input and the
    # previous hidden state (as GRUCell does via _linear([inputs, state])).
    r = tf.sigmoid(tf.matmul(x, Wr) + tf.matmul(previous_hidden_state, Ur) + br)
    z = tf.sigmoid(tf.matmul(x, Wz) + tf.matmul(previous_hidden_state, Uz) + bz)
    # Candidate state: the reset gate scales the previous state before the matmul.
    h_ = tf.tanh(tf.matmul(x, Wx) + tf.matmul(r * previous_hidden_state, Wh) + bh)
    # The update gate interpolates between the candidate and the previous state.
    current_hidden_state = (1 - z) * h_ + z * previous_hidden_state
    return current_hidden_state
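For reference, the same step can be written as a minimal NumPy sketch (the function name `gru_step`, the matrices `Ur`/`Uz`, and all shapes are made up for illustration; this is not the TensorFlow implementation):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(h_prev, x, Wr, Ur, br, Wz, Uz, bz, Wx, Wh, bh):
    """One GRU step with every weight and bias written out explicitly."""
    r = sigmoid(x @ Wr + h_prev @ Ur + br)        # reset gate
    z = sigmoid(x @ Wz + h_prev @ Uz + bz)        # update gate
    c = np.tanh(x @ Wx + (r * h_prev) @ Wh + bh)  # candidate state
    return (1 - z) * c + z * h_prev               # gated interpolation

# Sanity check: with all parameters at zero, both gates are sigmoid(0) = 0.5
# and the candidate is tanh(0) = 0, so the new state is half the old one.
n_in, n_hid = 3, 4
h_prev = np.ones((1, n_hid))
x = np.ones((1, n_in))
Wi, Wh0, b0 = np.zeros((n_in, n_hid)), np.zeros((n_hid, n_hid)), np.zeros(n_hid)
out = gru_step(h_prev, x, Wi, Wh0, b0, Wi, Wh0, b0, Wi, Wh0, b0)
assert np.allclose(out, 0.5 * h_prev)
```

The zero-parameter check is a quick way to confirm the gating arithmetic before worrying about learned weights.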

Here I explicitly use weights such as Wx, Wr, Wz, Wh and biases such as br, bh, bz to obtain the updated hidden state. These weights and biases are what is learned/tuned during training.

How can I achieve the same result using TensorFlow's built-in GRUCell?


They concatenated the 'r' and 'z' gates to do it all in one go, which saves computation. –
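That comment can be checked numerically: one matmul against the horizontally concatenated weight matrices, followed by a split, gives the same gates as two separate matmuls (a NumPy sketch; all shapes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_units = 2, 3, 4

x = rng.standard_normal((batch, n_in))
Wr = rng.standard_normal((n_in, n_units))
Wz = rng.standard_normal((n_in, n_units))

# One fused matmul against the concatenated matrix [Wr | Wz] ...
fused = x @ np.concatenate([Wr, Wz], axis=1)   # shape (batch, 2 * n_units)
r_fused, z_fused = np.split(fused, 2, axis=1)

# ... gives the same result as two separate matmuls.
assert np.allclose(r_fused, x @ Wr)
assert np.allclose(z_fused, x @ Wz)
```

This is why GRUCell asks `_linear` for `2 * self._num_units` outputs and then splits the result into r and u.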

Answer


They are there; you just don't see them in that code because the `_linear` function creates and adds the weights and biases.

r, u = array_ops.split(1, 2, _linear([inputs, state],
                                     2 * self._num_units, True, 1.0))

...

def _linear(args, output_size, bias, bias_start=0.0, scope=None):
  """Linear map: sum_i(args[i] * W[i]), where W[i] is a variable.

  Args:
    args: a 2D Tensor or a list of 2D, batch x n, Tensors.
    output_size: int, second dimension of W[i].
    bias: boolean, whether to add a bias term or not.
    bias_start: starting value to initialize the bias; 0 by default.
    scope: VariableScope for the created subgraph; defaults to "Linear".

  Returns:
    A 2D Tensor with shape [batch x output_size] equal to
    sum_i(args[i] * W[i]), where W[i]s are newly created matrices.

  Raises:
    ValueError: if some of the arguments has unspecified or wrong shape.
  """
  if args is None or (nest.is_sequence(args) and not args):
    raise ValueError("`args` must be specified")
  if not nest.is_sequence(args):
    args = [args]

  # Calculate the total size of arguments on dimension 1.
  total_arg_size = 0
  shapes = [a.get_shape().as_list() for a in args]
  for shape in shapes:
    if len(shape) != 2:
      raise ValueError("Linear is expecting 2D arguments: %s" % str(shapes))
    if not shape[1]:
      raise ValueError("Linear expects shape[1] of arguments: %s" % str(shapes))
    else:
      total_arg_size += shape[1]

  # Now the computation.
  with vs.variable_scope(scope or "Linear"):
    matrix = vs.get_variable("Matrix", [total_arg_size, output_size])
    if len(args) == 1:
      res = math_ops.matmul(args[0], matrix)
    else:
      res = math_ops.matmul(array_ops.concat(1, args), matrix)
    if not bias:
      return res
    bias_term = vs.get_variable(
        "Bias", [output_size],
        initializer=init_ops.constant_initializer(bias_start))
  return res + bias_term
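Note that `_linear` concatenates its arguments along dimension 1 and multiplies them by a single "Matrix" variable of shape [total_arg_size, output_size]. Algebraically, that is the same as keeping one weight matrix per argument, as this NumPy sketch shows (shapes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
batch, n_in, n_units, n_out = 2, 3, 4, 5

x = rng.standard_normal((batch, n_in))       # plays the role of "inputs"
h = rng.standard_normal((batch, n_units))    # plays the role of "state"
Wx = rng.standard_normal((n_in, n_out))      # per-argument weight for x
Wh = rng.standard_normal((n_units, n_out))   # per-argument weight for h

# _linear stacks the per-argument matrices into one "Matrix" variable ...
matrix = np.vstack([Wx, Wh])                 # shape [n_in + n_units, n_out]

# ... so concat-then-matmul equals the sum of the separate matmuls.
assert np.allclose(np.concatenate([x, h], axis=1) @ matrix,
                   x @ Wx + h @ Wh)
```

So the single `Matrix` created by `get_variable` contains exactly the separate weight matrices you would write by hand, stacked together.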

So it seems the weights and biases are created on demand and shared across time steps via `get_variable`, which returns the same variable when called within the same variable scope. What is not clear to me is how the weight matrix is initialized. –


I think it is initialized with the default initializer of the current variable scope. – chasep255


I guess this also answers my other TensorFlow [question](http://stackoverflow.com/questions/39302344/tensorflow-rnn-input-size). –