2017-06-10 37 views

Answer

They do the same thing (at least currently). The only difference is that tf.contrib.layers.l2_regularizer multiplies the result of tf.nn.l2_loss by scale.

Look at the implementation of tf.contrib.layers.l2_regularizer [https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/contrib/layers/python/layers/regularizers.py]:

def l2_regularizer(scale, scope=None):
  """Returns a function that can be used to apply L2 regularization to weights.

  Small values of L2 can help prevent overfitting the training data.

  Args:
    scale: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
    scope: An optional scope name.

  Returns:
    A function with signature `l2(weights)` that applies L2 regularization.

  Raises:
    ValueError: If scale is negative or if scale is not a float.
  """
  if isinstance(scale, numbers.Integral):
    raise ValueError('scale cannot be an integer: %s' % (scale,))
  if isinstance(scale, numbers.Real):
    if scale < 0.:
      raise ValueError('Setting a scale less than 0 on a regularizer: %g.' %
                       scale)
    if scale == 0.:
      logging.info('Scale of 0 disables regularizer.')
      return lambda _: None

  def l2(weights):
    """Applies l2 regularization to weights."""
    with ops.name_scope(scope, 'l2_regularizer', [weights]) as name:
      my_scale = ops.convert_to_tensor(scale,
                                       dtype=weights.dtype.base_dtype,
                                       name='scale')
      return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)

  return l2

The line you are interested in is:

return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name) 

So, in practice, tf.contrib.layers.l2_regularizer calls tf.nn.l2_loss internally and simply multiplies the result by the scale argument.
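To make the relationship concrete, here is a minimal NumPy sketch of the two operations. It assumes only the documented behavior that tf.nn.l2_loss(t) computes sum(t ** 2) / 2 (note the factor of 1/2); the function names mirror the TensorFlow ones but are plain Python stand-ins:

```python
import numpy as np

def l2_loss(weights):
    # Mirrors tf.nn.l2_loss: sum(w ** 2) / 2 (note the built-in factor of 1/2).
    return np.sum(np.square(weights)) / 2.0

def l2_regularizer(scale):
    # Mirrors tf.contrib.layers.l2_regularizer: returns a function that
    # multiplies the l2_loss of its argument by `scale`.
    def l2(weights):
        return scale * l2_loss(weights)
    return l2

w = np.array([3.0, 4.0])
print(l2_loss(w))              # 12.5  (== (9 + 16) / 2)
print(l2_regularizer(0.1)(w))  # 1.25  (== 0.1 * 12.5)
```

This also shows why, when porting code between the two APIs, you must supply the scale yourself if you use tf.nn.l2_loss directly: the loss term itself is identical.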
