It seems that L2 regularization can be implemented in TensorFlow in two ways:

(i) using tf.nn.l2_loss, or
(ii) using tf.contrib.layers.l2_regularizer.

Do both approaches serve the same purpose of adding L2 regularization to a model? If they are different, how do they differ?
They do the same thing (at least now). The only difference is that tf.contrib.layers.l2_regularizer multiplies the result of tf.nn.l2_loss by scale.
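For example, here is a minimal sketch (assuming TensorFlow 1.x, where tf.contrib is still available; the weights and scale values are illustrative) verifying that the two approaches produce the same value:

```python
import tensorflow as tf

# Illustrative weights and regularization strength (hypothetical values).
weights = tf.constant([[1.0, -2.0], [3.0, 0.5]])
scale = 0.01

# (i) tf.nn.l2_loss computes sum(weights ** 2) / 2, with no scaling applied,
#     so we multiply by the regularization strength ourselves.
manual = scale * tf.nn.l2_loss(weights)

# (ii) tf.contrib.layers.l2_regularizer folds the same scaling in for us.
regularized = tf.contrib.layers.l2_regularizer(scale)(weights)

with tf.Session() as sess:
    print(sess.run([manual, regularized]))  # both print the same value
```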
Look at the implementation of tf.contrib.layers.l2_regularizer (https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/contrib/layers/python/layers/regularizers.py):
```python
# Imports from the enclosing regularizers.py module (added here for completeness):
import numbers

from tensorflow.python.framework import ops
from tensorflow.python.ops import nn
from tensorflow.python.ops import standard_ops
from tensorflow.python.platform import tf_logging as logging


def l2_regularizer(scale, scope=None):
  """Returns a function that can be used to apply L2 regularization to weights.

  Small values of L2 can help prevent overfitting the training data.

  Args:
    scale: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
    scope: An optional scope name.

  Returns:
    A function with signature `l2(weights)` that applies L2 regularization.

  Raises:
    ValueError: If scale is negative or if scale is not a float.
  """
  if isinstance(scale, numbers.Integral):
    raise ValueError('scale cannot be an integer: %s' % (scale,))
  if isinstance(scale, numbers.Real):
    if scale < 0.:
      raise ValueError('Setting a scale less than 0 on a regularizer: %g.' %
                       scale)
    if scale == 0.:
      logging.info('Scale of 0 disables regularizer.')
      return lambda _: None

  def l2(weights):
    """Applies l2 regularization to weights."""
    with ops.name_scope(scope, 'l2_regularizer', [weights]) as name:
      my_scale = ops.convert_to_tensor(scale,
                                       dtype=weights.dtype.base_dtype,
                                       name='scale')
      return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)

  return l2
```
The line you are interested in is:

```python
return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)
```

So in practice, tf.contrib.layers.l2_regularizer calls tf.nn.l2_loss internally and simply multiplies the result by the scale argument.
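As a side note, the regularizer form is what you would typically attach to variables, letting TensorFlow collect the penalty terms for you. Below is a minimal sketch of that common TF 1.x pattern (the variable names and the placeholder data loss are illustrative, not from the original answer):

```python
import tensorflow as tf

# Hypothetical example: attach the regularizer when creating a variable.
regularizer = tf.contrib.layers.l2_regularizer(scale=0.01)
w = tf.get_variable('w', shape=[10, 5], regularizer=regularizer)

# The penalty (scale * tf.nn.l2_loss(w)) is added to the
# REGULARIZATION_LOSSES collection automatically.
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)

# Illustrative data loss; in a real model this would come from your network.
data_loss = tf.constant(1.0)
total_loss = data_loss + tf.add_n(reg_losses)
```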