In the MNIST beginner tutorial, there is

accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

What is the difference between np.mean in NumPy and tf.reduce_mean in TensorFlow? tf.cast basically changes the type of a tensor, but what is the difference between tf.reduce_mean and np.mean?

Here is the doc for tf.reduce_mean:
reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)
input_tensor: The tensor to reduce. Should have numeric type.
reduction_indices: The dimensions to reduce. If `None` (the default),
reduces all dimensions.
# 'x' is [[1., 1.],
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]
For a 1D vector, it looks like np.mean == tf.reduce_mean, but I don't understand what is happening in tf.reduce_mean(x, 1) ==> [1., 2.]. tf.reduce_mean(x, 0) ==> [1.5, 1.5] makes some sense, since the mean of [1, 2] and [1, 2] is [1.5, 1.5], but what is going on with tf.reduce_mean(x, 1)?
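A minimal NumPy sketch of the same reductions may help clarify the question: the axis argument (TensorFlow calls it reduction_indices) names the dimension that gets collapsed, so axis=0 averages down each column while axis=1 averages across each row.

```python
import numpy as np

# Same matrix as in the tf.reduce_mean doc example.
x = np.array([[1., 1.],
              [2., 2.]])

# No axis: average over every element -> (1 + 1 + 2 + 2) / 4 = 1.5
print(np.mean(x))          # 1.5

# axis=0: collapse the rows, giving one mean per column.
print(np.mean(x, axis=0))  # [1.5 1.5]

# axis=1: collapse the columns, giving one mean per row:
# row 0 is [1., 1.] -> 1.0, row 1 is [2., 2.] -> 2.0
print(np.mean(x, axis=1))  # [1. 2.]
```

On float inputs, tf.reduce_mean with the same reduction_indices produces the same values as np.mean with the corresponding axis.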
They produce different results on integer values (http://stackoverflow.com/a/43713062/1090562) because of Python division –
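The integer-valued difference mentioned in the comment can be sketched as follows. np.mean always promotes integer input to a floating-point result, whereas tf.reduce_mean keeps the input dtype, so on an integer tensor the mean is truncated. The TensorFlow part is shown only as comments here, since this sketch assumes NumPy alone:

```python
import numpy as np

a = np.array([1, 2], dtype=np.int64)

# NumPy promotes integers to float, so the mean of [1, 2] is 1.5.
m = np.mean(a)
print(m, m.dtype)  # 1.5 float64

# TensorFlow, by contrast, keeps the input dtype (hedged sketch,
# not executed here):
#   import tensorflow as tf
#   tf.reduce_mean(tf.constant([1, 2]))  # integer result: 1, not 1.5
```

Casting to float first, as the tutorial's tf.cast(correct_prediction, "float") does, avoids this truncation.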