I have a small problem with using batch normalization when restoring a model in TensorFlow. How should batch norm be handled when a model is restored?
Below is my batch normalization code, adapted from here:
def _batch_normalization(self, input_tensor, is_training, batch_norm_epsilon, decay=0.999):
    """Batch normalization for dense nets.

    Args:
        input_tensor: `tensor`, the input tensor to be normalized.
        is_training: `bool`, if true then update the mean/variance using a moving average,
            else use the stored mean/variance.
        batch_norm_epsilon: `float`, epsilon for batch normalization.
        decay: `float`, decay for the moving-average update, default is 0.999.

    Returns:
        The normalized tensor.
    """
    # Batch normalization operates over the channels dimension.
    input_shape_channels = int(input_tensor.get_shape()[-1])

    # scale and beta are used in the formula: scale * (x - E(x)) / sqrt(var(x)) + beta
    scale = tf.Variable(tf.ones([input_shape_channels]))
    beta = tf.Variable(tf.zeros([input_shape_channels]))

    # global mean and var are the moving-averaged mean and variance.
    global_mean = tf.Variable(tf.zeros([input_shape_channels]), trainable=False)
    global_var = tf.Variable(tf.ones([input_shape_channels]), trainable=False)

    # If training, update the mean and var; otherwise use the trained mean/var directly.
    if is_training:
        # Compute batch statistics over all axes except the channel axis.
        axis = list(range(len(input_tensor.get_shape()) - 1))
        batch_mean, batch_var = tf.nn.moments(input_tensor, axes=axis)
        # Update the moving averages.
        train_mean = tf.assign(global_mean, global_mean * decay + batch_mean * (1 - decay))
        train_var = tf.assign(global_var, global_var * decay + batch_var * (1 - decay))
        with tf.control_dependencies([train_mean, train_var]):
            return tf.nn.batch_normalization(input_tensor, batch_mean, batch_var,
                                             beta, scale, batch_norm_epsilon)
    else:
        return tf.nn.batch_normalization(input_tensor, global_mean, global_var,
                                         beta, scale, batch_norm_epsilon)
I trained the model and saved it with tf.train.Saver().
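For reference, a minimal sketch of the save step as I understand it (session setup and the training loop are elided; the checkpoint path matches the one restored below):

    saver = tf.train.Saver()
    # ... training loop runs here ...
    saver.save(sess, './models/dense_nets_model/dense_nets.ckpt')

Below is the test code: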
def inference(self, images_for_predict):
    """Load the pre-trained model and do the inference.

    Args:
        images_for_predict: `tensor`, images to predict using the pre-trained model.

    Returns:
        The predicted labels.
    """
    tf.reset_default_graph()
    images, labels, _, _, prediction, accuracy, saver = self._build_graph(1, False)
    predictions = []
    correct = 0
    # Build the argmax op once, outside the loop, so new nodes are not added each iteration.
    predict_op = tf.argmax(prediction, 1)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # saver = tf.train.import_meta_graph('./models/dense_nets_model/dense_nets.ckpt.meta')
        # saver.restore(sess, tf.train.latest_checkpoint('./models/dense_nets_model/'))
        saver.restore(sess, './models/dense_nets_model/dense_nets.ckpt')
        for i in range(100):
            pred, corr = sess.run([predict_op, accuracy],
                                  feed_dict={
                                      images: [images_for_predict.images[i]],
                                      labels: [images_for_predict.labels[i]]})
            correct += corr
            predictions.append(pred[0])
    print("PREDICTIONS:", predictions)
    print("ACCURACY:", correct / 100)
But the prediction results are always bad, like this:
('PREDICTIONS:', [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
('ACCURACY:', 0.080000000000000002)
Some hints: images_for_predict = mnist.test, and the self._build_graph method takes two parameters: batch_size and is_training.
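For context, mnist.test here is presumably the standard TF 1.x tutorial dataset; a minimal sketch of loading it (the data directory is arbitrary):

    from tensorflow.examples.tutorials.mnist import input_data

    # mnist.test.images and mnist.test.labels match the fields used in inference() above.
    mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
    images_for_predict = mnist.test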
Can anyone help me?
Thanks! But if my test batch size is 1, how do I feed it into a model trained with a batch size greater than 1? – Yang
Hi gdelab, I changed my 'batch_norm' to 'tf.layers.batch_normalization(input_tensor, training=is_training)', but it doesn't seem to work. I've updated the GitHub post; can you help me? – Yang
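For reference, a minimal sketch of the usual tf.layers.batch_normalization pattern in TF 1.x; names like input_tensor, loss, and the choice of optimizer are assumptions standing in for whatever the real graph uses. The key points are that training is fed as a bool placeholder so one graph serves both phases, and that the moving-average update ops in UPDATE_OPS must be forced to run with the train step:

    import tensorflow as tf

    # `training` is a placeholder so the same graph serves training and inference.
    training = tf.placeholder(tf.bool, name='is_training')
    net = tf.layers.batch_normalization(input_tensor, training=training)

    # batch_normalization registers its moving mean/variance updates in UPDATE_OPS;
    # wrap the train step so those updates actually run during training.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

    # At inference time, feed training=False so the stored statistics are used, e.g.:
    # sess.run(prediction, feed_dict={images: batch, training: False})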