
tf.layers.batch_normalization large test error

I am trying to use batch normalization. I tried using tf.layers.batch_normalization on a simple convolutional net for MNIST.

I get very high accuracy on the training steps (> 98%), but the test accuracy is very low (< 50%). I tried changing the momentum value (I tried 0.8, 0.9, 0.99, and 0.999) and playing with the batch size, but it always behaves basically the same way. I train it for 20k iterations.

My code:

# Input placeholders 
x = tf.placeholder(tf.float32, [None, 784], name='x-input') 
y_ = tf.placeholder(tf.float32, [None, 10], name='y-input') 
is_training = tf.placeholder(tf.bool) 

# input layer 
input_layer = tf.reshape(x, [-1, 28, 28, 1]) 
with tf.name_scope('conv1'): 
    #Convolution #1 ([5,5] : [28x28x1]->[28x28x6]) 
    conv1 = tf.layers.conv2d(
     inputs=input_layer, 
     filters=6, 
     kernel_size=[5, 5], 
     padding="same", 
     activation=None 
    ) 

    #Batch Norm #1 
    conv1_bn = tf.layers.batch_normalization(
     inputs=conv1, 
     axis=-1, 
     momentum=0.9, 
     epsilon=0.001, 
     center=True, 
     scale=True, 
     training = is_training, 
     name='conv1_bn' 
    ) 

    #apply relu 
    conv1_bn_relu = tf.nn.relu(conv1_bn) 
    #apply pool ([2,2] : [28x28x6]->[14X14X6]) 
    maxpool1=tf.layers.max_pooling2d(
     inputs=conv1_bn_relu, 
     pool_size=[2,2], 
     strides=2, 
     padding="valid" 
    ) 

with tf.name_scope('conv2'): 
    #Convolution #2 ([5x5] : [14x14x6]->[14x14x16]) 
    conv2 = tf.layers.conv2d(
     inputs=maxpool1, 
     filters=16, 
     kernel_size=[5, 5], 
     padding="same", 
     activation=None 
    ) 

    #Batch Norm #2 
    conv2_bn = tf.layers.batch_normalization(
     inputs=conv2, 
     axis=-1, 
     momentum=0.999, 
     epsilon=0.001, 
     center=True, 
     scale=True, 
     training = is_training 
    ) 

    #apply relu 
    conv2_bn_relu = tf.nn.relu(conv2_bn) 
    #maxpool2 ([2,2] : [14x14x16]->[7x7x16]) 
    maxpool2=tf.layers.max_pooling2d(
     inputs=conv2_bn_relu, 
     pool_size=[2,2], 
     strides=2, 
     padding="valid" 
    ) 

#fully connected 1 [7*7*16 = 784 -> 120] 
maxpool2_flat=tf.reshape(maxpool2,[-1,7*7*16]) 
fc1 = tf.layers.dense(
    inputs=maxpool2_flat, 
    units=120, 
    activation=None 
) 

#Batch Norm #3 
fc1_bn = tf.layers.batch_normalization(
    inputs=fc1, 
    axis=-1, 
    momentum=0.999, 
    epsilon=0.001, 
    center=True, 
    scale=True, 
    training = is_training 
) 
#apply relu 

fc1_bn_relu = tf.nn.relu(fc1_bn) 

#fully connected 2 [120-> 84] 
fc2 = tf.layers.dense(
    inputs=fc1_bn_relu, 
    units=84, 
    activation=None 
) 

#apply relu 
fc2_bn_relu = tf.nn.relu(fc2) 

#fully connected 3 [84->10]. Output layer with softmax 
y = tf.layers.dense(
    inputs=fc2_bn_relu, 
    units=10, 
    activation=None 
) 

#loss 
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)) 
tf.summary.scalar('cross_entropy', cross_entropy) 

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) 
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 
tf.summary.scalar('accuracy',accuracy) 

#merge summaries and init train writer 
sess = tf.Session() 
merged = tf.summary.merge_all() 
train_writer = tf.summary.FileWriter(log_dir + '/train' ,sess.graph) 
test_writer = tf.summary.FileWriter(log_dir + '/test') 
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) 
init = tf.global_variables_initializer() 
sess.run(init) 

with sess.as_default(): 
    def get_variables_values(): 
     variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) 
     values = {} 
     for variable in variables: 
      values[variable.name[:-2]] = sess.run(variable, feed_dict={ 
       x:batch[0], y_:batch[1], is_training:True 
       }) 
     return values 


    for i in range(t_iter): 
     batch = mnist.train.next_batch(batch_size) 
     if i%100 == 0: #test-set summary 
      print('####################################') 
      values = get_variables_values() 
      print('moving variance is:') 
      print(values["conv1_bn/moving_variance"]) 
      print('moving mean is:') 
      print(values["conv1_bn/moving_mean"]) 
      print('gamma is:') 
      print(values["conv1_bn/gamma/Adam"]) 
      print('beta is:') 
      print(values["conv1_bn/beta/Adam"]) 
      summary, acc = sess.run([merged, accuracy], feed_dict={ 
       x: mnist.test.images, y_: mnist.test.labels, is_training: False 
      }) 

     else: 
      summary, _ = sess.run([merged,train_step], feed_dict={ 
       x:batch[0], y_:batch[1], is_training:True 
      }) 
      if i%10 == 0: 
       train_writer.add_summary(summary,i) 

I think the problem is that the moving_mean/var is not being updated. I printed the moving_mean/var during the run and I get:

moving variance is:
[ 1.  1.  1.  1.  1.  1.]
moving mean is:
[ 0.  0.  0.  0.  0.  0.]
gamma is:
[-0.00055969  0.00164391  0.00163301 -0.00206227 -0.00011434 -0.00070161]
beta is:
[-0.00232835 -0.00040769  0.00114277 -0.0025414  -0.00049697  0.00221556]

Does anyone have any idea what I am doing wrong?


Hi MrG, could you show me your test code? I have the same problem as you, and with tf.layers.batch_normalization the predictions always come out constant. – Yang

Answer


The operations which tf.layers.batch_normalization adds to update the mean and variance are not automatically added as dependencies of the train operation, so if you don't do anything extra they never get run. (Unfortunately, the documentation doesn't currently mention this; I'm opening an issue about it.)

Luckily, the update operations are easy to get hold of, since they are added to the tf.GraphKeys.UPDATE_OPS collection. You can then either run the extra operations manually:

extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) 
sess.run([train_op, extra_update_ops], ...) 
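As a minimal sketch of how this first option might look in the training loop from the question (the names merged, train_step, x, y_, is_training, and batch all come from the code above; nothing else changes):

extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) 

# inside the training loop: run the batch-norm update ops alongside the train step 
summary, _, _ = sess.run([merged, train_step, extra_update_ops], feed_dict={ 
    x: batch[0], y_: batch[1], is_training: True 
}) 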

or add them as dependencies of your train operation and then just run your train op as normal:

extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) 
with tf.control_dependencies(extra_update_ops): 
    train_op = optimizer.minimize(loss) 
... 
sess.run([train_op], ...) 
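Applied to the code in the question, a sketch of this second option (only the line that builds train_step changes) might look like:

# collect the moving mean/variance update ops before building the train step, 
# then make the optimizer step depend on them 
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) 
with tf.control_dependencies(extra_update_ops): 
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) 

With either variant, the moving_mean and moving_variance printed in the question should start moving away from their initial 0 and 1 values during training.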

Thanks a lot! It works now. – MrG


Thanks for the help. I had seen posts describing a similar approach back when batch norm was still in contrib; I made the mistake of assuming it was "fixed" when it migrated to tf.layers. Is there any reason why updating the mean and variance is not the default behavior? – Prophecies


I agree, it is a bit inconvenient. I suspect the reasoning is similar to that for summary ops: the flow of data through the graph to the loss does not depend on these operations, so something has to invoke them separately. –