2016-04-12

Adding multiple layers in TensorFlow causes the loss function to become NaN

I'm writing a neural network classifier for the notMNIST dataset in TensorFlow/Python. I've implemented L2 regularization and dropout on the hidden layers. It works fine as long as there is only one hidden layer, but when I add more layers (to improve accuracy) the loss function increases rapidly at each step and becomes NaN by step 5. I tried temporarily disabling both dropout and L2 regularization, but I get the same behavior whenever there are 2+ layers. I even rewrote my code (refactoring it to be more flexible), with the same result. The number and size of the layers are controlled by hidden_layer_spec. What am I missing?

# works for np.array([1024]) with about 96.1% accuracy
hidden_layer_spec = np.array([1024, 300])
num_hidden_layers = hidden_layer_spec.shape[0]
batch_size = 256
beta = 0.0005

epochs = 100
stepsPerEpoch = float(train_dataset.shape[0]) / batch_size
num_steps = int(math.ceil(float(epochs) * stepsPerEpoch))

l2Graph = tf.Graph()
with l2Graph.as_default():
  #with tf.device('/cpu:0'):
  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  weights = []
  biases = []
  for hi in range(0, num_hidden_layers + 1):
    width = image_size * image_size if hi == 0 else hidden_layer_spec[hi - 1]
    height = num_labels if hi == num_hidden_layers else hidden_layer_spec[hi]
    weights.append(tf.Variable(tf.truncated_normal([width, height]), name="w" + str(hi + 1)))
    biases.append(tf.Variable(tf.zeros([height]), name="b" + str(hi + 1)))
    print(str(width) + 'x' + str(height))

  def logits(input, addDropoutLayer=False):
    previous_layer = input
    for hi in range(0, hidden_layer_spec.shape[0]):
      previous_layer = tf.nn.relu(tf.matmul(previous_layer, weights[hi]) + biases[hi])
      if addDropoutLayer:
        previous_layer = tf.nn.dropout(previous_layer, 0.5)
    return tf.matmul(previous_layer, weights[num_hidden_layers]) + biases[num_hidden_layers]

  # Training computation.
  train_logits = logits(tf_train_dataset, True)

  # Sum the L2 penalty over every weight matrix.
  l2 = tf.nn.l2_loss(weights[0])
  for hi in range(1, len(weights)):
    l2 = l2 + tf.nn.l2_loss(weights[hi])
  loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(train_logits, tf_train_labels)) + beta * l2

  # Optimizer.
  global_step = tf.Variable(0)  # count the number of steps taken
  learning_rate = tf.train.exponential_decay(0.5, global_step, int(stepsPerEpoch) * 2, 0.96, staircase=True)
  optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(train_logits)
  valid_prediction = tf.nn.softmax(logits(tf_valid_dataset))
  test_prediction = tf.nn.softmax(logits(tf_test_dataset))
  saver = tf.train.Saver()

with tf.Session(graph=l2Graph) as session:
  tf.initialize_all_variables().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
    _, l, predictions = session.run(
        [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Learning rate: %f" % learning_rate.eval())
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
          valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
  save_path = saver.save(session, "l2_degrade.ckpt")
  print("Model saved to " + str(save_path))

Answers


It turned out this was not so much a coding problem as a deep learning problem. The extra layers made the gradients too unstable, which caused the loss function to quickly diverge to NaN. The best way to fix this is to use Xavier initialization; otherwise, the variance of the initial weights tends to be too high, leading to instability. Lowering the learning rate can also help.
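For illustration, here is a minimal NumPy sketch of Xavier (Glorot) initialization: weights are drawn with variance 2 / (fan_in + fan_out) instead of the unit variance that `tf.truncated_normal` uses by default. The layer sizes below are taken from the question's code; the helper name is ours.

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng=np.random.default_rng(0)):
    # Xavier/Glorot: draw weights with variance 2 / (fan_in + fan_out),
    # keeping activation variance roughly constant from layer to layer.
    stddev = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, stddev, size=(fan_in, fan_out))

# e.g. the first hidden layer: 28*28 inputs -> 1024 units
w1 = xavier_init(28 * 28, 1024)
print(w1.std())  # close to sqrt(2 / (784 + 1024)) ≈ 0.033
```

Compare this with the question's `tf.truncated_normal([width, height])`, whose default stddev of 1.0 is roughly 30x larger for a layer this wide, which is exactly the excessive initial variance the answer describes.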


I realize this was a while ago, but you might also want to look at some other regularization methods, in particular [batch normalization](https://arxiv.org/pdf/1502.03167.pdf). If you have it working then great, but BN makes the network less sensitive to the initial weight distribution, among other things. – Engineero
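The core of batch normalization can be sketched in a few lines of NumPy: each feature is normalized to zero mean and unit variance over the minibatch, then scaled and shifted. This is only the forward pass; a real BN layer also learns `gamma`/`beta` and tracks running statistics for inference.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature across the batch dimension,
    # then apply a learnable scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# A batch of pre-activations with an arbitrary mean and scale...
batch = np.random.default_rng(0).normal(5.0, 3.0, size=(256, 1024))
# ...comes out with per-feature mean ≈ 0 and variance ≈ 1.
out = batch_norm(batch)
```

Because each layer's inputs are re-centered and re-scaled every step, a poor initial weight scale is far less likely to snowball into NaN losses.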


For `truncated_normal`, try setting `stddev = sqrt(2/N)`, where `N` is the number of rows in the weight matrix. Or set `stddev` to a [lower value](https://discussions.udacity.com/t/problem-3-3-dropout-does-not-improve-test-accuarcy/46286/13). There's an [example](http://www.ritchieng.com/machine-learning/deep-learning/tensorflow/regularization/) here, although it has some bugs, such as including dropout in the evaluation step. – orodbhen


Actually, here is the original [paper](https://arxiv.org/pdf/1502.01852v1.pdf) that `sqrt(2/N)` comes from. – orodbhen
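A quick sketch of that `sqrt(2/N)` rule (He initialization, designed for ReLU layers), using the 784-input first layer from the question's code; the helper name is ours.

```python
import numpy as np

def he_stddev(fan_in):
    # He et al.: stddev = sqrt(2 / N), where N is the layer's fan-in
    # (the number of rows in the weight matrix).
    return np.sqrt(2.0 / fan_in)

print(he_stddev(28 * 28))  # ≈ 0.0505 for the 784-input first layer
```

In the question's code this would be passed as `tf.truncated_normal([width, height], stddev=math.sqrt(2.0 / width))`.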


I had the same problem; reducing the batch size and the learning rate worked for me.


Yes. My learning rate was fixed at 0.5, and I had to lower it to 0.06 to get it working. – scipilot
