
I'm taking the Udacity deep learning course, and one of its assignments says: "Demonstrate an extreme case of overfitting. Restrict your training data to just a few batches."

My questions are:

1) Why does reducing num_steps and num_batches have anything to do with overfitting? We aren't adding any variables or increasing the size of W.

In the code below, num_steps used to be 3001 and num_batches 128; the solution reduces them to 101 and 3, respectively.

num_steps = 101
num_batches = 3

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        # offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        offset = step % num_batches
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset: batch_data,
                     tf_train_labels: batch_labels,
                     beta_regul: 1e-3}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if step % 2 == 0:
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(
                valid_prediction.eval(), valid_labels))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))

This code is an excerpt from the solution: https://github.com/rndbrtrnd/udacity-deep-learning/blob/master/3_regularization.ipynb
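To see why this "restricts the training data to a few batches": with offset = step % num_batches, the slices only ever touch the first handful of rows of train_dataset, no matter how long training runs. A minimal sketch, assuming batch_size = 128 as in the course code:

batch_size = 128
num_batches = 3

rows_seen = set()
for step in range(3001):
    offset = step % num_batches            # only ever 0, 1 or 2
    rows_seen.update(range(offset, offset + batch_size))
print(len(rows_seen))  # 130 rows in total, however many steps run

# The commented-out line swept the whole training set instead:
# offset = (step * batch_size) % (train_labels.shape[0] - batch_size)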

2) Can someone explain the concept of "offset" in gradient descent? Why do we have to use it?

3) I've experimented with num_steps and found that the accuracy improves if I increase num_steps. Why? And how should I interpret num_steps together with the learning rate?

Answer


1) When training a neural network, it is very typical to set an early-stopping condition to prevent overfitting. You are not adding new variables, but with an early-stopping condition you cannot use the existing ones as intensively, which amounts to more or less the same thing.
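As an illustration, here is a minimal sketch of validation-based early stopping; train_one_step and validation_accuracy are hypothetical stand-ins for whatever framework calls you actually use:

def train_with_early_stopping(train_one_step, validation_accuracy,
                              max_steps=3001, patience=100):
    best_acc = 0.0
    steps_without_improvement = 0
    for step in range(max_steps):
        train_one_step()
        acc = validation_accuracy()
        if acc > best_acc:
            best_acc = acc
            steps_without_improvement = 0
        else:
            steps_without_improvement += 1
        if steps_without_improvement >= patience:
            # Validation accuracy has stalled: stop before the weights
            # start fitting noise in the training set.
            break
    return best_acc

# hypothetical usage with dummy stubs, just to show the control flow:
import random
print(train_with_early_stopping(lambda: None, lambda: random.random()))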

2) In this case the "offset" is the remainder of a division (the "rest"): step % num_batches cycles through the batch positions and picks where in the training data the next minibatch starts.
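A quick way to convince yourself is to print the offsets the loop actually produces; a minimal sketch:

num_batches = 3
# The modulo operator returns the remainder of the division, so the
# offset simply cycles 0, 1, 2, 0, 1, 2, ... as step grows.
for step in range(7):
    print(step, step % num_batches)
# 0 0
# 1 1
# 2 2
# 3 0
# 4 1
# 5 2
# 6 0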

3) Think of the "learning rate" as "speed" and num_steps as "time". If you run for longer, you may get further... but if you drive faster, you might crash and not get much further at all...
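A toy illustration of that analogy, using plain gradient descent on f(w) = w**2 (gradient 2*w) rather than the course model:

def run(learning_rate, num_steps):
    w = 1.0
    for _ in range(num_steps):
        w -= learning_rate * 2 * w   # one gradient-descent step toward w = 0
    return w

print(run(0.01, 10))    # slow speed, little time: still far from the minimum
print(run(0.01, 500))   # slow speed, lots of time: very close to 0
print(run(1.1, 10))     # too fast: each step overshoots and w blows up

With a small learning rate, more num_steps reliably gets you closer to the minimum, which matches the observation in question 3; with a learning rate that is too large, extra steps only make things worse.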