2017-08-27

Stacked sigmoids: why does training the second layer change the first layer?

I am training a NN with one sigmoid layer stacked on top of another. I have labels associated with each layer, and I want training to alternate between minimizing the loss of the first layer and minimizing the loss of the second layer. I expected the results I get at the first layer not to change regardless of whether or not I also train the second layer. However, they differ significantly. What am I missing?

Here is the code:

dim = Xtrain.shape[1]
output_dim = Ytrain.shape[1]
categories_dim = Ctrain.shape[1]
features = C.input_variable(dim, np.float32)
label = C.input_variable(output_dim, np.float32)
categories = C.input_variable(categories_dim, np.float32)
# first (prediction) layer parameters
b = C.parameter(shape=(output_dim))
w = C.parameter(shape=(dim, output_dim))
# second (adversarial) layer parameters
adv_w = C.parameter(shape=(output_dim, categories_dim))
adv_b = C.parameter(shape=(categories_dim))
pred_parameters = (w, b)
adv_parameters = (adv_w, adv_b)
z = C.tanh(C.times(features, w) + b)
adverse = C.tanh(C.times(z, adv_w) + adv_b)
pred_loss = C.cross_entropy_with_softmax(z, label)
pred_error = C.classification_error(z, label)

adv_loss = C.cross_entropy_with_softmax(adverse, categories)
adv_error = C.classification_error(adverse, categories)

pred_learning_rate = 0.5 
pred_lr_schedule = C.learning_rate_schedule(pred_learning_rate, C.UnitType.minibatch) 
pred_learner = C.adam(pred_parameters, pred_lr_schedule, C.momentum_as_time_constant_schedule(0.9)) 
pred_trainer = C.Trainer(adverse, (pred_loss, pred_error), [pred_learner]) 

adv_learning_rate = 0.5 
adv_lr_schedule = C.learning_rate_schedule(adv_learning_rate, C.UnitType.minibatch) 
adv_learner = C.adam(adverse.parameters, adv_lr_schedule, C.momentum_as_time_constant_schedule(0.9)) 
adv_trainer = C.Trainer(adverse, (adv_loss, adv_error), [adv_learner]) 

minibatch_size = 50
num_of_epochs = 40

# Run the trainer and perform model training
training_progress_output_freq = 50

def permute(x, y, c):
    rr = np.arange(x.shape[0])
    np.random.shuffle(rr)
    return (x[rr, :], y[rr, :], c[rr, :])

for e in range(num_of_epochs):
    (x, y, c) = permute(Xtrain, Ytrain, Ctrain)
    for i in range(0, x.shape[0], minibatch_size):
        # NumPy clips out-of-range slices, so no min() guard is needed
        m_features = x[i:i + minibatch_size, :]
        m_labels = y[i:i + minibatch_size, :]
        m_cat = c[i:i + minibatch_size, :]

        # alternate: even epochs train the predictor, odd epochs the adversary
        if e % 2 == 0:
            pred_trainer.train_minibatch({features: m_features, label: m_labels, categories: m_cat})
        else:
            adv_trainer.train_minibatch({features: m_features, label: m_labels, categories: m_cat})

What I find strange is that if I comment out the last two lines (else: adv_trainer.train...), the train and test errors of z's predictions of label change. Since adv_trainer is supposed to modify only adv_w and adv_b, which are not used in computing z or its loss, I cannot see why this should happen. I would appreciate any help.
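The expectation can be checked outside CNTK. Below is a minimal NumPy sketch (hypothetical dimensions, not the actual CNTK graph) of one gradient step on the second layer's loss: a learner given only (adv_w, adv_b) leaves w untouched, while a learner given every parameter moves w as well, because the adversarial loss's gradient does flow back through z into w.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                           # minibatch of features
t = np.eye(4)[rng.integers(0, 4, size=5)]             # one-hot category targets

w = rng.normal(size=(3, 2)); b = np.zeros(2)          # first (prediction) layer
adv_w = rng.normal(size=(2, 4)); adv_b = np.zeros(4)  # second (adversarial) layer

# forward pass: z = tanh(x w + b), adverse = tanh(z adv_w + adv_b)
z = np.tanh(x @ w + b)
a = np.tanh(z @ adv_w + adv_b)
p = np.exp(a) / np.exp(a).sum(axis=1, keepdims=True)  # softmax over the tanh output

# backprop of the cross-entropy-with-softmax loss through both layers
d_a = (p - t) * (1 - a ** 2)        # grad w.r.t. pre-tanh activation of layer 2
d_z = (d_a @ adv_w.T) * (1 - z ** 2)
g_w = x.T @ d_z                     # nonzero: the adversarial loss *does* reach w

lr = 0.5
w_only_adv = w.copy()               # learner given only (adv_w, adv_b): w untouched
w_all = w - lr * g_w                # learner given all parameters: w moves

print(np.allclose(w_only_adv, w))   # True
print(np.allclose(w_all, w))        # False
```

So whether the first layer stays fixed during the adversarial epochs is decided entirely by which parameters the learner is constructed with.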

Answer


You should not do:

adv_learner = C.adam(adverse.parameters, adv_lr_schedule, C.momentum_as_time_constant_schedule(0.9)) 

but rather:

adv_learner = C.adam(adv_parameters, adv_lr_schedule, C.momentum_schedule(0.9)) 

adverse.parameters contains all of the parameters, which is not what you want here. Separately, you will need to replace momentum_as_time_constant_schedule with momentum_schedule. The former interprets its argument as the number of samples after which the contribution of a gradient is decayed by exp(-1).
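To see how far off momentum_as_time_constant_schedule(0.9) is, note that by the relation just described a time constant of τ samples corresponds to a per-sample momentum coefficient of exp(-1/τ). A quick check in plain Python (just illustrating the formula, not the CNTK API):

```python
import math

# a momentum time constant tau (in samples) corresponds to an
# ordinary per-sample momentum coefficient of m = exp(-1/tau)
def momentum_from_time_constant(tau):
    return math.exp(-1.0 / tau)

print(momentum_from_time_constant(0.9))  # ~0.329, not the 0.9 that was intended

# conversely, a momentum of 0.9 corresponds to tau = -1/ln(0.9), about 9.5 samples
print(-1.0 / math.log(0.9))
```

So passing 0.9 to momentum_as_time_constant_schedule gives a much weaker momentum than momentum_schedule(0.9) would.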


Thanks Nikos, you nailed it!
