
Cost stays constant during training: I am a complete beginner trying to use TensorFlow to solve a multi-input, multi-output problem. During training, however, the weights and the cost of the network never change. Below is the main code; any suggestions would be greatly appreciated!

import tensorflow as tf

learning_rate = 0.01
training_epoch = 2000
batch_size = 100
display_step = 1

# place holder for graph input 
x = tf.placeholder("float64", [None, 14]) 
y = tf.placeholder("float64", [None, 8]) 

# model weights 
w_1 = tf.Variable(tf.zeros([14, 11], dtype = tf.float64)) 
w_2 = tf.Variable(tf.zeros([11, 8], dtype = tf.float64)) 

# construct a model 
h_in = tf.matmul(x, w_1) 
h_out = tf.nn.relu(h_in) 
o_in = tf.matmul(h_out, w_2) 
o_out = tf.nn.relu(o_in) 

# cost: mean square error 
cost = tf.reduce_sum(tf.pow((o_out - y), 2)) 

# optimizer 
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) 

# initializer 
init = tf.global_variables_initializer() 

# launch the graph 
with tf.Session() as sess: 
    sess.run(init) 

    for epoch in range(training_epoch):
        pos = 0
        # loop over all batches; train_input_array / train_output_array
        # hold the full training set
        while pos + batch_size <= train_input_array.shape[0]:
            # get the next batch
            batch_i = train_input_array[pos:pos + batch_size]
            batch_o = train_output_array[pos:pos + batch_size]
            pos += batch_size
            sess.run(optimizer, feed_dict={x: batch_i, y: batch_o})
        print(sess.run(w_2[0]))

        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={x: batch_i, y: batch_o})
            print("Epoch:", "%04d" % (epoch + 1), "cost:", "{:.9f}".format(c))

Answer

I think you need to change your cost function to reduce_mean:

# reduce sum doesn't work 
cost = tf.reduce_sum(tf.pow((o_out - y), 2)) 
# you need to use mean 
cost = tf.reduce_mean(tf.pow((o_out - y), 2)) 
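
For context, a minimal sketch in plain NumPy (values illustrative, not from the original post) of why this matters: tf.reduce_sum scales the loss, and therefore the gradient, by the number of elements in the batch, which is equivalent to multiplying the learning rate by that factor and can make plain gradient descent overshoot.

import numpy as np

# Illustrative only: compare the gradient of a summed squared error
# with the gradient of a mean squared error for the same residuals.
err = np.random.randn(100, 8)        # stand-in for (o_out - y)

grad_sum = 2 * err                   # d/d(o_out) of sum(err**2)
grad_mean = 2 * err / err.size       # d/d(o_out) of mean(err**2)

# The summed loss yields a gradient err.size (= 800) times larger,
# so the same learning_rate takes a much bigger step.
print(np.allclose(grad_sum, grad_mean * err.size))   # True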

I changed it to reduce_mean, but the loss is still constant. – Dennis


You could try the momentum optimizer with a larger number of parameters. –


Thanks for the suggestion. I changed the weight variable initialization from tf.zeros to tf.random_normal, and the loss finally started to decrease. – Dennis
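
For anyone hitting the same problem, a minimal sketch of the fix Dennis describes (the stddev value is an assumption, not from the original post): with both weight matrices initialized to zero, every pre-activation is zero, relu(0) outputs zero and has zero gradient there, so gradient descent can never move the weights and the cost stays constant. Small random initial values break that dead state.

import tensorflow as tf

# Replace the all-zero initialization with small random values.
# With w_1 = w_2 = 0, h_out and o_out are always 0 and the gradient
# through both relu layers is 0, so the weights never update.
w_1 = tf.Variable(tf.random_normal([14, 11], stddev=0.1, dtype=tf.float64))
w_2 = tf.Variable(tf.random_normal([11, 8], stddev=0.1, dtype=tf.float64))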