
Why is my cost function equal to zero when I run this code? What is wrong with my code?

import tensorflow as tf 

filename_queue = tf.train.string_input_producer(["data.csv"]) 

line_reader = tf.TextLineReader(skip_header_lines=0) 
_, csv_row = line_reader.read(filename_queue) 

record_defaults = [[1],[1.0],[1.0],[1.0],[1.0]] 
out,in1,in2,in3,in4 = tf.decode_csv(csv_row, record_defaults=record_defaults) 

features = tf.stack([in1,in2,in3,in4]) 

learning_rate = 0.6 
training_epochs = 10 
batch_size = 2 
display_step = 1 
num_examples= 10 

n_hidden_1 = 10 
n_hidden_2 = 10 
n_input = 4 
n_classes = 1 

x = tf.placeholder("float", [None, n_input]) 
y = tf.placeholder("float", [n_classes]) 

def multilayer_perceptron(x, weights, biases): 
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1']) 
    layer_1 = tf.nn.relu(layer_1) 

    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']) 
    layer_2 = tf.nn.relu(layer_2) 

    out_layer = tf.matmul(layer_2, weights['out']) + biases['out'] 

    return out_layer 

weights = { 
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])), 
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes])) 
} 
biases = { 
    'b1': tf.Variable(tf.random_normal([n_hidden_1])), 
    'b2': tf.Variable(tf.random_normal([n_hidden_2])), 
    'out': tf.Variable(tf.random_normal([n_classes])) 
} 

prediction = multilayer_perceptron(x, weights, biases) 

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y)) 
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) 

init = tf.global_variables_initializer() 

with tf.Session() as sess: 
    sess.run(init) 
    coord = tf.train.Coordinator() 
    threads = tf.train.start_queue_runners(coord=coord) 

    for epoch in range(training_epochs): 
        avg_cost = 0 
        total_batch = int(num_examples/batch_size) 

        for i in range(total_batch): 
            batch_x = [] 
            batch_y = [] 
            for _ in range(1, batch_size): 
                example, label = sess.run([features, out]) 
                batch_x.append(example) 
                batch_y.append(label) 
                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, 
                                                              y: batch_y}) 
                avg_cost += c/total_batch 

        if epoch % display_step == 0: 
            print("Epoch:", '%04d' % (epoch+1), "cost=", 
                  "{:.9f}".format(avg_cost)) 
    print("Optimization Finished!") 
    coord.request_stop() 
    coord.join(threads) 

The data.csv file:

0,0.1,0.3,0.2,0.9 
1,0.7,0.9,0.1,0.0 
2,0.6,0.9,0.4,0.4 
3,0.9,0.3,0.6,0.4 
4,0.5,0.3,0.5,0.5 
5,0.5,0.6,0.1,0.4 
6,0.0,0.4,0.6,0.6 
7,0.0,0.9,0.4,0.5 
8,0.6,0.4,0.2,0.5 
9,0.7,0.1,0.1,0.9 

Results:

Epoch: 0001 cost= 0.000000000
Epoch: 0002 cost= 0.000000000
Epoch: 0003 cost= 0.000000000
Epoch: 0004 cost= 0.000000000
Epoch: 0005 cost= 0.000000000
Epoch: 0006 cost= 0.000000000
Epoch: 0007 cost= 0.000000000
Epoch: 0008 cost= 0.000000000
Epoch: 0009 cost= 0.000000000
Epoch: 0010 cost= 0.000000000
Optimization Finished!

Answer:

The value of c returned from the session really is zero.

_, c = sess.run([optimizer, cost], feed_dict={x: batch_x, 
               y: batch_y}) 

Are you sure TensorFlow is executing correctly?
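One likely reason the cost really is zero: with n_classes = 1 the network has a single output logit, and tf.nn.softmax_cross_entropy_with_logits normalizes over the class dimension, so the softmax of a single logit is always 1 and the per-example cross entropy is always 0, whatever the weights are. A minimal sketch illustrating this (assuming TensorFlow 1.x; the constant values below are made up purely for illustration):

import tensorflow as tf

# One "class", so the class dimension has size 1. The values here are
# arbitrary; only the shape [batch, 1] matters.
logits = tf.constant([[2.3], [-7.1]])   # shape [batch, 1]
labels = tf.constant([[1.0], [1.0]])    # shape [batch, 1]

# Softmax over a single logit is always 1, so -label * log(1) == 0
# for every example, regardless of the logit value.
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)

with tf.Session() as sess:
    print(sess.run(loss))   # both entries come out as zero

If that is what is happening here, a cost of exactly 0.000000000 on every batch is what the graph is actually computing.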


But why? After running the optimizer with that feed_dict, the value fetched for c should be the computed value of the cost function, shouldn't it? – netizen
