MLP in TensorFlow for regression... not converging

Hello, this is my first time using TensorFlow. I am trying to adapt the TensorFlow-Examples code here to do regression on the Boston housing database. Basically, I only changed the cost function, the database, the number of inputs, and the number of target values, but when I run it the MLP does not converge (and I use a very low learning rate). I tested it with the Adam optimizer and with the gradient-descent optimizer, but I get the same behaviour with both. I would appreciate your suggestions and ideas!
Observation: when I run the program without the modifications above, the cost value always decreases.
Below is the evolution of the cost when I run the model: even with a very low learning rate, it oscillates. In the worst case, I would expect the model to converge to a value; for example, if epoch 944 shows a cost of 0.226754842 and no better value is found afterwards, then that value should hold until the optimization ends (see the sketch after the log below).
Epoch: 0942 cost= 0.445707272
Epoch: 0943 cost= 0.389314095
Epoch: 0944 cost= 0.226754842
Epoch: 0945 cost= 0.404150135
Epoch: 0946 cost= 0.382190095
Epoch: 0947 cost= 0.897880572
Epoch: 0948 cost= 0.481954243
Epoch: 0949 cost= 0.269408980
Epoch: 0950 cost= 0.427961614
Epoch: 0951 cost= 1.206053280
Epoch: 0952 cost= 0.834200084
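Keeping the best value once it has been found is not something the optimizer does on its own; it would have to be tracked explicitly. A minimal sketch of that expected behaviour — remembering the lowest average epoch cost and checkpointing there — assuming the `sess` and `avg_cost` names from the script below (the checkpoint path is made up):

# Minimal sketch (assumes the names from the script below): remember the
# lowest average epoch cost seen so far and checkpoint at that point, so
# the final model reflects the best epoch rather than the last one.
saver = tf.train.Saver()
best_cost = float("inf")

# ...inside the epoch loop, after avg_cost has been computed:
if avg_cost < best_cost:
    best_cost = avg_cost
    saver.save(sess, "./best_model.ckpt")  # hypothetical path

# ...after training, restore the weights from the best epoch:
saver.restore(sess, "./best_model.ckpt")

Here is the full script: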
from __future__ import print_function
# Import MNIST data
#from tensorflow.examples.tutorials.mnist import input_data
#mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
import tensorflow as tf
import ToolInputData as input_data
ALL_DATA_FILE_NAME = "boston_normalized.csv"
## Load the complete database, then split it into training, validation and test sets
## (ToolInputData is my own helper module; a hypothetical stand-in is sketched after this listing)
completedDatabase = input_data.Databases(databaseFileName=ALL_DATA_FILE_NAME,
                                         targetLabel="MEDV",
                                         trainPercentage=0.70,
                                         valPercentage=0.20,
                                         testPercentage=0.10,
                                         randomState=42,
                                         inputdataShuffle=True,
                                         batchDataShuffle=True)
# Parameters
learning_rate = 0.0001
training_epochs = 1000
batch_size = 5
display_step = 1
# Network Parameters
n_hidden_1 = 10 # 1st layer number of neurons
n_hidden_2 = 10 # 2nd layer number of neurons
n_input = 13 # number of features of my database
n_classes = 1 # one target value (float)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with ReLU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with ReLU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer
# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.square(pred-y))
#cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(completedDatabase.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = completedDatabase.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                          y: batch_y})
            # Compute average loss
            avg_cost += c/total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=",
                  "{:.9f}".format(avg_cost))
    print("Optimization Finished!")
Hello @Steven, the Boston database has only 503 examples, with 13 features and 1 target variable. All the features are normalized and are floats. – EdwinMald
I set the learning rate very low in order to test the model. – EdwinMald
But what are the labels? Are they {0,1}, or are they real-valued numbers with an unrestricted range? – Steven
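A quick way to check that from the data file used in the question (a minimal sketch, assuming pandas is installed; the file name and the MEDV column come from the script above):

# Minimal sketch: inspect the MEDV target column to see whether the labels
# are {0,1} or unrestricted real values (file/column names from the script).
import pandas as pd

df = pd.read_csv("boston_normalized.csv")
print(df["MEDV"].describe())             # min / max / mean of the target
print(sorted(df["MEDV"].unique())[:10])  # first few distinct label values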