I am new to TensorFlow and am trying to build a neural network model in TensorFlow to solve a task scheduling problem.
I built the model with 2 hidden layers; the input layer has 36 nodes and the output layer has 22 nodes. All the values in the nodes (in both the input and output layers) are normalized floats (values between 0.0 and 1.0). Since I need to import the data from a CSV file, I built the model following this online example: http://tneal.org/post/tensorflow-iris/TensorFlowIris/
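For reference, this is roughly how the data layout can be checked with pandas (a minimal sketch on my side; it assumes the same CSV path and column headers as in the full script further below):

# quick sanity check on the data layout (not part of the model itself):
# 36 input columns and 22 output columns, all expected to lie in [0.0, 1.0]
import pandas as pd

df = pd.read_csv("testGraphs/testgraph_input_output_CCR_1.0_Norm.csv")
input_cols = [c for n in range(12) for c in ('In%d' % n, 'Weight%d' % n, 'Out%d' % n)]             # 36 columns
output_cols = [c for n in range(1, 12) for c in ('ProcessorForNode%d' % n, 'StartingTime%d' % n)]  # 22 columns
print(len(input_cols), len(output_cols))                           # 36 22
print(df[input_cols].values.min(), df[input_cols].values.max())    # expected inside [0.0, 1.0]
print(df[output_cols].values.min(), df[output_cols].values.max())  # expected inside [0.0, 1.0]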
I initially used 9 data samples to train the network and got an overfitted result, so I increased the number of samples to 1000, but then the results became strange and did not even overfit (when the same data set is used for both training and testing, the predicted values do not match the actual values).
When I adjust the learning rate, the predictions change, and I even get some negative or very large values. I also tried changing the optimizer, the number of hidden-layer nodes, and the cost function, but still got no improvement.
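To make that concrete, the kinds of changes I mean look roughly like the sketch below (illustrative only: the alternative optimizer and the exact learning rates here are just examples, not the precise values I tried; the version I kept is in the full script further down):

# illustration of the kinds of cost / optimizer / learning-rate variations tried;
# 'prediction' and 'y' stand in for the tensors built in the full script below
import tensorflow as tf

y = tf.placeholder('float32', [None, 22])
prediction = tf.placeholder('float32', [None, 22])    # placeholder stand-in for the network output

cost_mse = tf.reduce_mean(tf.square(prediction - y))  # what the script currently uses
cost_xent = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))  # alternative cost also tried

opt_adam = tf.train.AdamOptimizer(learning_rate=0.001)            # current choice
opt_sgd = tf.train.GradientDescentOptimizer(learning_rate=0.01)   # example of an alternative optimizer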
Here is the script I wrote in Python:
import csv
import tensorflow as tf
import numpy as np
import pandas as pd
resource_file = "testGraphs/testgraph_input_output_CCR_1.0_Norm.csv"
respd = pd.read_csv(resource_file)
#print(respd.head())
n_nodes = 12
n_nodes_hl1 = 30
n_nodes_hl2 = 25
n_classes = n_nodes*2-2
#batch_size = 100
shuffled_res = respd.sample(frac = 1)
trainSet_res = shuffled_res[0:len(shuffled_res)]
testSet_res = shuffled_res[len(shuffled_res)-2:]
x = tf.placeholder('float32',[None,n_nodes*3])
y = tf.placeholder('float32',[None,n_classes])
def nerual_network_model(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([n_nodes*3, n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}

    #input_data * weights + biases
    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)

    output = tf.matmul(l2, output_layer['weights']) + output_layer['biases']
    return output
def train_nerual_network(x):
    prediction = nerual_network_model(x)
    #cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))
    cost = tf.reduce_mean(tf.square(prediction - y))
    #cost = tf.pow(prediction - y, 2)
    optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)

    sess = tf.InteractiveSession()
    tf.global_variables_initializer().run()

    input_labels = ['In0','Weight0','Out0','In1','Weight1','Out1','In2','Weight2','Out2',
                    'In3','Weight3','Out3','In4','Weight4','Out4','In5','Weight5','Out5',
                    'In6','Weight6','Out6','In7','Weight7','Out7','In8','Weight8','Out8',
                    'In9','Weight9','Out9','In10','Weight10','Out10','In11','Weight11','Out11']
    output_labels = ['ProcessorForNode1','StartingTime1','ProcessorForNode2','StartingTime2',
                     'ProcessorForNode3','StartingTime3','ProcessorForNode4','StartingTime4',
                     'ProcessorForNode5','StartingTime5','ProcessorForNode6','StartingTime6',
                     'ProcessorForNode7','StartingTime7','ProcessorForNode8','StartingTime8',
                     'ProcessorForNode9','StartingTime9','ProcessorForNode10','StartingTime10',
                     'ProcessorForNode11','StartingTime11']

    for i in range(1000):
        train_res = trainSet_res.sample(100)
        sess.run(optimizer, feed_dict={x: [j for j in train_res[input_labels].values],
                                       y: [j for j in train_res[output_labels].values]})

    #correct = tf.equal(tf.argmax(prediction,1), tf.argmax(y,1))
    #accuracy = tf.reduce_mean(tf.cast(correct,'float32'))
    #print(sess.run(accuracy, feed_dict={x: [j for j in testSet_res[input_labels].values],
    #                                    y: [j for j in testSet_res[output_labels].values]}))

    print(sess.run(prediction, feed_dict={x: [j for j in testSet_res[input_labels].values],
                                          y: [j for j in testSet_res[output_labels].values]}))
    print(sess.run(y, feed_dict={x: [j for j in testSet_res[input_labels].values],
                                 y: [j for j in testSet_res[output_labels].values]}))

train_nerual_network(x)
Here are the results (prediction values above and actual values below):
Can anyone tell me what might be the cause of the problem in this model? Thank you.