I merged your train_neural_network and make_prediction functions into a single function. Applying tf.nn.softmax to the model output scales the values into the range 0–1 (interpreted as probabilities), and tf.argmax then extracts the column index with the higher probability. Note that in this case the y placeholder needs to be one-hot encoded. (If you are not one-hot encoding y here, then pred_y = tf.round(tf.nn.softmax(model)) will convert the softmax output to 0 or 1.)
def train_neural_network_and_make_prediction(train_X, train_y, test_X):
    model = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=model, labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    pred_y = tf.argmax(tf.nn.softmax(model), 1)
    epochs = 10
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(epochs):
            epoch_cost = 0
            i = 0
            while i < len(train_X):
                start = i
                end = i + batch_size
                batch_x = np.array(train_X[start:end])
                batch_y = np.array(train_y[start:end])
                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
                epoch_cost += c
                i += batch_size
            print("Epoch", epoch + 1, "completed with a cost of", epoch_cost)
        # make predictions on test data
        predictions = pred_y.eval(feed_dict={x: test_X})
    return predictions
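The difference between the two prediction ops can be sketched with plain NumPy (a minimal illustration, not part of the original answer, using a hypothetical softmax helper): argmax yields integer class indices, while round maps each softmax probability to 0 or 1, matching one-hot-encoded labels.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    e = np.exp(logits - np.max(logits, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 0.5],   # strongly favours class 0
                   [0.2, 1.7]])  # strongly favours class 1
probs = softmax(logits)

class_ids = np.argmax(probs, axis=1)  # integer class labels per row
one_hot = np.round(probs)             # 0/1 vectors, comparable to one-hot y

print(class_ids)        # [0 1]
print(one_hot.tolist()) # [[1.0, 0.0], [0.0, 1.0]]
```

Both carry the same decision; which one you want depends on whether your labels are sparse integers or one-hot vectors.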
Thank you so much, 'pred_y = tf.round(tf.nn.softmax(model))' was exactly what I was looking for :) –