
2-layer neural network in tensorflow

I am new to tensorflow and trying to train the following two-layer network. It does not seem to work: the cross entropy does not decrease across iterations. I think I have messed up the wiring from the hidden layer to the output layer. Please help if you can spot the problem.

import tensorflow as tf 
from scipy.io import loadmat 
import numpy as np 
import sys 

x = loadmat('../mnist_data/ex4data1.mat') 
X = x['X'] 

# one-hot conversion: the .mat labels run 1..10, so shift to 0..9 
y_temp = x['y'] 
y_temp = np.reshape(y_temp, (len(y_temp),)) 
y = np.zeros((len(y_temp),10)) 
y[np.arange(len(y_temp)), y_temp-1] = 1. 



input_size = 400 
hidden1_size = 25 
output_size = 10 
num_iters = 50 
reg_alpha = 0.05  # used below as the learning rate 


x = tf.placeholder(tf.float32, [None, input_size], name='data')  # rebinds x; the loadmat dict is no longer needed 
W1 = tf.Variable(tf.zeros([hidden1_size, input_size], tf.float32, name='weights_1st_layer')) 
b1 = tf.Variable(tf.zeros([hidden1_size], tf.float32), name='bias_layer_1') 
W2 = tf.Variable(tf.zeros([output_size, hidden1_size], tf.float32, name='weights_2nd_layer')) 
b2 = tf.Variable(tf.zeros([output_size], tf.float32), name='bias_layer_2') 


hidden_op = tf.nn.relu(tf.add(tf.matmul(x, W1, transpose_b=True), b1)) 
output_op = tf.matmul(hidden_op, W2, transpose_b=True) + b2 
pred = tf.nn.softmax(output_op) 

y_ = tf.placeholder(tf.float32, [None, 10], name='actual_labels') 


cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=y_, logits=output_op)) 
train_step = tf.train.GradientDescentOptimizer(reg_alpha).minimize(cross_entropy) 

sess = tf.InteractiveSession() 
tf.global_variables_initializer().run() 

for i in range(num_iters): 
    print('training..', i) 
    print(sess.run([train_step, cross_entropy], feed_dict={x: X, y_: y})) 

corr_pred = tf.equal(tf.argmax(pred, axis=1), tf.argmax(y_, axis=1)) 
acc = tf.reduce_mean(tf.cast(corr_pred, tf.float32)) 
print (sess.run(acc, feed_dict={x:X, y_:y})) 
sess.close() 

I am not familiar with TF, but it looks like you are initializing all of the weights and biases to zero. Is that right? If so, that is a huge problem, because it prevents symmetry breaking. – hobbs

Answer


Try initializing your weights as random numbers instead of zeros.

So instead of:

W1 = tf.Variable(tf.zeros([hidden1_size, input_size], tf.float32, name='weights_1st_layer')) 
W2 = tf.Variable(tf.zeros([output_size, hidden1_size], tf.float32, name='weights_2nd_layer')) 

use:

W1 = tf.Variable(tf.truncated_normal([hidden1_size, input_size], stddev=0.1), name='weights_1st_layer') 
W2 = tf.Variable(tf.truncated_normal([output_size, hidden1_size], stddev=0.1), name='weights_2nd_layer') 
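
Note that the biases can stay zero-initialized: it is the symmetry between the hidden units' randomly drawn weight rows that has to be broken, not the biases.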

Check this nice summary of why initializing all the weights to zero prevents the network from training.
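
To see why, here is a minimal sketch (hypothetical toy shapes, bias terms left out to keep it short, not your exact network): with all-zero weights the ReLU hidden layer outputs zero for every example, and the gradients TensorFlow computes for both weight matrices come out exactly zero, so gradient descent can never move them.

import numpy as np 
import tensorflow as tf 

# Toy shapes: 4 inputs, 3 hidden units, 2 classes, no biases. 
x = tf.placeholder(tf.float32, [None, 4]) 
y_ = tf.placeholder(tf.float32, [None, 2]) 
W1 = tf.Variable(tf.zeros([4, 3])) 
W2 = tf.Variable(tf.zeros([3, 2])) 

hidden = tf.nn.relu(tf.matmul(x, W1))   # relu(0) = 0 for every unit 
logits = tf.matmul(hidden, W2)          # zeros times zeros: still zero 
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits)) 
grads = tf.gradients(loss, [W1, W2]) 

with tf.Session() as sess: 
    sess.run(tf.global_variables_initializer()) 
    data = np.random.randn(8, 4) 
    labels = np.eye(2)[np.random.randint(0, 2, 8)] 
    g1, g2 = sess.run(grads, feed_dict={x: data, y_: labels}) 
    # Both gradients are identically zero: the network is stuck. 
    print(np.allclose(g1, 0), np.allclose(g2, 0))  # True True 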


Yes, that works. I was originally following an implementation of a single-layer network, where the weights were initialized to 0. I thought the layer just before the output could have zero initialization, but the layers before it had to be initialized non-zero for GD to work... Thanks –