
Computing multivariate regression with TensorFlow

I want to implement multivariate regression in TensorFlow, where I have 192 examples with 6 features and one output variable. From my model I get a (192, 6) matrix, while it should be (192, 1). Does anyone know what is wrong with my code? It is included below.

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# rng is assumed to be numpy.random, as in the standard TF linear regression example
rng = np.random

# Parameters
learning_rate = 0.0001
training_epochs = 50
display_step = 5

# Data_ABX3 is loaded elsewhere; features are in columns 0-5, the target in column 24
train_X = Data_ABX3[0:192, 0:6]
train_Y = Data_ABX3[0:192, [24]]


# placeholders for a tensor that will be always fed. 
X = tf.placeholder('float', shape = [None, 6]) 
Y = tf.placeholder('float', shape = [None, 1]) 


# Training Data 

n_samples = train_Y.shape[0] 


# Set model weights 
W = tf.cast(tf.Variable(rng.randn(1, 6), name="weight"), tf.float32) 
b = tf.Variable(rng.randn(), name="bias") 

# Construct a linear model 
pred = tf.add(tf.multiply(X, W), b) 

# Mean squared error 
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples) 
# Gradient descent 
# Note, minimize() knows to modify W and b because Variable objects are  trainable=True by default 
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) 

# Accuracy 
# #accuracy = tf.contrib.metrics.streaming_accuracy(Y, pred) 

# Initialize the variables (i.e. assign their default value) 
init = tf.global_variables_initializer() 

# Start training 
with tf.Session() as sess: 

    # Run the initializer 
    sess.run(init) 

    # Fit all training data 
    for epoch in range(training_epochs): 
        # for (x, y) in zip(train_X, train_Y):
        sess.run(optimizer, feed_dict={X: train_X, Y: train_Y})

        # Display logs per epoch step
        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c),
                  "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!") 
    #training_cost = 0 
    #for (x, y) in zip(train_X, train_Y): 
    #  tr_cost = sess.run(cost, feed_dict={X: x, Y: y}) 
    #  training_cost += tr_cost 
    training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y}) 
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n') 

    # Graphic display 
    plt.plot(train_Y, train_X * sess.run(W) + sess.run(b), label='Fitted line') 
    plt.legend() 
    plt.show() 

Answer


Please use tf.matmul instead of tf.multiply in your pred equation. tf.multiply performs element-wise multiplication, so it produces a matrix with the same dimensions as train_X, whereas tf.matmul performs matrix multiplication and produces a result matrix whose shape follows the usual matrix-multiplication rules.
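A minimal shape check, assuming the same TensorFlow 1.x placeholder shapes as in the code above (the names W_row and W_col are illustrative, not from the original code), makes the difference visible:

import tensorflow as tf

X = tf.placeholder('float32', shape=[None, 6])   # 192 examples, 6 features
W_row = tf.Variable(tf.zeros([1, 6]))            # row-shaped weights, as in the question
W_col = tf.Variable(tf.zeros([6, 1]))            # column-shaped weights, as in the answer below

elementwise = tf.multiply(X, W_row)   # broadcasts element-wise to shape (?, 6) -- the (192, 6) result reported
prediction = tf.matmul(X, W_col)      # matrix product, shape (?, 1) -- one prediction per example

print(elementwise.shape)   # (?, 6)
print(prediction.shape)    # (?, 1)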

I am not sure what your data looks like, so I added random data and adjusted the code to satisfy all the dimension requirements. If you can describe your intent, it will help in pinning down the problem.

Edit

import numpy as np 
import tensorflow as tf 
import matplotlib.pyplot as plt 
# Parameters 
learning_rate = 0.0001 
training_epochs = 50 
display_step = 5 

Data_ABX3 = np.random.random((193, 8)).astype('f') 

train_X = Data_ABX3[0:192, 0:6] 
train_Y = Data_ABX3[0:192, [7]] 


# placeholders for a tensor that will be always fed. 
X = tf.placeholder('float32', shape = [None, 6]) 
Y = tf.placeholder('float32', shape = [None, 1]) 

# Training Data 
n_samples = train_Y.shape[0] 

# Set model weights 
W = tf.cast(tf.Variable(np.random.randn(6, 1), name="weight"), tf.float32) 
b = tf.Variable(np.random.randn(), name="bias") 

mult_node = tf.matmul(X, W) 
print(mult_node.shape) 
# Construct a linear model 
pred = tf.add(tf.matmul(X, W), b) 

# Mean squared error 
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples) 
# Gradient descent 
# Note, minimize() knows to modify W and b because Variable objects are    trainable=True by default 
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) 

# Accuracy 
# #accuracy = tf.contrib.metrics.streaming_accuracy(Y, pred) 

# Initialize the variables (i.e. assign their default value) 
init = tf.global_variables_initializer() 

# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        # for (x, y) in zip(train_X, train_Y):
        sess.run(optimizer, feed_dict={X: train_X, Y: train_Y})

        # Display logs per epoch step
        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c),
                  "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!")
    # training_cost = 0
    # for (x, y) in zip(train_X, train_Y):
    #     tr_cost = sess.run(cost, feed_dict={X: x, Y: y})
    #     training_cost += tr_cost
    training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')

    line = sess.run(tf.add(tf.matmul(train_X, W), b))
    # Graphic display
    plt.plot(train_Y, line, label='Fitted line')
    plt.legend()
    plt.show()
Thanks Rachit. I used it and it did not work; I get this message: ValueError: Dimensions must be equal, but for 'MatMul' (op: 'MatMul') the input shapes are [?,6], [1,6]. – Hamid

@Hamid I do not know your input data, but I ran your code with random data. –

Thank you very much for your helpful comments. I changed these parts of the code: "Data_ABX3 = numpy.loadtxt(file, dtype='float32', ..." and "W = tf.cast(tf.Variable(tf.zeros([6, 1])), ...". It is working now, but I get a higher and higher cost (training cost = 4.81842e+28) as I increase training_epochs. – Hamid
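A cost that keeps growing with more epochs usually means plain gradient descent is diverging, most often because the features are on very different scales or the learning rate is too large for the data. A minimal sketch of standardizing the features before feeding them (assuming train_X is the NumPy array used above; X_mean, X_std and train_X_scaled are illustrative names, not from the thread):

import numpy as np

# Scale each feature column to zero mean and unit variance;
# the small epsilon avoids division by zero for constant columns.
X_mean = train_X.mean(axis=0)
X_std = train_X.std(axis=0) + 1e-8
train_X_scaled = (train_X - X_mean) / X_std

# Then feed the scaled features instead of the raw ones:
# sess.run(optimizer, feed_dict={X: train_X_scaled, Y: train_Y})

Lowering learning_rate is the other common remedy.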