
I am trying to implement multivariate linear regression in Python using TensorFlow, but I have run into some logical and implementation problems. My code raises the following error (TensorFlow: "Attempting to use uninitialized value" during variable initialization):

Attempting to use uninitialized value Variable 
Caused by op u'Variable/read' 

Ideally, the output for weights should be [2, 3].

import tensorflow as tf

def hypothesis_function(input_2d_matrix_trainingexamples,
                        output_matrix_of_trainingexamples,
                        initial_parameters_of_hypothesis_function,
                        learning_rate, num_steps):
    # calculate the number of attributes and the number of training examples
    number_of_attributes = len(input_2d_matrix_trainingexamples[0])
    number_of_trainingexamples = len(input_2d_matrix_trainingexamples)

    # graph inputs
    x = []
    for i in range(0, number_of_attributes, 1):
        x.append(tf.placeholder("float"))
    y_input = tf.placeholder("float")

    # create the model and set the model weights
    parameters = []
    for i in range(0, number_of_attributes, 1):
        parameters.append(
            tf.Variable(initial_parameters_of_hypothesis_function[i]))

    # construct the linear model
    y = tf.Variable(parameters[0], "float")
    for i in range(1, number_of_attributes, 1):
        y = tf.add(y, tf.multiply(x[i], parameters[i]))

    # minimize the mean squared error
    loss = tf.reduce_mean(tf.square(y - y_input))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    train = optimizer.minimize(loss)

    # initialize the variables
    init = tf.initialize_all_variables()

    # launch the graph
    session = tf.Session()
    session.run(init)
    for step in range(1, num_steps + 1, 1):
        for i in range(0, number_of_trainingexamples, 1):
            feed = {}
            for j in range(0, number_of_attributes, 1):
                array = [input_2d_matrix_trainingexamples[i][j]]
                feed[j] = array
            array1 = [output_matrix_of_trainingexamples[i]]
            feed[number_of_attributes] = array1
            session.run(train, feed_dict=feed)

    for i in range(0, number_of_attributes - 1, 1):
        print(session.run(parameters[i]))

array = [[0.0, 1.0, 2.0], [0.0, 2.0, 3.0], [0.0, 4.0, 5.0]]
hypothesis_function(array, [8.0, 13.0, 23.0], [1.0, 1.0, 1.0], 0.01, 200)
+0

On which line do you get the exception? –

+0

@DanielSlater At the line: parameters.append(tf.Variable(initial_parameters_of_hypothesis_function[i])) –

+3

OK, is initial_parameters_of_hypothesis_function an array of tf.Variable objects? If so, that is your problem. –

Answers

9

It is not 100% clear from the code sample, but if the list initial_parameters_of_hypothesis_function is a list of tf.Variable objects, then the line session.run(init) will fail, because TensorFlow is not (yet) smart enough to figure out the dependencies between variable initializers. To work around this, you should change the loop that creates parameters to use initial_parameters_of_hypothesis_function[i].initialized_value(), which adds the necessary dependency:

parameters = []
for i in range(0, number_of_attributes, 1):
    parameters.append(tf.Variable(
        initial_parameters_of_hypothesis_function[i].initialized_value()))
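
For illustration, here is a minimal sketch of the same fix in isolation, assuming TensorFlow 1.x graph mode (base and derived are made-up names):

import tensorflow as tf

base = tf.Variable(2.0, name='base')

# Initializing one variable directly from another may fail, because the
# initializer op does not order the per-variable initializers:
#     derived = tf.Variable(base, name='derived')
# Using initialized_value() adds the required dependency explicitly.
derived = tf.Variable(base.initialized_value(), name='derived')

init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(derived))  # 2.0
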
+0

This works, but now it raises another error: TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a int into a Tensor, at the line session.run(train, feed_dict=feed) –

+1

The error message tells you what is wrong: the keys of the feed dictionary must be Tensor objects (typically tf.placeholder() tensors) and not int values. You probably want to replace feed[j] = array with feed[x[j]] = array. – mrry
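
As a rough sketch of that suggestion applied to the question's training loop (reusing x, y_input, train, and session from the question's code, and assuming nothing else changes):

for i in range(0, number_of_trainingexamples, 1):
    feed = {}
    for j in range(0, number_of_attributes, 1):
        # key on the placeholder tensor x[j], not on the integer index j
        feed[x[j]] = [input_2d_matrix_trainingexamples[i][j]]
    # the label is fed through the y_input placeholder
    feed[y_input] = [output_matrix_of_trainingexamples[i]]
    session.run(train, feed_dict=feed)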

+0

I am not sure how to implement stochastic gradient descent in TensorFlow. Could you suggest how? –

25

Run the following:

init = tf.global_variables_initializer() 
sess.run(init) 

or (depending on the version of TF that you have):

init = tf.initialize_all_variables() 
sess.run(init) 
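
For context, a minimal hedged example (TF 1.x graph mode, with a made-up variable w) of where that call fits: create the initializer after all variables have been defined, and run it before reading any of them:

import tensorflow as tf

w = tf.Variable(tf.zeros([2]), name='w')

init = tf.global_variables_initializer()  # build this after all variables exist

with tf.Session() as sess:
    sess.run(init)        # initialize before any read
    print(sess.run(w))    # [0. 0.]
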
+1

init = tf.global_variables_initializer() –

+0

Yes, thanks. TF updated its specs. –

+0

It is now tf.initialize_all_variables() –

2

I would like to share my solution: it worked when I replaced the line session = tf.Session() with sess = tf.InteractiveSession(). Hope this is useful to someone else.
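
One possible explanation, sketched under the assumption of TF 1.x: tf.InteractiveSession installs itself as the default session when it is constructed, so Tensor.eval() and Operation.run() can find a session without one being passed explicitly, which is convenient in notebooks:

import tensorflow as tf

sess = tf.InteractiveSession()           # registers itself as the default session
w = tf.Variable(3.0, name='w')
tf.global_variables_initializer().run()  # uses the default session implicitly
print(w.eval())                          # 3.0, also via the default session
sess.close()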

+0

Thanks, this really helped me when running on Jupyter Notebook. Can you explain why it works? – shubhamsingh

3

There is another error, related to ordering, that occurs when calling the global variable initializer. I had a code sample that produced a similar error, FailedPreconditionError (see above for traceback): Attempting to use uninitialized value W:

import numpy as np
import tensorflow as tf

def linear(X, n_input, n_output, activation=None):
    W = tf.Variable(tf.random_normal([n_input, n_output], stddev=0.1), name='W')
    b = tf.Variable(tf.constant(0, dtype=tf.float32, shape=[n_output]), name='b')
    if activation is not None:
        h = tf.nn.tanh(tf.add(tf.matmul(X, W), b), name='h')
    else:
        h = tf.add(tf.matmul(X, W), b, name='h')
    return h

from tensorflow.python.framework import ops
ops.reset_default_graph()
g = tf.get_default_graph()
print([op.name for op in g.get_operations()])
with tf.Session() as sess:
    # RUN INIT
    sess.run(tf.global_variables_initializer())
    # But W is not in the graph yet, so the initializer does not know about it.
    # EVAL then raises the error.
    print(linear(np.array([[1.0, 2.0, 3.0]]).astype(np.float32), 3, 3).eval())

You should change it to the following:

from tensorflow.python.framework import ops
ops.reset_default_graph()
g = tf.get_default_graph()
print([op.name for op in g.get_operations()])
with tf.Session() as sess:
    # NOT RUN YET, ONLY ADDED TO THE GRAPH
    l = linear(np.array([[1.0, 2.0, 3.0]]).astype(np.float32), 3, 3)
    # RUN INIT
    sess.run(tf.global_variables_initializer())
    print([op.name for op in g.get_operations()])
    # ONLY EVAL AFTER INIT
    print(l.eval(session=sess))
+0

The order doesn't matter – thanks! – ltt

0

There are generally two ways to initialize variables: 1) using sess.run(tf.global_variables_initializer()), as in the previous answers; or 2) loading the graph from a checkpoint.

You can do it like this:

sess = tf.Session(config=config) 
saver = tf.train.Saver(max_to_keep=3) 
try: 
    saver.restore(sess, tf.train.latest_checkpoint(FLAGS.model_dir)) 
    # start from the latest checkpoint, the sess will be initialized 
    # by the variables in the latest checkpoint 
except ValueError: 
    # train from scratch 
    init = tf.global_variables_initializer() 
    sess.run(init) 

A third way is to use tf.train.Supervisor. The session will be

Create a session on 'master', recovering or initializing the model as needed, or wait for a session to be ready.

sv = tf.train.Supervisor([parameters]) 
sess = sv.prepare_or_wait_for_session()
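
A minimal hedged sketch of that third approach (TF 1.x; the logdir path is made up): the Supervisor restores from the latest checkpoint in logdir if one exists, and otherwise runs the variable initializer itself:

import tensorflow as tf

w = tf.Variable(tf.zeros([2]), name='w')

# logdir is a hypothetical path used only for illustration
sv = tf.train.Supervisor(logdir='/tmp/my_model')
sess = sv.prepare_or_wait_for_session()
print(sess.run(w))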