
TensorFlow while_loop training

In my problem I need to run gradient descent with one example from the data on each training step. It is a known problem that session.run() has overhead, so training the model this way takes too long. To avoid the overhead, I tried using while_loop to train the model on all the data with a single run() call. But this approach does not work: train_op does not even execute. Below is a simple example of what I am doing:

import tensorflow as tf

data = [k*1. for k in range(10)]
tf.reset_default_graph() 

i = tf.Variable(0, name='loop_i') 
q_x = tf.FIFOQueue(100000, tf.float32) 
q_y = tf.FIFOQueue(100000, tf.float32) 

x = q_x.dequeue() 
y = q_y.dequeue() 
w = tf.Variable(0.) 
b = tf.Variable(0.) 
loss = (tf.add(tf.mul(x, w), b) - y)**2 

gs = tf.Variable(0) 

train_op = tf.train.GradientDescentOptimizer(0.05).minimize(loss, global_step=gs) 

s = tf.Session() 
s.run(tf.initialize_all_variables()) 

def cond(i): 
    return i < 10 

def body(i): 
    return tf.tuple([tf.add(i, 1)], control_inputs=[train_op]) 


loop = tf.while_loop(cond, body, [i]) 

for _ in range(1): 
    s.run(q_x.enqueue_many((data,))) 
    s.run(q_y.enqueue_many((data,))) 

s.run(loop) 
s.close() 

What am I doing wrong? Or is there another solution to the problem of session.run() overhead being too high?

Thanks!

Answer


The reason the model does not appear to train is that the input reading, the gradient calculation, and the minimize() call are all defined outside (and hence, in dataflow terms, before) the body of the tf.while_loop(). This means that all of these parts of the model run only once, before the loop executes, and the loop itself has no effect.
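
To see why op placement matters, here is a minimal sketch (a hypothetical illustration, not part of the original answer) that sums the integers 0 through 9: only ops created inside the body function become part of the loop subgraph and therefore run once per iteration:

import tensorflow as tf

def cond(i, total):
    return i < 10

def body(i, total):
    # `total + i` is created inside `body`, so it is part of the loop
    # subgraph and executes on every iteration.
    return i + 1, total + i

i_final, total_final = tf.while_loop(
    cond, body, [tf.constant(0), tf.constant(0)])

with tf.Session() as sess:
    print(sess.run([i_final, total_final]))  # [10, 45]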

A slight restructuring, moving the dequeue() operations, the gradient calculation, and the minimize() call inside the loop, fixes the problem and allows the program to train:

optimizer = tf.train.GradientDescentOptimizer(0.05) 

def cond(i): 
    return i < 10 

def body(i): 
    # Dequeue a new example each iteration. 
    x = q_x.dequeue() 
    y = q_y.dequeue() 

    # Compute the loss and gradient update based on the current example. 
    loss = (tf.add(tf.mul(x, w), b) - y)**2 
    train_op = optimizer.minimize(loss, global_step=gs) 

    # Ensure that the update is applied before continuing. 
    return tf.tuple([tf.add(i, 1)], control_inputs=[train_op]) 

loop = tf.while_loop(cond, body, [i]) 
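
Note the design here: the optimizer object itself is still created once, outside the loop, while the per-example dequeue(), loss, and minimize() ops are constructed inside body(), so they become part of the loop subgraph and run on every iteration.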

UPDATE: Here is a complete program that executes the while loop, based on the code in your question:

import tensorflow as tf 

# Define a single queue with two components to store the input data. 
q_data = tf.FIFOQueue(100000, [tf.float32, tf.float32]) 

# We will use these placeholders to enqueue input data. 
placeholder_x = tf.placeholder(tf.float32, shape=[None]) 
placeholder_y = tf.placeholder(tf.float32, shape=[None]) 
enqueue_data_op = q_data.enqueue_many([placeholder_x, placeholder_y]) 

gs = tf.Variable(0) 
w = tf.Variable(0.) 
b = tf.Variable(0.) 
optimizer = tf.train.GradientDescentOptimizer(0.05) 

# Construct the while loop. 
def cond(i): 
    return i < 10 

def body(i): 
    # Dequeue a single new example each iteration. 
    x, y = q_data.dequeue() 
    # Compute the loss and gradient update based on the current example. 
    loss = (tf.add(tf.multiply(x, w), b) - y) ** 2 
    train_op = optimizer.minimize(loss, global_step=gs) 
    # Ensure that the update is applied before continuing. 
    with tf.control_dependencies([train_op]):
        return i + 1

loop = tf.while_loop(cond, body, [tf.constant(0)]) 

data = [k * 1. for k in range(10)] 

with tf.Session() as sess: 
    sess.run(tf.global_variables_initializer()) 
    for _ in range(1):
        # NOTE: Constructing the enqueue op ahead of time avoids adding
        # (potentially many) copies of `data` to the graph.
        sess.run(enqueue_data_op,
                 feed_dict={placeholder_x: data, placeholder_y: data})
    print(sess.run([gs, w, b]))  # Prints before-loop values.
    sess.run(loop)
    print(sess.run([gs, w, b]))  # Prints after-loop values.
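
If everything is wired up correctly, the second print should show gs equal to 10, since minimize() increments the global step once for each of the ten dequeued examples, and w and b should have moved away from their initial 0.0 values.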

Should I define **w** and **b** outside? That is what I was trying (and now I tried exactly what you provided), but I get the error *All inputs to node while/GradientDescent/update_while/w/ApplyGradientDescent must be from the same frame.* –


I've added the complete program that I ran with TensorFlow 0.10rc0. (You may need to upgrade; there were various bugs in the 'tf.while_loop()' implementation that were fixed in the last few releases.) – mrry


Yes, I was running it on 0.9. Thanks, after updating it works! One more question about your solution: it looks like a new optimizer is created at every step; what happens if I want to use the Ftrl optimizer (which has some update slots)? Will it work like a single optimizer over the whole training process? –