2017-02-03

I have been following this post, in particular Part II ("Using Keras as a simplified interface to TensorFlow"), which covers set_learning_phase for Dropout and saving the TensorFlow session.

As an example, I have trained a CNN on the MNIST data set. My goal is to train and evaluate the model in a TF session, then save the session with tf.train.Saver() so that I can deploy the model on CloudML.

I am able to do this for a model that does not use Dropout. However, when I include a Dropout layer in Keras, the learning_phase has to be specified (training = 1, testing = 0), which is done via the feed_dict (see the code below).
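For background, the learning_phase flag exists because dropout behaves differently in the two phases: during training, units are randomly zeroed and the survivors rescaled, while at test time the layer is an identity. A minimal pure-Python sketch of inverted dropout (illustrative only, not Keras's actual implementation) shows the two modes:

```python
import random

def dropout(values, rate, training):
    """Inverted dropout: during training, zero each value with probability
    `rate` and rescale survivors by 1/(1-rate) so the expected sum is
    preserved; at test time, return the values unchanged."""
    if not training:
        return list(values)  # test phase (learning_phase = 0): no-op
    keep = 1.0 - rate
    return [v / keep if random.random() < keep else 0.0 for v in values]

random.seed(0)
train_out = dropout([1.0, 2.0, 3.0, 4.0], rate=0.5, training=True)   # some zeros, rest doubled
test_out = dropout([1.0, 2.0, 3.0, 4.0], rate=0.5, training=False)  # unchanged
```

This is why a graph containing Dropout has a boolean placeholder: the same graph must support both behaviours, and something has to tell it which one to use.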

Locally, I am able to do this by running something like:

test_accuracy = accuracy.eval(feed_dict={images: mnist_data.test.images, labels: mnist_data.test.labels, K.learning_phase(): 0}) 

However, when I upload my model to CloudML and try to test it, I get the following error:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'keras_learning_phase' with dtype bool 
    [[Node: keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] 

I understand that this is caused by the K.learning_phase(): 0 entry in the feed_dict, but I don't know how to work around it. Part IV of the blog post discusses this issue in the context of loading and re-saving the model for TensorFlow Serving, but I could not make that approach work, because I need to export the session as export and export.meta files, not a Keras model.

# Imports needed to make this snippet self-contained
import json
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.objectives import categorical_crossentropy
from tensorflow.examples.tutorials.mnist import input_data

# Make a session in tf
sess = tf.Session() 
# sess = tf.InteractiveSession() 

# Register the tf session with Keras 
K.set_session(sess) 

# Generate placeholders for the images and labels and mark as input. 
images = tf.placeholder(tf.float32, shape=(None, 28, 28, 1)) 
keys_placeholder = tf.placeholder(tf.int64, shape=(None,)) 
labels = tf.placeholder(tf.float32, shape=(None, 10)) 
inputs = {'key': keys_placeholder.name, 'image': images.name} 
tf.add_to_collection('inputs', json.dumps(inputs)) 

# To be able to extract the id, we need to add the identity function. 
keys = tf.identity(keys_placeholder) 

# Define a simple network:
# two conv/pool blocks, then fully-connected layers with Dropout
model = Sequential() 
model.add(Convolution2D(32, 5, 5, activation='relu', input_shape=(28, 28, 1))) 
model.add(MaxPooling2D(pool_size=(2,2))) 
model.add(Convolution2D(64, 5, 5, activation='relu')) 
model.add(MaxPooling2D(pool_size=(2,2))) 
model.add(Dropout(0.25)) 
model.add(Flatten()) 
model.add(Dense(1024, activation='relu')) 
model.add(Dropout(0.50)) 
model.add(Dense(10, activation='softmax')) 
preds = model(images) # Output 

# Define some Ops 
prediction = tf.argmax(preds, 1) 
scores = tf.nn.softmax(preds) 

# Use the Keras categorical_crossentropy function and tf.reduce_mean 
loss = tf.reduce_mean(categorical_crossentropy(labels, preds)) 
# Define the optimizer 
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss) 
# Initialization op (initialize_all_variables is deprecated in TF 1.x) 
init_op = tf.global_variables_initializer() 
# Saver op 
saver = tf.train.Saver() 

# Mark the outputs. 
outputs = {'key': keys.name, 
           'prediction': prediction.name, 
           'scores': scores.name} 
tf.add_to_collection('outputs', json.dumps(outputs)) 

# Get the data 
mnist_data = input_data.read_data_sets('MNIST_data', one_hot=True, reshape=False) 

# Open session, train, and save
with sess.as_default(): 
    sess.run(init_op) 
    # print keras_learning_phase.eval() 

    for i in range(100): 
        batch = mnist_data.train.next_batch(50) 
        train_step.run(feed_dict={images: batch[0], 
                                  labels: batch[1], 
                                  K.learning_phase(): 1}) 
    saver.save(sess, 'test/export') 
