I'm using 128×128×128 ndarrays as input to a CNN and getting a shape mismatch:
# input arrays
x = tf.placeholder(tf.float32, [None, 128, 128, 128, 1])
Each ndarray has no colour-channel data, so I used:
data = np.reshape(data, (128, 128, 128, 1))
to make it fit the placeholder in the first place. But now I get this error:
Traceback (most recent call last):
File "tfvgg.py", line 287, in <module>
for i in range(10000 + 1): training_step(i, i % 100 == 0, i % 20 == 0)
File "tfvgg.py", line 277, in training_step
a, c = sess.run([accuracy, cross_entropy], {x: batch_X, y: batch_Y})
File "/home/entelechy/tfenv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 717, in run
run_metadata_ptr)
File "/home/entelechy/tfenv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 894, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (128, 128, 128, 1) for Tensor 'Placeholder:0', which has shape '(?, 128, 128, 128, 1)'
I'm confused about how placeholders work, because I thought the first parameter was the batch size. By using None, I assumed the placeholder could take any number of (128, 128, 128, 1) inputs. Since this is a 3D network, if I instead change the placeholder to (128, 128, 128, 1), the first conv3d layer is missing a dimension and raises an error.
Am I missing something about how arguments are passed to placeholders?
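To illustrate the shapes involved, here is a minimal NumPy sketch (the zero array is just a stand-in for my real volume; the assumption being tested is that the leading batch axis must actually exist in the fed array, even for a single sample):

```python
import numpy as np

# a single 128x128x128 volume with no colour channel
data = np.zeros((128, 128, 128), dtype=np.float32)

# add the channel axis -> (128, 128, 128, 1), as in the reshape above
data = np.reshape(data, (128, 128, 128, 1))

# add a leading batch axis -> (1, 128, 128, 128, 1)
batch = data[np.newaxis, ...]
print(batch.shape)
```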
Edit: (train_data is a list of lists, each of the form [ndarray, label])
Here is the training code:
def training_step(i, update_test_data, update_train_data):
    for a in range(len(train_data)):
        batch = train_data[a]
        batch_X = batch[0]
        batch_Y = batch[1]

        # learning rate decay
        max_learning_rate = 0.003
        min_learning_rate = 0.0001
        decay_speed = 2000.0
        learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i / decay_speed)

        if update_train_data:
            a, c = sess.run([accuracy, cross_entropy], {x: batch_X, y: batch_Y})
            print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c) + " (lr:" + str(learning_rate) + ")")

        if update_test_data:
            a, c = sess.run([accuracy, cross_entropy], {x: test_data[0], y: test_data[1]})
            print(str(i) + ": ********* epoch " + " ********* test accuracy:" + str(a) + " test loss: " + str(c))

        sess.run(train_step, {x: batch_X, y: batch_Y, lr: learning_rate})

for i in range(10000 + 1): training_step(i, i % 100 == 0, i % 20 == 0)
Shouldn't you reshape the data to size (1, 128, 128, 128, 1), i.e. batch size = 1? – hbaderts
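Following on from that comment, a short sketch of how several such volumes could be combined so the None axis is actually used as a batch dimension (the dummy zero volumes here are assumptions, not real data; np.stack adds the leading axis):

```python
import numpy as np

# four dummy volumes, each already shaped (128, 128, 128, 1)
samples = [np.zeros((128, 128, 128, 1), dtype=np.float32) for _ in range(4)]

# stack along a new leading axis -> (4, 128, 128, 128, 1),
# which matches the placeholder shape (None, 128, 128, 128, 1)
batch_X = np.stack(samples)
print(batch_X.shape)
```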