
I am getting this error: TensorFlow BRNN: logits and labels must be the same size

InvalidArgumentError (see above for traceback): logits and labels must 
be same size: logits_size=[10,9] labels_size=[7040,9] [[Node: 
SoftmaxCrossEntropyWithLogits = 
SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, 
_device="/job:localhost/replica:0/task:0/gpu:0"](Reshape, Reshape_1)]] 

But I cannot figure out which tensor causes this error... I think it comes from a size mismatch somewhere...

My input size is batch_size * n_steps * n_input,

so it would be 10 * 704 * 100, and I want the output to be

batch_size * n_steps * n_classes => it would be 10 * 704 * 9, produced by the bidirectional RNN.

How should I change this code to fix this error?

Here batch_size means the number of sequences, like these:

data 1: ABCABCABCAAADDD ... ... data 10: ABCCCCABCDBBAA ...

And n_steps means the length of each sequence (every sequence is padded with 'O' so they all have the same fixed length): 704

And n_input means how each letter of each sequence is encoded as a vector, like: A - [1, 2, 1, -1, ..., -1]

And the target output should look like this: output of data 1: XYZYXYZYYXY ... ... output of data 10: ZXYYRZYZZ ...

Each letter of the output is influenced by the surrounding letters of the input sequence.
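For concreteness, here is a minimal sketch of how such a batch could be encoded as NumPy arrays. The feature mapping, the output alphabet, and the helper names are all hypothetical, inferred only from the description above:

import numpy as np

n_steps, n_input, n_classes = 704, 100, 9

def encode_letter(letter):
    # Hypothetical stand-in for the real 100-dimensional feature vector,
    # e.g. A -> [1, 2, 1, -1, ..., -1] as described in the question.
    vec = np.full(n_input, -1.0)
    vec[ord(letter) % n_input] = 1.0
    return vec

def encode_input(seq):
    # Pad with 'O' to the fixed length, then encode every letter.
    padded = seq.ljust(n_steps, 'O')
    return np.stack([encode_letter(c) for c in padded])     # (704, 100)

def encode_output(seq, alphabet='XYZRABCDO'):  # hypothetical 9-class alphabet
    # One-hot over the n_classes output letters; padding positions get 'O'.
    padded = seq.ljust(n_steps, 'O')
    onehot = np.zeros((n_steps, n_classes))
    for i, c in enumerate(padded):
        onehot[i, alphabet.index(c)] = 1.0
    return onehot                                           # (704, 9)

batch_x = np.stack([encode_input(s) for s in ['ABCABCABCAAADDD', 'ABCCCCABCDBBAA']])
batch_y = np.stack([encode_output(s) for s in ['XYZYXYZYYXYZYXY', 'ZXYYRZYZZXYZXY']])
# batch_x: (2, 704, 100), batch_y: (2, 704, 9)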

import tensorflow as tf
from tensorflow.contrib import rnn

learning_rate = 0.001
training_iters = 100000 
batch_size = 10 
display_step = 10 
# Network Parameters 
n_input = 100 
n_steps = 704 # timesteps 
n_hidden = 50 # hidden layer num of features 
n_classes = 9 

x = tf.placeholder("float", [None, n_steps, n_input]) 
y = tf.placeholder("float", [None, n_steps, n_classes]) 

weights = { 
    'out': tf.Variable(tf.random_normal([2*n_hidden, n_classes])) 
} 
biases = { 
    'out': tf.Variable(tf.random_normal([n_classes])) 
} 
def BiRNN(x, weights, biases):
    x = tf.unstack(tf.transpose(x, perm=[1, 0, 2]))

    # Forward direction cell
    lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Backward direction cell
    lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Get lstm cell output
    try:
        outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                                     dtype=tf.float32)
    except Exception:  # Old TensorFlow versions only return outputs, not states
        outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                               dtype=tf.float32)
    # Linear activation, using the rnn inner loop's last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
pred = BiRNN(x, weights, biases) 
# Define loss and optimizer 
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y)) 
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) 
# Evaluate model 
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1)) 
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) 
# Initializing the variables 
init = tf.global_variables_initializer() 
# Launch the graph 
with tf.Session() as sess:
    sess.run(init)
    step = 1
    while step * batch_size < training_iters:
        batch_x, batch_y = next_batch(batch_size, r_big_d, y_r_big_d)
        # batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print("Iter " + str(step*batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " +
                  "{:.5f}".format(acc))
        step += 1
    print("Optimization Finished!")
    test_x, test_y = next_batch(batch_size, v_big_d, y_v_big_d)
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={x: test_x, y: test_y}))
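The code calls next_batch with the arrays r_big_d, y_r_big_d, v_big_d, and y_v_big_d, none of which are shown in the question. A minimal stand-in, assuming the data are pre-encoded NumPy arrays of shape (N, n_steps, n_input) and (N, n_steps, n_classes), might look like this (purely illustrative; the real batching logic is the asker's own):

import numpy as np

def next_batch(batch_size, data, labels):
    # Draw a random batch without replacement; data is (N, n_steps, n_input),
    # labels is (N, n_steps, n_classes).
    idx = np.random.choice(len(data), batch_size, replace=False)
    return data[idx], labels[idx]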

Answer


The first return value of static_bidirectional_rnn is a list of tensors, one per RNN step. By using only the last one in your tf.matmul you lose all the others. Instead, stack them into a single tensor of the appropriate shape, reshape so that each timestep is a row, apply the linear layer, then reshape back:

outputs = tf.stack(outputs, axis=1)                              # (batch_size, n_steps, 2*n_hidden)
outputs = tf.reshape(outputs, (batch_size*n_steps, 2*n_hidden))  # one row per timestep
outputs = tf.matmul(outputs, weights['out']) + biases['out']     # weights['out'] is (2*n_hidden, n_classes)
outputs = tf.reshape(outputs, (batch_size, n_steps, n_classes))

Alternatively, you can use tf.einsum:

outputs = tf.stack(outputs, axis=1)  # (batch_size, n_steps, 2*n_hidden)
outputs = tf.einsum('ijk,kl->ijl', outputs, weights['out']) + biases['out']

But I have another problem at the line "outputs = tf.matmul(outputs, weights['out']) + biases['out']". The error message reads: "Dimensions must be equal, but are 9 and 128 for 'MatMul' (op: 'MatMul') with input shapes: [7040,9], [128,9]". –


Oops, fixed. The first reshape should be lots * n_hidden, not lots * n_classes – DomJack
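For reference, here is the question's BiRNN with the answer's fix folded in, as a sketch. Note the 2*n_hidden width: static_bidirectional_rnn concatenates the forward and backward outputs at each step, which matches the (2*n_hidden, n_classes) weight matrix:

def BiRNN(x, weights, biases):
    # (batch, steps, inputs) -> length-n_steps list of (batch, inputs) tensors
    x = tf.unstack(tf.transpose(x, perm=[1, 0, 2]))
    lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                                 dtype=tf.float32)
    # Each step's output concatenates forward and backward states: 2*n_hidden wide.
    outputs = tf.stack(outputs, axis=1)              # (batch, n_steps, 2*n_hidden)
    outputs = tf.reshape(outputs, (-1, 2*n_hidden))  # (batch*n_steps, 2*n_hidden)
    outputs = tf.matmul(outputs, weights['out']) + biases['out']
    return tf.reshape(outputs, (-1, n_steps, n_classes))

With per-step logits of shape (batch, n_steps, n_classes) matching y, the existing softmax_cross_entropy_with_logits call works unchanged, since it applies the softmax over the last axis. The accuracy line should then compare tf.argmax(pred, 2) with tf.argmax(y, 2) instead of axis 1.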
