2016-10-22

Tensorflow RNN training will not execute? I am currently trying to train this RNN network, but I seem to run into a strange error that I cannot decode.

The input to my network is sampled audio files, represented as numbers. Since the audio files can have different lengths, the vectors of sampled audio will also have different lengths.

The output, or target, of the neural network is to recreate a 14-dimensional vector containing certain information about the audio file. I already know the targets from computing them by hand, but I need to make this work with a neural network.

I am currently using tensorflow as the framework.

My network setup looks like this:

def last_relevant(output): 
    max_length = int(output.get_shape()[1]) 
    relevant = tf.reduce_sum(tf.mul(output, tf.expand_dims(tf.one_hot(length, max_length), -1)), 1) 
    return relevant 

def length(sequence): ## Zero padding to fit the max length... Question whether that is a good idea. 
    used = tf.sign(tf.reduce_max(tf.abs(sequence), reduction_indices=2)) 
    length = tf.reduce_sum(used, reduction_indices=1) 
    length = tf.cast(length, tf.int32) 
    return length 
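To check what length returns on zero-padded input, here is a small NumPy example of the same logic (toy numbers, not my real audio data):

```python
import numpy as np

# Toy batch: 2 sequences, max length 4, 1 feature, zero-padded at the end.
batch = np.array([
    [[0.5], [0.1], [0.0], [0.0]],   # true length 2
    [[0.3], [0.2], [0.7], [0.4]],   # true length 4
])

# Same idea as length(): a time step counts if any feature is non-zero.
used = np.sign(np.abs(batch).max(axis=2))      # shape (2, 4), 1.0 for real steps
lengths = used.sum(axis=1).astype(np.int32)    # per-sequence lengths
print(lengths)                                 # prints [2 4]
```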

def cost(output, target): 
    # Compute cross entropy for each frame. 
    cross_entropy = target * tf.log(output) 
    cross_entropy = -tf.reduce_sum(cross_entropy, reduction_indices=2) 
    mask = tf.sign(tf.reduce_max(tf.abs(target), reduction_indices=2)) 
    cross_entropy *= mask 
    # Average over actual sequence lengths. 
    cross_entropy = tf.reduce_sum(cross_entropy, reduction_indices=1) 
    cross_entropy /= tf.reduce_sum(mask, reduction_indices=1) 
    return tf.reduce_mean(cross_entropy) 
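To sanity-check the masking in cost, the same averaging can be done in plain NumPy on toy values (this assumes a per-frame, rank-3 target as cost expects; my real target is the 14-dimensional vector):

```python
import numpy as np

# Toy per-frame data: batch 1, 3 frames (the last is padding), 2 classes.
target = np.array([[[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]])
output = np.array([[[0.8, 0.2], [0.4, 0.6], [0.5, 0.5]]])

cross_entropy = -(target * np.log(output)).sum(axis=2)   # per-frame cross entropy
mask = np.sign(np.abs(target).max(axis=2))               # 1.0 for real frames, 0.0 for padding
cross_entropy *= mask                                    # padded frame contributes nothing
per_seq = cross_entropy.sum(axis=1) / mask.sum(axis=1)   # average over actual frames only
print(per_seq.mean())                                    # about 0.367
```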
#----------------------------------------------------------------------# 
#----------------------------Main--------------------------------------# 
### Tensorflow neural network setup 

batch_size = None 
sequence_length_max = max_length 
input_dimension=1 

data = tf.placeholder(tf.float32,[batch_size,sequence_length_max,input_dimension]) 
target = tf.placeholder(tf.float32,[None,14]) 

num_hidden = 24 ## Hidden layer 
cell = tf.nn.rnn_cell.LSTMCell(num_hidden,state_is_tuple=True) ## Long short term memory 

output, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32,sequence_length = length(data)) ## Creates the Rnn skeleton 

last = last_relevant(output)#tf.gather(val, int(val.get_shape()[0]) - 1) ## Appending as last 

weight = tf.Variable(tf.truncated_normal([num_hidden, int(target.get_shape()[1])])) 
bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]])) 

prediction = tf.nn.softmax(tf.matmul(last, weight) + bias) 

cross_entropy = cost(output,target)# How far am I from correct value? 

optimizer = tf.train.AdamOptimizer() ## TensorflowOptimizer 
minimize = optimizer.minimize(cross_entropy) 

mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1)) 
error = tf.reduce_mean(tf.cast(mistakes, tf.float32)) 

## Training ## 

init_op = tf.initialize_all_variables() 
sess = tf.Session() 
sess.run(init_op) 

batch_size = 1000 
no_of_batches = int(len(train_data)/batch_size) 
epoch = 5000 
for i in range(epoch): 
    ptr = 0 
    for j in range(no_of_batches): 
        inp, out = train_data[ptr:ptr+batch_size], train_output[ptr:ptr+batch_size] 
        ptr += batch_size 
        sess.run(minimize, {data: inp, target: out}) 
    print "Epoch - ", str(i) 
incorrect = sess.run(error, {data: test_data, target: test_output}) 
print('Epoch {:2d} error {:3.1f}%'.format(i + 1, 100 * incorrect)) 
sess.close() 

The error seems to be in the function last_relevant, which is supposed to take the output and select the relevant part for further use.

This is the error message:

TypeError: Expected binary or unicode string, got <function length at 0x7f846594dde8> 

Can anyone tell me what might be wrong here?

You defined a function called length, and then you pass the function itself to tf.one_hot. Did you do that on purpose? –

Yes... to mask out the relevant parts from the irrelevant ones. length gives me the lengths, and max_length holds the full length –

Answer

I tried to run your code locally. There is a fundamental error in it: you call tf.one_hot, but what you pass to it does not fit what is expected:

Read the documentation here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.one_hot.md

tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None) 

However, you are passing a function pointer as the first argument ("length" is a function in your code; I suggest naming your functions in a meaningful way and avoiding generic keywords) instead of actual indices.

As a wild guess: if you put your actual indices as the first argument (instead of my empty placeholder list below), it would be fixed:

relevant = tf.reduce_sum(
    tf.mul(output, tf.expand_dims(tf.one_hot([], max_length), -1)), 1)
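A minimal NumPy sketch of the fixed selection, with made-up shapes (note that since indexing is zero-based, you probably want length - 1 as the indices, not length itself):

```python
import numpy as np

np.random.seed(0)

# Toy RNN output: batch of 2, max 4 time steps, hidden size 3.
output = np.random.rand(2, 4, 3)
lengths = np.array([2, 4])                 # actual lengths of the two sequences

# One-hot over time at index length-1, i.e. the last *relevant* step.
sel = np.expand_dims(np.eye(4)[lengths - 1], -1)   # shape (2, 4, 1)
last = (output * sel).sum(axis=1)                  # shape (2, 3)

assert np.allclose(last[0], output[0, 1])  # last real step of sequence 0
assert np.allclose(last[1], output[1, 3])  # last real step of sequence 1
```

In your code that means computing the lengths once, e.g. seq_len = length(data), and passing that tensor (minus one) into tf.one_hot inside last_relevant, instead of the function object itself.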