I am working on a bigram-based LSTM, using TensorFlow's sampled_softmax_loss. Since I introduced embeddings, I had to pick the right loss function. Here is my choice:
loss = tf.reduce_mean(
    tf.nn.sampled_softmax_loss(weights=softmax_weights,
                               biases=softmax_biases,
                               labels=tf.concat(train_labels, 0),
                               inputs=logits,
                               num_sampled=num_sampled,
                               num_classes=vocabulary_size))
I get an error about the label tensor's dimensions:

logits has this shape: (640, 13)
labels has this shape: Tensor("concat_2:0", shape=(640, 27), dtype=float32)
I also tried:

labels = tf.reshape(tf.concat(train_labels, 0), [-1])
In both cases I get an error. For the first case, the error is:
Dimension must be 1 but is 27 for
'sampled_softmax_loss/ComputeAccidentalHits' (op:
'ComputeAccidentalHits') with input shapes: [640,27], [20].
For the second case, the error is:
Shape must be rank 2 but is rank 1 for
'sampled_softmax_loss/LogUniformCandidateSampler' (op:
'LogUniformCandidateSampler') with input shapes: [17280].
Here are my parameters:
640 = batch_size * num_unrollings = 64 * 10
27 = vocabulary_size (I am implementing my first embedding, with single characters as the vocabulary)
20 = num_sampled of the loss function
13 = embedding_size
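The flattened size in the second error is consistent with these numbers: reshaping the (640, 27) one-hot label tensor with [-1] gives 640 * 27 = 17280 elements, which is exactly the rank-1 shape the candidate sampler complains about. A minimal sketch of that shape arithmetic (NumPy stands in for the TF tensor here, just to check the shapes):

```python
import numpy as np

batch_size, num_unrollings = 64, 10
vocabulary_size = 27

# Stand-in for the concatenated one-hot labels: shape (640, 27)
labels = np.zeros((batch_size * num_unrollings, vocabulary_size),
                  dtype=np.float32)

print(labels.shape)              # (640, 27)  -- shape in the first error
print(labels.reshape(-1).shape)  # (17280,)   -- shape in the second error
```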
Why doesn't tf.nn.sampled_softmax_loss accept 1-dimensional labels?
TF version: 1.0.1. Python version: 2.7.