Tuple index out of range with an LSTM neural network — Keras with a TensorFlow backend (Python)
I want to create a neural network with the code below (a minimal example, not my real data), but it raises the error
"tuple index out of range" for the size of input_shape on the following line:
model.add(TimeDistributed(LSTM(32,return_sequences=True),input_shape=trainData.shape[1:]))
The error seems to come from line 964 of the recurrent.py file:
self.input_dim = input_shape[2]
when it tries to access input_shape[2]. I am only passing a shape with two numbers (the length of the time series and the number of channels); the shape I pass is (100000, 2).
I assume this line is trying to access an index of something I haven't passed to it.
So my question is: how should I configure the input shape for my neural network?
I am using Keras version 2.0.3 and TensorFlow version 1.0.1.
Edit: recurrent.py is a file that ships with Keras (I think). I don't want to start editing it in case I really break something.
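A likely mechanism for the error (a sketch of my reading of the traceback, not verified line-by-line against the Keras 2.0.3 source): TimeDistributed treats its input as (batch, outer_steps, ...) and hands the wrapped layer the shape with outer_steps stripped. With input_shape=(100000, 2) the full shape is (None, 100000, 2), so the inner LSTM's build() receives the 2-tuple (None, 2), and the input_shape[2] lookup fails:

```python
# Hypothetical reconstruction of the failing lookup in recurrent.py:
# the wrapped LSTM sees only a 2-element shape tuple, so index 2 does not exist.
inner_shape = (None, 2)  # what the inner LSTM receives after TimeDistributed strips a dimension
try:
    inner_shape[2]       # recurrent.py line 964: self.input_dim = input_shape[2]
except IndexError as err:
    message = str(err)
print(message)  # tuple index out of range
```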
# import the necessary packages
from sklearn.cross_validation import train_test_split  # moved to sklearn.model_selection in newer scikit-learn
from keras.models import Sequential
from keras.layers import Activation
from keras.optimizers import SGD
from keras.layers import LSTM
from keras.layers.wrappers import TimeDistributed
from keras.utils import np_utils
import numpy as np
numClasses = 10
time_points = 100000
num_chans = 2
iq = np.empty((time_points, 0, 1), int)  # creates an empty numpy array (unused below)
labels = np.empty([0, 1])  # placeholder; overwritten below
raw_data = np.random.rand(500,time_points,num_chans)
labels = np.random.randint(numClasses, size=(500, 1))
one_hot_labels = np_utils.to_categorical(labels, num_classes=None)
print(one_hot_labels.shape)
# partition the data into training and testing splits, using 75%
# of the data for training and the remaining 25% for testing
print("[INFO] constructing training/testing split...")
(trainData, testData, trainLabels, testLabels) = train_test_split(
raw_data, one_hot_labels, test_size=0.25, random_state=42)
trainLabels = trainLabels.reshape(375,10,1)
testLabels = testLabels.reshape(125,10,1)
print(trainData.shape)
print(testData.shape)
print(trainLabels.shape)
print(testLabels.shape)
print(len(trainData.shape))
# define the architecture of the network
model = Sequential()
# Long short term memory experiment
model.add(TimeDistributed(LSTM(32, return_sequences=True), input_shape=trainData.shape[1:]))  # raises "tuple index out of range"
model.add(TimeDistributed(LSTM(10, return_sequences=True)))
model.add(Activation("softmax"))
print(trainData.shape)
print(trainLabels.shape)
## train the model using SGD
print("[INFO] compiling model...")
sgd = SGD(lr=0.01)
model.compile(loss="sparse_categorical_crossentropy", optimizer=sgd, metrics=["accuracy"])
model.fit(trainData, trainLabels)
## show the accuracy on the testing set
print("[INFO] evaluating on testing set...")
(loss, accuracy) = model.evaluate(testData, testLabels, batch_size=128, verbose=1)
print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss, accuracy * 100))
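If the intent is a single sequence of 100000 time points with 2 channels per sample, a plain LSTM already expects (timesteps, features) per sample, so the TimeDistributed wrapper is probably what breaks. A minimal shape sketch (the time dimension is scaled down from 100000 to 100 so it runs quickly; the commented Keras line is my assumed fix, not tested against 2.0.3):

```python
import numpy as np

# Stand-in for trainData, scaled down from (375, 100000, 2) to keep this fast.
trainData = np.random.rand(375, 100, 2)

# A plain LSTM takes input of shape (batch, timesteps, features), so the
# per-sample shape is just trainData.shape[1:] -- already a 2-tuple of
# (timesteps, channels). The assumed fix would drop TimeDistributed:
#   model.add(LSTM(32, return_sequences=True, input_shape=trainData.shape[1:]))
per_sample_shape = trainData.shape[1:]
print(per_sample_shape)       # (100, 2)
print(len(per_sample_shape))  # 2
```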
What is the purpose of the TimeDistributed LSTM here? An LSTM already takes a sequence, so are you feeding it a sequence of sequences? Not sure I get it... –
In my actual data, the dimension of length 100000 is time, so I have 100000 time sample points in each of two channels, hence the size 100000×2. That is why I thought TimeDistributed was necessary in this case? – bluefocs
What is the output of print(trainData.shape) and print(trainLabels.shape)? –
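For reference, the shapes the code above prints can be worked out without running it (a small arithmetic check; the rounding mirrors how sklearn's train_test_split sizes the test fraction):

```python
import math

n_samples, test_size = 500, 0.25
n_test = math.ceil(n_samples * test_size)  # sklearn rounds the test split up
n_train = n_samples - n_test
print(n_train, n_test)  # 375 125

# Shapes printed by the question's code:
trainData_shape = (n_train, 100000, 2)  # (375, 100000, 2)
trainLabels_shape = (n_train, 10, 1)    # (375, 10, 1) after the reshape
print(trainData_shape, trainLabels_shape)
```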