How to return the history of validation loss in Keras, using Python 2.7 on Windows 10
I am training a language model using the Keras example:
print('Build model...')
model = Sequential()
model.add(GRU(512, return_sequences=True, input_shape=(maxlen, len(chars))))
model.add(Dropout(0.2))
model.add(GRU(512, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
def sample(a, temperature=1.0):
    # helper function to sample an index from a probability array
    a = np.log(a) / temperature
    a = np.exp(a) / np.sum(np.exp(a))
    return np.argmax(np.random.multinomial(1, a, 1))
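As an aside, the helper above reweights the probabilities by temperature before drawing: a low temperature sharpens the distribution towards its mode, a high temperature flattens it. A small self-contained check of that reweighting (NumPy only, no model required; the random draw itself is left out):

```python
import numpy as np

def reweight(a, temperature=1.0):
    # same renormalisation as in sample(), without the multinomial draw
    a = np.log(a) / temperature
    return np.exp(a) / np.sum(np.exp(a))

p = np.array([0.1, 0.9])
sharp = reweight(p, temperature=0.5)  # sharpens towards the max
flat = reweight(p, temperature=2.0)   # flattens towards uniform
print(sharp, flat)
```

With `temperature=0.5` the larger probability grows (here to about 0.99), while `temperature=2.0` pulls the two values closer together; at `temperature=1.0` the input is returned unchanged.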
# train the model, output generated text after each iteration
for iteration in range(1, 3):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    model.fit(X, y, batch_size=128, nb_epoch=1)

    start_index = random.randint(0, len(text) - maxlen - 1)

    for diversity in [0.2, 0.5, 1.0, 1.2]:
        print()
        print('----- diversity:', diversity)

        generated = ''
        sentence = text[start_index: start_index + maxlen]
        generated += sentence
        print('----- Generating with seed: "' + sentence + '"')
        sys.stdout.write(generated)

        for i in range(400):
            x = np.zeros((1, maxlen, len(chars)))
            for t, char in enumerate(sentence):
                x[0, t, char_indices[char]] = 1.

            preds = model.predict(x, verbose=0)[0]
            next_index = sample(preds, diversity)
            next_char = indices_char[next_index]

            generated += next_char
            sentence = sentence[1:] + next_char

            sys.stdout.write(next_char)
            sys.stdout.flush()
        print()
According to the Keras documentation, the model.fit
method returns a History callback, whose history attribute contains the lists of successive losses and other metrics.
hist = model.fit(X, y, validation_split=0.2)
print(hist.history)
After training my model, if I run print(model.history)
I get the error:
AttributeError: 'Sequential' object has no attribute 'history'
How can I return my model's history after training it with the code above?
UPDATE
The problem was that the callback had to be defined first:
from keras.callbacks import History
history = History()
and then passed in via the callbacks option:
model.fit(X_train, Y_train, nb_epoch=5, batch_size=16, callbacks=[history])
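For reference, model.fit returns a fresh History object on every call, so in a training loop like the one above the per-iteration losses have to be collected manually across calls. A minimal sketch of that accumulation pattern, using a stand-in fit function (the real Keras model and data are assumed, not shown):

```python
all_losses = []  # accumulated across all iterations of the training loop

def fit_stub(X, y):
    """Stand-in for model.fit: returns an object with a .history dict,
    mirroring the shape of the History callback Keras returns."""
    class Hist(object):
        history = {'loss': [0.5]}  # one epoch -> one loss value
    return Hist()

for iteration in range(1, 3):
    hist = fit_stub(None, None)  # real code: hist = model.fit(X, y, batch_size=128, nb_epoch=1)
    all_losses.extend(hist.history['loss'])

print(all_losses)  # losses from every iteration, in order
```

Each iteration's hist.history would otherwise be overwritten by the next fit call, which is why capturing only the last return value loses earlier losses.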
But now, if I print
print(history.History)
it returns
{}
even though I ran an iteration.
Could you specify whether you are running this code from a console, or running the script from the command line (or an IDE)? Do you have access to the hist variable after training? –
I am running Anaconda. I found a solution that gives me access to the hist variable, but it always returns an empty dictionary ({}). – ishido