I am trying to implement a sparse autoencoder based on Andrew Ng's lecture notes, as shown here. It requires applying a sparsity constraint on the autoencoder layer by introducing a penalty term (the K-L divergence). I tried to do this using the directions provided here, with some minor changes. Below is the SparseActivityRegularizer class implementing the K-L divergence and the sparsity penalty term. What is the correct way to implement a custom activity regularizer in Keras?
def kl_divergence(p, p_hat):
    return (p * K.log(p / p_hat)) + ((1 - p) * K.log((1 - p) / (1 - p_hat)))


class SparseActivityRegularizer(Regularizer):
    sparsityBeta = None

    def __init__(self, l1=0., l2=0., p=-0.9, sparsityBeta=0.1):
        self.p = p
        self.sparsityBeta = sparsityBeta

    def set_layer(self, layer):
        self.layer = layer

    def __call__(self, loss):
        # p_hat needs to be the average activation of the units in the hidden layer.
        p_hat = T.sum(T.mean(self.layer.get_output(True), axis=0))
        loss += self.sparsityBeta * kl_divergence(self.p, p_hat)
        return loss

    def get_config(self):
        return {"name": self.__class__.__name__,
                "p": self.l1}
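For reference, the K-L term above can be sanity-checked numerically outside of Keras/Theano. A minimal NumPy sketch (the sparsity target ρ = 0.05 and the sample `p_hat` values are illustrative, not from my model):

```python
import numpy as np

def kl_divergence(p, p_hat):
    # KL divergence between two Bernoulli distributions with means p and p_hat;
    # both arguments must lie in the open interval (0, 1) for the logs to be defined.
    return p * np.log(p / p_hat) + (1 - p) * np.log((1 - p) / (1 - p_hat))

p = 0.05                              # illustrative sparsity target rho
p_hat = np.array([0.02, 0.05, 0.2])   # hypothetical mean activations of three hidden units

penalty = kl_divergence(p, p_hat)
print(penalty)  # zero where p_hat == p, positive elsewhere
```

Note that the term is only well-defined when both `p` and `p_hat` are strictly between 0 and 1.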
The model is built like this:
X_train = np.load('X_train.npy')
X_test = np.load('X_test.npy')
autoencoder = Sequential()
encoder = containers.Sequential([Dense(250, input_dim=576, init='glorot_uniform', activation='tanh',
activity_regularizer=SparseActivityRegularizer(p=-0.9, sparsityBeta=0.1))])
decoder = containers.Sequential([Dense(576, input_dim=250)])
autoencoder.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=True))
autoencoder.layers[0].build()
autoencoder.compile(loss='mse', optimizer=SGD(lr=0.001, momentum=0.9, nesterov=True))
loss = autoencoder.fit(X_train_tmp, X_train_tmp, nb_epoch=200, batch_size=800, verbose=True, show_accuracy=True, validation_split = 0.3)
autoencoder.save_weights('SparseAutoEncoder.h5',overwrite = True)
result = autoencoder.predict(X_test)
When I call the fit() function, I get negative loss values and the output does not resemble the input at all. I want to know where I am going wrong. What is the correct way to compute the average activation of a layer and use it in this custom sparsity regularizer? Any kind of help would be greatly appreciated. Thanks!
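For comparison, my understanding of Ng's notes is that ρ̂_j is the mean activation of hidden unit j over the batch, and the penalty is the sum of KL(ρ‖ρ̂_j) over units (the notes assume sigmoid units, so activations lie in (0, 1)). A NumPy sketch of that computation with made-up activations (all names and values here are illustrative):

```python
import numpy as np

# Hypothetical hidden-layer activations: batch of 4 examples, 3 hidden units,
# values in (0, 1) as for sigmoid units.
activations = np.array([[0.1, 0.6, 0.3],
                        [0.2, 0.5, 0.1],
                        [0.1, 0.7, 0.2],
                        [0.1, 0.4, 0.2]])

rho = 0.05  # illustrative sparsity target

# Per-unit mean activation over the batch (axis 0); one rho_hat per hidden unit.
rho_hat = activations.mean(axis=0)

# Per-unit KL divergence, then summed over units to give the scalar penalty.
kl = rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
penalty = kl.sum()
print(rho_hat, penalty)
```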
I am using Keras 0.3.1 with Python 2.7, because the latest Keras release (1.0.1) does not have the Autoencoder layer.