2016-11-23 98 views

How to perform deconvolution in Keras/Theano?

I want to implement deconvolution in Keras. My model is defined as follows:

from keras.models import Sequential
from keras.layers import (Activation, Convolution2D, Dense, Dropout,
                          Flatten, MaxPooling2D)

model = Sequential()


model.add(Convolution2D(32, 3, 3, border_mode='same', 
         input_shape=X_train.shape[1:])) 
model.add(Activation('relu')) 
model.add(Convolution2D(32, 3, 3, border_mode='same')) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2, 2))) 
model.add(Dropout(0.25)) 

model.add(Convolution2D(64, 3, 3, border_mode='same')) 
model.add(Activation('relu')) 
model.add(Convolution2D(64, 3, 3,border_mode='same')) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2, 2))) 
model.add(Dropout(0.25)) 

model.add(Flatten()) 
model.add(Dense(512)) 
model.add(Activation('relu')) 
model.add(Dropout(0.5)) 
model.add(Dense(nb_classes)) 
model.add(Activation('softmax')) 

I want to apply deconvolution, i.e. transposed convolution, to the output of the first convolutional layer, convolution2d_1.

Say the feature map after the first convolutional layer is X, with shape (9, 32, 32, 32), where 9 is the number of 32x32 images I passed through the layer. The weights of the first layer are obtained with Keras' get_weights() function; the weight matrix has dimensions (32, 3, 3, 2).

The code I use for the transposed convolution is:

conv_out = K.deconv2d(self.x, W, (9,3,32,32), dim_ordering = "th") 
deconv_func = K.function([self.x, K.learning_phase()], conv_out) 
X_deconv = deconv_func([X, 0 ]) 

But I get this error:

CorrMM shape inconsistency: 
    bottom shape: 9 32 34 34 
    weight shape: 3 32 3 3 
    top shape: 9 32 32 32 (expected 9 3 32 32) 
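For context on the "expected 9 3 32 32" line: the channel count of a transposed convolution comes from the kernel, and its spatial size inverts the forward-convolution formula. A minimal sketch of that bookkeeping (my own notation, not Theano's actual check):

```python
def deconv_output_size(in_size, kernel_size, stride=1, pad=0):
    """Spatial size produced by a transposed convolution: the inverse of the
    forward-conv formula out = (in + 2*pad - kernel) // stride + 1."""
    return (in_size - 1) * stride + kernel_size - 2 * pad

# With a 3x3 kernel and 'same' padding (pad=1), 32 stays 32, so the
# expected top shape is (9, 3, 32, 32): batch 9, 3 image channels, 32x32.
size = deconv_output_size(32, 3, stride=1, pad=1)
```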

Can anyone tell me where I am going wrong?

Answers


You can simply use the Deconvolution2D layer.

Here is what you are trying to achieve:

import numpy as np
from keras import backend as K
from keras.layers import Deconvolution2D

batch_sz = 1
output_shape = (batch_sz,) + X_train.shape[1:]
conv_out = Deconvolution2D(3, 3, 3, output_shape, border_mode='same')(model.layers[0].output)

deconv_func = K.function([model.input, K.learning_phase()], [conv_out]) 

test_x = np.random.random(output_shape) 
X_deconv = deconv_func([test_x, 0 ]) 

But it is better to create a functional Model, which will help with both training and prediction:

from keras.models import Model

batch_sz = 10
output_shape = (batch_sz,) + X_train.shape[1:] 
conv_out = Deconvolution2D(3, 3, 3, output_shape, border_mode='same')(model.layers[0].output) 

model2 = Model(model.input, [model.output, conv_out]) 
model2.summary() 
model2.compile(loss=['categorical_crossentropy', 'mse'], optimizer='adam') 
model2.fit(X_train, [Y_train, X_train], batch_size=batch_sz) 
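Under the hood, a Deconvolution2D layer scatters each input pixel back over a kernel-sized output patch. A hand-rolled NumPy sketch of that operation (an illustration of the math, not Keras code):

```python
import numpy as np

def conv_transpose2d(x, w, stride=1):
    """Naive 2-D transposed convolution ('valid' padding) via scatter-add.
    x: (H, W) input feature map; w: (kH, kW) kernel."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros(((H - 1) * stride + kH, (W - 1) * stride + kW))
    for i in range(H):
        for j in range(W):
            # each input pixel "paints" a kernel-sized patch into the output
            out[i * stride:i * stride + kH, j * stride:j * stride + kW] += x[i, j] * w
    return out

# A 2x2 map with a 3x3 kernel grows to 4x4, mirroring how a 3x3 'valid'
# convolution would shrink a 4x4 input down to 2x2.
up = conv_transpose2d(np.ones((2, 2)), np.ones((3, 3)))
```

With stride 2, the same loop roughly doubles the spatial size, which is why transposed convolutions are a standard choice for learned upsampling.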

In Keras, the Conv2DTranspose layer performs transposed convolution, in other terms deconvolution. It is supported on both backends, i.e. Theano and TensorFlow.

The Keras documentation says:

Conv2DTranspose

Transposed convolution layer (sometimes called Deconvolution).

The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
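The quoted "opposite direction, compatible connectivity" wording can be made concrete: if a 1-D 'valid' convolution is written as a matrix C, the transposed convolution is literally multiplication by C.T (a NumPy illustration, not library code):

```python
import numpy as np

def conv1d_matrix(kernel, n_in):
    """Matrix C such that C @ x equals the 'valid' 1-D correlation
    of a length-n_in signal x with the given kernel."""
    k = len(kernel)
    n_out = n_in - k + 1
    C = np.zeros((n_out, n_in))
    for i in range(n_out):
        C[i, i:i + k] = kernel  # kernel slides one step per output row
    return C

kernel = np.array([1.0, 2.0, 1.0])
C = conv1d_matrix(kernel, 6)   # forward conv: R^6 -> R^4
y = np.arange(4.0)
x_up = C.T @ y                 # transposed conv: R^4 -> R^6, same connectivity
```

The transposed operator maps something with the *output* shape of the convolution back to something with its *input* shape, exactly as the documentation describes.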