
How to implement a custom Keras layer with multiple inputs

I need to implement a custom layer like this one:

import numpy as np
from keras import activations
from keras import backend as K
from keras.engine.topology import Layer
from keras.layers import Input, Multiply
from keras.models import Model

class MaskedDenseLayer(Layer):
    def __init__(self, output_dim, activation, **kwargs):
        self.output_dim = output_dim
        super(MaskedDenseLayer, self).__init__(**kwargs)
        self._activation = activations.get(activation)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[0][1], self.output_dim),
                                      initializer='glorot_uniform',
                                      trainable=True)
        super(MaskedDenseLayer, self).build(input_shape)

    def call(self, l):
        self.x = l[0]         # output of the previous layer
        self._mask = l[1][1]  # mask matrix for this layer
        print('kernel:', self.kernel)
        masked = Multiply()([self.kernel, self._mask])
        self._output = K.dot(self.x, masked)
        return self._activation(self._output)

    def compute_output_shape(self, input_shape):
        return (input_shape[0][0], self.output_dim)

This follows the way the Keras API documentation describes implementing a custom layer. I need to feed two inputs into this layer, like this:

def main():
    with np.load('datasets/simple_tree.npz') as dataset:
        inputsize = dataset['inputsize']
        train_length = dataset['train_length']
        train_data = dataset['train_data']
        valid_length = dataset['valid_length']
        valid_data = dataset['valid_data']
        test_length = dataset['test_length']
        test_data = dataset['test_data']
        params = dataset['params']

    num_of_all_masks = 20
    num_of_hlayer = 6
    hlayer_size = 5
    graph_size = 4

    all_masks = generate_all_masks(num_of_all_masks, num_of_hlayer,
                                   hlayer_size, graph_size)

    input_layer = Input(shape=(4,))

    mask_1 = Input(shape=(graph_size, hlayer_size))
    mask_2 = Input(shape=(hlayer_size, hlayer_size))
    mask_3 = Input(shape=(hlayer_size, hlayer_size))
    mask_4 = Input(shape=(hlayer_size, hlayer_size))
    mask_5 = Input(shape=(hlayer_size, hlayer_size))
    mask_6 = Input(shape=(hlayer_size, hlayer_size))
    mask_7 = Input(shape=(hlayer_size, graph_size))

    hlayer1 = MaskedDenseLayer(hlayer_size, 'relu')([input_layer, mask_1])
    hlayer2 = MaskedDenseLayer(hlayer_size, 'relu')([hlayer1, mask_2])
    hlayer3 = MaskedDenseLayer(hlayer_size, 'relu')([hlayer2, mask_3])
    hlayer4 = MaskedDenseLayer(hlayer_size, 'relu')([hlayer3, mask_4])
    hlayer5 = MaskedDenseLayer(hlayer_size, 'relu')([hlayer4, mask_5])
    hlayer6 = MaskedDenseLayer(hlayer_size, 'relu')([hlayer5, mask_6])
    output_layer = MaskedDenseLayer(graph_size, 'sigmoid')([hlayer6, mask_7])

    autoencoder = Model(inputs=[input_layer, mask_1, mask_2, mask_3,
                                mask_4, mask_5, mask_6, mask_7],
                        outputs=[output_layer])

    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    #reassign_mask = ReassignMask()

    for i in range(0, num_of_all_masks):
        state = np.random.randint(0, 20)
        autoencoder.fit(x=[train_data,
                           np.tile(all_masks[state][0], [300, 1, 1]),
                           np.tile(all_masks[state][1], [300, 1, 1]),
                           np.tile(all_masks[state][2], [300, 1, 1]),
                           np.tile(all_masks[state][3], [300, 1, 1]),
                           np.tile(all_masks[state][4], [300, 1, 1]),
                           np.tile(all_masks[state][5], [300, 1, 1]),
                           np.tile(all_masks[state][6], [300, 1, 1])],
                        y=[train_data],
                        epochs=1,
                        batch_size=20,
                        shuffle=True,
                        #validation_data=(valid_data, valid_data),
                        #callbacks=[reassign_mask],
                        verbose=1)

Unfortunately, when I run this code I get the following error:

TypeError: can only concatenate tuple (not "int") to tuple 

What I need is a way to implement a custom layer that takes two inputs: the output of the previous layer and a mask matrix. The all_masks variable here is a list holding pre-generated masks for all of the layers.
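
generate_all_masks itself is not shown here; a purely illustrative placeholder consistent with the shapes used in main() above (random binary masks, one matrix of shape (previous layer size, current layer size) per layer) could be:

def generate_all_masks(num_masks, num_of_hlayer, hlayer_size, graph_size):
    # one binary matrix per connection: input->h1, h1->h2, ..., h6->output
    shapes = ([(graph_size, hlayer_size)]
              + [(hlayer_size, hlayer_size)] * (num_of_hlayer - 1)
              + [(hlayer_size, graph_size)])
    return [[np.random.randint(0, 2, size=s).astype('float32') for s in shapes]
            for _ in range(num_masks)]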

Can anyone help? What is going wrong in my code here?

Update

Some parameters:

Train data: (300, 4)

Number of hidden layers: 6

Hidden layer units: 5

Masks: (previous layer size, current layer size)

Here is my model summary:

__________________________________________________________________________________________________ 
Layer (type)     Output Shape   Param #  Connected to      
================================================================================================== 
input_361 (InputLayer)   (None, 4)   0            
__________________________________________________________________________________________________ 
input_362 (InputLayer)   (None, 4, 5)   0            
__________________________________________________________________________________________________ 
masked_dense_layer_281 (MaskedD (None, 5)   20   input_361[0][0]     
                   input_362[0][0]     
__________________________________________________________________________________________________ 
input_363 (InputLayer)   (None, 5, 5)   0            
__________________________________________________________________________________________________ 
masked_dense_layer_282 (MaskedD (None, 5)   25   masked_dense_layer_281[0][0]  
                   input_363[0][0]     
__________________________________________________________________________________________________ 
input_364 (InputLayer)   (None, 5, 5)   0            
__________________________________________________________________________________________________ 
masked_dense_layer_283 (MaskedD (None, 5)   25   masked_dense_layer_282[0][0]  
                   input_364[0][0]     
__________________________________________________________________________________________________ 
input_365 (InputLayer)   (None, 5, 5)   0            
__________________________________________________________________________________________________ 
masked_dense_layer_284 (MaskedD (None, 5)   25   masked_dense_layer_283[0][0]  
                   input_365[0][0]     
__________________________________________________________________________________________________ 
input_366 (InputLayer)   (None, 5, 5)   0            
__________________________________________________________________________________________________ 
masked_dense_layer_285 (MaskedD (None, 5)   25   masked_dense_layer_284[0][0]  
                   input_366[0][0]     
__________________________________________________________________________________________________ 
input_367 (InputLayer)   (None, 5, 5)   0            
__________________________________________________________________________________________________ 
masked_dense_layer_286 (MaskedD (None, 5)   25   masked_dense_layer_285[0][0]  
                   input_367[0][0]     
__________________________________________________________________________________________________ 
input_368 (InputLayer)   (None, 5, 4)   0            
__________________________________________________________________________________________________ 
masked_dense_layer_287 (MaskedD (None, 4)   20   masked_dense_layer_286[0][0]  
                   input_368[0][0]     
================================================================================================== 
Total params: 165 
Trainable params: 165 
Non-trainable params: 0 

Answer

input_shape is a list of tuples.

input_shape: [(None, 4), (None, 4, 5)] 
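
Incidentally, this is where the TypeError above comes from: each element such as input_shape[0] is itself a tuple, and plain Python raises that exact message whenever a whole tuple is concatenated with an int, for example:

>>> (None, 4) + 5
TypeError: can only concatenate tuple (not "int") to tuple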

You cannot simply use input_shape[0] or input_shape[1]. If you want the actual values, you have to first select which tuple, and then which value inside it. For example:

self.kernel = self.add_weight(name='kernel',

                              # here:
                              shape=(input_shape[0][1], self.output_dim),

                              initializer='glorot_uniform',
                              trainable=True)

The same care is needed (following your own shape rules) in the compute_output_shape method, where it seems what you want is to concatenate tuples:

return input_shape[0] + (self.output_dim,) 
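
For the example shapes above this is plain tuple concatenation: (None, 4) + (self.output_dim,) evaluates to (None, 4, self.output_dim), so check that this matches the output shape you actually intend.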

And don't forget to uncomment the super(MaskedDenseLayer, self).build(input_shape) line.

Thanks Daniel, but the code still does not work; the error has changed to: Dimensions must be equal, but are 5 and 4 for 'masked_dense_layer_293/MatMul' (op: 'MatMul') with input shapes: [?,5], [4,20]. – muradin

Now the error is much easier to understand: the shapes are incompatible for matrix multiplication. The problem lies in 'output = K.dot(self.x, masked)'. What shapes do you expect self.x and masked to have? And what kind of multiplication do you want? –

My call function involves two multiplications. First I element-wise multiply the mask with the kernel, then I matrix-multiply the result with the input x. My input data x consists of 300 rows of training data with 4 columns. I have 20 masks, each consisting of 7 mask matrices (the same as the number of layers). They are input × output matrices, and I want to use one of these 20 masks at random. – muradin
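
Based on that description, a minimal sketch of a call() that performs both steps batch-wise (my own illustration, not from the thread) would mask the kernel element-wise and then use K.batch_dot for the per-sample matrix product; it assumes the mask input keeps its batch dimension, with shape (batch, in_dim, out_dim):

def call(self, inputs):
    x, mask = inputs  # x: (batch, in_dim), mask: (batch, in_dim, out_dim)
    # element-wise kernel masking; self.kernel of shape (in_dim, out_dim)
    # broadcasts across the batch axis of the mask
    masked = self.kernel * mask
    # contract axis 1 of x with axis 1 of masked -> (batch, out_dim)
    output = K.batch_dot(x, masked, axes=[1, 1])
    return self._activation(output)

This avoids the l[1][1] indexing in the original call, which selects the mask of a single sample instead of using the whole batch, and it keeps the two operands shape-compatible for the matrix product, while compute_output_shape can stay as (input_shape[0][0], self.output_dim).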