
Theano convolution: TypeError: conv2d() got multiple values for argument 'input'

I'm trying to build a "double" layer: first a convolution, then max pooling. The network will be fed 20x20 input images and should output a class from [0, 25]. When I try to build the function, I get the error TypeError: conv2d() got multiple values for argument 'input' while activating the conv-pool layer.

class ConvPoolLayer:
    conv_func = T.nnet.conv2d
    pool_func = max_pool_2d

    def __init__(self, image_shape, n_feature_maps, act_func,
                 local_receptive_field_size=(5, 5), pool_size=(2, 2),
                 init_weight_func=init_rand_weights, init_bias_weight_func=init_rand_weights):
        """
        Generate a convolutional and a subsequent pooling layer with one bias node for each channel in the pooling layer.
        :param image_shape: tuple(batch size, input channels, input rows, input columns) where
            input_channels = number of feature maps in upstream layer
            input rows, input columns = output size of upstream layer
        :param n_feature_maps: number of feature maps/filters in this layer
        :param local_receptive_field_size: size of local receptive field
        :param pool_size:
        :param act_func:
        :param init_weight_func:
        :param init_bias_weight_func:
        """
        self.image_shape = image_shape
        self.filter_shape = (n_feature_maps, image_shape[1]) + local_receptive_field_size
        self.act_func = act_func
        self.pool_size = pool_size
        self.weights = init_weight_func(self.filter_shape)
        self.bias_weights = init_bias_weight_func((n_feature_maps,))
        self.params = [self.weights, self.bias_weights]
        self.output_values = None

    def activate(self, input_values):
        """
        :param input_values: the output from the upstream layer (which is input to this layer)
        :return:
        """
        input_values = input_values.reshape(self.image_shape)
        conv = self.conv_func(
            input=input_values,
            image_shape=self.image_shape,
            filters=self.weights,
            filter_shape=self.filter_shape
        )
        pooled = self.pool_func(
            input=conv,
            ds=self.pool_size,
            ignore_border=True
        )
        self.output_values = self.act_func(pooled + self.bias_weights.dimshuffle('x', 0, 'x', 'x'))

    def output(self):
        assert self.output_values is not None, 'Asking for output before activating layer'
        return self.output_values


def test_conv_layer():
    batch_size = 10
    input_shape = (20, 20)
    output_shape = (26,)
    image_shape = (batch_size, 1) + input_shape  # e.g. image_shape = (10, 1, 20, 20)
    n_feature_maps = 10
    convpool_layer = ConvPoolLayer(image_shape, n_feature_maps, T.nnet.relu)

    x = T.fmatrix('X')
    y = T.fmatrix('Y')

    convpool_layer.activate(x)


test_conv_layer()

Answer


The problem is that you are making conv_func() a method of the class ConvPoolLayer(). So when you do:

conv = self.conv_func(input=input_values,
                      image_shape=self.image_shape,
                      filters=self.weights,
                      filter_shape=self.filter_shape)

Python, behind the scenes, turns that into this:

conv = ConvPoolLayer.conv_func(self, input=input_values,
                               image_shape=self.image_shape,
                               filters=self.weights,
                               filter_shape=self.filter_shape)

And since input is the first argument, you end up giving it multiple values.
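The same mechanism can be reproduced without Theano. A minimal sketch (the names f and Layer below are made up for illustration):

def f(input, filters):
    return input, filters

class Layer:
    func = f   # a plain function stored as a class attribute

layer = Layer()
# Accessed through the instance, func becomes a bound method, so the call below
# is really f(layer, input=1, filters=2) and 'input' receives two values:
layer.func(input=1, filters=2)   # TypeError: f() got multiple values for argument 'input'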

You can avoid this by wrapping the function in staticmethod(), like so:

conv_func = staticmethod(T.nnet.conv2d) 

or by setting the conv_func attribute from inside __init__ instead. Note that you will run into the same problem with pool_func.
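A minimal sketch of both fixes applied to the class attributes. The import location of max_pool_2d differs between Theano versions (theano.tensor.signal.downsample vs. theano.tensor.signal.pool), so treat that import as an assumption:

import theano.tensor as T
from theano.tensor.signal.downsample import max_pool_2d   # path depends on Theano version

class ConvPoolLayer:
    # Option 1: staticmethod() stops Python from passing self implicitly
    conv_func = staticmethod(T.nnet.conv2d)
    pool_func = staticmethod(max_pool_2d)

    # Option 2: bind the functions as instance attributes instead
    # def __init__(self, ...):
    #     self.conv_func = T.nnet.conv2d
    #     self.pool_func = max_pool_2d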


Thanks a lot. I've been grinding away at this for hours without getting anywhere! Optional follow-up question: why do the bias weights need to be dimshuffled before they are added to the pooled output? `self.output_values = self.act_func(pooled + self.bias_weights.dimshuffle('x', 0, 'x', 'x'))` Would it be possible to shape the bias weights correctly in the first place? – tsorn


You can shape bias_weights as (1, n_feature_maps, 1, 1) if you want, but the dimshuffle is only a temporary view. The reason you need it is that the operands have to have the same (broadcast-compatible) shape to be added together. – abergeron
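To illustrate the broadcasting point from the comment above, a small sketch assuming standard Theano:

import numpy as np
import theano
import theano.tensor as T

floatX = theano.config.floatX
pooled = T.tensor4('pooled')                                   # (batch, feature maps, rows, cols)
bias = theano.shared(np.zeros(10, dtype=floatX), name='bias')  # shape (n_feature_maps,)

# Adding the raw 1-D bias would align it with the last axis (columns), not the
# feature-map axis. dimshuffle('x', 0, 'x', 'x') gives it shape (1, 10, 1, 1)
# with the size-1 axes marked broadcastable, so the addition happens per feature map:
out = pooled + bias.dimshuffle('x', 0, 'x', 'x')

f = theano.function([pooled], out)
print(f(np.zeros((2, 10, 8, 8), dtype=floatX)).shape)          # (2, 10, 8, 8)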
