
Because I am running the session manually, I can't seem to collect the trainable weights of a specific layer. How do I get a layer's trainable weights when running the session manually with Keras?

    x = Convolution2D(16, 3, 3, init='he_normal', border_mode='same')(img)

    # first group of residual blocks (no subsampling)
    for i in range(0, self.blocks_per_group):
        nb_filters = 16 * self.widening_factor
        x = residual_block(x, nb_filters=nb_filters, subsample_factor=1)

    # second group, subsampling once at the start of the group
    for i in range(0, self.blocks_per_group):
        nb_filters = 32 * self.widening_factor
        if i == 0:
            subsample_factor = 2
        else:
            subsample_factor = 1
        x = residual_block(x, nb_filters=nb_filters, subsample_factor=subsample_factor)

    # third group, subsampling once at the start of the group
    for i in range(0, self.blocks_per_group):
        nb_filters = 64 * self.widening_factor
        if i == 0:
            subsample_factor = 2
        else:
            subsample_factor = 1
        x = residual_block(x, nb_filters=nb_filters, subsample_factor=subsample_factor)

    x = BatchNormalization(axis=3)(x)
    x = Activation('relu')(x)
    x = AveragePooling2D(pool_size=(8, 8), strides=None, border_mode='valid')(x)
    x = tf.reshape(x, [-1, np.prod(x.get_shape()[1:].as_list())])

    # Readout layer
    preds = Dense(self.nb_classes, activation='softmax')(x)

    loss = tf.reduce_mean(categorical_crossentropy(labels, preds))

    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # manual training loop
    with sess.as_default():

        for i in range(10):

            batch = self.next_batch(self.batch_num)
            _, l = sess.run([optimizer, loss],
                            feed_dict={img: batch[0], labels: batch[1]})
            print(l)
            print(type(weights))  # <-- this is where I want the last conv layer's weights
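
One way to see what is trainable in a manually driven setup like the one above is to go through the TensorFlow graph itself: with the TF backend, every Keras layer registers its variables there. A minimal sketch, assuming sess is the session used in the snippet (the variable names in the comment are only illustrative, not taken from the original code):

    import tensorflow as tf

    # list every trainable tf.Variable that the Keras layers registered
    for var in tf.trainable_variables():
        print(var.name, var.get_shape())   # e.g. 'convolution2d_1_W:0', 'convolution2d_1_b:0'

    # fetch their current values as numpy arrays through the manual session
    weight_values = sess.run(tf.trainable_variables())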

I want to get the weights of the last convolutional layer.

I have tried get_trainable_weights(layer) and layer.get_weights(), but I did not manage to get the weights anywhere.

Error

AttributeError: 'Tensor' object has no attribute 'trainable_weights' 
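
The AttributeError points at the likely cause: in the functional API, Convolution2D(...)(img) returns a Tensor, while trainable_weights and get_weights() live on the Layer instance. A minimal sketch under that assumption (conv_layer is a name introduced here for illustration, not from the original code):

    # keep a handle to the layer object instead of only its output tensor
    conv_layer = Convolution2D(16, 3, 3, init='he_normal', border_mode='same')
    x = conv_layer(img)                    # x is a Tensor; conv_layer is the Layer

    print(conv_layer.trainable_weights)    # list of tf.Variable (kernel and bias)

    # current values as numpy arrays, either through Keras ...
    kernel, bias = conv_layer.get_weights()

    # ... or directly through the manually managed session
    kernel, bias = sess.run(conv_layer.trainable_weights)

Note that get_weights() goes through the Keras backend session, so in a fully manual setup fetching conv_layer.trainable_weights with sess.run may be the safer route.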
