
How to extract feature vectors from a fine-tuned network in Keras

I want to extract feature vectors from the added dense layer after fine-tuning the Inception v3 CNN in Keras on new data. Basically, I load the network structure and its weights, add two dense layers on top of the base network (my data is a 2-class problem), and update the weights, as the code below shows:

# imports used by the snippets below
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a global spatial average pooling layer 
x = base_model.output 
x = GlobalAveragePooling2D()(x) 

# let's add a fully-connected layer 
x = Dense(64, activation='relu')(x) 

# and a logistic layer -- I have 2 classes only 
predictions = Dense(2, activation='softmax')(x) 

# this is the model to train 
model = Model(inputs=base_model.input, outputs=predictions) 

# first: train only the top layers (which were randomly initialized) 
# i.e. freeze all convolutional InceptionV3 layers 

for layer in base_model.layers: 
     layer.trainable = False 

# compile the model (should be done *after* setting layers to non-trainable) 
model.compile(optimizer='rmsprop', loss='categorical_crossentropy') 

# load new training data
x_train, x_test, y_train, y_test = load_data(train_data, test_data, train_labels, test_labels)

datagen = ImageDataGenerator()  
datagen.fit(x_train) 

epochs=1 
batch_size=32 

# train the model on the new data for a few epochs 
model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    steps_per_epoch=x_train.shape[0] // batch_size,
                    epochs=epochs,
                    validation_data=(x_test, y_test))

# at this point, the top layers are well trained and
# I can start fine-tuning convolutional layers from Inception V3.
# I will freeze the bottom N layers and train the remaining top layers.
# I chose to train the top 2 inception blocks, i.e. I will freeze the
# first 249 layers and unfreeze the rest:

for layer in model.layers[:249]: 
    layer.trainable = False 
for layer in model.layers[249:]: 
    layer.trainable = True 

# I need to recompile the model for these modifications to take effect 
# I use SGD with a low learning rate 
from keras.optimizers import SGD 
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['binary_accuracy']) 

# I train the model again (this time fine-tuning the top 2 inception blocks alongside the top Dense layers)
model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    steps_per_epoch=x_train.shape[0] // batch_size,
                    epochs=epochs,
                    validation_data=(x_test, y_test))

This code runs perfectly well; it is not the problem.

My problem is that, after fine-tuning this network, I want to get the output of the last added layer for my training and test data, because I want to use this new network as a feature extractor. I want the output of this part of the network, which you can see in the code above:

x = Dense(64, activation='relu')(x) 

I tried the following code, but it does not work:

from keras import backend as K 
inputs = [K.learning_phase()] + model.inputs 
_convout1_f = K.function(inputs, model.get_layer(dense_1).output) 

The error is the following:

_convout1_f = K.function(inputs, model.get_layer(dense_1).output) 
NameError: global name 'dense_1' is not defined 

How can I extract features from the newly added layer after fine-tuning the pre-trained network on my new data? What am I doing wrong here?
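A side note on the error itself: get_layer expects the layer's name as a string (or an index via its index argument), so referring to dense_1 as a bare Python variable raises the NameError. A minimal sketch that avoids it, assuming the added Dense layer kept Keras's default name 'dense_1', would be:

from keras import backend as K

# 'dense_1' is Keras's default name for the first Dense layer added above;
# check model.summary() if the layer ended up with a different name.
inputs = [K.learning_phase()] + model.inputs
_convout1_f = K.function(inputs, [model.get_layer('dense_1').output])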

Answer


I solved my own problem. Hopefully it will work for you too.

First, the K.function for extracting the features is this:

_convout1_f = K.function([model.layers[0].input, K.learning_phase()],
                         [model.layers[312].output])

where 312 is the index of the layer from which I want to extract the features.
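If you prefer not to hard-code the layer index, a small sketch that looks the added layer up by name is shown below; this assumes the 64-unit Dense layer kept Keras's default name 'dense_1' (check model.summary() to confirm):

from keras import backend as K

# look the added Dense layer up by name instead of by a hard-coded index;
# 'dense_1' is an assumed default name, verify it with model.summary()
dense_layer = model.get_layer('dense_1')

_convout1_f = K.function([model.layers[0].input, K.learning_phase()],
                         [dense_layer.output])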

Then, I pass this _convout1_f as an argument to a function, like this:

features_train, features_test = feature_vectors_generator(x_train, x_test, _convout1_f)

The function that extracts these features is the following:

import numpy

def feature_vectors_generator(x_train, x_test, _convout1_f):

    print('Generating Training Feature Vectors...')

    batch_size = 100
    index = 0
    if x_train.shape[0] % batch_size == 0:
        max_iterations = x_train.shape[0] // batch_size
    else:
        max_iterations = (x_train.shape[0] // batch_size) + 1

    for i in range(0, max_iterations):

        if i == 0:
            # first batch: learning phase 1 (training mode) for the training data
            features = _convout1_f([x_train[index:index + batch_size], 1])[0]
            index = index + batch_size
            features = numpy.squeeze(features)
            features_train = features

        elif i == max_iterations - 1:
            # last (possibly partial) batch
            features = _convout1_f([x_train[index:x_train.shape[0]], 1])[0]
            features = numpy.squeeze(features)
            features_train = numpy.append(features_train, features, axis=0)

        else:
            features = _convout1_f([x_train[index:index + batch_size], 1])[0]
            index = index + batch_size
            features = numpy.squeeze(features)
            features_train = numpy.append(features_train, features, axis=0)

    print('Generating Testing Feature Vectors...')

    batch_size = 100
    index = 0
    if x_test.shape[0] % batch_size == 0:
        max_iterations = x_test.shape[0] // batch_size
    else:
        max_iterations = (x_test.shape[0] // batch_size) + 1

    for i in range(0, max_iterations):

        if i == 0:
            # first batch: learning phase 0 (test mode) for the test data
            features = _convout1_f([x_test[index:index + batch_size], 0])[0]
            index = index + batch_size
            features = numpy.squeeze(features)
            features_test = features

        elif i == max_iterations - 1:
            # last (possibly partial) batch
            features = _convout1_f([x_test[index:x_test.shape[0]], 0])[0]
            features = numpy.squeeze(features)
            features_test = numpy.append(features_test, features, axis=0)

        else:
            features = _convout1_f([x_test[index:index + batch_size], 0])[0]
            index = index + batch_size
            features = numpy.squeeze(features)
            features_test = numpy.append(features_test, features, axis=0)

    return (features_train, features_test)
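As a follow-up note (not part of the original answer): since the goal is just a layer's output, an arguably simpler sketch is to wrap that layer in a new Model and let predict handle the batching; it runs in test mode and again assumes the added Dense layer is named 'dense_1':

from keras.models import Model

# hypothetical alternative: a sub-model whose output is the 64-d Dense activations;
# 'dense_1' is an assumed layer name, check model.summary() for the real one
feature_extractor = Model(inputs=model.input,
                          outputs=model.get_layer('dense_1').output)

# predict() batches the data and runs the network in test mode
features_train = feature_extractor.predict(x_train, batch_size=100)
features_test = feature_extractor.predict(x_test, batch_size=100)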