2017-10-05 115 views

I want to implement a CNN classification task, and I want to see how the weights are optimized in each epoch. For that I need the values of the penultimate layer; I also want to write the final layer and the backpropagation myself. Which APIs would be useful for this, and how do I get the values of the penultimate layer of a convolutional neural network (CNN)?

Edit: I have added code from a Keras example and am looking to modify it. This link provides some clues. I have marked the layer whose output I need.

from __future__ import print_function 

from keras.preprocessing import sequence 
from keras.models import Sequential 
from keras.layers import Dense, Dropout, Activation 
from keras.layers import Embedding 
from keras.layers import Conv1D, GlobalMaxPooling1D 
from keras.datasets import imdb 

# set parameters: 
max_features = 5000 
maxlen = 400 
batch_size = 100 
embedding_dims = 50 
filters = 250 
kernel_size = 3 
hidden_dims = 250 
epochs = 100 

print('Loading data...') 
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features) 
print(len(x_train), 'train sequences') 
print(len(x_test), 'test sequences') 

print('Pad sequences (samples x time)') 
x_train = sequence.pad_sequences(x_train, maxlen=maxlen) 
x_test = sequence.pad_sequences(x_test, maxlen=maxlen) 
print('x_train shape:', x_train.shape) 
print('x_test shape:', x_test.shape) 

print('Build model...') 
model = Sequential() 

# we start off with an efficient embedding layer which maps 
# our vocab indices into embedding_dims dimensions 
model.add(Embedding(max_features, 
        embedding_dims, 
        input_length=maxlen)) 
model.add(Dropout(0.2)) 

# we add a Convolution1D, which will learn filters 
# word group filters of size filter_length: 
model.add(Conv1D(filters, 
       kernel_size, 
       padding='valid', 
       activation='relu', 
       strides=1)) 
# we use max pooling: 
model.add(GlobalMaxPooling1D()) 

# We add a vanilla hidden layer: 
model.add(Dense(hidden_dims)) 
model.add(Dropout(0.2)) 
model.add(Activation('relu')) 

# We project onto a single unit output layer, and squash it with a sigmoid: 
model.add(Dense(1)) 
model.add(Activation('sigmoid')) #<======== I need output after this. 



model.compile(loss='binary_crossentropy', 
       optimizer='adam', 
       metrics=['accuracy']) 
model.fit(x_train, y_train, 
      batch_size=batch_size, 
      epochs=epochs, 
      validation_data=(x_test, y_test)) 
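As a framework-free sketch of the shapes in this model's head (pure numpy; the small 'N' and the random features are illustrative stand-ins, not real model values): the penultimate activation is the output of the 'Activation('relu')' layer, shape (N, hidden_dims), and the final 'Dense(1)' plus sigmoid maps it to (N, 1) — for the asker's 25000 test points, (25000, 250) and (25000, 1) respectively.

```python
import numpy as np

N, hidden_dims = 8, 250                  # small N just for illustration
h = np.random.randn(N, hidden_dims)      # stand-in for the hidden Dense pre-activation
penultimate = np.maximum(h, 0.0)         # ReLU output: the penultimate activation
W = np.random.randn(hidden_dims, 1) * 0.01
b = np.zeros(1)
logits = penultimate @ W + b             # Dense(1)
probs = 1.0 / (1.0 + np.exp(-logits))    # sigmoid: the model's final output
print(penultimate.shape, probs.shape)    # (8, 250) (8, 1)
```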

Answer


You can get at the individual layers of your model like this:

num_layer = 7 # Dense(1) layer 
layer = model.layers[num_layer] 

I want to see how the weights are optimized in each epoch.

To get a layer's weights, use layer.get_weights():

w, b = layer.get_weights() # weights and bias of Dense(1) 
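Since the asker wants to apply a custom optimizer to exactly these weights, here is a minimal pure-numpy sketch of manual gradient steps on such a Dense(1) + sigmoid head (the features, labels, and learning rate are illustrative, not taken from the model above). It relies on the standard identity that for a sigmoid output with binary cross-entropy, the gradient of the loss with respect to the logit is simply p - y:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 64, 250                            # d matches hidden_dims above
x = rng.normal(size=(N, d))               # stand-in for penultimate activations
y = rng.integers(0, 2, size=(N, 1)).astype(float)

w = rng.normal(size=(d, 1)) * 0.01        # analogous to the w, b from get_weights()
b = np.zeros(1)
lr = 0.1                                  # illustrative learning rate

def bce(p, t, eps=1e-12):
    # binary cross-entropy, clipped for numerical safety
    return float(-np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps)))

def forward(x, w, b):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))   # sigmoid(Dense(1))

loss_before = bce(forward(x, w, b), y)
for _ in range(100):                      # manual training steps
    p = forward(x, w, b)
    grad_logit = (p - y) / N              # dBCE/dlogit for sigmoid + cross-entropy
    w -= lr * (x.T @ grad_logit)          # plain SGD; swap in any custom update rule
    b -= lr * grad_logit.sum(axis=0)
loss_after = bce(forward(x, w, b), y)
```

After the loop, the updated w and b could be written back with layer.set_weights([w, b]) if desired; the update lines are where a custom optimizer would plug in.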

I need the values of the penultimate layer.

To get the evaluated values of the last layer, use model.predict():

prediction = model.predict(x_test) 

To get the evaluated values of any other layer, use tensorflow directly, like this:

import tensorflow as tf # TF1-style graph API

input = tf.placeholder(tf.float32) # create an input placeholder 
layer_output = layer(input) # create the layer-output operation 

init_op = tf.global_variables_initializer() # initialize variables 

with tf.Session() as sess: 
    sess.run(init_op) 

    # evaluate the layer output 
    output = sess.run(layer_output, feed_dict={input: x_test}) 
    print(output) 
+0

I want to get the output of the penultimate layer, i.e. before it enters the final layer. Actually, I want to use my own optimizer instead of any of the optimizers Keras provides. I think the penultimate output is the output of the 'model.add(Activation('relu'))' layer, so for 25000 data points I expect the output to be 25000 * 250. Correct me if I'm wrong somewhere. –

+0

The last part of my answer lets you do exactly that; just be sure to use the right layer, 'layer = model.layers[6]' (the 'Activation('relu')' layer in this model). Then 'layer_output' is a tensor, so you can keep adding plain tensorflow logic on top of it. –

+0

I used the code I mentioned in [this question](https://stackoverflow.com/questions/46885680/why-different-intermediate-layer-ouput-of-cnn-in-keras) to get the intermediate layer output. –
