You can achieve this easily by adding a Lambda layer on top of the model. Here is a very simple example... you can extend from here:
import numpy as np
from keras import backend as K
from keras.models import Model
from keras.layers import Dense, Lambda, Input, concatenate

# Train a plain two-layer model on random data.
X = np.random.random((1000, 5))
Y = np.random.random((1000, 1))

inp = Input(shape=(5,))
d1 = Dense(60, kernel_initializer='random_normal', activation='relu')
d2 = Dense(1, kernel_initializer='random_normal', activation='sigmoid')
out = d2(d1(inp))

model = Model(inputs=[inp], outputs=[out])
model.compile(loss='binary_crossentropy', optimizer='adam')
model.summary()
model.fit(X, Y, epochs=1)

# Second model: preprocess two inputs with Lambda layers, then reuse
# the already trained Dense layers d1 and d2 on the concatenation.
X1 = np.random.random((10, 3))
X2 = np.random.random((10, 2))

inp1 = Input(shape=(3,))
inp2 = Input(shape=(2,))
p1 = Lambda(lambda x: K.sqrt(x))(inp1)
p2 = Lambda(lambda x: K.exp(x))(inp2)
mer = concatenate([p1, p2])  # (None, 3) + (None, 2) -> (None, 5)
out2 = d2(d1(mer))

model2 = Model(inputs=[inp1, inp2], outputs=[out2])
model2.summary()

ypred = model2.predict([X1, X2])
print(ypred.shape)
From the model.summary() output of both models you can see that they share their top layers, so the second model essentially reuses the weights that were already learned during training of the first model.
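If you want to convince yourself of this, here is a minimal sketch (reusing the d1, model, model2, X and Y variables from above): both models hold the very same Dense layer objects, so training one updates the weights seen by the other.

# The shared layer is the same Python object in both models, not a copy:
assert d1 in model.layers and d1 in model2.layers
before = d1.get_weights()[0].copy()   # kernel of the shared layer
model.fit(X, Y, epochs=1, verbose=0)  # train only the first model
after = d1.get_weights()[0]
print(np.abs(after - before).max())   # > 0: model2 sees the updated weights too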