I am training a convolutional neural network with a Siamese architecture and a contrastive loss for a verification task. Literally from the first three to five epochs I face a huge gap between training and validation accuracy: when training accuracy reaches 95%, validation accuracy is at ~65%. It fluctuates around 70% but never gets above that. (Training and validation accuracy plotted on one chart.) How can I overcome overfitting in a convolutional neural network?
To avoid this I tried a bunch of standard techniques against overfitting, but before listing them I should say that none of them really changed the picture: the gap between training and validation accuracy stayed the same. So I used the following (a sketch of how the regularization is plugged in follows the list):
- L1 regularization with lambda varying from 0.0001 to 10000.0
- L2 regularization with lambda varying from 0.0001 to 10000.0
- dropout with rate from 0.2 to 0.8
- data augmentation techniques of varying intensity (rotation, shifting, zooming)
- removing all fully connected layers except the last one
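For reference, this is roughly how `reg` (used in the model below) is plugged in; a minimal sketch assuming the tf.contrib.layers regularizers, since the actual construction isn't shown in my snippets, and `contrastive_loss_value` stands for the loss computed further down:
# a sketch of wiring an L1/L2 regularizer into tf.layers (the scale is illustrative)
reg = tf.contrib.layers.l2_regularizer(scale=0.0001)  # or tf.contrib.layers.l1_regularizer
# ...the layers below are built with kernel_regularizer=reg...
# with tf.layers the penalty is only collected, not applied; it must be
# added to the training objective explicitly:
total_loss = contrastive_loss_value + tf.losses.get_regularization_loss()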
None of these helped, so I would really appreciate any advice. Some information about the network itself: I am using TensorFlow. This is what the model looks like:
net = tf.layers.conv2d(
    inputs,
    kernel_size=(7, 7),
    filters=15,
    strides=1,
    activation=tf.nn.relu,
    kernel_initializer=w_init,
    kernel_regularizer=reg)
# 15 x 58 x 58
net = tf.layers.max_pooling2d(net, pool_size=(2, 2), strides=2)
# 15 x 29 x 29
net = tf.layers.conv2d(
    net,
    kernel_size=(6, 6),
    filters=45,
    strides=1,
    activation=tf.nn.relu,
    kernel_initializer=w_init,
    kernel_regularizer=reg)
# 45 x 24 x 24
net = tf.layers.max_pooling2d(net, pool_size=(4, 4), strides=4)
# 45 x 6 x 6
net = tf.layers.conv2d(
    net,
    kernel_size=(6, 6),
    filters=256,
    strides=1,
    activation=tf.nn.relu,
    kernel_initializer=w_init,
    kernel_regularizer=reg)
# 256 x 1 x 1
net = tf.reshape(net, [-1, 256])
net = tf.layers.dense(net, units=512, activation=tf.nn.relu, kernel_regularizer=reg, kernel_initializer=w_init)
# note: tf.layers.dropout defaults to training=False, so as written this
# layer is an identity op; it only drops units if training=True is passed
net = tf.layers.dropout(net, rate=0.2)
# net = tf.layers.dense(net, units=256, activation=tf.nn.relu, kernel_regularizer=reg, kernel_initializer=w_init)
# net = tf.layers.dropout(net, rate=0.75)
return tf.layers.dense(net, units=embedding_size, activation=tf.nn.relu, kernel_initializer=w_init)
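The two towers share this function's weights; a minimal sketch of the wiring, assuming TF 1.x variable-scope reuse (the scope name, `build_tower`, `x1`, and `x2` are illustrative, not part of the code above):
# how the two towers can share weights via variable-scope reuse
with tf.variable_scope('siamese') as scope:
    out1 = build_tower(x1)      # first call creates the variables
    scope.reuse_variables()
    out2 = build_tower(x2)      # second call reuses the same variables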
This is how the loss function is implemented:
def contrastive_loss(out1, out2, labels, margin):
    # labels: 1 for matching pairs, 0 for non-matching pairs
    distance = compute_euclidian_distance_square(out1, out2)
    positive_part = labels * distance
    negative_part = (1 - labels) * tf.maximum(tf.square(margin) - distance, 0.0)
    return tf.reduce_mean(positive_part + negative_part) / 2
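compute_euclidian_distance_square is not shown above; a minimal sketch assuming the standard definition (only the name comes from the call, the body here is an assumption):
def compute_euclidian_distance_square(out1, out2):
    # squared Euclidean distance per pair, reduced over the embedding axis
    return tf.reduce_sum(tf.square(out1 - out2), axis=1)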
And this is how I fetch and augment the data (I use the LFW dataset):
import numpy as np
from scipy.ndimage import rotate, shift, zoom
from sklearn.datasets import fetch_lfw_pairs

ROTATIONS_RANGE = range(1, 25)
SHIFTS_RANGE = range(1, 18)
ZOOM_RANGE = (1.05, 1.075, 1.1, 1.125, 1.15, 1.175, 1.2, 1.225, 1.25, 1.275, 1.3, 1.325, 1.35, 1.375, 1.4)
IMG_SLICE = (slice(0, 64), slice(0, 64))

def pad_img(img):
    # pad the 62 x 47 LFW images to 64 x 64
    return np.pad(img, ((0, 2), (0, 17)), mode='constant')

def get_data(rotation=False, shifting=False, zooming=False):
    train_data = fetch_lfw_pairs(subset='train')
    test_data = fetch_lfw_pairs(subset='test')
    x1s_trn, x2s_trn, ys_trn, x1s_vld, x2s_vld = [], [], [], [], []
    for (pair, y) in zip(train_data.pairs, train_data.target):
        img1, img2 = pad_img(pair[0]), pad_img(pair[1])
        x1s_trn.append(img1)
        x2s_trn.append(img2)
        ys_trn.append(y)
        if rotation:
            for angle in ROTATIONS_RANGE:
                x1s_trn.append(np.asarray(rotate(img1, angle))[IMG_SLICE])
                x2s_trn.append(np.asarray(rotate(img2, angle))[IMG_SLICE])
                ys_trn.append(y)
                x1s_trn.append(np.asarray(rotate(img1, -angle))[IMG_SLICE])
                x2s_trn.append(np.asarray(rotate(img2, -angle))[IMG_SLICE])
                ys_trn.append(y)
        if shifting:
            for pixels_to_shift in SHIFTS_RANGE:
                x1s_trn.append(shift(img1, pixels_to_shift))
                x2s_trn.append(shift(img2, pixels_to_shift))
                ys_trn.append(y)
                x1s_trn.append(shift(img1, -pixels_to_shift))
                x2s_trn.append(shift(img2, -pixels_to_shift))
                ys_trn.append(y)
        if zooming:
            for zm in ZOOM_RANGE:
                x1s_trn.append(np.asarray(zoom(img1, zm))[IMG_SLICE])
                x2s_trn.append(np.asarray(zoom(img2, zm))[IMG_SLICE])
                ys_trn.append(y)
    for (img1, img2) in test_data.pairs:
        x1s_vld.append(pad_img(img1))
        x2s_vld.append(pad_img(img2))
    return (
        np.array(x1s_trn),
        np.array(x2s_trn),
        np.array(ys_trn),
        np.array(x1s_vld),
        np.array(x2s_vld),
        np.array(test_data.target)
    )
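For completeness, this is how a call with all augmentations enabled looks (a sketch; the variable names are illustrative). With all flags on, each original pair yields 1 + 48 + 34 + 15 = 98 training pairs, so the ~2200 LFW train pairs grow to roughly 215k:
x1_trn, x2_trn, y_trn, x1_vld, x2_vld, y_vld = get_data(
    rotation=True, shifting=True, zooming=True)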
Thank you all!
Have you tried bagging? – cprakashagr
With so little data you can't expect good generalization. The first thing to do is to collect as much data as possible. Basically, how much data do you have? And how many labels? – debzsud
@cprakashagr No, but shouldn't dropout have the same effect? –