Can't get decent accuracy on validation in a convolutional NN project

I want to design a convolutional neural network for detecting a small red football. I have taken approximately 4,000 pictures of scenes in different configurations (adding chairs, bottles, etc.) without the ball, and another 4,000 pictures of scenes in different configurations with the ball in them. I am using a resolution of 32x32 pixels. The ball is visible in the pictures that contain it. These are some positive example pictures (shown upside down here):
I have tried many combinations of convolutional neural network designs, but I cannot find a decent one. I will present the two architectures I have tried (a "normal"-sized one and a very small one). I kept designing smaller and smaller networks because I thought it would help me with the overfitting problem. So, this is what I have tried:

Normal network design
Input: 32x32x3
First Conv Layer:
W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 3, 32], stddev=0.1), name="w1")
b_conv1 = tf.Variable(tf.constant(0.1, shape=[32]), name="b1")
h_conv1 = tf.nn.relu(tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1, name="conv1")
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name="pool1")
Second Conv Layer:
W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 32, 16], stddev=0.1), name="w2")
b_conv2 = tf.Variable(tf.constant(0.1, shape=[16]), name="b2")
h_conv2 = tf.nn.relu(tf.nn.conv2d(h_pool1, W_conv2, strides=[1, 1, 1, 1], padding='SAME') + b_conv2, name="conv2")
h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name="pool2")
Fully Connected Layer:
W_fc1 = tf.Variable(tf.truncated_normal([8 * 8 * 16, 16], stddev=0.1), name="w3")
b_fc1 = tf.Variable(tf.constant(0.1, shape=[16]), name="b3")
h_pool2_flat = tf.reshape(h_pool2, [-1, 8 * 8 * 16], name="flat3")
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1, name="conv3")
Dropout:
keep_prob = tf.placeholder(tf.float32, name="keep3")
h_fc2_drop = tf.nn.dropout(h_fc1, keep_prob, name="drop3")
Readout Layer:
W_fc3 = tf.Variable(tf.truncated_normal([16, 2], stddev=0.1), name="w4")
b_fc3 = tf.Variable(tf.constant(0.1, shape=[2]), name="b4")
y_conv = tf.matmul(h_fc2_drop, W_fc3, name="yconv") + b_fc3
Other info:
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv) + 0.005 * tf.nn.l2_loss(W_conv1) + 0.005 * tf.nn.l2_loss(W_fc1) + 0.005 * tf.nn.l2_loss(W_fc3))
train_step = tf.train.AdamOptimizer(1e-5, name="trainingstep").minimize(cross_entropy)
# Percentage of correct predictions
prediction = tf.nn.softmax(y_conv, name="y_prediction")
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1), name="correct_pred")
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name="acc")
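As a sanity check on the fully connected layer's input size, the shapes can be traced by hand: with padding='SAME' and stride 1, each conv layer preserves height and width, and each 2x2 max pool with stride 2 halves them. A quick sketch, independent of TensorFlow:

```python
def pooled_size(size, num_pools):
    """Spatial size after num_pools stride-2 poolings with SAME padding."""
    for _ in range(num_pools):
        size = (size + 1) // 2  # SAME padding rounds up
    return size

side = pooled_size(32, 2)   # 32 -> 16 -> 8 after the two pooling layers
channels = 16               # output channels of the second conv layer
flat = side * side * channels
print(flat)                 # 1024, i.e. 8 * 8 * 16
```

So the reshape before the fully connected layer must flatten to 8 * 8 * 16 = 1024 values per image.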
Parameters:
keep_prob: 0.4
batch_size=500
training time in generations=55
Results:
Training set final accuracy= 90.2%
Validation set final accuracy= 52.2%
Graph: Link to accuracy graph
Small network design
Input: 32x32x3
First Conv Layer:
W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 3, 16], stddev=0.1), name="w1")
b_conv1 = tf.Variable(tf.constant(0.1, shape=[16]), name="b1")
h_conv1 = tf.nn.relu(tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1, name="conv1")
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name="pool1")
Fully Connected Layer:
W_fc1 = tf.Variable(tf.truncated_normal([16 * 16 * 16, 8], stddev=0.1), name="w3")
b_fc1 = tf.Variable(tf.constant(0.1, shape=[8]), name="b3")
h_pool2_flat = tf.reshape(h_pool1, [-1, 16 * 16 * 16], name="flat3")
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1, name="conv3")
Dropout:
keep_prob = tf.placeholder(tf.float32, name="keep3")
h_fc2_drop = tf.nn.dropout(h_fc1, keep_prob, name="drop3")
Readout Layer:
W_fc3 = tf.Variable(tf.truncated_normal([8, 2], stddev=0.1), name="w4")
b_fc3 = tf.Variable(tf.constant(0.1, shape=[2]), name="b4")
y_conv = tf.matmul(h_fc2_drop, W_fc3, name="yconv") + b_fc3
Other info:
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv) + 0.005 * tf.nn.l2_loss(W_conv1) + 0.005 * tf.nn.l2_loss(W_fc1) + 0.005 * tf.nn.l2_loss(W_fc3))
train_step = tf.train.AdamOptimizer(1e-5, name="trainingstep").minimize(cross_entropy)
# Percentage of correct predictions
prediction = tf.nn.softmax(y_conv, name="y_prediction")
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1), name="correct_pred")
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name="acc")
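The accuracy metric above reduces to an argmax comparison between logits and one-hot labels; softmax only rescales the logits and never changes which class wins. A minimal NumPy equivalent, using made-up toy values rather than the actual data:

```python
import numpy as np

# Hypothetical logits for 4 samples x 2 classes, plus one-hot labels.
logits = np.array([[2.0, 0.5],
                   [0.1, 1.3],
                   [1.0, 1.1],
                   [3.0, -1.0]])
labels = np.array([[1, 0],
                   [0, 1],
                   [1, 0],
                   [1, 0]])

# Softmax is monotonic per row, so argmax can be taken on raw logits.
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
correct = np.argmax(logits, axis=1) == np.argmax(labels, axis=1)
accuracy = correct.mean()
print(accuracy)  # 0.75: the third sample is misclassified
```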
Parameters:
keep_prob: 0.4
batch_size=500
training time in generations=55
Results:
Training set final accuracy= 87%
Validation set final accuracy= 60.6%
So, whatever I do, I cannot get a decent accuracy on the validation set. I believe something is missing, but I cannot figure out what. I am using dropout and L2 regularization, but it seems to overfit anyway.

Thanks for reading, and whether you are an amateur or advanced at CNNs, please leave feedback.
I think you should use a better dataset; deep learning needs huge datasets. – bakaDev
BTW, use https://arxiv.org/abs/1512.03385 – bakaDev
Thanks for your input @bakaDev. This is a small CNN without many layers and weights, the input is 32x32, and there are only two outputs; recognizing a red ball in a single environment seems like a simple task. Do you think 8,000 pictures are not enough? – Vlad
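Following up on the dataset-size concern: one common, cheap way to stretch a set of 8,000 images further is label-preserving augmentation. A rough sketch with NumPy (the array names and value ranges are placeholders, not the asker's actual variables):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a batch of 32x32 RGB images scaled to [0, 1] -- placeholder data.
images = rng.random((8, 32, 32, 3)).astype(np.float32)

def augment(batch, rng):
    """Horizontal flips plus mild brightness jitter; labels stay unchanged."""
    flipped = batch[:, :, ::-1, :]                        # mirror left-right
    jitter = rng.uniform(0.9, 1.1, size=(len(batch), 1, 1, 1))
    brightened = np.clip(batch * jitter, 0.0, 1.0)
    return np.concatenate([flipped, brightened], axis=0)

augmented = augment(images, rng)
print(augmented.shape)  # (16, 32, 32, 3): twice the original batch
```

Since the target is a red ball, flips and brightness changes do not alter the label, so each epoch can effectively see a different variant of every picture.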