
Classification with a deep network on the CIFAR-10 dataset

I am trying to build a classifier using deep learning techniques, on the CIFAR-10 dataset, with 1024 hidden nodes. Each image is 32*32*3 (R-G-B) in size. Because my machine has limited processing power, I load data from only 3 of the dataset's 5 training batch files.

from __future__ import print_function 
import matplotlib.pyplot as plt 
import numpy as np 
import tensorflow as tf 
import os 
import sys 
import tarfile 
import random 
from IPython.display import display, Image 
from scipy import ndimage 
from sklearn.linear_model import LogisticRegression 
from six.moves.urllib.request import urlretrieve 
from six.moves import cPickle as pickle 
from sklearn.preprocessing import MultiLabelBinarizer 

folder='/home/cifar-10-batches-py/' 

training_data=np.ndarray((30000,3072),dtype=np.float32) 
training_labels=np.ndarray(30000,dtype=np.int32) 

testing_data=np.ndarray((10000,3072),dtype=np.float32) 
testing_labels=np.ndarray(10000,dtype=np.int32) 

no_of_files=3 

begin=0 
end=10000 

# load the first 3 of the 5 training batches (10,000 images each)
for i in range(no_of_files):
    with open(folder+"data_batch_"+str(i+1),'rb') as f:
        s=pickle.load(f,encoding='bytes')
        training_data[begin:end]=s[b'data']
        training_labels[begin:end]=s[b'labels']
        begin=begin+10000
        end=end+10000

# load the 10,000-image test batch
test_path='/home/cifar-10-batches-py/test_batch'
with open(test_path,'rb') as d:
    s9=pickle.load(d,encoding='bytes')
    tdata=s9[b'data']
    testing_data=tdata
    tlabels=s9[b'labels']
    testing_labels=tlabels
test_data=np.ndarray((5000,3072),dtype=np.float32) 
test_labels=np.ndarray(5000,dtype=np.int32) 
valid_data=np.ndarray((5000,3072),dtype=np.float32) 
valid_labels=np.ndarray(5000,dtype=np.int32) 

# first half of the test batch becomes the validation set, second half the test set
valid_data[:,:]=testing_data[:5000, :]
valid_labels[:]=testing_labels[:5000]
test_data[:,:]=testing_data[5000:, :]
test_labels[:]=testing_labels[5000:]

# one-hot encode the labels
onehot_training_labels=np.eye(10)[training_labels.astype(int)]
onehot_test_labels=np.eye(10)[test_labels.astype(int)]
onehot_valid_labels=np.eye(10)[valid_labels.astype(int)]
image_size=32 
num_labels=10 
train_subset = 10000 

def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])

batch_size = 128 
relu_count = 1024 #hidden nodes count 

graph = tf.Graph() 
with graph.as_default(): 
    tf_train_dataset = tf.placeholder(tf.float32, 
            shape=(batch_size, image_size * image_size*3)) 
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) 
    tf_valid_dataset = tf.constant(valid_data) 
    tf_test_dataset = tf.constant(test_data) 
    beta_regul = tf.placeholder(tf.float32) 

    weights1 = tf.Variable(
        tf.truncated_normal([image_size * image_size*3, relu_count]))
    biases1 = tf.Variable(tf.zeros([relu_count]))
    weights2 = tf.Variable(
        tf.truncated_normal([relu_count, num_labels]))
    biases2 = tf.Variable(tf.zeros([num_labels]))

    preds = tf.matmul(tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1), weights2) + biases2 


    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=preds, labels=tf_train_labels)) + \
        beta_regul * (tf.nn.l2_loss(weights1) + tf.nn.l2_loss(weights2))

    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) 
    train_prediction = tf.nn.softmax(preds) 
    lay1_valid = tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1) 
    valid_prediction = tf.nn.softmax(tf.matmul(lay1_valid, weights2) + biases2) 
    lay1_test = tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1) 
    test_prediction = tf.nn.softmax(tf.matmul(lay1_test, weights2) + biases2) 
num_steps = 5000 

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    print("Initialized")
    for step in range(num_steps):
        offset = (step * batch_size) % (onehot_training_labels.shape[0] - batch_size)
        batch_data = training_data[offset:(offset + batch_size), :]
        batch_labels = onehot_training_labels[offset:(offset + batch_size), :]
        feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, beta_regul : 1e-3}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(
                valid_prediction.eval(), onehot_valid_labels))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), onehot_test_labels))

The output of this code is:

WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_should_use.py:170: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02. 
Instructions for updating: 
Use `tf.global_variables_initializer` instead. 
Initialized 
Minibatch loss at step 0: 117783.914062 
Minibatch accuracy: 14.8% 
Validation accuracy: 10.2% 
Minibatch loss at step 500: 3632989892247552.000000 
Minibatch accuracy: 12.5% 
Validation accuracy: 10.1% 
Minibatch loss at step 1000: 2203224941527040.000000 
Minibatch accuracy: 6.2% 
Validation accuracy: 9.9% 
Minibatch loss at step 1500: 1336172110413824.000000 
Minibatch accuracy: 10.9% 
Validation accuracy: 9.8% 
Minibatch loss at step 2000: 810328996708352.000000 
Minibatch accuracy: 8.6% 
Validation accuracy: 10.1% 
Minibatch loss at step 2500: 491423044468736.000000 
Minibatch accuracy: 9.4% 
Validation accuracy: 10.1% 
Minibatch loss at step 3000: 298025566076928.000000 
Minibatch accuracy: 12.5% 
Validation accuracy: 9.8% 
Minibatch loss at step 3500: 180741635833856.000000 
Minibatch accuracy: 10.9% 
Validation accuracy: 9.8% 
Minibatch loss at step 4000: 109611013111808.000000 
Minibatch accuracy: 15.6% 
Validation accuracy: 10.1% 
Minibatch loss at step 4500: 66473376612352.000000 
Minibatch accuracy: 3.9% 
Validation accuracy: 9.9% 
Test accuracy: 10.2% 

Where am I going wrong? The accuracy I am getting is very low.

I think your question should be more specific –

Or post your question on https://codereview.stackexchange.com/ –

Answer

  1. As far as I can see, you are building a simple 2-layer FNN with TensorFlow. That is fine, but you will not get very high accuracy with it. If you do try, you need to tune the hyperparameters carefully: the learning rate, the regularization strength, the decay rate, and the number of neurons in the hidden layer (see the first sketch after this list).

  2. You are not using all of the data, which will certainly lower the quality of the predictions. It can still work, but you should check the class distribution in the train, validation, and test sets; some classes may have too few examples in one of them. At the very least you need to stratify your selection (see the second sketch after this list).

  3. Are you sure you have a solid understanding of deep learning? Taking the cs231n course might be a good idea.
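
To make point 1 concrete, below is a minimal sketch of the question's two-layer graph with the main tuning knobs exposed. Two details of the posted code stand out: the raw 0-255 pixel values are fed in unscaled, and tf.truncated_normal is used with its default stddev of 1.0, which together account for the astronomical minibatch losses in the log. The stddev of 0.01 and the learning rate of 0.05 are illustrative starting points, not tuned values:

import tensorflow as tf

image_size, num_labels, relu_count = 32, 10, 1024
num_features = image_size * image_size * 3  # 3072

graph = tf.Graph()
with graph.as_default():
    # None lets the same placeholders serve minibatches of any size
    tf_x = tf.placeholder(tf.float32, shape=(None, num_features))
    tf_y = tf.placeholder(tf.float32, shape=(None, num_labels))

    # small-stddev initialization keeps the initial logits, and hence the
    # initial loss, at a sane magnitude with 3072 inputs per image vector
    weights1 = tf.Variable(tf.truncated_normal([num_features, relu_count], stddev=0.01))
    biases1 = tf.Variable(tf.zeros([relu_count]))
    weights2 = tf.Variable(tf.truncated_normal([relu_count, num_labels], stddev=0.01))
    biases2 = tf.Variable(tf.zeros([num_labels]))

    logits = tf.matmul(tf.nn.relu(tf.matmul(tf_x, weights1) + biases1), weights2) + biases2
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_y))

    # a learning rate of 0.5 diverges on this setup; start one or two
    # orders of magnitude lower
    optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

When feeding the graph, pass batch_data / 255.0 so the inputs lie in [0, 1]; rescaling the inputs and the initialization usually has to happen before any learning-rate tuning pays off.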
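
For point 2, here is a quick sketch of how to inspect the class balance and, if necessary, build a stratified validation/test split. The variable names follow the question's code; train_test_split is an additional scikit-learn import that is not in the original post:

import numpy as np
from sklearn.model_selection import train_test_split

# how many examples of each of the 10 classes each split contains
print("train:", np.bincount(training_labels, minlength=10))
print("valid:", np.bincount(valid_labels, minlength=10))
print("test: ", np.bincount(test_labels, minlength=10))

# split the 10,000 test-batch images so that every class is equally
# represented in both halves, instead of slicing at index 5000
valid_data, test_data, valid_labels, test_labels = train_test_split(
    testing_data, np.asarray(testing_labels), test_size=0.5,
    stratify=testing_labels, random_state=0)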

So you are saying that the code I wrote is correct, but it needs more tuning to get high accuracy? – Jayanth

Yes, it can reach better accuracy. But even 60% is not really achievable for an FNN trained on the whole of CIFAR-10, so in your case the accuracy will be lower. Still, it should be better than 10%, because 10% means the model always predicts a single class. –
