
I want to compute the SVM loss without loops, but I can't get it to work and need some pointers. How do I vectorize the SVM loss?


def svm_loss_vectorized(W, X, y, reg): 
    loss = 0.0 
    scores = np.dot(X, W) 
    correct_scores = scores[y] 
    deltas = np.ones(scores.shape) 
    margins = scores - correct_scores + deltas 

    margins[margins < 0] = 0 # max -> Boolean array indexing 
    margins[np.arange(scores.shape[0]), y] = 0 # Don't count j = yi 
    loss = np.sum(margins) 

    # Average 
    num_train = X.shape[0] 
    loss /= num_train 

    # Regularization 
    loss += 0.5 * reg * np.sum(W * W) 
    return loss 

It should output the same loss as the following function.

def svm_loss_naive(W, X, y, reg): 
    num_classes = W.shape[1] 
    num_train = X.shape[0] 
    loss = 0.0 

    for i in range(num_train): 
        scores = X[i].dot(W) 
        correct_class_score = scores[y[i]] 
        for j in range(num_classes): 
            if j == y[i]: 
                continue 
            margin = scores[j] - correct_class_score + 1 # note delta = 1 
            if margin > 0: 
                loss += margin 
    loss /= num_train # mean 
    loss += 0.5 * reg * np.sum(W * W) # l2 regularization 
    return loss 

What are the shapes of the inputs? – Divakar


W.shape = (3073, 10), X.shape = (500, 3073), y.shape = (500,) – WeiJay

Answer


Here's a vectorized approach -

delta = 1 
N = X.shape[0] 
M = W.shape[1] 
scoresv = X.dot(W) 
marginv = scoresv - scoresv[np.arange(N), y][:,None] + delta 

mask0 = np.zeros((N,M),dtype=bool) 
mask0[np.arange(N),y] = 1 
mask = (marginv<0) | mask0 
marginv[mask] = 0 

loss_out = marginv.sum()/N # mean 
loss_out += 0.5 * reg * np.sum(W * W) # l2 regularization 

Additionally, we can optimize np.sum(W * W) with np.tensordot, like so -

float(np.tensordot(W,W,axes=((0,1),(0,1)))) 
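
As a quick sanity check (a minimal sketch, assuming NumPy is imported as np and W is any 2D array), contracting both axes of W with itself gives the same scalar as summing the elementwise product:

import numpy as np 

W = np.random.randn(3073, 10) 
# tensordot over both axes == sum of the elementwise square 
assert np.isclose(np.sum(W * W), np.tensordot(W, W, axes=((0, 1), (0, 1)))) 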

Runtime test

Proposed approach as a function -

def svm_loss_vectorized_v2(W, X, y, reg): 
    delta = 1 
    N = X.shape[0] 
    M = W.shape[1] 
    scoresv = X.dot(W) 
    marginv = scoresv - scoresv[np.arange(N), y][:,None] + delta 

    mask0 = np.zeros((N,M),dtype=bool) 
    mask0[np.arange(N),y] = 1 
    mask = (marginv<=0) | mask0 
    marginv[mask] = 0 

    loss_out = marginv.sum()/N # mean 
    loss_out += 0.5 * reg * float(np.tensordot(W,W,axes=((0,1),(0,1)))) 
    return loss_out 

Timings -

In [86]: W= np.random.randn(3073,10) 
    ...: X= np.random.randn(500,3073) 
    ...: y= np.random.randint(0,10,(500)) 
    ...: reg = 4.56 
    ...: 

In [87]: svm_loss_naive(W, X, y, reg) 
Out[87]: 70380.938069371899 

In [88]: svm_loss_vectorized_v2(W, X, y, reg) 
Out[88]: 70380.938069371914 

In [89]: %timeit svm_loss_naive(W, X, y, reg) 
100 loops, best of 3: 10.2 ms per loop 

In [90]: %timeit svm_loss_vectorized_v2(W, X, y, reg) 
100 loops, best of 3: 2.94 ms per loop 

Thanks, I really appreciate your help. But could you tell me where the problem in my code is? I can't figure it out. – WeiJay


I've got it! My 'correct_scores' was incorrect :( Thanks again. – WeiJay
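
For reference, a minimal sketch of the fix WeiJay alludes to (assuming the rest of svm_loss_vectorized stays as posted): correct_scores must pick each row's own-class score with paired row/column indices and be kept as a column so it broadcasts across classes, rather than indexing the rows of scores with y.

num_train = X.shape[0] 
scores = np.dot(X, W) 
# pick scores[i, y[i]] for every i, as a column vector for broadcasting 
correct_scores = scores[np.arange(num_train), y][:, None] 
margins = scores - correct_scores + 1 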