
Trying to understand custom loss layers in Caffe: I have seen that one can define a custom loss layer in Caffe, for example the EuclideanLoss, implemented in Python like this:

import caffe
import numpy as np


class EuclideanLossLayer(caffe.Layer):
    """
    Compute the Euclidean Loss in the same manner as the C++
    EuclideanLossLayer to demonstrate the class interface for
    developing layers in Python.
    """

    def setup(self, bottom, top):
        # check input pair
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        # check input dimensions match
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        # difference is shape of inputs
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        # loss output is scalar
        top[0].reshape(1)

    def forward(self, bottom, top):
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            if i == 0:
                sign = 1
            else:
                sign = -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num

However, I have a few questions about that code:

Suppose I want to customize this layer and change the loss computation in this line:

top[0].data[...] = np.sum(self.diff**2)/bottom[0].num/2. 

to something like this:

channelAxis = 1  # channels are axis 1 in Caffe's NxCxHxW layout
self.diff[...] = np.sum(bottom[0].data, axis=channelAxis) - np.sum(bottom[1].data, axis=channelAxis)
top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

How would I have to change the backward function? For the Euclidean loss it is:

bottom[i].diff[...] = sign * self.diff/bottom[i].num 

What would it look like for the loss I described?

And what is the sign variable for?
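
For reference, here is my current guess at the whole modified layer, as a minimal sketch (the class name ChannelSumLossLayer is made up, and it assumes Caffe's usual NxCxHxW blob layout):

import caffe
import numpy as np


class ChannelSumLossLayer(caffe.Layer):
    """Euclidean loss on the per-location channel sums of two blobs."""

    def setup(self, bottom, top):
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        # diff now lives in the channel-summed shape (N, H, W),
        # not in the input shape (N, C, H, W)
        self.diff = np.zeros_like(np.sum(bottom[0].data, axis=1),
                                  dtype=np.float32)
        top[0].reshape(1)

    def forward(self, bottom, top):
        self.diff[...] = (np.sum(bottom[0].data, axis=1)
                          - np.sum(bottom[1].data, axis=1))
        top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            # sign: the loss is 0.5/N * ||s0 - s1||^2, so its gradient
            # is +diff/N w.r.t. the first input and -diff/N w.r.t. the
            # second, hence the +1/-1 sign
            sign = 1 if i == 0 else -1
            # every channel enters the channel sum with weight 1, so
            # the (N, H, W) gradient is broadcast across the channel axis
            bottom[i].diff[...] = (sign * self.diff[:, np.newaxis, ...]
                                   / bottom[i].num)

The main changes compared to the plain Euclidean layer would be that self.diff is allocated in the channel-summed shape and that the backward pass broadcasts it back over the channel axis. Is that right?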


What weights and biases are there in a Euclidean loss?? – Shai


I am sorry, I confused myself a bit as well. I have updated the question! @Shai – thigi


Related: https://stackoverflow.com/a/33797142/1714410 – Shai

Answer


While implementing the loss you are after as a "Python" layer can be a very educational exercise, you can get the same loss using existing layers. You only need to add a "Reduction" layer for each blob before calling the regular "EuclideanLoss" layer:

layer { 
    type: "Reduction" 
    name: "rx1" 
    bottom: "x1" 
    top: "rx1" 
    reduction_param { axis: 1 operation: SUM } 
} 
layer { 
    type: "Reduction" 
    name: "rx2" 
    bottom: "x2" 
    top: "rx2" 
    reduction_param { axis: 1 operation: SUM } 
} 
layer { 
    type: "EuclideanLoss" 
    name: "loss" 
    bottom: "rx1" 
    bottom: "rx2" 
    top: "loss" 
} 
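
In numpy terms, this stack computes something like the following sketch (x1 and x2 stand in for the actual blobs; note that "Reduction" with axis: 1 collapses everything from axis 1 onward into a single scalar per sample):

import numpy as np

# stand-in blobs in Caffe's NxCxHxW layout
x1 = np.random.randn(4, 3, 5, 5).astype(np.float32)
x2 = np.random.randn(4, 3, 5, 5).astype(np.float32)

# "Reduction" with axis: 1, operation: SUM reduces all trailing axes,
# leaving one scalar per sample (shape (4,))
rx1 = x1.reshape(x1.shape[0], -1).sum(axis=1)
rx2 = x2.reshape(x2.shape[0], -1).sum(axis=1)

# "EuclideanLoss" then compares the per-sample totals
loss = np.sum((rx1 - rx2) ** 2) / x1.shape[0] / 2.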

Update: Based on your comment, if you only want to sum over the channel dimension and leave all other dimensions intact, you can use a fixed 1x1 convolution (as you suggested):

layer { 
    type: "Convolution" 
    name: "rx1" 
    bottom: "x1" 
    top: "rx1" 
    param { lr_mult: 0 decay_mult: 0 } # make this layer *fixed* 
    convolution_param { 
        num_output: 1 
        kernel_size: 1 
        bias_term: false # no need for bias 
        weight_filler { type: "constant" value: 1 } # sum the channels 
    } 
} 
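
As a quick numpy sketch of what this fixed convolution computes (x1 again stands in for the blob): a 1x1 kernel with num_output: 1 and all weights equal to 1 sums over the channel axis at every spatial location, keeping the other dimensions intact:

import numpy as np

x1 = np.random.randn(4, 3, 5, 5).astype(np.float32)  # NxCxHxW

# fixed 1x1 convolution with one output channel and all weights 1:
# rx1[n, 0, h, w] = sum over c of x1[n, c, h, w]
rx1 = x1.sum(axis=1, keepdims=True)  # shape (4, 1, 5, 5)

Since lr_mult and decay_mult are both 0, the constant weights are never updated during training, so the layer keeps computing exactly this channel sum.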

Okay, that works perfectly! Now I can easily add a weight_loss and have two Euclidean losses, am I right? – thigi


@thigi Exactly. Getting to know the existing layers can make you very lazy ;) – Shai


I thought about the solution, and it is wrong! It sums all values, not just the channel values. But I want to create a sum like this: y = channel1 + channel2 + channel3 + ... + channelN. So a sum over the channels, not over all axes. Can you help me fix it? I think a convolution layer with 'num_output = 1' and 'weight_filler = constant, value = 1' could be used, is that correct? Could you update your answer? – thigi