I have seen that one can define a custom loss layer in Caffe, for example a EuclideanLoss layer like the one below, and I am trying to understand it:
import caffe
import numpy as np


class EuclideanLossLayer(caffe.Layer):
    """
    Compute the Euclidean Loss in the same manner as the C++
    EuclideanLossLayer to demonstrate the class interface for
    developing layers in Python.
    """

    def setup(self, bottom, top):
        # check input pair
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        # check input dimensions match
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        # difference is shape of inputs
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        # loss output is scalar
        top[0].reshape(1)

    def forward(self, bottom, top):
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            if i == 0:
                sign = 1
            else:
                sign = -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num
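To make sure I understand the math in this layer, I wrote a standalone NumPy check of the forward and backward above (independent of Caffe; the array shapes are just placeholders I picked):

import numpy as np

# loss = sum((x0 - x1)**2) / num / 2 and d(loss)/d(x0) = (x0 - x1) / num
rng = np.random.RandomState(0)
x0 = rng.randn(4, 3, 2, 2)  # num=4, channels=3, 2x2 spatial
x1 = rng.randn(4, 3, 2, 2)
num = x0.shape[0]

diff = x0 - x1
loss = np.sum(diff ** 2) / num / 2.
grad = diff / num  # analytic gradient w.r.t. x0 (sign = +1)

# finite-difference check on a single element
eps = 1e-6
x0[0, 0, 0, 0] += eps
loss_plus = np.sum((x0 - x1) ** 2) / num / 2.
x0[0, 0, 0, 0] -= 2 * eps
loss_minus = np.sum((x0 - x1) ** 2) / num / 2.
print(np.isclose(grad[0, 0, 0, 0], (loss_plus - loss_minus) / (2 * eps)))  # True

This matches, so I believe the forward and backward of the original layer are consistent with each other.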
However, I have a few questions about that code:
Suppose I want to customize this layer and change the computation of the loss in this line:
top[0].data[...] = np.sum(self.diff**2)/bottom[0].num/2.
to something like this:
channelsAxis = 1  # channel axis in Caffe's NCHW blob layout
self.diff[...] = np.sum(bottom[0].data, axis=channelsAxis) - np.sum(bottom[1].data, axis=channelsAxis)
top[0].data[...] = np.sum(self.diff**2)/bottom[0].num/2.
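One thing I noticed while trying this: because the channel sum removes a dimension, self.diff can no longer be allocated with np.zeros_like(bottom[0].data). A minimal sketch of what I assume the matching reshape and forward would have to look like (assuming Caffe's NCHW layout, so the channel axis is 1):

def reshape(self, bottom, top):
    # check input dimensions match
    if bottom[0].count != bottom[1].count:
        raise Exception("Inputs must have the same dimension.")
    # summing over axis 1 removes the channel dimension, so diff
    # must be (num, height, width) rather than input-shaped
    reduced_shape = bottom[0].data.shape[:1] + bottom[0].data.shape[2:]
    self.diff = np.zeros(reduced_shape, dtype=np.float32)
    # loss output is scalar
    top[0].reshape(1)

def forward(self, bottom, top):
    # residual of the per-location channel sums
    self.diff[...] = (np.sum(bottom[0].data, axis=1)
                      - np.sum(bottom[1].data, axis=1))
    top[0].data[...] = np.sum(self.diff ** 2) / bottom[0].num / 2.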
How would I have to change the backward function? For the Euclidean loss it is:
bottom[i].diff[...] = sign * self.diff/bottom[i].num
How would it look for the loss I described?
And what is sign for?
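From the chain rule I would guess that the backward just broadcasts the channel-summed residual back over the channel axis, since every channel contributes with weight 1 to the sum. A sketch of my assumption (not verified):

def backward(self, top, propagate_down, bottom):
    for i in range(2):
        if not propagate_down[i]:
            continue
        sign = 1 if i == 0 else -1
        # the gradient is the same for every channel: broadcast the
        # (num, H, W) residual back to the (num, C, H, W) input shape
        bottom[i].diff[...] = sign * self.diff[:, np.newaxis, ...] / bottom[i].num

Is that the right way to think about it?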
What weights and biases are there in a Euclidean loss?? – Shai
I am sorry, I confused myself a bit there. I have updated the question! @Shai – thigi
Related: https://stackoverflow.com/a/33797142/1714410 – Shai