The accuracy can be implemented in a similar way:
from keras import backend as K

def sign_accuracy(y_true, y_pred):
    # fraction of elements where prediction and target have the same sign
    return K.mean(K.greater(y_true * y_pred, 0.), axis=-1)
To test it:
import numpy as np

y_true = np.random.rand(5, 1) - 0.5
y_pred = np.random.rand(5, 1) - 0.5
acc = K.eval(sign_accuracy(K.variable(y_true), K.variable(y_pred)))
print(y_true)
[[ 0.20410185]
[ 0.12085985]
[ 0.39697642]
[-0.28178138]
[-0.37796012]]
print(y_pred)
[[-0.38281826]
[ 0.14268927]
[ 0.19218624]
[ 0.21394845]
[ 0.04044269]]
print(acc)
[ 0. 1. 1. 0. 0.]
Keras automatically takes the mean over axis 0 when you call fit() or evaluate(), so you don't need to sum acc and divide it by y_pred.shape[0] yourself.
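As a quick check (a minimal sketch using only the acc values printed above), averaging acc by hand gives the single number Keras would report for this batch:

import numpy as np

# per-sample results from the 5x1 example above
acc = np.array([0., 1., 1., 0., 0.])
print(acc.mean())  # 0.4 -- what Keras would show during fit()/evaluate()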
The metric can also be applied to multidimensional variables:
y_true = np.random.rand(5, 3) - 0.5
y_pred = np.random.rand(5, 3) - 0.5
acc = K.eval(sign_accuracy(K.variable(y_true), K.variable(y_pred)))
print(y_true)
[[ 0.02745352 -0.27927986 -0.47882833]
[-0.40950793 -0.16218984 0.19184008]
[ 0.25002487 -0.08455175 -0.03606459]
[ 0.09315503 -0.19825522 0.19801222]
[-0.32129431 -0.02256616 0.47799333]]
print(y_pred)
[[-0.06733171 0.18156806 0.28396574]
[ 0.04054056 -0.45898607 -0.10661648]
[-0.05162396 -0.34005141 -0.25910923]
[-0.26283177 0.01532359 0.33764032]
[ 0.2754057 0.26896232 0.23089488]]
print(acc)
[ 0. 0.33333334 0.66666669 0.33333334 0.33333334]
For the first case you gave, should the expected output be '40%' or '0.4'? Do you mean that if I put this `sign_accuracy` into `model.compile(optimizer='adam', loss='mae', metrics=[sign_accuracy])`, I will get '0.4'? – Wedoso
Yes. Keras takes the mean internally. –
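For reference, a minimal sketch of plugging the metric into compile() (the model architecture below is only a placeholder assumption, not part of the original answer):

from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

def sign_accuracy(y_true, y_pred):
    return K.mean(K.greater(y_true * y_pred, 0.), axis=-1)

# hypothetical single-output regression model, just to show where the metric goes
model = Sequential([Dense(1, input_shape=(10,))])
model.compile(optimizer='adam', loss='mae', metrics=[sign_accuracy])

# model.fit(X, y) would then log sign_accuracy alongside the loss,
# already averaged over the batch (e.g. 0.4 for the 5-sample example above).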