Training a real-valued neural network with backpropagation

I want to train a neural network with a real-valued output. I feed the network an interpolated data set (it looks like a square wave), but backpropagation never gives me a good fit to the data. I have tried adding more input features and normalizing the output, but it did not seem to help. The network has 3 layers: 1 input layer, 1 hidden layer, and 1 output layer with a single output node. How can I fix this? Also, is this cost function correct to use?
for k = 1:m
    C = C + (y(k) - a2(k))^2;
end
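For reference, a vectorized squared-error cost for real-valued outputs is usually written with a 1/(2m) factor (a minimal sketch, assuming y and a2 are m-by-1 vectors as in the loop above):

% Vectorized squared-error cost for real-valued outputs (sketch).
m = length(y);                      % number of training examples
C = sum((y - a2).^2) / (2 * m);     % squared error with the conventional 1/(2m) scaling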
My code:
clc;
clear all;
close all;

input_layer_size  = 4;
hidden_layer_size = 60;
num_labels        = 1;

load('Xs');                              % loads the input samples (used below as xq)
load('Y-s');                             % loads the target values (used below as vq)

theta1 = randInitializeWeights(4, 60);   % input -> hidden weights
theta2 = randInitializeWeights(60, 1);   % hidden -> output weights

plot(xq, vq)                             % plot the training data
hold on

xq = polyFeatures(xq, 4);                % expand the input into 4 polynomial features
param = [theta1(:); theta2(:)];          % unroll the weights into a single vector

[J, Grad] = nnCostFunction(param, input_layer_size, hidden_layer_size, ...
                           num_labels, xq, vq, 0);   % initial cost, no regularization

options = optimset('MaxIter', 50);
costFunction = @(p) nnCostFunction(p, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, xq, vq, 10);
[nn_params, cost] = fmincg(costFunction, param, options);

Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

l = xq(:, 1);                            % first feature column, used as the x-axis
out = predictTest(Theta1, Theta2, xq);   % network predictions
accuracy = mean(double(out == vq)) * 100 % exact-match accuracy of the real-valued outputs
plot(l, out, 'yellow');
hold off
function [J, grad] = nnCostFunction(nn_params, ...
                                    input_layer_size, ...
                                    hidden_layer_size, ...
                                    num_labels, ...
                                    X, y, lambda)

y(841:901) = 0;        % zero out samples 841..901 of the target
y = y / 2.2;           % scale the targets by their maximum value (2.2)

% Recover the weight matrices from the unrolled parameter vector.
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

m = size(X, 1);
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));

% Forward propagation.
X  = [ones(m, 1) X];             % add the bias column to the input
z1 = X * Theta1';                % hidden layer pre-activation
a1 = sigmoid(z1);                % hidden layer activation
a1 = [ones(size(a1, 1), 1) a1];  % add the bias column to the hidden layer
z2 = a1 * Theta2';               % output layer pre-activation
a2 = sigmoid(z2);                % output layer activation

% Squared-error cost.
for k = 1:m
    J = J + (y(k) - a2(k))^2;
end
J = J / m;

% Regularization (the bias columns are zeroed so they are not penalized).
Theta1(:, 1) = zeros(size(Theta1, 1), 1);
Theta2(:, 1) = zeros(size(Theta2, 1), 1);
s1 = sum(sum(Theta1.^2));
s2 = sum(sum(Theta2.^2));
s3 = lambda * (s1 + s2) / (2 * m);
J  = J + s3;

% Backpropagation, accumulated one example at a time.
D2 = zeros(size(Theta2));
D1 = zeros(size(Theta1));
for i = 1:m
    delta3 = a2(i) - y(i);                   % output error
    v = [0 sigmoidGradient(z1(i, :))];       % prepend 0 for the bias unit
    delta2 = (Theta2' * delta3') .* v';      % hidden layer error
    D2 = D2 + delta3' * a1(i, :);
    D1 = D1 + delta2(2:end) * X(i, :);
end

Theta1_grad = D1./m + (lambda/m) * [zeros(size(Theta1, 1), 1) Theta1(:, 2:end)];
Theta2_grad = D2./m + (lambda/m) * [zeros(size(Theta2, 1), 1) Theta2(:, 2:end)];
grad = [Theta1_grad(:); Theta2_grad(:)];
end
function W = randInitializeWeights(L_in, L_out)
% Random weights for a layer with L_in inputs and L_out outputs (plus bias),
% drawn uniformly from [-epsilon_init, epsilon_init].
epsilon_init = 0.5;
W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
end
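As a sanity check on the backpropagation, a finite-difference gradient check could be run against nnCostFunction using the same variables as in the script above; this is a minimal sketch (the step size e is an assumed value, and checking every parameter this way is slow but fine for a one-off test):

% Numerical check of the analytic gradient returned by nnCostFunction (sketch).
e = 1e-4;                                              % finite-difference step (assumed)
[~, analyticGrad] = nnCostFunction(param, input_layer_size, ...
    hidden_layer_size, num_labels, xq, vq, 0);
numGrad = zeros(size(param));
for i = 1:numel(param)
    ep = zeros(size(param));
    ep(i) = e;
    Jplus  = nnCostFunction(param + ep, input_layer_size, ...
        hidden_layer_size, num_labels, xq, vq, 0);
    Jminus = nnCostFunction(param - ep, input_layer_size, ...
        hidden_layer_size, num_labels, xq, vq, 0);
    numGrad(i) = (Jplus - Jminus) / (2 * e);
end
max(abs(numGrad - analyticGrad))                       % should be close to 0 if backprop is correct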
The input is 1:9 interpolated in 0.01 increments, and the target is a value between 0 and 2.2 that looks like a square pulse.
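To make the setup concrete, the training data is assumed to look roughly like the following (a hypothetical reconstruction; the exact pulse position and width are not given above, so the edges used here are placeholders):

% Hypothetical reconstruction of the square-pulse training data (sketch).
xq = (1:0.01:9)';                 % inputs from 1 to 9 in 0.01 steps
vq = zeros(size(xq));             % target is 0 everywhere ...
vq(xq >= 3 & xq <= 6) = 2.2;      % ... except inside the pulse, where it is 2.2 (edges assumed)
plot(xq, vq);                     % looks like a square pulse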
[Figure: linear interpolation of the data vs. the predicted values (in red)]
[Figure: updated plot after increasing the number of epochs]
Welcome to Stack Overflow. Could you provide more information about the network topology and some example input and output data? Please also include the whole algorithm and the weight initialization. –
Thanks, I have updated the post. –
Could you add a small table of the expected output for each input? –