There are plenty of reasons why you might not want to work with newff, but RTFM:
newff creates a feed-forward backpropagation network.
Obsoleted in R2010b NNET 7.0. Last used in R2010a NNET 6.0.4. The recommended function is feedforwardnet.
Syntax
net = newff(P,T,S)
net = newff(P,T,S,TF,BTF,BLF,PF,IPF,OPF,DDF)
Description
newff(P,T,S) takes,
P - RxQ1 matrix of Q1 representative R-element input vectors.
T - SNxQ2 matrix of Q2 representative SN-element target vectors.
Si - Sizes of N-1 hidden layers, S1 to S(N-1), default = [].
(Output layer size SN is determined from T.)
and returns an N layer feed-forward backprop network.
newff(P,T,S,TF,BTF,BLF,PF,IPF,OPF,DDF) takes optional inputs,
TFi - Transfer function of ith layer. Default is 'tansig' for
hidden layers, and 'purelin' for output layer.
BTF - Backprop network training function, default = 'trainlm'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
IPF - Row cell array of input processing functions.
Default is {'fixunknowns','remconstantrows','mapminmax'}.
OPF - Row cell array of output processing functions.
Default is {'remconstantrows','mapminmax'}.
DDF - Data division function, default = 'dividerand';
and returns an N layer feed-forward backprop network.
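For illustration, here is a minimal sketch of the documented newff(P,T,S) call (P, T and the layer sizes below are placeholders, not data from the question):
% Sketch only: random placeholder data to show the documented signature.
P = rand(3, 100);              % 3-element input vectors, 100 sample columns
T = rand(2, 100);              % 2-element target vectors, 100 sample columns
net = newff(P, T, [15 10]);    % two hidden layers of 15 and 10 neurons
% Defaults apply: 'tansig' hidden layers, 'purelin' output, 'trainlm' training.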
hardlim is not a transfer function here. Use trainlm instead of trainbg:
net = newff(minmax(input_training_data), [15 10], {'logsig','logsig'} , 'trainlm');
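Since newff is deprecated, a rough sketch of a comparable network built with the recommended feedforwardnet would look like this (the variable names are assumptions, not from the question; the output layer size is taken from the targets when the network is trained):
% Sketch only: input_training_data / target_training_data are placeholders.
net = feedforwardnet(15, 'trainlm');     % one hidden layer of 15 neurons, Levenberg-Marquardt training
net.layers{1}.transferFcn = 'logsig';    % hidden layer transfer function
net.layers{2}.transferFcn = 'logsig';    % output layer transfer function
net = train(net, input_training_data, target_training_data);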
If I shouldn't use newff, what do you suggest? – Marco 2012-03-04 23:11:57
The MATLAB documentation recommends feedforwardnet(). You can also use network(); for more complex things, such as cascaded networks, you should check the documentation. – 2012-03-07 17:17:34
btw, if you found the answer useful, you should upvote... – 2012-03-07 17:19:04