% Batch version of the back-propagation algorithm.
% Given a set of corresponding input-output pairs and an initial network,
% [W1,W2,critvec,iter]=batbp(NetDef,W1,W2,PHI,Y,trparms) trains the
% network with backpropagation.
%
% The activation functions must be either linear or tanh. The network
% architecture is defined by the matrix NetDef consisting of two
% rows. The first row specifies the hidden layer while the second
% specifies the output layer.
%
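% A minimal usage sketch (the data, network size, and the trparms
% layout below are illustrative assumptions, not part of this help
% text; check the toolbox documentation for the exact trparms format):
%   PHI     = randn(2,100);               % 2 inputs, 100 training samples
%   Y       = sin(PHI(1,:).*PHI(2,:));    % 1 output per sample
%   NetDef  = ['HHHHH';'L----'];          % 5 tanh hidden units, 1 linear output
%   W1      = 0.1*randn(5,3);             % hidden weights (inputs + bias)
%   W2      = 0.1*randn(1,6);             % output weights (hidden units + bias)
%   trparms = [500 1e-4 0.01 0];          % assumed: [max_iter stop_crit eta alpha]
%   [W1,W2,critvec,iter] = batbp(NetDef,W1,W2,PHI,Y,trparms);
%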
% This function calculates Akaike's final prediction error
% estimate of the average generalization error.
%
% [FPE,deff,varest,H] = fpe(NetDef,W1,W2,PHI,Y,trparms) produces the
% final prediction error estimate (FPE), the effective number of
% weights in the network (deff) if the network has been trained with
% weight decay, an estimate of the noise variance (varest), and the
% Gauss-Newton Hessian (H).
%
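% A minimal usage sketch, assuming NetDef, W1, W2, PHI, Y, and trparms
% are set up as in the example above and the network has already been
% trained (e.g., with batbp or marq):
%   [FPE,deff,varest,H] = fpe(NetDef,W1,W2,PHI,Y,trparms);
%   % A lower FPE suggests better expected generalization. Without
%   % weight decay, deff equals the total number of weights.
%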
% Train a two-layer neural network with the Levenberg-Marquardt
% method.
%
% If desired, it is possible to use regularization by
% weight decay. Pruned (i.e., not fully connected) networks
% can also be trained.
%
% Given a set of corresponding input-output pairs and an initial
% network,
% [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms)
% trains the network with the Levenberg-Marquardt method.
%
% The activation functions can be either linear or tanh. The
% network architecture is defined by the matrix NetDef which
% has two rows. The first row specifies the hidden layer and the
% second row specifies the output layer.
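%
% A minimal usage sketch (the data, network size, and the trparms
% layout below are illustrative assumptions; in particular the
% weight-decay entry D may depend on the toolbox version):
%   PHI     = randn(3,200);               % 3 inputs, 200 training samples
%   Y       = PHI(1,:).^2 - PHI(2,:);     % 1 output per sample
%   NetDef  = ['HHHHHH';'L-----'];        % 6 tanh hidden units, 1 linear output
%   W1      = 0.1*randn(6,4);             % hidden weights (inputs + bias)
%   W2      = 0.1*randn(1,7);             % output weights (hidden units + bias)
%   trparms = [200 1e-4 1 0];             % assumed: [max_iter stop_crit lambda D]
%   [W1,W2,critvec,iteration,lambda] = marq(NetDef,W1,W2,PHI,Y,trparms);
%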
% This function calculates Akaike's final prediction error
% estimate of the average generalization error for network
% models generated by NNARX, NNOE, NNARMAX1, NNARMAX2, or their recursive
% counterparts.
%
% [FPE,deff,varest,H] = nnfpe(method,NetDef,W1,W2,U,Y,NN,trparms,skip,Chat)
% produces the final prediction error estimate (FPE), the effective number
% of weights in the network (deff) if it has been trained with weight
% decay, an estimate of the noise variance (varest), and the Gauss-Newton
% Hessian (H).
%
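% A minimal usage sketch for a model trained with NNARX (the data, the
% NN regressor layout, the trparms layout, and the trailing skip/Chat
% arguments are illustrative assumptions, not part of this help text):
%   U  = randn(1,200);                             % input signal
%   Y  = filter(1,[1 -0.7],U) + 0.05*randn(1,200); % output signal
%   NetDef  = ['HHHH';'L---'];            % 4 tanh hidden units, 1 linear output
%   NN      = [2 2 1];                    % assumed regressor structure [na nb nk]
%   trparms = [200 1e-4 1 0];             % assumed: [max_iter stop_crit lambda D]
%   [W1,W2] = nnarx(NetDef,NN,[],[],trparms,Y,U);  % [] = random initial weights
%   [FPE,deff,varest,H] = nnfpe('nnarx',NetDef,W1,W2,U,Y,NN,trparms,0,[]);
%   % skip=0 discards no initial samples; Chat=[] since an NNARX model
%   % has no noise-model polynomial (both assumed conventions).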