* Lightweight backpropagation neural network.
* This is a lightweight library implementing a neural network for use
* in C and C++ programs. It is intended for applications that just
* happen to need a simple neural network and do not want to depend on
* a needlessly complex neural network library. It features multilayer
* feedforward perceptron networks, a sigmoidal activation function
* with bias, backpropagation training with a settable learning rate
* and momentum, and backpropagation training in batches.
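The following is a minimal, self-contained sketch of the two core operations such a library performs: a sigmoid unit with bias, and a per-weight backpropagation update with learning rate and momentum. It is standalone C++ written for illustration; none of the names are taken from this library's actual API.

#include <cmath>
#include <cstddef>
#include <cstdio>

// Logistic sigmoid applied to a weighted sum plus bias.
double activate(const double *w, const double *x, std::size_t n, double bias) {
    double sum = bias;
    for (std::size_t i = 0; i < n; ++i)
        sum += w[i] * x[i];
    return 1.0 / (1.0 + std::exp(-sum));
}

// One backpropagation step for a single weight: the new delta is the
// negative gradient scaled by the learning rate, plus a momentum
// fraction of the previous delta.
double update_weight(double w, double grad, double *prev_delta,
                     double rate, double momentum) {
    double delta = -rate * grad + momentum * *prev_delta;
    *prev_delta = delta;
    return w + delta;
}

int main() {
    double w[2] = {0.5, -0.3}, x[2] = {1.0, 0.0}, prev = 0.0;
    double out = activate(w, x, 2, 0.1);
    // Gradient of the squared error (1/2)(out - target)^2 with respect
    // to w[0], for target 1.0, through the sigmoid derivative.
    double grad = (out - 1.0) * out * (1.0 - out) * x[0];
    w[0] = update_weight(w[0], grad, &prev, 0.5, 0.9);
    std::printf("output %.4f, new w0 %.4f\n", out, w[0]);
    return 0;
}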
neural network utility is a neural-network library for the
C++ programmer. It is entirely object-oriented and focuses on
removing the tedious and confusing parts of programming neural
networks. By this I mean that network layers are easy to define,
an entire multi-layer network can be created in a few lines and
trained with two function calls, and layers can be connected to
one another easily and painlessly; a hypothetical sketch of such
an interface follows.
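The sketch below shows what such a layer-oriented interface can look like. The Layer and Network classes and their methods are invented here for illustration and are not this library's actual API.

#include <cstddef>
#include <vector>

// A layer is described only by its number of units in this sketch.
class Layer {
public:
    explicit Layer(std::size_t units) : units_(units) {}
    std::size_t size() const { return units_; }
private:
    std::size_t units_;
};

// A network is an ordered list of layers; adding a layer implicitly
// connects it to the previous one.
class Network {
public:
    void add(const Layer &layer) { layers_.push_back(layer); }
    std::size_t depth() const { return layers_.size(); }
private:
    std::vector<Layer> layers_;
};

int main() {
    Network net;          // a 2-4-1 multilayer perceptron in a few lines
    net.add(Layer(2));    // input layer
    net.add(Layer(4));    // hidden layer
    net.add(Layer(1));    // output layer
    // Training and evaluation would then be two further calls,
    // e.g. net.train(...) and net.run(...), in the spirit described above.
    return net.depth() == 3 ? 0 : 1;
}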
% k-step ahead predictions are determined by simulation of the
% one-step ahead neural network predictor. For NNARMAX
% models the residuals are set to zero when calculating the
% predictions. The predictions are compared to the observed output.
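In outline, the k-step ahead prediction is obtained by iterating the one-step predictor g on its own earlier predictions. A sketch in generic NNARX-style notation (the symbols n_a, n_b, and u are assumptions, not taken from this help text):

\[
\hat{y}(t+k \mid t) = g\bigl(\hat{y}(t+k-1 \mid t), \dots, \hat{y}(t+k-n_a \mid t),\; u(t+k-1), \dots, u(t+k-n_b)\bigr)
\]

For NNARMAX models the predictor also takes past residuals as regressors, and it is these residual terms that are set to zero when the predictions are computed.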
%
% Train a two-layer neural network with the Levenberg-Marquardt
% method.
%
% If desired, regularization by weight decay can be applied. Pruned
% (i.e., not fully connected) networks can also be trained.
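For reference, weight decay augments the usual mean-square-error criterion with a penalty on the weights. A common formulation (the notation is an assumption, not quoted from the toolbox) is

\[
V(\theta) = \frac{1}{2N}\sum_{t=1}^{N} \varepsilon(t)^2 \;+\; \frac{1}{2N}\,\theta^{\mathsf{T}} D\,\theta,
\]

where \theta collects all network weights, \varepsilon(t) are the prediction errors, and D is a diagonal matrix of weight-decay parameters.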
%
% Given a set of corresponding input-output pairs and an initial
% network,
% [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms)
% trains the network with the Levenberg-Marquardt method.
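In standard form, each Levenberg-Marquardt iteration takes a damped Gauss-Newton step (a sketch; G is the gradient of the criterion, R its Gauss-Newton approximation of the Hessian, and I the identity):

\[
\theta^{(i+1)} = \theta^{(i)} - \bigl(R(\theta^{(i)}) + \lambda I\bigr)^{-1} G(\theta^{(i)}),
\]

where the damping parameter \lambda is increased when a step fails to reduce the criterion and decreased when it succeeds; presumably this is the role of the lambda returned by marq.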
%
% The activation functions can be either linear or tanh. The
% network architecture is defined by the matrix NetDef, which
% has two rows. The first row specifies the hidden layer and the
% second row specifies the output layer.
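As an example of what such a definition might look like, a network with five tanh hidden units and one linear output unit could be written NetDef = ['HHHHH'; 'L----'], with 'H' marking a tanh unit, 'L' a linear unit, and '-' an absent unit. This letter convention is an assumption on my part and is not stated in this excerpt.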
% This function applies the Optimal Brain Surgeon (OBS) strategy for
% pruning neural network models of dynamic systems, that is, networks
% trained by NNARX, NNOE, NNARMAX1, NNARMAX2, or their recursive
% counterparts.
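At its core, OBS ranks each weight w_q by a saliency obtained from a second-order expansion of the training criterion, and adjusts the surviving weights when the least salient weight is removed. In the standard Hassibi-Stork formulation (a sketch; H is the Hessian of the criterion and e_q the unit vector selecting weight q):

\[
L_q = \frac{w_q^2}{2\,[H^{-1}]_{qq}}, \qquad \delta w = -\frac{w_q}{[H^{-1}]_{qq}}\, H^{-1} e_q.
\]

The weight with the smallest saliency L_q is pruned and \delta w is added to the remaining weights.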
% Train a two-layer neural network with a recursive prediction error
% algorithm ("recursive Gauss-Newton"). Pruned (i.e., not fully
% connected) networks can also be trained.
%
% The activation functions can be either linear or tanh. The network
% architecture is defined by the matrix NetDef, which has two
% rows. The first row specifies the hidden layer while the second
% row specifies the output layer.
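In outline, a recursive prediction error ("recursive Gauss-Newton") algorithm updates the weight estimate after every sample rather than in batch. A common form with forgetting factor \lambda (the notation is an assumption; \psi(t) denotes the gradient of the one-step prediction with respect to the weights):

\[
\begin{aligned}
\varepsilon(t) &= y(t) - \hat{y}(t),\\
K(t) &= \frac{P(t-1)\,\psi(t)}{\lambda + \psi(t)^{\mathsf{T}} P(t-1)\,\psi(t)},\\
\hat{\theta}(t) &= \hat{\theta}(t-1) + K(t)\,\varepsilon(t),\\
P(t) &= \frac{1}{\lambda}\bigl(P(t-1) - K(t)\,\psi(t)^{\mathsf{T}} P(t-1)\bigr).
\end{aligned}
\]

Here \lambda is the forgetting factor of the recursion, not the Levenberg-Marquardt damping parameter above.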