We describe and demonstrate an algorithm that takes as input an
unorganized set of points {x_1, ..., x_n} ⊂ R^3 on or near an unknown
manifold M, and produces as output a simplicial surface that
approximates M. Neither the topology, the presence of boundaries,
nor the geometry of M is assumed to be known in advance; all
are inferred automatically from the data. This problem naturally
arises in a variety of practical situations such as range scanning
an object from multiple viewpoints, recovery of biological shapes
from two-dimensional slices, and interactive surface sketching.
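A typical first ingredient of such reconstruction methods is to estimate a tangent plane at each sample point from its k nearest neighbors, taking the plane normal to be the eigenvector of the local covariance matrix with the smallest eigenvalue. The MATLAB sketch below illustrates that step under our own assumptions (point set X as an n-by-3 matrix, neighborhood size k); it illustrates the general technique, not necessarily the authors' exact procedure:

function [centers, normals] = tangent_planes(X, k)
  % Fit a tangent plane (centroid + unit normal) at each sample point by
  % PCA over its k nearest neighbors. X is n-by-3; k is an assumed parameter.
  n = size(X, 1);
  centers = zeros(n, 3);
  normals = zeros(n, 3);
  for i = 1:n
    d = sum((X - X(i,:)).^2, 2);     % squared distances to every sample
    [~, idx] = sort(d);
    nbhd = X(idx(1:k), :);           % k nearest neighbors (point itself included)
    centers(i, :) = mean(nbhd, 1);   % plane centroid
    [V, D] = eig(cov(nbhd));         % eigendecomposition of local covariance
    [~, j] = min(diag(D));           % smallest eigenvalue gives the normal
    normals(i, :) = V(:, j)';
  end
end

A full method would still need to orient these per-point normals consistently and extract a surface from them, e.g. by a contouring step.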
// Chebyshev outlier detection
// This function detects abnormal values within a set of data.
// Input:
//   delta: a set of data
//   flag:  marks which data points are already known to be outliers
//   p:     restriction level
// Output:
//   double[] door: thresholds beyond which a value may be considered an outlier
//     door[0]: the upper threshold (upper door)
//     door[1]: the lower threshold (lower door)
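Only the header survives here, so a minimal completion sketch in Java follows; the body is our assumption, not the original. It reads p as a Chebyshev tail bound, P(|X - mu| >= k*sigma) <= 1/k^2 = p, so the coefficient is k = 1/sqrt(p):

public static double[] chebyshevDoor(double[] delta, boolean[] flag, double p) {
    double sum = 0.0, sq = 0.0;
    int m = 0;
    for (int i = 0; i < delta.length; i++)
        if (!flag[i]) { sum += delta[i]; m++; }        // skip known outliers
    double mean = sum / m;
    for (int i = 0; i < delta.length; i++)
        if (!flag[i]) sq += (delta[i] - mean) * (delta[i] - mean);
    double sigma = Math.sqrt(sq / m);                  // std. dev. of the clean data
    double k = 1.0 / Math.sqrt(p);                     // Chebyshev coefficient
    return new double[] { mean + k * sigma,            // door[0]: upper door
                          mean - k * sigma };          // door[1]: lower door
}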
FLEX chip implementation
File: UP2FLEX
JTAG jumper settings: down, down, up, up
Input:
Reset - FLEX_PB1
Input n - FLEX_SW switches 1 to 8
Output:
Countdown - two 7-segment LEDs.
Done light - decimal point on Digit1.
Operation:
Set up the binary input number n on the switches.
Press the Reset switch.
See the countdown from n down to 0 on the 7-segment LEDs.
The Done light is lit when the program terminates.
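For reference, a minimal MATLAB sketch modeling the behavior described above (the actual design runs in hardware on the FLEX chip; the one-second tick is an assumption):

n = 25;                        % value set on FLEX_SW switches 1 to 8 (0..255)
for count = n:-1:0
    fprintf('%3d\n', count);   % what the two 7-segment digits would show
    pause(1);                  % assumed one-second tick between counts
end
disp('Done light on');         % decimal point on Digit1 lights up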
% The k-step ahead predictions are determined by simulation of the
% one-step ahead neural network predictor. For NNARMAX
% models the residuals are set to zero when calculating the
% predictions. The predictions are compared to the observed output.
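Concretely, "simulation" means feeding each prediction back in place of the measured output. A minimal MATLAB sketch, assuming a hypothetical one-step predictor onestep(ypast, upast) for an NNARX-style model (not a function of this toolbox):

function yhat = kstep_ahead(onestep, y, u, t, k)
  % Predict y(t+1..t+k) from data measured up to time t by repeatedly
  % applying the one-step predictor and feeding its output back.
  ypast = y(1:t);                          % measured outputs up to time t
  yhat = zeros(k, 1);
  for i = 1:k
    yhat(i) = onestep(ypast, u(1:t+i-1));  % one-step ahead prediction
    ypast = [ypast; yhat(i)];              % prediction replaces measured output
  end
end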
%
% Train a two-layer neural network with the Levenberg-Marquardt
% method.
%
% If desired, it is possible to use regularization by
% weight decay. Also pruned (i.e., not fully connected) networks can
% be trained.
%
% Given a set of corresponding input-output pairs and an initial
% network,
% [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms)
% trains the network with the Levenberg-Marquardt method.
%
% The activation functions can be either linear or tanh. The
% network architecture is defined by the matrix NetDef which
% has two rows. The first row specifies the hidden layer and the
% second row specifies the output layer.
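A minimal usage sketch of the call documented above. The 'H'/'L' unit codes, the '-' padding, and the weight-matrix shapes follow what we take to be the toolbox convention (tanh hidden units, linear output, one bias column per layer); the data and trparms here are placeholders:

NetDef = ['HHHHH'; 'L----'];   % assumed: 5 tanh hidden units, 1 linear output unit
PHI = rand(3, 100);            % 3 regressors (inputs) x 100 samples (placeholder data)
Y = rand(1, 100);              % observed outputs (placeholder data)
W1 = rand(5, 3 + 1);           % hidden weights: 5 units x (3 inputs + bias), assumed shape
W2 = rand(1, 5 + 1);           % output weights: 1 unit x (5 hidden + bias), assumed shape
trparms = [];                  % training parameters; empty to request defaults (assumption)
[W1, W2, critvec, iteration, lambda] = marq(NetDef, W1, W2, PHI, Y, trparms);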
% Train a two-layer neural network with a recursive prediction error
% algorithm ("recursive Gauss-Newton"). Also pruned (i.e., not fully
% connected) networks can be trained.
%
% The activation functions can either be linear or tanh. The network
% architecture is defined by the matrix NetDef, which has two
% rows. The first row specifies the hidden layer while the second
% specifies the output layer.
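For reference, the core of a recursive Gauss-Newton update in MATLAB. This is a generic sketch of the algorithm family with a forgetting factor lam, not necessarily the exact update rule this toolbox implements:

function [theta, P] = rgn_step(theta, P, psi, e, lam)
  % One recursive Gauss-Newton step. theta: parameter vector; P: inverse
  % Hessian approximation; psi: gradient of the prediction w.r.t. theta;
  % e: prediction error y - yhat; lam: forgetting factor (e.g. 0.99).
  K = (P * psi) / (lam + psi' * P * psi);   % Gauss-Newton gain
  theta = theta + K * e;                    % move parameters to reduce the error
  P = (P - K * (psi' * P)) / lam;           % update P (matrix inversion lemma + forgetting)
end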