<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><HTML><HEAD> <META HTTP-EQUIV="CONTENT-TYPE" CONTENT="text/html; charset=iso-8859-15"> <TITLE>Bayesian Filtering Classes</TITLE> <LINK REL="stylesheet" HREF="Deployment/paper.css" TYPE="text/css"></HEAD><BODY><H1 ALIGN=CENTER><U>Bayesian Filtering Classes</U></H1><H1>Introduction</H1><P ALIGN=LEFT>Bayesian Filtering is a probabilistic technique for data fusion. The technique combines a concise mathematical formulation of a system with observations of that system. Probabilities are used to represent the state of a system, and likelihood functions to represent their relationships. In this form Bayesian inference can be applied and further related probabilities deduced. See <A HREF="http://www.wikipedia.org/">Wikipedia</A> for information on <A HREF="http://www.wikipedia.org/wiki/Probability_theory">Probability theory</A>, <A HREF="http://www.wikipedia.org/wiki/Bayes'_theorem">Bayes' theorem</A> and <A HREF="http://www.wikipedia.org/wiki/Bayesian_inference">Bayesian inference</A>.</P><P ALIGN=LEFT>For <U>discrete</U> systems the Bayesian formulation results in a naturally iterative data fusion solution. For <U>dynamic</U> systems there is a class of solutions, discrete <U>filters</U>, that combine observed outputs of the system with the system's dynamic model. An <U>estimator</U> computes an estimate of the system's state with each observation of the system. Linear estimators such as the Kalman Filter are commonly applied.</P><P>Bayes++ is an open source library of C++ classes. These classes represent and implement a wide variety of numerical algorithms for Bayesian Filtering of discrete systems. The classes provide tested and consistent numerical methods, and the class hierarchy explicitly represents the variety of filtering algorithms and system model types.</P><P>The following documentation summarizes the classes available and provides some general comments on their use.
Each class's <B>.hpp</B> file provides a complete interface specification and the numerical form of the solution. Implementation issues are documented in the <B>.cpp</B> files.</P><H2>Numerical Schemes in Bayes++</H2><P>There is a wide range of different numerical techniques for filtering. For example, the Kalman filter (a linear estimator which calculates the 2<SUP>nd</SUP> order statistics of the system) is represented using either a state vector and a covariance matrix, or alternatively in information form.</P><P>Each numerical technique is a <B><I>Scheme</I></B>, and each Scheme is implemented as a class. Each Scheme implements the algorithms required by a filter's particular numerical form. The Schemes provide a common interface that allows:</P><DL> <DD>i. initialization and update of the state, and</DD> <DD>ii. predict and observe functions with parameterized models</DD></DL><P>Each Scheme has both advantages and disadvantages. The numerical complexity and efficiency vary, as does the numerical stability. The table below lists all Schemes together with any requirement on the representation associated with the algorithm.</P><TABLE WIDTH=725 BORDER=5 CELLPADDING=0 CELLSPACING=0> <TR VALIGN=TOP> <TD WIDTH=30%> <P><B><FONT FACE="Verdana" SIZE=5>Scheme</FONT></B> </TD> <TD WIDTH=43%> <P><B><FONT FACE="Verdana" SIZE=4>Formulation and<BR>Algorithm used</FONT></B> </TD> <TD WIDTH=25%> <P><B><FONT FACE="Verdana" SIZE=4>Representation Requirements</FONT></B> </TD> </TR> <TR VALIGN=TOP> <TD WIDTH=30%> <H4>Covariance_scheme</H4> <P>Extended Kalman Filter (EKF)</P> <P><A HREF="#References">See Reference [6]</A></TD> <TD WIDTH=43%> <P>Classic 2nd order filter with state mean and covariance representation. Non-linear models require a gradient linearisation (Jacobian).
Uses the common innovation update formulation.</TD> <TD WIDTH=25%> <P>Innovation covariance invertible</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=30%> <H4>Unscented_scheme</H4> <P>Kalman filter using unscented non-linear approximations</P> <P><A HREF="#References">See Reference [1]</A></TD> <TD WIDTH=43%> <P>Unscented non-linear transformations replace the Jacobians used in the EKF, reducing linearisation errors in all cases.</TD> <TD WIDTH=25%> <P>Innovation covariance invertible</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=30%> <H4>Iterated_covariance_scheme</H4> <P>Modified EKF update</P> <P><A HREF="#References">See Reference [6]</A></TD> <TD WIDTH=43%> <P>Kalman filter for highly non-linear observation models. The observation update is iterated using an Euler approximation.</TD> <TD WIDTH=25%> <P>State, observation and innovation covariance invertible</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=30%> <H4>Information_scheme</H4> <P>Extended Information Filter (EIF)</P> <P><A HREF="#References">See Reference [7]</A></TD> <TD WIDTH=43%> <P>Information (inverse covariance) filter.</P> <P>An invertible linear model allows direct information form prediction.<BR>Non-linear prediction requires conversion back to the state representation.<BR>Observe with a modified innovation formulation equivalent to the EKF.</TD> <TD WIDTH=25%> <P>State, observation and innovation covariance invertible</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=30%> <H4>Information_root_scheme</H4> <P>Square root information filter (SRIF)</P> <P><A HREF="#References">See Reference [4]</A></TD> <TD WIDTH=43%> <P>Numerically stable solution using factorization of the inverse covariance as R'R.</TD> <TD WIDTH=25%> <P>Invertible prediction model.
Prediction and observation covariance invertible</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=30%> <H4>UD_scheme</H4> <P>Square root covariance filter</P> <P><A HREF="#References">See Reference [2]</A></TD> <TD WIDTH=43%> <P>Numerically stable solution using UdU' factorization of the covariance.</TD> <TD WIDTH=25%> <P>Innovation covariance invertible. Linearized observation requires uncorrelated noise</TD> </TR> <TR VALIGN=TOP> <TD> <P><B>CI_scheme<BR><BR></B>Covariance intersect filter </P> <P><A HREF="#References">See Reference [8]</A></TD> <TD> <P>CI is interesting as it provides a weaker but more robust fusion than traditional covariance-based methods such as the Kalman filter. It estimates the state and an upper bound on what its covariance could be.</TD> <TD> <P>No solution if covariances don't intersect.</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=30%> <H4>SIR_scheme</H4> <P>Sequential Importance Re-sampling filter</P> <P><A HREF="#References">See Reference [5]</A></TD> <TD WIDTH=43%> <P>A bootstrap representation of the state distribution – no linear or Gaussian assumptions required.</P> <P>Uncorrelated linear roughening of samples.</TD> <TD WIDTH=25%> <P>Bootstrap samples collapse to a degenerate form with fewer samples than states.</TD> </TR> <TR VALIGN=TOP> <TD WIDTH=30%> <H4>SIR_kalman_scheme</H4> <P>Sequential Importance Re-sampling filter</TD> <TD WIDTH=43%> <P>A bootstrap representation of the state distribution – no linear or Gaussian assumptions required.</P> <P>Additionally computes the state mean and covariance. Correlated linear roughening of samples.</TD> <TD WIDTH=25%> As above. Correlation non-computable from degenerate samples.</TD> </TR></TABLE><H1>Three Examples</H1><H2>Simple Example (Initialize – Predict – Observe)</H2><P>This is a very simple example for those who have never used the Bayesian Filtering Classes before. The example shows how two classes are built. The first is the prediction model, the second the observation model.
In this example they represent a simple linear problem with only one state variable and constant model noises.</P><P>The example then constructs a filter. The <B>Unscented</B> filter scheme is chosen to illustrate how this works, even on a simple linear problem. After construction the filter is given the problem’s initial conditions. A prediction using the predefined prediction model is then made. This prediction is then fused with an external observation given by the example and the defined observation model. At each stage, the filter’s state estimate and variance estimate are printed.</P><H2>Position and Velocity</H2><P>This example solves a simple position and velocity observation problem using the Bayesian filter classes. The system has two states, position and velocity, and a noisy position observation is simulated. Two variants:</P><DL> <DD>1) direct filter</DD> <DD>2) indirect filter, where filtering is performed on the error and the state is estimated indirectly</DD></DL><P>are demonstrated using a specified numerical scheme. The example shows how to build <U>linear</U> prediction and observation models and how to cycle the filter to produce state estimates.</P><H2>Quadratic Calibration</H2><P>This example implements a “Quadratic Calibration” filter. The observer is trying to estimate the state of a system as well as the system’s scale factor and bias. A simple system is simulated where the state is affected by a known perturbation.</P><P>The example shows how to build <U>linearized</U> prediction and observation models and how to cycle a filter to produce state estimates.</P><H1>Dual Abstraction</H1><P>The aim of Bayes++ is to provide a powerful and extensible structure for Bayesian filtering. This is achieved by hierarchical composition of related objects. In C++ these are represented by a hierarchy of polymorphic classes. Numerical Schemes are grouped by their state representation.
The Schemes themselves provide the filtering operations on these representations.</P><P>To complete a filter, an associated prediction and observation model must also be specified. These model classes parameterize the filtering operations. The user's responsibility is to choose a model type and provide the numerical algorithm for that model. Again, model classes are composed using a hierarchy of polymorphic abstract classes to represent the structure of each model type.</P><P>This <EM>dual abstraction</EM>, separating numerical schemes from model structure, provides a great deal of flexibility. In particular it allows a well-specified model to be used with a variety of numerical schemes.</P><H2>Filter Hierarchy</H2><P>The table below lists a few of the key abstract filter classes upon which the filter Schemes are built. The C++ class hierarchy documentation gives the complete structure.</P><DL> <DD> <TABLE WIDTH=566 BORDER=3 CELLPADDING=2 CELLSPACING=0> <COL WIDTH=210> <COL WIDTH=342> <TR VALIGN=TOP> <TD WIDTH=210> <P><STRONG>Bayes_filter</STRONG> </TD> <TD WIDTH=342> <P>Base class for everything<BR>(contains no data) </TD> </TR> <TR VALIGN=TOP> <TD WIDTH=210> <P><STRONG>Likelihood_filter</STRONG></TD>