When I first studied Kalman filtering, I saw many advanced signal processing submissions here at the MATLAB Central File Exchange, but I didn't see a heavily commented, basic Kalman filter that would allow someone new to Kalman filters to learn how to create one. So, a year later, I've written a very simple, heavily commented discrete Kalman filter.
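The submission itself is MATLAB, but the underlying predict/update recursion is language-independent. Below is a minimal NumPy sketch of the same discrete cycle, with an illustrative constant-level example; all variable names and noise values here are assumptions, not the File Exchange code:

    import numpy as np

    def kalman_filter(zs, A, H, Q, R, x, P):
        # Basic discrete Kalman filter: predict with the model,
        # then correct with each incoming measurement.
        estimates = []
        for z in zs:
            # Predict step: propagate state estimate and covariance.
            x = A @ x
            P = A @ P @ A.T + Q
            # Update step: blend prediction and measurement via the Kalman gain.
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            estimates.append(x.copy())
        return estimates

    # Example: estimate a constant scalar level from noisy readings.
    zs = [np.array([v]) for v in (0.9, 1.1, 1.0, 0.95, 1.05)]
    A = H = np.eye(1)
    Q = np.eye(1) * 1e-5        # small process noise: the level barely drifts
    R = np.eye(1) * 0.01        # measurement noise variance
    print(kalman_filter(zs, A, H, Q, R, x=np.zeros(1), P=np.eye(1)))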
In this article, we present an overview of methods for sequential simulation from posterior distributions.
These methods are of particular interest in Bayesian filtering for discrete time dynamic models
that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed
that unifies many of the methods which have been proposed over the last few decades in several
different scientific disciplines. Novel extensions to the existing methods are also proposed. We show in
particular how to incorporate local linearisation methods similar to those which have previously been
employed in the deterministic filtering literature; these lead to very effective importance distributions.
Furthermore we describe a method which uses Rao-Blackwellisation in order to take advantage of
the analytic structure present in some important classes of state-space models. In a final section we
develop algorithms for prediction, smoothing and evaluation of the likelihood in dynamic models.
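To make the generic sequential importance sampling recursion concrete, here is a minimal bootstrap filter sketch in Python. It proposes from the state transition prior, so the local-linearisation proposals discussed in the abstract would replace sample_transition with a better importance distribution. All function and model choices below are illustrative assumptions, not the authors' code:

    import numpy as np

    def bootstrap_particle_filter(ys, n, sample_init, sample_transition, loglik, rng):
        # Sequential importance resampling with the state transition
        # prior as the importance distribution (the "bootstrap" choice).
        particles = sample_init(n)
        means = []
        for y in ys:
            particles = sample_transition(particles)    # propose new states
            logw = loglik(y, particles)                 # importance weights
            w = np.exp(logw - logw.max())               # stabilised normalisation
            w /= w.sum()
            idx = rng.choice(n, size=n, p=w)            # multinomial resampling
            particles = particles[idx]
            means.append(particles.mean())              # filtered mean estimate
        return means

    # Illustrative nonlinear, non-Gaussian setup (all values assumed).
    rng = np.random.default_rng(0)
    means = bootstrap_particle_filter(
        ys=rng.normal(0.0, 1.0, size=20),               # stand-in observations
        n=1000,
        sample_init=lambda n: rng.normal(0.0, 2.0, size=n),
        sample_transition=lambda x: 0.5 * x + 25 * x / (1 + x**2)
                                    + rng.normal(0, 1, x.size),
        loglik=lambda y, x: -0.5 * (y - x**2 / 20.0) ** 2,
        rng=rng)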
This directory contains code implementing the K-means algorithm. Source code
may be found in KMEANS.CPP. Sample data is found in KM2.DAT. The KMEANS
program reads input vectors and computes the requested number of
cluster centers using the K-means algorithm. Output is
directed to the screen.
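The C++ source is the reference implementation; for readers who want the algorithm at a glance, here is a compact NumPy sketch of the same K-means (Lloyd) iteration, not a transcription of KMEANS.CPP:

    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        # Lloyd's algorithm: alternate nearest-center assignment and mean update.
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # Assign every vector to its nearest center.
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each center to the mean of its assigned vectors.
            new_centers = np.array([X[labels == j].mean(axis=0)
                                    if (labels == j).any() else centers[j]
                                    for j in range(k)])
            if np.allclose(new_centers, centers):
                break                                   # converged
            centers = new_centers
        return centers, labels

    # Three well-separated 2-D clusters as a quick sanity check.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0, 3, 6)])
    centers, labels = kmeans(X, k=3)
    print(centers)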
GIS support library: Geospatial Data Abstraction Library (GDAL) code. GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation.
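A minimal usage sketch with GDAL's standard Python bindings, to show what the translator library looks like from calling code (example.tif is a placeholder filename, not part of this package):

    from osgeo import gdal

    gdal.UseExceptions()

    # Open a raster dataset and inspect its format, size, and georeferencing.
    ds = gdal.Open("example.tif")
    print("Driver:", ds.GetDriver().ShortName)          # e.g. GTiff
    print("Size:", ds.RasterXSize, "x", ds.RasterYSize,
          "pixels,", ds.RasterCount, "band(s)")
    print("Geotransform:", ds.GetGeoTransform())        # origin and pixel size
    print("Projection:", ds.GetProjection())

    band = ds.GetRasterBand(1)
    data = band.ReadAsArray()                           # band contents as a NumPy array
    print("Band 1 range:", data.min(), "-", data.max())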
Boosting is a meta-learning approach that aims at combining an ensemble of weak classifiers to form a strong classifier. Adaptive Boosting (AdaBoost) implements this idea as a greedy search for a linear combination of classifiers, overweighting the examples that are misclassified by each classifier. icsiboost implements AdaBoost over stumps (one-level decision trees) on discrete and continuous attributes (words and real values). See http://en.wikipedia.org/wiki/AdaBoost and the papers by Y. Freund and R. Schapire for more details [1]. This approach is one of the most efficient and simplest ways to combine continuous and nominal values. Our implementation aims to allow training on millions of examples with hundreds of features in reasonable time and memory.
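The sketch below is not the icsiboost C source; it only shows the mechanics the paragraph describes, AdaBoost over stumps on continuous attributes, in NumPy, with all names being illustrative:

    import numpy as np

    def train_adaboost_stumps(X, y, n_rounds):
        # AdaBoost over decision stumps; labels y must be in {-1, +1}.
        n, d = X.shape
        w = np.full(n, 1.0 / n)                 # example weights, updated each round
        ensemble = []
        for _ in range(n_rounds):
            best = None
            for j in range(d):                  # greedy search over all stumps
                for thr in np.unique(X[:, j]):
                    for sign in (1, -1):
                        pred = sign * np.where(X[:, j] > thr, 1, -1)
                        err = w[pred != y].sum()
                        if best is None or err < best[0]:
                            best = (err, j, thr, sign)
            err, j, thr, sign = best
            alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
            pred = sign * np.where(X[:, j] > thr, 1, -1)
            w *= np.exp(-alpha * y * pred)      # overweight misclassified examples
            w /= w.sum()
            ensemble.append((alpha, j, thr, sign))
        return ensemble

    def predict(ensemble, X):
        votes = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
        return np.sign(votes)

This exhaustive stump search is quadratic and meant only to show the mechanics; the actual icsiboost code is engineered for large sparse feature sets, which is what makes training on millions of examples feasible.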