-
This demo shows how to implement a generic SIR (a.k.a. particle, bootstrap, or sequential Monte Carlo) filter to estimate the hidden states of a nonlinear, non-Gaussian state-space model.
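A minimal bootstrap (SIR) filter sketch in MATLAB, written from scratch rather than taken from the demo: the growth model, noise variances, particle count, and systematic resampling step below are all illustrative assumptions.

rng(0);
N = 500;  T = 50;                        % particles and time steps (assumed)
f = @(x,k) x/2 + 25*x./(1+x.^2) + 8*cos(1.2*k);   % state transition (toy model)
h = @(x) x.^2/20;                        % measurement function
Q = 10;  R = 1;                          % process / measurement noise variances

x = zeros(1,T);  y = zeros(1,T);  x(1) = 0.1;     % simulate data to filter
for k = 2:T, x(k) = f(x(k-1),k) + sqrt(Q)*randn; end
y = h(x) + sqrt(R)*randn(1,T);

xp = sqrt(Q)*randn(N,1);                 % initial particle cloud
xest = zeros(1,T);
for k = 1:T
    xp = f(xp,k) + sqrt(Q)*randn(N,1);   % propagate: proposal = prior (bootstrap)
    w = exp(-0.5*(y(k) - h(xp)).^2/R);   % weight by measurement likelihood
    w = w/sum(w);
    xest(k) = w'*xp;                     % posterior-mean estimate of the hidden state
    c = cumsum(w);  c(end) = 1;          % systematic resampling
    u = ((0:N-1)' + rand)/N;  idx = zeros(N,1);  j = 1;
    for i = 1:N
        while c(j) < u(i), j = j + 1; end
        idx(i) = j;
    end
    xp = xp(idx);
end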
Tags:
a.k.a.
bootstrap
implement
particle
Upload time:
2014-11-10
Uploader: caozhizhi
-
CHMMBOX, version 1.2, Iead Rezek, Oxford University, Feb 2001.
A Matlab toolbox for maximum a posteriori estimation of two-chain
coupled hidden Markov models.
Tags:
aposteriori
University
CHMMBOX
version
Upload time:
2014-01-23
Uploader: rocwangdp
-
madCollection 2.5.2.6 full source
This is not your everyday VCL component collection. You won't see many new colored icons in the component palette. My packages don't offer many visual components to play with. Sorry, if you expected that!
My packages are about low-level stuff for the most part, with handling that is as easy as possible. To find the hidden treasures, you will have to look at the documentation (which you're reading at this very moment). Later I plan on writing some nice demos, but for now the documentation must be enough to get you started.
Tags:
madCollection
collection
component
source
Upload time:
2014-01-18
Uploader: yoleeson
-
Hidden_Markov_model_for_automatic_speech_recognition
This code implements in C++ a basic left-right hidden Markov model
and the corresponding Baum-Welch (ML) training algorithm. It is meant as
an example of the HMM algorithms described by L. Rabiner (1) and
others. Serious students are directed to the sources listed below for
a theoretical description of the algorithm. K. F. Lee (2) offers an
especially good tutorial on how to build a speech recognition system
using hidden Markov models.
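As a quick companion, here is a MATLAB sketch (not the C++ code itself) of the scaled forward algorithm for a discrete left-right HMM; the transition, emission, and observation values are toy assumptions.

A = [0.6 0.4 0.0;                  % left-right: states move only to self or forward
     0.0 0.7 0.3;
     0.0 0.0 1.0];
B = [0.7 0.2 0.1;                  % B(i,k) = P(observe symbol k | state i)
     0.1 0.6 0.3;
     0.2 0.2 0.6];
pi0 = [1 0 0];                     % left-right models start in the first state
O = [1 1 2 2 3 3];                 % observation sequence (symbol indices)

T = numel(O);  N = size(A,1);
alpha = zeros(T,N);  scale = zeros(T,1);
alpha(1,:) = pi0 .* B(:,O(1))';                    % initialization
scale(1) = sum(alpha(1,:));  alpha(1,:) = alpha(1,:)/scale(1);
for t = 2:T
    alpha(t,:) = (alpha(t-1,:) * A) .* B(:,O(t))'; % induction step
    scale(t) = sum(alpha(t,:));  alpha(t,:) = alpha(t,:)/scale(t);
end
loglik = sum(log(scale));          % log P(O | model), from the scaling factors
fprintf('log-likelihood = %.4f\n', loglik);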
Tags:
Hidden_Markov_model_for_automatic
speech_recognition
implements
left-right
Upload time:
2016-01-23
Uploader: 569342831
-
If you have programming experience and a familiarity with C, the dominant language in embedded systems, Programming Embedded Systems, Second Edition is exactly what you need to get started with embedded software. This software is ubiquitous, hidden away inside our watches, DVD players, mobile phones, anti-lock brakes, and even a few toasters. The military uses embedded software to guide missiles, detect enemy aircraft, and pilot UAVs. Communication satellites, deep-space probes, and many medical instruments would have been nearly impossible to create without embedded software.
Tags:
familiarity
programming
experience
dominant
Upload time:
2013-12-11
Uploader: 362279997
-
My own implementation of an incremental random-hidden-node neural network algorithm. Its key feature is a guaranteed universal approximation property; it is also fast and performs well, making it useful as a baseline for academic comparison and analysis. At present it is only suitable for benchmark regression problems.
For details on its performance, see:
G.-B. Huang, L. Chen and C.-K. Siew, “Universal Approximation Using Incremental Constructive Feedforward Networks with Random Hidden Nodes”, IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879-892, 2006.
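A minimal MATLAB sketch of the incremental idea in the cited paper: each step adds one random sigmoid hidden node and fits only its output weight by least squares against the current residual. The toy 1-D regression target and all constants are illustrative assumptions, not the author's code.

rng(0);
X = linspace(-1, 1, 200)';           % inputs (N x 1)
Y = sin(3*X) + 0.05*randn(200,1);    % noisy benchmark-style regression target
Lmax = 50;                           % number of hidden nodes to add
e = Y;                               % residual starts as the target itself
beta = zeros(Lmax,1);  a = zeros(Lmax,1);  b = zeros(Lmax,1);
for L = 1:Lmax
    a(L) = 2*rand - 1;  b(L) = 2*rand - 1;       % random input weight and bias
    H = 1./(1 + exp(-(X*a(L) + b(L))));          % new node's output on all data
    beta(L) = (e'*H) / (H'*H);       % least-squares output weight vs. residual
    e = e - beta(L)*H;               % residual shrinks as nodes are added
end
fprintf('training RMSE after %d nodes: %.4f\n', Lmax, sqrt(mean(e.^2)));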
Tags:
incremental
implementation
neural network
algorithm
Upload time:
2016-09-18
Uploader: litianchu
-
Inside the C++ Object Model
Inside the C++ Object Model focuses on the underlying mechanisms that support object-oriented programming within C++: constructor semantics, temporary generation, support for encapsulation, inheritance, and "the virtuals": virtual functions and virtual inheritance. This book shows how understanding the underlying implementation models can help you code more efficiently and with greater confidence. Lippman dispels the misinformation and myths about the overhead and complexity associated with C++, while pointing out areas in which costs and trade-offs, sometimes hidden, do exist. He then explains how the various implementation models arose, points out areas in which they are likely to evolve, and why they are what they are. He covers the semantic implications of the C++ object model and how that model affects your programs.
Tags:
Inside
Object
the
Model
Upload time:
2013-12-24
Uploader: zhouli
-
% Batch version of the back-propagation algorithm.
% Given a set of corresponding input-output pairs and an initial network,
% [W1,W2,critvec,iter]=batbp(NetDef,W1,W2,PHI,Y,trparms) trains the
% network with backpropagation.
%
% The activation functions must be either linear or tanh. The network
% architecture is defined by the matrix NetDef consisting of two
% rows. The first row specifies the hidden layer while the second
% specifies the output layer.
%
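A self-contained MATLAB sketch of the technique the help text describes (batch backpropagation for a tanh hidden layer and a linear output); it is not the toolbox's batbp itself, and the data, learning rate, and bias handling are assumptions.

rng(0);
PHI = linspace(-2, 2, 100);          % inputs, 1 x N
Y = PHI.^2 - 1;                      % targets, 1 x N
nh = 8;  eta = 0.01;  iters = 2000;  % hidden units, step size, epochs (assumed)
W1 = 0.5*randn(nh, 2);               % hidden weights (last column = bias)
W2 = 0.5*randn(1, nh+1);             % output weights (last column = bias)
N = size(PHI, 2);
for it = 1:iters
    H = tanh(W1 * [PHI; ones(1,N)]);         % hidden activations, nh x N
    Yh = W2 * [H; ones(1,N)];                % linear output layer
    E = Y - Yh;                              % batch error
    dW2 = E * [H; ones(1,N)]' / N;           % output-layer gradient
    dH = (W2(:,1:nh)' * E) .* (1 - H.^2);    % backpropagate through tanh
    dW1 = dH * [PHI; ones(1,N)]' / N;
    W2 = W2 + eta*dW2;                       % descend the mean squared error
    W1 = W1 + eta*dW1;
end
fprintf('final MSE: %.5f\n', mean(E.^2));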
Tags:
back-propagation
corresponding
input-output
algorithm
Upload time:
2016-12-27
Uploader: exxxds
-
% Train a two-layer neural network with the Levenberg-Marquardt
% method.
%
% If desired, it is possible to use regularization by
% weight decay. Also pruned (i.e., not fully connected) networks can
% be trained.
%
% Given a set of corresponding input-output pairs and an initial
% network,
% [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms)
% trains the network with the Levenberg-Marquardt method.
%
% The activation functions can be either linear or tanh. The
% network architecture is defined by the matrix NetDef which
% has two rows. The first row specifies the hidden layer and the
% second row specifies the output layer.
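A from-scratch MATLAB sketch of Levenberg-Marquardt training for the same two-layer tanh/linear network (not the toolbox's marq); the toy data, damping schedule, and initialization are assumptions.

rng(0);
PHI = linspace(-2, 2, 100);  Y = PHI.^2 - 1;     % toy input/target pair
nh = 8;  N = size(PHI,2);  lambda = 0.1;
W1 = 0.5*randn(nh,2);  W2 = 0.5*randn(1,nh+1);
np = 3*nh + 1;                                   % total number of weights
for it = 1:100
    X1 = [PHI; ones(1,N)];  H = tanh(W1*X1);  X2 = [H; ones(1,N)];
    r = (Y - W2*X2)';                            % residuals, N x 1
    J = zeros(N, np);                            % Jacobian of outputs w.r.t. weights
    for n = 1:N
        dH = W2(1:nh)' .* (1 - H(:,n).^2);       % chain rule through tanh, nh x 1
        J(n, 1:2*nh) = reshape(dH * X1(:,n)', 1, []);    % d(yhat)/d(W1)
        J(n, 2*nh+1:end) = X2(:,n)';                     % d(yhat)/d(W2)
    end
    step = (J'*J + lambda*eye(np)) \ (J'*r);     % damped Gauss-Newton step
    W1 = W1 + reshape(step(1:2*nh), nh, 2);
    W2 = W2 + step(2*nh+1:end)';
    lambda = max(lambda*0.9, 1e-6);              % crude damping schedule
end
fprintf('final MSE: %.5f\n', mean(r.^2));

(A full Levenberg-Marquardt loop would raise or lower lambda depending on whether each step reduced the error; the fixed decay here keeps the sketch short.)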
Tags:
Levenberg-Marquardt
desired
network
neural
Upload time:
2016-12-27
Uploader: jcljkh
-
% Train a two-layer neural network with a recursive prediction error
% algorithm ("recursive Gauss-Newton"). Also pruned (i.e., not fully
% connected) networks can be trained.
%
% The activation functions can either be linear or tanh. The network
% architecture is defined by the matrix NetDef, which has two
% rows. The first row specifies the hidden layer while the second
% specifies the output layer.
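A matching MATLAB sketch of a recursive (per-sample) prediction-error update for the same network, using the RLS-style gain and covariance recursion of recursive Gauss-Newton; the data, forgetting factor, and initialization are assumptions.

rng(0);
PHI = linspace(-2, 2, 100);  Y = PHI.^2 - 1;     % toy input/target pair
nh = 8;  N = size(PHI,2);  np = 3*nh + 1;        % total number of weights
W1 = 0.5*randn(nh,2);  W2 = 0.5*randn(1,nh+1);
P = 100*eye(np);                                 % inverse-Hessian estimate
lam = 0.995;                                     % forgetting factor (assumed)
for pass = 1:20                                  % several passes over the data
    for n = randperm(N)                          % one sample at a time
        x1 = [PHI(n); 1];  h = tanh(W1*x1);  x2 = [h; 1];
        e = Y(n) - W2*x2;                        % prediction error
        dh = W2(1:nh)' .* (1 - h.^2);            % chain rule through tanh
        psi = [reshape(dh*x1', [], 1); x2];      % gradient d(yhat)/d(weights)
        K = P*psi / (lam + psi'*P*psi);          % recursive Gauss-Newton gain
        P = (P - K*(psi'*P)) / lam;              % covariance update
        th = K*e;                                % weight increment
        W1 = W1 + reshape(th(1:2*nh), nh, 2);
        W2 = W2 + th(2*nh+1:end)';
    end
end
Yh = W2*[tanh(W1*[PHI; ones(1,N)]); ones(1,N)];
fprintf('final MSE: %.5f\n', mean((Y - Yh).^2));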
Tags:
recursive
prediction
algorithm
Gauss-Ne
Upload time:
2016-12-27
Uploader: ljt101007