Gibbs sampling, Bayesian networks, intelligent inference, Markov models, and belief propagation. It is a useful source-code package for research on intelligent reasoning (a minimal Gibbs-sampling sketch follows this entry).
Tags: gibbs
Upload time: 2014-01-15
Uploader: 372825274
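For orientation only, here is a minimal Gibbs sampler for a toy target (a bivariate normal with correlation rho); it illustrates the technique named in the entry above, not the code in the uploaded package.

```python
# A minimal Gibbs sampler for a standard bivariate normal with correlation rho.
# Illustrative sketch only; not taken from the uploaded package.
import random
import math

def gibbs_bivariate_normal(rho=0.8, n_samples=5000, burn_in=500):
    """Draw samples from a standard bivariate normal via Gibbs sampling."""
    x, y = 0.0, 0.0
    samples = []
    cond_sd = math.sqrt(1.0 - rho ** 2)   # conditional std. dev. of each coordinate
    for i in range(n_samples + burn_in):
        # Sample each variable from its full conditional in turn.
        x = random.gauss(rho * y, cond_sd)
        y = random.gauss(rho * x, cond_sd)
        if i >= burn_in:
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal()
print(len(samples), samples[0])
```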
CHMMBOX, version 1.2, Iead Rezek, Oxford University, Feb 2001. A Matlab toolbox for maximum a posteriori estimation of two-chain coupled hidden Markov models (the factorization such models typically assume is sketched after this entry).
Tags: aposteriori University CHMMBOX version
Upload time: 2014-01-23
Uploader: rocwangdp
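For context, a coupled HMM ties the two hidden chains together through the transition model. A commonly assumed factorization (written generically; not necessarily CHMMBOX's exact parameterization) is

$$
p\bigl(s_t^{(1)}, s_t^{(2)} \mid s_{t-1}^{(1)}, s_{t-1}^{(2)}\bigr)
  = p\bigl(s_t^{(1)} \mid s_{t-1}^{(1)}, s_{t-1}^{(2)}\bigr)\,
    p\bigl(s_t^{(2)} \mid s_{t-1}^{(1)}, s_{t-1}^{(2)}\bigr),
\qquad
p\bigl(y_t^{(c)} \mid s_t^{(c)}\bigr),\; c \in \{1,2\},
$$

with each chain $c$ emitting its own observations $y_t^{(c)}$ conditioned only on its own state.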
MegaHAL is a conversation simulator that converses with a user in natural language. The program exploits the fact that human beings tend to read much more meaning into what is said than is actually there. MegaHAL differs from conversation simulators such as ELIZA in that it uses a Markov model to learn how to hold a conversation. It is possible to teach MegaHAL to talk about new topics, and in different languages (a minimal Markov-chain text-generation sketch follows this entry).
Tags: conversation conversing simulators language
Upload time: 2015-10-09
Uploader: lnnn30
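To illustrate the idea MegaHAL builds on, here is a minimal word-level Markov chain for text generation. This is not MegaHAL's actual model (which combines forward and backward Markov models with keyword selection); the tiny corpus is invented for the example.

```python
# A minimal word-level Markov chain: learn which words follow each context,
# then walk the chain to generate text.
import random
from collections import defaultdict

def train(text, order=2):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=20):
    """Walk the chain from a random starting context."""
    context = random.choice(list(model.keys()))
    out = list(context)
    for _ in range(length):
        choices = model.get(tuple(out[-len(context):]))
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog on the mat"
print(generate(train(corpus)))
```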
Uses Gaussian elimination over the binary field GF(2) to obtain the generator matrix G corresponding to the input parity-check matrix H, and also returns a matrix P that satisfies mod(G*P', 2) = 0, where P' denotes the transpose of P. Usage: [P,G] = Gaussian(H,x), where x = 1 or 2; x = 1 means the identity block is on the left side of G (a rough GF(2)-elimination sketch follows this entry).
Upload time: 2014-11-27
Uploader: semi1981
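A rough sketch of the underlying elimination is given below. It does not reproduce the interface or conventions of the uploaded Matlab function ([P,G] = Gaussian(H,x)); it only shows how a systematic generator matrix can be derived from a binary parity-check matrix, assuming H has full row rank.

```python
# Derive a systematic generator matrix G from a binary parity-check matrix H
# by Gaussian elimination over GF(2). Column swaps are used when needed, so the
# returned Hp is a column-permuted copy of H in the form [I | A].
import numpy as np

def systematic_generator(H):
    """Return (Hp, G) with Hp = [I | A] and G = [A^T | I], so Hp @ G.T == 0 (mod 2)."""
    Hp = np.array(H, dtype=np.uint8) % 2
    m, n = Hp.shape                                  # assumes full row rank m
    for r in range(m):
        pivot = next((i for i in range(r, m) if Hp[i, r]), None)
        if pivot is None:                            # no pivot: swap in a usable column
            col = next(j for j in range(r + 1, n) if Hp[r:, j].any())
            Hp[:, [r, col]] = Hp[:, [col, r]]
            pivot = next(i for i in range(r, m) if Hp[i, r])
        Hp[[r, pivot]] = Hp[[pivot, r]]              # bring pivot row into place
        for i in range(m):                           # clear column r in all other rows
            if i != r and Hp[i, r]:
                Hp[i] ^= Hp[r]
    A = Hp[:, m:]                                    # Hp is now [I_m | A]
    G = np.hstack([A.T, np.eye(n - m, dtype=np.uint8)])
    return Hp, G

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
Hp, G = systematic_generator(H)
assert not (Hp @ G.T % 2).any()                      # parity-check condition holds
```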
This is a very simple genetic algorithm source code. The code is kept as short as possible, and in practice it should not need debugging. To adapt the code to a specific application, the user only needs to change the constant definitions and define the "evaluation function". Note that the code is designed for maximization, the objective function may only take positive values, and no distinction is made between the objective value and an individual's fitness. The system uses proportional (roulette-wheel) selection, an elitist model, single-point crossover, and uniform mutation. Replacing uniform mutation with Gaussian mutation may give better results. The code has no graphics and not even screen output, mainly to ensure high portability across platforms. Readers can obtain it from ftp.uncc.edu, in the file prog.c under the directory coe/evol. The required input file should be named 'gadata.txt'; the output file produced by the system is 'galog.txt'. The input file consists of several lines, the number of which corresponds to the number of variables; each line provides, in order, the lower and upper bounds of the corresponding variable. For example, the first line gives the bounds for the first variable, the second line those for the second variable, and so on. (A minimal sketch of these operators follows this entry.)
Upload time: 2015-10-16
Uploader: 曹云鵬
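The sketch below shows the operators described above (roulette-wheel selection, elitism, single-point crossover, uniform mutation) on a real-coded population. It is an illustration, not a translation of prog.c; the objective function, bounds, and parameter values are invented for the example.

```python
# Minimal maximizing GA with roulette selection, elitism, single-point
# crossover, and uniform mutation. Fitness equals the (positive) objective.
import random

BOUNDS = [(-5.0, 5.0), (-5.0, 5.0)]        # one (lower, upper) pair per variable
POP_SIZE, GENERATIONS, P_CROSS, P_MUT = 50, 100, 0.8, 0.1

def evaluate(ind):
    return 1.0 / (1.0 + sum(x * x for x in ind))   # positive, maximal at the origin

def roulette(pop, fits):
    pick, acc = random.uniform(0, sum(fits)), 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= pick:
            return ind
    return pop[-1]

def crossover(a, b):
    if random.random() < P_CROSS and len(a) > 1:
        cut = random.randrange(1, len(a))          # single crossover point
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def mutate(ind):
    # Uniform mutation: redraw a gene uniformly within its bounds.
    return [random.uniform(*BOUNDS[i]) if random.random() < P_MUT else x
            for i, x in enumerate(ind)]

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    fits = [evaluate(ind) for ind in pop]
    nxt = [max(pop, key=evaluate)]                 # elitist model: keep the best as-is
    while len(nxt) < POP_SIZE:
        c1, c2 = crossover(roulette(pop, fits), roulette(pop, fits))
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt[:POP_SIZE]

best = max(pop, key=evaluate)
print(best, evaluate(best))
```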
A support vector machine GUI application (entry level) built on libsvm. It provides a new parameter-selection method for C and sigma, making SVM use simpler and more intuitive. See the reference article "Fast and Efficient Strategies for Model Selection of Gaussian Support Vector Machine", which can be found via Google. (A generic parameter-search sketch follows this entry.)
Tags: libsvm
Upload time: 2015-10-16
Uploader: cuibaigao
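As a point of comparison, the baseline approach to choosing C and the Gaussian kernel width is an exhaustive grid search with cross-validation, sketched below with scikit-learn (which wraps libsvm). This is not the GUI or the faster selection strategy from the entry above, and the dataset and parameter grid are arbitrary placeholders.

```python
# Baseline grid search over C and the RBF width for a Gaussian-kernel SVM.
# Note gamma = 1 / (2 * sigma^2), so a grid over gamma is a grid over sigma.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```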
In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in several different scientific disciplines. Novel extensions to the existing methods are also proposed. We show in particular how to incorporate local linearisation methods similar to those which have previously been employed in the deterministic filtering literature; these lead to very effective importance distributions. Furthermore, we describe a method which uses Rao-Blackwellisation in order to take advantage of the analytic structure present in some important classes of state-space models. In a final section we develop algorithms for prediction, smoothing and evaluation of the likelihood in dynamic models. (A bootstrap particle-filter sketch follows this entry.)
Tags: sequential simulation posterior overview
Upload time: 2015-12-31
Uploader: 225588
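The simplest member of the family the article surveys is the bootstrap filter: sequential importance sampling with the state-transition prior as the importance distribution, followed by resampling. The sketch below runs it on a standard nonlinear benchmark model; the article's contribution is precisely better importance distributions (e.g. via local linearisation) than this plain choice.

```python
# Bootstrap particle filter on the classic 1-D nonlinear benchmark model.
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 1000                      # time steps, number of particles
sq, sr = np.sqrt(10.0), 1.0          # process / observation noise std. dev.

def f(x, t):                         # nonlinear state transition mean
    return 0.5 * x + 25 * x / (1 + x ** 2) + 8 * np.cos(1.2 * t)

def h(x):                            # nonlinear observation mean
    return x ** 2 / 20.0

# Simulate a trajectory and observations to filter.
x_true, ys = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1], t) + sq * rng.standard_normal()
    ys[t] = h(x_true[t]) + sr * rng.standard_normal()

particles = rng.standard_normal(N)
estimates = np.zeros(T)
for t in range(1, T):
    particles = f(particles, t) + sq * rng.standard_normal(N)   # propose from the prior
    logw = -0.5 * ((ys[t] - h(particles)) / sr) ** 2            # Gaussian likelihood weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates[t] = np.sum(w * particles)                        # posterior-mean estimate
    particles = rng.choice(particles, size=N, p=w)              # multinomial resampling

print(np.mean(np.abs(estimates - x_true)))                      # rough tracking error
```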
The need for accurate monitoring and analysis of sequential data arises in many scientific, industrial and financial problems. Although the Kalman filter is effective in the linear-Gaussian case, new methods of dealing with sequential data are required with non-standard models. Recently, there has been renewed interest in simulation-based techniques. The basic idea behind these techniques is that the current state of knowledge is encapsulated in a representative sample from the appropriate posterior distribution. As time goes on, the sample evolves and adapts recursively in accordance with newly acquired data. We give a critical review of recent developments, by reference to oil well monitoring, ion channel monitoring and tracking problems, and propose some alternative algorithms that avoid the weaknesses of the current methods. (A minimal Kalman-filter sketch for the linear-Gaussian baseline follows this entry.)
Tags: monitoring sequential industria accurate
Upload time: 2013-12-17
Uploader: familiarsmile
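For reference, here is the linear-Gaussian baseline the entry contrasts against: a one-dimensional Kalman filter for a random-walk state observed in Gaussian noise. The model and noise levels are chosen arbitrarily for illustration; they are not from the reviewed applications.

```python
# Minimal 1-D Kalman filter: random-walk state, noisy direct observation.
import numpy as np

rng = np.random.default_rng(1)
T, q, r = 100, 0.1, 1.0                      # steps, process var., observation var.

x_true = np.cumsum(np.sqrt(q) * rng.standard_normal(T))    # x_t = x_{t-1} + w_t
ys = x_true + np.sqrt(r) * rng.standard_normal(T)          # y_t = x_t + v_t

m, P = 0.0, 1.0                              # prior mean and variance
means = np.zeros(T)
for t in range(T):
    P = P + q                                # predict: variance grows by q
    K = P / (P + r)                          # Kalman gain
    m = m + K * (ys[t] - m)                  # update with observation y_t
    P = (1 - K) * P
    means[t] = m

print(np.mean((means - x_true) ** 2))        # mean squared filtering error
```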
Used to generate gamma-distributed noise sequences and to analyse the parameters of Gaussian noise. (A small numeric sketch follows this entry.)
Upload time: 2016-01-08
Uploader: xfbs821
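The two tasks this entry describes are illustrated below with NumPy; the shape/scale values and sample sizes are arbitrary, and the uploaded code itself is presumably a Matlab implementation.

```python
# Draw a gamma-distributed noise sequence, then estimate the parameters of
# Gaussian noise from samples.
import numpy as np

rng = np.random.default_rng(2)

# Gamma noise sequence with shape k and scale theta (mean = k * theta).
gamma_noise = rng.gamma(shape=2.0, scale=1.5, size=10000)
print("gamma sample mean (expect 3.0):", gamma_noise.mean())

# Gaussian noise: estimate mean and standard deviation from samples.
gauss_noise = rng.normal(loc=0.0, scale=2.0, size=10000)
print("estimated mean, std:", gauss_noise.mean(), gauss_noise.std(ddof=1))
```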
Hidden_Markov_model_for_automatic_speech_recognition. This code implements in C++ a basic left-right hidden Markov model and the corresponding Baum-Welch (ML) training algorithm. It is meant as an example of the HMM algorithms described by L. Rabiner (1) and others. Serious students are directed to the sources listed below for a theoretical description of the algorithm. KF Lee (2) offers an especially good tutorial on how to build a speech recognition system using hidden Markov models. (A minimal forward-algorithm sketch follows this entry.)
Tags: Hidden_Markov_model_for_automatic speech_recognition implements left-right
Upload time: 2016-01-23
Uploader: 569342831
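As a pointer to what such code computes, here is a minimal forward algorithm for a small left-right HMM with discrete observations. It is only the forward pass (the E-step building block of Baum-Welch), not the C++ implementation from this entry, and the model parameters below are invented for illustration.

```python
# Forward algorithm: likelihood of a discrete observation sequence under a
# small left-right HMM (states may only stay or move forward).
import numpy as np

A = np.array([[0.7, 0.3, 0.0],     # left-right transition matrix
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
B = np.array([[0.6, 0.4],          # emission probabilities for 2 discrete symbols
              [0.3, 0.7],
              [0.5, 0.5]])
pi = np.array([1.0, 0.0, 0.0])     # left-right models start in the first state

obs = [0, 1, 1, 0, 1]              # observed symbol indices

alpha = pi * B[:, obs[0]]          # initialisation
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]  # induction: propagate, then weight by emission
print(alpha.sum())                 # termination: P(observations | model)
```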