<html><head> <meta HTTP-EQUIV="Content-Type" CONTENT="text/html;charset=ISO-8859-1"> <title>mmgauss.m</title><link rel="stylesheet" type="text/css" href="../../stpr.css"></head><body><table border=0 width="100%" cellpadding=0 cellspacing=0><tr valign="baseline"><td valign="baseline" class="function"><b class="function">MMGAUSS</b><td valign="baseline" align="right" class="function"><a href="../../probab/estimation/index.html" target="mdsdir"><img border = 0 src="../../up.gif"></a></table> <p><b>Minimax estimation of Gaussian distribution.
</b></p> <hr><div class='code'><code><span class=help>
</span><br><span class=help> <span class=help_field>Synopsis:</span></span><br><span class=help> model = mmgauss(X)
</span><br><span class=help> model = mmgauss(X,options)
</span><br><span class=help> model = mmgauss(X,options,init_model)
</span><br><span class=help>
</span><br><span class=help> <span class=help_field>Description:</span></span><br><span class=help> This function computes the minimax estimation of Gaussian
</span><br><span class=help> parameters. The minimax estimation (see [<a href="../../references.html#SH10" title = "M.I.Schlesinger and V.Hlavac. Ten lectures on statistical and structural pattern recognition. Kluwer Academic Publishers, 2002." >SH10</a>]) for
</span><br><span class=help> Gaussian model is defined as:
</span><br><span class=help>
</span><br><span class=help>    (Mean,Cov) = argmax_{Mean,Cov}  min_{x in X}  pdfgauss(x, Mean, Cov),
</span><br><span class=help> where the minimum is taken over the sample vectors x (columns of X).
</span><br><span class=help>
</span><br><span class=help> The sample data X should be good representatives of the
</span><br><span class=help> distribution. In contrast to maximum-likelihood estimation,
</span><br><span class=help> the data do not have to be i.i.d.
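</span><br><span class=help>
</span><br><span class=help> For illustration, the optimized criterion of a candidate model is
</span><br><span class=help> the smallest density assigned to any training vector. A minimal
</span><br><span class=help> sketch, assuming pdfgauss accepts the (X,Mean,Cov) calling form
</span><br><span class=help> used in the formula above:
</span><br><span class=help>
</span><br><span class=help>    % density of each column of X under the candidate (Mean,Cov)
</span><br><span class=help>    p = pdfgauss(X, Mean, Cov);
</span><br><span class=help>    % minimax criterion = worst-case (smallest) density over the sample
</span><br><span class=help>    crit = min(p);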
</span><br><span class=help>
</span><br><span class=help> An iterative algorithm is used for the estimation. It iterates
</span><br><span class=help> until
</span><br><span class=help>    upper_bound - lower_bound < eps,
</span><br><span class=help> where eps is the prescribed precision and upper_bound, lower_bound
</span><br><span class=help> are bounds on the optimal value of the criterion:
</span><br><span class=help>    upper_bound > max_{Mean,Cov} min_{x in X} pdfgauss(x, Mean, Cov) > lower_bound.
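</span><br><span class=help>
</span><br><span class=help> The stopping rule can be controlled through the options structure
</span><br><span class=help> and checked via the returned exitflag. A sketch using only the
</span><br><span class=help> fields documented below:
</span><br><span class=help>
</span><br><span class=help>    options = struct('eps',0.01,'tmax',100,'verb',1);
</span><br><span class=help>    model = mmgauss(X, options);
</span><br><span class=help>    if model.exitflag == 1
</span><br><span class=help>      % upper_bound - lower_bound < eps was reached
</span><br><span class=help>    else
</span><br><span class=help>      % tmax iterations were exceeded before reaching precision eps
</span><br><span class=help>    end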
</span><br><span class=help>
</span><br><span class=help> <span class=help_field>Input:</span></span><br><span class=help> X [dim x num_data] Data sample.
</span><br><span class=help>
</span><br><span class=help> options [struct] Control parameters:
</span><br><span class=help> .eps [1x1] Precision of found estimate (default 0.1).
</span><br><span class=help> .tmax [1x1] Maximal number of iterations (default inf).
</span><br><span class=help> .cov_type [int] Type of estimated covariance matrix:
</span><br><span class=help> cov_type = 'full' full covariance matrix (default)
</span><br><span class=help> cov_type = 'diag' diagonal covariance matrix
</span><br><span class=help> cov_type = 'spherical' spherical covariance matrix
</span><br><span class=help> .verb [int] If 1 then info is printed (default 0).
</span><br><span class=help>
</span><br><span class=help> init_model [struct] Initial model:
</span><br><span class=help> .Alpha [1xnum_data] Weights of training vectors.
</span><br><span class=help> .t [1x1] (optional) Counter of iterations.
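</span><br><span class=help>
</span><br><span class=help> A previously returned model can serve as the initial model, e.g.
</span><br><span class=help> to resume the iterations. A sketch using only the fields listed
</span><br><span class=help> above (prev_model is assumed to be the result of an earlier call):
</span><br><span class=help>
</span><br><span class=help>    init_model = struct('Alpha',prev_model.Alpha,'t',prev_model.t);
</span><br><span class=help>    model = mmgauss(X, options, init_model);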
</span><br><span class=help>
</span><br><span class=help> <span class=help_field>Output:</span></span><br><span class=help> model [struct] Gaussian distribution:
</span><br><span class=help> .Mean [dim x 1] Estimated mean vector.
</span><br><span class=help> .Cov [dim x dim] Estimated covariance matrix.
</span><br><span class=help>
</span><br><span class=help> .t [1x1] Number of iterations.
</span><br><span class=help> .exitflag [1x1] 1 ... (upper_bound - lower_bound) < eps
</span><br><span class=help> 0 ... maximal number of iterations tmax exceeded.
</span><br><span class=help> .upper_bound [1x1] Upper bound on the optimized criterion.
</span><br><span class=help> .lower_bound [1x1] Lower bound on the optimized criterion.
</span><br><span class=help> .Alpha [1 x num_data] Data weights. The minimax estimate
</span><br><span class=help> equals the maximum-likelihood estimate of the weighted data
</span><br><span class=help> (see the sketch after this list).
</span><br><span class=help> .options [struct] Copy of used options.
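</span><br><span class=help>
</span><br><span class=help> Because the minimax estimate coincides with the weighted
</span><br><span class=help> maximum-likelihood estimate, the returned parameters can be
</span><br><span class=help> reconstructed from the weights Alpha. A sketch for the default
</span><br><span class=help> 'full' covariance (Alpha is normalized explicitly, in case it
</span><br><span class=help> does not sum to one):
</span><br><span class=help>
</span><br><span class=help>    w = model.Alpha / sum(model.Alpha);          % normalized weights
</span><br><span class=help>    Mean = X * w';                               % weighted mean [dim x 1]
</span><br><span class=help>    Xc = X - repmat(Mean, 1, size(X,2));         % centered data
</span><br><span class=help>    Cov = (Xc .* repmat(w, size(X,1), 1)) * Xc'; % weighted covariance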
</span><br><span class=help>
</span><br><span class=help> <span class=help_field>Example:</span></span><br><span class=help> X = [[0;0] [1;0] [0;1]];
</span><br><span class=help> mm_model = mmgauss(X);
</span><br><span class=help> figure; ppatterns(X);
</span><br><span class=help> pgauss(mm_model, struct('p',exp(mm_model.lower_bound)));
</span><br><span class=help>
</span><br><span class=help> See also
</span><br><span class=help> PDFGAUSS, MLCGMM, EMGMM.
</span><br><span class=help>
</span><br></code></div> <hr> <b>Source:</b> <a href= "../../probab/estimation/list/mmgauss.html">mmgauss.m</a> <p><b class="info_field">About: </b> Statistical Pattern Recognition Toolbox
<br> (C) 1999-2003, Written by Vojtech Franc and Vaclav Hlavac
<br> <a href="http://www.cvut.cz">Czech Technical University Prague</a>
<br> <a href="http://www.feld.cvut.cz">Faculty of Electrical Engineering</a>
<br> <a href="http://cmp.felk.cvut.cz">Center for Machine Perception</a>
<br> <p><b class="info_field">Modifications: </b>
<br> 26-may-2004, VF
<br> 30-apr-2004, VF
<br> 19-sep-2003, VF
<br> 27-feb-2003, VF
<br> 24. 6.00 V. Hlavac, comments polished.
<br></body></html>