an optimal linear discriminating function in this space (or an optimal hyperplane that fits the
training data, ...). In the case of SVM the kernel is not defined explicitly; instead, a distance
between any two points in the hyperspace needs to be defined.</p>
<p>The solution is optimal in the sense that the margin between the separating hyperplane and the
nearest feature vectors from both classes (in the case of a 2-class classifier) is maximal. The
feature vectors that are closest to the hyperplane are called "support vectors", meaning that
the position of the other vectors does not affect the hyperplane (the decision function).</p>

<!-- TODO: insert formulae -->
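<p>For reference, a textbook statement of the soft-margin C-SVC optimization problem (the other SVM
types modify it in a similar spirit; the implementation may state it in a slightly different but
equivalent form) is:</p>
<pre>
% training vectors x_i with labels y_i in {-1,+1}, feature mapping \Phi, penalty C
\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{N}\xi_i
\quad\text{subject to}\quad
y_i\,(w\cdot\Phi(x_i) + b) \ge 1 - \xi_i,\qquad \xi_i \ge 0,\quad i = 1..N.

% the resulting decision function depends on the data only through the kernel
% d(x,y) = \Phi(x)\cdot\Phi(y):
f(x) = \mathrm{sign}\Big(\sum_i y_i\,\alpha_i\, d(x_i, x) + b\Big),
% where \alpha_i > 0 only for the support vectors.
</pre>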

<p>There are many good references on SVM; here are just a few to start with.</p>
<b>[Burges98] C. Burges. "A tutorial on support vector machines for pattern recognition", Knowledge Discovery and Data
Mining 2(2), 1998.</b><br>
(available online at <a href="http://citeseer.ist.psu.edu/burges98tutorial.html">http://citeseer.ist.psu.edu/burges98tutorial.html</a>).<br>
<b>LIBSVM - A Library for Support Vector Machines. By Chih-Chung Chang and Chih-Jen Lin</b><br>
(<a href="http://www.csie.ntu.edu.tw/~cjlin/libsvm/">http://www.csie.ntu.edu.tw/~cjlin/libsvm/</a>)


<hr><h3><a name="decl_CvSVM">CvSVM</a></h3>
<p class="Blurb">Support Vector Machines</p>
<pre>
class CvSVM : public CvStatModel
{
public:
    // SVM type
    enum { C_SVC=100, NU_SVC=101, ONE_CLASS=102, EPS_SVR=103, NU_SVR=104 };

    // SVM kernel type
    enum { LINEAR=0, POLY=1, RBF=2, SIGMOID=3 };

    CvSVM();
    virtual ~CvSVM();

    CvSVM( const CvMat* _train_data, const CvMat* _responses,
           const CvMat* _var_idx=0, const CvMat* _sample_idx=0,
           CvSVMParams _params=CvSVMParams() );

    virtual bool train( const CvMat* _train_data, const CvMat* _responses,
                        const CvMat* _var_idx=0, const CvMat* _sample_idx=0,
                        CvSVMParams _params=CvSVMParams() );

    virtual float predict( const CvMat* _sample ) const;
    virtual int get_support_vector_count() const;
    virtual const float* get_support_vector(int i) const;
    virtual void clear();

    virtual void save( const char* filename, const char* name=0 );
    virtual void load( const char* filename, const char* name=0 );

    virtual void write( CvFileStorage* storage, const char* name );
    virtual void read( CvFileStorage* storage, CvFileNode* node );
    int get_var_count() const { return var_idx ? var_idx->cols : var_all; }

protected:
    ...
};
</pre>

<hr><h3><a name="decl_CvSVMParams">CvSVMParams</a></h3>
<p class="Blurb">SVM training parameters</p>
<pre>
struct CvSVMParams
{
    CvSVMParams();
    CvSVMParams( int _svm_type, int _kernel_type,
                 double _degree, double _gamma, double _coef0,
                 double _C, double _nu, double _p,
                 CvMat* _class_weights, CvTermCriteria _term_crit );

    int         svm_type;
    int         kernel_type;
    double      degree; // for poly
    double      gamma;  // for poly/rbf/sigmoid
    double      coef0;  // for poly/sigmoid

    double      C;  // for CV_SVM_C_SVC, CV_SVM_EPS_SVR and CV_SVM_NU_SVR
    double      nu; // for CV_SVM_NU_SVC, CV_SVM_ONE_CLASS, and CV_SVM_NU_SVR
    double      p; // for CV_SVM_EPS_SVR
    CvMat*      class_weights; // for CV_SVM_C_SVC
    CvTermCriteria term_crit; // termination criteria
};
</pre>
<p><dl>
<dt>svm_type<dd>Type of SVM, one of the following types:<br>
                CvSVM::C_SVC - n-class classification (n>=2), allows imperfect separation of classes with
                               penalty multiplier <code>C</code> for outliers.<br>
                CvSVM::NU_SVC - n-class classification with possible imperfect separation. Parameter <code>nu</code>
                                (in the range 0..1, the larger the value, the smoother the decision boundary) is used instead of <code>C</code>.<br>
                CvSVM::ONE_CLASS - one-class SVM. All the training data are from the same class, SVM builds
                                   a boundary that separates the class from the rest of the feature space.<br>
                CvSVM::EPS_SVR - regression. The distance between feature vectors from the training set and
                                 the fitting hyperplane must be less than <code>p</code>. For outliers
                                 the penalty multiplier <code>C</code> is used.<br>
                CvSVM::NU_SVR - regression; <code>nu</code> is used instead of <code>p</code>.
<dt>kernel_type<dd>The kernel type, one of the following types:<br>
                CvSVM::LINEAR - no mapping is done, linear discrimination (or regression) is done in the original feature space.
                                It is the fastest option. <em>d(x,y) = x&bull;y == (x,y)</em><br>
                CvSVM::POLY - polynomial kernel: <em>d(x,y) = (gamma*(x&bull;y)+coef0)<sup>degree</sup></em><br>
                CvSVM::RBF - radial-basis-function kernel; a good choice in most cases:
                             <em>d(x,y) = exp(-gamma*|x-y|<sup>2</sup>)</em><br>
                CvSVM::SIGMOID - sigmoid function is used as a kernel:
                             <em>d(x,y) = tanh(gamma*(x&bull;y)+coef0)</em><br>
<dt>degree, gamma, coef0<dd>Parameters of the kernel, see the formulas above.
<dt>C, nu, p<dd>Parameters in the generalized SVM optimization problem.
<dt>class_weights<dd>Optional weights, assigned to particular classes.
                They are multiplied by <code>C</code> and thus affect the misclassification penalty for different classes.
                The larger the weight, the larger the penalty on misclassification of data from the corresponding class.
<dt>term_crit<dd>Termination criteria for the iterative SVM training procedure
                 (which solves a partial case of the constrained quadratic optimization problem).
</dl><p>
The structure must be initialized and passed to the training method of <a href="#decl_CvSVM">CvSVM</a>.
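<p>A minimal sketch of filling the structure for a 2-class C-SVC with an RBF kernel
(the numeric values are purely illustrative, not recommendations):</p>
<pre>
#include "ml.h"   // OpenCV machine learning module (CvSVM, CvSVMParams)

CvSVMParams params;
params.svm_type    = CvSVM::C_SVC;   // n-class classification with outlier penalty C
params.kernel_type = CvSVM::RBF;     // d(x,y) = exp(-gamma*|x-y|^2)
params.gamma       = 0.5;            // kernel parameter (illustrative value)
params.C           = 10;             // misclassification penalty (illustrative value)
// stop after at most 1000 iterations or when the accuracy 1e-6 is reached
params.term_crit   = cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 1000, 1e-6 );
</pre>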


<hr><h3><a name="decl_CvSVM_train">CvSVM::train</a></h3>
<p class="Blurb">Trains SVM</p>
<pre>
bool CvSVM::train( const CvMat* _train_data, const CvMat* _responses,
                   const CvMat* _var_idx=0, const CvMat* _sample_idx=0,
                   CvSVMParams _params=CvSVMParams() );
</pre>
<p>The method trains the SVM model. It follows the conventions of the
generic <a href="#decl_CvStatModel_train">train</a> "method" with the following limitations:
only the CV_ROW_SAMPLE data layout is supported; the input variables are all ordered;
the output variables can be either categorical (<code>_params.svm_type=CvSVM::C_SVC</code> or
<code>_params.svm_type=CvSVM::NU_SVC</code>), or ordered
(<code>_params.svm_type=CvSVM::EPS_SVR</code> or
<code>_params.svm_type=CvSVM::NU_SVR</code>), or not required at all
(<code>_params.svm_type=CvSVM::ONE_CLASS</code>);
missing measurements are not supported.</p>
<p>All the other parameters are gathered in <a href="#decl_CvSVMParams">CvSVMParams</a>
structure.</p>
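<p>A minimal end-to-end sketch (with a tiny hand-made 2D dataset and illustrative parameter values)
of training a classifier and classifying a new sample:</p>
<pre>
#include "ml.h"
#include &lt;stdio.h&gt;

int main()
{
    // 4 training samples, 2 ordered input variables each (CV_ROW_SAMPLE layout)
    float train_data[] = { 1.f, 1.f,   1.f, 2.f,   5.f, 5.f,   6.f, 5.f };
    float responses[]  = { 1.f, 1.f, 2.f, 2.f };             // class labels
    CvMat train_mat = cvMat( 4, 2, CV_32FC1, train_data );
    CvMat resp_mat  = cvMat( 4, 1, CV_32FC1, responses );

    CvSVMParams params;
    params.svm_type    = CvSVM::C_SVC;
    params.kernel_type = CvSVM::RBF;
    params.gamma       = 0.5;
    params.C           = 10;
    params.term_crit   = cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 1000, 1e-6 );

    CvSVM svm;
    svm.train( &train_mat, &resp_mat, 0, 0, params );         // _var_idx and _sample_idx unused

    float sample_data[] = { 5.5f, 4.5f };
    CvMat sample = cvMat( 1, 2, CV_32FC1, sample_data );
    printf( "predicted class: %f\n", svm.predict( &sample ) );
    return 0;
}
</pre>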


<hr><h3><a name="decl_CvSVM_get_support_vector">CvSVM::get_support_vector*</a></h3>
<p class="Blurb">Retrieves the number of support vectors and the particular vector</p>
<pre>
int CvSVM::get_support_vector_count() const;
const float* CvSVM::get_support_vector(int i) const;
</pre>
<p>The methods can be used to retrieve the set of support vectors.</p>
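<p>For example, the whole set can be printed with the following sketch (it assumes an already
trained <code>CvSVM</code> object; each support vector has <code>get_var_count()</code> elements):</p>
<pre>
#include "ml.h"
#include &lt;stdio.h&gt;

// print every support vector of an already trained SVM
void print_support_vectors( const CvSVM& svm )
{
    int sv_count  = svm.get_support_vector_count();
    int var_count = svm.get_var_count();
    for( int i = 0; i &lt; sv_count; i++ )
    {
        const float* sv = svm.get_support_vector(i);
        printf( "support vector #%d:", i );
        for( int j = 0; j &lt; var_count; j++ )
            printf( " %g", sv[j] );
        printf( "\n" );
    }
}
</pre>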


<!-- *****************************************************************************************
     *****************************************************************************************
     ***************************************************************************************** -->

<hr><h2><a name="ch_dtree">Decision Trees</a></h2>

<p>The ML classes discussed in this section implement the Classification And Regression Trees (CART)
algorithm described in
<a href="#paper_Brieman84">[Breiman84]</a>.</p>
<p>The class <a href="#decl_CvDTree">CvDTree</a> represents a single decision tree that
may be used alone, or as a base class in tree ensembles
(see <a href=#ch_boosting>Boosting</a> and <a href=#ch_randomforest>Random Trees</a>).</p>
<p>A decision tree is a binary tree (i.e. a tree where each non-leaf node has exactly two child nodes).
It can be used either for classification, when
each tree leaf is marked with some class label (multiple leaves may have the same label),
or for regression, when each tree leaf is also assigned a constant
(so the approximation function is piecewise constant).</p>
<h3>Predicting with Decision Trees</h3>
<p>To reach a leaf node, and thus
to obtain a response for the input feature vector, the prediction procedure starts
at the root node. From each non-leaf node the procedure goes either to the left (i.e. selects the
left child node as the next observed node) or to the right, based on the value of
a certain variable whose index is stored in the observed node. The variable can be either
ordered or categorical. In the first case, the variable value is compared with a certain threshold
(which is also stored in the node); if the value is less than the threshold, the
procedure goes to the left, otherwise to the right (for example, if the weight is less than 1 kilogram,
go to the left, else to the right). In the second case, the discrete variable value is tested for
membership in a certain subset of values (also stored in the node)
from the limited set of values the variable can take;
if it belongs to the subset, the procedure goes to the left, else to the right (for example,
if the color is green or red, go to the left, else to the right).
That is, each node uses a pair of entities (&lt;variable_index&gt;, &lt;decision_rule (threshold/subset)&gt;).
This pair is called a split (a split on the variable #&lt;variable_index&gt;).
Once a leaf node is reached, the value assigned to this node is used as the output of the prediction procedure.</p>
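<p>Conceptually, the walk looks like the following sketch (a hypothetical, simplified node type used
for illustration only; it is not the actual CvDTree representation):</p>
<pre>
#include &lt;set&gt;

// simplified tree node: either a leaf carrying a value, or a split node
struct Node
{
    bool   is_leaf;
    double value;          // response stored in a leaf

    int    var_idx;        // index of the variable tested by the split
    bool   is_categorical; // split type of that variable
    double threshold;      // used when the variable is ordered
    std::set&lt;int&gt; subset;  // used when the variable is categorical

    Node*  left;
    Node*  right;
};

// walk from the root to a leaf following the rules described above
double predict( const Node* node, const double* sample )
{
    while( !node->is_leaf )
    {
        double v = sample[node->var_idx];
        bool go_left = node->is_categorical
            ? node->subset.count( (int)v ) != 0   // "value in subset" rule
            : v &lt; node->threshold;                // "value less than threshold" rule
        node = go_left ? node->left : node->right;
    }
    return node->value;
}
</pre>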
<p>Sometimes, certain features of the input vector are missing (for example, in the dark it is
difficult to determine the object color), and the prediction procedure
may get stuck at a certain node (in the above example, if the node is split on color).
To avoid such situations, decision trees use so-called
surrogate splits. That is, in addition to the best "primary" split, every tree node may also
be split on one or more other variables with nearly the same results.</p>

<h3>Training Decision Trees</h3>
<p>The tree is built recursively, starting from the root node. The whole training data set (feature
vectors and responses) is used to split the root node. In each node the optimum
decision rule (i.e. the best &quot;primary&quot; split) is found based on some criterion (ML uses the <em>Gini</em> &quot;purity&quot; criterion
for classification and the sum of squared errors for regression). Then, if necessary,
surrogate splits are found that resemble the results of the primary split on
the training data as closely as possible; all the data are divided between the left and the right
child nodes using the primary and the surrogate splits
(just as in the prediction procedure).
Then the procedure recursively splits both the left and the right
nodes, and so on. At each node the recursive procedure may stop (i.e. stop splitting the node further)
in one of the following cases:<br>
<ul>
<li>the depth of the tree branch being constructed has reached the specified maximum value.
<li>the number of training samples in the node is less than the specified threshold, i.e.
    the set is not statistically representative enough to split the node further.
<li>all the samples in the node belong to the same class (or, in the case of regression,
    the variation is too small).
<li>the best split found does not give any noticeable improvement compared to a random
    choice.
</ul>
</p>
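<p>A minimal training sketch, assuming the <code>CvDTree</code>/<code>CvDTreeParams</code> interface
described later in this reference (the dataset and parameter values below are purely illustrative;
the parameters map onto the stopping criteria listed above):</p>
<pre>
#include "ml.h"
#include &lt;stdio.h&gt;

int main()
{
    // tiny illustrative dataset: 6 samples, 2 ordered input variables, 2 classes
    float train_data[] = { 1,1,  2,1,  1,2,  6,5,  7,5,  6,6 };
    float responses[]  = { 0, 0, 0, 1, 1, 1 };
    CvMat train_mat = cvMat( 6, 2, CV_32FC1, train_data );
    CvMat resp_mat  = cvMat( 6, 1, CV_32FC1, responses );

    // mark the two inputs as ordered and the response as categorical (classification)
    CvMat* var_type = cvCreateMat( 3, 1, CV_8UC1 );
    cvSet( var_type, cvScalarAll(CV_VAR_ORDERED) );
    CV_MAT_ELEM( *var_type, uchar, 2, 0 ) = CV_VAR_CATEGORICAL;

    CvDTreeParams params;            // defaults, then override the stopping criteria
    params.max_depth        = 8;     // limit the depth of each branch
    params.min_sample_count = 2;     // do not split nodes with fewer samples
    params.use_surrogates   = true;  // needed for missing data and variable importance
    params.cv_folds         = 0;     // no cross-validation pruning on this toy set

    CvDTree tree;
    tree.train( &train_mat, CV_ROW_SAMPLE, &resp_mat, 0, 0, var_type, 0, params );

    float sample_data[] = { 6.5f, 5.5f };
    CvMat sample = cvMat( 1, 2, CV_32FC1, sample_data );
    printf( "predicted class: %f\n", tree.predict( &sample )->value );

    cvReleaseMat( &var_type );
    return 0;
}
</pre>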
<p>When the tree is built, it may be pruned, if needed, using a cross-validation procedure.
That is, some branches of the tree that may lead to model overfitting are cut off.
Normally, this procedure is applied only to standalone decision trees, while tree ensembles
usually build trees that are small enough and use their own protection schemes against overfitting.
</p>

<h3>Variable importance</h3>
<p>
Besides their obvious use for prediction, decision trees can also be used
for various kinds of data analysis.
One of the key properties of a constructed decision tree is that the importance
(relative decisive power) of each variable can be computed. For example, in a spam
filter that uses the set of words occurring in the message as the feature vector, the variable importance
rating can be used to determine the most "spam-indicating" words and thus help to keep the dictionary
size reasonable.</p>
<p>The importance of each variable is computed over all the splits on this variable in the tree, both primary
and surrogate. Thus, to compute variable importance correctly, surrogate splits must be enabled
in the training parameters, even if there is no missing data.</p>
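<p>Continuing the training sketch above, and assuming the <code>CvDTree::get_var_importance()</code>
method described later in this reference (it returns a matrix with one importance value per input
variable), the rating can be inspected as follows:</p>
<pre>
// print the relative importance of every input variable of the trained tree
// (requires that the tree was trained with use_surrogates enabled)
const CvMat* importance = tree.get_var_importance();
if( importance )
    for( int i = 0; i &lt; importance->cols; i++ )
        printf( "variable #%d importance: %.3f\n", i, cvGetReal1D( importance, i ) );
</pre>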

<p><a name="paper_Brieman84"><b>[Brieman84]
Breiman, L., Friedman, J. Olshen, R. and Stone, C. (1984), "Classification and Regression Trees", Wadsworth.
</b></a></p>


<hr><h3><a name="decl_CvDTreeSplit">CvDTreeSplit</a></h3>
<p class="Blurb">Decision tree node split</p>
<pre>
struct CvDTreeSplit
{
    int var_idx;
    int inversed;
    float quality;
    CvDTreeSplit* next;
    union
    {
        int subset[2];
        struct
        {
            float c;
            int split_point;
        }
        ord;
    };
};
</pre>
<p><dl>
<dt>var_idx<dd>Index of the variable used in the split
<dt>inversed<dd>When it equals 1, the inverse split rule is used
(i.e. the left and the right branches are exchanged in the expressions below)
<dt>quality<dd>The split quality, a positive number. It is used to choose the 
best primary split, then to choose and sort the surrogate splits.
After the tree is constructed, it is also used to compute variable importance.
<dt>next<dd>Pointer to the next split in the node split list.
<dt>subset<dd>Bit array indicating the value subset in case of split on a categorical variable.<br>
              The rule is: <code>if var_value in subset then next_node&lt;-left else next_node&lt;-right</code>
<dt>c<dd>The threshold value in case of split on an ordered variable.<br>
         The rule is: <code>if var_value &lt; c then next_node&lt;-left else next_node&lt;-right</code>
