
機器學習中文參考手冊 (Machine Learning Chinese Reference Manual) - opencv china.htm

When I started using OpenCV, I used this very useful reference to begin my study. This is all I found from the …
<H2>Useful libSVM Resource Links</H2>
<P><A class="external text" title=http://www.csie.ntu.edu.tw/~cjlin/libsvm/ 
href="http://www.csie.ntu.edu.tw/~cjlin/libsvm/" 
rel=nofollow>官方站點,有一些tutorial和測試數據</A> </P>
<P><A class="external text" title=http://bbs.ir-lab.org/cgi-bin/leoboard.cgi 
href="http://bbs.ir-lab.org/cgi-bin/leoboard.cgi" 
rel=nofollow>哈工大的機器學習論壇,非常好</A> </P>
<P>A graduate student at Shanghai Jiao Tong University (Pattern Analysis and Machine Intelligence Lab) also wrote Chinese annotations for the libsvm 2.6 source code. The original link can no longer be found, so please search for it yourself; it is very well written. </P>
<H1>Decision Trees</H1>
<P>The ML classes discussed in this section implement the Classification and 
Regression Trees (CART) algorithm, which is described in [Breiman84]. </P>
<P>The class CvDTree represents a single decision tree that may be used alone, 
or as a base class in tree ensembles (see Boosting and Random Trees). </P>
<P>A decision tree is a binary tree (i.e. a tree where each non-leaf node has 
exactly two child nodes). It can be used either for classification, when each 
tree leaf is marked with a class label (multiple leaves may share the same 
label), or for regression, when each tree leaf is assigned a constant (so the 
approximation function is piecewise constant). </P>
<P><B>Predicting with Decision Trees</B> </P>
<P>To reach a leaf node, and thus obtain a response for the input feature 
vector, the prediction procedure starts at the root node. From each non-leaf 
node the procedure goes either to the left (i.e. selects the left child node as 
the next observed node) or to the right, based on the value of a certain 
variable whose index is stored in the observed node. The variable can be either 
ordered or categorical. In the first case, the variable value is compared with a 
threshold (also stored in the node); if the value is less than the threshold, 
the procedure goes to the left, otherwise to the right (for example, if the 
weight is less than 1 kilogram, go to the left, else to the right). In the 
second case, the discrete variable value is tested for membership in a certain 
subset of values (also stored in the node), taken from the limited set of values 
the variable can have; if it belongs to the subset, the procedure goes to the 
left, else to the right (for example, if the color is green or red, go to the 
left, else to the right). That is, each node uses a pair of entities 
(&lt;variable_index&gt;, &lt;decision_rule (threshold/subset)&gt;). This pair is 
called a split (a split on the variable #&lt;variable_index&gt;). Once a leaf 
node is reached, the value assigned to that node is used as the output of the 
prediction procedure. </P>
<P>Sometimes certain features of the input vector are missing (for example, in 
the dark it is difficult to determine the object's color), and the prediction 
procedure could get stuck at a certain node (in the mentioned example, if that 
node is split on color). To avoid such situations, decision trees use so-called 
surrogate splits: in addition to the best "primary" split, every tree node may 
also be split on one or more other variables with nearly the same results. </P>
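<P>The traversal just described can be sketched in a few lines of C++ against the 
CvDTreeSplit and CvDTreeNode structures documented below. This is an illustrative 
sketch only, not the library's implementation: it handles just the primary split 
on ordered variables and ignores surrogate splits and missing values. </P>
<PRE>// Illustrative sketch (not the library's code): follow primary splits on
// ordered variables from the root down to a leaf and return its value.
// CvDTreeNode and CvDTreeSplit are the structures documented below.
double predict_ordered(const CvDTreeNode* node, const float* features)
{
    while (node->left)                        // non-leaf nodes have two children
    {
        const CvDTreeSplit* split = node->split;   // the primary split
        bool go_left = features[split->var_idx] &lt; split->ord.c;
        if (split->inversed)                  // inversed == 1 swaps the branches
            go_left = !go_left;
        node = go_left ? node->left : node->right;
    }
    return node->value;                       // class label or regression constant
}
</PRE>
<P>When the split variable is missing in the input vector, the real procedure 
would instead consult the surrogate splits chained through split->next, as 
explained above. </P>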
<P><B>Training Decision Trees</B> </P>
<P>The tree is built recursively, starting from the root node. The whole 
training data set (feature vectors and responses) is used to split the root 
node. In each node the optimal decision rule (i.e. the best "primary" split) is 
found based on some criterion (in ML the Gini "purity" criterion is used for 
classification, and the sum of squared errors for regression). Then, if 
necessary, surrogate splits are found that most closely resemble the results 
of the primary split on the training data; all data are divided between the 
left and the right child nodes using the primary and the surrogate splits (just 
as in the prediction procedure). The procedure then recursively splits both the 
left and the right nodes. At each node the recursion may stop (i.e. the node is 
not split further) in any of the following cases (a minimal sketch of these 
checks follows the list): </P>
<UL>
  <LI>The depth of the tree branch being constructed has reached the specified 
  maximum value. 
  <LI>The number of training samples in the node is less than the specified 
  threshold, i.e. the set is not statistically representative enough to split 
  the node further. 
  <LI>All the samples in the node belong to the same class (or, in the case of 
  regression, the variation is too small). 
  <LI>The best split found does not give any noticeable improvement compared to 
  a random choice. </LI></UL>
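<P>The sketch below expresses these stopping conditions as a single predicate. 
The field names max_depth and min_sample_count mirror the CvDTreeParams 
structure documented further down; the purity flag and the two quality values 
are hypothetical inputs standing in for the last two criteria. </P>
<PRE>// Illustrative sketch of the stop-splitting tests listed above.
// max_depth and min_sample_count mirror CvDTreeParams (documented below);
// node_is_pure and the quality values are hypothetical inputs standing in for
// the "same class / tiny variation" and "no noticeable improvement" checks.
bool should_stop_splitting(const CvDTreeNode* node,
                           const CvDTreeParams* params,
                           bool node_is_pure,
                           double best_split_quality,
                           double random_split_quality)
{
    if (node->depth >= params->max_depth)                // branch deep enough
        return true;
    if (node->sample_count &lt; params->min_sample_count)  // too few samples left
        return true;
    if (node_is_pure)              // all samples share one class / tiny variation
        return true;
    // the best split found is no better than a random choice
    return best_split_quality &lt;= random_split_quality;
}
</PRE>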
<P>When the tree is built, it may be pruned, if needed, using a cross-validation 
procedure: some branches of the tree that could lead to model overfitting are 
cut off. Normally this procedure is applied only to standalone decision trees, 
while tree ensembles usually build small enough trees and use their own 
protection schemes against overfitting. </P>
<P><B>Variable importance</B> </P>
<P>Besides the obvious use of decision trees for prediction, a tree can also be 
used for various kinds of data analysis. One of the key properties of the 
constructed decision tree algorithms is the possibility to compute the 
importance (relative decisive power) of each variable. For example, in a spam 
filter that uses the set of words occurring in a message as a feature vector, 
the variable importance rating can be used to determine the most 
"spam-indicating" words and thus help keep the dictionary size reasonable. </P>
<P>The importance of each variable is computed over all the splits on this 
variable in the tree, both primary and surrogate ones. Thus, to compute variable 
importance correctly, surrogate splits must be enabled in the training 
parameters, even if there is no missing data. </P>
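<P>A rough illustration of such a rating is sketched below: it walks the tree 
and adds the quality of every split (primary and surrogate, chained through 
split->next) to the tally of the variable it splits on. The library's actual 
formula may differ; this only mirrors the statement above that all splits on a 
variable contribute to its importance. </P>
<PRE>// Illustrative sketch only: accumulate per-variable importance as the sum of
// the quality of every split (the primary one and the surrogates linked via
// next). importance must point to an array with one entry per input variable,
// initialized to zero by the caller.
void accumulate_importance(const CvDTreeNode* node, double* importance)
{
    if (!node || !node->left)          // null pointer or leaf node: nothing to add
        return;
    for (const CvDTreeSplit* s = node->split; s != 0; s = s->next)
        importance[s->var_idx] += s->quality;
    accumulate_importance(node->left, importance);
    accumulate_importance(node->right, importance);
}
</PRE>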
<P>[Breiman84] Breiman, L., Friedman, J., Olshen, R., and Stone, C. (1984), 
"Classification and Regression Trees", Wadsworth. </P>
<H2>CvDTreeSplit</H2>
<P>Decision tree node split </P><PRE>struct CvDTreeSplit
{
    int var_idx;
    int inversed;
    float quality;
    CvDTreeSplit* next;
    union
    {
        int subset[2];
        struct
        {
            float c;
            int split_point;
        }
        ord;
    };
};
</PRE>
<DL>
  <DT>var_idx
  <DD>Index of the variable used in the split 
  <DT>inversed
  <DD>When it equals 1, the inverse split rule is used (i.e. left and right 
  branches are exchanged in the expressions below) 
  <DT>quality
  <DD>The split quality, a positive number. It is used to choose the best 
  primary split, then to choose and sort the surrogate splits. After the tree is 
  constructed, it is also used to compute variable importance. 
  <DT>next
  <DD>Pointer to the next split in the node split list. 
  <DT>subset
  <DD>Bit array indicating the value subset in case of split on a categorical 
  variable. The rule is: if var_value in subset then next_node&lt;-left else 
  next_node&lt;-right 
  <DT>c
  <DD>The threshold value in case of split on an ordered variable. The rule is: 
  if var_value &lt; c then next_node&lt;-left else next_node&lt;-right 
  <DT>split_point
  <DD>Used internally by the training algorithm. </DD></DL>
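<P>A membership test for the categorical rule above might look like the sketch 
below. The exact bit layout of subset is an assumption here: the two ints are 
treated as a 64-bit mask indexed by the category value (32 bits per int). </P>
<PRE>// Sketch of the categorical rule "if var_value in subset then left else right".
// Assumes subset[2] is used as a 64-bit mask with bit i set when category
// value i belongs to the "left" subset; this layout is an assumption and is
// not stated in the description above.
bool goes_left_categorical(const CvDTreeSplit* split, int category)
{
    int word = split->subset[category / 32];     // which of the two 32-bit words
    int bit  = category % 32;                    // which bit inside that word
    bool in_subset = ((word >> bit) &amp; 1) != 0;
    return split->inversed ? !in_subset : in_subset;   // inversed swaps branches
}
</PRE>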
<H2>CvDTreeNode</H2>
<P>Decision tree node </P><PRE>struct CvDTreeNode
{
    int class_idx;
    int Tn;
    double value;

    CvDTreeNode* parent;
    CvDTreeNode* left;
    CvDTreeNode* right;

    CvDTreeSplit* split;

    int sample_count;
    int depth;
    ...
};
</PRE>
<DL>
  <DT>value
  <DD>The value assigned to the tree node. It is either a class label, or the 
  estimated function value. 
  <DT>class_idx
  <DD>The normalized class index assigned to the node (in the 0..class_count-1 
  range); it is used internally in classification trees and tree ensembles. 
  <DT>Tn
  <DD>The tree index in an ordered sequence of trees. The indices are used during 
  and after the pruning procedure. The root node has the maximum Tn value of the 
  whole tree, child nodes have Tn less than or equal to the parent's Tn, and 
  nodes with Tn≤CvDTree::pruned_tree_idx are not taken into consideration at the 
  prediction stage (the corresponding branches are considered cut off), even if 
  they have not been physically deleted from the tree at the pruning stage. 
  <DT>parent, left, right
  <DD>Pointers to the parent node, left and right child nodes. 
  <DT>split
  <DD>Pointer to the first (primary) split. 
  <DT>sample_count
  <DD>The number of samples that fall into the node at the training stage. It is 
  used to resolve difficult cases: when the variable for the primary split is 
  missing and all the variables for the surrogate splits are missing too, the 
  sample is directed to the left if 
  left-&gt;sample_count&gt;right-&gt;sample_count and to the right otherwise. 
  <DT>depth
  <DD>The node depth, the root node depth is 0, the child nodes depth is the 
  parent's depth + 1. </DD></DL>
<P>Numerous other fields of CvDTreeNode are used internally at the training 
stage. </P>
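<P>Two of the descriptions above translate directly into small helper checks, 
sketched here for illustration; pruned_tree_idx is passed in explicitly to keep 
the sketch self-contained, while in the library it lives in CvDTree. </P>
<PRE>// Illustrative sketches of two rules described above.

// A node acts as a leaf at prediction time when it has no children, or when
// its children are logically cut off by pruning (their Tn &lt;= pruned_tree_idx).
bool is_effective_leaf(const CvDTreeNode* node, int pruned_tree_idx)
{
    return node->left == 0 || node->left->Tn &lt;= pruned_tree_idx;
}

// Fallback direction when the primary-split variable and all surrogate-split
// variables are missing: follow the child that received more training samples.
const CvDTreeNode* majority_child(const CvDTreeNode* node)
{
    return node->left->sample_count > node->right->sample_count
               ? node->left : node->right;
}
</PRE>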
<H2>CvDTreeParams</H2>
<P>Decision tree training parameters </P><PRE>struct CvDTreeParams
{
    int max_categories;
    int max_depth;
    int min_sample_count;
    int cv_folds;
    ...
};
</PRE>
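<P>A minimal usage sketch of these parameters with the CvDTree class is given 
below. The train/predict signatures, the header name and the default constructor 
of CvDTreeParams are assumptions here and may differ between OpenCV versions; 
the four-sample data set is a toy stand-in. </P>
<PRE>#include &lt;stdio.h&gt;
#include "ml.h"     // legacy OpenCV machine learning module (assumed header name)

// Minimal usage sketch (assumed API, may differ between OpenCV versions):
// train a CvDTree on a tiny toy set and predict one sample.
int main()
{
    // 4 samples x 2 ordered features, one response per sample (0 or 1).
    float samples[] = { 1.f, 10.f,
                        2.f, 20.f,
                        8.f, 10.f,
                        9.f, 20.f };
    float labels[]  = { 0.f, 0.f, 1.f, 1.f };

    CvMat train_data = cvMat(4, 2, CV_32FC1, samples);
    CvMat responses  = cvMat(4, 1, CV_32FC1, labels);

    // Mark both features as ordered and the response as categorical,
    // so the tree is trained for classification rather than regression.
    unsigned char types[] = { CV_VAR_ORDERED, CV_VAR_ORDERED, CV_VAR_CATEGORICAL };
    CvMat var_type = cvMat(3, 1, CV_8UC1, types);

    CvDTreeParams params;            // assumed default constructor
    params.max_depth        = 4;     // cap the branch depth (see the criteria above)
    params.min_sample_count = 1;     // allow splits on this tiny toy set
    params.cv_folds         = 0;     // skip cross-validation pruning here

    CvDTree tree;
    // CV_ROW_SAMPLE: every row of train_data is one sample.
    tree.train(&amp;train_data, CV_ROW_SAMPLE, &amp;responses, 0, 0, &amp;var_type, 0, params);

    float query[] = { 8.5f, 15.f };
    CvMat sample = cvMat(1, 2, CV_32FC1, query);
    CvDTreeNode* leaf = tree.predict(&amp;sample);
    printf("predicted value: %f\n", leaf->value);   // value field documented above
    return 0;
}
</PRE>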
