<p>
obj is the optimal objective value of the dual SVM problem.
rho is the bias term in the decision function
sgn(w^Tx - rho).
nSV and nBSV are the numbers of support vectors and bounded support
vectors (i.e., alpha_i = C). nu-SVM is a somewhat equivalent
form of C-SVM where C is replaced by nu; nu simply shows the
corresponding parameter. More details are in the
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf">
libsvm document</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f402"><b>Q: Can you explain more about the model file?</b></a>
<br/>                                                                                

<p>
After the parameters, each line represents a support vector.
Support vectors are listed in the order of the "labels" shown earlier
(i.e., those from the first class in the "labels" list are
grouped first, and so on).
If k is the total number of classes,
in front of each support vector there are
k-1 coefficients
y*alpha, where the alphas are the dual solutions of the
following two-class problems:
<br>
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
<br>
and y=1 for the first j-1 coefficients and y=-1 for the remaining
k-j coefficients.

For example, if there are 4 classes, the file looks like:

<pre>
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
</pre>
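<p>
As a rough illustration of how these coefficients are used, here is a
minimal C sketch (not libsvm's actual code; the array names are
hypothetical stand-ins for a parsed model) of the one-against-one
decision value for a pair of classes i and j with i < j:
<pre>
/* kvalue[s]    : kernel value K(x, s-th support vector)
   start[c]     : index of the first SV of class c
   nSV[c]       : number of SVs of class c
   sv_coef[m][s]: the m-th of the k-1 stored coefficients of SV s
   rho_ij       : the bias term of the (i,j) subproblem          */
double decision_ij(int i, int j, const double *kvalue,
                   const int *start, const int *nSV,
                   double *const *sv_coef, double rho_ij)
{
    double sum = 0.0;
    /* class i's SVs contribute their (j-1)-th coefficient ... */
    for (int s = start[i]; s < start[i] + nSV[i]; s++)
        sum += sv_coef[j - 1][s] * kvalue[s];
    /* ... and class j's SVs contribute their i-th coefficient */
    for (int s = start[j]; s < start[j] + nSV[j]; s++)
        sum += sv_coef[i][s] * kvalue[s];
    return sum - rho_ij;   /* the sign decides between i and j */
}
</pre>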
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f403"><b>Q: Should I use float or double to store numbers in the cache ?</b></a>
<br/>                                                                                

<p>
We use float as the default because more numbers can be stored
in the cache.
In general this is good enough, but for a few difficult
cases (e.g., a very large C) where the solutions are huge
numbers, the numerical precision of float alone may not be
enough.
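<p>
As a quick illustration (plain C, unrelated to libsvm's code): float
keeps only about 7 significant decimal digits, so a small update to a
huge stored value can be lost entirely, while double keeps it:
<pre>
#include <stdio.h>

int main(void)
{
    float  f = 16777216.0f;  /* 2^24, the limit of float's integer precision */
    double d = 16777216.0;

    printf("float:  %.1f\n", f + 1.0f);  /* 16777216.0 -- the +1 is lost */
    printf("double: %.1f\n", d + 1.0);   /* 16777217.0                   */
    return 0;
}
</pre>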
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel?</b></a>
<br/>                                                                                

<p>
In general we suggest trying the RBF kernel first.
A recent result by Keerthi and Lin
(<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz>
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using sigmoid may not be positive definite,
and in general its accuracy is not better than RBF
(see the paper by Lin and Lin;
<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf>
download paper here</a>).
Polynomial kernels are OK, but if a high degree is used,
numerical difficulties tend to happen
(think of the dth power of a number below 1 going to 0
and of a number above 1 going to infinity).
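<p>
A quick way to see the problem (illustrative C, with an assumed degree
of 50):
<pre>
#include <stdio.h>
#include <math.h>

int main(void)
{
    int d = 50;  /* a high polynomial degree */
    printf("0.5^%d = %g\n", d, pow(0.5, d));  /* about 8.9e-16, nearly 0 */
    printf("2.0^%d = %g\n", d, pow(2.0, d));  /* about 1.1e+15, enormous */
    return 0;
}
</pre>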
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, at this point libsvm solves linear and nonlinear SVMs in the
same way.
Note that there are some possible
tricks to save training/testing time if the
linear kernel is used.
Hence libsvm is <b>NOT</b> particularly efficient for linear SVMs,
especially when
using a large C on
problems whose number of data points is much larger
than the number of attributes.
You can
<ul>
<li>
Use a small C only. We have shown in the following paper
that once C is larger than a certain threshold,
the decision function is the same.
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href=http://www.csie.ntu.edu.tw/~cjlin/bsvm>bsvm</a>,
which includes an efficient implementation for
linear SVMs.
More details can be found in the following study:
<p>
K.-M. Chung, W.-C. Kao, 
T. Sun, 
and
C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/linear.pdf">
Decomposition Methods for Linear Support Vector Machines.
</A> 
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>,
16(2004), 1689-1704. 
</ul>

<p> On the other hand, you may not really need to solve
linear SVMs in the first place. See the previous question on choosing
kernels for details.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
 <p>
This usually happens when the data are overfitted.
If the attributes of your data are in large ranges,
try scaling them. Then the region
of appropriate parameters may be larger.
Note that libsvm includes a scaling program,
svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes, you can do the following:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel,
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi).
</pre>
Since x''-y'' = 2(x'-y'), the squared distance in the RBF kernel is
four times larger after [-1,1] scaling. Hence, using (C,g) on the
[0,1]-scaled data is the same as using (C,g/4) on the [-1,1]-scaled data.
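<p>
A small numerical check of this equivalence (illustrative C; mi, Mi, x,
y, and g are assumed values):
<pre>
#include <stdio.h>
#include <math.h>

int main(void)
{
    double mi = 3.0, Mi = 11.0;        /* assumed range of the attribute */
    double x = 5.0, y = 9.5, g = 0.7;  /* arbitrary sample values        */

    double d1 = (x - y) / (Mi - mi);        /* x'  - y'  after [0,1]  */
    double d2 = 2.0 * (x - y) / (Mi - mi);  /* x'' - y'' after [-1,1] */

    /* both lines print the same kernel value */
    printf("[0,1]  with g:   %.12f\n", exp(-g * d1 * d1));
    printf("[-1,1] with g/4: %.12f\n", exp(-(g / 4.0) * d2 * d2));
    return 0;
}
</pre>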

<p> Though the performance is the same, the computational
time may be different. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try the model selection tool grid.py in the python
directory to find
good parameters. To see the importance of model selection,
please
see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector 
classification 
</A>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
then the penalty for class "-1" is five times larger.
Note that this -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always in [0,1]. A nice property
of nu is that it is directly related to the ratio of
support vectors and the ratio of training
errors: nu is an upper bound on the fraction of training errors
and a lower bound on the fraction of support vectors.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running without showing any output. What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot be split
across lines. In addition, you have to remove empty lines.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm guarantees convergence if the kernel
matrix is positive semidefinite.
After version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
If the program keeps running, it usually means you are
handling an ill-conditioned situation
(e.g., too large/small parameters) in which numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
For large problems, please specify a large enough cache size (i.e.,
-m).
Slow convergence may happen for some difficult cases (e.g., when -c is large).
You can try a looser stopping tolerance with -e.
If that still doesn't work, you may want to contact us. We can show you some
tricks for improving the training time.
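<p>
For example, a run that enlarges the kernel cache to 300MB and loosens
the stopping tolerance might look like the following (the file name and
parameter values are illustrative):
<pre>
svm-train -m 300 -e 0.01 -c 8 data_file
</pre>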

<p>
If you are using polynomial kernels, please check the question on the pow() function.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for multi-class cases. You
can easily obtain the values by calling the subroutine
svm_predict_values. Their corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.
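<p>
A minimal C sketch for a k-class classification model is below; the
exact signature of svm_predict_values has varied across libsvm versions
(older versions return void), so please check svm.h and the README of
your copy:
<pre>
#include <stdio.h>
#include <stdlib.h>
#include "svm.h"

/* Print the k*(k-1)/2 pairwise decision values of a one-against-one
   multi-class model for a single test instance x. */
void show_decision_values(const struct svm_model *model,
                          const struct svm_node *x)
{
    int k = svm_get_nr_class(model);
    int *labels = malloc(k * sizeof(int));
    double *dec = malloc(k * (k - 1) / 2 * sizeof(double));

    svm_get_labels(model, labels);      /* class labels in training order */
    svm_predict_values(model, x, dec);  /* one value per class pair */

    int p = 0;
    for (int i = 0; i < k; i++)
        for (int j = i + 1; j < k; j++, p++)
            printf("%d vs %d: %g\n", labels[i], labels[j], dec[p]);

    free(labels);
    free(dec);
}
</pre>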
