<p>
obj is the optimal objective value of the dual SVM problem.
rho is the bias term in the decision function
sgn(w^Tx - rho).
nSV and nBSV are the numbers of support vectors and bounded support
vectors (i.e., those with alpha_i = C). nu-svm is a somewhat equivalent
form of C-SVM in which C is replaced by nu; the printed nu simply shows the
corresponding parameter. More details are in the
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf">
libsvm document</a>.
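<p>
For reference, svm-train prints these quantities when training
finishes. A sample (the numbers below are illustrative, not from a
real run):
<pre>
optimization finished, #iter = 219
nu = 0.431029
obj = -100.877288, rho = 0.424462
nSV = 132, nBSV = 107
Total nSV = 132
</pre>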
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f402"><b>Q: Can you explain more about the model file?</b></a>
<br/>                                                                                

<p>
After the parameters, each line represents a support vector.
Support vectors are listed in the order of the "labels" shown earlier
(i.e., those from the first class in the "labels" list are
grouped first, and so on).
If k is the total number of classes,
in front of each support vector there are
k-1 coefficients
y*alpha, where the alpha are the dual solutions of the
following two-class problems:
<br>
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
<br>
and y=1 in the first j-1 coefficients, y=-1 in the remaining
k-j coefficients.

For example, if there are 4 classes, the file looks like:

<pre>
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
</pre>
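<p>
For reference, the parameter section at the top of a model file looks
like the following sketch (RBF kernel, two classes; all numbers are
illustrative):
<pre>
svm_type c_svc
kernel_type rbf
gamma 0.5
nr_class 2
total_sv 4
rho 0.424462
label 1 -1
nr_sv 2 2
SV
0.25 1:0.708333 2:1 3:-0.320755
...
</pre>
Each line after "SV" is one support vector: the k-1 coefficients
first, then the attributes in index:value format.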
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f403"><b>Q: Should I use float or double to store numbers in the cache ?</b></a>
<br/>                                                                                

<p>
We use float as the default because more numbers can be stored
in the cache.
In general this is good enough, but for a few difficult
cases (e.g., a very large C) where the solutions are huge
numbers, the numerical precision of float alone may not be
enough.
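<p>
If you need more precision, the type of the cached kernel entries can
be changed in svm.cpp (the typedef below is from the libsvm source;
the comment is ours):
<pre>
// svm.cpp: type used for cached kernel entries;
// switch to double if float precision is insufficient.
typedef float Qfloat;
</pre>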
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel?</b></a>
<br/>                                                                                

<p>
In general we suggest trying the RBF kernel first.
A recent result by Keerthi and Lin
(<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz">
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using the sigmoid may not be positive definite,
and in general its accuracy is not better than RBF
(see the paper by Lin and Lin,
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf">
download paper here</a>).
Polynomial kernels are ok, but if a high degree is used,
numerical difficulties tend to happen:
the dth power of a value less than 1 goes to 0,
and that of a value greater than 1 goes to infinity.
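<p>
A minimal C++ illustration of this effect (the base values 0.5 and
1.5 are arbitrary):
<pre>
#include &lt;cstdio&gt;
#include &lt;cmath&gt;

int main() {
    // The dth power of a kernel value collapses to 0 or blows up
    // as the degree grows.
    for (int d = 1; d <= 64; d *= 2)
        std::printf("d=%2d  0.5^d=%.3e  1.5^d=%.3e\n",
                    d, std::pow(0.5, d), std::pow(1.5, d));
    return 0;
}
</pre>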
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, at this point libsvm solves linear and nonlinear SVMs in the
same way.
Note that there are some possible
tricks to save training/testing time if the
linear kernel is used.
Hence libsvm is <b>NOT</b> particularly efficient for linear SVM,
especially when
a large C is used on
problems whose number of data points is much larger
than the number of attributes.
You can
<ul>
<li>
Use a small C only. We have shown in the following paper
that once C is larger than a certain threshold,
the decision function is the same.
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href="http://www.csie.ntu.edu.tw/~cjlin/bsvm">bsvm</a>,
which includes an efficient implementation for
linear SVMs.
More details can be found in the following study:
<p>
K.-M. Chung, W.-C. Kao, 
T. Sun, 
and
C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/linear.pdf">
Decomposition Methods for Linear Support Vector Machines.
</A> 
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>,
16(2004), 1689-1704. 
</ul>

<p> On the other hand, you may not really need to solve
linear SVMs; see the previous question on choosing
kernels for details.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
<p>
This usually happens when the model overfits the data.
If the attributes of your data are in large ranges,
try scaling them. Then the region
of appropriate parameters may be larger.
Note that libsvm includes a scaling program, svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes. Use -s to save the scaling parameters computed from the training
data, and -r to load them so the test data are scaled the same way:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel, for each attribute,
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi)=2(x'-y'),
</pre>
so ||x''-y''||^2 = 4||x'-y'||^2 and
exp(-g||x'-y'||^2) = exp(-(g/4)||x''-y''||^2).
Hence, using (C,g) on the [0,1]-scaled data is the
same as (C,g/4) on the [-1,1]-scaled data.

<p> Though the performance is the same, the computational
time may differ. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
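<p>
A minimal C++ check of this equivalence (the distance and gamma
values below are arbitrary):
<pre>
#include &lt;cstdio&gt;
#include &lt;cmath&gt;

int main() {
    double d01 = 0.3;        // ||x'-y'|| on the [0,1] scale (illustrative)
    double d11 = 2.0 * d01;  // the same pair on the [-1,1] scale
    double g   = 1.7;        // illustrative gamma
    // Both lines print the same kernel value.
    std::printf("exp(-g*d01^2)     = %.10f\n", std::exp(-g * d01 * d01));
    std::printf("exp(-(g/4)*d11^2) = %.10f\n", std::exp(-(g/4) * d11 * d11));
    return 0;
}
</pre>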
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try the model selection tool grid.py in the python
directory to find
good parameters. To see the importance of model selection,
please
see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector 
classification 
</A>
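<p>
For example, assuming python and gnuplot are installed, something
like the following searches a grid of (C, gamma) values (the -v 5
option, five-fold cross validation, is an assumption here; check
grid.py's usage message for the options and their defaults):
<br> python grid.py -v 5 train_data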
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
the penalty for class "-1" is larger: the weight multiplies C, so
class "-1" effectively uses C=50 while class "1" uses C=10.
Note that this -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always in (0,1]. A nice property
of nu is that it directly bounds the fractions of
support vectors and training errors: nu is an upper bound on the
fraction of training errors and a lower bound on the fraction of
support vectors.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running without showing any output. What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot be split
across lines. In addition, remove any empty lines.
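<p>
For reference, each line follows the sparse format below (the numeric
values are illustrative):
<pre>
&lt;label&gt; &lt;index1&gt;:&lt;value1&gt; &lt;index2&gt;:&lt;value2&gt; ...
+1 1:0.708333 2:1 3:-0.320755
-1 1:0.583333 2:-1 3:0.245283
</pre>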
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm guarantees convergence if the kernel
matrix is positive semidefinite.
After version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
Therefore, endless output usually means you are
handling an ill-conditioned situation
(e.g., too large/small parameters), so numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
For large problems, please specify a large enough cache size (i.e.,
the -m option).
Slow convergence may happen for some difficult cases (e.g., when -c is large).
You can try a looser stopping tolerance with -e, as in the example below.
If that still doesn't work, you may want to contact us; we can show you some
tricks for improving the training time.
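<p>
For example (the cache size in MB and the tolerance value below are
illustrative):
<br> svm-train -m 300 -e 0.01 train_data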

<p>
If you are using polynomial kernels, please check the question on the pow() function.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for multi-class cases. You
can obtain the values by calling the subroutine
svm_predict_values. Their corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.
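<p>
A minimal sketch in C++ (the model file name is illustrative, error
handling is omitted, and svm_predict_values is assumed to have the
void signature of libsvm versions contemporary with this FAQ):
<pre>
#include &lt;cstdio&gt;
#include "svm.h"

int main() {
    svm_model *model = svm_load_model("train_data.model"); // illustrative name
    if (!model) return 1;

    int k = svm_get_nr_class(model);
    int *labels = new int[k];
    svm_get_labels(model, labels);  // class labels, in the model's order

    // One sparse instance: index/value pairs terminated by index = -1.
    svm_node x[4] = { {1, 0.708333}, {2, 1.0}, {3, -0.320755}, {-1, 0.0} };

    // Classification fills k*(k-1)/2 pairwise decision values.
    double *dec = new double[k * (k - 1) / 2];
    svm_predict_values(model, x, dec);
    for (int i = 0; i < k * (k - 1) / 2; i++)
        std::printf("dec[%d] = %g\n", i, dec[i]);

    delete[] labels;
    delete[] dec;
    svm_destroy_model(model);  // svm_free_and_destroy_model in newer versions
    return 0;
}
</pre>
The pairwise values are ordered 1-vs-2, 1-vs-3, ..., 1-vs-k, 2-vs-3,
..., (k-1)-vs-k with respect to the label order from svm_get_labels.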
