
corresponding parameter. More details are in the
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf">
libsvm document</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f402"><b>Q: Can you explain more about the model file?</b></a>
<br/>                                                                                

<p>
After the parameters, each line represents a support vector.
Support vectors are listed in the order of the "labels" shown earlier
(i.e., those from the first class in the "labels" list are
grouped first, and so on).
If k is the total number of classes,
each support vector is preceded by
k-1 coefficients
y*alpha, where the alpha are the dual solutions of the
following two-class problems:
<br>
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
<br>
with y=1 for the first j-1 coefficients and y=-1 for the remaining
k-j coefficients.

For example, if there are 4 classes, the file looks like:

<pre>
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
</pre>
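<p>
To make the layout concrete, here is a minimal Python sketch (an
illustration, not part of libsvm) that, for a k-class model, lists the
binary problems and signs behind the k-1 coefficients stored in front of
each support vector of class j:
<pre>
def sv_coefficient_layout(k, j):
    # For class j (1-indexed) in a k-class model, return the k-1
    # binary problems whose y*alpha coefficients precede each of its
    # support vectors, with the sign convention described above.
    problems = [(i, j) for i in range(1, j)] + [(j, m) for m in range(j + 1, k + 1)]
    signs = [+1] * (j - 1) + [-1] * (k - j)   # y=+1 first, then y=-1
    return list(zip(problems, signs))

# Class 2 in the 4-class example above: columns 1v2, 2v3, 2v4.
for (a, b), y in sv_coefficient_layout(k=4, j=2):
    print(f"{a} vs {b}: y = {y:+d}")
</pre>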
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f403"><b>Q: Should I use float or double to store numbers in the cache ?</b></a>
<br/>                                                                                

<p>
We use float as the default because it lets you store more numbers
in the cache.
In general this is good enough, but for a few difficult
cases (e.g., a very large C) where the solutions are huge
numbers, the numerical precision of float may not be
enough.
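<p>
A quick way to see the precision limit of single-precision floats, using
only the Python standard library (this illustration is not part of libsvm):
<pre>
import struct

def to_float32(x):
    # Round-trip a Python float through IEEE-754 single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

# Above 2^24, float32 cannot distinguish consecutive integers,
# so huge values lose precision in a float cache.
print(to_float32(16777216.0) == to_float32(16777217.0))  # True
</pre>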
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel?</b></a>
<br/>                                                                                

<p>
In general we suggest trying the RBF kernel first.
A result by Keerthi and Lin
(<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz>
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using the sigmoid may not be positive definite,
and in general its accuracy is not better than RBF's
(see the paper by Lin and Lin,
<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf>
download paper here</a>).
Polynomial kernels are OK, but if a high degree is used,
numerical difficulties tend to happen
(the d-th power of a value less than 1 goes to 0,
while that of a value greater than 1 goes to infinity).
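<p>
A small Python illustration (not part of libsvm) of why high-degree
polynomial kernels are numerically fragile:
<pre>
# The d-th power of a kernel value drifts to 0 or blows up as d grows.
for base in (0.5, 2.0):
    for d in (10, 100, 1100):
        try:
            print(f"{base}**{d} = {base ** d:.3e}")
        except OverflowError:
            print(f"{base}**{d} overflows")
</pre>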
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, at this point libsvm solves linear and nonlinear SVMs in the
same way.
Note that there are some possible
tricks to save training/testing time if the
linear kernel is used.
Hence libsvm is <b>NOT</b> particularly efficient for linear SVMs,
especially when using a large C on
problems whose number of data points is much larger
than the number of attributes.
You can
You can 
<ul>
<li>
Use a small C only. We have shown in the following paper
that after C exceeds a certain threshold,
the decision function is the same (see the sketch after this list
for an illustration).
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href=http://www.csie.ntu.edu.tw/~cjlin/bsvm>bsvm</a>,
which includes an efficient implementation for
linear SVMs.
More details can be found in the following study:
<p>
K.-M. Chung, W.-C. Kao, 
T. Sun, 
and
C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/linear.pdf">
Decomposition Methods for Linear Support Vector Machines.
</A> 
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>,
16(2004), 1689-1704. 
</ul>
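<p>
To illustrate the first point, here is a minimal sketch using scikit-learn
(an assumption for demonstration purposes; it is not part of libsvm): on
separable data, the linear-SVM weight vector stops changing once C passes
a threshold, so a huge C only wastes training time.
<pre>
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Two well-separated Gaussian blobs.
X = np.vstack([rng.randn(20, 2) + [2, 2], rng.randn(20, 2) - [2, 2]])
y = np.array([1] * 20 + [-1] * 20)

for C in (0.01, 1.0, 100.0, 10000.0):
    w = SVC(kernel='linear', C=C).fit(X, y).coef_[0]
    print(f"C={C:<8} w={np.round(w, 4)}")  # w stabilizes for large C
</pre>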

<p> On the other hand, you do not really need to solve
linear SVMs. See the previous question about choosing
kernels for details.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
 <p>
This usually happens when the model overfits the data.
If the attributes of your data are in large ranges,
try scaling them. Then the region
of appropriate parameters may be larger.
Note that libsvm includes a scaling program,
svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes, you can do the following:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
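<p>
A minimal Python sketch (an illustration, not the actual svm-scale
implementation) of what storing and re-using the scaling parameters means:
the per-attribute min/max come from the training data only, and test values
may legitimately fall outside the target range.
<pre>
def fit_scaling(rows, lower=-1.0, upper=1.0):
    # Record per-attribute (min, max) on the TRAINING data only.
    cols = list(zip(*rows))
    return [(min(c), max(c)) for c in cols], lower, upper

def apply_scaling(rows, params):
    ranges, lo, up = params
    return [[lo + (up - lo) * (v - mn) / (mx - mn) if mx > mn else v
             for v, (mn, mx) in zip(row, ranges)] for row in rows]

train = [[1.0, 200.0], [3.0, 400.0], [2.0, 300.0]]
test = [[4.0, 250.0]]                 # 4.0 exceeds the training max
params = fit_scaling(train)
print(apply_scaling(train, params))   # all values within [-1, 1]
print(apply_scaling(test, params))    # first attribute maps to 2.0
</pre>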
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel,
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi),
</pre>
so |x''-y''|^2 = 4|x'-y'|^2.
Hence, using (C,g) on the [0,1]-scaled data is the
same as (C,g/4) on the [-1,1]-scaled data.
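<p>
A quick numeric check of this equivalence in Python (for illustration only,
using the RBF kernel exp(-g*|u-v|^2) that libsvm uses):
<pre>
import math

def rbf(u, v, g):
    return math.exp(-g * sum((a - b) ** 2 for a, b in zip(u, v)))

x1, y1 = [0.2, 0.7], [0.9, 0.1]    # points scaled to [0,1]
x2 = [2 * t - 1 for t in x1]       # the same points on [-1,1]
y2 = [2 * t - 1 for t in y1]

g = 0.5
print(rbf(x1, y1, g))        # identical ...
print(rbf(x2, y2, g / 4))    # ... kernel values
</pre>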

<p> Though the performance is the same, the computational
time may be different. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try the model selection tool grid.py in the python
directory to find
good parameters. To see the importance of model selection,
please see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector 
classification 
</A>
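<p>
A minimal sketch (not grid.py itself) of the grid search it performs; the
cross_val_accuracy stub stands in for running 5-fold cross validation
(e.g., svm-train -v 5 -c C -g G data) and parsing the reported accuracy,
and the exponent grids follow grid.py's defaults:
<pre>
import itertools

def cross_val_accuracy(c, g):
    # Placeholder: run cross validation with (c, g), return accuracy.
    return 0.0

C_range = [2 ** e for e in range(-5, 16, 2)]       # 2^-5 ... 2^15
gamma_range = [2 ** e for e in range(-15, 4, 2)]   # 2^-15 ... 2^3
best = max(itertools.product(C_range, gamma_range),
           key=lambda cg: cross_val_accuracy(*cg))
print("best (C, gamma):", best)
</pre>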
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
the penalty for class "-1" is five times larger (its effective C
becomes 5*10=50).
Note that the -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always in [0,1]. A nice property
of nu is that it is directly related to the fractions of
support vectors and of training errors
(nu is an upper bound on the fraction of training errors
and a lower bound on the fraction of support vectors).
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running (without showing any output). What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot be split
across lines. In addition, you have to remove any empty lines.
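<p>
For reference, each line must follow the sparse format
"label index1:value1 index2:value2 ...", with indices in ascending
order and no blank lines, for example:
<pre>
+1 1:0.708 3:1.0 4:-0.320
-1 1:0.583 2:-1.0 4:-0.603
</pre>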
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm is guaranteed to converge if the kernel
matrix is positive semidefinite.
After version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
Endless running therefore usually means you are
handling an ill-conditioned situation
(e.g., parameters that are too large or too small), so numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
For large problems, please specify a large enough cache size (i.e.,
-m).
Slow convergence may happen for some difficult cases (e.g., when -c is large).
You can try a looser stopping tolerance with -e.
If that still doesn't work, you may want to train on only a subset of the data.
You can use the program subset.py in the "tools" directory
to obtain a random subset.
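<p>
A minimal standalone sketch of drawing such a random subset (an
illustration, not the actual subset.py script):
<pre>
import random

def random_subset(in_path, out_path, n, seed=0):
    # Keep n lines drawn uniformly at random, preserving the format.
    with open(in_path) as f:
        lines = f.readlines()
    random.seed(seed)
    with open(out_path, 'w') as f:
        f.writelines(random.sample(lines, min(n, len(lines))))

# usage: random_subset('train_data', 'train_data.sub', 5000)
</pre>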

<p>
If you are using polynomial kernels, please check the question on the pow() function.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for multi-class cases. You
can easily obtain the values by calling the subroutine
svm_predict_values. Their corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.

<p>
We do not recommend the following, but if you would
like to get decision values for
TWO-class classification with labels +1 and -1
(note: +1 and -1, not things like 5 and 10)
in the easiest way, simply add
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the line
<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).
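<p>
For the multi-class case, here is a minimal Python sketch (a standalone
illustration mirroring the one-against-one voting done inside svm_predict)
of how the k*(k-1)/2 pairwise decision values from svm_predict_values turn
into a predicted label:
<pre>
def vote(labels, dec_values):
    # dec_values are ordered over pairs (0,1), (0,2), ..., (k-2,k-1).
    k = len(labels)
    votes = [0] * k
    p = 0
    for i in range(k):
        for j in range(i + 1, k):
            # A positive value is a vote for class i, else for class j.
            votes[i if dec_values[p] > 0 else j] += 1
            p += 1
    return labels[max(range(k), key=votes.__getitem__)]

print(vote([0, 1, 2], [0.5, 0.8, 1.3]))  # class 0 wins with 2 votes
</pre>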

