<p>
obj is the optimal objective value of the dual SVM problem.
rho is the bias term in the decision function
sgn(w^Tx - rho).
nSV and nBSV are the numbers of support vectors and bounded support
vectors (i.e., those with alpha_i = C). nu-SVM is a somewhat equivalent
form of C-SVM where C is replaced by nu; the printed nu simply shows the
corresponding parameter. More details are in the
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf">
libsvm document</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f402"><b>Q: Can you explain more about the model file?</b></a>
<br/>                                                                                

<p>
After the parameters, each line represents a support vector.
Support vectors are listed in the order of the "labels" shown earlier
(i.e., those from the first class in the "labels" list are
grouped first, and so on).
If k is the total number of classes,
in front of each support vector there are
k-1 coefficients
y*alpha, where the alphas are the dual solutions of the
following two-class problems:
<br>
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
<br>
and y=1 for the first j-1 coefficients and y=-1 for the remaining
k-j coefficients.

For example, if there are 4 classes, the file looks like:

<pre>
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
</pre>
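<p>
To make the column layout concrete, here is a standalone C sketch (not
part of libsvm; purely illustrative) that prints, for each class j among
k=4 classes, the binary subproblem behind each of its k-1 coefficients
and the sign y used in y*alpha:
<pre>
#include &lt;stdio.h&gt;

/* Illustrative helper, not part of libsvm: for each class j among k
   classes, print which binary subproblem each of the k-1 coefficients
   stored with a class-j support vector comes from, and the sign y
   used in y*alpha. */
int main(void) {
    int k = 4;
    for (int j = 1; j <= k; j++) {
        printf("class %d:", j);
        for (int i = 1; i <= k; i++) {
            if (i == j) continue;
            if (i < j)
                printf("  %dv%d (y=+1)", i, j);  /* one of the first j-1 */
            else
                printf("  %dv%d (y=-1)", j, i);  /* one of the last k-j  */
        }
        printf("\n");
    }
    return 0;
}
</pre>
Its output matches the column headers in the diagram above
(e.g., class 2 gets 1v2, 2v3, 2v4).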
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f403"><b>Q: Should I use float or double to store numbers in the cache ?</b></a>
<br/>                                                                                

<p>
We use float as the default because more numbers can then be stored
in the cache.
In general this is good enough, but for a few difficult
cases (e.g., a very large C) where the solutions are huge
numbers, the numerical precision of float alone may not be
enough.
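<p>
A standalone C illustration (not libsvm code) of the gap: float keeps
about 7 significant decimal digits, so a small increment to a large
value is silently lost, while double still represents it exactly:
<pre>
#include &lt;stdio.h&gt;

int main(void) {
    /* Illustration, not libsvm code: adding 1 to 2^24 is lost in
       single precision but remains exact in double precision. */
    float  f = 16777216.0f;   /* 2^24 */
    double d = 16777216.0;
    printf("float:  %d\n", f + 1.0f == f);  /* prints 1: increment lost */
    printf("double: %d\n", d + 1.0  == d);  /* prints 0: still exact    */
    return 0;
}
</pre>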
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel?</b></a>
<br/>                                                                                

<p>
In general we suggest trying the RBF kernel first.
A recent result by Keerthi and Lin
(<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz>
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using sigmoid may not be positive definite,
and in general its accuracy is not better than RBF's
(see the paper by Lin and Lin,
<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf>
download paper here</a>).
Polynomial kernels are ok, but if a high degree is used,
numerical difficulties tend to happen
(think of the d-th power of a value smaller than 1 going to 0
and of a value larger than 1 going to infinity).
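<p>
A standalone C illustration of this effect using the pow() function (the
base values stand in for the inner products inside a polynomial kernel):
<pre>
#include &lt;stdio.h&gt;
#include &lt;math.h&gt;

int main(void) {
    /* The d-th power of the kernel's base value collapses toward 0 or
       blows up as the degree d grows -- the difficulty noted above. */
    double bases[] = {0.5, 2.0};
    int degrees[] = {2, 8, 32, 128};
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 4; j++)
            printf("%.1f^%-3d = %g\n",
                   bases[i], degrees[j], pow(bases[i], degrees[j]));
    return 0;
}
</pre>
For degree 128 this already prints values around 3e-39 and 3e+38.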
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, at this point libsvm solves linear and nonlinear SVMs in the
same way.
Note that there are some possible
tricks to save training/testing time when the
linear kernel is used, but libsvm does not implement them.
Hence libsvm is <b>NOT</b> particularly efficient for linear SVMs,
especially when
a large C is used on
problems whose number of data points is much larger
than the number of attributes.
You can 
<ul>
<li>
Use a small C only. We have shown in the following paper
that once C is larger than a certain threshold,
the decision function stays the same.
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href=http://www.csie.ntu.edu.tw/~cjlin/bsvm>bsvm</a>,
which includes an efficient implementation for
linear SVMs.
More details can be found in the following study:
<p>
K.-M. Chung, W.-C. Kao, 
T. Sun, 
and
C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/linear.pdf">
Decomposition Methods for Linear Support Vector Machines.
</A> 
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>,
16(2004), 1689-1704. 
</ul>

<p> On the other hand, you may not really need to solve
linear SVMs; see the previous question about choosing
kernels for details.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
 <p>
This usually happens when the model overfits the data.
If the attributes of your data are in large ranges,
try scaling them. Then the region
of appropriate parameters may be larger.
Note that libsvm ships with a scaling program, svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes, you can do the following:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
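<p>
The point of the -s/-r pair is that the saved scaling parameters come
from the training data only and are then re-applied, unchanged, to the
test data. A standalone C sketch of that behavior (values illustrative;
svm-scale itself handles the file format and per-attribute ranges):
<pre>
#include &lt;stdio.h&gt;

int main(void) {
    /* Sketch of what the -s/-r pairing achieves: the test set is
       mapped with range parameters computed from the TRAINING data. */
    double train[] = {1.0, 5.0, 9.0};
    double test[]  = {3.0, 11.0};
    double lo = 1.0, hi = 9.0;  /* min and max of the training attribute */
    for (int i = 0; i < 3; i++)
        printf("train: %g\n", (train[i] - lo) / (hi - lo));
    for (int i = 0; i < 2; i++)  /* test values may land outside [0,1] */
        printf("test:  %g\n", (test[i] - lo) / (hi - lo));
    return 0;
}
</pre>
Scaled test values falling outside [0,1] are expected and harmless.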
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are 
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel exp(-g||x-y||^2),
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi),
</pre>
so ||x''-y''||^2 = 4||x'-y'||^2.
Hence, using (C,g) on the [0,1]-scaled data is the
same as (C,g/4) on the [-1,1]-scaled data.
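<p>
A quick numerical check of this equivalence (standalone C, assuming
the RBF form exp(-g*|u-v|^2); the attribute range and the value of g
are arbitrary):
<pre>
#include &lt;stdio.h&gt;
#include &lt;math.h&gt;

int main(void) {
    /* Distances double under [-1,1] scaling, so g must shrink by 4. */
    double x = 3.0, y = 7.0, mi = 0.0, Mi = 10.0, g = 0.5;
    double d01 = (x - y) / (Mi - mi);        /* x'  - y'  */
    double d11 = 2.0 * (x - y) / (Mi - mi);  /* x'' - y'' */
    printf("%g %g\n", exp(-g * d01 * d01),
                      exp(-(g / 4.0) * d11 * d11));  /* identical */
    return 0;
}
</pre>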

<p> Though the performance is the same, the computational
time may differ. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try the model selection tool grid.py in the python
directory to find good parameters. To see the importance of model
selection, please see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector 
classification 
</A>
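<p>
For example, a typical invocation (the -v option sets the number of
cross-validation folds; see the usage message at the top of grid.py
for the full option list):
<br> python grid.py -v 5 train_data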
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is the -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
the penalty for class "-1" is larger: -wi multiplies C for class i,
so class "-1" effectively gets C = 10*5 = 50 while class "+1" keeps
C = 10*1 = 10.
Note that this -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always in [0,1]. A nice property
of nu is that it is directly related to the ratios of
support vectors and training errors: nu is an upper bound on the
fraction of training errors and a lower bound on the fraction of
support vectors.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running without showing any output. What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot be split across lines.
In addition, you have to remove empty lines.
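<p>
Each instance line uses the sparse format
&lt;label&gt; &lt;index&gt;:&lt;value&gt; ... with increasing indices.
For example (the feature values here are illustrative):
<pre>
+1 1:0.708333 2:1 3:0.333333
-1 1:0.583333 2:-1 3:1
</pre>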
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm is guaranteed to converge if the kernel
matrix is positive semidefinite.
After version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
Therefore, endless running means you are
handling an ill-conditioned situation
(e.g., too large/small parameters), so numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
For large problems, please specify a large enough cache size (i.e.,
the -m option).
Slow convergence may happen for some difficult cases (e.g., when -c is large).
You can try a looser stopping tolerance with -e; an example follows below.
If that still doesn't work, you may want to contact us; we can show you some
tricks for improving the training time.
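<p>
For example, one might combine both options (the values shown are just
illustrative starting points):
<br> svm-train -m 500 -e 0.01 -c 100 data_file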

<p>
If you are using polynomial kernels, please check the question on the pow() function.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for multi-class cases. You
can obtain the values by calling the subroutine
svm_predict_values. The corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.
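<p>
A minimal sketch of that call sequence against the C API (assuming a
recent libsvm release and its svm.h; the model file name and the
one-feature test instance are illustrative, and error handling is
omitted):
<pre>
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include "svm.h"

int main(void) {
    struct svm_model *model = svm_load_model("train.model");
    int k = svm_get_nr_class(model);
    int *labels = (int *)malloc(k * sizeof(int));
    svm_get_labels(model, labels);  /* labels[i] = label of i-th class */

    /* Sparse test instance: one feature, terminated by index -1. */
    struct svm_node x[2] = { {1, 0.5}, {-1, 0.0} };

    /* For classification there is one decision value per binary
       subproblem, i.e. k*(k-1)/2 of them. */
    double *dec = (double *)malloc(k * (k - 1) / 2 * sizeof(double));
    svm_predict_values(model, x, dec);
    for (int i = 0; i < k * (k - 1) / 2; i++)
        printf("dec[%d] = %g\n", i, dec[i]);

    free(labels);
    free(dec);
    svm_free_and_destroy_model(&model);
    return 0;
}
</pre>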
