faq.html: libsvm FAQ (Support Vector Machine, SVM)
<p>
obj is the optimal objective value of the dual SVM problem.
rho is the bias term in the decision function
sgn(w^Tx - rho).
nSV and nBSV are the numbers of support vectors and bounded support
vectors (i.e., those with alpha_i = C). nu-SVM is a somewhat equivalent
form of C-SVM where C is replaced by nu; nu simply shows the
corresponding parameter. More details are in the
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf">
libsvm document</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f402"><b>Q: Can you explain more about the model file?</b></a>
<br/>                                                                                

<p>
After the parameters, each line represents a support vector.
Support vectors are listed in the order of the "labels" shown earlier
(i.e., those from the first class in the "labels" list are
grouped first, and so on).
If k is the total number of classes,
in front of each support vector there are
k-1 coefficients
y*alpha, where the alpha are the dual solutions of the
following two-class problems:
<br>
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
<br>
with y=1 for the first j-1 coefficients and y=-1 for the remaining
k-j coefficients.

For example, if there are 4 classes, the file looks like:

<pre>
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
</pre>
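<p>
To see how these k-1 coefficients are used, here is a sketch following
the logic of svm_predict_values in the libsvm sources; start, count,
kvalue, and rho_ij are illustrative helper names, not the exact
internal ones:
<pre>
/* Decision value of the binary problem "class i vs class j" (i < j).
     start[c] : index of the first SV of class c in the model
     count[c] : number of SVs of class c
     kvalue[s]: K(x, SV_s) for the test point x
     rho_ij   : rho of this binary problem
   sv_coef has k-1 rows; entry [r][s] is one of the k-1
   coefficients written in front of SV s in the diagram above. */
double sum = 0;
double *coef1 = model->sv_coef[j-1];  /* coefficients of class-i SVs vs class j */
double *coef2 = model->sv_coef[i];    /* coefficients of class-j SVs vs class i */
for (int s = 0; s < count[i]; s++)
    sum += coef1[start[i] + s] * kvalue[start[i] + s];
for (int s = 0; s < count[j]; s++)
    sum += coef2[start[j] + s] * kvalue[start[j] + s];
sum -= rho_ij;  /* the pair votes for class i if sum > 0, else class j */
</pre>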
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f403"><b>Q: Should I use float or double to store numbers in the cache ?</b></a>
<br/>                                                                                

<p>
We use float as the default because you can store more numbers
in the cache.
In general this is good enough, but for a few difficult
cases (e.g., a very large C) where the solutions are huge
numbers, the numerical precision of float may not be enough.
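<p>
In the libsvm sources the cached kernel entries use a single typedef, so
switching to double precision is a one-line change (a sketch; the name
Qfloat follows svm.cpp, but verify it in your version):
<pre>
/* svm.cpp: element type of the kernel cache.
   Changing float to double roughly halves how many
   entries fit in the same cache size. */
typedef float Qfloat;
</pre>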
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel?</b></a>
<br/>                                                                                

<p>
In general we suggest that you try the RBF kernel first.
A recent result by Keerthi and Lin
(<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz>
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using sigmoid may not be positive definite,
and in general its accuracy is not better than RBF
(see the paper by Lin and Lin,
<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf>
download paper here</a>).
Polynomial kernels are OK, but if a high degree is used,
numerical difficulties tend to happen:
the d-th power of a value less than 1 goes to 0,
while that of a value greater than 1 goes to infinity.
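<p>
A minimal standalone illustration of this effect (plain C, not libsvm
code):
<pre>
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* the d-th power of values below/above 1 vanishes/explodes */
    printf("0.5^50 = %g\n", pow(0.5, 50));  /* about 8.9e-16 */
    printf("2.0^50 = %g\n", pow(2.0, 50));  /* about 1.1e+15 */
    return 0;
}
</pre>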
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, at this point libsvm solves linear and nonlinear SVMs in
the same way.
Note that there are some possible
tricks to save training/testing time if the
linear kernel is used, so libsvm is <b>NOT</b> particularly
efficient for linear SVMs, especially when using a large C on
problems where the number of data points is much larger
than the number of attributes.
You can
<ul>
<li>
Use small C only. We have shown in the following paper
that after C is larger than a certain threshold,
the decision function is the same. 
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href=http://www.csie.ntu.edu.tw/~cjlin/bsvm>bsvm</a>,
which includes an efficient implementation for
linear SVMs.
More details can be found in the following study:
<p>
K.-M. Chung, W.-C. Kao, 
T. Sun, 
and
C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/linear.pdf">
Decomposition Methods for Linear Support Vector Machines.
</A> 
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>,
16(2004), 1689-1704. 
</ul>

<p> On the other hand, you may not really need to solve
linear SVMs at all. See the previous question about choosing
kernels for details.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
 <p>
This usually happens when the model overfits the data.
If the attributes of your data are in large ranges,
try scaling them. Then the region
of appropriate parameters may be larger.
Note that libsvm includes a scaling program,
svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes, you can do the following:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
For each attribute, the differences entering the RBF kernel
exp(-g||x-y||^2) are then
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi),
</pre>
so x''-y''=2(x'-y') and ||x''-y''||^2=4||x'-y'||^2.
Hence, using (C,g) on the [0,1]-scaled data is the
same as using (C,g/4) on the [-1,1]-scaled data.

<p> Though the performance is the same, the computational
time may be different. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
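<p>
A quick numerical check of this equivalence (plain C; g and d2 are
arbitrary illustrative values, with d2 playing the role of ||x'-y'||^2
on the [0,1]-scaled data):
<pre>
#include <stdio.h>
#include <math.h>

int main(void)
{
    double g = 0.5, d2 = 0.37;
    /* [-1,1] scaling doubles each difference, so the squared
       distance becomes 4*d2; gamma must shrink to g/4 to match */
    printf("[0,1]  data, gamma g  : %.12f\n", exp(-g * d2));
    printf("[-1,1] data, gamma g/4: %.12f\n", exp(-(g / 4) * (4 * d2)));
    return 0;
}
</pre>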
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try the model selection tool grid.py in the python
directory to find
good parameters. To see the importance of model selection,
please
see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector 
classification 
</A>
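<p>
For example (the invocation is illustrative; grid.py searches over C and
gamma, and may require the svm-train and gnuplot paths to be set inside
the script):
<br> python grid.py train_data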
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
the penalty for class "-1" is five times larger than that for
class "1" (its effective C is 10*5=50).
Note that this -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always within [0,1]. A nice property
of nu is that it is an upper bound on the fraction of
training errors and a lower bound on the fraction of
support vectors.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running without showing any output. What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot be split
across lines. In addition, you have to remove empty lines.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm is guaranteed to converge if the kernel
matrix is positive semidefinite,
and after version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
If the program keeps running, you are probably
handling an ill-conditioned situation
(e.g., too large/small parameters), so numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
For large problems, please specify a large enough cache size (i.e.,
the -m option).
Slow convergence may happen for some difficult cases (e.g., when -c is large).
You can try a looser stopping tolerance with -e.
If that still doesn't work, you may want to contact us; we can show you some
tricks for improving the training time.
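<p>
For example (values are illustrative; -m sets the cache size in MB and
-e the stopping tolerance):
<br> svm-train -m 500 -e 0.01 data_file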

<p>
If you are using polynomial kernels, please check the question on the pow() function.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for multi-class cases, so there is
one decision value per pair of classes. You
can easily obtain these values by calling the subroutine
svm_predict_values; the corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.
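<p>
A sketch of the call sequence (the model file name and test vector are
illustrative, and exact signatures vary slightly across libsvm versions,
so please check svm.h):
<pre>
#include <stdio.h>
#include <stdlib.h>
#include "svm.h"

int main(void)
{
    struct svm_model *model = svm_load_model("data.model");
    if (!model) { fprintf(stderr, "cannot load model\n"); return 1; }

    /* sparse test vector in libsvm format, terminated by index -1 */
    struct svm_node x[3] = { {1, 0.5}, {2, -0.2}, {-1, 0.0} };

    int k = svm_get_nr_class(model);
    int *labels = malloc(k * sizeof(int));
    double *dec = malloc(k*(k-1)/2 * sizeof(double));

    svm_get_labels(model, labels);      /* label order of the pairs below */
    svm_predict_values(model, x, dec);  /* one value per pair of labels */

    int p = 0;
    for (int i = 0; i < k; i++)
        for (int j = i + 1; j < k; j++)
            printf("%d vs %d: %g\n", labels[i], labels[j], dec[p++]);

    free(labels);
    free(dec);
    svm_free_and_destroy_model(&model);  /* svm_destroy_model in old versions */
    return 0;
}
</pre>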
