corresponding parameter. More details are in
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf">
libsvm document</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f402"><b>Q: Can you explain more about the model file?</b></a>
<br/>                                                                                

<p>
After the parameters, each line represents a support vector.
Support vectors are listed in the order of the "labels" shown earlier
(i.e., those from the first class in the "labels" list are
grouped first, and so on).
If k is the total number of classes,
in front of each support vector there are
k-1 coefficients
y*alpha, where the alphas are the dual solutions of the
following two-class problems:
<br>
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
<br>
and y=1 for the first j-1 coefficients and y=-1 for the remaining
k-j coefficients.

For example, if there are 4 classes, the file looks like:

<pre>
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
</pre>
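<p>
In recent libsvm releases these coefficients are exposed through the
sv_coef field of struct svm_model (a k-1 by l array, where l is the
total number of support vectors). Below is a minimal sketch of walking
that array, mirroring the file layout above; "model.dat" is an
illustrative file name, and note that older versions keep svm_model
private inside svm.cpp:
<pre>
#include &lt;stdio.h&gt;
#include "svm.h"

int main(void)
{
	struct svm_model *model = svm_load_model("model.dat");
	if (model == NULL) return 1;

	int k = model->nr_class;              /* number of classes           */
	for (int i = 0; i < model->l; i++) {  /* one row per support vector  */
		for (int j = 0; j < k - 1; j++)
			printf("%g ", model->sv_coef[j][i]);  /* the y*alpha values  */
		printf("\n");
	}
	svm_free_and_destroy_model(&model);   /* svm_destroy_model in old versions */
	return 0;
}
</pre>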
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f403"><b>Q: Should I use float or double to store numbers in the cache ?</b></a>
<br/>                                                                                

<p>
We use float as the default because it lets you store twice as many
numbers in the cache.
In general this is good enough, but for a few difficult
cases (e.g., a very large C) where the solutions are huge
numbers, the numerical precision of float alone may not be
enough.
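<p>
Concretely, the cache element type is controlled by a single typedef
near the top of svm.cpp, so the switch is one line (at the cost of
caching half as many entries):
<pre>
/* svm.cpp: the kernel cache stores values of this type */
typedef float Qfloat;    /* default */
/* change to: typedef double Qfloat;  if float precision is insufficient */
</pre>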
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel?</b></a>
<br/>                                                                                

<p>
In general we suggest trying the RBF kernel first.
A recent result by Keerthi and Lin
(<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz">
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using the sigmoid may not be positive definite,
and in general its accuracy is not better than RBF's
(see the paper by Lin and Lin,
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf">
download paper here</a>).
Polynomial kernels are OK, but if a high degree is used,
numerical difficulties tend to happen
(think of the dth power of a number: values below 1 go to 0
and values above 1 go to infinity).
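<p>
For reference, the four built-in kernels selected by the -t option of
svm-train are:
<pre>
-t 0  linear:      u'*v
-t 1  polynomial:  (gamma*u'*v + coef0)^degree
-t 2  RBF:         exp(-gamma*|u-v|^2)
-t 3  sigmoid:     tanh(gamma*u'*v + coef0)
</pre>
The degree-d polynomial kernel is where the dth-power issue above
arises.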
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, at this point libsvm solves linear and nonlinear SVMs in the
same way.
Note that there are some possible
tricks to save training/testing time when the
linear kernel is used (see the sketch after the list below).
Hence libsvm is <b>NOT</b> particularly efficient for linear SVMs,
especially when a large C is used on
problems whose number of data points is much larger
than the number of attributes.
You can:
<ul>
<li>
Use a small C only. We have shown in the following paper
that once C is larger than a certain threshold,
the decision function stays the same.
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href=http://www.csie.ntu.edu.tw/~cjlin/bsvm>bsvm</a>,
which includes an efficient implementation for
linear SVMs.
More details can be found in the following study:
<p>
K.-M. Chung, W.-C. Kao, 
T. Sun, 
and
C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/linear.pdf">
Decomposition Methods for Linear Support Vector Machines.
</A> 
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>,
16(2004), 1689-1704. 
</ul>
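<p>
To illustrate the kind of testing-time trick mentioned above: for a
TWO-class model trained with the linear kernel, all support vectors
can be collapsed into a single weight vector w, so each prediction is
one dot product instead of one kernel evaluation per SV. A minimal
sketch, not part of the libsvm API; "build_w" and "max_index" (the
largest feature index) are illustrative names:
<pre>
#include &lt;stdlib.h&gt;
#include "svm.h"

/* w = sum_i sv_coef[i] * SV_i; prediction is then sign(w.x - rho[0]) */
double *build_w(const struct svm_model *model, int max_index)
{
	double *w = calloc(max_index + 1, sizeof(double));
	for (int i = 0; i < model->l; i++)         /* over all SVs        */
		for (const struct svm_node *p = model->SV[i];
		     p->index != -1; p++)              /* sparse feature list */
			w[p->index] += model->sv_coef[0][i] * p->value;
	return w;
}
</pre>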

<p> On the other hand, you may not really need to solve
linear SVMs at all. See the previous question about choosing
kernels for details.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
 <p>
This usually happens when the data are overfitted.
If the attributes of your data are in large ranges,
try scaling them; the region
of appropriate parameters may then be larger.
Note that libsvm includes a scaling program,
svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes, you can do the following:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
<br> The -s option saves the scaling parameters to a file and -r
restores them, so the testing data are scaled by exactly the same
factors as the training data.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are 
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel,
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi),
</pre>
so ||x''-y''||^2 = 4||x'-y'||^2. Hence, using (C,g) on the
[0,1]-scaled data is the same as using (C,g/4) on the [-1,1]-scaled
data.

<p> Though the performance is the same, the computational
time may be different. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
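<p>
A quick numerical check of this gamma rescaling (illustrative values;
any g and any per-attribute difference d behave the same way):
<pre>
#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

/* on [0,1]-scaled data the difference is d; on [-1,1]-scaled data it
   is 2d, and exp(-g*d*d) equals exp(-(g/4)*(2d)*(2d)) */
int main(void)
{
	double g = 2.0, d = 0.3;
	printf("%g %g\n", exp(-g*d*d), exp(-(g/4)*(2*d)*(2*d)));
	return 0;
}
</pre>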
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try the model selection tool grid.py in the python
directory to find
good parameters. To see the importance of model selection,
please
see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector 
classification 
</A>
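<p>
A typical grid.py invocation looks like the following; the search
ranges and the 5-fold cross validation below are only illustrative,
as grid.py's defaults already cover a reasonable grid:
<pre>
python grid.py -log2c -5,15,2 -log2g -15,3,2 -v 5 train_data
</pre>
It reports the (C, gamma) pair with the best cross-validation
accuracy.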
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
the penalty for class "-1" becomes 5*10 = 50, larger than the
penalty 10 for class "+1" (the -wi weight multiplies C for class i).
Note that this -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always in [0,1]. A nice property
of nu is that it directly bounds the ratios of
support vectors and training errors: nu is an upper bound on the
fraction of training errors and a lower bound on the fraction of
support vectors.
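<p>
For example, the following two commands train comparable classifiers;
-s 1 selects nu-SVC and -n sets nu (the parameter values are
illustrative):
<pre>
svm-train -s 0 -c 10  data_file
svm-train -s 1 -n 0.5 data_file
</pre>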
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running (without showing any output). What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot be split
across lines. In addition, you have to remove empty lines.
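<p>
Each line must follow the sparse format
&lt;label&gt; &lt;index1&gt;:&lt;value1&gt; &lt;index2&gt;:&lt;value2&gt; ...,
with indices in ascending order, for example (values are
illustrative):
<pre>
+1 1:0.708333 2:1 3:1 4:-0.320755
-1 1:0.583333 2:-1 3:0.333333
</pre>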
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm is guaranteed to converge if the kernel
matrix is positive semidefinite.
After version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
If the program keeps printing dots, you are likely
handling an ill-conditioned situation
(e.g., too large/small parameters), so numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
For large problems, please specify enough cache size (i.e.,
-m).
Slow convergence may happen for some difficult cases (e.g. -c is large).
You can try to use a looser stopping tolerance with -e.
If that still doesn't work, you may want to train only a subset of the data.
You can use the program subset.py in the directory "tools" 
to obtain a random subset.
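<p>
For example, to allot a 500MB kernel cache and loosen the stopping
tolerance from the default 0.001 to 0.01 (both values here are
illustrative):
<pre>
svm-train -m 500 -e 0.01 train_data
</pre>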

<p>
If you are using polynomial kernels, please check the question on the pow() function.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for multi-class cases. You
can easily obtain the values by calling the subroutine
svm_predict_values. Their corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.

<p>
We do not recommend the following, but if you would
like to get values for
TWO-class classification with labels +1 and -1
(note: +1 and -1, not labels like 5 and 10)
in the easiest way, simply add
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the line
<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).
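<p>
Alternatively, you can call the subroutine directly from your own
code. A minimal sketch for the multi-class case: dec_values must hold
one entry per binary subproblem, i.e. k*(k-1)/2 of them ("model" and
"x" are assumed to be set up elsewhere, and in older libsvm versions
svm_predict_values returns void rather than the predicted label):
<pre>
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include "svm.h"

void print_decision_values(const struct svm_model *model,
                           const struct svm_node *x)
{
	int k = svm_get_nr_class(model);
	double *dec_values = malloc(sizeof(double) * k*(k-1)/2);
	svm_predict_values(model, x, dec_values);   /* one value per pair */
	for (int i = 0; i < k*(k-1)/2; i++)
		printf("%f\n", dec_values[i]);
	free(dec_values);
}
</pre>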

