<a name="431"><b>Q: I don't know class labels of test data. What should I put in the first column of the test file?</b></a>
<br/>                                                                                
<p>Any value is fine. In this situation, what you will actually use is the output file of svm-predict, which gives the predicted class labels.
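<p>
For example, assuming a test instance with two nonzero features
(the feature values here are made up), a line with a dummy label of 0
could look like:
<pre>
		0 1:0.43 3:0.12
</pre>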


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f401"><b>Q: The output of training C-SVM is like the following. What do they mean?</b></a>
<br/>                                                                                
<br>optimization finished, #iter = 219
<br>nu = 0.431030
<br>obj = -100.877286, rho = 0.424632
<br>nSV = 132, nBSV = 107
<br>Total nSV = 132
<p>
obj is the optimal objective value of the dual SVM problem.
rho is the bias term in the decision function
sgn(w^Tx - rho).
nSV and nBSV are the numbers of support vectors and bounded support
vectors (i.e., support vectors with alpha_i = C). nu-SVM is an
equivalent form of C-SVM in which C is replaced by nu; the nu value
printed here simply shows the corresponding parameter. More details are in
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf">
libsvm document</a>.
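<p>
Concretely, for a two-class problem the decision value of a test
point x restates sgn(w^Tx - rho) in terms of the trained alpha's:
<pre>
		f(x) = sum_i y_i alpha_i K(x_i, x) - rho,   predicted label = sgn(f(x))
</pre>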
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f402"><b>Q: Can you explain more about the model file?</b></a>
<br/>                                                                                

<p>
After the parameters, each line represents a support vector.
Support vectors are listed in the order of the "labels" shown earlier
(i.e., those from the first class in the "labels" list are
grouped first, and so on).
If k is the total number of classes, then in front of each support
vector of class j there are k-1 coefficients y*alpha, where the
alpha's are the dual solutions of the following two-class problems:
<br>
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
<br>
and y=1 for the first j-1 coefficients, y=-1 for the remaining
k-j coefficients.

For example, if there are 4 classes, the file looks like:

<pre>
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
</pre>
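<p>
As an illustration, the following stand-alone C sketch (not part of
libsvm; error handling mostly omitted) reads a model file and prints
the k-1 coefficients stored in front of each support vector. It
assumes the header format described above: an "nr_class" line
appearing somewhere before the "SV" marker line.
<pre>
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;

int main(int argc, char **argv)
{
	FILE *fp;
	char line[65536];
	int k = 0;	/* number of classes (nr_class) */

	if (argc &lt; 2 || (fp = fopen(argv[1], "r")) == NULL) return 1;
	/* read the header; remember nr_class and stop at the "SV" marker */
	while (fgets(line, sizeof(line), fp)) {
		if (sscanf(line, "nr_class %d", &k) == 1) continue;
		if (strncmp(line, "SV", 2) == 0) break;
	}
	/* each remaining line: k-1 coefficients, then index:value pairs */
	while (fgets(line, sizeof(line), fp)) {
		char *p = strtok(line, " \t\n");
		int i;
		for (i = 0; i &lt; k - 1 && p != NULL; i++) {
			printf("%g ", atof(p));	/* y*alpha for the i-th pairing */
			p = strtok(NULL, " \t\n");
		}
		printf("\n");	/* the index:value feature part is skipped */
	}
	fclose(fp);
	return 0;
}
</pre>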
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f403"><b>Q: Should I use float or double to store numbers in the cache ?</b></a>
<br/>                                                                                

<p>
We use float as the default because you can store more numbers
in the cache.
In general this is good enough, but for a few difficult
cases (e.g., very large C) where the solution components are huge
numbers, the numerical precision of float may not be
enough.
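<p>
For reference, the cache element type is controlled by a single
typedef near the top of svm.cpp; if you really need more precision,
changing it to double (at the cost of halving the effective cache
capacity) looks like this:
<pre>
		/* default in svm.cpp: typedef float Qfloat; */
		typedef double Qfloat;
</pre>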
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel?</b></a>
<br/>                                                                                

<p>
In general we suggest trying the RBF kernel first.
A recent result by Keerthi and Lin
(<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz>
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using sigmoid may not be positive definite,
and in general its accuracy is not better than RBF
(see the paper by Lin and Lin:
<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf>
download paper here</a>).
Polynomial kernels are ok, but if a high degree is used,
numerical difficulties tend to happen:
the dth power of a value less than 1 goes to 0,
and of a value greater than 1 goes to infinity.
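<p>
As a rough numeric illustration of the dth power problem:
<pre>
		(0.5)^8  = 0.0039...,    (2)^8  = 256
		(0.5)^32 = 2.3e-10...,   (2)^32 = 4.3e+09...
</pre>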
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, libsvm solves linear and nonlinear SVMs in the
same way.
Some tricks that could save training/testing time when the
linear kernel is used are not implemented,
so libsvm is <b>NOT</b> particularly efficient for linear SVM,
especially when
C is large and
the number of data points is much larger
than the number of attributes.
You can either
<ul>
<li>
Use small C only. We have shown in the following paper
that after C is larger than a certain threshold,
the decision function is the same. 
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href=http://www.csie.ntu.edu.tw/~cjlin/liblinear>liblinear</a>,
which is designed for large-scale linear classification.
More details can be found in the following study:
<p>
C.-J. Lin, R. C. Weng, and S. S. Keerthi.
<a href=../papers/logistic.pdf>
Trust region Newton method for large-scale logistic
regression</a>.
Technical report, 2007. A short version appears
in <a href=http://oregonstate.edu/conferences/icml2007/>ICML 2007</a>.<br>
</ul>

<p> Please also see our <a href=../papers/guide/guide.pdf>SVM guide</a>
on the discussion of using RBF and linear
kernels.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
 <p>
This usually happens when the model overfits the data.
If the attributes of your data are in large ranges,
try scaling them; the region
of appropriate parameters may then be larger.
Note that libsvm includes a scaling program, svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes, you can do the following:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are 
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel,
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi),
</pre>
so ||x''-y''||^2 = 4||x'-y'||^2. Since the RBF kernel is
exp(-g||x-y||^2), using (C,g) on the [0,1]-scaled data is the
same as (C,g/4) on the [-1,1]-scaled data.
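<p>
A quick one-dimensional check (take Mi=1, mi=0, x=0.3, y=0.7;
the values are only illustrative):
<pre>
		x'-y'   = -0.4:  exp(-g*(0.4)^2)     = exp(-0.16g)
		x''-y'' = -0.8:  exp(-(g/4)*(0.8)^2) = exp(-0.16g)
</pre>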

<p> Though the performance is the same, the computational
time may be different. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try to use the model selection tool grid.py in the python
directory to find
good parameters. To see the importance of model selection,
please
see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector 
classification 
</A>
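<p>
A typical invocation (grid.py searches a default grid of C and
gamma with cross validation; its exact options may differ across
libsvm versions):
<pre>
		python grid.py train_file
</pre>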
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
the penalty for class "-1" is larger: its effective C becomes
10*5 = 50, while class "+1" keeps C = 10*1 = 10.
Note that this -w option is for C-SVC only.
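<p>
If you train through the C API instead of the command line, the same
weighting is expressed with the nr_weight, weight_label, and weight
fields of svm_parameter. A sketch (not complete setup code) matching
the command above:
<pre>
		param.nr_weight    = 2;
		param.weight_label = (int *) malloc(2*sizeof(int));
		param.weight       = (double *) malloc(2*sizeof(double));
		param.weight_label[0] =  1; param.weight[0] = 1;  /* C stays 10   */
		param.weight_label[1] = -1; param.weight[1] = 5;  /* C becomes 50 */
</pre>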
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu always lies in (0,1]. A nice property
of nu is that it is an upper bound on the fraction of
training errors and a lower bound on the fraction of
support vectors.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running (without showing any output). What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot span
multiple lines. In addition, you have to remove empty lines.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm is guaranteed to converge if the kernel
matrix is positive semidefinite.
After version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
If the program still keeps running, you are likely
handling an ill-conditioned situation
(e.g., too large/small parameters), so numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
For large problems, please specify enough cache size (i.e.,
-m).
Slow convergence may happen for some difficult cases (e.g. -c is large).
You can try to use a looser stopping tolerance with -e.
If that still doesn't work, you may want to train only a subset of the data.
You can use the program subset.py in the directory "tools" 
to obtain a random subset.
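<p>
For example (the cache size is in MB; the tolerance value here is
only illustrative, the default -e is 0.001):
<pre>
		svm-train -m 500 -e 0.01 train_file
</pre>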

<p>
If you are using polynomial kernels, please check the question on the pow() function.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for multi-class cases. You
can easily obtain the values by calling the subroutine
svm_predict_values. Their corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.
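<p>
A minimal sketch of that call sequence (assuming a trained model
and a test instance x given as an svm_node array terminated by
index = -1; the exact signature of svm_predict_values has varied
slightly across versions):
<pre>
		int k = svm_get_nr_class(model);
		double *dec_values = (double *) malloc(k*(k-1)/2*sizeof(double));
		int *labels = (int *) malloc(k*sizeof(int));
		svm_get_labels(model, labels);   /* labels[i] pairs with dec_values */
		svm_predict_values(model, x, dec_values);
</pre>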

<p>
We do not recommend the following. But if you would
like to get values for 
TWO-class classification with labels +1 and -1
(note: +1 and -1 but not things like 5 and 10)
in the easiest way, simply add 
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the call to svm_predict_values in the file svm.cpp.
<p align="right">
<a href="#_TOP">[Go Top]</a>
