<a name="431"><b>Q: I don't know class labels of test data. What should I put in the first column of the test file?</b></a>
<br/>                                                                                
<p>Any value is ok. In this situation, what you will use is the output file of svm-predict, which gives predicted class labels.
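For example, a test file with unknown labels might look like the following (the leading 0's are dummy labels; the feature values are made up for illustration):
<pre>
0 1:0.43 3:-1.2 5:0.87
0 2:0.11 3:0.56
</pre>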


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f401"><b>Q: The output of training C-SVM is like the following. What do they mean?</b></a>
<br/>                                                                                
<br>optimization finished, #iter = 219
<br>nu = 0.431030
<br>obj = -100.877286, rho = 0.424632
<br>nSV = 132, nBSV = 107
<br>Total nSV = 132
<p>
obj is the optimal objective value of the dual SVM problem.
rho is the bias term in the decision function
sgn(w^Tx - rho).
nSV and nBSV are the numbers of support vectors and bounded support
vectors (i.e., alpha_i = C). nu-SVM is a somewhat equivalent
form of C-SVM where C is replaced by nu; nu simply shows the
corresponding parameter. More details are in the
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf">
libsvm document</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f402"><b>Q: Can you explain more about the model file?</b></a>
<br/>                                                                                

<p>
After the parameters, each line represents a support vector.
Support vectors are listed in the order of the "labels" shown earlier
(i.e., those from the first class in the "labels" list are
grouped first, and so on).
If k is the total number of classes,
in front of each support vector of class j there are
k-1 coefficients
y*alpha, where the alpha are the dual solutions of the
following two-class problems:
<br>
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
<br>
and y = -1 in the first j-1 coefficients and y = +1 in the remaining
k-j coefficients, since class j is the "second" class in the first
j-1 problems and the "first" class in the remaining ones.

For example, if there are 4 classes, the file looks like:

<pre>
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
</pre>
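Concretely, each support-vector line in a 4-class model file begins with the k-1 = 3 coefficients, followed by the vector itself in sparse index:value format. The numbers below are made up for illustration:
<pre>
0.8312 -1.0000 0.2279 1:0.166 2:-0.5 4:1
</pre>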
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f403"><b>Q: Should I use float or double to store numbers in the cache ?</b></a>
<br/>                                                                                

<p>
We use float as the default because you can store more numbers
in the cache.
In general this is good enough, but for a few difficult
cases (e.g., very large C) where the solutions are huge
numbers, the numerical precision of float may not be
enough.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel?</b></a>
<br/>                                                                                

<p>
In general we suggest trying the RBF kernel first.
A recent result by Keerthi and Lin
(<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz>
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using the sigmoid may not be positive definite,
and in general its accuracy is not better than RBF
(see the paper by Lin and Lin,
<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf>
download paper here</a>).
Polynomial kernels are ok, but if a high degree is used,
numerical difficulties tend to happen:
the dth power of a value less than 1 goes to 0,
while that of a value greater than 1 goes to infinity.
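To make this concrete, a small standalone C check (not part of libsvm) shows how the dth power behaves as the degree grows:
<pre>
#include <stdio.h>
#include <math.h>

/* The dth power of a kernel value below 1 vanishes, while one
   above 1 blows up as the degree d grows. */
int main(void)
{
	int degrees[] = {2, 8, 32, 128};
	int i;
	for (i = 0; i < 4; i++)
		printf("d = %3d: 0.5^d = %-12g 1.5^d = %g\n",
		       degrees[i], pow(0.5, degrees[i]), pow(1.5, degrees[i]));
	return 0;
}
</pre>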
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, libsvm solves linear and nonlinear SVMs in the
same way.
Some tricks could save training/testing time if the
linear kernel were exploited, but libsvm does not apply them,
so it is <b>NOT</b> particularly efficient for linear SVM,
especially when
C is large and
the number of data points is much larger
than the number of attributes.
You can either
<ul>
<li>
Use small C only. We have shown in the following paper
that after C is larger than a certain threshold,
the decision function is the same. 
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href=http://www.csie.ntu.edu.tw/~cjlin/liblinear>liblinear</a>,
which is designed for large-scale linear classification.
More details can be found in the following study:
<p>
C.-J. Lin, R. C. Weng, and S. S. Keerthi.
<a href=../papers/logistic.pdf>
Trust region Newton method for large-scale logistic
regression</a>.
Technical report, 2007. A short version appears
in <a href=http://oregonstate.edu/conferences/icml2007/>ICML 2007</a>.<br>
</ul>

<p> Please also see our <a href=../papers/guide/guide.pdf>SVM guide</a>
for a discussion of using RBF and linear
kernels.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
 <p>
This usually happens when the model overfits the data.
If the attributes of your data are in large ranges,
try scaling them; the region
of appropriate parameters may then be larger.
Note that libsvm includes a scaling program, svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes. Test data must be scaled with the same parameters as the training data, so do the following:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
<br> Here -s saves the ranges found in the training data to a file, and -r restores them so that the test data are scaled consistently.
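The saved parameter file is plain text. It looks roughly like the following (an "x" marker for the feature section, the target range, then one line per feature index with the minimum and maximum seen in the training data; values are made up):
<pre>
x
-1 1
1 0 187
2 -39.4 61.2
</pre>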
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are 
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel,
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi).
</pre>
Since x''-y'' = 2(x'-y'), the squared distance used by the RBF
kernel is 4 times larger after [-1,1] scaling. Hence, using (C,g) on the
[0,1]-scaled data is the same as using (C,g/4) on the [-1,1]-scaled data.
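A short standalone C check (with made-up values) confirms the equivalence:
<pre>
#include <stdio.h>
#include <math.h>

/* Check: since x''-y'' = 2(x'-y'), dividing gamma by 4 on the
   [-1,1]-scaled data gives the same RBF kernel value. */
int main(void)
{
	double d = 0.37;   /* some difference x'-y' on [0,1]-scaled data */
	double g = 2.0;    /* gamma used with [0,1] scaling */
	printf("K on [0,1]-data : %.12f\n", exp(-g * d * d));
	printf("K on [-1,1]-data: %.12f\n", exp(-(g / 4) * (2 * d) * (2 * d)));
	return 0;
}
</pre>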

<p> Though the performance is the same, the computational
time may be different. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try the model selection tool grid.py in the python
directory to find
good parameters. To see the importance of model selection,
please
see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector
classification
</A>
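For example, a typical invocation (the option ranges here are illustrative and may differ across versions) is:
<pre>
python grid.py -log2c -5,15,2 -log2g -15,3,2 -v 5 train_data
</pre>
which searches over C = 2^-5, 2^-3, ..., 2^15 and g = 2^-15, ..., 2^3 with 5-fold cross validation.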
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
the penalty for class "-1" is larger: its effective C becomes
10*5 = 50, while class "+1" keeps 10*1 = 10.
Note that this -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always in [0,1]. A nice property
of nu is that it bounds the fractions of support vectors and
training errors: nu is an upper bound on the fraction of
training errors and a lower bound on the fraction of
support vectors.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running (without showing any output). What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot span
multiple lines. In addition, you have to remove empty lines.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm is guaranteed to converge if the kernel
matrix is positive semidefinite.
After version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
Endless output of dots therefore means you are
handling an ill-conditioned situation
(e.g., too large/small parameters), so numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
For large problems, please specify a large enough cache size (i.e.,
-m).
Slow convergence may happen for some difficult cases (e.g., when -c is large).
You can try a looser stopping tolerance with -e.
If that still doesn't work, you may want to train only a subset of the data.
You can use the program subset.py in the directory "tools"
to obtain a random subset.
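For example, the following (illustrative values only) enlarges the kernel cache to 1000 MB and loosens the stopping tolerance:
<pre>
svm-train -m 1000 -e 0.01 data_file
</pre>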

<p>
If you are using polynomial kernels, please check the question on the pow() function.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for multi-class cases. You
can easily obtain the values by calling the subroutine
svm_predict_values. Their corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.
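A minimal C sketch of calling these subroutines (the model file name and feature values are made up; error handling and version-dependent model cleanup are omitted):
<pre>
#include <stdio.h>
#include <stdlib.h>
#include "svm.h"

/* Print the pairwise decision values for one sparse instance x. */
int main(void)
{
	struct svm_model *model = svm_load_model("train_data.model");
	int k = svm_get_nr_class(model);
	int *labels = (int *) malloc(k * sizeof(int));
	int i, j, p = 0;

	svm_get_labels(model, labels);

	/* x = (1:0.5, 3:-1.2); a node with index -1 ends the vector */
	struct svm_node x[] = { {1, 0.5}, {3, -1.2}, {-1, 0} };

	/* one decision value per binary problem: k*(k-1)/2 in total,
	   ordered label[0] vs label[1], label[0] vs label[2], ... */
	double *dec = (double *) malloc(k*(k-1)/2 * sizeof(double));
	svm_predict_values(model, x, dec);

	for (i = 0; i < k; i++)
		for (j = i + 1; j < k; j++, p++)
			printf("%d vs %d: %f\n", labels[i], labels[j], dec[p]);

	free(labels); free(dec);
	return 0;
}
</pre>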

<p>
We do not recommend the following, but if you would
like to get decision values for
TWO-class classification with labels +1 and -1
(note: +1 and -1, not things like 5 and 10)
in the easiest way, simply add
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the line
