
<pre>
svm-predict test.feature train.model test.predicts
paste -d '#' test.predicts test.comments | sed 's/#/ #/' > test.results
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f401"><b>Q: The output of training C-SVM is like the following. What do they mean?</b></a>
<br/>                                                                                
<br>optimization finished, #iter = 219
<br>nu = 0.431030
<br>obj = -100.877286, rho = 0.424632
<br>nSV = 132, nBSV = 107
<br>Total nSV = 132
<p>
obj is the optimal objective value of the dual SVM problem.
rho is the bias term in the decision function
sgn(w^Tx - rho).
nSV and nBSV are the numbers of support vectors and of bounded support
vectors (i.e., support vectors with alpha_i = C). nu-SVM is an
equivalent form of C-SVM in which C is replaced by nu; the nu value
shown is simply the corresponding parameter. More details are in the
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf">
libsvm document</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f402"><b>Q: Can you explain more about the model file?</b></a>
<br/>                                                                                

<p>
After the parameters, each line represents a support vector.
Support vectors are listed in the order of the "labels" shown earlier
(i.e., those from the first class in the "labels" list are
grouped first, and so on).
If k is the total number of classes, each support vector is preceded by
k-1 coefficients
y*alpha, where the alpha's are the dual solutions of the
following two-class problems (for a vector in class j):
<br>
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
<br>
with y=1 for the first j-1 coefficients and y=-1 for the remaining
k-j coefficients.

For example, if there are 4 classes, the file looks like:

<pre>
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
</pre>
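<p>
To make the layout concrete, here is a minimal Python sketch (not part
of libsvm; the file name is only an example) that parses the text model
file and groups support vectors by class, assuming the usual header
followed by an "SV" line:
<pre>
# Sketch: parse a libsvm model file; each SV line starts with k-1
# y*alpha coefficients, followed by index:value feature pairs.
def read_model(path):
    header, sv_lines, in_sv = {}, [], False
    with open(path) as f:
        for line in f:
            if in_sv:
                sv_lines.append(line.split())
            elif line.strip() == "SV":
                in_sv = True
            else:
                key, *vals = line.split()
                header[key] = vals
    return header, sv_lines

header, svs = read_model("train.model")     # example file name
k = int(header["nr_class"][0])
start = 0
for cls, n in zip(header["label"], map(int, header["nr_sv"])):
    block = svs[start:start + n]
    start += n
    print("class", cls, ":", n, "SVs; coefficients of first SV:",
          block[0][:k - 1] if block else [])
</pre>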
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f403"><b>Q: Should I use float or double to store numbers in the cache ?</b></a>
<br/>                                                                                

<p>
We use float as the default because you can store more numbers
in the cache.
In general this is good enough, but for a few difficult
cases (e.g., very large C) where the solutions are huge
numbers, the numerical precision of float may not be
enough.
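<p>
The gap is easy to see in isolation (a standalone Python sketch,
unrelated to libsvm's internals):
<pre>
# Sketch: single precision keeps ~7 significant digits, double ~16,
# so very large intermediate values lose accuracy in float.
import numpy as np

x = 123456789.123456789
print(np.float32(x))   # ~1.2345679e8 -- trailing digits already lost
print(np.float64(x))   # 123456789.12345679
</pre>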
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel?</b></a>
<br/>                                                                                

<p>
In general we suggest trying the RBF kernel first.
A recent result by Keerthi and Lin
(<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz>
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using the sigmoid may not be positive definite,
and in general its accuracy is not better than RBF's
(see the paper by Lin and Lin,
<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf>
download paper here</a>).
Polynomial kernels are ok, but if a high degree is used,
numerical difficulties tend to happen
(think of the dth power of a value below 1 going to 0
and of a value above 1 going to infinity).
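<p>
The effect is easy to reproduce (a standalone Python sketch; the degree
50 is just an illustrative value):
<pre>
# Sketch: with a high degree d, the base (gamma*u'*v + coef0) is raised
# to the dth power; bases below 1 collapse toward 0 and bases above 1
# blow up, so kernel entries span many orders of magnitude.
d = 50
for base in (0.5, 0.99, 1.01, 2.0):
    print(base, "->", base ** d)
</pre>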
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, libsvm solves linear and nonlinear SVMs in the
same way.
Some tricks could save training/testing time when the
linear kernel is used, but libsvm does not apply them,
so libsvm is <b>NOT</b> particularly efficient for linear SVM,
especially when
C is large and
the number of data points is much larger
than the number of attributes.
You can either
<ul>
<li>
Use small C only. We have shown in the following paper
that after C is larger than a certain threshold,
the decision function is the same (the sketch after this
list illustrates this empirically).
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href=http://www.csie.ntu.edu.tw/~cjlin/liblinear>liblinear</a>,
which is designed for large-scale linear classification.
</ul>
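<p>
A minimal sketch of the first option's threshold effect, using the
svmutil Python interface shipped in libsvm's python directory (the toy
data here is made up):
<pre>
# Sketch: on separable toy data, once C exceeds some threshold the
# linear-kernel decision function, and hence the predictions,
# stop changing.
from svmutil import *
import random

random.seed(0)
y = [1] * 50 + [-1] * 50
x = ([{1: random.gauss(0, .5), 2: random.gauss(0, .5)} for _ in range(50)] +
     [{1: random.gauss(3, .5), 2: random.gauss(3, .5)} for _ in range(50)])

prev = None
for c in (0.01, 1, 100, 10000):
    m = svm_train(y, x, '-t 0 -c %g -q' % c)
    labels, _, _ = svm_predict(y, x, m)
    if prev is not None:
        changed = sum(a != b for a, b in zip(labels, prev))
        print('C=%g: predictions changed on %d points' % (c, changed))
    prev = labels
</pre>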

<p> Please also see our <a href=../papers/guide/guide.pdf>SVM guide</a>
on the discussion of using RBF and linear
kernels.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
 <p>
This usually happens when the model overfits the data.
If the attributes of your data are in large ranges,
try to scale them; the region
of appropriate parameters may then be larger.
Note that libsvm includes a scaling program,
svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes, you can do the following:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are 
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel,
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi).
</pre>
Since x''-y'' = 2(x'-y'), we have ||x''-y''||^2 = 4 ||x'-y'||^2.
Hence, using (C,g) on the [0,1]-scaled data is the
same as using (C,g/4) on the [-1,1]-scaled data.

<p> Though the performance is the same, the computational
time may be different. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
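<p>
A quick numerical check of this equivalence (a standalone Python
sketch with made-up data):
<pre>
# Sketch: RBF values match between [0,1] scaling with gamma g and
# [-1,1] scaling with gamma g/4, since the differences double.
import numpy as np

rng = np.random.default_rng(0)
x, z = rng.random(5), rng.random(5)   # pretend already [0,1]-scaled
x2, z2 = 2 * x - 1, 2 * z - 1         # the same data on [-1,1]

g = 0.5
print(np.exp(-g * np.sum((x - z) ** 2)))
print(np.exp(-(g / 4) * np.sum((x2 - z2) ** 2)))   # identical
</pre>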
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try the model selection tool grid.py in the python
directory to find
good parameters. To see the importance of model selection,
please
see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector 
classification 
</A>
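<p>
grid.py does cross validation over a grid of (C, gamma) values. If you
prefer to script it yourself, here is a minimal sketch using the
svmutil Python interface (the data file name is hypothetical); with
'-v', svm_train returns the cross-validation accuracy directly:
<pre>
# Sketch: naive grid search over (C, gamma) with 5-fold CV.
from svmutil import *

y, x = svm_read_problem('scaled_train_data')   # hypothetical file
best = (None, None, -1)
for log2c in range(-5, 16, 4):
    for log2g in range(-15, 4, 4):
        c, g = 2.0 ** log2c, 2.0 ** log2g
        acc = svm_train(y, x, '-c %g -g %g -v 5 -q' % (c, g))
        if acc > best[2]:
            best = (c, g, acc)
print('best C=%g gamma=%g CV accuracy=%.2f%%' % best)
</pre>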
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
the penalty for class "-1" is larger: its effective penalty
becomes 10*5 = 50, while class "+1" keeps C = 10.
Note that this -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always in [0,1]. A nice property
of nu is that it is an upper bound on the fraction of
training errors and a lower bound on the fraction of
support vectors.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running (without showing any output). What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot span
multiple lines. In addition, you have to remove empty lines.
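<p>
A quick sanity check for these formatting problems (a standalone
Python sketch; the file name is only an example):
<pre>
# Sketch: flag empty lines and malformed "label index:value ..." lines
# in a libsvm-format data file.
with open('train_data') as f:          # example file name
    for lineno, line in enumerate(f, 1):
        tokens = line.split()
        if not tokens:
            print('line %d is empty' % lineno)
        elif any(':' not in t for t in tokens[1:]):
            print('line %d has a malformed feature' % lineno)
</pre>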
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm is guaranteed to converge if the kernel
matrix is positive semidefinite.
After version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
If it keeps running, you are likely
handling an ill-conditioned situation
(e.g., too large/small parameters), so numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
For large problems, please specify a large enough cache size (i.e.,
-m).
Slow convergence may happen for some difficult cases (e.g., when -c is large).
You can try a looser stopping tolerance with -e.
If that still doesn't work, you may want to train on only a subset of the data.
You can use the program subset.py in the directory "tools"
to obtain a random subset.
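<p>
The idea behind subset.py is plain random sampling, as in this
standalone sketch (file names are examples; subset.py additionally
offers stratified selection):
<pre>
# Sketch: keep a random 10% of a training file.
import random

random.seed(0)
with open('train_data') as fin, open('train_subset', 'w') as fout:
    lines = fin.readlines()
    fout.writelines(random.sample(lines, len(lines) // 10))
</pre>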

<p>
If you are using polynomial kernels, please check the question on the pow() function.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for the multi-class case. You
can easily obtain the values by calling the subroutine
svm_predict_values. Their corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.

<p>
We do not recommend the following. But if you would
like to get values for 
TWO-class classification with labels +1 and -1
(note: +1 and -1 but not things like 5 and 10)
in the easiest way, simply add 
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the line
<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).
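<p>
If you use the Python interface (svmutil), no source change is needed:
svm_predict already returns the decision values. A minimal sketch,
reusing the example file names from above:
<pre>
# Sketch: p_vals holds one decision value per instance for a two-class
# model, ordered with respect to m.get_labels().
from svmutil import *

y, x = svm_read_problem('test.feature')
m = svm_load_model('train.model')
p_labels, p_acc, p_vals = svm_predict(y, x, m)
print(m.get_labels(), p_vals[0])
</pre>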


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4151"><b>Q: How do I get the distance between a point and the hyperplane?</b></a>
<br/>                                                                                
<p>
The distance is |decision_value| / |w|. 
We have |w|^2 = w^Tw = alpha^T Q alpha = 2*(dual_obj + sum alpha_i). 
Thus, in svm.cpp, find the place
where we calculate the dual objective value
(i.e., the subroutine Solve())
and add a statement to print w^Tw.
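<p>
Putting the pieces together (a sketch; the sum of alphas is whatever
your modified Solve() prints, so the value below is made up):
<pre>
# Sketch: turn a decision value into a geometric distance using
# |w|^2 = 2*(dual_obj + sum alpha_i).
import math

obj = -100.877286     # dual objective from the training output above
sum_alpha = 150.0     # hypothetical: printed from svm.cpp's Solve()
w_norm = math.sqrt(2 * (obj + sum_alpha))

dec_value = 1.7       # hypothetical decision value
print('distance =', abs(dec_value) / w_norm)
</pre>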

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f416"><b>Q: On 32-bit machines, if I use a large cache (i.e. large -m) on a linux machine, why sometimes I get "segmentation fault ?"</b></a>
<br/>                                                                                
<p>

On 32-bit machines, the maximum addressable
memory is 4GB. The Linux kernel uses a 3:1
split, which means user space gets 3GB and
kernel space 1GB. Although there is
3GB of user space, the maximum dynamically allocatable
memory is 2GB. So, if you specify -m near 2G,
memory will be exhausted, and svm-train
will fail when it asks for more memory.
For more details, please read 
<a href=http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BA164F6.BAFA4FB%40daimi.au.dk>
this article</a>.
<p>
The easiest solution is to switch to a
64-bit machine.
Otherwise, there are two ways to solve this. If your
machine supports Intel's PAE (Physical Address
Extension), you can turn on the option HIGHMEM64G
in the Linux kernel, which uses a 4G:4G split for
kernel and user space. If you don't, you can
try a software `tub' that eliminates the 2G
boundary for dynamically allocated memory. The `tub'
is available at
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.


<!--

This may happen only  when the cache is large, but each cached row is
not large enough. <b>Note:</b> This problem is specific to 
