
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f401"><b>Q: The output of training C-SVM is like the following. What do they mean?</b></a>
<br/>                                                                                
<br>optimization finished, #iter = 219
<br>nu = 0.431030
<br>obj = -100.877286, rho = 0.424632
<br>nSV = 132, nBSV = 107
<br>Total nSV = 132
<p>
obj is the optimal objective value of the dual SVM problem.
rho is the bias term in the decision function
sgn(w^Tx - rho).
nSV and nBSV are the numbers of support vectors and bounded support
vectors (i.e., support vectors with alpha_i = C). nu-SVM is an
equivalent form of C-SVM where the parameter C is replaced by nu;
the nu value here simply shows the corresponding parameter. More details are in the
<a href="http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf">
libsvm document</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f402"><b>Q: Can you explain more about the model file?</b></a>
<br/>                                                                                

<p>
After the parameters, each line represents a support vector.
Support vectors are listed in the order of "labels" listed earlier.
(i.e., those from the first class in the "labels" list are
grouped first, and so on.) 
If k is the total number of classes,
each support vector is preceded by
k-1 coefficients
y*alpha, where the alpha values are the dual solutions of the
following two-class problems:
<br>
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
<br>
and y=1 for the first j-1 coefficients, y=-1 for the remaining
k-j coefficients.

For example, if there are 4 classes, the file looks like:

<pre>
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
</pre>
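<p>
To make the layout concrete, below is a minimal parsing sketch in Python
(not part of libsvm; the helper name is made up). It assumes the standard
model-file layout: a header containing an "nr_class" entry and ending with
an "SV" line, followed by one support vector per line.
<pre>
# Hypothetical helper: split each support-vector line of a libsvm model
# file into its k-1 coefficients (the y*alpha values) and its features.
def read_sv_lines(path):
    with open(path) as f:
        k = 2
        for line in f:                       # scan the header
            line = line.strip()
            if line.startswith("nr_class"):
                k = int(line.split()[1])     # total number of classes
            if line == "SV":                 # header ends here
                break
        for line in f:                       # one support vector per line
            tokens = line.split()
            coefs = [float(t) for t in tokens[:k - 1]]          # y*alpha
            feats = {int(i): float(v) for i, v in
                     (t.split(":") for t in tokens[k - 1:])}    # index:value
            yield coefs, feats
</pre>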
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f403"><b>Q: Should I use float or double to store numbers in the cache ?</b></a>
<br/>                                                                                

<p>
We use float as the default because more kernel values can then be stored
in the cache.
In general this is good enough, but for a few difficult
cases (e.g., very large C) where the solutions are huge
numbers, the numerical precision of float may not be
enough.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel?</b></a>
<br/>                                                                                

<p>
In general we suggest trying the RBF kernel first.
A result by Keerthi and Lin
(<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz>
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using sigmoid may not be positive definite,
and in general its accuracy is not better than RBF
(see the paper by Lin and Lin,
<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf>
download paper here</a>).
Polynomial kernels are OK, but if a high degree is used,
numerical difficulties tend to happen:
the d-th power of a value less than 1 goes to 0,
and that of a value greater than 1 goes to infinity.
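<p>
A quick numerical illustration of this (plain Python, nothing
libsvm-specific):
<pre>
# d-th powers quickly vanish or blow up for high-degree polynomial kernels:
for d in (2, 10, 50):
    print(d, 0.5 ** d, 2.0 ** d)
# 0.5**50 is about 8.9e-16 while 2.0**50 is about 1.1e+15; kernel values
# of such different magnitudes cause numerical difficulties.
</pre>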
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, libsvm solves linear and nonlinear SVMs in the
same way.
Some tricks could save training/testing time if the
linear kernel is used, but libsvm does not apply them,
so it is <b>NOT</b> particularly efficient for linear SVM,
especially when
C is large and
the number of data points is much larger
than the number of attributes.
You can either
<ul>
<li>
Use small C only. We have shown in the following paper
that after C is larger than a certain threshold,
the decision function is the same. 
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href=http://www.csie.ntu.edu.tw/~cjlin/liblinear>liblinear</a>,
which is designed for large-scale linear classification.
</ul>

<p> Please also see our <a href=../papers/guide/guide.pdf>SVM guide</a>
for a discussion of using the RBF and linear
kernels.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
 <p>
This usually happens when the data are overfitted.
If the attributes of your data are in large ranges,
try scaling them. Then the region
of appropriate parameters may be larger.
Note that libsvm includes a scaling program,
svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes, you can do the following:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are 
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel exp(-g|x-y|^2),
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi),
</pre>
so |x''-y''|^2 = 4|x'-y'|^2. Hence, using (C,g) on the [0,1]-scaled data is the
same as using (C,g/4) on the [-1,1]-scaled data.

<p> Though the performance is the same, the computational
time may differ. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
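<p>
A quick check of this identity (plain Python; the numbers are arbitrary):
<pre>
import math

g = 0.8
x, y = 0.9, 0.3              # two values already scaled to [0,1]
x2, y2 = 2*x - 1, 2*y - 1    # the same values scaled to [-1,1]

k01 = math.exp(-g * (x - y) ** 2)        # RBF with gamma = g on [0,1] data
k11 = math.exp(-g/4 * (x2 - y2) ** 2)    # RBF with gamma = g/4 on [-1,1] data
print(k01, k11)              # identical kernel values
</pre>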
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try the model selection tool grid.py in the python
directory to find
good parameters. To see the importance of model selection,
please
see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector 
classification 
</A>
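<p>
For example, with the heart_scale data set shipped with libsvm, an
invocation like the following searches a grid of (C,gamma) pairs in
log2 scale (the ranges here are only illustrative; see the options of
grid.py in your copy):
<pre>
	python grid.py -log2c -5,15,2 -log2g 3,-15,-2 heart_scale
</pre>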
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
the penalty for class "-1" is five times larger (its effective C
becomes 5*10 = 50).
Note that the -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always in [0,1]. A nice property
of nu is that it is related to the ratio of
support vectors and the ratio of training
errors.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running (without showing any output). What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot span multiple lines.
In addition, you have to remove empty lines.
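<p>
For reference, each line must follow the sparse libsvm format
&lt;label&gt; &lt;index1&gt;:&lt;value1&gt; &lt;index2&gt;:&lt;value2&gt; ...
(the numbers below are made up):
<pre>
+1 1:0.708 2:1 3:1 5:-0.32
-1 1:0.583 2:-1 4:0.333
</pre>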
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm is guaranteed to converge if the kernel
matrix is positive semidefinite.
After version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
If the program keeps running, you are most likely
handling an ill-conditioned situation
(e.g., too large/small parameters) in which numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
For large problems, please specify enough cache size (i.e.,
-m).
Slow convergence may happen for some difficult cases (e.g. -c is large).
You can try to use a looser stopping tolerance with -e.
If that still doesn't work, you may want to train only a subset of the data.
You can use the program subset.py in the directory "tools" 
to obtain a random subset.

<p> When using large -e, you may want to check if -h 0 (no shrinking) or -h 1 (shrinking) is faster.
See the next question below.

<p>
If you are using polynomial kernels, please check the question on the pow() function.
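<p>
For example, the following (with illustrative numbers) enlarges the kernel
cache to 500MB and loosens the stopping tolerance from the default 0.001:
<pre>
	svm-train -m 500 -e 0.01 data_file
</pre>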
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4141"><b>Q: Does shrinking always help?</b></a>
<br/>                                                                                
<p>
If the number of iterations is high, then shrinking
often helps.
However, if the number of iterations is small
(e.g., you specify a large -e), then
probably using -h 0 (no shrinking) is better.
See the 
<a href=../papers/libsvm.pdf>implementation document</a> for details.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for multi-class cases. You
can obtain the values by calling the subroutine
svm_predict_values. Their corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.

<p>
We do not recommend the following, but if you would
like to get decision values for
TWO-class classification with labels +1 and -1
(note: +1 and -1, not labels like 5 and 10)
in the easiest way, simply add
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the line
<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).
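<p>
If you use the Python interface shipped with recent libsvm versions
instead of patching svm.cpp, svm_predict already returns the decision
values. A minimal sketch (the file names are placeholders):
<pre>
from svmutil import *        # libsvm's bundled Python interface

y, x = svm_read_problem('train_file')        # placeholder file name
m = svm_train(y, x, '-s 0 -t 2')             # C-SVC with the RBF kernel
yt, xt = svm_read_problem('test_file')       # placeholder file name
labels, acc, dec_values = svm_predict(yt, xt, m)
# For two-class problems, each entry of dec_values is a one-element list
# holding the decision value of that instance.
</pre>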


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4151"><b>Q: How do I get the distance between a point and the hyperplane?</b></a>
<br/>                                                                                
<p>
The distance is |decision_value| / |w|. 
We have |w|^2 = w^Tw = alpha^T Q alpha = 2*(dual_obj + sum alpha_i). 
Thus in svm.cpp please find the place 
where we calculate the dual objective value
(i.e., the subroutine Solve())
and add a statement to print w^Tw.
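<p>
For a two-class C-SVC, a sketch of the arithmetic (the obj value is the
one printed at the end of training; the alpha values below are made-up
illustrative numbers, recoverable from the model's coefficients since
alpha_i = |y_i*alpha_i|):
<pre>
import math

obj       = -100.877286           # dual objective printed by svm-train
sum_alpha = 107 * 1.0 + 25 * 0.5  # hypothetical sum of the alpha_i
w_norm    = math.sqrt(2 * (obj + sum_alpha))  # |w| from the identity above
dec_value = 0.73                  # hypothetical decision value of a point
print(abs(dec_value) / w_norm)    # distance from point to hyperplane
</pre>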

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f416"><b>Q: On 32-bit machines, if I use a large cache (i.e. large -m) on a linux machine, why sometimes I get "segmentation fault ?"</b></a>
<br/>                                                                                
<p>

On 32-bit machines, the maximum addressable
memory is 4GB. The Linux kernel uses a 3:1
split, which means user space gets 3GB and
kernel space gets 1GB. Although there are
3GB of user space, the maximum dynamically allocatable
memory is 2GB. So, if you specify -m near 2G,
the memory will be exhausted, and svm-train
will fail when it asks for more memory.
For more details, please read 
<a href=http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BA164F6.BAFA4FB%40daimi.au.dk>
this article</a>.
<p>
The easiest solution is to switch to a
64-bit machine.
Otherwise, there are two ways to solve this. If your
machine supports Intel's PAE (Physical Address
Extension), you can turn on the option HIGHMEM64G
in the Linux kernel, which uses a 4G:4G split for
kernel and user space. If you don't, you can
try a software `tub' which can eliminate the 2G
boundary for dynamically allocated memory.
