<p>
In general we suggest you try the RBF kernel first.
A recent result by Keerthi and Lin
(<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz>
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using sigmoid may not be positive definite,
and in general its accuracy is not better than RBF
(see the paper by Lin and Lin,
<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf>
download paper here</a>).
Polynomial kernels are ok, but if a high degree is used,
numerical difficulties tend to happen
(think of the d-th power of a value less than 1 going to 0
and of a value greater than 1 going to infinity).
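<p>
As a rough illustration (the numbers are chosen only for this example), with
degree d = 20 the d-th powers already span many orders of magnitude:
<pre>
                0.5^20 = 9.5e-07   (underflows toward 0)
                2.0^20 = 1.05e+06  (blows up toward infinity)
</pre>
so kernel values for points of different scales can differ enormously, which
makes the optimization numerically ill-conditioned.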
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM?</b></a>
<br/>                                                                                

<p>

No, at this point libsvm solves linear and nonlinear SVMs in the
same way.
Note that there are some possible tricks to save training/testing
time if the linear kernel is used, but libsvm does not exploit them.
Hence libsvm is <b>NOT</b> particularly efficient for linear SVM,
especially when a large C is used on
problems whose number of data points is much larger
than the number of attributes.
You can
<ul>
<li>
Use a small C only. We have shown in the following paper
that once C is larger than a certain threshold,
the decision function is the same (see the quick check after this list).
<p>
<a href="http://guppy.mpe.nus.edu.sg/~mpessk/">S. S. Keerthi</a>
and
<B>C.-J. Lin</B>.
<A HREF="papers/limit.ps.gz">
Asymptotic behaviors of support vector machines with 
Gaussian kernel
</A>
.
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>, 15(2003), 1667-1689.


<li>
Check <a href=http://www.csie.ntu.edu.tw/~cjlin/bsvm>bsvm</a>,
which includes an efficient implementation for
linear SVMs.
More details can be found in the following study:
<p>
K.-M. Chung, W.-C. Kao, 
T. Sun, 
and
C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/linear.pdf">
Decomposition Methods for Linear Support Vector Machines.
</A> 
<I><A HREF="http://mitpress.mit.edu/journal-home.tcl?issn=08997667">Neural Computation</A></I>,
16(2004), 1689-1704. 
</ul>
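<p>
As a quick check of this threshold behavior (an illustration only; the file
names are placeholders), you can train with two different C values and compare
the predictions:
<pre>
    svm-train -t 0 -c 1 train_data model_c1
    svm-train -t 0 -c 100 train_data model_c100
    svm-predict test_data model_c1 out_c1
    svm-predict test_data model_c100 out_c100
</pre>
If the two output files are identical, C has already passed the threshold and
increasing it further only slows training down.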

<p> On the other hand, you may not really need to solve
linear SVMs at all. See the previous question about choosing
kernels for details.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do?</b></a>
<br/>                                                                                
 <p>
This usually happens when the model overfits the data.
If the attributes of your data are in large ranges,
try to scale them; the region of appropriate parameters
may then be larger.
Note that libsvm includes a scaling program, svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way?</b></a>
<br/>                                                                                
<p>
Yes, you can do the following:
<br> svm-scale -s scaling_parameters train_data > scaled_train_data
<br> svm-scale -r scaling_parameters test_data > scaled_test_data
<br>
The -s option saves the scaling factors computed from the training data,
and -r restores them so that the test data are scaled in exactly the same way.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1]?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are 
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel,
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi).
</pre>
Hence, using (C,g) on the [0,1]-scaled data is the
same as (C,g/4) on the [-1,1]-scaled data.
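<p>
To see this equivalence explicitly (a short verification of the statement
above), write out the RBF kernel exp(-g |u-v|^2) for both scalings:
<pre>
                x''-y'' = 2(x'-y')
                |x''-y''|^2 = 4|x'-y'|^2
                exp(-g |x'-y'|^2) = exp(-(g/4) |x''-y''|^2)
</pre>
so a model trained with (C,g) on the [0,1]-scaled data produces the same
kernel values, and hence the same decision function, as one trained with
(C,g/4) on the [-1,1]-scaled data.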

<p> Though the performance is the same, the computational
time may be different. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it?</b></a>
<br/>                                                                                
<p>
Try to use the model selection tool grid.py in the python
directory to find good parameters.
To see the importance of model selection, please see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector classification
</A>
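<p>
For example (the data file name is a placeholder; depending on the version you
may first need to set the svm-train and gnuplot paths inside grid.py), running
<pre>
    python grid.py train_data
</pre>
performs a cross-validation search over a grid of (C, gamma) values and
reports the pair with the best accuracy, which you can then pass to
svm-train via -c and -g.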
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<p>
 svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
<p>
the penalty for class "-1" is larger: its effective C becomes 5*10 = 50,
while class "+1" keeps C = 10.
Note that this -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always in [0,1]. A nice property
of nu is that it directly bounds the fraction of
training errors (from above) and the fraction of
support vectors (from below).
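<p>
For example (data_file is a placeholder), training a nu-SVC with nu = 0.3,
<pre>
    svm-train -s 1 -n 0.3 data_file
</pre>
produces a model in which at most about 30% of the training points are errors
and at least about 30% of them become support vectors.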
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running without showing any output. What should I do?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot be split
across lines. In addition, you have to remove empty lines.
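<p>
For reference, each line follows the sparse libsvm format
&lt;label&gt; &lt;index&gt;:&lt;value&gt; ..., with indices in increasing order,
for example (the values here are made up for illustration):
<pre>
    +1 1:0.708 3:1 5:-0.32
    -1 2:0.5 4:1
</pre>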
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do?</b></a>
<br/>                                                                                
<p>
In theory libsvm guarantees to converge if the kernel
matrix is positive semidefinite. 
After version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
Therefore, this means you are
handling ill-conditioned situations
(e.g. too large/small parameters) so numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do?</b></a>
<br/>                                                                                
<p>
This may happen for some difficult cases (e.g. -c is large).
You can try to use a looser stopping tolerance with -e.
If that still doesn't work, you may want to contact us. We can show you some
tricks on improving the training time.
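<p>
For example (data_file is a placeholder), the default tolerance is 0.001, so
<pre>
    svm-train -c 1000 -e 0.01 data_file
</pre>
stops the optimization earlier; the resulting model is usually almost as
accurate but can be obtained much faster for hard cases.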
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s)?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for multi-class cases, so
you can easily obtain the values by calling the subroutine
svm_predict_values. Their corresponding labels
can be obtained from svm_get_labels.
Details are in the README of the libsvm package.
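<p>
A minimal sketch of that use (assuming a classification model already saved to
disk; the model file name and the test point x are placeholders, and in newer
libsvm versions svm_predict_values also returns the predicted label):
<pre>
    #include &lt;stdio.h&gt;
    #include &lt;stdlib.h&gt;
    #include "svm.h"

    int main()
    {
        struct svm_model *model = svm_load_model("data_file.model");
        int nr_class = svm_get_nr_class(model);
        int *labels = malloc(nr_class * sizeof(int));
        /* one value per binary problem: nr_class*(nr_class-1)/2 of them */
        double *dec_values = malloc(nr_class*(nr_class-1)/2 * sizeof(double));
        /* sparse test point; index -1 terminates the vector */
        struct svm_node x[] = { {1, 0.5}, {3, -0.2}, {-1, 0} };

        svm_get_labels(model, labels);
        svm_predict_values(model, x, dec_values);
        printf("first binary problem: labels %d vs %d, value %g\n",
               labels[0], labels[1], dec_values[0]);
        return 0;
    }
</pre>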

<p>
We do not recommend the following. But if you would
like to get values for
TWO-class classification with labels +1 and -1
(note: +1 and -1, not labels like 5 and 10)
in the easiest way, simply add
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the line
<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f416"><b>Q: For some problem sets if I use a large cache (i.e. large -m) on a linux machine, why sometimes I get "segmentation fault ?"</b></a>
<br/>                                                                                
<p>

On 32-bit machines, the maximum addressable
memory is 4GB. The Linux kernel uses a 3:1
split, which means user space is 3G and
kernel space is 1G. Although there is
3G of user space, the maximum memory available for
dynamic allocation is 2G. So, if you specify -m near 2G,
the memory will be exhausted, and svm-train
will fail when it asks for more memory.
For more details, please read
<a href=http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BA164F6.BAFA4FB%40daimi.au.dk>
this article</a>.
<p>
There are two ways to solve this. If your
machine supports Intel's PAE (Physical Address
Extension), you can turn on the option HIGHMEM64G
in the Linux kernel, which uses a 4G:4G split for
kernel and user space. If you don't, you can
try a software `tub' which can eliminate the 2G
boundary for dynamically allocated memory. The `tub'
is available at


<!--

This may happen only  when the cache is large, but each cached row is
not large enough. <b>Note:</b> This problem is specific to 
gnu C library which is used in linux.
The solution is as follows:

<p>
In our program we have malloc() which uses two methods 
to allocate memory from kernel. One is
sbrk() and another is mmap(). sbrk is faster, but mmap 
has a larger address
space. So malloc uses mmap only if the wanted memory size is larger
than some threshold (default 128k).
In the case where each row is not large enough (#elements < 128k/sizeof(float)) but we need a large cache ,
the address space for sbrk can be exhausted. The solution is to
lower the threshold to force malloc to use mmap
and increase the maximum number of chunks to allocate
with mmap.

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by 
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable screen output of svm-train and svm-predict ?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but find out that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?</b></a>
<br/>                                                                                
<p>
The reason why we have two functions is as follows.
For the RBF kernel exp(-g |xi - xj|^2), if we calculate
xi - xj first and then the squared norm, there are about 3n operations.
Thus we consider exp(-g (|xi|^2 - 2 dot(xi,xj) + |xj|^2)),
and by calculating all |xi|^2 once at the beginning,
the number of operations per kernel value is reduced to about 2n.
This is for training. For prediction we cannot
do this, so a regular subroutine using the 3n operations is
needed.

The easiest way to use your own kernel is
to put the same code in both
subroutines, replacing the existing kernels.
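<p>
A dense-vector sketch of the two evaluation styles (libsvm itself works on
sparse svm_node arrays, so this only illustrates the operation counts, not the
actual code):
<pre>
    #include &lt;math.h&gt;

    /* Prediction-style evaluation (as in k_function): compute xi - xj
       first, then the squared norm -- roughly 3n operations. */
    double rbf_direct(const double *xi, const double *xj, int n, double gamma)
    {
        double sum = 0;
        for (int k = 0; k &lt; n; k++) {
            double d = xi[k] - xj[k];
            sum += d * d;
        }
        return exp(-gamma * sum);
    }

    /* Training-style evaluation (as in kernel_function): with the squared
       norms |xi|^2 and |xj|^2 cached beforehand, each kernel value needs
       only the dot product -- roughly 2n operations. */
    double rbf_cached(const double *xi, const double *xj, int n,
                      double xi_sq, double xj_sq, double gamma)
    {
        double dot = 0;
        for (int k = 0; k &lt; n; k++)
            dot += xi[k] * xj[k];
        return exp(-gamma * (xi_sq - 2 * dot + xj_sq));
    }
</pre>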
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method ?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.ps.gz">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</I>, 13(2002), 415-425.
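<p>
For k classes, one-against-one trains k(k-1)/2 binary classifiers (e.g. 6 for
k = 4) and predicts by majority vote. A minimal sketch of that voting step
(not libsvm's actual code; decide(i, j) stands for the binary classifier
trained on classes i and j):
<pre>
    /* Majority vote over all pairwise classifiers: a minimal sketch,
       not libsvm's implementation.  decide(i, j) is assumed to return
       i or j, the winner of the binary problem for classes i and j. */
    int predict_one_against_one(int nr_class, int (*decide)(int, int))
    {
        int votes[64] = {0};          /* assumes nr_class &lt;= 64 for brevity */
        for (int i = 0; i &lt; nr_class; i++)
            for (int j = i + 1; j &lt; nr_class; j++)
                votes[decide(i, j)]++;
        int best = 0;
        for (int i = 1; i &lt; nr_class; i++)
            if (votes[i] > votes[best]) best = i;
        return best;
    }
</pre>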

<p>

洋洋av久久久久久久一区| 日韩一区二区在线观看视频播放| 韩国中文字幕2020精品| 麻豆91精品91久久久的内涵| 免播放器亚洲一区| 91免费看片在线观看| av一区二区不卡| 色综合色综合色综合色综合色综合| 大尺度一区二区| jlzzjlzz国产精品久久| 在线精品视频免费播放| 678五月天丁香亚洲综合网| 成人精品视频.| 日韩一区二区在线观看| 99re热这里只有精品视频| 国产成人精品亚洲777人妖 | 亚洲国产精品国自产拍av| 国产精品视频yy9299一区| 偷拍日韩校园综合在线| 国产乱人伦偷精品视频不卡 | 色综合天天综合网天天看片| 91精品国产综合久久小美女| 亚洲精品写真福利| 国产精品影视在线观看| 欧美一区二区日韩| 麻豆精品久久精品色综合| 欧美一三区三区四区免费在线看| 亚洲日本乱码在线观看| 色偷偷88欧美精品久久久| 国产丝袜在线精品| 国产精品91xxx| 久久综合999| 99re热这里只有精品视频| 一区二区三区电影在线播| 在线观看中文字幕不卡| 偷窥国产亚洲免费视频| 欧美一级理论性理论a| 激情五月婷婷综合网| 久久久蜜臀国产一区二区| 国产成人在线免费观看| 亚洲免费观看高清| 日韩欧美亚洲一区二区| 国产美女视频91| 亚洲国产成人av网| 久久久久久久久久久久电影| 欧美在线观看视频一区二区 | 日韩欧美一区二区在线视频| 欧美一级黄色片| 欧美亚洲国产一区二区三区va| 国产不卡视频在线播放| 国产乱色国产精品免费视频| 激情伊人五月天久久综合| 国产一区二区电影| 国产亚洲女人久久久久毛片| 国产日产欧美一区二区视频| 欧美国产综合一区二区| 一区二区在线观看视频| 婷婷亚洲久悠悠色悠在线播放| 亚洲超丰满肉感bbw| 久久机这里只有精品| 成人激情文学综合网| 欧美精品丝袜久久久中文字幕| 欧美精品一区二| 国产精品欧美极品| 另类小说色综合网站| 色综合中文综合网| 一区二区三区在线观看视频| 久久久精品天堂| 中文字幕在线不卡一区二区三区 | 精品免费视频.| 国产三级一区二区三区| 亚洲天堂精品在线观看| 一区二区三区精密机械公司| 久久精品国产一区二区三 | 欧美日本乱大交xxxxx| 久久欧美中文字幕| 亚洲综合色区另类av| 久久综合综合久久综合| 色婷婷精品大在线视频| 久久久www成人免费毛片麻豆 | 国产亚洲精品aa午夜观看| 亚洲女人****多毛耸耸8| 日本aⅴ精品一区二区三区| 成人综合在线观看| 精品少妇一区二区三区视频免付费| 中文字幕制服丝袜成人av| 久久91精品久久久久久秒播| 99精品国产91久久久久久| 精品伦理精品一区| 午夜精品久久久久久久| 色噜噜狠狠色综合中国| 日本一区二区视频在线观看| 久久精品国产免费看久久精品| 一本大道av伊人久久综合| 综合久久久久综合| 99久久精品情趣| 欧美电影免费观看高清完整版在线 | 国产精品麻豆视频| 成人网男人的天堂| 国产精品美女www爽爽爽| 成人av免费观看| 自拍av一区二区三区| 一本久久精品一区二区| 亚洲欧美电影院| 欧美日韩精品一二三区| 日韩av中文字幕一区二区三区| 91精品免费观看| 国产69精品久久久久毛片| 国产区在线观看成人精品 | 色女孩综合影院| 日韩电影在线观看一区| 国产欧美一区二区在线| 色综合网色综合| 免费视频最近日韩| 国产精品视频一二| 91精品国产欧美一区二区| 国产一区二区女| 亚洲图片欧美色图| 26uuu欧美日本| 欧美久久一二三四区| 国产精品99久久久久| 午夜欧美电影在线观看| 久久综合五月天婷婷伊人| 色综合网站在线| 成人久久久精品乱码一区二区三区| 欧美激情一区二区| 在线中文字幕不卡| 9人人澡人人爽人人精品| 美女性感视频久久| 爽好多水快深点欧美视频| 国产精品盗摄一区二区三区| 精品国产91洋老外米糕| 7777精品久久久大香线蕉| 欧美日韩一区视频| 色哟哟国产精品| 99久久99久久精品免费观看| 国产最新精品精品你懂的| 久久精品国产亚洲a| 国产精品一区久久久久| 国产精品综合二区| 国产成人福利片| 国产大片一区二区| 成人av在线电影| 在线精品亚洲一区二区不卡| 欧美写真视频网站| 在线播放中文字幕一区| 日韩欧美在线不卡| 国产农村妇女毛片精品久久麻豆 | 亚洲天堂免费看| 亚洲福利一区二区| 国产精品一级在线| 91国内精品野花午夜精品| 欧美一区二区三区四区视频| 精品剧情在线观看| 亚洲精品伦理在线| 久久国产剧场电影| 在线观看一区二区视频| 精品欧美一区二区久久| 日韩和欧美一区二区三区| 久久国产生活片100| 97久久精品人人爽人人爽蜜臀| 在线播放中文一区| 亚洲免费观看高清| av福利精品导航| 久久久99免费| 激情综合色丁香一区二区| 一本一道久久a久久精品综合蜜臀| 精品成人在线观看| 亚洲国产美国国产综合一区二区 | 国产精品久久久久久妇女6080| 午夜精品久久久久久久蜜桃app| 99久久99久久精品免费看蜜桃| 欧美精品丝袜久久久中文字幕| 夜夜精品视频一区二区| 成人sese在线| 国产精品剧情在线亚洲| av午夜一区麻豆| 国产日韩欧美一区二区三区综合 | 精品国产99国产精品| 亚洲国产日韩a在线播放性色| 国内精品久久久久影院一蜜桃| 欧美精品一区男女天堂| 国产在线视视频有精品| 亚洲国产岛国毛片在线| www.色精品| 日韩国产欧美一区二区三区| 欧美一区永久视频免费观看| 麻豆成人久久精品二区三区红| 日韩亚洲欧美在线| 国产福利91精品一区二区三区| 精品国产成人系列| 91麻豆福利精品推荐| 免费欧美在线视频| 樱桃视频在线观看一区| 欧美肥大bbwbbw高潮| 成人高清视频在线| 美女诱惑一区二区| 一区二区三区精密机械公司| 精品国产自在久精品国产| 91免费观看在线|