In general this is good enough, but for a few difficult
cases (e.g. a very large C) where the solutions are huge
numbers, the numerical precision of
a float alone may not be enough.
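<p>
As a minimal illustration (not libsvm code), the following shows a
small update vanishing in single precision while surviving in double
precision:
<pre>
#include &lt;stdio.h&gt;

int main()
{
	/* a float keeps only about 7 significant decimal digits */
	float  f = 1e8f;
	double d = 1e8;
	printf("float : %.1f\n", (f + 1.0f) - f);  /* 0.0: the 1 is lost */
	printf("double: %.1f\n", (d + 1.0) - d);   /* 1.0: still exact   */
	return 0;
}
</pre>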
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f404"><b>Q: How do I choose the kernel ?</b></a>
<br/>                                                                                

<p>
In general we suggest trying the RBF kernel first.
A recent result by Keerthi and Lin
(<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/limit.ps.gz>
download paper here</a>)
shows that if RBF is used with model selection,
then there is no need to consider the linear kernel.
The kernel matrix using sigmoid may not be positive definite,
and in general its accuracy is not better than that of RBF
(see the paper by Lin and Lin:
<a href=http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf>
download paper here</a>).
Polynomial kernels are OK, but if a high degree is used,
numerical difficulties tend to happen
(think of the dth power of a number: below 1 it goes to 0,
above 1 it goes to infinity).
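<p>
A minimal illustration (not libsvm code) of this effect:
<pre>
#include &lt;stdio.h&gt;
#include &lt;math.h&gt;

int main()
{
	/* the d-th power of values below/above 1 heads to 0/infinity */
	printf("0.9^100  = %g\n", pow(0.9, 100.0));   /* about 2.7e-05 */
	printf("1.1^1000 = %g\n", pow(1.1, 1000.0));  /* about 2.5e+41 */
	return 0;
}
</pre>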
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f405"><b>Q: Does libsvm have special treatments for linear SVM ?</b></a>
<br/>                                                                                

<p>

No, at this point libsvm solves linear and nonlinear SVMs in the
same way.
Note that there are some possible
tricks to save training/testing time if the
linear kernel is used, but libsvm does not implement them.
Hence libsvm is <b>NOT</b> particularly efficient for linear SVMs,
especially for problems where the number of data points is much larger
than the number of attributes.
If you plan to solve this type of problem, you may want
to check <a href=http://www.csie.ntu.edu.tw/~cjlin/bsvm>bsvm</a>,
which includes an efficient implementation for
linear SVMs.
More details can be found in the following study:
K.-M. Chung, W.-C. Kao, 
T. Sun, 
and
C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/linear.pdf">
Decomposition Methods for Linear Support Vector Machines
</a>

<p> On the other hand, you may not really need to solve
linear SVMs in the first place. See the previous question on
choosing kernels for details.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f406"><b>Q: The number of free support vectors is large. What should I do ?</b></a>
<br/>                                                                                
 <p>
This usually happens when the model overfits the data.
If the attributes of your data are in large ranges,
try scaling them. Then the region
of appropriate parameters may be larger.
Note that libsvm includes a scaling program,
svm-scale.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f407"><b>Q: Should I scale training and testing data in a similar way ?</b></a>
<br/>                                                                                
<p>
Yes, you can do the following:
<pre>
svm-scale -s scaling_parameters train_data > scaled_train_data
svm-scale -r scaling_parameters test_data > scaled_test_data
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f408"><b>Q: Does it make a big difference  if I scale each attribute to [0,1] instead of [-1,1] ?</b></a>
<br/>                                                                                

<p>
For the linear scaling method, if the RBF kernel is
used and parameter selection is conducted, there
is no difference. Assume Mi and mi are 
respectively the maximal and minimal values of the
ith attribute. Scaling to [0,1] means
<pre>
                x'=(x-mi)/(Mi-mi)
</pre>
For [-1,1],
<pre>
                x''=2(x-mi)/(Mi-mi)-1.
</pre>
In the RBF kernel,
<pre>
                x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi).
</pre>
Since x''-y'' = 2(x'-y'), the squared distance satisfies
|x''-y''|^2 = 4|x'-y'|^2. Hence, using (C,g) on the [0,1]-scaled
data is the same as using (C,g/4) on the [-1,1]-scaled data.

<p> Though the performance is the same, the computational
time may be different. For data with many zero entries,
[0,1]-scaling keeps the sparsity of the input data and hence
may save time.
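<p>
A minimal numerical check (not libsvm code) of this correspondence,
with illustrative values for x, y, and g:
<pre>
#include &lt;stdio.h&gt;
#include &lt;math.h&gt;

int main()
{
	double x = 0.3, y = 0.8;            /* two values scaled to [0,1] */
	double xx = 2*x - 1, yy = 2*y - 1;  /* the same values in [-1,1]  */
	double g = 2.0;

	/* both lines print the same kernel value */
	printf("%.12f\n", exp(-g     * (x - y)   * (x - y)));
	printf("%.12f\n", exp(-(g/4) * (xx - yy) * (xx - yy)));
	return 0;
}
</pre>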
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f409"><b>Q: The prediction rate is low. How could I improve it ?</b></a>
<br/>                                                                                
<p>
Try the model selection tool grid.py in the python
directory to find
good parameters. To see the importance of model selection,
please
see my talk:
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/talks/freiburg.pdf">
A practical guide to support vector 
classification 
</A>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f410"><b>Q: My data are unbalanced. Could libsvm handle such problems ?</b></a>
<br/>                                                                                
<p>
Yes, there is a -wi option. For example, if you use
<pre>
svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
</pre>
the penalty for class "-1" is five times larger than that for class "+1".
Note that this -w option is for C-SVC only.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f411"><b>Q: What is the difference between nu-SVC and C-SVC ?</b></a>
<br/>                                                                                
<p>
Basically they are the same thing but with different
parameters. The range of C is from zero to infinity,
while nu is always in [0,1]. A nice property
of nu is its direct interpretation: it is an upper bound on the
fraction of training errors and a lower bound on the fraction of
support vectors.
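<p>
For example, the following trains nu-SVC (-s 1) with nu = 0.3
(the parameter values here are only illustrative):
<pre>
svm-train -s 1 -n 0.3 data_file
</pre>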
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f412"><b>Q: The program keeps running without showing any output. What should I do ?</b></a>
<br/>                                                                                
<p>
You may want to check your data. Each training/testing
instance must be on a single line; it cannot span
several lines. In addition, you have to remove empty lines.
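<p>
For reference, a correctly formatted file has one instance per line:
a label followed by index:value pairs with ascending indices. A small
made-up example:
<pre>
+1 1:0.708 3:1 9:-0.25
-1 2:0.583 4:1
</pre>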
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f413"><b>Q: The program keeps running (with output, i.e. many dots). What should I do ?</b></a>
<br/>                                                                                
<p>
In theory libsvm is guaranteed to converge if the kernel
matrix is positive semidefinite.
Since version 2.4 it can also handle non-PSD
kernels such as the sigmoid (tanh).
If the program keeps running, you are most likely
handling an ill-conditioned situation
(e.g. too large/small parameters), so numerical
difficulties occur.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f414"><b>Q: The training time is too long. What should I do ?</b></a>
<br/>                                                                                
<p>
This may happen for some difficult cases (e.g. -c is large).
You can try a looser stopping tolerance with -e.
If that still doesn't work, you may want to contact us; we can show you some
tricks for improving the training time.
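<p>
For example, the default tolerance is 0.001, so a looser setting
(with an illustrative -c value) would be:
<pre>
svm-train -c 1000 -e 0.01 data_file
</pre>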
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f415"><b>Q: How do I get the decision value(s) ?</b></a>
<br/>                                                                                
<p>
We print out decision values for regression. For classification,
we solve several binary SVMs for the multi-class case, so
you can obtain the values by calling the subroutine
svm_predict_values. Their corresponding labels
can be obtained from svm_get_labels.
Details are in the
README of the libsvm package.
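<p>
A sketch of how these calls fit together (the helper name
show_values is hypothetical; it assumes a trained model and a test
instance x, i.e. an svm_node array terminated by index = -1):
<pre>
#include &lt;stdlib.h&gt;
#include "svm.h"

void show_values(const struct svm_model *model, const struct svm_node *x)
{
	int k = svm_get_nr_class(model);
	int *labels = (int *) malloc(k * sizeof(int));
	/* one-against-one: k*(k-1)/2 binary problems, one value each */
	double *dec = (double *) malloc(k*(k-1)/2 * sizeof(double));

	svm_get_labels(model, labels);      /* class labels in training order */
	svm_predict_values(model, x, dec);  /* fill the decision values */
	/* ... use labels[] and dec[] here ... */
	free(labels);
	free(dec);
}
</pre>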

<p>
We do not recommend the following. But if you would
like to get values for
TWO-class classification with labels +1 and -1
(note: +1 and -1, not things like 5 and 10)
in the easiest way, simply add
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the line
<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f416"><b>Q: For some problem sets if I use a large cache (i.e. large -m) on a linux machine, why sometimes I get "segmentation fault ?"</b></a>
<br/>                                                                                
<p>

On 32-bit machines, the maximum addressable
memory is 4GB. The Linux kernel uses a 3:1
split, which means user space gets 3G and
kernel space 1G. Although there is
3G of user space, the maximum dynamically allocatable
memory is 2G. So, if you specify -m near 2G,
the memory will be exhausted, and svm-train
will fail when it requests more memory.
For more details, please read 
<a href=http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BA164F6.BAFA4FB%40daimi.au.dk>
this article</a>.
<p>
There are two ways to solve this. If your
machine supports Intel's PAE (Physical Address
Extension), you can turn on the option HIGHMEM64G
in the Linux kernel, which uses a 4G:4G split for
kernel and user space. If you don't, you can
try the software `tub', which can eliminate the 2G
boundary for dynamically allocated memory. `tub'
is available at
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.


<!--

This may happen only  when the cache is large, but each cached row is
not large enough. <b>Note:</b> This problem is specific to 
gnu C library which is used in linux.
The solution is as follows:

<p>
In our program, malloc() uses two methods
to allocate memory from the kernel. One is
sbrk() and the other is mmap(). sbrk is faster, but mmap
has a larger address
space. So malloc uses mmap only if the requested memory size is larger
than some threshold (default 128k).
In the case where each row is not large enough (#elements < 128k/sizeof(float)) but we need a large cache,
the address space for sbrk can be exhausted. The solution is to
lower the threshold to force malloc to use mmap
and to increase the maximum number of chunks to allocate
with mmap.

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by 
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable screen output of svm-train and svm-predict ?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but find out that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?</b></a>
<br/>                                                                                
<p>
The reason why we have two functions is as follows.
For the RBF kernel exp(-g |xi - xj|^2), if we calculate
xi - xj first and then the norm square, there are 3n operations.
Thus we consider exp(-g (|xi|^2 - 2 dot(xi,xj) + |xj|^2))
and, by calculating all |xi|^2 at the beginning,
reduce the number of operations to 2n.
This works for training. For prediction we cannot
do this, so a regular subroutine using the 3n operations is
needed.

The easiest way to use your own kernel is
to put the same code in both
subroutines, replacing the existing kernel there.
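<p>
A sketch (for dense vectors; not the actual libsvm source, where data
are sparse) contrasting the two evaluation styles:
<pre>
#include &lt;math.h&gt;

/* prediction style: about 3n operations (subtract, square, add) */
double rbf_direct(const double *xi, const double *xj, int n, double g)
{
	double s = 0;
	int k;
	for (k = 0; k &lt; n; k++)
		s += (xi[k] - xj[k]) * (xi[k] - xj[k]);
	return exp(-g * s);
}

/* training style: about 2n operations per pair, given precomputed
   squared norms sq_i = |xi|^2 and sq_j = |xj|^2 */
double rbf_cached(const double *xi, const double *xj, int n,
                  double sq_i, double sq_j, double g)
{
	double dot = 0;
	int k;
	for (k = 0; k &lt; n; k++)
		dot += xi[k] * xj[k];
	return exp(-g * (sq_i - 2*dot + sq_j));
}
</pre>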
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method ?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.ps.gz">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</I>, 13 (2002), 415-425.
