
<p>
We do not recommend the following. But if you would
like to get values for
TWO-class classification with labels +1 and -1
(note: +1 and -1, not things like 5 and 10)
in the easiest way, simply add
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the line
<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).
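As an illustration of this sign convention, here is a minimal C sketch. The helper name <tt>predicted_class</tt> and the parameter <tt>first_label</tt> (standing in for model-&gt;label[0]) are hypothetical, for illustration only; this is not LIBSVM code.

```c
/* Sketch of the sign convention described above: LIBSVM computes the
 * internal decision value with respect to model->label[0], so
 * multiplying by that label gives a value that is positive for data
 * predicted as +1 and negative for data predicted as -1. */
int predicted_class(double dec_value, int first_label)
{
    double signed_value = dec_value * first_label;
    return signed_value > 0 ? +1 : -1;
}
```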


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4151"><b>Q: How do I get the distance between a point and the hyperplane?</b></a>
<br/>                                                                                
<p>
The distance is |decision_value| / |w|. 
We have |w|^2 = w^Tw = alpha^T Q alpha = 2*(dual_obj + sum alpha_i). 
Thus in svm.cpp please find the place 
where we print the dual objective value
and add a statement to print w^Tw.

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f416"><b>Q: For some problem sets, if I use a large cache (i.e. large -m) on a Linux machine, why do I sometimes get a "segmentation fault"?</b></a>
<br/>                                                                                
<p>

On 32-bit machines, the maximum addressable
memory is 4GB. The Linux kernel uses a 3:1
split, which means user space gets 3G and
kernel space 1G. Although there is
3G of user space, the maximum dynamically allocatable
memory is 2G. So if you specify -m near 2G,
the memory will be exhausted, and svm-train
will fail when it asks for more memory.
For more details, please read 
<a href=http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BA164F6.BAFA4FB%40daimi.au.dk>
this article</a>.
<p>
There are two ways to solve this. If your
machine supports Intel's PAE (Physical Address
Extension), you can turn on the option HIGHMEM64G
in the Linux kernel, which uses a 4G:4G split for
kernel and user space. If you don't, you can
try a software `tub' which can eliminate the 2G
boundary for dynamically allocated memory. The `tub'
is available at 
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.


<!--

This may happen only  when the cache is large, but each cached row is
not large enough. <b>Note:</b> This problem is specific to 
gnu C library which is used in linux.
The solution is as follows:

<p>
In our program we have malloc() which uses two methods 
to allocate memory from kernel. One is
sbrk() and another is mmap(). sbrk is faster, but mmap 
has a larger address
space. So malloc uses mmap only if the wanted memory size is larger
than some threshold (default 128k).
In the case where each row is not large enough (#elements < 128k/sizeof(float)) but we need a large cache,
the address space for sbrk can be exhausted. The solution is to
lower the threshold to force malloc to use mmap
and increase the maximum number of chunks to allocate
with mmap.

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by 
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable screen output of svm-train and svm-predict ?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
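The idea behind this switch can be sketched as a self-contained C fragment. Note this is an illustration, not the actual svm.cpp code: the real info() returns void, whereas this sketch returns the number of characters written purely so the behaviour is observable.

```c
#include <stdarg.h>
#include <stdio.h>

#if 0   /* flip to 1 to re-enable screen output */
/* Enabled version: forwards the format string and arguments to vprintf. */
static int info(const char *fmt, ...)
{
    va_list ap;
    int n;
    va_start(ap, fmt);
    n = vprintf(fmt, ap);
    va_end(ap);
    return n;
}
#else
/* Disabled version: a do-nothing stub, so every info() call is silent. */
static int info(const char *fmt, ...)
{
    (void)fmt;
    return 0;
}
#endif
```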
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but find out that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?</b></a>
<br/>                                                                                
<p>
The reason why we have two functions is as follows.
For the RBF kernel exp(-g |xi - xj|^2), if we calculate
xi - xj first and then the squared norm, there are 3n operations.
Thus we consider exp(-g (|xi|^2 - 2dot(xi,xj) +|xj|^2))
and, by calculating all |xi|^2 at the beginning,
the number of operations is reduced to 2n.
This is for training. For prediction we cannot
do this, so a regular subroutine using the 3n operations is
needed.

The easiest way to use your own kernel is
to put the same code in these two
subroutines, replacing the existing kernels.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method ?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</I>, 13(2002), 415-425.

<p>
"1-against-the rest" is a good method whose performance
is comparable to "1-against-1." We use the latter
simply because its training time is shorter.
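With one-against-one, a k-class problem is decomposed into one binary problem per unordered pair of classes, i.e. k(k-1)/2 classifiers. A tiny illustrative helper (the function name is hypothetical):

```c
/* Number of binary classifiers trained by the one-against-one method
 * for a k-class problem: one per unordered pair of classes. */
int num_pairwise_classifiers(int k)
{
    return k * (k - 1) / 2;
}
```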
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f420"><b>Q: After doing cross validation, why is there no model file output?</b></a>
<br/>                                                                                
<p>
Cross validation is used for selecting good parameters.
After finding them, re-train on the whole
data set without the -v option.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f421"><b>Q: I would like to try different random partitions for cross validation. How can I do it?</b></a>
<br/>                                                                                
<p>
If you use the GNU C library,
the default seed 1 is used, so you always
get the same result when running svm-train -v.
To have different seeds, add the following code
to svm-train.c:
<pre>
#include &lt;time.h&gt;
</pre>
and at the beginning of the subroutine do_cross_validation(),
<pre>
srand(time(0));
</pre>
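To see why the seed matters: cross validation permutes the training indices with rand(), so with a fixed seed the partition is identical on every run. A minimal sketch of such a rand()-driven permutation (the helper name <tt>shuffle_indices</tt> is hypothetical, not the actual svm-train code):

```c
#include <stdlib.h>

/* Fisher-Yates shuffle driven by rand(): the resulting permutation is
 * entirely determined by the seed passed to srand(), which is why
 * seeding with time(0) yields a different partition on each run. */
void shuffle_indices(int *idx, int l)
{
    for (int i = 0; i < l; i++) {
        int j = i + rand() % (l - i);
        int tmp = idx[i];
        idx[i] = idx[j];
        idx[j] = tmp;
    }
}
```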
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f422"><b>Q: I would like to solve L2-SVM (i.e., error term is quadratic). How should I modify the code ?</b></a>
<br/>                                                                                
<p>
It is extremely easy. Taking c-svc as an example, only two
places in svm.cpp have to be changed.
First, modify the following line of
solve_c_svc from
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, Cp, Cn, param->eps, si, param->shrinking);
</pre>
to
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, INF, INF, param->eps, si, param->shrinking);
</pre>
Second, in the class SVC_Q, declare C as
a private variable:
<pre>
	double C;
</pre> 
In the constructor, initialize it from param.C:
<pre>
        this->C = param.C;		
</pre>
Then, in the subroutine get_Q, after the for loop, add
<pre>
        if(i >= start && i < len) 
		data[i] += 1/C;
</pre>

For one-class SVM, the modification is exactly the same. For SVR, you don't need an if statement like the above; you only need a simple assignment:
<pre>
	data[real_i] += 1/C;
</pre>
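In other words, for L2-SVM the kernel matrix Q is replaced by Q + (1/C)I while the upper bound C on alpha is removed (replaced by INF). The diagonal patch above can be sketched as a standalone helper; the name <tt>add_l2svm_diagonal</tt> is hypothetical, and it mirrors the patched get_Q(), which computes one row of Q for columns in [start, len):

```c
/* Apply the L2-SVM diagonal shift to one computed row of Q: the
 * diagonal entry i gets 1/C added, but only if it falls inside the
 * column range [start, len) that this call actually computed. */
void add_l2svm_diagonal(float *row, int i, int start, int len, double C)
{
    if (i >= start && i < len)
        row[i] += (float)(1.0 / C);
}
```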
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f424"><b>Q: How do I choose parameters for one-class svm as training data are in only one class?</b></a>
<br/>                                                                                
<p>
You have a pre-specified true positive rate in mind, and then you search for
parameters which achieve similar cross-validation accuracy.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f425"><b>Q: Why does training a probability model (i.e., -b 1) take longer?</b></a>
<br/>                                                                                
<p>
To construct this probability model, we internally conduct a
cross validation, which is more time consuming than
a regular training.
Hence, in general you should do parameter selection first without
-b 1, and use -b 1 only after good parameters have been
selected. In other words, avoid using -b 1 and -v
together.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f426"><b>Q: Why does using the -b option not give me better accuracy?</b></a>
<br/>                                                                                
<p>
There is absolutely no reason the probability outputs should guarantee
better accuracy. The main purpose of this option is
to provide probability estimates, not to boost
prediction accuracy. From our experience,
after proper parameter selection, results with
and without -b generally have similar accuracy. Occasionally there
are some differences.
It is not recommended to compare the two under
just a fixed parameter
set, as more differences will be observed.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f427"><b>Q: Why does the code give NaN (not a number) results?</b></a>
<br/>                                                                                
<p>
This rarely happens, but a few users have reported the problem.
It seems that the
computers they use for training libsvm have a VPN client
running. The VPN software has some bugs that cause this
problem. Please try closing or disconnecting the VPN client.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f428"><b>Q: Why does grid.py sometimes fail on Windows?</b></a>
<br/>                                                                                
<p>
The error message is probably
<pre>
Traceback (most recent call last):
  File "grid.py", line 349, in ?
    main()
  File "grid.py", line 344, in main
    redraw(db)
  File "grid.py", line 132, in redraw
    gnuplot.write("set term windows\n")
IOError: [Errno 22] Invalid argument
</pre>

<p>There are some problems with using gnuplot on Windows.
