<p>
We do not recommend the following, but if you would
like to get decision values for
TWO-class classification with labels +1 and -1
(note: +1 and -1, not arbitrary labels such as 5 and 10)
in the easiest way, simply add
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the line
<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4151"><b>Q: How do I get the distance between a point and the hyperplane?</b></a>
<br/>                                                                                
<p>
The distance is |decision_value| / |w|. 
We have |w|^2 = w^Tw = alpha^T Q alpha = 2*(dual_obj + sum alpha_i). 
Thus in svm.cpp please find the place 
where we print the dual objective value
and add a statement to print w^Tw.

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f416"><b>Q: For some problem sets if I use a large cache (i.e. large -m) on a linux machine, why sometimes I get "segmentation fault ?"</b></a>
<br/>                                                                                
<p>

On 32-bit machines, the maximum addressable
memory is 4GB. The Linux kernel uses a 3:1
split, which means user space gets 3G and
kernel space gets 1G. Although there is
3G of user space, the maximum dynamically allocatable
memory is 2G. So, if you specify -m near 2G,
the memory will be exhausted, and svm-train
will fail when it asks for more memory.
For more details, please read 
<a href=http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BA164F6.BAFA4FB%40daimi.au.dk>
this article</a>.
<p>
There are two ways to solve this. If your
machine supports Intel's PAE (Physical Address
Extension), you can turn on the option HIGHMEM64G
in the Linux kernel, which uses a 4G:4G split for
kernel and user space. If you don't, you can
try a software `tub' which can eliminate the 2G
boundary for dynamically allocated memory. The `tub'
is available at 
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.


<!--

This may happen only  when the cache is large, but each cached row is
not large enough. <b>Note:</b> This problem is specific to 
gnu C library which is used in linux.
The solution is as follows:

<p>
In our program we have malloc() which uses two methods 
to allocate memory from kernel. One is
sbrk() and another is mmap(). sbrk is faster, but mmap 
has a larger address
space. So malloc uses mmap only if the wanted memory size is larger
than some threshold (default 128k).
In the case where each row is not large enough (#elements < 128k/sizeof(float)) but we need a large cache ,
the address space for sbrk can be exhausted. The solution is to
lower the threshold to force malloc to use mmap
and increase the maximum number of chunks to allocate
with mmap.

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by 
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable screen output of svm-train and svm-predict ?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but find out that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?</b></a>
<br/>                                                                                
<p>
The reason why we have two functions is as follows:
For the RBF kernel exp(-g |xi - xj|^2), if we calculate
xi - xj first and then the norm square, there are 3n operations.
Thus we consider exp(-g (|xi|^2 - 2dot(xi,xj) +|xj|^2))
and by calculating all |xi|^2 in the beginning, 
the number of operations is reduced to 2n.
This is for training.  For prediction we cannot
do this, so a regular subroutine using the 3n operations is
needed.

The easiest way to have your own kernel is
to put the same kernel code in these two
subroutines, replacing the existing kernel evaluation.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method ?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</I>, 13(2002), 415-425.

<p>
"1-against-the rest" is a good method whose performance
is comparable to "1-against-1." We do the latter
simply because its training time is shorter.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f420"><b>Q: After doing cross validation, why there is no model file outputted ?</b></a>
<br/>                                                                                
<p>
Cross validation is used for selecting good parameters.
After finding them, re-train on the whole
data without the -v option.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f421"><b>Q: I would like to try different random partition for cross validation, how could I do it ?</b></a>
<br/>                                                                                
<p>
If you use the GNU C library,
the default seed is 1, so you always
get the same result from running svm-train -v.
To have different seeds, add the following code
to svm-train.c:
<pre>
#include &lt;time.h&gt;
</pre>
and in the beginning of the subroutine do_cross_validation(),
<pre>
srand(time(0));
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f422"><b>Q: I would like to solve L2-SVM (i.e., error term is quadratic). How should I modify the code ?</b></a>
<br/>                                                                                
<p>
It is extremely easy. Taking c-svc for example, only two 
places of svm.cpp have to be changed. 
First, modify the following line of 
solve_c_svc from 
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, Cp, Cn, param->eps, si, param->shrinking);
</pre>
to
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, INF, INF, param->eps, si, param->shrinking);
</pre>
Second, in the class SVC_Q, declare C as
a private variable:
<pre>
	double C;
</pre> 
In the constructor, initialize it from param.C:
<pre>
        this->C = param.C;		
</pre>
Then, in the subroutine get_Q, after the for loop, add
<pre>
        if(i >= start && i < len) 
		data[i] += 1/C;
</pre>

For one-class svm, the modification is exactly the same. For SVR, you don't need an if statement like the above. Instead, you only need a simple assignment:
<pre>
	data[real_i] += 1/C;
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f424"><b>Q: How do I choose parameters for one-class svm as training data are in only one class?</b></a>
<br/>                                                                                
<p>
Have a pre-specified true positive rate in mind, and then search for
parameters which achieve a similar cross-validation accuracy.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f425"><b>Q: Why training a probability model (i.e., -b 1) takes longer time</b></a>
<br/>                                                                                
<p>
To construct this probability model, we internally conduct a
cross validation, which is more time consuming than
regular training.
Hence, in general you should do parameter selection first without
-b 1, and use -b 1 only after good parameters have been
selected. In other words, avoid using -b 1 and -v
together.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f426"><b>Q: Why using the -b option does not give me better accuracy?</b></a>
<br/>                                                                                
<p>
There is absolutely no reason why the probability outputs should guarantee
better accuracy. The main purpose of this option is
to provide probability estimates, not to boost
prediction accuracy. In our experience,
after proper parameter selection, training with
and without -b generally gives similar accuracy. Occasionally there
are some differences.
It is not recommended to compare the two under
just a fixed parameter
set, as more differences will be observed.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f427"><b>Q: Why the code gives NaN (not a number) results?</b></a>
<br/>                                                                                
<p>
This rarely happens, but a few users have reported the problem.
It seems that the
computers they used for training libsvm had a VPN client
running. The VPN software has some bugs that cause this
problem. Please try to close or disconnect the VPN client.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f428"><b>Q: Why on windows sometimes grid.py fails?</b></a>
<br/>                                                                                
<p>
The error message is probably
<pre>
Traceback (most recent call last):
  File "grid.py", line 349, in ?
    main()
  File "grid.py", line 344, in main
    redraw(db)
  File "grid.py", line 132, in redraw
    gnuplot.write("set term windows\n")
IOError: [Errno 22] Invalid argument
</pre>

<p>There are some problems with using gnuplot on Windows.
