<p>
We do not recommend the following, but if you would
like to get decision values for
two-class classification with labels +1 and -1
(note: +1 and -1, not labels such as 5 and 10)
in the easiest way, simply add
<pre>
		printf("%f\n", dec_values[0]*model->label[0]);
</pre>
after the line
<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4151"><b>Q: How do I get the distance between a point and the hyperplane?</b></a>
<br/>                                                                                
<p>
The distance is |decision_value| / |w|.
We have |w|^2 = w^T w = alpha^T Q alpha = 2*(dual_obj + sum alpha_i).
Thus, in svm.cpp, find the place
where we print the dual objective value
and add a statement to print w^T w.

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f416"><b>Q: For some problem sets, if I use a large cache (i.e., a large -m) on a Linux machine, why do I sometimes get "segmentation fault"?</b></a>
<br/>                                                                                
<p>

On 32-bit machines, the maximum addressable
memory is 4 GB. The Linux kernel uses a 3:1
split, which means user space gets 3 GB and
kernel space 1 GB. Although there are
3 GB of user space, the maximum dynamically allocatable
memory is about 2 GB. So, if you specify -m near 2G,
memory will be exhausted, and svm-train
will fail when it asks for more memory.
For more details, please read 
<a href=http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BA164F6.BAFA4FB%40daimi.au.dk>
this article</a>.
<p>
There are two ways to solve this. If your
machine supports Intel's PAE (Physical Address
Extension), you can turn on the option HIGHMEM64G
in the Linux kernel, which uses a 4G:4G split for
kernel and user space. If you can't, you can
try a software `tub' which can eliminate the 2G
boundary for dynamically allocated memory. The `tub'
is available at
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.


<!--

This may happen only  when the cache is large, but each cached row is
not large enough. <b>Note:</b> This problem is specific to 
gnu C library which is used in linux.
The solution is as follows:

<p>
In our program we have malloc() which uses two methods 
to allocate memory from kernel. One is
sbrk() and another is mmap(). sbrk is faster, but mmap 
has a larger address
space. So malloc uses mmap only if the wanted memory size is larger
than some threshold (default 128k).
In the case where each row is not large enough (#elements < 128k/sizeof(float)) but we need a large cache ,
the address space for sbrk can be exhausted. The solution is to
lower the threshold to force malloc to use mmap
and increase the maximum number of chunks to allocate
with mmap.

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by 
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable the screen output of svm-train and svm-predict?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but found that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify?</b></a>
<br/>                                                                                
<p>
The reason why we have two functions is as follows.
For the RBF kernel exp(-g |xi - xj|^2), if we calculate
xi - xj first and then the squared norm, there are 3n operations.
Thus we consider exp(-g (|xi|^2 - 2 dot(xi,xj) + |xj|^2)),
and by calculating all |xi|^2 at the beginning,
the number of operations is reduced to 2n.
This is used for training. For prediction we cannot
do this, so a regular subroutine using the 3n operations is
needed.

The easiest way to use your own kernel is
to put the same code in these two
subroutines, replacing any existing kernel.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM? Why don't you use the "1-against-the-rest" method?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</I>, 13(2002), 415-425.

<p>
"1-against-the rest" is a good method whose performance
is comparable to "1-against-1." We use the latter
simply because its training time is shorter.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f420"><b>Q: After doing cross validation, why is there no model file output?</b></a>
<br/>                                                                                
<p>
Cross validation is used for selecting good parameters.
After finding them, re-train on the whole
data set without the -v option.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f421"><b>Q: I would like to try different random partitions for cross validation. How can I do it?</b></a>
<br/>                                                                                
<p>
If you use the GNU C library,
the default seed 1 is used. Thus you always
get the same result from running svm-train -v.
To have different seeds, you can add the following code
to svm-train.c:
<pre>
#include &lt;time.h&gt;
</pre>
and in the beginning of the subroutine do_cross_validation(),
<pre>
srand(time(0));
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f422"><b>Q: I would like to solve L2-SVM (i.e., the error term is quadratic). How should I modify the code?</b></a>
<br/>                                                                                
<p>
It is extremely easy. Taking c-svc as an example, only two
places in svm.cpp have to be changed.
First, modify the following lines of
solve_c_svc from
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, Cp, Cn, param->eps, si, param->shrinking);
</pre>
to
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, INF, INF, param->eps, si, param->shrinking);
</pre>
Second, in the class SVC_Q, declare C as
a private variable:
<pre>
	double C;
</pre> 
In the constructor, assign param.C to it:
<pre>
        this->C = param.C;		
</pre>
Then, in the subroutine get_Q, after the for loop, add
<pre>
        if(i >= start && i < len) 
		data[i] += 1/C;
</pre>

For one-class SVM, the modification is exactly the same. For SVR, you don't need an if statement like the above; instead, you only need a simple assignment:
<pre>
	data[real_i] += 1/C;
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f424"><b>Q: How do I choose parameters for one-class svm as training data are in only one class?</b></a>
<br/>                                                                                
<p>
Have a pre-specified true positive rate in mind, and then search for
parameters which achieve a similar cross-validation accuracy.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f425"><b>Q: Why does training a probability model (i.e., -b 1) take longer?</b></a>
<br/>                                                                                
<p>
To construct this probability model, we internally conduct a
cross validation, which is more time consuming than
regular training.
Hence, in general you should do parameter selection first without
-b 1, and use -b 1 only after good parameters have been
selected. In other words, avoid using -b 1 and -v
together.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f426"><b>Q: Why does using the -b option not give me better accuracy?</b></a>
<br/>                                                                                
<p>
There is no reason the probability outputs should guarantee
better accuracy. The main purpose of this option is
to provide probability estimates, not to boost
prediction accuracy. In our experience,
after proper parameter selection, results with
and without -b generally have similar accuracy; occasionally there
are some differences.
Comparing the two under just a fixed parameter
set is not recommended, as more differences will be observed.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f427"><b>Q: Why does the code give NaN (not a number) results?</b></a>
<br/>                                                                                
<p>
This rarely happens, but a few users have reported the problem.
It seems that the computers
they used for training libsvm had a VPN client
running. The VPN software has some bugs that cause this
problem. Please try closing or disconnecting the VPN client.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f428"><b>Q: Why does grid.py sometimes fail on Windows?</b></a>
<br/>                                                                                
<p>
The error message is probably
<pre>
Traceback (most recent call last):
  File "grid.py", line 349, in ?
    main()
  File "grid.py", line 344, in main
    redraw(db)
  File "grid.py", line 132, in redraw
    gnuplot.write("set term windows\n")
IOError: [Errno 22] Invalid argument
</pre>

<p>There are some known problems with using gnuplot on Windows.
