<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).
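<p>
For reference, a minimal sketch of obtaining a decision value through
svm_predict_values on a loaded model (the model file name and the test
vector below are hypothetical; in older LIBSVM versions the function
returns void rather than the predicted label):
<pre>
#include "svm.h"
#include &lt;stdio.h&gt;

int main()
{
	struct svm_model *model = svm_load_model("train.dat.model");
	if (model == NULL) return 1;
	/* sparse test vector; index -1 terminates it */
	struct svm_node x[3] = { {1, 0.5}, {2, -1.0}, {-1, 0} };
	double dec_value;
	svm_predict_values(model, x, &amp;dec_value); /* one value for a binary model */
	printf("decision value = %g\n", dec_value);
	return 0;
}
</pre>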


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4151"><b>Q: How do I get the distance between a point and the hyperplane?</b></a>
<br/>                                                                                
<p>
The distance is |decision_value| / |w|. 
We have |w|^2 = w^Tw = alpha^T Q alpha = 2*(dual_obj + sum alpha_i). 
Thus, in svm.cpp find the place 
where the dual objective value is calculated
(i.e., the subroutine Solve())
and add a statement to print w^Tw.
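<p>
For example, a minimal sketch of such a statement, assuming the names used
in Solver::Solve() of svm.cpp (si->obj holds the dual objective, alpha the
dual variables, and l the problem size); place it after si->obj has been
computed:
<pre>
	/* |w|^2 = alpha^T Q alpha = 2*(dual objective + sum of alpha_i) */
	double sum_alpha = 0;
	for(int i=0;i&lt;l;i++)
		sum_alpha += alpha[i];
	info("w^T w = %f\n", 2*(si->obj + sum_alpha));
</pre>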

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f416"><b>Q: On 32-bit machines, if I use a large cache (i.e. large -m) on a linux machine, why sometimes I get "segmentation fault ?"</b></a>
<br/>                                                                                
<p>

On 32-bit machines, the maximum addressable
memory is 4GB. The Linux kernel uses a 3:1
split, which means user space gets 3G and
kernel space gets 1G. Although there is
3G of user space, the maximum dynamically
allocatable memory is about 2G. So if you specify -m near 2G,
memory will be exhausted, and svm-train
will fail when it asks for more memory.
For more details, please read 
<a href=http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BA164F6.BAFA4FB%40daimi.au.dk>
this article</a>.
<p>
The easiest solution is to switch to a
64-bit machine.
Otherwise, there are two workarounds. If your
machine supports Intel's PAE (Physical Address
Extension), you can turn on the option HIGHMEM64G
in the Linux kernel, which uses a 4G:4G split for
kernel and user space. If it doesn't, you can
try a piece of software called `tub', which can eliminate the 2G
boundary for dynamically allocated memory. `tub'
is available at 
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.
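<p>
To check the limit on a particular machine, the following standalone probe
(a sketch, not part of LIBSVM) reports the largest single malloc() that
succeeds; on a 3:1-split 32-bit kernel it typically lands near 2G:
<pre>
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

int main(void)
{
	/* binary-search the largest size malloc() accepts;
	   the search is capped at 2G-1 on a 32-bit size_t */
	size_t lo = 0, hi = (size_t)-1 / 2;
	while (lo &lt; hi) {
		size_t mid = lo + (hi - lo + 1) / 2;
		void *p = malloc(mid);
		if (p) { free(p); lo = mid; }
		else hi = mid - 1;
	}
	printf("largest single allocation: %lu bytes\n", (unsigned long)lo);
	return 0;
}
</pre>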


<!--

This may happen only when the cache is large, but each cached row is
not large enough. <b>Note:</b> This problem is specific to the
GNU C library used on Linux.
The solution is as follows:

<p>
In our program, malloc() uses two methods
to allocate memory from the kernel: one is
sbrk() and the other is mmap(). sbrk is faster, but mmap
has a larger address
space, so malloc uses mmap only if the requested memory size is larger
than some threshold (default 128k).
In the case where each row is not large enough (#elements < 128k/sizeof(float)) but we need a large cache,
the address space for sbrk can be exhausted. The solution is to
lower the threshold to force malloc to use mmap
and to increase the maximum number of chunks to allocate
with mmap.

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by 
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable screen output of svm-train and svm-predict ?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
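<p>
If your copy of LIBSVM is version 3.0 or later, a source edit is not
needed: the library exposes svm_set_print_string_function, so you can
install a print function that discards its argument. A sketch:
<pre>
#include "svm.h"

static void print_null(const char *s) { (void)s; }

/* call once before svm_train(); all info() output is then discarded */
void disable_libsvm_output(void)
{
	svm_set_print_string_function(print_null);
}
</pre>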
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but find out that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?</b></a>
<br/>                                                                                
<p>
The reason why we have two functions is as follows.
For the RBF kernel exp(-g |xi - xj|^2), if we calculate
xi - xj first and then the norm squared, there are 3n operations.
Thus we consider exp(-g (|xi|^2 - 2 dot(xi,xj) + |xj|^2)),
and by calculating all |xi|^2 at the beginning, 
the number of operations is reduced to 2n.
This is used for training. For prediction we cannot
do this, so a regular subroutine using 3n operations is
needed.

The easiest way to use your own kernel is
to put the same code in these two
subroutines, replacing the existing kernels.
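<p>
The following sketch (not LIBSVM code; x and y are hypothetical dense
vectors of length n) shows the two equivalent evaluations, and why caching
the squared norms saves work during training:
<pre>
#include &lt;math.h&gt;

/* ~3n operations: subtract, square, accumulate */
double rbf_direct(const double *x, const double *y, int n, double g)
{
	double d2 = 0;
	for (int i = 0; i &lt; n; i++) {
		double d = x[i] - y[i];
		d2 += d * d;
	}
	return exp(-g * d2);
}

/* ~2n operations per pair, given x_sq = |x|^2 and y_sq = |y|^2
   precomputed once for all vectors, as kernel_function() does */
double rbf_cached(const double *x, const double *y, int n, double g,
                  double x_sq, double y_sq)
{
	double dot = 0;
	for (int i = 0; i &lt; n; i++)
		dot += x[i] * y[i];
	return exp(-g * (x_sq - 2 * dot + y_sq));
}
</pre>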
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method ?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</I>, 13(2002), 415-425.

<p>
"1-against-the rest" is a good method whose performance
is comparable to "1-against-1." We do the latter
simply because its training time is shorter.
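<p>
Concretely, one-against-one trains k(k-1)/2 binary classifiers and predicts
by voting. A sketch of the vote counting, mirroring the logic of
svm_predict in svm.cpp (dec holds the pairwise decision values):
<pre>
int predict_by_voting(const double *dec, int k)
{
	int votes[64] = {0};	/* the sketch assumes k &lt;= 64 */
	int pos = 0;
	for (int i = 0; i &lt; k; i++)
		for (int j = i + 1; j &lt; k; j++, pos++)
			votes[dec[pos] &gt; 0 ? i : j]++;
	int best = 0;
	for (int i = 1; i &lt; k; i++)
		if (votes[i] &gt; votes[best])
			best = i;
	return best;	/* an index into the model's label array */
}
</pre>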
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f420"><b>Q: After doing cross validation, why there is no model file outputted ?</b></a>
<br/>                                                                                
<p>
Cross validation is used for selecting good parameters.
After finding them, re-train on the whole
data without the -v option.
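<p>
For example (the file name and parameter values here are hypothetical):
<pre>
$ svm-train -v 5 -c 4 -g 0.25 train.dat    # cross validation only; no model file
$ svm-train -c 4 -g 0.25 train.dat         # re-train; writes train.dat.model
</pre>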
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f421"><b>Q: I would like to try different random partition for cross validation, how could I do it ?</b></a>
<br/>                                                                                
<p>
If you use the GNU C library,
the default seed 1 is used, so you always
get the same result from running svm-train -v.
To get different seeds, you can add the following code
to svm-train.c:
<pre>
#include &lt;time.h&gt;
</pre>
and in the beginning of the subroutine do_cross_validation(),
<pre>
srand(time(0));
</pre>
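<p>
If you also want runs to be reproducible on demand, a hypothetical variant
reads the seed from an environment variable (SVM_CV_SEED is invented here
for illustration, not a real svm-train option):
<pre>
#include &lt;stdlib.h&gt;
#include &lt;time.h&gt;

static void seed_cross_validation(void)
{
	const char *s = getenv("SVM_CV_SEED");	/* hypothetical variable */
	srand(s ? (unsigned)atoi(s) : (unsigned)time(0));
}
</pre>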
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f422"><b>Q: I would like to solve L2-loss SVM (i.e., error term is quadratic). How should I modify the code ?</b></a>
<br/>                                                                                
<p>
It is extremely easy. Taking C-SVC as an example, only two 
places in svm.cpp have to be changed. 
First, modify the following line in 
solve_c_svc from 
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, Cp, Cn, param->eps, si, param->shrinking);
</pre>
to
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, INF, INF, param->eps, si, param->shrinking);
</pre>
Second, in the class SVC_Q, declare C as 
a private variable:
<pre>
	double C;
</pre> 
In the constructor, assign param.C to it:
<pre>
        this->C = param.C;		
</pre>
Then, in the subroutine get_Q, after the for loop, add
<pre>
        if(i >= start && i < len) 
		data[i] += 1/C;
</pre>

<p>
For one-class SVM, the modification is exactly the same. For SVR, you don't need an if statement like the above; instead, you only need a simple assignment:
<pre>
	data[real_i] += 1/C;
</pre>


<p>
For large linear L2-loss SVM, please use
<a href=../liblinear>LIBLINEAR</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f424"><b>Q: How do I choose parameters for one-class svm as training data are in only one class?</b></a>
<br/>                                                                                
<p>
Have a pre-specified true positive rate in mind, and then search for
parameters which achieve a similar cross-validation accuracy.
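<p>
For example, to target roughly 95% of training data being classified as
positive, one might scan nu (and gamma) values like this (the file name is
hypothetical; -s 2 selects one-class SVM and -n sets nu):
<pre>
$ svm-train -s 2 -n 0.05 -g 0.5 -v 5 train.dat
$ svm-train -s 2 -n 0.10 -g 0.5 -v 5 train.dat
</pre>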
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f427"><b>Q: Why the code gives NaN (not a number) results?</b></a>
<br/>                                                                                
<p>
This rarely happens, but a few users have reported the problem.
It seems that the
computers they used for training had a VPN client
running. The VPN software has some bugs that cause this
problem. Please try closing or disconnecting the VPN client.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f428"><b>Q: Why on windows sometimes grid.py fails?</b></a>
<br/>                                                                                
<p>
The error message probably looks like
<pre>
Traceback (most recent call last):
  File "grid.py", line 349, in ?
    main()
  File "grid.py", line 344, in main
    redraw(db)
  File "grid.py", line 132, in redraw
    gnuplot.write("set term windows\n")
IOError: [Errno 22] Invalid argument
</pre>

<p>There are some problems with using gnuplot on Windows,
and so far we have not found a good solution.
Please try closing the gnuplot window and rerunning.
If the problem still occurs, comment out the following
two lines in grid.py by inserting "#" at the beginning:
<pre>
        redraw(db)
        redraw(db,1)
</pre>
Then you get only the accuracy, without the cross validation contours.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f429"><b>Q: Why grid.py/easy.py sometimes generates the following warning message?</b></a>
<br/>                                                                                
<pre>
Warning: empty z range [62.5:62.5], adjusting to [61.875:63.125]
Notice: cannot contour non grid data!
</pre>
<p>Nothing is wrong; please disregard the 
message. It comes from gnuplot when drawing
the contour.  
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f430"><b>Q: Why the sign of predicted labels and decision values are sometimes reversed?</b></a>
<br/>                                                                                
<p>Nothing is wrong. Very likely you have two labels +1/-1 and the first instance in your data
has label -1.
Think about the case of labels +5/+10: since
SVM internally needs +1/-1,
we map +5/+10 to +1/-1 according to which
label appears first.
Hence a positive decision value implies
that we should predict the "internal" +1,
which may not be the +1 in the input file.
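<p>
To interpret the sign reliably, query the model's label order with
svm_get_labels. A sketch (model and x are assumed to be set up elsewhere;
for a binary model there is exactly one decision value):
<pre>
	double dec;
	int labels[2];
	svm_predict_values(model, x, &amp;dec);
	svm_get_labels(model, labels);	/* labels[0] is the internal +1 */
	printf("decision value %g predicts label %d\n",
	       dec, dec &gt; 0 ? labels[0] : labels[1]);
</pre>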

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q5:_Probability_outputs"></a>
<a name="f425"><b>Q: Why training a probability model (i.e., -b 1) takes longer time</b></a>
<br/>                                                                                
<p>
To construct this probability model, we internally conduct a 
cross validation, which is more time consuming than
a regular training.
Hence, in general you do parameter selection first without
-b 1. You only use -b 1 when good parameters have been
selected. In other words, you avoid using -b 1 and -v
together.
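<p>
A typical workflow (file names and parameter values hypothetical):
<pre>
$ svm-train -v 5 -c 8 -g 0.5 train.dat                # parameter selection, no -b 1
$ svm-train -b 1 -c 8 -g 0.5 train.dat                # final model with probability info
$ svm-predict -b 1 test.dat train.dat.model out.txt   # predict with probability estimates
</pre>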
<p align="right">
<a href="#_TOP">[Go Top]</a>  
