<!-- faq.html (libsvm-2.84) -->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4151"><b>Q: How do I get the distance between a point and the hyperplane?</b></a>
<br/>                                                                                
<p>
The distance is |decision_value| / |w|.
We have |w|^2 = w^Tw = alpha^T Q alpha = 2*(dual_obj + sum alpha_i).
Thus, in svm.cpp, find the place
where the dual objective value is calculated
(i.e., the subroutine Solve())
and add a statement to print w^Tw.

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f416"><b>Q: On 32-bit machines, if I use a large cache (i.e., large -m) on a Linux machine, why do I sometimes get "segmentation fault"?</b></a>
<br/>                                                                                
<p>

On 32-bit machines, the maximum addressable
memory is 4GB. The Linux kernel uses a 3:1
split, which means user space gets 3GB and
kernel space 1GB. Although user space is 3GB,
the maximum dynamically allocatable memory
is about 2GB. So, if you specify -m near 2G,
memory will be exhausted, and svm-train
will fail when it asks for more memory.
For more details, please read 
<a href=http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BA164F6.BAFA4FB%40daimi.au.dk>
this article</a>.
<p>
The easiest solution is to switch to a
64-bit machine.
Otherwise, there are two ways to solve this. If your
machine supports Intel's PAE (Physical Address
Extension), you can turn on the option HIGHMEM64G
in the Linux kernel, which uses a 4G:4G split for
kernel and user space. If it doesn't, you can
try a software tool called `tub', which can eliminate the 2G
boundary for dynamically allocated memory. The `tub'
is available at
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.


<!--

This may happen only when the cache is large, but each cached row is
not large enough. <b>Note:</b> This problem is specific to the
GNU C library used on Linux.
The solution is as follows:

<p>
Our program calls malloc(), which uses two methods
to allocate memory from the kernel: one is
sbrk() and the other is mmap(). sbrk is faster, but mmap
has a larger address
space. So malloc uses mmap only if the requested memory size is larger
than some threshold (default 128k).
When each row is not large enough (#elements < 128k/sizeof(float)) but we need a large cache,
the address space for sbrk can be exhausted. The solution is to
lower the threshold to force malloc to use mmap
and to increase the maximum number of chunks to allocate
with mmap.

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by 
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable the screen output of svm-train and svm-predict?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but found that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify?</b></a>
<br/>                                                                                
<p>
The reason why we have two functions is as follows:
For the RBF kernel exp(-g |xi - xj|^2), if we calculate
xi - xj first and then the norm square, there are 3n operations.
Thus we consider exp(-g (|xi|^2 - 2dot(xi,xj) + |xj|^2)),
and by calculating all |xi|^2 at the beginning,
the number of operations is reduced to 2n.
This is used for training. For prediction we cannot
do this, so a regular subroutine using the 3n operations is
needed.

The easiest way to use your own kernel is
to put the same code in both
subroutines, replacing the existing kernel evaluation.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM? Why don't you use the "1-against-the-rest" method?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</I>, 13(2002), 415-425.

<p>
"1-against-the rest" is a good method whose performance
is comparable to "1-against-1." We chose the latter
simply because its training time is shorter.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f420"><b>Q: After doing cross validation, why is there no model file output?</b></a>
<br/>                                                                                
<p>
Cross validation is used for selecting good parameters.
After finding them, re-train on the whole
data set without the -v option.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f421"><b>Q: I would like to try different random partitions for cross validation. How can I do it?</b></a>
<br/>                                                                                
<p>
If you use the GNU C library,
the default seed is 1, so you always
get the same result when running svm-train -v.
To have different seeds, you can add the following code
to svm-train.c:
<pre>
#include &lt;time.h&gt;
</pre>
and at the beginning of the subroutine do_cross_validation(),
<pre>
srand(time(0));
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f422"><b>Q: I would like to solve L2-SVM (i.e., the error term is quadratic). How should I modify the code?</b></a>
<br/>                                                                                
<p>
It is extremely easy. Taking c-svc as an example, only two
places in svm.cpp have to be changed.
First, modify the following line of
solve_c_svc from
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, Cp, Cn, param->eps, si, param->shrinking);
</pre>
to
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, INF, INF, param->eps, si, param->shrinking);
</pre>
Second, in the class SVC_Q, declare C as
a private variable:
<pre>
	double C;
</pre> 
In the constructor, assign param.C to it:
<pre>
        this->C = param.C;		
</pre>
Then, in the subroutine get_Q, after the for loop, add
<pre>
        if(i >= start && i < len) 
		data[i] += 1/C;
</pre>

For one-class svm, the modification is exactly the same. For SVR, you don't need an if statement like the one above. Instead, you only need a simple assignment:
<pre>
	data[real_i] += 1/C;
</pre>
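For reference, the 1/C diagonal term follows from the L2-SVM dual. A sketch of the derivation, assuming the scaling (C/2) sum xi_i^2 for the error term (the convention that yields exactly 1/C):

```latex
\min_{w,b,\xi}\ \tfrac{1}{2}\,w^\top w + \tfrac{C}{2}\sum_i \xi_i^2
\quad\text{s.t.}\quad y_i\bigl(w^\top \phi(x_i)+b\bigr) \ge 1-\xi_i .
% Stationarity in \xi_i:  C\xi_i - \alpha_i = 0 \;\Rightarrow\; \xi_i = \alpha_i/C .
% Substituting back gives the dual
\max_{\alpha \ge 0}\ \sum_i \alpha_i
  - \tfrac{1}{2}\,\alpha^\top\!\Bigl(Q + \tfrac{1}{C} I\Bigr)\alpha ,
% with no upper bound on alpha (hence the Cp, Cn -> INF change above).
```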
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f424"><b>Q: How do I choose parameters for one-class svm when training data are in only one class?</b></a>
<br/>                                                                                
<p>
Have a pre-specified true positive rate in mind, and then search for
parameters that achieve a similar cross-validation accuracy.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f427"><b>Q: Why does the code give NaN (not a number) results?</b></a>
<br/>                                                                                
<p>
This rarely happens, but a few users have reported the problem.
It seems that the
computers they used to train libsvm had a VPN client
running. The VPN software has some bugs that cause this
problem. Please try to close or disconnect the VPN client.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f428"><b>Q: Why does grid.py sometimes fail on Windows?</b></a>
<br/>                                                                                
<p>
The error message is probably
<pre>
Traceback (most recent call last):
  File "grid.py", line 349, in ?
    main()
  File "grid.py", line 344, in main
    redraw(db)
  File "grid.py", line 132, in redraw
    gnuplot.write("set term windows\n")
IOError: [Errno 22] Invalid argument
</pre>

<p>There are some problems with using gnuplot on Windows,
and so far we have not found a good solution.
Please try closing the gnuplot windows and rerunning.
If the problem still occurs, comment out the following
two lines in grid.py by inserting "#" at the beginning:
<pre>
        redraw(db)
        redraw(db,1)
</pre>
You will then get only the accuracy, not the cross-validation contours.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f429"><b>Q: Why do grid.py/easy.py sometimes generate the following warning message?</b></a>
<br/>                                                                                
<pre>
Warning: empty z range [62.5:62.5], adjusting to [61.875:63.125]
Notice: cannot contour non grid data!
</pre>
<p>Nothing is wrong; please disregard the
message. It comes from gnuplot when drawing
the contour.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f430"><b>Q: Why are the signs of predicted labels and decision values sometimes reversed?</b></a>
<br/>                                                                                
<p>Nothing is wrong. Very likely you have two labels +1/-1 and the first instance in your data
has label -1.
Consider the case of labels +5/+10. Since
SVM needs to use +1/-1, internally
we map +5/+10 to +1/-1 according to which
label appears first.
Hence a positive decision value implies
that we should predict the "internal" +1,
which may not be the +1 in the input file.

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q5:_Probability_outputs"></a>
<a name="f425"><b>Q: Why does training a probability model (i.e., -b 1) take longer?</b></a>
<br/>                                                                                
<p>
To construct this probability model, we internally conduct a
cross validation, which is more time consuming than
regular training.
Hence, in general you should do parameter selection first without
-b 1, and use -b 1 only after good parameters have been
selected. In other words, avoid using -b 1 and -v
together.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q5:_Probability_outputs"></a>
<a name="f426"><b>Q: Why using the -b option does not give me better accuracy?</b></a>
<br/>                                                                                
