<pre>
		svm_predict_values(model, x, dec_values);
</pre>
of the file svm.cpp.
Positive (negative)
decision values correspond to data predicted as +1 (-1).
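<p>
For reference, here is a minimal calling sketch (a sketch only; it assumes a
two-class model, so dec_values needs nr_class*(nr_class-1)/2 = 1 entry, and
x is an svm_node array terminated by index = -1):
<pre>
	double dec_values[1];                 /* one value for a two-class model */
	svm_predict_values(model, x, dec_values);
	if (dec_values[0] > 0)
		; /* predicted as the internal +1 class */
</pre>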


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4151"><b>Q: How do I get the distance between a point and the hyperplane?</b></a>
<br/>                                                                                
<p>
The distance is |decision_value| / |w|. 
We have |w|^2 = w^Tw = alpha^T Q alpha = 2*(dual_obj + sum alpha_i). 
Thus in svm.cpp please find the place 
where we calculate the dual objective value
(i.e., the subroutine Solve())
and add a statement to print w^Tw.
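<p>
For example (a sketch, not part of the official code; it assumes C-SVC, where
all alpha_i inside the solver are nonnegative), right after si->obj is computed
at the end of Solve() in svm.cpp one can add:
<pre>
	/* print w^T w = 2 * (dual objective + sum of alpha_i) */
	{
		double sum_alpha = 0;
		for(int i=0;i&lt;l;i++)
			sum_alpha += alpha[i];
		info("w^T w = %g\n", 2*(si->obj + sum_alpha));
	}
</pre>
The distance of a point from the hyperplane is then |decision_value| / sqrt(w^T w).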

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f416"><b>Q: On 32-bit machines, if I use a large cache (i.e. large -m) on a linux machine, why sometimes I get "segmentation fault ?"</b></a>
<br/>                                                                                
<p>

On 32-bit machines, the maximum addressable
memory is 4GB. The Linux kernel uses a 3:1
split, which means user space gets 3GB and
kernel space 1GB. Although user space is 3GB,
the maximum memory available for dynamic
allocation is about 2GB. So, if you specify -m
near 2G, memory will be exhausted, and svm-train
will fail when it requests more memory.
For more details, please read 
<a href=http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=3BA164F6.BAFA4FB%40daimi.au.dk>
this article</a>.
<p>
The easiest solution is to switch to a
64-bit machine.
Otherwise, there are two workarounds. If your
machine supports Intel's PAE (Physical Address
Extension), you can turn on the kernel option
HIGHMEM64G, which uses a 4G:4G split for
kernel and user space. If you can't, you can
try `tub', a software tool which eliminates the 2G
boundary for dynamically allocated memory. `tub'
is available at 
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.


<!--

This may happen only when the cache is large but each cached row is
not large enough. <b>Note:</b> This problem is specific to the
GNU C library used on Linux.
The solution is as follows:

<p>
Our program calls malloc(), which uses two methods
to allocate memory from the kernel. One is
sbrk() and the other is mmap(). sbrk is faster, but mmap
has a larger address
space, so malloc uses mmap only if the requested memory size is larger
than some threshold (128KB by default).
In the case where each row is not large enough (#elements < 128k/sizeof(float)) but we need a large cache,
the address space available to sbrk can be exhausted. The solution is to
lower the threshold to force malloc to use mmap,
and to increase the maximum number of chunks allocated
with mmap.

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by 
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable screen output of svm-train and svm-predict ?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
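<p>
Alternatively, recent LIBSVM versions expose svm_set_print_string_function in
svm.h (check that your version has it), so you can silence the library without
editing svm.cpp. A minimal sketch:
<pre>
	#include "svm.h"

	/* no-op print function: swallows all library messages */
	static void print_null(const char *s) {}

	/* call once before svm_train()/svm_predict() */
	svm_set_print_string_function(&print_null);
</pre>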
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but find out that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?</b></a>
<br/>                                                                                
<p>
The reason we have two functions is as follows.
For the RBF kernel exp(-g |xi - xj|^2), if we calculate
xi - xj first and then the squared norm, there are 3n operations.
Thus we instead consider exp(-g (|xi|^2 - 2dot(xi,xj) + |xj|^2)),
and by calculating all |xi|^2 once at the beginning,
the number of operations is reduced to 2n.
This is done for training. For prediction we cannot
do this, so a regular subroutine using 3n operations is
needed.

The easiest way to use your own kernel is
to put the same code in these two
subroutines, replacing the existing kernels.
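<p>
To make the difference concrete, here is a simplified sketch over plain double
arrays (svm.cpp actually works on sparse svm_node lists, so the real code
differs):
<pre>
	#include &lt;math.h&gt;

	/* training form (kernel_function): uses precomputed squared norms
	   xi_sq = |xi|^2 and xj_sq = |xj|^2, about 2n operations */
	double rbf_train(const double *xi, const double *xj, int n,
	                 double xi_sq, double xj_sq, double gamma)
	{
		double dot = 0;
		for(int k=0;k&lt;n;k++) dot += xi[k]*xj[k];
		return exp(-gamma*(xi_sq - 2*dot + xj_sq));
	}

	/* prediction form (k_function): no cached norms, about 3n operations */
	double rbf_predict(const double *xi, const double *xj, int n, double gamma)
	{
		double sq = 0;
		for(int k=0;k&lt;n;k++) { double d = xi[k]-xj[k]; sq += d*d; }
		return exp(-gamma*sq);
	}
</pre>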
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method ?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</I>, 13 (2002), 415-425.

<p>
"1-against-the rest" is a good method whose performance
is comparable to "1-against-1." We do the latter
simply because its training time is shorter.
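<p>
For intuition, the one-against-one prediction is a simple vote among the
k(k-1)/2 pairwise classifiers. A sketch (decide(i,j,x) stands for a
hypothetical binary decision function of the classifier trained on classes i
and j; the real logic lives in svm_predict_values of svm.cpp):
<pre>
	#include "svm.h"              /* for struct svm_node */

	enum { MAX_CLASSES = 32 };    /* hypothetical bound for this sketch */

	int one_vs_one_predict(int nr_class, const struct svm_node *x,
	                       double (*decide)(int, int, const struct svm_node *))
	{
		int vote[MAX_CLASSES] = {0};  /* vote[c]: pairwise wins of class c */
		for(int i=0;i&lt;nr_class;i++)
			for(int j=i+1;j&lt;nr_class;j++)
				if(decide(i,j,x) > 0) ++vote[i]; else ++vote[j];
		int best = 0;                 /* ties go to the smaller index */
		for(int c=1;c&lt;nr_class;c++)
			if(vote[c] > vote[best]) best = c;
		return best;
	}
</pre>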
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f420"><b>Q: After doing cross validation, why there is no model file outputted ?</b></a>
<br/>                                                                                
<p>
Cross validation is used for selecting good parameters.
After finding them, you want to re-train on the whole
data set without the -v option.
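<p>
For example (assuming the search selected C=8 and gamma=0.5; substitute your
own values):
<pre>
$ ./svm-train -v 5 -c 8 -g 0.5 train.data     # prints CV accuracy, no model file
$ ./svm-train -c 8 -g 0.5 train.data          # writes train.data.model
</pre>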
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f421"><b>Q: I would like to try different random partition for cross validation, how could I do it ?</b></a>
<br/>                                                                                
<p>
If you use the GNU C library,
the default seed is 1, so you always
get the same result from running svm-train -v.
To have different seeds, you can add the following code
to svm-train.c:
<pre>
#include &lt;time.h&gt;
</pre>
and at the beginning of the subroutine do_cross_validation(),
<pre>
srand(time(0));
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f422"><b>Q: I would like to solve L2-loss SVM (i.e., error term is quadratic). How should I modify the code ?</b></a>
<br/>                                                                                
<p>
It is extremely easy. Taking C-SVC as an example, only two 
places in svm.cpp have to be changed. 
First, modify the following lines of 
solve_c_svc from 
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, Cp, Cn, param->eps, si, param->shrinking);
</pre>
to
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, INF, INF, param->eps, si, param->shrinking);
</pre>
Second, in the class SVC_Q, declare C as 
a private variable:
<pre>
	double C;
</pre> 
In the constructor we assign param.C to it:
<pre>
        this->C = param.C;		
</pre>
Then, in the subroutine get_Q, after the for loop, add
<pre>
        if(i >= start && i < len) 
		data[i] += 1/C;
</pre>
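<p>
To see why this works (a sketch, assuming the primal penalty is (C/2) * sum xi_i^2):
the corresponding dual is
<pre>
	min_alpha   0.5 alpha^T (Q + I/C) alpha - e^T alpha,    alpha_i >= 0,
</pre>
so the upper bound alpha_i <= C of the L1-loss dual disappears (hence INF, INF)
and 1/C is added to the diagonal of Q (hence data[i] += 1/C).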

<p>
For one-class SVM, the modification is exactly the same. For SVR, you don't need an if statement like the above. Instead, you only need a simple assignment:
<pre>
	data[real_i] += 1/C;
</pre>


<p>
For large linear L2-loss SVM, please use
<a href=../liblinear>LIBLINEAR</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f424"><b>Q: How do I choose parameters for one-class svm as training data are in only one class?</b></a>
<br/>                                                                                
<p>
Have a pre-specified true positive rate in mind, and then search for
parameters which achieve a similar cross-validation accuracy.
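<p>
For example (a sketch; it assumes all training labels are +1, so the reported
cross-validation accuracy is the fraction of positives kept, which you compare
against your target true positive rate):
<pre>
$ ./svm-train -s 2 -n 0.01 -v 5 train.data
$ ./svm-train -s 2 -n 0.05 -v 5 train.data
$ ./svm-train -s 2 -n 0.10 -v 5 train.data
</pre>
Pick the nu (and kernel parameters) whose accuracy is closest to the desired rate.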
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f427"><b>Q: Why the code gives NaN (not a number) results?</b></a>
<br/>                                                                                
<p>
This rarely happens, but a few users have reported the problem.
It seems that the
computers they used for training libsvm had a VPN client
running. The VPN software has some bugs that cause this
problem. Please try to close or disconnect the VPN client.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f428"><b>Q: Why on windows sometimes grid.py fails?</b></a>
<br/>                                                                                
<p>
The error message is probably
<pre>
Traceback (most recent call last):
  File "grid.py", line 349, in ?
    main()
  File "grid.py", line 344, in main
    redraw(db)
  File "grid.py", line 132, in redraw
    gnuplot.write("set term windows\n")
IOError: [Errno 22] Invalid argument
</pre>

<p>There are some problems with using gnuplot on Windows.
So far we have not found a good solution. 
Please try closing the gnuplot windows and rerunning.
If the problem still occurs, comment out the following
two lines in grid.py by inserting "#" at the beginning:
<pre>
        redraw(db)
        redraw(db,1)
</pre>
Then you will get the accuracy only, without the cross-validation contours.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f429"><b>Q: Why grid.py/easy.py sometimes generates the following warning message?</b></a>
<br/>                                                                                
<pre>
Warning: empty z range [62.5:62.5], adjusting to [61.875:63.125]
Notice: cannot contour non grid data!
</pre>
<p>Nothing is wrong; please disregard the 
message. It comes from gnuplot when drawing
the contour.  
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f430"><b>Q: Why the sign of predicted labels and decision values are sometimes reversed?</b></a>
<br/>                                                                                
<p>Nothing is wrong. Very likely you have two labels +1/-1 and the first instance in your data
has label -1.
Think about the case of labels +5/+10. Since
SVM internally needs to use +1/-1, we
map +5/+10 to +1/-1 according to which
label appears first.
Hence a positive decision value implies
that we should predict the "internal" +1,
which may not be the +1 in the input file.
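<p>
To check the internal order programmatically, svm_get_labels (declared in svm.h)
returns the labels in their internal order. A minimal sketch for a two-class model:
<pre>
	int labels[2];
	svm_get_labels(model, labels);
	/* a positive decision value corresponds to labels[0] */
	printf("positive decision value => class %d\n", labels[0]);
</pre>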

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q5:_Probability_outputs"></a>
<a name="f425"><b>Q: Why training a probability model (i.e., -b 1) takes longer time</b></a>
<br/>                                                                                
<p>
To construct this probability model, we internally conduct a 
cross validation, which is more time-consuming than
regular training.
Hence, in general you should do parameter selection first without
-b 1, and use -b 1 only after good parameters have been
selected. In other words, avoid using -b 1 and -v
together.
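<p>
For example (assuming C=2 and gamma=0.5 were selected beforehand):
<pre>
$ ./svm-train -v 5 -c 2 -g 0.5 train.data               # parameter selection, no -b
$ ./svm-train -b 1 -c 2 -g 0.5 train.data               # final model with probability info
$ ./svm-predict -b 1 test.data train.data.model out     # probability estimates
</pre>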
<p align="right">
<a href="#_TOP">[Go Top]</a>  
