is available at 
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.


<!--

This may happen only when the cache is large but each cached row is
not large enough. <b>Note:</b> This problem is specific to the
GNU C library used on Linux.
The solution is as follows:

<p>
In our program, malloc() uses two methods to request memory from the
kernel: sbrk() and mmap(). sbrk() is faster, but mmap() has a larger
address space, so malloc() uses mmap() only when the requested size
exceeds some threshold (128k by default).
When each row is not large enough (#elements < 128k/sizeof(float)) but
a large cache is needed, the address space available to sbrk() can be
exhausted. The solution is to lower the threshold to force malloc() to
use mmap(), and to increase the maximum number of chunks that may be
allocated with mmap().

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by 
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable screen output of svm-train and svm-predict ?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but find out that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?</b></a>
<br/>                                                                                
<p>
There are two functions for the following reason.
For the RBF kernel exp(-g |xi - xj|^2), computing
xi - xj first and then the squared norm takes 3n operations.
We therefore use exp(-g (|xi|^2 - 2 dot(xi,xj) + |xj|^2));
by precomputing all the |xi|^2 at the beginning,
the cost per kernel evaluation drops to 2n operations.
This trick applies only to training. For prediction it is not
possible, so a regular subroutine with 3n operations is
needed.

The easiest way to use your own kernel is
to replace an existing kernel with the same code in both
subroutines.
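<p>
For instance, here is a hedged sketch (not part of the distribution) that
reuses the LINEAR slots in svm.cpp for a custom cubic kernel
K(x,y) = dot(x,y)^3, keeping the two subroutines consistent:
<pre>
	// training-time kernel, called through the kernel_function pointer
	double kernel_linear(int i, int j) const
	{
		double d = dot(x[i],x[j]);
		return d*d*d;	// custom kernel instead of plain dot(x[i],x[j])
	}

	// prediction-time kernel: the LINEAR case inside Kernel::k_function
	case LINEAR:
	{
		double d = dot(x,y);
		return d*d*d;	// must compute exactly the same K(x,y) as above
	}
</pre>
Run svm-train with -t 0 so that this modified "linear" kernel is used.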
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method ?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</I>, 13 (2002), 415-425.

<p>
"1-against-the rest" is a good method whose performance
is comparable to "1-against-1." We do the latter
simply because its training time is shorter.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f420"><b>Q: After doing cross validation, why there is no model file outputted ?</b></a>
<br/>                                                                                
<p>
Cross validation is used for selecting good parameters.
After finding them, re-train on the whole
data set without the -v option.
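For example (hypothetical file name and parameter values):
<pre>
$ svm-train -v 5 -c 8 -g 0.5 train.data
$ svm-train -c 8 -g 0.5 train.data
</pre>
The first command only prints the cross-validation accuracy; the second
re-trains on all of train.data and writes train.data.model.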
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4201"><b>Q: Why my cross-validation results are different from those in the Practical Guide?</b></a>
<br/>                                                                                
<p>

Because the data are randomly partitioned,
CV accuracy values may differ across systems.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f421"><b>Q: But on some systems CV accuracy is the same in several runs. How could I use different data partitions?</b></a>
<br/>                                                                                
<p>
If you use the GNU C library, rand() always starts
from the default seed 1, so every run of
svm-train -v gives the same result.
To get different seeds, add the following code
to svm-train.c:
<pre>
#include &lt;time.h&gt;
</pre>
and at the beginning of the subroutine do_cross_validation(),
<pre>
srand(time(0));
</pre>
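Combined, the change looks like this (a sketch; everything else in
do_cross_validation() stays as it is):
<pre>
#include &lt;time.h&gt;

void do_cross_validation()
{
	srand(time(0));	/* seed from the clock: different partitions per run */
	...
}
</pre>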
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f422"><b>Q: I would like to solve L2-loss SVM (i.e., error term is quadratic). How should I modify the code ?</b></a>
<br/>                                                                                
<p>
It is extremely easy. Taking C-SVC as an example, to solve
<p>
min_{w,b} w^Tw/2 + C \sum_i max(0, 1 - y_i (w^Tx_i + b))^2,
<p>
only two
places in svm.cpp have to be changed.
First, change the call in
solve_c_svc from
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, Cp, Cn, param->eps, si, param->shrinking);
</pre>
to
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, INF, INF, param->eps, si, param->shrinking);
</pre>
Second, in the class SVC_Q, declare C as
a private variable:
<pre>
	double C;
</pre> 
In the constructor, assign param.C to it:
<pre>
        this->C = param.C;		
</pre>
Then in the subroutine get_Q, after the for loop, add
<pre>
        if(i >= start && i < len) 
		data[i] += 0.5/C;
</pre>

<p>
This works because the dual of the L2-loss problem is the same as the
standard L1-loss dual except that the upper bound C disappears (hence
INF) and 1/(2C) is added to the diagonal of the kernel matrix; the code
above is exactly that diagonal shift.
<p>
For one-class SVM, the modification is exactly the same. For SVR, you don't need an if statement like the one above. Instead, you only need a simple assignment:
<pre>
	data[real_i] += 0.5/C;
</pre>


<p>
For large linear L2-loss SVM, please use
<a href=../liblinear>LIBLINEAR</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f424"><b>Q: How do I choose parameters for one-class svm as training data are in only one class?</b></a>
<br/>                                                                                
<p>
Have a pre-specified true positive rate in mind, and then search for
parameters that achieve a similar cross-validation accuracy.
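For example, you might scan nu (and gamma) with cross validation
(hypothetical file name and values):
<pre>
$ svm-train -s 2 -n 0.01 -v 5 train.data
$ svm-train -s 2 -n 0.05 -v 5 train.data
$ svm-train -s 2 -n 0.1 -v 5 train.data
</pre>
Since all training labels are in one class, the reported CV accuracy is
the fraction of held-out points predicted as positive, i.e. an estimate
of the true positive rate; pick the parameters whose value is closest to
your target.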
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f427"><b>Q: Why the code gives NaN (not a number) results?</b></a>
<br/>                                                                                
<p>
This rarely happens, but a few users have reported the problem.
It seems that the
computers they used for training libsvm had a VPN client
running. The VPN software has some bugs that cause this
problem. Please try closing or disconnecting the VPN client.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f428"><b>Q: Why on windows sometimes grid.py fails?</b></a>
<br/>                                                                                
<p>

This problem shouldn't happen after version
2.85. If you are using an earlier version,
please download the latest one.

<!--
<p>
If you are using earlier 
versions, the error message is probably
<pre>
Traceback (most recent call last):
  File "grid.py", line 349, in ?
    main()
  File "grid.py", line 344, in main
    redraw(db)
  File "grid.py", line 132, in redraw
    gnuplot.write("set term windows\n")
IOError: [Errno 22] Invalid argument
</pre>

<p>Please try closing the gnuplot window and rerunning.
If the problem persists, comment out the following
two lines in grid.py by inserting "#" at the beginning:
<pre>
        redraw(db)
        redraw(db,1)
</pre>
Then you will get the accuracy values only, without the cross-validation contours.
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f429"><b>Q: Why grid.py/easy.py sometimes generates the following warning message?</b></a>
<br/>                                                                                
<pre>
Warning: empty z range [62.5:62.5], adjusting to [61.875:63.125]
Notice: cannot contour non grid data!
</pre>
<p>Nothing is wrong; please disregard the
message. It comes from gnuplot when drawing
the contour.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f430"><b>Q: Why the sign of predicted labels and decision values are sometimes reversed?</b></a>
<br/>                                                                                
<p>Nothing is wrong. Very likely you have two labels +1/-1 and the first instance in your data
has label -1.
Think about the case of labels +5/+10. Since
SVM needs to use +1/-1 internally,
we map +5/+10 to +1/-1 according to which
label appears first in the data.
Hence a positive decision value implies
that we should predict the "internal" +1,
which may not be the +1 in the input file.
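<p>
You can check the internal order in the model file, whose "label" line
lists the classes in mapped order (hypothetical file name):
<pre>
$ grep ^label train.data.model
label 5 10
</pre>
Here +5 was mapped to the internal +1.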

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f431"><b>Q: I don't know class labels of test data. What should I put in the first column of the test file?</b></a>
<br/>                                                                                
<p>Any value is fine. In this situation, what you will use is the output file of svm-predict, which gives the predicted class labels.


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f432"><b>Q: How can I use OpenMP to parallelize LIBSVM on a multicore/shared-memory computer?</b></a>
<br/>                                                                                

<p>It is very easy if you are using GCC 4.2
or later.

<p> In Makefile, add -fopenmp  to CFLAGS.

<p> In class SVC_Q of svm.cpp, modify the for loop
of get_Q to:
<pre>
#pragma omp parallel for private(j) 
			for(j=start;j&lt;len;j++)
</pre>
Note that j must be declared outside the for loop.
<p> In the subroutine svm_predict_values of svm.cpp, add one line to the for loop:
<pre>
#pragma omp parallel for private(i) 
		for(i=0;i&lt;l;i++)
			kvalue[i] = Kernel::k_function(x,model-&gt;SV[i],model-&gt;param);
</pre>

<p> Then rebuild the package. Kernel evaluations in training/testing will be parallelized. An example of running this modification on
an 8-core machine using the data set
<a href=../libsvmtools/datasets/binary/ijcnn1.bz2>ijcnn1</a>:

<p> 8 cores:
<pre>
%setenv OMP_NUM_THREADS 8
%time svm-train -c 16 -g 4 -m 400 ijcnn1
27.1sec
</pre>
1 core:
<pre>
%setenv OMP_NUM_THREADS 1
%time svm-train -c 16 -g 4 -m 400 ijcnn1
79.8sec
</pre>
For this data, kernel evaluations take 80% of training time.
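<p> If your shell is bash rather than csh, the equivalent commands are:
<pre>
$ export OMP_NUM_THREADS=8
$ time ./svm-train -c 16 -g 4 -m 400 ijcnn1
</pre>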
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f433"><b>Q: How could I know which training instances are support vectors?</b></a>
<br/>                                                                                

<p>
It's very simple. Please replace 
<pre>
			if(nonzero[i]) model->SV[p++] = x[i];
</pre>
in svm_train() of svm.cpp with
<pre>
			if(nonzero[i]) 
			{
				model->SV[p++] = x[i];
				info("%d\n", perm[i]);
			}
</pre>
If there are many requests, we may
provide a function to return the indices
of support vectors. In the meantime,
if you need this information in your code,
you can add the array nonzero to the model
structure. This array has one entry per
training instance, so alternatively you can
store only the indices of the support vectors, as sketched below.
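<p>
A hedged sketch of that alternative (the field name sv_indices is our
own choice here, not necessarily present in your svm.h):
<pre>
struct svm_model
{
	/* ... existing fields ... */
	int *sv_indices;	/* index of each SV in the original training set */
};

	/* in svm_train(), after allocating sv_indices with nSV entries: */
	if(nonzero[i])
	{
		model->sv_indices[p] = perm[i];	/* 0-based, as printed above */
		model->SV[p++] = x[i];
	}
</pre>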


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q5:_Probability_outputs"></a>
<a name="f425"><b>Q: Why training a probability model (i.e., -b 1) takes a longer time?</b></a>
<br/>                                                                                
<p>
To construct this probability model, we internally conduct a
cross validation, which is more time consuming than
a regular training.
Hence, in general you should do parameter selection first without
-b 1, and use -b 1 only after good parameters have been
selected. In other words, avoid using -b 1 and -v
together.
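For example (hypothetical file names and parameter values):
<pre>
$ svm-train -v 5 -c 2 -g 2 train.data
$ svm-train -b 1 -c 2 -g 2 train.data
$ svm-predict -b 1 test.data train.data.model output.txt
</pre>
The first command selects parameters without -b 1; the last two train
the probability model and predict with probability estimates.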
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
