is available at 
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.


<!--

This may happen only when the cache is large but each cached row is
not large enough. <b>Note:</b> This problem is specific to the
GNU C library used in Linux.
The solution is as follows:

<p>
In our program, malloc() uses two methods
to allocate memory from the kernel: sbrk() and mmap().
sbrk() is faster, but mmap() has a larger address space,
so malloc() uses mmap() only if the requested size is larger
than some threshold (128k by default).
In the case where each row is not large enough (#elements &lt; 128k/sizeof(float)) but we need a large cache,
the address space for sbrk() can be exhausted. The solution is to
lower the threshold to force malloc() to use mmap(),
and to increase the maximum number of chunks allocated
with mmap().

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by 
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable screen output of svm-train and svm-predict ?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but find out that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?</b></a>
<br/>                                                                                
<p>
The reason we have two functions is as follows.
For the RBF kernel exp(-g |xi - xj|^2), if we calculate
xi - xj first and then the squared norm, about 3n operations are needed.
Thus we instead use exp(-g (|xi|^2 - 2 dot(xi,xj) + |xj|^2));
by calculating all |xi|^2 once at the beginning,
the number of operations is reduced to about 2n.
This is done for training. For prediction we cannot
precompute these values, so a regular subroutine using the
3n operations is needed.

<p>
The easiest way to use your own kernel is
to put the same code in both
subroutines, replacing an existing kernel.
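<p>
To illustrate the precomputation trick, here is a standalone sketch (not LIBSVM's actual code), assuming dense vectors of dimension n whose squared norms have been computed once up front:
<pre>
#include &lt;math.h&gt;

/* illustrative helper: dot product of two dense vectors */
static double dot(const double *a, const double *b, int n)
{
	double s = 0;
	for(int k = 0; k &lt; n; k++)
		s += a[k]*b[k];
	return s;
}

/* RBF value using precomputed |xi|^2 and |xj|^2: about 2n operations */
static double rbf(const double *xi, const double *xj, int n,
                  double xi_sq, double xj_sq, double gamma)
{
	return exp(-gamma*(xi_sq - 2*dot(xi,xj,n) + xj_sq));
}
</pre>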
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method ?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</A></I>, 13(2002), 415-425.

<p>
"1-against-the rest" is a good method whose performance
is comparable to "1-against-1." We do the latter
simply because its training time is shorter.
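<p>
Schematically, for k classes one-against-one trains k(k-1)/2 binary classifiers and predicts by voting. A minimal sketch, where binary_predict() is a hypothetical function returning a positive value when the (i,j) classifier favors class i:
<pre>
#include &lt;stdlib.h&gt;

int binary_predict(int i, int j, const double *x);	/* hypothetical */

/* returns the class with the most pairwise wins */
int predict_one_against_one(int k, const double *x)
{
	int *vote = (int *)calloc(k, sizeof(int));
	for(int i = 0; i &lt; k; i++)
		for(int j = i+1; j &lt; k; j++)
			vote[binary_predict(i, j, x) &gt; 0 ? i : j]++;
	int best = 0;
	for(int c = 1; c &lt; k; c++)
		if(vote[c] &gt; vote[best]) best = c;
	free(vote);
	return best;
}
</pre>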
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f420"><b>Q: After doing cross validation, why there is no model file outputted ?</b></a>
<br/>                                                                                
<p>
Cross validation is used for selecting good parameters.
After finding them, re-train on the whole
data set without the -v option.
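<p>
For example (the parameter values here are only placeholders):
<pre>
$ svm-train -v 5 -c 8 -g 0.5 train.txt    # cross validation; prints accuracy only
$ svm-train -c 8 -g 0.5 train.txt         # re-train; writes train.txt.model
</pre>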
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4201"><b>Q: Why my cross-validation results are different from those in the Practical Guide?</b></a>
<br/>                                                                                
<p>

Because the data are randomly partitioned,
CV accuracy values may differ on different systems.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f421"><b>Q: But on some systems CV accuracy is the same in several runs. How could I use different data partitions?</b></a>
<br/>                                                                                
<p>
If you use the GNU C library,
the default seed is 1, so you always
get the same result when running svm-train -v.
To use different seeds, add the following code
to svm-train.c:
<pre>
#include &lt;time.h&gt;
</pre>
and at the beginning of the subroutine do_cross_validation(), add
<pre>
srand(time(0));
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f422"><b>Q: I would like to solve L2-loss SVM (i.e., error term is quadratic). How should I modify the code ?</b></a>
<br/>                                                                                
<p>
It is extremely easy. Taking c-svc as an example, to solve
<p>
min_w w^Tw/2 + C \sum max(0, 1 - y_i (w^Tx_i + b))^2,
<p>
only two
places in svm.cpp have to be changed.
First, modify the following line of 
solve_c_svc from 
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, Cp, Cn, param->eps, si, param->shrinking);
</pre>
to
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, INF, INF, param->eps, si, param->shrinking);
</pre>
Second, in the class SVC_Q, declare C as
a private variable:
<pre>
	double C;
</pre> 
In the constructor, assign param.C to it:
<pre>
        this->C = param.C;		
</pre>
Then in the subroutine get_Q, after the for loop, add
<pre>
        if(i >= start && i < len) 
		data[i] += 0.5/C;
</pre>

<p>
For one-class svm, the modification is exactly the same. For SVR, you don't need an if statement like the one above. Instead, you only need a simple assignment:
<pre>
	data[real_i] += 0.5/C;
</pre>


<p>
For large linear L2-loss SVM, please use
<a href=../liblinear>LIBLINEAR</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f424"><b>Q: How do I choose parameters for one-class svm as training data are in only one class?</b></a>
<br/>                                                                                
<p>
Have a pre-specified true positive rate in mind, and then search for
parameters which achieve a similar cross-validation accuracy.
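<p>
For example, with the one-class option -s 2 you mainly tune -n (nu) and -g (the values here are only placeholders):
<pre>
$ svm-train -s 2 -n 0.05 -g 0.5 train.txt
</pre>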
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f427"><b>Q: Why the code gives NaN (not a number) results?</b></a>
<br/>                                                                                
<p>
This rarely happens, but a few users have reported
the problem. It seems that the
computers they used for training libsvm had a VPN client
running. The VPN software has some bugs that cause this
problem. Please try closing or disconnecting the VPN client.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f428"><b>Q: Why on windows sometimes grid.py fails?</b></a>
<br/>                                                                                
<p>

This problem shouldn't happen after version
2.85. If you are using earlier versions,
please download the latest one.

<!--
<p>
If you are using earlier 
versions, the error message is probably
<pre>
Traceback (most recent call last):
  File "grid.py", line 349, in ?
    main()
  File "grid.py", line 344, in main
    redraw(db)
  File "grid.py", line 132, in redraw
    gnuplot.write("set term windows\n")
IOError: [Errno 22] Invalid argument
</pre>

<p>Please try to close gnuplot windows and rerun.
If the problem still occurs, comment the following
two lines in grid.py by inserting "#" in the beginning:
<pre>
        redraw(db)
        redraw(db,1)
</pre>
Then you get the accuracy only, without the cross-validation contours.
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f429"><b>Q: Why grid.py/easy.py sometimes generates the following warning message?</b></a>
<br/>                                                                                
<pre>
Warning: empty z range [62.5:62.5], adjusting to [61.875:63.125]
Notice: cannot contour non grid data!
</pre>
<p>Nothing is wrong and please disregard the 
message. It is from gnuplot when drawing
the contour.  
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f430"><b>Q: Why the sign of predicted labels and decision values are sometimes reversed?</b></a>
<br/>                                                                                
<p>Nothing is wrong. Very likely you have two labels +1/-1 and the first instance in your data
has -1.
Think about the case of labels +5/+10. Since
SVM needs to use +1/-1, internally
we map +5/+10 to +1/-1 according to which
label appears first.
Hence a positive decision value implies
that we should predict the "internal" +1,
which may not be the +1 in the input file.

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f431"><b>Q: I don't know class labels of test data. What should I put in the first column of the test file?</b></a>
<br/>                                                                                
<p>Any value is ok. In this situation, what you will use is the output file of svm-predict, which gives predicted class labels.


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f432"><b>Q: How can I use OpenMP to parallelize LIBSVM on a multicore/shared-memory computer?</b></a>
<br/>                                                                                

<p>It is very easy if you are using GCC 4.2
or after. 

<p> In Makefile, add -fopenmp  to CFLAGS.

<p> In class SVC_Q of svm.cpp, modify the for loop
of get_Q to:
<pre>
#pragma omp parallel for private(j) 
			for(j=start;j&lt;len;j++)
</pre>
Note that j must be declared outside the for loop.
<p> In the subroutine svm_predict_values of svm.cpp, add one line to the for loop:
<pre>
#pragma omp parallel for private(i) 
		for(i=0;i&lt;l;i++)
			kvalue[i] = Kernel::k_function(x,model-&gt;SV[i],model-&gt;param);
</pre>

<p> Then rebuild the package. Kernel evaluations in training/testing will be parallelized. An example of running this modification on
an 8-core machine using the data set
<a href=../libsvmtools/datasets/binary/ijcnn1.bz2>ijcnn1</a>:

<p> 8 cores:
<pre>
%setenv OMP_NUM_THREADS 8
%time svm-train -c 16 -g 4 -m 400 ijcnn1
27.1sec
</pre>
1 core:
<pre>
%setenv OMP_NUM_THREADS 1
%time svm-train -c 16 -g 4 -m 400 ijcnn1
79.8sec
</pre>
For this data set, kernel evaluations take about 80% of the training time.
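<p> These timings are consistent with Amdahl's law: if about 80% of the work is parallelized, the expected speedup on 8 cores is 1/(0.2 + 0.8/8), about 3.3, close to the observed 79.8/27.1, about 2.9.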
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f433"><b>Q: How could I know which training instances are support vectors?</b></a>
<br/>                                                                                

<p>
It's very simple. Please replace 
<pre>
			if(nonzero[i]) model->SV[p++] = x[i];
</pre>
in svm_train() of svm.cpp with
<pre>
			if(nonzero[i]) 
			{
				model->SV[p++] = x[i];
				info("%d\n", perm[i]);
			}
</pre>
If there are many requests, we may
provide a function to return the indices
of support vectors. In the meantime,
if you need such information in your code,
you can add the array nonzero to the model
structure. This array has the same length as
the number of data points, so alternatively you can
store only the indices of the support vectors.
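<p>
A minimal sketch of the latter approach (the field name sv_indices is only an illustration; newer LIBSVM versions may already provide something similar):
<pre>
/* in svm.h, inside struct svm_model (hypothetical field) */
int *sv_indices;	/* original positions of the support vectors */

/* in svm_train() of svm.cpp, extend the existing SV-collecting loop */
model-&gt;sv_indices = Malloc(int, total_sv);
for(int i = 0, p = 0; i &lt; l; i++)
	if(nonzero[i])
	{
		model-&gt;SV[p] = x[i];
		model-&gt;sv_indices[p++] = perm[i]+1;	/* 1-based, matching data file lines */
	}
</pre>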


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q5:_Probability_outputs"></a>
<a name="f425"><b>Q: Why training a probability model (i.e., -b 1) takes a longer time?</b></a>
<br/>                                                                                
<p>
To construct this probability model, we internally conduct a
cross validation, which is more time consuming than
regular training.
Hence, in general you do parameter selection first without
-b 1, and use -b 1 only after good parameters have been
selected. In other words, avoid using -b 1 and -v
together.
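<p>
For example (the parameter values here are only placeholders):
<pre>
$ svm-train -c 8 -g 0.5 -b 1 train.txt
$ svm-predict -b 1 test.txt train.txt.model out.txt
</pre>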
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
