is available at 
<a href=http://www.bitwagon.com/tub.html>http://www.bitwagon.com/tub.html</a>.


<!--

This may happen only when the cache is large, but each cached row is
not large enough. <b>Note:</b> This problem is specific to the
GNU C library used on Linux.
The solution is as follows:

<p>
In our program, malloc() uses two methods
to allocate memory from the kernel: one is
sbrk() and the other is mmap(). sbrk is faster, but mmap
has a larger address
space, so malloc uses mmap only if the requested memory size is larger
than some threshold (128KB by default).
In the case where each row is not large enough (#elements &lt; 128k/sizeof(float)) but we need a large cache,
the address space for sbrk can be exhausted. The solution is to
lower the threshold to force malloc to use mmap
and to increase the maximum number of chunks that can be allocated
with mmap.

<p>
Therefore, in the main program (i.e. svm-train.c) you want
to have
<pre>
      #include &lt;malloc.h&gt;
</pre>
and then in main():
<pre>
      mallopt(M_MMAP_THRESHOLD, 32768);
      mallopt(M_MMAP_MAX,1000000);
</pre>
You can also set the environment variables instead
of writing them in the program:
<pre>
$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....
</pre>
More information can be found by running
<pre>
$ info libc "Malloc Tunable Parameters"
</pre>
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f417"><b>Q: How do I disable screen output of svm-train and svm-predict ?</b></a>
<br/>                                                                                
<p>
Simply update svm.cpp:
<pre>
#if 1
void info(char *fmt,...)
</pre>
to
<pre>
#if 0
void info(char *fmt,...)
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f418"><b>Q: I would like to use my own kernel but find out that there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?</b></a>
<br/>                                                                                
<p>
The reason why we have two functions is as follows.
For the RBF kernel exp(-g |xi - xj|^2), if we calculate
xi - xj first and then the norm square, there are 3n operations.
Thus we consider exp(-g (|xi|^2 - 2dot(xi,xj) + |xj|^2)),
and by calculating all |xi|^2 at the beginning,
the number of operations is reduced to 2n.
This is done for training. For prediction we cannot
do this, so a regular subroutine using the 3n operations is
needed.

The easiest way to use your own kernel is
to replace the kernel evaluation in both
subroutines with the same code.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f419"><b>Q: What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method ?</b></a>
<br/>                                                                                
<p>
It is one-against-one. We chose it after doing the following
comparison:
C.-W. Hsu and C.-J. Lin.
<A HREF="http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf">
A comparison of methods 
for multi-class support vector machines
</A>, 
<I>IEEE Transactions on Neural Networks</I>, 13(2002), 415-425.

<p>
"1-against-the rest" is a good method whose performance
is comparable to "1-against-1." We do the latter
simply because its training time is shorter.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f420"><b>Q: After doing cross validation, why is there no model file output?</b></a>
<br/>                                                                                
<p>
Cross validation is used for selecting good parameters.
After finding them, re-train on the whole
data set without the -v option.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f4201"><b>Q: Why are my cross-validation results different from those in the Practical Guide?</b></a>
<br/>                                                                                
<p>

Due to random partitioning of
the data, CV accuracy values
may differ across systems.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f421"><b>Q: But on some systems CV accuracy is the same over several runs. How can I use different data partitions?</b></a>
<br/>                                                                                
<p>
If you use the GNU C library,
the default seed 1 is used, so you always
get the same result from running svm-train -v.
To have different seeds, add the following code
to svm-train.c:
<pre>
#include &lt;time.h&gt;
</pre>
and at the beginning of the subroutine do_cross_validation():
<pre>
srand(time(0));
</pre>
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f422"><b>Q: I would like to solve L2-loss SVM (i.e., error term is quadratic). How should I modify the code ?</b></a>
<br/>                                                                                
<p>
It is extremely easy. Taking c-svc for example, to solve
<p>
min_w w^Tw/2 + C \sum max(0, 1 - y_i (w^Tx_i+b))^2,
<p>
only two 
places of svm.cpp have to be changed. 
First, modify the following line of 
solve_c_svc from 
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, Cp, Cn, param->eps, si, param->shrinking);
</pre>
to
<pre>
	s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
		alpha, INF, INF, param->eps, si, param->shrinking);
</pre>
Second, in the SVC_Q class, declare C as
a private variable:
<pre>
	double C;
</pre> 
In the constructor, we initialize it with param.C:
<pre>
        this->C = param.C;		
</pre>
Then in the subroutine get_Q, after the for loop, add
<pre>
        if(i >= start && i < len) 
		data[i] += 0.5/C;
</pre>

<p>
For one-class svm, the modification is exactly the same. For SVR, you don't need an if statement like the above. Instead, you only need a simple assignment:
<pre>
	data[real_i] += 0.5/C;
</pre>


<p>
For large linear L2-loss SVM, please use
<a href=../liblinear>LIBLINEAR</a>.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f424"><b>Q: How do I choose parameters for one-class svm as training data are in only one class?</b></a>
<br/>                                                                                
<p>
Have a pre-specified true positive rate in mind, and then search for
parameters which achieve a similar cross-validation accuracy.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f427"><b>Q: Why does the code give NaN (not a number) results?</b></a>
<br/>                                                                                
<p>
This rarely happens, but a few users have reported the problem.
It seems that the
computers they used for training libsvm had a VPN client
running. The VPN software has some bugs that cause this
problem. Please try to close or disconnect the VPN client.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f428"><b>Q: Why does grid.py sometimes fail on Windows?</b></a>
<br/>                                                                                
<p>

This problem shouldn't happen after version
2.85. If you are using an earlier version,
please download the latest one.

<!--
<p>
If you are using earlier 
versions, the error message is probably
<pre>
Traceback (most recent call last):
  File "grid.py", line 349, in ?
    main()
  File "grid.py", line 344, in main
    redraw(db)
  File "grid.py", line 132, in redraw
    gnuplot.write("set term windows\n")
IOError: [Errno 22] Invalid argument
</pre>

<p>Please try to close the gnuplot windows and rerun.
If the problem still occurs, comment out the following
two lines in grid.py by inserting "#" at the beginning:
<pre>
        redraw(db)
        redraw(db,1)
</pre>
Then you get only the accuracy, without the cross-validation contours.
-->
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f429"><b>Q: Why do grid.py/easy.py sometimes generate the following warning message?</b></a>
<br/>                                                                                
<pre>
Warning: empty z range [62.5:62.5], adjusting to [61.875:63.125]
Notice: cannot contour non grid data!
</pre>
<p>Nothing is wrong; please disregard the
message. It comes from gnuplot when drawing
the contour.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f430"><b>Q: Why are the signs of predicted labels and decision values sometimes reversed?</b></a>
<br/>                                                                                
<p>Nothing is wrong. Very likely you have two labels +1/-1 and the first instance in your data
has -1.
Consider the case of labels +5/+10. Since
SVM internally needs +1/-1, we map +5/+10 to +1/-1 according to which
label appears first.
Hence a positive decision value implies
that we should predict the "internal" +1,
which may not be the +1 in the input file.

<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f431"><b>Q: I don't know class labels of test data. What should I put in the first column of the test file?</b></a>
<br/>                                                                                
<p>Any value is fine. In this situation, what you should use is the output file of svm-predict, which gives the predicted class labels.


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f432"><b>Q: How can I use OpenMP to parallelize LIBSVM on a multicore/shared-memory computer?</b></a>
<br/>                                                                                

<p>It is very easy if you are using GCC 4.2
or later.

<p> In Makefile, add -fopenmp  to CFLAGS.

<p> In class SVC_Q of svm.cpp, modify the for loop
of get_Q to:
<pre>
#pragma omp parallel for private(j) 
			for(j=start;j&lt;len;j++)
</pre>
Note that j must be declared outside the for loop.
<p> In the subroutine svm_predict_values of svm.cpp, add one line to the for loop:
<pre>
#pragma omp parallel for private(i) 
		for(i=0;i&lt;l;i++)
			kvalue[i] = Kernel::k_function(x,model-&gt;SV[i],model-&gt;param);
</pre>

<p> Then rebuild the package. Kernel evaluations in training/testing will be parallelized. Here is an example of running this modification on
an 8-core machine using the data set
<a href=../libsvmtools/datasets/binary/ijcnn1.bz2>ijcnn1</a>:

<p> 8 cores:
<pre>
%setenv OMP_NUM_THREADS 8
%time svm-train -c 16 -g 4 -m 400 ijcnn1
27.1sec
</pre>
1 core:
<pre>
%setenv OMP_NUM_THREADS 1
%time svm-train -c 16 -g 4 -m 400 ijcnn1
79.8sec
</pre>
For this data, kernel evaluations take 80% of training time.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q4:_Training_and_prediction"></a>
<a name="f433"><b>Q: How could I know which training instances are support vectors?</b></a>
<br/>                                                                                

<p>
It's very simple. Please replace 
<pre>
			if(nonzero[i]) model->SV[p++] = x[i];
</pre>
in svm_train() of svm.cpp with
<pre>
			if(nonzero[i]) 
			{
				model->SV[p++] = x[i];
				info("%d\n", perm[i]);
			}
</pre>
If there are many requests, we may
provide a function to return the indices
of support vectors. In the meantime,
if you need this information in your code,
you can add the array nonzero to the model
structure. This array has the same size as
the number of data points, so alternatively you can
store only the indices of the support vectors.


<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
  <a name="/Q5:_Probability_outputs"></a>
<a name="f425"><b>Q: Why does training a probability model (i.e., -b 1) take longer?</b></a>
<br/>                                                                                
<p>
To construct this probability model, we internally conduct a
cross validation, which is more time-consuming than
regular training.
Hence, in general you should do parameter selection first without
-b 1, and use -b 1 only after good parameters have been
selected. In other words, avoid using -b 1 and -v
together.
<p align="right">
<a href="#_TOP">[Go Top]</a>  
<hr/>
