smo_mex.c

Part of a MATLAB toolbox for support vector machines.
Language: C
/* --------------------------------------------------------------------
 smo_mex.c: MEX-file for Sequential Minimal Optimizer.

 Compile:
  mex smo_mex.c ../kernels/kernel_fun.c

 Synopsis:
  [Alpha,bias,nsv,kercnt,trnerr,margin] =
      smo_mex(data,labels,ker,arg,C,eps,tol,init_Alpha,init_bias)

 Input:
   data [dim x num_data] Training vectors.
   labels [1 x num_data] Labels (1 or 2).
   ker [string] Kernel identifier.
   arg [1 x nargs] Kernel argument(s).
   C [1x1] or [2x1] or [num_data x 1] Regularization constant.
   eps [1x1] SMO parameter (default 0.001).
   tol [1x1] Tolerance of KKT-conditions (default 0.001).
   init_Alpha [num_data x 1] Initial values of optimized Lagrangians.
   init_bias [1x1] Initial bias value.

 Output:
   Alpha [num_data x 1] Optimized Lagrangians.
   bias [1x1] Bias.
   nsv [1x1] Number of Support Vectors (number of Alpha > ZERO_LIM).
   kercnt [1x1] Number of kernel evaluations.
   trnerr [1x1] Training classification error.
   margin [1x1] Margin.

 About: Statistical Pattern Recognition Toolbox
 (C) 1999-2003, Written by Vojtech Franc and Vaclav Hlavac
 <a href="http://www.cvut.cz">Czech Technical University Prague</a>
 <a href="http://www.feld.cvut.cz">Faculty of Electrical Engineering</a>
 <a href="http://cmp.felk.cvut.cz">Center for Machine Perception</a>

 Modifications:
 23-may-2004, VF
 14-January-2003, VF
 23-october-2001, V.Franc
 16-October-2001, V.Franc
 27-september-2001, V.Franc, rounding of a2 in takeStep removed.
 23-September-2001, V.Franc, different trade-off C1 and C2.
 22-September-2001, V.Franc, kernel.c used.
 19-September-2001, V.Franc, computation of nsv and nerr added.
 17-September-2001, V.Franc, created.
-------------------------------------------------------------------- */

#include "mex.h"
#include "matrix.h"
#include <math.h>
#include <stdlib.h>
#include <string.h>
#include "kernel_fun.h"

/* If RANDOM is defined then a random element is used within the
   optimization procedure, as originally suggested. */
#define RANDOM

#define ZERO_LIM   1e-9     /* patterns with alpha > ZERO_LIM are SV */

#define MAX(A,B)   (((A) > (B)) ? (A) : (B))
#define MIN(A,B)   (((A) < (B)) ? (A) : (B))
#define C(arg)     (const_C[arg])

/* --- Global variables ---------------------------------------------- */

unsigned long N = 0;       /* number of training patterns */
double *const_C;           /* trade-off constants */
double tolerance = 0.001;  /* tolerance in KKT fulfilment */
double eps = 0.001;        /* minimal Lagrangian change */

double *data;              /* pointer at patterns */
double *target;            /* pointer at labels */
double *error_cache;       /* error cache */
double *alpha;             /* Lagrange multipliers */
double *b;                 /* Bias (threshold) */

/* ==============================================================
 Implementation of Sequential Minimal Optimizer (SMO)
============================================================== */

/* --------------------------------------------------------------
 Computes value of the learned function for k-th pattern.
-------------------------------------------------------------- */
double learned_func( long k )
{
   double s = 0.;
   long i;

   for( i = 0; i < N; i++ ) {
      if( alpha[i] > 0 )
         s += alpha[i]*target[i]*kernel(i,k);
   }
   s -= *b;

   return( s );
}

/* --------------------------------------------------------------
 Optimizes objective function for i1-th and i2-th pattern.
-------------------------------------------------------------- */
long takeStep( long i1, long i2 )
{
   double y1, y2, s;
   long i;
   double alpha1, alpha2;
   double a1, a2;
   double E1, E2, L, H, k11, k22, k12, eta, Lobj, Hobj;
   double c1, c2;
   double t;
   double b1, b2, bnew;
   double delta_b;
   double t1, t2;

   if( i1 == i2 ) return( 0 );

   alpha1 = alpha[i1];
   y1 = target[i1];
   if( alpha1 > 0 && alpha1 < C(i1) )
      E1 = error_cache[i1];
   else
      E1 = learned_func(i1) - y1;

   alpha2 = alpha[i2];
   y2 = target[i2];
   if( alpha2 > 0 && alpha2 < C(i2) )
      E2 = error_cache[i2];
   else
      E2 = learned_func(i2) - y2;

   s = y1 * y2;

   if( s < 0 ) {
      L = MAX(0, alpha2 - alpha1);
      H = MIN(C(i2), C(i1) + alpha2 - alpha1);
   }
   else {
      L = MAX(0, alpha2 + alpha1 - C(i1));
      H = MIN(C(i2), alpha2 + alpha1);
   }

   if( L == H ) return( 0 );

   k11 = kernel(i1,i1);
   k12 = kernel(i1,i2);
   k22 = kernel(i2,i2);
   eta = 2 * k12 - k11 - k22;

   if( eta < 0 ) {
      a2 = alpha2 + y2 * (E2 - E1) / eta;
      if( a2 < L )
         a2 = L;
      else if( a2 > H )
         a2 = H;
   }
   else {
      c1 = eta/2;
      c2 = y2 * (E1-E2) - eta * alpha2;
      Lobj = c1 * L * L + c2 * L;
      Hobj = c1 * H * H + c2 * H;
      if( Lobj > Hobj + eps )
         a2 = L;
      else if( Lobj < Hobj - eps )
         a2 = H;
      else
         a2 = alpha2;
   }

   if( fabs(a2-alpha2) < eps*(a2+alpha2+eps) ) return( 0 );

   a1 = alpha1 - s * (a2 - alpha2);
   if( a1 < 0 ) {
      a2 += s * a1;
      a1 = 0;
   }
   else if( a1 > C(i1) ) {
      t = a1 - C(i1);
      a2 += s * t;
      a1 = C(i1);
   }

   if( a1 > 0 && a1 < C(i1) )
      bnew = *b + E1 + y1 * (a1 - alpha1) * k11 + y2 * (a2 - alpha2) * k12;
   else {
      if( a2 > 0 && a2 < C(i2) )
         bnew = *b + E2 + y1 * (a1 - alpha1) * k12 + y2 * (a2 - alpha2) * k22;
      else {
         b1 = *b + E1 + y1 * (a1 - alpha1) * k11 + y2 * (a2 - alpha2) * k12;
         b2 = *b + E2 + y1 * (a1 - alpha1) * k12 + y2 * (a2 - alpha2) * k22;
         bnew = (b1 + b2) / 2;
      }
   }

   delta_b = bnew - *b;
   *b = bnew;

   t1 = y1 * (a1-alpha1);
   t2 = y2 * (a2-alpha2);

   for( i = 0; i < N; i++ ) {
      if( 0 < alpha[i] && alpha[i] < C(i) ) {
         error_cache[i] += t1 * kernel(i1,i) + t2 * kernel(i2,i) - delta_b;
      }
   }
   error_cache[i1] = 0;
   error_cache[i2] = 0;

   alpha[i1] = a1;
   alpha[i2] = a2;

   return( 1 );
}

/* --------------------------------------------------------------
 Finds the second Lagrange multiplier to be optimized.
-------------------------------------------------------------- */
long examineExample( long i1 )
{
   double y1, alpha1, E1, r1;
   double tmax;
   double E2, temp;
   long k, i2;
   long k0;

   y1 = target[i1];
   alpha1 = alpha[i1];

   if( alpha1 > 0 && alpha1 < C(i1) )
      E1 = error_cache[i1];
   else
      E1 = learned_func(i1) - y1;

   r1 = y1 * E1;
   if( (r1 < -tolerance && alpha1 < C(i1))
       || (r1 > tolerance && alpha1 > 0) ) {

      /* Try i2 in three ways; if successful, immediately return 1. */

      for( i2 = (-1), tmax = 0, k = 0; k < N; k++ ) {
         if( alpha[k] > 0 && alpha[k] < C(k) ) {
            E2 = error_cache[k];
            temp = fabs(E1 - E2);
            if( temp > tmax ) {
               tmax = temp;
               i2 = k;
            }
         }
      }
      if( i2 >= 0 ) {
         if( takeStep(i1,i2) )
            return( 1 );
      }

#ifdef RANDOM
      for( k0 = rand(), k = k0; k < N + k0; k++ ) {
         i2 = k % N;
#else
      for( k = 0; k < N; k++ ) {
         i2 = k;
#endif
         if( alpha[i2] > 0 && alpha[i2] < C(i2) ) {
            if( takeStep(i1,i2) )
               return( 1 );
         }
      }

#ifdef RANDOM
      for( k0 = rand(), k = k0; k < N + k0; k++ ) {
         i2 = k % N;
#else
      for( k = 0; k < N; k++ ) {
         i2 = k;
#endif
         if( takeStep(i1,i2) )
            return( 1 );
      }
   } /* if( ... ) */

   return( 0 );
}

/* --------------------------------------------------------------
 Main SMO optimization cycle.
-------------------------------------------------------------- */
void runSMO( void )
{
   long numChanged = 0;
   long examineAll = 1;
   long k;

   while( numChanged > 0 || examineAll ) {
      numChanged = 0;

      if( examineAll ) {
         for( k = 0; k < N; k++ ) {
            numChanged += examineExample( k );
         }
      }
      else {
         for( k = 0; k < N; k++ ) {
            if( alpha[k] != 0 && alpha[k] != C(k) )
               numChanged += examineExample( k );
         }
      }

      if( examineAll == 1 )
         examineAll = 0;
      else if( numChanged == 0 )
         examineAll = 1;
   }
}

/* ==============================================================
 Main MEX function - interface to Matlab.
============================================================== */
void mexFunction( int nlhs, mxArray *plhs[],
                  int nrhs, const mxArray *prhs[] )
{
   long i, j;
   double *labels12, *initAlpha, *nsv, *tmp, *trn_err, *margin;
   double nerr;
   double C1, C2;

   /* ---- get input arguments ----------------------- */
   if( nrhs < 5 )
      mexErrMsgTxt("Not enough input arguments.");

   /* data matrix [dim x N] */
   if( !mxIsNumeric(prhs[0]) || !mxIsDouble(prhs[0]) ||
       mxIsEmpty(prhs[0])    || mxIsComplex(prhs[0]) )
      mexErrMsgTxt("Input X must be a real matrix.");

   /* vector of labels (1,2) */
   if( !mxIsNumeric(prhs[1]) || !mxIsDouble(prhs[1]) ||
       mxIsEmpty(prhs[1])    || mxIsComplex(prhs[1]) ||
       (mxGetN(prhs[1]) != 1 && mxGetM(prhs[1]) != 1) )
      mexErrMsgTxt("Input I must be a real vector.");

   labels12 = mxGetPr(prhs[1]);   /* labels (1,2) */
   dataA = mxGetPr(prhs[0]);      /* pointer at patterns */
   dataB = dataA;
   dim = mxGetM(prhs[0]);         /* data dimension */
   N = mxGetN(prhs[0]);           /* number of data */

   /* kernel identifier */
   ker = kernel_id( prhs[2] );
   if( ker == -1 )
      mexErrMsgTxt("Improper kernel identifier.");

   /* get pointer to arguments */
   arg1 = mxGetPr(prhs[3]);

   /* one or two real trade-off constant(s) */
   if( !mxIsNumeric(prhs[4]) || !mxIsDouble(prhs[4]) ||
       mxIsEmpty(prhs[4])    || mxIsComplex(prhs[4]) ||
       (mxGetN(prhs[4]) != 1 && mxGetM(prhs[4]) != 1) )
      mexErrMsgTxt("Improper input argument C.");
   else {
      /* allocate memory for constant C */
      if( (const_C = mxCalloc(N, sizeof(double))) == NULL ) {
         mexErrMsgTxt("Not enough memory.");
      }

      if( MAX( mxGetN(prhs[4]), mxGetM(prhs[4])) == 1 ) {
         C1 = mxGetScalar(prhs[4]);
         for( i = 0; i < N; i++ ) const_C[i] = C1;
      } else
      if( MAX( mxGetN(prhs[4]), mxGetM(prhs[4])) == 2 ) {
         tmp = mxGetPr(prhs[4]);
         C1 = tmp[0];
         C2 = tmp[1];
         for( i = 0; i < N; i++ ) {
            if( labels12[i] == 1 ) const_C[i] = C1; else const_C[i] = C2;
         }
      } else
      if( MAX( mxGetN(prhs[4]), mxGetM(prhs[4])) == N ) {
         tmp = mxGetPr(prhs[4]);
         for( i = 0; i < N; i++ ) const_C[i] = tmp[i];
      } else {
         mexErrMsgTxt("Improper argument C.");
      }
   }

   /* real parameter eps */
   if( nrhs >= 6 ) {
      if( !mxIsNumeric(prhs[5]) || !mxIsDouble(prhs[5]) ||
          mxIsEmpty(prhs[5])    || mxIsComplex(prhs[5]) ||
          mxGetN(prhs[5]) != 1  || mxGetM(prhs[5]) != 1 )
         mexErrMsgTxt("Input eps must be a scalar.");
      else
         eps = mxGetScalar(prhs[5]);       /* take eps argument */
   }

   /* real parameter tol */
   if( nrhs >= 7 ) {
      if( !mxIsNumeric(prhs[6]) || !mxIsDouble(prhs[6]) ||
          mxIsEmpty(prhs[6])    || mxIsComplex(prhs[6]) ||
          mxGetN(prhs[6]) != 1  || mxGetM(prhs[6]) != 1 )
         mexErrMsgTxt("Input tol must be a scalar.");
      else
         tolerance = mxGetScalar(prhs[6]); /* take tolerance argument */
   }

   /* real vector of Lagrangian multipliers */
   if( nrhs >= 8 ) {
      if( !mxIsNumeric(prhs[7]) || !mxIsDouble(prhs[7]) ||
          mxIsEmpty(prhs[7])    || mxIsComplex(prhs[7]) ||
          (mxGetN(prhs[7]) != 1 && mxGetM(prhs[7]) != 1) )
         mexErrMsgTxt("Input Alpha must be a vector.");
   }

   /* real scalar - bias */
   if( nrhs >= 9 ) {
      if( !mxIsNumeric(prhs[8]) || !mxIsDouble(prhs[8]) ||
          mxIsEmpty(prhs[8])    || mxIsComplex(prhs[8]) ||
          mxGetN(prhs[8]) != 1  || mxGetM(prhs[8]) != 1 )
         mexErrMsgTxt("Input bias must be a scalar.");
   }

   /* ---- init variables ------------------------------- */

   ker_cnt = 0;

   /* allocate memory for targets (labels) (1,-1) */
   if( (target = mxCalloc(N, sizeof(double))) == NULL ) {
      mexErrMsgTxt("Not enough memory.");
   }

   /* transform labels12 (1,2) to targets (1,-1) */
   for( i = 0; i < N; i++ ) {
      target[i] = -labels12[i]*2 + 3;
   }

   /* create output variable for bias */
   plhs[1] = mxCreateDoubleMatrix(1,1,mxREAL);
   b = mxGetPr(plhs[1]);

   /* take init value of bias if given */
   if( nrhs >= 9 ) {
      *b = -mxGetScalar(prhs[8]);
   }

   /* allocate memory for error_cache */
   if( (error_cache = mxCalloc(N, sizeof(double))) == NULL ) {
      mexErrMsgTxt("Not enough memory for error cache.");
   }

   /* create vector for Lagrangians */
   plhs[0] = mxCreateDoubleMatrix(N,1,mxREAL);
   alpha = mxGetPr(plhs[0]);

   /* if Lagrangians given then use them as initial values */
   if( nrhs >= 8 ) {
      initAlpha = mxGetPr(prhs[7]);
      for( i = 0; i < N; i++ ) {
         alpha[i] = initAlpha[i];
      }

      /* Init error cache for non-bound multipliers. */
      for( i = 0; i < N; i++ ) {
         if( alpha[i] != 0 && alpha[i] != C(i) ) {
            error_cache[i] = learned_func(i) - target[i];
         }
      }
   }

   /* ---- run SMO ------------------------------------------- */
   runSMO();

   /* ---- outputs -------------------------------------------- */
   if( nlhs >= 3 ) {
      /* count number of support vectors */
      plhs[2] = mxCreateDoubleMatrix(1,1,mxREAL);
      nsv = mxGetPr(plhs[2]);
      *nsv = 0;
      for( i = 0; i < N; i++ ) {
         if( alpha[i] > ZERO_LIM ) (*nsv)++; else alpha[i] = 0;
      }
   }

   if( nlhs >= 4 ) {
      plhs[3] = mxCreateDoubleMatrix(1,1,mxREAL);
      (*mxGetPr(plhs[3])) = (double)ker_cnt;
   }

   if( nlhs >= 5 ) {
      /* evaluate classification error on training patterns */
      plhs[4] = mxCreateDoubleMatrix(1,1,mxREAL);
      trn_err = mxGetPr(plhs[4]);
      nerr = 0;
      for( i = 0; i < N; i++ ) {
         if( target[i] == 1 ) {
            if( learned_func(i) < 0 ) nerr++;
         }
         else
            if( learned_func(i) >= 0 ) nerr++;
      }
      *trn_err = nerr/N;
   }

   if( nlhs >= 6 ) {
      /* compute margin */
      plhs[5] = mxCreateDoubleMatrix(1,1,mxREAL);
      margin = mxGetPr(plhs[5]);
      *margin = 0;
      for( i = 0; i < N; i++ ) {
         for( j = 0; j < N; j++ ) {
            if( alpha[i] > 0 && alpha[j] > 0 )
               *margin += alpha[i]*alpha[j]*target[i]*target[j]*kernel(i,j);
         }
      }
      *margin = 1/sqrt(*margin);
   }

   /* decision function of type <w,x>+b is used */
   *b = -*b;

   /* ----- free memory --------------------------------------- */
   mxFree( error_cache );
   mxFree( target );
}
