smo.java

MacroWeka: an extension of the well-known data mining tool Weka
JAVA
Page 1 of 5
/*
 *    This program is free software; you can redistribute it and/or modify
 *    it under the terms of the GNU General Public License as published by
 *    the Free Software Foundation; either version 2 of the License, or
 *    (at your option) any later version.
 *
 *    This program is distributed in the hope that it will be useful,
 *    but WITHOUT ANY WARRANTY; without even the implied warranty of
 *    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 *    GNU General Public License for more details.
 *
 *    You should have received a copy of the GNU General Public License
 *    along with this program; if not, write to the Free Software
 *    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */

/*
 *    SMO.java
 *    Copyright (C) 1999 Eibe Frank
 *
 */

package weka.classifiers.functions;

import weka.classifiers.functions.supportVector.*;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.Logistic;
import weka.filters.unsupervised.attribute.NominalToBinary;
import weka.filters.unsupervised.attribute.ReplaceMissingValues;
import weka.filters.unsupervised.attribute.Normalize;
import weka.filters.unsupervised.attribute.Standardize;
import weka.filters.Filter;
import java.util.*;
import java.io.*;
import weka.core.*;

/**
 * Implements John C. Platt's sequential minimal optimization
 * algorithm for training a support vector classifier using polynomial
 * or RBF kernels. 
 *
 * This implementation globally replaces all missing values and
 * transforms nominal attributes into binary ones. It also
 * normalizes all attributes by default. (Note that the coefficients
 * in the output are based on the normalized/standardized data, not the
 * original data.)
 *
 * Multi-class problems are solved using pairwise classification.
 *
 * To obtain proper probability estimates, use the option that fits
 * logistic regression models to the outputs of the support vector
 * machine. In the multi-class case the predicted probabilities
 * will be coupled using Hastie and Tibshirani's pairwise coupling
 * method.
 *
 * Note: for improved speed normalization should be turned off when
 * operating on SparseInstances.<p>
 *
 * For more information on the SMO algorithm, see<p>
 *
 * J. Platt (1998). <i>Fast Training of Support Vector
 * Machines using Sequential Minimal Optimization</i>. Advances in Kernel
 * Methods - Support Vector Learning, B. Schoelkopf, C. Burges, and
 * A. Smola, eds., MIT Press. <p>
 *
 * S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, K.R.K. Murthy, 
 * <i>Improvements to Platt's SMO Algorithm for SVM Classifier Design</i>. 
 * Neural Computation, 13(3), pp 637-649, 2001. <p>
 *
 * Valid options are:<p>
 *
 * -C num <br>
 * The complexity constant C. (default 1)<p>
 *
 * -E num <br>
 * The exponent for the polynomial kernel. (default 1)<p>
 *
 * -G num <br>
 * Gamma for the RBF kernel. (default 0.01)<p>
 *
 * -N <0|1|2> <br>
 * Whether to normalize (0), standardize (1), or do neither (2).
 * (default 0=normalize)<p>
 *
 * -F <br>
 * Feature-space normalization (only for non-linear polynomial kernels). <p>
 *
 * -O <br>
 * Use lower-order terms (only for non-linear polynomial kernels). <p>
 *
 * -R <br>
 * Use the RBF kernel. (default poly)<p>
 *
 * -A num <br>
 * Sets the size of the kernel cache. Should be a prime number. 
 * (default 250007, use 0 for full cache) <p>
 *
 * -L num <br>
 * Sets the tolerance parameter. (default 1.0e-3)<p>
 *
 * -P num <br>
 * Sets the epsilon for round-off error. (default 1.0e-12)<p>
 *
 * -M <br>
 * Fit logistic models to SVM outputs.<p>
 *
 * -V num <br>
 * Number of folds for cross-validation used to generate data
 * for logistic models. (default -1, use training data)
 *
 * -W num <br>
 * Random number seed for cross-validation. (default 1)
 *
 * @author Eibe Frank (eibe@cs.waikato.ac.nz)
 * @author Shane Legg (shane@intelligenesis.net) (sparse vector code)
 * @author Stuart Inglis (stuart@reeltwo.com) (sparse vector code)
 * @version $Revision: 1.1 $ */
public class SMO extends Classifier implements WeightedInstancesHandler {

  /**
   * Returns a string describing classifier
   * @return a description suitable for
   * displaying in the explorer/experimenter gui
   */
  public String globalInfo() {

    return  "Implements John Platt's sequential minimal optimization "
      + "algorithm for training a support vector classifier.\n\n"
      + "This implementation globally replaces all missing values and "
      + "transforms nominal attributes into binary ones. It also "
      + "normalizes all attributes by default. (In that case the coefficients "
      + "in the output are based on the normalized data, not the "
      + "original data --- this is important for interpreting the classifier.)\n\n"
      + "Multi-class problems are solved using pairwise classification.\n\n"
      + "To obtain proper probability estimates, use the option that fits "
      + "logistic regression models to the outputs of the support vector "
      + "machine. In the multi-class case the predicted probabilities "
      + "are coupled using Hastie and Tibshirani's pairwise coupling "
      + "method.\n\n"
      + "Note: for improved speed normalization should be turned off when "
      + "operating on SparseInstances.\n\n"
      + "For more information on the SMO algorithm, see\n\n"
      + "J. Platt (1998). \"Fast Training of Support Vector "
      + "Machines using Sequential Minimal Optimization\". Advances in Kernel "
      + "Methods - Support Vector Learning, B. Schoelkopf, C. Burges, and "
      + "A. Smola, eds., MIT Press. \n\n"
      + "S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, K.R.K. Murthy,  "
      + "\"Improvements to Platt's SMO Algorithm for SVM Classifier Design\".  "
      + "Neural Computation, 13(3), pp 637-649, 2001.";
  }
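
  /* The class comment above notes that multi-class problems are solved
     using pairwise (one-vs-one) classification. A minimal, self-contained
     sketch of what that entails (class and method names are hypothetical,
     not part of the Weka API): one binary support vector machine is built
     for every unordered pair of classes. */

```java
// Hypothetical sketch (not part of the Weka API): pairwise decomposition
// for a k-class problem builds one binary machine per class pair.
public class PairwiseDemo {

    // Number of binary machines for a k-class problem: k*(k-1)/2.
    static int numMachines(int k) {
        int machines = 0;
        for (int i = 0; i < k; i++)
            for (int j = i + 1; j < k; j++)
                machines++; // one binary SVM per unordered class pair
        return machines;
    }

    public static void main(String[] args) {
        System.out.println(numMachines(4)); // prints 6
    }
}
```

  /* At prediction time, each of those machines votes (or, with -M, its
     probability estimates are combined via pairwise coupling). */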

  /**
   * Class for building a binary support vector machine.
   */
  protected class BinarySMO implements Serializable {
    
    /** The Lagrange multipliers. */
    protected double[] m_alpha;

    /** The thresholds. */
    protected double m_b, m_bLow, m_bUp;

    /** The indices for m_bLow and m_bUp */
    protected int m_iLow, m_iUp;

    /** The training data. */
    protected Instances m_data;

    /** Weight vector for linear machine. */
    protected double[] m_weights;

    /** Variables to hold weight vector in sparse form.
	(To reduce storage requirements.) */
    protected double[] m_sparseWeights;
    protected int[] m_sparseIndices;

    /** Kernel to use **/
    protected Kernel m_kernel;

    /** The transformed class values. */
    protected double[] m_class;

    /** The current set of errors for all non-bound examples. */
    protected double[] m_errors;

    /** The five different sets used by the algorithm. */
    protected SMOset m_I0; // {i: 0 < m_alpha[i] < C}
    protected SMOset m_I1; // {i: m_class[i] = 1, m_alpha[i] = 0}
    protected SMOset m_I2; // {i: m_class[i] = -1, m_alpha[i] = C}
    protected SMOset m_I3; // {i: m_class[i] = 1, m_alpha[i] = C}
    protected SMOset m_I4; // {i: m_class[i] = -1, m_alpha[i] = 0}

    /** The set of support vectors */
    protected SMOset m_supportVectors; // {i: 0 < m_alpha[i]}

    /** Stores logistic regression model for probability estimate */
    protected Logistic m_logistic = null;

    /** Stores the weight of the training instances */
    protected double m_sumOfWeights = 0;

    /**
     * Fits a logistic regression model to the SVM outputs, analogous
     * to John Platt's method.
     *
     * @param insts the set of training instances
     * @param cl1 the first class' index
     * @param cl2 the second class' index
     * @param numFolds the number of folds for cross-validation
     * @param random the random number generator to use
     * @exception Exception if the sigmoid can't be fit successfully
     */
    protected void fitLogistic(Instances insts, int cl1, int cl2,
			     int numFolds, Random random) 
      throws Exception {

      // Create header of instances object
      FastVector atts = new FastVector(2);
      atts.addElement(new Attribute("pred"));
      FastVector attVals = new FastVector(2);
      attVals.addElement(insts.classAttribute().value(cl1));
      attVals.addElement(insts.classAttribute().value(cl2));
      atts.addElement(new Attribute("class", attVals));
      Instances data = new Instances("data", atts, insts.numInstances());
      data.setClassIndex(1);

      // Collect data for fitting the logistic model
      if (numFolds <= 0) {

	// Use training data
	for (int j = 0; j < insts.numInstances(); j++) {
	  Instance inst = insts.instance(j);
	  double[] vals = new double[2];
	  vals[0] = SVMOutput(-1, inst);
	  if (inst.classValue() == cl2) {
	    vals[1] = 1;
	  }
	  data.add(new Instance(inst.weight(), vals));
	}
      } else {

	// Check whether number of folds too large
	if (numFolds > insts.numInstances()) {
	  numFolds = insts.numInstances();
	}

	// Make copy of instances because we will shuffle them around
	insts = new Instances(insts);
	
	// Perform cross-validation to collect
	// unbiased predictions
	insts.randomize(random);
	insts.stratify(numFolds);
	for (int i = 0; i < numFolds; i++) {
	  Instances train = insts.trainCV(numFolds, i, random);
	  SerializedObject so = new SerializedObject(this);
	  BinarySMO smo = (BinarySMO)so.getObject();
	  smo.buildClassifier(train, cl1, cl2, false, -1, -1);
	  Instances test = insts.testCV(numFolds, i);
	  for (int j = 0; j < test.numInstances(); j++) {
	    double[] vals = new double[2];
	    vals[0] = smo.SVMOutput(-1, test.instance(j));
	    if (test.instance(j).classValue() == cl2) {
	      vals[1] = 1;
	    }
	    data.add(new Instance(test.instance(j).weight(), vals));
	  }
	}
      }

      // Build logistic regression model
      m_logistic = new Logistic();
      m_logistic.buildClassifier(data);
    }
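
    /* The method above delegates the actual fit to weka's Logistic class.
       The underlying idea is Platt scaling: learn a one-variable logistic
       model p(y=1|f) = 1/(1+exp(-(w*f + b))) on the raw SVM outputs f, so
       that distances from the hyperplane become calibrated probabilities.
       A minimal self-contained sketch of that idea (all names hypothetical,
       plain gradient descent instead of weka's optimizer): */

```java
// Hypothetical sketch (not the Weka implementation): fit a one-variable
// logistic model to SVM outputs, in the spirit of Platt scaling.
public class PlattSketch {

    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Fit (w, b) by plain gradient descent on the log-loss.
    static double[] fit(double[] f, int[] y, int iters, double lr) {
        double w = 0, b = 0;
        for (int it = 0; it < iters; it++) {
            double gw = 0, gb = 0;
            for (int i = 0; i < f.length; i++) {
                double err = sigmoid(w * f[i] + b) - y[i]; // dLoss/dz
                gw += err * f[i];
                gb += err;
            }
            w -= lr * gw;
            b -= lr * gb;
        }
        return new double[] {w, b};
    }

    public static void main(String[] args) {
        double[] f = {-2.0, -1.0, 1.0, 2.0}; // toy SVM outputs
        int[] y = {0, 0, 1, 1};              // second class encoded as 1
        double[] wb = fit(f, y, 2000, 0.5);
        // A large positive SVM output now maps to a confident probability.
        System.out.println(sigmoid(wb[0] * 2.0 + wb[1]) > 0.9);
    }
}
```

    /* fitLogistic() does the same thing with weka machinery: the "pred"
       attribute plays the role of f, and the Logistic classifier learns
       the mapping from SVM output to class probability. */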

    /**
     * Method for building the binary classifier.
     *
     * @param insts the set of training instances
     * @param cl1 the first class' index
     * @param cl2 the second class' index
     * @param fitLogistic true if logistic model is to be fit
     * @param numFolds number of folds for internal cross-validation
     * @param randomSeed the random number seed for cross-validation
     * @exception Exception if the classifier can't be built successfully
     */
    protected void buildClassifier(Instances insts, int cl1, int cl2,
				 boolean fitLogistic, int numFolds,
				 int randomSeed) throws Exception {
      
      // Initialize some variables
      m_bUp = -1; m_bLow = 1; m_b = 0; 
      m_alpha = null; m_data = null; m_weights = null; m_errors = null;
      m_logistic = null; m_I0 = null; m_I1 = null; m_I2 = null;
      m_I3 = null; m_I4 = null;	m_sparseWeights = null; m_sparseIndices = null;

      // Store the sum of weights
      m_sumOfWeights = insts.sumOfWeights();
      
      // Set class values
      m_class = new double[insts.numInstances()];
      m_iUp = -1; m_iLow = -1;
      for (int i = 0; i < m_class.length; i++) {
	if ((int) insts.instance(i).classValue() == cl1) {
	  m_class[i] = -1; m_iLow = i;
	} else if ((int) insts.instance(i).classValue() == cl2) {
	  m_class[i] = 1; m_iUp = i;
	} else {
	  throw new Exception ("This should never happen!");
	}
      }

      // Check whether one or both classes are missing
      if ((m_iUp == -1) || (m_iLow == -1)) {
	if (m_iUp != -1) {
	  m_b = -1;
	} else if (m_iLow != -1) {
	  m_b = 1;
	} else {
	  m_class = null;
	  return;
	}
	if (!m_useRBF && m_exponent == 1.0) {
	  m_sparseWeights = new double[0];
	  m_sparseIndices = new int[0];
	  m_class = null;
	} else {
	  m_supportVectors = new SMOset(0);
	  m_alpha = new double[0];
	  m_class = new double[0];
	}

	// Fit sigmoid if requested
	if (fitLogistic) {
	  fitLogistic(insts, cl1, cl2, numFolds, new Random(randomSeed));
	}
	return;
      }
      
      // Set the reference to the data
      m_data = insts;

      // If machine is linear, reserve space for weights
      if (!m_useRBF && m_exponent == 1.0) {
	m_weights = new double[m_data.numAttributes()];
      } else {
	m_weights = null;
      }
      
      // Initialize alpha array to zero
      m_alpha = new double[m_data.numInstances()];
      
      // Initialize sets
      m_supportVectors = new SMOset(m_data.numInstances());
      m_I0 = new SMOset(m_data.numInstances());
      m_I1 = new SMOset(m_data.numInstances());
      m_I2 = new SMOset(m_data.numInstances());
      m_I3 = new SMOset(m_data.numInstances());
      m_I4 = new SMOset(m_data.numInstances());

      // Clean out some instance variables
      m_sparseWeights = null;
      m_sparseIndices = null;
      
      // Initialize error cache
      m_errors = new double[m_data.numInstances()];
      m_errors[m_iLow] = 1; m_errors[m_iUp] = -1;
     
      // Initialize kernel
      if(m_useRBF) {
	m_kernel = new RBFKernel(m_data, m_cacheSize, m_gamma);
      } else {
	if (m_featureSpaceNormalization) {
	  m_kernel = new NormalizedPolyKernel(m_data, m_cacheSize, m_exponent, 
					      m_lowerOrder);
	} else {
	  m_kernel = new PolyKernel(m_data, m_cacheSize, m_exponent, m_lowerOrder);
	}
      }
      
      // Build up I1 and I4
      for (int i = 0; i < m_class.length; i++ ) {
	if (m_class[i] == 1) {
	  m_I1.insert(i);
	} else {
	  m_I4.insert(i);
	}
      }
      
      // Loop to find all the support vectors
      int numChanged = 0;
      boolean examineAll = true;
      while ((numChanged > 0) || examineAll) {
	numChanged = 0;
	if (examineAll) {
	  for (int i = 0; i < m_alpha.length; i++) {
	    if (examineExample(i)) {
	      numChanged++;
	    }
	  }
	} else {
	  
	  // This code implements Modification 1 from Keerthi et al.'s paper
	  for (int i = 0; i < m_alpha.length; i++) {
	    if ((m_alpha[i] > 0) &&  
		(m_alpha[i] < m_C * m_data.instance(i).weight())) {
	      if (examineExample(i)) {
		numChanged++;
	      }
	      
	      // Is optimality on unbound vectors obtained?
	      if (m_bUp > m_bLow - 2 * m_tol) {
		numChanged = 0;
		break;
	      }
	    }
	  }
	  
	  //This is the code for Modification 2 from Keerthi et al.'s paper
	  /*boolean innerLoopSuccess = true; 
	    numChanged = 0;
	    while ((m_bUp < m_bLow - 2 * m_tol) && (innerLoopSuccess == true)) {
	    innerLoopSuccess = takeStep(m_iUp, m_iLow, m_errors[m_iLow]);
	    }*/
	}
	
