
smo.java

From MacroWeka, an extension of the well-known data mining tool Weka
Java
Page 1 of 5
/*
 *    This program is free software; you can redistribute it and/or modify
 *    it under the terms of the GNU General Public License as published by
 *    the Free Software Foundation; either version 2 of the License, or
 *    (at your option) any later version.
 *
 *    This program is distributed in the hope that it will be useful,
 *    but WITHOUT ANY WARRANTY; without even the implied warranty of
 *    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 *    GNU General Public License for more details.
 *
 *    You should have received a copy of the GNU General Public License
 *    along with this program; if not, write to the Free Software
 *    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */

/*
 *    SMO.java
 *    Copyright (C) 1999 Eibe Frank
 *
 */

package weka.classifiers.functions;

import weka.classifiers.functions.supportVector.*;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.Logistic;
import weka.filters.unsupervised.attribute.NominalToBinary;
import weka.filters.unsupervised.attribute.ReplaceMissingValues;
import weka.filters.unsupervised.attribute.Normalize;
import weka.filters.unsupervised.attribute.Standardize;
import weka.filters.Filter;
import java.util.*;
import java.io.*;
import weka.core.*;

/**
 * Implements John C. Platt's sequential minimal optimization
 * algorithm for training a support vector classifier using polynomial
 * or RBF kernels. 
 *
 * This implementation globally replaces all missing values and
 * transforms nominal attributes into binary ones. It also
 * normalizes all attributes by default. (Note that the coefficients
 * in the output are based on the normalized/standardized data, not the
 * original data.)
 *
 * Multi-class problems are solved using pairwise classification.
 *
 * To obtain proper probability estimates, use the option that fits
 * logistic regression models to the outputs of the support vector
 * machine. In the multi-class case the predicted probabilities
 * will be coupled using Hastie and Tibshirani's pairwise coupling
 * method.
 *
 * Note: for improved speed normalization should be turned off when
 * operating on SparseInstances.<p>
 *
 * For more information on the SMO algorithm, see<p>
 *
 * J. Platt (1998). <i>Fast Training of Support Vector
 * Machines using Sequential Minimal Optimization</i>. Advances in Kernel
 * Methods - Support Vector Learning, B. Schoelkopf, C. Burges, and
 * A. Smola, eds., MIT Press. <p>
 *
 * S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, K.R.K. Murthy, 
 * <i>Improvements to Platt's SMO Algorithm for SVM Classifier Design</i>. 
 * Neural Computation, 13(3), pp 637-649, 2001. <p>
 *
 * Valid options are:<p>
 *
 * -C num <br>
 * The complexity constant C. (default 1)<p>
 *
 * -E num <br>
 * The exponent for the polynomial kernel. (default 1)<p>
 *
 * -G num <br>
 * Gamma for the RBF kernel. (default 0.01)<p>
 *
 * -N <0|1|2> <br>
 * Whether to normalize (0), standardize (1), or do neither (2). (default 0=normalize)<p>
 *
 * -F <br>
 * Feature-space normalization (only for non-linear polynomial kernels). <p>
 *
 * -O <br>
 * Use lower-order terms (only for non-linear polynomial kernels). <p>
 *
 * -R <br>
 * Use the RBF kernel. (default poly)<p>
 *
 * -A num <br>
 * Sets the size of the kernel cache. Should be a prime number. 
 * (default 250007, use 0 for full cache) <p>
 *
 * -L num <br>
 * Sets the tolerance parameter. (default 1.0e-3)<p>
 *
 * -P num <br>
 * Sets the epsilon for round-off error. (default 1.0e-12)<p>
 *
 * -M <br>
 * Fit logistic models to SVM outputs.<p>
 *
 * -V num <br>
 * Number of folds for cross-validation used to generate data
 * for logistic models. (default -1, use training data)<p>
 *
 * -W num <br>
 * Random number seed for cross-validation. (default 1)<p>
 *
 * @author Eibe Frank (eibe@cs.waikato.ac.nz)
 * @author Shane Legg (shane@intelligenesis.net) (sparse vector code)
 * @author Stuart Inglis (stuart@reeltwo.com) (sparse vector code)
 * @version $Revision: 1.1 $ */
public class SMO extends Classifier implements WeightedInstancesHandler {

  /**
   * Returns a string describing classifier
   * @return a description suitable for
   * displaying in the explorer/experimenter gui
   */
  public String globalInfo() {

    return  "Implements John Platt's sequential minimal optimization "
      + "algorithm for training a support vector classifier.\n\n"
      + "This implementation globally replaces all missing values and "
      + "transforms nominal attributes into binary ones. It also "
      + "normalizes all attributes by default. (In that case the coefficients "
      + "in the output are based on the normalized data, not the "
      + "original data --- this is important for interpreting the classifier.)\n\n"
      + "Multi-class problems are solved using pairwise classification.\n\n"
      + "To obtain proper probability estimates, use the option that fits "
      + "logistic regression models to the outputs of the support vector "
      + "machine. In the multi-class case the predicted probabilities "
      + "are coupled using Hastie and Tibshirani's pairwise coupling "
      + "method.\n\n"
      + "Note: for improved speed normalization should be turned off when "
      + "operating on SparseInstances.\n\n"
      + "For more information on the SMO algorithm, see\n\n"
      + "J. Platt (1998). \"Fast Training of Support Vector "
      + "Machines using Sequential Minimal Optimization\". Advances in Kernel "
      + "Methods - Support Vector Learning, B. Schoelkopf, C. Burges, and "
      + "A. Smola, eds., MIT Press. \n\n"
      + "S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, K.R.K. Murthy,  "
      + "\"Improvements to Platt's SMO Algorithm for SVM Classifier Design\".  "
      + "Neural Computation, 13(3), pp 637-649, 2001.";
  }
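
  /**
   * Main method for testing this class from the command line. This is a
   * minimal sketch following the usual Weka pattern (the actual main
   * method is not shown on this first page of the file); it accepts the
   * standard evaluation options, e.g. "-t train.arff", plus the SMO
   * options documented in the class comment above.
   */
  public static void main(String[] argv) {

    try {
      // Parses the options, trains and evaluates the classifier,
      // and prints the evaluation results
      System.out.println(Evaluation.evaluateModel(new SMO(), argv));
    } catch (Exception e) {
      System.err.println(e.getMessage());
    }
  }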

  /**
   * Class for building a binary support vector machine.
   */
  protected class BinarySMO implements Serializable {
    
    /** The Lagrange multipliers. */
    protected double[] m_alpha;

    /** The thresholds. */
    protected double m_b, m_bLow, m_bUp;

    /** The indices for m_bLow and m_bUp */
    protected int m_iLow, m_iUp;

    /** The training data. */
    protected Instances m_data;

    /** Weight vector for linear machine. */
    protected double[] m_weights;

    /** Variables to hold weight vector in sparse form.
	(To reduce storage requirements.) */
    protected double[] m_sparseWeights;
    protected int[] m_sparseIndices;

    /** Kernel to use */
    protected Kernel m_kernel;

    /** The transformed class values. */
    protected double[] m_class;

    /** The current set of errors for all non-bound examples. */
    protected double[] m_errors;

    /** The five different sets used by the algorithm. */
    protected SMOset m_I0; // {i: 0 < m_alpha[i] < C}
    protected SMOset m_I1; // {i: m_class[i] = 1, m_alpha[i] = 0}
    protected SMOset m_I2; // {i: m_class[i] = -1, m_alpha[i] = C}
    protected SMOset m_I3; // {i: m_class[i] = 1, m_alpha[i] = C}
    protected SMOset m_I4; // {i: m_class[i] = -1, m_alpha[i] = 0}
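    // (These five index sets follow Keerthi et al.'s "Improvements to
    // Platt's SMO Algorithm" cited in the class comment; they allow the
    // algorithm to maintain the two thresholds m_bUp and m_bLow instead
    // of a single threshold b.)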

    /** The set of support vectors */
    protected SMOset m_supportVectors; // {i: 0 < m_alpha[i]}

    /** Stores logistic regression model for probability estimate */
    protected Logistic m_logistic = null;

    /** Stores the weight of the training instances */
    protected double m_sumOfWeights = 0;

    /**
     * Fits a logistic regression model to the SVM outputs, analogous
     * to John Platt's method.
     *
     * @param insts the set of training instances
     * @param cl1 the first class' index
     * @param cl2 the second class' index
     * @param numFolds the number of folds for cross-validation
     *        (if smaller than or equal to 0, the training data is used)
     * @param random the random number generator for cross-validation
     * @exception Exception if the sigmoid can't be fit successfully
     */
    protected void fitLogistic(Instances insts, int cl1, int cl2,
			     int numFolds, Random random) 
      throws Exception {

      // Create header of instances object
      FastVector atts = new FastVector(2);
      atts.addElement(new Attribute("pred"));
      FastVector attVals = new FastVector(2);
      attVals.addElement(insts.classAttribute().value(cl1));
      attVals.addElement(insts.classAttribute().value(cl2));
      atts.addElement(new Attribute("class", attVals));
      Instances data = new Instances("data", atts, insts.numInstances());
      data.setClassIndex(1);

      // Collect data for fitting the logistic model
      if (numFolds <= 0) {

	// Use training data
	for (int j = 0; j < insts.numInstances(); j++) {
	  Instance inst = insts.instance(j);
	  double[] vals = new double[2];
	  vals[0] = SVMOutput(-1, inst);
	  if (inst.classValue() == cl2) {
	    vals[1] = 1;
	  }
	  data.add(new Instance(inst.weight(), vals));
	}
      } else {

	// Check whether number of folds too large
	if (numFolds > insts.numInstances()) {
	  numFolds = insts.numInstances();
	}

	// Make copy of instances because we will shuffle them around
	insts = new Instances(insts);
	
	// Perform numFolds-fold cross-validation to collect
	// unbiased predictions
	insts.randomize(random);
	insts.stratify(numFolds);
	for (int i = 0; i < numFolds; i++) {
	  Instances train = insts.trainCV(numFolds, i, random);
	  SerializedObject so = new SerializedObject(this);
	  BinarySMO smo = (BinarySMO)so.getObject();
	  smo.buildClassifier(train, cl1, cl2, false, -1, -1);
	  Instances test = insts.testCV(numFolds, i);
	  for (int j = 0; j < test.numInstances(); j++) {
	    double[] vals = new double[2];
	    vals[0] = smo.SVMOutput(-1, test.instance(j));
	    if (test.instance(j).classValue() == cl2) {
	      vals[1] = 1;
	    }
	    data.add(new Instance(test.instance(j).weight(), vals));
	  }
	}
      }

      // Build logistic regression model
      m_logistic = new Logistic();
      m_logistic.buildClassifier(data);
    }
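
    /*
     * Illustrative sketch (not part of the original source): once
     * m_logistic has been fitted, a raw SVM output for some instance
     * "inst" could be mapped to a probability for the second class by
     * building a two-value instance of the same form as the training
     * data above, roughly:
     *
     *   double[] vals = new double[2];
     *   vals[0] = SVMOutput(-1, inst);
     *   Instance trans = new Instance(1, vals);
     *   trans.setDataset(header);  // "header" is a hypothetical copy of
     *                              // the two-attribute data set built above
     *   double prob = m_logistic.distributionForInstance(trans)[1];
     */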

    /**
     * Method for building the binary classifier.
     *
     * @param insts the set of training instances
     * @param cl1 the first class' index
     * @param cl2 the second class' index
     * @param fitLogistic true if logistic model is to be fit
     * @param numFolds number of folds for internal cross-validation
     * @param randomSeed seed for the random number generator used in cross-validation
     * @exception Exception if the classifier can't be built successfully
     */
    protected void buildClassifier(Instances insts, int cl1, int cl2,
				 boolean fitLogistic, int numFolds,
				 int randomSeed) throws Exception {
      
      // Initialize some variables
      m_bUp = -1; m_bLow = 1; m_b = 0; 
      m_alpha = null; m_data = null; m_weights = null; m_errors = null;
      m_logistic = null; m_I0 = null; m_I1 = null; m_I2 = null;
      m_I3 = null; m_I4 = null;	m_sparseWeights = null; m_sparseIndices = null;

      // Store the sum of weights
      m_sumOfWeights = insts.sumOfWeights();
      
      // Set class values
      m_class = new double[insts.numInstances()];
      m_iUp = -1; m_iLow = -1;
      for (int i = 0; i < m_class.length; i++) {
	if ((int) insts.instance(i).classValue() == cl1) {
	  m_class[i] = -1; m_iLow = i;
	} else if ((int) insts.instance(i).classValue() == cl2) {
	  m_class[i] = 1; m_iUp = i;
	} else {
	  throw new Exception ("This should never happen!");
	}
      }

      // Check whether one or both classes are missing
      if ((m_iUp == -1) || (m_iLow == -1)) {
	if (m_iUp != -1) {
	  m_b = -1;
	} else if (m_iLow != -1) {
	  m_b = 1;
	} else {
	  m_class = null;
	  return;
	}
	if (!m_useRBF && m_exponent == 1.0) {
	  m_sparseWeights = new double[0];
	  m_sparseIndices = new int[0];
	  m_class = null;
	} else {
	  m_supportVectors = new SMOset(0);
	  m_alpha = new double[0];
	  m_class = new double[0];
	}

	// Fit sigmoid if requested
	if (fitLogistic) {
	  fitLogistic(insts, cl1, cl2, numFolds, new Random(randomSeed));
	}
	return;
      }
      
      // Set the reference to the data
      m_data = insts;

      // If machine is linear, reserve space for weights
      if (!m_useRBF && m_exponent == 1.0) {
	m_weights = new double[m_data.numAttributes()];
      } else {
	m_weights = null;
      }
      
      // Initialize alpha array to zero
      m_alpha = new double[m_data.numInstances()];
      
      // Initialize sets
      m_supportVectors = new SMOset(m_data.numInstances());
      m_I0 = new SMOset(m_data.numInstances());
      m_I1 = new SMOset(m_data.numInstances());
      m_I2 = new SMOset(m_data.numInstances());
      m_I3 = new SMOset(m_data.numInstances());
      m_I4 = new SMOset(m_data.numInstances());

      // Clean out some instance variables
      m_sparseWeights = null;
      m_sparseIndices = null;
      
      // Initialize error cache
      m_errors = new double[m_data.numInstances()];
      m_errors[m_iLow] = 1; m_errors[m_iUp] = -1;
     
      // Initialize kernel
      if(m_useRBF) {
	m_kernel = new RBFKernel(m_data, m_cacheSize, m_gamma);
      } else {
	if (m_featureSpaceNormalization) {
	  m_kernel = new NormalizedPolyKernel(m_data, m_cacheSize, m_exponent, 
					      m_lowerOrder);
	} else {
	  m_kernel = new PolyKernel(m_data, m_cacheSize, m_exponent, m_lowerOrder);
	}
      }
      
      // Build up I1 and I4
      for (int i = 0; i < m_class.length; i++ ) {
	if (m_class[i] == 1) {
	  m_I1.insert(i);
	} else {
	  m_I4.insert(i);
	}
      }
      
      // Loop to find all the support vectors
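      // (The outer loop alternates between one full pass over all
      // examples and repeated passes over the non-bound examples only,
      // stopping when a full pass changes no multipliers.)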
      int numChanged = 0;
      boolean examineAll = true;
      while ((numChanged > 0) || examineAll) {
	numChanged = 0;
	if (examineAll) {
	  for (int i = 0; i < m_alpha.length; i++) {
	    if (examineExample(i)) {
	      numChanged++;
	    }
	  }
	} else {
	  
	  // This code implements Modification 1 from Keerthi et al.'s paper
	  for (int i = 0; i < m_alpha.length; i++) {
	    if ((m_alpha[i] > 0) &&  
		(m_alpha[i] < m_C * m_data.instance(i).weight())) {
	      if (examineExample(i)) {
		numChanged++;
	      }
	      
	      // Is optimality on unbound vectors obtained?
	      if (m_bUp > m_bLow - 2 * m_tol) {
		numChanged = 0;
		break;
	      }
	    }
	  }
	  
	  //This is the code for Modification 2 from Keerthi et al.'s paper
	  /*boolean innerLoopSuccess = true; 
	    numChanged = 0;
	    while ((m_bUp < m_bLow - 2 * m_tol) && (innerLoopSuccess == true)) {
	    innerLoopSuccess = takeStep(m_iUp, m_iLow, m_errors[m_iLow]);
	    }*/
	}
	

国产一区二区不卡| 狠狠色狠狠色综合系列| 777午夜精品视频在线播放| 国产剧情一区在线| 亚洲一区二区三区在线| 久久久久久久久免费| 精品婷婷伊人一区三区三| 国产精品亚洲一区二区三区妖精 | 欧美一区二区三区小说| av爱爱亚洲一区| 久久精品国产亚洲高清剧情介绍 | 国产日产欧美一区二区三区| 欧美军同video69gay| www.亚洲在线| 国产麻豆日韩欧美久久| 日韩主播视频在线| 一区二区在线免费| 亚洲丝袜美腿综合| 中文字幕欧美三区| 久久综合九色欧美综合狠狠| 51久久夜色精品国产麻豆| 在线免费精品视频| 99久久伊人网影院| 成人精品视频网站| 国产激情精品久久久第一区二区 | 久久国产精品无码网站| 午夜精品在线看| 亚洲精品国产精品乱码不99| 国产精品视频你懂的| 26uuu亚洲综合色| 日韩一级大片在线观看| 欧美日本国产一区| 欧美羞羞免费网站| 欧美性受xxxx黑人xyx性爽| 91影院在线免费观看| 99久久伊人久久99| 91丨porny丨蝌蚪视频| 成人短视频下载| av成人免费在线观看| 99精品视频一区| 色婷婷激情综合| 色综合网色综合| 日本高清不卡aⅴ免费网站| 色哟哟在线观看一区二区三区| 99久久精品国产网站| 99精品久久久久久| 在线一区二区三区四区| 欧美中文字幕一区二区三区 | 亚洲欧美日韩在线播放| 亚洲精品成人精品456| 亚洲一二三区在线观看| 亚洲成人免费av| 日本va欧美va精品| 国产综合久久久久久久久久久久| 韩国中文字幕2020精品| 国产精品一区二区你懂的| 高清av一区二区| 91一区二区在线观看| 欧美丝袜丝交足nylons图片| 欧美一区二区在线看| 2024国产精品| 国产精品久久网站| 亚洲一线二线三线久久久| 热久久国产精品| 国产乱子轮精品视频| 成人精品视频.| 欧美色涩在线第一页| 91精品国产91综合久久蜜臀| 久久久精品人体av艺术| 亚洲少妇30p| 天天色 色综合| 国产精品一二三四| 91免费版在线| 91精品国产手机| 国产亚洲欧美一区在线观看| 亚洲另类在线视频| 久久精品噜噜噜成人av农村| www.欧美日韩国产在线| 宅男在线国产精品| 久久精品夜色噜噜亚洲a∨| 亚洲欧美二区三区| 激情综合色播激情啊| 日本高清成人免费播放| 欧美白人最猛性xxxxx69交| 中文字幕在线不卡| 日韩精品电影在线观看| av亚洲精华国产精华| 91精品国产全国免费观看 | 喷白浆一区二区| heyzo一本久久综合| 欧美一区中文字幕| 亚洲色图制服诱惑 | 欧美色欧美亚洲另类二区| 精品久久久久久最新网址| 亚洲美女视频一区| 黄页网站大全一区二区| 在线免费观看日本欧美| 国产精品素人一区二区| 日韩va欧美va亚洲va久久| gogogo免费视频观看亚洲一| 日韩欧美国产1| 一区二区三区**美女毛片| 韩国精品在线观看| 欧美日韩情趣电影| 亚洲美女区一区| 粗大黑人巨茎大战欧美成人| 91精品国产一区二区人妖| 亚洲激情网站免费观看| 风流少妇一区二区| 日韩精品中文字幕一区| 亚洲国产精品久久不卡毛片| 91一区二区三区在线播放| 久久久99免费| 久久99九九99精品| 欧美精三区欧美精三区| 一级特黄大欧美久久久| gogogo免费视频观看亚洲一| 久久久三级国产网站| 久久精品国产成人一区二区三区 | 91精品国产综合久久精品麻豆 | 国产呦精品一区二区三区网站| 欧美精品第一页| 亚洲综合图片区| 91丨九色丨尤物| 中文一区在线播放 | 在线观看91av| 亚洲福利一区二区| 91久久精品午夜一区二区| 自拍偷拍亚洲欧美日韩| 成人高清在线视频| 国产欧美精品一区| 国产成人在线影院| 国产日产欧美一区二区视频| 国产麻豆精品视频| 国产亚洲精品7777| 国产99久久久国产精品潘金| 久久九九久精品国产免费直播| 国产一区二区三区视频在线播放| 精品剧情在线观看| 国产在线精品一区二区夜色| 欧美成人a在线| 久久99久久99精品免视看婷婷| 日韩一区二区在线看片| 美女视频一区在线观看| 日韩美女一区二区三区| 精品中文字幕一区二区| 精品国产电影一区二区| 国产精品中文有码| 欧美激情资源网| 色综合久久天天| 亚洲成人久久影院| 日韩欧美一级特黄在线播放| 精品无人码麻豆乱码1区2区 | 日本一区二区免费在线| 成人精品一区二区三区中文字幕| 国产精品青草久久| 色噜噜狠狠一区二区三区果冻| 一区二区成人在线| 7777精品伊人久久久大香线蕉超级流畅| 丝袜美腿成人在线| 欧美成人女星排行榜| 国产大片一区二区| 亚洲精品网站在线观看| 欧美夫妻性生活| 国内久久精品视频| 亚洲欧美乱综合| 日韩欧美一二三区| 成人午夜视频免费看| 亚洲电影一区二区| 久久综合色天天久久综合图片| www.欧美.com| 日本不卡视频在线观看| 久久精品免费在线观看| 色嗨嗨av一区二区三区| 蜜臀av性久久久久蜜臀aⅴ四虎| 国产日韩精品一区二区三区在线| 91免费版pro下载短视频| 天天综合天天综合色| 久久久久国产一区二区三区四区 | 99久久99久久综合| 日韩精品乱码免费| 国产精品免费丝袜| 欧美一区二区三区视频免费| 波多野结衣中文字幕一区二区三区| 亚洲成av人片| 国产精品人成在线观看免费 | 国产成人免费视频一区| 一区二区三区不卡视频| 久久综合丝袜日本网| 欧美性猛交xxxxxxxx| 国产精品一品二品| 丝袜美腿亚洲一区二区图片| 国产精品久久久久影视| 欧美一三区三区四区免费在线看| 99久久精品国产麻豆演员表| 激情小说欧美图片| 亚洲不卡av一区二区三区| 国产精品不卡一区二区三区| 欧美成人伊人久久综合网| 欧美日精品一区视频|