(matlab) kmeans.m
A MATLAB implementation of the k-means clustering algorithm.
Page 1 of 2
function [idx, C, sumD, D] = kmeans(X, k, varargin)
%KMEANS K-means clustering.
%   IDX = KMEANS(X, K) partitions the points in the N-by-P data matrix
%   X into K clusters.  This partition minimizes the sum, over all
%   clusters, of the within-cluster sums of point-to-cluster-centroid
%   distances.  Rows of X correspond to points, columns correspond to
%   variables.  KMEANS returns an N-by-1 vector IDX containing the
%   cluster indices of each point.  By default, KMEANS uses squared
%   Euclidean distances.
%
%   [IDX, C] = KMEANS(X, K) returns the K cluster centroid locations in
%   the K-by-P matrix C.
%
%   [IDX, C, SUMD] = KMEANS(X, K) returns the within-cluster sums of
%   point-to-centroid distances in the 1-by-K vector SUMD.
%
%   [IDX, C, SUMD, D] = KMEANS(X, K) returns distances from each point
%   to every centroid in the N-by-K matrix D.
%
%   [ ... ] = KMEANS(..., 'PARAM1',val1, 'PARAM2',val2, ...) allows you to
%   specify optional parameter name/value pairs to control the iterative
%   algorithm used by KMEANS.  Parameters are:
%
%   'Distance' - Distance measure, in P-dimensional space, that KMEANS
%      should minimize with respect to.  Choices are:
%            {'sqEuclidean'} - Squared Euclidean distance
%             'cityblock'    - Sum of absolute differences, a.k.a. L1
%             'cosine'       - One minus the cosine of the included angle
%                              between points (treated as vectors)
%             'correlation'  - One minus the sample correlation between
%                              points (treated as sequences of values)
%             'Hamming'      - Percentage of bits that differ (only
%                              suitable for binary data)
%
%   'Start' - Method used to choose initial cluster centroid positions,
%      sometimes known as "seeds".  Choices are:
%                 {'sample'} - Select K observations from X at random
%                  'uniform' - Select K points uniformly at random from
%                              the range of X.  Not valid for Hamming distance.
%                  'cluster' - Perform preliminary clustering phase on
%                              random 10% subsample of X.  This preliminary
%                              phase is itself initialized using 'sample'.
%                  matrix    - A K-by-P matrix of starting locations.  In
%                              this case, you can pass in [] for K, and
%                              KMEANS infers K from the first dimension of
%                              the matrix.  You can also supply a 3D array,
%                              implying a value for 'Replicates'
%                              from the array's third dimension.
%
%   'Replicates' - Number of times to repeat the clustering, each with a
%      new set of initial centroids [ positive integer | {1}]
%
%   'Maxiter' - The maximum number of iterations [ positive integer | {100}]
%
%   'EmptyAction' - Action to take if a cluster loses all of its member
%      observations.  Choices are:
%               {'error'}    - Treat an empty cluster as an error
%                'drop'      - Remove any clusters that become empty, and
%                              set corresponding values in C and D to NaN.
%                'singleton' - Create a new cluster consisting of the one
%                              observation furthest from its centroid.
%
%   'Display' - Display level [ 'off' | {'notify'} | 'final' | 'iter' ]
%
%   Example:
%
%       X = [randn(20,2)+ones(20,2); randn(20,2)-ones(20,2)];
%       [cidx, ctrs] = kmeans(X, 2, 'dist','city', 'rep',5, 'disp','final');
%       plot(X(cidx==1,1),X(cidx==1,2),'r.', ...
%            X(cidx==2,1),X(cidx==2,2),'b.', ctrs(:,1),ctrs(:,2),'kx');
%
%   See also LINKAGE, CLUSTERDATA, SILHOUETTE.

%   KMEANS uses a two-phase iterative algorithm to minimize the sum of
%   point-to-centroid distances, summed over all K clusters.  The first
%   phase uses what the literature often describes as "batch" updates,
%   where each iteration consists of reassigning points to their nearest
%   cluster centroid, all at once, followed by recalculation of cluster
%   centroids. This phase may be thought of as providing a fast but
%   potentially only approximate solution as a starting point for the
%   second phase.  The second phase uses what the literature often
%   describes as "on-line" updates, where points are individually
%   reassigned if doing so will reduce the sum of distances, and cluster
%   centroids are recomputed after each reassignment.  Each iteration
%   during this second phase consists of one pass through all the points.
%   KMEANS can converge to a local optimum, which in this case is a
%   partition of points in which moving any single point to a different
%   cluster increases the total sum of distances.  This problem can only be
%   solved by a clever (or lucky, or exhaustive) choice of starting points.
%
% References:
%
%   [1] Seber, G.A.F., Multivariate Observations, Wiley, New York, 1984.
%   [2] Spath, H. (1985) Cluster Dissection and Analysis: Theory, FORTRAN
%       Programs, Examples, translated by J. Goldschmidt, Halsted Press,
%       New York, 226 pp.

%   Copyright 1993-2000 The MathWorks, Inc.
%   $Revision: 1.4 $  $Date: 2002/05/30 16:13:31 $

%
if nargin < 2
    error('At least two input arguments required.');
end

% n points in p dimensional space
[n, p] = size(X);
Xsort = []; Xord = [];

pnames = {   'distance'  'start' 'replicates' 'maxiter' 'emptyaction' 'display'};
dflts =  {'sqeuclidean' 'sample'          []       100        'error'  'notify'};
[errmsg,distance,start,reps,maxit,emptyact,display] ...
                       = statgetargs(pnames, dflts, varargin{:});
error(errmsg);

if ischar(distance)
    distNames = {'sqeuclidean','cityblock','cosine','correlation','hamming'};
    i = strmatch(lower(distance), distNames);
    if length(i) > 1
        error(sprintf('Ambiguous ''distance'' parameter value:  %s.', distance));
    elseif isempty(i)
        error(sprintf('Unknown ''distance'' parameter value:  %s.', distance));
    end
    distance = distNames{i};
    switch distance 
    case 'cityblock'
        [Xsort,Xord] = sort(X,1);
    case 'cosine'
        Xnorm = sqrt(sum(X.^2, 2));
        if any(min(Xnorm) <= eps * max(Xnorm))
            error(['Some points have small relative magnitudes, making them ', ...
                   'effectively zero.\nEither remove those points, or choose a ', ...
                   'distance other than ''cosine''.'], []);
        end
        X = X ./ Xnorm(:,ones(1,p));
    case 'correlation'
        X = X - repmat(mean(X,2),1,p);
        Xnorm = sqrt(sum(X.^2, 2));
        if any(min(Xnorm) <= eps * max(Xnorm))
            error(['Some points have small relative standard deviations, making them ', ...
                   'effectively constant.\nEither remove those points, or choose a ', ...
                   'distance other than ''correlation''.'], []);
        end
        X = X ./ Xnorm(:,ones(1,p));
    case 'hamming'
        if ~all(ismember(X(:),[0 1]))
            error('Non-binary data cannot be clustered using Hamming distance.');
        end
    end
else
    error('The ''distance'' parameter value must be a string.');
end

if ischar(start)
    startNames = {'uniform','sample','cluster'};
    i = strmatch(lower(start), startNames);
    if length(i) > 1
        error(sprintf('Ambiguous ''start'' parameter value:  %s.', start));
    elseif isempty(i)
        error(sprintf('Unknown ''start'' parameter value:  %s.', start));
    elseif isempty(k)
        error('You must specify the number of clusters, K.');
    end
    start = startNames{i};
    if strcmp(start, 'uniform')
        if strcmp(distance, 'hamming')
            error('Hamming distance cannot be initialized with uniform random values.');
        end
        % Column-wise range of X; note min(X,[],1), not min(X,1), which
        % would clip each element at 1 instead of taking the minimum.
        Xmins = min(X,[],1);
        Xmaxs = max(X,[],1);
    end
elseif isnumeric(start)
    CC = start;
    start = 'numeric';
    if isempty(k)
        k = size(CC,1);
    elseif k ~= size(CC,1)
        error('The ''start'' matrix must have K rows.');
    elseif size(CC,2) ~= p
        error('The ''start'' matrix must have the same number of columns as X.');
    end
    if isempty(reps)
        reps = size(CC,3);
    elseif reps ~= size(CC,3)
        error('The third dimension of the ''start'' array must match the ''replicates'' parameter value.');
    end
    
    % Need to center explicit starting points for 'correlation'. (Re)normalization
    % for 'cosine'/'correlation' is done at each iteration.
    if isequal(distance, 'correlation')
        CC = CC - repmat(mean(CC,2),[1,p,1]);
    end
else
    error('The ''start'' parameter value must be a string or a numeric matrix or array.');
end

if ischar(emptyact)
    emptyactNames = {'error','drop','singleton'};
    i = strmatch(lower(emptyact), emptyactNames);
    if length(i) > 1
        error(sprintf('Ambiguous ''emptyaction'' parameter value:  %s.', emptyact));
    elseif isempty(i)
        error(sprintf('Unknown ''emptyaction'' parameter value:  %s.', emptyact));
    end
    emptyact = emptyactNames{i};
else
    error('The ''emptyaction'' parameter value must be a string.');
end

if ischar(display)
    i = strmatch(lower(display), strvcat('off','notify','final','iter'));
    if length(i) > 1
        error(sprintf('Ambiguous ''display'' parameter value:  %s.', display));
    elseif isempty(i)
        error(sprintf('Unknown ''display'' parameter value:  %s.', display));
    end
    display = i-1;
else
    error('The ''display'' parameter value must be a string.');
end

if k == 1
    error('The number of clusters must be greater than 1.');
elseif n < k
    error('X must have more rows than the number of clusters.');
end

% Assume one replicate
if isempty(reps)
    reps = 1;
end

%
% Done with input argument processing, begin clustering
%

dispfmt = '%6d\t%6d\t%8d\t%12g';
D = repmat(NaN,n,k);   % point-to-cluster distances
Del = repmat(NaN,n,k); % reassignment criterion
m = zeros(k,1);

totsumDBest = Inf;
for rep = 1:reps
    switch start
    case 'uniform'
        C = unifrnd(Xmins(ones(k,1),:), Xmaxs(ones(k,1),:));
        % For 'cosine' and 'correlation', these are uniform inside a subset
        % of the unit hypersphere.  Still need to center them for
        % 'correlation'.  (Re)normalization for 'cosine'/'correlation' is
        % done at each iteration.
        if isequal(distance, 'correlation')
            C = C - repmat(mean(C,2),1,p);
        end
    case 'sample'
        C = double(X(randsample(n,k),:)); % X may be logical
    case 'cluster'
        Xsubset = X(randsample(n,floor(.1*n)),:);
        [dum, C] = kmeans(Xsubset, k, varargin{:}, 'start','sample', 'replicates',1);
    case 'numeric'
        C = CC(:,:,rep);
    end    
    changed = 1:k; % everything is newly assigned
    idx = zeros(n,1);
    totsumD = Inf;
    
    if display > 2 % 'iter'
        disp(sprintf('  iter\t phase\t     num\t         sum'));
    end
    
    %
    % Begin phase one:  batch reassignments
    %
    
    converged = false;
    iter = 0;
    while true
        % Compute the distance from every point to each cluster centroid
        D(:,changed) = distfun(X, C(changed,:), distance, iter);
        
        % Compute the total sum of distances for the current configuration.
        % Can't do it first time through, there's no configuration yet.
        if iter > 0
            totsumD = sum(D((idx-1)*n + (1:n)'));
            % Test for a cycle: if objective is not decreased, back out
            % the last step and move on to the single update phase
            if prevtotsumD <= totsumD
                idx = previdx;
                [C(changed,:), m(changed)] = gcentroids(X, idx, changed, distance, Xsort, Xord);
                iter = iter - 1;
                break;
            end
            if display > 2 % 'iter'
                disp(sprintf(dispfmt,iter,1,length(moved),totsumD));
            end
            if iter >= maxit, break; end
        end

        % Determine closest cluster for each point and reassign points to clusters
        previdx = idx;
        prevtotsumD = totsumD;
        [d, nidx] = min(D, [], 2);

        if iter == 0
            % Every point moved, every cluster will need an update
            moved = 1:n;
            idx = nidx;
            changed = 1:k;
        else
            % Determine which points moved
            moved = find(nidx ~= previdx);
            if length(moved) > 0
                % Resolve ties in favor of not moving
                moved = moved(D((previdx(moved)-1)*n + moved) > d(moved));
            end
            if length(moved) == 0
                break;
            end
            idx(moved) = nidx(moved);

            % Find clusters that gained or lost members
            changed = unique([idx(moved); previdx(moved)])';
        end

        % Calculate the new cluster centroids and counts.
        [C(changed,:), m(changed)] = gcentroids(X, idx, changed, distance, Xsort, Xord);
        iter = iter + 1;
        
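The listing above stops mid-way through the batch phase (the on-line phase continues on page 2). As a rough illustration of the batch-update loop described in the help comments — assign every point to its nearest centroid, recompute the centroids, and repeat until no point moves — here is a minimal NumPy sketch. This is not the MathWorks code: it handles only squared Euclidean distance, a single replicate, and leaves empty clusters untouched.

```python
import numpy as np

def kmeans_batch(X, C, maxit=100):
    """Batch ("Lloyd") phase of k-means with squared Euclidean distance.

    X : (n, p) data matrix; C : (k, p) initial centroids (modified in place).
    Returns (idx, C), where idx[i] is the cluster index of point i.
    """
    n, k = X.shape[0], C.shape[0]
    idx = np.full(n, -1)
    for _ in range(maxit):
        # Squared distance from every point to every centroid: an (n, k) matrix,
        # analogous to D(:,changed) = distfun(...) in the MATLAB listing.
        D = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        nidx = D.argmin(axis=1)
        if np.array_equal(nidx, idx):   # no point moved: batch phase converged
            break
        idx = nidx
        for j in range(k):              # recompute each centroid (gcentroids)
            members = X[idx == j]
            if len(members):            # empty clusters are simply left as-is
                C[j] = members.mean(axis=0)
    return idx, C

# Usage: two well-separated blobs, seeded with one point from each blob
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
idx, C = kmeans_batch(X, X[[0, -1]].astype(float))
```

As the help text notes, this phase only yields a fast approximate solution; the MATLAB routine then refines it with per-point "on-line" updates, which can escape configurations where a batch step no longer improves the objective.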
