function model = gda(data,options)
% GDA Generalized Discriminant Analysis.
%
% Synopsis:
%  model = gda(data)
%  model = gda(data,options)
%
% Description:
%  This function is an implementation of the Generalized Discriminant
%  Analysis (GDA) [Baudat01]. GDA is a kernelized version of
%  Linear Discriminant Analysis (LDA). It produces a kernel data
%  projection which increases the class separability of the projected
%  training data.
%
% Input:
%  data [struct] Labeled training data:
%   .X [dim x num_data] Training vectors.
%   .y [1 x num_data] Labels (1,2,...,nclass).
%
%  options [struct] Defines the kernel and the output dimension:
%   .ker [string] Kernel identifier (default 'linear');
%    see 'help kernel' for more info.
%   .arg [1 x nargs] Kernel arguments (default 1).
%   .new_dim [1x1] Output dimension (default dim).
%
% Output:
%  model [struct] Kernel projection:
%   .Alpha [num_data x new_dim] Multipliers.
%   .b [new_dim x 1] Bias.
%   .sv.X [dim x num_data] Training data.
%   .options [struct] Copy of the used options.
%   .rankK [int] Rank of the centered kernel matrix.
%   .nsv [int] Number of training data.
%
% Example:
%  in_data = load('iris');
%  model = gda(in_data,struct('ker','rbf','arg',1));
%  out_data = kernelproj( in_data, model );
%  figure; ppatterns( out_data );
%
% See also
%  KERNELPROJ, KPCA.
%

% About: Statistical Pattern Recognition Toolbox
% (C) 1999-2003, Written by Vojtech Franc and Vaclav Hlavac
% <a href="http://www.cvut.cz">Czech Technical University Prague</a>
% <a href="http://www.feld.cvut.cz">Faculty of Electrical Engineering</a>
% <a href="http://cmp.felk.cvut.cz">Center for Machine Perception</a>

% Modifications:
% 24-may-2004, VF
% 4-may-2004, VF

% process input arguments
%-----------------------------
% allows data to be given as a cell
data = c2s(data);

% get dimensions
[dim,num_data] = size(data.X);
nclass = max(data.y);

if nargin < 2, options=[]; else options=c2s(options); end
if ~isfield(options,'ker'), options.ker = 'linear'; end
if ~isfield(options,'arg'), options.arg = 1; end
if ~isfield(options,'new_dim'), options.new_dim = dim; end

% sort data according to labels
[tmp,inx] = sort(data.y);
data.y = data.y(inx);
data.X = data.X(:,inx);
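% Note: sorting by label makes samples of the same class contiguous, so
% the class-membership matrix W below can be built block by block.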

% kernel matrix
K = kernel( data.X, options.ker, options.arg );

% centering matrix
J = ones(num_data,num_data)/num_data;
JK = J*K;

% centering data in non-linear space
Kc = K - JK' - JK + JK*J;
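% Note: with JK = J*K and K symmetric, this is
% Kc = K - K*J - J*K + J*K*J = (I-J)*K*(I-J),
% i.e. the kernel matrix of the data centered in the feature space.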

% Kc decomposition; Kc = P*Gamma*P'
[P,Gamma] = eig( Kc );
Gamma = diag(Gamma);
[tmp,inx] = sort(Gamma);       % sort eigenvalues in ascending order
inx = inx(num_data:-1:1);      % flip to descending order
Gamma = Gamma(inx);
P = P(:,inx);

% remove eigenvectors with too small eigenvalues
minEigv = Gamma(1)/1000;
inx = find( Gamma >= minEigv );
P = P(:,inx);
Gamma = Gamma(inx);
rankKc = length(inx);

Kc = P*diag(Gamma)*P';
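% Note: eigenvalues below 1/1000 of the largest are discarded so that
% diag(Gamma) is safely invertible when the coefficients Alpha are
% recovered below; rankKc is the effective rank of Kc.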

% make diagonal block matrix W
W = [];
for i=1:nclass,
  num_data_class = length(find(data.y==i));
  W = blkdiag(W,ones(num_data_class)/num_data_class);
end
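% Note: W(i,j) equals 1/n_c if samples i and j share class c of size
% n_c, and 0 otherwise; it plays the role of the between-class scatter
% in the kernelized eigenproblem solved below (cf. [Baudat01]).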

% new dimension of data
model.new_dim = min([options.new_dim, rankKc, nclass-1]);
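% Note: as in LDA, at most nclass-1 directions carry class-separability
% information, so the output dimension is capped by nclass-1 and by the
% rank of Kc.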

% compute vectors Alpha and their normalization
[Beta,Lambda] = eig( P'*W*P );
Lambda = diag(Lambda);
[tmp,inx] = sort(Lambda);           % sort eigenvalues in ascending order
inx = inx(length(Lambda):-1:1);     % flip to descending order
Lambda = Lambda(inx);
Beta = Beta(:,inx(1:model.new_dim));

%model.Alpha=P*inv(diag(Gamma))*Beta;
model.Alpha = P*diag(1./Gamma)*Beta;
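% Note: Beta solves the reduced eigenproblem in the coordinates
% beta = diag(Gamma)*P'*alpha, so the expansion coefficients are
% recovered as Alpha = P*diag(1./Gamma)*Beta; the commented line above
% is the equivalent but less efficient form using inv().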

% normalization of vectors Alpha
for i=1:model.new_dim,
  model.Alpha(:,i) = model.Alpha(:,i)/...
    sqrt(model.Alpha(:,i)'* Kc * model.Alpha(:,i));
end
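% Note: each column is scaled so that Alpha(:,i)'*Kc*Alpha(:,i) = 1,
% giving projection directions of unit norm in the feature space.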

% centering Alpha and computing bias
sumK = sum(K);
model.b = (-sumK*model.Alpha/num_data+...
  sum(model.Alpha)*sum(sumK)/num_data^2)';

for i=1:size(model.Alpha,2),
  model.Alpha(:,i) = model.Alpha(:,i)-sum(model.Alpha(:,i))/num_data;
end
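% Note: centering the columns of Alpha and adding the bias b makes the
% projection of the training data zero-mean, consistent with the
% centering applied to the kernel matrix above.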

% fill the output model
model.options = options;
model.sv = data;
model.rankK = rankKc;
model.nsv = num_data;
model.fun = 'kernelproj';

return;