trainpso.m

Particle swarm optimization toolbox for MATLAB (MATLAB M-file source)

%TRAINPSO Particle Swarm Optimization backpropagation.
%
%  Syntax
%  
%    [net,tr,Ac,El] = trainpso(net,Pd,Tl,Ai,Q,TS,VV,TV)
%    info = trainpso(code)
%
%  Description
%
%    TRAINPSO is a network training function that updates weight and
%    bias values according to particle swarm optimization.
%
%    TRAINPSO(NET,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
%      NET - Neural network.
%      Pd  - Delayed input vectors.
%      Tl  - Layer target vectors.
%      Ai  - Initial input delay conditions.
%      Q   - Batch size.
%      TS  - Time steps.
%      VV  - Empty matrix [] or structure of validation vectors.
%      TV  - Empty matrix [] or structure of test vectors.
%    and returns:
%      NET - Trained network.
%      TR  - Training record of various values over each epoch:
%            TR.epoch - Epoch number.
%            TR.perf  - Training performance.
%            TR.vperf - Validation performance.
%            TR.tperf - Test performance.
%      Ac  - Collective layer outputs for last epoch.
%      El  - Layer errors for last epoch.
%
%    Training occurs according to TRAINPSO's training parameters
%    shown here with their default values:
%     trainParam.display  = 25;          iterations to update display (0 means never)
%     trainParam.maxit    = 2000;        maximum iterations
%     trainParam.popsz    = 25;          population size
%     trainParam.ac       = [2,2];       acceleration constants (for type = 0)
%     trainParam.inwt     = [0.9,0.4];   inertia weights (for type = 0)
%     trainParam.it_inwt  = floor(0.8*trainParam.maxit) iters to reach final inertia weight
%     trainParam.egd      = 1e-9;        minimum error gradient
%     trainParam.iter_egd = floor(0.2*trainParam.maxit) iters at errgrad value before exit
%     trainParam.eg       = 0;           error goal, NaN is unconstrained
%     trainParam.type     = 2;           version of pso to use (0=common, 1,2=Trelea, 3=Clerc)
%     trainParam.seedflag = 0;           flag to tell if we want to seed particles
%     trainParam.plotfcn  = 'goplotpso4net'; plotting function
%     trainParam.seedvals = NaN;         Seed values
%
%    TRAINPSO(CODE) returns useful information for each CODE string:
%      'pnames'    - Names of training parameters.
%      'pdefaults' - Default training parameters.
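%
%  Examples
%
%    A minimal usage sketch (not part of the original help): it assumes the
%    PSOt toolbox (including PSO_TRELEA_VECTORIZED) is on the path and uses a
%    made-up toy data set P/T; parameter names follow the defaults above.
%
%      P = rand(2,50);                     % toy 2-input training patterns
%      T = sum(P);                         % toy regression targets
%      net = newff(minmax(P),[5 1],{'tansig','purelin'});
%      net.trainFcn = 'trainpso';          % use PSO instead of a gradient method
%      net.trainParam.popsz = 30;          % override defaults as desired
%      net.trainParam.maxit = 500;
%      net = train(net,P,T);               % TRAIN dispatches to TRAINPSO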

% Structure of this code was taken from traingd.m by:
% Mark Beale, 11-31-97
% Copyright 1992-2002 The MathWorks, Inc.
% $Revision: 1.10 $ $Date: 2002/04/14 21:35:53 $
%
% All the PSO specific stuff is of course by Brian Birge
% Rev 1.0 - 8/31/05

function [net,tr,Ac,El,v5,v6,v7,v8] = ...
   trainpso(net,Pd,Tl,Ai,Q,TS,VV,TV,v9,v10,v11,v12)

  %global Tl Pd net Ai Q TS simfuncname
  
% FUNCTION INFO
% =============

if isstr(net)
  switch (net)
    case 'pnames',
      net = {...
             'display  = iterations to update display (0 means never)';...
             'maxit    = maximum iterations';...
             'popsz    = population size';...
             'ac       = acceleration constants (for type = 0)';...
             'inwt     = inertia weights (for type = 0)';...
             'it_inwt  = iterations to reach final inertia weight';...
             'egd      = minimum error gradient';...
             'iter_egd = # of iters at errgrad value before exit';...
             'eg       = error goal, NaN means unconstrained';...
             'type     = version of pso to use (0=common, 1,2=Trelea, 3=Clerc)';...
             'seedflag = 0 for no seed, 1 for seeded particles';...
             'plotfcn  = plot function to use if display ~= 0';...
             'seedvals = initial particle positions to use if seedflag~=0';...
             'minmax   = variable range for search [min,max], each row is a dimension';...
             'mvden    = maximum velocity divisor, default = 2';...
             'IWrange  = range to search for initial layer weights [min,max], default = [-100,100]';...
             'LWrange  = range to search for hidden/output layer weights [min,max], default = [-100,100]';...
             'BIrange  = range to search for hidden/output biases [min,max], default = [-8,8]';...
             'outlyrrange = range of output layer weights [min,max], defaults same as LWrange';...
            };

    case 'pdefaults',
      trainParam.display  = 25;    % iterations to update display (0 means never)
      trainParam.maxit    = 2000;  % maximum iterations
      trainParam.popsz    = 25;    % population size
      trainParam.ac       = [2,2]; % acceleration constants (for type = 0)
      trainParam.inwt     = [0.9,0.4]; % inertia weights (for type = 0)
      trainParam.it_inwt  = floor(0.8*trainParam.maxit); % iterations to reach final inertia weight
      trainParam.egd      = 1e-9;  % minimum error gradient
      trainParam.iter_egd = floor(0.2*trainParam.maxit); % # of iters at errgrad value before exit
      trainParam.eg       = 0;     % NaN is unconstrained optimization
      trainParam.type     = 2;     % chooses type of pso (common, clerc, etc)
      trainParam.seedflag = 0;     % flag to tell if we want to seed particles
      trainParam.plotfcn  = 'goplotpso4net'; % plot function to use if display ~= 0
      trainParam.seedvals = NaN; % only activated if seedflag == 1
      
      % this little section allows different search ranges based on whether
      % parameter is initial layer weight, hidden layer weight, or bias.
      % This has varying degrees of usefulness depending on your activation function
      % choices and whether you normalize the i/o to the net
      trainParam.IWrange = [-100,100]; % Initial Layer Weight Range for search
      trainParam.LWrange = [-100,100]; % Hidden Layer Weight Range for search
      trainParam.BIrange = [-8,8]; % Bias Range for search
      trainParam.outlyrrange = trainParam.LWrange; % output layer weight range; with a purelin act fcn on the output layer this can usefully be made larger
      trainParam.mvden  = 2;
      % trainParam.keymap and trainParam.minmax are not set here; they are
      % calculated later in trainpso (at the first call to train) using
      % IWrange, LWrange, and BIrange
      
      % initialization
      net = trainParam;
      
    otherwise,
      error('Unrecognized code.')
  end
  return
end

% these quantities are needed so they can be passed to goplotpso4net
P = Pd{1};
T = Tl{end};

% setup min/max values separately for biases, input weights, and layer weights.
% This uses code adapted from getx.m to figure out which indices in the
% weight/bias vector correspond to biases, input weights, and layer weights.

% The keymap variable is useful for quickly finding out which entry is a weight
% and which is a bias; it is used in goplotpso4net.
% The minmax variable is needed by the PSO to determine the search range for
% each particle component; the code below simply allows separate search ranges
% for biases, input-layer weights, and hidden-layer weights.
% NOTE: keymap and minmax are NOT created during a call to newff; they are
% created only at the first call to train when using trainpso.
   inputLearn     = net.hint.inputLearn;
   layerLearn     = net.hint.layerLearn;
   biasLearn      = net.hint.biasLearn;
   inputWeightInd = net.hint.inputWeightInd;
   layerWeightInd = net.hint.layerWeightInd;
   biasInd        = net.hint.biasInd;

   % setup range for weights & biases used in training,
   % and create a key telling us which indices are input weights, layer
   % weights, or biases when using x = getx(net);
   % each row of keymap is [t,y], where t says whether the entry is a weight
   % or a bias, and y is the layer it belongs to:
   %   t = 0  bias
   %   t = 1  initial (input) layer weight
   %   t = 2  hidden/output layer weight
   %   y = layer that the entry is on
   net.trainParam.minmax = zeros(net.hint.xLen,2);
   net.trainParam.keymap = zeros(net.hint.xLen,2);
   for i=1:net.numLayers
     for j=find(inputLearn(i,:))
       % range to search for input weights 
       net.trainParam.minmax(inputWeightInd{i,j},1:2) =...
          repmat(net.trainParam.IWrange,length(inputWeightInd{i,j}),1);
       
       net.trainParam.keymap(inputWeightInd{i,j},1:2) =...
          repmat([1,i],length(inputWeightInd{i,j}),1);
       
     end
     
     for j=find(layerLearn(i,:))
       % range to search for layer weights  
       net.trainParam.minmax(layerWeightInd{i,j},1:2) =...
          repmat(net.trainParam.LWrange,length(layerWeightInd{i,j}),1);
       
       net.trainParam.keymap(layerWeightInd{i,j},1:2) =...
          repmat([2,i],length(layerWeightInd{i,j}),1);
     end

     if biasLearn(i)
       % range to search for biases  
       net.trainParam.minmax(biasInd{i},1:2) =...
          repmat(net.trainParam.BIrange,length(biasInd{i}),1);
       
       net.trainParam.keymap(biasInd{i},1:2) =...
          repmat([0,i],length(biasInd{i}),1);
     end
   end
   for j=find(layerLearn(net.numLayers,:))
       net.trainParam.minmax(layerWeightInd{net.numLayers,j},1:2) =...
          repmat(net.trainParam.outlyrrange,length(layerWeightInd{net.numLayers,j}),1);
   end
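% For illustration (hypothetical example, not from the original code): in a
% 2-input, [5 1] feedforward net, the 21 entries of x = getx(net) would be
% tagged as follows (row order depends on how net.hint assigns indices): the
% 10 layer-1 input weights get keymap rows [1,1] and minmax rows of IWrange,
% the 5 layer-1 biases get [0,1] and BIrange, the 5 layer-2 weights get [2,2]
% and LWrange (then overwritten with outlyrrange by the loop above, since
% layer 2 is the output layer), and the single layer-2 bias gets [0,2] and BIrange.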
%------------------------------------------------------------------------------
%------------------------------------------------------------------------------
%------------------------------------------------------------------------------
% CALCULATION
% ===========
%assignin('base','net',net);
% Generate functions
simfunc         = gensimm(net);
[x,simfuncname] = fileparts(simfunc);

% Constants
this = 'TRAINPSO';
doValidation = ~isempty(VV);
doTest = ~isempty(TV);

% Initialize
stop = '';
startTime = clock;
X = getx(net);

if (doValidation)
  VV.net = net;
  vperf = feval(simfuncname,net,VV.Pd,VV.Ai,VV.Tl,VV.Q,VV.TS);
  VV.perf = vperf;
  VV.numFail = 0;
end
tr = newtr(net.trainParam.maxit,'perf','vperf','tperf');

% extract network architecture and convert to format needed for PSO runs
dims=length(X);
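% (one particle dimension per trainable weight/bias in the network)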

% PSO Parameters
mvden    = net.trainParam.mvden; % max velocity divisor
varrange = [];
mv       = [];

minx        = net.trainParam.minmax(:,1);
maxx        = net.trainParam.minmax(:,2);

for i=1:dims
   if length(minx)<dims
     varrange = [varrange;minx(1) maxx(1)]; 
   else
     varrange = [varrange;minx(i) maxx(i)];
   end
   mv       = [mv;(varrange(i,2)-varrange(i,1))/mvden];
end
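% e.g. (illustrative): with the default LWrange of [-100,100] and mvden = 2, the
% maximum velocity for a layer-weight dimension works out to (100-(-100))/2 = 100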
minmax   = 0; % sets pso to 'minimize'

shw         = net.trainParam.display;
epoch       = net.trainParam.maxit;
ps          = net.trainParam.popsz;
ac          = net.trainParam.ac;
Iwt         = net.trainParam.inwt;
wt_end      = net.trainParam.it_inwt;
errgrad     = net.trainParam.egd;
errgraditer = net.trainParam.iter_egd;
errgoal     = net.trainParam.eg;
modl        = net.trainParam.type;
PSOseedflag = net.trainParam.seedflag;

psoparams   = [shw epoch ps ac(1) ac(2) Iwt(1) Iwt(2) wt_end errgrad ...
               errgraditer errgoal modl PSOseedflag];
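% with the default trainParam values above this would be, for example:
%   psoparams = [25 2000 25 2 2 0.9 0.4 1600 1e-9 400 0 2 0]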

plotfcn     = net.trainParam.plotfcn;
PSOseedVal  = net.trainParam.seedvals;

% call PSO routine, returns weights and biases of new network
[pso_out, tr.epoch, tr.perf] = pso_Trelea_vectorized('pso_neteval', dims, mv,...
                                          varrange,...
                                          minmax, psoparams, plotfcn,...
                                          PSOseedVal);

%assignin('base','bestweights',pso_out(1:end-1));
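% note (assumption): pso_out appears to hold the global-best particle position
% followed by its fitness value, which is why only pso_out(1:end-1) is applied
% to the network below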

% apply new weights and biases to network
net = setx(net,pso_out(1:end-1));
% this is just to get El and Ac to return to caller
[perf,El,Ac,N,Zl,Zi,Zb] = feval(simfuncname,net,Pd,Ai,Tl,Q,TS);

return
