Batch version of the back-propagation algorithm. Given a set of corresponding input-output pairs and an initial network, [W1,w2,critvec,iter]=batbp(NetDef,W1,w2,PHI,Y,trparms) trains the network with back-propagation. The activation functions must be either linear or tanh. The network architecture is defined by the matrix NetDef, which consists of two rows: the first row specifies the hidden layer and the second specifies the output layer.
Tags: back-propagation corresponding input-output algorithm
Upload time: 2016-12-27
Uploader: exxxds
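The batch update described above can be sketched in a few lines. The following is a Python/NumPy illustration rather than the toolbox's MATLAB code; it assumes one tanh hidden layer, a linear output, a summed-squared-error criterion, and an illustrative fixed learning rate:

```python
import numpy as np

def batbp_sketch(W1, w2, PHI, Y, lr=0.001, iters=300):
    """Batch back-propagation for a two-layer net: tanh hidden, linear output.

    W1: (hidden, inputs), w2: (outputs, hidden), PHI: (inputs, N), Y: (outputs, N).
    Returns the trained weights and the criterion value per iteration."""
    critvec = []
    for _ in range(iters):
        H = np.tanh(W1 @ PHI)            # hidden activations, shape (hidden, N)
        E = Y - w2 @ H                   # residuals of the linear output layer
        critvec.append(0.5 * np.sum(E**2))
        g_w2 = -E @ H.T                  # gradient of the criterion w.r.t. w2
        delta = (w2.T @ -E) * (1 - H**2) # back-propagate through tanh'
        g_W1 = delta @ PHI.T             # gradient w.r.t. W1
        w2 -= lr * g_w2                  # plain batch gradient descent step
        W1 -= lr * g_W1
    return W1, w2, critvec
```

The criterion vector lets you check that the summed squared error decreases over the batch iterations, mirroring the critvec output of the real routine.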
This function calculates Akaike's final prediction error (FPE) estimate of the average generalization error. [FPE,deff,varest,H] = fpe(NetDef,W1,w2,PHI,Y,trparms) produces the final prediction error estimate (FPE), the effective number of weights in the network if the network has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian.
Tags: generalization calculates prediction function
Upload time: 2014-12-03
Uploader: maizezhen
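The FPE quantity itself is simple to compute once the residuals and the (effective) number of weights are known. A minimal sketch, assuming the common form FPE = V·(N+d)/(N−d) with V the mean squared residual; the toolbox additionally estimates the effective d from the regularized Gauss-Newton Hessian when weight decay is used:

```python
def fpe_estimate(residuals, n_params):
    """Akaike's final prediction error: FPE = V * (N + d) / (N - d),
    where V is the mean squared residual, N the number of samples,
    and d the (effective) number of weights."""
    N = len(residuals)
    V = sum(r * r for r in residuals) / N
    return V * (N + n_params) / (N - n_params)
```

The (N+d)/(N−d) factor inflates the training error to predict the error on fresh data; it grows quickly as d approaches N, penalizing over-parameterized models.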
Train a two-layer neural network with the Levenberg-Marquardt method. If desired, regularization by weight decay can be used; pruned (i.e. not fully connected) networks can also be trained. Given a set of corresponding input-output pairs and an initial network, [W1,w2,critvec,iteration,lambda]=marq(NetDef,W1,w2,PHI,Y,trparms) trains the network with the Levenberg-Marquardt method. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second row specifies the output layer.
Tags: Levenberg-Marquardt desired network neural
Upload time: 2016-12-27
Uploader: jcljkh
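The core of the Levenberg-Marquardt method is the damped Gauss-Newton step (JᵀJ + λI)·Δw = Jᵀe, with λ raised when a step fails and lowered when it succeeds. A Python sketch of a single step; the function names and the λ-update factors here are illustrative, not the toolbox's:

```python
import numpy as np

def marq_step(w, residual_fn, jac_fn, lam):
    """One Levenberg-Marquardt step for least squares on residuals e = y - f(w).

    residual_fn(w) returns the residual vector e; jac_fn(w) returns the
    Jacobian J of the model output f w.r.t. w (one row per sample)."""
    e = residual_fn(w)
    J = jac_fn(w)
    A = J.T @ J + lam * np.eye(len(w))   # damped Gauss-Newton normal matrix
    dw = np.linalg.solve(A, J.T @ e)
    w_new = w + dw
    if np.sum(residual_fn(w_new) ** 2) < np.sum(e ** 2):
        return w_new, lam * 0.5          # success: trust the GN direction more
    return w, lam * 2.0                  # failure: fall back toward gradient descent
```

Small λ makes the step close to Gauss-Newton (fast near the optimum); large λ turns it into a short gradient-descent step (robust far from it).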
This function calculates Akaike's final prediction error (FPE) estimate of the average generalization error for network models generated by NNARX, NNOE, NNARMAX1+2, or their recursive counterparts. [FPE,deff,varest,H] = nnfpe(method,NetDef,W1,w2,U,Y,NN,trparms,skip,Chat) produces the final prediction error estimate (FPE), the effective number of weights in the network if it has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian.
Tags: generalization calculates prediction function
Upload time: 2016-12-27
Uploader: 腳趾頭
A simplified knapsack problem: a knapsack can hold a total weight of T, and there are n items with weights W1, W2, ..., Wn. Can some subset of these n items be placed in the knapsack so that their total weight is exactly T? If solutions exist, output all of them; otherwise report that there is no solution.
Tags: knapsack problem
Upload time: 2017-01-16
Uploader: tianyi223
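A standard way to list all solutions of this subset-sum variant is depth-first backtracking over take/skip decisions, pruning a branch as soon as the remaining target goes negative. A Python sketch, assuming positive weights:

```python
def subset_sum_all(weights, T):
    """Return every subset (as lists of item indices) whose weights sum to T."""
    solutions = []

    def backtrack(i, chosen, remaining):
        if remaining == 0:
            solutions.append(list(chosen))  # exact fit found
            return
        if i == len(weights) or remaining < 0:
            return                          # dead end: out of items or overweight
        chosen.append(i)                    # branch 1: take item i
        backtrack(i + 1, chosen, remaining - weights[i])
        chosen.pop()                        # branch 2: skip item i
        backtrack(i + 1, chosen, remaining)

    backtrack(0, [], T)
    return solutions
```

For example, weights (1, 2, 3, 4) with T = 5 yields the two solutions {items 0, 3} and {items 1, 2}; an empty result list means "no solution".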
A traveler has a knapsack that can carry at most m kilograms. There are n items whose weights are W1, W2, ..., Wn and whose values are C1, C2, ..., Cn. Given that there is only one of each item, find the maximum total value the traveler can obtain.
Tags: travel
Upload time: 2017-01-23
Uploader: 奇奇奔奔
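This is the classic 0/1 knapsack problem, solvable by dynamic programming over capacities. A Python sketch, assuming integer weights:

```python
def knapsack_01(weights, values, m):
    """0/1 knapsack: best[c] holds the max value achievable within capacity c."""
    best = [0] * (m + 1)
    for w, v in zip(weights, values):
        # Iterate capacities in reverse so each item is used at most once.
        for c in range(m, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[m]
```

The reverse capacity loop is the one-line difference from the unbounded knapsack: a forward loop would let best[c - w] already include the current item, allowing it to be taken twice.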
A Huffman tree, also called an optimal binary tree, is the binary tree with the shortest weighted path length. The weighted path length of a tree is the sum, over all leaf nodes, of each leaf's weight multiplied by its path length to the root (with the root at level 0, a leaf's path length to the root equals its level number). It is written WPL = W1*L1 + W2*L2 + W3*L3 + ... + Wn*Ln. From N weights Wi (i = 1, 2, ..., n) a binary tree with N leaf nodes is constructed, where leaf i has path length Li (i = 1, 2, ..., n). It can be shown that the Huffman tree has the smallest WPL.
Upload time: 2017-06-09
Uploader: wang5829
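The WPL of the Huffman tree can be computed without building the tree explicitly: each merge of the two smallest weights contributes its sum to the WPL, because that sum is exactly the total weight that moves down one extra level. A Python sketch using a min-heap:

```python
import heapq

def huffman_wpl(weights):
    """Weighted path length of the Huffman tree built from the given leaf weights."""
    heap = list(weights)
    heapq.heapify(heap)
    wpl = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)   # two smallest current weights
        b = heapq.heappop(heap)
        heapq.heappush(heap, a + b)
        wpl += a + b              # this subtree's leaves gain one level of depth
    return wpl
```

For the weights (1, 2, 3, 4) the merges are 1+2=3, 3+3=6, 6+4=10, so WPL = 3+6+10 = 19, matching the direct sum 1·3 + 2·3 + 3·2 + 4·1.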
A CMAC network is used to identify a harmonic signal with frequency 0.1572 Hz. The number of association units is chosen as A*=4, and the weight is Wa = W1 + W2 + W3 + W4.
Upload time: 2014-02-17
Uploader: cazjing
MATLAB code for the batch perceptron algorithm (one misclassified-sample bookkeeping bug fixed: the margin vector needs one entry per sample, size(W,1), not per feature):

w1=[1,0.1,1.1; 1,6.8,7.1; 1,-3.5,-4.1; 1,2.0,2.7; 1,4.1,2.8; 1,3.1,5.0; 1,-0.8,-1.3; 1,0.9,1.2; 1,5.0,6.4; 1,3.9,4.0];
w2=[1,7.1,4.2; 1,-1.4,-4.3; 1,4.5,0.0; 1,6.3,1.6; 1,4.2,1.9; 1,1.4,-3.2; 1,2.4,-4.0; 1,2.5,-6.1; 1,8.4,3.7; 1,4.1,-2.2];
w3=[1,-3.0,-2.9; 1,0.5,8.7; 1,2.9,2.1; 1,-0.1,5.2; 1,-4.0,2.2; 1,-1.3,3.7; 1,-3.4,6.2; 1,-4.1,3.4; 1,-5.1,1.6; 1,1.9,5.1];
figure;
plot(w3(:,2),w3(:,3),'ro');
hold on;
plot(w2(:,2),w2(:,3),'b+');
W=[w2;-w3];                  % augmented, sign-normalized samples (class 2 vs class 3)
a=[0,0,0];                   % initial weight vector
k=0;                         % iteration counter
y=zeros(size(W,1),1);        % margins; y<=0 marks a misclassified sample
while any(y<=0)
    k=k+1;
    y=a*transpose(W);              % margins of all samples
    a=a+sum(W(find(y<=0),:));      % batch update: add all misclassified samples
    if k>=250
        break
    end
end
if k<250
    disp(['a = ',num2str(a)])
    disp(['k = ',num2str(k)])
else
    disp('No convergence within 250 iterations; stopped.')
end
% Decision boundary: x2 = -a(2)*x1/a(3) - a(1)/a(3)
xmin=min(min(w1(:,2)),min(w2(:,2)));
xmax=max(max(w1(:,2)),max(w2(:,2)));
x=xmin-1:xmax+1;
y=-a(2)*x/a(3)-a(1)/a(3);
plot(x,y)
Upload time: 2016-11-07
Uploader: a1241314660
DIP, SSOP, QFN, TQFP, TO, SOP, SOT common-IC 2D/3D Altium Designer (AD19) footprint and component libraries, a collection of 33 library files.
PCB Library: .PcbLib
Date: 2021/1/4
Time: 17:10:26
Component Count: 200
DIP4, DIP6, DIP8, DIP8_MH, DIP14, DIP14_MH, DIP16, DIP16_MH, DIP18, DIP18_MH, DIP20, DIP20_MH, DIP20_TP, DIP22, DIP22_MH, DIP24, DIP24_MH, DIP24L, DIP24L_MH, DIP28, DIP28L, DIP40, DIP40_MH, DIP40_TP, DIP48, LQFP32 7X7, LQFP44 10X10, LQFP48 7X7, LQFP52 10X10, LQFP64 7x7, LQFP64 10x10, LQFP64 14x14, LQFP80 12x12, LQFP80 14x14, LQFP100 14x14, LQFP120 14x14, LQFP128 14x14, LQFP128 14x20, LQFP144 20X20, LQFP176 24x24, LQFP208 28x28, LQFP216 24x24, LQFP256 28x28, SOP4, SOP4-w2.54, SOP8, SOP8-w2.54, SOP8W, SOP10, SOP14, SOP16, SOP16-w2.54, SOP16N, SOP16W, SOP18, SOP18W, SOP20, SOP20Z, SOP22, SOP24, SOP28, SOP30, SSOP8, SSOP14, SSOP16, SSOP20, SSOP24, SSOP28, SSOP48, TQFP32 5x5, TQFP32 7x7, TQFP40 5x5, TQFP44 10x10, TQFP48 7x7, TQFP52 10X10, TQFP64 7x7, TQFP64 10x10, TQFP64 14x14, TQFP80 12x12, TQFP80 14x14, TQFP100 14x14, TQFP120 14x14, TQFP128 14x14, TQFP128 20x20, TQFP144 20x20, TQFP176 20x20
Tags: dip ssop qfn tqfp to sop sot chip
Upload time: 2022-03-05
Uploader: