Batch version of the back-propagation algorithm. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iter] = batbp(NetDef,W1,W2,PHI,Y,trparms) trains the network with back-propagation. The activation functions must be either linear or tanh. The network architecture is defined by the matrix NetDef, which consists of two rows: the first row specifies the hidden layer and the second row specifies the output layer.
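A minimal usage sketch, not toolbox documentation: the 'H'/'L' activation codes in NetDef, the weight-matrix shapes, and the trparms layout are all assumptions here; check the toolbox help for the exact conventions.

NetDef = ['HHHHH'; 'L----'];   % assumed codes: 5 tanh hidden units, 1 linear output
PHI = rand(2,100);             % assumed shape: inputs x samples
Y   = sum(PHI.^2);             % assumed shape: outputs x samples
W1  = 0.1*randn(5,3);          % hidden weights (2 inputs + bias column), illustrative init
W2  = 0.1*randn(1,6);          % output weights (5 hidden units + bias), illustrative init
trparms = [500 1e-4 0.01 0];   % assumed layout: [max_iter stop_crit step momentum]
[W1,W2,critvec,iter] = batbp(NetDef,W1,W2,PHI,Y,trparms);
plot(critvec);                 % criterion value per iteration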
Tags: back-propagation corresponding input-output algorithm
Upload time: 2016-12-27
Uploader: exxxds
This function calculates Akaike's final prediction error (FPE) estimate of the average generalization error. [FPE,deff,varest,H] = fpe(NetDef,W1,W2,PHI,Y,trparms) produces the final prediction error estimate (FPE), the effective number of weights in the network if the network has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian.
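For context, Akaike's classic FPE scales the training error by (N+d)/(N-d), where N is the number of samples and d the (effective) number of weights. A hedged usage sketch, reusing NetDef, W1, W2, PHI, Y, and trparms from a previous training call:

[FPE,deff,varest,H] = fpe(NetDef,W1,W2,PHI,Y,trparms);
fprintf('FPE: %g  effective weights: %g  noise variance: %g\n', FPE, deff, varest);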
Tags: generalization calculates prediction function
Upload time: 2014-12-03
Uploader: maizezhen
Train a two-layer neural network with the Levenberg-Marquardt method. If desired, regularization by weight decay can be used; pruned (i.e., not fully connected) networks can also be trained. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iteration,lambda] = marq(NetDef,W1,W2,PHI,Y,trparms) trains the network with the Levenberg-Marquardt method. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second row specifies the output layer.
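A hedged usage sketch, reusing PHI, Y, W1, and W2 as in the batbp example above; the NetDef codes and the trparms contents (in particular the weight-decay setting) are assumptions, not toolbox documentation.

NetDef = ['HHHHH'; 'L----'];   % assumed codes: 5 tanh hidden units, 1 linear output
trparms = [200 1e-4 1 1e-3];   % assumed layout: [max_iter stop_crit lambda_0 decay]
[W1,W2,critvec,iteration,lambda] = marq(NetDef,W1,W2,PHI,Y,trparms);
semilogy(critvec);             % the criterion typically drops quickly with L-M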
Tags: Levenberg-Marquardt desired network neural
Upload time: 2016-12-27
Uploader: jcljkh
This function calculates Akaike's final prediction error (FPE) estimate of the average generalization error for network models generated by NNARX, NNOE, NNARMAX1+2, or their recursive counterparts. [FPE,deff,varest,H] = nnfpe(method,NetDef,W1,W2,U,Y,NN,trparms,skip,Chat) produces the final prediction error estimate (FPE), the effective number of weights in the network if it has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian.
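A hedged sketch for a model assumed to come from an NNARX training call (which would supply NetDef, W1, W2, U, Y, and trparms); the method string, the NN = [na nb nk] regressor structure, using skip to discard transient samples, and passing [] for Chat when no C-polynomial applies are all assumptions.

NN   = [2 2 1];                % assumed regressor structure [na nb nk]
skip = 10;                     % assumed: initial samples to discard
[FPE,deff,varest,H] = nnfpe('nnarx',NetDef,W1,W2,U,Y,NN,trparms,skip,[]);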
Tags: generalization calculates prediction function
Upload time: 2016-12-27
Uploader: 腳趾頭
[Euler's Method] The essential feature of a differential equation is that it contains derivative terms; the first step of any numerical solution is ... Euler's method is the most basic and simplest numerical solution technique, but its accuracy is low, so it is generally not ... For an ordinary differential equation dy/dx = f(x,y), x ∈ [a,b], y(a) = y0, one can divide the inter...
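Although the description breaks off, the construction it leads into is the standard one: split [a,b] into n steps of width h = (b-a)/n and advance with y_{k+1} = y_k + h*f(x_k, y_k). A minimal sketch with an illustrative right-hand side:

f  = @(x,y) -2*x*y;            % illustrative f(x,y), not from the original upload
a  = 0;  b = 1;  n = 100;  y0 = 1;
h  = (b-a)/n;
x  = a + (0:n)*h;
y  = zeros(1,n+1);  y(1) = y0;
for k = 1:n
    y(k+1) = y(k) + h*f(x(k),y(k));   % Euler step
end
plot(x,y);                     % low accuracy: global error shrinks only like O(h)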
Upload time: 2014-01-09
Uploader: www240697738
Covers the fundamentals of applying sliding-mode control to system control; quick to get started with, and thoroughly commented for everyone's convenience!
Upload time: 2017-01-05
Uploader: helmos
For those with no background in variable-structure control, this program is quick to grasp, and its detailed comments explain the code so that users can get up to speed fast.
Tags: control
Upload time: 2014-01-20
Uploader: 源碼3
#include <iostream>
#include <iomanip>

#define N  20   // number of training samples
#define IN 1    // number of input-layer neurons
#define HN 8    // number of hidden-layer neurons
#define ON 1    // number of output-layer neurons

double P[IN];      // input data of a single sample
double T[ON];      // teacher (target) data of a single sample
double W[HN][IN];  // input-to-hidden weights
double V[ON][HN];  // hidden-to-output weights
double X[HN];      // net input of the hidden layer
double Y[ON];      // net input of the output layer
double H[HN];      // output of the hidden layer
Tags: define include iostream iomanip
Upload time: 2014-01-01
Uploader: 凌云御清風(fēng)
1. The internal driving force behind the development of pedagogy is the development of ( D ). A. laws of education B. the value of education C. educational phenomena D. problems of education  2. The educator who proposed the idea of "pansophic" education and explored "the whole art of teaching all things to all men" was ( B ). A. Bacon B. Comenius C. Herbart D. Zankov
Upload time: 2017-01-06
Uploader: 1427796291
3D Curve and Surface Comparison Demonstration System. Design a graphical user interface (GUI) that demonstrates common 3D function plots and contains at least the menus "3D Plotting", "Options", and "Exit". 3D plotting covers: the parametric equations x = e^(-t/20)cos(t), y = e^(-t/20)sin(t), z = t for t from 0 to 2π, and the parametric equations x = t, y = t^2, z = t^3 for t from 0 to 1 (draw their 3D surfaces and 3D curves in the same figure window). The "Options" menu mainly includes: a grid toggle, a legend toggle, an axis-box toggle, a colormap selection menu, and a curve-color menu.
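A minimal plotting sketch for the two parametric curves named above; the menus, toggles, surface plots, and colormap controls of the full GUI are omitted.

t1 = linspace(0,2*pi,200);
plot3(exp(-t1/20).*cos(t1), exp(-t1/20).*sin(t1), t1);   % first curve
hold on; grid on;
t2 = linspace(0,1,100);
plot3(t2, t2.^2, t2.^3);                                 % second curve
legend('x=e^{-t/20}cos t, y=e^{-t/20}sin t, z=t', 'x=t, y=t^2, z=t^3');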
Tags: GUI comparison graphical-user-interface function
Upload time: 2017-01-10
Uploader: hasan2015