Employee training system. In the ODBC data-source manager, add the Microsoft Access database train.mdb and set the data source name to train; this connects the database to the application so the program can access the database normally.
Tags: Microsoft Access train ODBC
Upload time: 2013-12-16
Uploader: yangbo69
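Connecting to a named DSN like the one described above can be sketched in Python. This is a minimal sketch, not the uploaded program: the `build_conn_string` helper and the `employees` table are hypothetical, and the actual connection (shown in comments) would require the third-party pyodbc package plus a registered "train" DSN.

```python
# Sketch: building an ODBC connection string for the "train" DSN.
# build_conn_string is a hypothetical helper, not from the upload.

def build_conn_string(dsn: str, uid: str = "", pwd: str = "") -> str:
    """Assemble an ODBC connection string for a named data source."""
    parts = [f"DSN={dsn}"]
    if uid:
        parts.append(f"UID={uid}")
    if pwd:
        parts.append(f"PWD={pwd}")
    return ";".join(parts)

conn_str = build_conn_string("train")
print(conn_str)  # → DSN=train

# With pyodbc installed and the DSN registered, the connection would be:
#   import pyodbc
#   conn = pyodbc.connect(conn_str)
#   rows = conn.cursor().execute("SELECT * FROM employees").fetchall()
```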
Companion lab manual for the Tornado Train Workshop. It matches each chapter of the workshop, gives detailed step-by-step lab procedures, and includes the corresponding demo code.
Tags: workshop Tornado train experiment
Upload time: 2015-03-08
Uploader: 417313137
The purpose of this computer program is to allow the user to construct, train, and test different types of artificial neural networks. By implementing the concepts of templates, inheritance, and derived classes from C++ object-oriented programming, the need to declare multiple large structures and duplicate attributes is reduced. Using the dynamic binding and memory allocation afforded by C++, the user can choose to develop four separate types of neural networks:
Tags: different types construct computer purpose
Upload time: 2013-12-06
Uploader: 13517191407
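The base-class/derived-class design the entry above describes can be sketched briefly. The original is C++; this Python version only mirrors the idea of a shared base class with run-time dispatch, and every class name here is illustrative, not taken from the upload.

```python
# Sketch of the inheritance design: shared attributes live once in a
# base class, and each network type overrides train(). Class names
# (NeuralNetwork, Backprop, Kohonen) are made up for illustration.

class NeuralNetwork:
    """Base class holding the attributes all network types share."""
    def __init__(self, n_inputs: int, n_outputs: int):
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs

    def train(self, samples):
        raise NotImplementedError

class Backprop(NeuralNetwork):
    def train(self, samples):
        return f"backprop on {len(samples)} samples"

class Kohonen(NeuralNetwork):
    def train(self, samples):
        return f"kohonen on {len(samples)} samples"

# Dynamic binding: the concrete type is chosen at run time,
# so no duplicate large structures are declared per network type.
nets = [Backprop(2, 1), Kohonen(2, 1)]
for net in nets:
    print(net.train([(0, 0), (1, 1)]))
```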
Hidden Markov model (HMM) for MATLAB, covering three parts: initialization, training, and testing.
Tags: including initiate markov matlab
Upload time: 2015-11-10
Uploader: bruce
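Of the three stages named above, the testing/scoring step is the easiest to sketch: the forward algorithm computes the probability of an observation sequence under a given HMM. The parameters below are toy numbers, not from the upload, and the sketch is in Python rather than MATLAB.

```python
# Minimal forward algorithm for a discrete HMM: given initial
# distribution pi, transition matrix A, and emission matrix B,
# return P(observation sequence | model). Toy parameters only.

def forward(obs, pi, A, B):
    n = len(pi)
    # alpha[i] = P(obs[:t+1], state_t = i)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [
            B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(n))
            for j in range(n)
        ]
    return sum(alpha)

# Two hidden states, two observation symbols.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]   # transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]   # emission probabilities
p = forward([0, 1, 0], pi, A, B)
print(round(p, 4))  # → 0.1089
```

Training would wrap this in the Baum-Welch re-estimation loop, which is what the uploaded `train` part presumably implements.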
A back-propagation neural network written by the author. It can be trained on mutually exclusive problems such as XOR. It was built with BCB (C++ Builder) because the development environment is convenient; most of the code is written with class methods, so porting it to VC should not be much of an obstacle for anyone interested.
Tags: backpropagation train bcb xor
Upload time: 2013-12-30
Uploader: jeffery
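The XOR training task mentioned above can be sketched with a tiny back-propagation network. This is not the uploaded BCB/C++ code, only an illustration of the algorithm in Python; since plain back-prop can stall in a local minimum, the sketch keeps the best result over a few random restarts.

```python
# Pure-Python sketch: 2-input, 3-hidden, 1-output sigmoid network
# trained on XOR with on-line back-propagation. Hyperparameters
# (3 hidden units, lr=1.0, 4000 epochs) are arbitrary choices.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
H = 3  # hidden units

def train_xor(seed, epochs=4000, lr=1.0):
    """Train one network from a seeded random init; return final SSE."""
    rng = random.Random(seed)
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
    w_o = [rng.uniform(-1, 1) for _ in range(H + 1)]
    for _ in range(epochs):
        err = 0.0
        for x, t in DATA:
            h = [sigmoid(w[0]*x[0] + w[1]*x[1] + w[2]) for w in w_h]
            y = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + w_o[H])
            err += (t - y) ** 2
            d_o = (t - y) * y * (1 - y)                 # output delta
            for j in range(H):
                d_h = d_o * w_o[j] * h[j] * (1 - h[j])  # hidden delta
                for k in range(2):
                    w_h[j][k] += lr * d_h * x[k]
                w_h[j][2] += lr * d_h                   # hidden bias
            for j in range(H):
                w_o[j] += lr * d_o * h[j]
            w_o[H] += lr * d_o                          # output bias
    return err

best = min(train_xor(seed) for seed in range(5))
print(best)  # near zero when one of the restarts solves XOR
```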
Training code for a support vector machine.
Tags: support machine vector train
Upload time: 2016-05-11
Uploader: mpquest
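SVM training can be sketched in a few lines with sub-gradient descent on the regularized hinge loss. The upload's actual training method is unknown; this stand-in handles only the linear, two-feature case, and the learning rate and regularization constant are arbitrary.

```python
# Sketch: linear SVM trained by sub-gradient descent on
# lam*|w|^2 + hinge loss. Not the uploaded code; a generic stand-in.

def svm_train(points, labels, lr=0.1, lam=0.01, epochs=200):
    """Return weights (w1, w2) and bias b for labels in {-1, +1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:                        # inside margin: push out
                w[0] += lr * (y * x1 - 2 * lam * w[0])
                w[1] += lr * (y * x2 - 2 * lam * w[1])
                b += lr * y
            else:                                 # only the regularizer acts
                w[0] -= lr * 2 * lam * w[0]
                w[1] -= lr * 2 * lam * w[1]
    return w, b

# Linearly separable toy data.
pts = [(2, 2), (3, 3), (-2, -2), (-3, -3)]
ys = [1, 1, -1, -1]
w, b = svm_train(pts, ys)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for x1, x2 in pts]
print(preds)
```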
Training (train) of an artificial neural network. The standard BP algorithm suffers from local minima and slow convergence; this is an improved BP algorithm.
Tags: train artificial neural network BP algorithm
Upload time: 2016-06-21
Uploader: ls530720646
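Which improvement this upload uses is not stated; one classic fix for slow BP convergence is adding a momentum term to the weight updates. The sketch below shows the idea on a one-dimensional quadratic rather than a real network, with a deliberately small learning rate so the effect is visible.

```python
# Sketch: gradient descent with momentum, a common "improved BP"
# technique. Demonstrated on f(w) = (w - 3)^2, not the uploaded code.

def descend(momentum, lr=0.01, steps=200):
    """Minimize (w - 3)^2 from w = 0; momentum=0.0 gives plain descent."""
    w, v = 0.0, 0.0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)       # f'(w)
        v = momentum * v - lr * grad  # velocity accumulates past gradients
        w += v
    return w

plain = descend(momentum=0.0)
fast = descend(momentum=0.9)
print(abs(plain - 3.0), abs(fast - 3.0))  # momentum lands much closer
```

In a BP network the same `v` term is kept per weight, added to each delta-rule update.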
"BP.m" is the source program for the entire BP neural-network model; "train.fig" is the figure obtained at the end of training; "程序運行的人口數(shù)量原始數(shù)據(jù).fig" is the plot of the prediction results; "程序運行時matlab命令窗口的內(nèi)容.txt" records what the MATLAB command window displays while the program runs; "程序運行完產(chǎn)生的數(shù)據(jù).mat" holds the data produced when the program finishes. The .bmp files show the same content as the .fig files.
Upload time: 2013-12-08
Uploader: stewart
IDA Pro DLL analysis training material.
Tags: Analysis material train IDA
Upload time: 2013-12-28
Uploader: 變形金剛
% Train a two-layer neural network with the Levenberg-Marquardt method.
%
% If desired, it is possible to use regularization by weight decay.
% Pruned (i.e., not fully connected) networks can also be trained.
%
% Given a set of corresponding input-output pairs and an initial network,
%   [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms)
% trains the network with the Levenberg-Marquardt method.
%
% The activation functions can be either linear or tanh. The network
% architecture is defined by the matrix NetDef, which has two rows: the
% first row specifies the hidden layer and the second row the output layer.
Tags: Levenberg-Marquardt desired network neural
Upload time: 2016-12-27
Uploader: jcljkh
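The damped update at the heart of Levenberg-Marquardt can be illustrated on a one-parameter least-squares problem. This is not the `marq` routine itself: fitting y = exp(a*x), the data, and the lambda schedule below are all made up for illustration.

```python
# Sketch: one-parameter Levenberg-Marquardt fitting y = exp(a*x).
# Step = -(J^T r)/(J^T J + lambda); lambda shrinks on success
# (toward Gauss-Newton) and grows on failure (toward gradient descent).
import math

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
a_true = 0.5
ys = [math.exp(a_true * x) for x in xs]

def cost(a):
    return sum((y - math.exp(a * x)) ** 2 for x, y in zip(xs, ys))

a, lam = 0.0, 0.01
for _ in range(50):
    r = [y - math.exp(a * x) for x, y in zip(xs, ys)]  # residuals
    J = [-x * math.exp(a * x) for x in xs]             # d r_i / d a
    g = sum(Ji * ri for Ji, ri in zip(J, r))           # J^T r
    H = sum(Ji * Ji for Ji in J)                       # J^T J (scalar here)
    step = -g / (H + lam)                              # damped step
    if cost(a + step) < cost(a):
        a, lam = a + step, lam / 10                    # accept, less damping
    else:
        lam *= 10                                      # reject, more damping
print(a)  # converges to a_true = 0.5
```

In `marq` the same update is applied to the stacked weight matrices W1 and W2, with the Jacobian taken with respect to every network weight.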