This directory contains three file system modules:
- ISO9660 file system: iso9660.c, iso9660.h
- FAT12/16 file system: fat.c, fat.h
- FAT32 file system: fat32.c, fat32.h
file.c and file.h contain all the high-level functions, macros and definitions for your application. fs_variable.c contains the definitions of the variables shared by all the file systems. config.h must contain the definition of the file system used by your application.
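The selection in config.h amounts to a compile-time switch. The macro names below are illustrative assumptions only, not the library's actual symbols; copy the real ones from the shipped config.h:

```c
/* config.h -- illustrative sketch only; the real macro names are
 * defined by the library, so take them from the shipped config.h. */

/* Select exactly one file system module to compile in: */
#define FS_FAT32          /* fat32.c / fat32.h       */
/* #define FS_FAT     */  /* fat.c / fat.h           */
/* #define FS_ISO9660 */  /* iso9660.c / iso9660.h   */
```

The application then includes only file.h and uses its high-level API; the chosen macro decides which low-level module the calls are routed to.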
Tags: 9660 file system directory
Upload date: 2015-03-15
Uploader: 腳趾頭
Adaptive neural network source code. The Adaptive Neural Network Library is a collection of blocks that implement several adaptive neural networks featuring different adaptation algorithms. There are 11 blocks that implement basically these 5 kinds of neural networks:
1) Adaptive Linear Network (ADALINE)
2) Multilayer Perceptron with Extended Backpropagation algorithm (EBPA)
3) Radial Basis Function (RBF) Networks
4) RBF Networks with Extended Minimal Resource Allocating algorithm (EMRAN)
5) RBF and Piecewise Linear Networks with Dynamic Cell Structure (DCS) algorithm
A Simulink example regarding the approximation of a scalar nonlinear function of 4 variables is included.
Tags: collection implement Adaptive adaptive
Upload date: 2015-04-09
Uploader: ywqaxiwang
The Adaptive Neural Network Library is a collection of blocks that implement several adaptive neural networks featuring different adaptation algorithms. There are 11 blocks that implement basically these 5 kinds of neural networks:
1) Adaptive Linear Network (ADALINE)
2) Multilayer Perceptron with Extended Backpropagation algorithm (EBPA)
3) Radial Basis Function (RBF) Networks
4) RBF Networks with Extended Minimal Resource Allocating algorithm (EMRAN)
5) RBF and Piecewise Linear Networks with Dynamic Cell Structure (DCS) algorithm
A Simulink example regarding the approximation of a scalar nonlinear function of 4 variables is included.
Tags: Neural collection implement Adaptive
Upload date: 2013-12-23
Uploader: teddysha
% [BestPop,Trace]=fmaxga(FUN,LB,UB,eranum,popsize,pcross,pmutation)
% Finds a maximum of a function of several variables.
% fmaxga solves problems of the form:
%   max F(X) subject to: LB <= X <= UB
% BestPop--------the best population, i.e. the best set of chromosomes
% Trace----------objective-function values of the best chromosome
% FUN------------objective function
% LB-------------lower bounds of the variables
% UB-------------upper bounds of the variables
% eranum---------number of generations, 100--1000 (default 1000)
% popsize--------population size per generation, 50--100 (default 50)
% pcross---------crossover probability, usually best between 0.5--0.85 (default 0.8)
% pmutation------mutation probability, usually best around 0.05--0.2 (default 0.1)
% options--------1x2 matrix; options(1)=0 binary encoding (default 0), options(1)~=0 decimal encoding; options(2) sets the solution tolerance (default 1e-4)
Tags: pmutation BestPop popsize maximum
Upload date: 2015-07-16
Uploader: Altman
CCALC provides a convenient way to perform calculations. You can use standard infix notation for expressions and store results in variables.
Tags: calculations convenient performing provides
Upload date: 2015-08-18
Uploader: dave520l
This lab exercise covers the use of AccelDSP's design-exploration capabilities, including mapping variables to memory and unrolling loop and vector operations. You will learn how to create different hardware architectures without modifying the MATLAB source, in order to explore different area/performance tradeoffs.
Tags: capabilities exploration AccelDSP exercise
Upload date: 2014-12-22
Uploader: eclipse
We propose a novel approach for head tracking, which combines particle filters with Isomap. The particle filter works on the low-dimensional embedding of the training images. It indexes into the Isomap with its state variables to find the closest template for each particle. The most weighted particle approximates the location of the head. We develop a synthetic video sequence to test our technique. The results show that the tracker follows the head through changes in position, pose and lighting conditions.
Tags: approach combines particle tracking
Upload date: 2016-01-02
Uploader: yy541071797
8051 Web Server project. See the Makefile for build notes. Written for the Keil C51 V5.1 compiler. Notes: the 8051 uses big-endian byte order, which is the same as the network byte order, unlike x86 systems. Use OPTIMIZE(2) or higher so that automatic variables get shared between functions, to stay within the 256-byte idata space.
Tags: notes for Makefile compiler
Upload date: 2013-12-17
Uploader: ikemada
This demonstrates the use of the reversible jump MCMC algorithm for neural networks. It uses a hierarchical full Bayesian model for neural networks. This model treats the model dimension (number of neurons), model parameters, regularisation parameters and noise parameters as random variables that need to be estimated. The derivations and proof of geometric convergence are presented in detail in: Christophe Andrieu, Nando de Freitas and Arnaud Doucet, "Robust Full Bayesian Learning for Neural Networks", Technical Report CUED/F-INFENG/TR 343, Cambridge University Department of Engineering, May 1999. After downloading the file, type "tar -xf rjMCMC.tar" to uncompress it. This creates the directory rjMCMC containing the required m-files. Go to this directory, start matlab5 and type "rjdemo1". In the header of the demo file, one can choose to monitor the simulation progress (with par.doPlot=1) and modify the simulation parameters.
Tags: reversible algorithm the demonstrates
Upload date: 2014-01-08
Uploader: cuibaigao
4. Write a program that sorts three integers. The integers are entered from the console and stored in variables num1, num2 and num3, respectively. The program sorts the numbers so that num1 <= num2 <= num3.
Tags: integers program entered console
Upload date: 2016-05-05
Uploader: 龍飛艇
Copyright 蟲(chóng)蟲(chóng)下載站. All rights reserved. 京ICP備2021023401號(hào)-1