
Matrix-chain

  • A book systematically introducing the mixed-programming methods and techniques of MATLAB 7.0

    This book systematically introduces the mixed-programming methods and techniques of MATLAB 7.0. It is divided into 13 chapters. Chapters 1 and 2 cover MATLAB fundamentals; Chapter 3 gives a brief overview of MATLAB mixed programming; Chapters 4 to 9 each present a typical mixed-programming approach, including C-MEX, the MATLAB Engine, MAT data-file sharing, Mideva, Matrix and Add-in. Chapters 10 and 11 cover mixed programming of MATLAB with Delphi and with Excel. Chapter 12 introduces MATLAB COM Builder, and Chapter 13 walks through a comprehensive application example based on image processing. The book is organised around the concrete mixed-programming methods and is example-driven throughout; each chapter focuses on the essence and key points of one approach, interspersed with the authors' years of experience using MATLAB. It is suitable both for beginners studying on their own and for advanced MATLAB users, and can serve as a teaching reference for courses in advanced mathematics, computer science, electronic engineering, numerical analysis and information engineering, as well as a reference for researchers in those fields. The companion CD is thorough and rich in examples, containing the source files, functions/commands with annotations, and program examples for the MATLAB examples in the book. (A short, hedged cross-language calling sketch follows this entry.)

    Tags: MATLAB 7.0 mixed programming

    Upload time: 2013-12-24

    Uploader: 一諾88
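
    A hedged illustration, not taken from the book or its CD: one modern route to mixed programming is driving MATLAB from Python through the MATLAB Engine API for Python (which requires a local MATLAB installation). The sketch below only gives a flavour of cross-language MATLAB use; it is a swapped-in analogue of the kind of interoperability the book discusses, not one of the book's own examples.

        # Minimal sketch: call MATLAB from Python via the MATLAB Engine API.
        # Requires MATLAB plus its Engine API for Python; illustrative only.
        import matlab.engine

        eng = matlab.engine.start_matlab()                 # launch a MATLAB session
        print(eng.sqrt(16.0))                              # call a built-in MATLAB function -> 4.0
        eng.eval("disp('hello from MATLAB')", nargout=0)   # run a MATLAB command
        eng.quit()                                         # shut the session down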

  • The text file QMLE contains the quasi-maximum-likelihood estimation procedure and the Information Matrix test for a univariate GARCH(1,1) model

    The text file QMLE contains the quasi-maximum-likelihood estimation procedure and performs the Information Matrix test for a univariate GARCH(1,1) model. A short sketch of the quasi-log-likelihood is given after this entry.

    Tags: estimating likelihood performing the

    Upload time: 2014-11-22

    Uploader: zhenyushaw
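
    For orientation only, and not the uploaded QMLE file: a minimal NumPy sketch of the Gaussian quasi-log-likelihood of a GARCH(1,1) model, sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}, which a quasi-maximum-likelihood estimator maximises (here via SciPy). Initialisation and parameterisation are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def garch11_qll(params, eps):
            """Gaussian quasi-log-likelihood of residuals eps under GARCH(1,1)."""
            omega, alpha, beta = params
            T = eps.size
            sigma2 = np.empty(T)
            sigma2[0] = eps.var()                      # common initialisation choice
            for t in range(1, T):
                sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
            # Gaussian quasi-likelihood, additive constants dropped
            return -0.5 * np.sum(np.log(sigma2) + eps ** 2 / sigma2)

        # Example: estimate by minimising the negative quasi-log-likelihood
        rng = np.random.default_rng(0)
        eps = rng.standard_normal(500)
        res = minimize(lambda p: -garch11_qll(p, eps), x0=[0.1, 0.05, 0.9],
                       bounds=[(1e-6, None), (0.0, 1.0), (0.0, 1.0)])
        print(res.x)   # estimated (omega, alpha, beta)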

  • This toolbox was designed as a teaching aid, which MATLAB is particularly good for since source code is relatively legible and simple to modify

    This toolbox was designed as a teaching aid, which MATLAB is particularly good for since the source code is relatively legible and simple to modify. It is still reasonably fast if used with the supplied optimiser. If you really want to speed things up, consider compiling the matrix-composition routine for H into a MEX function. Then again, if you really need speed you probably shouldn't be using MATLAB anyway; get hold of a dedicated C program once you understand the algorithm.

    Tags: particularly designed teaching toolbox

    Upload time: 2016-11-25

    Uploader: hustfanenze

  • PRINCIPLE: The UVE algorithm detects and eliminates from a PLS model (with 1 to A components) the variables that carry no relevant information for modelling Y

    PRINCIPLE: The UVE algorithm detects and eliminates from a PLS model (including from 1 to A components) those variables that do not carry any relevant information for modelling Y. The criterion used to trace the uninformative variables is the reliability of the regression coefficients, c_j = mean(b_j)/std(b_j), obtained by jackknifing. The cutoff level below which c_j is considered too small, indicating that variable j should be removed, is estimated using a matrix of random variables. The predictive power of PLS models built on the retained variables only is then evaluated over all 1 to A dimensions, yielding RMSECVnew. A hedged sketch of the reliability criterion follows this entry.

    Tags: from eliminates PRINCIPLE algorithm

    Upload time: 2016-11-27

    Uploader: 凌云御清風
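
    A hedged sketch of the UVE idea described above, not the uploaded code: random noise variables (scaled to be negligible) are appended to X, the PLS regression coefficients are jackknifed by leave-one-out, c_j = mean(b_j)/std(b_j) is computed for every column, and real variables are kept only if |c_j| exceeds the largest |c_j| seen among the noise variables. The function name uve_select and the use of scikit-learn's PLSRegression are illustrative choices.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def uve_select(X, y, n_components=2, rng=None):
            """Return indices of variables retained by a simple UVE-style filter."""
            rng = np.random.default_rng(rng)
            n, p = X.shape
            noise = rng.standard_normal((n, p)) * 1e-10 * X.std()   # "uninformative" block
            Xa = np.hstack([X, noise])
            B = np.empty((n, 2 * p))                 # jackknifed coefficient vectors
            for i in range(n):                       # leave-one-out (jackknife)
                keep = np.arange(n) != i
                pls = PLSRegression(n_components=n_components).fit(Xa[keep], y[keep])
                B[i] = np.ravel(pls.coef_)
            c = B.mean(axis=0) / B.std(axis=0)       # reliability of each coefficient
            cutoff = np.abs(c[p:]).max()             # largest |c| among the noise variables
            return np.where(np.abs(c[:p]) > cutoff)[0]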

  • Batch version of the back-propagation algorithm

    Batch version of the back-propagation algorithm. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iter]=batbp(NetDef,W1,W2,PHI,Y,trparms) trains the network with back-propagation. The activation functions must be either linear or tanh. The network architecture is defined by the matrix NetDef, consisting of two rows: the first row specifies the hidden layer and the second specifies the output layer. A minimal sketch of one batch gradient step is given after this entry.

    Tags: back-propagation corresponding input-output algorithm

    Upload time: 2016-12-27

    Uploader: exxxds
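
    A minimal sketch of one full-batch gradient step for the architecture described above (tanh hidden layer, linear output). The names W1, W2, PHI and Y mirror the entry, but this is not the uploaded batbp routine; the bias handling and data layout are assumptions made for illustration.

        import numpy as np

        def batch_backprop_step(W1, W2, PHI, Y, lr=0.01):
            """One full-batch gradient-descent update. PHI: (inputs, N), Y: (outputs, N)."""
            N = PHI.shape[1]
            PHI1 = np.vstack([PHI, np.ones((1, N))])   # append a bias row to the inputs
            H = np.tanh(W1 @ PHI1)                     # hidden-layer activations
            H1 = np.vstack([H, np.ones((1, N))])       # append a bias row to the hidden layer
            Yhat = W2 @ H1                             # linear output layer
            E = Y - Yhat                               # output error
            dW2 = E @ H1.T / N                         # output-layer gradient (descent direction)
            # back-propagate through tanh: derivative is 1 - tanh^2
            delta = (W2[:, :-1].T @ E) * (1.0 - H ** 2)
            dW1 = delta @ PHI1.T / N
            return W1 + lr * dW1, W2 + lr * dW2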

  • Train a two-layer neural network with the Levenberg-Marquardt method

    Train a two-layer neural network with the Levenberg-Marquardt method. If desired, it is possible to use regularization by weight decay. Pruned (i.e. not fully connected) networks can also be trained. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms) trains the network with the Levenberg-Marquardt method. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second row specifies the output layer. A sketch of the Levenberg-Marquardt step follows this entry.

    Tags: Levenberg-Marquardt desired network neural

    Upload time: 2016-12-27

    Uploader: jcljkh
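
    For illustration only, not the marq implementation: the core of a Levenberg-Marquardt step solves (J^T J + lambda*I) dw = J^T e, where e is the residual vector and J its Jacobian with respect to the (flattened) weights.

        import numpy as np

        def lm_step(J, e, lam):
            """Return the weight update dw for residuals e with Jacobian J."""
            A = J.T @ J + lam * np.eye(J.shape[1])   # damped Gauss-Newton normal matrix
            return np.linalg.solve(A, J.T @ e)

        # Typical usage inside a training loop (outline only):
        #   compute e = Y - net(W, PHI) and J = d e / d W
        #   dw = lm_step(J, e, lam)
        #   accept the step and decrease lam if the criterion improves, else increase lam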

  • Train a two-layer neural network with a recursive prediction error algorithm ("recursive Gauss-Newton")

    Train a two-layer neural network with a recursive prediction error algorithm ("recursive Gauss-Newton"). Pruned (i.e., not fully connected) networks can also be trained. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second specifies the output layer. An illustrative recursive update is sketched after this entry.

    Tags: recursive prediction algorithm Gauss-Newton

    Upload time: 2016-12-27

    Uploader: ljt101007
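
    An illustrative recursive prediction-error (recursive Gauss-Newton) update, processing one sample at a time with a forgetting factor; psi denotes the gradient of the one-step prediction with respect to the parameter vector theta. This is a generic textbook-style update, not the uploaded routine.

        import numpy as np

        def rpe_update(theta, P, psi, err, lam=0.99):
            """One recursive Gauss-Newton step with forgetting factor lam."""
            Ppsi = P @ psi
            gain = Ppsi / (lam + psi @ Ppsi)          # Kalman-style gain vector
            theta = theta + gain * err                # move parameters toward smaller error
            P = (P - np.outer(gain, Ppsi)) / lam      # update of the (scaled) inverse Hessian
            return theta, P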

  • Mapack, a .NET class library for basic linear algebra and matrix computations

    Mapack can be used for matrix computations. It is a .NET class library for basic linear algebra that supports the following matrix operations and properties: Multiplication, Addition, Subtraction, Determinant, Norm1, Norm2, Frobenius Norm, Infinity Norm, Rank, Condition, Trace, Cholesky, LU, QR, Singular Value decomposition, Least Squares solver, Eigenproblem solver, Equation System solver. The algorithms were adapted from Mapack for COM, Lapack and the Java Matrix Package. NumPy/SciPy equivalents of several of these operations are listed after this entry for comparison.

    Tags: Mapack computations supports algebra

    Upload time: 2017-01-26

    Uploader: tb_6877751
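
    For comparison only: NumPy/SciPy calls that perform several of the operations Mapack exposes. These are the analogous Python calls, not Mapack's .NET API, and the matrix is an arbitrary example.

        import numpy as np
        from scipy.linalg import lu

        A = np.array([[4.0, 2.0], [2.0, 3.0]])
        b = np.array([1.0, 2.0])

        det    = np.linalg.det(A)                     # determinant
        rank   = np.linalg.matrix_rank(A)             # rank
        cond   = np.linalg.cond(A)                    # condition number
        tr     = np.trace(A)                          # trace
        L      = np.linalg.cholesky(A)                # Cholesky factor (A = L L^T)
        P, Ll, U = lu(A)                              # LU decomposition with pivoting
        Q, R   = np.linalg.qr(A)                      # QR decomposition
        U2, s, Vt = np.linalg.svd(A)                  # singular value decomposition
        w, V   = np.linalg.eig(A)                     # eigenproblem solver
        x      = np.linalg.solve(A, b)                # equation-system solver
        xls, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solver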

  • Software to simulate a space-time code [1] for QPSK modulation using different numbers of states

    The software can simulate the space-time code of [1] for QPSK modulation using different numbers of states. Example generator matrices for up to 256 states are provided. A variable signal-to-noise ratio (SNR) can be applied to produce bit error rate (BER) or frame error rate (FER) curves. A simplified uncoded-QPSK BER sketch follows this entry.

    Tags: modulation different software simulate

    Upload time: 2014-01-22

    Uploader: qq1604324866
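
    A deliberately simplified baseline, not the space-time code simulator itself: uncoded QPSK over an AWGN channel, sweeping SNR and estimating BER, to show the shape of a BER-versus-SNR experiment like the one this package produces.

        import numpy as np

        def qpsk_ber(snr_db, n_bits=200_000, rng=None):
            """Monte-Carlo BER estimate for Gray-mapped QPSK over AWGN."""
            rng = np.random.default_rng(rng)
            bits = rng.integers(0, 2, n_bits)
            # pairs of bits -> unit-energy complex symbol
            sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
            snr = 10 ** (snr_db / 10)                  # Es/N0 (2 bits per symbol)
            noise_std = np.sqrt(1 / (2 * snr))
            r = sym + noise_std * (rng.standard_normal(sym.size)
                                   + 1j * rng.standard_normal(sym.size))
            bhat = np.empty(n_bits, dtype=int)
            bhat[0::2] = (r.real < 0).astype(int)      # decide each bit by its sign
            bhat[1::2] = (r.imag < 0).astype(int)
            return np.mean(bhat != bits)

        for snr_db in range(0, 11, 2):                 # sweep SNR to trace a BER curve
            print(snr_db, qpsk_ber(snr_db))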

  • SuperLU is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations

    SuperLU is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations on high performance machines. The library is written in C and is callable from either C or Fortran. The library routines will perform an LU decomposition with partial pivoting and triangular system solves through forward and back substitution. The LU factorization routines can handle non-square matrices, but the triangular solves are performed only for square matrices. The matrix columns may be preordered (before factorization) either through library or user supplied routines. This preordering for sparsity is completely separate from the factorization. Working precision iterative refinement subroutines are provided for improved backward stability. Routines are also provided to equilibrate the system, estimate the condition number, calculate the relative backward error, and estimate error bounds for the refined solutions. A short example using SciPy's SuperLU wrapper follows this entry.

    Tags: nonsymmetric solution SuperLU general

    Upload time: 2017-02-20

    Uploader: lepoke
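
    SciPy's sparse LU factorisation (scipy.sparse.linalg.splu) wraps SuperLU, so the library's core factorise-then-solve workflow can be tried from Python without writing C. The matrix below is an arbitrary example.

        import numpy as np
        from scipy.sparse import csc_matrix
        from scipy.sparse.linalg import splu

        A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                                 [1.0, 3.0, 0.5],
                                 [0.0, 0.5, 2.0]]))
        lu = splu(A)                               # LU factorisation with partial pivoting (SuperLU)
        x = lu.solve(np.array([1.0, 2.0, 3.0]))    # forward/back substitution
        print(np.allclose(A @ x, [1.0, 2.0, 3.0]))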
