programs_17b.m
% Chapter 17 - Neural Networks.
% Programs_17b - Generalized delta learning rule and backpropagation of errors
% for a multilayer network (Figure 17.8).
% Copyright Birkhauser 2004. Stephen Lynch.
function Programs_17b
% Load full Boston housing data.
load housing.txt
X = housing(:,1:13);
t = housing(:,14);
% Scale to zero mean, unit variance and introduce bias on input.
xmean = mean(X);
xstd = std(X);
X = (X-ones(size(X,1),1)*xmean)./(ones(size(X,1),1)*xstd);
X = [ones(size(X,1),1) X];
tmean = mean(t);
tstd = std(t);
t = (t-tmean)/tstd;
% Iterate over a number of hidden nodes.
maxHidden = 2;
for numHidden=1:maxHidden
    % Initialise random weight vector.
    % Wh are hidden weights, wo are output weights.
    randn('seed', 123456);
    Wh = 0.1*randn(size(X,2),numHidden);
    wo = 0.1*randn(numHidden+1,1);
    % Do numEpochs iterations of batch error back propagation.
    numEpochs = 2000;
    numPatterns = size(X,1);
    % Set eta.
    eta = 0.05/numPatterns;
    % Preallocate the error history.
    mse = zeros(1,numEpochs);
    for i=1:numEpochs
        % Calculate outputs, errors, and gradients.
        phi = [ones(size(X,1),1) tanh(X*Wh)];
        y = phi*wo;
        err = y-t;
        go = phi'*err;
        Gh = X'*((1-phi(:,2:numHidden+1).^2).*(err*wo(2:numHidden+1)'));
        % Perform gradient descent.
        wo = wo - eta*go;
        Wh = Wh - eta*Gh;
        % Update performance statistics.
        mse(i) = var(err);
    end
    plot(1:numEpochs, mse, '-')
    hold on
end
fsize=15;
set(gca,'xtick',[0:500:2000],'FontSize',fsize)
set(gca,'ytick',[0:0.5:1],'FontSize',fsize)
xlabel('Number of Epochs','FontSize',fsize)
ylabel('Mean Squared Error','FontSize',fsize)
hold off
% End of Programs_17b.
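% For readers without MATLAB, the same batch backpropagation update can be
% sketched in NumPy. This is an equivalent sketch, not Lynch's code: synthetic
% regression data stands in for housing.txt, and the variable names (X, t, Wh,
% wo, eta, mse) mirror the MATLAB script above. The plotting is omitted.

```python
import numpy as np

# Synthetic data standing in for the Boston housing set (13 inputs, 1 target).
rng = np.random.default_rng(123456)
X = rng.standard_normal((50, 13))
t = X @ rng.standard_normal(13)

# Standardise inputs and target, and prepend a bias column, as in the script.
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.hstack([np.ones((X.shape[0], 1)), X])
t = (t - t.mean()) / t.std()

num_hidden = 2
Wh = 0.1 * rng.standard_normal((X.shape[1], num_hidden))  # hidden-layer weights
wo = 0.1 * rng.standard_normal(num_hidden + 1)            # output weights (incl. bias)
eta = 0.05 / X.shape[0]                                   # learning rate

mse = []
for _ in range(2000):
    # Forward pass: tanh hidden layer with a bias unit, linear output.
    phi = np.hstack([np.ones((X.shape[0], 1)), np.tanh(X @ Wh)])
    err = phi @ wo - t
    # Generalized delta rule: output gradient, then backpropagated hidden gradient.
    go = phi.T @ err
    Gh = X.T @ ((1 - phi[:, 1:] ** 2) * np.outer(err, wo[1:]))
    # Batch gradient descent step.
    wo -= eta * go
    Wh -= eta * Gh
    mse.append(err.var())
```

% After 2000 epochs, mse[-1] should be well below mse[0], mirroring the
% decreasing error curves plotted by the MATLAB program.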