gradff_snn.m
function de_dx = gradff_snn(X, net, data)
%GRADFF_SNN Derivative of cost function with respect to weights and
% biases.
%
% Syntax
%
%   dE_dX = gradff_snn(net, data)
%   dE_dX = gradff_snn(X, net, data)
%
% Description
%
%   GRADFF_SNN takes
%     net   - a net_struct
%     data  - the data for net.costFcn.name
%     X     - a vector containing connection weights and biases
%             (optional; by default these are taken from net)
%   and returns
%     dE_dX - the gradient of the cost function E with respect to X
%
% See also
%
%   GETX_SNN

%#function lintf_snn exptf_snn logsigtf_snn radbastf_snn tansigtf_snn
%#function dlintf_snn dexptf_snn dlogsigtf_snn dradbastf_snn dtansigtf_snn
%#function wcf_snn dwcf_snn

if (nargin == 2)
    % gradff_snn(net, data): the two arguments arrive in X and net
    de_dx = gradff_all(X, net);
elseif (nargin == 3)
    % gradff_snn(X, net, data): load the weights in X into net first
    net = setx_snn(net, X);
    de_dx = gradff_all(net, data);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function de_dx = gradff_all(net, data)

M = net.numLayers;

% Forward pass: net inputs N{m} and activations V{m} for each layer
N{1} = net.weights{1}*data.P + repmat(net.biases{1}, 1, size(data.P,2));
V{1} = feval(net.transferFcn{1}, N{1});
if (M > 1)
    for m = 2:M
        N{m} = net.weights{m}*V{m-1} + repmat(net.biases{m}, 1, size(data.P,2));
        V{m} = feval(net.transferFcn{m}, N{m});
    end
end

% Backward pass: output-layer sensitivity, then backpropagate through
% the transfer-function derivatives and transposed weight matrices
delta{M} = feval(feval(net.transferFcn{M}, 'deriv'), N{M}, V{M}) .* ...
    feval(feval(net.costFcn.name, 'deriv'), net, data, V{M});
for m = (M-1):-1:1
    delta{m} = feval(feval(net.transferFcn{m}, 'deriv'), N{m}, V{m}) .* ...
        (net.weights{m+1}'*delta{m+1});
end

% Gradients: biases sum the deltas over patterns; weights are the
% outer product of deltas with the previous layer's activations
de_db{1} = sum(delta{1},2);
de_dw{1} = delta{1}*data.P';
if (M > 1)
    for m = 2:M
        de_db{m} = sum(delta{m},2);
        de_dw{m} = delta{m}*(V{m-1}');
    end
end

% Pack the gradients into one column vector, layer by layer
% (weights first, then biases), matching the ordering of GETX_SNN
ind_1 = 1;
for m = 1:M
    [rows, columns] = size(de_dw{m});
    ind_2 = ind_1 + rows*columns;
    de_dx(ind_1:(ind_2-1),1) = reshape(de_dw{m}, rows*columns, 1);
    ind_1 = ind_2 + rows;
    de_dx(ind_2:(ind_1-1),1) = de_db{m};
end
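For reference, a minimal usage sketch follows. It is hedged, not part of the original file: the net_struct field layout is inferred from how gradff_snn reads it above, and the helper names (logsigtf_snn, lintf_snn, wcf_snn, getx_snn) come from the %#function pragmas and the See-also line, so the sketch assumes those snn toolbox functions are on the MATLAB path and that data carries whatever fields net.costFcn.name expects.

% Usage sketch (assumptions noted above): a 4-3-2 feedforward net
net.numLayers    = 2;
net.weights      = {randn(3,4), randn(2,3)};      % weight matrix per layer
net.biases       = {randn(3,1), randn(2,1)};      % bias vector per layer
net.transferFcn  = {'logsigtf_snn', 'lintf_snn'}; % transfer function per layer
net.costFcn.name = 'wcf_snn';                     % cost function (assumed choice)

data.P = randn(4,10);   % 10 input patterns as columns
% data must also hold the fields that wcf_snn expects, e.g. target outputs

dE_dX = gradff_snn(net, data);      % gradient at the net's current weights

% Equivalent call at an explicit point X in weight space:
X = getx_snn(net);                  % pack current weights/biases into X
dE_dX = gradff_snn(X, net, data);

Note that the vector returned by gradff_snn is packed in the same layer-by-layer order (weights, then biases) as getx_snn produces, so the two can be used together by gradient-based optimizers.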