<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
  <meta http-equiv="Content-Type"
 content="text/html; charset=iso-8859-1">
  <meta name="GENERATOR"
 content="Mozilla/4.04 [en] (X11; I; Linux 2.0.30 i686) [Netscape]">
  <title>Perceptron</title>
</head>
<body alink="#ff0000" bgcolor="#ffffff" link="#0000ee" text="#000000"
 vlink="#551a8b">
<h2>
Supervised learning in a single-layer neural network</h2>
Let's consider a single-layer neural network with <i>b</i> inputs and <i>c</i>
outputs:
<center><img src="theory_dateien/layer.gif" nosave="" height="141"
 width="361"></center>
<center>&nbsp;</center>
<ul>
  <li>
    <i>W</i><sub>ij</sub> = weight from input i to unit j in output
layer;
W<i><sub>j </sub></i>is the vector of all the weights of the j-th
neuron
in the output layer.</li>
  <li>
    <i>I</i><sup>p</sup> = input vector (pattern p) = (<i>I</i><sub>1</sub><sup>p</sup>,
    <i>I</i><sub>2</sub><sup>p</sup>, ..., <i>I</i><sub>b</sub><sup>p</sup>).</li>
  <li>
    <i>T</i><sup>p</sup> = target output vector (pattern p) = (<i>T</i><sub>1</sub><sup>p</sup>,
    <i>T</i><sub>2</sub><sup>p</sup>, ..., <i>T</i><sub>c</sub><sup>p</sup>).</li>
  <li>
    <i>A</i><sup>p</sup> = Actual output vector (pattern p) = (<i>A</i><sub>1</sub><sup>p</sup>,
    <i>A</i><sub>2</sub><sup>p</sup>, ..., <i>A</i><sub>c</sub><sup>p</sup>).</li>
  <li>
    <i>g()</i> = sigmoid activation function: <i>g(a )</i> = [1 + exp
(-<i>a</i>)]<sup>-1</sup></li>
</ul><hr width="100%"><h3> <a name="Supervised_learning"></a>Supervised learning</h3> We have seen that different weights of a neural network produce
different functions of the input. To train a network, we can present some sample inputs and compare the actual output to the desired results.&nbsp; The
difference is called the <b>error</b>.
<center><img src="theory_dateien/learning.gif"
 alt="[an error term is computed and fed back]" nosave="" height="164"
 width="251"></center>
The different learning rules tell us which way to adjust the weights to
reduce this error.&nbsp; We say that training has converged when this
error
reaches some small, acceptable level.
<p>Often the learning rule takes the following form:
<br>
&nbsp;&nbsp; <i>W<sub>ij&nbsp;</sub></i> <i>(t+1)</i> = <i>W<sub>ij&nbsp;</sub></i>
<i>(t) + eta&nbsp; . err (p)</i>
<br>
where 0<i> &lt;= eta &lt; </i>1 is a parameter that controls the
learning
rate, and <i>err(p)</i> is the error when input pattern <i>p</i> is
presented.
</p>
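<p>As a concrete illustration of the notation above, here is a minimal Python sketch
(not part of the original applet) of the forward pass of such a single-layer network:
the <i>b</i> inputs, the weight matrix <i>W</i>, the sigmoid <i>g</i>, and the error
between actual and target outputs. The variable names are illustrative choices, not
names used by the applet.</p>
<pre>
import numpy as np

def g(a):
    # sigmoid activation: g(a) = 1 / (1 + exp(-a))
    return 1.0 / (1.0 + np.exp(-a))

b, c = 3, 2                                      # b inputs, c output units
W = np.random.uniform(-0.5, 0.5, size=(b, c))    # W[i, j]: weight from input i to unit j

I_p = np.array([1.0, 0.0, 1.0])      # input vector I^p
T_p = np.array([1.0, 0.0])           # target output vector T^p

a = I_p @ W                          # activations a_j = sum_i W_ij * I_i^p
A_p = g(a)                           # actual output vector A^p
error = T_p - A_p                    # the "error" that the learning rules reduce
</pre>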
<p><a
 href="http://diwww.epfl.ch/mantra/tutorial/english/apb/html/index.html">[Back
to the Adaline/Perceptron/Backprop applet page]</a>
</p>
<hr width="100%"><h3><a name="Adaline"></a>Adaline learning</h3>
ADALINE is an acronym for ADAptive LINear Element (or ADAptive LInear
NEuron).&nbsp;
It was developed by Bernard Widrow and Marcian Hoff (1960).
<p>The adaline learning rule (also known as the least-mean-squares
rule,
the delta rule, and the Widrow-Hoff rule) is a training rule that
minimises
the output error using (approximate) gradient descent. After each
training
pattern <i>I<sup>p</sup></i>&nbsp; is presented, the correction to
apply
to the weights is proportional to the error.&nbsp; The correction is
calculated
<i>before</i> the thresholding step, using <i>err<sub>j</sub></i> <i>(p)</i> = <i>T<sub>j</sub><sup>p</sup></i> - <i>W<sub>j</sub>
<sup>.</sup> I<sup>p</sup></i>:
</p>
<center><img src="theory_dateien/adaline.gif"
 alt="error=(inner product - target value)" nosave="" height="113"
 width="198"></center>
&nbsp;
<p>Thus, the weights are adjusted by
</p>
<p>&nbsp;&nbsp;&nbsp; <i>W<sub>ij</sub>&nbsp; (t+1) = W<sub>ij</sub>&nbsp;
(t) + eta&nbsp; (T<sub>j</sub><sup>p</sup> - W<sub>j</sub> <sup>.</sup> I<sup>p</sup>)</i>&nbsp;
<i>I<sub>i</sub><sup>p</sup></i>
<br>
This corresponds to gradient descent on the quadratic error surface,
<i>E<sub>j</sub></i> = Sum<i><sub>p</sub></i> [<i>T<sub>j</sub><sup>p</sup></i> - <i>W<sub>j</sub>
<sup>.</sup> I<sup>p</sup></i>] <sup>2</sup>
</p>
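<p>The following is a minimal Python sketch (not part of the original applet) of one
on-line adaline/LMS update, following the formulas above. The function and variable
names are illustrative choices.</p>
<pre>
import numpy as np

def adaline_step(W, I_p, T_p, eta):
    # W has shape (b, c): W[i, j] is the weight from input i to output unit j
    # I_p has shape (b,), T_p has shape (c,)
    a = I_p @ W                          # a_j = W_j . I^p, before thresholding
    err = T_p - a                        # err_j(p) = T_j^p - W_j . I^p
    # W_ij(t+1) = W_ij(t) + eta * err_j(p) * I_i^p
    return W + eta * np.outer(I_p, err)

# example: one update with b = 3 inputs and c = 2 output units
W = np.zeros((3, 2))
W = adaline_step(W, np.array([1.0, 0.0, 1.0]), np.array([1.0, -1.0]), eta=0.1)
</pre>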
<p><a
 href="http://diwww.epfl.ch/mantra/tutorial/english/apb/html/index.html">[Back
to the Adaline/Perceptron/Backprop applet page]</a>
</p>
<hr width="100%"><h3><a name="Perceptron"></a>Perceptron learning</h3>
In perceptron learning, the weights are adjusted <b>only when a
pattern
is misclassified</b>.&nbsp;&nbsp;&nbsp; The correction to the weights
after
applying the training pattern <i>p</i> is
<br>
&nbsp;&nbsp;&nbsp; <i>W<sub>ij</sub>&nbsp; (t+1) = W<sub>ij</sub>&nbsp;
(t)&nbsp; + eta (T<sub>j</sub><sup>p </sup>- A<sub>j</sub><sup>p</sup>)</i>&nbsp; <i>I<sub>i</sub><sup>p</sup></i>
<br>
This corresponds to gradient descent on the perceptron error surface&nbsp; E(<i>W<sub>j</sub>
</i>) = - Sum<sub>misclassified p</sub> <i>T<sub>j</sub><sup>p</sup></i> (<i>W<sub>j</sub> <sup>.</sup> I<sup>p</sup></i>).
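<p>A minimal Python sketch (not part of the original applet) of one perceptron update,
assuming bipolar targets in {-1, +1} and a sign threshold on the output; the names are
illustrative choices.</p>
<pre>
import numpy as np

def perceptron_step(W, I_p, T_p, eta):
    # W has shape (b, c); I_p has shape (b,); T_p has entries in {-1, +1}
    A_p = np.sign(I_p @ W)               # thresholded actual output A_j^p
    # the factor (T_j^p - A_j^p) is zero for correctly classified outputs,
    # so only misclassified patterns change the weights
    return W + eta * np.outer(I_p, T_p - A_p)
</pre>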
<p><a
 href="http://diwww.epfl.ch/mantra/tutorial/english/apb/html/index.html">[Back
to the Adaline/Perceptron/Backprop applet page]</a></p>
<p></p>
<hr width="100%"><hr width="100%"><h3><a name="Pocket"></a>Pocket algorithm</h3>
The perceptron learning algorithm does not terminate if the learning
set
is not linearly separable.&nbsp; In many real-world cases,
however,&nbsp;
we want to find the "best" linear separation even when the learning
sets
are not ideal. The pocket algorithm is a modification of the perceptron
rule proposed by S. I. Gallant (1990). It keeps running ordinary perceptron
learning, but stores the best weight vector found so far in a "pocket".&nbsp; The
pocket weights are replaced only when a better weight vector is found.
<br>
&nbsp;
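<p>A minimal Python sketch (not from Gallant's paper or the original applet) of the
pocket idea for a single output unit: run ordinary perceptron updates, but keep a copy
of the best weight vector seen so far. Scoring by the number of correctly classified
patterns is one common choice; all names are illustrative.</p>
<pre>
import numpy as np

def n_correct(w, X, T):
    # number of training patterns classified correctly by weight vector w
    return int(np.sum(np.sign(X @ w) == T))

def pocket(X, T, eta=0.1, epochs=100):
    # X: (n_patterns, b) input vectors; T: (n_patterns,) targets in {-1, +1}
    w = np.zeros(X.shape[1])
    pocket_w, pocket_score = w.copy(), n_correct(w, X, T)
    for _ in range(epochs):
        for I_p, T_p in zip(X, T):
            if np.sign(I_p @ w) != T_p:          # perceptron: update only on mistakes
                w = w + eta * T_p * I_p
                score = n_correct(w, X, T)
                if score > pocket_score:         # keep the best vector in the "pocket"
                    pocket_w, pocket_score = w.copy(), score
    return pocket_w
</pre>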
<p><a
 href="http://diwww.epfl.ch/mantra/tutorial/english/apb/html/index.html">[Back
to the Adaline/Perceptron/Backprop applet page]</a>
</p><hr width="100%"><h3><a name="Backpropagation"></a>Backpropagation</h3>
The backpropagation algorithm was developed for training multilayer
perceptron
networks. In this applet, we will study how it works for a single-layer
network.&nbsp; It was popularized by Rumelhart, Hinton and Williams
(1986),
although similar ideas had been developed previously by others (Werbos,
1974; Parker, 1985).&nbsp; The idea is to train a network by
propagating
the output errors backward through the layers. The errors serve to
evaluate
the derivatives of the error function with respect to the weights,
which
can then be adjusted.
<p>The backpropagation algorithm for a single-layer network using the
sum-of-squares
error function consists of two phases:
</p>
<ol>
  <li>
    <b>Feedforward</b> - apply an input; evaluate the activations <i>a<sub>j</sub>
    </i>and store the error <i>delta<sub>j </sub></i>at each node <i>j</i></li>
  <br>
&nbsp;&nbsp;&nbsp; <i>a<sub>j</sub> </i>= <i>Sum <sub>i</sub>(W<sub>ij</sub>&nbsp;
(t)&nbsp;&nbsp; I<sup>p</sup><sub>i</sub>)</i>
  <br>
  <i>&nbsp;&nbsp;&nbsp; A<sup>p</sup><sub>j&nbsp;</sub> = g (a<sub>j</sub>
  </i>)
  <br>
&nbsp;&nbsp;&nbsp; <i>delta<sub>j&nbsp;</sub> = A<sup>p</sup><sub>j&nbsp;</sub>
- T<sup>p</sup><sub>j</sub></i>
  <br>
&nbsp;
  <li><b>Backpropagation</b> - compute the adjustments and update the
weights.&nbsp;
Since there is just one layer, the output layer, we compute</li>
  <br>
&nbsp;&nbsp;&nbsp;&nbsp; <i>W<sub>ij</sub>&nbsp; (t+1) = W<sub>ij</sub>&nbsp;
(t) - eta&nbsp; delta<sub>j&nbsp;</sub> I<sup>p</sup><sub>i</sub></i>
  <br>
(This is called "on-line" learning, because the weights are adjusted
each time a new input is presented.&nbsp; In "batch" learning, the
weights
are adjusted after summing over all the patterns in the training set.)
</ol>
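<p>A minimal Python sketch (not part of the original applet) of one on-line
backpropagation step for the single-layer case above, using the sigmoid <i>g</i>;
the names are illustrative choices.</p>
<pre>
import numpy as np

def g(a):
    # sigmoid activation
    return 1.0 / (1.0 + np.exp(-a))

def backprop_step(W, I_p, T_p, eta):
    # feedforward: a_j = sum_i W_ij * I_i^p,  A_j^p = g(a_j)
    a = I_p @ W
    A_p = g(a)
    delta = A_p - T_p                    # delta_j = A_j^p - T_j^p
    # on-line update: W_ij(t+1) = W_ij(t) - eta * delta_j * I_i^p
    return W - eta * np.outer(I_p, delta)
</pre>
<p>A strict gradient-descent step on the sum-of-squares error would also multiply
<code>delta</code> by the sigmoid derivative <code>g(a) * (1 - g(a))</code>; the
simpler form above matches the update written in step 2.</p>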
<a
 href="http://diwww.epfl.ch/mantra/tutorial/english/apb/html/index.html">[Back
to the Adaline/Perceptron/Backprop applet page]</a><hr width="100%"><h3><a name="optimal"></a>Optimal Perceptron learning</h3>
In the case of linearly separable problems, a perceptron can find many
different solutions:<br>
<img style="width: 403px; height: 315px;" alt=""
 src="theory_dateien/solutions.png"><br>
<br>
It would now be interesting to find the hyperplane that ensures
the maximal safety margin:<br>
<img style="width: 403px; height: 315px;" alt=""
 src="theory_dateien/optimal.png"><br>
<br>
The margins of that hyperplane touch a limited number of
special points, which define the hyperplane and which are called the <span
 style="font-style: italic;">Support Vectors</span>.<br>
<p style="margin-bottom: 0cm;" align="center"><img
 src="temp_html_16539589.gif" name="Objekt2" align="middle" height="36"
 hspace="8" width="383"></p>
<p style="margin-bottom: 0cm;">The perceptron has to determine the
samples for which <img src="temp_html_m32b2ad78.gif" name="Objekt1"
 align="middle" height="20" hspace="8" width="65">.&nbsp;The remaining
sam<span lang="de-DE">ples with <img src="temp_html_m411eef90.gif"
 name="Objekt3" align="middle" height="20" hspace="8" width="39">are
the Support Vectors <i>sv</i><span style="font-style: normal;">.&nbsp;</span></span></p>
<p style="margin-bottom: 0cm;"><span lang="de-DE"><span
 style="font-style: normal;"><img style="width: 403px; height: 315px;"
 alt="" src="theory_dateien/optimal0.png"></span></span></p>
<p style="margin-bottom: 0cm;"><img src="temp_html_m4e014a7b.gif"
 name="Objekt4" align="middle" height="20" hspace="8" width="40">Represents
the distance between a sample and<img src="temp_html_7bb41a07.gif"
 name="Objekt5" align="middle" height="20" hspace="8" width="17">.
<i>z-</i><sub> </sub>and <i>z+</i><sub> </sub>represent the
projection of the critical points on the axis defined by<img
 src="temp_html_13dea929.gif" name="Objekt6" align="middle" height="18"
 hspace="8" width="19">.</p>
<p style="margin-bottom: 0cm;">Algorithm of the Optimal Perceptron:</p>
<br>
<img style="width: 287px; height: 522px;" alt=""
 src="theory_dateien/optimal_algorithm.png"><br>
<p><a
 href="http://diwww.epfl.ch/mantra/tutorial/english/apb/html/index.html">[Back
to the Adaline/Perceptron/Backprop applet page]</a>
</p>
<p></p>
<hr width="100%"><h3><a name="reading"></a><i>Further reading</i></h3>
<ul>
  <li>
C. M. Bishop.<i> Neural Networks for Pattern Recognition.</i> Clarendon
Press, Oxford, 1995. pp 95-103 (adaline and perceptron); pp 140-148
(backprop)</li>
  <li>
J. Hertz, A. Krogh, and&nbsp; R.G. Palmer. <i>Introduction to the
Theory
of Neural Computation</i>. Addison-Wesley, Redwood City CA, 1991. pp
89-111</li>
  <li>
R. Rojas. <i>Neural Networks: A Systematic Introduction</i>.
Springer-Verlag,
Berlin 1996. pp 84-91 (perceptron learning); pp 159-162 (backprop)</li>
</ul>
<hr width="100%">
<p><a
 href="http://diwww.epfl.ch/mantra/tutorial/english/apb/html/index.html">[Back
to the Adaline/Perceptron/Backprop applet page]</a>
</p>
</body>
</html>
