
        <td>&nbsp;</td>
        <td>-1</td>
        <td>-1</td>
        <td>(-1) + (-1) -1 = -3 &lt; 0 don't fire,
        output -1</td>
    </tr>
    <tr>
        <td>&nbsp;</td>
        <td>-1</td>
        <td>1</td>
        <td>(-1) + (1) -1 = -1&lt; 0 don't fire,
        output -1</td>
    </tr>
    <tr>
        <td>&nbsp;</td>
        <td>1</td>
        <td>-1</td>
        <td>(1) + (-1) -1 = -1 &lt; 0 don't fire,
        output -1</td>
    </tr>
    <tr>
        <td>&nbsp;</td>
        <td>1</td>
        <td>1</td>
        <td>(1) + (1)-1 = 1 &gt; 0 fire, output 1</td>
    </tr>
</table>
</BLOCKQUOTE>

<P>As you can see, the neural network with the proper weights and bias solves the problem perfectly. Moreover, there is a whole family of weights that will do just as well (sliding the decision boundary in a direction perpendicular to itself). However, there is an important point here. Without the bias or threshold, only lines through the origin would be possible, since the <I>X</I><SUB>2</SUB> intercept would have to be 0. This is the whole basis for using a bias or threshold, so this example has proven to be an important one since it has brought that fact out. So, are we closer to seeing how to algorithmically find weights? Yes, we now have a geometrical analogy, and that is the beginning of an algorithm.
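<P>To see why the bias matters, write the decision boundary of a single neurode (with a step threshold of 0) as a line in the input plane:

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>w</I><SUB>1</SUB><I>X</I><SUB>1</SUB> + <I>w</I><SUB>2</SUB><I>X</I><SUB>2</SUB> + <I>b</I> = 0, which crosses the <I>X</I><SUB>2</SUB> axis at <I>X</I><SUB>2</SUB> = -<I>b</I>/<I>w</I><SUB>2</SUB>
</FONT>
</BLOCKQUOTE>

<P>With <I>b</I> = 0 that intercept is forced to 0, so the line must pass through the origin. With <I>w</I><SUB>1</SUB>=1, <I>w</I><SUB>2</SUB>=1, <I>b</I>=-1 it crosses at <I>X</I><SUB>2</SUB>=1, which is exactly the boundary that separates (1,1) from the other three <I>AND</I> inputs in the table above.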
<H1>The Ebb of Hebbian</H1>
<P>Now we are ready to see the first learning algorithm and its application to a neural net. One of the simplest learning algorithms was invented by <I>Donald Hebb</I>, and it is based on using the input vectors to modify the weights so that the weights create the best possible linear separation of the inputs and outputs. Alas, the algorithm works just OK. Actually, for inputs that are orthogonal it is perfect, but for non-orthogonal inputs the algorithm falls apart. Even though the algorithm doesn't result in correct weights for all inputs, it is the basis of most learning algorithms, so we will start here.
<P>Before we see the algorithm, remember that it is for a single-neurode, single-layer neural net. You can, of course, place a number of neurodes in the layer, but they will all work in parallel and can be taught in parallel. Are you starting to see the massive parallelization that neural nets exhibit? Instead of using a single weight vector, a multi-neurode net uses a weight matrix. Anyway, the algorithm is simple; it goes something like this:


<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>Given:</I>

<UL>
	<LI>Input vectors are in bipolar form <I>I</I> = (-1,1,...,-1,1) and contain <I>k</I> elements.
	<LI>There are <I>n</I> input vectors and we will refer to the set as <I>I</I> and the <I>j</I>th element as <I>I</I><SUB>j</SUB><I>.</I>
	<LI>Outputs will be referred to as <I>y</I><SUB>j</SUB> and there are <I>n</I> of them, one for each input <I>I</I><SUB>j</SUB>.
	<LI>The weights <I>w</I><SUB>1</SUB><I>-w</I><SUB>k</SUB> are contained in a single vector <I>w</I> = (<I>w</I><SUB>1</SUB>, <I>w</I><SUB>2</SUB>, ... <I>w</I><SUB>k</SUB>).
</UL>
</FONT>
</BLOCKQUOTE>

<P><I>Step 1.</I> Initialize all your weights to 0, and let them be contained in a vector <I>w</I> that has <I>k</I> entries. Also initialize the bias <I>b</I> to 0.

<P><I>Step 2.</I> For <I>j</I> = 1 to <I>n</I> do

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>b</I> = <I>b</I> + <I>y</I><SUB><I>j</I></SUB>
(where y is the desired output)

<P><I>w</I> = <I>w</I> + <I>I</I><SUB><I>j</I></SUB>
* <I>y</I><SUB><I>j</I></SUB> (remember this is a vector
operation)
</FONT>
</BLOCKQUOTE>

<P>end do
<P>The algorithm is nothing more than an <I>&quot;accumulator&quot;</I> of sorts, shifting the decision boundary based on the changes in the input and output. The only problem is that it sometimes can't move the boundary fast enough (or at all) and <I>&quot;learning&quot;</I> doesn't take place.
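<P>To make the loop concrete, here is a minimal C sketch of the rule above for a single neurode. The function name <I>hebb_train</I> and the fixed maximum of 16 inputs are my own choices for illustration; they are not taken from Listing 2.0:

<PRE>
/* A sketch of the Hebbian rule above for a single neurode.              */
/* inputs[j][i] is the i-th element of bipolar input vector I_j and      */
/* outputs[j] is the desired bipolar output y_j.                         */
void hebb_train(int n, int k, int inputs[][16], int outputs[],
                int weights[], int *bias)
{
    int i, j;

    /* Step 1: initialize the k weights and the bias to 0. */
    for (i = 0; i < k; i++)
        weights[i] = 0;
    *bias = 0;

    /* Step 2: for each training pair accumulate               */
    /*         b = b + y_j   and   w = w + I_j * y_j (vector). */
    for (j = 0; j < n; j++)
    {
        *bias = *bias + outputs[j];
        for (i = 0; i < k; i++)
            weights[i] = weights[i] + inputs[j][i] * outputs[j];
    }
}
</PRE>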
<P>So how do we use <I>Hebbian</I> learning? The answer is, the same way as the previous network, except that now we have an algorithmic method to teach the net with; thus we refer to the net as a <I>Hebb </I>or <I>Hebbian Net</I>. As an example, let's take our trusty logical <I>AND</I> function and see if the algorithm can find the proper weights and bias to solve the problem. The following summation is equivalent to running the algorithm:

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>w</I> = [<I>I</I><SUB>1</SUB>*<I>y</I><SUB>1</SUB>]
+ [<I>I</I><SUB>2</SUB>*<I>y</I><SUB>2</SUB>] + [<I>I</I><SUB>3</SUB>*<I>y</I><SUB>3</SUB>]
+ [<I>I</I><SUB>4</SUB>*<I>y</I><SUB>4</SUB>] = [(-1, -1)*(-1)] +
[(-1, 1)*(-1)] + [( 1, -1)*(-1)] + [(1, 1)*(1)] = (2,2) 

<P><I>b</I> = <I>y</I><SUB>1</SUB> + <I>y</I><SUB>2</SUB>
+ <I>y</I><SUB>3</SUB> + <I>y</I><SUB>4</SUB> = (-1) + (-1) +
(-1) + (1) = -2
</FONT>
</BLOCKQUOTE>

<P>Therefore, <I>w</I><SUB>1</SUB>=2, <I>w</I><SUB>2</SUB>=2, and <I>b</I>=-2. These are simply scaled versions of the values <I>w</I><SUB>1</SUB>=1, <I>w</I><SUB>2</SUB>=1, <I>b</I>=-1 that we derived geometrically in the previous section. Killer, huh? With this simple learning algorithm we can train a neural net (consisting of a single neurode) to respond to a set of inputs and classify each input as true or false, 1 or -1. Now if we were to array these neurodes together to create a network of neurodes, then instead of simply classifying the inputs as on or off, we could associate patterns with the inputs. This is one of the foundations for the next neural net structure: the <I>Hopfield</I> net. One more thing: the activation function used for a Hebb Net is a step with a threshold of 0.0 and bipolar outputs 1 and -1.
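<P>As a quick sanity check, the sketch above can be fed the <I>AND</I> truth table. Assuming the <I>hebb_train</I> function from the earlier sketch is defined in the same file, something like the following should reproduce <I>w</I>=(2,2), <I>b</I>=-2 and fire only for the input (1,1):

<PRE>
#include &lt;stdio.h&gt;

/* Train on the bipolar AND truth table and recall with a step at 0. */
int main(void)
{
    int and_inputs[4][16] = { {-1,-1}, {-1,1}, {1,-1}, {1,1} };
    int and_outputs[4]    = {  -1,      -1,     -1,     1   };
    int w[16], b, j;

    hebb_train(4, 2, and_inputs, and_outputs, w, &b);
    printf("w1=%d w2=%d b=%d\n", w[0], w[1], b);   /* expect 2, 2, -2 */

    for (j = 0; j < 4; j++)
    {
        int sum = w[0] * and_inputs[j][0] + w[1] * and_inputs[j][1] + b;
        printf("(%2d,%2d) -> %d\n", and_inputs[j][0], and_inputs[j][1],
               (sum > 0) ? 1 : -1);                /* fires only for (1,1) */
    }
    return 0;
}
</PRE>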
<P>To get a feel for Hebbian learning and how to implement an actual Hebb Net, Listing 2.0 contains a complete Hebbian Neural Net Simulator. You can create networks with up to 16 inputs and 16 neurodes (outputs). The program is self-explanatory, but there are a couple of interesting properties: you can select 1 of 3 activation functions, and you can input any kind of data you wish. Normally, we would stick to the step activation function, and the inputs/outputs would be binary or bipolar. However, in the spirit of discovery, maybe you will find something interesting with these added degrees of freedom. Still, I suggest that you begin with the step function and all bipolar inputs and outputs.

<BLOCKQUOTE>
<SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Listing 2.0 - A Hebb Net Simulator (in neuralnet.zip).</I></FONT></SPAN>
</BLOCKQUOTE>

<H1>Playing the Hopfield</H1>

<BLOCKQUOTE>
<SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Figure 10.0 - A 4 Node Hopfield Autoassociative Neural Net.</I></FONT></SPAN>
<P ALIGN=CENTER><IMG SRC="xneuralnet/Image23.jpg" width="532" height="376">
</BLOCKQUOTE>

<P>John Hopfield is a physicist who likes to play with neural nets (which is good for us). He came up with a simple (in structure, at least) but effective neural network called the <I>Hopfield Net</I>. It is used for autoassociation: you input a vector <I>x</I> and you get <I>x</I> back (hopefully). A Hopfield net is shown in Figure 10.0. It is a single-layer network with a number of neurodes equal to the number of inputs <I>X</I><SUB>i</SUB>. The network is fully connected, meaning that every neurode is connected to every other neurode, and the inputs are also the outputs. This should strike you as weird since there is <I>feedback</I>. Feedback is one of the key features of the Hopfield net, and this feedback is the basis for the convergence to the correct result.
<P>The Hopfield network is an <I>iterative autoassociative memory.</I> This means that it may take one or more cycles to return the correct result (if at all). Let me clarify: the Hopfield network takes an input and then feeds it back, and the resulting output may or may not be the desired input. This feedback cycle may occur a number of times before the input vector is returned. Hence, the functional sequence of a Hopfield network is: first we determine the weights based on the input vectors that we want to autoassociate; then we input a vector and see what comes out of the activations. If the result is the same as our original input, we are done; if not, we take the result vector and feed it back through the network. Now let's take a look at the weight matrix and learning algorithm used for Hopfield nets.
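<P>To make that cycle concrete, here is a minimal C sketch of the feedback loop. It assumes the weight matrix <I>W</I> has already been built by the rule given below (Eq. 8.0), uses the binary step of Eq. 9.0, and caps the number of cycles; the names and the cap are my own, not part of the article's listings:

<PRE>
/* Run the iterative recall cycle described above: apply the net to the   */
/* current (binary) vector, and feed the result back until it stops       */
/* changing or we give up. k is the number of neurodes/inputs.            */
#define MAX_CYCLES 32

void hopfield_recall(int k, int W[16][16], int x[])
{
    int next[16];
    int i, j, cycle;

    for (cycle = 0; cycle < MAX_CYCLES; cycle++)
    {
        int changed = 0;

        for (i = 0; i < k; i++)
        {
            int sum = 0;
            for (j = 0; j < k; j++)
                sum = sum + W[i][j] * x[j];    /* weighted sum of the feedback */
            next[i] = (sum >= 0) ? 1 : 0;      /* binary step, Eq. 9.0         */
            if (next[i] != x[i])
                changed = 1;
        }

        for (i = 0; i < k; i++)
            x[i] = next[i];                    /* feed the result back in      */

        if (!changed)
            break;                             /* stable: x is the recalled vector */
    }
}
</PRE>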
<P>The learning algorithm for Hopfield nets is based on the Hebbian rule and is simply a summation of products. However, since the Hopfield network has a number of input neurons the weights are no longer a single array or vector, but a collection of vectors which are most compactly contained in a single matrix. Thus the weight matrix <I>W</I> for a Hopfield net is created based on this equation:

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>Given:</I>
<UL>
	<LI>Input vectors are in bipolar form <I>I</I> = (-1,1,...,-1,1) and contain <I>k</I> elements.
	<LI>There are <I>n</I> input vectors and we will refer to the set as <I>I</I> and the <I>j</I>th element as <I>I</I><SUB>j</SUB>.
	<LI>Outputs will be referred to as <I>y</I><SUB>j</SUB> and there are <I>n</I> of them, one for each input <I>I</I><SUB>j</SUB>.
	<LI>The weight matrix <I>W</I> is square and has dimension <I>k</I>x<I>k</I> since there are <I>k</I> inputs.
</UL>

<P><SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Eq. 8.0</I></FONT></SPAN>

<P><I>W</I><SUB>(<I>k</I>x<I>k</I>)</SUB> = &Sigma;<SUB><I>i</I>=1</SUB><SUP><I>n</I></SUP> <I>I</I><SUB>i</SUB><SUP>t</SUP> x <I>I</I><SUB>i</SUB><br>
</FONT>
</BLOCKQUOTE>

<P>note: each outer product will have dimension <I>k</I>
x <I>k</I>, since we are multiplying a column vector and a row
vector.

<P>and, <I>W</I><SUB>ii</SUB> = 0, for all<I> i</I>.
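<P>A minimal C sketch of Eq. 8.0 follows, assuming the <I>n</I> bipolar input vectors are already stored in an array; the outer-product accumulation and the zeroed diagonal come straight from the rule above, while the function and array names are just for illustration:

<PRE>
/* Build the Hopfield weight matrix of Eq. 8.0: sum the outer products   */
/* I_i^t x I_i of each bipolar input vector with itself, then force the  */
/* main diagonal to zero (W_ii = 0 for all i).                           */
void hopfield_weights(int n, int k, int inputs[][16], int W[16][16])
{
    int v, r, c;

    for (r = 0; r < k; r++)
        for (c = 0; c < k; c++)
            W[r][c] = 0;

    for (v = 0; v < n; v++)            /* one outer product per input vector */
        for (r = 0; r < k; r++)
            for (c = 0; c < k; c++)
                W[r][c] = W[r][c] + inputs[v][r] * inputs[v][c];

    for (r = 0; r < k; r++)
        W[r][r] = 0;                   /* zero the main diagonal */
}
</PRE>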
<P>Notice that there are no bias terms and the main diagonal of <I>W</I> must be all zeros. The weight matrix is simply the sum of matrices generated by multiplying the transpose <I>I</I><SUB>i</SUB><SUP>t</SUP> by <I>I</I><SUB>i</SUB> for all <I>i</I> from 1 to <I>n</I>. This is almost identical to the Hebbian algorithm for a single neurode, except that instead of multiplying the input by the output, the input is multiplied by itself, which is equivalent to the output in the case of autoassociation. Finally, the activation function <I>f</I><SUB>h</SUB><I>(x)</I> is shown below:

<BLOCKQUOTE>
<SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Eq. 9.0</I></FONT></SPAN>
<FONT COLOR=RED>
<P><I>f</I><SUB>h</SUB><I>(x)</I> = 1, if <I>x</I> &gt;= 0<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;= 0, if <I>x</I> &lt; 0
</FONT>
</BLOCKQUOTE>

<P><I>f</I><SUB>h</SUB><I>(x)</I> is a step function with a binary output. This means that the inputs must be binary, but didn't we already say that the inputs are bipolar? Well, they are, and they aren't. When the weight matrix is generated we convert all input vectors to bipolar, but for normal operation we use the binary version of the inputs, and the output of the Hopfield net will also be binary. This convention is not necessary, but it makes the network discussion a little simpler. Anyway, let's move on to an example. Say we want to create a four-node Hopfield net and we want it to recall these vectors:

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>I</I><SUB>1</SUB>=(0,0,1,0), <I>I</I><SUB>2</SUB>=(1,0,0,0),
<I>I</I><SUB>3</SUB>=(0,1,0,1) Note: they are all orthogonal.
</FONT>
</BLOCKQUOTE>

<P>Converting to bipolar (denoted by <SUP>*</SUP>), we have:

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>I</I><SUB>1</SUB><SUP>*</SUP> = (-1,-1,1,-1)
, <I>I</I><SUB>2</SUB><SUP>*</SUP> = (1,-1,-1,-1) , <I>I</I><SUB>3</SUB><SUP>*</SUP>
= (-1,1,-1,1)
</FONT>
</BLOCKQUOTE>

<P>Now we need to compute <I>W</I><SUB>1</SUB>, <I>W</I><SUB>2</SUB>, <I>W</I><SUB>3</SUB>, where <I>W</I><SUB>i</SUB> is the product of the transpose of each input with itself.

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>W</I><SUB>1</SUB>= [ <I>I</I><SUB>1</SUB><SUP>*t</SUP> x <I>I</I><SUB>1</SUB><SUP>*</SUP> ] = (-1,-1,1,-1)<SUP>t</SUP> x (-1,-1,1,-1) = 

<table border="0" cellpadding="7" cellspacing="0" width="96">
    <tr>
        <td valign="top" width="25%">1</td>
        <td valign="top" width="25%">1</td>
        <td valign="top" width="25%">-1</td>
        <td valign="top" width="25%">1</td>
    </tr>
    <tr>
        <td valign="top" width="25%">1</td>
        <td valign="top" width="25%">1</td>
        <td valign="top" width="25%">-1</td>
        <td valign="top" width="25%">1</td>
    </tr>
    <tr>
        <td valign="top" width="25%">-1</td>
        <td valign="top" width="25%">-1</td>
