<ul class="Download">
<li><a href="http://www.codeproject.com/KB/graphics/GPUNN/GPUNN_demo.zip">Download demo (release build requiring CUDA and 120 dpi) - 584.61 KB</a></li>

<li><a href="http://www.codeproject.com/KB/graphics/GPUNN/GPUNN_GUI.zip">Download GUI source code - 509.68 KB</a> </li>

<li><a href="http://www.codeproject.com/KB/graphics/GPUNN/GPUNN_kernel.zip">Download kernel (the Neural Network core) - 2.78 KB</a></li>
</ul>

<h2>Introduction</h2>

<p>An Artificial Neural Network is an information processing method
inspired by the way biological nervous systems, such as the brain,
process information. It is composed of a large number of highly
interconnected processing elements (neurons) working in unison to solve
specific problems. Neural Networks have been widely used in "analogous"
signal classification, including handwriting, voice, and image
recognition. Neural Networks can also be used in computer games,
enabling a game to adaptively learn from player behavior. This
technique has been used in racing games, where computer-controlled
opponent cars learn how to drive from human players.</p>

<p>Since a Neural Network requires a considerable number of vector and
matrix operations to produce results, it is very well suited to a
parallel programming model running on Graphics Processing Units (GPUs).
Our goal is to unleash the power of GPUs to boost the performance of a
Neural Network solving handwriting recognition problems.</p>

<p>This project was originally our graphics architecture course
project. We ran on the GPU the same Neural Network described by Mike
O'Neill in his brilliant article "<a href="http://www.codeproject.com/KB/library/NeuralNetRecognition.aspx">Neural Network for Recognition of Handwritten Digits</a>".</p>

<h2>About the Neural Network </h2>

<p>A Neural Network consists of two basic kinds of elements: neurons
and connections. Neurons connect with each other through connections to
form a network. This is a simplified theoretical model of the human brain.</p>

<p>A Neural Network often has multiple layers; neurons of a certain
layer connect to neurons of the next layer in some way. Every such
connection is assigned a weight value. At the beginning, input data are
fed into the neurons of the first layer; by computing the weighted sum
of all connected first-layer neurons, we get the value of a
second-layer neuron, and so on. Finally, we reach the last layer, which
is the output. All the computations involved in running a Neural
Network are essentially a bunch of dot products.</p>
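
<p>As a minimal sketch (the function and variable names here are ours,
not taken from the article's code), the value of one neuron is a bias
plus a dot product of the previous layer's values with the connection
weights, passed through an activation function:</p>

<pre lang="C++">#include <cmath>

// Hypothetical helper: weights[0] is assumed to be the bias term,
// weights[1..count] the connection weights.
float computeNeuron(const float *prevNeurons, const float *weights, int count)
{
    float sum = weights[0];                      // bias
    for (int i = 0; i < count; ++i)
        sum += prevNeurons[i] * weights[i + 1];  // dot product
    return tanhf(sum);                           // squashing activation
}</pre>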

<p>The secret of a Neural Network is all in the weight values: the
right values make it work well. However, at the beginning we don't know
those values, so we need to train our network with sample inputs and
compare the outcomes with the desired answers. A training algorithm
such as backpropagation takes the errors as inputs and adjusts the
network weights accordingly. With enough patience, the Neural Network
can be trained to achieve high accuracy.</p>
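
<p>As a hypothetical illustration (not the article's training code), a
gradient-descent update nudges each weight against its error gradient:</p>

<pre lang="C++">// One gradient-descent step: w -= learningRate * dError/dw
void updateWeights(float *weights, const float *gradients,
                   int count, float learningRate)
{
    for (int i = 0; i < count; ++i)
        weights[i] -= learningRate * gradients[i];
}</pre>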
<img alt="IllustrationNeuralNet.gif" src="GPUNN.aspx_files/IllustrationNeuralNet.gif" width="599" border="0" height="300" hspace="0"> 
<p>The neural network we implemented is a five-layer network known as a
convolutional neural network. This kind of network has proven to be
well suited to recognizing handwritten digits. For more theoretical
details, please check out Mike's article and the references he has
listed.</p>

<p>The first three layers of our neural network consist of several
feature maps, each down-sampled from the previous layer. Our input is a
29*29 image of a digit, so we have 29*29 = 841 neurons in the first
layer. The second layer is a convolutional layer with 6 feature maps.
Each feature map is a 13*13 image sampled from the first layer: each
pixel/neuron in a feature map is the result of a 5*5 convolutional
kernel applied to the input layer. So there are 13*13*6 = 1014
nodes/neurons in this layer, (5*5+1 (bias node))*6 = 156 weights, and
1014*(5*5+1) = 26364 connections linking to the first layer.</p>

<p>Layer 3 is also a convolutional layer, but with 50 smaller feature
maps. Each feature map is 5*5 in size, and each pixel in these feature
maps is a 5*5 convolutional kernel of corresponding areas of all 6
feature maps of the previous layer. There are thus 5*5*50 = 1250
neurons in this layer, (5*5+1)*6*50 = 7800 weights, and 1250*26 = 32500
connections.</p>

<p>The fourth layer is a fully-connected layer with 100 neurons. Since
it is fully-connected, each of the 100 neurons in the layer is
connected to all 1250 neurons in the previous layer. There are
therefore 100 neurons in it, 100*(1250+1) = 125100 weights and 100*1251
= 125100 connections.</p>

<p>Layer 5 is the final output layer. This layer is also a
fully-connected layer, with 10 units. Each of the 10 neurons in this
layer is connected to all 100 neurons of the previous layer. There are
10 neurons in Layer 5, 10*(100+1) = 1010 weights and 10*101 = 1010
connections.</p>

<p>As you can see, although structurally simple, this Neural Network is a huge data structure.</p>
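
<p>For reference, the layer dimensions above can be summarized as
constants (a sketch of our own; these names do not come from the kernel
source):</p>

<pre lang="C++">const int kLayer1_Neurons = 29 * 29;               // 841 input pixels
const int kLayer2_Neurons = 13 * 13 * 6;           // 1014
const int kLayer2_Weights = (5 * 5 + 1) * 6;       // 156 (kernel + bias per map)
const int kLayer3_Neurons = 5 * 5 * 50;            // 1250
const int kLayer3_Weights = (5 * 5 + 1) * 6 * 50;  // 7800
const int kLayer4_Neurons = 100;
const int kLayer4_Weights = 100 * (1250 + 1);      // 125100
const int kLayer5_Neurons = 10;
const int kLayer5_Weights = 10 * (100 + 1);        // 1010</pre>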

<h2>Previous GPU Implementation </h2>

<p><a href="http://leenissen.dk/fann/html_latest/files2/gpu-txt.html">Fast Neural Network Library</a>
(FANN) has a very simple GPU implementation of a Neural Network using
GLSL. Each neuron is represented by a single color channel of a texture
pixel. This network is very specific: neuron values range from 0 to 1
and have an accuracy of only 8 bits. The implementation takes advantage
of the hardware-accelerated dot product function to calculate neurons.
Both neurons and weights are carried in texture maps.</p>

<p>This implementation is straightforward and easy, but limited. First,
our neural network requires 32-bit float accuracy for each neuron.
Since our network has five layers, accuracy lost at the first layer
accumulates and can alter the final results, and because a handwriting
recognition system must be sensitive enough to detect slight
differences between inputs, using only 8 bits to represent a neuron is
unacceptable. Secondly, normal Neural Networks map neuron values to the
range from 0 to 1; however, the Neural Network in our program, which is
specifically designed for handwriting recognition, has a special
activation function mapping each neuron value to the range from -1 to
1. If a neuron were represented by a single color value as in the FANN
library, our neurons would lose even more accuracy. Finally, the FANN
method uses the hardware dot product to compute neurons, which is
suitable for fully connected Neural Networks; our Neural Network is
only partially connected, and its computations involve dot products of
large vectors rather than the small fixed-size vectors the hardware dot
product handles.</p>
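
<p>To see why 8 bits is too coarse, consider storing a neuron value
from our -1 to 1 range in a single 8-bit color channel (an illustrative
snippet of our own, not FANN code): only 256 distinct levels survive, a
resolution of roughly 0.008.</p>

<pre lang="C++">// Quantize a neuron value in [-1, 1] to one 8-bit color channel.
unsigned char quantize(float neuron)
{
    return (unsigned char)((neuron + 1.0f) * 0.5f * 255.0f + 0.5f);
}

// Recover it: the round trip loses everything below ~2/255, about 0.008.
float dequantize(unsigned char q)
{
    return q / 255.0f * 2.0f - 1.0f;
}</pre>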

<h2>Our Implementation </h2>

<p>Due to all the inconveniences of GLSL mentioned above, we finally
chose CUDA. The reason a Neural Network is suitable for the GPU is that
training and execution are two separate processes. Once the network is
properly trained, no write access is required while using it, so there
are no synchronization issues to address. Moreover, neurons on the same
network layer are completely independent of one another, so their
values can be computed in parallel.</p>

<p>In our code, the weights for each layer are stored as arrays, and
these, along with the input data, are copied to the device. For each
network layer there is a CUDA function handling the computation of that
layer's neuron values, since parallelism can only be achieved within
one layer and the connections differ between layers. The connections of
the Neural Network are implicitly defined in the CUDA functions by the
equations that compute the next layer's neurons; no explicit connection
data structure exists in our code. This is one main difference between
our code and the CPU version by Mike.</p>
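
<p>The device-side setup might look like the following (a sketch under
our assumptions; the buffer sizes follow the layer dimensions above,
but the host variable names are hypothetical):</p>

<pre lang="C++">float *Layer1_Neurons_GPU, *Layer1_Weights_GPU, *Layer2_Neurons_GPU;

// Allocate device buffers for the input image, the first set of
// weights (26 per feature map: 25 kernel weights + 1 bias), and the
// second layer's neurons.
cudaMalloc((void **)&Layer1_Neurons_GPU, 29 * 29 * sizeof(float));
cudaMalloc((void **)&Layer1_Weights_GPU, 26 * 6 * sizeof(float));
cudaMalloc((void **)&Layer2_Neurons_GPU, 13 * 13 * 6 * sizeof(float));

// Copy the 29*29 input image and the trained weights to the device.
cudaMemcpy(Layer1_Neurons_GPU, inputImage, 29 * 29 * sizeof(float),
           cudaMemcpyHostToDevice);
cudaMemcpy(Layer1_Weights_GPU, layer1Weights, 26 * 6 * sizeof(float),
           cudaMemcpyHostToDevice);</pre>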
<img alt="cuda.PNG" src="GPUNN.aspx_files/cuda.PNG" width="456" border="0" height="499" hspace="0"> 
<p>For example, each neuron value on the second level is a weighted sum
of 25 neurons of the first level plus one bias. The second level is
composed of 6 feature maps, each 13*13 in size. We assign a <code>blockID</code> to each feature map and a <code>threadID</code> to each neuron on a feature map: every feature map is handled by a block, and each pixel on it is dealt with by a thread.</p>

<p>This is the CUDA function that computes the second network layer:</p>

<pre lang="C++">__global__ void executeFirstLayer
    (float *Layer1_Neurons_GPU, float *Layer1_Weights_GPU, float *Layer2_Neurons_GPU)
{
    int blockID = blockIdx.x;   // one block per feature map (6 total)
    int pixelX  = threadIdx.x;  // one thread per feature-map pixel (13*13)
    int pixelY  = threadIdx.y;

    // Offsets of the 5*5 convolution window inside the 29*29 input image
    int kernelTemplate[25] = {
        0,   1,   2,   3,   4,
        29,  30,  31,  32,  33,
        58,  59,  60,  61,  62,
        87,  88,  89,  90,  91,
        116, 117, 118, 119, 120 };

    int weightBegin = blockID * 26;  // 25 kernel weights + 1 bias per map
    int windowX = pixelX * 2;        // sub-sample the input by 2
    int windowY = pixelY * 2;

    float result = 0;

    result += Layer1_Weights_GPU[weightBegin];  // bias

    ++weightBegin;

    // Weighted sum over the 5*5 input window
    for (int i = 0; i < 25; ++i)
    {
        result += Layer1_Neurons_GPU
            [windowY * 29 + windowX + kernelTemplate[i]] * Layer1_Weights_GPU[weightBegin + i];
    }

    // Scaled tanh activation, mapping the result into (-1.7159, 1.7159)
    result = 1.7159 * tanhf(0.66666667 * result);

    Layer2_Neurons_GPU[13 * 13 * blockID + pixelY * 13 + pixelX] = result;
}</pre>
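
<p>Following the block/thread mapping described above, the launch
configuration for this kernel would look something like this (our
sketch, not copied from the kernel source):</p>

<pre lang="C++">dim3 threadsPerBlock(13, 13);  // one thread per feature-map pixel
executeFirstLayer<<<6, threadsPerBlock>>>(Layer1_Neurons_GPU,
                                          Layer1_Weights_GPU,
                                          Layer2_Neurons_GPU);</pre>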

<p>All other levels are computed in the same way; the only difference is the equation used to calculate the neurons.</p>
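
<p>For instance, a fully-connected layer reduces to one block per
output neuron summing over every neuron of the previous layer. The
kernel below is a hypothetical sketch of ours following the same
pattern, not the article's actual code:</p>

<pre lang="C++">// Hypothetical fully-connected layer kernel (e.g. layer 5:
// prevCount = 100, launched with 10 blocks of 1 thread each).
__global__ void executeFullyConnectedLayer
    (float *prevNeurons, float *weights, float *nextNeurons, int prevCount)
{
    int n = blockIdx.x;                     // one block per output neuron
    int weightBegin = n * (prevCount + 1);  // 1 bias + prevCount weights

    float result = weights[weightBegin];    // bias
    for (int i = 0; i < prevCount; ++i)
        result += prevNeurons[i] * weights[weightBegin + 1 + i];

    nextNeurons[n] = 1.7159 * tanhf(0.66666667 * result);
}</pre>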
<img alt="program.PNG" src="GPUNN.aspx_files/program.PNG" width="548" border="0" height="199" hspace="0"> 
<p>The main program first transfers all the input data to the GPU, then
calls each CUDA function in order, and finally retrieves the answer.</p>
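
<p>After the last layer's kernel finishes, the host reads the 10 output
neurons back (a sketch; <code>Layer5_Neurons_GPU</code> is our assumed
name for the final device buffer):</p>

<pre lang="C++">float output[10];
cudaMemcpy(output, Layer5_Neurons_GPU, 10 * sizeof(float),
           cudaMemcpyDeviceToHost);  // copy the final layer to the host</pre>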
<a href="http://www.codeproject.com/KB/graphics/GPUNN/recod.jpg"><img alt="recod.jpg" src="GPUNN.aspx_files/recod_small.jpg" width="640" border="0" height="259" hspace="0"> </a>
<p>The user interface is a separate program written in C#. The user
draws a digit with the mouse on the input pad; the program then
generates a 29*29 image and calls the kernel Neural Network program.
The kernel, as described above, reads the input image and feeds it into
our Neural Network. Results are returned through files and read back by
the user interface.</p>

<p>Here is a screenshot. After drawing a digit, we can get all 10
neuron values of the last network layer; the index of the maximum value
is the recognized digit.</p>
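
<p>Picking the answer is then a simple argmax over the outputs (an
illustrative snippet of ours; the GUI actually reads these values back
from a file):</p>

<pre lang="C++">int recognizedDigit = 0;
for (int i = 1; i < 10; ++i)
    if (output[i] > output[recognizedDigit])
        recognizedDigit = i;  // index of the maximum output neuron</pre>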
