
<html>
	<head>
		<meta name="keywords" content="neural, neural net, neural network, neural programming, mlp, backpropagation, back propagation, multilayer, perceptron, java, java package, example, class, training, train, incremental, batch, mini-batch, mini batch, generalized, modular, feed forward, feed, forward, cross validation, cross, validation, test, free, source, code">
        <meta http-equiv="Content-Language" content="en-us">
		<meta name="description" content="This work is a Java implementation of feed forward neural nets. Using this package, you can easily build and train multilayer, generalized feed forward, and modular feed forward nets with any number of layers and any number of units.">
		<title>FAQ - FEED FORWARD NEURAL NETWORKS - A JAVA IMPLEMENTATION v2.0</title>
		<META HTTP-EQUIV="pragma" CONTENT="no-cache">
		<META HTTP-EQUIV="Expires" CONTENT="Fri, 30 May 1980 01:00:00 GMT">
	</head>
	<body bgcolor="#e2e0e0" style="font-family: Verdana">
		<div align="center">
          <center>
		<table width="90%" border="0" cellspacing="20" cellpadding="0" style="border-collapse: collapse" bordercolor="#111111">
			<tr>
				<td align="left">
					<a href="index.html">Home</a></td>
			</tr>
			<tr>
				<td align="left">
					<b><font size="4"><a name="top"></a>FEED FORWARD NEURAL 
                    NETWORKS - A JAVA IMPLEMENTATION v2.0 </font></b>
                    <br><b><font size="5">FAQ</font></b></td>
			</tr>
			<tr>
				<td align="left">
					<font size="2">- <a href="#1">What is this?</a><br>
                    - <a href="#2">What kind of skills do I need in 
                    order to use this package?</a><br>
                    - <a href="#3">What kind of neural nets can be 
                    created using this package?</a><br>
                    - <a href="#4">What is a 'multilayer perceptron' 
                    and how can I create one?</a><br>
                    - <a href="#5">What is a 'generalized feed forward 
                    net'?</a><br>
                    - <a href="#6">What is a 'modular feed forward 
                    net'?</a><br>
                    - <a href="#7">How can I create generalized and 
                    modular feed forward nets?</a><br>
                    - <a href="#8">What kind of training methods can I 
                    use?</a><br>
                    - <a href="#9">Which activation functions can I 
                    use?</a><br>
                    - <a href="#10">Can I use different flatness for 
                    each neuron?</a><br>
                    - <a href="#11">Can I use momentum?</a><br>
                    - <a href="#12">Can I use different learning rates 
                    for each layer?</a><br>
                    - <a href="#13">What is the function of the class 
                    'Neuron'?</a><br>
                    - <a href="#14">What is the function of the class 
                    'Synapse'?</a><br>
                    - <a href="#15">What is the function of the class 'NeuralNet'?</a><br>
                    - <a href="#16">What is the function of the class 
                    'Pattern'?</a><br>
                    - <a href="#17">What is the function of the class 'PatternSet'?</a><br>
                    - <a href="#18">What is the function of the class 
                    'Randomizer'?</a><br>
                    - <a href="#19">What is the function of the class 'LineReader'?</a><br>
                    - <a href="#20">What is 'cross validation data'?</a><br>
                    - <a href="#21">What is 'test data'?</a><br>
                    - <a href="#22">What is the function of the methods 
                    'CrossValErrorRatio' and 'TestValErrorRatio'?</a><br>
                    - <a href="#23">Can I feed my own cross validation 
                    and test patterns?</a><br>
                    - <a href="#24">How can I set random weights after 
                    I create a net?</a><br>
                    - <a href="#25">What is a configuration file and 
                    what is the format of a configuration file?</a><br>
                    - <a href="#26">What is a pattern file and what is 
                    the format of a pattern file?</a><br>
                    - <a href="#27">What is a weight file and what is 
                    the format of a weight file?</a><br>
                    - <a href="#28">Do you give the source code for 
                    free?</a><br>
                    - <a href="#29">How can I get more detailed 
                    documentation?</a><br>
                    - <a href="#30">In which development environment 
                    has this package been tested?</a><br>
                    - <a href="#31">How can I compile the code?</a><br>
                    - <a href="#32">Error management in this package is 
                    not coded properly. Are you aware of it?</a><br>
                    - <a href="#33">Where can I find basic information 
                    about neural networks?</a><br>
                    - <a href="#34">How can I learn Java?</a><p>&nbsp;</p>
                    <p><u><b><a name="1"></a>
                    What is this?</b></u><br>
                    This is a feed forward neural network 
                    implementation coded in Java. It supports topologies such as 
                    multilayer perceptrons, generalized feed forward nets and 
                    modular feed forward nets.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="2"></a>What kind of skills do I need in order to use this 
                    package?</b></u><br>
                    That depends on what you want to do, but at a minimum you 
                    should have a basic knowledge of Java programming, feed 
                    forward neural networks and back propagation.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="3"></a>What kind of neural nets can be created using this 
                    package?</b></u><br>
                    You can create multilayer perceptrons, generalized feed 
                    forward nets and modular feed forward nets. You can use 
                    momentum, different activation functions, different flatness 
                    for those functions, different learning rates etc.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="4"></a>What is a 'multilayer perceptron' and how can I create 
                    one?</b></u><br>
                    Multilayer perceptrons are layered feed forward networks 
                    typically trained with static back propagation. These 
                    networks have found their way into countless applications 
                    requiring static pattern classification. Their main 
                    advantage is that they are easy to use, and that they can 
                    approximate any input / output map. The key disadvantages 
                    are that they train slowly, and require lots of training 
                    data (typically three times more training samples than 
                    network weights).(*) You can create a multilayer perceptron 
                    very easily using a constructor in 'NeuralNet' class 
                    (example 1 and example 2).<br>
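                    Conceptually, constructing an MLP amounts to giving the layer sizes and fully connecting each layer to the next. The sketch below shows this idea in plain Java; the class and method names are invented for the illustration and are not this package's 'NeuralNet' API.

```java
// Illustrative sketch only: layer sizes in, fully connected random weights out.
// The names (TinyMlp, forward) are NOT this package's API.
import java.util.Random;

public class TinyMlp {
    final double[][][] weights; // weights[layer][to][from]

    TinyMlp(int[] layerSizes, long seed) {
        Random rnd = new Random(seed);
        weights = new double[layerSizes.length - 1][][];
        for (int l = 0; l < weights.length; l++) {
            weights[l] = new double[layerSizes[l + 1]][layerSizes[l]];
            for (double[] row : weights[l])
                for (int i = 0; i < row.length; i++)
                    row[i] = rnd.nextDouble() - 0.5; // small random initial weight
        }
    }

    // Forward pass with a logistic activation on every non-input layer.
    double[] forward(double[] input) {
        double[] act = input;
        for (double[][] layer : weights) {
            double[] next = new double[layer.length];
            for (int j = 0; j < layer.length; j++) {
                double sum = 0;
                for (int i = 0; i < act.length; i++) sum += layer[j][i] * act[i];
                next[j] = 1.0 / (1.0 + Math.exp(-sum));
            }
            act = next;
        }
        return act;
    }

    public static void main(String[] args) {
        TinyMlp net = new TinyMlp(new int[]{2, 3, 1}, 42L); // a 2-3-1 topology
        double[] out = net.forward(new double[]{0.5, -0.5});
        System.out.println(out.length); // one output unit
    }
}
```

                    The package's constructor presumably hides exactly this kind of wiring behind its parameters.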
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="5"></a>What is a 'generalized feed forward net'?</b></u><br>
                    Generalized feed forward networks are a generalization of 
                    the MLP in which connections can jump over one or more 
                    layers. In theory, an MLP can solve any problem that a 
                    generalized feed forward network can solve. In practice, 
                    however, generalized feed forward networks often solve the 
                    problem much more efficiently. A classic example of this is 
                    the two-spiral problem. Without describing the problem, it 
                    suffices to say that a standard MLP requires hundreds of 
                    times more training epochs than a generalized feed forward 
                    network containing the same number of processing elements.(*) 
                    (See example 1.)<br>
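                    The idea of a layer-jumping connection can be illustrated in a few lines of plain Java. This sketch is not this package's API; the names and the logistic activation are assumptions chosen only to show how a skip synapse adds the raw input directly into the output unit's net input.

```java
// Illustrative only: one hidden unit plus a skip connection from the input
// straight to the output, as in a generalized feed forward net.
public class SkipDemo {
    static double logistic(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // wInHid / wHidOut: the ordinary layered path; wInOut: the layer-jumping synapse
    static double output(double in, double wInHid, double wHidOut, double wInOut) {
        double hidden = logistic(in * wInHid);
        return logistic(hidden * wHidOut + in * wInOut); // skip term added in
    }

    public static void main(String[] args) {
        System.out.println(output(1.0, 0.5, 0.5, 0.0)); // plain MLP path
        System.out.println(output(1.0, 0.5, 0.5, 1.0)); // with the skip connection
    }
}
```

                    Setting the skip weight to zero recovers the plain MLP, so the generalized net strictly extends the layered topology.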
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="6"></a>What is a 'modular feed forward net'?</b></u><br>
                    Modular feed forward networks are a special class of MLP. 
                    These networks process their input using several parallel 
                    MLPs, and then recombine the results. This tends to create 
                    some structure within the topology, which fosters 
                    specialization of function in each sub-module. In contrast 
                    to the MLP, modular networks do not have full 
                    interconnectivity between their layers. Therefore, a smaller 
                    number of weights is required for the same size network 
                    (i.e. the same number of PEs). This tends to speed up 
                    training and reduce the number of required training 
                    exemplars. There are many ways to segment an MLP into 
                    modules. It is unclear how best to design the modular 
                    topology based on the data, and there is no guarantee that 
                    each module specializes its training on a unique portion 
                    of the data.(*)<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <b><u><a name="7"></a>How can I create generalized and modular feed forward 
                    nets?</u></b><br>
                    Although it is possible to create these kinds of nets using 
                    this package, it is not easy to do in practice. Unless 
                    you develop your own editing application, you will have to 
                    edit a configuration file and create neurons / synapses 
                    manually (example 1).<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="8"></a>What kind of training methods can I use?</b></u><br>
                    You can use back propagation methods such as batch training 
                    (example 1), mini batch training (example 2) and incremental 
                    training (example 3). You can also apply them one after 
                    another, without saving the weights in between.<br>
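                    The three schedules differ only in when the accumulated weight change is applied, which the small sketch below makes concrete. It is a conceptual illustration, not this package's code; the class and method names are invented for the example.

```java
// Illustrative only: batch, mini-batch and incremental training differ in
// WHEN the accumulated gradient is applied, not in how it is computed.
public class UpdateSchedules {
    // Returns how many weight updates one epoch performs for each schedule.
    static int updatesPerEpoch(int patterns, int batchSize) {
        // batchSize == patterns  -> batch training: 1 update per epoch
        // batchSize == 1         -> incremental training: 1 update per pattern
        // anything in between    -> mini-batch training
        return (patterns + batchSize - 1) / batchSize; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(updatesPerEpoch(100, 100)); // batch: 1
        System.out.println(updatesPerEpoch(100, 10));  // mini-batch: 10
        System.out.println(updatesPerEpoch(100, 1));   // incremental: 100
    }
}
```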
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="9"></a>Which activation functions can I use?</b></u><br>
                    The package supports logistic (outputs between 0 and 1), 
                    tanh (outputs between -1 and 1) and linear activation 
                    functions. Flatness of the curves can be changed for each 
                    neuron. You can also add your own function by changing the 
                    code.<br>
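                    The three activation families, with a flatness parameter, can be sketched as below. The exact formula this package uses is not documented here; a common convention, assumed in this sketch, divides the net input by the flatness, so a larger flatness flattens the curve.

```java
// Illustrative only: logistic, tanh and linear activations with a flatness
// parameter t. The convention x/t is an assumption, not necessarily the
// package's exact formula.
public class Activations {
    static double logistic(double x, double flatness) {
        return 1.0 / (1.0 + Math.exp(-x / flatness)); // outputs in (0, 1)
    }
    static double tanh(double x, double flatness) {
        return Math.tanh(x / flatness);               // outputs in (-1, 1)
    }
    static double linear(double x, double flatness) {
        return x / flatness;                          // unbounded
    }

    public static void main(String[] args) {
        // Larger flatness -> output closer to the midpoint for the same input.
        System.out.println(logistic(2.0, 1.0));
        System.out.println(logistic(2.0, 4.0)); // flatter curve: nearer 0.5
    }
}
```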
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="10"></a>Can I use different flatness for each neuron?</b></u><br>
                    Yes, every neuron can have its own flatness parameter. The 
                    bigger the flatness parameter, the flatter the 
                    function.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="11"></a>Can I use momentum?</b></u><br>
                    Yes. You can even use different momentum parameters for each 
                    neuron. You can use momentum with all of the training 
                    techniques (batch, mini batch and incremental).<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="12"></a>Can I use different learning rates for each layer?</b></u><br>
                    Yes. You can even use different learning rates for each 
                    neuron. Every neuron has a learning rate coefficient. If you 
                    train the net with a learning rate of 0.5 and the 
                    learning rate coefficient of the neuron is 0.8, the neuron 
                    will be trained with an effective learning rate of 
                    0.5 * 0.8 = 0.4. Note that the multilayer perceptron 
                    constructor gives every unit in a layer the same 
                    coefficient; to use different coefficients for units in the 
                    same layer, you have to change them manually afterwards.<br>
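                    The per-neuron scaling described above is a single multiplication, sketched here for concreteness. The method name is invented for the example and is not part of this package.

```java
// Illustrative only: the effective per-neuron learning rate is the net-wide
// rate scaled by that neuron's coefficient, as the FAQ's 0.5 * 0.8 example shows.
public class EffectiveRate {
    static double effectiveRate(double netRate, double neuronCoefficient) {
        return netRate * neuronCoefficient;
    }

    public static void main(String[] args) {
        System.out.println(effectiveRate(0.5, 0.8)); // 0.4, matching the FAQ's example
    }
}
```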
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="13"></a>What is the function of the class 'Neuron'?</b></u><br>
                    It simply represents a neuron, the basic unit of any neural 
                    network, responsible for the processing. The class has two 
                    constructors, one for input units and the other for hidden 
                    and output units. A neuron object holds information about 
                    itself (activation function, flatness, learning coefficient, 
                    momentum rate, current output, layer number, etc.) as well as 
                    information about all synapses connected to it. It can update 
                    its output and train itself.<br>
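                    A data-structure sketch of such a unit is shown below. The field names and the logistic update are assumptions for illustration; they are not the fields of this package's actual 'Neuron' class.

```java
// Illustrative sketch only: the field names here are assumptions, NOT the
// actual 'Neuron' class of this package.
import java.util.ArrayList;
import java.util.List;

public class NeuronSketch {
    double flatness = 1.0;        // activation curve flatness
    double learningCoefficient;   // scales the net-wide learning rate
    double momentumRate;          // per-neuron momentum
    double output;                // current output
    int layerNo;                  // which layer this unit sits in
    final List<double[]> incoming = new ArrayList<>(); // {weight, sourceOutput} pairs

    // Update the output from all incoming synapses (logistic activation assumed).
    void updateOutput() {
        double sum = 0;
        for (double[] syn : incoming) sum += syn[0] * syn[1];
        output = 1.0 / (1.0 + Math.exp(-sum / flatness));
    }

    public static void main(String[] args) {
        NeuronSketch n = new NeuronSketch();
        n.incoming.add(new double[]{1.0, 0.0}); // weight 1.0, source output 0.0
        n.updateOutput();
        System.out.println(n.output); // net input 0 -> logistic gives 0.5
    }
}
```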
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="14"></a>What is the function of the class 'Synapse'?</b></u><br>
                    It represents a synapse between two neurons. For every 
                    connection between two neurons, there should be a synapse 
                    object. It has no methods, only a single 
                    constructor.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="15"></a>What is the function of the class 'NeuralNet'?</b></u><br>
                    It simply represents a neural net. It has two constructors. 
                    One is for creating a multilayer perceptron with given 
                    parameters. The other is a more general constructor. Using 
                    it you can create generalized feed forward nets, modular 
                    feed forward nets, as well as multilayer perceptrons. It 
                    reads a configuration file and creates a net out of the 
                    parameters in this file. If you don't code your own editing 
                    application, you have to edit the file manually.<br>
                    It has methods for saving the current configuration, loading 
