<html>
	<head>
		<meta name="keywords" content="neural, neural net, neural network, neural programming, mlp, backpropagation, back propagation, multilayer, perceptron, java, java package, example, class, training, train, incremental, batch, mini-batch, mini batch, generalized, modular, feed forward, feed, forward, cross validation, cross, validation, test, free, source, code">
        <meta http-equiv="Content-Language" content="en-us">
		<meta name="description" content="This work is a Java implementation of feed forward neural nets. Using this package, you can easily build and train multilayer, generalized feed forward, modular feed forward nets with any number of layers and any number of units.">
		<title>FAQ - FEED FORWARD NEURAL NETWORKS - A JAVA IMPLEMENTATION v2.0</title>
		<META HTTP-EQUIV="pragma" CONTENT="no-cache">
		<META HTTP-EQUIV="Expires" CONTENT="Fri, 30 May 1980 01:00:00 GMT">
	</head>
	<body bgcolor="#e2e0e0" style="font-family: Verdana">
		<div align="center">
          <center>
		<table width="90%" border="0" cellspacing="20" cellpadding="0" style="border-collapse: collapse" bordercolor="#111111">
			<tr>
				<td align="left">
					<a href="index.html">Home</a></td>
			</tr>
			<tr>
				<td align="left">
					<b><font size="4"><a name="top"></a>FEED FORWARD NEURAL 
                    NETWORK<span lang="tr">S</span> - A JAVA IMPLEMENTATION v2.0 </font></b>
                    <br><b><span lang="tr"><font size="5">FAQ</font></span></b></td>
			</tr>
			<tr>
				<td align="left">
					<font size="2">- <a href="#1">What is this?</a><br>
                    - <a href="#2">What kind of skills do I need in 
                    order to use this package?</a><br>
                    - <a href="#3">What kind of neural nets can be 
                    created using this package?</a><br>
                    - <a href="#4">What is a 'multilayer perceptron' 
                    and how can I create one?</a><br>
                    - <a href="#5">What is a 'generalized feed forward 
                    net'?</a><br>
                    - <a href="#6">What is a 'modular feed forward 
                    net'?</a><br>
                    - <a href="#7">How can I create generalized and 
                    modular feed forward nets?</a><br>
                    - <a href="#8">What kind of training methods can I 
                    use?</a><br>
                    - <a href="#9">Which activation functions can I 
                    use?</a><br>
                    - <a href="#10">Can I use different flatness for 
                    each neuron?</a><br>
                    - <a href="#11">Can I use momentum?</a><br>
                    - <a href="#12">Can I use different learning rates 
                    for each layer?</a><br>
                    - <a href="#13">What is the function of the class 
                    'Neuron'?</a><br>
                    - <a href="#14">What is the function of the class 
                    'Synapse'?</a><br>
                    - <a href="#15">What is the function of the class 'NeuralNet'?</a><br>
                    - <a href="#16">What is the function of the class 
                    'Pattern'?</a><br>
                    - <a href="#17">What is the function of the class 'PatternSet'?</a><br>
                    - <a href="#18">What is the function of the class 
                    'Randomizer'?</a><br>
                    - <a href="#19">What is the function of the class 'LineReader'?</a><br>
                    - <a href="#20">What is 'cross validation data'?</a><br>
                    - <a href="#21">What is 'test data'?</a><br>
                    - <a href="#22">What is the function of the methods 
                    'CrossValErrorRatio' and 'TestValErrorRatio'?</a><br>
                    - <a href="#23">Can I feed my own cross validation 
                    and test patterns?</a><br>
                    - <a href="#24">How can I set random weights after 
                    I create a net?</a><br>
                    - <a href="#25">What is a configuration file and 
                    what is the format of a configuration file?</a><br>
                    - <a href="#26">What is a pattern file and what is 
                    the format of a pattern file?</a><br>
                    - <a href="#27">What is a weight file and what is 
                    the format of a weight file?</a><br>
                    - <a href="#28">Do you give the source code for 
                    free?</a><br>
                    - <a href="#29">How can I get more detailed 
                    documentation?</a><br>
                    - <a href="#30">In which development environment 
                    has this package been tested?</a><br>
                    - <a href="#31">How can I compile the code?</a><br>
                    - <a href="#32">Error management in this package is 
                    not coded properly. Are you aware of it?</a><br>
                    - <a href="#33">Where can I find basic information 
                    about neural networks?</a><br>
                    - <a href="#34">How can I learn Java?</a><p>&nbsp;</p>
                    <p><u><b><a name="1"></a>
                    What is this?</b></u><br>
                    This is basically a feed forward neural network 
                    implementation coded in Java. It supports topologies such as 
                    multilayer perceptron, generalized feed forward nets and 
                    modular feed forward nets.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="2"></a>What kind of skills do I need in order to use this 
                    package?</b></u><br>
                    That depends on what you want to do, but at a minimum you 
                    should have a basic knowledge of Java programming, feed 
                    forward neural networks and back propagation.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="3"></a>What kind of neural nets can be created using this 
                    package?</b></u><br>
                    You can create multilayer perceptrons, generalized feed 
                    forward nets and modular feed forward nets. You can use 
                    momentum, different activation functions, different flatness 
                    for those functions, different learning rates etc.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="4"></a>What is a 'multilayer perceptron' and how can I create 
                    one?</b></u><br>
                    Multilayer perceptrons are layered feed forward networks 
                    typically trained with static back propagation. These 
                    networks have found their way into countless applications 
                    requiring static pattern classification. Their main 
                    advantage is that they are easy to use, and that they can 
                    approximate any input / output map. The key disadvantages 
                    are that they train slowly, and require lots of training 
                    data (typically three times more training samples than 
                    network weights).(*) You can create a multilayer perceptron 
                    very easily using a constructor in 'NeuralNet' class 
                    (example 1 and example 2).<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="5"></a>What is a 'generalized feed forward net'?</b></u><br>
                    Generalized feed forward networks are a generalization of 
                    the MLP such that connections can jump over one or more 
                    layers. In theory, an MLP can solve any problem that a 
                    generalized feed forward network can solve. In practice, 
                    however, generalized feed forward networks often solve the 
                    problem much more efficiently. A classic example of this is 
                    the two spiral problem. Without describing the problem, it 
                    suffices to say that a standard MLP requires hundreds of 
                    times more training epochs than the generalized feed forward 
                    network containing the same number of processing elements(*) 
                    (see example 1).<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="6"></a>What is a 'modular feed forward net'?</b></u><br>
                    Modular feed forward networks are a special class of MLP. 
                    These networks process their input using several parallel 
                    MLP's, and then recombine the results. This tends to create 
                    some structure within the topology, which will foster 
                    specialization of function in each sub-module. In contrast 
                    to the MLP, modular networks do not have full 
                    interconnectivity between their layers. Therefore, a smaller 
                    number of weights are required for the same size network 
                    (i.e. the same number of PEs). This tends to speed up 
                    training times and reduce the number of required training 
                    exemplars. There are many ways to segment an MLP into 
                    modules. It is unclear how to best design the modular 
                    topology based on the data. There are no guarantees that 
                    each module is specializing its training on a unique portion 
                    of the data.(*)<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <b><u><a name="7"></a>How can I create generalized and modular feed forward 
                    nets?</u></b><br>
                    Although it is possible to create these kinds of nets with 
                    this package, it is not easy to do so in practice. Unless 
                    you develop your own editing application, you will have to 
                    edit a configuration file and create neurons and synapses 
                    manually (example 1).<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="8"></a>What kind of training methods can I use?</b></u><br>
                    You can use back propagation methods such as batch training 
                    (example 1), mini batch training (example 2) and incremental 
                    training (example 3). You can use them one after another, 
                    without saving the weights.<br>
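                    The difference between the three modes is how many patterns 
                    are accumulated before each weight update: all of them 
                    (batch), a fixed-size subset (mini batch), or a single one 
                    (incremental). A small sketch (a hypothetical helper, not 
                    part of the package):

```java
// Illustrative sketch: how often weights are updated per epoch under
// each back-propagation training mode. Not the package's API.
public class TrainingModes {
    // number of weight updates in one epoch for a given batch size
    static int updatesPerEpoch(int patternCount, int batchSize) {
        return (patternCount + batchSize - 1) / batchSize; // ceiling division
    }

    public static void main(String[] args) {
        int n = 100; // training patterns
        System.out.println("batch:       " + updatesPerEpoch(n, n));  // 1 update
        System.out.println("mini-batch:  " + updatesPerEpoch(n, 10)); // 10 updates
        System.out.println("incremental: " + updatesPerEpoch(n, 1));  // 100 updates
    }
}
```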
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="9"></a>Which activation functions can I use?</b></u><br>
                    The package supports logistic (outputs between 0 and 1), 
                    tanh (outputs between -1 and 1) and linear activation 
                    functions. Flatness of the curves can be changed for each 
                    neuron. You can also add your own function by changing the 
                    code.<br>
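                    The three activation types can be sketched as below. How the 
                    package applies the flatness parameter internally is an 
                    assumption here; dividing the input by the flatness gives 
                    the behaviour described in this FAQ (a larger flatness 
                    yields a flatter curve).

```java
// Sketch of the three activation types described above. The flatness
// scaling (dividing the input by the flatness parameter) is an
// assumption about the package's implementation.
public class Activations {
    static double logistic(double x, double flatness) {
        return 1.0 / (1.0 + Math.exp(-x / flatness)); // outputs in (0, 1)
    }
    static double tanh(double x, double flatness) {
        return Math.tanh(x / flatness);               // outputs in (-1, 1)
    }
    static double linear(double x, double flatness) {
        return x / flatness;                          // unbounded
    }
    public static void main(String[] args) {
        // with flatness 2.0 the logistic curve rises more slowly than with 1.0
        System.out.println(logistic(1.0, 1.0)); // ~0.731
        System.out.println(logistic(1.0, 2.0)); // ~0.622, flatter
    }
}
```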
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="10"></a>Can I use different flatness for each neuron?</b></u><br>
                    Yes, every neuron can have its own flatness parameter. The 
                    bigger the flatness parameter, the flatter the function.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="11"></a>Can I use momentum?</b></u><br>
                    Yes. You can even use different momentum parameters for each 
                    neuron. You can use momentum with all of the training 
                    techniques (batch, mini batch and incremental).<br>
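                    A momentum update blends the current gradient step with a 
                    fraction of the previous weight change, which smooths the 
                    trajectory and can speed up convergence. A sketch (invented 
                    names, not the package's code):

```java
// Sketch of a single momentum weight update (not the package's code):
// delta_w = -learningRate * gradient + momentum * previous delta_w
public class MomentumUpdate {
    static double step(double gradient, double learningRate,
                       double momentum, double previousDelta) {
        return -learningRate * gradient + momentum * previousDelta;
    }
    public static void main(String[] args) {
        // two steps with the same gradient: the second step is larger
        double d1 = step(2.0, 0.1, 0.9, 0.0); // -0.2
        double d2 = step(2.0, 0.1, 0.9, d1);  // -0.2 + 0.9 * -0.2 = -0.38
        System.out.println(d1 + " " + d2);
    }
}
```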
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="12"></a>Can I use different learning rates for each layer?</b></u><br>
                    Yes. You can even use a different learning rate for each 
                    neuron: every neuron has a learning rate coefficient. If you 
                    train the net with a learning rate of 0.5 and the 
                    coefficient of a neuron is 0.8, that neuron is trained with 
                    an effective learning rate of 0.5 * 0.8 = 0.4. Note that the 
                    multilayer perceptron constructor gives all units in the 
                    same layer the same coefficient; if you want them to differ, 
                    you have to change them manually.<br>
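                    The per-neuron learning rate described above is a simple 
                    product, mirroring the FAQ's worked example (a hypothetical 
                    helper name, not the package's API):

```java
// Effective per-neuron learning rate: the global rate scaled by the
// neuron's learning rate coefficient. Mirrors the FAQ's worked example:
// a global rate of 0.5 with a coefficient of 0.8 gives 0.4.
public class EffectiveRate {
    static double effectiveRate(double globalRate, double neuronCoefficient) {
        return globalRate * neuronCoefficient;
    }
    public static void main(String[] args) {
        System.out.println(effectiveRate(0.5, 0.8)); // 0.4
    }
}
```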
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="13"></a>What is the function of the class 'Neuron'?</b></u><br>
                    It simply represents a neuron, the basic unit of any neural 
                    network, responsible for the processing. The class has two 
                    constructors, one for input units and the other for hidden 
                    and output units. A neuron object includes information about 
                    itself (activation function, flatness, learning coefficient, 
                    momentum rate, current output, layer number, etc.) and also 
                    information about all synapses related to it. It can update 
                    its output and train itself.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="14"></a>What is the function of the class 'Synapse'?</b></u><br>
                    It represents a synapse between two neurons; for every 
                    connection between two neurons there should be a synapse 
                    object. The class has no methods, only a single 
                    constructor.<br>
                    <br>
                    <a href="#top">top</a><br>
                    <br>
                    <u><b><a name="15"></a>What is the function of the class 'NeuralNet'?</b></u><br>
                    It simply represents a neural net. It has two constructors. 
                    One is for creating a multilayer perceptron with given 
                    parameters. The other is a more general constructor. Using 
                    it you can create generalized feed forward nets, modular 
                    feed forward nets, as well as multilayer perceptrons. It 
                    reads a configuration file and creates a net out of the 
                    parameters in this file. If you don't code your own editing 
                    application, you have to edit the file manually.<br>
                    It has methods for saving the current configuration, loading 
