
network.h

Also source code for a genetic algorithm.
/***************************************************************************
                          network.h  -  description
                             -------------------
    copyright            : (C) 2001, 2002 by Matt Grover
    email                : mgrover@amygdala.org
 ***************************************************************************/
/***************************************************************************
 *                                                                         *
 *   This program is free software; you can redistribute it and/or modify  *
 *   it under the terms of the GNU General Public License as published by  *
 *   the Free Software Foundation; either version 2 of the License, or     *
 *   (at your option) any later version.                                   *
 *                                                                         *
 ***************************************************************************/

#ifndef NETWORK_H
#define NETWORK_H

using namespace std;

#include <amygdala/types.h>

// check for g++ version 3.0.0 or higher
#if GCC_VERSION >= 30000
    #include <ext/hash_map>
#else
    #include <hash_map>
#endif

#include <string>
#include <queue>
#include <vector>   // for the vector<Synapse*> axon and delayed-spike members
#include <amygdala/mpspikeinput.h>

enum NEvent { NOACTION, RMSPIKE, SPIKE, INPUTSPIKE, RESPIKE };

class Neuron;
class FunctionLookup;
class Layer;
class SpikeInput;
class Synapse;

/** @class Network network.h amygdala/network.h
 * @brief This class manages the NN as it runs.
 *
 * Network acts as a container for Neurons and Layers and
 * also coordinates spike transmission between neurons.
 * Amygdala supports multi-threading through the use of the
 * MpNetwork class which allows neural nets to be partitioned
 * across multiple Network objects. MpNetwork handles all communication
 * between Networks and Neurons running in separate threads. This
 * model will be extended in the future to allow Amygdala networks
 * to run on clustered computer systems.
 * <P>If there is a need to run two or more entirely separate
 * neural nets at the same time, it is recommended that they be put into
 * separate processes using fork() (or any other method that is useful).
 * It would be possible to run separate networks by using MpNetwork, but
 * the results may not be what is expected since MpNetwork is designed with
 * the assumption that all MpNetwork objects belong to the same physical neural net.
 * @see MpNetwork
 * @author Matt Grover
 */
class Network {
public:
    //friend void Neuron::SendSpike(AmTimeInt& now);
    friend class Neuron;
    friend void MpSpikeInput::ReadInputBuffer();

    Network();
    virtual ~Network();

    /**
     * Schedule an event. Only SPIKE and INPUTSPIKE are currently used
     * for the eventType.
     * @param eventType The action that the neuron should carry out.
     * @param eventTime Time that the event should be triggered.
     * @param reqNrn Pointer to the neuron that will execute the event. */
    void ScheduleNEvent(NEvent eventType,
                        AmTimeInt eventTime,
                        Neuron* reqNrn);

    /**
     * Schedule an event using the Neuron's ID instead of a pointer to
     * the Neuron.
     * @param eventType The action that the neuron should carry out.
     * @param eventTime Time that the event should be triggered.
     * @param reqNrnId ID of neuron that will execute the event. */
    void ScheduleNEvent(NEvent eventType,
                        AmTimeInt eventTime,
                        AmIdInt reqNrnId);

    /** Add a neuron to the network with a pre-determined neuron ID.
     * @param nRole One of INPUTNEURON, HIDDENNEURON, or OUTPUTNEURON.
     * @param nId The desired neuron ID.
     * @return A pointer to the new neuron on success. Null pointer
     * on failure. */
    Neuron* AddNeuron(NeuronRole nRole, AmIdInt nId);

    /** Add a Neuron to the network.
     * @param nRole One of INPUTNEURON, HIDDENNEURON, or OUTPUTNEURON.
     * @param nrn Pointer to a Neuron. Network assumes ownership
     * of the pointer once it has been added to the network.
     * @return A pointer to the neuron is returned on success, a null
     * pointer on failure. */
    Neuron* AddNeuron(NeuronRole nRole, Neuron* nrn);

    /** Connect two neurons together.
     * @param preSynapticNeuron The pre-synaptic (originating) neuron ID.
     * @param postSynapticNeuron The post-synaptic (receiving) neuron ID.
     * @param weight A weight value in the range [-1, 1].
     * @param delay The spike transmission delay in microseconds. Delay values
     * will be rounded to the nearest whole multiple of the time step size.
     * @return True on success. */
    bool ConnectNeurons(AmIdInt preSynapticNeuron,
                        AmIdInt postSynapticNeuron,
                        float weight,
                        AmTimeInt delay=0);

    /** Connect two neurons together.
     * @param preSynapticNeuron The pre-synaptic (originating) neuron pointer.
     * @param postSynapticNeuron The post-synaptic (receiving) neuron pointer.
     * @param weight A weight value in the range [-1, 1].
     * @param delay The spike transmission delay in microseconds. Delay values
     * will be rounded to the nearest whole multiple of the time step size.
     * @return True on success. */
    bool ConnectNeurons(Neuron* preSynapticNeuron,
                        Neuron* postSynapticNeuron,
                        float weight,
                        AmTimeInt delay=0);

    /** Run the simulation. The simulation will continue
     * until either the event queues are completely empty
     * or the network has run for the maximum allotted time. If
     * streaming input or MP mode is in use, the simulation will
     * continue until maxRunTime has been reached even if the event
     * queues are empty.
     * @param maxRunTime Maximum time to run before returning,
     * in microseconds. */
    void Run(AmTimeInt maxRunTime);

    /** @return Number of neurons in the network. */
    int Size() { return net.size(); }

    /** @return The number of layers in the network. */
    int LayerCount() { return layers.size(); }

    /** Define an iterator to parse through all Neurons
     * contained in the network. */
    typedef hash_map<AmIdInt, Neuron*>::const_iterator const_iterator;

    /** @return The first position of the iterator. */
    const_iterator begin() const { return net.begin(); }

    /** @return The final position of the iterator. */
    const_iterator end() const { return net.end(); }

    /** Define an iterator to parse through all Layers
     * contained in the network. */
    typedef hash_map<unsigned int, Layer*>::iterator layer_iterator;

    /** @return The first position of the layer iterator. */
    layer_iterator layer_begin() { return layers.begin(); }

    /** @return The final position of the layer iterator. */
    layer_iterator layer_end() { return layers.end(); }

    /** Add a layer to the network. The Network assumes ownership over the
     * Layer object and the Neuron objects contained within that Layer.
     * @param newLayer Pointer to the Layer being added. */
    void AddLayer(Layer* newLayer);

    /** @return The largest neuron ID currently in the network. */
    AmIdInt MaxNeuronId() { return (nextNeuronId - 1); }

    /** @return True if the Network contains Layers. */
    bool Layered() { return isLayered; }

    /** Set the size of the simulation time steps. This
     * must be set before Neurons are added.
     * @param stepSize Time step size in microseconds. This defaults
     * to 100 us. */
    static void SetTimeStepSize(AmTimeInt stepSize);

    /** @return The size of the time step in microseconds. */
    static AmTimeInt TimeStepSize() { return simStepSize; }

    /** @return The current simulation time in microseconds. */
    static AmTimeInt SimTime() { return simTime; }

    /** Get a reference to a specific Neuron.
     * @param nId The neuron ID of the Neuron.
     * @return Pointer to the neuron with ID nId. */
    Neuron* GetNeuron(AmIdInt nId) { return net[nId]; }

    /** @return Pointer to the SpikeInput object that is being used. */
    SpikeInput* GetSpikeInput() { return spikeInput; }

    /** Specify a SpikeInput object for the Network to use.
     * Network creates a SimpleSpikeInput object in the constructor,
     * so there is no need to call this function unless a different
     * SpikeInput class is needed. Network assumes ownership of the
     * pointer, and the existing SpikeInput object is destroyed
     * when a new one is passed in.
     * @param sIn Pointer to the SpikeInput object. */
    void SetSpikeInput(SpikeInput* sIn);

    /** @return True if streaming input is in use. */
    bool StreamingInput() { return streamingInput; }

    /** Toggle the streaming input state.
     * @param streaming A value of true turns on streaming
     * input. Streaming input should be used if the input spike queue
     * cannot be filled before calling Network::Run(). */
    void StreamingInput(bool streaming) { streamingInput = streaming; }

    /** Reset the simulation time to 0. */
    static void ResetSimTime();

    /** Toggle the training mode.
     * @param tMode True if training should be enabled. */
    void SetTrainingMode(bool tMode);

    /** @return True if training is enabled. */
    bool GetTrainingMode() const { return trainingMode; }

    /** @return A pointer to the Layer with ID layerId. */
    Layer* GetLayer(AmIdInt layerId);

    /** Turn on spike batching (send all spikes to a neuron in
     * a group, rather than one at a time). This is turned on
     * automatically if spike delays are used and cannot be
     * turned off again once this function has been called. */
    void EnableSpikeBatching() { spikeDelaysOn = true; }

protected:
    /** Schedule the transmission of a spike down an axon. This
     *  may be done in order to implement spike batching or to
     *  model transmission delays. This is normally called from
     *  Neuron.
     *  @param axon The axon vector from a Neuron. A spike will
     *  be scheduled to cross each Synapse after the delay time has
     *  passed. Delay times are stored in Synapse and set when
     *  Neurons are connected together.
     *  @see Neuron::SendSpike(), Network::ConnectNeurons(). */
    void ScheduleSpikeDelay(vector<Synapse*>& axon);

    /** Increment the simTime variable to the next time step.
     * In multithreaded mode, simTime is not incremented until
     * all Nodes have called IncrementSimTime(). */
    virtual void IncrementSimTime();

    int pspLSize;                       // Size of lookup tables
    int pspLRes;                        // Lookup table timestep resolution
    static AmTimeInt simStepSize;
    int netSize;
    bool streamingInput;
    AmIdInt nextNeuronId;
    unsigned int nextLayerId;
    FunctionLookup* functionRef;        // Container for lookup tables
    SpikeInput* spikeInput;
    hash_map<AmIdInt, Neuron*> net;         // Neurons that make up net. Key is neuron ID.
    hash_map<unsigned int, Layer*> layers;  // Layer container

    /** SpikeRequest is used to keep track of scheduled spikes in the
     * event queue. The priority_queue from the STL ranks entries
     * based on the < operator (defined below). The ranking will be
     * in order of spikeTime, requestTime, and requestOrder. */
    struct SpikeRequest {
        AmTimeInt spikeTime;        // Desired time of spike
        AmTimeInt requestTime;      // Time SpikeRequest was entered in queue
        unsigned int requestOrder;  // Entry number within a given time step.
        Neuron* requestor;          // Neuron scheduling spike

        // operator< overloaded to make the priority_queue happy
        bool operator<(const SpikeRequest& sr) const
        {
            if (spikeTime != sr.spikeTime) {
                return spikeTime > sr.spikeTime;
            }
            else if (requestTime != sr.requestTime) {
                return requestTime > sr.requestTime;
            }
            else {
                return requestOrder < sr.requestOrder;
            }
        }
    };

    priority_queue<SpikeRequest> eventQ;    // Main event queue
    priority_queue<SpikeRequest> inputQ;    // Queue of inputs into network
    vector< vector<Synapse*> > delayedSpikeQ;
    AmTimeInt currSpikeDelayOffset;
    AmTimeInt maxOffset;
    bool spikeDelaysOn;
    AmTimeInt maxSpikeDelay;
    Synapse* maxSpikeDelaySyn;
    static AmTimeInt simTime;               // Current simulation time
    static unsigned int runCount;           // Number of Network objects calling Run()
    unsigned int spikeBatchCount;

    inline void IncrementDelayOffset();

private:
    /** Call Neuron::InputSpike() for each delayed spike
     *  scheduled for the next offset. This function will
     *  increment currSpikeDelayOffset. */
    void SendDelayedSpikes();

    /** Set default network parameters. */
    void SetDefaults();

    /** Initialize the delayed spike queue. */
    void InitializeDelayedSpikeQ();

    unsigned int eventRequestCount;         // Counter for SpikeRequest.requestOrder
    bool isLayered;
    bool trainingMode;
};

#endif
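A minimal usage sketch of the public API declared above, assuming the Amygdala headers are installed and that NeuronRole provides the INPUTNEURON and OUTPUTNEURON values referenced in the doc comments. It has not been compiled against the library; the IDs, weight, delay, and timings are illustrative only.

// Hypothetical sketch built only from the declarations in network.h.
#include <amygdala/network.h>

int main()
{
    Network::SetTimeStepSize(100);      // 100 us time steps (the documented default)

    Network net;

    // Build a tiny two-neuron net: one input neuron, one output neuron,
    // with explicit neuron IDs chosen for this example.
    Neuron* in  = net.AddNeuron(INPUTNEURON,  0);
    Neuron* out = net.AddNeuron(OUTPUTNEURON, 1);
    if (!in || !out) {
        return 1;                       // AddNeuron() returns a null pointer on failure
    }

    // Connect input -> output with weight 0.5 and a 200 us transmission delay.
    net.ConnectNeurons(in, out, 0.5f, 200);

    // Schedule an input spike on the input neuron at t = 1000 us.
    net.ScheduleNEvent(INPUTSPIKE, 1000, in);

    // Run for at most 10 ms of simulated time.
    net.Run(10000);

    return net.Size() == 2 ? 0 : 1;
}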

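The SpikeRequest comparator above inverts the comparison on spikeTime and requestTime so that std::priority_queue (a max-heap by operator<) pops the earliest scheduled spike first. A self-contained sketch, using a simplified struct with the same comparator and an assumed unsigned typedef standing in for AmTimeInt, shows that behavior:

// Standalone demo (not part of Amygdala): earliest spikeTime pops first.
#include <cstdio>
#include <queue>

typedef unsigned int AmTimeInt;     // assumption for this sketch only

struct DemoRequest {
    AmTimeInt spikeTime;
    AmTimeInt requestTime;
    unsigned int requestOrder;

    // Same comparison scheme as Network::SpikeRequest::operator<():
    // "greatest" element == smallest spikeTime, then smallest requestTime.
    bool operator<(const DemoRequest& sr) const
    {
        if (spikeTime != sr.spikeTime)     return spikeTime > sr.spikeTime;
        if (requestTime != sr.requestTime) return requestTime > sr.requestTime;
        return requestOrder < sr.requestOrder;
    }
};

int main()
{
    std::priority_queue<DemoRequest> q;
    DemoRequest a = { 300, 0, 0 };
    DemoRequest b = { 100, 0, 1 };
    DemoRequest c = { 200, 0, 2 };
    q.push(a); q.push(b); q.push(c);

    // Pops in ascending spikeTime order: 100, 200, 300.
    while (!q.empty()) {
        std::printf("spikeTime=%u\n", q.top().spikeTime);
        q.pop();
    }
    return 0;
}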