
network.h

(Uploader's note: this code has seen heavy use.)
/***************************************************************************
                          network.h  -  description
                             -------------------
    copyright            : (C) 2001, 2002 by Matt Grover
    email                : mgrover@amygdala.org
 ***************************************************************************/

/***************************************************************************
 *                                                                         *
 *   This program is free software; you can redistribute it and/or modify  *
 *   it under the terms of the GNU General Public License as published by  *
 *   the Free Software Foundation; either version 2 of the License, or     *
 *   (at your option) any later version.                                   *
 *                                                                         *
 ***************************************************************************/

#ifndef NETWORK_H
#define NETWORK_H

using namespace std;

#include <amygdala/types.h>

// Check for g++ version 3.0.0 or higher
#if GCC_VERSION >= 30000
    #include <ext/hash_map>
#else
    #include <hash_map>
#endif

#include <string>
#include <queue>
#include <amygdala/mpspikeinput.h>

enum NEvent { NOACTION, RMSPIKE, SPIKE, INPUTSPIKE, RESPIKE };

class Neuron;
class FunctionLookup;
class Layer;
class SpikeInput;
class Synapse;

/** @class Network network.h amygdala/network.h
 * @brief This class manages the NN as it runs.
 *
 * Network acts as a container for Neurons and Layers and
 * also coordinates spike transmission between neurons.
 * Amygdala supports multi-threading through the use of the
 * MpNetwork class, which allows neural nets to be partitioned
 * across multiple Network objects. MpNetwork handles all communication
 * between Networks and Neurons running in separate threads. This
 * model will be extended in the future to allow Amygdala networks
 * to run on clustered computer systems.
 * <P>If there is a need to run two or more entirely separate
 * neural nets at the same time, it is recommended that they be put into
 * separate processes using fork() (or any other method that is useful).
 * It would be possible to run separate networks by using MpNetwork, but
 * the results may not be what is expected, since MpNetwork is designed with
 * the assumption that all MpNetwork objects belong to the same physical
 * neural net.
 * @see MpNetwork
 * @author Matt Grover
 */
class Network {
public:
    //friend void Neuron::SendSpike(AmTimeInt& now);
    friend class Neuron;
    friend void MpSpikeInput::ReadInputBuffer();

    Network();
    virtual ~Network();

    /**
     * Schedule an event. Only SPIKE and INPUTSPIKE are currently used
     * for the eventType.
     * @param eventType The action that the neuron should carry out.
     * @param eventTime Time that the event should be triggered.
     * @param reqNrn Pointer to the neuron that will execute the event. */
    void ScheduleNEvent(NEvent eventType,
                        AmTimeInt eventTime,
                        Neuron* reqNrn);

    /**
     * Schedule an event using the Neuron's ID instead of a pointer to
     * the Neuron.
     * @param eventType The action that the neuron should carry out.
     * @param eventTime Time that the event should be triggered.
     * @param reqNrnId ID of the neuron that will execute the event. */
    void ScheduleNEvent(NEvent eventType,
                        AmTimeInt eventTime,
                        AmIdInt reqNrnId);

    /** Add a neuron to the network with a pre-determined neuron ID.
     * @param nRole One of INPUTNEURON, HIDDENNEURON, or OUTPUTNEURON.
     * @param nId The desired neuron ID.
     * @return A pointer to the new neuron on success. Null pointer
     * on failure. */
    Neuron* AddNeuron(NeuronRole nRole, AmIdInt nId);

    /** Add a Neuron to the network.
     * @param nRole One of INPUTNEURON, HIDDENNEURON, or OUTPUTNEURON.
     * @param nrn Pointer to a Neuron. Network assumes ownership
     * of the pointer once it has been added to the network.
     * @return A pointer to the neuron is returned on success, a null
     * pointer on failure. */
    Neuron* AddNeuron(NeuronRole nRole, Neuron* nrn);

    /** Connect two neurons together.
     * @param preSynapticNeuron The pre-synaptic (originating) neuron ID.
     * @param postSynapticNeuron The post-synaptic (receiving) neuron ID.
     * @param weight A weight value in the range [-1, 1].
     * @param delay The spike transmission delay in microseconds. Delay values
     * will be rounded to the nearest whole multiple of the time step size.
     * @return True on success. */
    bool ConnectNeurons(AmIdInt preSynapticNeuron,
                        AmIdInt postSynapticNeuron,
                        float weight,
                        AmTimeInt delay=0);

    /** Connect two neurons together.
     * @param preSynapticNeuron The pre-synaptic (originating) neuron pointer.
     * @param postSynapticNeuron The post-synaptic (receiving) neuron pointer.
     * @param weight A weight value in the range [-1, 1].
     * @param delay The spike transmission delay in microseconds. Delay values
     * will be rounded to the nearest whole multiple of the time step size.
     * @return True on success. */
    bool ConnectNeurons(Neuron* preSynapticNeuron,
                        Neuron* postSynapticNeuron,
                        float weight,
                        AmTimeInt delay=0);

    /** Run the simulation. The simulation will continue
     * until either the event queues are completely empty
     * or the network has run for the maximum allotted time. If
     * streaming input or MP mode is in use, the simulation will
     * continue until maxRunTime has been reached even if the event
     * queues are empty.
     * @param maxRunTime Maximum time to run before returning,
     * in microseconds. */
    void Run(AmTimeInt maxRunTime);

    /** @return Number of neurons in the network. */
    int Size() { return net.size(); }

    /** @return The number of layers in the network. */
    int LayerCount() { return layers.size(); }

    /** Define an iterator to parse through all Neurons
     * contained in the network. */
    typedef hash_map<AmIdInt, Neuron*>::const_iterator const_iterator;

    /** @return The first position of the iterator. */
    const_iterator begin() const { return net.begin(); }

    /** @return The final position of the iterator. */
    const_iterator end() const { return net.end(); }

    /** Define an iterator to parse through all Layers
     * contained in the network. */
    typedef hash_map<unsigned int, Layer*>::iterator layer_iterator;

    /** @return The first position of the layer iterator. */
    layer_iterator layer_begin() { return layers.begin(); }

    /** @return The final position of the layer iterator. */
    layer_iterator layer_end() { return layers.end(); }

    /** Add a layer to the network. The Network assumes ownership of the
     * Layer object and the Neuron objects contained within that Layer.
     * @param newLayer Pointer to the Layer being added. */
    void AddLayer(Layer* newLayer);

    /** @return The largest neuron ID currently in the network. */
    AmIdInt MaxNeuronId() { return (nextNeuronId - 1); }

    /** @return True if the Network contains Layers. */
    bool Layered() { return isLayered; }

    /** Set the size of the simulation time steps. This
     * must be set before Neurons are added.
     * @param stepSize Time step size in microseconds. Defaults
     * to 100 us. */
    static void SetTimeStepSize(AmTimeInt stepSize);

    /** @return The size of the time step in microseconds. */
    static AmTimeInt TimeStepSize() { return simStepSize; }

    /** @return The current simulation time in microseconds. */
    static AmTimeInt SimTime() { return simTime; }

    /** Get a pointer to a specific Neuron.
     * @param nId The neuron ID of the Neuron.
     * @return Pointer to the neuron with ID nId. */
    Neuron* GetNeuron(AmIdInt nId) { return net[nId]; }

    /** @return Pointer to the SpikeInput object that is being used. */
    SpikeInput* GetSpikeInput() { return spikeInput; }

    /** Specify a SpikeInput object for the Network to use.
     * Network creates a SimpleSpikeInput object in the constructor,
     * so there is no need to call this function unless a different
     * SpikeInput class is needed. Network assumes ownership of the
     * pointer, and the existing SpikeInput object is destroyed
     * when a new one is passed in.
     * @param sIn Pointer to the SpikeInput object. */
    void SetSpikeInput(SpikeInput* sIn);

    /** @return True if streaming input is in use. */
    bool StreamingInput() { return streamingInput; }

    /** Toggle the streaming input state.
     * @param streaming A value of true turns on streaming
     * input. Streaming input should be used if the input spike queue
     * cannot be filled before calling Network::Run(). */
    void StreamingInput(bool streaming) { streamingInput = streaming; }

    /** Reset the simulation time to 0. */
    static void ResetSimTime();

    /** Toggle the training mode.
     * @param tMode True if training should be enabled. */
    void SetTrainingMode(bool tMode);

    /** @return True if training is enabled. */
    bool GetTrainingMode() const { return trainingMode; }

    /** @return A pointer to the Layer with ID layerId. */
    Layer* GetLayer(AmIdInt layerId);

    /** Turn on spike batching (send all spikes to a neuron in
     * a group, rather than one at a time). This is turned on
     * automatically if spike delays are used and cannot be
     * turned off again once this function has been called. */
    void EnableSpikeBatching() { spikeDelaysOn = true; }

protected:
    /** Schedule the transmission of a spike down an axon. This
     *  may be done in order to implement spike batching or to
     *  model transmission delays. This is normally called from
     *  Neuron.
     *  @param axon The axon vector from a Neuron. A spike will
     *  be scheduled to cross each Synapse after the delay time has
     *  passed. Delay times are stored in Synapse and set when
     *  Neurons are connected together.
     *  @see Neuron::SendSpike(), Network::ConnectNeurons(). */
    void ScheduleSpikeDelay(vector<Synapse*>& axon);

    /** Increment the simTime variable to the next time step.
     * In multithreaded mode, simTime is not incremented until
     * all Nodes have called IncrementSimTime(). */
    virtual void IncrementSimTime();

    int pspLSize;                       // Size of lookup tables
    int pspLRes;                        // Lookup table timestep resolution
    static AmTimeInt simStepSize;
    int netSize;
    bool streamingInput;
    AmIdInt nextNeuronId;
    unsigned int nextLayerId;
    FunctionLookup* functionRef;        // Container for lookup tables
    SpikeInput* spikeInput;
    hash_map<AmIdInt, Neuron*> net;        // Neurons that make up the net. Key is neuron ID.
    hash_map<unsigned int, Layer*> layers; // Layer container

    /** SpikeRequest is used to keep track of scheduled spikes in the
     * event queue. The priority_queue from the STL ranks entries
     * based on the < operator (defined below). The ranking will be
     * in order of spikeTime, requestTime, and requestOrder. */
    struct SpikeRequest {
        AmTimeInt spikeTime;        // Desired time of spike
        AmTimeInt requestTime;      // Time SpikeRequest was entered in queue
        unsigned int requestOrder;  // Entry number within a given time step
        Neuron* requestor;          // Neuron scheduling the spike

        // operator< overloaded to make the priority_queue happy
        bool operator<(const SpikeRequest& sr) const
        {
            if (spikeTime != sr.spikeTime) {
                return spikeTime > sr.spikeTime;
            }
            else if (requestTime != sr.requestTime) {
                return requestTime > sr.requestTime;
            }
            else {
                return requestOrder < sr.requestOrder;
            }
        }
    };

    priority_queue<SpikeRequest> eventQ;    // Main event queue
    priority_queue<SpikeRequest> inputQ;    // Queue of inputs into the network
    vector< vector<Synapse*> > delayedSpikeQ;
    AmTimeInt currSpikeDelayOffset;
    AmTimeInt maxOffset;
    bool spikeDelaysOn;
    AmTimeInt maxSpikeDelay;
    Synapse* maxSpikeDelaySyn;
    static AmTimeInt simTime;               // Current simulation time
    static unsigned int runCount;           // Number of Network objects calling Run()
    unsigned int spikeBatchCount;

    inline void IncrementDelayOffset();

private:
    /** Call Neuron::InputSpike() for each delayed spike
     *  scheduled for the next offset. This function will
     *  increment currSpikeDelayOffset. */
    void SendDelayedSpikes();

    /** Set default network parameters. */
    void SetDefaults();

    /** Initialize the delayed spike queue. */
    void InitializeDelayedSpikeQ();

    unsigned int eventRequestCount;         // Counter for SpikeRequest.requestOrder
    bool isLayered;
    bool trainingMode;
};

#endif  // NETWORK_H
