<html><head><title>Generated Documentation</title></head><body> <img src="headerimage.png"> <br><br><table><tr><td><big><big><big style="font-family: arial;"><b>GNeuralNet</b></big></big></big><br>extends <a href="type_GSupervisedLearner.html">GSupervisedLearner</a><br></td><td></td></tr></table><br><br>
<big><big><i>Constructors (public)</i></big></big><br><div style="margin-left: 40px;"><big><b>GNeuralNet</b></big>(<a href="type_GArffRelation.html">GArffRelation</a>* pRelation)<br></div><br>
<big><big><i>Destructors</i></big></big><br><div style="margin-left: 40px;"><big><b>~GNeuralNet</b></big>()<br></div><br>
<big><big><i>Virtual (public)</i></big></big><br><div style="margin-left: 40px;">void <big><b>Eval</b></big>(double* pRow)<br><div style="margin-left: 80px;"><font color=brown> Evaluates the input values in the provided row and deduces the output values.</font></div><br>
void <big><b>Train</b></big>(<a href="type_GArffData.html">GArffData</a>* pData)<br><div style="margin-left: 80px;"><font color=brown> Splits the provided data into a training set and a validation set, and trains the network. To set the ratio, use SetTrainingPortion.</font></div><br></div><br>
<big><big><i>Public</i></big></big><br><div style="margin-left: 40px;">void <big><b>AddLayer</b></big>(int nNodes)<br><div style="margin-left: 80px;"><font color=brown> Adds a layer to the network. (The input and output layers are implicit, so you only need to add the hidden layers before calling Train.) The first hidden layer you add will be adjacent to the output layer. The last hidden layer you add will be adjacent to the input layer. It's not common to add more than two hidden layers because that results in long training times.</font></div><br>
<a href="type_GArffRelation.html">GArffRelation</a>* <big><b>GetInternalRelation</b></big>()<br><div style="margin-left: 80px;"><font color=brown> Returns the relation corresponding to the internal data. This relation will contain all continuous attributes, and its inputs and outputs correspond to the actual input and output neurons in the network topology.</font></div><br>
int <big><b>GetWeightCount</b></big>()<br><div style="margin-left: 80px;"><font color=brown> Returns the number of weights in the network.</font></div><br>
void <big><b>GetWeights</b></big>(double* pOutWeights)<br><div style="margin-left: 80px;"><font color=brown> Serializes the network weights into an array of doubles. The number of doubles in the array can be determined by calling GetWeightCount().</font></div><br>
void <big><b>ReleaseInternalData</b></big>()<br><div style="margin-left: 80px;"><font color=brown> For efficiency purposes, the neural net produces an internal copy of the training data with values normalized to ranges that the neural net can handle. This method tells the neural net that training is complete, so it's okay to free up that memory.</font></div><br>
void <big><b>SetAcceptableMeanSquareError</b></big>(double d)<br><div style="margin-left: 80px;"><font color=brown> If the mean squared error ever falls below this value, training will stop. Note that if you use this as the primary stopping criterion, you will be prone to overfitting. To avoid overfitting, keep this number very small so that training stops based on other conditions. This is more of a safety harness for cases where overfitting is okay (e.g., compression), so that training stops once the results are good enough even if they could keep improving.</font></div><br>
void <big><b>SetIterationsPerValidationCheck</b></big>(int n)<br><div style="margin-left: 80px;"><font color=brown> Sets the number of iterations performed before each time the network is tested against the validation set to determine whether a new best set of weights has been found and whether the termination condition has been reached. (An iteration is defined as a single pass through all rows in the training set.)</font></div><br>
void <big><b>SetLearningDecay</b></big>(double d)<br><div style="margin-left: 80px;"><font color=brown> Sets the rate at which the learning rate decays. (The learning rate will be multiplied by this value after every pass through the training data.)</font></div><br>
void <big><b>SetLearningRate</b></big>(double d)<br><div style="margin-left: 80px;"><font color=brown> Sets the learning rate, which controls the rate of convergence.</font></div><br>
void <big><b>SetMaximumEpochs</b></big>(int n)<br><div style="margin-left: 80px;"><font color=brown> Sets the maximum number of epochs (passes through all the training data) to perform.</font></div><br>
void <big><b>SetMomentum</b></big>(double d)<br><div style="margin-left: 80px;"><font color=brown> Momentum has the effect of speeding convergence and helping the gradient descent algorithm move past some local minima.</font></div><br>
void <big><b>SetRunEpochs</b></big>(int n)<br><div style="margin-left: 80px;"><font color=brown> Training will terminate when this number of epochs is performed without finding a new best epoch on the validation set.</font></div><br>
void <big><b>SetTrainingPortion</b></big>(double d)<br><div style="margin-left: 80px;"><font color=brown> Sets the portion of the data that will be used for training. The rest will be used for validation.</font></div><br>
void <big><b>SetWeights</b></big>(double* pWeights)<br><div style="margin-left: 80px;"><font color=brown> Sets all the weights from an array of doubles. The number of doubles in the array can be determined by calling GetWeightCount().</font></div><br>
int <big><b>Train</b></big>(<a href="type_GArffData.html">GArffData</a>* pTrainingData, <a href="type_GArffData.html">GArffData</a>* pValidationData)<br><div style="margin-left: 80px;"><font color=brown> Trains the network until the termination condition is met. Returns the number of epochs required to train it. This is an all-in-one method that calls TrainInit, followed by several calls to TrainEpoch and TrainValidate.</font></div><br>
int <big><b>TrainBatch</b></big>(<a href="type_GArffData.html">GArffData</a>* pTrainingData, <a href="type_GArffData.html">GArffData</a>* pValidationData)<br><div style="margin-left: 80px;"><font color=brown> Same as Train, except it uses batch updates instead of incremental updates.</font></div><br>
void <big><b>TrainEpoch</b></big>()<br><div style="margin-left: 80px;"><font color=brown> Trains for a single epoch.</font></div><br>
void <big><b>TrainInit</b></big>(<a href="type_GArffData.html">GArffData</a>* pTrainingData, <a href="type_GArffData.html">GArffData</a>* pValidationData)<br><div style="margin-left: 80px;"><font color=brown> This must be called before you call TrainEpoch.</font></div><br>
double <big><b>TrainValidate</b></big>()<br><div style="margin-left: 80px;"><font color=brown> Measures the mean squared error against the internal validation set.</font></div><br></div><br>
<big><big><i>Protected</i></big></big><br><div style="margin-left: 40px;">void <big><b>Criticize</b></big>(double* pModel)<br><div style="margin-left: 80px;"><font color=brown> Computes the error on the output nodes and uses backpropagation to assign the appropriate amount of error to all upstream nodes.</font></div><br>
void <big><b>EvalInternal</b></big>(double* pRow)<br><div style="margin-left: 80px;"><font color=brown> Evaluates a row of data for the internal relation. It doesn't set any output values; it just leaves them in the output nodes, so it's safe to pass the original training data into this method.</font></div><br>
void <big><b>ExternalToInternalData</b></big>(<a href="type_GArffData.html">GArffData</a>* pExternal, <a href="type_GArffData.html">GArffData</a>* pInternal)<br><div style="margin-left: 80px;"><font color=brown> Converts a collection of external data to the internal format.</font></div><br>
void <big><b>InputsToInternal</b></big>(double* pExternal, double* pInternal)<br><div style="margin-left: 80px;"><font color=brown> Converts all the input values to the internal representation.</font></div><br>
void <big><b>MakeInputLayer</b></big>()<br><div style="margin-left: 80px;"><font color=brown> Adds the input layer with the number of nodes corresponding to the number of input attributes in the relation.</font></div><br>
void <big><b>MakeInternalRelationAndOutputLayer</b></big>()<br>void <big><b>MeasureMinAndRanges</b></big>(<a href="type_GArffData.html">GArffData</a>* pTrainingData)<br><div style="margin-left: 80px;"><font color=brown> Measures the min and range of every attribute in the external training set. This data is used when converting continuous values between the internal and external formats.</font></div><br>
void <big><b>OutputsToExternal</b></big>(double* pInternal, double* pExternal)<br><div style="margin-left: 80px;"><font color=brown> Converts the internal output values to the external representation.</font></div><br>
void <big><b>OutputsToInternal</b></big>(double* pExternal, double* pInternal)<br><div style="margin-left: 80px;"><font color=brown> Converts all the output values to the internal representation.</font></div><br>
void <big><b>PrintNeurons</b></big>()<br>void <big><b>ReadOutput</b></big>(double* pRow)<br>void <big><b>RestoreBestWeights</b></big>()<br><div style="margin-left: 80px;"><font color=brown> Restores the best known set of weights.</font></div><br>
void <big><b>UpdateBestWeights</b></big>()<br><div style="margin-left: 80px;"><font color=brown> Remembers the current weights as the best set known so far.</font></div><br></div><br></body></html>