//Tanks
//Copyright John Manslow
//29/09/2001

// TanksDoc.cpp : implementation of the CTanksDoc class
//

#include "stdafx.h"
#include "Tanks.h"
#include "TanksDoc.h"

#ifdef _DEBUG
#define new DEBUG_NEW
#undef THIS_FILE
static char THIS_FILE[] = __FILE__;
#endif

const double pi=3.1415926535;

#include "CWorld.h"
#include "CTank.h"
#include "CProjectile.h"
#include "CMLP.h"
#include "CConditionalDistribution.h"
#include "CUnconditionalDistribution.h"
#include "math.h"
#include "fstream.h"

//This information is worked out from the exemplar files so these values aren't too
//important.
unsigned long ulNumberOfPatterns=1000;
unsigned long ulNumberOfErrorPatterns=1000;

//The number of inputs to the neural network that calculates the optimal barrel angle and to the conditional
//aiming error distribution model
unsigned long ulNumberOfInputs=3;
unsigned long ulNumberOfErrorInputs=1;

//Ten hidden neurons are used in the barrel angle MLP, and six in the MLP component of the conditional error
//distribution model. You should generally use as few as possible, since networks with smaller numbers 
//train more quickly, operate more quickly once trained and tend to learn the exemplars more robustly.
unsigned long ulNumberOfHiddenNodes=10;
unsigned long ulNumberOfHiddenNodesInConditionalErrorModel=6;

//This many bins gives good resolution but produces overfitting in the conditional error distribution model when 
//the preceding shot was poor (since such shots are relatively rare in the exemplar data). This isn't a huge problem
//in game, since such shots are rare there too!
unsigned long ulNumberOfBinsInDistributionModels=50;
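//As an aside, the binned distribution models reduce a continuous aiming error to one of
//ulNumberOfBinsInDistributionModels histogram bins. A minimal sketch of that mapping is shown
//below; GetBinIndex is a hypothetical helper, and the actual CUnconditionalDistribution and
//CConditionalDistribution internals are not shown in this file.

```cpp
#include <cassert>

//Map a value in [dMin, dMax] to one of ulBins histogram bins, clamping
//out-of-range values to the first and last bins. (Hypothetical helper -
//a sketch of the binning idea, not the actual distribution-model code.)
unsigned long GetBinIndex(double dValue, double dMin, double dMax, unsigned long ulBins)
{
	//Fraction of the way through the range, nominally in [0, 1]
	double dFraction = (dValue - dMin) / (dMax - dMin);

	//Scale up to a bin index and clamp to the valid range
	long nBin = (long)(dFraction * (double)ulBins);
	if(nBin < 0)
	{
		nBin = 0;
	}
	if(nBin >= (long)ulBins)
	{
		nBin = (long)ulBins - 1;
	}
	return (unsigned long)nBin;
}
```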

//For the barrel angle MLP, we have just one output - the angle of the AI tank's barrel that is necessary to hit 
//the player.
unsigned long ulNumberOfOutputs=1;

//This level of error on the training samples was found to produce a barrel angle neural network that hit the
//player's tank around 98% of the time. The value of termination error will have to be changed
//for different types of error measure (see CMLP::dGetPerformance).
double dTerminationError=0.000025;

//The error measure used by the conditional probability model has different units to that used by the neural network
//that calculates the AI tank's barrel angle. In fact, for the distribution model, the units are log-probabilities. Terminating
//training at this error seems to give a good distribution of aiming errors for the AI tank
double dErrorModelTerminationError=2.455;

//Pointers to the distribution models, the barrel angle MLP, and the game world object
CConditionalDistribution *pErrorDistribution;
CUnconditionalDistribution *pUnconditionalErrorDistribution;
CMLP *pMLP;
CWorld *pWorld;

//Should always be false. See the book Game Programming Gems 2 for the original purpose of this flag. 
BOOL boGeneratingTrainingData=FALSE;

//A flag to indicate whether we're just generating training data (i.e. writing examples of
//aiming errors to disk) or actually playing a proper game. In the former case, the AI doesn't get
//a turn and the player controls the AI tank. The aiming errors made as the player does this
//are logged to disk and can be used later to train the unconditional and conditional aiming error 
//distribution models.
BOOL boGeneratingErrorTrainingData=FALSE;

//Should always be true. See the book Game Programming Gems 2 for the original purpose of this flag. 
BOOL boLoadTrainedNetwork=TRUE;

//When true, tells the game to load a pre-trained conditional distribution model. When 
//false, causes the game to create a new model and train it using examples loaded from disk
//(which were generated by playing the game with boGeneratingErrorTrainingData=TRUE).
//Note that unconditional distribution models are so quick to create that we don't bother saving them, 
//and simply re-train them every time the game runs.
BOOL boLoadTrainedErrorNetwork=TRUE;

//Information used to scale the inputs to the various models (see below)
double *pdMin;
double *pdMax;
double *pdErrorInputMin;
double *pdErrorInputMax;
double dErrorMin,dErrorMax;

//The file containing the exemplar data
#define ExemplarData "BarrelAngleExemplarData.txt"

//#define ErrorExemplarData "NewErrorExemplarData.txt"
#define ErrorExemplarData "AimingErrorExemplarData.txt"

//The file containing the trained neural network that is used to calculate the optimal barrel angle for the AI tank
#define TrainedMLP "TrainedBarrelAngleMLP.mlp"

//The file containing the trained aiming error distribution model that is used to add random
//variation to the aiming of the AI tank.
#define TrainedConditionalDistributionModel "TrainedAimingErrorModel.cdm"

IMPLEMENT_DYNCREATE(CTanksDoc, CDocument)

BEGIN_MESSAGE_MAP(CTanksDoc, CDocument)
	//{{AFX_MSG_MAP(CTanksDoc)
		// NOTE - the ClassWizard will add and remove mapping macros here.
		//    DO NOT EDIT what you see in these blocks of generated code!
	//}}AFX_MSG_MAP
END_MESSAGE_MAP()

CTanksDoc::CTanksDoc()
{
	//Seed the random number generator
	srand(unsigned(time(NULL)));

	TRACE("---------------------------------------------------------------------------------\n");
	TRACE("Initialising.\n");
	TRACE("---------------------------------------------------------------------------------\n");

	//Create the game world. The terrain will be 760 units long.
	TRACE("Creating world...\n");
	pWorld=new CWorld(760);

	//Check to make sure it was a success.
	if(pWorld)
	{
		TRACE("successful.\n");
	}
	else
	{
		//If not, inform the user and assert.
		TRACE("failed.\n");
		ASSERT(FALSE);
	}

	//Initialise these arrays with NULL pointers so we don't try to delete them later
	//even if they're unused.
	pdMin=NULL;
	pdMax=NULL;

	/******************************************************************************************************************/
	//This section of code loads training data for and trains the MLP neural network that computes the "optimal"
	//angle for the AI tank's barrel. The way this is done was described in detail in the book 
	//"Game Programming Gems 2".
	if(!boGeneratingTrainingData)
	{
		unsigned long i,j;
		TRACE("Opening training data file...");

		//Open the file containing the training data.
		ifstream *pTrainingData;
		pTrainingData=new ifstream(ExemplarData,ios::nocreate);
		if(pTrainingData && !pTrainingData->fail())
		{
			TRACE("successful.\n");
		}
		else
		{
			TRACE("failed.\n");
			ASSERT(FALSE);
		}

		//Read in the number of examples (patterns) the file contains and how many inputs. In this
		//case, there should be three inputs, corresponding to the x and y displacements between the
		//tanks and the wind speed. We don't need to load the number of outputs because in this 
		//application, we'll always have only one - the angle of the tank's barrel
		*pTrainingData>>ulNumberOfPatterns;
		*pTrainingData>>ulNumberOfInputs;
		
		TRACE("Loading training data...");

		//Allocate some memory for the example data
		double **ppdTrainingInputs;
		double **ppdTrainingTargets;

		ppdTrainingInputs=new double*[ulNumberOfPatterns];
		ppdTrainingTargets=new double*[ulNumberOfPatterns];

		//Allocate memory to record the maximum and minimum values of each input
		pdMin=new double[ulNumberOfInputs];
		pdMax=new double[ulNumberOfInputs];

		//If any of the memory allocation failed, alert the user.
		if(!(	ppdTrainingInputs &&
				ppdTrainingTargets &&
				pdMin &&
				pdMax))
		{
			TRACE("failed.\n");
			ASSERT(FALSE);
		}

		//Initialise the min/max statistics to large values to ensure that they'll be overwritten
		//when the data are analysed.
		for(i=0;i<ulNumberOfInputs;i++)
		{
			pdMin[i]=1000.0;
			pdMax[i]=-1000.0;
		}

		//This code loads the example data and records the minimum and maximum values attained by
		//each input
		for(i=0;i<ulNumberOfPatterns;i++)
		{
			//Allocate memory to store the ith input example and check to make sure it succeeded
			ppdTrainingInputs[i]=new double[ulNumberOfInputs];
			if(!ppdTrainingInputs[i])
			{
				TRACE("failed.\n");
				ASSERT(FALSE);
			}

			//Load the ith example
			for(j=0;j<ulNumberOfInputs;j++)
			{
				*pTrainingData>>ppdTrainingInputs[i][j];
			}

			//Allocate memory to store the corresponding output (barrel angle) of the ith pattern
			ppdTrainingTargets[i]=new double[1];
			if(!ppdTrainingTargets[i])
			{
				TRACE("failed.\n");
				ASSERT(FALSE);
			}

			//Load it
			*pTrainingData>>ppdTrainingTargets[i][0];

			//Maintain the record of the maximum and minimum values of each input
			for(j=0;j<ulNumberOfInputs;j++)
			{
				if(ppdTrainingInputs[i][j]<pdMin[j])
				{
					pdMin[j]=ppdTrainingInputs[i][j];
				}
				if(ppdTrainingInputs[i][j]>pdMax[j])
				{
					pdMax[j]=ppdTrainingInputs[i][j];
				}
			}
		}

		pTrainingData->close();
		delete pTrainingData;

		//Once all data has been loaded, this code scales the inputs so that they all lie in the range
		//-1 to +1. This is not strictly necessary but can often speed up neural network learning
		//quite significantly. Note that all values put into the network in future must be scaled 
		//in the same way, so the arrays of min/max values for each input have to be stored for future 
		//use. For example, look in the OnTimer function in TanksView.cpp where the inputs to the
		//network are scaled using the same min/max values used here.
		for(i=0;i<ulNumberOfPatterns;i++)
		{
			for(j=0;j<ulNumberOfInputs;j++)
			{
				//Inputs range between min and max
				ppdTrainingInputs[i][j]-=pdMin[j];
				//Inputs range between 0 and max-min
				ppdTrainingInputs[i][j]/=(pdMax[j]-pdMin[j]);
				//Inputs range between 0 and 1
				ppdTrainingInputs[i][j]-=0.5;
				//Inputs range between -0.5 and +0.5
				ppdTrainingInputs[i][j]*=2.0;
				//Inputs range between -1.0 and +1.0
			}
		}
		TRACE("successful.\n");
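		//The four in-place scaling steps in the loop above can be sketched as a standalone
		//helper, shown below. ScaleToUnitRange is a hypothetical name introduced here for
		//illustration; it is not part of the original code.

```cpp
#include <cassert>

//Scale a value from [dLo, dHi] to [-1, +1], mirroring the four in-place
//steps applied to ppdTrainingInputs above. (Hypothetical helper.)
double ScaleToUnitRange(double dX, double dLo, double dHi)
{
	dX -= dLo;           //Now in [0, dHi-dLo]
	dX /= (dHi - dLo);   //Now in [0, 1]
	dX -= 0.5;           //Now in [-0.5, +0.5]
	dX *= 2.0;           //Now in [-1, +1]
	return dX;
}
```

		//At query time, the same dLo/dHi recorded from the training data (pdMin/pdMax here)
		//must be reused, exactly as OnTimer in TanksView.cpp does.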

		//Now we have the example data with which to teach a neural network, we need a network
		//to teach.

		//Create the MLP neural network, telling it how many inputs we need (3 in this case,
		//x-displacement, y-displacement, and wind speed), how many hidden neurons (or nodes)
		//and how many outputs (only one - for the inclination of the AI tank's barrel). Ten hidden
		//neurons are used here, but you should generally use as few as possible. This helps to
		//speed training and maximise performance in-game. The MLP created has a random set of
		//weights and hence will not do anything useful unless it is trained or a trained set of 
		//weights is loaded into it (as with the pMLP->Load call below).
		pMLP=new CMLP(
						ulNumberOfInputs,
						ulNumberOfHiddenNodes,
						ulNumberOfOutputs
						);

		//Create a variable to store return values
		int nReturnValue;

		//Do we want to load a pre-trained network?
		if(boLoadTrainedNetwork)
		{
			//If yes, load it.
			TRACE("Loading MLP...");
			nReturnValue=pMLP->Load(&TrainedMLP);
			if(nReturnValue)
			{
				TRACE("successful.\n");
			}
			else
			{
				TRACE("failed.\n");
				ASSERT(FALSE);
			}
		}

		TRACE("Training MLP...\n");

		//Create a variable to record the number of training iterations
		unsigned long ulIteration=0;

		//This do-while loop actually does the training. It continually calls the neural network's
		//training function (each call doing only a single training step) until the network's
		//ability to reproduce the targets (barrel angles) contained in the example data (as
		//measured by the network's dBestError variable) is adequate. This training process will
		//require many tens or hundreds of thousands of steps and can last several hours. Of course,
		//this shouldn't really be done here in the constructor...
		do
		{
			//To perform a training step, tell the network how many patterns (examples) there are
			//in the example data and pass it pointers to it.
			pMLP->dTrainingStep(
									ulNumberOfPatterns,
									ppdTrainingInputs,
									ppdTrainingTargets
									);


			//Every hundred training steps provide some feedback on the progress of the network's
			//learning
			if(ulIteration%100==0)
			{
				TRACE("\n\tIteration: %9u  Training Error: %+3.12e",
								ulIteration,
								pMLP->dGetPerformance());
			}

			//Keep a count of the number of steps so far
			ulIteration++;
		}
		//Keep going until the network's ability to reproduce the barrel angles in the example
		//data is good enough. The error produced by the network will never be zero, so 
		//dTerminationError will need to be some positive value. Some experimentation will be
		//required to see what value of dTerminationError is required to give adequate performance
		//in game. Start off with quite large values of dTerminationError like 0.1 and work down.
		//Bear in mind that if the output variable varies wildly, larger errors will tend to be 
		//produced. I.e. if the output varies between +1000 and -1000, then a termination error of, 
		//say, 50 may be more suitable.
		while(
				pMLP->dGetPerformance()>dTerminationError &&
				!boGeneratingTrainingData
				);
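		//The train-until-threshold structure of the do-while above can be illustrated with a
		//toy stand-in for the network, shown below. ToyTrainer and TrainUntil are hypothetical
		//names for illustration only; a real CMLP's error will not fall this predictably.

```cpp
#include <cassert>

//Toy stand-in for the MLP: its error simply halves on every training step.
struct ToyTrainer
{
	double dError = 1.0;
	void TrainingStep() { dError *= 0.5; }
	double dGetPerformance() const { return dError; }
};

//Run training steps until the error falls to the termination threshold,
//mirroring the do-while loop above. Returns the number of steps taken.
unsigned long TrainUntil(ToyTrainer &rTrainer, double dTermination)
{
	unsigned long ulIteration = 0;
	do
	{
		rTrainer.TrainingStep();
		ulIteration++;
	}
	while(rTrainer.dGetPerformance() > dTermination);
	return ulIteration;
}
```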

		TRACE("\nsuccessful.\n");

		//Once we've finished training, we don't need the exemplar data any more, so we can delete it.
		for(i=0;i<ulNumberOfPatterns;i++)
		{
			delete []ppdTrainingInputs[i];
			delete []ppdTrainingTargets[i];
		}
		delete []ppdTrainingInputs;
		delete []ppdTrainingTargets;
		
		//Save the trained network
		TRACE("Saving trained MLP...");

		//We'll save this new network under a different name so we don't overwrite the old version
		nReturnValue=pMLP->Save(&"NewTrainedMLP.mlp");
		if(nReturnValue)
		{
			TRACE("successful.\n");
		}
		else
		{
			TRACE("failed.\n");
			ASSERT(FALSE);
		}