<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=windows-1252">
<META NAME="Generator" CONTENT="Microsoft Word 97">
<TITLE>Neural NetWare</TITLE>
</HEAD>
<BODY>

<B><I><FONT SIZE=5><P>Neural NetWare</P>
</B></I></FONT><P>by Andre' LaMothe</P>
<FONT SIZE=2><P>&nbsp;</P>
</FONT><B><FONT SIZE=4><P>And There Was Light...</P>
</B></FONT><FONT SIZE=1><P>&nbsp;</P>
</FONT><FONT SIZE=2><P>The funny thing about high technology is that sometimes it's hundreds of years old! For example, Calculus was independently invented by both Newton and Leibniz over 300 years ago. What used to be magic is now well known. And of course we all know that geometry was invented by Euclid a couple thousand years ago. The point is that many times it takes years for something to come into "vogue". <B>Neural Nets</B> are a prime example. We have all heard about neural nets and about what they promise, but we don't really see many real world applications of them as we do for <B>ActiveX</B> or the <B>Bubblesort</B>. The reason for this is that the true nature of neural nets is extremely mathematical, and understanding and proving the theorems that govern them takes Calculus, Probability Theory, and Combinatorial Analysis, not to mention Physiology and Neurology.</P>
<P>&nbsp;</P>
<P>The key to unlocking any technology is for a person or persons to create a Killer App for it. We all know how <B>DOOM</B> works by now, i.e. by using BSP trees. However, John Carmack didn't invent them; he read about them in a paper written in the 1960's that described BSP technology. John took the next step, realized what BSP trees could be used for, and <B>DOOM</B> was born. I suspect that Neural Nets may have the same revelation in the next few years. Computers are fast enough to simulate them, VLSI designers are building them right into the silicon, and there are hundreds of books that have been published about them. And since Neural Nets are more mathematical entities than anything else, they are not tied to any physical representation; we can create them with software or create actual physical models of them with silicon. The key is that neural nets are abstractions or models.</P>
<P>&nbsp;</P>
<P>In many ways the computational limits of digital computers have been realized. Sure, we will keep making them faster, smaller, and cheaper, but digital computers will always process digital information since they are based on deterministic binary models of computation. Neural nets, on the other hand, are based on different models of computation. They are based on highly parallel, distributed, probabilistic models that don't necessarily model a solution to a problem as a computer program does, but model a network of cells that can find, ascertain, or correlate possible solutions to a problem in a more biological way by solving the problem in little pieces and putting the results together. This article is a whirlwind tour of what neural nets are and how they work, in as much detail as can be covered in a few pages. I know that a few pages doesn't do the topic justice, but maybe we can talk the management into a small series??? </P>
<P>&nbsp;</P>
<B><P>Figure 1.0 - A Basic Biological Neuron. </P>
</B><P><IMG SRC="Image1.gif" WIDTH=576 HEIGHT=281></P>
</FONT><B><FONT SIZE=4><P>Biological Analogs</P>
</B></FONT><FONT SIZE=2><P>&nbsp;</P>
<P>Neural Nets were inspired by our own brains. Literally, some brain in someone's head said, "I wonder how I work?" and then proceeded to create a simple model of itself. Weird huh? The model of the standard neurode is based on a simplified model of a human neuron invented over 50 years ago. Take a look at Figure 1.0. As you can see, there are three main parts to a neuron:</P>
<P>&nbsp;</P>

<UL>
<LI><B>Dendrite(s)</B> - Responsible for collecting incoming signals.</LI>
<LI><B>Soma</B> - Responsible for the main processing and summation of signals.</LI>
<LI><B>Axon</B> - Responsible for transmitting signals to other dendrites.</LI>

<P>&nbsp;</P>
<P>The average human brain has about 100,000,000,000 or 10<SUP>11</SUP> neurons and each neuron has up to 10,000 connections via the <B>dendrites</B>. The signals are passed via electro-chemical processes based on <B>Na</B> (sodium), <B>K</B> (potassium), and <B>Cl</B> (chloride) ions. Signals are transferred by accumulation and by the potential differences caused by these ions. The chemistry is unimportant, but the signals can be thought of as simple electrical impulses that travel from <B>axon</B> to <B>dendrite</B>. The connections from one neuron's axon to another's dendrites are called <B>synapses</B>, and these are the basic signal transfer points.</P>
<P>&nbsp;</P>
<P>So how does a neuron work? Well, that doesn't have a simple answer, but for our purposes the following explanation will suffice. The dendrites collect the signals received from other neurons, then the soma performs a summation of sorts and, based on the result, causes the axon to fire and transmit the signal. The firing is contingent upon a number of factors, but we can model it as a transfer function that takes the summed inputs, processes them, and then creates an output if the properties of the transfer function are met. In addition, the output is non-linear in real neurons; that is, signals aren't digital, they are analog. In fact, neurons are constantly receiving and sending signals, and the real model of them is frequency dependent and must be analyzed in the <B>S-domain</B> (the frequency domain). The real transfer function of a simple biological neuron has, in fact, been derived, and it fills up a number of chalkboards. </P>
<P>&nbsp;</P>
<P>Now that we have some idea of what neurons are and what we are trying to model, let's digress for a moment and talk about what we can use neural nets for in video games.</P>
<P>&nbsp;</P>
<P>&nbsp;</P>
<P>&nbsp;</P>
</FONT><B><FONT SIZE=4><P>Applications to Games</P>
</B></FONT><FONT SIZE=2><P>&nbsp;</P>
<P>Neural nets seem to be the answer that we all are looking for. If we could just give the characters in our games a little brain power, imagine how cool a game would be! Well, this is possible in a sense. Neural nets model the structure of neurons in a crude way, but not the high level functionality of reason and deduction, at least in the classical sense of the words. It takes a bit of thought to come up with ways to apply neural net technology to game AI, but once you get the hang of it, you can use it in conjunction with deterministic algorithms, fuzzy logic, and genetic algorithms to create very robust thinking models for your games - without a doubt better than anything you can do with hundreds of <B>if-then</B> statements or scripted logic. Neural nets can be used for such things as:</P>
<P>&nbsp;</P>
<B><P>Environmental Scanning and Classification</B> - A neural net can be fed information that could be interpreted as vision or auditory information. This information can then be used to select an output response or to teach the net. These responses can be learned in real-time and updated to optimize the response.</P>
<P>&nbsp;</P>
<B><P>Memory</B> - A neural net can be used by game creatures as a form of memory. The neural net can learn through experience a set of responses, then when a new experience occurs, the net can respond with something that is the best guess at what should be done. </P>
<P>&nbsp;</P>
<B><P>Behavioral Control</B> - The output of a neural net can be used to control the actions of a game creature. The inputs can be various variables in the game engine. The net can then control the behavior of the creature. </P>
<P>&nbsp;</P>
<B><P>Response Mapping</B> - Neural nets are really good at "association", which is the mapping of one space to another. Association comes in two flavors: <B>autoassociation</B>, which is the mapping of an input with itself, and <B>heteroassociation</B>, which is the mapping of an input with something else. Response mapping uses a neural net at the back end or output to create another layer of indirection in the control or behavior of an object. Basically, we might have a number of control variables, but we only have crisp responses for certain combinations that we can teach the net with. However, using a neural net on the output, we can obtain other responses that are in the same ballpark as our well defined ones.</P>
<P>&nbsp;</P>
<P>The above examples may seem a little fuzzy, and they are. The point is that neural nets are tools that we can use in whatever way we like. The key is to use them in cool ways that make our <B>AI</B> programming simpler and make game creatures respond more intelligently.</P>
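<P>As a taste of the behavioral control idea, here is a hedged sketch of the plumbing. <B>Neural_Net_Feedforward</B> is an assumed net-evaluation routine (we build the actual machinery in the sections that follow), and the inputs and actions are made up for illustration:</P>
<PRE>
/* A hedged sketch of behavioral control: game engine variables in,
   a creature action out. Neural_Net_Feedforward is an ASSUMED
   net-evaluation routine; the inputs and actions are made up. */

typedef enum { ACTION_ATTACK, ACTION_EVADE, ACTION_WANDER } ACTION;

extern void Neural_Net_Feedforward(float inputs[], float outputs[]);

ACTION Creature_Think(float dist_to_player, float own_health,
                      float player_heading)
{
    float X[3] = { dist_to_player, own_health, player_heading };
    float y[3];
    int   best = 0;

    Neural_Net_Feedforward(X, y);   /* run the net on the game state */

    /* pick the action whose output neurode fired the strongest */
    for (int i = 1; i < 3; i++)
        if (y[i] > y[best])
            best = i;

    return (ACTION)best;
} /* end Creature_Think */
</PRE>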
<P>&nbsp;</P>
</FONT><B><FONT SIZE=4><P>Neural Nets 101</P>
</B></FONT><FONT SIZE=2><P>&nbsp;</P>
<P>In this section we're going to cover the basic terminology and concepts used in neural net discussions. This isn't easy since neural nets are really the work of a number of different disciplines, and each discipline has created its own vocabulary. The vocabulary that we will learn is a good intersection of all the well known vocabularies and should suffice. In addition, neural network theory is replete with research that is redundant, meaning that many people re-invent the wheel. This has had the effect of creating a number of distinctly named neural net architectures. I will try to keep things as generic as possible, so that we don't get caught up in naming conventions. Later in the article we will cover some nets that are distinct enough that we will refer to them with their proper names. As you read, don't be too alarmed if you don't make the "connections" with all of the concepts; just read on, we will cover most of them again in full context in the remainder of the article. Let's begin...</P>
<P>&nbsp;</P>
<P><IMG SRC="Image2.gif" WIDTH=496 HEIGHT=304><B>Figure 2.0 - A Single Neurode with n Inputs.</P>
</B><P>&nbsp;</P>
<P>Now that we have seen the wetware version of a neuron, let's take a look at the basic artificial neuron to base our discussions on. Figure 2.0 is a graphic of a standard <B>"neurode"</B> or <B>"artificial neuron"</B>. As you can see, it has a number of inputs labeled <B>X<SUB>1</SUB> - X<SUB>n</SUB></B> and <B>B</B>. These inputs each have an associated weight <B>w<SUB>1</SUB> - w<SUB>n</SUB></B> and <B>b</B> attached to them. In addition, there is a summing junction <B>Y</B> and a single output <B>y</B>. The output <B>y</B> of the neurode is based on a transfer or <B>"activation"</B> function, which is a function of the net input to the neurode. The inputs come from the <B>X<SUB>i</SUB></B>'s and from <B>B</B>, which is a bias node. Think of <B>B</B> as a <B>"past history"</B>, <B>"memory"</B>, or <B>"inclination"</B>. The basic operation of the neurode is as follows: the inputs <B>X<SUB>i</SUB></B> are each multiplied by their associated weights and summed. The output of the summing is referred to as the <B>input activation</B> <B>Y<SUB>a</SUB></B>. The activation is then fed to the activation function <B>f<SUB>a</SUB>(x)</B> and the final output is <B>y</B>. The equations for this are:</P>
</FONT><FONT SIZE=1><P>&nbsp;</P>
</FONT><B><FONT SIZE=2><P>Eq. 1.0</P>
</B><P><B>Y</B><SUB>a</SUB> = <B>B</B>*<B>b</B> + <B>&#931;</B><SUP><B>n</B></SUP><SUB><B>i</B>=1</SUB> <B>X</B><SUB>i</SUB> * <B>w</B><SUB>i</SUB></P>
<P>&nbsp;</P>
<P>and,</P>
<P>&nbsp;</P>
<B><P>y</B> =<B> f<SUB>a</SUB>(Y</B><SUB>a</SUB><B>)</B>, the various forms of <B>f<SUB>a</SUB>(x)</B> will be covered in a moment.</P>
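<P>To make this concrete, here is a minimal C sketch of a single neurode evaluating Eq. 1.0 and the output equation. The names and the function pointer used for the activation are illustrative only:</P>
<PRE>
/* A minimal sketch of Eq. 1.0 in C; names are illustrative only. */

/* compute the input activation Ya = B*b + sum(Xi * wi) */
float Neurode_Activation(const float X[], const float w[], int n,
                         float B, float b)
{
    float Ya = B * b;               /* leading bias term      */

    for (int i = 0; i < n; i++)
        Ya += X[i] * w[i];          /* weighted sum of inputs */

    return Ya;
} /* end Neurode_Activation */

/* the final output y = fa(Ya), where fa is the activation function */
float Neurode_Output(const float X[], const float w[], int n,
                     float B, float b, float (*fa)(float))
{
    return fa(Neurode_Activation(X, w, n, B, b));
} /* end Neurode_Output */
</PRE>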
</FONT><FONT SIZE=1><P>&nbsp;</P>
</FONT><FONT SIZE=2><P>Before we move on, we need to talk about the inputs <B>X<SUB>i</SUB></B>, the weights <B>w<SUB>i</SUB></B>, and their respective domains. In most cases, inputs consist of the positive and negative integers in the set <B>(-&#8734;, +&#8734;)</B>. However, many neural nets use simpler <B>bivalent</B> values (meaning that they have only two values). The reason for using such a simple input scheme is that ultimately all inputs are <B>binary</B> or <B>bipolar</B>, and complex inputs are converted to pure binary or bipolar representations anyway. In addition, many times we are trying to solve computer problems such as image or voice recognition which lend themselves to bivalent representations. Nevertheless, this is not etched in stone. In any case, the values used in bivalent systems are primarily 0 and 1 in a binary system or -1 and 1 in a bipolar system. Both systems are similar except that bipolar representations turn out to be mathematically better than binary ones. The weights <B>w</B><SUB>i</SUB> on each input typically fall in the range <B>(-&#8734;, +&#8734;)</B> and are referred to as <B>excitatory</B> for positive values and <B>inhibitory</B> for negative values. The extra input <B>B</B>, which is called the bias, is always 1.0 and is scaled or multiplied by <B>b</B>; that is, <B>b</B> is its weight in a sense. This is illustrated in Eq. 1.0 by the leading term.</P>
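<P>As a quick aside, converting between the two bivalent encodings is a one-liner each way; a small sketch:</P>
<PRE>
/* binary {0,1} to bipolar {-1,1} and back; 2*b - 1 sends 0 to -1
   and 1 to +1, while (p + 1)/2 reverses the mapping */
int Binary_To_Bipolar(int b) { return 2 * b - 1; }
int Bipolar_To_Binary(int p) { return (p + 1) / 2; }
</PRE>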
<P>&nbsp;</P>
<P>Continuing with our analysis, once the activation <B>Y<SUB>a</SUB></B> is found for a neurode, it is applied to the activation function and the output <B>y</B> can be computed. There are a number of activation functions, and they have different uses. The basic activation functions <B>f<SUB>a</SUB>(x)</B> are:</P>
</FONT><FONT SIZE=2>
<B><P>Step&#9;&#9;&#9;&#9;Linear&#9;&#9;&#9;&#9;Exponential</P></B>
<P><IMG SRC="Image3.gif" WIDTH=104 HEIGHT=78> <IMG SRC="Image4.gif" WIDTH=100 HEIGHT=75> <IMG SRC="Image5.gif" WIDTH=100 HEIGHT=75></P>
<B><P>Eq. 2.0&#9;&#9;&#9;&#9;Eq. 3.0&#9;&#9;&#9;&#9;Eq. 4.0</P></B>
<P><B>F</B><SUB>s</SUB><B>(x)</B> = 1, if <B>x</B> &#8805; &#952;; 0, if <B>x</B> &lt; &#952;&#9;&#9;<B>F</B><SUB>l</SUB><B>(x)</B> = <B>x</B>, for all <B>x</B>&#9;&#9;<B>F</B><SUB>e</SUB><B>(x)</B> = 1/(1 + e<SUP>-&#963;<B>x</B></SUP>)</P>
</FONT><FONT SIZE=1><P>&nbsp;</P>
</FONT><FONT SIZE=2><P>The equations for each are fairly simple, but each is derived to model or fit various properties.</P>
<P>&nbsp;</P>
<P>The <B>step</B> function is used in a number of neural nets and models a neuron firing when a critical input signal is reached. This is the purpose of the factor &#952;; it models the critical input level, or threshold, at which the neurode should fire. The <B>linear activation</B> function is used when we want the output of the neurode to more closely follow the input activation. This kind of activation function would be used in modeling <B>linear systems</B> such as basic motion with constant velocity. Finally, the <B>exponential activation function</B> is used to create a <B>non-linear response</B>, which is the only possible way to create neural nets that model non-linear processes. The exponential activation function is key in advanced neural nets since the composition of linear and step activation functions is always itself linear or step; with them alone we would never be able to create a net that has a non-linear response. Therefore, we need the exponential activation function to address the non-linear problems that we want to solve with neural nets. However, we are not locked into using the exponential function. <B>Hyperbolic</B>, <B>logarithmic</B>, and <B>transcendental</B> functions can be used as well, depending on the desired properties of the net. Finally, we can scale and shift all of these functions if we need to.</P>
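<P>As C code, the three activation functions come out as the short sketch below; theta and sigma are the threshold and steepness factors from Eqs. 2.0 and 4.0:</P>
<PRE>
#include &lt;math.h&gt;

/* Eq. 2.0: step activation, fires 1 once x reaches threshold theta */
float Fa_Step(float x, float theta)
{
    return (x >= theta) ? 1.0f : 0.0f;
}

/* Eq. 3.0: linear activation, the output simply follows the input */
float Fa_Linear(float x)
{
    return x;
}

/* Eq. 4.0: exponential (logistic) activation, a non-linear squash of
   the activation into (0,1); sigma controls how steep the curve is */
float Fa_Exponential(float x, float sigma)
{
    return 1.0f / (1.0f + expf(-sigma * x));
}
</PRE>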
<P>&nbsp;</P>
<P><IMG SRC="Image6.gif" WIDTH=624 HEIGHT=368><B>Figure 3.0 - A 4 Input, 3 Neurode, Single Layer Neural Net.</P>
<P>Figure 4.0 - A 2 Layer Neural Network.</P>
<P>&nbsp;</P>
</B><P><IMG SRC="Image7.gif" WIDTH=576 HEIGHT=326></P>
<P>As you can imagine, a single neurode isn't going to do a lot for us, so we need to take a group of them and create a layer of neurodes; this is shown in Figure 3.0. The figure illustrates a single layer neural network. The neural net in Figure 3.0 has a number of inputs and a number of output nodes. By convention, this is a single layer net since the input layer is not counted unless it is the only layer in the network. In this case, the input layer is also the output layer and hence there is one layer. Figure 4.0 shows a two layer neural net. Notice that the input layer is still not counted and the internal layer is referred to as <B>"hidden"</B>. The output layer is referred to as the <B>output</B> or <B>response</B> layer. Theoretically, there is no limit to the number of layers a neural net can have; however, it may be difficult to derive the relationships of the various layers and come up with tractable training methods. The best way to create multilayer neural nets is to make each network one or two layers and then connect them as components or functional blocks.</P>
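<P>A single layer like the one in Figure 3.0 is just the single-neurode computation repeated once per output node. Here is a minimal C sketch with the sizes hardwired to the figure; the names are illustrative only:</P>
<PRE>
#define NUM_INPUTS   4   /* input nodes, as in Figure 3.0  */
#define NUM_NEURODES 3   /* output neurodes                */

/* one feedforward pass through a single layer: each neurode sums all
   the inputs through its own weight vector, adds its bias term, and
   squashes the result with the activation function fa */
void Layer_Feedforward(float X[NUM_INPUTS],
                       float w[NUM_NEURODES][NUM_INPUTS],
                       float b[NUM_NEURODES],
                       float y[NUM_NEURODES],
                       float (*fa)(float))
{
    for (int j = 0; j < NUM_NEURODES; j++)
    {
        float Ya = 1.0f * b[j];          /* bias input B is always 1.0 */

        for (int i = 0; i < NUM_INPUTS; i++)
            Ya += X[i] * w[j][i];        /* weighted sum of the inputs */

        y[j] = fa(Ya);                   /* final output of neurode j  */
    }
} /* end Layer_Feedforward */
</PRE>
<P>A two layer net like the one in Figure 4.0 is then just two such calls back to back, with the hidden layer's y[] fed in as the next layer's X[].</P>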
<P>&nbsp;</P>
<P>All right, now let's talk about <B>temporal</B> or time related topics. We all know that our brains are fairly slow compared to a digital computer. In fact, our brains have cycle times in the millisecond range, whereas digital computers have cycle times in the nanosecond, and soon sub-nanosecond, range. This means that signals take time to travel from neuron to neuron. This is also modeled by artificial neurons in the sense that we perform the computations layer by layer and transmit the results sequentially. This helps to better model the time lag involved in the signal transmission in biological systems such as us. </P>
<P>&nbsp;</P>
<P>We are almost done with the preliminaries; let's talk about some high level concepts and then finish up with a couple more terms. The question that you should be asking is, "what the heck do neural nets do?" This is a good question, and it's a hard one to answer definitively. The question is really, "what do you want to try to make them do?" They are basically mapping devices that help map one space to another space. In essence, they are a type of memory, and like any memory we can use some familiar terms to describe them. Neural nets have both <B>STM</B> (<B>Short Term Memory</B>) and <B>LTM</B> (<B>Long Term Memory</B>). STM is the ability of a neural net to remember something it just learned, whereas LTM is the ability of a neural net to remember something it learned some time ago amongst its new learning. This leads us to the concept of <B>plasticity</B>, or in other words, how a neural net deals with new information or training. Can a neural net learn more information and still recall previously stored information correctly? And if so, does the neural net become unstable, holding so much information that the data starts to overlap or has common intersections? This property is referred to as <B>stability</B>. The bottom line is we want a neural net to have a good LTM, a good STM, be plastic (in most cases), and exhibit stability. Of course, some neural nets have no analog to memory; they are more for functional mapping, so these concepts don't apply as is, but you get the idea. Now that we know about the aforementioned concepts relating to memory, let's finish up by talking about some of the mathematical factors that help measure and understand these properties. </P>
<P>&nbsp;</P>
<P>One of the main uses for neural nets is as memories that can process input that is either incomplete or noisy and return a response. The response may be the input itself <B>(autoassociation)</B> or another output that is totally different from the input <B>(heteroassociation)</B>. Also, the mapping may be from an <B>n</B>-dimensional space to an <B>m</B>-dimensional space, and non-linear to boot. The bottom line is that we want to somehow store information in the neural net so that inputs (perfect as well as noisy) can be processed in parallel. This means that a neural net is a kind of hyperdimensional memory unit since it can associate an input <B>n</B>-tuple with an output <B>m</B>-tuple, where <B>m</B> can equal <B>n</B>, but doesn't have to.</P>
<P>&nbsp;</P>
<P>&nbsp;</P>
<P>What neural nets do in essence is partition an <B>n</B>-dimensional space into regions that uniquely map the input to the output, or classify the input into distinct classes like a funnel of sorts. Now, as the number of input values (vectors) in the input data set, which we will refer to as <B>S</B>, increases, it logically follows that the neural net is going to have a harder time separating the information. And as a neural net is filled with information, the input values that are to be recalled will overlap since the input space can no longer keep everything partitioned in a finite number of dimensions. This overlap results in <B>crosstalk</B>, meaning that some inputs are not as distinct as they could be. This may or may not be desired. Although this problem isn't a concern in all cases, it is a concern in associative memory neural nets, so to illustrate the concept let's assume that we are trying to associate <B>n</B>-tuple input vectors with some output set. The output set isn't as much of a concern to proper functioning as the input set <B>S</B> is.</P>
<P>&nbsp;</P>
<P>If a set of inputs <B>S</B> is straight binary then we are looking at sequences of the form 1101010...10110. Let's say that our input bit vectors are only 3 bits each; therefore, the entire input space consists of the vectors:</P>
<P>&nbsp;</P>
<B><P>v<SUB>0</B></SUB> = (0,0,0), <B>v<SUB>1</B></SUB> = (0,0,1), <B>v<SUB>2</B></SUB> = (0,1,0), <B>v<SUB>3</B></SUB> = (0,1,1), <B>v<SUB>4</B></SUB> = (1,0,0), <B>v<SUB>5</B></SUB> = (1,0,1), <B>v<SUB>6</B></SUB> = (1,1,0), </P>
<B><P>v<SUB>7</B> </SUB>= (1,1,1)</P>
<P>&nbsp;</P>
<P>To be more precise, the <B>basis</B> for this set of vectors is:</P>
<P>&nbsp;</P>
<B><P>v</B> = (1,0,0) * <B>b<SUB>2</B></SUB> + (0,1,0) * <B>b<SUB>1</B></SUB> + (0,0,1) * <B>b<SUB>0</B></SUB>, where <B>b<SUB>i</B></SUB> can take on the values 0 or 1.</P>
<P>&nbsp;</P>
<P>For example if we let <B>b<SUB>2</B></SUB>=1, <B>b<SUB>1</B></SUB>=0, and <B>b<SUB>0</B></SUB>=1 then we get the vector:</P>
<P>&nbsp;</P>
<B><P>v</B> = (1,0,0) * 1 + (0,1,0) * 0 + (0,0,1) * 1 = (1,0,0) + (0,0,0) + (0,0,1) = (1,0,1) which is <B>v<SUB>5</B></SUB> in our possible input set.</P>
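<P>A tiny C program, sketched with made up names, builds all eight vectors from the basis exactly as above and computes the pairwise dot products that the orthogonality discussion below turns on:</P>
<PRE>
#include &lt;stdio.h&gt;

/* A sketch only: build all 8 vectors of the 3-bit input space from
   the basis v = (1,0,0)*b2 + (0,1,0)*b1 + (0,0,1)*b0, then print
   which pairs are orthogonal, i.e. have a dot product of 0. */
int main(void)
{
    int v[8][3];

    for (int k = 0; k < 8; k++)
    {
        v[k][0] = (k >> 2) & 1;   /* b2 scales (1,0,0) */
        v[k][1] = (k >> 1) & 1;   /* b1 scales (0,1,0) */
        v[k][2] = (k >> 0) & 1;   /* b0 scales (0,0,1) */
    }

    for (int i = 0; i < 8; i++)
        for (int j = i + 1; j < 8; j++)
        {
            int dot = v[i][0]*v[j][0] + v[i][1]*v[j][1] + v[i][2]*v[j][2];
            if (dot == 0)
                printf("v%d and v%d are orthogonal\n", i, j);
        }

    return 0;
} /* end main */
</PRE>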
</FONT><FONT SIZE=1><P>&nbsp;</P>
</FONT><FONT SIZE=2><P>A <B>basis</B> is a special vector summation that describes a set of vectors in a space. So <B>v</B> describes all the vectors in our space. Now, to make a long story short, the more <B>orthogonal</B> the vectors in the input set are, the better they will distribute in a neural net and the better they can be recalled. Orthogonality refers to the independence of the vectors; in other words, if two vectors are orthogonal then their dot product is 0, their projection onto one another is 0, and they can't be written in terms of one another. In the set <B>v</B> there are a lot of orthogonal vectors, but they come in small groups; for example, <B>v<SUB>0</SUB></B> is orthogonal to all the vectors, so we can always include it. But if we include <B>v</B><SUB>1</SUB> in our set <B>S</B> then the only other vectors that will fit and maintain orthogonality are <B>v<SUB>2</SUB></B> and <B>v<SUB>4</SUB></B>, or the set:</P>
</FONT><FONT SIZE=1><P>&nbsp;</P>
</FONT><B><FONT SIZE=2><P>v<SUB>0</B></SUB> = (0,0,0), <B>v<SUB>1</B></SUB> = (0,0,1), <B>v<SUB>2</B></SUB>= (0,1,0), <B>v<SUB>4</B></SUB> = (1,0,0)</P>
</FONT><FONT SIZE=1><P>&nbsp;</P>
</FONT><FONT SIZE=2><P>Why? Because <B>v</B><SUB>i</SUB> &#183; <B>v</B><SUB>j</SUB> = 0 for every pair of vectors in the set. In other words, the dot product of all the pairs of vectors is 0, so they must all be orthogonal. Therefore, this set will do very well in a neural net as input vectors. However, the set:</P>

国产资源在线一区| 精品一区二区三区蜜桃| 国产精品毛片a∨一区二区三区| 日韩三级伦理片妻子的秘密按摩| 欧美主播一区二区三区美女| 在线国产亚洲欧美| 在线亚洲人成电影网站色www| 色妞www精品视频| 色呦呦国产精品| 欧美午夜理伦三级在线观看| 欧美视频在线观看一区| 7878成人国产在线观看| 欧美一卡2卡3卡4卡| 精品久久久三级丝袜| 亚洲精品一区二区三区影院| 国产三级精品视频| 自拍偷自拍亚洲精品播放| 一区二区三区在线免费观看| 午夜久久久久久| 毛片一区二区三区| 国产成人av电影在线观看| 成人国产电影网| 91高清在线观看| 欧美老女人在线| 337p粉嫩大胆色噜噜噜噜亚洲| 久久九九全国免费| 亚洲欧美区自拍先锋| 亚洲国产精品久久人人爱| 蜜桃视频在线观看一区| 国产高清成人在线| 在线亚洲人成电影网站色www| 日韩一区和二区| 国产蜜臀av在线一区二区三区| 亚洲青青青在线视频| 婷婷中文字幕综合| 国产成人免费av在线| 一本大道久久精品懂色aⅴ| 欧美精品日韩精品| 久久久久9999亚洲精品| 一区二区三区中文在线| 美美哒免费高清在线观看视频一区二区| 国产98色在线|日韩| 欧美专区亚洲专区| 国产日产欧美一区二区三区| 一区二区三区四区精品在线视频 | 久久精品99国产精品日本| 国产精品1区2区3区在线观看| 色香蕉成人二区免费| 日韩区在线观看| 亚洲免费高清视频在线| 久久狠狠亚洲综合| 91国产丝袜在线播放| 精品国产91久久久久久久妲己| 国产精品毛片a∨一区二区三区 | 92国产精品观看| 日韩丝袜美女视频| 亚洲精品高清在线| 国产精品自拍av| 欧美电影在哪看比较好| 亚洲欧洲日本在线| 国产在线精品视频| 欧美日韩国产高清一区二区 | 亚洲欧美欧美一区二区三区| 九九**精品视频免费播放| 日本精品一级二级| 国产欧美日韩在线| 免费美女久久99| 欧美在线播放高清精品| 欧美激情中文字幕一区二区| 日本亚洲视频在线| 色爱区综合激月婷婷| 国产欧美精品一区aⅴ影院| 视频一区二区不卡| 色噜噜狠狠色综合欧洲selulu | 国产一区二区三区黄视频| 欧美亚洲动漫制服丝袜| 国产精品国产三级国产有无不卡 | 国产精品久久久久三级| 韩国女主播一区| 337p亚洲精品色噜噜狠狠| 亚洲精品免费电影| av一区二区三区四区| 久久蜜桃一区二区| 久久99国产乱子伦精品免费| 欧美欧美欧美欧美| 亚洲国产成人av好男人在线观看| av亚洲精华国产精华精华| 国产亚洲综合av| 国产精品资源在线观看| 精品女同一区二区| 美女在线观看视频一区二区| 在线不卡一区二区| 香蕉加勒比综合久久| 在线观看91精品国产入口| 亚洲激情校园春色| 99re热视频精品| 综合电影一区二区三区| 99久久久国产精品| 亚洲人吸女人奶水| 91一区在线观看| 亚洲精品中文字幕乱码三区| 99久久精品一区二区| 亚洲免费视频中文字幕| 国产日韩欧美高清在线| 国产麻豆视频一区| 国产亚洲综合性久久久影院| 国产成人av影院| 中文字幕一区二区三区四区不卡| 不卡的av在线| 亚洲另类一区二区| 欧美伊人精品成人久久综合97| 亚洲一区国产视频| 欧美日本不卡视频| 麻豆一区二区99久久久久| 精品处破学生在线二十三| 国产精品一区二区免费不卡| 国产清纯在线一区二区www| 成人午夜电影久久影院| 国产精品久久久久影院亚瑟 | 欧美浪妇xxxx高跟鞋交| 日韩成人免费看| 精品国产91久久久久久久妲己 | 久久蜜桃av一区精品变态类天堂 | 成人激情免费电影网址| 亚洲天堂av一区| 欧美日韩高清一区| 极品美女销魂一区二区三区免费| 国产午夜精品一区二区 | 成人免费一区二区三区视频| 欧美综合亚洲图片综合区| 天天综合色天天综合| 精品福利av导航| 99免费精品视频| 亚洲成av人片在线观看| 精品国产一区二区在线观看| 成人免费高清视频在线观看| 一片黄亚洲嫩模| 日韩欧美国产午夜精品| heyzo一本久久综合| 午夜视频在线观看一区| 欧美精品一区二区三区蜜臀| 99国产精品久久| 欧美婷婷六月丁香综合色| 久久99精品久久久久久动态图| 国产情人综合久久777777| 欧美色爱综合网| 国产曰批免费观看久久久| 亚洲精品国产成人久久av盗摄| 日韩欧美二区三区| 99久久99精品久久久久久| 奇米色一区二区| 亚洲欧洲制服丝袜| 欧美成人在线直播| 色噜噜狠狠成人网p站| 久久se这里有精品| 一区二区三区成人在线视频| 26uuu亚洲综合色| 欧美日韩一区二区三区不卡| 国产91在线看| 日本欧美在线观看| 亚洲女同一区二区| 国产午夜精品久久久久久免费视| 欧美三级视频在线| 不卡一区二区三区四区| 日本 国产 欧美色综合| 亚洲精品乱码久久久久久久久| 久久亚洲一区二区三区四区| 欧美亚洲动漫制服丝袜| 成人av在线资源网站| 精品亚洲porn| 三级欧美韩日大片在线看| 亚洲人妖av一区二区| 国产日韩欧美麻豆| 日韩欧美一区二区视频| 欧美色图免费看| 99精品视频中文字幕| 国产在线播精品第三| 美女视频一区二区| 亚洲成人一二三| 亚洲激情校园春色| 中文字幕一区二区三区不卡 | 久久电影网站中文字幕| 亚洲图片欧美视频| 亚洲男人天堂av| 国产精品久久久久久久蜜臀| 久久精工是国产品牌吗| 亚洲午夜久久久久久久久电影网 | 奇米777欧美一区二区| 亚洲小说欧美激情另类| 亚洲欧美激情小说另类| 国产精品久久久久三级| 欧美激情一区二区三区全黄| 久久综合视频网| 欧美成人精品3d动漫h| 777a∨成人精品桃花网| 欧美美女网站色| 欧美日韩免费在线视频| 欧美在线观看视频在线| 欧美最新大片在线看| 欧美在线free|