Expectation-maximization algorithm

…likelihood given the unobservable data $p(\mathbf{y} \mid \mathbf{x}, \theta)$, as well as the probability of the unobservable data $p(\mathbf{x} \mid \theta)$.

Maximize expected log-likelihood for the complete dataset

An EM algorithm will then iteratively improve an initial estimate $\theta_0$ and construct new estimates $\theta_1, \dots, \theta_n, \dots$. An individual re-estimation step that derives $\theta_{n+1}$ from $\theta_n$ takes the following form:

$$\theta_{n+1} = \arg\max_{\theta} E_{\mathbf{x}}\!\left[ \log p\left(\mathbf{y}, \mathbf{x} \mid \theta \right) \,\Big|\, \mathbf{y} \right],$$

where $E_{\mathbf{x}}\!\left[\,\cdot\,\right]$ denotes the conditional expectation of $\log p\left(\mathbf{y}, \mathbf{x} \mid \theta \right)$, taken with the $\theta$ in the conditional distribution of $\mathbf{x}$ fixed at $\theta_n$. The log-likelihood $\log p\left(\mathbf{y}, \mathbf{x} \mid \theta \right)$ is often used instead of the true likelihood $p\left(\mathbf{y}, \mathbf{x} \mid \theta \right)$ because it leads to easier formulas but still attains its maximum at the same point as the likelihood.

In other words, $\theta_{n+1}$ is the value that maximizes (M) the conditional expectation (E) of the complete-data log-likelihood given the observed variables under the previous parameter value. This expectation is usually denoted $Q(\theta)$. In the continuous case, it would be given by

$$Q(\theta) = E_{\mathbf{x}}\!\left[ \log p\left(\mathbf{y}, \mathbf{x} \mid \theta \right) \,\Big|\, \mathbf{y} \right] = \int_{-\infty}^{\infty} p\left(\mathbf{x} \mid \mathbf{y}, \theta_n \right) \log p\left(\mathbf{y}, \mathbf{x} \mid \theta \right) \, d\mathbf{x}$$

Speaking of an expectation (E) step is a bit of a misnomer. What is calculated in the first step are the fixed, data-dependent parameters of the function $Q$. Once the parameters of $Q$ are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.

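The alternation just described can be sketched in a few lines. This is a minimal illustration of the control flow only, under assumed helper functions `e_step` and `m_step` and an assumed convergence test; it is not taken from the article:

```python
def em(y, theta0, e_step, m_step, max_iter=100, tol=1e-6):
    """Generic EM skeleton: the E step fixes the data-dependent parameters of Q,
    the M step maximizes the now fully determined Q over theta."""
    theta, prev_ll = theta0, float("-inf")
    for _ in range(max_iter):
        q_params, ll = e_step(y, theta)   # assumed: returns Q's parameters and the log-likelihood
        theta = m_step(y, q_params)       # assumed: returns argmax_theta Q(theta)
        if ll - prev_ll < tol:            # the observed-data log-likelihood is non-decreasing
            break
        prev_ll = ll
    return theta
```
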
Properties

<P>It can be shown that an EM iteration does not decrease the observed data 
likelihood function. However, there is no guarantee that the sequence converges 
to a <A title="Maximum likelihood estimator" 
href="http://en.wikipedia.org/wiki/Maximum_likelihood_estimator">maximum 
likelihood estimator</A>. For multimodal distributions, this means that an EM 
algorithm will converge to a <A title="Local maximum" 
href="http://en.wikipedia.org/wiki/Local_maximum">local maximum</A> (or <A 
title="Saddle point" href="http://en.wikipedia.org/wiki/Saddle_point">saddle 
point</A>) of the observed data likelihood function, depending on starting 
values. There are a variety of heuristic approaches for escaping a local maximum 
such as using several different random initial estimates, <SPAN 
class=texhtml>θ<SUB>0</SUB></SPAN>, or applying <A title="Simulated annealing" 
href="http://en.wikipedia.org/wiki/Simulated_annealing">simulated 
annealing</A>.</P>
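A hedged sketch of the random-restart heuristic just mentioned: run EM from several random initial estimates and keep the run with the highest observed-data log-likelihood. The callables `init_theta` and `run_em` are assumptions made for illustration:

```python
def best_of_restarts(y, n_restarts, init_theta, run_em):
    """Heuristic for escaping poor local maxima: several random starts,
    keep the solution with the best observed-data log-likelihood."""
    best_theta, best_ll = None, float("-inf")
    for _ in range(n_restarts):
        theta0 = init_theta()              # assumed: draws a random initial estimate theta_0
        theta, ll = run_em(y, theta0)      # assumed: returns (theta, final log-likelihood)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta
```
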
EM is particularly useful when maximum likelihood estimation of a complete data model is easy. If closed-form estimators exist, the M step is often trivial. A classic example is maximum likelihood estimation of a finite mixture of Gaussians, where each component of the mixture can be estimated trivially if the mixing distribution is known.

<P>"Expectation-maximization" is a description of a class of related algorithms, 
not a specific algorithm; EM is a recipe or meta-algorithm which is used to 
devise particular algorithms. The <A title="Baum-Welch algorithm" 
href="http://en.wikipedia.org/wiki/Baum-Welch_algorithm">Baum-Welch 
algorithm</A> is an example of an EM algorithm applied to <A 
title="Hidden Markov model" 
href="http://en.wikipedia.org/wiki/Hidden_Markov_model">hidden Markov 
models</A>. Another example is the EM algorithm for fitting a <A 
title="Mixture density" 
href="http://en.wikipedia.org/wiki/Mixture_density">mixture density</A> 
model.</P>
An EM algorithm can also find maximum a posteriori (MAP) estimates, by performing MAP estimation in the M step, rather than maximum likelihood.

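Concretely, a standard way to write the MAP variant is to add the log prior to the quantity maximized in the M step (this formula is a routine restatement, not quoted from the text above):

$$\theta_{n+1} = \arg\max_{\theta} \left( E_{\mathbf{x}}\!\left[ \log p\left(\mathbf{y}, \mathbf{x} \mid \theta \right) \,\Big|\, \mathbf{y} \right] + \log p(\theta) \right),$$

where $p(\theta)$ is the prior over the parameters.
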
There are other methods for finding maximum likelihood estimates, such as gradient descent, conjugate gradient or variations of the Gauss-Newton method. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.

Incremental versions

The classic EM procedure is to replace both $Q$ and $\theta$ with their optimal possible (argmax) values at each iteration. However, it can be shown (see Neal & Hinton, 1999) that simply finding $Q$ and $\theta$ to give some improvement over their current value will also ensure successful convergence.

For example, to improve $Q$, we could restrict the space of possible functions to a computationally simple distribution such as a factorial distribution,

$$Q = \prod_i Q_i.$$

Thus at each E step we compute the variational approximation of $Q$.

To improve $\theta$, we could use any hill-climbing method, and not worry about finding the optimal $\theta$, just some improvement. This method is also known as Generalized EM (GEM).

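A minimal sketch of a GEM-style M step, assuming a hypothetical gradient function `grad_Q` and a fixed step size: instead of the full argmax, take one gradient-ascent step on $Q$, which is improvement enough for convergence.

```python
def gem_m_step(theta, grad_Q, step_size=0.01):
    """Generalized EM: any update that improves Q suffices; here a single
    gradient-ascent step on Q(theta) replaces the exact maximization."""
    return [t + step_size * g for t, g in zip(theta, grad_Q(theta))]
```
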
Relation to variational Bayes methods

EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for $\theta$ (either a maximum likelihood estimate or a posterior mode). We may want a fully Bayesian version of this, giving a probability distribution over $\theta$ as well as the latent variables. In fact the Bayesian approach to inference is simply to treat $\theta$ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If we use the factorized $Q$ approximation as described above (variational Bayes), we may iterate over each latent variable (now including $\theta$) and optimize them one at a time. There are now $k$ steps per iteration, where $k$ is the number of latent variables. For graphical models this is easy to do, as each variable's new $Q$ depends only on its Markov blanket, so local message passing can be used for efficient inference.

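For reference, the coordinate update used in such a factorized (mean-field) scheme is usually written as

$$\log Q_j^*(z_j) = E_{i \ne j}\!\left[ \log p(\mathbf{y}, \mathbf{z}) \right] + \text{const},$$

where $\mathbf{z}$ collects all latent variables (now including $\theta$) and the expectation is taken under the current $Q_i$ of every variable other than $z_j$. This is the standard variational Bayes update, stated here for context rather than quoted from the text above.
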
Example: Mixture Gaussian

Assume that the samples $\mathbf{y}_1, \dots, \mathbf{y}_m$, where $\mathbf{y}_j \in \mathbb{R}^l$, are drawn from the Gaussians $x_1, \dots, x_n$, such that

$$P(\mathbf{y} \mid x_i, \theta) = \mathcal{N}(\mu_i, \Sigma_i) = (2\pi)^{-l/2} \left| \Sigma_i \right|^{-1/2} \exp\!\left( -\frac{1}{2} (\mathbf{y} - \mu_i)^T \Sigma_i^{-1} (\mathbf{y} - \mu_i) \right)$$

The model you are trying to estimate is

$$\theta = \left\{ \mu_1, \dots, \mu_n, \Sigma_1, \dots, \Sigma_n, P(x_1), \dots, P(x_n) \right\}$$

E-step

Estimation for the unobserved event (which Gaussian was used), conditioned on the observation, using the values from the last maximisation step:

$$P(x_i \mid \mathbf{y}_j, \theta_t) = \frac{p(\mathbf{y}_j \mid x_i, \theta_t)\, P(x_i \mid \theta_t)}{p(\mathbf{y}_j \mid \theta_t)} = \frac{p(\mathbf{y}_j \mid x_i, \theta_t)\, P(x_i \mid \theta_t)}{\sum_{k=1}^n p(x_k, \mathbf{y}_j \mid \theta_t)}$$

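A minimal numpy/scipy sketch of this E step, assuming the samples are stacked in an $(m \times l)$ array and the parameters are held in plain Python containers (names and shapes are illustrative assumptions, not part of the article):

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(Y, mus, Sigmas, priors):
    """Responsibilities P(x_i | y_j, theta_t) for a Gaussian mixture.

    Y:      (m, l) array of samples y_1..y_m
    mus:    list of n mean vectors mu_i, each of shape (l,)
    Sigmas: list of n covariance matrices Sigma_i, each of shape (l, l)
    priors: length-n array of mixing probabilities P(x_i | theta_t)
    """
    m, n = Y.shape[0], len(priors)
    joint = np.zeros((m, n))
    for i in range(n):
        # joint p(x_i, y_j | theta_t) = p(y_j | x_i, theta_t) * P(x_i | theta_t)
        joint[:, i] = multivariate_normal.pdf(Y, mean=mus[i], cov=Sigmas[i]) * priors[i]
    # divide by p(y_j | theta_t) = sum_k p(x_k, y_j | theta_t)
    return joint / joint.sum(axis=1, keepdims=True)
```
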
M-step

You want to maximise the expected log-likelihood of the joint event:

$$\begin{align} Q(\theta) &= E_{\mathbf{x}}\!\left[ \ln \prod_{j=1}^m p\left(\mathbf{y}_j, \mathbf{x} \mid \theta \right) \,\Big|\, \mathbf{y}_j \right] \\ &= E_{\mathbf{x}}\!\left[ \sum_{j=1}^m \ln p\left(\mathbf{y}_j, \mathbf{x} \mid \theta \right) \,\Big|\, \mathbf{y}_j \right] \\ &= \sum_{j=1}^m E_{\mathbf{x}}\!\left[ \ln p\left(\mathbf{y}_j, \mathbf{x} \mid \theta \right) \,\Big|\, \mathbf{y}_j \right] \\ &= \sum_{j=1}^m \sum_{i=1}^n P\left(x_i \mid \mathbf{y}_j, \theta_t \right) \ln p\left(x_i, \mathbf{y}_j \mid \theta \right) \end{align}$$

If we expand the probability of the joint event, we get

$$Q(\theta) = \sum_{j=1}^m \sum_{i=1}^n P(x_i \mid \mathbf{y}_j, \theta_t) \ln\!\left( p(\mathbf{y}_j \mid x_i, \theta)\, P(x_i \mid \theta) \right)$$

You have a constraint

$$\sum_{i=1}^{n} P(x_i \mid \theta) = 1$$

If we add a Lagrangian, and expand the pdf, we get

$$\begin{align} \mathcal{L}(\theta) &= \left( \sum_{j=1}^m \sum_{i=1}^n P(x_i \mid \mathbf{y}_j, \theta_t) \left( -\frac{l}{2} \ln(2\pi) - \frac{1}{2} \ln\left| \Sigma_i \right| - \frac{1}{2} (\mathbf{y}_j - \mu_i)^T \Sigma_i^{-1} (\mathbf{y}_j - \mu_i) + \ln P(x_i \mid \theta) \right) \right) \\ &\quad - \lambda \left( \sum_{i=1}^{n} P(x_i \mid \theta) - 1 \right) \end{align}$$

To find the new estimate $\theta_{t+1}$, you find a maximum where $\frac{\partial \mathcal{L}(\theta)}{\partial \theta} = 0$.

New estimate for the mean (using some differentiation rules from matrix calculus):
