<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<!-- saved from url=(0035)http://www.mit.edu/~alexgru/vision/ -->
<HTML><HEAD><TITLE>Using a Particle Filter for Gesture Recognition</TITLE>
<META http-equiv=Content-Type content="text/html; charset=gb2312">
<META content="MSHTML 6.00.2743.600" name=GENERATOR></HEAD>
<BODY>Back to my <A href="http://www.mit.edu/~alexgru/">homepage</A> 
<H1>Using a Particle Filter for Gesture Recognition</H1>
<H2>Alexander Gruenstein </H2>
<H3>Introduction</H3>For my final project, I experimented with applying a 
particle filter to the problem of gesture recognition. Specifically, I attempted 
to differentiate between the following two American Sign Language (ASL) signs 
(special thanks go to Anna for allowing me to film her; click on the sign names 
to see a sample video): 
<UL>
  <LI><A href="http://www.mit.edu/~alexgru/vision/sign2_3.mov">Leftover</A> 
  <LI><A href="http://www.mit.edu/~alexgru/vision/sign3_3.mov">Paddle</A> 
</LI></UL>My procedure was as follows: 
<OL>
  <LI>Film 7 examples of each sign 
  <LI>Find skin-colored pixels in every frame 
  <LI>Find the three largest 'blobs' of skin-colored pixels in each frame (the 
  head, right hand, and left hand) 
  <LI>Calculate the motion trajectories of each blob over time 
  <LI>Create a model of the average trajectories of the blobs for each of the 
  two signs, using 5 of the examples 
  <LI>Use a particle filter to classify both the test and training sets of video 
  sequences as one of the two signs </LI></OL>
<H3>Related Work </H3>
<P>There has been a lot of work in recognizing sign language. My work builds 
mainly on two papers: </P>
<P>Black and Jepson have previously applied particle filters (CONDENSATION) to 
the problem of gesture recognition. Their work differs from mine in that they 
recognized only 'whiteboard' style gestures like "save," "cut," and "paste" made 
with a distinctly colored object (a "phicon"). In my project, I recognize actual 
ASL signs made with two hands. </P>
<P>Yang and Ahuja also use motion trajectories to recognize ASL signs. However, 
they use a Time-Delay Neural Network (TDNN) to classify signs, while I use a 
particle filter. </P>
<P>My work, then, builds on Yang and Ahuja's in the sense that I use the motion 
trajectories of the hands to recognize ASL signs. Unlike Yang and Ahuja, I don't 
attempt to robustly solve the problem of tracking the hands in the first 
place. I extend the work of Black and Jepson by applying CONDENSATION to 
multiple motion trajectories simultaneously.</P>
<H3>Filming The Examples </H3>The image sequences were filmed using a Sony 
DCR-TRV900 MiniDV camera. They were manually aligned and then converted into 
sequences of TIFs to be processed in MATLAB. Each TIF was 243x360 pixels, 24-bit 
color. The lighting and background in each sequence are held constant; the 
background is not cluttered. The focus of my project was not to solve the 
tracking problem, hence I wanted the hands to be relatively easy to track. I 
collected 7 film sequences of each sign. 
<H3>Finding Skin Colored Pixels </H3>
<P>In order to segment out skin-colored pixels, I used the color_segment routine 
we developed in MATLAB for our last homework assignment. Every image in each 
sequence was divided into the following regions: skin, background, clothes, 
and outliers. The source code is here: 
<UL>
  <LI><A 
  href="http://www.mit.edu/~alexgru/vision/color_segment.m">color_segment.m</A> 
  <LI><A 
  href="http://www.mit.edu/~alexgru/vision/gaussdensity.m">gaussdensity.m</A> 
  <LI><A 
  href="http://www.mit.edu/~alexgru/vision/image_to_hsv2data.m">image_to_hsv2data.m</A> 
  </LI></UL>The original image: <BR><IMG height="50%" 
src="Using a Particle Filter for Gesture Recognition.files/segment.jpg" 
width="50%"> <BR>The skin pixel mask: <BR><IMG height="50%" 
src="Using a Particle Filter for Gesture Recognition.files/segment_skin.jpg" 
width="50%"> <BR>
<H3>Finding Skin-Colored Blobs </H3>
<P>I then calculated the centroids of the three largest skin colored 'blobs' in 
each image. Blobs were calculated by processing the skin pixel mask generated in 
the previous step. A blob is defined to be a connected region of 1's in the 
mask. Finding blobs turned out to be a bit more difficult than I had originally 
thought. My first implementation was a straightforward recursive algorithm which 
scans the image from top to bottom, left to right, until it comes across a skin pixel which 
has yet to be assigned to a blob. It then recursively checks each of that 
pixel's neighbors to see if they too are skin pixels. If they are, it assigns 
them to the same blob and recurses. On such large images, this quickly led to 
stack overflow and huge inefficiency in MATLAB. </P>
<P>The working algorithm I eventually came up with is an iterative one that 
scans the skin pixel mask from left to right, top down. When it comes across a 
skin pixel that has yet to be assigned to a blob, it first checks the pixel's 
neighbors (to the left and above) to see if they are in a blob. If they aren't, 
it creates a new blob and adds the newly found pixel to it. If any of the 
neighbors are in a blob, it assigns the pixel to the neighbor's blob. However, 
two non-adjacent neighbors might be in different blobs, so these blobs must be 
merged into a single blob.</P>
<P>Finally, the algorithm searches for the 3 largest blobs and calculates each 
of their respective centroids. </P>
<P>The MATLAB code can be found here: <A 
href="http://www.mit.edu/~alexgru/vision/blob2.m">blob2.m</A> </P>
<P>These videos show the 3 largest skin-colored blobs tracked for the two 
example video sequences above (ignore the funny colors; they are just an 
artifact of the blob search).</P>
<UL>
  <LI><A href="http://www.mit.edu/~alexgru/vision/tracking2_3.mov">Leftover</A> 
  <LI><A href="http://www.mit.edu/~alexgru/vision/tracking3_3.mov">Paddle</A> 
  </LI></UL>
<H3>Calculating the Blobs' Motion Trajectories over Time </H3>At this point, 
tracking the trajectories of the blobs over time was fairly simple. For a given 
video sequence, I made a list of the position of the centroid for each of the 3 
largest blobs in each frame (source code: <A 
href="http://www.mit.edu/~alexgru/vision/centroids.m">centroids.m</A>). Then, I 
examined the first frame in the sequence and determined which centroid was 
farthest to the left and which was farthest to the right. The one on the left 
corresponds to the right hand of the signer; the one on the right corresponds to the 
left hand of the signer. Then, for each successive frame, I simply determined 
which centroid was closest to the previous left centroid and called this 
the new left centroid; I did the same for the blob on the right. Once the two 
blobs were labeled, I calculated the horizontal and vertical velocity of both 
blobs across the two frames as (change in position)/(time between frames). I recorded these 
values for each sequential frame pair in the sequence. The source code is here: 
<A 
href="http://www.mit.edu/~alexgru/vision/split_centroids.m">split_centroids.m</A>. 

<H3>Creating the Motion Models </H3>I then created models of the hand motions 
involved in each sign. Specifically, for each frame in the sign, I used 5 
training instances to calculate the average horizontal and vertical velocities 
of both hands in that particular frame. The following graphs show the models 
derived for both signs (these turned out a bit grainy as JPEGs; PDF 
versions are here: <A 
href="http://www.mit.edu/~alexgru/vision/model1.pdf">model1.pdf</A> and <A 
href="http://www.mit.edu/~alexgru/vision/model2.pdf">model2.pdf</A>).<BR><BR>Model 
1 "leftover": <BR><IMG height="80%" 
src="Using a Particle Filter for Gesture Recognition.files/model1.jpg" 
width="80%"> <BR>Model 2 "paddle": <BR><IMG height="80%" 
src="Using a Particle Filter for Gesture Recognition.files/model2.jpg" 
width="80%"> <BR><BR>Sample source code for creating a model is here: <A 
href="http://www.mit.edu/~alexgru/vision/make_model2.m">make_model2.m</A> 
<H3>Using CONDENSATION to Classify New Video Sequences</H3>All the image 
preprocessing is now finished and the two motion models have been created. A brief 
description of the Condensation algorithm follows, then a description of how I 
applied it to this specific task. 
<H3><A name=SECTION00010000000000000000>The Basics of Condensation</A></H3>
<P>The Condensation algorithm (Conditional Density Propagation over time) makes 
use of random sampling in order to model arbitrarily complex probability density 
functions. That is, rather than attempting to fit a specific equation to 
observed data, it uses <I>N</I> weighted samples to approximate the curve 
described by the data. Each sample consists of a <EM>state</EM> and a 
<EM>weight</EM> proportional to the probability that the state is predicted by 
the input data. As the number of samples increases, the precision with which the 
samples model the observed pdf increases. 
<P>Now assume that a series of observations are made during time steps 
1, &hellip;, <I>t</I>. In order to generate the new sample set at time <I>t</I>+1, 
states are randomly selected (with replacement) from the sample set at <I>t</I>, 
based on their weight; that is, the weight of each sample determines the 
probability it will be chosen. Given such a randomly sampled state <I>s</I>, a 
prediction of a new state <I>s</I><SUB><I>t</I>+1</SUB> at time step <I>t</I>+1 is 
made based on a predictive model. This corresponds to sampling from the process 
density <I>p</I>(<I>s</I><SUB><I>t</I>+1</SUB> | <I>s</I><SUB><I>t</I></SUB> = <I>s</I>), 
where <I>s</I><SUB><I>t</I></SUB> is a vector of parameters describing the object's 
state. Finally, <I>s</I><SUB><I>t</I>+1</SUB> is assigned a weight proportional to 
the probability <I>p</I>(<I>z</I><SUB><I>t</I>+1</SUB> | <I>s</I><SUB><I>t</I>+1</SUB>), 
where <I>z</I><SUB><I>t</I>+1</SUB> is a set of parameters describing the observed 
state of the object at time <I>t</I>+1. Then the process iterates for the next 
observation. In this way, predicted states that correspond better to the data 
receive larger weights. Since arbitrarily complex pdfs can be modeled, an 
arbitrary number of competing hypotheses (assuming sufficiently large <I>N</I>) 
can be maintained until a single hypothesis dominates. 
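<P>In code, one iteration of this sample-resample loop can be sketched as 
follows (a generic sketch, not tied to the gesture models; predict and 
likelihood are assumed function handles for the process density and the 
observation density): </P>
<PRE>
% Minimal sketch of one Condensation iteration: resample by weight,
% predict through the process model, and re-weight against the new
% observation z. S is an N x d matrix of sampled states; w is N x 1.
function [S, w] = condensation_step_sketch(S, w, z, predict, likelihood)
  N = size(S, 1);
  cdf = cumsum(w) / sum(w);                        % resampling CDF
  idx = arrayfun(@(u) find(cdf >= u, 1), rand(N, 1));
  S = predict(S(idx, :));    % sample from p(s_t+1 | s_t), with noise
  w = likelihood(z, S);      % weight by p(z_t+1 | s_t+1)
  w = w / sum(w);            % normalize so the weights form a pdf
end
</PRE>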
<P>
<H3><A name=SECTION00020000000000000000>Applying Condensation to Recognizing 
ASL</A></H3>
<P>In order to apply the Condensation algorithm to sign-language recognition, I 
extend the methods described by Black and Jepson. Specifically, a <EM>state</EM> 
at time <I>t</I> is described as a parameter vector 
<I>s</I><SUB><I>t</I></SUB> = (&mu;, &phi;<SUP><I>i</I></SUP>, &alpha;<SUP><I>i</I></SUP>, &rho;<SUP><I>i</I></SUP>), where: <BR><BR>
&mu; is the integer index of the predictive model; <BR>
&phi;<SUP><I>i</I></SUP> indicates the current position in the model; <BR>
&alpha;<SUP><I>i</I></SUP> refers to an amplitude scaling factor; <BR>
&rho;<SUP><I>i</I></SUP> is a scale factor in the time dimension; <BR>
and <I>i</I> &isin; {<I>l</I>, <I>r</I>}. <BR><BR>Note that <I>i</I> indicates 
which hand's motion trajectory a given &phi;, &alpha;, or &rho; refers to. My 
models contain data about the motion trajectory of both the left hand and the 
right hand; by allowing two sets of parameters, I allow the motion trajectory of 
the left hand to be scaled and shifted separately from the motion trajectory of 
the right hand (so, for example, &phi;<SUP><I>l</I></SUP> refers to the current 
position in the model for the left hand's trajectory, while 
&phi;<SUP><I>r</I></SUP> refers to the current position in the model for the 
right hand's trajectory).</P>
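<P>As a data-structure sketch (an assumed layout that follows the description 
above, not actual project code), such a state could be represented as: </P>
<PRE>
% Assumed layout of the extended state vector described above: one model
% index plus per-hand phase, amplitude, and time-scale parameters.
state = struct('mu',    1, ...        % index of the predictive model
               'phi',   [1 1], ...    % position in model, [left right]
               'alpha', [1 1], ...    % amplitude scaling per hand
               'rho',   [1 1]);       % time-scale factor per hand
</PRE>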

久久精品国产99| 欧美影院午夜播放| 精品粉嫩超白一线天av| 欧美私人免费视频| 色老汉一区二区三区| jlzzjlzz亚洲日本少妇| 日韩影院精彩在线| 欧美国产激情一区二区三区蜜月| 色哟哟国产精品免费观看| 色偷偷成人一区二区三区91| 欧美三区在线观看| 精品国产一区二区三区不卡| 久久噜噜亚洲综合| 亚洲婷婷在线视频| 美女视频网站久久| 97精品久久久午夜一区二区三区 | 在线精品国精品国产尤物884a| 日日骚欧美日韩| 国产日产亚洲精品系列| 91麻豆精品国产自产在线 | 91国模大尺度私拍在线视频| 一区二区三区精品视频| 日本aⅴ亚洲精品中文乱码| 日韩欧美一二三区| 中文在线资源观看网站视频免费不卡| 久久久99精品免费观看不卡| 99久久综合精品| 福利一区二区在线| 不卡的av在线播放| 精品视频色一区| 精品国产乱码久久| 国产亚洲欧美在线| 亚洲国产一区二区视频| 欧美肥妇毛茸茸| 国产老妇另类xxxxx| 最近中文字幕一区二区三区| 欧美美女bb生活片| 激情综合色综合久久综合| 国产精品家庭影院| 国产精品每日更新| 91小视频免费观看| 日韩专区中文字幕一区二区| 国产日产欧美一区二区视频| 欧美午夜免费电影| 成人性生交大片免费| 首页国产欧美日韩丝袜| 国产精品日产欧美久久久久| 欧美日韩国产片| 成人aa视频在线观看| 日韩电影在线看| 国产精品丝袜一区| 欧美一二三区精品| 欧美在线色视频| 成人黄色电影在线| 久久国产精品99精品国产| 亚洲精品国产一区二区三区四区在线| 久久一区二区视频| 欧美一区二区三区啪啪| 色综合天天在线| 国产精品1区2区| 美日韩一区二区| 天涯成人国产亚洲精品一区av| 国产精品无遮挡| 欧美成人激情免费网| 欧美日韩高清一区二区| 99麻豆久久久国产精品免费| 国产一区二区网址| 九九精品视频在线看| 男女男精品网站| 日本欧美一区二区| 亚洲一区二区三区四区在线 | 精品国产自在久精品国产| 欧美影视一区在线| 色综合久久中文字幕综合网| 国v精品久久久网| 狠狠色狠狠色综合| 美脚の诱脚舐め脚责91| 日本va欧美va欧美va精品| 亚洲电影在线免费观看| 亚洲伦理在线免费看| 亚洲天堂成人在线观看| 中文字幕一区三区| 日韩美女视频19| 亚洲男人的天堂在线aⅴ视频 | 中文字幕一区二| 成人免费一区二区三区在线观看| 国产精品全国免费观看高清 | 成人av片在线观看| 国产一区二区网址| 亚洲成a人v欧美综合天堂下载| 久久亚洲影视婷婷| 日韩一二三区不卡| 欧美性生活久久| 日本久久一区二区三区| 粉嫩在线一区二区三区视频| 精品一区二区精品| 欧美三级电影在线看| 亚洲女人的天堂| 久久精品无码一区二区三区| 精品国产不卡一区二区三区| 欧美videofree性高清杂交| 精品乱人伦小说| 中文字幕免费一区| 尤物av一区二区| 麻豆一区二区99久久久久| 国产精品亚洲视频| 一本色道久久综合亚洲精品按摩 | 国产一区二区福利| 99视频在线精品| 欧美日韩成人在线| 久久午夜国产精品| 亚洲欧美日韩一区二区三区在线观看| 亚洲国产aⅴ天堂久久| 老司机免费视频一区二区三区| 国产美女在线精品| 色94色欧美sute亚洲13| 欧美一区二区三区四区五区| 久久这里只精品最新地址| 亚洲欧美乱综合| 久久精工是国产品牌吗| 99在线精品视频| 欧美一级专区免费大片| 国产精品久线在线观看| 亚洲一区免费在线观看| 国产一区二区视频在线| 在线免费观看日本一区| 26uuu国产在线精品一区二区| 中文字幕亚洲视频| 美国十次综合导航| 91香蕉视频黄| 精品福利视频一区二区三区| 亚洲欧美国产77777| 狠狠色丁香九九婷婷综合五月| 色一情一伦一子一伦一区| 亚洲精品一区二区三区在线观看| 亚洲欧美日韩国产综合在线| 国产中文一区二区三区| 欧美三区在线观看| 国产精品蜜臀在线观看| 国内成人精品2018免费看| 在线免费视频一区二区| 中文字幕成人av| 极品美女销魂一区二区三区| 欧美三级日韩三级| 亚洲天堂中文字幕| 成人午夜又粗又硬又大| 日韩午夜精品视频| 亚洲乱码国产乱码精品精98午夜| 黄页视频在线91| 久久超碰97中文字幕| 欧美成人精精品一区二区频| 日本午夜精品一区二区三区电影 | 日韩电影免费在线| 91久久精品一区二区三区| 国产精品久久久久桃色tv| 国产一区二区久久| 久久综合九色综合97婷婷女人 | 欧美日韩国产区一| 亚洲同性同志一二三专区| 国产成人av福利| 精品美女一区二区三区| 免费看黄色91| 777色狠狠一区二区三区| 亚洲国产三级在线| 在线视频国产一区| 一区二区三区在线视频免费| 波多野结衣在线一区| 日韩主播视频在线| 欧美手机在线视频| 亚洲国产精品尤物yw在线观看| 色婷婷狠狠综合| 一区二区三区中文字幕电影| 91丝袜美女网| 亚洲免费av观看| 欧美视频在线播放| 午夜精品久久久久| 欧美一级免费大片| 久久精品国产一区二区三| 日韩美女视频在线| 国产一区二区美女诱惑| 久久久精品国产免费观看同学| 国产一区二区在线电影| 亚洲国产成人自拍| 99久久国产综合色|国产精品| 亚洲人成小说网站色在线| 欧美中文字幕亚洲一区二区va在线| 一级女性全黄久久生活片免费| 欧美自拍偷拍一区| 日本va欧美va瓶| 久久精品视频一区| 99re这里只有精品首页| 亚洲午夜日本在线观看| 日韩一区和二区| 国产精品羞羞答答xxdd| 亚洲欧洲av在线| 欧美日韩一区精品| 精品一区二区三区在线观看| 国产精品人人做人人爽人人添| 色综合中文字幕| 日本成人在线视频网站|