\documentclass[a4paper,12pt]{article}
\usepackage{times,epsfig}

\newcommand{\bmath}[1]{\mbox{$\mathbf{#1}$}}
\newcommand{\q}{\bmath{q}}
\newcommand{\y}{\bmath{y}}
\newcommand{\x}{\bmath{x}}
\newcommand{\de}{\bmath{d}}
\newcommand{\ere}{\bmath{r}}
\newcommand{\omegav}{\bmath{\omega}}
\newcommand{\f}{\bmath{f}}
\newcommand{\n}{\bmath{n}}
\newcommand{\Pe}{\bmath{P}}
\newcommand{\F}{\bmath{F}}
\newcommand{\G}{\bmath{G}}
\newcommand{\I}{\bmath{I}}
\newcommand{\J}{\bmath{J}}
\newcommand{\K}{\bmath{K}}
\newcommand{\Ese}{\bmath{S}}
\newcommand{\R}{\bmath{R}}
\newcommand{\Ha}{\bmath{H}}
\newcommand{\h}{\bmath{h}}
\newcommand{\m}{\bmath{m}}
\newcommand{\z}{\bmath{z}}
\newcommand{\uve}{\bmath{v}}
\newcommand{\V}{\bmath{V}}
\newcommand{\undist}{\bmath{uds}}
\newcommand{\dist}{\bmath{ds}}
\newcommand{\amin}{\bmath{a}}

\setlength{\oddsidemargin}{-7mm}
\setlength{\evensidemargin}{-7mm}
\setlength{\topmargin}{-14mm}
\setlength{\parindent}{0mm}
\setlength{\parskip}{1mm}
\setlength{\textwidth}{173mm}
\setlength{\textheight}{244mm}
\setlength{\unitlength}{1mm}

\newcommand{\coursetitle}{SLAM Summer School 2006}
\newcommand{\tutorialtitle}{Practical 2: SLAM using Monocular Vision}
\newcommand{\docauthor}{Javier Civera, University of Zaragoza\\
                        Andrew J.
Davison, Imperial College London\\
                        J. M. M. Montiel, University of Zaragoza.}
\newcommand{\docemails}{josemari@unizar.es, jcivera@unizar.es, ajd@doc.ic.ac.uk}

\begin{document}

\begin{center}
\hrule
\vspace{2mm}
{\LARGE\bf \coursetitle}

\vspace{2mm}
{\Large\bf \tutorialtitle}

\vspace{2mm}
{\large \docauthor}\\
{\large \texttt{\docemails}}
\end{center}
\hrule
\vspace{5mm}

\setcounter{page}{1}
\setcounter{section}{0}

\section{Objectives}
\begin{enumerate}
\item Understanding the characteristics of efficient (potentially real-time) SLAM using a monocular camera as the only sensor:
\begin{enumerate}
\item map management;
\item feature initialization;
\item near and far features.
\end{enumerate}
\item Understanding the inverse depth parametrization of map features in monocular SLAM.
\item Understanding the performance limits of a constant velocity motion model for a camera when no odometry is available.
\end{enumerate}

\section{Exercise 1. Feature selection and matching.}
One of the characteristics of vision-based SLAM is that an image sequence contains too much information for current computers to process in real time. We therefore use heuristics to select which features to include in the map. The desirable properties of map features are:
\begin{enumerate}
\item Saliency: features have to be identified by distinctive texture patches.
\item A minimum number (e.g.\ 14) should be visible in the image at all times --- when this is not the case, new map features are initialized.
\item The features should be spread over the whole image.
\end{enumerate}
The goal of this exercise is to initialize features manually so as to meet the above criteria, and to understand better how an automatic initialization algorithm should work. Run \texttt{mono\_slam.m}.
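A common way to quantify the saliency criterion above (the standard Shi--Tomasi score; not necessarily the exact criterion used inside \texttt{mono\_slam.m}) is the image-gradient second-moment matrix of each candidate patch,
\[
\bmath{M}=\sum_{(u,v)\in \mathrm{patch}}
\left(\begin{array}{cc}
I_u^2 & I_u I_v\\
I_u I_v & I_v^2
\end{array}\right),
\qquad
s=\min\left(\lambda_1,\lambda_2\right),
\]
where $I_u$, $I_v$ are the image derivatives and $\lambda_1,\lambda_2$ are the eigenvalues of $\bmath{M}$. Patches with a large score $s$ have strong gradients in two independent directions and can therefore be matched reliably.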
With the user interface you can add features and perform step-by-step EKF SLAM:
\begin{enumerate}
\item In the first image, add about ten salient features spread over the image. You can watch the movie \texttt{juslibol\_SLAM.mpg} (using, for instance, \texttt{mpeg\_play} on a Unix workstation) as an example of how to select suitable features (but you can of course select other ones). This movie shows the results of applying automatic feature selection.
\item As the camera moves, some features will leave the field of view, and you will have to add new ones in order to maintain around 14 visible map features.
\end{enumerate}

\section{Exercise 2. Near features and far features.}
A camera is a bearing-only sensor. This means that the depth of a feature cannot be estimated from a single image measurement. The depth of a feature can be estimated only if the feature is observed from different points of view, i.e.\ only if the camera translates enough to produce significant parallax. Hence there can be distant features whose depth cannot be correctly estimated for a long time --- or ever, if the translation never becomes large enough to produce parallax. The goal of this exercise is to observe how the estimates of near and distant features evolve differently, and their influence on the camera location estimate.
\begin{enumerate}
\item Open the video \texttt{parallax.mpg}. Observe the different motion \emph{in the image} of features at different depths. Then open the video \texttt{noparallax.mpg} and see that now the motion in the image is the same for features at all depths.
\item Now analyse the video used for this practical (\texttt{juslibol.mpg}). Identify the parts of this video with \emph{low parallax motion}.
\item Run \texttt{mono\_slam.m}.
\begin{itemize}
\item Observe what happens to the features in the 3D map (initialization value and covariance, and value and covariance after some frames).
Red dots are the estimated values and red lines delimit the $95\%$ acceptance region.
\item The code singles out features \#5 and \#15 and displays the depth estimation and its $95\%$ acceptance region: [lower limit, estimation, upper limit]. When clicking, make sure that feature \#5 corresponds to a near feature (for example, on the car) and feature \#15 to a far one (for example, the tree appearing on the left).
\item Notice the difference between the evolution of near and distant features.
\item Observe the evolution of the camera location uncertainty (use the axes limit controls in the user interface).
\item Observe what happens to the feature and camera location uncertainties during the low parallax motion part identified in point 2. Notice the difference between this part of the 3D map (constructed from low parallax information) and the high parallax parts.
\end{itemize}
\end{enumerate}

\section{Exercise 3. Inverse depth parameterization.}
Initializing a feature in monocular SLAM is a challenging issue, because the depth uncertainty is not well modelled by a Gaussian. This problem is overcome by using inverse depth instead of the classical $XYZ$ representation.

\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth]{FeatureObservationAndParameterization.eps}
\caption{Feature parameterization and measurement equation.}
\label{fig_feat_par}
\end{figure}

The Matlab code of the practical uses the inverse depth parameterization, so the state vector is
\begin{equation}
\x=\left(\x_v^\top, \y_1^\top, \y_2^\top, \ldots, \y_n^\top\right)^\top.
\end{equation}
It is composed of:
\begin{enumerate}
\item 13 components that correspond to the location and velocity of the camera:
\begin{equation}
\x_v=\left(\begin{array}{c}
\ere^{WC}\\
\q^{WC}\\
\uve^{W}\\
\omegav^{W}
\end{array}\right).
\end{equation}
\item The rest of the components are features.
Each feature is represented by six parameters: the position of the camera when the feature was first observed, $(x_i, y_i, z_i)$; the ray direction coded as azimuth--elevation angles $(\theta_i, \phi_i)$ in the absolute reference frame; and the inverse depth $\rho_i$ of the feature along the ray:
\begin{equation}
\y_i=\left(\begin{array}{cccccc}
x_i & y_i & z_i & \theta_i & \phi_i & \rho_i
\end{array}\right)^\top.
\end{equation}
The transformation from the inverse depth coding to the Euclidean coding in the absolute frame is
\begin{equation}
\left(\begin{array}{c}x\\y\\z\end{array}\right)=
\left(\begin{array}{c}x_i\\y_i\\z_i\end{array}\right)+
\frac{1}{\rho_i}\m\left(\theta_i,\phi_i\right),
\end{equation}
where
\begin{equation}
\m=\left(\begin{array}{ccc}
\cos\phi_i \sin\theta_i &
-\sin\phi_i &
\cos\phi_i \cos\theta_i
\end{array}\right)^\top.
\label{eq-m}
\end{equation}
\end{enumerate}

The goal of this exercise is to understand the inverse depth parameterization.
\begin{enumerate}
\item The code stores partial information about features \#5 and \#15 in the file \texttt{history.mat}:
\begin{itemize}
\item \texttt{feature5History} is a 6-row matrix; column $k$ contains the feature \#5 location coded in inverse depth at step $k$.
\item \texttt{rhoHistory\_5} is a row vector containing the inverse depth estimation history for feature \#5.
\item \texttt{rhoHistory\_15} is a row vector containing the inverse depth estimation history for feature \#15.
\item \texttt{rhoStdHistory\_5} is a row vector containing the inverse depth standard deviation history for feature \#5.
\item \texttt{rhoStdHistory\_15} is a row vector containing the inverse depth standard deviation history for feature \#15.
\end{itemize}
\item Compute the $XYZ$ Euclidean location of feature \#5 after processing all the images.
\item Plot the inverse depth value and the $95\%$ acceptance region history for both features \#5 and \#15.
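For example, points 2 and 3 could be sketched in Matlab as follows (a minimal sketch for feature \#5, assuming the variable layout described in point 1; adapt as needed):
\begin{verbatim}
load history.mat
% Final inverse depth coding of feature #5: (x_i y_i z_i theta phi rho)'
y = feature5History(:, end);
% Convert to Euclidean XYZ using the ray direction m(theta, phi)
m = [cos(y(5))*sin(y(4)); -sin(y(5)); cos(y(5))*cos(y(4))];
XYZ = y(1:3) + m / y(6);
% Inverse depth history with 95% region (approx. +/- 1.96 sigma)
figure; hold on;
plot(rhoHistory_5);
plot(rhoHistory_5 + 1.96*rhoStdHistory_5, 'r--');
plot(rhoHistory_5 - 1.96*rhoStdHistory_5, 'r--');
\end{verbatim}
Repeat the plot for feature \#15 and compare.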
Use the Matlab functions \texttt{open}, \texttt{figure}, \texttt{hold}, and \texttt{plot}. Comment on the difference between the two graphs.
\item After processing the whole sequence, what are the estimate and the acceptance region \emph{expressed in depth} for both features? Think about a feature at infinity: what inverse depth would it have? Try to see in the previous graphs when infinity is included in each feature's estimate, that is, when the feature could be at any depth along the ray.
\end{enumerate}

\section{Exercise 4. Constant velocity motion model (optional)}
Monocular SLAM uses the camera as its only sensor, without any odometry input. A constant velocity motion model is used, so the system needs as input both the camera frame rate and the maximum expected angular and linear accelerations. For a given camera acceleration, the frame rate determines the size of the acceptance region for point matching, so for any acceleration the acceptance regions can be kept small if a high enough frame rate is used. The goal of this exercise is to analyse the effect of the linear acceleration, the angular acceleration and the frame rate.
\begin{enumerate}
\item The initial tuning is $6\,\mathrm{m/s^2}$ and $6\,\mathrm{rad/s^2}$.
\item Increase the angular acceleration only (for instance, double the value) and analyse the effect on the search region. This parameter is in the file \texttt{mono\_slam.m}; its name is \texttt{sigma\_alphaNoise}.
\item Increase the linear acceleration only (for instance, double the value), analyse the effect and compare with the previous point. The name of this parameter is \texttt{sigma\_aNoise}.
\item Reduce the processing frame rate and observe the effect:
\begin{enumerate}
\item 1 out of 2 images;
\item 1 out of 4 images.
\end{enumerate}
Has the processing time increased or decreased as a result of processing fewer images? Can you explain this? Hint: find the variable \texttt{step} and the code associated with this variable.
You will also have to modify the variable \texttt{deltat}, which codes the time between frames.
\end{enumerate}

\nocite{Hartley2004,Davison2003,Montiel2006RSS,Bar-Shalom-88}
\bibliographystyle{ieee}
\bibliography{IEEEabrv,practical}

\end{document}
