\def\cakedir{.}
\def\figdir{./Fig}
\lefthead{Fomel}
\righthead{Conjugate directions}
\footer{SEP--92}
\lstset{language=c,numbers=left,numberstyle=\tiny,showstringspaces=false}
\title{Least-square inversion with inexact adjoints. \\
Method of conjugate directions: A tutorial}
%\keywords{inversion, algorithm, modeling, linear }
\email{sergey@sep.stanford.edu}
\author{Sergey Fomel}
\maketitle

\begin{abstract}
  This tutorial describes the classic method of conjugate directions:
  the generalization of the conjugate-gradient method in iterative
  least-square inversion. I derive the algebraic equations of the
  conjugate-direction method from general optimization principles. The
  derivation explains the ``magic'' properties of conjugate gradients.
  It also justifies the use of conjugate directions in cases when
  these properties are distorted either by computational errors or by
  inexact adjoint operators. The extra cost comes from storing a
  larger number of previous search directions in the computer memory.
  A simple program and two examples illustrate the method.
\end{abstract}

\section{Introduction}
%%%%
This paper describes the method of conjugate directions for solving
linear operator equations in Hilbert space. This method is usually
described in the numerous textbooks on unconstrained optimization as
an introduction to the much more popular method of conjugate
gradients. See, for example, {\em Practical optimization} by
\cite{gill} and its bibliography. The famous conjugate-gradient solver
possesses specific properties, well-known from the original works of
\cite{hestenes} and \cite{fletcher}. For linear operators and exact
computations, it guarantees finding the solution after, at most, $n$
iterative steps, where $n$ is the number of dimensions in the solution
space. The method of conjugate gradients doesn't require explicit
computation of the objective function and explicit inversion of the
Hessian matrix. This makes it particularly attractive for large-scale
inverse problems, such as those of seismic data processing and
interpretation. However, it does require explicit computation of the
adjoint operator. \cite{Claerbout.blackwell.92,iee} shows dozens of
successful examples of the conjugate gradient application with
numerically precise adjoint operators.
\par
The motivation for this tutorial is to explore the possibility of
using different types of preconditioning operators in the place of
adjoints in iterative least-square inversion. For some linear or
linearized operators, implementing the exact adjoint may pose a
difficult problem. For others, one may prefer different
preconditioners because of their smoothness
\cite[]{Claerbout.sep.89.201,Crawley.sep.89.207}, simplicity
\cite[]{kleinman}, or asymptotic properties \cite[]{herman}. In those
cases, we could apply the natural generalization of the
conjugate-gradient method, which is the method of conjugate
directions. The cost difference between those two methods is in the
volume of memory storage. In the days when the conjugate-gradient
method was invented, this difference looked too large to even consider
a practical application of conjugate directions. With the evident
increase of computer power over the last 30 years, we can afford to do
it now.
\par
I derive the main equations used in the conjugate-direction method
from very general optimization criteria, with minimum restrictions
implied.
The textbook algebra is illustrated with a simple program and two
simple examples.

\section{IN SEARCH OF THE MINIMUM} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We are looking for the solution of the linear operator equation
\begin{equation}
{\bf d = A\,m}\;,
\label{eqn:equation}
\end{equation}
where ${\bf m}$ is the unknown model in the linear model space, ${\bf d}$
stands for the given data, and ${\bf A}$ is the forward modeling
operator. The data vector ${\bf d}$ belongs to a Hilbert space with
a defined norm and dot product. The solution is constructed by iterative
steps in the model space, starting from an initial guess ${\bf m}_0$.
Thus, at the $n$-th iteration, the current model ${\bf m}_n$ is
found by the recursive relation
\begin{equation}
{\bf m}_n = {\bf m}_{n-1} + \alpha_n {\bf s}_n\;,
\label{eqn:mn}
\end{equation}
where ${\bf s}_n$ denotes the step direction, and $\alpha_n$ stands
for the scaling coefficient. The residual at the $n$-th iteration is
defined by
\begin{equation}
{\bf r}_n = {\bf d - A\,m}_{n}\;.
\label{eqn:residual}
\end{equation}
Substituting (\ref{eqn:mn}) into (\ref{eqn:residual}) leads to the equation
\begin{equation}
{\bf r}_n = {\bf r}_{n-1} - \alpha_n {\bf A\,s}_n\;.
\label{eqn:rn}
\end{equation}
For a given step ${\bf s}_n$, we can choose $\alpha_n$ to minimize the
squared norm of the residual
\begin{equation}
\|{\bf r}_n\|^2 = \|{\bf r}_{n-1}\|^2 -
2\,\alpha_n \left({\bf r}_{n-1},\,{\bf A\,s}_n\right) +
\alpha_n^2\,\|{\bf A\,s}_n\|^2\;.
\label{eqn:rnorm}
\end{equation}
The parentheses denote the dot product, and
$\|{\bf x}\|=\sqrt{({\bf x,\,x})}$ denotes the norm of ${\bf x}$ in the
corresponding Hilbert space. The optimal value of $\alpha_n$ is easily
found from equation (\ref{eqn:rnorm}) to be
\begin{equation}
\alpha_n = {{\left({\bf r}_{n-1},\,{\bf A\,s}_n\right)} \over
{\|{\bf A\,s}_n\|^2}}\;.
\label{eqn:alpha}
\end{equation}
Two important conclusions immediately follow from this fact. First,
substituting the value of $\alpha_n$ from formula (\ref{eqn:alpha}) into
equation (\ref{eqn:rn}) and multiplying both sides of this equation by
${\bf r}_n$, we can conclude that
\begin{equation}
\left({\bf r}_n,\,{\bf A\,s}_n\right) = 0\;,
\label{eqn:rasn}
\end{equation}
which means that the new residual is orthogonal to the corresponding
step in the residual space. This situation is schematically shown in
Figure \ref{fig:dirres}. Second, substituting formula (\ref{eqn:alpha})
into (\ref{eqn:rnorm}), we can conclude that the new residual decreases
according to
\begin{equation}
\|{\bf r}_n\|^2 = \|{\bf r}_{n-1}\|^2 -
{{\left({\bf r}_{n-1},\,{\bf A\,s}_n\right)^2} \over
{\|{\bf A\,s}_n\|^2}}\;,
\label{eqn:pythagor}
\end{equation}
(``Pythagoras's theorem''), unless ${\bf r}_{n-1}$ and ${\bf A\,s}_n$
are orthogonal. These two conclusions are the basic features of
optimization by the method of steepest descent. They will help us
define an improved search direction at each iteration.
\inputdir{XFig}
\sideplot{dirres}{height=2.5in}{Geometry of the residual in the
data space (a scheme).}

\section{IN SEARCH OF THE DIRECTION} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Let's sup\-pose we have a ge\-ne\-ra\-tor that pro\-vides
parti\-cu\-lar search directions at each step. The new direction can
be the gradient of the objective function (as in the method of
steepest descent), some other operator applied on the residual from
the previous step, or, generally speaking, any arbitrary vector in the
model space. Let us denote the automatically generated direction by
${\bf c}_n$.
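
Whatever generator supplies ${\bf c}_n$, each iteration ends with the
line search of the previous section: equations (\ref{eqn:alpha}),
(\ref{eqn:mn}), and (\ref{eqn:rn}). A minimal C sketch of this update
is shown below; it is given only for illustration (it is not the
program mentioned in the abstract) and assumes that model and data
vectors are stored as plain \texttt{double} arrays of lengths
\texttt{nm} and \texttt{nd}, with the names \texttt{dot} and
\texttt{line\_search} chosen arbitrarily.

\begin{lstlisting}
#include <stddef.h>

/* dot product of two vectors of length n */
static double dot(size_t n, const double *x, const double *y)
{
    double sum = 0.;
    for (size_t i = 0; i < n; i++) sum += x[i]*y[i];
    return sum;
}

/* One line search: given a step s and its image As = A s, find alpha
   from (eqn:alpha), then update the model by (eqn:mn) and the
   residual by (eqn:rn). */
static void line_search(size_t nm, size_t nd,
                        double *m, double *r,
                        const double *s, const double *As)
{
    double alpha = dot(nd, r, As) / dot(nd, As, As);
    for (size_t i = 0; i < nm; i++) m[i] += alpha * s[i];
    for (size_t i = 0; i < nd; i++) r[i] -= alpha * As[i];
}
\end{lstlisting}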
According to formula (\ref{eqn:pythagor}), the residual
decreases as a result of choosing this direction by
\begin{equation}
\|{\bf r}_{n-1}\|^2 - \|{\bf r}_{n}\|^2 =
{{\left({\bf r}_{n-1},\,{\bf A\,c}_n\right)^2} \over
{\|{\bf A\,c}_n\|^2}}\;.
\label{eqn:deltar}
\end{equation}
How can we improve on this result?

\subsection{First step of the improvement}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Assuming $n>1$, we can add some amount of the previous step
${\bf s}_{n-1}$ to the chosen direction ${\bf c}_n$ to produce a new
search direction ${\bf s}_n^{(n-1)}$, as follows:
\begin{equation}
{\bf s}_n^{(n-1)} = {\bf c}_n + \beta_n^{(n-1)}\,{\bf s}_{n-1}\;,
\label{eqn:cn}
\end{equation}
where $\beta_n^{(n-1)}$ is an adjustable scalar coefficient. According
to the fundamental orthogonality principle (\ref{eqn:rasn}),
\begin{equation}
\left({\bf r}_{n-1},\,{\bf A\,s}_{n-1}\right) = 0\;.
\label{eqn:rasn1}
\end{equation}
As follows from equation (\ref{eqn:rasn1}), the numerator on the
right-hand side of equation (\ref{eqn:deltar}) is not affected by the
new choice of the search direction:
\begin{equation}
\left({\bf r}_{n-1},\,{\bf A\,s}_n^{(n-1)}\right)^2 =
\left[\left({\bf r}_{n-1},\,{\bf A\,c}_n\right) +
\beta_n^{(n-1)}\,\left({\bf r}_{n-1},\,{\bf A\,s}_{n-1}\right)\right]^2 =
\left({\bf r}_{n-1},\,{\bf A\,c}_n\right)^2\;.
\label{eqn:numerator}
\end{equation}
However, we can use transformation (\ref{eqn:cn}) to decrease the
denominator in (\ref{eqn:deltar}), thus further decreasing the residual
${\bf r}_n$. We achieve the minimization of the denominator
\begin{equation}
\|{\bf A\,s}_n^{(n-1)}\|^2 = \|{\bf A\,c}_n\|^2 +
2\,\beta_n^{(n-1)}\,\left({\bf A\,c}_n,\,{\bf A\,s}_{n-1}\right) +
\left(\beta_n^{(n-1)}\right)^2\,\|{\bf A\,s}_{n-1}\|^2
\label{eqn:denominator}
\end{equation}
by choosing the coefficient $\beta_n^{(n-1)}$ to be
\begin{equation}
\beta_n^{(n-1)} = - {{\left({\bf A\,c}_n,\,{\bf A\,s}_{n-1}\right)} \over
{\|{\bf A\,s}_{n-1}\|^2}}\;.
\label{eqn:beta}
\end{equation}
Note the analogy between (\ref{eqn:beta}) and (\ref{eqn:alpha}).
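
In code, this first improvement step costs one more dot-product ratio
and one stored pair of vectors. The sketch below is again only an
illustration (it reuses the \texttt{dot} helper of the previous
listing); it overwrites ${\bf c}_n$ and ${\bf A\,c}_n$ in place with
${\bf s}_n^{(n-1)}$ and ${\bf A\,s}_n^{(n-1)}$, using the linearity of
${\bf A}$ to update the image with the same coefficient.

\begin{lstlisting}
/* First improvement step, equations (eqn:cn) and (eqn:beta).
   On input, s and As hold c and A c; on output they hold the improved
   step and its image, with A s orthogonal to As_prev.  Because A is
   linear, the image is combined with the same coefficient beta.
   dot() is the helper defined in the previous listing. */
static void improve_once(size_t nm, size_t nd,
                         double *s, double *As,
                         const double *s_prev, const double *As_prev)
{
    double beta = -dot(nd, As, As_prev) / dot(nd, As_prev, As_prev);
    for (size_t i = 0; i < nm; i++) s[i]  += beta * s_prev[i];
    for (size_t i = 0; i < nd; i++) As[i] += beta * As_prev[i];
}
\end{lstlisting}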
Analogously to (\ref{eqn:rasn}), equation (\ref{eqn:beta}) is
equivalent to the orthogonality condition
\begin{equation}
\left({\bf A\,s}_n^{(n-1)},\,{\bf A\,s}_{n-1}\right) = 0\;.
\label{eqn:acas}
\end{equation}
Analogously to (\ref{eqn:pythagor}), applying formula (\ref{eqn:beta})
is also equivalent to defining the minimized denominator as
\begin{equation}
\|{\bf A\,s}_n^{(n-1)}\|^2 = \|{\bf A\,c}_n\|^2 -
{{\left({\bf A\,c}_n,\,{\bf A\,s}_{n-1}\right)^2} \over
{\|{\bf A\,s}_{n-1}\|^2}}\;.
\label{eqn:pithagor2}
\end{equation}

\subsection{Second step of the improvement}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Now let us assume $n > 2$ and add some amount of the step from
the $(n-2)$-th iteration to the search direction, determining the new
direction ${\bf s}_n^{(n-2)}$, as follows:
\begin{equation}
{\bf s}_n^{(n-2)} = {\bf s}_n^{(n-1)} + \beta_n^{(n-2)}\,{\bf s}_{n-2}\;.
\label{eqn:cn2}
\end{equation}
We can deduce that after the second change, the value of the numerator
in equation (\ref{eqn:deltar}) is still the same:
\begin{equation}
\left({\bf r}_{n-1},\,{\bf A\,s}_n^{(n-2)}\right)^2 =
\left[\left({\bf r}_{n-1},\,{\bf A\,c}_n\right) +
\beta_n^{(n-2)}\,\left({\bf r}_{n-1},\,{\bf A\,s}_{n-2}\right)\right]^2 =
\left({\bf r}_{n-1},\,{\bf A\,c}_n\right)^2\;.
\label{eqn:numerator2}
\end{equation}
This remarkable fact occurs as the result of transforming the dot product
$\left({\bf r}_{n-1},\,{\bf A\,s}_{n-2}\right)$ with the help of equation
(\ref{eqn:rn}):
\begin{equation}
\left({\bf r}_{n-1},\,{\bf A\,s}_{n-2}\right) =
\left({\bf r}_{n-2},\,{\bf A\,s}_{n-2}\right) -
\alpha_{n-1}\,\left({\bf A\,s}_{n-1},\,{\bf A\,s}_{n-2}\right) = 0\;.
\label{eqn:dotprod}
\end{equation}
The first term in (\ref{eqn:dotprod}) is equal to zero according to formula
(\ref{eqn:rasn}); the second term is equal to zero according to formula
(\ref{eqn:acas}). Thus we have proved the new orthogonality equation
\begin{equation}
\left({\bf r}_{n-1},\,{\bf A\,s}_{n-2}\right) = 0\;,
\label{eqn:rasn2}
\end{equation}
which in turn leads to the numerator invariance (\ref{eqn:numerator2}). The
value of the coefficient $\beta_n^{(n-2)}$ in (\ref{eqn:cn2}) is defined
analogously to (\ref{eqn:beta}) as
\begin{equation}
\beta_n^{(n-2)} = - {{\left({\bf A\,s}_n^{(n-1)},\,{\bf A\,s}_{n-2}\right)} \over
{\|{\bf A\,s}_{n-2}\|^2}} =
- {{\left({\bf A\,c}_n,\,{\bf A\,s}_{n-2}\right)} \over
{\|{\bf A\,s}_{n-2}\|^2}}\;,
\label{eqn:beta2}
\end{equation}
where we have again used equation (\ref{eqn:acas}).
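Combining equations (\ref{eqn:cn}), (\ref{eqn:cn2}), (\ref{eqn:beta}),
and (\ref{eqn:beta2}), the direction obtained after the two improvement
steps can be written out explicitly as
\[
{\bf s}_n^{(n-2)} = {\bf c}_n -
{{\left({\bf A\,c}_n,\,{\bf A\,s}_{n-1}\right)} \over
{\|{\bf A\,s}_{n-1}\|^2}}\,{\bf s}_{n-1} -
{{\left({\bf A\,c}_n,\,{\bf A\,s}_{n-2}\right)} \over
{\|{\bf A\,s}_{n-2}\|^2}}\,{\bf s}_{n-2}\;,
\]
a pattern that the induction below extends to all of the previous steps.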
If ${\bf A\,s}_{n-2}$ is not orthogonal to ${\bf A\,c}_n$, the second
step of the improvement leads to a further decrease of the denominator
in (\ref{eqn:pythagor}) and, consequently, to a further decrease of the
residual.

\subsection{Induction}
%%%%%%%%%%%%%%%%%%
Continuing by induction the process of adding a linear combination of
the previous steps to the arbitrarily chosen direction ${\bf c}_n$
(known in mathematics as the {\em Gram-Schmidt orthogonalization
process}), we finally arrive at the complete definition of the new
step ${\bf s}_n$, as follows:
\begin{equation}
{\bf s}_n = {\bf s}_n^{(1)} = {\bf c}_{n} +
\sum_{j=1}^{j=n-1}\,\beta_n^{(j)}\,{\bf s}_{j}\;.
\label{eqn:step}
\end{equation}
Here the coefficients $\beta_n^{(j)}$ are defined by equations
\begin{equation}
\beta_n^{(j)} = - {{\left({\bf A\,c}_n,\,{\bf A\,s}_{j}\right)} \over
{\|{\bf A\,s}_{j}\|^2}}\;,
\label{eqn:betaj}
\end{equation}
which correspond to the orthogonality principles
\begin{equation}
\left({\bf A\,s}_n,\,{\bf A\,s}_{j}\right) = 0\;,\;\;1 \leq j \leq n-1
\label{eqn:asj}
\end{equation}
and
\begin{equation}
\left({\bf r}_{n},\,{\bf A\,s}_{j}\right) = 0\;,\;\;1 \leq j \leq n\;.
\label{eqn:rasnj}
\end{equation}
It is these orthogonality properties that allowed us to optimize the
search parameters one at a time instead of solving the $n$-dimensional
system of optimization equations for $\alpha_n$ and $\beta_n^{(j)}$.

\section{ALGORITHM}
The results of the preceding sections define the method of conjugate
directions to consist of the following algorithmic steps:
\begin{enumerate}
\item Choose an initial model ${\bf m}_0$ and compute the initial
  residual ${\bf r}_0 = {\bf d - A\,m}_0$.
\item At the $n$-th iteration, generate a new direction ${\bf c}_n$
  (for example, by applying the adjoint or another preconditioning
  operator to the residual ${\bf r}_{n-1}$) and compute its image
  ${\bf A\,c}_n$.
\item Form the new step ${\bf s}_n$ and its image ${\bf A\,s}_n$
  according to equation (\ref{eqn:step}), computing the coefficients
  $\beta_n^{(j)}$ from equation (\ref{eqn:betaj}) with the stored
  vectors ${\bf s}_j$ and ${\bf A\,s}_j$ of the previous iterations.
\item Find $\alpha_n$ from equation (\ref{eqn:alpha}) and update the
  model and the residual according to equations (\ref{eqn:mn}) and
  (\ref{eqn:rn}).
\item Store ${\bf s}_n$ and ${\bf A\,s}_n$ for the later iterations,
  and repeat from the second step until the residual is sufficiently
  small.
\end{enumerate}
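
To make these steps concrete, the listing below sketches one possible
realization in C. It is an illustration rather than the program
referred to in the abstract: it assumes dense \texttt{double} arrays, a
user-supplied callback \texttt{oper} that applies the forward operator
${\bf A}$, and a callback \texttt{gen} that generates ${\bf c}_n$ from
the current residual (an adjoint or any other preconditioner); all of
these names and interfaces are assumptions made for the sketch. The
vectors ${\bf s}_j$ and ${\bf A\,s}_j$ are kept in memory, which is
exactly the extra storage cost mentioned in the introduction.

\begin{lstlisting}
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* dot product of two vectors of length n (same helper as before) */
static double dot(size_t n, const double *x, const double *y)
{
    double sum = 0.;
    for (size_t i = 0; i < n; i++) sum += x[i]*y[i];
    return sum;
}

/* Forward operator: compute d = A m (user supplied). */
typedef void (*operator_t)(const double *m, double *d,
                           size_t nm, size_t nd, void *user);
/* Direction generator: produce c from the current residual r
   (the adjoint of A, or any other preconditioner). */
typedef void (*generator_t)(const double *r, double *c,
                            size_t nm, size_t nd, void *user);

/* Method of conjugate directions, following steps 1-5 above.
   On input, m holds the initial guess and r the initial residual
   d - A m; on output, m holds the estimated model.  The stored
   vectors s_j and A s_j make the memory grow with the iteration
   count. */
static void conjugate_directions(operator_t oper, generator_t gen,
                                 void *user, double *m, double *r,
                                 size_t nm, size_t nd, int niter)
{
    double **s  = malloc(niter * sizeof *s);   /* steps  s_j   */
    double **As = malloc(niter * sizeof *As);  /* images A s_j */
    double  *Ac = malloc(nd * sizeof *Ac);     /* image of c_n */

    for (int n = 0; n < niter; n++) {
        s[n]  = malloc(nm * sizeof **s);
        As[n] = malloc(nd * sizeof **As);

        gen(r, s[n], nm, nd, user);            /* c_n   */
        oper(s[n], Ac, nm, nd, user);          /* A c_n */
        memcpy(As[n], Ac, nd * sizeof *Ac);

        /* Gram-Schmidt step: equations (eqn:step) and (eqn:betaj). */
        for (int j = 0; j < n; j++) {
            double beta = -dot(nd, Ac, As[j]) / dot(nd, As[j], As[j]);
            for (size_t i = 0; i < nm; i++) s[n][i]  += beta * s[j][i];
            for (size_t i = 0; i < nd; i++) As[n][i] += beta * As[j][i];
        }

        /* Line search: equations (eqn:alpha), (eqn:mn), (eqn:rn). */
        double alpha = dot(nd, r, As[n]) / dot(nd, As[n], As[n]);
        for (size_t i = 0; i < nm; i++) m[i] += alpha * s[n][i];
        for (size_t i = 0; i < nd; i++) r[i] -= alpha * As[n][i];
    }

    for (int n = 0; n < niter; n++) { free(s[n]); free(As[n]); }
    free(s); free(As); free(Ac);
}
\end{lstlisting}

In exact arithmetic, when \texttt{gen} applies the exact adjoint of
${\bf A}$, all coefficients $\beta_n^{(j)}$ with $j < n-1$ vanish and
the loop over stored directions collapses to the familiar
conjugate-gradient recursion; with an inexact adjoint or with
accumulated round-off, explicitly orthogonalizing against all stored
directions is what maintains properties (\ref{eqn:asj}) and
(\ref{eqn:rasnj}).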
