% Bottom is a Middle East Geoflex shot profile (Y\&C \#39) after AGC.
% Middle is gapped 1-D decon. Top is steep-dip decon.
% }
\par
Unexpectedly, results showed that 1-D deconvolution
also suppresses low-velocity noises.
An explanation may be that these noises are often either low-frequency
or quasimonochromatic.
\par
As a minor matter, fundamentally,
my code cannot work ideally along the side boundaries
because there is no output
(so I replaced it by the variance-scaled input).
With a little extra coding,
better outputs could be produced along the sides
if we used spatially one-sided filters like
\begin{equation}
\begin{array}{ccccc}
 x& x& x& x& x \\
 .& x& x& x& x \\
 .& x& x& x& x \\
 .& .& x& x& x \\
 .& .& x& x& x \\
 .& .& .& x& x \\
 .& .& .& x& x \\
 .& .& .& .& . \\
 .& .& .& .& . \\
 .& .& .& .& 1
\end{array}
\end{equation}
\noindent
These would be applied on one side of the shot,
and the opposite orientation would be applied on the other side.
With many kinds of data sets,
such as off-end marine recording in which
a ship tows a hydrophone streamer,
the above filter might be better in the interior too.
\subsection{Are field arrays really needed?}
\inputdir{mideast}
Field arrays\sx{field arrays}
cancel random noise, but their main function,
I believe, is to cancel low-velocity coherent noises,
something we now see is handled effectively by steep-dip deconvolution.
While I do not advocate abandoning field arrays,
it is pleasing to notice that with the arrival of steep-dip deconvolution,
we are no longer so dependent on field arrays,
and perhaps coherent noises can be controlled
where field arrays are impractical,
as in certain 3-D geometries.
A recently arrived 3-D shot profile from the sand dunes
in the Middle East is Figure \ref{fig:dune3D}.
The strong hyperbolas are \bx{ground roll} seen in a line
that does not include the shot.
The open question here is:
how should we formulate the problem of ground-roll removal in 3-D?
\plot{dune3D}{height=4in,width=6in}{
 Sand dunes.
 One shot, six parallel receiver lines.
}
\subsection{Which coefficients are really needed?}
Steep-dip decon is a heavy consumer of computer time.
Many small optimizations could be done,
but more importantly,
I feel there are some deeper issues that warrant further investigation.
The first question is:
how many filter coefficients should there be,
and where should they be?
We would like to keep the number of nonzero filter coefficients to a minimum
because it would speed the computation,
but more importantly, I fear the filter output
might be defective in some insidious way (perhaps missing primaries)
when too many filter coefficients are used.
Perhaps if 1-D decon were done sequentially with steep-dip decon,
the number of free parameters (and hence the amount of computer time)
could be dropped even further.
I looked at some of the filters,
and they scatter wildly near the Nyquist frequency
(particularly those coefficients on the trace with the ``1'' constraint).
This suggests using a damping term on the filter coefficients,
after which perhaps the magnitude of a filter coefficient
will be a better measure of whether this practice is really helpful.
Also, it would, of course, be fun to get some complete data sets
(rather than a single shot profile) to see the difference in the final stack.
%\newpage
%\section{SIGNAL ENHANCEMENT BY PREDICTION}
%\sx{signal enhancement by prediction}
%In historic exploration-industry use,
%prediction-error filtering provides temporal predictions
%that are immediately subtracted from the data itself.
%In recent years,
%Luis \bx{Canales} proposed and developed a process
%of looking at the {\it spatial} predictions themselves,
%and this process has become quite popular.
%The idea is that because noise is unpredictable,
%better-looking seismic data can result
%from looking at the spatial predictions
%than looking at the data itself.
%Although Canales' process is done in the temporal-frequency domain,
%we can also do it in the time domain,
%where we can maintain tighter control over nonstationarity
%and statistical fluctuations.
%The form of the prediction-error filter is
%\begin{equation}
%\begin{array}{ccccccc}
%a &a &a &\cdot &\cdot &\cdot &\cdot \\
%a &a &a &\cdot &\cdot &\cdot &\cdot \\
%a &a &a &1 &\cdot &\cdot &\cdot \\
%a &a &a &\cdot &\cdot &\cdot &\cdot \\
%a &a &a &\cdot &\cdot &\cdot &\cdot
%\end{array}
%\end{equation}
%and the prediction is the same without the ``1''.
%It is perplexing that the spatial prediction has a horizontal direction.
%Some people average the left and the right, but here I have not.
%An alternative is to use interpolation,
%\begin{equation}
%\begin{array}{ccccccc}
%a &a &a &\cdot &a &a &a \\
%a &a &a &\cdot &a &a &a \\
%a &a &a &1 &a &a &a \\
%a &a &a &\cdot &a &a &a \\
%a &a &a &\cdot &a &a &a
%\end{array}
%\end{equation}
%but here I have not.
%In either case, it is important to realize that after the filter coefficients
%are determined by minimizing output power,
%the ``1'' in the filter is replaced by zero before it is used.
%Thus the methods are prediction or interpolation,
%and they should not be called ``deconvolution''.
%
%\activeplot{idapred}{width=6.5in,height=8.0in}{CR}{
% Stack of Shearer's IDA data (left).
% Prediction (right).
% Notice that the time scale
% is minutes and the offset is degrees of angle
% on the earth's surface.
% }
%
%\par
%To compare the spatial predictions to the data itself,
%I selected the interesting data set
%shown in Figure~\ref{fig:idapred}.
%The data plane is a stack of \bx{earthquake}s.
%At early times, before 93 minutes travel time,
%the data resembles a common-midpoint gather.
%At later times,
%the strong surface waves travel
%round the earth and past the antipodes
%and come back towards the source.
%Otherwise, there are remarkable similarities
%to conventional exploration seismic data.
%Many fewer earthquakes are observed near 180 degrees
%than near 90 degrees for the simple geometrical reason that the 10
%degrees surrounding the equator is a much bigger area than the 10 degrees
%surrounding the pole.
%Thus the quality of the stacks degrades rapidly toward the poles.
%Although data quality is poor at the poles themselves,
%notice that waves going beyond the antipodes come back toward the source.
%The data has a large dynamic range
%that I compressed by various range- and time-dependent gain multipliers,
%and in the last step before display,
%I took the signed square roots of the values of the stack.
%
%\subsection{Parameters for signal enhancement by prediction}
%The predictions in Figure~\ref{fig:idapred} were derived from
%prediction errors computed from subroutine
%\texttt{find\_lopef} \vpageref{lst:lopef},
%seen earlier in another application.
%The prediction is simply the data minus the prediction error.
%\par
%Data is analyzed in many overlapping windows, which are then merged.
%Because the quality of the results depends on the window sizes,
%I report here the reasoning behind my choices.
%First,
%the Canales process is generally applied in the temporal-frequency domain.
%The number of coefficients on the space axis for the predictions
%is generally taken much larger
%than the wave-slope count in a typical window.
%This is common practice, and
%I explain the larger size by saying that because
%the prediction of the data is based on noisy data itself,
%the process needs a sizeable window in which to do statistical averaging.
%%\par
%%but this fact is irrelevant.
%To match the stepout of the dominant wave
%(an around-world \bx{Rayleigh wave}),
%I took the filter length and width to be
%{\tt a1=27} and {\tt a2=7}.
%Then for statistical smoothing,
%I chose fitting windows to be ten times
%as large as the filter in both directions.
%Obviously,
%the statistics could be gathered in different amounts
%on the two axes, and averaging differently could
%give significantly different results.
%Anyway, the result for my choices is that the entire page
%is divided into four windows horizontally and three vertically.
%\par
%The temporal (half) extent of the filter is evident by the
%strong character change at the top and bottom.
%The spatial extent is not revealed in this way because
%of the vanishing traces (empty bins) along the edges.
%\par
%I notice a disturbing darkness at late times and wide offsets.
%This is energy at zero frequency,
%a highly predictable frequency,
%that might have crept in because I used medians on bins
%with small numbers of traces,
%perhaps an {\it even} number, so the median had a consistent bias.
%%\par
%Overall, the prediction process performs as expected.
%It is disappointing, however,
%in that it tends to swamp weak events in the ``side lobes''
%of strong events.
%I believe the widespread acceptance of this process
%arises from its use on data of very low quality.
%Where there is barely one perceptible event,
%a process that strengthens that event is a welcome process.
%
\section{INVERSION AND NOISE REMOVAL}
Here we relate the basic theoretical statement
of geophysical inverse theory
to the basic theoretical statement
of separation of signals from noises.
\par
A common form of linearized \bx{geophysical inverse theory} is
\sx{inverse theory}
\begin{eqnarray}
\bold 0 & \approx & \bold W ( \bold L \bold m - \bold d) \\
\bold 0 & \approx & \epsilon \bold A \bold m
\end{eqnarray}
We choose the operator $\bold L = \bold I$ to be an identity,
and we rename the model $\bold m$ to be signal $\bold s$.
Define noise by the decomposition of data into signal plus noise,
so $\bold n = \bold d - \bold s$.
Finally, let us rename the weighting (and filtering) operations
$\bold W = \bold N$ on the noise and $\bold A = \bold S$ on the signal.
Thus the usual model fitting becomes
a fitting for signal-noise separation:
\begin{eqnarray}
\label{eqn:noisereg}
0 & \approx & \bold N (-\bold n) = \bold N ( \bold s - \bold d) \\
\label{eqn:signalreg}
0 & \approx & \epsilon \bold S \bold s
\end{eqnarray}
\section{SIGNAL-NOISE DECOMPOSITION BY DIP}
Choose noise $\bold n$ to be energy that has no spatial correlation
and signal $\bold s$ to be energy with spatial correlation
consistent with one, two, or possibly a few plane-wave segments.
(Another view of noise is that a huge number of plane waves is required
to define the wavefield; in other words, with \bx{Fourier analysis}
you can make anything, signal or noise.)
We know that a first-order differential equation can absorb (kill)
a single plane wave, a second-order equation
can absorb one or two plane waves, etc.
In practice, we will choose the order of the wavefield,
minimize power to absorb all we can,
and call that the signal.
$\bold S$ is the operator that absorbs (by prediction error)
the plane waves, $\bold N$ absorbs noises,
and $\epsilon > 0$ is a small scalar to be chosen.
The difference between $\bold S$ and $\bold N$
is the spatial order of the filters.
Because we regard the noise as spatially uncorrelated,
$\bold N$ has coefficients only on the time axis.
Coefficients for $\bold S$
are distributed over time and space.
They have one space level,
plus another level
for each plane-wave segment slope that
we deem to be locally present.
In the examples here, the number of slopes is taken to be two.
Where a data field seems to require more than two slopes,
it usually means the ``patch'' could be made smaller.
\par
It would be nice if we could forget about the goal
(\ref{eqn:signalreg}),
but without it the goal
(\ref{eqn:noisereg})
would simply set the signal $\bold s$
equal to the data $\bold d$.
Choosing the value of $\epsilon$ will determine in some way
the amount of data energy partitioned into each.
The last thing we will do is choose the value of $\epsilon$,
and if we do not find a theory for it, we will experiment.
\par
The operators $\bold S$ and $\bold N$
can be thought of as ``leveling'' operators.
The method of least squares sees mainly big things,
and spectral zeros in $\bold S$ and $\bold N$
tend to cancel
spectral lines and plane waves in $\bold s$ and $\bold n$.
(Here we assume that power levels remain fairly level in time.
Were power levels to fluctuate in time,
the operators $\bold S$ and $\bold N$
should be designed to level them out too.)
\par
None of this is new or exciting in one dimension,
but I find it exciting in more dimensions.
In seismology,
quasisinusoidal signals and noises are quite rare,
whereas local plane waves are abundant.
Just as
a short one-dimensional filter can absorb a sinusoid of any frequency,
a compact two-dimensional filter can absorb a wavefront of any dip.
\par
To review basic concepts,
suppose we are in the one-dimensional frequency domain.
Then the solution to the fitting goals
(\ref{eqn:signalreg})
and
(\ref{eqn:noisereg})
amounts to minimizing a quadratic form
by setting to zero its derivative, say
\begin{equation}
0 \eq
{\partial \ \over \partial \bold s'}
\left(
 (\bold s' - \bold d') \bold N' \bold N (\bold s - \bold d)
 + \epsilon^2 \bold s' \bold S' \bold S \bold s
\right)
\end{equation}
which gives the answer
\begin{eqnarray}
\label{eqn:notchfilter}
\bold s &=&
 \left(
  \bold N' \bold N \over \bold N' \bold N \ + \ \epsilon^2 \bold S' \bold S
 \right) \ \bold d
\\
\label{eqn:narrowfilter}
\bold n \eq \bold d - \bold s &=&
 \left(
  \epsilon^2 \bold S' \bold S \over
  \bold N' \bold N \ + \ \epsilon^2 \bold S' \bold S
 \right) \ \bold d
\end{eqnarray}
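As a numerical sketch of equations (\ref{eqn:notchfilter}) and (\ref{eqn:narrowfilter}), the fragment below applies the one-dimensional frequency-domain split with $\bold N$ and $\bold S$ represented by short example filters. The particular filters, data, and value of $\epsilon$ are hypothetical illustrations, not taken from the text; the point is only that the two per-frequency weights sum to one, so the signal and noise estimates reconstruct the data exactly.

```python
import numpy as np

def freq_split(d, nfilt, sfilt, eps):
    """Split data d into signal and noise estimates using the
    frequency-domain solution  s = N'N / (N'N + eps^2 S'S) d.
    nfilt and sfilt are example 1-D filters standing in for N and S."""
    nt = len(d)
    D = np.fft.fft(d)
    # |N(w)|^2 and |S(w)|^2 sampled on the same frequency grid as D
    NN = np.abs(np.fft.fft(nfilt, nt)) ** 2
    SS = np.abs(np.fft.fft(sfilt, nt)) ** 2
    W = NN / (NN + eps ** 2 * SS)     # "notch filter" weight, in [0, 1]
    s = np.fft.ifft(W * D).real       # signal estimate
    n = d - s                         # noise estimate ("narrow-band" weight)
    return s, n

rng = np.random.default_rng(0)
d = rng.standard_normal(256)
nfilt = np.array([1.0, -0.9])   # N: spectral zero near zero frequency
sfilt = np.array([1.0,  0.9])   # S: spectral zero near the Nyquist frequency
s, n = freq_split(d, nfilt, sfilt, eps=1.0)
assert np.allclose(s + n, d)    # the two weights sum to one at every frequency
```

Because the complementary weight $\epsilon^2 S'S/(N'N+\epsilon^2 S'S)$ is just one minus the signal weight, the noise estimate is obtained for free as $\bold d - \bold s$; varying $\epsilon$ slides energy between the two estimates, as discussed above.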