\documentstyle[psfig,12pt,a4wide]{article}
\begin{document}
\def\baselinestretch{0.95}
\title{{\Large SHORTEN:} \\
Simple lossless and near-lossless waveform compression}
\author{Tony Robinson \\\\
Technical report {\sc CUED/F-INFENG/TR.156} \\\\
Cambridge University Engineering Department, \\
Trumpington Street, Cambridge, CB2 1PZ, UK}
\date{December 1994}
\maketitle

\begin{abstract}
This report describes a program that performs compression of waveform
files such as audio data.  A simple predictive model of the waveform is
used, followed by Huffman coding of the prediction residuals.  This is
both fast and near optimal for many commonly occurring waveform signals.
This framework is then extended to lossy coding under the conditions of
maximising the segmental signal to noise ratio on a per frame basis and
coding to a fixed acceptable signal to noise ratio.
\end{abstract}

\section{Introduction}

It is common to store digitised waveforms on computers, and the resulting
files can often consume significant amounts of storage space.  General
compression algorithms do not perform very well on these files as they
fail to take into account the structure of the data and the nature of
the signal contained therein.  Typically a waveform file will consist of
signed 16 bit numbers and there will be significant sample to sample
correlation.  A compression utility for these files must be reasonably
fast, portable, accept data in the most popular formats and give
significant compression.  This report describes ``shorten'', a program
for the UNIX and DOS environments which aims to meet these requirements.

A significant application of this program is to the problem of
compression of speech files for distribution on CDROM.  This report
starts with a description of this domain, then discusses the two main
problems associated with general waveform compression, namely predictive
modelling and residual coding.  This framework is then extended to lossy
coding.
Finally, the shorten implementation is described and an
appendix details the command line options.

\section{Compression for speech corpora}

One important use for lossless waveform compression is to compress
speech corpora for distribution on CDROM.  State of the art speech
recognition systems require gigabytes of acoustic data for model
estimation, which takes many CDROMs to store.  Use of compression
software both reduces the distribution cost and the number of CDROM
changes required to read the complete data set.

The key factors in the design of compression software for speech corpora
are that there must be no perceptual degradation in the speech signal
and that the decompression routine must be fast and portable.

There has been much research into efficient speech coding techniques and
many standards have been established.  However, most of this work has
been for telephony applications where dedicated hardware can be used to
perform the coding and where it is important that the resulting system
operates at a well defined bit rate.  In such applications lossy coding
is acceptable and indeed necessary in order to guarantee that the system
operates at the fixed bit rate.

Similarly there has been much work in the design of general purpose lossless
compressors for workstation use.  Such systems do not guarantee any
compression for an arbitrary file, but in general achieve worthwhile
compression in reasonable time on general purpose computers.

Speech corpora compression needs some features of both systems.
Lossless compression is an advantage as it guarantees there is no
perceptual degradation in the speech signal.  However, the established
compression utilities do not exploit the known structure of the speech
signal.
Hence {\tt shorten} was written to fill this gap and is now in
use in the distribution of CDROMs containing speech
databases~\cite{GarofoloRobinsonFiscus94}.

The recordings used as examples in section~\ref{ss:model} and
section~\ref{ss:perf} are from the TIMIT corpus, which is distributed as
16 bit, 16 kHz linear PCM samples.  This format is in common use for
continuous speech recognition research corpora.  The recordings were
collected using a Sennheiser HMD 414 noise-cancelling head-mounted
microphone in low noise conditions.  All ten utterances from speaker
{\tt fcjf0} are used, which amount to a total of 24 seconds or about
384,000 samples.

\section{Waveform Modelling\label{ss:model}}

Compression is achieved by building a predictive model of the waveform
(a good introduction for speech is Jayant and Noll~\cite{JayantNoll84}).
An established model for a wide variety of waveforms is that of an
autoregressive model, also known as linear predictive coding (LPC).
Here the predicted waveform is a linear combination of past samples:
\begin{eqnarray}
\hat{s}(t) & = & \sum_{i = 1}^{p} a_i s(t - i) \label{eq:lpc}
\end{eqnarray}
The coded signal, $e(t)$, is the difference
between the estimate of the linear predictor, $\hat{s}(t)$, and the
speech signal, $s(t)$:
\begin{eqnarray}
e(t) & = & s(t) - \hat{s}(t) \label{eq:error}
\end{eqnarray}
However, many waveforms of interest are not stationary, that is the best
values for the coefficients of the predictor, $a_i$, vary from one
section of the waveform to another.  It is often reasonable to assume
that the signal is pseudo-stationary, i.e.\ there exists a time-span
over which reasonable values for the linear predictor can be found.
Thus the three main stages in the coding process are blocking,
predictive modelling, and residual coding.

\subsection{Blocking}

The time frame over which samples are blocked depends to some extent on
the nature of the signal.
It is inefficient to block on too short a
time scale as this incurs an overhead in the computation and
transmission of the prediction parameters.  It is also inefficient to
use a time scale over which the signal characteristics change
appreciably as this will result in a poorer model of the signal.
However, in the implementation described below the linear predictor
parameters typically take much less information to transmit than the
residual signal, so the choice of window length is not critical.  The
default value in the shorten implementation is 256, which results in 16 ms
frames for a signal sampled at 16 kHz.

Sample interleaved signals are handled by treating each data stream as
independent.  Even in cases where there is a known correlation between
the streams, such as in stereo audio, the within-channel correlations
are often significantly greater than the cross-channel correlations, so
for lossless or near-lossless coding the exploitation of this additional
correlation only results in small additional gains.

A rectangular window is used in preference to any tapering window as the
aim is to model just those samples within the block, not the spectral
characteristics of the segment surrounding the block.  The window length
is longer than the block size by the prediction order, which is
typically three samples.

\subsection{Linear Prediction\label{sect:lpc}}

Shorten supports two forms of linear prediction: the standard $p$th
order LPC analysis of equation~\ref{eq:lpc}; and a restricted form
whereby the coefficients are selected from one of four fixed polynomial
predictors.

In the case of the general LPC algorithm, the prediction coefficients,
$a_i$, are quantised in accordance with the same Laplacian distribution
used for the residual signal and described in section~\ref{sect:resid}.
The expected number of bits per coefficient is 7 as this was found to be
a good tradeoff between modelling accuracy and model storage.
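The prediction of equation~\ref{eq:lpc} and the residual of equation~\ref{eq:error} can be sketched in a few lines of Python; this is an illustrative sketch rather than the shorten source, and the coefficient values passed in are examples, not the quantised values shorten would transmit.

```python
def lpc_residual(s, a):
    """Return the residual e(t) = s(t) - sum_i a[i] * s(t - 1 - i)
    for t >= p, given samples s and p prediction coefficients a."""
    p = len(a)
    residual = []
    for t in range(p, len(s)):
        pred = sum(a[i] * s[t - 1 - i] for i in range(p))
        residual.append(s[t] - pred)
    return residual
```

For a slowly varying signal the residual has a much smaller dynamic range than the signal itself, which is what makes it cheaper to code.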
The
standard Durbin's algorithm for computing the LPC coefficients from the
autocorrelation coefficients is used in an incremental way.  On each
iteration the mean squared value of the prediction residual is
calculated and this is used to compute the expected number of bits
needed to code the residual signal.  This is added to the number of bits
needed to code the prediction coefficients, and the LPC order is selected
to minimise the total.  As the computation of the autocorrelation
coefficients is the most expensive step in this process, the search for
the optimal model order is terminated when the last two models have
resulted in a higher bit rate.  Whilst it is possible to construct
signals that defeat this search procedure, in practice for speech
signals it has been found that the occasional use of a lower prediction
order results in an insignificant increase in the bit rate and has the
additional side effect of requiring less compute to decode.

A restrictive form of the linear predictor has been found to be useful.
In this case the prediction coefficients are those specified by fitting
a $p$th order polynomial to the last $p$ data points, e.g.\ a line to the
last two points:
\begin{eqnarray}
\hat{s}_0(t) & = & 0 \\
\hat{s}_1(t) & = & s(t-1) \\
\hat{s}_2(t) & = & 2 s(t-1) - s(t-2) \\
\hat{s}_3(t) & = & 3 s(t-1) - 3 s(t-2) + s(t-3)
\end{eqnarray}
Writing $e_i(t)$ as the error signal from the $i$th polynomial predictor:
\begin{eqnarray}
e_0(t) & = & s(t) \label{eq:polyinit}\\
e_1(t) & = & e_0(t) - e_0(t - 1) \\
e_2(t) & = & e_1(t) - e_1(t - 1) \\
e_3(t) & = & e_2(t) - e_2(t - 1) \label{eq:polyquit}
\end{eqnarray}
As can be seen from equations~\ref{eq:polyinit}--\ref{eq:polyquit} there
is an efficient recursive algorithm for computing the set of polynomial
prediction residuals.  Each residual term is formed from the difference
of the previous order predictors.  As each term involves only a few
integer additions/subtractions, it is possible to compute all predictors
and select the best.
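The recursion of equations~\ref{eq:polyinit}--\ref{eq:polyquit} can be sketched as follows; this is an illustrative sketch assuming the sum of absolute residual values as the selection criterion, not the shorten source itself.

```python
def best_polynomial_order(block):
    """Compute the residuals e_0..e_3 by successive differencing and
    return the order whose residual has the smallest sum of absolute
    values.  Only integer subtractions are used: no multiplications."""
    e = list(block)                          # e_0(t) = s(t)
    best_order = 0
    best_cost = sum(abs(x) for x in e)
    for order in range(1, 4):
        # e_i(t) = e_{i-1}(t) - e_{i-1}(t-1)
        e = [e[t] - e[t - 1] for t in range(1, len(e))]
        cost = sum(abs(x) for x in e)
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order
```

On a linear ramp the second-order predictor wins, while on an alternating (noise-like) block no prediction is cheapest, matching the observation below that a zeroth order coder can already capture most of the compression.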
Moreover, as the sum of absolute values is
linearly related to the variance, this may be used as the basis of
predictor selection, and so the whole process is cheap to compute as it
involves no multiplications.

Figure~\ref{fig:rate} shows both forms of prediction for a range of
maximum predictor orders.  The figure shows that first and second order
prediction provides a substantial increase in compression and that
higher order predictors provide relatively little improvement.  The
figure also shows that for this example most of the total compression
can be obtained using no prediction, that is a zeroth order coder
achieved about 48\% compression and the best predictor 58\%.  Hence, for
lossless compression it is important not to waste too much compute on
the predictor and to perform the residual coding efficiently.

\begin{figure}[hbtp]
\center
\mbox{\psfig{file=rate.eps,width=0.7\columnwidth}}
\caption[nop]{Compression against maximum prediction order}
\label{fig:rate}
\end{figure}

\subsection{Residual Coding\label{sect:resid}}

The samples in the prediction residual are now assumed to be
uncorrelated and therefore may be coded independently.  The problem of
residual coding is therefore to find an appropriate form for the
probability density function (p.d.f.) of the distribution of residual
values so that they can be efficiently modelled.  Figures~\ref{fig:pdf}
and~\ref{fig:logpdf} show the p.d.f.\ for the segmentally normalized
residual of the polynomial predictor (the full linear predictor shows a
similar p.d.f.).
The observed values are shown as open circles, the
Gaussian p.d.f.\ is shown as a dot-dash line and the Laplacian, or double
sided exponential, distribution is shown as a dashed line.

\begin{figure}[hbtp]
\center
\mbox{\psfig{file=hist.eps,width=0.7\columnwidth}}
\caption[nop]{Observed, Gaussian and quantized Laplacian p.d.f.}
\label{fig:pdf}
\end{figure}

\begin{figure}[hbtp]
\center
\mbox{\psfig{file=lnhist.eps,width=0.7\columnwidth}}
\caption[nop]{Observed, Gaussian, Laplacian and quantized Laplacian p.d.f.\ and log$_2$ p.d.f.}
\label{fig:logpdf}
\end{figure}

These figures demonstrate that the Laplacian p.d.f.\ fits the observed
distribution very well.  This is convenient as there is a simple Huffman
code for this distribution~\cite{Rice71,YehRiceMiller91,Rice91}.  To
form this code, a number is divided into a sign bit, the $n$ low order
bits and the remaining high order bits.  The high order bits are
treated as an integer and this number of 0's are transmitted followed by
a terminating 1.  The $n$ low order bits then follow, as in the example
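The encoding just described can be sketched in Python; this is a minimal illustrative sketch that assumes a sign-and-magnitude mapping and one plausible ordering of the fields (sign bit, then the unary-coded high order bits, then the $n$ low order bits), not a definitive reproduction of the shorten bitstream.

```python
def rice_encode(x, n):
    """Encode one integer as a bit string: a sign bit, then the high
    order bits of the magnitude as that many 0's and a terminating 1,
    then the n low order bits.  Sign-and-magnitude mapping and field
    order are assumptions for illustration."""
    sign = '1' if x < 0 else '0'
    mag = -x if x < 0 else x
    high = mag >> n                      # unary-coded part
    low = mag & ((1 << n) - 1)           # n low order bits
    low_bits = bin(low)[2:].zfill(n) if n > 0 else ''
    return sign + '0' * high + '1' + low_bits
```

For example, with $n = 2$ the value 5 splits into high part 1 and low part 01, giving the bit string 00101. Small magnitudes produce short codes, which is what makes this code a good match for the sharply peaked Laplacian residual distribution.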
