\documentclass[10pt,a4paper]{article}
\usepackage{amsmath, mathrsfs}
\usepackage{amssymb, graphicx, epsfig, subfigure}
\title{\textbf{Channel coding}}
\author{Jinkyu Kang}
\date{2008.01.20}
\begin{document}
\maketitle
\begin{abstract}
In telecommunications and computer science, \emph{channel code} is
a broadly used term, mostly referring to forward error correction
coding and bit interleaving in communication and storage systems,
where the communication or storage medium is viewed as a channel.
In this report, we introduce convolutional coding and Viterbi
decoding, and compare the bit error rate (BER) performance of hard-
and soft-decision decoding for QPSK and 16-QAM. Simulation results
will show that hard and soft decision have different performance:
soft decision is better than hard decision.
\end{abstract}
\section{Channel coding}
Channel coding refers to the class of signal transformations
designed to improve communications performance by enabling the
transmitted signals to better withstand the effects of various
channel impairments, such as noise, interference, and fading.
Sometimes channel coding also refers to other physical-layer
issues such as digital modulation, line coding, pulse shaping,
channel equalization, bit synchronization, training sequences,
etc.
\section{Convolutional code}
A convolutional code is a type of error-correcting code in which
each $k$-bit information symbol (each $k$-bit string) to be encoded
is transformed into an $n$-bit symbol. A convolutional code is
described by three integers, $n$, $k$, and $K$, where the ratio
$k/n$ has the same code rate significance (information per coded
bit) that it has for block codes; however, $n$ does not define a
block or codeword length as it does for block codes. The integer
$K$ is a parameter known as the constraint length; it represents
the number of $k$-tuple stages in the encoding shift register. An
important characteristic of convolutional codes, distinguishing
them from block codes, is that the encoder has memory: the
$n$-tuple emitted by the encoder is not only a function of the
current input $k$-tuple, but is also a function of the previous
$K-1$ input $k$-tuples.
\subsection{Characteristics of convolutional codes}
We shall use the convolutional encoder shown in Figure 1 as a model
for discussing convolutional encoders. The figure illustrates a
(2,1) convolutional encoder with constraint length $K=3$ and code
rate $k/n = \frac{1}{2}$. At each input bit time, a bit is shifted
into the leftmost stage and the bits in the register are shifted
one position to the right. Next, the output switch samples the
output of each modulo-2 adder, thus forming the code symbol pair
making up the branch word associated with the bit just input. The
sampling is repeated for each input bit.\\
\begin{figure}[h]
\centering%
\includegraphics[width=10cm,height=6cm]{Convolutionalencoder}\\
\caption{Convolutional encoder (rate 1/2, K=3)}\label{fig1}
\end{figure}
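As a concrete companion to Figure 1, the following minimal Python
sketch implements this encoder. It assumes the generator
connections $g_{1} = 111$ and $g_{2} = 101$ (octal 7 and 5), the
usual choice for this textbook $K=3$, rate-1/2 example; the tap
positions are an assumption, not something stated explicitly in the
figure.
\begin{verbatim}
def conv_encode(bits, K=3, taps=((0, 1, 2), (0, 2))):
    """Rate-1/2 convolutional encoder.
    taps: register positions feeding each modulo-2 adder
    (position 0 = newest bit); assumed generators 111 and 101."""
    reg = [0] * K                       # the K-stage shift register
    out = []
    for b in bits:
        reg = [b] + reg[:-1]            # shift the new bit into the left stage
        for t in taps:                  # one output bit per modulo-2 adder
            out.append(sum(reg[i] for i in t) % 2)
    return out
\end{verbatim}
For example, \texttt{conv\_encode([1, 0, 1])} produces the branch
words 11, 10, 00 for the three input bits.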
One way to represent simple encoders is with a \emph{state
diagram}; such a representation for the encoder in Figure 1 is
shown in Figure 2. The states, shown in the boxes of the diagram,
represent the possible contents of the rightmost $K-1$ stages of
the register, and the paths between the states are labeled with the
output branch words resulting from the corresponding state
transitions.
\begin{figure}[h]
\centering%
\includegraphics[width=6cm,height=6cm]{statediagram}\\
\caption{Encoder state diagram(rate 1/2, K=3)}\label{fig2}
\end{figure}
Other ways to represent simple encoders are the tree diagram and
the trellis diagram. The tree diagram adds the dimension of time to
the state diagram. The trellis diagram, by exploiting the
repetitive structure, provides a more manageable encoder
description than does the tree diagram. The trellis diagram for
the convolutional encoder of Figure 1 is shown in Figure 3.
\begin{figure}[h]
\centering%
\includegraphics[width=10cm,height=6cm]{Trellisdiagram}\\
\caption{Encoder trellis diagram(rate 1/2, K=3)}\label{fig3}
\end{figure}
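The state diagram and the trellis are both generated by the same
transition table, which can be enumerated mechanically. Here is a
minimal sketch, reusing the assumed generator taps from the encoder
example above (state bits are labeled with the most recent input
bit in the low-order position, so the labels may be mirrored
relative to Figures 2 and 3):
\begin{verbatim}
def transitions(K=3, taps=((0, 1, 2), (0, 2))):
    """Print next state and branch word for every (state, input) pair."""
    for s in range(1 << (K - 1)):       # 2^(K-1) encoder states
        for u in (0, 1):                # hypothesized input bit
            reg = [u] + [(s >> j) & 1 for j in range(K - 1)]
            branch = [sum(reg[j] for j in t) % 2 for t in taps]
            nxt = ((s << 1) | u) & ((1 << (K - 1)) - 1)
            print(f"state {s:02b} -- {u}/{branch} --> state {nxt:02b}")
\end{verbatim}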
\section{Convolutional decoding}
This section discusses maximum likelihood decoding, the channel
models behind hard and soft decisions, and the Viterbi
convolutional decoding algorithm.\\
In maximum likelihood decoding, if all input message sequences are
equally likely, a decoder that achieves the minimum probability of
error is one that compares the conditional probabilities of the
received sequence given each possible transmitted sequence. In the
maximum likelihood context, the decoder chooses a particular
$\mathbf{U}^{(m')}$ as the transmitted sequence if the likelihood
$P(\mathbf{Z}|\mathbf{U}^{(m')})$ is greater than the likelihoods
of all the other possible transmitted sequences. Such an optimal
decoder, which minimizes the error probability, is known as a
\emph{maximum likelihood decoder}.
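In symbols, the maximum likelihood decision rule can be written as
\begin{equation*}
\hat{\mathbf{U}} = \mathbf{U}^{(m')} \quad \text{if} \quad
P(\mathbf{Z}\,|\,\mathbf{U}^{(m')}) =
\max_{\mathbf{U}^{(m)}} P(\mathbf{Z}\,|\,\mathbf{U}^{(m)}),
\end{equation*}
where the maximum is taken over all possible transmitted sequences
$\mathbf{U}^{(m)}$.\\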
The demodulator output can be configured in a variety of ways. It
can be implemented to make a \emph{hard decision} as to whether
$z(T)$ represents a zero or a one. In this case, the output of the
demodulator is quantized to two levels, zero and one, and fed into
the decoder. Since the decoder operates on the hard decisions made
by the demodulator, the decoding is called \emph{hard-decision
decoding}. The demodulator can also be configured to feed the
decoder with a value of $z(T)$ quantized to more than two levels.
Such an implementation furnishes the decoder with more information
than is provided in the hard-decision case. When the quantization
level of the demodulator output is greater than two, the decoding
is called \emph{soft-decision decoding}.\\
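The practical difference shows up in the branch metric the decoder
accumulates. A minimal sketch, assuming an antipodal mapping of
code bit 0 to $+1$ and code bit 1 to $-1$ (the QPSK and 16-QAM
mappings used in the simulations are more involved):
\begin{verbatim}
def hard_metric(rx_bits, branch_bits):
    """Hamming distance between hard-quantized bits and a branch word."""
    return sum(r != b for r, b in zip(rx_bits, branch_bits))

def soft_metric(rx_values, branch_bits):
    """Squared Euclidean distance between the unquantized demodulator
    outputs and the modulated branch word (0 -> +1.0, 1 -> -1.0)."""
    return sum((r - (1 - 2 * b)) ** 2 for r, b in zip(rx_values, branch_bits))
\end{verbatim}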
The Viterbi algorithm essentially performs maximum likelihood
decoding; however, it reduces the computational load by taking
advantage of the special structure in the code trellis. The
advantage of Viterbi decoding is that the complexity of a Viterbi
decoder is not a function of the number of symbols in the codeword
sequence. The algorithm involves calculating a measure of
similarity, or distance, between the received signal at time
$t_{i}$ and all the trellis paths entering each state at time
$t_{i}$. The Viterbi algorithm removes from consideration those
trellis paths that could not possibly be candidates for the
maximum likelihood choice. When two paths enter the same state,
the one having the best metric is chosen; this path is called the
\emph{surviving path}. This selection of surviving paths is
performed for all the states. The decoder continues in this way to
advance deeper into the trellis, making decisions by eliminating
the least likely paths. The early rejection of the unlikely paths
reduces the decoding complexity. Viterbi decoding can be formulated
either with a maximum likelihood metric or with a minimum distance
metric; a sketch using a minimum-distance (Hamming) metric follows
Figure 4.
\begin{figure}[h]
\centering%
\includegraphics[width=12cm,height=6cm]{Viterbi}\\
\caption{Viterbi decoding}\label{fig4}
\end{figure}
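The following minimal hard-decision sketch implements the survivor
selection described above for the rate-1/2, $K=3$ code, reusing the
assumed generator taps from the encoder example and a Hamming
branch metric:
\begin{verbatim}
def viterbi_decode(symbols, K=3, taps=((0, 1, 2), (0, 2))):
    """Hard-decision Viterbi decoder; 'symbols' holds the received
    hard bits, two per information bit."""
    n_states = 1 << (K - 1)
    INF = float('inf')
    metric = [0] + [INF] * (n_states - 1)    # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(symbols), 2):
        rx = symbols[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for u in (0, 1):                 # hypothesized input bit
                reg = [u] + [(s >> j) & 1 for j in range(K - 1)]
                branch = [sum(reg[j] for j in t) % 2 for t in taps]
                d = metric[s] + sum(a != b for a, b in zip(rx, branch))
                nxt = ((s << 1) | u) & (n_states - 1)
                if d < new_metric[nxt]:      # keep only the survivor
                    new_metric[nxt] = d
                    new_paths[nxt] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]
\end{verbatim}
Running \texttt{viterbi\_decode(conv\_encode([1, 0, 1]))} on the
noiseless output of the earlier encoder sketch recovers
\texttt{[1, 0, 1]}.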
\section{Simulation results}
Figures 5 and 6 show coded error performance with the convolutional
code. Figure 5 shows soft- and hard-decision decoding for QPSK and
16-QAM in AWGN, and Figure 6 shows soft- and hard-decision decoding
for QPSK and 16-QAM in a Rayleigh fading channel. The figures
compare coded against uncoded error performance. As can be seen,
the coded and uncoded error-performance curves intersect at one
point. Below this crossover value of $E_{b}/N_{0}$, coded
performance is worse than uncoded performance; above it, coded
performance is better. The reason is that when there are more
errors within a block than the code is capable of correcting, the
system performs poorly; below the crossover point, the degraded
performance can be interpreted as the redundant bits consuming
energy while giving back nothing beneficial in return. As can also
be seen, soft decision is better than hard decision over the entire
range. For a Gaussian channel, eight-level soft-decision decoding
can provide the same probability of bit error as hard-decision
decoding with approximately 2\,dB less $E_{b}/N_{0}$.
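For reference, the structure of such a measurement is illustrated
by the following minimal Monte-Carlo sketch, which reuses the
encoder and decoder sketched above over a binary symmetric channel
with crossover probability $p$; the actual QPSK/16-QAM modulation,
AWGN, and Rayleigh fading stages behind Figures 5 and 6 are not
reproduced here.
\begin{verbatim}
import random

def ber(p, n_bits=10000, seed=1):
    """Estimate the post-decoding bit error rate at crossover prob. p."""
    random.seed(seed)
    msg = [random.randint(0, 1) for _ in range(n_bits)]
    tx = conv_encode(msg + [0, 0])           # append K-1 = 2 flush bits
    rx = [b ^ (random.random() < p) for b in tx]
    decoded = viterbi_decode(rx)[:n_bits]    # drop the flush bits
    return sum(a != b for a, b in zip(msg, decoded)) / n_bits
\end{verbatim}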
\begin{figure}[h]
\centering%
\includegraphics[width=8cm,height=5cm]{BERwithConvolutioncodinginAWGN}\\
\caption{BER Graph with convolution coding in AWGN}\label{fig5}
\end{figure}
\begin{figure}[h]
\centering%
\includegraphics[width=8cm,height=5cm]{BERwithConvolutioncodinginRayleigh}\\
\caption{BER Graph with convolution coding in
Rayleigh}\label{fig6}
\end{figure}
\section{Conclusion}
In this report, we introduced convolutional coding and Viterbi
decoding, and compared hard- and soft-decision decoding for QPSK
and 16-QAM over AWGN and Rayleigh channels. The simulations show
that coding pays off only above a crossover value of
$E_{b}/N_{0}$, and that soft-decision decoding outperforms
hard-decision decoding over the entire simulated range, by roughly
2\,dB in the Gaussian channel.
\end{document}
Ctrl + -