Decoding Algorithms and Error Probability Bounds for Convolutional Codes

EE 229B - Error Control Coding Project
Hari Palaiyanur
May 13, 2005

1 Introduction

Error control codes are a way of combating uncertainty in the transmission of information from one point to another. Information theory shows the existence of codes that allow for arbitrarily reliable communication when the rate of transmission is less than the channel capacity, but it does not provide practical schemes for achieving this. Error control coding strives to find codes that provide good performance in terms of error probability along with low complexity in encoding and decoding.

The first major class of codes designed for this purpose was linear block codes, invented in the late 1940s and early 1950s. Linear block codes take a fixed number, k, of input symbols and encode them into a larger, fixed number, n, of output symbols in some linear fashion. Some of the major classes of linear block codes we have discussed in class are Hamming codes, Reed-Muller codes, BCH codes, and Reed-Solomon codes. Each of these codes introduces redundancy into a sequence of input symbols in a controlled manner to improve the resistance of the transmitted codewords to noise. However, each block has no dependence on other blocks.

Convolutional codes were first introduced in the mid-1950s as an alternative to block codes. At each time step, a convolutional code takes k input symbols and encodes them into n output symbols. However, each block is allowed to depend on other blocks of k inputs in a manner that can be implemented as a finite state machine. We will assume that each block may depend only on previous blocks of inputs, and not on future blocks of inputs. The use of memory in convolutional codes makes the code structure more complicated than that of a block code and increases encoder complexity through the shift registers that store previous inputs, but the additional structure has advantages in decoding.

A convolutional code can be represented by a trellis. The trellis describes the codewords of the code as paths through a directed graph. If we weight the edges of this trellis with log-likelihoods, then maximum-likelihood decoding can be viewed as the problem of finding the shortest path from the root, or start, of the trellis to its end. Maximum-likelihood (ML) decoding of convolutional codes is accomplished in this manner through the well-known Viterbi Algorithm [1].

While the Viterbi Algorithm provides the optimal solution, it may not be practical to implement for certain code parameters. Sequential decoding algorithms, such as the Stack Algorithm and the Fano Algorithm, can be useful in some cases where the Viterbi Algorithm is not practical.

The performance criterion we use when evaluating these various algorithms will be the probability of error. For an unterminated convolutional code used over a noisy channel, the probability of error tends to 1, so instead we will focus in most cases on the probability of error per unit time.

The purpose of this project was to become familiar with several decoding procedures and then evaluate their performance in terms of this probability of error per unit time for randomly generated convolutional codes. In this process, roughly speaking, we come across results concerning convolutional codes using ML decoding, block codes derived from terminated convolutional codes using ML decoding, and convolutional codes using sequential decoding. (The code itself does not use the decoding process, but the probability of error depends both on the code and on the decoding process chosen.) We will mostly cover results, but since my personal goal for the project was to become familiar with proof techniques used for ML decoding analysis, we give a few detailed proofs of theorems. The main sources from which these results are drawn are Forney's papers on ML decoding [2] and sequential decoding [3] and Zigangirov's book [4] on convolutional codes. For a quick, general introduction to convolutional codes, Lin and Costello's textbook [5] is useful.
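To make the shortest-path view of ML decoding concrete, the following is a minimal sketch of the Viterbi Algorithm over a generic trellis; it is an illustration under assumptions, not the implementation used in this project. The interface is hypothetical: the trellis is assumed to be given as per-step edge lists, and each edge weight is assumed to be a branch metric such as a negative log-likelihood, so that the minimum-weight path is the ML path.

    from math import inf

    def viterbi_decode(trellis, num_states, start_state=0):
        """Minimum-weight path through a trellis (hypothetical interface).

        trellis[t] lists the edges (from_state, to_state, weight, label)
        between times t and t+1; weight is a branch metric such as a
        negative log-likelihood, so the cheapest path is the ML path.
        Returns the sequence of edge labels on the best path.
        """
        cost = [inf] * num_states          # best path weight into each state
        cost[start_state] = 0.0
        survivors = []                     # survivors[t][s] = (pred, label)
        for edges in trellis:
            new_cost = [inf] * num_states
            step = [None] * num_states
            for a, b, w, label in edges:   # relax every branch at this time
                if cost[a] + w < new_cost[b]:
                    new_cost[b] = cost[a] + w
                    step[b] = (a, label)
            cost = new_cost
            survivors.append(step)
        # Trace back from the cheapest final state.
        s = min(range(num_states), key=lambda i: cost[i])
        labels = []
        for step in reversed(survivors):
            a, label = step[s]
            labels.append(label)
            s = a
        return labels[::-1]

The work per time step is proportional to the number of trellis branches, i.e., to the number of states times M, which is why the algorithm becomes impractical for encoders with many states and motivates the sequential decoding alternatives above.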
2 Description of Convolutional Codes

We begin by describing the convolutional code as Forney [2] does, ultimately leading to the trellis description of convolutional codes. At any time step, the convolutional encoder is assumed to have k inputs, which can each take on q values, and n outputs. Hence, there are M = q^k possible inputs at any given time. Perhaps the simplest description of a convolutional code is with a tree. We note that the tree is really a description of the code for a specific encoder.

Figure 1 shows an example of a convolutional code represented as a tree. In this example, at each time step, one input bit comes into the encoder, and two output bits are given out. A node's depth in the tree denotes the time at which the node occurs. A node branches upwards if the input bit is a '0', or downwards if the input bit is a '1'. The outputs of the encoder at time t are the labels of the edges connecting nodes at time t-1 to nodes at time t. The output bits depend not only on the present input bit, but also on the previous input bit. For the code depicted, in the notation of Lin and Costello [5], G(D) = [1  1 + D]. This signifies that the first output bit is equal to the input bit at the present time and the second output bit is equal to the sum of the input bits at the previous time and the present time. So for this code, q = 2, k = 1, M = 2, and n = 2.

[Figure 1: An example of a convolutional code represented by a tree.]

In general, at time zero there is only the root node; at time 1 there are M nodes; at time 2 there are M^2 nodes, and so on. Now we note that we have required the convolutional encoder to be implemented with a finite state machine. Eventually there is a depth in the tree at which the number of nodes exceeds the number of possible states in the encoder. At this depth and afterwards, the tree contains redundant information about the encoder, because we again require that future outputs depend only on the state and future inputs. Hence, all we really need to keep track of, for purposes of knowing the output of the encoder, is the state of the encoder at each time. The code trellis is formed from the code tree by merging nodes corresponding to the same encoder state.

Figure 2 shows the trellis for the example in Figure 1. As can be seen, there are two possible states, corresponding to the fact that the present output depends on only one previous input bit: the state of the encoder is simply the previous input bit.
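As a concrete illustration of the state-machine view, here is a minimal sketch of the encoder for the example code G(D) = [1  1 + D]. The function name and the all-zero initial state are assumptions for illustration (starting in the zero state is the usual convention); the two output bits at time t are u_t and u_t + u_{t-1} (mod 2), and the state is the previous input bit.

    def encode(bits):
        """Rate-1/2 encoder for G(D) = [1, 1 + D] (illustrative sketch).

        State = the previous input bit, assumed 0 initially.
        Outputs at each step: v0 = u_t, v1 = u_t XOR u_{t-1}.
        """
        prev = 0                       # encoder state: previous input bit
        out = []
        for u in bits:
            out.extend([u, u ^ prev])  # branch label (v0, v1) in the trellis
            prev = u                   # next state is the current input
        return out

    # One section of the two-state trellis of Figure 2:
    # every (state, input) pair gives a branch to the next state.
    for state in (0, 1):
        for u in (0, 1):
            print(f"state {state} --input {u}/outputs {u}{u ^ state}--> state {u}")

Merging tree nodes that share the same value of prev is exactly the tree-to-trellis collapse described above: from depth 1 onward the tree has many nodes but only these two distinct states.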

