
MIT 6.02 DRAFT Lecture Notes
Fall 2010 (Last update: October 6, 2010)
Comments, questions or bug reports? Please contact [email protected]

LECTURE 9
Viterbi Decoding of Convolutional Codes

This lecture describes an elegant and efficient method to decode convolutional codes. It avoids the explicit enumeration of the 2^N possible combinations of N-bit parity bit sequences. This method was invented by Andrew Viterbi ('57, SM '57) and bears his name.

9.1 The Problem

At the receiver, we have a sequence of voltage samples corresponding to the parity bits that the transmitter has sent. For simplicity, and without loss of generality, we will assume 1 sample per bit.

In the previous lecture, we assumed that these voltages have been digitized to form a received bit sequence. If we decode this received bit sequence, the decoding process is termed hard decision decoding (aka "hard decoding"). If we decode the voltage samples directly, before digitizing them, we term the process soft decision decoding (aka "soft decoding"). The Viterbi decoder can be used in either case. Intuitively, because hard decision decoding makes an early decision about whether a bit is 0 or 1, it throws away information in the digitizing process. It might make a wrong decision, especially for voltages near the threshold, introducing a greater number of bit errors in the received bit sequence. The decoder still produces the most likely transmitted sequence given the received sequence, but because of the additional errors introduced by early digitization, the overall reduction in the probability of bit error is smaller than with soft decision decoding. Hard decoding is conceptually a bit easier to understand, however, so we will start with it before moving on to soft decision decoding.

As mentioned in the previous lecture, the trellis provides a good framework for understanding decoding. Suppose we have the entire trellis in front of us for a code, and now receive a sequence of digitized bits (or voltage samples).
If there are no errors (i.e., the noise is low), then there will be some path through the states of the trellis that exactly matches the received sequence. That path (specifically, the concatenation of the encoding of each state along the path) corresponds to the transmitted parity bits.

Figure 9-1: The trellis is a convenient way of viewing the decoding task and understanding the time evolution of the state machine.

From there, getting to the original message is easy, because the top arc emanating from each node in the trellis corresponds to a "0" bit and the bottom arc corresponds to a "1" bit.

When there are errors, what can we do? As explained earlier, finding the most likely transmitted message sequence is appealing because it minimizes the BER. If we can come up with a way to capture the errors introduced by going from one state to the next, then we can accumulate those errors along a path and come up with an estimate of the total number of errors along the path. Then, the path with the smallest such accumulation of errors is the path we want, and the transmitted message sequence can easily be determined by the concatenation of states explained above.

To solve this problem, we need a way to capture any errors that occur in going through the states of the trellis, and a way to navigate the trellis without actually materializing the entire trellis (i.e., without enumerating all possible paths through it and then finding the one with the smallest accumulated error). The Viterbi decoder solves these problems. It is an example of a more general approach to solving optimization problems, called dynamic programming. Later in the course, we will apply similar concepts in network routing protocols to find good paths in multi-hop networks.

9.2 The Viterbi Decoder

The decoding algorithm uses two metrics: the branch metric (BM) and the path metric (PM).
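To make the trellis concrete, here is a minimal sketch of the state-transition and parity tables that generate one. This is an illustration only: the rate-1/2, constraint-length-3 parameters and the generator polynomials (111 and 101) are assumptions for this sketch, not values taken from these notes.

```python
# Assumed example code: rate 1/2, constraint length k = 3.
K = 3
GENERATORS = [0b111, 0b101]   # assumed parity generator polynomials
N_STATES = 1 << (K - 1)       # 2^(k-1) trellis states

def parity(x):
    """XOR of the bits of x."""
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

def step(state, bit):
    """One trellis arc: given the current state (the last k-1 message
    bits) and a new message bit, return (next_state, expected parity
    bits).  The new bit is shifted in as the most significant bit of
    the k-bit window."""
    window = (bit << (K - 1)) | state
    next_state = window >> 1
    parities = tuple(parity(window & g) for g in GENERATORS)
    return next_state, parities
```

Under these assumed generators, a "0" message bit from state 00 stays in state 00 and emits parity bits 00, while a "1" bit moves to state 10 and emits 11.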
The branch metric is a measure of the "distance" between what was transmitted and what was received, and is defined for each arc in the trellis. In hard decision decoding, where we are given a sequence of digitized parity bits, the branch metric is the Hamming distance between the expected parity bits and the received ones. An example is shown in Figure 9-2, where the received bits are 00. For each state transition, the number on the arc shows the branch metric for that transition. Two of the branch metrics are 0, corresponding to the only states and transitions where the corresponding Hamming distance is 0. The other, non-zero branch metrics correspond to cases where there are bit errors.

Figure 9-2: The branch metric for hard decision decoding. In this example, the receiver gets the parity bits 00.

The path metric is a value associated with a state in the trellis (i.e., a value associated with each node). For hard decision decoding, it corresponds to the Hamming distance over the most likely path from the initial state to the current state in the trellis. By "most likely," we mean the path with the smallest Hamming distance between the initial state and the current state, measured over all possible paths between the two states. The path with the smallest Hamming distance minimizes the total number of bit errors, and is most likely when the BER is low.

The key insight in the Viterbi algorithm is that the receiver can compute the path metric for a (state, time) pair incrementally, using the path metrics of previously computed states and the branch metrics.

9.2.1 Computing the Path Metric

Suppose the receiver has computed the path metric PM[s, i] for each state s (of which there are 2^(k-1), where k is the constraint length) at time step i.
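The hard-decision branch metric described above is simply a Hamming distance over the parity bits of one arc; a minimal sketch:

```python
def branch_metric_hard(expected, received):
    """Hard-decision branch metric: the Hamming distance between the
    parity bits a trellis arc would emit and the bits actually
    received at this time step."""
    return sum(e != r for e, r in zip(expected, received))
```

Mirroring Figure 9-2, with received bits 00, an arc expected to emit 00 has branch metric 0, an arc expected to emit 01 or 10 has branch metric 1, and an arc expected to emit 11 has branch metric 2.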
The value of PM[s, i] is the total number of bit errors detected when comparing the received parity bits to the most likely transmitted message, considering all messages that could have been sent by the transmitter up to time step i (starting from state "00", which we take by convention always to be the starting state).

Among all the possible states at time step i, the most likely state is the one with the smallest path metric. If there is more than one such state, they are all equally good possibilities.

Now, how do we determine the path metric at time step i + 1, PM[s, i + 1], for each state s? To answer this question, first observe that if the transmitter is at state s at time step i + 1, then it must have been in only one of two possible states at time step i.
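The observation above leads to the incremental update PM[s, i+1] = min over predecessor states p of (PM[p, i] + branch metric of the arc p -> s), which can be sketched as follows. The helper names `predecessors` and `bm` are hypothetical stand-ins for the trellis structure and the branch metric, not functions defined in these notes:

```python
INF = float('inf')

def update_path_metrics(pm, received, predecessors, bm):
    """One Viterbi time step (hard decision): compute PM[s, i+1] for
    every state s from PM[., i] and the branch metrics.

    pm           -- current path metrics, indexed by state
    received     -- parity bits received at this time step
    predecessors -- predecessors[s] = the (at most two) states that
                    have an arc into state s
    bm           -- bm(p, s, received) = branch metric of arc p -> s
    """
    new_pm = [INF] * len(pm)
    for s in range(len(pm)):
        for p in predecessors[s]:
            candidate = pm[p] + bm(p, s, received)
            if candidate < new_pm[s]:
                new_pm[s] = candidate
    return new_pm
```

Initializing PM to 0 for the starting state "00" and infinity elsewhere, and applying this update once per received parity-bit group, yields the smallest accumulated Hamming distance to each state.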

