MIT 6.02 DRAFT Lecture Notes
Fall 2010 (Last update: September 27, 2010)
Comments, questions or bug reports? Please contact [email protected]

LECTURE 6
Coping with Bit Errors

Recall our main goal in designing digital communication networks: to send information both reliably and efficiently between nodes. Meeting that goal requires the use of techniques to combat bit errors, which are an inevitable property of both communication channels and storage media.

The key idea we will apply to achieve reliable communication is redundancy: by replicating data to ensure that it can be reconstructed from some (correct) subset of the data received, we can improve the likelihood of fixing errors. This approach, called error correction, can work quite well when errors occur according to some random process. Error correction may not be able to correct all errors, however, so we will need a way to tell if any errors remain. That task is called error detection, which determines whether the data received after error correction is in fact the data that was sent. It will turn out that all the guarantees we can make are probabilistic, but we will be able to guarantee reliable data reception with very high probability.

Over a communication channel, the sender and receiver implement the error correction and detection procedures. The sender has an encoder whose job is to take the message and process it to produce the coded bits that are then sent over the channel. The receiver has a decoder whose job is to take the received (coded) bits and produce its best estimate of the message. The encoder-decoder procedures together constitute channel coding.

Our plan is as follows. First, we will define a model for the kinds of bit errors we're going to handle and revisit the previous lectures to see why errors occur. Then, we will discuss and analyze a simple redundancy scheme called a replication code, which simply makes c copies of any given bit. The replication code has a code rate of 1/c; that is, for every useful message bit, we end up sending c total bits over the channel. The overhead of the replication code of rate 1/c is 1 - 1/c, which is rather high for the error correcting power of the code. We will then turn to the key ideas that allow us to build powerful codes capable of correcting errors without such a high overhead (or, capable of correcting far more errors at a given code rate than the trivial replication code).
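To make the replication code concrete, here is a minimal Python sketch, assuming majority-vote decoding at the receiver; the function names replication_encode and replication_decode are our own, not from the notes:

    def replication_encode(bits, c):
        # Replicate each message bit c times (code rate 1/c).
        return [b for b in bits for _ in range(c)]

    def replication_decode(coded, c):
        # Decode by majority vote over each group of c received copies.
        # With odd c there are no ties; ties here break toward 1.
        decoded = []
        for i in range(0, len(coded), c):
            group = coded[i:i + c]
            decoded.append(1 if 2 * sum(group) >= len(group) else 0)
        return decoded

    # Example: replication_encode([1, 0], 3) -> [1, 1, 1, 0, 0, 0];
    # one flipped copy per group is still decoded correctly by the vote.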
There are two big ideas that are used in essentially all channel codes. The first is the notion of embedding, where the messages one wishes to send are placed in a geometrically pleasing way into a larger space, so that the distance between any two valid points in the embedding is large enough to enable the correction and detection of errors. The second big idea is to use parity calculations (or more generally, linear functions) over the bits we wish to send to produce the bits that are actually sent. We will study examples of embeddings and parity calculations in the context of two classes of codes: linear block codes, which are an instance of the broad class of algebraic codes, and convolutional codes, which are an instance of the broad class of graphical codes.[1]

[1] Graphical codes are sometimes also called "probabilistic codes" in the literature, for reasons we can't get into here.

6.1 Bit error models

In the previous lectures, we developed a model for how channels behave using the idea of linear time-invariance and saw how noise corrupted the information being received at the other end of a channel. We characterized the output of the channel, y[n], as the sum of two components:

    y[n] = y_nf[n] + noise[n],    (6.1)

where the first term, y_nf[n], is the noise-free prediction of the channel output, which can be computed as the convolution of the channel's unit sample response with the input x[n], and the second is a random additive noise term. A good noise model for many real-world channels is Gaussian; such a model has a special name: additive white Gaussian noise, or AWGN.[2] AWGN has mean 0 and is fully characterized by its variance, σ². The larger the variance, the more intense the noise.

[2] The "white" part of this term refers to the variance being the same over all the frequencies being used to communicate.

One of the properties of AWGN is that there is always a non-zero probability that a voltage transmitted to represent a 0 will arrive at the receiver with enough noise that it will be interpreted as a 1, and vice versa. For such a channel, if we know the transmitter's signaling levels and the receiver's digitizing threshold, we know (from the earlier lecture on noise) how to calculate the probability of bit error (BER).
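As a sketch of that calculation, assuming two signaling levels with the digitizing threshold midway between them (the function name ber_awgn is ours), the BER is the probability that zero-mean Gaussian noise carries a sample across the threshold:

    import math

    def ber_awgn(v0, v1, sigma):
        # Distance from either signaling level to the midpoint threshold.
        d = abs(v1 - v0) / 2.0
        # P(noise > d) for zero-mean Gaussian noise with std dev sigma,
        # i.e. the Q-function Q(d/sigma), written here via erfc.
        return 0.5 * math.erfc(d / (sigma * math.sqrt(2.0)))

    # Example: levels 0 V and 1 V with sigma = 0.25 V gives
    # ber_awgn(0.0, 1.0, 0.25) ~ 0.0228, i.e. about 2% of bits flip.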
For the most part, we will assume a simple (but useful) model that follows from the properties of AWGN: a transmitted 0 bit may be digitized at the receiver as a 1 (after deconvolution) with some probability p (the BER), and a transmitted 1 may be digitized as a 0 at the receiver (after deconvolution) with the same probability p. Such a model is also called a binary symmetric channel, or BSC. The "symmetric" refers to the property that a 0 becomes a 1 and vice versa with the same probability, p.

The BSC is perhaps the simplest error model that is realistic, but real-world channels exhibit more complex behaviors. For example, over many wireless and wired channels, as well as on storage media (like CDs, DVDs, and disks), errors can occur in bursts. That is, the probability of any given bit being received wrongly depends on (recent) history: the likelihood is higher if the bits in the recent past were received incorrectly.

Our goal is to develop techniques to mitigate the effects of both the BSC and burst errors. We'll start with techniques that work well over a BSC and then discuss how to deal with bursts.

A BSC error model is characterized by one number, p. We can determine p empirically by noting that if we send N bits over the channel, the expected number of erroneously received bits is Np. Hence, by sending a known bit pattern and counting the fraction of erroneously received bits, we can estimate p.
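Here is a minimal Python sketch of this estimation procedure, assuming we can send a known pattern through a simulated BSC (the names bsc and estimate_p are illustrative):

    import random

    def bsc(bits, p):
        # Binary symmetric channel: flip each bit independently
        # with probability p.
        return [b ^ (random.random() < p) for b in bits]

    def estimate_p(n, p_true):
        sent = [0] * n                     # a known test pattern
        received = bsc(sent, p_true)
        errors = sum(s != r for s, r in zip(sent, received))
        return errors / n                  # fraction flipped, approx. p

    # Example: estimate_p(1_000_000, 0.01) should come out close to 0.01.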
In practice, even when the BSC is a reasonable error model, the range of p can be rather large, from 10⁻² (or even a bit higher) all the way down to 10⁻¹⁰ or even 10⁻¹². A value of p of about 10⁻² means that messages longer than 100 bits will see at least one error on average; given that the