MIT 6.050J: Errors


4 Errors
    4.1 Extension of System Model
    4.2 How do Errors Happen?
    4.3 Detection vs. Correction
    4.4 Hamming Distance
    4.5 Single Bits
    4.6 Multiple Bits
        4.6.1 Parity
        4.6.2 Rectangular Codes
        4.6.3 Hamming Codes
    4.7 Block Codes
    4.8 Advanced Codes
    4.9 Detail: Check Digits

Chapter 4
Errors

In Chapter 2 we saw examples of how symbols could be represented by arrays of bits. In Chapter 3 we looked at some techniques of compressing the bit representations of such symbols, or a series of such symbols, so fewer bits would be required to represent them. If this is done while preserving all the original information, the compression is said to be lossless, or reversible, but if done while losing (presumably unimportant) information, the compression is called lossy, or irreversible. Frequently source coding and compression are combined into one operation.

Because of compression, there are fewer bits carrying the same information, so each bit is more important, and the consequence of an error in a single bit is more serious. All practical systems introduce errors to information that they process (some systems more than others, of course). In this chapter we examine techniques for ensuring that such inevitable errors cause no harm.

4.1 Extension of System Model

Our model for information handling will be extended to include "channel coding." The new channel encoder adds bits to the message so that in case it gets corrupted in some way, the channel decoder will know that and possibly even be able to repair the damage.

[Figure 4.1: Communication system with errors. Input symbols pass through a source encoder, compressor, and channel encoder, cross a noisy channel, and then pass through a channel decoder, expander, and source decoder to produce the output symbols.]

Author: Paul Penfield, Jr.
This document: http://www.mtl.mit.edu/Courses/6.050/2008/notes/chapter4.pdf
Version 1.5, February 20, 2008. Copyright © 2008 Massachusetts Institute of Technology

4.2 How do Errors Happen?

The model pictured above is quite general, in that the purpose might be to transmit information from one place to another (communication), store it for use later (data storage), or even process it so that the output is not intended to be a faithful replica of the input (computation). Different systems involve different physical devices as the channel (for example a communication link, a floppy disk, or a computer). Many physical effects can cause errors. A CD or DVD can get scratched. A memory cell can fail. A telephone line can be noisy. A computer gate can respond to an unwanted surge in power supply voltage.

For our purposes we will model all such errors as a change in one or more bits from 1 to 0 or vice versa. In the usual case where a message consists of several bits, we will usually assume that different bits get corrupted independently, but in some cases errors in adjacent bits are not independent of each other, but instead have a common underlying cause (i.e., the errors may happen in bursts).
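As a minimal sketch of this independent bit-flip model (an illustration added here, not part of the original notes; the function name is made up), the Python code below corrupts each bit of a message independently with probability p. Burst errors would instead require a model in which nearby flips are correlated.

    import random

    def noisy_channel(bits, p):
        # Flip each bit independently with probability p:
        # the independent-error model described above.
        return [b ^ int(random.random() < p) for b in bits]

    message  = [0, 1, 1, 0, 1, 0, 0, 1]
    received = noisy_channel(message, 0.1)
    print(message)   # original pattern
    print(received)  # possibly corrupted pattern

With small p most bits arrive intact, but any single flip silently turns one bit pattern into another; the next section discusses how redundancy makes such flips visible.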
4.3 Detection vs. Correction

There are two approaches to dealing with errors. One is to detect the error and then let the person or system that uses the output know that an error has occurred. The other is to have the channel decoder attempt to repair the message by correcting the error. In both cases, extra bits are added to the messages to make them longer. The result is that the message contains redundancy; if it did not, every possible bit pattern would be a legal message and an error would simply change one valid message to another. By changing things so that many (indeed, most) bit patterns do not correspond to legal messages, the effect of an error is usually to change the message to one of the illegal patterns; the channel decoder can detect that there was an error and take suitable action. In fact, if every illegal pattern is, in a sense to be described below, closer to one legal message than any other, the decoder could substitute the closest legal message, thereby repairing the damage.

In everyday life error detection and correction occur routinely. Written and spoken communication is done with natural languages such as English, and there is sufficient redundancy (estimated at 50%) so that even if several letters, sounds, or even words are omitted, humans can still understand the message.

Note that channel encoders, because they add bits to the pattern, generally preserve all the original information, and therefore are reversible. The channel, by allowing errors to occur, actually introduces information (the details of exactly which bits got changed). The decoder is irreversible in that it discards some information, but if well designed it throws out the "bad" information caused by the errors and keeps the original information. In later chapters we will analyze the information flow in such systems quantitatively.

4.4 Hamming Distance

We need some technique for saying how similar two bit patterns are. In the case of physical quantities such as length, it is quite natural to think of two measurements as being close, or approximately equal. Is there a similar sense in which two patterns of bits are close?

At first it is tempting to say that two bit patterns are close if they represent integers that are adjacent, or floating point numbers that are close. However, this notion is not useful because it is based on particular meanings ascribed to the bit patterns. It is not obvious that two bit patterns which differ in the first bit should be any more or less "different" from each other than two which differ in the last bit.

A more useful definition of the difference between two bit patterns is the number of bits that are different between the two. This is called the Hamming distance, after Richard W. Hamming (1915–1998) [1]. Thus 0110 and 1110 are separated by a Hamming distance of one. Two patterns which are the same are separated by a Hamming distance of zero.

Note that a Hamming distance can only be defined between two bit patterns with the same number of bits. It does not make sense to speak of the Hamming distance of a single string of bits, or between two bit strings of different lengths.

Using this definition, the effect of errors introduced in the channel can be described by the Hamming distance between the two bit patterns, one at the input to the channel and the other at its output.

[1] See a biography of Hamming at http://www-groups.dcs.st-andrews.ac.uk/%7Ehistory/Biographies/Hamming.html
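To make these definitions concrete, here is a short Python sketch (an illustration added here, not part of the original notes; the function names are made up). It computes the Hamming distance between two equal-length bit patterns and uses it for the closest-legal-message decoding described in Section 4.3.

    def hamming_distance(a, b):
        # Number of positions at which two equal-length bit patterns differ.
        if len(a) != len(b):
            raise ValueError("patterns must have the same length")
        return sum(x != y for x, y in zip(a, b))

    def closest_legal_message(received, legal_messages):
        # The channel-decoder idea from Section 4.3: substitute the
        # legal message nearest (in Hamming distance) to what arrived.
        return min(legal_messages, key=lambda m: hamming_distance(received, m))

    print(hamming_distance("0110", "1110"))  # 1, as in the example above
    print(hamming_distance("0110", "0110"))  # 0

    # With a triple-repetition code, "000" and "111" are the only legal
    # messages, so any single-bit error stays closer to the sent pattern:
    print(closest_legal_message("010", ["000", "111"]))  # "000"

Note that min breaks ties arbitrarily; a real decoder would instead report an uncorrectable error when a received pattern is equally close to two legal messages.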

