MIT 6.050J - Errors

Contents

4 Errors
  4.1 Extension of System Model
  4.2 How do Errors Happen?
  4.3 Detection vs. Correction
  4.4 Hamming Distance
  4.5 Single Bits
  4.6 Block Codes
    4.6.1 Parity
    4.6.2 Rectangular Codes
    4.6.3 Hamming Codes
  4.7 Advanced Codes
  4.8 Detail: Check Digits

Chapter 4
Errors

In Chapter 2 we saw examples of how symbols could be represented by arrays of bits. In Chapter 3 we looked at some techniques of compressing the bit representations of such symbols, or messages consisting of a sequence of such symbols, so fewer bits would be required. If this is done while preserving all the original information, the compression is said to be lossless, or reversible, but if done while losing (presumably unimportant) information, the compression is called lossy, or irreversible. Frequently source coding and compression are combined into one operation.

Because of compression, there are fewer bits carrying the same information, so each bit is more important, and the consequence of an error in a single bit is more serious. All practical systems introduce errors to information that they process (some systems more than others, of course). In this chapter we examine techniques for detecting errors when they happen and repairing them.

4.1 Extension of System Model

Our model for information handling is now extended to include "channel coding." The new channel encoder adds bits to the message so that in case it gets corrupted in some way, the channel decoder will know that and possibly even be able to repair the damage.

Input (Symbols) → Source Encoder → Compressor → Channel Encoder → Noisy Channel → Channel Decoder → Expander → Source Decoder → Output (Symbols)

Figure 4.1: Communication system with errors

Author: Paul Penfield, Jr.
This document: http://www.mtl.mit.edu/Courses/6.050/2010/notes/chapter4.pdf
Version 1.6, February 23, 2010.
Copyrightc 2010 Massachusetts Institute of TechnologyStart of notes · back · next | 6.050J/2.110J home page | Site map | Search | About this document | Comments and inquiries454.2 How do Errors Happen? 464.2 How do Errors Happen?The model pictured above is quite general, in that the purpose might be to transmit information fromone place to another (communication), store it for use later (data storage), or even process it so that theoutput is not intended to be a faithful replica of the input (computation). Different systems involve differentphysical devices as the channel (for example a communication link, a floppy disk, or a computer). Manyphysical effects can cause errors. A CD or DVD can get scratched. A memory cell can fail. A telephone linecan be noisy. A computer gate can respond to an unwanted surge in power supply voltage.For our purposes we will model all such errors as a change in one or more bits from 1 to 0 or vice versa.In the usual case where a message consists of several bits, we will usually assume that different bits getcorrupted independently, but in some cases errors in adjacent bits are not independent of each other, butinstead have a common underlying cause (i.e., the errors may happen in bursts).4.3 Detection vs. CorrectionThere are two approaches to dealing with errors. One is to detect the error and then let the person orsystem that uses the output know what happened. The other is to have the channel decoder attempt torepair the message by correcting the error. In both cases, extra bits are added to the messages to make themlonger. The result is that the message contains redundancy—if it did not, every possible bit pattern wouldbe a legal message and an error would simply change one valid message to another. 
By changing things so that many (indeed, most) bit patterns do not correspond to legal messages, the effect of an error is usually to change the message to one of the illegal patterns; the channel decoder can detect that there was an error and take suitable action. In fact, if every illegal pattern is closer (in a sense to be described below) to one legal message than to any other, the decoder can substitute the closest legal message, thereby repairing the damage.

In everyday life error detection and correction occur routinely. Written and spoken communication is done with natural languages such as English, and there is sufficient redundancy (estimated at 50%) so that even if several letters, sounds, or even words are omitted, humans can still understand the message.

Note that channel encoders, because they add bits to the pattern, generally preserve all the original information, and therefore are reversible. The channel, by allowing errors to occur, actually introduces information (the details of exactly which bits got changed). The decoder is irreversible in that it discards some information, but if well designed it throws out the "bad" information caused by the errors, and keeps the original information. In later chapters we will analyze the information flow in such systems quantitatively.

4.4 Hamming Distance

We need some technique for saying how similar two bit patterns are. In the case of physical quantities such as length, it is quite natural to think that two measurements are close if the two numbers measured are approximately equal. Is there a similar sense in which two patterns of bits are close?

At first it is tempting to say that two bit patterns are close if they represent integers that are adjacent, or floating point numbers that are close. However, this notion is not useful because it is based on particular meanings ascribed to the bit patterns.
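A quick check (an illustrative sketch, not from the notes) shows why integer adjacency is a poor notion of closeness for bit patterns: the neighboring integers 7 and 8 differ in every one of their four bits.

```python
# 7 is 0111 and 8 is 1000: adjacent as integers, maximally far as bit patterns.
a, b = 7, 8
differing_bits = bin(a ^ b).count("1")   # XOR marks the positions that differ
print(format(a, "04b"), format(b, "04b"), differing_bits)   # 0111 1000 4
```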
It is not obvious that two bit patterns which differ in the first bit should be any more or less "different" from each other than two which differ in the last bit.

A more useful definition of the difference between two bit patterns is the number of bits that are different between the two. This is called the Hamming distance, after Richard W. Hamming (1915–1998)[1]. Thus 0110 and 1110 are separated by a Hamming distance of one. The Hamming distance between two identical patterns is zero.

[1] See a biography of Hamming at http://www-groups.dcs.st-andrews.ac.uk/%7Ehistory/Biographies/Hamming.html

Note that a Hamming distance can only be defined between two bit patterns with the same number of bits. It does not make sense to speak of the Hamming distance of a single string of bits, or between two bit strings of different lengths.

Using this definition, the effect of errors introduced in the channel can be described by the Hamming distance between the bit pattern at the input to the channel and that at the output. No errors means Hamming
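The definition above translates directly into a few lines of code (a minimal sketch; the function name is ours, not from the notes): count the positions at which two equal-length patterns differ.

```python
def hamming_distance(p, q):
    """Number of bit positions in which patterns p and q differ.
    Defined only for two patterns of the same length."""
    if len(p) != len(q):
        raise ValueError("Hamming distance requires equal-length patterns")
    return sum(a != b for a, b in zip(p, q))

print(hamming_distance("0110", "1110"))   # 1
print(hamming_distance("0110", "0110"))   # 0
```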

