MIT 6.050J - Errors

4 Errors
  4.1 Extension of System Model
  4.2 How do Errors Happen?
  4.3 Detection vs. Correction
  4.4 Hamming Distance
  4.5 Single Bits
  4.6 Multiple Bits
    4.6.1 Parity
    4.6.2 Rectangular Codes
    4.6.3 Hamming Codes
  4.7 Block Codes
  4.8 Advanced Codes

Chapter 4: Errors

In Chapter 2 we saw examples of how symbols could be represented by arrays of bits. In Chapter 3 we looked at some techniques for compressing the bit representations of such symbols, or a series of such symbols, so fewer bits would be required to represent them. If this is done while preserving all the original information, the compression is said to be lossless, or reversible; if done while losing (presumably unimportant) information, the compression is called lossy, or irreversible. Frequently source coding and compression are combined into one operation.

Because of compression, there are fewer bits carrying the same information, so each bit is more important, and the consequence of an error in a single bit is more serious. All practical systems introduce errors to information that they process (some systems more than others, of course). In this chapter we examine techniques for ensuring that inevitable errors cause no harm.

4.1 Extension of System Model

Our model for information handling will be extended to include two new elements, to provide "channel coding." The purpose of this encoder, and the accompanying channel decoder, is to add bits to the message so that in case it gets corrupted in some way, the receiver will know that, and possibly even be able to repair the damage.

[Figure 4.1: Elaborate communication system. Input (Symbols) -> Source Encoder -> Compressor -> Channel Encoder -> Channel -> Channel Decoder -> Expander -> Source Decoder -> Output (Symbols)]

Author: Paul Penfield, Jr.
This document: http://www-mtl.mit.edu/Courses/6.050/2005/notes/chapter4.pdf
Version 1.2, February 14, 2005.
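The channel encoder/decoder pair of Figure 4.1 can be illustrated with a minimal sketch (the function names here are hypothetical, not from the notes). It uses one of the simplest channel codes, a 3x repetition code: each message bit is transmitted three times, and the decoder takes a majority vote over each group of three, which repairs any single-bit error within a group.

```python
def channel_encode(bits):
    """Channel encoder: repeat each message bit three times (adds redundancy)."""
    return [b for b in bits for _ in range(3)]

def channel_decode(bits):
    """Channel decoder: majority-vote each group of three received bits."""
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1]
sent = channel_encode(message)   # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
received = sent[:]
received[4] ^= 1                 # the channel flips one bit
assert channel_decode(received) == message  # the decoder repairs the damage
```

Note the cost of this redundancy: the encoded message is three times as long as the original, which is why the more efficient codes listed in the table of contents (parity, rectangular, and Hamming codes) are of interest.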
Copyrightc 2005 Massachusetts Institute of TechnologyStart of notes · back · next | 6.050J/2.110J home page | Site map | Search | About this document | Comments and inquiries424.2 How do Errors Happen? 434.2 How do Errors Happen?The model pictured above is quite general, in that the purpose m ight be to transmit information fromone place to another (communication), store it for use later (data storage), or even process it so that theoutput is not intended to be a faithful replica of the input (computation). Different systems involve differentphysical devices as the channel (for example a communication link, a floppy disk, or a computer). Manyphysical effects can cause errors. A CD can get scratched. A memory cell can fail. A telephone line can benoisy. A computer gate can resp ond to an undesired surge in power supply voltage.For our purposes we will model all such errors as a change in one or more bits from 1 to 0 or vice versa.In the usual case where a message consists of several bits, we will usually assume that different bits getcorrupted independently, but in some cases we may also be interested in the case in which errors in adjacentbits are not independent of each other, but instead have a common underlying cause (i.e., the errors happenin bursts).4.3 Detection vs. CorrectionThere are two approaches to dealing with errors. One is to detect the error and then let the person orsystem that uses the output know that an error has occurred. The other is to have the channel deco derattempt to repair the message by correcting the error. In both cases, extra bits are added to the mes sagesto make them longer. 
The result is that the message contains redundancy; if it did not, every possible bit pattern would be a legal message, and an error would simply change one message to another valid message. By changing things so that many (indeed, most) bit patterns do not correspond to legal messages, the effect of an error is to change the message to one of the illegal patterns; the channel decoder can detect that there was an error and take suitable action. In fact, if every illegal pattern is, in a sense to be described below, closer to one legal message than to any other, the decoder could substitute the closest legal message, thereby repairing the damage.

In everyday life, error detection and correction occur routinely. Written and spoken communication is done with natural languages such as English, and there is sufficient redundancy (estimated at 50%) that even if several letters, sounds, or even words are omitted, humans can still understand the message.

Note that channel encoders, because they add bits to the pattern, generally preserve all the original information, and therefore represent reversible operations. The channel, by allowing errors to occur, actually introduces information (the details of exactly which bits got changed). The decoder is irreversible in that it discards some information, but if well designed it throws out the "bad" information caused by the errors and keeps the original information. In later chapters we will analyze the information flow in such systems quantitatively.

4.4 Hamming Distance

We need a measure of how similar two bit patterns are. It is natural to think of two measurements of physical quantities as possibly being close; for example, the lengths of two objects can be approximately equal. Is there a similar sense in which two patterns of bits are close?

At first it is tempting to say that two bit patterns are close if they represent integers that are adjacent, or floating point numbers that are close.
However, this notion is not useful because it is based on particular meanings ascribed to the bit patterns. It is not obvious that two bit patterns which differ in the first bit should be any more or less "different" from each other than two which differ in the last bit.

A more useful definition of the difference between two bit patterns is the number of bits that are different between the two. This is called the Hamming distance, after Richard W. Hamming (1915-1998).[1] Thus 0110 and 1110 are separated by a Hamming distance of one. Two patterns which are the same are separated by a Hamming distance of zero.

Note that a Hamming distance can only be defined between two bit patterns, and these two patterns must have the same number of bits. It does not make sense to speak of the Hamming distance of one string of bits.

Using this definition, the effect of errors introduced in the channel can be described by the Hamming distance

[1] See a biography of Hamming at http://www-groups.dcs.st-andrews.ac.uk/%7Ehistory/Mathematicians/Hamming.html
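The definition above translates directly into a few lines of code. This is a sketch (the function name is my own, not from the notes): it counts the positions at which two equal-length bit patterns differ, and rejects patterns of different lengths, since the Hamming distance is undefined in that case.

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length bit patterns differ."""
    if len(a) != len(b):
        raise ValueError("Hamming distance requires patterns of equal length")
    return sum(x != y for x, y in zip(a, b))

# The example from the text: 0110 and 1110 differ only in the first bit.
assert hamming_distance("0110", "1110") == 1
assert hamming_distance("0110", "0110") == 0
```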