The Ten-Year-Old Turbo Codes are Entering into Service
Claude Berrou, ENST Bretagne
IEEE Communications Magazine, August 2003
Series: Capacity Approaching Codes, Iterative Decoding Algorithms, and Their Applications

ABSTRACT

In the matter of channel coding and spectral efficiency, up to the invention of turbo codes, 3 dB or more stood between what the theory promised and what real systems were able to offer. This gap has now been removed, allowing communication systems to be designed with quasi-optimum performance. Ten years after the first publication on this new technique, turbo codes have commenced their practical service.

INTRODUCTION

What is clear today is that Claude Shannon did not make the slightest mistake when he calculated the potential of channel coding and his famous capacity limits. We are now able to attain these limits, to within a few hundredths of a decibel [1, 2], thanks to turbo codes or turbo-like codes, but in no case, apparently, to go beyond them. A real barrier! And how laborious it was to come close to this asymptote! Hamming, Elias, Reed and Solomon, Bose, Chaudhuri and Hocquenghem, Gallager, Berlekamp, Forney, Viterbi, and so many others made important contributions to the 50-year-old edifice. But there seemed to be a latent prejudice in the field of information theory: because the foundation of digital communications relied on potent mathematical considerations [3], error correcting codes were believed to belong solely to the world of mathematics. If it is true that good codes (BCH, Reed-Solomon, etc.) were defined thanks to algebraic tools, physics also had its say in this story, in particular regarding decoding techniques. Turbo decoding was devised in this spirit, with the permanent intuition that the feedback concept, so precious in electronics, for instance, could also contribute to the decoding of compound (concatenated) codes, and this was indeed the doorway to iterative decoding. The issue of stability, which is crucial in feedback systems and thus in turbo decoders, was easily solved by introducing the notion of extrinsic information, which prevents the cascade of decoders from behaving as a positive feedback amplifier. Another concern was the search for symmetry, a basic rule in many physical structures, and this concern led to the so-called parallel concatenation concept, which offers a perfect balance between the component codes, unlike classical serial concatenation.

Turbo codes were presented to the scientific community just 10 years ago [4]. Their invention was the result of a pragmatic construction conducted by C. Berrou and A. Glavieux [5], based on the intuitions of G. Battail, J. Hagenauer, and P. Hoeher, who, in the late '80s, highlighted the interest of probabilistic processing in digital communication receivers [6, 7]. Previously, other researchers, mainly P. Elias [8], R. Gallager [9], and M. Tanner [10], had already imagined coding and decoding techniques whose general principles are closely related to those of turbo codes. Since 1993, turbo codes have been widely studied and adopted in several communication systems, and the inherent concepts of the "turbo" principle have been applied to topics other than error correction coding, such as single-user and multi-user detection.

BRIEFLY, HOW DOES IT WORK?

First, let us go back 30 years in the history of coding. In a well-known paper [11], D. Forney made an in-depth presentation of convolutional codes, which can take the two forms described in Fig. 1, with the example of ν = 3 memory units: nonrecursive nonsystematic (a), and recursive systematic (b). For reasons that are not so obvious today, with the passing of time, Forney advocated the use of the first structure, which has indeed been widely used with success in many digital transmission systems. Turbo codes use the other structure, which offers several advantages in comparison with the former. The first advantage is conceptual: a recursive systematic convolutional (RSC) code is based on a pseudo-random scrambler, and random codes are precisely what Shannon used to calculate the theoretical potential of channel coding. The second advantage is decisive for high coding rates and/or high levels of noise: it just works better. The final advantage, related to the previous one and also fundamental for turbo coding, concerns what are called return to zero (RTZ) sequences. This is developed in the next section.
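To make the two structures concrete, here is a minimal Python sketch of a ν = 3 nonrecursive nonsystematic encoder and its recursive systematic counterpart. Since Fig. 1 is not reproduced in this preview, the tap vectors G1 and G2 below are illustrative assumptions, not necessarily the polynomials of the figure.

```python
NU = 3                 # nu = 3 memory units, as in the article's example
G1 = [1, 0, 1, 1]      # tap vectors [g0, g1, g2, g3]: g0 weights the newest value,
G2 = [1, 1, 0, 1]      # g3 the oldest cell (assumed polynomials, for illustration only)


def nsc_encode(bits, state=(0, 0, 0)):
    """Nonrecursive nonsystematic encoder (structure a): two coded bits per input bit."""
    s = list(state)
    out = []
    for u in bits:
        v = [u] + s                                   # current input + register contents
        c1 = sum(g * x for g, x in zip(G1, v)) % 2    # first coded bit
        c2 = sum(g * x for g, x in zip(G2, v)) % 2    # second coded bit
        out.append((c1, c2))
        s = [u] + s[:-1]                              # plain shift: memory holds past inputs
    return out, s


def rsc_encode(bits, state=(0, 0, 0)):
    """Recursive systematic encoder (structure b): (systematic, parity) per input bit."""
    s = list(state)
    out = []
    for u in bits:
        # Recursion: the bit fed into the register mixes the input with the feedback taps,
        # so the state depends on the whole past of the sequence (pseudo-random scrambler).
        fb = (u + sum(g * x for g, x in zip(G1[1:], s))) % 2
        p = (G2[0] * fb + sum(g * x for g, x in zip(G2[1:], s))) % 2
        out.append((u, p))                            # the information bit is transmitted as is
        s = [fb] + s[:-1]
    return out, s


if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    print(nsc_encode(msg)[0])   # rate-1/2 nonsystematic output
    print(rsc_encode(msg)[0])   # rate-1/2 systematic output
```

Both encoders are rate 1/2; the only structural difference is whether the register is fed by the raw input (a) or by the input combined with the feedback taps (b), which is exactly the distinction the RTZ discussion below relies on.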
RTZ SEQUENCES AND CIRCULAR ENCODING

Suppose that both registers of the nonrecursive and recursive encoders, each having ν memory units, are initialized in state 0, and that any random sequence, followed by some (at least ν) additional 0s, feeds the two encoders. After the full sequence has passed, the nonrecursive encoder systematically returns to state 0, whereas the recursive encoder does so with probability 1/2^ν. This is because the latter is based on a pseudo-random scrambler. When the register goes back to state 0 after the encoding of a given sequence followed by some 0s, this sequence is called a return to zero (RTZ) sequence. So all random sequences continued by at least ν 0s are RTZ for a nonrecursive encoder, and only a fraction (1/2^ν) of them are RTZ for a recursive encoder.

Now, let us adapt the convolutional code in order to transform it into a block code. The best way to do this is to allow any state as the initial state and to encode the sequence, containing k information bits, so that the final state of the encoder register will be equal to the initial state. The trellis of the code (the temporal representation of the possible states of the encoder, from time i = 1 to i = k) can then be regarded as a circle, and this technique is called circular (or tail-biting) termination. In the sequel we call the circular version of RSC codes circular RSC (CRSC). Thus, without having to pay for any additional information, and therefore without impairing spectral efficiency, the convolutional code has become a real block code, in which, for each time i, the past is also the future, and vice versa. This means that a non-RTZ sequence produces effects on the whole set of redundant symbols stemming from the CRSC encoder, around the whole circle; and thanks to this very informative redundancy, the decoder has very little probability of failing to recover this non-RTZ sequence. This property explains the superiority, in most situations, of CRSC codes over classical nonrecursive convolutional codes, for which all sequences are RTZ. It is also the key for
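The 1/2^ν figure is easy to check empirically. The short experiment below (a sketch reusing nsc_encode, rsc_encode, and NU from the encoder sketch earlier in this article) feeds random sequences padded with ν zeros to both encoders and counts how often each register ends in state 0.

```python
import random


def rtz_rates(trials=20000, length=64):
    """Fraction of random sequences, padded with NU zeros, that are RTZ for each encoder."""
    nsc_hits = rsc_hits = 0
    for _ in range(trials):
        seq = [random.randint(0, 1) for _ in range(length)] + [0] * NU
        if nsc_encode(seq)[1] == [0] * NU:   # final state of the nonrecursive encoder
            nsc_hits += 1
        if rsc_encode(seq)[1] == [0] * NU:   # final state of the recursive encoder
            rsc_hits += 1
    return nsc_hits / trials, rsc_hits / trials


# With nu = 3 this prints roughly (1.0, 0.125): every padded sequence is RTZ for the
# nonrecursive encoder, but only about one in 2^nu = 8 for the recursive one.
print(rtz_rates())
```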
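Circular (tail-biting) termination can be illustrated with the same RSC sketch: choose the initial state so that encoding the k information bits brings the register back to that same state. The brute-force search over the 2^ν candidate states below is only an illustration; practical CRSC encoders typically derive the circulation state algebraically from a preliminary encoding pass, and for this kind of recursive register no circulation state exists when the block length is a multiple of the register's period.

```python
from itertools import product


def crsc_encode(bits):
    """Circular RSC (CRSC) encoding: the start state is chosen equal to the end state."""
    for s0 in product((0, 1), repeat=NU):
        out, s_end = rsc_encode(bits, s0)
        if s_end == list(s0):
            # The trellis closes on itself: for every time i, the past is also the future.
            return out, list(s0)
    raise ValueError("no circulation state for this block length")


# Example: an 8-bit block encoded on a circular trellis, with no termination bits added.
codeword, circulation_state = crsc_encode([1, 0, 1, 1, 0, 0, 1, 0])
print(circulation_state, codeword)
```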

