MIT 6.454 - An Introduction to Multiple Description Coding

An Introduction to Multiple Description Coding
Sheng Jing
October 16, 2006

In this report, we formulate the Multiple Description (MD) coding problem, which was posed a couple of decades ago. Significant research effort has been devoted to completely characterizing the rate-distortion region of the MD coding problem, and various information-theoretic approaches to it have produced beautiful results. We summarize part of that research effort in this report.

1 Introduction

The MD coding problem was not created as a pure information-theoretic puzzle. As the author states in [1], "Multiple Description coding has come full circle from explicit practical motivation to theoretical novelty and back to engineering application". MD coding was invented at Bell Laboratories during the 1970s in connection with communicating speech over the telephone network. Although the telephone network enjoyed good reliability at that time, transmission outages were inevitable, mainly due to device failures, routine maintenance, or upgrades. Rather than diverting calls to standby transmission links in case of an outage, it may be preferable to split the information from a single call onto two separate links or paths. Some early attempts at channel splitting are summarized in [1].

The channel-splitting idea inspires the following question: "If an information source is described with two separate descriptions, what are the concurrent limitations on qualities of these descriptions taken separately and jointly?" [1]. This question eventually came to be known as the MD coding problem.

Before presenting the problem formulation of MD coding, we first introduce the basic definitions of rate-distortion theory and state Shannon's rate-distortion theorem.

1.1 Shannon's Rate-Distortion Theory

When information is transferred over a channel at a rate above the channel's capacity, distortion in the recovery of the information is inevitable.
The branch of information theory devoted to characterizing the relationship between achievable distortion and required rate is called rate-distortion theory. Perhaps the most important result in rate-distortion theory is Shannon's rate-distortion theorem [2], which is restated in this section.

We assume that $X_i$, $i = 1, 2, \cdots$, is a sequence of i.i.d. discrete random variables drawn according to a common probability mass function $p(x)$, $x \in \mathcal{X}$. We are given a reconstruction space $\hat{\mathcal{X}}$ together with an associated distortion measure

$$d : \mathcal{X} \times \hat{\mathcal{X}} \to \mathbb{R}. \tag{1}$$

A description of $x \in \mathcal{X}^n = \mathcal{X} \times \cdots \times \mathcal{X}$ is a map $i : \mathcal{X}^n \to \{1, \cdots, 2^{nR}\}$, where $R$ is the rate of the description in bits per source symbol of $x$. A reconstruction of $x$ is a map $\hat{x} : \{1, \cdots, 2^{nR}\} \to \hat{\mathcal{X}}^n$. The distortion incurred through this pair of description and reconstruction is defined by

$$d_n = E\left[ \frac{1}{n} \sum_{k=1}^{n} d\left( X_k, \hat{x}_k(i(X)) \right) \right]. \tag{2}$$

The distortion $d$ is said to be achievable with rate $R$ for the source sequence $\{X_i\}_{i=1}^{n}$ if for $n = 1, 2, \cdots$ there exists a sequence of rate-$R$ descriptions $i : \mathcal{X}^n \to \{1, \cdots, 2^{nR}\}$ and reconstructions $\hat{x} : \{1, \cdots, 2^{nR}\} \to \hat{\mathcal{X}}^n$ such that $d_n \le d$ for all $n$ sufficiently large.

Rate-Distortion Function. The rate-distortion function $R(d)$ is the infimum of all rates $R$ achieving distortion $d$ on a given stochastic process $\{X_i\}_{i=1}^{\infty}$.

Theorem 1.1 (Shannon's Rate-Distortion Theorem [2]). If $\{X_i\}_{i=1}^{\infty}$ are i.i.d. discrete finite-alphabet random variables with probability mass function $p(x)$, then

$$R(d) = \inf_{\mathcal{P}(d)} I(X; \hat{X}), \tag{3}$$

where

$$\mathcal{P}(d) = \left\{ p(\hat{x} \mid x) : \sum_{x, \hat{x}} p(x)\, p(\hat{x} \mid x)\, d(x, \hat{x}) \le d \right\}. \tag{4}$$

We can calculate the rate-distortion function for several special sources and distortion measures.

Corollary 1.2 (Bernoulli Source with Hamming Distortion). The rate-distortion function for a Bernoulli($\alpha$) source with Hamming distortion is

$$R(d) = \begin{cases} H(\alpha) - H(d), & 0 \le d \le \min\{\alpha, 1 - \alpha\}, \\ 0, & d > \min\{\alpha, 1 - \alpha\}, \end{cases} \tag{5}$$

where $H(\cdot)$ is the entropy function of a binary random variable.

Corollary 1.3 (Gaussian Source with Squared-Error Distortion).
The rate-distortion function for a $\mathcal{N}(0, \sigma^2)$ source with squared-error distortion is

$$R(d) = \begin{cases} \frac{1}{2} \log \frac{\sigma^2}{d}, & 0 \le d \le \sigma^2, \\ 0, & d > \sigma^2. \end{cases} \tag{6}$$

2 MD Coding: Problem Formulation

The 2-channel 3-receiver MD coding problem is represented in Figure 1.

[Figure 1: 2-Channel 3-Receiver MD Coding Problem. The encoder maps $X$ into two descriptions $j_1(X)$ and $j_2(X)$; Decoder 1 produces $\hat{X}_1$ from $j_1(X)$ alone, Decoder 2 produces $\hat{X}_2$ from $j_2(X)$ alone, and Decoder 1,2 produces $\hat{X}_{1,2}$ from both.]

The encoder is presented with a sequence of i.i.d. source symbols $\{X_i\}_{i=1}^{\infty}$. Each source symbol is distributed according to a probability mass function $p(x)$, $x \in \mathcal{X}$. We are given three reconstruction spaces $\hat{\mathcal{X}}_1$, $\hat{\mathcal{X}}_2$, $\hat{\mathcal{X}}_{1,2}$, together with the associated distortion measures

$$d_t : \mathcal{X} \times \hat{\mathcal{X}}_t \to \mathbb{R}, \quad t = 1, 2, \{1, 2\}. \tag{7}$$

The distortion measure on $n$-sequences is defined by the average per-symbol distortion

$$d_t^n(x, \hat{x}_t) = \frac{1}{n} \sum_{k=1}^{n} d_t(x_k, \hat{x}_{tk}), \quad t = 1, 2, \{1, 2\}, \tag{8}$$

where $x = (x_1, \cdots, x_n) \in \mathcal{X}^n$ and $\hat{x}_t = (\hat{x}_{t1}, \cdots, \hat{x}_{tn}) \in \hat{\mathcal{X}}_t^n$. The encoding and decoding functions are defined by

$$f_t : \mathcal{X}^n \to \{1, \cdots, M_t\}, \quad t = 1, 2, \tag{9}$$
$$g_t : \{1, \cdots, M_t\} \to \hat{\mathcal{X}}_t^n, \quad t = 1, 2, \tag{10}$$
$$g_{1,2} : \{1, \cdots, M_1\} \times \{1, \cdots, M_2\} \to \hat{\mathcal{X}}_{1,2}^n. \tag{11}$$

Denote $X = (X_1, \cdots, X_n) \in \mathcal{X}^n$. Define

$$\hat{X}_t = g_t(f_t(X)), \quad t = 1, 2, \tag{12}$$
$$\hat{X}_{1,2} = g_{1,2}(f_1(X), f_2(X)), \tag{13}$$

and

$$D_t = E\left[ d_t^n(X, \hat{X}_t) \right], \quad t = 1, 2, \{1, 2\}. \tag{14}$$

The quintuple $(f_1, f_2, g_1, g_2, g_{1,2})$ is called a code with parameters $(n, M_1, M_2, D_1, D_2, D_{1,2})$.

Achievable Rate-Distortion Vector. We say $(R_1, R_2)$ is achievable for distortion $d = (d_1, d_2, d_{1,2})$ if, for all $\epsilon > 0$, there exists for $n$ sufficiently large a code with parameters $(n, M_1, M_2, D_1, D_2, D_{1,2})$, where

$$M_t < 2^{(R_t + \epsilon)n}, \quad t = 1, 2, \tag{15}$$
$$D_t < d_t + \epsilon, \quad t = 1, 2, \{1, 2\}. \tag{16}$$

Rate-Distortion Region. The rate-distortion region $\mathcal{R}(d)$ for distortion $d = (d_1, d_2, d_{1,2})$ is the closure of the set of achievable rate vectors $(R_1, R_2)$ inducing distortions $\le d$.

Achievable Rate-Distortion Region. Any subset of the rate-distortion region is called an achievable rate-distortion region.
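As a quick numerical sketch (not part of the original report), the two closed-form rate-distortion functions of Corollaries 1.2 and 1.3 can be evaluated directly; the function names below are our own, and rates are in bits (base-2 logarithm):

```python
import math

def binary_entropy(p):
    """H(p) in bits, with H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rd_bernoulli(alpha, d):
    """Eq. (5): R(d) for a Bernoulli(alpha) source under Hamming distortion."""
    if d >= min(alpha, 1 - alpha):
        return 0.0
    return binary_entropy(alpha) - binary_entropy(d)

def rd_gaussian(sigma2, d):
    """Eq. (6): R(d) for a N(0, sigma2) source under squared-error distortion."""
    if d >= sigma2:
        return 0.0
    return 0.5 * math.log2(sigma2 / d)
```

For instance, a unit-variance Gaussian source reproduced at distortion $d = \sigma^2/4$ requires `rd_gaussian(1.0, 0.25)`, i.e. exactly 1 bit per symbol.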
Another common name for an achievable rate-distortion region is an inner bound to the rate-distortion region.

3 Results on Achievable Rate-Distortion Region

The following two sets of sufficient conditions for $(R_1, R_2, d_1, d_2, d_{1,2})$ to be achievable were derived by El Gamal and Cover in [4]; they will later be referred to as the EGC* and EGC regions, respectively.

Theorem 3.1 (EGC* Achievable RD Region). Let $X_1, X_2, \cdots$ be a sequence of i.i.d. finite-alphabet random variables drawn according to a probability mass function $p(x)$. Let $d_m(\cdot, \cdot)$ be
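To make the formulation of Section 2 concrete, here is a minimal simulation of the 2-channel 3-receiver setup of Figure 1 using a deliberately naive scheme (our own illustration, not a construction from the report): description 1 carries scalar-quantized even-indexed samples of a Gaussian source, description 2 the odd-indexed ones; each side decoder fills in the missing samples with the source mean, while the central decoder uses both descriptions. All names are illustrative.

```python
import random

def quantize(x, step=0.5):
    """Uniform scalar quantizer used for both descriptions."""
    return step * round(x / step)

def md_demo(n=10000, seed=0):
    """Return empirical (D1, D2, D12) for the even/odd splitting scheme."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]  # i.i.d. N(0, 1) source
    desc1 = {k: quantize(x[k]) for k in range(0, n, 2)}  # j1(X): even samples
    desc2 = {k: quantize(x[k]) for k in range(1, n, 2)}  # j2(X): odd samples

    def mse(xhat):
        # Average per-symbol squared-error distortion, as in eq. (8).
        return sum((a - b) ** 2 for a, b in zip(x, xhat)) / n

    # Side decoders: estimate each missing sample by the source mean, 0.
    xhat1 = [desc1.get(k, 0.0) for k in range(n)]
    xhat2 = [desc2.get(k, 0.0) for k in range(n)]
    # Central decoder: both descriptions available, so every sample is covered.
    xhat12 = [desc1.get(k, desc2.get(k)) for k in range(n)]
    return mse(xhat1), mse(xhat2), mse(xhat12)

d1, d2, d12 = md_demo()
```

The run exhibits the trade-off the MD problem formalizes: each side distortion is dominated by the half of the samples it must guess, while the central distortion is only the quantization error, so $D_{1,2} \ll D_1 \approx D_2$.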

