Error Exponents for Joint Source-Channel Coding with Delay-Constraints

Cheng Chang
Wireless Foundations, Dept. of EECS, University of California at Berkeley
Email: [email protected]

Anant Sahai
Wireless Foundations, Dept. of EECS, University of California at Berkeley
Email: [email protected]

Abstract: Traditionally, joint source-channel coding is viewed in the block coding context: all the source symbols are known in advance by the encoder. Here, we consider source symbols that are revealed to the encoder in real time and require that they be reconstructed at the decoder within a fixed end-to-end delay. We derive upper and lower bounds on the reliability function with delay for cases with and without channel feedback. For erasure channels with feedback, the upper bound is shown to be achievable, and the resulting scheme shows that from a delay perspective, nearly separate source and channel coding is optimal.

I. INTRODUCTION

The block-length story for error exponents is particularly seductive when upper and lower bounds agree, as they do for both lossless source coding and for point-to-point channel coding in the high-rate regime [1], [2]. However, the block-code setting conflates a particular style of implementation with the problem statement itself. Recently, it has become clear that fixed-block codes can incur unnecessarily poor performance with respect to end-to-end delay, even when the required delay is fixed.

In [3], we show that although the block channel coding reliability functions do not change with feedback in the high-rate regime, the reliability function with respect to fixed delay can improve dramatically with feedback. (It had long been known that the reliability function with respect to average block length can improve with feedback, but there was a mistaken assertion by Pinsker in [4] that the fixed-delay exponents do not improve.) For fixed-rate lossless source coding, [5] showed similarly that the reliability function with fixed delay is much better than the reliability with fixed block length. An example given in [5], [6] illustrated how an extremely simple and clearly suboptimal nonblock code can sometimes dramatically outperform the best possible fixed-length block code when the tradeoff with fixed delay is considered.

These results suggest that a more systematic examination of the tradeoff between delay and probability of error is needed in other contexts as well. The main results in this paper are upper and lower bounds on the error exponents with delay for lossless joint source-channel coding, both with and without feedback.

Without feedback, the upper-bound strategy parallels the one used in [3] for channel coding alone. We also derive a lower bound (achievability) on the fixed-delay error exponents by combining variable-length source coding and sequential random channel coding. These two bounds are in general not the same. (This parallels what was reported in [7] for block coding: there, separate source-channel coding does not achieve the upper bound on the joint source-channel error exponent.) We then study the joint source-channel coding problem with noiseless feedback, giving a "focusing" [3] type of upper bound similar to the pure channel coding result in [3]. We then give a matching lower bound on the joint source-channel reliability for the special case of binary erasure channels. The scheme is interesting because it is essentially a separation-based architecture, except that it uses a variable-rate interface between the source and channel coding layers.

Fig. 1. Sequential joint source-channel coding, delay $\Delta = 3$. In the figure, one source symbol arrives per channel use, the channel input $x_t$ is a function of the source symbols $s_1^t$ revealed so far, and the decoder produces its estimate of $s_i$ from the channel outputs at time $i + \Delta$, e.g. $\hat{s}_1(4)$, $\hat{s}_2(5)$, $\hat{s}_3(6)$.
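To make the fixed-delay setting of Fig. 1 concrete, here is a small simulation sketch. It is not the coding scheme analyzed in this paper; it is only a toy first-in-first-out retransmission strategy over a binary erasure channel with perfect feedback, with assumed parameters (erasure probability 0.2, one fresh source bit every two channel uses) chosen purely for illustration. It shows the kind of quantity the paper studies: how the probability that a symbol is still undecoded $\Delta$ channel uses after it arrives falls off exponentially in $\Delta$.

```python
import random

# Toy illustration of the fixed-delay error event in Fig. 1 (NOT the scheme of this paper):
# a FIFO retransmission strategy over a binary erasure channel (BEC) with perfect feedback.
# One fresh source bit arrives every PERIOD channel uses; in each channel use the oldest
# undelivered bit is sent and arrives with probability 1 - EPS (feedback tells the encoder
# whether to retransmit).

EPS = 0.2        # assumed erasure probability; 1 - EPS must exceed 1/PERIOD for stability
PERIOD = 2       # assumed arrival period: one source bit per 2 channel uses
T = 1_000_000    # number of channel uses to simulate
random.seed(0)

queue = []       # arrival times of bits not yet delivered (FIFO)
delays = []      # delivery_time - arrival_time for each delivered bit

for t in range(1, T + 1):
    if t % PERIOD == 0:                      # a new source bit arrives at time t
        queue.append(t)
    if queue and random.random() > EPS:      # head-of-line bit gets through this channel use
        delays.append(t - queue.pop(0))

# Empirical tail of the decoding delay; the roughly constant ratio between successive rows
# is the exponential decay in Delta that a fixed-delay error exponent captures.
n = len(delays)
for delta in range(0, 31, 5):
    tail = sum(d > delta for d in delays) / n
    print(f"Delta = {delta:2d}   P(delay > Delta) ~ {tail:.2e}")
```

With feedback the encoder always knows which bit is still outstanding, which is what makes this queue-based view natural for erasure channels; the printed tail probabilities shrink by a roughly constant factor per step of $\Delta$, which is the exponential behavior the delay exponent measures.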
A. Review of block joint source-channel coding

The setup is shown in Figure 2. For convenience, there is a common clock for the iid source $s_1^n$, $s_i \sim Q_s$, drawn from a finite alphabet $\mathcal{S}$, and the DMC $W$. (In this paper, $\mathbf{s}$ denotes the random variable and $s$ its realization.) Without loss of generality, the source distribution satisfies $Q_s(s) > 0$ for all $s \in \mathcal{S}$. The DMC has transition matrix $W_{y|x}$, where the input and output alphabets are the finite sets $\mathcal{X}$ and $\mathcal{Y}$ respectively. A block joint source-channel coding system for $n$ source symbols consists of an encoder-decoder pair $(E^n, D^n)$, where
$E^n: \mathcal{S}^n \to \mathcal{X}^n$, $E^n(s_1^n) = x_1^n$,
$D^n: \mathcal{Y}^n \to \mathcal{S}^n$, $D^n(y_1^n) = \hat{s}_1^n$.

Fig. 2. Joint source-channel coding with a fixed delay requirement: the source $s_1, s_2, \ldots$, $s_i \sim Q_s$, enters a joint source-channel encoder producing $x_1, x_2, \ldots$, which pass through the noisy channel $W_{y|x}$; from the outputs $y_1, y_2, \ldots$ the joint source-channel decoder produces $\hat{s}_1, \hat{s}_2, \ldots$

The error probability is $\Pr(s_1^n \neq \hat{s}_1^n) = \Pr(s_1^n \neq D^n(E^n(s_1^n)))$. The block source-channel exponent $E_{b,sc}$ is achievable if there exists a family $\{(E^n, D^n)\}$ such that (all logarithms in this paper are base 2, so exponents are in bits)
$\liminf_{n \to \infty} -\frac{1}{n} \log_2 \Pr(s_1^n \neq \hat{s}_1^n) = E_{b,sc}$.   (1)

The relevant results of [7] are summarized in the following theorem.

Theorem 1: $E_{b,sc}^{(1)} \le E_{b,sc} \le E_{b,sc}^{(2)}$, where
$E_{b,sc}^{(1)} = \min_{H(Q_s) \le R \le \log|\mathcal{S}|} \{ e(R) + E_r(R) \}$,
$E_{b,sc}^{(2)} = \min_{H(Q_s) \le R \le \log|\mathcal{S}|} \{ e(R) + E(R) \}$.
Here $e(R) = \min_{U: H(U) \ge R} D(U \| Q_s)$ is the block source coding error exponent for source $Q_s$, $E(R)$ is the block channel coding error exponent for channel $W_{y|x}$, and
$E_r(R) = \max_P \min_V \left[ D(V \| W | P) + |I(P, V) - R|^+ \right]$
is the random coding error exponent for channel $W_{y|x}$. Thus, as shown in [1],
$E(R) \le E_{sp}(R) = \max_P \min_{V: I(P, V) \le R} D(V \| W | P)$,
the sphere-packing bound, and $E(R) = E_r(R) = E_{sp}(R)$ for $R_{cr} \le R \le C$. As shown in [7], if the minimum of $e(R) + E_r(R)$ is attained at an $R \ge R_{cr}$, then $E_{b,sc} = e(R) + E_r(R)$. This error exponent is in general better than the obvious separate source-channel coding error exponent $\max_R \{ \min\{ e(R), E_r(R) \} \}$ [7].

B. Source coding and channel coding with delay constraint

We review some related results from [5] on sequential lossless source coding and from [3] on sequential channel coding.

For lossless source coding, [5] derives a tight bound on the error exponent under a hard delay constraint: for all $i$,
$\liminf_{\Delta \to \infty} -\frac{1}{\Delta} \log_2 \Pr(s_i \neq \hat{s}_i(i + \Delta)) = E_{s,s}(R)$,
where $E_{s,s}(R)$ is the sequential source coding error exponent
$E_{s,s}(R) = \inf_{\alpha > 0,\ U_s: H(U_s) \ge (1+\alpha)R} \frac{1}{\alpha} D(U_s \| Q_s) = \inf_{\alpha > 0} \frac{1}{\alpha} e((1+\alpha)R)$,   (2)
and $e(R) = \inf_{U_s: H(U_s) \ge R} D(U_s \| Q_s)$ is the block source coding error exponent [2].

For channel coding without feedback but facing a hard delay constraint, [9] reviews how the random block-coding error exponent $E_r(R)$ governs how the probability of bit error decays with delay for random tree codes. Formally, for all $i$ and $\Delta$, there exists a sequential channel coding
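As a rough numerical companion to Theorem 1 and equation (2), the following sketch (an illustration added here, not part of the paper) evaluates the exponents for a Bernoulli($q$) source over a binary symmetric channel with crossover probability $p$ and uniform inputs. The values $q = 0.1$, $p = 0.02$ and the rate $R_0 = 0.75$ are assumed purely for illustration, and the random coding exponent is computed in its Gallager parametric form, which for a fixed input distribution agrees with the min-over-$V$ form quoted above.

```python
import math

# Numerical sketch (illustrative, not taken from the paper): for a Bernoulli(q) source and
# a BSC(p) with uniform inputs, evaluate
#   * the block source coding exponent  e(R) = min_{U : H(U) >= R} D(U || Q_s),
#   * the random coding exponent Er(R) in Gallager form max_{0<=rho<=1} [E0(rho) - rho R],
#   * the Theorem 1 lower bound  E1_bsc = min_{H(Q_s) <= R <= 1} { e(R) + Er(R) },
#   * the separate-coding exponent  max_R min{ e(R), Er(R) },
#   * the sequential source exponent of eq. (2),  E_ss(R) = inf_{a > 0} e((1+a)R) / a.
# The parameters q, p and the rate R0 below are assumed values chosen only for illustration.

def h2(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def d2(a, b):
    """Binary divergence D(a || b) in bits."""
    out = 0.0
    if a > 0:
        out += a * math.log2(a / b)
    if a < 1:
        out += (1 - a) * math.log2((1 - a) / (1 - b))
    return out

def e_source(R, q):
    """e(R) for a Bernoulli(q) source, q <= 1/2: equals D(p* || q) with H(p*) = R, p* in [q, 1/2]."""
    if R <= h2(q):
        return 0.0
    if R > 1.0:                  # no binary distribution has entropy above 1 bit
        return math.inf
    lo, hi = q, 0.5              # bisection: h2 is increasing on [q, 1/2]
    for _ in range(60):
        mid = (lo + hi) / 2
        if h2(mid) < R:
            lo = mid
        else:
            hi = mid
    return d2(hi, q)

def er_bsc(R, p):
    """Random coding exponent of a BSC(p) with uniform inputs, by grid search over rho."""
    best = 0.0
    for k in range(1001):
        rho = k / 1000
        e0 = rho - (1 + rho) * math.log2(p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho)))
        best = max(best, e0 - rho * R)
    return best

q, p = 0.1, 0.02                 # assumed source bias and channel crossover probability
Rs = [h2(q) + i * (1.0 - h2(q)) / 400 for i in range(401)]   # grid over H(Q_s) <= R <= log|S| = 1

joint = min(e_source(R, q) + er_bsc(R, p) for R in Rs)       # E1_bsc of Theorem 1
separate = max(min(e_source(R, q), er_bsc(R, p)) for R in Rs)
print(f"E1_bsc (Theorem 1 lower bound): {joint:.4f} bits")
print(f"separate source-channel coding exponent: {separate:.4f} bits")

R0 = 0.75                        # an assumed operating rate for the sequential source exponent
alphas = [i / 500 for i in range(1, 5001)]                   # grid over 0 < alpha <= 10
e_ss = min(e_source((1 + a) * R0, q) / a for a in alphas)
print(f"E_ss({R0}) from eq. (2): {e_ss:.4f} bits")
```

For any such parameters the printed joint bound is at least as large as the separate-coding exponent, consistent with the remark above that the joint exponent is in general better than separate source and channel coding.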

