
# CALTECH EE 127 - Lecture notes


EE/Ma 127b Lecture 14, May 5, 2004. Copyright © 2004 by R. J. McEliece.

## Outline

- Weight Enumerators for Block Codes
- The Bhattacharyya Parameter
- The Union Bound
- Weight Analysis of Convolutional Codes
- The Transfer Matrix Method

## Weight Enumerators for Block Codes

- For an $(n, k)$ binary linear block code, the weight enumerator is the polynomial
  $$A(z) = A_0 + A_1 z + \cdots + A_n z^n,$$
  where $A_i$ denotes the number of codewords of weight $i$.
- Thus, for example (with $d$ the minimum distance of the code):
  $$A_0 = 1, \qquad A_1 = \cdots = A_{d-1} = 0, \qquad A_0 + A_1 + \cdots + A_n = 2^k.$$

## Examples

- The (7, 4) Hamming code: $A(z) = 1 + 7z^3 + 7z^4 + z^7$.
- The (8, 4) extended Hamming code: $A(z) = 1 + 14z^4 + z^8$.
- The (23, 12) Golay code: $A(z) = 1 + 253z^7 + 506z^8 + 1288z^{11} + 1288z^{12} + 506z^{15} + 253z^{16} + z^{23}$.
- The (24, 12) extended Golay code: $A(z) = 1 + 759z^8 + 2576z^{12} + 759z^{16} + z^{24}$.

## The Bhattacharyya Parameter

- Suppose the code is being used on a binary-input DMC with input alphabet $\{0, 1\}$ and output alphabet $A$, and transition probabilities $p(y|0)$ and $p(y|1)$, for $y \in A$.
- The channel Bhattacharyya parameter is defined as
  $$\gamma = \sum_{y \in A} \sqrt{p(y|0)\, p(y|1)}.$$
- $\gamma$ is a kind of summary of the noisiness of the channel.

## The Bhattacharyya Parameter

- Example 1. For a BSC with crossover probability $p$,
  $$\gamma = \sqrt{(1-p)\,p} + \sqrt{p\,(1-p)} = 2\sqrt{p(1-p)}.$$
- Example 2. For a BEC with erasure probability $\epsilon$ (outputs $\{0, ?, 1\}$),
  $$\gamma = \sqrt{(1-\epsilon)\cdot 0} + \sqrt{\epsilon\cdot\epsilon} + \sqrt{0\cdot(1-\epsilon)} = \epsilon.$$

## The Bhattacharyya Parameter

- Example 3. For an AWGN channel with inputs $\pm 1$ and noise variance $\sigma^2$,
  $$p(y|x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y-x)^2/2\sigma^2},$$
  so
  $$\gamma = \int_{-\infty}^{+\infty} \sqrt{p(y|{+}1)\, p(y|{-}1)}\, dy = e^{-1/2\sigma^2}.$$
- Remember, $1/\sigma^2 = 2R E_b/N_0$, so $\gamma = e^{-R E_b/N_0}$.

## The Two-Codeword Union Bound

- Let $C = \{0000000, 1111111\}$, and assume $\mathbf{0} = 0000000$ is sent and $y$ is received.
- An ML decoder selects $\arg\max\big(p(y|\mathbf{0}),\, p(y|\mathbf{1})\big)$.
- Let $Y_1 = \{y : p(y|\mathbf{1}) \ge p(y|\mathbf{0})\}$, the set of "bad" received words.
- If $P_E$ denotes the probability of MLD error,
  $$P_E \le \sum_{y \in Y_1} p(y|\mathbf{0}).$$

## Doing the Math

$$
P_E \le \sum_{y \in Y_1} p(y|\mathbf{0})
\le \sum_{y \in Y_1} p(y|\mathbf{0})\sqrt{\frac{p(y|\mathbf{1})}{p(y|\mathbf{0})}}
= \sum_{y \in Y_1} \sqrt{p(y|\mathbf{0})\, p(y|\mathbf{1})}
\le \sum_{y \in A^n} \sqrt{p(y|\mathbf{0})\, p(y|\mathbf{1})}
= \prod_{i=1}^{n} \sum_{y \in A} \sqrt{p(y|0)\, p(y|1)}
= \gamma^n.
$$

## Conclusion

**Theorem.** If the code has only two codewords, say $x$ and $y$, then
$$P_{\mathrm{MLD}} \le \gamma^{d_H(x, y)}.$$
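The two-codeword theorem is easy to check numerically. A minimal Python sketch (the function names are mine, not the lecture's) compares the exact MLD error probability of the two-codeword repetition code on a BSC with the bound $\gamma^n$:

```python
import math
from math import comb

def gamma_bsc(p):
    """Bhattacharyya parameter of a BSC with crossover probability p."""
    return 2 * math.sqrt(p * (1 - p))

def repetition_mld_error(n, p):
    """Exact MLD error probability for the two-codeword code
    {00...0, 11...1} on BSC(p), with n odd: the decoder errs
    exactly when more than n/2 bits are flipped."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

n, p = 7, 0.01
exact = repetition_mld_error(n, p)
bound = gamma_bsc(p) ** n
print(exact, bound)   # the bound gamma^n dominates the exact probability
```

As the next example notes, the exact probability is $O(p^{(n+1)/2})$ while the bound is only $O(p^{n/2})$, so the bound is loose by roughly a factor of $p^{-1/2}$.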
- **Example.** For BSC($p$), if $n$ is odd, the exact two-codeword MLD error probability is
  $$P_{\mathrm{MLD}}^{\mathrm{exact}} = \sum_{k \ge (n+1)/2} \binom{n}{k} p^k (1-p)^{n-k} = O\big(p^{(n+1)/2}\big),$$
  whereas the bound gives
  $$\gamma^n = \big(2\sqrt{p(1-p)}\big)^n = O\big(p^{n/2}\big).$$
- But what if there are more than two codewords?

## The Union Bound

- Suppose $C = \{x_0, \ldots, x_{M-1}\}$, and suppose $x_0$ is transmitted. If $y$ is received, $x_i$ will not be preferred to $x_0$ by an MLD decoder if $p(y|x_i) < p(y|x_0)$. Thus if we define the event
  $$Y_i = \{y : p(y|x_i) \ge p(y|x_0)\},$$
  we know that
  $$\{x_i \text{ selected by MLD decoder}\} \subseteq Y_i.$$
- Thus
  $$\Pr\{\text{MLD error}\} \le \sum_{i=1}^{M-1} \Pr\{Y_i\}.$$

## The Union Bound

- But from the two-codeword result (using linearity, so distances from $x_0 = \mathbf{0}$ are weights),
  $$\Pr\{Y_i\} \le \gamma^{d_H(x_0, x_i)} = \gamma^{w_H(x_i)}.$$
- Therefore
  $$\sum_{i=1}^{M-1} \Pr\{Y_i\} \le \sum_{j=1}^{n} A_j \gamma^j.$$

## Summary

**Theorem.** If the $(n, k)$ binary linear code with weight enumerator $A(z) = \sum_{j=0}^{n} A_j z^j$ is used with ML decoding on a binary-input DMC with Bhattacharyya parameter $\gamma$, then
$$P_{\mathrm{MLD}} \le \sum_{j=1}^{n} A_j \gamma^j.$$

## The Union Bound in Action

- For the (7, 4) Hamming code,
  $$P_{\mathrm{MLD}} \le 7\gamma^3 + 7\gamma^4 + \gamma^7.$$
- For the BSC with $\gamma = 2\sqrt{p(1-p)}$, this gives $P_{\mathrm{MLD}} \le 56 p^{3/2} + O(p^2)$, whereas in fact
  $$P_{\mathrm{MLD}} = 21 p^2 + O(p^3) \quad \text{as } p \to 0.$$

## Weight Analysis of Convolutional Codes

- What is the weight enumerator of a convolutional code?

[Figure: state/trellis diagram of a convolutional encoder; the branch labels are not recoverable from this preview.]

## Weight Analysis of Convolutional Codes

- How many codepaths from A back to A of weight $w$ are there?

[Figure: four-state diagram with states A, B, C, D and branch weights.]

## Weight Analysis of Convolutional Codes

- We had better not count the loop of weight 0 at A.

[Figure: the same four-state diagram.]

## A Useful Trick

- We had better not count the loop of weight 0 at A.

[Figure: the four-state diagram with the state A duplicated.]

## Let's Count Them

- Here we
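The union-bound calculation for the (7, 4) Hamming code can likewise be sketched numerically. Because the Hamming code is perfect (MLD corrects exactly the single-error patterns), its exact error probability on BSC($p$) has a closed form to compare against. The dictionary representation of the weight enumerator is my own convenience, not the lecture's notation:

```python
import math

def gamma_bsc(p):
    """Bhattacharyya parameter of BSC(p): 2*sqrt(p*(1-p))."""
    return 2 * math.sqrt(p * (1 - p))

def union_bound(weight_enum, gamma):
    """P_MLD <= sum_{j>=1} A_j * gamma^j, with the weight
    enumerator given as a dict {weight j: count A_j}."""
    return sum(a * gamma**j for j, a in weight_enum.items() if j > 0)

# A(z) = 1 + 7z^3 + 7z^4 + z^7 for the (7, 4) Hamming code
hamming74 = {0: 1, 3: 7, 4: 7, 7: 1}

p = 0.01
bound = union_bound(hamming74, gamma_bsc(p))
# Perfect single-error-correcting code: MLD fails iff 2 or more bit flips.
exact = 1 - (1 - p)**7 - 7 * p * (1 - p)**6
print(exact, bound)
```

At $p = 0.01$ the bound comes out around 0.066 while the exact probability is around 0.002, consistent with the slide's point that the bound's leading term $56 p^{3/2}$ overstates the true $21 p^2$ behavior.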

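The path-counting question on the final slides can be made concrete with a short search over a state diagram. The branch weights below are an assumption on my part: they correspond to the classic rate-1/2, memory-2 convolutional encoder (generators $1 + D^2$ and $1 + D + D^2$) that the figure appears to show, with the all-zero state split into a start copy `A` and an end copy `A*` so that the weight-0 self-loop at A is never counted:

```python
# Assumed state diagram of a rate-1/2, memory-2 encoder (hypothetical
# reconstruction of the lecture's figure). Each entry maps
# state -> list of (next_state, branch_output_weight).
# 'A' is the start copy of the all-zero state, 'A*' the end copy;
# the weight-0 self-loop at the all-zero state is deliberately absent.
BRANCHES = {
    'A':  [('B', 2)],             # input 1, output 11
    'B':  [('C', 1), ('D', 1)],
    'C':  [('A*', 2), ('B', 0)],
    'D':  [('C', 1), ('D', 1)],
    'A*': [],
}

def count_codepaths(max_weight):
    """Count codepaths from A back to A (i.e. A to A*) by total
    output weight, via depth-first search pruned at max_weight.
    Terminates because the diagram has no zero-weight cycles."""
    counts = {}
    def dfs(state, weight):
        if weight > max_weight:
            return                      # prune: weight only grows
        if state == 'A*':
            counts[weight] = counts.get(weight, 0) + 1
            return
        for nxt, w in BRANCHES[state]:
            dfs(nxt, weight + w)
    dfs('A', 0)
    return counts

print(count_codepaths(7))   # {5: 1, 6: 2, 7: 4} for this diagram
```

Under this assumed diagram there is one codepath of weight 5, two of weight 6, and four of weight 7, matching the transfer function $z^5/(1 - 2z)$ that the transfer-matrix method (the outline's last topic) yields for this encoder.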