EE/Ma 127c Error-Correcting Codes
draft of May 11, 2001
Ling
[email protected]

Project 1 – BCJR algorithm

This project report is organized into 3 parts: 1. some details of the implementation of the BCJR algorithm; 2. the results of the computer tests (the main result is in Figure 1); 3. other interesting things, such as the performance of the algorithm when $\sigma^2$ is not known precisely.

Let $u(t)$, $s(t)$, $x(t)$, and $y(t)$ be the information bit, state vector, codebits, and noisy codebits at time $t$. Assume that the total time is $T = k + 4$ ($k = 1024$ is the number of information bits and 4 is the number of dummy bits). Let $U$, $X$ and $Y$ be the information, codeword, and noisy codeword from time $0$ to $T - 1$.

1. Encoder. The generator matrix is
\[
G(D) = \left[ 1,\ \frac{G_1(D)}{G_2(D)} \right] = \left[ 1,\ \frac{1 + D^4}{1 + D + D^2 + D^3 + D^4} \right]. \tag{1}
\]
Then (refer to [Berrou et al., 1993, Fig. 1(b)])
\[
s(t+1) = s(t)A + u(t)B, \qquad x(t) = s(t)C + u(t)D, \tag{2}
\]
where
\[
A = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{pmatrix}, \quad
B = \begin{pmatrix} 1 & 0 & 0 & 0 \end{pmatrix}, \quad
C = \begin{pmatrix} 0 & 1 \\ 0 & 1 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad
D = \begin{pmatrix} 1 & 1 \end{pmatrix}.
\]
We can verify that $G(D)$ is as given in (1). A form more convenient for programming is
\[
s(t+1) = \bigl( s_1(t) + s_2(t) + s_3(t) + s_4(t) + u(t),\ s_1(t),\ s_2(t),\ s_3(t) \bigr),
\]
\[
x(t) = \bigl( u(t),\ s_1(t) + s_2(t) + s_3(t) + u(t) \bigr).
\]
After the encoding, $x_i(t)$ is transformed to $\{-1, +1\}$ by mapping $0 \mapsto +1$ and $1 \mapsto -1$.

2. The trellis. There are $2^4 = 16$ states in every stage of the trellis graph. From (2), two edges exit from each state; two edges also enter each state, since $A$ is invertible.

We use $s \xrightarrow{u/x} s'$ to denote an edge in the trellis graph, starting from state $s$, accompanied by information bit $u$ and codebits $x$, and ending at state $s'$.

3. The weights. The evidence is $E = Y = y(0) \cdots y(T-1)$. What we want to calculate is
\[
p(u(t) = a \mid E) = \alpha \sum_{U : u(t) = a} p(U)\, p(Y \mid X) = \alpha \sum_{U : u(t) = a} \prod_{i=0}^{T-1} p(u(i))\, p(y(i) \mid x(i)),
\]
where the notation $\alpha$ is the same as in [McEliece et al., 1998]; i.e., $\alpha = p(Y)^{-1}$. (Note that the channel is memoryless.) To apply the BCJR algorithm, we define the weight of an edge $e = s \xrightarrow{u/x} s'$ as
\[
w(e) = \pi(u)\, p(y \mid x),
\]
where $\pi(u)$ is the a priori probability that the information bit is $u$.

Since we will use the log-likelihood ratio (LLR), it is convenient to express the weight in log form. Let $\sigma^2$ be the Gaussian noise variance of the channel. Then
\[
\log w(e) = \log \pi(u) + \log p(y \mid x)
= \log \pi(u) - \frac{1}{2\sigma^2} \| y - x \|^2 - \log 2\pi\sigma^2
= \log \pi(u) - \frac{1}{2\sigma^2} \bigl( y_1^2 + x_1^2 - 2 x_1 y_1 + y_2^2 + x_2^2 - 2 x_2 y_2 \bigr) - \log 2\pi\sigma^2.
\]
Since $x_i \in \{-1, +1\}$, $x_i^2$ is a constant. We can omit constants that are the same for $u = 0$ and $u = 1$. Thus we can define the logarithmic weight for $e \in E_{t,t+1}$ as
\[
\tilde{w}(e) = \log \pi(u) + \frac{x_1 y_1(t) + x_2 y_2(t)}{\sigma^2}. \tag{3}
\]
If $\pi(0) = \pi(1) = \frac{1}{2}$, we can further simplify (3) to
\[
\tilde{w}(e) = \frac{x_1 y_1(t) + x_2 y_2(t)}{\sigma^2}. \tag{4}
\]

4. $\alpha$ and $\beta$. The matrix $W_i$ used in the forward-backward algorithm is very sparse. To reduce the computational complexity, we should use (see [McEliece, 2001])
\[
\alpha_t(s') = \sum_{u = 0,1} \alpha_{t-1}(s_u)\, w(e_u), \qquad e_u : s_u \xrightarrow{u} s' \in E_{t-1,t},
\]
\[
\beta_t(s) = \sum_{u = 0,1} \beta_{t+1}(s'_u)\, w(e_u), \qquad e_u : s \xrightarrow{u} s'_u \in E_{t,t+1}.
\]
The logarithmic version is
\[
\tilde{\alpha}_t(s') = \log \left( e^{\tilde{\alpha}_{t-1}(s_0) + \tilde{w}(e_0)} + e^{\tilde{\alpha}_{t-1}(s_1) + \tilde{w}(e_1)} \right), \tag{5}
\]
\[
\tilde{\beta}_t(s) = \log \left( e^{\tilde{\beta}_{t+1}(s'_0) + \tilde{w}(e_0)} + e^{\tilde{\beta}_{t+1}(s'_1) + \tilde{w}(e_1)} \right). \tag{6}
\]
The initial values of $\tilde{\alpha}_0(s)$ and $\tilde{\beta}_T(s)$ are defined as
\[
\tilde{\alpha}_0(s) = \tilde{\beta}_T(s) = \begin{cases} 0, & s = 0; \\ -\infty, & \text{otherwise}. \end{cases}
\]

5. Log-likelihood ratio. We have (assume $e = s \to s'$)
\[
\log p(u(t) = 0 \mid E) = \log \sum_{e \in E^{(0)}_{t-1,t}} \alpha_{t-1}(s)\, w(e)\, \beta_t(s') = \log \sum_{e \in E^{(0)}_{t-1,t}} e^{\tilde{\alpha}_{t-1}(s) + \tilde{w}(e) + \tilde{\beta}_t(s')},
\]
\[
\log p(u(t) = 1 \mid E) = \log \sum_{e \in E^{(1)}_{t-1,t}} \alpha_{t-1}(s)\, w(e)\, \beta_t(s') = \log \sum_{e \in E^{(1)}_{t-1,t}} e^{\tilde{\alpha}_{t-1}(s) + \tilde{w}(e) + \tilde{\beta}_t(s')},
\]
and thus
\[
\mathrm{LLR}_t = \log \frac{p(u(t) = 0 \mid E)}{p(u(t) = 1 \mid E)} = \log p(u(t) = 0 \mid E) - \log p(u(t) = 1 \mid E).
\]

6. Approximation of $\log(e^x + e^y)$. We are asked to use the logarithmic weights ((3) or (4)) to calculate the LLR. This poses a problem with (5) and (6): how to calculate $\log(e^x + e^y)$ without doing log or exp. From Homework 2.2,
\[
\log(e^x + e^y) = \max\{x, y\} + f(|x - y|),
\]
where $f(\Delta) = \log(1 + e^{-\Delta})$. Here we approximate $f(\Delta)$. I tried:

• $f \equiv 0$. That is, use only $\max\{x, y\}$ to approximate $\log(e^x + e^y)$.
• 2-bit approximations. I tried the two methods in my solution to Homework 2.2.

It turns out that the approximation in my solution 2.2(b) is the best among these three, and $f \equiv 0$ is almost as good as the approximation in solution 2.2(a).
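To make items 1–6 concrete, here is a minimal Python sketch of the whole chain: the 16-state trellis of the encoder in item 1, the logarithmic edge weights (4) (uniform prior), the log-domain recursions (5)–(6), and the LLR of item 5, with $\log(e^x + e^y)$ computed as $\max\{x, y\} + f(|x - y|)$. This is my illustration, not code from the original report: the names (next_state, maxstar, bcjr_llr) are invented, only the exact $f$ and the $f \equiv 0$ variant are included, and the 2-bit tables from Homework 2.2 are not.

```python
# Hypothetical sketch of the log-domain BCJR recursions described above.
# Names (next_state, maxstar, bcjr_llr, ...) are illustrative, not from the report.
import math

def next_state(s, u):
    """Trellis step for the (1, (1+D^4)/(1+D+D^2+D^3+D^4)) encoder of item 1.
    s is a state tuple (s1, s2, s3, s4); returns (s', (x1, x2)) with x in GF(2)."""
    s1, s2, s3, s4 = s
    sp = ((s1 + s2 + s3 + s4 + u) % 2, s1, s2, s3)
    x = (u, (s1 + s2 + s3 + u) % 2)
    return sp, x

def maxstar(a, b, exact=True):
    """log(e^a + e^b) = max(a, b) + f(|a - b|); exact=False is the f = 0 variant."""
    m = max(a, b)
    if m == -math.inf:
        return -math.inf
    return m + (math.log1p(math.exp(-abs(a - b))) if exact else 0.0)

def bcjr_llr(y, sigma2, exact=True):
    """y: list of T received pairs (y1, y2); returns LLR_t for t = 0..T-1.
    Edges are indexed as E_{t,t+1}, a shift of the E_{t-1,t} notation above."""
    states = [(a, b, c, d) for a in (0, 1) for b in (0, 1)
                           for c in (0, 1) for d in (0, 1)]
    idx = {s: i for i, s in enumerate(states)}
    T = len(y)

    def wt(t, s, u):                       # logarithmic edge weight (4)
        sp, x = next_state(s, u)
        bx = [1.0 - 2.0 * xi for xi in x]  # map 0 -> +1, 1 -> -1
        return idx[sp], (bx[0] * y[t][0] + bx[1] * y[t][1]) / sigma2

    NEG = -math.inf
    alpha = [[NEG] * 16 for _ in range(T + 1)]
    beta = [[NEG] * 16 for _ in range(T + 1)]
    alpha[0][0] = beta[T][0] = 0.0         # start/end in the all-zero state

    for t in range(T):                     # forward pass (5)
        for i, s in enumerate(states):
            for u in (0, 1):
                j, w = wt(t, s, u)
                alpha[t + 1][j] = maxstar(alpha[t + 1][j], alpha[t][i] + w, exact)
    for t in range(T - 1, -1, -1):         # backward pass (6)
        for i, s in enumerate(states):
            for u in (0, 1):
                j, w = wt(t, s, u)
                beta[t][i] = maxstar(beta[t][i], beta[t + 1][j] + w, exact)

    llr = []
    for t in range(T):                     # log p(u=0|E) - log p(u=1|E)
        num = {0: NEG, 1: NEG}
        for i, s in enumerate(states):
            for u in (0, 1):
                j, w = wt(t, s, u)
                num[u] = maxstar(num[u], alpha[t][i] + w + beta[t + 1][j], exact)
        llr.append(num[0] - num[1])
    return llr
```

With exact=False the recursions degrade to the $f \equiv 0$ (max-only) variant of item 6; a 2-bit table approximation of $f$ would slot into maxstar in place of the log1p call.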
7. Histogram and normalization. We are asked to plot a histogram of $\{\mathrm{LLR}_t\}_{t=0}^{k-1}$, or more precisely, of the adjusted LLR, i.e., $\{u(t) \cdot \mathrm{LLR}_t\}_{t=0}^{k-1}$. We do not care about the LLR for the dummy bits.

If we divide the range of $\mathrm{LLR}_t$ into $M$ bins and count the number of $\mathrm{LLR}_t$'s in each bin, we can plot the histogram. However, we can do better: we can turn the histogram into a probability density by normalizing before plotting (a code sketch of this normalization, together with the BER estimate of item 9, appears at the end of this report).

Assume all the bins have the same width $w$. Denote the number of $\mathrm{LLR}_t$'s in bin $i$ by $c_i$. Then the probability density of $\mathrm{LLR}_t$ in bin $i$ is
\[
p_i = \frac{c_i}{w \cdot k},
\]
since $p_i$ is proportional to $c_i$ and the 'integral' of $p_i$ satisfies $\sum_i w\, p_i = 1$. For $r$ runs (making the histogram of the LLR more accurate),
\[
p_i = \frac{c_i}{w \cdot k \cdot r}.
\]

8. Random number generator. From [Press et al., 1992, Section 7.1], a linear congruential method for generating random numbers is not free of sequential correlation on successive calls. If the period is as small as 32768, the number of lines on which pairs of points lie in 2D space will be no greater than $\sqrt{32768}$, or about 181. In this project we need about $10^7$ ($\approx 2T \times 5000$ runs) random numbers, so I used the random number generator ran1() from [Press et al., 1992, Section 7.1]. However, my results from the computer tests did not show much difference.

9. Basic results. The BCJR decoding algorithm was run 5000 times to determine the distribution of the adjusted LLR and the average BER (bit error rate). These tests were performed for several different values of $E_b/N_0$. The results are shown in Figure 1.

If we conjecture that the distribution of the adjusted LLR is Gaussian $N(\ell, \sigma^2)$, we can calculate the BER from the Gaussian distribution as
\[
\mathrm{BER}_t = \int_{-\infty}^{0} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\ell)^2}{2\sigma^2}}\, dx = \frac{1}{\sqrt{\pi}} \int_{\ell/\sqrt{2\sigma^2}}^{\infty} e^{-t^2}\, dt = \frac{1}{2}\, \mathrm{erfc}\!\left( \frac{\ell}{\sqrt{2\sigma^2}} \right), \tag{7}
\]
where $\mathrm{erfc}(t) = \frac{2}{\sqrt{\pi}} \int_t^{\infty} e^{-t^2}\, dt$ can be calculated by erfc() in Matlab or Erfc[] in Mathematica. The mean $\ell$ and variance $\sigma^2$ can be estimated statistically from the adjusted LLR data. Although Figure 1(a) shows a good match between the real distribution of the LLR and the Gaussian approximation, Figure 1(b) implies that the two are not the same.

Below I list the average number of errors (in $k$ bits) over 5000 runs of the decoding, and $k \cdot \mathrm{BER}_t$, the number
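The code sketch promised in items 7 and 9 follows. It is a sketch under stated assumptions, not the report's code: it assumes the adjusted LLRs from all $r$ runs are concatenated into one flat list, and the names (normalized_histogram, gaussian_ber) are illustrative.

```python
# Hypothetical sketch of the histogram normalization (item 7) and the
# Gaussian-fit BER estimate (7) from item 9; names are illustrative.
import math

def normalized_histogram(samples, num_bins, runs):
    """Bin the adjusted LLRs and normalize counts to a probability density:
    p_i = c_i / (w * k * r), so that sum_i w * p_i = 1."""
    lo, hi = min(samples), max(samples)
    w = (hi - lo) / num_bins                 # common bin width
    k = len(samples) // runs                 # information bits per run
    counts = [0] * num_bins
    for s in samples:
        i = min(int((s - lo) / w), num_bins - 1)  # clamp s == hi into last bin
        counts[i] += 1
    density = [c / (w * k * runs) for c in counts]
    centers = [lo + (i + 0.5) * w for i in range(num_bins)]
    return centers, density

def gaussian_ber(samples):
    """Fit N(l, s2) to the adjusted LLRs and evaluate (7):
    BER = 0.5 * erfc(l / sqrt(2 * s2))."""
    n = len(samples)
    l = sum(samples) / n
    s2 = sum((x - l) ** 2 for x in samples) / n
    return 0.5 * math.erfc(l / math.sqrt(2 * s2))
```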

