UIUC ECE 461 - Optimum Reception in AWGN
ECE 461 Fall 2006

Optimum Reception in AWGN

◦ Restricting to the case of memoryless modulation with no ISI (ideal AWGN channel), we can focus on one symbol interval $[0, T_s]$ without loss of optimality. We will also assume perfect synchronization at the receiver to begin the analysis. The received signal model in AWGN is then
$$r(t) = s(t) + w(t), \qquad 0 \le t \le T_s$$
The signal $s(t) \in \{s_1(t), \ldots, s_M(t)\}$, and the goal of the receiver is to determine which symbol $m$ (equivalently, which signal $s_m(t)$) was sent on the channel. Without the additive noise, this is a trivial problem as long as the signals are different, i.e., $d_{km} \ne 0$ for $k \ne m$. What do we do in the presence of noise? As we saw in class, the optimum receiver can be split into two steps:

◦ Step 1: Demodulation. Projection of $r(t)$ onto basis functions $f_1(t), \ldots, f_N(t)$ of the signal space to form the vector of sufficient statistics:
$$\mathbf{R} = [R_1 \; R_2 \; \ldots \; R_N]^\top, \qquad R_n = \langle r(t), f_n(t) \rangle$$

◦ Step 2: Detection. Decide which symbol was sent based on $\mathbf{R}$.

◦ Why is it okay to split into these two steps? Principle of Irrelevance.
The sufficient statistics can be written as
$$R_n = \langle r(t), f_n(t) \rangle = \langle s(t), f_n(t) \rangle + \langle w(t), f_n(t) \rangle = s_n + w_n$$
where $s_n = \langle s(t), f_n(t) \rangle$ and $w_n = \langle w(t), f_n(t) \rangle$.
Note that $s(t)$ can be represented without error (why?) in terms of the coefficients $\{s_n\}$:
$$s(t) = \sum_{n=1}^{N} s_n f_n(t)$$
However, there is an error in the representation of the noise, since $w(t)$ may not belong to the signal space $\mathcal{S}$, i.e.,
$$w(t) = \sum_{n=1}^{N} w_n f_n(t) + w_0(t)$$
where $w_0(t)$ is the representation error. Thus
$$r(t) = \sum_{n=1}^{N} s_n f_n(t) + \sum_{n=1}^{N} w_n f_n(t) + w_0(t)$$
As we argued in class, $w_0(t)$ is irrelevant for decision-making since it contains no information about the signal that is transmitted. The first two terms in the above sum are sufficient for decision-making, and the sum of these two terms is equivalent to the vector $\mathbf{R}$ of sufficient statistics. Thus $\mathbf{R}$ is sufficient for decision-making about the signal, and the split of the receiver into the demodulation and detection stages is justified.
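The demodulation step can be sketched numerically. Below is a minimal NumPy illustration of projecting a received waveform onto an orthonormal basis to recover the sufficient statistics; the grid size, the particular quadrature basis, and the noise level are illustrative choices, not taken from the notes.

```python
import numpy as np

# Discretize one symbol interval [0, Ts] on a grid of K samples.
Ts, K = 1.0, 1000
t = np.linspace(0.0, Ts, K, endpoint=False)
dt = Ts / K

# Illustrative orthonormal basis (N = 2): quadrature sinusoids,
# normalized so that <f_n, f_n> = 1 on [0, Ts].
f1 = np.sqrt(2.0 / Ts) * np.cos(2 * np.pi * 5 * t / Ts)
f2 = np.sqrt(2.0 / Ts) * np.sin(2 * np.pi * 5 * t / Ts)
basis = np.stack([f1, f2])            # shape (N, K)

# Transmitted signal s(t) = s_1 f_1(t) + s_2 f_2(t).
s_coeffs = np.array([0.7, -1.2])
s = s_coeffs @ basis

# Received r(t) = s(t) + w(t), with white Gaussian noise on the grid.
rng = np.random.default_rng(0)
r = s + rng.normal(scale=0.5, size=K)

# Demodulation: R_n = <r, f_n>, approximated by a Riemann sum.
R = basis @ r * dt                    # vector of sufficient statistics

# R_n = s_n + w_n: the signal coefficients are recovered exactly;
# only the in-space component of the noise contaminates them.
print(R)
```

Note how the out-of-space noise component $w_0(t)$ simply never appears in `R`: the projection discards it automatically, which is the Principle of Irrelevance in action.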
◦ Correlation as Matched Filtering
By defining the filter with impulse response $h_n$ that satisfies $h_n(T_s - t) = f_n^*(t)$, we see that
$$R_n = \int_0^{T_s} r(t) f_n^*(t)\, dt = \int_0^{T_s} r(t) h_n(T_s - t)\, dt = \int_{-\infty}^{\infty} r(t) h_n(T_s - t)\, dt = (r * h_n)(T_s)$$
Thus the sufficient statistic $R_n$ can be obtained by passing $r(t)$ through a matched filter with impulse response $h_n(t)$ and sampling the output at time $T_s$.
Note that $h_n(t) = f_n^*(T_s - t)$, and thus $h_n$ is a causal filter with impulse response limited to $[0, T_s]$.

◦ SNR Maximization and Matched Filtering
We showed in class, using the Cauchy-Schwarz inequality, that the matched filter also maximizes the signal-to-noise ratio (SNR) at the output of the receiver. This is another justification for the demodulation step.

◦ In summary, the vector of sufficient statistics $\mathbf{R}$ can be obtained through a bank of correlators $\{f_n\}$, or equivalently through a bank of matched filters $\{h_n\}$ followed by sampling at the symbol period:
$$\mathbf{R} = \mathbf{s} + \mathbf{W}$$
where $\mathbf{W} \sim \mathcal{CN}(\mathbf{0}, N_0 \mathbf{I})$, i.e., the $\{W_n\}$ are i.i.d. $\mathcal{CN}(0, N_0)$ random variables.

◦ We now study the detection step in more detail.

Optimum Detection

◦ The information we can use to distinguish between the various symbols is statistical, and is contained in the conditional distributions of $\mathbf{R}$ given that each symbol is sent. These conditional distributions are also called likelihood functions.

◦ Likelihood function: the conditional distribution of $\mathbf{R}$, given that $\mathbf{s} = \mathbf{s}_m$, is denoted by $p_m(\mathbf{r})$.

◦ The goal is to choose $\hat{m}$, our estimate of the symbol that was sent, based on the likelihood functions $p_m(\mathbf{r})$, $m = 1, 2, \ldots, M$. What criterion should we use for choosing $\hat{m}$?

◦ (Average) probability of symbol error:
$$P_e = P\{\hat{m}(\mathbf{R}) \ne m_{\text{sent}}\}$$
Assuming that the prior probability of seeing symbol $m$ is $\nu_m$, we can write
$$P_e = \sum_{m=1}^{M} \nu_m P_{e,m}, \qquad \text{where } P_{e,m} = P(\{\hat{m}(\mathbf{R}) \ne m\} \mid \{m \text{ sent}\})$$
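The correlator/matched-filter equivalence above is easy to verify numerically. The sketch below uses a real-valued basis function on a sample grid (so $f_n^* = f_n$ and time reversal is an array flip); the grid size and waveforms are illustrative assumptions, not from the notes.

```python
import numpy as np

# Grid over one symbol interval [0, Ts].
Ts, K = 1.0, 512
t = np.linspace(0.0, Ts, K, endpoint=False)
dt = Ts / K

# One real basis function f(t) and a noisy received waveform r(t).
f = np.sqrt(2.0 / Ts) * np.cos(2 * np.pi * 3 * t / Ts)
rng = np.random.default_rng(1)
r = 0.9 * f + rng.normal(scale=0.3, size=K)

# Correlator: R = <r, f>, approximated by a Riemann sum.
R_corr = np.sum(r * f) * dt

# Matched filter: h(t) = f(Ts - t) is time reversal on the grid;
# the filter output (r * h)(t) is sampled at t = Ts.
h = f[::-1]
y = np.convolve(r, h) * dt            # full discrete convolution
R_mf = y[K - 1]                       # index of full overlap, i.e. t = Ts

print(R_corr, R_mf)                   # the two statistics coincide
```

Both branches compute the same inner product, which is exactly the claim that a correlator bank and a matched-filter bank sampled at $T_s$ are interchangeable implementations of the demodulator.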
◦ Minimum Probability of Error (MPE) Detection
As we showed in class, the MPE detector chooses $\hat{m}$ as
$$\hat{m}_{\text{MPE}}(\mathbf{r}) = \arg\max_m \nu_m p_m(\mathbf{r})$$
We also showed, using Bayes' rule, that the MPE detector also maximizes the a posteriori probability that symbol $m$ was sent given that $\mathbf{r}$ is received, i.e.,
$$\hat{m}_{\text{MPE}}(\mathbf{r}) = \hat{m}_{\text{MAP}}(\mathbf{r})$$
Finally, if the symbols are equally likely, i.e., $\nu_m = \frac{1}{M}$, $m = 1, 2, \ldots, M$, then
$$\hat{m}_{\text{MPE}}(\mathbf{r}) = \arg\max_m \nu_m p_m(\mathbf{r}) = \arg\max_m p_m(\mathbf{r}) = \hat{m}_{\text{ML}}(\mathbf{r})$$
where $\hat{m}_{\text{ML}}$ is the maximum likelihood (ML) decision rule. Note that we typically assume that the symbols are equally likely.

© V.V. Veeravalli, 2006
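The MPE/ML relationship can be sketched concretely. The toy example below uses a hypothetical 4-PAM constellation in a one-dimensional signal space with real Gaussian noise (a simplification of the complex-noise model above); the constellation points, $N_0$, and priors are illustrative assumptions, not from the notes.

```python
import numpy as np

# Hypothetical 4-PAM constellation in a 1-D signal space (N = 1).
signals = np.array([-3.0, -1.0, 1.0, 3.0])    # s_m, m indexed 0..3
N0 = 0.5                                      # noise variance per component
priors = np.full(4, 0.25)                     # nu_m (equally likely here)

def likelihood(r, s_m):
    # p_m(r): Gaussian density of r given s_m, variance N0 (real-valued toy model).
    return np.exp(-(r - s_m) ** 2 / (2 * N0)) / np.sqrt(2 * np.pi * N0)

def mpe_detect(r):
    # MPE rule: argmax over m of nu_m * p_m(r).
    return np.argmax(priors * likelihood(r, signals))

def ml_detect(r):
    # ML rule: argmax of p_m(r); for Gaussian noise this reduces to
    # the minimum-distance rule, argmin over m of |r - s_m|.
    return np.argmin(np.abs(r - signals))

# With equal priors, MPE and ML agree for every received value.
for r in np.linspace(-4.0, 4.0, 81):
    assert mpe_detect(r) == ml_detect(r)
print(mpe_detect(0.8))   # r = 0.8 is closest to +1, i.e. index 2
```

The loop checks the identity $\hat{m}_{\text{MPE}} = \hat{m}_{\text{ML}}$ under equal priors; with unequal `priors`, the two rules would differ near decision boundaries, which is exactly the bias a MAP detector introduces toward more probable symbols.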

