CALTECH EE 127 - Lecture notes


EE/Ma 127b Lecture 7
April 11, 2007
Copyright © 2007 by R. J. McEliece

Outline
• Branch Metrics
• Survivor Truncation

A Second Application: A Hidden Markov Chain

Suppose our Markovian particle is hidden from view, but as it follows a random path
$$P = e_1 e_2 \cdots e_N,$$
where $\mathrm{init}\,e_1 = A$, $\mathrm{fin}\,e_i = \mathrm{init}\,e_{i+1}$ for $i = 1, \ldots, N-1$, and $\mathrm{fin}\,e_N = B$, it produces "evidence"
$$Y = Y_1 Y_2 \cdots Y_N,$$
where $Y_i$ is a random variable which is a "noisy" version of the edge $e_i \in E_{i-1,i}$.

What is the Evidence?

The evidence is stochastically described by a series of transition probability matrices $p_i(y \mid e)$, $i = 1, \ldots, N$:
$$\Pr\{Y_i = y \mid e_i = e\} = p_i(y \mid e).$$
We call $(y^*_1, \ldots, y^*_N)$ the "evidence," denoted $E$.

What Can We Infer about P from E?

By Bayes's rule,
$$\Pr\{P \mid E\} = \frac{\Pr\{P\}\,\Pr\{E \mid P\}}{\Pr\{E\}} = \text{a posteriori probability of } P.$$
$$P^* = \arg\max_P \Pr\{P \mid E\} \quad\Longrightarrow\quad \text{Viterbi's algorithm ("max-product" form).}$$

What Else Can We Infer about P from E?

• Did P pass through v?
$$\Pr\{v \in P \mid E\} = \sum_{P:\,A \to v \to B} \Pr\{P \mid E\} \;\propto\; \sum_{P:\,A \to v \to B} \Pr\{P\}\,\Pr\{E \mid P\}.$$
• Did P pass through e?
$$\Pr\{e \in P \mid E\} = \sum_{P:\,A \to e \to B} \Pr\{P \mid E\} \;\propto\; \sum_{P:\,A \to e \to B} \Pr\{P\}\,\Pr\{E \mid P\}.$$
• In both cases we need to know $\Pr\{P\}$ (the priors) and $\Pr\{E \mid P\}$ (the likelihoods) for every path P from A to B:
$$\Pr\{P\} = \prod_{i=1}^{N} \pi(e_i) \quad \text{(prior)},$$
$$\Pr\{E \mid P\} = \prod_{i=1}^{N} p_i(y^*_i \mid e_i) \quad \text{(likelihood)}.$$

And So . . .

If we define the edge weights for $e \in E_{i-1,i}$ as
$$\gamma(e) = \pi(e)\,p_i(y^*_i \mid e),$$
then
$$\gamma(P) = \prod_{i=1}^{N} \gamma(e_i).$$

This is a Job for the FBA!

$$\Pr\{v \in P \mid E\} \propto \sum_{P:\,A \to v \to B} \gamma(P) = \alpha(v)\,\beta(v),$$
$$\Pr\{e \in P \mid E\} \propto \sum_{P:\,A \to e \to B} \gamma(P) = \alpha(\mathrm{init}\,e)\,\gamma(e)\,\beta(\mathrm{fin}\,e).$$

A Famous Example

State Transitions

Let the state transition table be as follows:

  Π =      0     1     2
      0   1/2   1/2    0
      1   1/2    0    1/2
      2   1/2   1/2    0

Transition Evidence

• Let the transition evidence be as follows (in this example, $p(y \mid e)$ depends only on $\mathrm{init}\,e$):

             0     1     2     ?
      0*    1/2   1/4   1/8   1/8
      1*    1/8   1/2   1/4   1/8
      2*    1/4   1/8   1/2   1/8

The Gammas

• $\Gamma[a]_{i,j}$ = the $i \to j$ edge weight if $a$ is observed (zero wherever Π has no edge):

  Γ[0] =    0      1      2
      0    1/4    1/4     0
      1    1/16    0     1/16
      2    1/8    1/8     0

  Γ[1] =    0      1      2
      0    1/8    1/8     0
      1    1/4     0     1/4
      2    1/16   1/16    0

  Γ[2] =    0      1      2
      0    1/16   1/16    0
      1    1/8     0     1/8
      2    1/4    1/4     0

  Γ[?] =    0      1      2
      0    1/16   1/16    0
      1    1/16    0     1/16
      2    1/16   1/16    0

Example

[Figure: worked trellis for the example; not recoverable from the text.]

APP Decoding of Convolutional Codes

$$\Pr\{u_i = 0\} \propto \sum_{e \in E^0_{i-1,i}} \alpha_{i-1}(\mathrm{init}(e))\,w(e)\,\beta_i(\mathrm{fin}(e)),$$
$$\Pr\{u_i = 1\} \propto \sum_{e \in E^1_{i-1,i}} \alpha_{i-1}(\mathrm{init}(e))\,w(e)\,\beta_i(\mathrm{fin}(e)).$$

Likelihood Ratios

$$\mathrm{LR}_i = \frac{\displaystyle\sum_{e \in E^0_{i-1,i}} \alpha_{i-1}(\mathrm{init}(e))\,w(e)\,\beta_i(\mathrm{fin}(e))}{\displaystyle\sum_{e \in E^1_{i-1,i}} \alpha_{i-1}(\mathrm{init}(e))\,w(e)\,\beta_i(\mathrm{fin}(e))}.$$

[Figure: trellis with branch labels (000, 110, 110, ...) illustrating the bit-0 and bit-1 edge sets.]

From Sum-Product to max*-sum

Lemma. $\log(\alpha + \beta) = \max^*(\log\alpha, \log\beta)$, where
$$\max^*(x, y) = \max(x, y) + f(|x - y|), \qquad f(x) = \log(1 + e^{-x}).$$

[Figure: plot of $f(x) = \log(1 + e^{-x})$ on $0 \le x \le 4$, starting at $f(0) = \log 2$ and decaying toward 0.]

Proof: Assuming $\alpha \ge \beta$, with $x = \log\alpha$ and $y = \log\beta$:
$$\log(\alpha + \beta) = \log(e^{\log\alpha} + e^{\log\beta}) = \log(e^x + e^y) = \log\!\big(e^x(1 + e^{-(x-y)})\big) = x + \log(1 + e^{-(x-y)}) = \max(x, y) + f(x - y) = \max^*(\log\alpha, \log\beta).$$

$f(x)$ is well approximated with a look-up table.

The FBA in max*-sum form

$$\alpha_i(u) = \sum_{\substack{e \in E_{i-1,i} \\ \mathrm{fin}(e) = u}} \alpha_{i-1}(\mathrm{init}(e)) \cdot w(e), \qquad \beta_i(u) = \sum_{\substack{e \in E_{i,i+1} \\ \mathrm{init}(e) = u}} w(e) \cdot \beta_{i+1}(\mathrm{fin}(e)).$$
$$A_i(u) = \max^*_{\substack{e \in E_{i-1,i} \\ \mathrm{fin}(e) = u}} \big(A_{i-1}(\mathrm{init}(e)) + W(e)\big), \qquad B_i(u) = \max^*_{\substack{e \in E_{i,i+1} \\ \mathrm{init}(e) = u}} \big(W(e) + B_{i+1}(\mathrm{fin}(e))\big).$$

The BCJR Algorithm in max*-sum form

$$\mathrm{LLR}_i = \max^*_{e \in E^0_{i-1,i}} \big(A_{i-1}(\mathrm{init}(e)) + W(e) + B_i(\mathrm{fin}(e))\big) - \max^*_{e \in E^1_{i-1,i}} \big(A_{i-1}(\mathrm{init}(e)) + W(e) + B_i(\mathrm{fin}(e))\big).$$
• Here $A = \log\alpha$, $B = \log\beta$, $W = \log w$.

Implementation of max*

[Figure: block diagram computing $\max^*(x, y)$ as $\max(x, y)$ plus a look-up table applied to $|x - y|$.]

• If the LUT is omitted, max* reduces to max, and the algorithm reduces to Viterbi's algorithm.
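To make the FBA slides concrete, here is a minimal Python sketch that runs the forward-backward recursions on the "Famous Example" above. The observation sequence y_star is invented for illustration, and the sketch assumes the particle starts in state 0 and may end in any state; neither assumption comes from the notes.

```python
import numpy as np

# Edge-weight matrices Gamma[y][i, j] = Pi[i, j] * p(y | i) from the
# "Famous Example" slides; entries are 0 where Pi has no edge.
Gamma = {
    0:   np.array([[1/4,  1/4,  0], [1/16, 0, 1/16], [1/8,  1/8,  0]]),
    1:   np.array([[1/8,  1/8,  0], [1/4,  0, 1/4 ], [1/16, 1/16, 0]]),
    2:   np.array([[1/16, 1/16, 0], [1/8,  0, 1/8 ], [1/4,  1/4,  0]]),
    '?': np.array([[1/16, 1/16, 0], [1/16, 0, 1/16], [1/16, 1/16, 0]]),
}

# Hypothetical evidence sequence (not from the notes).
y_star = [0, 1, '?', 2]
N = len(y_star)

# Forward pass: alpha_i(v) = sum of gamma(P) over partial paths A -> v.
# Assumption: start state A = 0.
alpha = [np.array([1.0, 0.0, 0.0])]
for y in y_star:
    alpha.append(alpha[-1] @ Gamma[y])

# Backward pass: beta_i(v) = sum of gamma(P) over partial paths v -> B.
# Assumption: any final state is allowed (beta_N = all ones).
beta = [None] * (N + 1)
beta[N] = np.ones(3)
for i in range(N - 1, -1, -1):
    beta[i] = Gamma[y_star[i]] @ beta[i + 1]

# Pr{v in P at time i | E} is proportional to alpha_i(v) * beta_i(v).
for i in range(N + 1):
    post = alpha[i] * beta[i]
    print(f"time {i}: Pr(state | E) =", post / post.sum())
```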

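The max* lemma and its look-up-table approximation can also be checked directly. A small sketch follows; the table's step size and range are arbitrary choices for illustration, since the notes only say that f is well approximated by a LUT.

```python
import math

def max_star(x, y):
    # Exact Jacobian logarithm:
    # log(e^x + e^y) = max(x, y) + f(|x - y|), with f(t) = log(1 + e^-t).
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

# LUT approximation of f on [0, 4); beyond that the correction term
# (under 0.02) is dropped. Step size 0.25 is an assumption of this sketch.
STEP = 0.25
LUT = [math.log1p(math.exp(-k * STEP)) for k in range(int(4 / STEP))]

def max_star_lut(x, y):
    idx = int(abs(x - y) / STEP)
    return max(x, y) + (LUT[idx] if idx < len(LUT) else 0.0)

print(max_star(0.0, 0.0), math.log(2))     # f(0) = log 2, as in the plot
print(max_star(1.0, 0.3), max_star_lut(1.0, 0.3))
```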

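Finally, the BCJR recursions in max*-sum form translate almost line-for-line into code. The sketch below invents its own trellis representation, a list of (init, fin, input bit, W) edges per trellis section, and assumes the trellis starts and ends in state 0; the slides do not specify either detail.

```python
import math

NEG_INF = float('-inf')

def max_star(x, y):
    # max*(x, y) = max(x, y) + log(1 + e^-|x - y|); max*(x, -inf) = x.
    if x == NEG_INF:
        return y
    if y == NEG_INF:
        return x
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def bcjr_llrs(sections, n_states):
    """sections[i] lists the edges (init, fin, input_bit, W) of E_{i,i+1},
    with W(e) = log w(e). Assumes the trellis starts and ends in state 0."""
    N = len(sections)
    # Forward: A_i(u) = max* over edges into u of (A_{i-1}(init) + W).
    A = [[NEG_INF] * n_states for _ in range(N + 1)]
    A[0][0] = 0.0
    for i, edges in enumerate(sections):
        for (s, t, b, W) in edges:
            A[i + 1][t] = max_star(A[i + 1][t], A[i][s] + W)
    # Backward: B_i(u) = max* over edges out of u of (W + B_{i+1}(fin)).
    B = [[NEG_INF] * n_states for _ in range(N + 1)]
    B[N][0] = 0.0
    for i in range(N - 1, -1, -1):
        for (s, t, b, W) in sections[i]:
            B[i][s] = max_star(B[i][s], W + B[i + 1][t])
    # LLR_i = max* over bit-0 edges minus max* over bit-1 edges.
    llrs = []
    for i, edges in enumerate(sections):
        m = {0: NEG_INF, 1: NEG_INF}
        for (s, t, b, W) in edges:
            m[b] = max_star(m[b], A[i][s] + W + B[i + 1][t])
        llrs.append(m[0] - m[1])
    return llrs

# Toy 2-state usage; the branch metrics are arbitrary log-domain numbers.
sections = [
    [(0, 0, 0, -0.1), (0, 1, 1, -1.2)],
    [(0, 0, 0, -0.3), (1, 0, 1, -0.5)],
]
print(bcjr_llrs(sections, n_states=2))
```

Replacing every max* call with a plain max turns the forward pass into the Viterbi recursion, which is exactly the point of the final bullet above.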