Bayesian Networks – Inference (cont.)

Slide outline:
- Marginalization
- Probabilistic inference example
- Understanding variable elimination – Order can make a HUGE difference
- Variable elimination algorithm
- Complexity of variable elimination – (Poly)-tree graphs
- Complexity of variable elimination – Graphs with loops
- Complexity of variable elimination – Tree-width
- Example: Large tree-width with small number of parents
- Choosing an elimination order
- Most likely explanation (MLE)
- Max-marginalization
- Example of variable elimination for MLE – Forward pass
- Example of variable elimination for MLE – Backward pass
- MLE Variable elimination algorithm – Forward pass
- MLE Variable elimination algorithm – Backward pass
- What you need to know
- HMMs
- Adventures of our BN hero
- Handwriting recognition
- Example of a hidden Markov model (HMM)
- Understanding the HMM Semantics
- HMMs semantics: Details
- HMMs semantics: Joint distribution
- Learning HMMs from fully observable data is easy
- Possible inference tasks in an HMM
- Using variable elimination to compute P(Xi|o1:n)
- What if I want to compute P(Xi|o1:n) for each i?
- Reusing computation
- The forwards-backwards algorithm
- Most likely explanation
- The Viterbi algorithm
- What you'll implement 1: multiplication
- What you'll implement 2: max & argmax
- Higher-order HMMs
- What you need to know

Required readings from Koller & Friedman:
- Representation: 2.1, 2.2
- Inference: 5.1, 6.1, 6.2, 6.7.1
- Optional: 2.3, 5.2, 5.3, 6.3, 6.7.2

Bayesian Networks – Inference (cont.)
Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
March 26th, 2006

Marginalization
[Figure: marginalization example over Flu, Allergy = t, and Sinus]

Probabilistic inference example
[Figure: network over Flu, Allergy, Sinus, Headache, with evidence Nose = t]
- Inference seems exponential in the number of variables!

Understanding variable elimination – Order can make a HUGE difference
[Figure: same Flu / Allergy / Sinus / Headache / Nose = t network]

Variable elimination algorithm
- Given a BN and a query P(X|e) ∝ P(X,e)
- Instantiate evidence e
- Prune non-ancestors of {X, e}
- Choose an ordering on the variables, e.g., X1, …, Xn  (IMPORTANT!!!)
- For i = 1 to n, if Xi ∉ {X, e}:
  - Collect the factors f1, …, fk that include Xi
  - Generate a new factor by eliminating Xi from these factors
  - Variable Xi has been eliminated!
- Normalize P(X, e) to obtain P(X|e)

Complexity of variable elimination – (Poly)-tree graphs
- Variable elimination order: start from the "leaves" up – find a topological order and eliminate variables in reverse order
- Linear in the number of variables!!! (versus exponential)
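To make the elimination step on the "Variable elimination algorithm" slide above concrete, here is a minimal sketch, not the lecture's code: it assumes a factor is a (scope, table) pair, where scope is a tuple of variable names and table maps assignment tuples to numbers, and `domains` maps each variable to its possible values. All function names are illustrative.

```python
from itertools import product

def multiply(f1, f2, domains):
    """Pointwise product of two factors over the union of their scopes."""
    scope1, table1 = f1
    scope2, table2 = f2
    scope = tuple(dict.fromkeys(scope1 + scope2))      # ordered union of variables
    table = {}
    for values in product(*(domains[v] for v in scope)):
        assign = dict(zip(scope, values))
        table[values] = (table1[tuple(assign[v] for v in scope1)]
                         * table2[tuple(assign[v] for v in scope2)])
    return scope, table

def sum_out(factor, var):
    """Marginalize (sum out) one variable from a factor."""
    scope, table = factor
    idx = scope.index(var)
    new_scope = scope[:idx] + scope[idx + 1:]
    new_table = {}
    for values, p in table.items():
        key = values[:idx] + values[idx + 1:]
        new_table[key] = new_table.get(key, 0.0) + p
    return new_scope, new_table

def eliminate_variable(factors, var, domains):
    """One iteration of the elimination loop: combine every factor that
    mentions `var` into a single factor, then sum `var` out of it."""
    touching = [f for f in factors if var in f[0]]
    if not touching:                                    # nothing mentions var
        return factors
    rest = [f for f in factors if var not in f[0]]
    combined = touching[0]
    for f in touching[1:]:
        combined = multiply(combined, f, domains)
    return rest + [sum_out(combined, var)]
```

Applying this step repeatedly over an elimination ordering, then multiplying and normalizing the remaining factors, yields P(X|e) as outlined above; the size of the intermediate factors produced along the way is what the ordering controls, which is exactly the complexity issue the next slides turn to.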
Complexity of variable elimination – Graphs with loops
- Exponential in the number of variables in the largest factor generated

Complexity of variable elimination – Tree-width
- Moralize the graph: connect parents into a clique and remove edge directions
- Complexity of variable elimination: ("only") exponential in the tree-width
- Tree-width is the maximum node cut + 1

Example: Large tree-width with small number of parents
- Compact representation ⇏ easy inference

Choosing an elimination order
- Choosing the best order is NP-complete
  - Reduction from MAX-Clique
- Many good heuristics (some with guarantees)
- Ultimately, can't beat the NP-hardness of inference
  - Even the optimal order can lead to exponential variable elimination computation
- In practice
  - Variable elimination is often very effective
  - Many (many, many) approximate inference approaches are available when variable elimination is too expensive

Most likely explanation (MLE)
[Figure: Flu / Allergy / Sinus / Headache / Nose network]
- Query: argmax over f, a, s, h of P(f, a, s, h | N = t)
- Using Bayes rule: = argmax over f, a, s, h of P(f, a, s, h, N = t) / P(N = t)
- Normalization irrelevant: = argmax over f, a, s, h of P(f, a, s, h, N = t)

Max-marginalization
[Figure: Flu, Sinus, evidence Nose = t]

Example of variable elimination for MLE – Forward pass
[Figure: Flu / Allergy / Sinus / Headache / Nose = t network]

Example of variable elimination for MLE – Backward pass
[Figure: Flu / Allergy / Sinus / Headache / Nose = t network]

MLE Variable elimination algorithm – Forward pass
- Given a BN and an MLE query max_{x1,…,xn} P(x1, …, xn, e)
- Instantiate evidence e
- Choose an ordering on the variables, e.g., X1, …, Xn
- For i = 1 to n, if Xi ∉ {e}:
  - Collect the factors f1, …, fk that include Xi
  - Generate a new factor by eliminating (maximizing out) Xi from these factors
  - Variable Xi has been eliminated!

MLE Variable elimination algorithm – Backward pass
- {x1*, …, xn*} will store the maximizing assignment
- For i = n down to 1, if Xi ∉ {e}:
  - Take the factors f1, …, fk used when Xi was eliminated
  - Instantiate f1, …, fk with {x_{i+1}*, …, xn*} – now each fj depends only on Xi
  - Generate the maximizing assignment for Xi: xi* = argmax over xi of f1(xi) × … × fk(xi)
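The only change relative to sum-product elimination is that the forward pass takes a max instead of a sum and records which value of the eliminated variable achieved it, so the backward pass can read the answer off. Below is a minimal sketch under the same assumed (scope, table) factor representation as the earlier snippet; `max_out` and `backward_step` are illustrative names, not the lecture's code.

```python
def max_out(factor, var):
    """Forward-pass step: maximize `var` out of a factor, remembering which
    value of `var` achieved the max for each assignment of the other variables."""
    scope, table = factor
    idx = scope.index(var)
    new_scope = scope[:idx] + scope[idx + 1:]
    best_val, best_arg = {}, {}
    for values, p in table.items():
        key = values[:idx] + values[idx + 1:]
        if key not in best_val or p > best_val[key]:
            best_val[key] = p
            best_arg[key] = values[idx]     # argmax over var, given the rest
    return (new_scope, best_val), best_arg

def backward_step(new_scope, best_arg, assignment):
    """Backward-pass step: with later variables (and evidence) already fixed
    in `assignment`, look up the maximizing value recorded for this variable."""
    key = tuple(assignment[v] for v in new_scope)
    return best_arg[key]
```

The forward pass applies max_out in the chosen elimination order (after multiplying together the factors that mention the variable), and the backward pass walks that order in reverse, calling backward_step to fill in xi* one variable at a time, exactly as in the two slides above.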
What you need to know
- Bayesian networks
  - A useful compact representation for large probability distributions
- Inference, to compute
  - The probability of X given evidence e
  - The most likely explanation (MLE) given evidence e
  - Inference is NP-hard
- Variable elimination algorithm
  - An efficient algorithm ("only" exponential in the tree-width, not in the number of variables)
  - The elimination order is important!
  - Approximate inference is necessary when the tree-width is too large (not covered this semester)
  - The only difference between probabilistic inference and MLE is "sum" versus "max"

HMMs
Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
March 26th, 2005

Classic HMM tutorial – see the class website:
L. R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. of the IEEE, Vol. 77, No. 2, pp. 257–286, 1989.

Adventures of our BN hero
- Compact representation for probability distributions
- Fast inference
- Fast learning
- But… who are the most popular kids?
  1. Naïve Bayes
  2 and 3. Hidden Markov models (HMMs) and Kalman Filters

Handwriting recognition
- Character recognition, e.g., kernel SVMs
[Figure: handwritten character images with their individual recognition labels]

Example of a hidden Markov model (HMM)
[Figure: HMM for a sequence of handwritten characters]

Understanding the HMM Semantics
[Figure: hidden chain X1 → X2 → X3 → X4 → X5, each Xi ∈ {a, …, z}, each Xi emitting an observed character image Oi]

HMMs semantics: Details
[Same HMM figure]
- Just 3 distributions:
  - P(X1) – prior over the initial state
  - P(Xi | Xi−1) – transition model
  - P(Oi | Xi) – observation (emission) model

HMMs semantics: Joint distribution
[Same HMM figure]
- P(x1, …, xn, o1, …, on) = P(x1) P(o1 | x1) × ∏i=2…n P(xi | xi−1) P(oi | xi)

Learning HMMs from fully observable data is easy
[Same HMM figure]
- Learn the 3 distributions: P(X1), P(Xi | Xi−1), P(Oi | Xi)

Possible inference tasks in an HMM
[Same HMM figure]
- Marginal probability of a hidden variable: P(Xi | o1:n)
- Viterbi decoding – most likely trajectory for the hidden variables: argmax over x1, …, xn of P(x1, …, xn | o1:n)

Using variable elimination to compute P(Xi | o1:n)
[Same HMM figure]
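For the HMM above, variable elimination for P(Xi | o1:n) specializes to passing messages forward along the chain (eliminating X1, …, Xi−1) and backward (eliminating Xn, …, Xi+1). Below is a minimal numpy sketch under assumed array conventions; the names prior, trans, emit, and hmm_marginal are illustrative, not the course's code.

```python
import numpy as np

def hmm_marginal(prior, trans, emit, obs, i):
    """Return P(X_i | o_1, ..., o_n) as a vector over hidden states.

    prior[s]    = P(X_1 = s)
    trans[s, t] = P(X_{k+1} = t | X_k = s)
    emit[s, o]  = P(O_k = o | X_k = s)
    obs         = observed symbol indices o_1, ..., o_n; i is a 0-based index.
    """
    n, k = len(obs), prior.size
    # Forward messages: alpha[t, s] = P(X_{t+1} = s, o_1, ..., o_{t+1})
    # (eliminating the hidden variables to the left).
    alpha = np.zeros((n, k))
    alpha[0] = prior * emit[:, obs[0]]
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
    # Backward messages: beta[t, s] = P(o_{t+2}, ..., o_n | X_{t+1} = s)
    # (eliminating the hidden variables to the right).
    beta = np.ones((n, k))
    for t in range(n - 2, -1, -1):
        beta[t] = trans @ (emit[:, obs[t + 1]] * beta[t + 1])
    joint = alpha[i] * beta[i]        # proportional to P(X_i = s, o_1:n)
    return joint / joint.sum()        # normalize to get P(X_i | o_1:n)

# Tiny made-up example: 2 hidden states, 2 observation symbols.
prior = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
emit  = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
print(hmm_marginal(prior, trans, emit, obs=[0, 1, 1], i=1))
```

Because the forward and backward messages do not depend on i, computing them once gives P(Xi | o1:n) for every i – the reuse of computation behind the forwards-backwards algorithm listed in the outline.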