HMMs
Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
March 28th, 2007
© 2005-2007 Carlos Guestrin

Adventures of our BN hero
- Compact representation for probability distributions
- Fast inference
- Fast learning
- But… who are the most popular kids?
  1. Naïve Bayes
  2 and 3. Hidden Markov models (HMMs) and Kalman filters

Handwriting recognition
[Figure: a handwritten word to be transcribed]

Character recognition, e.g., kernel SVMs
[Figure: individual handwritten characters classified one at a time, with predicted labels]

Example of a hidden Markov model (HMM)
[Figure: chain X1 → X2 → X3 → X4 → X5, each Xi ∈ {a,…,z}, with an observation Oi (the character image) attached to each Xi]

Understanding the HMM semantics
- Hidden states X1,…,X5, each Xi ∈ {a,…,z}
- One observation Oi per hidden state Xi

HMM semantics: details
- Just 3 distributions:
  - Initial state prior: P(X1)
  - Transition model: P(Xi | Xi−1), shared across time steps
  - Observation model: P(Oi | Xi), shared across time steps

HMM semantics: joint distribution
- P(x1:n, o1:n) = P(x1) P(o1|x1) ∏_{i=2..n} P(xi|xi−1) P(oi|xi)

Learning HMMs from fully observable data is easy
- Learn the 3 distributions by maximum-likelihood counting, e.g.
  P̂(o | x) = Count(x, o) / Count(x)

Possible inference tasks in an HMM
- Marginal probability of a hidden variable: P(Xi | o1:n)
- Viterbi decoding – most likely trajectory for hidden vars: argmax_{x1:n} P(x1:n | o1:n)

Using variable elimination to compute P(Xi | o1:n)
- Compute: P(Xi | o1:n) ∝ Σ over all xj, j ≠ i, of P(x1:n, o1:n)
- Variable elimination order? Eliminate X1,…,Xi−1 from the left and Xn,…,Xi+1 from the right, so every intermediate factor is over a single variable
- Example: [worked on the slide]

What if I want to compute P(Xi | o1:n) for each i?
- Compute: P(Xi | o1:n) for every i = 1,…,n
- Variable elimination for each i, what's the complexity? n separate runs, each O(n), so O(n²) overall, and the same intermediate factors are recomputed again and again

Reusing computation
- Compute all the marginals at once by caching: the forwards factors (from eliminating X1,…,Xi−1) and the backwards factors (from eliminating Xn,…,Xi+1) are shared across all the queries

The forwards-backwards algorithm
- Forwards pass
  - Initialization: α1(x) = P(x) P(o1|x)
  - For i = 2 to n, generate a forwards factor by eliminating Xi−1:
    αi(x) = P(oi|x) Σ_{x'} P(x|x') αi−1(x')
- Backwards pass
  - Initialization: βn(x) = 1
  - For i = n−1 to 1, generate a backwards factor by eliminating Xi+1:
    βi(x) = Σ_{x'} P(x'|x) P(oi+1|x') βi+1(x')
- ∀ i, probability is: P(Xi = x | o1:n) ∝ αi(x) βi(x)

Most likely explanation
- Compute: argmax_{x1:n} P(x1:n | o1:n)
- Variable elimination order? The same forwards sweep, with max in place of Σ
- Example: [worked on the slide]

The Viterbi algorithm
- Initialization: δ1(x) = P(x) P(o1|x)
- For i = 2 to n, generate a forwards factor by eliminating Xi−1 (max instead of sum):
  δi(x) = P(oi|x) max_{x'} P(x|x') δi−1(x')
- Computing best explanation: for i = n−1 to 1, use argmax to get the explanation:
  x*n = argmax_x δn(x),  x*i = argmax_{x'} δi(x') P(x*_{i+1} | x')

What you'll implement 1: multiplication
- Multiplying a factor (αi−1 or δi−1) into the transition and emission CPTs and summing out the previous state (see the forwards-backwards sketch below)

What you'll implement 2: max & argmax
- The same recursion with max in place of the sum, plus the argmax traceback (see the Viterbi sketch below)
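Below is a minimal numpy sketch of the forwards-backwards recursions above, i.e., the factor multiplication from "What you'll implement 1". The array layout is an assumption of this sketch rather than notation from the lecture: init[x] = P(X1 = x), trans[x, y] = P(Xi = y | Xi−1 = x), emit[x, o] = P(Oi = o | Xi = x), and obs holds the observations as integer indices.

```python
import numpy as np

def forwards_backwards(init, trans, emit, obs):
    """Smoothed marginals P(X_i | o_1:n) for a discrete HMM (sketch).

    Assumed layout: init[x] = P(X_1 = x), trans[x, y] = P(X_i = y | X_{i-1} = x),
    emit[x, o] = P(O_i = o | X_i = x), obs = integer-coded observations o_1..o_n.
    """
    n, k = len(obs), len(init)
    alpha = np.zeros((n, k))
    beta = np.ones((n, k))

    # Forwards: alpha_i(x) = P(o_i|x) * sum_x' P(x|x') * alpha_{i-1}(x')
    alpha[0] = init * emit[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for i in range(1, n):
        alpha[i] = emit[:, obs[i]] * (alpha[i - 1] @ trans)
        alpha[i] /= alpha[i].sum()          # rescale to avoid underflow

    # Backwards: beta_i(x) = sum_x' P(x'|x) * P(o_{i+1}|x') * beta_{i+1}(x')
    for i in range(n - 2, -1, -1):
        beta[i] = trans @ (emit[:, obs[i + 1]] * beta[i + 1])
        beta[i] /= beta[i].sum()            # rescale to avoid underflow

    # For all i: P(X_i = x | o_1:n) is proportional to alpha_i(x) * beta_i(x);
    # the per-step rescalings are constant per row, so normalizing cancels them.
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```

Each update is one factor multiplication followed by summing out the neighboring state, so computing all n marginals costs O(n·k²) for k states, rather than the O(n²·k²) of rerunning variable elimination per query.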
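And a matching sketch of Viterbi, the max & argmax from "What you'll implement 2": the same forwards recursion with max in place of the sum, followed by the argmax traceback. Working in log space is a choice made here to avoid underflow, not something the slides prescribe; the init/trans/emit layout is the same assumption as above.

```python
import numpy as np

def viterbi(init, trans, emit, obs):
    """Most likely trajectory argmax_{x_1:n} P(x_1:n | o_1:n) (sketch)."""
    n, k = len(obs), len(init)
    log_trans = np.log(trans)
    # delta_1(x) = P(x) * P(o_1|x), in log space
    delta = np.log(init) + np.log(emit[:, obs[0]])
    backptr = np.zeros((n, k), dtype=int)

    # Forwards pass: delta_i(x) = P(o_i|x) * max_x' P(x|x') * delta_{i-1}(x')
    for i in range(1, n):
        scores = delta[:, None] + log_trans        # scores[x_prev, x]
        backptr[i] = scores.argmax(axis=0)         # best predecessor of each x
        delta = scores.max(axis=0) + np.log(emit[:, obs[i]])

    # Traceback: use argmax to read off the best explanation
    path = [int(delta.argmax())]
    for i in range(n - 1, 0, -1):
        path.append(int(backptr[i][path[-1]]))
    return path[::-1]                              # x*_1, ..., x*_n
```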
Higher-order HMMs
- Add dependencies further back in time (e.g., Xi depends on Xi−2 as well as Xi−1) → better representation, harder to learn

What you need to know
- Hidden Markov models (HMMs)
  - Very useful, very powerful!
  - Speech, OCR, …
  - Parameter sharing: only learn 3 distributions
  - Trick reduces inference from O(n²) to O(n)
  - Special case of a BN

Bayesian Networks – (Structure) Learning
Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
March 28th, 2007
© 2005-2007 Carlos Guestrin

Review
- Bayesian networks
  - Compact representation for probability distributions
  - Exponential reduction in number of parameters
- Fast probabilistic inference using variable elimination
  - Compute P(X|e)
  - Time exponential in tree-width, not in the number of variables
- Today: learn BN structure
[Running example: Flu → Sinus ← Allergy, Sinus → Headache, Sinus → Nose]

Learning Bayes nets
- Data: x(1), …, x(m)
- What is learned: the structure, and the parameters, i.e., the CPTs P(Xi | PaXi)
- Four settings:
  - Known structure, fully observable data
  - Known structure, missing data
  - Unknown structure, fully observable data (today)
  - Unknown structure, missing data

Learning the CPTs
- Data: x(1), …, x(m)
- For each discrete variable Xi, maximum likelihood is counting:
  P̂(xi | paXi) = Count(xi, paXi) / Count(paXi)

Information-theoretic interpretation of maximum likelihood
- Given structure, the log likelihood of the data is:
  log P̂(D | θ̂, G) = Σ_j Σ_i log P̂(xi(j) | paXi(j))
- Rewriting the counts as empirical probabilities:
  log P̂(D | θ̂, G) = m Σ_i Î(Xi, PaXi) − m Σ_i Ĥ(Xi)
  where Î is the empirical mutual information and Ĥ the empirical entropy
- The entropy term does not depend on the graph G, so maximizing likelihood means choosing parents with high mutual information with each variable

Decomposable score
- Log data likelihood
- Decomposable score:
  - Decomposes over families in the BN (a node and its parents)
  - Will lead to significant computational efficiency!!!
  - Score(G : D) = Σ_i FamScore(Xi | PaXi : D)

How many trees are there?
- n^(n−2) spanning trees on n labeled nodes (Cayley's formula) – far too many to enumerate
- Nonetheless – an efficient optimal algorithm finds the best tree

Scoring a tree 1: equivalent trees
- In a tree, each node has at most one parent, so the score is a sum of per-edge terms; trees with the same undirected skeleton get the same score, regardless of the choice of root

Scoring a tree 2: similar trees
- Two trees that differ in one edge differ in score only in the corresponding edge terms, so trees can be compared edge by edge using edge weights

Chow-Liu tree learning algorithm 1
- For each pair of variables Xi, Xj:
  - Compute the empirical distribution: P̂(xi, xj) = Count(xi, xj) / m
  - Compute the mutual information:
    Î(Xi, Xj) = Σ_{xi,xj} P̂(xi, xj) log [ P̂(xi, xj) / (P̂(xi) P̂(xj)) ]
- Define a graph:
  - Nodes X1, …, Xn
  - Edge (i, j) gets weight Î(Xi, Xj)

Chow-Liu tree learning algorithm 2
- Optimal tree BN:
  - Compute the maximum-weight spanning tree (a code sketch appears at the end of these notes)
  - Directions in the BN: pick any node as root; breadth-first search defines the directions

Can we extend Chow-Liu? 1
- Tree-augmented naïve Bayes (TAN) [Friedman et al. '97]
- The naïve Bayes model overcounts evidence, because correlation between features is not considered
- Same as Chow-Liu, but score edges with the conditional mutual information given the class: Î(Xi, Xj | C)

Can we extend Chow-Liu? 2
- (Approximately) learning models with tree-width up to k [Narasimhan & Bilmes '04]
- But O(n^(k+1))… and more subtleties

What you need to know
[remainder of slide truncated in the source]
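To make the two Chow-Liu slides concrete, here is a minimal sketch: empirical pairwise mutual information as edge weights, then a maximum-weight spanning tree. Kruskal's algorithm with union-find is one standard spanning-tree choice made here; the slides do not fix a particular algorithm. The m × n integer data array and the function names are this sketch's assumptions.

```python
import numpy as np
from collections import defaultdict

def chow_liu_tree(data):
    """Chow-Liu (sketch): MI edge weights + maximum-weight spanning tree.

    data: m x n integer array, one row per sample, one column per variable.
    Returns the n-1 undirected tree edges; pick any root and orient by BFS.
    """
    m, n = data.shape

    def mutual_info(i, j):
        # Empirical joint distribution: P^(xi, xj) = Count(xi, xj) / m
        joint = defaultdict(float)
        for row in data:
            joint[(row[i], row[j])] += 1.0 / m
        pi, pj = defaultdict(float), defaultdict(float)
        for (a, b), p in joint.items():
            pi[a] += p
            pj[b] += p
        # I^(Xi, Xj) = sum P^(xi,xj) log [ P^(xi,xj) / (P^(xi) P^(xj)) ]
        return sum(p * np.log(p / (pi[a] * pj[b])) for (a, b), p in joint.items())

    # Kruskal on edges sorted by decreasing weight; MI is non-negative,
    # so the maximum-weight spanning tree always keeps n-1 edges.
    edges = sorted(((mutual_info(i, j), i, j)
                    for i in range(n) for j in range(i + 1, n)), reverse=True)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                        # adding this edge creates no cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

For TAN, only the weight changes: replace mutual_info with the conditional mutual information Î(Xi, Xj | C) given the class variable, and run the same spanning-tree step over the features.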