Time series, HMMs, Kalman Filters
Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
March 28th, 2005

Classic HMM tutorial – see class website:
L. R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. of the IEEE, Vol. 77, No. 2, pp. 257–286, 1989.

Adventures of our BN hero
- Compact representation for probability distributions
- Fast inference
- Fast learning
- But… who are the most popular kids?
  1. Naïve Bayes
  2 and 3. Hidden Markov models (HMMs) and Kalman filters

Handwriting recognition
- Character recognition, e.g., kernel SVMs
- [Figure: images of individual handwritten characters, classified one at a time.]

Example of a hidden Markov model (HMM)
- [Figure: a handwritten word; each hidden letter generates an observed character image.]

Understanding the HMM semantics
- [Figure: chain X1 → X2 → X3 → X4 → X5, each Xi ∈ {a,…,z}; each Xi emits an observed character image Oi. The same graph is shown on the HMM slides that follow.]

HMMs semantics: Details
- Just 3 distributions:
  - P(X1): the initial state distribution
  - P(Xi | Xi-1): the transition model, shared across all positions i
  - P(Oi | Xi): the observation model, shared across all positions i

HMMs semantics: Joint distribution
- P(X1,…,Xn, O1,…,On) = P(X1) P(O1 | X1) ∏i=2..n P(Xi | Xi-1) P(Oi | Xi)

Learning HMMs from fully observable data is easy
- Learn 3 distributions: when every Xi and Oi is observed in training, estimate P(X1), P(Xi | Xi-1), and P(Oi | Xi) by counting, exactly as in any fully observed Bayes net.

Possible inference tasks in an HMM
- Marginal probability of a hidden variable: P(Xi | o1:n)
- Viterbi decoding – most likely trajectory for hidden vars: argmax over x1,…,xn of P(x1,…,xn | o1:n)

Using variable elimination to compute P(Xi | o1:n)
- Compute: P(Xi | o1:n)
- Variable elimination order? Eliminate along the chain: X1, X2, …, Xi-1 from the front and Xn, Xn-1, …, Xi+1 from the back. Every intermediate factor involves only a single hidden variable, so one query costs O(n).

What if I want to compute P(Xi | o1:n) for each i?
- Compute: P(Xi | o1:n) for all i = 1,…,n
- Variable elimination for each i? Rerunning elimination from scratch for every i repeats almost all of the work.
- Variable elimination for each i, what's the complexity? Each run is O(n), so n runs cost O(n^2).

Reusing computation
- Compute all marginals P(Xi | o1:n) at once: the factors produced while eliminating X1,…,Xi-1 (the forwards factors) and Xn,…,Xi+1 (the backwards factors) are the same for every query, so compute each of them only once and combine them.

The forwards-backwards algorithm
- Forwards pass:
  - Initialization: the forwards factor over X1 is P(X1) P(o1 | X1)
  - For i = 2 to n: generate a forwards factor by eliminating Xi-1
- Backwards pass:
  - Initialization: the backwards factor over Xn is 1
  - For i = n-1 to 1: generate a backwards factor by eliminating Xi+1
- For every i, the probability is: P(Xi | o1:n) ∝ (forwards factor at i) × (backwards factor at i)
- (A small code sketch follows after the Viterbi slide below.)

Most likely explanation
- Compute: argmax over x1,…,xn of P(x1,…,xn | o1:n)
- Variable elimination order? The same chain order as before, but with max in place of sum when eliminating each hidden variable.

The Viterbi algorithm
- Initialization: the forwards max-factor over X1 is P(X1) P(o1 | X1)
- For i = 2 to n: generate a forwards factor by eliminating (maxing out) Xi-1
- Computing best explanation: for i = n-1 to 1, use argmax to recover the explanation, backtracking through the maximizing choices recorded in the forwards pass.
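The forwards-backwards slides above describe the two passes only at a high level; here is a minimal numpy sketch of the same idea for a discrete HMM. The representation (arrays pi, T, E) and the function name forwards_backwards are assumptions made for illustration, not notation from the lecture.

```python
# Minimal forwards-backwards sketch for a discrete HMM, assuming the model is
# given as numpy arrays (hypothetical names):
#   pi[x]    = P(X1 = x)                      initial distribution
#   T[x, x2] = P(X_i = x2 | X_{i-1} = x)      shared transition model
#   E[x, o]  = P(O_i = o | X_i = x)           shared observation model
import numpy as np

def forwards_backwards(pi, T, E, obs):
    """Return P(X_i | o_{1:n}) for every i, as an (n, num_states) array."""
    n, S = len(obs), len(pi)
    # Forwards pass: alpha[i, x] is proportional to P(X_i = x, o_{1:i}).
    alpha = np.zeros((n, S))
    alpha[0] = pi * E[:, obs[0]]
    for i in range(1, n):
        # Eliminate X_{i-1}: sum it out, then weigh in the evidence o_i.
        alpha[i] = (alpha[i - 1] @ T) * E[:, obs[i]]
    # Backwards pass: beta[i, x] is proportional to P(o_{i+1:n} | X_i = x).
    beta = np.ones((n, S))
    for i in range(n - 2, -1, -1):
        # Eliminate X_{i+1}.
        beta[i] = T @ (E[:, obs[i + 1]] * beta[i + 1])
    # For every i, P(X_i | o_{1:n}) is proportional to alpha[i] * beta[i].
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)
```

With 26 states for {a,…,z}, each step is a single matrix-vector product, which is why computing all n marginals this way is O(n) rather than the O(n^2) of rerunning variable elimination for every i.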
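A matching sketch of the Viterbi algorithm, under the same hypothetical pi, T, E representation: the forwards pass maxes out Xi-1 instead of summing it out, and the stored argmax pointers are then followed backwards from i = n-1 to 1.

```python
# Minimal Viterbi sketch: max-product forwards factors plus argmax backtracking.
import numpy as np

def viterbi(pi, T, E, obs):
    """Return the most likely hidden-state sequence given the observations."""
    n, S = len(obs), len(pi)
    # delta[i, x] = max over x_{1:i-1} of P(x_{1:i-1}, X_i = x, o_{1:i})
    delta = np.zeros((n, S))
    back = np.zeros((n, S), dtype=int)         # argmax pointers for backtracking
    delta[0] = pi * E[:, obs[0]]
    for i in range(1, n):
        # Max out X_{i-1} instead of summing it out.
        scores = delta[i - 1][:, None] * T      # scores[x_prev, x]
        back[i] = scores.argmax(axis=0)
        delta[i] = scores.max(axis=0) * E[:, obs[i]]
    # Backtrack: best final state, then follow pointers for i = n-1, ..., 1.
    path = [int(delta[-1].argmax())]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```

In practice both passes are run in log space to avoid numerical underflow on long sequences; that detail is omitted here.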
What about continuous variables?
- In general, very hard! Must represent complex distributions.
- A special case is very doable: when everything is Gaussian.
- Called a Kalman filter.
- One of the most used algorithms in the history of probability!

Time series data example: Temperatures from sensor network
- [Figure: floor plan of a sensor network deployment, with numbered temperature sensors placed in rooms labeled SERVER, LAB, KITCHEN, COPY, ELEC, PHONE, QUIET, STORAGE, CONFERENCE, and OFFICE.]

Operations in Kalman filter
- Compute the belief state P(Xt | o1:t) as observations arrive.
- Start with the prior over X1.
- At each time step t:
  - Condition on observation ot
  - Roll-up (marginalize previous time step)
- (A small code sketch of both operations appears at the end of these notes.)

Detour: Understanding multivariate Gaussians
- Observe some attributes, ask about the rest.
- Example: observe X1 = 18 and compute P(X2 | X1 = 18).

Characterizing a multivariate Gaussian
- Mean vector: μ
- Covariance matrix: Σ
- Density: P(x) = exp(−(x − μ)ᵀ Σ⁻¹ (x − μ) / 2) / ((2π)^(d/2) |Σ|^(1/2))

Conditional Gaussians
- Conditional probabilities P(Y | X): if X and Y are jointly Gaussian, then P(Y | X = x) is also Gaussian, with a mean that is linear in x and a covariance that does not depend on x.

Kalman filter with Gaussians
- Equivalent to a linear system: the transition model P(Xt+1 | Xt) and the observation model P(Ot | Xt) are linear functions of the conditioning variables plus Gaussian noise.

Detour 2: Canonical form
- Standard form and canonical form are related: the canonical (information) form parameterizes a Gaussian by K = Σ⁻¹ and h = Σ⁻¹ μ instead of Σ and μ.
- Conditioning is easy in canonical form.
- Marginalization is easy in standard form.

Conditioning in canonical form
- First multiply the current belief by the observation model.
- Then, condition on the value B = y.

Roll-up in canonical form
- First multiply the belief over Xt by the transition model for Xt+1.
- Then, marginalize Xt.

Learning a Kalman filter
- Must learn: the prior and the (linear-Gaussian) transition and observation models.
- Learn the joint Gaussian, and use the division rule to obtain the needed conditionals.

Maximum likelihood learning of a multivariate Gaussian
- Data: x(1), …, x(m)
- Means are just empirical means: μ̂ = (1/m) Σj x(j)
- Empirical covariances: Σ̂ = (1/m) Σj (x(j) − μ̂)(x(j) − μ̂)ᵀ
- (A short sketch of these estimates also appears at the end of these notes.)

What you need to know
- Hidden Markov models (HMMs)
  - Very useful, very powerful! Speech, OCR, …
  - Parameter sharing: only learn 3 distributions
  - Trick reduces inference from O(n^2) to O(n)
  - Special case of BN
- Kalman filter
  - Continuous-variable version of HMMs
  - Assumes Gaussian distributions
  - Equivalent to a linear system
  - Simple matrix operations for …
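To make the "Operations in Kalman filter" slides concrete, here is a minimal sketch of the two operations, condition and roll-up, for a linear-Gaussian model. It works directly in the standard (mean, covariance) form rather than the canonical form used on the slides, and the names A, Q, C, R for the transition and observation models are assumptions made for illustration.

```python
# One Kalman filter time step in standard (mean, covariance) form, assuming a
# linear-Gaussian model with hypothetical parameter names:
#   X_{t+1} = A X_t + transition noise,   noise ~ N(0, Q)
#   O_t     = C X_t + observation noise,  noise ~ N(0, R)
import numpy as np

def condition(mu, Sigma, C, R, o):
    """Condition the Gaussian belief N(mu, Sigma) over X_t on observation o_t."""
    S = C @ Sigma @ C.T + R                  # covariance of the predicted observation
    K = Sigma @ C.T @ np.linalg.inv(S)       # Kalman gain
    mu_post = mu + K @ (o - C @ mu)
    Sigma_post = Sigma - K @ C @ Sigma
    return mu_post, Sigma_post

def roll_up(mu, Sigma, A, Q):
    """Marginalize out X_t to get the belief over X_{t+1}, before its observation."""
    return A @ mu, A @ Sigma @ A.T + Q

# At each time step t: condition on o_t, then roll up to time t+1.
```

Every quantity here is a small matrix, so one time step is just a handful of matrix products and one inverse, which is the "simple matrix operations" payoff mentioned in the summary above.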
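Finally, a sketch of the maximum likelihood estimates from the multivariate Gaussian slide: the empirical mean and the empirical covariance. The data layout (one example per row of a numpy array) is an assumption for illustration.

```python
# Maximum likelihood fit of a multivariate Gaussian to data X of shape (m, d).
import numpy as np

def fit_gaussian(X):
    mu = X.mean(axis=0)                         # empirical mean vector
    centered = X - mu
    Sigma = centered.T @ centered / X.shape[0]  # empirical covariance (MLE uses 1/m)
    return mu, Sigma
```

The "Learning a Kalman filter" slide builds on exactly these estimates: fit a joint Gaussian over consecutive hidden states (and over state-observation pairs), then read off the linear-Gaussian conditionals with the division rule.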