Expectation Maximization

Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
April 10th, 2006

Announcements

- Reminder: Project milestone due Wednesday, beginning of class

Coordinate descent algorithms

- Want: $\min_a \min_b F(a, b)$
- Coordinate descent:
  - fix $a$, minimize over $b$
  - fix $b$, minimize over $a$
  - repeat
- Converges (if $F$ is bounded) to a (often good) local optimum, as we saw in the applet (play with it!)
- K-means is a coordinate descent algorithm!

Expectation Maximization

Back to Unsupervised Learning of GMMs – a simple case

Remember:
- We have unlabeled data $x_1, x_2, \ldots, x_m$
- We know there are $k$ classes
- We know $P(y_1), P(y_2), \ldots, P(y_k)$
- We do not know $\mu_1, \mu_2, \ldots, \mu_k$
- We can write $P(\text{data} \mid \mu_1 \ldots \mu_k)$:

$$
P(x_1 \ldots x_m \mid \mu_1 \ldots \mu_k)
= \prod_{j=1}^{m} P(x_j \mid \mu_1 \ldots \mu_k)
= \prod_{j=1}^{m} \sum_{i=1}^{k} P(x_j \mid \mu_i)\, P(y=i)
\propto \prod_{j=1}^{m} \sum_{i=1}^{k} \exp\!\left(-\frac{1}{2\sigma^2}(x_j - \mu_i)^2\right) P(y=i)
$$

EM for simple case of GMMs: The E-step

- If we know $\mu_1, \ldots, \mu_k$, we can easily compute the probability that point $x_j$ belongs to class $y=i$:

$$
P(y=i \mid x_j, \mu_1 \ldots \mu_k) \propto \exp\!\left(-\frac{1}{2\sigma^2}(x_j - \mu_i)^2\right) P(y=i)
$$

EM for simple case of GMMs: The M-step

- If we know the probability that each point $x_j$ belongs to class $y=i$, the MLE for $\mu_i$ is the weighted average: imagine $k$ copies of each $x_j$, each with weight $P(y=i \mid x_j)$:

$$
\mu_i = \frac{\sum_{j=1}^{m} P(y=i \mid x_j)\, x_j}{\sum_{j=1}^{m} P(y=i \mid x_j)}
$$

E.M. for GMMs

- E-step: Compute the "expected" classes of all datapoints for each class (just evaluate a Gaussian at $x_j$):

$$
P(y=i \mid x_j, \mu_1 \ldots \mu_k) \propto \exp\!\left(-\frac{1}{2\sigma^2}(x_j - \mu_i)^2\right) P(y=i)
$$

- M-step: Compute the maximum-likelihood $\mu$ given our data's class membership distributions:

$$
\mu_i = \frac{\sum_{j=1}^{m} P(y=i \mid x_j)\, x_j}{\sum_{j=1}^{m} P(y=i \mid x_j)}
$$

E.M. for General GMMs

Iterate. On the $t$'th iteration, let our estimates be
$$
\lambda_t = \{\mu_1^{(t)}, \ldots, \mu_k^{(t)},\ \Sigma_1^{(t)}, \ldots, \Sigma_k^{(t)},\ p_1^{(t)}, \ldots, p_k^{(t)}\}
$$

- E-step: Compute the "expected" classes of all datapoints for each class (just evaluate a Gaussian at $x_j$; $p_i^{(t)}$ is shorthand for the estimate of $P(y=i)$ on the $t$'th iteration):

$$
P(y=i \mid x_j, \lambda_t) \propto p_i^{(t)}\, p(x_j \mid \mu_i^{(t)}, \Sigma_i^{(t)})
$$

- M-step: Compute the maximum-likelihood parameters given our data's class membership distributions ($m$ = #records):

$$
\mu_i^{(t+1)} = \frac{\sum_j P(y=i \mid x_j, \lambda_t)\, x_j}{\sum_j P(y=i \mid x_j, \lambda_t)}
\qquad
\Sigma_i^{(t+1)} = \frac{\sum_j P(y=i \mid x_j, \lambda_t)\left(x_j - \mu_i^{(t+1)}\right)\left(x_j - \mu_i^{(t+1)}\right)^{T}}{\sum_j P(y=i \mid x_j, \lambda_t)}
\qquad
p_i^{(t+1)} = \frac{\sum_j P(y=i \mid x_j, \lambda_t)}{m}
$$
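To make these updates concrete, here is a minimal sketch of the E- and M-steps for a one-dimensional mixture, where each $\Sigma_i^{(t)}$ reduces to a scalar variance. The function name em_gmm_1d, the random initialization, and the fixed iteration count are illustrative choices, not part of the lecture.

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=20, seed=0):
    """Sketch of EM for a k-component 1-D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    m = len(x)
    mu = rng.choice(x, size=k, replace=False)   # initialize means at random datapoints
    sigma2 = np.full(k, np.var(x))              # start every class at the global variance
    p = np.full(k, 1.0 / k)                     # uniform class priors p_i
    for _ in range(n_iter):
        # E-step: responsibilities P(y=i | x_j, lambda_t) ∝ p_i N(x_j | mu_i, sigma2_i)
        resp = (p * np.exp(-0.5 * (x[:, None] - mu) ** 2 / sigma2)
                / np.sqrt(2 * np.pi * sigma2))
        resp /= resp.sum(axis=1, keepdims=True)  # normalize over classes i
        # M-step: weighted MLEs with soft counts N_i = sum_j P(y=i | x_j, lambda_t)
        N = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / N
        sigma2 = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / N
        p = N / m
    return mu, sigma2, p

# Usage on two well-separated clusters: the means should land near -2 and 3
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
print(em_gmm_1d(x, k=2))
```

In practice one would iterate until the log likelihood stops improving rather than for a fixed number of steps, and guard against a component's variance collapsing onto a single point.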
[Figure slides: Gaussian Mixture Example: Start; After first iteration; After 2nd iteration; After 3rd iteration; After 4th iteration; After 5th iteration; After 6th iteration; After 20th iteration.]

[Figure slides: Some Bio Assay data; GMM clustering of the assay data; Resulting Density Estimator; Three classes of assay (each learned with its own mixture model); Resulting Bayes Classifier; Resulting Bayes Classifier, using posterior probabilities to alert about ambiguity and anomalousness (yellow means anomalous, cyan means ambiguous).]

The general learning problem with missing data

- Marginal likelihood – $x$ is observed, $z$ is missing:

$$
\ell(\theta : \mathcal{D}) = \log \prod_{j=1}^{m} P(x_j \mid \theta) = \sum_{j=1}^{m} \log \sum_{z} P(x_j, z \mid \theta)
$$

E-step

- $x$ is observed, $z$ is missing
- Compute the probability of the missing data given the current choice of $\theta$: $Q(z \mid x_j)$ for each $x_j$
- e.g., this probability corresponds to the "classification step" in K-means

Jensen's inequality

- Theorem: $\log \sum_z P(z) f(z) \ \ge\ \sum_z P(z) \log f(z)$

Applying Jensen's inequality

- Use $\log \sum_z P(z) f(z) \ge \sum_z P(z) \log f(z)$ with $P(z) = Q(z \mid x_j)$ and $f(z) = P(x_j, z \mid \theta) / Q(z \mid x_j)$:

$$
\ell(\theta : \mathcal{D}) = \sum_j \log \sum_z Q(z \mid x_j)\, \frac{P(x_j, z \mid \theta)}{Q(z \mid x_j)}
\ \ge\ \sum_j \sum_z Q(z \mid x_j) \log \frac{P(x_j, z \mid \theta)}{Q(z \mid x_j)}
$$

The M-step maximizes lower bound on weighted data

- Lower bound from Jensen's:

$$
\ell(\theta : \mathcal{D}) \ \ge\ \sum_j \sum_z Q^{(t+1)}(z \mid x_j) \log \frac{P(x_j, z \mid \theta)}{Q^{(t+1)}(z \mid x_j)}
$$

- Corresponds to a weighted dataset:
  - $\langle x_1, z=1 \rangle$ with weight $Q^{(t+1)}(z=1 \mid x_1)$
  - $\langle x_1, z=2 \rangle$ with weight $Q^{(t+1)}(z=2 \mid x_1)$
  - $\langle x_1, z=3 \rangle$ with weight $Q^{(t+1)}(z=3 \mid x_1)$
  - $\langle x_2, z=1 \rangle$ with weight $Q^{(t+1)}(z=1 \mid x_2)$
  - $\langle x_2, z=2 \rangle$ with weight $Q^{(t+1)}(z=2 \mid x_2)$
  - $\langle x_2, z=3 \rangle$ with weight $Q^{(t+1)}(z=3 \mid x_2)$
  - ...

The M-step

- Maximization step:

$$
\theta^{(t+1)} = \arg\max_{\theta} \sum_j \sum_z Q^{(t+1)}(z \mid x_j) \log P(x_j, z \mid \theta)
$$

- Use expected counts instead of counts:
  - If learning requires $\mathrm{Count}(x, z)$
  - Use $E_{Q^{(t+1)}}[\mathrm{Count}(x, z)]$

Convergence of EM

- Define the potential function $F(\theta, Q)$:

$$
F(\theta, Q) = \sum_j \sum_z Q(z \mid x_j) \log \frac{P(x_j, z \mid \theta)}{Q(z \mid x_j)}
$$

- EM corresponds to coordinate ascent on $F$
- Thus it maximizes a lower bound on the marginal log likelihood

M-step is easy

- Using the potential function (the $-Q \log Q$ term of $F$ does not depend on $\theta$):

$$
\theta^{(t+1)} = \arg\max_{\theta} F(\theta, Q^{(t+1)}) = \arg\max_{\theta} \sum_j \sum_z Q^{(t+1)}(z \mid x_j) \log P(x_j, z \mid \theta)
$$

E-step also doesn't decrease potential function (1)

- Fixing $\theta$ to $\theta^{(t)}$:

$$
F(\theta^{(t)}, Q) = \sum_j \sum_z Q(z \mid x_j) \log \frac{P(x_j, z \mid \theta^{(t)})}{Q(z \mid x_j)}
= \sum_j \log P(x_j \mid \theta^{(t)}) - \sum_j KL\!\left(Q(z \mid x_j) \,\big\|\, P(z \mid x_j, \theta^{(t)})\right)
$$

KL-divergence

$$
KL(Q \,\|\, P) = \sum_z Q(z) \log \frac{Q(z)}{P(z)}
$$

- Measures the distance between distributions
- $KL \ge 0$, with $KL = 0$ if and only if $Q = P$

E-step also doesn't decrease potential function (2)

- Fixing $\theta$ to $\theta^{(t)}$: since $KL \ge 0$,

$$
F(\theta^{(t)}, Q) = \sum_j \log P(x_j \mid \theta^{(t)}) - \sum_j KL\!\left(Q(z \mid x_j) \,\big\|\, P(z \mid x_j, \theta^{(t)})\right)
\ \le\ \sum_j \log P(x_j \mid \theta^{(t)})
$$

E-step also doesn't decrease potential function (3)

- Fixing $\theta$ to $\theta^{(t)}$ and maximizing $F(\theta^{(t)}, Q)$ over $Q$ means minimizing the KL term, so set $Q$ to the posterior probability:

$$
Q^{(t+1)}(z \mid x_j) = P(z \mid x_j, \theta^{(t)})
$$

- Note that the KL term is then zero, so

$$
F(\theta^{(t)}, Q^{(t+1)}) = \sum_j \log P(x_j \mid \theta^{(t)})
$$

EM is coordinate ascent

- M-step: Fix $Q$, maximize $F$ over $\theta$ (a lower bound on $\sum_j \log P(x_j \mid \theta)$)
- E-step: Fix $\theta$, maximize $F$ over $Q$; this "realigns" $F$ with the likelihood:

$$
F(\theta^{(t)}, Q^{(t+1)}) = \sum_j \log P(x_j \mid \theta^{(t)})
$$
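To see this decomposition in action, here is a small numerical check on a toy discrete model: it verifies that the Jensen lower bound $F(\theta, Q)$ equals the marginal log likelihood minus the KL terms, and that setting $Q$ to the posterior makes the bound tight. The joint table P_xz, the data, and the initial Q are invented purely for illustration.

```python
import numpy as np

# Toy joint P(x, z | theta): rows index the observed x, columns the hidden z.
P_xz = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.15, 0.15]])
P_x = P_xz.sum(axis=1)          # marginal P(x | theta)
post = P_xz / P_x[:, None]      # posterior P(z | x, theta)

data = [0, 0, 1, 2, 2, 2]       # observed x_j (as row indices)
Q = np.array([[0.5, 0.5]] * len(data))   # an arbitrary Q(z | x_j)

# F(theta, Q) computed directly from the Jensen lower bound...
F = sum((Q[j] * np.log(P_xz[xj] / Q[j])).sum() for j, xj in enumerate(data))
# ...and via the decomposition: log likelihood minus the KL terms.
ll = sum(np.log(P_x[xj]) for xj in data)
kl = sum((Q[j] * np.log(Q[j] / post[xj])).sum() for j, xj in enumerate(data))
assert np.isclose(F, ll - kl)

# E-step: setting Q to the posterior drives every KL term to zero,
# so F(theta^(t), Q^(t+1)) equals the marginal log likelihood exactly.
Q_star = np.array([post[xj] for xj in data])
F_star = sum((Q_star[j] * np.log(P_xz[xj] / Q_star[j])).sum()
             for j, xj in enumerate(data))
assert np.isclose(F_star, ll)
print(F, ll - kl, F_star, ll)
```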
What you should know

- K-means for clustering: the algorithm, and why it converges (it is coordinate descent)
- EM for mixtures of Gaussians: how to "learn" maximum-likelihood parameters (locally max. like.) in the case of unlabeled data
- Be happy with this kind of probabilistic analysis
- Remember: E.M. can get stuck in local optima, and empirically it DOES
- EM is coordinate ascent
- The general case for EM

Acknowledgements

- The K-means & Gaussian mixture models presentation contains material from an excellent tutorial by Andrew Moore: http://www.autonlab.org/tutorials/
- K-means Applet: