EM
Machine Learning – 10-701/15-781
Carlos Guestrin
Carnegie Mellon University
November 23rd, 2009
©2005-2009 Carlos Guestrin

The General GMM assumption
[Figure: three Gaussian components with means µ_1, µ_2, µ_3]
• There are k components
• Component i has an associated mean vector µ_i
• Each component generates data from a Gaussian with mean µ_i and covariance matrix Σ_i
• Each data point is generated according to the following recipe:
  1. Pick a component at random: choose component i with probability P(y=i)
  2. Datapoint ~ N(µ_i, Σ_i)

Expectation Maximization

The E.M. Algorithm
• We'll get back to unsupervised learning soon
• But now we'll look at an even simpler case with hidden information
• The EM algorithm
  – Can do trivial things, such as the contents of the next few slides
  – An excellent way of doing our unsupervised learning problem, as we'll see
  – Many, many other uses, including learning BNs with hidden data

Silly Example
Let events be "grades in a class":
  w_1 = Gets an A:  P(A) = 1/2
  w_2 = Gets a B:   P(B) = µ
  w_3 = Gets a C:   P(C) = 2µ
  w_4 = Gets a D:   P(D) = 1/2 − 3µ
(Note 0 ≤ µ ≤ 1/6.)
Assume we want to estimate µ from data. In a given class there were a A's, b B's, c C's, and d D's.
What's the maximum likelihood estimate of µ given a, b, c, d?

Trivial Statistics
P(A) = 1/2, P(B) = µ, P(C) = 2µ, P(D) = 1/2 − 3µ
$P(a,b,c,d \mid \mu) = K \, (1/2)^a \, \mu^b \, (2\mu)^c \, (1/2 - 3\mu)^d$
$\log P(a,b,c,d \mid \mu) = \log K + a \log \tfrac{1}{2} + b \log \mu + c \log 2\mu + d \log(\tfrac{1}{2} - 3\mu)$
For the maximum-likelihood µ, set ∂ log P / ∂µ = 0:
$\frac{\partial \log P}{\partial \mu} = \frac{b}{\mu} + \frac{2c}{2\mu} - \frac{3d}{1/2 - 3\mu} = 0$
which gives the maximum-likelihood estimate
$\mu = \frac{b + c}{6(b + c + d)}$
So if the class got 14 A's, 6 B's, 9 C's, and 10 D's, the maximum-likelihood µ = 1/10.

Same Problem with Hidden Information
Someone tells us that
• Number of high grades (A's + B's) = h
• Number of C's = c
• Number of D's = d
What is the maximum-likelihood estimate of µ now?
(Remember: P(A) = 1/2, P(B) = µ, P(C) = 2µ, P(D) = 1/2 − 3µ.)
We can answer this question circularly:
EXPECTATION: If we knew the value of µ, we could compute the expected values of a and b. Since the ratio a : b should be the same as the ratio 1/2 : µ,
$a = \frac{1/2}{1/2 + \mu}\, h \qquad b = \frac{\mu}{1/2 + \mu}\, h$
MAXIMIZATION: If we knew the expected values of a and b, we could compute the maximum-likelihood value of µ:
$\mu = \frac{b + c}{6(b + c + d)}$

E.M. for our Trivial Problem
We begin with a guess for µ, then iterate between EXPECTATION and MAXIMIZATION to improve our estimates of µ and of a and b.
Define µ(t) = the estimate of µ on the t'th iteration, and b(t) = the estimate of b on the t'th iteration.
  µ(0) = initial guess
  E-step:  $b^{(t)} = \frac{\mu^{(t)} h}{1/2 + \mu^{(t)}} = E[b \mid \mu^{(t)}]$
  M-step:  $\mu^{(t+1)} = \frac{b^{(t)} + c}{6(b^{(t)} + c + d)}$ = maximum-likelihood estimate of µ given b(t)
Continue iterating until converged.
Good news: convergence to a local optimum is assured.
Bad news: I said "local" optimum.
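The E-step/M-step recursion above is simple enough to run directly. Below is a minimal Python sketch (the function name em_grades and the printing are my own, not from the slides); with the numbers used on the convergence slide that follows (h = 20, c = 10, d = 10, µ(0) = 0) it settles at µ ≈ 0.0948.

```python
# Minimal sketch of EM for the grades problem.
# Model: P(A)=1/2, P(B)=mu, P(C)=2*mu, P(D)=1/2-3*mu; only h = (#A's + #B's), c, d are observed.

def em_grades(h, c, d, mu0=0.0, iters=10):
    """Alternate E-step (expected b given mu) and M-step (closed-form MLE of mu given b)."""
    mu = mu0
    for t in range(iters):
        b = mu * h / (0.5 + mu)              # E-step: E[b | mu], splitting h in the ratio 1/2 : mu
        mu = (b + c) / (6.0 * (b + c + d))   # M-step: max-likelihood mu given b, c, d
        print(f"t={t + 1}  b={b:.3f}  mu={mu:.4f}")
    return mu

em_grades(h=20, c=10, d=10, mu0=0.0, iters=6)   # converges to mu ~ 0.0948
```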
E.M. Convergence
• Convergence proof is based on the fact that Prob(data | µ) must increase or remain the same between each iteration [NOT OBVIOUS]
• But it can never exceed 1 [OBVIOUS]
• So it must therefore converge [OBVIOUS]
In our example, suppose we had h = 20, c = 10, d = 10, and µ(0) = 0:

  t    µ(t)      b(t)
  0    0         0
  1    0.0833    2.857
  2    0.0937    3.158
  3    0.0947    3.185
  4    0.0948    3.187
  5    0.0948    3.187
  6    0.0948    3.187

Convergence is generally linear: the error decreases by a constant factor each time step.

Back to Unsupervised Learning of GMMs – a simple case
A simple case:
• We have unlabeled data x_1, x_2, …, x_m
• We know there are k classes
• We know P(y=1), P(y=2), …, P(y=k)
• We don't know µ_1, µ_2, …, µ_k
We can write the likelihood of the data as
$P(\text{data} \mid \mu_1 \ldots \mu_k) = p(x_1 \ldots x_m \mid \mu_1 \ldots \mu_k) = \prod_{j=1}^{m} p(x_j \mid \mu_1 \ldots \mu_k)$
$\quad = \prod_{j=1}^{m} \sum_{i=1}^{k} p(x_j \mid \mu_i)\, P(y=i) \;\propto\; \prod_{j=1}^{m} \sum_{i=1}^{k} \exp\!\left(-\frac{1}{2\sigma^2} \lVert x_j - \mu_i \rVert^2\right) P(y=i)$

EM for simple case of GMMs: The E-step
If we know µ_1, …, µ_k, we can easily compute the probability that point x_j belongs to class y=i:
$p(y=i \mid x_j, \mu_1 \ldots \mu_k) \propto \exp\!\left(-\frac{1}{2\sigma^2} \lVert x_j - \mu_i \rVert^2\right) P(y=i)$

EM for simple case of GMMs: The M-step
If we know the probability that point x_j belongs to class y=i, the MLE for µ_i is a weighted average — imagine k copies of each x_j, each with weight P(y=i | x_j):
$\mu_i = \frac{\sum_{j=1}^{m} P(y=i \mid x_j)\, x_j}{\sum_{j=1}^{m} P(y=i \mid x_j)}$

E.M. for GMMs
E-step: Compute "expected" classes of all datapoints for each class (just evaluate a Gaussian at x_j):
$p(y=i \mid x_j, \mu_1 \ldots \mu_k) \propto \exp\!\left(-\frac{1}{2\sigma^2} \lVert x_j - \mu_i \rVert^2\right) P(y=i)$
M-step: Compute the maximum-likelihood µ given our data's class membership distributions:
$\mu_i = \frac{\sum_{j=1}^{m} P(y=i \mid x_j)\, x_j}{\sum_{j=1}^{m} P(y=i \mid x_j)}$

E.M. Convergence
• This algorithm is REALLY USED. And in high-dimensional state spaces, too. E.g., Vector Quantization for Speech Data.
• EM is coordinate ascent on an interesting potential function
• Coordinate ascent on a bounded potential function → convergence to a local optimum guaranteed
• See the Neal & Hinton reading on the class webpage

E.M. for axis-aligned GMMs
Iterate. On the t'th iteration let our estimates be
λ_t = { µ_1(t), µ_2(t), …, µ_k(t), Σ_1(t), Σ_2(t), …, Σ_k(t), p_1(t), p_2(t), …, p_k(t) }
where p_i(t) is shorthand for the estimate of P(y=i) on the t'th iteration. "Axis-aligned" means each covariance matrix is diagonal:
$\Sigma = \begin{pmatrix} \sigma_1^2 & 0 & 0 & \cdots & 0 & 0 \\ 0 & \sigma_2^2 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \sigma_3^2 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & \sigma_{m-1}^2 & 0 \\ 0 & 0 & 0 & \cdots & 0 & \sigma_m^2 \end{pmatrix}$
E-step: Compute "expected" classes of all datapoints for each class (just evaluate a Gaussian at x_j):
$P(y=i \mid x_j, \lambda_t) \propto p_i^{(t)}\, p(x_j \mid \mu_i^{(t)}, \Sigma_i^{(t)})$
M-step: Compute the maximum-likelihood µ given our data's class membership distributions:
$\mu_i^{(t+1)} = \frac{\sum_j P(y=i \mid x_j, \lambda_t)\, x_j}{\sum_j P(y=i \mid x_j, \lambda_t)} \qquad p_i^{(t+1)} = \frac{\sum_j P(y=i \mid x_j, \lambda_t)}{m}$
where m = #records.

E.M. for General GMMs
Iterate. On the t'th iteration let our estimates be
λ_t = { µ_1(t), µ_2(t), …, µ_k(t), Σ_1(t), Σ_2(t), …, Σ_k(t), p_1(t), p_2(t), …, p_k(t) }
where p_i(t) is shorthand for the estimate of P(y=i) on the t'th iteration.
E-step: Compute "expected" classes of all datapoints for each class:
$P(y=i \mid x_j, \lambda_t) \propto p_i^{(t)}\, p(x_j \mid \mu_i^{(t)}, \Sigma_i^{(t)})$
M-step: Compute the maximum-likelihood parameters given our data's class membership distributions:
$\mu_i^{(t+1)} = \frac{\sum_j P(y=i \mid x_j, \lambda_t)\, x_j}{\sum_j P(y=i \mid x_j, \lambda_t)}$
$\Sigma_i^{(t+1)} = \frac{\sum_j P(y=i \mid x_j, \lambda_t)\, [x_j - \mu_i^{(t+1)}][x_j - \mu_i^{(t+1)}]^{\top}}{\sum_j P(y=i \mid x_j, \lambda_t)}$
$p_i^{(t+1)} = \frac{\sum_j P(y=i \mid x_j, \lambda_t)}{m}$
where m = #records.
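As a concrete illustration of the E-step/M-step updates above, here is a short numpy sketch of the loop for general GMMs. The function names (em_gmm, gaussian_pdf), the random initialization, and the small 1e-6 regularization added to the covariances are my own choices for a runnable example, not part of the slides.

```python
import numpy as np

def gaussian_pdf(X, mu, Sigma):
    """Density N(x | mu, Sigma) evaluated at each row of X."""
    n = X.shape[1]
    diff = X - mu                                     # (m, n)
    inv = np.linalg.inv(Sigma)
    det = np.linalg.det(Sigma)
    expo = -0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
    return np.exp(expo) / np.sqrt((2 * np.pi) ** n * det)

def em_gmm(X, k, iters=100, seed=0):
    """EM for a general GMM: returns mixing weights p, means mu, covariances Sigma."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(m, size=k, replace=False)]      # initial means: k random data points
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(n) for _ in range(k)])
    p = np.full(k, 1.0 / k)                           # initial estimates of P(y=i)
    for _ in range(iters):
        # E-step: responsibilities P(y=i | x_j, lambda_t), proportional to p_i * N(x_j | mu_i, Sigma_i)
        resp = np.column_stack([p[i] * gaussian_pdf(X, mu[i], Sigma[i]) for i in range(k)])
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood estimates of each component's parameters
        Nk = resp.sum(axis=0)                         # effective number of points per class
        mu = (resp.T @ X) / Nk[:, None]
        for i in range(k):
            diff = X - mu[i]
            Sigma[i] = (resp[:, i, None] * diff).T @ diff / Nk[i] + 1e-6 * np.eye(n)
        p = Nk / m                                    # new estimate of P(y=i)
    return p, mu, Sigma
```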
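A quick way to sanity-check such a sketch is to generate data by the exact recipe from "The General GMM assumption" slide — pick a component with probability P(y=i), then draw from N(µ_i, Σ_i) — and see whether EM roughly recovers the mixture. The numbers below are purely illustrative and assume the em_gmm function from the sketch above.

```python
import numpy as np

rng = np.random.default_rng(1)
# Generate 500 points by the GMM recipe: pick component i with prob P(y=i), then draw from N(mu_i, Sigma_i).
true_p = np.array([0.6, 0.4])
true_mu = np.array([[0.0, 0.0], [4.0, 4.0]])
true_Sigma = np.array([np.eye(2), 0.5 * np.eye(2)])
ys = rng.choice(2, size=500, p=true_p)
X = np.array([rng.multivariate_normal(true_mu[y], true_Sigma[y]) for y in ys])

p_hat, mu_hat, Sigma_hat = em_gmm(X, k=2, iters=50)   # em_gmm from the sketch above
print(np.round(p_hat, 2), np.round(mu_hat, 1))        # should roughly recover the true weights and means
```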