Expectation Maximization
Machine Learning – 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
April 9th, 2007
© 2005-2007 Carlos Guestrin

Gaussian Bayes Classifier Reminder

  P(y = i \mid x_j) = \frac{p(x_j \mid y = i)\, P(y = i)}{p(x_j)}

  P(y = i \mid x_j) \propto \frac{1}{(2\pi)^{m/2}\, \|\Sigma_i\|^{1/2}} \exp\!\left[-\tfrac{1}{2}(x_j - \mu_i)^T \Sigma_i^{-1} (x_j - \mu_i)\right] P(y = i)

Next… back to Density Estimation

What if we want to do density estimation with multimodal or clumpy data?

Marginal likelihood for the general case

Marginal likelihood:

  \prod_{j=1}^m P(x_j) = \prod_{j=1}^m \sum_{i=1}^k P(x_j, y = i)
                       = \prod_{j=1}^m \sum_{i=1}^k \frac{1}{(2\pi)^{m/2}\, \|\Sigma_i\|^{1/2}} \exp\!\left[-\tfrac{1}{2}(x_j - \mu_i)^T \Sigma_i^{-1}(x_j - \mu_i)\right] P(y = i)

Duda & Hart's Example

[Figure: graph of log P(x1, x2, ..., x25 | µ1, µ2) against µ1 (horizontal axis) and µ2 (vertical axis).]
The maximum likelihood is at (µ1 = -2.13, µ2 = 1.668).
There is also a local optimum, very close to the global one, at (µ1 = 2.085, µ2 = -1.257); it corresponds to switching y1 with y2.

Finding the max likelihood µ1, µ2, ..., µk

We can compute P(data | µ1, ..., µk). How do we find the µi's which give maximum likelihood?
- The normal maximum-likelihood trick: set \partial \log P(\ldots)/\partial \mu_i = 0 and solve for the µi's. Here you get non-linear, non-analytically-solvable equations.
- Use gradient descent: slow but doable.
- Use a much faster, cuter, and recently very popular method…

Expectation Maximization

The E.M. Algorithm (DETOUR)

- We'll get back to unsupervised learning soon.
- But now we'll look at an even simpler case with hidden information.
- The EM algorithm:
  - can do trivial things, such as the contents of the next few slides;
  - is an excellent way of doing our unsupervised learning problem, as we'll see;
  - has many, many other uses, including learning BNs with hidden data.

Silly Example

Let the events be "grades in a class":
  w1 = gets an A    P(A) = 1/2
  w2 = gets a B     P(B) = µ
  w3 = gets a C     P(C) = 2µ
  w4 = gets a D     P(D) = 1/2 - 3µ
(Note 0 ≤ µ ≤ 1/6.)
Assume we want to estimate µ from data. In a given class there were a A's, b B's, c C's, and d D's. What's the maximum likelihood estimate of µ given a, b, c, d?

Trivial Statistics

P(A) = 1/2, P(B) = µ, P(C) = 2µ, P(D) = 1/2 - 3µ

  P(a, b, c, d \mid \mu) = K\, (1/2)^a\, \mu^b\, (2\mu)^c\, (1/2 - 3\mu)^d      (K is a constant that does not depend on µ)

  \log P(a, b, c, d \mid \mu) = \log K + a \log\tfrac{1}{2} + b \log\mu + c \log 2\mu + d \log(\tfrac{1}{2} - 3\mu)

For the maximum-likelihood µ, set \partial \log P / \partial \mu = 0:

  \frac{\partial \log P}{\partial \mu} = \frac{b}{\mu} + \frac{2c}{2\mu} - \frac{3d}{1/2 - 3\mu} = 0

which gives the maximum-likelihood estimate

  \mu = \frac{b + c}{6(b + c + d)}

So if the class got

  A    B    C    D
  14   6    9    10

the max-like µ = 1/10. Boring, but true!

Same Problem with Hidden Information

Someone tells us only that
  the number of high grades (A's + B's) = h,
  the number of C's = c,
  the number of D's = d.
What is the max-like estimate of µ now?
(Remember: P(A) = 1/2, P(B) = µ, P(C) = 2µ, P(D) = 1/2 - 3µ.)

We can answer this question circularly:

MAXIMIZATION: If we knew the expected values of a and b, we could compute the maximum-likelihood value of µ:

  \mu = \frac{b + c}{6(b + c + d)}

EXPECTATION: If we knew the value of µ, we could compute the expected values of a and b. Since the ratio a : b should be the same as the ratio 1/2 : µ,

  a = \frac{1/2}{1/2 + \mu}\, h, \qquad b = \frac{\mu}{1/2 + \mu}\, h
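A minimal sketch of this circular computation in Python (our own illustration, not code from the slides; the values h = 20, c = 10, d = 10 and the starting guess µ = 0 are taken from the convergence example a couple of slides below):

```python
# Grades model: P(A) = 1/2, P(B) = mu, P(C) = 2*mu, P(D) = 1/2 - 3*mu.

def expected_a_b(mu, h):
    """Expectation: split the h observed 'high' grades into expected counts
    of A's and B's, in the ratio P(A) : P(B) = 1/2 : mu."""
    a = (0.5 / (0.5 + mu)) * h
    b = (mu / (0.5 + mu)) * h
    return a, b

def max_likelihood_mu(b, c, d):
    """Maximization: closed-form MLE of mu given (expected) counts b, c, d."""
    return (b + c) / (6.0 * (b + c + d))

# One round of the circular reasoning, starting from the guess mu = 0:
mu = 0.0
a, b = expected_a_b(mu, h=20)            # expectation: a = 20, b = 0
mu = max_likelihood_mu(b, c=10, d=10)    # maximization: mu = 10/120 ≈ 0.0833
```

Alternating these two calls is exactly the E.M. iteration formalized on the next slide.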
E.M. for our Trivial Problem

We begin with a guess for µ, and iterate between EXPECTATION and MAXIMIZATION to improve our estimates of µ and of a and b.

Define
  µ(t) = the estimate of µ on the t'th iteration,
  b(t) = the estimate of b on the t'th iteration.

  µ(0) = initial guess

  E-step:   b(t) = \frac{\mu(t)\, h}{1/2 + \mu(t)} = E[b \mid \mu(t)]

  M-step:   \mu(t+1) = \frac{b(t) + c}{6(b(t) + c + d)} = max-like estimate of µ given b(t)

Continue iterating until converged.
Good news: convergence to a local optimum is assured.
Bad news: I said "local" optimum.

E.M. Convergence

The convergence proof is based on the fact that P(data | µ) must increase or stay the same between iterations [NOT OBVIOUS], but it can never exceed 1 [OBVIOUS], so it must converge [OBVIOUS].

In our example, suppose we had h = 20, c = 10, d = 10, and µ(0) = 0:

  t    µ(t)     b(t)
  0    0        0
  1    0.0833   2.857
  2    0.0937   3.158
  3    0.0947   3.185
  4    0.0948   3.187
  5    0.0948   3.187
  6    0.0948   3.187

Convergence is generally linear: the error decreases by a constant factor each time step.

Back to Unsupervised Learning of GMMs – a simple case

A simple case:
- We have unlabeled data x1, x2, ..., xm.
- We know there are k classes.
- We know P(y1), P(y2), ..., P(yk).
- We don't know µ1, µ2, ..., µk.
Each class is modeled as a spherical Gaussian with the same known variance σ², so only the means are unknown.

We can write

  P(\text{data} \mid \mu_1 \ldots \mu_k) = p(x_1 \ldots x_m \mid \mu_1 \ldots \mu_k)
    = \prod_{j=1}^m p(x_j \mid \mu_1 \ldots \mu_k)
    = \prod_{j=1}^m \sum_{i=1}^k p(x_j \mid \mu_i)\, P(y = i)
    \propto \prod_{j=1}^m \sum_{i=1}^k \exp\!\left(-\frac{1}{2\sigma^2}\|x_j - \mu_i\|^2\right) P(y = i)

EM for the simple case of GMMs: The E-step

If we know µ1, ..., µk, we can easily compute the probability that point x_j belongs to class y = i:

  p(y = i \mid x_j, \mu_1 \ldots \mu_k) \propto \exp\!\left(-\frac{1}{2\sigma^2}\|x_j - \mu_i\|^2\right) P(y = i)

EM for the simple case of GMMs: The M-step

If we know the probability that point x_j belongs to class y = i, the MLE for µ_i is a weighted average: imagine k copies of each x_j, each with weight P(y = i | x_j). Then

  \mu_i = \frac{\sum_{j=1}^m P(y = i \mid x_j)\, x_j}{\sum_{j=1}^m P(y = i \mid x_j)}

E.M. for GMMs

E-step: Compute the "expected" classes of all datapoints for each class (just evaluate a Gaussian at x_j):

  p(y = i \mid x_j, \mu_1 \ldots \mu_k) \propto \exp\!\left(-\frac{1}{2\sigma^2}\|x_j - \mu_i\|^2\right) P(y = i)

M-step: Compute the max-like µ given our data's class membership distributions:

  \mu_i = \frac{\sum_{j=1}^m P(y = i \mid x_j)\, x_j}{\sum_{j=1}^m P(y = i \mid x_j)}

E.M. Convergence

This algorithm is REALLY USED, and in high-dimensional state spaces too, e.g. vector quantization for speech data.
- EM is coordinate ascent on an interesting potential function.
- Coordinate ascent on a bounded potential function → convergence to a local optimum is guaranteed.
- See the Neal & Hinton reading on the class webpage.

E.M. for General GMMs

Iterate. On the t'th iteration let our estimates be

  \lambda_t = \{\, \mu_1(t), \ldots, \mu_k(t),\ \Sigma_1(t), \ldots, \Sigma_k(t),\ p_1(t), \ldots, p_k(t) \,\}

E-step: Compute the "expected" classes of all datapoints for each class.
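The extracted notes end part-way through this slide. For the general case, the E-step computes class responsibilities using the full Gaussian N(x_j; µ_i(t), Σ_i(t)) weighted by p_i(t), and the M-step re-estimates the means, covariances, and class priors from those responsibilities. The sketch below is our own illustration of these standard general-GMM updates; it is not code from the lecture, and all names in it are ours:

```python
import numpy as np

def em_gmm(X, k, n_iters=100, seed=0):
    """EM for a general Gaussian mixture: estimates means mu_i,
    covariances Sigma_i, and class priors p_i from unlabeled data X (m x d)."""
    m, d = X.shape
    rng = np.random.default_rng(seed)

    # Initialization: k random data points as means, identity covariances, uniform priors.
    mu = X[rng.choice(m, size=k, replace=False)]
    Sigma = np.array([np.eye(d) for _ in range(k)])
    p = np.full(k, 1.0 / k)

    for _ in range(n_iters):
        # E-step: responsibilities R[j, i] = P(y = i | x_j, lambda_t),
        # proportional to p_i * N(x_j; mu_i, Sigma_i), normalized over i.
        R = np.empty((m, k))
        for i in range(k):
            diff = X - mu[i]
            inv = np.linalg.inv(Sigma[i])
            norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma[i]))
            quad = np.einsum('jd,de,je->j', diff, inv, diff)  # (x_j - mu_i)^T Sigma_i^{-1} (x_j - mu_i)
            R[:, i] = p[i] * np.exp(-0.5 * quad) / norm
        R /= R.sum(axis=1, keepdims=True)

        # M-step: weighted MLEs, treating R[j, i] as fractional class counts.
        Nk = R.sum(axis=0)              # "effective" number of points in each class
        p = Nk / m
        mu = (R.T @ X) / Nk[:, None]
        for i in range(k):
            diff = X - mu[i]
            Sigma[i] = (R[:, i, None] * diff).T @ diff / Nk[i]

    return mu, Sigma, p
```

In practice one usually adds a small ridge term to each covariance (e.g. Sigma[i] += 1e-6 * np.eye(d)) to keep it invertible, and monitors the log-likelihood, which by the convergence argument above can only increase or stay the same.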