Unsupervised learning or Clustering (cont.) –
K-means and Gaussian mixture models

Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
April 5th, 2006

Some Data

[Figure: 2-D scatterplot of the example data used in the K-means slides.]

K-means

1. Ask the user how many clusters they'd like (e.g., k = 5).
2. Randomly guess k cluster center locations.
3. Each data point finds out which center it's closest to.
4. Each center finds the centroid of the points it owns...
5. ...and jumps there.
6. Repeat until terminated!

More precisely:

- Randomly initialize k centers \mu^{(0)} = \mu_1^{(0)}, \ldots, \mu_k^{(0)}.
- Classify: assign each point j \in \{1, \ldots, m\} to the nearest center:

      C(j) \leftarrow \arg\min_i \|\mu_i - x_j\|^2

- Recenter: \mu_i becomes the centroid of its points:

      \mu_i \leftarrow \arg\min_{\mu} \sum_{j : C(j) = i} \|\mu - x_j\|^2

  Equivalently, \mu_i \leftarrow the average of its points.

What is K-means optimizing?

- A potential function F(\mu, C) of the centers \mu and the point allocations C:

      F(\mu, C) = \sum_{j=1}^{m} \|\mu_{C(j)} - x_j\|^2

- Optimal K-means: \min_\mu \min_C F(\mu, C).

Does K-means converge? Part 1

- Optimize the potential function with \mu fixed: the classify step assigns each point to its nearest center, which minimizes F over C, so F cannot increase.

Does K-means converge? Part 2

- Optimize the potential function with C fixed: the recenter step moves each center to the centroid of its points, which minimizes F over \mu, so again F cannot increase.

Coordinate descent algorithms

- Want: \min_a \min_b F(a, b).
- Coordinate descent: fix a, minimize over b; then fix b, minimize over a; repeat.
- Converges (if F is bounded) to a local optimum, often a good one, as we saw in the applet (play with it!).
- K-means is a coordinate descent algorithm! (A code sketch follows below.)
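The slides describe the algorithm in words; as a concrete companion, here is a minimal NumPy sketch of the classify/recenter loop. The function name, the initialization scheme (sampling k data points as the initial centers), and the stopping test are illustrative choices, not from the slides.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """K-means (Lloyd's algorithm) as on the slides above.

    X: (m, d) data matrix; k: number of clusters.
    Returns centers (k, d) and assignments C (m,).
    """
    rng = np.random.default_rng(seed)
    # "Randomly guess k cluster center locations":
    # here, initialize with k distinct data points (one common choice).
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Classify: C(j) <- argmin_i ||mu_i - x_j||^2
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        C = dists.argmin(axis=1)
        # Recenter: each center jumps to the centroid of the points it owns
        # (keep the old center if a cluster ends up empty).
        new_centers = np.array([
            X[C == i].mean(axis=0) if np.any(C == i) else centers[i]
            for i in range(k)
        ])
        # "Repeat until terminated": stop once no center moves.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, C
```

Because coordinate descent only reaches a local optimum of F, the result depends on the random initialization; a common practice is to run the loop several times and keep the solution with the lowest potential F.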
(One) bad case for K-means

- Clusters may overlap.
- Some clusters may be "wider" than others.

Gaussian Bayes Classifier reminder

    P(y = i \mid x_j) = \frac{p(x_j \mid y = i) \, P(y = i)}{p(x_j)}

    P(y = i \mid x_j) \propto \frac{1}{(2\pi)^{m/2} |\Sigma_i|^{1/2}} \exp\left[ -\frac{1}{2} (x_j - \mu_i)^T \Sigma_i^{-1} (x_j - \mu_i) \right] P(y = i)

Predicting wealth from age

[Two figure-only slides: plots of wealth versus age, motivating the covariance parameterizations below.]

Learning model: year, mpg ---> maker

With a general covariance matrix, the model has O(m^2) parameters:

    \Sigma = \begin{pmatrix}
        \sigma_1^2  & \sigma_{12} & \cdots & \sigma_{1m} \\
        \sigma_{12} & \sigma_2^2  & \cdots & \sigma_{2m} \\
        \vdots      & \vdots      & \ddots & \vdots      \\
        \sigma_{1m} & \sigma_{2m} & \cdots & \sigma_m^2
    \end{pmatrix}

Aligned (diagonal): O(m) parameters

    \Sigma = \begin{pmatrix}
        \sigma_1^2 & 0          & \cdots & 0              & 0          \\
        0          & \sigma_2^2 & \cdots & 0              & 0          \\
        \vdots     & \vdots     & \ddots & \vdots         & \vdots     \\
        0          & 0          & \cdots & \sigma_{m-1}^2 & 0          \\
        0          & 0          & \cdots & 0              & \sigma_m^2
    \end{pmatrix}

Spherical: O(1) covariance parameters

    \Sigma = \sigma^2 I

Next... back to density estimation

What if we want to do density estimation with multimodal or clumpy data?

But we don't see class labels!

- MLE: \max \prod_j P(y_j, x_j).
- But we don't know the y_j's!
- Maximize the marginal likelihood instead:

      \max \prod_j P(x_j) = \max \prod_j \sum_{i=1}^{k} P(y_j = i, x_j)

Special case: spherical Gaussians and hard assignments

- If P(X \mid Y = i) is Gaussian,

      P(x_j \mid y = i) = \frac{1}{(2\pi)^{m/2} |\Sigma_i|^{1/2}} \exp\left[ -\frac{1}{2} (x_j - \mu_i)^T \Sigma_i^{-1} (x_j - \mu_i) \right],

  and spherical with the same \sigma for all classes, this reduces to

      P(x_j \mid y = i) \propto \exp\left[ -\frac{1}{2\sigma^2} \|x_j - \mu_i\|^2 \right].

- If each x_j belongs to exactly one class C(j) (hard assignment), the marginal likelihood becomes

      \prod_{j=1}^{m} \sum_{i=1}^{k} P(y_j = i, x_j) \propto \prod_{j=1}^{m} \exp\left[ -\frac{1}{2\sigma^2} \|x_j - \mu_{C(j)}\|^2 \right].

- Maximizing this over the centers and assignments is minimizing \sum_j \|x_j - \mu_{C(j)}\|^2: the same objective as K-means! (A small numeric check of this equivalence appears at the end of this section.)

The GMM assumption

- There are k components.
- Component i has an associated mean vector \mu_i.
- Each component generates data from a Gaussian with mean \mu_i and covariance matrix \sigma^2 I.
- Each data point is generated according to the following recipe:
  1. Pick a component at random: choose component i with probability P(y = i).
  2. Draw the data point x \sim N(\mu_i, \sigma^2 I).

[Figure: three components with means \mu_1, \mu_2, \mu_3, and a point x drawn from component 2.]

The general GMM assumption

- Same as above, except each component i has its own covariance matrix \Sigma_i, and the data point is drawn x \sim N(\mu_i, \Sigma_i). (A sampling sketch also appears at the end of this section.)

Unsupervised learning: not as hard as it looks

- Sometimes easy.
- Sometimes impossible.
- And sometimes in between.
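Before moving on, here is a tiny numeric check of the "same as K-means" claim above. It is a sketch under the slide's assumptions (shared spherical variance, hard assignments, constant factors and priors dropped); the data, centers, and variance are made-up toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))            # toy data: m = 100 points in 2-D
mu = np.array([[0.0, 0.0], [2.0, 2.0]])  # two fixed centers
sigma2 = 0.7                             # shared spherical variance sigma^2

# Hard assignment C(j): nearest center, exactly the K-means classify step.
d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (m, k) squared distances
C = d2.argmin(axis=1)

# K-means potential F(mu, C) = sum_j ||x_j - mu_{C(j)}||^2
F = d2[np.arange(len(X)), C].sum()

# log of prod_j exp(-||x_j - mu_{C(j)}||^2 / (2 sigma^2)), constants dropped
loglik = -(d2[np.arange(len(X)), C] / (2 * sigma2)).sum()

# The log-likelihood is -F / (2 sigma^2), so maximizing it minimizes F.
assert np.isclose(loglik, -F / (2 * sigma2))
```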
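Finally, to make the GMM generative recipe and the marginal likelihood concrete, here is a minimal sampling-and-scoring sketch. The function names and the use of scipy.stats.multivariate_normal are illustrative choices, not part of the lecture.

```python
import numpy as np
from scipy.stats import multivariate_normal

def sample_gmm(weights, means, covs, n, seed=0):
    """Generate n points by the recipe on the GMM slides:
    1. pick component i with probability P(y = i);
    2. draw x ~ N(mu_i, Sigma_i)."""
    rng = np.random.default_rng(seed)
    ys = rng.choice(len(weights), size=n, p=weights)
    X = np.array([rng.multivariate_normal(means[i], covs[i]) for i in ys])
    return X, ys

def log_marginal_likelihood(X, weights, means, covs):
    """log prod_j sum_i P(y_j = i, x_j) = sum_j log sum_i P(y = i) p(x_j | y = i).
    (Naive summation; scipy.special.logsumexp is the stable choice in practice.)"""
    per_component = np.array([
        w * multivariate_normal.pdf(X, mean=m, cov=c)
        for w, m, c in zip(weights, means, covs)
    ])  # shape (k, n): joint P(y = i, x_j) for every component/point pair
    return np.log(per_component.sum(axis=0)).sum()

# A two-component spherical example (covariance sigma^2 I):
weights = np.array([0.6, 0.4])
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [0.5 * np.eye(2), 0.5 * np.eye(2)]
X, y = sample_gmm(weights, means, covs, n=500)
print(log_marginal_likelihood(X, weights, means, covs))
```

The printed value is the objective from the marginal-likelihood slide above, evaluated at the true parameters; learning reverses this, searching over unknown weights, means, and covariances to maximize it.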