Bayesian Networks – Structure Learning (cont.)
Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
April 3rd, 2006

Readings:
- Koller & Friedman chapters (handed out): Chapter 11 (short); Chapter 12: 12.1, 12.2, 12.3 (covered in the beginning of the semester) and 12.4 (learning parameters for BNs); Chapter 13: 13.1, 13.3.1, 13.4.1, 13.4.3 (basic structure learning)
- Learning BN tutorial (class website): ftp://ftp.research.microsoft.com/pub/tr/tr-95-06.pdf
- TAN paper (class website): http://www.cs.huji.ac.il/~nir/Abstracts/FrGG1.html

Learning Bayes nets

Two axes define the learning problem: known vs. unknown structure, and fully observable vs. missing data. Given data x(1), …, x(M), we learn the structure and the parameters, i.e. the CPTs P(Xi | PaXi).

Learning the CPTs

Given data x(1), …, x(M), for each discrete variable Xi estimate each CPT entry by counting:

  P̂(Xi = xi | PaXi = u) = Count(xi, u) / Count(u)

Why? Because this count ratio is exactly the maximum likelihood estimate of the conditional probability. (A code sketch of this counting estimator follows below, after the model-selection slide.)

Information-theoretic interpretation of maximum likelihood

Given the structure G, the log likelihood of the data decomposes over variables and data points (running example: the Flu / Allergy / Sinus / Headache / Nose network):

  log P(D | θ, G) = Σi Σm=1..M log P(xi(m) | paXi(m))

Maximum likelihood (ML) for learning BN structure

Given data <x1(1), …, xn(1)>, …, <x1(M), …, xn(M)>: enumerate the possible structures, and score each one by learning its parameters using ML.

Information-theoretic interpretation of maximum likelihood 2

Plugging in the ML (count-ratio) parameters and grouping identical configurations turns the sum over data points into a sum over value assignments:

  log P(D | θ̂, G) = M Σi Σxi,pai P̂(xi, pai) log P̂(xi | pai)

Information-theoretic interpretation of maximum likelihood 3

Rewriting each family's term using the empirical mutual information and entropy:

  log P(D | θ̂, G) = M Σi Î(Xi, PaXi) − M Σi Ĥ(Xi)

The entropy terms do not depend on the graph, so the ML structure is the one whose families carry the most empirical mutual information.

Mutual information → Independence tests

Testing independence from data is a statistically difficult task! Intuitive approach: mutual information, computed from the empirical distribution:

  Î(Xi, Xj) = Σxi,xj P̂(xi, xj) log [ P̂(xi, xj) / (P̂(xi) P̂(xj)) ]

Mutual information and independence: Xi and Xj are independent if and only if I(Xi, Xj) = 0. Conditional mutual information is defined analogously, conditioning every term on Xk:

  Î(Xi, Xj | Xk) = Σxi,xj,xk P̂(xi, xj, xk) log [ P̂(xi, xj | xk) / (P̂(xi | xk) P̂(xj | xk)) ]

Decomposable score

The log data likelihood decomposes into one term per family, so a candidate structure can be scored family by family:

  score(G : D) = Σi FamScore(Xi, PaXi : D)

Scoring a tree 1: equivalent trees

For a tree, the ML score is M Σ(i,j)∈tree Î(Xi, Xj) − M Σi Ĥ(Xi). The score depends only on which undirected edges are present, so trees with the same skeleton (e.g. different choices of root) are equivalent: they receive the same score.

Scoring a tree 2: similar trees

Two trees that differ in a few edges differ in score only through those edges' mutual-information weights, so the best tree is the one whose edges carry the most total mutual information.

Chow-Liu tree learning algorithm 1

For each pair of variables Xi, Xj:
- Compute the empirical distribution: P̂(xi, xj) = Count(xi, xj) / M
- Compute the mutual information Î(Xi, Xj)
Define a complete graph with nodes X1, …, Xn, where edge (i, j) gets weight Î(Xi, Xj).

Chow-Liu tree learning algorithm 2

Optimal tree BN:
- Compute the maximum-weight spanning tree of that graph.
- Directions in the BN: pick any node as root; breadth-first search from it defines the edge directions.
(A code sketch of the full procedure follows below.)

Can we extend Chow-Liu? 1

Tree-augmented naive Bayes (TAN) [Friedman et al. '97]: the naive Bayes model overcounts evidence because correlation between features is not considered. TAN is the same as Chow-Liu, but edges are scored with the conditional mutual information given the class variable C: Î(Xi, Xj | C).

Can we extend Chow-Liu? 2

(Approximately) learning models with tree-width up to k [Narasimhan & Bilmes '04]. But the cost is O(n^(k+1))…

Scoring general graphical models – Model selection problem

What's the best structure for data <x1(1), …, xn(1)>, …, <x1(M), …, xn(M)>? The more edges, the fewer independence assumptions, and the higher the likelihood of the data, but the model will overfit…
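To make the counting estimator and the decomposable score above concrete, here is a minimal Python sketch. It is my own code, not the course's: the function names and the dict-of-lists structure representation are assumptions, and it includes the BIC penalty that the upcoming slides motivate.

```python
import numpy as np
from collections import Counter
from math import log

def family_loglik(data, child, parents):
    # One family's term of the decomposable ML score: plug the count-ratio
    # (maximum likelihood) CPT entries P(X_child | parents) back into the data.
    cp, p = Counter(), Counter()
    for row in data:
        pa = tuple(row[q] for q in parents)
        cp[(row[child], pa)] += 1
        p[pa] += 1
    return sum(c * log(c / p[pa]) for (_, pa), c in cp.items())

def bic_score(data, parents_of):
    # Decomposable BIC score: ML log-likelihood minus (log M / 2) * Dim(G),
    # where Dim(G) counts the free CPT parameters from the variable arities.
    m, n = data.shape
    arity = [len(set(data[:, i])) for i in range(n)]
    loglik = sum(family_loglik(data, i, parents_of[i]) for i in range(n))
    dim = sum((arity[i] - 1) * int(np.prod([arity[q] for q in parents_of[i]]))
              for i in range(n))
    return loglik - 0.5 * log(m) * dim
```

Because both functions only ever touch one family at a time, any single-edge change to the structure requires rescoring just the affected families; this is exactly the property the decomposable-score slide highlights.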
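Likewise, the Chow-Liu procedure is fully specified by the two slides above: mutual-information edge weights plus a maximum-weight spanning tree. A minimal sketch under the same caveats (my own names; Prim's algorithm is used so the code stays dependency-free):

```python
import numpy as np
from collections import Counter

def empirical_mi(x, y):
    # Empirical mutual information I(X; Y) between two discrete columns.
    m = len(x)
    joint, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / m) * np.log((c / m) / ((px[a] / m) * (py[b] / m)))
               for (a, b), c in joint.items())

def chow_liu_edges(data):
    # data: (M, n) array of discrete values. Weight every pair by mutual
    # information, then grow a maximum-weight spanning tree (Prim's algorithm).
    n = data.shape[1]
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w[i, j] = w[j, i] = empirical_mi(data[:, i], data[:, j])
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # Pick the heaviest edge crossing from the tree to the rest.
        parent, child = max(((i, j) for i in in_tree
                             for j in range(n) if j not in in_tree),
                            key=lambda e: w[e])
        edges.append((parent, child))   # oriented away from the root, node 0
        in_tree.add(child)
    return edges
```

Node 0 is chosen as root arbitrarily; as the equivalent-trees slide notes, any other root gives a tree with the same skeleton and the same score.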
Maximum likelihood overfits!

Information never hurts: adding a parent never decreases the ML score, and in practice it almost always increases it, so unpenalized ML always prefers a denser graph.

Bayesian score avoids overfitting

Given a structure, put a distribution over its parameters and score by the marginal likelihood:

  log P(D | G) = log ∫ P(D | θ, G) P(θ | G) dθ

This is a difficult integral: use the Bayes information criterion (BIC) approximation, which is equivalent as M → ∞:

  log P(D | G) ≈ log P(D | θ̂, G) − (log M / 2) · Dim(G)

Note: the penalty term regularizes like the MDL score. Finding the best BN under BIC is still NP-hard.

How many graphs are there?

With n variables there are n(n−1)/2 possible edges, hence at least 2^(n(n−1)/2) candidate graphs: far too many to search exhaustively.

Structure learning for general graphs

In a tree, a node has only one parent. Theorem: the problem of learning a BN structure with at most d parents is NP-hard for any (fixed) d ≥ 2. Most structure-learning approaches therefore use heuristics that exploit score decomposition. Next, we (quickly) describe two heuristics that exploit decomposition in different ways.

Learn BN structure using local search

Starting from the Chow-Liu tree, run local search, scoring candidates with BIC. Possible moves:
- Add an edge
- Delete an edge
- Invert (reverse) an edge
(A code sketch of this loop appears after the K-means slide below.)

What you need to know about learning BNs

- Maximum likelihood or MAP learns the parameters
- Decomposable score
- Best tree: Chow-Liu
- Best TAN
- Other BNs: usually local search with the BIC score

Unsupervised learning or Clustering – K-means, Gaussian mixture models

Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
April 3rd, 2006

Some Data

[Figure: a scatter plot of unlabeled data points.]

K-means

1. Ask user how many clusters they'd like.
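Tying together the BIC score and the move set from the local-search slide above, here is a minimal hill-climbing skeleton. This is my own sketch, not the course's code: parents_of is an assumed dict-of-lists representation, score_fn can be the bic_score sketch from earlier, and an explicit cycle check rejects moves that would break acyclicity.

```python
import random

def creates_cycle(parents_of, child, new_parent):
    # True if the edge new_parent -> child would create a directed cycle,
    # i.e. if child is already an ancestor of new_parent.
    stack, seen = [new_parent], set()
    while stack:
        node = stack.pop()
        if node == child:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(parents_of[node])
    return False

def local_search(data, parents_of, score_fn, n_steps=500, seed=0):
    # Greedy hill climbing over structures: propose a random single-edge
    # move (add / delete / reverse) and keep it only if the score improves.
    rng = random.Random(seed)
    nodes = list(parents_of)
    best = score_fn(data, parents_of)
    for _ in range(n_steps):
        i, j = rng.sample(nodes, 2)
        undo = {v: list(ps) for v, ps in parents_of.items()}
        if i in parents_of[j]:
            parents_of[j].remove(i)                    # delete edge i -> j ...
            if rng.random() < 0.5 and not creates_cycle(parents_of, i, j):
                parents_of[i].append(j)                # ... or reverse it
        elif not creates_cycle(parents_of, j, i):
            parents_of[j].append(i)                    # add edge i -> j
        score = score_fn(data, parents_of)
        if score > best:
            best = score
        else:
            parents_of.update(undo)                    # reject: undo the move
    return parents_of, best
```

To match the slide, initialize parents_of from the Chow-Liu tree's edges rather than from an empty graph.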
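Step 1 above asks the user for k; the standard iteration that follows alternates two steps, assigning each point to its nearest center and then moving each center to the mean of the points it owns, until the centers stop moving. A minimal NumPy sketch of that loop (all names are mine, not the slides'):

```python
import numpy as np

def kmeans(points, k, n_iters=100, seed=0):
    # Initialize the k centers at randomly chosen data points.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point finds the center it is closest to.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center jumps to the centroid of the points it
        # owns (an empty cluster keeps its old center).
        new_centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):   # converged: centers stopped moving
            return new_centers, labels
        centers = new_centers
    return centers, labels
```

Usage: centers, labels = kmeans(X, k=5) for a 2-D float array X like the scatter-plot data above.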