Readings
- Koller & Friedman, chapters handed out:
  - Chapter 11 (short)
  - Chapter 12: 12.1, 12.2, 12.3 (covered at the beginning of the semester); 12.4 (learning parameters for BNs)
  - Chapter 13: 13.1, 13.3.1, 13.4.1, 13.4.3 (basic structure learning)
- Learning BN tutorial (class website): ftp://ftp.research.microsoft.com/pub/tr/tr-95-06.pdf
- TAN paper (class website): http://www.cs.huji.ac.il/~nir/Abstracts/FrGG1.html

Bayesian Networks: Structure Learning (cont.)
Machine Learning 10-701/15-781
Carlos Guestrin, Carnegie Mellon University, April 3rd, 2006

Learning Bayes nets
- Known structure vs. unknown structure
- Fully observable data vs. missing data
- Data x^(1), ..., x^(m) -> structure and parameters (the CPTs P(Xi | Pa_Xi))

Learning the CPTs
- For each discrete variable Xi, estimate P(Xi | Pa_Xi) by maximum likelihood from counts in the data x^(1), ..., x^(M). Why?

Information-theoretic interpretation of maximum likelihood
- Given the structure, write out the log-likelihood of the data (example BN: Flu, Allergy, Sinus, Headache, Nose)

Maximum likelihood (ML) for learning BN structure
- Consider possible structures (over Flu, Allergy, Sinus, Headache, Nose); learn parameters using ML, then score each structure
- Data: (x_1^(1), ..., x_n^(1)), ..., (x_1^(M), ..., x_n^(M))

Information-theoretic interpretation of maximum likelihood, 2 and 3
- With ML parameters, the log-likelihood of the data works out to M * sum_i I^(Xi; Pa_Xi) - M * sum_i H^(Xi): the empirical mutual information between each variable and its parents, minus an entropy term that does not depend on the structure

Mutual information -> independence tests
- A statistically difficult task; intuitive approach: mutual information
- Mutual information and independence: Xi and Xj are independent if and only if I(Xi, Xj) = 0
- Conditional mutual information

Decomposable score
- The log data likelihood decomposes into a sum of per-family terms, one for each variable and its parents

Scoring a tree 1: equivalent trees
Scoring a tree 2: similar trees

Chow-Liu tree learning algorithm 1
- For each pair of variables Xi, Xj: compute the empirical distribution and the mutual information
- Define a graph: nodes X1, ..., Xn; edge (i, j) gets weight I(Xi, Xj)

Chow-Liu tree learning algorithm 2
- Optimal tree BN: compute the maximum-weight spanning tree
- Directions in the BN: pick any node as root; breadth-first search defines the directions
- (A code sketch of the full procedure follows this lecture's outline.)

Can we extend Chow-Liu? 1
- Tree-augmented naive Bayes (TAN) [Friedman et al. '97]
- The naive Bayes model overcounts, because correlation between features is not considered
- Same as Chow-Liu, but score edges with conditional mutual information given the class

Can we extend Chow-Liu? 2
- (Approximately) learning models with tree-width up to k [Narasimhan & Bilmes '04]
- But O(n^(k+1))...

Scoring general graphical models: the model selection problem
- What's the best structure? (Flu, Allergy, Sinus, Headache, Nose)
- Data: (x_1^(1), ..., x_n^(1)), ..., (x_1^(m), ..., x_n^(m))
- The more edges, the fewer independence assumptions and the higher the likelihood of the data, but it will overfit...

Maximum likelihood overfits!
- Information never hurts: adding a parent always increases the score

Bayesian score avoids overfitting
- Given a structure, place a distribution over parameters
- Difficult integral: use the Bayesian information criterion (BIC) approximation (equivalent as M -> infinity)
- Note: regularize with the MDL score
- Finding the best BN under BIC is still NP-hard

How many graphs are there?

Structure learning for general graphs
- In a tree, a node only has one parent
- Theorem: the problem of learning a BN structure with at most d parents is NP-hard for any (fixed) d >= 2
- Most structure learning approaches use heuristics that exploit score decomposition
- (Quickly) describe two heuristics that exploit decomposition in different ways

Learn BN structure using local search
- Start from the Chow-Liu tree
- Local search; possible moves: add edge, delete edge, invert edge
- Score using BIC

What you need to know about learning BNs
- Maximum likelihood or MAP learns the parameters
- Decomposable score
- Best tree: Chow-Liu; best TAN
- Other BNs: usually local search with the BIC score
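To make the two Chow-Liu slides above concrete, here is a minimal Python sketch of the procedure as described: score every pair of variables by empirical mutual information, build a maximum-weight spanning tree, and orient edges by breadth-first search from an arbitrary root. The data layout (an M x n integer array of samples), the function names, and the use of Kruskal's algorithm with a union-find are my own illustrative assumptions, not taken from the course materials.

```python
import numpy as np
from collections import defaultdict, deque

def empirical_mutual_information(data, i, j):
    """Estimate I(Xi; Xj) from counts over the M samples in `data` (M x n array)."""
    M = data.shape[0]
    joint = defaultdict(float)
    for row in data:
        joint[(row[i], row[j])] += 1.0 / M
    pi, pj = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pi[a] += p
        pj[b] += p
    return sum(p * np.log(p / (pi[a] * pj[b])) for (a, b), p in joint.items())

def chow_liu_tree(data):
    """Return directed edges (parent, child) of the maximum-weight spanning tree,
    with edge weights I(Xi; Xj) and directions set by BFS from node 0 as root."""
    n = data.shape[1]
    # Score every pair of variables by empirical mutual information.
    weights = {(i, j): empirical_mutual_information(data, i, j)
               for i in range(n) for j in range(i + 1, n)}
    # Maximum-weight spanning tree: Kruskal with a simple union-find
    # (`uf` holds union-find parents, unrelated to BN parents).
    uf = list(range(n))
    def find(x):
        while uf[x] != x:
            uf[x] = uf[uf[x]]
            x = uf[x]
        return x
    tree = defaultdict(list)
    for (i, j), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            uf[ri] = rj
            tree[i].append(j)
            tree[j].append(i)
    # Orient edges away from an arbitrary root by breadth-first search.
    edges, visited, queue = [], {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in tree[u]:
            if v not in visited:
                visited.add(v)
                edges.append((u, v))  # u becomes the parent of v in the BN
                queue.append(v)
    return edges
```

Given the learned edges, the CPTs P(Xi | Pa_Xi) would then be estimated from counts as on the "Learning the CPTs" slide.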
Unsupervised Learning, or Clustering: K-means and Gaussian mixture models
Machine Learning 10-701/15-781
Carlos Guestrin, Carnegie Mellon University, April 3rd, 2006

Some Data
- (scatter plot of unlabeled datapoints)

K-means
1. Ask the user how many clusters they'd like (e.g., k = 5)
2. Randomly guess k cluster center locations
3. Each datapoint finds out which center it's closest to (thus each center "owns" a set of datapoints)
4. Each center finds the centroid of the points it owns...
5. ...and jumps there
6. Repeat until terminated!
(See the code sketch after these slides.)

Unsupervised Learning
- You walk into a bar. A stranger approaches and tells you: "I've got data from k classes. Each class produces observations with a normal distribution and variance sigma^2 * I. Standard simple multivariate Gaussian assumptions. I can tell you all the P(wi)'s."
- So far, looks straightforward. "I need a maximum likelihood estimate of the mu_i's."
- No problem. "There's just one thing. None of the data are labeled. I have datapoints, but I don't know what class they're from (any of them!)"
- Uh oh!!

Gaussian Bayes Classifier reminder
- P(y = i | x_j) = p(x_j | y = i) P(y = i) / p(x_j)
  = [ 1 / ((2 pi)^(m/2) ||Sigma_i||^(1/2)) ] exp( -(1/2) (x_j - mu_i)^T Sigma_i^(-1) (x_j - mu_i) ) * p_i / p(x_j)
- How do we deal with that?

Predicting wealth from age
- (scatter plot)

Learning the model (year, mpg, maker): how much covariance structure?
- General: O(m^2) parameters
  Sigma = [ s_1^2  s_12 ... s_1m ;  s_12  s_2^2 ... s_2m ;  ... ;  s_1m  s_2m ... s_m^2 ]  (full symmetric covariance)
- Aligned: O(m) parameters
  Sigma = diag(s_1^2, s_2^2, ..., s_m^2)  (all off-diagonal entries are 0)
- Spherical: O(1) covariance parameters
  Sigma = s^2 * I

Next... back to Density Estimation
- What if we want to do density estimation with multimodal or "clumpy" data?

The GMM assumption
- There are k components. The …
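The six K-means steps listed above map directly onto a short loop (Lloyd's algorithm). The sketch below is an illustrative assumption of mine rather than course code: it takes a NumPy array X of M points, initializes centers at k random datapoints, and stops when the centers stop moving, which is one reasonable reading of "repeat until terminated".

```python
import numpy as np

def kmeans(X, k, max_iters=100, tol=1e-6, seed=None):
    rng = np.random.default_rng(seed)
    # Step 2: randomly guess k cluster center locations (here: k random datapoints).
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iters):
        # Step 3: each datapoint finds out which center it is closest to.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        owner = dists.argmin(axis=1)
        # Steps 4-5: each center moves to the centroid of the points it owns
        # (empty clusters keep their old center in this sketch).
        new_centers = np.array([
            X[owner == j].mean(axis=0) if np.any(owner == j) else centers[j]
            for j in range(k)
        ])
        # Step 6: repeat until terminated (here: until the centers stop moving).
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, owner
```

For example, `centers, labels = kmeans(X, k=5)` mirrors the k = 5 example on the slides, returning the final center locations and each datapoint's owning cluster.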