Param. Learning (MLE) & Structure Learning: The Good
Graphical Models – 10-708
Carlos Guestrin, Carnegie Mellon University
October 1st, 2008
Readings: K&F: 16.1, 16.2, 17.1, 17.2, 17.3.1, 17.4.1

Slides 2–3: Learning the CPTs

Data: x(1), …, x(m), complete assignments to all variables. For each discrete variable X_i, estimate each CPT entry by the empirical frequency:

  \hat{P}(X_i = x \mid \mathrm{Pa}_{X_i} = \mathbf{u}) = \frac{\mathrm{Count}(x, \mathbf{u})}{\mathrm{Count}(\mathbf{u})}

Why is counting the right thing to do? The next slides justify it as the maximum likelihood estimate.

Slide 4: Maximum likelihood estimation (MLE) of BN parameters – example

For the network Flu → Sinus ← Allergy, Sinus → Nose, given the structure, the log likelihood of the data splits into one term per CPT:

  \log P(D \mid \theta) = \sum_{j=1}^{m} \left[ \log \theta_{f^{(j)}} + \log \theta_{a^{(j)}} + \log \theta_{s^{(j)} \mid f^{(j)}, a^{(j)}} + \log \theta_{n^{(j)} \mid s^{(j)}} \right]

Slide 5: Maximum likelihood estimation (MLE) of BN parameters – general case

Data: x(1), …, x(m). Notation: x(j)[Pa_{X_i}] is the restriction of x(j) to Pa_{X_i}, i.e., the assignment to X_i's parents in sample j. Given the structure G, the log likelihood of the data is

  \log P(D \mid \theta, G) = \sum_{j=1}^{m} \sum_{i=1}^{n} \log P\left(x_i^{(j)} \mid \mathbf{x}^{(j)}[\mathrm{Pa}_{X_i}]\right) = \sum_{i=1}^{n} \left[ \sum_{j=1}^{m} \log \theta_{x_i^{(j)} \mid \mathbf{x}^{(j)}[\mathrm{Pa}_{X_i}]} \right],

so the likelihood decomposes into independent terms, one per CPT, and each can be maximized separately.

Slide 6: Taking derivatives of MLE of BN parameters – general case

Each CPT term has the form \sum_{x, \mathbf{u}} \mathrm{Count}(x, \mathbf{u}) \log \theta_{x \mid \mathbf{u}}, maximized subject to \sum_{x} \theta_{x \mid \mathbf{u}} = 1 for every parent assignment u. Setting the derivative of the Lagrangian to zero,

  \frac{\mathrm{Count}(x, \mathbf{u})}{\theta_{x \mid \mathbf{u}}} - \lambda_{\mathbf{u}} = 0 \quad \Rightarrow \quad \theta_{x \mid \mathbf{u}} \propto \mathrm{Count}(x, \mathbf{u}).

Slide 7: General MLE for a CPT

Take a CPT P(X | U). The log-likelihood term for this CPT is \sum_{x, \mathbf{u}} \mathrm{Count}(x, \mathbf{u}) \log \theta_{x \mid \mathbf{u}}, and the MLE of parameter \theta_{X = x \mid \mathbf{U} = \mathbf{u}} is

  \hat{\theta}_{x \mid \mathbf{u}} = \frac{\mathrm{Count}(x, \mathbf{u})}{\sum_{x'} \mathrm{Count}(x', \mathbf{u})} = \frac{\mathrm{Count}(x, \mathbf{u})}{\mathrm{Count}(\mathbf{u})}

(a code sketch of this counting estimator follows the next slide).

Slide 8: Where are we with learning BNs?

- Given structure, estimate parameters: maximum likelihood estimation now, Bayesian learning later.
- What about learning the structure?
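To make the counting estimator concrete, here is a minimal Python sketch (not from the lecture; the function name mle_cpt, the dict-based data representation, and the toy dataset are all illustrative assumptions) that computes the MLE of a single CPT P(X | U) from complete data by counting:

```python
from collections import Counter

def mle_cpt(data, child, parents):
    """MLE of the CPT P(child | parents) from complete data.

    data: list of dicts mapping variable name -> value (one dict per sample).
    Returns a dict (parent_assignment, child_value) -> estimated probability,
    i.e. theta_{x|u} = Count(x, u) / Count(u).
    """
    joint = Counter()   # Count(x, u)
    marg = Counter()    # Count(u)
    for sample in data:
        u = tuple(sample[p] for p in parents)
        x = sample[child]
        joint[(u, x)] += 1
        marg[u] += 1
    return {(u, x): c / marg[u] for (u, x), c in joint.items()}

# Toy complete data for the Flu/Allergy/Sinus example (values invented for illustration)
data = [
    {"Flu": 1, "Allergy": 0, "Sinus": 1},
    {"Flu": 1, "Allergy": 0, "Sinus": 0},
    {"Flu": 0, "Allergy": 1, "Sinus": 1},
    {"Flu": 0, "Allergy": 0, "Sinus": 0},
]
cpt = mle_cpt(data, child="Sinus", parents=["Flu", "Allergy"])
print(cpt)  # e.g. theta_{Sinus=1 | Flu=1, Allergy=0} = 1/2
```

Because the likelihood decomposes per CPT (slide 5), running this once per variable yields the full MLE for the network.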
Slide 9: Learning the structure of a BN

[Figure: from data <x1(1), …, xn(1)>, …, <x1(m), …, xn(m)>, learn both the structure and the parameters of the Flu/Allergy/Sinus/Headache/Nose network.]

- Constraint-based approach: a BN encodes conditional independencies, so test conditional independencies in the data and find an I-map.
- Score-based approach: finding a structure and parameters is a density estimation task, so evaluate models the same way we evaluated parameters: maximum likelihood, Bayesian, etc.

Slide 10: Remember: obtaining a P-map?

Given the independence assertions that are true for P: obtain the skeleton, obtain the immoralities, and from the skeleton and immoralities obtain every (and any) BN structure in the equivalence class. The constraint-based approach uses the Learn-PDAG algorithm; the key question is the independence test.

Slide 11: Score-based approach

[Figure: data <x1(1), …, xn(1)>, …, <x1(m), …, xn(m)> together with the possible structures; score each structure and learn its parameters.]

Slides 12–13: Information-theoretic interpretation of maximum likelihood

Given structure G, plugging the MLE parameters into the log likelihood and regrouping counts into empirical distributions gives

  \frac{1}{m} \log P(D \mid \hat{\theta}_G, G) = \sum_{i} \hat{I}(X_i; \mathrm{Pa}_{X_i}) - \sum_{i} \hat{H}(X_i),

where \hat{I} is empirical mutual information and \hat{H} is empirical entropy. The entropy term is independent of G, so maximizing likelihood amounts to choosing, for each variable, parents that carry high mutual information with it.

Slide 14: Decomposable score

The log data likelihood is a decomposable score: it decomposes over families in the BN (a node and its parents),

  \mathrm{Score}(G : D) = \sum_i \mathrm{FamScore}(X_i \mid \mathrm{Pa}_{X_i} : D),

which will lead to significant computational efficiency.

Slide 15: Announcements

- Recitation tomorrow: don't miss it!
- HW2: out today, due in two weeks.
- Projects: proposals due Oct. 8th in class, individually or in groups of two; details on the course website; project suggestions will be up soon.

Slide 16: BN code release

Pre-release of a C++ library for probabilistic inference and learning. Features:
- basic data structures (random variables, processes, linear algebra)
- distributions (Gaussian, multinomial, …)
- basic graph structures (directed, undirected)
- graphical models (Bayesian networks, MRFs, junction trees)
- inference algorithms (variable elimination, loopy belief propagation, filtering)
- a limited amount of learning (IPF, Chow-Liu, order-based search)
Supported platforms: Linux (tested on Ubuntu 8.04), Mac OS X (tested on 10.4/10.5), and limited Windows support. It will be made available to the class early next week.

Slide 17: How many trees are there?

Over n labeled variables there are n^{n-2} undirected trees (Cayley's formula), far too many to enumerate. Nonetheless, an efficient algorithm finds the optimal tree.

Slide 18: Scoring a tree 1: I-equivalent trees

In a directed tree every node has at most one parent, so no immoralities are possible; all orientations of the same undirected tree are therefore I-equivalent and receive the same likelihood score.

Slide 19: Scoring a tree 2: similar trees

By the information-theoretic interpretation, the likelihood score of a tree is

  m \sum_{(i,j) \in \mathrm{edges}} \hat{I}(X_i; X_j) - m \sum_i \hat{H}(X_i).

Only the mutual-information terms depend on which edges the tree contains, so trees sharing edges share score terms.

Slide 20: Chow-Liu tree learning algorithm 1

For each pair of variables X_i, X_j:
- compute the empirical distribution \hat{P}(x_i, x_j) = \mathrm{Count}(x_i, x_j) / m;
- compute the mutual information \hat{I}(X_i; X_j) = \sum_{x_i, x_j} \hat{P}(x_i, x_j) \log \frac{\hat{P}(x_i, x_j)}{\hat{P}(x_i)\, \hat{P}(x_j)}.

Define a graph with nodes X_1, …, X_n in which edge (i, j) gets weight m \hat{I}(X_i; X_j).

Slide 21: Chow-Liu tree learning algorithm 2

Optimal tree BN: compute the maximum-weight spanning tree of that graph. Directions in the BN: pick any node as root; breadth-first search defines the edge directions. (A code sketch of the full procedure appears at the end of this section.)

Slide 22: Can we extend Chow-Liu? 1

Tree-augmented naïve Bayes (TAN) [Friedman et al. '97]: the naïve Bayes model overcounts evidence because correlation between features is not considered. TAN is the same as Chow-Liu, but scores edges with the conditional mutual information given the class, \hat{I}(X_i; X_j \mid C) (also sketched at the end of this section).

Slide 23: Can we extend Chow-Liu? 2

(Approximately) learning models with tree-width up to k [Chechetka & Guestrin '07], but at cost O(n^{2k+6}).

Slide 24: What you need to know about learning BN structures so far

In brief: the likelihood score decomposes over families; for trees this yields the Chow-Liu algorithm, which finds the optimal tree BN efficiently; TAN and bounded-tree-width methods extend it.
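To make slides 20–21 concrete, here is a minimal pure-Python sketch of the Chow-Liu procedure (not the course's C++ library; the function names, the tuple-based data representation, and the choice of Kruskal's algorithm with union-find for the spanning tree are my assumptions). The slides weight edge (i, j) by m·Î(X_i; X_j); positive scaling does not change the spanning tree, so the sketch uses Î directly.

```python
import math
from collections import Counter, deque
from itertools import combinations

def empirical_mi(data, i, j):
    """Empirical mutual information I(X_i; X_j) from complete data.

    data: list of tuples, one full assignment per sample.
    """
    m = len(data)
    pij = Counter((row[i], row[j]) for row in data)
    pi = Counter(row[i] for row in data)
    pj = Counter(row[j] for row in data)
    mi = 0.0
    for (xi, xj), c in pij.items():
        # p(xi,xj) * log[ p(xi,xj) / (p(xi) p(xj)) ], rewritten in raw counts
        mi += (c / m) * math.log(c * m / (pi[xi] * pj[xj]))
    return mi

def chow_liu(data, n):
    """Return directed edges (parent, child) of the Chow-Liu tree over X_0..X_{n-1}.

    Maximum-weight spanning tree on MI weights (Kruskal with union-find),
    then edges oriented away from an arbitrary root (node 0) by BFS.
    """
    weights = {(i, j): empirical_mi(data, i, j) for i, j in combinations(range(n), 2)}
    parent_uf = list(range(n))
    def find(a):
        while parent_uf[a] != a:              # path-halving union-find
            parent_uf[a] = parent_uf[parent_uf[a]]
            a = parent_uf[a]
        return a
    tree = []
    for (i, j) in sorted(weights, key=weights.get, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:                          # edge joins two components: keep it
            parent_uf[ri] = rj
            tree.append((i, j))
            if len(tree) == n - 1:
                break
    adj = {v: [] for v in range(n)}
    for i, j in tree:
        adj[i].append(j)
        adj[j].append(i)
    directed, seen, queue = [], {0}, deque([0])
    while queue:                              # BFS from the root defines directions
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                directed.append((u, v))
                queue.append(v)
    return directed

# Toy usage: X0 and X1 perfectly correlated, X2 independent of both
data = [(0, 0, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1)]
print(chow_liu(data, n=3))  # [(0, 1), (0, 2)], i.e. X0 -> X1 and X0 -> X2
```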
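For the TAN extension on slide 22, only the edge weight changes: score feature pairs by their conditional mutual information given the class. A sketch under the same assumptions (the function name and the rearrangement into raw counts are mine):

```python
import math
from collections import Counter

def conditional_mi(data, i, j, c):
    """Empirical conditional mutual information I(X_i; X_j | C),
    where column c of each sample holds the class label.

    Uses I(Xi; Xj | C) = sum P(xi,xj,c) log[ P(xi,xj|c) / (P(xi|c) P(xj|c)) ],
    which in raw counts is (n_ijc/m) * log[ n_ijc * n_c / (n_ic * n_jc) ].
    """
    m = len(data)
    n_ijc = Counter((row[i], row[j], row[c]) for row in data)
    n_ic = Counter((row[i], row[c]) for row in data)
    n_jc = Counter((row[j], row[c]) for row in data)
    n_c = Counter(row[c] for row in data)
    cmi = 0.0
    for (xi, xj, xc), cnt in n_ijc.items():
        cmi += (cnt / m) * math.log(cnt * n_c[xc] / (n_ic[(xi, xc)] * n_jc[(xj, xc)]))
    return cmi
```

Plugging these weights into the same spanning-tree step, restricted to the feature variables, yields the TAN tree over the features.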