PAC-learning, VC Dimension (cont.)
Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
November 2nd, 2009
©Carlos Guestrin 2005-2009

Review: Generalization error in finite hypothesis spaces [Haussler '88]

Theorem: For a finite hypothesis space H, a dataset D with m i.i.d. samples, and 0 < ε < 1, for any learned hypothesis h that is consistent with the training data:

  P( error_true(h) > ε ) ≤ |H| e^{−mε}

Even if h makes zero errors on the training data, it may still make errors at test time.

Using a PAC bound

Typically, two use cases:
- 1: Pick ε and δ; the bound gives you m
- 2: Pick m and δ; the bound gives you ε

Setting |H| e^{−mε} ≤ δ and solving gives m ≥ (ln|H| + ln(1/δ)) / ε, or equivalently ε ≥ (ln|H| + ln(1/δ)) / m.

Limitations of Haussler '88 bound
- Requires a consistent classifier (zero training error)
- Depends on the size of the hypothesis space, which may be infinite

PAC bound and Bias-Variance tradeoff

When h is not consistent with the training data, a Hoeffding-style argument gives, for all h in H:

  P( error_true(h) − error_train(h) > ε ) ≤ |H| e^{−2mε²}

or, after moving some terms around, with probability at least 1 − δ:

  error_true(h) ≤ error_train(h) + sqrt( (ln|H| + ln(1/δ)) / (2m) )

Important: the PAC bound holds for all h in H, but it doesn't guarantee that the algorithm finds the best h!!!

PAC bound for decision trees of depth k

Bad!!! The number of such trees, and hence the number of points needed, is exponential in the depth: ln|H_k| grows roughly as 2^k. But for m data points, a decision tree can't get too big… the number of leaves is never more than the number of data points.

PAC bound for decision trees with k leaves – Bias-Variance revisited

Counting trees by the number of leaves instead of depth, ln|H_k| grows only linearly in k, so the bound error_true(h) ≤ error_train(h) + sqrt((ln|H_k| + ln(1/δ)) / (2m)) trades off bias (fewer leaves, higher training error) against variance (more leaves, looser bound).

What did we learn from decision trees?
- Bias-Variance tradeoff formalized
- Moral of the story: the complexity of learning is measured not by the size of the hypothesis space, but by the maximum number of points that allows consistent classification
- Complexity m – no bias, lots of variance
- Lower than m – some bias, less variance

What about continuous hypothesis spaces?
- Continuous hypothesis space: |H| = ∞
- Infinite variance???
- As with decision trees, we only care about the maximum number of points that can be classified exactly!

How many points can a linear boundary classify exactly? (1-D)
Two: a threshold can realize every labeling of 2 points, but not the alternating labeling (+, −, +) of 3 points.

How many points can a linear boundary classify exactly? (2-D)
Three: a line can realize all 8 labelings of 3 non-collinear points, but no line realizes the XOR labeling of 4 points.

How many points can a linear boundary classify exactly? (d-D)
d + 1 points.

PAC bound using VC dimension
- The number of training points that can be classified exactly is the VC dimension!!!
- It measures the relevant size of the hypothesis space, as with decision trees with k leaves

Shattering a set of points
A hypothesis space H shatters a set of points if, for every possible labeling of those points, some h in H classifies all of them correctly.

VC dimension
The VC dimension of H is the size of the largest set of points that H can shatter.

PAC bound using VC dimension
Bound for infinite hypothesis spaces: with probability at least 1 − δ,

  error_true(h) ≤ error_train(h) + sqrt( ( VC(H) (ln(2m/VC(H)) + 1) + ln(4/δ) ) / m )

Examples of VC dimension
- Linear classifiers: VC(H) = d + 1, for d features plus the constant term b
- Neural networks: VC(H) = #parameters — but local minima mean NNs will probably not find the best parameters
- 1-Nearest neighbor? VC(H) = ∞: 1-NN classifies any training set perfectly, so zero training error carries no guarantee

Another VC dim. example – What can we shatter?
What's the VC dim. of decision stumps in 2d? Three well-placed points can be shattered.

Another VC dim. example – What can't we shatter?
What's the VC dim. of decision stumps in 2d? No set of 4 points can be shattered — an XOR-style arrangement defeats any single-coordinate threshold — so the VC dimension is 3.
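To make the two use cases concrete, here is a minimal Python sketch of the finite-H bounds above (function names are mine, not from the lecture):

```python
import math

def pac_sample_size(h_size, epsilon, delta):
    """Use case 1: pick epsilon and delta, get m.
    Samples needed so a consistent h has error_true <= epsilon
    with probability >= 1 - delta (finite H, Haussler '88)."""
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / epsilon)

def pac_error_bound(h_size, m, delta):
    """Use case 2: pick m and delta, get epsilon (consistent case)."""
    return (math.log(h_size) + math.log(1 / delta)) / m

def pac_error_bound_inconsistent(train_error, h_size, m, delta):
    """Hoeffding-style bound when h is not consistent:
    error_true <= error_train + sqrt((ln|H| + ln(1/delta)) / (2m))."""
    return train_error + math.sqrt(
        (math.log(h_size) + math.log(1 / delta)) / (2 * m))

# Example: |H| = 10,000 hypotheses, 95% confidence.
print(pac_sample_size(10_000, epsilon=0.1, delta=0.05))         # 123 samples
print(pac_error_bound(10_000, m=1_000, delta=0.05))             # ~0.0122
print(pac_error_bound_inconsistent(0.05, 10_000, 1_000, 0.05))  # ~0.128
```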
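The VC-dimension bound for infinite hypothesis spaces can be evaluated the same way; again a numerical illustration, not library code:

```python
import math

def vc_error_bound(train_error, vc_dim, m, delta):
    """error_true(h) <= error_train(h)
       + sqrt((VC(H)(ln(2m/VC(H)) + 1) + ln(4/delta)) / m)."""
    complexity = vc_dim * (math.log(2 * m / vc_dim) + 1) + math.log(4 / delta)
    return train_error + math.sqrt(complexity / m)

# Linear classifier with d = 10 features: VC(H) = d + 1 = 11.
print(vc_error_bound(train_error=0.05, vc_dim=11, m=10_000, delta=0.05))  # ~0.149
```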
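The decision-stump question can also be settled by brute force: enumerate every labeling a 2-D stump can produce on a point set and check whether all 2^n labelings appear. A small sketch, with point layouts of my choosing:

```python
from itertools import product

def stump_labelings(points):
    """All labelings of `points` achievable by 2-D decision stumps
    (threshold one coordinate, assign +/- to each side)."""
    labelings = set()
    for axis in (0, 1):
        coords = sorted({p[axis] for p in points})
        # Thresholds below, between, and above the observed coordinates.
        thresholds = ([coords[0] - 1]
                      + [(a + b) / 2 for a, b in zip(coords, coords[1:])]
                      + [coords[-1] + 1])
        for t, sign in product(thresholds, (+1, -1)):
            labelings.add(tuple(sign if p[axis] > t else -sign for p in points))
    return labelings

def shattered(points):
    """True iff stumps realize all 2^n labelings of the points."""
    return len(stump_labelings(points)) == 2 ** len(points)

print(shattered([(0, 0), (1, 1), (2, 0)]))          # True: 3 points shattered
print(shattered([(0, 1), (1, 0), (1, 2), (2, 1)]))  # False: XOR-style 4 points
```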
What you need to know
- Finite hypothesis spaces: how to derive the results by counting the number of hypotheses and the mistakes on training data
- The complexity of the classifier depends on the number of points that can be classified exactly:
  - Finite case – decision trees
  - Infinite case – VC dimension
- The Bias-Variance tradeoff in learning theory
- Remember: will your algorithm find the best classifier?

Bayesian Networks – Representation
Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
November 2nd, 2009

Handwriting recognition
Character recognition, e.g., kernel SVMs
[figure: examples of handwritten characters]

Webpage classification
Company home page vs. personal home page vs. university home page vs. …

Handwriting recognition 2

Webpage classification 2

Today – Bayesian networks
- One of the most exciting advancements in statistical AI in the last 10-15 years
- Generalizes naïve Bayes and logistic regression classifiers
- Compact representation for exponentially-large probability distributions
- Exploits conditional independencies

Causal structure
Suppose we know the following:
- The flu causes sinus inflammation
- Allergies cause sinus inflammation
- Sinus inflammation causes a runny nose
- Sinus inflammation causes headaches
How are these connected?

Possible queries
[network: Flu and Allergy are parents of Sinus; Sinus is the parent of Headache and Nose]
- Inference
- Most probable explanation
- Active data collection

Car starts BN
- 18 binary attributes
- Inference: P(BatteryAge | Starts = f)
- 2^16 terms — why so fast?
- Not impressed? The HailFinder BN has more than 3^54 = 58149737003040059690390169 terms

Factored joint distribution – Preview
For the Flu/Allergy/Sinus/Headache/Nose network:

  P(F, A, S, H, N) = P(F) P(A) P(S | F, A) P(H | S) P(N | S)

Number of parameters
The full joint over five binary variables needs 2^5 − 1 = 31 parameters; the factored form needs only 1 + 1 + 4 + 2 + 2 = 10.

Key: Independence assumptions
Knowing Sinus separates the other variables from each other.

(Marginal) Independence
Flu and Allergy are (marginally) independent. More generally, the joint table over Flu and Allergy (rows Allergy = t/f, columns Flu = t/f) is the product of the two marginal tables: each entry is P(Flu = ·) P(Allergy = ·).

Marginally independent random variables
- Sets of variables X, Y
- X is independent of Y if P ⊨ (X = x ⊥ Y = y), ∀x ∈ Val(X), ∀y ∈ Val(Y)
- Shorthand — marginal independence: P ⊨ (X ⊥ Y)
- Proposition: P satisfies (X ⊥ Y) if and only if P(X, Y) = P(X) P(Y)

Conditional independence
- Flu and Headache are not (marginally) independent
- Flu and Headache are independent given Sinus infection
- More generally:

Conditionally independent random variables
- Sets of variables X, Y, Z
- X is independent of Y given Z if P ⊨ (X = x ⊥ Y = y | Z = z), ∀x ∈ Val(X), ∀y ∈ Val(Y), ∀z ∈ Val(Z)
- Shorthand — conditional independence: P ⊨ (X ⊥ Y | Z)
- For P ⊨ (X ⊥ Y | ∅), write P ⊨ (X ⊥ Y)
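A minimal Python sketch of the factored joint for the Flu/Allergy/Sinus/Headache/Nose network; the CPT numbers are invented for illustration:

```python
from itertools import product

# Hypothetical CPTs (all probabilities made up; variables are binary).
P_F = 0.1                                            # P(Flu = True)
P_A = 0.2                                            # P(Allergy = True)
P_S = {(True, True): 0.9, (True, False): 0.7,
       (False, True): 0.6, (False, False): 0.05}     # P(Sinus=True | Flu, Allergy)
P_H = {True: 0.8, False: 0.1}                        # P(Headache=True | Sinus)
P_N = {True: 0.7, False: 0.05}                       # P(Nose=True | Sinus)

def bern(p_true, value):
    """Probability that a binary variable takes `value`, given P(True)."""
    return p_true if value else 1 - p_true

def joint(f, a, s, h, n):
    """P(F,A,S,H,N) = P(F) P(A) P(S|F,A) P(H|S) P(N|S)."""
    return (bern(P_F, f) * bern(P_A, a) * bern(P_S[(f, a)], s)
            * bern(P_H[s], h) * bern(P_N[s], n))

# Sanity check: the factored joint sums to 1 over all 2^5 assignments.
print(sum(joint(*v) for v in product([True, False], repeat=5)))  # 1.0
# Parameters: 1 + 1 + 4 + 2 + 2 = 10, vs 2^5 - 1 = 31 for the full joint table.
```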
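Continuing that sketch (it reuses `joint` and `product` from the block above), we can verify numerically that Flu and Headache are dependent marginally but independent given Sinus:

```python
# Variable order: 0=Flu, 1=Allergy, 2=Sinus, 3=Headache, 4=Nose.
def marginal(query):
    """Sum the joint over all assignments consistent with `query`,
    a dict mapping variable index -> required value."""
    return sum(joint(*v) for v in product([True, False], repeat=5)
               if all(v[i] == val for i, val in query.items()))

# Marginally, Flu and Headache are dependent:
diff = marginal({0: True, 3: True}) - marginal({0: True}) * marginal({3: True})
print(abs(diff) > 1e-9)       # True: P(F,H) != P(F) P(H)

# Conditioned on Sinus, they become independent:
p_s = marginal({2: True})
lhs = marginal({0: True, 3: True, 2: True}) / p_s
rhs = (marginal({0: True, 2: True}) / p_s) * (marginal({3: True, 2: True}) / p_s)
print(abs(lhs - rhs) < 1e-9)  # True: P(F,H|S) = P(F|S) P(H|S)
```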