PAC-learning, VC Dimension and Margin-based Bounds (cont.)
Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
March 5th, 2007

A simple setting…
- Classification
- m data points
- Finite number of possible hypotheses (e.g., decision trees of depth d)
- A learner finds a hypothesis h that is consistent with the training data
  - Gets zero error in training: error_train(h) = 0
- What is the probability that h has more than ε true error?
  - error_true(h) ≥ ε

But there are many possible hypotheses that are consistent with the training data

Union bound
- P(A or B or C or D or …) ≤ P(A) + P(B) + P(C) + P(D) + …

How likely is the learner to pick a bad hypothesis?
- The probability that a "bad" h, one with error_true(h) ≥ ε, gets all m data points right is at most (1 - ε)^m ≤ e^{-εm}
- There are k hypotheses consistent with the data
- How likely is the learner to pick a bad one? By the union bound, at most k e^{-εm}

Review: Generalization error in finite hypothesis spaces [Haussler '88]
- Theorem: Hypothesis space H finite, dataset D with m i.i.d. samples, 0 < ε < 1. For any learned hypothesis h that is consistent on the training data:
  P(error_true(h) > ε) ≤ |H| e^{-mε}

Using a PAC bound
- Typically, 2 use cases:
  - 1: Pick ε and δ, get the required m
  - 2: Pick m and δ, get the resulting ε

Review: Generalization error in finite hypothesis spaces [Haussler '88]
- Theorem: Hypothesis space H finite, dataset D with m i.i.d. samples, 0 < ε < 1. For any learned hypothesis h that is consistent on the training data:
  P(error_true(h) > ε) ≤ |H| e^{-mε}
- Even if h makes zero errors on the training data, it may make errors at test time

Limitations of the Haussler '88 bound
- Requires a consistent classifier
- Depends on the size of the hypothesis space

What if our classifier does not have zero error on the training data?
- A learner with zero training error may still make mistakes on the test set
- What about a learner with nonzero training error error_train(h)?

Simpler question: What's the expected error of a hypothesis?
- Estimating the error of a hypothesis is like estimating the parameter of a coin!
- Chernoff bound: for m i.i.d. coin flips x_1, …, x_m, where x_i ∈ {0,1} and P(x_i = 1) = θ, for 0 < ε < 1:
  P(θ - (1/m) Σ_i x_i > ε) ≤ e^{-2mε²}

Using the Chernoff bound to estimate the error of a single hypothesis
- P(error_true(h) - error_train(h) > ε) ≤ e^{-2mε²}

But we are comparing many hypotheses: Union bound
- For each hypothesis h_i: P(error_true(h_i) - error_train(h_i) > ε) ≤ e^{-2mε²}
- What if I am comparing two hypotheses, h_1 and h_2?

Generalization bound for |H| hypotheses
- Theorem: Hypothesis space H finite, dataset D with m i.i.d. samples, 0 < ε < 1. For any learned hypothesis h:
  P(error_true(h) - error_train(h) > ε) ≤ |H| e^{-2mε²}

PAC bound and the Bias-Variance tradeoff
- Or, after moving some terms around, with probability at least 1 - δ:
  error_true(h) ≤ error_train(h) + sqrt( (ln|H| + ln(1/δ)) / (2m) )
- Important: the PAC bound holds for all h, but it doesn't guarantee that the algorithm finds the best h!!!

What about the size of the hypothesis space?
- How large is the hypothesis space?

Boolean formulas with n binary features
- There are 2^(2^n) distinct Boolean functions of n binary features, so ln|H| = 2^n ln 2

Number of decision trees of depth k
- Recursive solution, given n attributes:
  - H_k = number of decision trees of depth k
  - H_0 = 2
  - H_{k+1} = (# choices of root attribute) × (# possible left subtrees) × (# possible right subtrees) = n · H_k · H_k
- Write L_k = log_2 H_k:
  - L_0 = 1
  - L_{k+1} = log_2 n + 2 L_k
  - So L_k = (2^k - 1)(1 + log_2 n) + 1

PAC bound for decision trees of depth k
- Plug ln|H| = ((2^k - 1)(1 + log_2 n) + 1) ln 2 into the bound above
- Bad!!! The number of points needed is exponential in the depth!
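To make the two use cases concrete, here is a small Python sketch (not from the original slides; the function names and the example values of n, k, m, ε, and δ are illustrative) that plugs the depth-k decision-tree hypothesis count into both bounds: the Haussler '88 sample-complexity bound for a consistent learner, and the agnostic-case error bound.

```python
import math

def haussler_m(log_size_h, epsilon, delta):
    """Samples needed so a consistent hypothesis has error_true <= epsilon
    with probability >= 1 - delta:  m >= (ln|H| + ln(1/delta)) / epsilon."""
    return math.ceil((log_size_h + math.log(1.0 / delta)) / epsilon)

def agnostic_epsilon(log_size_h, m, delta):
    """Gap between true and training error for any h, w.p. >= 1 - delta:
    epsilon = sqrt((ln|H| + ln(1/delta)) / (2m))."""
    return math.sqrt((log_size_h + math.log(1.0 / delta)) / (2.0 * m))

def log_num_trees_depth_k(n, k):
    """ln|H| for decision trees of depth k over n attributes, using the
    slide's recursion: log2 H_k = (2^k - 1)(1 + log2 n) + 1."""
    log2_hk = (2 ** k - 1) * (1 + math.log2(n)) + 1
    return log2_hk * math.log(2)   # convert log base 2 to natural log

if __name__ == "__main__":
    log_h = log_num_trees_depth_k(n=20, k=5)
    # Use case 1: pick epsilon and delta, get m (consistent learner).
    print(haussler_m(log_h, epsilon=0.1, delta=0.05))
    # Use case 2: pick m and delta, get epsilon (agnostic learner).
    print(agnostic_epsilon(log_h, m=10000, delta=0.05))
```

Working with ln|H| rather than |H| itself keeps the computation numerically sane, since the hypothesis counts here are doubly exponential.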
But, for m data points, the decision tree can't get too big…
- The number of leaves is never more than the number of data points

Number of decision trees with k leaves
- H_k = number of decision trees with k leaves
- H_0 = 2
- Loose bound:
- Reminder:

PAC bound for decision trees with k leaves – Bias-Variance revisited

Announcements
- Midterm on Wednesday
- Open book and notes, no other material
- Bring a calculator
- No laptops, PDAs or cellphones

What did we learn from decision trees?
- Bias-Variance tradeoff formalized
- Moral of the story: the complexity of learning is not measured by the size of the hypothesis space, but by the maximum number of points that allows consistent classification
  - Complexity of m points – no bias, lots of variance
  - Fewer than m – some bias, less variance

What about continuous hypothesis spaces?
- Continuous hypothesis space: |H| = ∞
- Infinite variance???
- As with decision trees, we only care about the maximum number of points that can be classified exactly!

How many points can a linear boundary classify exactly? (1-D)

How many points can a linear boundary classify exactly? (2-D)

How many points can a linear boundary classify exactly? (d-D)

PAC bound using VC dimension
- The number of training points that can be classified exactly is the VC dimension!!!
- Measures the relevant size of the hypothesis space, as with decision trees with k leaves

Shattering a set of points

VC dimension

PAC bound using VC dimension
- The number of training points that can be classified exactly is the VC dimension!!!
- Measures the relevant size of the hypothesis space, as with decision trees with k leaves
- Bound for infinite hypothesis spaces, with probability at least 1 - δ:
  error_true(h) ≤ error_train(h) + sqrt( (VC(H)(ln(2m/VC(H)) + 1) + ln(4/δ)) / m )
  (a numeric sketch of this bound appears at the end of these notes)

Examples of VC dimension
- Linear classifiers: VC(H) = d + 1, for d features plus the constant term b
- Neural networks: VC(H) = # of parameters
  - Local minima mean NNs will probably not find the best parameters
- 1-Nearest neighbor?

Another VC dim. example – What can we shatter?
- What's the VC dimension of decision stumps in 2 dimensions?

Another VC dim. example – What can't we shatter?
- What's the VC dimension of decision stumps in 2 dimensions?

What you need to know
- Finite hypothesis spaces
  - Derive the results
  - Counting the number of hypotheses
  - Mistakes on the training data
- The complexity of the classifier depends on the number of points that can be classified exactly
  - Finite case – decision trees
  - Infinite case – VC dimension
- Bias-Variance tradeoff in learning theory
- Remember: will your algorithm find the best classifier?

Big Picture
Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
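As a closing aside, here is a small Python sketch of the VC-dimension generalization bound quoted above, assuming the standard Vapnik form of the bound; the function name and the example values (a linear classifier with d = 10 features, so VC(H) = d + 1 = 11) are illustrative, not from the slides.

```python
import math

def vc_bound_gap(vc_dim, m, delta):
    """Gap between true and training error, w.p. >= 1 - delta, assuming the
    standard Vapnik bound:
    error_true(h) <= error_train(h)
        + sqrt( (VC(H) * (ln(2m / VC(H)) + 1) + ln(4 / delta)) / m )."""
    return math.sqrt((vc_dim * (math.log(2.0 * m / vc_dim) + 1.0)
                      + math.log(4.0 / delta)) / m)

if __name__ == "__main__":
    # Linear classifier with d = 10 features: VC(H) = d + 1 = 11.
    for m in (100, 1000, 10000):
        print(m, round(vc_bound_gap(vc_dim=11, m=m, delta=0.05), 3))
```

Unlike the finite-|H| bound, the "size" term here grows with the VC dimension rather than with ln|H|, so the bound stays meaningful even when the hypothesis space is continuous.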