Machine Learning 10-701
Tom M. Mitchell
Machine Learning Department
Carnegie Mellon University
October 17, 2006

Computational Learning Theory, Part 2
VC dimension, Sample Complexity, Mistake bounds

Required reading:
• Mitchell, chapter 7
Optional advanced reading:
• Kearns & Vazirani, "An Introduction to Computational Learning Theory"

Last time: PAC Learning

1. Finite H, assume target function c ∈ H. Suppose we want the probability that a consistent learner outputs a hypothesis with true error greater than ε to be at most δ. Then m examples suffice, where

    m ≥ (1/ε) (ln|H| + ln(1/δ))

2. Finite H, agnostic learning: perhaps c is not in H. With probability at least (1−δ), every h in H satisfies

    error_true(h) ≤ error_train(h) + √( (ln|H| + ln(1/δ)) / (2m) )

(A numeric sketch of both bounds appears after the summary below.)

What if H is not finite?
• Can't use our result for finite H
• Need some other measure of complexity for H
  – the Vapnik-Chervonenkis (VC) dimension!

H shatters a set of instances if, for every possible labeling of those instances, some hypothesis in H produces exactly that labeling. VC(H) is the size of the largest finite subset of X shattered by H.
[figure: an example hypothesis class that shatters a set of 3 points but no set of 4, so VC(H) = 3]

Sample Complexity based on VC dimension

How many randomly drawn examples suffice to ε-exhaust VS_{H,D} with probability at least (1−δ)? I.e., to guarantee that any hypothesis that perfectly fits the training data is probably (1−δ) approximately (ε) correct:

    m ≥ (1/ε) (4 log₂(2/δ) + 8 VC(H) log₂(13/ε))

Compare to our earlier result based on |H|:

    m ≥ (1/ε) (ln|H| + ln(1/δ))

VC dimension: examples

Consider X = ℝ; we want to learn c: X → {0,1}. What is the VC dimension of:
• Open intervals (thresholds on the line):
  – H1: h(x) = 1 iff x > a. VC(H1) = 1
  – H2: either side of the threshold may be the positive one. VC(H2) = 2
• Closed intervals:
  – H3: h(x) = 1 iff a < x < b. VC(H3) = 2
  – H4: the inside of the interval may be labeled positive or negative. VC(H4) = 3
(A brute-force shattering check for the closed intervals appears below.)

VC dimension: examples (continued)

Consider X = ℝ²; we want to learn c: X → {0,1}. What is the VC dimension of lines in the plane?
• H = { h : h(x) = 1 iff w·x + b > 0 }
  – VC(H) = 3
  – More generally, for linear separating hyperplanes in n dimensions, VC(H) = n + 1

Exercise: for any finite hypothesis space H, give an upper bound on VC(H) in terms of |H|. (Hint: shattering d points requires at least 2^d distinct hypotheses, so VC(H) ≤ log₂|H|.)

More VC Dimension Examples
• Decision trees defined over n boolean features, F: <X1, ..., Xn> → Y
• Decision trees defined over n continuous features, where each internal tree node involves a threshold test (Xi > c)
• Decision trees of depth 2 defined over n features
• Logistic regression over n continuous features? Over n boolean features?
• How about 1-nearest neighbor?

How tight is this bound?

How many examples m suffice to assure that any hypothesis that fits the training data perfectly is probably (1−δ) approximately (ε) correct?

    m ≥ (1/ε) (4 log₂(2/δ) + 8 VC(H) log₂(13/ε))

Tightness of Bounds on Sample Complexity

Lower bound on sample complexity (Ehrenfeucht et al., 1989): Consider any class C of concepts such that VC(C) ≥ 2, any learner L, any 0 < ε < 1/8, and any 0 < δ < 0.01. Then there exists a distribution and a target concept in C such that if L observes fewer examples than

    max[ (1/ε) log(1/δ), (VC(C) − 1) / (32ε) ]

then with probability at least δ, L outputs a hypothesis h with error_D(h) > ε. (The upper and lower bounds are compared numerically below.)

Agnostic Learning: VC Bounds [Schölkopf and Smola, 2002]

With probability at least (1−δ), every h ∈ H satisfies

    error_true(h) ≤ error_train(h) + √( (VC(H) (ln(2m/VC(H)) + 1) + ln(4/δ)) / m )

Structural Risk Minimization [Vapnik]

Which hypothesis space should we choose? Bias / variance tradeoff: consider nested spaces H1 ⊂ H2 ⊂ H3 ⊂ H4.
SRM: choose the H that minimizes the bound on true error, i.e., training error plus the VC confidence term above* (a small code sketch follows below).
* unfortunately a somewhat loose bound...

What You Should Know
• Sample complexity varies with the learning setting
  – Learner actively queries trainer
  – Examples provided at random
• Within the PAC learning setting, we can bound the probability that the learner will output a hypothesis with a given error
  – For ANY consistent learner (the case where c ∈ H)
  – For ANY "best fit" hypothesis (agnostic learning, where perhaps c is not in H)
• VC dimension as a measure of the complexity of H
• Quantitative bounds characterizing the bias/variance tradeoff in the choice of H
  – but the bounds are quite loose...
• Mistake bounds in learning
• Conference on Learning Theory (COLT)
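To make the two finite-|H| bounds from "Last time: PAC Learning" concrete, here is a minimal Python sketch; the function names and the conjunction example are mine, not from the slides:

    import math

    def pac_sample_size(h_size, epsilon, delta):
        """Examples sufficient so that, with prob. >= 1-delta, every h in H
        consistent with the data has true error <= epsilon:
        m >= (1/eps)(ln|H| + ln(1/delta))."""
        return math.ceil((math.log(h_size) + math.log(1 / delta)) / epsilon)

    def agnostic_error_gap(h_size, m, delta):
        """Width of the agnostic bound: with prob. >= 1-delta, every h in H
        has error_true(h) <= error_train(h)
                             + sqrt((ln|H| + ln(1/delta)) / (2m))."""
        return math.sqrt((math.log(h_size) + math.log(1 / delta)) / (2 * m))

    # Example: conjunctions over n=10 boolean literals, so |H| = 3^10
    print(pac_sample_size(3 ** 10, epsilon=0.1, delta=0.05))  # 140 examples
    print(agnostic_error_gap(3 ** 10, m=1000, delta=0.05))    # ~0.084

Note the logarithmic dependence on |H|: squaring the size of the hypothesis space only doubles the ln|H| term.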
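The interval examples can be verified by brute force: a point set is shattered iff every one of its 2^n labelings is produced by some hypothesis. A sketch for the closed-interval class H3 ("positive inside [lo, hi]"); the helper names are mine. Candidate endpoints placed in the gaps between the sorted points suffice to realize every labeling this class can achieve on those points:

    from itertools import combinations_with_replacement

    def shatters(points, hypotheses):
        """True iff every one of the 2^n labelings of `points` is produced
        by some hypothesis (each hypothesis maps a point to 0 or 1)."""
        realized = {tuple(h(x) for x in points) for h in hypotheses}
        return len(realized) == 2 ** len(points)

    def closed_interval_hyps(points):
        """Candidate '1 inside [lo, hi], else 0' hypotheses, with endpoints
        placed in the gaps around the sorted points."""
        p = sorted(points)
        cuts = ([p[0] - 1] + [(a + b) / 2 for a, b in zip(p, p[1:])]
                + [p[-1] + 1])
        return [lambda x, lo=lo, hi=hi: int(lo <= x <= hi)
                for lo, hi in combinations_with_replacement(cuts, 2)]

    pts2, pts3 = [1.0, 2.0], [1.0, 2.0, 3.0]
    print(shatters(pts2, closed_interval_hyps(pts2)))  # True  -> VC(H3) >= 2
    print(shatters(pts3, closed_interval_hyps(pts3)))  # False: labeling 1,0,1
                                                       # is unrealizable

No 3-point set can be shattered, because the middle of any three points can never be labeled 0 while both neighbors are labeled 1; this confirms VC(H3) = 2.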
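To see how tight (or loose) the VC-based upper bound is, one can compare it numerically with the Ehrenfeucht et al. lower bound. A sketch, with function names of my own choosing:

    import math

    def vc_upper(vc, epsilon, delta):
        """Sufficient sample size for any consistent learner:
        m >= (1/eps)(4 log2(2/delta) + 8 VC(H) log2(13/eps))."""
        return math.ceil((4 * math.log2(2 / delta)
                          + 8 * vc * math.log2(13 / epsilon)) / epsilon)

    def vc_lower(vc, epsilon, delta):
        """Ehrenfeucht et al. (1989): with fewer examples than this, some
        distribution and target force error > eps with prob. >= delta."""
        return math.ceil(max(math.log(1 / delta) / epsilon,
                             (vc - 1) / (32 * epsilon)))

    # Linear separators in the plane: VC = 3
    print(vc_lower(3, 0.05, 0.01), vc_upper(3, 0.05, 0.01))  # 93 4463

Here the necessary and sufficient sample sizes differ by a factor of roughly 50, which illustrates why the summary calls the bounds quite loose.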
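Finally, a sketch of the SRM rule: among nested hypothesis spaces, pick the one minimizing training error plus the VC confidence term from the agnostic bound above. The training errors and VC dimensions below are invented purely for illustration:

    import math

    def vc_confidence(vc, m, delta):
        """Complexity term of the agnostic VC bound:
        sqrt((VC(H)(ln(2m/VC(H)) + 1) + ln(4/delta)) / m)."""
        return math.sqrt((vc * (math.log(2 * m / vc) + 1)
                          + math.log(4 / delta)) / m)

    def srm_choice(train_errors, vc_dims, m, delta):
        """Index of the hypothesis space minimizing the bound on true error,
        plus the bound for each space."""
        bounds = [err + vc_confidence(vc, m, delta)
                  for err, vc in zip(train_errors, vc_dims)]
        return min(range(len(bounds)), key=bounds.__getitem__), bounds

    # Richer spaces H1..H4 fit the training data better but pay a larger
    # confidence penalty (numbers are hypothetical, not from the slides).
    idx, bounds = srm_choice([0.20, 0.10, 0.05, 0.04], [3, 10, 50, 200],
                             m=1000, delta=0.05)
    print(idx, [round(b, 3) for b in bounds])  # 1 [0.364, 0.36, 0.539, 0.855]

In this toy run H2 wins: H3 and H4 achieve lower training error, but their larger VC dimensions inflate the bound on true error.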