Computational Learning Theory
Machine Learning 10-701
Tom M. Mitchell
Machine Learning Department, Carnegie Mellon University
November 1, 2010

Reading:
• Mitchell, chapter 7
Suggested exercises:
• 7.1, 7.2, 7.5, 7.7

The setting
(Figure: training examples D are instances drawn at random from an unknown probability distribution P(x), labeled by the target concept.)
• The target concept is the (usually unknown) boolean function to be learned, $c: X \rightarrow \{0,1\}$.
• The true error of h is the probability that h misclassifies an instance drawn at random from P(x):
  $error_{true}(h) \equiv \Pr_{x \sim P(x)}[c(x) \neq h(x)]$
• The training error of h is the fraction of the training examples D that h misclassifies. Unlike the training error, the true error cannot be measured directly.
• Question: can we bound $error_{true}(h)$ in terms of $error_{train}(h)$?

If D were a set of examples drawn from P(x) and independent of h, then we could use standard statistical confidence intervals to determine that with 95% probability, $error_{true}(h)$ lies in the interval:
  $error_{train}(h) \pm 1.96\sqrt{\frac{error_{train}(h)\,(1 - error_{train}(h))}{|D|}}$
but D is the training data for h ….

Version spaces and ε-exhaustion
• $VS_{H,D}$ = the set of hypotheses in H consistent with all training examples in D.
• The bound below applies to any(!) learner that outputs a hypothesis consistent with all training examples (i.e., an h contained in $VS_{H,D}$).
• What it means [Haussler, 1988]: the probability that the version space is not ε-exhausted (i.e., still contains some hypothesis with true error greater than ε) after m training examples is at most
  $|H|\,e^{-\epsilon m}$

1. How many training examples suffice? Suppose we want this probability to be at most δ:
  $|H|\,e^{-\epsilon m} \le \delta \quad\Longrightarrow\quad m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)$
2. If $error_{train}(h) = 0$, then with probability at least (1−δ):
  $error_{true}(h) \le \frac{1}{m}\left(\ln|H| + \ln\frac{1}{\delta}\right)$

Example: H is Conjunction of Boolean Literals
Consider classification problem $f: X \rightarrow Y$:
• instances: X = <X1, X2, X3, X4>, where each Xi is boolean
• learned hypotheses are rules of the form:
  – IF <X1, X2, X3, X4> = <0, ?, 1, ?> THEN Y=1, ELSE Y=0
  – i.e., rules that constrain any subset of the Xi
How many training examples m suffice to assure that, with probability at least 0.9, any consistent learner will output a hypothesis with true error at most 0.05?

Example: H is Decision Tree with depth=2
Consider classification problem $f: X \rightarrow Y$:
• instances: X = <X1 … XN>, where each Xi is boolean
• learned hypotheses are decision trees of depth 2, using only two variables
How many training examples m suffice to assure that, with probability at least 0.9, any consistent learner will output a hypothesis with true error at most 0.05? (Both examples are worked numerically in the sketch below.)

PAC learnability — sufficient condition: it holds if learner L requires only a polynomial number of training examples, and the processing per example is polynomial.

Agnostic learning
When the training error is nonzero, the quantity of interest is
  $error_{true}(h) - error_{train}(h)$ = degree of overfitting;
note that ε here is the difference between the training error and the true error.

Additive Hoeffding Bounds – Agnostic Learning
• Given m independent flips of a coin with true Pr(heads) = θ, we can bound the error in the maximum likelihood estimate $\hat{\theta}$:
  $\Pr[\theta > \hat{\theta} + \epsilon] \le e^{-2m\epsilon^2}$
• Relevance to agnostic learning: for any single hypothesis h,
  $\Pr[error_{true}(h) > error_{train}(h) + \epsilon] \le e^{-2m\epsilon^2}$
• But we must consider all hypotheses in H:
  $\Pr[(\exists h \in H)\ error_{true}(h) > error_{train}(h) + \epsilon] \le |H|\,e^{-2m\epsilon^2}$
• So, with probability at least (1−δ), every h satisfies
  $error_{true}(h) \le error_{train}(h) + \sqrt{\frac{\ln|H| + \ln\frac{1}{\delta}}{2m}}$

General Hoeffding Bounds
• When estimating a parameter θ inside [a,b] from m examples:
  $\Pr[|\hat{\theta} - \mathbb{E}[\hat{\theta}]| \ge \epsilon] \le 2e^{-2m\epsilon^2/(b-a)^2}$
• When estimating a probability, θ is inside [0,1], so:
  $\Pr[|\hat{\theta} - \mathbb{E}[\hat{\theta}]| \ge \epsilon] \le 2e^{-2m\epsilon^2}$
• And if we're interested in only one-sided error, then:
  $\Pr[\mathbb{E}[\hat{\theta}] - \hat{\theta} \ge \epsilon] \le e^{-2m\epsilon^2}$
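Both example questions above, and the agnostic Hoeffding gap, reduce to plugging numbers into the formulas just derived. Below is a minimal Python sketch of that arithmetic; the hypothesis-space sizes in the comments (|H| = 3^4 for the conjunctions, and a rough count for the depth-2 trees) are assumptions I spell out, not values given on the slides.

```python
import math

def m_consistent(size_H, epsilon, delta):
    """m >= (1/eps)(ln|H| + ln(1/delta)): examples sufficient for any
    consistent learner to be probably (1-delta) approximately (eps) correct."""
    return math.ceil((math.log(size_H) + math.log(1 / delta)) / epsilon)

def agnostic_gap(size_H, m, delta):
    """Hoeffding: with probability >= 1-delta, every h in H has
    error_true(h) <= error_train(h) + this gap."""
    return math.sqrt((math.log(size_H) + math.log(1 / delta)) / (2 * m))

# Example 1: conjunctions of boolean literals over 4 variables. Each X_i is
# constrained to 0, constrained to 1, or left unconstrained ("?"),
# so |H| = 3**4 = 81.
print(m_consistent(3 ** 4, epsilon=0.05, delta=0.1))        # -> 134

# Example 2: depth-2 decision trees over N boolean variables. Assumed count:
# choose the 2 variables used, times 2**4 labelings of the four leaves.
N = 100
print(m_consistent(N * (N - 1) // 2 * 2 ** 4, epsilon=0.05, delta=0.1))  # -> 272

# Agnostic case: even with the 134 examples above, the Hoeffding gap between
# training and true error is much larger than 0.05.
print(agnostic_gap(3 ** 4, m=134, delta=0.1))               # -> ~0.158
```

The last line illustrates why the agnostic setting is harder: the gap shrinks as $1/\sqrt{m}$, so matching a given ε requires on the order of $1/\epsilon^2$ examples rather than $1/\epsilon$.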
What if H is not finite?
• Can't use our result for finite H
• Need some other measure of complexity for H
  – the Vapnik-Chervonenkis (VC) dimension!

Shattering and VC dimension
• Each hypothesis h imposes a dichotomy on a set of instances S: it labels each member of S positive or negative.
• S is shattered by H iff for every possible dichotomy of S there is some hypothesis in H consistent with it.
• VC(H) is the size of the largest finite subset of X shattered by H.
(Figure: an example hypothesis space that shatters three instances, so VC(H)=3.)

Sample Complexity based on VC dimension
Compare to our earlier results based on |H|: how many randomly drawn examples suffice to ε-exhaust $VS_{H,D}$ with probability at least (1−δ), i.e., to guarantee that any hypothesis that perfectly fits the training data is probably (1−δ) approximately (ε) correct?
  $m \ge \frac{1}{\epsilon}\left(4\log_2\frac{2}{\delta} + 8\,VC(H)\log_2\frac{13}{\epsilon}\right)$

VC dimension: examples
Consider $X = \mathbb{R}$ (the real line); we want to learn $c: X \rightarrow \{0,1\}$. What is the VC dimension of:
• Open intervals:
  – H1: IF $x > a$ THEN y=1 ELSE y=0 → VC(H1) = 1
  – H2: IF $x > a$ THEN y=1 ELSE y=0, or IF $x > a$ THEN y=0 ELSE y=1 → VC(H2) = 2
• Closed intervals:
  – H3: IF $a < x < b$ THEN y=1 ELSE y=0 → VC(H3) = 2
  – H4: IF $a < x < b$ THEN y=1 ELSE y=0, or IF $a < x < b$ THEN y=0 ELSE y=1 → VC(H4) = 3

VC dimension: examples
What is the VC dimension of lines in a plane?
• H2 = { ((w0 + w1x1 + w2x2) > 0 → y=1) }
  – VC(H2) = 3
• For Hn = linear separating hyperplanes in n dimensions, VC(Hn) = n+1.

For any finite hypothesis space H, can you give an upper bound on VC(H) in terms of |H|? (Hint: yes — shattering d instances requires $2^d$ distinct hypotheses, so $VC(H) \le \log_2|H|$.)

More VC Dimension Examples to Think About
• Logistic regression over n continuous features
  – over n boolean features?
• Linear SVM over n continuous features
• Decision trees defined over n boolean features, $F: \langle X_1, \ldots, X_n \rangle \rightarrow Y$
• Decision trees of depth 2 defined over n features
• How about 1-nearest neighbor?

Tightness of Bounds on Sample Complexity
How tight is this bound? How many examples m suffice to assure that any hypothesis that fits the training data perfectly is probably (1−δ) approximately (ε) correct?
Lower bound on sample complexity (Ehrenfeucht et al., 1989): consider any class C of concepts such that VC(C) > 1, any learner L, any 0 < ε < 1/8, and any 0 < δ < 0.01. Then there exists a distribution and a target concept in C such that if L observes fewer examples than
  $\max\left[\frac{1}{\epsilon}\log\frac{1}{\delta},\ \frac{VC(C)-1}{32\epsilon}\right]$
then with probability at least δ, L outputs a hypothesis h with $error_{true}(h) > \epsilon$. (The sketch below puts numbers to the gap between this lower bound and the upper bound above.)
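Here is a small Python sketch evaluating both bounds for a class with VC dimension 3 (e.g., lines in the plane); the ε and δ values are illustrative choices of mine, and I assume base-2 logs in the lower bound (the base only changes a constant factor).

```python
import math

def m_sufficient_vc(vc, epsilon, delta):
    """Upper bound: m >= (1/eps)(4 log2(2/delta) + 8 VC(H) log2(13/eps))."""
    return math.ceil((4 * math.log2(2 / delta)
                      + 8 * vc * math.log2(13 / epsilon)) / epsilon)

def m_necessary_vc(vc, epsilon, delta):
    """Ehrenfeucht et al. (1989) lower bound; requires VC(C) > 1,
    eps < 1/8, delta < 0.01. Base-2 log assumed."""
    return math.ceil(max(math.log2(1 / delta) / epsilon,
                         (vc - 1) / (32 * epsilon)))

vc, eps, delta = 3, 0.05, 0.005   # eps < 1/8 and delta < 0.01, as required
print(m_sufficient_vc(vc, eps, delta))  # 4543: this many examples always suffice
print(m_necessary_vc(vc, eps, delta))   # 153: below this, some learner must fail
```

The order-of-magnitude gap between 153 and 4543 is one sense in which these worst-case bounds are loose.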
Agnostic Learning: VC Bounds
With probability at least (1−δ), every h ∈ H satisfies [Schölkopf and Smola, 2002]:
  $error_{true}(h) \le error_{train}(h) + \sqrt{\frac{VC(H)\left(\ln\frac{2m}{VC(H)} + 1\right) + \ln\frac{4}{\delta}}{m}}$

Structural Risk Minimization [Vapnik]
Which hypothesis space should we choose?
• Bias / variance tradeoff over a nested sequence $H_1 \subset H_2 \subset H_3 \subset H_4$
• SRM: choose H to minimize the bound on true error!* (a numeric sketch of this selection rule follows)
* unfortunately a somewhat loose bound
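To make the SRM rule concrete, here is a minimal Python sketch that evaluates the agnostic VC bound above over a nested family of hypothesis spaces and selects the minimizer; the training errors and VC dimensions are made-up illustrative values, not data from the lecture.

```python
import math

def agnostic_vc_bound(train_error, vc, m, delta=0.05):
    """Bound on error_true(h): error_train(h) plus the VC complexity penalty
    sqrt((VC(H)(ln(2m/VC(H)) + 1) + ln(4/delta)) / m), holding
    simultaneously for all h in H with probability >= 1-delta."""
    gap = math.sqrt((vc * (math.log(2 * m / vc) + 1) + math.log(4 / delta)) / m)
    return train_error + gap

# Nested spaces H1 in H2 in H3 in H4: richer spaces achieve lower training
# error but pay a larger VC penalty. (Hypothetical numbers for illustration.)
m = 1000
spaces = [("H1", 0.20, 5), ("H2", 0.12, 20), ("H3", 0.08, 80), ("H4", 0.075, 320)]
bounds = {name: agnostic_vc_bound(err, vc, m) for name, err, vc in spaces}
print(bounds)                      # H1 ~0.40, H2 ~0.46, H3 ~0.66, H4 ~1.03
print(min(bounds, key=bounds.get))  # SRM picks the space minimizing the *bound*
```

Even though H2 has the lower training error, the bound favors H1 here; and the absolute bound values sit far above the training errors, which is exactly the looseness the asterisk above warns about.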