CMU CS 10601 - Computational Learning Theory

Computational Learning Theory

Reading: Mitchell, chapter 7
Suggested exercises: 7.1, 7.2, 7.5, 7.7

Machine Learning 10-601
Tom M. Mitchell
Machine Learning Department
Carnegie Mellon University
February 18, 2008

Instances, Hypotheses, and More-General-Than

The set of training examples D consists of instances drawn at random from a probability distribution P(x). The target concept is the (usually unknown) boolean function to be learned, c: X -> {0,1}. Can we bound the true error of a hypothesis in terms of its training error on D?

Consider any(!) learner that outputs a hypothesis consistent with all training examples (i.e., an h contained in the version space VS_{H,D}).

What it means [Haussler, 1988]: the probability that the version space is not ε-exhausted after m training examples is at most

    |H| e^{-εm}

1. How many training examples suffice? Suppose we want this probability to be at most δ. Then |H| e^{-εm} ≤ δ, which holds if

    m ≥ (1/ε)(ln|H| + ln(1/δ))

2. If m satisfies this bound, then with probability at least (1 - δ), every consistent hypothesis h has true error at most ε.

E.g., conjunctions of boolean literals: X = <X1, X2, ..., Xn>. Each h ∈ H constrains each Xi to be 1, 0, or "don't care". In other words, each h is a rule such as:

    If X2 = 0 and X5 = 1 then Y = 1, else Y = 0

Since each of the n attributes has three choices, |H| = 3^n.

Sufficient condition for PAC learnability: it holds if L requires only a polynomial number of training examples, and processing per example is polynomial.

Note that ε here is the difference between the training error and the true error — the degree of overfitting:

    error_true(h) ≤ error_train(h) + ε

Additive Hoeffding Bounds – Agnostic Learning

- Given m independent flips of a coin with Pr(heads) = θ, we can bound the error in the estimate θ̂:

    Pr[θ > θ̂ + ε] ≤ e^{-2mε²}

- Relevance to agnostic learning: for any single hypothesis h,

    Pr[error_true(h) > error_train(h) + ε] ≤ e^{-2mε²}

- But we must consider all hypotheses in H:

    Pr[(∃h ∈ H) error_true(h) > error_train(h) + ε] ≤ |H| e^{-2mε²}

- So, with probability at least (1 - δ), every h satisfies

    error_true(h) ≤ error_train(h) + sqrt( (ln|H| + ln(1/δ)) / (2m) )

General Hoeffding Bounds

- When estimating a parameter θ ∈ [a, b] from m examples:

    Pr[|θ̂ - θ| > ε] ≤ 2 e^{-2mε² / (b-a)²}

- When estimating a probability θ ∈ [0, 1], (b - a) = 1, so:

    Pr[|θ̂ - θ| > ε] ≤ 2 e^{-2mε²}

- And if we're interested in only one-sided error, the factor of 2 disappears:

    Pr[θ - θ̂ > ε] ≤ e^{-2mε²}
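The two sample-complexity bounds above are easy to evaluate numerically. The sketch below (function names are my own, not from the lecture) computes the consistent-learner bound m ≥ (1/ε)(ln|H| + ln(1/δ)) and the agnostic Hoeffding bound m ≥ (1/(2ε²))(ln|H| + ln(1/δ)), using the conjunctions-of-boolean-literals example where |H| = 3^n:

```python
import math

def pac_sample_bound(h_size, epsilon, delta):
    """Consistent-learner bound: m >= (1/eps) * (ln|H| + ln(1/delta)).

    Guarantees that with probability >= 1 - delta, every hypothesis
    consistent with m training examples has true error at most eps.
    """
    return math.ceil((1.0 / epsilon) * (math.log(h_size) + math.log(1.0 / delta)))

def agnostic_sample_bound(h_size, epsilon, delta):
    """Agnostic (Hoeffding) bound: m >= (1/(2 eps^2)) * (ln|H| + ln(1/delta)).

    Guarantees that with probability >= 1 - delta, every h in H has
    error_true(h) <= error_train(h) + eps, even if no h is consistent.
    """
    return math.ceil((1.0 / (2 * epsilon ** 2)) * (math.log(h_size) + math.log(1.0 / delta)))

# Conjunctions over n = 10 boolean attributes: |H| = 3^10,
# since each attribute is constrained to 1, 0, or "don't care".
n = 10
h_size = 3 ** n

m_consistent = pac_sample_bound(h_size, epsilon=0.1, delta=0.05)
m_agnostic = agnostic_sample_bound(h_size, epsilon=0.1, delta=0.05)
print(m_consistent)  # 140
print(m_agnostic)    # 700
```

Note how the agnostic setting is more expensive: the dependence on ε changes from 1/ε to 1/ε², because we can no longer assume a hypothesis with zero training error exists.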

