CMU CS 10701 - Boosting

Boosting
Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
October 5th, 2009
© Carlos Guestrin 2005-2009

Fighting the bias-variance tradeoff
- Simple (a.k.a. weak) learners are good
  - e.g., naïve Bayes, logistic regression, decision stumps (or shallow decision trees)
  - Low variance, don't usually overfit
- Simple (a.k.a. weak) learners are bad
  - High bias, can't solve hard learning problems
- Can we make weak learners always good??? No!!! But often yes…

Voting (Ensemble Methods)
- Instead of learning a single (weak) classifier, learn many weak classifiers that are good at different parts of the input space
- Output class: (weighted) vote of each classifier
  - Classifiers that are most "sure" will vote with more conviction
  - Classifiers will be most "sure" about a particular part of the space
  - On average, do better than a single classifier!
- But how do you
  - force classifiers to learn about different parts of the input space?
  - weigh the votes of different classifiers?

Boosting [Schapire, 1989]
- Idea: given a weak learner, run it multiple times on (reweighted) training data, then let the learned classifiers vote
- On each iteration t:
  - weight each training example by how incorrectly it was classified
  - learn a hypothesis h_t
  - and a strength for this hypothesis, α_t (chosen as discussed below)
- Final classifier: H(x) = sign( Σ_t α_t h_t(x) )
- Practically useful and theoretically interesting

Learning from weighted data
- Sometimes not all data points are equal; some data points are more equal than others
- Consider a weighted dataset: D(i) – weight of the i-th training example (x_i, y_i)
- Interpretations:
  - the i-th training example counts as D(i) examples
  - if we were to "resample" the data, we would get more samples of "heavier" data points
- Now, in all calculations, wherever it is used, the i-th training example counts as D(i) "examples"
  - e.g., for MLE in naïve Bayes, redefine Count(Y = y) to be a weighted count

The AdaBoost algorithm [Freund & Schapire '97]
- Initialize the weights uniformly: D_1(i) = 1/m
- For t = 1, …, T:
  - train the weak learner on the weighted data D_t, obtaining hypothesis h_t
  - compute its weighted error: ε_t = Σ_i D_t(i) · 1[h_t(x_i) ≠ y_i]
  - choose a strength α_t for h_t
  - reweight the examples: D_{t+1}(i) = D_t(i) exp( −α_t y_i h_t(x_i) ) / Z_t, where Z_t normalizes D_{t+1} to sum to 1
- Output the final classifier H(x) = sign( Σ_t α_t h_t(x) )

What α_t to choose for hypothesis h_t? [Schapire, 1989]
- The training error of the final classifier is bounded by:
  (1/m) Σ_i 1[H(x_i) ≠ y_i] ≤ (1/m) Σ_i exp( −y_i f(x_i) ) = ∏_t Z_t
  where f(x) = Σ_t α_t h_t(x) and Z_t = Σ_i D_t(i) exp( −α_t y_i h_t(x_i) )
- If we minimize ∏_t Z_t, we minimize our training error
- We can tighten this bound greedily, by choosing α_t and h_t on each iteration to minimize Z_t
- For a boolean target function, this is accomplished by [Freund & Schapire '97]:
  α_t = ½ ln( (1 − ε_t) / ε_t )
- You'll prove this in your homework! ☺
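The loop above can be made concrete with a short sketch. The following is a minimal AdaBoost implementation with decision stumps as the weak learner; the stump search, function names, and toy data below are my own illustrative assumptions, not code from the lecture, and only NumPy is required.

```python
# Minimal AdaBoost sketch with decision stumps as the weak learner.
import numpy as np

def train_stump(X, y, D):
    """Return the (feature, threshold, sign) stump with lowest weighted error under D."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thresh in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] >= thresh, sign, -sign)
                err = D[pred != y].sum()              # weighted error under D
                if err < best_err:
                    best_err, best = err, (j, thresh, sign)
    j, thresh, sign = best
    h = lambda Z: np.where(Z[:, j] >= thresh, sign, -sign)
    return h, best_err

def adaboost(X, y, T=30):
    """y must be in {-1, +1}. Returns a list of (alpha_t, h_t) pairs."""
    m = len(y)
    D = np.full(m, 1.0 / m)                           # D_1(i) = 1/m
    ensemble = []
    for t in range(T):
        h, eps = train_stump(X, y, D)                 # weighted error eps_t
        if eps >= 0.5:                                # no better than random: stop
            break
        eps = max(eps, 1e-12)                         # guard against division by zero
        alpha = 0.5 * np.log((1 - eps) / eps)         # alpha_t = 1/2 ln((1 - eps_t)/eps_t)
        D = D * np.exp(-alpha * y * h(X))             # up-weight misclassified examples
        D = D / D.sum()                               # normalize (divide by Z_t)
        ensemble.append((alpha, h))
    return ensemble

def predict(ensemble, X):
    """Final classifier H(x) = sign(sum_t alpha_t h_t(x))."""
    return np.sign(sum(alpha * h(X) for alpha, h in ensemble))
```

A toy run on synthetic data (again, purely illustrative) shows the usual behavior of the weighted vote:

```python
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)
ensemble = adaboost(X, y, T=30)
print("training accuracy:", (predict(ensemble, X) == y).mean())
```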
Strong, weak classifiers
- If each classifier is (at least slightly) better than random: ε_t < 0.5
- AdaBoost will achieve zero training error (exponentially fast):
  training error ≤ ∏_t Z_t = ∏_t 2√(ε_t (1 − ε_t)) ≤ exp( −2 Σ_t (1/2 − ε_t)² )
- Is it hard to achieve better-than-random training error?

Boosting results – Digit recognition [Schapire, 1989]
- Boosting is often robust to overfitting
- Test set error decreases even after the training error is zero

Boosting generalization error bound [Freund & Schapire, 1996]
- error ≤ training error + Õ( √(T·d / m) )
  - T – number of boosting rounds
  - d – VC dimension of the weak learner; measures the complexity of the classifier
  - m – number of training examples
- This contradicts the empirical observation above: boosting is often robust to overfitting, and the test set error decreases even after the training error is zero
- We need better analysis tools – we'll come back to this later in the semester

Boosting: Experimental Results [Freund & Schapire, 1996]
- Comparison of C4.5, boosted C4.5, and boosted decision stumps (depth-1 trees) on 27 benchmark datasets

Boosting and Logistic Regression
- Logistic regression assumes:
  P(Y = 1 | X = x) = 1 / (1 + exp(−(w_0 + Σ_j w_j x_j)))
- and tries to maximize the data likelihood, which is equivalent to minimizing the log loss:
  Σ_i ln( 1 + exp(−y_i f(x_i)) )
- Boosting minimizes a similar loss function, the exponential loss:
  Σ_i exp( −y_i f(x_i) )
- Both are smooth approximations of the 0/1 loss!

Logistic regression and Boosting
- Logistic regression:
  - minimizes the log loss Σ_i ln( 1 + exp(−y_i f(x_i)) )
  - defines f(x) = w_0 + Σ_j w_j x_j, where the features x_j are predefined
- Boosting:
  - minimizes the exponential loss Σ_i exp( −y_i f(x_i) )
  - defines f(x) = Σ_t α_t h_t(x), where each h_t(x_i) is defined dynamically to fit the data (not a linear classifier)
  - the weights α_t are learned incrementally

What you need to know about Boosting
- Combine weak classifiers to obtain a very strong classifier
  - weak classifier – slightly better than random on the training data
  - the resulting very strong classifier can eventually achieve zero training error
- The AdaBoost algorithm
- Boosting vs. logistic regression
  - similar loss functions
  - single optimization (LR) vs. incrementally improving the classification (boosting)
- Most popular application of boosting: boosted decision stumps!
  - very simple to implement, very effective classifier

© Carlos Guestrin
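To make the "similar loss functions" comparison concrete, here is a small sketch (my own illustration, not from the slides) that evaluates the exponential loss, the log loss, and the 0/1 loss at a few values of the margin y·f(x):

```python
# Compare the surrogate losses from the slides as functions of the margin y*f(x).
# Both surrogates are smooth and decrease with the margin; the 0/1 loss is the
# step function they approximate.
import numpy as np

margins = np.linspace(-2, 2, 9)
exp_loss = np.exp(-margins)                   # boosting: exp(-y f(x))
log_loss = np.log(1 + np.exp(-margins))       # logistic regression: ln(1 + exp(-y f(x)))
zero_one = (margins <= 0).astype(float)       # 0/1 loss as a function of the margin

for m, e, l, z in zip(margins, exp_loss, log_loss, zero_one):
    print(f"margin {m:+.2f}:  exp-loss {e:6.3f}   log-loss {l:6.3f}   0/1 {z:.0f}")
```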