CMU CS 10701 - boosting-xvalidation-regularization

Contents
- Boosting; Simple Model Selection; Cross Validation; Regularization
- Announcements
- Fighting the bias-variance tradeoff
- Voting
- Boosting
- Learning from weighted data
- What αt to choose for hypothesis ht?
- Strong, weak classifiers
- Boosting results – Digit recognition
- Boosting generalization error bound
- Boosting: Experimental Results
- Boosting and Logistic Regression
- Logistic regression and Boosting
- What you need to know about Boosting
- OK… now we'll learn to pick those darned parameters…
- Test set error as a function of model complexity
- Simple greedy model selection algorithm
- Greedy model selection
- Validation set
- (LOO) Leave-one-out cross validation
- LOO cross validation is (almost) unbiased estimate of true error!
- Using LOO error for model selection
- Computational cost of LOO
- Solution 2 to complexity of computing LOO: (more typical) use k-fold cross validation
- Regularization – Revisited
- Regularization in linear regression
- Other regularization examples
- How do we pick the magic parameter?
- Regularization and Bayesian learning
- Occam's Razor
- Minimum Description Length Principle
- Bayesian interpretation of MDL Principle
- What you need to know about Model Selection, Regularization and Cross Validation
- Acknowledgements

Boosting, Simple Model Selection, Cross Validation, Regularization
Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
February 8th, 2006
©2006 Carlos Guestrin
Reading on boosting (linked from class website): Schapire '01

Announcements
- Recitations stay on Thursdays, 5–6:30pm in Wean 5409
  - This week: Decision Trees and Boosting
- Homework due tomorrow by 10:30am (class time) to Monica Hopes, Wean Hall 4616

Fighting the bias-variance tradeoff
- Simple (a.k.a. weak) learners are good
  - e.g., naïve Bayes, logistic regression, decision stumps (or shallow decision trees)
  - low variance, don't usually overfit
- Simple (a.k.a. weak) learners are bad
  - high bias, can't solve hard learning problems
- Can we make weak learners always good???
  - No!!! But often yes…

Voting
- Instead of learning a single (weak) classifier, learn many weak classifiers that are good at different parts of the input space
- Output class: (weighted) vote of each classifier
  - Classifiers that are most "sure" will vote with more conviction
  - Classifiers will be most "sure" about a particular part of the space
  - On average, do better than a single classifier!
- But how do you
  - force classifiers to learn about different parts of the input space?
  - weigh the votes of different classifiers?

Boosting [Schapire, 1989]
- Idea: given a weak learner, run it multiple times on (reweighted) training data, then let the learned classifiers vote (sketched in code after the next slide)
- On each iteration t:
  - weight each training example by how incorrectly it was classified
  - learn a hypothesis ht
  - and a strength αt for this hypothesis
- Final classifier: a weighted vote of the weak hypotheses, H(x) = sign(Σt αt ht(x))
- Practically useful
- Theoretically interesting

Learning from weighted data
- Sometimes not all data points are equal
  - Some data points are more equal than others
- Consider a weighted dataset
  - D(i) – weight of the i-th training example (xi, yi)
  - Interpretations:
    - the i-th training example counts as D(i) examples
    - if I were to "resample" the data, I would get more samples of "heavier" data points
- Now, in all calculations, the i-th training example counts as D(i) "examples"
  - e.g., for MLE in naïve Bayes, redefine Count(Y = y) to be the weighted count
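To make the boosting loop concrete, here is a minimal AdaBoost-style sketch in Python. It is not from the slides: it assumes scikit-learn's DecisionTreeClassifier as the decision-stump weak learner, labels in {−1, +1}, and the function names and the choice of T are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T=50):
    """AdaBoost sketch: X is (m, n), y has entries in {-1, +1}."""
    m = X.shape[0]
    D = np.full(m, 1.0 / m)                     # D(i): weight of the i-th training example
    hypotheses, alphas = [], []
    for t in range(T):
        # Learn hypothesis h_t on the weighted data (decision stump = depth-1 tree)
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=D)
        pred = h.predict(X)
        eps = np.clip(D[pred != y].sum(), 1e-12, 1 - 1e-12)   # weighted error eps_t
        if eps >= 0.5:                          # weak learner no better than random: stop
            break
        alpha = 0.5 * np.log((1 - eps) / eps)   # strength alpha_t [Freund & Schapire '97]
        hypotheses.append(h)
        alphas.append(alpha)
        # Reweight: misclassified examples get heavier, correctly classified ones lighter
        D *= np.exp(-alpha * y * pred)
        D /= D.sum()
    return hypotheses, alphas

def boosted_predict(hypotheses, alphas, X):
    """Final classifier: H(x) = sign(sum_t alpha_t * h_t(x))."""
    f = sum(a * h.predict(X) for h, a in zip(hypotheses, alphas))
    return np.where(f >= 0, 1, -1)
```

Because each stump is refit to the reweighted data, it is forced to concentrate on the examples the earlier stumps got wrong; that is what makes the weak classifiers specialize on different parts of the input space, and αt is what weighs their votes.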

What αt to choose for hypothesis ht? [Schapire, 1989]
The training error of the final classifier is bounded by the exponential loss:
  (1/m) Σi 1[H(xi) ≠ yi]  ≤  (1/m) Σi exp(−yi f(xi)),
where f(x) = Σt αt ht(x) and H(x) = sign(f(x)). This exponential loss factors into the product of the per-round weight normalizers:
  (1/m) Σi exp(−yi f(xi))  =  ∏t Zt,  where Zt = Σi Dt(i) exp(−αt yi ht(xi)).
If we minimize ∏t Zt, we minimize our training error. We can tighten this bound greedily, by choosing αt and ht on each iteration to minimize Zt. For a boolean target function, this is accomplished by [Freund & Schapire '97]:
  αt = (1/2) ln((1 − εt) / εt),
where εt is the weighted training error of ht. You'll prove this in your homework! ☺

Strong, weak classifiers
- If each classifier is (at least slightly) better than random, i.e. εt < 0.5, then AdaBoost will achieve zero training error exponentially fast:
  ∏t Zt ≤ exp(−2 Σt (1/2 − εt)²)
- Is it hard to achieve better than random training error?
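A quick back-of-the-envelope check of the "exponentially fast" claim (my own illustration, not from the slides): if every round has the same edge γ = 1/2 − εt, the bound exp(−2Tγ²) drops below 1/m once T > ln(m)/(2γ²), and since the training error on m examples can only take values k/m, it must then be exactly zero.

```python
import numpy as np

# Rounds needed until exp(-2*T*gamma^2) < 1/m, i.e. until the bound forces
# zero training error on m examples (gamma = 0.5 - eps_t, assumed constant).
m = 1000
for eps in (0.45, 0.40, 0.30):
    gamma = 0.5 - eps
    T_needed = int(np.ceil(np.log(m) / (2 * gamma ** 2)))
    print(f"eps_t = {eps:.2f} (edge {gamma:.2f}): zero training error after ~{T_needed} rounds")
```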
Boosting results – Digit recognition [Schapire, 1989]
- Boosting is often robust to overfitting
- Test set error decreases even after training error is zero

Boosting generalization error bound [Freund & Schapire, 1996]
The bound is of the form
  errortrue(H) ≤ errortrain(H) + Õ(√(T·d / m)), where
- T – number of boosting rounds
- d – VC dimension of the weak learner, measures complexity of the classifier
- m – number of training examples
This contradicts the empirical observation above: boosting is often robust to overfitting, and the test set error decreases even after the training error is zero. We need better analysis tools – we'll come back to this later in the semester.

Boosting: Experimental Results [Freund & Schapire, 1996]
- Comparison of C4.5, boosting C4.5, and boosting decision stumps (depth-1 trees) on 27 benchmark datasets

Boosting and Logistic Regression
Logistic regression assumes
  P(Y = 1 | X = x) = 1 / (1 + exp(−(w0 + Σj wj xj)))
and tries to maximize the conditional data likelihood, which is equivalent to minimizing the log loss
  Σi ln(1 + exp(−yi f(xi)))  with yi ∈ {−1, +1} and f(x) = w0 + Σj wj xj.
Boosting minimizes a similar loss function,
  (1/m) Σi exp(−yi f(xi))  with f(x) = Σt αt ht(x).
Both are smooth approximations of the 0/1 loss (compared numerically below).

Logistic regression and Boosting
Logistic regression: minimize loss fn …
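An illustrative numeric comparison of the two surrogate losses as a function of the margin z = y·f(x) (my own example, not from the slides): both are smooth and shrink as the margin grows, but the exp loss used by boosting penalizes badly misclassified points far more heavily than the log loss, which grows only about linearly for very negative margins.

```python
import numpy as np

# 0/1 loss vs. its two smooth surrogates, as a function of the margin z = y * f(x)
margins = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
zero_one = (margins < 0).astype(float)        # 1 if misclassified, else 0
log_loss = np.log(1 + np.exp(-margins))       # logistic regression's loss
exp_loss = np.exp(-margins)                   # boosting's loss

for z, l01, ll, el in zip(margins, zero_one, log_loss, exp_loss):
    print(f"margin {z:+.1f}:  0/1 = {l01:.0f}   log = {ll:.3f}   exp = {el:.3f}")
```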

