Boosting, Simple Model Selection, Cross Validation, Regularization
Machine Learning – 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
February 8th, 2006

Reading: Boosting – Schapire '01 (linked from the class website)

Lecture outline:
- Announcements
- Fighting the bias-variance tradeoff
- Voting
- Boosting
- Learning from weighted data
- What α_t to choose for hypothesis h_t?
- Strong, weak classifiers
- Boosting results – Digit recognition
- Boosting generalization error bound
- Boosting: Experimental Results
- Boosting and Logistic Regression
- Logistic regression and Boosting
- What you need to know about Boosting
- OK… now we'll learn to pick those darned parameters…
- Test set error as a function of model complexity
- Simple greedy model selection algorithm
- Greedy model selection
- Validation set
- (LOO) Leave-one-out cross validation
- LOO cross validation is (almost) unbiased estimate of true error!
- Using LOO error for model selection
- Computational cost of LOO
- Solution 2 to the complexity of computing LOO: (more typical) use k-fold cross validation
- Regularization – Revisited
- Regularization in linear regression
- Other regularization examples
- How do we pick the magic parameter?
- Regularization and Bayesian learning
- Occam's Razor
- Minimum Description Length Principle
- Bayesian interpretation of MDL Principle
- What you need to know about Model Selection, Regularization and Cross Validation
- Acknowledgements

Announcements
- Recitations stay on Thursdays, 5–6:30pm, in Wean 5409. This week: Decision Trees and Boosting.
- Homework due tomorrow by 10:30am (class time) to Monica Hopes, Wean Hall 4616.

Fighting the bias-variance tradeoff
- Simple (a.k.a. weak) learners are good: e.g., naïve Bayes, logistic regression, decision stumps (or shallow decision trees). Low variance, don't usually overfit.
- Simple (a.k.a. weak) learners are bad: high bias, can't solve hard learning problems.
- Can we make weak learners always good??? No!!! But often yes…

Voting
- Instead of learning a single (weak) classifier, learn many weak classifiers that are good at different parts of the input space.
- Output class: (weighted) vote of each classifier.
  - Classifiers that are most "sure" will vote with more conviction.
  - Classifiers will be most "sure" about a particular part of the space.
  - On average, do better than a single classifier!
- But how do you:
  - force classifiers to learn about different parts of the input space?
  - weigh the votes of different classifiers?

Boosting [Schapire, 1989]
- Idea: given a weak learner, run it multiple times on (reweighted) training data, then let the learned classifiers vote.
- On each iteration t:
  - weight each training example by how incorrectly it was classified,
  - learn a hypothesis h_t,
  - and a strength for this hypothesis, α_t.
- Final classifier: a weighted vote of the weak hypotheses, H(x) = sign(Σ_t α_t h_t(x)).
- Practically useful, theoretically interesting.

Learning from weighted data
- Sometimes not all data points are equal: some data points are more equal than others.
- Consider a weighted dataset, where D(i) is the weight of the i-th training example (x_i, y_i).
- Interpretations:
  - the i-th training example counts as D(i) examples;
  - if I were to "resample" the data, I would get more samples of "heavier" data points.
- Now, in all calculations, the i-th training example counts as D(i) "examples": e.g., in MLE for naïve Bayes, redefine Count(Y = y) to be a weighted count.
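To make the reweight–train–vote loop of the last two slides concrete, here is a minimal Python sketch of AdaBoost, assuming scikit-learn decision stumps as the weak learner. The function names, the default of 50 rounds, and the early stop when the weighted error reaches 0.5 are illustrative choices, not part of the lecture; the α_t formula used is the standard one derived on the slides that follow.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T=50):
    """Minimal AdaBoost sketch. X: (m, n) feature matrix, y: labels in {-1, +1}."""
    y = np.asarray(y)
    m = len(y)
    D = np.full(m, 1.0 / m)                      # example weights, start uniform
    hypotheses, alphas = [], []
    for t in range(T):
        # Weak learner: a depth-1 decision stump trained on the weighted data
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=D)
        pred = h.predict(X)
        eps = D[pred != y].sum()                 # weighted training error of h_t
        if eps >= 0.5:                           # no better than random: stop
            break
        alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-12))  # strength alpha_t
        # Reweight: misclassified examples get heavier, correct ones lighter
        D = D * np.exp(-alpha * y * pred)
        D = D / D.sum()                          # normalize (divide by Z_t)
        hypotheses.append(h)
        alphas.append(alpha)
    return hypotheses, alphas

def boosted_predict(hypotheses, alphas, X):
    """Final classifier: sign of the weighted vote sum_t alpha_t * h_t(x)."""
    votes = sum(a * h.predict(X) for h, a in zip(hypotheses, alphas))
    return np.sign(votes)
```

With labels encoded as ±1, the update D(i) ← D(i)·exp(−α_t y_i h_t(x_i)) / Z_t raises the weight of the points h_t gets wrong, so the next stump focuses on them – exactly the "weight each training example by how incorrectly it was classified" step.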
What α_t to choose for hypothesis h_t? [Schapire, 1989]
- The training error of the final classifier is bounded by the product of the normalizers:
  (1/m) Σ_i 1[H(x_i) ≠ y_i] ≤ ∏_t Z_t, where Z_t = Σ_i D_t(i) exp(−α_t y_i h_t(x_i)).
- If we minimize ∏_t Z_t, we minimize our training error.
- We can tighten this bound greedily, by choosing α_t and h_t on each iteration to minimize Z_t.
- For a boolean target function, the minimizing choice is [Freund & Schapire '97]:
  α_t = ½ ln((1 − ε_t)/ε_t), where ε_t is the weighted training error of h_t.
  You'll prove this in your homework! ☺

Strong, weak classifiers
- If each classifier is (at least slightly) better than random, i.e., ε_t < 0.5, AdaBoost will achieve zero training error exponentially fast:
  training error ≤ ∏_t Z_t ≤ exp(−2 Σ_t (½ − ε_t)²).
- Is it hard to achieve better-than-random training error?

Boosting results – Digit recognition [Schapire, 1989]
- Boosting is often robust to overfitting: test set error keeps decreasing even after training error reaches zero.

Boosting generalization error bound [Freund & Schapire, 1996]
- The bound grows with the number of rounds: with high probability,
  error_true(H) ≤ error_train(H) + Õ(√(T·d / m)),
  where T is the number of boosting rounds, d is the VC dimension of the weak learner (a measure of classifier complexity), and m is the number of training examples.
- This contradicts the empirical behavior above: boosting is often robust to overfitting, and test set error decreases even after training error is zero.
- We need better analysis tools; we'll come back to this later in the semester.

Boosting: Experimental Results [Freund & Schapire, 1996]
- Comparison of C4.5, boosted C4.5, and boosted decision stumps (depth-1 trees) on 27 benchmark datasets.

Boosting and Logistic Regression
- Logistic regression assumes P(Y = 1 | x) = 1 / (1 + exp(−f(x))) with f(x) = w_0 + Σ_j w_j x_j, and tries to maximize the data likelihood.
- With labels y_i ∈ {−1, +1}, this is equivalent to minimizing the log loss Σ_i ln(1 + exp(−y_i f(x_i))).
- Boosting minimizes a very similar loss function, the exponential loss Σ_i exp(−y_i f(x_i)), with f(x) = Σ_t α_t h_t(x).
- Both are smooth approximations of the 0/1 loss!

Logistic regression and Boosting
- Logistic regression: minimize the log loss above.
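To see the connection in the last two slides numerically, here is a small sketch that evaluates the 0/1, log, and exponential losses on the same margins y·f(x). The margin values are made up purely for illustration.

```python
import numpy as np

# Margins y_i * f(x_i): positive = correctly classified, negative = misclassified.
# The values below are illustrative only.
margins = np.array([2.0, 0.7, 0.1, -0.3, -1.5])

zero_one = (margins <= 0).astype(float)        # 0/1 loss per example
log_loss = np.log(1 + np.exp(-margins))        # logistic regression's loss
exp_loss = np.exp(-margins)                    # boosting's loss

for m, z, l, e in zip(margins, zero_one, log_loss, exp_loss):
    print(f"margin {m:+.1f}:  0/1 = {z:.0f}   log = {l:.3f}   exp = {e:.3f}")
```

Both smooth losses fall off as the margin grows, standing in for the 0/1 step, and the output shows that the exponential loss grows much faster on badly misclassified points than the log loss does.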