Boosting
Recitation 9
Oznur Tastan

Outline
• Overview of common mistakes in the midterm
• Boosting

Sanity checks
• Entropy for discrete variables is always non-negative and equals zero only if the variable takes on a single value:
  H(X) = E[I(X)] = sum_i p(x_i) I(x_i) = - sum_i p(x_i) log_2 p(x_i)
• Information gain is always non-negative.

Sanity checks
In decision trees:
• You cannot obtain a leaf that has no training examples.
• If a leaf contains examples from multiple classes, you predict the most common class.
• If there are multiple most common classes, you predict any one of them.

Common mistakes
• Many people stated only one of the two problems.

Common mistakes
• 6.3 Controlling overfitting: if you increase the number of training examples in logistic regression, the bias remains unchanged; the MLE is an approximately unbiased estimator.
• 11 Bayesian networks
• 12 Graphical model inference: entries in potential tables are not probabilities. Many people forgot about the possibility of accidental independences.

Boosting
• As opposed to bagging and random forests, which learn many big trees, boosting learns many small trees (weak classifiers).
• Commonly used terms: learner = hypothesis = classifier.

Boosting
• Given a weak learner that can consistently classify the examples with error ≤ 1/2 − γ,
• a boosting algorithm can provably construct a single classifier with error ≤ ε, where ε and γ are small.

AdaBoost
• In the first round all examples are equally weighted: D_1(i) = 1/N.
• At each round, concentrate on the hardest examples: the examples that were misclassified in the previous round get larger weight, so that the new learner focuses on them.
• At the end, take a weighted majority vote. (A runnable sketch of the full loop is given at the end of these notes.)

Formal description
• D_t is a distribution over the training examples; h_t is the classifier (hypothesis) learned at round t.
• ε_t = sum_i D_t(i) * 1[h_t(x_i) ≠ y_i] is the weighted error: a mistake on an example with high weight costs much.
• The final hypothesis is a weighted majority vote: H(x) = sign(sum_t α_t h_t(x)).

Updating the distribution D_t
• D_{t+1}(i) is proportional to D_t(i) * exp(−α_t y_i h_t(x_i)).
• Correctly predicted example: decrease its weight. Misclassified example: increase its weight.
• α_t = 1/2 ln((1 − ε_t)/ε_t) depends on the weighted error of the classifier; for example, if ε_t = 0.3, then α_t = 1/2 ln((1 − 0.3)/0.3) ≈ 0.42.

When the final hypothesis is too complex
• Look at the margin of the classifier and the cumulative distribution of the margins over the training examples.
• Although the final classifier is getting larger, the margins keep increasing.

Advantages
• Fast
• Simple and easy to program
• No parameters to tune (except the number of rounds T)
• Provably effective
• Performance depends on the data and the weak learner
• Can fail if the weak classifiers are too complex (overfitting) or too simple (underfitting)

References
Miroslav Dudik, lecture notes
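
Code sketch: AdaBoost with decision stumps
To make the weight update and the weighted majority vote concrete, here is a minimal sketch in Python/NumPy using one-level decision stumps as weak learners. This is an illustration under stated assumptions, not the recitation's reference implementation: labels are assumed to be in {−1, +1}, and the function names (train_stump, adaboost, predict) and the toy dataset are made up for this example.

```python
# Minimal AdaBoost sketch with decision stumps as weak learners.
# Assumes binary labels y in {-1, +1}; names and data are illustrative only.
import numpy as np

def train_stump(X, y, D):
    """Pick the single-feature threshold stump with the lowest weighted error."""
    n, d = X.shape
    best = None
    for j in range(d):
        for thresh in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = sign * np.where(X[:, j] <= thresh, 1, -1)
                err = np.sum(D[pred != y])          # weighted error eps_t
                if best is None or err < best[0]:
                    best = (err, j, thresh, sign)
    return best  # (weighted error, feature index, threshold, sign)

def stump_predict(stump, X):
    _, j, thresh, sign = stump
    return sign * np.where(X[:, j] <= thresh, 1, -1)

def adaboost(X, y, T=10):
    n = X.shape[0]
    D = np.full(n, 1.0 / n)              # D_1(i) = 1/N: uniform weights
    ensemble = []
    for t in range(T):
        stump = train_stump(X, y, D)
        eps = max(stump[0], 1e-10)       # weighted error of h_t (clipped to avoid log(0))
        alpha = 0.5 * np.log((1 - eps) / eps)
        pred = stump_predict(stump, X)
        # Increase the weights of misclassified examples, decrease the rest,
        # then renormalize so D_{t+1} is again a distribution.
        D = D * np.exp(-alpha * y * pred)
        D = D / D.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    # Weighted majority vote of the weak classifiers: sign(sum_t alpha_t h_t(x)).
    scores = sum(alpha * stump_predict(stump, X) for alpha, stump in ensemble)
    return np.sign(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # toy labels for illustration
    model = adaboost(X, y, T=20)
    print("training accuracy:", np.mean(predict(model, X) == y))
```

The line D = D * np.exp(-alpha * y * pred) is exactly the update rule above: for a correctly classified example y_i h_t(x_i) = +1, so its weight is multiplied by e^(−α_t) < 1, while a misclassified example has y_i h_t(x_i) = −1 and its weight is multiplied by e^(+α_t) > 1.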