Rutgers University CS 536 - Machine Learning

INTRODUCTIONTOMachineLearningETHEM ALPAYDIN© The MIT Press, [email protected]://www.cmpe.boun.edu.tr/~ethem/i2mlLecture Slides forCHAPTER15:CombiningMultipleLearnersLecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.0)3Rationale No Free Lunch thm: There is no algorithm that is always the most accurate Generate a group of base-learners which when combined has higher accuracy Different learners use different Algorithms Hyperparameters Representations (Modalities) Training sets SubproblemsLecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.0)4Voting Linear combination ClassificationLecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.0)5 Bayesian perspective: If djare iid Bias does not change, variance decreases by L Average over randomnessLecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.0)6Error-Correcting Output Codes K classes; L problems (Dietterich and Bakiri, 1995) Code matrix W codes classes in terms of learners One per classL=K PairwiseL=K(K-1)/2Lecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.0)7 Full code L=2(K-1)-1 With reasonable L, find W such that the Hamming distance btw rows and columns are maximized. Voting scheme Subproblems may be more difficult than one-per-KLecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.0)8Bagging  Use bootstrapping to generate L training sets and train one base-learner with each (Breiman, 1996) Use voting (Average or median with regression) Unstable algorithms profit from baggingLecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.0)9AdaBoostGenerate a sequence of base-learners each focusing on previous one’s errors(Freund and Schapire, 1996)Lecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.0)10Mixture of ExpertsVoting where weights are input-dependent (gating)(Jacobs et al., 1991)Experts or gating can be nonlinearLecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.0)11Stacking Combiner f () is another learner (Wolpert, 1992)Lecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.0)12CascadingUse djonly if preceding ones are not confidentCascade learners in order of

