Midterm Review
Machine Learning 10-601
Tom M. Mitchell
Machine Learning Department, Carnegie Mellon University
October 25, 2011

Logistics
– See practice exams on our website.
– Midterm is in class, October 27.
– Midterm is open book, open notes, NO computers, NO internet.
– Covers all material presented up through today's class.

Some Topics We've Covered
– Decision trees: entropy, mutual information, overfitting
– Probability basics: Bayes rule, MLE, MAP, conditional independence
– Naive Bayes: conditional independence, number of parameters to estimate, decision surface
– Logistic regression: form of P(Y|X), generative vs. discriminative
– Linear regression: minimizing sum of squared errors (why?), regularization ~ MAP
– Sources of error: unavoidable error, bias, variance
– Overfitting, and avoiding it: priors over H, cross validation, PAC theory (probabilistic bound on overfitting)
– Bayesian networks: factored representation of the joint distribution, conditional independence assumptions, D-separation, inference in Bayes nets, learning from fully/partly observed data
– PAC learning: sample complexity, probabilistic bounds on error_train - error_true, VC dimension

Understanding/Comparing Learning Methods
For each of Naive Bayes and Logistic Regression, be able to describe:
• Form of learned model (inputs, outputs)
• Optimization objective
• Algorithm
• Assumptions
• Guarantees?
• Decision boundary
• Generative or discriminative?

Four Fundamentals for ML

1. Learning is an optimization problem
– Many algorithms are best understood as optimization algorithms.
– What objective do they optimize, and how? Local minima?
– Gradient descent/ascent is a general fallback approach (see the logistic regression sketch at the end of this section).

2. Learning is a parameter estimation problem
– The more training data, the more accurate the estimates.
– MLE, MAP, M(Conditional)LE, ...
– To measure the accuracy of the learned model, we must use test (not training) data.

3. Error arises from three sources
– Unavoidable error, bias, and variance.
– PAC learning theory gives a probabilistic bound on overfitting: error_true - error_train.

Bias and Variance of Estimators
– Given some estimator Y for some parameter θ, note that Y is a random variable (why? it is computed from a random training sample).
– Bias of estimator Y: E[Y] - θ
– Variance of estimator Y: E[(Y - E[Y])^2]
– Consider the case where θ is the probability of "heads" for my coin, and Y is the proportion of heads observed in 3 flips.
– Consider the case where θ is the vector of correct parameters for the learner, and Y is the parameter vector output by the learning algorithm.
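To make the coin example concrete, the following is a small simulation sketch (not part of the original slides) that empirically checks the bias and variance of Y, the proportion of heads in 3 flips. The true value of θ, the random seed, and the number of repetitions are illustrative assumptions.

import numpy as np

# Illustrative sketch: estimate the bias and variance of the estimator
# Y = proportion of heads observed in n_flips coin tosses.
rng = np.random.default_rng(0)

theta = 0.7         # assumed true P(heads); any value in (0, 1) works
n_flips = 3         # sample size used by the estimator Y
n_trials = 100_000  # number of simulated repetitions of the experiment

# Each row is one experiment of n_flips tosses; Y is that experiment's proportion of heads.
flips = rng.random((n_trials, n_flips)) < theta
Y = flips.mean(axis=1)

bias = Y.mean() - theta   # estimate of E[Y] - theta (should be near 0: Y is unbiased)
variance = Y.var()        # estimate of E[(Y - E[Y])^2]
print(f"empirical bias     = {bias:+.4f}")
print(f"empirical variance = {variance:.4f}  (theory: theta*(1-theta)/n = {theta*(1-theta)/n_flips:.4f})")

The simulation agrees with the closed form: the sample proportion is an unbiased estimator of θ, and its variance θ(1-θ)/n shrinks as the number of flips grows, which is the same "more data, more accurate estimates" point made in fundamental 2.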
4. Practical learning requires making assumptions
– Why? Without assumptions (an inductive bias), a learner has no basis for generalizing beyond the training data.
– The form of the f: X → Y, or the P(Y|X), to be learned
– Priors on parameters: MAP, regularization
– Conditional independence: Naive Bayes, Bayes nets
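As a concrete instance of fundamentals 1 and 2, here is a minimal sketch (mine, not from the slides) of logistic regression trained by gradient ascent on the conditional log-likelihood, i.e. M(Conditional)LE, with an optional L2 penalty playing the role of the "regularization ~ MAP" term. The synthetic data, learning rate, and penalty strength are made-up illustrations, not values from the course.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, lam=0.01, n_steps=500):
    """Gradient ascent on the L2-regularized conditional log-likelihood
    (equivalently, MAP estimation with a Gaussian prior on the weights)."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_steps):
        p = sigmoid(X @ w + b)            # P(Y=1 | X) under the current parameters
        grad_w = X.T @ (y - p) - lam * w  # gradient of log-likelihood minus L2 penalty
        grad_b = np.sum(y - p)
        w += lr * grad_w / n              # ascent step on the (averaged) gradient
        b += lr * grad_b / n
    return w, b

# Toy illustration on synthetic, roughly linearly separable data (made up).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)

w, b = train_logistic_regression(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(float)
print("training accuracy:", (preds == y).mean())
print("learned weights:", w, "bias:", b)

Setting lam to zero recovers plain conditional MLE; a positive lam corresponds to a zero-mean Gaussian prior on the weights, which is the MAP view of regularization mentioned in the topics list. Because the model is trained to maximize P(Y|X) directly rather than model P(X|Y)P(Y), it is discriminative, in contrast to Naive Bayes.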

