ILLINOIS CS 446 - 090717.2


CS 446: Machine Learning, Fall 2017
Lecture 4: Overfitting, Naive Bayes, Maximum Likelihood
Lecturer: Sanmi Koyejo    Scribe: Liqun Lu    Sep 7th, 2017

Review of generalization and Bayes optimal

Generalization. The goal of generalization is to find a function (algorithm) that has good prediction accuracy (performance) on new data; that is, its risk $R(h_n, D_{\text{test}})$, where $D_{\text{test}} \neq D_{\text{train}}$, satisfies

    $R(h_n, D_{\text{test}}) \approx R(h_n, P)$.

Bayes optimal. The Bayes optimal classifier is defined as

    $f^* = \arg\min_{f \in \mathcal{F}} R(f, P)$.

The error is determined by two parts (see Figure 1): (a) representation error; (b) statistical error + optimization error.

[Figure 1: Representation error and statistical error]

Overfitting / Underfitting

Overfitting. Overfitting means a function $h_n$ has good training performance but bad test performance, i.e., $h_n$ does not generalize:

    $R(h_n, D_{\text{train}}) \ll R(h_n, D_{\text{test}})$.

Generally, overfitting implies that the hypothesis class $\mathcal{H}$ is too big: the functional form one can choose is too flexible (e.g., it has excessive parameters), so it can fit the training data very well but predict poorly on the test data. An example is the 1-NN classifier, which in many cases has perfect training performance but can have bad test performance (a sketch appears at the end of these notes). To avoid overfitting, one generally makes the hypothesis class smaller.

Underfitting. Underfitting is the opposite of overfitting and is usually hard to detect; it implies the size of $\mathcal{H}$ is too small. Unlike with overfitting, there is no obvious signature: it is very rare that the test performance is better than the training performance, i.e., that $R(h_n, D_{\text{train}}) \gg R(h_n, D_{\text{test}})$. However, one sign of potential underfitting is

    $R(h_n, D_{\text{train}}) \approx R(h_n, D_{\text{test}})$.

An example is the constant classifier $h_n \equiv 1$, whose training performance is nearly the same as its test performance.

Bias / Variance

Bias and variance have the same meaning for classifiers as for estimators. Suppose $\hat{x}$ is an estimator of $x \sim P$. The bias is defined as

    $\operatorname{Bias}(\hat{x}) = \mathbb{E}[\hat{x}] - \theta$,

where $\theta$ is the true value, which is typically unknown. The variance of $\hat{x}$ is

    $\operatorname{Var}(\hat{x}) = \mathbb{E}\big[(\hat{x} - \mathbb{E}[\hat{x}])^2\big] \approx \frac{1}{n} \sum_{i=1}^{n} \big(\hat{x}^{(i)} - \mathbb{E}[\hat{x}]\big)^2$

(a small simulation of these definitions appears at the end of these notes).
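To make the risk quantities concrete, here is a minimal sketch of the empirical risk $R(h, D)$ as the average 0-1 loss of a hypothesis on a dataset. The notes give no code; the helper name empirical_risk and the NumPy-based setup are illustrative assumptions.

```python
import numpy as np

def empirical_risk(h, X, y):
    """Empirical risk R(h, D): average 0-1 loss of hypothesis h on D = (X, y).

    h is assumed to be a callable mapping an array of inputs to predicted labels.
    """
    return np.mean(h(X) != y)
```

Evaluating the same $h_n$ on $D_{\text{train}}$ and $D_{\text{test}}$ with such a helper is exactly the comparison used above to diagnose over- and underfitting.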
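The 1-NN overfitting example can be demonstrated end to end. This is a hedged sketch using scikit-learn; the synthetic dataset, split sizes, and random seeds are arbitrary choices, not from the notes. With 10% label noise, 1-NN still memorizes the training set, so the training risk is 0 while the test risk is noticeably higher.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic binary classification task with 10% label noise.
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

h = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
print("train risk:", np.mean(h.predict(X_tr) != y_tr))  # 0.0: each training point is its own nearest neighbor
print("test risk: ", np.mean(h.predict(X_te) != y_te))  # noticeably larger, i.e. overfitting
```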
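For the underfitting side, the constant classifier $h_n \equiv 1$ can be sketched the same way, continuing from the variables in the previous snippet; DummyClassifier is scikit-learn's built-in constant predictor. Its training and test risks come out nearly equal, matching the $R(h_n, D_{\text{train}}) \approx R(h_n, D_{\text{test}})$ signature from the notes.

```python
from sklearn.dummy import DummyClassifier

# h_n ≡ 1: always predict class 1, regardless of the input.
h0 = DummyClassifier(strategy="constant", constant=1).fit(X_tr, y_tr)
print("train risk:", np.mean(h0.predict(X_tr) != y_tr))
print("test risk: ", np.mean(h0.predict(X_te) != y_te))  # ≈ train risk: a sign of underfitting
```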
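Finally, a small simulation of the bias and variance definitions, taking the sample mean of Gaussian draws as the estimator $\hat{x}$; the distribution, sample size, and replication count are assumptions for illustration. $\mathbb{E}[\hat{x}]$ is approximated by averaging over many replications.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0          # true value θ (known here only because we simulate)
n_rep, n = 5000, 30  # number of replications, samples per estimate

# x̂ = sample mean of n draws from N(θ, 1), replicated n_rep times.
estimates = np.array([rng.normal(theta, 1.0, size=n).mean() for _ in range(n_rep)])

bias = estimates.mean() - theta                     # Bias(x̂) = E[x̂] - θ, ≈ 0 here
var = np.mean((estimates - estimates.mean())**2)    # Var(x̂) = E[(x̂ - E[x̂])²], ≈ 1/n here
print(f"bias ≈ {bias:.4f}, variance ≈ {var:.4f}")
```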


