Readings listed in class website

Gaussians, Linear Regression, Bias-Variance Tradeoff
Machine Learning 10-701/15-781
Carlos Guestrin
Carnegie Mellon University
January 22nd, 2007

Maximum Likelihood Estimation
- Data: observed set D of H Heads and T Tails
- Hypothesis: Binomial distribution
- Learning $\theta$ is an optimization problem: what's the objective function?
- MLE: choose $\theta$ that maximizes the probability of observed data:
  $\hat{\theta} = \arg\max_\theta P(D \mid \theta)$

Bayesian Learning for Thumbtack
- Likelihood function is simply Binomial: $P(D \mid \theta) = \theta^H (1-\theta)^T$
- What about prior? Represents expert knowledge
- Simple posterior form: conjugate priors give a closed-form representation of the posterior
- For Binomial, the conjugate prior is the Beta distribution

Posterior distribution
- Prior: $\theta \sim \mathrm{Beta}(\beta_H, \beta_T)$. Data: H heads and T tails
- Posterior distribution: $\theta \mid D \sim \mathrm{Beta}(\beta_H + H, \beta_T + T)$

MAP: Maximum a posteriori approximation
- As more data is observed, the Beta posterior becomes more certain
- MAP: use the most likely parameter,
  $\hat{\theta}_{MAP} = \arg\max_\theta P(\theta \mid D)$

What about continuous variables?
- Billionaire says: If I am measuring a continuous variable, what can you do for me?
- You say: Let me tell you about Gaussians...

Some properties of Gaussians
- Affine transformation (multiplying by a scalar and adding a constant):
  $X \sim N(\mu, \sigma^2)$, $Y = aX + b$ $\Rightarrow$ $Y \sim N(a\mu + b, a^2\sigma^2)$
- Sum of (independent) Gaussians:
  $X \sim N(\mu_X, \sigma^2_X)$, $Y \sim N(\mu_Y, \sigma^2_Y)$, $Z = X + Y$ $\Rightarrow$ $Z \sim N(\mu_X + \mu_Y, \sigma^2_X + \sigma^2_Y)$

Learning a Gaussian
- Collect a bunch of data: hopefully i.i.d. samples, e.g., exam scores
- Learn parameters: mean $\mu$ and variance $\sigma^2$

MLE for Gaussian
- Probability of i.i.d. samples $D = \{x_1, \ldots, x_N\}$:
  $P(D \mid \mu, \sigma) = \prod_{i=1}^N \frac{1}{\sigma\sqrt{2\pi}} e^{-(x_i - \mu)^2 / 2\sigma^2}$
- Log-likelihood of data:
  $\ln P(D \mid \mu, \sigma) = -N \ln\left(\sigma\sqrt{2\pi}\right) - \sum_{i=1}^N \frac{(x_i - \mu)^2}{2\sigma^2}$

Your second learning algorithm: MLE for mean of a Gaussian
- What's the MLE for the mean? Set the derivative to zero:
  $\hat{\mu}_{MLE} = \frac{1}{N} \sum_{i=1}^N x_i$

MLE for variance
- Again, set the derivative to zero:
  $\hat{\sigma}^2_{MLE} = \frac{1}{N} \sum_{i=1}^N (x_i - \hat{\mu})^2$

Learning Gaussian parameters
- MLE: the sample mean and $\hat{\sigma}^2_{MLE}$ above
- BTW, the MLE for the variance of a Gaussian is biased: the expected result of estimation is not the true parameter!
- Unbiased variance estimator:
  $\hat{\sigma}^2_{unbiased} = \frac{1}{N-1} \sum_{i=1}^N (x_i - \hat{\mu})^2$

Bayesian learning of Gaussian parameters
- Conjugate priors: mean, Gaussian prior; variance, Wishart distribution
- Prior for mean: $\mu \sim N(\eta, \lambda^2)$

MAP for mean of Gaussian
- With known variance $\sigma^2$, the MAP estimate trades off the prior and the data:
  $\hat{\mu}_{MAP} = \dfrac{\frac{1}{\sigma^2}\sum_i x_i + \frac{\eta}{\lambda^2}}{\frac{N}{\sigma^2} + \frac{1}{\lambda^2}}$

Prediction of continuous variables
- Billionaire says: Wait, that's not what I meant!
- You say: Chill out, dude.
- He says: I want to predict a continuous variable for continuous inputs: I want to predict salaries from GPA.
- You say: I can regress that...

The regression problem
- Instances: $\langle x_j, t_j \rangle$
- Learn: mapping from $x$ to $t(x)$
- Hypothesis space: given basis functions $\{h_1, \ldots, h_K\}$, find coefficients $w = \{w_1, \ldots, w_K\}$ such that $t(x) \approx \sum_i w_i h_i(x)$
- Why is this called linear regression? The model is linear in the parameters
- Precisely: minimize the residual squared error,
  $w^* = \arg\min_w \sum_j \big(t_j - \sum_i w_i h_i(x_j)\big)^2$

The regression problem in matrix notation
- $t = Hw$: $H$ is the $N \times K$ matrix with entries $H_{jk} = h_k(x_j)$ ($N$ sensors/data points, $K$ basis functions), $t$ is the $N$-vector of measurements, and $w$ is the $K$-vector of weights
- $w^* = \arg\min_w (Hw - t)^\top (Hw - t)$

Regression solution = simple matrix operations
- $w^* = (H^\top H)^{-1} H^\top t$,
  where $H^\top H$ is a $K \times K$ matrix for $K$ basis functions and $H^\top t$ is a $K \times 1$ vector

But, why?
- Billionaire (again) says: Why sum squared error? You say: Gaussians, Dr. Gateson, Gaussians...
- Model: prediction is a linear function plus Gaussian noise,
  $t(x) = \sum_i w_i h_i(x) + \varepsilon$, $\varepsilon \sim N(0, \sigma^2)$
- Learn $w$ using MLE

Maximizing log likelihood
- Maximizing $\ln P(D \mid w, \sigma)$ is equivalent to minimizing the residual squared error
- Least-squares linear regression is MLE for Gaussians!!!
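Before the applications, a few short numeric sketches of the ideas above. None of this code is from the original slides; all data, constants, and parameter values are made up for illustration. First, the Beta-Binomial update from the thumbtack slides, assuming a symmetric Beta(2, 2) prior and an invented coin-flip record:

```python
from scipy import stats

# Made-up prior and data, for illustration only.
beta_H, beta_T = 2.0, 2.0    # Beta prior hyperparameters (expert knowledge)
H, T = 7, 3                  # observed heads and tails

# MLE: the theta maximizing P(D | theta) for a Binomial is the empirical fraction.
theta_mle = H / (H + T)

# Conjugacy: Beta prior + Binomial likelihood -> Beta posterior.
posterior = stats.beta(beta_H + H, beta_T + T)

# MAP: mode of the Beta posterior (defined when both parameters exceed 1).
theta_map = (beta_H + H - 1) / (beta_H + H + beta_T + T - 2)

print(theta_mle, theta_map, posterior.mean())   # 0.7, ~0.667, ~0.643
```

The MAP estimate lands between the MLE (0.7) and the prior mean (0.5), and the pull toward the prior shrinks as more data is observed, matching the slides' claim that the Beta becomes more certain.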
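Next, a quick Monte Carlo check of the two Gaussian properties (affine transformation, and sum of independent Gaussians), with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0                 # arbitrary illustrative parameters
a, b = 3.0, -1.0

X = rng.normal(mu, sigma, size=1_000_000)
Y = a * X + b                        # affine transformation

# Empirical moments should match N(a*mu + b, a^2 * sigma^2).
print(Y.mean(), a * mu + b)          # both ~2.0
print(Y.var(), a**2 * sigma**2)      # both ~36.0

# Sum of independent Gaussians: means and variances add.
X2 = rng.normal(0.5, 1.5, size=1_000_000)
Z = X + X2
print(Z.var(), sigma**2 + 1.5**2)    # both ~6.25
```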
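The bias of the MLE variance estimator is also easy to see empirically: averaged over many sampled datasets, the MLE underestimates the true variance by a factor of $(N-1)/N$, while the $1/(N-1)$ estimator is on target. A minimal sketch with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, true_sigma2 = 0.0, 4.0
N = 10                               # small N makes the bias visible

# Repeat the estimation over many sampled datasets to expose the bias.
trials = 100_000
biased, unbiased = [], []
for _ in range(trials):
    x = rng.normal(true_mu, np.sqrt(true_sigma2), size=N)
    mu_hat = x.mean()                # MLE for the mean
    ss = ((x - mu_hat) ** 2).sum()
    biased.append(ss / N)            # MLE variance estimator
    unbiased.append(ss / (N - 1))    # unbiased variance estimator

print(np.mean(biased))               # ~ (N-1)/N * true_sigma2 = 3.6
print(np.mean(unbiased))             # ~ 4.0
```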
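Finally, the closed-form regression solution $w^* = (H^\top H)^{-1} H^\top t$ on synthetic data, using polynomial basis functions as one possible choice of $h_k$ (the target function and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up data: t = sin(2*pi*x) + Gaussian noise.
N = 50
x = rng.uniform(0, 1, size=N)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=N)

# Polynomial basis functions h_k(x) = x^k, k = 0..K-1.
K = 4
H = np.vander(x, K, increasing=True)   # N x K design matrix, H[j, k] = h_k(x_j)

# Normal equations from the slides: w* = (H^T H)^{-1} H^T t.
w = np.linalg.solve(H.T @ H, H.T @ t)

# lstsq solves the same least-squares problem with better numerical stability.
w_lstsq, *_ = np.linalg.lstsq(H, t, rcond=None)
assert np.allclose(w, w_lstsq)

# Predict at a new input: t(x) ~ sum_k w_k h_k(x).
print(np.vander([0.3], K, increasing=True) @ w)
```

Solving the normal equations directly mirrors the slide formula; in practice a least-squares solver is preferred over forming the explicit inverse.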
Applications Corner 1
- Predict stock value over time from past values and other relevant variables, e.g., weather, demands, etc.

Applications Corner 2
- Measure temperatures at some locations; predict temperatures throughout the environment
- [Figure: building floorplan with 54 numbered sensor locations across offices, conference room, storage, lab, kitchen, and server room; Guestrin et al. '04]

Applications Corner 3
- Predict when a sensor will fail, based on several variables: age, chemical exposure, number of hours used, ...

Announcements
- Readings associated with each class: see course website for specific sections, extra links, and further details; visit the website frequently
- Recitations: Thursdays, 5:30-6:50, in Wean Hall 5409
- Special recitation on Matlab: Jan. 24 (Wed.), 5:30-6:50pm, NSH 1305

Bias-Variance tradeoff: Intuition
- Model too "simple": does not fit the data well; a biased solution
- Model too complex: small changes to the data make the solution change a lot; a high-variance solution

(Squared) Bias of learner
- Given dataset D with m samples, learn function h(x)
- If you sample a different dataset, you will learn a different h(x)
- Expected hypothesis: $E_D[h(x)]$
- Bias: difference between what you expect to learn and the truth
- Measures how well you expect to represent the true solution
- Decreases with more complex model

Variance of learner
- Given a dataset D with m samples, you learn function h(x)
- If you sample a different dataset, you will learn a different h(x)
- Variance: difference between what you expect to learn and what you learn from a particular dataset
- Measures how sensitive the learner is to the specific dataset
- Decreases with simpler model

Bias-Variance Tradeoff
- Choice of hypothesis class introduces learning bias: more complex class, less bias; more complex class, more variance

Bias-Variance decomposition of error
- Consider a simple regression problem $f: X \to T$, with $t = f(x) = g(x) + \varepsilon$, deterministic $g(x)$, and noise $\varepsilon \sim N(0, \sigma^2)$
- Collect some data and learn a function h(x)
- What are the sources of prediction error?

Sources of error 1: noise
- What if we have a perfect learner and infinite data?
- Even if our learning solution h(x) satisfies h(x) = g(x), we still have remaining, unavoidable error of $\sigma^2$ due to noise

Sources of error 2: finite data
- What if we have an imperfect learner, or only m training examples?
- What is our expected squared error per example? Expectation taken over random training sets D of size m, drawn from distribution P(X, T)

Bias-Variance Decomposition of Error (Bishop, Chapter 3)
- Assume target function $t = f(x) = g(x) + \varepsilon$
- Then the expected squared error over fixed-size training sets D drawn from P(X, T) decomposes as
  $E_D\big[(t - h(x))^2\big] = \sigma^2 + \big(g(x) - E_D[h(x)]\big)^2 + E_D\big[(h(x) - E_D[h(x)])^2\big]$,
  i.e., unavoidable noise + (squared) bias + variance
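To connect the decomposition to something runnable, here is a small simulation sketch (not from the slides; the target $g(x) = \sin(2\pi x)$, noise level, and dataset size are made up): it resamples training sets D, fits polynomials of increasing degree, and estimates the learner's squared bias and variance averaged over test inputs:

```python
import numpy as np

rng = np.random.default_rng(3)
g = lambda x: np.sin(2 * np.pi * x)     # deterministic target g(x)
sigma = 0.3                             # noise: t = g(x) + eps, eps ~ N(0, sigma^2)
m, trials = 20, 2000                    # training-set size, number of resampled datasets
x_test = np.linspace(0.05, 0.95, 50)    # inputs where bias/variance are estimated

for degree in (1, 3, 5):                # simple -> complex hypothesis classes
    preds = np.empty((trials, x_test.size))
    for i in range(trials):
        x = rng.uniform(0, 1, size=m)
        t = g(x) + rng.normal(0, sigma, size=m)
        coeffs = np.polyfit(x, t, degree)      # least-squares polynomial fit
        preds[i] = np.polyval(coeffs, x_test)
    expected_h = preds.mean(axis=0)                    # E_D[h(x)]
    bias2 = ((expected_h - g(x_test)) ** 2).mean()     # squared bias, avg over x
    variance = preds.var(axis=0).mean()                # variance, avg over x
    print(degree, round(bias2, 4), round(variance, 4))
```

As the slides predict, the squared bias falls and the variance grows as the hypothesis class becomes more complex, while the $\sigma^2$ noise term remains no matter how good the learner is.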