Readings: basics and Gaussians, Koller & Friedman 1.1-1.2 (handed out in class); bias-variance tradeoff, Bishop chapter 9.1-9.2.

Gaussians, Linear Regression, Bias-Variance Tradeoff
Machine Learning 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
January 23rd, 2006

Announcements
- Recitations stay on Thursdays, 5-6:30pm, in Wean 5409.
- Special Matlab recitation: Wed, Jan 25, 5:00-7:00pm, in NSH 3305.
- First homework: programming part and analytic part.
  - Remember the collaboration policy: you can discuss the questions, but you need to write your own solutions and code.
  - Out later today; due Mon, Feb 6th, at the beginning of class. Start early!

Maximum Likelihood Estimation
- Data: observed set D of H heads and T tails.
- Hypothesis: binomial distribution with parameter θ.
- Learning is an optimization problem: what is the objective function?
- MLE: choose the θ that maximizes the probability of the observed data; for the binomial this gives θ_MLE = H / (H + T).

Bayesian Learning for the Thumbtack
- The likelihood function is simply the binomial.
- What about the prior? It represents expert knowledge.
- Simple posterior form: conjugate priors.

Conjugate priors
- Closed-form representation of the posterior.
- For the binomial, the conjugate prior is the Beta distribution.

Posterior distribution
- Prior: Beta(β_H, β_T). Data: H heads and T tails.
- Posterior distribution: Beta(β_H + H, β_T + T).

MAP: Maximum a posteriori approximation
- As more data is observed, the Beta becomes more certain (more peaked).
- MAP: use the most likely parameter, θ_MAP = argmax_θ P(θ | D).

What about continuous variables?
- Billionaire says: "If I am measuring a continuous variable, what can you do for me?"
- You say: "Let me tell you about Gaussians."

Some properties of Gaussians
- Affine transformation (multiplying by a scalar and adding a constant): if X ~ N(μ, σ²) and Y = aX + b, then Y ~ N(aμ + b, a²σ²).
- Sum of (independent) Gaussians: if X ~ N(μ_X, σ²_X) and Y ~ N(μ_Y, σ²_Y), then Z = X + Y ~ N(μ_X + μ_Y, σ²_X + σ²_Y).

Learning a Gaussian
- Collect a bunch of data (hopefully i.i.d. samples, e.g., exam scores).
- Learn the parameters: mean μ and variance σ².

MLE for a Gaussian
- Probability of i.i.d. samples x_1, ..., x_N: P(D | μ, σ) = (1/(σ√(2π)))^N ∏_i exp(−(x_i − μ)² / (2σ²)).
- Work with the log-likelihood of the data and maximize it.

MLE for the mean of a Gaussian (your second learning algorithm)
- What is the MLE for the mean? Set the derivative of the log-likelihood to zero: μ_MLE = (1/N) Σ_i x_i.

MLE for the variance
- Again, set the derivative to zero: σ²_MLE = (1/N) Σ_i (x_i − μ_MLE)².

Learning Gaussian parameters
- MLE as above. BTW, the MLE for the variance of a Gaussian is biased: the expected result of the estimation is not the true parameter.
- Unbiased variance estimator: σ̂² = (1/(N−1)) Σ_i (x_i − μ_MLE)².

Bayesian learning of Gaussian parameters
- Conjugate priors: for the mean, a Gaussian prior; for the variance, the Wishart distribution (on the precision).
- Prior for the mean: a Gaussian.
- MAP for the mean of a Gaussian: a combination of the prior mean and the sample mean, weighted by their precisions.

Prediction of continuous variables
- Billionaire says: "Wait, that's not what I meant!"
- You say: "Chill out, dude."
- He says: "I want to predict a continuous variable from continuous inputs: I want to predict salaries from GPA."
- You say: "I can regress that."

The regression problem
- Instances: ⟨x_j, t_j⟩.
- Learn: a mapping from x to t(x).
- Hypothesis space: given basis functions h_1, ..., h_K, find coefficients w = {w_1, ..., w_K}.
- Why is this called linear regression? The model is linear in the parameters w.
- Precisely: minimize the residual (sum of squared) error.

The regression problem in matrix notation
- t = Hw + noise, where t is the N×1 vector of measurements (one per sensor/example), H is the N×K matrix of basis functions evaluated at the inputs (H_jk = h_k(x_j)), and w is the K×1 weight vector.

Regression solution: simple matrix operations
- w* = (HᵀH)⁻¹ Hᵀ t, where HᵀH is a K×K matrix (for K basis functions) and Hᵀt is a K×1 vector.

But why?
- Billionaire (again) says: "Why sum squared error?"
- You say: "Gaussians, Dr. Gateson, Gaussians."
- Model: the prediction is a linear function plus Gaussian noise, t(x) = Σ_i w_i h_i(x) + ε, with ε ~ N(0, σ²).
- Learn w using MLE.

Maximizing the log-likelihood
- Maximizing the log-likelihood is equivalent to minimizing the sum of squared errors: least squares!
- Linear regression is MLE for Gaussians!

Bias-Variance tradeoff: intuition
- Model too "simple": does not fit the data well; a biased solution.
- Model too complex: small changes to the data make the solution change a lot; a high-variance solution.

(Squared) Bias of a learner
- Suppose you are given a dataset D with m samples from some distribution.
- You learn a function h(x) from the data D.
- If you sample a different dataset, you will learn a different h(x).
- Expected hypothesis: E_D[h(x)].
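E_D[h(x)] is easy to approximate by simulation: repeatedly draw a training set D of size m, learn h(x) on it, and average the resulting predictions over the datasets. Below is a minimal Python sketch of that procedure; the true function g(x) = sin(x), the noise level, the polynomial hypothesis class, and numpy.polyfit as the learner are all illustrative assumptions, not part of the lecture.

```python
import numpy as np

def true_g(x):
    # Hypothetical deterministic target g(x); the lecture leaves g abstract.
    return np.sin(x)

rng = np.random.default_rng(0)
m, n_datasets, degree, sigma = 20, 500, 3, 0.3   # illustrative choices
x_grid = np.linspace(0.0, 2.0 * np.pi, 100)

preds = np.empty((n_datasets, x_grid.size))
for d in range(n_datasets):
    # Draw one training set D of size m from P(X, T): t = g(x) + Gaussian noise.
    x = rng.uniform(0.0, 2.0 * np.pi, size=m)
    t = true_g(x) + rng.normal(0.0, sigma, size=m)
    # Learn h_D(x): here, a least-squares polynomial fit of the chosen degree.
    coeffs = np.polyfit(x, t, deg=degree)
    preds[d] = np.polyval(coeffs, x_grid)

expected_h = preds.mean(axis=0)                # Monte Carlo estimate of E_D[h(x)]
bias_sq = (expected_h - true_g(x_grid)) ** 2   # (E_D[h(x)] - g(x))^2, defined next
variance = preds.var(axis=0)                   # spread of h_D(x) across datasets
print(f"mean bias^2 = {bias_sq.mean():.4f}, mean variance = {variance.mean():.4f}")
```

In this sketch, raising `degree` lowers the squared bias and raises the variance, previewing the definitions and the tradeoff on the slides that follow.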
Bias
- Bias: the difference between what you expect to learn, E_D[h(x)], and the truth.
- Measures how well you can expect to represent the true solution.
- Decreases with a more complex model.

Variance of a learner
- Suppose you are given a dataset D with m samples from some distribution.
- You learn a function h(x) from the data D.
- If you sample a different dataset, you will learn a different h(x).
- Variance: the difference between what you expect to learn and what you learn from a particular dataset.
- Measures how sensitive the learner is to the specific dataset.
- Decreases with a simpler model.

Bias-Variance Tradeoff
- The choice of hypothesis class introduces a learning bias.
- More complex class: less bias.
- More complex class: more variance.

Bias-Variance decomposition of error
- Consider a simple regression problem f: X → T, with t = f(x) = g(x) + ε, where the noise ε ~ N(0, σ²) and g(x) is deterministic.
- Collect some data and learn a function h(x).
- What are the sources of prediction error?

Sources of error 1: noise
- What if we had a perfect learner and infinite data?
- Our learning solution h(x) would satisfy h(x) = g(x).
- We would still have a remaining, unavoidable error of σ² due to the noise.

Sources of error 2: finite data
- What if we have an imperfect learner, or only m training examples?
- What is our expected squared error per example?
- The expectation is taken over random training sets D of size m, drawn from the distribution P(X, T).

Bias-Variance Decomposition of Error (Bishop, chapter 9.1-9.2)
- Assume the target function is t = f(x) = g(x) + ε, with ε ~ N(0, σ²).
- Then the expected squared error, over fixed-size training sets D drawn from P(X, T), can be expressed as the sum of three components:
  E[(t − h_D(x))²] = σ² + (g(x) − E_D[h_D(x)])² + E_D[(h_D(x) − E_D[h_D(x)])²]
  = noise + bias² + variance.

What you need to know
- Gaussian estimation: MLE (a small numeric sketch follows below), Bayesian learning, MAP.
- Regression: basis functions as features; optimizing the sum squared error (a least-squares sketch also follows below); the relationship between regression and Gaussians.
- Bias-Variance trade-off.
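As a concrete companion to the "Learning Gaussian parameters" slides above, here is a small Python sketch of the Gaussian MLE for the mean and variance, together with the unbiased (divide by N−1) variance estimator. The "exam score" parameters and sample size are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true, sigma_true, N = 80.0, 10.0, 50             # made-up "exam score" setup
x = rng.normal(mu_true, sigma_true, size=N)          # i.i.d. samples x_1, ..., x_N

mu_mle = x.mean()                                    # MLE of the mean: (1/N) * sum x_i
var_mle = ((x - mu_mle) ** 2).mean()                 # MLE of the variance: divide by N (biased)
var_unbiased = ((x - mu_mle) ** 2).sum() / (N - 1)   # unbiased estimator: divide by N - 1

print(f"mu_MLE            = {mu_mle:.2f}")
print(f"sigma^2 (MLE)     = {var_mle:.2f}   # too small on average")
print(f"sigma^2 (N-1)     = {var_unbiased:.2f}")
```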
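And a sketch of the "Regression solution: simple matrix operations" slide: build the N×K basis matrix H with H_jk = h_k(x_j) and solve the normal equations w = (HᵀH)⁻¹Hᵀt, which, per the lecture, is also the MLE under Gaussian noise. The polynomial basis and the synthetic data are assumptions for illustration; in practice np.linalg.lstsq is numerically preferable to forming the inverse explicitly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: N inputs x_j and noisy targets t_j (illustrative only).
N, noise_sigma = 30, 0.2
x = rng.uniform(-1.0, 1.0, size=N)
t = 1.0 - 2.0 * x + 0.5 * x ** 2 + rng.normal(0.0, noise_sigma, size=N)

# Basis functions h_1..h_K: a polynomial basis h_k(x) = x^(k-1), with K = 3.
K = 3
H = np.vander(x, K, increasing=True)       # N x K matrix, H[j, k] = x_j ** k

# Least-squares solution of t ~ H w, i.e. w = (H^T H)^{-1} H^T t.
w = np.linalg.solve(H.T @ H, H.T @ t)      # solve the K x K normal equations
# Numerically safer equivalent: w, *_ = np.linalg.lstsq(H, t, rcond=None)

residual = t - H @ w
print("learned weights:", np.round(w, 3))
print("sum of squared residuals:", round(float(residual @ residual), 4))
```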