Predicting Real-valued Outputs: An Introduction to Regression

Andrew W. Moore
Professor, School of Computer Science, Carnegie Mellon University
www.cs.cmu.edu/~awm
[email protected]
412-268-7599

Copyright © 2001, 2003, Andrew W. Moore

Note to other teachers and users of these slides: Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew's tutorials: http://www.cs.cmu.edu/~awm/tutorials. Comments and corrections gratefully received.

This is reordered material from the Neural Nets lecture and the "Favorite Regression Algorithms" lecture.


Single-Parameter Linear Regression

Linear Regression

Linear regression assumes that the expected value of the output given an input, E[y|x], is linear. The simplest case is Out(x) = wx for some unknown w. Given the data, we can estimate w.

DATASET:

  inputs        outputs
  x_1 = 1       y_1 = 1
  x_2 = 3       y_2 = 2.2
  x_3 = 2       y_3 = 2
  x_4 = 1.5     y_4 = 1.9
  x_5 = 4       y_5 = 3.1

[Figure: the datapoints plotted with a fitted line of slope w through the origin.]

1-Parameter Linear Regression

Assume that the data is formed by

  y_i = w x_i + noise_i

where:
• the noise signals are independent
• the noise has a normal distribution with mean 0 and unknown variance σ²

Then p(y|w,x) has a normal distribution with
• mean wx
• variance σ²

Bayesian Linear Regression

p(y|w,x) = Normal(mean wx, variance σ²)

We have a set of datapoints (x_1,y_1), (x_2,y_2), ..., (x_n,y_n) which are EVIDENCE about w. We want to infer w from the data:

  p(w | x_1, x_2, ..., x_n, y_1, y_2, ..., y_n)

• You can use BAYES rule to work out a posterior distribution for w given the data.
• Or you could do Maximum Likelihood Estimation.

Maximum Likelihood Estimation of w

Asks the question: "For which value of w is this data most likely to have happened?"

⟺ For what w is p(y_1, y_2, ..., y_n | x_1, x_2, ..., x_n, w) maximized?

⟺ For what w is

  \prod_{i=1}^n p(y_i \mid w, x_i)

maximized?

⟺ For what w is

  \prod_{i=1}^n \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{1}{2} \left( \frac{y_i - w x_i}{\sigma} \right)^2 \right)

maximized?

⟺ For what w is

  \sum_{i=1}^n -\frac{1}{2} \left( \frac{y_i - w x_i}{\sigma} \right)^2

maximized? (Take logs; the constant factor does not depend on w.)

⟺ For what w is

  \sum_{i=1}^n (y_i - w x_i)^2

minimized?

Linear Regression

The maximum likelihood w is the one that minimizes the sum of squares of the residuals:

  E(w) = \sum_i (y_i - w x_i)^2 = \sum_i y_i^2 - 2w \sum_i x_i y_i + w^2 \sum_i x_i^2

We want to minimize a quadratic function of w.

[Figure: E(w) plotted against w — a parabola with a single minimum.]

Linear Regression

It is easy to show the sum of squares is minimized when

  w = \frac{\sum_i x_i y_i}{\sum_i x_i^2}

The maximum likelihood model is Out(x) = wx. We can use it for prediction.

Note: in Bayesian stats you'd have ended up with a probability distribution over w, and predictions would have given a probability distribution over the expected output. It is often useful to know your confidence; maximum likelihood can give some kinds of confidence too.

[Figure: p(w) plotted against w.]
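As a quick illustration (not from the original deck), here is a minimal Python sketch, assuming NumPy is available, that computes the closed-form single-parameter MLE above on the five-point dataset slide:

```python
import numpy as np

# The five datapoints from the dataset slide.
x = np.array([1.0, 3.0, 2.0, 1.5, 4.0])
y = np.array([1.0, 2.2, 2.0, 1.9, 3.1])

# Closed-form maximum-likelihood estimate for the model Out(x) = w*x:
#   w = sum_i(x_i * y_i) / sum_i(x_i^2)
w = np.sum(x * y) / np.sum(x * x)
print(f"MLE w = {w:.4f}")  # slope of the fitted line through the origin

# Using the model for prediction at a new input.
x_new = 2.5
print(f"Out({x_new}) = {w * x_new:.4f}")
```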
Multivariate Linear Regression

Multivariate Regression

What if the inputs are vectors? The dataset has the form:

  x_1  y_1
  x_2  y_2
  x_3  y_3
  :    :
  x_R  y_R

[Figure: a 2-d input example — datapoints scattered over the (x1, x2) plane with their output values.]

Write the matrices X and Y thus:

  X = \begin{pmatrix} \mathbf{x}_1^T \\ \mathbf{x}_2^T \\ \vdots \\ \mathbf{x}_R^T \end{pmatrix}
    = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1m} \\ x_{21} & x_{22} & \cdots & x_{2m} \\ \vdots & & & \vdots \\ x_{R1} & x_{R2} & \cdots & x_{Rm} \end{pmatrix},
  \qquad
  Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_R \end{pmatrix}

(There are R datapoints; each input has m components.)

The linear regression model assumes a vector w such that

  Out(\mathbf{x}) = \mathbf{w}^T \mathbf{x} = w_1 x[1] + w_2 x[2] + \cdots + w_m x[m]

The maximum likelihood w is

  \mathbf{w} = (X^T X)^{-1} (X^T Y)

IMPORTANT EXERCISE: PROVE IT!!!!!

Multivariate Regression (con't)

The maximum likelihood w is w = (X^T X)^{-1} (X^T Y), where:

• X^T X is an m × m matrix whose (i,j)'th element is \sum_{k=1}^R x_{ki} x_{kj}
• X^T Y is an m-element vector whose i'th element is \sum_{k=1}^R x_{ki} y_k

Constant Term in Linear Regression

What about a constant term?

We may expect linear data that does not go through the origin. Statisticians and Neural Net folks all agree on a simple, obvious hack. Can you guess?

The Constant Term

The trick is to create a fake input "X0" that always takes the value 1.

  Before:           After:
  X1  X2  Y         X0  X1  X2  Y
   2   4  16         1   2   4  16
   3   4  17         1   3   4  17
   5   5  20         1   5   5  20

Before: Y = w1 X1 + w2 X2 ... has to be a poor model.
After: Y = w0 X0 + w1 X1 + w2 X2 = w0 + w1 X1 + w2 X2 ... has a fine constant term.

In this example, you should be able to see the MLE w0, w1 and w2 by inspection. A sketch of the computation follows.
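Here is a minimal NumPy sketch (again, not part of the deck) of the multivariate solution w = (X^T X)^{-1} (X^T Y) on the three-row table above, using the fake always-1 input X0 to supply the constant term:

```python
import numpy as np

# The table from the constant-term slide: inputs (X1, X2) and target Y.
X = np.array([[2.0, 4.0],
              [3.0, 4.0],
              [5.0, 5.0]])
Y = np.array([16.0, 17.0, 20.0])

# The hack: prepend a fake input X0 that always equals 1,
# so its weight w0 acts as the constant term.
X0 = np.ones((X.shape[0], 1))
Xa = np.hstack([X0, X])

# Maximum-likelihood weights: w = (X^T X)^{-1} (X^T Y).
# Solving the linear system is numerically safer than forming the inverse.
w = np.linalg.solve(Xa.T @ Xa, Xa.T @ Y)
print(w)  # [w0, w1, w2]
```

With three datapoints and three weights the fit here is exact (Y = 10 + X1 + X2), which is why the slide says the MLE can be read off by inspection.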
Linear Regression with Varying Noise

Heteroscedasticity...

Regression with Varying Noise

Suppose you know the variance of the noise that was added to each datapoint:

  x_i :  1/2   1    2    2    3
  y_i :  1/2   1    1    3    2
  σ_i²:   4    1   1/4   4   1/4

[Figure: the five datapoints plotted on axes x = 0..3, y = 0..3, each annotated with its noise level σ (σ = 2, 1, 1/2, 2, 1/2 respectively).]

Assume

  y_i \sim N(w x_i, \sigma_i^2)

What's the MLE estimate of w?

MLE Estimation with Varying Noise

  \operatorname{argmax}_w \; \log p(y_1, \ldots, y_R \mid x_1, \ldots, x_R, \sigma_1^2, \ldots, \sigma_R^2, w)

= \operatorname{argmin}_w \sum_{i=1}^R \frac{(y_i - w x_i)^2}{\sigma_i^2}

  (assuming independence of the noise terms, then plugging in the Gaussian density and simplifying)

= the w such that \sum_{i=1}^R \frac{x_i (y_i - w x_i)}{\sigma_i^2} = 0

  (setting dLL/dw equal to zero)

= \frac{\sum_{i=1}^R x_i y_i / \sigma_i^2}{\sum_{i=1}^R x_i^2 / \sigma_i^2}

  (trivial algebra)

This is Weighted Regression

We are asking to minimize the weighted sum of squares

  \sum_{i=1}^R \frac{(y_i - w x_i)^2}{\sigma_i^2}

where each datapoint's squared residual is weighted by 1/σ_i², so low-noise datapoints count for more than high-noise ones.
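A final sketch, not from the deck, of the weighted (varying-noise) MLE applied to the datapoints and per-point variances from the table above:

```python
import numpy as np

# Datapoints and their known noise variances (sigma_i^2) from the slide.
x      = np.array([0.5, 1.0, 2.0, 2.0, 3.0])
y      = np.array([0.5, 1.0, 1.0, 3.0, 2.0])
sigma2 = np.array([4.0, 1.0, 0.25, 4.0, 0.25])

# Closed-form MLE under y_i ~ N(w * x_i, sigma_i^2):
#   w = sum_i(x_i * y_i / sigma_i^2) / sum_i(x_i^2 / sigma_i^2)
# Each residual is weighted by 1/sigma_i^2, so low-noise points
# pull the fit harder than high-noise ones.
w = np.sum(x * y / sigma2) / np.sum(x * x / sigma2)
print(f"weighted MLE w = {w:.4f}")
```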