Instance-based Learning
Machine Learning – 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
February 19th, 2007
©2005-2007 Carlos Guestrin

Why not just use Linear Regression?

Using data to predict new data

Nearest neighbor

Univariate 1-Nearest Neighbor
Given datapoints (x1,y1), (x2,y2), ..., (xN,yN), where we assume yi = f(xi) for some unknown function f, and given a query point xq, your job is to predict ŷ ≈ f(xq).
Nearest Neighbor:
1. Find the closest xi in our set of datapoints: nn(q) = argmin_i |xq − xi|
2. Predict ŷ = y_nn(q)
[Figure: a dataset with one input, one output, and four datapoints; for each region of query points, the closest datapoint is indicated.]

1-Nearest Neighbor is an example of… Instance-based learning
A function approximator that has been around since about 1910. The training pairs (x1,y1), (x2,y2), ..., (xn,yn) are simply stored; to make a prediction, search the database for similar datapoints, and fit with the local points.
Four things make a memory-based learner:
1. A distance metric
2. How many nearby neighbors to look at?
3. A weighting function (optional)
4. How to fit with the local points?

1-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? One
3. A weighting function (optional): Unused
4. How to fit with the local points? Just predict the same output as the nearest neighbor.

Multivariate 1-NN examples
[Figures: multivariate 1-NN examples for regression and for classification.]

Multivariate distance metrics
Suppose the input vectors x1, x2, ..., xN are two dimensional: x1 = (x11, x12), x2 = (x21, x22), ..., xN = (xN1, xN2). One can draw the nearest-neighbor regions in input space, e.g. for
Dist(xi,xj) = (xi1 − xj1)² + (xi2 − xj2)²   versus   Dist(xi,xj) = (xi1 − xj1)² + (3xi2 − 3xj2)²
The relative scalings in the distance metric affect region shapes.

Euclidean distance metric
D²(x, x') = Σi σi² (xi − x'i)²
Or equivalently, D²(x, x') = (x − x')ᵀ Σ (x − x'), where Σ = diag(σ1², σ2², ..., σN²).
Other metrics: Mahalanobis, rank-based, correlation-based, ...

Notable distance metrics (and their level sets)
- Scaled Euclidean (L2)
- Mahalanobis (here, Σ from the previous slide is not necessarily diagonal, but is symmetric)
- L1 norm (absolute)
- L∞ (max) norm

Consistency of 1-NN
Consider an estimator fn trained on n examples, e.g., 1-NN, neural nets, regression, ...
The estimator is consistent if the true error goes to zero as the amount of data increases; e.g., for noise-free data, consistent if the error of fn tends to zero as n → ∞.
Regression is not consistent! (Representation bias.)
1-NN is consistent (under some mild fine print).
What about variance???

1-NN overfits?

k-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? k
3. A weighting function (optional): Unused
4. How to fit with the local points? Just predict the average output among the k nearest neighbors.

k-Nearest Neighbor (here k=9)
k-nearest neighbor for function fitting smoothes away noise, but there are clear deficiencies. What can we do about all the discontinuities that k-NN gives us?

Weighted k-NNs
Neighbors are not all the same.
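To make the four ingredients concrete, here is a minimal sketch (not code from the lecture; the function name knn_predict and the toy data are illustrative assumptions) of 1-NN and k-NN regression with an unscaled Euclidean distance metric, no weighting function, and prediction by averaging the neighbors' outputs.

```python
import numpy as np

def knn_predict(X, y, x_query, k=1):
    """k-NN regression: average the outputs of the k closest stored datapoints."""
    # 1. Distance metric: (unscaled) Euclidean distance to every stored datapoint
    dists = np.linalg.norm(X - x_query, axis=1)
    # 2. How many nearby neighbors to look at: the k closest
    nearest = np.argsort(dists)[:k]
    # 3. Weighting function: unused (every selected neighbor counts equally)
    # 4. How to fit with the local points: predict the average of their outputs
    return np.mean(y[nearest])

# Tiny univariate example: one input, one output, four datapoints
X_train = np.array([[1.0], [2.0], [3.5], [5.0]])
y_train = np.array([1.2, 0.7, 2.1, 1.8])

print(knn_predict(X_train, y_train, np.array([2.2]), k=1))  # 1-NN: copies the output at x = 2.0
print(knn_predict(X_train, y_train, np.array([2.2]), k=3))  # 3-NN: average of the 3 closest outputs
```

Rescaling individual input dimensions before computing the distance plays the same role as the σi² factors in the Euclidean distance metric above.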
Kernel regression
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? All of them
3. A weighting function (optional): wi = exp(−D(xi, query)² / Kw²). Nearby points to the query are weighted strongly, far points weakly. The Kw parameter is the kernel width. Very important.
4. How to fit with the local points? Predict the weighted average of the outputs: predict = Σ wi yi / Σ wi

Weighting functions
wi = exp(−D(xi, query)² / Kw²)
Typically optimize Kw using gradient descent. (Our examples use Gaussian weighting functions.)

Kernel regression predictions
[Figures: predictions with KW = 10, 20, and 80.]
Increasing the kernel width Kw means further away points get an opportunity to influence you. As Kw → ∞, the prediction tends to the global average.

Kernel regression on our test cases
[Figures: KW = 1/16 of x-axis width; KW = 1/32 of x-axis width; KW = 1/32 of x-axis width.]
Choosing a good Kw is important, not just for kernel regression but for all the locally weighted learners we're about to see.

Kernel regression can look bad
[Figures: three test cases, each with KW set to its best value.]
Time to try something more powerful…

Locally weighted regression
Kernel regression: take a very, very conservative function approximator called AVERAGING, and locally weight it.
Locally weighted regression: take a conservative function approximator called LINEAR REGRESSION, and locally weight it.

Locally weighted regression
Four things make a memory-based learner:
1. A distance metric: Any
2. How many nearby neighbors to look at? All of them
3. A weighting function (optional): Kernels, wi = exp(−D(xi, query)² / Kw²)
4. How to fit with the local points? General weighted regression:
   β̂ = argmin_β Σ_{k=1..N} wk² (yk − βᵀxk)²

How LWR works
Linear regression: same parameters for all queries; β̂ = (XᵀX)⁻¹ XᵀY.
Locally weighted regression: solve a weighted linear regression for each query;
   β̂ = ((WX)ᵀ WX)⁻¹ (WX)ᵀ WY, where W = diag(w1, w2, ..., wn).

Another view of LWR
Image from Cohn, D.A., Ghahramani, Z., and Jordan, M.I. (1996), "Active Learning with Statistical Models", JAIR, Volume 4, pages 129-145.

LWR on our test cases
[Figures: KW = 1/8, 1/32, and 1/16 of x-axis width.]

Locally weighted polynomial regression
[Figures, each with kernel width KW at its optimal level: kernel regression (KW = 1/100 of x-axis), LW linear regression (KW = 1/40 of x-axis), LW quadratic regression (KW = 1/15 of x-axis).]
Local quadratic regression is easy: just add quadratic terms to the WXᵀWX matrix. As the regression degree increases, the kernel width can increase without introducing bias.
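The two locally weighted fits above can be sketched in a few lines. This is my own illustration rather than code from the lecture; the helper names (gaussian_weights, locally_weighted_regression) and the toy data are assumptions. Kernel regression computes the weighted average Σ wi yi / Σ wi, while LWR solves the weighted least-squares problem β̂ = ((WX)ᵀWX)⁻¹(WX)ᵀWY once per query; an intercept column is added here, a detail the slides leave implicit.

```python
import numpy as np

def gaussian_weights(X, x_query, kw):
    """w_i = exp(-D(x_i, query)^2 / Kw^2), with D the Euclidean distance."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    return np.exp(-d2 / kw ** 2)

def kernel_regression(X, y, x_query, kw):
    """Predict the weighted average of the outputs: sum(w_i y_i) / sum(w_i)."""
    w = gaussian_weights(X, x_query, kw)
    return np.dot(w, y) / np.sum(w)

def locally_weighted_regression(X, y, x_query, kw):
    """Solve one weighted linear regression per query:
    beta = ((WX)^T WX)^{-1} (WX)^T WY, then predict beta^T x_query."""
    w = gaussian_weights(X, x_query, kw)
    # Add a constant feature so the local fit has an intercept (an assumption, not on the slide)
    Xa = np.hstack([np.ones((X.shape[0], 1)), X])
    xq = np.concatenate([[1.0], x_query])
    WX = Xa * w[:, None]   # each row scaled by its weight, i.e. W @ Xa with W = diag(w)
    Wy = w * y
    beta = np.linalg.solve(WX.T @ WX, WX.T @ Wy)
    return xq @ beta

# Noisy 1-D test case: kernel regression smooths, LWR fits a local line at each query
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 6.0, 40))[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
x_query = np.array([2.5])
print(kernel_regression(X, y, x_query, kw=0.5))
print(locally_weighted_regression(X, y, x_query, kw=0.5))
```

Shrinking kw makes the fit more local; as kw grows, kernel regression tends to the global average and LWR tends to ordinary linear regression, matching the kernel-width behavior described above.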
Curse of dimensionality