Instance-based Learning
Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
February 19th, 2007
©2005-2007 Carlos Guestrin

Slide 2: Why not just use Linear Regression?
[Figure]

Slide 3: Using data to predict new data
[Figure]

Slide 4: Nearest neighbor
[Figure]

Slide 5: Univariate 1-Nearest Neighbor
Given datapoints (x1, y1), (x2, y2), ..., (xN, yN), where we assume yi = f(xi) for some unknown function f. Given a query point xq, your job is to predict $\hat{y} \approx f(x_q)$.
Nearest Neighbor:
1. Find the closest xi in our set of datapoints: $i_{nn} = \arg\min_i \left| x_i - x_q \right|$
2. Predict: $\hat{y} = y_{i_{nn}}$
Here's a dataset with one input, one output, and four datapoints.
[Figure: each region of the x-axis is labeled "Here, this is the closest datapoint" for the datapoint nearest to it.]

Slide 6: 1-Nearest Neighbor is an example of... Instance-based learning
A function approximator that has been around since about 1910. To make a prediction, search the database for similar datapoints, and fit with the local points. The stored data is simply:
  x1  y1
  x2  y2
  x3  y3
  ..  ..
  xn  yn
Four things make a memory-based learner:
1. A distance metric
2. How many nearby neighbors to look at?
3. A weighting function (optional)
4. How to fit with the local points?

Slide 7: 1-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? One
3. A weighting function (optional): Unused
4. How to fit with the local points? Just predict the same output as the nearest neighbor.

Slide 8: Multivariate 1-NN examples
[Figures: a regression example and a classification example]

Slide 9: Multivariate distance metrics
Suppose the input vectors x1, x2, ..., xN are two-dimensional:
x1 = (x11, x12), x2 = (x21, x22), ..., xN = (xN1, xN2).
One can draw the nearest-neighbor regions in input space.
[Figures: nearest-neighbor regions under
$Dist(x_i, x_j) = (x_{i1} - x_{j1})^2 + (x_{i2} - x_{j2})^2$ versus
$Dist(x_i, x_j) = (x_{i1} - x_{j1})^2 + (3x_{i2} - 3x_{j2})^2$]
The relative scalings in the distance metric affect the region shapes.

Slide 10: Euclidean distance metric
$$D^2(x, x') = \sum_i \sigma_i^2 (x_i - x'_i)^2$$
Or equivalently,
$$D^2(x, x') = (x - x')^T \Sigma (x - x'), \quad \text{where } \Sigma = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_N^2)$$
Other metrics: Mahalanobis, rank-based, correlation-based, ...

Slide 11: Notable distance metrics (and their level sets)
[Figures: level sets of each metric]
- Scaled Euclidean (L2)
- L1 norm (absolute)
- L∞ (max) norm
- Mahalanobis (here, Σ from the previous slide is not necessarily diagonal, but it is symmetric)

Slide 12: Consistency of 1-NN
- Consider an estimator $f_n$ trained on n examples (e.g., 1-NN, neural nets, regression, ...).
- An estimator is consistent if the true error goes to zero as the amount of data increases; e.g., for noise-free data, consistent if $\lim_{n \to \infty} \text{error}(f_n) = 0$.
- Regression is not consistent! (Representation bias.)
- 1-NN is consistent (under some mild fine print).
- But what about variance???

Slide 13: 1-NN overfits?
[Figure]

Slide 14: k-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? k
3. A weighting function (optional): Unused
4. How to fit with the local points? Just predict the average output among the k nearest neighbors.

Slide 15: k-Nearest Neighbor (here k = 9)
[Figure: k-NN fit with k = 9]
k-nearest neighbor for function fitting smooths away noise, but there are clear deficiencies. What can we do about all the discontinuities that k-NN gives us?

Slide 16: Weighted k-NNs
Neighbors are not all the same.
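Before moving on to kernel regression, here is a minimal NumPy sketch of the plain k-NN recipe described so far (the function name and the four-point toy dataset are illustrative, not from the slides): a scaled Euclidean distance metric, k neighbors, no weighting function, and a fit that simply averages the neighbors' outputs.

```python
import numpy as np

def knn_predict(X, y, x_query, k=1, scales=None):
    """k-NN prediction: average the outputs of the k nearest datapoints.

    X       : (n, d) array of stored inputs
    y       : (n,)   array of stored outputs
    x_query : (d,)   query point
    k       : number of neighbors (k=1 gives 1-nearest neighbor)
    scales  : optional per-dimension scalings sigma_i for the scaled
              Euclidean metric D^2 = sum_i sigma_i^2 (x_i - x'_i)^2
    """
    diff = X - x_query
    if scales is not None:
        diff = diff * np.asarray(scales)        # rescale each input dimension
    dist2 = np.sum(diff ** 2, axis=1)           # squared distances to the query
    nearest = np.argsort(dist2)[:k]             # indices of the k closest points
    return np.mean(y[nearest])                  # fit: plain average of their outputs

# A toy dataset with one input, one output, and four datapoints (values invented).
X = np.array([[1.0], [2.0], [4.0], [7.0]])
y = np.array([2.0, 3.5, 1.0, 4.0])

print(knn_predict(X, y, np.array([3.0]), k=1))  # 1-NN: output of the closest point
print(knn_predict(X, y, np.array([3.0]), k=3))  # 3-NN: average of the 3 nearest outputs
```

With k = 1 this reproduces the piecewise-constant 1-NN predictor from slide 5; raising k averages over a wider neighborhood, which smooths noise at the cost of the discontinuities just discussed.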
Slide 17: Kernel regression
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? All of them
3. A weighting function (optional): $w_i = \exp(-D(x_i, \text{query})^2 / K_w^2)$. Nearby points to the query are weighted strongly, far points weakly. The $K_w$ parameter is the kernel width. Very important.
4. How to fit with the local points? Predict the weighted average of the outputs: $\text{predict} = \sum_i w_i y_i \,/\, \sum_i w_i$

Slide 18: Weighting functions
$w_i = \exp(-D(x_i, \text{query})^2 / K_w^2)$ (our examples use Gaussians). Typically one optimizes $K_w$ using gradient descent.
[Figure: shapes of the weighting function]

Slide 19: Kernel regression predictions
[Figures: predictions with $K_w$ = 10, 20, and 80]
Increasing the kernel width $K_w$ means further-away points get an opportunity to influence you. As $K_w \to \infty$, the prediction tends to the global average.

Slide 20: Kernel regression on our test cases
[Figures: three test cases, with $K_w$ = 1/16, 1/32, and 1/32 of the x-axis width]
Choosing a good $K_w$ is important, not just for kernel regression but for all the locally weighted learners we're about to see.

Slide 21: Kernel regression can look bad
[Figures: the same three test cases, each with $K_w$ set to its best value]
Time to try something more powerful...

Slide 22: Locally weighted regression
Kernel regression: take a very, very conservative function approximator called AVERAGING, and locally weight it.
Locally weighted regression: take a conservative function approximator called LINEAR REGRESSION, and locally weight it.

Slide 23: Locally weighted regression
Four things make a memory-based learner:
1. A distance metric: Any
2. How many nearby neighbors to look at? All of them
3. A weighting function (optional): kernels, $w_i = \exp(-D(x_i, \text{query})^2 / K_w^2)$
4. How to fit with the local points? General weighted regression:
$$\hat{\beta} = \arg\min_{\beta} \sum_{k=1}^{N} w_k^2 \left( y_k - \beta^T x_k \right)^2$$

Slide 24: How LWR works
Linear regression: the same parameters are used for all queries:
$$\hat{\beta} = (X^T X)^{-1} X^T Y$$
Locally weighted regression: solve a weighted linear regression for each query:
$$\hat{\beta} = \left( (WX)^T WX \right)^{-1} (WX)^T WY, \qquad W = \begin{pmatrix} w_1 & 0 & \cdots & 0 \\ 0 & w_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & w_n \end{pmatrix}$$

Slide 25: Another view of LWR
[Image from Cohn, D.A., Ghahramani, Z., and Jordan, M.I. (1996), "Active Learning with Statistical Models", JAIR, Volume 4, pages 129-145.]

Slide 26: LWR on our test cases
[Figures: the three test cases, with $K_w$ = 1/8, 1/32, and 1/16 of the x-axis width]

Slide 27: Locally weighted polynomial regression
[Figures, each with kernel width $K_w$ at its optimal level: LW quadratic regression ($K_w$ = 1/15 of x-axis), LW linear regression ($K_w$ = 1/40 of x-axis), kernel regression ($K_w$ = 1/100 of x-axis)]
Local quadratic regression is easy: just add quadratic terms to the $(WX)^T WX$ matrix. As the regression degree increases, the kernel width can increase without introducing bias.

Slide 28: Curse of dimensionality for instance-based learning
- Must store and retrieve all data!
- Most real work done at query time.
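To tie the last several slides together, here is a minimal NumPy sketch of kernel regression and locally weighted linear regression (the function names and toy data are illustrative, not code from the lecture). Both use the Gaussian weights $w_i = \exp(-D(x_i, \text{query})^2 / K_w^2)$; kernel regression returns the weighted average, while LWR solves the weighted least-squares problem from slides 23-24 afresh at every query.

```python
import numpy as np

def gaussian_weights(X, x_query, kw):
    """Gaussian kernel weights w_i = exp(-D(x_i, query)^2 / K_w^2)."""
    dist2 = np.sum((X - x_query) ** 2, axis=1)   # squared Euclidean distances
    return np.exp(-dist2 / kw ** 2)

def kernel_regression(X, y, x_query, kw):
    """Predict the weighted average of the outputs: sum(w_i y_i) / sum(w_i)."""
    w = gaussian_weights(X, x_query, kw)
    return np.dot(w, y) / np.sum(w)

def lwr_predict(X, y, x_query, kw):
    """Locally weighted linear regression: solve
    beta = ((WX)^T WX)^{-1} (WX)^T W y  for this particular query."""
    w = gaussian_weights(X, x_query, kw)
    Xa = np.hstack([np.ones((X.shape[0], 1)), X])    # add an intercept column
    WX = w[:, None] * Xa                             # each row scaled by its weight
    Wy = w * y
    beta, *_ = np.linalg.lstsq(WX, Wy, rcond=None)   # minimizes sum_k w_k^2 (y_k - beta^T x_k)^2
    return np.concatenate([[1.0], x_query]) @ beta   # evaluate the local line at the query

# Toy 1-D example (data invented for illustration): y = sin(x) + noise.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
xq = np.array([5.0])
print(kernel_regression(X, y, xq, kw=1.0))   # locally weighted average
print(lwr_predict(X, y, xq, kw=1.0))         # locally weighted linear fit
```

Note that `lwr_predict` re-solves a small weighted regression for every query point, which is exactly the trade-off the final slide warns about: all the data must be stored, and most of the real work happens at prediction time.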