CMU CS 10701 - Instance-based Learning

Instance-based Learning
Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
February 19th, 2007
©2005-2007 Carlos Guestrin

Why not just use Linear Regression?

Using data to predict new data

Nearest neighbor

Univariate 1-Nearest Neighbor
Given datapoints (x1, y1), (x2, y2), ..., (xN, yN), where we assume yi = f(xi) for some unknown function f.
Given a query point xq, your job is to predict y_hat ≈ f(xq).
Nearest Neighbor:
1. Find the closest xi in our set of datapoints: nn = argmin_i D(xi, xq)
2. Predict: y_hat = y_nn
[Figure: a dataset with one input, one output, and four datapoints; for each query, the closest datapoint supplies the prediction.]

1-Nearest Neighbor is an example of... instance-based learning
A function approximator that has been around since about 1910.
To make a prediction, search the database of stored pairs (x1, y1), (x2, y2), ..., (xn, yn) for similar datapoints, and fit with the local points.
Four things make a memory-based learner:
1. A distance metric
2. How many nearby neighbors to look at?
3. A weighting function (optional)
4. How to fit with the local points?

1-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? One
3. A weighting function (optional): Unused
4. How to fit with the local points? Just predict the same output as the nearest neighbor.

Multivariate 1-NN examples
[Figure: regression and classification examples of multivariate 1-NN.]

Multivariate distance metrics
Suppose the input vectors x1, x2, ..., xN are two-dimensional: x1 = (x11, x12), x2 = (x21, x22), ..., xN = (xN1, xN2).
One can draw the nearest-neighbor regions in input space, e.g., for
Dist(xi, xj) = (xi1 - xj1)^2 + (xi2 - xj2)^2
Dist(xi, xj) = (xi1 - xj1)^2 + (3 xi2 - 3 xj2)^2
The relative scalings in the distance metric affect the region shapes.

Euclidean distance metric
D^2(x, x') = Σ_i σ_i^2 (x_i - x'_i)^2
Or equivalently, D^2(x, x') = (x - x')^T Σ (x - x'), where Σ = diag(σ_1^2, σ_2^2, ..., σ_N^2).
Other metrics: Mahalanobis, rank-based, correlation-based, ...

Notable distance metrics (and their level sets)
Scaled Euclidean (L2)
L1 norm (absolute)
L∞ (max) norm
Mahalanobis (here, Σ on the previous slide is not necessarily diagonal, but is symmetric)

Consistency of 1-NN
Consider an estimator fn trained on n examples, e.g., 1-NN, neural nets, regression, ...
The estimator is consistent if the true error goes to zero as the amount of data increases; e.g., for noise-free data, consistent if the true error of fn goes to 0 as n → ∞.
Regression is not consistent! (Representation bias.)
1-NN is consistent (under some mild fine print).
What about variance?

1-NN overfits?

k-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? k
3. A weighting function (optional): Unused
4. How to fit with the local points? Just predict the average output among the k nearest neighbors.

k-Nearest Neighbor (here k = 9)
k-nearest neighbor for function fitting smoothes away noise, but there are clear deficiencies.
What can we do about all the discontinuities that k-NN gives us?

Weighted k-NNs
Neighbors are not all the same; a short code sketch of the plain 1-NN and k-NN rules follows below.
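To make the 1-NN and k-NN recipes above concrete, here is a minimal NumPy sketch. It is not from the lecture: the function name, toy dataset, and query point are illustrative choices of mine, and the distance metric is plain Euclidean.

```python
import numpy as np

def knn_predict(X, y, x_query, k=1):
    """Predict the output at x_query by averaging the outputs of the
    k training points closest to it (Euclidean distance).
    With k=1 this reduces to the 1-NN rule: y_hat = y_nn."""
    dists = np.linalg.norm(X - x_query, axis=1)   # distance from the query to every datapoint
    nearest = np.argsort(dists)[:k]               # indices of the k closest datapoints
    return y[nearest].mean()                      # k-NN regression: average their outputs

# Toy example: one input, one output, four datapoints (values are hypothetical)
X = np.array([[1.0], [2.0], [4.0], [7.0]])
y = np.array([1.0, 2.5, 2.0, 4.0])
print(knn_predict(X, y, np.array([3.0]), k=1))    # output of the single nearest neighbor
print(knn_predict(X, y, np.array([3.0]), k=3))    # average output of the 3 nearest neighbors
```

Averaging over k neighbors smooths the fit, but as the slides note, the prediction still jumps discontinuously whenever the set of k nearest neighbors changes.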
Kernel regression
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? All of them
3. A weighting function (optional): wi = exp(-D(xi, query)^2 / Kw^2). Nearby points to the query are weighted strongly, far points weakly. The Kw parameter is the kernel width. Very important.
4. How to fit with the local points? Predict the weighted average of the outputs: predict = Σ wi yi / Σ wi

Weighting functions
wi = exp(-D(xi, query)^2 / Kw^2)
(Our examples use a Gaussian-shaped weighting function.)
Typically, Kw is optimized using gradient descent.

Kernel regression predictions
[Figure: predictions with KW = 80, KW = 20, and KW = 10.]
Increasing the kernel width Kw means points further away get an opportunity to influence you.
As Kw → ∞, the prediction tends to the global average.

Kernel regression on our test cases
[Figure: three test cases with KW = 1/16, 1/32, and 1/32 of the x-axis width.]
Choosing a good Kw is important, not just for kernel regression but for all the locally weighted learners we're about to see.

Kernel regression can look bad
[Figure: the three test cases with the best KW for each; the fits are still poor.]
Time to try something more powerful...

Locally weighted regression
Kernel regression: take a very, very conservative function approximator called AVERAGING, and locally weight it.
Locally weighted regression: take a conservative function approximator called LINEAR REGRESSION, and locally weight it.

Locally weighted regression
Four things make a memory-based learner:
1. A distance metric: Any
2. How many nearby neighbors to look at? All of them
3. A weighting function (optional): Kernels, wi = exp(-D(xi, query)^2 / Kw^2)
4. How to fit with the local points? General weighted regression:
   β̂ = argmin_β Σ_{k=1}^N wk^2 (yk - β^T xk)^2

How LWR works
Linear regression uses the same parameters for all queries: β̂ = (X^T X)^{-1} X^T Y.
Locally weighted regression solves a weighted linear regression for each query:
   β̂ = ((WX)^T WX)^{-1} (WX)^T WY, where W = diag(w1, w2, ..., wn).

Another view of LWR
[Image from Cohn, D.A., Ghahramani, Z., and Jordan, M.I. (1996), "Active Learning with Statistical Models", JAIR, Volume 4, pages 129-145.]

LWR on our test cases
[Figure: the three test cases with KW = 1/8, 1/32, and 1/16 of the x-axis width.]

Locally weighted polynomial regression
[Figure: LW quadratic regression (optimal KW = 1/15 of x-axis), LW linear regression (optimal KW = 1/40 of x-axis), and kernel regression (optimal KW = 1/100 of x-axis).]
Local quadratic regression is easy: just add quadratic terms to the (WX)^T WX matrix. As the regression degree increases, the kernel width can increase without introducing bias.

Curse of dimensionality
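The kernel regression and locally weighted regression recipes in the slides above can be sketched directly from their formulas. Below is a minimal NumPy sketch, not the lecture's own code: the function names, the toy data, and the Kw value are my choices, and I add an explicit intercept column to the design matrix, which the slides' β̂ formula leaves implicit.

```python
import numpy as np

def kernel_regression(X, y, x_query, Kw):
    """Weighted average of all outputs, with weights
    w_i = exp(-D(x_i, query)^2 / Kw^2) and predict = sum(w_i y_i) / sum(w_i)."""
    d2 = np.sum((X - x_query) ** 2, axis=1)       # squared Euclidean distances to the query
    w = np.exp(-d2 / Kw ** 2)                     # Gaussian-shaped weighting function
    return np.sum(w * y) / np.sum(w)

def locally_weighted_regression(X, y, x_query, Kw):
    """Solve beta_hat = argmin_beta sum_k (w_k (y_k - x_k^T beta))^2 for this
    query, i.e. beta_hat = ((WX)^T WX)^{-1} (WX)^T WY, then predict x_query^T beta_hat."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])   # constant feature for an intercept (my addition)
    xq = np.append(x_query, 1.0)
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / Kw ** 2)                       # same kernel weights as above
    W = np.diag(w)                                  # W = diag(w_1, ..., w_n)
    WX, Wy = W @ Xa, W @ y
    beta = np.linalg.solve(WX.T @ WX, WX.T @ Wy)    # normal equations of the weighted problem
    return xq @ beta

# Usage on the same toy data as before (hypothetical values)
X = np.array([[1.0], [2.0], [4.0], [7.0]])
y = np.array([1.0, 2.5, 2.0, 4.0])
print(kernel_regression(X, y, np.array([3.0]), Kw=1.0))
print(locally_weighted_regression(X, y, np.array([3.0]), Kw=1.0))
```

Note that locally weighted regression refits β̂ for every query, so the weighted least-squares solve runs once per prediction; kernel regression only needs the weighted average.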

