CMU CS 10701 - Instance-based Learning (a.k.a. non-parametric methods)

Instance-based Learning (a.k.a. non-parametric methods)
Machine Learning 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
October 14th, 2009

Why not just use Linear Regression?

Using data to predict new data

Nearest neighbor

Univariate 1-Nearest Neighbor
Given datapoints (x_1, y_1), (x_2, y_2), ..., (x_N, y_N), where we assume y_i = f(x_i) for some unknown function f.
Given a query point x_q, your job is to predict y_q ~ f(x_q).
Nearest Neighbor:
1. Find the closest x_i in our set of datapoints: i_nn = argmin_i |x_i - x_q|
2. Predict y = y_{i_nn}
[Figure: a dataset with one input, one output, and four datapoints; the closest datapoint to the query is marked.]

1-Nearest Neighbor is an example of instance-based learning
A function approximator that has been around since about 1910.
To make a prediction, search the database for similar datapoints and fit with the local points.
[Figure: stored examples x_1 -> y_1, x_2 -> y_2, x_3 -> y_3, ..., x_n -> y_n.]
Four things make a memory-based learner:
1. A distance metric
2. How many nearby neighbors to look at
3. A weighting function (optional)
4. How to fit with the local points

1-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at: one
3. A weighting function (optional): unused
4. How to fit with the local points: just predict the same output as the nearest neighbor

Multivariate 1-NN examples
[Figures: a classification example and a regression example.]

Multivariate distance metrics
Suppose the input vectors x_1, x_2, ..., x_N are two-dimensional:
x_1 = (x_11, x_12), x_2 = (x_21, x_22), ..., x_N = (x_N1, x_N2).
One can draw the nearest-neighbor regions in input space.
Dist(x_i, x_j) = (x_i1 - x_j1)^2 + (x_i2 - x_j2)^2
Dist(x_i, x_j) = (x_i1 - x_j1)^2 + (3 x_i2 - 3 x_j2)^2
The relative scalings in the distance metric affect region shapes.

Euclidean distance metric
D(x, x') = sqrt( sum_i sigma_i^2 (x_i - x'_i)^2 )
or equivalently
D(x, x') = sqrt( (x - x')^T Sigma (x - x') ),  where Sigma = diag(sigma_1^2, ..., sigma_d^2).
Other metrics: Mahalanobis, rank-based, correlation-based, ...

Notable distance metrics (and their level sets)
Scaled Euclidean (L2)
L1 norm (absolute)
Mahalanobis (here the Sigma from the previous slide is not necessarily diagonal, but it is symmetric)
L-infinity (max) norm

Consistency of 1-NN
Consider an estimator f_n trained on n examples (e.g., 1-NN, neural nets, regression, ...).
The estimator is consistent if the true error goes to zero as the amount of data increases; e.g., for noise-free data, consistent if the error tends to zero as n -> infinity.
Regression is not consistent! (Representation bias.)
1-NN is consistent (under some mild fine print).
What about variance?

1-NN overfits.

k-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at: k
3. A weighting function (optional): unused
4. How to fit with the local points: just predict the average output among the k nearest neighbors

k-Nearest Neighbor (here k = 9)
k-nearest neighbor for function fitting smooths away noise, but there are clear deficiencies.
What can we do about all the discontinuities that k-NN gives us?

Weighted k-NNs
Neighbors are not all the same.

Kernel regression
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at: all of them
3. A weighting function (optional): w_i = exp( -D(x_i, query)^2 / K_w^2 ).
   Nearby points to the query are weighted strongly, far points weakly. The K_w parameter is the kernel width. Very important.
4. How to fit with the local points: predict the weighted average of the outputs:
   predict = ( sum_i w_i y_i ) / ( sum_i w_i )
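As a concrete illustration of the two predictors described so far, here is a minimal NumPy sketch of 1-NN prediction and the kernel-regression weighted average. It is not from the lecture: the function names and the toy data are illustrative assumptions, and D is taken to be plain Euclidean distance.

```python
import numpy as np

def one_nn_predict(X, y, x_query):
    """1-NN: return the output of the single closest stored datapoint."""
    d2 = np.sum((X - x_query) ** 2, axis=1)   # squared Euclidean distances to the query
    return y[np.argmin(d2)]

def kernel_regression_predict(X, y, x_query, kw=1.0):
    """Kernel regression: weighted average of all stored outputs,
    with weights w_i = exp(-D(x_i, query)^2 / kw^2)."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / kw ** 2)                 # Gaussian-style kernel weights
    return np.sum(w * y) / np.sum(w)

# Toy usage (illustrative data, not from the lecture):
X = np.array([[0.0], [1.0], [2.0], [3.0]])   # four 1-D stored inputs
y = np.array([0.0, 0.8, 0.9, 0.1])           # their stored outputs
print(one_nn_predict(X, y, np.array([1.2])))               # nearest neighbor -> 0.8
print(kernel_regression_predict(X, y, np.array([1.2])))    # smooth weighted average
```

With a small kernel width the weighted average behaves like 1-NN; as kw grows, the prediction tends toward the global average, as the slides note below.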
Weighting functions
w_i = exp( -D(x_i, query)^2 / K_w^2 )
Typically optimize K_w using gradient descent. Our examples use a Gaussian.

Kernel regression predictions
[Figures: predictions with KW = 10, KW = 20, KW = 80.]
Increasing the kernel width K_w means further-away points get an opportunity to influence you.
As K_w -> infinity, the prediction tends to the global average.

Kernel regression on our test cases
[Figures: KW = 1/32 of x-axis width, KW = 1/32 of x-axis width, KW = 1/16 of x-axis width.]
Choosing a good K_w is important, not just for kernel regression but for all the locally weighted learners we're about to see.

Kernel regression can look bad
[Figures: three test cases, each shown with KW at its best setting.]
Time to try something more powerful!

Locally weighted regression
Kernel regression: take a very, very conservative function approximator called AVERAGING. Locally weight it.
Locally weighted regression: take a conservative function approximator called LINEAR REGRESSION. Locally weight it.

Locally weighted regression
Four things make a memory-based learner:
1. A distance metric: any
2. How many nearby neighbors to look at: all of them
3. A weighting function (optional): kernels, w_i = exp( -D(x_i, query)^2 / K_w^2 )
4. How to fit with the local points: general weighted regression:
   beta_hat = argmin_beta sum_{k=1}^{N} w_k^2 (y_k - beta^T x_k)^2

How LWR works
Linear regression fits the same parameters for all queries:
   beta = (X^T X)^{-1} X^T Y
Locally weighted regression solves a weighted linear regression for each query:
   beta = ( (WX)^T (WX) )^{-1} (WX)^T (WY),  where W = diag(w_1, w_2, ..., w_n).
(A small code sketch of this per-query solve appears after these notes.)

Another view of LWR
[Image from Cohn, D. A., Ghahramani, Z., and Jordan, M. I. (1996), "Active Learning with Statistical Models", JAIR, Volume 4, pages 129-145.]

LWR on our test cases
[Figures: KW = 1/16 of x-axis width, KW = 1/32 of x-axis width, KW = 1/8 of x-axis width.]

Locally weighted polynomial regression
Kernel regression, kernel width at its optimal level: KW = 1/100 of x-axis.
LW linear regression, kernel width at its optimal level: KW = 1/40 of x-axis.
LW quadratic regression, kernel width at its optimal level: KW = 1/15 of x-axis.
Local quadratic regression is easy: just add quadratic terms to the (WX)^T(WX) matrix.
As the regression degree increases, the kernel width can increase without introducing bias.

Curse of dimensionality for instance-based learning
Must store and retrieve all the data.
Most of the real work is done during testing: for every test sample, we must search through the whole dataset, which is very slow.
We'll see fast methods for dealing with large datasets.
Instance-based learning is often poor with noisy or irrelevant features.

Curse of the irrelevant feature

What you need to know about instance-based learning
k ...
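To make the LWR per-query solve above concrete, here is a minimal NumPy sketch. It is not from the lecture: the function name and toy data are illustrative assumptions, and the intercept column is a common choice the slide does not spell out.

```python
import numpy as np

def lwr_predict(X, y, x_query, kw=1.0):
    """Locally weighted linear regression at a single query point.

    Follows the per-query solve in the notes:
    beta = ((WX)^T WX)^{-1} (WX)^T WY, with W = diag(w_1, ..., w_n)
    and w_i = exp(-D(x_i, query)^2 / kw^2), D = Euclidean distance.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # inputs with an intercept column (assumption)
    xq = np.append(np.asarray(x_query, dtype=float), 1.0)
    d2 = np.sum((X - x_query) ** 2, axis=1)          # squared distances to the query
    w = np.exp(-d2 / kw ** 2)                        # kernel weights
    WX, Wy = w[:, None] * Xb, w * y                  # rows scaled by w_k, matching the slide's W
    beta, *_ = np.linalg.lstsq(WX, Wy, rcond=None)   # least-squares solve of WX beta ~= WY
    return float(xq @ beta)

# Toy usage (illustrative data, not from the lecture):
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.8, 0.9, 0.1])
print(lwr_predict(X, y, [1.5], kw=0.5))              # local linear fit evaluated at the query
```

Using a least-squares solver instead of an explicit matrix inverse is numerically safer but gives the same beta as the normal-equation form shown in the notes.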

