CMU CS 10701 - Instance-based Learning

(Preview of a 51-page slide deck; only part of the deck appears below.)

Save
View full document
View full document
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience
Premium Document
Do you want full access? Go Premium and unlock all 51 pages.
Access to all documents
Download any document
Ad free experience

Unformatted text preview:

Contents:
Instance-based Learning
Announcements
Why not just use Linear Regression?
Using data to predict new data
Nearest neighbor
Univariate 1-Nearest Neighbor
1-Nearest Neighbor is an example of… Instance-based learning
1-Nearest Neighbor
Multivariate 1-NN examples
Multivariate distance metrics
Euclidean distance metric
Notable distance metrics (and their level sets)
Consistency of 1-NN
1-NN overfits?
k-Nearest Neighbor
k-Nearest Neighbor (here k=9)
Weighted k-NNs
Kernel regression
Weighting functions
Kernel regression predictions
Kernel regression on our test cases
Kernel regression can look bad
Locally weighted regression
Locally weighted regression
How LWR works
Another view of LWR
LWR on our test cases
Locally weighted polynomial regression
Curse of dimensionality for instance-based learning
Curse of the irrelevant feature
What you need to know about instance-based learning
Acknowledgment
Support Vector Machines
Linear classifiers – Which line is better?
Pick the one with the largest margin!
Maximize the margin
But there are many planes…
Review: Normal to a plane
Normalized margin – Canonical hyperplanes
Margin maximization using canonical hyperplanes
Support vector machines (SVMs)
What if the data is not linearly separable?
What if the data is still not linearly separable?
Slack variables – Hinge loss
Side note: What's the difference between SVMs and logistic regression?
What about multiple classes?
One against All
Learn 1 classifier: Multiclass SVM
Learn 1 classifier: Multiclass SVM
What you need to know
Acknowledgment

Slide 1: Instance-based Learning
Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
February 20th, 2006
©2006 Carlos Guestrin

Slide 2: Announcements
Third homework: out later today, due March 1st.

Slide 3: Why not just use Linear Regression?

Slide 4: Using data to predict new data

Slide 5: Nearest neighbor

Slide 6: Univariate 1-Nearest Neighbor
Given datapoints (x1, y1), (x2, y2), ..., (xN, yN), where we assume yi = f(xi) for some unknown function f, and given a query point xq, your job is to predict ŷ ≈ f(xq).
Nearest Neighbor:
1. Find the closest xi in our set of datapoints: nn = argmin_i |xi - xq|
2. Predict: ŷ = y_nn
[Figure: a dataset with one input, one output, and four datapoints; each query is answered by whichever datapoint is closest.]

Slide 7: 1-Nearest Neighbor is an example of… Instance-based learning
The learner simply stores the table of pairs (x1, y1), ..., (xn, yn): a function approximator that has been around since about 1910. To make a prediction, search the database for similar datapoints and fit with the local points.
Four things make a memory-based learner:
- A distance metric
- How many nearby neighbors to look at?
- A weighting function (optional)
- How to fit with the local points?

Slide 8: 1-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? One
3. A weighting function (optional): Unused
4. How to fit with the local points? Just predict the same output as the nearest neighbor.
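The 1-NN recipe above drops straight into code. Here is a minimal sketch for the univariate setting of slide 6, assuming NumPy; the dataset and function name are illustrative, not from the lecture:

```python
import numpy as np

def predict_1nn(X, y, x_query):
    """1-NN: return the stored output of the closest training point."""
    dists = np.abs(X - x_query)   # univariate distance |xi - xq|
    nn = np.argmin(dists)         # index of the nearest neighbor
    return y[nn]                  # predict y_nn, unchanged

# Illustrative four-point dataset, like the one sketched on slide 6.
X = np.array([1.0, 2.0, 4.0, 7.0])
y = np.array([2.0, 1.5, 3.0, 2.5])
print(predict_1nn(X, y, x_query=3.2))   # nearest point is x=4.0, so prints 3.0
```

Note that there is no training step: the "model" is the stored dataset itself, which is exactly what makes the learner instance-based.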
Slide 9: Multivariate 1-NN examples
[Figure: nearest-neighbor fits for a regression dataset and a classification dataset.]

Slide 10: Multivariate distance metrics
Suppose the input vectors x1, x2, ..., xN are two-dimensional: x1 = (x11, x12), x2 = (x21, x22), ..., xN = (xN1, xN2). One can draw the nearest-neighbor regions in input space. Compare
Dist(xi, xj) = (xi1 - xj1)² + (xi2 - xj2)²
with
Dist(xi, xj) = (xi1 - xj1)² + (3xi2 - 3xj2)².
The relative scalings in the distance metric affect region shapes.

Slide 11: Euclidean distance metric
D²(x, x') = Σi σi²(xi - x'i)²
or equivalently
D²(x, x') = (x - x')ᵀ Σ (x - x'), where Σ = diag(σ1², σ2², ..., σN²).
Other metrics: Mahalanobis, rank-based, correlation-based, ... (A NumPy sketch of this scaled distance appears after the slides.)

Slide 12: Notable distance metrics (and their level sets)
Scaled Euclidean (L2); L1 norm (absolute); L∞ (max) norm; Mahalanobis (here, Σ from the previous slide is not necessarily diagonal, but it is symmetric).

Slide 13: Consistency of 1-NN
Consider an estimator fn trained on n examples (e.g., 1-NN, neural nets, regression, ...). An estimator is consistent if its prediction error goes to zero as the amount of data increases; e.g., for noise-free data, consistent if fn(x) → f(x) as n → ∞. Linear regression is not consistent! (Representation bias.) 1-NN is consistent (under some mild fine print). But what about variance?

Slide 14: 1-NN overfits?

Slide 15: k-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? k
3. A weighting function (optional): Unused
4. How to fit with the local points? Just predict the average output among the k nearest neighbors. (A code sketch follows after the slides.)

Slide 16: k-Nearest Neighbor (here k=9)
K-nearest neighbor for function fitting smooths away noise, but there are clear deficiencies. What can we do about all the discontinuities that k-NN gives us?

Slide 17: Weighted k-NNs
Neighbors are not all the same.

Slide 18: Kernel regression
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? All of them
3. A weighting function (optional): wi = exp(-D(xi, query)² / Kw²). Nearby points to the query are weighted strongly, far points weakly. The Kw parameter is the kernel width; it is very important.
4. How to fit with the local points? Predict the weighted average of the outputs: predict = Σ wi yi / Σ wi. (A code sketch follows after the slides.)

Slide 19: Weighting functions
wi = exp(-D(xi, query)² / Kw²)
Typically one optimizes Kw using gradient descent. (Our examples use Gaussian weighting.)

Slide 20: Kernel regression predictions
Increasing the kernel width Kw means points further away get an opportunity to influence you. As Kw → ∞, the prediction tends to the global average.
[Figure: predictions with Kw = 10, 20, 80.]

Slide 21: Kernel regression on our test cases
[Figure: three test cases, with Kw = 1/32, 1/32, and 1/16 of the x-axis width.]
Choosing a good Kw is important, not just for kernel regression but for all the locally weighted learners we're about to see.

Slide 22: Kernel regression can look bad
[Figure: the three test cases again, each at its best Kw.]
Time to try something more powerful…

Slide 23: Locally weighted regression
Kernel regression: take a very, very conservative function approximator called AVERAGING, and locally weight it. Locally weighted regression: take a conservative function approximator called LINEAR REGRESSION, and locally weight it. (A code sketch follows after the slides.)
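The remaining recipes translate just as directly; the sketches below are illustrative implementations under stated assumptions, not code from the lecture. First, slide 11's scaled Euclidean distance in both its sum form and its equivalent matrix form; replacing the diagonal Σ with a full symmetric one gives the Mahalanobis metric of slide 12:

```python
import numpy as np

def scaled_euclidean_sq(x, x_prime, sigma):
    """Sum form from slide 11: D^2(x, x') = sum_i sigma_i^2 (x_i - x'_i)^2."""
    return np.sum(sigma ** 2 * (x - x_prime) ** 2)

def quadratic_form_sq(x, x_prime, Sigma):
    """Equivalent matrix form: D^2(x, x') = (x - x')^T Sigma (x - x')."""
    d = x - x_prime
    return d @ Sigma @ d

x, xp = np.array([1.0, 2.0]), np.array([0.0, 0.0])
sigma = np.array([1.0, 3.0])     # scale the second feature by 3, as on slide 10
Sigma = np.diag(sigma ** 2)      # diagonal case; a full symmetric Sigma is Mahalanobis

print(scaled_euclidean_sq(x, xp, sigma))  # 1*1 + 9*4 = 37.0
print(quadratic_form_sq(x, xp, Sigma))    # 37.0 again, via the matrix form
```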
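Slide 15's k-NN is a two-line change from 1-NN: sort by distance and average the outputs of the k closest points. A sketch assuming plain Euclidean distance and illustrative data:

```python
import numpy as np

def predict_knn(X, y, x_query, k):
    """k-NN regression: average the outputs of the k nearest training points."""
    dists = np.linalg.norm(X - x_query, axis=1)  # Euclidean distance to each point
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    return y[nearest].mean()                     # unweighted local average

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
y = np.array([1.0, 2.0, 3.0, 10.0])
print(predict_knn(X, y, np.array([0.2, 0.1]), k=3))   # (1 + 2 + 3) / 3 = 2.0
```

With k = 1 this reduces to 1-NN; with k = N it is the global average. Slide 16's point is that intermediate k smooths away noise at the cost of discontinuities.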
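Slide 18's kernel regression uses all the points, weighted by wi = exp(-D(xi, query)² / Kw²). A minimal sketch; the second call illustrates slide 20's observation that as Kw → ∞ the prediction tends to the global average:

```python
import numpy as np

def predict_kernel_regression(X, y, x_query, Kw):
    """Gaussian-weighted average of ALL outputs (slides 18-19):
    w_i = exp(-D(x_i, query)^2 / Kw^2), predict = sum(w*y) / sum(w)."""
    d2 = np.sum((X - x_query) ** 2, axis=1)  # squared Euclidean distances
    w = np.exp(-d2 / Kw ** 2)                # nearby points weigh strongly
    return np.sum(w * y) / np.sum(w)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 4.0, 9.0])
print(predict_kernel_regression(X, y, np.array([1.5]), Kw=0.5))    # ~2.5, a local average
print(predict_kernel_regression(X, y, np.array([1.5]), Kw=100.0))  # ~3.5, the global mean
```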
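Slide 23's locally weighted regression replaces the weighted average with a weighted linear fit, solved afresh at each query point. This sketch assumes the same Gaussian weights as kernel regression and implements weighted least squares by scaling each row by √wi before an ordinary least-squares solve:

```python
import numpy as np

def predict_lwr(X, y, x_query, Kw):
    """Locally weighted regression: fit a line by weighted least squares
    centered on the query, then evaluate the line at the query."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / Kw ** 2)                  # same weights as kernel regression
    A = np.hstack([X, np.ones((len(X), 1))])   # inputs plus an intercept column
    sw = np.sqrt(w)[:, None]
    # Weighted least squares: scale each row by sqrt(w_i), solve ordinary LS.
    beta, *_ = np.linalg.lstsq(sw * A, sw.ravel() * y, rcond=None)
    return np.append(x_query, 1.0) @ beta

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.5, 1.5, 2.5, 3.5])             # exactly linear: y = x + 0.5
print(predict_lwr(X, y, np.array([3.5]), Kw=1.0))  # 4.0: follows the local trend
```

Unlike averaging, the local line can extrapolate: at x = 3.5, kernel regression with these weights would predict about 3.4, since a weighted average can never leave the range of the observed y values. That boundary bias is one way kernel regression "can look bad" (slide 22).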