CMU CS 10701 - Neural Networks


Neural Nets
Many possible refs, e.g., Mitchell Chapter 4.
Machine Learning 10-701/15-781, Carlos Guestrin, Carnegie Mellon University, February 15th, 2006.
© 2006 Carlos Guestrin

Announcements
- Recitations stay on Thursdays, 5-6:30pm in Wean 5409. This week: cross-validation and neural nets.
- Homework 2 due next Monday, Feb 20th. Updated version online with more hints. Start early.

Logistic regression
- P(Y|X) represented by a sigmoid of a linear function of the inputs.
- Learning rule: MLE.

Perceptron as a graph
[Figure: the perceptron's sigmoid output plotted against its net input.]

Linear perceptron classification region
[Figure: the linear decision boundary the perceptron induces in input space.]

The perceptron learning rule
- Compare to MLE.

Perceptron linear classification: Boolean functions
- Can learn x1 AND x2.
- Can learn x1 OR x2.
- Can learn any conjunction or disjunction.
- Can learn majority.
- Can perceptrons do everything?

Going beyond linear classification
- Solving the XOR problem.

Hidden layer
- Perceptron with 1 hidden layer.

Example data for NN with hidden layer
Learned weights for hidden layer
NN for images
Weights in NN for images

Forward propagation for 1 hidden layer
- Prediction with 1 hidden layer.

Gradient descent for 1 hidden layer: back-propagation
- Computing the gradient (w0 dropped to make the derivation simpler).

Multilayer neural networks

Forward propagation (prediction)
- Recursive algorithm: start from the input layer and compute the output of each node Vk from its parents U1, U2, ...

Back-propagation (learning)
- Just gradient descent: a recursive algorithm for computing the gradient.
- For each example: perform forward propagation; then, starting from the output layer, compute the gradient at each node Vk with parents U1, U2, ... and update the weight wik.

Many possible response functions
- Sigmoid, linear, exponential, Gaussian.

Convergence of backprop
- The perceptron leads to a convex optimization: gradient descent reaches the global minimum.
- Multilayer neural nets are not convex: gradient descent gets stuck in local minima, it is hard to set the learning rate, and selecting the number of hidden units and layers is a fuzzy process.
- NNs have been falling into disfavor in the last few years; we'll see later in the semester that the kernel trick is a good alternative. Nonetheless, neural nets are one of the most used ML approaches.

Training set error
- Neural nets represent complex functions; the output becomes more complex with gradient steps.
- What about test set error?

Overfitting
- The output fits the training data too well, with poor test set accuracy: overfitting the training data.
- Related to the bias-variance tradeoff, one of the central problems of ML.
- Avoiding overfitting: more training data, regularization, early stopping.

What you need to know about neural networks
- Perceptron: representation; the perceptron learning rule and its derivation.
- Multilayer neural nets: representation; derivation of backprop; learning rule.
- Overfitting: definition; training set versus test set; learning curve.

Instance-based Learning
Machine Learning 10-701/15-781, Carlos Guestrin, Carnegie Mellon University, February 15th, 2006.

Announcements
- Reminder: second homework due Monday 21st.

Why not just use linear regression?
Using data to predict new data
Nearest neighbor

Univariate 1-Nearest Neighbor
- Given datapoints (x1, y1), (x2, y2), ..., (xN, yN), where we assume yi = f(xi) for some unknown function f.
- Given a query point xq, your job is to predict yq.
- Nearest neighbor: 1. find the closest xi in our set of datapoints, i(nn) = argmin_i |xi − xq|; 2. predict y = y_i(nn).
[Figure: a dataset with one input, one output, and four datapoints; for each of several query points, an arrow marks "here, this is the closest datapoint".]

1-Nearest Neighbor is an example of instance-based learning
- A function approximator that has been around since about 1910.
- To make a prediction, search the database (x1, y1), ..., (xn, yn) for similar datapoints and fit with the local points.
- Four things make a memory-based learner: a distance metric; how many nearby neighbors to look at; a weighting function (optional); how to fit with the local points.

1-Nearest Neighbor
1. A distance metric: Euclidean (and many more).
2. How many nearby neighbors to look at: one.
3. A weighting function (optional): unused.
4. How to fit with the local points: just predict the same output as the nearest neighbor.

Multivariate 1-NN examples
- Regression; classification.

Multivariate distance metrics
- Suppose the input vectors x1, x2, ..., xN are two-dimensional: x1 = (x11, x12), x2 = (x21, x22), ..., xN = (xN1, xN2).
- One can draw the nearest-neighbor regions in input space, e.g. for
  Dist(xi, xj) = (xi1 − xj1)² + (xi2 − xj2)²  versus
  Dist(xi, xj) = (xi1 − xj1)² + (3xi2 − 3xj2)².
- The relative scalings in the distance metric affect region shapes.

Euclidean distance metric
- D(x, x') = sqrt( Σ_i σi² (xi − x'i)² ),
  or equivalently D(x, x') = sqrt( (x − x')ᵀ Σ (x − x') ) with Σ = diag(σ1², ..., σN²).
- Other metrics: Mahalanobis, rank-based, correlation-based.

Notable distance metrics and their level sets
- Scaled Euclidean (L2).
- L1 norm (absolute).
- Mahalanobis (here the Σ from the previous slide is not necessarily diagonal, but it is symmetric).
- L∞ (max norm).

Consistency of 1-NN
- Consider an estimator fn trained on n examples (e.g., 1-NN, neural nets, regression).
- An estimator is consistent if its prediction error goes to zero as the amount of data increases (e.g., for noise-free data).
- Regression is not consistent: representation bias. 1-NN is consistent (under some mild fine print).
- What about variance? 1-NN overfits.

k-Nearest Neighbor
1. A distance metric: Euclidean (and many more).
2. How many nearby neighbors to look at: k.
3. A weighting function (optional): unused.
4. How to fit with the local points: just predict ...
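The perceptron learning rule described above can be sketched in a few lines of numpy. This is a minimal illustration, not the slides' derivation: the threshold activation, learning rate, and the AND dataset are my choices for the example.

```python
import numpy as np

def perceptron_train(X, y, eta=0.1, epochs=100):
    """Perceptron learning rule: w <- w + eta * (y - y_hat) * x.

    X: (n, d) inputs; y: (n,) labels in {0, 1}.
    A constant 1 is appended to each input to absorb the bias w0.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            y_hat = 1 if xi @ w > 0 else 0   # threshold activation
            w += eta * (yi - y_hat) * xi     # update only on mistakes
    return w

def perceptron_predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

# A conjunction (x1 AND x2) is linearly separable, so the perceptron learns it,
# as the "Boolean functions" slide claims.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])
w = perceptron_train(X, y_and)
print(perceptron_predict(X, w))  # [0 0 0 1]
```

Running the same loop on XOR labels would never converge, which is exactly the limitation that motivates the hidden layer.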

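Forward propagation and back-propagation for a 1-hidden-layer net can likewise be sketched end to end. This is a hand-rolled toy assuming sigmoid units and squared-error loss; the hidden-layer size, learning rate, seed, and iteration count are illustrative choices, not values from the lecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR is not linearly separable, but one hidden layer suffices.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
eta = 0.5

losses = []
for step in range(5000):
    # Forward propagation: input layer -> hidden layer -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    losses.append(float((err ** 2).mean()))
    # Back-propagation: chain rule from the output layer back to the inputs,
    # then a gradient-descent update on every weight.
    d_out = err * out * (1 - out)          # gradient at the output node
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at each hidden node
    W2 -= eta * h.T @ d_out
    b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * X.T @ d_h
    b1 -= eta * d_h.sum(axis=0)

print(losses[0], losses[-1])  # training loss should have decreased
```

Because this objective is not convex, a run can land in a poor local minimum; the loss decreasing is guaranteed to hold only loosely, which is the convergence caveat the slides raise.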

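The four design choices of a memory-based learner map directly onto a short k-NN routine. A minimal sketch, assuming Euclidean distance, no weighting, and (for k > 1) predicting the average of the neighbors' outputs; that averaging rule is a common choice for regression, not necessarily the one the truncated slide intends.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=1):
    """Memory-based prediction with the four design choices:
    (1) Euclidean distance metric, (2) look at k neighbors,
    (3) no weighting function, (4) fit = average of the neighbors'
    outputs (equal to the nearest neighbor's output when k = 1)."""
    dists = np.linalg.norm(X_train - x_query, axis=1)  # Euclidean metric
    nearest = np.argsort(dists)[:k]                    # indices of the k closest
    return y_train[nearest].mean()

# Toy dataset: four 2-D points with scalar outputs.
X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
y_train = np.array([0.0, 1.0, 1.0, 4.0])

q = np.array([0.9, 0.1])
print(knn_predict(X_train, y_train, q, k=1))  # 1.0 (nearest point is [1, 0])
print(knn_predict(X_train, y_train, q, k=3))  # average over 3 neighbors
```

Rescaling a coordinate of X_train before computing distances (e.g. multiplying the second column by 3, as in the multivariate-metrics slide) changes which neighbors are "closest" and hence the prediction.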