CMU CS 10701 - Neural Networks

Slide 1 – Neural Networks
Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
February 14th, 2007
© 2005-2007 Carlos Guestrin

Slide 2 – Logistic regression
- P(Y|X) represented by: P(Y = 1 | X) = 1 / (1 + exp(−(w0 + Σi wi Xi)))
- Learning rule – MLE: gradient ascent on the conditional log-likelihood, wi ← wi + η Σj xi(j) [y(j) − P̂(Y = 1 | x(j), w)]

Slide 3 – Sigmoid
[Figure: three sigmoid curves illustrating the parameters: w0 = 0, w1 = 1; w0 = 2, w1 = 1; w0 = 0, w1 = 0.5]

Slide 4 – Perceptron as a graph
[Figure: the sigmoid unit drawn as a graph from inputs to output]

Slide 5 – Linear perceptron classification region
[Figure: the linear decision boundary induced by the perceptron]

Slide 6 – Optimizing the perceptron
- Trained to minimize sum-squared error: ℓ(w) = ½ Σj [y(j) − out(x(j))]²

Slide 7 – Derivative of sigmoid
- σ(x) = 1 / (1 + e^(−x)), so σ′(x) = σ(x) [1 − σ(x)]

Slide 8 – The perceptron learning rule
- wi ← wi + η Σj [y(j) − out(x(j))] out(x(j)) [1 − out(x(j))] xi(j)
- Compare to MLE: the same update except for the extra out · (1 − out) factor contributed by the sigmoid derivative

Slide 9 – Perceptron, linear classification, Boolean functions
- Can learn x1 ∨ x2
- Can learn x1 ∧ x2
- Can learn any conjunction or disjunction

Slide 10 – Perceptron, linear classification, Boolean functions
- Can learn majority
- Can perceptrons do everything?

Slide 11 – Going beyond linear classification
- Solving the XOR problem

Slide 12 – Hidden layer
- Perceptron: out(x) = σ(w0 + Σi wi xi)
- 1-hidden layer: out(x) = σ(w0 + Σk wk σ(w0(k) + Σi wi(k) xi))

Slide 13 – Example data for NN with hidden layer

Slide 14 – Learned weights for hidden layer

Slide 15 – NN for images

Slide 16 – Weights in NN for images

Slide 17 – Forward propagation for 1-hidden layer – Prediction
- 1-hidden layer: compute the hidden-unit outputs from the inputs, then the network output from the hidden units, using the expression on slide 12

Slide 18 – Gradient descent for 1-hidden layer – Back-propagation: computing the gradient for the output-layer weights
- Dropped w0 to make the derivation simpler

Slide 19 – Gradient descent for 1-hidden layer – Back-propagation: computing the gradient for the hidden-layer weights
- Dropped w0 to make the derivation simpler

Slide 20 – Multilayer neural networks

Slide 21 – Forward propagation – prediction
- Recursive algorithm
- Start from the input layer
- Output of node Vk with parents U1, U2, …: apply the node's response function to the weighted sum of its parents' outputs

Slide 22 – Back-propagation – learning
- Just gradient descent!!!
- Recursive algorithm for computing the gradient
- For each example:
  - Perform forward propagation
  - Start from the output layer
  - Compute the gradient of node Vk with parents U1, U2, …
  - Update weight wik

Slide 23 – Many possible response functions
- Sigmoid
- Linear
- Exponential
- Gaussian
- …

Slide 24 – Convergence of backprop
- Perceptron leads to a convex optimization problem:
  - Gradient descent reaches the global minimum
- Multilayer neural nets are not convex:
  - Gradient descent gets stuck in local minima
  - Hard to set the learning rate
  - Selecting the number of hidden units and layers = fuzzy process
- NNs have fallen into disfavor in the last few years; we'll see later in the semester that the kernel trick is a good alternative
- Nonetheless, neural nets are one of the most used ML approaches

Slide 25 – Training set error
- Neural nets represent complex functions
- Output becomes more complex with gradient steps

Slide 26 – Overfitting
- Output fits the training data "too well", but test set accuracy is poor: we are overfitting the training data
- Related to the bias-variance tradeoff, one of the central problems of ML
- Avoiding overfitting?
  - More training data
  - Regularization
  - Early stopping

Slide 27 – What you need to know about neural networks
- Perceptron: representation; perceptron learning rule; derivation
- Multilayer neural nets: representation; derivation of backprop; learning rule
- Overfitting: definition; training set versus test set; learning curve
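To ground the perceptron and back-propagation slides in something executable, here is a minimal sketch of the pipeline slides 6 to 22 describe: a 1-hidden-layer sigmoid network trained by plain gradient descent on sum-squared error, applied to the XOR problem of slide 11. This is my illustration, not code from the lecture; the variable names, two-hidden-unit architecture, learning rate, and epoch count are all assumed choices.

```python
# Minimal 1-hidden-layer sigmoid network trained with backprop (illustrative
# sketch; hyperparameters and names are assumptions, not from the lecture).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR is not linearly separable, so a single perceptron cannot learn it,
# but one hidden layer suffices (slide 11).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

n_hidden = 2
W1 = rng.normal(size=(3, n_hidden))   # input (+ bias) -> hidden weights
w2 = rng.normal(size=n_hidden + 1)    # hidden (+ bias) -> output weights
eta = 0.5                             # learning rate

def forward(X, W1, w2):
    """Forward propagation for the 1-hidden-layer network (slide 17)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias input
    H = sigmoid(Xb @ W1)                        # hidden-unit activations
    Hb = np.hstack([H, np.ones((len(H), 1))])
    return Xb, H, Hb, sigmoid(Hb @ w2)          # network output

for epoch in range(20000):
    Xb, H, Hb, out = forward(X, W1, w2)
    # Gradient of (1/2) sum (out - y)^2, using sigma'(z) = sigma(z)(1 - sigma(z)).
    delta_out = (out - y) * out * (1.0 - out)               # output-layer term
    grad_w2 = Hb.T @ delta_out
    delta_hid = np.outer(delta_out, w2[:n_hidden]) * H * (1.0 - H)
    grad_W1 = Xb.T @ delta_hid
    w2 -= eta * grad_w2                                     # gradient descent step
    W1 -= eta * grad_W1

print(np.round(forward(X, W1, w2)[3], 3))   # should approach [0, 1, 1, 0]
```

As slide 24 warns, this objective is not convex: with an unlucky initialization the same code can stall in a local minimum and never fit XOR, in which case a different random seed or a few more hidden units usually helps.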
Definition Training set versus test set Learning curve©2005-2007 Carlos Guestrin28Instance-basedLearningMachine Learning – 10701/15781Carlos GuestrinCarnegie Mellon UniversityFebruary 14th, 200715©2005-2007 Carlos Guestrin29Why not just use Linear Regression?©2005-2007 Carlos Guestrin30Using data to predict new data16©2005-2007 Carlos Guestrin31Nearest neighbor©2005-2007 Carlos Guestrin32Univariate 1-Nearest NeighborGiven datapoints (x1,y1) (x2,y2)..(xN,yN),where we assume yi=f(xi) for someunknown function f.Given query point xq, your job is to predictNearest Neighbor:1. Find the closest xi in our set of datapoints( )qxfy !ˆ( )qiixxnni !=argmin( )nniyy =ˆ2. PredictHere’s adataset withone input, oneoutput and fourdatapoints.xyHere, this isthe closestdatapointHere, this isthe closestdatapointHere, this isthe closestdatapointHere, this isthe closestdatapoint17©2005-2007 Carlos Guestrin331-Nearest Neighbor is an example of…. Instance-based learningFour things make a memory based learner: A distance metric How many nearby neighbors to look at? A weighting function (optional) How to fit with the local points?x1 y1x2 y2x3 y3..xn ynA function approximatorthat has been aroundsince about 1910.To make a prediction,search database forsimilar datapoints, and fitwith the local points.©2005-2007 Carlos Guestrin341-Nearest NeighborFour things make a memory based learner:1. A distance metricEuclidian (and many more)2. How many nearby neighbors to look at?One3. A weighting function (optional)Unused4. How to fit with the local points?Just predict the same output as the nearest neighbor.18©2005-2007 Carlos Guestrin35Multivariate 1-NN examplesRegression Classification©2005-2007 Carlos Guestrin36Multivariate distance metricsSuppose the input vectors x1, x2, …xn are two dimensional:x1 = ( x11 , x12 ) , x2 = ( x21 , x22 ) , …xN = ( xN1 , xN2 ).One can draw the nearest-neighbor regions in input space.Dist(xi,xj) =(xi1 – xj1)2+(3xi2 – 3xj2)2The relative scalings in the distance metric affect region shapes.Dist(xi,xj) = (xi1 – xj1)2 + (xi2 –

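Slide 36's closing point, that the relative scalings in the distance metric change which datapoint is nearest and hence the shape of the 1-NN regions, can be checked numerically. The two stored points and the query below are made-up values chosen so that scaling the second coordinate by 3, as in the slide's first metric, flips the nearest neighbor.

```python
# Effect of coordinate scaling on the 1-NN distance metric (illustrative sketch).
import numpy as np

def dist(a, b, s=1.0):
    # Dist(a, b) = (a1 - b1)^2 + (s*a2 - s*b2)^2, per slide 36
    return (a[0] - b[0]) ** 2 + (s * (a[1] - b[1])) ** 2

p1, p2 = np.array([0.0, 1.0]), np.array([2.0, 0.0])   # stored datapoints
q = np.array([0.5, 0.0])                              # query point

for s in (1.0, 3.0):
    d1, d2 = dist(q, p1, s), dist(q, p2, s)
    print(f"s={s}: d(q,p1)={d1:.2f}, d(q,p2)={d2:.2f} "
          f"-> nearest is {'p1' if d1 < d2 else 'p2'}")
# s=1.0 -> p1 is nearest; s=3.0 -> p2 is nearest: the metric reshapes the regions.
```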
