CMU CS 10701 - Neural Networks

Neural Networks
Machine Learning – 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
October 8th, 2007
© 2005-2007 Carlos Guestrin

Slide 2: Logistic regression
- P(Y|X) represented by a sigmoid of a linear function of the inputs: P(Y=1|X) = 1 / (1 + exp(-(w0 + sum_i wi Xi)))
- Learning rule: MLE
[Plot: the sigmoid function]

Slide 3: Sigmoid
[Plots: sigmoid(w0 + w1 x) for (w0=0, w1=1), (w0=2, w1=1), (w0=0, w1=0.5); w1 controls the steepness of the curve, w0 shifts it]

Slide 4: Perceptron as a graph
[Figure: inputs feeding a weighted sum passed through a sigmoid]

Slide 5: Linear perceptron classification region
[Figure: sigmoid output over the input space; the decision boundary is linear]

Slide 6: Optimizing the perceptron
- Trained to minimize sum-squared error

Slide 7: Derivative of sigmoid
- sigma'(z) = sigma(z) (1 - sigma(z))

Slide 8: The perceptron learning rule
- Compare to MLE

Slide 9: Perceptron, linear classification, Boolean functions
- Can learn x1 OR x2
- Can learn x1 AND x2
- Can learn any conjunction or disjunction

Slide 10: Perceptron, linear classification, Boolean functions
- Can learn majority
- Can perceptrons do everything?

Slide 11: Going beyond linear classification
- Solving the XOR problem

Slide 12: Hidden layer
- Perceptron vs. 1-hidden layer

Slide 13: Example data for NN with hidden layer

Slide 14: Learned weights for hidden layer

Slide 15: NN for images

Slide 16: Weights in NN for images

Slide 17: Forward propagation for 1-hidden layer – Prediction
- 1-hidden layer

Slides 18-19: Gradient descent for 1-hidden layer – Back-propagation: computing the gradients
- Dropped w0 to make the derivation simpler

Slide 20: Multilayer neural networks
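The perceptron material above (sigmoid unit, its derivative, and gradient descent on sum-squared error) can be sketched in a few lines. This is a minimal illustration, not the slides' own code; the learning rate and toy data are assumptions.

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    """Derivative of the sigmoid: sigma'(z) = sigma(z) * (1 - sigma(z))."""
    s = sigmoid(z)
    return s * (1.0 - s)

def perceptron_step(w, X, y, lr=0.1):
    """One gradient-descent step on the sum-squared error
    E(w) = 0.5 * sum_j (y_j - sigmoid(w . x_j))^2.
    The update is w <- w + lr * sum_j (y_j - yhat_j) * yhat_j * (1 - yhat_j) * x_j,
    i.e. the error scaled by the sigmoid's slope at each example."""
    y_hat = sigmoid(X @ w)
    delta = (y - y_hat) * y_hat * (1.0 - y_hat)
    return w + lr * (X.T @ delta)

# Toy 1-D dataset with an explicit bias column (w0 kept here).
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = np.zeros(2)
for _ in range(200):
    w = perceptron_step(w, X, y, lr=0.5)
```

Repeated steps drive the sum-squared error down; this is exactly the rule that slide 8 asks you to compare against the MLE update for logistic regression, which differs only in lacking the extra `y_hat * (1 - y_hat)` slope factor.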
Slide 21: Forward propagation – prediction
- Recursive algorithm
- Start from input layer
- Output of node Vk with parents U1, U2, ...

Slide 22: Back-propagation – learning
- Just gradient descent!!!
- Recursive algorithm for computing the gradient
- For each example:
  - Perform forward propagation
  - Start from output layer
  - Compute gradient of node Vk with parents U1, U2, ...
  - Update weight wik

Slide 23: Many possible response functions
- Sigmoid, linear, exponential, Gaussian, ...

Slide 24: Convergence of backprop
- Perceptron leads to convex optimization: gradient descent reaches global minima
- Multilayer neural nets are not convex:
  - Gradient descent gets stuck in local minima
  - Hard to set learning rate
  - Selecting the number of hidden units and layers = fuzzy process
- NNs have been falling into disfavor in the last few years; we'll see later in the semester that the kernel trick is a good alternative
- Nonetheless, neural nets are one of the most used ML approaches

Slide 25: Overfitting?
- Neural nets represent complex functions
- Output becomes more complex with gradient steps

Slide 26: Overfitting
- Output fits training data "too well": poor test set accuracy
- Overfitting the training data
- Related to the bias-variance tradeoff, one of the central problems of ML
- Avoiding overfitting: more training data, regularization, early stopping

Slide 27: What you need to know about neural networks
- Perceptron: representation, perceptron learning rule, derivation
- Multilayer neural nets: representation, derivation of backprop, learning rule
- Overfitting: definition, training set versus test set, learning
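Slides 17-22 can be put together into a runnable sketch: forward propagation and one back-propagation step for a 1-hidden-layer network on the XOR data from slide 11. As in the slides' derivation, the bias terms (w0) are dropped to keep things simple; the hidden-layer width, seed, and learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, W2):
    """Forward propagation for a 1-hidden-layer net: each hidden unit is a
    sigmoid of a weighted sum of inputs, and the output unit is a sigmoid
    of a weighted sum of hidden values."""
    H = sigmoid(X @ W1)        # hidden activations, shape (n, h)
    y_hat = sigmoid(H @ W2)    # output, shape (n,)
    return H, y_hat

def backprop_step(X, y, W1, W2, lr=0.5):
    """One gradient-descent step on E = 0.5 * sum (y_hat - y)^2 via
    back-propagation: compute the output-layer error signal first, then
    propagate it back through W2 to get the hidden-layer signal."""
    H, y_hat = forward(X, W1, W2)
    d_out = (y_hat - y) * y_hat * (1.0 - y_hat)            # output delta, (n,)
    d_hid = (d_out[:, None] * W2[None, :]) * H * (1.0 - H)  # hidden deltas, (n, h)
    W2 = W2 - lr * (H.T @ d_out)
    W1 = W1 - lr * (X.T @ d_hid)
    return W1, W2

# XOR: not linearly separable, so a perceptron fails, but a hidden layer works.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # 4 hidden units (assumed width)
W2 = rng.normal(size=4)
for _ in range(500):
    W1, W2 = backprop_step(X, y, W1, W2, lr=0.5)
```

Note the recursion slide 22 describes: `d_hid` is built from `d_out`, so in a deeper network each layer's error signal is computed from the layer above it, one layer at a time from the output back to the input. As slide 24 warns, this objective is non-convex, so different seeds can land in different local minima.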

