CMU CS 15381 - Lecture-NeuralNetworks

Slide 1: Decision trees are used a little; neural networks are used quite a bit.

Slide 2: The brain works pretty well, so we should be able to make an algorithm that emulates the brain and also works well. These points can be *very* high dimensional. For example, the points above (GPA, SAT, HS) could be (GPA, SAT, HS, PlaysTennis, PlaysPiano, PlaysFootball, PlaysClarinet, LikedZombieland, ...). The admissions department can learn to predict your GPA: each admitted student is a vector of attributes, and the output is their GPA when they graduate.

Slide 3: A neuron has inputs (dendrites) and outputs (the axon). Some inputs are positive (they make it more likely the neuron will fire) and some are negative (they make it less likely the neuron will fire). There are lots of neurons in your brain.

Slide 4: W_ij is the weight from neuron i to neuron j. The activation function g is applied to the weighted sum of the input neurons. a_0 does not come from another neuron; rather, it is the bias. The bias is a constant factor and determines how much input it takes for the neuron to fire: "How hard is it to make the neuron fire?"

Slide 5: in_3 is the weighted sum of all the inputs to the neuron. (A small code sketch of this unit model appears after these notes.)

Slide 7: We can *always* center the activation function at 0 because of the bias term; note that if we didn't have the bias b, it would just be centered at whatever the bias was. Threshold activation functions are very common. The sigmoid is continuous and differentiable, which is really nice, as we will see later. These two functions are very similar; in practice they are almost always equal. We are assuming all neurons use the same activation function.

Slide 8: OR is like adding: you need at least one input to be on.

Slide 9: Here you need both.

Slide 10: Try to make an XOR. Actually, you can't, and we'll tell you why later. (The gate examples, and the XOR failure, are sketched in code after these notes.)

Slide 12: Feed-forward neural networks do not have any cycles. Cycles make things much more difficult: with cycles, whether unit 4 fires or not would potentially not converge. Luckily, we almost always use feed-forward neural networks.

Slide 16: There is only one layer of weights. The name is slightly confusing, since there are two layers of neurons (the input layer and the output layer).

Slide 17: We only need to analyze the case of a single output unit.

Slide 19: If this were a decision tree, it would have to be really big. Isn't that nice?

Slide 21: The unit computes the dot product of a vector of weights with a vector of inputs.

Slide 22: Linearly separable means all negative and positive examples can be separated by a line (in higher dimensions, by a hyperplane). And *this* is why you can't do XOR: it is not linearly separable, so a single threshold perceptron cannot compute XOR. However, if you have multiple perceptrons, you can do a lot more.

Slide 23: We are doing local search on the weights, trying to minimize the squared error.

Slide 24: We can derive the equation that minimizes the squared error. If the error (not the squared error) is positive, we increase the weight; if the error is negative (you are overshooting), we decrease the weight. This is why using a g that has continuous partial derivatives with respect to the weights is useful. The black box tells you how to update the weights when learning; g' is the derivative of g. (This update rule is sketched in code after these notes.)

Slide 25: The hidden layer is the layer between the input layer and the output layer. Just guess a structure. Often people guess a structure, learn the weights, then go back and revise if the error is bad. Structure learning is another interesting problem.

Slide 31: Since we know what the outputs of units 5 and 6 should have been, we can calculate their errors easily. But what about unit 3?

Slide 33: This is the back-propagation algorithm. The idea is that you back-propagate the errors from the units in the final layers. Then, once you can calculate the error at every unit, you can learn the weights. (A small back-propagation sketch follows these notes.)
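As a concrete illustration of the unit model from slides 4 and 5 (a weighted sum of inputs plus a bias, passed through an activation function g), here is a minimal Python sketch. The function names, the example weights, and the choice to pass the bias as a separate argument are my own; they are not from the slides.

```python
import math

def threshold(x):
    """Hard threshold activation: fire (output 1) when the weighted sum is >= 0."""
    return 1.0 if x >= 0 else 0.0

def sigmoid(x):
    """Smooth, differentiable alternative; in practice very close to the threshold."""
    return 1.0 / (1.0 + math.exp(-x))

def unit_output(weights, inputs, bias, g=threshold):
    """One unit: a_j = g(in_j), where in_j is sum_i W_ij * a_i plus the bias term."""
    in_j = sum(w_i * a_i for w_i, a_i in zip(weights, inputs)) + bias
    return g(in_j)

# Example: a unit with two inputs, each weighted 1.0, and bias -0.5 (illustrative values).
print(unit_output([1.0, 1.0], [0.0, 1.0], bias=-0.5))             # 1.0 (fires)
print(unit_output([1.0, 1.0], [0.0, 0.0], bias=-0.5, g=sigmoid))  # ~0.38
```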
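The OR, AND, and XOR points from slides 8 through 10 and 21 through 22 can be checked with single threshold units. The weight and bias values below are illustrative guesses, not values from the slides; the point is that some choice works for OR and AND, while no choice works for XOR because XOR is not linearly separable.

```python
def threshold_unit(w1, w2, bias):
    """A single threshold perceptron over two inputs: fires when w1*x1 + w2*x2 + bias >= 0."""
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + bias >= 0 else 0

OR  = threshold_unit(1, 1, -0.5)   # "like adding": at least one input must be on
AND = threshold_unit(1, 1, -1.5)   # needs both inputs on

for x1 in (0, 1):
    for x2 in (0, 1):
        print((x1, x2), "OR:", OR(x1, x2), "AND:", AND(x1, x2))

# XOR would need outputs 0, 1, 1, 0 for these four inputs. A single unit draws one
# line through the input plane, and no line puts (0,1) and (1,0) on one side and
# (0,0) and (1,1) on the other, so no weights/bias work; you need more than one perceptron.
```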
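Slides 23 and 24 describe local search on the weights to reduce the squared error, with an update proportional to the error times g'. Below is one common way to write that rule for a single sigmoid unit; the learning rate, epoch count, and toy OR data set are assumptions for illustration, not values from the lecture.

```python
import math

def g(x):
    """Sigmoid activation: continuous and differentiable, so g' exists."""
    return 1.0 / (1.0 + math.exp(-x))

def g_prime(x):
    """Derivative of the sigmoid."""
    s = g(x)
    return s * (1.0 - s)

def train_unit(examples, n_inputs, alpha=0.5, epochs=1000):
    # weights[0] plays the role of the bias weight (paired with a constant input of 1).
    weights = [0.0] * (n_inputs + 1)
    for _ in range(epochs):
        for x, y in examples:
            xs = [1.0] + list(x)                        # prepend the constant bias input
            in_j = sum(w * xi for w, xi in zip(weights, xs))
            err = y - g(in_j)                           # positive error -> raise weights
            for j in range(len(weights)):
                weights[j] += alpha * err * g_prime(in_j) * xs[j]
    return weights

# OR is linearly separable, so a single unit can learn it.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_unit(data, n_inputs=2)
print([round(g(w[0] + w[1] * x1 + w[2] * x2), 2) for (x1, x2), _ in data])
```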
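Slides 25 through 33 add a hidden layer and back-propagate the output error to the hidden units. The sketch below uses a guessed 2-2-1 structure (as slide 25 suggests, just guess a structure) trained on XOR, which a single perceptron could not handle. The initialization range, learning rate, and epoch count are illustrative assumptions.

```python
import math, random

def g(x):
    """Sigmoid activation."""
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# Guessed structure: 2 inputs -> 2 hidden units -> 1 output unit.
# In each weight list, index 0 is the bias weight (paired with a constant input of 1).
W_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W_out = [random.uniform(-1, 1) for _ in range(3)]
alpha = 0.5

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

for _ in range(10000):
    for (x1, x2), y in data:
        xs = [1.0, x1, x2]                                   # bias input + real inputs
        h = [g(sum(w * xi for w, xi in zip(W_hidden[k], xs))) for k in range(2)]
        hs = [1.0] + h
        out = g(sum(w * hi for w, hi in zip(W_out, hs)))

        # The output unit's error is known directly; each hidden unit's error is
        # obtained by sending the output error back through its outgoing weight.
        delta_out = (y - out) * out * (1 - out)
        delta_h = [h[k] * (1 - h[k]) * W_out[k + 1] * delta_out for k in range(2)]

        for j in range(3):
            W_out[j] += alpha * delta_out * hs[j]
        for k in range(2):
            for j in range(3):
                W_hidden[k][j] += alpha * delta_h[k] * xs[j]

# Check the learned function (a net this small can occasionally get stuck in a
# local minimum; rerunning with a different seed usually fixes that).
for (x1, x2), y in data:
    xs = [1.0, x1, x2]
    hs = [1.0] + [g(sum(w * xi for w, xi in zip(W_hidden[k], xs))) for k in range(2)]
    print((x1, x2), "target", y, "output", round(g(sum(w * hi for w, hi in zip(W_out, hs))), 2))
```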

