Neural Networks
Machine Learning – 10-701/15-781
Carlos Guestrin
Carnegie Mellon University
October 10th, 2007

Perceptron as a graph
[Figure: the perceptron drawn as a graph; its sigmoid output is plotted from 0 to 1 against a net input ranging from −6 to 6.]

The perceptron learning rule
- Compare to MLE

Hidden layer
- Perceptron
- 1 hidden layer

Example data for NN with hidden layer

Learned weights for hidden layer

NN for images

Weights in NN for images

Gradient descent for 1 hidden layer – Back-propagation (two slides)
- Computing the gradient
- Dropped w0 to make the derivation simpler

Multilayer neural networks

Forward propagation – prediction
- Recursive algorithm
- Start from the input layer
- Output of node Vk with parents U1, U2, ...

Back-propagation – learning
- Just gradient descent!
- Recursive algorithm for computing the gradient
- For each example:
  - Perform forward propagation
  - Start from the output layer
  - Compute the gradient of node Vk with parents U1, U2, ...
  - Update weight wik
- (A short code sketch of these two recursions appears after the summary slide at the end of this lecture.)

Many possible response functions
- Sigmoid
- Linear
- Exponential
- Gaussian

Convergence of backprop
- Perceptron leads to convex optimization
  - Gradient descent reaches global minima
- Multilayer neural nets are not convex
  - Gradient descent gets stuck in local minima
  - Hard to set the learning rate
  - Selecting the number of hidden units and layers is a fuzzy process
  - NNs have been falling into disfavor in the last few years
  - We'll see later in the semester that the kernel trick is a good alternative
  - Nonetheless, neural nets are one of the most-used ML approaches

Overfitting
- Neural nets represent complex functions
- The output becomes more complex with gradient steps

Overfitting
- Output fits the training data too well
  - Poor test-set accuracy
- Overfitting the training data
  - Related to the bias-variance tradeoff
  - One of the central problems of ML
- Avoiding overfitting:
  - More training data
  - Regularization
  - Early stopping

What you need to know about neural networks
- Perceptron
  - Representation
  - Perceptron learning rule
  - Derivation
- Multilayer neural nets
  - Representation
  - Derivation of backprop
  - Learning rule
- Overfitting
  - Definition
  - Training set versus test set
  - Learning curve
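To make the forward-propagation and back-propagation recursions above concrete, here is a minimal sketch for a 1-hidden-layer network with sigmoid response functions and a squared-error loss, with the bias weight w0 dropped as in the derivation. The layer sizes, learning rate `lr`, function names, and data below are made-up illustration values, not taken from the lecture.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid response function (one of the response functions listed above).
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    # Forward propagation: start from the input layer and compute each node's
    # output from the outputs of its parents.
    h = sigmoid(W1 @ x)       # hidden-layer outputs
    y_hat = sigmoid(W2 @ h)   # output-layer prediction
    return h, y_hat

def backprop_step(x, y, W1, W2, lr=0.1):
    # One gradient-descent step on the squared error 0.5 * (y_hat - y)^2 for a
    # single example: forward propagation, then gradients starting from the
    # output layer, then the weight updates.
    h, y_hat = forward(x, W1, W2)
    delta_out = (y_hat - y) * y_hat * (1.0 - y_hat)   # gradient at the output node
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)    # pushed back to the hidden nodes
    W2 = W2 - lr * np.outer(delta_out, h)             # update weights into the output node
    W1 = W1 - lr * np.outer(delta_hid, x)             # update weights into the hidden nodes
    return W1, W2

# Tiny usage example: made-up shapes (2 inputs, 3 hidden units, 1 output) and a
# single synthetic example; biases (w0) are omitted as in the derivation.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))
W2 = rng.normal(scale=0.5, size=(1, 3))
x, y = np.array([0.2, -0.7]), np.array([1.0])
for _ in range(100):
    W1, W2 = backprop_step(x, y, W1, W2)
```

Stacking more hidden layers just repeats the `delta_hid` line once per layer, which is exactly the recursive gradient computation the back-propagation slide describes.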
Announcements
- Recitation this week: Neural networks
- Project proposals due next Wednesday
- Exciting data:
  - Swivel.com – user-generated graphs
  - Recognizing Captchas
  - Election contributions
  - Activity recognition

Instance-based Learning
Machine Learning – 10-701/15-781
Carlos Guestrin
Carnegie Mellon University
October 10th, 2007

Why not just use Linear Regression?

Using data to predict new data

Nearest neighbor

Univariate 1-Nearest Neighbor
- Given datapoints (x1, y1), (x2, y2), ..., (xN, yN), where we assume yi = f(xi) for some unknown function f.
- Given a query point xq, your job is to predict ŷ ≈ f(xq).
- Nearest Neighbor:
  1. Find the closest xi in our set of datapoints: i_nn = argmin_i |x_i − x_q|
  2. Predict ŷ = y_{i_nn}
[Figure: a dataset with one input, one output, and four datapoints; regions of the input axis are annotated "Here, this is the closest datapoint."]

1-Nearest Neighbor is an example of... Instance-based learning
- A function approximator that has been around since about 1910.
- To make a prediction, search the database of stored datapoints (x1, y1), (x2, y2), (x3, y3), ..., (xn, yn) for similar datapoints, and fit with the local points.
- Four things make a memory-based learner:
  1. A distance metric
  2. How many nearby neighbors to look at?
  3. A weighting function (optional)
  4. How to fit with the local points?

1-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? One
3. A weighting function (optional): Unused
4. How to fit with the local points? Just predict the same output as the nearest neighbor.

Multivariate 1-NN examples
- Regression
- Classification

Multivariate distance metrics
- Suppose the input vectors x1, x2, ..., xN are two-dimensional: x1 = (x11, x12), x2 = (x21, x22), ..., xN = (xN1, xN2).
- One can draw the nearest-neighbor regions in input space.
- Dist(xi, xj) = (xi1 − xj1)² + (xi2 − xj2)²
- Dist(xi, xj) = (xi1 − xj1)² + (3xi2 − 3xj2)²
- The relative scalings in the distance metric affect region shapes.

Euclidean distance metric
- $D^2(x, x') = \sum_i \sigma_i^2 (x_i - x'_i)^2$
- Or equivalently, $D(x, x') = \sqrt{(x - x')^\top \Sigma \, (x - x')}$ where $\Sigma = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_N^2)$
- Other metrics: Mahalanobis, rank-based, correlation-based, ...

Notable distance metrics (and their level sets)
- Scaled Euclidean (L2)
- L1 norm (absolute)
- Mahalanobis (here, Σ from the previous slide is not necessarily diagonal, but is symmetric)
- L∞ (max) norm

Consistency of 1-NN
- Consider an estimator fn trained on n examples
  - e.g., 1-NN, neural nets, regression, ...
- The estimator is consistent if the true error goes to zero as the amount of data increases
  - e.g., for no-noise data, consistent if the error of fn goes to zero as n → ∞
- Regression is not consistent!
  - Representation bias
- 1-NN is consistent (under some mild fine print)
- What about variance?

1-NN overfits!

k-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? k
3. A weighting function (optional): Unused
4. How to fit with the local points? Just predict the average output among the k nearest neighbors.
(A code sketch of k-NN and kernel regression appears after the kernel-regression slide below.)

k-Nearest Neighbor (here, k = 9)
- k-nearest neighbor for function fitting smooths away noise, but there are clear deficiencies.
- What can we do about all the discontinuities that k-NN gives us?

Weighted k-NNs
- Neighbors are not all the same.

Kernel regression
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? All of them
3. A weighting function (optional): wi = exp(−D(xi, query)² / Kw²)
   - Nearby points to the query are weighted strongly, far points weakly. The Kw parameter is the kernel width.
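Here is a minimal sketch of the memory-based recipe above (distance metric, number of neighbors, optional weighting, local fit), assuming a scaled-Euclidean distance. The function name `knn_predict`, the `sigma` scaling argument, and the toy data are illustrative assumptions, not from the slides.

```python
import numpy as np

def knn_predict(X, Y, x_query, k=1, sigma=None):
    """Memory-based prediction: scaled-Euclidean distance metric, k neighbors,
    no weighting, and 'fit' = average of the neighbors' outputs (k=1 gives 1-NN)."""
    X = np.asarray(X, float)
    Y = np.asarray(Y, float)
    x_query = np.asarray(x_query, float)
    sigma = np.ones(X.shape[1]) if sigma is None else np.asarray(sigma, float)
    # Scaled Euclidean distance: D^2(x, x') = sum_i sigma_i^2 (x_i - x'_i)^2
    d2 = ((sigma * (X - x_query)) ** 2).sum(axis=1)
    nearest = np.argsort(d2)[:k]          # indices of the k closest stored datapoints
    return Y[nearest].mean()              # k=1: the nearest label; k>1: their average

# Toy usage on made-up 2-D data; rescaling the second coordinate (sigma=[1, 3])
# changes the neighbor regions, as the multivariate-distance-metrics slide notes.
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = [0.0, 1.0, 1.0, 0.0]
print(knn_predict(X, Y, [0.9, 0.2], k=1))
print(knn_predict(X, Y, [0.9, 0.2], k=3))
print(knn_predict(X, Y, [0.9, 0.2], k=1, sigma=[1.0, 3.0]))
```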
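And a matching sketch of kernel regression with the Gaussian weighting wi = exp(−D(xi, query)² / Kw²) from the last slide. Since the "how to fit" step is cut off in the source, this assumes the usual weighted average of the stored outputs; `kernel_regression` and the default `kw` value are made up for the example.

```python
import numpy as np

def kernel_regression(X, Y, x_query, kw=1.0):
    """Weighted average of all stored outputs, with Gaussian weights
    w_i = exp(-D(x_i, query)^2 / Kw^2): nearby points count strongly,
    far points weakly. Kw is the kernel width (illustrative default)."""
    X = np.asarray(X, float)
    Y = np.asarray(Y, float)
    d2 = ((X - np.asarray(x_query, float)) ** 2).sum(axis=1)   # squared Euclidean distances
    w = np.exp(-d2 / kw ** 2)                                   # weighting function from the slide
    return (w * Y).sum() / w.sum()                              # weighted-average fit

# Toy usage on the same made-up data as above; a smaller Kw tracks the
# nearest points more closely, a larger Kw smooths more.
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = [0.0, 1.0, 1.0, 0.0]
print(kernel_regression(X, Y, [0.9, 0.2], kw=0.5))
print(kernel_regression(X, Y, [0.9, 0.2], kw=2.0))
```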