©2005-2007 Carlos Guestrin

Neural Networks
Machine Learning 10701/15781
Carlos Guestrin
Carnegie Mellon University
October 10th, 2007

Slide 2: Perceptron as a graph
[Figure: the sigmoid output of the perceptron, plotted for weighted inputs from -6 to 6; the output ranges from 0 to 1]

Slide 3: The perceptron learning rule
- Compare to MLE.

Slide 4: Hidden layer
- Perceptron vs. a 1-hidden-layer network.

Slide 5: Example data for NN with hidden layer

Slide 6: Learned weights for hidden layer

Slide 7: NN for images

Slide 8: Weights in NN for images

Slide 9: Gradient descent for 1-hidden layer. Back-propagation: computing the gradient
- w0 dropped to make the derivation simpler.

Slide 10: Gradient descent for 1-hidden layer. Back-propagation: computing the gradient (continued)
- w0 dropped to make the derivation simpler.

Slide 11: Multilayer neural networks

Slide 12: Forward propagation (prediction)
- Recursive algorithm
- Start from the input layer
- Output of node V_k with parents U_1, U_2, ...

Slide 13: Back-propagation (learning)
- Just gradient descent!!!
- Recursive algorithm for computing the gradient
- For each example:
  - Perform forward propagation
  - Start from the output layer
  - Compute the gradient of node V_k with parents U_1, U_2, ...
  - Update weight w_ik

Slide 14: Many possible response functions
- Sigmoid
- Linear
- Exponential
- Gaussian
- ...

Slide 15: Convergence of backprop
- The perceptron leads to a convex optimization: gradient descent reaches the global minimum
- Multilayer neural nets are not convex:
  - Gradient descent gets stuck in local minima
  - The learning rate is hard to set
  - Selecting the number of hidden units and layers is a fuzzy process
- NNs have fallen into disfavor in the last few years; later in the semester we'll see that the kernel trick is a good alternative
- Nonetheless, neural nets are one of the most used ML approaches

Slide 16: Overfitting?
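The forward-propagation and back-propagation recursions described on slides 12-13 can be sketched in NumPy. This is a minimal illustrative example, not the course's code: the network shape (2 inputs, 3 hidden sigmoid units, 1 sigmoid output), the squared-error loss, the learning rate, and all names (`forward`, `backprop_step`, `W1`, `W2`) are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    # Forward propagation: start from the input layer,
    # compute each node's output from its parents.
    h = sigmoid(W1 @ x)   # hidden-layer activations
    y = sigmoid(W2 @ h)   # network output
    return h, y

def backprop_step(x, t, W1, W2, lr=0.5):
    # One gradient-descent step for loss L = 0.5 * (y - t)^2.
    h, y = forward(x, W1, W2)
    # Error term at the output layer: dL/dz_out = (y - t) * sigmoid'(z_out)
    delta_out = (y - t) * y * (1 - y)
    # Error terms at the hidden layer, propagated back through W2
    delta_h = (W2.T @ delta_out) * h * (1 - h)
    # Update each weight w_ik by gradient descent
    W2 = W2 - lr * np.outer(delta_out, h)
    W1 = W1 - lr * np.outer(delta_h, x)
    return W1, W2

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))   # 2 inputs -> 3 hidden units
W2 = rng.normal(scale=0.5, size=(1, 3))   # 3 hidden units -> 1 output
x, t = np.array([1.0, -1.0]), np.array([1.0])

for _ in range(1000):
    W1, W2 = backprop_step(x, t, W1, W2)
_, y = forward(x, W1, W2)
print(float(y))   # the output has moved toward the target 1.0
```

Training on a single example like this illustrates the mechanics only; a sigmoid output can approach but never exactly reach the target 1.0.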
- Neural nets represent complex functions
- The output becomes more complex with gradient steps

Slide 17: Overfitting
- The output fits the training data "too well": poor test-set accuracy
- Overfitting the training data
- Related to the bias-variance tradeoff, one of the central problems of ML
- Avoiding overfitting:
  - More training data
  - Regularization
  - Early stopping

Slide 18: What you need to know about neural networks
- Perceptron: representation; perceptron learning rule; derivation
- Multilayer neural nets: representation; derivation of backprop; learning rule
- Overfitting: definition; training set versus test set; learning curve

Slide 19: Announcements
- Recitation this week: neural networks
- Project proposals due next Wednesday
- Exciting data: Swivel.com (user-generated graphs); recognizing Captchas; election contributions; activity recognition; ...

Slide 20: Instance-based Learning
Machine Learning 10701/15781
Carlos Guestrin
Carnegie Mellon University
October 10th, 2007

Slide 21: Why not just use linear regression?

Slide 22: Using data to predict new data

Slide 23: Nearest neighbor

Slide 24: Univariate 1-Nearest Neighbor
Given datapoints (x1, y1), (x2, y2), ..., (xN, yN), where we assume yi = f(xi) for some unknown function f, and a query point xq, your job is to predict ŷ = f̂(xq).

Nearest Neighbor:
1. Find the closest xi in our set of datapoints: nn(q) = argmin_i |xi - xq|
2. Predict: ŷ = y_nn(q)

[Figure: a dataset with one input, one output, and four datapoints; each region of the x-axis is labeled with its closest datapoint]

Slide 25: 1-Nearest Neighbor is an example of... instance-based learning
A function approximator that has been around since about 1910: store the training pairs (x1, y1), ..., (xn, yn); to make a prediction, search the database for similar datapoints and fit with the local points.

Four things make a memory-based learner:
- A distance metric
- How many nearby neighbors to look at?
- A weighting function (optional)
- How to fit with the local points?

Slide 26: 1-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2. How many nearby neighbors to look at? One
3. A weighting function (optional): unused
4. How to fit with the local points? Just predict the same output as the nearest neighbor.

Slide 27: Multivariate 1-NN examples
[Figure: regression and classification examples]

Slide 28: Multivariate distance metrics
Suppose the input vectors x1, x2, ..., xN are two-dimensional: x1 = (x11, x12), x2 = (x21, x22), ..., xN = (xN1, xN2). One can draw the nearest-neighbor regions in input space.

Dist(xi, xj) = (xi1 - xj1)^2 + (xi2 - xj2)^2
Dist(xi, xj) = (xi1 - xj1)^2 + (3 xi2 - 3 xj2)^2

The relative scalings in the distance metric affect the region shapes.

Slide 29: Euclidean distance metric
D(x, x') = sqrt( sum_i sigma_i^2 (x_i - x'_i)^2 )

or equivalently

D(x, x') = sqrt( (x - x')^T Sigma (x - x') ), where Sigma = diag(sigma_1^2, sigma_2^2, ..., sigma_N^2)

Other metrics: Mahalanobis, rank-based, correlation-based, ...

Slide 30: Notable distance metrics (and their level sets)
- L1 norm (absolute)
- L-infinity (max) norm
- Scaled Euclidean (L2)
- Mahalanobis (here, Sigma from the previous slide is not necessarily diagonal, but is symmetric)

Slide 31: Consistency of 1-NN
- Consider an estimator f_n trained on n examples (e.g., 1-NN, neural nets, regression, ...)
- The estimator is consistent if the true error goes to zero as the amount of data increases; e.g., for noise-free data, consistent if: error(f_n) -> 0 as n -> infinity
- Regression is not consistent! (representation bias)
- 1-NN is consistent (under some mild fine print)
- What about variance???

Slide 32: 1-NN overfits?

Slide 33: k-Nearest Neighbor
Four things make a memory-based learner:
1. A distance metric: Euclidean (and many more)
2.
How many nearby neighbors to look at? k
3. A weighting function (optional): unused
4. How to fit with the local points? Just predict the average output among the k nearest neighbors.

Slide 34: k-Nearest Neighbor (here k = 9)
k-nearest neighbor for function fitting smooths away noise, but there are clear deficiencies. What can we do about all the discontinuities that k-NN gives?
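The k-NN recipe above (Euclidean distance, no weighting function, average the k nearest outputs) can be sketched in a few lines of NumPy. The function and variable names (`knn_predict`, `X`, `y`, `xq`) are illustrative, not from the course code:

```python
import numpy as np

def knn_predict(X, y, xq, k=1):
    # Euclidean distance from the query point to every stored datapoint
    dists = np.linalg.norm(X - xq, axis=1)
    # Indices of the k closest datapoints
    nearest = np.argsort(dists)[:k]
    # No weighting function: just average the k nearest outputs
    return y[nearest].mean()

# Four stored (x, y) datapoints, as in the univariate example
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 4.0, 9.0])

print(knn_predict(X, y, np.array([1.1]), k=1))   # -> 1.0, copies the nearest output
print(knn_predict(X, y, np.array([1.1]), k=3))   # -> 5/3, averages the three closest outputs
```

With k = 1 this is exactly the 1-NN rule from slide 26; raising k smooths the prediction, at the cost of the discontinuities discussed above.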
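The distance metrics from the earlier slides (plain Euclidean, scaled Euclidean with a diagonal Sigma, and Mahalanobis with a full symmetric Sigma) can also be sketched directly. This follows the slides' convention of D^2 = (x - x')^T Sigma (x - x'); the function names and example values are assumptions:

```python
import numpy as np

def euclidean(x, xp):
    # Plain Euclidean distance: sqrt(sum_i (x_i - x'_i)^2)
    return float(np.sqrt(np.sum((x - xp) ** 2)))

def mahalanobis(x, xp, Sigma):
    # D(x, x') = sqrt((x - x')^T Sigma (x - x'));
    # reduces to scaled Euclidean when Sigma is diagonal,
    # and to plain Euclidean when Sigma is the identity.
    d = x - xp
    return float(np.sqrt(d @ Sigma @ d))

x  = np.array([1.0, 2.0])
xp = np.array([4.0, 6.0])

print(euclidean(x, xp))                          # -> 5.0
print(mahalanobis(x, xp, np.eye(2)))             # identity Sigma: -> 5.0
print(mahalanobis(x, xp, np.diag([1.0, 9.0])))   # scaled Euclidean, second dimension weighted by 3
```

The diagonal case mirrors slide 28's example, where scaling one coordinate by 3 reshapes the nearest-neighbor regions.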