Contents

Neural Networks: Announcements; Logistic regression; Perceptron as a graph; Linear perceptron classification region; The perceptron learning rule; Perceptron, linear classification, Boolean functions; Going beyond linear classification; Hidden layer; Forward propagation for 1-hidden layer (prediction); Gradient descent for 1-hidden layer (back-propagation); Multilayer neural networks; Forward propagation (prediction); Back-propagation (learning); Many possible response functions; Convergence of backprop; Training set error; What about test set error?; Overfitting; What you need to know about neural networks.

Instance-based Learning: Announcements; Why not just use linear regression?; Using data to predict new data; Nearest neighbor; Univariate 1-nearest neighbor; 1-NN as instance-based learning; Multivariate 1-NN examples; Multivariate distance metrics; Euclidean distance metric; Notable distance metrics (and their level sets); Consistency of 1-NN; 1-NN overfits?; k-nearest neighbor; Weighted k-NNs; Kernel regression; Weighting functions; Kernel regression predictions; Kernel regression on our test cases; Kernel regression can look bad; Locally weighted regression; How LWR works; Another view of LWR; LWR on our test cases; Locally weighted polynomial regression; Curse of dimensionality for instance-based learning; Curse of the irrelevant feature; What you need to know about instance-based learning; Acknowledgment.

©2006 Carlos Guestrin

Neural Networks
Machine Learning – 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
February 15th, 2006
Many possible references, e.g., Mitchell, Chapter 4.

Announcements
- Recitations stay on Thursdays, 5-6:30pm, in Wean 5409. This week: cross-validation and neural nets.
- Homework 2 due next Monday, Feb.
20th. An updated version is online with more hints; start early.

Logistic regression
- P(Y|X) represented by a sigmoid of a linear function:
  P(Y = 1 | X) = 1 / (1 + exp(-(w0 + Σi wi Xi)))
- Learning rule: MLE, gradient ascent on the conditional log-likelihood.

Perceptron as a graph
- The sigmoid unit drawn as a graph: inputs Xi feed weighted edges wi into a single node that outputs σ(w0 + Σi wi Xi).
- [Figure: the sigmoid curve, rising from 0 to 1 over inputs -6 to 6]

Linear perceptron classification region
- [Figure: the line w0 + Σi wi Xi = 0 splits the input space into the two predicted classes]

The perceptron learning rule
- wi ← wi + η (y − ŷ) xi
- Compare to MLE: the update has the same form as the MLE gradient step, with the thresholded prediction ŷ in place of the sigmoid probability.

Perceptron, linear classification, Boolean functions
- Can learn x1 ∨ x2
- Can learn x1 ∧ x2
- Can learn any conjunction or disjunction
- Can learn majority
- Can perceptrons do everything? No: a single linear unit cannot represent XOR.

Going beyond linear classification
- Solving the XOR problem requires composing units, which motivates hidden layers.

Hidden layer
- Perceptron: out(x) = σ(w0 + Σi wi xi)
- 1-hidden layer: out(x) = σ(w0 + Σk wk · σ(w0^(k) + Σi wi^(k) xi))
- [Figures: example data for a NN with a hidden layer; the learned weights for the hidden layer; a NN for images and its learned weights]

Forward propagation for 1-hidden layer – Prediction
- 1-hidden layer: evaluate each hidden unit on the inputs, then evaluate the output unit on the hidden values (the nested expression above).

Gradient descent for 1-hidden layer – Back-propagation
- Computing the partial derivative of the loss with respect to each weight via the chain rule (w0 dropped to make the derivation simpler).

Multilayer neural networks

Forward propagation – prediction
- Recursive algorithm: start from the input layer; the output of node Vk with parents U1, U2, … is Vk = g(Σj wjk Uj) for the node's response function g.

Back-propagation – learning
- Just gradient descent!
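As a concrete sketch of this computation, assuming a 1-hidden-layer sigmoid network with squared-error loss and bias terms dropped (as in the slides' derivation), and with all weights and inputs invented for illustration, the back-propagated gradient can be checked against a finite difference:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, w2):
    """Forward propagation: hidden unit outputs, then the network output."""
    h = [sigmoid(sum(W1[k][i] * x[i] for i in range(len(x))))
         for k in range(len(W1))]
    out = sigmoid(sum(w2[k] * h[k] for k in range(len(h))))
    return h, out

def gradients(x, y, W1, w2):
    """Back-propagation for the loss E = (out - y)^2 / 2."""
    h, out = forward(x, W1, w2)
    delta_out = (out - y) * out * (1.0 - out)           # chain rule at the output unit
    g_w2 = [delta_out * h[k] for k in range(len(h))]    # output-layer weight gradients
    g_W1 = [[delta_out * w2[k] * h[k] * (1.0 - h[k]) * x[i]  # back through hidden unit k
             for i in range(len(x))] for k in range(len(W1))]
    return g_W1, g_w2

# invented toy example: 2 inputs, 2 hidden units
x, y = [1.0, -2.0], 1.0
W1 = [[0.1, 0.4], [-0.3, 0.2]]
w2 = [0.5, -0.6]
g_W1, g_w2 = gradients(x, y, W1, w2)

# finite-difference check of the gradient for w2[0]
def loss(W1, w2):
    _, out = forward(x, W1, w2)
    return 0.5 * (out - y) ** 2

eps = 1e-6
numeric = (loss(W1, [w2[0] + eps, w2[1]]) - loss(W1, w2)) / eps
print(abs(numeric - g_w2[0]) < 1e-5)  # analytic gradient matches the finite difference
```

Gradient descent then updates each weight as w ← w − η ∂E/∂w using these gradients, one example (or one batch) at a time.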
- Recursive algorithm for computing the gradient. For each example:
  - Perform forward propagation
  - Starting from the output layer, compute the gradient at node Vk with parents U1, U2, …
  - Update each weight wik

Many possible response functions
- Sigmoid
- Linear
- Exponential
- Gaussian
- …

Convergence of backprop
- The perceptron leads to a convex optimization problem, so gradient descent reaches the global minimum.
- Multilayer neural nets are not convex: gradient descent gets stuck in local minima.
- Hard to set the learning rate.
- Selecting the number of hidden units and layers is a fuzzy process.
- NNs have fallen into disfavor in the last few years; later in the semester we'll see that the kernel trick is a good alternative.
- Nonetheless, neural nets remain one of the most-used ML approaches.

Training set error
- Neural nets represent complex functions, and the learned output becomes more complex with each gradient step, so training set error keeps decreasing.

What about test set error?
- [Figure: test set error over the course of training]

Overfitting
- Output fits the training data "too well": poor test set accuracy.
- Overfitting the training data is related to the bias-variance tradeoff, one of the central problems of ML.
- Avoiding overfitting?
  - More training data
  - Regularization
  - Early stopping

What you need to know about neural networks
- Perceptron: representation; perceptron learning rule; derivation
- Multilayer neural nets: representation; derivation of backprop; learning rule
- Overfitting: definition; training set versus test set; learning curve

Instance-based Learning
Machine Learning – 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
February 15th, 2006

Announcements
- Reminder: the second homework is due Monday, Feb. 20th.

Why not just use linear regression?

Using data to predict new data

Nearest neighbor

Univariate 1-nearest neighbor
- Given datapoints (x1, y1), (x2, y2), …, (xN, yN), where we assume yi = f(xi) for some unknown function f.
- Given a query point xq, the job is to predict ŷ ≈ f(xq).
- Nearest neighbor:
  1. Find the closest xi in the set of datapoints: nn = argmin_i |xi − xq|
  2. Predict ŷ = y_nn
- [Figure: a dataset with one input, one output, and four datapoints; in each region of x, the closest datapoint supplies the prediction]

1-Nearest Neighbor is an example of… instance-based learning
- The learner stores the database x1, y1; x2, y2; …; xn, yn: a function approximator that has been around since about 1910.
- To make a prediction, search the database for similar datapoints, and fit with the local points.
- Four things make a memory-based learner: a distance metric; how many nearby neighbors to
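The univariate 1-NN rule above (nn = argmin_i |xi − xq|, then ŷ = y_nn) fits in a few lines; the four datapoints below are invented stand-ins for the ones plotted on the slide:

```python
def nn_predict(data, xq):
    """1-NN: find the datapoint whose x is closest to the query, predict its y."""
    _, y_nn = min(data, key=lambda point: abs(point[0] - xq))
    return y_nn

# four (x, y) datapoints standing in for the slide's plotted example
data = [(1.0, 2.0), (2.0, 3.5), (4.0, 1.0), (5.0, 2.5)]
print(nn_predict(data, 1.8))  # closest x is 2.0, so predict 3.5
```

The prediction is piecewise constant in xq: it jumps wherever the identity of the closest datapoint changes, which is exactly the behavior drawn on the slide.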
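The outline above also lists weighted k-NNs and kernel regression. As a hedged sketch of the standard Nadaraya-Watson estimator usually meant by "kernel regression", with a Gaussian weighting function and invented data (the slides' own weighting function and bandwidth are not shown here):

```python
import math

def kernel_regression(data, xq, bandwidth=1.0):
    """Nadaraya-Watson kernel regression: the prediction is a weighted
    average of every training output, with weights that decay smoothly
    with distance from the query point."""
    weights = [math.exp(-((x - xq) ** 2) / (2.0 * bandwidth ** 2))
               for x, _ in data]
    return sum(w * y for w, (_, y) in zip(weights, data)) / sum(weights)

# invented univariate datapoints
data = [(1.0, 2.0), (2.0, 3.5), (4.0, 1.0), (5.0, 2.5)]
print(kernel_regression(data, 2.0, bandwidth=0.5))  # smooth average dominated by (2.0, 3.5)
```

Unlike 1-NN, every datapoint contributes to every prediction: a small bandwidth approaches 1-NN behavior, while a very large bandwidth approaches the global mean of the outputs.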