Neural Networks
Machine Learning – 10-701/15-781
Carlos Guestrin
Carnegie Mellon University
October 12th, 2009
© Carlos Guestrin 2005-2009

Slide 2: Logistic regression
- P(Y|X) represented by a sigmoid of a linear function of the inputs: P(Y=1|X) = 1 / (1 + exp(-(w0 + Σi wi Xi)))
- Learning rule – MLE (gradient ascent on the conditional log likelihood)
[Figure: sigmoid curve, inputs from -6 to 6, outputs from 0 to 1]

Slide 3: Sigmoid
[Figure: three sigmoid curves showing the effect of the weights: w0=0, w1=1; w0=2, w1=1; w0=0, w1=0.5]

Slide 4: Perceptron as a graph
[Figure: the perceptron drawn as a graph, with inputs feeding a sigmoid output unit]

Slide 5: Linear perceptron classification region
[Figure: the perceptron's classification regions, separated by a linear decision boundary]

Slide 6: Optimizing the perceptron
- Trained to minimize sum-squared error

Slide 7: Derivative of sigmoid
- For g(z) = 1 / (1 + exp(-z)), the derivative is dg/dz = g(z) (1 - g(z))

Slide 8: The perceptron learning rule
- Compare to MLE (both update rules are sketched in code after slide 26)

Slide 9: Perceptron, linear classification, Boolean functions
- Can learn x1 ∨ x2
- Can learn x1 ∧ x2
- Can learn any conjunction or disjunction

Slide 10: Perceptron, linear classification, Boolean functions
- Can learn majority
- Can perceptrons do everything?

Slide 11: Going beyond linear classification
- Solving the XOR problem

Slide 12: Hidden layer
- Perceptron: out(x) = g(w0 + Σi wi xi)
- 1-hidden layer: out(x) = g(w0 + Σk wk g(w0^k + Σi wi^k xi))

Slide 13: Example data for NN with hidden layer
[Figure]

Slide 14: Learned weights for hidden layer
[Figure]

Slide 15: NN for images
[Figure]

Slide 16: Weights in NN for images
[Figure]

Slide 17: Forward propagation for 1-hidden layer – Prediction
- 1-hidden layer: out(x) = g(w0 + Σk wk g(w0^k + Σi wi^k xi))
- (A code sketch of the 1-hidden-layer forward pass and backprop updates, on the XOR data from slide 11, appears after slide 26)

Slides 18–19: Gradient descent for 1-hidden layer – Back-propagation
- Computing the gradient of the error with respect to the weights (w0 dropped to make the derivation simpler)

Slide 20: Multilayer neural networks
[Figure]

Slide 21: Forward propagation – prediction
- Recursive algorithm
- Start from the input layer
- Output of node Vk with parents U1, U2, …: Vk = g(Σi wik Ui)
- (A recursive forward-propagation sketch appears after slide 26)

Slide 22: Back-propagation – learning
- Just gradient descent!
- Recursive algorithm for computing the gradient
- For each example:
  - Perform forward propagation
  - Start from the output layer
  - Compute the gradient at node Vk with parents U1, U2, …
  - Update weight wik

Slide 23: Many possible response functions
- Sigmoid
- Linear
- Exponential
- Gaussian
- …

Slide 24: Convergence of backprop
- Perceptron leads to a convex optimization
  - Gradient descent reaches the global minimum
- Multilayer neural nets are not convex
  - Gradient descent gets stuck in local minima
  - Hard to set the learning rate
  - Selecting the number of hidden units and layers = fuzzy process
  - NNs fell into disfavor in the last few years
  - We'll see later in the semester that the kernel trick is a good alternative
- Nonetheless, neural nets are one of the most used ML approaches
- Plus, neural nets are back with a new name!
  - Deep belief networks (with a probabilistic interpretation & a slightly different learning procedure)

Slide 25: Overfitting?
- Neural nets represent complex functions
- Output becomes more complex with more gradient steps

Slide 26: Overfitting
- Output fits the training data "too well"
  - Poor test set accuracy
- Overfitting the training data
  - Related to the bias-variance tradeoff
  - One of the central problems of ML
- Avoiding overfitting?
  - More training data
  - Regularization
  - Early stopping
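The sketches below are not from the lecture; they are minimal illustrations of the ideas above. The first contrasts the MLE (logistic regression) update from slide 2 with the perceptron's sum-squared-error update from slides 6–8, for a single sigmoid unit. The toy data (x1 ∧ x2 from slide 9), the learning rate eta, and the iteration count are assumptions made only for the example.

```python
import numpy as np

def sigmoid(z):
    # Slide 2: P(Y=1|x,w) = 1 / (1 + exp(-(w0 + sum_i w_i x_i)))
    return 1.0 / (1.0 + np.exp(-z))

# Toy data (assumed): 4 examples of x1 AND x2, with a constant 1 feature standing in for w0
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w = np.zeros(3)
eta = 0.5                     # learning rate (assumed value)

for _ in range(1000):
    p = sigmoid(X @ w)        # predicted P(Y=1 | x, w)

    # MLE / logistic-regression rule (slide 2): w_i <- w_i + eta * sum_j x_i^j (y^j - p^j)
    grad_mle = X.T @ (y - p)

    # Perceptron rule, minimizing sum-squared error (slides 6-8): the chain rule
    # brings in the sigmoid derivative p(1-p) from slide 7
    grad_sse = X.T @ ((y - p) * p * (1 - p))

    w += eta * grad_mle       # swap in grad_sse to compare the two update rules

print(np.round(sigmoid(X @ w), 2))   # roughly [0, 0, 0, 1]
```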
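Next, a minimal sketch of the 1-hidden-layer network from slides 12 and 17–19, trained with back-propagation (slide 22) on the XOR data from slide 11. The hidden-layer size, learning rate, iteration count, and random initialization are assumptions; with these settings the network typically learns XOR, which a single perceptron cannot represent.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(z):                     # sigmoid response function (slide 23 lists other choices)
    return 1.0 / (1.0 + np.exp(-z))

# XOR data (slide 11), with a constant 1 feature standing in for the w0 terms
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

n_hidden = 3                                          # assumed hidden-layer size
W1 = rng.normal(size=(3, n_hidden))                   # input -> hidden weights w_ik
w2 = rng.normal(size=n_hidden + 1)                    # hidden (+ bias) -> output weights w_k
eta = 0.5                                             # assumed learning rate

for _ in range(20000):
    # Forward propagation (slide 17): out(x) = g(w0 + sum_k w_k g(w0^k + sum_i w_i^k x_i))
    h = np.hstack([np.ones((len(X), 1)), g(X @ W1)])  # hidden-unit outputs, plus bias unit
    out = g(h @ w2)                                   # network output

    # Back-propagation (slides 18-19, 22): gradient of the squared error via the chain rule
    d_out = (y - out) * out * (1 - out)                           # error signal at the output unit
    d_hid = np.outer(d_out, w2[1:]) * h[:, 1:] * (1 - h[:, 1:])   # error signals at hidden units

    # Gradient-descent weight updates
    w2 += eta * h.T @ d_out
    W1 += eta * X.T @ d_hid

h = np.hstack([np.ones((len(X), 1)), g(X @ W1)])
print(np.round(g(h @ w2), 2))   # typically close to the XOR targets [0, 1, 1, 0]
```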
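Finally, a sketch of the recursive forward-propagation algorithm from slide 21, where the output of node Vk with parents U1, U2, … is the sigmoid of a weighted sum of the parents' outputs. The node names and weights below are hypothetical, chosen only to make the recursion concrete.

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def forward(node, weights, inputs, cache):
    # Slide 21: Vk = g(sum_i w_ik * Ui), computed recursively starting from the input layer
    if node in inputs:                    # input nodes just report their value
        return inputs[node]
    if node not in cache:                 # memoize so shared parents are computed once
        total = sum(w * forward(parent, weights, inputs, cache)
                    for parent, w in weights[node])
        cache[node] = sigmoid(total)
    return cache[node]

# Hypothetical 2-3-1 network; node names and weights are made up for illustration
weights = {
    "h1": [("x1", 1.0), ("x2", -1.0)],
    "h2": [("x1", 0.5), ("x2", 0.5)],
    "h3": [("x1", -1.0), ("x2", 1.0)],
    "out": [("h1", 2.0), ("h2", -1.5), ("h3", 2.0)],
}
print(forward("out", weights, {"x1": 1.0, "x2": 0.0}, cache={}))
```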
Slide 27: What you need to know about neural networks
- Perceptron:
  - Representation
  - Perceptron learning rule
  - Derivation
- Multilayer neural nets:
  - Representation
  - Derivation of backprop
  - Learning rule
- Overfitting:
  - Definition
  - Training set versus test set
  - Learning