Neural Networks
Machine Learning 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
October 12th, 2009

Logistic regression
- P(Y | X) represented by the sigmoid of a linear function of X  [plot of a sigmoid curve]
- Learning rule: MLE

Sigmoid
- g(w0 + w1 x) = 1 / (1 + exp(-(w0 + w1 x)))
- [three sigmoid plots: (w0 = 2, w1 = 1), (w0 = 0, w1 = 1), (w0 = 0, w1 = 0.5)]

Perceptron as a graph
- [diagram: inputs feed a weighted sum that is passed through the sigmoid]

Linear perceptron: classification region
- [plot: the decision boundary is a line (hyperplane) in the input space]

Optimizing the perceptron
- Trained to minimize the sum-squared error: l(w) = sum_j [ y^j - g(w0 + sum_i w_i x_i^j) ]^2

Derivative of sigmoid
- d g(z) / d z = g(z) (1 - g(z))

The perceptron learning rule
- Gradient-descent update of each weight on the sum-squared error (see the first code sketch below)
- Compare to MLE

Perceptron: linear classification (Boolean functions)
- Can learn x1 AND x2
- Can learn x1 OR x2
- Can learn any conjunction or disjunction

Perceptron: linear classification (Boolean functions)
- Can learn majority
- Can perceptrons do everything?

Going beyond linear classification
- Solving the XOR problem

Hidden layer
- [diagrams: a perceptron vs. a network with 1 hidden layer]

Example data for NN with hidden layer  [figure]

Learned weights for hidden layer  [figure]

NN for images  [figure]

Weights in NN for images  [figure]

Forward propagation for 1 hidden layer
- Prediction with 1 hidden layer:
  out(x) = g( w0 + sum_k w_k g( w0^k + sum_i w_i^k x_i ) )

Gradient descent for 1 hidden layer: back-propagation
- Computing the gradient for the hidden-to-output weights
- (Dropped w0 to make the derivation simpler)

Gradient descent for 1 hidden layer: back-propagation
- Computing the gradient for the input-to-hidden weights
- (Dropped w0 to make the derivation simpler)
- (See the XOR back-propagation sketch below)

Multilayer neural networks
- [diagram: several hidden layers between inputs and outputs]

Forward propagation: prediction
- Recursive algorithm
- Start from the input layer
- Output of node V_k with parents U_1, U_2, ...: V_k = g( w_0^k + sum_i w_i^k U_i )

Back-propagation: learning
- Just gradient descent!
- Recursive algorithm for computing the gradient
- For each example:
  - Perform forward propagation
  - Start from the output layer
  - Compute the gradient of node V_k with parents U_1, U_2, ...
  - Update weight w_ik

Many possible response functions
- Sigmoid
- Linear
- Exponential
- Gaussian

Convergence of backprop
- Perceptron leads to convex optimization
  - Gradient descent reaches the global minimum
- Multilayer neural nets are not convex
  - Gradient descent gets stuck in local minima
  - Hard to set the learning rate
  - Selecting the number of hidden units and layers is a fuzzy process
  - NNs have been falling out of favor in the last few years
  - We'll see later in the semester that the kernel trick is a good alternative
  - Nonetheless, neural nets are one of the most used ML approaches
- Plus, neural nets are back with a new name and a probabilistic interpretation:
  deep belief networks (slightly different learning procedure)

Overfitting
- Neural nets represent complex functions
- The output becomes more complex with gradient steps

Overfitting
- Output fits the training data too well
  - Poor test-set accuracy
- Overfitting the training data
  - Related to the bias-variance tradeoff
  - One of the central problems of ML
- Avoiding overfitting
  - More training data
  - Regularization
  - Early stopping (see the sketch below)
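To make the perceptron learning rule concrete, here is a minimal NumPy sketch of a single sigmoid unit trained by gradient descent on the sum-squared error, as in the "Optimizing the perceptron" slides. The data set (the Boolean AND function), the learning rate eta, and the number of epochs are illustrative assumptions, not values from the lecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sigmoid_unit(X, y, eta=0.5, epochs=5000):
    """Gradient descent on the sum-squared error sum_j (y_j - g(w0 + w . x_j))^2."""
    n, d = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])   # prepend a constant 1 so w[0] plays the role of w0
    w = np.zeros(d + 1)
    for _ in range(epochs):
        y_hat = sigmoid(Xb @ w)
        # d error / d w_i = -2 * sum_j (y_j - y_hat_j) * y_hat_j * (1 - y_hat_j) * x_ij,
        # using the sigmoid derivative g'(z) = g(z) (1 - g(z))
        grad = -2.0 * Xb.T @ ((y - y_hat) * y_hat * (1.0 - y_hat))
        w -= eta * grad
    return w

# A conjunction (x1 AND x2) is linearly separable, so a single unit can learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w = train_sigmoid_unit(X, y)
print(np.round(sigmoid(np.hstack([np.ones((4, 1)), X]) @ w), 2))
```

The only difference from the MLE (logistic-regression) update is the extra y_hat * (1 - y_hat) factor, which comes from differentiating the sigmoid inside the squared error.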
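The XOR and back-propagation slides can be illustrated with a small network with one hidden layer of sigmoid units, trained by gradient descent on the sum-squared error. The number of hidden units, learning rate, initialization, and iteration count below are assumptions for the demo; as the convergence slide warns, gradient descent on this non-convex objective can still land in a local minimum for an unlucky initialization.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR labels

n_hidden = 4                                           # assumed layer size
W1 = rng.normal(scale=1.0, size=(2, n_hidden))         # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=1.0, size=(n_hidden, 1))         # hidden -> output weights
b2 = np.zeros(1)

eta = 0.5
for _ in range(20000):
    # Forward propagation: hidden activations, then the output unit.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back-propagation of the sum-squared error sum_j (y_j - out_j)^2.
    # delta_out = d error / d (pre-activation of the output unit)
    delta_out = -2.0 * (y - out) * out * (1.0 - out)
    # Push the error back through W2, times the sigmoid derivative at the hidden layer.
    delta_h = (delta_out @ W2.T) * h * (1.0 - h)

    # "Just gradient descent": update every weight with its gradient.
    W2 -= eta * h.T @ delta_out
    b2 -= eta * delta_out.sum(axis=0)
    W1 -= eta * X.T @ delta_h
    b1 -= eta * delta_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # typically close to [0, 1, 1, 0]
```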
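Finally, a minimal early-stopping sketch for the "avoiding overfitting" slide. It assumes hypothetical train_step and validation_error callables, standing in for one gradient-descent update and an error evaluation on held-out data; the weights that did best on the validation set are the ones returned.

```python
import copy

def train_with_early_stopping(model, train_step, validation_error,
                              max_steps=10000, patience=20):
    """Stop training when the held-out error has not improved for `patience` steps."""
    best_err = float("inf")
    best_model = copy.deepcopy(model)
    steps_since_improvement = 0
    for _ in range(max_steps):
        train_step(model)                      # one gradient-descent update
        err = validation_error(model)          # error on held-out data
        if err < best_err:
            best_err = err
            best_model = copy.deepcopy(model)
            steps_since_improvement = 0
        else:
            steps_since_improvement += 1
            if steps_since_improvement >= patience:
                break                          # validation error stopped improving
    return best_model
```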
What you need to know about neural networks
- Perceptron
  - Representation
  - Perceptron learning rule
  - Derivation
- Multilayer neural nets
  - Representation
  - Derivation of backprop
  - Learning rule
- Overfitting
  - Definition
  - Training set versus test set
  - Learning curve