Neural Networks
Machine Learning 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
October 8th, 2007

Logistic regression
- P(Y|X) represented by a sigmoid
- Learning rule: MLE
[Figure: sigmoid curves for (w0=2, w1=1), (w0=0, w1=1), (w0=0, w1=0.5)]

Perceptron as a graph

Linear perceptron classification region

Optimizing the perceptron
- Trained to minimize sum-squared error

Derivative of sigmoid

The perceptron learning rule
- Compare to MLE

Perceptron: linear classification (Boolean functions)
- Can learn x1 AND x2
- Can learn x1 OR x2
- Can learn any conjunction or disjunction

Perceptron: linear classification (Boolean functions)
- Can learn majority
- Can perceptrons do everything?

Going beyond linear classification
- Solving the XOR problem

Hidden layer
- Perceptron vs. 1-hidden-layer network

Example data for NN with hidden layer

Learned weights for hidden layer

NN for images

Weights in NN for images

Forward propagation for 1 hidden layer
- Prediction with a 1-hidden-layer network

Gradient descent for 1 hidden layer: back-propagation
- Computing the gradient (dropped w0 to make the derivation simpler)

Multilayer neural networks

Forward propagation (prediction)
- Recursive algorithm
- Start from the input layer
- Output of node Vk with parents U1, U2, ...

Back-propagation (learning)
- Just gradient descent!
- Recursive algorithm for computing the gradient
- For each example:
  - Perform forward propagation
  - Start from the output layer
  - Compute the gradient of node Vk with parents U1, U2, ...
  - Update weight wik
  (see the code sketch after the overfitting slides below)

Many possible response functions
- Sigmoid
- Linear
- Exponential
- Gaussian

Convergence of backprop
- Perceptron leads to convex optimization
  - Gradient descent reaches the global minimum
- Multilayer neural nets are not convex
  - Gradient descent gets stuck in local minima
  - Hard to set the learning rate
  - Selecting the number of hidden units and layers is a fuzzy process
- NNs falling into disfavor in the last few years
  - We'll see later in the semester that the kernel trick is a good alternative
  - Nonetheless, neural nets are one of the most used ML approaches

Overfitting
- Neural nets represent complex functions
- Output becomes more complex with gradient steps

Overfitting
- Output fits the training data too well
  - Poor test set accuracy
- Overfitting the training data
  - Related to the bias-variance tradeoff
  - One of the central problems of ML
- Avoiding overfitting
  - More training data
  - Regularization
  - Early stopping
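The following is a minimal sketch (not taken from the slides) of the 1-hidden-layer network described above: sigmoid units, forward propagation, and back-propagation of the squared-error gradient, applied to the XOR problem that a single perceptron cannot represent. The layer size, learning rate, and step count (n_hidden, lr, n_steps) are illustrative assumptions.

```python
# Minimal sketch: 1-hidden-layer network with sigmoid units, trained by
# gradient descent / back-propagation on sum-squared error (hyperparameters
# are illustrative, not values from the lecture).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: not linearly separable, so a single perceptron cannot learn it,
# but one hidden layer suffices.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

rng = np.random.default_rng(0)
n_hidden, lr, n_steps = 3, 0.5, 20000
W1 = rng.normal(size=(2, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=n_hidden)        # hidden -> output weights
b2 = 0.0

for _ in range(n_steps):
    # Forward propagation: compute each layer from its parents.
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    out = sigmoid(h @ W2 + b2)        # network output

    # Back-propagation of the gradient of 1/2 * sum (out - y)^2,
    # using the sigmoid derivative g'(z) = g(z) * (1 - g(z)).
    d_out = (out - y) * out * (1 - out)        # error signal at the output
    d_h = np.outer(d_out, W2) * h * (1 - h)    # error signal at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum()
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Typically prints values close to [0, 1, 1, 0]; since the objective is
# non-convex, a poor initialization can still land in a local minimum.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```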
What you need to know about neural networks
- Perceptron
  - Representation
  - Perceptron learning rule
  - Derivation
- Multilayer neural nets
  - Representation
  - Derivation of backprop
  - Learning rule
- Overfitting
  - Definition
  - Training set versus test set
  - Learning curve
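For reference, here are the standard forms behind the "Derivative of sigmoid" and "perceptron learning rule" slides, written for a sigmoid unit trained to minimize sum-squared error. This is a sketch of the usual textbook equations, not a copy of the slide equations; the learning rate η is an assumed symbol.

```latex
% Sigmoid and its derivative
\[
  g(z) = \frac{1}{1 + e^{-z}}, \qquad
  \frac{dg}{dz} = g(z)\,\bigl(1 - g(z)\bigr)
\]
% Sigmoid-unit prediction and sum-squared error over examples j
\[
  \hat{y}^{(j)} = g\Bigl(w_0 + \sum_i w_i x_i^{(j)}\Bigr), \qquad
  \ell(\mathbf{w}) = \frac{1}{2}\sum_j \bigl(y^{(j)} - \hat{y}^{(j)}\bigr)^2
\]
% Gradient-descent (perceptron) learning rule for the squared error
\[
  w_i \leftarrow w_i + \eta \sum_j \bigl(y^{(j)} - \hat{y}^{(j)}\bigr)\,
  \hat{y}^{(j)}\bigl(1 - \hat{y}^{(j)}\bigr)\, x_i^{(j)}
\]
```

Compared with the MLE (logistic-regression) update, which uses the error (y - ŷ) times x directly, the squared-error rule carries the extra ŷ(1 - ŷ) factor coming from the sigmoid derivative.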