Neural Networks

Machine Learning 10-701
Tom M. Mitchell
Center for Automated Learning and Discovery
Carnegie Mellon University
September 28, 2006

Required reading: Bishop, Chapter 5, especially 5.1, 5.2, 5.3, and 5.5 through 5.5.2
Optional reading on neural nets: Mitchell, Chapter 4

Artificial Neural Networks

Goal: learn f: X -> Y, where
- f might be a non-linear function
- X is a vector of continuous and/or discrete variables
- Y is a vector of continuous and/or discrete variables

Approach: represent f by a network of threshold units, where each unit computes a logistic function of its inputs:

  o = sigma(w . x) = 1 / (1 + exp(-sum_i w_i x_i))

MLE: train the weights of all units to minimize the sum of squared errors of the network output. (A minimal sketch of a single logistic unit appears after the summary below.)

Example: ALVINN (Pomerleau, 1993), a network trained to steer an autonomous vehicle from camera images.

M(C)LE Training for Neural Networks

Consider the regression problem f: X -> Y for scalar Y:

  y = f(x) + eps,   eps ~ N(0, sigma^2) i.i.d. noise, f deterministic

Let's maximize the conditional data likelihood. Under this Gaussian noise model,

  W <- arg max_W ln prod_d P(y_d | x_d, W) = arg min_W sum_d (t_d - o_d)^2

so the learned neural network is the one whose weights minimize the sum of squared errors between the target outputs t_d and the observed network outputs o_d. (A step-by-step derivation follows the summary below.)

[Figure: learned neural network]

MAP Training for Neural Networks

Same regression problem f: X -> Y for scalar Y: y = f(x) + eps, with eps ~ N(0, sigma^2) noise and f deterministic. Now assume a zero-mean Gaussian prior on the weights, P(W) = N(0, sigma_W^2 I), so that

  ln P(W) = const - c sum_i w_i^2

for a constant c > 0. Then

  W <- arg max_W [ln P(W) + ln P(Y | X, W)] = arg min_W [sum_d (t_d - o_d)^2 + c sum_i w_i^2]

i.e., MAP training adds a squared-weight (weight decay) penalty to the original MLE error function.

Notation for the gradient descent / backpropagation derivation:
- x_d : input for training example d
- t_d : target output for example d
- o_d : observed unit output for example d
- w_i : weight i
- w_ij : weight from unit i to unit j

(A gradient-descent training sketch using this notation follows the summary below.)

Artificial neural networks: what you should know
- Highly expressive non-linear functions
- Highly parallel network of logistic function units
- Minimizing the sum of squared training errors gives the MLE estimate of the network weights, if we assume zero-mean Gaussian noise on the output values
- Minimizing the sum of squared errors plus squared-weight regularization gives the MAP estimate, assuming zero-mean Gaussian priors on the weights
- Gradient descent as the training procedure, and how to derive your own gradient descent procedure
- Hidden units discover useful representations
- Local minima are the greatest problem
- Overfitting: regularization, early stopping (a sketch of early stopping follows below)
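A minimal NumPy sketch of the single logistic unit described above; the specific weights and inputs are arbitrary illustrative values, not from the lecture.

    import numpy as np

    def logistic_unit(w, x):
        # Output of a single logistic (sigmoid) unit: 1 / (1 + exp(-w . x)).
        # A bias can be folded in by appending a constant 1 to x and a
        # corresponding bias weight to w.
        return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

    # Arbitrary illustrative weights and input (last entry is the bias input)
    w = np.array([0.5, -1.0, 0.2])
    x = np.array([1.0, 2.0, 1.0])
    print(logistic_unit(w, x))   # a value in (0, 1)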
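To spell out why maximizing the conditional data likelihood under the Gaussian noise model is equivalent to minimizing the sum of squared errors, here is the standard derivation in the slide's notation (o_d is the network output for input x_d):

    \begin{align*}
    W_{\mathrm{MLE}} &= \arg\max_W \ln \prod_d P(t_d \mid x_d, W) \\
      &= \arg\max_W \sum_d \ln \frac{1}{\sigma\sqrt{2\pi}}
         \exp\!\left(-\frac{(t_d - o_d)^2}{2\sigma^2}\right) \\
      &= \arg\max_W \sum_d \left[-\ln\!\left(\sigma\sqrt{2\pi}\right)
         - \frac{(t_d - o_d)^2}{2\sigma^2}\right] \\
      &= \arg\min_W \sum_d (t_d - o_d)^2
    \end{align*}

The sigma and constant terms do not depend on W, which is why they drop out of the arg min.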
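A self-contained gradient-descent/backpropagation sketch for a tiny one-hidden-layer network of logistic units, minimizing the squared-error objective with the optional weight-decay term from the MAP slide. The data, layer sizes, learning rate, and decay strength are arbitrary illustrative choices, not from the lecture.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Arbitrary synthetic regression data: targets t_d in (0, 1)
    X = rng.uniform(-1.0, 1.0, size=(50, 2))
    T = sigmoid(X[:, :1] - 2.0 * X[:, 1:])

    # One hidden layer of 3 logistic units feeding one logistic output unit
    W1 = rng.normal(0.0, 0.1, size=(2, 3))   # w_ij: input i -> hidden j
    W2 = rng.normal(0.0, 0.1, size=(3, 1))   # hidden -> output
    eta, c, N = 0.5, 1e-4, X.shape[0]        # learning rate, weight decay

    for epoch in range(5000):
        # Forward pass
        H = sigmoid(X @ W1)                  # hidden activations
        O = sigmoid(H @ W2)                  # observed outputs o_d

        # Backward pass for E = 1/2 sum_d (t_d - o_d)^2 + (c/2) sum w^2;
        # delta is dE/d(net input) at each unit, using sigmoid' = o(1 - o)
        delta_out = (O - T) * O * (1.0 - O)
        delta_hid = (delta_out @ W2.T) * H * (1.0 - H)

        # Gradient descent step; the weight-decay term gives the MAP estimate
        W2 -= eta * ((H.T @ delta_out) / N + c * W2)
        W1 -= eta * ((X.T @ delta_hid) / N + c * W1)

    print("final SSE:", float(np.sum((T - sigmoid(sigmoid(X @ W1) @ W2)) ** 2)))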
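Finally, a sketch of early stopping, one of the overfitting remedies listed in the summary: train by gradient descent but keep the weights from the epoch with the lowest error on a held-out validation set. A single logistic unit and synthetic data are used here purely to illustrate the mechanism.

    import numpy as np

    rng = np.random.default_rng(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Arbitrary noisy data, split into training and validation halves
    X = rng.uniform(-1.0, 1.0, size=(80, 2))
    T = sigmoid(X @ np.array([2.0, -1.0])) + rng.normal(0.0, 0.1, size=80)
    Xtr, Ttr, Xva, Tva = X[:40], T[:40], X[40:], T[40:]

    w = rng.normal(0.0, 0.1, size=2)
    eta = 0.5
    best_err, best_w = np.inf, w.copy()

    for epoch in range(500):
        o = sigmoid(Xtr @ w)
        # Gradient step for E = 1/2 sum (t - o)^2 on the training half
        w -= eta * (Xtr.T @ ((o - Ttr) * o * (1.0 - o))) / len(Xtr)

        # Track the weights with the lowest held-out (validation) error
        val_err = np.sum((Tva - sigmoid(Xva @ w)) ** 2)
        if val_err < best_err:
            best_err, best_w = val_err, w.copy()

    w = best_w   # "early-stopped" weights
    print("best validation SSE:", float(best_err))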