Neural Networks
Machine Learning 10-701
Tom M. Mitchell
Center for Automated Learning and Discovery
Carnegie Mellon University
September 28, 2006

Required reading:
• Bishop Chapter 5, especially 5.1, 5.2, 5.3, and 5.5 through 5.5.2
Optional reading:
• Neural nets: Mitchell chapter 4

Artificial Neural Networks to learn f: X → Y
• f might be a non-linear function
• X: (vector of) continuous and/or discrete vars
• Y: (vector of) continuous and/or discrete vars
• Represent f by a network of threshold units
• Each unit is a logistic function
• MLE: train the weights of all units to minimize the sum of squared errors of the network outputs

ALVINN [Pomerleau 1993]
[Figure: the ALVINN steering network and the learned neural network's hidden-unit weights]

M(C)LE Training for Neural Networks
• Consider the regression problem f: X → Y, for scalar Y:
  y = f(x) + ε,   f deterministic, ε iid noise ~ N(0, σ_ε)
• Let's maximize the conditional data likelihood:
  W_MLE ← arg max_W ln P(data | W) = arg min_W Σ_d (t_d − o_d)²
  where x_d = input, t_d = target output, o_d = observed unit output, w_i = weight i
  (a gradient-descent training sketch appears after the summary below)

MAP Training for Neural Networks
• Same regression problem f: X → Y, for scalar Y: y = f(x) + ε, ε ~ N(0, σ_ε)
• Now assume a Gaussian prior on the weights, P(W) = N(0, σI), so ln P(W) ↔ c Σ_i w_i²:
  W_MAP ← arg max_W ln [P(data | W) P(W)]
        = arg min_W [ Σ_d (t_d − o_d)²  +  c Σ_i w_i² ]
          (original MLE error fn.)        (regularization term)
  where x_d = input, t_d = target output, o_d = observed unit output, w_ij = weight from unit i to unit j

Artificial neural networks – what you should know
• Highly expressive non-linear functions
• Highly parallel network of logistic function units
• Minimizing sum of squared training errors
  – Gives MLE estimates of network weights if we assume zero-mean Gaussian noise on output values
• Minimizing sum of squared errors plus squared weights (regularization)
  – Gives MAP estimates assuming weight priors are zero-mean Gaussian
• Gradient descent as training procedure
  – How to derive your own gradient descent procedure
• Discover useful representations at hidden units
• Local minima are the greatest problem
• Overfitting, regularization, early stopping
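The slides above describe MLE training (minimize Σ_d (t_d − o_d)²), MAP training (add the penalty c Σ_i w_i²), and gradient descent as the training procedure for a network of logistic units. Below is a minimal sketch of that recipe, not code from the lecture: the network size, learning rate, initialization, and the XOR example data are illustrative assumptions.

# Minimal sketch: one hidden layer of logistic (sigmoid) units trained by
# batch gradient descent on the sum-of-squared-errors objective
# E(W) = sum_d (t_d - o_d)^2 (the MLE objective under zero-mean Gaussian
# output noise).  Setting weight_decay > 0 adds the MAP penalty c * sum_i w_i^2.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    # Hidden layer of logistic units, then a single logistic output unit.
    H = sigmoid(X @ W1 + b1)          # hidden activations, shape (n, n_hidden)
    O = sigmoid(H @ W2 + b2)          # network outputs o_d, shape (n, 1)
    return H, O

def train(X, T, n_hidden=4, lr=1.0, epochs=10000, weight_decay=0.0, seed=0):
    # Batch gradient descent on E(W), optionally plus weight_decay * sum_i w_i^2.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        H, O = forward(X, W1, b1, W2, b2)
        # Backpropagation through the logistic units
        # (constant factors of 2 absorbed into the learning rate).
        dO = (O - T) * O * (1 - O)            # error signal at the output unit
        dH = (dO @ W2.T) * H * (1 - H)        # error signal at the hidden units
        W2 -= lr * (H.T @ dO + weight_decay * W2)
        b2 -= lr * dO.sum(axis=0)
        W1 -= lr * (X.T @ dH + weight_decay * W1)
        b1 -= lr * dH.sum(axis=0)
    return W1, b1, W2, b2

if __name__ == "__main__":
    # Tiny non-linear example: learn XOR with MLE training (weight_decay = 0).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    params = train(X, T)
    _, O = forward(X, *params)
    print(np.round(O, 2))   # typically approaches [[0], [1], [1], [0]]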
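As a worked version of the likelihood step the M(C)LE and MAP slides invoke, the claims in the summary (sum of squared errors ↔ MLE under zero-mean Gaussian output noise; adding the squared-weight penalty ↔ MAP under a zero-mean Gaussian weight prior) can be written out as follows, with constants independent of W absorbed into c:

\begin{align*}
W_{\mathrm{MLE}} &= \arg\max_W \, \ln \prod_d P(t_d \mid x_d, W)
  = \arg\max_W \sum_d \ln \frac{1}{\sqrt{2\pi}\,\sigma_\varepsilon}
    \exp\!\left(-\frac{(t_d - o_d)^2}{2\sigma_\varepsilon^2}\right) \\
 &= \arg\min_W \sum_d (t_d - o_d)^2
   \qquad \text{(zero-mean Gaussian noise on outputs)} \\[6pt]
W_{\mathrm{MAP}} &= \arg\max_W \, \big[\ln P(\mathrm{data} \mid W) + \ln P(W)\big]
  = \arg\min_W \, \sum_d (t_d - o_d)^2 + c \sum_i w_i^2
   \qquad \text{(zero-mean Gaussian prior on weights)}
\end{align*}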