Neural Networks
CS472/CS473 – Fall 2005

Restaurant Data Set

Limited Expressiveness of Perceptrons
• Minsky and Papert (1969) showed that certain simple functions cannot be represented by perceptrons (e.g., Boolean XOR). This result effectively killed the field!
• Mid-1980s: non-linear neural networks (Rumelhart et al. 1986)

Neural Networks
• Rich history, starting in the early 1940s (McCulloch and Pitts 1943).
• Two views:
  – Modeling the brain
  – "Just" a representation of complex functions (continuous; contrast with decision trees)
• Much progress on both fronts.
• Has drawn interest from neuroscience, cognitive science, AI, physics, statistics, and CS/EE.

Neuron

Why Neural Nets?
Motivation: solving problems under constraints similar to those of the brain may lead to solutions to AI problems that would otherwise be overlooked.
• Individual neurons operate very slowly → massively parallel algorithms
• Neurons are failure-prone devices → distributed representations
• Neurons promote approximate matching → less brittle

Connectionist Models of Learning
Characterized by:
• A large number of very simple, neuron-like processing elements.
• A large number of weighted connections between the elements.
• Highly parallel, distributed control.
• An emphasis on learning internal representations automatically.

Artificial Neurons

Activation Functions:

Example: Perceptron

Perceptron
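The slides close with a perceptron example. A minimal sketch of a perceptron with a step activation function, trained with the standard perceptron learning rule, is below. The class and variable names are my own illustration, not from the lecture; the training set here is Boolean AND, which (unlike XOR) is linearly separable, so the perceptron converges on it.

```python
def step(z):
    # Threshold activation: output 1 if the weighted sum is non-negative, else 0.
    return 1 if z >= 0 else 0

class Perceptron:
    def __init__(self, n_inputs, lr=0.1):
        self.w = [0.0] * n_inputs  # one weight per input
        self.b = 0.0               # bias term
        self.lr = lr               # learning rate

    def predict(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return step(z)

    def train(self, data, epochs=20):
        # Perceptron learning rule: nudge weights in proportion to the error.
        for _ in range(epochs):
            for x, y in data:
                err = y - self.predict(x)
                self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
                self.b += self.lr * err

# Boolean AND: linearly separable, so training converges.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
p = Perceptron(2)
p.train(AND)
print([p.predict(x) for x, _ in AND])  # → [0, 0, 0, 1]
```

Running the same loop on XOR never converges, no matter how many epochs are used: no single linear threshold separates the XOR classes, which is exactly the limitation Minsky and Papert pointed out.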