
Neural Networks - 2

Robert Stengel
Robotics and Intelligent Systems, MAE 345, Princeton University, 2013

Learning Objectives
• Associative/recurrent networks
– Hopfield network
– Adaptive resonance theory network
• Unsupervised training
– k-means clustering
– Self-organizing map
• Deep learning
• Restricted Boltzmann machine
• Semi-supervised learning

Copyright 2013 by Robert Stengel. All rights reserved. For educational use only.
http://www.princeton.edu/~stengel/MAE345.html

Recurrent Networks
• Recursion to identify an unknown object
• The network is given a single, fixed input, and it iterates to a solution
• Convergence and stability of the network are critical issues
• A single network may have many stable states
• Classified outputs of the map
• Pattern recognition with noisy data

Hopfield Network
• Bipolar (-1, +1) inputs and outputs; dim(y) = n x 1
• Supervised training with perfect exemplar outputs
• A noisy measurement of an exemplar is the input to be identified
• Network operation: iterate to convergence

  z_s = y_s + n_s
  \hat{y}_0 = z_s
  \hat{y}_{k+1} = s(r_k) = s(W \hat{y}_k)
  \hat{y}_i^{k+1} = \begin{cases} 1, & r_i^k > 0 \\ \text{unchanged}, & r_i^k = 0 \\ -1, & r_i^k < 0 \end{cases}, \quad i = 1 \text{ to } n

Training a Hopfield Network
• No iterations are needed to define the weights
• Large number of weights
• Limited number of exemplars (< 0.15 n)
• Similar exemplars pose a problem
• Network training (a code sketch follows the figure below)
– Given M exemplars, y_s
– Each exemplar is a character represented by n pixels
– Batch calculation of the weighting matrix:

  W = \sum_{s=1}^{M} \left( y_s y_s^T - I_n \right) = \begin{bmatrix} y_1^2 - 1 & y_1 y_2 & \cdots \\ y_1 y_2 & y_2^2 - 1 & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}

– Example: n = 120 pixels and M = 8 exemplars give n^2 = 14,400 weights

Hopfield Network "Energy Landscape"
Figure: alternative plot of a 4-node network; an exemplar and a novel image appear as minima of the "energy landscape".
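The weight calculation and recall iteration above fit in a few lines of code. The following is a minimal sketch in Python with NumPy; the function names and the small two-exemplar example are illustrative assumptions, and the recall loop applies the slide's synchronous rule \hat{y}_{k+1} = s(W \hat{y}_k).

import numpy as np

def train_hopfield(Y):
    """Batch weight calculation: W = sum_s (y_s y_s^T - I_n).
    Y is n x M, one bipolar {-1, +1} exemplar per column."""
    n, M = Y.shape
    W = np.zeros((n, n))
    for s in range(M):
        y = Y[:, s:s + 1]                 # s-th exemplar as an n x 1 column
        W += y @ y.T - np.eye(n)
    return W

def recall(W, z, max_iter=100):
    """Iterate y_{k+1} = s(W y_k); elements with r_i = 0 are left unchanged."""
    y = z.copy()
    for _ in range(max_iter):
        r = W @ y
        y_new = np.where(r > 0, 1, np.where(r < 0, -1, y))
        if np.array_equal(y_new, y):      # stable state reached
            break
        y = y_new
    return y

# Toy example (sizes chosen for readability, not for the < 0.15 n rule):
# store two 8-pixel exemplars, then recall the first from a noisy copy.
Y = np.array([[ 1,  1,  1, -1, -1,  1, -1,  1],
              [-1, -1,  1,  1,  1, -1, -1,  1]]).T
W = train_hopfield(Y)
z = Y[:, [0]].copy()
z[0, 0] = -z[0, 0]                        # one flipped pixel: z_s = y_s + n_s
print(recall(W, z).ravel())               # recovers the first exemplar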
Adaptive Resonance Theory Network (Grossberg, Carpenter, 1976)
• Self-organizing and self-stabilizing network for binary pattern recognition (ART-1)
• Subjective evolution through "Fuzzy ART"
• Learns new patterns when it discerns sufficient mismatch from old patterns
• Long-term memory and short-term memory
• Stability and plasticity
• Unsupervised and supervised learning
• "Bottom-up" input (features) and "top-down" priming (categories)
• Precursor to "deep learning"

Adaptive Resonance Theory Network: Architecture
Figure: network architecture in which binary neurons represent pattern pixels; a recursive training example shows new templates being added.

Neural Networks with Unsupervised Learning

Self-Organizing Map (Kohonen, 1981)
• Competitive, unsupervised learning
• Premise: input signal patterns that are close produce outputs that are close
• Ordered inputs produce a spatial distribution, i.e., a map
• Cells of the map are likened to the cell structure of the cerebral cortex
• x: (n x 1) input vector that characterizes the features (attributes) of a signal
• m: (n x 1) weight vector of a cell that represents an output class

Competition in the Self-Organizing Map
• Competition is based on minimizing the distance from x to m_i

  \text{Cost} = \text{distance} = \left\| x - m_i \right\|, \quad \min \text{Cost} = \min_{m_i} \left\| x - m_i \right\|

• m encodes the output classes
• A semantic net decodes the output to identify the classes, e.g.,

  m_1 = \begin{bmatrix} 0 \\ 1 \\ 3 \end{bmatrix} \rightarrow \text{Class A}, \quad m_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \rightarrow \text{Class B}

Goal of the Self-Organizing Map
• Given: I output classes and an input training set, x_j, j = 1 to J
• Find: the cell weights, m_i, i = 1 to I, that best cluster the data (i.e., with minimum norm)
• Initialize the cell weights, m_i, randomly in the space of x

Training the Self-Organizing Map
• Define a neighborhood set within a radius of N_c around each cell, m_i
• Choose N_c to overlap with neighboring cells
• Find the best cell-weight match, m_best (i.e., the closest m_i), to the 1st training sample, x_1

Cell Weight Updates
• Update the cell weights for all cells in the neighborhood set, N_c, of m_best
• α_k = adaptation gain or learning rate

  m_i(k+1) = \begin{cases} m_i(k) + \alpha_k \left[ x_1 - m_i(k) \right], & m_i \in N_c \\ m_i(k), & m_i \notin N_c \end{cases}

• Repeat for x_2 to x_J and m_1 to m_I

Convergence of Cell Weights
• Repeat the entire process with the same update rule and a decreasing N_c radius until convergence occurs (see the sketch below)
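The competition, neighborhood update, and shrinking-radius schedule above combine into one short training loop. Here is a minimal sketch in Python with NumPy; the grid shape, epoch count, and linear decay schedules for α_k and the N_c radius are illustrative assumptions, since the lecture does not prescribe particular schedules.

import numpy as np

def train_som(X, grid_shape=(10, 10), n_epochs=20, alpha0=0.5, radius0=5.0, seed=0):
    """Fit SOM cell weights m_i to training samples X (J x n)."""
    rng = np.random.default_rng(seed)
    J, n = X.shape
    I = grid_shape[0] * grid_shape[1]
    # Initialize the cell weights m_i randomly in the space of x
    m = rng.uniform(X.min(axis=0), X.max(axis=0), size=(I, n))
    # Fixed 2-D grid coordinates of the cells define the neighborhood set
    coords = np.array([(r, c) for r in range(grid_shape[0])
                              for c in range(grid_shape[1])], dtype=float)
    k = 0
    for epoch in range(n_epochs):
        # Decrease the neighborhood radius N_c as training proceeds
        radius = 1.0 + radius0 * (1.0 - epoch / n_epochs)
        for x in X:
            alpha = alpha0 * (1.0 - k / (n_epochs * J))   # adaptation gain
            # Competition: m_best minimizes ||x - m_i||
            best = np.argmin(np.linalg.norm(m - x, axis=1))
            # Update only the cells in the neighborhood set of m_best
            in_Nc = np.linalg.norm(coords - coords[best], axis=1) <= radius
            m[in_Nc] += alpha * (x - m[in_Nc])
            k += 1
    return m

# Hypothetical usage: 2-D samples drawn around three cluster centers
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in [(0, 0), (1, 0), (0, 1)]])
m = train_som(X, grid_shape=(4, 4), n_epochs=10)

Note that each cell's grid coordinates are fixed and only its weight vector m_i moves; this is what makes nearby cells respond to nearby inputs.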
Semantic Map
• Representation of abstract or categorical information
• Contextual information is used to generate a map of symbols
• Dimensionality and the number of nearest neighbors affect the implied association
• 2 nearest neighbors: linear association
Figure: evolution of points on a line, which identify the locations of m_i (the random field of data points is not shown).

Choice of Neighborhood Set
• Representation of abstract or categorical information
• Contextual information is used to generate a map of symbols
• 4 nearest neighbors: polygonal association
Figure: evolution of grid points, which identify the locations of m_i (the random field of data points is not shown).

Minimum Spanning Tree
• Example of association identification
• Minimum spanning tree: smallest total edge length
Figure: hexagonal lattice of grid points, which identify the locations of m_i.

Semantic Identification
Figure: example of semantic identification.

Deep Learning (Hinton et al, 2006)
• Multi-layered network
• A single sigmoid layer ~ Restricted Boltzmann machine (RBM)
• Pre-train each layer separately and contextually (unsupervised); see the sketch after this list
• Fine-tune with backpropagation
• Semi-supervised learning
– Initial clustering
– Smoothness
– Manifold
• Overcomes the "vanishing gradient" problem in multi-layer backpropagation
• "Google Brain"
http://en.wikipedia.org/wiki/Deep_learning
http://deeplearning.net
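The slides identify each pre-trained sigmoid layer with a Restricted Boltzmann machine. The following is a minimal sketch of one step of contrastive divergence (CD-1), the unsupervised rule used by Hinton et al. (2006) for layer-by-layer pre-training; the function names, batching, and learning rate are illustrative assumptions.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cd1_step(W, b_vis, b_hid, v0, lr=0.1, rng=np.random.default_rng(0)):
    """One CD-1 update for a batch of binary visible vectors v0 (B x n_vis).
    W is n_vis x n_hid; b_vis and b_hid are the bias vectors."""
    # Up pass: hidden probabilities and a binary sample, given the data
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Down-up pass: reconstruct the visibles, then recompute the hiddens
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # Gradient estimate: data statistics minus reconstruction statistics
    B = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / B
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

# Hypothetical usage: 6 visible units, 3 hidden units, random binary data
rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((6, 3))
b_v, b_h = np.zeros(6), np.zeros(3)
v = (rng.random((20, 6)) < 0.5).astype(float)
for _ in range(100):
    W, b_v, b_h = cd1_step(W, b_v, b_h, v)

In the deep-learning recipe on the slide, the hidden activations of one trained RBM become the visible data for the next layer, and backpropagation then fine-tunes the whole stack.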
Next Time: Machine Learning and Expert Systems

Supplemental Material

Linear Vector Quantization
• Incorporation of supervised learning; classification of groups of outputs
• Type 1: addition of codebook vectors, m_c, with known meaning

  m_c(k+1) = \begin{cases} m_c(k) + \alpha_k \left[ x_k - m_c(k) \right], & \text{if classified correctly} \\ m_c(k) - \alpha_k \left[ x_k - m_c(k) \right], & \text{if classified incorrectly} \end{cases}

• Type 2: inhibition of the nearest neighbor whose class is known to be different, e.g., x belongs to the class of m_j but is closer to m_i (a sketch of both types follows)

  m_i(k+1) = m_i(k) - \alpha_k \left[ x_k - m_i(k) \right]
  m_j(k+1) = m_j(k) + \alpha_k \left[ x_k - m_j(k) \right]
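Both update types reduce to a few lines each. Here is a minimal sketch in Python with NumPy; the names are illustrative, and the Type 2 function assumes at least one codebook vector of the correct class exists.

import numpy as np

def lvq1_step(m, labels, x, x_class, alpha):
    """Type 1: attract the nearest codebook vector if its class
    matches that of x; repel it otherwise."""
    c = np.argmin(np.linalg.norm(m - x, axis=1))
    sign = 1.0 if labels[c] == x_class else -1.0
    m[c] += sign * alpha * (x - m[c])
    return m

def lvq2_step(m, labels, x, x_class, alpha):
    """Type 2: if the nearest cell m_i has the wrong class, repel it
    and attract the nearest correct-class cell m_j."""
    order = np.argsort(np.linalg.norm(m - x, axis=1))
    i = order[0]
    if labels[i] != x_class:
        j = next(c for c in order if labels[c] == x_class)
        m[i] -= alpha * (x - m[i])
        m[j] += alpha * (x - m[j])
    return m

# Hypothetical usage, echoing the Class A / Class B example above
m = np.array([[0., 1., 3.],     # m_1 -> Class A
              [1., 0., 1.]])    # m_2 -> Class B
labels = np.array(['A', 'B'])
m = lvq1_step(m, labels, np.array([0.2, 0.9, 2.5]), 'A', alpha=0.1)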
Adaptive Critic Proportional-Integral Neural Network Controller
Figure: block diagram with elements labeled Adaptation of Control Network, NN_C, Aircraft Model (Transition Matrices, State Prediction), Utility Function Derivatives, NN_A, x_a(t), a(t), and Optimality Condition.
