Introduction to Neural Networks
U. Minn. Psy 5038
Daniel Kersten

Introduction

Last time

Modeled aspects of spatial lateral inhibition in the visual system:

Simple linear network: intensity -> spatial filter -> response
intensity and response were represented as vectors
the spatial filter was represented as a matrix

If the spatial filter is a neural network, then the elements of the matrix can be interpreted as strengths of synaptic connections. The idea is to represent the synaptic weights for the ith output neuron by the values in the ith row of the matrix. The matrix of weights is sometimes called a connection matrix, because the non-zero weights sit between connected neurons, and neurons that aren't connected get fixed zero weights.

This model is what we called a linear feedforward network, and it used the generic neuron model. We generalized the linear discrete model to a linear continuous one with feedback. Although more complex, its steady-state solution is identical to that of the linear feedforward model. We studied the continuous system by approximating it as a discrete-time system with ε = Δt as a free parameter.

Today

There is a large body of mathematical results on linear algebra and matrices, and it is worth our while to spend some time going over some of the basics of matrix manipulation. We will first review matrix arithmetic (addition and multiplication). We will then review the analog of division, namely finding the inverse of a matrix--something that was used in Lecture 5 to show that the steady-state solution of the feedback model of lateral inhibition is equivalent to a feedforward model with the appropriate weights.

You may wonder at times how all this is used in neural modeling. But as this course goes on, we will see how otherwise obscure notions like the "outer product" of two vectors, or the "eigenvectors" of a matrix, are meaningful for neural networks. For example, the outer product of two vectors can be used in modeling learning, and the eigenvectors corresponding to a "matrix of memories" can represent stored prototypes and support dimensionality reduction. (Short sketches of the inverse and the outer product appear at the end of these notes.)

Basic matrix arithmetic

A short motivation and preview

In Lecture 4, we developed the generic neuron model. If we leave out the non-linear squashing function and noise, we have a linear neural network:

    y_i = ∑_j w_i,j x_j

In Lecture 5, we noted that the feedforward model of lateral inhibition does a good job of characterizing the neural response of visual receptors in the Limulus. We implemented the feedforward linear model of lateral inhibition by taking the dot product of a weight vector with the input vector at one location, then shifting the weight vector over, taking a new dot product, and so forth.

But this operation can be expressed as a matrix, W, times a vector, e, using the linear model of a neural network:

    y_i = ∑_j W_i,j e_j

W.e is shorthand for ∑_j W_i,j e_j.
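To see this shorthand in action before building the full lateral-inhibition matrix, here is a minimal sketch with made-up numbers (the 2x3 matrix Wtoy and the vector xtoy are illustrations, not lecture values). Each element of W.x is the dot product of one row of W with x, which is exactly the "ith row holds the weights of the ith output neuron" picture above:

(* toy weights and input, chosen only for illustration *)
Wtoy = {{1, 2, 3}, {4, 5, 6}};
xtoy = {1, 0, -1};

Wtoy.xtoy                                             (* matrix-vector product: {-2, -2} *)
Table[Sum[Wtoy[[i, j]] xtoy[[j]], {j, 3}], {i, 2}]    (* explicit sum: also {-2, -2} *)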
To illustrate, define e as we did in the previous lecture, but over a shorter width (so the output display isn't too big):

In[1]:= width = 24; low = 0.2; hi = .8; lowx = 12; hix = 16;
        e = Table[Piecewise[{{low, x < lowx},
              {((hi - low)/(hix - lowx)) x - ((hi - low) lowx)/(hix - lowx) + low,
               x >= lowx && x < hix},
              {hi, x >= hix}}], {x, 1, width}];

In[2]:= e // MatrixForm

Out[2]//MatrixForm= (a 24-element column)
{0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.35, 0.5, 0.65, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8}

Take the same weights we used in the previous lecture and make a 24x24 matrix in which the first row is -1, -1, 6, -1, -1, 0, 0, ..., and each row below is the row above shifted one place to the right (the last two rows wrap around because of the rotation):

In[3]:= W = RotateLeft[ToeplitzMatrix[{6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}], 2]

Out[3]= {{-1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1, -1},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6, -1},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, 6},
 {6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 {-1, 6, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}

(Ignore the ToeplitzMatrix[] function for now. We use it as a "one-liner" to fill up the matrix with the appropriate weights.)

In[5]:= ListPlot[{e, W.e}, PlotJoined -> True]

Out[5]= [joined-line plot of e and W.e versus position; horizontal axis ticks 5-20, vertical axis ticks 0.5-3.0]

Definition of a matrix: a list of scalar lists

Traditional and Standard form output format defaults.

Defining arrays or lists of function outputs using indices

As we've already seen, Table[] can be used to generate a matrix, or list of lists. E.g.

In[8]:= H = Table[i^2 + j^2, {i, 1, …
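The text above cuts off mid-expression, and the original index bounds are lost. For completeness, here is a small sketch of the same kind of Table construction; the 4x4 bounds are an assumption for illustration, not the lecture's:

(* assumed 4x4 bounds; the original index limits are truncated above *)
H = Table[i^2 + j^2, {i, 1, 4}, {j, 1, 4}];
H // MatrixForm
(* -> {{2, 5, 10, 17}, {5, 8, 13, 20}, {10, 13, 18, 25}, {17, 20, 25, 32}} *)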

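Two of the notions flagged in the Today section can be sketched briefly. First, the inverse, the matrix analog of division used in Lecture 5: schematically, if the feedback model's steady state satisfies y = e + W.y, rearranging gives y = Inverse[IdentityMatrix[n] - W].e, i.e., a feedforward model whose weight matrix is that inverse. A minimal numerical sketch (the 2x2 matrix is made up; any nonsingular matrix works):

(* made-up 2x2 matrix, for illustration only *)
A = {{2, 1}, {1, 1}};
Inverse[A]        (* -> {{1, -1}, {-1, 2}} *)
Inverse[A].A      (* -> {{1, 0}, {0, 1}}, the identity matrix *)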

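Second, the outer product mentioned above as a tool for modeling learning. A quick sketch of Mathematica's Outer[] with made-up activity vectors: the result is a matrix whose [[i, j]] element is the product of the ith element of one vector with the jth element of the other, the shape a Hebbian-style weight update takes.

(* made-up pre- and post-synaptic activity vectors, for illustration only *)
pre = {1, 0, -1};
post = {2, 3};
Outer[Times, post, pre]   (* -> {{2, 0, -2}, {3, 0, -3}} *)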