U of M PSY 5038 - Energy and attractor networks Graded response Hopfield net

Introduction to Neural Networks
U. Minn. Psy 5038
"Energy" and attractor networks
Graded response Hopfield net

Graded response Hopfield network

The model of the basic neural element

Hopfield's 1982 paper was strongly criticized for having an unrealistic model of the neuron. In 1984, he published another influential paper with an improved neural model. The model was intended to capture the fact that neural firing rate can be treated as a continuously valued response (recall that the frequency of firing can vary from 0 to about 500 Hz). The question was whether this more realistic model also shows well-behaved convergence properties. Earlier, in 1983, Cohen and Grossberg had published a paper establishing conditions for network convergence.

Previously, we derived an expression for the firing rate of the "leaky integrate-and-fire" model of the neuron. Hopfield adopted the basic elements of this model, together with the assumption of a non-linear sigmoidal output, represented below by an operational amplifier with non-linear transfer function g(). An operational amplifier (or "op amp") has a very high input impedance, so its key property is that it draws essentially no current.

Below is the electrical circuit corresponding to the model of a single neuron. The unit's input is a sum of currents (input voltages weighted by conductances Tij, corresponding to synaptic weights). Ii is a bias input current (often set to zero, depending on the problem). There is a capacitance Ci and a membrane resistance Ri that together characterize the leakiness of the neural membrane.

The basic neural circuit

Now imagine that we connect N of these model neurons to each other to form a completely connected network. As in the earlier discrete model, neurons are not connected to themselves, and the two conductances between any pair of neurons are the same. In other words, the weight matrix has a zero diagonal (Tii = 0) and is symmetric (Tij = Tji).
We follow Hopfield and let the output range continuously between -1 and 1 for the graded response network (rather than taking on the discrete values 0 or 1, as for the network in the previous lecture). The update rule is given by a set of differential equations over the network. The equations are determined by three basic laws of electricity: Kirchhoff's current rule (the sum of currents at a junction must be zero, i.e. the current in must equal the current out), Ohm's law (I = V/R), and the fact that the current across a capacitor is proportional to the rate of change of voltage (I = C du/dt). Conductance is the reciprocal of resistance (T = 1/R).

As we did earlier with the integrate-and-fire model, we write an expression representing the requirement that the total current into the op amp be zero, or equivalently that the sum of the incoming currents at the input to the op amp equals the sum of the currents leaving:

\sum_j T_{ij} V_j + I_i = C_i \frac{du_i}{dt} + \frac{u_i}{R_i}

With a little rearrangement, we have:

C_i \frac{du_i}{dt} = \sum_j T_{ij} V_j - \frac{u_i}{R_i} + I_i    (1)

V_i = g(u_i), \quad \text{i.e.} \quad u_i = g^{-1}(V_i)    (2)

The first equation is really just a slightly elaborated version of the "leaky integrate-and-fire" equation we studied in Lecture 3. We now note that the "current in" (s in Lecture 3) is the sum of the currents from all the inputs.

Proof of convergence

Here is where Hopfield's main contribution lies. All parameters (Ci, Tij, Ri, Ii, g) are fixed, and we want to know how the state vector Vi changes with time. We could imagine that the state vector might follow almost any trajectory depending on initial conditions, wandering arbitrarily around state space--but it doesn't. In fact, just as in the discrete case, the continuous Hopfield network converges to stable attractors. Suppose that at some time t we know Vi(t) for all units (i = 1 to N). Hopfield proved that the state vector migrates to points in state space whose values are constant, Vi -> Vi^s; in other words, to points where dVi/dt = 0. This is a steady-state solution to the above equations.
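These circuit equations are easy to integrate numerically. As a sketch (not part of the original notebook), here is a forward-Euler simulation in Python of a two-neuron graded network with Ci = Ri = 1 and Ii = 0, using the arctan sigmoid defined later in the notes; the weight matrix, step size, and initial state are illustrative assumptions:

```python
import numpy as np

def simulate(T, u0, dt=0.01, steps=2000):
    """Forward-Euler integration of C du_i/dt = sum_j T_ij g(u_j) - u_i/R + I_i,
    with C = R = 1 and I = 0 (equation 1 combined with V = g(u))."""
    a1, b1 = 2 / np.pi, np.pi * 1.4 / 2   # sigmoid parameters used in the notes
    g = lambda u: a1 * np.arctan(b1 * u)  # graded response V_i = g(u_i)
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u += dt * (T @ g(u) - u)          # du/dt = T.V - u
    return g(u)

# Symmetric weights with zero diagonal, as the convergence proof requires.
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])
V_final = simulate(T, u0=[0.2, -0.1])
print(V_final)  # the state settles at a point where dV/dt = 0
```

Because T is symmetric with zero diagonal, the trajectory converges rather than oscillating: with this positive mutual coupling, both outputs settle to a common steady-state value.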
This is an important result because it says that the network can be used to store memories. To prove it, Hopfield defined an energy function:

E = -\frac{1}{2}\sum_i \sum_j T_{ij} V_i V_j + \sum_i \frac{1}{R_i}\int_0^{V_i} g_i^{-1}(V)\,dV - \sum_i I_i V_i

The form of the sigmoidal non-linearity g() was taken to be an inverse tangent function (see below). The expression for E looks complicated, but we want to examine how E changes with time, and with a little manipulation we can obtain an expression for dE/dt that is easy to interpret. Let's get started. If we take the derivative of E with respect to time, then for symmetric T we have (using the product rule, and the chain rule for differentiation of composite functions):

\frac{dE}{dt} = -\sum_i \frac{dV_i}{dt}\left[\sum_j T_{ij} V_j - \frac{u_i}{R_i} + I_i\right]

where we've used equation 2 to replace g^{-1}(V_i) by u_i (after taking the derivative of \int_0^{V_i} g_i^{-1}(V)\,dV with respect to time). Substituting the expression from the first equation (1) for the expression between the brackets, we obtain:

\frac{dE}{dt} = -\sum_i C_i \frac{dV_i}{dt}\frac{du_i}{dt}

And now replace du_i/dt by taking the derivative of the inverse g function (again using the chain rule):

\frac{dE}{dt} = -\sum_i C_i \left[g^{-1}\right]'(V_i)\left(\frac{dV_i}{dt}\right)^2    (3)

Below there is an exercise in which you can show that the derivative of the inverse g function is always positive. Because the capacitance and (dVi/dt)^2 are also positive, and given the minus sign, the right-hand side of the equation can never be positive--energy never increases. Further, we can see that stable points, i.e. where dE/dt is zero, correspond to attractors in state space. Mathematically, we have:

\frac{dE}{dt} \le 0, \quad \text{and} \quad \frac{dE}{dt} = 0 \;\Rightarrow\; \frac{dV_i}{dt} = 0 \text{ for all } i

E is a Lyapunov function for the system of differential equations that describes the neural system whose neurons have graded responses.

Exercise: use the product rule and the chain rule from calculus to fill in the missing steps.

Simulation of a 2 neuron Hopfield network

Definitions

We will let the resistances and capacitances all be one, and the current input Ii be zero.
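The claim that E never increases along a trajectory can be checked numerically. Here is a small Python sketch (my illustration, not from the notebook) that integrates the two-neuron system and evaluates the energy at every Euler step; for the arctan sigmoid the inner integral has the closed form \int_0^V g^{-1}(v)\,dv = -(a/b)\ln\cos(V/a), and the weights and initial state below are illustrative assumptions:

```python
import numpy as np

a, b = 2 / np.pi, np.pi * 1.4 / 2
g = lambda u: a * np.arctan(b * u)

def energy(V, T, I, R=1.0):
    # E = -1/2 V.T.V + (1/R) sum_i int_0^{V_i} g^-1(v) dv - I.V
    # For g(u) = a*arctan(b*u): int_0^V g^-1(v) dv = -(a/b) * log(cos(V/a))
    integral = -(a / b) * np.log(np.cos(V / a))
    return -0.5 * V @ T @ V + integral.sum() / R - I @ V

T = np.array([[0.0, 1.0],
              [1.0, 0.0]])
I = np.zeros(2)
u = np.array([0.9, -0.7])
dt, energies = 0.005, []
for _ in range(4000):
    V = g(u)
    energies.append(energy(V, T, I))
    u += dt * (T @ V - u + I)   # equation (1) with C = R = 1

# dE/dt <= 0: successive energy values should never increase
# (up to tiny forward-Euler discretization error).
print(max(np.diff(energies)))
```

The printed maximum step-to-step change is non-positive to numerical precision, matching equation (3): since [g^{-1}]'(V) > 0, every term of dE/dt is less than or equal to zero.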
Define the sigmoid function g[] and its inverse, inverseg[]:

In[10]:= a1 := (2/Pi); b1 := Pi 1.4 / 2;
         g[x_] := N[a1 ArcTan[b1 x]];
         inverseg[x_] := N[(1/b1) Tan[x/a1]];

Although it is straightforward to compute the inverse of g() by hand, do it using the Solve[] function in Mathematica:

In[13]:= Solve[a ArcTan[b y] == x, y]

Out[13]= {{y -> Tan[x/a]/b}}

As we saw in earlier lectures, dividing the argument of a squashing function such as g[] by a small number makes the sigmoid more like a step or threshold function. This provides a bridge between the discrete (two-state) Hopfield network and the continuous Hopfield network:

In[14]:= Manipulate[Plot[g[1/b2 x], {x, -Pi, Pi}, ImageSize -> Tiny, Axes -> False], {{b2, 1},
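The notebook's g[] and inverseg[] translate directly into Python, which makes it easy to verify the Solve[] result y = Tan[x/a]/b and the round-trip property inverseg(g(x)) = x. This is a translation sketch, not part of the original notes; the sample points are arbitrary:

```python
import math

a1, b1 = 2 / math.pi, math.pi * 1.4 / 2

def g(x):
    # sigmoid: squashes any real x into (-1, 1)
    return a1 * math.atan(b1 * x)

def inverseg(x):
    # inverse of g, matching Solve[a ArcTan[b y] == x, y] -> y = Tan[x/a]/b
    return (1 / b1) * math.tan(x / a1)

print(inverseg(g(0.37)))          # recovers 0.37

# Dividing the argument by a small b2 sharpens g toward a step function,
# bridging the continuous network back to the discrete two-state one.
for b2 in (1, 0.1, 0.01):
    print(round(g(0.5 / b2), 3))  # approaches the saturation value 1
```

As b2 shrinks, g(x/b2) saturates for any fixed x != 0, which is the Python analogue of the Manipulate[] demonstration above.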

