A Learning Rule for the Emergence of Stable Dynamics and Timing in Recurrent Networks

Dean V. Buonomano

Departments of Neurobiology and Psychology and Brain Research Institute, University of California, Los Angeles, Los Angeles, California

Submitted 6 December 2004; accepted in final form 18 May 2005

Buonomano, Dean V. A learning rule for the emergence of stable dynamics and timing in recurrent networks. J Neurophysiol 94: 2275–2283, 2005; doi:10.1152/jn.01250.2004. Neural dynamics within recurrent cortical networks is an important component of neural processing. However, the learning rules that allow networks composed of hundreds or thousands of recurrently connected neurons to develop stable dynamical states are poorly understood. Here I use a neural network model to examine the emergence of stable dynamical states within recurrent networks. I describe a learning rule that can both account for the development of stable dynamics and guide networks to states that have been observed experimentally, specifically, states that instantiate a sparse code for time. Across trials, each neuron fires during a specific time window; by connecting the neurons to a hypothetical set of output units, it is possible to generate arbitrary spatial-temporal output patterns. Intertrial jitter of the spike time of a given neuron increases as a direct function of the delay at which it fires. These results establish a learning rule by which cortical networks can potentially process temporal information in a self-organizing manner, in the absence of specialized timing mechanisms.

INTRODUCTION

Local dynamics within cortical networks play a fundamental role in neural computations (Koch and Fuster 1989; Ringach et al. 1997; Somers et al. 1995). Dynamics in the form of complex spatial-temporal patterns of neuronal firing have been observed in vitro (Beggs and Plenz 2004; Buonomano 2003; Ikegaya et al. 2004; Jimbo et al. 1999) and in vivo, and have been shown to contain information about sensory stimuli (Gawne et al. 1996; Laurent et al. 1996) and motor behavior (Hahnloser et al. 2002; Wessberg et al. 2000). While it has been established that networks exhibit complex spatial-temporal dynamics, the synaptic learning rules that allow recurrent networks to develop functional and stable dynamics are not known. Indeed, conventional coincidence-based learning rules are unstable (Miller and MacKay 1994; Turrigiano and Nelson 2004) and can lead to runaway excitation in recurrent networks.

A potential computational function of spatial-temporal patterns of activity is temporal processing and motor control. Decoding temporal information, or generating timed responses on the order of tens to hundreds of milliseconds, is a fundamental component of many sensory and motor tasks (Mauk and Buonomano 2004). Indeed, the precise sequential generation of motor responses is a virtually ubiquitous component of behavior. One of the most studied forms of complex sensory-motor processing is the birdsong system (Bottjer and Arnold 1997; Doupe and Kuhl 1999). Song generation relies on precisely timed sequential generation of motor patterns over both the time scale of individual syllable features and sequences of syllables (Fee et al. 2004). While relatively little is known about the neural mechanisms underlying the generation of precisely timed motor sequences, it has recently been shown that there is a sparse code for time in the premotor area HVc, which may control song production (Hahnloser et al. 2002). Dynamically changing patterns of activity have also been proposed to code for time in the cerebellum and to underlie certain motor patterns (Medina et al. 2000).

Sparse, long-lasting responses have also been observed in vitro. As shown in Fig. 1, in cortical organotypic cultures a single stimulus can elicit single spikes at latencies of a few hundred milliseconds (Buonomano 2003). These slices contain thousands of recurrently connected neurons and initially exhibit weak synaptic connections (Echevarría and Albus 2000; Muller et al. 1993). In this study, I examined how dynamics may emerge in this general class of networks.

It has been shown that a modified form of synaptic scaling can guide networks to a stable dynamical state. Interestingly, the dynamical states that emerge from this learning rule allow the network to generate a sparse temporal code for time. Qualitatively, the architecture of the model and the dynamical states observed are consistent with experimental data from cortical organotypic slices (Fig. 1) (Buonomano 2003).
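As illustrative context only, the sketch below shows the generic form of multiplicative synaptic scaling, in which each unit scales its excitatory input weights toward an activity set point (cf. Turrigiano and Nelson 2004). It is not the modified rule developed in this study; the function, parameter names, and values are assumptions made for illustration.

```python
import numpy as np

def synaptic_scaling_step(w_exc, avg_rate, target_rate, eta=0.01):
    """One multiplicative synaptic-scaling update (generic illustration).

    w_exc       : (n_post, n_pre) excitatory weight matrix
    avg_rate    : (n_post,) running-average activity of each postsynaptic unit
    target_rate : (n_post,) activity set point for each unit
    eta         : scaling rate (assumed value)

    Each postsynaptic unit multiplicatively scales all of its excitatory
    inputs toward its set point, which counters runaway excitation while
    preserving the relative strengths of its inputs.
    """
    error = (target_rate - avg_rate) / np.maximum(target_rate, 1e-9)
    scale = 1.0 + eta * error             # >1 if under-active, <1 if over-active
    return w_exc * scale[:, np.newaxis]   # scale each postsynaptic unit's row
```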
METHODS

All simulations were performed with NEURON (Hines and Carnevale 1997). Each neuron was simulated as a single-compartment integrate-and-fire unit with dynamic synapses. The ratio of excitatory (Ex) to inhibitory (Inh) neurons and the connectivity probability were based on experimental data (Beaulieu et al. 1992). Specifically, 80% of the units were excitatory and 20% inhibitory. In our default simulations there were 320 Ex and 80 Inh units (each Ex unit received 20 excitatory and 5 inhibitory synapses, and each Inh unit received 5 excitatory synapses). Connectivity was uniformly random.

Integrate-and-fire units

The resting membrane potential of all units was -60 mV. Thresholds were set from a normal distribution (σ² = 5% of the mean threshold); the mean thresholds for the Ex and Inh units were -40 and -45 mV, respectively. After a spike, the voltage was reset to -60 and -65 mV for the Ex and Inh units, respectively. Membrane time constants were 30 ms for the Ex units and 10 ms for the Inh units. Input resistance was 300 MΩ. See the supplementary information for further details.

Synapses

α-Amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA), N-methyl-D-aspartate (NMDA), and GABA_A synaptic currents were simulated using a kinetic model (Buonomano 2000; Destexhe et al. 1994; Karmarkar and Buonomano 2002) (supplemental information). Short-term plasticity was incorporated in all the synapses based on experimental data and implemented
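A minimal sketch, assuming only the connectivity figures stated under METHODS (320 Ex and 80 Inh units; 20 Ex and 5 Inh inputs per Ex unit; 5 Ex inputs per Inh unit; uniformly random partner selection), of how such a wiring diagram could be drawn. Function and variable names are illustrative, and the exclusion of self-connections is an assumption.

```python
import numpy as np

def build_connectivity(n_ex=320, n_inh=80, seed=0):
    """Draw uniformly random presynaptic partners with the stated fan-in:
    each Ex unit gets 20 Ex and 5 Inh inputs; each Inh unit gets 5 Ex inputs.
    Self-connections are excluded here (an assumption)."""
    rng = np.random.default_rng(seed)
    ex_ids = np.arange(n_ex)                  # units 0..n_ex-1 are excitatory
    inh_ids = np.arange(n_ex, n_ex + n_inh)   # remaining units are inhibitory

    connections = []                          # list of (pre, post) pairs
    for post in ex_ids:
        pre_ex = rng.choice(np.setdiff1d(ex_ids, [post]), size=20, replace=False)
        pre_inh = rng.choice(inh_ids, size=5, replace=False)
        connections.extend((int(pre), int(post))
                           for pre in np.concatenate([pre_ex, pre_inh]))
    for post in inh_ids:
        pre_ex = rng.choice(ex_ids, size=5, replace=False)
        connections.extend((int(pre), int(post)) for pre in pre_ex)
    return connections

# Example: 320 * 25 + 80 * 5 = 8,400 synapses in the default network
synapses = build_connectivity()
```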


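The integrate-and-fire parameters listed above map onto a standard leaky integrate-and-fire update. The sketch below uses a simple forward-Euler step; the integration scheme, time step, and the interpretation of the threshold variance are assumptions, and the published model was implemented in NEURON rather than with this code.

```python
import numpy as np

def lif_step(v, i_syn, tau_m, threshold, v_reset, v_rest=-60.0, r_in=300.0, dt=0.1):
    """One forward-Euler step of a leaky integrate-and-fire unit (sketch).

    v         : membrane potential(s), mV
    i_syn     : synaptic current, nA (300 MOhm * 1 nA = 300 mV of drive)
    tau_m     : membrane time constant, ms (30 for Ex, 10 for Inh units)
    threshold : spike threshold, mV (drawn per unit; see below)
    v_reset   : post-spike reset, mV (-60 for Ex, -65 for Inh units)
    r_in      : input resistance, MOhm (300 in the model)
    dt        : integration step, ms (assumed value)
    """
    v = v + dt * (-(v - v_rest) + r_in * i_syn) / tau_m
    spiked = v >= threshold
    v = np.where(spiked, v_reset, v)
    return v, spiked

# Per-unit thresholds for the 320 Ex units; reading "sigma^2 = 5% of the mean
# threshold" as a variance of 0.05 * |mean| mV^2 is an assumption.
rng = np.random.default_rng(1)
thr_ex = rng.normal(-40.0, np.sqrt(0.05 * 40.0), size=320)
```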

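The kinetic synapse model cited above (Destexhe et al. 1994) treats the fraction of open receptors as a two-state binding scheme driven by a brief transmitter pulse. The sketch below shows that general scheme; the rate constants, reversal potentials, and conductances are placeholder assumptions, and the NMDA voltage dependence and the short-term plasticity used in the model are omitted.

```python
def receptor_step(r, transmitter_on, alpha, beta, dt=0.1):
    """Forward-Euler step of the two-state kinetic receptor model
    dr/dt = alpha * [T] * (1 - r) - beta * r  (Destexhe et al. 1994),
    with the transmitter concentration [T] modeled as a brief rectangular
    pulse (1 while transmitter is present, 0 otherwise)."""
    T = 1.0 if transmitter_on else 0.0
    return r + dt * (alpha * T * (1.0 - r) - beta * r)

def synaptic_current(r, v, g_max, e_rev):
    """I_syn = g_max * r * (v - e_rev). Reversal potentials of ~0 mV for
    AMPA/NMDA and ~-70 mV for GABA_A are standard textbook values, not
    parameters taken from this paper."""
    return g_max * r * (v - e_rev)
```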