UT PSY 301 - An attractor network model of serial recall
Cognitive Systems Research 3 (2002) 45–55
www.elsevier.com/locate/cogsys

An attractor network model of serial recall

Action editors: Wayne Gray and Christian Schunn

Matt Jones*, Thad A. Polk
Department of Psychology, Cognition and Perception, University of Michigan, 525 East University, Ann Arbor, MI 48109, USA

Received 1 March 2001; accepted 1 September 2001

Abstract

We present a neural network model of verbal working memory which attempts to illustrate how a few simple assumptions about neural computation can shed light on cognitive phenomena associated with the serial recall of verbal material. We assume that neural representations are distributed, that neural connectivity is massively recurrent, and that synaptic efficacy is modified based on the correlation between pre- and post-synaptic activity (Hebbian learning). Together these assumptions give rise to emergent computational properties that are relevant to working memory, including short-term maintenance of information, time-based decay, and similarity-based interference. We instantiate these principles in a specific model of serial recall and show how it can both simulate and explain a number of standard cognitive phenomena associated with the task, including the effects of serial position, word length, articulatory suppression (and its interaction with word length), and phonological similarity. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Verbal working memory; Serial recall; Learning & Memory; Attractor network; Recurrent neural network; Phonological similarity; Primacy; Recency; Articulatory suppression; Word length
*Corresponding author. E-mail address: [email protected] (M. Jones).
1389-0417/02/$ – see front matter © 2002 Elsevier Science B.V. All rights reserved. PII: S1389-0417(01)00043-2

1. Introduction

Working memory is among the most intensively studied cognitive processes in both cognitive psychology and neuroscience, and yet results from the two fields have not made as much contact with each other as one might hope. For example, cognitive psychology has discovered a host of robust empirical phenomena associated with verbal working memory and has developed elegant theoretical models, such as Baddeley's phonological loop, that can explain the empirical results (Baddeley, 1986). Nevertheless, the details of how these psychological hypotheses are instantiated in the brain is an open question (but see Burgess & Hitch, 1999, for one recent proposal). Similarly, there is a substantial body of neuroscientific research investigating the neural substrates of working memory in both animals (Fuster, 1973; Funahashi, Bruce & Goldman-Rakic, 1989) and humans (Smith & Jonides, 1999), but this work has typically only addressed a small subset of the rich behavioral data and theories available in cognitive psychology.

In this paper, we attempt to illustrate that a simple and independently motivated model of neural computation can make contact with, and even shed light on, the cognitive psychology of verbal working memory. We begin by describing a few widely accepted assumptions about neural computation. Next, we discuss some of the emergent computational properties of these assumptions that are relevant to verbal working memory (e.g., maintenance, decay, interference). We then illustrate how these assumptions can be instantiated in a specific computational model that simulates and explains many of the major psychological phenomena associated with the serial recall task.

2. A simple model of neural computation

We begin with three simple and widely accepted assumptions about neural computation. The first is that representations in the cortex are generally distributed across a population of neurons, rather than being localized to individual cells. The second is that there is massive connectivity among neurons within local areas of cortex and that this connectivity is recurrent rather than unidirectional. The third assumption is that synaptic efficacy is modified based on the correlation between pre- and post-synaptic activity (Hebbian learning).

Taken together, these assumptions give rise to networks with interesting emergent properties, many of which are relevant to working memory. For example, such networks are known to be capable of maintaining an activation pattern via internal reverberatory activity even after the input to the network has been removed (Hopfield, 1982). Those patterns which the network can maintain in this way are termed attractors (and hence the networks themselves are known as attractor networks), and under the Hebbian learning rule they tend to emerge as those patterns to which the network is repeatedly exposed. Furthermore, when presented with a noisy or incomplete version of a previously trained pattern, an attractor network will tend to converge its activity upon that attractor state which is most similar to the input, thereby retrieving the original pattern. Another […] retrieve an incorrect attractor pattern, but one which tends to be similar to the correct one.

Finally, we have also found that attractor networks can be easily extended to exhibit time-based decay. In the original formulation of attractor networks, each unit was binary (either ON or OFF) and activation patterns could be maintained for indefinite periods of time (Hopfield, 1982). Hopfield (1984) subsequently showed that networks using more realistic continuous-valued units could also exhibit similar computational properties. Our investigations have shown that by appropriately increasing the threshold of the individual units, continuous-valued attractor networks can be made to exhibit time-based decay once external input is removed.

3. The serial recall task

In the standard serial recall task a subject is presented, either visually or auditorially, with a sequence of items, most often words, letters, or digits. Once presentation of the list has been completed, the task of the subject is to repeat back the list in its original order, either by speaking or by writing.

This task has been intensively studied and a large number of robust behavioral phenomena have been identified. Below are some of the major phenomena which we will address in this paper. For a more thorough review of the literature see Gathercole (1997).

3.1. Serial position

The effects of an item's position within the presented list are generally described as two separate phenomena (see, e.g., Crowder, 1972):

Primacy. Items from the start of the list tend to […]
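The excerpt describes maintenance and pattern completion only in prose. The sketch below is a minimal binary Hopfield-style (Hopfield, 1982) network illustrating those two emergent properties, not the authors' actual serial-recall model; the unit count, pattern count, and noise level are illustrative assumptions. Hebbian outer-product learning stores distributed ±1 patterns in the recurrent weights, and repeated updates then complete a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hebbian(patterns):
    # Hebbian (outer-product) rule: units that are co-active in a stored
    # pattern strengthen their mutual connections; no self-connections.
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)
    return W

def settle(W, state, steps=20):
    # Recurrent updates drive the state toward the nearest stored attractor.
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

n_units = 100
patterns = rng.choice([-1, 1], size=(3, n_units))  # three distributed codes
W = train_hebbian(patterns)

# Corrupt 20 of the 100 units in the first pattern, then let the network settle.
noisy = patterns[0].copy()
noisy[rng.choice(n_units, size=20, replace=False)] *= -1

recovered = settle(W, noisy)
print(np.mean(recovered == patterns[0]))  # fraction of units restored
```

At this low memory load the corrupted pattern is cleaned up almost perfectly; as more (or more similar) patterns are stored, settling can land on a wrong but similar attractor, which is the similarity-based interference the paper appeals to.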
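The paper's claim that raising the unit threshold produces time-based decay can likewise be illustrated with continuous-valued (Hopfield, 1984) style units. The gain, threshold values, and pattern size below are arbitrary choices for this sketch, not parameters reported by the authors: a stored pattern is driven by external input for one step, the input is then removed, and reverberatory activity either sustains the pattern (low threshold) or lets it decay away over several time steps (raised threshold).

```python
import numpy as np

def run_network(theta, gain=6.0, steps=8):
    # Drive a continuous-valued attractor network with a pattern, remove the
    # input, and track the mean activity of the pattern's units over time.
    n, k = 100, 20
    pattern = np.zeros(n)
    pattern[:k] = 1.0                     # a distributed binary pattern
    W = np.outer(pattern, pattern) / k    # normalized Hebbian weights
    np.fill_diagonal(W, 0)

    a = np.zeros(n)
    history = []
    for t in range(steps):
        external = pattern if t == 0 else 0.0   # input only on the first step
        net = W @ a + external
        a = 1.0 / (1.0 + np.exp(-gain * (net - theta)))  # thresholded sigmoid
        history.append(a[:k].mean())
    return history

low = run_network(theta=0.3)    # low threshold: reverberation sustains the pattern
high = run_network(theta=0.8)   # raised threshold: activity gradually decays
print([round(x, 2) for x in low])
print([round(x, 2) for x in high])
```

The decay is gradual rather than instantaneous because recurrent input shrinks a little on each step once it falls below the threshold, which is the kind of time-based forgetting the model needs for word-length and suppression effects.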

