A Critical Look at the Mechanisms Underlying Implicit Sequence Learning

Todd M. Gureckis ([email protected])
Bradley C. Love ([email protected])
Department of Psychology; The University of Texas at Austin
Austin, TX 78712 USA

Abstract

In this report, a model of human sequence learning called the linear associative shift-register (LASR) is developed. LASR uses a simple error-driven associative learning rule to incrementally acquire information about the structure of event sequences. In contrast to recent modeling approaches to implicit sequence learning, LASR describes learning as a simple and limited process. We argue that this simplicity is a virtue in that the complexity of the model is better matched to the demonstrated complexity of human processing. The model is applied in a variety of situations, including implicit learning via the serial reaction time (SRT) task and statistical word learning. The results of these simulations highlight commonalities between different tasks and learning modalities, suggesting similar underlying learning mechanisms.

Introduction

One of the most striking aspects of human behavior is the ease with which we can acquire new skills with little conscious effort. In order to better understand this phenomenon, a large literature has developed exploring the ability of participants to implicitly learn about the sequential structure of a series of events (see Cleeremans, Destrebecqz, & Boyer, 1998, for a review). However, the types of memory and learning mechanisms that might support such learning are not well understood (see Keele, Ivry, Mayr, Hazeltine, & Heuer, 2004, or Sun, Slusarz, & Terry, 2005, for some recent proposals).

In this paper, we develop a simple model of sequence learning behavior called the linear associative shift-register (LASR). The model differs from past approaches in that it describes implicit sequence learning as a simple and limited process that operates on a small temporary buffer of past events. This contrasts with other models of sequence learning, which have described learning as a more complex and flexible process (Cleeremans & McClelland, 1991; Cleeremans, 1993; Sun et al., 2005).

There are two main goals of this report. First, we explore the ability of this simple model to account for sequential learning phenomena in a variety of implicit learning situations, including the serial reaction time (SRT) task and statistical word learning paradigms. LASR provides a similar account of the type of processing that underlies performance in both kinds of tasks, suggesting that they may rely on similar underlying mechanisms.

Second, we demonstrate how a very simple learning mechanism such as LASR can provide a detailed account of a number of findings from the implicit sequence learning literature. A key criticism we develop is that in previous modeling accounts (such as the simple recurrent network (SRN) of Cleeremans, 1993), the complexity of the model is not well matched to the demonstrated complexity of the learner. While LASR cannot explain all aspects of our rich sequential behavior, we believe the model provides a unique baseline against which to test more complex theories and experiments.

We begin by introducing the LASR model and the principles upon which it is based. Next, we consider a study conducted by Lee (1997) assessing implicit learning of sequentially structured material. Finally, we explore the ability of LASR to account for statistical word learning in infants as reported by Saffran, Aslin, and Newport (1996).

The Linear Associative Shift-Register (LASR) Model

LASR is a mechanistic model of implicit sequence learning. The model describes implicit sequence learning as the task of appreciating the associative relationship between past events and future ones. LASR assumes that subjects maintain a limited memory for the sequential order of past events and that they use a simple error-driven associative learning rule (Widrow & Hoff, 1960; Rescorla & Wagner, 1972) to incrementally acquire information about sequential structure. Despite its simplicity, the model can very quickly learn to appreciate rather complex dependencies between events that are structured in time. The model is organized around three principles:

1. Past events are stored in a temporary buffer. The model begins by assuming a simple shift-register memory for past events. Individual elements of the register are referred to as slots. New events encountered in time are inserted at one end of the register and all past events are accordingly shifted one time slot. Thus, the most recent event is always located in the right-most slot of the register (see Figure 1). This form of memory maintains the sequential order of recent events using spatial position (see Sejnowski and Rosenberg (1987) or Cleeremans's (1993) buffer network for similar approaches).
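As a concrete illustration of this buffer, consider the following minimal Python sketch. The class name ShiftRegister, the one-hot coding of events, and the parameter names are our own illustrative choices, not taken from the paper.

    from collections import deque

    class ShiftRegister:
        """Principle 1: a fixed-length buffer of past events.

        New events enter at the right-most slot; older events shift
        left and eventually fall out of the register entirely.
        """

        def __init__(self, n_slots, n_events):
            self.n_slots = n_slots
            self.n_events = n_events
            # Each slot holds a one-hot vector over possible event
            # outcomes (an illustrative encoding assumption).
            self.slots = deque([[0.0] * n_events for _ in range(n_slots)],
                               maxlen=n_slots)

        def push(self, event):
            """Insert a new event; all past events shift one slot."""
            one_hot = [0.0] * self.n_events
            one_hot[event] = 1.0
            self.slots.append(one_hot)  # deque drops the oldest slot

Because the deque has a fixed maximum length, appending a new event automatically discards the oldest slot, mirroring how events beyond the buffer's capacity are forgotten.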
2. Learning to predict what comes next. This simple memory mechanism forms the basis of a detector (see Figure 1). A detector is a simple, single-layer linear network or perceptron (Rosenblatt, 1958) that learns to predict the occurrence of a single future event based on past events. Because each detector predicts only a single event, a separate detector is needed for each possible event. Each detector has a weight from each event outcome at each time slot. On each trial, activation from each memory-register slot is passed over a connection weight and summed to compute the activation of the detector's prediction unit. The task of a detector is to adjust the weights from individual memory slots so that it can successfully predict the future occurrence of its response. Each detector learns to strengthen the connection weights for memory slots that prove predictive of the detector's response, while weakening those that are not predictive or are counter-predictive.

3. Recent events have more influence on learning than past events. The model assumes that events in the recent past are remembered better than events that happened long ago. This effect is implemented by attenuating the activation strength of each register position by how far back in time the event occurred. Because of this, an event that happened at time t-1 has more influence on future predictions than events that happened at t-2, t-3, and so on. Similarly, learning is slower for slots that are positioned further in the past because their activation strength is reduced (see Equation 3).

Model Formalism

The following section describes the mathematical formalism of the model.
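As a rough illustration of Principles 2 and 3, the sketch below continues the ShiftRegister example: each Detector is a single-layer linear unit with one weight per (slot, outcome) pair, trained with the delta rule (Widrow & Hoff, 1960) that the model cites, and slot inputs are attenuated with distance into the past, which also slows learning for older slots. The exponential attenuation function and the parameter values are illustrative assumptions on our part, not the paper's Equation 3.

    class Detector:
        """Principle 2: a single-event detector.

        A single-layer linear unit (perceptron) with one weight per
        (slot, outcome) pair, trained with the delta rule (Widrow &
        Hoff, 1960) to predict the next occurrence of its event.
        """

        def __init__(self, n_slots, n_events, lr=0.1, decay=0.5):
            self.n_slots = n_slots
            self.lr = lr        # learning rate (assumed value)
            self.decay = decay  # per-step attenuation (assumed form)
            self.weights = [[0.0] * n_events for _ in range(n_slots)]

        def _attenuation(self, slot):
            # Principle 3: the further back an event sits in the
            # register, the weaker its contribution. Exponential
            # decay is assumed here; the paper's Equation 3 may differ.
            return self.decay ** (self.n_slots - 1 - slot)

        def activation(self, register):
            """Recency-weighted sum of inputs from every slot."""
            total = 0.0
            for s, slot_vec in enumerate(register.slots):
                a = self._attenuation(s)
                for e, x in enumerate(slot_vec):
                    total += self.weights[s][e] * x * a
            return total

        def learn(self, register, target):
            """Delta-rule update: strengthen weights from slots that
            predict this detector's event; weaken counter-predictive
            ones. Attenuated inputs make older slots learn slowly."""
            error = target - self.activation(register)
            for s, slot_vec in enumerate(register.slots):
                a = self._attenuation(s)
                for e, x in enumerate(slot_vec):
                    self.weights[s][e] += self.lr * error * x * a

Because each detector predicts only its own event, a model of an n-alternative task simply runs n such detectors side by side over the shared register.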
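A short, hypothetical usage run shows the pieces working together on a deterministic alternating sequence; after training, the detector for event 1 responds strongly whenever the most recent event was 0.

    register = ShiftRegister(n_slots=4, n_events=2)
    detectors = [Detector(n_slots=4, n_events=2) for _ in range(2)]

    # Train on 0, 1, 0, 1, ...: at each step the register holds the
    # events seen so far, and each detector is told whether its
    # event occurred next.
    sequence = [0, 1] * 200
    for t in range(1, len(sequence)):
        register.push(sequence[t - 1])
        for event, detector in enumerate(detectors):
            detector.learn(register, target=1.0 if sequence[t] == event else 0.0)

    # The most recent event in the register is 0, so detector 1
    # should respond strongly and detector 0 weakly.
    print(round(detectors[1].activation(register), 2))  # near 1.0
    print(round(detectors[0].activation(register), 2))  # near 0.0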