Locally Bayesian Learning With Applications to Retrospective Revaluation and Highlighting

John K. Kruschke
Indiana University

A scheme is described for locally Bayesian parameter updating in models structured as successions of component functions. The essential idea is to back-propagate the target data to interior modules, such that an interior component's target is the input to the next component that maximizes the probability of the next component's target. Each layer then does locally Bayesian learning. The approach assumes online trial-by-trial learning. The resulting parameter updating is not globally Bayesian but can better capture human behavior. The approach is implemented for an associative learning model that first maps inputs to attentionally filtered inputs and then maps attentionally filtered inputs to outputs. The Bayesian updating allows the associative model to exhibit retrospective revaluation effects such as backward blocking and unovershadowing, which have been challenging for associative learning models. The back-propagation of target values to attention allows the model to show trial-order effects, including highlighting and differences in magnitude of forward and backward blocking, which have been challenging for Bayesian learning models.

Keywords: learning theory, statistical probability, selective attention

Cognitive systems are often thought of as hierarchies of processes. Each process takes an input representation, transforms it, and generates another representation. That representation in turn is transformed by a subsequent process until an ultimate representation corresponds with a response or outcome. For example, Marr (1982) expounded a representational framework that progressed from a representation of image intensity to a "primal sketch" to a "two-and-a-half-D sketch" to a 3-D model representation. Palmer (1999) outlined four stages of visual processing, from image-based to surface-based to object-based to category-based. I am specifically interested in such architectures when applied to trial-by-trial, online learning. The transformations within levels of the hierarchy are incrementally tuned by each episodic experience in the world.

Bayesian approaches to cognitive modeling have been especially attractive because they express optimal performance under specific assumptions. Bayesian approaches can be useful either to show that human behavior is nearly optimal or to show specifically how human performance fails to be optimal. Bayesian approaches also stipulate how the model should adjust its distribution of parameter probabilities when data are supplied. Thus, Bayesian updating describes optimal learning. Bayesian learning has been applied to a range of phenomena from low-level perceptual learning (e.g., Eckstein, Abbey, Pham, & Shimozaki, 2004) to high-level causal induction and language acquisition (e.g., Regier & Gahl, 2004; Steyvers, Tenenbaum, Wagenmakers, & Blum, 2003).
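As a minimal illustration of this kind of trial-by-trial parameter updating, the sketch below (not taken from the article; the Bernoulli outcome model, the grid representation, and all names are assumptions made for the example) reweights a discrete grid of candidate parameter values by Bayes' rule after each trial, with each trial's posterior serving as the prior for the next trial.

```python
import numpy as np

# Illustrative sketch (not from the article): trial-by-trial Bayesian updating
# of a single parameter theta = P(outcome present), represented as a discrete
# grid of candidate values with a probability distribution over the grid.
theta = np.linspace(0.01, 0.99, 99)       # candidate parameter values
prior = np.ones_like(theta) / theta.size  # uniform prior over the grid

def update(prior, outcome):
    """Apply Bayes' rule for one trial: posterior is proportional to likelihood times prior."""
    likelihood = theta if outcome == 1 else (1.0 - theta)
    posterior = likelihood * prior
    return posterior / posterior.sum()    # normalize over the grid

# Online learning: the posterior after each trial is the prior for the next trial.
for outcome in [1, 1, 0, 1]:              # a short, made-up trial sequence
    prior = update(prior, outcome)

print("posterior mean of theta:", np.sum(theta * prior))
```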
If the Bayesian approach to learning is to be a general principle for modeling the mind, then it is logical to attempt Bayesian learning for the entire hierarchy of representations simultaneously. However, in a system as complex as the mind, replete with myriad parameters, it is unlikely that every episodic experience catalyzes a monolithic Bayesian updating of the complete joint parameter distribution simultaneously. Perhaps it is not being too mystical, however, to imagine that there is Bayesian updating within modules. Perhaps for small subspaces of parameters, there is Bayesian updating within each subspace. The problem is that most modules in the mental hierarchy are not in direct contact with the stimuli provided by the outside world, and so they do not know what data to use for updating their parameters.

There are three main points in this article, addressed in turn. First, I report a new general scheme for doing online (i.e., trial-by-trial) locally Bayesian updating in models structured as successions of component functions. The essential idea is to back-propagate the target data to interior modules, such that the interior targets are those that maximize the probability of the target in the subsequent layer. Second, I implement the approach for an associative learning model that first maps inputs to attentionally filtered inputs and then maps attentionally filtered inputs to outputs. Third, I apply the model to several phenomena exhibited in human learning that have heretofore been unaddressed by one or the other of Bayesian learning models and associative learning models. The Bayesian updating allows the associative model to exhibit retrospective revaluation effects such as backward blocking and unovershadowing. These effects are challenging for many non-Bayesian associative learning models. The back-propagation of target values to attentionally filtered cues allows the model to show trial-order effects, including highlighting and differences in the magnitudes of forward and backward blocking. These trial-order effects are challenging for many extant Bayesian learning models.

Author note: This research was supported in part by Grant BCS-9910720 from the National Science Foundation. A portion of this research was accomplished while the author was on sabbatical during Fall 2004. For comments on drafts of the article, I gratefully acknowledge Jerome Busemeyer, Michael Lee, Jay Myung, Richard Shiffrin, Joshua Tenenbaum, and Eric-Jan Wagenmakers, along with David Blei, Jason Gold, Richard Golden, Robert Jacobs, and Chen Yu. Some of this research was presented at the 38th Annual Meeting of the Society for Mathematical Psychology, August 6, 2005, University of Memphis, Memphis, Tennessee. The author's Web page is at http://www.indiana.edu/~kruschke/, which has links to software for the simulations reported in the article. Correspondence concerning this article should be addressed to John K. Kruschke, Department of Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405-7007. E-mail: [email protected]

Psychological Review, 2006, Vol. 113, No. 4, 677–699. Copyright 2006 by the American Psychological Association. 0033-295X/06/$12.00. DOI: 10.1037/0033-295X.113.4.677

Bayesian Modeling Generally

The benefits of Bayesian approaches to model fitting and model comparison have been compellingly discussed and demonstrated (e.g., Lee, 2004; MacKay, 2003; Myung & Pitt, 1997). Here, I provide a brief overview of Bayesian modeling as background for discussing Bayesian models of learning. Suppose we have
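As a concrete illustration of the locally Bayesian scheme outlined in the introduction, the sketch below implements a two-layer chain in which the outer target is back-propagated to the interior layer and each layer then updates its own parameter distribution by Bayes' rule. It is an illustrative sketch only, not the article's model or code: the Gaussian likelihood, the discrete hypothesis sets, the candidate internal codes, and all function names are assumptions made for the example.

```python
import itertools
import numpy as np

# Illustrative sketch of locally Bayesian learning in a two-layer chain
# (assumptions throughout; not the article's exact model). Each layer keeps a
# discrete set of candidate weight matrices ("hypotheses") with a probability
# distribution over them. Layer 1 maps the input x to an internal code y;
# layer 2 maps y to the output. On each trial the outer target t is
# back-propagated: layer 1's target y* is the candidate internal code that
# maximizes the probability of t under layer 2's current beliefs. Each layer
# then updates its own distribution by Bayes' rule.

# Candidate internal codes that layer 1 could emit (assumed binary codes of length 2).
CODES = [np.array(c, dtype=float) for c in itertools.product([0.0, 1.0], repeat=2)]

def make_layer(n_in, n_out, n_hyp=200, noise=1.0, seed=0):
    """A layer: a set of candidate weight matrices plus a distribution over them."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(0.0, 2.0, size=(n_hyp, n_out, n_in))  # hypotheses
    return {"weights": weights, "p": np.ones(n_hyp) / n_hyp, "noise": noise}

def likelihood(layer, x, target):
    """P(target | x, w) for every hypothesis w, using a Gaussian output model."""
    preds = layer["weights"] @ x                       # shape (n_hyp, n_out)
    sq_err = np.sum((preds - target) ** 2, axis=1)
    return np.exp(-sq_err / (2.0 * layer["noise"] ** 2))

def predictive_prob(layer, x, target):
    """Probability of the target given input x, averaged over the hypotheses."""
    return np.sum(likelihood(layer, x, target) * layer["p"])

def bayes_update(layer, x, target):
    """Locally Bayesian updating of one layer, given its own input and target."""
    post = likelihood(layer, x, target) * layer["p"]
    layer["p"] = post / post.sum()

def train_trial(layer1, layer2, x, t):
    """One trial of locally Bayesian learning in the two-layer chain."""
    # Back-propagate the target: choose the internal code y* that maximizes the
    # probability of the outer target t under layer 2's current distribution.
    y_star = max(CODES, key=lambda y: predictive_prob(layer2, y, t))
    bayes_update(layer2, y_star, t)   # layer 2 learns from (y*, t)
    bayes_update(layer1, x, y_star)   # layer 1 learns from (x, y*)

# Example trial: two cues present as input, one outcome unit as target.
layer1, layer2 = make_layer(2, 2, seed=1), make_layer(2, 1, seed=2)
train_trial(layer1, layer2, x=np.array([1.0, 1.0]), t=np.array([1.0]))
```

In the article's associative learning model, the first mapping corresponds to attentional filtering of the cues and the second to cue-outcome associations; here both layers are generic linear maps for brevity.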

