Guided Search 4.0: Current Progress with a model of visual search

Jeremy M. Wolfe, Brigham and Women's Hospital and Harvard Medical School

Text & Captions = 8605 words

Abstract

Visual input is processed in parallel in the early stages of the visual system. Later, object recognition processes are also massively parallel, matching a visual object against a vast array of stored representations. A tight bottleneck in processing lies between these stages. It permits only one or a few visual objects at any one time to be submitted for recognition. That bottleneck limits performance on visual search tasks, in which an observer looks for one object in a field containing distracting objects. Guided Search is a model of the workings of that bottleneck. It proposes that a limited set of attributes, derived from early vision, can be used to guide the selection of visual objects. The bottleneck and recognition processes are modeled using an asynchronous version of a diffusion process. The current version (Guided Search 4.0) captures a wide range of empirical findings.

Introduction

Guided Search (GS) is a model of human visual search performance; specifically, of search tasks in which an observer looks for a target object among some number of distracting items. Classically, models have described two mechanisms of search: "serial" and "parallel" (Egeth, 1966). In serial search, attention is directed to one item at a time, allowing each item to be classified as a target or a distractor in turn (Sternberg, 1966). Parallel models propose that all (or many) items are processed at the same time. A decision about target presence is based on the output of this processing (Neisser, 1963).

GS evolved out of the two-stage architecture of models like Treisman's Feature Integration Theory (FIT; Treisman & Gelade, 1980). FIT proposed a parallel, preattentive first stage and a serial second stage controlled by visual selective attention. Search tasks could be divided into those performed by the first stage in parallel and those requiring serial processing. Much of the data comes from experiments measuring reaction time (RT) as a function of set size; the RT is the time required to respond that a target is present or absent. Treisman proposed that there was a limited set of attributes (e.g. color, size, motion) that could be processed in parallel, across the whole visual field (Treisman, 1985; Treisman, 1986; Treisman & Gormican, 1988). These produced RTs that were essentially independent of set size. Thus, slopes of RT x set size functions were near zero. In FIT, targets defined by two or more attributes required the serial deployment of attention. The critical difference between preattentive search tasks and serial tasks was that the serial tasks required a serial "binding" step (Treisman, 1996; von der Malsburg, 1981). One piece of brain might analyze the color of an object; another might analyze its orientation. Binding is the act of linking those bits of information into a single representation of an object – an object file (Kahneman, Treisman, & Gibbs, 1992). Tasks requiring serial deployment of attention from one item to the next produce RT x set size functions with slopes markedly greater than zero (typically, about 20-30 msec/item for target-present trials and a bit more than twice that for target-absent trials).
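The roughly two-to-one ratio of target-absent to target-present slopes is what a serial, self-terminating search predicts: on target-present trials attention finds the target, on average, halfway through the display, whereas on target-absent trials every item must be inspected and rejected. The short Python sketch below illustrates that arithmetic; the 50 msec per-item cost and 400 msec residual time are illustrative assumptions, not values fitted in this paper.

```python
# Illustrative serial, self-terminating search (not the GS4 model itself).
# Target-present trials examine (N + 1) / 2 items on average; target-absent
# trials examine all N items, so the absent slope is about twice the present slope.

TIME_PER_ITEM_MS = 50.0   # assumed per-item inspection cost (illustrative)
BASE_RT_MS = 400.0        # assumed residual, non-search time (illustrative)

def predicted_rt(set_size: int, target_present: bool) -> float:
    items_examined = (set_size + 1) / 2 if target_present else set_size
    return BASE_RT_MS + TIME_PER_ITEM_MS * items_examined

for n in (4, 8, 16):
    print(n, predicted_rt(n, True), predicted_rt(n, False))
```

With these assumed values, the predicted slopes are 25 msec/item on target-present trials and 50 msec/item on target-absent trials, in line with the roughly 2:1 pattern described above.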
The original GS model had a preattentive stage and an attentive stage, much like FIT. The core of GS was the claim that information from the first stage could be used to guide deployments of selective attention in the second (Cave & Wolfe, 1990; Wolfe et al., 1989). Thus, if observers searched for a red letter "T" among distracting red and black letters, preattentive color processes could guide the deployment of attention to red letters, even if no front-end process could distinguish a "T" from an "L" (Egeth et al., 1984). This first version of GS (GS1) argued that all search tasks required that attention be directed to the target item; differences in task performance depended on differences in the quality of guidance. In a simple feature search (e.g., a search for red among green), attention would be directed toward the red target before it was deployed to any distractors, regardless of the set size. This would produce RTs that were independent of set size. In contrast, there are other tasks where no preattentive information, beyond information about the presence of items in the field, is useful in guiding attention. In these tasks, as noted, search is inefficient: RTs increase with set size at a rate of 20-30 msec/item on target-present trials and a bit more than twice that on target-absent trials (Wolfe, 1998). Examples include searching for a 2 among mirror-reversed 2s (5s) or searching for rotated Ts among rotated Ls. GS1 argued that the target is found when it is sampled, at random, from the set of all items.

Tasks where guidance is possible (e.g., search for conjunctions of basic features) tend to have intermediate slopes (Nakayama & Silverman, 1986; Quinlan & Humphreys, 1987; Treisman & Sato, 1990; Zohary, Hochstein, & Hillman, 1988). In GS1, this was modeled as a bias in the sampling of items: because it had the correct features, the target was likely to be picked earlier than it would have been by random sampling, but later than it would have been if it were the only item with those features.
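This sampling bias can be made concrete with a small simulation (a schematic sketch of the idea, not the actual GS1 implementation or its parameters): each item gets a noisy activation, the target receives an additional guidance signal when guidance is available, and attention visits items in order of decreasing activation. With no guidance the target is reached, on average, halfway through the display; with very strong guidance it is reached first; intermediate guidance yields the intermediate slopes characteristic of conjunction search.

```python
# Schematic illustration of GS1-style guidance as biased sampling of items.
# Parameter values are illustrative assumptions, not the model's actual ones.
import random

def mean_items_until_target(set_size: int, guidance: float,
                            noise: float = 1.0, trials: int = 10_000) -> float:
    """Average number of items attended before the target is selected."""
    total = 0
    for _ in range(trials):
        # Each distractor's activation is pure noise; the target's activation
        # is the guidance signal plus noise.
        distractors = [random.gauss(0.0, noise) for _ in range(set_size - 1)]
        target = guidance + random.gauss(0.0, noise)
        # Attention samples items in order of decreasing activation, so the
        # target's rank is 1 plus the number of distractors that outrank it.
        total += 1 + sum(d > target for d in distractors)
    return total / trials

for label, g in [("no guidance", 0.0), ("partial guidance", 1.5), ("strong guidance", 10.0)]:
    print(label, [round(mean_items_until_target(n, g), 1) for n in (4, 8, 16)])
```

Under no guidance the mean number of attended items grows as roughly half the set size; under partial guidance it still grows with set size, but more slowly, which is the qualitative pattern behind the intermediate conjunction-search slopes.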


GS has gone through major revisions yielding GS2 (Wolfe, 1994) and GS3 (Wolfe & Gancarz, 1996). GS2 was an elaboration on GS1, seeking to explain new phenomena and to provide an account of the termination of search on target-absent trials. GS3 was an attempt to integrate the covert deployments of visual attention with overt deployments of the eyes. This paper describes the current state of the next revision, uncreatively dubbed Guided Search 4.0 (GS4). The model is not in its final state because several problems remain to be resolved.

What does Guided Search 4.0 seek to explain?

GS4 is a model of simple search tasks done in the laboratory, with the hope that the same principles will scale up to the natural and artificial search tasks that are performed continuously by people outside of the laboratory. A set of phenomena is described here. Each pair of figures illustrates an aspect of the data that any comprehensive model of visual search should strive to account for. The left-hand member of the pair is the easier …
