MIT HST 722 - Lateralization of auditory language functions

Lateralization of auditory language functions: A dynamic dual pathway model

Angela D. Friederici* and Kai Alter
Max Planck Institute of Cognitive Neuroscience, P.O. Box 500 355, 04303 Leipzig, Germany
Brain and Language 89 (2004) 267–276
doi:10.1016/S0093-934X(03)00351-1
Accepted 20 August 2003
*Corresponding author. Fax: +49-341-9940-113. E-mail address: [email protected] (A.D. Friederici).

Contents: Introduction; The dynamic dual pathway model; Comparison with other views; Psycholinguistic models; Neurological evidence; Neurophysiological evidence; Neuroimaging evidence; Conclusion; Acknowledgements; References

Abstract

Spoken language comprehension requires the coordination of different subprocesses in time. After the initial acoustic analysis, the system has to extract segmental information such as phonemes, syntactic elements, and lexical-semantic elements, as well as suprasegmental information such as accentuation and intonational phrases, i.e., prosody. According to the dynamic dual pathway model of auditory language comprehension, syntactic and semantic information are primarily processed in a left-hemispheric temporo-frontal pathway including separate circuits for syntactic and semantic information, whereas sentence-level prosody is processed in a right-hemispheric temporo-frontal pathway. The relative lateralization of these functions occurs as a result of stimulus properties and processing demands. The observed interaction between syntactic and prosodic information during auditory sentence comprehension is attributed to dynamic interactions between the two hemispheres.
© 2003 Elsevier Inc. All rights reserved.

1. Introduction

The processing of spoken language depends on more than one mental capacity: on the one hand, the system must extract from the input a number of different types of segmental information to identify phonemes and content words, as well as syntactic elements indicating the grammatical relations between these words; on the other hand, the system has to extract suprasegmental information, i.e., the intonational contour, which signals the separation of different constituents and the accentuation of relevant words in the speech stream.

There are various descriptions of how syntactic and semantic information are processed in the brain (Friederici, 2002; Ullman, 2001). However, apart from a few general descriptions of the processing of intonational aspects in language and music (Zatorre, Belin, & Penhune, 2002), there is no brain-based description of how intonational information and segmental information work together during spoken language comprehension. Here we propose a model incorporating this aspect. The need for such a model may best be exemplified by the following sentences (# indicates the "intonational pause," called an Intonational Phrase Boundary, IPB).

(a) The teacher said # the student is stupid.
(b) The teacher # said the student # is stupid.
(c) The teacher said the student # is stupid.

Sentences (a) and (b) are both prosodically correct; sentence (c) is not. The incorrect intonational boundary after "student" in (c) indicates a mismatch between the syntactic and the prosodic structure. The prosodic realization in (c) leaves open the question of to whom the attribute "to be stupid" should be assigned. This example shows how the intonational information of natural speech, called prosodic information, can influence syntactic processes and thus sentence comprehension. The language processing system ('parser') does well to rely on prosodic information, as all IPBs are syntactic phrase boundaries as well, although the reverse is not always true. This prosody–syntax relationship is manifested by the finding that prosodic information eases infants' access to syntax during early development (Gleitman & Wanner, 1982; Hirsch-Pasek, 1987; Jusczyk, 1997), and supports parsing during language acquisition and during adult language comprehension (Marslen-Wilson, Tyler, Warren, Grenier, & Lee, 1992; Warren, Grabe, & Nolan, 1995). In the following, we present our dynamic dual pathway model, taking into consideration semantic, syntactic, and prosodic aspects of processing, and discuss the empirical evidence on which this model is based.

2. The dynamic dual pathway model

The neural basis of language processing has been the focus of many studies (for reviews see Friederici, 2002; Hickok & Poeppel, 2000; Kaan & Swaab, 2002; Kutas & Federmeier, 2000; Ullman, 2001); however, only a few have addressed auditory language comprehension in particular (Friederici, 2002; Hickok & Poeppel, 2000). The latter two approaches have either concentrated on the processing of segmental information, suggesting that particular networks in the left hemisphere (LH) support phonological, syntactic, and semantic processes, or have focused on the processing of prosodic information, suggesting an involvement of the right hemisphere (RH) (Gandour et al., 2000; Zatorre et al., 2002).

The present model binds together existing LH models with observations from more recent studies on prosodic processing. The premise of the dual pathway model is that the rough distinction between the processing of segmental versus suprasegmental speech information is related to the distinction between the two hemispheres. Segmental properties are associated via prosodic information with lexical and syntactic information. The interconnection between segmental and suprasegmental parameters can be established by the association of tones and tonal variations with segments and syllables. Segmental, lexical, and syntactic information is processed in the LH. This is true even when lexical differences are coded by tones bearing lexical meaning (Gandour & Dardarananda, 1983; Van Lancker & Fromkin, 1973) or by word-level stress (Baum, Daniloff, Daniloff, & Lewis, 1982; Blumstein & Goodglass, 1972; Pell & Baum, 1997; Van Lancker & Sidtis, 1992). In contrast, sentence-level suprasegmental information, namely accentuation and boundary marking, expressed acoustically by typical pitch variations, is processed by the RH (Meyer, Alter, Friederici, Lohmann, & von Cramon, 2002). During spoken language comprehension, processes of the left and the right hemisphere are assumed to interact dynamically in time.

The brain bases of the segmental language processing system have already been described in some detail

