Auditory–visual fusion in speech perception in children with cochlear implants

Efrat A. Schorr*, Nathan A. Fox*, Virginie van Wassenhove†‡, and Eric I. Knudsen§¶

*Department of Human Development/Institute of Child Study, University of Maryland, 3304 Benjamin Building, College Park, MD 20742; †Visual and Multisensory Perception Laboratory, Department of Psychology, University of California, Los Angeles, CA 90095-1563; ‡Shimojo Psychophysics Laboratory, Division of Biology, California Institute of Technology, Pasadena, CA 91125; and §Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305-5125

PNAS 2005;102(51):18748–18750; doi:10.1073/pnas.0508862102; originally published online December 8, 2005.

Contributed by Eric I. Knudsen, October 10, 2005

Speech, for most of us, is a bimodal percept whenever we both hear the voice and see the lip movements of a speaker.
Children who are born deaf never have this bimodal experience. We tested children who had been deaf from birth and who subsequently received cochlear implants for their ability to fuse the auditory information provided by their implants with visual information about lip movements for speech perception. For most of the children with implants (92%), perception was dominated by vision when visual and auditory speech information conflicted. For some, bimodal fusion was strong and consistent, demonstrating a remarkable plasticity in their ability to form auditory–visual associations despite the atypical stimulation provided by implants. The likelihood of consistent auditory–visual fusion declined with age at implant beyond 2.5 years, suggesting a sensitive period for bimodal integration in speech perception.

auditory visual integration | deafness | learning | sensitive periods | speech development

Speech is traditionally thought of as an exclusively auditory percept. However, when the face of the speaker is visible, information contained primarily in the movement of the lips contributes powerfully to our perception of speech. This cooperative interaction between the auditory and visual modalities improves our ability to interpret speech accurately, particularly in low-signal or high-noise environments (1–4).

The cross-modal influence of visual information on speech perception is illustrated by a compelling illusion, referred to as the McGurk effect. This illusion is evoked when a listener is presented with an audio recording of one syllable (e.g., /pa/) while watching a synchronized video recording of a speaker's face articulating a different syllable (e.g., /ka/). Under these conditions, the majority of adults typically report hearing the syllable /ta/.
The illusion is robust and obligatory, and has been demonstrated in adults and children and in numerous languages (5, 6).

The McGurk effect demonstrates that, in most people, the central nervous system combines visual information from the face with acoustic information in creating the speech percept. Because the stimuli that the visual and auditory systems encode are of a very different nature, and because the relationship between changes in lip shape and changes in acoustic spectrum can vary across languages, experience may play a critical role in forming the audiovisual associations that underlie bimodal speech perception.

Children who have been deaf since birth and have received cochlear implants provide a unique population in which to examine the effects of auditory deprivation, and of the timing of the introduction of auditory experience, on the emergence of the perceptual and cognitive processes involved in speech perception (7). Cochlear implants produce patterns of auditory nerve activation that differ markedly from those produced normally by the cochlea. Nevertheless, in a dramatic example of brain plasticity, a substantial proportion of children who receive cochlear implants learn to perceive speech remarkably well using their implants (8–10) and appear able to integrate congruent audiovisual speech stimuli (11–14). However, their ability to fuse conflicting auditory and visual information in speech perception has never been tested. In addition, because these children have received implants at various ages, they offer the opportunity to investigate the importance of age at the time of implant for the development of bimodal speech perception.

Materials and Methods

We tested children who were deaf from birth and had used their cochlear implants for at least 1 year (n = 36; mean = 5.85 years). Each child was capable of perceiving spoken language using the implant alone.
Participants in the study met the following criteria: they were 5–14 years of age at the time of testing, were profoundly deaf from birth, had a minimum of 1 year of cochlear-implant experience, used oral language as a primary mode of communication, and could perceive spoken language. Speech perception ability was assessed by using the lexical neighborhood test (LNT) and the multisyllabic lexical neighborhood test (MLNT) (15). Performance on the lexically "easy" and lexically "hard" word lists from both the LNT and MLNT was averaged. Scores obtained by children with cochlear implants ranged from 50% to 88% (mean = 71.05%, SD = 9.975%). The children could also read lips but, before the implant, had not experienced a correspondence between lip movement and auditory signals. The implants established such a correspondence for the first time.
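The scoring step described above (averaging the "easy" and "hard" word-list scores from both the LNT and MLNT into one composite, then summarizing across children) can be sketched as follows. This is a minimal illustration of the averaging procedure only; the per-child scores below are hypothetical, not the study's data.

```python
from statistics import mean, stdev

def composite_score(lnt_easy, lnt_hard, mlnt_easy, mlnt_hard):
    """Average the four percent-correct word-list scores into one composite."""
    return mean([lnt_easy, lnt_hard, mlnt_easy, mlnt_hard])

# Hypothetical (LNT easy, LNT hard, MLNT easy, MLNT hard) scores per child.
children = [
    (80, 70, 85, 75),
    (55, 45, 60, 40),
    (90, 82, 88, 92),
]
composites = [composite_score(*c) for c in children]
print(composites)        # per-child composite scores: [77.5, 50.0, 88.0]
print(mean(composites))  # group mean
print(stdev(composites)) # group SD (sample standard deviation)
```

The group mean and SD computed this way correspond to the summary statistics the paper reports (mean = 71.05%, SD = 9.975%) for the real cohort.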