MIT HST 722 - Auditory, Somatosensory, and Motor Interactions in Speech Production

Slide titles:
• CNS Speech Lab at Boston University
• Speech Sound Map ↔ Mirror Neurons
• Speech Sound Map ↔ Mirror Neurons
• Learning in the Model – Stage 1
• Learning in the Model – Stage 2 (Imitation)
• Talk Outline
• Prediction: Auditory Error Cells
• fMRI Study of Unexpected Auditory Perturbation During Speech
• fMRI Study of Unexpected Jaw Perturbation During Speech
• Talk Outline
• Feedforward Control in the Model
• Tuning Feedforward Commands
• Sensorimotor Adaptation Study – F1 Perturbation
• Sensorimotor Adaptation Study Results
• Summary
• Reconciling Gestural and Auditory Views of Speech Production
• Simulating a Hemodynamic Response from the Model
• Brain regions active during cued production of 3-syllable strings
• Motor Equivalence in American English /r/ Production
• Motor Equivalence in the DIVA Model
• Building Speaker-Specific Vocal Tract Models from MRI Images

HST 722 – Speech Motor Control 1
Auditory, Somatosensory, and Motor Interactions in Speech Production
Frank H. Guenther
Department of Cognitive and Neural Systems, Boston University
Division of Health Sciences and Technology, Harvard University / M.I.T.
Research Laboratory of Electronics, Massachusetts Institute of Technology
Collaborators: Satrajit Ghosh, Alfonso Nieto-Castanon, Jason Tourville, Oren Civier, Kevin Reilly, Jason Bohland, Jonathan Brumberg, Michelle Hampson, Joseph Perkell, Virgilio Villacorta, Majid Zandipour, Melanie Matthies, Shinji Maeda
Supported by NIDCD, NSF.

HST 722 – Speech Motor Control 2
CNS Speech Lab at Boston University
Primary goal is to elucidate the neural processes underlying:
• Learning of speech in children
• Normal speech in adults
• Breakdowns of speech in disorders such as stuttering and apraxia of speech
Methods of investigation include:
• Neural network modeling
• Functional brain imaging
• Motor and auditory psychophysics
These studies are organized around the DIVA model, a neural network model of speech acquisition and production developed in our lab.

HST 722 – Speech Motor Control 3
Talk Outline
Overview of the DIVA model
• Mirror neurons in the model
• Learning in the model
• Simulating a hemodynamic response from the model
Feedback control subsystem
• Auditory perturbation fMRI experiment
• Somatosensory perturbation fMRI experiment
Feedforward control subsystem
• Sensorimotor adaptation to F1 perturbation
Summary

HST 722 – Speech Motor Control 4
Schematic of the DIVA Model

HST 722 – Speech Motor Control 5
Boxes in the schematic correspond to maps of neurons; arrows correspond to synaptic projections. The model controls movements of a "virtual vocal tract", or articulatory synthesizer; the accompanying video shows random movements of the articulators in this synthesizer.
Production of a speech sound in the model starts with activation of a speech sound map cell in left ventral premotor cortex (BA 44/6), which in turn activates feedforward and feedback control subsystems that converge on primary motor cortex.
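To make the convergence of the two subsystems concrete, here is a minimal numerical sketch of how an overall articulator command can combine a feedforward command with feedback-based corrections. The dimensions, gain values, and error-to-motor matrices below are illustrative assumptions, not the model's published parameters.

    import numpy as np

    # Sketch of DIVA-style command mixing (illustrative parameters only).
    # The overall articulator velocity command is a weighted sum of the
    # learned feedforward command and feedback-based corrective commands
    # derived from auditory and somatosensory error signals.

    N_ART = 8                        # articulator dimensions (hypothetical)
    ALPHA_FF, ALPHA_FB = 0.85, 0.15  # assumed feedforward/feedback gains

    def motor_command(ff_cmd, aud_err, somat_err, M_aud, M_somat):
        """Combine feedforward and feedback components into one command.

        M_aud and M_somat stand in for the learned mappings that convert
        sensory errors into corrective articulator velocities.
        """
        fb_cmd = M_aud @ aud_err + M_somat @ somat_err
        return ALPHA_FF * ff_cmd + ALPHA_FB * fb_cmd

    # With zero sensory error, the feedforward command drives production alone.
    ff = np.ones(N_ART)
    print(motor_command(ff, np.zeros(3), np.zeros(4),
                        np.zeros((N_ART, 3)), np.zeros((N_ART, 4))))

In the model this mixing happens continuously during production, with the relative gains determining how strongly sensory errors reshape the ongoing movement.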
HST 722 – Speech Motor Control 6
Speech Sound Map ↔ Mirror Neurons
Since its inception in 1992, the DIVA model has included a speech sound map containing cells that are active during both perception and production of a particular speech sound (phoneme or syllable). During perception, these neurons are necessary for learning an auditory target, or goal, for the sound and, to a lesser degree, somatosensory targets (limited to visible articulators such as the lips).
[Figure: DIVA schematic, "Speech sound map during perception". Boxes: Speech Sound Map (premotor cortex), Articulator Velocity and Position Cells (motor cortex), Auditory Error (auditory cortex), Somatosensory Error (somatosensory cortex); labeled signals: auditory and somatosensory goal regions, auditory and somatosensory states, feedforward command to muscles, auditory and somatosensory feedback-based commands.]

HST 722 – Speech Motor Control 7
Speech Sound Map ↔ Mirror Neurons
After a sound has been learned (described next), activating the speech sound map cells for that sound leads to readout of the learned feedforward commands ("gestures") and the auditory and somatosensory targets for the sound (red arrows at right). These targets are compared to incoming sensory signals to generate corrective commands if needed (blue). The overall motor command (purple) combines the feedforward and feedback components.
[Figure: the same DIVA schematic, "Speech sound map during production".]

HST 722 – Speech Motor Control 8
Learning in the Model – Stage 1
In the first learning stage, the model learns the relationships between motor commands, somatosensory feedback, and auditory feedback. In particular, it needs to learn how to transform sensory error signals into corrective motor commands. This is done with babbling movements of the vocal tract, which provide paired sensory and motor signals that can be used to tune these transformations; a sketch of this idea follows below.
[Figure: the same DIVA schematic.]
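As an illustration of this stage, here is a minimal sketch of learning an auditory-error-to-motor mapping from babbling data. The linear stand-in "vocal tract" and the least-squares fit are simplifying assumptions made for brevity, not the model's actual learning rule.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in "vocal tract": an unknown linear map from articulator
    # velocities to auditory (e.g., formant) changes. The real synthesizer
    # is nonlinear; a linear map keeps the sketch short.
    N_ART, N_AUD = 8, 3
    J_true = rng.normal(size=(N_AUD, N_ART))

    # Babbling: issue random motor commands and record the paired
    # auditory consequences.
    motor = rng.normal(size=(5000, N_ART))
    aud = motor @ J_true.T

    # Tune the inverse transformation (auditory change -> motor command)
    # from the paired babbling data by least squares.
    M_aud, *_ = np.linalg.lstsq(aud, motor, rcond=None)

    # Using it: convert an auditory error into a corrective motor command.
    aud_err = np.array([30.0, -15.0, 0.0])   # hypothetical formant errors (Hz)
    corrective = aud_err @ M_aud
    print(np.allclose(corrective @ J_true.T, aud_err))  # True: error canceled

Because the articulator space here has more dimensions than the auditory space, many corrective commands can cancel a given error; the fitted mapping settles on one of them.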
HST 722 – Speech Motor Control 9
Learning in the Model – Stage 2 (Imitation)
The model then needs to learn auditory and somatosensory targets for each speech sound …
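Here is a minimal sketch of how stage-2 practice might proceed, under the assumption (consistent with the feedforward/feedback split described above) that the feedback-based correction from each attempt is partially absorbed into the stored feedforward command, so that later attempts rely less on feedback. The update rate and stand-in mappings are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed stage-2 practice loop: each attempt's feedback correction is
    # partially folded into the feedforward command for the next attempt.
    N_ART, N_AUD = 8, 3
    J = rng.normal(size=(N_AUD, N_ART))   # stand-in articulator->auditory map
    M_aud = np.linalg.pinv(J)             # error->motor map from stage 1
    ETA = 0.5                             # hypothetical update rate

    target = rng.normal(size=N_AUD)       # auditory target learned by listening
    ff_cmd = np.zeros(N_ART)              # feedforward command, initially empty

    for attempt in range(5):
        produced = J @ ff_cmd             # auditory outcome of this attempt
        aud_err = target - produced       # auditory error signal
        ff_cmd = ff_cmd + ETA * (M_aud @ aud_err)  # absorb the correction
        print(f"attempt {attempt}: |auditory error| = {np.linalg.norm(aud_err):.3f}")

The printed error shrinks across attempts, mirroring the idea that well-practiced sounds come to be produced largely feedforward.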

