Michael Beauchamp
Department of Neurosurgery
University of Pennsylvania
Interactions between Face and Voice: New Findings in Human Speech Perception
Auditory information from a talker’s voice and visual information from a talker’s face provide independent cues about speech content. In the illusion known as the McGurk effect, integration of conflicting auditory and visual speech cues produces a fusion percept that differs from the percept evoked by either modality alone (McGurk and MacDonald, 1976). A computational model known as CIMS (causal inference in multisensory speech perception) can explain the perception of the McGurk effect and other incongruent audiovisual speech (Magnotti and Beauchamp, 2017). Recently, we discovered that repeatedly experiencing the McGurk effect produces long-lasting changes in auditory-only perception. Instead of being perceived veridically, the auditory component of the McGurk stimulus presented on its own begins to evoke the fusion percept. This change, termed fusion-induced recalibration (FIR), is talker-specific and persists for months or years. An updated CIMS model that includes an error signal propagating between audiovisual and unisensory representations provides a quantitative, predictive framework for understanding FIR.
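For readers unfamiliar with the causal-inference framework the talk builds on, the core computation can be illustrated with a minimal sketch. This is an assumed, textbook-style Gaussian cue-combination model in the style of standard causal-inference accounts, not the authors' CIMS implementation; all parameter names and values here are hypothetical.

```python
import math

def gauss(x, mu, var):
    # Gaussian density at x with mean mu and variance var.
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior_common_cause(x_a, x_v, var_a, var_v, var_p, p_common=0.5):
    """Posterior probability that auditory cue x_a and visual cue x_v
    share a single cause, given sensory noise variances var_a, var_v,
    a zero-mean Gaussian prior with variance var_p, and a prior
    probability p_common of a common cause."""
    # Likelihood under a common cause: integrate over the shared source.
    var1 = var_a * var_v + var_a * var_p + var_v * var_p
    like1 = math.exp(-((x_a - x_v) ** 2 * var_p
                       + x_a ** 2 * var_v
                       + x_v ** 2 * var_a) / (2 * var1)) / (2 * math.pi * math.sqrt(var1))
    # Likelihood under independent causes: each cue integrates its own source.
    like2 = gauss(x_a, 0.0, var_a + var_p) * gauss(x_v, 0.0, var_v + var_p)
    return like1 * p_common / (like1 * p_common + like2 * (1 - p_common))

def auditory_estimate(x_a, x_v, var_a, var_v, var_p, p_common=0.5):
    """Model-averaged auditory percept: reliability-weighted fusion when a
    common cause is inferred, auditory-only otherwise."""
    pc = posterior_common_cause(x_a, x_v, var_a, var_v, var_p, p_common)
    fused = (x_a / var_a + x_v / var_v) / (1 / var_a + 1 / var_v + 1 / var_p)
    alone = (x_a / var_a) / (1 / var_a + 1 / var_p)
    return pc * fused + (1 - pc) * alone
```

In this sketch, cues that agree yield a high common-cause posterior and a strongly fused percept, while conflicting cues (as in a McGurk stimulus) yield partial fusion weighted by the inferred probability of a common cause.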
Pizza will be served. Please bring your own beverage.