
ILST Seminar: McCall Sarrett

February 4, 2022
1:30 PM - 2:30 PM

online offering

McCall Sarrett
Villanova University


via Zoom: https://upenn.zoom.us/j/97106379648


Decoding speech information from neurophysiological data


The acoustics of spoken language are highly variable, and yet most listeners easily extract meaningful information from the speech signal. Psycholinguistic work has revealed which acoustic dimensions are relevant when listeners categorize speech sounds. However, the real-time neural mechanisms subserving these processes are not well understood. One crucial question is which perceptual distinctions are detectable in neural responses. First, we examine the perceptual encoding of speech sounds using electroencephalography (EEG) and machine learning techniques (N=27). We contrast two machine-learning approaches to EEG data and discuss methodological considerations for researchers using such techniques. We find that this approach can reveal neural sensitivity to phonetic contrasts that are indistinguishable in traditional EEG analyses. Second, we examine how such auditory information is integrated over time (N=31), a process critical to efficient spoken word recognition. Drawing on machine learning techniques, we propose a novel method to measure this integration directly from neural activity. We demonstrate robust decoding of words at the individual-trial, individual-subject level, which follows the expected pattern of lexical competition dynamics and shows individual-subject variability comparable to that of traditional methods (e.g., the Visual World Paradigm). Future directions and methodological limitations are also discussed.
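
For attendees unfamiliar with this kind of analysis, the sketch below illustrates, in broad strokes, how single-trial EEG decoding of a phonetic contrast is often set up: a linear classifier is fit at each time point across channels and scored with cross-validation. This is a minimal illustrative example in Python using scikit-learn; the simulated data, labels, and classifier choices are placeholders and do not represent the speaker's actual pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical single-subject data: trials x channels x time samples,
# with one phonetic-category label per trial (e.g., /b/ vs. /p/).
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 128
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)

# Decode separately at each time point: treat channels as features,
# fit a regularized linear classifier, and score with cross-validation.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

scores = np.empty(n_times)
for t in range(n_times):
    scores[t] = cross_val_score(clf, X[:, :, t], y, cv=cv,
                                scoring="roc_auc").mean()

# Above-chance AUC at a given latency would suggest the contrast is
# linearly decodable from the scalp signal at that point in time.
print(f"peak decoding AUC: {scores.max():.2f} at sample {scores.argmax()}")

Time-resolved linear decoding of this sort is one common way to ask when a perceptual distinction becomes detectable in neural responses, which is the kind of question the talk addresses with its own methods.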