Graduate students and postdocs are encouraged to join the speaker for lunch after the seminar. To sign up for a spot, please email: pennmindcore@sas.upenn.edu
Jennifer Groh
Professor of Psychology & Neuroscience, Neurobiology, Biomedical Engineering, and Computer Science
Duke Institute for Brain Sciences
Computing the location(s) of sound(s) in the visual scene
I will discuss two topics concerning visual and auditory spatial coding: (1) early cross-talk between vision and hearing, in which eye movement signals trigger eardrum oscillations and produce faint saccade-related sounds; and (2) a new theory of neural coding in which signals are multiplexed via fluctuating activity patterns, potentially allowing neural representations to encode more than one visual or auditory stimulus at a time. These findings emerged from experimentally testing computational models, highlighting the importance of theory in guiding experimental science.
Selected References:
Lovich, S. N., C. D. King, D. L. Murphy, R. Landrum, C. A. Shera and J. M. Groh (2023). “Parametric information about eye movements is sent to the ears.” Proceedings of the National Academy of Sciences 120(48): e2303562120.
Groh, J. M., M. N. Schmehl, V. C. Caruso and S. T. Tokdar (2024). “Signal switching may enhance processing power of the brain.” Trends in Cognitive Sciences 28(7): 600-613.
A pizza lunch will be served. Please bring your own beverage.
We will also stream this seminar via Zoom.
For the link, please email: pennmindcore@sas.upenn.edu
This is a joint seminar with the EARS group at Penn.