Dr. Hiroyuki Kato (University of North Carolina at Chapel Hill): “Cortical area-specific roles in spectro-temporal integration”
In our daily life, even in the face of multiple sound sources, our brain binds together frequency components that belong to the same source and recognizes individual sound objects. This “feature binding” relies on the precise synchrony of each component’s onset timing, but little is known regarding its neural correlates. Here, we find that multi-frequency sounds prevalent in vocalizations, specifically harmonics, preferentially activate the mouse secondary auditory cortex (A2), whose response sharply deteriorates with shifts in component onset timings. The temporal window for harmonic integration in A2 was broadened by inactivation of somatostatin-expressing interneurons (SOM cells), but not parvalbumin-expressing interneurons (PV cells). Importantly, A2 contains functionally connected subnetworks of neurons that preferentially encode coincident harmonics. These subnetworks are stable across days and exist prior to experimental exposure to harmonics, suggesting that they form during development. We therefore propose A2 as a locus for multi-frequency integration, which may form the circuit basis for vocal processing. In this seminar, I will further discuss the disparity between the location of the functionally identified A2 and the representation of “AuV” in brain atlases. Our data show that stereotaxic targeting of auditory cortical areas is prone to inaccuracy due to marked spatial variability across individuals. These results call for reconsidering the use of brain atlases and underscore the necessity of functional mapping in dissecting the hierarchically organized auditory cortices.
Dr. Mitchell Sutter (University of California, Davis): “Task Dependence of Attentional Modulation of Auditory Cortical Coding”
Attention improves our ability to process sounds in complex hearing environments. We are investigating how attention works to improve the neuronal encoding of sound. In this talk we describe how different forms of attention and different types of stimulus discriminations can dramatically change how the auditory cortex encodes sounds, and how attention manifests itself in single-neuron signals. We will specifically compare three tasks: simple amplitude modulation (AM) detection; selective feature attention, in which one must attend to either the modulation or the carrier of an AM sound; and intermodal attention, in which one must attend to either an auditory or a visual stimulus presented simultaneously. The results show that both the form of the neural code (opponent versus non-opponent) and whether attention is manifested at the single-neuron or the population level depend heavily on how ambiguous a single-neuron rate code is at encoding the AM. The results also reveal a higher level of complexity in primary auditory cortex than one would expect, including encoding of behavioral context independent of a neuron’s sound-processing properties.
E.A.R.S. is a monthly auditory seminar series focused on central auditory processing and circuits. Please pre-register (for free) and tune in via Crowdcast (enter your email to receive the link for the talk): https://www.crowdcast.io/e/ears/10
(Note: for optimal performance, we recommend using Google Chrome as your browser.)