Computational Neuroimaging of Human Auditory Cortex
Just by listening, humans can determine who is talking to them, whether a window in their house is open or shut, or what their kid dropped on the floor in the next room. This ability to derive information from sound is enabled by a cascade of neuronal processing stages that transform the sound waveform entering the ear into cortical representations presumed to make behaviorally important sound properties explicit. Although much is known about the peripheral processing of sound, the auditory cortex remains poorly understood, with little consensus even about its coarse-scale organization. This talk will describe our recent efforts to use computational neuroimaging methods to better understand the cortical representation of sound. Our work relies on several new methods for neuroimaging experimental design and data analysis: “model-matched” stimuli, voxel decomposition of responses to natural sounds, and the use of task-optimized deep neural networks to model brain responses. We have harnessed these methods to reveal functional segregation in non-primary auditory cortex, as well as representational transformations between primary and non-primary cortex that may support the recognition of speech, music, and other real-world sound signals.
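To give a rough sense of what "voxel decomposition of responses to natural sounds" involves, the sketch below factorizes a voxel-by-sound response matrix into a small number of component response profiles and per-voxel weights. This is a minimal illustration using non-negative matrix factorization on simulated data, not the specific decomposition algorithm used in the work described in the talk; the matrix sizes and component count are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical data: rows = voxels, columns = natural sounds.
# Entries stand in for non-negative fMRI response magnitudes; here they are
# random numbers purely for illustration.
rng = np.random.default_rng(0)
n_voxels, n_sounds, n_components = 500, 165, 6
responses = rng.random((n_voxels, n_sounds))

# Approximate responses ≈ voxel_weights @ component_profiles, so that each
# voxel's response across the sound set is modeled as a weighted sum of a
# small number of shared response components.
model = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
voxel_weights = model.fit_transform(responses)   # shape: (n_voxels, n_components)
component_profiles = model.components_           # shape: (n_components, n_sounds)

print(voxel_weights.shape, component_profiles.shape)
```

In this kind of analysis, the recovered components can then be inspected for selectivity (e.g., responding preferentially to speech or music) and mapped back onto the cortical surface via the voxel weights.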
This event is open to the public.
