Professor Jonathan Simon (Biology/ECE/ISR) is the Principal Investigator of a new National Institutes of Health National Institute on Deafness and Other Communication Disorders R01 grant, "Auditory Scene Analysis and Temporal Cortical Computations." The five-year, $1.5M grant started March 1, 2015. The research will further the understanding of how, in an environment filled with many sounds and voices, people are able to concentrate on an individual voice and understand what it is saying.
When many people in a room are talking at the same time, the sounds of their voices mix with each other before ever arriving at our ears. Although sorting out this sound mixture, or auditory scene, into individual voices is a profoundly difficult mathematical problem, the human brain routinely accomplishes the task, often with little apparent effort. The neural underpinnings of this ability are not at all well understood. Moreover, when this ability declines, for example due to hearing loss or aging, it is not known which specific neural processing mechanisms are most critical to preserving it.
Simon will use magnetoencephalography (MEG) to record from the auditory cortices of human subjects, specifically the temporally dynamic neural responses to individual sound elements and their mixtures. Linking the neural responses with their auditory stimuli and attentional state will allow inferences of neural representations of these sounds. These neural representations are temporal: neural processing unfolds in time in response to ongoing acoustic dynamics.
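The article does not specify the analysis methods. One common way to link an ongoing acoustic stimulus to a continuous MEG response is to estimate a temporal response function (TRF): a linear filter that, convolved with the stimulus envelope, best predicts the neural signal. The sketch below is purely illustrative and is not the project's actual analysis; the function name, signal lengths, and regularization value are all hypothetical choices for a toy demonstration.

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, reg=1e-2):
    """Estimate a temporal response function (TRF) by ridge regression.

    stimulus, response: 1-D arrays of equal length (in samples).
    n_lags: number of time lags (filter taps) to fit.
    reg: ridge regularization strength (hypothetical value).
    """
    n = len(stimulus)
    # Lagged design matrix: column k holds the stimulus delayed by k samples.
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    # Ridge-regularized least squares: (X'X + reg*I) w = X'y
    w = np.linalg.solve(X.T @ X + reg * np.eye(n_lags), X.T @ response)
    return w

# Toy demonstration: recover a known 3-tap filter from synthetic data.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
true_trf = np.array([0.5, 1.0, -0.3])
resp = np.convolve(stim, true_trf)[:2000] + 0.01 * rng.standard_normal(2000)
est = estimate_trf(stim, resp, n_lags=3)
```

In practice such filters are estimated per MEG sensor (or source) and inspected for their latency structure, which is what makes the representation genuinely temporal.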
Simon will use these temporal representations to investigate how complex auditory scenes are neurally encoded—from the broad mixture of the entire acoustic scene to separated individual sources, in different areas of auditory cortex, and with a special emphasis on speech. He hypothesizes that the brain’s auditory cortex employs a universal neural encoding scheme, genuinely temporal in nature, which underlies not only general auditory processing but also auditory scene segregation.
Simon will determine how the auditory cortex neurally represents speech in difficult listening situations. One example is speech in noise in a reverberant environment, a highly relevant combination that can strongly undermine speech intelligibility. Another example is listening to a speaker in the presence of several competing speakers. In this case, understanding how the background (the mixture of the competing speakers) is neurally represented is of particular interest, and of direct relevance in determining how the brain segregates the foreground speech from the background.
Simon also will determine analogs of these neural speech representations for dynamic non-speech sounds, especially when the sounds are separate components of a larger acoustic scene. This will generalize what is known about speech segregation to a wider class of sounds. (While speech is very important for human listeners, most sounds are not speech.)
In addition, Simon will investigate the detailed neural mechanisms by which the auditory cortex identifies and isolates individual speakers in a complex acoustic scene. Pitch and timbre, two acoustic cues known to be important for this task, will be separately and independently modified, so that their individual contributions to the neural process of auditory scene segregation of speech may be determined.
March 13, 2015