
Brain Mechanisms Help You Distinguish Words In Crowds
Columbia University Zuckerman Institute (2023)
We now have a better idea of how the brain tracks speech when you’re in a noisy, crowded room, and the finding could improve hearing aids.
The prevailing idea in speech perception has been that the brain only processes the voice of the person you are paying attention to, says Vinay Raghavan of Columbia University in New York. “But my problem with that idea is that when someone shouts in a crowded place, we don’t ignore it just because we’re focused on the person we’re talking to. We still pick it up.”
To better understand how we process multiple voices, Raghavan and his colleagues implanted electrodes in the brains of seven people undergoing epilepsy surgery to monitor their brain activity. The participants, who remained awake throughout the procedure, listened to a 30-minute audio clip of two voices.
Throughout the clip, participants were repeatedly asked to switch their focus between the two voices, one male and one female. The voices spoke over each other and were, on average, the same volume, but at various points one was louder than the other, mimicking the rise and fall of background conversation in a crowded space.
The team then used the brain activity data to build a model that predicted how the brain processes the louder and quieter voices, and how that processing differs depending on which voice participants were asked to focus on.
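The article doesn’t detail how the model worked, but a common approach in this line of research is stimulus reconstruction: fit a linear map from the electrode recordings back to each talker’s acoustic envelope and see which talker the neural activity tracks best. The following is a minimal, hypothetical sketch of that idea in Python; the data shapes, the ridge regression, and the in-sample scoring are all illustrative assumptions, not the study’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical stand-in data: electrode recordings plus the acoustic
# envelope of each talker, all resampled to a shared rate.
rng = np.random.default_rng(0)
n_samples, n_electrodes = 10_000, 64
neural = rng.standard_normal((n_samples, n_electrodes))  # electrode activity
env_talker_a = np.abs(rng.standard_normal(n_samples))    # talker A envelope
env_talker_b = np.abs(rng.standard_normal(n_samples))    # talker B envelope

def reconstruction_score(neural, envelope):
    """Fit a linear decoder that reconstructs a talker's envelope from
    neural activity; return the correlation between the reconstruction
    and the real envelope (higher = the brain tracks this talker better)."""
    model = Ridge(alpha=1.0).fit(neural, envelope)
    return np.corrcoef(model.predict(neural), envelope)[0, 1]

# The talker whose envelope reconstructs more faithfully is inferred to
# be the attended one. (A real analysis would score on held-out data.)
score_a = reconstruction_score(neural, env_talker_a)
score_b = reconstruction_score(neural, env_talker_b)
print("inferred attended talker:", "A" if score_a > score_b else "B")
```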
The researchers found that the louder of the two voices was encoded both in the primary auditory cortex, which is thought to be responsible for the conscious perception of sound, and in the secondary auditory cortex, which handles more complex speech processing, even when participants were told not to focus on the louder voice.
“This is the first neuroscience study to show that the brain encodes speech that we don’t pay attention to,” says Raghavan. “This opens the door to understanding how the brain processes things we don’t pay attention to.”
By contrast, the quieter voice was processed in the primary and secondary auditory cortices only when participants were asked to focus on it, and the brain took about 95 milliseconds longer to process it as speech than when participants were focused on the louder voice.
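To picture how a latency figure like that 95 milliseconds might be estimated, imagine sliding the neural signal against the speech envelope and finding the lag at which the two align best. This is a hedged illustration of that general technique, not the study’s reported method; the sampling rate and variable names are assumptions.

```python
import numpy as np

def response_latency_ms(envelope, neural_channel, fs, max_lag_ms=300):
    """Estimate how long (in ms) the neural signal lags the speech
    envelope, by finding the shift that maximises their correlation."""
    max_lag = int(max_lag_ms * fs / 1000)
    corrs = [np.corrcoef(envelope[: -lag or None], neural_channel[lag:])[0, 1]
             for lag in range(max_lag)]
    return int(np.argmax(corrs)) * 1000 / fs

# e.g. with fs = 1000 Hz, compare the attended-quiet condition against the
# loud condition; the study's finding would show up as a latency roughly
# 95 ms longer for the quieter voice:
# lat_quiet = response_latency_ms(env_quiet, neural[:, ch], fs=1000)
# lat_loud  = response_latency_ms(env_loud,  neural[:, ch], fs=1000)
```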
“The results of this study suggest that the brain may use different mechanisms to encode and represent these two different levels of speech when a background conversation is going on,” says Raghavan.
Targeting the mechanisms that process quieter voices could make hearing aids more effective, says Raghavan. “If you could build a hearing aid that knows who you’re paying attention to, you could turn up the volume of just that person’s voice.”
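As a toy sketch of that idea: suppose a future device already has the two talkers separated into individual audio streams (itself a hard source-separation problem) and an attention decoder reports which one the wearer is focused on. The remaining step is just a gain stage. Everything below (the function, the stream layout, the 9 dB boost) is hypothetical.

```python
import numpy as np

def remix_for_attention(talker_a, talker_b, attended, gain_db=9.0):
    """Remix two separated talker streams so the attended one is louder.
    `attended` is "A" or "B", e.g. as reported by an attention decoder."""
    gain = 10 ** (gain_db / 20)                   # dB -> linear amplitude
    a = talker_a * (gain if attended == "A" else 1.0)
    b = talker_b * (gain if attended == "B" else 1.0)
    mix = a + b
    return mix / max(1.0, float(np.max(np.abs(mix))))  # avoid clipping
```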
The researchers now plan to repeat the experiment using less invasive methods of recording speech processing in the brain. “Ideally, you wouldn’t need anything implanted in the brain to get recordings good enough to decode attention,” says Raghavan.