Selecting Sounds: How the Brain Knows What To Listen To

How is it that we are able—without any noticeable effort—to listen to a friend talk in a crowded café or follow the melody of a violin within an orchestra?

A team led by scientists at Carnegie Mellon University and Birkbeck, University of London has developed a new approach to understanding how the brain singles out a specific stream of sound from other distracting sounds. Using a novel experimental approach, the scientists non-invasively mapped sustained auditory selective attention in the human brain. Published in the Journal of Neuroscience, the study lays crucial groundwork for tracking deficits in auditory attention due to aging, disease or brain trauma and for creating clinical interventions, like behavioral training, to potentially correct or prevent hearing issues.

“Deficits in auditory selective attention can happen for many reasons—concussion, stroke, autism or even healthy aging. They are also associated with social isolation, depression, cognitive dysfunction and lower work force participation. Now, we have a clearer understanding of the cognitive and neural mechanisms responsible for how the brain can select what to listen to,” said Lori Holt, professor of psychology in CMU’s Dietrich College of Humanities and Social Sciences and a faculty member of the Center for the Neural Basis of Cognition (CNBC).

To determine how the brain listens out for important information in different acoustic frequency ranges (similar to paying attention to the treble or bass in a music recording), the researchers had eight adults listen to one series of short tone melodies while ignoring another, distracting series, responding when they heard a melody repeat. To understand how paying attention to the melodies changed brain activation, the researchers took advantage of a key way that sound information is laid out across the surface, or cortex, of the brain. The cortex contains many 'tonotopic' maps of auditory frequency, where each map represents frequency a little like an old radio display, with low frequencies at one end and high frequencies at the other. These maps are put together like pieces of a puzzle in the top part of the brain's temporal lobes.

When people in the MRI scanner listened to the melodies at different frequencies, the parts of the maps tuned to these frequencies were activated. What was surprising was that just paying attention to these frequencies activated the brain in a very similar way—not only in a few core areas, but also over much of the cortex where sound information is known to arrive and be processed.

[Figure: Multiparameter mapping: auditory cortical maps of sound frequency and attention.]

The researchers then used a new high-resolution brain imaging technique called multiparameter mapping to see how the activation from hearing, or just paying attention to, different frequencies related to another key brain feature: myelination. Myelin is the 'electrical insulation' of the brain, and brain regions differ a lot in how much myelin insulation is wrapped around the parts of neurons that transmit information. In comparing the frequency and myelin maps, the researchers found that they were closely related in specific areas: where the amount of myelin increased across a small patch of cortex, so did the strength of neurons' preference for particular frequencies.

“This was an exciting finding because it potentially revealed some shared 'fault lines' in the auditory brain,” said Frederic Dick, professor of auditory cognitive neuroscience at Birkbeck College and University College London. "Like earth scientists who try to understand what combination of soil, water and air conditions makes some land better for growing a certain crop, as neuroscientists we can start to understand how subtle differences in the brain's functional and structural architecture might make some regions more 'fertile ground' for learning new information like language or music.” 

StepUp Summary

A child who struggles with listening and auditory processing may develop poor listening habits and skills. He may not expect to hear, understand or retain words, language patterns, and sentences. A child who doesn't predict he will be successful often doesn't pay attention during group activities or to general directions. In addition to missing out on valuable and useful language information, the child misses out on the satisfaction of successful listening. If children don't predict they will be successful, they aren't motivated to listen.

With StepUp to Learn, a child can develop good listening skills through practice with successful listening experiences, learning to expect to hear, understand, and remember information. Successful listening develops thinking and problem-solving skills. It lays the foundation for reading decoding by helping children hear the differences between similar words, and it lays the foundation for reading comprehension by helping children understand phrases and sentences and visualize their meaning.

StepUp to Learn helps children develop fluency in reading decoding, handwriting, and math-fact retrieval. StepUp's cloud-based programs enrich any PreK - Grade 2 curriculum and can be used as an intervention for struggling learners.  

Watch this video to see StepUp to Learn in action! 

Ready to try for yourself? Get your 30-day trial now!

Article written by Shilo Rea and reposted with permission from Carnegie Mellon University.