Researchers at Macquarie University in Sydney, Australia, have upended a decades-old theory about how human brains figure out the location of sounds. The old model assumed we have specialized neurons to detect where sounds originate.
The neural network the researchers discovered also distinguishes speech from background noise. That finding is likely to interest hearing aid and smartphone designers: understanding speech in a noisy room, the so-called "cocktail party problem," is a challenge both for people with hearing loss and for smartphones trying to interpret what we're saying.
Why it matters
This research could pave the way for innovations in hearing aids and smartphones, especially in filtering background noise.
“We like to think that our brains must be far more advanced than other animals in every way, but that is just hubris. We’ve been able to show that gerbils are like guinea pigs, guinea pigs are like rhesus monkeys, and rhesus monkeys are like humans in this regard. A sparse, energy efficient form of neural circuitry performs this function – our gerbil brain, if you like.” —David McAlpine, Distinguished Professor of Hearing, Macquarie University
[Photo: Distinguished Professor of Hearing David McAlpine in the Macquarie University anechoic chamber]
The backstory
An influential 1940s engineering theory proposed that human brains had specialized detectors for mapping sounds to a specific location. This assumption has guided audio tech development for more than 75 years.
The new study, however, shows humans use the same sparse, energy-efficient neural networks as small mammals like gerbils and guinea pigs.
The research
- The scientists combined advanced brain imaging with specialized hearing tests.
- After studying dozens of species, the researchers found no evidence of dedicated spatial-hearing neurons in any animal's brain.
- Comparing the data to primates such as rhesus monkeys confirmed that humans use the same sparse, energy-efficient neural circuitry across both brain hemispheres.
The bottom line
- Our brains don't continuously track sounds. Instead, they pinpoint a sound's location from tiny snippets of audio, before any language processing occurs.
- The researchers think the key to human-level machine listening lies in the simpler "gerbil brain" mechanics, not in complex language-processing models.
- The next step toward better machine listening is identifying the minimum audio information needed to locate a sound.
Concerned about hearing loss?
★ For facts about hearing loss and hearing aid options, download The Hearing Loss Guide.
★ Sign up for our newsletter for the latest on hearing aids, the link between hearing loss and dementia, pediatric speech and hearing, speech-language therapies, voice therapies for Parkinson's disease, and occupational hearing conservation. We publish our newsletter eight times a year.
★ Call 708-599-9500 to schedule a free, 15-minute hearing screening by an audiologist.
Don't let untreated hearing loss rob you of your health and happiness.