
A computational modelling study from the King Group demonstrates that the way sounds are transformed from the ear to the brain’s auditory cortex may be simpler than expected. These findings not only highlight the value of computational modelling for determining the principles underlying neural processing, but could also be useful for improving treatments for patients with hearing loss.

A cochleagram for a simple cochlea model showing a range of high and low frequencies when exposed to human speech.
A simple model of the ear and auditory nerve provides suitable input for predicting the responses of brain cells in the auditory cortex to natural sounds such as human speech.

Our sensory systems, such as the ear and the auditory regions of the brain, are known to be extremely complicated. The ear is arguably the most mechanically complex part of the body and allows us to hear a vast array of sounds. The cochlea – the hearing part of the inner ear – converts these sounds into patterns of neural activity, which travel through the auditory brainstem, a cluster of brain regions comprising many diverse cells and connections, to the auditory cortex. The auditory cortex is the brain region central to the processing of natural sounds such as speech and music. Consequently, it is widely assumed that the computations behind our ability to hear, and indeed all computations performed by our sensory systems, are also complex.

A new study from King Group researchers sought to understand how the transformation of sounds by the ear and early levels of the auditory pathway impacts cortical activity. The team, led by Monzilur Rahman and Dr Nicol Harper, examined how well different models of the ear and auditory nerve could be used to predict the responses of brain cells in the primary auditory cortex. The models ranged from detailed simulations of the cochlea and auditory nerve to simple models that were a rudimentary approximation of the information processing in these structures. The simple models retained only a few biological features. First, they decomposed sounds into different frequencies, and did so more finely at lower frequencies. Second, their response increased steeply with intensity for quiet sounds, but less so for louder sounds. Finally, some models had multiple outputs with different sensitivities, approximating the different kinds of fibre in the auditory nerve.
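The ingredients of such a simple spectrogram-based model can be illustrated in code. The sketch below is not the study's implementation; it is a minimal approximation of the general recipe the paragraph describes, assuming a mel-scaled filterbank for the finer-at-low-frequencies decomposition and a thresholded log for the compressive nonlinearity (parameter values such as the number of channels and the threshold are illustrative choices, not taken from the paper).

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr, fmin=100.0, fmax=8000.0):
    """Triangular filters on a mel scale: finer frequency resolution at
    low frequencies, roughly mimicking the cochlea's tonotopic map."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, centre, hi = bins[i], bins[i + 1], bins[i + 2]
        if centre > lo:   # rising slope of the triangle
            fb[i, lo:centre] = (np.arange(lo, centre) - lo) / (centre - lo)
        if hi > centre:   # falling slope of the triangle
            fb[i, centre:hi] = (hi - np.arange(centre, hi)) / (hi - centre)
    return fb

def cochleagram(sound, sr=16000, n_fft=512, hop=160, n_filters=32,
                threshold=1e-6):
    """Toy 'spec-log'-style cochleagram: short-time power spectrum,
    mel filterbank, then thresholded log compression (steep response
    to quiet sounds, shallower response to loud ones)."""
    n_frames = 1 + (len(sound) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([sound[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2        # (frames, bins)
    channels = mel_filterbank(n_filters, n_fft, sr) @ power.T  # (filters, frames)
    return np.log(np.maximum(channels, threshold))
```

A pure tone run through this model produces energy concentrated in one frequency channel, with higher tones activating higher channels, which is the qualitative behaviour the cochleagram figures below show for the pure-tone stimuli.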

Remarkably, the simple models, which left out many of the biological details, predicted the neural responses to diverse natural and artificial sounds more consistently well than the biologically detailed models. This implies that only certain features of the processing that takes place in the ear and auditory nerve are transmitted through the brainstem to the cortex, and that many details have little impact on cortical activity. Last author Dr Nicol Harper said: “This suggests that there may be an underlying simplicity to the signal transformation from ear to cortex that is hidden among the detail. This hidden simplicity may be a feature of other sensory systems too.”

“Understanding the computations performed by the auditory pathway, as well as providing insight into brain function in general, will aid us in developing better hearing aids and ear and brain implants to help people with hearing loss.”

First author Monzilur Rahman said: "The ability to predict the time course of the responses of auditory neurons is very important when it comes to improving our understanding of how the brain processes the sounds we hear. However, achieving high accuracy in predicting the time course of neural responses has always proven to be very challenging. We have explored this hard problem, attempting to improve our ability to predict the responses of auditory cortical neurons, while also relating it to the complexity of the auditory periphery. I found it astonishing how a simple model aimed at capturing the computational essence of the auditory periphery can perform similarly to a biologically-detailed model. While measuring prediction performance for particular stimuli is a good test for a model, we have also put our models to a more rigorous test by assessing their ability to predict well across different datasets and brain states."

The full paper "Simple transformations capture auditory input to cortex" is available to read in PNAS.

Example cochleagrams from the study

Cochleagrams show the output of a cochlear model, representing the activity in each sound frequency channel over time.
The complex biologically-detailed models are labelled WSR, Lyon, BEZ, MSS.
The simple spectrogram-based models are labelled spec-log, spec-log1plus, spec-power, spec-Hill.
The three models that performed particularly well at predicting cortical responses were spec-log, spec-power and spec-Hill.

Cochleagrams produced by each cochlear model for identical inputs

A. Each column is a different stimulus: a click, a low frequency pure tone, a high frequency pure tone, white noise, and a short and long clip of natural sound. B. Each row is a different cochlear model, the top four are more complex and biologically-detailed, the bottom four are simple and spectrogram-based.

Cochleagram showing frequencies ranging from low to high produced by each cochlear model when exposed to the following sounds: a click, pure tones of different frequencies, white noise, natural sounds and human speech.

Examples of cochleagrams of natural sound stimuli for each model


Cochleagrams showing frequencies ranging from high to low produced by each cochlear model when exposed to four natural sounds: ferret vocalization, insects buzzing, speech and water sound.



