A computational modelling study from the King Group demonstrates that the way sounds are transformed from the ear to the brain’s auditory cortex may be simpler than expected. These findings not only highlight the value of computational modelling for determining the principles underlying neural processing, but could also be useful for improving treatments for patients with hearing loss.

A cochleagram from a simple cochlear model, showing activity across high and low frequencies in response to human speech.
A simple model of the ear and auditory nerve serves as suitable input to predict responses of brain cells to natural sounds (e.g. human speech) in the auditory cortex

Our sensory systems, such as the ear and the auditory regions of the brain, are known to be extremely complicated. The ear is arguably the most mechanically complex part of the body and allows us to hear a vast array of sounds. The cochlea – the hearing part of the inner ear – converts these sounds into patterns of neural activity, which travel through the auditory brainstem, a cluster of brain regions comprising many diverse cells and connections, to the auditory cortex. The auditory cortex is the brain region central to processing natural sounds such as speech and music. Consequently, it is widely assumed that the computations behind our ability to hear, and indeed all computations performed by our sensory systems, are also complex.

A new study from King Group researchers sought to understand how the transformation of sounds by the ear and early levels of the auditory pathway impacts cortical activity. The team, led by Monzilur Rahman and Dr Nicol Harper, examined how well different models of the ear and auditory nerve could be used to predict the responses of brain cells in the primary auditory cortex. The models ranged from detailed simulations of the cochlea and auditory nerve to simple models that were a rudimentary approximation of the information processing in these structures. The simple models retained only a few biological features. First, the models decomposed the sounds into different frequencies, and did so more finely at lower frequencies. Second, their response increased steeply with intensity for quiet sounds, but less so for louder sounds. Finally, some models had multiple outputs with different sensitivities, to approximate the different kinds of fibre in the auditory nerve.
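To make the first two of those features concrete, the sketch below shows a minimal spectrogram-style cochlear model in the spirit described above: a power spectrogram pooled into logarithmically spaced frequency channels (finer resolution at low frequencies), followed by a compressive log nonlinearity. This is an illustrative reconstruction, not the authors' code; the function name, parameter values, and channel count are assumptions made for the example.

```python
import numpy as np

def simple_cochleagram(sound, sr=16000, n_channels=32,
                       win_len=0.02, f_min=200.0, f_max=8000.0):
    """Illustrative 'spectrogram-based' cochlear model (not the paper's code):
    windowed power spectra pooled into log-spaced channels, then log-compressed."""
    hop = int(sr * win_len)
    window = np.hanning(2 * hop)
    n_frames = max(0, (len(sound) - 2 * hop) // hop)
    freqs = np.fft.rfftfreq(2 * hop, 1.0 / sr)
    # Log-spaced channel edges: neighbouring low-frequency channels are
    # closer together in Hz, i.e. finer resolution at lower frequencies.
    edges = np.geomspace(f_min, f_max, n_channels + 1)
    cochleagram = np.zeros((n_channels, n_frames))
    for t in range(n_frames):
        frame = sound[t * hop: t * hop + 2 * hop] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        for ch in range(n_channels):
            band = (freqs >= edges[ch]) & (freqs < edges[ch + 1])
            cochleagram[ch, t] = power[band].sum()
    # Compressive nonlinearity: output rises steeply for quiet sounds
    # and more shallowly for loud ones.
    return np.log(cochleagram + 1e-8)

# A 1 kHz pure tone should mainly drive the channel whose band covers 1 kHz.
sr = 16000
t = np.arange(sr) / sr
cg = simple_cochleagram(np.sin(2 * np.pi * 1000 * t), sr)
```

Each row of the returned array is one frequency channel and each column one time step, which is exactly the cochleagram format shown in the figures below.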

Remarkably, the simple models, which left out many of the biological details, predicted the neural responses to diverse natural and artificial sounds more consistently well than the biologically-detailed models. This implies that only certain features of the processing that takes place in the ear and nerve are transmitted through the brainstem to the cortex, and that many details have little impact on cortical activity. Last author Dr Nicol Harper said: “This suggests that there may be an underlying simplicity to the signal transformation from ear to cortex that is hidden among the detail. This hidden simplicity may be a feature of other sensory systems too.”

“Understanding the computations performed by the auditory pathway, as well as providing insight into brain function in general, will aid us in developing better hearing aids and ear and brain implants to help people with hearing loss.”

First author Monzilur Rahman said: "The ability to predict the time course of the responses of auditory neurons is very important when it comes to improving our understanding of how the brain processes the sounds we hear. However, achieving high accuracy in predicting the time course of neural responses has always proven to be very challenging. We have explored this hard problem, attempting to improve our ability to predict the responses of auditory cortical neurons, while also relating it to the complexity of the auditory periphery. I found it astonishing how a simple model aimed at capturing the computational essence of the auditory periphery can perform similarly to a biologically-detailed model. While measuring prediction performance for particular stimuli is a good test for a model, we have also put our models to a more rigorous test by assessing their ability to predict well across different datasets and brain states."

The full paper "Simple transformations capture auditory input to cortex" is available to read in PNAS.

Example cochleagrams from the study

Cochleagrams provide the output of a cochlear model, which represents the activity in each sound frequency channel over time. 
The complex biologically-detailed models are labelled WSR, Lyon, BEZ, MSS.
The simple spectrogram-based models are labelled spec-log, spec-log1plus, spec-power, spec-Hill.
The three models that performed particularly well at predicting cortical responses were spec-log, spec-power and spec-Hill.

Cochleagrams produced by each cochlear model for identical inputs

A. Each column is a different stimulus: a click, a low frequency pure tone, a high frequency pure tone, white noise, and a short and long clip of natural sound. B. Each row is a different cochlear model, the top four are more complex and biologically-detailed, the bottom four are simple and spectrogram-based.

Cochleagrams showing frequencies ranging from low to high, produced by each cochlear model when exposed to the following sounds: a click, pure tones of different frequencies, white noise, and natural sounds including human speech.

Examples of cochleagrams of natural sound stimuli for each model


Cochleagrams showing frequencies ranging from high to low produced by each cochlear model when exposed to four natural sounds: ferret vocalization, insects buzzing, speech and water sound.



