
A computational modelling study from the King Group demonstrates that the way sounds are transformed from the ear to the brain’s auditory cortex may be simpler than expected. These findings not only highlight the value of computational modelling for determining the principles underlying neural processing, but could also be useful for improving treatments for patients with hearing loss.

A cochleagram from a simple cochlea model, showing activity across low and high frequencies in response to human speech.
A simple model of the ear and auditory nerve provides suitable input for predicting the responses of brain cells in the auditory cortex to natural sounds such as human speech.

Our sensory systems, such as the ear and the auditory regions of the brain, are known to be extremely complicated. The ear is arguably the most mechanically complex part of the body and allows us to hear a vast array of sounds. The cochlea – the hearing part of the inner ear – converts these sounds into patterns of neural activity, which travel through the auditory brainstem, a cluster of brain regions comprising many diverse cells and connections, to the auditory cortex. The auditory cortex is the brain region central to processing natural sounds such as speech and music. Consequently, it is widely assumed that the computations behind our ability to hear, and indeed all computations performed by our sensory systems, are also complex.

A new study from King Group researchers sought to understand how the transformation of sounds by the ear and early levels of the auditory pathway affects cortical activity. The team, led by Monzilur Rahman and Dr Nicol Harper, examined how well different models of the ear and auditory nerve could predict the responses of brain cells in the primary auditory cortex. The models ranged from detailed simulations of the cochlea and auditory nerve to simple models that were rudimentary approximations of the information processing in these structures. The simple models retained only a few biological features. First, they decomposed sounds into different frequencies, and did so more finely at lower frequencies. Second, their response increased steeply with intensity for quiet sounds, but less so for louder sounds. Finally, some models had multiple outputs with different sensitivities, approximating the different kinds of fibre in the auditory nerve.
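The first two of those features amount to a spectrogram with log-spaced frequency channels followed by a compressive nonlinearity. As a rough illustration only (this is not the study's actual code, and all parameter values here are hypothetical), such a simple cochlear model can be sketched in a few lines of Python:

```python
import numpy as np

def simple_cochleagram(sound, fs, n_channels=32, win_ms=20,
                       fmin=200.0, fmax=8000.0):
    """Sketch of a simple spectrogram-based cochlear model.

    Steps: (1) short-time power spectra, (2) pooling of FFT bins into
    log-spaced frequency channels (finer resolution at low frequencies),
    (3) a compressive nonlinearity (here, log compression: steep for
    quiet sounds, shallow for loud ones).
    """
    win = int(fs * win_ms / 1000)
    n_frames = len(sound) // win
    # (1) power spectra of non-overlapping Hann-windowed frames
    frames = sound[: n_frames * win].reshape(n_frames, win)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)) ** 2
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    # (2) pool FFT bins into log-spaced channels
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    cochleagram = np.empty((n_channels, n_frames))
    for c in range(n_channels):
        band = (freqs >= edges[c]) & (freqs < edges[c + 1])
        cochleagram[c] = spectra[:, band].sum(axis=1) if band.any() else 0.0
    # (3) compressive nonlinearity
    return np.log(cochleagram + 1e-8)
```

Fed a one-second 1 kHz tone, this sketch returns a channels-by-time array whose peak sits in the channel containing 1 kHz, mimicking the pure-tone cochleagrams shown in the figures below.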

Remarkably, the simple models, which left out many of the biological details, consistently predicted the neural responses to diverse natural and artificial sounds as well as, or better than, the biologically-detailed models. This implies that only certain features of the processing that takes place in the ear and auditory nerve are transmitted through the brainstem to the cortex, and that many details have little impact on cortical activity. Last author Dr Nicol Harper said: “This suggests that there may be an underlying simplicity to the signal transformation from ear to cortex that is hidden among the detail. This hidden simplicity may be a feature of other sensory systems too.”

“Understanding the computations performed by the auditory pathway, as well as providing insight into brain function in general, will aid us in developing better hearing aids and ear and brain implants to help people with hearing loss.”

First author Monzilur Rahman said: "The ability to predict the time course of the responses of auditory neurons is very important when it comes to improving our understanding of how the brain processes the sounds we hear. However, achieving high accuracy in predicting the time course of neural responses has always proven to be very challenging. We have explored this hard problem, attempting to improve our ability to predict the responses of auditory cortical neurons, while also relating it to the complexity of the auditory periphery. I found it astonishing how a simple model aimed at capturing the computational essence of the auditory periphery can perform similarly to a biologically-detailed model. While measuring prediction performance for particular stimuli is a good test for a model, we have also put our models to a more rigorous test by assessing their ability to predict well across different datasets and brain states."

The full paper "Simple transformations capture auditory input to cortex" is available to read in PNAS.

Example cochleagrams from the study

Cochleagrams show the output of a cochlear model: the activity in each sound-frequency channel over time.
The complex biologically-detailed models are labelled WSR, Lyon, BEZ, MSS.
The simple spectrogram-based models are labelled spec-log, spec-log1plus, spec-power, spec-Hill.
The three models that performed particularly well at predicting cortical responses were spec-log, spec-power and spec-Hill.
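The names of the simple models hint at the form of compressive nonlinearity each applies to the spectrogram. The sketches below show plausible forms suggested by those names; the exponents and constants are hypothetical illustrations, not values taken from the paper:

```python
import numpy as np

# Plausible compressive nonlinearities suggested by the model names.
# All parameter values (eps, c, p, k, n) are hypothetical.
def spec_log(x, eps=1e-6):
    """Log compression: steep near zero, shallow for loud inputs."""
    return np.log(x + eps)

def spec_log1plus(x, c=1.0):
    """log(1 + x) compression: like log, but equal to 0 at x = 0."""
    return np.log1p(c * x)

def spec_power(x, p=0.3):
    """Power-law compression with exponent below 1."""
    return np.power(x, p)

def spec_hill(x, k=1.0, n=1.0):
    """Hill-function compression: saturates toward 1 for loud inputs."""
    return x**n / (x**n + k**n)
```

Each function rises steeply for quiet inputs and flattens for louder ones, matching the intensity response of the auditory nerve described above.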

Cochleagrams produced by each cochlear model for identical inputs

A. Each column is a different stimulus: a click, a low frequency pure tone, a high frequency pure tone, white noise, and a short and long clip of natural sound. B. Each row is a different cochlear model, the top four are more complex and biologically-detailed, the bottom four are simple and spectrogram-based.


Examples of cochleagrams of natural sound stimuli for each model


Cochleagrams showing frequencies ranging from high to low produced by each cochlear model when exposed to four natural sounds: a ferret vocalization, insects buzzing, speech, and running water.

