Deep learning, a form of biologically inspired machine learning, has recently undergone dramatic developments. It is already having an impact across society, from scientific discovery to climate change prediction. However, current deep learning models can require billions of parameters, take several weeks to train, and cost millions of pounds, with a large carbon footprint. It is therefore becoming increasingly important to develop efficient training methods for deep neural networks.
One of the key bottlenecks underlying this inefficiency is the need to perform a large number of computational steps sequentially. Inspired by recent findings on how biological neural networks learn, Rui Ponte Costa has proposed research to develop an efficient parallel deep learning model.
He comments, ‘We have recently proposed that a specialised brain region, the cerebellum, enables efficient parallel learning across the brain. The cerebellum has two defining features that make it well suited to this role: sparsity and modularity. First, its connectivity is highly sparse, with only four input connections per neuron, which should result in faster learning. Second, it is a highly modular system, which lends itself naturally to parallel learning. Inspired by these cerebellar features, we will develop a sparse-modular system for training neural networks in parallel.’
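To give a flavour of what a sparse-modular architecture of this kind might look like, the sketch below is a minimal, hypothetical illustration in PyTorch, not the project's actual method: it builds a layer in which each output unit receives exactly four inputs, mirroring the sparse fan-in of cerebellar neurons, and replicates it across independent modules whose forward passes could in principle run in parallel. The class name `SparseLinear` and all parameter values are illustrative assumptions.

```python
# A minimal sketch of a sparse-modular layer (illustrative only, not the
# proposed system): each output unit connects to exactly k=4 inputs via a
# fixed binary mask, and independent modules can be evaluated in parallel.
import torch
import torch.nn as nn

class SparseLinear(nn.Module):
    """Linear layer in which each output unit connects to only k inputs."""
    def __init__(self, in_features, out_features, k=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Fixed random binary mask: exactly k nonzero input connections
        # per output unit, echoing the cerebellum's sparse fan-in.
        mask = torch.zeros(out_features, in_features)
        for row in mask:
            row[torch.randperm(in_features)[:k]] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Masked weights keep the connectivity sparse during training.
        return x @ (self.weight * self.mask).t() + self.bias

# Modularity sketch: several independent sparse modules that share no
# parameters, so each could be trained on a separate device or process.
modules = nn.ModuleList(SparseLinear(128, 64, k=4) for _ in range(8))
x = torch.randn(32, 128)
outputs = [m(x) for m in modules]  # embarrassingly parallel across modules
print(outputs[0].shape)            # torch.Size([32, 64])
```

Because the modules share no parameters, their forward and backward passes are independent, which is the property a parallel training scheme would exploit; how the proposed research actually couples such modules is not specified here.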
Overall, this work aims to provide a new understanding of how the cerebellum plays a critical role in efficient brain-wide learning, with deep implications for both biological and artificial learning.