
© 2017 Neural Information Processing Systems Foundation. All rights reserved.

Cortical circuits exhibit intricate recurrent architectures that are remarkably similar across different brain areas. Such stereotyped structure suggests the existence of common computational principles, yet these principles have remained largely elusive. Inspired by gated-memory networks, namely long short-term memory networks (LSTMs), we introduce a recurrent neural network in which information is gated through inhibitory cells that are subtractive (subLSTM). We propose a natural mapping of subLSTMs onto known canonical excitatory-inhibitory cortical microcircuits. Our empirical evaluation across sequential image classification and language modelling tasks shows that subLSTM units can achieve performance similar to that of LSTM units. These results suggest that cortical circuits can be optimised to solve complex contextual problems, and they propose a novel view of the computational function of these circuits. Overall, our work provides a step towards unifying recurrent networks as used in machine learning with their biological counterparts.
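The abstract's key idea — replacing the LSTM's multiplicative input and output gates with subtractive inhibitory terms — can be sketched as a single recurrent step. This is a minimal NumPy illustration based only on the description above, not the paper's exact formulation: the gate names (`z`, `i`, `f`, `o`), the choice to keep a multiplicative forget gate, and the use of sigmoids throughout are assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sublstm_step(x, h_prev, c_prev, W, b):
    """One hypothetical subLSTM step, where input and output gates act
    subtractively (like inhibitory cells) rather than multiplicatively.

    W has shape (4n, m + n) for input size m and hidden size n; it maps
    the concatenated [x, h_prev] to the four pre-activations at once.
    """
    pre = W @ np.concatenate([x, h_prev]) + b
    # z: candidate input, i: inhibitory input gate,
    # f: forget gate, o: inhibitory output gate (all sigmoidal here).
    z, i, f, o = np.split(sigmoid(pre), 4)
    c = f * c_prev + z - i          # subtraction replaces i * z
    h = sigmoid(c) - o              # subtraction replaces o * tanh(c)
    return h, c
```

Because both `sigmoid(c)` and `o` lie in (0, 1), the hidden state `h` stays within (-1, 1), roughly matching the bounded output range of a standard LSTM cell.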

Type

Conference paper

Publication Date

01/01/2017

Volume

2017-December

Pages

272 - 283