
Given sensory input delivered via layer 4, the “students” in layer 2/3 generate predictions (“Will the apple fall?”), which are then evaluated by comparing them to the actual sensory outcome provided by the “teacher” in layer 5. In this way, the sensory input itself supervises learning (self-supervision).

It is widely believed that the neocortex learns an internal model of the world, yet the exact mechanisms have remained elusive. Drawing on insights from modern AI, researchers in the Costa group have published a study in Nature Communications that demonstrates for the first time that neocortical circuitry can explain a broad spectrum of learning phenomena observed in the neocortex. Their new computational theory proposes that the neocortex — specifically layers 2/3, 4, and 5 — learns to predict sensory input using the sensory input itself (self-supervision). The model suggests that:

  • Layer 2/3 (L2/3) generates predictions based on past sensory data, relayed via layer 4, and top-down contextual information.
  • Layer 5 (L5) compares predictions with actual input to generate an implicit teaching signal (self-supervision) that drives learning.
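The predict-compare-update loop described above can be sketched as a toy self-supervised learner. Everything in this sketch — the linear "world", the weight matrix standing in for L2/3, the learning rate — is an illustrative assumption for intuition, not the model actually published in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": the next sensory input is a fixed linear function of the
# current one. (Purely illustrative; the paper's model is richer.)
W_true = np.array([[0.9, -0.2],
                   [0.1,  0.8]])

W_l23 = np.zeros((2, 2))  # stand-in for L2/3 "student" prediction weights
lr = 0.1                  # learning rate (arbitrary choice)

for _ in range(2000):
    x_t = rng.normal(size=2)          # sensory input, relayed via layer 4
    x_next = W_true @ x_t             # actual outcome, available to layer 5
    pred = W_l23 @ x_t                # L2/3 prediction of the next input
    err = x_next - pred               # L5 "teacher": prediction-error signal
    W_l23 += lr * np.outer(err, x_t)  # error-driven plasticity update

# The learned weights approach the true dynamics: the sensory stream
# itself supplied the supervision, with no external labels.
print(np.allclose(W_l23, W_true, atol=0.05))
```

The key point the sketch illustrates is that the "teaching signal" is not an external label: it is simply the discrepancy between what the circuit predicted and what the senses subsequently delivered.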

The theory accounts for over two decades of experimental findings, ranging from synaptic plasticity to sensorimotor error responses in awake, behaving animals.

Since the same computational principles also underpin learning in modern artificial intelligence systems, this work establishes a compelling link between brain function and AI. Furthermore, the model makes clear, testable predictions that can drive future experiments and advance our understanding of the computational roles played by different cortical layers.

This work was led by Kevin Nejad, a PhD student in the Costa lab, in close collaboration with Paul Anastasiades, an experimental neuroscientist at the University of Bristol, and Loreen Hertäg, a theoretical neuroscientist at Technische Universität Berlin.

Associate Professor Rui Ponte Costa says, ‘Our findings reveal that the brain’s layered cortex may be purpose-built for the kind of self-supervised learning that drives today’s most advanced AI.’


Read the paper here