It is widely believed that the neocortex learns an internal model of the world, yet the exact mechanisms have remained elusive. Drawing on insights from modern AI, researchers in the Costa group have published a study in Nature Communications that demonstrates for the first time that neocortical circuitry can account for a broad spectrum of learning phenomena observed in the neocortex. Their new computational theory proposes that the neocortex — specifically layers 2/3, 4, and 5 — learns to predict sensory input using the sensory input itself (self-supervision). The model suggests that:
- Layer 2/3 (L2/3) generates predictions based on past sensory input (relayed via layer 4) together with top-down contextual information.
- Layer 5 (L5) compares these predictions with the actual input to generate an implicit teaching signal (self-supervision) that drives learning, as illustrated in the sketch below.
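To make this division of labour concrete, here is a minimal, self-contained sketch of such a self-supervised predictive loop in Python. It is not the authors' model: the layer sizes, the toy sensory stream, and the simple delta-rule update are all illustrative assumptions. The point is only that the prediction error, computed from the sensory stream itself rather than from any external label, is what drives the weight update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- illustrative only, not taken from the paper.
n_sensory, n_context, n_hidden = 8, 4, 16

# "L2/3"-like predictor: combines past sensory input (relayed via L4)
# with top-down context and maps it to a prediction of the next input.
W_in = rng.normal(scale=0.1, size=(n_hidden, n_sensory + n_context))
W_out = rng.normal(scale=0.1, size=(n_sensory, n_hidden))
lr = 0.05


def predict(past_sensory, context):
    """Return the predicted next sensory input and the hidden activity."""
    h = np.tanh(W_in @ np.concatenate([past_sensory, context]))
    return W_out @ h, h


for step in range(2000):
    # Toy sensory stream: the "next" input is just a fixed permutation of
    # the current one, standing in for whatever the world presents next.
    past = rng.normal(size=n_sensory)
    context = rng.normal(size=n_context)
    actual_next = np.roll(past, 1)

    prediction, h = predict(past, context)

    # "L5"-like comparison: the mismatch between prediction and actual
    # input is the implicit teaching signal -- no external labels needed.
    error = actual_next - prediction

    # Self-supervised update (simple delta rule on the readout weights).
    W_out += lr * np.outer(error, h)

    if step % 400 == 0:
        print(f"step {step:4d}  mean |error| = {np.abs(error).mean():.3f}")
```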
The theory accounts for over two decades of experimental findings, ranging from synaptic plasticity to sensorimotor error responses in awake, behaving animals.
Since the same computational principles also underpin learning in modern artificial intelligence systems, this work establishes a compelling link between brain function and AI. Furthermore, the model makes clear, testable predictions that can drive future experiments and advance our understanding of the computational roles played by different cortical layers.
This work was led by Kevin Nejad, a PhD student in the Costa lab, in close collaboration with Paul Anastasiades, an experimental neuroscientist at the University of Bristol, and Loreen Hertäg, a theoretical neuroscientist at Technische Universität Berlin.
Associate Professor Rui Ponte Costa says, ‘Our findings reveal that the brain’s layered cortex may be purpose-built for the kind of self-supervised learning that drives today’s most advanced AI.’
Read the paper here