Search results
Distributional coding of associative learning in discrete populations of midbrain dopamine neurons.
Midbrain dopamine neurons are thought to play key roles in learning by conveying the difference between expected and actual outcomes. Recent evidence suggests diversity in dopamine signaling, yet it remains poorly understood how heterogeneous signals might be organized to facilitate the role of downstream circuits mediating distinct aspects of behavior. Here, we investigated the organizational logic of dopaminergic signaling by recording and labeling individual midbrain dopamine neurons during associative behavior. Our findings show that reward information and behavioral parameters are not only heterogeneously encoded but also differentially distributed across populations of dopamine neurons. Retrograde tracing and fiber photometry suggest that populations of dopamine neurons projecting to different striatal regions convey distinct signals. These data, supported by computational modeling, indicate that such distributional coding can maximize dynamic range and tailor dopamine signals to facilitate specialized roles of different striatal regions.
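The "difference between expected and actual outcomes" that dopamine neurons are thought to convey is commonly formalised as a temporal-difference reward prediction error. A minimal sketch of that standard rule (function names and parameter values are illustrative, not taken from the paper):

```python
# Minimal temporal-difference (TD) sketch: the reward prediction error
# (delta) is the mismatch between received and predicted reward.
def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """Update the value estimate V[s] with the TD error."""
    delta = r + gamma * V[s_next] - V[s]  # "dopamine-like" prediction error
    V[s] += alpha * delta                 # learn from the error
    return delta

V = [0.0, 0.0]                              # value estimates for two states
delta = td_update(V, s=0, s_next=1, r=1.0)  # unexpected reward -> positive delta
```

In this framing, distributional coding would correspond to different neuron populations reporting different transformations of `delta` rather than one uniform scalar.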
Model-Based Inference of Synaptic Transmission.
Synaptic computation is believed to underlie many forms of animal behavior. A correct identification of synaptic transmission properties is thus crucial for a better understanding of how the brain processes information, stores memories and learns. Recently, a number of new statistical methods for inferring synaptic transmission parameters have been introduced. Here we review and contrast these developments, with a focus on methods aimed at inferring both synaptic release statistics and synaptic dynamics. Furthermore, based on recent proposals we discuss how such methods can be applied to data across different levels of investigation: from intracellular paired experiments to in vivo network-wide recordings. Overall, these developments open the door to reliably estimating synaptic parameters in behaving animals.
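As a toy version of the inference problem reviewed here, release statistics can be estimated from trial-by-trial data under a binomial release model; a minimal sketch assuming the number of release sites is known (all names and values are illustrative):

```python
import random

def estimate_release_prob(counts, n_sites):
    """Maximum-likelihood estimate of release probability p from
    per-trial vesicle-release counts, assuming a binomial model
    with a known number of release sites."""
    return sum(counts) / (len(counts) * n_sites)

# Simulated experiment: 2000 trials, 5 sites, true p = 0.3.
random.seed(0)
n_sites, p_true = 5, 0.3
counts = [sum(random.random() < p_true for _ in range(n_sites))
          for _ in range(2000)]
p_hat = estimate_release_prob(counts, n_sites)  # close to p_true
```

The methods the review covers go well beyond this, jointly inferring release statistics and short-term dynamics from noisy responses, but the binomial likelihood is the usual starting point.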
A deep learning framework for neuroscience.
Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
Pre- and postsynaptically expressed spike-timing-dependent plasticity contribute differentially to neuronal learning.
A plethora of experimental studies have shown that long-term synaptic plasticity can be expressed pre- or postsynaptically depending on a range of factors such as developmental stage, synapse type, and activity patterns. The functional consequences of this diversity are not clear, although it is understood that whereas postsynaptic expression of plasticity predominantly affects synaptic response amplitude, presynaptic expression alters both synaptic response amplitude and short-term dynamics. In most models of neuronal learning, long-term synaptic plasticity is implemented as changes in connective weights. The consideration of long-term plasticity as a fixed change in amplitude corresponds more closely to post- than to presynaptic expression, which means theoretical outcomes based on this choice of implementation may have a postsynaptic bias. To explore the functional implications of the diversity of expression of long-term synaptic plasticity, we adapted a model of long-term plasticity, more specifically spike-timing-dependent plasticity (STDP), such that it was expressed either independently pre- or postsynaptically, or in a mixture of both ways. We compared pair-based standard STDP models and a biologically tuned triplet STDP model, and investigated the outcomes in a minimal setting, using two different learning schemes: in the first, inputs were triggered at different latencies, and in the second a subset of inputs were temporally correlated. We found that presynaptic changes adjusted the speed of learning, while postsynaptic expression was more efficient at regulating spike timing and frequency. When combining both expression loci, postsynaptic changes amplified the response range, while presynaptic plasticity allowed control over postsynaptic firing rates, potentially providing a form of activity homeostasis. 
Our findings highlight how the seemingly innocuous choice of implementing synaptic plasticity as a single weight modification may unwittingly introduce a postsynaptic bias into modelling outcomes. We conclude that pre- and postsynaptically expressed plasticity are not interchangeable, but enable complementary functions.
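The pair-based STDP models compared above are conventionally written as exponential windows over the pre-post spike-timing difference; a minimal sketch of that standard rule (amplitudes and time constants are illustrative, not the paper's fitted values):

```python
import math

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: weight change for a spike pair with timing
    difference dt = t_post - t_pre (ms). Pre-before-post (dt > 0)
    potentiates; post-before-pre depresses."""
    if dt > 0:
        return A_plus * math.exp(-dt / tau_plus)
    return -A_minus * math.exp(dt / tau_minus)
```

The paper's point is about where `dw` is applied: adding it to a single amplitude-like weight mimics postsynaptic expression, whereas presynaptic expression would also have to change short-term dynamics.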
Developmental depression-to-facilitation shift controls excitation-inhibition balance.
Changes in the short-term dynamics of excitatory synapses over development have been observed throughout cortex, but their purpose and consequences remain unclear. Here, we propose that developmental changes in synaptic dynamics buffer the effect of slow inhibitory long-term plasticity, allowing for continuously stable neural activity. Using computational modeling we demonstrate that early in development excitatory short-term depression quickly stabilises neural activity, even in the face of strong, unbalanced excitation. We introduce a model of the commonly observed developmental shift from depression to facilitation and show that neural activity remains stable throughout development, while inhibitory synaptic plasticity slowly balances excitation, consistent with experimental observations. Our model predicts changes in the input responses from phasic to phasic-and-tonic and more precise spike timings. We also observe a gradual emergence of short-lasting memory traces governed by short-term plasticity development. We conclude that the developmental depression-to-facilitation shift may control excitation-inhibition balance throughout development with important functional consequences.
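Short-term depression and facilitation of the kind modelled here are often described with a Tsodyks-Markram-style resource model; a minimal per-spike sketch of one common variant (parameter values are illustrative, not the paper's):

```python
import math

def tm_synapse(spike_times, U=0.5, tau_rec=800.0, tau_fac=0.0):
    """Tsodyks-Markram-style model: per-spike synaptic efficacies.
    U: baseline utilisation (release probability); tau_rec: recovery
    time (ms) of resources x; tau_fac: facilitation time (ms) of
    utilisation u (tau_fac=0 -> purely depressing)."""
    x, u = 1.0, U
    last_t, out = None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)  # resources recover
            if tau_fac > 0:
                u = U + (u - U) * math.exp(-dt / tau_fac)  # facilitation decays
        out.append(u * x)           # efficacy of this spike
        x -= u * x                  # resources consumed
        if tau_fac > 0:
            u += U * (1.0 - u)      # facilitation increment
        last_t = t
    return out

dep = tm_synapse([0.0, 50.0, 100.0])                                # depressing
fac = tm_synapse([0.0, 50.0, 100.0], U=0.1, tau_rec=100.0, tau_fac=500.0)
```

A developmental depression-to-facilitation shift corresponds to moving from the first parameter regime (high `U`, no facilitation) towards the second (low `U`, long `tau_fac`).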
Cerebro-cerebellar networks facilitate learning through feedback decoupling.
Behavioural feedback is critical for learning in the cerebral cortex. However, such feedback is often not readily available. How the cerebral cortex learns efficiently despite the sparse nature of feedback remains unclear. Inspired by recent deep learning algorithms, we introduce a systems-level computational model of cerebro-cerebellar interactions. In this model a cerebral recurrent network receives feedback predictions from a cerebellar network, thereby decoupling learning in cerebral networks from future feedback. When trained in a simple sensorimotor task the model shows faster learning and reduced dysmetria-like behaviours, in line with the widely observed functional impact of the cerebellum. Next, we demonstrate that these results generalise to more complex motor and cognitive tasks. Finally, the model makes several experimentally testable predictions regarding cerebro-cerebellar task-specific representations over learning, task-specific benefits of cerebellar predictions and the differential impact of cerebellar and inferior olive lesions. Overall, our work offers a theoretical framework of cerebro-cerebellar networks as feedback decoupling machines.
Cortical microcircuits as gated-recurrent neural networks.
Cortical circuits exhibit intricate recurrent architectures that are remarkably similar across different brain areas. Such stereotyped structure suggests the existence of common computational principles. However, such principles have remained largely elusive. Inspired by gated-memory networks, namely long short-term memory networks (LSTMs), we introduce a recurrent neural network in which information is gated through inhibitory cells that are subtractive (subLSTM). We propose a natural mapping of subLSTMs onto known canonical excitatory-inhibitory cortical microcircuits. Our empirical evaluation across sequential image classification and language modelling tasks shows that subLSTM units can achieve similar performance to LSTM units. These results suggest that cortical circuits can be optimised to solve complex contextual problems, offering a novel view of their computational function. Overall, our work provides a step towards unifying recurrent networks as used in machine learning with their biological counterparts.
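The subtractive gating idea can be sketched in a few lines: where an LSTM multiplies inputs by its gates, a subLSTM subtracts inhibitory gate activity from the excitatory drive. A minimal scalar sketch assuming that subtractive formulation (weights and the helper names are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sublstm_step(x, h, c, w):
    """One scalar subLSTM step: gates act subtractively rather than
    multiplicatively. w maps gate name -> (weight_x, weight_h, bias)."""
    def gate(name):
        wx, wh, b = w[name]
        return sigmoid(wx * x + wh * h + b)
    i = gate("i")               # input gate (inhibitory)
    f = gate("f")               # forget gate
    o = gate("o")               # output gate (inhibitory)
    z = gate("z")               # candidate input ("excitation")
    c_new = f * c + (z - i)     # subtractive input gating
    h_new = sigmoid(c_new) - o  # subtractive output gating
    return h_new, c_new

w = {g: (0.5, -0.3, 0.1) for g in ("i", "f", "o", "z")}
h, c = sublstm_step(x=1.0, h=0.0, c=0.0, w=w)
```

The subtraction `z - i` is what maps naturally onto inhibitory interneurons summing into the same postsynaptic cell, in contrast to the elementwise products of a standard LSTM.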
The short-term plasticity of VIP interneurons in motor cortex.
Short-term plasticity is an important feature in the brain for shaping neural dynamics and for information processing. Short-term plasticity is known to depend on many factors including brain region, cortical layer, and cell type. Here we focus on vasoactive-intestinal peptide (VIP) interneurons (INs). VIP INs play a key disinhibitory role in cortical circuits by inhibiting other IN types, including Martinotti cells (MCs) and basket cells (BCs). Despite this prominent role, short-term plasticity at synapses to and from VIP INs is not well described. In this study, we therefore characterized the short-term plasticity at inputs and outputs of genetically targeted VIP INs in mouse motor cortex. To explore inhibitory to inhibitory (I → I) short-term plasticity at layer 2/3 (L2/3) VIP IN outputs onto L5 MCs and BCs, we relied on a combination of whole-cell recording, 2-photon microscopy, and optogenetics, which revealed that VIP IN→MC/BC synapses were consistently short-term depressing. To explore excitatory (E) → I short-term plasticity at inputs to VIP INs, we used extracellular stimulation. Surprisingly, unlike VIP IN outputs, E → VIP IN synapses exhibited heterogeneous short-term dynamics, which we attributed to the target VIP IN cell rather than the input. Computational modeling furthermore linked the diversity in short-term dynamics at VIP IN inputs to a wide variability in probability of release. Taken together, our findings highlight how short-term plasticity at VIP IN inputs and outputs is specific to synapse type. We propose that the broad diversity in short-term plasticity of VIP IN inputs forms a basis to code for a broad range of contrasting signal dynamics.
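The link between release probability and short-term dynamics reported here can be illustrated with a simple resource-depletion model, in which a higher release probability depletes more resources and lowers the paired-pulse ratio; a minimal sketch (the model and its parameters are illustrative, not the study's fits):

```python
import math

def ppr_depletion(p, dt=50.0, tau_rec=800.0):
    """Paired-pulse ratio in a simple resource-depletion model:
    the first spike consumes a fraction p of resources, which
    recover with time constant tau_rec (ms). Since both responses
    scale with p, PPR reduces to the remaining resource fraction."""
    x2 = 1.0 - p * math.exp(-dt / tau_rec)  # resources left at 2nd spike
    return x2

ppr_high = ppr_depletion(0.9)  # high release probability -> strong depression
ppr_low = ppr_depletion(0.1)   # low release probability  -> mild depression
```

Under this picture, the wide variability in release probability inferred for E → VIP IN inputs directly yields the observed heterogeneity in short-term dynamics.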
Cerebellar-driven cortical dynamics can enable task acquisition, switching and consolidation.
The brain must maintain a stable world model while rapidly adapting to the environment, but the underlying mechanisms are not known. Here, we posit that cortico-cerebellar loops play a key role in this process. We introduce a computational model of cerebellar networks that learn to drive cortical networks with task-outcome predictions. First, using sensorimotor tasks, we show that cerebellar feedback in the presence of stable cortical networks is sufficient for rapid task acquisition and switching. Next, we demonstrate that, when trained in working memory tasks, the cerebellum can also underlie the maintenance of cognitive-specific dynamics in the cortex, explaining a range of optogenetic and behavioural observations. Finally, using our model, we introduce a systems consolidation theory in which task information is gradually transferred from the cerebellum to the cortex. In summary, our findings suggest that cortico-cerebellar loops are an important component of task acquisition, switching, and consolidation in the brain.
Synaptic Transmission Optimization Predicts Expression Loci of Long-Term Plasticity.
Long-term modifications of neuronal connections are critical for reliable memory storage in the brain. However, their locus of expression (pre- or postsynaptic) is highly variable. Here we introduce a theoretical framework in which long-term plasticity performs an optimization of the postsynaptic response statistics toward a given mean with minimal variance. Consequently, the state of the synapse at the time of plasticity induction determines the ratio of pre- and postsynaptic modifications. Our theory explains the experimentally observed expression loci of the hippocampal and neocortical synaptic potentiation studies we examined. Moreover, the theory predicts presynaptic expression of long-term depression, consistent with experimental observations. At inhibitory synapses, the theory suggests a statistically efficient excitatory-inhibitory balance in which changes in inhibitory postsynaptic response statistics specifically target the mean excitation. Our results provide a unifying theory for understanding the expression mechanisms and functions of long-term plasticity of synaptic transmission.
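Under the standard binomial model of transmission (N release sites, release probability p, quantal amplitude q), presynaptic changes (p) and postsynaptic changes (q) move the response mean and variance differently, which is the statistical lever such an optimization can exploit. A minimal sketch (parameter values are illustrative):

```python
def binomial_response_stats(n_sites, p, q):
    """Mean and variance of the postsynaptic response under a
    binomial release model: n_sites release sites, release
    probability p, quantal amplitude q."""
    mean = n_sites * p * q
    var = n_sites * p * (1.0 - p) * q ** 2
    return mean, var

# The same mean response can be reached pre- or postsynaptically,
# but with different variances:
m_pre, v_pre = binomial_response_stats(n_sites=10, p=0.8, q=0.5)   # raise p
m_post, v_post = binomial_response_stats(n_sites=10, p=0.4, q=1.0) # raise q
```

Because raising p also shrinks the (1 - p) term, presynaptic potentiation reaches a target mean with lower variance here, illustrating why the synapse's current state can dictate the statistically optimal expression locus.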