Search results
Learning to hear: plasticity of auditory cortical processing.
Sensory experience and auditory cortex plasticity are intimately related. This relationship is most striking during infancy when changes in sensory input can have profound effects on the functional organization of the developing cortex. But a considerable degree of plasticity is retained throughout life, as demonstrated by the cortical reorganization that follows damage to the sensory periphery or by the more controversial changes in response properties that are thought to accompany perceptual learning. Recent studies in the auditory system have revealed the remarkably adaptive nature of sensory processing and provided important insights into the way in which cortical circuits are shaped by experience and learning.
Complexity of frequency receptive fields predicts tonotopic variability across species.
Primary cortical areas contain maps of sensory features, including sound frequency in primary auditory cortex (A1). Two-photon calcium imaging in mice has confirmed the presence of these global tonotopic maps, while uncovering an unexpected local variability in the stimulus preferences of individual neurons in A1 and other primary regions. Here we show that local heterogeneity of frequency preferences is not unique to rodents. Using two-photon calcium imaging in layers 2/3, we found that local variance in frequency preferences is equivalent in ferrets and mice. Neurons with multipeaked frequency tuning are less spatially organized than those tuned to a single frequency in both species. Furthermore, we show that microelectrode recordings may describe a smoother tonotopic arrangement due to a sampling bias towards neurons with simple frequency tuning. These results help explain previous inconsistencies in cortical topography across species and recording techniques.
Subcortical Circuits Mediate Communication Between Primary Sensory Cortical Areas
Integration of information across the senses is critical for perception and is a common property of neurons in the cerebral cortex, where it is thought to arise primarily from corticocortical connections. Much less is known about the role of subcortical circuits in shaping the multisensory properties of cortical neurons. We show that stimulation of the whiskers causes widespread suppression of sound-evoked activity in mouse primary auditory cortex (A1). This suppression depends on the primary somatosensory cortex (S1), and is implemented through a descending circuit that links S1, via the auditory midbrain, with thalamic neurons that project to A1. Furthermore, a direct pathway from S1 has a facilitatory effect on auditory responses in higher-order thalamic nuclei that project to other brain areas. Crossmodal corticofugal projections to the auditory midbrain and thalamus therefore play a pivotal role in integrating multisensory signals and in enabling communication between different sensory cortical areas.
Distributional coding of associative learning in discrete populations of midbrain dopamine neurons.
Midbrain dopamine neurons are thought to play key roles in learning by conveying the difference between expected and actual outcomes. Recent evidence suggests diversity in dopamine signaling, yet it remains poorly understood how heterogeneous signals might be organized to facilitate the role of downstream circuits mediating distinct aspects of behavior. Here, we investigated the organizational logic of dopaminergic signaling by recording and labeling individual midbrain dopamine neurons during associative behavior. Our findings show that reward information and behavioral parameters are not only heterogeneously encoded but also differentially distributed across populations of dopamine neurons. Retrograde tracing and fiber photometry suggest that populations of dopamine neurons projecting to different striatal regions convey distinct signals. These data, supported by computational modeling, indicate that such distributional coding can maximize dynamic range and tailor dopamine signals to facilitate specialized roles of different striatal regions.
Model-Based Inference of Synaptic Transmission.
Synaptic computation is believed to underlie many forms of animal behavior. A correct identification of synaptic transmission properties is thus crucial for a better understanding of how the brain processes information, stores memories and learns. Recently, a number of new statistical methods for inferring synaptic transmission parameters have been introduced. Here we review and contrast these developments, with a focus on methods aimed at inferring both synaptic release statistics and synaptic dynamics. Furthermore, based on recent proposals we discuss how such methods can be applied to data across different levels of investigation: from intracellular paired experiments to in vivo network-wide recordings. Overall, these developments open the door to reliably estimating synaptic parameters in behaving animals.
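As a concrete illustration of the kind of inference problem discussed above, the sketch below fits a simple binomial quantal release model (N release sites, release probability p, quantal size q, Gaussian noise) to simulated response amplitudes by maximum likelihood. The model, parameter names and values are illustrative assumptions, not the specific methods reviewed in the paper.

```python
# Hypothetical sketch: maximum-likelihood fit of a binomial quantal release model
# to evoked response amplitudes. All parameters and priors are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, norm

def negative_log_likelihood(params, amplitudes, n_sites):
    """Mixture likelihood: on each trial, k of n_sites quanta are released."""
    p, q, sigma = params
    k = np.arange(n_sites + 1)
    weights = binom.pmf(k, n_sites, p)                       # P(k quanta released)
    # P(amplitude | k quanta released), Gaussian measurement noise around k * q
    per_component = norm.pdf(amplitudes[:, None], loc=k * q, scale=sigma)
    per_trial = per_component @ weights
    return -np.sum(np.log(per_trial + 1e-12))

# Simulate responses from known ground-truth parameters (assumed, for illustration)
rng = np.random.default_rng(0)
true_n, true_p, true_q, true_sigma = 5, 0.4, 1.0, 0.2
released = rng.binomial(true_n, true_p, size=500)
amplitudes = released * true_q + rng.normal(0.0, true_sigma, size=500)

# Maximum-likelihood estimate of (p, q, sigma), with the number of sites assumed known
fit = minimize(negative_log_likelihood, x0=[0.5, 0.8, 0.3],
               args=(amplitudes, true_n),
               bounds=[(0.01, 0.99), (0.1, 5.0), (0.05, 2.0)])
print("estimated p, q, sigma:", np.round(fit.x, 2))
```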
A deep learning framework for neuroscience.
Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
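To make the three designed components concrete, the toy sketch below labels an architecture (a small feedforward network), an objective function (mean-squared error) and a learning rule (gradient descent). These particular choices are assumptions made purely for illustration, not recommendations from the paper.

```python
# Minimal sketch of the three designed components named above; the specific
# network, loss and update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Architecture: a fixed two-layer feedforward network
W1 = rng.normal(0, 0.1, (20, 10))
W2 = rng.normal(0, 0.1, (1, 20))

def forward(x):
    h = np.tanh(W1 @ x)
    return W2 @ h, h

# Objective function: mean-squared error between output and target
def loss(y_hat, y):
    return 0.5 * np.mean((y_hat - y) ** 2)

# Learning rule: gradient descent on the objective
def update(x, y, lr=0.05):
    global W1, W2
    y_hat, h = forward(x)
    err = y_hat - y
    W2 -= lr * np.outer(err, h)
    dh = (W2.T @ err) * (1 - h ** 2)
    W1 -= lr * np.outer(dh, x)
    return loss(y_hat, y)

x, y = rng.normal(size=10), np.array([1.0])
for step in range(200):
    l = update(x, y)
print("final loss:", l)
```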
Pre- and postsynaptically expressed spike-timing-dependent plasticity contribute differentially to neuronal learning.
A plethora of experimental studies have shown that long-term synaptic plasticity can be expressed pre- or postsynaptically depending on a range of factors such as developmental stage, synapse type, and activity patterns. The functional consequences of this diversity are not clear, although it is understood that whereas postsynaptic expression of plasticity predominantly affects synaptic response amplitude, presynaptic expression alters both synaptic response amplitude and short-term dynamics. In most models of neuronal learning, long-term synaptic plasticity is implemented as changes in connective weights. The consideration of long-term plasticity as a fixed change in amplitude corresponds more closely to post- than to presynaptic expression, which means theoretical outcomes based on this choice of implementation may have a postsynaptic bias. To explore the functional implications of the diversity of expression of long-term synaptic plasticity, we adapted a model of long-term plasticity, more specifically spike-timing-dependent plasticity (STDP), such that it was expressed either independently pre- or postsynaptically, or in a mixture of both ways. We compared standard pair-based STDP models and a biologically tuned triplet STDP model, and investigated the outcomes in a minimal setting, using two different learning schemes: in the first, inputs were triggered at different latencies, and in the second a subset of inputs were temporally correlated. We found that presynaptic changes adjusted the speed of learning, while postsynaptic expression was more efficient at regulating spike timing and frequency. When combining both expression loci, postsynaptic changes amplified the response range, while presynaptic plasticity allowed control over postsynaptic firing rates, potentially providing a form of activity homeostasis. Our findings highlight how the seemingly innocuous choice of implementing synaptic plasticity as a single weight modification may unwittingly introduce a postsynaptic bias in modelling outcomes. We conclude that pre- and postsynaptically expressed plasticity are not interchangeable, but enable complementary functions.
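The sketch below illustrates the distinction at the heart of this study, assuming a simple pair-based STDP kernel whose outcome is expressed either postsynaptically (as a change in quantal amplitude q) or presynaptically (as a change in release probability p, which also alters short-term dynamics). Variable names, time constants and learning rates are illustrative and not taken from the published models.

```python
# Hedged sketch: pair-based STDP with either presynaptic or postsynaptic
# expression. All constants are illustrative assumptions.
import numpy as np

tau_plus, tau_minus = 20.0, 20.0      # ms, STDP time windows
A_plus, A_minus = 0.01, 0.012         # learning rates

def stdp_update(delta_t):
    """Pair-based STDP kernel: delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:
        return A_plus * np.exp(-delta_t / tau_plus)     # potentiation
    return -A_minus * np.exp(delta_t / tau_minus)       # depression

# Synaptic state: effective weight ~ p * q
p, q = 0.5, 1.0

def apply_postsynaptic(dw):
    """Postsynaptic expression: change quantal amplitude only."""
    global q
    q = np.clip(q + dw, 0.0, 5.0)

def apply_presynaptic(dw):
    """Presynaptic expression: change release probability, which also
    reshapes short-term dynamics."""
    global p
    p = np.clip(p + dw, 0.05, 0.95)

# Example: a pre-before-post pairing at +10 ms, expressed at each locus
dw = stdp_update(10.0)
apply_postsynaptic(dw)   # amplitude change
apply_presynaptic(dw)    # release-probability change
print(f"p = {p:.3f}, q = {q:.3f}, effective weight = {p*q:.3f}")
```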
Developmental depression-to-facilitation shift controls excitation-inhibition balance.
Changes in the short-term dynamics of excitatory synapses over development have been observed throughout cortex, but their purpose and consequences remain unclear. Here, we propose that developmental changes in synaptic dynamics buffer the effect of slow inhibitory long-term plasticity, allowing for continuously stable neural activity. Using computational modeling, we demonstrate that early in development excitatory short-term depression quickly stabilises neural activity, even in the face of strong, unbalanced excitation. We introduce a model of the commonly observed developmental shift from depression to facilitation and show that neural activity remains stable throughout development, while inhibitory synaptic plasticity slowly balances excitation, consistent with experimental observations. Our model predicts a change in input responses from phasic to phasic-and-tonic, as well as more precise spike timing. We also observe a gradual emergence of short-lasting memory traces governed by short-term plasticity development. We conclude that the developmental depression-to-facilitation shift may control excitation-inhibition balance throughout development, with important functional consequences.
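A standard Tsodyks-Markram-style description of short-term plasticity makes the depression-to-facilitation shift easy to see; the sketch below compares an "early" depressing parameter set with a "late" facilitating one. The parameter values are assumptions for illustration only, not the values fitted in the study.

```python
# Hedged sketch: Tsodyks-Markram-style short-term plasticity, comparing a
# depressing and a facilitating parameter regime. Values are illustrative.
import numpy as np

def tm_synapse(spike_times, U, tau_rec, tau_facil):
    """Return relative synaptic efficacy at each presynaptic spike."""
    x, u = 1.0, U                     # x: available resources, u: utilisation
    last_t, efficacies = 0.0, []
    for t in spike_times:
        dt = t - last_t
        x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)    # resource recovery
        u = U + (u - U) * np.exp(-dt / tau_facil)      # facilitation decay
        efficacies.append(u * x)
        x -= u * x                                     # resources consumed by release
        u += U * (1.0 - u)                             # facilitation increment
        last_t = t
    return np.array(efficacies)

spikes = np.arange(0.0, 200.0, 20.0)   # 50 Hz train, times in ms

early = tm_synapse(spikes, U=0.7, tau_rec=500.0, tau_facil=10.0)    # depressing
late  = tm_synapse(spikes, U=0.1, tau_rec=100.0, tau_facil=600.0)   # facilitating

print("early (depressing):  ", np.round(early / early[0], 2))
print("late  (facilitating):", np.round(late / late[0], 2))
```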
Cerebro-cerebellar networks facilitate learning through feedback decoupling.
Behavioural feedback is critical for learning in the cerebral cortex. However, such feedback is often not readily available. How the cerebral cortex learns efficiently despite the sparse nature of feedback remains unclear. Inspired by recent deep learning algorithms, we introduce a systems-level computational model of cerebro-cerebellar interactions. In this model a cerebral recurrent network receives feedback predictions from a cerebellar network, thereby decoupling learning in cerebral networks from future feedback. When trained in a simple sensorimotor task the model shows faster learning and reduced dysmetria-like behaviours, in line with the widely observed functional impact of the cerebellum. Next, we demonstrate that these results generalise to more complex motor and cognitive tasks. Finally, the model makes several experimentally testable predictions regarding cerebro-cerebellar task-specific representations over learning, task-specific benefits of cerebellar predictions and the differential impact of cerebellar and inferior olive lesions. Overall, our work offers a theoretical framework of cerebro-cerebellar networks as feedback decoupling machines.
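The feedback-decoupling idea can be caricatured in a few lines: a "cerebellar" module learns to predict the teaching signal from current activity, so the "cerebral" readout can update immediately rather than waiting for sparse, delayed feedback. The linear task, dimensions and learning rates below are illustrative assumptions and do not reproduce the published recurrent model.

```python
# Hedged sketch of feedback decoupling: the cerebellar module predicts the
# teaching signal; true feedback arrives only occasionally. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, lr = 10, 1, 0.05

W_cerebral = rng.normal(0, 0.1, (n_out, n_in))      # cerebral readout weights
W_cerebellar = rng.normal(0, 0.1, (n_out, n_in))    # predicts the readout's target
target_map = rng.normal(0, 1.0, (n_out, n_in))      # ground-truth task mapping

for step in range(2000):
    x = rng.normal(size=n_in)
    y_hat = W_cerebral @ x                           # cerebral output
    y_cb = W_cerebellar @ x                          # cerebellar prediction of the target

    # Cerebral learning proceeds immediately, using the predicted (not true) error
    predicted_error = y_cb - y_hat
    W_cerebral += lr * np.outer(predicted_error, x)

    # True feedback is sparse: it arrives only every 10th step and is used
    # solely to refine the cerebellar prediction
    if step % 10 == 0:
        y_true = target_map @ x
        W_cerebellar += lr * np.outer(y_true - y_cb, x)

x_test = rng.normal(size=n_in)
print("test error:", np.abs(target_map @ x_test - W_cerebral @ x_test))
```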
Cortical microcircuits as gated-recurrent neural networks
Cortical circuits exhibit intricate recurrent architectures that are remarkably similar across different brain areas. Such stereotyped structure suggests the existence of common computational principles. However, these principles have remained largely elusive. Inspired by gated-memory networks, namely long short-term memory networks (LSTMs), we introduce a recurrent neural network in which information is gated through inhibitory cells that are subtractive (subLSTM). We propose a natural mapping of subLSTMs onto known canonical excitatory-inhibitory cortical microcircuits. Our empirical evaluation across sequential image classification and language modelling tasks shows that subLSTM units can achieve similar performance to LSTM units. These results suggest that cortical circuits can be optimised to solve complex contextual problems and offer a novel view of their computational function. Overall, our work provides a step towards unifying recurrent networks as used in machine learning with their biological counterparts.
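A minimal sketch of the subtractive-gating idea is given below: inhibitory gates subtract from, rather than multiply, the signals they control. This follows one plausible reading of the subLSTM description above; the published equations may differ in detail, and all dimensions and initialisations are assumptions.

```python
# Hedged sketch of a subtractively gated recurrent cell in the spirit of subLSTM.
# Gate arithmetic and initialisations are illustrative, not the published model.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SubLSTMCell:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        shape = (n_hidden, n_in + n_hidden)   # each matrix acts on [x_t, h_{t-1}]
        self.Wz, self.Wi, self.Wf, self.Wo = [rng.normal(0, 0.1, shape) for _ in range(4)]
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)        # candidate input (excitatory drive)
        i = sigmoid(self.Wi @ xh)        # input gate, subtractive (inhibitory)
        f = sigmoid(self.Wf @ xh)        # forget gate on the memory cell
        o = sigmoid(self.Wo @ xh)        # output gate, subtractive (inhibitory)
        c_new = f * c + (z - i)          # gating by subtraction, not multiplication
        h_new = sigmoid(c_new) - o
        return h_new, c_new

cell = SubLSTMCell(n_in=8, n_hidden=16)
h, c = np.zeros(16), np.zeros(16)
for t in range(5):
    x_t = np.random.default_rng(t).normal(size=8)
    h, c = cell.step(x_t, h, c)
print("hidden state after 5 steps:", np.round(h[:5], 3))
```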