
Samuel Liebana Garcia

DPhil Student

Samuel graduated with an MEng in Electrical, Information, and Control Engineering from the University of Cambridge in 2020. He became interested in neuroscience during a 2018 summer research placement at the University of Freiburg (Germany), where he developed a MATLAB toolbox for analysing electrophysiological recordings from the subthalamic nucleus (STN) and primary motor cortex (M1) in a rat model of Parkinson's disease. In 2019, he was selected for a Summer Undergraduate Research Fellowship (SURF) at Caltech (USA), where he applied methods from causal inference to questions in neuroscience under the supervision of Prof Frederick Eberhardt.

Samuel moved to Oxford in 2020 as a student on the MSc in Neuroscience. In his first rotation, he demonstrated the presence of turbulence in resting-state MEG data with Prof Morten Kringelbach. In his second rotation, he developed a race model with learning to capture how mice learn a simple perceptual decision-making task, working with Dr Armin Lak and Dr Andrew Saxe.

Samuel is now a final-year DPhil student supervised by Dr Armin Lak (DPAG), Dr Andrew Saxe (SWC/Gatsby Unit), and Prof Rafal Bogacz (BNDU). He researches the role of dopamine in learning to make perceptual decisions by relating mathematical models of mouse behaviour to recordings of dopamine release across the striatum. More broadly, he is interested in developing and testing formal theories of learning at the intersection of neuroscience and reinforcement learning.