Established August 2020 at EPFL
Welcome to the Mathis Group!
We work at the intersection of computational neuroscience and machine learning. Ultimately, and this is very aspirational, we aim to understand behavior in computational terms and to reverse-engineer the algorithms of the brain, both to figure out how the brain works and to build better artificial intelligence systems.
We develop machine learning tools for behavioral and neural data analysis and, conversely, try to learn from the brain to solve challenging machine learning problems, such as learning motor skills or estimating poses in crowded scenes. Check out some of our research directions below.
We are also passionate about wildlife conservation and are thrilled that our tools can contribute beyond neuroscience. Check out our Nature Communications paper "Perspectives in machine learning for wildlife conservation", written with many collaborators!
Machine Learning Tools for Animal Behavior Analysis
We strive to develop tools for the analysis of animal behavior. Behavior is a complex reflection of an animal's goals, state, and character. Accurately measuring behavior is thus crucial for advancing basic neuroscience, as well as the study of various neural and psychiatric disorders. However, measuring behavior (from video) is itself a challenging computer vision and machine learning problem. Our work therefore advances machine learning and computer vision to push the state of the art in behavioral analysis.
Published work in this field includes DeepLabCut, a popular open-source software tool for pose estimation. For action segmentation and related behavioral analyses, check out DLC2action, AmadeusGPT, hBehaveMAE, and WildCLIP.
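For readers new to DeepLabCut, a typical workflow through its Python API looks roughly like the sketch below; the project name, experimenter, and video paths are hypothetical placeholders, and the DeepLabCut documentation remains the authoritative reference.

```python
# A rough sketch of a typical DeepLabCut workflow; project name,
# experimenter, and video paths are hypothetical placeholders.
import deeplabcut

# Create a project; this returns the path to a config.yaml that
# stores all project settings
config_path = deeplabcut.create_new_project(
    "mouse-reaching",                   # hypothetical project name
    "researcher",                       # hypothetical experimenter
    ["/data/videos/session1.mp4"],      # hypothetical video list
    copy_videos=True,
)

# Extract and label frames, then train and evaluate a pose model
deeplabcut.extract_frames(config_path, mode="automatic")
deeplabcut.label_frames(config_path)    # opens the labeling GUI
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)
deeplabcut.evaluate_network(config_path)

# Run inference on new videos to obtain keypoint trajectories
deeplabcut.analyze_videos(config_path, ["/data/videos/session2.mp4"])
```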
Brain-inspired motor skill learning
Watching any expert athlete makes it apparent that the brain has mastered the elegant control of our bodies. This is an astonishing feat, especially considering the inherent challenges of slow neural hardware and the sensory and motor latencies that impede control. Understanding how the brain achieves skilled behavior is one of the core questions of neuroscience, which we tackle through modeling with reinforcement learning and control theory.
Check out DMAP, Lattice, our winning code for the MyoChallenge at NeurIPS 2022 and 2023, and more!
Our winning solution for the inaugural NeurIPS MyoChallenge leverages an approach mirroring human skill learning. Using a novel curriculum learning approach, we trained a recurrent neural network to control a realistic model of the human hand with 39 muscles to rotate two Baoding balls in the palm of the hand. In agreement with data from human subjects, the policy uncovers a small number of kinematic synergies, even though it is not explicitly biased towards low-dimensional solutions. However, by selectively inactivating parts of the control signal, we found that more dimensions contribute to task performance than suggested by traditional synergy analysis. Check out the paper in Neuron.
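To make the curriculum idea concrete, here is a minimal sketch using stable-baselines3 and Gymnasium, where the Pendulum task's gravity parameter stands in as a hypothetical difficulty knob. This is a toy illustration under those assumptions, not our competition code, which used a recurrent policy and a musculoskeletal hand model.

```python
# Toy illustration of curriculum learning: train one policy through a
# sequence of increasingly hard task variants, carrying the learned
# weights and timestep count across stages.
# Requires gymnasium and stable-baselines3; gravity is a stand-in
# difficulty knob, not part of any MyoChallenge task.
import gymnasium as gym
from stable_baselines3 import PPO

# Curriculum: weak gravity (easy swing-up) up to full gravity
curriculum = [2.0, 5.0, 10.0]

model = PPO("MlpPolicy", gym.make("Pendulum-v1", g=curriculum[0]), verbose=0)
for g in curriculum:
    model.set_env(gym.make("Pendulum-v1", g=g))
    # reset_num_timesteps=False keeps training continuous across stages
    model.learn(total_timesteps=50_000, reset_num_timesteps=False)

model.save("pendulum_curriculum")
```

The key design choice is that the policy is never re-initialized: each stage starts from the competence acquired on the previous, easier stage, which is what makes the hardest variant tractable.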
Task-driven models of proprioception & sensorimotor processing
We develop normative theories and models of sensorimotor transformations and learning. Recent work has demonstrated that networks trained on object-recognition tasks provide excellent models of the visual system. Yet, for sensorimotor circuits this fruitful approach remains less explored, perhaps due to the lack of datasets like ImageNet. Thus, we explore task demands, such as controlling an arm or learning motor skills, and investigate the emerging representations and computations.
One key hypothesis is that the computations of brain circuits for motor control and learning emerge when circuit models are optimized for ethological behaviors. We test and improve these models in collaboration with experimental labs.
Initially, we modeled the proprioceptive system of humans (Sandbrink* & Mamidanna* et al., eLife 2023). In subsequent work, we expanded this framework to test which hypothesis best explains the neural dynamics of proprioceptive units in the brainstem and somatosensory cortex (Marin Vargas* & Bisi* et al., Cell 2024).
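As a schematic of the task-driven recipe, the sketch below trains a small network on a synthetic regression task (random signals standing in as proprioceptive inputs, with a hypothetical endpoint-position target) and then inspects the dimensionality of the emergent hidden representation. It illustrates the general approach, not our published models.

```python
# Schematic of the task-driven approach with synthetic data; the
# inputs, task, and network are simplified stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "muscle spindle" inputs (4096 samples, 25 channels) and a
# toy task: regress a 2D limb endpoint position from those signals.
X = torch.randn(4096, 25)
true_readout = torch.randn(25, 2)
Y = torch.tanh(X) @ true_readout        # hypothetical ground truth

net = nn.Sequential(nn.Linear(25, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Task optimization: train the network on the toy regression task
for step in range(500):
    loss = nn.functional.mse_loss(net(X), Y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Analyze the emergent hidden representation, e.g. its dimensionality
with torch.no_grad():
    hidden = torch.relu(net[0](X))      # first-layer activations
    # PCA via SVD of the centered activations
    _, s, _ = torch.linalg.svd(hidden - hidden.mean(0), full_matrices=False)
    var = s**2 / (s**2).sum()
    print("PCs needed for 90% variance:", int((var.cumsum(0) < 0.9).sum()) + 1)
```

In actual studies, the synthetic inputs would be replaced by simulated proprioceptive signals and the learned representations compared against neural recordings.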