DeepDraw

Task-driven models of proprioception

[Animation: deepdraw_a.gif]

We began by modeling the human proprioceptive system (Sandbrink* & Mamidanna* et al., eLife 2023). In subsequent work, we expanded this framework to test which hypothesis best explains the neural dynamics of proprioceptive units in the brainstem and somatosensory cortex (Marin Vargas* & Bisi* et al., TBD).

Contrasting action and posture coding with hierarchical deep neural network models of proprioception

Kai J. Sandbrink*, Pranav Mamidanna*, Claudio Michaelis,
Matthias Bethge, Mackenzie Weygandt Mathis**, Alexander Mathis**

Using OpenSim and muscle spindle models, we generated a scalable dataset of muscle spindle inputs based on a human drawing Latin alphabet characters on a tablet. We then trained neural network models in a goal-driven way and examined whether known proprioceptive tuning curves emerged in the trained units (see the sketch below). Check out the publication in eLife for more details!

Publication: eLife 2023
Pre-print: https://www.biorxiv.org/content/10.1101/2020.05.06.081372v3
Dataset and Code available: https://github.com/amathislab/deepdraw
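As a toy illustration of the tuning-curve check mentioned above, the sketch below fits a cosine tuning model (the classic directional tuning form for proprioceptive and motor neurons) to one unit's activity as a function of movement direction. This is a minimal example on synthetic data; the function and variable names are ours, not the repository's API.

# A minimal sketch of a cosine tuning-curve fit; names, shapes, and the
# synthetic data are illustrative only, not the deepdraw pipeline.
import numpy as np

def fit_cosine_tuning(directions, rates):
    # Least-squares fit of rate ~ b0 + b1*cos(theta) + b2*sin(theta),
    # which is equivalent to rate ~ b0 + depth*cos(theta - preferred).
    X = np.column_stack([np.ones_like(directions),
                         np.cos(directions),
                         np.sin(directions)])
    b0, b1, b2 = np.linalg.lstsq(X, rates, rcond=None)[0]
    preferred = np.arctan2(b2, b1)   # preferred movement direction
    depth = np.hypot(b1, b2)         # modulation depth
    return preferred, depth, b0

# Synthetic unit tuned to 45 degrees.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
rates = 1.0 + 0.8 * np.cos(theta - np.pi / 4) + 0.1 * rng.standard_normal(500)
pd, depth, base = fit_cosine_tuning(theta, rates)
print(f"preferred direction ~ {np.degrees(pd):.1f} deg, depth ~ {depth:.2f}")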

 

Task-driven neural network models predict neural dynamics of proprioception

Alessandro Marin Vargas*, Axel Bisi*, Alberto Chiappa, Chris Versteeg, Lee Miller, Alexander Mathis

Proprioception informs the brain about the state of the body based on sensors distributed throughout the body. However, the principles that govern proprioceptive processing of those distributed signals are poorly understood. Here, we employ a task-driven neural network modeling approach to investigate the neural code of proprioceptive neurons in both the cuneate nucleus (CN) and somatosensory cortex area 2 (S1). We simulated muscle spindle signals through musculoskeletal modeling and generated a large-scale, naturalistic movement repertoire to train thousands of neural network models on 16 behavioral tasks, each reflecting a hypothesis about the neural computations of the ascending proprioceptive pathway. We found that the internal representations developed through task optimization generalize from synthetic data to predict single-trial neural activity in CN and S1 of primates performing center-out reaching. Task-driven models outperform linear encoding models and data-driven models. Tasks that aim to predict limb position and velocity yielded the best predictions of neural activity in both areas. Architectures that are better at solving the tasks are also better at predicting the neural data. Lastly, since task optimization develops representations that better predict neural activity during actively generated but not passively generated movements, we hypothesize that neural activity in CN and S1 is top-down modulated during goal-directed movements.

Publication: TBD
Pre-print: https://www.biorxiv.org/content/10.1101/2023.06.15.545147v1
Dataset and Code available: https://github.com/amathislab/Task-driven-Proprioception
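To make the encoding analysis above concrete, here is a minimal sketch of how activations from one layer of a task-driven model can be mapped to a recorded neuron with a cross-validated ridge regression. It is not the paper's exact pipeline; all names, sizes, and the synthetic data are assumptions.

# Hypothetical example: cross-validated linear readout from model units
# to one neuron's firing rate; synthetic stand-in data throughout.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_units = 2000, 256   # time points x model units (illustrative)

# Stand-ins for a network layer's activations during reaching and the
# simultaneously recorded firing rate of one CN or S1 neuron.
layer_activity = rng.standard_normal((n_samples, n_units))
readout = 0.1 * rng.standard_normal(n_units)   # hypothetical ground truth
firing_rate = layer_activity @ readout + 0.5 * rng.standard_normal(n_samples)

# Cross-validated R^2 of the linear readout; a higher score means the
# model representation predicts the neuron's activity better.
scores = cross_val_score(Ridge(alpha=10.0), layer_activity, firing_rate,
                         cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")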

 
[Figure: deepdraw-01.png]