1. Data-driven discovery of computational principles

Historically, scientists have inferred the brain’s plasticity rules and decision-making principles (e.g., reinforcement learning) manually from neural recordings and behavioral observations. However, as datasets grow larger and task paradigms become increasingly complex, these traditional approaches may become less feasible. This underscores the need for algorithms that enable data-driven discovery of learning rules. Some of my ongoing work is guided by this line of thought:

- Multi-agent inverse reinforcement learning to reveal animals’ value functions and Theory-of-Mind mentalization processes. My current work (Chen et al., bioRxiv, 2025) has set up an inference algorithm that achieves this in multiple animals.

- Data-driven discovery of social learning rules. By accessing value functions at different snapshots of the learning process (Chen et al., bioRxiv, 2025), we can link their updates to various functional forms of decisions, outcomes, and others’ choices. My ongoing work explores this using reinforcement-learning-constrained state-space modeling.

- Data-driven discovery of neural plasticity rules. After inferring behavioral-level learning rules, the next step is to uncover how the brain implements them. Recent advances in meta-learning provide promising tools for this endeavor.
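
The value-function inference in the first bullet rests on inverting a decision model. A minimal sketch of the core likelihood, assuming softmax choices over candidate values (the function name, the softmax form, and all parameters here are illustrative assumptions, not the algorithm in the cited work):

```python
import numpy as np

def choice_loglik(q, beta, choices):
    """Log-likelihood of observed choices under a softmax decision model.

    q: candidate action values; beta: inverse temperature;
    choices: indices of observed choices. Maximizing this over
    candidate q is the core of value-function inference; a
    multi-agent Theory-of-Mind version would additionally include
    each agent's model of the other agent's values.
    """
    logits = beta * np.asarray(q)
    # Log-softmax for numerical stability.
    logp = logits - np.log(np.sum(np.exp(logits - logits.max()))) - logits.max()
    return float(np.sum(logp[choices]))
```

In practice this likelihood would be embedded in an optimizer or sampler over the space of value functions; the sketch only shows the forward evaluation.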
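
The social learning rules in the second bullet could take, for instance, the form of a value update that combines one’s own outcome with a partner’s observed choice. A hypothetical sketch (the functional form, names, and parameters are illustrative assumptions; the point of data-driven discovery is precisely to select among such forms):

```python
import numpy as np

def update_values(q, choice, reward, partner_choice, alpha=0.2, kappa=0.1):
    """One hypothetical social value update.

    alpha: learning rate on one's own outcome;
    kappa: weight on the partner's observed choice.
    """
    q = np.asarray(q, dtype=float).copy()
    # Standard Rescorla-Wagner update from one's own reward.
    q[choice] += alpha * (reward - q[choice])
    # Social term: nudge the value of the action the partner chose.
    q[partner_choice] += kappa * (1.0 - q[partner_choice])
    return q
```

Comparing snapshot-to-snapshot value changes against a family of such update rules is what the state-space modeling would arbitrate.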
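
For the third bullet, meta-learning approaches often parameterize a plasticity rule as a weighted combination of candidate local terms, with the weights fit in an outer loop against data. A minimal illustrative sketch (the choice of candidate terms and all names are assumptions):

```python
import numpy as np

def apply_plasticity(w, pre, post, theta):
    """Apply a parameterized local plasticity rule to weights w.

    The weight change is a linear combination of candidate local
    terms (Hebbian, postsynaptic-only, presynaptic-only); theta are
    meta-parameters that an outer meta-learning loop would fit.
    """
    terms = np.stack([
        np.outer(post, pre),                   # Hebbian: post * pre
        np.outer(post, np.ones_like(pre)),     # postsynaptic-only term
        np.outer(np.ones_like(post), pre),     # presynaptic-only term
    ])
    # Contract theta (3,) against the stacked terms (3, n_post, n_pre).
    return w + np.tensordot(theta, terms, axes=1)
```

The inner loop applies this rule during learning; the outer loop adjusts theta so that the resulting behavior or activity matches recordings.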

2. Revealing network learning from representations

Where we are now can provide important clues about where we came from. By examining the neural representations that emerge during a given behavioral paradigm, it is possible, at least in artificial neural networks (ANNs), to infer the underlying learning trajectories. In recent years, there has been a surge of theoretical work in machine learning aimed at opening the “black box” of learning dynamics and representations in ANNs. These advances offer powerful tools that we can also leverage to illuminate the black box of brain networks. Some example questions of interest:

- What is the unifying objective function of the brain that accounts for the wide range of findings observed in neuroscience experiments? My previous work focused on temporal predictive coding in the hippocampus (Chen et al., Neuron, 2024). Future work will focus on extending the temporal predictive coding framework to cortical-column computation in the neural processing hierarchy.

- What unique inductive biases and constraints of the brain contribute to its flexible and adaptive learning? My ongoing work explores the emergence of cognitive maps under different network learning rules, initializations, and other configurations.
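
As a concrete toy version of temporal predictive coding, a network can predict its next input and learn from the temporal prediction error. The linear form, names, and learning rate below are simplifying assumptions for illustration, not the published model:

```python
import numpy as np

def tpc_step(W, x_t, x_next, lr=0.05):
    """One temporal predictive coding step (illustrative sketch).

    The network predicts the next input as W @ x_t and updates W
    to reduce the temporal prediction error x_next - W @ x_t.
    """
    pred = W @ x_t
    err = x_next - pred                 # temporal prediction error
    W = W + lr * np.outer(err, x_t)    # error-driven weight update
    return W, err
```

Iterating this step over a sequence drives W toward the transition structure of the inputs, which is one route by which predictive representations can emerge.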

3. Revealing the brain’s functional connectivity

While many statistical methods can reveal correlational structure, evaluating the brain’s functional connectivity requires accounting for its intrinsic dynamics. My previous work was the first to integrate neural dynamics into the evaluation of functional connectivity, opening opportunities for many extensions, for example, incorporating specific nonlinearities or modeling the influence of external inputs. With the growing amount of shared multimodal data recording both functional activity and structural connectivity (e.g., MICrONS and V1 Deep Dive from the Allen Institute), this method could prove useful for analyzing functional connectivity in these datasets.
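
One way to account for intrinsic dynamics, shown here as a minimal sketch rather than the published method, is to fit a linear dynamical system to the recorded activity and read directed interactions off the fitted dynamics matrix instead of a correlation matrix:

```python
import numpy as np

def estimate_dynamics(X):
    """Estimate a linear dynamics matrix A from activity X (T x N).

    Fits x_{t+1} ~= A @ x_t by least squares. Unlike a correlation
    matrix, A is directed and reflects the system's time evolution.
    """
    past, future = X[:-1], X[1:]
    # Solve past @ A.T = future for A in the least-squares sense.
    M, *_ = np.linalg.lstsq(past, future, rcond=None)
    return M.T
```

On noiseless simulated data the generating matrix is recovered exactly; with real recordings, regularization and explicit input terms would be natural extensions.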


CHEN Yusi, Ph.D.

Investigator