Symposium Program
9:30 – 9:45
Introduction
9:45 – 10:20
The distribution of serotonin receptors in the mammalian cortex and their role in modulating local circuit and cortex-wide neural dynamics
Sean Froudist-Walsh, University of Bristol, Bristol, UK
In this talk I will describe our anatomy-led approach to modelling and analysis of neural dynamics across scales, focusing on serotonergic modulation of the cortex. I will highlight our investigations into how serotonergic drugs (e.g. psilocybin) dynamically alter cortical dynamics locally in the medial prefrontal cortex and anterior cingulate cortex. I will then present anatomical investigations showing that this region is a site of particularly intense serotonin 1a receptor expression, a feature that is largely conserved across species. Lastly, I will show how we can integrate cortex-wide data on receptor expression to create testable predictions about the role of serotonin in enabling a switch between distributed brain activity states for perceptually-coupled and perceptually-decoupled cognition.
10:20 – 10:55
Digital Twins Enable Early Disease Diagnosis by Reconstructing Neurodegeneration Levels from Neural Recordings
Lorenzo Gaetano Amato, Sant'Anna School of Advanced Studies, Pisa, IT
Understanding how structural alterations in the brain lead to functional anomalies is a central challenge in neuroscience and clinical neurology. This talk will discuss an emerging approach, leveraging computational modeling to derive personalized digital biomarkers from non-invasive neural recordings. These biomarkers estimate individual brain pathology by inverting individual experimental data, such as EEG recordings. Specifically, we use a personalized whole-brain modeling framework to estimate biophysically meaningful parameters that capture the progression of neurodegenerative alterations, translating non-invasive electrophysiological data into mechanistic insights about disease. We demonstrate this approach in the context of Alzheimer’s disease, where we apply the Digital Alzheimer’s Disease Diagnosis (DADD) model to a large cohort of EEG recordings. The derived digital biomarkers successfully reflect underlying pathological changes, predict positivity to biological markers, and significantly improve the diagnostic and prognostic power of EEG. This model-based strategy opens new possibilities for affordable, scalable, and non-invasive monitoring of brain health, potentially enabling earlier detection and stratification of patients along the disease continuum.
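As a schematic of the model-inversion idea (not the DADD model itself), the sketch below fits a single hypothetical "degeneration" parameter of a toy oscillator so that its power spectrum matches that of a simulated EEG-like recording; all function names, parameters, and numerical values are placeholders chosen for illustration.

```python
# Illustrative only: a generic "model inversion" loop in the spirit of deriving
# digital biomarkers from EEG. The toy model and parameters are NOT the DADD model.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.signal import welch

fs = 250.0                        # sampling rate (Hz)
n_samples = int(fs * 60.0)        # 60 s of simulated signal

def simulate_mass_model(degeneration, n=n_samples):
    """Toy damped-oscillator surrogate of a neural mass: a hypothetical
    'degeneration' parameter in [0, 1] slows and dampens the dominant rhythm."""
    rng = np.random.default_rng(0)                # fixed seed -> deterministic objective
    f0 = 10.0 * (1.0 - 0.4 * degeneration)        # alpha peak shifts to lower frequency
    damping = 5.0 + 20.0 * degeneration           # stronger damping with pathology
    w0, dt = 2.0 * np.pi * f0, 1.0 / fs
    x, v = np.zeros(n), np.zeros(n)
    noise = rng.standard_normal(n)
    for i in range(1, n):
        a = -w0**2 * x[i - 1] - damping * v[i - 1] + 50.0 * noise[i]
        v[i] = v[i - 1] + a * dt
        x[i] = x[i - 1] + v[i] * dt
    return x

def spectral_loss(degeneration, target_psd, mask):
    _, psd = welch(simulate_mass_model(degeneration), fs=fs, nperseg=1024)
    return np.mean((np.log(psd[mask]) - np.log(target_psd[mask])) ** 2)

# "Observed EEG": generated here from the same toy model with a hidden parameter value.
observed = simulate_mass_model(degeneration=0.6)
freqs, target_psd = welch(observed, fs=fs, nperseg=1024)
mask = (freqs >= 2.0) & (freqs <= 30.0)

# Invert the model: find the degeneration level that best explains the observed spectrum.
result = minimize_scalar(spectral_loss, bounds=(0.0, 1.0), method="bounded",
                         args=(target_psd, mask))
print(f"Recovered degeneration parameter: {result.x:.2f}")
```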
10:55 – 11:25
Coffee Break
11:25 – 12:00
Weird and weak but united we stand: oscillatory coordination and information dynamics
Demian Battaglia, University of Strasbourg, FR
Neural oscillations have been proposed to play important roles in brain information processing. Oscillations at distinct frequencies would subserve the multiplexing of information flows in different directions. Nested faster and slower oscillations would provide a reference frame for temporal coding. Modulations in the power and frequency of neuronal oscillations could themselves be representational, conveying information about stimuli and context. Most theories on the functional role of oscillations, however, implicitly assume that oscillations are regular in timing, reliable in frequency, and strong in amplitude. Yet, in vivo, oscillations are highly transient and stochastic, fluctuating in frequency and phase, and generally weak in amplitude. Can classical theories about the functional role of oscillations withstand these accounts of a brain with "no metronome"?
Here, we review findings from data analysis and computational modeling that suggest oscillations may still remain relevant to information processing despite their weird features and weak intensity—and that, furthermore, their functional roles could extend beyond those traditionally proposed. Considering recordings from the rodent hippocampus and the non-human primate cortex during navigation and working memory tasks, we find that task-relevant information is indeed conveyed by the properties of weak-intensity oscillations recorded across different sites. In the hippocampus, despite fluctuations in frequency and phase, and without a clear relation to the anatomical depth of the recording site, codes mapping gamma oscillatory bursts to behavior can be learned and shown to evolve systematically with learning. In the cortex, we show that the sites carrying the most information about the content held in working memory are not those with the strongest beta power, but rather those exhibiting stronger beta-band coordination with other sites.
Turning to the role of the coexistence of multiple frequency bands, we hypothesize—based on extensive computational simulations—that their function extends beyond the simple multiplexing of directed inter-population communication channels. Dynamical working points, with distinct oscillation frequencies dominating at different laminar depths, would instead be necessary to enhance synergistic integration between functionally connected regions. The self-organized coordination of oscillatory bursts co-occurring across faster and slower frequencies would intrinsically lead to complex patterns of reweighting the relevance of different inputs, depending on the latency at which they were emitted in the past—resembling the effects of attention mechanisms in artificial neural networks.
Taken together, these empirical observations and conceptual investigations suggest that the irregularity of oscillations is not a limitation to their functional role. On the contrary, it may be an asset, enabling the emergence of richer coding and processing schemes in systems composed of multiple coupled populations.
12:00 – 12:35
Network Complexity and Dimensionality in Whole-Brain Dynamics Estimated from fMRI Data
Matthieu Gilson, Institut de Neurosciences de la Timone, Aix-Marseille University, FR
The study of the brain as a complex network has grown in the past decades thanks to progress in neuroimaging techniques like magnetic resonance imaging (MRI). Functional MRI enables the characterization of subnetworks engaged in cognitive tasks as well as neuropathological alterations. Here I put into perspective recent work on effective connectivity (EC) at the whole-brain level, yielding signatures of subject- and task-specific brain dynamics. The EC model fitted to data can then be analyzed in a network-oriented manner to examine the propagation of activity in the brain network (e.g. segregation versus integration), as a proxy to characterize information processing. I will focus on our recent work with G Zamora-Lopez adapting network complexity theory to dynamical systems, and to the EC-based Ornstein-Uhlenbeck process in particular, with a perspective on quantifying the dimensionality of the resulting network activity.
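For readers unfamiliar with the model class mentioned above, the snippet below simulates a generic multivariate Ornstein-Uhlenbeck process driven by an effective-connectivity matrix; the connectivity values, time constants, and noise level are arbitrary placeholders rather than fitted fMRI estimates, and the EC-fitting procedure itself is not shown.

```python
# Minimal sketch of a multivariate Ornstein-Uhlenbeck (OU) process driven by an
# effective-connectivity (EC) matrix, in the spirit of whole-brain OU models.
import numpy as np

rng = np.random.default_rng(1)
n_regions = 5
tau = 1.0                                    # intrinsic leakage time constant (s)
EC = 0.2 * rng.random((n_regions, n_regions))
np.fill_diagonal(EC, 0.0)                    # no self-connections in EC

dt = 0.01                                    # integration step (s)
n_steps = int(200 / dt)                      # 200 s of activity
sigma = 0.5                                  # input noise amplitude
x = np.zeros((n_steps, n_regions))

# dx = (-x / tau + EC @ x) dt + sigma dW  (Euler-Maruyama integration)
for t in range(1, n_steps):
    drift = -x[t - 1] / tau + EC @ x[t - 1]
    x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_regions)

# Covariance statistics of the simulated activity, analogous to those used when
# fitting EC to empirical fMRI data.
Q0 = np.cov(x.T)                             # zero-lag covariance across regions
print(Q0.shape)
```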
12:35 – 14:00
Lunch Break + Poster session
14:00 – 14:35
Low dimensional neural manifolds for the control of movement
Sara Solla, Northwestern University, Evanston, USA
A fundamental question of systems neuroscience is how neural population dynamics implement computations and information processing. The problem is formidable, as neural activity in any specific area not only reflects its intrinsic dynamics but must also represent inputs to and outputs from that area, and the computations being performed. The analysis of neural dynamics across several cortical areas has consistently revealed the existence of low dimensional neural manifolds, spanned by latent variables that capture a significant fraction of neural variability. In motor cortex, the analysis of population activity provides solid evidence in support of low-dimensional neural manifolds, and reveals surprising geometric similarities between manifolds associated with various motor tasks. The ability of manifold dynamics to predict muscle activation leads to the hypothesis that these latent variables or "neural modes" are the generators of motor behavior. This manifold-based view of motor cortex dynamics may thus lead to a better understanding of how the brain controls movement.
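To make the notion of "neural modes" concrete, here is a minimal sketch that extracts a low-dimensional manifold from synthetic population activity with PCA; the data generation and the number of latent dimensions are arbitrary choices for illustration, not the analysis pipeline used in the work presented.

```python
# Illustrative sketch: extracting a low-dimensional neural manifold ("neural modes")
# from population activity with PCA. Synthetic data stand in for recorded firing rates.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_neurons, n_timepoints, n_latent = 100, 2000, 3

# Generate synthetic latent variables (slow sinusoids) and embed them in neuron space.
time = np.linspace(0, 20, n_timepoints)
latents = np.stack([np.sin(2 * np.pi * f * time) for f in (0.2, 0.5, 0.9)], axis=1)
mixing = rng.standard_normal((n_latent, n_neurons))
rates = latents @ mixing + 0.3 * rng.standard_normal((n_timepoints, n_neurons))

# Fit PCA: the leading components span the low-dimensional manifold.
pca = PCA(n_components=10).fit(rates)
explained = np.cumsum(pca.explained_variance_ratio_)
print("Variance captured by the first 3 modes:", round(explained[2], 3))

# Project population activity onto the manifold to obtain latent trajectories.
latent_trajectories = pca.transform(rates)[:, :n_latent]
```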
14:35 – 15:10
Disentangling neural representations underlying movement execution and imagery for human augmentation
Leonardo Pollina, École Polytechnique Fédérale de Lausanne, Lausanne, CH
One of the central challenges in human motor augmentation is the neural resource allocation problem: how to identify reliable control signals in physiological activity that could drive external devices without interfering with ongoing biological functions. Neural population dynamics and latent neural representations offer a promising avenue for addressing this challenge by allowing us to disentangle overlapping motor processes. In this talk, I will explore how motor execution and motor imagery are represented in electrocorticography (ECoG) data recorded from a tetraplegic patient with partial upper-limb residual function. I will then introduce an approach that involves identifying distinct neural subspaces, specific to either executed or imagined movements, as a step toward isolating viable control signals and advancing the development of intuitive brain-machine interfaces for both motor restoration and augmentation.
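As a rough illustration of subspace-based disentangling, the sketch below fits separate low-dimensional subspaces to placeholder "executed" and "imagined" feature matrices and compares them with principal angles; the data, dimensionalities, and comparison metric are assumptions made for illustration and do not reproduce the speaker's method.

```python
# Minimal sketch: identifying condition-specific neural subspaces for executed vs.
# imagined movements and comparing them with principal angles. Synthetic data are
# used as placeholders for ECoG features.
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_channels, n_samples, n_dims = 64, 5000, 5

# Placeholder feature matrices (time x channels) for the two conditions.
executed = rng.standard_normal((n_samples, n_channels))
imagined = rng.standard_normal((n_samples, n_channels))

# Fit a low-dimensional subspace for each condition.
basis_exec = PCA(n_components=n_dims).fit(executed).components_.T   # channels x dims
basis_imag = PCA(n_components=n_dims).fit(imagined).components_.T

# Principal angles quantify how much the two subspaces overlap
# (small angles = shared structure, large angles = potentially separable control signals).
angles_deg = np.degrees(subspace_angles(basis_exec, basis_imag))
print("Principal angles (deg):", np.round(angles_deg, 1))
```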
15:10 – 15:45
The dynamics and geometry of choice in the premotor cortex
Tatiana A. Engel, Princeton University, Princeton, USA
Neural responses in association brain areas during cognitive tasks are heterogeneous, and the widespread assumption is that this heterogeneity reflects complex dynamics involved in cognition. However, the complexity may arise from a fundamentally different coding principle: the collective dynamics of a neural population encode simple cognitive variables, while individual neurons have diverse tuning to the cognitive variable, similar to tuning curves of sensory neurons to external stimuli. We developed an approach to simultaneously infer neural population dynamics and tuning functions of single neurons to the latent population state. Applied to spike data recorded from primate premotor cortex during decision-making, our model revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Our results reveal a unifying geometric principle for neural encoding of sensory and dynamic cognitive variables.
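The coding principle described above can be made concrete with a small generative toy: a single latent decision variable evolves under attractor (double-well) dynamics, and each neuron fires according to its own tuning curve to that latent state. The sketch below is purely illustrative, with arbitrary parameters, and does not implement the inference approach developed in the work.

```python
# Toy generative illustration: heterogeneous single-neuron responses all driven by
# one simple latent decision variable via neuron-specific tuning curves.
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_time, dt = 30, 500, 0.01

# Latent decision variable: noisy drift within a double-well (two-attractor) landscape.
x = np.zeros(n_time)
for t in range(1, n_time):
    drift = x[t - 1] - x[t - 1] ** 3           # attractors at +1 and -1 (the two choices)
    x[t] = x[t - 1] + drift * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()

# Heterogeneous tuning: each neuron has its own preferred value of the latent variable.
preferred = rng.uniform(-1.5, 1.5, n_neurons)
gain, width = 20.0, 0.5                         # peak rate (Hz) and tuning width
rates = gain * np.exp(-(x[:, None] - preferred[None, :]) ** 2 / (2 * width ** 2))

# Poisson spikes: diverse firing patterns generated from the same 1-D latent state.
spikes = rng.poisson(rates * dt)
print(spikes.shape)  # (time bins, neurons)
```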
15:45 – 16:15
Coffee Break
16:15 – 16:50
Associative RNNs with self-interactions: reservoir computing without catastrophic forgetting
Maurizio Mattia, Istituto Superiore di Sanità, Roma, IT
Associative memory models form a key link between statistical physics and theoretical neuroscience. Traditionally, spin-glass models with Hebbian learning in the absence of self-interactions suffer catastrophic forgetting when the memory load exceeds a critical threshold. Here I will show that including Hebbian self-couplings in deterministic, graded-unit recurrent neural networks (RNNs) fundamentally reshapes the energy landscape, confining dynamics to the low-dimensional space of stored patterns. This enables robust recall at any memory load, overcoming catastrophic forgetting without altering the Hebbian matrix or requiring nonlocal learning. Beyond storing static patterns, these RNNs can implement reservoir computing to learn and recall complex dynamical sequences, even if the starting Amari-Hopfield synaptic matrix is symmetric. Our theory elucidates how such networks mimic arbitrary dynamical systems evolving within low-dimensional state spaces and predicts optimal parameters for performance. In conclusion, I will demonstrate how these RNNs effectively replicate premotor cortical activity recorded from monkeys performing a stop-signal task, offering novel insights into motor decisions and movement inhibition.
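As a concrete reference point for the setup described above, the sketch below builds a Hebbian coupling matrix for a graded-unit RNN while keeping the self-couplings (the diagonal) rather than zeroing them as in the classical Amari-Hopfield prescription. Sizes, the inverse temperature, and the noise level are illustrative assumptions; whether recall succeeds at a given memory load is precisely the question addressed by the theory presented in the talk.

```python
# Sketch of an associative RNN with graded units and a Hebbian coupling matrix in
# which the self-interactions (diagonal) are retained rather than set to zero.
import numpy as np

rng = np.random.default_rng(5)
N, P = 500, 100                       # network size and number of stored patterns

# Random binary patterns and the Hebbian matrix J = (1/N) * sum_mu xi_mu xi_mu^T.
patterns = rng.choice([-1.0, 1.0], size=(P, N))
J = patterns.T @ patterns / N         # note: the diagonal (self-couplings) is kept

def recall(J, cue, steps=50, beta=4.0):
    """Deterministic graded-unit dynamics x <- tanh(beta * J @ x)."""
    x = cue.copy()
    for _ in range(steps):
        x = np.tanh(beta * (J @ x))
    return x

# Cue the network with a corrupted version of a stored pattern and measure the overlap.
target = patterns[0]
cue = np.where(rng.random(N) < 0.2, -target, target)   # flip 20% of the entries
x = recall(J, cue)
overlap = np.dot(np.sign(x), target) / N
print(f"Overlap with the stored pattern after recall: {overlap:.2f}")
```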
16:50 – 17:25
Toward Foundation Models for Dynamical Systems Reconstruction in Neuroscience
Daniel Durstewitz, Central Institute of Mental Health, Mannheim, Germany
Rather than hand-crafting computational theories of neural function, recent progress in scientific machine learning (ML) and AI suggests that we may be able to infer dynamical-computational models directly from neurophysiological and behavioral data. This is called dynamical systems reconstruction (DSR): the learning of generative surrogate models of the underlying dynamics from time series observations. In my talk I will cover recent ML/AI architectures, training algorithms, and validation procedures for DSR, and how they can integrate neuroscience data from multiple modalities, animals, and task designs into a joint latent model. Finally, I will introduce DynaMix, a recent interpretable DSR foundation model that exhibits the zero-shot inference and in-context learning capabilities known from LLMs.
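As a very stripped-down illustration of the DSR idea, the sketch below learns a generative surrogate of an observed time series (here, a simple linear next-step model on a delay-embedded state) and then runs it autonomously; real DSR architectures such as those covered in the talk are far more expressive, and nothing here corresponds to DynaMix.

```python
# Highly simplified DSR-flavored example: learn a surrogate of an observed time
# series, then generate from it autonomously. A linear next-step model on a
# delay-embedded state is used purely for brevity.
import numpy as np
from sklearn.linear_model import Ridge

# Observed "data": a noisy quasi-periodic signal (stand-in for neural recordings).
rng = np.random.default_rng(6)
t = np.linspace(0, 100, 5000)
series = np.sin(t) + 0.5 * np.sin(2.2 * t) + 0.05 * rng.standard_normal(t.size)

# Delay-embed the scalar series into overlapping windows of d consecutive samples.
d = 10
states = np.stack([series[i:len(series) - d + i] for i in range(d)], axis=1)
X, y = states[:-1], states[1:, -1]            # predict the next observation

# Fit the surrogate dynamics, then roll it out autonomously from an initial state.
model = Ridge(alpha=1e-3).fit(X, y)
s = X[0].copy()
reconstruction = []
for _ in range(1000):
    nxt = model.predict(s[None, :])[0]
    reconstruction.append(nxt)
    s = np.roll(s, -1); s[-1] = nxt           # shift the delay window forward

print("Autonomous rollout length:", len(reconstruction))
```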
17:25 – 17:40
Concluding Remarks