About

Deep neural networks are driving many of the recent successes in machine learning. Compared to previous machine learning techniques, it is their ability to extract statistical patterns or “features” from data that yields state-of-the-art performance in many domains, such as image recognition or natural language processing.

From a theoretical point of view, however, feature learning remains poorly understood. Indeed, most theoretical work on neural networks has either not modeled data structure at all (computer science, statistics) or modeled data as independent, identically distributed (IID) random variables (statistical physics). Despite providing valuable insights, both approaches are thus by construction blind to the statistical properties of real-world datasets and to how these properties shape learning.

There is now a growing consensus that understanding deep neural networks will require capturing the impact of data structure through better models of data. Indeed, there has been a flurry of activity in this direction recently, leading to new tractable models of structured datasets that have already yielded a number of new insights.

The problem of understanding the impact of data structure is relevant not only to the theory of machine learning but also to theoretical neuroscience. Recent advances in large-scale neural recording, along with a growing appreciation of artificial neural networks as models of biological brains, highlight the need for new theoretical tools to characterize the structure and function of representations in biological and artificial neural networks. A growing trend in addressing this challenge is to analyze the geometry of these high-dimensional representations, i.e., neural population geometry, or "neural manifolds." Methods from statistical physics, such as replica theory and mean-field theory, have played an essential role in developing theories of neural population geometry.

The goal of this workshop is to gather leading scientists from the different communities that have contributed to these rapidly growing fields. There is a long history, going back to the 1980s, of fruitful interactions between theoretical neuroscience, machine learning and statistical physics, and we aim to bring these fields together again with a renewed focus on the problems they face today. Questions of current interest across these communities include, for example: How does data structure shape learning dynamics, and how does it impact the performance of learning? Are there biologically plausible alternatives to training neural networks with backpropagation? How do neural manifolds impact learning?

The common theme unifying these works is the analysis of high-dimensional problems, be they dynamic or static, with the different tools these respective fields offer. Such a convergence of disciplines has previously yielded great progress in signal processing and inference, and we hope that this workshop will serve as a focal point for these communities to come together and tackle new problems centered on neural networks.

The workshop will be held on February 20-24, 2023 in Les Houches in the French Alps. We are complementing the workshop with a special issue of the Journal of Physics A, which has a long history of publishing articles at the interface between statistical physics and neuroscience.