The role of chaos in the brain
Brain criticality
We rarely stop to consider the feats our brain accomplishes at every instant. Simply telling someone your name is already a difficult task for a computer to carry out.
Keith Hengen, a biologist at Washington University in St. Louis, points out, however, that none of this would be possible without the disorder, even the chaos, that reigns at every moment in the brain.
He and his colleagues call this idea "the critical brain hypothesis." According to them, gray matter is constantly balanced between order and disorder, in what they call a "critical zone," in other words, at the edge of chaos.
This is a phenomenon found throughout nature, for example in avalanches and forest fires: small events can have catastrophic consequences, and they are governed by mathematical principles.
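The mathematics in question is that of power laws and critical cascades. As a toy illustration (my own sketch, not drawn from the article), the branching process below produces avalanches whose sizes have no characteristic scale when each active unit triggers, on average, exactly one other (sigma = 1):

```python
# A toy illustration (my sketch, not the authors' model): a critical
# branching process, the textbook cascade model behind avalanche
# statistics. Each active unit triggers Binomial(2, sigma/2) others,
# i.e., sigma on average; sigma = 1 is the critical point.
import random

def avalanche_size(sigma=1.0, max_size=10_000):
    """Total number of activations in one cascade started by a single seed."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        active = sum(1 for _ in range(2 * active) if random.random() < sigma / 2)
    return size

random.seed(0)
sizes = [avalanche_size() for _ in range(10_000)]
for s in (1, 10, 100, 1000):
    tail = sum(1 for x in sizes if x >= s) / len(sizes)
    print(f"P(size >= {s}) ~ {tail:.3f}")  # decays roughly as s**-0.5 at criticality
```

At sigma = 1 the tail of the size distribution decays as a power law, so the same process produces mostly tiny events and, occasionally, enormous ones; for any sigma below 1 the tail is exponential instead.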
These ideas may sound absurd, but today other neuroscientists share them. One is Karim Jerbi, of the Université de Montréal. The hypothesis could help us increase our mental flexibility, and it could also help us better understand how the brains of certain animals, such as cats and monkeys, work. On this subject, see also the work of Woodrow Shew, a physicist at the University of Arkansas. An article along these lines has just been published in Neuron.
Highlights
• Criticality may be a unifying principle of optimal neural computation
• Criticality is a key endpoint of homeostasis in the brain
• Deviations from criticality correlate with multiple brain disorders and anesthesia
• Conflicting evidence in criticality can be explained by temporal coarse graining
Summary
Brains face selective pressure to optimize computation, broadly defined. This is achieved by mechanisms including development, plasticity, and homeostasis. Is there a universal optimum around which the healthy brain tunes itself, across time and individuals? The criticality hypothesis posits such a setpoint. Criticality is a state imbued with internally generated, multiscale, marginally stable dynamics that maximize the features of information processing. Experimental support emerged two decades ago and has accumulated at an accelerating pace despite disagreement. Here, we lay out the logic of criticality as a general computational endpoint and review experimental evidence. We perform a meta-analysis of 140 datasets published between 2003 and 2024. We find that a long-standing controversy is the product of a methodological choice with no bearing on underlying dynamics. Our results suggest that a new generation of research can leverage criticality—as a unifying principle of brain function—to accelerate understanding of behavior, cognition, and disease.
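The "temporal coarse graining" named in the highlights and summary refers to a concrete analysis step: spike data must be binned in time before avalanches can be counted, and the bin width is the analyst's choice. The sketch below is a generic version of this binning procedure applied to synthetic data, not the paper's pipeline; its point is only that detected avalanche statistics shift with the bin width alone, while the data never change:

```python
# A hedged sketch of the standard avalanche-detection step: bin spikes
# in time, then define an avalanche as a run of consecutive non-empty
# bins. Applied here to synthetic spike times for illustration.
import numpy as np

rng = np.random.default_rng(0)
duration = 100.0                                   # seconds of "recording"
spike_times = np.sort(rng.uniform(0, duration, size=5_000))

def avalanche_sizes(times, dt):
    counts, _ = np.histogram(times, bins=np.arange(0, duration + dt, dt))
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c                           # avalanche continues
        elif current:
            sizes.append(current)                  # empty bin ends the avalanche
            current = 0
    if current:
        sizes.append(current)
    return np.array(sizes)

for dt in (0.001, 0.01, 0.1):
    s = avalanche_sizes(spike_times, dt)
    print(f"dt = {dt:g} s: {len(s)} avalanches, mean size {s.mean():.1f}, max {s.max()}")
```

Coarser bins merge what finer bins split apart, so the number, mean size, and tail of the detected avalanches all depend on dt, which is why a purely methodological choice can masquerade as a disagreement about the underlying dynamics.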
Introduction
Is there a unifying rule of computation in biology? Has evolution inevitably settled on a key principle that accounts for the brain’s capacity to generate behavior and cognition? Or is each brain function in each animal individually governed by different rules, without common ground? One direct path to answering such a question is to ask if there is a universal homeostatic endpoint that can account for computational capacity, flexibility, and robustness. Put another way, brains maintain themselves at some point that allows for all behavior and cognition despite enormous variability, unpredictability, and perturbation throughout life. Understanding such a setpoint, should it exist, would give insight into the mathematical principles at the core of the brain’s power. Through a combination of first-principles reasoning and a growing body of evidence, we propose and evaluate a candidate solution to this problem.
Brains are the physical basis of biological computation. Brain functions, such as those underlying behaviors, are directly caused by the computations of neuronal populations. In some cases, a highly specialized, permanent solution is needed; the conversion of light into a neurobiological signal requires light-sensitive molecules that are of little use to olfaction, for example. More often, however, the computations are not hard wired; they must be learned on the fly, capable of reconfiguration to accommodate diverse, ever-changing environmental conditions, experiences, and perturbations.
It is tempting to suppose that flexible computation implies a lack of constraint—each system is free to drift and be sculpted by experience and associative plasticity. But just as the tuning of a photosensitive molecule is essential to its function, the ability of a network of neurons to transmit and transform information also requires tuning; it is neither inevitable nor trivial. Consider that the principles that allow for flexible computation are intrinsically destabilizing. Mechanisms of learning and memory—Hebbian plasticity—operate by positive feedback. Left unchecked, LTP and LTD (mechanisms of associative plasticity) lead to catastrophic saturation or silence, respectively, at the circuit level.1,2,3,4,5 In simple terms, a brain that can learn requires some form of active stabilization. Known mechanisms of homeostatic plasticity—cellular and synaptic—are well positioned to counteract these destabilizing forces.6,7,8,9,10,11,12 However, our understanding of such homeostatic mechanisms is to some extent arbitrary—there is little a priori reason to predict that a neuron’s mean firing rate should be 3.2 Hz, for example. What determines the variegated setpoints throughout the central nervous system? Ultimately, the target of homeostasis in the brain is behavior; stabilizing a neuron’s firing rate is of little value if it does not contribute to reliable behavior. Because evolution can only select for behavior,13 and because behavior arises from the coordinated activity of millions to billions of neurons, there is a selective pressure for homeostatic processes in the brain to actively maintain an optimal setpoint at the level of population computation that gives rise to behavior.
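The destabilizing effect of positive feedback, and its rescue by homeostasis, can be seen in a few lines of simulation. This is a deliberately minimal sketch with arbitrary parameters, not a model from the cited studies:

```python
# A toy model of the stability argument: a purely Hebbian update is
# positive feedback and runs away, while a multiplicative homeostatic
# scaling step that nudges postsynaptic activity toward a setpoint
# keeps weights bounded. All rates and targets are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
TARGET = 1.0      # homeostatic activity setpoint (arbitrary)
N = 100           # number of synapses

def simulate(homeostasis: bool, steps=200):
    w = rng.uniform(0.4, 0.6, size=N)
    y = 0.0
    for _ in range(steps):
        x = rng.random(N)                 # presynaptic activity
        y = w @ x / N                     # postsynaptic activity
        w = w + 0.5 * y * x               # Hebbian: "fire together, wire together"
        if homeostasis:
            w *= (TARGET / max(y, 1e-9)) ** 0.1   # multiplicative synaptic scaling
    return w, y

for label, flag in (("Hebbian only     ", False), ("Hebbian + scaling", True)):
    w, y = simulate(flag)
    print(f"{label}: mean weight {w.mean():.3g}, final activity {y:.3g}")
```

Multiplicative scaling is used here because it preserves the relative differences between synapses, the property usually attributed to synaptic scaling; the small exponent simply makes the homeostatic correction slower than the Hebbian one.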
The elucidation of such a setpoint would crystallize a fundamental principle of neurobiology. There are homeostatic setpoints at many levels of organization, including molecular biology, synaptic physiology, and single-cell biophysics. However, since behavior is the target of selective forces, we suggest that the endpoint of neuronal homeostasis must lie at the level of brain physiology penultimate to behavior—that is, at the level of neuronal population dynamics. While myriad genetic, molecular, synaptic, and cellular factors obviously shape and constrain neuronal function, the relevance of these factors is ultimately determined by their impact on population dynamics and thus behavior.
As discussed above, an adaptable neuronal population is precarious; optimal and reliable function requires active maintenance. This raises the question, “What aspect of population dynamics should be the target of homeostatic constraint?” To make this concrete, consider a thought experiment: imagine that you are responsible for tuning the activity of billions of neurons. You have one knob for every parameter that controls population dynamics—that is, cell-type-specific wiring rules, synaptic strengths, the relative importance of excitation and inhibition, differences in single-neuron biophysics, network structure, assorted time constants, and countless others. You can try all possible combinations of the knobs, searching an enormous space of setpoints, seeking a state that suits computation. Such a search would reveal large regions of parameter space that are not viable for general, flexible computation. One region might be good for a specific task but poor for another. For example, a desynchronized region might be well suited for low-noise sensory coding but might perform poorly for long-range coordination across brain regions. Much of the parameter space would be relatively insensitive to small adjustments of the knobs. However, you would occasionally encounter an abrupt change in population dynamics. This is analogous to how water behaves identically at 10°C and 8°C, but water at 1°C is quite different from water at −1°C. Such a tipping point goes by many names, including a bifurcation, a phase transition, or simply a boundary in parameter space. At a special kind of boundary, called criticality, population dynamics emerge with multiple properties ideally suited for flexible computation (Figure 1). A neuronal population at criticality maximizes dynamic range, information storage, information transmission, controllability, susceptibility, and more (more details in the next section and previous reviews14,15,16). Thus, it seems that, combined with learning rules, operating near this setpoint should allow your network to achieve any desirable function. In other words, a brain tuned to criticality should be able to learn to do almost anything.
Figure 1 Criticality: A high point in the computational fitness landscape
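A drastically reduced version of this thought experiment keeps a single knob: the branching ratio sigma of the same kind of probabilistic cascade used in the avalanche sketch earlier. The model and its parameters are illustrative choices of mine, not the paper's; the point is the abrupt change in dynamics at the boundary sigma = 1:

```python
# One knob instead of billions: sweep the branching ratio sigma of a
# probabilistic cascade on n units. Below sigma = 1 activity dies out,
# above it activity saturates, and near it activity is marginal.
import random

def run(sigma, n=1_000, steps=100, seeds=10):
    active = seeds
    for _ in range(steps):
        if active == 0:
            return 0
        # each active unit activates Binomial(2, sigma/2) units, capped at n
        active = min(n, sum(1 for _ in range(2 * active)
                            if random.random() < sigma / 2))
    return active

random.seed(2)
for sigma in (0.8, 1.0, 1.2):
    finals = sorted(run(sigma) for _ in range(10))
    print(f"sigma = {sigma}: final active counts {finals}")
```

Everywhere below the boundary the cascade goes silent and everywhere above it the cascade saturates; only near sigma = 1 does activity remain marginal and fluctuate across all scales, which is the regime to which the computational properties in Figure 1 are attributed.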
Here, we contend that criticality is a unifying computational principle central to brain function and a key endpoint of neuronal homeostatic control. In the next section, we lay out the rationale for our argument and detail the computationally advantageous properties of neuronal population dynamics at criticality. We also carefully consider supporting experimental evidence. After that, we describe eight testable predictions implied by the criticality hypothesis, and we systematically review two decades of experimental evidence that supports these predictions (a downloadable annotated bibliography for all 320 papers is provided in the supplemental information). Finally, we perform a meta-analysis of prior work, reconciling studies that were long thought to contradict the criticality hypothesis.
