As for rare earths, Ukraine ranks fortieth among mineral-producing countries, all categories combined (coal included), according to the publication World Mining Data 2024.
Its production rests on three strategic minerals: manganese (8th producer worldwide), titanium (11th) and graphite (14th), the latter indispensable for electric batteries.
Of this last mineral, Ukraine concentrates "20% of the world's estimated resources", according to the French Bureau de recherches géologiques et minières (BRGM).
According to estimates by the Ukrainian think tank We Build Ukraine and the National Institute for Strategic Studies, around 40% of the country's metal resources are now under Russian occupation (based on data from the first half of 2024).
Since then, Russian troops have continued to advance in the eastern Donetsk region. In January, Ukraine closed its only coking-coal mine, located outside the city of Pokrovsk, which Russian forces are trying to capture.
But it is for rare earths that Russia most needs Ukraine's reserves. Hence Russia has occupied at least two Ukrainian lithium deposits during the war, one in the Donbas and the other in the Zaporizhzhia region in the south-east. Kyiv, however, still controls lithium deposits in the central Kirovohrad region.
Production of rare earth oxides, in tonnes:
China 270,000
USA 45,000
Myanmar 31,000
Australia 13,000
Thailand 13,000
India 3,000
Russia 2,500
Ukraine is thought to hold around 5% of the world's rare earth reserves, which are used in electronics, defence technologies and energy projects. Some of these reserves lie in regions of eastern Ukraine currently occupied by Russia.
This, at least, is what a new study indicates about the collision, 66 million years ago, between an asteroid 12 kilometres in diameter and the ocean bordering what is today the port town of Chicxulub, in Mexico. The impact caused the sudden disappearance of the dinosaurs, wiping out most of these creatures along with 75% of all life on Earth.
According to a study published in 2017, whose reference and abstract we reproduce below, the asteroid's collision with what amounted to a petroleum powder keg was the origin of this mass-extinction event. The soot thereby released into the atmosphere would then have caused extreme climatic cooling. According to the study's results, the planet's temperature would have fallen by 8 to 10 °C after the impact, with a drop of 1 to 8 °C over land.
Only 13% of the Earth's surface is made up of rocks capable of releasing such a quantity of soot, as the team behind the study points out. In other words, the non-avian dinosaurs might well not have disappeared had the asteroid followed a different trajectory.
Recall that the Cretaceous ended 66 million years ago with great volcanic eruptions in India and the impact of a meteorite 10 kilometres in diameter in the Gulf of Mexico (forming the Chicxulub crater).
Each of the two events is, taken in isolation, potentially catastrophic, but which one finally killed the non-avian dinosaurs long remained an enigma.
An international team of researchers, including Sietske Batenburg (Géosciences Rennes), published original results in January 2020 in the journal Science (Hull et al., 2020) that shed light on the circumstances of this planetary cataclysm by refining the chronology of events.
Assessing their relative importance, volcanism versus asteroid, is complicated by the uncertainty over the precise dating of the volcanic outgassing. To see more clearly, the researchers used carbon-cycle modelling and palaeotemperature records to focus on dating the period of volcanic outgassing. They found that the main outgassing episodes began and ended well before the impact.
Although the volcanism caused significant global warming by markedly increasing the greenhouse effect, the study's conclusions show that it ended 200,000 years before the end of the Cretaceous. It is therefore the asteroid impact, coinciding with the mass extinction, that constitutes the main cause of the dinosaurs' disappearance.
Had this event not occurred, the dinosaurs would plausibly have gone on evolving, developing larger brains capable of an intelligence comparable to, if not greater than, that of humans.
The mammals that were humanity's precursors would have remained what they were: large rats scurrying between the giants' feet.
See also "Rethinking dinosaurs' decline", New Scientist, 19 April 2025, p. 15.
Sixty-six million years ago, an asteroid approximately 9 km in diameter hit the hydrocarbon- and sulfur-rich sedimentary rocks in what is now Mexico. Recent studies have shown that this impact at the Yucatan Peninsula heated the hydrocarbon and sulfur in these rocks, forming stratospheric soot and sulfate aerosols and causing extreme global cooling and drought. These events triggered a mass extinction, including dinosaurs, and led to the subsequent macroevolution of mammals. The amount of hydrocarbon and sulfur in rocks varies widely, depending on location, which suggests that cooling and extinction levels were dependent on impact site. Here we show that the probability of significant global cooling, mass extinction, and the subsequent appearance of mammals was quite low after an asteroid impact on the Earth’s surface. This significant event could have occurred if the asteroid hit the hydrocarbon-rich areas occupying approximately 13% of the Earth’s surface. The site of asteroid impact, therefore, changed the history of life on Earth.
A publication in the journal Science reports that a mandible found by fishermen in the Penghu Channel, Taiwan, belongs to the species Homo denisovensis. This is a significant discovery for a group that remains poorly known despite a wealth of genetic data.
How far did the range of the Denisovans extend? It would have stretched at least from Siberia to Taiwan, where a new fossil has been recovered. It is not a whole body but the right side of a mandible, the lower part of the jaw, still bearing four intact teeth. It had initially been discovered in the Penghu Channel, which lies just off Taiwan, facing China. It was not palaeontologists but fishermen who found this fossil in their nets, among the fish, nearly 20 years ago, in 2008.
The fossil had ended up in an antique shop before being rediscovered by an amateur. During previous glacial periods, this strait was part of a stretch of emerged land connecting Taiwan and China, populated by numerous animal species. A previous fossil of a human of this group had been found in 2010 in a cave in the Altai mountains, in Russia, but it was only a finger bone.
To learn more about the Denisovans, see Silvana Condemi, palaeoanthropologist and CNRS research director attached to the ADES laboratory at Aix-Marseille University, author of L'Énigme Denisova, published by Albin Michel.
Denisovans are a Pleistocene hominin lineage first identified genomically and known from only a few fossils. Although genomic studies suggest that they were widespread throughout Asia, fossils of this group have thus far only been identified from regions with cold climates, Siberia and Tibet. Tsutaya et al. used ancient proteomic analysis on a previously unidentified hominin mandible from Taiwan and identified it as having belonged to a male Denisovan. This identification confirms previous genomic predictions of the group’s widespread occurrence, including in warmer climates. The robust nature of this mandible is similar to that seen in a Denisovan one from Tibet, suggesting that this is a consistent trait for the lineage. —Sacha Vignieri
Abstract
Denisovans are an extinct hominin group defined by ancient genomes of Middle to Late Pleistocene fossils from southern Siberia. Although genomic evidence suggests their widespread distribution throughout eastern Asia and possibly Oceania, so far only a few fossils from the Altai and Tibet are confidently identified molecularly as Denisovan. We identified a hominin mandible (Penghu 1) from Taiwan (10,000 to 70,000 years ago or 130,000 to 190,000 years ago) as belonging to a male Denisovan by applying ancient protein analysis. We retrieved 4241 amino acid residues and identified two Denisovan-specific variants. The increased fossil sample of Denisovans demonstrates their wider distribution, including warm and humid regions, as well as their shared distinct robust dentognathic traits that markedly contrast with their sister group, Neanderthals.
A synapse is a functional contact zone between two neurons, or between a neuron and another cell (muscle cells, sensory receptors…). It converts an action potential triggered in the presynaptic neuron into a signal in the postsynaptic cell. For some cell types (for example pyramidal cells or Purkinje cells), an estimated 40% of the membrane surface is covered with synapses. (Wikipedia)
Intervening in the brain to modify the transmission of nerve impulses is a risky operation. It can destroy wider areas of the superficial cortex, causing paralysis or hallucinations; obsessive-compulsive disorder (OCD) is notably cited in this context. Nevertheless, the intervention will increasingly be attempted, for example in cases of tumours or violent shocks causing local destruction of the cortex.
Hence, at Duke University in Durham (United States), the decision was taken to run the first experiments in mice. Their brain is, of course, far smaller than a human's, but mice are mammals close to humans as regards the major brain functions.
Synapses are distinguished under the electron microscope by the width of the synaptic cleft: around 2 nanometres for electrical synapses, between 10 and 40 nm for chemical ones. In the case of electrical synapses, the gap junctions can also be observed. A synapse is always a contact between two plasma membranes; they never fuse into a syncytium.
The researchers wanted to know whether artificial electrical synapses could replace the chemical signalling of natural synapses. Analogous work had been done with nematode worms, but their brain, if brain there is, contains only some 300 neurons, whereas a mouse has 71 million.
They then implanted connexins into the mice's brains. Connexins are a large family of transmembrane proteins that enable intercellular communication and the transfer of ions and small signalling molecules between cells. These connexins came from a fish called the white perch (Morone americana).
Note: the rest of the article is quoted below in the original English, untranslated for lack of time.
These connexins would later be used by the nerve cells on either side of the junction at the synapse, like the positive and negative parts of a circuit.
After identifying the right proteins, the next issue was knowing where to place them. “We implanted lots of electrodes about the size of a hair into many brain areas at the same time in mice and then we recorded their electrical activity,” says Dzirasa. “This gives an electrical map of how information is flowing through the brain.”
The team then exposed the mice to situations that induce behaviours like anxiety or aggression to see how this flow changed, pinpointing which brain cells should receive the engineered synapse.
Once these had been identified, the researchers injected a harmless virus into the mice’s brains to deliver the genetic information needed to make the connexins. This resulted in working electrical synapses that changed how electricity moved in a microcircuit in the frontal cortex. The mice then showed signs of being more explorative and sociable, suggesting this approach could help treat conditions like social anxiety.
“It’s a cute idea,” says David Spray at the Albert Einstein College of Medicine in New York. “It will likely provide a useful tool to answer the question of what would happen to activity patterns and behaviours if we added electrical synapses to specified cell types in neural circuits.”
The researchers also did a further experiment investigating the potential of this technique for preventing mental health issues. “We wanted to know if we could use this tool to promote resilience,” says Dzirasa.
To attempt this, Dzirasa and his colleagues targeted a long-range circuit between the frontal cortex and an area of the brain called the thalamus. They identified this circuit as important when mice are stressed, which is a sensation they may respond to by freezing in place. Introducing the engineered electrical synapses enhanced communication between these regions and stopped the mice from freezing.
“We have created an approach to edit the connection between cells, enabling targeted rewiring of the brain,” says Dzirasa. “It has the potential to edit many different types of genetically induced wiring deficits to prevent the emergence of psychiatric disorders.”
Katrin Amunts at the Jülich Research Centre in Germany says that while the research is at an early stage, the scientists “demonstrate in the mouse model that a targeted change at the subcellular level can have an effect at the behavioural level, so there is psychiatric relevance”.
Further work by Dzirasa and a different group of colleagues introduced connexins into juvenile mice genetically predisposed to develop OCD-like symptoms. “Normally, over time, the mice start grooming a lot, and the grooming can be so severe that they get these huge facial lesions that mirror the lesions that some people with OCD get when they compulsively wash their hands,” says Dzirasa.
The mice with the electrical synapses groomed less and about two-thirds of them never developed facial lesions, he says.
Despite the work being done in mice, Dzirasa selected connexins 34.7 and 35 partly on the basis that they should work similarly in people. Existing atlases of gene expression profiles in humans could also identify which cells to target.
“These gene expression patterns are like a GPS indicator,” he says, showing which cells do what. Viruses carrying the necessary genomic material could be injected into the bloodstream and then pass through the blood-brain barrier, which could also be opened via focused ultrasound, to target cells with the right profiles, says Dzirasa.
“I’m personally very excited,” says Ithai Rabinowitch at the Hebrew University of Jerusalem in Israel, part of the team that put an electrical synapse in C. elegans. “Engineering or editing synaptic connections provides a potential all-biological approach for elucidating neural circuit function and for potentially treating various diseases involving neural connectivity,” he says. “Importantly, once installed, these new connections drive neural circuit information flow and function completely autonomously, with no need for external activation or regulation.”
But brain editing in people is a long way off and raises ethical questions, says Dzirasa. “I just want to make sure there’s something available for people if they need it.”
Rabinowitch also wonders if the brain would respond to the changes by making new neural links that may undo the effects of the engineered synapses or create other potentially negative pathways. The intervention might also have unknown side effects, he says.
String theory today seems poised to become a theory of everything, that is, to explain everything that has so far been beyond science's grasp, from gravity to the Big Bang and black holes. The only problem is that it cannot explain a universe like ours, expanding at an accelerating rate.
In fact, no one has yet understood the reason for this expansion. A mysterious dark energy is invoked; but, according to current theories, nothing of the sort should be happening.
Today, however, some physicists propose a solution to this puzzle. For them, our universe would be just one point in a much vaster whole in which accelerated expansion is the natural rule, poised between a higher-dimensional hyperspace and an absolute void, as described by Antonio Padilla of the University of Nottingham (UK).
String theory, celebrated today, began modestly. It was then just a simple equation meant to make sense of collisions between protons and neutrons governed by what was called the "strong force", which overcomes gravity at short range when it comes to holding protons together in atomic nuclei. But, extending the idea, physicists suggested that each fundamental particle, electron, quark or Higgs boson, could be the endpoint of a minuscule string vibrating at its own distinct rhythm.
String theory's first success was being able to describe gravity. Later, the other forces were analysed as well. String theory thus became a "theory of everything".
Its mathematics is complex. We perceive only four dimensions: three of space and one of time. Some string theorists have suggested, however, that there are ten, the extra ones separated from ours by porous membranes; these dimensions would be so minuscule as to be unobservable by any instrument whatsoever. The membranes have been named branes. Unfortunately, string theory is so flexible that it can describe imaginary objects numbering at least 10⁵⁰⁰. What authority can it claim in that case?
What follows is the original English, neither translated nor summarised.
It might seem odd that string theory struggles with something so apparently mundane as an accelerating universe. The reason is that the rate of expansion comes from the very geometry of space-time, as defined by Albert Einstein’s general theory of relativity and later described in detail by cosmologist Willem de Sitter. In one solution to Einstein’s equations, space-time is spherical and expanding at an accelerating rate – what’s now known as a de Sitter (dS) space. In the alternative solution, space-time is saddle-shaped and cannot expand at all, called an anti-de Sitter (AdS) space. String theory strongly implies that only AdS space-times are stable and able to support themselves, even though a wealth of astronomical data, including from those distant supernovae, confirms that our present-day universe is dS.
The seed of an idea to get around this impasse was planted even before physicists had fully got to grips with dark energy and the accelerating universe. In 1999, in an attempt to solve an entirely different problem in string theory, theorists Lisa Randall and Raman Sundrum toyed with the concept of high-dimensional branes, albeit in a less extravagant, easier-to-describe, five-dimensional (5D) setting.
In geometry, the surface of an object always requires one dimension fewer than the object itself – for instance, each face of a 3D cube is a 2D square. Likewise, Randall and Sundrum discovered that it was possible to have a pair of 5D AdS space-times separated by a brane that was merely 4D, just like our universe. Moreover, Randall’s later work with theorist Andreas Karch showed that this brane would have an accelerating, dS geometry – again, just like our universe. Did we live on a brane? The possibility was tantalising.
Alas, despite its superficial promise, the Randall-Karch brane never much helped to ease string theory’s woes. The reason was that, sandwiched between two mammoth AdS space-times, it wasn’t much more stable than the few pure dS universes that could be wrenched out of string theory. About five years ago, however, Danielsson and his colleagues at Uppsala University had an epiphany. “We thought, what if instability wasn’t a problem?” he says. “What if we could turn it to our advantage?”
Every space-time model has a certain level of energy woven into its fabric, governing the types and behaviours of the particles, strings, branes and other entities that may be contained within it. If a space-time doesn’t reside at the lowest possible energy, quantum mechanics says it is inherently unstable and has the risk of “decaying”, suddenly transforming into a new universe in which the energy is lower. Danielsson’s group considered a 5D AdS space-time that begins high on this energy ladder, and found that if even a tiny part of it decays, this fragment quickly forms a “dark bubble” of lower-energy 5D AdS space-time. As in the Randall-Karch scenario, this bubble is enclosed by a 4D dS space-time like our own – yet crucially, it arises out of instability, rather than being at the mercy of it.
In this new scenario, the bubble membrane on which our cosmos is poised still wouldn’t be perfectly stable. But that just means another dark bubble would occasionally pop up, or nucleate, within its inner 5D space-time, which we can’t see. In fact, new dark bubbles would continually nucleate within each other, each enclosed by a dS brane – in effect, a new universe. In this grand multiverse, what we refer to as “the” big bang would be just the moment our parent bubble gave birth to ours.
Danielsson argues that this idea is actually more intuitive than accepted big-bang cosmology. “A favourite picture of the big bang is that it’s like a balloon expanding into space,” he says. “Usually someone tells you that’s wrong: the balloon isn’t expanding into space, because that extra space – the beyond – simply doesn’t exist. But with the dark bubble, actually it does.”
Leaking gravity
However, there is a fear that, on closer inspection, the bubble may burst. The issue has to do with that most pesky of nature’s forces, gravity, and, in particular, why it is so weak. A common illustration of gravity’s weakness is the comparative strength of a fridge magnet, which is able to exert enough electromagnetic force to stick to a metal door and, in so doing, counteract the gravitational pull of the entire planet. String theory offers a generic, hand-waving answer to gravity’s feebleness: with 10 dimensions to act in, gravity is somehow diluted more than the others. But if gravity is leaking away, theorists still have to explain why it isn’t so weak as to let planets escape their orbits and spinning galaxies fly apart.
The trick is to somehow confine gravity so that only a little – just the right amount – seeps away into the extra dimensions. Padilla argues that dark bubbles don’t successfully do this. After all, the dark bubble multiverse is infinite, so gravity can potentially leak anywhere. As a result, Danielsson’s group has had to introduce additional strings in their fifth dimension to tether gravity to the bubble membranes.
“To us, it looked like you have to jump through hoops to get gravity to look 4D,” says Padilla. For this reason, he and his colleagues Ben Muntz and Paul Saffin at the University of Nottingham began to toy with an alternative solution: get rid of the multiverse and have just one bubble. With no infinite space-time, gravity’s leakage is stemmed, allowing it to be weak, but not too weak. And while our world is still the 4D brane surrounding a 5D bubble, it is now also the barrier between a 5D cosmos and pure nothingness. In other words, it is literally the edge of the universe, an “end of the world” brane. “Of course, this also presents a problem,” says Padilla. “What’s nothing, and how can something come out of it?”
How to get something from nothing is a vexed question, to say the least. From the dawn of philosophical thought, people have wondered how space, time, substance – even the rules governing those things – can arise if there is nothing there to begin with. As physics deals with relationships between entities, there seems little hope that it can ever fully answer this mystery.
Particle physics today faces two difficulties. One approach, known as supersymmetry, for example, predicts new particles that would cancel the quantum fluctuations arising from the standard model of particle physics.
Supersymmetry (abbreviated SuSy) is a conjectured symmetry of particle physics that posits a deep relationship between the particles of half-integer spin (fermions) that make up matter and the particles of integer spin (bosons) that carry the interactions. Within SuSy, each fermion is associated with a "superpartner" of integer spin, while each boson is associated with a "superpartner" of half-integer spin. (Wikipedia)
An alternative solution has been proposed by Nima Arkani-Hamed, now at the Institute for Advanced Study in Princeton, New Jersey.
It holds that gravity can leak through extra dimensions, making it progressively weaker than it would otherwise be. Models based on this hypothesis predict a Planck scale lower than the conventional one, making gravity appear weaker than it actually is. The extra dimensions remain invisible because they are too small.
The Planck length, or Planck scale, is a unit of length belonging to the system of natural units known as Planck units, equal to about 1.616 × 10⁻³⁵ m. In particle physics and physical cosmology, the Planck scale is an energy scale of around 1.22 × 10²⁸ eV (the Planck energy, the energy equivalent of the Planck mass, 2.176 × 10⁻⁸ kg) at which the quantum effects of gravity become significant. (Wikipedia)
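For reference, these values follow from combining the fundamental constants ħ, G and c; in LaTeX notation, the standard formulas read:

  \ell_P = \sqrt{\hbar G / c^{3}} \approx 1.616 \times 10^{-35}\ \mathrm{m}
  E_P = \sqrt{\hbar c^{5} / G} \approx 1.22 \times 10^{19}\ \mathrm{GeV} \approx 1.22 \times 10^{28}\ \mathrm{eV}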
So far, however, these hypotheses have proved too timid to account for the new observations from the LHC, all the more so as it keeps producing them.
To sum up, particle physics is in crisis. That is why a small group of theorists has begun to explore an alternative to reductionism as we know it. Instead of studying the universe's different energy scales as independent entities, it treats them as conditioning one another.
In the same way, in a rainbow, the ultraviolet and the infrared, which we cannot see, bracket the other colours of the spectrum that we can see: red, orange, yellow, green, blue, indigo and violet. It is in the equivalent of these visible colours that the standard model of particle physics operates.
In the late 1990s, the physicists Andrew Cohen, David Kaplan and Ann Nelson, while studying black holes, calculated that there is a minimum energy at which the standard model ceases to be viable.
See Effective Field Theory, Black Holes, and the Cosmological Constant.
While studying black holes, Cohen and his colleagues calculated that there is a maximum length, or minimum energy, at which the standard model stops being valid. Beyond it, gravity takes over. It might seem intuitive that if there is a lower limit, there must also be an upper one. But crucially the researchers found that these seemingly unrelated cutoffs aren’t independent of each other. In other words, the physics at these vastly different energy scales seems to be related – a phenomenon dubbed UV/IR mixing.
The calculations didn’t suggest any concrete values for the low-energy cutoff. So Cohen and his collaborators tried out the largest scale they could think of: the radius of the observable universe. In a further fascinating twist, the corresponding UV cutoff to this IR cutoff turned out to be exactly the tiny energy value of the universe’s dark energy – not the Planck scale, after all. If the virtual particles contributing to dark energy abide by this limit, that could explain why these effects don’t drive dark energy to ridiculously large values.
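A compact restatement of that argument (a sketch in natural units, following the Cohen-Kaplan-Nelson paper cited above; Λ is the UV cutoff, L the IR length scale, M_P the Planck mass): requiring that the vacuum energy inside a region of size L not exceed the mass of a black hole of the same size ties the two cutoffs together:

  L^{3} \Lambda^{4} \lesssim L\, M_P^{2} \quad\Rightarrow\quad \Lambda \lesssim \sqrt{M_P / L}

Taking L to be the radius of the observable universe then gives a vacuum energy density \rho \sim \Lambda^{4} \sim M_P^{2}/L^{2} \sim (10^{-3}\ \mathrm{eV})^{4}, which is indeed the observed dark-energy scale.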
For a long time, no one took much notice of this result. Most people had their sights set on supersymmetry and its ability to resolve the problem of the Higgs particle. But recently the crisis in physics has become more apparent, as many potential solutions to the fine-tuning problem have fallen away. As a result, the insights of Cohen and his colleagues have been receiving a huge amount of interest from theorists like myself. I started to wonder: if UV/IR mixing might help to solve the dark energy problem, could it also assist with the second major problem in fundamental physics, namely the unbearable lightness of the Higgs?
To answer this question, Tom Kephart at Vanderbilt University in Tennessee and I first attempted to work out what the IR cutoff might be for the Higgs boson based on the limited lifetime of the particle. We determined a UV cutoff that is 11 orders of magnitude below the Planck scale. It is better than what we had, and yet still a million times too large for the Higgs mass we see. Adding extra dimensions could resolve the problem entirely.
Over recent years, theorists like me have tried several other ways to solve the Higgs problem using variations of UV/IR mixing – each coming from various angles. Some, like ours, take their inspiration from Cohen and his colleagues’ work on black holes. Others were born in string theory, which suggests everything is made of unbelievably tiny strings. None of the attempts so far is supported by experimental evidence, but they may get us a step in the right direction. A few of them even point to one fundamental property of underlying reality that could be causing this mixing to happen, with big implications for how we see the universe.
Quantum entanglement
Quantum entanglement is usually described as a startling correlation between quantum objects. Prepare two particles in a particular way and a measurement of one immediately fixes the other, regardless of the distance between them. But these correlations can be thought of as proof of the fact that entangled quantum systems can’t be understood as being made out of parts: they are one and the same. Just as this indivisibility links faraway particles, it also can link quantum effects at different energies. In other words, quantum entanglement could be responsible for the UV and the IR scales of the universe seemingly talking to each other.
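To make the "one and the same" point concrete, here is a minimal numerical sketch (plain NumPy, illustrative only): for the Bell state |Φ+⟩, each qubit taken alone is maximally random, yet the joint outcomes are perfectly correlated.

import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): one indivisible two-qubit state.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli-Z: measures a qubit along the z axis
I = np.eye(2)

def expectation(op, state):
    # <state|op|state> for a real state vector
    return state @ op @ state

print(expectation(np.kron(Z, I), phi_plus))  # qubit 1 alone: 0.0 (completely random)
print(expectation(np.kron(I, Z), phi_plus))  # qubit 2 alone: 0.0 (completely random)
print(expectation(np.kron(Z, Z), phi_plus))  # joint outcome: 1.0 (perfectly correlated)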
As we proceed up the size scale and down in energy, the entanglement could be broken by a process called decoherence. This well-understood quantum phenomenon hides entanglement from the eye of a local observer. It is the reason why we experience no quantum weirdness in our daily lives.
Some work has found a relationship between entanglement and UV/IR mixing, but the bounds in Cohen and his colleagues’ study were caused by gravity rather than entanglement. Excitingly, recent work by leading researchers in string theory offers a solution: by suggesting gravity itself may be entanglement in disguise.
It is a bold idea, but I suspect entanglement causes UV/IR mixing. If so, there are huge implications for understanding reality at its most fundamental. If entanglement can be applied to the entire cosmos, then instead of everything being made of smaller and smaller pieces, it would turn the universe into “a single, indivisible unit”, in the words of quantum pioneer David Bohm. All objects in existence would be encoded in a universal wave function, a mathematical entity that describes a single, entangled state.
Soon, we may know if this matches up with reality. Cohen and his collaborators suggested UV/IR mixing would affect the interaction of electrons or subatomic particles called muons with electromagnetic fields, showing up as a mismatch between the standard model’s predictions and measurements. And the phenomenon may crop up in other processes, too. One example my colleagues and I are currently exploring relates to neutrino masses. Unlike any other particles, the almost non-existent masses of the elusive neutrinos can be entirely generated by virtual particles, according to some models. This means they should be more sensitive than other particles to any UV/IR mixing effects.
If we do find evidence to support this idea, it would dramatically alter the way we conceive of the cosmos. It would mean we could not only see a world in a grain of sand, as the poet William Blake once said, but we could also quite literally see the entire universe in its tiniest pieces and particles. While this might sound like just a different way of going about physics, it is much more than that. I believe that we are on the way to a completely new understanding of how the universe is put together.
Heinrich Päs is a theoretical physicist at the Technical University of Dortmund in Germany and the author of All is One
On 1 July 2023, the European satellite Euclid left Earth to map part of the visible universe. The mission aims to understand how the universe is structured and why its expansion has been accelerating over the past 10 billion years, phenomena that involve dark energy and dark matter.
Euclid is the second medium-class mission (M2) of ESA's mandatory Cosmic Vision science programme, an astronomy and astrophysics mission selected at the SPC (Science Programme Committee) of 4 October 2011, then adopted at the SPC of 20 June 2012. It is a mission chiefly dedicated to cosmology, that is, to the study of the origin, nature, structure and evolution of the universe, and it will try to deepen our knowledge of two still mysterious components of our universe: dark energy and dark matter.
The mission has two main goals. The first is to understand why the expansion of the universe is accelerating under the effect of the still mysterious "dark energy". The second is to map the no less mysterious "dark matter": although invisible to our eyes and our instruments, it contributes, together with visible matter (stars, nebulae, etc.), to the gravitational effects that bind stars within galaxies and galaxies within clusters.
By observing ever farther, and thus looking ever further back in time, Euclid will attempt to reconstruct the evolution of our universe over the past 10 billion years under the opposing effects of dark matter and dark energy.
Over its 6-year nominal mission, Euclid is to observe roughly a third of the celestial sphere, a little under 15,000 square degrees, the remainder being obscured by the galactic plane (the disc in which the stars of the Milky Way orbit) and by the ecliptic plane (the disc in which the planets of our solar system orbit).
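As a quick sanity check on that "one third" (an illustrative calculation): the whole celestial sphere covers

  4\pi\ \mathrm{sr} \times (180/\pi)^{2} \approx 41\,253\ \mathrm{deg}^{2}

so 15,000 square degrees amounts to roughly 36% of the sky.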
To this survey will be added observations about 10 times deeper of three fields located near the ecliptic poles: one in the north covering 20 square degrees and two in the south each covering 10 square degrees. They will be revisited regularly throughout the mission and will provide calibration data and checks on the performance stability of the telescope and instruments, as well as science data for observing the most distant galaxies and quasars in the universe.
Euclid will thus observe billions of galaxies and the evolution of the universe's large-scale structures across the ages, up to 10 billion years into the past, in the visible and near-infrared (wavelengths from 550 to 2000 nm). To do so, the plan is to determine the spectral shifts towards the red (called redshift and denoted z) of the observed sources by spectrometric and photometric methods based on the instruments' measurements, supplemented, for the photometric measurements, by ground-based telescopes observing in the visible.
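For reference, the redshift z measures how much a known spectral line has been stretched by cosmic expansion:

  z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}}

For example, an Hα line emitted at 656 nm and detected at 1312 nm gives z = 1, squarely within Euclid's 550-2000 nm band.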
With its immense sky coverage and its catalogues of billions of stars and galaxies, the mission's scientific interest extends well beyond cosmology. This database will supply sources to the entire worldwide astronomical community for decades and will constitute a reservoir of new astronomical objects for observations involving telescopes such as the JWST, the E-ELT, the TMT, ALMA, SKA or the Vera C. Rubin Observatory.
To carry out this mapping work, Euclid carries two instruments: a near-infrared spectrophotometer called NISP (Near Infrared Spectro Photometer) and an imager working in the visible, the VIS instrument (VISible Instrument), developed by an international consortium led by the Institut d'Astrophysique de Paris (IAP/CNRS). The consortium brings together more than 2,200 people (425 of them in France) spread across some 250 laboratories (40 of them in France) in 16 countries.
Note
No one has yet understood the reason for the universe's expansion. A mysterious dark energy is invoked; according to current hypotheses, nothing of the sort should be happening. Today, however, some physicists propose a solution to this puzzle. For them, our universe would be just one point in a much vaster whole in which accelerated expansion is the natural rule, poised between a higher-dimensional hyperspace and an absolute void, as described by Antonio Padilla of the University of Nottingham (UK). As New Scientist observes, the laws of physics seem to follow a mysterious mathematical rule.
The French military observation satellite CSO-3 (Composante Spatiale Optique) was placed in orbit on 6 March 2025 from the Guiana Space Centre, on an Ariane 6 launcher, at 17:24 Paris time.
The launch marks the end of a renewal cycle for the space-based observation capabilities of French defence. The CSO system, composed of three satellites, the first two of which were orbited in 2018 and 2020, constitutes the new generation of military observation satellites.
The CSO satellites give the armed forces access to images of a quality unprecedented in Europe, to finer detail, and to smaller targets, by day in the visible and by night in the infrared. They thus represent significant added value for operational support, intelligence and targeting activities. CSO is also an agile system, offering users more images of the same geographical area in a single pass. Finally, its responsiveness allows better adaptation to the tempo of operations.
It is an indispensable tool of France's defence policy, guaranteeing autonomous assessment and decision-making sovereignty in space.
The CSO system, developed in a national framework within the MUSIS programme led by the DGA on behalf of the CDE, is resolutely open to European partnerships through bilateral agreements with eight partners. To date, Germany (2015), Sweden (2015), Belgium (2017), Italy (2019), Spain (2021), Switzerland (2023), Poland (2024) and Greece (2024) have joined the CSO community via cooperation agreements.
The DGA is the contracting authority for the MUSIS programme, in an integrated team with the CDE, a major command of the French Air and Space Force. It directly oversees the user ground segment (SSU) as well as everything relating to the establishment of the cooperation partnerships.
The DGA has delegated to CNES the contracting authority for building and launching the CSO satellites, and for the mission ground segment (SSM). Industrial prime contractorship for the CSO satellites is provided by the consortium of Thales Alenia Space and Airbus Defence and Space. Arianespace provides the launch services.
For 100 years, quantum theory has painted the subatomic world as strange beyond words. But bold new interpretations and experiments may help us to finally grasp its true meaning
The problem with quantum theory (QT) is that it does not explain its relationship to ordinary, sometimes called macroscopic, physics. As a result, we fail to grasp what this masterpiece means for our understanding of reality.
There is no shortage of ideas, all the more so as they do not lend themselves to experimental verification. According to the physicist David Mermin, "new interpretations appear every year. None ever disappear."
Over the past 10 years, things have begun to change. QT has made explicit predictions, verifiable by observation, even if they rest on the hypothesis that there is no objective reality. Moreover, physicists keep finding new ways of judging the validity of these claims.
According to Eric Cavalcanti, a quantum physicist at Griffith University in Queensland, Australia, the possibilities are growing steadily.
Quantum theory
Ever since Isaac Newton formulated his laws of motion and gravitation in the 17th century (cf. Carlo Rovelli, "On what we get wrong about the origins of quantum theory"), physicists had built their theories on the premise that physical systems were defined by equations specifying how they evolve over time. But subatomic particles such as the electron and the photon could behave like waves, or exist in a "superposition" of several states simultaneously, unless they were "measured". What happened before the measurement? QT did not say.
The Schrödinger equation introduced a mathematical object, the "wave function", to summarise all the possible states and to compute the probability of finding the particle in a given state upon measurement, a measurement at which the wave function was said to "collapse". But a single measurement was not enough; several were needed, hence the recourse to probabilities.
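In symbols, the two ingredients just described are the deterministic evolution of the wave function Ψ and the probabilistic Born rule applied at measurement:

  i\hbar\, \frac{\partial \Psi}{\partial t} = \hat{H}\, \Psi, \qquad P(a) = |\langle a | \Psi \rangle|^{2}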
What happens before a measurement, and what exactly does a measurement mean? Here again probabilities had to be invoked, for QT does not say. The so-called Copenhagen interpretation obliges one to "calculate" in order to answer these questions. As Mermin put it, "shut up and calculate". But Copenhagen has been contested from the outset, ever since Albert Einstein declared that "God does not play dice with the universe".
Many physicists today take the view of Roderich Tumulka, of the University of Tübingen in Germany: "we want statements about the true nature of reality. It is not acceptable to think that mere humans can collapse the wave function."
Tumulka is among those who regard the wave function as physically real, representing the world as it exists whether we like it or not; it need only be extended where necessary. The most famous of these extensions is the many-worlds interpretation, which holds that upon measurement the wave function is realised in an infinity of separate universes connected to ours. The article continues below in the original English, untranslated.
But there is also objective collapse, a suite of models proposing that quantum mechanics is incomplete and that something else has to be tacked onto the Schrödinger equation to explain wave function collapse. “The [key] difference with the standard interpretation is that the collapse of the wave function is not something that occurs by magic at the end of the measurement process,” says Angelo Bassi, a theorist at the University of Trieste in Italy. “It’s just part of the dynamics.”
Collapse models have garnered more attention than most in recent years, partly because they offer a plausible explanation of how classical reality emerges without reference to human observers. We don’t see large objects like picture frames and paint brushes in a superposition, it says, because the collapse process works in such a way that the more interacting particles there are, the more readily collapse occurs.
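One concrete illustration, from the original Ghirardi-Rimini-Weber (GRW) collapse model (the numbers are that model's usual benchmark values, not figures from this article): each particle localises spontaneously at a tiny rate λ, but the rates add up over a body's N constituents:

  \lambda \sim 10^{-16}\ \mathrm{s}^{-1}, \qquad \Gamma \sim N\lambda \sim 10^{7}\ \mathrm{s}^{-1}\ \ \text{for}\ N \sim 10^{23}

A lone particle thus keeps its superposition for millions of years, while a macroscopic object collapses almost instantly.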
What triggers this continuous collapsing isn’t entirely clear. Some models don’t say, others posit that it is just gravity. But Bassi says there may ultimately be no good answer – it may just be a property of nature. “That’s why I like collapse models, because they try to open the door to a new world which we don’t understand at the moment – something beyond quantum mechanics that we are not grasping.”
What really sets collapse models apart, however, is that they can be put to the test. Uniquely, they make explicit observational predictions that differ from what standard quantum mechanics predicts. The idea is that this constant process of spontaneous collapse should cause quantum objects such as particles to constantly jiggle around, which, in turn, means they emit excess energy that should be detectable, even if the signal is extremely faint.
Testing quantum interpretations
For the past decade, Bassi has been working with colleagues around the world on an ambitious experimental programme in search of such a signal. They have mostly been repurposing detectors designed to sense hints of dark matter or elusive particles called neutrinos, such as the ultra-sensitive instruments located deep underground beneath the Gran Sasso massif in Italy. And the results are trickling in. In 2020, for instance, a team including Bassi and Cătălina Curceanu, an experimentalist at Italy’s National Institute of Nuclear Physics, was able to rule out the simplest form of one model in which gravity does the collapsing.
Similar experiments are ongoing, and with each new analysis we get fresh constraints on which, if any, of these models might work. But while the fact that we finally have a shot at ruling out objective collapse with experimentation is itself progress, actually doing so is a slow process. “So far, we saw no signal, but this is just the beginning,” says Bassi.
If we were to detect a signal that everyone can agree supports objective collapse, it would surely be worthy of a Nobel prize. Whether that would immediately tell us anything about the meaning of quantum theory is another matter, according to Magdalena Zych at Stockholm University in Sweden, because we would still have to figure out what it is in the environment that is doing the collapsing.
“It would solve the measurement problem in the sense of, if you believe that quantum theory is missing something, this is it,” says Zych. “But it doesn’t really reveal what quantum mechanics is telling us about reality, because you still have to impose some meaning yourself to some extent: you have to say what is the ‘noise’ in the environment [that collapses the wave function].”
More importantly, Zych says we would also be none the wiser about why the observable properties of quantum objects emerge in a probabilistic way, from the act of measurement itself. “That’s really the deep mystery of all this, the fact that we have to speak about probabilities at all,” she says. There is no self-evident reason why the behaviour of subatomic particles cannot be governed by deterministic laws. The fact that they aren’t demands an explanation.
Quantum Bayesianism
For Zych, the take on quantum mechanics that tackles that challenge head on falls into a whole different category of interpretations. While the likes of Bassi and Tumulka insist that quantum states are real, some physicists take a starkly different view: that they don’t represent independent reality at all.
Arguably the most striking example of this approach is QBism, originally known as Quantum Bayesianism because it is founded on a framework for interpreting probabilities first developed by 18th-century minister Thomas Bayes.
Conventionally, probabilities are viewed in “frequentist” terms: we count up the outcomes of many coin tosses to conclude that the odds of getting heads or tails are 50/50. Similarly, many measurements of a particle give you the relative probability of it having one state or another when measured. The Bayesian approach, by contrast, recasts probability as a subjective value that updates as you gain more information.
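A minimal sketch of that Bayesian updating in Python (illustrative only; the grid resolution, flat prior and toss sequence are arbitrary choices):

import numpy as np

grid = np.linspace(0, 1, 101)  # candidate values for the coin's bias towards heads
belief = np.ones_like(grid)    # flat prior: every bias equally plausible at first
belief /= belief.sum()

for outcome in [1, 1, 0, 1]:   # observed tosses: 1 = heads, 0 = tails
    likelihood = grid if outcome == 1 else 1 - grid
    belief *= likelihood       # Bayes' rule, up to normalisation
    belief /= belief.sum()

print(grid @ belief)  # posterior mean bias ~ 0.667 after 3 heads and 1 tail

Each new toss reshapes the belief; a frequentist would instead report only the long-run ratio of heads.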
Running with this idea, the central argument of QBism is that quantum mechanics is similarly subjective. It supplies recommendations about what an observer should believe about what they will see on making a measurement, allowing them to update those beliefs as they take into account fresh experiences. “It’s a theory for agents to navigate the world,” says Ruediger Schack at Royal Holloway, University of London, who developed QBism with Chris Fuchs at the University of Massachusetts Boston.
The appeal of this interpretation is that it seems to address several quantum conundrums at once. It deals with the measurement problem by providing and even requiring a central role for subjective experience. The mysterious collapse of the wave function is simply the observer updating their beliefs on making a measurement, says Schack.
QBism’s answer to the question of how classical reality emerges from the quantum fog, meanwhile, is that it is a result of our actions on the world, of our constant updating of our beliefs about it. The idea even makes light work of a notorious conundrum known as the Wigner’s friend paradox, a thought experiment proposed in the 1950s by physicist Eugene Wigner. Essentially, it demonstrates that two observers – Wigner and a friend observing him making measurements on a quantum system – can have two contradictory experiences of reality. For a QBist, there is no paradox because a measurement outcome is always personal to the person experiencing it. All of which means that QBism stands starkly athwart the idea that it is possible to achieve an objective view on the universe. But that is exactly the point, says Schack, and this is the great lesson of quantum mechanics: that reality is more than any third-person perspective can capture. “It’s a radically different way of looking at the world.”
Others find QBism hard to swallow. Bassi, for instance, insists that objective reality is too high a price to pay. “What physics is about is describing nature in an objective way,” he says. Another problem is that QBism doesn’t appear to offer any observable predictions differing from standard quantum mechanics, and no realistic prospect of submitting to experimental tests. “Convincing people might be a case of pointing out the inadequacies of the alternatives,” says Schack.
That arguably leaves us back where we started. If our best hope of an empirical solution to the measurement problem would leave open questions even if it were proved correct, and an alternative that can address those questions can’t be tested, where do we go from here?
There might still be cause for optimism. In the past few years, some physicists have begun to demonstrate that the assumptions underpinning how we think about the meaning of quantum theory – typically considered more in the realm of metaphysics than science – might themselves submit to testing.
Experimental metaphysics
They call it experimental metaphysics. “It’s an approach that tries to be clear about the landscape of metaphysical assumptions made by different interpretations,” says Cavalcanti, who is one of its key proponents. Among those assumptions are the absoluteness of observed events, which is to say that the outcomes of a measurement are the same for all observers; freedom of choice, the notion that the outcome of any measurement isn’t due to factors involved in the measurement; and locality, or the idea that a free choice cannot influence the observed outcome of an experiment at a distance or in the past. “Individually, these may not be testable, but when you group them together, they can be,” says Cavalcanti. In this way, you can potentially at least disprove classes of quantum interpretation, he says.
Cavalcanti was part of the team behind the most powerful demonstration of this approach to date. In 2020, he and his colleagues used photons to perform an extended version of the Wigner’s friend thought experiment that also involved entanglement, another quantum phenomenon that links particles across vast distances. In short, they found that if standard quantum mechanics is right – if we find no signals for objective collapse, for example – we must abandon one of these assumptions: locality, freedom of choice or the absoluteness of observed events.
That placed the most stringent constraints yet on physical reality, says Cavalcanti. “If you want to keep the notion of freedom of choice, together with locality, then you need to reject the assumption of absoluteness of observed events,” says Cavalcanti – just as QBism insists we must. So, although we aren’t at a stage where we can say QBism or any other interpretation is the right way to think about the meaning of quantum mechanics, “we can now narrow down the possibilities,” says Cavalcanti.
He now wants to go further. In their 2020 experiment, Cavalcanti and his colleagues used photon detectors in place of Wigner and photons themselves as a proxy for his friend. Yet photons are obviously a far cry from the human observers imagined by Wigner in the 50s, and most people would presumably say photons don’t count as observers. It is extremely difficult to keep a molecule comprising a couple of thousand atoms in a superposition, owing to the fragility of quantum states, never mind anything approaching the complexity of a human. But Cavalcanti and his colleagues have suggested that we might one day do the same experiment with an advanced artificial intelligence algorithm running on a large quantum computer, performing a simulated experiment in a simulated lab (see “What exactly would a full-scale quantum computer be useful for?”). That, he says, could show us whether we really do have to relinquish our cherished notion of objectivity – even if we are a long way from being able to do such an experiment.
Quantum gravity
What, then, after all that, are the prospects for some sort of resolution on what quantum mechanics is really telling us about reality? In some ways, we are no further along than we were when the pioneers of quantum mechanics fell out over its meaning. “What we do know for sure is that a certain classical way of looking at the world fails, and we can demonstrate that with mathematical and experimental certainty as much as we can know anything in science,” says Cavalcanti. For now, we have to each decide for ourselves which of the various interpretations of what quantum mechanics means is more appealing based on theoretical considerations – whether you are prepared to give up one assumption or another, and what price you are happy to pay in turn for keeping the assumptions you prize above all else.
Cavalcanti says we would ideally get some guidance from our attempts to figure out if quantum mechanics fits with Einstein’s general theory of relativity, which describes gravity as the result of mass warping space-time. If a particular interpretation helps us make progress on that front, he says, it would be a strong clue. “I think these foundational experiments are relevant here,” he says. “Because the question of whether or not events are absolute is important for the construction of a viable theory of quantum gravity.”
In the meantime, we have at least begun to clarify things by putting the problems quantum mechanics throws up in terms we can understand and devising experiments that can narrow down the plausible solutions. And all we can do is to strive for ever more sophisticated ways to do that, says Cavalcanti. “I think you can’t understand the world less by understanding more than one way to see it.”
For 100 years, quantum theory has painted the subatomic world as strange beyond words. But bold new interpretations and experiments may help us to finally grasp its true meaning
The problem with quantum theory (QT) is that it does not explain how it relates to ordinary, so-called macroscopic physics. As a result, we fail to grasp what this masterpiece of a theory means for our understanding of reality.
There is no shortage of ideas, all the more so since they do not lend themselves to experimental verification. As the physicist David Mermin puts it, "new interpretations appear every year. None ever disappears."
Over the past ten years, though, things have begun to change. Interpretations of QT have started to make explicit predictions that can be checked by observation, even if some rest on the assumption that there is no objective reality. Physicists, moreover, keep finding new ways to judge the validity of these claims.
According to Eric Cavalcanti, a quantum physicist at Griffith University in Queensland, Australia, the opportunities to do so are growing steadily.
Quantum theory
Ever since Isaac Newton formulated his laws of motion and gravitation in the 17th century (cf. Carlo Rovelli, "On what we get wrong about the origins of quantum theory"), physicists had built their theories on the premise that physical systems are described by equations specifying how they evolve over time. But subatomic particles such as electrons and photons turned out to behave like waves, or to exist in a "superposition" of several states at once, until they are "measured". What happens before the measurement? QT did not say.
Schrödinger's equation introduced a mathematical object, the "wave function", to encapsulate all the possible states and to compute the probability of finding the particle in a given state upon measurement, at which point the wave function is said to "collapse". But a single measurement is not enough; many are needed, which brings the probabilities back in.
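In symbols — a minimal sketch of the textbook formalism summarised above, not a formula from the article itself — the wave function evolves smoothly and deterministically between measurements, while measurement outcomes are only probabilistic:

\[ i\hbar \,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle \qquad \text{(Schrödinger equation)} \]

\[ p(a) = |\langle a|\psi\rangle|^{2} \qquad \text{(Born rule)} \]

The first equation never produces a single definite outcome on its own; the second assigns the probability of finding the system in the state \(|a\rangle\), at which point the wave function is said to collapse onto that state. The measurement problem is precisely the gap between these two rules.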
What happens before a measurement, and what exactly does a measurement mean? Here again probabilities must be invoked, because QT does not specify. The so-called Copenhagen interpretation tells us to answer such questions by calculating. As Mermin summarised it: "shut up and calculate". But Copenhagen has been contested from the beginning, ever since Albert Einstein declared that "God does not play dice with the universe".
Many physicists today take the view of Roderich Tumulka, of the University of Tübingen in Germany: "We want statements about the true nature of reality. It is not acceptable to think that mere humans can make the wave function collapse."
Tumulka is among those who regard the wave function as physically real, representing the world as it exists whether we like it or not; it merely needs to be extended where necessary. The most famous of these extensions is the many-worlds interpretation, which holds that upon measurement the wave function is realised in an infinity of separate universes alongside our own.
The article continues below in its original, untranslated English version.
But there is also objective collapse, a suite of models proposing that quantum mechanics is incomplete and that something else has to be tacked onto the Schrödinger equation to explain wave function collapse. “The [key] difference with the standard interpretation is that the collapse of the wave function is not something that occurs by magic at the end of the measurement process,” says Angelo Bassi, a theorist at the University of Trieste in Italy. “It’s just part of the dynamics.”
Collapse models have garnered more attention than most in recent years, partly because they offer a plausible explanation of how classical reality emerges without reference to human observers. We don't see large objects like picture frames and paint brushes in a superposition, these models say, because the collapse process works in such a way that the more interacting particles there are, the more readily collapse occurs.
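One concrete example of this amplification — a sketch based on the Ghirardi–Rimini–Weber (GRW) model, the oldest of the collapse models, whose rate constant is a postulate of that model rather than a measured quantity — makes the point numerically:

\[ \Gamma_{\text{collapse}} \approx N\,\lambda, \qquad \lambda \sim 10^{-16}\ \mathrm{s}^{-1} \]

A lone particle (\(N = 1\)) would stay in superposition for hundreds of millions of years on average, while a visible object with \(N \sim 10^{23}\) constituent particles would localise in about \(10^{-7}\) seconds. That is why electrons show interference while paint brushes never appear in two places at once.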
One new interpretation can solve several quantum mysteries in one fell swoop
What triggers this continuous collapsing isn't entirely clear. Some models don't say; others posit that it is just gravity. But Bassi says there may ultimately be no good answer – it may just be a property of nature. "That's why I like collapse models, because they try to open the door to a new world which we don't understand at the moment – something beyond quantum mechanics that we are not grasping."
What really sets collapse models apart, however, is that they can be put to the test. Uniquely, they make explicit observational predictions that differ from what standard quantum mechanics predicts. The idea is that this constant process of spontaneous collapse should cause quantum objects such as particles to constantly jiggle around, which, in turn, means they emit excess energy that should be detectable, even if the signal is extremely faint.
Testing quantum interpretations
For the past decade, Bassi has been working with colleagues around the world on an ambitious experimental programme in search of such a signal. They have mostly been repurposing detectors designed to sense hints of dark matter or elusive particles called neutrinos, such as the ultra-sensitive instruments located deep underground beneath the Gran Sasso massif in Italy. And the results are trickling in. In 2020, for instance, a team including Bassi and Cătălina Curceanu, an experimentalist at Italy’s National Institute of Nuclear Physics, was able to rule out the simplest form of one model in which gravity does the collapsing.
Similar experiments are ongoing, and with each new analysis we get fresh constraints on which, if any, of these models might work. But while the fact that we finally have a shot at ruling out objective collapse with experimentation is itself progress, actually doing so is a slow process. “So far, we saw no signal, but this is just the beginning,” says Bassi.
If we were to detect a signal that everyone can agree supports objective collapse, it would surely be worthy of a Nobel prize. Whether that would immediately tell us anything about the meaning of quantum theory is another matter, according to Magdalena Zych at Stockholm University in Sweden, because we would still have to figure out what it is in the environment that is doing the collapsing.
“It would solve the measurement problem in the sense of, if you believe that quantum theory is missing something, this is it,” says Zych. “But it doesn’t really reveal what quantum mechanics is telling us about reality, because you still have to impose some meaning yourself to some extent: you have to say what is the ‘noise’ in the environment [that collapses the wave function].”
More importantly, Zych says we would also be none the wiser about why the observable properties of quantum objects emerge in a probabilistic way, from the act of measurement itself. “That’s really the deep mystery of all this, the fact that we have to speak about probabilities at all,” she says. There is no self-evident reason why the behaviour of subatomic particles cannot be governed by deterministic laws. The fact that they aren’t demands an explanation.
Quantum Bayesianism
For Zych, the take on quantum mechanics that tackles that challenge head on falls into a whole different category of interpretations. While the likes of Bassi and Tumulka insist that quantum states are real, some physicists take a starkly different view: that they don’t represent independent reality at all.
Arguably the most striking example of this approach is QBism, originally known as Quantum Bayesianism because it is founded on a framework for interpreting probabilities first developed by 18th-century minister Thomas Bayes.
Conventionally, probabilities are viewed in “frequentist” terms: we count up the outcomes of many coin tosses to conclude that the odds of getting heads or tails are 50/50. Similarly, many measurements of a particle give you the relative probability of it having one state or another when measured. The Bayesian approach, by contrast, recasts probability as a subjective value that updates as you gain more information.
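To make the contrast concrete, here is a toy numerical sketch of Bayesian updating for a coin — our illustration only, using the standard Beta-Binomial textbook recipe, not anything specific to QBism:

# Bayesian updating of a belief about a coin's bias.
# Prior: Beta(alpha, beta) over P(heads); each observed toss updates it.
# A frequentist would instead wait for the long-run frequency.

def update(alpha, beta, is_heads):
    """Fold one observed toss into the Beta posterior."""
    return (alpha + 1, beta) if is_heads else (alpha, beta + 1)

alpha, beta = 1, 1               # flat prior: every bias equally plausible
for toss in "HHTHHHTH":          # a short run of observations
    alpha, beta = update(alpha, beta, toss == "H")
    print(f"after {toss}: expected P(heads) = {alpha / (alpha + beta):.2f}")

Each printed line is the agent's current degree of belief; the coin's "true" bias never has to enter the calculation.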
Running with this idea, the central argument of QBism is that quantum mechanics is similarly subjective. It supplies recommendations about what an observer should believe about what they will see on making a measurement, allowing them to update those beliefs as they take into account fresh experiences. “It’s a theory for agents to navigate the world,” says Ruediger Schack at Royal Holloway, University of London, who developed QBism with Chris Fuchs at the University of Massachusetts Boston.
The appeal of this interpretation is that it seems to address several quantum conundrums at once. It deals with the measurement problem by providing and even requiring a central role for subjective experience. The mysterious collapse of the wave function is simply the observer updating their beliefs on making a measurement, says Schack.
QBism’s answer to the question of how classical reality emerges from the quantum fog, meanwhile, is that it is a result of our actions on the world, of our constant updating of our beliefs about it. The idea even makes light work of a notorious conundrum known as the Wigner’s friend paradox, a thought experiment proposed in the 1950s by physicist Eugene Wigner. Essentially, it demonstrates that two observers – Wigner and a friend observing him making measurements on a quantum system – can have two contradictory experiences of reality.
For a QBist, there is no paradox because a measurement outcome is always personal to the person experiencing it. All of which means that QBism stands starkly athwart the idea that it is possible to achieve an objective view on the universe. But that is exactly the point, says Schack, and this is the great lesson of quantum mechanics: that reality is more than any third-person perspective can capture. “It’s a radically different way of looking at the world.”
What really sets collapse models apart is that they can be put to the test
Others find QBism hard to swallow. Bassi, for instance, insists that objective reality is too high a price to pay. “What physics is about is describing nature in an objective way,” he says. Another problem is that QBism doesn’t appear to offer any observable predictions differing from standard quantum mechanics, and no realistic prospect of submitting to experimental tests. “Convincing people might be a case of pointing out the inadequacies of the alternatives,” says Schack.
That arguably leaves us back where we started. If our best hope of an empirical solution to the measurement problem would leave open questions even if it were proved correct, and an alternative that can address those questions can’t be tested, where do we go from here?
There might still be cause for optimism. In the past few years, some physicists have begun to demonstrate that the assumptions underpinning how we think about the meaning of quantum theory – typically considered more in the realm of metaphysics than science – might themselves submit to testing.
Experimental metaphysics
They call it experimental metaphysics. "It's an approach that tries to be clear about the landscape of metaphysical assumptions made by different interpretations," says Cavalcanti, who is one of its key proponents. Among those assumptions are the absoluteness of observed events, which is to say that the outcomes of a measurement are the same for all observers; freedom of choice, the notion that experimenters can choose their measurement settings independently of the factors that determine the outcome; and locality, or the idea that a free choice cannot influence the observed outcome of an experiment at a distance or in the past. "Individually, these may not be testable, but when you group them together, they can be," says Cavalcanti. In this way, you can potentially at least disprove classes of quantum interpretation, he says.
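Schematically — our paraphrase of how such results are usually framed, not a formula from the article — the test works as a no-go statement:

\[ \text{absoluteness of observed events} \;\wedge\; \text{locality} \;\wedge\; \text{freedom of choice} \;\Rightarrow\; \text{inequalities on observable statistics} \]

If an experiment violates those inequalities, the conjunction is false: at least one of the three assumptions must be abandoned, even though the test cannot say which. That is what makes a group of individually untestable metaphysical assumptions empirically accessible.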
Cavalcanti was part of the team behind the most powerful demonstration of this approach to date. In 2020, he and his colleagues used photons to perform an extended version of the Wigner’s friend thought experiment that also involved entanglement, another quantum phenomenon that links particles across vast distances. In short, they found that if standard quantum mechanics is right – if we find no signals for objective collapse, for example – we must abandon one of these assumptions: locality, freedom of choice or the absoluteness of observed events.
That placed the most stringent constraints yet on physical reality, says Cavalcanti. “If you want to keep the notion of freedom of choice, together with locality, then you need to reject the assumption of absoluteness of observed events,” says Cavalcanti – just as QBism insists we must. So, although we aren’t at a stage where we can say QBism or any other interpretation is the right way to think about the meaning of quantum mechanics, “we can now narrow down the possibilities,” says Cavalcanti.
He now wants to go further. In their 2020 experiment, Cavalcanti and his colleagues used photon detectors in place of Wigner and photons themselves as a proxy for his friend. Yet photons are obviously a far cry from the human observers imagined by Wigner in the 1950s, and most people would presumably say photons don't count as observers. It is extremely difficult to keep a molecule comprising a couple of thousand atoms in a superposition, owing to the fragility of quantum states, never mind anything approaching the complexity of a human. But Cavalcanti and his colleagues have suggested that we might one day do the same experiment with an advanced artificial intelligence algorithm running on a large quantum computer, performing a simulated experiment in a simulated lab (see "What exactly would a full-scale quantum computer be useful for?"). That, he says, could show us whether we really do have to relinquish our cherished notion of objectivity – even if we are a long way from being able to do such an experiment.
Quantum gravity
What, then, after all that, are the prospects for some sort of resolution on what quantum mechanics is really telling us about reality? In some ways, we are no further along than we were when the pioneers of quantum mechanics fell out over its meaning. “What we do know for sure is that a certain classical way of looking at the world fails, and we can demonstrate that with mathematical and experimental certainty as much as we can know anything in science,” says Cavalcanti.
For now, we have to each decide for ourselves which of the various interpretations of what quantum mechanics means is more appealing based on theoretical considerations – whether you are prepared to give up one assumption or another, and what price you are happy to pay in turn for keeping the assumptions you prize above all else.
Cavalcanti says we would ideally get some guidance from our attempts to figure out if quantum mechanics fits with Einstein’s general theory of relativity, which describes gravity as the result of mass warping space-time. If a particular interpretation helps us make progress on that front, he says, it would be a strong clue. “I think these foundational experiments are relevant here,” he says. “Because the question of whether or not events are absolute is important for the construction of a viable theory of quantum gravity.”
In the meantime, we have at least begun to clarify things by putting the problems quantum mechanics throws up in terms we can understand and devising experiments that can narrow down the plausible solutions. And all we can do is to strive for ever more sophisticated ways to do that, says Cavalcanti. “I think you can’t understand the world less by understanding more than one way to see it.”
This article is part of a special series celebrating the 100th anniversary of the birth of quantum theory.