A QUANTUM OF DESPAIR

What if science is becoming toxic to human society?

 

In the summer of 1950, Nobel-winning nuclear physicist Enrico Fermi didn’t understand the scale of the cosmos as we understand it today. But he knew that it was at least hundreds of millions of light-years across, encompassing thousands of galaxies and more than a trillion stellar systems. Thus, he reckoned, unless Earth was a vanishingly rare exception, the universe should be teeming with life-forms, and many of their civilizations would be far more advanced than ours.

And yet, as he asked his lunchmates at Los Alamos one day that summer, “where is everybody?” ET visitations to Earth were either absent entirely, or—assuming some were marked by UFO sightings—difficult to pin down as such. This discrepancy between the theoretical abundance of ETs on the one hand, and their actual scarcity or elusiveness on the other, became known as Fermi’s Paradox.

Prominent among the proposed solutions to this paradox are those that posit a common barrier to civilizational progress, a barrier that typically halts a civilization’s development, or even extinguishes the civilization outright, before it can reach the star-faring stage. Elon Musk, for example, seems convinced that self-made (e.g., nuclear war, rogue AI) and/or natural (e.g., asteroid strike) disasters tend to smother early-stage, single-planet civs in their cradles, so to speak—which is why he urgently wants to make humans “multiplanetary” by colonizing Mars.

One assumption shared by virtually all hypotheses purporting to explain Fermi’s Paradox is that humans (and other civ-building species) will always possess the motive to become star-farers, even if disasters or technical obstacles stand in their way. But is this assumption really justified? What if the scientific advances that are needed for star-faring have the side effect of destroying the will to pursue that goal?

 

Toxic Science

Our present civilization has put science on the pedestal where religion used to be, and so we are strongly encouraged to think of it as an unqualified good. Nevertheless, the idea that science carries with it a certain psychological toxicity is a long-standing one. As philosophers as different as Pascal and Nietzsche have argued, science cannot satisfy and often clashes directly with our deep-seated anthropocentrism, our conceit that the universe is about us, our need for higher “meaning” and “purpose”—and in so doing it tends to incubate nihilism and despair.

One might counter that we have already coped pretty well with centuries of scientific findings (Copernicus, Darwin) that challenge our traditional needs and conceits. My suggestion here is that this toxic process is just getting started, and probably is still too subtle to detect easily—though ultimately it could induce civilizational collapse, perhaps with reversions to simpler, static social forms (think of the Amish, or the Taliban) that no longer yearn to go ad astra.

 

Billions and Billions

How new scientific findings about humans’ origins and place in the cosmos influence the worldviews and ethical frameworks of individuals is not well understood—to put it mildly—since so much of this influence occurs beneath conscious awareness. But surely it is fair to say that this influence can be tempered or blocked for a long time by “denial” mechanisms, and ultimately may take generations or even centuries to play out. Evolutionary theory, for example, which became the scientific consensus more than 150 years ago, does not yet seem to have fully replaced our ancient picture of ourselves as creatures made in God’s image. Neuroscience’s debunking of our “free will” illusion has hardly been assimilated at all—the illusion is a powerful one.

But what about Copernicus? Didn’t he demote us from the center of the universe almost five centuries ago? And didn’t we adapt to that easily enough?

Perhaps we did, but I think that was only because Copernicus’s theory was far less revolutionary than it is commonly said to have been. The traditional, pre-Copernican model of the cosmos, which was based on our instinctive anthropocentrism and a naïve interpretation of the movements of lights in the sky, held that the Earth with its God-chosen beings lay at the center of existence, while all else revolved around it. Copernicus’s model made just one change, putting our sun at the center, which allowed a simpler, more elegant account of celestial motions even as it kept our stellar system in its place of supreme privilege.

It was only much more recently that the traditional (and literal) anthropocentrism of our cosmological models was conclusively overturned. Until almost exactly a century ago—the mid 1920s—leading astronomers continued to believe that our sun sits at or near the center of the universe, and that the universe consists only of our Milky Way galaxy. Around that time, better techniques for estimating stellar distances revealed that Earth, in fact, lies far from the Milky Way’s center, and that the Milky Way is just one of many galaxies.

 

That was a major change, but when I was coming of age a half-century later, the scale of the universe still seemed somewhat manageable. Like millions of others, I watched Carl Sagan’s Cosmos documentary series in 1980-81, learning, for example, that the universe contains not just a handful of galaxies but at least tens of thousands of them, and I wouldn’t say that put an irreparable dent in my belief in human potential. It still seemed conceivable that mankind, with exponentially improving scientific knowledge and technology, could spread outward and someday comprehend it all.

Cosmologists and cosmology-minded physicists were still just getting started, though. By the turn of the millennium, with the help of tools like the Hubble Space Telescope, their models were assuming billions of galaxies. Astronomers also were starting to use terms like “visible universe” and “known universe” to delineate the space their telescopes could reach, which, despite its vastness, was apparently incomplete. Indeed, they increasingly embraced the idea that the universe is ever-expanding, and is not just tens or hundreds of billions of light-years across but infinite—in all dimensions, presumably including dimensions we can’t perceive.

In a truly infinite cosmos, any local reality would have essentially identical variants elsewhere: “parallel worlds.” As physicist Brian Greene put it in his 2011 book The Hidden Reality, “I find it both curious and compelling that numerous developments in physics, if followed sufficiently far, bump into some variation on the parallel-universe theme.”

 

MWI

The best-known and most widely held parallel-world theory these days is the “Many Worlds Interpretation” (MWI), initially devised by Hugh Everett III (1930-1982) in the mid-1950s while he was a physics PhD student under John Wheeler at Princeton. Everett’s work was mostly ignored while he was alive, though other physicists, notably Bryce DeWitt and David Deutsch, did much to popularize it later among physicists and the general public—and to extend it and give it its present name.

MWI is called an “interpretation” because it tries to make sense of a conundrum at the heart of quantum mechanics: in certain types of experiment, a quantum-scale particle such as an electron or a photon seems to possess an innate multiplicity. In other words, it manifests as a ghostly ensemble of particles (with different positions and velocities), and only when an experimenter tries to detect it more directly does it stop acting like a ghostly ensemble and resolve to just one particle. The leading interpretation during quantum mechanics’ first half-century or so—the “Copenhagen” view—held that this “collapse” to just one state is induced by the experimenter’s act of observation, and that the other, left-behind states are somehow not real. Everett proposed instead that all these states are real, and essentially represent different versions of the particle that end up being captured—by different versions of the experimenter—in different universes. In short, MWI holds that reality consists of multiple universes in which, collectively, anything that can happen does happen.
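The statistical texture of this conundrum is easy to demonstrate. The toy sketch below (ordinary Python, no quantum library; the state and function names are purely illustrative) prepares a single qubit in a superposition and “measures” it repeatedly. On a collapse reading, each run randomly resolves to one outcome; on Everett’s reading, both outcomes occur on every run, each in its own branch, and the frequencies we record simply reflect the branch weights. The recorded statistics are the same either way—which is precisely why the question is one of interpretation.

```python
import random

# Amplitudes for a qubit in the state a|0> + b|1>.
# Born rule: the outcome probabilities are |a|^2 and |b|^2.
a, b = 3 / 5, 4 / 5                      # |a|^2 = 0.36, |b|^2 = 0.64
assert abs(a**2 + b**2 - 1.0) < 1e-12    # the state must be normalized

def measure(a, b, rng):
    """One 'measurement': resolve the superposition to 0 or 1 at random."""
    return 0 if rng.random() < a**2 else 1

rng = random.Random(42)
trials = 100_000
ones = sum(measure(a, b, rng) for _ in range(trials))

# The observed frequency of outcome 1 approaches |b|^2 = 0.64,
# whether we picture one random collapse per run or two branches per run.
print(f"fraction of 1s: {ones / trials:.3f}")
```

Note that the random-number call stands in for whatever physically selects (or, per MWI, merely appears to select) a single outcome; the simulation is agnostic between the interpretations, as all experiments so far have been.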

Everett’s idea was rejected at first, as brave new ideas that threaten the status quo and its defenders typically are. But the reaction to MWI wasn’t just the usual circling of the wagons by the old guard. Even many who admired the theory’s elegance were discomfited by it. As Oxford philosopher of physics Simon Saunders said to a reporter in 2007, “The multiverse will drive you crazy if you really think about how it affects your life, and I can’t live like that. I’ll just accept Everett and then think about something else, to save my sanity.”

Still, MWI was and remains elegant and robustly consistent with experimental results. As alternative theories have fallen by the wayside, it has risen steadily in popularity, not just among physicists but also among science popularizers—and popular audiences, as demonstrated by the success of the recent MWI-themed movie Everything Everywhere All at Once.

MWI recently received what some consider major support when Google reported a “quantum supremacy” demonstration by its experimental quantum computer Willow. A feat of quantum supremacy is a feat that a quantum computer—whose computational bits (“qubits”) exist not as a discrete 0 or 1 but in ghostly superpositions of both—can do that an ordinary “classical” computer can never match. Such a feat is regarded as an empirical proof that quantum computing is real, which for many physicists also bolsters the validity of MWI, because the idea of quantum computing—first developed by Deutsch in the mid 1980s—is that such computers gain their advantage in effect by performing computations across different universes.

(Google Quantum AI blog post, 9 Dec 2024.)
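Deutsch’s intuition can be made concrete with a little bookkeeping. The sketch below (plain Python; the function names are illustrative, not a real quantum-computing API) tracks the full state vector of an n-qubit register: 2ⁿ complex amplitudes. Putting every qubit through a Hadamard gate turns |00…0⟩ into an equal superposition of all 2ⁿ bitstrings, after which a single further gate acts on all of them “at once”—the exponential parallelism that, on Deutsch’s reading, is computation spread across universes, and that a classical machine can only mimic by grinding through all 2ⁿ amplitudes explicitly, as this very simulation must.

```python
import math

def apply_hadamard(state, q):
    """Apply a Hadamard gate to qubit q of an n-qubit state vector
    (a list of 2**n amplitudes, indexed by bitstring)."""
    s = 1.0 / math.sqrt(2.0)
    out = state[:]
    step = 1 << q
    for i in range(len(state)):
        if not i & step:                     # pair |...0...> with |...1...>
            a0, a1 = state[i], state[i | step]
            out[i] = s * (a0 + a1)
            out[i | step] = s * (a0 - a1)
    return out

n = 10                                       # 2**10 = 1,024 amplitudes
state = [0.0] * (1 << n)
state[0] = 1.0                               # start in |00...0>

for q in range(n):                           # Hadamard on every qubit
    state = apply_hadamard(state, q)

# Result: an equal superposition over all 1,024 bitstrings, each with
# amplitude 2**-5 = 0.03125. Note the classical cost: the list doubles
# with every added qubit, so a few hundred qubits would demand more
# numbers than there are atoms in the observable universe.
print(len(state), round(state[0], 6))        # prints: 1024 0.03125
```

The doubling in that last comment is the whole point: the quantum hardware holds all 2ⁿ amplitudes in one physical register, and it is this exponential headroom that supremacy demonstrations are designed to exhibit.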

 

Concealed Toxicity

Popularizers of physics and cosmology, under the influence of the media industries that nourish them, tend to look on the sunny side. They celebrate the “beauty” of the cosmos in all its vastness and complexity. They applaud our ability as a species to transcend our humble origins, and species-centric biases, as we start to apprehend and explore that cosmos. As I was rewatching Cosmos (1980) recently, I noticed that it is now prefaced by an introduction from Ann Druyan, Sagan’s widow and a writer on the series, who invokes the “soaring spiritual high” of science’s “central revelation: our oneness with the universe.”

Is this not simply hand-waving self-delusion, or, worse, window-dressing to protect media consumers’ feelings? Science’s central revelation is the insignificance of humanity, and under MWI and other infinite-universe theories this is not just a relative insignificance but an absolute, one-over-infinity insignificance.

The MWI cosmos is, in a technical sense, more splendid and elegant than anything found in human religion. What could be more perfect, what could be more complete, than an infinitude in which everything that can happen does happen? The problem is that this perfect completeness, or maybe unendingness, leaves no room for “purpose,” “meaning,” or “achievement” in any substantive sense. It mocks our childish conceit that we could somehow explore and/or “conquer” it all.

In fact, MWI implies that there is no higher purpose or meaning to any human being’s actions or existence, other than filling out, in an infinitesimal way, the infinite space of possibility. Are you a good person in this universe? Are you “successful”? How can this be substantially meaningful (especially from the perspective of a God that transcends the multiverse), if otherwise indistinguishable variants of you are bad and unsuccessful in other universes—and presumably average to a mediocrity across all instances? When you combine this “MWI view” with the modern neuroscientific view of behavior—as being determined moment to moment by innumerable, mostly subconscious factors while our conscious selves stand by as purblind spectators—you start to get a picture of humans as “non-player-character” automata in a sort of video game with infinite parallel playthroughs.

To the extent that people can see themselves and their lives from this perspective, they are likely to lose a lot of their motivations for doing things—and not just great and ambitious things but also the ordinary, pro-social behaviors that keep societies from coming unglued. Such behaviors are rooted in concepts of good and bad, meaning and purpose, and MWI erodes all that as completely as would a revelation that we live in a simulation.

MWI defies our traditional views so starkly and extensively that it also calls into question the “sapiens sapiens” label we have given ourselves. Perhaps, when compared to other civ-building species in our galaxy, we aren’t very smart at all. Perhaps our simple ape brains are already nearing the limits of what they can do—limits that fall well short of what even the most basic star-faring endeavors require. As Nietzsche once put it: [1]

However high mankind may have evolved—and perhaps at the end it will stand even lower than at the beginning!—it cannot pass over into a higher order, as little as the ant and the earwig can at the end of its ‘earthly course’ rise up to kinship with God and eternal life.

 

Fermi Revisited

Again, big changes in our understanding of ourselves and our place in the cosmos can take generations to sink in, and the entry of MWI and other infinite-cosmos notions into the popular mind began only recently. It may be that only children born very recently, or even around now, will be forced to confront these ideas in a substantial way during the impressionable years when their models of the world and moral structures come together. Maybe only in the Western world of twenty or thirty years hence will we start to see clear signs of their impact.

In the meantime, it’s surely fair to say that despair and nihilism already seem abundant, particularly among the young.

 

While much of that may be due to the many other social and economic disruptions of recent decades, perhaps some already is being caused (again: via mechanisms that are mostly subconscious) by science—which after all has been moving in its current direction, displacing religiously based beliefs and ethics, for centuries now. In any case, despair caused by other factors is treatable in principle, whereas the one served up by science seems incurable.

Couldn’t we create advanced AI-based robots that self-replicate and relentlessly explore outer space, without regard for the apparent pointlessness of the endeavor—indeed, without any emotion at all? Yes, in principle, if we could remain motivated long enough to develop the necessary AI and robotics tech. But autonomous robot exploration definitely is not the same as human exploration. Moreover, it’s not hard to imagine these clever creations eventually finishing off their depressed, listless, impotent creators, in a perfect and final example of a “cure” that kills the patient.

The idea that physics and cosmology eventually become toxic to society offers a solution to Fermi’s Paradox because it is plausible that not only humans but also other intelligent species that emerge in the cosmos and start venturing into space face this same problem—this fundamental conflict between, on the one hand, the psychology needed to build basic civilization and science, and on the other, the more advanced science that is needed to reach the stars.

Incidentally, MWI’s hint at our inferiority as a species offers another, complementary explanation for the apparent dearth of ET visitors: The few star-faring civs that do exist in our vicinity either do not care about us, or, even if they are curious enough to visit, don’t waste time trying to communicate—firstly because we are too primitive to process what they would have to say, and secondly because almost anything they could convey, particularly regarding cosmology and the nature of reality, would injure us.

[1] Daybreak: Thoughts on the Prejudices of Morality, 1:49, trans. R. J. Hollingdale, Cambridge Univ. Press 1997.

 

***