The Faustian bargain is a spiritual form of a conservation law: nothing good happens without something bad happening too. In the modern version, as depicted in books and movies, the message is that geniuses see far beyond their contemporaries, but often at the expense of lasting relationships and happy families. In our preoccupation with the image of the mad scientist, one can’t help but sense a bit of anti-intellectual schadenfreude lurking in the background—solace for all of us “normals.”

Stories of genius don’t have to take this form, but they often do. It’s an organizing principle for the Chilean writer Benjamín Labatut in his widely praised collection of loosely linked stories, When We Cease to Understand the World—the first of his books to be translated into English—and also in his latest, The MANIAC, which he wrote in English. Both are unsettling, often violent books based on some of the twentieth century’s great ideas of chemistry, physics, and mathematics, told as stories of individual obsession and militaristic madness.

When We Cease to Understand the World takes its name from its central novella, a wild reimagining of the origin of quantum mechanics, a theory created to describe the surprising results of experiments in blackbody radiation conducted in the late nineteenth century. Designed to investigate the dynamics of energy absorption in the atom (akin to turning up the heat underneath a covered pot), these experiments were expected to meet with a continuous increase in some output (like the gradually warmed pot). Instead discrete jumps—quanta—were measured (as if cookware took on only very specific temperatures, with none in between). The mechanics of this system did not behave classically, which is to say, in the way predicted by the calculus-based mathematical models of Newton. New models—and, as it turned out, new mathematics—would be necessary to describe this “quantum mechanics.” The development of these ideas consumed the physics community for the first few decades of the twentieth century and is now the stuff of legend. It’s a story that has been told before, but never in such an original and startling manner.

At the center is the German physicist Werner Heisenberg, who came upon his idea for “matrix mechanics”—a mathematical encoding of quantum phenomena in spreadsheets of infinite extent—during a brief stay on the small island of Heligoland, “Germany’s only outlying island, so dry and inclement that trees barely rise from the ground and not a single flower blossoms amid its stones.” This is a historical fact, but for Labatut this and the other facts that ground the tale are—by his own admission—just starter materials. In his stormy telling Heisenberg hikes all about the windswept island, becoming ill from the exhaustion of nonstop physical and mental exertion. He returns to his hotel, and in a nightmarish fog of fever and physics, amid a dark, delusional encounter between the poets Goethe and Hafez, he creates his groundbreaking mathematical formulation:

He felt his brain split in two: each hemisphere worked on its own, without needing to communicate with the other, and as a result his matrices violated all the rules of ordinary algebra and obeyed the logic of dreams…. Too weary to question himself, he continued working until he had reached the final matrix. When he solved it, he left his bed and ran around his room shouting, “Unobservable! Unimaginable! Unthinkable!” until the entire hotel was awakened.

Among the consequences of these calculations is the famous Heisenberg uncertainty principle, which codifies the limits of measurement in the subatomic realm. Heisenberg later realizes this while walking the streets of Copenhagen, in a vision darkened by a foreshadowing of the nuclear weapons whose invention can be traced directly to the discovery of quantum mechanics: he sees a dead baby at his feet and finds himself surrounded by “thousands of figures” who looked as though they wanted to “warn him of something, before they were carbonized in an instant” by a “flash of blind light.”

Labatut has said that he doesn’t believe in a line between nonfiction and fiction—that “anything that comes out of a writer is fiction.” In the acknowledgments to When We Cease to Understand the World, he explains that while he aims “to remain faithful to the scientific concepts,” he increasingly takes “liberties”: “This is a work of fiction based on real events. The quantity of fiction grows throughout the book.”

Of course historical fiction has often been used to good effect as a means of exposition for the history of ideas, notably for the bubbling cauldron that was early-twentieth-century Eastern Europe. Wonderful examples include Bruce Duffy’s The World as I Found It (1987), a dramatization of the intersecting lives of the philosophers Bertrand Russell, G.E. Moore, and Ludwig Wittgenstein, and Janna Levin’s A Madman Dreams of Turing Machines (2007), a prismatic treatment of the tragic lives of the mathematicians Alan Turing and Kurt Gödel. What is special about Labatut’s approach is this graduated introduction of fiction, a loss of certainty mirrored in physics as we move from classical mechanics to quantum mechanics. In the physical journey things get fuzzy as “quantum chaos” emerges at the boundary. Analogous things happen in Labatut’s narrative.

Thus are the stories in When We Cease to Understand the World packed with ideas, but also dizzying and unnerving, both in style and—for those who care—for their blurry line between truth and imagination. The account of the discovery of quantum mechanics is awash in frenzied dreams and sexual fantasies. The Nobel laureate Erwin Schrödinger works out his fundamental wave equation while conducting a Lolita-style affair with the tubercular daughter of the owner of the inn where he is recovering from the shock of his broken marriage. Another Nobel laureate, Prince Louis de Broglie, produces reams of mathematics while holed up in a Gaudí-esque mansion, living amid a personal museum of art brut curated by his artist lover, recently a suicide, that includes a gigantic replica of Notre Dame made from excrement. And there is more.

The diligent reader might want to dig for the historical truth (Labatut provides an incomplete scattering of references) but might also be carried away by the stories and intrigued by the science. This is not science writing—at least as we think of it today—but science storytelling, giving the reader not only information but also a strong sense of the bursts of intellectual and physical energy that animate discovery and creativity. This kind of romantic presentation of scientific discovery has precedent: E.T. Bell’s classic Men of Mathematics (1937)—a book that inspired generations of mathematicians and is told as a series of mini-biographies—feels closer to Butler’s Lives of the Saints than to a textbook.

Labatut’s central novella is bracketed by four shorter pieces, three of which bring Bell to mind. The book opens with “Prussian Blue,” an account of the invention of chemical weapons as a surprise artifact of the search for new dyes, as well as of their most famous inventor, Fritz Haber. The Faustian message is that no discovery comes for free—with costs measured in the deaths of multitudes: emerald greens require arsenic, ruby reds derive from “crushing millions of female cochineals.” Prussian blue is the surprise product of an alchemical attempt by the German theologian Johann Conrad Dippel in the early eighteenth century to create an elixir of life, an artifact of a grisly brew of “decomposing blood, bones, antlers, horns and hooves.” From there it is but a scientific hop, skip, and jump to cyanide, discovered by Carl Wilhelm Scheele in 1782, and then to Zyklon B, whose derivation was made possible by Haber’s earlier work on insecticides. Haber’s talents for chemistry also earned him a Nobel Prize for his discovery of a process to isolate nitrogen from air and enable the easy production of fertilizers and thus help to feed the world. There is no math here, just coincidence and chemistry, directed first at fine art, fertilizer, and fumigation and, ultimately, in the most awful and tragic of ironies, at the genocide that even Haber—German but still Jewish—couldn’t outrun.

The second story, “Schwarzschild’s Singularity,” is a staggering retelling of the tragic life of the German Jewish polymath Karl Schwarzschild. He was among the most productive scientists of the early twentieth century, best known today for finding the first solution to Einstein’s equations of general relativity—the solution that predicts the existence of black holes, whose infinite gravitational fields correspond to the infinity produced by the mathematical “singularity.” Schwarzschild derived the answer while serving for Germany in the trenches of World War I as a volunteer, despite the fact that his age and scientific stature would have exempted him from service. Although close to death and already suffering the effects of repeated exposures to gas attacks, he “filled three notebooks with calculations.” His last scientific efforts were, in Labatut’s description,

composed on sheets of paper laid out on the floor, his arms hanging over the edge of his bed, lying on his stomach, covered in scabs and abscesses left behind by his blisters when they burst, his body transformed, as it were, into a miniature model of war-torn Europe.

The third story in the collection is “The Heart of the Heart,” devoted to the German-born French mathematician Alexander Grothendieck, who “towered over mathematics like a veritable colossus,” bringing together wide swaths of the field under one unifying abstraction: “Numbers, angles, curves and equations did not interest him, nor did any other mathematical object in particular: all that he cared for was the relationship between them.” The resulting “topos theory”—the eponymous heart of the heart, built on a mathematics of “motives”—was of a minimalist beauty at odds with his extraordinarily messy personal life. But after spending time in Vietnam and Algeria and then spurred by the 1968 protests in France, Grothendieck began to believe that his achievements made him complicit in military technological innovations. He renounced “the vile and dangerous practice of mathematics” and retreated “for the protection of mankind” so that “no one should suffer from his discovery…‘the shadow of a new horror.’” He lived out his life as a hermit in a mountainside village, just a car ride away from the internment camp that had warehoused his father before he was sent to Auschwitz for execution.

A wholly fictional story, “The Night Gardener,” repackages some of this account and closes the book. It is about a mathematician who quits midcareer, in part because he has suddenly realized

that it was mathematics—not nuclear weapons, computers, biological warfare or our climate Armageddon—which was changing our world to the point where, in a couple of decades at most, we would simply not be able to grasp what being human really meant.

A book that begins in fact and materiality concludes in a fable about the end of days.

This brings us to Labatut’s The MANIAC. In some sense it picks up where When We Cease to Understand the World leaves off, largely given over to the dangerous possibilities—personal and societal—of the creative power of mathematics. The title sets the reader up for a story of a crazed protagonist, but some will also recognize the all-caps version as the acronym for the Mathematical Analyzer Numerical Integrator and Computer, a computing machine installed at the Los Alamos National Laboratory after World War II. The main purpose of this “electronic brain” (as computers were then called) was to perform calculations related to the design of thermonuclear weapons, checking in computer code the ideas of a group of martial physicists. The physicist George Gamow joked that its name also stood for “Metropolis and Neumann Invent Awful Contraption.”

“Metropolis” is the mathematician Nicholas Metropolis. “Neumann” is János (“John” or “Johnny” to his friends) von Neumann, the primary subject of The MANIAC. His story, “JOHN or The Mad Dreams of Reason,” is the centerpiece of a literary triptych that begins with “PAUL or The Discovery of the Irrational,” a mini-biography of Paul Ehrenfest, another pioneer of quantum mechanics. To the extent that The MANIAC is a reflection on the “soul’s faculty of reason,” the heartbreaking story of Ehrenfest’s decline into the madness of depression fits here, but it also feels more of a piece with When We Cease to Understand the World.

Von Neumann is one of the most famous mathematicians of the twentieth century. His extraordinary contributions to science and technology are matched only by the degree of hawkish influence he had at the highest levels of government and military policymaking. He has already been the subject of a few traditional biographies,1 but in Labatut’s hands the story is prismatic, told through reflections attributed to a chorus of family members, colleagues, and acquaintances. This approach was also used to beautiful effect in Louisa Hall’s novel Trinity (2018) to tell the story of the other parent of the nuclear age, the physicist J. Robert Oppenheimer. Hall uses only fictional characters as observers, but in Labatut’s now characteristic style, “JOHN” is a nonfiction/fiction slurry, with real historical figures providing chapter-length testimonials that seem to hover somewhere between myth and memoir and are often filigreed with lyricism.

The basic facts of von Neumann’s life can be gleaned from these chapters. He was born into a prosperous, assimilated Jewish family in late-nineteenth-century Budapest. He quickly distinguished himself from his gymnasium peers, who included the physicist and future Nobel laureate Eugene Wigner. Stories abound of his lightning-quick calculating abilities and a photographic memory that enabled him to absorb, almost verbatim, all the classic scientific, philosophical, and literary texts of his day. In his memoir Wigner wrote of von Neumann that “his mind seemed a perfect instrument, with gears machined to mesh accurately to one thousandth of an inch.”

While working toward an advanced degree in chemical engineering to satisfy his practically minded father, von Neumann also earned a degree in mathematics—his true love—for work on axiomatizing set theory, a highly abstract but fundamental subject. His early mathematical achievements earned him a position in 1933 as one of the founding faculty members of the Institute for Advanced Study (IAS) in Princeton, New Jersey, and he became part of the first wave of the 1930s diaspora of Eastern European intelligentsia who fled the Continental rise of fascism and antisemitism and reshaped intellectual life in the United States.

Von Neumann blended in easily, although even in the intellectually rarefied IAS community his mind set him apart. As one joke widely circulated in Princeton put it, “He had made a detailed study of human beings and could imitate them perfectly.” He loved fast cars, strong liquor, and good food, and his Princeton home was the site of frequent lively parties. But as social as he could be, a question or conversation could suddenly initiate an interior retreat: he would fold into himself to work through the problem, oblivious to the people around him, then return to his surroundings to announce the answer and move on to the next—almost surely dirty—joke.

Preparations for World War II diverted many scientists to military work. No one’s involvement was deeper than von Neumann’s. This might seem odd for someone so appreciative of abstraction, but that love had a limit—he believed in abstraction as a tool of clarification of complex phenomena but had little patience with abstraction for its own sake. He forever believed that all great mathematical work should remain tethered to a practical example, and warfare provided many.

It wasn’t just the science behind military applications that interested him. He liked the trappings too. He acquired stacks of clearances, which he would throw at guards as he entered secure locations, asking them to sort out which one gave him entry. While Einstein was wringing his hands over the ethics of nuclear weapons, von Neumann believed (as ventriloquized through Labatut) that it would “be unethical, from the point of view of scientists, not to do what they know is feasible, no matter what terrible consequences it may have.”

Von Neumann’s first official military appointment was as a consultant at the US Army’s Ballistic Research Laboratory. It was there that he became aware of ENIAC (Electronic Numerical Integrator and Computer), a room-size iron behemoth, all vacuum tubes and wires, that was being used to compute the all-important “firing tables” that provided an artillery operator with the settings necessary to hit a moving target with various ordnance under a range of weather conditions. Mathematically this amounts to solving the differential equations describing projectile flight, a procedure that is conceptually simple to execute but lengthy and laborious if done by hand—as it was for decades by the original human “computers”; in the circuitry of the ENIAC relays (capable of executing five hundred multiplications per second!2) these calculations could be accelerated to great effect.
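
To make the idea concrete, computing one firing-table entry means numerically integrating the equations of projectile motion. Here is a minimal sketch in Python using Euler’s method; the muzzle velocity, drag coefficient, and step size are invented for illustration, not historical ballistics data.

```python
import math

def trajectory_range(v0, angle_deg, drag_coeff=0.0001, dt=0.01):
    """Integrate 2D projectile motion with simple quadratic air drag
    using Euler's method -- a toy version of the calculation that human
    'computers' (and later ENIAC) performed for each table entry."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    g = 9.81
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        # acceleration: gravity plus drag opposing the velocity
        ax = -drag_coeff * speed * vx
        ay = -g - drag_coeff * speed * vy
        x += vx * dt   # advance position with current velocity
        y += vy * dt
        vx += ax * dt  # then advance velocity
        vy += ay * dt
    return x  # horizontal range at impact

# A firing table pairs each gun elevation with a computed range.
table = {angle: trajectory_range(300.0, angle) for angle in (15, 30, 45)}
```

Each entry takes thousands of such small steps, which is exactly why the hand computation was so laborious and the electronic version so transformative.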

While von Neumann was not there at its birth, he quickly took the lead in ENIAC’s evolution. After the war, priorities turned from firing tables to the knottier mathematics crucial to understanding the physics of nuclear explosives. MANIAC was a successor to ENIAC and one of the first computers to make use of the “von Neumann architecture” that would serve as the rough blueprint for generations of computers, characterized by a large-scale structure of separate but connected input, central processing unit, and memory. It grew out of a postwar effort at IAS to build a machine to do weather prediction—which for von Neumann was mainly about the possibility of waging “weather war,” for if you could predict the weather, you could, in his view, also control it. This led him (and others) to shocking ideas—thankfully never pursued—about doing so by strategically exploding nuclear weapons in the upper atmosphere. The first full calculation executed by MANIAC proved the feasibility of “the Super”—the hydrogen bomb. The programmers, working without security clearances, were allowed to know only the equations, not their significance.
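
The stored-program idea behind the “von Neumann architecture” can be sketched in a few lines: instructions and data live side by side in a single memory, and a processor repeatedly fetches, decodes, and executes. The four-opcode instruction set below is invented purely for illustration.

```python
def run(memory):
    """Minimal stored-program machine in the von Neumann style.
    Opcodes (illustrative only): ('LOAD', addr), ('ADD', addr),
    ('STORE', addr), ('HALT',)."""
    acc = 0  # accumulator register
    pc = 0   # program counter
    while True:
        instr = memory[pc]          # fetch from the shared memory
        pc += 1
        op = instr[0]               # decode
        if op == "LOAD":            # execute
            acc = memory[instr[1]]
        elif op == "ADD":
            acc += memory[instr[1]]
        elif op == "STORE":
            memory[instr[1]] = acc
        elif op == "HALT":
            return memory

# Program (cells 0-3) and data (cells 5-7) share one memory:
# load 2, add 3, store the sum in cell 7, halt.
mem = [("LOAD", 5), ("ADD", 6), ("STORE", 7), ("HALT",), None, 2, 3, 0]
```

That a program is just more data in memory is the design choice that made computers reprogrammable without rewiring, and it is the rough blueprint still visible in machines today.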

Although the war is what prompted von Neumann to think deeply about machine computing, by many accounts a chance opportunity as a seven-year-old to play with an automated loom—often cited as the forerunner of the computer—may have been formative. Labatut writes that little János stayed up all night pondering how

any pattern, natural or man-made, could be broken down and translated into the “language” of the loom, a language that was embedded in the little holes punched into the cards, and that determined which threads the mechanism would pull up and thread through more than four hundred hooks, to weave each successive line of the tapestry. Those cards, brother said, stored all the relevant information about the finished work in its purest and most abstract form.

Labatut attributes these reminiscences to von Neumann’s brother Nicholas. But Labatut’s “fiction based on fact” once again creates an artful sense of uncertainty. It provides an extra degree of literary freedom that can be used to touching effect, for example to connect von Neumann’s work on and interest in artificial life—closer to the simulation of bacteria colonies than anything human—to his late-in-life desire for children and the disappointment of a marriage “barren in almost all respects” (in words attributed to his second wife, Klára), and to a late explosion of religious belief after being diagnosed with cancer. In a virtuosic section, Labatut’s Wigner describes visiting the dying von Neumann, only to find the avowed atheist drunk, shirtless, and “glistening with sweat” while sobbing and struggling to put on tefillin, the leather wraps worn by observant Jews as a part of daily religious practice:

There he was, asleep below me, looking as frail and vulnerable as a hydrocephalic toddler with his giant head, and for some reason I felt not just overwhelming sadness for the fate of my childhood friend, now spiraling toward death, decay, and—God forbid—perhaps even madness, but a small measure of relief that made me feel incredibly ashamed of myself. Yes, I thought, Janos was human after all, not only a genius but also a drunken fool, just like any of us.

However, for all the artistry it enables, speaking through the dead—especially in this testimonial style—raises important questions about artistic responsibility. Knowing that Wigner’s memoir lacked the poetry of Labatut’s writing, I found myself beginning to question the reflections, which became distracting as I tried to sort out attributions. But Labatut’s writing is magnificent and engrossing if the professor’s loupe can be set aside.

Von Neumann died at the age of fifty-three of cancer, perhaps caused by the aftereffects of the Trinity test, though surely not impeded by a lifetime of extravagant living. His final days included spiritual consolation from Father Anselm Strittmatter, a Catholic priest and Benedictine monk. Like many assimilated German Jews, von Neumann had some years earlier been baptized, although he didn’t practice. His return to the Church may have been strategic—a hedging of his bets. He told his daughter, Marina, on several occasions that “Catholicism was a difficult religion to live in but the best one to die in.”

His end was horrific. He was under twenty-four-hour guard, lest he let slip classified information or—even better—new ideas that should be classified. “To them,” Labatut writes, “the Professor was the goose and the golden eggs.” To keep this brain alive the military cleared an entire floor of Walter Reed Army Medical Center and put von Neumann in a private room, a modern-day Prometheus, chained not to a rock but to a machine that “looked like a massive car engine, like a V-40, and it smelled terrible, like burned hair.” As expressed through Labatut’s version of von Neumann’s military attaché, Vincent Ford:

We heard the machines come alive with a low, deep hum that made the windowpanes quiver as though the building was being hit by an earthquake. And then we heard his screams of agony. I’ve never heard anything like that. I’ve seen soldiers bleed to death from combat wounds, I’ve heard men in sick bay grasping their intestines and speaking in tongues, flyboys cooked in jet fuel, disfigured from head to toe. But this was different. It was the Professor’s voice, but it didn’t sound like a human screaming…. They wheeled out his body in the morning. As they passed me, his hand dropped down from the gurney and I saw that his skin had turned black, with dime-sized white spots all over it, as if they had covered his body in electrodes and burned him to a crisp. I have often wondered if they let him rest, or if they even fiddled with him after death.

The connections between human thought and computing were much on von Neumann’s mind at the end of his life. As he lay dying he was cleaning up the notes of a last lecture that would become the book The Computer and the Brain, influential well past his death. He thought it possible that mathematics was “a secondary language, built on the primary language truly used by the central nervous system.” It’s not much of a leap from that idea to the idea of “thinking machines.”

The words of Labatut’s night gardener resonate. Mathematics has brought us to the point where what’s at stake is not just the erasure of humanity through nuclear or biological weapons but what it means to be human. For many, this is a question of cognition and intelligence, the latter a term that still needs a good definition. The growing sophistication of artificial intelligence–based technologies has caused many to consider seriously the question of not just if but when machines will achieve general intelligence. The steady accumulation of machine accomplishments is matched by the steadily moving goalposts of what it means to be human, and we run the risk of being marched off the field.

The MANIAC closes with “LEE or The Delusions of Artificial Intelligence,” an account of the widely reported five-game Go match between the world champion player Lee Sedol and the AI-powered AlphaGo in 2016. Game playing—specifically chess and checkers—was among the human skills that computers were programmed to emulate in early AI explorations. These efforts relied mainly on generating and evaluating as many game scenarios as possible from a given position, a “brute force” approach that was limited only by computing power. It quickly produced a checkers champion and eventually a chess champion, although few saw this as having achieved “intelligence.” Brute force has its shortcomings. Humans certainly don’t—and can’t—think this way.

Go is different. The rules are relatively simple: two players place black and white stones on a nineteen-by-nineteen grid, evolving a tiny world of interlocking archipelagos. Superficially, these bear a good deal of resemblance to the artificial life simulations run by von Neumann, which likewise used grids filled with binary. The game is fifty orders of magnitude more complicated than chess—the number of possible chessboards is 10^120, whereas the number of possible Go boards is 10^170. (The number of stars in the universe clocks in at something less than 10^24.) The brute-force technique that was successful in chess is not possible here, but there is another.
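
The arithmetic behind that comparison is easy to check, using the figures cited above:

```python
# The gap between the two game spaces is itself astronomically larger
# than the number of stars in the universe.
chess_positions = 10**120
go_positions = 10**170
stars = 10**24  # rough upper bound cited in the review

ratio = go_positions // chess_positions  # fifty orders of magnitude
```

Exhaustive search over a space of that size is out of the question, which is why AlphaGo needed a fundamentally different approach.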

AlphaGo makes use of “deep learning,” an approach inspired by our own neural architecture. It uses a structured web of billions of connections to encode the patterns of play in billions and billions of games of Go. AlphaGo’s creators at Google DeepMind first found the patterns characterizing the play of the best human Go players. This set up a foundation of Go play, defining “human” style. They then had AlphaGo improve by repeatedly playing itself. In a final step AlphaGo played millions more games against itself, but this time to create a second artificial neural network capable of assigning to any board a probability that it can win.

Famously, Sedol lost to AlphaGo. The fact of the loss was perhaps less surprising than the style. In the second game AlphaGo made a move that was the linchpin of a strategy Sedol never imagined. He could only say, “Surely AlphaGo is creative. This move made me think about Go in a new light. What does creativity mean in Go? It was not just a good, or great, or a powerful move. It was meaningful.” Sedol did manage to win the fourth game, confusing the machine by playing a move that was computed to have only a one-in-ten-thousand chance of winning—the same probability that was attached to AlphaGo’s “meaningful” move of the second game. A commentator shouted that it reflected “the hand of God.” Nevertheless, Sedol retired soon after, reflecting that “even if I become the best that the world has ever known, there is an entity that cannot be defeated.” The chapter ends with the creepy specter of an even stronger machine, AlphaZero, which learns only from playing itself.

In this final story Labatut’s style changes dramatically. Much of it is reported, quotes are taken from a documentary, and significant space is devoted to the description of board positions and strategies. The pace slows, and the mad flights of fancy are fewer. Ironically enough, at times it feels as though it could have come from ChatGPT or another AI text generator. It pales in comparison to Labatut’s writing at its best, when it can stagger the reader with its energy and creativity. Such writing is the best argument for the existence of an ability that can’t be mechanized—some core quality of “human.”

On the other hand, who knows what’s ahead. Propelled by curiosity, commerce, and conflict, AI careens forward as von Neumann’s scientific imperative continues to reverberate. In Goethe’s final scene, Faust is lifted to heaven to spend eternity among the angels. But take a look at the cover of The MANIAC. That dark thermonuclear mushroom cloud isn’t a photograph—it’s the output of OpenAI’s generative art software, DALL-E 2.