One reason to subscribe to The New York Times is David Wallace-Wells, a “science writer” (I guess that’s right) whose published work on climate change, the pandemic and advanced science and technologies is highly regarded. His weekly newsletter, by itself, is worth the price of a Times subscription. This week’s edition is especially bracing. What follows is an excerpt:
But A.I. is also exhibiting some plainly disorienting progress, not just on concrete tasks but on unnerving ones: a chatbot hiring a human TaskRabbit to solve a captcha, another writing its own Python code to enable its “escape.” These are not examples of robot autonomy so much as performances of ready-made anxieties — in each case, they were prompted by human observers to test guardrails — and yet they still disquiet, signs that something strange and disruptive is absolutely afoot.
The tech is moving so quickly that it may seem presumptuous to believe that we already know what to make of it all. But many of those who have spent the last decade neck-deep in machine learning believe they do, in fact, know, and that we need to be thinking in quite dire terms. It’s common to hear invocations of the A.I. revolution as an event as significant as the arrival of the internet — but it’s one thing to prepare for a cultural earthquake like the internet and another to be preparing for the equivalent of nuclear war. And it is especially remarkable, given the pervasive utopianism of the internet’s original architects, just how dystopian those ushering in its next phase seem to be about the very new world they believe they are spawning.
“Last time we had rivals in terms of intelligence they were cousins to our species, like Homo neanderthalensis, Homo erectus, Homo floresiensis, Homo denisova and more,” the neuroscientist Erik Hoel wrote in one much-passed-around meditation on the current state of play, with the subtitle “Microsoft’s new A.I. really does herald a global threat.” Hoel went on: “Let’s be real: After a bit of inbreeding we likely murdered the lot.”
More outspoken cries of worry have been echoing across the internet now for months, including from Eliezer Yudkowsky, the godfather of A.I. existentialism, who lately has been taking whatever you’d call the opposite of a victory lap to despair over the progress already made by A.I. and the failure to erect real barriers to its takeoff. We may be on the cusp of significant breakthroughs in A.I. superintelligence, Yudkowsky told one pair of interviewers, but the chances we will get to observe those breakthroughs playing out are slim, “because we’ll all be dead.” His advice, given how implausible he believes a good outcome with A.I. appears to be, is to “go down fighting with dignity.”
Even Sam Altman — the mild-mannered, somewhat normie chief executive of OpenAI, the company behind the most impressive new chatbots — has publicly promised “to operate as though these risks are existential,” and suggested that Yudkowsky might well deserve the Nobel Peace Prize for raising the alarm about the risks. He also recently wrote that “A.I. is going to be the greatest force for economic empowerment and a lot of people getting rich we have ever seen,” and joked in 2015 that “A.I. will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” A year later, in a New Yorker profile, Altman was less ironic about the bleakness of his worldview. “I prep for survival,” he acknowledged — meaning eventualities like a laboratory-designed superbug, nuclear war and an A.I. that attacks us. “My problem is that when my friends get drunk they talk about the ways the world will end,” he said. “I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.”
This may not be a universal view among those working on artificial intelligence, but it also is not an uncommon one. In one much-cited 2022 survey, A.I. experts were asked: “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median estimate was 10 percent — a one-in-10 chance — and half the responses rated the chances even higher. In another poll, nearly one-third of those actively working on machine learning said they believed that artificial intelligence would make the world worse. My colleague Ezra Klein recently described these results as mystifying: Why, then, would you choose to work on it? (Source: nytimes.com)
P.S.: The answer to Mr. Klein’s question is the same one George Leigh Mallory gave when asked why he wanted to climb Mt. Everest: “Because it is there.”