
One Year Since the AI Pause Petition

Just because we can imagine something terrible happening, that does not mean it will happen.

Max Tegmark speaks at TED2018: The Age of Amazement, April 2018, Vancouver, BC. Photo: Ryan Lash / TED

The Swedish radio program Summer is listened to by more than 20 percent of the population, many of whom still tune in live on the radio. Sitting in blossoming gardens, listeners may have choked on their lemonade when the MIT physicist Max Tegmark decided to announce the end of humanity:

I’ve been thinking a lot about life and death lately. In my family, it’s probably my turn next. But I guess the rest of humanity will perish around the same time, after more than a hundred thousand years on our planet. I believe the artificial intelligence that we’re trying to build will probably annihilate all of humanity pretty soon.

There were no ifs or buts and no “10 percent risk” or other disclaimers—just the promise of certain doom. We might have a couple of decades left, but then it’s game over.

This isn’t the first time that Tegmark has stepped into the role of a doomsday prophet. In an interview on Lex Fridman’s podcast, he said much the same. He was asked if he agreed with the views of AI researcher Eliezer Yudkowsky, whom Fridman had interviewed earlier. In an essay for Time magazine, Yudkowsky wrote that pausing AI development isn’t enough—it must be stopped entirely, even if that means striking AI data centers in states that refuse to comply.

This is what Tegmark had to say about Yudkowsky and the existential threat from AI:

Firstly, I have enormous respect for Eliezer Yudkowsky and his thinking. Secondly, I share his view that there’s a pretty good chance we won’t make it as humans. There won’t be any humans left on the planet in the not-too-distant future. And that makes me very sad. We’ve just had a little baby boy, and I ask myself all the time, how old will he even get?

A year ago, Tegmark joined a number of eminent tech entrepreneurs, businesspeople, and analysts in demanding a six-month pause in the training of powerful AI systems. It is therefore worth examining his arguments to see whether his catastrophism is justified. In his radio talk, he summarizes his case in three points: malicious use, competition from AI, and misguided AI. Let’s examine them one at a time.

Malicious Use

A person or group of people could gain control of a superintelligent AI and then use it to kill everyone, including (presumably) themselves. Chalmers professor Olle Häggström advances the same argument in a different interview. Häggström asks us to consider a thought experiment in which a superintelligent supervillain, like Superman’s archenemy Lex Luthor, takes over the world with just a laptop and an internet connection:

The more you think about it, the more you realize that it’s probably not that hard. He can use his superintelligence to outsmart the stock markets and build an economic empire, pass through whatever firewalls he wishes, maybe even take over military technology that way. And he also has powers of persuasion, the ability to manipulate people socially. If OpenAI continues as they have been and releases a GPT-5, maybe sometime next year, I’m not sure we’ll survive that.

This thought experiment skips the step of actually building a superintelligent AI. We are simply asked to assume that it suddenly exists, and in the form of an evil Lex Luthor. But it is very unlikely that a person or organization, even with massive resources, could actually assemble a team that builds this before everyone else and better than everyone else. Wired founder Kevin Kelly explains why in his book What Technology Wants:

The more sophisticated and powerful a technology is, the more people are needed to weaponize it. And the more people needed to weaponize it, the more societal controls work to defuse or soften it or to prevent harm. Even if you had a budget to hire a team of scientists to develop a species-extinguishing bioweapon, you probably still couldn’t do it. Millions of years of evolution has worked to prevent species death. The smaller the rogue team, the harder this would be to accomplish. The larger the team, the more societal influences acting as a brake. So, it’s difficult for someone with either a small or a large team to get that far in development, especially without being discovered.

In Enlightenment Now, Steven Pinker quotes technologist Ramez Naam as follows:

Imagine you are a super-intelligent AI running on some sort of microprocessor (or perhaps, millions of such microprocessors). In an instant, you come up with a design for an even faster, more powerful microprocessor you can run on. Now... drat! You have to actually manufacture those microprocessors. And those fabs take tremendous energy, they take the input of materials imported from all around the world, they take highly controlled internal environments that require airlocks, filters, and all sorts of specialized equipment to maintain, and so on. All of this takes time and energy to acquire, transport, integrate, build housing for, build power plants for, test, and manufacture. The real world has gotten in the way of your upward spiral of self-transcendence.

“The key,” Pinker notes, “is not to fall for the availability bias and assume that if we can imagine something terrible, it is bound to happen.”

Outcompeted by AI

Tegmark believes that the danger lies in the market economy built on free competition. Under such a system, which he calls a religion almost beyond questioning, companies that replace people with machines will outperform those that keep using humans. Companies with an AI CEO will outperform companies with a human CEO. Likewise, countries with AI leaders will outcompete nations led by humans. In this way, we will end up in an AI-controlled police state patrolled by robots, where the AI sees humans as meaningless chunks of meat consuming valuable resources. Tegmark asks rhetorically why the machines wouldn’t simply clear us out, just as we clear out rainforests. AI researchers have been stumped by this question for ten years, he says.


Tegmark seems to have a simple-minded view of competition between companies and countries. For a country to be led by an AI, that country would need to be fundamentally altered: its people would have to change the constitution to allow an AI leader in the first place. Would this AI leader then govern democratically or as a dictator? If it wants to become an AI dictator, taking power democratically would not be particularly easy. If it seizes power with weapons, the rest of the world would likely react very strongly, so it would need to secretly build a military capability many times stronger than the rest of the world’s combined forces.

Let’s assume for the moment that a country somehow ended up with an AI leader. How would this country “outcompete” other countries? Even though the United States is the world’s largest economy, it does not follow that Sweden’s economy is outcompeted just because it is smaller. China has gone from a very weak economy to a very strong one without the US losing its top spot, and the economic strength of the US has not prevented China from growing. Of course, competition can mean that one company or country becomes better at doing something and thereby outcompetes another company, or an industry within a country. But in an open economy, industries that are outcompeted are replaced by something better.

Economies are not zero-sum. Wealth is created, and an AI that makes us more efficient, smarter, faster, and more productive will make the pie grow, just as steam engines made us more productive in the 18th century. But let’s say we reach a point in the future when AI can do everything better than we can. That would be a world of extreme abundance compared to today. We would be immensely richer, able to cure most or all diseases, and live longer, healthier lives on multiple planets and other places in space. We humans, not just the AI, would be smarter. We would have fantastic entertainment, creative culture, and loads of new forms of expression. Those who want to work on something would be able to do so. If someone doesn’t want to work, they won’t have to. I can imagine worse futures.

Misguided AI

A superintelligent AI may have goals that differ from our own, and we may find that we are in the way. We don’t hate insects, Tegmark points out, but if their nest is in the way of a construction project, we don’t give their interests a second thought. They are simply cleared away to make room for what we want to build. We have not yet solved this problem either, he says. We need to get AI to understand our goals, adopt them, and maintain them.

However, his example isn’t entirely accurate. When new houses are built, a survey of the land is conducted. If certain species, including some insects, are found to live there, the construction may not be allowed to proceed until the animals’ situation is resolved. We don’t yet care about all animals and bugs—Tegmark is right about that—but this is primarily a matter of finite resources and knowledge. When we have sufficient resources and knowledge, we also care about ants.

In wealthy countries, we can afford to care about nature. We create nature reserves, protect them with laws, and spend money on tunnels for frogs and bridges that let large animals cross roads. More efficient agriculture means that we need less farmland, which also leaves space for animals and nature. Several species that were close to extinction have had their fortunes reversed. At one point, there were only about 5,000 humpback whales left; now there are over 135,000. The minke whale went from 25,000 to over 170,000, and the blue whale from 2,000 to over 17,000. Cheetahs had been absent from India for 70 years, but they are now being reintroduced. At one point, there were only a hundred Indian rhinoceroses left; now there are over 4,000. Nor is this just anecdotal evidence: the Living Planet Index from the WWF shows that the decline in populations has leveled off.

We now have the knowledge and understanding to take better care of animals. We understand their value to us and to nature. Moreover, we can afford to do this because fewer people live in poverty and go hungry. When people are fed, educated, and cared for, and still have money left over, they can attend to things that aren’t vital to their own survival. Imagine a society where we are a hundred times richer and a hundred times more knowledgeable than we are today. We would have the resources and opportunities to take care of every little critter. So even if an AI saw us as ants, it would have both the resources and the knowledge to understand our value and take care of us.

But what if the AI is a psychopath with no feelings for either ants or humans? Today’s machines and computers are emotionless, but none of them is superintelligent, and a superintelligent AI worthy of the description would need to be capable of understanding the value of all living things, including humans. The AI would not necessarily need to feel empathy, but it would need to understand it, and by understanding empathy it would understand the value of humans. Even an AI that is smart but not yet superintelligent will surely grasp the value of humans and nature, even if it cannot feel emotion.

A Small Group of People

Max Tegmark and other AI doomers argue that they have researched these problems for five to ten years and have not yet managed to solve them. That is a small group of people and a short time, so it is not at all strange that they haven’t solved every single problem in detail. It is quite arrogant to believe that, just because this small, homogeneous group hasn’t managed to solve a problem in a handful of years, the problem is therefore unsolvable, especially since we don’t even know how to build Artificial General Intelligence yet.

Imagine if someone had said this in 1903, after the Wright brothers’ first flight: “In time, it will be possible to build huge airplanes that can carry 500–600 people.” Then someone objects: “Okay, but how are you going to ensure that this airplane lands safely and doesn’t crash, killing everyone onboard? I have pondered this for five years with a few other like-minded worriers, and we don’t know how to do it. Therefore, my conclusion is that it is impossible. And if someone is irresponsible enough to build such an airplane, everyone on board will die!”

When many more people become involved in AI development, security, usage, and much more, we will more easily solve the problems that arise or may arise. When Professor Yoshua Bengio, Professor Stuart Russell, Apple co-founder Steve Wozniak, and others called for an AI pause a year ago, many with knowledge of geopolitics warned of the advantage such a pause would hand China. For several years, there have been warnings about what happens if China acquires powerful AI tools first, especially in the military field. Several signatories dismissed such warnings, but what real knowledge do they have of this? And what do they know about assessing the other kinds of risk, short-term and long-term, that slower or completely halted AI development would entail?

In the same way, other perspectives and knowledge will contribute to finding solutions to future problems with AI, both real concrete problems and more imaginative thought experiments. Now, we also have AI that can help us think better and faster.

CORRECTION: An earlier version of this article stated that “Max Tegmark, Eliezer Yudkowsky, and other AI doomers argue that they have researched these problems for five to ten years and have not yet managed to solve them.” Yudkowsky has in fact worked on this problem for much longer than that. Apologies for the error.

This essay has been adapted from the author’s book The Centaur’s Edge (Heja Framtiden Förlag, 2023), written in collaboration with WALL-Y, an AI bot created in ChatGPT.
