It is 2029. Los Angeles is a post-apocalyptic hellscape. The freeway is littered with burnt-out cars, the land with human skulls. As we watch, a robot soldier steps onto one of those skulls, crushing it to dust.
This is the opening scene of Terminator 2: Judgment Day, the highest-grossing film of 1991. As well as being technically ground-breaking—the movie pioneered the use of computer graphics in cinema—the Terminator franchise made a profound impact on our cultural understanding of artificial intelligence. Its premise is that a super-intelligent computer system has become self-aware and, concluding that humanity is a threat to it, has decided to wipe us out: first by precipitating a war between the superpowers, then by deploying military robots (“terminators”) to deal with the survivors of the ensuing nuclear holocaust.
The rapid growth of systems such as ChatGPT in recent times has led to concerns that a fictional future like this one might become fact. Of the three so-called godfathers of AI—the researchers Yoshua Bengio, Geoffrey Hinton, and Yann LeCun—two have lent credence to these fantasies. Bengio has characterized the technology as an “existential risk”; Hinton has warned that it could “threaten humanity.” (LeCun, however, has dismissed such concerns as “preposterous.”)
In response to growing calls for the regulation of AI, which will require global cooperation, the British government hosted the world’s first AI Safety Summit on 1 November 2023 at Bletchley Park, home to the nation’s World War II codebreakers. The summit attracted guests from governments around the world, including the US and China, and from leading companies in the sector, such as OpenAI and the Google subsidiary DeepMind. Together, the attendees formulated and pledged themselves to the Bletchley Declaration, designed as a first step towards oversight of the technology to “ensure human-centric, trustworthy and responsible AI that is safe.” Two further summits are to be held in South Korea and France in 2024.
Not all of those who attended the summit shared The Terminator’s view of AI, however. In a subsequent interview with Rishi Sunak, Elon Musk characterized the technology’s potential as “80 percent good, 20 percent bad.” While he acknowledged—with a possible nod to James Cameron’s blockbuster franchise—that it is important to fit robots with off switches, he focused primarily on the possibility of AI friends and tutors and held out the prospect of a world where “no job is needed. You can have a job if you want one for personal satisfaction, but AI will do everything.”
That certainly seems like a more attractive vision than the idea of a cyborg trampling on a human skull—but is it as idyllic a prospect as it seems?
A similar view of the AI future is presented in Pixar’s 2008 film Wall-E. The movie is set in the 29th century, after Earth has become uninhabitable due to overconsumption and humanity has taken refuge on the giant ship Axiom. On board, each passenger’s every whim is tended to by robots but—thanks to the effects of microgravity, coupled with the fact that there is never any need to do anything—people have become morbidly obese and rely on hover-chairs to be able to move at all. By the end of the movie, humans have decided to leave their lives of comfort and return to Earth to attempt to repair the planet.
The implication of Wall-E is that having all one’s wishes satisfied would ultimately prove disappointing and might even have negative effects. The film is part of a long tradition of stories that show the downside of having one’s wishes granted—consider King Midas. The problem is not just the impact on the physiques of the Axiom passengers. It is that they need more than the purely sensory fulfilment that their environment offers them. In this, the film reflects a long-standing view that human happiness relies on more than the baser pleasures of the body. Human beings need meaning, and a life in which all one’s needs were met by external agents would fail to provide it. “It is better to be a human being dissatisfied than a pig satisfied,” as John Stuart Mill put it in 1863.
Musk, to his credit, seems aware that his vision of the future will not be completely trouble-free. He acknowledges, “One of the challenges in the future will be how do we find meaning in life.” However, like the makers of Wall-E, he seems to believe that such a society could survive (the action of the movie takes place 700 years after humans have left Earth)—even if it provided people with an ultimately unsatisfactory existence.
But there is another film that imagines a radically different conclusion.
In 1982, two years before the release of The Terminator, cinemagoers were treated to the animated film The Secret of NIMH, a tale about a family of mice and the hyperintelligent rats they encounter, based on a children’s book that was inspired by a series of scientific experiments.
In 1947, John Calhoun, a researcher at the Rodent Ecology Project at Johns Hopkins University, built a 10,000 sq. ft (around 930 sq. m) pen to allow him to study the behaviour of a colony of Norway rats (Rattus norvegicus). Although his facility theoretically had space for 5,000 residents, he noticed that the rodent population never exceeded 200. At that number, he observed, the rats began to behave strangely: they stopped digging tunnels and refused to mate.
In 1954, Calhoun moved to the National Institute of Mental Health (NIMH) where he continued his experiments, this time with mice, building ever more elaborate environments for his rodent subjects and observing their behaviour. Each experiment produced similar results. There would be an initial period of rapid population growth—in Universe-25, his most advanced attempt, the initial population of eight (four breeding pairs) increased to 620 over the course of just 315 days. This was followed by behavioural changes: young mice were expelled from the nest before weaning was complete; juveniles were attacked and wounded; males stopped defending their territory. Eventually, the rodent societies collapsed altogether: males withdrew from society to concentrate on grooming (researchers dubbed them “the beautiful ones”) and females ceased to reproduce. In Universe-25, the last pup was born 600 days after the experiment began and the population peaked at 2,200 in a space intended for 3,840.
There were no external factors involved in the collapse. The colonies were designed to give their residents the best chances of survival. The resources the rodents consumed were continually replenished. The researchers even catered to the rodents’ every conceivable desire. And still the colonies collapsed.
Calhoun theorised that the initial population growth resulted in a surplus of individuals capable of performing each social role and the resulting intense competition led to the breakdown of normal behaviour, followed by the collapse of society. He suspected that this dynamic was not limited to the rodent world:
For an animal so complex as man, there is no logical reason why a comparable sequence of events should not also lead to species extinction. If opportunities for role fulfilment fall far short of the demand by those capable of filling roles and having expectancies to do so, only violence and disruption of social organisation can follow.
There are clear parallels here with Peter Turchin’s theory of “elite overproduction,” which ascribes events such as the fall of the Roman Republic and the French Revolution to the fact that those societies had produced more potential members of the elite than they could support in such positions.
While Musk offers a positive view of the AI future, his vision implies that humans will live in an environment similar to that of Calhoun’s mice. Even if they do not have to work for a living, humans will still have needs—food, shelter, entertainment, etc.—and, as in Universe-25, these will have to be supplied by external forces. As AI optimises our environment for human survival and improvements in medicine render currently fatal conditions treatable, our population—like that of Calhoun’s coddled rodents—is likely to increase. Artificial friends and tutors will ease the social isolation that may result. But could our societies end up imploding, like those of Calhoun’s mice?
To the best of our knowledge, rats and mice do not feel the urge to find meaning in their lives, so a lack of meaningfulness cannot have been the cause of the collapse of their colonies. What they do have in common with humans, though, is a desire for status, the frustration of which causes problems. By providing the residents with everything they desired, Calhoun left no individual mouse with any way to distinguish itself from its peers. In the initial stage of each experiment, this led to outbreaks of violence, as the mice continued to struggle for dominance. But their society’s continuing inability to provide a means of establishing a natural hierarchy eventually led its members to become apathetic and unmotivated and turn away from reproduction, child-rearing, and the other activities necessary to keep their colony viable.
The logic of AI development suggests that human society will soon face the same problem. Status symbols have value precisely because they can only be owned by a few. They will lose their meaning when they can be acquired merely by requesting them from an omnipotent system. While human beings might still have jobs in such a society, what would they produce? In a world in which technology is superior to humans, everything we make will be shoddy by comparison with the work of the machines. Nor will this problem be confined to the physical world. An AI that is cleverer than us will make better decisions than we do. Why, then, would we not want to be ruled by it? Britain’s Deputy Prime Minister Oliver Dowden is already talking about outsourcing some decisions to technology. It is unclear how a world with widespread, super-intelligent AI would allow people to “excel and be distinguished above others” or why, in the absence of any opportunity to win status, humans would not meet the same fate as Calhoun’s rodents.
And even if we did avoid the type of collapse that inspired The Secret of NIMH, it is not clear that the AI future envisaged in Wall-E would last long. A world in which AI caters to all our needs would be a world without struggle—but struggle seems to be a fundamental human need. As Francis Fukuyama warns in The End of History and the Last Man,
if men cannot struggle in a just cause because that just cause was victorious in an earlier generation, then they will struggle against that just cause. They will struggle for the sake of struggle. They will struggle, in other words, out of a certain boredom: for they cannot imagine living in a world without struggle.
Fukuyama was referring to ideological conflict—but in a world ruled by a beneficent AI that has solved all human needs, some portion of humanity might feel compelled to struggle against the system, as the only fight available. Asimov’s First Law of Robotics states that a robot may not injure a human being, but how should a super-intelligent AI dedicated to providing the optimum physical environment for humanity react to attempts to overthrow it? Would Wall-E have to become the Terminator?
These are hypothetical musings. Concerns about AI may well be overblown. There might be some, as yet undiscovered, reason why it can never reach superhuman levels of intelligence. But if it does, it is far from clear that even a benign system would create utopia. The idea of a life of ease and abundance may seem alluring but, as the Dalai Lama allegedly once put it, “Sometimes not getting what you want is a wonderful stroke of luck.”