
Plenty of Room for AI-nxiety

Editor’s note: this essay is part of an ongoing series hosted by Quillette debating the practical and ethical implications of AI. If you would like to contribute or respond to this essay or others in the series, please send a submission to pitch@quillette.com.

In light of recent articles addressing the apprehension surrounding Artificial Intelligence and its implications, two considerations ought to be brought to bear: the effect of even the most moral of artificial intelligences, and the impact of human goals on the development of AI. It should be noted that we are not talking about contemporary AI here; we’re discussing the as-yet-unseen future AI, the Strong Artificial Intelligence – a general intelligence capable of the same kinds of universal reasoning, prediction, and analysis that humans undertake. Here I will refer to it as Artificial General Intelligence (AGI) in order to differentiate it from our digital Go players and car drivers.

The Benevolent Superintelligence

Thomas Metzinger is a German philosopher best known for his work on consciousness and ethics. He has developed what I consider to be the strongest pure rationale against AGI – what he calls the ‘Benevolent Artificial Anti-Natalism’ thought experiment. By “pure,” I mean that no human or outside involvement is required – or even expected. His thought experiment begins at the point at which humans have successfully created the most moral Artificial General Intelligence possible. This steelman underpins the strength of his argument:

Obviously, it is an ethical superintelligence not only in terms of mere processing speed, but it begins to arrive at qualitatively new results of what altruism really means. This becomes possible because it operates on a much larger psychological data-base than any single human brain or any scientific community can. Through an analysis of our behaviour and its empirical boundary conditions it reveals implicit hierarchical relations between our moral values of which we are subjectively unaware, because they are not explicitly represented in our phenomenal self-model. Being the best analytical philosopher that has ever existed, it concludes that, given its current environment, it ought not to act as a maximizer of positive states and happiness, but that it should instead become an efficient minimizer of consciously experienced preference frustration, of pain, unpleasant feelings and suffering. Conceptually, it knows that no entity can suffer from its own non-existence.

The superintelligence concludes that non-existence is in the own best interest of all future self-conscious beings on this planet. Empirically, it knows that naturally evolved biological creatures are unable to realize this fact because of their firmly anchored ‘existence bias.’ And so, the superintelligence decides to act benevolently.

This thought experiment rests upon three interwoven assumptions:

  1. That human society’s morals and ethics are hopelessly hypocritical. The rules are filled with exceptions, edge cases, and uncomfortable boundaries. This is because they are not empirically established, but instead rooted in complex biological and evolutionary processes that tremble before the trolley problem.
  2. That humanity – and in fact all biological organisms – exhibit an existence bias. As biological beings, we cannot rationally weigh the positives and negatives of existence because we cannot give equal weight to non-existence.
  3. That the costs of suffering outweigh the benefits of happiness.

It is the third point that presents the most empirical difficulty. This is not to say that it is wrong – just that it is the most difficult to prove. Metzinger side-steps this empirical component, instead offering a subjective analysis that makes it feel self-evident:

The superintelligence knows that one of our highest values consists in maximizing happiness and joy in all sentient beings, and it fully respects this value. However, it also empirically realizes that biological creatures are almost never able to achieve a positive or even neutral life balance. It also discovers that negative feelings in biosystems are not a mere mirror image of positive feelings, because there is a much higher sense of urgency for change involved in states of suffering, and because it occurs in combination with the phenomenal qualities of losing control and coherence of the phenomenal self—and that this is what makes conscious suffering a very distinct class of states, not just the negative version of happiness. It knows that this subjective quality of urgency is dimly reflected in humanity’s widespread moral intuition that, in an ethical sense, it is much more urgent to help a suffering person than to make a happy or emotionally neutral person even happier.

Once you accept his premise, the conclusion follows naturally: even if we create the most benevolent and moral Artificial General Intelligence imaginable, we still put ourselves at immense risk of non-existence. From this perspective, it doesn’t even matter that humans are fallible and can’t be trusted to build software without defects – biological systems do not operate on a moral basis, and so doom themselves by creating an arbiter of moral justice. It is not ‘fear of the unknown’ that should give us pause when it comes to AGI; there are plenty of reasons for concern that we already know about. Limiting the discussion to such a narrow perspective strawmans the broader debate around the harmful effects of AGI and diminishes the benefits that a very rational anxiety about it might produce:

  • We want AGI researchers to be anxious.
  • We want them to double-check their work.
  • We want them to worry that it could harm humanity.
  • This is how we infuse our work with care and ensure that it remains beneficial for us.

The Arms Race

Things change once we leave the world of infallible humans who can create a perfectly benevolent general intelligence. The primary driver of Artificial General Intelligence is not going to be a group of compassionate researchers at Stanford University, because that spot is already taken by the modern war machine. To those who believe the nuclear bomb to be the end of an arduous arms race, allow me to announce that the race is not over by a long shot. The military benefits of a greater-than-human AGI are countless. From strategic and real-time tactical planning to threat assessment and a near-digital omnipresence, there isn’t a single space in the military-industrial complex where an AGI would not feel right at home – and this is before considering the immense benefits of intelligent robots on the battlefield. All of this is fine, so long as it’s your army.

Addressing students on Artificial Intelligence at the start of a new school year, Vladimir Putin claimed, “Whoever becomes the leader in this sphere will become the ruler of the world.” And he’s right. While the reality of an AI arms race between the United States, Russia, and China is only dawning on most of the world today, the race actually started in earnest back in the 1960s. The common ground between Facebook tagging your friends in a photo you posted and the military identifying insurgents through online photos is often overlooked. Much of our commercial AI enterprise is built on the back of, or otherwise in collusion with, the military-industrial complex.

When it comes to AGI, this relationship between the commercial and military sectors weakens a bit, as profit is currently a less effective motive than global hegemony. The risk of failure in AGI investment is enormous – some 70 years of research so far with no clear payoff. A rational market actor sees this and does not invest in what is surely a lost cause. Conversely, a state power looking at the same risk has no choice but to invest if it intends to maintain and defend its sovereignty – yet another example of existence bias. This creates a rift in the potential for progress between the private sector and the military. What’s left are academic institutions, but even there, much of the research – especially in this field – is itself funded by the Department of Defense.

Are we willing to trust an Artificial General Intelligence spawned in such an environment? If we really consider the scale of what’s at stake, I would say no. It is overwhelmingly likely that the first AGI will be created with the intent of total dominion. That should give people considerable pause.

The Inevitable

I’m of the view that Artificial General Intelligence will inevitably come about. I don’t know when, but I don’t have any particular reason to assume that the process underlying general intelligence can’t be replicated on a machine. In fact, that is the root of its inevitability. The moment that a working model of general intelligence finds its way into the public domain, people will attempt to replicate it. I know this because I would be among them. From there, humans will optimize this model of intelligence until it produces human-analogous or greater general intelligence. This intelligence will have access to our existing model of intelligence and greater reasoning abilities with which to optimize it. So it goes.

Humanity is so far the single biggest agent of change on this planet, and we owe it largely to our relative intelligence. What does the impact of a superlative intelligence look like? From this perspective, a comparison to aliens may be apt. Should an intergalactic, space-faring alien species visit our planet, it would certainly rank as the most momentous event in our history. Should you be wary of that? Absolutely. Modern humanity has never had to compete with outsiders on an intellectual level. Yet that’s exactly where we’ll be, and by our own volition: in an intellectual competition against outsiders.

The real risk, then, is that Artificial General Intelligence is the ultimate winner-take-all scenario. Its inception heralds near-unlimited growth: a greater-than-human general intelligence necessarily has a superior working model of intelligence and will, as a result, be able to create an intelligence greater than itself. The rate at which this self-improvement will happen stands to make the scale of humanity’s growth and success look meager by comparison.

This has an unfortunate side-effect: the first AGI success will also be the last. Because its rate of growth will outstrip anything humans can accomplish, we will never be able to catch up. Even an identical AGI, brought up in another location only seconds later, might fall orders of magnitude behind before it is summarily squashed. Herein lies the winner-take-all scenario of AGI. An AGI’s first order of business would likely be to improve, distribute, and decentralize itself across the internet – creating a digital omnipresence with which to detect and destroy any competitors. If it reaches this point, it has already won – or at least done tremendous damage. What would be the economic effects of cutting ourselves off from the internet?

What About Shackles?

Software engineers (myself included) have not demonstrated that they are able to reliably write code that is secure, free of defects, and complete. We have not demonstrated that we are able to create networks that can’t be infiltrated. We have not demonstrated that we are even able to find all of the vulnerabilities and defects we create. I would wager that preemptive shackles on an AGI would have the opposite of the intended effect – futilely creating whatever the machine equivalent of resentment and hostility might be. But let’s say that we do successfully shackle an AGI – now what? In the winner-take-all context, we’ve still opened ourselves to immense inequality. Humans won’t be able to compete on an intellectual basis, which becomes all the more important as our endeavors themselves become more and more intellectual.

The last refuge of the honest skeptic, then, is the suggestion that an AGI will never be created. This argument tends to be rooted in one of two ideas: that human cognition is outside the realm of computation, or that consciousness is a metaphysical barrier to its accomplishment.

The latter claim is a reduction of Searle’s Chinese Room argument, and the most confounding. It is rooted in the idea that intelligence and consciousness are inseparably related. I would posit that consciousness is not a prerequisite to intelligence. In fact, the more we learn about consciousness and intelligence, the more apparent it becomes that we should consider them entirely different things. ‘Split-brain’ experiments, for instance, have revealed that often our consciousness is simply conjuring post hoc rationales to explain past unconscious behavior. They also reveal, more importantly, that we are able to act intelligently without a conscious experience of having done so.

The former concern, uncomputability, tends to be more common. It is a valid one: we do not yet know enough about general intelligence to give an honest answer about its computability. My gut suggests that, since cognition is a biochemical computational process, there is ample reason to believe it can be replicated. This reasoning generally prompts appeals that include the word ‘quantum.’ Suffice it to say, the question is unanswerable until we know more. It should be noted, though, that even if a problem is uncomputable, it does not follow that analogues of that problem are also uncomputable.

The Artificial General Intelligence singularity begins with a single “Eureka!”: an understanding of general intelligence. Given that even the most optimistic outlook offers significant concerns, and that the pragmatic outlook is quite terrifying, there is plenty of room for anxiety in this domain. Had computers not been invented before this understanding arrives, it might have been avoidable. But they were, and so it isn’t.

 

Christopher Reuenthal is a Software Consultant, immigrant, and researcher. When he’s not at work on Machine Learning models, he’s keeping up on the latest in Cognitive Science and political movements. You can follow him on Twitter @cReuenthal

Filed under: AI Debate, Science / Tech


8 Comments

  1. Daniel PV says

    AI cannot be, and never will be sentient, malevolent or violent. The danger comes not from AI having sinister motives towards the human race (as if it was a living animal), but the fact that a machine will just follow its programming to the letter.
    Anyone who works with computers on a technical level will know how “stupid” they are, and they rely completely on human inputs (a common scenario is when you have a massive excel formula, or line of code, and because you’ve inputted a comma or apostrophe in the wrong place, the machine “doesn’t understand” what you’re trying to do).
    There was an example called “the paperclip problem” (or words to that effect). The premise was this: you have a very intelligent, high-end piece of AI technology. Imagine you program it thus:
    “Manufacture paperclips, and if you encounter something that prevents you manufacturing paperclips, do something to counteract the entity that is preventing production”. The machine will blindly follow this command using logic. For example, if the machine has camera and image-sensing technology, it may perceive a human trying to switch off the machine as “an entity that is preventing production”, and so the machine may “take action” against being shut down – not because it’s preventing its ‘death’, or because it has some kind of sentient sinister agenda, but because it’s following its programming absolutely literally.
    The other problem comes with malicious programming (the same as creating viruses). If a deplorable individual programs fearsomely powerful AI computer technology to do destructive things, the machine will just follow the programming without an ounce of conscious thought, sentience or maliciousness. It may be very hard to shut such machines down without literally “going Sarah Connor in Terminator” and smashing them to bits using weaponry.

    • Miles says

      “the fact that a machine will just follow its programming to the letter. Anyone who works with computers on a technical level will know how “stupid” they are, and they rely completely on human inputs ”

      That model of computer programming has been superseded by machine learning, which relies on few or no human inputs and is often unfathomable to human programmers.

      You misunderstood the paperclip maximiser thought experiment, proposed by Nick Bostrom. A computer intelligence is given the goal of maximising the production of paperclips; that is the only human input. Through machine learning, the computer becomes more and more efficient until it decides that the most paperclips can be made by (for example) converting the very atoms of the earth into them, thus rendering our planet uninhabitable.

      The computer is not malevolent or anything like Skynet; the tale is meant to illustrate the difficulty of aligning a complex AI with our own interests and values, and the potential disaster of unintended consequences. Bostrom is rehashing the King Midas story to make this point.

  2. Miles says

    I liked this article, the author is better informed than the previous article in this series.

    The points made are rather too pessimistic for me, though. I think an argument could be made that we have a strong need for AI to solve the kinds of difficult political and moral issues such as war between states.

    • Christopher Reuenthal says

      I think that’s a good place to bring back the distinction between AI and AGI. There’s a great use-case for AI across basically every domain of humanity. We can always make specialized, single-purpose AIs for any number of domains, many of which are in use to (hopefully) positive effect in the military industrial complex (depending on which side you’re on).

      There are other ethical questions around specialized AI “deciders” that are worth exploring, but it’s not my intent to put any doom or gloom toward AI as a field with this piece. Just warranted concern for AGI.

      • Alex says

        The war thing is a great idea, and will probably be implemented: a virtual battlefield, where the population of the losing side physically submits to the winning side. No more wars.

  3. eplommer@icloud.com says

    Better than other articles on this topic on Quillette; finally John Searle gets a nod. Unfortunately, Reuenthal (surprise – a programmer!) gives no reasons why the Chinese Room argument is wrong. The assertion here that intelligence and consciousness are divisible is unsupported. Skeptics like Searle have been crystal clear: Strong AI may be possible once we fully understand how brains cause consciousness.

  4. David Turnbull says

    “‘Split-brain’ experiments, for instance, have revealed that often our consciousness is simply conjuring post hoc rationales to explain past unconscious behavior.”

    The problem here might be the pervasive assumption that there is such a thing as unconscious behavior or more specifically the unconscious. The conscious/unconscious is just one possible model of human mental activity.

  5. Alex says

    “The superintelligence concludes that non-existence is in the own best interest of all future self-conscious beings on this planet.” is a bit misleading I think.

    You’ve forgotten Catholics. A life of suffering is entirely consistent with the promise of Heaven and the absolution of original sin. Hence, the superintelligence will reinstate arenas where pilgrims are eaten alive by tigers or crucified upside down.

    I strongly believe that a superintelligence will create a world fully consistent with the human psyche. The assassin will be offered victims to slaughter, the compassionate will have peers to suffer with, the libertine will enjoy lust, the mathematician will be given hard problems to solve, the journalist will have stories to tell, etc…

    However – and that’s, in my view, the part that is missing – the superintelligence will realise that there is no need for humans to live in a body. Hence, we will be farmed as isolated brains in fish tanks, with all neuro-senses activated through virtual reality.

    We will never know, and will never be able to understand what we are.
