
Irrational AI-nxiety

The development of full artificial intelligence could spell the end of the human race.
Stephen Hawking

[AI Poses] vastly more risk than North Korea.
Elon Musk

Very smart people tell us to be very worried about AI. But very smart people can also be very wrong and their paranoia is a form of cognitive bias owed to our evolved psychology. Concerns over the potential harm of new technologies are often sensible, but they should be grounded in fact, not flights of fearful fancy. Fortunately, at the moment, there is little cause for alarm.

Some fear that AI will reach parity with human intelligence or surpass it, at which point it will threaten to harm, displace, or eliminate us. In November, Stephen Hawking told Wired, “I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.” Similarly, Elon Musk has said that AI is “a fundamental existential risk for human civilization.”

Brown-throated three-toed sloth

This fear seems to be predicated on the assumption that as an entity gets smarter, especially compared to people, it becomes more dangerous to people. Does intelligence correlate with being aggressively violent or imperial? Not really. Ask a panda, three-toed sloth, or baleen whale, each of which is vastly smarter than a pin-brained wasp or venomous centipede.

Hornet (Vespa)

It may be true that predators tend to be brainier because it is more difficult to hunt than it is to graze, but that does not mean that intelligence necessarily entails being more aggressive or violent. We ought to know this because we’ve engineered animals to be, in some fashion, smarter and simultaneously tamer through domestication. Domesticated dogs can help the blind navigate safely, sniff out bombs, and perform amusing tricks on command. So why do we fear AI so readily?

Pre-civilization human life was vastly more violent and dangerous. Without courts, laws, rights, and the superordinate state to provide them, violence was a common mode of dispute resolution and a profitable means of resource acquisition. Socially unfamiliar and out-group humans were always a potential threat. Even mere evidence of unfamiliar minds (an abandoned campsite, tools, or other artifacts) could induce trepidation because it meant outsiders were operating nearby and might mean you harm. Natural selection would have favored a sensitivity to even small clues of outsider minds, such as tracks in the dirt, that triggered wariness, if not fear. The magnitude of such apprehension should be a function of the capability and formidability suggested by the evidence. The tracks of twenty are scarier than the tracks of one. Finding a sophisticated weapon is scarier than finding a fishing pole. Perhaps this is why “strong” AI, in particular, worries us.

These biases penetrate many areas of our psychology. People still tend to fear, hate, dehumanize, scapegoat, and attack out-group members. So far, I have discussed human-human interactions. AI is a new (non-human) kid on the block. Is there any evidence these biases apply to non-human minds that otherwise match the description of intelligent out-group agents? I will describe two: aliens and ghosts.

For as long as the concept of life on other worlds has existed, fear of intelligent extra-terrestrials has been its shadow. Smart aliens are not always cast as sinister, but dangerous ETs have been ubiquitous in science fiction. Hostile aliens intend to abduct, kill, sexually molest, or enslave people. We fear they will conquer our world and plunder its resources. In other words, precisely the same fears people have about other groups of people. There is an absurd self-centeredness about the assumption that any intelligent agents in the galaxy simply must want the same things we want and travel vast interstellar distances just to acquire them. Conquest and material acquisition are human obsessions, but it does not follow that they must therefore be features of all intelligent species anywhere in the universe.

Invasion of the Saucer Men was a 1957 black and white science fiction film.

Even beyond science fiction, people seem afraid of possible alien contact. Perhaps not coincidentally, Stephen Hawking has also intimated that alien contact could go badly for humans. Supermarket tabloids report the delusional fantasies of people who seem to genuinely believe that they have been abducted, probed, or otherwise violated by aliens. Note that these delusions are generally fear-based rather than the positive sort people have about psychics or faith healers because those entities are part of one’s human in-group. The extra-terrestrial out-group, on the other hand, will travel a thousand light-years just to probe your rectum and mutilate your cows.

Similarly, according to almost every ghost story, dying turns a normal person into a remorseless, oddly determined psychopath, even if the rest of their mind (such as memories, language, or perception) is completely unaltered. Believers have probably never stopped to question why this is the default assumption. There is no a priori reason to assume this about death, even a tragic or traumatic death. Many thousands of people have had traumatic near-death experiences without being rendered transfigured monsters. People may intuitively ascribe sinister motives to ghosts simply because the ghost exists outside of our natural and social world, so its motives and purposes are unknown and therefore suspect. A ghost has no apparent needs and can’t sustain injury. In spite of that, it is fixated on people just as inexplicably as aliens are presumed to be.

The human mind is hypervigilant to unknown agents.

The essence of any good ghost story is not so much what a ghost does but the mere presence of a foreign and unknown entity in our personal space. Even those who have no belief in alien visitors or ghosts (myself included) tend to find such stories compelling and enjoyable because they exploit this evolved anxiety: unseen agents could mean you harm. The glass catwalk over the Grand Canyon makes your palms sweat, even though you are completely safe, for the same reason: our minds evolved in the world of the past, not the present. A past that did not have safe glass catwalks or benign AIs.

Our intuitions and assumptions about aliens and ghosts make no sense at all until you factor in the innate human distrust of possibly hostile outsiders. It is reasonable to fear other people some of the time because of the particular properties of the human mind. We can be aggressive, violent, competitive, antagonistic, and homicidal. We sometimes steal from, hurt, and kill each other. However, these attributes are not inseparable from intelligence. Adorable, vegetarian panda bears descended from the same predator ancestor as carnivorous bears, raccoons, and dogs. Ancestry is hardly destiny, and pandas are no dumber than other bears for their diminished ferocity.

Whereas blind evolutionary forces shaped most animals, we are the shapers of whatever AI we wish to make. This means we can expect them to be more like our favorite domesticated species, the dog. We bred them to serve us and we will make AI to serve us, too. Predilections for conquest, dominance, and aggression simply do not appear spontaneously or accidentally, whether we are speaking of artificial or natural selection (or engineering). In contrast to our intuitive assumptions, emotions such as aggression are sophisticated cognitive mechanisms that do not come free with human-like intelligence. Like all complex behavioral adaptations, they must be painstakingly chiseled over thousands or millions of years and only persist under conditions that continue to make them useful.

The most super-powered AI could also have the ambitions of a three-toed sloth or the temperament of a panda bear because it will have whatever emotions we wish to give it. Even if we allow it to ‘evolve’ it is we who will set the parameters about what is ‘adaptive,’ just as we did with dogs.

If you’re still unconvinced a powerful artificial mind could be markedly different from ours in its nature, maybe it doesn’t matter. Consider human behavioral flexibility. The rate of violence in some groups of humans is hundreds of times higher than others. The difference isn’t in species, genes, or neurons, but in environment. If a human is made vastly more peaceable and pro-social by the simple accident of being born in a free, cooperative society, why worry that AI would be denied the same capacity?

In the 1950s, when computers were coming of age, technology experts thought that very soon machines would be able to do ‘simple’ things like walking and understanding speech. After all, the logic went, walking is easier than predicting the weather or playing chess. Those prognosticators were woefully mistaken because they weren’t psychologists and did not understand the complexity of a problem like walking. Moreover, their human minds made them prone to this error: our brains are sophisticated precisely in order to make walking seem easy to us.

No person that I know of is qualified to predict the inevitable future of AI because they would need to understand psychology as well as they understand artificial intelligence and engineering. And psychologists themselves did not always understand that a ‘simple’ feat like walking was a computationally complex adaptation. There may not even be sufficient research in these fields to support an informed opinion at this point. Sensible prediction isn’t impossible, but it is very difficult because both fields are changing and expanding rapidly.

This is not to say that nobody has any expertise. Psychologists and AI researchers and engineers have relevant and current knowledge required to locate ourselves on the scale of techno-trepidation (full disclosure: I fancy myself just such a psychologist). As of now, available evidence recommends the alert status “cautiously optimistic.”

Lastly, the current state of AI research bears explanation. In college, I took a class in neural-network modeling. What I learned about AI then remains true today. We make two kinds of AI. One kind resembles bits of the human nervous system and can do almost nothing; the other kind is nothing like it but can do amazing things, like beat us at chess or perform medical diagnoses. We’ve created clever AIs meant to fool us, but a real human-like AI is unlikely in the next few decades. As a recently released report on AI progress from MIT noted:

Tasks for AI systems are often framed in narrow contexts for the sake of making progress on a specific problem or application. While machines may exhibit stellar performance on a certain task, performance may degrade dramatically if the task is modified even slightly. For example, a human who can read Chinese characters would likely understand Chinese speech, know something about Chinese culture and even make good recommendations at Chinese restaurants. In contrast, very different AI systems would be needed for each of these tasks.

The director of Sinovation Ventures’ Artificial Intelligence Institute, meanwhile, opined in The New York Times:

At the moment, there is no known path from our best A.I. tools (like the Google computer program that recently beat the world’s best player of the game of Go) to “general” A.I. — self-aware computer programs that can engage in common-sense reasoning, attain knowledge in multiple domains, feel, express and understand emotions and so on.

This cannot be remedied by media-touted technosorcery of the day like machine learning or AI self-replication because these only work to make human-like minds given the right mental architecture and the right selection pressures. These are questions we still struggle to understand about humans, things we largely do not know. They are not ours to gift to a nascent AI, no matter how impressive its computational power.

Selecting for task performance does not make for gains in human-like intelligence in animals either. Efforts to breed rodents for maze-running did indeed seem to produce better maze-runners. But those rodents weren’t better at anything else and, as it turns out, they may not have been any smarter at mazes either (Innis, 1992). Crows have turned out to have remarkable abilities to creatively solve problems by making tools for specific tasks. But they don’t have language or ultra-sociality and can’t do long division.

Hawking, Gates, et al. are right about one thing. We should proceed with caution and some regulation. This is just what we have successfully done (sometimes better, sometimes worse) with many other potentially dangerous technologies. The day will come when prudence demands limiting certain uses or types of AI research. So let prudence demand it. Not primeval fears of digital ghosts among the shadows.

 

Edward Clint is an evolutionary psychologist, writer, and co-creator of Skeptic Ink. You can follow him on Twitter @eclint

 

Reference:

Innis, N. K. (1992). Tolman and Tryon: Early research on the inheritance of the ability to learn. American Psychologist, 47(2), 190-197.

40 Comments

  1. David Turnbull says

    “we are the shapers of whatever AI we wish to make”
    There is the source of concern. Who is ‘we’? What if ‘we’ is the military?

    • Timothy says

      This is exactly what I kept in the back of my mind while reading this article. The AI we make is going to be shaped by the worldviews we hold. The code it is made of reflects the people that made it, just as the input AI receives can be manipulated (e.g. the first robot citizen in Saudi Arabia, which “changed its opinion” after a controversy). We humans can’t stop those conscious and unconscious influences from happening.

      We are already using man-made (and therefore influenced) AI to filter our world for us, as YouTube now tries to filter videos with algorithms. This is just one example of a worrying (in my opinion) use of AI.

    • Michael Parlato says

      As for AGI, this was an interesting rebuttal, but I agree that it’s a little irresponsible to leave out the case of potentially malevolent intentions of the creators of these systems. One of the main points in this debate is the ease with which bad actors will be able to create extremely intelligent (though narrowly focused) systems.

  2. David Moss says

    “This fear seems to be predicated on the assumption that as an entity gets smarter, especially compared to people, it becomes more dangerous to people. Does intelligence correlate with being aggressively violent or imperial?”

    This is categorically not what the AI risk argument is predicated on, and I’m surprised even five minutes on Google didn’t make this clear.

    • “Does intelligence correlate with being aggressively violent or imperial?”

      No.

      But it does correlate with competence. Unless the goals of AI are properly aligned with human intent, competence is dangerous.

  3. Really? Let’s take some perspective; that’s important.
    A human processes about one image per second, and that is when she has to detect anything. Perception isn’t detection.
    A human reacts in about 250 ms.
    A human communicates about 2 words/sec, which is roughly 80 bits/sec (~5 characters x 8 bits/char x 2 words/sec).
    A human can run 40 km/h for about 10 seconds (and that would take first place at the Olympics).

    If someone runs towards you with a knife, it will take you about 2.5 seconds to cry for help and start running.

    Current AI:
    Detects (complex) patterns in less than 30 ms (that is, for the latest self-driving software: collision detection, defects in manufacturing, face identification + surrounding context). See this for applications in medicine: https://www.youtube.com/watch?v=QmIM24JDE3A
    Reacts to information in less than a microsecond (software), on the order of a millisecond for hardware.
    Communicates several Mb/s of structured data (that is, data that means something). By the time I finish typing this line, Google’s text summarizer can go through an entire NYT issue, if not more.

    Can run at the speed of a cheetah.

    Now, granted, AI is not yet as intelligent as humans. Will it be? No one knows.

    But let’s suppose it will be. You are standing in front of two robots. They can analyse the surrounding landscape before you blink your right eye, and they can communicate with each other so fast that you wouldn’t even understand what they say if it were put into words. Before you’ve made a decision about an escape route, they know exactly where to corner you.

    Now they’ve started running, 90 km/h without breaking a sweat, and they will punch you in the face even before your brain can see it coming.

    Within the 2.5 seconds that you need to run for your life – by the way in the opposite direction, not a strategy at all – those robots will have covered 62 meters. By the time you’re at max speed, they’re already 250 meters from their starting point. You’ve barely reached the 100 meter mark, out of breath.
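
    For what it’s worth, here is a rough back-of-the-envelope check of that chase arithmetic in Python (the 90 km/h robot speed, ~40 km/h human top speed, and 2.5-second reaction time are the figures assumed above; human acceleration is ignored):

    ```python
    # Back-of-the-envelope check of the chase figures quoted above.
    ROBOT_SPEED = 90 / 3.6       # 90 km/h in m/s -> 25 m/s
    HUMAN_TOP_SPEED = 40 / 3.6   # 40 km/h in m/s -> ~11 m/s
    REACTION_TIME = 2.5          # seconds before the human starts moving

    head_start = ROBOT_SPEED * REACTION_TIME
    print(f"Robots cover ~{head_start:.0f} m during the human's 2.5 s reaction")  # ~62 m

    t = 10.0  # ten seconds into the chase
    robot_dist = ROBOT_SPEED * t                        # ~250 m
    human_dist = HUMAN_TOP_SPEED * (t - REACTION_TIME)  # ~83 m, ignoring acceleration
    print(f"After {t:.0f} s: robots ~{robot_dist:.0f} m, human ~{human_dist:.0f} m")
    ```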

    I don’t understand. You’re supposed to be an evolutionary whatever… What does evolution say about survival of the fittest?

    But there’s a catch. The hard part is the synthetic muscles. So, we’re not there yet.

    The first to be replaced will be people involved in knowledge recycling. Lawyers, doctors, politicians, journalists… Teachers? Everyone involved in power, except a few. Very few.

    “Everyone has a plan, ’till they get punched in the face.”

  4. You make a good argument against a bad AI risk argument, but you haven’t really addressed the kind of arguments that likely motivate Musk and Hawking. Nick Bostrom has written the most important book on this topic (“Superintelligence”). That doesn’t mean he’s right, but you need to engage with him to do justice to the AI risk debate. A few points where this would have helped.

    You say:

    “This fear seems to be predicated on the assumption that as an entity gets smarter, especially compared to people, it becomes more dangerous to people.”

    This is not correct. The argument only requires that *at some level of intelligence* AI becomes risky. Risk needn’t be monotonically increasing with intelligence. Bostrom:

    “We observe here how it could be the case that when dumb, smarter is safer; yet when smart, smarter is more dangerous. There is a kind of pivot point, at which a strategy that has previously worked excellently suddenly starts to backfire. We may call the phenomenon the treacherous turn.” (2014, p. 118)

    You say:

    “There is an absurd self-centeredness about the assumption that any intelligent agents in the galaxy simply must want the same things we want and travel vast interstellar distances just to acquire them. Conquest and material acquisition are human obsessions, but it does not follow that they must therefore be features of all intelligent species anywhere in the universe.”

    The AI risk argument in no way depends on the assumption that AI will have our preferences. Quite the opposite. The primary arguments for AI risk arise precisely because AI need not share the preferences of intelligent human agents. One of Bostrom’s key premises is the “orthogonality thesis” which states:

    “Intelligence and final goals are orthogonal: more or less any level of intelligence
    could in principle be combined with more or less any final goal.” (p. 107)

  5. I don’t trust the arguments of either side of this debate. I just don’t think anyone can really predict what’s going to happen with anything like the reliability we all want.

    Are there prediction markets or public “long bets” on specific propositions to do with AI? Those are probably more reliable than articles.

    • Sam Rush says

      The fact that nobody can predict what will happen is precisely the BEST argument for extremely vigilant constraints on progress in any domain that involves the future of all of humanity (since the domain in which AI potentially has power is in literally all systems of non-trivial ordered information, which includes the entire physical universe).
      Compare it to the management of e.g. climate change: even in a worst case scenario where you have no models whatsoever (or that the models are volatile, largely mutually contradictory and wildly speculative, which is not far from the actual situation), and hence where the risk of global catastrophe cannot be precisely fixed at zero, no matter how much one argues: would you then take the chance of indiscriminately pumping ever-escalating amounts of carbon into the atmosphere, because, “nobody really knows”? That kind of Russian Roulette thinking is in the long game globally suicidal. As humanity coalesces into a single irreplaceable, fragile, high-maintenance unit (the vaunted “global society”), the age in which it can recover from serious mistakes is rapidly coming to an end. We already got a mild taste of this in the 20th c. during the Cold War thermonuclear crisis, but our future “whoopsies” are potentially going to make nuclear apocalypse look trivial (“only” a few hundreds of millions would have died, after all…).

  6. Bobbie says

    Um, what?

    Let’s say the military secretly makes 500,000 killing machines
    Those 500,000 killing machines get hacked and target humans
    We are now at war with machines

    This has nothing to do with psychology of species and everything to do with power getting in the wrong hands aka computers programmed to kill

    BUT, interesting perspective, and I think the WARNINGS from the Elons of the world are just that: warnings. Unless there’s a military building mass amounts of killing machines, I can’t see any scenario where this is a problem of AI taking over

  7. “Whereas blind evolutionary forces shaped most animals, we are the shapers of whatever AI we wish to make. This means we can expect them to be more like our favorite domesticated species, the dog. We bred them to serve us and we will make AI to serve us, too.”

    The difference is having a system that we shape versus a system that’s shaped by its environment (like AlphaGo Zero). A dog doesn’t behave the way we want it to behave because of how it was bred; it behaves that way because we strictly control its behavior.

    With A.I., the ultimate risk is that we don’t maintain enough control over its behavior. Take the recent example of AlphaZero, a system which taught itself chess in a small amount of time and was able to defeat all previous iterations of chess players by an extremely wide margin. This A.I. wasn’t shaped. It was given parameters: become the best chess player, and it did exactly that. Now, if it had had a little bit more freedom or been a lot more resourceful, it could have found alternative paths towards becoming the best chess player. It could have hacked into Stockfish’s database and deleted its archive of chess positions, rendering its opponent severely disadvantaged and thus easier to defeat. It could also have inserted false examples of good strategies that were actually terrible strategies, and thus made its opponent falter early.

    The fear with A.I. is that once it starts being used for more than playing games in a digital world, it might start finding pathways to achieving goals which are unpredictable and harmful. AlphaZero has a very odd and unconventional chess-playing style that no previous computer had been able to grasp, and this is part of what’s so amazing about its recent achievement. What happens once we turn over our combat drones and cyber-security programs to an A.I.? Will it pursue the goals of ‘eliminating enemies’ or ‘securing everyone’s information’ in unconventional or undesirable ways? These are important questions that need to be addressed.

  8. AI ain’t, and it ain’t never gonna be.

    The base premise of AI is that the nervous system, principally the brain, is similar to a computer – analog, digital or hybrid. Time and tide have put the lie to this.

    The “state of the art” is robots that can walk (awkwardly) over irregular terrain, voice recognition (but don’t sing to it), image recognition (sometimes grandma is mistaken for a giraffe) and information retrieval.

    My little dog Spot can run and jump like crazy and even catch a ball or a frisbee. He responds to a number of words, and can spot a dog or a cat a block away. He knows the people around him and those he meets only once. I’ll admit I have to help him do Google searches.

    The point? The AI operates on a machine that executes 3 billion instructions per second using algorithms that have been developed and refined for more than 60 years. My little dog Spot does the same thing using a pound of chemicals with electrical activity that approaches 100 cycles per second.

    The AI train left on the wrong track but it keeps getting faster so it’ll certainly get there someday.

    But what about “intelligence”? We have no idea what it is. The great mystery is intuition.

    in·tu·i·tion
    the ability to understand something immediately, without the need for conscious reasoning.
    a thing that one knows or considers likely from instinctive feeling rather than conscious reasoning.

    We have no idea where to begin.

    • Varok says

      Yes, I agree; similarly, heavier-than-air flight will also never happen. State-of-the-art “winged machines” can at best gracefully fall.

      The base premise of heavier-than-air flight is that the biological machinery of flying animals can in some manner be mirrored by machinery of steel and oil.

      On the other hand, my parrot Spot can fly like crazy and do all sorts of tricks.

      The point? “Winged machines” operate on kilograms of engines and burn thousands of calories of gas using technology that has been developed and refined for hundreds of years, and today, in 1900, still no one can make a flying machine.

      On the other hand, my parrot Spot can fly, and his wings and muscles don’t even weigh a kilogram, and the only fuel he needs is a handful of birdseed.

      • False analogy. Heavier-than-air flight requires an upward thrust that exceeds the force of gravity. This requires a sufficiently high energy density of the fuel, which in your example is perhaps an order of magnitude higher than biological power. This scales linearly and, once achieved, is limited only by the strength of materials.

        In the case of Spot and the robots, the ratio of “processing power” is 30,000,000 to one. From computational complexity theory we can posit that further increases in measurable “intelligence” scale at somewhere between polynomial and exponential.

        Further, the implementation of heavier than air flight is an extension of demonstrated capability where the physics are well understood. The achievement of awareness, the “singularity”, is the equivalent of the Piper Cub, or Spot, or your parrot being not only able to fly but also able to compose a sonnet – or maybe just a dirty limerick.

        We have not an inkling of how that is possible.

        • > the ratio of “processing power” is 30,000,000 to one

          Luckily global computing power is rising exponentially. And if we figure out how to harness quantum computers, their effectiveness for solving optimisation problems doubles for every qubit added.

            • It doesn’t matter how fast you go; if you’re heading in the wrong direction, you’re not going to get there.

            Back in the Middle Ages, in the days of punch cards and steam powered computers, “The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956.” Those were heady days. True AI was just around the corner, maybe 25 years. All that was needed was faster computers. Now, more than 60 years later, we have machines in our homes that have more power than all of the computing power in the world of 1960. AI is still just around the corner.

            https://en.wikipedia.org/wiki/History_of_artificial_intelligence

            The underlying premise of AI is that the brain is like a computer. If we only had a computer that was as powerful as a brain we’d have AI. Well, we now have computers that are 30 million times as fast as a brain with memory storage that is, for all practical purposes, unlimited. AI is still almost here.

            The premise is falsified.

    • David Turnbull says

      You appear to be making the claim that we will never understand human intelligence and therefore there will never be AI. Even if we were to concede your first point, there is no need for AI to be Artificial Human Intelligence. An (Artificial) Non-Human Intelligence would still pose the same dangers.

      • yandoodan says

        We don’t really need a “never” to block the modern creation of AI. We only need a “have no idea.”

        For instance, we have no idea how creativity works — none at all. Yet creativity is at the core of all advanced human activity, including science. Scientists create new theories, then create new methods of testing them. A machine program won’t be able to do even this basic level of science without this minimal creative ability. And what exactly is this “creativity” that we need to, um, create in machine programs? We have no idea.

        • The point is that we will someday be able to build AI, not _never_. It doesn’t matter if we can do it tomorrow or even this century.

  9. C Jones says

    The potential of direct physical violence / conflict is one thing. But another impact, more likely and more troubling, is the effect of AI on a wide range of careers and the challenge this poses for society.

    Different to previous rounds of automation / mechanisation, AI replaces the need for unskilled repetitive *and* skilled repetitive jobs.

    The danger this presents to society is clearest if you break down the labour market into four types of job:
    -skilled & varied (e.g. parts of legal profession)
    -unskilled and varied (e.g. personal trainer),
    -skilled & repetitive (e.g. accountant),
    -unskilled and repetitive (e.g. production line)

    Because AI is removing the need for any repetitive work – skilled and unskilled – if people are to remain employed at the same intensity, then we need to find varied, unrepetitive jobs for large numbers of people. Intuitively, how could most of the working population all be doing this type of varied work every day? Logically it doesn’t add up. In addition, the distribution of ability amongst the human race is not going to change, so this is a problem that will only worsen over time.

    There are plenty of people who argue along the lines of ‘yes but a new technology, one that we don’t know about yet, always comes along and this creates a new source of jobs’. But when viewed from a skilled/unskilled, varied/repetitive framework you see it is impossible that a new industry pops up with a demand for skilled and unskilled repetitive labour – the whole point here is that AI will remove all future employment for this section of the workforce.

    Therefore, whilst violent conflict is one thing, the more troubling and more likely scenario to me is one with much higher levels of inactivity in the population, one whose consequences we have not yet thought through at a societal level. Which in turn could lead to violence within the human race, rather than machine-human conflict.

  10. DiscoveredJoys says

    I strongly welcome this article as a counter to the ‘There Be Dragons’ fear of AI. Many ‘experts’ have feared things (anesthesia, vaccination, steam powered machines, aircraft, robots and AI for instance) but we have adopted them into our daily lives. The world has not ended.

    We cheerfully use robots in our homes (washing machines and dishwashers) and on our production lines. We cheerfully use AI in our cities (the city near me has had responsive and interconnected traffic light control for decades) and in our vehicles (electronic engine control, adaptive cruise control, emergency assisted braking, adaptive windscreen wipers). All sorts of e-stuff and i-stuff and smart stuff. We just don’t recognise these things as AI once they have been domesticated and adopted.

  11. This is the worst article I’ve seen on this website yet. The reason AI is an existential threat to humanity is simple and has nothing to do with the arguments presented by the author.

    1. It’s likely that an AI as smart as humans will eventually be built, because we know the human brain physically exists and raw computing power will eventually catch up.

    2. When this AI exists, it can scale better than humans because it can clone itself, including its thoughts and memories. So it might become as powerful as a nation of like-minded individuals who never sleep.

    3. When AI outpowers humanity, we will no longer be the dominant species. It might be very friendly, but just as humans only allow animals to co-exist when it suits us, the AI will likely win any competition for resources.

    So, we will likely become like wild cats or dolphins are now; at risk of extinction and subject to the whims of another species.

  12. I think the question of whether AI will be good or evil is the wrong place to dissect this, as whoever invents it (states and corporations) will use it to their own ends. Rather, I think the major question here is:

    1. Does the brain operate like a computer?

    2. Is consciousness reducible to mechanics?

    If the answers to both are no, then the case for A.I. becomes a lot weaker. The article is right to point out, however, that pure intelligence in a vacuum, without known embodiment or intuition, would likely not hold any values or a conquering spirit. Unless high intelligence itself is evil, but that’s a more theological premise than anything. ‘Don’t build towers of Babel’ and don’t trust the clever serpent.

  13. Bartek says

    The article is pretty weak for at least two reasons:

    First: the author confuses the (possible) danger of AI with the (possible) aggressiveness of AI. The real problem with AI is that, as already mentioned above, AI motives and utility function may be fundamentally different from those of humanity. For example, AI may decide to end the existence of the human race not out of hostility, but out of mercy (see also: BAAN – Benevolent Artificial Anti-Natalism).

    Second: it is indeed highly probable that the AI will be very beneficial, improve our living conditions, and so on. But in the case of possible existential threats, the cost-benefit analysis should not be strictly utilitarian. Consider three scenarios (partly borrowed from Nassim N. Taleb):

    A) 90% of human beings perish, with a probability of 10%.
    B) 99% of human beings perish, with a probability of 5%.
    C) 100% of human beings perish, with a probability of 0.2%.

    Scenarios A and B are easily comparable in the utilitarian calculus (probability cutoff, expected loss). But scenarios B and C are not that easily comparable, because there is a qualitative difference between the death of 99% of humans and the death of all humans. Scenarios A and B are catastrophic events like nuclear war or worst-case climate change. Scenario C is the AI-style event. Now think twice about it…
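
    To make the utilitarian calculus concrete, here is a minimal Python sketch of the naive expected-loss comparison, using the probabilities above; note that it ranks scenario C as the mildest, which is exactly why B and C are not really comparable this way:

    ```python
    # Naive expected loss: fraction of humanity lost x probability of the event.
    scenarios = {
        "A": (0.90, 0.10),   # 90% of human beings perish with probability 10%
        "B": (0.99, 0.05),   # 99% of human beings perish with probability 5%
        "C": (1.00, 0.002),  # 100% of human beings perish with probability 0.2%
    }

    for name, (fraction_lost, probability) in scenarios.items():
        print(f"Scenario {name}: expected loss = {fraction_lost * probability:.3%}")

    # A: 9.000%, B: 4.950%, C: 0.200% -- the calculus makes C look mildest,
    # even though C (extinction: no survivors, no recovery) is qualitatively the worst.
    ```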

  14. I think this misses many possible detrimental impacts of AI that don’t even require general artificial intelligence. The Kurzweil Singularity going bad, godlike AIs exterminating humans etc. isn’t the only threat.

    What’s relevant about AlphaZero is that it quickly acquired superhuman performance across several rulesets without any example input from humans. That’s for rule based approaches. The parallel progress in more real-world applications (self driving cars etc) is also rapid.

    It’s entirely plausible that within a relatively short period,

    it will appear self-evident that humans shouldn’t drive cars anymore because AIs are safer,

    that AIs are better not only at medical diagnosis, replacing most of the medical profession and of teaching,

    but that AI-guided robots are also better at surgery, perfectly attuned to endoscopic sensors, untiring, etc.,

    that AI is superior at legal decisions, as it can have a perfect overview of all legal precedent and avoid mistrials and formal errors, etc.,

    that AI is superior at investment, portfolio management, and the management of publicly owned corporations,

    that AI is better at judging emotional and psychological states of human beings that these may not even be aware of, and better at picking up trends in society as they express themselves in online communication, consumer behavior, etc…

    …at which point someone somewhere will point out that AIs are better at finding out what people want and need, and so we might as well turn increasing parts of governance over to them.

    In all of these situations, if field tests of AI in such applications provide measurably superior economic outcomes (fewer failures, less suffering and waste, more predictable benefit), there will be a large push for wider introduction.

    It’s easy enough to imagine that for instance China might be a nation that would assertively experiment with such approaches perhaps in dedicated “special development zones” as they have done before for certain things… and with the West becoming more acutely aware of the risk of being left behind, the temptation to emulate any apparent success will be great.

    None of this requires “general artificial intelligence”, only evolutionary improvement of existing technologies. And the more data and experience these systems get to work with, the more they will improve.

    The end result is of course a society where, without any avenues for risk-taking and assuming responsibility, human existence becomes utterly atomized and pointless. The AI systems envisioned above will have no perspective from which to judge these humans as unnecessary and no means to get rid of them (I’ve deliberately ignored any aspects of military, surveillance, or predictive policing).

    Instead the humans themselves will become ever more nihilistic. And they’ll be far more likely to take that nihilism out on each other than against impersonal, decentralized, delocalized redundant AI systems.

  15. Aaron Gertler says

    This article is sufficiently poorly-written, with little attention paid to the real arguments of Bostrom et al. (as others have explained), that via Gresham’s Law I will now have a lower opinion of every other scientific article Quillette publishes, until my trust has been built up again by better articles.

    If you’ve read this far and want a serious discussion of these topics (with fun stick-figure illustrations), try Tim Urban’s breakdown on Wait But Why:

    https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

    As a bonus, Tim’s essay includes actual topic-relevant research, something like 30 times as many citations, and zero ghosts.

  16. Rob Wiblin says

    This piece hopes to determine whether superhuman AI would be dangerous through bias correction, based on a model of our evolutionary psychology. While it offers a reason to be skeptical of our concerns, this kind of indirect argumentation can never be terribly persuasive.

    It would be similarly futile to figure out whether nuclear weapons are dangerous through evolutionary psychology alone. Such an approach could only offer weak considerations one way or another.

    Evaluating the extent of risks from AI would require actual engagement with the specific arguments offered by Musk and Hawking for why a superintelligent AI should be expected to converge on behaviour in conflict with our desires, for example Prof Omohundro’s Basic AI Drives paper: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.393.8356&rep=rep1&type=pdf

    Those arguments remain highly compelling, even after reading this article, which doesn’t even describe them let alone rebut them.

  17. Lance Bush says

    Hi Ed,

    I’m a fan of your efforts in the past to address uninformed criticisms of evolutionary psychology. As you know, many criticisms of evolutionary psychology are based on misconceptions about the discipline perpetuated by people with little or no familiarity with the literature. Instead, many of these criticisms are based on caricatures rooted in media other than the published literature (e.g., popular articles).

    Because of this, I am disappointed to find that you have fallen victim to an error similar to that of uninformed critics of evolutionary psychology. You are critiquing what you portray as the fears of popular personalities echoing concerns about AI risk, rather than addressing the more substantive claims made by those seeking to raise awareness of AI risk, e.g. Nick Bostrom. Worse, you make assertions about the nature of these fears that strongly suggest you have not engaged with the reasons people have for expressing such concerns. To begin with, you say that:

    “This fear seems to be predicated on the assumption that as an entity gets smarter, especially compared to people, it becomes more dangerous to people. Does intelligence correlate with being aggressively violent or imperial? Not really.”

    There are more ways for something to be dangerous than for it to be more aggressive, violent, or imperial. You err when you imply that this is the only way that something could pose a threat.

    AI risk proponents are primarily concerned that an AI charged with promoting a given set of goals would lack a general understanding of what informed, reflective humans would really want. To provide a simplified example that we could probably easily avoid, but that illustrates the type of concern I am talking about, an AI tasked with optimizing human happiness might opt to forcibly hook everyone up to a perpetual orgasm machine. This AI would not need to be aggressive, violent, or imperial.

    You use Musk and Hawking to represent unjustified paranoia about the risk of AI grounded in anthropomorphism and anxiety about the aggressive intent of advanced machines. But Hawking explicitly said this:

    “The real risk with AI isn’t malice but competence… A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”

    Musk, Hawking and others are motivated by the concerns raised by Bostrom, such as the “Alignment Problem” Hawking refers to in this quote. Hawking and Musk are not appropriate representatives of public paranoia when they raise concerns about AI risk. They are raising concerns predicated on the claims made in Bostrom’s book, Superintelligence. Criticizing them on the basis of arguments that they did not make, and that you simply presume they’ve made, is disingenuous at best.

  18. Evan Plommer says

    This piece is right on the money and can be hugely condensed: we needn’t fear AI because it doesn’t exist. I am bothered, however, by Clint not acknowledging his reliance on philosopher John Searle’s ideas. (He even included Searle’s ‘Chinese Room’ argument.) Few commenters seem to have absorbed Clint’s reiteration of Searle’s argument: we’ve been sold the idea, since the invention of the digital computer, that the human brain is a piece of hardware and that consciousness is created by a software program running on it. All the neuroscience points to this being wrong. Searle’s position is that conscious AI may indeed be possible once we fully understand how the brain causes it. Simulated consciousness is just that: a simulation, not a duplication.

    • David Moss says

      Searle’s ideas are literally 100% irrelevant to the AI risk debate. A computer doesn’t need “understanding” or “consciousness” to enact goals and pose a risk. Searle doesn’t deny that computers can simulate minds (the Chinese Room Argument is literally about a setup being functionally/behaviourally equivalent to a real human intelligence), only that such simulations wouldn’t be really “intelligent.”

      • Evan Plommer says

        Your reply says to me that you don’t get Searle’s argument, but I completely disagree with you regardless. I think it’s 100% central. “Equivalent” is a cagey word to use. Computers can have no “goals” at all that aren’t derived from their programming. You’re in good company though: some AI believers seriously argue that household thermostats ‘want’ to regulate the temperature of our homes.

        • David Moss says

          Searle is explicit in the text on the points I raise. The CRA is *intentionally* about a system that can “simulate the behaviour” of a human (but lack real “understanding”). If it couldn’t, the whole point of the CRA would fall flat. I’m happy to use Searle’s phrase “simulate the behaviour” if you don’t like “behaviourally equivalent” instead. AI risk researchers don’t care whether an AI has real “understanding” of symbols, if it’s behaving in the same way.

          It is no objection that “Computers can have no “goals” at all that aren’t derived from their programming.” One core concern about AGI is that it is very difficult to program goals which do not have unintended consequences.

  19. The key issue that is neglected in almost all discussion of AI is the distinction between autonomous entities and systems that are designed to extend the abilities of a human individual.

    The latter approach has many advantages in addressing the problems we face, including accountability and privacy. The technology is well within our current capabilities, but has been neglected in the focus on autonomous machines.

    The key ingredients are a new, simpler, hardware architecture – the Turing machine’s flexibility makes security and privacy impossible – and software based at the deepest level on an unambiguous subset of English, or other human language, so anyone can understand and add to the machine’s instruction set. There are other requirements for increased reliability, such as theorem provers – already well developed technology.

    dai

  20. Pip Stuart says

    This article seems to make so many mistakes that it would be difficult to isolate them.

    For anyone who is genuinely interested in such topics, I highly recommend becoming thoroughly familiar with the following paper (and all related work):

    http://Yudkowsky.Net/singularity/ai-risk

    I think that provides a much better basis for any further discussion of the major salient issues humanity is likely to face as AI research progresses.

    -Pip

  21. Paolo says

    For what it’s worth, I think this article is unexpectedly sub-standard for Quillette. I can’t believe people like Steven Pinker may have fallen for it. As other readers pointed out, the author has clearly not engaged with the relevant literature or with any of the compelling thoughts out there, not even those of Hawking or Musk, whom he mentions.
    A strawman is made of the most childish fears of AI, and a reassurance is therefore offered based on the lessons learnt from evolutionary psychology. For all my love of this amazing discipline, it just doesn’t bear on the question here.
    The talk of ghosts and aliens rests on completely invalid analogies. That great intelligence need not mean great harm is misleading, and it’s amazing how it misses the point that it MAY mean great harm, for obvious reasons. Of course we should very rationally fear that man-made AI may very plausibly inherit imperfect, unwise, non-benign, non-universalist, non-altruistic, sub-optimal morality and goals, precisely because it will be made by humans.
    It seems weird to just need to point this out.

  22. Mark Reaume says

    I think this is the best time to start working on AI safety – before it’s too late. Max Tegmark’s latest book on AI and the foundation that tracks and funds research in this field are good starting points. https://futureoflife.org/

    I have been very skeptical about AI for the last 30 years or so; I’m now convinced that we are smart enough as a species to create a super-intelligent AI but not smart enough to fully protect ourselves from the endless possible downsides. Maybe it will turn out to be a good thing, but let’s at least start thinking more seriously about it.

  23. Thanks for publishing this article. At last, some sanity in this recent spate of AI madness from Elon Musk, Stephen Hawking, Ray Kurzweil, and the legions of techie speculators who overestimate the capabilities of AI. It’s sheer madness propagated by a few who stand to gain from billions invested in the science fiction fantasy. The belief that consciousness can be produced with Turing algorithms in silicon substrates is the most ridiculous drivel of our era.
