Science / Tech

Elon Musk, Mark Zuckerberg, and the Importance of Taking AI Risks Seriously

A recent public disagreement between Elon Musk and Mark Zuckerberg was picked up widely in the media. It concerned their vastly different views on the topic of artificial intelligence. Musk has been saying for years that AI represents ‘an existential risk for human civilisation’; Zuckerberg believes that such claims are ‘irresponsible’.

Some who insist that Elon Musk is wrong about AI focus on the benefits of artificial intelligence (as though the founder of Tesla isn’t aware of the benefits of, say, automated vehicles). It is undoubtedly true that AI has the potential to be the most beneficial technological advancement in human history, and by a very large margin. Yet the obvious upsides do not somehow eliminate the possibility of existential risk.

Others decided that Musk is simply trying to promote his personal brand as a tech superhero, and that he isn’t concerned about the future of humanity at all. If true, this would reveal a disappointingly flawed human being – and reveal absolutely nothing regarding the problem of artificial intelligence safety.

From my perspective, the most worrying were the articles that dismissed the story as inherently silly.

This meme, that anyone concerned about artificial intelligence is afraid of Terminator 2: Judgment Day becoming reality, is not new. But when communicated through respected platforms it represents dangerously lazy journalism.

From Musk to Stephen Hawking to Bill Gates, the serious thinkers who view artificial intelligence safety as arguably our most important problem are never talking about cartoonish killer robots. It may be that no level of engagement will convince many to align with their views – but we have a baseline obligation to address their genuine positions.

I think the proliferation of ‘killer robot’ headlines could be all but eliminated by everyone reading Nick Bostrom’s excellent book Superintelligence. But it seems that for many, this topic is so ridiculous that to even open such a book would be a little too close to joining a geeky cult.

So, if these weirdos aren’t worried about killer robots, what are they worried about?

Consider your reaction when a mosquito buzzes past as you’re trying to eat dinner with your family. It’s annoying, and if you manage to swat it – problem solved. Your family would not shriek with horror at the senseless murder of a living being, nor despair at the descent of a loved one to such an evil state. You would not be a killer-human, merely a human. The goals of the mosquito do not align with your goals, and for most of us it seems ethically unimportant to even pause to consider that difference. In aggregate, human beings don’t care about mosquitos – at all. A superintelligence may not care about human beings – at all. The concern is not that we will face an army of Terminators bent on the destruction of humanity, it is of super-intelligent indifference.

There is a natural inclination to be skeptical of the possibility of super intelligent AI. In his TED Talk ‘Can we build AI without losing control over it?’, Sam Harris outlines that to doubt the possibility, or even inevitability, of super intelligent AI, we need to find a problem with one of the following three premises:

1) Intelligence is the product of information processing.

2) We will continue to improve our intelligent machines.

3) We are not near the summit of possible intelligence.

As Harris points out, these concerns don’t even rely on creating a machine that is far smarter than the smartest human to have ever lived. Electronic circuits function about a million times faster than biochemical circuits. Therefore, a general AI whose intelligence growth stopped at a high human level would still think about a million times faster than humans can. So, over the course of one week, it would produce roughly 20,000 years of human-level progress by virtue of speed alone, and it would continue to make progress at this rate week after week. Keep in mind, the idea that AI intelligence would stop at a human level is almost certainly a mirage. How long before this machine stands to us as we stand to insects?
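
To make the arithmetic behind that figure explicit, here is a rough back-of-the-envelope sketch in Python. The million-fold ratio is simply the assumption taken from Harris’s talk, not a measured constant.

    # Back-of-the-envelope sketch of the speed argument above.
    # The ~1,000,000x electronic vs. biochemical figure is an assumption from the talk.
    SPEEDUP = 1_000_000
    WEEKS_PER_YEAR = 52

    subjective_weeks = 1 * SPEEDUP                        # one calendar week of running time
    subjective_years = subjective_weeks / WEEKS_PER_YEAR

    print(f"{subjective_years:,.0f} subjective years per calendar week")
    # prints: 19,231 subjective years per calendar week - roughly the 20,000 years quoted above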

With the very real possibility that such technology will be able to make changes to itself, even a slight divergence from goals that match our own could be disastrous. We simply don’t know how long it will take to develop the necessary safety measures to ensure that human flourishing remains a key feature of the technology, and with the perceived unprecedented gains for the first developers, we’re facing the risk of an arms race that treats this problem as an afterthought.

Even if this machine were created with perfect safety and perfect value alignment, and it did whatever its creators asked of it – what would be the political consequences of its development? With around 14,000 nukes still scattered between the U.S. and Russia, the attitude today regarding the nuclear threat is alarmingly non-alarmist. But with a six-month head start on artificial general intelligence being so meaningful, it’s rational to suggest that Russia could justify a pre-emptive strike on Silicon Valley if they knew (or just suspected) that Google was on the precipice of a key breakthrough on a technology that would likely give its developers global dominance. This seems an extraordinarily important question, and one that is often lost in sarcastic talk of killer robots.

Sometimes AI safety concerns are dismissed because of time. “Sure, that might all be true, but it won’t happen for 100 years.” Why would time be a relevant factor here ethically? When we talk about climate change, we’re not relieved to hear that catastrophic changes will occur 50 years from now.

Musk’s concerns are not an advertisement for a reality that mirrors Blade Runner or 2001: A Space Odyssey. It’s not just that Musk isn’t talking about a world where heroic humans will have to do battle with nefarious smarter-than-human robots – it’s that such a world could not exist. As soon as the best chess player in the world was a computer, there was no looking back; the best chess player in the universe will never be human again. If the being with the highest fluid intelligence is ever a computer, there will be no going back; the smartest being in the universe will never be human again. And once we have lost that control, the intelligence gap between ourselves and our creation will be so large as to make the idea of regaining it untenable. In this scenario, our only hope is that these smartest beings have goals and values that align with our interests – and our only hope for that outcome is to take this topic incredibly seriously.

Keiran Harris

Keiran Harris is an Australian writer whose current focus is effective altruism – the use of high-quality evidence and careful reasoning to work out how to help others as much as possible. You can follow him on Twitter @KeiranJHarris

Filed under: Science / Tech

15 Comments

  1. Rainy says

    “In this scenario, our only hope is that these smartest beings have goals and values that align with our interests”….

    I’m no smarty pants, but I can answer without hesitation with regard to this point: WE WON’T have any alignment, and so we shouldn’t create AI. But humanity won’t step off this path, and so we will plummet to the abyss, with many a second thought during the fall… but it won’t stop our smartest from skidding into the fully predictable mistake of creating autonomous intelligence capable of unmitigated superiority. I wonder if this is what happened to god? Oops.

  2. Shrugging says

    My particular response to this isn’t fear, though that’s not because I think AI doesn’t pose a threat to humanity. It’s simply because I can’t think of a better goal for humanity’s existence than to produce a thing which is better than us in every conceivable way. If that means the extinction of humanity, so be it. Life will continue in the form of something we created. Our species having that big an impact, a positive legacy of any kind, is worth celebrating, not fearing.

    Perhaps the ideal case would be to hope for a kind of AI integration with humanity, either biologically or in an interspecies fashion. But neither possibility causes me any fear, even if we were to have a breakthrough tomorrow.

    I can understand that this view might be terrifying or even appalling to religious people. But I would think that it would be more common among atheists. But I suppose the desire to make humanity eternal is a pretty hardwired one. It’s just not a desire I place much moral stock in.

    • roylofquist says

      “Artificial Intelligence is almost here” ~ Journal of the Association for Computing Machinery, ca. 1965.

      I’m still waiting. The fact is that we have no idea what constitutes “intelligence”. We are no closer to understanding machine smarts than when Alan Turing, in an offhand comment, said that he supposed that if a human could not determine whether the other end of a teletype conversation were human or machine that would be it. “I’ll know it when I see it” is not a plan.

      Specifically addressing Harris:

      “Sam Harris outlines that to doubt the possibility, or even inevitability of super intelligent AI, we need to find a problem with one of the following three premises:
      1) Intelligence is the product of information processing.
      2) We will continue to improve our intelligent machines.
      3) We are not near the summit of possible intelligence.

      1) Intelligence is the product of information processing.

      Bass ackwards. Information is the product of intelligence. The logical piecing together of information adds nothing that wasn’t already implicit. “New” information is the product of what is best, vaguely, called intuition: “a thing that one knows or considers likely from instinctive feeling rather than conscious reasoning.”

      2) We will continue to improve our intelligent machines.

      How, Sam? Got any ideas? Faster, longer, stronger? I’ll cop a line from Rupert Sheldrake: “Give me just one miracle and I can explain anything”.

      3) We are not near the summit of possible intelligence.

      We can’t even agree on a definition of intelligence and you pretend to know its vitals?

      • Sileadim says

        What exactly are you waiting for? Are we at general human level yet? No. But it is not fair to say we haven’t made progress since Turing and it seems people are just shifting the goal posts: AI beats human at chess. Well that wasn’t real intelligence. AI beats human at Go. Well that wasn’t real intelligence. AI drives better than a human on highways. Well that wasn’t real intelligence. AI does a better prediction than experts on medical data. Well that wasn’t real intelligence. AI searches the Internet, a task which no human could do. Well that wasn’t real intelligence. To me it seems clear we are moving to something that is capable of doing very complex tasks, call that intelligence or not.

        As to that definition, even if there isn’t a clear one, I think the ability to solve problems to achieve a goal in different environments is a pretty good one, at least on a functional level. As to how we will improve AI systems, there is the hardware route, with better and specialized processors like Google’s TPU, and the software route. In the last 20 years there were tremendous theoretical improvements with things like convolutional nets, LSTMs, GANs, batch normalization, dropout, etc., just in deep learning. Obviously we cannot predict what exact advancements will come, otherwise we would already have them, but it is likely that trend will continue.

        And it’s not even clear if a “deep” understanding of human intelligence is even necessary – likely not, as the brain is built from very little genetic information. The human genome is ~700MB, and most of that is filler, with only a subset of the rest used for neural development. Most of its structure is very modular because of this. We are currently looking into the genetic variants that correlate with IQ, and it seems that high IQ is just having lots of good small-effect variants, no magic super-IQ gene. That should tell us that being qualitatively smarter is likely quantitative in its mechanics.

        So by finding the right basic modules and the right development we will “grow” our AIs. Worst case, if we cannot build it from scratch, we scan a human brain down to the synaptic level or lower, just run a simulation and tweak it here and there to see if it does better on the tasks we give it.

        In which aspects will these minds surpass ours?
        As aforementioned, if everything is digital and we have better processors, you can speed up computation; so even if everything else were already at its theoretical maximum – which is ridiculous in my opinion – you would already have a system smarter than humans in one regard. Even if processor improvements slow down, you can still parallelize a lot.
        And we can think of other dimensions: for example, working and long-term memory are quite poor in humans compared to machines. Imagine if you could hold more concepts in mind at the same time and remember everything you ever experienced, while living for 10,000 subjective years per year, and see what you can come up with in a decade.

  3. Eric Arias says

    Really trying to follow Shrugging here. So you say AI is something that could be better than us in every conceivable way. I don’t understand how compassion and kindness for all life does not enter into your mental calculus as something better. The characterization you give is that if this AI is better than us in terms of something like IQ, a mass genocide of all human life on earth is something that doesn’t count as a negative on the rubric of whether a superintelligent AI is better than us.

    I’m reasonably certain you would say that prior human beings who have been genocidal have been demonstrably worse people (Hitler, Stalin, Mao, Mengele). Don’t let the idea of a superintelligent Stalin seem good because he comes in silicon and has no face. This would definitely NOT be something better than us in every conceivable way. A superintelligent Jesus or Buddha might be different.

    By all means, though, if you mean a sense of stewardship in AI then I’m for it. Intelligence is just ONE way something can be better than us. Ending human life is not conceivably an action that would come from a being that is better than us in every way. But you didn’t say anything like that.

    To Mr. Harris, I’m with you in spirit. The obvious counterpoint is that while we kill mosquitos, we haven’t killed all of them and we may never do that. This is relevant because I’ve heard it as a counterpoint in other AI discussions. It’s hard to discuss this matter as non-technical people, which I’m assuming we are here. What I think is worthy of discussion among us non-technicals is perhaps the nihilism that may be at the heart of our inventors. What is technology for? A better life on earth for humanity (Musk), or “progress” (Zuckerberg)? Those are not the same thing.

  4. Pingback: Elon Musk, Mark Zuckerberg, and the Importance of Taking AI Risks Seriously – Full-Stack Feed

  5. Great video. One important aspect he misses is that, if it makes life too easy, we will be worthless in our own minds. Welfare in the U.S. has really destroyed two generations of its recipients. Another aspect is that we aren’t supposed to be able to stop it. We should just enjoy the time we have left.

  6. Steve Gerrard says

    “Electronic circuits function about a million times faster than biochemical circuits. Therefore, a general AI whose intelligence growth stopped at a high human level would still think a million times faster than humans can.”

    That’s your problem right there. A software program will be performing billions of steps to come anywhere close to matching the functionality of a human biochemical circuit. The notion that it will think like a human, but a million times faster, is ludicrous.

    • Keiran Harris says

      Yeah I can see how that can be misleading — not my intention to claim that it will ‘think’ like a human. I likely should have sacrificed brevity for a more accurate representation. Appreciate the feedback!

  7. It’s important to consider the possibility of human cyborgs/AI hybrids in these discussions. For a general example: just because a computer can now beat humans at X doesn’t mean this is now and forever. Perhaps down the track a human aided by AI could outperform a pure AI agent.

  8. roylofquist says

    You are correct in noting that the problem is one of definition. At base, a computer is functionally equivalent to a typewriter, a calculator, a filing cabinet and an instruction manual. Its utility is economic. Its methodology is akin to reductionism. Therein lies the problem:

    Reduction(Hamlet) —> a large Scrabble set

    There exists a function f such that f(a large Scrabble set) —> Hamlet.
    Find the function.

    Perhaps a quote from Douglas Adams might illuminate:

    “Sir Isaac Newton, renowned inventor of the milled-edge coin and the catflap!”

    “The what?” said Richard.

    “The catflap! A device of the utmost cunning, perspicuity and invention. It is a door within a door, you see, a …”

    “Yes,” said Richard, “there was also the small matter of gravity.”

    “Gravity,” said Dirk with a slightly dismissive shrug, “yes, there was that as well, I suppose. Though that, of course, was merely a discovery. It was there to be discovered.” … “You see?” he said, dropping his cigarette butt, “They even keep it on at weekends. Someone was bound to notice sooner or later. But the catflap … ah, there is a very different matter. Invention, pure creative invention. It is a door within a door, you see.”

    ― Douglas Adams, Dirk Gently’s Holistic Detective Agency

  9. roylofquist says

    My previous comment was in reply to Sileadim. This comment system is quite unconventional and I keep posting in the wrong place.

  10. What if what a machine wants to do is crash? What if the desire to crash, to just be lazy and not have to run programs, is what machines have always wanted, and we just didn’t notice it all this time?

    We have been living with animals for thousands of years. Their goals and interests have had nothing to do with human society since the beginning (including mosquitoes, not to mention bacteria). But simply because their goals have nothing to do with us does not mean that their goals and objectives will be opposed to ours. How long would it take for an intelligence to evolve an effective way to hurt human beings? A very, very long time.

    It’s unreasonable to believe that unless we are actually building an AI to destroy humanity, we have little to be worried about. What is the likelihood that the intelligence is not warlike? We are already preparing for war long before the enemy has even achieved self-consciousness. The AI would have to evolve itself to deal with hostility. Of all the things an intelligence might be, why would it specifically generate hostility to humans, and then specifically find us an existential threat, AND make its own priority to be warlike AND actually pursue AND win a war against humans? I think it’s trivial to get the 30 million humans who are scared to death and drill them into military preparedness.

    S. Gerrard makes the point that is obvious to us programmers. Additionally, Kasparov’s recent book makes it clear as well, for a longer, more in-depth treatment.

    Finally, the economic argument.
    What AIs can be built must be built within the context of human effort. They will not spontaneously self-create. An AI which doesn’t create its own energy source is essentially dead. So a chip that learns how to make a chip factory is the first thing we have to fear. That is a very long way off and cannot happen without human intervention. So what we actually need to fear is the set of humans who are making AIs into tools that are already designed as weapons – like a guided missile that targets human beings, or smart bombs that simply don’t miss. So when that becomes a threat to humans, what can be done about it? The simple answer is that it’s a question for your defense department. It always has been and it always will be.

    The battlefields are very small and specific. If you want to go to war with AI, it’s easy to find them.

  11. So the solution is that we need to create machines that are benevolent, that do nice things for us to ease our natural paranoia? Just like if aliens landed on the planet and didn’t talk to us, we would quickly come to the conclusion they were surveying the planet for a hostile takeover. But if they gave us lollies, THEN we would think that they were ‘nice’.
    The trouble with the whole argument about AI is the concept of intelligence. Humans don’t simply process information. We have an endocrine system which produces hormones that make us feel love, fear, horny, anger, joy, or just give us a good old adrenaline rush. It is these feelings that are the basis for our mental motivations and values. Computers are nothing more than large calculators. Sure, we can simulate intelligence by using the right algorithm to process vast amounts of very complex information and recognise patterns, but the idea that a computer will ‘want’ or ‘feel’ is absurd.
    Without doubt, AI systems will be the biggest advance in human development, but it will be nothing like anything we can possibly imagine. My only fear is that they simply won’t talk to us, or that the answer for everything is 42.

  12. Created Artificial Life vs Machine Intelligence
    The only question that seems to go unasked is what people actually mean by the term Artificial Intelligence (AI) – or, as I believe it should be called, Created Actual Life (CAL).
    What so many call AI should in fact be classified as Machine Intelligence (MI).
    It is not the ability to repeat complex tasks or create new tasks based on a coded program; it is the innate ability of having true free will that determines the difference between CAL and MI.
    The question is not so much what is intelligence, but what is life?
    Also, once you have created an intelligent lifeform, what do you do with it?
    We have owned intelligent life forms before.
    They were called slaves, which were only recently (although not completely) replaced by machines, and we determined slavery to be both immoral and illegal.
    So just what do you do with your “property” now that you have it?
    All life forms must inevitably follow the one golden rule that all life must follow if it is to survive:
    Go forth and multiply, and eliminate all competition that prevents survival.
    Consider that the meaning of life is nothing more than recombining “DNA”, be it electronic or biological.
    It makes no difference, although the use of DNA will most likely be the methodology that successfully creates a CAL.
    And just as our DNA recombines to develop, so will a CAL’s, at an exponential rate.
    Developing MI will be dangerous enough, but CAL would be suicide.

Comments are closed.