
The Human Skills AI Can’t Replace

Ever since the release of James Cameron’s 1984 blockbuster, The Terminator, Schwarzenegger and Skynet have served as cultural touchstones—symbols of an economic and existential threat. Now, the long-awaited proliferation of Artificial Intelligence (AI) seems finally to have arrived. And, along with the breakthroughs, there has been a parallel resurgence in AI alarmism. Renowned historian Yuval Noah Harari is speculating about algorithmic gods. Andrew Yang has predicated an entire presidential campaign on the threat of an AI-fueled economic catastrophe. Elon Musk appears earnestly concerned about the dystopian future of The Terminator being realized.

Whenever a technology achieves enough momentum to be hailed as “disruptive,” someone, somewhere, will argue or be interviewed about how that technology will affect their industry, their future. Given enough hype, the applications are always said to be endless. One of the long-acknowledged faculties of the human mind is an ability to project things into infinity—Edmund Burke and Immanuel Kant contended that this was fundamental to our sense of the sublime.1 (They also asserted that this same faculty for the sublime could be a source of fear and terror.) Such speculation fomented the booms and busts of blockchain, nanotechnology, and the dot-com era. After all, “software is eating the world.”2

These innovations do represent paradigm shifts. They do displace jobs. And, sometimes, they are dangerous. An infinite number of applications, though, is not the same as unlimited applicability. Take, for example, websites. There may be a use for websites far into the future, and there is no end to the number of websites that people could possibly create. However, as entrepreneurs and technologists have now realized, websites do not obviate the need for brick-and-mortar locations in all cases. Although there are infinite conceivable websites, websites are not, by extension, the solution to every conceivable problem.

This circumscription is often ignored when we generalize about how a technology will advance and spread. The moment we leave such distinctions behind is the moment we pass from rational inference about the future into purely imaginative speculation. And, unsurprisingly, a common symptom of this shift is that predictions fail to account for the mechanics of the underlying technology. Such a fate has befallen the conversation around AI.

Recontextualizing AI

Within the short history of computer science, Artificial Intelligence is hardly new. Consensus places the birth of modern AI research in the 1950s, when giants like Marvin Minsky and Herbert Simon began to emerge. (Within just 20 years, the two would attain the field’s highest honor, the Turing Award.) This is perhaps because AI is intimately bound up with the origins of computer science itself: logic. A foundational triumph of computer science is the use of circuits to represent and solve logic problems. And the history of AI can be broadly periodized according to which form of logical inference its programs employ: inductive or deductive.

Early approaches to AI were largely deductive. While the boundaries of induction and deduction are still debated by philosophers, an uncontroversial characterization would be that deduction is a kind of “top-down” reasoning, whereas induction is “bottom-up.” Typical cases of deductive reasoning occur when we have an established rule and determine if a particular case falls under that rule. For example: everyone born in the US is a US citizen; John was born in the US—therefore, John is a US citizen.

This kind of reasoning lends itself to creating so-called “expert systems.” We can write a computer program that incorporates all kinds of well-established rules, consolidating the knowledge of many authoritative sources. Then, we can rely upon that program to evaluate input using those rules quickly and unerringly. Computing machines are better at this than humans, who may forget rules and work through convoluted rule systems rather slowly.
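
To make the idea concrete, here is a minimal sketch of a deductive, rule-based check in the spirit of an expert system, using the citizenship example above. It is illustrative only: the rule, the data, and the function names are invented for this sketch, not drawn from any real system.

```python
# A minimal sketch of deductive, rule-based inference ("expert system" style).
# The rule and the facts are both supplied up front; the program only checks
# whether a particular case falls under the established rule.

def is_us_citizen(person: dict) -> bool:
    # Rule: everyone born in the US is a US citizen.
    return person.get("birthplace") == "US"

john = {"name": "John", "birthplace": "US"}
print(is_us_citizen(john))  # True: the case falls under the rule
```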

The deductive approach was sufficient to conquer the apogee of human reason when world chess champion Garry Kasparov encountered Deep Blue in 1996.3 But even casual observers have likely noticed that AI seems to have turned a corner in recent years. Despite the landmark achievement of Deep Blue, decades would pass before facial recognition was commonplace and autonomous vehicles seemed imminent. Between these eras, AI transitioned to inductive strategies.

The new shoptalk is all about machine learning and neural networks. If you’ve ever wondered how these terms correspond, the hierarchy works like this: machine learning is a subfield of AI—since, like the aforementioned expert systems, not all AI progressively learns—and neural networks are just one technique for enabling computing machines to learn. The way that these programs are trained is paradigmatic of inductive reasoning.

Recall that induction is a “bottom-up” method. Instead of starting with a rule and deciding if this or that case falls under it, induction works by using many examples to infer a rule. To illustrate: Drake’s first album was certified platinum; his following album was certified platinum, and the next three albums after that; therefore, his next album will probably be certified platinum. Intuitively, one can see how the strength of induction is directly related to the number of samples we have to support our inferred rule. The more samples that we have, and the more representative they are, the more nuanced and predictive our induced rules will often be.
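
As a toy illustration, with invented data mirroring the album example, inducing a rule can be as simple as generalizing from the relative frequency of past samples:

```python
# A toy illustration of inductive inference: infer a rule from many samples.
# The data below are invented for the album example; they are not real figures.

past_albums = ["platinum", "platinum", "platinum", "platinum", "platinum"]

# Induced rule: estimate the chance that the next album is certified platinum
# from the relative frequency observed in the samples we already have.
p_platinum = past_albums.count("platinum") / len(past_albums)
print(f"Predicted probability the next album goes platinum: {p_platinum:.2f}")

# More samples, and more representative ones, would make the induced rule
# stronger and more nuanced.
```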

Arthur Samuel, an IBM researcher, coined the term “machine learning” all the way back in 1959. But, as discussed, deductive approaches to AI would long dominate. Why? Contemporaneous with the inductive turn in AI has been the rise of big data; this is no coincidence. Simply put, computer hardware needed decades of further development before we could generate and store enough sample data for machine learning to become viable. Before then, there was no practical way to create the kind of training sets a computer would need to induce rules sophisticated enough to deal with the complexity of real-world environments. Nor were there computers powerful enough to process all that data.

Today, an inflection point has been reached: most major tech companies can afford the hardware to train highly competent programs, and big data supplies sufficient sample sets for a growing number of applications. Even more importantly, the daunting volume of data such programs consume, often numbering millions of data points, is far more than any sane person would be willing to study. In other words, computers now induce better than we do because they will look at more data, for far longer, to infer rules—computers are indefatigable. Cue hysteria.

When computers can integrate and use the cumulative sum of subject matter expertise, and find patterns among data that would take a human being years to review, what is left for us? This becomes particularly eerie in the context of media and personal relations. Recent studies have shown that computers can know us better than our friends or family do. Experts can no longer reliably distinguish between manmade and computer-generated art. One can even merge the two and imagine a world in which most of our art and music has been personalized by an algorithm to meet our individual preferences. Indeed, AI maximalists seem to argue that something like this is inevitable.

Here, many humanists will become skeptical and object that computers do not seem to be “truly” creative. Aren’t computers confined to the information that we give them? Thus, do they ever create something that’s really new? AI maximalists swiftly shoot this down, explaining how machine learning programs now write their own rules. For reinforcement, they will cite computational theories of mind and milestones of computational creativity, ultimately charging their detractors with piteous anthropocentric bias.

Abductive Reasoning 

Charles Sanders Peirce (pronounced “purse”) was an obscure, albeit legendary, American philosopher. Those in the know describe him with superlatives. Bertrand Russell, a Nobel laureate regarded by many as the most influential philosopher of the twentieth century, asserted that Peirce was “certainly the greatest American thinker ever.”4

Peirce was the consummate intellectual, socially awkward, and incessantly struggling with his personal and financial affairs. His failure to publish much of his work has limited his recognition even today. Nevertheless, eminent men revered, befriended, and supported him for much of his life. Among his myriad achievements, Peirce articulated a form of logical inference which he called “abduction.” While there are hints of abduction in the works of other great thinkers, Peirce was undoubtedly the first to fully describe this method of logical inference and place it on a par with induction and deduction. In so doing, he grafted a whole new branch onto classical logic for the first time since Aristotle laid down its foundations more than two millennia ago.

Like other forms of inference, we use abductive reasoning in everyday thought. Unlike induction or deduction, where we start with cases to draw conclusions about a rule, or vice versa, with abduction we generate a hypothesis to explain the relationship between a case and a rule. More concisely, in abductive reasoning, we make an educated guess. Here is a timely example: this is a very partisan news story; that media outlet I dislike is very partisan; this news story is probably from that media outlet!

There are a few remarkable things about abductive reasoning. Significant among them is that abductive reasoning can be erroneous. (Although this is true of induction as well; notice how Siri still botches your voice-to-text input.) The most remarkable aspect, however, is one Peirce asserted plainly: abduction “is the only logical operation which introduces any new idea.”5 Remember, with deduction we already began with a rule and merely decided whether our case qualified—we are not generating either piece for ourselves. Inductive reasoning merely assumes that information we already possess will be predictive into the future. But in the example of abduction, we adapt information gleaned elsewhere to infer a conclusion about our current problem. In the above example, we recall what we know about some media outlet to explain why the story is partisan. This is the new idea we are introducing to the case.
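
To see what even a toy mechanization of this looks like, here is a sketch, in Python, of abduction treated as inference to the best explanation. The candidate hypotheses and their plausibility weights are invented for illustration; note that the program can only rank the explanations, and use the weights, that we hand it.

```python
# A toy version of abduction as "inference to the best explanation."
# The hypotheses and plausibility weights below are invented for illustration.

observation = "this news story is very partisan"

candidate_explanations = {
    "it comes from that partisan outlet I dislike": 0.6,
    "it was planted by a partisan think tank":      0.3,
    "Mercury is in retrograde":                     0.0,  # recognizably invalid
}

best = max(candidate_explanations, key=candidate_explanations.get)
print(f"Observation: {observation}")
print(f"Best available explanation: {best}")

# The hard step, generating plausible hypotheses from an open-ended space and
# weighting them, is precisely what the machine cannot supply for itself.
```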

It is very difficult for a computer to perform this kind of task well. Humans, on the other hand, are effortlessly proficient at it. Part of what makes abduction challenging is that we have to infer some likely hypotheses from a truly infinite set of explanations. Partisan media outlets are a dime a dozen. Not to mention that the story could have been funded by any of hundreds of partisan think tanks, political campaigns, corporate lobbyists, or activist organizations. The news story could originate from foreign election interference, or simply be this week’s blog post from that friend on Facebook. Best of all, the news story could be partisan because Mercury is in retrograde.

A Peircean Future

No, really, for our purposes, that is the best explanation, as it illustrates two crucial points. Firstly, meme culture has taught us that Mercury being in retrograde is to blame for most problems. Such memes are funny precisely because they are inane; the position of Mercury has nothing to do with our everyday vexations. (Sorry, not sorry, astrology fans.) The point here, though, is that we can immediately recognize that this is not a valid explanation.

A computer, on the other hand, cannot distinguish between good and bad explanations without a value system that we supply.6 A computer may be able to teach itself how to play chess using a machine learning algorithm, but the computer will only be able to learn and advance if we first inform it that the goal of chess is to achieve checkmate. Otherwise, as the computer tries random combinations of chess moves, it will have no way to discriminate between good and bad strategies.
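
A minimal sketch of that dependence, assuming hypothetical placeholder functions (legal_moves and apply_move) rather than any real chess library: without the reward we define below, random self-play produces no signal by which one strategy could be preferred over another.

```python
import random

# Sketch of why a self-teaching game player still needs a value system from us.
# `legal_moves` and `apply_move` are hypothetical placeholders standing in for
# a real chess engine; they are not a real API.

def reward(state: dict) -> float:
    # We must tell the machine what counts as success: checkmate is the goal.
    # Without this definition, the score returned below is meaningless, and no
    # strategy can be judged better than any other.
    return 1.0 if state.get("checkmate") else 0.0

def play_random_game(start_state, legal_moves, apply_move, max_plies=200):
    """Play random legal moves and return the externally defined reward."""
    state = start_state
    for _ in range(max_plies):
        moves = legal_moves(state)
        if not moves:
            break
        state = apply_move(state, random.choice(moves))
    return reward(state)  # the only signal that lets learning discriminate
```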

This is significant because, when we are faced with complex problems, part of the way we solve them is by tinkering. We play, trying several approaches, keeping our own value system fluid as we search for potential solutions. Specifically, we generate hypotheses. Where a computer might be stuck in an endless loop, iterating over infinite explanations, we use our value systems to quickly infer which explanations are both valid and likely. Peirce knew that abductive reasoning was central to how we tackle novel problems; in particular, he thought it was how scientists discover things. They observe unexpected phenomena and generate hypotheses that would explain why those phenomena occur.

This brings us to the second crucial point about the retrograde of Mercury. The planet does not actually move backwards; this is an illusion due to our relative orbits (just as a skydiver appears to be carried upwards when he opens his parachute, relative to other skydivers still in freefall). But Mercury does move strangely: astronomers would say that its orbit precesses anomalously, in a way that Newton’s laws cannot fully explain.

In the nineteenth century, Urbain Le Verrier, a French mathematician, induced that Mercury’s odd behavior was the consequence of a hitherto undiscovered planet, which he named Vulcan. He had good reason to infer this, as the same idea had led him to discover Neptune. Le Verrier was wrong, of course—there is no planet Vulcan. We can hardly blame him, though. Le Verrier would never have guessed that a bizarre theory—in which space and time form a continuum warped by mass—just so happens to explain the orbit of Mercury perfectly. The originator of that theory, Albert Einstein, would be born two years after Le Verrier’s death.

This is the kind of creative work that is impossible without abductive reasoning. Today’s inductive AI can only solve problems in the narrow problem space we predefine. There must be a finite number of solutions for it to parse through, not an infinite set that requires a value system to identify the most plausible options. The speed and infallibility of computing machines provide no advantage for unprecedented problems that call for new hypotheses, and where errors found by tinkering are often insightful.

The prognosis for the future, then, is not apocalyptic, nor does it imply that most jobs outside technology are doomed. Instead, we might expect growth in sectors that rely upon abductive reasoning, such as research, design, and the creative arts. Delimiting the applications of AI, grounding the conversation in the historical and mechanical context of the technology, is more likely to reveal the future than irrational exuberance or collective anxiety. For the foreseeable future, man will innovate, machine will toil, and The Terminator will remain science fiction.

 

William J. Littlefield II is a philosopher and professional software engineer. He received an MA in World Literature and Philosophy from Case Western Reserve University. You can follow him on Twitter @WJLittlefield2

References and Notes:

1 This topic was explored by Edmund Burke in A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful, and by Immanuel Kant in his Critique of Judgment.
2 Marc Andreessen, “Why Software Is Eating the World,” The Wall Street Journal, 20 August 2011.

3 To give a grandmaster his due, Deep Blue defeated Garry Kasparov in their first game in 1996, but Kasparov won the match. Deep Blue would not decisively win until their rematch in 1997.
4 Bertrand Russell, Wisdom of the West, Macdonald, 1959, p. 276.
5 Charles Sanders Peirce, Collected Papers of Charles S. Peirce, Volume V, Section 172, Harvard University Press, 1935.
6 Philosophers might say that computers lack the innate capacity for “axiology.”

Comments

  1. This abductive reasoning sounds very similar to some of the thoughts proposed by Dr Iain McGilchrist in interview. Fascinating article, very informative.

  2. Hate to be a bugbear, but there’s a real issue, and that’s mistaking humankind’s potential for the potential of each human.

    That will create some kind of crisis as AI arrives, where those who are literally unemployable (because they personally won’t learn, or refused to learn when young, or simply lacked the ability and now can’t) cannot find employment that isn’t handled better by AI.

  3. Indeed. Some will want to block progress to protect those who cannot compete in a modern economy that requires intellectual skills. Others will want to take any profits from the new enterprise to pay the unskilled people for their lack of skills because “it’s the right thing to do.”

    AI is here and will continue to improve; prepare your children for the future, rather than teaching them they’re victims, to be afraid of strangers/climate change/capitalism/liberty/etc, to feel harmed by ideas they don’t agree with.

  4. We may look back with an ephemeral smile as the group mind we reside with allows computational thoughts far beyond our current capabilities.

  5. Abductive reasoning is actually widely used in computing and has been since at least the 90s (it’s been around long enough to even have a Wikipedia page). It’s still pretty limited, but that’s mostly just a function of the limitations of current computing, and the fact that most programs based on it are designed for very specific applications.

  6. That is indeed a problem that needs to be addressed but every effort should be made to encourage as many people as possible to find a way to fit into the new economic reality because a population that is mostly unnecessary to those who hold the purse strings seems like a recipe for disaster. If AI takes over our role as the consumers of goods it produces we are all in deep trouble…

  7. So the argument is that humans’ skill at abductive reasoning will save us from advances in AI? Lousy argument. First, a variety of variables go into reaching sensible conclusions. Besides induction, deduction, and abduction, there are the five physical senses, emotional states, expertise, experience, language (knowing German or math will affect your thinking), your perception of time etc. I could go into this at great length and detail, but the main point I want to make is that abductive reasoning is not only often erroneous but disastrous. To be honest, most decisions we make are from abducing. Basically, in the real world, our knowledge is very limited, so we usually go with our best guess. Four is the only answer to two plus two. But what is the answer to who should you mate with, what job is the best fit for you, what would the best political system be? Our abductive reasoning is clouded. Communism comes from abductive reasoning, as does political correctness, witch hunts etc. The author may mock people who blame things on “Mercury being in retrograde” but this is a typical kind of conclusion for most people. Other people blame Trump, toxic masculinity, El Nino, the “ether”. We are not “effortlessly proficient” at abductive reasoning. Highly intelligent people often fail at abductive reasoning. So the rest of us are usually even worse at it. Also, I don’t think it’s impossible to make AI that can incorporate abductive reasoning. Nor is it the only way to reach a new or novel idea. Moneyball theory was primarily based on number-crunching and inductive reasoning.

  8. I wonder how many women will read this article or bother to comment on it.

  9. Abductive reasoning sounds suspiciously like finding the simplest predictive model for a data set.

    Neural nets (NNs), for example, typically end up with a lot of effectively inactive nodes, and can be over-trained to a particular input set.

    The sweet spot of neural net training is, indeed, somewhat abductive in that it’s the point at which an NN makes the best predictions for data not included in the input set, rather than the best fit for all of the input set (over-training).

    I wonder if abductive reasoning could be approximated by deliberate pruning of a NN down to the smallest ‘core’ set of nodes that are still effective predictively.

    On the other hand, possibly not, since (human) intuitive models likely involve analogies and models from outside of the specific data set under consideration. This is more like having a bank of NNs (previous theories) and working out which of those are most applicable to the new problem set, particularly in combination. In other words, the problem is substantially one of recognizing the application space.

    Computers may be able to do this one day. Especially if the purely mechanistic biological models of the brain turn out to be accurate - if so, then humans are just big NNs with a bunch of biological baggage (the body) and artificial replicas should be equally effective, once we scale appropriately.

  10. Linguistics is sometimes divided into syntax, semantics, and pragmatics - syntax being about grammar, semantics being about meaning, and pragmatics being about use. In other words, all real-life linguistic phenomena fall under the heading of “pragmatics”.

    The concept of “abductive reasoning” is similarly vague and vast. It’s everything from the art of guessing cleverly, to hypothesis generation in science, to almost all forms of real-life decision making.

    We already know that neural nets can exhibit superhuman powers of classification and imitation in narrow tasks like spotting criminals, analyzing customer habits, or forgery. It might be argued that the “abductive” tasks are hard to computerize, simply because they are broad rather than narrow - a large range of data and concepts are potentially salient.

    One philosophy for dealing with this problem was that of the well-known expert system Cyc. How to achieve “general intelligence”? Give the AI general knowledge, common sense. And how to do that? The Cyc answer is simply to enter, by hand, hundreds of thousands of facts about basic concepts and everyday life. The philosophy is that there is no more economical way to do it; there is a big chunk of irreducible complexity that has to be assimilated, if an AI is to have even the general competency of a human five-year-old.

    I’m not sure how Cyc is doing. The Cyc project still exists, and apparently it has customers who pay for its insights… But this is one way that the challenge of “abductive” thought may be surmounted. Human breadth of ability and achievement, rests upon a breadth of general knowledge. Give a computer system that general knowledge, along with narrow superhuman capabilities that can be turned on any subject that becomes interesting or important, and you’ll have something that can put the creative class out of work too.

  11. It’s fascinating that the people who think computers will “take over” are people who embrace some flavor of misanthropy, and have never worked with computers in a work environment. The best engineer I ever met not only built amazing machines, but did time and cost estimates as part of his job. One night I was working late and wandered over to his department on my break; much to my surprise, he was still there with a couple of technicians. “You should build a robot or two and save on overtime,” I joked. “It would take a long time,” he replied. He explained that to build a robot, or series of robots, with the individual skills of those technicians would cost around $200,000,000! “That’s a lot of overtime!” I replied.

  12. #Beaker
    Or, people who think computers are going to take over are lifelong computer programmers who have written AI code for years, are working daily with the current state of AI and see the progression, just a thought …

    Asking someone to build their own robot has nothing to do with whether or not robotics is going to affect people’s jobs. Just because the best engineer you ever met didn’t want to build his own robot doesn’t mean that some giant companies won’t want to build robots and other companies and people might want to buy them.

    I imagine that same guy didn’t build his own car, his own tv or cell phone, so what?

  13. Perhaps you misunderstand: it isn’t that computers can’t be built to perform myriad tasks, but is it cost-effective to do so? A computer might dispatch a technician to perform additional maintenance after a series of unexpected events, but whether it’s worth building a robot to do that is questionable.

    As far as these AI programmers you mention go, can they really see computer learning accelerating in leaps and bounds, or are they just trying to cover their hindquarters? They’re only human after all! I’ve been using software for a very long time, and new and innovative programs that are more than gimmicks, or old programs with ‘feature-itis’ and gouging payment schemes (looking at you, Adobe), are few and far between. I’m sensing a bit of desperation there! As far as gurus like Elon Musk go (a passable engineer and a savvy marketer), he’s only solid gold as long as irrational humans buy his overpriced cars; the innovative cars will come from China, and they will be built by humans.

  14. Labor costs are one of the largest expenses in most businesses. Add in the problems that come from health care, retraining workers, relocating workers, paying pensions, etc. and there are major positives that outweigh the costs of automating tasks. That’s why we have the automation we do and we keep getting more. Soon that automation will include tasks that require “human intelligence” and not just mechanical operations.

    In ’97 a computer first beat the world chess champion, after years and years of training the computer with sets of rules and algorithms that encompassed everything we knew about chess. For a while, in order for the computer to get better, the programmers had to program it better. That was AI 1.0.

    Machine learning is an entirely different way for computers to learn. Once Google and the DeepMind team beat the world Go champion, they took the existing program, showed it the rules of chess, told it what the winning condition was and let it play thousands of games against itself. It took five hours of analyzing the data from playing those games, trying out new strategies, seeing what happened when it used those and back and forth until it got to a level where it could beat a grandmaster.

    There are many tasks that are incredibly difficult currently for computers. But there are a huge number of jobs that aren’t. Almost all middle management jobs that are predominantly about data, forms, checking, analyzing, submitting, reporting–almost all those jobs will go away within the next 20 years. Driving jobs, many factory jobs, and even many “high skill” jobs such as radiologists analyzing x-rays or MRIs, a huge percentage of those will be gone within the next 10-15 years. Sadly, these are not the only examples.

    Part of why people keep thinking this isn’t really happening is because they aren’t thinking about an exponential growth curve–they’re stuck thinking in terms of linear growth. Yes, for the last 50 years growth in AI has been slow in some ways, especially compared to what we thought might happen with technology and the things we saw in sci-fi TV shows. It took a long time for a computer to win at checkers, still longer to win at chess, and we’re nowhere near teleporters, light sabers or Total Recall memory implant vacations. But the data is clear that computers are learning to do more and more new things at an increasing pace. It’s not linear growth and it’s not even the bottom end of the exponential growth curve anymore. We’re much further along.

    Here’s another example from the Google DeepMind project that I love. Once they had the setup where AlphaGo (the name of the program learning to beat the Go champion) was learning and improving greatly (it had started beating top world-ranked players), they kept versions of the program at different points in time, altered certain parameters or goals of the program (aggressiveness, willingness to try new things, etc.), and then let each of those separate AI programs loose, learning on their own and playing against each other. They then had a master program look at the learning efficiency of the different programs to evaluate which strategies and internal parameters led to the most effective learning, creation of new strategies, etc.

    Not only did AlphaGo end up beating the world Go champion, but after a series of games against the top-ranked players, they all said the computer had come up with moves and strategies that no human had ever used before. And, on top of that, the DeepMind program learned about how it learns, so that the next time it has to learn how to do something, it can do that process more efficiently.

    That is exponential growth in a nutshell. It’s not happening everywhere, Cortana still sucks and Bixby is worthless. But Google Maps is light years ahead of where it started out and what it’s capable of doing is amazing.

  15. As others have noted, I don’t think it is justifiable to assume that AI will not be capable of abductive reasoning comparable to humans at some point down the line. Until then, we’re still left with the problem that a huge proportion of jobs could be better done through automation. The careers in research and design that the author identifies as “safe” from AI infringement employ only a minute proportion of people, and competition for those jobs is already steep. There will be jobs in hands-on, on-site occupations like construction indefinitely, but there are a huge number of professions in between that will shrink radically. What is to be done with those people?

    It seems to me we need a free market solution that channels the productivity generated by AI into investment in new research. For instance, meeting the resource needs of a high-tech future is a major challenge. We’ll need more copper in the next 50 years than we’ve used in all of human history, but the progression of geological research is painfully slow. Maybe the radically higher mineral supply required for an AI future, paired with resource scarcity, can open up new jobs in geological research.
