
The Human Skills AI Can't Replace

Today’s inductive AI can only solve problems in the narrow problem space we predefine.


Ever since the release of James Cameron’s 1984 blockbuster, The Terminator, Schwarzenegger and Skynet have served as cultural touchstones—symbols of an economic and existential threat. Now, the long-awaited proliferation of Artificial Intelligence (AI) seems finally to have arrived. And, along with the breakthroughs, there has been a parallel resurgence in AI alarmism. Renowned historian Yuval Noah Harari is speculating about algorithmic gods. Andrew Yang has predicated an entire presidential campaign on an AI-fueled economic catastrophe. Elon Musk appears earnestly concerned about the dystopian future of The Terminator being realized.

Whenever a technology achieves enough momentum to be hailed as “disruptive,” someone, somewhere, will argue or be interviewed about how that technology will affect their industry, their future. Given enough hype, the applications are always said to be endless. One of the long-acknowledged faculties of the human mind is an ability to project things into infinity—Edmund Burke and Immanuel Kant contended that this was fundamental to our sense of the sublime.1 (They also asserted that this same faculty for the sublime could be a source of fear and terror.) Such speculation fomented the booms and busts of blockchain, nanotechnology, and the dot-com era. After all, “software is eating the world.”2

These innovations do represent paradigm shifts. They do displace jobs. And, sometimes, they are dangerous. Infinite applications, though, are not unlimited applications. Take, for example, websites. There may be a use for websites far into the future, and there is no end to the number of websites that people could possibly create. However, as entrepreneurs and technologists have now realized, websites do not obviate the need for brick-and-mortar locations in all cases. Although there are infinite conceivable websites, websites are not, by extension, the solution to every conceivable problem.

This circumscription is often ignored when we generalize about how a technology will advance and spread. The moment we leave such distinctions behind is the moment we depart from rational inference about the future into purely imaginative speculation. And, unsurprisingly, a common symptom of this shift is that predictions fail to account for the mechanics of the underlying technology. Such a fate has befallen the conversation around AI.

Recontextualizing AI

Within the short history of computer science, Artificial Intelligence is hardly new. Consensus places the birth of modern AI research in the 1950s, when giants like Marvin Minsky and Herbert Simon began to emerge. (Within just 20 years, the two would attain the field’s highest honor, the Turing Award.) This is perhaps because AI is intimately bound up with the origins of computer science itself: logic. A foundational triumph of computer science is the use of circuits to represent and solve logic problems. And the history of AI can be broadly periodized based on which form of logical inference computer programs utilize: inductive or deductive.

Early approaches to AI were largely deductive. While the boundaries of induction and deduction are still debated by philosophers, an uncontroversial characterization would be that deduction is a kind of “top-down” reasoning, whereas induction is “bottom-up.” Typical cases of deductive reasoning occur when we have an established rule and determine if a particular case falls under that rule. For example: everyone born in the US is a US citizen; John was born in the US—therefore, John is a US citizen.

This kind of reasoning lends itself to creating so-called “expert systems.” We can write a computer program that incorporates all kinds of well-established rules, consolidating the knowledge of many authoritative sources. Then, we can rely upon that program to evaluate input using those rules quickly and unerringly. Computing machines are better at this than humans, who may forget rules and work through convoluted rule systems rather slowly.
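For the programmatically inclined, the deductive approach can be caricatured in a few lines of Python. This is only an illustrative sketch, not the code of any real expert system; the rule, the data fields, and the example case are all invented, echoing the citizenship example above. The point is simply that the knowledge lives in rules a human author writes down, and the machine merely applies them.

```python
# A toy "expert system," for illustration only: the rules are written in advance
# by people, and the program merely checks whether a case falls under them.
# (The rule names and fields here are invented, not taken from any real system.)

RULES = {
    "us_citizen": lambda person: person.get("born_in") == "US",
}

def deduce(person, rule_name):
    """Apply an established rule to a particular case."""
    return RULES[rule_name](person)

john = {"name": "John", "born_in": "US"}
print(deduce(john, "us_citizen"))  # True: born in the US, therefore a US citizen
```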

The deductive approach was sufficient to conquer the apogee of human reason when world chess champion Garry Kasparov encountered Deep Blue in 1996.3 But even casual observers have likely realized that AI seems to have turned a corner in more recent years. Despite the landmark achievement of Deep Blue, decades would pass before facial recognition was commonplace and autonomous vehicles seemed imminent. Between these eras, AI transitioned to inductive strategies.

The new shoptalk is all about machine learning and neural networks. If you’ve ever wondered how these terms relate, the hierarchy works like this: machine learning is a subfield of AI—since, like the aforementioned expert systems, not all AI progressively learns—and neural networks are just one technique for enabling computing machines to learn. The way these programs are trained is paradigmatic of inductive reasoning.

Recall that induction is a “bottom-up” method. Instead of starting with a rule and deciding if this or that case falls under it, induction works by using many examples to infer a rule. To illustrate: Drake’s first album was certified platinum; his following album was certified platinum, and the next three albums after that; therefore, his next album will probably be certified platinum. Intuitively, one can see how the strength of induction is directly related to the number of samples we have to support our inferred rule. The more samples that we have, and the more representative they are, the more nuanced and predictive our induced rules will often be.
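The same pattern can be sketched in code, with the caveat that this is a deliberately crude illustration: the sample data is made up, and real machine learning induces far richer rules than a simple frequency. Still, the shape of the inference is the same: start from cases, end with a rule.

```python
# Induction in miniature: instead of starting from a rule, infer one from examples.
# Here the "rule" is just the observed frequency of past outcomes, used to predict
# the next case. (The sample data is invented for illustration.)

past_albums_went_platinum = [True, True, True, True, True]  # five platinum albums in a row

def induce_probability(samples):
    """Estimate how likely the next case is, given the cases observed so far."""
    return sum(samples) / len(samples)

print(induce_probability(past_albums_went_platinum))  # 1.0: the next one will probably go platinum too
```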

Arthur Samuel, an IBM researcher, coined the term “machine learning” all the way back in 1959. But, as discussed, deductive approaches to AI would long dominate. Why? The inductive turn in AI has been contemporaneous with the rise of big data, and this is no coincidence. Simply put, computer hardware needed decades of further development before we could generate and store enough sample data for machine learning to become viable. Before then, there was no practical way to create the kind of training sets a computer would need to induce rules sophisticated enough to deal with the complexity of real-world environments. Nor were there computers powerful enough to process all that data.

Today, an inflection point has been reached: most major tech companies can afford the hardware to train highly competent programs, and big data supplies sufficient sample sets for a growing number of applications. Even more importantly, the daunting volume of data such programs consume, often millions of data points, is far more than any sane person would be willing to study. In other words, computers now induce better than we do because they are willing to look at more data, for far longer, to infer rules—computers are indefatigable. Cue hysteria.

When computers can integrate and use the cumulative sum of subject matter expertise, and find patterns among data that would take a human being years to review, what is left for us? This becomes particularly eerie in the context of media and personal relations. Recent studies have shown that computers can know us better than our friends or family do. Experts can no longer reliably distinguish between manmade and computer-generated art. One can even merge the two and imagine a world in which most of our art and music has been personalized by an algorithm to meet our individual preferences. Indeed, AI maximalists seem to argue that something like this is inevitable.

Here, many humanists will become skeptical, and formulate some objections about how computers do not seem to be “truly” creative. Aren’t computers confined to the information that we give them? Thus, do they ever create something that’s really new? AI maximalists swiftly shoot this down, expounding how machine learning programs now write their own rules. For reinforcement, they will cite computational theories of mind and milestones of computational creativity to ultimately charge their detractors with piteous anthropocentric bias.

Abductive Reasoning

Charles Sanders Peirce (pronounced “purse”) was an obscure, albeit legendary, American philosopher. Those in the know describe him with superlatives. Bertrand Russell, a Nobel laureate considered by many to be the most influential philosopher of the twentieth century, asserted that Peirce was “certainly the greatest American thinker ever.”4

Peirce was the consummate intellectual: socially awkward and incessantly struggling with his personal and financial affairs. His failure to publish much of his work limits his recognition even today. Nevertheless, eminent men revered, befriended, and supported him for much of his life. Among his myriad achievements, Peirce articulated a form of logical inference he called “abduction.” While there are hints of abduction in the works of other great thinkers, Peirce was undoubtedly the first to fully describe this method of inference and place it on a par with induction and deduction. In so doing, he grafted a whole new branch onto classical logic for the first time since Aristotle laid down its foundations more than two millennia ago.

Like other forms of inference, we use abductive reasoning in everyday thought. Unlike induction or deduction, where we start with cases to draw conclusions about a rule, or vice versa, abduction generates a hypothesis to explain the relationship between a case and a rule. More concisely, in abductive reasoning we make an educated guess. Here is a timely example: this is a very partisan news story; that media outlet I dislike is very partisan; this news story is probably from that media outlet!

There are a few remarkable things about abductive reasoning. Significant among them is that it can be erroneous. (Although this is true of induction as well; notice when Siri still botches your voice-to-text input.) The most remarkable aspect, however, is one Peirce asserted plainly: abduction “is the only logical operation which introduces any new idea.”5 Remember, with deduction we begin with a rule and merely decide whether our case qualifies; we are not generating either piece for ourselves. Inductive reasoning merely assumes that information we already possess will be predictive into the future. But with abduction, we adapt information gleaned elsewhere to infer a conclusion about our current problem. In the example above, we recall what we know about some media outlet to explain why the story is partisan. This is the new idea we introduce to the case.
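One can caricature the shape of such a guess in code, but the sketch below quietly smuggles in the human contribution: someone has to supply the shortlist of candidate explanations and judge how plausible each one is. (The candidates and scores here are invented for illustration.)

```python
# Abduction in caricature: given a surprising observation, pick the candidate
# explanation that would best account for it. Note that the candidate list and
# the plausibility scores come from a person; nothing here invents a new idea.

observation = "this news story is very partisan"

candidate_explanations = {
    "it came from that outlet I dislike": 0.6,       # plausibility, as judged by the reader
    "it was planted by a partisan think-tank": 0.3,
    "it only reads as partisan to me": 0.1,
}

best_guess = max(candidate_explanations, key=candidate_explanations.get)
print(best_guess)  # "it came from that outlet I dislike"
```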

It is very difficult for a computer to perform this kind of task well. Humans, on the other hand, are effortlessly proficient at it. Part of what makes abduction challenging is that we have to infer some likely hypotheses from a truly infinite set of explanations. Partisan media outlets are a dime a dozen. Not to mention that the story could have been funded by any of the hundreds of partisan think-tanks, political campaigns, corporate lobbyists, or activist organizations. The news story could originate from foreign election interference, or simply be this week’s blog post from that friend on Facebook. Best of all, the news story could be partisan because Mercury is in retrograde.

A Peircean Future

No, really, for our purposes, that is the best explanation, as it illustrates two crucial points. Firstly, meme culture has taught us that Mercury being in retrograde is to blame for most problems. Such memes are funny precisely because they are inane; the position of Mercury has nothing to do with our everyday vexations. (Sorry, not sorry, astrology fans.) The point here, though, is that we can immediately recognize that this is not a valid explanation.

A computer, on the other hand, cannot distinguish between good and bad explanations without a value system that we supply.6 A computer may be able to teach itself how to play chess using a machine learning algorithm, but it will only be able to learn and advance if we first inform it that the goal of chess is to achieve checkmate. Otherwise, as the computer tries random combinations of chess moves, it will have no way to discriminate between good and bad strategies.
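A bare-bones sketch makes the point concrete. This is not a real chess engine; every name below is a hypothetical stand-in. The loop only “learns” because a human has already written the reward function that declares checkmate to be the goal.

```python
import random

# A toy learning loop. The "value system" is the reward function we write:
# without it, the program has no way to prefer one strategy to another.
# (random_game() and the numbered strategies are stand-ins, not a real chess library.)

def reward(outcome):
    """Human-supplied goal: checkmate is good, everything else is worth nothing."""
    return 1.0 if outcome == "checkmate" else 0.0

def random_game(strategy):
    """Stand-in for self-play: pretend higher-numbered strategies reach checkmate more often."""
    return "checkmate" if random.random() < strategy / 10 else "draw"

# "Learning" here is just keeping whichever strategy the reward function favors.
scores = {s: sum(reward(random_game(s)) for _ in range(100)) for s in range(1, 10)}
best_strategy = max(scores, key=scores.get)
print(best_strategy)  # almost always 9, because the reward defines what "better" means
```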

This is significant because, when we are faced with complex problems, part of the way we solve them is by tinkering. We play, trying several approaches and keeping our own value system fluid as we search for potential solutions. Specifically, we generate hypotheses. Where a computer might be stuck in an endless loop, iterating over infinite explanations, we use our value systems to quickly infer which explanations are both valid and likely. Peirce knew that abductive reasoning was central to how we tackle novel problems; in particular, he thought it was how scientists discover things. They observe unexpected phenomena and generate hypotheses that would explain why they occur.

This brings us to the second crucial point about the retrograde of Mercury. The planet does not actually move backwards; that is an illusion created by our relative orbits (just as a skydiver who opens his parachute appears, to other skydivers still in freefall, to be carried upwards). But Mercury does move strangely. Astronomers would say that Mercury has an anomalous orbit that cannot be fully explained by Newton’s laws.

In the nineteenth century, Urbain Le Verrier, a French mathematician, induced that Mercury’s odd behavior was the consequence of a hitherto undiscovered planet, which he named Vulcan. He had good reason to infer this, as the same idea had led him to discover Neptune. Le Verrier was wrong, of course—there is no planet Vulcan. We can hardly blame him, though. Le Verrier would never have guessed that a bizarre theory—where space and time form a single continuum that can be warped by matter—just so happens to explain the orbit of Mercury perfectly. The originator of that theory, Albert Einstein, would be born two years after Le Verrier’s death.

This is the kind of creative work that is impossible without abductive reasoning. Today’s inductive AI can only solve problems in the narrow problem space we predefine. There must be a finite number of solutions for it to parse through, not an infinite set that requires a value system to identify the most plausible options. The speed and infallibility of computing machines provide no advantage for unprecedented problems that call for new hypotheses, and where errors found by tinkering are often insightful.

The prognosis for the future, then, is not apocalyptic, nor does it imply that most jobs outside technology are doomed. Instead, we might expect growth in sectors that rely upon abductive reasoning, such as research, design, and the creative arts. Delimiting the applications of AI and grounding the conversation in the historical and mechanical context of the technology is more likely to reveal the future than irrational exuberance or collective anxiety. For the foreseeable future, man will innovate, machine will toil, and The Terminator will remain science fiction.

References and Notes:

1 This topic was explored by Edmund Burke in A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful, and by Immanuel Kant in his Critique of Judgment.
2 Marc Andreessen, “Why Software Is Eating the World,” The Wall Street Journal, 20 August 2011.
3 To give a grandmaster his due, Deep Blue defeated Garry Kasparov in their first game in 1996, but Kasparov won the match. Deep Blue would not decisively win until their rematch in 1997.
4 Bertrand Russell, Wisdom of the West, Macdonald, 1959, p. 276.
5 Charles Sanders Peirce, Collected Papers of Charles S. Peirce, Volume V, Section 172, Harvard University Press, 1935.
6 Philosophers might say that computers lack the innate capacity for “axiology.”
