
Rational AI-nxiety: A Counter-Argument

Is there a dark side to artificial intelligence? Many people, including famous science and technology leaders, assert that advancements in AI should be approached with extreme caution.


Artificial Intelligence (AI) is defined by Merriam-Webster as a “branch of computer science dealing with the simulation of intelligent behavior in computers.” As machine learning advances and AI systems grow more complex, what does that mean for the future of humanity?

One can easily identify the advantages of AI. These include, but are not limited to, its potential to improve the accuracy of medical diagnoses; to perform laborious or dangerous work; and to make rational decisions in situations where human emotions can impair efficiency or safety. These are only a few of the reasons AI has become so advantageous. Smart-phones and many Internet applications are marvelous examples of AI at work in our daily lives.

The World Wide Web, introduced in 1989, has become increasingly expansive. AI has allowed the massive amounts of information on the Web to be organized in a searchable fashion (think: the Google search engine). Communication has become nearly instant with the advent of email and of audio and video applications on the Internet. Smart-phones have the ability to “be used as phonebooks, appointment calendars, internet portals, tip calculators, maps, gaming devices, [and]…seem capable of performing an almost limitless range of cognitive activities for us, and of satisfying many of our affective urges.”1 The list of advantages goes on: it has never been easier to access my online banking, pay my bills, buy items remotely, and stay in touch with friends and loved ones. As AI technology expands, it will offer still more, making our lives easier, richer, and more satisfying.

But is there a dark side to artificial intelligence? Many people, including famous science and technology leaders such as Elon Musk and Stephen Hawking, assert that advancements in AI should be approached with extreme caution.

Edward Clint, an evolutionary psychologist and author of a recent thought-provoking article for Quillette entitled “Irrational AI-nxiety,” argues that humans harbor an unnecessary fear of AI rooted in an evolutionarily acquired, instinctual distrust of the unknown. He claims that AI probably does not have the potential to threaten the future of humanity, and he likens people’s fears of AI to the hysterical fear of aliens or poltergeists. I agree with Dr. Clint in this respect. Yet I am fearful of the peril that AI poses to the future of humanity, for very different reasons: reasons that, in my opinion as a neurologist, are more frightening because they are unfolding inside our very own, and willing, brains.

The danger of AI lies not in the manner in which it is portrayed by Hollywood films (that is, that robots will some day develop a conscious malicious predilection for destroying human beings). AI is in the process of rendering humans meaningless and unnecessary, stealing away from us the very qualities that make us human. As Nicholas Carr writes in his book, The Shallows: What the Internet is Doing to Our Brains:

Over the last few years, I’ve had an uncomfortable sense that someone, or something is tinkering with my brain, remapping the neural circuitry, reprogramming the memory…I feel it most strongly when I am reading. I used to find it easy to immerse myself in a book or a lengthy article. My mind would get caught up in the twists of the narrative or the turns of the argument, and I’d spend hours strolling through the long stretches of prose. That’s rarely the case anymore. Now my concentration drifts after a page or two … what the Net seems to be doing is chipping away at my capacity for concentration and contemplation… My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in a sea of words. Now I zip along the surface like a guy on a Jet Ski…

Carr’s description of his experience is not uncommon. I have felt the powerful effects of modern technology on my own brain. I recall as a young adult, prior to having a laptop or smart-phone, visiting the library and experiencing the intense wonder and serenity produced by the books that surrounded me. I never experienced the anxiety or loss of focus at the library that I do today when I skim through massive amounts of information on the Internet. Are AI applications causing us to lose our concentration, attention, and our ability for linear, deep, and critical thought? If so, how? And what are the consequences?

Despite the early dogma that the brain is a hard-wired circuit incapable of change, research over the past century has shown that the adult brain is a remarkably ‘plastic’ and dynamic organ. Its complex circuitry and neuronal connections constantly change and reorganize in response to our actions, thoughts, and exposures.

The Shallows by Nicholas Carr (2010)

In the 1960s, University of Wisconsin neuroscientist Michael Merzenich demonstrated just how dynamic the brain is with his experiments on monkeys.2 He inserted electrical probes into the parts of the monkeys’ brains that mapped skin sensation in the hand, then damaged the nerves of a hand and measured how the brain responded to the injury. He found that the neural connections mapping the injured nerve became haphazardly scattered and disorganized: an area of the brain that had previously corresponded to the tip of a finger, for instance, now corresponded to a hand joint instead. But over time, as the nerve regenerated and healed, the neural circuitry in the brain also reorganized. By the time the nerve had healed completely, the reorganized brain circuits once again corresponded to the correct body parts. In other words, Merzenich showed that neurons, the cells of the brain, are capable of changing and reorganizing; the brain is not a hard-wired, rigid circuit.

Our brains, Carr explains in The Shallows, are “always breaking old connections and forming new ones, and brand-new nerve cells are always being created.” This plasticity is part of what allows humans to form memories. Research by neuroscientists such as Louis Flexner, at the University of Pennsylvania, and Eric Kandel, at Columbia University, found that the formation of long-term memory entails structural changes in the brain, with new synaptic connections forming between neurons and producing measurable anatomical changes.3 However, long-term memory takes time and focused concentration to form. The consolidation of memories, Carr says, “involves a long and involved ‘conversation’ between the cerebral cortex and the hippocampus.”

In the words of the well-recognized adage, “neurons that fire together, wire together.” The opposite holds for neurons that stop firing together: their connections unravel. While AI has made our lives easier, it has coddled our brains, allowing us to “outsource our memory.” Its distractions, and its temptation to revel in multitasking, erode the neuronal circuits underlying the concentration and attention needed to form long-term memories.

Nerve cells in a human nervous system

Some might argue that outsourcing memory is not so bad, that it increases efficiency. We may not need to have everything stored in our brains if we have computers and smart-phones at our fingertips. Some might go so far as to argue that this reflects the rudimentary beginnings of the brain-computer interface. But biological human memory is very different from computer memory. Kobi Rosenblum, head of the Department of Neurobiology and Ethology at the University of Haifa in Israel, states that “while an artificial brain absorbs information and immediately saves it in its memory, the human brain continues to process information long after it is received, and the quality of memories depends on how the information is processed.”4

It is humans who give meaning to the memories that are stored. “Biological memory is alive, [while] computer memory is not,” Carr writes in The Shallows. “[Enthusiasts of outsourced memory] overlook the fundamentally organic nature of biological memory. What gives real memory its richness and its character, not to mention its mystery and fragility, is its contingency.” The human brain may not be able to store as much data as the Internet, but it is able to decide what is meaningful; in other words, it is able to ‘separate the wheat from the chaff.’ Memory is what makes our lives meaningful and rich. Evidence suggests that as biological memory improves, the human mind becomes sharper and more adept at solving problems and at learning new ideas and skills. As William James declared in 1892, “the art of remembering is the art of thinking.”5 If AI is to replace human memory, it will no doubt also come to replace these higher functions, potentially rendering humankind meaningless.


Some might argue that what we lose in deep thinking and memory formation we make up for in navigational and decision-making skills. For instance, a 2008 UCLA study using fMRI found that people who used the Internet showed high activation in the frontal, temporal, and cingulate areas of the brain, which control decision-making and complex reasoning. The study inferred that Internet use may actually improve complex decision-making and reasoning. Surfing the Web may indeed improve our decision-making abilities, but likely only as they apply to Internet navigation. We are essentially giving up our higher-level cortical functions as human beings (i.e., those involved in deep learning, concentration, and creativity) in exchange for distraction, Internet navigational ‘skills,’ multitasking, and superficial ‘learning.’

A recent psychology study demonstrated that people who read articles splattered with hyperlinks and other distractions (common on the Internet, given websites’ monetary incentive to encourage readers to click on as many links as possible) are significantly less able to recall what they read than people who read the same articles without the distractions. Moreover, those exposed to distracting information were less able to identify the meaning behind the articles they read. By constantly surfing the Web, we teach our brains to become less attentive, and we become, in Carr’s words, “adept at forgetting, and inept at remembering.”

Studies have demonstrated that people who use the Internet excessively show gray-matter atrophy in the dorsolateral prefrontal cortex and the anterior cingulate gyrus, areas of the brain involved in decision-making and in the regulation of emotions and impulses. The longer the unhealthy relationship with the Internet lasts, the more pronounced the shrinkage. There are also disruptions in functional connectivity in areas responsible for learning, memory, and executive function.6 Additional studies show that excessive use of smart-phones and the Internet is associated with higher rates of depression and anxiety, increased risk-taking behavior, and an impaired ability to control impulses.7 No data have yet been produced on the long-term neurological effects of our chronic and expanding dependence on technology. As a neurologist, I can’t help but wonder whether it may pose an underlying risk for developing dementias, such as Alzheimer’s disease.

Humans have access to so much information, yet Carr argues that we are “no longer guided toward a deep, personally constructed understanding of text connotations. Instead, we are hurried off toward another bit of related information, and then another, and another. The strip-mining of ‘relevant content’ replaces the slow excavation of meaning.” Over time, we become unable to think profoundly about the topics we research because we never acquire in-depth knowledge. We lose our neural circuits for deep learning, critical thinking, and introspection. We lose our intellectual sharpness and richness, and instead become zombies who resort to primal ways of thinking.


I am tempted to hypothesize that our dependency on AI may be contributing to the rising popularity of fundamentalist ideologies on college campuses. Instead of engaging in critical thought and civilized debate, people whose circuits for deep learning and introspection have atrophied are likely to fall back on simplified, dogmatic ideologies. With the introduction of AI, humanity is ironically at risk of regressing from an age of intellectual enlightenment to a Dark Age of ignorance and primal thinking.

Although it can be put to compassionate and advantageous uses, AI also poses a very real risk to human beings that cannot be ignored. In the recent superhero film Justice League, Wonder Woman argues against Batman’s plan to use an immensely powerful energy source to bring Superman back to life. Reason, rather than hasty emotion, she insists, must guide the introduction of new technology: “Technology without reason, without heart,” she warns, “destroys us.”

We MUST proceed with caution in the advancement of artificial intelligence. We must insist that those developing new technology deeply examine their rationale and scrutinize the intellectual, ethical, and cultural implications of their discoveries and pursuits. It is humanity that will be forced to deal with the repercussions of these creations. We must remain alert and acknowledge the massive limitations and risks associated with artificial intelligence. Artificial intelligence does threaten the survival of humanity, but not in the sense commonly portrayed. If we continue to ignore this dragon without truly examining the potential consequences, it will continue to grow until we are rendered powerless and obtuse. We must face the dark side of AI intelligently and with a critical eye.

Anna Moise is a neurologist and epileptologist in Asheville, North Carolina. She serves as adjunct clinical faculty at the University of North Carolina, where she teaches medical students and residents. You can follow her on Twitter @annamo2


References:

1 Wilmer, H. H., Sherman, L. E., & Chein, J. M. (2017). Smartphones and Cognition: A Review of Research Exploring the Links between Mobile Technology Habits and Cognitive Functioning. Frontiers in Psychology, 8, 605. http://doi.org/10.3389/fpsyg.2017.00605

2 Schwartz and Begley, The Mind and the Brain, 175

3 Kandel, In Search of Memory, 221

4 University of Haifa, “Researchers Identified a Protein Essential in Long Term Memory Consolidation,” Physorg.com, September 9, 2008, www.physorg.com/news140173258.html

5 William James, Talks to Teachers on Psychology: And to Students on Some of Life’s Ideals (New York: Holt, 1906), 143

6 Weinstein, A. M. (2017). An Update Overview on Brain Imaging Studies of Internet Gaming Disorder. Frontiers in Psychiatry, 8, 185. http://doi.org/10.3389/fpsyt.2017.00185

7 Ibid.
