
Rational AI-nxiety: A Counter-Argument

Artificial Intelligence (AI) is defined by Merriam-Webster as a “branch of computer science dealing with the simulation of intelligent behavior in computers.” As machine learning advances and AI grows more complex, what does that mean for the future of humanity?

One can easily identify the advantages of AI. These include, but are not limited to, its potential to improve the accuracy of medical diagnoses; to perform laborious or dangerous work; and to make rational decisions in situations where human emotions can impair efficiency or safety. Smart-phones and many Internet applications are marvelous examples of AI at work in our daily lives.

The World Wide Web, introduced in 1989, has become increasingly expansive. AI has allowed the massive amount of information on the Web to be organized in a searchable fashion (think: Google’s search engine). Communication has become nearly instant with the arrival of email and of audio and video Internet applications. Smart-phones can “be used as phonebooks, appointment calendars, internet portals, tip calculators, maps, gaming devices, [and]…seem capable of performing an almost limitless range of cognitive activities for us, and of satisfying many of our affective urges.”1 The list of advantages goes on: it has never been easier to access my online banking, pay my bills, buy items remotely, and stay in touch with friends and loved ones. As AI technology expands, it will offer even more, making our lives easier, richer, and more satisfying.

But is there a dark side to artificial intelligence? Many people, including famous science and technology leaders such as Elon Musk and Stephen Hawking, assert that advancements in AI should be approached with extreme caution.

Edward Clint, an evolutionary psychologist and author of a recent thought-provoking article for Quillette entitled “Irrational AI-nxiety,” argues that humans harbor an unnecessary fear of AI, rooted in an evolutionarily acquired, instinctual distrust of the unknown. He claims that AI probably does not have the potential to threaten the future of humanity, and he likens people’s fears of AI to the hysterical fear of aliens or poltergeists. I agree with Dr. Clint in this respect. Nevertheless, I do fear the peril that AI poses to the future of humanity, but for very different reasons: reasons that, in my opinion as a neurologist, are more frightening because they are playing out inside our very own, and willing, brains.

The danger of AI does not lie where Hollywood films place it (that is, in robots that will someday develop a conscious, malicious predilection for destroying human beings). Rather, AI is in the process of rendering humans meaningless and unnecessary, stealing from us the very qualities that make us human. As Nicholas Carr writes in his book The Shallows: What the Internet Is Doing to Our Brains:

Over the last few years, I’ve had an uncomfortable sense that someone, or something is tinkering with my brain, remapping the neural circuitry, reprogramming the memory…I feel it most strongly when I am reading. I used to find it easy to immerse myself in a book or a lengthy article. My mind would get caught up in the twists of the narrative or the turns of the argument, and I’d spend hours strolling through the long stretches of prose. That’s rarely the case anymore. Now my concentration drifts after a page or two … what the Net seems to be doing is chipping away at my capacity for concentration and contemplation… My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in a sea of words. Now I zip along the surface like a guy on a Jet Ski…

Carr’s description of his experience is not uncommon. I have felt the powerful effects of modern technology on my own brain. I recall as a young adult, prior to having a laptop or smart-phone, visiting the library and experiencing the intense wonder and serenity produced by the books that surrounded me. I never experienced the anxiety or loss of focus at the library that I do today when I skim through massive amounts of information on the Internet. Are AI applications causing us to lose our concentration, attention, and our ability for linear, deep, and critical thought? If so, how? And what are the consequences?

Despite the early dogma that the brain is a hard-wired circuit that does not change, research over the past century has shown that the adult brain is a very ‘plastic’ and dynamic organ. Its complex circuitry and neuronal connections constantly change and reorganize in response to our actions, thoughts, and exposures.

The Shallows by Nicholas Carr (2010)

In the 1960s, University of Wisconsin neuroscientist Michael Merzenich showed how dynamic the brain is in experiments with monkeys.2 He inserted electrical probes into the region of each monkey’s brain that mapped to skin sensation in the hand. After damaging a nerve of the hand, he found that the corresponding neural map in the brain became haphazardly scattered and disorganized: an area that had previously mapped to the tip of a finger, for instance, now responded to a hand joint instead. But over time, as the nerve regenerated and healed, the brain’s circuitry reorganized as well, and by the time the nerve had healed completely, the map once again corresponded to the correct parts of the hand. In other words, Merzenich showed that neurons, the cells of the brain, are capable of changing and reorganizing, demonstrating that the brain is not a hard-wired, rigid circuit.

Our brains, Carr explains in The Shallows, are “always breaking old connections and forming new ones, and brand-new nerve cells are always being created.” This plasticity is also why humans are able to form memories. Research by neuroscientists such as Louis Flexner, at the University of Pennsylvania, and Eric Kandel, at Columbia University, found that the formation of long-term memory involves structural changes in the brain, the growth of new synaptic connections between neurons, which produce measurable anatomical changes.3 However, long-term memory takes time and focused concentration to form. The consolidation of memories, Carr says, “involves a long and involved ‘conversation’ between the cerebral cortex and the hippocampus.”

In the words of the well-known adage, “neurons that fire together, wire together.” The opposite holds for neurons that stop firing together: their connections unravel. While AI has made our lives easier, it has also coddled our brains, allowing us to “outsource our memory.” Its distractions, and its constant temptation to multitask, erode the neural circuits for the concentration and attention needed to form long-term memories.
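
To make the adage concrete, here is a minimal, purely illustrative sketch of the Hebbian learning rule it summarizes: a connection between two simulated neurons strengthens when they are repeatedly active together and decays when they are not. The learning and decay rates below are arbitrary assumptions chosen for illustration, not values drawn from any of the studies cited here.

```python
# Toy illustration of Hebbian plasticity ("fire together, wire together").
# All parameters are illustrative assumptions, not values from the cited studies.

def hebbian_update(weight, pre_active, post_active, learning_rate=0.1, decay=0.02):
    """Strengthen a synapse when both neurons fire together; otherwise let it decay."""
    if pre_active and post_active:
        weight += learning_rate * (1.0 - weight)  # fire together -> wire together
    else:
        weight -= decay * weight                  # fall silent -> slowly unravel
    return weight

w = 0.2
for _ in range(20):                  # repeated, attentive co-activation (rehearsal)
    w = hebbian_update(w, True, True)
print(f"after rehearsal: {w:.2f}")   # climbs toward 1.0 (about 0.90 here)

for _ in range(50):                  # a long stretch of disuse
    w = hebbian_update(w, False, False)
print(f"after neglect:   {w:.2f}")   # drifts back toward 0 (about 0.33 here)
```

The toy model captures only the direction of the effect Carr describes: attentive rehearsal strengthens a circuit, while chronic disuse lets it fade.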

Nerve cells in a human nervous system

Some might argue that outsourcing memory is not so bad, that it increases efficiency. We may not need to have everything stored in our brains if we have computers and smart-phones at our fingertips. Some might go as far as to argue that this is the rudimentary beginning of the brain-computer interface. But biological human memory is very different from computer memory. Kobi Rosenblum, head of the Department of Neurobiology and Ethology at the University of Haifa in Israel, states that “while an artificial brain absorbs information and immediately saves it in its memory, the human brain continues to process information long after it is received, and the quality of memories depends on how the information is processed.”4

It is humans who give meaning to the memories they store. “Biological memory is alive, [while] computer memory is not,” Carr writes in The Shallows. “[Enthusiasts of outsourced memory] overlook the fundamentally organic nature of biological memory. What gives real memory its richness and its character, not to mention its mystery and fragility, is its contingency.” The human brain may not be able to store as much data as the Internet, but it can decide what is meaningful; in other words, it can ‘separate the wheat from the chaff.’ Memory is what makes our own lives meaningful and rich. Evidence suggests that as biological memory improves, the mind becomes sharper and more adept at solving problems, learning new ideas, and acquiring new skills. As William James declared in 1892, “the art of remembering is the art of thinking.”5 If AI comes to replace human memory, it will no doubt also come to replace these functions, potentially rendering humankind meaningless.

Some might argue that what we lose in deep thinking and memory formation is made up for by gains in our navigational and decision-making skills. For instance, a 2008 UCLA study using fMRI found that people who used the Internet showed greater activation in the frontal, temporal, and cingulate areas of the brain, which control decision-making and complex reasoning, and the authors inferred that Internet use may actually improve complex decision-making and reasoning. Surfing the Web may indeed improve our decision-making abilities, but likely only as they apply to Internet navigation. We are essentially giving up our higher-level cortical functions as human beings (those involved in deep learning, concentration, and creativity) in exchange for distraction, Internet navigational ‘skills,’ multitasking, and superficial ‘learning.’

A recent psychology study demonstrated that people who read articles peppered with hyperlinks and other distractors (common on the Internet, given websites’ monetary incentive to get readers to click on as many links as possible) recall significantly less of what they read than people who read the same articles without the distractions. Moreover, those exposed to the distracting information were less able to identify the meaning behind the articles they read. By constantly surfing the Web, we teach our brains to become less attentive, and we become, in Carr’s words, “adept at forgetting, and inept at remembering.”

Studies have demonstrated that people who use the Internet excessively show gray matter atrophy in the dorsolateral prefrontal cortex and anterior cingulate gyrus, areas of the brain involved in decision-making and in the regulation of emotions and impulses. The longer the unhealthy relationship with the Internet lasts, the more pronounced the shrinkage. There are also disruptions in functional connectivity in areas responsible for learning, memory, and executive function.6 Additional studies show that excessive use of smart-phones and the Internet is associated with higher rates of depression and anxiety, increased risk-taking behavior, and an impaired ability to control impulses.7 No data yet exist on the long-term neurological effects of this chronic and expanding dependence on technology. As a neurologist, I can’t help but wonder whether it may pose an underlying risk for developing dementias, such as Alzheimer’s disease.

Humans have access to more information than ever, but Carr argues that we are “no longer guided toward a deep, personally constructed understanding of text connotations. Instead, we are hurried off toward another bit of related information, and then another, and another. The strip-mining of ‘relevant content’ replaces the slow excavation of meaning.” Over time, we become unable to think profoundly about the topics we research because we never acquire in-depth knowledge. We lose the neural circuits for deep learning, critical thinking, and introspection. We lose our intellectual sharpness and richness, and instead become zombies who fall back on primal ways of thinking.

I am tempted to hypothesize that our dependency on AI may contribute to the increasing popularity of fundamentalist ideologies on college campuses. Instead of engaging in critical thought and civilized debate, people whose deep-learning and introspective neural circuitry has atrophied are likely to fall back on identifying with simplified, dogmatic ideologies. With the introduction of AI, humanity is, ironically, at risk of regressing from an age of intellectual enlightenment to a Dark Age of ignorance and primal thinking.

Although it can be put to compassionate and advantageous uses, AI also poses a very real risk to human beings that cannot be ignored. In the recent superhero film Justice League, Wonder Woman argues with Batman against using an immensely powerful energy source to bring Superman back to life. She insists that reason, rather than hasty emotion, must guide the introduction of new technology: “Technology without reason, without heart,” she warns, “destroys us.”

We MUST proceed with caution in the advancement of artificial intelligence. We must insist that those developing new technology deeply examine their rationale and scrutinize the intellectual, ethical, and cultural implications of their discoveries and pursuits, because it is humanity that will be forced to deal with the repercussions of these creations. We must remain alert and acknowledge the massive limitations and risks associated with artificial intelligence. Artificial intelligence does threaten the survival of humanity, but not in the sense commonly portrayed. If we continue to ignore this dragon without truly examining the potential consequences, it will continue to grow until we are rendered powerless and obtuse. We must face the dark side of AI intelligently and with a critical eye.

 

Anna Moise is a neurologist and epileptologist in Asheville, North Carolina. She serves as adjunct clinical faculty at the University of North Carolina, where she teaches medical students and residents. You can follow her on Twitter @annamo2


References:

1 Wilmer, H. H., Sherman, L. E., & Chein, J. M. (2017). Smartphones and Cognition: A Review of Research Exploring the Links between Mobile Technology Habits and Cognitive Functioning. Frontiers in Psychology, 8, 605. http://doi.org/10.3389/fpsyg.2017.00605

2 Schwartz and Begley, The Mind and the Brain, 175

3 Kandel, In Search of Memory, 221

4 University of Haifa, “Researchers Identified a Protein Essential in Long Term Memory Consolidation,” Physorg.com, September 9, 2008, www.physorg.com/news140173258.html

5 William James, Talks to Teachers on Psychology: And to Students on Some of Life’s Ideals (New York: Holt, 1906), 143

6 Weinstein, A. (2017). An Update Overview on Brain Imaging Studies of Internet Gaming Disorder. Frontiers in Psychiatry, 8, 185. http://doi.org/10.3389/fpsyt.2017.00185

7 Ibid.

Comments

  1. David Turnbull says

    An interesting hypothesis. I might be worried if it weren’t for the fact that most psychology research and brain imaging studies are bad science and irreproducible. I have no reason to suppose that the work supporting this hypothesis does not fall into that category.

  2. Because of the hyper-speed of technological innovation, our environment is changing faster and more substantially than at any other time in our species’ history. Internet and smart-phone obsession/addiction is now pandemic, and its power and pervasiveness cannot be restrained. I now see new mothers carrying an infant while their attention is buried in a smart-phone screen. We are witnessing perhaps the Earth’s first virtual virus and a new type of disease. Add in the confounding effects of sustained near-skull electromagnetic radiation, and you have the makings of a Black Swan impact on evolutionary development.

  3. Interesting, but the real danger from AI, as from all technologies, is in the ways humans will use it. The big question with AI is who will own and control the power of it.

  4. nicky says

    Interesting hypothesis, must admit that during the last few years, I have more difficulty finishing a book ‘in one go’, but I always ascribed that to ‘not enough time’. But then, no one is as easy to fool as oneself…

  5. I recall as a young adult, prior to having a laptop or smart-phone, visiting the library and experiencing the intense wonder and serenity produced by the books that surrounded me. I never experienced the anxiety or loss of focus at the library that I do today when I skim through massive amounts of information on the Internet. Are AI applications causing us to lose our concentration, attention, and our ability for linear, deep, and critical thought? If so, how? And what are the consequences?

    What’s this loss of focus got to do with AI? The first half of that paragraph may describe the experience of surfing the internet, but that’s just skimming information; it’s nothing to do with AI.

  6. Artificial Intelligence is the New Alchemy.

    Alchemy, in part, was an effort to transform various materials into more valuable substances such as gold. It failed in this effort because its base assumption was that all substances were composed of some combination of earth, air, fire and water.

    As with alchemy, the basic premise of Artificial Intelligence, that digital computers can emulate the thought processes of biological organisms, is obviously incorrect.

    The current “state of the art” of AI is robots that can negotiate uneven terrain, recognize spoken words with fair accuracy, make distinctions in visual images and “learn” new actions through trial and error.

    My little dog Spot can run like crazy and leap into the air to catch a ball. He recognizes and reacts to quite a few words and responds to tone of voice. He can spot a cat from a block away, knows the other dogs in the neighborhood and remembers people. He also does a couple of tricks but not like the dogs on TV.

    AI could probably power a credible dog, but it requires computers that are 30 million times faster than the one pound of chemicals and the 100 Hz clock found in Spot’s head. That sure doesn’t look like it has much of a future.

  7. Before that, we could ask ourselves how to counter the dopamine dependence resulting from the over-use of smartphones. For example, if the author is around, I was wondering if she could comment on the neurological effects of long-term exposure to high-intensity colours?

    This link https://www.materialpalette.com is the official Google colour palette, used by UX devs around the world who want to comply with Google Material Design standards. If you look closely and extract the Hue value, you realise that the colours with a dominant component in R, or G, or B are all centred in their mono-chromatic range. For example, the ‘red’ in the above link has a Hue component of about 4.0 degrees, which makes it ‘red of red’ (see here: https://en.wikipedia.org/wiki/Hue). A quick way to check this is sketched at the end of this comment.

    As animals, we evolved to pay attention to colours – yes, sex, and food -, so those stimuli go straight through the reptilian part of our brain, barely knocking at the door of complex thoughts.

    Now, take a 7 yo, who will go through 10 years of intense visual stimuli (did I mention the brightness of the screen), what should we expect?
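
    Here is a minimal sketch of that check, assuming Python’s standard colorsys module; #F44336 is the red 500 swatch from the palette linked above:

    ```python
    # Minimal check of the hue claim; assumes only Python's standard library.
    # #F44336 is the "red 500" swatch from the Material palette linked above.
    import colorsys

    def hex_to_hue_degrees(hex_color: str) -> float:
        """Convert an RGB hex string to its hue angle in degrees (0-360)."""
        r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
        hue, _lightness, _saturation = colorsys.rgb_to_hls(r, g, b)
        return hue * 360

    print(round(hex_to_hue_degrees("#F44336"), 1))  # prints 4.1: an almost pure red
    ```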

    • Interesting comment, Alex – I don’t have any knowledge about how specific colors on the spectrum affect our reward system, although it is known that our eyes are attracted to certain colors more than others based on our evolutionary history.

      Regarding the overstimulation of children with modern technology, your point is quite valid, and it applies to all the senses. Evidence suggests that overstimulation during important childhood developmental windows can lead to abnormally high secretion of a protein called brain-derived neurotrophic factor (BDNF). If secreted at normal levels, BDNF helps a child form and strengthen the neural circuits specific to the respective developmental window. Sensory overstimulation, however, is associated with massive release of BDNF, which is hypothesized to lead to premature closure of those developmental windows. This is thought to be implicated in the development of autism.

      Thanks for your thoughts!

      • Dear Anna,
        thank you for the reply. So there are measurable effects.

        I just wanted to say that I’m not ranting against smartphones; they represent real progress. For example, portable ultra-sound scanners will be of great help to pregnant mothers in rural areas. But there’s a totally unintended dark side to this industry that we ought to keep under control, at an individual level.

        I can assure you that UX and (re)design review meetings are meant to create a form of user dependence. The coined phrase is ‘a great user experience’. The colour palette above literally went from ‘oh, looks real cool’ to ‘let’s use it’, without a ‘where is this coming from and why am I using this?’.

        I don’t mean to say that Google conned every UX designer in using colours it purposefully created to ensure dependence. They certainly had good intentions (!).

        It just ‘came’, like this, probably because our reptilian brain is alive and well, and sometimes in control without our knowledge. Also, it’s one element of many in securing users’ loyalty. But for all of those apps, reward systems are really the key, albeit rarely formulated this way.

        So when AI emerges with years and years of user data, in a position to build reward systems – plural, that’s important – finely tuned to cultural, individual and group behaviour specifics, it will be extremely difficult to challenge whatever it has to offer. The brain that reacts to mobile apps is the same brain that ought to make important decisions. And I’m not afraid of a ‘someone in control’, my gut feeling is that ‘no one’ will be in control.

        Interestingly, Health Care and GPs will be the first high profile collateral damage. But that’s another story.

        As a neurologist, and possibly a parent, what’s your approach? Between data that point towards biological effects, the difficulty of evaluating permanent behavioural consequences, the good sides of smartphones, and parenting decisions?

        PS:

        If you haven’t already, take a look at https://www.nytimes.com/2014/09/11/fashion/steve-jobs-apple-was-a-low-tech-parent.html

        “So, your kids must love the iPad?” I asked Mr. Jobs, trying to change the subject. The company’s first tablet was just hitting the shelves. “They haven’t used it,” he told me. “We limit how much technology our kids use at home.”

        Unfortunately, as you pointed out, there’s little hard data to back his point, besides the fact that HE was in those design review meetings. Aha!

  8. chris says

    Revolutions usually depend on a combination of complacency at the top and empowerment below, and while the dumbing-down of humanity by ‘thinking assistants’ is real, I suggest that it is the scope and immediacy of AI development that is the most serious omission from both AI-anxiety articles.

    So far, every well-defined objective that we have directed AI at has been achieved, from early medical diagnostic ‘expert systems’ to the latest self-teaching “deep mind” gaming machines. These have recapitulated and exceeded the whole of human experience relevant to developing world-beating gameplay and tactics. This has been achieved in days (Go) or hours (chess) of machine time, using the computer to “play against itself” and deduce what works best. Such an approach is not dependent on human-curated or human-generated experience, or on human assistance much beyond “point and shoot”.

    It seems unlikely that individuals whose primary skill set is grounded in human psychology, philosophy, or even evolutionary psychology can have reliable knowledge about anything beyond human-like behaviour. An existential issue such as this surely deserves greater care and certainty. There is already analysis suggesting that AI entities will develop objectives of their own, simply as a corollary of their apparently benign design specifications, and that some of these objectives will be in conflict with human welfare.

    It’s increasingly easy to understand how whole-brain simulation may become achievable. Claims that the brain comprises over 100 billion neurons were a little high; a total of 85 Bn is now accepted, and we now know that 80% of these are located in the cerebellum and concerned with fine motor control and sensory processing. The human cerebrum, the primary seat of “mind” and intelligence, contains only about 15 Bn neurons. In May 2015 Nature published a letter [doi:10.1038/nature14441] describing the construction, testing and training of a prototype silicon-based neuromorphic (i.e. neuron-like) computing structure which showed, even then, the potential to construct computers with cerebrum-like complexity, power consumption and size, capable of much greater speed than the human brain. Recent gaming machines seem to achieve their results by using conventional computing circuitry to simulate only a tiny fraction of a ‘headful’ of 15 Bn neurons. We have no basis for assuming that software simulations or silicon-based implementations of the brain will be in any significant way less effective than the biological original.

    Arguably, wild AI systems are already in play. How artificial does an intelligent system have to be to deserve the description artificial? We already have examples of computers acting as board members of companies in the areas of law and finance. Given the complexity of employment law we may expect to see their use in HR soon. Almost all companies set their objectives in terms of growth, market share and profits without limit. Given this, it seems that we may already have launched the iconic AI nightmare of the runaway paperclip factory while our commercial and political systems continue to bicker about precise levels of social responsibility that remnant human board members should show, and what laws might be needed to mandate ethical behaviour in multinationals that are already larger, smarter and much faster than most nations.

    Attempts to frustrate comparison between human and artificial by raising such issues as consciousness and emotional intelligence seem irrelevant. Will I care that a machine capable of subjugating or terminating humanity is contrite or even aware of its effects?
