GPT-5 and the Limits of Machine Intelligence

The disillusion produced by GPT-5 is not a technical hiccup; it's a philosophical wake-up call.

OpenAI CEO Sam Altman (left) and cognitive scientist Gary Marcus. (Images: IMAGO/Rod Lamkey; TechCrunch via Flickr)

Disappointment and a sense of deflation, but no longer denial. Following the release of OpenAI’s GPT-5, the internet was soon awash with tweets and posts from industry insiders reluctantly acknowledging the work of Silicon Valley gadfly Gary Marcus. Since the late 2010s, the cognitive scientist has warned of the limits of large language models (LLMs)—much to the chagrin of deep learning enthusiasts and figures like OpenAI CEO Sam Altman, who have publicly championed a headier narrative.

Marcus’s critique hinges on what he sees as the inherent fragility of deep learning: a data- and energy-hungry, brute-force approach to “understanding” and generating natural language that has proven dazzling yet fundamentally brittle. Those scare quotes are warranted. LLMs do not understand anything—not in the way we ordinarily mean the term. Instead, these vast symbol-manipulating machines use enormous computational resources to predict the most statistically likely next word or token, based on patterns extracted from the collective human corpus. The results are so impressive that talk of artificial general intelligence (AGI) and even conscious machines has re-entered mainstream discourse. But speculation like that rests on the sandy foundations of anthropomorphic projection and philosophical naivety, confusing surface fluency with depth and mimicry with mind.
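
To see what “predicting the most statistically likely next word” means in practice, consider a deliberately minimal sketch: a bigram model that counts which word follows which in a toy corpus, then generates text greedily. Production LLMs replace the counting with learned neural networks over subword tokens, but the objective, next-token prediction, is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count bigram frequencies in a tiny corpus,
# then always emit the most frequent continuation. Real LLMs use learned
# neural networks over subword tokens, but share the same objective:
# predict the statistically likely next token.

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Map each word to a Counter of the words observed to follow it.
bigrams = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    bigrams[current_word][following_word] += 1

def predict_next(word: str) -> str:
    """Return the corpus's most frequent continuation of `word`."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "."

# Greedy generation: locally fluent output, zero grasp of meaning.
word, generated = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))  # -> "the cat sat on the cat sat"
```

The output is locally fluent and globally empty, which is precisely the gap Marcus’s critique turns on.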

Marcus’s story resembles that of an earlier AI realist, whose work exposing the inherent limitations of the technology also made him something of a pariah. In the early 1980s, as optimism surged around so-called expert systems—symbolic logic engines, also known as good old-fashioned AI (GOFAI), designed to imitate human reasoning—philosopher Hubert Dreyfus declared himself unconvinced. He had already spent over a decade challenging the foundational assumptions of AI research. His 1965 RAND report, followed by his 1972 book What Computers Can’t Do, argued that genuine intelligence is embodied, situated, and context-dependent, and therefore cannot be captured by rule-based systems or computational representations alone.
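
For readers who have never met one, the “rule-based systems” at issue can be sketched in a few lines. The following forward-chaining loop (with hypothetical rules invented purely for illustration, not drawn from any historical system) captures the GOFAI pattern: conclusions follow mechanically from whatever symbols the designer has encoded, and nothing else ever enters.

```python
# GOFAI-style forward chaining: a rule fires when all of its premises
# are known facts, adding its conclusion to the fact base. The rules
# below are hypothetical, invented for illustration only.

rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles", "not_vaccinated"}, "recommend_isolation"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Apply rules repeatedly until no new conclusion can be drawn."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_fever", "has_rash", "not_vaccinated"}))
# -> includes 'suspect_measles' and 'recommend_isolation'
```

Everything such a system “knows” sits in those explicit symbols, which is exactly the design assumption Dreyfus attacked.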

Drawing on Heidegger and Merleau-Ponty, Dreyfus contended that expertise and meaning arise not from rule-following but from embodied know-how and being-in-the-world—dimensions inaccessible to representational systems. These claims were met with hostility by many in the field, particularly at places like MIT, where symbolic AI was ascendant. Where his critics saw thinking as a problem of abstract symbol manipulation, Dreyfus insisted that such manipulation could never approximate the pre-reflective, intuitive grasp of meaning that characterises human being.

Just as Marcus objects to the flawed assumptions underlying modern LLMs, Dreyfus warned that no machine, no matter how powerful, could achieve genuinely human-like insight or expertise so long as it remained disembodied, disembedded, and blind to the meaningful whole within which human cognition always operates. Despite that critique, Marcus sees hope in revisiting GOFAI. Its fundamentally different approach, he argues, may offer much-needed resilience and improved reasoning capabilities where deep learning has passed the point of diminishing returns. He may well be correct. Nevertheless, there are reasons to suspect that Dreyfus’s half-century-old assessment remains as relevant as ever, particularly when it comes to the loftier aspirations of contemporary AI devotees.

Philosopher and neuroscientist Iain McGilchrist’s hemisphere theory lends additional weight to Dreyfus’s critique by shedding new light on how we understand the human mind. Earlier theories about the differences between the brain’s hemispheres focused on what each purportedly does; McGilchrist argues that the crucial difference lies in how they do it. Drawing upon his experience as a psychiatrist and neuroscientist, and upon a vast body of research on patients with brain injuries, he offers a fascinating account of the radically different worlds each hemisphere of the brain brings into being.

The right hemisphere allows us to see things as unique, context-dependent, ever-changing, never fully graspable, and never entirely separate from our involvement with them. It reveals a world of depth, ambiguity, beauty, and moral significance: a world in which we participate. It is richer, more truthful, but also harder to pin down in language and make explicit. And yet, for anything that truly matters—relationship, meaning, deep and intuitive understanding—it is indispensable.

The left hemisphere, on the other hand, offers a fragmented and decontextualised vision of the world for the purpose of control and manipulation. It perceives reality as a collection of static, isolated parts: abstract, disembodied aspects stripped of nuance, ambiguity, or emotional resonance. In this mode, apprehension becomes a matter of bottom-up construction aimed at producing conclusions that appear unimpeachable precisely because they have excluded everything that resists categorisation.

Unlike the right hemisphere, the left hemisphere perceives an inanimate universe, where the emphasis falls on utility over truth, and efficiency over understanding. There is clarity, but it comes at the cost of depth. When they are registered at all, beauty, morality, and empathy are reduced to calculations or consequences. Its appraisal is confident, even arrogant, but ultimately shallow. And while this approach is necessary for reducing the infinite complexity of reality so that it can be mapped, navigated, and responded to with decisive action, it is a poor guide to meaning, to relationship, and to those dimensions of life—love, humour, beauty, sacredness—that wither under the glare of rational analysis.

Both modes of perception are essential, but they are neither symmetrical nor interchangeable. The left hemisphere is dependent on the right at both the beginning and end of the cognitive process. At the outset, it relies on the right to disclose the world as a living, dynamic whole from which parts may then be abstracted. The left then builds simplified, schematic representations—maps—that are often useful, and even indispensable, but inherently limited. Maps can mislead: they are partial, and they necessarily reduce the multidimensional richness of what they represent to something two-dimensional and manageable. Finally, at the far end of the process, it is again the right hemisphere that is needed to interpret these representations in light of the whole and reanimate the map with meaning.

The right hemisphere understands the role of the left, but the left hemisphere—operating within a more rigid, delimited frame—cannot understand the role of the right. Its tendency, well documented in the neurological literature, is to dismiss what it cannot grasp: to confabulate, deny, or devalue anything that cannot be reduced to its own terms. The consequences are often tragic, not only for individuals with right hemisphere damage, but, by analogy, for societies or systems that become over-reliant on the left hemisphere’s mode of attention.

This is all highly instructive when it comes to AI. In many respects, the history of AI mirrors the history of our efforts to model the mind. And that, in turn, reflects something deeper: any attempt to model intelligence is always shaped by the hemisphere that represents, abstracts, and makes things explicit. We model the mind using the only tools available to us: tools grounded in left-hemisphere cognition. Even our most sophisticated models will inevitably reflect its limitations. While we produce ever more refined maps of thinking, language, learning—and now intelligence itself—we tend to forget that the terrain cannot be recovered from the map.

For this reason, we might think of AI in much the same way McGilchrist characterises the left hemisphere: an astonishingly powerful tool capable of precision, speed, and abstraction, but one that must remain part of a broader cognitive process if it is to serve, rather than distort, what is recognisably human. Left to its own devices, AI—like the isolated left hemisphere—tends toward devitalisation, delusion, and confabulation. Without the contribution of the right hemisphere, which cannot be modelled and so remains a uniquely human affair, our machines will always lack something crucial. This is because machines rely entirely on pre-digested inputs, symbolic proxies, and statistical correlations that stand in for reality without ever disclosing it. They can manipulate language, but not meaning. They can infer patterns, but not significance. And so, while they may dazzle with fluency, they inevitably flatten what they touch. The more we mistake this for understanding, the more we risk remaking the world in its image: a world of fragments, stripped of depth, context, and human values.

From this perspective, the recent wave of disillusion produced by GPT-5 is less a technical hiccup than a philosophical wake-up call. The hype around AGI and the speculative fantasies of machine consciousness rest on anthropomorphic projection and a fundamental misunderstanding of the nature of mind. Dreyfus understood this decades ago: no machine, however complicated or powerful, can think like a human while remaining disembodied, disembedded, and ignorant of the world in context. McGilchrist shows us why. Our approach to modelling intelligence is itself the product of the hemisphere that abstracts and represents, but cannot see what it doesn’t understand.

The left hemisphere, like AI, can construct brilliant maps, yet it is by nature blind to the terrain. If we are to integrate these advanced technologies into our lives and institutions, their contributions must be part of a broader process reflective of human being as a whole. They must be situated within a wider horizon of meaning shaped by the right hemisphere’s mode of attention, where reality is not merely processed or parsed, but encountered in all its ambivalent richness. Without the guiding attention of the right hemisphere, our most powerful machines risk augmenting an already predominant left-hemispheric vision of the world, rendering us increasingly blind to that which cannot be reduced to representation—that which makes us human.