Science / Tech
GPT-5 and the Limits of Machine Intelligence
The disillusionment produced by GPT-5 is not a technical hiccup; it is a philosophical wake-up call.

Disappointment and a sense of deflation, but no longer denial. Following the release of OpenAI’s GPT-5, the internet was soon awash with tweets and posts from industry insiders reluctantly acknowledging the work of Silicon Valley gadfly Gary Marcus. Since the late 2010s, the cognitive scientist has warned of the limits of large language models (LLMs), much to the chagrin of deep learning enthusiasts and of figures like OpenAI CEO Sam Altman, who have publicly championed a headier narrative.
Marcus’s critique hinges on what he sees as the inherent fragility of deep learning: a data- and energy-hungry, brute-force approach to “understanding” and generating natural language that has proven dazzling yet fundamentally brittle. Those scare quotes are warranted. LLMs do not understand anything, not in the way we ordinarily mean the term. Instead, these vast statistical engines use enormous computational resources to predict the most statistically likely next word or token, based on patterns extracted from the collective human corpus. The results are so impressive that talk of artificial general intelligence (AGI) and even conscious machines has re-entered mainstream discourse. But such speculation rests on the sandy foundations of anthropomorphic projection and philosophical naivety, confusing surface fluency with depth and mimicry with mind.
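To make the mechanics concrete, the prediction step can be caricatured in a few lines of Python. The toy bigram counter below is nothing like a production transformer, which learns billions of parameters over subword tokens; it is a deliberately crude sketch, with an invented three-sentence corpus, of what “predicting the most statistically likely next word” literally means.

```python
# A minimal sketch of next-word prediction. Real LLMs use transformer
# networks over subword tokens; this toy bigram model is illustrative only.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased a mouse ."
).split()

# Count how often each word follows each other word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word after `word`."""
    counts = follows[word]
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (it follows 'the' twice, more than any rival)
```

Scaled up by many orders of magnitude and given a far more sophisticated notion of context, this is the family of trick that produces the fluency now being mistaken for mind.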
Marcus’s story recalls that of an earlier AI realist, whose work exposing the technology’s inherent limitations also made him something of a pariah. In the early 1980s, as optimism surged around so-called expert systems, symbolic logic engines designed to imitate human reasoning in the tradition later dubbed good old-fashioned AI (GOFAI), philosopher Hubert Dreyfus declared himself unconvinced. He had already spent over a decade challenging the foundational assumptions of AI research. His 1965 RAND report, followed by his 1972 book What Computers Can’t Do, argued that genuine intelligence is embodied, situated, and context-dependent, and therefore cannot be captured by rule-based systems or computational representations alone.
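By way of contrast, the rule-based style Dreyfus was criticising can be sketched just as briefly. The toy “expert system” below (its medical rules are invented for illustration, not drawn from any real system) encodes knowledge as explicit if-then rules and derives conclusions by forward chaining, the kind of transparent symbol manipulation on which GOFAI rested.

```python
# A minimal sketch of a rule-based expert system: knowledge lives in
# explicit if-then rules, and conclusions are derived by forward chaining.
# The rules here are invented toy examples.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Fire every rule whose premises are satisfied, repeating until
    no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_fever", "has_rash"}))
# -> includes 'suspect_measles' and 'recommend_isolation'
```

Every step here is inspectable and every inference follows a rule a human wrote down; for Dreyfus, that was precisely the problem.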
Drawing on Heidegger and Merleau-Ponty, Dreyfus contended that expertise and meaning arise not from rule-following but from embodied know-how and being-in-the-world—dimensions inaccessible to representational systems. These claims were met with hostility by many in the field, particularly at places like MIT, where symbolic AI was ascendant. Where his critics saw thinking as a problem of abstract symbol manipulation, Dreyfus insisted that such manipulation could never approximate the pre-reflective, intuitive grasp of meaning that characterises human being.
Just as Marcus objects to the flawed assumptions underlying modern LLMs, Dreyfus warned that no machine, no matter how powerful, could achieve genuinely human-like insight or expertise so long as it remained disembodied, disembedded, and blind to the meaningful whole within which human cognition always operates. Despite Dreyfus’s objections to symbolic AI, Marcus sees hope in revisiting GOFAI. Its fundamentally different approach, he argues, may offer much-needed resilience and improved reasoning capabilities where deep learning has passed the point of diminishing returns. He may well be correct. Nevertheless, there are reasons to suspect that Dreyfus’s half-century-old assessment remains as relevant as ever, particularly when it comes to the loftier aspirations of contemporary AI devotees.
Philosopher and neuroscientist Iain McGilchrist’s hemisphere theory lends additional weight to Dreyfus’s critique by shedding new light on how we understand the human mind. Unlike earlier theories about the differences between the brain’s hemispheres, which focused on what each hemisphere supposedly does, McGilchrist argues that the crucial difference lies in how each does it. Drawing upon his clinical experience as a psychiatrist, and upon a vast body of research on patients with brain injuries, he offers a fascinating account of the radically different worlds each hemisphere of the brain brings into being.
The right hemisphere allows us to see things as unique, context-dependent, ever-changing, never fully graspable, and never entirely separate from our involvement with them. It reveals a world of depth, ambiguity, beauty, and moral significance: a world in which we participate. It is richer, more truthful, but also harder to pin down in language and make explicit. And yet, for anything that truly matters—relationship, meaning, deep and intuitive understanding—it is indispensable.
The left hemisphere, on the other hand, offers a fragmented and decontextualised vision of the world for the purpose of control and manipulation. It perceives reality as a collection of static, isolated parts: abstract, disembodied aspects stripped of nuance, ambiguity, or emotional resonance. In this mode, apprehension becomes a matter of bottom-up construction aimed at producing conclusions that appear unimpeachable precisely because they have excluded everything that resists categorisation.
Unlike the right hemisphere, the left perceives an inanimate universe, one in which the emphasis falls on utility over truth and efficiency over understanding. There is clarity, but it comes at the cost of depth. When they are registered at all, beauty, morality, and empathy are reduced to calculations or consequences. Its appraisal is confident, even arrogant, but ultimately shallow. And while this approach is necessary for reducing the infinite complexity of reality so that it can be mapped, navigated, and responded to with decisive action, it is a poor guide to relationship and to those dimensions of life, such as love, humour, beauty, meaning, and sacredness, that wither under the glare of rational analysis.
Both modes of perception are essential, but they are neither symmetrical nor interchangeable. The left hemisphere is dependent on the right at both the beginning and end of the cognitive process. At the outset, it relies on the right to disclose the world as a living, dynamic whole from which parts may then be abstracted. The left then builds simplified, schematic representations—maps—that are often useful, and even indispensable, but inherently limited. Maps can mislead: they are partial, and they necessarily reduce the multidimensional richness of what they represent to something two-dimensional and manageable. Finally, at the far end of the process, it is again the right hemisphere that is needed to interpret these representations in light of the whole and reanimate the map with meaning.