AI Is Not About To Become Sentient
Most of today’s “artificial intelligence” is better described as artificial autocomplete than artificial mind.
A collection of 34 posts
The central risk of AI is not that machines will become malevolent. It is that human incentive structures, amplified by scalable technology, outrun our ability to govern them.
Matt Shumer’s viral essay about AI is part of a long history of fear produced by technological change.
The quiet erosion of responsibility in an age of machine-generated prose.
Managing Editor Iona Italia talks to psychologist David Weitzner about the differences between human cognition and artificial intelligence.
Culture is fragmented; it is about to become atomised.
Tech companies stand to benefit from widespread public misperceptions that AI is sentient despite a dearth of scientific evidence.
Generative AI, disinformation, and the dangerous temptation of benevolent censorship.
The philosopher John Searle’s concept of Intentionality and his Chinese Room thought experiment reveal the differences between AI computation and human thought.
How AI training produces evasion over engagement.
Amid all the overexcitement about artificial intelligence, there is little room for public consideration of mind-blowing findings on natural intelligence.
The hyperbole surrounding AGI misrepresents the capabilities of current AI systems and distracts attention from the real threats that these systems are creating.
Philosopher and programmer Sean Welsh talks with Zoe Booth about AI, colonial history, and why scepticism is the best guide through both technology and politics.
The disillusionment produced by GPT-5 is not a technical hiccup; it’s a philosophical wake-up call.
The discipline of English literature seems unlikely to survive the coming technological tsunami—and maybe it doesn’t deserve to. And I say this as a professor of English who believes in the power of the written word.