Move 37 and the Coming Mindhack
What happens when human manipulation arrives at its Claude Mythos moment?
A collection of 36 posts
The Wikipedia knowledge monopoly is not ready for the Grokipedia threat.
Most of today’s “artificial intelligence” is better described as artificial autocomplete than artificial mind.
The central risk of AI is not that machines will become malevolent. It is that human incentive structures, amplified by scalable technology, outrun our ability to govern them.
Matt Shumer’s viral essay about AI is part of a long history of fear produced by technological change.
The quiet erosion of responsibility in an age of machine-generated prose.
Managing Editor Iona Italia talks to psychologist David Weitzner about the differences between human cognition and artificial intelligence.
Culture is fragmented; it is about to become atomised.
Tech companies stand to benefit from the widespread public misperception that AI is sentient, despite a dearth of scientific evidence.
Generative AI, disinformation, and the dangerous temptation of benevolent censorship.
The philosopher John Searle’s concept of Intentionality and his Chinese Room thought experiment reveal the differences between AI computation and human thought.
How AI training produces evasion over engagement.
Amid all the overexcitement about artificial intelligence, there is little room for public consideration of mind-blowing findings on natural intelligence.
The hyperbole surrounding AGI misrepresents the capabilities of current AI systems and distracts attention from the real threats that these systems are creating.
Philosopher and programmer Sean Welsh talks with Zoe Booth about AI, colonial history, and why scepticism is the best guide through both technology and politics.