

The New Information Wars

Generative AI, disinformation, and the dangerous temptation of benevolent censorship.

Photo by Grzegorz Walczak on Unsplash

Astonishing recent advances in generative AI mean that in a matter of seconds, with a prompt and a few clicks, anyone can generate a fabricated news story, a convincing deepfake video, or an audio recording of a politician saying something they never actually said. What once needed a coordinated effort by teams of skilled propagandists can now be achieved by a mischievous teenager with a laptop, or indeed any non-technical person with an axe to grind.

But the response to this new wave of disinformation may itself pose an iatrogenic danger. Across the democratic world, governments are racing to pre-empt the chaos of a supposedly post-truth media landscape, not only by requiring that platforms flag or remove falsehoods, but by engineering systems that filter, shape, and curate what citizens see online. Under the banner of “information integrity,” we are drifting toward a regime of censorship.

Australia offers one of the clearest illustrations of this paternalistic instinct. In November 2024, the government passed a world-first law, due to come into effect on 10 December, banning all Australians under the age of sixteen from holding social media accounts. The stated aim was child safety. Tech platforms were ordered to prevent under-16s from creating or running accounts, or face fines of up to fifty million Australian dollars.

There are undeniable risks for minors online. Australian eSafety Commissioner Julie Inman Grant has warned that artificial intelligence is already being used to generate child sexual abuse material, deepfakes, automated grooming, and child-like online personas, and that these technologies are increasingly being exploited to target young people. There are also legitimate concerns around other issues, including mental health and bullying. But excluding an entire demographic from the digital public square is a clumsy response. Minors are being banished not only from the toxic corners of the internet, but from the primary channels through which teenagers today express themselves and participate in culture. Instead of trying to distinguish between healthy and unhealthy patterns of engagement, or to support parental supervision, the law imposes a blanket prohibition, criminalising all sorts of normal and healthy behaviours.

YouTube, in a submission to lawmakers, warned that the ban might backfire by pushing teens to browse logged-out, stripping them of the safety filters, parental controls, and moderation tools designed for signed-in youth accounts. The company also argued that it should be exempt from the law on the strength of its educational content and restricted mode, and warned creators that the legislation could disrupt their audiences. Many tech-savvy teens will likely skirt the ban altogether by browsing from behind a foreign VPN.

But the deeper issue is that the Australian ban treats young people not as developing citizens capable of learning how to navigate the internet and avoid its dangers, but as passive subjects to be shielded from it. Like so many recent digital safety proposals, it reflects a growing impulse to replace discernment and parental responsibility with blanket restriction.