The Case Against Content Moderation

Aggressive content moderation is presented as a necessary response to hate speech and misinformation—but it's more like a moral panic.


I joined the internet writing platform Substack a year ago. My poor laptop was overloaded with creative writing that had no obvious path to publication, and I was going through a period of political re-examination and wanted to be able to, as it were, think out loud.

It turned out to be a far better decision than I could have expected. Not only could I post whatever I wanted—without needing to negotiate with editors or pay a submission fee to a literary magazine and get a form rejection in response—but there was a genuine community. It was common for writers to post paragraphs-long comments on other people’s work, and an air of thoughtfulness and civility prevailed that was rare in the era of quick-twitch social media.

So I was very surprised to discover, from an Atlantic article of 23 November 2023, that “Substack Has A Nazi Problem.” The piece was written by Jonathan M. Katz, who was at that time himself a Substacker, and who—after rooting around in Substack’s darker corners and finding 16 newsletters that contained “overt Nazi symbols”—declared that, “just beneath the surface, [Substack] has become a home and propagator of white supremacy and anti-Semitism.”

Katz’s piece triggered much handwringing within the Substack community. An open letter to Substack’s co-founders, Chris Best, Hamish McKenzie, and Jairaj Sethi, signed by 247 “Substackers Against Nazis,” demanded to know why Substack was “platforming and monetizing Nazis” and asked them not to put their “thumb on the scale” by promoting or monetizing Nazis or white supremacists.

The problem is that, as far as I can tell, the Substack leadership isn’t actually doing anything to promote the far right. The platform is simply upholding the free speech principles outlined in its Content Guidelines. These stipulate a very narrow set of circumstances under which the company may censor or prohibit content: for example, the publication of personal details without permission, and content that “incite[s] violence based on protected classes.” The guidelines of this US-based company closely follow the reigning Supreme Court interpretation of the American First Amendment. There is objectionable content on Substack, as Katz discovered, but its authors tend to steer well clear of directly “inciting violence.” So, in practice, what the Substackers Against Nazis are advocating for are modified Terms of Use that allow for more stringent content moderation.
