Building a Better Twitter

There might not be anyone alive who knows more about social-media moderation than Jim Rutt, a long-time technology investor and digital-media pioneer whose experience in the field goes back all the way to pre-World Wide Web services such as the Whole Earth ’Lectronic Link (often known as The WELL), Usenet, and CompuServe forums. Throughout the evolution of all such services, he notes, a common pattern stands out: System administrators initially treat these domains as highly permissive free-speech zones, only to eventually realize that unless some moderation standards are applied, the whole thing will become prohibitively toxic.

“The critical distinction is between moderation of ‘decorum’ (some might alternatively call it ‘behavior’) versus moderation of ‘content,’” Mr. Rutt recently wrote in Quillette. “Concerns about personal attacks, harassment, threats, bullying, and so on fall under ‘decorum.’ Think of it as a set of rules for how users of a platform or service communicate, irrespective of what they are trying to communicate. Examples of decorum rules include bans on profanity and racial slurs.” In Mr. Rutt’s view, moderating users’ substantive viewpoints is wrong, while moderating the manner in which they express those viewpoints is not only advisable, but necessary.

This counsel is worth considering now that Elon Musk has purchased Twitter, seemingly with a view toward reducing the limits imposed on what content users can post.

Times being what they are, news of Mr. Musk’s move was greeted with glee by many on the right side of the political spectrum, and dismay from many on the left. FOX News host Tucker Carlson has started tweeting again after boycotting the platform following his suspension earlier this year. Meanwhile, one widely circulated Canadian Press article bore the headline, “Elon Musk’s Twitter bid may push marginalized voices off the platform: Experts.” And a host of The View warned that Mr. Musk will “unleash the trolls.”

As the New York Times has reported, the SpaceX and Tesla multi-billionaire has an unpredictable leadership style, often taking no one’s counsel but his own. Not known for suffering from any lack of confidence, Mr. Musk may think he is smart enough to design a no-holds-barred approach to social-media operation that somehow won’t allow hate and harassment to completely overwhelm his new corporate possession. As Mr. Rutt notes, unfortunately, this is an impossible task.

From the time of Quillette’s inception, our writers have staked out positions that are strongly supportive of free speech and ideological pluralism. But even most libertarians will acknowledge that some rules are necessary when it comes to speech that is threatening, libellous, or persistently vexatious. We rightly mock those warnings circulated in ultra-progressive circles about micro-aggressions, safe spaces, “staying in your lane,” and all the rest. But it’s important to note that, like so much else that’s associated with woke culture, these overwrought prohibitions are extrapolations of legitimate concerns about how humans are affected by negative forms of communication. We are, after all, social creatures.

That said, the distinction between content and decorum isn’t always clear. We all agree that you shouldn’t be allowed to shout “fire” in a crowded theater without consequences. But what about attacking minority groups in agitated political contexts—during wartime or an acute economic crisis, for instance—in such a way that is designed to incite demagoguery and widespread violence? In such cases, some would argue, the entire world around us may be analogized to that crowded theater.

To take a more immediate example: During the COVID pandemic, various popular figures posted pseudo-scientific information about public health, vaccines in particular. Some social-media services blocked this material. Others didn’t. The former were assailed for compromising free speech, the latter for putting lives at risk. There’s no way to satisfy both camps, which is why social-media services have had such a difficult time defining the parameters of permissible speech.

What makes the situation more difficult for Mr. Musk is that he’s inheriting a social-media service that already has a somewhat disaffected user base. Even putting aside the problems of hate and harassment, the day-to-day experience of using Twitter can feel tense and joyless, with everyone knowing they’re one bad tweet away from a pile-on. Bots and foreign-funded troll armies lurk everywhere, causing a number of high-profile users to walk away from the service in recent years, citing Twitter’s negative effect on their mental health.

Here at Quillette, we’re used to getting a certain amount of Twitter abuse from militant progressives. But COVID has brought out a different breed of extremist—the radical conservative who denounces anyone defending vaccination and other public-health measures as a fascist (or worse). Our own editor-in-chief, Claire Lehmann, was subjected to a torrent of attacks, including death threats, over these issues. As she concluded in a recent column, “paradoxically, free speech conducted in good faith may be able to thrive only in places where the moderation of abuse is quite robust … The rule for healthy internet forums tends to be, ‘We don’t care what you think, we care how you act.’”

Much of the commentary on Mr. Musk’s takeover of Twitter presents content moderation in a one-dimensional way, as if there were some kind of giant Censorship Dial that the new owner was getting set to twist down. But the problem with Twitter’s current policy on content isn’t one dimensional: It serves up both false positives and false negatives—wrongly banning certain accounts for thoughtcrimes while permitting others to continue on the platform despite engaging in grotesquely abusive behavior. If Mr. Musk wants to begin fixing Twitter, both of these problems are ripe for repair.

There’s another possible fix, too, which neither side will like, but which everyone eventually might come to appreciate—and that is to begin charging Twitter users a fee for use of the service, and requiring that they register for the service using their real identity. At a stroke, this step would eliminate the bots, and incentivize more respectful forms of interaction.

Yes, some users would flee the service, either because they don’t want to pay or because they prefer to lob verbal grenades from behind an anonymous mask. Twitter would thereby become a smaller and less exciting place—but also one that features fewer angry meltdowns and hysterical call-outs. It’s a trade-off that many Twitter users would welcome.
