
Musk and Moderation

A still from “Return To Space,” 2022. © Netflix / Courtesy Everett Collection

Reports that Twitter has accepted Elon Musk’s offer to buy the company for $54.20 a share have provoked much hand-wringing about his attitude toward free speech, especially with respect to possible changes in the social media platform’s moderation policies. So far, the discussion has been a largely anodyne and clueless collision of absolutist narratives. A realistic conversation should not start with “moderation, yes or no?” but rather “what kinds of moderation would make Twitter a fairer and more effective marketplace of ideas?”

For over 40 years, I have been involved with moderating online communities, designing community software, and operating online community businesses. I have also been a power user in every generation of online community platforms from 1981 to today. With that experience in mind, I would like to offer a more nuanced look at the nature of moderation, and to suggest that Musk’s acquisition of Twitter could be a good thing.

Let’s start with some examples of the criticism Musk’s takeover has elicited. From the Washington Post on April 17th:

Alex Stamos, the former Facebook chief security officer who called out Russian disinformation on that platform during the 2016 election, said Musk has a notion of Twitter as a public square for free expression that is divorced from the reality of many individuals and failed to acknowledge that it would give more power to the most powerful.

Without moderation, Stamos said, “anybody who expresses an opinion ends up with every form of casual insult ranging to death and rape threats. That is the baseline of the Internet. If you want people to be able to interact, you need to have basic rules.”

And from the same paper, the following day:

Elon Musk’s vision for Twitter is a public town square where there are few restrictions on what people can or can’t say on the Internet.

But the utopian ideal envisioned by the Tesla CEO ceased to exist long ago and doesn’t take into account what’s happening in the real world, tech executives, Twitter employees and Silicon Valley insiders say. As Musk seeks a $43 billion hostile takeover bid for Twitter, critics say his ambition for what the platform should be — a largely unpoliced space rid of censorship — is naive, would hurt the company’s growth prospects and would render the platform unsafe.

Criticisms of this sort are examples of strawman argumentation. As Musk indicated in his TED talk on April 14th, he is well aware that moderation is needed on Twitter—he doesn’t appear to be against “basic rules,” nor is he in favor of “a largely unpoliced space.” On the other hand, he doesn’t have any experience operating many-to-many online communities and may not be able to articulate what it means to move Twitter in a more free speech direction while preserving the kinds of moderation that are necessary for operating a successful online community.

Decorum moderation versus content moderation

The first step is to distinguish between different kinds of moderation. Doing so allows discussions to engage with specific issues of life online while avoiding simplistic games of tribal signaling. The critical distinction is between moderation of “decorum” (some might alternatively call it “behavior”) versus moderation of “content.” Concerns about personal attacks, harassment, threats, bullying, and so on fall under “decorum.” Think of it as a set of rules for how users of a platform or service communicate, irrespective of what they are trying to communicate. Examples of decorum rules include bans on profanity and racial slurs. Facebook’s somewhat ludicrous “no nipples” rule is an example of decorum moderation.

Decorum moderation is analogous to “manners” in face-to-face society. Online communities can and should vary in their decorum rules, just as face-to-face communities do with respect to manners. You might offer your close friends a vivid account of a recent dating debacle that would not be suitable for your grandmother’s Sunday dinner table. Decorum rules on a Disney family-friendly site might be quite different than they are on an adult-oriented site like Twitter.

Online users choose the communities in which they feel comfortable and welcome, and decorum rules and moderation are an important part of that. The limited marketplace success of communities with minimal decorum moderation suggests that most people want relatively strict decorum rules, so long as they are enforced in an evenhanded fashion. Musk should be clear that his Twitter will have decorum moderation.

Content moderation is moderation of the substance of posts and comments. Under content moderation, posts and comments on certain topics are banned or otherwise restricted no matter how decorously they are presented. Many platforms ban “doxxing” or other violations of user privacy. Most communities ban direct threats of violence, advocacy of the more serious varieties of criminal or terroristic behavior, and defamation. Many communities ban inherently dangerous content such as instructions for making bombs or poisons.

There is relatively little controversy about the need for this kind of content moderation, though sometimes there is controversy about the specifics. It is reasonable to expect that Twitter under Musk’s leadership will retain content moderation of this sort, though he will probably draw the lines more liberally than others might.

Point-of-view moderation

Where things get controversial, and where I believe Musk’s main concerns lie, is around the subset of content moderation based on “point-of-view.” An example of point-of-view moderation was Twitter and Facebook’s censorship of QAnon content in 2020 and 2021, irrespective of its decorum. Hundreds of thousands of tweets were taken down, and thousands of Twitter users were banned. On Facebook, hundreds of groups were summarily shut down.

QAnon is an ideology composed of bad ideas that are extremely unlikely to be true. But I could say the same about Christianity, astrology, and Marxism-Leninism, all of which have significant presences on Twitter and Facebook. The public square in a free society should not be restricted to ideas that I approve of, nor should anybody else’s ideas of right or wrong be the basis for restricting decorous speech beyond the small subset of cases discussed above.

Other examples of point-of-view moderation are perhaps less dramatic but nonetheless disturbing. An example with which I’m familiar is an idealistic political startup movement called Unity 2020, created to challenge the Democratic-Republican political duopoly in the United States with a proposed centrist slate for president and vice president in the 2020 elections. It was a quixotic project, albeit one that I thought might lead to something interesting in the future. I know the people behind it and it was certainly a good-faith contribution to our political discourse. Nevertheless, Twitter deleted the main Unity 2020 account in September 2020, and Facebook banned its founder.

That the platforms should target a startup political movement of this kind ought to be disturbing to anyone who thinks we need new ideas that might improve our dysfunctional politics. While point-of-view-based moderation of new ideas might help to suppress mad and bad ideologies like QAnon, it also risks suppressing the kind of fresh thinking that we need if humanity is to survive.

Or consider the GameB Facebook Group of which I am a co-founder. In our own minds at least, this is a “do-gooder” organization, uninvolved with partisan politics, which has a remarkably well behaved user community. For reasons that the company has never disclosed, Facebook abruptly handed all three group admins permanent and unappealable bans. This seems very unlikely to have been based on violations of decorum policy. Were it not for the fact that we had loud and influential friends, including a few inside Facebook, the admins would probably not have been reinstated. How many other good upstart ideas and movements without influential friends have simply disappeared following an assault by point-of-view moderation?

I know several other leaders in the broader “social change” movement known as the Liminal Web. These are good people organizing for what they believe to be the betterment of humanity. Many of them have been given time-outs or had videos demonetized or been banned or disciplined at one time or another by the various platforms. The platforms as they are operated today appear to have a systemic bias against any ideas that constitute a challenge to the status quo, no matter how well intended or thought out.

In a world in which the status quo is doing a pretty terrible job of dealing with our severe and worsening societal problems, it can’t be beneficial to our overall portfolio of live ideas to allow our public square platforms to pick and choose, especially not when those choices are informed by an apparent pro-status-quo bias. Such platforms need to become “marketplaces of ideas” in which every good-faith voice gets a hearing—even if it is only from its own paltry following. Ideas should spread and prosper or fail and vanish based on their ability to convince and motivate others. Their legitimacy should certainly not be determined by the ideological biases of a small number of gatekeepers in Silicon Valley.

This is what I think Musk has in mind when he says he wants to “increase free speech on Twitter.” He appears to believe sincerely that open inquiry and free expression have been the greatest advantages democracies have enjoyed over their authoritarian competitors, and that we are in danger of squandering those advantages.

Strong and sensible decorum moderation makes a wide and free marketplace of ideas more practical. It is the name-calling, flaming, mob harassment, and other kinds of personal attacks that make discussion of controversial issues so difficult and ugly online. Failure to enforce decorum leads to the “heckler’s veto,” whereby vicious personal attacks from dissenters make discussion so painful that reasonable people are driven away.

Well-defined and evenly applied rules of decorum can help users more comfortably explore a much wider range of ideas. It is best to prevent the heckler’s veto through decorum moderation, and to insist that dissenters answer arguments they think are bad with counter-arguments they think are better.

Musk could start by telling his critics that he supports careful and coherent decorum rules, and that they will be applied in an evenhanded manner. He could add that Twitter will engage in limited and specific content moderation of illegal or inherently dangerous content, but will back away from the inevitably biased (and expensive) game of point-of-view moderation so that the platform can become the open and fair marketplace of ideas we need.

Enforcement and appeal

But building trust in any moderation scheme requires a significant increase in precision and transparency, and a real right of appeal. Today, content takedowns and account suspensions and bans typically reference some unspecified “violation of our Terms of Service,” and direct users to a long and murky document that tells them nothing useful about what they did wrong.

Any act of moderation should cite both the specific post(s) or comment(s) deemed to be in violation and a numbered section of the Terms of Service, much like a criminal statute. Each section should be succinct and intelligible—probably not more than 100 words in plain English—and be translated into all languages supported by the platform. This is hardly beyond the capacity of multibillion-dollar companies.
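
For concreteness, a notice of this sort could be represented as a simple data structure. This is a minimal sketch of the idea; the field names are illustrative assumptions, not any platform’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class ModerationNotice:
    """Hypothetical moderation notice citing specific content and a specific rule."""
    user_id: str
    post_ids: list[str]  # the exact posts or comments found in violation
    tos_section: str     # e.g. "4.2" -- a numbered, plain-English rule of 100 words or fewer
    rule_text: str       # the full text of that section, in the user's language
    action: str          # "warning", "takedown", "suspension", or "ban"
```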

All acts of moderation other than warnings should be appealable to a human, since many moderation actions today are conducted algorithmically. The appeal to human review should take no longer than 24 hours. A second level of appeal should permit a user to stake anywhere between $100 and $1 million and demand a review by a professional independent arbitrator to determine whether the specified post(s) actually violated the referenced section of the Terms of Service. As in “baseball arbitration,” the arbitrator must find for the user or for the platform in a binary manner. If the user prevails, the platform pays the user 10 times their stake (minus the $100 arbitration fee). If the platform prevails, the user loses their stake, the first $100 of which goes to the arbitrator.
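
To make the arithmetic concrete, here is a minimal sketch of that payout rule in Python. The figures follow the description above; whether the winning user’s original stake is returned on top of the 10-times payment is not specified, so this sketch assumes the 10-times payment is the entire transfer.

```python
MIN_STAKE = 100        # dollars
MAX_STAKE = 1_000_000  # dollars
ARBITRATION_FEE = 100  # dollars, paid to the arbitrator in either outcome

def settle_appeal(stake: float, user_prevails: bool) -> dict[str, float]:
    """Binary 'baseball arbitration': the arbitrator finds wholly for one side."""
    if not MIN_STAKE <= stake <= MAX_STAKE:
        raise ValueError("stake must be between $100 and $1,000,000")
    if user_prevails:
        # The platform pays 10x the stake; the $100 fee comes out of the award.
        return {"user": 10 * stake - ARBITRATION_FEE,
                "platform": -10 * stake,
                "arbitrator": ARBITRATION_FEE}
    # The platform keeps the stake, the first $100 of which goes to the arbitrator.
    return {"user": -stake,
            "platform": stake - ARBITRATION_FEE,
            "arbitrator": ARBITRATION_FEE}
```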

To ensure that users of modest means have a meaningful second appeal available, a user should be able to syndicate their appeals—that is, post the proposed appeal on a market, along with their initial stake (if any), where third parties can then increase the size of the stake. Third-party stakers get 80 percent of any win, and the appealing user gets the remaining 20 percent. Once the total stake reaches $100 or a higher level specified by the appellant, the appeal would be automatically filed with the platform. Third-party stakers would retain the ability to increase the stake until a decision is returned, as sketched below.
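
Here is a sketch of how a syndicated win might be divided. The 80/20 split comes from the proposal above; allocating the backers’ 80 percent pro rata by stake size is my assumption, since the text does not specify it.

```python
def split_winnings(total_award: float,
                   third_party_stakes: dict[str, float]) -> dict[str, float]:
    """Divide an arbitration win: appellant gets 20%, backers split 80%.

    The pro-rata allocation among backers is an assumption; the proposal
    only fixes the overall 80/20 split.
    """
    payouts = {"appellant": 0.2 * total_award}
    backer_pool = 0.8 * total_award
    total_backed = sum(third_party_stakes.values())
    for backer, amount in third_party_stakes.items():
        payouts[backer] = backer_pool * (amount / total_backed)
    return payouts

# Example: a $10,000 win backed by two strangers who staked $300 and $700.
print(split_winnings(10_000, {"backer_a": 300, "backer_b": 700}))
# {'appellant': 2000.0, 'backer_a': 2400.0, 'backer_b': 5600.0}
```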

Such a system would enable impecunious users who believe they have been wronged to test the quality of their claim and potentially receive a large payout if they are vindicated. If no one on the market supports their appeal, that is a signal that their case is weak. If it is strong, the claim might attract substantial support. The 10-times payout ratio incentivizes platform operators to get contested decisions right at least 90 percent of the time, which is a pretty decent standard.
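
A quick expected-value check of that claim, under the assumption that the flows described above are the platform’s only stake-related gains and losses:

```python
def platform_ev(stake: float, error_rate: float, fee: float = 100.0) -> float:
    """Expected platform gain per appeal, given how often its calls are overturned."""
    keep = (1 - error_rate) * (stake - fee)  # platform keeps the stake, minus the fee
    pay = error_rate * (10 * stake)          # platform pays 10x when it loses
    return keep - pay

# For a $1,000 stake the break-even error rate is about 8 percent, i.e. the
# platform profits on appeals only if it is right better than nine times in ten.
for e in (0.05, 0.08, 0.10):
    print(f"error rate {e:.0%}: expected gain per appeal = ${platform_ev(1000, e):,.0f}")
```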

The broader perspective

Fixing moderation, however, is not, by itself, enough to make Twitter a fair and effective marketplace in which ideas can rise or fall on their merits. The platform should move away from easily obtained anonymity and require either real-name ID or a pseudonymous ID that limits each account to one real person and guarantees a “proof of humanity” behind all accounts. The last 40 years have demonstrated that anonymous discourse is generally worse discourse, and real-name or one-person-one-ID verification would substantially reduce the presence of bots and sock-puppet collusion networks.

Adding viscosity may also help. One suggestion making the rounds is limiting retweets to two levels. That is, if I retweet something, my readers can retweet it, but thereafter people would have to cut-and-paste it for another round of propagation. This would reduce, but not entirely thwart, the widespread propagation of content. Limiting the number of retweets per day per user is another viscosity proposal. If each user had only two retweets a day, they might be more discerning about what they circulate. It would slow down the flow of messages and quite possibly increase the general quality.
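
Here is a sketch of how these two viscosity rules might be enforced, assuming each tweet carries a count of how many retweet hops separate it from the original; the names and the exact check are illustrative.

```python
MAX_RETWEET_DEPTH = 2  # original -> my retweet -> my readers' retweets, then stop
DAILY_RETWEET_CAP = 2  # per-user daily budget, per the second proposal

def can_retweet(tweet_depth: int, retweets_today: int) -> bool:
    """Allow a retweet only within the depth limit and the user's daily budget.

    A retweet of an original (depth 0) creates a depth-1 tweet; a retweet of
    that creates depth 2, which can only be propagated further by cut-and-paste.
    """
    return tweet_depth < MAX_RETWEET_DEPTH and retweets_today < DAILY_RETWEET_CAP
```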

As Musk has said, a move away from a nearly entirely advertising-supported model would also be a huge help in creating a healthier information ecosystem. In an ad-based environment, the platform operator’s economic incentives keep users online for as long as possible to generate the largest possible ad inventory. This drives operators to preferentially offer users the most inflammatory and click-baity material to “increase engagement.” If it stirs up a big fight, so much the better. In a subscription-based model, on the other hand, the platform operator’s incentive shifts towards providing the most utility to the user in the least amount of time online.

Musk’s suggestion of open-sourcing the feed algorithm is potentially a good one, though there is certainly a risk that it would make the algorithm more gameable. Better still would be to provide a marketplace of open-source feed algorithms provided by third parties that allows users to select their own and pay a small fee (a few cents per month) for its use. A diversified ecosystem of feed algorithms will be much harder to exploit.
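
A minimal sketch of what such a marketplace’s plug-in interface might look like; the Protocol name, method signature, and fee field are all illustrative assumptions rather than anything Twitter has proposed.

```python
from typing import Protocol

class FeedAlgorithm(Protocol):
    """Interface a third-party, open-source ranking algorithm would implement."""
    name: str
    monthly_fee_cents: int  # the "few cents per month" the user pays its author

    def rank(self, candidates: list[dict], user_profile: dict) -> list[dict]:
        """Return the candidate tweets in the order this algorithm would show them."""
        ...

class ChronologicalFeed:
    """Trivial example plug-in: newest first, no engagement optimization."""
    name = "chronological"
    monthly_fee_cents = 5

    def rank(self, candidates, user_profile):
        return sorted(candidates, key=lambda t: t["created_at"], reverse=True)
```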

If changes of the sort I have recommended here were implemented at Twitter, they would likely receive strong support from the vast preponderance of actual users. Others will no doubt vehemently disagree. But as we embark upon this debate, it is important to remember that there is no utopian solution available that will satisfy everyone. Any reforms will require trade-offs of one sort or another, because while the platform’s policies can be amended, human nature cannot. The immediate task at hand, therefore, is not to perfect the Twitter experience but to improve it. And as Musk seeks to undertake this challenging task, his critics should resist the temptation to make an unattainable perfection the enemy of the good.
