
Facebook Has a Right to Block ‘Hate Speech’—But Here’s Why It Shouldn’t

The article that follows is the first instalment of “Who Controls the Platform?”—a multi-part Quillette series authored by social-media insiders. Our editors invite submissions to this series, which may be directed to

In late August, I wrote a note to my then-colleagues at Facebook about the issues I saw with political diversity inside the company. You may have read it, because someone leaked the memo to the New York Times, and it spread outward rapidly from there. Since then, a lot has happened, including my departure from Facebook. I never intended my memos to leak publicly—they were written for an internal corporate audience. But now that I’ve left the company, there’s a lot more I can say about how I got involved, how Facebook’s draconian content policy evolved, and what I think should be done to fix it.

*     *     *

My job at Facebook never had anything to do with politics, speech, or content policy—not officially. I was hired as a software engineer, and I eventually led a number of product teams, most of which were focused on user experience. But issues related to politics and policy were central to why I had come to Facebook in the first place.

When I joined the Facebook team in 2012, the company’s mission was to “make the world more open and connected, and give people the power to share.” I joined because I began to recognize the central importance of the second half of that mission—give people the power to share—in particular. A hundred years from now, I think we’ll look back and recognize advances in communication technologies—technologies that make it faster and easier for one person to get an idea out of their head and in front of someone else (or the whole world)—as underpinning some of the most significant advances humanity has ever witnessed. I still believe this. It’s why I joined Facebook.

And for about five years, we made headway. Both the company and I had our share of ups, downs, growth and setbacks. But, by and large, we aspired to be a transparent carrier of people’s stories and ideas. When my team was building the “Paper” Facebook app, and then the later redesigned News Feed, we metaphorically aspired for our designs to be like a drinking glass: invisible. Our goal was to get out of the way and let the content shine through. Facebook’s content policy reflected this, too. For a long time, the company was a vociferous (even if sometimes unprincipled) proponent of free speech.

As of 2013, this was essentially Facebook’s content policy: “We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual (e.g. bullying).”

By the time the 2016 U.S. election craze began (particularly after Donald Trump secured the Republican nomination), however, things had changed. The combination of Facebook’s corporate encouragement to “bring your authentic self to work” along with the overwhelmingly left-leaning political demographics of my former colleagues meant that left-leaning politics had arrived on campus. Employees plastered up Barack Obama “HOPE” and “Black Lives Matter” posters. The official campus art program began to focus on left-leaning social issues. In Facebook’s Seattle office, there’s an entire wall that proudly features the hashtags of just about every left-wing cause you can imagine—from “#RESIST” to “#METOO.”

In our weekly Q&As with Mark Zuckerberg (known internally as “Zuck”), the questions reflected the politicization. I’m paraphrasing here, but questions such as “What are we doing about those affected by the Trump presidency?” and “Why is Peter Thiel, who supports Trump, still on our board?” became common. And to his credit, Zuck always handled these questions with grace and clarity. But while Mark supported political diversity, the constant badgering of Facebook’s leadership by indignant and often politically intolerant employees increasingly began to define the atmosphere.

As this culture developed inside the company, no one openly objected. This was perhaps because dissenting employees, having watched the broader culture embrace political correctness, anticipated what would happen if they stepped out of line on issues related to “equality,” “diversity,” or “social justice.” The question was put to rest when “Trump Supporters Welcome” posters appeared on campus—and were promptly torn down in a fit of vigilante moral outrage by other employees. Then Palmer Luckey, boy-genius Oculus VR founder, whose company we acquired for billions of dollars, was put through a witch hunt and subsequently fired because he gave $10,000 to fund anti-Hillary ads. Still feeling brave?

It’s not a coincidence that it was around this time that Facebook’s content policy evolved to more broadly define “hate speech.” The internal political monoculture and external calls from left-leaning interest groups for us to “do something” about hateful speech combined to create a sort of perfect storm.

As the content policy evolved to incorporate more expansive hate speech provisions, employees who objected privately remained silent in public. This was a grave mistake, and I wish I’d recognized the scope of the threat before these values became deeply rooted in our corporate culture. The evolution of our content policy not only risked the core of Facebook’s mission, but jeopardized my own alignment with the company. As a result, my primary intellectual focus became Facebook’s content policy.

I quickly discovered that I couldn’t even talk about these issues without being called a “hatemonger” by colleagues. To counter this, I started a political diversity effort to create a culture in which employees could talk about these issues without risking their reputations and careers. Unfortunately, while the effort was well received by the 1,000 employees who joined it, and by most senior Facebook leaders, it became clear that Facebook’s leadership was committed to sacrificing free expression in the name of “protecting” people. As a result, I left the company in October.

The posters that kicked off the “FB’ers for Political Diversity” group. The quotes come from Facebook employees who’d reached out to me after a post I wrote criticizing left-leaning art sparked a moral-outrage mob that tried to make me apologize for offending colleagues

Let’s fast-forward to the present day. This is Facebook’s summary of their current hate speech policy:

We define hate speech as a direct attack on people based on what we call protected characteristics—race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.

The policy aims to protect people from seeing content they feel attacked by. It doesn’t just apply to direct attacks on specific individuals (unlike the 2013 policy), but also prohibits attacks on “groups of people who share one of the above-listed characteristics.”

If you think this is reasonable, then you probably haven’t looked closely at how Facebook defines “attack.” Simply saying you dislike someone with reference to a “protected characteristic” (e.g., “I dislike Muslims who believe in Sharia law”) or applying a form of moral judgment (e.g., “Islamic fundamentalists who forcibly perform genital mutilation on women are barbaric”) are both technically considered “Tier-2” hate speech attacks, and are prohibited on the platform.

This kind of social-media policy is dangerous, impractical, and unnecessary.

*     *     *

The trouble with hate speech policies begins with the fact that there are no principles that can be fairly and consistently applied to distinguish hateful speech from speech that is not. Hatred is a feeling, and a policy that hinges on whether a speaker feels hatred is impossible to enforce. As anyone who’s ever argued with a spouse or a friend has experienced, grokking someone’s intent is often very difficult.

As a result, hate speech policies rarely just leave it at that. Facebook’s policy goes on to list a series of “protected characteristics” that, if targeted, constitute supposedly hateful intent. But what makes attacking these characteristics uniquely hateful? And where do these protected characteristics even come from? In the United States, there are nine federally protected classes. California protects 12. The United Kingdom protects 10. Facebook has chosen 11 publicly, though internally they define 17. The truth is, any list of protected characteristics is essentially arbitrary. Absent a principled basis, these are lists that are only going to expand with time as interest and identity groups claim to be offended, and institutions cater to the most sensitive and easily offended among us.

The inevitable result of this policy metastasis is that, eventually, anything that anyone finds remotely offensive will be prohibited. Mark Zuckerberg recently posted a note that not only seemed to acknowledge this, but included a handy graphic describing how Facebook is now beginning to down-rank content that isn’t prohibited, but is merely borderline.

Graph contained in Mark Zuckerberg’s November 15, 2018 post titled, ‘A Blueprint for Content Governance and Enforcement’

Almost everything you can say is offensive to somebody. Offense isn’t a clear standard like imminent lawless action. It is subjective—left up to the offended to call it when they see it.

On one occasion, a colleague declared that I had offended them by criticizing a recently installed art piece in Facebook’s newest Menlo Park office. They explained that as a transgender woman, they felt the art represented their identity, told me they “didn’t care about my reasoning,” and that the fact they felt offended was enough to warrant an apology from me. Offense (or purported offense) can be wielded as a political weapon: An interest group (or a self-appointed representative of one) claims to be offended and demands an apology—and, implicitly with it, the moral and political upper hand. When I told my colleague that I meant what I said, that I didn’t think it was reasonable for them to be offended, and, therefore, that I wouldn’t apologize, they were left speechless—and powerless over me. This can be awkward and takes social confidence to do—I don’t want to offend anyone—but the alternative is far worse.

Consider Joel Kaplan, Facebook’s VP for Global Public Policy—and a close friend of recently confirmed U.S. Supreme Court Justice Brett Kavanaugh—who unnecessarily apologized to Facebook employees after attending Kavanaugh’s congressional hearing. Predictably, after securing an apology from him, the mob didn’t back down. Instead, it doubled down. Some demanded Kaplan be fired. Others suggested Facebook donate to #MeToo causes. Still others used the episode as an excuse to berate senior executives. During an internal town hall about the controversy, employees interrupted, barked and pointed at Zuck and Sheryl Sandberg with such hostility that several long-time employees walked away, concluding that the company “needed a cultural reset.” The lesson here is that while “offense” is certainly something to be avoided interpersonally, it is too subjective and ripe for abuse to be used as a policy standard.

Perhaps even more importantly, you cannot prohibit controversy and offense without destroying the foundation needed to advance new ideas. History is full of important ideas, like heliocentrism and evolution, that despite later being shown to be true were seen as deeply controversial and offensive because they challenged strongly held beliefs. Risking being offended is the ante we all pay to advance our understanding of the world.

But let’s say you’re not concerned about the slippery slope of protected characteristics, and you’re also unconcerned with the controversy endemic to new ideas. How about the fact that the truths you’re already confident in—for example, that racism is abhorrent—are difficult to internalize if they are treated as holy writ in an environment where people aren’t allowed to be wrong or offend others? Members of each generation must re-learn important truths for themselves (“Really, why is racism bad?”). “Unassailable” truths turn brittle with age, leaving them open to popular suspicion. To maintain the strength of our values, we need to watch them sustain the weight of evidence, argument and refutation. Such a free exchange of ideas will not only create the conditions necessary for progress and individual understanding, but also cultivate the resilience that much of modern culture so sorely lacks.

*     *     *

But let’s now come down to ground level, and focus on how Facebook’s policies actually work.

When a post is reported as offensive on Facebook (or is flagged by Facebook’s automated systems), it goes into a queue of content requiring human moderation. That queue is processed by a team of about 8,000 (soon to be 15,000) contractors. These workers have little to no relevant experience or education, and are often staffed out of call centers around the world. Their primary training about Facebook’s Community Standards exists in the form of 1,400 pages of rules spread across dozens of PowerPoint presentations and Excel spreadsheets. Many of these workers use Google Translate to make sense of these rules. And once trained, they typically have eight to 10 seconds to make a decision on each post. Clearly, they are not expected to have a deep understanding of the philosophical rationale behind Facebook’s policies.

As a result, they often make wrong decisions. And that means the experience of having content moderated on a day-to-day basis will be inconsistent for users. This is why your own experience with content moderation not only probably feels chaotic, but is (in fact) barely better than random. It’s not just you. This is true for everyone.

Inevitably, some of the moderation decisions will affect prominent users, or frustrate a critical mass of ordinary users to the point that they seek media attention. When this happens, the case gets escalated inside Facebook, and a more senior employee reviews the case to consider reversing the moderation decision. Sometimes, the rules are ignored to insulate Facebook from “PR Risk.” Other times, the rules are applied more stringently when governments that are more likely to fine or regulate Facebook might get involved. Given how inconsistent and slapdash the initial moderation decisions are, it’s no surprise that reversals are frequent. Week after week, despite additional training, I’ve watched content moderators take down posts that simply contained photos of guns—even though the policy only prohibits firearm sales. It’s hard to overstate how sloppy this whole process is.

There is no path for something like this to improve. Many at Facebook, with admirable Silicon Valley ambition, think they can iterate their way out of this problem. This is the fundamental impasse I came to with Facebook’s leadership: They think they’ll be able to clarify the policies sufficiently to enforce them consistently, or use artificial intelligence (AI) to eliminate human variance. Both of these approaches are hopeless.

Iteration works when you’ve got a solid foundation to build on and optimize. But the Facebook hate speech policy has no such solid foundation because “hate speech” is not a valid concept in the first place. It lacks a principled definition—necessarily, because “hateful” speech isn’t distinguishable from subjectively offensive speech—and no amount of iteration or willpower will change that.

Consequently, hate speech enforcement doesn’t have a human variance problem that AI can solve. Machine learning (the relevant form of AI) works when the data is clear, consistent, and doesn’t require human discretion or context. For example, a machine-learning algorithm could “learn” to recognize a human face by reference to millions of other correctly identified human-face images. But the hate speech policy and Facebook’s enforcement of it is anything but clear and consistent, and everything about it requires human discretion and context.

Case in point: When Facebook began the internal task of deciding whether to follow Apple’s lead in banning Alex Jones, even that one limited task required a team of (human) employees scouring months of Jones’ historical Facebook posts to find borderline content that might be used to justify a ban. In practice, the decision was made for political reasons, and the exercise was largely redundant. AI has no role in this sort of process.

*     *     *

No one likes hateful speech, and that certainly includes me. I don’t want you, your friends, or your loved ones to be attacked in any way. And I have a great deal of sympathy for anyone who does get attacked—especially for their immutable (meaning unimportant, as far as I’m concerned) characteristics. Such attacks are morally repugnant. I suspect we all agree on that.

But given all of the above, I think we’re losing the forest for the trees on this issue. “Hate speech” policies may be dangerous and impractical, but that’s not true of anti-harassment policies, which can be defined clearly and applied with more clarity. The same is true of laws that prohibit threats, intimidation and incitement to imminent violence. Indeed, most forms of interpersonal abuse that people expect to be covered by hate speech policies—i.e., individual, targeted attacks—are already covered by anti-harassment policies and existing laws.

So the real question is: Does it still make sense to pursue hate speech policies at all? I think the answer is a resounding “no.” Platforms would be better served by scrapping these policies altogether. But since all signs point to platforms doubling down on existing policies, what’s a user to do?

First, it’s important to recognize that much of the content that violates Facebook’s content policy never gets taken down. I’d be surprised if moral criticism of religious groups, for example, resulted in enforcement by moderators today, despite being (as I noted above) technically prohibited by Facebook’s policy. This is a short-lived point, because Facebook is actively working on closing this gap, but in the meantime, I’d encourage you to not let the policies get in your way. Say what you think is right and true, and let the platforms deal with it. One great aspect of these platforms being private (despite some clamoring for them to be considered “public squares”) is that the worst they can do is kick you off. They can’t stop you from using an alternate platform, starting an offline movement, or complaining loudly to the press. And, most importantly, they generally can’t throw you in jail—at least not in the United States.

Second, we should be mindful of the full context—that social media can be both powerfully good and powerfully bad for our lives—when deciding how to use it. For most of us, it’s both good and bad. The truth is, social media is a new phenomenon and frankly no one—including me and my former colleagues at Facebook—has figured out how to perfect it. Facebook should acknowledge this and remind everyone to be mindful about how they use the platform.

That said, you shouldn’t wait for Facebook to figure out how to properly contextualize everything you see. You can and should take on that responsibility yourself. Specifically, you should recognize that what you find immediately engaging isn’t the same thing as what’s true, let alone what’s moral, kind, or just. Simply acknowledging this goes a long way toward correctly framing an intellectually and emotionally healthy strategy for using social media. The content you are drawn to—and that Facebook’s ranking promotes—may be immoral, unkind or wrong—or not. This kind of vigilant awareness builds resilience and thoughtfulness, rather than dependence on a potentially Orwellian institution to insulate us from thinking in the first place.

Whether that helps or not, we should recognize that none of us are entitled to have Facebook (or any other social media service) work these issues out to our satisfaction. Like Twitter and YouTube, Facebook is a private company that we interact with on a wholly voluntary basis—which should mean “to mutual benefit.” As customers, we should give them feedback when we think they’re screwing up. But they have a moral and legal right to ignore that feedback. And we have a right to leave, to find or build alternate platforms, or to decide that this whole social media thing isn’t worth it.

The fact that it would be hard to live without these platforms—which have been around for barely more than a decade—shows how enormously beneficial they’ve become to our lives and the way we live. But the fact that something is beneficial and important does not entitle us to possess it. Nor do such benefits entitle us to demand that governments forcibly impose our will upon those who own and operate such services. Facebook could close up shop tomorrow, and that’d be that.

By all means, Facebook deserves much of the criticism it gets. But don’t forget: we’re asking them to improve. It’s a request, not a demand. So let’s keep the sense of entitlement in check.

Governments, likewise, should respect the fact that these are private companies and that their platforms are their property. Governments have no moral or legal right to tell them how to operate as long as they aren’t violating our rights—and they aren’t. Per the above, regardless of how much we benefit from these platforms or how important we might conclude they’ve become, we do not have a right to have access to them, or have them operate the way we’d like. So as far as the government ought to be concerned, there are no rights violations happening here, and that’s that.

Many argue that what Facebook and other platforms are doing amounts to “censorship.” I disagree. It comes down to the fundamental difference between a private platform refusing to carry your ideas on their property, and a government prohibiting you from speaking your ideas, anywhere, with the threat of prosecution. These are categorically different. The former is distasteful, unwise, and yes, perhaps even a tragic loss of opportunity; the latter infringes on our right to free speech. What’s more, a system of government oversight wouldn’t work, anyway: The entire issue with speech policies is that having anyone decide for you what speech is acceptable is a dangerous idea. Asking a government to do this rather than Facebook is trading a bad idea for a truly Orwellian idea. Such a move would be a far more serious threat to free speech than anything we’ve seen in the United States to date.

Unfortunately, executives at Facebook and Twitter have both been very clear that they think regulation is “inevitable.” They’ve even offered to help draft the rules. But such statements don’t confer upon the government a moral right to regulate these platforms. Whether a company or a person invites a violation of their rights is immaterial to the legitimacy (morally and legally) of such a rights violation. Rights of this type cannot be forfeited.

Moreover, the fact that these huge platforms are open to regulation shouldn’t come as a surprise. Facebook and Twitter are market incumbents, and further regulation will only serve to cement that status. Imposing government-mandated standards would weaken or prohibit competition, effectively making them monopolies, in the legitimate sense, for the first time. Unlike potential new platforms, Facebook and Twitter have the capital and staff to handle onerous, complicated, and expensive new regulations. They can hire thousands of people to review content, and already have top-flight legal teams to handle the challenge of compliance. The best thing governments can do here is nothing. If this is a serious enough issue—and I think it is—competition will emerge if it’s able to do so.

*     *     *

We are the first human beings to witness the creation and growth of a platform that has more users than any country on the planet has people. And with that comes both triumph and failure at mind-bending scale. I’ve had the privilege of witnessing much of this from the inside at Facebook, and the biggest lesson I learned is this: When incredible circumstances create nuanced problems, that is precisely when we need principled thinking the most—not hot-takes, not pragmatic, range-of-the-moment action. Principles help us think and act consistently and correctly when dealing with complex situations beyond the scope of our typical intuitive experience.

That means that platforms, users, and governments need to go back to their fundamental principles, and ask: What is a platform’s role in supporting free expression? What responsibility must users take for our own knowledge and resilience? What does it mean for our government to protect our rights and not just “ban the bad”? These are the questions that I think should guide a principled approach toward platform speech.


Brian Amerige is a former senior engineering manager at Facebook. You can follow him on Twitter 

Photo by AlexandraPopova / Shutterstock.


  1. Steve says

    (Skipping down to ask a question before I finish reading, I’m only a few paragraphs into the essay. Apologies if you address all of this.)
Brian, what do you think are the key features of technology/organization/business we could create that would enable the kind of open sharing you were seeking, which now seems lost? Absolutely zero advertising revenue seems a must. As does avoiding dependence on credit-card payments. Is it possible to be insulated from lawsuits in the US? I mean being liable for “hate” content in a way that would shut down a business.

    • Angela says

There are no hate speech laws in the US. Not for individuals, and certainly not for a website that merely hosts a user’s comment. That being said, Facebook is a global company and is getting tons of legal pressure from the EU to regulate content that is completely legal in the US.

      • Angela says

Also, for FB especially, pressure from advertisers had very little to do with their turn against free speech (YouTube is a different story). For FB and Twitter, the turn against free speech is almost entirely based on SJW young employees and SJW Twitter mobs shaming them into banning offensive speech.

      • Jett Rucker says

        There’s no legal deterrent I know of that prevents Platform X from enforcing the (most-restrictive) limitations imposed by Country A in ALL countries it operates in. Indeed, given the border-jumping qualities of Internet communication, that would seem the only safe path to follow.

So, freedom of speech within the US might be in effect governed by the requirements of, say, Turkmenistan. Or BOTH Germany AND Turkmenistan – a net narrower in sum than is imposed in either country by itself.

        It’s tough, wiring humanity all together to communicate with each other …

  2. Farris says

Trying to prevent offense is a dead end. “Take down this phrase, flag, statue, etc. because I find it offensive.” “Well okay, it offends me to have to remove what you find offensive.” Herein lies the problem with reacting to, or trying to placate, the offended. It gives the allegedly offended party dominion over the declarant. If being the offended party has a payoff, then there will be more and more offended parties. Trying to prevent offense simply creates a hierarchy.

    • ga gamba says

Once you make one concession, next there will be 10, and then 100. It’s a never-ending whirlwind of grievances and counter-grievances.

      You give the people some tools to configure their experience. If they keep complaining, then it’s this message: “Clearly this relationship isn’t working for you. We’ve deleted your account. Good luck in life.”

      • This is the best solution ever. Let people define their own “Hate Speech”, and thus their own filters.

        The problem of course is that professional complainers are professional bullies, and bullying is really what they want to do.

    • Declan says


      Except only oppressor groups can cause offence. The oppressed can cause no offence; women can’t be sexist etc.

Laying accusations of offense on a member of an oppressed group would itself be an offense; ergo, further proof that you are an oppressor.

      The hierarchy of who can and can’t give offence already exists.

      • Peter from Oz says


        A great summary of how SJWs think.
        But what they think is unimportant.

      • Outback Bill says

I beg to differ with you, Declan.
Offense can never be given. It can only ever be taken. If I make a statement that some agree with and others find offensive, then the statement cannot be offensive in itself.
Free speech does have a price, and that price is that you may read or hear something you disagree with.

  3. jimhaz says

    [Then Palmer Luckey, boy-genius Oculus VR founder, whose company we acquired for billions of dollars, was put through a witch hunt and subsequently fired because he gave $10,000 to fund anti-Hillary ads. Still feeling brave?]

    He got a 100m lawsuit payout. Don’t expect any sympathy from me. Something smells about this.

  4. ga gamba says

And you’re wrong because, by now acting as an editor, Facebook breaches the legal protection from civil suits that it lobbied for and received from the US Congress and the President. The intent of the law was to keep network services as free as possible. Facebook, Google, Twitter, and others now contravene this, yet still enjoy their now-unjustified protections. This needs to be repealed.

    Facebook was required to block content that violates law, such as child pornography, genuine threats of violence, terrorism, etc. Individual users are able to control content by customising their security settings and blocking others. What Facebook should have done is improve filtering methods available to users, such as key word and content blocking, as well as mandate users configure their security settings rather than burying these deep in the settings – Facebook knew that users would install the app and configure nothing that wasn’t in front of their dim noses. It’s alleged Facebook updates often undid user settings, and if true this disrespected individual user wants and the time taken to configure them.

    I have some sympathy for the company though. It wanted its users to be as public as possible. Facebook would be their personal billboard and print media – later it added video. This made Facebook users their own publicists and quasi public figures. The issues that once were limited to the lucky (or unlucky) few and considered the price of fame (or infamy) suddenly hit millions of people who didn’t enjoy the downside much; suddenly they wanted Facebook to be more like their private email. “Allow me to be a public figure whilst protecting my privacy” are contradictory wants. Did people misunderstand what Facebook was all about or were they manipulated by traditional media’s hyped up reports of ‘privacy’ because it was angered by the loss of advert revenue? I don’t know.

What I do know is that the dinosaur media demanded Facebook protect women from themselves, e.g. women who uploaded their own nude photos, failing to understand they had shared them with the world. Yet the same media later howled in protest when nudes in art, breastfeeding photos, and Napalm Girl were removed by the censors, be they human or automated. The media and activists demanded Facebook make users more accountable by requiring real names, and by implementing this it outraged the trans community, who for numerous reasons had not legally changed theirs.

    Ultimately, what did Facebook in was neither the users nor the media. It was the activists the company hired, who leaked info to damage the company and whose over-amplified voices caused many of their colleagues to cower. Facebook should have been firing activists, and it definitely never should have hired Sheryl Sandberg, who is an even weaker and more ineffectual leader than Zuckerberg.

    As a free service, Facebook should have had the courage to say, “Hey, you get what you pay for. If you dislike it, go back to MySpace.” Being the only game in town gave Facebook a lot of power to counter complaints with the explanation that it was complying with the law by eschewing an editorial role. And it should have had the guts to remove the profiles of its hypocritical media critics, such as the Guardian and its journalists, who were leading the anti-Facebook charge.

    • D.B. Cooper says

      …acting as an editor Facebook breaches the legal protection from civil suits… Facebook, Google, Twitter, and others now contravene this yet still enjoy their now unjustified protections.

      So, this is exactly what I was going to comment on, but since you already have, I’ll briefly try to expand on the idea. The author claims that FB isn’t censoring ideas, but rather that they’re simply “refusing to carry your ideas on their property…” But if true – that is, if FB is explicitly refusing to carry some ideas – then it would follow that they are, at minimum, implicitly accepting to carry others. And in some cases where the content has been flagged, screened, and then not removed, it may be said that FB has explicitly accepted to carry the content on their platform.

      In any case, it’s hard to see how FB isn’t editorializing, and to the extent that they are – and, again, it seems quite obvious the company is making value judgements about which content to carry on their platform – the company should be subject to the same legal constraints (e.g. libel laws) as any other publication: newspaper, magazine, media organization, etc.

      • Exactly. If FB is choosing the content that appears on my news feed based on their internal evaluation of the content, then they are publishing and should be subject to the same rules as other publishers. I don’t think they are worried about libel laws so much as they are worried about Equal Time and Fairness Doctrine issues that would obligate them to provide the same resources to conservative political groups as they do to liberal ones.

    • E. Olson says

      Perhaps I don’t understand social media fully, but if someone doesn’t voluntarily subscribe to another person’s Twitter feed, or “befriend” an entity on Facebook who posts “hurtful” or “violent” content, how is it possible to be exposed to “hurtful” or “violent” content? Does the Nazi party, KKK, or child porn industry actively send out hateful propaganda and distasteful offers to all Facebook users? If not, then why should any social media platform have to respond to professional Leftist muckrakers who aggressively go searching for “hateful” content that was never intended for widespread distribution?

      • I think it’s more likely, as the author of the piece describes, that leftists work themselves into key positions on the so-called ‘Trust & Safety Councils’ of social media companies, and then begin censoring and deplatforming users they disapprove of before those postings are visible to public users.

      • ga gamba says


        There are a few ways this happens.

        Last I checked – and it’s been a long while, so it may have tightened up a bit – by default a person’s profile was open to the public. Remember, Facebook (FB) is the sharing platform of self-publicity: you are a billboard and broadcaster. People didn’t need to be your friend to read your postings. IIRC, strangers could even post comments w/o friendship. Users may tighten their settings to restrict this, but many don’t – discretion and privacy defeat FB’s raison d’etre. Further, your friends list and their profiles are browsable unless you configure security on that. When you friend someone, I think their friends are recommended to you, and your friends are recommended to the person you friended.

        Your friends are notified of all your actions on FB. Like something, upload a photo or video, or post a comment, and they are informed.

        Many companies, activist groups, and media outlets have FB presences. Having ended the comments sections and forums on their own websites, they’ve replaced these with FB. You read an article or press release, use your FB credentials to log on to the site, and post a comment on their FB page. All the other FB users of the site may see that comment and reply. Further, your friends are informed of your comment, as are the friends of those who replied to your comment.

        And on it goes. This is the gist of it.

        • E. Olson says

          Thanks GG for the explanation, but again it just points to the ability of users to avoid “hateful” content if they desire. If you have a racist friend, you can unfriend that person to avoid being notified about any of their further racist posts. And if your corporate FB page is attracting Nazi comments, you can delete them and post an official corporate PR comment about how your corporate values are totally opposed to Nazi values. To which someone who knows Nazi history might reply, “Since you are totally opposed to Nazi values, can we also assume you are against the environmental movement?”

          • ga gamba says


            Yes, agreed. They certainly may lock down their privacy settings. That they won’t tells us the issue really isn’t about avoiding things that cause their own distress. Rather, this is a pretext used to shut down views they dislike so that their own views monopolise the platform and, by extension, society.

    • peanut gallery says

      There’s apparently a precedent. Back when the company town was a thing, the government came in and told the companies that even though the towns were owned by them, they had created public spaces, and in those spaces they had to protect First Amendment rights. I think the same principle can be applied to social media. It’s not like we’re buying a service. Your eyeballs are what the companies sell. A social media user is an employee who works for free.

      It’s time to finally say “give me dank memes or give me death.”

      • ga gamba says

        Marsh v Alabama is that case. A religious person attempted to distribute fliers on the pavement near a post office. She was charged with trespassing on private (i.e. company-owned) property. SCOTUS ruled that when a private space is similar to a public space, the rights applicable to the public space apply.

        • Farris says


          From a recent previous post, which I believe is likewise applicable to this scenario:

          When does deplatforming involve freedom of speech? In the U.S., the Equal Protection Clause prohibits the government or its agents from denying equal protection of the law to some while recognizing it for others. However, like the First Amendment, it only applies to government actions, meaning it would not apply to businesses, lunch counters, etc., that did not wish to extend services to all individuals. For this reason, Civil Rights legislation was passed under the Commerce Clause. Since businesses engage in or impact interstate commerce, the Clause allows the government to regulate their conduct. So if a Christian bakery is open for business, it must provide service to the public and not just the public it prefers. Consequently, if a platform wishes to be open to the public, it should not discriminate against views with which it disagrees. Therefore, the First Amendment civil right of a deplatformed individual may potentially be impacted.

          • ga gamba says

            A consequence of the Civil Rights laws is that they elevate the protected groups. Yes, they are protected from the denial of service, and they may threaten to make a claim of denial of service to compel an owner to submit, but this coercive ability does not exist for others. Perhaps the crafters of these laws never foresaw this outcome, but it exists now. Either businesses must be allowed the right to deny service to anyone for any reason, fair or foul, or businesses must be open to all for lawful transactions. I think the double standards that have emerged in many domains are what many find frustrating.

            Internet services were given special protections from civil liability in exchange for not editing content – they were to be neutral platform providers for lawful speech. People complain about harassment or abusive language, and I’m not saying it doesn’t exist, but if someone is verbally abusing me in the street, I have no way to turn them off. I could call police and wait. With social media I can easily and immediately block the offender and be done with it. Social media gives me better protections in the virtual space than I have in the meat space. Yet, I think many use claims of abuse as a way to exact concessions from platforms in favour of their views. If I were running a platform I’d tell the person to block those who upset them and move on.

            Further, the courts have ruled social media accounts of gov’t officials must be open to all. By banning users who haven’t violated the law, they are denied an ability to communicate with their elected officials and the institutions of the state. Officials engage their constituents using social media. For good or ill, today social media is an extension of the town square.

          • D.B. Cooper says


            A bit off topic, but since y’all brought it up, I thought I might say a thing or two about the staggering levels of moral dumbfounding it takes – or took SCOTUS – to square the 14th Amendment’s Equal Protection Clause (EPC) with a Civil Rights policy like affirmative action (AA).

            In truth, I don’t know how the Court could have legitimized AA within the constraints of the EPC, short of the most intellectually tortured redefinition of plain English, and even then I’m not sure how they sold this bill of goods. I’m serious: it’s not at all clear that the conditions of success – for a coherent implementation of this policy – can be found on Earth. If you doubt this, simply consider that for the entirety of its existence the EPC was, in effect, a guarantee that all citizens would enjoy equal protection under the law, and then one day SCOTUS decided that these very same words also guaranteed some people the right to unequal (special) treatment under the law.

            Justice Lewis Powell, the architect of this abortion, provided colleges the grounds upon which the policy is still defended today. Powell’s reasoning was that while racial quotas violated the Constitution, a university could consider race as one factor in seeking to assemble a diverse student body. In other words, racial quotas violate the Constitution, but discriminating on the basis of race does not.

            If the phrase “distinction without a difference” isn’t materializing in your head right now, then you’re doing it wrong. The best part of this distortion of thought by Powell – by which I mean, the worst part – is that his entire reasoning rests on what is likely a false premise: the inherent value of a diverse student body.

    • A. W. says

      As I understand it the “requirements” of CDA-230 have no teeth.

      At best it would take a decade of lawfare to get a Supreme Court ruling, which would require someone with standing who wants CDA-230 out of the way.

  5. I feel sorry for college kids today. When I went to college, going to an off-the-wall event was fun. The more off-the-wall, the more interesting. I remember going to hear Gus Hall once (I still think communism is the biggest ripoff in history). No one showed up with protest signs stapled to sticks or shouted out of turn. We listened and asked questions that we flattered ourselves were provocative, rather than expressions of moral outrage. It wasn’t all that unusual that after the event, many would gather at a nearby eatery, with opposing thinkers sitting at the same table enjoying the conversation’s continuation. You can’t get that today on almost any college campus.

  6. Matt K says

    How is a competitor supposed to get started when all the leftists team up to deny them funding? There has to be regulation somewhere, whether that’s of the social media companies or the payment processors.

    • Atticus says

      I also agree that something needs to be done. However, complaining about it is one thing, and the more we complain, the worse it gets. They can’t control us forever. Heck, for a long time Netscape and Excite were the Googles of old – they’re gone. It can happen again…
      The good news is that Facebook has lost members and supporters, and the only increase in users seems to be among the over-55s. And remember what we used to do in the old days when a company made us angry: we left. When Prodigy screwed up, people ran to CompuServe and AOL. When MySpace screwed up, we ran to Facebook. Half of us only have a Facebook account because we were required to have one, which has its own issues. But we do have alternatives, from search engines to social media pages, that we can use to find platforms that fit us. Is it too hard to look up information in an INFORMATION AGE? Heck, as a Twitter alternative I’ve been using something called Plurk for the last six years, and it’s almost the same age as Twitter. There is no reason why we stick to these things. It’s the same thing as hate-watching: the reason they still make shows that no one likes is precisely things like hate-watching. Hell, the same magazines and orgs criticizing Facebook and Twitter have the default Facebook and Twitter logos on their pages. Some of these companies are large enough to shrug off hate-mobs and de-platformers. I will never understand why we killed chatrooms to run to profile pages as our means of communication.

    • Atticus says

      Also, I think a lot of these smaller companies and start-ups need to form an alliance or a trade union to pool resources.

  7. I think that regulation of social media should be a separate issue from regulating financial institutions, which is what payment processors like PayPal are. Whether it’s wise to regulate PayPal, etc, is yet another question.

  8. “the worst they can do is kick you off.”

    Ah, the author misses a key point here. Often today we build up communities on platforms – communities made up of people who could not meet up in real life (maybe they live in different countries, or are housebound) – or we become financially dependent upon platforms (needing PayPal etc. to engage in commerce online). To be forcibly removed from such is akin to excommunication, house arrest, or exile, and so should not be taken so lightly.

    • ga gamba says

      Further, many government agencies including schools and elected representatives communicate with their constituents primarily through social media. Moreover, Facebook and other platforms have taken it upon themselves to broadcast emergency messages as well as communication from police and fire services.

      If the court says the President of the US can’t block people from commenting on his tweets, then it seems the platform is considered a public space. Banning a user restricts not only his/her ability to speak to their elected officials, it also prevents his/her hearing their officials’ messages.

      • I have the abiding feeling you don’t recognize how corrupt the US Supreme Court became after 1939.

        To my eye, it transformed itself from the least dangerous into the most dangerous branch of government. The Supreme Court is the face of the governing oligarchy we must defeat.

    • Atticus says

      Makes me miss chatrooms, where what you said was forgotten minutes later. I will never understand why we fled chatrooms for blogs and slow-motion chats that resemble the single-line dial-up BBSes of the 1980s, just with more colors.

  9. “Governments, likewise, should respect the fact that these are private companies and that their platforms are their property. Governments have no moral or legal right to tell them how to operate as long as they aren’t violating our rights—and they aren’t.”

    The government regulates private companies all the time. For example, the government says that a fast-food restaurant cannot turn away a customer because they are black or female or Muslim. It would be entirely reasonable for the government to regulate the major social-media companies that are now part of the infrastructure of our societies, saying that they cannot turn away customers owing to political opinion or viewpoint, and so must allow anything short of illegal speech.

    • peanut gallery says

      It’s quite amusing how progressives suddenly turn libertarian when I mention regulating the companies this way. It’s clear to me from listening to progressive tech types that they want this control and they want to use it in a culture war. They will ban political enemies from H2O if they can.

      • Nakatomi Plaza says

        There is nothing progressive about any of this. Liberal, yes, but big tech is fundamentally libertarian and only uses its liberalism as a shield against regulation.

        And you really think the right is particularly fond of a free and open discourse? If you do, I’ve got a right-wing owned media conglomerate to sell you.

  10. There’s an article that appeared recently in Harper’s about the potential loss of a life’s work when an artist or writer becomes the victim of an SJW hate mob. For example, writers can lose not only their publishing contracts, but also their previous works may be allowed to drop out of print by publishers terrified of being targeted by protesters.

    Luckily, I’m not an artist or a writer or a creative. But I have put time and energy into a user group on Facebook, and I am constantly having to assess what the limits of expression are on that platform. I don’t want what I’ve worked on to just disappear overnight because I have crossed some blurry line.

    I started the group on Facebook about a year ago. Its purpose is to find a “middle way” in a political climate dominated by increasing polarization and extremism. Along the way, the group morphed primarily into an attack on the progressive left’s blank-slate dogma. I link to things like studies in peer-reviewed scientific journals, various statistical databases, and polls, and I post charts and graphs, along with my comments. There’s no inflammatory rhetoric, and group members – who range from right-wing to left-wing in their views, but are mostly geeky, mild-mannered centrists – seem to get along well. We’re a small, quiet group.

    The group was running smoothly until a few weeks ago, when I engaged Amy Harmon, the science reporter of the New York Times, on Twitter, and then wrote a long comment in the online magazine Areo – both touching on the subject of race, genes, and intelligence. The Areo comment was quickly picked up and published by a well-known blogger for a controversial online magazine. As a result of these activities, the tiny membership roster of my Facebook group grew dramatically, seemingly overnight. A few days later, Facebook analytics informed me that 1,200 persons had visited the group that week, a record.

    Within a day or two of the Areo comment being published, I received a message from a Facebook user accusing my group of engaging in “hate speech” — I suspect because of my recent comments on Twitter and in Areo. Although there was nothing in my published comments or tweets (or in my group) that cannot be found in respected scientific journals, and my language has never been in the least bit incendiary or disrespectful, I had touched the fatal “third rail” of American life in writing and tweeting about the subject that I did.

    Almost immediately after I received the “hate speech” message, I suddenly started experiencing technical glitches on my Facebook group (and these have persisted). Persons applying for membership would “disappear” after I clicked on them (I may have lost two or three dozen prospective members this way), I was having trouble uploading images, members were no longer receiving automatic notifications of new posts or comments, member views were not being fully counted, comments and posts were disappearing, and so on. The group page had never been glitchy before I received that “hate speech” complaint.

    I find it hard to imagine that Facebook would stoop to making life miserable for a tiny user group on its platform simply because it received a “hate speech” complaint, but the coincidence of the threatening message and the sudden fit of glitchiness occurring at about the same time strikes me as weird and has put me on the defensive. I don’t want to lose all my work and my group. The fact that any of this should be a concern to me at all – that I should have to worry about being banned from Facebook for posting data and scientific information on issues that need to be discussed – is probably the most depressing thing in all of this.

    • @ A New Radical Centrism

      I’m one of those people you refer to in your comment who tried to join your Facebook group and couldn’t do it. I read your comment on Areo, found your group on Facebook, and clicked the button to join. When I didn’t hear back from you, I went into my activity log (or whatever that thing is called) and couldn’t find any record that I had ever applied to join. So I went to your group again and clicked the button to join again, but the button was dead; nothing happened. I clicked and clicked and clicked, and nothing. It’s as though the option to join had been taken away. I assumed you had blocked me, so I gave up. Now I realize, after reading your comment here, that I was caught up in this mysterious (or maybe not so mysterious) glitch you’re describing. Actually, it doesn’t seem that mysterious to me at all. I don’t think any of this is accidental. Your group has been flagged, and there must be some kind of FB automation that’s making it impossible for people to join it.

    • Emile S says

      That was a great comment on Areo. I’d try to join your group, but I’m not on Facebook anymore.

  11. FluffyBuffalo says

    Random thought: “content moderator” has to be the worst job available in the IT sector. Imagine wading through the filthy excretions of millions of angry, unhappy or hateful minds – a new insult, threat or dismissive comment every ten seconds, day in, day out. That really has to suck the faith in humanity out of you faster than anything. Then add constant time pressure and minimal pay.

    • Craig WIllms says


      Give me a second to get past your moniker, best one yet!

      Your post reminds me of why I couldn’t be a cop. Seeing people day in and day out on the worst days of their lives would leave me depressed within a week. Even if your gig is to ‘help’ people, it would drag me down in no time. Being a Facebook censor would likely leave me babbling in the corner by the end of the first day.

  12. Stephanie says

    I agree with much of what the author argues, but dislike the characterisation of Facebook users as “customers.” Users don’t pay for anything, they are, instead, the product. And so far Facebook has been harvesting that resource for free.

    One aspect that I think government should regulate is that user data should be treated as the property of the user. No website should be allowed to collect it without paying the user.

    • ga gamba says

      And so far Facebook has been harvesting that resource for free. . . . And so far Facebook has been harvesting that resource for free.

      No. You have traded your privacy for publicity and services. That’s the deal. In exchange for info that allows advertisers (product or political) to better understand you and your associates, you get an always available global directory of billions, storage of your comments, photos and videos, a global telephony system, etc. You may use the platform to promote yourself, your ideas, your products, etc. Think about how much you’d have to pay legacy platforms such as newspapers, fliers, and telephone books to do so. And you wouldn’t come close to attaining the global reach FB provides you. Creating (or acquiring) and maintaining all these convenient services for your use are very costly.

      Think about your comment here on Quillette. Should Quillette pay you for your comment? Your comment is now the site’s property. Does someone want to buy it? Unlikely. But to an advertiser of books looking for a website with x number of daily unique visits by libertarian-leaning visitors who spend x number of minutes engaged with the site, this info is valuable: it will lead the advertiser to place adverts for books by Hayek and not Marx. It gets far more granular and focused than I’ve described – I’ve only scratched the surface. I may be presented adverts for some titles whilst you are presented others, because our engagement on other sites has built more in-depth profiles of our likes and dislikes. Moreover, it may even lead to advertisers knowing how to craft messages that appeal to us in specific, more compelling ways.

      Individually, the monetary value of Stephanie’s and ga gamba’s info is very low – probably nil. It’s when we are bundled together into a definable and large group of very similar people that there’s value to an advertiser. The more engaged we are online, the better FB and others can assign us to multiple demographic groups, and the better they can craft messages that capture our interest and attain our money or action.

      • ga gamba says

        I meant to quote your No website should be allowed to collect it without paying the user. too. Apologies for the dupe.

        • Stephanie says

          @ga gamba, I understand how it works, and that individual data is essentially worthless, but I would like to see a transition to a paradigm where the user has complete ownership and control of their information. Newspapers, TV, etc. all make do with untargeted advertisement as a revenue source; there’s no reason social media couldn’t do the same. Indeed, Facebook has a setting that turns off targeted advertisement. Even a transition to a subscription model would be preferable.

          Data collection is a violation similar to spying. It wouldn’t be deemed ethical (or possibly legal) for a third party to record what people say to their friends or strangers in real life in order to influence them, and someone who collects information on minute details of a person’s life would rightly be judged a deranged stalker. In the 21st century you shouldn’t have to subject yourself to this to participate in the internet. Alternatives that respect individual privacy and autonomy are possible.

          A happy side effect would be that it would be more expensive to collect the kind of data advertisers and politicians use, having to rely on market research firms. That might slow the induction into homogeneous ideological bubbles and other dangers of this manipulative technology.

          In an age of Twitter mobs, where something you said or did at any point could be brought back to destroy you, we don’t want to be empowering corporations or government with the ability to keep people controlled.

          • ga gamba says

            but I would like to see a transition to a paradigm where the user has complete ownership and control of their information.

            By not participating, you do presently have this ownership. However, when choosing to join a social media platform, you’ve traded that for a conveniently accessible bundle of services. You didn’t have to accept the deal. It isn’t spying, because you invited these services into your life. Like many others, you may not have read the TOS or may have failed to appreciate the invasiveness of the deal.

            I understand your misgivings, and by and large I sympathise, yet these companies exist to make a profit – a principle I don’t object to. These are multi-billion-dollar investments, after all.

            How much would you pay for one Google search? A dime? A dollar? Now think about your life without the convenience of a powerful search engine yet still in need of that info. You have to travel to a library and hope they have the resources available – hit or miss unless you have access to an enormous uni library. Still, the resource may be checked out. All that wasted effort, time, and likely expense.

            I advocate an opt-out alternative to data collection: a person pays a monthly fee, and this buys their privacy. Yet when I propose such a measure it’s often derided; I suspect too many have become accustomed to free (non-paid) IT services. This is kind of strange, though, because people will pay for cable or streaming TV, streaming music, satellite radio, etc. It appears they differentiate entertainment from information and value one more than the other. Personally, I value access to info more than entertainment, and the internet has been a godsend for me.

            I place a high value on Google’s bundle of search (with translate), youtube, maps, gdrive, and office application suite. I use gmail but I could transition to proton mail. I place little value on all the other platforms such as Facebook and Twitter. I would pay Google $15 – $20 a month, perhaps even more.

            BTW, let’s not neglect to think about how our credit cards collect our info. An insurer likely places some value on knowing one spends one’s money on a lot of vodka, ice cream, and cigarettes. Frequent late-night ATM transactions suggest illicit drug use. Based on one’s fuel purchases, it can tell whether one’s been honest about how many kilometres one drives per month, as well as one’s compliance with speed regulations. Our digital entertainment providers also collect info: they know how many hours per day are spent viewing, as well as what types of programming appeal to the viewer.

      • Farris says

        Bingo! On Facebook and other social media platforms, the user is the commodity. Facebook exists by collecting and selling the user data. Imagine cattle saying, “we have free food, water, health care and pastures to roam. Wonder what comes next?”

      • Nice response to an absurd comment. Also, if you don’t like websites mining your data then take what steps you can to protect yourself. Use a VPN, private/incognito window, etc.

        • Nakatomi Plaza says

          But Facebook lies about what they do. They’ve been lying to us since day one. Why would anybody excuse that behavior?

  13. Hamilton sunshine says

    Apart from the general issues, the bias is most concerning, as it then becomes not about protecting people or discouraging hate speech but about using it to promote, encourage and signal hate speech towards another group.

    I.e., if FB or Twitter ban a conservative for a minor funny look, or a centrist for saying “I’m not sure that is fair or OK either,” but let someone on the far left call for killing and cancelling white people, you are actually saying to those people, “You are right in calling for their deaths and harassment, and we will actively help you to silence them so you can continue without right of reply.”

    Hate speech laws and trust-and-safety committees are a mess, Orwellian, and lead to problems, but people would be a lot more comfortable (or at least know where they stand) if they were up front and consistent. They’re not, thus proving it’s just a political tool of censorship and harassment. I still consider myself to be broadly left wing, and even I can see this only hurts left-wing ideals, as it propels people against them. Silicon Valley and the left are worried about the rise of the right but refuse to examine how their own extremism, wrongthink, and tactics contribute to the flight from left to right.

    • Jim Gorman says

      The ultra-left and the SJW crowds will end up eating their own. Facebook will begin to fail when enough people are blocked or leave and the left is all that remains. An echo chamber will quickly become tiring!

  14. E. Olson says

    “During an internal town hall about the controversy, employees interrupted, barked and pointed at Zuck and Sheryl Sandberg with such hostility that several long-time employees walked away, concluding that the company “needed a cultural reset.””

    A few thoughts here on this sentence from the article.

    1) Why are Facebook employees allowed to bark at top management and still keep their jobs? Is it because they are barking Leftist points that make them “unfireable”? This seems far more fireable than the James Damore case, where he was fired for writing a very polite and footnoted memo in response to a request for feedback on a diversity seminar he was forced to attend.

    2) Who are the barkers, and who are the people who walked away? Are the barkers female and non-Asian minorities who have been hired for diversity-statistics purposes, and the walk-away types the coders and scientists who actually do the real work? In other words, are affirmative action and diversity hiring quotas behind the Leftward shift in corporate culture at Facebook and other social media sites?

    3) Does Facebook benefit in any way from being politicized? It certainly seems to increase costs, as even their half-assed attempts to police “undesired” content have to be very expensive, but is this extra cost made up somewhere else? Does this Leftward politicization reduce the chances of regulation, increase the growth of the user base, or attract more advertisers? Why have all the social media companies gone down this Leftist hole?

    • I’d like to read a reply to E. Olson.
      1. Who were the barkers and who walked away?
      2. Does FB benefit from being politicized?

  15. Truthseeker says

    I have never used Twitter, Facebook or Instagram, and never will. At first it was because I am not narcissistic enough to think that what I am doing in my life is important to invisible strangers. Now I am glad that I have not played in what has become an abyss, a bottomless pit of uninformed opinions and prima donnas who think what they say and do actually matters just because they express something.

    I am a free speech, free market sort of person who believes that power entails responsibility. The issue with Facebook, Twitter, YouTube and Google is that they have acquired power but have never acquired the responsibility, either voluntarily or via enforcement, that should come with that power. Power without responsibility is tyranny. Add to the mix the zealotry of the dopamine-fuelled SJWs and the spineless acquiescence of those who have been told that they are special without being encouraged to become more than they are through strength of character and effort, and you get tyrannical outcomes through nothing more than the ravings of those who accept no responsibility for themselves but demand it of others, on platforms that can be easily ignored.

    Zealotry is evil in all its forms. It does not matter if it is religious zealotry, political zealotry or any other kind of zealotry. There is no outcome of zealotry other than pain, suffering and death, usually in large numbers. SJWs are zealots, and they seem to be at the steering wheel of what many people use as a source of information. These people only have the power that you give them. Don’t give them any. Get off social media and try to get considered thoughts at places like Quillette and the various outputs of the IDW.

    • Craig WIllms says


      I do use Facebook, but none of the others. I use FB for self-promotion: I am an artist (painter) and singer-songwriter (recording artist). I post my creative output but limit it to just that. I never post anything controversial or, God forbid, my own political opinions. This is simply an effort not to offend anyone. I know how much I hate being preached at by the artists and musicians that I follow.

      My point is that Facebook can be an effective communication tool without devolving into a source of despair or outrage. I won’t sell myself out so to speak but I won’t try to inflame emotions unnecessarily. Other than that Facebook is benign to me.

      • Truthseeker says


        I get it. The vast majority of users of FB, Twitter and Instagram are not part of the problem. However, those running the platforms are trying to “change the world” when they are completely ignorant of it. They are hypocritical when making decisions about “hate speech” or “oppression”, and act in a tyrannical manner when it suits them. The sheer weight of numbers gives them power they do not deserve. My only suggestion is to spread your reach to other platforms, so that you can do your bit to give some traffic to their competitors.

  16. James says

    Facebook and Google have monopoly control over Internet advertising. It’s well past time to break them up under the Sherman Act.

    • Atticus says

      It would also help if sites like this one would stop using Twitter and Facebook as a default. Looking at the two icons at the right of the screen.

  17. “E pur si muove”. Remember that when someone tries to force you to bend the knee. It’s always helped me.

  18. Morgan Foster says

    Facebook could have avoided much of this at the very beginning if Zuckerberg had not started his weekly Q&A with the employees. That was a serious mistake.

    There is absolutely no good reason for a company like this one to have “internal town hall” meetings. Or internal messaging boards for employees.

    Zuckerberg needs a fixer to come in and tear all of that down, if he doesn’t have the courage to do it himself.

    Unhappy employees? Who cares? Where are they going to go, anyway?

    • Dazza says


      Look what happened when Napoleon stopped his weekly Q&A meetings. The beginning of the end for the farm.
      Hang on a minute, maybe that’s not such a bad idea…..

  19. Alan Green says

    I’ll make a deal with Facebook. They can ban hate speech all they want, but I get to decide what is & isn’t hate speech.

  20. A thousand angry people shouting at me on Twitter is “robust and free speech” or a “mobbing”?

    My employer terminating my at-will employment because he doesn’t like my opinions is an infringement of speech, or a natural consequence of free association?

  21. There are two options for Facebook:

    1.) Facebook doesn’t edit, and doesn’t have liability for content (defamatory or otherwise).

    2.) Facebook edits users, and DOES have liability for content as they are exercising editorial control (negligently or not).

    Facebook is a private concern, but it is also a beneficiary of corporate welfare in the form of liability protection from torts, premised on the claim that it is an open platform and can’t police content.

    Libertarians always tell us we don’t need regulations because tort law will settle private disputes. But Facebook is regulated, and it is regulated in a way that protects it from tort law. The idea that it shouldn’t be further regulated in the name of “liberty”, even as it plays at politics, doesn’t pass the giggle test. The solution is to eliminate the liability shield and let Zuck make his private editorial decisions.

  22. This discusses the two main forms of corporate welfare big social media receives:

    They are shielded from copyright infringement and most torts like defamation (as well as from criminal liability as accomplices to some of their users’ crimes). Since they are all geared up to protect their users from wrongthink, they can protect against defamation and copyright infringement too!

  23. Mark Beal says

    “Governments, likewise, should respect the fact that these are private companies and that their platforms are their property. Governments have no moral or legal right to tell them how to operate as long as they aren’t violating our rights—and they aren’t.”

    I imagine that depends on how you interpret Article 19 of the Universal Declaration of Human Rights:

    “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

  24. Caligula says

    “It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest,” said Adam Smith in ‘The Wealth of Nations’. He then went on to note that the butcher, brewer, baker (etc.) sell their wares by appealing to the customer’s self-interest and never, ever refer to their own.

    “to make the world more open and connected, and give people the power to share” sounds better than “to gather as much personal data from users as possible and then sell that data for as much as possible,” yet it is the latter that maximizes shareholder value.

    Do you really think the butcher’s concern is that you get sufficient protein? Is Adam Smith’s observation any less true today than when he wrote it?

    “We do it all for you!” Yes, of course you do. How could I ever doubt your interest in my well-being?

    • Umberto Turturro says

      Facebook, Google, and some of the new tech-type companies do make the argument that what they are doing is all for some altruistic reason, but as Emperor Caligula (I assume that is the proper salutation for you) writes above, what Facebook means is ‘we make a little space for you to play around in, and will aggressively try to monetize anything we can get our hands on’. If they really believed in their “to make the world open and connected” statement, they would look much more like Wikipedia.

      Whenever I hear about Facebook, I think of the classic TV series ‘The Prisoner’. Facebook, like Number Two, ‘wants information’, and they intend to get it, ‘by hook, or by crook’. Regardless of whether we want them to have it.

  25. david of Kirkland says

    As FB controls content, it should be deemed the publisher and thus liable for all content that appears on it. You can’t claim to be an open communications carrier and also restrict some communications based on corporate policies that involve reviewing material and deeming it acceptable or not to be published via the platform.
    This will require FB either to be a communications platform OR to be a publisher responsible for the content. Playing half and half is unequal protection under the law.

  26. Blue Lobster says

    “The fact that it would be hard to live without these platforms—which have been around for barely more than a decade—shows how enormously beneficial they’ve become to our lives and the way we live.”

    This is generally an excellent and informative article, especially for a non-social-media user. I do, understandably I think, take issue with the above sentence, as it makes a claim which I would argue is not supported by the relevant evidence. Social media is hard to live without for some people, but for many it is pointless and inconsequential. Almost certainly, social media is unimportant for far more people than, say, electricity or indoor plumbing. It is inarguable that certain conveniences and creature comforts are indeed hard to live without for nearly everyone, precisely because of the tremendously beneficial effect they have on our lives. Social media is not among these. The relative ubiquity of social media is an indicator of its beneficence only in as much as the same is true for alcohol or tobacco products. In fact, this ubiquity is possibly more an indicator of the pernicious quality that all of these products exhibit. I argue that social media is attractive but poisonous: perhaps not in and of itself, but the extreme level of interpersonal connectivity it creates allows the online venom of a relatively few users to propagate and metastasize at a congruently rapid rate, whereas it would otherwise be fairly contained and far less damaging.

    “they generally can’t throw you in jail – at least not in the United States.”
    Pretty much sums up why I don’t use Facebook.

  28. Julia says

    It’s untrue that FB is unrelated to censorship. It performs censorship on behalf of authoritarian governments.

    Russian Facebook blocks event page for opposition rally

    Facebook complies with Russia’s request to take down an Instagram post linked to Putin’s rival

    It’s just that not every government gets to censor free speech.

  29. Also ExFB says

    The historical narrative asserted in the article around the history of the development of FB’s content policies is profoundly inaccurate. The standards the author describes are *much* older than 2016 and, in fact, were foundational to the growth of the platform in the first place.

    If you’re interested in an accurate overview of the development of these rules, featuring interviews with people who were actually there at the time (unlike the author) this paper is worth a read:

  30. Aerth says

    There is nothing wrong with the concept of a safe space, but it becomes wrong when someone assumes the entire world should be a “safe space”. It becomes worse when it is believed that only certain groups deserve to have the world as their safe space.

    The Left has usurped for itself the right to decide what is hate speech and what is not (which amounts to: minorities can say whatever they want and it is ok, while majorities must think every word over ten times). When they were spitting crap about Russian bots and sock puppets for years, it was fair game. When they got hit with the NPC meme, they immediately went crying, and Twitter staff were all too happy to update their ToS with an anti-dehumanization policy.

  31. Umberto Turturro says

    While this article seems longer than necessary, I did appreciate the section on how Facebook actually ‘curates’ content. Outsourcing the classification of ‘is this hateful or not’ to thousands of people around the world, with decision times measured in seconds, certainly helps one understand the state of Facebook, but it is rather disheartening to read that only the high and mighty have the capital to make Facebook respond to challenges to its decisions. A double standard indeed.

  32. Lightning Rose says

    Vote with your feet–delete your account and LEAVE. No one “needs” to be on a platform that didn’t exist for most of our lifetimes, to do things we never did before. No one can sell me on the idea that social media of any stripe is “necessary.” Want to keep in touch with old friends? Pick up the phone, drop in or write a letter.

    I have several senior lady relatives who use FB. What they seem to do mostly is create the illusion of imaginary “friends,” while spreading gossip about real people like a junior-high clique. It’s nothing but an artificial dopamine hit to make lonely people, steeped in “celebrity culture,” think they have a big fan club out there. At best, the online version of the Obnoxious Christmas Letter. Avoid it like rat poison and it’ll go the way of

  33. Good piece but a few flaws:

    “And I have a great deal of sympathy for anyone who does get attacked—especially for their immutable (meaning unimportant, as far as I’m concerned) characteristics. Such attacks are morally repugnant. I suspect we all agree on that.”
    Needs more thought. Being a pedophile is essentially an immutable characteristic, and acting on it deserves prosecution and punishment.

    “Many argue that what Facebook and other platforms are doing amounts to “censorship.” I disagree. It comes down to the fundamental difference between a private platform refusing to carry your ideas on their property, and a government prohibiting you from speaking your ideas, anywhere, with the threat of prosecution.”
    So if FB refuses to carry the opinions of conservatives, that’s not censorship, according to the author. What about if it refuses to carry the opinions of blacks?

  34. Saw file says


    Started out at the beginning of FB.

    Met many ppl, from around the world, that I am still in touch with regularly.

    Went through all the FB bs, and I’m still there because many people don’t have much of an option otherwise to continue our contact.

    The ex-FB multi-millionaire tech-guru advises…


    Or, make your own platform!

  35. scribblerg says

    I’m in the social media business too. Lots of words to describe the SJW takeover of Facebook, with Zuck cheering it on. Best part of the article? The factual description of how the content monitoring function actually works. You see, the elites and wannabe elites who work in management and engineering at Facebook would never lower themselves to the grueling task of actually reviewing content themselves. That’s too much like work for them. So instead, it’s farmed out to cheap contractors, many of whom aren’t native speakers of the language of the content they are supposedly monitoring.

    I have one major correction for the author. He seems not to understand that Congress granted internet “platforms” an exclusion from liability for content on their platforms, but in exchange they have to be neutral arbiters of content. Facebook really isn’t free to do as it sees fit just because it is private. And if it’s going to editorialize with a particular worldview, which is what it’s doing, then it no longer enjoys that privilege, and should be subject to the same liability risks as any publisher.

    Why he missed this I don’t know. I do appreciate the insider story, of course, but none of this is even surprising to me. SJWs and the Left run amok, and Zuck & Co encourage it and cheer it on. Which makes them anti-liberal, anti-American and fascistic.

  36. Pingback: Facebook | Transterrestrial Musings

  37. “or that directly inflicts emotional distress on a specific private individual (e.g. bullying).”
    And who created the standard for what constitutes “emotionally distressing or bullying” speech?

  38. “Then Palmer Luckey, boy-genius Oculus VR founder, was put through a witch hunt and subsequently fired because he gave $10,000 to fund anti-Hillary ads. Still feeling brave?”
    F##K Yes! That’s the problem with conservatives: they want to change the world but they don’t want to get their hands dirty. Roll your fkng sleeves up!

  39. Rev. Wazoo! says

    The correction for this is for states in the US to add FB (and other social media platforms, especially those with a near-monopoly) as “public accommodations”, akin to restaurants, hotels etc., and to add “political affiliation” (as Washington DC has done) to the list of characteristics which cannot be discriminated against by a public accommodation, along with race, sex etc.

    Illegal things, like incitement to violence, can be removed, just as customers inciting violence can be removed from a restaurant. But entirely legal things, like lawful posts or requests for service at a bakery, cannot be refused.

    A simple solution already enacted in law for the same reason: to prevent the arbitrary exclusion of people from public life.

  40. Pingback: Były pracownik Facebooka szczerze o tym, co się dzieje w firmie

  41. Steve in Wisconsin says

    “During an internal town hall about the controversy, employees interrupted, barked and pointed at Zuck and Sheryl Sandberg with such hostility that several long-time employees walked away, concluding that the company ‘needed a cultural reset.’”
    I’m reminded of the movie “Invasion of the Body Snatchers”: SJWs taken over by alien beings bent on the total destruction of the human race.

  42. Michael says

    Controlling speech, the words we use, has been a quest of the feminist movement since the 70s. They have gone to absurd lengths to find a way to exclude any use of the word “man” so that now the Prime Minister of Canada wants us to use “peoplekind” in place of humanity.

    The circle just gets larger, with the goal of encompassing all forms of expression. You can’t criticize the policies of an organization, as I found out recently. The Boy Scouts have renamed the organization Scouting Inc (or some such thing) so that girls can join. Now there are troops within the organization that exclude BOYS. They had that with the Girl Scouts, but they couldn’t tolerate the notion that boys had an exclusive domain. Social groups have been under attack to the extent that there are very few domains in which men can join other men without bringing the WAGs along for the party.

    I don’t buy the “private company” argument. Once a platform like Facebook or Twitter reaches critical mass and begins to buy up competitors, it’s a monopoly and must be regulated as a public utility. And of course, for stockholders that’s the death knell of ever-larger increases in equity, and so they will, as the author here does, argue strenuously against what we all know is right.

  43. Pingback: Mowa nienawiści - Ach ten Facebook - Golden Computer

  44. lukemac says

    I feel the real deep issue with social media such as FB lies in the way users can select their feeds.
    What seems intuitive, liking what you like and receiving more of the same content, is having a hugely detrimental effect on civil debate and discussion.
    People are receiving a constant diet of groupthink and confirmation bias, where everyone thinks the same as you and people are never exposed to counter-arguments.
    This lack of balance in social media is allowing emotional ploys to replace factual arguments. Once ensnared, users surrounded by a group of like-minded individuals become far more susceptible to emotional rhetoric.
    I feel that rather than blocking or censoring content, users should be exposed to counter-ideas.
    Sign up to a right-wing feed and also be sent some left-wing material; I think the old 80/20 rule should apply here. It’s all too easy to paint people of other political persuasions as dumb, heartless or evil, particularly when you haven’t bothered to consider their point of view. How could you be wrong? Everyone you know agrees with you! With such overwhelming support for your point of view, there’s no need to spend any time developing a solid argument based on facts and evidence. It’s far easier and more gratifying to jeer at the opposing team and be cheered on by your support base, with your position receiving many likes from other like-minded users.
    I believe the social media companies should pursue a policy of balance and exposure, and this would go a long way toward recentering debate and discussion, making people more thoughtful and understanding of others’ positions and ideas.

  45. Pingback: Facebook: a digital boot stamping on a human face, forever – Hector Drummond

  46. Rachel says

    From a former Facebook employee who DID work directly on content policies and content moderation – this article is irresponsibly researched and written. There are so many 100% factually untrue statements here that I don’t know where to begin. It’s too bad you didn’t better educate yourself about what your colleagues were up to before you completely threw them under the bus. Shame on you, and shame on this website for giving you a platform for this sloppily-written crap.

    • Rachel, it would be helpful if you would begin somewhere. Throw us a bone?

  47. Pingback: Wednesday What We’re Reading (Feb. 13, 2019) | The Soapbox

    As expected, no mention of Myanmar, where Facebook was a major factor in the genocide. Facebook is free in Myanmar, so it is the ONLY source of information for most people, which means that your “freedom of speech” becomes the factual information for most people in Myanmar. While we debate freedom of speech from our comfortable living rooms, people are dying in Myanmar because one company is so hell-bent on growth that they just don’t care. And I am not even going into Cambridge Analytica, Russia, Facebook’s violation of Apple guidelines and countless other scandals.

Comments are closed.