
How Radical Transparency Cures Web Censorship and Surveillance

The article that follows is the third instalment of “Who Controls the Platform?”—a multi-part Quillette series authored by social-media insiders. Our editors invite submissions to this series, which may be directed to pitch@quillette.com.

The internet is set for a renaissance-level transformation that will see users migrate to more open networks and corporate models. Popular web personalities are starting to discuss the need to give less of our time and money to entities that silence us. Comedy itself is experiencing an existential crisis. The founders of Instagram, WhatsApp and Oculus—all bought by Facebook—have left their new corporate master in reaction to issues of privacy and censorship. It isn’t a coincidence that all of this is happening at the same time.

This is about more than social networks. It’s about all forms of digital technology. What browser are you using right now? Get off Safari, Chrome and Edge. Get on Firefox, Tor and Brave. Technologies that we feed will grow. Technologies that we avoid will self-correct or wither. It’s already been proven through such examples as GNU/Linux, Wikipedia, WordPress, and Bitcoin that open-source systems can thrive to the point of becoming global standards for technology, information-sharing, commerce, and even multi-billion-dollar projects.

Whether or not you personally care about access to source code, privacy, or decentralized global infrastructure has little bearing on why you should demand free software (free as in freedom, not free beer). It’s the same reason you want your food labeled even if you never read the label: the experts who do care need that access in order to audit the apps. That access lets them make sure your newsfeed isn’t secretly funneling likes and impressions into the void in favor of more suitable advertisements. Sunlight disinfects, and it simultaneously causes more secure software to evolve through ruthless peer review.

My own corporate trajectory was shaped by my hunch that there was room for a social network that would encompass a more honest and complete representation of its users’ thoughts, interests and beliefs. The old architecture of the web had forced them to choose between dumbed down silos of app-mediated communications on one hand, and a wild, unstructured subculture of knowledge exchange on the other.

And so emerged Minds, a social media network I co-founded in 2011. Our blockchain-driven system now has several million users leveraging a fully open-source stack and micro-crypto-economy, which allows creators to earn token revenue for their contributions without fear of demonetization or algorithmic manipulation. We are systematically reproducing the functionality of closed-source legacy juggernauts, and just launched video-conferencing services to complement our existing groups, blogs, newsfeed, encrypted messenger, voting system, wallets, video and photo hosting. We also figured we’d bring a crowdfunding tool to the app after raising over $1m in a record-breaking round of community-sourced equity funding. It was our destiny to be co-owned by our community.
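To illustrate the general shape of such a creator-reward mechanic, here is a minimal sketch of a proportional daily token payout. The pool size, engagement weights, and function names are all invented for illustration; this is not Minds’ actual contract logic or reward formula.

```python
# A hypothetical sketch of a proportional creator-reward payout.
# The pool size, engagement weights, and names are illustrative
# assumptions, not Minds' actual token contract or formula.

DAILY_POOL = 1_000.0  # tokens distributed per day (assumed)

# Assumed relative worth of each kind of engagement event
WEIGHTS = {"vote": 1, "comment": 2, "remind": 4}

def daily_payouts(engagement: dict[str, dict[str, int]]) -> dict[str, float]:
    """Split DAILY_POOL among creators in proportion to weighted engagement."""
    scores = {
        user: sum(WEIGHTS[kind] * count for kind, count in events.items())
        for user, events in engagement.items()
    }
    total = sum(scores.values())
    if total == 0:
        return {user: 0.0 for user in engagement}
    return {user: DAILY_POOL * score / total for user, score in scores.items()}

if __name__ == "__main__":
    today = {
        "alice": {"vote": 120, "comment": 10, "remind": 3},
        "bob": {"vote": 40, "comment": 5, "remind": 1},
    }
    for user, tokens in daily_payouts(today).items():
        print(f"{user}: {tokens:.2f} tokens")
```

The point of a scheme like this is auditability: if the pool and the weights are published, any creator can recompute their own share, which is exactly the kind of verification a closed algorithm forecloses.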

The legacy social networks have betrayed the trust of their users to such an extent that their brands are essentially unsalvageable. Data has been compromised and content censored. The spy robots live in our pockets and on our desktops. And their malign effect on digital life cannot be concealed behind cute logos.

Many alternative systems have popped up in response to such concerns. But the only litmus test that matters is whether they share their source code and algorithms with users. Do they give you control? Are the blueprints publicly available? Does the community have a voice in the direction of the entity? If not, it isn’t part of the transformation we need. That’s not to say such systems aren’t moving in the right direction, but compromise when it comes to transparency will only end in an imbalance of power between corporations and communities.

*     *     *


Most users want less violence, racism, sexism and bigotry online—regardless of their position on the social or political spectrum. Unfortunately, there is a clear split in opinion about the best way to address the problem. One common, simplistic approach is to censor what is clearly hateful and to tilt toward safety in gray areas. Some believe this approach works. And it might, in the short run. The authors of a 2017 study analyzed data from 100 million Reddit posts created before and after a bundle of controversial subreddits were banned. They found that “more accounts than expected discontinued their use on the site, and accounts that stayed after the ban drastically reduced their hate speech.”

This might sound like a win, but closer investigation reveals that, in anything but the very short term, it’s not so simple. The study goes on to point out that censorship may simply serve to “relocate such behavior” to different areas of the web: “In a sense, Reddit has made these users (from banned subreddits) someone else’s problem. To be clear, from a macro perspective, Reddit’s actions likely did not make the internet safer or less hateful. One possible interpretation, given the evidence at hand, is that the ban drove the users from these banned subreddits to darker corners of the internet.” (Despite this, numerous mainstream publications came out with headlines pronouncing the study a total success.)

More recently, in April 2018, Reddit CEO Steve Huffman came under fire for his foggy responses surrounding the same issue. A Verge piece titled, in part, “Reddit CEO Says Racism Is Permitted on the Platform,” cites a specific exchange in an open online forum:

“I need clarification on something: Is obvious open racism, including slurs, against Reddit rules or not?” asked a Reddit user called chlomyster. “It’s not,” Huffman…responded. “On Reddit, the way in which we think about speech is to separate behavior from beliefs. This means on Reddit there will be people with beliefs different from your own, sometimes extremely so. When users’ actions conflict with our content policies, we take action…Our approach to governance is that communities can set appropriate standards around language for themselves. Many communities have rules around speech that are more restrictive than our own, and we fully support those rules.”

After the article’s publication, Huffman issued additional clarifications, prompting the Verge to publish a follow-up piece titled “Reddit CEO Steve Huffman Clarifies that Racism Is Not Welcome on the Platform.” By stitching together the information contained in the two articles, one might produce a short-form distillation of the Reddit policy as “Hate speech isn’t welcome, but it’s allowed.” Huffman seems to understand the importance of protecting free speech, but he also is bogged down by a need to appease aggrieved parties—which inevitably leads to contradictory actions and policies that infuriate and confuse rank-and-file Reddit users.

Huffman told The Verge that “I try to stay neutral on most political topics, but this isn’t one of them”—which sounds admirable. But a growing body of evidence shows this kind of suppression to be counterproductive, as documented in a 2015 Australian academic study titled “The Streisand Effect and Censorship Backfire.” The title references a well-known phenomenon whose name traces to efforts by celebrity Barbra Streisand to block access to photos of her Malibu mansion in 2003. Streisand sued both the photographer and the photo-sales company for violating privacy laws. Thanks to the publicity generated by this legal campaign, photos of her home were downloaded more than 400,000 times within the space of a month. The same principle applies, writ large, to prohibited ideas more generally: Tell someone they can’t talk about X, and you will create demand for X-related content.

Some activists claim that hateful rhetoric is so disruptive—and even inherently violent—that state intervention is required. But I am more persuaded by the words of U.S. Supreme Court Justice Louis Brandeis, whose concurrence in the landmark case of Whitney v. California (1927) warned that “if there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.” Brandeis’ view was vindicated in Brandenburg v. Ohio (1969), which stipulated that the state should step in to silence hate speech only when the speech is of such a character as to have the intended and likely effect of sparking real and imminent violence.

Hate groups do exist, and it is only natural that outraged observers would look to the government or corporations to shut them down. But empirical evidence proves that achieving social peace is easier when you allow the free exchange of ideas. Embracing people with whom you disagree can even be personally transformative, as modeled by the outreach efforts of legendary blues musician Daryl Davis.

Davis has an odd-seeming hobby of befriending members of the Ku Klux Klan, as he described in this interview. According to Davis, “Once the friendship blossoms, the klansmen realize that their hate may be misguided.” Simply by meeting with Klansmen, Davis reportedly has inspired over 200 members to give up their robes.

The same approach has been promoted by the ACLU, which once commissioned an advertisement in which a woman wearing a hijab was pictured standing alongside graffiti that read “Muslims go home.” In the next frame, young men held signs proclaiming “freedom of religion” and “love thy neighbor.” The ad ended with the exhortation to “fight hate speech with more speech.”

Of course, it’s hard to start a dialogue with internet trolls, who pollute the web with unconstructive, insensitive, and polarizing comments as a means to gain attention and intimidate ideological opponents. And as this Slate article points out, the damage inflicted on users is not to be taken lightly. However, the knee-jerk reaction to simply censor these trolls might do more harm than good.

Banning sadistic trolls in mainstream fora serves to push them into desolate corners where their impulses will fester and intensify. The authors of a 2017 UNESCO study say that banning trolls is like playing “whack-a-mole”: They will just pop up somewhere else. A Brookings study similarly concluded that removing the accounts of trolls typically “increase[s] the speed and intensity of radicalization for those who do manage to enter the network.”

While having a dialogue with trolls is difficult, it isn’t always impossible. And evidence-based research suggests that engaging even toxic ideological opponents in discussion sometimes can give them a chance to express themselves in a more positive way—while defusing their ability to create the friction and polarization that always attend the use of de-platforming. Which is to say: We don’t need more censorship. We need more soldiers in the fight for rationalism and civility.

One of the most persuasive justifications for online moderation is that it helps protect children. But a study in the Journal of Pediatrics shows that at least one in six kids has had a negative online experience—notwithstanding the heavy moderation that already takes place on most popular online fora. According to the study author, Andrew K. Przybylski of the Oxford Internet Institute, “It’s kind of crazy that so much time and effort and money is spent to protect kids in this way when we don’t know if it’s effective at all.”

Another study, this one conducted by the American Library Association, shows that controlling access to online content could harm the educational process, because the filters that are placed on school networks can prevent students from creating and sharing content: “Schools that over-filter restrict students from learning key digital readiness skills that are vital for the rest of their lives. Over-blocking in schools hampers students from developing their online presence and fully understanding the extent and permanence of their digital footprint.”

In 2012, Tumblr announced that it would close blogs that promoted self-harm. One large group of users affected by this policy consisted of proponents of the so-called pro-ana movement—which encourages the dangerous, life-threatening behaviors associated with anorexia, bulimia, and other eating disorders. They were forced to find another forum to engage with each other. The result was an example of what’s often called the toothpaste-tube effect, as described by researcher Paola Tubaro: “By forcing blogs to converge into one of the bigger clusters, censorship encourages the formation of densely-knitted, almost impenetrable ana-mia cliques. This favours bonding, [as] pro-ana-mia bloggers will tend to exchange messages, links and images among themselves and to exclude other information sources.”

But if censorship doesn’t work, readers may ask, how should social networks help improve society and make users happy? The answer is that, since everyone has their own definition of happiness, it makes the most sense to give users the power to moderate their own experiences. Giving people the opportunity to think about the content with which they interact—as opposed to having a centralized authority determine that for them on a top-down basis—helps develop self-discipline. While some users will use this power to curate their own echo chambers (which, as indicated in this 2016 Wired article, comes with drawbacks), it certainly beats having a one-size-fits-all approach set down by government officials or corporate bureaucrats in Silicon Valley.

One solution is to add a customizable language tool that can filter out language and content that individual users find offensive. Instagram took this approach (on top of its clear censorship policies). There are few comprehensive studies on how such an approach works in practice. But it nevertheless should be seen as one tool, among others, that could help users tailor their online experiences. Another add-on approach, for users who truly have been traumatized by extreme forms of online harassment, would be for social media companies to partner with groups such as Talkspace, which help users connect with professional therapists in times of crisis.
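To make the self-moderation idea concrete, here is a minimal sketch of a per-user mute filter, assuming a simple case-insensitive, whole-word match. The function names and matching rules are illustrative assumptions, not Instagram’s or any platform’s actual implementation, and real filters also handle misspellings, leetspeak, and context.

```python
import re

# A minimal sketch of a per-user mute filter. The whole-word,
# case-insensitive matching here is an assumption for illustration;
# production filters also handle misspellings, leetspeak, and context.

def build_filter(muted_words: list[str]) -> re.Pattern:
    """Compile one case-insensitive pattern matching any muted word."""
    escaped = (re.escape(word) for word in muted_words)
    return re.compile(r"\b(?:" + "|".join(escaped) + r")\b", re.IGNORECASE)

def visible_posts(posts: list[str], muted_words: list[str]) -> list[str]:
    """Return only the posts that contain none of this user's muted words."""
    if not muted_words:
        return posts  # nothing muted, nothing hidden
    pattern = build_filter(muted_words)
    return [post for post in posts if not pattern.search(post)]

if __name__ == "__main__":
    feed = ["Lovely sunset tonight", "You absolute walnut", "New post is up"]
    print(visible_posts(feed, ["walnut"]))
    # -> ['Lovely sunset tonight', 'New post is up']
```

The key design point is that the filter runs on the reader’s side: a muted post disappears only from that reader’s feed, so one user’s sensitivities never shrink anyone else’s audience.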

In some parts of the world, such as China, governments are able to censor the entire internet experience of most citizens. That is not something many activists advocate in the West—not only because it would be regarded as dictatorial, but because censors correctly believe they can accomplish most of what they want just by censoring social media, which is the spigot through which most users experience the entire internet. And a recent tragedy that occurred outside the YouTube headquarters in San Bruno, California symbolizes just how much control social media companies really do have.

Nasim Najafi Aghdam, a 39-year-old woman from San Diego, shot three people and then killed herself after complaining about discrimination and censorship by YouTube. Her family had warned police that she was upset about the company’s policies and might be headed to its headquarters. Aghdam’s crimes certainly can’t be laid at the feet of YouTube. She is not a free speech martyr, and YouTube (which is owned by Google) has every legal right to oversee its network as its officials see fit. But these horrifying actions do speak to the popular (and correct) perception that social media companies have a lot of control over people’s lives—especially entertainers, writers, musicians, activists and online entrepreneurs whose whole livelihood often hinges on access to one or two high-traffic websites. Even among ordinary web users who would never dream of committing a violent crime, there is outrage about the secretive algorithms used by social media giants, which diminish the reach of user content and demonetize it as well, creating a situation some have called “the adpocalypse.”

What is the path forward? My social network, Minds, will be part of the solution, I hope. But one network can’t do the job alone. What will be needed in coming years is a broad alliance of other networks, thought leaders, scientists, NGOs, universities, tech and finance corporations and governments to commit to a global online standard centered on free speech and open-source technology.

The obvious precedent for this is the creation of the internet itself in the 20th century. And there is no reason why that same model could not be applied to the social media overlay that now sits on top of the bare-metal digital communications protocols first established in the middle decades of the Cold War. At the same time, users should be empowered with the controls and filtering tools they need to self-curate.

It is essential that participants in such a project come from all points on the political spectrum, so that it is not seen as an effort to boost the influence of any particular constituency. It will require sensitivity to social issues, but will also call on users to eschew a posture of victimhood. As politicized as most forms of social media have become, the effort to reinvent social media must itself be free of such influences.


Bill Ottman is CEO and co-founder of Minds.

27 Comments

  1. D.B. Cooper says

    There would appear to be a degree of consilience between centrally planned, top-down models of censorship and those of economies. Consider, for example, the similarities between what Ottman describes as the toothpaste-tube effect and the black markets that arise in places like Cuba (or even America’s illicit drug market, for that matter). Or, further, between the ‘sunlight is the best disinfectant’ mantra and Daryl Davis’ outreach efforts—both of which censorship prevents—and the informative feedback mechanism (market prices) that centrally planned economies necessarily lack.

    Of course, further examples can likely be had, but ultimately it seems to me that the fatal flaw in both cases lies not with the system, but with people. I would argue, as many have, that any centrally planned system owning sufficient discretionary power will, necessarily, devolve toward totalitarianism.

  2. Nick says

    The only thing I find off-putting about Minds is the name. It is very difficult to pronounce distinctly from something that sounds like “mines” – something which is very significant when you’re trying to grow your audience – and it comes across as much more intellectual than it needs to be. Let’s be honest, Minds.com would benefit from Let’s Play videos and cookery videos just as much as it would benefit from Dr Peterson’s latest lecture on the nature of Chaos.

    • Asenath Waite says

      @Nick

      That’s valid. Although Twitter is about the worst name ever, and they seem to be doing pretty well.

      • stoned says

        Twitter’s the worst but it perfectly describes what it is: disposable trash media. Dumb people with 140-character opinions get really into it and the company starts data mining. It’s like the web economy is heavily skewed toward selling personal data.

        • The Ulcer says

          You hit the nail on the head: dumb opinions. People are lazy-minded which is why censorship and easy outrage are the order of the day. Who has the fortitude or capacity to build a rational and nuanced perspective these days, when you can just cook prepackaged ideas in the microwave?

    • bobdub says

      Yes. I really didn’t like the name at all – and it took me a few months before I actually brought myself to hop over to Minds and find out what it was about. I also didn’t like the ‘remind’ and ‘channel’ monikers. Now that I am full-time on the platform, I find that the slight strangeness has become a positive for me. Probably some well-studied psychological phenomenon…

  3. Dan Love says

    I felt the article was lukewarm and that the author does not understand the depth of the issue involved.

    As an example of the lack of depth, consider the author’s statement “Most users want less violence, racism, sexism and bigotry online—regardless of their position on the social or political spectrum. Unfortunately, there is a clear split in opinion about the best way to address the problem.”

    This is just a banal talking point. To me, the problem is much deeper: what the hell do these terms – violence, racism, sexism, bigotry – mean in the first place? If we don’t agree on the problem, of course we won’t agree on a solution. The author ignores the incredible amount of disagreement over what those words mean. Indeed, their definitions have changed radically over just the last 4 years.

    These words are used as weapons, and one group of people’s definitions have very little overlap with those of the rest of us. For example, if we are going by an increasingly popular “leftist” definition for those words (a definition embraced by much of the media), then no, I don’t want less “violence”, “racism”, “sexism”, and “bigotry”, because people embracing a vogue leftist ideology, in no uncertain terms, believe disagreement with their ideology is violence, criticism of radical feminism is sexist, criticism of BLM is racist, and not being an SJW is bigoted.

    The people regulating these definitions have glaring double standards and exhibit massive hypocrisy. For example, they will allow anything whatsoever to be said about white men but will have a mental breakdown over simply questioning whether some women lie about being raped. Their definitions for these terms are one-sided and ideological.

    I deem tolerable half of what the left considers reprehensible, and I consider reprehensible half of what the left considers tolerable. Why would I think leftist problems are problems I share?

    Regulating violence, racism, sexism, and bigotry seems far too close to regulating ideology, because those words now carry ideological definitions people no longer agree on.

    • The way I see it, the only authoritative definitions of these words are specified in the dictionary – and once the leftists inevitably get their hands on the dictionaries and begin to vandalise them, we need only discount a definition if its etymology is either void, or states that a word’s origin is from sociology departments. 😛

    • E. Olson says

      Good comment Dan. The whole Leftist premise of words as “violence” is designed to shut down any point-of-view or objective facts (i.e. I’m offended by the truth) that don’t fit the victim hierarchy narrative.

    • david of Kirkland says

      And if so few want such “bad speech,” why is there so much of it created? Perhaps people actually enjoy discussing these things, and of course many enjoy it more when confronted with a true believer, as fanatics and fundamentalists can be entertaining to some.

    • Stephanie says

      Great comment, Dan. I’m sceptical of our ability to adequately counteract the regressive left so long as we accept their assumptions.

    • Dan, I stopped at the exact same place in the article. I thought, do I want “less violence, racism, sexism and bigotry online” ? Or do I really never think about such things at all.

      Where are these people coming across so much of this stuff that it becomes their primary focus? Is it possible they’re just overreacting to opposing viewpoints?

      Frankly I don’t want less of anything online except censorship. I can avoid what’s not for me. I want more sites like quillette and reason where comments are not censored. More more and more.

  4. stoned says

    This whole series is pretty milquetoast, but then again it’s about social media. We already came down from the transparency and decentralization high even before summer 2013, during the “social” Tor and Bitcoin experiment that failed.

    First sentence: pushing an agenda. Later: join my social network. I saw the title and thought that it “causes” instead of “cures” surveillance and censorship. The problem is that promises of techno-utopia made people forget what they knew since World War I: anything on the wire can be intercepted.

    Sorry friend, the government isn’t going to mandate FOSS and your social network, and that won’t fix anything anyway, but you might convince it to teach kids better computer skills. Like a class where they learn to host their own email (FOSS and/or Exchange).

    Then kids know that if they want to, they could run email lists and websites and know how public/permanent/insecure the protocols really are. I took out all the isms and blather for you.

    Dan, you’re a level ten ismist, my man.

  5. Ray Andrews (the dolphin) says

    I myself would like to be a part of whatever media outlets explicitly practice one form of censorship and that is the removal of stupid or trolling comments that have no intellectual content — but leaving all well presented ideas untouched. In short I just don’t want my time wasted by morons and if they were filtered out, I’d be grateful. Here at Quillette the number of moronic posts must be < 1% however and that's pretty damn fine.

    • Ray, Disqus has a block feature that works well for that. You’ll see no more comments from that person. If one comment is asinine, probably so are others.

      I hesitate before blocking though. Is it that I see trolling, ignorance, idiocy, or only a very different POV…

      • stoned says

        Man, I’ve read so much techno-utopia propaganda by now that it’s tiring, and most articles weren’t even advertising their pet social network. Why would you look toward the supposedly “neutral” tools of communications technology (that always contain exploitable vulnerabilities) to protect you from surveillance and censorship?

        Building a walled garden “social ecosystem” on the Ethereum blockchain doesn’t impress me. If anything it only proves the well known concept that a social network’s success still largely depends on producing actionable consumer bulk data that’s worth more than the users’ individual contributions. Minds has a potentially valuable product: an immutable financial ledger with organically produced annotations.

        I’m all for teaching computer literacy and using FOSS over proprietary closed source software but we have to be realistic. The solution isn’t “browse the author’s social media site with Firefox on Linux” because we must account for a global passive adversary and software that has bugs older than a good portion of its user base.

        Maybe if people grow up learning that the contents and metadata of every email can be recorded by every server it passes through, for example, people will rethink their relationship to technology. Like I said, Snowden already killed everyone’s faith in the hyped-up talking points the author uses to advertise his website.

  6. david of Kirkland says

    Blasphemy and heresy are the most hateful responses from the prior generation of the “correct speech” police, to protect those who have to hear it, and of course to save the children.

  7. Stephanie says

    “But the only litmus test that matters is whether they share their source code and algorithms with users.”

    I agree very much with this. When I got married and my new husband and I started joking about divorce, we both started getting ads on our phones for divorce lawyers. Alarmed, I installed a microphone blocker. However, there’s a Black Mirror episode that points out such an app is exactly where a nefarious actor would hide surveillance software. Without having access to the code, how can the consumer know for certain they aren’t being spied on?

    As for algorithms, when one of my fellow grad students started challenging comments I posted on Facebook news posts, Facebook decided to put those debates on my supervisor’s timeline. She’s not even my “friend,” and yet she says she was shown several such discussions. How did Facebook decide this was of interest to her? Considering how rarely I am shown anyone’s public comments, why are my comments being broadcast so broadly? Maybe I’m just getting paranoid, but this seems designed to “out” me as a conservative to my boss.

  8. a bee ee? says

    “Simply by meeting with Klansmen, Davis reportedly has inspired over 200 members to give up their robes.”

    That probably represents half of their entire membership these days.

  9. With pro-ana-mia, part of the problem may be the creation of algorithmic bubbles that fail to supply opposing viewpoints and instead encourage the viewing of similar materials.

    Why censor the problem when you know where these people are going for their information on the internet? Instead you should provide adverts for healthy eating, food, and counselling services. Force the opposing viewpoint (i.e. that anorexia and bulimia are unhealthy) into the information they are viewing. Provide assistance in an attempt to counter the very content you disagree with.

    Many social media problems are caused by networks assuming what viewers WANT to view, rather than by opening up the platform and thus encouraging discourse.

    Censorship in this case may actually be causing the deaths of vulnerable people who have not been offered help but instead have been demonised and banned.

    • Now just imagine if the SJWs decided en masse that Quillette discussions needed that kind of “healthy” alternative-viewpoint injection (beyond the occasional trolling I see now and then)…

  10. “To be clear, from a macro perspective, Reddit’s actions likely did not make the internet safer or less hateful.”

    It’s a pity that the author buys into the woke-speak claim that stuff that is merely offensive amounts to making someone “unsafe”. The whole concept of “hate speech” is also dubious.

  11. I really like this magazine and its commitment to defending enlightenment values, and this is an interesting series of articles about social media. There does seem to be an elephant in the room however….

    My understanding is that companies like facebook allowed companies like Cambridge Analytica and others to scrape users’ personal data; that data was shared with the russian government who used it/are using it in a psychological warfare campaign against the citizens of the west with the presumable goal of overthrowing those enlightenment values.

    I hope that future articles in this series will touch on this subject. I note that the first article was written by a former facebook employee – does he have any comment to make on this vital issue?

    • So was it also wrong when FB allowed similar data exposure in 2012, when Obama campaigners used the data to identify neighbors who, judging by their FB activity, were not on board with actively supporting his re-election but were likely to come around if solicited face-to-face?

      It does not seem, in effect, to be so harmful, considering that this kind of canvassing has traditionally been done in the past (e.g., using official voter registration data from cities/counties), just not with such specific “targeting data.” However, it is the same sort of disturbing use of social media personal data as the Analytica case, I would think.

  12. Craig WIllms says

    I will be checking this out. I’m interested in alternatives, but am skeptical… When I joined Facebook – the only social media I’m on – it was a blast; I’m older and I really enjoyed reconnecting with old friends. Within a year the bloom was off the rose; it’s worn out its welcome with me. It could be a better, much better experience.

  13. Pingback: MINDS does censor the far right (seen it) https://quillette.com/201… | Dr. Roy Schestowitz (罗伊)
