The article that follows is the third instalment of “Who Controls the Platform?”—a multi-part Quillette series authored by social-media insiders. Our editors invite submissions to this series, which may be directed to pitch@quillette.com.
The internet is set for a renaissance-level transformation that will see users migrate to more open networks and corporate models. Popular web personalities are starting to discuss the need to give less of our time and money to entities that silence us. Comedy itself is experiencing an existential crisis. The founders of Instagram, WhatsApp and Oculus—all bought by Facebook—have left their new corporate master in reaction to issues of privacy and censorship. It isn’t a coincidence that all of this is happening at the same time.
This is about more than social networks. It’s about all forms of digital technology. What browser are you using right now? Get off Safari, Chrome and Edge. Get on Firefox, Tor and Brave. Technologies that we feed will grow. Technologies that we avoid will self-correct or wither. Examples such as GNU/Linux, Wikipedia, WordPress, and Bitcoin have already proven that open-source systems can thrive to the point of becoming global standards for technology, information-sharing, and commerce, and can even grow into multi-billion-dollar projects.
Even if you don’t care about access to source code, privacy, or decentralized global infrastructure, there is still a reason to demand free software (free as in freedom, not free beer). It’s the same reason you want your food labeled: the experts who do care need that access in order to audit the apps. This allows them to make sure your newsfeed isn’t secretly funneling your likes and impressions into the void in favor of more suitable advertisements. Sunlight disinfects, and it also causes more secure software to evolve through ruthless peer review.
My own corporate trajectory was shaped by my hunch that there was room for a social network that would encompass a more honest and complete representation of its users’ thoughts, interests and beliefs. The old architecture of the web had forced users to choose between dumbed-down silos of app-mediated communication on one hand, and a wild, unstructured subculture of knowledge exchange on the other.
And so emerged Minds, a social media network I co-founded in 2011. Our blockchain-driven system now has several million users leveraging a fully open-source stack and micro-crypto-economy, which allows creators to earn token revenue for contributions without fear of demonetization or algorithm manipulation. We are systematically reproducing the functionality of closed-source legacy juggernauts, and we just launched video-conferencing services to complement our existing groups, blogs, newsfeed, encrypted messenger, voting system, wallets, and video and photo hosting. Having raised over $1m in a record-breaking round of community-sourced equity funding, we also figured we’d bring a crowdfunding tool to the app. It was our destiny to be co-owned by our community.
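To make the idea of a contribution-based token economy concrete, here is a minimal sketch of how a daily reward pool might be split among creators in proportion to their activity. The weights, names, and pool model below are hypothetical illustrations, not Minds’ actual reward logic.

```python
# A purely illustrative sketch of contribution-based token rewards.
# The weights and the daily-pool model are hypothetical assumptions,
# not Minds' actual smart-contract logic.
from collections import defaultdict

# Hypothetical engagement weights: how much each action contributes
# to a creator's share of the daily reward pool.
WEIGHTS = {"post": 1.0, "comment": 0.5, "upvote_received": 0.25}

def daily_token_rewards(events, pool_tokens):
    """Split a fixed daily pool of tokens among creators in proportion
    to their weighted contributions, with no central editor deciding
    whose content 'deserves' monetization."""
    scores = defaultdict(float)
    for user, action in events:
        scores[user] += WEIGHTS.get(action, 0.0)
    total = sum(scores.values())
    if total == 0:
        return {}
    return {user: pool_tokens * score / total for user, score in scores.items()}

# Example: two creators share a 100-token pool by contribution, not by fiat.
events = [("alice", "post"), ("alice", "upvote_received"),
          ("bob", "post"), ("bob", "comment")]
print(daily_token_rewards(events, pool_tokens=100))
# -> {'alice': 45.45..., 'bob': 54.54...}
```

The design point is that payouts follow a transparent, auditable formula rather than an opaque, centrally tuned algorithm.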
The legacy social networks have betrayed the trust of their users to such an extent that their brands are essentially unsalvageable. Data has been compromised and content censored. The spy robots live in our pockets and on our desktops. And their malign effect on digital life cannot be concealed behind cute logos.
Many alternative systems have popped up in response to such concerns. But the only litmus test that matters is whether they share their source code and algorithms with users. Do they give you control? Are the blueprints publicly available? Does the community have a voice in the direction of the entity? If not, it isn’t part of the transformation we need. That’s not to say such systems aren’t moving in the right direction, but compromise when it comes to transparency will only end in an imbalance of power between corporations and communities.
Most users want less violence, racism, sexism and bigotry online—regardless of their position on the social or political spectrum. Unfortunately, there is a clear split in opinion about the best way to address the problem. One common, simplistic approach is to censor what is clearly hateful and to tilt toward safety in gray areas. Some believe this approach works. And it might, in the short run. The authors of a 2017 study analyzed data from 100 million Reddit posts created before and after a bundle of controversial subreddits was banned. They found that “more accounts than expected discontinued their use on the site, and accounts that stayed after the ban drastically reduced their hate speech.”
This might sound like a win, but closer investigation reveals that, in anything but the very short term, it’s not so simple. The study goes on to point out that censorship may simply serve to “relocate such behavior” to different areas of the web: “In a sense, Reddit has made these users (from banned subreddits) someone else’s problem. To be clear, from a macro perspective, Reddit’s actions likely did not make the internet safer or less hateful. One possible interpretation, given the evidence at hand, is that the ban drove the users from these banned subreddits to darker corners of the internet.” (Despite this, numerous mainstream publications came out with headlines pronouncing the ban a total success.)
More recently, in April 2018, Reddit CEO Steve Huffman came under fire for his foggy responses surrounding the same issue. A Verge piece titled, in part, “Reddit CEO Says Racism Is Permitted on the Platform,” cites a specific exchange in an open online forum:
“I need clarification on something: Is obvious open racism, including slurs, against Reddit rules or not?” asked a Reddit user called chlomyster. “It’s not,” Huffman…responded. “On Reddit, the way in which we think about speech is to separate behavior from beliefs. This means on Reddit there will be people with beliefs different from your own, sometimes extremely so. When users’ actions conflict with our content policies, we take action…Our approach to governance is that communities can set appropriate standards around language for themselves. Many communities have rules around speech that are more restrictive than our own, and we fully support those rules.”
After the article’s publication, Huffman issued additional clarifications, prompting the Verge to publish a follow-up piece titled “Reddit CEO Steve Huffman Clarifies that Racism Is Not Welcome on the Platform.” By stitching together the information contained in the two articles, one might produce a short-form distillation of the Reddit policy as “Hate speech isn’t welcome, but it’s allowed.” Huffman seems to understand the importance of protecting free speech, but he is also bogged down by a need to appease aggrieved parties—which inevitably leads to contradictory actions and policies that infuriate and confuse rank-and-file Reddit users.
Huffman told The Verge that “I try to stay neutral on most political topics, but this isn’t one of them”—which sounds admirable. But a growing body of evidence shows the censorship approach to be counterproductive, as documented in a 2015 Australian academic study titled “The Streisand Effect and Censorship Backfire.” The title references a well-known phenomenon whose name traces to efforts by celebrity Barbra Streisand to block access to photos of her Malibu mansion in 2003. Streisand sued both the photographer and the photo-sales company for violating privacy laws. Thanks to the publicity generated by this legal campaign, photos of her home were downloaded more than 400,000 times within the space of a month. The same principle applies, writ large, to prohibited ideas more generally: Tell someone they can’t talk about X, and you will create demand for X-related content.
Some activists claim that hateful rhetoric is so disruptive—and even inherently violent—that state intervention is required. But I am more persuaded by the words of U.S. Supreme Court Justice Louis Brandeis, whose concurrence in the landmark 1927 case of Whitney v. California warned that “if there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.” Brandeis’ view was vindicated in Brandenburg v. Ohio (1969), which stipulated that the state should step in to silence hate speech only when the speech is of such a character as to have the intended and likely effect of sparking real and imminent violence.
Hate groups do exist, and it is only natural that outraged observers would look to the government or corporations to shut them down. But empirical evidence suggests that achieving social peace is easier when you allow the free exchange of ideas. Embracing people with whom you disagree can even be personally transformative, as modeled by the outreach efforts of legendary blues musician Daryl Davis.
Davis has an odd-seeming hobby of befriending members of the Ku Klux Klan, as he described in this interview. According to Davis, “Once the friendship blossoms, the klansmen realize that their hate may be misguided.” Simply by meeting with Klansmen, Davis reportedly has inspired over 200 members to give up their robes.
The same approach has been promoted by the ACLU, which once commissioned an advertisement in which a woman wearing a hijab was pictured standing alongside graffiti that read “Muslims go home.” In the next frame, young men held signs proclaiming “freedom of religion” and “love thy neighbor.” The ad ended with the exhortation to “fight hate speech with more speech.”
Of course, it’s hard to start a dialogue with internet trolls, who pollute the web with unconstructive, insensitive, and polarizing comments as a means to gain attention and intimidate ideological opponents. And as this Slate article points out, the damage inflicted on users is not to be taken lightly. However, the knee-jerk reaction to simply censor these trolls might do more harm than good.
Banning sadistic trolls from mainstream fora serves to push them into desolate corners where their impulses will fester and intensify. The authors of a 2017 UNESCO study say that banning trolls is like playing “whack-a-mole”: They will just pop up somewhere else. A Brookings study similarly concluded that removing the accounts of trolls typically “increase[s] the speed and intensity of radicalization for those who do manage to enter the network.”
While having a dialogue with trolls is difficult, it isn’t always impossible. And evidence-based research suggests that engaging even toxic ideological opponents in discussion sometimes can give them a chance to express themselves in a more positive way—while defusing their ability to create the friction and polarization that always attends the use of de-platforming. Which is to say: We don’t need more censorship. We need more soldiers in the fight for rationalism and civility.
One of the most persuasive justifications for online moderation is that it helps protect children. But a study in the Journal of Pediatrics shows that at least one in six kids has had a negative online experience—notwithstanding the heavy moderation that already takes place on most popular online fora. According to the study author, Andrew K. Przybylski of the Oxford Internet Institute, “It’s kind of crazy that so much time and effort and money is spent to protect kids in this way when we don’t know if it’s effective at all.”
Another study, this one conducted by the American Library Association, shows that controlling access to online content could harm the educational process, because the filters that are placed on school networks can prevent students from creating and sharing content: “Schools that over-filter restrict students from learning key digital readiness skills that are vital for the rest of their lives. Over-blocking in schools hampers students from developing their online presence and fully understanding the extent and permanence of their digital footprint.”
In 2012, Tumblr announced that it would close blogs that promoted self-harm. One large group of users affected by this policy consisted of proponents of the so-called pro-ana movement—which encourages the dangerous, life-threatening behaviors associated with anorexia, bulimia, and other eating disorders. They were forced to find another forum in which to engage with each other. The result was an example of what’s often called the toothpaste-tube effect, as described by researcher Paola Tubaro: “By forcing blogs to converge into one of the bigger clusters, censorship encourages the formation of densely-knitted, almost impenetrable ana-mia cliques. This favours bonding, [and] pro-ana-mia bloggers will tend to exchange messages, links and images among themselves and to exclude other information sources.”
But if censorship doesn’t work, readers may ask, how should social networks help improve society and make users happy? The answer is that, since everyone has their own definition of happiness, it makes the most sense to give users the power to moderate their own experiences. Giving people the opportunity to think about the content with which they interact—as opposed to having a centralized authority determine that for them on a top-down basis—helps develop self-discipline. While some users will use this power to curate their own echo chambers (which, as indicated in this 2016 Wired article, comes with drawbacks), it certainly beats having a one-size-fits-all approach set down by government officials or corporate bureaucrats in Silicon Valley.
One solution is to add a customizable language tool that can filter out language and content that individual users find offensive. Instagram took this approach (on top of its explicit censorship policies). There are few comprehensive studies on how such an approach works in practice. But it nevertheless should be seen as one tool, among others, that could help users tailor their online experiences. Another add-on approach, for users who truly have been traumatized by extreme forms of online harassment, would be for social media companies to partner with groups such as Talkspace, which help users connect with professional therapists in times of crisis.
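To illustrate, here is a minimal sketch of how such a user-controlled filter might work, assuming a simple per-user keyword blocklist applied on the client side. The names below are hypothetical and do not reflect Instagram’s or any other platform’s actual implementation.

```python
# A minimal sketch of user-side moderation: each user maintains a personal
# blocklist, and the client hides matching posts instead of a central
# authority deleting them network-wide. All names here are hypothetical.
import re

class UserFilter:
    def __init__(self):
        self.blocked_terms = set()

    def add_term(self, term):
        """Let the user add a word or phrase they never want to see."""
        self.blocked_terms.add(term.lower())

    def allows(self, post_text):
        """Return True if the post contains none of the blocked terms."""
        text = post_text.lower()
        return not any(
            re.search(r"\b" + re.escape(term) + r"\b", text)
            for term in self.blocked_terms
        )

def curate_feed(posts, user_filter):
    """Hide rather than delete: filtered posts stay on the network,
    but this user's client simply does not render them."""
    return [p for p in posts if user_filter.allows(p)]

# Example: the decision about what to hide rests with the individual user.
f = UserFilter()
f.add_term("spoiler")
print(curate_feed(["Great game today!", "SPOILER: the hero dies."], f))
# -> ['Great game today!']
```

The key design choice is that the blocking decision lives with the reader, not the platform: the same post remains visible to anyone who has not chosen to hide it.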
In some parts of the world, such as China, governments are able to censor the entire internet experience of most citizens. That is not something many activists advocate in the West—not only because it would be regarded as dictatorial, but because censors correctly believe they can accomplish most of what they want just by censoring social media, which is the spigot through which most users experience the entire internet. And a recent tragedy that occurred outside the YouTube headquarters in San Bruno, California, symbolizes just how much control social media companies really do have.
Nasim Najafi Aghdam, a 39-year-old woman from San Diego, shot three people and then killed herself after complaining about discrimination and censorship by YouTube. Her family had warned police that, upset about the company’s policies, she might be headed to its headquarters. Aghdam’s crimes certainly can’t be laid at the feet of YouTube. She is not a free speech martyr, and YouTube (which is owned by Google) has every legal right to oversee its network as its officials see fit. But these horrifying actions do speak to the popular (and correct) perception that social media companies have a lot of control over people’s lives—especially the lives of entertainers, writers, musicians, activists and online entrepreneurs whose whole livelihood often hinges on access to one or two high-traffic web sites. Even among ordinary web users who would never dream of committing a violent crime, there is outrage about the secretive algorithms used by social media giants, which diminish the reach of user content and demonetize it as well, creating a situation some have called “the adpocalypse.”
What is the path forward? My social network, Minds, will be part of the solution, I hope. But one network can’t do the job alone. What will be needed in coming years is a broad alliance of other networks, thought leaders, scientists, NGOs, universities, tech and finance corporations and governments to commit to a global online standard centered on free speech and open-source technology.
The obvious precedent for this is the creation of the internet itself in the 20th century. And there is no reason why that same model could not be applied to the social media overlay that now sits on top of the bare-metal digital communications protocols first established in the middle decades of the Cold War. At the same time, users should be empowered with the controls and filtering tools they need to self-curate.
It is essential that participants in such a project come from all points on the political spectrum, so that it is not seen as an effort to boost the influence of any particular constituency. It will require sensitivity to social issues, but also call on users to eschew a posture of victimhood. As politicized as most forms of social media have become, the effort to reinvent social media must itself be free of such influences.