
Stifling Free Speech Online: Australia’s Misinformation Bill

Every censorship regime in history has claimed to be protecting the public. But no regime can have prior knowledge of what is true or good. It can only know what the approved narratives are.


In our Internet age, public discourse has largely moved onto digital platforms owned and controlled by a handful of private companies. Censorship by those companies has become more vigorous and constrictive than the government version, at least in the West. And now, to make matters worse, governments themselves have become increasingly convinced of the necessity of ensuring “web safety” by making online censorship mandatory.

Some more open digital platforms have emerged recently, especially following Elon Musk’s conversion of Twitter into “X.” Internet chatter has become far freer (if no less inane). But even as society has begun to reopen Internet discourse, governments have been trying to suppress unwanted online speech again, with tactics ranging from quiet deep-state puppeteering of digital platforms to explicit legislation restricting online speech in the EU, the UK, and now here in Australia, where the Minister for Communications, Michelle Rowland, has expressed her intent to hold digital platforms to account for their supposedly inadequate moderation of user-generated content.

Rowland’s own department’s experience of attempting to vet online content shows just how unfeasible it is to expect platforms to monitor all their user content. The Department of Communications asked the public for detailed proposals for a possible Misinformation and Disinformation Bill—but the department was quickly overwhelmed by the need to vet every proposal received for compliance with legal standards and for the absence of private information and offensive content. It took them a full month to publish the first 150 such proposals and four months altogether to vet the 2,418 proposals published (the opposition claims that a total of 23,000 submissions were received).

It is unclear what the purpose of reviewing every submission even was—why can’t people submit their own private information if they choose to do so? And why do we need to protect people from finding a bureaucratic document offensive? If the department itself finds the job of policing content so onerous, this surely suggests that websites shouldn’t have to do so.

The overall effect of this urge to moderate content has been to silence members of the public, in the name of a safetyist ideology that holds that platforms have a duty to protect their users rather than to respect them. Instead of allowing users to access their desired sources of information, websites are expected to shield them from sources deemed to be hate speech, mis- or disinformation, or otherwise harmful.


Every censorship regime in history has claimed to be simply protecting the public. But in fact, no regime can have privileged, prior knowledge of what is true or good. It can only know what the approved narratives are and uphold the status quo. As is usual in democratic politics, Western governments are being steered down this path less by a simple lust for power than by the efforts of an activist industry: in this case, a network of individuals and organizations that earn their keep by warning of the dangers of harmful information and taking it upon themselves to determine what information is “harmful” and what is “safe.”

Take, for example, the RMIT FactLab, a university research lab based at the Royal Melbourne Institute of Technology, initially funded by Meta (Facebook) to fact-check social media posts and recently suspended after an investigation by Sky News found that its fact checks had overwhelmingly favoured the “Yes” campaign in Australia’s recent Voice to Parliament referendum. The head of the FactLab, Russell Skelton, is a well-known left-wing partisan. RMIT’s ABC Fact Check (a separate organisation) is run in collaboration with the Australian Broadcasting Corporation. It attained some notoriety after being forced to apologise to pro-nuclear Australian businessman Dick Smith for wrongly declaring false his claim that “No country has ever been able to run entirely on renewables.” The fact-checkers had responded with a list of countries (Albania, Bhutan, Paraguay, and Nepal) with 100% renewable-powered electricity grids. But as the ABC has been forced to acknowledge, no country runs on electricity alone, and all these countries rely on non-renewable sources for a large fraction of their energy needs.

Another example, from the US, is the Hamilton 68 dashboard created by an NGO called the Alliance for Securing Democracy. The dashboard purported to track Russian trolls and bots, or, as the Alliance put it, conducted an “analysis” that “linked 600 Twitter accounts to Russian influence.” As journalist Matt Taibbi has documented, this dashboard was treated as a reliable source by journalists charting Russian influence on American politics. However, the (pre-Musk) Twitter moderation team discovered that the 600-odd accounts were “neither strongly Russian nor strongly bots”—they were mostly just conservatives who tweeted right-wing or pro-Russia talking points.

US security agencies have considerable influence over the big tech platforms, as the Twitter Files released in the wake of Elon Musk’s takeover revealed. Security agencies had regular meetings with high-level Twitter executives; the FBI would tag tweets en masse for moderation; and at least one former high-level FBI official who moved to Twitter was caught running interference against the Twitter Files reporters.

Of course, those in the business of providing fact-checking services are keen to stress how important it is to protect people from unfiltered information—after all, that is the service they are paid to provide.

But they are wrong.


In the US, the First Amendment protects citizens from censorship by government. But the First Amendment does not apply to interactions between private entities, where it potentially conflicts with other foundational freedoms. In private life, in their own homes and workplaces, people don’t have to tolerate speech they dislike. The tricky questions arise when this principle is applied to digital platforms that are privately owned but used by a large proportion of the general public. 

Digital platforms are not telephone lines: we expect them to filter the information they provide by, for example, removing spam or ranking search results. But we don’t expect them to make political or moral judgements. A search engine should rank results by how relevant they are to the user’s query, not by how much the moderators like them—just as, if an email message is sent straight to a spam folder, that should be because the user is likely to consider it spam, not because the email provider doesn’t like the sender.

Instead, the major platforms—the places where most modern public discourse takes place—have begun to filter based on what they think their users ought to want. This has given rise to calls for regulation in the name of free speech. In the US, however, such regulation is fraught, because there is a decent argument that, under the Constitution, the platforms themselves should be treated as if they were private individuals, with the freedom to choose whom they associate with. Lawyers in the US will have to settle that argument themselves. But other countries unconstrained by the US Constitution should use that freedom to tackle this threat to democracy and freedom of expression. Instead, they have done just the opposite.

Laws such as Britain’s Online Safety Bill and the EU’s Digital Services Act require platforms to speedily block illegal content. Australia’s proposed new law goes one step further: it requires platforms to suppress some legal content. The draft bill binds digital platforms to mandatory codes and standards that require them to block content considered to be misinformation. It defines “misinformation” in the following way:

For the purposes of this Schedule, dissemination of content using a digital service is misinformation on the digital service if: (a) the content contains information that is false, misleading or deceptive; and (b) the content is not excluded content for misinformation purposes; and (c) the content is provided on the digital service to one or more end-users in Australia; and (d) the provision of the content on the digital service is reasonably likely to cause or contribute to serious harm.

This definition is both vague and broad, and it does not limit itself to content that is prohibited in Australia. It also tilts the playing field: it defines “excluded content” as content produced by the Australian government and by officially recognised media and educational institutions, including those owned by foreign governments.


Like all such laws, Australia’s new bill pays lip service to free speech. Authorities are politely asked to consider the importance of freedom of expression when making their judgements—but no actionable limits are set on their censorship, nor are there any penalties for silencing people. The bill delegates decision-making power to platforms that can already block content at their discretion, without having to show that it violates any law. That discretion will be exercised in the context of laws that punish platforms for permitting speech of which people disapprove, but not for censoring truthful, legal speech. Decisions to censor any individual are left up to risk-averse corporations, which will be anxious not to attract the ire of a regulator.

These laws are not simply an expression of countries’ own constitutional norms. They are a dodge around those norms. Free speech traditionally flourishes in democracies because their citizens can’t be muzzled at the mere say-so of some censor appointed by a theocrat, strongman, or authoritarian. Even in democracies with nothing like America’s First Amendment, speech can traditionally only be limited by specific laws. But on the Internet, none of this applies. Posts can be quietly downranked or deleted at will. The platforms are not accountable to anyone and the censored have no legal recourse. Civil servants, government officials, security forces, and other agencies of the state can exploit this by pressuring platforms to mute their critics.


Luckily, people are beginning to challenge this ability to censor by proxy. As usual, Americans are taking the lead. In Missouri v. Biden, various plaintiffs, including the US states of Missouri and Louisiana, alleged that “numerous federal officials coerced social-media platforms into censoring certain social-media content, in violation of the First Amendment.” The US Fifth Circuit Court of Appeals agreed, finding:

that the White House, acting in concert with the Surgeon General’s office, likely (1) coerced the platforms to make their moderation decisions by way of intimidating messages and threats of adverse consequences, and (2) significantly encouraged the platforms’ decisions by commandeering their decision-making processes, both in violation of the First Amendment.

The Fifth Circuit Court issued a temporary injunction forbidding relevant officials from coercing or significantly encouraging social media companies to censor protected speech—even by indirect or informal means. The Supreme Court has stayed that injunction but is now hearing the case for itself.

Meanwhile, Texas and Florida have passed anti-discrimination statutes outlawing censorship by digital platforms. The Texas law makes it an offence for a platform to “block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression.” The Florida law also requires platforms to “publish the standards, including detailed definitions, it uses or has used for determining how to censor” and to apply those standards consistently.

These state-level statutes have little power to change the behaviour of global platforms, but they are forcing judges to articulate whether platforms have a constitutional right to moderate content in a capricious, inconsistent, or discriminatory manner. Legal challenges to both statutes have been filed, and the US Supreme Court has agreed to hear them. The Court’s ruling may pave the way for future federal-level reforms.

But in the end, all that statutes or constitutional lawfare can do is hold open a space for our most potent tool against private-sector censorship: societal responses, including by the free market. In response to the increase in censorship, free-thinking journals like the one you are currently reading have surged in popularity, while Substack has supercharged this phenomenon by providing a blogging platform for the orthodox and heterodox alike. Twitter/X has become a more open place since Elon Musk’s quixotic takeover. Rumble provides a viable choice for those who need an alternative to YouTube.

The official consensus as to which narratives should be supported and which should be suppressed is cracking. Yet even as voluntary private-sector censorship begins to falter, governments are shoring it up with innovatively illiberal measures. In the US, the First Amendment will provide some protection against such measures, but elsewhere the battle will be more political than legal. In their writings, proponents of the new laws never seem to consider whether the old norms already meet the challenges of the new digital age. They might argue that in a world in which posts can go viral within hours, it would simply be too expensive and time-consuming to have to prosecute every bad actor or otherwise be bound by the rule of law. Yet, far from being a novel response to new circumstances, this censorship method is a throwback to the dawn of the age of the printing press. Renaissance princes found it much easier to regulate printers than authors and much more effective to quietly strangle publication than to risk high-profile libel, blasphemy, or obscenity trials.

That approach failed. Today, the printing presses of the advanced world are remarkably free. Whatever laws might regulate the electronic media, the literati are free to put almost any kind of smut or nonsense into print. This right is the outcome of a Darwinian process. Those societies that rejected prepublication censorship developed into the most prosperous, powerful, and humane societies in the world. Hopefully, we will not have to relearn this lesson the hard way. People must push back against the language of online “safety” and “harm” and recommit to traditional liberal approaches. America’s dynamic entrepreneurs, stroppy governors, and hair-splitting constitutional lawyers are making a good start. But the rest of the world should not wait to follow America’s lead.

Liberalism in Australia is also showing a few belated signs of fighting spirit. New civil society organisations, such as the Free Speech Union of Australia, are cropping up. Meanwhile, the government’s misinformation bill has not yet passed and is looking increasingly friendless. It has been denounced by Australia’s Human Rights Commissioner, by the Victorian Bar Association, and by prominent scholars and publications.


The left also needs to wise up. Today, critics of Big Pharma, Russia-friendly peaceniks, and even radical feminists are becoming associated with the political right because the mainstream left wants their views censored. Some on the progressive left are pleased at the suppression of opposing viewpoints because they imagine that they are winning today’s culture war. But they should observe that—far from taking over the centres of power, such as national governments, the intelligence agencies, and big business—they and their ideas are losing ground.

With luck, liberal forces from both left and right can hold back Australia’s Misinformation and Disinformation Bill. But there’s no need to just stay on the defensive. If there is a political will to hold Big Tech to account, it should be harnessed to entrench free speech and open discourse and to introduce legislation that guarantees neutrality, transparency, and procedural fairness in content moderation.

The legislation in Texas and Florida points in the right direction. (It’s no coincidence that these regulations were proposed in the country with the most robust tradition of legally protected free speech.) But even the censorious legislation of other countries contains some seeds of wisdom. The EU’s Digital Services Act requires tech companies to provide a transparent dispute resolution process. This is a good start. If such requirements are backed up by substantive rights to free expression and real penalties for violating those rights, we may be able to ensure that the digital world protects the free speech rights on which liberal democracies depend.

Today’s power-grabs in the name of web safety might violate the basic principles of liberal constitutionalism, but they enjoy some popular support because people have real concerns. Nobody wants a social media feed full of scams and vitriol, nor do parents want their children subjected to bullying and indoctrination via their phones. People want both industry and government to protect them from these evils. To remain free, we must fight spam, bullying, and fake news—but without handing power over speech and silence to censorious bullies who treat us like children. Our existing traditions of liberal law already provide plenty of lessons in how to do this. We just have to learn how to apply those lessons to the 21st century.
