
Why Fake News Flourishes: Emitting Mere Information Is Easy, But Creating Actual Knowledge Is Hard


Lithograph illustration accompanying an August 1835 New York Sun hoax article purporting to describe a Moon-based civilization discovered by a prominent astronomer

In America’s founding era, journalism was notoriously partisan and unreliable. Almost anyone could open a newspaper, and almost anyone did. Standards were low to nonexistent. “Editors and promoters, however much they proclaimed their loyalty to truth, more often than not were motivated by partisan political goals and commercial interests in the sensational,” writes the historian Barbara J. Shapiro. Newspapers did not scruple to publish what today we call fake news. In 1835, the New York Sun published bogus reports of life on the Moon; in 1844, it published a fake story—­by one Edgar Allan Poe—­about a transatlantic hot-air-balloon journey. Fake news was not always harmless. Benjamin Franklin complained that “tearing your private character to flitters” could be done by anyone with a printing press. Press freedom, he groused, had come to mean “the liberty of affronting, calumniating, and defaming one another.” At the Constitutional Convention, Elbridge Gerry, a prominent politician, observed bitterly that the people “are daily misled into the most baneful measures and opinions, by the false reports circulated by designing men, and which no one on the spot can refute.” Pamphleteers were even more scurrilous. “In the decades immediately before and after the American Revolution,” write Cailin O’Connor and James Owen Weatherall in their 2019 book, The Misinformation Age: How False Beliefs Spread, “partisans on all sides attacked their opponents through vicious pamphlets that were often filled with highly questionable accusations and downright lies.”

In the latter half of the 19th century, urbanization and breakthroughs in printing technology transformed small presses into mighty urban newspapers, capable of reaching millions every day at the crack of dawn and generating previously unimaginable advertising revenues. Still, newsrooms remained rowdy fraternities following ad hoc rules. Reporting was more a trade than a profession, and coverage was as likely to focus on gossip or sensation as on what today we think of as newsworthy public events. “Faking was a rampant journalistic practice during the final quarter of the nineteenth century,” writes Randall S. Sumpter in his 2018 history of journalism ethics. In his memoir of life as a young reporter at the turn of the 20th century, the American journalist H. L. Mencken nostalgically recounts how he and other reporters made up fake scoops to beat a competitor. They thought it was hilarious. Publishers would do anything to attract audiences, up to and including printing rumors, fake news, and wildly sensationalized articles that helped incite a war with Spain.

As the century turned, however, journalistic practices began to coalesce into informal codes of conduct. In 1893, the University of Pennsylvania’s business school introduced the first journalism curriculum taught by a news professional; 15 years later, the University of Missouri founded the first separate school of journalism. The American Society of Newspaper Editors was founded in 1922, and its first order of business was to promulgate an ethics code. “By every consideration of good faith a newspaper is constrained to be truthful,” the code said. “It is not to be excused for lack of thoroughness or accuracy within its control.” The code called for distinguishing between news and opinion, and for soliciting a response from anyone whose “reputation or moral character” might be impugned in print. News judgment should respect privacy: “A newspaper should not invade private rights or feeling without sure warrant of public right as distinguished from public curiosity.” And “it is the privilege, as it is the duty, of a newspaper to make prompt and complete correction of its own serious mistakes of fact or opinion, whatever their origin.” Those two words, “the privilege,” speak volumes; to news professionals, correcting error should be a point of pride, a distinguishing and defining feature of the culture.

I am a product of that culture. Beginning at my college newspaper in the late 1970s and then in my first job at a local paper in North Carolina, I had it drummed into me that accuracy matters, that real people would be hurt if I made mistakes, that I had a duty to seek comment from those I wrote about, that I should conduct interviews on the record whenever possible, that uncorroborated sources are suspect, that when I was wrong I should own up to it and file a correction. In journalism schools and mainstream newsrooms, reporters are still taught those values. “If you have a very small staff, checking what somebody said with a bunch of different sources is not always doable,” a young reporter at a small-town newspaper told me. “But those principles are at the forefront of what we do every day. They’re in the conversations we have about things we’re working on. In the newsroom, we talk about those things with editors. When we get something wrong, we make sure the record is set straight. Particularly when you work for a small newspaper and you’re the news source of record for that community, it’s exceptionally important. We don’t want to get it wrong the first time, but when we do, we have to own up to it.”

In 2019, Harvard’s student newspaper, the Crimson, earned itself a student boycott and a vote of condemnation by the Undergraduate Council for seeking a comment from the US Immigration and Customs Enforcement agency about an “Abolish ICE” rally on campus. I felt proud of the paper when it issued a firm response to claims that its coverage had caused “feelings of unsafety”: “Fundamental journalistic values obligate the Crimson to allow all subjects of a story a chance to comment … For this story and all others, the Crimson strives to adhere to the highest standards of journalistic ethics and integrity.” That was the voice, not just of one student editor, but of 100 years of reality-based professionalism.

What the institutionalization of modern, fact-based journalism did was to create a system of nodes—professional newsrooms—­which can choose whether to accept information and pass it on. The reality-based community is a network of such nodes: publishers, peer reviewers, universities, agencies, courts, regulators, and many, many more. I like to imagine the system’s institutional nodes as filtering and pumping stations through which propositions flow. Each station acquires and evaluates propositions, compares them with stored knowledge, hunts for error, then filters out some propositions and distributes the survivors to other stations, which do the same.


Importantly, they form a network, not a hierarchy. No single gatekeeper can decide which hypotheses enter the system, and there are infinitely many pathways through it. If one journal or media organization rejects your claim, you can try another, and another. Still, if each node is doing its job, the system as a whole will acquire a strongly positive epistemic valence. A poorly supported claim might have a 50 percent chance of passing through one filter, but then a one in four chance of passing two filters and only a one in eight chance of passing three. Eventually—­usually quickly—­it dies out. A strongly supported claim will fare better, and if it is widely accepted it will disseminate across the network and enter the knowledge base. Working together, the pumps and filters channel information toward truth.
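
To make that arithmetic concrete, the short sketch below treats each node as an independent filter with a fixed chance of passing a claim along. The 50 percent figure and the one-in-four and one-in-eight outcomes come from the passage above; the 95 percent figure for a well-supported claim, and the assumption that the filters act independently, are illustrative only.

```python
# Minimal illustrative sketch (not from the original text): each node in the
# network is modeled as an independent filter that passes a claim along with
# a fixed probability. Survival odds shrink multiplicatively with each added
# filter: a 50 percent claim clears two filters one time in four and three
# filters one time in eight, matching the arithmetic in the passage above.

def survival_probability(pass_prob: float, num_filters: int) -> float:
    """Probability that a claim clears `num_filters` independent filters."""
    return pass_prob ** num_filters

if __name__ == "__main__":
    claims = {
        "poorly supported claim": 0.5,   # figure from the passage
        "well-supported claim": 0.95,    # assumed value, for illustration only
    }
    for label, pass_prob in claims.items():
        for n in (1, 2, 3, 10):
            print(f"{label}: survives {n} filter(s) with probability "
                  f"{survival_probability(pass_prob, n):.3f}")
```

Rerunning the same toy model with filters that are likelier to pass false claims than true ones is one way to picture the reversed, negative epistemic valence imagined in the next paragraph.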

Now imagine running them in reverse. Suppose some mischievous demon were to hack into the control center one night and reverse the pumps and filters. Instead of straining out error, they pass it along. In fact, instead of slowing the dissemination of false and misleading claims, they accelerate it. Instead of marginalizing ad hominem attacks, they encourage them. Instead of privileging expertise, they favor amateurism. Instead of validating claims, they share claims. Instead of trafficking in communication, they traffic in display. Instead of identifying sources, they disguise them. Instead of rewarding people who persuade others, they reward those who publicize themselves. If that were how the filtering and pumping stations worked, the system would acquire a negative epistemic valence. It would actively disadvantage truth. It would be not an information technology but misinformation technology.

No one saw anything like that coming. We—­I certainly include myself—­expected digital technology to broaden and deepen the marketplace of ideas. There would be more hypotheses, more checkers, more access to expertise. How could that not be a leap forward for truth? At worst, we assumed, the digital ecosystem would be neutral. It might not necessarily tilt toward reality, but neither would it systematically tilt against reality.

Unfortunately, we forgot that staying in touch with reality depends on rules and institutions. We forgot that overcoming our cognitive and tribal biases depends on privileging those rules and institutions, not flattening them into featureless, formless “platforms.” In other words, we forgot that information technology is very different from knowledge technology. Information can be simply emitted, but knowledge, the product of a rich social interaction, must be achieved. Converting information into knowledge requires getting some important incentives and design choices right. Unfortunately, digital media got them wrong.

The commercial internet was born with an epistemic defect: its business model was primarily advertising-driven and therefore valued attention first and foremost. Traditional media companies relied partly (often heavily) on ad revenue, to be sure, but they attracted advertisers by building audiences of regular users and paying consumers, and many were rooted in communities where they were known and trusted, and so they tended to build constituencies to whom they felt reputationally and financially accountable. The gutter press and fly-by-night media also existed, but they were the exception rather than the rule, at least in the modern era. Digital media, by contrast, had hardly any paying customers and lured advertisers with fleeting “impressions” and “engagement,” launching a no-holds-barred race to attract eyeballs. Digital media companies could use granular metrics to slice and sort their audiences, but those statistics were very different from accountable relationships with users and communities and sponsors.

The whole system was thus optimized to assemble a responsive audience for whatever information someone wanted to put in front of people, with only incidental regard (if any) for that information’s accuracy. The metrics and algorithms and optimization tools were sensitive to popularity but indifferent to truth. The computational engines were indifferent even to meaning, since they had no understanding of the content they were disseminating. They were exclusively, but relentlessly, aware of clicks and page views. A search or browsing session might turn up information or misinformation, depending on what people were clicking on. How-to videos about repairing your toilet were usually pretty reliable; information about vaccines and claims about controversial political issues, not so much. But whatever; the user would sort it out.

Now, the digital era was hardly the first time a new and ostensibly neutral information medium tilted against truth in practice. As we have seen, the scramble to attract eyeballs drove American journalism into surreal realms of half-truth and fake news in the 19th century, a problem that required several decades of institutional reform to iron out. Digital technology, with its capacity to disseminate information instantaneously and at almost no cost, raised similar problems, but at a scale one or two orders of magnitude greater.

Normally, if an information-technology system is as likely to deliver false results as true results, or cannot distinguish between the two, we say it is broken. If we are then told that the fault lies with the user, for failing to figure out on her own what is true or false, we reply that error-prone humans need help. We need our information systems to steer us away from error and bias, which was what the institutions and standards of modern science and journalism were set up to do. Digital media’s built-in business model of treating all information like advertisements pretty much guaranteed an attention-seeking race to the bottom. That would have been challenging enough; but the digital information ecology compounded the problem by developing characteristics that not only were blind to misinformation but actively amplified it.

Reprinted with permission from The Constitution of Knowledge: A Defense of Truth by Jonathan Rauch published by Brookings Institution Press, © 2021 by Jonathan Rauch.
