
The Unbearable Asymmetry of Bullshit


Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”

I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.

And it is with that in mind that I bring up the subject of bullshit.

There is a veritable truckload of bullshit in science.¹ When I say bullshit, I mean arguments, data, publications, or even the official policies of scientific organizations that give every impression of being perfectly reasonable — of being well-supported by the highest quality of evidence, and so forth — but which don’t hold up when you scrutinize the details. Bullshit has the veneer of truth-like plausibility. It looks good. It sounds right. But when you get right down to it, it stinks.


There are many ways to produce scientific bullshit. One way is to assert that something has been “proven,” “shown,” or “found” and then cite, in support of this assertion, a study that has actually been heavily critiqued (fairly and in good faith, let us say, although that is not always the case, as we soon shall see) without acknowledging any of the published criticisms of the study or otherwise grappling with its inherent limitations.

Another way is to refer to evidence as being of “high quality” simply because it comes from an in-principle relatively strong study design, like a randomized controlled trial, without checking the specific materials that were used in the study to confirm that they were fit for purpose. There is also the problem of taking data that were generated in one environment and applying them to a completely different environment (without showing, or in some cases even attempting to show, that the two environments are analogous in the right way). There are other examples I have explored in other contexts, and many of them are fairly well-known.

But there is one example I have only recently come across, and of which I have not yet seen any serious discussion. I am referring to a certain sustained, long-term publication strategy, apparently deliberately carried out (although motivations can be hard to pin down), that results in a stupefying, and in my view dangerous, paper-pile of scientific bullshit. It can be hard to detect, at first, with an untrained eye—you have to know your specific area of research extremely well to begin to see it—but once you do catch on, it becomes impossible to un-see.

I don’t know what to call this insidious tactic (although I will describe it in just a moment). But I can identify its end result, which I suspect researchers of every stripe will be able to recognize from their own sub-disciplines: it is the hyper-partisan and polarized, but by all outward appearances, dispassionate and objective, “systematic review” of a controversial subject.

To explain how this tactic works, I am going to make up a hypothetical researcher who engages in it, and walk you through his “process,” step by step. Let’s call this hypothetical researcher Lord Voldemort. While everything I am about to say is based on actual events, and on the real-life behavior of actual researchers, I will not be citing any specific cases (to avoid the drama). Moreover, we should be very careful not to confuse Lord Voldemort with any particular individual. He is an amalgam of researchers who do this; he is fictional.

In this story, Lord Voldemort is a prolific proponent of a certain controversial medical procedure, call it X, which many have argued is both risky and unethical. It is unclear whether Lord Voldemort has a financial stake in X, or some other potential conflict of interest. But in any event he is free to press his own opinion. The problem is that Lord Voldemort doesn’t play fair. In fact, he is so intent on defending this hypothetical intervention that he will stop at nothing to flood the literature with arguments and data that appear to weigh decisively in its favor.

As the first step in his long-term strategy, he scans various scholarly databases. If he sees any report of an empirical study that does not put X in an unmitigatedly positive light, he dashes off a letter to the editor attacking the report on whatever grounds he can find. Sometimes he makes a fair point—after all, most studies do have limitations—but often what he raises is a quibble, couched in the language of an exposé.

These letters are not typically peer-reviewed (which is not to say that peer review is an especially effective quality control mechanism); instead, in most cases, they get a cursory once-over by an editor who is not a specialist in the area. Since journals tend to print the letters they receive unless they are clearly incoherent or in some way obviously out of line (and since Lord Voldemort has mastered the art of using “objective” sounding scientific rhetoric to mask objectively weak arguments and data), they end up becoming a part of the published record with every appearance of being legitimate critiques.

The subterfuge does not end there.

The next step is for our anti-hero to write a “systematic review” at the end of the year (or, really, whenever he gets around to it). In it, He Who Shall Not Be Named predictably rejects all of the studies that do not support his position as being “fatally flawed,” or as having been “refuted by experts”—namely, by himself and his close collaborators, typically citing their own contestable critiques—while at the same time he fails to find any flaws whatsoever in studies that make his pet procedure seem on balance beneficial.

The result of this artful exercise is a heavily skewed benefit-to-risk ratio in favor of X, which can now be cited by unsuspecting third-parties. Unless you know what Lord Voldemort is up to, that is, you won’t notice that the math has been rigged.

So why doesn’t somebody put a stop to all this? As a matter of fact, many have tried. More than once, the Lord Voldemorts of the world have been called out for their underhanded tactics, typically in the “author reply” pieces rebutting their initial attacks. But rarely are these ripostes — constrained as they are by conventionally minuscule word limits, and buried as they are in some corner of the Internet — noticed, much less cited in the wider literature. Certainly, they are far less visible than the “systematic reviews” churned out by Lord Voldemort and his ilk, which constitute a sort of “Gish Gallop” that can be hard to defeat.


The term “Gish Gallop” is a useful one to know. It was coined by the science educator Eugenie Scott in the 1990s to describe the debating strategy of one Duane Gish. Gish was an American biochemist turned Young Earth creationist, who often invited mainstream evolutionary scientists to spar with him in public venues. In its original context, it described the way Gish would “spew forth torrents of error that the evolutionist hasn’t a prayer of refuting in the format of a debate.” It also referred to his apparent tendency to simply ignore objections raised by his opponents.

A similar phenomenon can play out in debates in medicine. In the case of Lord Voldemort, the trick is to unleash so many fallacies, misrepresentations of evidence, and other misleading or erroneous statements — at such a pace, and with so little regard for the norms of careful scholarship and/or charitable academic discourse — that your opponents, who do, perhaps, feel bound by such norms, and who have better things to do with their time than to write rebuttals to each of your papers, face a dilemma. Either they can ignore you, or they can put their own research priorities on hold to try to combat the worst of your offenses.

It’s a lose-lose situation. Ignore you, and you win by default. Engage you, and you win like the pig in the proverb who enjoys hanging out in the mud.

As the programmer Alberto Brandolini is reputed to have said: “The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.” This is the unbearable asymmetry of bullshit I mentioned in my title, and it poses a serious problem for research integrity. Developing a strategy for overcoming it, I suggest, should be a top priority for publication ethics.

Footnote

  1. There is a lot of non-bullshit in science as well!

Acknowledgement

This is a modified version of an article that is set to appear, in its final and definitive form, in a forthcoming issue of the HealthWatch Newsletter (no. 101, Spring 2016). See http://www.healthwatch-uk.org/.
