One of the most commonly heard debater’s challenges, online and in real life, is: “Are YOU an expert in (X)?” The obvious if generally unspoken corollary is: “If not, then shut up.” Very often, however, you don’t need to be. There is little evidence that a smart ordinary citizen, capable of effective analysis of empirical data, cannot criticize the work of academic or journalistic “experts” in most fields—or any reason that he or she should be intimidated by these title-holders.
Obviously, some professional background in a topic that one is discussing or researching is a good thing. However, no credential can substitute for a relatively unbiased and non-partisan approach to data, or for what can bluntly be called intelligence. Whether due to political motivation or plain incorrect statistical assumptions, credentialed experts have a long and entertaining history of wildly false predictions—like the recent predictions of between 1,000,000 and 10,000,000 COVID-19 deaths in the United States before the end of 2020.1, 2 This sort of thing is likely to become even more common in the politicized academy of today, where essentially no statistical support appears to exist for theories of “white fragility” and univariate white privilege. When debating such questions as “How many human sexes are there?” a taxpayer who finds the experts arrayed against her need not feel a fool.
The basic fact that famous experts are often wrong is not itself in dispute—but is worth reviewing. Scholars writing in my own “neck of the woods”—the intersection of hard quantitative methods with topics of interest to social scientists—have a long history of producing (in addition to much fine work) globally influential but false apocalyptic predictions. Most notably, Stanford University’s Paul Ehrlich penned the international best-seller The Population Bomb in 1968, arguing that worldwide famines would devastate Earth during the 1970s and 1980s due to overpopulation. The book opens by claiming that hundreds of millions of people will starve before 1980 despite mankind’s best efforts—“The battle to feed all of humanity is over”—and goes on to argue that widespread human sterilization may be necessary and then to essentially write off the entire nation of India—arguing that there is no way the sub-continental giant could feed even “200 million more people.”
In reality, of course, India is with us still. Well known—and frankly fairly predictable—improvements in technology, such as hormonal birth control and the Green Revolution, averted the dark vision of a long-pig hunting future for humanity. However, similar predictions continued to achieve global notoriety semi-regularly for decades afterwards. Older academic readers may recall the Club of Rome’s 1972 report The Limits to Growth, which used crude computer simulations to argue that global economic growth would soon slow or stop because of resource depletion, or the smaller panic about global cooling that preceded the current one about global warming. The famous 1998 papers predicting near-future Peak Oil?3 The unavoidable Western epidemic of middle-class heterosexual AIDS? Y2K? The list of these swing-and-a-miss doomsday calls is a long one, and yet our species struggles on.
Experts get things badly wrong for many reasons, including legitimate fear and haste (as with COVID-19) and using technology that’s inadequate to the task (probably the Club of Rome’s problem). However, one constant behind nonsense claims from “Saddam has WMDs” on the political Right to “One in four women will be raped while in college”4 on the Left has been partisan bias. This fact bodes ill for the modern academy across the Anglosphere, which has become remarkably partisan in an increasingly one-sided and sometimes unhinged fashion in recent years. A nationally representative 2006 study of American university professors by Harvard’s Neil Gross and George Mason’s Solon Simmons found that 24 percent of the social scientists surveyed described themselves as “radicals,” 20.6 percent described themselves as “activists,” and 17.6 percent described themselves as “Marxists”5—a term often considered synonymous with “Communist” in the USA.
Things were better in the hard sciences, but—back in the Humanities—19 percent of faculty members identified as radicals, fully 26.2 percent were activists, and five percent were Marxists. And Gross and Simmons hardly write alone. In 2016, Brooklyn College’s Tony Quain and George Mason’s Daniel Klein conducted another large-scale study of 7,243 professors at 40 top universities that was published in Econ Journal Watch.6 These researchers found that, of 3,937 faculty members who identified with a modern American political party, 3,623 were Democrats and only 314 were Republicans—a ratio of roughly 11.5 to one. The field with the most conservatives on faculty was Economics, with only 4.5 Democratic or liberal professors for every Republican. Conversely, History and Sociology were the least welcoming departments for Pachyderms, with Democratic historians outnumbering their GOP colleagues by about 34 to one.
While it is awkward to discuss this as an academic, the tendency of partisans to produce bad results and predictions is amplified by the fact that such partisans are most prevalent in the academic fields most skeptical of entry requirements. It is no secret, at least within The Tower, that GRE scores and the like vary widely by field. According to an empirical breakdown of the most recently available GRE data,7 the percentage of entering graduate students who scored at the highest level on either the verbal or quantitative portion of the test was highest in purely empirical fields such as Mathematics (18.5 percent) and Banking (17 percent), lower in traditional social science fields like Political Science (3.4 percent) and Sociology (1.9 percent), and lowest in “helping professions” fields often associated with activism—like Social Work (0.3 percent) and Early Education (0.1 percent).
Average scores on the exam reflect a similar pattern. As of 2016, the average quantitative GRE score—chosen here because math tests seem immune to charges of “cultural bias”—was 162 for aspiring Physicists, 158 for would-be Chemists, 156 for Computer and Information Scientists, 151 for Business students, 150 for “Other” graduate majors (the “Studies” fields fall here), 149 for Journalists, 147 for Family/Consumer Science students, 146 for both Early Educators and those planning to enter the increasingly influential field of Student Counselling and Services, and 144 or 145 for Social Workers.9 To an increasing degree, departments and even entire universities located within politically Left-leaning US states are settling on one way to resolve this conflict between political activism and low test scores: simply eliminate the tests. The entire University of California system, largely driven by activist faculty, is the most recent large academic institution to completely drop exams like the SAT and ACT from at least the undergraduate admission process,10 and not a few other states are expected to do the same.
Obviously, averages are not people, and much great work has been done over the years in History and Sociology departments. I am a Political Scientist myself, and need to bone up on my “softer” qualitative methods at least as much as the number-crunching I focused on in graduate school. However, the plain fact is that the 62.2 percent of social scientists who describe themselves as “radicals,” “activists,” or “Marxists” (24 percent + 20.6 percent + 17.6 percent) are likely to hold some dubious ideas that cannot withstand rigorous empirical analysis. And we see such ideas aplenty in modern discourse. Simply put, there exists essentially no empirical evidence for many core theses of some of the most lauded academic “experts” of our time.
A quick historical example: for my PhD dissertation, I tested the famous “Hacker Question,” which—along with Peggy McIntosh’s idea of the “invisible knapsack11”—became one of the building blocks of the academic idea of white privilege. Dr. Hacker, a professor at Queens College, asked a classroom full of white students how much money they would have to be paid to—if this were possible—become black for the rest of their lives. The average answer was $50,000,000. To his credit, Hacker—a skilled scholar—intended his experiment only as an innovative classroom exercise. However, it became widely cited, in major papers by authors such as Cheryl Harris,12 as an example of the value of white status or “whiteness as property” in an institutionally, systemically, structurally, genocidally, historically… etc… racist society.
There was just one catch. Dr. Hacker (who was, again, not trying to start a new field of scholarship) never asked any black people how much they’d have to be paid to be white. When I did so at some length, I found that most people of color would not consider changing their race at all, and the average amount demanded by those who would was above $50,000,000. Group pride and perceived affirmative action effects fully countered the alleged advantages of whiteness—although we do also have to take into account inflation between 1992 and 2015.
One more than suspects that any level of serious scrutiny would reveal similar problems with many of the most popular theories advanced by the academic experts of today. Preliminary tests conducted by myself and others strongly indicate that much of the “evidence” presented for the idea of white privilege, broadly conceived, more accurately represents plain social class. The vaunted Implicit Association Test turns out to predict almost nothing in the actual business world, where people meet one another for more than 2.1 seconds at a time and discrimination is illegal.13 Other influential ideas have apparently never been tested at all—are whites more fragile in the context of hostile questioning than blacks or Asians?—or are unfalsifiable nonsense (“cultural appropriation”).
Perhaps no area of modern discourse has been more influenced by partisanship and activism than the transgender conversation. Obviously, all people of goodwill want individuals struggling with gender dysphoria or simply dealing with internal gender conflicts to be treated with kindness and to enjoy full human rights. However, it is hard not to notice that many recent changes to suggested medical and psychological practice in this sector seem to be the result of persistent advocacy rather than any genuinely new science. Gender dysphoria is, to put this rather mildly, one of a very few conditions listed in the DSM-5 for which the primary suggested treatment is full affirmation of what the patient believes to be true.
This has real world consequences. Authors such as Abigail Shrier and Debra Soh have recently pointed out that the advocacy of puberty-blocking drugs for young teenagers has become more common in recent years—despite strong evidence that more than half of children experiencing gender conflict essentially “grow out of it” in the absence of such treatment, while nearly 100 percent of the kids treated with puberty-blockers go on to transition.14 Similarly, full-on “bottom surgery” involving natal-genital removal is almost universally considered to be at least one appropriate pathway for trans-identified adults, despite major studies showing no reduction in suicidality for those who undergo these painful and sometimes risky procedures.15 Further, at least one major study has found that the administration of puberty-blocking drugs to even those minors dealing with “severe and permanent gender dysphoria” has “no significant effect on their psychological function, thoughts of self-harm, or body image.” The desire to make trans-affirming claims, whether or not supported by new science, has extended to the editing of the DSM-5 itself—it now defines gender dysphoria as a mental illness only if it is accompanied by depression—and even to the generation of numerous major articles arguing that biological sex itself hardly exists.16
In reality, of course, human sex exists—none of us would be here if it did not. Inter-sex people also exist, but, as writers like Quillette’s Colin Wright have pointed out, it is not especially difficult to say that someone with an XY or XYY chromosomal makeup, a reproductive system that produces small gametes instead of large ones, and a penis (!) is a male. Similarly, we as normally intelligent citizens can use logical and empirical tools to conclude that West Virginia coal miners are not less oppressed than Barack Obama, that POC (and white) campus radicals are among the most “fragile” folks on Earth, and that letting 11-year-old schoolchildren make life-changing medical decisions for themselves may be a bad idea. All else being equal, more training in a field is better than less. But biased smart people have said foolish things at least since the sophists of ancient Greece, and it remains a good idea not to agree that black is white simply because an “expert” tells you it is.
Wilfred Reilly is an associate professor of Political Science at Kentucky State University. His most recent book is Taboo: Ten Facts You Can’t Talk About. You can follow him on Twitter at @will_da_beast630.
14 https://pubmed.ncbi.nlm.nih.gov/23702447/, https://www.kqed.org/futureofyou/441784/the-controversial-research-on-desistance-in-transgender-youth, https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0016885
15 https://www.heritage.org/gender/commentary/sex-reassignment-doesnt-work-here-the-evidence. This is a conservative source, but multiple good primary sources are linked in the piece.