
The Bias that Divides Us

As we sit here, over six months after the initial lockdown provoked by COVID-19, the United States has moved out of a brief period of national unity and back into distressingly predictable and bitter partisan division. The return to this state of affairs has been fuelled by a cognitive trait that divides us and that our culture serves to magnify. Certainly, many commentators have ascribed some part of the divide to what they term our “post-truth” society, but this is not an apt description of the particular defect that has played a central role in dividing us. The cause of our division is not that people deny the existence of truth. It is that people are selective in displaying their post-truth tendencies.

What our society is really suffering from is myside bias: People evaluate evidence, generate evidence, and test hypotheses in a manner biased toward their own prior beliefs, opinions, and attitudes. That we are facing a myside bias problem and not a calamitous societal abandonment of the concept of truth is perhaps good news in one sense, because the phenomenon of myside bias has been extensively studied in cognitive science. The bad news, however, is that what we know is not necessarily encouraging.

The many faces of myside bias

Research has shown that myside bias is displayed in a variety of experimental situations: people evaluate the same virtuous act more favourably if committed by a member of their own group and evaluate a negative act less unfavourably if committed by a member of their own group; they evaluate an identical experiment more favourably if the results support their prior beliefs than if the results contradict their prior beliefs; and when searching for information, people select information sources that are likely to support their own position. Even the interpretation of a purely numerical display of outcome data is tipped in the direction of the subject’s prior belief. Likewise, judgments of logical validity are skewed by people’s prior beliefs. Valid syllogisms with the conclusion “therefore, marijuana should be legal” are easier for liberals to judge correctly and harder for conservatives; whereas valid syllogisms with the conclusion “therefore, no one has the right to end the life of a fetus” are harder for liberals to judge correctly and easier for conservatives.

I will stop here, because I have not even begun to enumerate the many different paradigms that psychologists have used to study myside bias. As I show in my new book, The Bias That Divides Us, myside bias is not only displayed in laboratory paradigms, but it characterizes our thinking in real life as well. In early May 2020, demonstrations took place in several state capitols in the United States to protest mandated stay-at-home policies, ordered in response to the COVID-19 pandemic. Responses to these demonstrations fell strongly along partisan lines—one side deploring the societal health risks of the demonstrations and the other supporting the demonstrations. Only a few weeks later, these partisan lines on large public gatherings completely reversed polarity when new mass demonstrations occurred for a different set of reasons.

Many cognitive biases in the psychological literature are only displayed by a subset of subjects—sometimes even less than a majority. In contrast, myside bias is one of the most ubiquitous of biases because it is exhibited by the vast majority of subjects studied. Myside bias is also not limited to individuals with certain cognitive or demographic characteristics. It is one of the most universal of the cognitive biases.

The outlier bias

Although it is ubiquitous, myside bias is an outlier bias in the psychological literature in several respects. When my colleague Richard West and I began examining individual differences in cognitive biases in the 1990s, one of the first consistent results from our early studies was that the biases tended to correlate with each other. Another consistent observation in our earliest studies was that almost every cognitive bias was negatively correlated with intelligence as measured with a variety of cognitive ability indicators. Individual differences in most cognitive biases were also predicted by several well-studied thinking dispositions such as actively open-minded thinking. These findings have held for some of the most well-studied biases in the literature: anchoring biases, framing biases, overconfidence bias, outcome bias, conjunction fallacies, base-rate neglect, and many others.

This previous work framed our expectations about what we would find when we began investigating myside bias: the clear expectation was that it would show the same correlations with individual difference variables as all the other biases. Against that backdrop, what we actually observed about the individual difference predictors of myside bias was startling: there weren’t many!

It turns out that myside bias is not predictable from standard measures of cognitive and behavioral functioning. The degree of myside bias that people show is not correlated with their intelligence or level of actively open-minded thinking; nor is it correlated with their educational level. It is not correlated with how much they display other biases. Furthermore, it is a bias that has very little domain generality. That is, myside bias in one domain is not a predictor of the myside bias shown in another domain. It is simply one of the most unpredictable of the biases in an individual differences sense.

Is myside bias even irrational?

Myside bias is an outlier bias in another important way. For most of the other biases in the literature (anchoring biases, framing effects, base rate neglect, etc.), it is easy to show that in certain situations they lead to thinking errors. In contrast, despite all the damage that myside bias does to our social and political discourse, it is shockingly hard to show that, for an individual, it is a thinking error.

In determining what to believe, myside bias operates by weighting new evidence more highly when it is consistent with prior beliefs and less highly when it contradicts a prior belief. This seems wrong, but it is not. Many formal analyses and arguments in philosophy of science have shown that in most situations that resemble real life, it is rational to use your prior belief in the evaluation of new evidence. It is even rational for scientists to do this in the research process. The reason that it is rational is that people (and scientists) are not presented with information that is of perfect reliability. The degree of reliability is something that has to be assessed. A key component of that reliability involves assessing the credibility of the source of the information or new data. For example, it is perfectly reasonable for a scientist to use prior knowledge on the question at issue in order to evaluate the credibility of new data presented. Scientists do this all the time, and it is rational. They use the discrepancy between the data they expect, given their prior hypothesis, and the actual data observed to estimate the credibility of the source of the new data. The larger the discrepancy, the more surprising the evidence is, and the more a scientist will question the source and thus reduce the weight given the new evidence.
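To make this logic concrete, here is a minimal sketch in code (the scenario, the numbers, and the `update` helper are all hypothetical, chosen only for illustration): a reasoner holds a well-supported belief H, receives a report of not-H, and rationally splits the update between “my belief was wrong” and “the source is unreliable.”

```python
# Toy Bayesian model of knowledge projection (all numbers hypothetical).
# The reasoner jointly updates belief in H and trust in the source after
# the source reports "not-H". A priori, H and source reliability are
# assumed independent; a reliable source reports correctly 90% of the
# time, an unreliable one is at chance.

def update(prior_h, prior_reliable,
           p_correct_reliable=0.9, p_correct_unreliable=0.5):
    """Return posterior P(H) and P(source reliable) after a not-H report."""
    # Probability of a "not-H" report, given H true vs. H false.
    p_report_h = (prior_reliable * (1 - p_correct_reliable)
                  + (1 - prior_reliable) * (1 - p_correct_unreliable))
    p_report_not_h = (prior_reliable * p_correct_reliable
                      + (1 - prior_reliable) * p_correct_unreliable)
    p_report = p_report_h * prior_h + p_report_not_h * (1 - prior_h)

    posterior_h = p_report_h * prior_h / p_report

    # Probability of the report if the source is reliable (marginal over H).
    p_report_reliable = ((1 - p_correct_reliable) * prior_h
                         + p_correct_reliable * (1 - prior_h))
    posterior_reliable = p_report_reliable * prior_reliable / p_report
    return posterior_h, posterior_reliable

post_h, post_rel = update(prior_h=0.9, prior_reliable=0.8)
print(f"belief in H:     0.90 -> {post_h:.2f}")   # ~0.66: weakened, not abandoned
print(f"trust in source: 0.80 -> {post_rel:.2f}") # ~0.59: surprise lowers trust
```

The surprising report weakens the belief without overturning it, and simultaneously lowers trust in the source: exactly the discrepancy-based credibility assessment described above.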

This cognitive strategy is sometimes called knowledge projection, and what is interesting is that it is rational for a layperson to use it too, as long as their prior belief represents real knowledge and not just an unsupported desire for something to be true. The situation turns into one of inappropriate myside bias when a person projects not a belief that prior evidence leads them to think is true, but a prior belief that they want to be true despite inadequate evidence that it is, in fact, true. Psychologist Robert Abelson terms the first type of belief a testable belief. The second type is technically termed a distal belief; a less abstract term for it would be a conviction. The term conviction better conveys the fact that such beliefs are often accompanied by emotional commitment and ego preoccupation, and that they can sometimes derive from values or partisan stances. The problematic kinds of myside bias arise when people project convictions, rather than testable beliefs, onto the new evidence they receive. That is how we end up with a society that seemingly cannot agree on empirically demonstrable facts.

An example might help here. Imagine a psychology professor who was asked to evaluate the quality of a new study on the heritability of intelligence. Suppose the professor knows the evidence on the substantial heritability of intelligence, but because of an attraction to the blank-slate view of human nature, wishes that were not true—in fact, wishes it were zero. The question is, what is the prior belief that the professor should use to approach the new data? If the professor uses a prior belief that the heritability of intelligence is greater than zero and uses it to evaluate the credibility of new evidence, that would be the proper use of a prior belief. If instead they projected onto new evidence the prior belief that the heritability of intelligence equals zero, that would be an irrational display of myside bias, because it would be projecting a conviction—something that the professor wanted to be true, rather than a prior expectation based on evidence. Projecting convictions in this way is the kind of myside bias that leads to a failure of society to converge on the facts.
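The formal difference between the two uses of a prior is easy to see in a sketch (the likelihoods are hypothetical and the `bayes` helper is mine, not the author’s): a testable belief is a strong but movable prior, whereas a projected conviction behaves like a dogmatic point-mass prior that Bayes’ rule can never shift.

```python
# Toy illustration (hypothetical numbers): a conviction as a dogmatic prior.
# H = "the heritability of intelligence is greater than zero".

def bayes(prior_h, p_data_given_h, p_data_given_not_h):
    """Posterior P(H) after observing data with the given likelihoods."""
    num = p_data_given_h * prior_h
    return num / (num + p_data_given_not_h * (1 - prior_h))

# Suppose the new study's data are 4x more likely if H is true.
for prior in (0.95, 0.0):
    print(f"prior {prior:.2f} -> posterior {bayes(prior, 0.8, 0.2):.3f}")
# prior 0.95 -> posterior 0.987   (evidence-based prior: responsive to data)
# prior 0.00 -> posterior 0.000   (the conviction "h2 = 0": immune to any data)
```

A prior of exactly zero (or one) assigns the data no power at all, which is the formal signature of projecting a conviction rather than a testable belief.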

Beliefs as possessions and beliefs as memes

Most of us feel that beliefs are something that we choose to acquire, just like the rest of our possessions. In short, we tend to assume: (1) that we exercised agency in acquiring our beliefs, and (2) that they serve our interests. Under these assumptions, it seems to make sense to have a blanket policy of defending beliefs by having a myside bias. But there is another way to think about this—one that makes us a little more skeptical about our tendency to defend our beliefs, no matter what.

As I mentioned above, research has shown that people who display a high degree of myside bias in one domain do not tend to show a high degree of myside bias in a different domain—myside bias has little domain generality. However, different beliefs vary reliably in the degree of myside bias that they engender. In short, it might not be people who are characterized by more or less myside bias, but beliefs that differ in how strongly they are structured to repel ideas that contradict them. These facts about myside bias have profound implications because they invert the way we think about beliefs. Models that focus on the properties of acquired beliefs, such as memetic theory, provide better frameworks for the study of myside bias. The key question becomes not “How do people acquire beliefs?” (the tradition in social and cognitive psychology) but instead, “How do beliefs acquire people?”

To avoid the most troublesome kind of myside bias, we need to distance ourselves from our convictions, and it may help to conceive of our beliefs as memes that may well have interests of their own. We treat beliefs as possessions when we think that we have thought our way to these beliefs and that the beliefs are serving us. What Dan Dennett calls “the meme’s eye view” leads us to question both assumptions. Memes want to replicate whether they are good for us or not; and they don’t care how they get into a host—whether they get in through conscious thought or are simply an unconscious fit to innate psychological dispositions.

But how, then, do we acquire important beliefs (convictions) without reflection? In fact, there are plenty of examples in psychology where people acquire their declarative knowledge, behavioral proclivities, and decision-making styles from a combination of innate propensities and (largely unconscious) social learning. For example, in his book The Righteous Mind, Jonathan Haidt invokes just this model to explain moral beliefs and behavior.

The model that Haidt uses to explain the development of morality is easily applied to the case of myside bias. Myside-causing convictions often come from political ideologies: a set of beliefs about the proper order of society and how it can be achieved. Increasingly, theorists are modeling the development of political ideologies using the same model of innate propensities plus social learning that Haidt applied to the development of morality. For example, there are temperamental substrates that underlie a person’s ideological proclivities, and these temperamental substrates increasingly look like they are biologically based: measures of political ideology and values show considerable heritability; liberals and conservatives differ on two of the Big Five personality dimensions that are themselves substantially heritable; studies have found ideological position to be correlated with brain differences and neurochemical differences; and these differences in personality between liberals and conservatives seem to appear very early in life. [For footnotes and research citations, see here]

In short, the convictions that are driving your myside bias are in part caused by your biological makeup—not anything that you have thought through consciously. Of course, stressing that we didn’t think our way to our ideological propensities is dealing with only half of Haidt’s “innateness and social learning” formulation. However, for those of us who hold to the old folk psychology of belief (“I must have thought my way to my convictions because they mean so much to me”), the social learning part of Haidt’s formulation provides little help. Values and worldviews develop throughout early childhood, and the beliefs to which we as children are exposed are significantly controlled by parents, peers, and schools. Some of the memes to which a child is exposed are quickly acquired because they match the innate propensities discussed above. Others are acquired, perhaps more slowly, whether or not they match innate propensities, because they bond people to relatives and cherished groups.

In short, the convictions that determine your side when you think in a mysided fashion often don’t come from rational thought. People will feel less ownership of their beliefs when they realize that they did not consciously reason their way to them. When a conviction is held less like a possession, it is less likely to be projected onto new evidence inappropriately.

The myside blindness of cognitive elites

The “innate plus social learning” approach to understanding the convictions that drive our myside bias combines with the empirical trend I mentioned earlier (cognitive sophistication does not attenuate myside bias) in a particularly important way. It creates a form of blindness about our own myside bias that is particularly virulent among cognitive elites.

The bias blind spot is an important meta-bias demonstrated years ago in a paper by Emily Pronin and colleagues. They found that people thought that various psychological biases were much more prevalent in others than in themselves, a much-replicated finding. In two studies, my research group found positive correlations between the blind spots and cognitive sophistication—more cognitively skilled people were more prone to the bias blind spot. This makes some sense, however, because most cognitive biases in the heuristics and biases literature are negatively correlated with cognitive ability—more intelligent people are less biased. Thus, it would make sense for intelligent people to think that they are less biased than others—because they are!

However, one particular bias—myside bias—sets a trap for the cognitively sophisticated. Regarding most biases, they are used to thinking—rightly—that they are less biased. However, myside thinking about your political beliefs represents an outlier bias where this is not true. This may lead to a particularly intense bias blind spot among certain cognitive elites. If you are a person of high intelligence, if you are highly educated, and if you are strongly committed to an ideological viewpoint, you will be highly likely to think you have thought your way to your viewpoint. And you will be even less likely than the average person to realize that you have derived your beliefs from the social groups you belong to and because they fit with your temperament and your innate psychological propensities. University faculty in the social sciences fit this bill perfectly. And the opening for a massive bias blind spot occurs when these same faculty think that they can objectively study, within the confines of an ideological monoculture, the characteristics of their ideological opponents.

The university professoriate is overwhelmingly liberal, an ideological imbalance demonstrated in numerous studies conducted over the last two decades. This imbalance is especially strong in university humanities departments, schools of education, and the social sciences; and it is specifically strong in psychology and the related disciplines of sociology and political science, the sources of many of the investigations studying cognitive differences among voters. Perhaps we shouldn’t worry about this, because it could be the case that the ideological position that characterizes most university faculty carries less myside bias. But this in-principle conjecture has not held up when tested empirically. In a recent paper, Peter Ditto and colleagues meta-analyzed 41 experimental studies of partisan differences in myside bias that involved over 12,000 subjects. After amalgamating all of these studies and comparing an overall metric of myside bias, Ditto and colleagues concluded that the degree of partisan bias in these studies was quite similar for liberals and conservatives. In short, there is no evidence that the particular type of ideological monoculture that characterizes the social sciences (left/liberal progressivism) is immune to myside bias.

This confluence of trends is a recipe for disaster when it comes to studying the psychology of political opponents. Nowhere has this been more apparent than in the relentless attempts to demonstrate that political opponents of left/liberal ideas are cognitively deficient in some manner. This has certainly characterized social science in the aftermath of the 2016 votes in the US and the UK. The assumption was that the psychologically defective and uninformed voters had endorsed disastrous outcomes that just happened to conflict with the views of hyper-educated university faculty. Regardless of how one views the outcomes of these votes, there is no strong evidence that the prevailing voters were any more psychologically impaired or ill-informed than were the voters on the losing side. And, as I show in The Bias That Divides Us, there are also no differences in rationality, intelligence, or knowledge separating people holding liberal ideologies from those on the conservative side.

An obesity epidemic of the mind

Anything that makes us more skeptical about our beliefs will tend to decrease the myside bias that we display (by preventing beliefs from turning into convictions). Understanding that your resident memes can make you fat in the same way that your genes can will help to cultivate skepticism about them. Organisms tend to be genetically defective if any new mutant allele is not a cooperator. This is why the other genes in the genome demand cooperation. The logic of memes is slightly different but parallel. Memes in a mutually supportive relationship within a memeplex would be likely to form a structure that prevented contradictory memes from gaining brain space. Memes that are easily assimilated and that reinforce resident memeplexes are taken on with great ease.

Social media have exploited this logic, with profound implications. We are now bombarded with information delivered by algorithms specifically constructed to present congenial memes that are easily assimilated. All of the congenial memes we collect then cohere into ideologies that tend to turn simple testable beliefs into convictions. In an earlier book, I described the parallel logic of how free markets come to serve the non-reflective first-order desires of both genes and memes.

Genetic mechanisms designed for survival in prehistoric times can be maladaptive in the modern day. Thus, our genetic mechanisms for storing and using fat, for example, evolved in times when doing this was essential for our survival. But these mechanisms no longer serve our survival needs in a modern technological society, where there is a McDonald’s on every other corner. The logic of markets will guarantee that exercising a preference for fat-laden fast food will invariably be convenient because such preferences are universal and cheap to satisfy. Markets accentuate the convenience of satisfying uncritiqued first-order preferences, and they will do exactly the same thing with our preferences for memes consistent with beliefs that we already have—make them cheap and easily attainable. For example, the business model of Fox News (targeting a niche meme-market) has spread to other media outlets on both the Right and the Left (e.g., CNN, Breitbart, the Huffington Post, the Daily Caller, the New York Times, the Washington Examiner). This trend has accelerated since the 2016 presidential election in the United States.

In short, just as we are gorging on fat-laden food that is not good for us because our bodies were built by genes with a selfish replicator survival logic, so we are gorging on memes that fit our resident beliefs because cultural replicators have a similar survival logic. And just as our overconsumption of fat-laden fast foods has led to an obesity epidemic, so our overconsumption of congenial memes has made us memetically obese as well. One set of replicators has led us to a medical crisis. The other set has led us to a crisis of the public communication commons whereby we cannot converge on the truth because we have too many convictions that drive myside bias. And we have too many mysided convictions because there is too much coherence to our belief networks due to self-replicating memeplexes rejecting memes unlike themselves.

The antidote to this obesity epidemic of the mind is to recognize that beliefs have their own interests, and for each of us to use this insight to put a little distance between our self and our beliefs. That distance might turn some of our convictions into testable beliefs. The fewer of our beliefs that are convictions, the less myside bias we are likely to display.

Myside bias and identity politics

If myside bias is the fire that has set ablaze the public communication commons in our society, then identity politics is the gasoline that is turning a containable fire into an epic conflagration. By encouraging people to view every issue through an identity lens, it creates the tendency to turn simple beliefs about testable propositions into full-blown convictions that are then projected onto new evidence. Although our identity is central to our narrative about ourselves, and many of our convictions will be centered around our identities, that doesn’t mean that every issue we encounter is relevant to our identities. Most of us know the difference, and do not treat every simple testable proposition as if it represented a conviction. But identity politics encourages its adherents to see power relationships operating everywhere, and thus enlarges the class of opinions that are treated as convictions.

Identity politics advocates have succeeded in making certain research conclusions within the university verboten. They have made it very hard for any university professor (particularly the junior and untenured ones) to publish and publicly promote any conclusions that these advocates dislike. Faculty now self-censor on a range of topics. The identity politics ideologues have won the on-campus battle to suppress views that they do not like. But what these same politicized faculty members and students (and, increasingly, university administrators) cannot seem to see is that one cost of their victory is that they have made the public rightly skeptical about any conclusions that now come out of universities on charged topics. In the process of achieving their ideological dominance, they have neutered the university as a trusted purveyor of information about the topics in question.

University research on all of the charged topics where identity politics has predetermined the conclusions—immigration, racial profiling, gay marriage, income inequality, college admissions biases, sex differences, intelligence differences, and the list goes on—is simply no longer believable to anyone cognizant of the pressures exerted by the ideological monoculture of the university. Whether or not some cultures promote human flourishing more than others; whether or not men and women have different interests and proclivities; whether or not culture affects poverty rates; whether or not intelligence is partially heritable; whether or not the gender wage gap is largely due to factors other than discrimination; whether or not race-based admissions policies have some unintended consequences; whether or not traditional masculinity is useful to society; whether or not crime rates vary between the races—these are all topics on which the modern university has dictated the conclusion before the results of any investigation are in.

The more the public comes to know that the universities have approved positions on certain topics, the more it quite rationally loses confidence in research that comes out of universities. As we all know from our college training in Popperian thinking, for research evidence to scientifically support a proposition, that proposition must itself be “falsifiable”—capable of being proven false. However, the public is increasingly aware that, in universities, for many issues related to identity politics, preferred conclusions are now dictated in advance and falsifying them in open inquiry is no longer allowed. We now have entire departments within the university (the so-called “grievance studies” departments) that are devoted to advocacy rather than inquiry. Anyone who entered those departments with a “falsifiability mindset” would be run out on a rail—which of course is why conclusions on specific propositions from such academic entities are scientifically worthless. University scholars serve to devalue data supporting conclusion A if they create a repressive atmosphere in which scholars are discouraged from arguing not-A, or pay too heavy a reputational price for presenting data in favor of proposition not-A. In their zeal to suppress proposition not-A, ideologically oriented faculty destroy the credibility of the university as a source of evidence in favor of A.

Of course, when this research makes its way into the general media, we have a doubling down on the lack of credibility. So, for example, a university professor describes research in the New York Times that leads to the conclusion that you should make your marriage “gayer.” Why? Because (wait for the drumroll) a university study found that gay marriages were less stressful and had less tension. The public is becoming more aware, however, that a heterosexual male researcher in a university who found that gay couples had more stress and tension than heterosexual couples would be ostracized. And the public is also becoming more aware that if, by some miracle, such a finding were to make its way through the review process of a journal in the social sciences, the New York Times would never choose it for a prominent summary article with the title: “The Downside of Gay Marriages—More Stress and Tension”; whereas the actual article published (“Same-Sex Spouses Feel More Satisfied”) would be welcomed with open arms. The readers of the New York Times want to hear this conclusion—but not its converse. Both academia and the Times are simply serving their constituencies who are willing to pay for myside bias. Neither is a neutral arbiter of evidence on this particular topic, and the public increasingly knows this.

When the universities make it professionally difficult for academics to publish politically incorrect conclusions in one politically charged area, the public will come to suspect that the atmosphere in universities is skewing the evidence in other politically charged areas as well. When the public sees university faculty members urge sanctions against a colleague who writes an essay arguing that the promotion of bourgeois values could help poor people (the Amy Wax incident), then we shouldn’t be surprised when the same public becomes skeptical of research on income inequality conducted by university professors. When a professor compares the concepts of transracialism and transgenderism in an academic journal and dozens of colleagues sign an open letter demanding that the article be retracted (the Rebecca Tuvel incident), the public can hardly be blamed for being skeptical about university research on charged topics such as child rearing, marriage, and adoption. When university faculty members contribute to the internet mobbing of someone who discusses the evidence on differing interest profiles between the sexes (the James Damore incident), then we shouldn’t be surprised that the public is skeptical about research that comes out of universities regarding immigration. In short, we shouldn’t be surprised that only Democrats thoroughly trust university research anymore, and that independents, as well as Republicans, are much more skeptical.

The unique epistemic role of the university in our culture is to create conditions in which students can learn to bring arguments and evidence to a question, and to teach them not to project convictions derived from tribal loyalties onto the evaluation of evidence on testable questions. In contrast, identity politics entangles many testable propositions with identity-based convictions, transforming positions on policy-relevant facts into badges of group-based convictions. The rise of identity politics should have been recognized by university faculty as a threat to their ability to teach decontextualized argumentation. One of the most depressing social trends of the last couple of decades has been university faculty becoming proponents of a doctrine that attacks the heart of their intellectual mission.

When I talk to lay audiences about different types of cognitive processes, I use the example of broccoli and ice cream. Some cognitive processes are demanding but necessary. They are the broccoli. Other thinking tendencies come naturally to us and they are not cognitively demanding processes. They are the ice cream. In lectures, I point out that broccoli needs a cheerleader, but ice cream does not. This is why education rightly emphasizes the broccoli side of thinking—why it stresses the psychologically demanding types of thinking that people need encouragement to practice.

Perspective switching, for example, is a type of cognitive broccoli. Taking a person out of the comfort zone of their identities, or those of their tribes, was once seen as one of the key purposes of a university education. But when the university simply affirms students in identities they have assumed even before they have arrived on campus, then it is hard to see the value added by the university anymore. In fostering identity politics on their campuses, the universities are simply encouraging students to eat ice cream. No one needs to be taught to luxuriate in the safety of perspectives they have long held. It is something we will all naturally do. Instead, students need to be taught that, in the long run, myside processing will never lead them to a deep understanding of the world in which they live.


Keith E. Stanovich is professor emeritus of applied psychology and human development at the University of Toronto, and lives in Portland, Oregon. This essay is adapted from his latest book, to appear in late 2020: The Bias That Divides Us: The Science and Politics of Myside Thinking (MIT Press).


Comments

  1. I think it’s getting harder to know what’s good evidence and what’s bogus evidence. It seems I can find “convincing sounding” evidence for most anything, offered by experts with doctorate degrees.

    The sciences of biology, chemistry, and physics are more easily agreed upon than sciences that deal with human activities, where we generally cannot do double-blind studies or other experimentation; yet both are described as science, and so we should trust scientists. But science has nothing to say about morality, liberty, equal protection, or whether any ends indicated by scientific evidence are justified by any political means to achieve them.

  2. However, one particular bias—myside bias—sets a trap for the cognitively sophisticated. Regarding most biases, they are used to thinking—rightly—that they are less biased. However, myside thinking about your political beliefs represents an outlier bias where this is not true. This may lead to a particularly intense bias blind spot among certain cognitive elites. If you are a person of high intelligence, if you are highly educated, and if you are strongly committed to an ideological viewpoint, you will be highly likely to think you have thought your way to your viewpoint. And you will be even less likely than the average person to realize that you have derived your beliefs from the social groups you belong to and because they fit with your temperament and your innate psychological propensities. University faculty in the social sciences fit this bill perfectly. And the opening for a massive bias blind spot occurs when these same faculty think that they can objectively study, within the confines of an ideological monoculture, the characteristics of their ideological opponents.

    One of the points that I keep making on here is that everyone is biased. It’s one of the things that I’ve learned from being a neurobiologist: we’re all limited by our meat. It’s hard to think outside the confines of your brain when it is your brain that does the thinking, and it is extremely limited in a whole bunch of ways.

    I think this may be the reason why Trump derangement syndrome is so common among cognitive elites, and why they do not like to admit that they have it. This is the one bias that they cannot guard themselves against. So instead, university faculty accuse everyone else of being biased, of having white privilege, of having innate biases against X minority.

    There is one particular person that really exemplifies this, and I think that would probably be Robin DiAngelo. No, I’m not talking to anyone on here, as I think everyone in here is smart enough to figure these things out for themselves. Her bias is reinforced by the large amounts of money that she gets paid to introduce this bias to everyone else, and because this is a bias that everyone else is vulnerable to, no matter what their intelligence, they fall for it too. Furthermore, they’re willing to pay large amounts of money for it.

    It’s also a huge bias of the transhumanists, I think. They forget how limited they are, because they think that they are beings of pure rationality and math. After all, they program things of pure rationality. Nope, you can’t escape the meat. Anyone who could would probably be judged by the rest of us as hopelessly psychotic.

  3. “Whether or not some cultures promote human flourishing more than others; whether or not men and women have different interests and proclivities; whether or not culture affects poverty rates; whether or not intelligence is partially heritable; whether or not the gender wage gap is largely due to factors other than discrimination; whether or not race-based admissions policies have some unintended consequences; whether or not traditional masculinity is useful to society; whether or not crime rates vary between the races—these are all topics on which the modern university has dictated the conclusion before the results of any investigation are in.”

    Wow, it’s a good thing I got out of uni when I did. It’s hard to believe that intelligent people can find a way to deny all of these facts! This may be the best argument for viewpoint diversity I’ve ever read.

  4. Have you looked at physics lately? Crazy Town.

    The problem is that a lot of biologists are terrible at actually explaining their work, at least to anyone who’s not a scientist. They also have trouble arguing with dishonest arguments, simply because in the field, people generally don’t do that stuff. We’re better mannered and better trained than those idiots in the social sciences, who are perfectly happy to substitute ad hominem attacks for data. You know, things like calling the professor racist instead of attacking their conclusions.

    Furthermore, if the professor has an association with something, like a clinic or a company, activists can call and pressure it to disassociate from that person. Remember Lisa Littman, who published an article on her experiences on Quillette? She lost, by my memory, a position at a clinic where she worked as an OBGYN, thanks to activist pressure. Her crime? She looked into ROGD, rapid onset gender dysphoria.

  5. When I got my very first undergrad uni liberal arts essay topic, I thought I’d done great, evaluating the evidence and arguments on either side. I failed the essay. Lecturer said, “You have to argue a position”. Back then, Derrida and Foucault were just new. From them, I fashioned a heavy wooden club with nails sticking out of it. I then used this to smash my way through all my essays. I got As and a postgrad scholarship. (Later, I decided Derrida et al were an endless rabbithole, no longer subscribe to them)
    So my point is, in academia at least, cognitive bias pays.

  6. In early May 2020, demonstrations took place in several state capitols in the United States to protest mandated stay-at-home policies, ordered in response to the COVID-19 pandemic. Responses to these demonstrations fell strongly along partisan lines—one side deploring the societal health risks of the demonstrations and the other supporting the demonstrations. Only a few weeks later, these partisan lines on large public gatherings completely reversed polarity when new mass demonstrations occurred for a different set of reasons.

    I nearly stopped reading your interesting article right here. One of these things is not like the other. This poorly chosen comparison implies that both sides objected to the societal health risks of mass gatherings that didn’t fall in line with their preferred ideologies. I honestly don’t know anyone who objected to the BLM protests as a societal health risk. (There certainly were people who mocked the hypocrisy of those who had been loudly in opposition to the first set of protests as “health hazards” but who now just as loudly supported the “essential” BLM protests.) The objection to the BLM “protests” was primarily because they turned into riots and not protests–with massive property damage, loss of life, etc. And the more that this became obvious, the less clear-cut were those “partisan lines” that you mentioned.

    Conflating these very different scenarios does not lend strong support to your thesis of myside bias. Another comparison would have served you better.

  7. I agree with you. Both are an impediment to rational discourse and common-sense policies. However, only one side has taken over the social sciences, humanities, and many of the university administrations. They’ve turned them from institutions of honest research and inquiry into indoctrination centers. And now they’re coming for the hard sciences.

  8. I am completely happy to accept Stanovich’s idea that humanities academics have minds now obese and unfit from grazing on far too many high-fat high-sugar junk ideas, without any challenging intellectual workout from opposing views. Marxists are fond of Hegel’s dialectic, with which to propel thought forward – so they ought to know they are being slobs. But all the opposite team have been banished because contact sports hurt people, so intellectual footy these days is all about lapping the pitch with the ball and giving yourselves participation trophies at the end.

    He says: “The key question becomes not “How do people acquire beliefs?” … but instead, “How do beliefs acquire people?”” Sounds cool, but beliefs have no agency, so this is a rhetorical question, and an unnecessary one, because either way it drives, Stanovich offers the same two factors. First is: approval-seeking behavior within a group, i.e. tribal thinking or herd mentality. Second is: personality (heritable seems to be Stanovich’s preference, but why not epigenetic as well?) – here, beliefs are gum that stick only to some shoes, not others, and there are different types of gum and different types of shoe. Not much rational thought is involved, so this is not an important third factor.

    The use of memes as an explanatory framework, in the way Stanovich does, is not satisfying. He says “it may help to conceive of our beliefs as memes that may well have interests of their own.” But again, beliefs have no agency. They are not life forms. I’d prefer it if he read up on dynamic systems theory. This doesn’t require that beliefs can wiggle. Instead, the system is self-organized. Here, you have initial attractor states, i.e. belief systems caused, in Stanovich’s case, by personality proclivities and tribal thinking forces. Occasionally, unusually, new paradigms are formed, but that’s rare. As ideas accumulate around the attractor, the gravity and stability of the attractor increase – it snowballs into an unshakeable system of conviction. There’s an assumption in the article, I think, that there are just two states – red and blue. A Hegelian dialectic would pit these attractor states against one another, but I’m not sure how you’d say a synthesis results. Maybe that’s exactly our modern problem in the battle of ideas.

    As a personal quest to develop a less-closed mind (closed mind roughly equates to his “distal belief”, or “conviction”) than the above suggests, there has to be provision for maintaining ideas in a flux, or unstable state, that does not collapse into the attractor state. For this, for dissociating ideas from the attractor belief system and its bias, keeping an open mind, maintaining a testable belief, Stanovich suggests that ideas should not be seen as a possession, but rather as artefacts of our personalities and group membership. I’m skeptical about this – isn’t that exactly what identity politics assumes – that our ideas are manifestations of unconscious, inherited, determined, structures? It just says, people will hold onto their convictions even more tightly – as part of their ascribed identity. So as a political chill-out therapy, I can’t see it working.

    You also have to be able to explain the state of epiphany, and the phenomenon (in US political thinking) of being “red-pilled” or “blue-pilled”, where your political beliefs and outlook fairly suddenly switch en masse to the opposing side. That’s not really something his theory can explain. I’m not too sure about dynamic systems theory either, but it’s closer – there are geophysical chaos theories about geomagnetic reversal. Another thing I don’t think he had an answer to is why the polarization of belief systems is intensifying – why is myside bias becoming so much worse?

    Cheers.

  9. “Everything I wish wasn’t true is a conspiracy theory” Dershem at work again

    EPSTEIN DIDN’T KILL HIMSELF, Kurt.

    But Kurt’s refusal to acknowledge reality because of his politics is a great opportunity to talk about how this piece is basically a cryptic run-down of the philosophy behind Bayesian statistical inference. Classical “frequentist” statistics – the ones that produce ‘p-values’ – assume a set level of empirical support necessary to disprove a null hypothesis. In other words, no matter how exotic the topic I’m trying to prove, I need the same level of evidence to support it that I would to support something uncontroversial. Example: I would need the same number of experimental successes to prove that I can control the weather with a magic spell as I would to prove that a coin is unbiased in terms of heads vs. tails.

    Bayesian statistics, on the other hand, require you to take your prior belief about a hypothesis into consideration. If I am strongly skeptical about a hypothesis, I should need stronger evidence to accept it as true. Observers would be right, under Bayesian inference, to demand more evidence of my weather magic than they would about the fairness of a coin.

    The problem here is that, as more and more of our lives become politicized, the number of hypotheses where people have radically different priors increases as well. Consider the coronavirus. My prior belief as a microbiologist is that there is no such thing as a mammal-infecting virus that doesn’t stimulate some level of immunity in people who catch it and subsequently recover. Therefore I don’t accept sparse evidence of re-infection as evidence that herd immunity isn’t ultimately likely. Other people, however, have very different priors and readily accept news articles making the claim that herd immunity is a pipe dream. Under classical statistics, one or the other of us is right; under Bayesian statistics, all we have is a disagreement based on the level of evidence necessary to cause us to accept a hypothesis, based on different prior beliefs.

    So calling it “myside bias” I think is inappropriate; the nature of “sides” is largely one of differing prior beliefs occurring in clusters. The only way out of it would be a principled rejection of the politicization of science, but my prior on that one is that it is extraordinarily unlikely. An alternative would be to demand that scientists state their prior beliefs explicitly in publications, or “pre-register” them as psychologists have started doing. Perhaps we should all “pre-register” our beliefs before engaging in conversations on the Internet too.
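    A minimal numerical sketch of that point (the scenario and numbers are purely illustrative): under a conjugate Beta-Binomial model, the same sparse data leave a skeptic and an agnostic in very different places.

    ```python
    # Illustrative only: two observers, same data, different Beta priors.
    def posterior_mean(alpha, beta, successes, failures):
        """Posterior mean of a Bernoulli rate under a Beta(alpha, beta) prior."""
        return (alpha + successes) / (alpha + beta + successes + failures)

    data = (3, 1)  # e.g., 3 apparent re-infections out of 4 ambiguous cases

    skeptic = posterior_mean(1, 99, *data)   # strong prior: re-infection is rare
    agnostic = posterior_mean(1, 1, *data)   # flat prior: no strong expectation
    print(f"skeptic:  {skeptic:.3f}")   # ~0.038 -- still reads it as rare
    print(f"agnostic: {agnostic:.3f}")  # ~0.667 -- reads the same data as common
    ```

    Same likelihood, same data; the disagreement lives entirely in the priors.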

  10. Well, here’s the thing. I never objected to the protests following George Floyd’s death. I thought they were premature and politically motivated, and I did not like, at all, the people directing these protests. On the other hand, I felt that the protesters were sincerely motivated. Certainly one that occurred outside where I lived was.

    The thing that I noticed, however, was that the protests very quickly got vicious. I don’t think that this was entirely because of George Floyd. This was a combination of things. Some of it was just lockdown fever, people needing to get out and having serious mental issues that come from being locked in your own house for months on end. However, some of it was because there were very definite actors encouraging violence, and there were people supporting those violent actors out of support for shared ideologies.

    Those people I did and do object to.

    As far as the protests go, including the one in my own town, one of the big reasons I did not join them is that I tend to think such protests are fairly useful only in the town to which they are relevant. The other is that there was a huge group of people, many not wearing masks, and I wasn’t going to stay around that. I went for a long walk when they walked underneath my window. So my objection to the peaceful protests was in fact mostly that they were acting like idiots and not following health guidelines. My objection to the riots was for a much different reason.

    As one police officer I talked to said, let’s just hope this lets them blow off steam so the protesters don’t decide to become rioters tonight. I wholeheartedly agree.

    Frankly, I think that part of the reason we had such angry protests was because people needed to get out and were getting very depressed and anxious locked in. The protests were a form of coping mechanism and therapy. Not the best coping mechanism, but we all take what we can get sometimes. It’s certainly better than doing the sisal two-step.

  11. Imagine doing a BSc in psych but never even hearing the term “myside bias”. Why do we need a new term (12 years old, according to Google Trends) to replace one in use for over 80?

    The reason is that they (academics) want 1) to feel relevant when they aren’t doing anything new, and 2) to cut off new students from old sources of knowledge. If you can’t search for it, you can’t educate yourself.

    A pertinent question here is “how can we make knowledge which is deliberately being hidden through the renaming of terms available to the youth of today?”

  12. The myside theory and advice may be correct, but…

    What our society is really suffering from is myside bias: People evaluate evidence, generate evidence, and test hypotheses in a manner biased toward their own prior beliefs, opinions, and attitudes.

    I object to the word “prior”.

    Prior to very recently people believed that all lives mattered, that most police are good people, that men were not women, that America was not founded because of slavery, etc. etc.

    There is a cult at work. People are being indoctrinated.

  13. This is an oversimplification of the argument, and it’s a bit of a misrepresentation. I don’t think anyone is dumb enough to make that claim, which is why of course the left accuses the right of making it. It’s a way of putting people on the defensive and putting words in other people’s mouths. There are a couple of questions, though.

    1. What causes the difference?

    2. Once we have agreed on what causes the difference, what interventions are available to fix the difference, and what will the side effects of those interventions be? Are they acceptable to society?

    We’re still having trouble with number one. One of the problems is, a lot of the data out there is supported by a lot of raw emotion. Thus, people push solutions that sound good, and ignore a bunch of the data.

    For example, one of the reasons that is continually pushed is that the problem is some kind of structural racism. Therefore, the solution is structural racism in the opposite direction, including education that tries to defeat whites and bring them down so that now we can have equality. Sorry, Equity. There is a difference.

    This is very emotionally satisfying, and there are a lot of careers built off of it. However, this is an 8-bit solution to a 256-bit problem. It leaves out the complexity, and doesn’t really worry about the other factors. For example, does that kid have a father in the home? Does the father have a job? How many men go through that home on a regular basis?

    Other questions that matter, why is there not a father in the home if there isn’t one? What is the quality of the schools, and if it is low, why? What is getting in the way of improving that?

    Other question. Is the housing full of black mold? Are there rat problems? Are there drug dealers around? I’ve had students who lived below a drug dealer, and the guy loved to give them freebies to get them hooked. One of them damn near passed out in class, while a sub was watching them.

    Is there some kind of community structure in the neighborhood? What enforces this community structure? Is the neighborhood clean and does it have some kind of civic pride?

    Here’s the nuance. All of the questions I asked give you a much better data set, and actually offer different interventions. For example, do you remember that jerk of a Trump supporter who was going into poor areas and cleaning them up? You know, getting rid of trash that could harbor rats, cleaning up infested sites, making the neighborhood look good? That’s an intervention, and it’s one that, for some reason, the Democrats who ran the city never bothered with. It’s actually a very useful thing to do, because it directly affects public health, by reducing all those rats, and also affects the civic pride in the neighborhood. A kid who walks through neat streets that aren’t swarming with vermin is going to do better as well.

    I teach in the inner city, and the first thing I tend to look for in a student is just to check if there is a father in the home. Simplest possible intervention. If there is, it sets the kid up for a much better future. Then we look at simple things like nutrition, is the kid getting enough to eat. Our school feeds them. Medical issues, these need to be checked.

    Oddly enough, though, the most important thing for them is just to have a father around. If they have a stream of father figures going through the household, they’re doomed. And the racism explanation doesn’t address this; it doesn’t have the tools.

    What does affect this is actually strict discipline. If they have no father, teachers very often have much more trouble with them, because they act out. They require more discipline from the school because they get none at home. Students crave structure of that kind. They hate it, they rebel against it, but when you provide it, they love you for it.

    Then comes another issue, once you have actually looked at these and gotten a nuanced picture. What do you do about it?

    One of the things that you have to be careful about is letting people make sweeping changes. It’s too easy to break things. For example, in the 1960s, LBJ managed to break the black family. We haven’t managed to fix it since. That, in my opinion, is probably the first thing that needs to get put together. Kids who have fathers grow up sturdier, more mature, and have much better success. It actually doesn’t matter whether they are white, black, Asian, or whatever. I have seen this in all races.

    The problem is, some interventions break too much. In other words, when you do something out of compassion and don’t think through the unintended consequences, those consequences very often destroy that which you intended to do in the first place. For example, being incredibly kind to the homeless is exactly what gets more of them. Doing what the homeless need done, fixing drug and alcohol abuse, getting those who need to be institutionalized into institutions, and in general not putting up with them letting their lives go to ruin and sprawl all over the streets: that is better for them. But somehow, this doesn’t get done, because it’s not a compassionate solution. It’s not compassionate for the homeless, anyway, but it is very compassionate for all of the small business owners who don’t like to have people breaking into their shops, pooping on their doorsteps, mugging them, etc. So it’s a question of: do you enact reforms that help a specific group at a cost to the whole? Do you enact low-resolution reforms that do not address the problem?

    If that’s what you like, I suggest you either move to New York City or to California. They are masters at doing exactly that.

  14. That’s what was asked–

    We, the rational, are endlessly set upon by those who see the sciences as tools of oppression, who see facts as subject to feeling, who refer to rational explanations of the unfolding of events in reality as ‘conspiracy theories’ because they expose the lies they take as fact for the buffoonery that it is.
