

A Heterodox Education

The Constance Holden Memorial Address, 2025.

Claire Lehmann at ISIR, 2025. Photo via X

Editor’s Note: On 26 July 2025, Claire Lehmann delivered the Holden Memorial Address for Distinguished Journalism at the Annual Conference of the International Society for Intelligence Research at Northwestern University in Chicago. What follows is an edited version of her speech.

I. The Promise and a False Start

I grew up in Rosewater, Adelaide, a working-class suburb near the industrial hub of Port Adelaide. I rode my bike to school past abandoned wool sheds and social housing. For most of my classmates, university was not on the horizon. So, when I got to the University of Adelaide, and I walked down the elegant boulevard of North Terrace every day—past the state library and the art gallery before arriving at Bonython Hall with its stately gothic towers—I felt I’d arrived. 

I went to university to study English. I was inspired to become a Shakespearean scholar by a brilliant year 12 English teacher who introduced me to Hamlet. And to my seventeen-year-old self, being an English professor at a sandstone university was the most glamorous job in the world. 

The first year of my English studies lived up to my expectations. We studied a bit of Chaucer and the 19th-century novel, including Robert Louis Stevenson’s The Strange Case of Dr Jekyll and Mr Hyde and Mary Shelley’s Frankenstein. But in second and third year, the coursework changed. Instead of studying the great works, we were assigned “texts,” which could be art installations, films, or TV shows. The assigned readings were all about the ideas of a philosopher named Michel Foucault. It felt like a bait and switch: reel them in with the promise of literature, then bury them in readings about heteronormativity.

Oh, the heteronormativity. In third year we were assigned a 1995 book called The Invention of Heterosexuality by Jonathan Katz. It argued that heterosexuality was “created” in the late 19th century as a byproduct of Krafft-Ebing’s classification of homosexuality. According to Katz, before heteronormative discourse emerged, heterosexual sex occurred only for the purpose of procreation, with rigid gender norms ensuring that “good” men and women were chaste. “Penis and vagina were instruments of reproduction, not of pleasure,” he wrote.

It was a bold argument. Katz opens his book by declaring that in “the early nineteenth-century United States, from about 1820 to 1860, the heterosexual did not exist.” He qualifies this by acknowledging that, of course, heterosexual behaviour existed—but the identity did not. He contends that the very act of categorising the “heterosexual” invented the identity, emphasising that “it is difficult to overstress the importance of that new way of categorising.” Before this category existed, Katz writes, “lust in men was roving,” not limited to the opposite sex—and heterosexual behaviour was typically framed as functional, not erotic. He denounces the recognition of heterosexuality as an “authoritarian” practice of “standardisation,” comparable to the attempt to standardise intelligence, and describes the outcome of this new knowledge system as “an erotic apartheid that forcefully segregated the sex normals from the sex perverts.”

I was sceptical. I was sure that humans had pursued erotic pleasure long before Victorian doctors began classifying sexualities. What about the pictures on the walls of Pompeii or the Khajuraho temples? Or the bawdy humour in The Canterbury Tales? I wasn’t sure exactly when the Kama Sutra was written, but I suspected it predated Victorian England. I was also doubtful that describing something is necessarily an act of invention. When European explorers arrived in Australia, they were surprised to find a beaver-like creature with a duck’s beak. They drew pictures of the platypus and sent them back home. The English thought the creature was a hoax. But the explorers did not “invent” the platypus by describing it.

Despite these logical flaws, Katz’s work was considered so novel and fresh by the professors of the English department that it was required reading. It was a continuation of the work of Foucault, who had argued that medical categories did not describe pre-existing realities; they merely constructed the phenomena under study. In The Birth of the Clinic, Foucault argued that diseases were “constructed as objects of knowledge” through particular ways of seeing, classifying, and intervening. In Madness and Civilization, he argued that “madness” as a concept was shaped by social forces rather than nature. While Foucault’s original works did provide some flashes of insight, those who were influenced by him stripped his analytical approach down to a crude and simplistic formula. To Foucault’s acolytes, all empirical inquiry was suspect because investigation and classification served power, not truth. As this view came to dominate the humanities, it made the study of human nature almost impossible without the threat of moral opprobrium.

It dawned on me that what I was being taught was unlikely to be true. But I had also become equipped with the tools to play the game. I knew how to sprinkle my essays with words like “fluidity,” “subjectivity,” and “performativity,” which virtually guaranteed me a good grade. All I had to do was cite Foucault on “power-knowledge,” and name-drop Deleuze and Guattari on the fluidity of desire. Nothing is natural or normal—culture constructs what we mistake for nature. 

II. Finding My Athena

One summer, out of frustration, I went looking for criticisms of Foucault. He was treated like God in my department, quoted the way evangelicals quote the Bible, and I figured that, just like any faith, this doctrine must have its sceptics. A Google search brought me to Camille Paglia. 

“Poststructuralism is a corpse!” Paglia declared. She described Foucault’s scholarship as weak and bloodless, imitative of Nietzsche, and lacking in rigour. But she didn’t just attack Foucault’s work; she assailed the entire academic establishment for elevating him as a guru:

Lacan, Derrida, and Foucault are the perfect prophets for the weak, anxious academic personality, trapped in verbal formulas and perennially defeated by circumstances. They offer a self-exculpating cosmic explanation for the normal professorial state of resentment, alienation, dithering passivity, and inaction.

Paglia’s own background contributed to her irreverence. Having grown up on a working dairy farm, she drew on her rural, working-class, Italian-immigrant background to critique the “bourgeois” academics who surrounded her. She saw through the superficial status games of her colleagues, and identified the shallow careerism in which many of them were engaged at the expense of true scholarship. But what I found most interesting about her work was that, although she was a literature professor, she recognised that there was much more to life than language and words.

Camille Paglia, New York, 2005. Alamy

Paglia attributed her reverence for art and beauty to growing up in the Catholic Church. In a 2008 interview, she described how she was moved as a child by the sunlight shining through the stained glass windows onto the pews below, and the graceful lines inscribed below an imposing statue of St. Matthew:

I’ve never lost that sense—and I still argue it constantly—that the visual sense and the language of the body are primary ways of communication. And that is why, though I’m a literature professor, I have waged a fierce battle against post-structuralism over the last 20 years, because I feel it’s absolutely absurd to think that the only way we know anything is through words. My earliest thoughts were visual, and my earliest responses were to colour and line and gesture.

Paglia understood that poststructuralism’s error was to reduce every experience to language and discourse. She knew that our instincts—including those related to carnality and desire—exist outside language. We are attracted to bodies and faces we like because of their colour and shape. Sex is more about scent and touch than it is about sentences on a page. We are not brainwashed into being attracted to those who are beautiful because of modern discourse—our desire exists at a pre-civilised, ancient, primal level. For Paglia, this reality was ontological: the body comes first, and the stories we tell about it come later. 


Reflecting on her insights and her criticisms of the academic establishment in the humanities, I realised that I would never get the education I wanted in English. So, that summer, I changed degrees and enrolled in psychology.

III. Jensen and Gould in the Underworld

To reach the psychology department, I had to descend four levels underground into what felt like a dungeon—fluorescent-lit corridors leading to small, windowless rooms filled with strange experiments.

Yet despite the bunker-like quality of the environment, the intellectual experience proved to be clarifying. Gone were the endless readings on “power,” “discourse,” and “performativity.” I was instead instructed on the concepts of internal and external validity, reliability, and falsification. I found the professors to be unassuming—they told us that psychology was a young science, and that all findings were provisional. This intellectual humility was at variance with the sinister caricature of psychologists that I had found in my poststructuralist readings. 

The department at that time was led by a researcher named Ted Nettelbeck, an affable older man with foppish hair and round spectacles. Nettelbeck was recognised as an ISIR Distinguished Contributor in 2008, having pioneered research into inspection time during the 1970s—a simple cognitive task that correlates with IQ. He was also a celebrated jazz pianist, and would be inducted into the South Australian Music Hall of Fame in 2022. As an aspirational student of the arts, I was encouraged—here was someone who demonstrated that the arts and sciences could go together.

Under Nettelbeck’s leadership, psychometrics was at the heart of our undergraduate psychology training. We studied intelligence theories, and we learned why Gardner’s “multiple intelligences” hypothesis was bunk. We were taught that brain size correlated with IQ, and that the construct of “emotional intelligence” was best explained by a combination of general intelligence and the Big Five personality trait of agreeableness. Our professors drilled us in the importance of construct validity—any new measure devised by psychologists must predict something beyond established constructs, otherwise it was likely to be a waste of time. Here, scepticism was actively encouraged. If we questioned the validity of a study or finding, our professors took it as a sign they had done their job.
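
The logic of that drill can be made concrete with a toy example. The simulation below is my own illustration rather than anything we were assigned: it generates hypothetical scores in which an “emotional intelligence” measure is just a noisy blend of general intelligence and agreeableness, then checks how much predictive power the new measure adds once those two constructs are already in a regression.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

def r_squared(X, y):
    """Proportion of variance in y explained by a least-squares fit on X."""
    X = np.column_stack([np.ones(len(X)), X])      # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hypothetical standardised scores; all coefficients are invented for illustration.
g = rng.standard_normal(n)                          # general intelligence
agree = rng.standard_normal(n)                      # Big Five agreeableness
ei = 0.6 * g + 0.5 * agree + 0.6 * rng.standard_normal(n)   # "emotional intelligence"

# An outcome (say, rated workplace performance) driven only by g and agreeableness.
outcome = 0.5 * g + 0.3 * agree + rng.standard_normal(n)

base = r_squared(np.column_stack([g, agree]), outcome)
full = r_squared(np.column_stack([g, agree, ei]), outcome)
print(f"R^2 with g + agreeableness:       {base:.3f}")
print(f"R^2 after adding the new measure: {full:.3f}")
print(f"incremental validity:             {full - base:.3f}  (negligible)")
```

In this toy setup the new measure adds essentially nothing once the established constructs are accounted for, which is exactly the situation our professors warned us to look out for.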

One morning, upon arriving in the tutorial room, we each found a paper on our desks titled “Debunking of Scientific Fossils and Straw Persons” by someone called Arthur Jensen. I had never heard of my namesake, but the title piqued my interest. His paper was a response to an apparently influential paleontologist named Stephen Jay Gould—another figure I had never heard of—who had written a book attacking psychometrics titled The Mismeasure of Man.

As I read Jensen’s paper, I learned that Gould had accused psychometricians of employing the fallacy of “reification.” By this, Gould meant that psychologists were treating intelligence as a tangible physical entity that could be located in the brain. Gould maintained that general intelligence or g—the observation that scores on IQ subtests usually correlate with one another—was a statistical artefact not a meaningful trait. Jensen pointed out that Gould had misrepresented him on the sensitive topic of IQ and race:

Gould states that the normal variation within a population is a different biological phenomenon from the variation in average values between populations. (Actually, this may be or may not be true for any given trait; it is an empirical question.) Failure to recognize this distinction, Gould claims, is an error that occurs “over and over again” and is the “basis of Arthur Jensen’s fallacy in asserting that average differences in IQ between American whites and blacks are largely inherited” (p. 127). The fact is, of course, that I have never “asserted” (Webster: “assert implies stating confidently without need for proof or regard for evidence”) that IQ differences between any races are largely inherited. Nor have I ever claimed that the well-established heritability of individual differences in IQ within races proves the heritability of differences between races. To quote directly from some earlier writing (Jensen, 1970): “Group racial and social class differences are first of all individual differences [i.e., they are the statistical averages of individual measurements], but the causes of the group differences may not be the same as of the individual differences” (p.154, italics added). Whether the causes are or are not the same for any particular trait for any particular groups is a question open to rival hypotheses and empirical investigation. Such has always been my position, a position spelled out most recently in Chapter 6 of my book Straight Talk About Mental Tests (Jensen, 1981a).

Gould was not a poststructuralist, but it seemed to me that he was employing a similar mode of analysis. The notion that intelligence could be “reified” by noticing that test scores correlated with one another sounded to me like the claim that heterosexuality could be “invented” by describing it. Gould didn’t like the idea that intelligence tests provided a score—“a single number ... permitting a unilinear ranking of people.” And he argued that psychometrics was morally tainted and dangerous, and would open the door to what he called “scientific racism.” These arguments took me back to my English classes. Here we go again, I thought.
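
To make concrete what was in dispute: the “observation” behind g is simply that people who do well on one kind of mental test tend to do well on the others, so a single dominant factor emerges from the correlation matrix of any test battery. The short simulation below is my own sketch of that pattern, using invented factor loadings; whether the resulting factor is a reified artefact or a genuine trait was precisely what Jensen and Gould were arguing about.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Five hypothetical subtests (vocabulary, matrices, arithmetic, ...), each a mix
# of one shared latent ability plus test-specific noise. Loadings are invented.
loadings = np.array([0.8, 0.7, 0.7, 0.6, 0.5])
latent = rng.standard_normal(n)
noise = rng.standard_normal((n, loadings.size)) * np.sqrt(1 - loadings**2)
scores = latent[:, None] * loadings + noise

R = np.corrcoef(scores, rowvar=False)
print(np.round(R, 2))                    # the "positive manifold": all correlations positive

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print(f"share of variance on the first factor: {eigvals[0] / eigvals.sum():.0%}")
```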

But this time the arguments did not go unanswered. Jensen patiently responded to all of them, and by doing so, he defended the possibility that the world—and human nature specifically—could be studied scientifically. He wrote:

Progress in scientific knowledge is distilled out of the endeavors of the many individually imperfect scientists who investigate the same phenomenon. The enterprise succeeds in its aim of objectivity, in the long run, despite the subjective biases of individual scientists and despite the influence of social context as portrayed by the Marxist sociology of science.

And: 

Lysenko’s [theory] is rejected (even by the Soviet ideologues who once promoted it), not because one scientist was necessarily a better man than the other, but because there is indeed a reality out there in the realm of phenomena, a reality in terms of which theories can be criticized and tested by innumerable other scientists.

Statements like these may seem to be self-evident to those trained in the sciences. But to a former English student steeped in poststructuralism, they were a revelation. It was the first time I had read a scholar defending empirical inquiry against accusations of political bias and moral contamination. Jensen expressed something that is often taken for granted but rarely articulated—that reality exists independently of our theories about it, and that it can be studied honestly through careful scientific work. I realised then that critics like Gould and the poststructuralists were not offering up methodological refinements with the purpose of furthering inquiry; they were mounting a challenge to the possibility of studying human nature in the first place.

Moving between the intellectual worlds of the humanities and the sciences allowed me to notice a pattern. I realised that scholars throughout the 20th century had repeatedly engaged in a philosophical error that I call the fallacy of disembodiment, whereby heritable human traits are dissolved into abstractions of language or culture, to avoid recognising biological reality. Reading Gould made me realise that this faulty logic didn’t just occur in the humanities; it persisted across disciplines, in different forms, and with the same pernicious effect of foreclosing inquiry.

But science marches on. Since Gould’s Mismeasure of Man—the second edition of which was released in 1996—neuroscientists have validated the reality of g. Neuroimaging studies by researchers like Richard Haier show that intelligence isn’t tied to one local area in the brain; it reflects how efficiently different brain regions work together. The Parieto-Frontal Integration Theory suggests that networks linking the major regions of the brain form the biological foundation of human intelligence. These networks allow information to be integrated and exchanged across multiple domains, supporting reasoning, problem-solving, and adaptive behaviour.


Had Gould possessed greater intellectual humility, he might have realised that psychometricians like Jensen were not wrong; they were simply early. Sometimes our investigative methods are not ready for our hypotheses. Science advances not only through hypothesis testing, but also through the evolution of methods themselves. The poststructuralists and Gould both held that human traits are constructs rather than biological realities. This was a revival of Cartesian dualism: not mind separated from body, but culture separated from nature, and discourse separated from biology.

This worldview ultimately functions to inflate the self-perceived importance of humanities academics and literary intellectuals. If language shapes reality, then who could be more important than those who work with language? Those working with numbers, or with their hands or bodies, might earn more money, but they are not shaping, creating, and “inventing” reality. It’s also a worldview that promises endless malleability for activists: if naming something can bring it into being, then entire identities can be constructed from scratch. From invented pronouns to self-defined genders, the only limit to reshaping the world is how far language can be stretched. But the cost of all this is profound. It casts entire research domains as dangerous, widens the chasm between the sciences and the humanities, and replaces authentic knowledge with empty ideology.

IV. James Flynn and the Rising Mean

Besides exposing his students to the quarrel between Arthur Jensen and Stephen Jay Gould, Nettelbeck also introduced us to a researcher named James Flynn. Flynn had a long-running disagreement with Jensen over the heritability of IQ. He noticed that IQ scores had increased since testing was introduced, which meant that tests had to be re-normed every few years. And he disputed Jensen’s hypothesis that general intelligence or g emerged from genetic factors rather than environmental influences. This disagreement notwithstanding, both men treated one another with cordiality and respect. In a 2018 essay titled “Reflections About Intelligence Over 40 Years,” Flynn noted that Jensen “did have a trait scholars should emulate: he never wavered in stating what he saw as the truth. He used to quote Mahatma Gandhi, one of his heroes, who pledged that he would always say in public what he believed in private.” 

This mutual respect existed in part because both men understood the complexity of intelligence. Flynn’s research did not disprove the hereditarian hypothesis, nor did Jensen’s findings refute the fact that environmental factors also play a role. While twin studies have shown that around fifty percent of IQ variation can be explained by genetics, genome-wide association studies (GWAS) often account for only ten to fifteen percent of total IQ variation. Psychiatrist and science writer Scott Alexander has argued that the “missing heritability”—the remaining 35–40 percentage points—is likely due to rare genetic variants and gene/gene or gene/environment interactions that the GWAS analyses miss.
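
The arithmetic behind those figures can be laid out in a few lines. The twin correlations below are hypothetical round numbers rather than estimates from any particular study, and the twin-based figure uses the classic Falconer approximation; the sketch is only meant to show where the “missing” share comes from.

```python
# Rough, round-number illustration of the "missing heritability" gap.
r_mz = 0.75   # IQ correlation for identical (MZ) twins -- illustrative value
r_dz = 0.50   # IQ correlation for fraternal (DZ) twins -- illustrative value

# Falconer's approximation: h^2 = 2 * (r_MZ - r_DZ)
h2_twin = 2 * (r_mz - r_dz)        # -> 0.50, i.e. roughly half of IQ variance

h2_gwas = 0.125                    # ~10-15% of variance captured by common variants in GWAS
gap = h2_twin - h2_gwas            # -> ~0.35-0.40, the share attributed to rare variants
                                   #    and gene/gene or gene/environment interactions
print(f"twin-based h2: {h2_twin:.2f} | GWAS h2: {h2_gwas:.2f} | missing: {gap:.2f}")
```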

Flynn understood the power of gene/environment interactions. In his 2018 essay, he wrote: 

Within a cohort, genetic quality tends to dictate how you respond at school, how hard you work, whether you join a book club, how you will do in high school, what university you attend—your genetic quality will eventually tend toward a matching quality of environment for cognition. Genes predict both environment and IQ and environment alone predicts very little. Between cohorts spaced over time, different forces operate.

Since the industrial revolution began, social change has caused new cognitive exercise … more schooling, more cognitively demanding work, and more cognitively demanding leisure. These environmental factors initially triggered a mild rise in average performance, but this rise was greatly magnified by feedback mechanisms and over a century average IQ escalated.

As the average years of schooling rose, the rising mean itself became a powerful engine in its own right as people chased it to keep up. The reasons why people did so are as varied as the people themselves. For some, it is mere imitation: parents who see other parents keeping their children in school longer tend to keep their own children in school longer. Whatever the mechanics of the behavior, there has been an education explosion, from everyone having an average of 6 years of schooling, to most people having 8 years of education, to most people having a high school education, to more than half of people having some tertiary experience. The children need a better and better education to keep matching the rising average—particularly if they are to reap the benefit of thousands of new cognitively demanding jobs, which are better paid. As formal schooling frees people’s minds from the concrete world to use logic on abstractions, performance on Raven’s Progressive Matrices soars.

Flynn argued that the same logic used to explain IQ differences within a generation couldn’t be used to explain changes between generations. Within a generation, genes tend to guide people into environments that match their abilities—a process he called the individual multiplier. But across generations, broad social changes—including better education and more complex jobs—created rising average IQ scores. This social multiplier meant that small environmental changes could have big long-term effects. Flynn called this sociological arithmetic, and it explained why IQ could rise over time even if our genes stayed the same.
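
A toy calculation illustrates how such feedback can turn a modest push into a large generational gain. The figures and the simple geometric form below are my own simplification, not the formal model Flynn developed with William Dickens.

```python
# Toy "social multiplier": an initial environmental boost raises the mean, the
# richer shared environment feeds part of that gain back to the next generation,
# and the increments sum like a geometric series.
direct_boost = 2.0    # hypothetical first-round gain in IQ points (e.g. more schooling)
feedback = 0.6        # hypothetical fraction of each gain recycled into the environment
generations = 8

cumulative, increment = 0.0, direct_boost
for gen in range(1, generations + 1):
    cumulative += increment
    increment *= feedback
    print(f"generation {gen}: cumulative gain = {cumulative:.1f} IQ points")

# Limit of the series: direct_boost / (1 - feedback) = 5.0 points, so a
# 2-point push becomes a 5-point rise once the feedback has played out.
print(f"long-run gain: {direct_boost / (1 - feedback):.1f} points")
```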

Flynn also acknowledged that this trend could reverse. In many countries, average IQ scores have been in decline since the mid-1990s—a phenomenon known as the reverse Flynn Effect. One explanation, put forward by researchers such as Michael Woodley of Menie and his colleagues, is that genetic variants linked to higher cognitive ability are becoming less common in the population due to fertility patterns. In their analysis of the Wisconsin Longitudinal Study, they found that women with higher polygenic scores for intelligence tended to have children later, which reduced their overall fertility. Based on this, they estimated a modest genetic decline in IQ—between 0.2 and 0.4 points per decade.

But another 2018 study by Bratsberg and Rogeberg found that IQ drops in Norway occurred within families, not just across them, pointing to possible environmental causes rather than (or in addition to) genetic ones. One such cause may be the rise of digital technology: more time spent on phones and screens may reduce engagement with activities that foster deep, abstract thinking. In this view, the same environmental forces that once boosted cognitive performance can now work in the opposite direction.

Empirical questions regarding the role of genes, environments, and chance in such phenomena as human intelligence remain open. But over the long run, both Jensen and Flynn have been shown to be right: biology, environment, and chance each play a role in shaping intelligence—particularly during early developmental processes in utero. The real mistake was made by Gould, who denied that intelligence could be identified or measured at all. 

V. Two Cultures

During my studies, I discovered that what I was learning about intelligence was considered vaguely heretical outside of the psychology department. I noticed during pub conversations with friends that people would react with discomfort, disbelief, and sometimes anger if I mentioned that intelligence could be measured, and that this measurement was usually reliable. It occurred to me that intelligence research ventured into sacrilegious territory. It undermines a vision of human nature that Steven Pinker described in his 2002 masterpiece The Blank Slate.

But it seemed to me that such hostile reactions were unfounded. Learning about intelligence did not make me arrogant. It did not make me less sympathetic towards people who were not bright. I had grown up with kids who didn’t have the cognitive ability to make it to university—and some of these people were my best friends. I didn’t accept the theory of “multiple intelligences” but nor did I have any problem recognising valuable traits such as kindness, athleticism, musical talent, humour, and beauty. If anything, learning about the reality of intelligence made me more compassionate. I increasingly saw how the world was stacked against those without sophisticated cognitive ability. I reflected on the fact that tasks I found barely manageable—such as filing taxes, planning long-term projects, or managing my own personal records—must be overwhelming for people with below-average intelligence. I realised that, despite growing up in a working-class environment, I had actually lucked out, and that I was a beneficiary of a form of “privilege” that I had not earned. 

But I also came to resent the fact that findings replicated thousands of times are treated as “fringe theories” outside the university. Intelligence testing is incredibly useful for identifying learning disabilities, assessing legal competency, and predicting job performance. All the while, public discourse portrays IQ research as discredited pseudoscience. This partly reflects the “Two Cultures” problem, first identified by C.P. Snow in his famous 1959 essay of the same name. Snow described a mutual incomprehension between literary intellectuals and scientists—literary intellectuals were ignorant of basic scientific facts while scientists failed to appreciate the cultural significance of their work. This mutual ignorance, argued Snow, impoverished both domains.

Snow’s essay was published 66 years ago. Since then, the divide he described has widened into a gaping chasm, where mutual ignorance has hardened into active hostility. Today, largely due to the influence of thinkers like Foucault, it is nearly impossible to study the biological basis of human nature without provoking moral disapproval. Scholars who attack empirical inquiry often cast themselves as a progressive vanguard battling reactionary forces. But this is a projection. Figures such as Foucault, Gould, and Katz were themselves reacting against a deeper scientific revolution begun by Darwin. In this light, postmodernism appears not as a radical departure, but as a moral backlash: a resistance to the idea that human nature can be studied scientifically. Gould and Foucault resembled scandalised Victorians, outraged by The Descent of Man.

But in his essay on the Two Cultures, Snow also observed that scientists have some responsibility for the mutual incomprehension between the two domains. Those working in quantitative fields often assume that their work speaks for itself. They believe that good research naturally triumphs, and that evidence convinces rational people. But this has proved to be a mistake. While the sin of the humanities is cynicism about empirical inquiry, the sciences’ sin is complacency about public engagement. Too many researchers retreat into technical journals, and as a consequence, the public conversation is dominated by activists and ideologues.

VI. Creating Quillette

After I completed my psychology degree, I went on a professional wander. I moved to Canberra to work for the Department of Health, before enrolling in a Master of Forensic Psychology at UNSW. Again, my training focused on the practical value of cognitive assessment. Writing dozens of reports based on the case files of criminal offenders attending a parole office clinic reinforced my belief that intelligence testing is a key component of risk assessment.

But while I was studying for this degree, I also found a community of people online who were grappling with the same questions about human nature that I was interested in. On Twitter, I met a biosocial criminologist named Brian Boutwell, a geneticist named Razib Khan, evolutionary psychologists like David Schmitt, Geoffrey Miller, and Diana Fleischman, and I spent time reading papers by psychologists like David Buss and Roy Baumeister. Discussing issues with people online gave me greater insight into the fact that rigorous findings from psychology and other fields were being routinely overlooked and often misrepresented in mainstream media.


After dropping out of my master’s program (I had a toddler at home and found the unpaid clinical placements challenging), I thought about how I could combine my love for the English language with my admiration for scientists and heretics. It occurred to me that I could create a space for ideas that were considered off-limits by journalists in mainstream publications. If done well, a new publication could fill an epistemological gap—it could communicate ideas to an educated audience without the burdens of heavy-handed ideological filters.

So in 2015, I registered the domain name of quillette.com. I had no funding, no team, a young child to take care of, and a new baby on the way. But I also had the conviction that rigorous ideas deserved wider circulation, and that the sciences provided humanity’s best toolkit for understanding. Early essays we published became instant classics, including “Why Parenting May Not Matter and Why Most Social Science Research is Probably Wrong,” “Rape Is About Sex, Not Power,” and “On the Reality of Race and the Abhorrence of Racism.” Of course, I expected—and even welcomed—resistance for publishing articles contrary to blank-slate dogma, but the vitriolic nature of some of the pushback took me by surprise.

In 2019, during the peak of the “Great Awokening,” prominent figures like Nassim Taleb repeatedly described me as a “neo-Nazi” online, and the Columbia Journalism Review called me a disinformation “villain” and listed me alongside Mohammed bin Salman and Yevgeniy Prigozhin. But it was obvious that attacks like these were not about specific findings or methodological disagreements. Like Gould’s attacks on Jensen, or like Katz’s attacks on sexology, the criticism rained down on my young publication was moralistic in nature. Rather than focusing on what was true or false, my critics wanted to reframe the question as being one of right and wrong.

What I’ve described here is not simply a debate about political correctness or campus culture wars; it is about whether human behaviour can be studied scientifically at all. When entire research domains are deemed off-limits, we lose crucial knowledge. And what are the real-world consequences? We currently have educational policies that ignore cognitive differences, criminal-justice reforms that misunderstand antisocial behaviour, social programs that assume behavioural malleability, and mental-health treatments that have limited efficacy.

All of this reflects a retreat from the Enlightenment. If nature does not exist outside our theories about it, and if scientific methods serve power not truth, we effectively abandon the possibility of evidence-based knowledge altogether.

VII. A Tradition Worth Defending

So, this is my message to you. If you are a researcher in the field of intelligence, you are the inheritor of a tradition that is worth defending. It is the tradition of Francis Bacon’s systematic empirical inquiry, of Darwin’s integration of humans into the natural world, of Spearman’s mathematical analysis of cognitive ability, and of countless other scholars who believed human nature could be studied scientifically. Resistance to this tradition is not new. The Romantics of the late 18th century recoiled from Enlightenment rationalism. Victorian moralists rejected the theory of evolution. Twentieth-century ideologues suppressed research in the service of the Soviet and Nazi regimes, and contemporary postmodernists dismiss findings that challenge their faith in the primacy of discourse.

But scientific methods of inquiry persist because they produce knowledge that helps to improve human welfare. The application of intelligence research, in particular, has led to better educational practices, fairer employment selection, and more effective interventions for those with either intellectual disability or intellectual giftedness. The field of behavioural genetics continues to answer questions about nature and nurture, while evolutionary psychology sheds light on the universal patterns that underlie our behaviour.

But grappling with intelligence research is not always going to be easy. Understanding how predictive a trait like g is, and understanding that it is largely heritable, forces us to come to terms with the limits of human plasticity. It forces us to contend with the fact that some social and economic inequalities may be more persistent than we’d like and forces us to accept the limits of social engineering. Yet the alternative to uncomfortable knowledge is destructive delusion, and many lessons from history show where such delusions can lead. The track record of subordinating empirical inquiry to political orthodoxy is one of death and disaster. 

In conclusion, my message to those who teach intelligence, behavioural genetics, and evolutionary psychology is to keep going. Keep putting papers on students’ desks. Keep teaching the WAIS, factor analysis, and the reality of g. Keep explaining twin studies, heritability, and what evolution tells us. Not because it makes you popular—because it won’t. And not because you know where it leads—findings are always provisional. But because truth matters, and the search for truth is one of humanity’s noblest endeavours. When you teach this material, you are not just defending particular findings, you are defending the right to study human nature honestly. You are showing students that careful, humble research can still improve understanding. This is how traditions survive. This is how knowledge advances. It is why what you do here matters more than you realise.

Thank you.

If you found this address valuable and would like to support further research in human intelligence, please consider donating to the International Society for Intelligence Research.