Economics, Education, Recommended, Social Science

How to Tackle the Unfolding Research Crisis

Research [is] a strenuous and devoted attempt to force nature into the conceptual boxes supplied by professional education.
~Thomas Kuhn, The Structure of Scientific Revolutions

Scholarly research is in crisis, and four issues highlight its dimensions. The first is that important disciplines such as physics, economics, psychology, medicine, and geology are unable to explain over 90 percent of what we see: dark matter dominates their theoretical understanding. In cosmology, 95 percent of the night sky is made up of dark matter and dark energy, which are undetectable and inexplicable. Some 90 percent of human decisions are made autonomously by our subconscious, and even conscious decisions often emerge from a black box with little evidential support. The causes and natural history of important illnesses—including heart disease, cancer, obesity, and mental illness—are largely unknown for individuals.

The second dimension of the research crisis is that systems which are critical to humankind—especially climate, demography, asset prices, and natural disasters—are minimally predictable. The best example of misguided theory can be seen in the conduct of organisations. Although a high proportion of their executives have formal training in management, its inadequacy is continuously evidenced by failures as serious as bankruptcies of globally significant firms (such as Enron, Lehman Brothers, Merrill Lynch, and Parmalat) and global recessions (including as recently as 2000 and 2008). Perhaps the best evidence of management theories’ poor value is an analysis rhetorically entitled “Is the US Public Corporation in Trouble?” which showed that profitability of firms fell significantly in the 40 years to 2015. This brings the important corollary that modern management and finance theory have been associated with declining economic performance.

The third dimension is a chronic inability to reproduce research findings. This replication crisis was highlighted by an online survey by Nature in 2016, in which over 70 percent of researchers reported that they had tried and failed to reproduce other researchers’ published findings. More specifically, scientists at biotechnology firm Amgen reported in 2012 that they could confirm the results of only six of 53 (11 percent) landmark cancer studies published in high-impact journals. Another indication of the dubious benefits of research is its inability to ensure the integrity of findings. Doubts over quality are serious enough to be expressed in papers along the lines of that by medical researcher John Ioannidis entitled “Why Most Published Research Findings Are False.” The variety of deceit is made clear at retractionwatch.com, and its extent is consistent with reports of endemic misconduct amongst researchers.

The final indicator of crisis in research is that progress in developing better theory and forecasting capability has stagnated since the 1960s. A paper by Stanford and MIT economists asked “Are Ideas Getting Harder to Find?” and concluded that—although the number of researchers is growing exponentially—they are becoming less productive in terms of ideas generated, and sustaining research productivity requires ever greater expenditure. For example, yields of agricultural crops have roughly doubled since the 1960s, but this required research expenditure to increase by a factor of up to six.

Symptoms of incomplete theory were made clear by the 2017 BBC radio program “Is the Knowledge Factory Broken?”, which raised awkward questions about the poor return on vast sums being given to universities and other research institutions. It argued that something has gone fundamentally wrong because most published research cannot be repeated and thus is questionable, which means that science is not being done properly.

There are two bleak implications of this chronic failure of expertise. The first arises from the fact that risks in life are related to a lack of knowledge about the future: uncertainty leaves us unable to eliminate or avoid unwanted events, and so we face a future that is axiomatically risky. This leads to the second implication of weak research, which is that evidence-based policy and investment are chimeras. Taxpayers and investors have the reasonable expectation that public policy and corporate strategy will be founded on accurate predictions so that financial, intellectual, and other resources secure a positive outcome. Weak research, though, means that actions are based on flawed beliefs, and are literally gambles. Every time knowledge gaps lead to social policies and corporate strategies that are based on unsupportable premises, it brings a human cost. There is also a huge opportunity cost in the $1 trillion which governments provide to research councils each year, with roughly ten times more spent on academics’ salaries.

What has caused the crisis in research? The most obvious reason is that systems and phenomena which are important to us—such as climate, markets, disease, institutions, and so on—are complex and dynamic. Complex because they comprise multiple systems: the performance of institutions as diverse as churches and corporations, for instance, depends on their governance and leadership, internal culture and mores in their sector, the tasks they face and the resources available. Dynamic because they face feedbacks from outcomes, and so important data about their future are not yet available. When the question of time is incorporated into such systems they become inherently hard to explain, and impractical to predict accurately. Researching them is difficult.

In addition to research’s inherent difficulties, its performance has been hampered by the sector’s poor structure and conduct. Most obviously, it can rely on government funds and is self-regulated by academic elites, an arrangement that has never worked for any endeavour. Research is also conducted in isolation from the real world, which means there is no external scrutiny of choice of topics and labour practices, and it never faces the competition or scrutiny that continuously improves other important tasks. This gives research weak economics, such as the absence of a market and of independent measures of value added. Siloing of research and specialist theories within disciplines establishes an intellectual echo chamber for insiders, and an opaque Tower of Babel for research outsiders. Inappropriate structure of the sector has allowed poor research to survive, often even to thrive.

Without regular real-world validation, research has become insular and focussed on normal science that is intended to buttress existing paradigms. That is, it takes existing intellectual frameworks or theories as truth and either fills in gaps or develops innovative strategies to confirm what is known. The result is intellectual stagnation which has two consequences. One is a marked slowdown in the rate of knowledge accumulation. An excellent example is provided by American Economic Review, which is its discipline’s premier journal. AER celebrated its centenary in 2011 by commissioning a distinguished panel of researchers to choose the top 20 most “admirable and important articles” that the journal had published. This presumably would list economics’ most innovative and influential thinking. So it is staggering that the most recent of these articles dates to 1981: the leading economists of our day think it is almost 30 years since the pre-eminent AER published an important idea!

It is hard to believe that a discipline which pretends to any credibility has not seen a thinking breakthrough published in its leading journal in three decades. A similar perspective, though, emerges from an article in Nature Ecology & Evolution which lists 100 seminal papers that comprise “a general must-read list for any new ecologist … to achieve satisfactory ecological literacy.” Only two date to this decade, and 27 to either the 2000s or 1990s: almost three quarters were published before 1990. The cognitive slowing of economics and ecology is matched by other disciplines where researchers show little curiosity about new theories, almost as if all questions have long since been answered and everything can now be interpreted through a Beatles-era prism.

Intellectual stagnation is facilitated by researchers’ studied refusal to investigate puzzling observations. Few are taught ontology and epistemology, or receive training in research techniques other than discipline staples: researchers rarely give much thought to their strategy, almost as if truth will emerge automatically. Thus, most fields treat shocks and crises as irrelevancies to be stepped around rather than learning opportunities. For instance, in March 2010, University of California economics professor Barry Eichengreen wrote in the National Interest that “the great credit crisis has cast doubt over much of what we thought we knew about economics.” This global market and economic catastrophe reverberated for years after 2007, yet economics and finance theories remained essentially unchanged. Researchers’ lack of curiosity is abetted by perverse incentives that are based around the volume of publications in top journals and give no indication of research’s repeatability or accuracy, much less its real-world significance. Researchers driving for volume publish rapidly, so the knowledge factory turns out noise that wastes resources and talent.

Why is weak research so seldom remarked upon? It seems that the staggering achievements of entrepreneurial engineers and scientists in commercialising new communications, transport, and other technologies during recent decades have distracted attention from poor research. Unproductive academic researchers were gifted a free ride.

The poor and worsening position of research is not self-correcting, and the sector needs to be redirected towards the solution of real world problems and developing an effective predictive capability. Energising research is best achieved through economic levers, and two are available because of the sector’s concentrated cashflows. Most researchers depend on funding from research bodies such as the Australian Research Council or through university salaries. Funders should develop a detailed code of research practice; and—without affecting researchers’ independence—prioritise research that solves puzzles in paradigms and enables forecasts of important phenomena. This would be best achieved by linking a significant portion of grants and incentives to paybacks from improving real world understanding and predictability.

A second economic lever is held by universities because scientific publications are researchers’ main communications channel, and publishers derive their income through subscriptions, or on a pay-to-play basis where authors meet publication charges. In both cases, universities provide most of the money, which should enable them to impose requirements on the content and editorial practices of journals while respecting the integrity of academics and journal editors.

In particular, universities should restrict funding to publishers with robust integrity programs, and preference journals which promote research quality. Publishers should centralise integrity checks of all submissions and send quality papers for review by qualified peers chosen at random. Journals should require self-replication of research so that innovative findings are confirmed in a totally independent setting. They should also adopt a multidisciplinary perspective, and encourage review articles which critically evaluate the prevailing paradigm, summarise its strengths and weaknesses, and provide informed commentary on developments in research and its strategy.

There are further strategies available to universities to enhance research. One is to catalyse a holistic inter-disciplinary approach. Groups of complementary universities could develop an online platform where researchers could register their interests. This would reduce search costs for innovative researchers seeking like minds and promote formation of multi-disciplinary teams to focus on critical puzzles. A second strategy is to promote a groundswell in heterodox research by putting resources behind suitable journals, encouraging international conferences and incentivising researchers.

In these ways, we can begin to tackle the crisis in research and promote change in the way it is conducted by setting out practical and realistic solutions to obvious shortcomings. The goal must be to realign research priorities and incentives to avoid opportunity costs of weak research, and raise quality expectations to accelerate development of better theory that delivers real-world solutions of value to stakeholders. Failure to make research more productive will become increasingly important as the scale and complexity of societies grow—if we lack the foresight to respond, unmanaged risks will become ever more serious.


Les Coleman is Senior Lecturer in Finance at the University of Melbourne. His latest book How to Thrive through the Unfolding Research Crisis will be published in 2020.

Comments

  1. A few additional thoughts:

    1. Publish the results when a study or experiment fails. That can be just as instructive as success.
    2. I’m not sure how much the author’s argument is bolstered by the fact that when surveyed about the most significant articles or findings, scholars don’t include many recent ones. Things need time to establish themselves as great and influential, and people know this.
    3. Something obvious that’s missing from this piece: Maybe all the research failures and shortcomings and big unknowns should increase our humility about our ability to understand and explain everything, especially when it comes to predicting the future of complex, dynamic systems and of social phenomena.
  2. “A similar perspective, though, emerges from [an article](https://www.nature.com/articles/s41559-017-0370-9?WT.feed_name=subjects_scientific-community-and-society) in Nature Ecology & Evolution which lists 100 seminal papers that comprise “a general must-read list for any new ecologist … to achieve satisfactory ecological literacy.”

    How strange is this! When you don’t want to see the truth, it’s understandable you will consider the reverse.
    The main issue with modern sciences, I mean hard sciences, is the same as in the other fields of human activity: political correctness, dogmatic presumption increasingly tightening its grip on all the subjects that matter.
    For instance, in the field of genetics, developmental biology and biochemistry, it’s the opposite of what the author says: there have been numerous and great discoveries during the last three decades in evo-devo or epigenetics. But they don’t advocate for the mainstream stance, quite the contrary. So hush, hush… What do you think, my friend?.. You put your career at risk… Remember what happened to Galileo?.. No inquisition this time?.. Oh, oh, oh, What a gullible mind you have!

  3. My time as a student in a research lab taught me a very valuable lesson. That lesson is that the system we have today for building our knowledge is broken.

    I was working with a professor who for the past 15 years had been trying to right a wrong with replication after replication. This started with an experiment he ran that had a well-liked result among his academic peers. He realized afterward, however, that there was a major confound in the original experiment, and when it was removed the results reversed. The damage was done, though. Another researcher loved the original results and started working to come up with ways to reproduce them. My professor would find a confound in that individual’s work, correct for it and fail to reproduce. While he published all his follow-up trials, the results he got just didn’t have the same appeal. 15 years later the original results were still found cited in textbooks even though he as the original author had published dozens of follow-ups that failed to replicate!

  4. I absolutely love the topic of this article, but as a professional researcher, the entire article reeks of an economist’s take on what research is and how it should be done. There are huge differences between the fields in what research looks like, practices, the degree to which results are tested, the degrees to which they are connected to real world applications. To treat the entire field like one giant homogenous blob is like talking about how all humans act, think and feel. It’s not that you can’t get anything out of that conversation, but it would be at such a general, superficial level and you’d always know that if you scratched under the surface at all, everything would change.

    Articles like this arguing we’re not coming up with new ideas, we’re intellectually stuck completely miss how intellectual, physical and artistic progress works. Whether I’m teaching programming, tennis or guitar lessons, there are frequently replicated progressions in learning. The first steps of learning commands and syntax in programming, swings in tennis or chord fingering with a guitar takes a while, can be frustrating, produces awful results and many folks give up. If you make it past those, you emerge into a place of rapid growth. There’s a command for everything in programming, once you get a basic swing down in tennis you can start rallying consistently and with a few chords you can play tons of songs. But almost every field has a plateau (and multiple levels of plateaus) that emerge after a new level of learning. The next level of programming, learning to swing well on the move or moving beyond simple chords typically progresses much more slowly, takes much longer, is much less obviously productive and many folks get stuck at these plateaus.

    There are no new main ideas in reading research (my main area of research) not because we don’t have excellent people conducting research but because it’s not that complicated of a topic, the main ideas are pretty well known and we’re past the general ideas and into the micro-level details of how individual students learn best or the macro level social policy discussions about how should one organize a school district or make state law regulating instruction. Coming up with some “big new idea” would 99% likely be a publicity stunt only, for the purpose of getting attention and research dollars.

    Think about the research happening right now with information science and artificial intelligence. Pretty much everything the author says is 100% wrong. First, almost none of the high quality stuff is being done by universities, overseen by elite academics and funded by government grants. Second, it’s all being immediately tied to actual commercial projects/real needs, often in real time, often with hilarious results (siri pranks) or awful results (self driving car accidents). All of the work is being tested, replicated, evaluated, the money follows production, it just couldn’t be more different than what the author describes.

    I do educational research so I get that I’m not high on the totem pole of research ranking and that much of what comes out of education research is exactly what the author describes. But economics folks have imho the worst perspective from which to analyze research and the scientific process in general. I do think the author is on to some hugely important ideas, but by starting from a horrible vantage point and missing some foundational ideas, the ending result is seriously off track. But I love the conversation and the opening quote.

  5. A personal story right from the kitchen of research, and profitability of research: I worked some 5 years for a pharmaceutical company in Mexico and Peru on research of medicinal plants, on Dioscorea in Mexico, Chondodendron in Peru. In both cases, the challenge was to raise these wild plants in a plantation and to deal with the problems of cultivating them (diseases mainly). This project was paid wholly by the company, no public money; thus, I was not even allowed to write/publish on the results. In the end, the million or so of money spent (salaries, travel, material, support services) had a zero result (the plants became obsolete as a source for pills and relaxants, other sources were easier and cheaper). Money wasted, endeavours in vain?? You might say so, but the fact is that my company had about 10 similar lines of research, of which only 1 became successful and profitable. All costs of the failures are recovered by that 1 success story, of which nobody would have known from the beginning whether it would be such. The common ways in research of agriculture and medicine (and other fields??).

  6. @rickoxo, you say you do educational research?

    As a teacher (previously a college prof), I’ve been flabbergasted by the unscientific methods of educational research as it’s presented to us practitioners. Would you comment on this? How common is it? Is the appallingly bad research somehow more powerful and influential?

    Here is the process I’ve observed over the years, on manifold topics:

    A scientist has a theory about, say, teaching writing fiction. He doesn’t write professionally, knows nothing about the subject–but undeterred, breaks down the “writing process” into digestible and random ‘steps.’ He names the steps. Then he ‘researches’ its efficacy. The research consists of a single artificially small class of, say, 6 students in Ohio, with 2 teachers, and is conducted over 7 weeks. No follow up is conducted. The paper finds the scientist’s theory is effective (surprise). This scientist then names the writing method after himself and markets it. He then spends his entire career supposedly replicating these results --the only catch is that no one outside his lab replicates the results; all the quotes are either of himself quoting himself or his students quoting him, or, later, students quoting other students of his. Eventually he gives a Ted Talk and writes a book. His students and some colleagues who owe him quote glowingly on the back. Eventually, this ‘theory’ is forced on teachers. Outside ‘consultants’ who are paid $10K come into the school peddling the book and talk about this scientist as though he were God and his writing method as though it were the ten commandments. We are told we must use this method. We are each given the book telling us how to use this method. The method is manifestly stupid on nearly every level and utterly ineffective in the classroom. Furthermore, it acts as though teachers are idiots who know nothing about their own field of expertise, as though we were puppets.

    After the ‘consultant’ has left and the board has distributed the silly booklet to us all, spending who-knows-how-much, the writing method is forgotten and abandoned–until the next Big New Theory.

    I have seen this over and over and over, in multiple arenas. What is your experience on the ‘other end’? Surely the field cannot be filled with self-aggrandizing hucksters. What is the process like from your end?

  7. This is another article in Quillette in which the author identifies a real problem and then immediately starts dancing around what is , in my opinion, the truth.
    Research in the western world is slowing down because of two reasons: Dysgenic trends and Progressivism. While many researchers working at the fringe (Lynn, Murray) have been warning us for decades about the drop in average IQ, it is only now that this issue is becoming mainstream. High IQ individuals in our modern societies are simply not having enough children. This very likely diminishes the number of geniuses we get each generation. Contrary to what our leaders think, it is not “funding” or “inclusion” that drives research forward, it is novel ideas brought to life by unique individuals. Without them, there is no progress.
    However, why aren’t we producing more quality research even with the few geniuses that we still have? This is because the ideological dominance of progressivism brings a whole set of managerial and dogmatic problems that are deadly to quality research. The speech codes, hate speech laws, anti-harassment and anti-discrimination bills, have led to the rise of a managerial ecosystem in which the truth is secondary. No genius could survive, nor would get hired, in such an environment. As a consequence, universities end up being populated by 120-130 IQ smart but uninventive people who spend decades writing grant proposals and replicating the same research with a little twist. Moreover, progressivism imposes the belief in the blank slate. This renders the work in many disciplines completely useless. Explaining the success of social democracy in Sweden is useless if the Swedes themselves are not the main factor in your paper. Theorizing the lack of economic reforms in the Third World is useless if you don’t mention IQ.
    None of what the author of this piece mentions as solutions will work. The very dogmas of our times are anathema to proper research.

  8. I’m from the NLs too. For context, I was raised by a feminist (in the 60s/70s), and I even attended some women studies classes (as a male student). I studied Foucault, Frankfurter Schule, Erich Fromm. I think the sympathy for women basically enabled the growth of women studies. They were initially ‘manned’ though by people from other humanities studies, without a rigorous scientific background, or capabilities, being ‘non-STEM’.

    They ended up hiring like-minded people with similar low-scientific qualities. And of course the marxist language of oppression definitely appealed to women at the time. So I think it follows that they adopted marxist rhetoric. Incidentally, a lot of the educational science originated from that faculty too. And thinking back to how they approached the scientific process and their knowledge of statistics, I don’t give a penny for their research.

  9. Reply to DCL
    Sadly, what you describe is true far too often. Education is full of folks doing either semi-fake research with horrible methods, small samples, and almost non-existent effect sizes (who then try to sell their methods in the commercial curriculum market like you described) or folks making bold philosophical statements about an educational topic (i.e. traditional math instruction is racist) and demanding that entire disciplines be restructured to be more inclusive or equitable.

    Apart from all that noise, there are folks who are still trying to figure out things like best methods to teach kids how to read, how to teach writing, how to develop conceptual understanding in math instruction, etc. etc. But as you can see from any of those, education is a horrible place to do research. First, you can never subject kids to random assignment, especially to groups where you are sure the outcome will be less than helpful. Second, it’s a huge problem to define ideas like what does conceptual understanding of negative numbers mean or look like, or how do you assess it. Third, any application of educational ideas that addresses public schools has such a diverse population of students affected by it, that coming up with a single idea and “knowing” what its impact will be is close to impossible. Last, there are huge, incredibly powerful forces controlling what happens in education, some political, but the giant commercial publishing companies have control over most things curricular, and they have no interest in research, just in the ability to sell materials.

    So in that sense, the original author was on to something in that educational research exists in a complex, indeterminate space with little to no ability to conduct high quality research and high power elites mostly control both the universities and funding. Why I still bother is because public education is charged with providing millions of students on a daily basis their primary experience of learning and instruction over the course of 13 years. Doing it wrong repeatedly is a national disaster and disgrace, figuring out how to do it better is essential.

  10. That’s a ridiculous statement. Murray is a serious researcher with a 45 year career. To proclaim him a “White Nationalist” is just the cheapest of ad hominem attacks. If you can’t address the actual arguments but have to resort to a series of logical fallacies, such as “tainted sources”, then you are pretty much admitting you don’t have a logical rebuttal.

    Reference: “Tainted sources” an explicit use of the genetic fallacy.

  11. Using the Southern Poverty Law Center as a source is as tainted as a source comes. I personally would be embarrassed to quote the SPLC as backing for any viewpoint as they are manipulative and exploitive race baiters and seldom spread the truth.
    Please be careful in who you label a “white nationalist” , I realize it is a current leftist smear term, but that does not make it correct to use.
    If you read the book, why do you need to quote SPLC, what was your insights?

  12. Most of the neuroscience community agrees that a large portion of your decision making happens subconsciously, and you cannot be aware of the processes happening prior to a conscious narrative being fed to you by your brain. To what degree your subconscious is “you” is contentious. The human subconscious is primarily a machine that operates functionally: your brain receives sensory inputs, filters what it assumes is relevant through the algorithms, and produces actions. A conscious explanation for what is occurring during the aforementioned process is summarized after an action has been chosen. Human consciousness is like your own personal Morgan Freeman; it isn’t moving the progress of the story so much as trying to tell a story. Likely, consciousness evolved from the existing capacity of human memory, and the need to form those memories into useful tools for predicting the potential future outcomes of various actions taken by the individual or causal factors acting on the individual. The author’s position on this matter is not controversial (things get worse if you ask a sizable portion of philosophers, and physicists, who advocate determinism).

  13. Pretty sure your “passion” is just going to lead people to dislike you and reject your positions more stubbornly, even in areas where they might have been swayed. Your rant about global warming, oil companies, capitalism, Iran, Iraq, the Bush administration, and Israel seems unhinged, forcing me to conclude your claim to “truth” is either ironic and you’re trolling, or you lack the tools for constructive engagement.

    Either way, I’m going to suggest Quillette commenters let you flail about until your energy is spent.

  14. It’s tough to resist, because we tend to assume we can productively talk through differences of opinion. However, when someone throws out a word salad touching on topics as disparate as climate change and Israel (neither of which are the topic at hand), engaging is pointless. How can you address such varied topics, when no complete, coherent thought was presented on any of them? Best to let these people have their rant and wait for the dumpster fire to run out of oxygen.

  15. Speaking as a scientist, people who act like you are doing are just plain irritating. You are acting like an activist. All ad hominem, no logic, no data, and an inability to accurately understand the problem well enough to solve it.

    Are you an activist or a journalist by any chance?
