Research [is] a strenuous and devoted attempt to force nature into the conceptual boxes supplied by professional education. ~Thomas Kuhn, The Structure of Scientific Revolutions
Scholarly research is in crisis, and four issues highlight its dimensions. The first is that important disciplines such as physics, economics, psychology, medicine, and geology are unable to explain over 90 percent of what they observe: a form of dark matter dominates their theoretical understanding. In cosmology, some 95 percent of the universe consists of dark matter and dark energy, which remain undetected and unexplained. Some 90 percent of human decisions are made autonomously by our subconscious, and even conscious decisions often emerge from a black box with little explicit justification. The causes and natural history of major illnesses, including heart disease, cancer, obesity, and mental illness, remain largely unknown at the level of the individual.
The second dimension of the research crisis is that systems which are critical to humankind, especially climate, demography, asset prices, and natural disasters, are minimally predictable. The clearest example of misguided theory is the conduct of organisations. Although a high proportion of their executives have formal training in management, its inadequacy is continually evidenced by failures as serious as the collapse of globally significant firms (such as Enron, Lehman Brothers, Merrill Lynch, and Parmalat) and global recessions, most recently in 2000 and 2008. Perhaps the best evidence of management theories’ poor value is an analysis rhetorically entitled “Is the US Public Corporation in Trouble?”, which showed that the profitability of firms fell significantly over the 40 years to 2015. This brings the important corollary that modern management and finance theory have been associated with declining economic performance.
The third dimension is a chronic inability to reproduce research findings. This replication crisis was highlighted by an online survey by Nature in 2016, in which over 70 percent of researchers reported that they had tried and failed to reproduce other researchers’ published findings. More specifically, scientists at the biotechnology firm Amgen reported in 2012 that they could confirm the results of only six of 53 (11 percent) landmark cancer studies published in high-impact journals. Another indication of the dubious benefits of research is its inability to ensure the integrity of findings. Doubts over quality are serious enough to prompt papers such as medical researcher John Ioannidis’s “Why Most Published Research Findings Are False.” The variety of deceit is catalogued at retractionwatch.com, and its extent is consistent with reports of endemic misconduct amongst researchers.
The final indicator of crisis in research is that progress in developing better theory and forecasting capability has stagnated since the 1960s. A paper by Stanford and MIT economists asked “Are Ideas Getting Harder to Find?” and concluded that—although the number of researchers is growing exponentially—they are becoming less productive in terms of ideas generated, and sustaining research productivity requires ever greater expenditure. For example, yields of agricultural crops have roughly doubled since the 1960s, but this required research expenditure to increase by a factor of up to six.
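The arithmetic behind that example is worth making explicit. A minimal sketch, using only the rough figures quoted above (yields roughly doubling against up to a sixfold rise in research expenditure), is:

```python
# Rough arithmetic on the crop-yield example: if output roughly doubled
# while research expenditure rose by a factor of about six, the yield gain
# bought per unit of research spending fell to roughly a third of its
# 1960s level. The figures are the essay's approximations, not new data.

yield_growth = 2.0       # crop yields roughly doubled since the 1960s
spending_growth = 6.0    # research expenditure rose by up to a factor of six

productivity_ratio = yield_growth / spending_growth
print(f"Research productivity is roughly {productivity_ratio:.0%} of its 1960s level.")
# -> Research productivity is roughly 33% of its 1960s level.
```

This is what “becoming less productive in terms of ideas generated” means in concrete terms: on these figures, each unit of research spending now buys only a fraction of the improvement it once did.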
Symptoms of incomplete theory were made clear by the 2017 BBC radio program “Is the Knowledge Factory Broken?”, which raised awkward questions about the poor return on the vast sums given to universities and other research institutions. It argued that something has gone fundamentally wrong: most published research cannot be replicated and is therefore questionable, which means that science is not being done properly.
There are two bleak implications of this chronic failure of expertise. The first arises from the fact that risks in life are related to a lack of knowledge about the future: uncertainty leaves us unable to eliminate or avoid unwanted events, and so we face a future that is axiomatically risky. The second is that evidence-based policy and investment are a chimera. Taxpayers and investors have a reasonable expectation that public policy and corporate strategy will be founded on accurate predictions so that financial, intellectual, and other resources secure a positive outcome. Weak research, though, means that actions are based on flawed beliefs and are, in effect, gambles. Every time knowledge gaps produce social policies and corporate strategies based on unsupportable premises, there is a human cost. There is also a huge opportunity cost in the $1 trillion which governments provide to research councils each year, with ten times more paid in academics’ salaries.
What has caused the crisis in research? The most obvious reason is that the systems and phenomena which matter to us, such as climate, markets, disease, and institutions, are complex and dynamic. Complex because they comprise multiple interacting systems: the performance of institutions as diverse as churches and corporations, for instance, depends on their governance and leadership, their internal culture and the mores of their sector, the tasks they face, and the resources available to them. Dynamic because they are shaped by feedback from their own outcomes, so important data about their future do not yet exist. Once time is incorporated, such systems become inherently hard to explain and impractical to predict accurately. Researching them is difficult.
In addition to research’s inherent difficulties, its performance has been hampered by the sector’s poor structure and conduct. Most obviously, it can rely on government funds and is self-regulated by academic elites, a combination that has never worked well for any endeavour. Research is also conducted in isolation from the real world, which means there is no external scrutiny of its choice of topics or its labour practices, and it never faces the competition that continuously improves other important activities. This gives research weak economics: there is no market for most of its output and no independent measure of the value it adds. The siloing of research and specialist theories within disciplines establishes an intellectual echo chamber for insiders and an opaque Tower of Babel for outsiders. This inappropriate structure has allowed poor research to survive, and often to thrive.
Without regular real-world validation, research has become insular and focussed on normal science intended to buttress existing paradigms. That is, it takes existing intellectual frameworks or theories as truth and either fills in gaps or develops innovative strategies to confirm what is known. The result is intellectual stagnation, which has two consequences. One is a marked slowdown in the rate of knowledge accumulation. An excellent example is provided by the American Economic Review, its discipline’s premier journal. The AER celebrated its centenary in 2011 by commissioning a distinguished panel of researchers to choose the 20 most “admirable and important articles” the journal had published. This presumably would list economics’ most innovative and influential thinking. So it is staggering that the most recent of these articles dates to 1981: the leading economists of our day evidently believe that it is 30 years since the pre-eminent AER published an important idea!
It is hard to believe that a discipline which pretends to any credibility has not seen a breakthrough in thinking published in its leading journal in three decades. A similar picture, though, emerges from an article in Nature Ecology & Evolution which lists 100 seminal papers that comprise “a general must-read list for any new ecologist … to achieve satisfactory ecological literacy.” Only two date from this decade, and 27 from the 2000s or 1990s: almost three quarters were published before 1990. The cognitive slowing of economics and ecology is matched by other disciplines where researchers show little curiosity about new theories, almost as if all questions have long since been answered and everything can now be interpreted through a Beatles-era prism.
Intellectual stagnation is facilitated by researchers’ studied refusal to investigate puzzling observations. Few are taught ontology and epistemology, or receive training in research techniques beyond their discipline’s staples: researchers rarely give much thought to their strategy, almost as if truth will emerge automatically. Thus most fields treat shocks and crises as irrelevancies to be stepped around rather than as learning opportunities. For instance, in March 2010, University of California economics professor Barry Eichengreen wrote in the National Interest that “the great credit crisis has cast doubt over much of what we thought we knew about economics.” This global market and economic catastrophe reverberated for years after 2007, but economics and finance theories saw zero change. Researchers’ lack of curiosity is abetted by perverse incentives built around the volume of publications in top journals, which give no indication of research’s repeatability or accuracy, much less its real-world significance. Researchers driving for volume publish rapidly, so the knowledge factory turns out noise that wastes resources and talent.
Why is weak research so seldom remarked upon? It seems that the staggering achievements of entrepreneurial engineers and scientists in commercialising new communications, transport, and other technologies during recent decades have distracted attention from poor research. Unproductive academic researchers have been gifted a free ride.
The poor and worsening position of research is not self-correcting, and the sector needs to be redirected towards solving real-world problems and developing an effective predictive capability. Energising research is best achieved through economic levers, and two are available because of the sector’s concentrated cashflows. Most researchers depend on funding from research bodies such as the Australian Research Council or on university salaries. Funders should develop a detailed code of research practice and, without compromising researchers’ independence, prioritise research that solves puzzles in existing paradigms and enables forecasts of important phenomena. This would be best achieved by linking a significant portion of grants and incentives to the payback from improved real-world understanding and predictability.
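To illustrate how such a link might work, the sketch below blends a conventional peer-review score with a verified forecasting record; the weighting, the inputs, and the function itself are hypothetical illustrations rather than any funder’s actual formula.

```python
# Illustrative only: a hypothetical rule in which part of a grant decision
# depends on a research group's verified record of real-world prediction.
# The 0.3 weight, the inputs, and the function name are assumptions made
# for this sketch, not an existing funding-council formula.

def grant_score(peer_review_score: float,
                predictive_accuracy: float,
                payback_weight: float = 0.3) -> float:
    """Blend conventional peer-review merit (0-1) with a measure of how
    well the group's past predictions held up out of sample (0-1).
    payback_weight is the 'significant portion' tied to predictive payback."""
    return (1 - payback_weight) * peer_review_score + payback_weight * predictive_accuracy

# A polished proposal from a group with a weak forecasting record can rank
# below a solid proposal from a group whose forecasts held up.
print(round(grant_score(peer_review_score=0.9, predictive_accuracy=0.2), 2))  # 0.69
print(round(grant_score(peer_review_score=0.8, predictive_accuracy=0.9), 2))  # 0.83
```

The point of the sketch is not the particular weight but the principle: once some portion of funding depends on demonstrated predictive payback, researchers acquire a direct incentive to produce theory that works outside the journal page.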
A second economic lever is held by universities because scientific publications are researchers’ main communications channel, and publishers derive their income through subscriptions, or on a pay-to-play basis where authors meet publication charges. In both cases, universities provide most of the money, which should enable them to impose requirements on the content and editorial practices of journals while respecting the integrity of academics and journal editors.
In particular, universities should restrict funding to publishers with robust integrity programs and give preference to journals which promote research quality. Publishers should centralise integrity checks of all submissions and send quality papers for review by qualified peers chosen at random. Journals should require self-replication of research so that innovative findings are confirmed in a fully independent setting. They should also adopt a multidisciplinary perspective and encourage review articles which critically evaluate the prevailing paradigm, summarise its strengths and weaknesses, and provide informed commentary on developments in research and its strategy.
There are further strategies available to universities to enhance research. One is to catalyse a holistic, interdisciplinary approach. Groups of complementary universities could develop an online platform where researchers register their interests, reducing search costs for innovative researchers seeking like minds and promoting the formation of multidisciplinary teams to focus on critical puzzles; a sketch of the kind of matching such a registry could support follows below. A second strategy is to promote a groundswell in heterodox research by putting resources behind suitable journals, encouraging international conferences, and incentivising researchers.
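As a rough indication of what such a platform would do (the registry, the disciplines, and the interest keywords below are invented for illustration), matching reduces to finding researchers in different disciplines whose declared interests overlap:

```python
# Hypothetical sketch of an interest registry: researchers record their
# discipline and a few interest keywords, and the platform surfaces
# cross-disciplinary pairs who share a puzzle. All entries are invented.

from itertools import combinations

registry = {
    "ecologist_A":  {"discipline": "ecology",   "interests": {"tipping points", "forecasting"}},
    "economist_B":  {"discipline": "economics", "interests": {"asset prices", "forecasting"}},
    "physicist_C":  {"discipline": "physics",   "interests": {"complex systems", "tipping points"}},
}

def suggest_pairs(registry):
    """Yield pairs from different disciplines with at least one shared interest."""
    for (name_a, rec_a), (name_b, rec_b) in combinations(registry.items(), 2):
        shared = rec_a["interests"] & rec_b["interests"]
        if shared and rec_a["discipline"] != rec_b["discipline"]:
            yield name_a, name_b, shared

for a, b, topics in suggest_pairs(registry):
    print(f"{a} <-> {b}: shared interests {sorted(topics)}")
# ecologist_A <-> economist_B: shared interests ['forecasting']
# ecologist_A <-> physicist_C: shared interests ['tipping points']
```

A real platform would need richer profiles and better matching than keyword overlap, but even this crude search would lower the cost of finding collaborators across disciplinary silos.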
In these ways, we can begin to tackle the crisis in research and promote change in the way it is conducted by setting out practical and realistic solutions to obvious shortcomings. The goal must be to realign research priorities and incentives to avoid the opportunity costs of weak research, and to raise quality expectations so as to accelerate the development of better theory that delivers real-world solutions of value to stakeholders. The cost of failing to make research more productive will only grow as the scale and complexity of societies increase; if we lack the foresight to respond, unmanaged risks will become ever more serious.