
The Overregulation of Science

Overly burdensome rules dampen enthusiasm for research and delay scientific progress.

Norman Thagard self-experimenting aboard the Space Shuttle. He conducted physiological experiments on personnel during the STS-7 mission. Wikipedia.

To the outsider, scientific research might seem like a boring parade of spreadsheets and test-tubes. It isn’t. Research is often hard, repetitive, and slow-moving. But the grueling work in the trenches is punctuated with lightning strikes of brilliance that can be game-changers. These moments must be acted on while the enthusiasm, the opportunity, and the ideas are hot. Carpe Cogitationem!

Forces that discourage brilliant action and timely investigation of hypotheses are bad for science. One such force is excessive regulation. When rules and regulations are illogical and overly burdensome, their primary effect is to dampen enthusiasm for research and to delay scientific progress. In the realm of medical research with human subjects, the Institutional Review Board (IRB) is often the culprit. To get a sense of the costs of over-regulation by the IRB, it is instructive to look back to episodes that predate the IRB entirely, predate the modern all-encompassing IRB, or were outside the IRB’s purview.

In 1929, in a hospital in Eberswalde, Germany, Werner Forssmann, a medical intern, made an incision in a vein in his own arm and fed a long catheter up the vein and all the way into his own heart. Such techniques had previously been attempted on animals, but this was the first recorded instance of a catheter being inserted into the heart of a living person. We know that it happened because Forssmann was careful to obtain X-ray evidence of his feat. He was motivated in part by pure scientific curiosity, but he was also acting in the hope that, once perfected, the method could be used to study the workings of the living heart by taking blood and tissue samples from it and delivering drugs directly to it.

Forssmann had been warned by colleagues and supervisors not to attempt the technique on others or on himself. He may have been insubordinate, but he was not acting recklessly. He was familiar with the relevant animal experiments and had good reason to expect that he would succeed. He was confident that his procedure was reasonably safe and important, so he persevered. There were no ethics panels nor any government regulations to satisfy. His audacious—if impatient—experimentation ushered in the field of cardiac catheterization.

Forssmann is not an outlier. A self-experimenter lives in every scientist. Many of us are ready to put our bodies on the line for the sheer joy of testing a hypothesis, proving a point, or immersing ourselves fully in the scientific enterprise to which we are committed. Sometimes, we have multiple motivations.

In 1984, while a graduate student at Case Western Reserve University in Cleveland, I allowed a medical intern to insert a catheter down my throat, through my vocal cords, and into my bronchi. The intern, and the graduate student he was helping, used the catheter to flush some saline into my lungs (“bronchoalveolar lavage”) and retrieve macrophages—a type of immune cell that lives in the lungs—for their research. By then, catheters had advanced considerably since Forssmann’s era: they were outfitted with fiber optics and eyepieces so that both the intern and the willing subject (me) could observe the subject’s airways directly through the investigator-guided catheter. Because light in a fiber optic can bend around corners, the investigator can “go” anywhere the catheter can go.

Like Forssmann, I was acting partly out of pure scientific curiosity. “This is an episode of NOVA!” I told myself. “Welcome to the insides of Evan’s bronchi!” (The reimbursement rate of $50 for my macrophages and inconvenience was just enough to secure me a seat in a van traveling overnight from Cleveland to New York City where my then girlfriend was living.) I might have signed a consent form; I don’t recall. I don’t know that my macrophages ushered in any new fields of science. I do hope the grad student got his PhD.

After his success with self-catheterization, Forssmann continued to study the heart. Catheterization of the heart allowed the delivery of dyes that could be seen on an X-ray, the only medical imaging device available at the time. In this work, Forssmann adopted a slightly more conventional approach. He started with experiments on dogs. But his hospital had no animal housing. So, being a dedicated but impatient scientist, he raised the animals in his mother’s apartment and secreted them into the hospital in a potato sack. This was the beginning of cardiac angiography—the imaging of the heart using contrast agents that heighten the distinction on the image between heart tissue and the neighboring blood-filled cavities, or between viable and necrotic tissue. Forssmann’s audacious exploits are recounted in a wonderful book about the history of self-experimentation, Who Goes First?, by Lawrence K. Altman.

Inside every workhorse scientist lives a thoroughbred, champing at the bit to break out of the gate, run the race for the results, and capture the glory—however obscure. This is a common trait of successful researchers.

In 2004, my colleagues at Purdue University, our graduate student, and I, like Forssmann before us, were developing a technique in medical imaging. We desperately needed a high-resolution anatomical picture of a rat brain. Whereas Forssmann had pioneered an experimental method, we were working on an algorithmic one. Every new algorithm requires data for testing—in particular, it requires data for which the “ground truth” is known. An algorithm to differentiate pictures of dogs from cats must be tested on pictures of known dogs and known cats. “Ground truth” for a dogs-vs-cats algorithm would be pictures of dogs that are universally recognized as dogs (and of cats universally recognized as cats).

One typical test would be to see how much blurring of the canonical canine could be tolerated before the algorithm starts to classify it as a cat. When the ground truth is unknown (or even unknowable), one must first create it, that is, “simulate” it. In our case, we needed detailed, ground-truth images of the chemistry of the rat brain (i.e., where the receptor molecules are located), which did not yet exist. But we could simulate them by starting with detailed images of the anatomy of the rat brain. The detailed anatomical images that we sought could be acquired with an MRI scanner—one that was used for humans. Years of work had gone into the creation of our algorithm. All that remained was for us to test its performance on high-quality simulations.
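To make the blur-tolerance idea concrete, here is a minimal sketch of such a test. It is purely illustrative: the “classifier” is a toy stand-in for a trained model, and the function names and thresholds are invented for this example, not drawn from our actual study.

```python
# Toy sketch of a ground-truth robustness test: blur a known "dog" image
# in increasing steps and report when the label flips. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def classify_dog_vs_cat(image: np.ndarray) -> str:
    """Stand-in classifier: calls a sharply textured image a 'dog'."""
    return "dog" if image.std() > 0.1 else "cat"

def blur_tolerance(image: np.ndarray, max_sigma: float = 5.0, step: float = 0.5) -> float:
    """Return the first blur level (Gaussian sigma) at which the label flips."""
    sigma = 0.0
    while sigma <= max_sigma:
        blurred = gaussian_filter(image, sigma=sigma) if sigma > 0 else image
        if classify_dog_vs_cat(blurred) != "dog":
            return sigma
        sigma += step
    return float("inf")  # the label never flipped within the tested range

# Ground truth: an image we know is a "dog" (here, just synthetic texture).
known_dog = np.random.rand(64, 64)
print("label flips to cat at sigma =", blur_tolerance(known_dog))
```

The same logic applies to simulated ground truth: once one has images whose true content is known by construction, one can degrade them in controlled ways and ask when the algorithm’s answer breaks down.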

We knew what we had to do. We did not consult the animal care review committee. We did not check any government regulations. Instead, we persuaded an animal technician to leave a just-sacrificed rat (euthanized properly and painlessly during a separate, unrelated experiment) in the refrigerator one day so that we could re-use it. At night, we transported the rat in a paper bag through the halls of Indiana University Hospital and snuck our “rat-in-a-bag” into the radiology department where we scanned it on an available human MRI machine to create a high-resolution image of the rat brain. (Yes, we sanitized the MRI scanner afterwards.)

Our journal article on what is called “direct reconstruction of PET images” may not be in Forssmann’s league, but in the intervening 17 years it has garnered more than 300 citations (that is, mentions in other scientific papers). Given that the top papers in our field of neuroimaging are cited an average of 82 times over their lifetimes, one could argue that our innocuous subterfuge in the service of efficiency was justified by the ends.

In 1983, a team of scientists at Johns Hopkins University, led by Drs. Henry Wagner and Michael Kuhar, was completing yet another breakthrough study related to medical imaging. The researchers were attempting the first ever scan to image dopamine receptors in the living brain of a primate. Wagner, Kuhar, and their colleagues, who hoped to identify targets for treatment of schizophrenia, were performing a PET (Positron Emission Tomography) scan. The brain’s dopamine receptors are also key players in Parkinson’s disease and in all types of addictions. The primate in question was a baboon. At what must have been the last minute (the publication makes no mention of subject recruitment or approval by an IRB), the team decided to image another type of primate.

At the start of the scan, one of the team members injected Wagner, then head of Nuclear Medicine, with the radioactive molecule (another type of contrast agent) that attaches only to dopamine receptors. This may not have carried the same risk as feeding a catheter into one’s heart or even one’s lungs, but the spirit of self-experimentation by impatient, results-driven scientists was exactly the same. The subsequent paper, published in Science in 1983, ‘Imaging Dopamine Receptors in the Human Brain by Positron Tomography,’ is a landmark achievement in the field of PET imaging of neurochemistry. It has been cited more than 900 times by other publications and quite likely would not have achieved the same visibility or scientific impact had it not contained images of the human (experimenter’s) brain.

Self-experimentation is one example of scientific audacity. It combines commitment to the scientific process with derring-do, and even a bit of showmanship. It serves many purposes. It draws attention to important work, as it did for Forssmann and for Wagner et al. It can also serve educational purposes, to engage students and inspire them, when it is permitted.

In 2006, while an assistant professor of Biomedical Engineering at Indiana University and Purdue, I was teaching a course on medical imaging. My colleagues and I had just been awarded a new grant to image the brain’s dopamine receptors responding to alcohol. To get our experiment up and running, we first needed a few control subjects, that is, healthy volunteers—no alcohol would be administered. In the hope of achieving my “Henry Wagner moment,” and to maximize the moment’s educational value, I decided that I would be the first healthy volunteer, and I would invite my class to observe the scan.

Unlike Werner Forssmann and Henry Wagner before us, my colleagues and I went through all the proper channels and applied for permission from our IRB. It was a spring-semester class, and we applied before the semester began. The review process was slow, tedious, and painful. One reviewer, apparently confusing procedure with understanding, demanded to know, “how would the subject consent himself?” Maybe the reviewer thought I should stare into a mirror and the man in the mirror would question me, “Do you understand what I’m telling you, Evan…?” like a Jimmy Fallon bit on Saturday Night Live. Despite what the reviewer might have thought, it is not the mechanical procedure of asking the subject a question that matters for obtaining consent; it is the fact that the subject understands the risks of participation. It is fair to conclude that the principal investigator already understood them. The review process dragged on so long that the spring semester ended without IRB approval and without the students ever getting to witness a new brain-imaging study at its inception.

The prospects of a creative but impatient scientist being able to act quickly on an inspired idea have dimmed considerably compared to 2006, let alone 1983 or 1929. No experiments—ground-breaking or merely red-tape-cutting—happen fast. A major impediment is the over-regulation of all experiments involving humans, called, rather imprecisely, “clinical research.” The IRB approval process adds considerable drag to the flow of research involving human subjects.

The IRB’s mandate is good and necessary. Its primary function is to evaluate scientific protocols (designs of experiments) for their adherence to the core principles of research with human subjects as laid out in the 1978 document, “The Belmont Report.” The report follows on the Nuremberg Code, written to guide clinical scientists in the immediate aftermath of the Holocaust. (Interestingly, the Nuremberg Code does not proscribe self-experimentation, even when it risks death.) In any case, the IRB is there to check that the investigator has taken all necessary steps to mitigate risk, that the volunteers are properly informed of the risk, that the unavoidable risk is somehow commensurate with the benefits of the research, and that whatever benefits may result from the work will accrue to all populations and not exclude those who are most likely to volunteer.

In practice, unfortunately, the role and reach of the IRB are not so clear. As Robert Klitzman has documented in his detailed examination of the workings of IRBs, The Ethics Police?, there is no uniform training of IRB members and there is little institutional memory. A procedure that was reviewed and approved for one project (i.e., one protocol) is subject to a separate and independent review when the identical procedure is part of a different project. Even worse, when a protocol is amended, the whole protocol may again be subjected to scrutiny. You don’t get to tell the IRB, “Look, all I did was change the wording on page 77, you guys already approved everything else. So, let the rest alone.” As a colleague who uses PET scans at a top university to study Alzheimer’s disease asked in frustration over the need for yet another IRB amendment, “what the *&^% could be the enhanced risk of opening up the recruitment window from 79 to 80+?” Meaning: why did he have to seek IRB permission, with all the delays that usually entails, to widen the age restrictions on his study to include marginally older but otherwise identical patients?

Doubtless, the regulations that codify the central ideas in the Belmont Report protect human volunteers. We know that some researchers will cut corners that should never be cut. Before there was a Belmont Report and attendant government regulations, it took Henry Beecher’s touchstone article in the New England Journal of Medicine in 1966 to expose egregious violations of research ethics. Beecher’s story is engagingly placed into the context of a rapid post-WWII expansion of government-funded science by David Rothman in Strangers at the Bedside. The revolution in clinical research ethics that Beecher ignited by identifying, in the scientific literature, 22 shocking cases of ethical transgressions against human subjects has done much to protect human volunteers. But the pendulum can swing, and has swung, too far away from efficiency and latitude for dedicated but (admirably) impatient researchers. And in that swing toward more regulation and longer review, crucially important conditions for breakthrough science have been lost or starved of fuel. Even the most audacious and impatient researcher can have the enthusiasm for his science beaten out of him.

In my own research using PET imaging of dopamine receptors to study smoking addiction, I have experienced considerable frustrations with the IRB (and, I must admit, a few expedited successes). With the advent of vaping, there are no longer any pure cigarette smokers to be recruited. At the same time, clinical research has suffered mightily under COVID. We need subjects now before our grants run out. So, we must adapt. Or more to the point, we must be allowed to adapt. Like my colleague the Alzheimer’s researcher, I do not see a good reason—based in risk assessment—to seek yet another IRB approval just to alter the wording on our recruitment ads to read, “do you vape or smoke?” Should I take a driving test every time I buy gas at a new gas station?

The IRB has a well-intentioned mission: protect human subjects. It is staffed by well-intentioned people. But in the end, it is a bureaucracy. Bureaucracies only grow; they never shrink. They demand more attention, more resources, more paperwork, and more compliance. They are subject to mission creep. Every interaction with the bureaucracy is protracted, time-consuming, and enervating. Imagine how I felt this year at Yale when I submitted a trivial amendment (one that required a microscopic change to the risk calculation) to switch from the current PET scanner to a newer model and was admonished that if I also intended to use a new MRI scanner, I must follow yet another review process. To be clear, I never said anything in my application about MRI scanners (or space-lasers, for that matter). And yet, I still had to waste time addressing the comment with a formal reply. The questions that come out of left field, the full-board reviews for trivial matters that do not affect risk, all the make-work that comes with a bureaucratized, computerized system have gotten so bad that one of my creative collaborators (he of an episode recounted above) has taken concrete steps to leave science. The costs of an overly regulated science environment are real. There are costs in wasted resources, foregone experiments, missed educational opportunities, and the loss of good people.

Nowadays, I sit on my ideas for longer than I used to. Sometimes I give up on them simply for fear of the protracted review process. Protections for human volunteers are much stronger than they were before Beecher and Belmont. The girlfriend I traveled cross-country to visit, at the cost of some lung cells, is now my wife of 37 years. Many things have been lost. Others have been gained.

Evan D Morris

Evan D Morris, PhD, is a Professor of Radiology and Biomedical Imaging at Yale. Before that, he was on the faculty of Indiana and Purdue Universities.
