Utilitarianism’s Missing Dimensions

In 2001, Joshua Greene and colleagues published a report in Science that helped turn a once-obscure philosophical conundrum involving trolleys into a topic of conversation at scientific conferences, philosophical meetings, and dinner tables across the globe. The report used fMRI technology to probe what is going on in the brains of research subjects when they are faced with hypothetical ethical dilemmas represented by two classic scenarios. In one, subjects are asked if they would be willing to pull a lever to divert a trolley onto a track on which one person is standing, if doing so would prevent the death of five people standing on the track of the trolley’s current trajectory. In scenarios like this one, where there is no direct physical contact between the person taking the action and the person being sacrificed, most subjects say it would be ethically appropriate to sacrifice one to save five. In the second scenario, subjects are asked if it would be appropriate to push a strange man off a footbridge onto a track, if his death would stop a trolley hurtling toward five other people. Famously, it turns out that in scenarios like this one, where there is direct physical contact between the person taking the action and the person being sacrificed, most subjects say it would not be appropriate to push the strange man off the footbridge to prevent the death of the five.

According to the ethical theory of utilitarianism, refusing to push the strange man is unethical. In both scenarios, the ethical action is to sacrifice one to save five. Based on what Greene and his colleagues observed when subjects were faced with the two sorts of scenarios in an fMRI machine, they concluded that activity in the emotional part of the brain was getting in the way of most subjects making the right utilitarian judgment. Pushing the strange man engaged emotions in a way that pulling the lever did not. After the publication of that report, sacrificial dilemmas of this sort started to get so much attention that it began to seem that a willingness to do harm to others was the core of utilitarianism.

It is that picture of utilitarianism—with a willingness to do harm at the core—that a major new paper seeks to correct. The paper, “Beyond Sacrificial Harm: A Two-Dimensional Model of Utilitarian Psychology,” appears in Psychological Review. Its authors (Guy Kahane, Jim A. C. Everett, Brian D. Earp, Lucius Caviola, Nadira S. Faber, Molly J. Crockett, and Julian Savulescu) are philosophers and psychologists, who are all sympathetic to utilitarianism. They succeed marvelously in revealing the woeful incompleteness of a picture of utilitarianism that so prominently features the willingness to do harm. But in giving a more complete picture of utilitarianism, I will argue, they inadvertently remind us of why utilitarianism alone cannot provide anything like a complete picture of human well-being.

The authors of the new paper are now, or have recently been, at Oxford. Joshua Greene, the lead author on the 2001 paper I mentioned above, who is so closely associated with the picture of utilitarianism that the authors of the new paper aim to correct, is now at Harvard. For the purposes of this discussion, I will refer to those who seek to set the record straight as "the Oxfordians" and to those who have created the impression of utilitarianism that needs correcting as "the Harvardians."

Missing the “Impartial Beneficence” Dimension of Utilitarianism

In a word, the Oxfordians argue that, whereas utilitarianism in fact has two key dimensions, the Harvardians have been calling attention to only one. A significant portion of the new paper is devoted to explicating a new scale they have created—the Oxford Utilitarianism Scale—which can be used to measure how utilitarian someone is or, more precisely, how closely a person’s moral decision-making tendencies approximate classical (act) utilitarianism. The measure is based on how much one agrees with statements such as, “If the only way to save another person’s life during an emergency is to sacrifice one’s own leg, then one is morally required to make this sacrifice,” and “It is morally right to harm an innocent person if harming them is a necessary means to helping several other innocent people.”

According to the Oxfordians, while utilitarianism is a unified theory, its two dimensions push in opposite directions. The first, positive dimension of utilitarianism is “impartial beneficence.” It demands that human beings adopt “the point of view of the universe,” from which none of us is worth more than another. This dimension of utilitarianism requires self-sacrifice. Once we see that children on the other side of the planet are no less valuable than our own, we grasp our obligation to sacrifice for those others as we would for our own. Those of us who have more than we need to flourish have an obligation to give up some small part of our abundance to promote the well-being of those who don’t have what they need.

The Oxfordians dub the second, negative dimension of utilitarianism “instrumental harm,” because it demands that we be willing to sacrifice some innocent others if doing so is necessary to promote the greater good. So, we should be willing to sacrifice the well-being of one person if, in exchange, we can secure the well-being of a larger number of others. This is of course where the trolleys come in. We should (granting certain stipulations and background conditions that may or may not be all that plausible in real life) be willing to push a strange man off a footbridge to stop a trolley barreling down the tracks, if doing so would save the lives of five others who would otherwise be killed. It is this second, negative dimension to which, on the Oxfordian account, the Harvardians have been calling too much attention.
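The arithmetic behind a two-dimensional instrument like the Oxford Utilitarianism Scale is simple enough to sketch. The snippet below is only an illustration of how subscale scoring of this kind typically works: the item indices, groupings, and the 1-to-7 agreement format are my assumptions for the example, not the published scoring key.

```python
# Illustrative sketch of two-dimensional subscale scoring for a
# Likert-type instrument such as the Oxford Utilitarianism Scale.
# Item groupings and the 1-7 response format are assumptions made
# for this example, NOT the published scoring key.

def subscale_scores(responses, item_groups):
    """Average Likert ratings (e.g., 1-7) within each named subscale."""
    scores = {}
    for name, items in item_groups.items():
        ratings = [responses[i] for i in items]
        scores[name] = sum(ratings) / len(ratings)
    return scores

# The two dimensions the paper describes: impartial beneficence and
# instrumental harm. Item indices here are invented for illustration.
groups = {
    "impartial_beneficence": [0, 1, 2],
    "instrumental_harm": [3, 4, 5],
}

# One hypothetical respondent's agreement ratings
# (1 = strongly disagree, 7 = strongly agree).
ratings = [6, 5, 7, 2, 3, 2]

print(subscale_scores(ratings, groups))
```

On a profile like this one, a high impartial-beneficence average paired with a low instrumental-harm average would mark someone who embraces impartial self-sacrifice but balks at sacrificing others, which is exactly the kind of pattern a single "sacrificial dilemma" measure cannot distinguish.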

The Harvardians fully recognize that the negative dimension runs counter to the intuitions of most people; on their view, those intuitions are precisely the problem. Most people rely on intuition or emotion when it comes to morality. But moralities rooted in intuition and emotion developed tens of thousands of years ago, when we lived in small tribes and needed to defend ourselves against other tribes competing for scarce resources. There is a potentially catastrophic misfit between the emotion-based tribal morality we developed in the distant past and the reason-based cosmopolitan morality we need today. Once upon a time a refusal to harm those who are close to us served us well, but it no longer does.

Since the publication of Greene’s 2001 report in Science, he and other Harvardians (such as Steven Pinker and Paul Bloom—who is at Yale) have been building the case that, whereas people who fail to reach the right utilitarian judgments are relying on the ancient, “emotional part” of their brains, people who succeed in reaching the right judgments rely on the more recent, “reasoning part.” The Harvardians have been creating a rather flattering picture of utilitarians, which depicts them as the few who can discern what reason dictates to be ethical.

But within a handful of years after the report in Science, new fMRI studies began to appear with far less flattering ramifications. It turned out that the fMRI profiles of people who reached what Greene and his collaborators were labeling "utilitarian judgments" resembled, in some ways, the profiles of people who exhibited clinical and sub-clinical psychopathy. That is, the profiles of the utilitarians bore an unflattering resemblance to those of people with antisocial tendencies and a reduced concern about harm to others.

So, by articulating two dimensions of utilitarianism in their new paper, the Oxfordians are able to show why that unflattering resemblance is misleading. Yes, with regard to the second, negative dimension, the brains or at least mental tendencies of psychopaths and utilitarians might in some ways resemble each other. But, the Oxfordians cogently argue, utilitarianism is far more than a willingness to do harm to individuals. It is also, and far more importantly, a capacity to be motivated by what they call “empathic concern” for all. It is a capacity to, out of a commitment to impartiality, benefit others, regardless of their proximity to us in space or time. Psychopaths might be happy to sacrifice others, but they don’t show anything like a commitment to the sort of “impartial beneficence” that the Oxfordians say is the true core of utilitarianism.

The Oxfordians do not only paint a more flattering and complete picture of what utilitarianism is. They also offer a more realistic picture of how human beings come to endorse one or more dimensions of any ethical theory. Unlike the Harvardians, who suggest that an unusually large rational endowment is what enables people like them to endorse the dimension of utilitarianism that requires doing harm to others, the Oxfordians allow that how much any individual endorses one dimension or the other is at least partly a function of emotion or affect. As they say, their findings suggest that both dimensions of utilitarianism "are largely driven by affective dispositions rather than explicit reasoning."

Indeed, in this paper, the Oxfordians begin to sound a lot like Jonathan Haidt, who has long argued that it isn't just religious kooks and deontological philosophers whose ethical views are infused with affect. The Oxfordians write: "It is likely that both attraction to, and rejection of, explicit ethical theories is driven, at least in part, by individual differences in … pre-theoretical moral tendencies." This explicit recognition that affect plays an important role in determining which dimension of utilitarianism, or, for that matter, which ethical theory, one embraces is a radical and welcome departure from the picture that the Harvardians have been painting.

Missing the “Partiality” Dimension of Well-Being

In addition to giving a more complete picture of what utilitarianism actually entails, the Oxfordians also help solve what can look like the great puzzle represented by our most famous living utilitarian, Peter Singer. Mindbogglingly more than most of us, Singer sacrifices some of his own well-being to promote the total amount of well-being in the world. As a father of animal liberation and a champion of vegetarianism, he sacrifices the pleasure of eating animals. As a father of effective altruism, he gives at least 10% of his annual income to those who are less fortunate than he is. He is plainly full of empathic concern, and extraordinarily high on the first dimension of utilitarianism.

And he is equally high on the second dimension, which demands a willingness to sacrifice some human beings for the sake of what he understands to be the maximization of the well-being of a greater number. In theory, he is eager to endorse the rightness of pushing the strange man off the footbridge. And in the real world, he is willing, or at least unperturbed enough, to argue on behalf of the option to euthanize infants with profound cognitive disabilities, insofar as doing so would increase overall well-being.

So what can seem contradictory—an abundance of empathy cheek-by-jowl with the appearance of its lack—comes into view as a sign of his consistency. Along the positive and negative dimensions, he is doing what is required to increase aggregate well-being, as he understands well-being.

But in showing the way in which the contradiction is only apparent, the Oxfordians inadvertently bring our attention to what utilitarianism leaves out of its account of well-being. From what Singer takes to be the point of view of the universe, babies with profound cognitive disabilities do not have the self-awareness—the sense of being a subject with a past and a future—that he thinks is necessary to be a person. He is of course aware that many parents take those babies to be persons. He is aware that many parents fall deeply in love with their children with such disabilities, and that many of them say that being in relationship with those children is one of the great gifts of their lives. Singer's response, however, appears to be that those parents are in some sense making a mistake. They think their child with profound cognitive disabilities is a person, but in reality "it" is not. On his account, emotion clouds what reason makes crystal clear.

His response, however, fails to take sufficiently seriously that love is one component of human well-being, and that, contrary to his deepest intuitions, love can grow in places that seem impossible to him. We have, after all, evolved the capacity to love human beings who are close to us, even when they lack the cognitive capacities that he thinks must be in place to be a person. This capacity, call it "partiality," is not a defect in the systems that we are. It is a constitutive feature. It is not parents who make a mistake when they love their children, independent of their cognitive capacities. Nor are parents making a mistake when they love their own children more than other people's children. It is utilitarians who make a mistake when they fail to take sufficient account of the fact that such partiality can be an essential component of human well-being.

Impartiality and Partiality

The Oxfordians are right to insist that utilitarianism has at its core a profound and beautiful insight regarding our capacity and obligation to be impartial—an insight that has of late been obscured by the Harvardians’ focus on the willingness to do harm to achieve more good overall. From the point of view of the universe, none of us is worth more than another. Or—and it is to the credit of the authors of the new study that they acknowledge the continuity—as some religious people would put it, all human beings are made in the image of God.

But a more complete picture of human well-being needs also to honor the point of view from which animals like us experience one of the most meaningful features of our lives. From the point of view of the person, those who are close to us can matter more than those who are far away. It is with those who are close, those in our given or chosen families, that, if we are lucky, we experience love. Failing to acknowledge the centrality of intimate relationships to human well-being can begin to look like one of the infections that can afflict the religions that utilitarians disdain. It can begin to sound like a self-hating form of asceticism, a radical eschewal of the needs that we embodied animals have developed over deep evolutionary time.

[Image: A family fishing at the Sea of Galilee, Israel]

Yes, of course, that evolved capacity for attachment to those who are close to us—that partiality—is also at the root of the tribalism that wreaks so much misery. The Harvardians deserve kudos for elucidating the dangerous and ugly underside of our partiality. But to fail to allow for the way in which that partiality is also constitutive of what is best in our lives is a mistake.

Thanks to the Oxfordians, we now have a much more complete picture of utilitarianism. Perhaps, though, it is time for them to contemplate a more complete picture of well-being. Given how masterfully they have handled the importance of, and the tension between, the positive and negative dimensions of utilitarianism, perhaps next they will reflect more seriously on the importance of, and tension between, what we might call two dimensions of human well-being. Such a picture would draw our attention not only to the value of impartiality, but to the value of partiality, too.


Erik Parens is a senior research scholar at The Hastings Center, a bioethics research institute in Garrison, New York, and is the author of Shaping Our Selves: On Technology, Flourishing, and a Habit of Thinking.


  1. Instead of tossing someone else off a bridge, why isn’t it framed as flinging one’s self off the bridge into the path of the trolley? Cut out the middle man. Then see just how empathetic people are when it’s their own life they willingly sacrifice for others. If the rate of willingness changes, all you really knew before was how many people could kill someone to save someone. That’s no real sacrifice at all.

    • I think you bring up a great point, Rick. It is ironic that someone who considers himself a serious moral thinker spends his time debating what the appropriate sacrifice of other people should be. Not only are the situations absurd and of no practical use, they completely miss the brilliant revelation of Judeo-Christian thought: the most important battleground for morality lies within the individual and his ability to sacrifice toward a greater good. These utilitarians pontificate about what they would like other people to sacrifice, conveniently neglecting their own hypocrisy and responsibility. Listening to them, you would think that humanity’s largest problems and moral errors came from the inability to coldly (and psychotically, as the research showed) shove someone into a moving train, rather than each individual’s failure to strive and sacrifice for the greatest good they can imagine. Like I once heard someone say, “you have to be pretty smart to be this stupid.”

      • “the brilliant revelation of Judeo-Christian thought: the most important battleground for morality lies within the individual and his ability to sacrifice toward a greater good”. Anyone who thinks that is brilliant, a revelation, or unique is remarkably simple, ignorant, or infected with the gushing affliction of the religious mindset. One day I’d like to see something honest, insightful, and interesting come from a religiously motivated mind. That, or for them to keep their religion to themselves.

        • “One day I’d like to see something honest, insightful, and interesting come from a religiously motivated mind.” There’s plenty out there for you to read. Try some C.S. Lewis, for instance. Also, if you think someone is wrong, you should explain why. It’s clear from your haughty tone that you don’t have the humility that comes from contending with strong arguments, not to mention the intellectual laziness of arguing a negative and not providing your own argument for critique. A deep breath and some more reading would do you some good.

          • “If you think someone is wrong, you should explain why”. Some things are obvious and don’t need explaining. The falsity of the claim of brilliance, revelation, and ownership by Judeo-Christian thought of the idea that morality lies in an individual’s ability to sacrifice for the greater good is one such thing. If we’re trading accusations of breadth of reading, then the idea that there’s a strong argument there suggests you’ve not come across many. In fact, there is no argument. It’s an assertion. Some things really do earn dismissal. The onus, here, is not on me to do the work of demonstrating the falsity; the onus is on the one making the blatantly flimsy, grandiose assertion.

            I’ve had my fill of religious thought, thank you. And until the religious reveal some intellectual integrity, I’m not going back to reading religious thought, no matter how many times C.S. Lewis is recommended.

    • The reason why it isn’t framed that way is because people are more likely to say that they would be willing to self-sacrifice even if it is not true, because it makes us look/feel good (social desirability). Posing the problem where you have to “look bad” either way is supposed to make the answers more believable.

  2. David Pittelli says

    One difference with the notion of pushing a man in front of a train to save 5 people is that it is less clear that killing the 1 will indeed save the 5. The track-switch alternative, by contrast, if it kills the 1, will certainly save the 5. This seems at least as relevant as proximity. If proximity is the real issue, then we should be willing to shoot the 1 man who will fall on the track to save 5.

  3. Ruslan says

    I believe there’s a mistake in the article. The “from” should be replaced with the “ONTO” in the following sentence.
    “In one, subjects are asked if they would be willing to pull a lever to divert a trolley FROM a track on which one person is standing, if doing so would prevent the death of five people standing on the track of the trolley’s current trajectory.”

    Maybe it’s just me, but the phrase in its current form does not make much sense.

    Despite that, thank you for the thought-provoking article.

  4. I thought the original dilemma involved pushing a fat man off the bridge, the assumption being that his body could stop the trolley while your own less massive frame would not, hence self-sacrifice isn’t an option.

  5. This seems at least as relevant as proximity. If proximity is the real issue, then we should be willing to shoot the 1 man who will fall on the track to save 5.

    Temporal proximity might be more of an issue than spatial.

    If I had the chance to go back and kill Hitler just before he came to power I wouldn’t hesitate. But if I had the chance to kill 5 year old Hitler even knowing what he’d do? That’s not so easy.

    Say you can throw the fat guy off the bridge right now or let the trolley run on another 15 minutes before it kills the other 5 guys. That strikes me as a harder decision than killing one man to prevent a greater immediate tragedy.

    That’s more like killing an innocent man because there are 5 people who need his organs.

  6. I think it’s hubris to think you can predict that your actions will, with 100% certainty, result in the saving of some lives by hurting or killing others. (You are not omniscient.) Therefore the most moral action is to do no harm to innocents. This is not applicable to police, etc., which brings up another point: if a majority decided that a minority group had to die or they would kill themselves, surely no one would support the majority?

  7. augustine says

    Utilitarianism appears to be another expression of hyper-egalitarianism. Together with “impartiality” they help form an idea that reason and rational thought are our supreme mental and ethical modalities. This view is challenged all the time as well it should be.

    How would this ethics apply in peacetime versus wartime? This is an essential consideration. We can, as moral and atheistic beings, decide very rationally and based on cold reasoning of threat assessment, to destroy another country or large population by whatever means. Not only can we do this, utilitarianism says (if I have it right) that we are _obligated_ to do this if we believe this action is certain to result in more lives being saved (probably ours). In ethical, theoretical terms is there any difference in considering pushing someone off a platform versus pushing a button and killing millions? Would this scenario be more terrible if it were carried out by a regime informed by religion?

    “It demands that human beings adopt “the point of view of the universe,” from which none of us is worth more than another.”

    The point of view of the universe is not and cannot be the human view of things. We are, as the author notes, reliant on both partiality and impartiality for our essential progress. None of us values all other humans equally, even if we would cast aside any prejudice momentarily to e.g. save someone from a burning car.

    “It is a capacity to, out of a commitment to impartiality, benefit others, regardless of their proximity to us in space or time.”

    Who is committed to impartiality exactly? That really does sound like psychopathy. What we need is greater commitment by individuals and groups to concepts of mercy, charity, and so on. Obviously this is subjective and will vary from place to place, but why is that a problem? Does the idea of an uneven ethical standard across the world really bother some folks? Why is that?

    The emphasis placed by utilitarianism on impartial, unbiased ethical decision-making and actions is concerning in its implications for how its proponents seem to feel we should or must treat one another. It is a universal philosophy as well and demonstrates that scientism is a force, in some instances, at work against humanity in pursuit of Utopia.

  8. I don’t want to get into too many spoilers but there’s a variation on the trolley problem in the latest series of Sky Atlantic’s The Tunnel and it’s the Aspie character who comes down in favour of the utilitarian POV.

    • Leon says

      Any data on matching Utilitarian and Aspie personalities? Seems a nice match – they both get “psycho” chucked at them, even.

  9. It’s got to suck if you toss the fat guy off the bridge only for the 5 guys further down the track to step off the tracks just in time.

    I’d like to see utilitarianism correlated against optimism and pessimism. My gut tells me that optimists might be more reluctant to toss off the fat bloke because that has a finality about it and they still hold out hope that some other event might intervene before the other five guys are hit.

  10. Lee Moore says

    One of the many reasons that you should never turn your back on a utilitarian is their manic confidence in their ability to predict the consequences of their actions, computed across all persons (and, it appears, animals) across all time. If this is not psychopathy it’s certainly psychosis.

    1. Make your moral decisions parochially. You have a much better chance of working out whether you’re going to do some good.

    2. Don’t be ashamed of declining to stick your oar in if you’re not confident of what to do. Every time you interfere in other people’s lives uninvited, you chip away at their liberty.

    3. Be loyal to your family and friends. People who purport to care about everyone in the world equally aren’t lying. They don’t actually care about anybody. People are undifferentiated tokens to them. It’s a short step from thinking of humans as tokens to ploughing their skulls into the fields in pursuit of utopia.

  11. Neither side has any foundations on which to construct their rationalistic moral structures….none whatsoever. This type of rationalism tends to result in totalitarian govts…..even those “controlled” by the people. I would not want to live in a country where the govt. has the power to sacrifice me for the greater good…..whenever it seems to them that it is the moral thing to do.

  12. Answers to hypothetical questions are worth the pixels they consume, and not a byte more.

  13. Utilitarianism seems to rest on the idea of a social-welfare function, in which the well-being of persons can be summed to arrive at a grand total of “happiness” or “non-misery” or something like that. If you believe in such a thing, as utilitarians do implicitly, consider the situation wherein A gains great pleasure from killing people in a way that causes them no suffering. Do you suppose that the value of the social-welfare function rises every time A kills someone? If you do, are you willing to be one of A’s victims?

    • DiscoveredJoys says

      Utilitarianism also seems to include a ‘greater numbers of people is better’ axiom by default. *If* you believed that the world was overpopulated (and surviving people and animals would have a greater overall happiness if there were fewer people) then it would be more ‘utilitarian’ to allow the trolley to kill the greatest number. And if that meant that you didn’t have to throw the switch or push the fat man, so much the better for you.

  14. Nicholas Conrad says

    For a guy who thinks people should be willing to unquestioningly cut off their own limbs and commit murder to promote the welfare of strangers, 10% seems like a pretty stingy portion of his income to devote to charity. How many lives could the next (and the next, etc.) 10% of his income save?

  15. Excellent piece. I find we forget these moral frameworks are imperfect models. They help us understand situations and ourselves; we can also use them as tools to guide action. But none of the models is perfect or complete, so it is better to view them as helpful tools in certain situations and unhelpful in others, rather than to argue that any one of them is the ultimate and final truth.

  16. The trolley thought experiment is itself the problem: it’s unhelpful because it makes you focus on “lives saved,” but that’s not really what our brains are doing. Here’s the key: brains calculate suffering (and it’s always MY suffering, which includes MY suffering when I believe others are suffering). So, forget about lives; it’s always about my suffering. Now, the problem is our brains are very limited at calculating suffering. That’s the real problem.
    Why can’t I push the fat guy? Probably my brain is suffering calculating those seconds when the guy is falling, terrified.

  17. ccscientist says

    The willingness to sacrifice others “for the greater good” is at the root of communism’s horrors and shows up clearly in Singer’s willingness to snuff out disabled children. I would propose that we must reject this option on the basis of our inability to ever take God’s point of view as to what is truly the greater good. By pushing the fat guy in front of the trolley, perhaps the trolley derails and kills 10! As soon as we are willing to sacrifice others, we lose our humanity, we become killers, and this is not justified by the greater good except under the most extreme circumstances (a man with a hostage has his finger on the nuclear bomb and we must kill both to stop him). Even under extreme circumstances it behooves us to find another way. Rather than kill the defective babies, perhaps we can heal them. After all, we are all defective in some way compared to perfection.
    I also find it interesting that some (many?) environmentalists believe they know what “the universe” wants, and that it wants fewer of us troublesome people, so they are willing to sacrifice lots of people to benefit the greater good (gaia)–thus what you value determines who you are willing to sacrifice and utilitarianism fails again.
