The Discomforts of Being a Utilitarian

I recently answered the nine questions that make up The Oxford Utilitarianism Scale. My result: “You are very utilitarian! You might be Peter Singer.”

This provoked a complacent smile followed by a quick look around to ensure that nobody else had seen this result on my monitor. After all, outright utilitarians still risk being thought of as profoundly disturbed, or at least deeply misguided. It’s easy to see why: according to my answers, there are at least some (highly unusual) circumstances where I would support the torture of an innocent person or the mass deployment of political oppression.

Choosing the most utilitarian responses to these scenarios involves great discomfort. It is like being placed on a debating team and asked to defend a position you abhor. The idea of actually torturing individuals or oppressing dissent evokes a sense of disgust in me – and yet the scenarios in these dilemmas compel me to say not only that such acts are permissible, but that they are obligatory. Biting bullets is almost always uncomfortable, which goes a long way toward explaining utilitarianism’s lack of popularity. But this discomfort largely melts away once we recognize three caveats relevant to the Oxford Utilitarianism Scale and to moral dilemmas more generally.

The first of these relates to the somewhat misleading nature of these dilemmas. They are set up to appear as though you are being asked to imagine just one thing, like torturing someone to prevent a bomb going off, or killing a healthy patient to save five others. In reality, they ask two things of you: imagining the scenario at hand, and imagining yourself to be a fundamentally different being – specifically, a being that can know with certainty the consequences of its actions.

The ‘Trolley Problem’

That is, in addition to imagining, say, having a captive who knows where a nuclear device has been hidden, or being in a position to push a fat man in front of a trolley, you also have to imagine knowing that torturing the captive will work, or that the fat man really is fat enough to derail the trolley. But we are, of course, not clairvoyant beings. Every intuition we have about right and wrong, fair and unfair, evolved or was instilled in us as the sorts of creatures that cannot know the future the way we know the present.

So what intuitions of right and wrong, fair and unfair, would we have if we were clairvoyant beings? What if we had access to knowledge about the consequences of our actions in the same way that we have access to knowledge about our current surroundings or posture? It’s difficult to imagine, but it certainly seems reasonable to assume we’d have evolved quite different moral intuitions. I’d say clairvoyant beings would probably have few (or at least fewer) qualms about utilitarianism.

On the other hand, the fact that we aren’t clairvoyant is not an argument against utilitarianism; it’s an argument for why human utilitarians – lacking foreknowledge – probably should not push the fat man or support political oppression. Not in the real world, anyway. As for hypothetical worlds where we are also clairvoyant beings, it should be no surprise that our non-clairvoyant intuitions fail us there.

There is a second way in which these scenarios can be misleading: they ask us to assume that their stipulations – the blunt rules and conditions of the world they require us to imagine – are worth taking seriously. The Oxford Utilitarianism Scale is not particularly guilty of this, but we can see the issue arise in other scenarios where we are asked, say, to imagine a world in which slavery is the only way to maximize overall well-being. The implicit premise is that having a slave (i.e. a highly oppressed person leading an absolutely terrible life) could conceivably create more well-being for the owner than the slave loses: add up all the well-being slave owners gain from having slaves, and it could exceed all the well-being lost by those enslaved.

Is this plausible? It is, but only if you are picturing humans with a fundamentally different psychology from our own – one where being oppressed is not as bad as being an oppressor is good. Applied to people as we actually know them, this simply makes no sense. (If you doubt this, see Greene and Baron’s experiments showing how bad we are – philosophers included – at thinking about declining marginal utility.)
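
To put the stipulated arithmetic crudely (the figures that follow are purely illustrative): suppose a thousand owners each gain g units of well-being from slavery, while a thousand slaves each lose l units. The stipulation requires that 1,000 × g exceed 1,000 × l – that is, that g be greater than l. But given declining marginal utility – an extra luxury adds little to an already comfortable life, while oppression and destitution devastate one – any realistic l vastly exceeds any realistic g. The inequality can only hold for beings whose psychology inverts that relationship.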

We are being asked to apply our intuitions about well-being and suffering to hypothetical people who are wired up with a fundamentally different relationship to well-being and suffering. In other words, the stipulations of some of these scenarios don’t merely ask us to envision them; they often also implicitly ask us to imagine people who experience suffering and flourishing in critically different ways than we do. It should come as no surprise, then, that our moral intuitions fail us in these hypothetical worlds. The good news is that we don’t need to take these scenarios seriously. Some are just silly, failing even to tell us anything relevant about our own implicit beliefs or intuitions.

Finally, there is at least one more reason why utilitarian answers to these scenarios create discomfort: they typically imply that you are a failure. In fact, to be a utilitarian is, to some extent, to lead your life as a failure – and perhaps the worst kind of failure: a moral one. This becomes self-evident when you agree with scenarios requiring you to sacrifice your own leg to save another person, or to give a kidney to a stranger who needs it. You say you would, but you probably won’t be donating a kidney any time soon. By your own standards, you are a moral failure.

We could probably convince our consciences that these extreme actions would ultimately fail to maximize well-being, if only because of the horror toward utilitarianism they would provoke in others. Overall well-being would be better served if we took our psychological limitations into account and didn’t prescribe the sorts of actions that are likely to backfire by making everyone else terrified of the very idea of striving to maximize well-being. Maximization through moderation seems, paradoxically, the way to go.

But even the demands imposed by this curbed utilitarianism are quite burdensome: it still entails radical and uncomfortable changes to our lives – at least for many of those reading this – and most of us consequently won’t make those changes. Yet most of us also feel that we are good people, or at least not particularly bad ones. This self-perception is difficult to reconcile with the moral failure that utilitarianism insists you are. Accepting such a label is a bitter pill to swallow, especially for moral philosophers, who may find it more insulting than any other group would.

Perhaps for this reason more than any other, utilitarianism will probably remain a minority view. And yet, the discomfort of this label can also become uplifting if we change our relationship to what it means to be a moral failure. A moral failure need not be a bad person. They could merely be a person who acknowledges their limitations and strives to fail a little less each day. And hopefully, lab-grown kidneys will soon enough help them rationalize away their greedy desire to keep their extra one all to themselves.


Hazem Zohny is a research fellow in bioethics and bioprediction at The University of Oxford. You can follow him on Twitter @hazemzohny





  1. Paul H. says

    What motivates you to adopt utilitarianism in the first place? Perhaps a utilitarian should abandon philosophy on account of all the trouble thinking causes for oneself! I often hear that utilitarianism is overall intuitive, but in this context intuition presumably means feelings, prejudices, or instinctive reactions. That would seem to be in tension with the goal of having a rational ethical theory.

    I agree that focusing on solving dilemmas/scenarios is wrongheaded, although not for the reasons you state. Yet in this case, I don’t see why the scenarios should be faulted for giving you a defined outcome. Wouldn’t adding uncertainty to the scenario make the analysis even more difficult? If you cannot determine the implications of a moral theory when the consequences are known and specified, then the theory becomes even more useless when the consequences are probabilistic or indeterminate.

    • Paul, a true utilitarian will take everything into account in their calculations, including feelings, instincts and uncertainty. Far from being in tension with rationality, this is what rationality requires.

      Maybe the most utilitarian action is not to focus on utilitarianism in day-to-day life, but instead to use it as a guiding principle for coming up with more practical principles.

      But these are practical concerns that can be handled within the framework. They are not objections to it.

      • Paul Hartyanszky says

        I think you are misreading my concern and conflating the two issues I raised. The first is the justification for utilitarianism. Utilitarians claim that the goal of ethical action is to maximize utility, and for some utilitarians ‘utility’ includes considerations of feelings and instincts. Preference utilitarianism is one such prominent theory.

        Instead, I ask: why adopt utilitarianism in the first place? Why take its utility principle to be more rational than any other theory? What reason is there for accepting that we should always choose the actions that maximize utility, all persons considered? What is it about pleasure/happiness/preferences (or whatever) that says they ought to be chosen?

        This is an important point, for one thing because philosophers are always interested in arguments and reasons. It is all the more relevant because utilitarians themselves greatly emphasize the apparent rationality of their ethics over customary morality and traditional theories involving God, the law of nature, etc.

        Utilitarians might be able to ground the theory in something more fundamental (though in that case the grounding may entail that there are more important things than pure utility, weakening the original theory). Or they might say that it is intuitive or obvious that utilitarianism is true (i.e. that we should maximize utility and minimize harm). Zohny doesn’t go into this, but since he raises the issue of “evolved intuitions”, I wonder whether he thinks they can justify an ethical theory like utilitarianism, or whether they have any significance for moral philosophers.

        My second concern was his criticism of the scenarios for their clearly delineated outcomes. This was not a challenge to utilitarianism at all; it was an argument against his claim that the clairvoyant nature of the scenarios makes our reasoning about them misleading.

        • Hi Paul, it seems to me that happiness is the thing that humans are naturally drawn to. If I’m sitting in an uncomfortable position I shift to something more comfortable. If I do a job that’s unpleasant it’s with a view to increasing my happiness. What more natural thing could there be than to increase the overall sum of happiness?

          (I agree there’s no law of the universe saying we *must* use happiness; but unless one’s an adherent of one of the religions there’s no law prescribing any other basis for morality either.)

  2. NeonCrusader says

    What always seems to me to be missing from the utilitarian thought experiments is the fact that a single human being is not the entirety of the universal moral balance.

    The reason people are instinctively uncomfortable with the notion of, say, killing their own child to save a million other children, or pushing the fat man on the trolley, is that it asks of the individual that he destroy himself utterly (or at least risk a lot and commit great cruelty) in the name of balancing a cosmic morality scale.

    This is utterly incoherent, since everyone is but a single individual consciousness, and no one wants to consign themselves to a horrible fate in the name of abstract calculations of collective right or wrong.

    I feel it is much better to consider morality from a more individual standpoint, where the question asked is more “What can you personally do to make existence better for yourself and others, and not worse?” rather than the purely utilitarian “Do whatever it takes to make the entire world ultimately have a better coefficient of good than bad.”

  3. Excellent piece, Hazem, and courageous in its own humble way in standing against the tide of the many baroque strains of conservatism that attract all too much support in this age of facile populism.

    While utilitarianism, like any other broad ethical doctrine, may be absurd at its far edges, it remains closest at its core to embodying the basic humanitarian injunction of placing equal value on each and every human life. Moreover, it’s serviceable: with the most minor of repairs, as cogently argued in this piece, it emerges as closest to many of our deepest moral intuitions and aspirations. And it is, of course, the fount of classical liberalism – at this moment the leading certified antidote to the authoritarianism, paranoia, and preposterous programmes of both the far left and far right.

    Well done, both Hazem, and, as always, Quillette.

  4. Thanks for the interesting article. Some assorted thoughts.

    Regarding the first caveat, this seems to be an argument in favour of rule utilitarianism. As you say, humans are not omniscient. Our judgement is not all that reliable. Summing the total impact on human happiness is a tough job in the best of cases, let alone when the trolley is speeding down the track and there are only seconds to make a choice. So rule utilitarianism does the hard work for us. Further, rules give us security and predictability, which have inherent value in themselves. One thing I find interesting (if I’m reading it correctly) is that the Oxford Utilitarianism Scale appears to rate rule utilitarianism as less utilitarian than classic utilitarianism. Is this fair (or, more importantly, useful)? It seems to me that in practice, classic utilitarianism is unlikely to produce much increase in human happiness.

    (I appreciate that rule utilitarianism has its own drawbacks. I’d lean towards a hybrid – rule utilitarianism in most cases but act utilitarianism where the rule is clearly flawed.)

    As for utilitarian choices feeling uncomfortable, I guess this comes down to us being wired in somewhat contradictory ways. We are naturally selected to be good at passing our genes on to future generations. This involves a combination of selfishness (so our own, specific genes survive) and concern for others (so the collective genes survive, to the benefit of all including our own descendants). So a moral system that prioritises the common good may seem uncomfortable if it clashes with our selfish interests (cutting off our own leg) or our concern for others (pushing someone in front of the trolley). Is this not just natural? “Uncomfortable” is clearly not the same thing as “immoral”.

    I don’t think that writing the extreme scenarios off as silly is necessarily helpful. How a philosophy stands up to extreme cases is a good test of it. And is calling scenarios “silly” just a way of avoiding the slightly-less-extreme-but-still-difficult ones? Utilitarians still have to decide whether it’s OK to bomb a wedding party or torture suspected terrorists. We’re not actively pushing people under trams, but we are, as societies, making collective decisions that may kill people. If the Oxford Utilitarianism Scale uses scenarios that are silly, then perhaps the test is flawed?

    (My Utilitarian score: 48 out of 63)

    • Hazem Zohny says

      Thanks for the thoughts, Mika. The silly thought experiments I have in mind are not the ones depicted in the Oxford scale, but the ones that ask you to imagine a case of, say, extreme, oppressive inequality AND one where that inequality is the only way to maximize overall well-being. An example is the slavery case used here. It’s worth questioning the stipulations of such scenarios, especially when they implicitly posit a version of a human being with a fundamentally different relationship to well-being and suffering. These particular thought experiments are misleading, though unintentionally so, I think.
