Hypothesis

The Mathematics of Happiness

If you could choose between these two possible lives, which would you prefer?

(1) You struggle through difficulties, challenges, and a fair amount of misery for 60 years, after which you emerge wise and content. You die at age 80.

(2) You spend the first 60 years indulging in relatively shallow pleasures. At age 60, you develop a chronic health condition where you can no longer do many things you enjoy, and you spend the last 20 years of your life regretting that you didn’t do anything very meaningful with the first 60.

A conundrum along these lines came up during a recent episode of Sam Harris’s podcast [1], with guest Jonathan Haidt. I am a big fan of these two — their recent books, Waking Up and The Righteous Mind by Harris and Haidt respectively, were both life-changing for me, so I had been looking forward to the episode for a long time. In the podcast (starting at 1:30:48), Haidt used an example like the one above to argue that utilitarian conceptions of the good life (ones with more happiness than suffering), for which Sam Harris has argued [2], are overly simplistic.

It’s not just the area under the curve.

According to Haidt, when he asks this question in class, most people choose life (1) over life (2), even though life (2) has a greater proportion of good years. I think I’d agree, but it isn’t totally obvious, and would depend on the intensity of the positive and negative experiences in the two lives. Famed psychologist Daniel Kahneman has explored the idea that there can be a big difference between how an experience feels in the moment (by the “experiencing self”) and how it is remembered later (by the “remembering self”). We fondly remember vacations that were, in reality, highly stressful. I have mostly fond memories of my mother, even though daily interactions with her were usually difficult due to her personality disorder. However, in this article I will argue that this example does not rule out theories where the total value of a life is calculated by adding up the values of the moments it contains, and that there are good reasons why addition might appear in any reasoning about these issues.

It may sound a bit crass to suggest that addition, or indeed any mathematics at all, can be applied to this question. However, fortune favours the brave. We can always try and see what we come up with. First, we need to understand why addition is useful in so many areas. Why do two apples combined with three apples result in five apples? I had never considered such “obvious” questions until I decided to dig into the work of my friend and colleague Kevin Knuth (no known relation to Donald) on the foundations of physics, beginning with an essay entitled “The Deeper Roles of Mathematics in Physical Laws” [3]. In 1946, Richard Cox derived the rules of probability based on the desire to describe plausibility with numbers, in a way consistent with standard logic [4], uniting almost all scientific reasoning in a single mathematical structure [5]. The research of Knuth and colleagues uses similar reasoning in various domains.

Here are some primary-school level examples of addition. If I have a collection of two apples, and I combine them with a different collection of three apples, I end up with a collection of five apples (since 2 + 3 = 5). If I walk 500 metres and then walk 200 metres, I walked a total of 700 metres. So far, so obvious.

What do these examples have in common? One feature is the idea of combining objects to make compound objects. A bunch of apples combined with another makes a new bunch; a walk followed by a second walk can be thought of as one walk. In many applications, combination is associative: combining a with the combination of b and c gives the same result as first combining a with b, and then combining the result with c. Mathematically, this can be written using a notation where "a v b" means "object a combined with object b":

Associativity

a v (b v c) = (a v b) v c

Say we want to assign numbers to different objects, to describe something about them. For example, I might want to quantify the size of collections of apples. One option would be to assign numbers arbitrarily: perhaps the collection of apples called “a” has value 1.432, collection “b” has value 8.322, and collection (a v b) has value -21.05. Of course, you can always do this if you want, but it’s unlikely to lead to a particularly useful theory. However, we can start from ideas like this and make further progress by imposing constraints on what kinds of assignments make sense for the application at hand.

One constraint is that the values we assign must be consistent with the associativity property. Since a v (b v c) and (a v b) v c are the same thing, the value we assign to each must be the same. Another important constraint is order. Thinking about the apple example, suppose I have three collections a, b, and c. If a is bigger than b, then the combination of a with c is bigger than the combination of b with c. The idea of “a being bigger than b” is retained even as we combine a and b with other things. This is the order property. Formally, it can be written as:

Order

If Value(x) > Value(y), then

Value(x v z) > Value(y v z) and Value(z v x) > Value(z v y)

Amazingly, associativity and order together effectively imply that values must add: any consistent assignment of values can be regraded (rescaled by a monotonic transformation) so that the value of a combination is the sum of the values of its parts [6].
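As a sanity check, we can build a toy model of these two properties (this is my own illustration, not the derivation in [6]): represent an object as a tuple of component values, let combination be concatenation, and let Value be the sum. The sum then automatically respects both associativity and order.

```python
def combine(a, b):
    """Combine two objects (tuples of component values).

    Tuple concatenation is associative, like the 'v' operation in the text.
    """
    return a + b

def value(x):
    """Assign a number to an object by adding up its components."""
    return sum(x)

a, b, c = (1.0, 2.0), (3.0,), (0.5, 4.0)

# Associativity: a v (b v c) gives the same object as (a v b) v c,
# so it is automatically assigned the same value.
assert combine(a, combine(b, c)) == combine(combine(a, b), c)

# Order: if value(x) > value(y), combining each with z preserves the ranking.
x, y, z = (5.0,), (2.0,), (1.0, 1.0)
assert value(x) > value(y)
assert value(combine(x, z)) > value(combine(y, z))
assert value(combine(z, x)) > value(combine(z, y))
```

The additivity is visible directly: the value of a combination is always the sum of the values of the parts.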

We can apply this insight to the question of assigning values to possible lives. A simple model of lives is that they are built by combining moments together. The moments themselves can be thought of as very short lives. Here are three possible lives. The first consists of being tortured for a moment, and that is all. The second consists of a momentary experience of the taste of ice cream, and that is all. The third consists solely of a moment that feels like sitting on the couch watching TV.

Suppose we decide the value of torture is -1000, the value of ice cream is 200, and the value of couch is 50. Then the value of a life consisting of all three must be (-1000 + 200 + 50) = -750. The values I chose above were somewhat arbitrary, but model the fact that the ice cream moment is better than the couch moment, which is better than nonexistence (which would have value zero, so that adding more of it does nothing), which is better than the torture moment.
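The arithmetic is trivial to express in code. The names and numbers below are just the illustrative values from the paragraph above, not claims about real lives:

```python
# Values of the three momentary lives from the text (arbitrary, but ordered:
# ice cream > couch > nonexistence (value 0) > torture).
moments = {"torture": -1000, "ice_cream": 200, "couch": 50}

# The value of a life containing all three moments is the sum.
life_value = sum(moments.values())
print(life_value)  # -750
```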

Does the arbitrariness of the assigned values negate the usefulness of the sum rule? No. Similar “problems” come up in other quantitative sciences, and we can use analogies with these to clarify the matter. For example, consider probability theory, which is a mathematical model of reasoning with uncertainty. Using probability theory ensures your logic meets certain minimum standards of consistency, but apart from that, the exact numbers you use are up for debate.

Physicist John Skilling summarised the situation with the phrase, "the language is fixed but the content is free". Even laws of physics have this character. For example, Newton's law of universal gravitation tells us how objects will move over time from an initial state, but it doesn't tell us what that initial state is; we have to get it from somewhere else, such as measurements, or we can simply postulate it in order to see what the consequences would be.

Now we can return to the question I opened the article with. The total value of life (1) can be calculated by adding up the values of its constituent moments. Since both lives divide naturally into 20-year segments, we can think of "moments" that last for 20 years. To avoid worrying about the order in which the segments occur, we can absorb the fact that the last segment of life (2) involves experiencing memories of the first three segments into the very definition of that segment. We might assign values to the life segments as follows:

Values for life 1 = {10, 10, 10, 90}

Values for life 2 = {50, 50, 50, 20}

The totals are 120 and 170 respectively, so under these assumptions, life (2) is better. If you disagree with the assignments, that’s legitimate, but the addition rule still holds. The only way to escape it is to deny order or associativity. In fact, Harris pressed this very point in his podcast, by asking Haidt whether it would be better to improve your well-being at the expense of “some little girl somewhere”, or in a way that also benefits her. Haidt, of course, preferred the second course of action, which is exactly what you’d expect from the order property.

An alternative set of values might be:

Values for life 1 = {20, 20, 20, 95}

Values for life 2 = {40, 40, 40, 20}

With these assignments, the total values of the lives are 155 and 140 respectively, and life (1) wins, in accordance with Haidt’s psychology students. It can be hard to justify the exact numbers assigned to the moments. How much better is ice cream than being tortured? How much better is contentment than intoxication? Would contentment even be better at all if getting drunk didn’t cause hangovers and other health effects? But progress can be made. Economists routinely try to infer people’s preferences from their behaviours, and I agree with Sam Harris that an improved understanding of subjective experience would also help. For example, if we discovered that the suffering of fish was much greater or less than what we currently think, this would have implications for our behaviour.
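Under either table of values, the comparison is the same mechanical step: sum each life's segments and take the larger total. A minimal sketch (the dictionary names are mine; the numbers are the two assignments above):

```python
# Two illustrative assignments of values to the 20-year segments from the text.
assignments = {
    "first":  {"life_1": [10, 10, 10, 90], "life_2": [50, 50, 50, 20]},
    "second": {"life_1": [20, 20, 20, 95], "life_2": [40, 40, 40, 20]},
}

for name, lives in assignments.items():
    # Total value of each life is the sum of its segment values.
    totals = {life: sum(segments) for life, segments in lives.items()}
    better = max(totals, key=totals.get)
    print(name, totals, "->", better)

# first {'life_1': 120, 'life_2': 170} -> life_2
# second {'life_1': 155, 'life_2': 140} -> life_1
```

The disagreement between the two verdicts lives entirely in the inputs, not in the rule.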

Because of this difficulty, I’m certainly not arguing for naive use of this kind of analysis (look, I added up some numbers and solved all the world’s problems!). However, to the extent that we can agree on an assignment, we can agree on the conclusions. To the extent that we disagree, we can pinpoint the source of the disagreement, and where we might look to resolve it. If sunlight is a good disinfectant for bad ideas, this kind of mathematical modelling is a useful source of artificial light.

You may have noticed that all I’m describing here is the concept of utility, used by economists, statisticians, and moral philosophers. Whereas basic economic models treat individuals as though they want to maximise the utility of their own lives, some moral philosophers such as Peter Singer want to maximise the utility summed over all conscious beings. Summation is inevitably involved in both cases, just as summation applies to my bank balances, the weights of objects in my backpack, the quantum amplitudes of paths of electrons, and the lengths of pieces of string.

It is popular wisdom that there are many things in this world that just can't be quantified. It may be the case, as sociologist William Bruce Cameron suggested, that "not everything that counts can be counted". But if you try sometimes, you just might find that "non-mathematical" things can indeed be counted.

 

Brendon Brewer is a senior lecturer in the Department of Statistics, University of Auckland. Follow him on Twitter @brendonbrewer

 

References

  1. Evolving Minds: A conversation with Jonathan Haidt. Waking Up Podcast, March 9, 2016.
  2. Harris, S., 2011. The moral landscape: How science can determine human values. Simon and Schuster.
  3. Knuth, K.H., 2015. The Deeper Roles of Mathematics in Physical Laws.
  4. Cox, R.T., 1946. Probability, frequency and reasonable expectation. American Journal of Physics, 14(1), pp.1-13.
  5. Jaynes, E.T., 2003. Probability theory: the logic of science. Cambridge University Press.
  6. Knuth, K.H. and Skilling, J., 2012. Foundations of inference. Axioms, 1(1), pp.38-73.


6 Comments

  1. Daniel says

    May I add that this model of added happiness neglects devaluation in hindsight as perhaps experienced by betrayed lovers.

  2. Bitfu says

    The Paradox Of Hedonism

    “The impulse towards pleasure can be self-defeating. We fail to attain pleasures if we deliberately seek them. This is what Sidgwick (The Methods of Ethics) called the paradox of hedonism.

    There is a similar paradox concerning happiness. In order to be happy, an agent must aim at things other than his own happiness. Some writers use the same label for this paradox, somewhat inaccurately, since pleasure is not the same as happiness.”

    I think your model fails to take this into account.

  3. I'm quite sympathetic to this argument, which the author makes very neatly, but one concern I always have with it (going back to Kahneman’s chapter in ‘Wellbeing: the foundations of hedonic psychology’) is that there are arguably different kinds of utility. Kahneman and co. have started referring to them as hedonic and eudemonic wellbeing i.e. crude pleasure vs. some kind of deeper life satisfaction. Now often the things that result in life satisfaction require some suffering, like getting a PhD, and the payoff of life satisfaction is distinct from hedonia. There is probably some hedonia at your grad ceremony and such, but it seems intuitively unlikely that this would be commensurate with the suffering of the previous 12 months given what we know about hedonic adaptation. There is thus an unresolved question about whether a virtuous life in the Aristotelian sense is more ‘valuable’ in your additive sense than a hedonistic or even Epicurean life. I certainly think it is, but such value judgements are hard to engage with using math…

  4. Ananda Hohenstaufen says

    I’m a little bewildered by Sam choosing #2 over #1. The guy spent 2 years in Asia studying meditation. Apparently he didn’t learn very much.

  5. Pingback: Measure theory and sports team selection | Plausibility Theory

  6. Richard Kennaway says

    Am I the only one who finds life (2) appalling, and rates life (1) higher not merely in total, but all the way through? Apart from that, ordering matters. Rags to riches is better than riches to rags, improvement better than decay, ascent better than decline, so Knuth’s work on additivity doesn’t apply. People also value what they leave when they die, which (1) likely has more of than (2).
