
Monday, December 1, 2025

Desire, preference, and utilitarianism

Desire-satisfaction utilitarianism (DSU) holds that the right thing to do is what maximizes everyone’s total desire satisfaction.

This requires a view of desire on which desire does not supervene on preferences as understood in decision theory.

There are two reasons. First, it is essential for DSU that there be a well-defined zero point for desire satisfaction, as according to DSU it’s good to add to the population people whose desire satisfaction is positive and bad to add people whose desire-satisfaction is negative. Preferences are always relative. Adding some fixed amount to all of a person’s utilities will not change their preferences, but can change which states have positive utility and which have negative utility, and hence can change whether the person’s on-the-whole state of desire satisfaction is positive or negative.

Second, preferences cannot be compared across agents, but desires can. Suppose there are only two states, eating brownie and eating ice cream (one can’t have both), and you and I both prefer brownie. In terms of preference comparisons, there is nothing more to be said. Given any pair of mixed options i = 1, 2 with probability p_i of brownie and 1 − p_i of ice cream, I prefer option i to option j if and only if p_i > p_j, and the same is true for you. But this does not capture the possibility that I may prefer brownie by a lot and you only by a little. Without capturing this possibility, the preference data is insufficient for utilitarian decisions (if I prefer brownie by a lot, and you by a little, and there is one brownie and one serving of ice cream, I should get the brownie and you should get the ice cream on a utilitarian calculus).

The technical point here is that preferences are affine-invariant, but desires are not.
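To make this concrete, here is a minimal Python sketch (the utility numbers are my own made-up illustration, not anything in the post): shifting all of an agent's utilities by a constant, one simple affine transformation, leaves every preference between lotteries unchanged, but changes which outcomes count as positive and which as negative, i.e., it moves the zero point that DSU needs.

```python
# Two outcomes: brownie and ice cream. The utilities are made-up numbers.
utilities = {"brownie": 3.0, "ice_cream": 1.0}

def lottery_value(p_brownie, utils):
    """Expected utility of a lottery giving brownie with probability p_brownie."""
    return p_brownie * utils["brownie"] + (1 - p_brownie) * utils["ice_cream"]

def prefers(p_i, p_j, utils):
    """True if the agent prefers lottery i (probability p_i of brownie) to lottery j."""
    return lottery_value(p_i, utils) > lottery_value(p_j, utils)

# Shift every utility down by 2 (an affine transformation of the utility scale).
shifted = {k: v - 2.0 for k, v in utilities.items()}

# Preferences between lotteries are unchanged by the shift...
for p_i, p_j in [(0.9, 0.1), (0.3, 0.7), (0.5, 0.5)]:
    assert prefers(p_i, p_j, utilities) == prefers(p_i, p_j, shifted)

# ...but the zero point is not: ice cream goes from positive to negative utility.
print(utilities["ice_cream"], shifted["ice_cream"])  # 1.0 vs -1.0
```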

But now it is preferences that are captured behavioristically—you prefer A over B provided you choose A over B. The extra information in desires is not captured behavioristically. Instead, it seems, it requires some kind of “mental intensity of desire”.

And while there is reason to think that the preferences of rational agents at least can be captured numerically—the von Neumann–Morgenstern Representation Theorem suggests this—it seems dubious to think that mental intensities of desire can be captured numerically. But they need to be so captured for DSU to have a hope of success.

The same point holds for desire-satisfaction egoism.

Monday, November 3, 2025

The good of success is not at the time of success

It’s good for one to succeed, at least if the thing one succeeds in is good. And the good of succeeding at a good task is something over and beyond the good of the task’s good end, since the good end might be good for someone other than the agent, while the good of success is good for the agent.

Here’s a question I’ve wondered about, and now I think I’ve come to a fairly settled view. When does success contribute to one’s well-being? The obvious answer is: when the success happens! But the obvious answer is wrong for multiple reasons, and so we should embrace what seems the main alternative, namely that success is good for us when we are striving for the end.

Before getting to the positive arguments for why the good of success doesn’t apply to us at the time of success, let me say something about one consideration in favor of that view. Obviously, we often celebrate when success happens. However, notice that we also often celebrate when success becomes inevitable. Let’s now move to the positive arguments.

First, success at good tasks would still be good for one even if there were no afterlife. But some important projects have posthumous success—and such success is clearly a part of one’s well-being. And it seems implausible to respond that posthumous success only contributes to our well-being because as a matter of fact we do have an afterlife. Note, too, that in order to locate the good of success at the time of success, we would not just need an afterlife, but an afterlife that begins right at death. For instance, views on which we cease to exist at death and then come back into existence later at the resurrection of the dead (as corruptionist Christians hold) won’t solve the problem, because the success may happen during the gap time. I believe in an afterlife that begins right at death, but it doesn’t seem like I should have to in order to account for the good of success. Furthermore, note that to use the afterlife to save posthumous success, we need a correlation between the timeline the dead are in and the timeline the living are in, and even for those of us who believe in an afterlife right at death, this is unclear.

Second, suppose your project is to ensure that some disease does not return before the year 2200. When is your success? Only in 2200. But suppose your project is even more grandiose: the future is infinite and you strive to ensure that the disease never returns. When is your success? Well, “after all of time”. But there is no time after all of time. So although it may be true that you are successful, that success does not happen at any given time. At any given time, there is infinite project-time to go. So if you get the good of success at the time of success, you never get the good of success here. Even an afterlife won’t help here.

Third, consider Special Relativity. You work in mission control on earth to make sure that astronauts on Mars accomplish some task. You are part of the team, but the last part of the team’s work is theirs. But since light can take up to 22 minutes (depending on orbital positions) to travel between Earth and Mars, the question of at what exact you-time the astronauts accomplished their task depends on the reference frame, with a range of variation in the possible answers of up to 22 minutes. But whether you are happy at some moment should not depend on the reference frame. (You might say that it depends on what your reference frame is. But there is no unambiguous such thing as “your” reference frame in general, say if you are shaking your head so your brain is moving in one direction and the rest of your body in another.)

Here is an interesting corollary of the view: the future is not open (by open, I mean the thesis that there are no facts about how future contingents will go). For if the future is open, often it is only at the time of success that there will be a fact about success, so there won’t be a fact of your having been better off for the success when you were striving earlier for the success. That said, the open-futurist cannot accept the third argument, and is likely to be somewhat dubious of the second.

Friday, August 8, 2025

Extrinsic well-being and the open future

Klaus: Sometimes how well or badly off you are at time t1 depends on what happens at a later time t2. A particularly compelling case of this is when at t1 you performed an onerous action with the goal of producing some effect E at t2. How well off you were in performing the action depends on whether the action succeeded—which depends on whether E eventuates at t2. But now suppose the future is open. Then in a world with as much indeterminacy as ours, in many cases at t1 it will be contingent whether the event at t2 on which your well-being at t1 depends eventuates. And on open future views, at t1 there will then be no fact of the matter about your well-being. Hence, the future is not open.

Opie: In such cases, your well-being should be located at t2 rather than at t1. If you jump the crevasse, it is only when you land that you have the well-being of success.

Klaus: This does not work as well in cases where you are dead at t2. And yet our well-being does sometimes depend on what happens after we are dead. The action at t1 might be a heroic sacrifice of one’s life to save one’s friends—but whether one is a successful hero or a tragic hero depends on whether the friends will be saved, which may depend on what happens after one is already dead.

Opie: Thanks! You just gave me an argument for an afterlife. In cases like this, you are obviously better off if you manage to save your friends, but you aren’t better off in this life, so there must be life after death.

Klaus: But we also have the intuition that even if there were no afterlife, it would be better to be the successful hero than the tragic hero, and that posthumous fame is better than posthumous infamy.

Opie: There is an afterlife. You’ve convinced me. And moral intuitions about how things would be if our existence had a radically different shape from the one it in fact has are suspect. And, given that there is an afterlife, a scenario without an afterlife is a scenario where our existence has a radically different shape. Thus the intuition you cite is unreliable.

Klaus: That’s a good response. Let me try a different case. Suppose you perform an onerous action with a goal within this life, but then you change your mind about the goal and work to prevent that goal. This works best if both goals are morally acceptable, and switching goals is acceptable. For instance you initially worked to help the Niners train to win their baseball game against the Logicians, but then your allegiance shifted to the Logicians in a way that isn't morally questionable. And then suppose the Niners won. Your actions in favor of the Niners are successful, and you have well-being. But it is incorrect to locate that well-being at the time of the actual victory, since at that time you are working for the Logicians, not the Niners. So the well-being must be located at the time of your activity, and at that time it depends on future contingents.

Opie: Perhaps I should say that at the time the Niners beat the Logicians, you are both well-off and badly-off, since one of your past goals is successful and the other is unsuccessful. But I agree that this doesn’t quite seem right. After all, if you are loyal to your current employer, you’re bummed out about the Logicians’ loss and you’re bummed out that you weren’t working for them from the beginning. So intuitively you're just badly off at this time, not both badly and well off. So, I admit, this is a little bit of evidence against open future views.

Wednesday, April 17, 2024

Desire-fulfillment theories of wellbeing

On desire-fulfillment (DF) theories of wellbeing, cases of fulfilled desire are an increment to utility. What about cases of unfulfilled desire? On DF theories, we have a choice point. We could say that unfulfilled desires don’t count at all—it’s just that one doesn’t get the increment from the desire being fulfilled—or that they are a decrement.

Saying that unfulfilled desires don’t count at all would be mistaken. It would imply, for instance, that it’s worthwhile to gain all the possible desires, since then one maximizes the amount of fulfilled desire, and there is no loss from unfulfilled desire.

So the DF theorist should count unfulfilled desire as a decrement to utility.

But now here is an interesting question. If I desire that p, and then get an increment x > 0 to my utility if p, is my decrement to utility if not p just −x, or something different?

It seems that in different cases we feel differently. There seem to be cases where the increment from fulfillment is greater than the decrement from non-fulfillment. These may be cases of wanting something as a bonus or an adjunct to one’s other desires. For instance, a philosopher might want to win a pickleball tournament, and intuitively the increment to utility from winning is greater than the decrement from not winning. But there are cases where the decrement is at least as large as the increment. Cases of really important desires, like the desire to have friends, may be like that.

What should the DF theorist do about this? The observation above seems to do serious damage to the elegant “add up fulfillments and subtract non-fulfillments” picture of DF theories.

I think there is actually a neat move that can be made. We normally think of desires as coming with strengths or importances, and of course every DF theorist will want to weight the increments and decrements to utility with the importance of the desire involved. But perhaps what we should do is to attach two importances to any given desire: an importance that is a weight for the increment if the desire is fulfilled and an importance that is a weight for the decrement if the desire is not fulfilled.

So now it is just a psychological fact that each desire comes along with a pair of weights, and we can decide how much to add and how much to subtract based on the fulfillment or non-fulfillment of the desire.
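As a sketch of how this two-weight bookkeeping might go, here is a minimal Python illustration; the desires and weight values are hypothetical, chosen only to mirror the pickleball and friendship examples above.

```python
# Each desire: (description, fulfillment weight, non-fulfillment weight, fulfilled?)
# The desires and weights are made-up illustrations.
desires = [
    ("win the pickleball tournament", 5.0, 1.0, False),  # bonus-style: big gain, small loss
    ("have friends",                  8.0, 8.0, True),   # important: loss mirrors gain
    ("finish the book manuscript",    4.0, 3.0, False),
]

def total_utility(desires):
    """Add the fulfillment weight when fulfilled, subtract the non-fulfillment weight when not."""
    total = 0.0
    for _, w_fulfilled, w_unfulfilled, fulfilled in desires:
        total += w_fulfilled if fulfilled else -w_unfulfilled
    return total

print(total_utility(desires))  # 8.0 - 1.0 - 3.0 = 4.0
```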

If this is right, then we have an algorithm for a good life: work on your psychology to gain lots and lots of new desires with large fulfillment weights and small non-fulfillment weights, and to transform your existing desires to have large fulfillment weights and small non-fulfillment weights. Then you will have more wellbeing, since the fulfillments of desires will add significantly to your utility but the non-fulfillments will make little difference.

This algorithm results in an inhuman person, one who gains much if their friends live and are loyal, but loses nothing if their friends die or are disloyal. That’s not the best kind of friendship. The best kind of friendship requires vulnerability, and the algorithm takes that away.

Thursday, December 9, 2021

Yet another account of life

I think a really interesting philosophical question is the definition of life. Standard biological accounts fail to work for God and angels.

Here is a suggestion:

  • x has life if and only if it has a well-being.

For living things, one can talk meaningfully of how well or poorly off they are. And that’s what makes them be living.

I think this is a simple and attractive account. I don’t like it myself, because I am inclined to think that everything has a well-being—even fundamental particles. But for those who do not have such a crazy view, I think it is an attractively simple solution to a deep philosophical puzzle.

Monday, May 18, 2020

Gamification

Most philosophers don’t talk much about games. But games actually highlight one of the really amazing powers of the human being: the power to create norms and to create new forms of well-being.

Lately I’ve been playing this vague game with vague rules and vague non-numerical points when out and about:

  • Gain bonus points if I can stay at least nine feet away from non-family members in circumstances in which normally I would come within that distance of them; more points the further away I can be, though no extra bonus past 12 feet.

  • Win game if I avoid inhaling or exhaling within six feet of a non-family member. (And of course I have to be careful that the first breath past the requisite distance be moderate in size rather than a big huff.)

When the game goes well, it’s delightful, and adds value to life. On an ordinary walk around campus, I almost always win the game now. Last time I went shopping at Aldi, I would have won (having had to hold my breath a few times), except that I think I mumbled “Thank you” within six feet of the checkout worker (admittedly, if memory serves, I think I mumbled it quietly, trying to minimize the amount of breath going out, and then stepped back for the inhalation after the words; and of course I was wearing a mask, but it's still a defeat). Victory, or even near-victory, at the social distancing game is an extra good in life, only available because I imposed these game norms on myself, in addition to the legal and prudential norms that are independent of my will. Yesterday, I think I won the game all day despite going on a bike ride and a hike, attending Mass (we sat in the vestibule, in chairs at least nine feet away from anybody else, and the few times someone was passing by I held my breath), and playing tennis with a grad student. That's satisfying to reflect on. (At the same time, playing a game also generally adds a bit of extra stress, since there is the possibility, and sometimes actuality, of defeat. And it's hard to concentrate on the Mass while all the time looking around for someone who might be approaching within the forbidden distance. And, no, I didn't actually think of it as a game when I was at Mass, but rather as a duty of social responsibility.)

I think the only other person in my family who has gamified social distancing is my seven-year-old.

Monday, August 13, 2018

Calling for an explanation

If I am playing a board game and the last ten rolls of my die were 1, that calls out for an explanation. If only Jewish and Ethiopian people get Tay-Sachs disease, that calls out for an explanation.

It seems right to say that

  1. a fact calls out for an explanation provided it is the sort of fact that we would expect to have an explanation, a fact whose nature is such that it "should" have an explanation, a fact such that we would be disappointed in reality if it had no explanation.

But now consider two boring facts:

  2. 44877 x 5757 = 258356889
  3. Bob is wearing a shirt

These are facts that we all expect to have an explanation (e.g., the explanation of (2) is long and boring, involving many instances of the distributive law, and the explanation of (3) presumably has to do with psychosocial and physical facts). They are, moreover, facts that "should" have an explanation. There would be something seriously wrong with logic itself if a complex multiplication fact had no explanation (it's certainly not a candidate for being a Goedelian unprovable truth), and with reality if people wore shirts for no reason at all.
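For what it's worth, (2) does check out, and the following minimal Python sketch shows the shape of its "long and boring" explanation: unpack the product by the distributive law over the decimal parts of 5757 and add the pieces.

```python
# A sketch of the explanation of (2): the distributive law over 5757 = 5000 + 700 + 50 + 7.
parts = {5000: 44877 * 5000, 700: 44877 * 700, 50: 44877 * 50, 7: 44877 * 7}
assert sum(parts.values()) == 44877 * 5757 == 258356889
print(parts)  # {5000: 224385000, 700: 31413900, 50: 2243850, 7: 314139}
```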

So by (1), these would have to be facts that call out for an explanation. But I don't hear their cry. I am confident that they have explanations, but I wouldn't say that they call out for them. So it doesn't seem that (1) captures the concept of calling out for an explanation.

As I reflect on cases, it seems to me that calling out for an explanation has something to do with the intellectual desirability of having an explanation rather than with the mere expectation that there is one. Someone with a healthy level of curiosity would want to know why the last ten rolls were 1 or why only Jewish and Ethiopian people get Tay-Sachs. On the other hand, while I'm confident that there is a fine mathematical reason why 44877 x 5757 = 258356889, I have no desire to know that reason, even though I have at least a healthy degree of curiosity about mathematics.

This suggests to me an anthropocentric (and degreed) story like the following:

  4. A fact calls out for an explanation to the degree that one would be intellectually unfulfilled in not knowing an explanation.

It is sometimes said that a fact's calling out for an explanation is evidence that it has an explanation. I think (4) coheres with this. That something is needed for our fulfillment is evidence that the thing is possible. For beings tend to be capable of fulfillment. (This is a kind of cosmic optimism. No doubt connected to theism, but in what direction the connection runs needs investigation.)

Friday, May 11, 2018

Fun with desire satisfaction

Suppose that desire satisfaction as such contributes to happiness. Then it makes sense to pay a neuroscientist to induce in me as many desires as possible for obvious mathematical truths: the desire that 1+1=2, that 1+2=3, that 1+3=4, etc.

Or if desire has to be for a state of affairs in one’s control, one can pay the neuroscientist to induce in me as many desires as possible for states of affairs like: my not wearing a T-shirt that has a green numeral 1, my not wearing a T-shirt that has a green numeral 2, etc. Then by not wearing a T-shirt with any green numerals, I fulfill lots of desires.

Friday, January 12, 2018

Four grades of normative actuality

Here are four qualitatively different grades of the normative actuality of a causal power:

  1. Normative possession (zeroeth normative actuality): x has a nature according to which it should have causal power F. (An adult human who lacks sight still has normative possession of vision.)
  2. First normative actuality: x has a normal (for x’s nature) causal power F. (The human with closed eyes has first actuality of vision.)
  3. Second normative actuality: x exercises the normal causal power F. (The human who sees has second actuality of vision.)
  4. Full (third) normative actuality: x exercises the normal causal power F and achieves the full telos of the causal power. (The human who gains knowledge through seeing has full actuality of vision.)

What I call normative possession is close to Aristotelian first potentiality, but is not exactly the same. The newborn has first potentiality for speaking Greek—namely, she is such that eventually she can come to have the power of speaking Greek—but she does not have normative possession of speaking Greek, since human nature does not specify that one should be able to speak Greek. However, the newborn does have normative possession of language in general.

I think each of the four grades of normative actuality is non-instrumentally valuable, and that the grades increase in non-instrumental value as one goes from zero to three.

Grade zero can carry great value, even in the absence of higher grades. For instance, normative possession of the causal powers constitutive of rational agency makes one be a person (or so I say, following Mike Gorman). And it is very valuable to be a person. This may, however, be a special case coming from the fact that persons have a dignity that other kinds of things do not; maybe the special case comes from the fact that persons need to have a fundamentally different kind of form from other things. For other causal powers, grade zero doesn’t seem to carry much value. Imagine that you found out that (a) normal Neanderthals have the ability to run five hundred kilometers and (b) you are in fact a Neanderthal. By finding out these things, you’d have found out that you have normative possession of the ability to run 500km—but of course, you have no actual possession of that ability. The normative possession is slightly cool, but so what? Unless one has a higher grade of actuality of this ability, simply being the kind of thing that should have that ability does not seem very valuable. And the same is true for abilities more valuable than the running one: imagine that Neanderthals turn out to have Einstein-level mathematical abilities, but you don’t. It would be a bit cool to be of the same kind as these mathematical geniuses (maybe this is a little similar to how it’s cool for a Greek to be of the same nation as Socrates), but in the end it really doesn’t count for much.

Grade one is also valuable even in the absence of higher grades. It makes for the difference between health and impairment, and health is valuable. But I can’t think of cases where first normative actuality carries much non-instrumental value. Imagine that I know for sure that I am going to spend all my life with my eyes tightly closed (e.g., maybe I am hooked up to a machine that will kill me if I attempt to open them). It is objectively healthier that I have sight than that I do not. But it seems rational to sacrifice all of my sight for a slight increase in the acuity of touch or hearing, given that I can actually exercise touch and hearing (second or third actuality) while I can’t exercise sight. Even slight amounts of second or third normative actuality seem to trump first normative actuality.

Grade two seems quite valuable, even absent grade three. Here, examples can be multiplied. Sensory perception that does not lead to knowledge can still be well worth having. Sex is valuable even absent successful reproduction. Running on a treadmill can have a value even if it does not achieve locomotion. While it seems to be generally true that a great amount of first actuality can be sacrificed for a small amount of second actuality, this is not as generally true with second and third actuality. One might reasonably prefer to run two kilometers on a treadmill—even for the non-instrumental goods of the exercise of leg muscles—instead of running two meters on the ground.

All of the lower grades of normative actuality derive their value in some way from the value of full normative actuality. But full normative actuality does not always trump grade two. It seems to generally trump grade one. Grade zero is special: most of the time it does not seem to carry much value, but it does in the case where it constitutes personhood. (Maybe, though, the dignity of personhood shouldn’t be thought of in terms of value.)

Wednesday, November 2, 2016

Cessation of existence and theories of persistence

Suppose I could get into a time machine and instantly travel forward by a hundred years. Then over the next hundred (external) years I don’t exist. But this non-existence is not intrinsically a harm to me (it might be accidentally a harm if over these hundred years I miss out on things). So a temporary cessation of existence is not an intrinsic harm to me. On the other hand, a permanent cessation of existence surely is an intrinsic harm to me.

These observations have interesting connections with theories of persistence and time. First, observe that whether a cessation of existence is bad for me depends on whether I will come back into existence. This fits neatly with four-dimensionalism and less neatly with three-dimensionalism. If I am a four-dimensional entity, it makes perfect sense that as such I would have an overall well-being, and that this overall well-being should depend on the overall shape and size of my four-dimensional life, including my future life. Hence it makes sense that whether I undergo a permanent or impermanent cessation of existence makes a serious difference to me.

But suppose I am three-dimensional and consider these two scenarios:

  1. In 2017 I will permanently cease to exist.

  2. In 2017 I will temporarily cease to exist and come back into existence in 2117.

I am surely worse off in (1). But if I am three-dimensional, then to be worse off, I need to be worse off as a three-dimensional being, at some time or other. Prior to 2117, I’m on par as a three-dimensional being in the two scenarios. So if there is to be a difference in well-being, it must have something to do with my state after 2117.

But it seems false that, say, in 2118, I am worse off in (1) than in (2). For how can I be better or worse off when I don’t exist?

The three-dimensionalist’s best move, I think, is to say that I am actually worse off prior to 2017 in scenario (1) than in scenario (2). For, prior to 2017, it is true in scenario (1) that I will permanently cease to exist while in (2) it is false that I will do so.

It can indeed happen that one is worse off at time t1 in virtue of how things will be at a later time t2. Perhaps the athlete who attains a world-record that won’t be beaten for ten years is worse off at the time of the record than the athlete who attains a world-record that won’t be beaten for a hundred years. Perhaps I am worse off when publishing a book that will be ignored than when publishing a book that will be taken seriously. But these are differences in external well-being, like the kind of well-being we have in virtue of our friends doing badly or well. And it is counterintuitive that permanent cessation of existence is only a harm to one’s external well-being. (The same problem afflicts Thomas Nagel’s theory that the badness of death has to do with unfinished projects.)

The problem is worst on open future views. For on open future views, prior to the cessation of existence there may be no fact of the matter of whether I will come back into existence, and hence no difference in well-being.

The problem is also particularly pressing on exdurantist views on which I am a three-dimensional stage, and future stages are numerically different from me. For then the difference, prior to 2017, between the two scenarios is a difference about what will happen to something numerically different from me.

The problem is also particularly pressing on presentist and growing block views, for it is odd to say that I am better or worse off in virtue of non-existent future events.

Of the three-dimensionalists, probably the best off is the eternalist endurantist. But even there the assimilation of the difference between (1) and (2) to external well-being is problematic.

Thursday, October 27, 2016

Three strengths of desire

Plausibly, having satisfied desires contributes to my well-being and having unsatisfied desires contributes to my ill-being, at least in the case of rational desires. But there are infinitely many things that I’d like to know and only finitely many that I do know, and my desire here is rational. So my desire and knowledge state contributes infinite misery to me. But it does not. So something’s gone wrong.

That’s too quick. Maybe the things that I know are things that I more strongly desire to know than the things that I don’t know, to such a degree that the contribution to my well-being from the finite number of things I know outweighs the contribution to my ill-being from the infinite number of things I don’t know. In my case, I think this objection holds, since I take myself to know the central truths of the Christian faith, and I take that to make me know things that I most want to know: who I am, what I should do, what the point of my life is, etc. And this may well outweigh the infinitely many things that I don’t know.

Yes, but I can tweak the argument. Consider some area of my knowledge. Perhaps my knowledge of noncommutative geometry. There is way more that I don’t know than that I know, and I can’t say that the things that I do know are ones that I desire so much more strongly to know than the ones I don’t know so as to balance them out. But I don’t think I am made more miserable by my desire and knowledge state with respect to noncommutative geometry. If I neither knew anything nor cared to know anything about noncommutative geometry, I wouldn’t be any better off.

Thinking about this suggests there are three different strengths in a desire:

  1. S_p: preferential strength, determined by which things one is inclined to choose over which.

  2. S_h: happiness strength, determined by how happy having the desire fulfilled makes one.

  3. S_m: misery strength, determined by how miserable having the desire unfulfilled makes one.

It is natural to hypothesize that (a) the contribution to well-being is S_h when the desire is fulfilled and −S_m when it is unfulfilled, and (b) in a rational agent, S_p = S_h + S_m. As a result of (b), one can have the same preferential strength, but differently divided between the happiness and misery strengths. For instance, there may be a degree of pain such that the preferential strength of my desire not to have that pain equals the preferential strength of my desire to know whether the Goldbach Conjecture is true. I would be indifferent whether to avoid the pain or learn whether the Goldbach Conjecture is true. But they are differently divided: in the pain case S_m >> S_h and in the Goldbach case S_m << S_h.
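To fix ideas, here is a minimal Python sketch of hypotheses (a) and (b), with made-up numbers for the pain and Goldbach desires: the two desires have equal preferential strength S_p, but it is divided between S_h and S_m in opposite ways, so non-fulfillment contributes very differently to well-being.

```python
# Made-up numbers: each desire has a happiness strength S_h and a misery strength S_m.
desires = {
    "avoid the pain":               {"S_h": 1.0, "S_m": 9.0},  # S_m >> S_h
    "learn whether Goldbach holds": {"S_h": 9.0, "S_m": 1.0},  # S_h >> S_m
}

def preferential_strength(d):
    # Hypothesis (b): S_p = S_h + S_m.
    return d["S_h"] + d["S_m"]

def wellbeing_contribution(d, fulfilled):
    # Hypothesis (a): +S_h if the desire is fulfilled, -S_m if it is not.
    return d["S_h"] if fulfilled else -d["S_m"]

# Equal preferential strength, so the agent is indifferent between the two goods...
assert preferential_strength(desires["avoid the pain"]) == preferential_strength(desires["learn whether Goldbach holds"])

# ...but non-fulfillment hurts very differently.
print(wellbeing_contribution(desires["avoid the pain"], fulfilled=False))                # -9.0
print(wellbeing_contribution(desires["learn whether Goldbach holds"], fulfilled=False))  # -1.0
```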

There might be some desires where S_m = 0. In those cases we think “It would be nice…” For instance, I might have a desire that some celebrity be my friend. Here, S_m = 0: I am in no way made miserable by having that desire be unfulfilled, although the desire might have significant preferential strength—there might be significant goods I would be willing to trade for that friendship. On the other hand, when I desire that a colleague be my friend, quite likely S_m >> 0: I would pine if the friendship weren’t there.

(We might think a hedonist has a story about all this: S_h measures how pleasant it is to have the desire fulfilled and S_m measures how painful the unfulfilled desire is. But that story is mistaken. For instance, consider my desire that people not say bad things behind my back in such a way that I never find out. Here, S_m >> 0, but there is no pain in having the desire unfulfilled, since when it’s unfulfilled I don’t know about it.)

Friday, November 20, 2015

The value of victory

Is winning a game always worthwhile? Consider this solitary game: I guess a number, and if my number is different from the number of particles in the universe, then my score equals the number of particles in the universe. I can play this over and over, winning each time. If it's good for me to win at a game, I continue to rack up benefits. So for reasons of self-interest, I should play this game all the time. I could even set myself up as playing it by default: I announce that each time I breathe, the length of my inspiration in milliseconds counts as my guess. I will be racking up benefits every day, every night. But that's silly.

Thursday, July 16, 2015

Health

Is there a good of overall health, over and beyond particular goods of health, such as having keen eyesight, being able to run fast, etc.?

Suppose you have a broken leg and you believe this was your only health problem. But then you learn that your hearing is below normal and that this cannot be cured. Before you learned this bad news, you thought that fixing the fracture would both restore the health of the leg and overall health. But after learning the bad news, you knew that fixing the fracture would restore the health of the leg but not overall health. If overall health has a value over and beyond its components, then your level of motivation should go down, since previously actions that promoted the health of the leg apparently promoted two goods, while now you see that they promote only one. Yet surely your motivations wouldn’t decrease, or they hardly would. This suggests that the good of overall health is either not a further good or at best a minor good.

Thursday, June 4, 2015

Teleological personhood

It is common, since Mary Anne Warren's defense of abortion, to define personhood in terms of appropriate developed intellectual capacities. This has the problem that sufficiently developmentally challenged humans end up not counting as persons. While some might want to define personhood in terms of a potentiality for these capacities, Mike Gorman has proposed an interesting alternative: a person is something for which the appropriate developed intellectual capacities are normal, something with a natural teleology towards the right kind of intellectual functioning.

I like Gorman's solution, but I now want to experiment with a possible answer as to why, if this is what a person is, we should care more for persons than, say, for pandas.

There are three distinct cases of personhood we can think about:

  1. Persons who actually have the appropriate developed intellectual capacities.
  2. Immature persons who have not yet developed those capacities.
  3. Disabled persons who should have those capacities but do not.

The first case isn't easy, but since everyone agrees that those with appropriate developed intellectual capacities should be cared for more than non-person animals, that's something everyone needs to handle.

I want to focus on the third case now, and to make the case vivid, let's suppose that we have a case of a disabled human whose intellectual capacities match those of a panda. Here is one important difference between the two: the human is deeply unfortunate, while the panda is--as far as the story goes--just fine. For even though their actual functioning is the same, the human's functioning falls significantly short of what is normal, while the panda's does not. But there is a strong moral intuition--deeply embedded into the Christian tradition but also found in Rawls--that the flourishing of the most unfortunate takes a moral priority over the flourishing of those who are less unfortunate. Thus, the human takes priority over the panda because although both are at an equal level of intellectual functioning, this equality is a great misfortune for the human.

What if the panda is also unfortunate? But a panda just doesn't have the range of flourishing, and hence of misfortune, that a human does. The difference in flourishing between a normal human state and the state of a human who is so disabled as to have the intellectual level of a panda is much greater than the total level of flourishing a panda has--if by killing the panda we could produce a drug to restore the human to normal function, we should do so. So even if the panda is miserable, it cannot fall as far short of flourishing as the disabled human does.

But there is an objection to this line of thought. If the human and the panda have equal levels of intellectual functioning, then it seems that the good of their lives is equal. The human isn't more miserable than the panda. But while I feel the pull of this intuition, I think that an interesting distinction might be made. Maybe we should say that the human and the panda flourish equally, but the human is unfortunate while the panda is not. The baselines of flourishing and misfortune are different. The baseline for flourishing is something like non-existence, or maybe bare existence like that of a rock, and any goods we add carry one above zero, so if we add the same goods to the human's and the panda's account, we get the same level. But the baseline for misfortune is something like the normal level for that kind of individual, so any shortfall carries one above zero. Thus, it could be that the human's flourishing is 1,000 units, and the panda's flourishing is 1,000 units, but nonetheless if the normal level of flourishing for a human is, say, 10,000 units (don't take either the numbers or the idea of assigning numbers seriously--this is just to pump intuitions), then the human has a misfortune of 9,000 units, while the panda has a misfortune of 1,000 units.
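Here is a tiny Python sketch of the two-baseline idea, using the made-up numbers from the paragraph above (the panda's normal level of 2,000 is my own filler, chosen so that its misfortune comes out at 1,000 units as in the text; as the post says, the numbers are only intuition pumps).

```python
# Made-up numbers, only to make the two baselines explicit.
def misfortune(flourishing, normal_for_kind):
    """Misfortune is the shortfall from what is normal for that kind of individual."""
    return max(0, normal_for_kind - flourishing)

human = {"flourishing": 1000, "normal": 10000}
panda = {"flourishing": 1000, "normal": 2000}   # hypothetical normal level for a panda

# Equal flourishing (measured from the zero baseline), but very unequal misfortune.
print(misfortune(human["flourishing"], human["normal"]))  # 9000
print(misfortune(panda["flourishing"], panda["normal"]))  # 1000
```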

This does, however, raise an interesting question. Maybe the intuition that the flourishing of the most unfortunate takes a priority is subtly mistaken. Maybe, instead, we should say that the flourishing of those who flourish least should take a priority. In that case, the disabled human doesn't take priority over the panda. But this is mistaken, since by this principle a plant would take priority over a panda, since the plant's flourishing level is lower than a panda's. Better, thus, to formulate this in terms of misfortune.

What about intermediate cases, those of people whose functioning is below a normal level but above that of a panda? Maybe we should combine our answers to (1) and (3) for those cases. One set of reasons to care for someone comes from the actual intellectual capacities. Another comes from misfortune. As the latter reasons wane, the former wax, and if all is well-balanced, we get reason to care for the human more than for the panda at all levels of the human's functioning.

That leaves (2). We cannot say that the immature person--a fetus or a newborn--suffers a misfortune. But we can say this. Either the person will or will not develop the intellectual capacities. If she will, then she is a person with those capacities when we consider the whole of the life, and perhaps therefore the reasons for respecting those future capacities extend to her even at the early stage--after all, she is the same individual. But if she won't develop them, then she is a deeply unfortunate individual, and so the kinds of reasons that apply in case (3) apply to her.

I find the story I gave about (2) plausible. I am less convinced that I gave the right story about (3). But I suspect that a part of the reason I am dissatisfied with the story about (3) is that I don't know what to say about (1). However, (1) will need to be a topic for another day.

Monday, September 10, 2012

Lauinger's Well-being and Theism

This is a plug for something I just got in the mail: my former student William Lauinger's first book, Well-being and Theism. There is some really nice material in the book.

First, we get a new account of well-being. On the one hand, the literature has natural-law accounts on which something is an aspect of one's well-being provided that it perfects one. On the other hand, there are desire-fulfillment theories on which something is an aspect of one's well-being provided that one desires it, or would desire it under appropriate conditions. Lauinger criticizes both (I am convinced by the criticism of desire-fulfillment but not of the natural-law accounts), and then makes a move that normally would be a non-starter but is surprisingly promising here: he conjoins the two by saying that something is an aspect of one's well-being provided it perfects one and satisfies a desire. The criticisms of desire-fulfillment accounts of well-being are very powerful, and ever since I read them in Lauinger's dissertation they have shaped much of my thinking about desire-fulfillment theories.

There is also some really helpful empirically-grounded material in the book arguing that non-standard cases where adults lack desires for basic goods like friendship and health are either much more rare than one might think or non-existent.

The book comes to a completion with (a) an argument that neither evolutionary nor Aristotelian groundings for the perfectionist aspects of the account are satisfactory unless supplemented with theism and (b) a discussion of our desires as desires for something infinite.

I only wish the book wasn't so expensive.

Wednesday, April 18, 2012

Punishment is good for those who are justly punished

Suppose Satan is the only creature in existence and Satan sins gravely through pride, does not repent, and goes to hell forever. Hell is a punishment from God. Now in punishing Satan in this world, God does something good to creation, since God does not do anything to creation that isn't good.

But every good is a good for someone. In that world, however, there is only God and Satan. So for whom is that punishment good? For God alone or for Satan alone or for both God and Satan?

It does not seem that the "for God alone" answer is satisfactory. For God, considered on his own, has an unchangeable perfect flourishing. Additionally, there is an extended well-being that God has when those that he loves receive goods, but that presupposes that God isn't the only recipient of the good. Besides, surely, when God acts in creation, he produces good effects--he is, after all, omnibenevolent.

Hence, the punishment of Satan in that world is good for Satan (and maybe for God, derivatively via extended well-being).

But if it is good for Satan in that world, why not in ours as well?

And why is it necessarily good for Satan? Presumably because in general punishment is good for those who justly receive it.

Wednesday, February 8, 2012

Virtues and skills, optional and not

Being a coward is an unhappy fate, even if you know you will never need to face danger. Courage is worth having whether or not you ever use it. On the other hand, the ability to get to Waterloo Station seems to be a useless skill if you're never going to be in London.[note 1] Of course there may be some incidental value in being able to get to Waterloo Station (an eccentric employer whose formative experiences have been around Waterloo Station may require the ability of all her employees) but there could also be similar incidental value in being unable to get to Waterloo Station (maybe an eccentric employer who hates Waterloo Station uses a polygraph to rule out all employees who know how to get there). And it may also be that in gaining the skill of getting to Waterloo Station, one might gain some other useful skill, but that's incidental, too.

Now, maybe, there is some non-instrumental value in being able to get to Waterloo Station. I have a certain pull to say there is. But the following seems clear: there is nothing unfortunate about not being able to get to Waterloo Station, unless you need to get to Waterloo Station or something odd (like an eccentric employer story) is the case.

Are there any virtues that are like being able to get to Waterloo Station, so that it need not be unfortunate that one lacks them? Or is it a mark of a virtue that lacking it is unfortunate, no matter whether one needs to exercise the virtue or not? Let's call any virtues that it is not unfortunate to lack "optional virtues". Thus, virtues can be divided into the optional and non-optional. Plausibly, central general virtues like prudence, courage, patience, generosity and appropriate trust are non-optional. But there may be some optional virtues.

I don't know if there are any optional virtues. Maybe, though, there are some virtues that are tied to particular vocations that it is not unfortunate to lack if you don't have that vocation? I am not sure.

Interestingly, I am inclined to think there are also non-optional skills, skills which it is unfortunate to lack, whether or not you need to exercise them. For instance, it is unfortunate to lack interpersonal skills even if you are going to live on a desert island, for then you are lacking something centrally human. (It is, I think, unfortunate to lack legs even if you're going to spend the rest of your life in a coma. That's part of why it's wrong to steal a permanently comatose patient's legs.)

When I started writing this post, I thought that the question of what state is unfortunate to have might neatly delineate between virtues and skills. But I think it doesn't. It may be an orthogonal distinction.

Thursday, January 26, 2012

Presentism and Epicurus' death argument

Becoming friendless is a harm, even if one does not know that one's last friend has just betrayed one. Likewise, one is harmed when the persons or causes one reasonably cares about are harmed, again whether or not one knows about the harm. But we also, I think, have the intuition that this is a different sort of harm from that which one undergoes when one loses an arm or when one is tortured. Call the first set of harms extrinsic, and the well-being that they detract from extrinsic well-being, and call the second set of harms intrinsic. Apart from an incarnation, God is not subject to intrinsic harms, but he may be subject to extrinsic harms, such as when someone he loves (i.e., anyone at all) is harmed.

Now, introduce the intuitive notion of a temporally pure property. A temporally pure property is one that is had by x only in virtue of how x is at the given time. Thus, being circular is temporally pure but being married to a future president of the United States or being fifty years old are temporally impure. (If the fact that x has Fness is a soft fact, in the Ockhamist sense, then F is temporally impure.)

Then:

  1. (Premise) Only the having of an intrinsic property can constitute an intrinsic harm.
  2. (Premise) Ceasing to exist can be an intrinsic harm.
  3. (Premise) If presentism is true, only temporally pure properties can be intrinsic.
  4. (Premise) Ceasing to exist cannot be a property constituted in virtue of how x is at a particular time.
  5. Ceasing to exist cannot be constituted in virtue of one's temporally pure properties. (4 and definition)
  6. If presentism is true, ceasing to exist cannot be an intrinsic property. (3 and 5)
  7. If presentism is true, ceasing to exist cannot be an intrinsic harm. (1 and 6)
  8. Presentism is not true. (2 and 7)

This is of course in the same spirit as Epicurus' argument that death isn't bad because when you're dead, you don't exist and hence can't be badly off, and when you're not dead, you're not dead. But notice that Epicurus' argument fails to show that death isn't extrinsically bad. Also, I formulated the argument in terms of a (hypothetical) cessation of existence rather than death, since in fact death is not a cessation of existence for human beings, and it is not completely clear that death is an intrinsic harm to non-human animals.

Interestingly, the growing block theorist, who thinks only past and present events and things are real, has a similar problem. For if growing block is true, only hard properties (ones that depend only on how things were or are) can be intrinsic properties, and ceasing to exist is not a hard property.

The eternalist, however, can say that the property of being such that one ceases to exist is an intrinsic property, at least on one interpretation of "ceases to exist". It is an intrinsic property of oneself as a temporally extended being, the property of one's life being futureward finite. It is just as much an intrinsic property as the property of being circular or of finite girth. And if someone were to cause one to have the property of one's life being futureward finite, or a more specific property like that of one's life being no more than 54 years long, she would thereby be imposing a harm on one.

And even if the cessation of existence at age 54 as such isn't an intrinsic harm, the eternalist can talk of such intrinsic harms to someone as that one's life does not include any joys after the age of 54, thereby doing some justice to the intuition that cessation of existence is intrinsically harmful.

Tuesday, November 22, 2011

Spinoza's argument for internalism about truth

Internalism about truth holds that a belief's being true is a function of things internal to the mind of the believer. Coherentism and Spinoza's extreme rationalism are two kinds of internalisms about truth. Spinoza's argument in the Treatise for the Emendation of the Intellect is basically:

  1. If internalism is not correct, truth is not worth having.
  2. Truth is worth having.
  3. Therefore, internalism is correct.

For (1) to be at all plausible, we need "worth having" to mean intrinsically worth having, and that makes (2) less plausible, though I think (2) remains true. But I deny (1), with or without the qualification, because some things can be intrinsically worth having without being internal or intrinsic to the person. Thus, it is worth having one's friends do well, even though my friends' doing well is not internal or intrinsic to me. Of course my friends' doing well tends to affect me. But not always: my friend could be doing well in my absence, without any contact with me, and that directly makes me better off.

One can also run the argument in terms of knowledge instead of truth. (I think for Spinoza the two come to the same thing! Spinoza thinks knowledge is true belief, but he has high standards for what counts as true belief—beliefs not justified up to Cartesian standards need not apply.)

Saturday, October 29, 2011

Another objection to a hypothetical desire-satisfaction theory of well-being

According to the simple desire-satisfaction theory of well-being, something would contribute to your well-being, would be good for you, precisely to the extent that it would satisfy your desires. The simple theory is clearly mistaken, because one's desires could be based on false beliefs or mental illness, and so it is easy to come up with examples of desires the satisfaction of which does not make one be well off. Imagine, for instance, that one desires that Patrick flourish, because one believes that Patrick is one's long-lost brother, but in fact Patrick is one's long-lost brother's murderer; Patrick's flourishing is not a part of one's well-being.

The standard move is to hypotheticalize the theory by defining well-being in terms of the desires one would have after being informed of all the relevant non-normative facts and being given ideal psychotherapy. There are serious problems with this suggestion (for instance, the order in which one is informed of the non-normative facts can clearly make a difference as to what desires one comes to have).[note 1]

Here I want to focus on one particular difficulty that has struck me. Suppose I have no genuine friends and no prospects for friendship, but I desperately want friendship. Surely, friendship would contribute to my well-being, and I am badly off for not having a friendship. But the following is also conceivable. After ideal psychotherapy, and after being informed of the non-normative fact that there are no prospects for friendship for me, I might stoically suppress the desire for friendship. Indeed, unless one thinks that friendship is an essential aspect of human well-being, it might be quite rational, even rationally required, to suppress the desire under the circumstances. But now it would be absurd to say to someone who desperately wants friendship that she is not badly off for not having one because after ideal psychotherapy she would stoically and rationally suppress that desire.

That's a dreary example. Here's a positive one. Suppose I have no interest in collecting matchboxes. Getting a matchbox would not contribute to my well-being, surely. But it might be that after ideal psychotherapy and being informed of the details of the hobby, I would come to realize that collecting matchboxes is a perfect hobby for me, and come to desire matchboxes. But I don't go for the ideal psychotherapy, and I don't come to desire matchboxes, and so I don't desire matchboxes. How does getting a matchbox contribute to my well-being? (Well, on an objective theory, it might contribute somewhat despite my lack of desire, because the matchbox is intrinsically valuable. But the desire-satisfaction theorist can't say that.)

This post is inspired by William Lauinger's work on well-being. I would not be surprised if some of my examples parallel his.