
Monday, December 1, 2025

Desire, preference, and utilitarianism

Desire-satisfaction utilitarianism (DSU) holds that the right thing to do is what maximizes everyone’s total desire satisfaction.

This requires a view of desire on which desire does not supervene on preferences as in decision theory.

There are two reasons. First, it is essential for DSU that there be a well-defined zero point for desire satisfaction, as according to DSU it’s good to add to the population people whose desire satisfaction is positive and bad to add people whose desire-satisfaction is negative. Preferences are always relative. Adding some fixed amount to all of a person’s utilities will not change their preferences, but can change which states have positive utility and which have negative utility, and hence can change whether the person’s on-the-whole state of desire satisfaction is positive or negative.
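The shift point can be checked with a toy calculation (the utility numbers are invented for illustration):

```python
# Toy illustration (invented numbers): adding a constant to all of a
# person's utilities preserves their preferences but can flip the sign
# of their total desire satisfaction.

utilities = {"state_a": 1.0, "state_b": 3.0, "state_c": -0.5}
shifted = {s: u - 2.0 for s, u in utilities.items()}

# The preference ordering over states is unchanged by the shift...
ranking = lambda u: sorted(u, key=u.get)
assert ranking(utilities) == ranking(shifted)

# ...but the on-the-whole total flips from positive to negative.
print(sum(utilities.values()))  # 3.5
print(sum(shifted.values()))    # -2.5
```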

Second, preferences cannot be compared across agents, but desires can. Suppose there are only two states, eating brownie and eating ice cream (one can’t have both), and you and I both prefer brownie. In terms of preference comparisons, there is nothing more to be said. Given any mixed pair of options i = 1, 2 with probability pi of brownie and 1 − pi of ice cream, I prefer option i to option j if and only if pi > pj, and the same is true for you. But this does not capture the possibility that I may prefer brownie by a lot and you only by a little. Without capturing this possibility, the preference data is insufficient for utilitarian decisions (if I prefer brownie by a lot, and you by a little, and there is one brownie and one serving of ice cream, I should get the brownie and you should get the ice cream on a utilitarian calculus).

The technical point here is that preferences are affine-invariant, but desires are not.
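The affine-invariance can be verified directly on the brownie/ice cream case (a sketch with invented utility numbers and an invented transformation):

```python
# Two vNM utility assignments related by a positive affine transformation
# u2 = a*u + b (a > 0) rank all lotteries identically, so the "gap"
# between brownie and ice cream carries no preference information.
# All numbers here are invented.

u = {"brownie": 2.0, "ice_cream": 1.0}
a, b = 3.0, -5.0
u2 = {s: a * v + b for s, v in u.items()}

def eu(util, p_brownie):
    """Expected utility of a lottery: brownie with probability p_brownie."""
    return p_brownie * util["brownie"] + (1 - p_brownie) * util["ice_cream"]

# Every pairwise lottery comparison comes out the same under u and u2.
lotteries = [0.0, 0.25, 0.5, 0.75, 1.0]
for p in lotteries:
    for q in lotteries:
        assert (eu(u, p) > eu(u, q)) == (eu(u2, p) > eu(u2, q))

# Yet how much "more" brownie is preferred differs between the two.
print(u["brownie"] - u["ice_cream"])    # 1.0
print(u2["brownie"] - u2["ice_cream"])  # 3.0
```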

But now it is preferences that are captured behavioristically—you prefer A over B provided you choose A over B. The extra information in desires is not captured behavioristically. Instead, it seems, it requires some kind of “mental intensity of desire”.

And while there is reason to think that the preferences of rational agents at least can be captured numerically—the von Neumann–Morgenstern Representation Theorem suggests this—it seems dubious to think that mental intensities of desire can be captured numerically. But they need to be so captured for DSU to have a hope of success.

The same point holds for desire-satisfaction egoism.

Wednesday, April 17, 2024

Desire-fulfillment theories of wellbeing

On desire-fulfillment (DF) theories of wellbeing, cases of fulfilled desire are an increment to utility. What about cases of unfulfilled desire? On DF theories, we have a choice point. We could say that unfulfilled desires don’t count at all—it’s just that one doesn’t get the increment from the desire being fulfilled—or that they are a decrement.

Saying that unfulfilled desires don’t count at all would be mistaken. It would imply, for instance, that it’s worthwhile to gain all the possible desires, since then one maximizes the amount of fulfilled desire, and there is no loss from unfulfilled desire.

So the DF theorist should count unfulfilled desire as a decrement to utility.

But now here is an interesting question. If I desire that p, and then get an increment x > 0 to my utility if p, is my decrement to utility if not p just  − x or something different?

It seems that in different cases we feel differently. There seem to be cases where the increment from fulfillment is greater than the decrement from non-fulfillment. These may be cases of wanting something as a bonus or an adjunct to one’s other desires. For instance, a philosopher might want to win a pickleball tournament, and intuitively the increment to utility from winning is greater than the decrement from not winning. But there are cases where the decrement is at least as large as the increment. Cases of really important desires, like the desire to have friends, may be like that.

What should the DF theorist do about this? The observation above seems to do serious damage to the elegant “add up fulfillments and subtract non-fulfillments” picture of DF theories.

I think there is actually a neat move that can be made. We normally think of desires as coming with strengths or importances, and of course every DF theorist will want to weight the increments and decrements to utility with the importance of the desire involved. But perhaps what we should do is to attach two importances to any given desire: an importance that is a weight for the increment if the desire is fulfilled and an importance that is a weight for the decrement if the desire is not fulfilled.

So now it is just a psychological fact that each desire comes along with a pair of weights, and we can decide how much to add and how much to subtract based on the fulfillment or non-fulfillment of the desire.

If this is right, then we have an algorithm for a good life: work on your psychology to gain lots and lots of new desires with large fulfillment weights and small non-fulfillment weights, and to transform your existing desires to have large fulfillment weights and small non-fulfillment weights. Then you will have more wellbeing, since the fulfillments of desires will add significantly to your utility but the non-fulfillments will make little difference.
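The two-weight picture and the algorithm it licenses can be sketched in a toy model (all weights are invented for illustration):

```python
from dataclasses import dataclass

# Toy model of the two-weight picture: each desire carries one weight
# for the increment if fulfilled and a separate weight for the
# decrement if unfulfilled. All numbers are invented.

@dataclass
class Desire:
    fulfill_weight: float     # added to utility if the desire is fulfilled
    nonfulfill_weight: float  # subtracted from utility if it is not

def wellbeing(desires, fulfilled):
    """Sum the weighted increments and decrements over a set of desires."""
    total = 0.0
    for d, is_fulfilled in zip(desires, fulfilled):
        total += d.fulfill_weight if is_fulfilled else -d.nonfulfill_weight
    return total

# A "bonus" desire (winning a pickleball tournament): big upside, tiny downside.
# A central desire (having friends): downside at least as large as the upside.
bonus = Desire(fulfill_weight=5.0, nonfulfill_weight=0.5)
central = Desire(fulfill_weight=10.0, nonfulfill_weight=10.0)

# The "algorithm for a good life": stock up on bonus-shaped desires.
many_bonuses = [bonus] * 10
print(wellbeing(many_bonuses, [False] * 10))  # -5.0: ten failures cost little
print(wellbeing([central], [False]))          # -10.0: one failed central desire
```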

This algorithm results in an inhuman person, one who gains much if their friends live and are loyal, but loses nothing if their friends die or are disloyal. That’s not the best kind of friendship. The best kind of friendship requires vulnerability, and the algorithm takes that away.

Monday, March 11, 2024

Consent, desire and promises

I have long argued that desire is not the same as consent: the fact that I want you to do something does not constitute consent to your doing it.

Here is a neat little case that has occurred to me that seems to show this conclusively. Alice borrowed a small sum of money from me, and the return is due today. However, I know that I have failed Alice on a number of occasions, and I have an unpleasant feeling of moral envy as to how she has always kept to her moral commitments. I find myself fantasizing about how nice it would feel to have Alice fail me on this occasion! It would be well worth the loss of the loan not to “have to” feel guilt about the times I failed Alice.

But now suppose that Alice knows my psychology really well. Her knowing that I want her to fail to return the money is no excuse to renege on her promise.

There are milder and nastier versions of this. A particularly nasty version is when the promisee wants you to break a promise so that you get severely punished: one thinks here of Shylock in the Merchant of Venice. A mildish (I hope) version is where I am glad when people come late to meetings with me because it makes me feel better about my record of unpunctuality.

Or for a very mild version, suppose that I typically come about a minute late to appointments with you. You inductively form the belief that I will do so this time, too. And it is a pleasure to have one’s predictions verified, so you want me to be late.

The above examples also support the claim that we cannot account for the wrong of promise-breaking in terms of overall harm to the promisee. For we can tweak some of these cases to result in an overall benefit to the promisee. Let’s say that I feel pathologically and excessively guilty about all the times I’ve been late to appointments, and your breaking your promise to show up at noon will make me feel a lot better. It might be that overall there is a benefit from your breaking the promise. But surely that does not justify your breaking the promise.

Or suppose that in the inductive case, the value of your pleasure in having your predictions verified exceeds the inconvenience of waiting a minute.

Objection: Promises get canceled in the light of a sufficiently large benefit to the promisee.

Response: The above cases are not like that. For the benefit of relief of my guilt requires that you break the promise, not that the promise be canceled in light of a good to me. And the pleasure of verification of predictions surely is insufficient to cancel a promise.

Monday, March 13, 2023

Divine desire ethical theories are false

On divine desire variants of divine command ethics, necessarily an action is right just in case it accords with what God wants.

But it seems:

  1. Necessarily, if God commands an action, the action is right.

  2. Possibly, God commands an action but does not want one to do it.

Given (1) and (2), divine desire ethics is false.

I think everyone (and not just divine command theorists) should agree about (1): it is a part of the concept of God that he is authoritative in such a way that whatever he commands is right.

What about (2)? Well, consider a felix culpa case where a great good would come from obedience to God and an even greater one would come from disobedience, and in the absence of a command one would have only a tiny good. Given such a situation, God could command the action. However, it seems that a perfectly good being’s desires are perfectly proportioned to the goods involved. Thus, in such a situation, God would desire that one disobey.

This is related to the important conceptual point about commands, requests and consentings that these actions can go against the characteristic desires that go with them. In the case of a human being, when there is a conflict between what a human wants and what the human commands, requests or consents to, typically it is right to go with what is said, but sometimes there is room for paternalistically going with the underlying desire (and sometimes we rightly go against both word and desire). But paternalism to God is never right.

Tuesday, January 25, 2022

A problem for non-command divine command theories

Some divine “command” theories do not ground obligations in commands as such, but in divine mental states, such as his willings, intentions or desires. It’s occurred to me that there is a down-side to such theories. Independently of accepting a divine command theory of any sort, I think the following is plausible (pace Murphy):

  1. All humans have a duty to obey any commands from God.

But if obligations are grounded in divine mental states, there is the following possibility: God commands one to ϕ even though God does not will, intend or desire that one ϕ, and so one is not obligated to ϕ. The actuality of this possibility would not fit with (1). In fact, the case of the Sacrifice of Isaac appears precisely such: God commanded Abraham to sacrifice Isaac, but did not will, intend or desire for Abraham to do so. God only willed, intended and desired for Abraham to prepare to sacrifice Isaac.

In my previous post, I was happy with the corollary of the divine intention account of duty that Abraham did not have a duty to sacrifice Isaac. But given the plausibility of (1), I should not have been happy with that.

The command version of divine command theory obviously verifies (1). So do natural law theories on which obedience to God is a part of our nature (either explicitly or as a consequence of some more general duty).

Monday, April 19, 2021

How I learned to be a bit less judgmental about social distancing

Earlier in the pandemic, I was very judgmental of students hanging around outside in groups and not respecting six-foot spacing. Fairly quickly I realized that it is inadvisable for the university to rebuke students for doing this, since such rebukes are likely to lead to their taking such interactions to private indoor venues, which would be much worse from a public health standpoint. But that practical consideration did not alleviate my strong judgmental feelings.

Eventually, however, these observations made me realize that in our species there is a natural desire to spend time in relatively close physical proximity to each other. And indeed, this is quite unsurprising in warm-blooded social animals. Realizing that social distancing—however rationally necessary—requires people to go against their natural instincts has made me quite a bit less judgmental about noncompliance.

It took observation of others to realize this, because apart from practicalities, I find myself to prefer something like two meter spacing for social interaction with people outside my family. Greater physical distance from people outside my family has been quite pleasant for me. A conversation at two meters feels a little bit less stressful than at one meter. But looking at other people, it is evident that my preference here is literally unnatural, and that for other people such distancing is quite a burden.

Of course, sometimes it is morally necessary to go against one’s natural desires. It is natural to flee fires, but firefighters need to go against that desire. And in circumstances where it is morally necessary to go against natural desires, people like me who lack the relevant natural desires are particularly fortunate and should not be judgmental of those for whom the actions are a burden.

Desires for another's action

Suppose that Alice is a morally upright officer fighting against an unjust aggressor in a bloody war. The aggressor’s murderous acts include continual slaughter of children. Alice has sent Bob on a mission behind enemy lines. Bob’s last message said that Bob has found a way to end the war. The enemy has been led to war by a regent representing a three-year-old king. If the three-year-old were to die, the crown would pass to a peace-loving older cousin who would immediately end the war. And Bob has just found a way to kill the toddler king. Moreover, he can do it in such a way that it looks like a death of natural causes and will not lead to vengeful enemy action.

Alice responds to the message by saying that the child-king is an innocent noncombatant and that she forbids killing him as that would be murder. It seems that Alice now has two incompatible desires:

  • that Bob will do the right thing by refraining from murdering the child, and

  • that Bob will assassinate the child king, thereby preventing much slaughter, including of children.

And there is a sense in which Alice wants the assassination more than she wants Bob to do the right thing. For what makes the assassination undesirable—the murder of a child—occurs in greater numbers in the no-assassination scenario.

But in another sense, it was the desire to have Bob do the right thing that was greater. For that was the desire that guided Alice’s action of forbidding the assassination.

What should we say?

Here is a suggestion: Alice desires that Bob do the right thing, but Alice wishes that Bob would assassinate the king. What Alice desires and what Alice wishes for are in this case in conflict.

And here is a related question. Suppose someone you care about wants you to do one thing but wishes you to do another. Which should you do?

In the above case, the answer is given by morality: assassinating the three-year-old king is wrong, no matter the consequences. And considerations of authority concur. But what if we bracket morality and authority, and simply ask what Bob should do insofar as he cares about Alice who is his friend. Should he follow Alice’s desires or her wishes? I think this is not so clear. On the one hand, it seems more respectful to follow someone’s desires. On the other hand, it seems more beneficent to follow someone’s wishes.

Friday, November 6, 2020

Conditional and unconditional desires, God's will, and salvation

Consider three cases:

  1. Bob doesn’t care either way whether Alice wants to go out with him. And he wants to go out with Alice if she wants to go out with him.

  2. Carl wants Alice’s desires to be fulfilled. And he wants to go out with Alice.

  3. Dave doesn’t care either way whether Alice wants to go out with him. And he wants to go out with Alice even if she doesn’t want to go out with him.

As dating partners, Dave is a creep, Bob is uncomplimentarily lukewarm and Carl seems the best.

Here’s how we could characterize Dave’s and Bob’s desires with respect to going out with Alice:

  • Bob’s desire is conditional.

  • Dave’s desire is unconditional.

What about Carl’s desire? I think it’s neither conditional nor unconditional. It is what we might call a simple desire.

The three desires interact differently with evidence about Alice’s lack of interest. Bob’s conditional desire leads him to give up on dating Alice. Dave’s creepy desire is unchanged. And Carl, on the other hand, comes to hope that Alice is interested notwithstanding the evidence to the contrary, and is motivated to act (perhaps moderately, perhaps excessively) to try to persuade Alice to want to go out with him.

One might query regarding Carl what happens if he definitively learns that his two desires, to go out with Alice and to have Alice want to go out with him, cannot both be fulfilled. Then, as far as the desires go, he could go either way: he could become a creep or he could resign himself. Resignation is obviously the right attitude. Note, however, that while resignation requires him to give up on going out with Alice, it need not require him to give up on desiring to go out with Alice (though if that desire lasts too long after learning that Alice has no interest, it is apt to screw up Carl’s life).

Now, it seems a pious thing to align one’s desires with God’s in all things. One “thing” is one’s salvation. One could have three attitudes analogous to the attitudes towards dating Alice:

  1. Conditional: Barbara desires to be saved if God wills it. But doesn’t care either way about whether God wills it.

  2. Simple: Charlotte desires to be saved. She desires that God’s will be done, and hopes and prays that God wills her salvation.

  3. Unconditional: Diana desires to be saved even if God doesn’t will it. She doesn’t care whether God wills it.

Barbara’s attitude is lukewarm and shows a lack of love of God, since she doesn’t simply want to be with God. Diana is harder to condemn than Dave, but nonetheless her attitude is flawed. Charlotte has the right attitude.

So, when we say we should align our desires with God’s in all things, that doesn’t seem to mean that all our desires should be conditional. It means, I think, being like Charlotte: desiring the alignment itself while retaining one’s simple desires.

And there is one further distinction to be made, between God’s antecedent and God’s consequent will. The classic illustration is this: When Scripture says that God wills all people to be saved (1 Tim. 2:4), that’s God’s antecedent will. It’s what God wants independently of other considerations. But because of the inextricable intertwining of God’s love and God’s justice (indeed, God’s love is his justice), God also antecedently wants that those who reject him be apart from him. Putting together these antecedent desires of God’s, God has a consequent desire to damn some, namely those who reject God.

I think what I said about Barbara, Charlotte and Diana clearly applies to God’s consequent will. But it’s less clear regarding God’s antecedent will. Necessarily, God antecedently wills all and only the goods. It seems not unreasonable to desire salvation only conditionally on its being a good thing, and hence to desire it only conditionally on its being antecedently willed by God. But I think Charlotte’s approach is also defensible. Charlotte desires to be with God for eternity and desires that being with God is a good thing.

Thursday, October 1, 2020

Acting on a desire

Suppose I have a desire for A, and I act on this desire to get A. There are at least three different stories about my motivations that are compatible with this:

  1. I pursued A non-instrumentally.

  2. I pursued A instrumentally in order to satisfy my desire.

  3. I pursued A instrumentally in order to rid myself of my desire.

The distinction between (2) and (3) should be somewhat familiar to many: when one struggles with temptation, the temptation whispers to one that if one gives in, the struggle will be over. (This is, of course, a deception: for if one gives in, the temptation is likely to return strengthened later.) The distinction between (1) and (2) is subtler. In case (1), the desire reveals to us something desirable and we pursue it. The pursuit satisfies our desire, but we don’t do it to satisfy the desire, but simply because the thing is desirable.

Tuesday, September 29, 2020

More on the privation theory of evil

Back in April, I suggested that there are two possible privation theories concerning evil:

  1. every evil is a privation

  2. for every evil, what makes it be evil is a privation.

Well, Aquinas essentially scooped me, in the first article of the De Malo, by distinguishing two senses of evil in the statement “evil is a privation”. If “evil” means the evil thing, the claim is false. But by “evil” we could mean the evilness of the evil thing, and then Aquinas holds the claim to be true. And it seems to me that the evilness of the evil thing is basically that which makes it evil, so Aquinas’ theory is basically my theory (2).

I think too much of our current literature on the privation theory of evil suffers from a failure to explicitly make the distinction between the evil thing and the evilness of the evil thing. As a result, some of the counterexamples in the literature are only counterexamples to (1). And indeed it’s not hard to find uncontroversial counterexamples to the claim that every evil entity is a privation. Josef Stalin was an evil entity (I hope he has repented since), but he was never a privation; an act of adultery is an evil thing, but it is not a privation.

Consider, for instance, the most discussed example in the literature: pain. It gets pointed out that pain is not a lack of pleasure or any other kind of privation. That is very likely true. But Aquinas’ version of the privation theory does not require him to hold that pain is a privation. He can just say that pain is an evil thing, but evil things don’t have to be privations. Rather, what makes the pain be an evil is a privation. Of course that still requires a privative theory as to what makes pain be an evil. But there are such theories. For instance, one might hold a modification of Mark Murphy’s theory about pain and say that what makes pain in paradigmatic cases bad is a privation of a correspondence between our mental states and our desires, given that in paradigmatic cases we desire not to be in pain (and it’s not much of a bullet to bite to say that pain isn’t bad when it doesn’t go against our desires).

The story about pain doesn’t end here. One might, and I think should, question whether the correct ontology of the world includes such entities as “matches” between mental states and desires for pain to be a privation of. I think what Aquinas would likely say is that because being is said analogically, “matches” do exist in an analogical sense, and hence we can correctly talk of their privation. I think this is problematic. For once we allow that “matches” exist analogically, we should equally allow for privations and other lacks to exist analogically—and Aquinas indeed does. And then we run into the problem that even positive things can count as lacks: for instance, sight could count as a lack of the lack of sight. And once we have gone this far, the privation theory becomes trivial.

But the point remains: once we have seen Aquinas’s distinction between the evil being a privation and the evilness of the evil being a privation, the critiques of the privation theory are apt to get a lot more complex.

Friday, February 22, 2019

Are desires really different from wishes?

It is tempting to conflate what is worth desiring with what is worth pursuing. But there seem to be cases where things are worth desiring but not worth pursuing:

  1. Having a surprising good happen to you completely gratuitously—i.e., without your having done anything to invite it—seems worth desiring but the pursuit of it doesn’t seem to make sense.

  2. If I have published a paper claiming a certain mathematical result, and I have come to realize that the result is false, it seems to make perfect sense to desire that the result be true, but it makes no sense to pursue that.

The standard response to cases like 1 and 2 is to distinguish wishes from desires, and say that it makes sense to wish for things that it makes no sense to pursue, but it does not make sense to desire such things.

But consider this. Suppose in case 2, I came to be convinced that God has power over mathematics, and that if I pray that the result be true, God might make it be true. Then the affective state I have in case 2 would motivate me to pray. But the nature of the affective state need not have changed upon coming to think that God has power over mathematics. Thus, either (a) I would be motivated to pray by a mere wish or else (b) wishes and desires are the same thing. But the wish/desire distinction does not fit with (a), which leaves (b).

I suppose one could claim that a desire just is a wish plus a belief that the object is attainable. But that makes desires be too gerrymandered.

Friday, September 7, 2018

Beauty and goodness

While listening to a really interesting talk on beauty in Aquinas, I was struck by the plausibility of the following idea (perhaps not Aquinas’): The good is what one properly desires to be instantiated; the beautiful is what one properly desires to behold. So the distinction between them is in how we answer Diotima’s question about desire (or eros): what do we want to do with the object of desire?

Friday, May 11, 2018

Fun with desire satisfaction

Suppose that desire satisfaction as such contributes to happiness. Then it makes sense to pay a neuroscientist to induce in me as many desires as possible for obvious mathematical truths: the desire that 1+1=2, that 1+2=3, that 1+3=4, etc.

Or if desire has to be for a state of affairs in one’s control, one can pay the neuroscientist to induce in me as many desires as possible for states of affairs like: my not wearing a T-shirt that has a green numeral 1, my not wearing a T-shirt that has a green numeral 2, etc. Then by not wearing a T-shirt with any green numerals, I fulfill lots of desires.
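The gaming strategy can be made vivid with a toy count (the increment value and counts are invented):

```python
# Toy count: if each satisfied desire contributes a fixed increment to
# happiness, inducing n trivially-satisfied desires yields n increments,
# so "happiness" scales with the neuroscientist's bill. Numbers invented.

INCREMENT = 1.0

def induced_desires(n):
    # Desires like "that I not wear a T-shirt with a green numeral k",
    # for k = 1..n; all satisfied at once by wearing no numbered T-shirt.
    return [f"no green numeral {k} T-shirt" for k in range(1, n + 1)]

def total_satisfaction(n):
    return len(induced_desires(n)) * INCREMENT

print(total_satisfaction(10))    # 10.0
print(total_satisfaction(1000))  # 1000.0
```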

Thursday, July 13, 2017

Love and happiness

Could perfect happiness consist of perfect love?

Here’s a line of argument that it couldn’t. Constitutively central to love are the desire for the beloved’s good and for union with the beloved. A love is no less perfect when its constitutive desires are unfulfilled. But perfect happiness surely cannot be even partly constituted by unfulfilled desires. If perfect happiness consisted of perfect love, then one could have a perfect happiness constituted at least partly by unfulfilled desires.

When this argument first occurred to me a couple of hours ago, I thought it settled the question. But it doesn’t quite. For there is a special case where a perfect love’s constitutive desires are always fulfilled, namely when the object of the love is necessarily in a perfectly good state, so that the desire for the beloved’s good is necessarily fulfilled, and when the union proper to the love is of such a sort that it exists whenever the love does. Both of these conditions might be thought to be satisfied when the object of love is God. Certainly, a desire for God’s good is always fulfilled. Moreover, although perfect love is compatible with imperfect union in the case of finite objects of love, perfect love of God may itself be a perfect union with God. If so, then our happiness could consist in perfect love for God.

I am not sure the response to the argument works but I am also not sure it doesn’t work. But at least, I think, my initial argument does establish this thesis:

  • If perfect happiness consists of perfect love, it consists of perfect love for God.

Of course none of the above poses any difficulty for someone who thinks that perfect happiness consists of fulfilled perfect love.

Monday, May 1, 2017

Desire-belief theory and soft determinism

Consider this naive argument:

  1. If the desire-belief theory of motivation is true, whenever I act, I do what I want.
  2. Sometimes in acting I do what I do not want.
  3. So the desire-belief theory is false.

Some naive arguments are nonetheless sound. (“I know I have two hands, …”) But that’s not where I want to take this line of thought, though I could try to.

I think there are two kinds of answers to this naive argument. One could simply deny (2), espousing an error theory about what happens when people say “I did A even though I didn’t want to.” But suppose we want to do justice to common sense. Then we have to accept (2). And (1) seems to be just a consequence of the desire-belief theory. So what can one say?

Well, one can say that “what I want” is used in a different sense in (1) and (2). The most promising distinction here seems to me to be between what one wants overall and what one has a desire for. The desire-belief theorist has to affirm that if I do something, I have a desire for it. But she doesn’t have to say that I desire the thing overall. To make use of this distinction, (2) has to say that I act while doing what I do not overall want.

If this is the only helpful distinction here, then someone who does not want to embrace an error theory about (2) has to admit that sometimes we act not in accord with what we overall want. Moreover, it seems almost as much a truism as (2) that:

  4. Sometimes in acting freely I do what I do not want.

On the present distinction, this means that sometimes in acting freely, I do something that I do not overall desire.

But this in turn makes soft determinism problematic: for if my action is determined and isn’t what I overall desire, and desire-belief theory is correct, then it is very hard to see how the action could possibly be free.

There is a lot of argument from ignorance (the only relevant distinction seems to be…, etc.) in the above. But if it can all be cashed out, then we have a nice argument that one shouldn’t be both a desire-belief theorist and a soft determinist. (I think one shouldn’t be either!)

Monday, January 9, 2017

Maps from desires and beliefs to actions

On a naive Humean picture of action, we have beliefs and desires and together these yield our actions.

But how do beliefs and desires yield actions? There are many (abstractly speaking, infinitely many, but perhaps only a finite subset is physically possible for us) maps from beliefs and desires to actions. Some of these maps might undercut essential functional characteristics of desires—thus, perhaps, it is impossible to have an agent that minimizes the satisfaction of her desires. But even when we add some reasonable restrictions, such as that agents be more likely to choose actions that are more likely to further the content of their desires, there will still be infinitely many maps available. For instance, an agent might always act on the strongest salient desire while another agent might randomly choose from among the salient desires with weights proportional to the strengths—and in between these two extremes, there are many options (infinitely many, speaking abstractly). Likewise, there are many ways that an agent could approach future change in her desires: allow future desires to override present ones, allow present desires to override future ones, balance the two in a plethora of ways (e.g., weighting a desire by the time-integral of its strength, or perhaps doing so after multiplying by a future-discount function), etc.
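The two extreme maps just mentioned can be sketched concretely (the desire strengths are invented, and the decision rules are merely illustrative):

```python
import random
from collections import Counter

# Two of the many maps from desires to actions: always act on the
# strongest salient desire, vs. choose randomly with probability
# proportional to desire strength. Desire strengths are invented.

desires = {"eat": 3.0, "sleep": 1.0, "work": 2.0}

def strongest(desires):
    """Deterministic map: act on the strongest salient desire."""
    return max(desires, key=desires.get)

def proportional(desires, rng=random):
    """Stochastic map: sample an action with weights equal to strengths."""
    actions, weights = zip(*desires.items())
    return rng.choices(actions, weights=weights)[0]

# Same desires, different behavior: the first agent always eats;
# the second eats only about half the time.
print(strongest(desires))  # eat
counts = Counter(proportional(desires) for _ in range(10_000))
print(counts)  # "eat" predominates but "sleep" and "work" also occur
```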

One could, I suppose, posit an overridingly strong desire to act according to one particular map from beliefs and desires to actions. But that is psychologically implausible. Most people aren’t reflective enough to have such a desire. And even if one had such a desire, it would be unlikely to in fact have strength sufficient to override all first-order desires—rare (and probably silly!) is the person who wouldn’t be willing to make a slight adjustment to how she chooses between desires in order to avoid great torture.

Nor will it help to move from desires to motivational structures like preferences or utility assignments. For instance, the different approaches towards risk and future change in motivational structure will still provide an infinity of maps from beliefs (or, more generally, representational structures) and motivational structures to actions.

Here’s one move that can be made: Each of us in fact acts according to some “governing mapping” from motivational and representational structures to actions (or, better, probabilities of actions, if we drop Hume’s determinism as we should). We can then extend the concept of motivational structure to include such a highest level mapping. Thus, perhaps, our motivational structure consists of two things: an assignment of utilities and a mapping from motivational and representational structures to actions.

But at this point the bold Humean claim that beliefs are impotent to cause action becomes close to trivial. For of course everybody will agree that we all implement some mapping from motivational and representational structures to actions or action probabilities (maybe not numerical ones), and if this mapping itself counts as part of the motivational structure, then everyone will agree that we all have a motivational structure essential to all of our actions. A naive cognitivist, for instance, can say that the governing mapping is one which assigns to each motivational and representational structure pair the action that is represented as most likely to be right (yes, this mapping doesn’t depend on the specific contents of the motivational structure).

Perhaps, though, a Humean can at least maintain a bold claim that motivational structures are not subject to rational evaluation. But if she does that, then the only way she can evaluate the rationality of action is by the action’s fit to the motivational and representational structures. But if the motivational structures include the actually implemented governing mapping, then every action an agent performs fits the structures. Hence the Humean who accepts the actual governing mapping as part of the motivational structure has to say that all actions are rational. And that’s a bridge too far.

Of course a non-Humean also has to give an account of the plurality of ways in which motivational and representational structures can be mapped onto actions. And if the claim that there is an actually implemented governing mapping is close to trivial, as I argued, then the non-Humean probably has to accept it, too. But she has at least one option not available to the Humean. She can, for instance, hold that motivational structures are subject to rational evaluation, and hence that there are rational constraints—maybe even to the point of determining a unique answer—on what the governing mapping should be like.

Monday, January 2, 2017

Humean views of rationality and the pursuit of money

Consider a Humean package view of rationality where:

  1. The end of practical rationality is desire satisfaction.

  2. All the rational motivational drive in our decisions comes from our desires.

  3. There are no rational imperatives to have desires.

Now suppose that you learn that some costless action will further one or more of your desires, but you have no idea which desire or desires will be furthered by that action. (If we want to have some ideality constraints on which desires make action rational—say, only desires that would survive idealized psychotherapy—then we can suppose that you also know that the desire or desires furthered by the action will satisfy those constraints. I will ignore this wrinkle.)

Any theory of rationality that holds it to be rational to pursue one’s desires should hold it both rational and possible to take that costless action. In the abstract, a case where you know that some desire will be furthered but have no idea which one seems a strange edge case. But actually there is nothing all that strange about this. When money is offered to us, sometimes we have a clear picture of what the money would allow us to do. But sometimes we don’t: we just know that the money will help further some end or other. (Of course, in some people, the pursuit of money may have a non-instrumental dimension, but that’s vicious and surely unnecessary.)

So now let’s go back to the costless action that furthers one or more of your desires and the desire theory of rational motivation. How can this theory accommodate this action?

Option 1: Particular desires. You pick some desire of yours—let’s say, a desire to read a good book—and you think to yourself: “There is a non-zero probability that the action furthers my desire to read a good book.” Then the desire to read a good book, in the usual end-to-means ways, motivates you to do the costless action.

That, of course, could work. And in fact, in the case of money we do sometimes proceed by imagining something that we could buy. However, thinking that what motivates one is just the non-zero probability of furthering a particular desire gets things wrong for two reasons. The first is that we could imagine the case being enriched by your learning that the desire that will be furthered by the action is none of the desires that would come to mind if you were to spend less than a minute thinking about the case but that you need to make your decision within a minute. The inability to think of a particular desire that even might be furthered by the action does not affect the rational possibility of taking the costless action.

The second is that this approach gets the strength of motivation wrong. You have many desires, and the desire to read a good book is only one among many. The probability that that desire to read a good book would be furthered by the costless action might well be tiny, especially if you received the further information that it is only one of your desires that is furthered by the action. Such a small probability of a benefit could still motivate you to take a costless action, but it may not work for similar cases where there is a modest cost. For instance, we can suppose you learn that:

  4. The benefit is roughly equal to reading a good book as measured by desire-satisfaction.

  5. The cost is roughly equal to a tenth of the benefit of reading a good book as measured by desire-satisfaction.

  6. You have a hundred desires and the one furthered is but one of them.

Well, then, the action is clearly worth it by (4) and (5). But it’s not worth doing the action simply on a one percent chance that it will lead to reading a good book, since the cost is ten percent of the benefit of reading a good book.
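The arithmetic here can be made explicit in a few lines (the variable names and the choice of units are mine; the numbers are the ones from the case):

```python
# Let B be the desire-satisfaction benefit of reading a good book.
B = 1.0                     # benefit, in "good book" units
cost = 0.1 * B              # stipulated: cost is a tenth of the benefit
n_desires = 100             # stipulated: a hundred desires, one furthered

# Full information: a desire of book-level weight is certainly furthered.
ev_full = B - cost          # positive, so the action is clearly worth it

# Motivation via one particular desire: only a 1-in-100 chance that
# *that* desire is the one furthered.
ev_particular = (1 / n_desires) * B - cost   # negative

assert ev_full > 0 > ev_particular
```

So an agent motivated only by the small chance of furthering one particular desire would irrationally decline an action that is clearly worth taking.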

One might try to remedy the second problem by mentally going through a larger number of desires so as to increase the probability that some one of the desires will be fulfilled. But we still have the first objection—there may not be enough time to do this—and surely it is implausible that one would have to go through such mental lists of desires in order to get the motivation.

Option 2: A higher-order desire to have satisfied desires. Suppose you have a higher-order desire H to satisfy lower-order desires. Then while you don’t know which lower-order desire is furthered by the action, you do know that this higher-order desire is furthered by it.

This approach seems to lead to an unfortunate double-counting. When you sit down to read a good book, do you really get two benefits, one of reading the book and the other of furthering the higher-order desire to have satisfied lower-order desires? If not, the approach is problematic. But if so, then it gets the rational strength of motivation wrong. For suppose that you are choosing between two actions. Action A will lead to your reading a good book. Action B will lead to the fulfillment of an unknown desire other than reading a good book, a desire you nonetheless know to have the same weight. On the higher-order solution, it seems you have a double motivation for action A, namely H and the desire to read a good book, but only a single motivation for action B, namely H, and hence you should have a twice as strong rational motivation for A. But that’s surely not rational!

Maybe, though, you can get out of the double-counting in some way, by having some story about desire-overlap, so that H and the desire to read a good book don’t add up to a double desire. I suspect that this may undercut the force of the story, by making H not be a real desire.

But there is a second and more serious problem with the story. Suppose that Jim has all the usual lower-order desires but lacks H. If rational motivation comes from desires, then Jim will not be rationally motivated to the action. (Maybe he will have some accidental non-rational motivation for the action.) But surely not going for a costless action that he knows will fulfill some desire of his will be a rational failing, assuming that it’s rational to fulfill one’s desires. Hence there will have to be a rational imperative to have H among one’s desires, contrary to the third part of the Humean package we are exploring.

Now I suppose we could drop the third part of the Humean picture, and hold that rationality requires some desires like H. But I think this makes the rest of the picture less plausible. If rationality requires one to have certain desires, it could just as well require one directly to fulfill certain ends, thereby undercutting the second part of the Humean picture.

Finally, I should note that not all non-Humeans should rejoice at this argument. For similar considerations may apply against some other views. For instance, some Natural Law views that tie motivation very tightly to basic goods may have this problem.

Thursday, October 27, 2016

Three strengths of desire

Plausibly, having satisfied desires contributes to my well-being and having unsatisfied desires contributes to my ill-being, at least in the case of rational desires. But there are infinitely many things that I’d like to know and only finitely many that I do know, and my desire here is rational. So my desire and knowledge state contributes infinite misery to me. But it does not. So something’s gone wrong.

That’s too quick. Maybe the things that I know are things that I more strongly desire to know than the things that I don’t know, to such a degree that the contribution to my well-being from the finite number of things I know outweighs the contribution to my ill-being from the infinite number of things I don’t know. In my case, I think this objection holds, since I take myself to know the central truths of the Christian faith, and I take that to make me know things that I most want to know: who I am, what I should do, what the point of my life is, etc. And this may well outweigh the infinitely many things that I don’t know.

Yes, but I can tweak the argument. Consider some area of my knowledge. Perhaps my knowledge of noncommutative geometry. There is way more that I don’t know than that I know, and I can’t say that the things that I do know are ones that I desire so much more strongly to know than the ones I don’t know so as to balance them out. But I don’t think I am made more miserable by my desire and knowledge state with respect to noncommutative geometry. If I neither knew anything nor cared to know anything about noncommutative geometry, I wouldn’t be any better off.

Thinking about this suggests there are three different strengths in a desire:

  1. Sp: preferential strength, determined by which things one is inclined to choose over which.

  2. Sh: happiness strength, determined by how happy having the desire fulfilled makes one.

  3. Sm: misery strength, determined by how miserable having the desire unfulfilled makes one.

It is natural to hypothesize that (a) the contribution to well-being is Sh when the desire is fulfilled and −Sm when it is unfulfilled, and (b) in a rational agent, Sp = Sh + Sm. As a result of (b), one can have the same preferential strength, but differently divided between the happiness and misery strengths. For instance, there may be a degree of pain such that the preferential strength of my desire not to have that pain equals the preferential strength of my desire to know whether the Goldbach Conjecture is true. I would be indifferent whether to avoid the pain or learn whether the Goldbach Conjecture is true. But they are differently divided: in the pain case Sm >> Sh and in the Goldbach case Sm << Sh.
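Hypotheses (a) and (b) can be put in a minimal sketch. The numbers are made up purely for illustration; all that matters is that the two desires have equal preferential strength but divide it differently:

```python
def wellbeing(Sh, Sm, fulfilled):
    """Hypothesis (a): a desire contributes Sh to well-being when
    fulfilled and -Sm when unfulfilled."""
    return Sh if fulfilled else -Sm

def Sp(Sh, Sm):
    """Hypothesis (b): in a rational agent, preferential strength
    is the sum Sh + Sm."""
    return Sh + Sm

# Same preferential strength, differently divided (illustrative numbers):
pain = dict(Sh=1.0, Sm=9.0)       # avoiding the pain: Sm >> Sh
goldbach = dict(Sh=9.0, Sm=1.0)   # knowing Goldbach: Sm << Sh

assert Sp(**pain) == Sp(**goldbach)   # hence indifference between them
# But leaving them unfulfilled contributes very differently:
assert wellbeing(fulfilled=False, **pain) < wellbeing(fulfilled=False, **goldbach)
```

The equal Sp values capture the indifference between avoiding the pain and learning the answer, while the unequal Sm values capture how much worse the unfulfilled pain-avoidance desire is for one.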

There might be some desires where Sm = 0. In those cases we think “It would be nice…” For instance, I might have a desire that some celebrity be my friend. Here, Sm = 0: I am in no way made miserable by having that desire be unfulfilled, although the desire might have significant preferential strength—there might be significant goods I would be willing to trade for that friendship. On the other hand, when I desire that a colleague be my friend, quite likely Sm >> 0: I would pine if the friendship weren’t there.

(We might think a hedonist has a story about all this: Sh measures how pleasant it is to have the desire fulfilled and Sm measures how painful the unfulfilled desire is. But that story is mistaken. For instance, consider my desire that people not say bad things behind my back in such a way that I never find out. Here, Sm >> 0, but there is no pain in having the desire unfulfilled, since when it’s unfulfilled I don’t know about it.)

Tuesday, October 18, 2016

Conditional vs. means-limited intentions

This morning, I set out to walk to the Philosophy Department. If asked my intention, I might have said that it was to reach the Department. And in actual fact I did reach it. Suppose, however, that as I was walking, my wife phoned me to inform me of a serious family emergency that required me to turn back, and that I did in fact turn back.

Here’s a puzzle. The family emergency in this (fortunately) hypothetical scenario seems to have frustrated my intention to reach the Department. On the other hand, surely I did not intend to reach the Department no matter what. That would have been quite wicked (imagine that I could only reach the Department by murdering someone). If I did not intend to reach the Department no matter what, it seems that my intention was conditional, such as to reach the Department barring the unforeseen. But the unforeseen happened, so my conditional intention wasn’t frustrated—it was mooted. If I intend to fail a student if he doesn’t turn in his homework, and he turns in his homework, my intention is not frustrated. So my intention was frustrated and not frustrated, it seems.

Perhaps rather than my intention being frustrated, it was my desire to reach the Department that was frustrated. But that need not be the case. Suppose, contrary to fact, that I was dreading my logic class today and would have appreciated any good excuse to bail on it. Then either I had no desire to reach the Department or my desire was conditional again: to reach the Department unless I can get out of my logic class. In neither case was my desire frustrated.

Let me try a different solution. I intended to reach the Department by morally licit means. The phone call made it impossible for me to reach the Department by morally licit means—reaching the Department would have required me to neglect my family. My intention wasn’t relevantly conditional, but included a stipulation as to the means. Thus my intention was frustrated when it became impossible for me to reach the Department by morally licit means.

The above suggests that our intentions should generally be thus limited in respect of means, unless the means are explicitly specified all the way down (and they probably never are). Otherwise, our intention commits us to wicked courses of action in some possible circumstances. Of course, the limitation, just as the intention itself, will typically be implicit.

Tuesday, May 24, 2016

Unreleasable promises and the value of punishment

Alice and Bob are conscientious vegetarians. Alice gets Bob to promise her that if Alice ever considers ceasing to be vegetarian, Bob should offer her the most powerful arguments in favor of vegetarianism even if Alice doesn't want to hear them. Years pass, and Alice's vegetarian fervor fades, and she mentions to Bob that she is considering giving up vegetarianism. Alice then says: "Please don't try to convince me otherwise."

What should Bob do? As a rule, the promisee can release the promiser from a promise. So it seems that Alice's request that Bob not importune her with the arguments for vegetarianism overrides the promise. But Bob promised to offer the arguments even if Alice didn't want to hear them. It seems that this was a promise where the usual release rule makes no sense. Can a promise like that be valid?

As the case demonstrates, there are times when it would be useful to be able to make promises that one cannot be released from by the promisee. But one cannot infer the existence of an ability from its usefulness: it could be useful for a pig to be able to fly. Still, it seems pretty plausible that Bob's promise is valid.

But now compare another case. During a fight, Carlos spitefully promises Alice that he's not going to come to Alice's birthday party even if she wants him to come. Carlos does not, I think, have any moral duty to keep his promise if Alice reaches out to mend fences, releases him from his promise and invites him to her party.

In fact, my sense is that the release from the promise is irrelevant in the case of Carlos. For suppose that Dan, also fighting with Alice, promises Alice not to get her a birthday present. Dan does not, I think, violate any moral duty by giving Alice a birthday present, even absent a release, as long as it's clear that Alice would enjoy the present.

So how does the Bob case differ from the Carlos and Dan cases? I think it's that what Carlos and Dan promise Alice isn't good, or if it has any value it's a value dependent on how Alice feels about it at the time. But what Bob promises Alice has a value independent of how Alice feels at the time.

But here is another kind of unreleasable promise: an authority might unconditionally promise Alice a fair punishment should Alice do a particular wrong. And it is clear that Alice's releasing of the authority is irrelevant. If what I said about Bob's, Carlos' and Dan's promises is a guide, then unreleasable promises must be valuable for the promisee independently of the promisee's views and desires. Hence, just punishment is good for the punishee.