Showing posts with label reasons.

Monday, September 22, 2025

Requests and obligations

By requesting something from someone, we create a reason for them to fulfill the request. On an individualistic view of human beings, this is a rather awesome power—somehow I reach into your space of reasons and create a new one.

It is tempting to downplay the force of reasons created by a request. After all, it seems that a mere request can always be legitimately turned down.

But that’s not right. There are times when a request creates an obligation. For it may be that apart from the request one’s reasons for an action were nearly conclusive, and with the request they become conclusive.

And besides that, a successfully transmitted request always creates a moral obligation to consider the request. Sometimes the request may be quickly rejectable on the basis of a background policy. But even a quick rejection requires consideration.

Questions, of course, are a type of request: they are a request for an answer. Thus, they too always create a moral obligation.

Friday, July 11, 2025

Reasons and direct support

A standard view of reasons is that reasons are propositions or facts that support an action. Thus, that I promised to visit is a reason to visit, that pain is bad is a reason to take an aspirin, and that I am hungry is a reason to eat.

But notice that any such fact can also be a reason for the opposite action. That I promised to visit is a reason not to visit, if you begged me not to keep any of my promises to you. That pain is bad is a reason not to take an aspirin, and that I am hungry is a reason not to eat, when I am striving to learn to endure hardship.

One might think that this kind of contingency in what the reasons—considered as propositions or facts—support disappears when the reasons are fully normatively loaded. That I owe you a visit is always a reason to visit, and that I ought to relieve my hunger is always a reason to eat.

This is actually mistaken, too. That I owe you a visit is indeed always a reason to visit. But it can also be a reason—and even a moral one—not to visit. For instance, if a trickster informs me that if I engage in an owed visit to you, they will cause you some minor harm—say, give you a hangnail—then the fact that I owe you a visit gives me a reason not to visit you, though that reason will be outweighed (indeed, it has to be outweighed, or else it wouldn’t be true that I owe you the visit).

In fact, plausibly, that an action is the right one is typically also a moral reason not to perform the action. For whenever we do the right thing, that has a potential of feeding our pride, and we have reason not to feed our pride. Of course, that reason is always outweighed. But it’s still there. And we might even say that the fact that an action is wrong is a reason, albeit not a moral one, to perform that action in order to exhibit one’s will to power (this is a morally bad reason to act on, but one that is probably minimally rational—we understand someone who does this).

All this suggests to me that we need a distinction: some reasons directly support doing something. That I owe you a visit directly supports my visiting you, but only indirectly supports my not visiting you to avoid pride in fulfilling my duties.

But now it is an interesting question what determines which reasons directly support which actions. One option is that the relation is due to entailment: a reason directly supports ϕing provided that that reason entails that ϕing is good or right. But this misses the hyperintensionality in reasons. It is necessarily true that it’s right for me to respect my neighbor; a necessary truth is entailed by every proposition; but that my neighbor is annoying is not directly a reason to respect my neighbor. One might try for some “relevant entailment”, but I am dubious. Perhaps the fact that an action is wrong relevantly entails that there is reason to do it to exhibit one’s will to power, but that ϕing is wrong is directly a reason not to ϕ, and only indirectly a reason to ϕ.

I suspect the right answer is that this direct support relation comes from our human nature: if it is our nature to be directly motivated to ϕ because of R, then R directly supports ϕing. Hmm. This may work for epistemic support, too.

Wednesday, July 9, 2025

Habitual action

Alice has lived a long and reasonable life. She developed a lot of good habits. Every morning, she goes on a walk. On her walk, she looks at the lovely views, she smells the flowers in season, she gathers mushrooms, she listens to the birds chirping, she climbs a tree, and so on. Some of these things she does for their own sake and some she does instrumentally. For instance, she climbs a tree because she saw research that daily exercise promotes health, but she smells the flowers for the sake of the smelling itself.

She figured all this out when she was in her 30s, but now she is 60. One day, she realizes that for a while now she has forgotten the reasoning that led to her habits. In particular, she no longer knows which of her daily activities have intrinsic value and which are merely instrumental.

So what can we say about her habitual activities?

One option is that they retain the teleology with which they were established. Although Alice no longer remembers that she climbs a tree solely for the sake of health, that is indeed what she climbs the tree for. On this picture, when we perform actions from habit, they retain the teleology they had when the habit was established. In particular, it follows that agential teleology need not be grounded in occurrent mental states of the agent. This is a difficult bullet to bite.

The other option is that they have lost their teleological characterization. This implies, interestingly, that there is no fact about whether the actions are being done for their own sake or instrumentally. In particular, it follows that the standard division of actions into those done for their own sake and those done instrumentally is not exhaustive. That is also a difficult bullet to bite.

I am not sure what to say. I suspect one lesson is that action is more complicated than we philosophers think, and our simple characterizations of it miss the complexity.

Monday, July 7, 2025

Acting because of and for reasons

It seems that:

  1. If you pursue friendship because friendship is non-instrumentally valuable, then you pursue friendship non-instrumentally.

But not so. Imagine a rich eccentric offers you $10,000 to pursue something that is non-instrumentally valuable. You think about it, correctly decide friendship is non-instrumentally valuable, and pursue it to gain the $10,000. You are pursuing friendship because it is non-instrumentally valuable, but you are pursuing it merely instrumentally.

More generally, is there any conditional of the form:

  2. If you pursue friendship because p, then you pursue friendship non-instrumentally

that is true in all cases, where p states some known reason for the pursuit of friendship? I don’t think so. For the rich eccentric can tell you that you will get $10,000 if it is both the case that p and you pursue friendship. In that case, if you know that it is the case that p, then your reason for pursuing friendship is p, since it is given p, and only given p, that you will get $10,000 for your pursuit of friendship.

Maybe the lesson from the above is that there is a difference between doing something because of a reason and doing it for the reason. That friendship is non-instrumentally valuable is a reason. In the first rich eccentric case, you are pursuing friendship because of that reason, but you are not pursuing it for that reason. Thus maybe we can say:

  3. If you pursue friendship for the reason that friendship is non-instrumentally valuable, then you pursue friendship non-instrumentally.

In the case where you are aiming only at the $10,000, you are pursuing friendship for the reason that pursuing friendship will get you $10,000, or more explicitly for the conjunctive reason that (a) if friendship is non-instrumentally valuable, pursuing it will get you $10,000, and (b) it is non-instrumentally valuable. But you are nonetheless pursuing friendship because it is non-instrumentally valuable.

There is thus a rather mysterious “acting for R” relation in regard to actions which does not reduce to “acting because R”.

Saturday, November 16, 2024

Reasons of identity

In paradigm instances of parental action, my reason for action is the objective fact that I am a parent, not the subjective fact that I think I'm a parent or identify with being a parent. There are times when it makes sense to act on the subjective fact. If I'm asked by someone (say, a counselor) whether I identify with being a parent, my answer needs to be based on the subjective fact that I so identify. But those are atypical cases.

I suspect this is generally true: cases when one acts on what one is are primary and cases when one acts on what one identifies as are secondary. It is, thus, problematic to define any feature that is significantly rationally relevant to ordinary action in terms of what one identifies with. 

Tuesday, April 2, 2024

Abstaining from goods

There are many times when we refrain from pursuing an intrinsic good G. We can classify these cases into two types:

  1. we refrain despite G being good, and

  2. we refrain because G is good.

The “despite” cases are straightforward, such as when one refrains from reading a novel for the sake of grading exams, despite the value of reading the novel.

The “because” cases are rather more interesting. St Augustine gives the example of celibacy for the sake of Christ: it is because marriage is good that giving it up for the sake of Christ is better. Cases of religious fasting are often like this, too. Or one might refrain from something of value in order to punish oneself, again precisely because the thing is of value. These are self-sacrificial cases.

One might think another type of example of a “because” case is where one refrains from pursuing G now in order to obtain it by a better means, or in better circumstances, in the future. For instance, one might refrain from eating a cake on one day in order to have the cake on the next day, which is a special occasion. Here the value of the cake is part of the reason for refraining from pursuit. On reflection, however, I think this is a “despite” case. For we should distinguish between the good G1 of having the cake now and the good G2 of having the cake tomorrow. Then in delaying one does so despite the good of G1 and because of the good of G2. The good of G1 is not what motivates the refraining, unless the case becomes sacrificial.

I don’t know if all the “because” cases are self-sacrificial in the way celibacy is. I suspect so, but I would not be surprised if a counterexample turned up.

Monday, October 30, 2023

Types of reasons

There are two ways of drawing a distinction between moral and epistemic reasons:

  1. What kind of value grounds the reasons (epistemic or moral).

  2. What kind of thing are the reasons reasons for (e.g., beliefs vs. actions).

If we take option (1), then there will be epistemic reasons not merely for beliefs, but for actions. Thus, the scientist will have epistemic reasons for doing a particularly informative experiment and the teacher may have epistemic reasons for engaging the students in a certain didactically beneficial group activity—i.e., in both cases, epistemic goods (to self and/or others) justify the action.

I like option (2). Moral reasons are reasons for action, while epistemic reasons are reasons for having a belief or credence or the like.

Here are some reasons not to distinguish reasons for action by the kind of value that grounds them, as in (1).

First, we would morally admire someone who sacrificed a well-paying and easy career option to become a science teacher at an inner city school in order to pass the gift of knowledge to students. In other words, our admiration for someone who at significant personal cost promotes an epistemic value (by otherwise morally upstanding means) is moral.

Second, if we distinguish moral and epistemic reasons for action, consider conflicts. We would have to say that a scientist may have moral reasons to come home on time to feed her hungry children, and epistemic reasons to complete an experiment that cannot be done at another time. But now whether it is right to come home on time or to complete the experiment depends on the details. If the information gained from the experiment is unimportant while the experiment will take hours, and the kids are very hungry, coming home on time is right. But if the children are only very slightly hungry, and the experiment would only protract this hunger by a few minutes, while being extremely illuminating, staying a few minutes may well be the right thing to do.

Right in what way? Well, I think once again the kind of praise that we would bestow on the scientist who balances their epistemic goals and their children’s needs well is moral praise. But then the moral praise does not always align with what I have been assuming are moral reasons for action. For we would not morally praise the scientist who neglects a short but extremely illuminating observation in order to make their children dinner a few minutes earlier. Such a scientist would have an insufficient love of epistemic goods. The scientist who hits the right balance is morally praiseworthy. Yet it is very odd to think that one is morally praiseworthy for subordinating moral reasons to non-moral ones!

If you’re not yet convinced by this case, consider one where the moral and non-moral goods are to the same person. A parent is explaining some very interesting matter of science to a child. The child would rather eat a few minutes earlier. If there really is a moral/epistemic reason distinction in actions, then the parent’s reasons for explaining are epistemic and the reasons for feeding are moral. But it could be morally praiseworthy to finish out the explanation.

Third, there are multiple kinds of non-epistemic good: health, virtue, appreciation, friendship, etc. The heterogeneity between them does not appear to be significantly less than that between all of them taken together and the epistemic goods. It seems that if we are cutting nature at the joints, there is no reason to posit a particularly significant cut between the epistemic and non-epistemic goods. Instead, we should simply suppose that there is a variety of types of good, such as maybe health, virtue, beauty, friendship and understanding (and almost certainly others). All of these are alike in being goods, and different from each other as to the fundamental kind of good. To give the honorific “moral” to all of the ones on this list other than understanding seems quite arbitrary.

On the other hand, the distinction as to the type of thing that the reasons are reasons for does seem quite significant. Reasons for action and reasons for belief are quite different things because we respond, or fail to respond, to them quite differently: by willing and by believing, respectively.

It is interesting to ask this question. If the will has moral reasons, and the intellect has epistemic reasons, are there other faculties that have other reasons? Maybe. We can think of a reason R for ϕing in a faculty F as something that has a dual role:

  1. it tends to causally contribute to ϕing within F

  2. its presence (and causal contribution?) partially grounds ϕing counting as an instance of proper activity of F.

(Thus, reasons are causes-cum-justifiers.)

Are there things like that for other faculties F than will and intellect? Yes! The presence of a certain bacterium or virus may be a reason for the immune system to react in a certain way. Humans thus have moral, epistemic and immune reasons, distinguished respectively by being reasons for the will, the intellect and the immune system. And there are doubtless many more (e.g., I expect there are reasons for all our sensory systems’ identifications of stimuli).

Some of these reasons are tied to specific types of goods. Thus, epistemic reasons are tied to epistemic goods, and immune reasons are tied to health goods. But moral reasons are different, in that action has a universality about it where any type of good—including epistemic and health ones—can ground a moral reason. And both epistemic and moral reasons tend to be different from immune reasons in that in the normal course of immune functioning we do not process them intellectually, while both epistemic and moral reasons are intellectually processed in normal use.

Thursday, August 17, 2023

Tiebreakers

You need to lay off Alice or Bob, or else the company goes broke. For private reasons, you dislike Bob and want to see him suffer. What should you do?

The obvious answer is: choose randomly.

But suppose that there is no way to choose randomly. For instance, perhaps an annoying oracle has told you the outcome of any process that you could have used for a random decision. The oracle says “If you flip the penny in your pocket, it will come up heads”, and now deciding that Alice is laid off on heads is tantamount to deciding that Alice is laid off.

So what should you do?

There seems to be something rationally and maybe morally perverse in one’s treatment of Alice if one fires her to avoid firing the person that one wants to fire.

But it seems that if one fires Bob, one does so in order to see him suffer, and that’s wrong.

I have two solutions, not mutually exclusive.

The first is that various rules of morality and rationality only make sense in certain normal conditions. Typical rules of rationality simply break down if one is in the unhappy circumstance of knowing that one’s ability to reason rationally is so severely impaired that there is no correlation between what seems rational and what is rational. Similarly, if one is brainwashed into having to kill someone, but is left with the freedom to choose the means, then one may end up virtuously beheading an innocent person if beheading is less painful than any other method of murder available, because the moral rules against murder presuppose that one has freedom of will. It could be that some of our moral rules also presuppose an ability to engage in random processes, and when that ability is missing, then the rules are no longer applicable. And since circumstances where random choices are possible are so normal, our moral intuitions are closely tied to these circumstances, and hence no answer to the question of what is the right thing to do is counterintuitive.

The second is that there is a special kind of reason, a tie-breaker reason. When one fires Bob with the fact that one wants to see him suffer serving only as a tie-breaker, one is not intending to see him suffer. Perhaps what one is intending, instead, is a conditional: if one of Alice and Bob suffers, it’s Bob.

Monday, March 20, 2023

A flip side to omnirationality

Suppose I do an action that I know benefits Alice and harms Bob. The action may be abstractly perfectly justified, but if I didn’t take into account the harm to Bob, if I didn’t treat the harm to Bob as a reason against the action in my deliberation, then Bob would have a reason to complain about my deliberation if he somehow found out. If I was going to perform the action, I should have performed it despite the harm to Bob, rather than just ignoring the harm to Bob. I owed it to Bob not to ignore him, even if I was in the end going to go with the benefit to Alice.

But suppose that I am perfectly virtuous, and the action is one that I owed Alice in a way that constituted a morally conclusive reason for the action. (The most plausible case will be where the action is a refraining from something absolutely wrong.) Once I see that I have morally conclusive reason for the action, it seems that taking other reasons into account is a way of toying with violating the conclusive reason, and that kind of toying is not compatible with perfect virtue.

Still, the initial intuition has some pull. Even if I have an absolute duty to do what I did for Alice, I should be doing it despite the harm to Bob, rather than just ignoring the harm to Bob. I don’t exactly know what it means not to just ignore the harm to Bob. Maybe in part it means being the sort of person who would have been open to avoiding the action if the reasons for it weren’t morally conclusive?

If I stick to the initial intuition, then we get a principle of perfect deliberation: In perfect deliberation, the deliberator does not ignore any reasons—or, perhaps, any unexcluded reasons—against the action one eventually chooses.

If this is right, then it suggests a kind of flip side to divine omnirationality. Divine omnirationality says that when God does something, he does it for all the unexcluded reasons that favor it. The flip side would be that when God does something, he does it despite all the unexcluded reasons that disfavor it, ignoring none of them.

Monday, December 12, 2022

More on non-moral and moral norms

People often talk of moral norms as overriding. The paradigm kind of case seems to be like this:

  1. You are N-forbidden to ϕ but morally required to ϕ,

where “N” is some norm like that of prudence or etiquette. In this case, the moral requirement of ϕing overrides the N-prohibition on ϕing. Thus, you might be rude to make a point of justice or sacrifice your life for the sake of justice.

But if there are cases like (1), there will surely also be cases where the moral considerations in favor of ϕing do not rise to the level of a requirement, but are sufficient to override the N-prohibition. In those cases, presumably:

  2. You are N-forbidden to ϕ but morally permitted to ϕ.

Cases of supererogation look like that: you are morally permitted to do something contrary to prudential norms, but not required to do so.

So far so good. Moral norms can override non-moral norms in two ways: by creating a moral requirement contrary to the non-moral norms or by creating a moral permission contrary to the non-moral norms.

But now consider this. What happens if the moral considerations are at an even lower level, a level insufficient to override the N-prohibition? (E.g., what if to save someone’s finger you would need to sacrifice your arm?) Then, it seems:

  3. You are N-forbidden to ϕ and not morally permitted to ϕ.

But this would be quite interesting. It would imply that in the absence of sufficient moral considerations in favor of ϕing, an N-prohibition would automatically generate a moral prohibition. But this means that the real normative upshot in all three cases is given by morality, and the N-norms aren’t actually doing any independent normative work. This suggests strongly that on such a picture, we should take the N-norms to be simply a species of moral norms.

However, there is another story possible. Perhaps in the case where the moral considerations are at too low a level to override the N-prohibition, we can still have moral permission to ϕ, but that permission no longer overrides the N-prohibition. On this story, there are two kinds of cases, in both of which we have moral permission, but in one case the moral permission comes along with sufficiently strong moral considerations to override the N-prohibition, while in the other it does not. On this story, moral requirement always overrides non-moral reasons; but whether moral considerations override non-moral considerations depends on the relative strengths of the two sets of considerations.

Still, consider this. The judgment whether moral considerations override the non-moral ones seems to be an eminently moral judgment. It is the person with moral virtue who is best suited to figuring out whether such overriding happens. But what happens if morality says that the moral considerations do not override the N-prohibition? Is that not a case of morality giving its endorsement to the N-prohibition, so that the N-prohibition would rise to the level of a moral prohibition as well? But if so, then that pushes us back to the previous story where it is reasonable to take N-considerations to be subsumed into moral considerations.

I don’t want to say that all norms are moral norms. But it may well be that all norms governing the functioning of the will are moral norms.

Tuesday, December 6, 2022

Dividing up reasons

One might think that reasons for action are exhaustively and exclusively divided into the moral and the prudential. Here is a problem with this. Suppose that you have a spinner divided into red and green areas. If you spin it and it lands on red, something nice happens to you; if it lands on green, something nice happens to a deserving stranger. You clearly have reason to spin the spinner. But, assuming the division of reasons, your reason for spinning it is neither moral nor prudential.

So what should we say? One possibility is to say that there are only reasons of one type, say the moral. I find that attractive. Then benefits to yourself also give you moral reason to act, and so you simply have a moral reason to spin the spinner. Another possibility is to say that in addition to moral and prudential reasons there is some third class of “mixed” or “combination” reasons.

Objection: The chance p of the spinner landing on red is a prudential reason and the chance 1 − p of its landing on green is a moral reason. So you have two reasons, one moral and one prudential.

Response: That may be right in the simple case. But now imagine that the “red” set is a saturated nonmeasurable subset of the spinner edge, and the “green” set is also such. A saturated nonmeasurable subset has no reasonable probability assignment, not even a non-trivial range of probabilities like from 1/3 to 1/2 (at best we can assign it the full range from 0 to 1). Now the reason-giving strength of a chancy outcome is proportionate to the probability. But in the saturated nonmeasurable case, there is no probability, and hence no meaningful strength for the red-based reason or for the green-based reason. But there is a meaningful strength for the red-or-green moral-cum-prudential reason. The red-or-green-based reason hence does not reduce to two separate reasons, one moral and one prudential.
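To make the response concrete, here is a toy numeric sketch; the utility value and the interval representation of probability are my own illustrative assumptions, not part of the argument:

```python
# Toy sketch: represent the assignable probability of each event as an interval.
# The vacuous interval [0, 1] stands in for a saturated nonmeasurable event.

def reason_strength(prob_interval, utility):
    """Strength of a chancy reason, taken as proportional to probability."""
    lo, hi = prob_interval
    return (lo * utility, hi * utility)

NICE = 10.0                  # assumed utility of either nice outcome
red = (0.0, 1.0)             # saturated nonmeasurable: only the vacuous interval
green = (0.0, 1.0)           # likewise
red_or_green = (1.0, 1.0)    # the union is the whole spinner edge: probability 1

print(reason_strength(red, NICE))           # (0.0, 10.0): no meaningful strength
print(reason_strength(green, NICE))         # (0.0, 10.0): no meaningful strength
print(reason_strength(red_or_green, NICE))  # (10.0, 10.0): a sharp strength
```

The red-based and green-based strengths come out completely undetermined, while the disjunctive moral-cum-prudential reason gets a sharp strength.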

Now, one might have technical worries about saturated nonmeasurable sets figuring in decisions. I do. (E.g., see the Axiom of Choice chapter in my infinity book.) But now instead of supposing saturated nonmeasurable sets, suppose a case where an agent subjectively has literally no idea whether some event E will happen—has no probability assignment for E whatsoever, not even a ranged one (except for the full range from 0 to 1). The spinner landing on a set believed to be saturated nonmeasurable might be an example of such a case, but the case could be more humdrum—it’s just a case of extreme agnosticism. And now suppose that the agent is told that if they so opt, then they will get something nice on E and a deserving stranger will get something nice otherwise. As before, the agent’s reason to opt has a meaningful strength, but it does not decompose into a moral reason and a prudential reason each with a meaningful strength of its own.

Final remark: The argument applies to any exclusive and exhaustive division of reasons into “simple” (i.e., non-combination) types.

Monday, November 14, 2022

Reducing goods to reasons?

In my previous post I cast doubt on reducing moral reasons to goods.

What about the other direction? Can we reduce goods to reasons?

The simplest story would be that goods reduce to reasons to promote them.

But there seem to be goods that give no one a reason to promote them. Consider the good fact that there exist (in the eternalist sense: existed, exist now, will exist, or exist timelessly) agents. No agent can promote the fact that there exist agents: that good fact is part of the agent’s thrownness, to put it in Heideggerese.

Maybe, though, this isn’t quite right. If Alice is an agent, then Alice’s existence is a good, but the fact that some agent or other exists isn’t a good as such. I’m not sure. It seems like a world with agents is better for the existence of agency, and not just better for the particular agents it has. Adding another agent to the world seems a lesser value contribution than just ensuring that there is agency at all. But I could be wrong about that.

Another family of goods, though, are necessary goods. That God exists is good, but it is necessarily true. That various mathematical theorems are beautiful is necessarily true. Yet no one has reason to promote a necessary truth.

But perhaps we could have a subtler story on which goods reduce not just to reasons to promote them, but to reasons to “stand for them” (taken as the opposite of “standing against them”), where promotion is one way of “standing for” a good, but there are others, such as celebration. It does not make sense to promote the existence of God, the existence of agents, or the Pythagorean theorem, but celebrating these goods makes sense.

However, while it might be the case that something is good just in case an agent should “stand for it”, it does not seem right to think that it is good to the extent that an agent should “stand for it”. For the degree to which an agent should stand for a good is determined not just by the magnitude of the good, but the agent’s relationship to the good. I should celebrate my children’s accomplishments more than strangers’.

Perhaps, though, we can modify the story in terms of goods-for-x, and say that G is good-for-x to the extent that x should stand for G. But that doesn’t seem right, either. I should stand for justice for all, and not merely to the degree that justice-for-all is good-for-me. Moreover, there are goods that are good for non-agents, while a non-agent does not have a reason to do anything.

I love reductions. But alas it looks to me like reasons and goods are not reducible in either direction.

Friday, October 28, 2022

Choices on a spectrum

My usual story about how to reconcile libertarianism with the Principle of Sufficient Reason is that when we choose, we choose on the basis of incommensurable reasons, some of which favor the choice we made and others of which favor other choices. Moreover, this is a kind of contrastive explanation.

This story, though it has some difficulties, is designed for choices between options that promote significantly different goods—say, whether to read a book or go for a walk or write a paper.

But a different kind of situation comes up for choices of a point on a spectrum. For instance, suppose I am deciding how much homework to assign, how hard a question to ask on an exam, or how long a walk to go for. What is going on there?

Well, here is a model that applies to a number of cases. There are two incommensurable goods, one better served as one goes in one direction on the spectrum and the other better served as one goes in the other direction. Let’s say that we can quantify the spectrum as one from less to more with respect to some quantity Q (amount of homework, difficulty of a question, or length of a walk), and good A is promoted by less of Q and incommensurable good B is promoted by more of Q. For instance, with homework, A is the student’s having time for other classes and for non-academic pursuits and B is the student’s learning more about the subject at hand. With exam difficulty, A may be avoiding frustration and B is giving a worthy challenge. With a walk, A is reducing fatigue and B is increasing health benefits. (Note that the claim that A is promoted by less Q and B is promoted by more Q may only be correct within a certain range of Q. A walk that is too long leads to injury rather than health.)

So, now, suppose we choose Q = Q1. Why did we choose that? It is odd to say that we chose Q1 on account of the goods A and B that are opposed to each other—that sounds inconsistent.

Here is one suggestion. Take the choice to make Q equal to Q1 to be the conjunction of two (implicit?) choices:

  1. Make Q at most Q1

  2. Make Q at least Q1.

Now, we can explain choice (1) in terms of its serving good A better than the alternative, which would be to make Q bigger than Q1. And we can explain (2) in terms of its serving good B better than the alternative of making Q smaller.

Here is a variant suggestion. Partition the set of alternatives into two ranges: R1, consisting of options where Q < Q1, and R2, where Q > Q1. Why did I choose Q = Q1? Well, I chose Q1 over all the choices in R1 because Q1 better promotes B than anything in R1, and I chose Q1 over all the choices in R2 because Q1 better promotes A than anything in R2.

On both approaches, the apparent inconsistency of citing opposed goods disappears because they are cited to explain different contrasts.

Note that nothing in the above explanatory stories requires any commitment to there being some sort of third good, a good of balance or compromise between A and B. There is no commitment to Q1 being the best way to position Q.
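A toy numerical rendering of the variant suggestion may help; the functional forms for A and B and the chosen point Q1 are invented purely for illustration:

```python
# Toy model: good A (say, fatigue avoidance) is better served by less Q,
# good B (say, health benefit) by more Q; the forms below are made up.

def A(q):
    return 10.0 - q   # decreasing in Q

def B(q):
    return 2.0 * q    # increasing in Q

options = [0.5 * k for k in range(21)]   # Q ranges over 0, 0.5, ..., 10
Q1 = 6.0                                  # the point actually chosen

R1 = [q for q in options if q < Q1]       # options with less Q
R2 = [q for q in options if q > Q1]       # options with more Q

# One contrastive explanation per range, each citing a different good:
assert all(B(Q1) > B(q) for q in R1)      # against R1: Q1 better promotes B
assert all(A(Q1) > A(q) for q in R2)      # against R2: Q1 better promotes A
```

Both assertions hold for any interior choice of Q1, which fits the point just made: the explanatory story carries no commitment to Q1 being the best placement of Q.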

Thursday, August 18, 2022

Reasons and permissions

The fact that a large animal is attacking me would give me both permission and reason to kill the animal. On the basis of cases like that, one might hypothesize that permissions to ϕ come from particularly strong reasons to ϕ.

But there are cases where things are quite different. There is an inexpensive watch on a shelf beside me that I am permitted to destroy. What gives me that permission? It is that I own it. But the very thing that gives me permission, my ownership, also gives me a reason not to smash it. So sometimes the same feature of reality that makes ϕing permissible is also a reason against ϕing.

This is a bit odd. For if it were impermissible to destroy the watch, that would be a conclusive reason against the smashing. So it seems that my ownership moves me from having a conclusive reason against smashing to not having a conclusive reason against smashing. Yet it does that while at the same time being a reason not to smash. Interesting.

I suspect there may be an argument against utilitarianism somewhere in the vicinity.

Wednesday, September 8, 2021

Reasons from the value of true belief

Two soccer teams are facing off, with a billion fans watching on TV. Brazil has a score of 2 and Belgium has a score of 0, and there are 15 minutes remaining. The fans nearly unanimously think Brazil will win. Suddenly, there is a giant lightning strike, and all electrical devices near the stadium fail, taking the game off the air. Coincidentally, during the glitch, Brazil’s two best players get red cards, and now Belgium has a very real chance to win if they try hard.

But the captain of the Brazilian team yells out this argument to the Belgians: “If you win, you will make a billion fans have a false belief. A false belief is bad, and when you multiply the badness by a billion, the result is very bad. So, don’t win!”

Great hilarity ensues among the Belgians and they proceed to trounce the Brazilians.

The Belgians are right to laugh: the consideration that the belief of a billion fans will be falsified by their effort carries little to no moral weight.

Why? Is it that false belief carries little to no disvalue? No. For suppose that now the game is over. At this point, the broadcast teams have a pretty strong moral reason to try to get back on the air in order to inform the billion fans that they were mistaken about the result of the game.

In other words, we have a much stronger reason to shift people’s beliefs to match reality than to shift reality to match people’s beliefs. Yet in both cases the relevant effect on the good and bad in the world can be the same: there is less of the bad of false beliefs and more of the good of true beliefs. An immediate consequence of this is that consequentialism about moral reasons is false: the weight of moral reasons depends on more than the value of the consequences.

It is often said that belief has a mind-to-world direction of fit. It is interesting that this has repercussions not only for the agent’s own epistemic life, but also for the moral life of other parties. We have much more reason to help others to true belief by affecting their beliefs than by affecting the truth and falsity of the content of the beliefs.

Do the Belgians have any moral reason to lose, in light of the fact that losing will make the fans have correct belief? I am inclined to think so: producing a better state of affairs is always worthwhile. But the force of the reason is exceedingly small. (Nor do the numbers matter: the reason’s force would remain exceedingly small even if there were trillions of fans because Earth soccer was famous throughout the galaxy.)

There is a connection between the good and the right, but it is quite complex indeed.

Wednesday, June 16, 2021

Self-regarding moral reasons

Many contemporary ethicists believe that:

  1. Moral reasons are always other-regarding.

Add this very plausible premise:

  2. No action that is on balance supported by moral reasons is bad.

Suppose Alice is out of food on a desert island. She will die of starvation in a week. A malfunctioning robot shows up and offers her a deal (Alice verifies the robot’s buggy software to ensure it would follow through on the deal). In exchange for her agreeing to be tortured horribly for the week of life that she has left, the robot will fly to the other side of the world, and tell a joke to a random stranger who will have a minute of enjoyment from the joke.

Clearly, Alice has a moral reason to go for the deal: it will brighten up a stranger’s day. The only relevant reason against the deal is the harm to Alice, and by (1) that is not a moral reason; so accepting the deal is on balance supported by moral reasons. Thus, the action is not bad by (2). But it is clearly a bad action.

I suppose one could reject (2), but it seems to me much better to reject (1), and to hold that prudence is a moral virtue, and if Alice takes the deal, she is morally failing by imprudence.

Monday, March 1, 2021

Deserving the rewards of virtue

We have the intuition that when someone has worked uprightly and hard for something good and thereby gained it, they deserve their possession of it. What does that mean?

If Alice ran 100 meters faster than her opponents at the Olympics, she deserves a gold medal. In this case, it is clear what is meant by that: the organizers of the Olympics owe her a gold medal in just recognition of her achievement. Thus, Alice’s desert appears to be appropriately analyzable partly in terms of normative properties had by persons other than Alice. In Alice’s case, these properties are obligations of justice, but they could simply be reasons of justice. Thus, if someone has done something heroic and they receive a medal, the people giving the medal typically are not obligated to give it, but they do have reasons of justice to do so.

But there are cases that fit the opening intuition where it is harder to identify the other persons with the relevant normative properties. Suppose Bob spends his life pursuing virtue, and gains the rewards of a peaceful conscience and a gentle attitude to the failings of others. Like Alice’s gold medal, Bob’s rewards are deserved. But if we understand desert as in Alice’s case, as partly analyzable in terms of normative properties had by others, now we have a problem: Who is it that has reasons of justice to bestow these rewards on Bob?

We can try to analyze Bob’s desert by saying that we all have reasons of justice not to deprive him of these rewards. But that doesn’t seem quite right, especially in the case of the gentle attitude to the failings of others. For while some people gain that attitude through hard work, others have always had it. Those who have always had it do not deserve it, but it would still be unjust to deprive them of it.

The theist has a potential answer to the question: God had reasons of justice to bestow on Bob the rewards of virtue. Thus, while Alice deserved her gold medal from the Olympic committee and Carla (whom I have not described but you can fill in the story) deserved her Medal of Honor from the Government, Bob deserved his quiet conscience and “philosophical” outlook from God.

This solution, however, may sound wrong to many Christians, especially but not only Protestants. There seems to be a deep truth to Leszek Kolakowski’s book title God Owes Us Nothing. But recall that desert can also be partly grounded in non-obligating reasons of justice. One can hold that God owes us nothing but nonetheless think that when God bestowed on Bob the rewards of virtue (say, by designing and sustaining the world in such a way that often these rewards came to those who strove for virtue), God was doing so in response to non-obligating reasons of justice.

Objection: Let’s go back to Alice. Suppose that moments after she ran the race, a terrorist assassinated everyone on the Olympic Committee. It still seems right to say that Alice deserved a gold medal for her run, but no one had the correlate reason of justice to bestow it. Not even God, since it just doesn’t seem right to say that God has reasons of justice to ensure Olympic medals.

Response: Maybe. I am not sure. But think about the “Not even God” sentence in the objection. I think the intuition behind the “Not even God” undercuts the case. The reason why not even God had reasons of justice to ensure the medal was that Alice deserved a medal not from God but from the Olympic Committee. And this shows that her desert is grounded in the Olympic Committee, if only in a hypothetical way: Were they to continue existing, they would have reasons of justice to bestow on her the medal.

This suggests a different response that an atheist could give in the case of Bob: When we say that Bob deserves the rewards of virtue, maybe we mean hypothetically that if God existed, God would have reasons of justice to grant them. This does not strike me as a plausible analysis. If God doesn’t exist, the existence of God is a far-fetched and fantastical hypothesis. It is implausible that Bob’s ordinary case of desert be partly grounded in hypothetical obligations of a non-existent fantastical being. On the other hand, it is not crazy to think that Alice’s desert, in the exceptional case of the Olympic Committee being assassinated, be partly grounded in hypothetical obligations of a committee that had its existence suddenly cut short.

Thursday, December 17, 2020

A multiple faculty solution to the problem of conscience

I used to be quite averse to multiplying types of normativity until I realized that in an Aristotelian framework it makes perfect sense to multiply them by their subject. Thus, I should think that 1 = 1, I should look both ways before crossing the street, and I should have a heart-rate of no more than 100. But the norms underlying these claims have different subjects: my intellect, my will and my circulatory system (or perhaps better: I as thinking, I as willing and I as circulating).

In this post I want to offer two solutions to the problem of mistaken conscience that proceed by multiplying norms. The problem of mistaken conscience is twofold, as there are two kinds of mistakes of conscience. A strong mistake is when I judge something is required when it is forbidden. A weak mistake is when I judge something is permissible when it is forbidden.

Given that I should follow my conscience, a strong mistake of conscience seems to lead to two conflicting obligations: I should ϕ, because my conscience says so, and I should refrain from ϕing, because ϕing is forbidden. Call the claim that strong mistakes of conscience lead to conflicting obligations the Dilemma Thesis. The Dilemma Thesis is perhaps somewhat implausible on its face, but can be swallowed (as Mark Murphy does). However, more seriously, the Dilemma Thesis has the unfortunate result that strong mistakes of conscience are not, as such, mistakes. For the mistake was supposed to be that I judge ϕing as required when it is forbidden. But that is only a mistake when ϕing is not required. But according to the Dilemma Thesis, it is required. So there is no mistake. (There may be a mistake about why it is required, and perhaps one can use that to defuse the problem, but I want to try something else in this post.) Moreover, a view that embraces the Dilemma Thesis needs to explain the blame asymmetry between the obligation to ϕ and the obligation not to ϕ: I am to blame if I go against conscience, but not if I follow conscience.

Weak mistakes are less of a problem, but they still raise the puzzle of why I am not blameworthy if I do what is forbidden when conscience says it’s permissible.

Moving towards a solution, or actually a pair of solutions, start with this thought. When I follow a mistaken conscience, my will does nothing wrong but the practical intellect has made a mistake. In other words, we have two sets of norms: norms of practical intellect and norms of will. In these cases I judged badly but willed well. And it is clear why I am not blameworthy: for I become blameworthy by virtue of a fault of the will, not a fault of the intellect.

But there is still a problem analogous to the problem with the Dilemma Thesis. For it seems that:

  1. In a mistake of conscience, my judgment was bad because it made a false claim as to what I should will.

In the case of a strong mistake, say, I judged that I should will my ϕing whereas in fact I should have nilled my ϕing. But I can’t say that and also say that the will did what it should in ϕing.

This means that if we are to say that the will did nothing wrong and the problem was with the intellect, we need to reject (1). There are two ways of doing this, leading to different solutions to the problem of conscience.

Claim (1) is based on two claims about practical judgment:

  2. The practical intellect’s judgments are truth claims.

  3. These truth claims are claims about what I should will.

We can get out of (1) by denying (2) (with (3) then becoming moot) or by holding on to (2) but rejecting (3).

Anscombe denies (2), for reasons having nothing to do with mistakes of conscience. There is good precedent for denying (2), then.

I find the solution that denies (2) a bit murky, but I can kind of see how one would go about it. Oversimplifying, the intellect presents actions to the will on balance positively or negatively. This presentation does not make a truth claim. The polarity of the presentation by the intellect to the will should not be seen as a judgment that an action has a certain character, but simply as a certain way of presenting the judgment—with propathy or antipathy, one might say. Nonetheless there are norms of presentation built into the nature of the practical intellect. These norms are not truth norms, like the norms of the theoretical intellect, but are more like the norms of the functioning of the body’s thermal regulation system, which should warm up the body in some circumstances and cool it down in others, but does not make truth claims. There are actions that should be positively presented and actions that should be negatively presented. We can say that the actions that should be positively presented are right, but the practical intellect’s positive presentation of an action is not a presentation that the action is right, for that would be an odd circularity: to present ϕing positively would be to present ϕing as something that should be presented positively.

(In reality, the “on balance” positive and negative presentations typically have a thick richness to them, a richness corresponding “in flavor” to words like “courageous”, “pleasant”, etc. However, we need to be careful on this view not to think of the presentation corresponding “in flavor” to these words as constituting a truth claim that a certain concept applies. I am somewhat dubious whether this can all be worked out satisfactorily, and so I worry that the no-truth-claim picture of the practical intellect falls afoul of the thickness of the practical intellect’s deliverances.)

There is a second solution which, pace Anscombe, holds on to the idea that the practical intellect’s judgments are truth claims, but denies that they are claims about what I should will. Here is one way to develop this solution. There are times when an animal’s subsystem is functioning properly but it would be better if it did something else. For instance, when we are sick, our thermal regulation system raises our temperature in order to kill invading bacteria or viruses. But sometimes the best medical judgment will be that we will on the whole be better off not raising the temperature given a particular kind of invader, in which case we take fever-reducing medication. We have two norms here: a local norm of the thermal regulation system and a holistic norm of the organism.

Similarly, there are local norms of the will—to will what the intellect presents to it overall in a positive light, say. And there are local norms of the intellect—to present the truth or maybe that which the evidence points to as true. But there are holistic norms of the acting person (to borrow Wojtyla’s useful phrase), such as not to kill innocents. The practical intellect discerns these holistic norms, and presents them to the will. The intellect can err in its discernment. The will can fail to follow the intellect’s discernment.

The second solution is rather profligate with norms, having three different kinds of norms: norms of the will, norms of the intellect, and norms of the acting person, who comprises at least the will, the intellect and the body.

In a strong mistake of conscience, where we judge that we should ϕ but ϕing is forbidden, and we follow conscience and ϕ, here is what happens. The will rightly follows the intellect’s presentation by willing to ϕ. The acting person, however, goes wrong by ϕing. We genuinely have a mistake of the intellect: the intellect misrepresented what the acting person should do. The acting person went wrong, and did so simpliciter. However, the will did right, and so one is not to blame. We can say that in this case, the ϕing was wrong, but the willing to ϕ was right. And we can say how the pro-ϕing norm takes priority: the norm to will one’s ϕing is a norm of the will, so naturally it is what governs the will.

In a weak mistake of conscience, where we judge that it is permissible to ϕ but it’s not, again the solution is that under the circumstances it was permissible to will to ϕ, but not permissible to ϕ.

There is, however, a puzzle in connecting this story with failed actions. Consider either kind of mistake of conscience, and suppose I will to ϕ but I fail to ϕ due to some non-moral systemic failure. Maybe I will to press a forbidden button, but it turns out I am paralyzed. In that case, it seems that the only thing I did was willing to ϕ, and so we cannot say that I did anything wrong. I think there are two ways out of this. The first is to bite the bullet and say that this is just a case where I got lucky and did nothing wrong. The second is to say that my willing to ϕ can be seen as a trying to ϕ, and it is bad as an action of the acting person but not bad as an action of the will.

Monday, November 30, 2020

Incompatible reasons for the same action

While writing an earlier post, I came across a curious phenomenon. It is, of course, quite familiar that we have incompatible reasons that we cannot act on all of: reasons of convenience often conflict with reasons of morality, say. This familiar incompatibility is due to the fact that the reasons support mutually incompatible actions. But what is really interesting is that there seem to be incompatible reasons for the same action.

The clearest cases involve probabilities. Let’s say that Alice has a grudge against Bob. Now consider an action that has a chance of bestowing an overall benefit on Bob and a chance of bestowing an overall harm on Bob. Alice can perform the action for the sake of the chance of overall harm out of some immoral motive opposed to Bob’s good, such as revenge, or she can perform the action for the sake of the chance of overall benefit out of some moral motive favoring Bob’s good. But it would make no sense to act on both kinds of reasons at once.

One might object as follows: The expected utility of the action, once both the chance of benefit and the chance of harm are taken into account, is either negative, neutral or positive. If it’s negative, only the harm-driven action makes sense; if it’s positive, only the benefit-driven action makes sense; if it’s neutral, neither makes sense. But this neglects the richness of possible rational attitudes to risk. Expected utilities are not the only rational way to make decisions. Moreover, the chances may be interval-valued in such a way that the expected utility is an interval that has both negative and positive components.
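For a concrete instance of the last possibility (the numbers are made up for illustration): suppose the benefit to Bob is worth +10, the harm −8, and the chance p of the benefit is only known to lie between 0.3 and 0.8.

```python
# Interval-valued chance: EU(p) = 10*p + (-8)*(1 - p) = 18*p - 8.
BENEFIT, HARM = 10.0, -8.0

def expected_utility(p):
    return BENEFIT * p + HARM * (1.0 - p)

p_lo, p_hi = 0.3, 0.8
print(expected_utility(p_lo))   # ~ -2.6: at this end, the harm-driven action makes sense
print(expected_utility(p_hi))   # ~ 6.4: at this end, the benefit-driven action makes sense
```

Since the expected-utility interval straddles zero, neither the harm-based nor the benefit-based reason is ruled out by the objector’s trichotomy.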

Another objection is that perhaps it is possible to act on both reasons at once. Alice could say to herself: “Either the good thing happens to Bob, which is objectively good, or the bad thing happens and I am avenged, which is good for me.” Sometimes such disjunctive reasoning does make sense. Thus, one might play a game with a good friend and think happily: “Either I will win, which will be nice for me, or my friend will win, and that’ll be nice, too, since he’s my friend.” But the Alice case is different. The revenge reason depends on endorsing a negative attitude towards Bob, which one cannot do while seeking to benefit Bob.

Or suppose that Carl read in what he took to be holy text that God had something to say about ϕing, but Carl cannot remember if the text said that God commanded ϕing or that God forbade ϕing—it was one of the two. Carl thinks there is a 30% chance it was a prohibition and a 70% chance that it was a command. Carl can now ϕ out of a demonic hope to disobey God or he can ϕ because ϕing was likely commanded by God.

In the most compelling cases, one set of motives is wicked. I wonder if there are such cases where both sets of motives are morally upright. If there are such cases, and if they can occur for God, then we may have a serious problem for divine omnirationality which holds that God always acts for all the unexcluded reasons that favor an action.

One way to argue that such cases cannot occur for God is by arguing that the most compelling cases are all probabilistic, and that on the right view of divine providence, God never has to engage in probabilistic reasoning. But what if we think the right view of providence involves probabilistic reasoning?

We might then try to construct a morally upright version of the Alice case, by supposing that Alice is in a position of authority over Bob, and instead of being moved by revenge, she is moved to impose a harm on Bob for the sake of justice or to impose a good on him out of benevolent mercy. But now I think the case becomes less clearly one where the reasons are incompatible. It seems that Alice can reasonably say:

  1. Either justice will be served or mercy will be served, and I am happy with both.

I don’t exactly know why it is that (1) makes rational sense but the following does not:

  2. Either vengeance on Bob will be served or kindness to Bob will be served, and I am happy with both.

But it does seem that (1) makes sense in a way in which (2) does not. Maybe the difference is this: to avenge requires setting one’s will against the other’s overall good; just punishment does not.

I conjecture that there are no morally upright cases of rationally incompatible reasons for the same action. That conjecture would provide an interesting formal constraint on rationality and morality.

Wednesday, November 25, 2020

Reasons as construals

Scanlon argues that intentions do not affect the permissibility of non-expressive actions because our intentions come from our reasons, and our reasons are like beliefs in that they are not something we choose.

In this argument, our reasons are the reasons we take ourselves to have for action. Scanlon’s argument can be put as follows (my wording, not his):

  1. I do not have a choice of which reasons I take myself to have.

  2. If I rationally do A, I do it for all the reasons for A that I take myself to have for doing A.

And the analogy with beliefs supports (1). However, when formulated like this, there is something like an equivocation on “reasons I take myself to have” between (1) and (2).

On its face reasons I take myself to have are belief-like: indeed, one might even analyze “I take myself to have reason R for A” as “I believe that R supports A”. But if they are belief-like in this way, I think we can argue that (2) is false.

Beliefs come in occurrent and non-occurrent varieties. It is only the occurrent beliefs that are fit to ground or even be analogous to the reasons on the basis of which we act. Suppose I am a shady used car dealer. I have a nice-looking car. I actually tried it out and found that it really runs great. You ask me what the car is like. I am well-practiced at answering questions like that, and I don’t think about how it runs: I just say what I say about all my cars, namely that it runs great. In this case, my belief that the car runs great doesn’t inform my assertion to you. I do not even in part speak on the basis of the belief, because I haven’t bothered to even call to mind what I think about how this car runs.

So, (2) can only be true when the “take myself to have” is occurrent. For consistency, it has to be occurrent in (1). But (1) is only plausible in the non-occurrent sense of “take”. In the occurrent sense, it is not supported by the belief analogy. For we often do have a choice over which beliefs are occurrent. We have, for instance, the phenomenon of rummaging through our minds to find out what we think about something. In doing so, we are trying to make occurrent our beliefs about the matter. By rummaging through our minds, we do so. And so what beliefs are occurrent then is up to us.

This can be of moral significance. Suppose that I once figured out the moral value of some action, and now that action would be very convenient to engage in. I have a real choice: do I rummage through my mind to make occurrent my belief about the moral value of the action or not? I might choose to just do the convenient action without searching out what it is I believe about the action’s morality because I am afraid that I will realize that I believe the action to be wrong. In such a case, I am culpable for not making a belief occurrent.

While the phenomenon of mental rummaging is enough to refute (1), I think the occurrent belief model of taking myself to have a reason is itself inadequate. A better model is a construal model, a seeing-as model. It’s up to me whether I see the duck-rabbit as a duck or as a rabbit. I can switch between them at will. Similarly, I can switch between seeing an action as supported by R1 and seeing it as supported by R2. Moreover, there is typically a fact of the matter whether I am seeing the duck-rabbit as a duck or as a rabbit at any given time. And similarly, there may be a fact of the matter as to how I construed the action when I finally settled on it, though I may not know what that fact is (for instance, because I don’t know when I settled on it).

In some cases I can also switch to seeing the action as supported by both R1 and R2, unlike in the case of the duck-rabbit. But in some cases, I can only see it as supported by one of the reasons at a time. Suppose Alice is a doctor treating a patient with a disease that when untreated will kill the patient in a month. There is an experimental drug available. In 90% of the cases, the drug results in instant death. In 10% of the cases, the drug extends the remaining lifetime to a year. Alice happens to know that this patient once did something really terrible to her best friend. Alice now has two reasons to recommend the drug to the patient:

  • the drug may avenge the evil done to her friend by killing the patient, and

  • the drug may save the life of the patient thereby helping Alice fulfill her medical duties of care.

Both reasons are available for Alice to act on. Unless Alice has far above average powers of compartmentalization (in a way in which some people perhaps can manage to see the duck-rabbit as both a duck and a rabbit at once), it is impossible for Alice to act on both reasons. She can construe the recommending of the pill as revenge on an enemy or she can construe it as a last-ditch effort to give her patient a year of life, but not both. And it is very plausible that she can flip between these. (It is also likely that after the fact, she may be unsure of which reason she chose the action for.)

In fact, we can imagine Alice as deliberating between four options:

  • to recommend the drug in the hope of killing her enemy instantly

  • to recommend the drug in the hope of giving her patient a year of life

  • to recommend against the drug in order that her enemy should die in a month

  • to recommend against the drug in order that her patient have at least a month of life.

The first two options involve the same physical activity—the same words, say—and the last two options do as well. But when she considers the first two options, she construes them differently, and similarly with the last two.