Showing posts with label deliberation. Show all posts

Monday, March 20, 2023

A flip side to omnirationality

Suppose I do an action that I know benefits Alice and harms Bob. The action may be abstractly perfectly justified, but if I didn’t take into account the harm to Bob, if I didn’t treat the harm to Bob as a reason against the action in my deliberation, then Bob would have a reason to complain about my deliberation if he somehow found out. If I was going to perform the action, I should have performed it despite the harm to Bob, rather than just ignoring the harm to Bob. I owed it to Bob not to ignore him, even if I was in the end going to go with the benefit to Alice.

But suppose that I am perfectly virtuous, and the action is one that I owed Alice in a way that constituted a morally conclusive reason for the action. (The most plausible case will be where the action is a refraining from something absolutely wrong.) Once I see that I have morally conclusive reason for the action, it seems that taking other reasons into account is a way of toying with violating the conclusive reason, and that kind of toying is not compatible with perfect virtue.

Still, the initial intuition has some pull. Even if I have an absolute duty to do what I did for Alice, I should be doing it despite the harm to Bob, rather than just ignoring the harm to Bob. I don’t exactly know what it means not to just ignore the harm to Bob. Maybe in part it means being the sort of person who would have been open to avoiding the action if the reasons for it weren’t morally conclusive?

If I stick to the initial intuition, then we get a principle of perfect deliberation: In perfect deliberation, the deliberator does not ignore any reasons—or, perhaps, any unexcluded reasons—against the action one eventually chooses.

If this is right, then it suggests a kind of flip side to divine omnirationality. Divine omnirationality says that when God does something, he does it for all the unexcluded reasons that favor it.

Tuesday, February 1, 2022

Intentional acts that produce their own intentions

Start with these assumptions:

  1. If you are at least partly responsible for x, then x is at least partly an outcome of an intentional act with an intention that you are at least partly responsible for.

  2. You are at least partly responsible for something.

  3. You do not have an infinite regress of intentions.

  4. You do not have a circle of distinct intentions.

For brevity, let’s drop the “at least partly”. Let’s say you’re responsible for x. Then x must be an outcome of an intentional act with an intention I1 you’re responsible for; the intention I1 then must be an outcome of an intentional act with an intention I2 you’re responsible for; and so on.

It seems we now have a contradiction: by (1) and (2), either you have infinitely many intentions in the list I1, I2, ..., and hence a regress contrary to (3), or else you come back to some intention that you already had, and hence you have a circle, contrary to (4).

But there is one more possibility, and (1)–(4) logically entail that this one more possibility must be true:

  5. For some n, In = In+1.

This is like a circle, but not quite. It is a fixed point. What we have learned is that given (1)–(5):

  6. You have at least one intention that is at least partly an outcome of an intentional act with that very intention.

This seems even more absurd than a circle or regress. One wants to ask: How could an intention be its own outcome? But this question has a presupposition:

  7. Any at least partial outcome of an intentional act is an at least partial outcome of the intentional act’s intention.

What we learn from (6) is that the presupposition (7) is false. (For if (7) were true, then given (6) some intention would be its own at least partial outcome, which is indeed absurd.)
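The route from (1)–(4) to the fixed point in (5) is essentially a pigeonhole argument: a chain through finitely many intentions cannot go on forever, and if circles of two or more distinct intentions are ruled out, any repetition must be immediate. Here is a toy computational sketch (purely illustrative; the intention names and the `source` map are invented for the example):

```python
def cycle_length(source, start):
    """Follow the map from each intention to its source intention until
    some intention repeats; return the length of the cycle entered."""
    seen = {}                  # intention -> step at which it first appeared
    current, step = start, 0
    while current not in seen:
        seen[current] = step
        current = source[current]
        step += 1
    return step - seen[current]

# A finite chain with no regress and no circle of distinct intentions
# must bottom out in a fixed point (a cycle of length 1), as in (5) and (6):
source = {"I1": "I2", "I2": "I3", "I3": "I3"}   # I3 is its own source
assert cycle_length(source, "I1") == 1

# By contrast, a circle of two distinct intentions, excluded by assumption (4):
circular = {"I1": "I2", "I2": "I1"}
assert cycle_length(circular, "I1") == 2
```

Nothing philosophical hangs on the code; it only makes vivid that once regress and circles are excluded, the fixed point is the only remaining structure.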

But how can (7) be false? How can an outcome (again, let’s drop the partiality for brevity) of an intentional act not be an outcome of the act’s intention? I think there are two possibilities. First, the act’s intention can itself be an outcome of the act. Second, the act’s intention can be something that is parallel to the act and neither is an outcome of the other. The second view fits with acausalist views in the philosophy of intention, but it does not seem plausible to me—there needs to be a causal connection of some sort between an intentional act and its intention. And in any case the second view won’t help solve our puzzle.

So, we are led to a view on which at least sometimes the intention of an intentional act is an outcome rather than a cause of the act. If we think of an intention as explaining the rational import of an act, then in such cases the rational import of the act is retrospective in a way.

It would be neatest if every time one performed an intentional act, the intention were an outcome of the act. But we have good reason to think that sometimes the intention precedes the intentional act. For instance, when I decide on a plan of action, and then simply carry out the plan, the plan as it is found in my mind informs my sequence of acts in the way an intention does, and so it makes sense to talk of the plan as an intention. But now think about the mental act of deliberation by which I decided on the plan, including on the plan’s end. Here it makes sense to think of the plan’s end as being a part of the intention behind the mental act—the mental act is made rational by aiming at the plan’s end.

But all this is predicated on (1). And it now occurs to me that (1) is perhaps not as secure as it initially seemed. For imagine this case. I am trying to decide on what to do, so I engage in deliberation. The deliberation is an intentional mental act, whose intention is to come to a decision. But perhaps I do not need to be responsible for this intention in order to be responsible for the decision I come to. I can be simply stuck with having to come to a decision, and still be responsible for the particular decision I come to. In other words, deliberative processes could be a unique case where I am responsible for an act’s outcome without being responsible for the act’s intention. That doesn’t sound quite right to me, though. It seems that if the outcome of the deliberative processes is not what I intend, and, as is often the case, is not even what I foresee, then I am not responsible for that outcome.

Wednesday, November 25, 2020

Reasons as construals

Scanlon argues that intentions do not affect the permissibility of non-expressive actions because our intentions come from our reasons, and our reasons are like beliefs in that they are not something we choose.

In this argument, our reasons are the reasons we take ourselves to have for action. Scanlon’s argument can be put as follows (my wording, not his):

  1. I do not have a choice of which reasons I take myself to have.

  2. If I rationally do A, I do it for all the reasons for A that I take myself to have for doing A.

And the analogy with beliefs supports (1). However, when formulated like this, there is something like an equivocation on “reasons I take myself to have” between (1) and (2).

On its face, reasons I take myself to have are belief-like: indeed, one might even analyze “I take myself to have reason R for A” as “I believe that R supports A”. But if they are belief-like in this way, I think we can argue that (2) is false.

Beliefs come in occurrent and non-occurrent varieties. It is only the occurrent beliefs that are fit to ground or even be analogous to the reasons on the basis of which we act. Suppose I am a shady used car dealer. I have a nice-looking car. I actually tried it out and found that it really runs great. You ask me what the car is like. I am well-practiced at answering questions like that, and I don’t think about how it runs: I just say what I say about all my cars, namely that it runs great. In this case, my belief that the car runs great doesn’t inform my assertion to you. I do not even in part speak on the basis of the belief, because I haven’t bothered to even call to mind what I think about how this car runs.

So, (2) can only be true when the “take myself to have” is occurrent. For consistency, it has to be occurrent in (1). But (1) is only plausible in the non-occurrent sense of “take”. In the occurrent sense, it is not supported by the belief analogy. For we often do have a choice over which beliefs are occurrent. We have, for instance, the phenomenon of rummaging through our minds to find out what we think about something. In doing so, we are trying to make occurrent our beliefs about the matter. By rummaging through our minds, we do so. And so what beliefs are occurrent then is up to us.

This can be of moral significance. Suppose that I once figured out the moral value of some action, and now that action would be very convenient to engage in. I have a real choice: do I rummage through my mind to make occurrent my belief about the moral value of the action or not? I might choose to just do the convenient action without searching out what it is I believe about the action’s morality because I am afraid that I will realize that I believe the action to be wrong. In such a case, I am culpable for not making a belief occurrent.

While the phenomenon of mental rummaging is enough to refute (1), I think the occurrent belief model of taking myself to have a reason is itself inadequate. A better model is a construal model, a seeing-as model. It’s up to me whether I see the duck-rabbit as a duck or as a rabbit. I can switch between them at will. Similarly, I can switch between seeing an action as supported by R1 and seeing it as supported by R2. Moreover, there is typically a fact of the matter whether I am seeing the duck-rabbit as a duck or as a rabbit at any given time. And similarly, there may be a fact of the matter as to how I construed the action when I finally settled on it, though I may not know what that fact is (for instance, because I don’t know when I settled on it).

In some cases I can also switch to seeing the action as supported by both R1 and R2, unlike in the case of the duck-rabbit. But in some cases, I can only see it as supported by one of the reasons at a time. Suppose Alice is a doctor treating a patient with a disease that when untreated will kill the patient in a month. There is an experimental drug available. In 90% of the cases, the drug results in instant death. In 10% of the cases, the drug extends the remaining lifetime to a year. Alice happens to know that this patient once did something really terrible to her best friend. Alice now has two reasons to recommend the drug to the patient:

  • the drug may avenge the evil done to her friend by killing the patient, and

  • the drug may save the life of the patient thereby helping Alice fulfill her medical duties of care.

Both reasons are available for Alice to act on. Unless Alice has far above average powers of compartmentalization (in a way in which some people perhaps can manage to see the duck-rabbit as both a duck and a rabbit at once), it is impossible for Alice to act on both reasons. She can construe the recommending of the drug as revenge on an enemy or she can construe it as a last-ditch effort to give her patient a year of life, but not both. And it is very plausible that she can flip between these. (It is also likely that after the fact, she may be unsure of which reason she chose the action for.)
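As an aside, the case's numbers can be checked: in expectation the drug really is the (slightly) better medical bet, which is what makes the care-based construal available at all. A quick calculation, treating instant death as zero remaining months:

```python
from fractions import Fraction

# Expected remaining lifetime in months, using exact arithmetic.
p_year = Fraction(1, 10)              # 10%: drug extends life to a year
p_death = 1 - p_year                  # 90%: drug causes instant death

with_drug = p_death * 0 + p_year * 12     # expected months if the drug is taken
without_drug = Fraction(1)                # untreated, the disease kills in a month

assert with_drug == Fraction(6, 5)        # 1.2 months
assert with_drug > without_drug           # the drug wins, but only barely
```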

In fact, we can imagine Alice as deliberating between four options:

  • to recommend the drug in the hope of killing her enemy instantly

  • to recommend the drug in the hope of giving her patient a year of life

  • to recommend against the drug in order that her enemy should die in a month

  • to recommend against the drug in order that her patient have at least a month of life.

The first two options involve the same physical activity—the same words, say—and the last two options do as well. But when she considers the first two options, she construes them differently, and similarly with the last two.

Friday, August 7, 2020

Virtues as pocket oracles

Consider three claims:

  1. Virtues when fully developed make it possible to see what is the right thing to do without conscious deliberation.

  2. Acting on fully developed virtues is the best way to act.

  3. Acting on a pocket oracle, which simply tells you in each case what is to be done, misses out on something important in our action.

All three claims sound pretty plausible, but there is a tension between them. To make the tension evident, ask this question:

  • What makes a fully developed virtue relevantly different from a pocket oracle?

Consider three possible answers to the question:

Answer 1: The virtue makes you understand why the right thing is right (e.g., because it is courageous, or loyal, etc.). The oracle just says what is right.

But: We can easily add that the oracle gives an explanation, and that you understand that explanation. Intuition (3) is still going to be there.

Answer 2: The virtues are in us while the oracle is external.

But: Suppose that due to a weird mutation your gut had the functions of the pocket oracle, and gave you literal gut feelings as to what the right thing to do is (and, if necessary, why).

Answer 3: The virtues are formed by one’s own hard work.

But: Perhaps I had to work really hard to get the oracle. Or maybe I designed the AI system it uses.

Maybe there is some other answer to the question. But I would prefer to say that there is a relevant similarity between the case where the virtue tells me what to do and the case where the oracle does (even the gut oracle), namely that in neither case did I consciously weigh the options myself to come up with the answer.

I would deny (1). There are some independent reasons for that.

First, in difficult cases the struggle is important. This struggle involves oneself being pulled multiple ways by the genuine goods favoring the different actions. It is important to acknowledge the competing goods, especially if they are weighty. If I am trying to decide whether to rescue the drowning friend of five years or the mere acquaintance, it is by being deliberationally pulled by the good of rescuing the mere acquaintance that I acknowledge their moral call on me.

Second, there is sometimes literal and complex calculation going on in decisions. There is a 25% chance of rescuing 74 people versus a 33% chance of rescuing 68 people. It is not a part of perfected human virtue to have us do arithmetic in our heads instantly and see whether (0.25)(74) or (0.33)(68) is bigger. Of course, most of the time the deliberation is not mathematical, but that only makes things harder. We are not gods, and our agential perfection does not involve divine timeless deliberation.
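For concreteness, here is the calculation the example gestures at (trivial for a computer, not instant for a deliberating human):

```python
# Expected number of people rescued under each option.
option_a = 0.25 * 74   # 25% chance of rescuing 74 people
option_b = 0.33 * 68   # 33% chance of rescuing 68 people

assert option_a == 18.5                  # exact in binary floating point
assert abs(option_b - 22.44) < 1e-9      # roughly 22.44, up to rounding
assert option_b > option_a               # the second option is better in expectation
```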

Third, there is a trope in some science fiction (Watts’ Blindsight is where I saw this first) that there are non-human beings that are highly intelligent but lack consciousness. The idea is that consciousness involves some kind of second order reflection which actually slows down an agent, and agents that lack this evolutionary complication might actually be better. It seems to me that the temporally extended and self-reflective experience of deliberation is actually quite important to us as human agents. We are not gods or these kinds of aliens.

Wednesday, April 22, 2020

Virtue, deliberation and contemplation

Is it better to be virtuous or simply to deliberate about each case as one comes to it, making the right decision?

I worry about this: the virtuous person often acts from an internalized habit, without deliberating about the reasons, as these reasons have been internalized. She skillfully comforts a friend without consciously deliberating whether to do it. But by not deliberating, she misses out on things of moral worth. For in deliberating, we consciously contemplate the goods that provide reasons for action. Deliberating about what to do in light of a friend’s needs is a crucial instance of contemplating the worth of one’s friend. The more the virtuous person has internalized the reasons that arise from this worth, the more she misses out on these instances.

Of course, there are other occasions for conscious contemplation of the worth of one’s friend. But it seems to me that when the contemplation is tied to action via deliberation, it is particularly valuable.

And the same applies to other virtues and other goods.

Saturday, November 23, 2019

Characterizing actions by the reasons against them

It is plausible that the reasons for which one chooses an action help determine the kind of action it is. Plausibly, an action is a murder if one performs it because it will kill an innocent.

But it is also interesting that the reasons against which one chooses an action also help determine the character of an action. This is true both in good and bad actions. Some actions are acts of courage in part because they are done contrary to reasons of one’s own safety. And some actions are acts of gross negligence in part because they are done contrary to reasons of the safety of another. If, on the other hand, the reasons of safety did not enter into deliberation at all, the act, in both cases, may well be a case of recklessness.

Monday, October 30, 2017

Counseling the lesser evil

A controversial principle in Catholic moral theology is the principle of “counseling the lesser evil”, sometimes confusingly (or confusedly) presented as the “principle of the lesser evil”. The principle is one that the Church has not pronounced on. (For a survey of major historical points, see this piece by Fr. Flannery.)

First, a clarification. Nobody in the debate thinks it is ever permissible to do the lesser evil. The lesser evil is still an evil, and it is never permissible to do evil, no matter what might result from it. The debate is very specifically the following. Suppose someone is determined to do an evil, and cannot be dissuaded from doing some evil or other. Is it permissible to counsel a lesser evil in order to redirect the person from a greater evil? For instance, if someone is about to murder you, and cannot be dissuaded from an evil course of action, are you permitted to counsel theft instead, as on some interpretations the ten men in Jeremiah 41:8 do? (But see quotations in Flannery for other interpretations.)

There is no question that if the potential murderer is redirected to theft, the theft will still be wrong, indeed quite possibly a mortal sin (depending on the amount stolen). The moral question about “the lesser evil” is not about the primary evildoer but about the counselor. On the one hand, it appears that if the counselor’s counsel is sincere, the counselor is wrongfully endorsing an evil—albeit less evil—course of action. Indeed, it seems that the counselor is even intending the evil, albeit as an alternative to a greater evil.

On the other hand, a number of people will have very strong intuitions that it is not wrong to say to a potential murderer “Don’t kill me: here, take my laptop!” (Note: I assume the coerced circumstances do not render this a valid gift, so the potential murderer will indeed be a thief by taking the laptop.)

Let me add that the argument I will give leaves open the question of the advisability of counseling the lesser evil. Often it may be better to inspire the evildoer to do the good thing rather than the lesser of the evils. Moreover, one needs to be extremely wary of any public counseling of the lesser evil, because it is apt to encourage people who are not determined on evil to do the lesser evil. I think it is unlikely that such counseling is often advisable.

So, here’s the argument. Start with this thought. Agents deliberate about options. As they do so, they come to favor some options over others. Eventually, as they narrow in on the decision, they favor one option over all the others. Moreover:

  1. If a deliberating agent in the end favors B over C, typically the agent will not choose C as a result of this deliberation.

There are at least two reasons for the “typically”. First, maybe the agent is irrational. Second, maybe there can be cases of circular favoring structures, so that the agent favors B over C, favors A over B, and favors C over A, so that she ends up choosing C anyway.

Next observe this:

  2. If option B is better than option C, then it is good for a deliberating agent to favor B over C.

This is true regardless of whether B and C are both good options, or B is good and C is bad, or both B and C are bad. It is simply a good thing to favor the better over the worse.

With (1) and (2) in mind, consider a case where the agent has three options: a good A (e.g., going away), a lesser evil B (e.g., theft) and a greater evil C (e.g., murder). By (2) it is good if the agent favors B over C. Suppose the counselor strives to lead the agent who is determined on evil to favor B over C (e.g., by emphasizing the resale value of the laptop, or the likelihood that the police will investigate a murder more thoroughly than a theft, or the greater sinfulness of murder, depending on what is more likely to impress the particular agent). Then the conditions for the Principle of Double Effect can be satisfied on the side of the counselor.

  3. The counselor is pursuing a good end, the agent’s not choosing C.

  4. The counselor’s chosen means to the good end is the agent’s favoring B over C. By (1), such favoring is likely to be effective in fulfilling the counselor’s good end (namely, the agent’s not choosing C) and by (2), such favoring is good.

  5. There is a foreseen but not intended evil of the agent opting for B. It is not intended, because the counselor’s plan of action will be successful whether the agent opts for B (as foreseen) or for A (an unexpected bonus).

  6. The good of the agent’s not choosing C is proportionate to the foreseen evil of the agent’s choosing B, and there is, we may suppose, no better way of achieving the good.

In particular, there is no intention that the agent choose B, or even choose B over C. The intention is that the agent favor B over C, which is all that is typically needed, given (1), for the agent not to choose C.

Note 1: This provides a defense of pretty strong cases of counseling the lesser evil. The argument works even in cases where the agent being counseled wouldn’t have thought of evil B prior to the counseling (that is the case in Jeremiah 41:8). It might even work where B is impossible prior to the counseling. For instance you might unlock your safe in order to make it easier for the agent to steal your money in place of killing you. In so doing, your end is still that C not be done, and the means is that B is favored over C.

Note 2: This solves the problem of bribes.

Note 3: I am not very confident of any of the above.

Monday, October 24, 2016

Two senses of "decide"

Suppose:

  1. Alice sacrifices her life to protect her innocent comrades.

  2. Bob decides that if he ever has the opportunity to sacrifice his life to protect his innocent comrades, he’ll do it.

We praise Alice. But as for Bob, while we commend his moral judgment, we think that he is not yet in the crucible of character. Bob’s resolve has not yet been tested. And it’s not just that it hasn’t been tested. Alice’s decision not only reveals but also constitutes her as a courageous individual. Bob’s decision falls short not only in the revealing but also in the constituting department (it’s not his fault, of course, that the opportunity hasn’t come up).

Now compare Alice and Bob to Carl:

  3. Carl knows that tomorrow he’ll have the opportunity to sacrifice his life to protect his innocent comrades, and he decides he will make the sacrifice.

Carl is more like Bob than like Alice. It’s true that Carl’s decision is unconditional while Bob’s is conditional. But even though Carl’s decision is unconditional, it’s not final. Carl knows (at least on the most obvious way of spelling out the story) that he will have another opportunity to decide come tomorrow, just as Bob will still have to make a final decision once the opportunity comes up.

I am not sure how much Bob and Carl actually count as deciding. They are figuring out what would or will (respectively) be the thing to do. They are making a prediction (hypothetical or future-oriented) about their action. They may even be trying by an act of will to form their character so as to determine that they would or will make the sacrifice. But if they know how human beings function, they know that their attempt is very unlikely to be successful: they would or will still have a real choice to make. And in the end it probably wouldn’t surprise us too much if, put to the test, Bob and Carl failed to make the sacrifice.

Alice did something decisive. Bob and Carl have yet to do so. There is an important sense in which only Alice decided to sacrifice her life.

The above were cases of laudable action. But what about the negative side? We could suppose that David steals from his employer; Erin decides that she will steal if she has the opportunity; and Frank knows he’ll have the opportunity to steal and decides he’ll take it.

I think we’ll blame Erin and Frank much more than we’ll praise Bob and Carl (this is an empirical prediction—feel free to test it). But I think that’s wrong. Erin and Frank haven’t yet gone into the relevant crucible of character, just as Bob and Carl haven’t. Bob and Carl may be praiseworthy for their present state; Erin and Frank may be blameworthy for theirs. But the praise and the blame shouldn’t go quite as far as in the case of Alice and David, respectively. (Of course, any one of the six people might for some other reason, say ignorance, fail to be blameworthy or praiseworthy.)

This is closely connected to my previous post.

Friday, March 23, 2012

Choice and certainty

Suppose I am certain that I will choose A. (Maybe an oracle I completely trust has told me, or maybe I have in the past always chosen A; I am not saying anything about the certainty being justified; cf. this paper by David Hunt.) Can I deliberate between A and B, and choose A?

Here is an argument for a positive answer. Suppose I am 99% sure I will choose A. Clearly, I can deliberate between A and B, and freely choose either one (assuming none of the other reasons why I might be unfree apply). The same is true if I am 99.9% sure. And 99.99%. And so on. Moreover, while in such cases it may (though I am not sure even of that) become psychologically harder and harder to choose B, except in exceptional cases (see next paragraph), it should not become psychologically any harder to choose A over B. But if it does not become any harder to choose A over B, why can't I still deliberate and choose A in the limiting case where the certainty is complete?

There will be special cases where this limiting case argument fails. These will be cases where either I suffer from some contingent psychological condition that precludes a choice in the case of certainty or where in the limiting case I lose the reason I had for doing A. For instance, if I am in weird circumstances, there may be actions where the reasons I have for choosing the action depend on my not being certain that I will choose it—maybe you offer me money to choose something that I am not certain I will choose. But apart from such special cases, the probability that I will choose A is irrelevant to my deliberation. And hence it does not enter into my deliberation if I am being rational.

What enters into my deliberation whether to choose A or B are the reasons for and against choosing the options. The probabilities or even certainties of my making one or the other choice normally do not enter into deliberation.

But what if I know that it is impossible to do B? Isn't that relevant? Normally, yes, but that's because it's a straightforward reason against choosing B: one has good reason not to choose to do things which one can't succeed in doing, since in choosing such things, one will be trying and failing, and that is typically an unhappy situation.

But what if I am certain that it is impossible to choose B? Isn't that a reason against choosing B? I am not sure. After all, it might be a reason in favor of choosing B—wouldn't it be really cool to do something impossible? Maybe, though, what the case of impossibility of choice does is it removes all the reasons in favor of choosing B, since reasons involve estimates of expected outcomes, and one cannot estimate the expected outcome of an impossibility, perhaps.

Thursday, August 25, 2011

The unthinkable

Typically when we say that some course of action is unthinkable, we have already thought it.  Perhaps there is an equivocation, though.  Maybe what we mean is that the course of action is one that we can think about with theoretical reason, but cannot deliberate about with practical reason.  But isn't the revulsion from the course of action in fact a matter of the will?  Maybe it is a matter of the will, but not of the will in respect of deliberation?

I've seen a classic moral theology textbook say that if you've deliberated whether to commit a mortal sin, you've already committed a mortal sin.  Of course, it is presupposed here that you're aware that the action you're deliberating about is a mortal sin.  Maybe the idea is this.  Deliberation involves weighing the pros and cons of an action.  But as soon as you've realized that a course of action involves you in mortal sin, it is illegitimate to weigh the pros of the course of action.  If you do so, you're implicitly conditionally attaching your will to the mortal sin, conditionally on the pros being great enough.  It would be a mortal sin to explicitly conditionally attach your will to a mortal sin--for instance, to resolve to commit adultery if you win the lottery.  Whether the implicit conditional attachment in deliberation yields mortal sin is something I am not so sure about.  But it is surely sinful.

Of course this needs to be distinguished from a non-deliberative consideration of the benefits of a gravely wrong action.  There can be good reason to engage in such consideration.  For instance, one might try to think through the benefits of a sinful action in order to come up with a rhetorically powerful exhortation against the action, by contrasting the induced decay of soul with the temporariness of the benefits.

Saturday, December 11, 2010

Choice: compatibilist and incompatibilist views

I often wonder whether the compatibilist doesn't see choice as a matter of figuring something out—figuring out what is to be done in the light of one's reasons and/or desires. If one does see choice in this way, then it is quite unsurprising that we can assign responsibility without alternate possibilities. For the process is epistemic or relevantly like an epistemic process, and it is pretty plausible that epistemic responsibility does not require alternate possibilities.

The incompatibilist, on the other hand, is apt not to think of choice as a matter of figuring out anything. This is, I think, clearest on Thomistic accounts of choice on which choice is always between incommensurables. On these accounts, when one chooses, reason has already done all it can do—it has elucidated the reasons for all the available rationally-available options. And now, given the reasons, a choice must be made which reasons to follow. This choice isn't a matter of figuring out anything, since the figuring-out has all already been done. Reason has already informed us that option A is pleasant but cowardly and leading to ill-repute, option B is unpleasant, brave and leading to ill-repute, and option C is unpleasant, cowardly, and leading to good repute. Reason has also informed us that our duty is to avoid cowardice. And now a choice has to be made between pleasure, virtue and good reputation.

Tuesday, July 27, 2010

Deliberation and determinism

  1. If (a) were I to do A, p would be the case but were I not to do A, p would not be the case, and (b) I can rationally deliberate over whether to do A, then I could rationally deliberate over whether to act so that p would hold.
  2. For no proposition p about the past could I rationally deliberate over whether to act so that p would hold.
  3. If determinism is true, then for any action A I can rationally deliberate over, there is some proposition p about the past such that were I to do A, p would be the case and were I not to do A, p would not be the case.
  4. There is an action that I can rationally deliberate over which I will not actually do.
  5. Therefore: Determinism is not true. (By 1-4)
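
The validity of (1)-(5) can be checked mechanically. Here is a minimal sketch in Lean 4. The predicate names (`canDeliberate`, `aboutPast`, `tiedTo`) are my own labels for the argument's notions, the counterfactual clauses in (1a) and (3) are compressed into a single `tiedTo` relation, and the "which I will not actually do" qualification in (4) is omitted since it matters only for the Lewis-style reply below; so this verifies only the argument's propositional skeleton, not the counterfactual reasoning inside the premises.

```lean
-- Hypothetical predicates standing in for the argument's notions:
--   canDeliberate a     : I can rationally deliberate over whether to do a
--   canDeliberateOver p : I could rationally deliberate over whether to
--                         act so that p would hold
--   aboutPast p         : p is a proposition about the past
--   tiedTo p a          : were I to do a, p would be the case, and were I
--                         not to do a, p would not be the case
theorem determinism_false
    {Action P : Type}
    (Det : Prop)
    (canDeliberate : Action → Prop)
    (canDeliberateOver : P → Prop)
    (aboutPast : P → Prop)
    (tiedTo : P → Action → Prop)
    -- Premise 1: counterfactual dependence transfers deliberability
    (h1 : ∀ a p, tiedTo p a → canDeliberate a → canDeliberateOver p)
    -- Premise 2: no proposition about the past is deliberable
    (h2 : ∀ p, aboutPast p → ¬ canDeliberateOver p)
    -- Premise 3: under determinism, every deliberable action is tied to
    -- some proposition about the past
    (h3 : Det → ∀ a, canDeliberate a → ∃ p, aboutPast p ∧ tiedTo p a)
    -- Premise 4: some action is deliberable
    (h4 : ∃ a, canDeliberate a) :
    ¬ Det :=
  fun hDet =>
    match h4 with
    | ⟨a, ha⟩ =>
      match h3 hDet a ha with
      | ⟨p, hPast, hTied⟩ => h2 p hPast (h1 a p hTied ha)
```

The same skeleton verifies the modified argument: just reinterpret `aboutPast` as "solely about the laws of nature."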
Lewis denies 3. He thinks that if determinism holds and A is an action I won't do, then were I to do A, the past would have been the same (except maybe very close to the doing of A), but the laws of nature would have been different. This is implausible. The laws of nature govern events and one should not choose a counternomic interpretation of an ordinary counterfactual. But if Lewis is right, then we can modify the argument by replacing (2) with:
  2′. If determinism is true, I couldn't rationally deliberate over a proposition solely about the laws of nature.
and replacing (3) with:
  3′. If determinism is true, then for any action A I can rationally deliberate over, there is some proposition p solely about the laws of nature such that were I to do A, p would be the case and were I not to do A, p would not be the case.

One problem with the above argument is that one can deliberate about the past or about the laws of nature when one does not know that one is so doing (for instance, one might speculate whether to make it be the case that E happens at t without knowing that t is in fact in the past). I suppose it is a stipulation about how I use "rational" that I won't count that as rational. Perhaps a different word than "rationally" should be used, like "properly": I cannot properly deliberate over the past.

Thursday, November 13, 2008

Tense and action

Consider a version of John Perry's argument that action needs tense. You promised to call a friend precisely between 12:10 and 12:15 and no later. When it is between 12:10 and 12:15, and you know what time it is, this knowledge, together with the promise, gives you reason to call your friend. But if this knowledge is tenseless, then you could have it at 12:30, say. Thus, absurdly, at 12:30 you could have knowledge that gives you just as good a reason to call your friend.[note 1]

Here, however, is a tenseless proposal. Suppose it is 12:12, and I am deliberating whether to call my friend. I think the following thought-token, with all the verbs in a timeless tense:

  1. A phone call flowing from this deliberative process would occur between 12:10 and 12:15, and hence fulfill the promise, so I have reason that this deliberative process should conclude in a phone call to the friend.
And so I call. Let's see how the Perry-inspired argument fares in this case. I knew the propositions in (1) at 12:12, and I could likewise know these propositions at 12:30, though if I were to express that knowledge then, I would have to replace both occurrences of the phrase "this deliberative process" in (1) by the phrase "that deliberative process." However, this fact is in no way damaging.

For suppose that at 12:30, I am again deliberating whether to call my friend. I have, on this tenseless proposal, the very same beliefs that at 12:12 were expressed by (1). It would seem that where I have the same beliefs and the same knowledge, I have the same reasons. If this principle is not true, the Perry argument fails, since then one can simply affirm that one has the same beliefs and knowledge at 12:30 as one did at 12:12, but at 12:30 these beliefs and knowledge are not a reason for acting, while they are a reason for acting at 12:12. But I can affirm the principle, and I am still not harmed by the argument. For what is it that I conclude at 12:30 that I have (tenseless) reason to do? There is reason that the deliberative process should conclude in a call to the friend. But the relevant referent of "the deliberative process" is not the deliberative process that occurs at 12:30, call it D12:30, but the deliberative process that occurs at 12:12, call it D12:12. For (1) is not about the 12:30 deliberative process, but about the 12:12 one.

The principle that the same beliefs and knowledge give rise to the very same reasons may be true—but the reason given rise to is a reason for the 12:12 deliberative process to conclude in a phone call. But that is not what I am deliberating about at 12:30. At 12:30, I am deliberating whether this new deliberative process, D12:30, should result in a phone call to the friend. That I can easily conclude that D12:12 should result in a phone call to the friend is simply irrelevant.

There is an awkwardness about the solution as I have formulated it. It makes deliberative processes inextricably self-referential. What I am deliberating about is whether this very deliberation should result in this or that action. But I think this is indeed a plausible way to understand a deliberation. When a nation votes for president, the nation votes not just for who should be president, but for who should result as president from this very election. (These two are actually subtly different questions. There could be cases where it is better that X be president, but it is better that Y result as president from this very election. Maybe X promised not to run in this election.)

[I made some minor revisions to this post, the most important of which was to emphasize that (1) is a token.]

Tuesday, March 18, 2008

"To make a choice, you need choices"

The title of this post is a remark I heard Nuel Belnap make in the question period after a talk on free will (quoting from memory).

Here, then, is a valid argument for a kind of Principle of Alternate Possibilities:

  1. It is not possible to rationally deliberate when one knows that fewer than two options are possible. (Premise)
  2. One deliberates knowledgeably if and only if one knows all the deliberatively relevant facts. (Premise)
  3. It is deliberatively relevant which options are possible. (Premise)
  4. Therefore, if one rationally and knowledgeably deliberates, then at least two options are possible. (By (1)-(3))

(1) and (2) seem quite secure. But the opponent of Principles of Alternate Possibility may dispute (3), even though it seems very plausible to me.
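
For what it is worth, the step from (1)-(3) to (4) can also be checked mechanically. Below is a minimal Lean 4 sketch; the proposition names are my own, and premises (2) and (3) are compressed into the single hypothesis that a knowledgeable deliberator knows it whenever fewer than two options are possible (since that is a deliberatively relevant fact).

```lean
-- Hypothetical atomic propositions for one fixed deliberation episode:
--   Rational      : one rationally deliberates
--   Knowledgeable : one deliberates knowledgeably
--   TwoPossible   : at least two options are possible
--   KnowsFewer    : one knows that fewer than two options are possible
theorem alternate_possibilities
    (Rational Knowledgeable TwoPossible KnowsFewer : Prop)
    -- Premise 1: no rational deliberation while knowing that fewer than
    -- two options are possible
    (h1 : KnowsFewer → ¬ Rational)
    -- Premises 2 and 3 combined: knowledgeable deliberation includes
    -- knowing the deliberatively relevant fact that fewer than two
    -- options are possible, whenever that is so
    (h23 : Knowledgeable → ¬ TwoPossible → KnowsFewer)
    (hr : Rational) (hk : Knowledgeable) :
    TwoPossible :=
  Classical.byContradiction fun h => h1 (h23 hk h) hr
```

The proof is classical: supposing fewer than two options are possible, the knowledgeable deliberator would know it, contradicting rational deliberation by (1).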

In any case, (3) is clearly true in some cases. If I'm deliberating between three rescue operations, which can save, respectively, one family member, two strangers, or three family members, learning whether the third option is actually possible would, surely, affect rational deliberation (if it is possible, then it is the best choice; if it is not possible, then we have a hard choice between the first and second options). So there are at least some cases of deliberation where knowledge of what options are possible is deliberatively relevant. This isn't enough to yield (4), but it is enough to yield a weaker claim such as that rational and knowledgeable deliberation in certain kinds of real-world cases requires more than one option to be possible. If one adds the assumption that in these cases rational and knowledgeable deliberation does in fact occur, one concludes that in these cases more than one option is possible. Moreover, "possibility" here must be more than just metaphysical possibility—it must be some kind of causal possibility. (Learning that one of the rescue operations is logically impossible should affect deliberation; but learning that one of the rescue operations is causally impossible is just as relevant.) And hence, most likely, we do get something that is relevant to disputes with determinists.