Showing posts with label akrasia. Show all posts

Thursday, September 19, 2013

Proattitudes, freedom and determinism

On a familiar compatibilist picture of what things might be like, all our free actions are determined by our proattitudes and beliefs. The proattitudes provide the drive and ends for the action, and the beliefs tell us what does and does not conduce to those ends. On the most traditional version of the story, the proattitudes are noncognitive. I think Warren Quinn's arguments against such a view of proattitudes are sound: noncognitive proattitudes just do not render action rational. I would say they are too much like mere dispositions to act, and dispositions to act, every bit as much as the actions that flow from them, are in need of being made rational. Thus, the proattitudes must have a cognitive component: something like a seeing of an end as good or a judgment of an end as good.

Now consider this dilemma. Either we always act in accordance with the rationally superior attitude or we do not. That is, either we always act in accordance with what the attitude presents to us as the rationally called-for or better course of action, or we sometimes do not. If we always do, then we are never blameworthy. For while the judgments embodied in our proattitudes may be wrong, we are not blameworthy for these wrong judgments if we came to them always acting by our better lights.

Blameworthiness requires that at some point we have been responsible for acting against our better lights.

Now, proattitudes are either entirely cognitive or have both a cognitive aspect and a conative drive/motivation aspect. If they are entirely cognitive, then when we act against our better lights, something other than a proattitude must be determining our action in cases where we go against the better judgment embodied in these entirely cognitive proattitudes. But on the compatibilist picture, it is being sourced in our proattitudes that makes an action be truly ours. And in the relevant respect, the respect that determines us on the wrong (by our lights) rather than right course of action, the action is not sourced in our proattitudes.[note 1] That makes it very hard to see how we can be responsible.

Next, suppose that the proattitudes have both a cognitive and a conative component. On this picture, the cognitive component is what makes actions rational and the conative is what causally explains the action. On this view, when we act against our better lights, it is because proattitudes with a rationally weaker cognitive component can nonetheless have a causally stronger conative component. But how can we be responsible if that's the ultimate explanation of our wrongdoing? For it is the cognitive component that makes for rational action, for action that is distinctively personal, the sort of thing that is subject to moral evaluation. Imagine taking a brute animal and adding a cognitive component to its noncognitive proattitudes, but keeping the root of the deterministic causal explanation of action on the noncognitive side. That would not make the brute responsible. It would just create a monster.

When one is determined to act in accordance with the rationally weaker but conatively stronger proattitude, one is in the grip of a disorder, a kind of disease of the will (we call it "akrasia" or "weakness of the will"), which causes one to choose the rationally weaker rather than the rationally stronger course of action. But one is not blameworthy for such diseased action unless one is blameworthy for the disease. However, since the story applies all the way back, there is no room for blame left.

This line of thought does not refute the compatibility between responsibility and determinism. For it says nothing against the compatibility between praiseworthiness and determinism. But I think it gives one reason to think that determinism rules out blameworthiness.

Tuesday, January 31, 2012

How can I knowingly and freely do wrong?

I accept the following two claims:

  1. Every free action is done for a reason.
  2. If an action is obligatory, then I have on balance reason to do it.
Consider cases where I know that an action is obligatory, but I don't do it. How could that be? Well, one option is that I don't realize that obligatory actions are ones I have on balance reason to do. Put that case aside: I do know it sometimes when I do wrong. So I know that I have on balance reason to do an action, but I refrain from it. But then how could I have a reason for my refraining? And without a reason, my action wouldn't be free.

It strikes me that this version of the problem of akrasia may not be particularly difficult. There is no deep puzzle about how someone might choose a game of chess over a jog for a reason. A jog is healthier but a game of chess is more intellectually challenging, and one might choose the game of chess because it is more intellectually challenging. In other words, there is a respect in which the game of chess is better than the jog, and when one freely chooses the game of chess, one does so on the basis of some such respect. The jog, of course, also has something going for it: it is healthier, and one can freely choose it because it is better in respect of health.

Now, suppose that the choice is between playing a game of chess and keeping one's promise to visit a sick friend. Suppose the game of chess is more pleasant and intellectually challenging than visiting the sick friend. One can freely choose the game of chess because there are respects in which it is better than visiting the friend. There are, of course, respects in which the game of chess is worse: it is a breaking of a promise and a neglecting of a sick friend. But that there are respects in which visiting the sick friend is better does not make there be no reason to play chess instead, since there are respects in which the chess game is better.

But isn't visiting the sick friend on balance better? Certainly! But being on balance better is just another respect in which visiting the sick friend is better. It is still in some other respects better to play the game of chess. If one freely chooses to play the game of chess, then one chooses to do so on account of those other respects. That one option is on balance better is compatible with the other option being in some respects better. It is no more mysterious how one can act despite the knowledge that another option is on balance better than how one can act despite the knowledge that another option is more pleasant. The difference is that when one chooses against an action that one takes to be on balance better, one may incur a culpability that one does not incur when one chooses against an action that is merely more pleasant, but the incurring of that culpability is just another reason not to do the action.

But isn't it decisive if an action is on balance better? Isn't it irrational to go against such a decisive reason? Well, one can understand a decisive reason in three ways: (a) a reason that in fact decides one; (b) a reason that cannot but decide one; and (c) a reason that rationality requires one to go with. That an action is on balance better need not be what decides you, even if in fact you do the on balance better action. Now, granted, rationality requires one to go with an on balance better action. But that rationality requires something does not imply you will do it.

But if you don't, aren't you irrational, and hence not responsible? Well, if by irrational one means lack of responsiveness to reasons, then that would indeed imply lack of responsibility, but that is not one's state when one chooses to do the wrong thing for a reason. It need not even be true that one is not responsive to what is on balance better. For to be responsive to a reason does not require that one act on that reason. The person who chooses the chess game over the jog is likely quite responsive to reasons of health. If she were not responsive to reasons of health, it might not be a choice but a shoo-in. Likewise, the person who chooses against what is on balance better is responsive to what is on balance better, but goes against it.

Now, of course, the person who knowingly does what she knows she on balance has reason not to do, does not respond to the reason in the way that she should. In that sense, she is irrational. But that sense of irrationality is quite compatible with responsibility.

Sunday, December 12, 2010

Choice and incommensurability

Right now, I am making all sorts of choices. For instance, I just chose to write the preceding sentence. When I made that choice, it was a choice between writing that sentence and writing some other sentence. But it was not a choice between writing that sentence and jumping up and down three times. Let A be the writing of the sentence that I wrote; let B be the writing of some alternate sentence; let C be jumping up and down three times. Then, I chose between A and B, but not between A and C. What makes that be the case?

Here is one suggestion. I was capable of A and of B, but I was not capable of C. If this is the right suggestion, compatibilism is false. For on standard compatibilist analyses of "is capable of", I am just as capable of C as of A and B. I was fully physically capable of doing C. Had I wanted to do C, I would have done C. So if the capability of action suggestion is the only plausible one, we have a neat argument against compatibilism. However, there is a decisive objection to the suggestion: I can choose options I am incapable of executing. (I may choose to lift the sofa, without realizing it's too heavy.)

To get around the last objection, maybe we should talk of capability of choosing A and capability of choosing B. Again, it does not seem that the compatibilist can go for this option. For if determinism holds, then in one sense neither choosing B nor choosing C is available to me—either choice would require a violation of the laws of nature or a different past. And it seems plausible that in the compatibilist sense in which choosing B is available to me—maybe the lack of brainwashing or other psychological compulsion away from B—choosing C is also available to me. Again, if this capability-of-choosing suggestion turns out to be the right one, compatibilism is in trouble.

Here is another suggestion, one friendly to compatibilism. When I wrote the first sentence in this post, I didn't even think of jumping up and down three times. But I did, let us suppose, think of some alternate formulations. So the difference between B and C is that I thought about B but did not think about C. However, this suggestion is unsatisfactory. Not all thinking about action has anything to do with choosing. I can think about each of A, B and C without making a choice. And we are capable of some limited parallel processing—and more is certainly imaginable—and so I could be choosing between D and E while still thinking, "purely theoretically" as we say, about A, B and C. There is a difference between "choosing between" and "theorizing about", but both involve "thinking about".

It seems that the crucial thing to do is to distinguish the way in which the action-types one is choosing between enter into one's thoughts from the way in which the action-types one is merely theorizing about do. A tempting suggestion is that in the choice case, the actions enter one's mind under the description "doable". But that's mistaken, because I can idly theorize about a doable without at all thinking about whether to do it. (Kierkegaard is really good at describing these sorts of cases.) The difference is not in the description under which the action-types enter into the mind, as that would still be a difference within theoretical thought.

I think the beginning of the right thing to say is that those action-types one is choosing between are in one's mind with reasons-for-choosing behind them. And these reasons-for-choosing are reasons that one is impressed by—that actively inform one's deliberation. They are internalist reasons.

But now consider this case. Suppose I could save someone's life by having a kidney removed from me, or I could keep my kidneys and let the person die. While thinking about what to do, it occurs to me that I could also save the other person's life by having both kidneys removed. (Suppose that the other person's life wouldn't be any better for getting both kidneys. Maybe only one of my kidneys is capable of being implanted in her.) So, now, consider three options: have no kidneys removed (K0), have one kidney removed (K1), and have two kidneys removed (K2). If I am sane, I only deliberate between K0 and K1. But there is, in a sense, a reason for K2, namely that K2 will also save the other's life, and it is the kind of reason that I do take seriously, given that it is the kind of reason that I have for K1. But, nonetheless, in normal cases I do not choose between K0, K1 and K2. The reasons for K2 do not count to make K2 be among the options. Why? They do not count because I have no reason to commit suicide (if I had a [subjective] reason to commit suicide, K2 would presumably be among the options if I thought of K2), and hence the reasons for K2 are completely dominated by the reasons for K1.
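The domination relation at work in the kidney case can be sketched as a toy comparison. The numeric scores and the two "respects" below (lives saved, kidneys kept) are purely illustrative assumptions of mine, not anything from the post; the point is just the structure: one option dominates another when it is at least as good in every respect and strictly better in at least one.

```python
# Toy sketch of domination between the reasons for options.
# Each option is scored along hypothetical "respects":
# (lives saved, kidneys kept). The scores are illustrative only.

def dominates(a, b):
    """a dominates b iff a is at least as good in every respect
    and strictly better in at least one respect."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

K0 = (0, 2)  # no kidneys removed: save no one, keep both kidneys
K1 = (1, 1)  # one kidney removed: save the life, keep one kidney
K2 = (1, 0)  # both kidneys removed: save the same life at greater cost

print(dominates(K1, K2))  # True: K1 dominates K2, so K2 drops out
print(dominates(K1, K0))  # False: K0 and K1 are a genuine trade-off
print(dominates(K0, K1))  # False: neither dominates the other
```

On this sketch, K2 never makes it into the deliberation because K1 dominates it, while K0 and K1 remain live alternatives precisely because neither dominates the other.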

If this is right, then a consequence of the reasons-for-choice view of what one chooses between is that one never has domination between the reasons for the alternatives. This supports (but does not prove, since there is also the equal-reason option to rule out) the view that choice is always between incommensurables.

A corollary of the lack-of-domination consequence is that each of the options one is choosing between is subjectively minimally rational, and hence that it would be minimally rational to choose any one of them. I think this is in tension with the compatibilist idea that we act on the strongest (subjective) reasons. For then if we choose between A and B, and opt for A because the reasons for A were the strongest, it does not appear that B would have been even minimally rational.

Maybe, though, the compatibilist can insist on two orderings of reasons. One ordering is domination. And there the compatibilist can grant that the dominated option is not among the alternatives chosen between. But there is another ordering, which is denoted in the literature with phrases like "on balance better" or "on balance more (subjectively) reasonable". And something that is on balance worse can still be among the alternatives chosen between, as long as it isn't dominated by some other alternative.

But what is it for an option to be on balance better? One obvious sense the phrase can have is that an action is on balance better if and only if it is subjectively morally better. But the view then contradicts the fact that I routinely make choices of what is by my own lights morally worse (may God have mercy on my soul). Another sense is that an action is on balance better if and only if it is prudentially better. But just as there can be moral akrasia, there can be prudential akrasia.

Here is another possibility. Maybe the compatibilist can say that reasons have two kinds of strength. One kind of strength is on the side of their content. Thus, the strength of reason that I have to save someone's life is greater than the strength of reason that I have to protect my own property. Call this "content strength". The other kind of strength is, basically, how impressed I am with the reason, how much I am moved by it. If I am greedy, I am more impressed with the reasons for the protection of my property than with the reasons for saving others' lives. Call this "motivational strength". We can rank reasons in terms of the content strength, and then we run into the domination and incommensurability stuff. But we can also rank reasons in terms of motivational strength. And the compatibilist now says that I always choose on the basis of the motivationally strongest reasons.

This is problematic. First, it suggests a picture that just does not seem to be that of freedom—we are at the mercy of the non-rational strengths of reasons. For it is the content strength rather than the motivational strength of a reason that is a rational strength. Thus, the choices we make are only accidentally rational. The causes of the choices are reasons, but what determines which of the reasons we act on is something that is not rational. Rationality only determines which of the options we choose between, and then the choice itself is made on the non-rational strengths. This is in fact recognizable as a version of the randomness objection to libertarian views of free will. I actually think it is worse than the randomness objection. (1) Agent-causation is a more appealing solution for incompatibilists than for compatibilists, though Markosian has recently been trying to change that. (2) The compatibilist story really looks like a story on which we are in bondage to the motivational strengths of reasons. (3) The content strength and content of the outweighed reasons end up being explanatorily irrelevant to the choice. And (4) the specific content of the reasons that carried the day is also explanatorily irrelevant—all that matters is (a) what action-type they are reasons for and (b) what their motivational strength is.

In light of the above, I think the compatibilist should consider giving up on the language of choice, or at least on taking choice, as a selection between alternatives, seriously. Instead, she should think that there is only the figuring out of what is to be done, and hold with Socrates that there is no akrasia in cases where we genuinely act—whenever we genuinely act (as opposed to being subject to, say, a tic) we do what we on balance think should be done. I think this view would give us an epistemic kind of responsibility for our actions, but not a moral kind. Punishment would not be seen in retributivist terms, then.

Friday, November 28, 2008

Knowledge and belief

"I know that p, but I just can't get myself to believe it." On standard accounts of knowledge, this is self-contradictory. But I think the sentence gets at a very interesting phenomenon.

Here are two potential real life cases of the phenomenon, with names changed (we can't really discuss genuine real-life cases, because we can't read people's minds). George for years has been convinced by the apologetic arguments for Christianity. But he couldn't get himself to believe it. We would be tempted to say that he knew Christian doctrine to be the truth, but didn't believe it.

A second case is this. When we look at Dr. Schmidt's experiments at Auschwitz and try to explain his behavior, we are tempted to say: "His actions show that he did not believe Jews to be human beings." But in a juridical context, we are tempted to say something like this: "Moreover, the fact that he had no qualms about the application of the results of the experiments on Jews to non-Jewish Germans shows that he knew full well that Jews were human beings."[note 1] Odd, isn't it?

There are a couple of ways of understanding the phenomenon. One way is to say that sometimes we say "x knows p" simply to mean "x is in full possession of conclusive evidence that would suffice for knowing p". But I think this misses something in the phenomenon.

A different way, suggested to me by Bob Roberts, is that there are different senses of "belief" in play: one kind of belief—some kind of full-blooded, felt belief—is missing, and another kind of belief—the kind that "knowledge" requires—is present. One can parallel the two kinds of "belief" with two kinds of "knowledge".

Moreover, this issue connects, Bob suggested, with Socrates' doctrine that you necessarily do what you "know", a doctrine that requires a stronger sense of "know" than that which is used when we accusingly say that Dr. Schmidt knew the Jews he was experimenting on were human beings.

Maybe the stronger sense of "knowledge" is something almost visual (compare how Socrates in the Protagoras supposes that there is no knowledge when one is in the grip of a quasi-perceptual illusion, as when a temporally far-off bad seems lesser).