Showing posts with label choices.

Wednesday, September 22, 2021

Consciousness of one's choices

Here is a plausible thesis:

  1. Consciousness of one’s choice is necessary for moral responsibility.

I go back and forth on (1). Here is a closely related thesis that is false:

  2. Knowledge of one’s choice is necessary for moral responsibility.

For imagine Alice, who, on the basis of a mistaken interpretation of neuroscience, thinks that there are no choices. Then it could well be that Alice does not know that she is making any choices. But surely this mistake does not take away her moral responsibility for her choice.

Alice presumably still has consciousness of her choice, much as the sceptic still has perception of the external world. So Alice isn’t a counterexample to (1). But I wonder if (1) is very plausible once one has realized that (2) is false. For once we have realized that (2) is false, we realize that in Alice’s case the consciousness of her choice is not knowledge-conferring. And such consciousness just does not seem significant enough to matter for moral responsibility.

Monday, January 15, 2018

Natural event kinds and Frankfurt cases

Choices are transitions from an undecided to a decided state. Suppose choices are a natural kind of event. Then only the right sort of transition from an undecided to a decided state will be a choice.

Here, then, is something that is epistemically possible. It could be that a choice is a kind of thing that can be produced in only one way, namely by the agent freely choosing. Compare essentiality of evolutionary origins for biological kinds: no animal that isn’t the product of evolution could be a lion. Of course, one can have something internally just like a lion arising from lightning hitting a swamp, and one can have a transition from an undecided to a decided state arising from a neuroscientist’s manipulation, but these won’t be a lion or a choice, respectively.

If this is right, then it seems no Frankfurt story can make a choice unavoidable. For to make a choice unavoidable, an intervener would have to be able to cause a choice in case the agent wasn’t going to make it. In other words, there will be no Frankfurt argument against this principle of alternate possibilities:

  • If x chose A, then it was causally possible for x not to have chosen A.

This is rather flickery, though: it doesn’t require that x could have chosen non-A.

Friday, August 21, 2015

Intra- and inter-choice comparisons of value

Start with this thought:

  1. If I have on-balance stronger reasons to do A than to do B, and I am choosing between A and B, then it is better that I do A than that I do B.

But notice that the following is false:

  2. If in decision X, I choose A over C, and in decision Y, I choose B over D, and I had on-balance stronger reasons to do A than I did to do B, then decision X was better.

To see that (2) is false, suppose that in decision X, you are choosing between your friend's life and your convenience, while in decision Y, you are choosing between your friend's life and your own life. Your reasons to choose your friend's life over your convenience are much stronger (indeed, they typically give rise to a duty) than your reasons to choose your friend's life over your own life. Nonetheless, to save your friend's life at the cost of your own life is a better thing than to save your friend's life at the cost of your own convenience.

There is a whiff of paradoxicality here. But it's just a whiff. If you chose your convenience over your friend's life you'd be a terrible person. So in a case like that described in (2), choosing B (e.g., your friend's life over your own life) is a better thing than choosing A (e.g., your friend's life over your convenience), while choosing C (e.g., your convenience over your friend's life) is worse than choosing D (e.g., your own life over your friend's life).

In other words, when you choose A over B, the on-balance strength of reasons for A doesn't correlate--even typically--with the value of your deciding for A. Rather, the on-balance strength of reasons for A correlates (at least roughly and typically) with the value of your deciding for A minus the value of your deciding for B. This is quite clear.
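The difference claim can be illustrated with a minimal sketch. All numeric values below are invented purely for illustration; the post itself assigns no numbers.

```python
# Illustrative sketch: the on-balance strength of reasons for an option
# tracks the value of choosing it MINUS the value of choosing the
# alternative, not the absolute value of choosing it.

def reason_strength(value_chosen, value_alternative):
    """Strength of reasons for the chosen option over its alternative."""
    return value_chosen - value_alternative

# Decision X: friend's life (A) vs. your convenience (C).
value_A, value_C = 10, -50    # choosing convenience over a life is very bad
# Decision Y: friend's life (B) vs. your own life (D).
value_B, value_D = 100, 60    # both options here have substantial value

strength_X = reason_strength(value_A, value_C)   # 60: duty-grade reasons
strength_Y = reason_strength(value_B, value_D)   # 40: weaker reasons

assert strength_X > strength_Y   # reasons in X are stronger...
assert value_B > value_A         # ...yet the choice in Y is the better one
```

On these made-up numbers, decision X involves the stronger reasons while decision Y involves the better choice, which is exactly the pattern the friend's-life example exhibits.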

This helps to resolve the paradox of why it is that doing the supererogatory is better than doing the obligatory, even though in a case where an option is obligatory the reasons are stronger than the reasons for supererogation. For omitting the supererogatory is much less bad than omitting the obligatory.

We may even be able to use some of the above to make some progress on the Kantian paradox that a good action by a person with a neutral character is better than a good action by a person with a good character. Observe that it is worse for a good person to do something bad than for a neutral person to do the same thing, since the good person does two bad things: she does the bad thing in itself and she fights her good character. Thus, even though the good person has more on-balance reason to do the good thing, the strength of reasons correlates not with the value of the action but with the value of the action minus the value of the alternative, and so this does not guarantee that her action has greater value than the good action of the neutral person.

Tuesday, July 17, 2012

Why does brainwashing take away responsibility?

Everybody agrees that brainwashing can remove responsibility for the resulting actions. But how does it do that?

In some cases, brainwashing removes decisions--you just act as an automaton without making any decisions. Bracket those cases of brainwashing as not to my purpose. The cases of interest are ones where decisions are still made, but they are made inevitable by the complex of beliefs, desires, habits, values, etc.--the character, for short--implanted by the brainwasher. Of these cases, some will still not be useful for my purposes, namely those where the implanted character is so distorted that the agent is not responsible for decisions coming from the character, simply by reason of insanity.

The interesting case, for discussion of compatibilism, is where the character is the sort of character that could also result from an ordinary life, and if it resulted from that ordinary life, decisions flowing from that character would be ones that the agent is responsible for.

So now our question is: Why is it that when this character results from the brainwasher's activity, the agent is not responsible for the decisions flowing from it, even though if the character were to have developed naturally, the agent would have been responsible?

I want to propose a simple explanation: In the paradigmatic case when the character (or, more precisely, its relevant features) results from the brainwasher's activity, the agent is not responsible for the character (that this is true is uncontroversial; but my point is not just that this is true, but that it is the answer to the question). Decisions that inevitably flow from a character that one is not responsible for, in external circumstances that we may also suppose one is not responsible for, are decisions that one is not responsible for. When the character results from an ordinary life, one is responsible for the character. But when the character results from brainwashing, typically one is not (the case where one freely volunteered to be brainwashed in this way is a nice test case--in that case, one does have at least some responsibility).

But now we see, just as in yesterday's post, that incompatibilism follows. For what makes us responsible for a character or circumstances are decisions that we are responsible for and that lead in an appropriate way to having that character. If we are only responsible for a decision that inevitably flows from a character in some external circumstances when we are responsible for the character or at least for the external circumstances, then the first responsible decision we make cannot be one that is made inevitable by character and external circumstance.

The way to challenge this argument is to offer alternate explanations of why it is that when character comes from brainwashing one is not responsible for actions that inevitably flow from that character given the external circumstances. My proposal was that the answer is that one isn't responsible for the character in that case. An alternate proposal is that it is the inevitability that takes away responsibility. This alternative certainly cannot be accepted by the compatibilist.

Thursday, April 19, 2012

Choices and intentions

Start with this situation:

  • Six innocents are drowning: A, B, C, D, E, F.
  • One innocent is in no antecedent danger: G.
  • Sam, who is truthful but evil, tells me that if I do anything that kills G, he will rescue D, E and F, and only then.

I cannot reach the innocents except by activating remote drones with rescue equipment. There are two buttons in front of me that activate the drones.
  • If I press the green button, A, B and C are rescued.
  • If I press the red button, A, B and C are rescued and G is killed along the way (maybe G is standing so close to the relevant drone that he will be killed by the drone when it launches; his death is not a means to the rescue of A, B and C, however).
So, if I press the green button, then four (A, B, C and G) live and three (D, E and F) die. If I press the red button, then six (A, B, C, D, E and F) live and one (G) dies.
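The bookkeeping of the two options can be restated in a short sketch (Python used only to tabulate the setup above; the sets simply encode who lives under each option):

```python
# Who survives under each option, per the setup of the case.
drowning = {"A", "B", "C", "D", "E", "F"}   # in danger
safe = {"G"}                                # in no antecedent danger

def outcome(button):
    """Return the set of survivors for each option."""
    if button == "green":   # drones rescue A, B, C; G is unharmed
        return {"A", "B", "C"} | safe
    if button == "red":     # A, B, C rescued, G killed, Sam then saves D, E, F
        return {"A", "B", "C", "D", "E", "F"}
    return safe             # doing nothing: only G survives

assert len(outcome("green")) == 4   # A, B, C, G live; D, E, F die
assert len(outcome("red")) == 6    # everyone but G lives
assert len(outcome(None)) == 1     # only G lives
```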

Suppose first that my choice is between the green button and nothing. (Maybe the red button is covered beneath an unbreakable dome.) Then I should press the green button.

Suppose instead that my choice is between the red button and nothing. Then I should press the red button. My intention would be to rescue A, B and C. The death of G is an unintended side-effect of rescuing A, B and C. The rescue of D, E and F by Sam is welcome but not intended (since if it were intended, then the death of G would have to be intended as a means thereto, and it is wrong to intend G's death).

But now suppose that my choice is between the green button, the red button and nothing. The red button has the best consequences, because two more innocents live. But if I choose the red button over the green button because of this fact, then I am intending the rescue of D, E and F, and therefore I am intending the means to that rescue, namely G's death. To make the point clearer, suppose that the way things work, when I press the green button, a signal gets sent to the drone to go rescue A, B and C, and when I press the red button, that happens and an additional signal is sent to the drone to activate a powerful booster that kills G. To choose the red over the green button seems to involve a choice to activate the booster, since otherwise there is no reason for that choice. Imagine, after all, that one could directly control the two signals without pressing the buttons. It would be wrong to send both the launch signal and the booster signal if one was capable of only sending the launch signal.

So it seems that although the red button is one that it is permissible to press in a binary choice between pressing it and doing nothing, it is not permissible to press the red button in preference to pressing the green button, even though pressing the red button has better consequences than pressing the green button.

If this line of reasoning is correct then to figure out what someone intends one needs to look not just at what they chose, but also at what alternative they chose it against. This fits neatly with my view of our choice and responsibility as essentially contrastive.

But I am not so confident of this line of reasoning.