Showing posts with label responsibility. Show all posts

Thursday, September 26, 2024

Moral conversion and Hume on freedom

According to Hume, for one to be responsible for an action, the action must flow from one’s character. But the actions that we praise people for the most include cases where someone breaks free from a corrupt character and changes for the good. These cases are not merely cases of slight responsibility, but are central cases of responsibility.

A Humean can, of course, say that there was some hidden determining cause in the convert’s character that triggered the action—perhaps some inconsistency in the corruption. But given determinism, why should we think that this hidden determining cause was indeed in the agent’s character, rather than being some cause outside of the character—some glitch in the brain, say? That the hidden determining cause was in the character is an empirical thesis for which we have very little evidence. So on the Humean view, we ought to be quite skeptical that the person who radically changes from bad to good is praiseworthy. We definitely should not take such cases to be among the paradigm cases of praiseworthiness.

Tuesday, September 24, 2024

Culpability incompatibilism

Here are three plausible theses:

  1. You’re only culpable for a morally wrong choice determined by a relevantly abnormal mental state if you are culpable for that mental state.

  2. A mental state that determines a morally wrong choice is relevantly abnormal.

  3. You are not culpable for anything that is prior to the first choice you are culpable for.

Given these theses and some technical assumptions, it follows that:

  4. If determinism holds, you are not culpable for any morally wrong choice.

For suppose that you are blameworthy for some choice and determinism holds. Let t1 be the time of the first choice you are culpable for. Choices flow from mental states, and if determinism holds, these mental states determine the choice. So there is a time t0 at which you have a mental state that determines your culpable choice at t1. That mental state is abnormal by (2). Hence by (1) you must be culpable for it given that it determines a wrong choice. But this contradicts (3).
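The derivation in the previous paragraph can be set out schematically. The predicate shorthand and the formalization are mine, not part of the original argument:

```latex
% Shorthand (mine): C(x): you are culpable for x;  W(c): c is a morally wrong
% choice;  D(s,c): mental state s determines choice c;  A(s): s is relevantly abnormal.
\begin{align*}
\text{(1)}\quad & C(c) \land W(c) \land D(s,c) \land A(s) \rightarrow C(s)\\
\text{(2)}\quad & W(c) \land D(s,c) \rightarrow A(s)\\
\text{(3)}\quad & s \text{ precedes your first culpable choice} \rightarrow \neg C(s)\\[4pt]
& \text{Assume determinism; let } c_1 \text{ at } t_1 \text{ be your first culpable (wrong) choice.}\\
& \text{Determinism gives a mental state } s_0 \text{ at } t_0 < t_1 \text{ with } D(s_0, c_1).\\
& \text{By (2), } A(s_0)\text{; by (1), } C(s_0)\text{; by (3), } \neg C(s_0)\text{, a contradiction.}
\end{align*}
```

The "technical assumptions" mentioned above enter at the second step: determinism must supply a determining mental state, not merely a determining physical state.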

The intuition behind (1) is that abnormal mental states remove responsibility, unless either the abnormality is not relevant to the choice, or one has responsibility for the mental state. This is something even a compatibilist should find plausible.

Moreover, the responsibility for the mental state has to have the same valence as the responsibility for the choice: to be culpable for the choice, you must be culpable for the abnormal state; to be praiseworthy for the choice, you must be praiseworthy for the abnormal state. (Imagine this case. To save your friends from a horrific fate, you had to swallow a potion which had the side-effect of making you a kleptomaniac. You are then responsible for your kleptomania, but in a praiseworthy way: you sacrificed your sanity to save your friends. But you are not blameworthy for the thefts that come from the kleptomania.)

Premise (2) is compatible with there being normal mental states that determine morally good choices, as well as with there being normal mental states that non-deterministically cause morally wrong choices (e.g., a desire for self-preservation can non-deterministically cause an act of cowardice).

What I find interesting about this argument is that it doesn’t have any obvious analogue for praiseworthiness. The conclusion of the argument is a thesis we might call culpability incompatibilism.

The combination of culpability incompatibilism with praiseworthiness compatibilism (the doctrine that praiseworthiness is compatible with determinism) has some attractiveness. Leibniz cites with approval St Augustine’s idea that the best kind of freedom is choosing the best action for the best reasons. Culpability incompatibilists who are praiseworthiness compatibilists can endorse that thesis. Moreover, they can endorse the idea that God is praiseworthy despite being logically incapable of doing wrong. Interestingly, though, praiseworthiness compatibilism makes it difficult to run free-will-based defenses against the problem of evil.

Tuesday, January 23, 2024

Do I need to be aware of what I am intending if I am to be responsible?

I am going to argue that one doesn’t need to be conscious of intending to ϕ in order to be responsible for intending to ϕ.

The easiest version of the argument supposes time is discrete. Let t1 be the very first moment at which I have already intended to ϕ. My consciousness of that intending comes later, at some time t2: there is a time delay in our mental processing. So, at t1, I have already intended to ϕ, and once I have intended to ϕ, I am responsible for that intending. But now suppose that God annihilates me before t2. Then I never come to be aware that I intended to ϕ, and yet I was already responsible for it.

Here are three ways out:

  1. I am not yet responsible at t1, but only come to be responsible once I come to be aware of my intention, namely at t2.

  2. My awareness is simultaneous with the intention, and doesn’t come from the intention, but from the causal process preceding the intention. During that causal process I become more and more likely to intend to ϕ, and so my awareness is informed by this high probability.

  3. My awareness is a direct simultaneous seeing of the intention, partially constituted by the intention itself, so there is no time delay.

Friday, February 4, 2022

Fixing an earlier regress argument about intentions

In an earlier post, I generated a regress from:

  1. If you are responsible for x, then x is an outcome of an intentional act with an intention that you are responsible for,

where both responsibility and outcomehood are partial. But I am now sceptical of 1. It is plausible when applied to things that aren’t actions, but there is little reason to think an action I am responsible for has to be the outcome of another action of mine.

Maybe what I should say is this:

  2. Any action that I am responsible for has an intention I am responsible for.

  3. Anything that isn’t an action that I am responsible for is an outcome of an action I am responsible for.

This still seems to generate a regress or circle. By (3), if I am responsible for anything, I am responsible for some action, say A1. This will have an intention I1 that I am responsible for. Now either I1 is itself an action A2 or an outcome of some action A2 that I am responsible for. In both cases, I am responsible for A2. And then A2 will have an intention I2 that I am responsible for. And so on.

How can we arrest this? I think there are exactly two ways out:

  4. Some action An is identical with its intention In.

  5. Some action An has its own intention In as an outcome of itself.

Tuesday, February 1, 2022

Intentional acts that produce their own intentions

Start with these assumptions:

  1. If you are at least partly responsible for x, then x is at least partly an outcome of an intentional act with an intention that you are at least partly responsible for.

  2. You are at least partly responsible for something.

  3. You do not have an infinite regress of intentions.

  4. You do not have a circle of distinct intentions.

For brevity, let’s drop the “at least partly”. Let’s say you’re responsible for x. Then x must be an outcome of an intentional act with an intention I1 you’re responsible for; the intention I1 then must be an outcome of an intentional act with an intention I2 you’re responsible for; and so on.

It seems we now have a contradiction: by (1) and (2), either you have infinitely many intentions in the list I1, I2, ..., and hence a regress contrary to (3), or else you come back to some intention that you already had, and hence you have a circle, contrary to (4).

But there is one more possibility, and (1)–(4) logically entail that this one more possibility must be true:

  5. For some n, In = In+1.
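Why (1)–(4) leave exactly this possibility open can be sketched with a pigeonhole step. The formalization is mine, not part of the original post:

```latex
% By (1), each intention I_n you are responsible for is an outcome of an
% intentional act with an intention I_{n+1} you are responsible for.
\begin{align*}
& \text{By (3), the sequence } I_1, I_2, I_3, \dots \text{ takes only finitely many values,}\\
& \text{so it repeats. Take the first repetition } I_m = I_{m+k} \text{ with } k \geq 1 \text{ minimal;}\\
& \text{then } I_m, I_{m+1}, \dots, I_{m+k-1} \text{ are pairwise distinct.}\\
& \text{If } k \geq 2, \text{ these form a circle of distinct intentions, contrary to (4).}\\
& \text{Hence } k = 1: \text{ for some } n, \; I_n = I_{n+1}, \text{ the fixed point of (5).}
\end{align*}
```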

This is like a circle, but not quite. It is a fixed point. What we have learned is that given (1)–(5):

  6. You have at least one intention that is at least partly an outcome of an intentional act with that very intention.

This seems even more absurd than a circle or regress. One wants to ask: How could an intention be its own outcome? But this question has a presupposition:

  7. Any at least partial outcome of an intentional act is an at least partial outcome of the intentional act’s intention.

What we learn from (6) is that the presupposition (7) is false. (For if (7) were true, then given (6) some intention would be its own at least partial outcome, which is indeed absurd.)

But how can (7) be false? How can an outcome (again, let’s drop the partiality for brevity) of an intentional act not be an outcome of the act’s intention? I think there are two possibilities. First, the act’s intention can itself be an outcome of the act. Second, the act’s intention can be something that is parallel to the act and neither is an outcome of the other. The second view fits with acausalist views in the philosophy of intention, but it does not seem plausible to me—there needs to be a causal connection of some sort between an intentional act and its intention. And in any case the second view won’t help solve our puzzle.

So, we are led to a view on which at least sometimes the intention of an intentional act is an outcome rather than a cause of the act. If we think of an intention as explaining the rational import of an act, then in such cases the rational import of the act is retrospective in a way.

It would be neatest if every time one performed an intentional act, the intention were an outcome of the act. But we have good reason to think that sometimes the intention precedes the intentional act. For instance, when I decide on a plan of action, and then simply carry out the plan, the plan as it is found in my mind informs my sequence of acts in the way an intention does, and so it makes sense to talk of the plan as an intention. But now think about the mental act of deliberation by which I decided on the plan, including on its end. Here it makes sense to think of the plan’s end as being a part of the intention behind the mental act—the mental act is made rational by aiming at the plan’s end.

But all this is predicated on (1). And it now occurs to me that (1) is perhaps not as secure as it initially seemed. For imagine this case. I am trying to decide on what to do, so I engage in deliberation. The deliberation is an intentional mental act, whose intention is to come to a decision. But perhaps I do not need to be responsible for this intention in order to be responsible for the decision I come to. I can be simply stuck with having to come to a decision, and still be responsible for the particular decision I come to. In other words, deliberative processes could be a unique case where I am responsible for an act’s outcome without being responsible for the act’s intention. That doesn’t sound quite right to me, though. It seems that if the outcome of the deliberative processes is not what I intend, and, as is often the case, is not even what I foresee, then I am not responsible for that outcome.

Wednesday, September 22, 2021

Consciousness of one's choices

Here is a plausible thesis:

  1. Consciousness of one’s choice is necessary for moral responsibility.

I go back and forth on (1). Here is a closely related thesis that is false:

  2. Knowledge of one’s choice is necessary for moral responsibility.

For imagine Alice, who, on the basis of a mistaken interpretation of neuroscience, thinks there are no choices. Then it could well be that Alice does not know that she is making any choices. But surely this mistake does not take away her moral responsibility for her choice.

Alice presumably still has consciousness of her choice, much as the sceptic still has perception of the external world. So Alice isn’t a counterexample to (1). But I wonder if (1) is very plausible once one has realized that (2) is false. For once we have realized that (2) is false, we realize that in Alice’s case the consciousness of her choice is not knowledge-conferring. And such consciousness just does not seem significant enough to matter for moral responsibility.

Wednesday, January 20, 2021

I can jump 100 feet up in the air

Consider a possible world w1 which is just like the actual world, except in one respect. In w1, in exactly a minute, I jump up with all my strength. And then consider a possible world w2 which is just like w1, but where moments after I leave the ground, a quantum fluctuation causes 99% of the earth’s mass to quantum tunnel far away. As a result, my jump takes me 100 feet in the air. (Then I start floating down, and eventually I die of lack of oxygen as the earth’s atmosphere seeps away.)

Here is something I do in w2: I jump 100 feet in the air.

Now, from my actually doing something it follows that I was able to do it. Thus, in w2, I have the ability to jump 100 feet in the air.

When do I have this ability? Presumably at the moment at which I am pushing myself off from the ground. For that is when I am acting. Once I leave the ground, the rest of the jump is up to air friction and gravity. So my ability to jump 100 feet in the air is something I have in w2 prior to the catastrophic quantum fluctuation.

But w1 is just like w2 prior to that fluctuation. So, in w1 I have the ability to jump 100 feet in the air. But whatever ability to jump I have in w1 at the moment of jumping is one that I already had before I decided to jump. And before the decision to jump, world w1 is just like the actual world. So in the actual world, I have the ability to jump 100 feet in the air.

Of course, my success in jumping 100 feet depends on quantum events turning out a certain way. But so does my success in jumping one foot in the air, and I would surely say that I have the ability to jump one foot. The only principled difference is that in the one foot case the quantum events are very likely to turn out to be cooperative.

The conclusion is paradoxical. What are we to make of it? I think it’s this. In ordinary language, if something is really unlikely, we say it’s impossible. Thus, we say that it’s impossible for me to beat Kasparov at chess. Strictly speaking, however, it’s quite possible, just very unlikely: there is enough randomness in my very poor chess play that I could easily make the kinds of moves Deep Blue made when it beat him. Similarly, when my ability to do something has extremely low reliability, we simply say that I do not have the ability.

One might think that the question of whether one is able to do something is really important for questions of moral responsibility. But if I am right in the above, then it’s not. Imagine that I could avert some tragedy only by jumping 100 feet in the air. I am no more responsible for failing to avert that tragedy than if the only way to avert it would be by squaring a circle. Yet I can jump 100 feet in the air, while no one can square a circle.

It seems, thus, that what matters for moral responsibility is not so much the answer to the question of whether one can do something, but rather answers to questions like:

  1. How reliably can one do it?

  2. How reliably does one think (or justifiably think or know) one can do it?

  3. What would be the cost of doing it?

Thursday, March 5, 2020

Upbringing and responsibility

Consider these two stories:

  1. Alice grew up in a terrible home. Her mother abused her. Her father taught her by word and deed that morality is the advantage of the stronger. Her parents forced her to join them in their manifold criminal enterprises. Alice lacked good role models. When Alice turned 17, she grew wings and flew away.

  2. Bob grew up in a terrible home. His father abused him. His mother taught him by word and deed that morality is the advantage of the stronger. His parents forced him to join them in their manifold criminal enterprises. Bob lacked good role models. When Bob turned 17, he rebelled and turned away from the life of crime.

The first story is unlikely and unbelievable. The second is unlikely but believable. This suggests that we don’t think a really bad upbringing makes it literally impossible to do the right thing.

As an argument against determinism, this is rather weak, though. For even the determinist will say that there are many factors left out of story (2), and it could be that one of those left-out factors caused Bob to rebel.

A responsibility asymmetry

Discussion of my previous post has made me realize that we seem more apt to be skeptical about the culpability of someone whose evil actions arose from a poor upbringing than about the praiseworthiness of someone whose good actions arose from a good upbringing.

This probably isn’t due to any general erring in favor of positive judgments. We’re not that nice. (Think of the research showing that people say that the CEO who doesn’t care about the environment but institutes profitable policies that happen to pollute is intentionally polluting, while the CEO who doesn’t care about the environment but institutes profitable policies that happen to be good for the environment is not intentionally helping the environment.)

Here are two complementary stories that would make the apparent asymmetry reasonable:

  • Virtue makes one free while vice enslaves.

  • The person raised badly may be non-blameworthily ignorant of what is right. The person raised well knows what is right, though may deserve no credit for the knowledge. But non-blameworthy ignorance takes away responsibility, while knowledge gained without credit is good enough for responsibility for the actions flowing from that knowledge.

The noise from this asymmetry suggests that we may want to be careful when discussing free will and determinism to include both positive and negative actions evenhandedly in our examples.

Tuesday, March 3, 2020

A curious story about becoming just

Alice is a supervillain and Bob is a mad scientist. Alice wants Bob’s device to destroy the world. Bob makes Alice a deal: gain the virtue of justice and get the device. Alice isn’t smart enough to realize that once she gains the virtue of justice, she won’t want to use the device. She reads the best in ancient and modern wisdom, works hard, and gains the full virtue of justice, all in order to destroy the world.

Let’s suppose that when you have the full virtue of justice, you have to act from justice (at least in actions where justice is relevant). At some point Alice lost the motivation to destroy the world. Let’s suppose that as it happened, she lost that motivation at the same moment at which she gained full justice.

Alice now possesses justice, but it seems she is not praiseworthy for being just. All her just actions are ultimately explained by her former desire to destroy the world. They are not to her credit.

Now, here is what I think is rather odd about this story: Precisely by becoming fully just, Alice has lost all possibility for getting moral credit for acting justly. She is now locked into a non-praiseworthy justice.

I don’t know how this story bears on other philosophical questions or what interesting conclusions to draw from it.

In practice, I suspect, it would be unlikely that Alice would lose the motivation to destroy the world just as she gained full justice. There would likely be an intermediate time when she is no longer motivated to destroy the world, but has incomplete justice and hence is capable of choosing between virtue and vice, and hence can praiseworthily gain full justice.

Wednesday, March 27, 2019

Culpability for irrational action when you are culpable for the irrationality

It is widely thought that:

  1. If you act wrongly in an irrational state, but you are responsible for being in the irrational state, then the irrationality does not take away your culpability for the wrongful action.

But consider these two cases.

Case 1: Suppose that you now program an unstoppable robot to punch me in the face ten years from now. The law sentences you to a jail sentence justly suited to what you have done. Then, ten years later, the robot punches me in the face.

Comments: You clearly should not get another jail sentence for that. You’ve already been punished for all that you did, namely programming the robot.

Case 2: The same as Case 1, except now the unstoppable robot is programmed to brainwash you into punching me in the face. Ten years later, the robot brainwashes you, and you punch me in the face.

Comments: I think it is almost as clear as in Case 1 that you should not get another jail sentence. It shouldn’t make any difference to your culpability whether the robot punches me directly, as in Case 1, or brainwashes you (or someone else) to punch me.

This judgment seems to contradict (1). For in Case 2, you act out of an irrational state but are responsible for that irrational state.

I think we need to clarify things. We talk of culpability for actions and culpability for the effects of actions. Thus, if Alice freely punches Bob in the face, we can say that she is culpable both for punching Bob and for the effect, say, Bob’s broken nose. But when we say that Alice is culpable for Bob’s broken nose, I think this should be taken as shorthand for: Alice is culpable for freely punching Bob in the face in a way that resulted in a broken nose. In other words, culpability for the effects of actions is culpability for an action qua resulting in the effects.

In Case 1, you have effect-culpability for the robot punching me, and action-culpability for programming the robot. Talking of the effect-culpability for my being punched is shorthand for saying that you are action-culpable for programming the robot so that it would punch me.

In Case 2, we should say a very similar thing. You have effect-culpability for your punching me, and action-culpability for programming the robot to brainwash you to do that. You do not have action-culpability for your punching me, because your culpability for punching me is really just your culpability for programming the robot to cause you to punch me.

Principle (1) is right as regards effect-culpability but wrong as regards action-culpability.

Monday, March 25, 2019

Internalism about non-derivative responsibility

Internalism about non-derivative responsibility holds that whether one is non-derivatively responsible for a decision depends only on facts about the agent during the time of the decision.

Only an incompatibilist can be an internalist. For suppose that compatibilism is true. Then there will be possible cases of non-derivative responsibility where what the agent decides will be determined by factors just prior to the decision. But of course those factors could have been aberrantly produced by some super-powerful, super-smart being precisely in order to determine that particular decision, and then the agent would not have been responsible for the decision. So whether there is responsibility on compatibilism depends on factors outside the time of the decision.

Speaking for myself, I have a strong direct intuition that internalism about non-derivative responsibility is true. But it would be interesting to see whether arguments can be constructed for or against such internalism. If so, that might give another way forward in the compatibilism/incompatibilism debate.

Tuesday, October 2, 2018

When God doesn't act for some reason

Here’s an argument for a thesis that pushes one closer to omnirationality.

  1. God is responsible for all contingent facts about his will.

  2. No one is responsible for anything that isn’t an action (perhaps internal) done for a reason or the result of such an action.

  3. If God doesn’t act on a reason R that he could have acted on, that’s a contingent fact about his will.

  4. So, if God doesn’t act on a reason R, then either (a) God couldn’t have acted on R, or (b) God’s not acting on R itself has a reason S behind it.

Monday, June 25, 2018

Causation and memory theories of personal identity

Unlike soul-based theories, the memory, brain and body theories of personal identity are subject to fusion cases. There are four options as to what happens when persons merge:

  • Singleton: a specific person continues, but we don’t know which one

  • Double Identity: there was only one person prior to the fusion, wholly present in two places at once

  • Scattered: there was only one person prior to the fusion, half of whom was present in one location and half of whom was present in another

  • End: fusion causes the person’s demise and the arising of a new person.

The problem with Singleton is that it supposes there is a fact about personal identity deeper than facts about memories, brain-continuity and body-continuity, which undercuts the motivation for the three theories of personal identity.

Double Identity and Scattered are weird. Moreover, they lead to absurdity. For whether you and I are now one person or two depends on whether we will in fact fuse in the future, and we have backwards counterfactuals like: “If you and I fuse, then we will have always been one person.” This is just wrong: facts about your being a different person from me should not depend on what will happen. And consider that if you and I decide to fuse, thereby ensuring that we have always been one person J, either bilocated or scattered, then J exists because of J’s decision to fuse. But an individual cannot exist because of a decision made by that very individual.

That leaves End. I think End may be a good move for brain and body theorists. But it’s not a good move for memory theorists. For by analogy, we will have to say that fission causes a person’s demise, too. But then it is possible to kill a person without any causal interaction. For suppose you are unconscious and undergoing brain surgery under Dr. Kowalska. Dr. Kowalska scans your brain to a hard drive as a backup. A malefactor steals the hard drive from her as well as a blank lab-grown brain. If the thief restores the data from the hard drive into the lab-grown brain, that will result in fission and thus death. But the thief’s restoring of the data into the blank brain is something that can happen without any causal interaction with you. Hence, the thief can kill you without causally interacting with you, which is absurd.

Hence both Double Identity and End have causality problems on the memory theory: Double Identity allows someone to be literally self-made and End allows for killing without causation. It may be that if one is less of a realist about causation, these problems are lessened, but since memory itself is a causal process, it may be that memory theories of personal identity don’t sit well with being less of a realist about causation.

Wednesday, March 28, 2018

A responsibility remover

Suppose soft determinism is true: the world is deterministic and yet we are responsible for our actions.

Now imagine a device that can be activated at a time when an agent is about to make a decision. The device reads the agent’s mind, figures out which action the agent is determined to choose, and then modifies the agent’s mind so the agent doesn’t make any decision but is instead compelled to perform the very action that they would otherwise have chosen. Call the device the Forcer.

Suppose you are about to make a difficult choice between posting a slanderous anonymous accusation about an enemy of yours that will go viral and ruin his life and not posting it. It is known that once the message is posted, there will be no way to undo the bad effects. Neither you nor I know how you will choose. I now activate the Forcer on you, and it makes you post the slander. Your enemy’s life is ruined. But you are not responsible for ruining it, because you didn’t choose to ruin it. You didn’t choose anything. The Forcer made you do it. Granted, you would have done it anyway. So it seems you have just had a rather marvelous piece of luck: you avoided culpability for a grave wrong and your enemy’s life is irreparably ruined.

What about me? Am I responsible for ruining your enemy’s life? Well, first, I did not know that my activation of the Forcer would cause this ruin. And, second, I knew that my activation of the Forcer would make no difference to your enemy: he would have been ruined given the activation if and only if he would have been ruined without it. So it seems that I, too, have escaped responsibility for ruining your enemy’s life. I am, however, culpable for infringing on your autonomy. However, given how glad you are that your enemy’s life has been ruined without your having any culpability, no doubt you will forgive me.

Now imagine instead that you activated the Forcer on yourself, and it made you post the slander. Then for exactly the same reasons as before, you aren’t culpable for ruining your enemy’s life. For you didn’t choose to post the slander. And you didn’t know that activating the Forcer would cause this ruin, while you did know that the activation wouldn’t make any difference to your enemy—activating the Forcer on yourself would not affect whether the message would be posted. Moreover, the charge of infringing on autonomy has much less force when you activated the Forcer yourself.

It is true that by activating the Forcer you lost something: you lost the possibility of being praiseworthy for choosing not to post the slander. But that’s a loss that you might judge worthwhile.

So, given soft determinism, it is in principle possible to avoid culpability while still getting the exact same results whenever you don’t know prior to deliberation how you will choose. This seems absurd, and the absurdity gives us a reason to reject the compatibility of determinism and responsibility.

But the above story can be changed to worry libertarians, too. Suppose the Forcer reads off its patient’s mind the probabilities (i.e., chances) of the various choices, and then randomly selects an action with the probabilities of the various options exactly the same as the patient would have had. Then, when you activate the Forcer, it can still be true that you didn’t know how things would turn out. And while there is no longer a guarantee that things would turn out with the Forcer as they would have without it, it is true that activating the Forcer doesn’t affect the probabilities of the various actions. In particular, in the cases above, activating the Forcer does nothing to make it more likely that your enemy would be slandered. So it seems that once again activating the Forcer on yourself is a successful way of avoiding responsibility.

But while that is true, it is also true that if libertarianism is true, regular activation of the Forcer will change the shape of one’s life, because there is no guarantee that the Forcer will decide just like you would have decided. So while on the soft determinist story, regular use of the Forcer lets one get exactly the same outcome as one would otherwise have had, on the libertarian version, that is no longer true. Regular use of the Forcer on libertarianism should be scary—for it is only a matter of chance what outcome will happen. But on compatibilism, we have a guarantee that use of the Forcer won’t change what action one does. (Granted, one may worry that regular use of the Forcer will change one’s desires in ways that are bad for one. If we are worried about that, we can suppose that the Forcer erases one’s memory of using it. That has the disadvantage that one may feel guilty when one isn’t.)

I don’t know that libertarians are wholly off the hook. Just as the Forcer thought experiment makes it implausible to think that responsibility is compatible with determinism, it also makes it implausible to think that responsibility is compatible with there being precise objective chances of what choices one will make. So perhaps the libertarian would do well to adopt the view that there are no precise objective chances of choices (though there might be imprecise ones).

Tuesday, November 14, 2017

Freedom, responsibility and the open future

Assume the open futurist view on which freedom is incompatible with there being a positive fact about what I choose, and so there are no positive facts about future (non-derivatively) free actions.

Suppose for simplicity that time is discrete. (If it’s not, the argument will be more complicated, but I think not very different.) Suppose that at t2 I freely choose A. Let t1 be the preceding moment of time.

Then:

  1. At t2, it is already a fact that I choose A, and so I am no longer free with respect to A.

  2. At t1, I am still free with respect to choosing A, but I am not yet responsible with respect to A.

Thus:

  3. At no time am I both free and responsible with respect to A.

This seems counterintuitive to me.

Monday, September 18, 2017

Let's not exaggerate the centrality of virtue to ethics

Virtues are important. They are useful: they internalize the moral law and allow us to make the right decision quickly, which we often need to do. They aren’t just time-savers: they shine light on the issues we deliberate over. And the development of virtue allows our freedom to include the two valuable poles that are otherwise in tension: (a) self-origination (via alternate possibilities available when we are developing virtue) and (b) reliable rightness of action. This in turn allows our development of virtue to reflect the self-origination and perfect reliability of divine freedom.

But while virtues are important, they are not essential to ethics. We can imagine beings that only ever make a single, but truly momentous, decision. They come into existence with a clear understanding of the issues involved, and they make their decision, without any habituation before or after. That decision could be a moral one, with a wrong option, a merely permissible option, and a supererogatory option. They would be somewhat like Aquinas’ angels.

We could even imagine beings that make frequent moral choices, like we do, but whose nature does not lead them to habituate in the direction of virtue or vice. Perhaps throughout his life whenever Bill decides whether to keep an onerous promise or not, there is a 90% chance that he will freely decide rightly and a 10% chance that he will freely decide wrongly, a chance he is born and dies with. A society of such beings would be rather alien in many practices. For instance, members of that society could not be held responsible for their character, but only for their choices. Punishment could still be retributive and motivational (for the chance of wrong action might go down when there are extrinsic reasons against wrongdoing). I think such beings would tend to have lower culpability for wrongdoing than we do. For typically when I do wrong as a middle-aged adult, I am doubly guilty for the wrong: (a) I am guilty for the particular wrong choice that I made, and (b) I am guilty for not having yet transformed my character to the point where that choice was not an option. (There are two reasons we hold children less responsible: first, their understanding is less developed, and, second, they haven’t had much time to grow in virtue.)

Nonetheless, while such virtue-less beings would be less responsible, and we wouldn’t want to be them or live among them, they would still have some responsibility, and moral concepts could apply to them.

Wednesday, August 16, 2017

Consent and euthanasia

I once gave an argument against euthanasia where the controversial center of the argument could be summarized as follows:

  1. Euthanasia would at most be permissible in cases of valid consent and great suffering.

  2. Great suffering is an external threat that removes valid consent.

  3. So, euthanasia is never permissible.

But the officer case in my recent post about promises and duress suggests that (2) may be mistaken. In that case, I am an officer captured by an enemy officer. I have knowledge that imperils the other officer’s mission. The officer lets me live, however, on the condition that I promise to stay put for 24 hours, an offer I accept. My promise to stay put seems valid, even though it was made in order to avoid great harm (namely, death). It is difficult to see exactly why my promise is valid, but I argue that the enemy officer is not threatening me in order to elicit a promise from me, but rather I am in dangerous circumstances that I can only get out of by making the promise, a promise that is nonetheless valid, much as the promise to pay a merchant for a drink is valid even if one is dying of thirst.

Now, if a doctor were to torture me in order to get me to consent to being killed by her, any death-welcoming words from me would not constitute valid consent, just as promises elicited by threats made precisely to elicit them are invalid. But euthanasia is not like that: the suffering isn’t even caused by the doctor. It doesn’t seem right to speak of the patient’s suffering as a threat in the sense of “threat” that always invalidates promises and consent.

I could, of course, be mistaken about the officer case. Maybe the promise to stay put under the circumstances really is invalid. If so, then (2) could still be true, and the argument against euthanasia stays.

But suppose I am right about the officer case, and suppose that (2) is false. Can the argument be salvaged? (Of course, even if it can’t, I still think euthanasia is wrong. It is wrong to kill the innocent, regardless of consequences or consent. But that’s a different line of thought.) Well, let me try.

Even if great suffering is not an external threat that removes valid consent, great suffering makes one less than fully responsible for actions made to escape that suffering (we shouldn’t call the person who betrayed her friends under torture a traitor). Now, how fully responsible one needs to be in order for one’s consent to be valid depends on how momentous the potential adverse consequences of the decision are. For instance, if I consent to a painkiller that has little in the way of side-effects, I don’t need to have much responsibility in order for my consent to be valid. On the other hand, suppose that the only way out of suffering would be a pill whose owner is only willing to sell it in exchange for twenty years of servitude. I doubt that one’s suffering-elicited consent to twenty years of servitude is valid. Compare how the Catholic Church grants annulments for marriages when responsibility is significantly reduced. Some of the circumstances where annulments are granted are ones where the agent would have sufficient responsibility in order to make valid promises that are less momentous than marriage vows, and this seems right. In fact, in the officer case, it seems that if the promise I made were more momentous than just staying put for 24 hours, it might not be valid. But it is hard to get more momentous a decision than a decision whether to be killed. So the amount of responsibility needed in order to make that decision is much higher than in the case of more ordinary decisions. And it is very plausible that great suffering (or fear of such) excludes that responsibility, or at the very least that it should make the doctor not have sufficient confidence that valid consent has been given.

If this is right, then we can replace (2) with:

  2′. Great suffering (or fear thereof) removes valid consent to decisions as momentous as the decision to die.

And the argument still works.

Wednesday, October 26, 2016

"Should know"

I’ve been thinking about the phrase “x should know that s”. (There is probably a literature on this, but blogging just wouldn’t be as much fun if one had to look up the literature!) We use this phrase—or its disjunctive variant “x knows or should know that s”—very readily, without its calling for much evidence about x.

  • “As an engineer Alice should know that more redundancy was needed in this design.”

  • “Bob knows or should know that his behavior is unprofessional for a librarian.”

  • “Carl should have known that genocide is wrong.”

Here’s a sense of “x should know that s”: x has some relevant role R and it is normal for those in R to know that s under the relevant circumstances. In that sense, to say that x should know that s we don’t need to know anything specific about x’s history or mental state, other than that x has role R. Rather, we need to know about R: it is normal engineering practice to build in sufficient redundancy; librarians have an unwritten code of professional behavior; human beings normally have a moral law written in their hearts.

This role-based sense of “should know” is enough to justify treating x as a poor exemplar of the role R when x does not in fact know that s. When R is a contingent role, like engineer or librarian, it could be sufficient grounds for drumming x out of R.

But we sometimes seem to use a “should know” claim to underwrite moral blame. And the normative story I just gave about “should know” isn’t strong enough for that. Alice might have had a really poor education as an engineer, and couldn’t have known better. If the education was sufficiently poor, we might kick her out of the profession, but we shouldn’t blame her morally.

Carl, of course, is a case apart. Carl’s ignorance makes him a defective human being, not just a defective engineer or librarian. Still, a defective human being is not the same as a morally blameworthy human being. And in Carl’s case we can’t drum him out of the relevant role without being able to levy moral blame on him, as drumming him out of humanity is, presumably, capital punishment. However, we can lock him up for the protection of society.

On the other hand, we could take “x should know that s” as saying something about x’s state, like that it is x’s own fault if x doesn’t know. But in that case, I think people often use the phrase without sufficient justification. Yes, it’s normal to know that genocide is wrong. But we live in a fallen world where people can fall very far short of what is normal through no fault of their own, by virtue of physical and mental disease, the intellectual influence of others, and so on.

I worry that in common use the phrase “x should know that s” has two rationally incompatible features:

  • Our evidence only fits with the role-based normative reading.

  • The conclusions only fit with the personal fault reading.

Monday, September 14, 2015

No one can make you freely do a serious wrong

I've just been struck by the obviousness of this principle: It would be unjust for you to be punished for something that someone else made you do.

But it wouldn't be unjust for you to be punished for freely doing something seriously morally wrong. Hence, it is impossible for someone to make you freely do something seriously morally wrong. But if compatibilism is true, then it is possible for someone to make you freely do something seriously wrong: a powerful being could produce a state of the universe long before your conception that determines you to do that wrong. (In principle a compatibilist could insist--as Ayer did--that it takes away one's freedom when another agent determines one to act a certain way. But this cannot be maintained. Whether I'm free shouldn't depend on ancient history.)