Showing posts with label compatibilism. Show all posts

Wednesday, April 23, 2025

Sensory-based hacking and consent

Suppose human beings are deterministic systems.

Then quite likely there are many cases where the complex play of associations combined with a specific sensory input deterministically results in a behavior in a way where the connection to the input doesn’t make rational sense. Perhaps I offer you a business deal, and you are determined to accept the deal when I wear a specific color of shirt because that shirt unconsciously reminds you of an excellent and now deceased business partner you once had, while you would have found the deal dubious had I worn any other color. Or, worse, I am determined to reject a deal offered by some person under some circumstances where the difference-maker is that the person is a member of a group I have an implicit and irrational bias against. Or perhaps I accept the deal precisely because I am well fed.

If this is true, then we are subject to sensory-based hacking: by manipulating our sensory inputs, we can be determined to engage in specific behaviors that we wouldn’t have engaged in were those sensory inputs somewhat different in a way that has no rational connection with the justification of the behavior.

Question: Suppose a person consents to something (e.g., a contract or a medical procedure) due to deliberate deterministic sensory-based hacking, but otherwise all the conditions for valid consent are satisfied. Is that consent valid?

It is tempting to answer in the negative. But if one answers in the negative, then quite a lot of our consent is in question. For even if we are not victims of deliberate sensory-based hacking, we are likely often impacted by random environmental sensory-based hacking—people around us wear certain colors of shirts or have certain shades of skin. So the question of whether determinism is true impacts first-order questions about the validity of our consents.

Perhaps we should distinguish three kinds of cases of consent. First, we have cases where one gives consent in a way that is rational given the reasons available to one. Second, we have cases where one gives consent in a way that is not rational but not irrational. Third, we have cases of irrational consent.

In cases where the consent is rational, perhaps it doesn’t matter much that we were subject to sensory-based hacking.

In cases where the consent is neither rational nor irrational, however, it seems that the consent may be undermined by the hacking.

In cases where the consent is irrational, one might worry that the irrationality undercuts the validity of consent anyway. But that’s not in general true. It may be irrational to want to have a very painful surgery that extends one’s life by a day, but the consent is not invalidated by the irrationality. And in cases where one irrationally gives consent, it seems even more plausible that sensory-based hacking undercuts the consent.

I wonder how much difference determinism makes to the above. I think it makes at least some difference.

Thursday, October 10, 2024

A really bad moral dilemma

Here would be a really bad kind of moral dilemma:

  • It is certain that unless you murder one innocent person now, you will freely become a mass murderer, but if you do murder that innocent person, you will freely repent of it later and live an exemplary life.

If compatibilism is true, such dilemmas are possible—the world could be so set up that these unfortunate free choices are inevitable. If compatibilism is false, such dilemmas are impossible, absent Molinism.

We might have a strong intuition that such dilemmas are impossible. If so, maybe that gives us another reason to reject compatibilism and Molinism.

Tuesday, September 24, 2024

Culpability incompatibilism

Here are three plausible theses:

  1. You’re only culpable for a morally wrong choice determined by a relevantly abnormal mental state if you are culpable for that mental state.

  2. A mental state that determines a morally wrong choice is relevantly abnormal.

  3. You are not culpable for anything that is prior to the first choice you are culpable for.

Given these theses and some technical assumptions, it follows that:

  4. If determinism holds, you are not culpable for any morally wrong choice.

For suppose that you are blameworthy for some choice and determinism holds. Let t1 be the time of the first choice you are culpable for. Choices flow from mental states, and if determinism holds, these mental states determine the choice. So there is a time t0 at which you have a mental state that determines your culpable choice at t1. That mental state is abnormal by (2). Hence by (1) you must be culpable for it given that it determines a wrong choice. But this contradicts (3).

The intuition behind (1) is that abnormal mental states remove responsibility, unless either the abnormality is not relevant to the choice, or one has responsibility for the mental state. This is something even a compatibilist should find plausible.

Moreover, the responsibility for the mental state has to have the same valence as the responsibility for the choice: to be culpable for the choice, you must be culpable for the abnormal state; to be praiseworthy for the choice, you must be praiseworthy for the abnormal state. (Imagine this case. To save your friends from a horrific fate, you had to swallow a potion which had a side-effect of making you a kleptomaniac. You are then responsible for your kleptomania, but in a praiseworthy way: you sacrificed your sanity to save your friends. But then you are not blameworthy for the thefts that come from the kleptomania.)

Premise (2) is compatible with there being normal mental states that determine morally good choices, as well as with there being normal mental states that non-deterministically cause morally wrong choices (e.g., a desire for self-preservation can non-deterministically cause an act of cowardice).

What I find interesting about this argument is that it doesn’t have any obvious analogue for praiseworthiness. The conclusion of the argument is a thesis we might call culpability incompatibilism.

The combination of culpability incompatibilism with praiseworthiness compatibilism (the doctrine that praiseworthiness is compatible with determinism) has some attractiveness. Leibniz cites with approval St Augustine’s idea that the best kind of freedom is choosing the best action for the best reasons. Culpability incompatibilists who are praiseworthiness compatibilists can endorse that thesis. Moreover, they can endorse the idea that God is praiseworthy despite being logically incapable of doing wrong. Interestingly, though, praiseworthiness compatibilism makes it difficult to run free-will-based defenses against the problem of evil.

Wednesday, October 12, 2022

Compatibilism and servitude

Suppose determinism and compatibilism are true. Imagine that a clever alien crafted a human embryo and the conditions on earth so as to produce a human, Alice, who would end up living in ways that served the alien’s purposes, but whose decisions to serve the alien had the right kind of connection with higher-order desires, reasons, decision-making faculties, etc. for a compatibilist to count them as free. Would Alice’s decisions be free?

The answer depends on whether we include among the compatibilist conditions on freedom the condition that the agent’s actions are not intentionally determined by another agent. If we include that condition, then Alice is not free. But it is my impression that defenders of compatibilism these days (e.g., Mele) have been inclining towards not requiring such a non-determination-by-another-agent condition. So I will take it that there is no such condition, and Alice is free.

If this is right, then, given determinism and compatibilism, it would be in principle possible to produce a group of people who would economically function just like slaves, but who would be fully free. Their higher-order desires, purposes and values would be chosen through processes that the compatibilist takes to be free, but these desires, purposes and values would leave them freely giving all of their waking hours to producing phones for a mega-corporation in exchange for a bare minimum of sustenance, and with no possibility of choosing otherwise.

That's not freedom. I conclude, of course, that compatibilism is false.

Thursday, July 2, 2020

Supererogation and determinism

  1. If at most one action is possible for one, that action is not supererogatory.

  2. If determinism is true, then there is never more than one action possible for one.

  3. So, if any action is supererogatory, determinism is false.

There is controversy over (2), but I don’t want to get into that in this post. What about (1)? Well, the standard story about supererogation is something like this: a supererogatory action is one that is better than, or perhaps more burdensome than, some permissible alternative. In any case, supererogatory actions are defined in contrast to a permissible alternative. But that permissible alternative has got to be possible for one in order to count as a genuine alternative. For instance, suppose I stay up all night with a sick friend. That’s better than going to sleep. But if there is loud music playing which would make it impossible for me to go to sleep and I am tied up so I can’t go elsewhere, then my staying up all night with the friend is not supererogatory.

Tuesday, April 30, 2019

A pre-established harmony with genuine mind-world causation

Molinists have the ability to give a distinctive pre-established harmony account of how the exceptionless truth of deterministic laws of nature could be made compatible with libertarianism.

Here is the story. God considers possible worlds where dualistic agents have the causal power to miraculously contradict the physical laws in their free choices, but where outside of exercises of free will, there are elegant deterministic mathematical laws governing the world. Call such worlds Candidate Worlds.

God then narrows his consideration to Finalist Worlds, which are Candidate Worlds that are feasible—i.e., compatible with the Molinist conditionals—and where as it happens the agents’ free choices accord with the elegant deterministic mathematical laws that govern the rest of the world.

And then God wisely and prudently chooses one of the Finalist Worlds for actualization.

On this story, there are elegant deterministic mathematical laws of nature which are true even of the agents’ choices, but they are true of the agents’ choices because the agents freely chose as they did. The agents had the causal power to violate, say, the conservation of momentum, but in fact freely did not do so.

There is an ambiguity in the concept of “exceptionless laws”. “Exceptionless laws” could mean laws that allow no exceptions (pushy laws so strong as to make exceptions impossible), or laws that in fact have no exceptions. The deterministic laws in this story are exceptionless in the second sense, which is why I am talking of their exceptionless truth rather than their exceptionless power.

In this story, there is a dual explanation of the agents’ choices. On the one hand, there is a standard libertarian story about the agents’ free causality. On the other hand, the laws have explanatory power, because God chose the Finalist Worlds because they are worlds where the laws have exceptionless truth.

The big difficulty with the above story is Molinism. Also, it is worth noting that it is metaphysically possible that there turn out not to be any (feasible) Finalist Worlds: in that scenario, God wouldn’t be able to create a world where there is freedom and elegant deterministic mathematical laws holding exceptionlessly. But for reasons similar to why many people think Transworld Depravity is unlikely to be true, I think it is unlikely that there would be no Finalist Worlds.

It is also interesting to note that there are two more views that could be plugged into the story that can do the same job: Thomism and compatibilism.

On Thomism, God can use primary causation to make agents freely choose as he desires. Then we can suppose that God surveys the same Candidate Worlds as on the Molinist story. Then he chooses Finalist Worlds as those Candidate Worlds where the agents’ free choices in fact do not contradict the mathematical laws. And then God uses primary causation to actualize one of the Finalist Worlds. Again, the agents have the power to contradict the laws, but freely choose not to exercise it.

Finally we have straightforward compatibilism. A dualist can just as easily be a compatibilist as a materialist can. On this story, we skip the Candidate Worlds, and the Finalist Worlds are worlds with compatibilist agents, with a deterministic non-physical mental life, who have the power to contradict the physical laws of the world but who are mentally determined, in a way compatible with freedom, never to exercise that power. And then God chooses one of the Finalist Worlds. The agents then are as free as any compatibilist agents.

The compatibilist version of this story is close to Leibniz’s pre-established harmony, except that it has real mind-world causation, which is a big improvement.

Of course, the non-theist can’t make any of these moves. And, alas, neither can I, since my mere foreknowledge view denies Molinism, Thomism (about free will) and compatibilism.

Monday, April 1, 2019

Molinism and Thomism and control over others

  1. It is not possible for a creature to exercise complete control over another person’s (non-derivatively) free action.

  2. If Molinism is true, it is possible for a creature to exercise complete control over another person’s (non-derivatively) free action.

  3. So, Molinism is false.

For, if Molinism is true, there will be a possible situation where God reveals to Alice that if she were to make a request of Bob while wearing blue gloves, Bob would acquiesce to the request, but if she were to make the request while wearing red gloves, Bob would turn down the request. In such a case, by controlling which gloves she wears, Alice could exercise complete control over whether Bob acquiesces to the request.

Interestingly the same argument works against Thomism. For on Thomism, God can use primary causation to determine Bob to freely acquiesce in the request and God can use primary causation to determine Bob to freely refuse the request. God could then promise Alice that he would hear her prayers as to whether Bob agrees or refuses, and then with her prayers, Alice would have complete control over Bob's decision.

The argument doesn't work against mere foreknowledge views, open theist views or compatibilist views. On mere foreknowledge and open theism, the analogue of (2) is false, while on compatibilism, (1) is not plausible.

Monday, March 25, 2019

Internalism about non-derivative responsibility

Internalism about non-derivative responsibility holds that whether one is non-derivatively responsible for a decision depends only on facts about the agent during the time of the decision.

Only an incompatibilist can be an internalist. For suppose that compatibilism is true. Then there will be possible cases of non-derivative responsibility where what the agent decides will be determined by factors just prior to the decision. But of course those factors could have been aberrantly produced in order to determine the particular decision by some super-powerful, super-smart being, and then the agent would not have been responsible for the decision. So whether there is responsibility on compatibilism depends on factors outside the time of the decision.

Speaking for myself, I have a strong direct intuition that internalism about non-derivative responsibility is true. But it would be interesting to see whether arguments can be constructed for or against such internalism. If so, that might give another way forward in the compatibilism/incompatibilism debate.

Wednesday, March 28, 2018

A responsibility remover

Suppose soft determinism is true: the world is deterministic and yet we are responsible for our actions.

Now imagine a device that can be activated at a time when an agent is about to make a decision. The device reads the agent’s mind, figures out which action the agent is determined to choose, and then modifies the agent’s mind so the agent doesn’t make any decision but is instead compelled to perform the very action that they would otherwise have chosen. Call the device the Forcer.

Suppose you are about to make a difficult choice between posting a slanderous anonymous accusation about an enemy of yours that will go viral and ruin his life and not posting it. It is known that once the message is posted, there will be no way to undo the bad effects. Neither you nor I know how you will choose. I now activate the Forcer on you, and it makes you post the slander. Your enemy’s life is ruined. But you are not responsible for ruining it, because you didn’t choose to ruin it. You didn’t choose anything. The Forcer made you do it. Granted, you would have done it anyway. So it seems you have just had a rather marvelous piece of luck: you avoided culpability for a grave wrong and your enemy’s life is irreparably ruined.

What about me? Am I responsible for ruining your enemy’s life? Well, first, I did not know that my activation of the Forcer would cause this ruin. And, second, I knew that my activation of the Forcer would make no difference to your enemy: he would have been ruined given the activation if and only if he would have been ruined without it. So it seems that I, too, have escaped responsibility for ruining your enemy’s life. I am, however, culpable for infringing on your autonomy. However, given how glad you are that your enemy’s life is ruined without your having any culpability, no doubt you will forgive me.

Now imagine instead that you activated the Forcer on yourself, and it made you post the slander. Then for exactly the same reasons as before, you aren’t culpable for ruining your enemy’s life. For you didn’t choose to post the slander. And you didn’t know that activating the Forcer would cause this ruin, while you did know that the activation wouldn’t make any difference to your enemy—the effect of activating the Forcer on yourself would not affect whether the message would be posted. Moreover, the charge of infringing on autonomy has much less force when you activated the Forcer yourself.

It is true that by activating the Forcer you lost something: you lost the possibility of being praiseworthy for choosing not to post the slander. But that’s a loss that you might judge worthwhile.

So, given soft determinism, it is in principle possible to avoid culpability while still getting the exact same results whenever you don’t know prior to deliberation how you will choose. This seems absurd, and the absurdity gives us a reason to reject the compatibility of determinism and responsibility.

But the above story can be changed to worry libertarians, too. Suppose the Forcer reads off its patient’s mind the probabilities (i.e., chances) of the various choices, and then randomly selects an action with the probabilities of the various options exactly the same as the patient would have had. Then in activating the Forcer, it can still be true that you didn’t know how things would turn out. And while there is no longer a guarantee that things would turn out with the Forcer as they would have without it, it is true that activating the Forcer doesn’t affect the probabilities of the various actions. In particular, in the cases above, activating the Forcer does nothing to make it more likely that your enemy would be slandered. So it seems that once again activating the Forcer on yourself is a successful way of avoiding responsibility.

But while that is true, it is also true that if libertarianism is true, regular activation of the Forcer will change the shape of one’s life, because there is no guarantee that the Forcer will decide just like you would have decided. So while on the soft determinist story, regular use of the Forcer lets one get exactly the same outcome as one would otherwise have had, on the libertarian version, that is no longer true. Regular use of the Forcer on libertarianism should be scary—for it is only a matter of chance what outcome will happen. But on compatibilism, we have a guarantee that use of the Forcer won’t change what action one does. (Granted, one may worry that regular use of the Forcer will change one’s desires in ways that are bad for one. If we are worried about that, we can suppose that the Forcer erases one’s memory of using it. That has the disadvantage that one may feel guilty when one isn’t.)

I don’t know that libertarians are wholly off the hook. Just as the Forcer thought experiment makes it implausible to think that responsibility is compatible with determinism, it also makes it implausible to think that responsibility is compatible with there being precise objective chances of what choices one will make. So perhaps the libertarian would do well to adopt the view that there are no precise objective chances of choices (though there might be imprecise ones).

Wednesday, January 17, 2018

Free will, randomness and functionalism

Plausibly, there is some function from the strengths of my motivations (reasons, desires, etc.) to my chances of decision, so that I am more likely to choose that towards which I am more strongly motivated. Now imagine a machine I can plug my brain into such that when I am deliberating between options A and B, the machine measures the strengths of my motivations, applies my strengths-to-chances function, randomly selects between A and B in accordance with the output of the strengths-to-chances function, and then forces me to do the selected option.
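Here is a toy Python sketch of such a machine; the names (strengths_to_chances and the rest) are mine and purely illustrative:

```python
import random

def decision_machine(strengths, strengths_to_chances):
    """Measure the strengths of the motivations, convert them to chances,
    randomly select an option with those chances, and force that option."""
    chances = strengths_to_chances(strengths)  # e.g. {'A': 0.7, 'B': 0.3}
    r = random.random()
    cumulative = 0.0
    for option, chance in chances.items():
        cumulative += chance
        if r < cumulative:
            return option
    return option  # guard against floating-point rounding
```

If the strengths-to-chances function is, say, proportional to the strengths, then the machine’s chance of forcing a given option matches the chance I would have had of choosing it myself.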

Here then is a vivid way to put the randomness objection to libertarianism (or more generally to a compatibilism between freedom and indeterminism): How do my decisions differ from my being attached to the decision machine? The difference does not lie in the chances of outcomes.

That the machine is external to me does not seem to matter. For we could imagine that the machine comes to be a part of me, say because it is made of organic matter that grows into my body. That doesn’t seem to make any difference.

But when the randomness problem is put this way, I am not sure it is distinctively a problem for the libertarian. The compatibilist has, it seems, an exactly analogous problem: Why not replace the deliberation by a machine that makes one act according to one’s strongest motivation (or, more generally, whatever motivation it is that would have been determined to win out in deliberation)?

This suggests (weakly) that the randomness problem may not be specific to libertarianism, but may be a special case of a deeper problem that both compatibilists and libertarians face.

It seems that both need to say that it deeply matters just how the decision is made, not just its functional characteristics. And hence both need to deny functionalism.

Monday, January 15, 2018

If computers can be free, compatibilism is true

In this post I want to argue for this:

  1. If a computer can non-accidentally have free will, compatibilism is true.

Compatibilism here is the thesis that free will and determinism can both obtain. My interest in (1) is that I think compatibilism is false, and hence I conclude from (1) that computers cannot non-accidentally have free will. But one could also use (1) as an argument for compatibilism.

Here’s the argument for (1). Assume that:

  2. Hal is a computer that non-accidentally has free will.

  3. Compatibilism is false.

Then:

  4. Hal’s software must make use of an indeterministic (true) random number generator (TRNG).

For the only indeterminism that non-accidentally enters into a computer (i.e., not merely as a glitch in the hardware) is through TRNGs.

Now imagine that we modify Hal by outsourcing all of Hal’s use of its TRNG to some external source. Perhaps whenever Hal’s algorithms need a random number, Hal opens a web connection to random.org and requests a random number. As long as the TRNG is always truly random, it shouldn’t matter for anything relevant to agency whether the TRNG is internal or external to Hal. But if we make Hal function in this way, then Hal’s own algorithms will be deterministic. And Hal will still be free, because, as I said, the change won’t matter for anything relevant to agency. Hence a deterministic system can be free, contrary to (3). Hence (2) and (3) are not going to be both true, and so we have (1).
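To see why the change shouldn’t matter, here is a toy Python sketch (the names are mine, and I make no claim about how Hal or random.org actually works). The deciding algorithm is one and the same deterministic function whether the random stream is generated inside the machine or fetched from outside:

```python
import random

class InternalTRNG:
    """Stand-in for a true random number generator inside the machine."""
    def next(self):
        return random.random()

class ExternalTRNG:
    """Stand-in for an external randomness source, e.g. a web service."""
    def __init__(self, fetch):
        self.fetch = fetch  # a function that retrieves one random number
    def next(self):
        return self.fetch()

def decide(weight_a, weight_b, rng):
    """Hal's own algorithm: fully deterministic given the stream of numbers it is fed."""
    return 'A' if rng.next() < weight_a / (weight_a + weight_b) else 'B'
```

Swapping InternalTRNG for ExternalTRNG changes nothing in decide itself: all the indeterminism has simply been moved outside Hal.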

We perhaps don’t even need the thought experiment of modifying Hal to argue for a problem with (2) and (3). Hal’s actions are at the mercy of the TRNG. Now, the output of the TRNG is not under Hal’s rational control: if it were, then the TRNG wouldn’t be truly random. So even without the modification, Hal’s choices hinge on something outside his rational control.

Objection 1: While Hal’s own algorithms, after the change, would be deterministic, the world as a whole would be indeterministic. And so one can still maintain a weaker incompatibilism on which freedom requires indeterminism somewhere in the world, even if not in the agent.

Response: Such an incompatibilism is completely implausible. Being subject to random external vagaries is no better for freedom than being subject to determined external vagaries.

Objection 2: It really does make a big difference whether the source of the randomness is internal to Hal or not.

Response: Suppose I buy that. Now imagine that we modify Hal so that at the very first second of its existence, before it has any thoughts about anything, the software queries a TRNG to generate a supply of random numbers sufficient for all subsequent algorithmic use. Afterwards, instead of calling on a TRNG, Hal simply takes one of the generated random numbers. Now the source of randomness is internal to Hal, so he should be free. And, strictly speaking, Hal thus modified is not a deterministic system, so he is not a counterexample to compatibilism. However, an incompatibilism that allows for freedom in a system all of whose indeterminism happens prior to any thoughts that the system has is completely implausible.

Objection 3: The argument proves too much: it proves that nobody can be free if compatibilism is false. For whatever the source of indeterminism in an agent is, we can label that “a TRNG”. And then the rest of the argument goes through.

Response: This is the most powerful objection, I think. But I think there is a difference between a TRNG and a free indeterministic decision. In an indeterministic free computer, the reasons behind a choice would not be explanatorily relevant to the output of the TRNG (otherwise, it’s not truly random). We will presumably have some code like:

if TRNG() < weightOfReasons(A) / (weightOfReasons(A) + weightOfReasons(B)):
    do(A)
else:
    do(B)

where TRNG() is a function that returns a truly random number from 0 to 1. The source of the indeterminism is then independent of the reasons for the options A and B: the function TRNG() does not depend on these reasons. (Of course, one could set up the algorithm so that there is some random permutation of the random number based on the options A and B. But that permutation is not going to be rationally relevant.) On the other hand, an agent truly choosing freely does not make use of a source of indeterminism that is rationally independent of the reasons for action—she chooses indeterministically on the basis of the reasons. How that’s done is a hard question—but the above arguments do not show it cannot be done.

Objection 4: Whatever mechanism we have for freedom could be transplanted into a computer, even if it’s not a TRNG.

Response: It is central to the notion of a computer, as I understand it, that it proceeds algorithmically, perhaps with a TRNG as a source of indeterminism. If one transplanted whatever source of freedom we have, the result would no longer be a computer.

Monday, May 1, 2017

Desire-belief theory and soft determinism

Consider this naive argument:

  1. If the desire-belief theory of motivation is true, whenever I act, I do what I want.
  2. Sometimes in acting I do what I do not want.
  3. So the desire-belief theory is false.

Some naive arguments are nonetheless sound. (“I know I have two hands, …”) But that’s not where I want to take this line of thought, though I could try to.

I think there are two kinds of answers to this naive argument. One could simply deny (2), espousing an error theory about what happens when people say “I did A even though I didn’t want to.” But suppose we want to do justice to common sense. Then we have to accept (2). And (1) seems to be just a consequence of the desire-belief theory. So what can one say?

Well, one can say that “what I want” is used in a different sense in (1) and (2). The most promising distinction here seems to me to be between what one wants overall and what one has a desire for. The desire-belief theorist has to affirm that if I do something, I have a desire for it. But she doesn’t have to say that I desire the thing overall. To make use of this distinction, (2) has to say that I act while doing what I do not overall want.

If this is the only helpful distinction here, then someone who does not want to embrace an error theory about (2) has to admit that sometimes we act not in accord with what we overall want. Moreover, it seems almost as much a truism as (2) that:

  4. Sometimes in acting freely I do what I do not want.

On the present distinction, this means that sometimes in acting freely, I do something that isn’t my overall desire.

But this in turn makes soft determinism problematic: for if my action is determined and isn’t what I overall desire, and desire-belief theory is correct, then it is very hard to see how the action could possibly be free.

There is a lot of argument from ignorance (the only relevant distinction seems to be…, etc.) in the above. But if it can all be cashed out, then we have a nice argument that one shouldn’t be both a desire-belief theorist and a soft determinist. (I think one shouldn’t be either!)

Monday, January 23, 2017

Prosthetic decision-making

Let’s idealize the decision process into two stages:

  1. Intellectual: Figure out the degrees to which various options promote things that one values (or desires, judges to be valuable, etc.).

  2. Volitive: On the basis of this data, will one option.

On an idealized version of the soft-determinist picture, the volitive stage can be very simple: one wills the option that one figured out in step 1 to best promote what one values. We may need a tie-breaking procedure, but typically that won’t be invoked.
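On this idealized picture, the volitive stage is simple enough to write down. Here is a toy Python sketch, with illustrative names of my own choosing:

```python
import random

def volitive_stage(options, promotes_values):
    """Will the option that best promotes what one values;
    fall back on a random tie-breaker (typically not invoked)."""
    best = max(promotes_values[o] for o in options)
    best_options = [o for o in options if promotes_values[o] == best]
    return random.choice(best_options)
```

Everything interesting has already happened by the time this function is called: it just reads off the maximum, with a tie-breaker for the rare balanced case.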

On a libertarian picture, the volitive stage is where all the deep stuff happens. The intellect has delivered its judgment, but now the will must choose. On the best version of the libertarian picture, typically the intellect’s judgment includes a multiplicity of incommensurable options, rather than a single option that best promotes what one values.

On the (idealized) soft-determinist picture, it seems one could replace the mental structures (“the volitive faculty”) that implement the volitive stage by a prosthetic device (say, a brain implant) that follows the simple procedure without too much loss to the person. The actions of a person with a prosthetic volitive faculty would be determined by her values in much the same way as they are in a person with a normal volitive faculty. What is important is the generation of input to the volitive stage—the volitive stage is completely straightforward (except when there are ties).

On the libertarian picture, however, replacing the volitive faculty by a prosthesis would utterly destroy one as a responsible agent. For it is here, in the volition, that all the action happens.

What about replacing the intellectual faculty by a prosthesis? Well, since the point of the intellectual stage is to figure out something, it seems that the point of the intellectual stage would be respected if one replaced it by an automated process that is at least as accurate as the actual process. Something else would be lost, but the main point would remain. (Compare: Something would be lost if one replaced a limb by a prosthetic that functioned as well as the limb, but the main point would remain.)

So, now, we can imagine replacing both faculties by prostheses. There is definite loss to the agent, but on the soft-determinist picture, there isn’t a loss of what is central to the agent. On the libertarian picture, there is a loss of what is central to the agent as soon as the volitive faculty is replaced by a prosthesis.

The upshot of this is this: On the soft-determinist picture, making decisions isn’t what is central to one as an agent. Rather, it is the formation of values and desires that is central, a formation that (in idealized cases) precedes the decision process. On the libertarian picture, making decisions—and especially the volitive stage of this process—is central to one as an agent.

Tuesday, January 10, 2017

Analogue jitter in motivations and the randomness objection to libertarianism

All analogue devices jitter on a small time-scale. The jitter is for all practical purposes random, even if the system is deterministic.

Suppose now that compatibilism is true and we have a free agent who is determined to always choose what she is most strongly motivated towards. Now suppose a Buridan’s ass situation, where the motivations for two alternatives are balanced, but where the motivations were acquired in the normal way human motivations are, where there is an absence of constraint, etc.

Because of analogue jitter in the brain, sometimes one motivation will be slightly stronger and sometimes the other will be. Thus which way the agent will choose will be determined by the state of the jitter at the time of the choice. And that’s for all practical purposes random.
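The point that a deterministic system can yield practically random tie-breaking can be made vivid with a toy simulation (the chaotic update rule, seed, and jitter scale are illustrative assumptions, not a model of the brain):

```python
# Toy illustration: a deterministic chaotic map supplies tiny "jitter"
# on top of two exactly balanced motivations. Which motivation is
# slightly stronger at the moment of choice depends on the jitter state.
def choices(n, seed=0.1234):
    x = seed
    out = []
    for _ in range(n):
        x = 3.99 * x * (1.0 - x)      # deterministic logistic-map update
        a = 0.5 + 1e-5 * (x - 0.5)    # motivation A, with tiny jitter
        b = 0.5 - 1e-5 * (x - 0.5)    # motivation B, with opposite jitter
        out.append("A" if a > b else "B")
    return out
```

Rerunning with the same seed yields the same sequence (the system is deterministic), yet the pattern of A's and B's is for all practical purposes random.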

Either in such cases there is freedom or there is not. If there is no freedom in such cases, then the compatibilist has to say that people whose choices are sufficiently torn are not responsible for their choices. That is highly counterintuitive.

The compatibilist’s better option is to say that there can still be freedom in such cases. It’s a bit inaccurate to say that the choice is determined by the jitter. For it’s only because the rough values of the strengths of the motivations are as they are that the jitter in their exact strength is relevant. The rough values of the strengths of the motivations are explanatorily relevant, regardless of which way the choice goes. The compatibilist should say that this kind of explanatory relevance is sufficient for freedom.

But if she says this, then she must abandon the randomness objection against libertarianism.

Tuesday, February 23, 2016

Determinism and moral imperfection

If determinism is true, then I always do the best I can do. If I always do the best I can do, I lack moral imperfection. So if determinism is true, I lack moral imperfection. But I am morally imperfect. So determinism is not true.

Monday, September 14, 2015

No one can make you freely do a serious wrong

I've just been struck by the obviousness of this principle: It would be unjust for you to be punished for something that someone else made you do.

But it wouldn't be unjust for you to be punished for freely doing something seriously morally wrong. Hence, it is impossible for someone to make you freely do something seriously morally wrong. But if compatibilism is true, then it is possible for someone to make you freely do something seriously wrong: a powerful being could produce a state of the universe long before your conception that determines you to do that wrong. (In principle a compatibilist could insist--as Ayer did--that it takes away one's freedom when an agent determines one to act a certain way. But this cannot be maintained. Whether I'm free shouldn't depend on ancient history.)

Friday, September 11, 2015

Randomness and compatibilism

The randomness objection to libertarian free will holds that undetermined choices will be random and hence unfree. Some randomness-based objectors to libertarianism are compatibilists who think free will is possible, but requires choices to be determined (e.g., David Hume). Others think that free will is impossible (cf. Galen Strawson). I will offer an argument against the Humeans, those who think that freedom is possible but it requires determinism for the relevant mental events. Consider three cases of ordinary human-like agents who have not suffered from brainwashing, compulsion, or the like:

  1. Gottfried always acts on his strongest relevant desire when there is one. In cases of a tie between desires, he is unable to make a choice and his head literally explodes. Determinism always holds.
  2. Blaise always acts on his strongest relevant desire when there is one. In cases of a tie between desires, his brain initiates a random indeterministic process to decide between the desires. Determinism holds in all other cases.
  3. Carl always acts on his strongest relevant desire when there is one. In cases of a tie between two desires, his brain unconsciously calculates one more digit of π, and if it's odd the brain makes him go for the first desire (as ordered alphabetically in whatever language he is thinking in) and if it's even for the second desire (with some generalization in case of an n-way tie for n>2). Determinism always holds.
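Carl's tie-breaking procedure is worth spelling out, since it is fully deterministic yet effectively random. A minimal sketch (the class name and the precomputed digit string are illustrative; Carl's brain computes one more digit on each occasion):

```python
# Sketch of Carl's deterministic tie-breaker. A fixed digit string of
# pi stands in for the on-demand calculation.
PI_DIGITS = "14159265358979323846"  # digits of pi after the decimal point

class Carl:
    def __init__(self):
        self.next_digit = 0  # index of the next digit to "calculate"

    def break_tie(self, desires):
        ordered = sorted(desires)  # alphabetical order
        d = int(PI_DIGITS[self.next_digit])
        self.next_digit += 1
        if len(ordered) == 2:
            # odd digit -> first desire, even -> second
            return ordered[0] if d % 2 == 1 else ordered[1]
        # one possible generalization for an n-way tie
        return ordered[d % len(ordered)]
```

Every output is determined, but since the parity of successive digits of π is, as far as anyone can tell, patternless, the resulting choices are to all intents and purposes random.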

Gottfried isn't free in cases of ties between desires--he doesn't even make a choice. Our Humean must insist that Blaise isn't free, either, in those cases, because although Blaise does decide, his decision is simply random. What about Carl? Well, Carl's choices are determined, which the Humean likes. But they are nonetheless to all intents and purposes random. A central part of the intuition that Blaise isn't free has to do with Blaise having no control over which desire he acts on, since he cannot control the indeterministic process. But Carl has no control over the digits of π and these digits are, as far as we can tell, essentially random. The randomness worry that is driving the Humean's argument that freedom requires determinism is not fundamentally a worry about indeterminism. That is worth noting.

Now let's go back to Gottfried. Given compatibilism it is plausible that in normal background conditions, all of Gottfried's choices are free. (Remember that if there is a tie, he doesn't make a choice.) Suppose we grant this. Then there is a tension between this judgment and what we observed about Carl. For now consider the case of closely-balanced choices by Gottfried. Suppose, for instance, Gottfried's desire to write a letter to Princess Elizabeth has strength 0.75658 and his desire to design a better calculator has strength 0.75657. He writes a letter to Princess Elizabeth, then, and does so freely by what has been granted. But now notice that our desires always fluctuate in the light of ordinary influences, and a difference of one in the fifth significant figure in a measure of the strength of a desire will be essentially a random fluctuation. The fact that this fluctuation is determined makes no difference, as we can see when we recall the case of Carl. So if we take seriously what we learned from the case of Carl, we need to conclude that Gottfried isn't actually free when he chooses between writing to Princess Elizabeth and designing a better calculator, even though he satisfies standard compatibilist criteria and acts on the basis of his stronger desire.

What should the Humean do? One option is to accept that Gottfried is free in the case of close decisions, and then conclude that so are Carl and Blaise in the case of ties. I think the resulting position may not be very stable--if compatibilism requires one to think Carl and Blaise are free in the case of ties, then compatibilism is no longer very plausible.

Another option is to deny that Gottfried is free in the case of close decisions. By parallel, however, she would need to deny that we are free in the case of highly conflicted decisions, unless she could draw some line between our conflicts and Gottfried's fifth-significant-figure conflict. And that's costly.

Finally, it's worth noting that the objection, whatever it might be worth, against the incompatibilist that we shouldn't need to wait on science to see if we're free also works against our Humean.

Thursday, November 13, 2014

Freedom and theodicy

Invoking free will has always been a major part of theodicy. If God has good reason to give us the possibility of acting badly, that provides us with at least a defense against the problem of evil. But to make this defense into something more like a theodicy is hard. After all, God can give us such pure characters that even though we can act badly, we are unlikely to do so.

I want to propose that we go beyond the mere alternate-possibilities part of free will in giving theodicies. The main advantage of this is that the theodicy may be capable of accomplishing more. But there is also a very nice bonus: our theodicy may then be able to appeal to compatibilists, who are (sadly, I think) a large majority of philosophers.

I think we should reflect on the ways in which one can limit a person's freedom through manipulation of the perfectly ordinary sort. Suppose Jane is much more attractive, powerful, knowledgeable and intelligent than Bob, but Jane wants Bob to freely do something. She may even want this for Bob's own sake. Nonetheless, in order not to limit Bob's freedom too much, she needs to limit the resources she uses. Even if she leaves Bob the possibility of acting otherwise, there is the ever-present danger that she is manipulating him in a way that limits his freedom.

I think the issue of manipulation is particularly pressing if what Jane wants Bob to do is to love her back. To make use of vastly greater attractiveness, power, knowledge and intelligence in order to secure the reciprocation of love is to risk being a super-stalker, someone who uses her knowledge of the secret springs of Bob's motivations in order to subtly manipulate him to love her back. Jane needs to limit what she does. She may need to make herself less attractive to Bob in order not to swamp his freedom. She may need to give him a lot of time away from herself. She might have reason not to make it be clear to him that she is doing so much for him that he cannot but love her back. These limitations are particularly plausible in the case where the love Jane seeks to have reciprocated is something like friendship or, especially, romantic love. And Scripture also presents God's love for his people as akin to marital love, in addition to being akin to parental love (presumably, God's love has no perfect analogue among human loves).

So if God wants the best kind of reciprocation of his love, perhaps he can be subtle, but not too subtle. He can make use of his knowledge of our motivations and beliefs, but not too much such knowledge. He can give us gifts, but not overload us with gifts. He may need to hide himself from us for a time. Yes, the Holy Spirit can work in the heart all the time, but the work needs to be done in a way that builds on nature if God is to achieve the best kind of reciprocation of his love.

I think there are elements of theodicy here. And a nice bonus is that they don't rely on incompatibilism.

The Incarnation is also an important element here—I am remembering Kierkegaard...

Friday, August 22, 2014

A criticism of some consequence arguments

The standard consequence argument for incompatibilism makes use of the operator Np which abbreviates "p and no one has or has ever had a choice about whether p". Abbreviating the second conjunct as N*p, we have Np equivalent to "p and N*p". The argument then makes use of a transfer principle, like:

  • beta-2: If Np and p entails q, then Nq.
When I think about beta-2, it seems quite intuitive. The way I tend to think about it is this: "Well, if I have no choice about p, and p entails q, then how can I have a choice about q?" But this line of reasoning commits me not just to beta-2, but to the stronger principle:
  • beta-2*: If N*p and p entails q, then N*q.
But beta-2* is simply false. For instance, let p be any necessary falsehood. Then clearly N*p. But if p is a necessary falsehood, then p entails q for every q, and so we conclude—without any assumptions about determinism, freedom and the like—that no one has a choice about anything. And that's unacceptable.

This may be what Mike Almeida is getting at in this interesting discussion which inspired this post.

Of course, this counterexample to beta-2* is not a counterexample to beta-2, since although we have N*p, we do not have Np, as we do not have p. But if the intuition driving one to beta-2 commits one also to beta-2*, then that undercuts the intuitive justification for beta-2. And that's a problem. One might still say: "Well, yes, we have a counterexample to beta-2*. But beta-2 captures most of the intuitive content of beta-2*, and is not subject to this counterexample." But I think such arguments are not very strong.

This is not, however, a problem if instead of accepting beta-2 on the basis of sheer intuition, one accepts it because it provably follows from a reasonable counterfactual rendering of the N*p operator.

Saturday, July 12, 2014

Responsibility and randomness

Consider this anti-randomness thesis that some compatibilists use to argue against libertarianism:

  1. If given your mental state you're at most approximately equally likely to choose A as to choose B, you are not responsible for choosing A over B.
Note that being in such a state of mind is compatible with determinism, since even given determinism one can correctly say things like "The coin is equally likely to come up tails as heads."

Thesis (1) is false. Here's a counterexample. Consider the following family of situations, where your character is fixed between them: You choose whether to undergo x hours of torture in order to save me from an hour of torture. If x=0.000001, then I assume you will be likely to choose to save me from the torture—the cost is really low. If x=10, then I would expect you to be very unlikely to save me from the torture—the cost is disproportionate. Presumably as x changes between 0.000001 and 10, the probability of your saving me changes from close to 1 to close to 0. Somewhere in between, at x=x1 (I suppose x1=1, if you're a utilitarian), the probability will be around 1/2. By (1), you wouldn't be responsible for choosing to undergo x1 hours of torture to save me from an hour of torture. But that's absurd.
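The intermediate-value reasoning behind "somewhere in between, at x=x1, the probability will be around 1/2" can be made concrete. Assuming, purely for illustration, a smooth logistic model of how the probability of saving falls as the cost x rises (the function shape and parameter k are my assumptions, not part of the argument), a bisection search recovers the crossover point:

```python
import math

# Hypothetical model: probability of choosing to save drops smoothly
# from near 1 at negligible cost to near 0 at disproportionate cost.
# The logistic shape and steepness k are illustrative assumptions,
# calibrated so that p_save(1) = 1/2.
def p_save(x, k=3.0):
    return 1.0 / (1.0 + math.exp(k * (x - 1.0)))

# Since p_save is continuous and monotone decreasing, the intermediate
# value theorem guarantees some x1 with p_save(x1) = 1/2; bisection
# finds it.
def find_x1(lo=0.000001, hi=10.0, tol=1e-9):
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if p_save(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Under this toy calibration `find_x1()` returns approximately 1, matching the utilitarian guess that x1=1; any continuous monotone curve from near 1 to near 0 must cross 1/2 somewhere.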

Thus, anybody who believes in free will, compatibilist or incompatibilist, should deny (1).

Now, let's add two other common theses that get used to attack libertarianism:

  2. If a choice can be explained with antecedent mental conditions that yield at most approximately probability 1/2 of that choice, a contrastive explanation of that choice cannot be given in terms of antecedent mental conditions.
  3. One is only responsible for a choice if one can give a contrastive explanation of it in terms of antecedent mental conditions.
Since (2) and (3) imply (1), and (1) is false, it follows that at least one of (2) and (3) must be rejected as well.

There is an independent argument against (1). The intuition behind (1) is that responsibility requires that a choice be more likely than its alternative. But necessarily God is responsible for all his choices. And surely it was possible in at least one of his choices for him to have chosen otherwise (otherwise, how can he be omnipotent?). If the choice he actually made was not more likely than the alternative, then he was not responsible by the intuition. But God is always responsible. Suppose then the choice he actually made was more likely than the alternative. Nonetheless, he could have made the alternative choice, and had he done so, he would have done something less likely than the alternative, and by the intuition he wouldn't have been responsible, which again is impossible. Thus, the theist must reject the intuition.