In this post I want to argue for this:
1. If a computer can non-accidentally have free will, compatibilism is true.
Compatibilism here is the thesis that free will and determinism can both obtain. My interest in (1) is that I think compatibilism is false, and hence I conclude from (1) that computers cannot non-accidentally have free will. But one could also use (1) as an argument for compatibilism.
Here’s the argument for (1). Assume that:
2. Hal is a computer that non-accidentally has free will.
3. Compatibilism is false.
Then:
4. Hal’s software must make use of an indeterministic (true) random number generator (TRNG).
For the only way indeterminism non-accidentally enters a computer (i.e., not merely as a glitch in the hardware) is through a TRNG.
Now imagine that we modify Hal by outsourcing all of Hal’s use of its TRNG to some external source. Perhaps whenever Hal’s algorithms need a random number, Hal opens a web connection to random.org and requests a random number. As long as the TRNG is always truly random, it shouldn’t matter for anything relevant to agency whether the TRNG is internal or external to Hal. But if we make Hal function in this way, then Hal’s own algorithms will be deterministic. And Hal will still be free, because, as I said, the change won’t matter for anything relevant to agency. Hence a deterministic system can be free, contrary to (3). Hence (2) and (3) cannot both be true, and so we have (1).
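To make the modification concrete, here is a minimal sketch in Python. The function name external_random and the particular query parameters are mine, purely for illustration: instead of sampling an internal hardware TRNG, Hal’s decision code would call out to random.org’s plain-text integer API and rescale the result to a number between 0 and 1.

import urllib.request

def external_random():
    # Ask random.org for one integer between 0 and 999999, rescale it to [0, 1).
    url = ("https://www.random.org/integers/"
           "?num=1&min=0&max=999999&col=1&base=10&format=plain&rnd=new")
    with urllib.request.urlopen(url) as response:
        return int(response.read().strip()) / 1_000_000

Every line of Hal’s own code is now deterministic; all the indeterminism lives on the other side of the network connection.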
We perhaps don’t even need the thought experiment of modifying Hal to argue for a problem with (2) and (3). Hal’s actions are at the mercy of the TRNG. Now, the output of the TRNG is not under Hal’s rational control: if it were, then the TRNG wouldn’t be truly random. But being at the mercy of something outside one’s rational control seems no better for freedom than being causally determined.
Objection 1: While Hal’s own algorithms, after the change, would be deterministic, the world as a whole would be indeterministic. And so one can still maintain a weaker incompatibilism on which freedom requires indeterminism somewhere in the world, even if not in the agent.
Response: Such an incompatibilism is completely implausible. Being subject to random external vagaries is no better for freedom than being subject to determined external vagaries.
Objection 2: It really does make a big difference whether the source of the randomness is internal to Hal or not.
Response: Suppose I buy that. Now imagine that we modify Hal so that at the very first second of its existence, before it has any thoughts about anything, the software queries a TRNG to generate a supply of random numbers sufficient for all subsequent algorithmic use. Afterwards, instead of calling on a TRNG, Hal simply takes the next number from that stored supply. Now the source of randomness is internal to Hal, so it should be free. And, strictly speaking, Hal thus modified is not a deterministic system, so it is not a counterexample to incompatibilism. However, an incompatibilism that allows for freedom in a system all of whose indeterminism happens prior to any thoughts that the system has is completely implausible.
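Here is a minimal sketch of this second modification, again in Python and again with names of my own invention (os.urandom stands in for whatever hardware TRNG Hal has): all the truly random draws happen once, at startup, and every later request for a random number deterministically reads off the next stored value.

import os

def trng():
    # Stand-in for a hardware TRNG: a number in [0, 1) from the OS entropy source.
    return int.from_bytes(os.urandom(7), "big") / 2**56

class Hal:
    def __init__(self, pool_size=1_000_000):
        # All of the indeterminism happens here, before Hal has any "thoughts".
        self._pool = [trng() for _ in range(pool_size)]
        self._index = 0

    def random(self):
        # Deterministic from here on: just return the next pre-generated number.
        value = self._pool[self._index]
        self._index += 1
        return value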
Objection 3: The argument proves too much: it proves that nobody can be free if compatibilism is false. For whatever the source of indeterminism in an agent is, we can label that “a TRNG”. And then the rest of the argument goes through.
Response: This is the most powerful objection, I think. But I think there is a difference between a TRNG and a free indeterministic decision. In an indeterministic free computer, the reasons behind a choice would not be explanatorily relevant to the output of the TRNG (otherwise, it’s not truly random). We will presumably have some code like:
if (TRNG() < weightOfReasons(A) / (weightOfReasons(A) + weightOfReasons(B))) {
    do A
} else {
    do B
}
where TRNG() is a function that returns a truly random number from 0 to 1. The source of the indeterminism is then independent of the reasons for the options A and B: the function TRNG() does not depend on these reasons. (Of course, one could set up the algorithm so that there is some random permutation of the random number based on the options A and B. But that permutation is not going to be rationally relevant.) On the other hand, an agent truly choosing freely does not make use of a source of indeterminism that is rationally independent of the reasons for action: she chooses indeterministically on the basis of the reasons. How that’s done is a hard question, but the above arguments do not show it cannot be done.
Objection 4: Whatever mechanism we have for freedom could be transplanted into a computer, even if it’s not a TRNG.
Response: It is central to the notion of a computer, as I understand it, that it proceeds algorithmically, perhaps with a TRNG as a source of indeterminism. If one transplanted whatever source of freedom we have, the result would no longer be a computer.