Showing posts with label randomness. Show all posts

Monday, June 2, 2025

Shuffling an infinite deck

Suppose infinitely many blindfolded people, including yourself, are uniformly randomly arranged on positions one meter apart numbered 1, 2, 3, 4, ….

Intuition: The probability that you’re on an even-numbered position is 1/2 and that you’re on a position divisible by four is 1/4.

But then, while everyone is asleep, the people are rearranged according to the following rule. The people on each even-numbered position 2n are moved to position 4n. The people on the odd-numbered positions are then shifted leftward as needed to fill the positions not divisible by 4. Thus, we have the following movements:

  • 1 → 1

  • 2 → 4

  • 3 → 2

  • 4 → 8

  • 5 → 3

  • 6 → 12

  • 7 → 5

  • 8 → 16

  • 9 → 6

  • and so on.
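The rearrangement can be written as an explicit function of the starting position: position 2n moves to 4n, and the k-th odd position moves to the k-th positive integer not divisible by 4. A small sketch (the function name is mine):

```python
def new_position(p):
    """Where the person starting at position p (1-indexed) ends up."""
    if p % 2 == 0:
        return 2 * p  # even position 2n moves to position 4n
    k = (p + 1) // 2  # p = 2k - 1 is the k-th odd position
    # The k-th positive integer not divisible by 4: each block of four
    # consecutive integers contributes three such numbers.
    return 4 * ((k - 1) // 3) + (k - 1) % 3 + 1
```

Evaluating this on the first few positions reproduces the list above, and the map is a bijection on the positions.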

If the initial intuition was correct, then the probability that now you’re on a position that’s divisible by four is 1/2, since you’re now on a position divisible by four if and only if initially you were on a position divisible by two. Thus it seems that now people are no longer uniformly randomly arranged, since for a uniform arrangement you’d expect your probability of being in a position divisible by four to be 1/4.

This shows an interesting difference between shuffling a finite and an infinite deck of cards. If you shuffle a finite deck of cards that’s already uniformly distributed, it remains uniformly distributed no matter what algorithm you use to shuffle it, as long as you do so in a content-agnostic way (i.e., you don’t look at the faces of the cards). But if you shuffle an infinite deck of distinct cards that’s uniformly distributed in a content-agnostic way, you can destroy the uniform distribution, for instance by doubling the probability that a specific card is in a position divisible by four.

I am inclined to take this as evidence that the whole concept of a “uniformly shuffled” infinite deck of cards is confused.

Monday, June 10, 2024

Computation

I’ve been imagining a very slow embodiment of computation. You have some abstract computer program designed for a finite-time, finite-space subset of a Turing machine. And now you have a big tank of black and white paint that is constantly being stirred in a deterministic way, but one that is some ways into the ergodic hierarchy: it’s weakly mixing. If you leave the tank for eternity, every so often the paint will make some seemingly meaningful patterns. In particular, on very rare occasions in the tank one finds an artistic drawing of the next step of the Turing machine’s functioning while executing that program—it will be a drawing of a tape, a head, and various symbols on the tape. Of course, in between these steps will be millennia of garbage.

In fact, it turns out that (with probability one) there will be some specific number n of years such that the correct first step of the Turing machine’s functioning will be drawn in exactly n years, the correct second step in exactly 2n years, the correct third one in exactly 3n years, and so on (remembering that there is only a finite number of steps, since we are working with a finite-space subset). (Technically, this is because weak mixing implies multiple weak mixing.) Moreover, each step causally depends on the preceding one. Will this be computation? Will the tank of paint be running the program in this process?

Intuitively, no. For although we do have causal connections between the state in n years and the next state in 2n years and so on, those connections are too counterfactually fragile. Let’s say you took the artistic drawing of the Turing machine in the tank at the first step (namely in n years) and you perturbed some of the paint particles in a way that makes no visible difference to the visual representation. Then probably by 2n years things would be totally different from what they should be. And if you changed the drawing to a drawing of a different Turing machine state, the every-n-years evolution would also change.

So it seems that for computation we need some counterfactual robustness. In a real computer, physical states define logical states in an infinity-to-one way (infinitely many “small” physical voltages count as a logical zero, and infinitely many “larger” physical voltages count as a logical one). We want to make sure that if the physical states were different but not sufficiently different to change the logical states, this would not be likely to affect the logical states in the future. And if the physical states were different enough to change the logical states, then the subsequent evolution would likely change in an orderly way. Not so in the paint system.
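The many-to-one mapping from physical voltages to logical states can be sketched like this (the thresholds are illustrative, loosely modeled on TTL logic levels, not anything from the post itself):

```python
def logical_state(voltage):
    """Map an analog voltage to a logical bit, many physical states
    to one logical state. Thresholds are illustrative (TTL-like)."""
    if voltage < 0.8:
        return 0      # infinitely many "small" voltages count as 0
    if voltage > 2.0:
        return 1      # infinitely many "larger" voltages count as 1
    return None       # undefined region between the thresholds
```

Small perturbations within a band leave the logical state unchanged, which is exactly the robustness the paint tank lacks.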

But the counterfactual robustness is tricky. Imagine a Frankfurt-style counterfactual intervener who is watching your computer while your computer is computing ten thousand digits of π. The observer has a very precise plan for all the analog physical states of your computer during the computation, and if there is the least deviation, the observer will blow up the computer. Fortunately, there is no deviation. But now with the intervener in place, there is no counterfactual robustness. So it seems the computation has been destroyed.

Maybe it’s fine to say it has been destroyed. The question of whether a particular physical system is actually running a particular program seems like a purely verbal question.

Unless consciousness is defined by computation. For whether a system is conscious, or at least conscious in a particular specific way, is not a purely verbal question. If consciousness is defined by computation, we need a mapping between physical states and logical computational states, and what that mapping is had better not be a purely verbal question.

Thursday, June 1, 2023

Might there have been less randomness earlier?

In my previous post, I noted that a branching view of possibility, when continued into an infinite past, leads to the counterintuitive consequence that there is less and less randomness the further back we go.

In this post I want to note that this counterintuitive consequence may in fact be right even with a finite past, given a certain interpretation of quantum mechanics.

Start with the naive consciousness-causes-collapse (ccc) interpretation of quantum mechanics. On naive ccc, at each moment of time, the laws of nature prevent the world from evolving into a superposition of states that differ with respect to consciousness. Thus, there cannot be a superposition between one’s feeling hot and one’s not feeling hot, or between a cat being aware of its surroundings and a cat being asleep or dead. This is assured by constant collapse with respect to a global consciousness operator C.

Unfortunately, as it stands this is untenable, because it corresponds to a setup where there is constant observation of C, and constant observation of an observable precludes change with respect to that observable by the quantum Zeno effect. In other words, if we had naive ccc, then conscious states would never change, which is empirically absurd.

Here is one way to fix this problem. Suppose that there are special moments in time, which I’ll poetically call “cosmic heartbeats”. Collapse with respect to C only occurs at cosmic heartbeats. If the cosmic “heart rate” is not very fast (i.e., the spacing between the heartbeats is big enough), then the quantum Zeno effect will be negligible, and we needn’t worry about it. And we hypothesize that consciousness only occurs at cosmic heartbeats.

But now let’s consider the history of our universe. In the early universe, the only way to get a non-empty consciousness state is by some ridiculously unlikely feat of quantum tunnelling generating a Boltzmann brain or the like. Thus the only randomness we will have in the early universe will be that induced by pruning away the components of the global wavefunction corresponding to such ridiculously unlikely feats. And that is only a tiny bit of randomness. But as things evolve, we get components of the wave function with significant weight corresponding to the evolution of various conscious critters. Now the periodic collapse will be “deciding” between states of comparable likelihood (e.g., life on earth versus life on some other planet formed from some of the same materials orbiting the sun) rather than just pruning away extremely unlikely options.

One would need to know a lot more physics (and perhaps neuroscience?) to figure out what the cosmic heart rate needs to be to make the theory work. An upper bound is given by the quantum Zeno effect: if the cosmic heart rate were too fast, the theory would predict a slowdown of consciousness that we do not observe. A lower bound is given by introspection: the cosmic heart rate had better be at least as fast as the speed at which our conscious states are observed to change.

I wonder if a similar decrease of randomness in the past wouldn’t be predicted by GRW collapse theories.

Tuesday, February 21, 2023

Achievement in a quantum world

Suppose Alice gives Bob a gift of five lottery tickets, and Bob buys himself a sixth one. Bob then wins the lottery. Intuitively, if one of the tickets that Alice bought for Bob wins, then Bob’s win is Alice’s achievement, but if the winning ticket is not one of the ones that Alice bought for Bob, then Bob’s win is not Alice’s achievement.

But now suppose that there is no fact of the matter as to which ticket won, but only that Bob won. For instance, maybe the way the game works is that there is a giant roulette wheel. You hand in your tickets, and then an equal number of depressions on the wheel gets your name. If the ball ends in a depression with your name, you win. But they don’t write your name down on the depressions ticket-by-ticket. Instead, they count up how many tickets you hand them, and then write your name down on the same number of depressions.

In this case, it seems that Bob’s win isn’t Alice’s achievement, because there is no fact of the matter that it was one of Alice’s tickets that got Bob his win. Nor does this depend on the probabilities. Even if Alice gave Bob a thousand tickets and Bob contributed only one, it seems that Bob’s win isn’t Alice’s achievement.

Yet in a world run on quantum mechanics, it seems that our agential connection to the external world is like Alice’s to Bob’s win. All we can do is tweak the probabilities, perhaps overwhelmingly so, but there is no fact of the matter about the outcome being truly ours. So it seems that nothing is ever our achievement.

That is an unacceptable consequence, I think.

I think there are two possible ways out. One is to shift our interpretation of “achievement” and say that Bob’s win is Alice’s achievement in the original case even when it was the ticket that Bob bought for himself that won. Achievement is just sufficient increase of probability followed by the occurrence of the thus probabilified event.

The second is heavy-duty metaphysics. Perhaps our causal activity marks the world in such a way that there is always a trace of what happened due to what. Events come marked with their actual causal history. Sometimes, but not always, that causal history specifies what was actually the cause. Perhaps I turn a quantum probability dial from 0.01 to 0.40, and you turn it from 0.40 to 0.79, and then the event happens, and the event comes metaphysically marked with its cause. Or perhaps when I turn the quantum probability dial, I imbue it with some of my teleology, and when you turn it, you imbue it with some of yours, and there is a fact of the matter as to whether an effect further down the line comes from your teleology or mine.

I find the metaphysical answer hard to believe, but I find the probabilistic one conceptually problematic.

Thursday, October 13, 2022

On monkeys and exemplar theories of salvation

On “exemplar” theories of salvation, Christ’s work of the cross saves us by providing a deeply inspiring example of love, sacrifice, or the like.

Such theories of salvation have the following unsavory consequence: they imply that it would be possible for us to be saved by a monkey.

For imagine that a monkey typing on a typewriter at random wrote a fictitious story of a life in morally relevant respects like that of Christ, and people started believing that story. If Christ saves us by providing an inspiring example, then we could have gotten the very same effect by reading that fictitious story typed at random by a monkey and erroneously thinking the story to be true.

Of course, that’s just a particularly vivid way of putting the standard objection against exemplar theories that they are Pelagian. I have nothing against monkeys except that they are creatures, and so if it is possible to be saved by a monkey, then it is possible to be saved by creatures, which is Pelagianism.

Tuesday, September 6, 2022

Trolleys and chaos

Suppose that determinism is true and Alice is about to roll a twenty-sided die to determine which of twenty innocent prisoners to murder. There is nothing you can do to stop her. You are in Alice’s field of view. Now, a die roll, even if deterministic, is very sensitive to the initial conditions. A small change in Alice’s throw is apt to affect the outcome. And any behavior of yours is apt to affect Alice’s throw. You frown, and Alice becomes slightly tenser when she throws. You smile, and Alice pauses a little wondering what you’re smiling about, and then she throws differently. You turn around not to watch, and Alice grows annoyed or pleased, and her throw is affected.

So it’s quite reasonable to think that whatever you do has a pretty good chance, indeed close to a 95% chance (19 out of 20), of changing which of the prisoners will die. In other words, with about 95% probability, each of your actions is akin to redirecting a trolley heading down a track with one person onto a different track with a different person.

Some people—a minority—think that it is wrong to redirect a trolley heading for five people to a track with only one person. I wonder what they could say should be done in the Alice case. If it’s wrong to redirect a trolley from five people to one person, it seems even more wrong to redirect a trolley from one person to another person. So since any discernible action is likely to effectively be a trolley redirection in the Alice case, it seems you should do nothing. But what does “do nothing” mean? Does it mean: stop all external bodily motion? But stopping all external bodily motion is itself an effortful action (as anybody who has played Lotus Focus on the Wii knows). Or does it mean: do what comes naturally? But if one were in the situation described, one would likely become self-conscious and unable to do anything “naturally”.

The Alice case is highly contrived. But if determinism is true, then it is very likely that many ordinary actions affect who lives and who dies. You talk for a little longer to a colleague, and they start to drive home a little later, which has a domino effect on the timing of people’s behaviors in traffic today, which then slightly affects when people go to sleep, how they feel when they wake up, and eventually likely affects who dies and who does not die in a car accident. Furthermore, minor differences in timing affect the timing of human reproductive activity, which is likely to affect which sperm reaches the ovum, which then affects the personalities of people in the next generation, and eventually affects who lives and who dies. Thus, if we live in a deterministic world, we are constantly “randomly” (as far as we are concerned, since we don’t know the effects) redirecting trolleys between paths with unknown numbers of people.

Hence, if we live in a deterministic world, then we are all the time in trolley situations. If we think that trolley redirection is morally wrong, then we will be morally paralyzed all the time. So, in a deterministic world, we had better think that it’s OK to redirect trolleys.

Of course, science (as well as the correct theology and philosophy) gives us good reason to think we live in an indeterministic world. But here is an intuition: when we deal with the external world, it shouldn’t make a difference whether we have real randomness or the quasi-randomness that determinism allows. It really shouldn’t matter whether Alice is flipping an indeterministic die or a deterministic but unpredictable one. So our conclusions should apply to our indeterministic world as well.

Thursday, September 24, 2020

Discrimination and coin tosses

Bob is deciding whom to hire for a job where race is clearly irrelevant to job performance. There are two clear front-runners. Bob hires the white front-runner because that candidate is white.

Bob has done something very wrong. Why was it wrong? A naive thought is that what he did wrong was to take into account something irrelevant to job performance while deciding whom to hire. But that can’t be right. For suppose that all the job-performance related facts were on par as far as Bob could tell. And then suppose that Alice when dealing with a similar case just said to herself “Heads, A, and tails, B”, tossed a coin, got tails, and hired candidate B. Alice didn’t do anything wrong. But Alice also made a decision on the basis of something irrelevant to job performance, namely whether the prior heads/tails assignment to a candidate matched the outcome of the coin toss.

In terms of deciding on irrelevancies, the paradigm of a fair tie-breaking procedure—a coin flip—and the paradigm of an unfair tie-breaking procedure—a racist decision—look very similar.

Here is a standard thing to say about this (cf. Scanlon): When the job-performance related facts are tied, and we still have to choose, we just have to choose on the basis of something not related to job performance. But that something had better not be something that forms the basis for large-scale patterns of dominance in society. Both Alice’s and Bob’s procedures are based on something not related to job performance, but Bob’s procedure is an instance of a large-scale social pattern of dominance.

I want to propose an account of why Bob did wrong and Alice did not that seems to me to differ slightly from the standard story (or maybe it just is a version of it). To that end, consider a third story. Carl runs a graduate program where he has to make lots of hard choices about current students, e.g., about travel funding, stipend renewal, lab and office allocation, etc., and these choices often involve ties on the usual academic metrics. (This is not a description of the Baylor philosophy program: we have lots of funding, and rarely if ever had to break ties regarding funding.) Carl is lazy and has decided to simplify things for himself by reducing the number of coin tosses he has to make. Instead, whenever a student is admitted, Carl chooses a random number between one and a thousand and assigns that number to the student, re-rolling the random number generator if that number matches the number of a student already in the program. Thereafter, whenever a tie is to be broken, Carl always breaks the tie in favor of the student with the higher pre-assigned number.
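Carl’s procedure can be sketched as follows (the class and method names are mine, chosen for illustration). Note how the randomness is exhausted at admission time, so every later tie between the same two students resolves the same way:

```python
import random

class Program:
    """A sketch of Carl's tie-breaking scheme."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.numbers = {}  # student -> permanent pre-assigned number

    def admit(self, student):
        n = self.rng.randint(1, 1000)
        while n in self.numbers.values():  # re-roll on a collision
            n = self.rng.randint(1, 1000)
        self.numbers[student] = n

    def break_tie(self, a, b):
        # Always favors the higher pre-assigned number, so the same
        # student loses every single tie for their whole career.
        return a if self.numbers[a] > self.numbers[b] else b
```

Random at the start, but systematically unequal ever after: that is the point of the example.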

Carl’s tie-breaking procedure is like Alice’s in terms of randomness and lack of alignment with larger social patterns of discrimination. But it’s still a terrible procedure. It’s terrible, because it distributes benefits and burdens in a seriously unequal way: if you got randomly assigned a low number at admission, you are stuck with it and keep on missing out on goodies that people assigned a high number got.

One can now explain what goes wrong in Bob’s procedure as something rather like what went wrong in Carl’s procedure: given structural racism, the minority candidate, call him Dave, passed over by Bob has tended to be on the negative side of many other decisions (some of them perhaps being racist tie-breaking decisions, and many of them being even more unjust than that). Bob’s procedure has contributed to Dave having a life of tending to get the short end of the stick, just as Carl’s procedure has left a number of students with a graduate career marked by a tendency to get the short end of the stick. And a tendency to get the short end of the stick is something we should (at least typically) not contribute to.

This is close to the standard account about Bob’s racism. It likewise involves the large-scale patterns of dominance in society. But it seems to me also importantly different: The large-scale patterns of dominance in society are relevant to Bob’s action insofar as they make it likely that Dave has been on the unfavorable side of too many decisions. In the graduate program case, there may be no larger social patterns that match the ones within the program (or at least not pre-existing ones), and even within the program there need not be any significant interpersonal patterns of dominance between the persons assigned high numbers and low numbers, especially if the initial numerical assignments and the tie-breaking procedure are kept secret from the students, who just say things like, “My luck is terrible!” (This is going to be most likely in a program where students are oblivious to their social environment due to a focus on their individual research.)

In the alternate account, the focus is on the individual rather than the group, and the larger social facts are relevant precisely as they have impacted the individual. But this may seem to miss out on a common dimension of invidious discrimination. If I am a member of a group and someone else in the group is unfairly discriminated against, then that is apt to harm me in two ways: first, because I am apt to have a special concern for other members of the group (either because they are members of the group, or because persons more closely related to me tend to be members of the group), and harm to someone I have a special concern for is harm to me, and, second, because seeing someone like me get harmed scares me.

But I think this fits with my individualistic story by just multiplying the number of times that Dave gets the short end of the stick: sometimes he gets the short end of the stick directly and sometimes he gets it indirectly by having someone else in his group get it.

At the same time, I have to say that this is material I know next to nothing about. Take it with a grain of salt.

Wednesday, January 17, 2018

Free will, randomness and functionalism

Plausibly, there is some function from the strengths of my motivations (reasons, desires, etc.) to my chances of decision, so that I am more likely to choose that towards which I am more strongly motivated. Now imagine a machine I can plug my brain into such that when I am deliberating between options A and B, the machine measures the strengths of my motivations, applies my strengths-to-chances function, randomly selects between A and B in accordance with the output of the strengths-to-chances function, and then forces me to do the selected option.
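The imagined machine can be sketched as follows, with proportional chances standing in for the strengths-to-chances function (the proportional rule is my illustrative assumption, not part of the thought experiment):

```python
import random

def decision_machine(strength_a, strength_b, rng=random):
    """Sketch of the imagined machine: measure the motivation strengths,
    apply a strengths-to-chances function (here, simple proportionality),
    and randomly select an option with the resulting chances."""
    p_a = strength_a / (strength_a + strength_b)
    return "A" if rng.random() < p_a else "B"
```

The point of the thought experiment is that, from the outside, the chances of the outcomes are exactly the same as in an ordinary free decision.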

Here then is a vivid way to put the randomness objection to libertarianism (or more generally to a compatibilism between freedom and indeterminism): How do my decisions differ from my being attached to the decision machine? The difference does not lie in the chances of outcomes.

That the machine is external to me does not seem to matter. For we could imagine that the machine comes to be a part of me, say because it is made of organic matter that grows into my body. That doesn’t seem to make any difference.

But when the randomness problem is put this way, I am not sure it is distinctively a problem for the libertarian. The compatibilist has, it seems, an exactly analogous problem: Why not replace the deliberation by a machine that makes one act according to one’s strongest motivation (or, more generally, whatever motivation it is that would have been determined to win out in deliberation)?

This suggests (weakly) that the randomness problem may not be specific to libertarianism, but may be a special case of a deeper problem that both compatibilists and libertarians face.

It seems that both need to say that it deeply matters just how the decision is made, not just its functional characteristics. And hence both need to deny functionalism.

Monday, January 15, 2018

If computers can be free, compatibilism is true

In this post I want to argue for this:

  1. If a computer can non-accidentally have free will, compatibilism is true.

Compatibilism here is the thesis that free will and determinism can both obtain. My interest in (1) is that I think that compatibilism is false, and hence I conclude from (1) that computers cannot non-accidentally have free will. But one could also use (1) as an argument for compatibilism.

Here’s the argument for (1). Assume that:

  2. Hal is a computer non-accidentally with free will.

  3. Compatibilism is false.

Then:

  4. Hal’s software must make use of an indeterministic (true) random number generator (TRNG).

For the only indeterminism that non-accidentally enters into a computer (i.e., not merely as a glitch in the hardware) is through TRNGs.

Now imagine that we modify Hal by outsourcing all of Hal’s use of its TRNG to some external source. Perhaps whenever Hal’s algorithms need a random number, Hal opens a web connection to random.org and requests a random number. As long as the TRNG is always truly random, it shouldn’t matter for anything relevant to agency whether the TRNG is internal or external to Hal. But if we make Hal function in this way, then Hal’s own algorithms will be deterministic. And Hal will still be free, because, as I said, the change won’t matter for anything relevant to agency. Hence a deterministic system can be free, contrary to (3). Hence (2) and (3) are not going to be both true, and so we have (1).

We perhaps don’t even need the thought experiment of modifying Hal to argue for a problem with (2) and (3). Hal’s actions are at the mercy of the TRNG. Now, the output of the TRNG is not under Hal’s rational control: if it were, then the TRNG wouldn’t be truly random. So Hal’s free actions hinge on something outside his rational control.

Objection 1: While Hal’s own algorithms, after the change, would be deterministic, the world as a whole would be indeterministic. And so one can still maintain a weaker incompatibilism on which freedom requires indeterminism somewhere in the world, even if not in the agent.

Response: Such an incompatibilism is completely implausible. Being subject to random external vagaries is no better for freedom than being subject to determined external vagaries.

Objection 2: It really does make a big difference whether the source of the randomness is internal to Hal or not.

Response: Suppose I buy that. Now imagine that we modify Hal so that at the very first second of its existence, before it has any thoughts about anything, the software queries a TRNG to generate a supply of random numbers sufficient for all subsequent algorithmic use. Afterwards, instead of calling on a TRNG, Hal simply takes one of the generated random numbers. Now the source of randomness is internal to Hal, so he should be free. And, strictly speaking, Hal thus modified is not a deterministic system, so he is not a counterexample to compatibilism. However, an incompatibilism that allows for freedom in a system all of whose indeterminism happens prior to any thoughts that the system has is completely implausible.
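The modification described in this response can be sketched like so (a seeded pseudo-random generator stands in for the TRNG, and all names are mine):

```python
import random

class Hal:
    """Sketch of the modified Hal: all randomness is drawn once, up front,
    before any 'thoughts'; everything afterwards is deterministic."""

    def __init__(self, pool_size=1000, seed=None):
        trng = random.Random(seed)  # stand-in for a true RNG
        self.pool = [trng.random() for _ in range(pool_size)]
        self.index = 0

    def next_random(self):
        # Deterministic after initialization: just reads the stored pool.
        x = self.pool[self.index]
        self.index += 1
        return x
```

Two instances initialized with the same pool behave identically forever afterwards, which is exactly why the subsequent operation counts as deterministic.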

Objection 3: The argument proves too much: it proves that nobody can be free if compatibilism is false. For whatever the source of indeterminism in an agent is, we can label that “a TRNG”. And then the rest of the argument goes through.

Response: This is the most powerful objection, I think. But I think there is a difference between a TRNG and a free indeterministic decision. In an indeterministic free computer, the reasons behind a choice would not be explanatorily relevant to the output of the TRNG (otherwise, it’s not truly random). We will presumably have some code like:

if (TRNG() < weightOfReasons(A)/(weightOfReasons(A)+weightOfReasons(B))) {
   do A
}
else {
   do B
}

where TRNG() is a function that returns a truly random number from 0 to 1. The source of the indeterminism is then independent of the reasons for the options A and B: the function TRNG() does not depend on these reasons. (Of course, one could set up the algorithm so that there is some random permutation of the random number based on the options A and B. But that permutation is not going to be rationally relevant.) On the other hand, an agent truly choosing freely does not make use of a source of indeterminism that is rationally independent of the reasons for action—she chooses indeterministically on the basis of the reasons. How that’s done is a hard question—but the above arguments do not show it cannot be done.

Objection 4: Whatever mechanism we have for freedom could be transplanted into a computer, even if it’s not a TRNG.

Response: It is central to the notion of a computer, as I understand it, that it proceeds algorithmically, perhaps with a TRNG as a source of indeterminism. If one transplanted whatever source of freedom we have, the result would no longer be a computer.

Friday, March 24, 2017

Authorless books

I've been imagining a strange scenario. I come across a text that I know for sure was generated by an entirely random process--say, the proverbial monkeys at the typewriter. I look at it. Mirabile dictu, it's coherent and reads just like a literary masterpiece--let's say it's just like something Tolstoy would have written had he written one more novel at the peak of his creative powers.

I think reading this random text could be a disquieting experience. I could read it shallowly, the way one reads some novels for mere entertainment. And in that context, it would work just as well as shallowly reading a real novel. But of course with a masterpiece, one wants to read it more deeply. In doing so, one draws connections between different parts of the text ("Oh! So that's what that foreshadowed!" or "Ah, so that's why she did that!"), between the content and the mode of expression ("Look at all these short words describing the rapidity of the march"), between what is overtly in the text and other texts, ideas, historical events and persons, etc. Drawing such connections--whether explicitly or just as a barely conscious sensation of something there--is a part of the enjoyment of reading a literary masterpiece, when done in moderation. But in our random text, all connections are merely coincidental. Nothing is there on purpose, not even unconsciously. When we read a literary giant like Plato or Tolstoy and see a compelling connection, we have good reason to think the author meant it to be there, and that sensing the connection is a part of a good reading of the text. But in the random text, there will be no such thing as a good reading or a misreading. And that would have to be disquieting. There is a sense in which we would be inventing all the connections. Reading would be more like creating than like discovering. I suppose death-of-the-author people think that's already the case with normal novels. But I don't think so. Real connections differ from chance ones.

At the same time, I think that in practice if I were reading this text which is just like a literary masterpiece, I'd end up suspending my disbelief about the author, and just delight in the connections and subtleties, even if they are merely apparent.

But maybe in a world with God there is no true randomness. So maybe the hypothesis of a book where nothing is intended is impossible?

Sunday, March 5, 2017

Super-simple fractal terrain generator

Here's a very simple algorithm for generating random terrains: Start with an equilateral triangle in the x-y plane. Add a uniformly and symmetrically distributed random z-offset to each vertex. Bisect each resulting edge and add a random z-offset at the newly added point, half the magnitude of the previous offsets. Repeat several times.

The algorithm is no doubt known, but some quick searches for terrain generation algorithms didn't turn it up, so I am posting for people's convenience.

There are some artifacts at internal triangle boundaries that better algorithms presumably don't have, but the algorithm is super simple to implement, and because it is based on triangles it directly makes a triangular mesh. Here is some code which does this, starting with a hexagon divided into six equilateral triangles, and putting out an STL file.
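A minimal Python sketch along these lines (the function names `terrain` and `write_stl`, the hexagon setup, and the ASCII STL output are choices of this sketch, not the original code; midpoints are cached per edge so adjacent triangles get the same offset and the mesh stays stitched together):

```python
import math
import random

def terrain(levels=5, amp=0.5, seed=None):
    """Fractal terrain: a hexagon split into six equilateral triangles,
    subdivided repeatedly with random z-offsets halved at each level."""
    rng = random.Random(seed)
    # Hexagon vertices in the x-y plane, each with a random initial z-offset.
    hexpts = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3),
               rng.uniform(-amp, amp)) for k in range(6)]
    center = (0.0, 0.0, rng.uniform(-amp, amp))
    tris = [(center, hexpts[k], hexpts[(k + 1) % 6]) for k in range(6)]
    a = amp
    for _ in range(levels):
        a /= 2.0  # halve the offset magnitude at each level
        cache = {}  # midpoints keyed by edge, shared between adjacent triangles

        def mid(p, q):
            key = (min(p, q), max(p, q))
            if key not in cache:
                cache[key] = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2,
                              (p[2] + q[2]) / 2 + rng.uniform(-a, a))
            return cache[key]

        new = []
        for p, q, r in tris:
            pq, qr, rp = mid(p, q), mid(q, r), mid(r, p)
            # Each triangle becomes four smaller ones.
            new += [(p, pq, rp), (pq, q, qr), (rp, qr, r), (pq, qr, rp)]
        tris = new
    return tris

def write_stl(tris, path):
    # ASCII STL output; zero normals are accepted by most viewers.
    with open(path, "w") as f:
        f.write("solid terrain\n")
        for p, q, r in tris:
            f.write("facet normal 0 0 0\nouter loop\n")
            for v in (p, q, r):
                f.write("vertex %g %g %g\n" % v)
            f.write("endloop\nendfacet\n")
        f.write("endsolid terrain\n")
```

After n subdivision levels the mesh has 6·4^n triangles, so a handful of levels already gives a reasonably detailed surface.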


Tuesday, January 10, 2017

Analogue jitter in motivations and the randomness objection to libertarianism

All analogue devices jitter on a small time-scale. The jitter is for all practical purposes random, even if the system is deterministic.

Suppose now that compatibilism is true and we have a free agent who is determined to always choose what she is most strongly motivated towards. Now suppose a Buridan’s ass situation, where the motivations for two alternatives are balanced, but where the motivations were acquired in the normal way human motivations are, where there is an absence of constraint, etc.

Because of analogue jitter in the brain, sometimes one motivation will be slightly stronger and sometimes the other will be. Thus which way the agent will choose will be determined by the state of the jitter at the time of the choice. And that’s for all practical purposes random.

Either in such cases there is freedom or there is not. If there is no freedom in such cases, then the compatibilist has to say that people whose choices are sufficiently torn are not responsible for their choices. That is highly counterintuitive.

The compatibilist’s better option is to say that there can still be freedom in such cases. It’s a bit inaccurate to say that the choice is determined by the jitter. For it’s only because the rough values of the strengths of the motivations are as they are that the jitter in their exact strength is relevant. The rough values of the strengths of the motivations are explanatorily relevant, regardless of which way the choice goes. The compatibilist should say that this kind of explanatory relevance is sufficient for freedom.

But if she says this, then she must abandon the randomness objection against libertarianism.

Wednesday, June 8, 2016

Counterfactuals and the randomness objection to libertarianism

The randomness objection to libertarian free will says that undetermined choices could only be random rather than reason-governed.

I want to consider a bit of a reply to this. Suppose that you are choosing between A and B. You have a reason R for A and a reason S for B, and you freely end up choosing A. I think the following will be true, and I think the libertarian can say that they are true as well:

  1. If the reason R for A were stronger, you'd still have chosen A.
  2. If the reason S for B were weaker, you'd still have chosen A.
  3. If the reason R for A were noticeably weaker, you might not have chosen A.
  4. If the reason S for B were noticeably stronger, you might not have chosen A.
If this is right, then there is a real counterfactual dependence of your action on the reasons. The dependence isn't as precise as the compatibilist's dependence. The compatibilist's story may allow for precise values of strengths of reasons that produce counterfactuals like (3) and (4) with quantitative antecedents and "would" rather than "might". Still, I don't think anything so precise is needed for reasons-governance of our actions.

So, I think that if I am right that the libertarian can reasonably affirm (1)-(4), then the randomness objection fails. Of the four, I don't think there is any difficulty with (3) and (4): even if there were pure randomness, we would expect (3) and (4) to be true. So the question is: Can the libertarian affirm (1) and (2)? And I think (1) and (2) are in the same boat, so really the question is whether the libertarian can affirm (1).

And I say: Why not? At the same time, I know that when I've talked with fellow libertarians about this, they've been pretty sceptical about counterfactuals like (1). Their main reason for scepticism was van Inwagen's re-run argument: In indeterministic free choice situations, if you repeated the same circumstances, you'd get different results. And you'd expect to get different results in repeat runs even if you somewhat raised the strength of the reasons for A.

I agree with the re-run intuition here, but I don't see it as incompatible with (1). The re-run intuition is about what we would get at a later time, albeit in a situation that is qualitatively the same. But (1) is about what would have happened at the time you made the original choice, albeit in a situation that was tweaked to favor A more.

Friday, September 11, 2015

Randomness and compatibilism

The randomness objection to libertarian free will holds that undetermined choices will be random and hence unfree. Some randomness-based objectors to libertarianism are compatibilists who think free will is possible, but requires choices to be determined (e.g., David Hume). Others think that free will is impossible (cf. Galen Strawson). I will offer an argument against the Humeans, those who think that freedom is possible but it requires determinism for the relevant mental events. Consider three cases of ordinary human-like agents who have not suffered from brainwashing, compulsion, or the like:

  1. Gottfried always acts on his strongest relevant desire when there is one. In cases of a tie between desires, he is unable to make a choice and his head literally explodes. Determinism always holds.
  2. Blaise always acts on his strongest relevant desire when there is one. In cases of a tie between desires, his brain initiates a random indeterministic process to decide between the desires. Determinism holds in all other cases.
  3. Carl always acts on his strongest relevant desire when there is one. In cases of a tie between two desires, his brain unconsciously calculates one more digit of π, and if it's odd the brain makes him go for the first desire (as ordered alphabetically in whatever language he is thinking in) and if it's even for the second desire (with some generalization in case of an n-way tie for n>2). Determinism always holds.

Gottfried isn't free in cases of ties between desires--he doesn't even make a choice. Our Humean must insist that Blaise isn't free, either, in those cases, because although Blaise does decide, his decision is simply random. What about Carl? Well, Carl's choices are determined, which the Humean likes. But they are nonetheless to all intents and purposes random. A central part of the intuition that Blaise isn't free has to do with Blaise having no control over which desire he acts on, since he cannot control the indeterministic process. But Carl has no control over the digits of π and these digits are, as far as we can tell, essentially random. The randomness worry that is driving the Humean's argument that freedom requires determinism is not fundamentally a worry about indeterminism. That is worth noting.

Now let's go back to Gottfried. Given compatibilism it is plausible that in normal background conditions, all of Gottfried's choices are free. (Remember that if there is a tie, he doesn't make a choice.) Suppose we grant this. Then there is a tension between this judgment and what we observed about Carl. For consider the case of closely-balanced choices by Gottfried. Suppose, for instance, Gottfried's desire to write a letter to Princess Elizabeth has strength 0.75658 and his desire to design a better calculator has strength 0.75657. He writes a letter to Princess Elizabeth, then, and does so freely by what has been granted. But now notice that our desires always fluctuate in the light of ordinary influences, and a difference of one in the fifth significant figure in a measure of the strength of a desire will be essentially a random fluctuation. The fact that this fluctuation is determined makes no difference, as we can see when we recall the case of Carl. So if we take seriously what we learned from the case of Carl, we need to conclude that Gottfried isn't actually free when he chooses between writing to Princess Elizabeth and designing a better calculator, even though he satisfies standard compatibilist criteria and acts on the basis of his stronger desire.

What should the Humean do? One option is to accept that Gottfried is free in the case of close decisions, and then conclude that so are Carl and Blaise in the case of ties. I think the resulting position may not be very stable--if compatibilism requires one to think Carl and Blaise are free in the case of ties, then compatibilism is no longer very plausible.

Another option is to deny that Gottfried is free in the case of close decisions. By parallel, however, she would need to deny that we are free in the case of highly conflicted decisions, unless she could draw some line between our conflicts and Gottfried's fifth-significant-figure conflict. And that's costly.

Finally, it's worth noting that the objection against the incompatibilist, whatever it may be worth, that we shouldn't need to wait on science to see whether we're free, also works against our Humean.

Saturday, August 2, 2014

Randomness and freedom

Consider cases where your decision is counterfactually dependent on some factor X that is not a part of your reasons and is outside of your (present or past) rational control. The kind of dependence that interests me is this:

  • In the presence of X, you decided on A, but had X been absent, you would have decided on B on the basis of the same set of reasons.
It's important here that X isn't just an enabler of your making a decision, nor is it one of the reasons—your reasons are the same whether X is present or not—but is an extrarational difference-maker for your action.

As far as rationality is concerned, these are cases of randomness. It doesn't matter whether X's influence is deterministic or not: the cases are random vis-à-vis reason.

In these cases, the best contrastive explanation of your decision is in terms of your reasons and X. And the counterfactual dependence on X, which is outside of your control, puts your freedom into question.

I think many cases of conflicted decisions have the following property:

  1. If determinism is true, then the case involves such counterfactual dependence on a factor outside of one's reasons and rational control.
But I also think that:
  2. Some of these cases are also cases of responsibility.
It follows that:
  3. Responsibility is compatible with such counterfactual dependence
or:
  4. Determinism is false.
If (3) is true, then a fortiori the kind of causal underdetermination that is posited by event-causal libertarians does not challenge freedom.

I think the right conclusion to draw is (4). I think the counterfactual dependence here does indeed remove freedom. But I do not think the mere absence of a determiner like X is enough for freedom. Something needs to be put in the place of X. What? The agent! The problem with X is that it usurps the place of the agent. Thus I am inclined to think that freedom requires agent causation. I didn't see this until now.

Saturday, July 12, 2014

Responsibility and randomness

Consider this anti-randomness thesis that some compatibilists use to argue against libertarianism:

  1. If given your mental state you're at most approximately equally likely to choose A as to choose B, you are not responsible for choosing A over B.
Note that being in such a state of mind is compatible with determinism, since even given determinism one can correctly say things like "The coin is equally likely to come up tails as heads."

Thesis (1) is false. Here's a counterexample. Consider the following family of situations, where your character is fixed between them: You choose whether to undergo x hours of torture in order to save me from an hour of torture. If x=0.000001, then I assume you will be likely to choose to save me from the torture—the cost is really low. If x=10, then I would expect you to be very unlikely to save me from the torture—the cost is disproportionate. Presumably as x changes between 0.000001 and 10, the probability of your saving me changes from close to 1 to close to 0. Somewhere in between, at x=x1 (I suppose x1=1, if you're a utilitarian), the probability will be around 1/2. By (1), you wouldn't be responsible for choosing to undergo x1 hours of torture to save me from an hour of torture. But that's absurd.

Thus, anybody who believes in free will, compatibilist or incompatibilist, should deny (1).

Now, let's add two other common theses that get used to attack libertarianism:

  2. If a choice can be explained by antecedent mental conditions that give it at most approximately probability 1/2, a contrastive explanation of that choice cannot be given in terms of antecedent mental conditions.
  3. One is only responsible for a choice if one can give a contrastive explanation of it in terms of antecedent mental conditions.
Since (2) and (3) imply (1), and (1) is false, it follows that at least one of (2) and (3) must be rejected as well.

There is an independent argument against (1). The intuition behind (1) is that responsibility requires that a choice be more likely than its alternative. But necessarily God is responsible for all his choices. And surely it was possible in at least one of his choices for him to have chosen otherwise (otherwise, how can he be omnipotent?). If the choice he actually made was not more likely than the alternative, then by the intuition he was not responsible. But God is always responsible. Suppose, then, that the choice he actually made was more likely than the alternative. Nonetheless, he could have made the alternative choice, and had he done so, he would have made the choice that was less likely than its alternative, and by the intuition he wouldn't have been responsible, which again is impossible. Thus, the theist must reject the intuition.

Tuesday, November 19, 2013

Manipulation, randomness and responsibility

Suppose you chose A over B, but that through minor changes in your circumstances, changes that at most slightly rationally affect the reasons for your decision and that do not intervene in your mental functioning, I could reliably control whether you chose A or whether you chose B. For instance, maybe I could reliably get you to choose B by being slightly louder in my request that you do A, and to choose A by being slightly quieter. In that case your choice is in effect random—the choice is controlled by features that from the point of view of your rational decision are random—and your responsibility slight.

Now suppose you are a friend of mine. To save my life, you would need to make a sacrifice. There is a spectrum of possible sacrifices. At the low end, you need to spend five minutes in my company (yes, it gets worse than that!). At the high end, you and everybody else you care about are tortured to death. With the required sacrifice being at the low end, of course you'd make the sacrifice for your friend. But with the required sacrifice being at the high end, of course you wouldn't. Now imagine a sequence of cases with greater and greater sacrifice. As the sacrifice gets too great, you wouldn't make it. Somewhere there is a critical point, a boundary between the levels of sacrifice you would undertake to save my life and ones you wouldn't. This critical point is where the reasons in favor of the sacrifice and those against it are balanced.

Speaking loosely, as the degree of required sacrifice increases, the probability of your making that sacrifice goes down. The "probability" here is something rough and frequentist, compatible with determinism. If determinism is true, however, in each precise setup around the critical point, there is a definite fact of the matter as to what you would do. And there are two possibilities about your character:

  1. You have a neat and rational character, so that for all sacrifices below the critical level, you'd do it, and for all the sacrifices above the critical level, you wouldn't do it.
  2. At around the critical value, whether you make the sacrifice or not comes to be determined not by the degree of sacrifice but by irrational factors—what shoes I'm wearing, how long ago you had lunch, etc.
I suspect that in most realistic cases we'd have (2). But on both options, we have effective randomness: your action can be controlled through minor changes in your environment that at most slightly affect your reasons. For instance, in option (1), where you are simply rationally going by the strength of the reasons, the slightest tipping of the scales will do the job—you'll undergo 747.848 minutes of torture but not 747.849. And in option (2), non-rational factors that have only a slight rational effect, or no rational effect, control your action. In both cases, your choice can be controlled. By the principle I started the post with, around the critical point you couldn't be very responsible.

But surely you would be very praiseworthy for undertaking a great sacrifice to save my life, especially around the critical point. That the sacrifice is so great that we're very near the point where the reasons are balanced does nothing to diminish your responsibility. If anything, it increases your praiseworthiness. Thus determinism is false.

This is not an argument for incompatibilism. I am not arguing here that responsibility is incompatible with determinism. I am arguing that having full responsibility around the critical level is incompatible with determinism.

Tuesday, September 11, 2012

Defending infinitary frequentism from some arguments

Frequentism defines probabilities in terms of long-term frequencies of outcomes. This doesn't work very well with finite frequencies—for one, it conflicts with physics, in that finite-sequence frequentism can only yield rational numbers as probabilities, while quantum physics is quite happy to yield irrational ones. As a result, frequentism is often extended to suppose a hypothetical infinite sequence of data for defining frequencies. Alan Hajek has a paper that gives fifteen arguments against such a frequentism. Fourteen of them are strong arguments against standard hypothetical frequentism (I am unmoved by argument 15 involving infinitesimals, since I doubt that infinitesimal probabilities are much use to us).

But it turns out that one can formulate a frequentism that escapes or partly escapes six of Hajek's arguments.

A representative of these six arguments is the observation going back to De Finetti that probabilities defined via frequencies fail to satisfy the Kolmogorov Axioms (arguments 13 and 14). But my modified frequentist probabilities satisfy the Kolmogorov Axioms.

For simplicity, our data will be real-valued, but the extension to Rn is easy. Let s=(sn) be our sequence of real numbers. For any subset A of R, let Fn(s,A) be the proportion of s1,...,sn that lie in A. Let L(s,A) be the limit of Fn(s,A) if that limit exists; otherwise L(s,A) is undefined.

Say that a (classical) probability measure m on the Borel subsets of R fits s provided that for all subintervals I of R, L(s,I) is defined and L(s,I)=m(I).

If there is a probability measure m that fits s, let a frequentist probability measure Ps defined by s be the completion of m (the completion of a measure makes every subset of a null set measurable, with measure zero).

Proposition:

  1. If s defines a frequentist probability measure, it defines a unique frequentist probability measure.
  2. Suppose that P is a probability measure and X=(Xn) is a sequence of independent identically distributed random variables. Let P1 be the measure on R defined by P1(A)=P(X1 in A). Then with probability one, X defines a frequentist probability measure on R which coincides with P1.

Because Ps is an ordinary Kolmogorovian probability measure, Hajek's arguments 13 and 14 do not apply. Argument 15 is anyway not very convincing, but is weakened since our version of frequentism handles the case of a dart thrown at [0,1] about as well as one can expect classical probabilities to handle it. (There is also a tension between arguments 13-14 and 15, in that probabilities involving infinitesimals are unlikely to be Kolmogorovian.) It is plausible that our frequentist probability measure will provide frequencies only when there is a probability, which makes argument 8 not apply, and non-uniqueness worries from argument 4 are ruled out by (1). I think the frequentist can bite the bullet on arguments 5 and 6, whether with standard frequentism or our modified version, given that the problem occurs only with probability zero.

Remark: The big philosophical problem with this is the reliance on intervals.

Quick sketch of proof:

Claim (1) is easy, because two Borel measures that agree on all intervals agree everywhere.

Claim (2) is proved by letting S be the collection of (a) all intervals with rational numbers as endpoints and (b) all singletons with non-zero P1 measure, and using the Strong Law of Large Numbers to see that for each member A of S, almost surely L(X,A) exists and equals P1(A). But since S has countably many members (obvious in the case of the intervals, but also easy in the case of the singletons), almost surely L(X,A) exists for every A in S. Moreover, almost surely, no singleton with null P1 measure will be hit by infinitely many of the Xn, and hence L(X,A) will be defined and equal to zero for all such singletons.

Thus there is a set W of P-measure one such that on W, L(X,A) exists and equals P1(A) for every A that is either an interval with rational-number endpoints or a singleton. Approximating any other interval A from below and above with monotone sequences of rational-number-ended intervals, plus or minus one or two singletons, we can show that L(X,A) exists and equals P1(A) for every interval, everywhere on W.
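Claim (2) is easy to illustrate numerically. A small Python sketch (the standard normal distribution, the sample size, and the test interval are all arbitrary choices for the illustration): the empirical interval frequencies Fn(s,I) of an i.i.d. sample settle down to the measure of I, as the Strong Law predicts.

```python
import random

def interval_freq(xs, a, b):
    # F_n(s, (a, b]): the proportion of the sample that falls in the interval
    return sum(a < x <= b for x in xs) / len(xs)

rng = random.Random(0)
# An i.i.d. standard normal sample plays the role of the sequence X = (X_n).
xs = [rng.gauss(0.0, 1.0) for _ in range(200_000)]

# For the standard normal, m((-1, 1]) is about 0.6827, so at this sample
# size the empirical frequency should agree to roughly two decimal places.
f = interval_freq(xs, -1.0, 1.0)
```

Since the frequentist measure is pinned down by its values on intervals (claim (1)), checking frequencies on intervals like this is all the fitting relation requires.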

Sunday, July 24, 2011

More on chance and compatibilism

This is another attempt at defending the main point of this post, that the randomness objection is also problematic for compatibilists.

Compatibilism is of merely academic interest unless the freedom or responsibility whose compatibility with determinism is being defended is close enough to the kind of freedom we have. For instance, a freedom or responsibility whose compatibility with determinism is assured by supposing time travel or backwards causation is not close enough to ours to be of great interest, except academically.

Now consider this thesis:

  1. Many of the actions that we are responsible for are significantly causally affected by factors that have little to do with the ingredients in a compatibilist decision theory.
Consider, for instance, the fact that surely in the ordinary course of things, judges are responsible for their decisions. But a fairly recent study found that judges have a 65% chance of granting parole shortly after a food break, and a close to zero chance at the end of a period of not eating. Yet, except in extreme cases (e.g., a judge who had been starving for days), we would surely hold judges responsible for their decisions at both times. Effects like this are, I suspect, not at all uncommon.

But a decision to a significant extent determined by such causal factors is no less "a matter of chance" than a libertarian-free indeterministic choice is. This suggests that compatibilists cannot afford to wield the randomness objection against libertarians, unless they want to say that freedom is much more rare than we normally think.

What can compatibilists say? Well, they can go agent-causal and say that, nonetheless, the action is caused by the agent, and that makes it free. Or they can say that the desires of the agent play a significant part in the decision, and that that is enough to make the action be an action of the agent, rather than a mere matter of chance. But the libertarian can make either move.

Wednesday, August 13, 2008

Molinist evolutionary theory

Molinist evolutionary theory (MET) holds that evolutionary theory is correct and based on genuinely random processes. Nonetheless, according to MET, these processes are guided by God. For each random transition (e.g., a random mutation, recombination or selection event) has associated with it a subjunctive conditional of the form "if circumstances C were to occur, then transition T would occur". God non-trivially knows the truth values of all such conditionals, and created the world so as to ensure a sequence of circumstances C that, given the conditionals he knew, would result in a sequence of transitions that fits with his plan.

I have argued elsewhere (a version of this has appeared in Philosophia Christi) that this story undercuts the statistical explanations that evolution needs. Here I want to point out a second issue. We know the probabilities of outcomes of processes in nature essentially by looking at frequencies of outcomes[note 1]. But, almost surely[note 2], a Molinist God can get any sequence of outcomes he wants by tweaking the circumstances appropriately. If a coin is to be flipped a million times, a Molinist God can make them all come out heads not by intervening in the flips, but by ensuring that the conditions C in which the flips happen are such as to make true appropriate conditionals of the form "C→heads".

Given the existence of a Molinist God, one might expect, or so Mike Almeida has argued, observed frequencies that do not match the probabilities involved in the processes. In fact, this might even give rise to an interesting prediction: given a Molinist God, we might expect the more needy to be disproportionately represented among lottery winners, since it seems not unlikely that God would want to choose initial conditions to favor them. If this line of reasoning is right, then given the existence of a Molinist God, the frequencies we observe should not reflect the probabilities of the underlying physical processes. But if so, then our knowledge of the probabilities of the underlying physical processes is undercut. And this is surely a problem for MET, not because it falsifies evolutionary theory, but because it undercuts it epistemically, making it impossible for us to know the probabilistic claims on which evolutionary theory is based.

Suppose, on the other hand, our Molinist rejects the Almeida argument, and holds that even given a Molinist God, the observed frequencies will match the probabilities of the underlying physical processes, perhaps because God would want them to match in order to be a self-effacing creator, or to let us engage correctly in inductive reasoning. In that case, the following is still true. The observed frequencies are not directly evidence for the probabilities of the underlying physical processes. They are only indirectly evidence given some assumptions about how one expects God to act.

Here is another way to put this. On the Molinist view, there is a defeater to our knowledge of probabilities on the basis of frequencies: the frequencies come from God's decision as to the antecedents of conditionals. A controversial thesis about how God chooses to act, if substantiated, would provide a defeater for this defeater. This makes knowledge of probabilities of physical processes rather more roundabout than we think it is. Moreover, I am not clear whether on this view an atheist can know any claims about these probabilities, since God's contingent decision to make the frequencies match the probabilities seems to play a central role.