Showing posts with label determinism. Show all posts

Thursday, September 18, 2025

Appearances of atemporality

Here’s a way to think about presentism. We have a three-dimensional reality and temporal modal operators, such as Prior’s P, F, H and G (pastly, futurely, always-pastly and always-futurely), with appropriate logical rules.

But now imagine this scenario: Reality consists of an eternally frozen time slice from a complex two-way-deterministic Newtonian world resembling our world—“frozen Newtonian world” for short. There are particles at various positions with various momenta (I assume that q = m dx/dt is a law of nature rather than a definition of momentum, and momentum is fundamental), but they are ever still, unchanging in their Newtonian properties. Now if Q is one of the operators P, F, H and G, define the operator Q′ as follows:

  • Q′p if and only if either (a) reality consists of an eternally frozen Newtonian world and in an unfrozen Newtonian world that agrees with reality on its present slice we would have Qp or (b) reality does not consist of an eternally frozen Newtonian world and we have Qp.

Then the primed operators behave logically just like the unprimed ones. But according to the primed operators, “there is change and motion”. For instance, for a typical particle z eternally frozen at location x, it will be the case that P′(z is not at x) and F′(z is not at x), since in an unfrozen Newtonian world whose present slice is just like our world’s, there will be past and future times at which z is not at x.

So what? Well, the presentist’s big intuition is that on a B-theory of time there is no change: there is just a four-dimensional block, where one dimension just happens to be called “time” and things are different at different locations along that dimension, but simply calling a dimension “time” doesn’t make it any different from the spatial ones.

But the B-theorist can respond in kind. On presentism there is no change: there is just a three-dimensional world related to other three-dimensional worlds via certain modal operators that are called “pastly”, etc., but simply calling a modal operator P “pastly” doesn’t make it any different from deviant operators like P′.

The presentist will, presumably, say that it’s not just a matter of calling the operator P “pastly”, but rather the operator is the primitive pastly operator, so that if something pastly is different from how it’s now, it has changed. But now the B-theorist can just respond in the same way. It’s not just a matter of calling one of the four dimensions “time”, but rather the dimension is the primitive time dimension, so that if something is different at a different time from how it is now, it has changed or will change.

(Granted, a B-theorist does not need to think that the distinction between space and time is primitive. I am inclined not to, and hence I cannot make the above response.)

Presumably, even after this dialogue, the B-theorist’s alleged reality will look static to the presentist. But I think the presentist’s alleged reality can look atemporal to the B-theorist—it’s just a three-dimensional world with no time dimension and some modal operators relating it to other three-dimensional worlds.

Wednesday, April 23, 2025

Sensory-based hacking and consent

Suppose human beings are deterministic systems.

Then quite likely there are many cases where the complex play of associations combined with a specific sensory input deterministically results in a behavior in a way where the connection to the input doesn’t make rational sense. Perhaps I offer you a business deal, and you are determined to accept the deal when I wear a specific color of shirt because that shirt unconsciously reminds you of an excellent and now deceased business partner you once had, while you would have found the deal dubious had I worn any other color. Or, worse, I am determined to reject a deal offered by some person under some circumstances where the difference-maker is that the person is a member of a group I have an implicit and irrational bias against. Or perhaps I accept the deal precisely because I am well fed.

If this is true, then we are subject to sensory-based hacking: by manipulating our sensory inputs, we can be determined to engage in specific behaviors that we wouldn’t have engaged in were those sensory inputs somewhat different in a way that has no rational connection with the justification of the behavior.

Question: Suppose a person consents to something (e.g., a contract or a medical procedure) due to deliberate deterministic sensory-based hacking, but otherwise all the conditions for valid consent are satisfied. Is that consent valid?

It is tempting to answer in the negative. But if one answers in the negative, then quite a lot of our consent is in question. For even if we are not victims of deliberate sensory-based hacking, we are likely often impacted by random environmental sensory-based hacking—people around us wear certain colors of shirts or have certain shades of skin. So the question of whether determinism is true impacts first-order questions about the validity of our consents.

Perhaps we should distinguish three kinds of cases of consent. First, we have cases where one gives consent in a way that is rational given the reasons available to one. Second, we have cases where one gives consent in a way that is not rational but not irrational. Third, we have cases of irrational consent.

In cases where the consent is rational, perhaps it doesn’t matter much that we were subject to sensory-based hacking.

In cases where the consent is neither rational nor irrational, however, it seems that the consent may be undermined by the hacking.

In cases where the consent is irrational, one might worry that the irrationality undercuts validity of consent anyway. But that’s not in general true. It may be irrational to want to have a very painful surgery that extends one’s life by a day, but the consent is not invalidated by the irrationality. And in cases where one irrationally gives consent it seems even more plausible that sensory-based hacking undercuts the consent.

I wonder how much difference determinism makes to the above. I think it makes at least some difference.

Friday, July 26, 2024

Perfect nomic correlations

Here is an interesting special case of Ockham’s Razor:

  1. If we find that of nomic necessity whenever A occurs, so does B, then it is reasonable to assume that B is not distinct from A.

Here are three examples.

  A. We learn from Newton and Einstein that inertial mass and gravitational mass always have the same value. So by (1) we should suppose them to be one property, rather than two properties that are nomically correlated.

  B. In a Newtonian context consider the hypothesis of a gravitational field. Because the gravitational field values at any point are fully determined by the positions and masses of material objects, (1) tells us that it’s reasonable to assume the gravitational field isn’t some additional entity beyond the positions and masses of material objects.

  C. Suppose that we find that mental states supervene on physical states: that there is no difference in mental states without a corresponding difference in physical states. Then by (1) it’s reasonable to expect that mental states are not distinct from physical states. (This is of course more controversial than (A) and (B).)

But now consider that in a deterministic theory, future states occur of nomic necessity given past states. Thus, (1) makes it reasonable to reduce future states to past states: What it is for the universe to be in state S7 at time t7 is nothing but the universe’s being in state S0 at time t0 and the pair (S7,t7) having such-and-such a mathematical relationship to the pair (S0,t0). Similarly, entities that don’t exist at the beginning of the universe can be reduced to the initial state of the universe—we are thus reducible. This consequence of (1) will seem rather absurd to many people.

What should we do? One move is to embrace the consequence and conclude that indeed if we find good evidence for determinism, it will be reasonable to reduce the present to the past. I find this implausible.

Another move is to take the above argument as evidence against determinism.

Yet another move is to restrict (1) to cases where B occurs at the same time as A. This restriction is problematic in a relativistic context, since simultaneity is relative. Probably the better version of the move is to restrict (1) to cases where B occurs at the same time and place as A. Interestingly, this will undercut the gravitational field example (B). Moreover, because it is not clear that mental states have a location in space, this may undercut application (C) to mental states.

A final move is either to reject (1) or, more modestly, to claim that the evidence provided by nomic coincidence is pretty weak and defeasible on the basis of intuitions, such as our intuition that the present does not reduce to the past. In either case, application (C) is in question.

In any case, it is interesting to note that thinking about determinism gives us some reason to be suspicious of (1), and hence of the argument for mental reduction in (C).

Wednesday, October 12, 2022

Compatibilism and servitude

Suppose determinism and compatibilism are true. Imagine that a clever alien crafted a human embryo and the conditions on earth so as to produce a human, Alice, who would end up living in ways that served the alien’s purposes, but whose decisions to serve the alien had the right kind of connection with higher-order desires, reasons, decision-making faculties, etc., so that a compatibilist would count them as free. Would Alice’s decisions be free?

The answer depends on whether we include among the compatibilist conditions on freedom the condition that the agent’s actions are not intentionally determined by another agent. If we include that condition, then Alice is not free. But it is my impression that defenders of compatibilism these days (e.g., Mele) have been inclining towards not requiring such a non-determination-by-another-agent condition. So I will take it that there is no such condition, and Alice is free.

If this is right, then, given determinism and compatibilism, it would be in principle possible to produce a group of people who would economically function just like slaves, but who would be fully free. Their higher-order desires, purposes and values would be chosen through processes that the compatibilist takes to be free, but these desires, purposes and values would leave them freely giving all of their waking hours to producing phones for a mega-corporation in exchange for a bare minimum of sustenance, and with no possibility of choosing otherwise.

That's not freedom. I conclude, of course, that compatibilism is false.

Tuesday, September 6, 2022

Trolleys and chaos

Suppose that determinism is true and Alice is about to roll a twenty-sided die to determine which of twenty innocent prisoners to murder. There is nothing you can do to stop her. You are in Alice’s field of view. Now, a die roll, even if deterministic, is very sensitive to the initial conditions. A small change in Alice’s throw is apt to affect the outcome. And any behavior of yours is apt to affect Alice’s throw. You frown, and Alice becomes slightly tenser when she throws. You smile, and Alice pauses a little wondering what you’re smiling about, and then she throws differently. You turn around not to watch, and Alice grows annoyed or pleased, and her throw is affected.

So it’s quite reasonable to think that whatever you do has a pretty good chance, indeed close to a 95% chance, of changing which of the prisoners will die. In other words, with about 95% probability, each of your actions is akin to redirecting a trolley heading down a track with one person onto a different track with a different person.
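The 95% figure is just 19/20. A quick sanity check in Python (a sketch, not from the post; modeling a perturbed throw as re-randomizing the twenty-sided outcome is my simplifying assumption):

```python
import random

# If any perturbation of Alice's throw effectively re-randomizes which of
# the 20 equally likely outcomes occurs, then a perturbation changes the
# victim with probability 19/20 = 0.95.
rng = random.Random(7)
trials = 100_000
changed = sum(rng.randrange(20) != rng.randrange(20) for _ in range(trials))
frac = changed / trials  # should be close to 0.95
```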

Some people—a minority—think that it is wrong to redirect a trolley heading for five people to a track with only one person. I wonder what they could say should be done in the Alice case. If it’s wrong to redirect a trolley from five people to one person, it seems even more wrong to redirect a trolley from one person to another person. So since any discernible action is likely to effectively be a trolley redirection in the Alice case, it seems you should do nothing. But what does “do nothing” mean? Does it mean: stop all external bodily motion? But stopping all external bodily motion is itself an effortful action (as anybody who played Lotus Focus on the Wii knows). Or does it mean: do what comes naturally? But if one were in the situation described, one would likely become self-conscious and unable to do anything “naturally”.

The Alice case is highly contrived. But if determinism is true, then it is very likely that many ordinary actions affect who lives and who dies. You talk for a little longer to a colleague, and they start to drive home a little later, which has a domino effect on the timing of people’s behaviors in traffic today, which then slightly affects when people go to sleep, how they feel when they wake up, and eventually likely affects who dies and who does not die in a car accident. Furthermore, minor differences in timing affect the timing of human reproductive activity, which is likely to affect which sperm reaches the ovum, which then affects the personalities of people in the next generation, and eventually affects who lives and who dies. Thus, if we live in a deterministic world, we are constantly “randomly” (as far as we are concerned, since we don’t know the effects) redirecting trolleys between paths with unknown numbers of people.

Hence, if we live in a deterministic world, then we are all the time in trolley situations. If we think that trolley redirection is morally wrong, then we will be morally paralyzed all the time. So, in a deterministic world, we had better think that it’s OK to redirect trolleys.

Of course, science (as well as the correct theology and philosophy) gives us good reason to think we live in an indeterministic world. But here is an intuition: when we deal with the external world, it shouldn’t make a difference whether we have real randomness or the quasi-randomness that determinism allows. It really shouldn’t matter whether Alice is flipping an indeterministic die or a deterministic but unpredictable one. So our conclusions should apply to our indeterministic world as well.

Monday, November 29, 2021

Simultaneous causation and determinism

Consider the Causal Simultaneity Thesis (CST) that all causation is simultaneous. Assume that simultaneity is absolute (rather than relative). Assume there is change. Here is a consequence I will argue for: determinism is false. In fact, more strongly, there are no diachronic deterministic causal series. What is surprising is that we get this consequence without any considerations of free will or quantum mechanics.

Since there is a very plausible argument from presentism to CST (a non-simultaneous fundamental causal relation could never obtain between two existent things given presentism), we get an argument from presentism to indeterminism.

Personally, I am inclined to think of this argument as a bit of evidence against CST and hence against presentism, because it seems to me that there could be a deterministic world, even though there isn’t. But tastes differ.

Now the argument for the central thesis. The idea is simple. On CST, as soon as the deterministic causes of an effect are in place, their effect is in place. Any delay in the effect would mean a violation of the determinism. There can be nothing in the deterministic causes to explain how much delay happens, because all the causes work simultaneously. And so if determinism is true—i.e., if everything has a deterministic cause—then all the effects happen all at once, and everything is already in the final state at the first moment of time. Thus there is no change if we have determinism and CST.

The point becomes clearer when we think about how an adherent of CST explains diachronic causal series. We have an item A that starts existing at time t1, persists through time t2 (kept in existence not by its own causal power, as that would require a diachronic causal relation, but either by a conserver or a principle of existential inertia), then causes an item B, which then persists through time t3 and then causes an item C, and so on. While any two successive items in the causal series A, B, C, ... must overlap temporally (i.e., there must be a time at which they both exist), we need not have temporal overlap between A and C, say. We can thus have things perishing and new things coming into being after them.

But if the causation is deterministic, then as soon as A exists, it will cause B, which will cause C, and so on, thereby forcing the whole series to exist at once, and destroying change.

In an earlier post, I thought this made for a serious objection to CST. I asked: “Why does A ‘wait’ until t2 to cause B?” But once we realize that the issue above has to do with determinism, we see that an answer is available. All we need to do is to suppose there is probabilistic causation.

For simplicity (and because this is what fits best with causal finitism) suppose time is discrete. Then we may suppose that at each moment of time at which A exists it has a certain low probability pAB of causing B if B does not already exist. Then the probability that A will cause B precisely after n units of time is (1 − pAB)^n pAB. It follows mathematically that “on average” it will cause B after (1 − pAB)/pAB fundamental units of time.
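As a sanity check (a Python sketch, not part of the post; `wait_time` and the sample size are my own), the mean of this geometric waiting time can be verified by simulation:

```python
import random

def wait_time(p, rng):
    """Number of discrete time units before A causes B, where at each
    instant A independently causes B with probability p."""
    n = 0
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(42)
p = 0.2
samples = [wait_time(p, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)  # should be close to (1 - p)/p = 4.0
```

Note that with p = 1/(1 + u) the mean (1 − p)/p works out to exactly u, matching the designer’s recipe below.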

It follows that for any desired average time delay, a designer of the universe can design a cause that has that delay. Let’s say that we want B to come into existence on average u fundamental units of time after A has come into existence. Then the designer can give A a causal power of producing B at any given moment of time at which B does not already exist with probability pAB = 1/(1 + u).

The resulting setup will be indeterministic, and in particular we can expect significant random variation in how long it takes to get B from A. But if the designer wants more precise timing, that can be arranged as well. Let’s say that our designer wants B to happen very close to precisely one second after A. The designer can then ensure that, say, there are a million instants of time in a second, and that A has the power to produce an event A1 with a probability at any given instant such that the expected wait time will be 0.0001 seconds (i.e., 100 fundamental units of time), and A1 the power to produce A2 with the same probability, and so on, with A10000 = B. Then by the Central Limit Theorem, the average wait time between A and B can be expected to be fairly close to 10000 × 0.0001 = 1 second, and the designer can get arbitrarily high confidence of an arbitrarily high precision of delay by inserting more instants in each second, and more intermediate causes between A and B, with each intermediate cause having an average delay time of 100 fundamental units (say). (This uses the fact that the geometric distribution has a finite third moment and the Berry-Esseen version of the Central Limit Theorem.)
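The designer’s trick, chaining many short probabilistic links so their delays average out, can be illustrated with a quick simulation (a sketch; the function names and sample sizes are mine, not from the post):

```python
import math
import random

def geometric_wait(p, rng):
    """Sample the number of instants a link waits before firing, where the
    link independently fires with probability p at each instant."""
    u = 1.0 - rng.random()  # uniform in (0, 1], avoids log(0)
    return int(math.log(u) / math.log(1.0 - p))

def chain_delay(num_links, p, rng):
    """Total delay of a causal chain A -> A1 -> ... -> B of num_links
    probabilistic links, each with mean wait (1 - p)/p instants."""
    return sum(geometric_wait(p, rng) for _ in range(num_links))

rng = random.Random(12345)
p = 1 / 101  # mean wait per link: (1 - p)/p = 100 fundamental units
# 10,000 links of mean 100 units each: total should concentrate near
# 1,000,000 units, i.e., one "second" of a million instants.
delays = [chain_delay(10_000, p, rng) for _ in range(50)]
avg = sum(delays) / len(delays)
```

The standard deviation of a single chain’s delay here is only about 1% of its mean, which is the Central Limit Theorem concentration the post appeals to.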

Thus, a designer of the universe can make an arbitrarily precise and reliable near-deterministic changing universe despite CST. And that really blunts the force of my anti-deterministic observation as a consideration against CST.

Monday, November 1, 2021

Determinism and thought

Occasionally, people have thought that one can refute determinism as follows:

  1. If determinism is true, then all our thinking is determined.

  2. If our thinking is determined, then it is irrational to trust its conclusions.

  3. It is not irrational to trust the conclusions of our thinking.

  4. So, determinism is not true.

But now notice that, plausibly, even if we have indeterministic free will, other animals don’t. And yet it seems at least as reasonable to trust a dog’s epistemic judgment—say, as to the presence of an intruder—as a human’s. Nor would learning that a dog’s thinking is determined or not determined make any difference to our trust in its reliability.

One might respond that things are different in a first-person case. But I don’t see why.

Thursday, July 2, 2020

Supererogation and determinism

  1. If at most one action is possible for one, that action is not supererogatory.

  2. If determinism is true, then there is never more than one action possible for one.

  3. So, if any action is supererogatory, determinism is false.

There is controversy over (2), but I don’t want to get into that in this post. What about (1)? Well, the standard story about supererogation is something like this: A supererogatory action is one that is better than, or perhaps more burdensome than, some permissible alternative. In any case, supererogatory actions are defined in contrast to a permissible alternative. But that permissible alternative has got to be possible for one to count as a genuine alternative. For instance, suppose I stay up all night with a sick friend. That’s better than going to sleep. But if there is loud music playing which would make it impossible for me to go to sleep and I am tied up so I can’t go elsewhere, then my staying up all night with the friend is not supererogatory.

Thursday, March 5, 2020

Upbringing and responsibility

Consider these two stories:

  1. Alice grew up in a terrible home. Her mother abused her. Her father taught her by word and deed that morality is the advantage of the stronger. Her parents forced her to join them in their manifold criminal enterprises. Alice lacked good role models. When Alice turned 17, she grew wings and flew away.

  2. Bob grew up in a terrible home. His father abused him. His mother taught him by word and deed that morality is the advantage of the stronger. His parents forced him to join them in their manifold criminal enterprises. Bob lacked good role models. When Bob turned 17, he rebelled and turned away from the life of crime.

The first story is unlikely and unbelievable. The second is unlikely but believable. This suggests that we don’t literally think that a really bad upbringing makes it literally impossible to do the right thing.

As an argument against determinism, this is rather weak, though. For even the determinist will say that there are many factors left out of story (2), and it could be that one of those left-out factors caused Bob to rebel.

A responsibility asymmetry

Discussion of my previous post has made me realize that, it seems, we’re more apt to be skeptical about the culpability of someone whose evil actions arose from a poor upbringing than of the praiseworthiness of someone whose good actions arose from a good upbringing.

This probably isn’t due to any general erring in favor of positive judgments. We’re not that nice (e.g., think of the research that shows that people are going to say that the CEO who doesn’t care about the environment but institutes profitable policies that happen to pollute is intentionally polluting, while the CEO who doesn’t care about the environment but institutes profitable policies that happen to be good for the environment is not intentionally helping the environment).

Here are two complementary stories that would make the apparent asymmetry reasonable:

  • Virtue makes one free while vice enslaves.

  • The person raised badly may be non-blameworthily ignorant of what is right. The person raised well knows what is right, though may deserve no credit for the knowledge. But non-blameworthy ignorance takes away responsibility, while knowledge gained without credit is good enough for responsibility for the actions flowing from that knowledge.

The noise from this asymmetry suggests that we may want to be careful when discussing free will and determinism to include both positive and negative actions evenhandedly in our examples.

Wednesday, August 14, 2019

The present doesn't ground the past

I will run an argument against the thesis that facts about the past are grounded in the present on the basis of the intuition that that would be a problematically backwards explanation.

Suppose for a reductio:

  1. Necessarily, facts about the past are fully grounded in facts about the present.

Add the plausible premise:

  2. Necessarily, if fact C is fully grounded in some facts, the Bs, and the Bs are fully causally explained by fact A, then fact A causally explains fact C.

As an illustration, suppose that the full causal explanation of why the Nobel committee gave the Nobel prize to Bob is that Alice persuaded them to. Bob’s being a Nobel prize winner is fully grounded in his being awarded the Nobel prize by the Nobel committee. So, Alice’s persuasion fully causally explains why Bob is the Nobel prize winner.

  3. It is possible to have a Newtonian world such that:

    a. All the facts about the world at any one time are fully causally explained by the complete state of the universe at any earlier time.

    b. There are no temporally backwards causal explanations.

    c. There are at least three times.

Now, consider such a Newtonian world, and let t1 < t2 < t3 be three times (by (3c)).

Suppose that t3 is now present. Let Ui be the fact that the complete state of the universe at time ti is (or will be or was) as it is (or will be or was). Then:

  4. Fact U1 is fully grounded in some facts about the present. (By (1))

Call these facts the Bs.

  5. The Bs are fully causally explained by U2. (As (3a) holds in our assumed world)

Therefore:

  6. Fact U1 is fully causally explained by U2. (By (2), (4) and (5))

  7. So, there is backwards causal explanation. (By (6))

  8. Contradiction! (By (7) and as (3b) holds in our assumed world)

I think we should reject (1), and either opt for eternalism or for Merricks’ version of presentism on which facts about the past are ungrounded.

Wednesday, March 28, 2018

A responsibility remover

Suppose soft determinism is true: the world is deterministic and yet we are responsible for our actions.

Now imagine a device that can be activated at a time when an agent is about to make a decision. The device reads the agent’s mind, figures out which action the agent is determined to choose, and then modifies the agent’s mind so the agent doesn’t make any decision but is instead compelled to perform the very action that they would otherwise have chosen. Call the device the Forcer.

Suppose you are about to make a difficult choice between posting a slanderous anonymous accusation about an enemy of yours that will go viral and ruin his life and not posting it. It is known that once the message is posted, there will be no way to undo the bad effects. Neither you nor I know how you will choose. I now activate the Forcer on you, and it makes you post the slander. Your enemy’s life is ruined. But you are not responsible for ruining it, because you didn’t choose to ruin it. You didn’t choose anything. The Forcer made you do it. Granted, you would have done it anyway. So it seems you have just had a rather marvelous piece of luck: you avoided culpability for a grave wrong and your enemy’s life is irreparably ruined.

What about me? Am I responsible for ruining your enemy’s life? Well, first, I did not know that my activation of the Forcer would cause this ruin. And, second, I knew that my activation of the Forcer would make no difference to your enemy: he would have been ruined given the activation if and only if he would have been ruined without it. So it seems that I, too, have escaped responsibility for ruining your enemy’s life. I am, however, culpable for infringing on your autonomy. Still, given how glad you are that your enemy’s life was ruined without your incurring any culpability, no doubt you will forgive me.

Now imagine instead that you activated the Forcer on yourself, and it made you post the slander. Then for exactly the same reasons as before, you aren’t culpable for ruining your enemy’s life. For you didn’t choose to post the slander. And you didn’t know that activating the Forcer would cause this ruin, while you did know that the activation wouldn’t make any difference to your enemy—the effect of activating the Forcer on yourself would not affect whether the message would be posted. Moreover, the charge of infringing on autonomy has much less force when you activated the Forcer yourself.

It is true that by activating the Forcer you lost something: you lost the possibility of being praiseworthy for choosing not to post the slander. But that’s a loss that you might judge worthwhile.

So, given soft determinism, it is in principle possible to avoid culpability while still getting the exact same results whenever you don’t know prior to deliberation how you will choose. This seems absurd, and the absurdity gives us a reason to reject the compatibility of determinism and responsibility.

But the above story can be changed to worry libertarians, too. Suppose the Forcer reads off its patient’s mind the probabilities (i.e., chances) of the various choices, and then randomly selects an action with the probabilities of the various options exactly the same as the patient would have had. Then in activating the Forcer, it can still be true that you didn’t know how things would turn out. And while there is no longer a guarantee that things would turn out with the Forcer as they would have without it, it is true that activating the Forcer doesn’t affect the probabilities of the various actions. In particular, in the cases above, activating the Forcer does nothing to make it more likely that your enemy would be slandered. So it seems that once again activating the Forcer on yourself is a successful way of avoiding responsibility.

But while that is true, it is also true that if libertarianism is true, regular activation of the Forcer will change the shape of one’s life, because there is no guarantee that the Forcer will decide just like you would have decided. So while on the soft determinist story, regular use of the Forcer lets one get exactly the same outcome as one would otherwise have had, on the libertarian version, that is no longer true. Regular use of the Forcer on libertarianism should be scary—for it is only a matter of chance what outcome will happen. But on compatibilism, we have a guarantee that use of the Forcer won’t change what action one does. (Granted, one may worry that regular use of the Forcer will change one’s desires in ways that are bad for one. If we are worried about that, we can suppose that the Forcer erases one’s memory of using it. That has the disadvantage that one may feel guilty when one isn’t.)

I don’t know that libertarians are wholly off the hook. Just as the Forcer thought experiment makes it implausible to think that responsibility is compatible with determinism, it also makes it implausible to think that responsibility is compatible with there being precise objective chances of what choices one will make. So perhaps the libertarian would do well to adopt the view that there are no precise objective chances of choices (though there might be imprecise ones).

Saturday, December 23, 2017

Cellular automaton snowflake generator

I made a simple cellular automaton snowflake generator in OpenSCAD. By default it uses Stephen Wolfram's rule that a hex cell stays alive once alive and a cell is generated if it has exactly one neighbor.
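The rule itself is easy to sketch in Python (a hypothetical re-implementation; the original is in OpenSCAD, and the `grow_snowflake` name, axial hex coordinates, and single-seed start are my assumptions):

```python
import random

# Neighbor offsets on a hex grid in axial coordinates.
HEX_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def grow_snowflake(steps, birth_prob=1.0, seed=0):
    """Grow a hex-grid snowflake from one seed cell: a live cell stays
    alive forever, and a dead cell with exactly one live neighbor is born
    (with probability birth_prob, allowing the indeterministic variant)."""
    rng = random.Random(seed)
    alive = {(0, 0)}
    for _ in range(steps):
        # Count live neighbors of every dead cell bordering the flake.
        counts = {}
        for (q, r) in alive:
            for dq, dr in HEX_DIRS:
                cell = (q + dq, r + dr)
                if cell not in alive:
                    counts[cell] = counts.get(cell, 0) + 1
        alive |= {c for c, n in counts.items()
                  if n == 1 and rng.random() < birth_prob}
    return alive
```

With birth_prob=1.0 the growth is deterministic and sixfold symmetric; lowering it to 0.5 gives the more naturalistic variant mentioned below.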


Christmas Day addition:
Adding a tiny bit of indeterminism--a chance of 0.5 of generating a cell instead of certainty--makes things look more like a real snowflake, though. Tap on Customizer in the above link if you want to play with it.
And here it is on our Christmas tree. Merry Christmas!

Thursday, August 10, 2017

Uncountable independent trials

Suppose that I am throwing a perfectly sharp dart uniformly randomly at a continuous target. The chance that I will hit the center is zero.

What if I throw an infinite number of independent darts at the target? Do I improve my chances of hitting the center at least once?

Things depend on what size of infinity of darts I throw. Suppose I throw a countable infinity of darts. Then I don’t improve my chances: classical probability says that the union of countably many zero-probability events has zero probability.
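In symbols, if E_n is the event that the nth dart hits the center, countable subadditivity gives:

```latex
P\Big(\bigcup_{n=1}^{\infty} E_n\Big) \le \sum_{n=1}^{\infty} P(E_n) = \sum_{n=1}^{\infty} 0 = 0.
```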

What if I throw an uncountable infinity of darts? The answer is that the usual way of modeling independent events does not assign any meaningful probabilities to whether I hit the center at least once. Indeed, the event that I hit the center at least once is “saturated nonmeasurable”, i.e., it is nonmeasurable and every measurable subset of it has probability zero and every measurable superset of it has probability one.

Proposition: Assume the Axiom of Choice. Let P be any probability measure on a set Ω and let N be any non-empty event with P(N) = 0. Let I be any uncountable index set. Let H be the subset of the product space Ω^I consisting of those sequences ω that hit N, i.e., ones such that for some i we have ω(i) ∈ N. Then H is saturated nonmeasurable with respect to the I-fold product measure P^I (and hence with respect to its completion).

One conclusion to draw is that the event H of hitting the center at least once in our uncountable number of throws in fact has a weird “nonmeasurable chance” of happening, one perhaps that can be expressed as the interval [0, 1]. But I think there is a different philosophical conclusion to be drawn: the usual “product measure” model of independent trials does not capture the phenomenon it is meant to capture in the case of an uncountable number of trials. The model needs to be enriched with further information that will then give us a genuine chance for H. Saturated nonmeasurability is a way of capturing the fact that the product measure can be extended to a measure that assigns to H any numerical probability between 0 and 1 (inclusive) that one wishes. And one requires further data about the system in order to assign that numerical probability.

Let me illustrate this as follows. Consider the original single-case dart throwing system. Normally one describes the outcome of the system’s trials by the position z of the tip of the dart, so that the sample space Ω equals the set of possible positions. But we can also take a richer sample space Ω* which includes all the possible tip positions plus one more outcome, α, the event of the whole system ceasing to exist, in violation of the conservation of mass-energy. Of course, to be physically correct, we assign chance zero to outcome α.

Now, let O be the center of the target. Here are two intuitions:

  1. If the number of trials has a cardinality much greater than that of the continuum, it is very likely that O will result on some trial.

  2. No matter how many trials—even a large infinity—have been performed, α will not occur.

But the original single-case system based on the sample space Ω* does not distinguish O and α probabilistically in any way. Let ψ be a bijection of Ω* to itself that swaps O and α but keeps everything else fixed. Then P(ψ[A]) = P(A) for any measurable subset A of Ω* (this follows from the fact that the probability of O is equal to the probability of α, both being zero), and so with respect to the standard probability measure on Ω*, there is no probabilistic difference between O and α.

If I am right about (1) and (2), then what happens in a sufficiently large number of trials is not captured by the classical chances in the single-case situation. That classical probabilities do not capture all the information about chances is something we should already have known from cases involving conditional probabilities. For instance P({O}|{O, α}) = 1 and P({α}|{O, α}) = 0, even though O and α are on par.

One standard solution to the conditional probability case is infinitesimals. Perhaps P({O}) is an infinitesimal ι but P({α}) is exactly zero. In that case, we may indeed be able to make sense of (1) and (2). But infinitesimals are not a good model on other grounds. (See Section 3 here.)

Thinking about the difficulties with infinitesimals, I get this intuition: we want to get probabilistic information about the single-case event that has a higher resolution than is given by classical real-valued probabilities but lower resolution than is given by infinitesimals. Here is a possibility. Those subsets of the outcome space that have probability zero also get attached to them a monotone-increasing function from cardinalities to the set [0, 1]. If N is such a subset and the function attached to it is f_N, then f_N(κ) tells us the probability that κ independent trials will yield at least one outcome in N.

We can then argue that f_N(κ) is always 0 or 1 for infinite κ. Here is why. Suppose f_N(κ) > 0. Then κ must be infinite, since if κ is finite then f_N(κ) = 1 − (1 − P(N))^κ = 0 as P(N) = 0. But 1 − f_N(κ + κ) = (1 − f_N(κ))^2, since the probabilities of two independent blocks of κ trials each missing N multiply, and κ + κ = κ (assuming the Axiom of Choice), so that 1 − f_N(κ) = (1 − f_N(κ))^2, which implies that f_N(κ) is zero or one. We can come up with other constraints on f_N. For instance, if C is the union of A and B, then f_C(κ) is the greater of f_A(κ) and f_B(κ).
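Equivalently, one can run the argument on the miss probability 1 − f_N(κ), since it is the miss events in independent blocks of trials whose probabilities multiply:

```latex
1 - f_N(\kappa) = \bigl(1 - f_N(\kappa)\bigr)^2
\;\Longrightarrow\; f_N(\kappa)\bigl(1 - f_N(\kappa)\bigr) = 0
\;\Longrightarrow\; f_N(\kappa) \in \{0, 1\}.
```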

Such an approach could help get a solution to a different problem, the problem of characterizing deterministic causation. To a first approximation, the solution would go as follows. Start with the inadequate story that deterministic causation is chancy causation with chance 1. (This is inadequate, because in the original dart-throwing case, the chance of missing the center is 1, but throwing the dart does not deterministically cause one to hit a point other than the center.) Then say that deterministic causation is chancy causation such that the failure event F is such that f_F(κ) = 0 for every cardinal κ.

But maybe instead of all this, one could just deny that there are meaningful chances to be assigned to events like the event of uncountably many trials missing or hitting the center of the target.

Sketch of proof of Proposition: The product space Ω^I is the space of all functions ω from I to Ω, with the product measure P^I generated by the measures of cylinder sets. The cylinder sets are product sets of the form A = ∏_{i∈I} A_i for which there is a finite J ⊆ I such that A_i = Ω for i ∉ J, and the product measure of A is defined to be ∏_{i∈J} P(A_i).

First I will show that there is an extension Q of P^I such that Q(H) = 0 (an extension of a measure is a measure on a larger σ-algebra that agrees with the original measure on the smaller σ-algebra). Any P^I-measurable subset of H will then have Q-measure zero, and hence will have P^I-measure zero since Q extends P^I.

Let Q_1 be the restriction of P to Ω − N (this is still normalized to 1 as N is a null set). Let Q_1^I be the product measure on (Ω − N)^I. Let Q be a measure on Ω^I defined by Q(A) = Q_1^I(A ∩ (Ω − N)^I). Consider a cylinder set A = ∏_{i∈I} A_i where there is a finite J ⊆ I such that A_i = Ω whenever i ∉ J. Then
Q(A) = ∏_{i∈J} Q_1(A_i − N) = ∏_{i∈J} P(A_i − N) = ∏_{i∈J} P(A_i) = P^I(A).
Since P^I and Q agree on cylinder sets, by the definition of the product measure, Q is an extension of P^I.

To show that H is saturated nonmeasurable, we now only need to show that any P^I-measurable set in the complement of H must have probability zero. Let A be any P^I-measurable set in the complement of H. Then A is of the form {ω ∈ Ω^I : F(ω)}, where F(ω) is a condition involving only coordinates of ω indexed by a fixed countable set of indices from I (i.e., there is a countable subset J of I and a subset B of Ω^J such that F(ω) if and only if ω|J is a member of B, where ω|J is the restriction of ω to J). But unless it is entirely unsatisfiable, no such condition can exclude the possibility that a coordinate ω(i) with i outside that countable set lies in N, and hence no such set A lies in the complement of H unless the set is empty. And that’s all we need to show.

Monday, May 1, 2017

Desire-belief theory and soft determinism

Consider this naive argument:

  1. If the desire-belief theory of motivation is true, whenever I act, I do what I want.
  2. Sometimes in acting I do what I do not want.
  3. So the desire-belief theory is false.

Some naive arguments are nonetheless sound. (“I know I have two hands, …”) But that’s not where I want to take this line of thought, though I could try to.

I think there are two kinds of answers to this naive argument. One could simply deny (2), espousing an error theory about what happens when people say “I did A even though I didn’t want to.” But suppose we want to do justice to common sense. Then we have to accept (2). And (1) seems to be just a consequence of the desire-belief theory. So what can one say?

Well, one can say that “what I want” is used in a different sense in (1) and (2). The most promising distinction here seems to me to be between what one wants overall and what one has a desire for. The desire-belief theorist has to affirm that if I do something, I have a desire for it. But she doesn’t have to say that I desire the thing overall. To make use of this distinction, (2) has to say that I act while doing what I do not overall want.

If this is the only helpful distinction here, then someone who does not want to embrace an error theory about (2) has to admit that sometimes we act not in accord with what we overall want. Moreover, it seems almost as much a truism as (2) that:

  4. Sometimes in acting freely I do what I do not want.

On the present distinction, this means that sometimes in acting freely, I do something that isn’t my overall desire.

But this in turn makes soft determinism problematic: for if my action is determined and isn’t what I overall desire, and desire-belief theory is correct, then it is very hard to see how the action could possibly be free.

There is a lot of argument from ignorance (the only relevant distinction seems to be…, etc.) in the above. But if it can all be cashed out, then we have a nice argument that one shouldn’t be both a desire-belief theorist and a soft determinist. (I think one shouldn’t be either!)

Tuesday, January 10, 2017

Analogue jitter in motivations and the randomness objection to libertarianism

All analogue devices jitter on a small time-scale. The jitter is for all practical purposes random, even if the system is deterministic.

Suppose now that compatibilism is true and we have a free agent who is determined to always choose what she is most strongly motivated towards. Now suppose a Buridan’s ass situation, where the motivations for two alternatives are balanced, but where the motivations were acquired in the normal way human motivations are, where there is an absence of constraint, etc.

Because of analogue jitter in the brain, sometimes one motivation will be slightly stronger and sometimes the other will be. Thus which way the agent will choose will be determined by the state of the jitter at the time of the choice. And that’s for all practical purposes random.
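A toy simulation (my own illustration, not from the post) makes the point vivid: with exactly balanced motivation strengths, the jitter alone settles the outcome, roughly 50/50:

```python
import random

def choose(strength_a, strength_b, jitter_sd=0.01, rng=random):
    """Compatibilist toy model: the agent is determined to pick whichever
    motivation is stronger after small analogue jitter is added."""
    a = strength_a + rng.gauss(0, jitter_sd)
    b = strength_b + rng.gauss(0, jitter_sd)
    return "A" if a > b else "B"

rng = random.Random(42)
# Buridan's ass case: exactly balanced motivations, so the jitter decides.
picks = [choose(1.0, 1.0, rng=rng) for _ in range(10_000)]
print(picks.count("A") / len(picks))
```

The rough strengths (here, 1.0 and 1.0) are explanatorily relevant on every run; only the exact jittered values differ from trial to trial.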

Either in such cases there is freedom or there is not. If there is no freedom in such cases, then the compatibilist has to say that people whose choices are sufficiently torn are not responsible for their choices. That is highly counterintuitive.

The compatibilist’s better option is to say that there can still be freedom in such cases. It’s a bit inaccurate to say that the choice is determined by the jitter. For it’s only because the rough values of the strengths of the motivations are as they are that the jitter in their exact strength is relevant. The rough values of the strengths of the motivations are explanatorily relevant, regardless of which way the choice goes. The compatibilist should say that this kind of explanatory relevance is sufficient for freedom.

But if she says this, then she must abandon the randomness objection against libertarianism.

Wednesday, November 30, 2016

No-collapse interpretations without a dynamically evolving wavefunction in reality

Bohm’s interpretation of quantum mechanics has two ontological components: It has the guiding wave—the wavefunction—which dynamically evolves according to the Schrödinger equation, and it has the corpuscles whose movements are guided by that wavefunction. Brown and Wallace criticize Bohm for this duality, on the grounds that there is no reason to take our macroscopic reality to be connected with the corpuscles rather than the wavefunction.

I want to explore a variant of Bohm on which there is no evolving wavefunction, and then generalize the point to a number of other no-collapse interpretations.

So, on Bohm’s quantum mechanics, reality at a time t is represented by two things: (a) a wavefunction vector |ψ(t)⟩ in the Hilbert space, and (b) an assignment of values to hidden variables (e.g., corpuscle positions). The first item evolves according to the Schrödinger equation. Given an initial vector |ψ(0)⟩, the vector at time t can be mathematically given as |ψ(t)⟩ = U_t|ψ(0)⟩, where U_t is a mathematical time-evolution operator (dependent on the Hamiltonian). And then by a law of nature, the hidden variables evolve according to a differential equation—the guiding equation—that involves |ψ(t)⟩.

But now suppose we change the ontology. We keep the assignment of values to hidden variables at times. But instead of supposing that reality has something corresponding to the wavefunction vector at every time, we merely suppose that reality has something corresponding to an initial wavefunction vector |ψ_0⟩. There is nothing in physical reality corresponding to the wavefunction at t if t > 0. But nonetheless it makes mathematical sense to talk of the vector U_t|ψ_0⟩, and then the guiding equation governing the evolution of the hidden variables can be formulated in terms of U_t|ψ_0⟩ instead of |ψ(t)⟩.
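As a toy computational illustration of this ontology (my own; the two-level system and the guiding equation are made up for the sketch), the hidden variable can be integrated while only the initial vector is ever stored, with U_t|ψ_0⟩ computed on demand:

```python
import math

def wavefunction(t, psi0):
    """Compute U_t|psi_0> on demand from the *initial* vector only;
    no evolving wavefunction is ever stored.  (Toy U_t: a plane rotation.)"""
    c, s = math.cos(t), math.sin(t)
    return (c * psi0[0] - s * psi0[1], s * psi0[0] + c * psi0[1])

psi0 = (1.0, 0.0)   # the one vector-like item in the ontology
x, dt = 0.0, 0.01   # hidden variable and time step
for step in range(100):
    a, b = wavefunction(step * dt, psi0)
    x += dt * a * b  # made-up guiding equation: dx/dt depends on U_t|psi_0>
print(round(x, 4))
```

Only x evolves dynamically; the call to `wavefunction` is a mathematical artifact, like the m_1m_2 term below.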

If we want an ontology to go with this, we could say that the reality corresponding to the initial vector |ψ_0⟩ affects the evolution of the hidden variables for all subsequent times. We now have only one aspect of reality—the hidden variables of the corpuscles—evolving dynamically instead of two. We don’t have Schrödinger’s equation in the laws of nature, except as a useful mathematical property of the U_t operator applied to the initial vector. We can talk of the wavefunction U_t|ψ_0⟩ at a time t, but that’s just a mathematical artifact, just as m_1m_2 is a part of the equation expressing Newton’s law of gravitation rather than a direct representation of physical reality. Of course, just as m_1m_2 is determined by physical things—the two masses—so too the wavefunction U_t|ψ_0⟩ is determined by physical reality (the initial vector, the time, and the Hamiltonian). This seems to me to weaken the force of the Brown and Wallace point, since there no longer is a reality corresponding to the wavefunction at non-initial times, except highly derivatively.

Interestingly, the exact same move can be made for a number of other no-collapse interpretations, such as Bell’s indeterministic variant of Bohm, other modal interpretations, the many-minds interpretation, the traveling minds interpretation and the Aristotelian traveling forms interpretation. There need be no time-evolving wavefunction in reality, but just an initial vector which transtemporally affects the evolution of the other aspects of reality (such as where the minds go).

Or one could suppose a static background vector.

It’s interesting to ask what happens if one plugs this into the Everett interpretation. There I think we get something rather implausible: for then all time-evolution will disappear, since all reality will be reduced to the physical correlate of the initial vector. So my move above is only plausible for those no-collapse interpretations on which there is something more beyond the wavefunction.

There is also a connection between this approach and the Heisenberg picture. How close the connection is is not yet clear to me.

Friday, September 23, 2016

A Copenhagen interpretation of classical mechanics

One can always take an indeterministic theory and turn it deterministic in some way or other while preserving empirical predictions. Bohmian mechanics is an example of doing that with quantum mechanics. It's mildly interesting that one can go the other way: take a deterministic theory and turn it indeterministic. I'm going to sketch how to do that.

Suppose we have classical physics with phase space S and a time-evolution operator T_t. If the theory is formulated in terms of a constant finite number n of particles, then S will be a 6n-dimensional vector space (three position and three momentum variables for each particle). The time-evolution operator takes a point in phase space and says where the system will be after time t elapses if it starts at that point. I will assume that there is a beginning to time at time zero. The normal story then is that physical reality is modeled by a trajectory function s from times to points of S, such that T_t(s(u)) = s(u + t).

Our indeterministic theory will instead say that physical reality is modeled by a (continuous) sequence of probability measures P_t on the phase space S for times t ≥ 0. These probability measures should be thought of as something like a physical field, akin to the wavefunction of quantum mechanics--they represent physical reality, and not just our state of knowledge of it. Mirroring the consciousness-causes-collapse version of the Copenhagen interpretation of quantum mechanics, we now say this. If from time t (exclusive) to time t + u (inclusive) no observation of the system was made, then P_{t+u}(A) = P_t(T_u^{−1}[A]). I.e., the probability measure is just given by tracking forward by the time-evolution operator in that case.

On the other hand, suppose that at time t an observation is made. Assume that observations are binary, and correspond to measurable subsets of phase space. Intuitively, when we observe we are checking whether reality is in some region A of phase space. (It's easy to generalize this to observations having any countable number of possible outcomes.) Suppose P_t^* is the value that P_t would have had, by the no-observation evolution rule, had there been no observation at t. Then I suppose that with objective chance P_t^*(A) we observe A and with objective chance 1 − P_t^*(A) we observe not-A, with the further supposition that if one of these numbers is zero, the corresponding observation physically cannot happen. Then the probability measure P_t equals the conditionalization of P_t^* on the observation that does in fact occur. In other words, if we observe A, then P_t(B) = P_t^*(B|A), and otherwise P_t(B) = P_t^*(B|not-A). And then the deterministic evolution continues as before until the next observation.
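The two evolution rules are easy to sketch on a finite toy phase space (my own illustration; the six-state permutation and the observed region are arbitrary choices):

```python
import random

# Toy "phase space": states 0..5; deterministic dynamics T is a permutation.
T = [1, 2, 3, 4, 5, 0]  # T maps state s to T[s]

def evolve(p):
    """One deterministic step of the measure: P_{t+1}(A) = P_t(T^{-1}[A])."""
    q = [0.0] * len(p)
    for s, mass in enumerate(p):
        q[T[s]] += mass
    return q

def observe(p, region, rng):
    """Binary observation: is the system in `region`?  The outcome has
    chance p(region) vs. 1 - p(region); the measure then conditionalizes."""
    in_prob = sum(p[s] for s in region)
    hit = rng.random() < in_prob
    keep = set(region) if hit else set(range(len(p))) - set(region)
    norm = in_prob if hit else 1 - in_prob
    return hit, [p[s] / norm if s in keep else 0.0 for s in range(len(p))]

rng = random.Random(0)
p = [1 / 6] * 6                        # uniform initial measure
p = evolve(p)                          # deterministic stretch
hit, p = observe(p, {0, 1, 2}, rng)    # indeterministic collapse
print(hit, p)
```

After the collapse the measure is concentrated entirely on the observed region or entirely on its complement, and the deterministic evolution then resumes.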

As far as I can see, this story generates the same empirical predictions as the original deterministic classical story. Also note that while in this story, collapse was triggered by observation, presumably one can also come up with stories on which collapse is triggered by some other kind of physical process.

So what? Well, here's one thought. Free will is (I and others have argued) incompatible with determinism. One thought experiment that people have raised is this. If you think free will incompatible with determinism, and suddenly the best physics turned out to be deterministic, what would you do? Would you deny free will? Or would you become a compatibilist? Well, the above example shows that there is a third option: give an indeterministic but empirically adequate reinterpretation of the physics. (Well, to be honest, this might not entirely solve the problem. For it might be, depending on how the details work out, that past observations narrow down the options for brain states so much that they become deterministic. But at least there would be hope that one wouldn't need to give up on libertarianism.)

The above way of making free will compatible with physical determinism is functionally similar to Kant's idea that our free choices affect the initial conditions of the universe, but without the freaky backwards-like (not exactly backwards, since the noumenal isn't in time) causation.

Here's another thought. Any indeterministic theory can be reinterpreted as a deterministic multiverse theory with traveling minds, while maintaining empirical adequacy. The multiverse traveling minds theory allows for causal closure of a deterministic physics together with robust alternate-possibilities freedom. Combining the two reinterpretations, we could in principle start with a deterministic physics, then reinterpret it in a Copenhagen way, and then impose on top of that the traveling minds interpretation, thereby gaining an empirically equivalent theory with robust alternate-possibilities freedom and no mental-to-physical causation. I bet a lot of people thought this couldn't be done.

Tuesday, February 23, 2016

Determinism and moral imperfection

If determinism is true, then I always do the best I can do. If I always do the best I can do, I lack moral imperfection. So if determinism is true, I lack moral imperfection. But I am morally imperfect. So determinism is not true.

Friday, December 18, 2015

Causation and collapse

If determinism were true, then since each state could be projected from the initial state, we could simply suppose that the whole four-dimensional shebang came into existence causally "all at once", so that there would be no causal relations within the four-dimensional universe. The only relevant causation would be that of God's causing the universe as a whole--and an atheist might just think the four-dimensional universe to be uncaused.

I think that this acausal picture could be adapted to give an attractive picture of the role of causation in a collapse interpretation of quantum mechanics (whether the collapse is of the GRW-type or of the consciousness-caused type). On a collapse picture, we have an alternation between a deterministic evolution governed by the Schroedinger equation and an indeterministic collapse. Why not suppose, then, that there is no causation within the deterministic evolution? We could instead suppose that the state of the universe at collapse causes the whole of the four-dimensional block between that collapse and the next. As long as collapse isn't too frequent, this could allow occasions of causation to be discrete, with only a finite number of such occasions within any interval of time. And this would let us reconcile quantum physics with causal finitism even with a continuous time. (Relativity would require more work.)