Showing posts with label indeterminism.

Monday, October 3, 2022

The Church-Turing Thesis and generalized Molinism

The physical Church-Turing (PCT) thesis says that anything that can be physically computed can be computed by a Turing machine.

If generalized Molinism—the thesis that for any sufficiently precisely described counterfactual situation, there is a fact of the matter as to what would happen in that situation—is true, and indeterminism is true, then PCT seems very likely false. For imagine the function f from the natural numbers to {0, 1} such that f(n) is 1 if and only if the coin toss on day n would be heads, were I to live forever and daily toss a fair coin—with whatever other details need to be put in to make the situation “sufficiently precisely described”. Now, only countably many functions are Turing computable, so with probability one, an infinite sequence of fair coin tosses defines a Turing non-computable function. But f is physically computable: I could just do the experiment.
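
To make the probability-one step explicit, here is a minimal worked version of the measure-theoretic argument (my rendering; the post leaves it implicit):

```latex
% Fair-coin product measure P on {0,1}^N; C = set of Turing-computable sequences.
% C is countable, and each individual infinite sequence g has probability zero:
\begin{align*}
P(\{g\}) &= \lim_{n\to\infty} P(\text{first $n$ tosses agree with $g$})
          = \lim_{n\to\infty} 2^{-n} = 0,\\
P(C) &\le \sum_{g \in C} P(\{g\}) = 0,
\end{align*}
% so with probability one the realized sequence, and hence f, is non-computable.
```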

But wait: I’m going to die, and even if there is an afterlife, it doesn’t seem right to characterize whatever happens in the afterlife as physical computation. So all I can compute is f(n) for n < 30000 or so.

Fair enough. But if we say this, then the PCT becomes trivial. For given finite life-spans of human beings and of any machinery in an expanding universe with increasing entropy, only finitely many values of any given function can be physically computed. And any function defined on a finite set can, of course, be trivially computed by a Turing machine via a lookup-table.
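
A minimal sketch of the lookup-table point (Python; the stored values are placeholders standing in for actual recorded tosses):

```python
# Minimal sketch: any function on a finite domain is computable by table lookup.
# The stored values are placeholders standing in for recorded coin tosses.
import random

table = {n: random.randint(0, 1) for n in range(30000)}  # placeholder data for f

def f_restricted(n: int) -> int:
    """Return f(n) for n < 30000 by lookup; a Turing machine can hard-code this."""
    return table[n]

print(f_restricted(12345))
```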

So, either we trivialize PCT by insisting on the facts of our physical universe that put a finite limit on our computations, or in our notion of “physically computed” we allow for idealizations that make it possible to go on forever. If we do allow for such idealizations, then my argument works: generalized Molinism makes PCT unlikely to be true.

Friday, February 25, 2022

Inscrutable probabilities and the replay argument

Suppose that we have two hypotheses about a long sequence X of zeroes and/or ones:

  • S: The sequence has the underlying structure of a sequence of independent and identically distributed (iid) probabilistic phenomena with possible outcomes zero or one.

  • U: The sequence is completely probabilistically unstructured, being completely probabilistically inscrutable.

What can we say about the evidential import of X on S and U?

Here is one thought. We have a picture of how a repeated sequence of independent and identically distributed binary events should look. Much as van Inwagen says in his discussion of the replay argument, we expect the proportion of ones to oscillate at first and then converge to some value. So if X looks like that, then we have evidential support for S, and if X doesn’t look like that—maybe there is wilder oscillation and no convergence—then we do not have evidential support for S.
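
As an illustration of this expected picture, here is a quick simulation sketch (mine, not from the post) of the running frequency of ones in an iid sequence:

```python
# Sketch: running relative frequency of ones in an iid Bernoulli(p) sequence.
# By the law of large numbers it oscillates early and settles toward p.
import random

p, n = 0.7, 10000
ones = 0
for k in range(1, n + 1):
    ones += 1 if random.random() < p else 0
    if k in (10, 100, 1000, 10000):
        print(f"after {k:5d} tosses: frequency of ones = {ones / k:.3f}")
```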

But does the fact that the sequence X has a “nice gradual convergence” in frequencies support the structure hypothesis S? Van Inwagen in his replay argument against indeterministic free will seems to assume so.

Actually, convergence of frequencies does not support S (or U, for that matter). Here is why not. Let’s model hypothesis S as follows. There is some unknown real dispositional probability p between 0 and 1, and then our sequence X is taken by observing a sequence of iid results, with the probability of each result being 1 equal to p. Given p, any sequence of n events whose proportion of ones is m/n has probability p^m(1−p)^(n−m), regardless of whether the sequence shows any “nice gradual convergence” of the frequency of ones to m/n or is just a sequence of n − m zeroes followed by m ones.

Of course, hypothesis S doesn’t say what the probability p of getting a one on a given experiment is. So, we need to suppose some probability distribution Q over the possible values of p in the interval [0,1], and then our probability of the sequence X of m ones and n − m zeroes will be ∫₀¹ p^m(1−p)^(n−m) dQ(p). But it’s still true that this probability does not depend on the order between the ones and zeroes. The sequence could “look random”, or it could look as fishy as you like, and the probability of it on S would be exactly the same if the number of ones is the same.
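
A quick numerical check of this order-invariance (a sketch that fixes a single p rather than integrating over Q):

```python
# Sketch: under S the likelihood of a 0/1 sequence depends only on its counts.
# Compare a "random looking" sequence with a sorted one having the same counts.
import random

def likelihood(seq, p):
    """P(seq | iid Bernoulli(p)) = p^m * (1-p)^(n-m), where m = number of ones."""
    m = sum(seq)
    return p ** m * (1 - p) ** (len(seq) - m)

random_looking = [random.randint(0, 1) for _ in range(30)]
fishy = sorted(random_looking)  # all zeroes first, then all ones

p = 0.4
print(likelihood(random_looking, p) == likelihood(fishy, p))  # True
```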

On the other hand, on the hypothesis U, all possible sequences X of length n are probabilistically inscrutable, and hence we cannot say anything about any sequence being more likely than another—we might as well represent the probability of any sequence X of observations as just an interval-valued probability of [0,1].

So, no facts about the order of events in X make any difference between the hypotheses S and U. In particular, van Inwagen’s idea that an appearance of convergence is evidence for S is false.

Now, let’s say a little more about the Bayesian import of the observation X. Let’s suppose that S and U are our only two hypotheses, and that both have serious prior probabilities, say somewhere between 0.1 and 0.9. Further, let’s suppose that the sequence X is long and has a decent amount of variation in it—for instance, let’s suppose that it has at least 50 zeroes and at least 50 ones. Because of this, P(X|S) will be some astronomically small positive number α. Indeed, we can prove that α is at most 1/2^100 ≈ 10^−30, for any distribution of the unknown probability p of getting a one.
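
The bound on α takes two lines, using the fact that p(1−p) ≤ 1/4 on [0,1] (a worked version; the post just asserts the bound):

```latex
% With at least 50 ones (m >= 50), at least 50 zeroes (n - m >= 50),
% and any fixed p in [0,1]:
\begin{align*}
p^{m}(1-p)^{n-m} \;\le\; p^{50}(1-p)^{50}
  \;=\; \bigl(p(1-p)\bigr)^{50}
  \;\le\; \left(\tfrac14\right)^{50} \;=\; 2^{-100} \approx 10^{-30}.
\end{align*}
% Averaging over any distribution Q of p preserves the bound, so alpha <= 2^{-100}.
```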

On the other hand, P(X|U) will be completely probabilistically inscrutable, and hence can be represented as the full interval [0,1].

Here’s what follows from Bayes’s theorem combined with a natural way of handling inscrutable probabilities as a range of probability assignments: The posterior probability of S will be a range of probabilities that starts with something within an order of magnitude of α and ends with 1. Hence, our observation of X does not support S: upon observing X, we move from S having some moderate probability between 0.1 and 0.9, to a nearly completely inscrutable probability in a range starting with something astronomically small and ending at one. The upper end of the range is higher than what S started with but the lower end of the range is lower.
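
Here is a small sketch of that interval-valued update (Python; the 0.5 prior is just an illustration, and S and U are assumed exhaustive):

```python
# Sketch: posterior interval for S when P(X|U) is inscrutable (swept over [0,1]).
def posterior_S(prior_S, likelihood_S, likelihood_U):
    """Bayes' theorem with two exhaustive hypotheses S and U."""
    num = prior_S * likelihood_S
    return num / (num + (1 - prior_S) * likelihood_U)

prior_S = 0.5          # illustrative moderate prior
alpha = 1e-30          # astronomically small P(X|S)

low = posterior_S(prior_S, alpha, 1.0)   # worst case for S: P(X|U) = 1
high = posterior_S(prior_S, alpha, 0.0)  # best case for S: P(X|U) = 0
print(low, high)  # ~1e-30 (within an order of magnitude of alpha) and 1.0
```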

Thus, if we were to actually do the experiment that van Inwagen describes, and get the results that he thinks we would get (namely, a sequence converging to some probability), that would not support the hypothesis S that van Inwagen thinks it would support.

Monday, November 29, 2021

Simultaneous causation and determinism

Consider the Causal Simultaneity Thesis (CST) that all causation is simultaneous. Assume that simultaneity is absolute (rather than relative). Assume there is change. Here is a consequence I will argue for: determinism is false. In fact, more strongly, there are no diachronic deterministic causal series. What is surprising is that we get this consequence without any considerations of free will or quantum mechanics.

Since there is a very plausible argument from presentism to CST (a non-simultaneous fundamental causal relation could never obtain between two existent things given presentism), we get an argument from presentism to indeterminism.

Personally, I am inclined to think of this argument as a bit of evidence against CST and hence against presentism, because it seems to me that there could be a deterministic world, even though there isn’t. But tastes differ.

Now the argument for the central thesis. The idea is simple. On CST, as soon as the deterministic causes of an effect are in place, their effect is in place. Any delay in the effect would mean a violation of the determinism. There can be nothing in the deterministic causes to explain how much delay happens, because all the causes work simultaneously. And so if determinism is true—i.e., if everything has a deterministic cause—then all the effects happen all at once, and everything is already in the final state at the first moment of time. Thus there is no change if we have determinism and CST.

The point becomes clearer when we think about how an adherent of CST explains diachronic causal series. We have an item A that starts existing at time t1, persists through time t2 (kept in existence not by its own causal power, as that would require a diachronic causal relation, but either by a conserver or a principle of existential inertia), then causes an item B, which then persists through time t3 and then causes an item C, and so on. While any two successive items in the causal series A, B, C, ... must overlap temporally (i.e., there must be a time at which they both exist), we need not have temporal overlap between A and C, say. We can thus have things perishing and new things coming into being after them.

But if the causation is deterministic, then as soon as A exists, it will cause B, which will cause C, and so on, thereby forcing the whole series to exist at once, and destroying change.

In an earlier post, I thought this made for a serious objection to CST. I asked: “Why does A ‘wait’ until t2 to cause B?” But once we realize that the issue above has to do with determinism, we see that an answer is available. All we need to do is to suppose there is probabilistic causation.

For simplicity (and because this is what fits best with causal finitism) suppose time is discrete. Then we may suppose that at each moment of time at which A exists it has a certain low probability p_AB of causing B if B does not already exist. Then the probability that A will cause B precisely after n units of time is (1 − p_AB)^n p_AB. It follows mathematically that “on average” it will cause B after (1 − p_AB)/p_AB fundamental units of time.
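
The expectation is the standard geometric-series computation (spelled out here for completeness):

```latex
% Expected number of fundamental time units before A causes B,
% where at each unit B is produced with probability p = p_{AB}:
\begin{align*}
E[N] \;=\; \sum_{n=0}^{\infty} n\,(1-p)^{n} p
     \;=\; p(1-p) \sum_{n=1}^{\infty} n\,(1-p)^{n-1}
     \;=\; \frac{p(1-p)}{p^{2}}
     \;=\; \frac{1-p}{p}.
\end{align*}
% With p = 1/(1+u) this gives E[N] = u, matching the designer's recipe below.
```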

It follows that for any desired average time delay, a designer of the universe can design a cause that has that delay. Let’s say that we want B to come into existence on average u fundamental units of time after A has come into existence. Then the designer can give A a causal power of producing B at any given moment of time at which B does not already exist with probability p_AB = 1/(1 + u).

The resulting setup will be indeterministic, and in particular we can expect significant random variation in how long it takes to get B from A. But if the designer wants more precise timing, that can be arranged as well. Let’s say that our designer wants B to happen very close to precisely one second after A. The designer can then ensure that, say, there are a million instants of time in a second, and that A has the power to produce an event A_1 with a probability at any given instant such that the expected wait time will be 0.0001 seconds (i.e., 100 fundamental units of time), and A_1 the power to produce A_2 with the same probability, and so on, with A_10000 = B. Then by the Central Limit Theorem, the total wait time between A and B can be expected to be fairly close to 10000 × 0.0001 = 1 second, and the designer can get arbitrarily high confidence of an arbitrarily high precision of delay by inserting more instants in each second, and more intermediate causes between A and B, with each intermediate cause having an average delay time of 100 fundamental units (say). (This uses the fact that the geometric distribution has a finite third moment and the Berry-Esseen version of the Central Limit Theorem.)
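
A simulation sketch of the designer’s construction (the run count and seed are my choices; the stage count and probabilities follow the text):

```python
# Sketch: many short geometric waits chained in series concentrate, per the CLT.
import numpy as np

rng = np.random.default_rng(0)
stages, p = 10_000, 1 / 101          # each stage: mean wait (1-p)/p = 100 units
units_per_second = 1_000_000

trials = rng.geometric(p, size=(200, stages)) - 1   # failures before success
totals = trials.sum(axis=1) / units_per_second      # total A-to-B delay, seconds

print(totals.mean(), totals.std())  # mean ~1.0 s, std ~0.01 s: tight around 1 s
```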

Thus, a designer of the universe can make an arbitrarily precise and reliable near-deterministic changing universe despite CST. And that really blunts the force of my anti-deterministic observation as a consideration against CST.

Monday, September 9, 2019

Eleven varieties of contrastive explanation

In connection with free will, quantum mechanics or divine creation it is useful to talk about contrastive explanation. But there is no single generally accepted concept of contrastive explanation, and what one says about these topics varies depending on the chosen concept.

To that end, here is a collection of definitions of contrastive explanation. They all have this form:

  • r contrastively explains why p rather than q if and only if r explains why (p and not q) and [insert any additional conditions].

They vary depending on the additional conditions to be inserted. Here are some options for these:

  1. No additional conditions.

  2. r makes p more likely than q.

  3. r cannot explain q.

  4. r wouldn’t explain q if q were true instead of p.

  5. r wouldn’t explain q as well as it now explains p if q were true instead of p.

  6. q wouldn’t be explained by r or by any proposition with r’s actual grounds if q were true instead of p.

  7. q wouldn’t be explained by r or by any proposition with r’s actual grounds as well as r now explains p if q were true instead of p.

  8. the conjunction of everything explanatorily prior to p makes p more likely than q.

  9. r entails (p and not q).

  10. r entails the truth of p.

  11. r entails the falsity of q.

It is normally not possible to have contrastive explanations of indeterministic free choices or quantum events in senses 9–11, and probably not in sense 8, but it is possible (with an appropriate metaphysical theory of free choice or quantum events) in senses 1–7. As for the case of contingent divine creative decision, things depend on divine simplicity. Without divine simplicity, contrastive explanations are possible in senses 1–7. Interestingly, if divine simplicity is true, then it is not possible to have contrastive explanations of contingent divine creative decisions in senses 6 or 7.

In what I said above, I assumed that the explanandum cannot be a part of the explanans. If, following Peter Railton, one drops this condition, then contrastive explanation of all three phenomena (with or without divine simplicity) becomes possible in all the senses.

Lesson: When one talks about contrastive explanation, one needs to define one’s terms.

Acknowledgments: I am grateful to Christopher Tomaszewski for in-depth discussion that led me to recognize the important difference between 4–5 and 6–7. And the Railton point is basically due to a remark by Yunus Prasetya.

Monday, August 26, 2019

Physical possibility

Here is an interesting question: How can one tell from a physics theory whether some event is physically possible according to that theory?

A sufficient condition for physical possibility is that the physics assigns a non-zero chance to it. But this is surely not a necessary condition. After all, it is possible that you will get heads on each of infinitely many tosses of an indeterministic fair coin, while the chance of that is zero.
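
The zero-chance computation is immediate (spelled out, with Hₙ the event of heads on toss n):

```latex
% Chance of heads on every one of infinitely many independent fair tosses:
\begin{align*}
P\Big(\bigcap_{n=1}^{\infty} H_n\Big)
  \;=\; \lim_{N\to\infty}\prod_{n=1}^{N} P(H_n)
  \;=\; \lim_{N\to\infty} 2^{-N} \;=\; 0,
\end{align*}
% yet the all-heads history is a perfectly good point of the theory's state space.
```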

Plausibly, a necessary condition is that the event should be describable within the state space of the theory. Thus, the state space of classical mechanics simply cannot describe an electron being in a superposition of two position states, and hence such a superposition is physically impossible. But this necessary condition is not sufficient, as Newtonian mechanics bans various transitions that can be described within the state space of classical mechanics.

So, we have a necessary condition and a sufficient condition for physical possibility relative to a physics theory. It would be nice to have a necessary and sufficient condition.

Thursday, March 21, 2019

If open futurism is true, then there are possible worlds that can't ever be actual

Assume open futurism, so that, necessarily, undetermined future tensed “will” statements are either all false or all lack truth value. Then there are possible worlds containing me such that it is impossible for it to be true that I am ever in those worlds. What do I mean?

Consider possible worlds where I flip an indeterministic fair coin on infinitely many days, starting with day 1. Among these worlds, there is a possible world w_H where the coin always comes up heads. But it is impossible for it to be true that I am in that world. For my ever being in that world entails that infinitely many future indeterministic fair coin tosses will be heads. But a proposition reporting future indeterministic events cannot be true given an open future. So, likewise, it cannot be true that I am ever in that world.

But isn’t it absurd that there be a possible world with me such that it is impossible that it be true that I am in it?

My presence is an unnecessary part of the above argument. The point can also be put this way. If open futurism is true, there are possible worlds (such as w_H) that can’t possibly ever be actual.

Wednesday, November 30, 2016

No-collapse interpretations without a dynamically evolving wavefunction in reality

Bohm’s interpretation of quantum mechanics has two ontological components: It has the guiding wave—the wavefunction—which dynamically evolves according to the Schrödinger equation, and it has the corpuscles whose movements are guided by that wavefunction. Brown and Wallace criticize Bohm for this duality, on the grounds that there is no reason to take our macroscopic reality to be connected with the corpuscles rather than the wavefunction.

I want to explore a variant of Bohm on which there is no evolving wavefunction, and then generalize the point to a number of other no-collapse interpretations.

So, on Bohm’s quantum mechanics, reality at a time t is represented by two things: (a) a wavefunction vector |ψ(t)⟩ in the Hilbert space, and (b) an assignment of values to hidden variables (e.g., corpuscle positions). The first item evolves according to the Schrödinger equation. Given an initial vector |ψ(0)⟩, the vector at time t can be mathematically given as |ψ(t)⟩ = U_t|ψ(0)⟩, where U_t is a mathematical time-evolution operator (dependent on the Hamiltonian). And then by a law of nature, the hidden variables evolve according to a differential equation—the guiding equation—that involves |ψ(t)⟩.

But now suppose we change the ontology. We keep the assignment of values to hidden variables at times. But instead of supposing that reality has something corresponding to the wavefunction vector at every time, we merely suppose that reality has something corresponding to an initial wavefunction vector |ψ_0⟩. There is nothing in physical reality corresponding to the wavefunction at t if t > 0. But nonetheless it makes mathematical sense to talk of the vector U_t|ψ_0⟩, and then the guiding equation governing the evolution of the hidden variables can be formulated in terms of U_t|ψ_0⟩ instead of |ψ(t)⟩.
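
To make the reformulation concrete, here is the usual one-particle Bohmian guidance equation, rewritten so that the only wavefunction appearing is the mathematical object U_t|ψ_0⟩ (my rendering; the post does not display the equation):

```latex
% Guidance equation with no dynamically evolving wavefunction in the ontology:
% \psi_t := U_t \psi_0 is a mere mathematical function of the initial vector,
% the time, and the Hamiltonian H.
\begin{align*}
\frac{dQ}{dt} \;=\; \frac{\hbar}{m}\,
  \operatorname{Im}\!\left[\frac{\nabla \psi_t}{\psi_t}\right](Q(t)),
\qquad \psi_t = U_t\,\psi_0, \quad U_t = e^{-iHt/\hbar}.
\end{align*}
```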

If we want an ontology to go with this, we could say that the reality corresponding to the initial vector |ψ_0⟩ affects the evolution of the hidden variables for all subsequent times. We now have only one aspect of reality—the hidden variables of the corpuscles—evolving dynamically instead of two. We don’t have Schrödinger’s equation in the laws of nature, except as a useful mathematical property of the U_t operator. We can talk of the wavefunction U_t|ψ_0⟩ at a time t, but that’s just a mathematical artifact, just as m_1m_2 is a part of the equation expressing Newton’s law of gravitation rather than a direct representation of physical reality. Of course, just as m_1m_2 is determined by physical things—the two masses—so too the wavefunction U_t|ψ_0⟩ is determined by physical reality (the initial vector, the time, and the Hamiltonian). This seems to me to weaken the force of the Brown and Wallace point, since there no longer is a reality corresponding to the wavefunction at non-initial times, except highly derivatively.

Interestingly, the exact same move can be made for a number of other no-collapse interpretations, such as Bell’s indeterministic variant of Bohm, other modal interpretations, the many-minds interpretation, the traveling minds interpretation and the Aristotelian traveling forms interpretation. There need be no time-evolving wavefunction in reality, but just an initial vector which transtemporally affects the evolution of the other aspects of reality (such as where the minds go).

Or one could suppose a static background vector.

It’s interesting to ask what happens if one plugs this into the Everett interpretation. There I think we get something rather implausible: for then all time-evolution will disappear, since all reality will be reduced to the physical correlate of the initial vector. So my move above is only plausible for those no-collapse interpretations on which there is something more beyond the wavefunction.

There is also a connection between this approach and the Heisenberg picture. How close the connection is is not yet clear to me.

Friday, September 23, 2016

A Copenhagen interpretation of classical mechanics

One can always take an indeterministic theory and turn it deterministic in some way or other while preserving empirical predictions. Bohmian mechanics is an example of doing that with quantum mechanics. It's mildly interesting that one can go the other way: take a deterministic theory and turn it indeterministic. I'm going to sketch how to do that.

Suppose we have classical physics with phase space S and a time evolution operator T_t. If the theory is formulated in terms of a constant finite number n of particles, then S will be a 6n-dimensional vector space (three position and three momentum variables for each particle). The time evolution operator takes a point in phase space and says where the system will be after time t elapses if it starts at that point. I will assume that there is a beginning to time at time zero. The normal story then is that physical reality is modeled by a trajectory function s from times to points of S, such that T_t(s(u)) = s(u+t).

Our indeterministic theory will instead say that physical reality is modeled by a (continuous) sequence of probability measures P_t on the phase space S for times t ≥ 0. These probability measures should be thought of as something like a physical field, akin to the wavefunction of quantum mechanics--they represent physical reality, and not just our state of knowledge of it. Mirroring the consciousness-causes-collapse version of the Copenhagen interpretation of quantum mechanics, we now say this. If from time u (exclusive) to time t+u (inclusive) no observation of the system was made, then P_{t+u}(A) = P_u(T_t^{−1}[A]). I.e., the probability measure is just given by tracking forward by the time-evolution operator in that case.

On the other hand, suppose that at time t an observation is made. Assume that observations are binary, and correspond to measurable subsets of phase space. Intuitively, when we observe we are checking whether reality is in some region A of phase space. (It's easy to generalize this to observations having any countable number of possible outcomes.) Suppose P_t* is the value that P_t would have had had there been no observation at t, by the no-observation evolution rule. Then I suppose that with objective chance P_t*(A) we observe A and with objective chance 1 − P_t*(A) we observe not-A, with the further supposition that if one of these numbers is zero, the corresponding observation physically cannot happen. Then the probability measure P_t equals the conditionalization of P_t* on the observation that does in fact occur. In other words, if we observe A, then P_t(B) = P_t*(B|A), and otherwise P_t(B) = P_t*(B|not-A). And then the deterministic evolution continues as before until the next observation.
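
Here is a toy sketch of the two rules on a finite 'phase space' (the finite state set, the permutation dynamics, and the numbers are my simplifications, not part of the story):

```python
# Toy sketch of the Copenhagen-style classical story on a finite phase space.
# States are labeled 0..5; T permutes them; P is the physical probability field.
import random

T = [1, 2, 3, 4, 5, 0]             # toy deterministic one-step time evolution

def evolve(P):
    """No-observation rule: push the measure forward through T."""
    Q = [0.0] * len(P)
    for x, pr in enumerate(P):
        Q[T[x]] += pr
    return Q

def observe(P, A):
    """Observation rule: chancy binary outcome, then conditionalize on it."""
    pA = sum(P[x] for x in A)
    in_A = random.random() < pA    # outcome occurs with its objective chance
    region, norm = (A, pA) if in_A else (set(range(len(P))) - set(A), 1 - pA)
    return [pr / norm if x in region else 0.0 for x, pr in enumerate(P)], in_A

P = [1 / 6] * 6                    # initial probability field
P = evolve(evolve(P))              # two units of unobserved evolution
P, saw_A = observe(P, {0, 1, 2})   # check whether reality is in region {0,1,2}
print(saw_A, P)
```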

As far as I can see, this story generates the same empirical predictions as the original deterministic classical story. Also note that while in this story, collapse was triggered by observation, presumably one can also come up with stories on which collapse is triggered by some other kind of physical process.

So what? Well, here's one thought. Free will is (I and others have argued) incompatible with determinism. One thought experiment that people have raised is this. If you think free will incompatible with determinism, and suddenly the best physics turned out to be deterministic, what would you do? Would you deny free will? Or would you become a compatibilist? Well, the above example shows that there is a third option: give an indeterministic but empirically adequate reinterpretation of the physics. (Well, to be honest, this might not entirely solve the problem. For it might be, depending on how the details work out, that past observations narrow down the options for brain states so much that they become deterministic. But at least there would be hope that one wouldn't need to give up on libertarianism.)

The above way of making free will compatible with physical determinism is functionally similar to Kant's idea that our free choices affect the initial conditions of the universe, but without the freaky backwards-like (not exactly backwards, since the noumenal isn't in time) causation.

Here's another thought. Any indeterministic theory can be reinterpreted as a deterministic multiverse theory with traveling minds, while maintaining empirical adequacy. The multiverse traveling minds theory allows for causal closure of a deterministic physics together with robust alternate-possibilities freedom. Combining the two reinterpretations, we could in principle start with a deterministic physics, then reinterpret it in a Copenhagen way, and then impose on top of that the traveling minds interpretation, thereby gaining an empirically equivalent theory with robust alternate-possibilities freedom and no mental-to-physical causation. I bet a lot of people thought this can't be done.

Wednesday, November 5, 2014

The traveling minds interpretation of indeterministic theories

I'm going to start by offering a simple way—likely not original, but even if so, not very widely discussed—of turning an indeterministic physical theory into a deterministic physical theory with an indeterministic dualist metaphysics. While I do not claim, and indeed rather doubt, that the result correctly describes our world, the availability of this theory has some rather interesting implications for the mind-body and free will and determinism debates.

Start with any indeterministic theory that we can diagram as a branching structure. The first diagram illustrates such a theory. The fat red line is how things go. The thin black dotted lines are how things might have gone but didn't. At each node, things might go one way or another, and presumably the theory specifies the transition probabilities—the chances of going into the different branches. The distinction between the selected branches and the unselected branches is that between the actual and the merely possible.

The Everett many-worlds interpretation of Quantum Mechanics then provides us with a way of making an indeterministic theory deterministic. We simply suppose that all the branches are selected. When we get to a node, the world splits, and so do we its observers. All the lines are now fat and red: they are all taken. There are some rather serious probabilistic problems with the Everett interpretation—it works best if the probabilities of each branch coming out of a node are equal, but in general we would not expect this to be true. Also, there are serious ethics problems, since we don't get to affect the overall lot of humankind—no matter which branch we ourselves take, there will be misery on some equally real branches and happiness on others, and we can do nothing about that.

To solve the probabilistic problems, people introduce the many-minds interpretation of the many-worlds interpretation. Each person has infinitely many minds. When we get to a branch point, each mind indeterministically "chooses" (i.e., is assigned to) an outgoing branch according to the probabilities in the physics. Since there are infinitely many of these minds, at least in the case where there are finitely many branches coming out of a node we will expect each outgoing branch to get infinitely many of the minds going along it. So we're still splitting, and we still have the ethics problems since we don't get to affect the overall lot of humankind—or even of ourselves (no matter which branch we go on, infinitely many of our minds will be miserable and infinitely many will be happy).

But now I want to offer a traveling minds interpretation of the indeterministic theory. On the physical side, this interpretation is just like the many-worlds interpretation. It is a dualist interpretation like the many-minds one: we each have a non-physical mind. But there is only one mind per person, as per common sense, and minds never split. Moreover our minds are all stuck together: they always travel together. When we come to a branching point, the physical world splits just as on the many-worlds interpretation. But the minds now collectively travel together on one of the outgoing branches, with the probability of the minds taking a branch being given by the indeterministic theory.
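
The structural contrast with many-minds can be put in a few lines of code (a sketch; the branch labels, probabilities, and minds are placeholders):

```python
# Sketch: at a node with outgoing branches and physics-given probabilities,
# many-minds samples per mind; traveling minds draws one shared sample.
import random

branches, probs = ["left", "right"], [0.3, 0.7]
minds = ["you", "me", "the cat"]

many_minds = {m: random.choices(branches, probs)[0] for m in minds}   # may split
shared = random.choices(branches, probs)[0]
traveling_minds = {m: shared for m in minds}                          # never split

print(many_minds, traveling_minds)
```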

In the diagram, the red lines indicate physical reality. So unlike in the original indeterministic theory, and like in the many worlds interpretation, all the branches are physically real. But the thick red lines and the filled-in nodes indicate the observed branches, the ones with the minds. (Of course, if God exists, he observes all the branches, but here I am only talking of the embodied observers.) On the many-worlds interpretation, all the relevant branches were not only physically real, but also observed. Presumably, many of the unobserved branches have zombies: they have an underlying physical reality that is very much like the physical reality we observe, but there are no minds.

The traveling minds interpretation solves the probability problems. The minds can travel precisely according to the probabilities given by the physics. Traveling minds as generated in the above way will have exactly the same empirical predictions as the original indeterministic theory. (In particular, one can build traveling minds from a Copenhagen-style consciousness-causes-collapse interpretation of Quantum Mechanics, or a GRW-style interpretation.)

Traveling Minds helps a lot with the ethics problem that many-worlds and many-minds faced. For although physical reality is deterministically set, it is not set which part of physical reality is connected with the minds. We cannot affect what physical reality is like, but we can affect which part of physical reality we collectively experience. And that's all we need. Note that "we" here will include all the conscious animals as well: their minds are traveling too. In fact, as a Thomist, I would be inclined to more generally make this a "traveling forms" theory. Thus the unselected branches not only have zombies; they also have physical arrangements like those of a tree, though what is there is not a tree but just an arrangement of fields or particles, because it lacks metaphysical form. But in the following I won't assume this enhanced version of the theory.

Now while I don't endorse this theory or interpretation—I don't know if it can be made to fit with hylomorphic metaphysics—I do want to note that it opens an area of logical space that I think a lot of people haven't thought about.

Traveling minds is an epiphenomenalist theory (no mind-to-physics causation) with physical determinism, and is as compatible with the causal closure of the physical as any physicalist theory (it may be that physicalist theories themselves require a First Cause; if so, then so will the traveling minds theory). Nonetheless, it is a theory that allows for fairly robust alternate possibilities freedom. While you cannot affect what physical reality is like, you can affect what part of physical reality we collectively inhabit, and that's almost as good. We have a solution to the mind-to-world causation problem for dualism (not that I think it's an important problem metaphysically speaking).

I expect that I and other philosophers have incautiously said many things about things like epiphenomenalism, determinism and causal closure that the traveling minds theory provides a counterexample to. For instance, while traveling minds is a version of epiphenomenalism, it is largely untouched by the standard objections to epiphenomenalism. For instance, one of the major arguments against epiphenomenalism is that if minds make no causal difference, then I have no reason to think you have a mind, since your mind makes no impact on my observations. But this argument fails because it assumes incorrectly that the only way for your mind to make an impact on my observations is by affecting physical reality. But your mind can also make an impact on my observations by leaving physical reality unchanged, and simply affecting which part of physical reality we are all collectively hooked up to.

Tuesday, October 8, 2013

Quantum collapse and free choices

When I say that something is metaphysically impossible to do, I will mean: metaphysically impossible for creaturely causation. For this post I leave open the question of what God might be able to do through primary causation. The following seems quite plausible to me:

  1. If a particle is in a mixed |A>+|B> quantum state, then it is metaphysically impossible to determine the particle to collapse into the |A> state.
It is surely metaphysically possible to bring it about that the particle should have a transition from an |A>+|B> state to an |A> state. But not every transition from an |A>+|B> state to an |A> state is a collapse. A collapse seems to be a natural kind of transition under the influence of the wavefunction. One can presumably take a particle in a mixed state, and then determine it to have a particular pure state. But that isn't collapse. That is our change of the particle's state. This seems very plausible to me.

The following seems to me to be just as plausible as (1):

  2. If an agent is deciding between A for reasons R and B for reasons S, then it is metaphysically impossible to determine the agent to choose A for R over B for S.
Of course, compatibilists can't say (2). But I find it surprising that in the Frankfurt literature incompatibilists typically grant the denial of (2), allowing that neural manipulators or blockers can induce particular choices. But I see very little reason for an incompatibilist to think (2) false. Of course, it may well be possible to deterministically induce a transition from the state of the agent deciding between A and B to the state of the agent attempting to execute A. But such a transition would seem to me to be very unlikely to be a choice.

Simply doing A after deciding between A and B does not constitute having chosen A. Nor is it sufficient for having chosen A that one does A because of deciding between A and B. For one to have chosen, one's doing of A must be caused in the right way by one's process of decision between A for R and B for S. But it just seems very implausible that an externally determined transition, even if it somehow causally incorporated the process of decision, would be a case of causing in the right way.

Could there perhaps be overdetermination, so that one's transition from deciding between A and B to one's doing of A is both an exercise of freedom and externally determined? Quite possibly. But that wouldn't be a case where the choice is overdetermined. Rather, it would be a case where choice and external determination overdetermine the action A. The choice, however, is still undetermined.

But couldn't one make the agent choose A for R over B for S by strengthening the motive force of R or weakening that of S? I don't think so. For as long as each set of reasons has some motive force over and against the other set of reasons, it might yet win, just as a particle in an |A>+0.000001|B> state might yet collapse into the |B> state.

The above doesn't settle one question. While it is not possible to determine that one choose A over B, maybe it is possible to determine that one not choose B, by preventing a transition into the decided-for-B state while allowing a choice in favor of the decided-for-A state? I see little reason to allow such a possibility.

Saturday, March 23, 2013

A Copenhagen story about the problem of suffering before human sin

It sure looks like there was a lot of suffering in the animal world prior to the advent of humanity, and hence before any sins of humanity. Yet it would be attractive, both theologically (at least for Christians) and philosophically, if one could say that evil entered the physical world through the free choices of persons. One could invoke the idea of angels who fell before the evolutionary process got started and who screwed things up (that might even be the right story). Or one could invoke backwards causation (Hud Hudson's hypertime story does something like that). Here I want to explore another story. I don't believe the story I will give is true. But the story is compatible with our observations outside of Revelation, does not lead to widespread scepticism, and is motivated in terms of an interpretation of quantum mechanics that has been influential.

Begin with this observation. If the non-epistemic Copenhagen interpretation (NECI) of Quantum Mechanics is literally true, then before there were observers, there was no earth and hence no life on earth. Given indeterministic events in the history of the universe, the world existed in a giant superposition between an earth and a no-earth state. The Milky Way Galaxy may not have even existed then, but instead there was a giant superposition between Milky-Way and no-Milky-Way states. And then an observation collapsed this giant superposition in favor of the sort of Solar System and Milky Way that we observe. There are difficult details to spell out here, which we can talk about in the discussion. But note that the story predicts that we will have astronomical evidence of the Milky Way existing long before there were observers on earth, even though perhaps it didn't--perhaps there was just the giant superposition. For when such a superposition collapses, it leaves evidence as of the remaining branch having been there for a long time earlier.

Now to make this a defense of the idea that suffering in the animal world entered through human sin, I need a few assumptions beyond the above plain NECI story:

  1. the observations that collapse the wavefunction are observations by intelligent embodied observers
  2. quantum states only come to be substrates of conscious states when the wavefunction is strongly concentrated on them (think of a very narrow Gaussian)
  3. prior to there being humans on earth, there were no highly concentrated quantum states of the sort that would be substrates of conscious states
  4. humans were the first embodied intelligent observers of the earth (or of other stuff relevantly entangled with it)
  5. God set up special laws of nature such that if humans were never to make wrong choices, no wavefunctions would ever collapse into the substrates of painful states.
  6. optional but theologically and philosophically attractive: the unsuperposed existence of humans comes from a special divinely-wrought collapse of the wavefunction (this would solve one problem with NECI, namely how the first observation was made, given that on plain NECI before the first observation there was a superposition of observer and no-observer states; it would also help reconcile creation and evolution)

One might even connect the giant superposition with the formless and void state mentioned in the Book of Genesis, though I do not particularly recommend this exegesis and I don't believe the story I am giving is in fact true (and I am mildly inclined to think it false).

Objection 1: The story makes standard paleontological and geological claims literally false. There never were any dinosaurs or trilobites, just a giant superposition with dinosaur- and trilobite-states in one component.

Response: So does the plain NECI story, without any of my supplements, such as that it is intelligent observation that collapses the wavefunction. And just like the plain NECI story, my extended story explains why we have the evidence we do.

Objection 2: Like the worst of the young-earth creationist stories, this story involves a massive divine deception.

Response: Not at all. Consider Descartes' attractive idea that what we expect from God is not that we would always get science right, but that we would be capable of scientifically correcting our mistakes. And the discovery of quantum mechanics, with the invention of the NECI interpretation, came within a century of Darwin's work. As soon as we had quantum mechanics with the NECI interpretation, we had good reason to doubt whether prior to the existence of observers there was an earth simpliciter or just an earth-component in a giant superposition.

Objection 3: There are better interpretations of quantum mechanics than NECI.

Response: Weighing the pros and cons of an interpretation of quantum mechanics requires weighing all its costs and benefits. This will include weighing the theological benefits of this interpretation, given the evidence that there is a God.

Variant: If we want, we can reinterpret the paleontological and geological claims about how things were before observers as relativized to a component of the wavefunction, while exempting consciousness from this relativization--only where there are highly concentrated states is there consciousness. The Everett interpretation basically does this relativization for all claims. The present relativization is, I think, less problematic than the Everett one. First, it doesn't branch intelligent agents or conscious states in the way the Everett interpretation does, a branching that generates the severely counterintuitive consequences of Everett's theory. Second, I do not think it has the well-known serious philosophical problems with the interpretation of probability that the Everett interpretation suffers from: the probabilistic transitions all happen with intelligent observation, and are objectively chancy transitions with the probabilities being interpreted according to one's favorite view of objective chances.

Final remarks: Why don't I believe this story? Well, for one, I find myself incredulous at it. Second, we know that either quantum mechanics or relativity theory is false, and I see little reason to assign more credence to quantum mechanics. Third, I do want to preserve the claims of the special sciences, like biology and geology, without implausible relativization. Fourth, I am sceptical of (1), the idea that only intelligent observation collapses the wavefunction.

Friday, February 19, 2010

Externalism about prudential reasons

Consider this case, which a colleague tells me is standard. You are bleeding badly, and you need to get to the hospital. You get in your car. No ambulance is available. However, unbeknownst to you, your car's ignition is wired to a bomb. What should you, prudentially, do? Suppose you say "Don't go to the hospital, try to self-treat." Why would you say that? Well, it has better consequences than turning on the ignition. Call somebody who says this a "consequences externalist".

But what does it mean to say that it has better consequences than turning on the ignition? I suppose it's because something like this pair of conditionals is true:

  1. Were you to turn on the ignition, the bomb would explode and you'd die immediately.
  2. Were you not to turn on the ignition, you'd live longer.
But in fact we live in a world that, as far as we know, is suffused with indeterminism. There is a tiny chance that if you turn on the ignition, the electrons from the battery will quantum tunnel around the bomb's igniter and to the car's spark plug. There is a tiny chance that if you don't turn on the ignition, a quantum event will increase the heat in the bomb and make it explode. And so on.

If something like generalized standard Molinism (i.e., Molinism generalized to indeterministic stuff other than free will) is true, (1) and (2) are perfectly well defined. But suppose no such view is true. So, really, all we have at the time of the decision are objective probabilities: it is overwhelmingly likely, given the physical state of the world, that if you turn on the ignition, the bomb will explode and you'll die immediately, etc. So, it seems, the consequences externalist has to be deeming the conditionals true when the probabilities are high enough.

So, it seems, the consequences externalist is saying that you ought not to turn on the ignition because it is exceedingly likely, given the actual arrangement of the universe at the time of the action, that refraining will let you live longer, and it is exceedingly likely that turning on the ignition will not.

Fine. Now imagine that you in fact turn on the ignition, the electrons quantum-tunnel around the bomb, and all is well (maybe eventually the bomb quantum-tunnels into the sun, too). This is exceedingly unlikely, but is compatible with everything in the story so far. According to the consequences externalist position I've sketched, you in fact did the wrong thing—even though it had better consequences than the alternative. You did the wrong thing, because at the time of the decision the objective probabilities were against this decision.

But to say that in this case you did the wrong thing goes against the guiding intuitions of the consequences externalist. Once you admit that you might have done the wrong thing even though it had the better consequences, you should probably just abandon consequences externalism altogether, and move from objective to subjective probabilities.

Now, there is something the consequences externalist can say. She can say that we evaluate subjunctives by probabilities when their antecedents are false, and by their consequents when their antecedents are true. This is messy, but not crazy. So, in the case I've described, (1) is false because it has a false consequent and a true antecedent, but (2) is true because the objective probability of the consequent given the antecedent is high at the time of the action.

But if the consequences externalist says this, she has the following weird thing to say. She has to say that (a) turning on the ignition was in fact right, but (b) had you not turned on the ignition, turning on the ignition would have been wrong. Why does she have to say (b)? For if you had not turned on the ignition, the subjunctive conditional (1) would have been true. It would have been true because it would have had a false antecedent and hence would have to have been evaluated according to the objective probabilities.

So, oddly, you did the right thing, but had you not done it, it would have been the wrong thing. That is weird indeed.