
Monday, April 7, 2025

Information Processing Finitism, Part II

In my previous post, I explored information processing finitism (IPF), the idea that nothing can essentially causally depend on an infinite amount of information about contingent things.

Since a real-valued parameter, such as mass or coordinate position, contains an infinite amount of information, a dynamics that fits with IPF needs some non-trivial work. One idea is to encode a real-valued parameter r as a countable sequence of more fundamental discrete parameters r1, r2, ... where ri takes its value in some finite set Ri, and then hope that we can make the dynamics be such that each discrete parameter depends only on a finite number of discrete parameters at earlier times.

In the previous post, I noted that if we encode real numbers as Cauchy sequences of rationals with a certain prescribed convergence rate, then we can do something like this, at least for a toy dynamics involving continuous functions on the interval [0,1]. However, an unhappy feature of the Cauchy encoding is that it’s not unique: a given real number can have multiple Cauchy encodings. This means that on such an account of physical reality, physical reality has more information in it than is expressed in the real numbers that are observable—for the encodings are themselves a part of reality, and not just the real numbers they encode.
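
To make the Cauchy-encoding idea concrete, here is a minimal Python sketch (my own illustration, not from the previous post, with a Lipschitz assumption added for simplicity; for a general continuous f one would use a modulus of continuity instead, still needing only finitely many input parameters per output parameter). A real x in [0,1] is represented by dyadic rationals q(1), q(2), ... with |x − q(n)| ≤ 1/2^n, so each q(n) takes values in a finite set, and the nth approximation of f(x) is computed from q(n+1) alone:

  from fractions import Fraction

  def encode(binary_digits):
      """Encode a real in [0,1], given by its binary digits, as a Cauchy sequence
      of dyadic rationals with |x - q(n)| <= 1/2^n (the prescribed convergence rate)."""
      def q(n):
          return sum(Fraction(d, 2 ** (i + 1)) for i, d in enumerate(binary_digits[:n]))
      return q

  def push_forward(f, q):
      """Given f: [0,1] -> [0,1] with Lipschitz constant 1 (a simplifying assumption),
      return a Cauchy encoding of f(x) whose nth term depends only on q(n + 1),
      i.e. on a finite amount of information about x."""
      def p(n):
          # |f(x) - f(q(n+1))| <= |x - q(n+1)| <= 1/2^(n+1); rounding to a dyadic
          # rational with denominator 2^(n+1) adds at most 1/2^(n+2), so the total
          # error is below 1/2^n.
          return Fraction(round(f(q(n + 1)) * 2 ** (n + 1)), 2 ** (n + 1))
      return p

  # Example: x = 0.101 in binary = 5/8, f(t) = t/2, so f(x) = 5/16.
  q = encode([1, 0, 1, 0, 0, 0, 0, 0, 0, 0])
  p = push_forward(lambda t: t / 2, q)
  print(p(3))  # a dyadic rational within 1/8 of 5/16

As the post notes, though, such encodings are not unique: many different sequences of dyadic approximations pick out the same real, and that non-uniqueness is exactly the unhappy feature at issue.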

So I’ve been wondering if there is some clever encoding method where each real number, at least between 0 and 1, can be uniquely encoded as a countable sequence of discrete parameters such that for every continuous function f from [0,1] to [0,1], the value of each discrete parameter corresponding to f(x) depends only on a finite number of discrete parameters corresponding to x.

Sadly, the answer is negative. Here’s why.

Lemma. For any nonempty proper subset A of [0,1], there are uncountably many sets of the form f−1[A] where f is a continuous function from [0,1] to [0,1].

Given the lemma, without loss of generality suppose all the parameters are binary. For the ith parameter, let Bi be the subset of [0,1] where the parameter equals 1. Let F be the algebra of subsets of [0,1] generated by the Bi. This is countable. Any information that can be encoded by a finite number of parameters corresponds to a member of F. Now fix a nonempty proper A ∈ F, and suppose that for each continuous f, whether f(x) ∈ A depends on a finite number of parameters. Then for each such f there is a C ∈ F such that x ∈ C iff f(x) ∈ A, i.e., C = f−1[A]. So F contains every set of the form f−1[A] for continuous f, and hence F is uncountable by the lemma, a contradiction.

Quick sketch of proof of lemma: The easier case is where either A or its complement is non-dense in [0,1]—then piecewise linear f will do the job. If A and its complement are dense, let (an) and (bn) be sequences decreasing to 0 such that both an and bn are within 1/2^(n+2) of 1/2^n, but an ∈ A and bn ∉ A. Then for any set U of positive integers, there will be a strictly increasing continuous function fU such that fU(an) = an if n ∈ U and fU(bn) = an if n ∉ U. Note that fU−1[A] contains an if and only if n ∈ U and contains bn if and only if n ∉ U. So for different sets U, fU−1[A] is different, and hence there are continuum-many sets of the form fU−1[A].

Tuesday, June 11, 2024

A very simple counterexample to Integrated Information Theory?

I’ve been thinking a bit about Integrated Information Theory (IIT) as a physicalist-friendly alternative to functionalism as an account of consciousness.

The basic idea of IIT is that we measure the amount of consciousness in a system by subdividing the system into pairs of subsystems and calculating how well one can predict the next state of each of the two subsystems without knowing the state of the other. If there is a partition which lets you make the predictions well, then the system is considered reducible, with low integrated information, and hence low consciousness. So you look for the best-case subdivision—one where you can make the best predictions, as measured by Shannon entropy with a certain normalization—and say that the amount Φ of “integrated information” in the system varies inversely with the quality of these best predictions. The amount of consciousness in the system then corresponds to the amount Φ of integrated information.

Aaronson gives a simple mathematical framework and what sure look like counterexamples: systems that intuitively don’t appear to be mind-like and yet have a high Φ value. Surprisingly, though, Tononi (the main person behind IIT) has responded by embracing these counterexamples as cases of consciousness.

In this post, I want to offer a counterexample with a rather different structure. My counterexample has an advantage and a disadvantage with respect to Aaronson’s. The advantage is that it is a lot harder to embrace my counterexample as an example of consciousness. The disadvantage is that my example can be avoided by an easy tweak to the definition of Φ.

It is even possible that my tweak is already incorporated in the official IIT 4.0. I am right now only working with Aaronson’s perhaps simplified framework (for one, his framework depends on a deterministic transition function), because the official one is difficult for me to follow. And it is also possible that I am just missing something obvious and making some mistake. Maybe a reader will point that out to me.

The idea of my example is very simple. Imagine a system consisting of two components, each of which has N possible states. At each time step, the two components swap states. There is now only one decomposition of the system into two subsystems, which makes things much simpler. And note that each subsystem’s state at time n has no predictive power for its own state at n + 1, since its state at n + 1 is simply the other subsystem’s state at n. The Shannon entropies corresponding to the best predictions are going to be log₂N, and so Φ of the system is 2log₂N. By making N arbitrarily large, we can make Φ arbitrarily large. In fact, if we have an analog system with infinitely many states, then Φ is infinite.
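
To make the entropy bookkeeping explicit, here is a small Python check (my own sketch in the spirit of Aaronson’s simplified framework, not the official IIT 4.0 calculation, assuming a uniform distribution over the N² joint states): each half’s uncertainty about its own next state, given only its own current state, is log₂N bits, and summing over the two halves of the unique bipartition gives 2log₂N.

  import math

  def residual_uncertainty_per_half(N):
      """H(next own state | current own state) for the swap dynamics
      (a, b) -> (b, a), with the joint state (a, b) uniformly distributed.
      A half's own current state carries no information about its next state."""
      h = 0.0
      for a in range(N):
          for b in range(N):               # after the swap, the first half's next state is b
              p_joint = 1.0 / (N * N)      # probability of this (current, next) pair
              p_cond = p_joint / (1.0 / N) # = 1/N: next state is uniform given current state
              h -= p_joint * math.log2(p_cond)
      return h

  for N in (2, 4, 16):
      per_half = residual_uncertainty_per_half(N)
      print(N, per_half, 2 * per_half)     # log2(N) bits per half; 2*log2(N) summed over the cut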

Advantage over Aaronson’s counterexamples: There is nothing in the least consciousness-like in this setup. We are just endlessly swapping states between two components. That’s not consciousness. Imagine the components are hard drives and we just endlessly swap the data between them. To make it even more vivid, suppose the two hard drives have the same data, so nothing actually changes in the swaps!

Disadvantage: IIT can escape the problem by modifying the measure Φ of integrated information in some way in the special case where the components are non-binary. Aaronson’s counterexamples use binary components, so they are unaffected. Here are three such tweaks. (i) Divide Φ by the logarithm of the maximum number of states in a component (seems ad hoc). (ii) Restrict the system to one with binary components, and therefore require that any component with more than two possible states be reinterpreted as a collection of binary components encoding the non-binary state (but which binarization should one choose?). (iii) Define Φ of a non-binary system as the minimum of the Φ values over all possible binarizations. Either (i) or (iii) kills my counterexample.
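
A quick gloss on how tweak (i) defuses the example, using the numbers above (this is just the post’s own arithmetic, not an official IIT normalization): dividing by the logarithm of the maximum number of states per component gives 2log₂N / log₂N = 2 for every N, so the value no longer grows without bound as N increases.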

Monday, June 10, 2024

Computation

I’ve been imagining a very slow embodiment of computation. You have some abstract computer program designed for a finite-time finite-space subset of a Turing machine. And now you have a big tank of black and white paint that is constantly being stirred in a deterministic way, but one that is some ways into the ergodic hierarchy: it’s weakly mixing. If you leave the tank for eternity, every so often the paint will make some seemingly meaningful patterns. In particular, on very rare occasions one finds in the tank an artistic drawing of the next step of the Turing machine’s functioning while executing that program—a drawing of a tape, a head, and various symbols on the tape. Of course, in between these steps there will be millennia of garbage.

In fact, it turns out that (with probability one) there will be some specific number n of years such that the correct first step of the Turing machine’s functioning will be drawn in exactly n years, the correct second step in exactly 2n years, the correct third one in exactly 3n years, and so on (remembering that there is only a finite number of steps, since we are working with a finite-space subset). (Technically, this is because weak mixing implies multiple weak mixing.) Moreover, each step causally depends on the preceding one. Will this be computation? Will the tank of paint be running the program in this process?

Intuitively, no. For although we do have causal connections between the state in n years and the next state in 2n years and so on, those connections are too counterfactually fragile. Let’s say you took the artistic drawing of the Turing machine in the tank at the first step (namely in n years) and you perturbed some of the paint particles in a way that makes no visible difference to the visual representation. Then probably by 2n years things would be totally different from what they should be. And if you changed the drawing to a drawing of a different Turing machine state, the every-n-years evolution would also change.

So it seems that for computation we need some counterfactual robustness. In a real computer, physical states define logical states in an infinity-to-one way (infinitely many “small” physical voltages count as a logical zero, and infinitely many “larger” physical voltages count as a logical one). We want to make sure that if the physical states were different but not sufficiently different to change the logical states, this would not be likely to affect the logical states in the future. And if the physical states were different enough to change the logical states, then the subsequent evolution would likely change in an orderly way. Not so in the paint system.
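
Here is a toy Python illustration of that infinity-to-one mapping (the threshold values are made up for the example, not taken from any real logic-family specification):

  def logical_state(voltage, low_max=0.8, high_min=2.0):
      """Map a continuous voltage to a discrete logical state. Infinitely many
      'small' voltages count as 0 and infinitely many 'larger' ones count as 1,
      so perturbations that stay within a band leave the logical state unchanged."""
      if voltage <= low_max:
          return 0
      if voltage >= high_min:
          return 1
      return None  # forbidden zone: no well-defined logical state

  print(logical_state(0.30), logical_state(0.31), logical_state(3.1))  # 0 0 1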

But the counterfactual robustness is tricky. Imagine a Frankfurt-style counterfactual intervener who is watching your computer while your computer is computing ten thousand digits of π. The observer has a very precise plan for all the analog physical states of your computer during the computation, and if there is the least deviation, the observer will blow up the computer. Fortunately, there is no deviation. But now with the intervener in place, there is no counterfactual robustness. So it seems the computation has been destroyed.

Maybe it’s fine to say it has been destroyed. The question of whether a particular physical system is actually running a particular program seems like a purely verbal question.

Unless consciousness is defined by computation. For whether a system is conscious, or at least conscious in a particular specific way, is not a purely verbal question. If consciousness is defined by computation, we need a mapping between physical states and logical computational states, and what that mapping is had better not be a purely verbal question.

Tuesday, April 16, 2024

A version of computationalism

I’ve been thinking how best to define computationalism about the mind, while remaining fairly agnostic about how the brain computes. Here is my best attempt to formulate computationalism:

  • If a Turing machine with sufficiently large memory simulates the functioning of a normal adult human being with sufficient accuracy, then given an appropriate mapping of inputs and outputs but without any ontological addition of a nonphysical property or part, (a) the simulated body will dispositionally behave like the original at the level of macroscopic observation, and (b) the simulation will exhibit mental states analogous to those the simulated human would have.

The “analogous” in (b) allows the computationalist at least two differences between the mental states of the simulation and the mental states of the simulated. First, we might allow for the possibility that the qualitative features of mental states—the qualia—depend on the exact type of embodiment, so in vivo and in silico versions of the human will have different qualitative states when faced with analogous sensory inputs. Second, we probably should allow for some modest semantic externalism.

The “without any ontological addition” is relevant if one thinks that the laws of nature, or divine dispositions, are such that if a simulation were made, it would gain a soul or some other nonphysical addition. In other words, the qualifier helps to ensure that the simulation would think in virtue of its computational features, rather than in virtue of something being added.

Note that computationalism so defined is not entailed by standard reductive physicalism. For while the standard reductive physicalist is going to accept that a sufficiently accurate simulation will yield (a), they can think that real thought depends on physical features that are not had by the simulation (we could imagine, for instance, that to have qualia you need to have carbon, and merely simulated carbon is not good enough).

Moreover, computationalism so defined is compatible with some nonreductive physicalisms, say ones on which there are biological laws that do not reduce to laws of physics, as long as these biological laws are simulable, and the appropriate simulation will have the right mental states.

In fact, computationalism so defined is compatible with substance dualism, as long as the functioning of the soul is simulable, and the simulation would have the right mental states without itself having to have a soul added to it.

Computationalism defined as above is not the same as functionalism. Functionalism requires a notion of a proper function (even if statistically defined, as in Lewis). No such notion is needed above. Furthermore, computationalism as defined here is not a thesis about every possible mind, but only about human minds. It seems pretty plausible that (perhaps in a world with different laws of nature than ours) it is possible to have a mind whose computational resources exceed those of a Turing machine.

Tuesday, February 6, 2024

Computationalism and subjective time

Conway’s Game of Life is Turing complete. So if computationalism about mind is true, we can have conscious life in the Game of Life.

Now, consider a world C (for Conway) with three discrete spatial dimensions, x, y and z, and one temporal dimension, t, discrete or not. Space is thus a three-dimensional regular grid. In addition to various “ordinary” particles that occupy the three spatial dimensions and have effects forwards along the temporal dimension, C also has two special particle types, E and F.

The causal powers of the E and F particles have effects simultaneous with their causes, and follow a spatialized version of the rules of the Game of Life. Say that a particle at coordinates (x0,y0,z0) has as its “neighbors” particles at the eight grid points with the same z coordinate z0 and surrounding (x0,y0,z0). Then posit these causal powers of an E (for “empty”) or F (for “full”) particle located at (x0,y0,z0):

  • If it’s an F particle and has two or three F neighbors, it instantaneously causes an F particle at (x0,y0,z0+1).

  • If it’s an E particle and has exactly three F neighbors, it instantaneously causes an F particle at (x0,y0,z0+1).

  • Otherwise, it instantaneously causes an E particle at (x0,y0,z0+1).

Furthermore, suppose that along the temporal axis, the E and F particles are evanescent: if they appear at some time, they exist only at that time, perishing right after.

In other words, once some E and F particles occur somewhere in space, they instantly propagate more E and/or F particles to infinity along the z-axis, all of which particles then perish by the next moment of time.
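
Here is a small Python rendering of the spatialized rule (my own sketch, writing F as 1 and E as 0, and updating only the finitely many grid points listed in the layer): the particles at one z-layer instantaneously determine the next layer in exactly the way one Game of Life step determines the next generation.

  def next_layer(layer):
      """layer maps (x, y) to 1 (an F particle) or 0 (an E particle) at some z.
      Returns the layer caused at z + 1; absent grid points are treated as E."""
      def f_neighbors(x, y):
          return sum(layer.get((x + dx, y + dy), 0)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
      new = {}
      for (x, y), state in layer.items():
          n = f_neighbors(x, y)
          if state == 1 and n in (2, 3):     # F with two or three F neighbors -> F
              new[(x, y)] = 1
          elif state == 0 and n == 3:        # E with exactly three F neighbors -> F
              new[(x, y)] = 1
          else:                              # otherwise -> E
              new[(x, y)] = 0
      return new

  # A "blinker": three F particles in a row become a perpendicular row at the
  # next z-layer, and flip back again at the layer after that.
  layer0 = {(x, y): 1 if (x, y) in {(1, 0), (1, 1), (1, 2)} else 0
            for x in range(3) for y in range(3)}
  layer1 = next_layer(layer0)
  print(sorted(xy for xy, s in layer1.items() if s == 1))  # [(0, 1), (1, 1), (2, 1)]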

Given the Turing completeness of the Game of Life and a computational theory of mind, the E and F particles can compute whatever is needed for a conscious life. We have, after all, a computational isomorphism between an appropriately arranged E and F particle system in C and any digital computer system in our world.

But because the particles are evanescent, that conscious life—with all its subjective temporal structure—will happen all at once according to the objective time of its world!

If this is right, then on a computational theory of mind, we can have an internal temporally structured conscious life with a time sequence that has nothing to do with objective time.

One can easily get out of this consequence by stipulating that mind-constituting computations must be arranged in an objective temporal direction. But I don’t think a computationalist should add this ad hoc posit. It is better, I think, simply to embrace the conclusion that internal subjective time need not have anything to do with external time.

Monday, October 3, 2022

The Church-Turing Thesis and generalized Molinism

The physical Church-Turing (PCT) thesis says that anything that can be physically computed can be computed by a Turing machine.

If generalized Molinism—the thesis that for any sufficiently precisely described counterfactual situation, there is a fact of the matter about what would happen in that situation—is true, and indeterminism is true, then PCT seems very likely false. For imagine the function f from the natural numbers to {0, 1} such that f(n) is 1 if and only if the coin toss on day n would be heads, were I to live forever and daily toss a fair coin—with whatever other details need to be put in to get the "sufficiently precisely described". But only countably many functions are Turing computable, so with probability one, an infinite sequence of coin tosses would define a Turing non-computable function. Yet f is physically computable: I could just do the experiment.
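
To spell out the probabilistic step: for any one fixed infinite 0-1 sequence, the chance that an endless run of independent fair tosses matches it is at most (1/2)^n for every n, hence zero; and since there are only countably many Turing machines, and so only countably many computable sequences, the chance that the toss sequence is computable is a countable sum of zeros, i.e. zero.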

But wait: I’m going to die, and even if there is an afterlife, it doesn’t seem right to characterize whatever happens in the afterlife as physical computation. So all I can compute is f(n) for n < 30000 or so.

Fair enough. But if we say this, then the PCT becomes trivial. For given finite life-spans of human beings and of any machinery in an expanding universe with increasing entropy, only finitely many values of any given function can be physically computed. And any function defined on a finite set can, of course, be trivially computed by a Turing machine via a lookup-table.
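
In code the triviality is plain (a throwaway sketch; the entries are placeholders, not actual coin-toss values):

  # Any function with a finite domain is Turing-computable by table lookup.
  f_table = {0: 1, 1: 0, 2: 1}   # finitely many entries; placeholder values
  def f(n):
      return f_table[n]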

So, either we trivialize PCT by insisting on the facts of our physical universe that put a finite limit on our computations, or in our notion of “physically computed” we allow for idealizations that make it possible to go on forever. If we do allow for such idealizations, then my argument works: generalized Molinism makes PCT unlikely to be true.

Wednesday, September 22, 2021

Against digital phenomenology

Suppose a digital computer can have phenomenal states in virtue of its computational states. Now, in a digital computer, many possible physical states can realize one computational state. Typically, removing a single atom from a computer will not change the computational state, so both the physical state with the atom and the one without the atom realize the same computational state, and in particular they both have the same precise phenomenal state.

Now suppose a digital computer has a maximally precise phenomenal state M. We can suppose there is an atom we can remove that will not change the precise phenomenal state it is in. And then another. And so on. But then eventually we reach a point where any atom we remove will change the precise phenomenal state. For if we could continue arbitrarily long, eventually our computer would have no atoms, and then surely it wouldn’t have a phenomenal state.

So, we get a sequence of physical states, each differing from the previous by a single atom. For a number of initial states in the sequence, we have the phenomenal state M. But then eventually a single atom difference destroys M, replacing it by some other phenomenal state or by no phenomenal state at all.

The point at which M is destroyed cannot be vague. For while it might be vague whether one is seeing blue (rather than, say, purple) or whether one is having a pain (rather than, say, an itch), whether one has the precise phenomenal state M is not subject to vagueness. So there must be a sharp transition. Prior to the transition, we have M, and after it we don’t have M.

The exact physical point at which the transition happens, however, seems like it will have to be implausibly arbitrary.

This line of argument suggests to me that perhaps functionalists should require phenomenal states to depend on analog computational states, so that an arbitrarily small change in the underlying physical state can still change the computational state and hence the phenomenal state.

Tuesday, May 5, 2020

Timeless flow of consciousness?

We could imagine all the computation a deterministic brain does being done by an incredibly complex system of gears operated by a single turn of the crank to generate all the different intermediate computational results in different gears. Now, imagine a Newtonian world with frictionless, perfectly rigid and perfectly meshing gears, and suppose that the computations are done by that system. Perfectly rigid and perfectly meshing gears compute instantly. So, all the computation of a life can be done with a single turn of a crank. Note that the computational states will then have an explanatory order but need not have a temporal order: all the computations happen simultaneously. So:

  1. On a computational theory of mind, it is possible to live a conscious mental life of many years of subjective flow of consciousness without any real temporal succession.

It follows that:

  2. Either computational theories of mind are false, or the subjective flow of consciousness does not require any real time.

I think there is a potential problem in (1) and (2), namely a potential confusion between real time and external time. For it could be that internal time is just as real as (or more real than!) external time, and is simply constituted by the causal order of interactions within a substance. If so, then if the system of gears were to be a substance (which I think it could only be if it had a unified form), its causal order could actually constitute a temporal order.

This and other recent posts could fit into a neat research project—perhaps a paper or even a dissertation or a monograph—exploring the costs of physicalism in accounting for the temporality of our lives. As usual, I am happy to collaborate if someone wants to do the heavy hitting.

Monday, August 26, 2019

Functionalism and imperfect reliability

Suppose a naturalistic computational theory of mind is true: To have mental states of a given kind is to engage in a particular kind of computation. Now imagine a conscious computer thinking various thoughts and built around standard logic gates. Modify the computer to have an adjustment knob on each of its logic gates. The adjustment knob can be set to any number between 0 and 1, such that if the knob is set to p, then the chance (say, over a clock cycle) that the gate produces the right output is p. Thus, with the knob at 1, the gate always produces the right output; with the knob at 0, it always produces the opposite output; and with the knob at 0.5, it functions like a fair coin. Make all the randomness independent.
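
A minimal Python sketch of such a knob-adjusted gate (my own rendering, using an AND gate and independent randomness each cycle, as stipulated):

  import random

  def knob_and(a, b, p):
      """An AND gate with reliability knob p: with probability p it outputs the
      correct value of a AND b, and with probability 1 - p the opposite value.
      At p = 1 it is a perfect AND gate, at p = 0 it always inverts, and at
      p = 0.5 its output is a fair coin regardless of the inputs."""
      correct = a & b
      return correct if random.random() < p else 1 - correct

  print([knob_and(1, 1, 1.0) for _ in range(8)])   # always 1
  print([knob_and(1, 1, 0.5) for _ in range(8)])   # independent fair coin flips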

Now, let Cp be the resulting computer with all of its adjustment knobs set to p. On our computational theory of mind, C1 is a conscious computer thinking various thoughts. Now, C0.5 is not computing anything: it is simply giving random outputs. This is true even if in fact, by an extremely unlikely chance, these outputs always match the ones that C1 gives. The reason for this is that we cannot really characterize the components of C0.5 as the logic gates that they would need to be for C0.5 to be computing the same functions as C1. Something that has a probability 0.5 of producing a 1 and a probability 0.5 of producing a 0, regardless of inputs, is no more an and-gate than it is a nand-gate, say.

So, on a computational theory of mind, C0.5 is mindless. It’s not computing. Now imagine a sequence of conscious computers Cp as p ranges from 0.5 to 1. Suppose that it so happens that the corresponding “logic gates” of all of them always happen to give the same answer as the logic gates of C1. Now, for p sufficiently close to 1, any plausible computational theory of mind will have to say that Cp is thinking just as C1 is. Granted, Cp’s gates are less reliable than C1’s, but imperfect reliability cannot destroy thought: if it did, nothing physical in a quantum universe would think, and the naturalistic computational theorist of mind surely won’t want to accept that conclusion.

So, for p close to 1, we have thought. For p = 0.5, we do not. It seems very plausible that if p is very close to 0.5, we still have no thought. So, somewhere strictly between p = 0.5 and p = 1, a transition is made from no-thought to thought. It seems implausible to think that there is such a transition, and that is a count against computational theories of mind.

Moreover, because all the gates actually happen to fire in the same way in all the computers in the Cp sequence, and consciousness is, on the computational theory, a function of the content of the computation, it is plausible that for all the values of p < 1 for which Cp has conscious states, Cp has the same conscious states as C1. Either Cp does not count as computing anything interesting enough for consciousness or it counts as imperfectly reliably computing the same thing as C1 is. Thus, the transition from C0.5 to C1 is not like gradually waking up from unconsciousness. For when we gradually wake up from unconsciousness, we have an apparently continuous sequence of more and more intense conscious states. But the intensity of a conscious state is to be accounted for computationally on a computational theory of mind: the intensity is a central aspect of the qualia. Thus, the intensity has to be a function of what is being computed. And if there is only one relevant thing computed by all the Cp that are computing something conscious-making, then what we have as p goes from 0.5 to 1 is a sudden jump from zero intensity to full intensity. This seems implausible.

Tuesday, February 2, 2016

A "Freudian" argument against some theories of consciousness

  1. The kinds of computational complexity found in our conscious thought are also found in our unconscious thought.
  2. So, consciousness does not supervene on the kinds of computational complexity found in an entity.
Of course, (1) is an empirical claim, and it might turn out to be false, though I think it is quite plausible. If so, then we have the backup argument:
  3. The kinds of computational complexity found in our conscious thought possibly are all found in unconscious thought.
  4. So, consciousness does not supervene on the kinds of computational complexity found in an entity.

Monday, February 1, 2016

A secondary brain and computational theories of consciousness

There is an urban myth that the Stegosaurus had a secondary brain to control its rear legs and tail. Even though it's a myth, such a thing could certainly happen. I want to explore this thought experiment:

At the base of my spine, I grow a secondary brain, a large tail, and an accelerometer like the one in my inner ear. Moreover, sensory data from the tail and the accelerometer is routed only to the secondary brain, not to my primary brain, and my primary brain cannot send signals to the tail. The secondary brain eavesdrops on nerve signals between my legs and my primary brain, and based on these signals and accelerometer data it positions the tail in ways that improve my balance when I walk and run. The functioning of this secondary brain is very complex and is such as to suffice for consciousness--say, of tactile and kinesthetic data from the tail and orientation data from the accelerometer--if computational theories of consciousness are correct.

What would happen if this happened? Here is an intuition:

  1. The thought experiment does not guarantee that I would be aware of data from the tail.
But:
  2. If a computational theory of consciousness is correct, the thought experiment guarantees that something would be aware of data from the tail.
Suppose, then, a computational theory of consciousness is correct. Then it would be possible for the thought experiment to happen and for me to be unaware of data from the tail by (1). By (2), in this scenario, something other than me would be aware of data from the tail. What would this something other than me be? It seems that the best hypothesis is that it would be the secondary brain. But by parity, then, the subject of the consciousness of everything else should be the primary brain. But I am and would continue to be the subject of the consciousness of everything else and there is only one subject there. So I am a brain.

Thus, the thought experiment gives me a roundabout argument that:

  3. If a computational theory of consciousness is correct, I am a brain.
Of course (3) is fairly plausible apart from the thought experiment, but it's always nice to have another argument.

So what? Well:

  4. My hands are a part of me.
  5. My hands are not a part of any brain.
  6. So, I am not a brain.
  7. So, computational theories of consciousness are false.

The thought experiment is still interesting even if computational theories of consciousness are false. If we had such secondary brains, would we, or anything else, feel what’s going on in the tail? I think it could metaphysically go either way, depending on non-physical details of the setup.

Thursday, May 15, 2008

Livers, brains, conscious computation and teleology

Let us suppose, for the sake of exploration, the following (false) thesis:

  1. I am conscious because of a part of me—viz., my brain—engaging in certain computations which could also be engaged in by sophisticated computers, thereby rendering these computers conscious.
I will also assume the following, which seems obvious to me, but I think in the end we will find that there is a tension between it and (1):
  2. The brain and liver are both proper parts of me.
Call the kinds of computations that give rise to consciousness "C-computations". So by (1) I am conscious because of my brain's C-computations, and if my laptop were to engage in C-computations, that would give rise to consciousness in it.[note 1]

Now, let us imagine that my liver started doing C-computations within itself, but did not let anything outside it know about this. Normally, livers regulate various biochemical reactions unconsciously (I assume), but let us imagine that my liver became aware of what it was doing through engaging in C-computations on the data available to it. Of course, livers can't willy nilly do that. So, part of my supposition is that the structure of my liver, by a freak of nature or nurture, has shifted in such wise that it became a biochemical computer running C-computations. As long as the liver continued serving my body, with the same external functions, this added sophistication would not (I assume) make it cease to be a part of me.

So now I have two body parts where C-computations go on: my brain and my liver. However, the following is very plausible to me:

  3. Whatever computations my liver were to perform, I would not have direct awareness of what is going on in my liver, except insofar as the liver transmitted information to its outside.
In the hypothesis I am considering, the liver "keeps to itself", not sending data into my nervous system about its new computational activities. So, I submit, I would not be aware of what is going on there. Thus, there would be consciousness, since there would be C-computations, but it would not be consciousness that I have.

But now we have a puzzle. In this setup, I am conscious in virtue of the neural computations that my brain engages in, but am not conscious in virtue of the hepatic computations that my liver engages in. Thus, when my neural computation is quiescent due to sleep, but my hepatic computations continue, I am, simpliciter, not conscious. Why? After all, both the brain and the liver are parts of me. The brain also "keeps to itself": it only lets the rest of the body have the outputs of its computation, but the details of it, it keeps to itself, just as in my thought experiment the liver does. The idea in (1) seemed to have been that computational activity by an organic part of me would give rise to consciousness that is mine. But by the same token the hepatic computational activity should give rise to consciousness that is mine.

So what should someone who accepts (1), (2) and (3) (and the various subsidiary assumptions) say about this? Well, I think the best thing to do would be to abandon (1), denying that I think in virtue of computational activity. A second-best solution would be to qualify (2): yes, the brain and the liver are parts of me, but the brain is a more "intimate" or "central" part of me. But note that one cannot explain this intimacy or centrality in terms of the brain's engaging in computational activity. For the liver could do that, too. Could one explain it in terms of how much coordinating the brain actually does of my bodily functions, both voluntary and not? Maybe, but this has to be taken teleologically. For we can imagine as part of the thought experiment that I become paralyzed, and my brain no longer coordinates my bodily functions, but they are in fact coordinated by medical technology. So the defender of (1) who wishes to qualify (2) in this way may have to embrace teleology to account for the difference between the brain and the liver.

But there may be another solution that doesn't seem to involve teleology. One might say that each bundle of C-computational activities gives rise to a conscious being. Thus, perhaps, in my thought experiment, there are two persons, who contingently have all of their parts in common: (a) the neural person that I am, and (b) the hepatic person that I am not. Contingently, because if the liver were replaced by a prosthesis that maintains basic bodily functions, (b) would die, but (a) would continue to exist, while if the brain were replaced by such a prosthesis, (a) would die, but (b) would continue to exist. This view can be seen as a way of qualifying (2): yes, both the brain and the liver are parts of me, but one is an essential part and the other not. On this view, the claim is that:

  4. I am conscious in virtue of C-computations in an essential part of me.
But actually further qualification is needed. For suppose that instead of my liver becoming conscious, one of my neurons were to suddenly complexify immensely, while retaining the same external functions, so that internally it was engaging in sophisticated C-computations, while externally it worked like every other neuron. It seems that an analogue to (3) would still hold—I wouldn't be aware of what the neuron is aware of (but I admit that my intuition is weaker here). And so (4) is false—for the C-computations in the neuron are computations in the brain, and the brain is an essential part of me, but I am not conscious in virtue of them.

It is difficult to fix up (4). One might require that the computations take place throughout the whole of the essential part. But suppose that for a while my left brain hemisphere fell asleep while the right continued C-computing (dolphins practice such "unihemispheric sleep"). Then the computations would not be taking place throughout the whole of the essential part. But I would, surely, still be aware. I am not sure there is any way of fixing up (4) to avoid the conscious-neuron and unihemispheric sleep counterexamples. If not, and if no other two-persons-in-one-body solution can be articulated, then the teleological solution may be needed.

Final remarks: I think naturalism requires something like (1). If so, then given the plausibility of (2) and (3), naturalists need to accept teleology. But teleology does not fit well into naturalism. So naturalism is in trouble.