Thursday, April 25, 2024

Brain snatching is not a model of life after death

Van Inwagen infamously suggested the possibility that at the moment of death God snatches a core chunk of our brain, transports it to a different place, replaces it with a fake chunk of brain, and rebuilds the body around the transported chunk.

I think that, were van Inwagen’s suggestion correct, it would not be correct to say that we die. But then the view is seriously problematic, given the Christian commitment that people do, in fact, die. Hence van Inwagen’s model is not a model of life after death.

Argument: If in the distant future all of a person’s body except a surviving core chunk were destroyed in an accident, and medical technology had progressed so much that it could regrow the rest of the body from that chunk, I think we would not say that the medical technology resurrected the person, but that it prevented the person’s death.

Objection: The word “death” gets its meaning ostensively from typical cases we label as cases of “death”. In these cases, the heart stops, the parts of the brain observable to us stop having electrical activity, etc. What we mean by “death” is what happens in these cases when this stuff happens. If van Inwagen’s suggestion is correct, then what happens in these cases is the snatching of a core chunk. Hence if van Inwagen’s suggestion is correct, then death is divine snatching of a core chunk of the brain, and we do in fact die.

Responses: First, if death is divine snatching of a core chunk of the brain, then jellyfish and trees don’t die, because they don’t have a brain. I suppose, though, one might say that “death” is understood analogously between jellyfish and humans, and it is human death that is a divine snatching of a core chunk of the brain.

Second, it seems obvious that if God had chosen not to snatch a core chunk of Napoleon’s brain, and allowed Napoleon’s body to rot completely, then Napoleon would be dead. Hence, not even the death of a human is identical to a divine snatching.

Third, I think an important part of the concept of death is that death is something common to humans and other organisms. People, dogs, jellyfish, and trees all die. We should have an account of death common to all of these. The best story I know is that death is the destruction of the body. And the van Inwagen story doesn’t have that. So it’s not a story about death.

Tuesday, March 26, 2024

Brains, bodies and souls

There are four main families of views of who we are:

  1. Bodies (or organisms)

  2. Brains (or at least cerebra)

  3. Body-soul composites

  4. Souls.

For the sake of filling out logical space, and maybe getting some insight, it’s worth thinking a bit about what other options there might be. Here is one that occurred to me:

  5. Brain-soul (or cerebrum-soul) composites.

I suppose the reason this is not much (if at all) talked about is that if one believes in a soul, the body-soul composite or soul-only views seem more natural. Why might one accept a brain-soul composite view? (For simplicity, I won’t worry about the brain-cerebrum distinction.)

Here is one line of thought. Suppose we accept some of the standard arguments for dualism, such as that matter can’t be conscious or that matter cannot think abstract thoughts. This leads us to think the mind cannot be entirely material. But at the same time, there is some reason to think the mind is at least partly material: the brain’s activity sure seems like an integral part of our discursive thought. Thus, the dualist might have reason to say that the mind is a brain-soul composite. At the same time, there is a Cartesian line of thought that we should be identified with the minimal entity hosting our thoughts, namely the mind. Putting all these lines of thought together, we conclude that we are minds, and hence brain-soul composites.

Now I don’t endorse (5). The main ethical arguments against (2) and (4), namely that they don’t do justice to the deep ethical significance of the human body, apply against (5) as well. But if one is not impressed by these arguments, there really is some reason to accept (5).

Furthermore, exploring new options, like the brain-soul composite option, sometimes may give new insights into old options. I am now pretty much convinced that the mind is something like the brain plus soul (or maybe cerebrum plus intellectual part of soul or some other similar combination). Since it is extremely plausible that all of my mind is a part of me, this gives me a new reason to reject (4), the view that I am just a soul. At the same time, I do not think it is necessary to hold that I am just a mind, so I can continue to accept view (3).

The view that the mind is the brain plus soul has an interesting consequence for the interim state, the state of the human being between death and the resurrection of the body. I previously thought that the human being in the interim state is in an unfortunately amputated state, having lost all of the body. But if we see the brain as a part of the mind, the amputated nature of the human being in the interim state is even more vivid: a part of the human mind is missing in the interim state. This gives a better explanation of why Paul was right to insist on the importance of the physical resurrection—we cannot be fully in our mind without at least some of our physical components.

Friday, March 15, 2024

A tweak to the Turing test

The Turing test for machine thought has an interrogator communicate (by typing) with a human and a machine both of which try to convince the interrogator that they are human. The interrogator then guesses which is human. We have good evidence of machine thought, Turing claims, if the machine wins this “imitation game” about as often as the human. (The original formulation has some gender complexity: the human is a woman, and the machine is trying to convince the interrogator that it, too, is a woman. I will ignore this complication.)

Turing thought this test would provide a posteriori evidence that a machine can think. But we have a good a priori argument that a machine can pass the test. Suppose Alice is a typical human, so that in competition with other humans she wins the game about half the time. Suppose that for any finite sequence Sn of n questions and n − 1 answers of reasonable length (i.e., of a length not exceeding how long we allow for the game—say, a couple of hours) ending on a question that could be a transcript of the initial part of an interrogation of Alice, there is a fact of the matter as to what answer Alice would make to the last question. Then there is a possible very large, but finite, machine that has a list of all such possible finite sequences and the answers Alice would make, and that at any point in the interrogation answers just as Alice would. That machine would do as well as Alice at the imitation game, so it would pass the Turing test.

Note that we do not need to know what Alice would say in response to the last question of Sn. The point isn’t that we could build the machine—we obviously couldn’t, just because the memory capacity required would be larger than the size of the universe—but that such a machine is possible. We could suppose that the database in the machine was constructed at random and just happened, by amazing luck, to match Alice’s dispositions.
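
To make the lookup-table machine vivid, here is a tiny sketch in Python; the transcript-to-answer table is, of course, invented and absurdly small compared to the one the argument imagines:

```python
# A toy version of the lookup-table machine described above: its entire
# "intelligence" is a table mapping interrogation transcripts so far to
# answers. The table here is invented and tiny; the machine in the argument
# would need one covering every possible transcript of reasonable length.

lookup_table = {
    ("What is the most important thing in life?",):
        "It is living in such a way that you have no regrets.",
    ("What is the most important thing in life?",
     "It is living in such a way that you have no regrets.",
     "Why do you say that?"):
        "Because a life without regrets is a life well examined.",
}

def imitator(transcript):
    """Answer the last question by pure retrieval: no reasoning, just a
    match of the transcript-so-far against the stored table."""
    return lookup_table.get(tuple(transcript), "I'd rather not say.")

dialogue = ["What is the most important thing in life?"]
print(imitator(dialogue))  # rote answer pulled straight from the table
```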

The machine would not be thinking. Matching the current stage of the interrogation against the database and simply outputting the stored answer for it is not thinking. The point is obvious. Suppose that S1 consists of the question “What is the most important thing in life?” and the database gives the rote answer “It is living in such a way that you have no regrets.” It’s obvious that the machine doesn’t know what it’s saying.

Compare this to a giant chess-playing machine which encodes for each of the 10^40 legal chess positions the optimal next move. That machine doesn’t think about playing chess.

If the Turing test is supposed to be an a posteriori test for the possibility of machine intelligence, I propose a simple tweak: We limit the memory capacity of the machine to be within an order of magnitude of human memory capacity. This avoids cases where the Turing test is passed by rote recitation of responses.

Turing himself imagined that doing well in the imitation game would require less memory capacity than the human brain had, because he thought that only “a very small fraction” of that memory capacity was used for “higher types of thinking”. Specifically, Turing surmised that 10^9 bits of memory would suffice to do well in the game against “a blind man” (presumably because it would save the computer from having to have a lot of data about what the world looks like). So in practice my modification is one that would not decrease Turing’s own confidence in the passability of his test.

Current estimates of the memory capacity of the brain are of the order of 10^15 bits, at the high end of the estimates in Turing’s time (and Turing himself inclined to the low end of the estimates, around 10^10). The model size of GPT-4 has not been released, but it appears to be near but a little below the human brain capacity level. So if something with the model size of GPT-4 were to pass the Turing test, it would also pass the modified Turing test.
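
For what it’s worth, here is the back-of-the-envelope form of that comparison; the parameter count below is a mere placeholder, since the real figure has not been released:

```python
# Back-of-the-envelope form of the memory comparison. The parameter count
# below is a placeholder assumption (the real figure has not been released);
# the point is only to show what the comparison looks like.

turing_estimate_bits = 1e9   # Turing's guess for doing well in the game
brain_capacity_bits = 1e15   # rough modern estimate cited above

assumed_parameters = 1e12    # hypothetical model size, not a released figure
bits_per_parameter = 16      # e.g. 16-bit weights
model_bits = assumed_parameters * bits_per_parameter

print(f"Turing's estimate / brain: {turing_estimate_bits / brain_capacity_bits:.0e}")
print(f"assumed model / brain:     {model_bits / brain_capacity_bits:.3g}")

# The modified test caps machine memory at roughly an order of magnitude of
# human brain capacity, so anything at or below that cap is still eligible.
```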

Technical comment: The above account assumed there was a fact about what answer Alice would make in a dialogue that started with Sn. There are various technical issues with regard to this. Given Molinism or determinism, these technical issues can presumably be overcome (we may need to fix the exact conditions in which Alice is supposed to be undergoing the interrogation). If (as I think) neither Molinism nor determinism is true, things become more complicated. But there presumably are statistical regularities as to what Alice is likely to answer to Sn, and the machine’s database could simply encode an answer that was chosen by the machine’s builders at random in accordance with Alice’s statistical propensities.

Monday, September 27, 2021

Distancing oneself from one's brain

It can be quite useful for someone suffering from a variety of brain conditions, such as obsessive compulsive disorder, to deliberately distance themselves from their brain’s unfortunate doings, by saying to themselves things like: “That’s not me, just my brain.”

If physicalism is true, then brains are either identical with us or at least are the core of who we are. But “That’s not me, just me” is a contradiction while “That’s not me, just the core of my being” isn’t much of a distancing. A similar issue arises in second and third person contexts: if physicalism is true, one must admit brain problems to be grounded in that which is at the core of the other’s being.

The dualist, on the other hand, can pull off the distancing much more easily: “That’s not my soul, just my brain” makes perfect sense. An impairment in the brain is just an impairment of a body part, albeit one of the most important ones.

Of course, that something is a helpful way of thinking does not prove that it’s true. But it is an insight from the beginnings of Western philosophy that truth is generally better for us than falsehood, and so the fact that something is a helpful way of thinking is some evidence that it is true. We may, thus, have some evidence for dualism here.

Thursday, October 15, 2020

Synchronization and the unity of consciousness

The problem of the unity of consciousness for materialists is the question of what makes activity in different areas of the physical mind come together into a single phenomenally unified state rather than multiple disconnected phenomenal states. If my auditory center is active in the perception of a middle C and my visual center is active in the perception of red, what makes it be the case that there is a single entity that both hears a middle C and sees red?

We can imagine a solution to this problem in a computer. Let’s say that the computer has a representation of red in one part (of the right sort for consciousness) and a representation of middle C in another part. We could unify the two by means of a periodic synchronizing clock signal sent to all the parts of the computer. And we could then say that what it is for the computer to perceive red and middle C at the same time is for an electrical signal originating in the same tick of the clock to reach a part that is representing red (in the way needed for consciousness) and to reach a part that is representing middle C.

On this view, there is no separate consciousness of red (say), because the conscious state is constituted not just by the representation of red (say) in the computer’s “visual system”, but by everything that is reached by the signals emanating from the clock tick. And that includes the representation of middle C in the “auditory system”.

The unification of consciousness, then, would be the product of the synchronization system, which of course could be more complex than just a clock signal.

This line of thought shows that in principle the problem of the unity of consciousness is soluble for materialists if the problem of consciousness is (which I doubt). This will, of course, only be a Pyrrhic victory if it turns out that no similar pervasive synchronization system is found in the brain. The neuroscience literature talks of synchronization in the brain. Whether that synchronization is sufficient for solving the unity problem may be an empirical question.

The above line of thought also strongly suggests that if materialism is true, then our internal phenomenal timeline is not the same as objective physical time, but rather is constructed out of the synchronization processes. Phenomenal simultaneity does not require that the representation of red and the representation of middle C happen at the same physical time. A part further from the clock will receive the synchronizing signal later than a part closer to the clock, and so the synchronization process may make two events that are not simultaneous in physical time be simultaneous in computer time. I suspect that a similar divide between mental time and physical time exists even if dualism is (as I think) true, but for other reasons.
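
For concreteness, here is a toy simulation of the clock-based binding story; the parts, delays, and numbers are all invented for illustration:

```python
# A toy illustration of the clock-based binding story: representations count
# as phenomenally together when they are stamped with the same clock tick,
# even though the tick physically arrives at each part at a different time
# because of different propagation delays. All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Part:
    name: str
    representation: str
    delay: float  # propagation delay from the clock to this part (seconds)

parts = [
    Part("visual system", "red", delay=0.002),
    Part("auditory system", "middle C", delay=0.007),
]

def unified_state(tick_index, tick_time, parts):
    """Collect everything reached by the signal from a single clock tick."""
    contents = []
    for p in parts:
        arrival = tick_time + p.delay  # physical arrival time differs per part
        contents.append((p.name, p.representation, arrival))
    return {"tick": tick_index, "contents": contents}

state = unified_state(tick_index=42, tick_time=1.000, parts=parts)
print(state)
# Both representations belong to tick 42 ("computer time") even though their
# physical arrival times (1.002 vs 1.007) are not simultaneous.
```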

Wednesday, April 8, 2020

What if my kidney got smart?

Suppose the cells in my left kidney mutated, and the kidney grew neurons and started engaging in the same sorts of computations that my brain consciously does. (The idea is not as outlandish as it may seem. We will no doubt one day be able to make replacement kidneys in the lab. And if so, why not replacement kidneys with neurons?)

Question: Would I come to have a new mode of kidney-based consciousness on top of my brain-based consciousness?

I don’t know the answer to this as a genuine hypothetical question. But I have a strong intuition that there is no metaphysical guarantee that I would have a new mode of kidney-based consciousness. The mere fact that my kidney functions computationally like a brain doesn’t guarantee that I think with it.

It’s an interesting question which views about persons and mind can agree that there is no guarantee of my consciousness through kidneys.

Brainists, who think that we are brains, will happily agree: on their view, I think with my brain because I am my brain. The kidney would, perhaps, think, but it wouldn’t be me thinking, because the kidney isn’t even a part of me.

Dualists of all sorts can agree: for there is no guarantee that kidney-based computation gives rise to consciousness, since the connection between neural function and mental function on dualism can be contingent.

Some non-dualist animalists, however, will have a problem. For non-dualist animalists identify us with the animal, and many of them will presumably want to say that the animal thinks provided it has an organ that engages in certain kinds of neural behavior. But then it seems that I would have to be thinking through the kidney if it were to engage in this neural behavior.

But it’s not quite so simple. For it could be that the neural behavior that defines thought has a normative component. Thus, to think may require the neurons to appropriately engage in certain behaviors. But neurons in the kidney would not have proper function.

Thus, perhaps, the no-guarantee constraint only rules out one of the views I’ve considered: non-normative non-dualist animalism.

Wednesday, April 1, 2020

If we're not brains, computers can't think

The following argument has occurred to me:

  1. We are not brains.

  2. If we are not brains, our brains do not think.

  3. If our brains do not think, then computers cannot think.

  4. So, computers cannot think.

I don’t have anything new to say about (1) right now: I weigh a lot more than three pounds; my arms are parts of me; I have seen people whose brains I haven’t seen.

Regarding (2), if our brains think and yet we are not brains then we have the too many thinkers problem. Moreover, if brains and humans think, then that epistemically undercuts (1), because then I can’t tell if I’m a brain or a human being.

I want to focus on (3). The best story about how computers could think is a functionalist story on which thinking is the operation of a complex system of functional relationships involving inputs, outputs, and interconnections. But brains are such complex systems. So, on the best story about how computers could think, brains think, too.

Is there some non-arbitrary way to extend the functionalist story to avoid the conclusion that brains think? Here are some options:

  5. Organismic philosophy of mind: Thought is the operation of an organism with the right functional characteristics.

  6. Restrictive ontology: Only existing functional systems think; brains do not exist but organisms do.

  7. Maximalism: Thought is to be attributed to the largest entity containing the relevant functional system.

  8. Inputs and outputs: The functional system that thinks must contain its input and output facilities.

Unfortunately, none of these are a good way to save the idea that computers could think.

Computers aren’t organisms, so (5) does not help.

The only restrictive ontology on the table where organisms exist but brains do not is one on which the only complex objects are organisms, so (6) in practice goes back to (5).

Now consider maximalism. For maximalism to work and not reduce down to the restrictive ontology solution, these two things have to be the case:

  a. Brains exist.

  b. Humans are not a part of a greater whole.

Option (b) requires a restrictive ontology which denies the existence of nations, ecosystems, etc. Our best restrictive ontologies either deny the existence of brains or relegate them to a subsidiary status, as non-substantial parts of substances. The latter kind of ontology is going to be very restrictive about substances. On such a restrictive ontology, I doubt computers will count as substances. But they also aren’t going to be non-substantial parts of substances, so they aren’t going to exist at all.

Finally, consider the inputs and outputs option. But brains have inputs and outputs. It seems mere prejudice to insist that for thought the inputs and outputs have to “reach further into the world” than those of a brain, which reach only into the rest of the body. But if we do accept that inputs and outputs must reach further, then we have two problems. The first is that, even granting that we are not brains, we could certainly continue to think after the loss of all our senses and muscles. The second is that if our inputs and outputs must reach further into the world, then a hearing aid is a part of a person, which appears false (though recently Hilary Yancey has done a great job defending the possibility of prostheses being body parts in her dissertation here at Baylor).

Monday, May 7, 2018

Heaven and materialism: The return of the swollen head problem

Plausibly, there is a maximum information density for human brains. This means that if internal mental states supervene on the information content of brains and there is infinite eternal life, then either:

  1. Our head grows without bound to accommodate a larger and larger brain, or

  2. Our brain remains bounded in size and either (a) eventually we settle down to a single unchanging internal mental state (including experiential state) which we maintain for eternity, or (b) we eternally move between a finite number of different internal mental states (including experiential states).

For if a brain remains bounded in size, there are only finitely many information states it can have, because of the maximum information density. Neither of options 2a and 2b is satisfactory, because mental (intellectual, emotional and volitive) growth is important to human flourishing, and a single unchanging internal mental state or eternal repetition does not fit with human flourishing.
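
To make the counting point vivid, here is a toy illustration; the deterministic update rule is arbitrary and stands in for whatever the dynamics might be, since only the finiteness of the state space does any work:

```python
# A toy illustration of the counting point: a system whose state fits in a
# bounded number of bits has only finitely many states, so over an unending
# future it must either settle into a single state (option 2a) or cycle
# through a finite set of states (option 2b). The update rule below is
# arbitrary; only the finiteness of the state space matters.

N_BITS = 8                    # stand-in for a bounded "brain"
N_STATES = 2 ** N_BITS        # at most 256 distinct internal states

def update(state):
    return (5 * state + 17) % N_STATES  # some fixed deterministic dynamics

seen = {}
state, step = 0, 0
while state not in seen:      # by the pigeonhole principle this loop ends
    seen[state] = step        # within at most N_STATES iterations
    state = update(state)
    step += 1

print(f"state space size: {N_STATES}")
print(f"first repeat at step {step}; cycle length {step - seen[state]}")
```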

Note, too, that on both options 2a and 2b, a human being in heaven will eventually be ignorant of how long she’s been there. On option 2b, she will eventually also be ignorant of whether it is the first time, the second time, or the billionth that she is experiencing a particular internal mental state. (I am distinguishing “internal mental states” from broad mental states that may have externalist semantics.) This, too, does not fit with the image of eternal flourishing.

This is, of course, a serious problem for the Christian materialist. I assume they won’t want to embrace the growing head option 1. Probably the best bet will be to say that in the afterlife, our physics and biology changes in such a way as to remove the information density limits from the brain. It is not clear, however, that we would still count as human beings after such a radical change in how our brains function.

The above is also a problem for any materialist or supervenientist who becomes convinced—as I think we all should be—that our full flourishing requires eternal life. For the flourishing of an entity cannot involve something that is contrary to the nature of a being of that sort. But if 2a and 2b are not compatible with our flourishing, and if 1 is contrary to our nature, then our flourishing would seem to involve something contrary to our human nature.

This is a variant of the argument here, but focused on mental states rather than on memory.

Monday, December 14, 2015

Thinking big numbers

If physicalism is true, then the nature of human beings is probably essentially tied to the nature of our brains, which in turn is essentially tied to the laws of nature. So a human being couldn't have a radically transformed brain. But there are limits on the information storage of our brains. This makes plausible the first premise of the following valid argument:

  1. If physicalism is true, there are only finitely many integers that it is metaphysically possible for humans to think about (without externalist crutches).
  2. It is metaphysically possible for humans to think about any integer (without externalist crutches).
  3. So, physicalism is not true.
Of course, the controversial premise is (2), and I wouldn't worry too much about the argument if I were a physicalist. But, still, there is some plausibility to (2), so the argument has some little force.
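
For concreteness, here is a rough version of the counting behind premise (1), using a rough brain-capacity figure of about 10^15 bits (the estimate mentioned in the Turing test post) as a stand-in:

```python
import math

# A rough version of the counting behind premise (1): if a human's internal
# state is captured by at most CAPACITY_BITS bits, there are at most
# 2 ** CAPACITY_BITS distinct internal states, and hence at most that many
# integers one could think about without externalist crutches. The capacity
# figure is only a rough stand-in.

CAPACITY_BITS = 10 ** 15

# The bound is far too large to print in full, but we can say how big it is.
decimal_digits = math.floor(CAPACITY_BITS * math.log10(2)) + 1
print(f"at most 2**{CAPACITY_BITS} thinkable integers "
      f"(a number with about {decimal_digits:,} decimal digits)")

# Enormous, but finite: on this picture every integer beyond the bound would
# be metaphysically unthinkable for humans, which is what premise (2) denies.
```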

Wednesday, April 29, 2015

Brains and animalism

Animalists hold that we are animals. It is widely accepted by animalists that if a brain were removed from a body, and the body kept alive, the person would stay with the bulk of the body rather than go with the brain.

I wonder how much of the intuition is based on irrelevant questions of physical bulk. Imagine aliens who are giant brains with tiny support organs—lungs, heart, legs, etc.—dwarfed by the brain. I think we might have the intuition that if the brain were disconnected from the support organs, the animal would go with the brain. In the case of beings that dwarf their brains, it feels natural to talk of a certain operation as a brain transplant. But in the case of beings that are almost all brain, the analogous operation would probably be referred to as a support-system transplant. Yet surely we should say exactly the same thing metaphysically about us and the aliens, assuming that the functional roles of the brains and the other organs are sufficiently similar.

This isn't a positive argument that we'd go with our brains. It's just an argument to defuse the intuition that we wouldn't.

What about cerebra? Here's a widely shared intuition. If the cerebrum is removed from the skull of an animal and placed in a life-support vat, the animal stays with the rest of the body.

But now suppose that we granted that the animal goes with the whole brain. Let's say, then, that I am an animal and sadly become a brain in a life-support vat, losing the rest of my body. Suppose that next my brain is cut and the upper and lower brains are placed in separate life-support vats. It does not seem particularly plausible to think that the animal goes with the lower brain. (Maybe the animal dies, or maybe it goes with the upper brain.) So once we've granted that the animal would go with the brain, the primacy of the lower brain for animal identity seems somewhat undermined.

Maybe, though, one could accept both (a) the common intuition that if the cerebrum were removed the human animal would go with the rest of its body, and (b) my intuition that if the human animal were first reduced to a brain, and the brain then cut into the cerebrum and lower brain, the animal would go with the cerebrum. There is no logical contradiction between these two intuitions. Compare this. I have a loaf of bread. Imagine the loaf marked off into five equally sized segments A, B, C, D and E. If I first cut off the 2/5 of the loaf marked D and E, it's plausible that the loaf shrinks to the ABC part, and DE is a new thing. And then if I cut off C, the same loaf shrinks once again, to AB. On the other hand if I start off by cutting off the AB chunk, the loaf shrinks to CDE. So the order of cutting determines whether the original loaf ends up being identical to AB or to something else. (We can also make a similar example using some plant or fungus if we prefer a living example.) Likewise, the order of cutting could determine whether the animal ends up being just a cerebrum (first remove brain, then cut brain into upper and lower parts) or whether it ends up being a cerebrumless body.

We might have a rough general principle: The animal when cut in two tends to go with the functionally more important part. Thus, perhaps, when the human animal is cut into a brain and a rest-of-body, it goes with the brain, as the brain is functionally more important in the brainier animals. When that brain is subsequently cut into upper and lower brains, the brainy animal goes with the upper brain, as that's functionally more important given its distinctively brainy methods for survival. On the other hand, if the human animal is cut into a cerebrum and a cerebrumless-rest-of-body, perhaps (I am actually far from sure about this) the animal goes with the cerebrumless-rest-of-body, because although the upper brain is more important functionally than a lower brain, the lower brain plus the rest of the body are collectively more important than the upper brain by itself. So the order of surgery matters to identity.

Friday, March 21, 2014

The human animal and the cerebrum

Suppose your cerebrum was removed from your skull and placed in a vat in such a way that its neural functioning continued. So then where are you: Are you in the vat, or where the cerebrum-less body with heartbeat and breathing is?

Most people say you're in the vat. So persons go with their cerebra. But the animal, it seems, stays behind—the cerebrum-less body is the same animal as before. So, persons aren't animals, goes the argument.

I think the animal goes with the cerebrum. Here's a heuristic.

  • Typically, if an organism of kind K is divided into two parts A and B that retain much of their function, and the flourishing of an organism of kind K is to a significantly greater degree constituted by the functioning of A than that of B, then the organism survives as A rather than as B.
Uncontroversial case: If you divide me into a little toe and the rest of me, then since the little toe's contribution to my flourishing is quite insignificant compared to the rest, I survive as the rest. More controversially, the flourishing of the human animal is to a significantly greater degree constituted by the functioning of the cerebrum than of the cerebrum-less body, so we have reason to think the human animal goes with the cerebrum.

Another related heuristic:

  • Typically, if an organism of kind K is divided into two parts A and B that retain much of their function, and B's functioning is significantly more teleologically directed to the support of A than the other way around, then the organism survives as A rather than as B.

My heart exists largely for the sake of the rest of my body, while it is false to say that the rest of my body exists largely for the sake of my heart. So if I am divided into a heart and the rest of me, as long as the rest of me continues to function (say, due to a mechanical pump circulating blood), I go with the rest of me, not the heart. But while the cerebrum does work for the survival of the rest of my body, it is much more the case that the rest of the body works for the survival of the cerebrum.

There may also be a control heuristic, but I don't know how to formulate it.

Friday, December 5, 2008

Eliminativism about minds

Here is a hypothesis: A mature neuro-science will, as the eliminativists claim, have no room for concepts like "belief", "desire", etc. Suppose this hypothesis proves true. What should we then do? Obviously, it would be absurd to deny that we have beliefs or desires. Instead, we should deny that belief, desire, etc. occur in the neural system, which is what the neuro-science studies, and hold that they occur elsewhere. Since there is no other plausible candidate for the mind in the physical world besides the neural system, we should conclude that belief, desire, etc. occur outside the physical world, i.e., that some form of dualism is true. Moreover, I suspect that a neuro-science that would have no room for beliefs and desires would also have no room for the idea that there are states of the brain on which beliefs and desires supervene. Thus, it would lead us to a non-supervenient dualism.

But this is just an exercise in hypotheticality, since we are in no position to make such specific predictions about future science.