Showing posts with label brain. Show all posts

Friday, March 18, 2016

Death, materialism and resurrection

Consider two Christian materialist theories about how life after death works:

  1. Snatching: At the last moment of life, God snatches a central part of the person (say, the cerebrum), transports it away to heaven, purgatory or hell, keeps it alive there, and replaces it in the corpse with a replica.
  2. Fission: At the last moment of life, the cells in the body or a central part of it get the power to split into two cells. One of these cells is a dead cell found in the corpse and the other is in heaven, purgatory or hell.

Here's a problem both Snatching and Fission face: there is no death on these stories, since death requires the cessation of biological life. But on both, biological life is continuously maintained. These are stories about life after teleportation rather than about life after death. But we do in fact die: Scripture is completely clear on this.

Maybe one could modify my formulations of Snatching or Fission to solve this problem. Rather than the snatching or fission happening at the last moment of life, it happens at the first moment of death. Thus, God snatches or fissions a central part of the person after the person is already dead, and then resuscitates the part in heaven, purgatory or hell. The problem with this is that Snatching and Fission are meant to preserve biological continuity. But while typically after death cells remain with some semblance of biological life, this need not always happen. Suppose that someone dies by having a laser blast their brain. That person dies precisely when those cells central to biological life have been destroyed. But it is precisely those cells that would need to be snatched or fissioned after death.

(I used to think, by the way, that the interim state--the state between death and the resurrection of the body--was also an objection to Snatching and Fission. But it's not. For the materialist can say that after Snatching or Fission, the person exists as a mere brain in a vat in heaven, purgatory or hell. And then only at the second coming does that brain regain the rest of the body. The materialist can even say that the brain plays the functional role of the soul here.)

Monday, February 1, 2016

A secondary brain and computational theories of consciousness

There is an urban myth that the Stegosaurus had a secondary brain to control its rear legs and tail. Even though it's a myth, such a thing could certainly happen. I want to explore this thought experiment:

At the base of my spine, I grow a secondary brain, a large tail, and an accelerometer like the one in my inner ear. Moreover, sensory data from the tail and the accelerometer is routed only to the secondary brain, not to my primary brain, and my primary brain cannot send signals to the tail. The secondary brain eavesdrops on nerve signals between my legs and my primary brain, and based on these signals and accelerometer data it positions the tail in ways that improve my balance when I walk and run. The functioning of this secondary brain is very complex and such as to suffice for consciousness--say, of tactile and kinesthetic data from the tail and orientation data from the accelerometer--if computational theories of consciousness are correct.

What would happen if this happened? Here is an intuition:

  1. The thought experiment does not guarantee that I would be aware of data from the tail.
But:
  2. If a computational theory of consciousness is correct, the thought experiment guarantees that something would be aware of data from the tail.
Suppose, then, a computational theory of consciousness is correct. Then it would be possible for the thought experiment to happen and for me to be unaware of data from the tail by (1). By (2), in this scenario, something other than me would be aware of data from the tail. What would this something other than me be? It seems that the best hypothesis is that it would be the secondary brain. But by parity, then, the subject of the consciousness of everything else should be the primary brain. But I am and would continue to be the subject of the consciousness of everything else, and there is only one subject there. So I am a brain.

Thus, the thought experiment gives me a roundabout argument that:

  3. If a computational theory of consciousness is correct, I am a brain.
Of course (3) is fairly plausible apart from the thought experiment, but it's always nice to have another argument.

So what? Well:

  4. My hands are a part of me.
  5. My hands are not a part of any brain.
  6. So, I am not a brain.
  7. So, computational theories of consciousness are false.

The thought experiment is still interesting even if computational theories of consciousness are false. If we had such secondary brains, would we, or anything else, feel what's going on in the tail? I think it could metaphysically go either way, depending on non-physical details of the setup.

Monday, September 1, 2014

Extracting raw data from the Mindflex toy EEG

Last night I posted a full Instructable on how to extract raw data from the Mindflex toy EEG via Bluetooth (headsets can be bought for $15 on ebay and you need one or two $10 HC-06 Bluetooth modules), and indeed how to make it work pretty much like the more expensive Mindwave Mobile (complete software compatibility as far as I can tell if you initialize the toy with an app I link to in the instructions).

You need to make four solder connections, and jump through a few hoops to set up the Bluetooth module.

Good educational project for those with kids.

Follow the instructions at your own risk.  Make sure the headset isn't wired to anything connected to mains while it's on.
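For anyone writing their own reader instead of using existing software: the chip speaks NeuroSky's ThinkGear serial protocol, in which each packet is two 0xAA sync bytes, a payload length, the payload, and a checksum equal to the bitwise inverse of the low byte of the payload sum. Here's a minimal validation sketch in Java (the packet bytes in `main` are made up for illustration, not captured from a real headset):

```java
// Sketch of validating one NeuroSky ThinkGear packet:
// [0xAA][0xAA][length][payload bytes...][checksum]
// where checksum = (~(sum of payload bytes)) & 0xFF.
public class ThinkGearPacket {
    // Returns the payload if the bytes form a valid packet, else null.
    public static int[] parse(int[] bytes) {
        if (bytes.length < 4 || bytes[0] != 0xAA || bytes[1] != 0xAA)
            return null;                       // missing sync bytes
        int len = bytes[2];
        if (len > 169 || bytes.length < len + 4)
            return null;                       // lengths of 170+ are invalid
        int sum = 0;
        int[] payload = new int[len];
        for (int i = 0; i < len; i++) {
            payload[i] = bytes[3 + i];
            sum += payload[i];
        }
        int checksum = (~sum) & 0xFF;          // inverted low byte of the sum
        return checksum == bytes[3 + len] ? payload : null;
    }

    public static void main(String[] args) {
        // A made-up minimal packet: payload {0x02, 0x20} (poor-signal row).
        int sum = 0x02 + 0x20;
        int[] pkt = {0xAA, 0xAA, 0x02, 0x02, 0x20, (~sum) & 0xFF};
        System.out.println(parse(pkt) != null ? "valid packet" : "bad packet");
    }
}
```

In practice you would scan the byte stream for the 0xAA 0xAA sync pair, then hand the following bytes to something like this; payload rows then encode signal quality, the band powers, and (in raw mode) the raw potentials.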

Thursday, July 24, 2014

Another EEG

Josh Rasmussen encouraged me to run the toy EEG while I was writing a book chapter, presumably as a way to get me to make more progress on our joint book arguing for a necessary being.  So, here it is.

Looks to me slightly intermediate between the graphs for blogging and for feeding in the earlier EEGs.

The topic of the chapter is the same as that of the post I was doing in the earlier EEG.

In case anybody is curious, here's what raw data (not from the above, just from some software testing I was doing) looks like.


Amusingly, one can also touch the electrode to one's chest, put one's fingers in the ear clips, and get an ECG.  I think I got my $21 worth of fun. :-)

EEG of me blogging vs. feeding/cleaning


I recently acquired a MindFlex EEG-based toy (on ebay, for a total of $21 with shipping), which is based on a NeuroSky ThinkGear ASIC chip.  As a toy, it's not that great, but if you solder wires to the transmit and ground pins, and hook it up to a TTL-level serial port, you can read the data off it.  By default the data comes processed into a bunch of frequency domains (presumably by running an FFT on the raw potentials), though if you attach your serial port to the receive pin (I ended up shorting that pin to another and had to cut through the solder blob carefully afterwards; I'm not good at soldering), you can switch to raw data mode, though my bridge hardware isn't fast enough for that.  As a safety measure, it's a good idea to either have the computer be a laptop not plugged into mains or else to bridge from the ASIC to the laptop wirelessly.

I hooked up the MindFlex to the BrainLink all-purpose-Bluetooth-interconnect device (coupon code SS72142 gives a 30% discount on everything at SurplusShed this week).  I had some initial technical difficulties.  For some reason, the BrainLink keeps on resetting when it gets a lot of incoming serial data, and so I had to write some custom communication code for it that un-resets it when data comes in, instead of just using the standard BrainLink Java library.  And I wrote some quick visualization code in Java (it's a mess, as I've never written desktop Java GUI code before).  If you want to play with the messy code, it's here (it's called brainflex).  (You probably don't have a BrainLink, but any serial-to-Bluetooth adapter should work, as long as you create a new implementation of DataLink, which should be quite easy.)

Anyway, while doing the preceding post, I had the BrainLink on, and recorded the processed EEG.  (The Attention and Meditation data is computed by the chip from the Fourier transform data.)  Here it is:

As control data, I then switched to feeding the toddler and cleaning her and cleaning up after her:

There is discussion online about whether the MindFlex toy actually works, or if it's just an illusion-of-control thing, though the NeuroSky folk have research data on their chip that suggests it does do something.

Note that all the regular frequency domains (not the Attention and Meditation, though) are normalized to sum to a constant total.
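So comparisons across sessions are really comparisons of each band's share of the total. If you're post-processing logged data yourself, the conversion is just division by the sum; a minimal sketch in Java (the band values below are invented, not from my recordings):

```java
// Sketch: convert the chip's eight reported band powers (delta, theta,
// low/high alpha, low/high beta, low/mid gamma) into fractions of the
// total, so different recording sessions can be compared.
public class BandShares {
    public static double[] shares(long[] bands) {
        long total = 0;
        for (long b : bands) total += b;
        double[] out = new double[bands.length];
        for (int i = 0; i < bands.length; i++)
            out[i] = (double) bands[i] / total;
        return out;
    }

    public static void main(String[] args) {
        // Invented sample values for one reading.
        long[] eeg = {300000, 80000, 40000, 25000, 20000, 15000, 10000, 10000};
        double[] s = shares(eeg);
        System.out.printf("delta share = %.3f%n", s[0]);
    }
}
```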

Beta and alpha seem much more active when blogging than when feeding/cleaning.  On the other hand, the chip's computed Attention value seems rather higher for the feeding/cleaning, which fits with how I felt: the blogging seemed fairly automatic, while the feeding/cleaning involved more conscious attention.

A confound for the experiment is that as I was blogging, I was on the computer, so there would have been electromagnetic interference from that.  I did make sure that the computer was not plugged into the mains, which may have reduced interference.  And another confound was that I knew I was recording myself.  This isn't real science!

Tuesday, October 22, 2013

Brains, souls and consciousness

  1. (Premise) I am the only entity that has all the conscious states I presently have.
  2. (Premise) I am breathing.
  3. (Premise) My soul isn't breathing.
  4. (Premise) My brain isn't breathing.
  5. I am neither my soul nor my brain. (2-4)
  6. Neither my soul nor my brain has all the conscious states I presently have. (1,5)
  7. (Premise) If my soul or my brain is conscious, it has all the conscious states I presently have.
  8. So, neither my soul nor my brain is conscious.
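The step from (2)-(4) to (5) is just the indiscernibility of identicals, and for the curious it can be checked mechanically. A minimal Lean sketch (the names `Entity`, `me`, `soul`, `brain` and `breathes` are placeholders of my own, not part of the argument):

```lean
-- Step (5): from "I breathe" and "my soul/brain doesn't breathe",
-- non-identity follows by the indiscernibility of identicals.
variable {Entity : Type} (me soul brain : Entity) (breathes : Entity → Prop)

theorem step5 (h2 : breathes me) (h3 : ¬ breathes soul)
    (h4 : ¬ breathes brain) : me ≠ soul ∧ me ≠ brain :=
  ⟨fun h => h3 (h ▸ h2), fun h => h4 (h ▸ h2)⟩
```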

If my soul or my brain grounds my consciousness, it does not ground my consciousness by being conscious. It grounds my consciousness by having non-conscious states that ground my consciousness. These non-conscious states will then be more fundamental than my conscious states.

In particular, substance dualists should agree with naturalists that conscious states are non-fundamental. Only non-substance dualists, like hylomorphic dualists and property dualists, have a hope of saying that conscious states are fundamental. And of course a similar argument can be run for other mental states besides the conscious ones.

In practice, some substance dualists will say that I am my soul. If so, then I don't breathe (at most I cause breathing), I don't weigh anything, and so on.

Friday, March 20, 2009

Identity theory of mind

Here is a quick, and no doubt well-known, argument that mental states are not token-token identical with brain states. The argument makes assumptions I reject, but they are assumptions that, I think, will be plausible to a materialist. The idea is this. It is possible to transfer my mind into a computer, while preserving at least some of my memories, and with my brain being totally destroyed in the process (I reject this myself, but I think the materialist should accept Strong AI, and her best bet for a theory of personal identity is some version of the memory theory, which should allow this). Were such a transfer to be made, then I would have some of the numerically same token mental states (e.g., a memory of a particular embarrassing episode) as I do now. But if these mental state tokens are now identical with brain state tokens, then it follows that it is possible that some of my brain states can survive the destruction of my brain, without any miracle, just by means of technological manipulation. But no brain state token of the sort that is correlated with memories[note 1] can survive the destruction of the brain, perhaps barring a miracle.[note 2] Hence, the mental states are not identical with brain states.

Of course, one might try a four-dimensionalist solution, supposing some temporally extended entity that coincides with the brain state prior to the destruction of the brain and with the electronic state after the destruction of the brain. But that won't save identity theory—it will only yield the claim that the mental state is spatiotemporally coincident with a brain state, or constituted by the brain state, vel caetera.

Maybe, though, what the identity theorist needs to do is to disambiguate the notion of a "brain state". In one sense, a brain state is the state of the brain's being a certain way. Call that an "intrinsic brain state" (note: it may be somewhat relational—I am not worried about that issue). If identity theory is understood in this way, the above argument against the identity theory works (assuming materialism, etc.) But a different sense of "brain state" is: a state of the world which, right now, as a matter of fact obtains in virtue of how a brain is.

Thus, consider the following state S: Alex's brain being gray, or there being a war in France. State S now obtains in virtue of how my brain is. But state S obtained in 1940 in the absence of my brain, since I did not exist then; instead, it obtained in virtue of there being a war in France. The state S is now a brain state, though earlier it wasn't. Call such a thing a "jumpy brain state": it can jump in and out of heads.

The identity theorist who accepts the possibility of mind transfer had better not claim that mental state tokens are identical with intrinsic brain state tokens but rather must hold that they are identical with jumpy brain state tokens. Put that way, the identity theory is much tamer than one might have thought. In fact, it is not clear that it says anything beyond the claim that the present truthmakers for mental state attributions are brain states.

Also, consider this. Presumably, for any jumpy brain state S, there is an intrinsic brain state S*, which right now coincides with S, and which is such that S obtains in virtue of S*. Thus, corresponding to the jumpy state Alex's brain being gray, or there being a war in France, there is the intrinsic brain state Alex's brain being gray. There is now a sense in which our identity theory is not faithful to its founding intuition that mental states are the states that neuroscience studies. For neuroscience certainly does not study jumpy brain states (neuroscience as such is not about wars in France, or information on hard drives). Rather, neuroscience studies intrinsic brain states. The identity theorist's mental state is identical with some jumpy brain state S, but it is S* that neuroscience studies.

And so there is a sense in which the identity theory is a cheat, unless it is supplemented with a non-psychological theory of personal identity that bans mind transfer between brains and computers. But the latter supplementation will, I think, also ban AI, since if computers can be intelligent, minds can be transferred between computers (think of a networked computation—the data can move around the network freely), and it would be weird if they could be transferred between computers but not from a brain to an appropriately programmed computer. Moreover, once one bans AI, one has made a claim that intelligence requires a particular kind of physical substrate. And then it becomes difficult to justify the intuition that aliens with completely different biochemical constitution (even an electronic one—cf. the aliens in Retief's War) could have minds.

Thursday, May 15, 2008

Livers, brains, conscious computation and teleology

Let us suppose, for the sake of exploration the following (false) thesis:

  1. I am conscious because of a part of me—viz., my brain—engaging in certain computations which could also be engaged in by sophisticated computers, thereby rendering these computers conscious.
I will also assume the following, which seems obvious to me, but I think in the end we will find that there is a tension between it and (1):
  2. The brain and liver are both proper parts of me.
Call the kinds of computations that give rise to consciousness "C-computations". So by (1) I am conscious because of my brain's C-computations, and if my laptop were to engage in C-computations, that would give rise to consciousness in it.[note 1]

Now, let us imagine that my liver started doing C-computations within itself, but did not let anything outside it know about this. Normally, livers regulate various biochemical reactions unconsciously (I assume), but let us imagine that my liver became aware of what it was doing through engaging in C-computations on the data available to it. Of course, livers can't willy nilly do that. So, part of my supposition is that the structure of my liver, by a freak of nature or nurture, has shifted in such wise that it became a biochemical computer running C-computations. As long as the liver continued serving my body, with the same external functions, this added sophistication would not (I assume) make it cease to be a part of me.

So now I have two body parts where C-computations go on: my brain and my liver. However, the following is very plausible to me:

  3. Whatever computations my liver were to perform, I would not have direct awareness of what is going on in my liver, except insofar as the liver transmitted information to its outside.
In the hypothesis I am considering, the liver "keeps to itself", not sending data into my nervous system about its new computational activities. So, I submit, I would not be aware of what is going on there. Thus, there would be consciousness, since there would be C-computations, but it would not be consciousness that I have.

But now we have a puzzle. In this setup, I am conscious in virtue of the neural computations that my brain engages in, but am not conscious in virtue of the hepatic computations that my liver engages in. Thus, when my neural computation is quiescent due to sleep, but my hepatic computations continue, I am, simpliciter, not conscious. Why? After all, both the brain and the liver are parts of me. The brain also "keeps to itself": it only lets the rest of the body have the outputs of its computation, but the details of it, it keeps to itself, just as in my thought experiment the liver does. The idea in (1) seemed to have been that computational activity by an organic part of me would give rise to consciousness that is mine. But by the same token the hepatic computational activity should give rise to consciousness that is mine.

So what should someone who accepts (1), (2) and (3) (and the various subsidiary assumptions) say about this? Well, I think the best thing to do would be to abandon (1), denying that I think in virtue of computational activity. A second-best solution would be to qualify (2): yes, the brain and the liver are parts of me, but the brain is a more "intimate" or "central" part of me. But note that one cannot explain this intimacy or centrality in terms of the brain's engaging in computational activity. For the liver could do that, too. Could one explain it in terms of how much coordinating the brain actually does of my bodily functions, both voluntary and not? Maybe, but this has to be taken teleologically. For we can imagine as part of the thought experiment that I become paralyzed, and my brain no longer coordinates my bodily functions, but they are in fact coordinated by medical technology. So the defender of (1) who wishes to qualify (2) in this way may have to embrace teleology to account for the difference between the brain and the liver.

But there may be another solution that doesn't seem to involve teleology. One might say that each bundle of C-computational activities gives rise to a conscious being. Thus, perhaps, in my thought experiment, there are two persons, who contingently have all of their parts in common: (a) the neural person that I am, and (b) the hepatic person that I am not. Contingently, because if the liver were replaced by a prosthesis that maintains basic bodily functions, (b) would die, but (a) would continue to exist, while if the brain were replaced by such a prosthesis, (a) would die, but (b) would continue to exist. This view can be seen as a way of qualifying (2): yes, both the brain and the liver are parts of me, but one is an essential part and the other not. On this view, the claim is that:

  4. I am conscious in virtue of C-computations in an essential part of me.
But actually further qualification is needed. For suppose that instead of my liver becoming conscious, one of my neurons were to suddenly complexify immensely, while retaining the same external functions, so that internally it was engaging in sophisticated C-computations, while externally it worked like every other neuron. It seems that an analogue to (3) would still hold—I wouldn't be aware of what the neuron is aware of (but I admit that my intuition is weaker here). And so (4) is false—for the C-computations in the neuron are computations in the brain, and the brain is an essential part of me, but I am not conscious in virtue of them.

It is difficult to fix up (4). One might require that the computations take place throughout the whole of the essential part. But supposing that for a while my left brain hemisphere fell asleep while the right continued C-computing (dolphins practice such "unihemispheric sleep"), then the computations would not be taking place throughout the whole of the essential part. But I would, surely, still be aware. I am not sure there is any way of fixing up (4) to avoid the conscious-neuron and unihemispheric sleep counterexamples. If not, and if no other two-persons-in-one-body solution can be articulated, then the teleological solution may be needed.

Final remarks: I think naturalism requires something like (1). If so, then given the plausibility of (2) and (3), naturalists need to accept teleology. But teleology does not fit well into naturalism. So naturalism is in trouble.

Thursday, January 24, 2008

Parts

Let us suppose that there is a proper part P of me such that every part of me beyond P could perish, while P would remain. Some think the soul is such a part. Others think the brain is. Yet others might think that the head, or the head plus soul, or my upper half are such. Suppose now that at t0, this part P is a proper part of me, but later, at t1, everything making me up outside of P perishes, and no new stuff accretes, so that P is my only part, at least not counting the subparts of P. Maybe I am a brain in a vat or a disembodied soul at t1.

What is my relationship to P at t1? It cannot be identity. For if I were identical to P at t1, then by transitivity, I would also be identical to P at t0, and thus at t0 I would be a proper part of myself, which is absurd. Yet at t1, there is a sense in which there is nothing to me but P.

It seems that if identity is not the relation, then the relation is constitution, or some other such relation that falls short of identity. Thus, I am constituted by P at t1. This is pretty standard. But it bothers me. Here's why. At t1, P is still a part of me--it didn't cease to be a part of me just because all the other parts of me have gone away (e.g., if I have a brain, then a brain is a part of me, even if nothing beyond it is). Is P a proper or improper part? If a proper part, then there ought to be other stuff beyond P making me up. But ex hypothesi, at t1, P is all that's left of me. So P is an improper part. But the only improper parts of something are the thing as a whole and, on some views, nothing (or an empty or trivial part). Plainly P is not nothing. So then P is the whole of me, which we've already seen isn't true. Either way, we have a problem.

To rephrase, suppose:

  1. Everything excepting some proper part of me could perish with nothing new accreting to me, and with that part not coming to be beyond me.
  2. If a part x of y survives and y survives, and x does not come to be beyond y, then x will still be a part of y.
  3. Identity is transitive.
  4. If x is a proper part of y, then there is stuff beyond x in y.
Then a contradiction follows. (I may have forgotten some assumption here from the informal argument. But the basic idea is here.)

A slightly different argument is to note that at t0, both P and I are parts of me in the same sense--one a proper and the other an improper part. (One might question this: maybe proper and improper parts are "parts" by analogy or equivocation.) This shouldn't change at t1: both P and I should still be parts of me in the same sense. But it doesn't seem categorially right to suppose that x and y can both be parts of z in the same sense when x constitutes y (the atoms of my heart and my heart are parts of me in different senses).

I am inclined to say that these arguments push one to reject (1)--the idea that I have a proper part such that everything beyond that part could perish while I survived with that one part. However, I also think that if any object has proper parts, surely there will be some object with a proper part that is such that the object could survive being "reduced to" that part. Indeed, very plausibly, if any object has proper parts, then my mind is likewise a proper part of me, and I could survive being "reduced to" just the mind (regardless of whether the mind is a brain or a soul).

And so we have another argument for the denial of the thesis that some objects have proper parts.