
Friday, August 29, 2025

Experiencing something as happening to you

In some video games, it feels like I am doing the in-game character’s actions and in others it feels like I am playing a character that does the actions. The distinction does not map onto the distinction between first-person-view and third-person-view. In a first-person view game, even a virtual reality one (I’ve been playing Asgard’s Wrath 2 on my Quest 2 headset), it can still feel like a character is doing the action, even if visually I see things from the character’s point of view. On the other hand, one can have a cartoonish third-person-view game where it feels like I am doing the character’s actions—for instance, Wii Sports tennis. (And, of course, there are games which have no in-game character responsible for the actions, such as chess or various puzzle games like Vexed. But my focus is on games where there is something like an in-game character.)

For those who don’t play video games, note that one can watch a first-person-view movie like Lady in the Lake without significantly identifying with the character whose point of view is presented by the camera. And sometimes there is a similar distinction in dreams, between events happening to one and events happening to an in-dream character from whose point of view one looks at things. (And, conversely, in real life some people suffer from a depersonalization where it feels like the events of life are happening to a different person.)

Is there anything philosophically interesting that we can say about the felt distinction between seeing something from someone else’s point of view—even in a highly immersive and first-person way as in virtual reality—and seeing it as happening to oneself? I am not sure. I find myself feeling like things are happening to me more in games with a significant component of physical exertion (Wii Sports tennis, VR Thrill of the Fight boxing) and where the player character doesn’t have much character to them, so it is easier to embody them, and less so in games with a significant narrative where the player character has character of their own—even when it is pretty compelling, as in Deus Ex. Maybe both the physical aspect and the character aspect are bound up in a single feature—control. In games with a significant physical component, there is more physical control. And in games where there is a well-developed player character, presumably to a large extent this is because the character’s character is the character’s own and only slightly under one’s control (e.g., maybe one can control fairly coarse-grained features, roughly corresponding to alignment in D&D).

If this is right, then a goodly chunk of the “it’s happening to me” feeling comes not from the quality of the sensory inputs—one can still have that feeling when the inputs are less realistic and lack it when they are more realistic—but from control. This is not very surprising. But if it is true, it might have some philosophical implications outside of games and fiction. It might suggest that self-consciousness is more closely tied to agency than is immediately obvious—that self-consciousness is not just a matter of a sequence of qualia. (Though, I suppose, someone could suggest that the feeling of self-consciousness is just yet another quale, albeit one that typically causally depends on agency.)

Monday, July 31, 2017

Self-consciousness and AI

Some people think that self-consciousness is a big deal, that it’s the sort of thing that might be hard for an artificial intelligence system to achieve.

I think consciousness and intentionality are a big deal, that they are the sort of thing that would be hard or impossible for an artificial intelligence system to achieve. But I wonder whether, if we could have consciousness and intentionality in an artificial intelligence system, self-consciousness would be much of an additional difficulty. Argument:

  1. If a computer can have consciousness and intentionality, a computer can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature here is 300K”.

  2. If a computer can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature here is 300K”, then it can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature of me is 300K”.

  3. Necessarily, anything that can have a conscious awareness whose object would be aptly expressible with the phrase “that the temperature of me is 300K” is self-conscious.

  4. So, if a computer can have consciousness and intentionality, a computer can have self-consciousness.
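The argument’s validity is just a chain of conditionals (hypothetical syllogism). As a check, here is a minimal sketch in Lean 4; the propositional letters are my own labels for the premises, not the author’s:

```lean
-- C : the computer can have consciousness and intentionality
-- H : it can have an awareness aptly expressed by "the temperature here is 300K"
-- M : it can have an awareness aptly expressed by "the temperature of me is 300K"
-- S : it is self-conscious
theorem self_consciousness_argument (C H M S : Prop)
    (p1 : C → H)        -- premise 1
    (p2 : H → M)        -- premise 2
    (p3 : M → S)        -- premise 3
    : C → S :=          -- conclusion 4
  fun c => p3 (p2 (p1 c))
```

Of course, this only confirms validity; the philosophical work is in the premises, and premise 3 in particular is where the dispute lies.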

Premise 1 is very plausible: after all, the most plausible story about what a conscious computer would be aware of is immediate environmental data through its sensors. Premise 2 is, I think, also plausible for two reasons. First, it’s hard to see why awareness whose object is expressible in terms of “here” would be harder than awareness whose object is expressible in terms of “I”. That’s a bit weak. But, second, it is plausible that the relevant sense of “here” reduces to “I”: “the place I am”. And if I have the awareness that the temperature in the place I am is 300K, barring some specific blockage, I have the cognitive skills to be aware that my temperature is 300K (though I may need a different kind of temperature sensor).

Premise 3 is, I think, the rub. My acceptance of premise 3 may simply be due to my puzzlement as to what self-consciousness is beyond an awareness of oneself as having certain properties. Here’s a possibility, though. Maybe self-consciousness is awareness of one’s soul. And we can now argue:

  5. A computer can only have a conscious awareness of what physical sensors deliver.

  6. Even if a computer has a soul, no physical sensor delivers awareness of any soul.

  7. So, no computer can have a conscious awareness of its soul.

But I think (5) may be false. Conscious entities are sometimes aware of things by means of sensations of mere correlates of the thing they sense. For instance, a conscious computer can be aware of the time by means of a sensation of a mere correlate—data from its inner clock.

Perhaps, though, self-consciousness is not so much awareness of one’s soul, as a grasp of the correct metaphysics of the self, a knowledge that one has a soul, etc. If so, then materialists don’t have self-consciousness, which is absurd.

All in all, I don’t see self-consciousness as much of an additional problem for strong artificial intelligence. But of course I do think that consciousness and intentionality are big problems.