
Wednesday, October 20, 2021

Is Lewis's identity theory a type-type identity theory?

David Lewis’s 1983 identity theory of mind holds that:

  1. For each mental state type M there is a causal role R_M such that to be a state of type M is to fulfill R_M.

  2. For each actually occurring mental state type M, the causal role R_M is fulfilled by physical states and only physical states.

It is normal to take Lewis’s identity theory to be a type-type identity theory.

But a type-type identity theory identifies being a state of type M with some physical state type. Whether Lewis’s identity theory is a type-type identity theory depends, then, on whether fulfilling R_M counts as a physical state type.

Here are two accounts of what makes a type T be a physical type:

  3. Everything falling under T is physical.

  4. Necessarily, everything falling under T is physical.
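
In symbols, with “Tx” abbreviating “x falls under T” and “□” expressing metaphysical necessity, the two accounts come to this (a rough sketch, leaving “physical” unanalyzed):

    (3) ∀x (Tx → x is physical)
    (4) □∀x (Tx → x is physical)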

If (3) is the right account of the physicality of a state type, then Lewis’s theory is a type-type identity theory, because everything that fulfills R_M is physical according to (2).

However, (3) is an inadequate account of the physicality of a type. Consider the type ghost. That’s paradigmatically not a physical type. But in fact, trivially, everything that is a ghost is physical, simply because there are no ghosts. If one objects that only instantiated types count, then we can note that by (3) the type ghost-or-pig also counts as a physical type, whereas it surely is not one.

It seems to me that (4) is a much better account of a physical type. However, on (4), for Lewis’s theory to count as a type-type identity theory, he would need a version of (2) strengthened by deleting “actually” and inserting “Necessarily” in front. And Lewis’s arguments do not establish such a stronger version of (2). Lewis’s arguments are quite compatible with R_M having non-physical realizers in other possible worlds.

That said, perhaps (4) is not the right account of the physicality of a type either. Consider the type believed by God to be an electron. Necessarily, everything falling under this type is an electron, hence physical. But because the definition of the type makes use of supernaturalist vocabulary, the type does not seem to be physical. This criticism points towards an account of physical type like this:

  5. The type T is expressible wholly in terms that natural science uses.

For this to fit with Lewis’s theory, it’s essential that causation be one of the terms that natural science uses. But now imagine that we live in a world where one being causes spacetime, and it’s a non-physical being. Clearly, the type cause of spacetime is expressible wholly in natural scientific vocabulary, but given that the one and only instance of this type is non-physical, it sure doesn’t sound like a physical type! Indeed, if (5) is how we understand physical types, then a type-type identity theory does not even imply a token-token identity theory!

We might try to combine (3) with (5):

  6. Everything falling under T is physical and the type T is expressible wholly in terms that natural science uses.

But now imagine that there is no being that causes spacetime and all spatiotemporal entities, but that it is possible for there to be such a being, and that any such being would necessarily be non-physical. In that case causes spacetime and all spatiotemporal entities satisfies (6) trivially, but is surely not a physical type, because the only possible instances of it would be non-physical. (If one objects that types need to be instantiated, just disjoin this type with the type pig, as we did in the ghost case.)

So perhaps our best bet is to combine (4) with (5). But any account on which (4) is a necessary condition for the physicality of a type is an account that goes beyond Lewis’s, because it requires the stronger version of (2) with actuality replaced by necessity.

I conclude that Lewis’s account isn’t really a type-type identity theory, except in the inadequate senses of physicality of type given by (3), (5) or (6).

Wednesday, December 21, 2016

What is this?

Consider the black item to the right here on your screen. Is it a token of the Latin alphabet letter pee, the Greek letter rho or the Cyrillic letter er? The question cannot be settled by asking which font, and where in the font, the glyph is taken from, because I drew the drawing in Inkscape rather than using any font, precisely to block such an answer. Nor will my intentions answer the question, since I drew the thing precisely to pose such a philosophical question rather than to express any one of the three options.

There are two interesting questions here. The first is an ontological one. Is a token on screen something different from the pattern of light? If it's the same as the pattern of light, then there is at most one token, there being at most one relevant pattern of light (perhaps none, if our ontology doesn't include patterns of light), though this token is a token of pee, a token of rho, and a token of er. If a token is not identical with a pattern of light, then we might as well keep on multiplying entities, and say that there is a pattern of light and three tokens, of pee, rho and er, respectively, with the first entity constituting the latter three.

The second one is a philosophy of language one. What determines whether or not the pattern of light is or constitutes a token of, say, rho? Is it my intentions? If so, then indeed we have tokens of pee, rho and er, as making these was my intention, but we do not have a token of the Coptic letter ro or a token of the letter qof in 15th century Italian Hebrew cursive, since I didn't think of these when I was doing the drawing. Is it the linguistic context? But then it's not a token of any letter, since a displayed png file in an analytic philosophy post is not the kind of linguistic context that determines a token.

Or is it that the pattern of light is or constitutes tokens of all the letters it geometrically matches, whether or not it was intended as such? If so, then we also have a letter dee (just turn your screen). But now suppose a new alphabet is created, and it contains a letter that looks just like the drawing. It would be odd to say that if a new language were created on another planet this would instantly multiply the entities on earth (at the speed of light? faster?). So it seems that on this view, we should say that the pattern of light is or constitutes tokens of all the letters in all the alphabets that will ever exist. But future actions shouldn't affect how many things there now are. So on this view, we should be even more pluralistic: the pattern of light is or constitutes tokens of all the letters in all possible alphabets.

We thus have two questions: one about ontology and one about what is being tokened. Both questions have parsimonious and profligate answers. The parsimonious answer to the ontology question is that there is one thing, which can be a token of multiple things. The profligate one is that we have many tokens. The parsimonious answers to the language question are that intentions and/or context determine what's been tokened. The profligate answer posits infinitely many tokenings.

We probably shouldn't combine the two profligate answers. For then on your screen there are infinitely many physical things, all co-located (and some perhaps even with the same modal profile). That's too much.

That still leaves three combinations. I think there is reason to reject the combination of ontological profligacy with parsimony on the philosophy of language side. The reason is that tokens get repurposed. Consider a Russian who has a Scrabble set and loses an er tile. She then buys a replacement pee tile, as it looks pretty much the same (I looked at online pictures--both have value 1 and look the same). Then it seems that a new entity, a token of er, comes into existence if we have ontological profligacy and linguistic parsimony. Does a mere intention to use the tile for an er magically create a new physical object, a token? That seems not very plausible.

That leaves two combinations:

  • ontological and linguistic parsimony
  • ontological parsimony and linguistic profligacy.

Friday, September 11, 2009

On sentence types

Sentence tokens come in many types, such as stupid sentence tokens, true sentence tokens, sentence tokens written in green ink, tokens of "Snow is white", tokens of "Snow is white" written in a serif font and in a 4pt typeface or smaller, etc. Most of these types of sentence tokens do not qualify as "sentence types". In fact, in the above, the only sentence type is tokens of "Snow is white". Types of sentence tokens are abstractions from sentence tokens. But there are different kinds and levels of abstraction, and so not all types of sentence tokens count as "sentence types".

I will argue that the notion of a sentence type is to a large extent merely pragmatic. We consider the following to each be a token of the same sentence type:

  1. Snow is white. (in roman type)
  2. Snow is white. (in bold)
  3. Snow is white. (in italics)
Differences between roman, bold, and italic, as well as differences in size, do not make a difference between sentence types. Similarly, "Snow is white" as said by me with my weird Polish-Canadian accent and as said by the Queen are tokens of the same sentence type. On the other hand,
  4. Snow, it is white.
is a token of a different sentence type.

Say that a difference between the appearances (visual or auditory) of tokens that does not make for a difference in sentence type is a "merely notational difference". So, the way logicians think of language is roughly something like this. First, we abstract away merely notational differences. The result of this abstraction is sentence types. Then we can do logic with sentence types, and doing logic with sentence types helps us to do other abstractions. Thus, Lewis abstracts from differences that do not affect which worlds verify the sentence, and the result is his unstructured propositions (which he, in turn, identifies with sets of worlds). Or we might abstract from differences that do not affect meaning, and get propositions. (This simplifies by assuming there are no indexicals.)

But one could do things differently. For instance, we could say that differences in typeface are not merely notational differences, but in fact make for a different sentence type. Our logic would then need to be modified. In addition to rules like conjunction-introduction and universal-elimination, we would need rules like italic-introduction and bold-elimination. Moreover, these rules do not contribute in an "interesting way" to the mathematical structures involved. (John Bell once read to me a referee's report on a paper of his. As I remember it, it was something like this: "The results are correct and interesting. Publish." There are two criteria for good work in mathematics: it must, of course, be correct but it must also be interesting.) Moreover, there will be a lot of these rules, and they're going to be fairly complicated, because we'll need a specification of what is a difference between two symbol types (say, "b" and "d") and what is a difference between the same symbol type in different fonts. Depending on how finely we individuate typefaces (two printouts of the same document on the same printer never look exactly alike), this task may involve specifying a text recognition algorithm. This is tough stuff. So there is good pragmatic reason to sweep all this stuff under the logician's carpet as merely notational differences.

Or one could go in a different direction. One could, for instance, identify the differences between sentences (or, more generally, wffs) that are tautologically equivalent as merely notational differences. Then, "P or Q" and "Q or P or Q" will be the same sentence type. Why not do that? One might respond: "Well, it's possible to believe that P or Q without believing that Q or P or Q. So we better not think of the differences as merely notational." However, imagine Pierre. He has heard me say that London is pretty and heard the Queen say that London is ugly. But he has failed to recognize, behind the difference in accents, that my token of "London" and the Queen's token of it both name the same city. If we were to express Pierre's beliefs, it would be natural to say "Pierre believes that [switch to Pruss's accent] London [switch back] is pretty and that [switch to Her Majesty's accent] London [switch back] is ugly." So the belief argument against identifying "P or Q" with "Q or P or Q" pushes one in the direction of the previous road—that of differentiating very finely.
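
In the propositional case, at least, tautological equivalence can be checked algorithmically by brute-force truth tables (a point that matters below). Here is a minimal sketch in Python; the function name is mine, purely for illustration:

    from itertools import product

    def tautologically_equivalent(f, g, num_vars):
        # True just in case f and g agree under every assignment of
        # truth values to their num_vars propositional variables.
        return all(f(*vals) == g(*vals)
                   for vals in product([True, False], repeat=num_vars))

    # "P or Q" and "Q or P or Q" have the same truth table, so on the
    # proposal above they would count as the same sentence type.
    print(tautologically_equivalent(lambda p, q: p or q,
                                    lambda p, q: q or p or q, 2))  # True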

On this approach, propositional logic becomes really easy. You just need conjunction-introduction and disjunction-introduction.
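
Here, in sketch form, is why those two rules suffice once tautologically equivalent wffs count as the same type. To derive a tautological consequence Q from premises P1, ..., Pn:

    1. P1 ∧ ... ∧ Pn    conjunction-introduction (n−1 times); call this C
    2. C ∧ Q            no rule needed: since C entails Q, C and C ∧ Q are
                        tautologically equivalent, hence the same type
    3. (C ∧ Q) ∨ Q      disjunction-introduction
    4. Q                no rule needed: (C ∧ Q) ∨ Q is tautologically
                        equivalent to Q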

Or one could do the following: Consider tokens of "now" to be of different word types (the comments on the arbitrariness of sentence types apply to word types) when they are uttered at different times. Then, tokens of "now" are no longer indexicals. Doing it this way, we remove all indexicality from our language. Which is nice!

Or one can consider "minor variations". For instance, logic textbooks often do not give parenthesis introduction and elimination rules, or rules for handling spaces in sentences. As a result, a good deal of the handling of parentheses and spaces is left for merely notational equivalence to take care of. It's easy to vary how one handles a language in these ways.

There does not seem to be any objective answer, for any language, as to where exactly merely notational differences leave off. There seem to be some non-pragmatic lines one can draw. We do not want sentence types to be so broad that two tokens of the same non-paradoxical and non-indexical type can have different truth values. Nor, perhaps, do we want to identify sentence tokens as being of the same type just because they are broadly logically equivalent when the equivalence cannot be proved algorithmically. (Problem: Can the equivalences between tokens in different fonts and accents be proved algorithmically? Can one even in principle have a perfect text scanning and speech recognition algorithm?) But even if we put in these constraints, a lot of flexibility remains. We could identify all tautologously equivalent sentences as of the same type. We could even identify all first order equivalent sentences as of the same type.

Here is a different way of seeing the issue, developed from an idea emailed to me by Heath White. A standard way of making a computer language compiler is to split the task up into two stages. The first stage is a "lexer" or "lexical analyzer" (often generated automatically by a tool like flex from a set of rules). This takes the input, and breaks it up into "tokens" (not in the sense in which I use the word)—minimal significant units, such as variable names, reserved keywords, numeric literals, etc. The lexical analyzer is not in general one-to-one. Thus, "f( x^12 + y)" will get mapped to the same sequence of tokens as "f(x^12+y )"—differences of spacing don't matter. The sequence of tokens may be something one can represent as FUNCTIONNAME("f") OPENPAREN VARIABLENAME("x") CARET NUMERICLITERAL(12) PLUS VARIABLENAME("y") CLOSEPAREN. After the lexical analyzer is done, the data is handed over to the parser (often generated automatically by a tool like yacc or bison from a grammar file).
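
Here is a minimal sketch of such a lexer in Python (my own illustration, not the output of flex; I collapse FUNCTIONNAME and VARIABLENAME into a single NAME token, since telling them apart is more naturally the parser's job):

    import re

    TOKEN_SPEC = [
        ("NAME",   r"[A-Za-z_]\w*"),  # identifiers such as f, x, y
        ("NUMBER", r"\d+"),           # numeric literals such as 12
        ("CARET",  r"\^"),
        ("PLUS",   r"\+"),
        ("LPAREN", r"\("),
        ("RPAREN", r"\)"),
        ("SKIP",   r"\s+"),           # spacing: discarded, so lexing isn't one-to-one
    ]
    MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

    def lex(text):
        # Break the input into (kind, value) pairs, dropping whitespace.
        return [(m.lastgroup, m.group())
                for m in MASTER.finditer(text) if m.lastgroup != "SKIP"]

    # Differences of spacing don't matter: both inputs lex identically.
    assert lex("f( x^12 + y)") == lex("f(x^12+y )")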

Now, in practice, the hand-off between the lexer and the parser is somewhat arbitrary. If one really wanted to and was masochistic enough, one could write the whole compiler in the lexer. Or one could write a trivial lexer, one that spits out each character (or even each bit!) as a separate token, and then the parser would work really hard.
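
The degenerate hand-off might look like this (again just a sketch):

    def trivial_lex(text):
        # One token per character; all the real work is left to the parser.
        return [("CHAR", c) for c in text]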

Nonetheless, as Heath pointed out to me, there may be an objective answer to where notational difference leaves off. For it may be that our cognitive structure includes a well-defined lexer that takes auditory (speech), visual (writing or sign language) or tactile (Braille or sign language for the deaf-blind) observations and processes them into some kind of tokenized structure. If so, then two tokens are of the same sentence type provided that the lexer would normally process them into the same tokenized structure. If so, then sentence type will in principle be a speaker-relative concept, since different people's lexers might work differently. To be honest, I doubt that it works this way in me. For instance, I strongly doubt that an inscription of "Snow is white" and an utterance of "Snow is white" give rise to any single mental structure in me. Maybe if one defines the structure in broad enough functional terms, there will be a single structure. But then we have arbitrariness as to what we consider to be functionally relevant to what.

The lesson is not that all is up for grabs. Rather, the lesson is that the distinctions between tokens and types should not be taken to be unproblematic. Moreover, the lesson supports my view—which I think is conclusively proved by paradoxical cases—that truth is a function of sentence token rather than sentence type.