
Thursday, April 5, 2012

"John and John"

I just sent out an email to two philosophers whose first name is "John", and the email's first line said "Dear John and John". After I sent the email, I wondered to myself: Is there a fact of the matter as to which token of "John" referred to whom?

Normally, if I write an email to two people, I think about the issue of which order to list their names in, and I typically proceed alphabetically. But in this case, I didn't think about the order of names I was writing down. It is possible that I thought about the one while typing the first "John" and then about the other while typing the second "John". Would that be enough to determine which token refers to whom? Maybe. But I don't know if I did anything like that, and we may suppose I didn't.

Now:

  1. John and John are philosophers.
This is true. But I didn't think of a particular one of the two while typing a particular "John" token. It seems unlikely that there is a fact of the matter as to which "John" refers to whom. But the sentence is, nonetheless, true, and hence meaningful.

Is the sentence ambiguous in its speaker meaning? If so, that's a hyperintensional ambiguity, because necessarily "x and y are Fs" and "y and x are Fs" have the same truth value. I am hesitant to say that (1) is ambiguous in its speaker meaning. (I will leave its lexical meaning alone, not to complicate things.)

Suppose that there is no ambiguity in speaker meaning, or at least none arising from the issue of which token refers to whom (maybe "philosopher" is ambiguous). Then this rather complicates compositional semantics on which the content of a whole arises from the content of the parts. For if either token of "John" in (1) has a content, the other token has the same content, since they are on a par. But if the content is the same, we're not going to get out of this a sentence that means the same thing as (1) does. Suppose, for instance, the content of each token of "John" is the same as that of "x or y", where "x" and "y" are unambiguous names for the two philosophers. Then we would have to say that (1) is equivalent to:

  2. (x or y) and (x or y) are philosophers,
but in fact (1) and (2) are not equivalent—all that (2) needs for its truth is that one of x and y be a philosopher.
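
To make the non-equivalence vivid, here is a minimal truth-functional sketch in Python (nothing from the post itself; the truth-values for "x is a philosopher" and "y is a philosopher" are hypothetical inputs):

```python
# A minimal sketch of why reading each "John" token as having the content of
# "x or y" breaks the equivalence between (1) and (2).

def reading_1(is_phil_x: bool, is_phil_y: bool) -> bool:
    """(1) 'John and John are philosophers': both referents are philosophers."""
    return is_phil_x and is_phil_y

def reading_2(is_phil_x: bool, is_phil_y: bool) -> bool:
    """(2) '(x or y) and (x or y) are philosophers': each conjunct only needs
    one of x, y to be a philosopher."""
    return (is_phil_x or is_phil_y) and (is_phil_x or is_phil_y)

# Counterexample to the equivalence: x is a philosopher, y is not.
print(reading_1(True, False))  # False
print(reading_2(True, False))  # True
```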

Maybe the solution is this. Neither "John" in (1) refers. But "John and John" is the name of a plurality. I think not, though. Here's why. Suppose instead I said: "John and the most productive member of my Department and John are all philosophers." Well, "John and the most productive member of my Department and John" is not a name, as it does not refer rigidly.

I am just a dilettante on semantics, and it would not surprise me if this was exhaustively discussed in the literature.

Wednesday, February 29, 2012

Sketches towards a theory of quantifiers and quantifier variance

Quantifier variance theorists think that there can be multiple kinds of quantifiers. Thus, there could be quantifiers that range over only fundamental entities, but there could also be quantifiers that range over arbitrary mereological sums. I will call all the quantifiers with a particular range a "quantifier family". A given quantifier family will include its own "for all x (∀x)" and "for some x (∃x)", but may also include "for most x", "for at least two x", and similar quantifiers. I will, however, not include any negative quantifiers like "for no x" or "for at most seven x", or partially negative ones like "for exactly one x". I will also include "singular quantifiers", which we express in English with "for x=Jones". In fact, I will be working with a language that has no names in it as names are normally thought of in logic. Instead of names, there will be a full complement of singular quantifiers, one for each name-as-ordinarily-thought-of; I am happy to identify names with singular quantifiers for the purposes of logic.

Say that quantifier-candidates are operators that take a variable and a formula and return a formula in which that variable is not open. Consider a set F of quantifier-candidates with a partial ordering ≤, where I will read "Q≤R" as "R is at least as strong as Q", and with a symmetric "duality" relation D on F. There is also a subset N of elements of F which will be called "singular". Then F will be a quantifier family provided that

  1. There is a unique maximally strong operator ∀
  2. There is an operator ∃ dual to ∀
  3. If Q is dual to R then it can be proved that QxP iff ~Rx~P
  4. ∃ is minimally strong
  5. If R in F is at least as strong as Q in F, then from RxP one can prove QxP
  6. From P one can prove QxP for any Q
  7. From ∀xP one can prove P (note: open formulae can stand in provability relations)
  8. If Q is singular, then Q is self-dual and Qx(A&B) can be proved from QxA and QxB
We may need to add some more stuff. But this will do for now.
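
Here is a rough sketch, just my own gloss in Python, of what the signature of a quantifier family might look like as a data structure. All the quantifier names ("forall", "exists", "Jones") are hypothetical, and only the structural conditions (1), (2), (4), and the self-duality half of (8) are checked, since (3) and (5)–(7) concern the proof system:

```python
# A sketch of the *signature* of a quantifier family: quantifier symbols, a
# strength ordering, a duality relation, and the singular quantifiers.

class QuantifierFamily:
    def __init__(self, quantifiers, leq, dual, singular):
        self.quantifiers = set(quantifiers)  # e.g. {"forall", "exists", "Jones"}
        self.leq = set(leq)                  # pairs (Q, R): R is at least as strong as Q
        self.dual = set(dual)                # symmetric duality pairs (Q, R)
        self.singular = set(singular)        # singular quantifiers, e.g. {"Jones"}

    def is_maximal(self, q):
        return all((r, q) in self.leq for r in self.quantifiers)

    def is_minimal(self, q):
        return all((q, r) in self.leq for r in self.quantifiers)

    def check_structure(self):
        maximal = [q for q in self.quantifiers if self.is_maximal(q)]
        assert len(maximal) == 1, "(1) a unique maximally strong quantifier"
        forall = maximal[0]
        duals = [r for (q, r) in self.dual if q == forall]
        assert duals, "(2) the universal quantifier has a dual"
        assert all(self.is_minimal(r) for r in duals), "(4) its dual is minimally strong"
        assert all((q, q) in self.dual for q in self.singular), "(8) singular quantifiers are self-dual"

# Hypothetical example: forall, exists, and the singular quantifier "for x = Jones".
names = {"forall", "exists", "Jones"}
family = QuantifierFamily(
    quantifiers=names,
    leq={(q, q) for q in names} | {("exists", "forall"), ("exists", "Jones"), ("Jones", "forall")},
    dual={("forall", "exists"), ("exists", "forall"), ("Jones", "Jones")},
    singular={"Jones"},
)
family.check_structure()
```

In the toy example the singular quantifier sits between ∃ and ∀ in strength, which is what rules (5)–(7) would suggest: ∀xP yields Jones-x-P, which in turn yields ∃xP.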

One can set up a model-theory as well. A domain-model for a quantifier family will include a set O of "objects", and a set S of sets of subsets of O, such that if E is a member of S, then E is an upper set, i.e., if a subset A of O is in E, then so is any superset of A as long as it is still a subset of O. A member of S will be called an "evaluator". To get a model from a domain-model, we add the usual set of relations. An interpretation I in a given model for a language with a quantifier family will then involve an ordinary interpretation of the language's predicates, plus an assignment of quantifiers to members of S subject to the constraints that (a) if Q≤R, then I(R) is a subset of I(Q), (b) I(∀) is the evaluator {O}, (c) if Q is dual to R, then A is a member of I(Q) if and only if the complement O∖A is not a member of I(R), (d) if Q is singular, then I(Q) is a filter-base. We can then define truth under I using the basic idea that QxP is true if and only if the set of objects o such that o satisfies P when put in for x is a member of I(Q) (this should be all done more carefully, of course).

(The interpretation of a name is always an ultrafilter. If we wanted to, we could restrict names to being interpreted as principal ultrafilters, in which case names would correspond to objects, but I think things are more interesting as they are.)
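
A toy version of this truth definition, on a three-element domain, might look as follows (again only a sketch: the domain, the quantifier names, and the predicate extension are all made up):

```python
# An "evaluator" is an upward-closed set of subsets of the domain; QxP is true
# iff the extension of P is a member of the evaluator assigned to Q.
from itertools import combinations

O = frozenset({"a", "b", "c"})                      # the objects

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def upward_closure(generators, domain):
    """The smallest upper set of subsets of `domain` containing `generators`."""
    return frozenset(A for A in subsets(domain) if any(g <= A for g in generators))

# Evaluators for a few quantifiers over O.
I = {
    "forall": frozenset({O}),                                    # constraint (b)
    "exists": upward_closure([frozenset({o}) for o in O], O),    # all nonempty subsets
    "a":      upward_closure([frozenset({"a"})], O),             # singular "for x = a": a principal ultrafilter
}

def true(quantifier, extension):
    """QxP is true iff the set of satisfiers of P is in the evaluator I(Q)."""
    return frozenset(extension) in I[quantifier]

mortal = {"a", "b"}                    # a sample predicate extension
print(true("forall", mortal))          # False: not everything is mortal
print(true("exists", mortal))          # True
print(true("a", mortal))               # True: a is mortal
```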

Ideally, we'd want to make sure we have soundness and completeness at this point. I'm basically just making this up as I go along, so there may be a literature on this, and if there is, there will presumably be results about soundness and completeness in it. And maybe we need more rules of inference and maybe I screwed up above. This is just a blog post. Moreover, we might want some further restrictions about how particular quantifiers, like "for most", are interpreted (the above just constrains it to having an evaluator between the evaluators for ∀ and ∃). The point of the above is more to give an example of what a formal characterization of a quantifier family might look like than to give the correct one.

But now it is time for some metaphysics. The notion of a quantifier family is a purely formal one. Moreover, the model-theoretic notion of interpretation that I used above won't be helpful for quantifier variance discussions because it talks of "sets of objects", whereas what "metaphysically counts as an object" varies between quantifier families.

It is easy to come up with quantifier families and perverse interpretations such that, under such an interpretation, we would not want to count the members of the quantifier family as "quantifiers". Nor would it be a quantifier variance thesis to say that there are many such families and interpretations, since it is not controversial that there are such.

I think a Thomist can give an answer: a quantifier family in the formal sense is a bona fide quantifier family provided that the family is analogous to some privileged family of quantifiers, say quantifiers over substances. In other words, the different kinds of existence are defined by analogy to existence proper. This won't satisfy typical quantifier variance folk, as I think they don't want a privileged family of quantifiers. But that's the best I can do right now.

Monday, March 21, 2011

Names, quantifiers, Aristotelian logic and one-sided relations

This is going to be a pretty technically involved post and it will be written very badly, as it's really just notes for self. Start with this objection to Aristotelian logic. A good logical system reveals the deep logical structure of sentences. But Aristotelian logic takes as fundamental sentences like:
  1. Everyone is mortal.
  2. Socrates is mortal.
In so doing, Aristotelian logic creates the impression that (1) and (2) have similar logical forms, and it is normally thought that modern quantified logic has shown that (1) and (2) have different logical forms, namely:
  3. ∀x(Mortal(x))
  4. Mortal(Socrates).
I shall show, however, that there is a way of thinking about (1) and (2), as well as about (3) and (4), that makes them have the same deep logical form, as the Aristotelian logician makes it seem. (This is a very surprising result for me. Until I discovered these ideas this year, I had a strong antipathy to Aristotelian logic.) Moreover, this will give us some hope of understanding the medieval idea of one-sided relations. The medievals thought, very mysteriously, that creation is a one-sided relation: we are related to God by the created by relation, but God is not related to us by the creates relation.

Now to the technical stuff. Recall Tarski's definition of truth in terms of satisfaction. I think the best way to formulate the definition is by means of a substitution sequence. A substitution sequence s is a finite sequence of variable-object pairs, which I will write using a slash. E.g., "x1"/Socrates,"x2"/Francis,"x3"/Bucephalus is a substitution sequence. The first pair in my example consists of the variable letter "x1", a linguistic entity (actually in the best logic we might have slot identifiers instead of variable letters), and Socrates—not the name "Socrates" (which is why the quotation marks are as they are). We then inductively define the notion of a substitution sequence satisfying a well-formed formula (wff) under an interpretation I. An interpretation I is a function from names and predicates to objects and properties respectively. And then we have satisfaction simpliciter, which is satisfaction under the intended interpretation, and that's what will interest me. So henceforth, I will be the intended interpretation. (I've left out models, because I am interested in truth simpliciter.) We proceed inductively. Thus, s satisfies a disjunction of wffs if and only if it satisfies at least one of them, it satisfies the negation of a wff if and only if it does not satisfy that wff, and so on.

Quantifiers are a little more tricky. The sequence s satisfies the wff ∀xF iff for every object u, the sequence "x"/u,s (i.e., the sequence obtained by prepending the pair "x"/u at its head) satisfies F. The sequence s satisfies ∃xF iff for some object u, the sequence "x"/u,s satisfies F.

What remains is to define s's satisfaction of an atomic wff, i.e., one of the form P(a1,...,an) where a1,...,an is a sequence of names or variables. The standard way of doing this is as follows. Let u1,...,un be a sequence of objects defined as follows. If ai is a variable "x", then we let ui be the first object u occurring in s paired with the variable "x". If for some i there is no such pair in s, then we say s doesn't satisfy the formula. If ai is a name "n", then we let ui=I("n"). We then say that s satisfies P(a1,...,an) if and only if u1,...,un stand in I(P).

Now notice that while the definition of satisfaction for quantified sentences is pretty neat, the definition of satisfaction for atomics is really messy, because it needs to take into account the question of which slot of the predicate has a variable in it and which one has a name.
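
To see the messiness concretely, here is a small Python sketch of the standard clauses (my own illustration; the formula encoding, the domain, and names like "Socrates" and "Mortal" are all just placeholders). Note how the atomic clause has to branch on whether each slot holds a variable or a name:

```python
# Substitution sequences are lists of (variable, object) pairs; formulas are
# nested tuples.

DOMAIN = {"Socrates", "Plato", "Bucephalus"}
INTERPRETATION = {
    "names": {"Socrates": "Socrates"},                       # I("Socrates") = Socrates
    "predicates": {"Mortal": {("Socrates",), ("Plato",), ("Bucephalus",)}},
}

def lookup(var, seq):
    """First object paired with `var` in the substitution sequence, if any."""
    for v, obj in seq:
        if v == var:
            return obj
    return None

def satisfies(seq, formula):
    op = formula[0]
    if op == "atom":                         # ("atom", predicate, term1, ..., termn)
        _, pred, *terms = formula
        objs = []
        for kind, t in terms:                # each term is ("var", "x") or ("name", "Socrates")
            if kind == "var":
                obj = lookup(t, seq)
                if obj is None:
                    return False             # unbound variable: not satisfied
            else:
                obj = INTERPRETATION["names"][t]
            objs.append(obj)
        return tuple(objs) in INTERPRETATION["predicates"][pred]
    if op == "not":
        return not satisfies(seq, formula[1])
    if op == "or":
        return satisfies(seq, formula[1]) or satisfies(seq, formula[2])
    if op == "forall":                       # ("forall", "x", subformula)
        _, var, sub = formula
        return all(satisfies([(var, u)] + seq, sub) for u in DOMAIN)
    if op == "exists":
        _, var, sub = formula
        return any(satisfies([(var, u)] + seq, sub) for u in DOMAIN)
    raise ValueError(op)

# (1) "Everyone is mortal" and (2) "Socrates is mortal":
print(satisfies([], ("forall", "x", ("atom", "Mortal", ("var", "x")))))   # True
print(satisfies([], ("atom", "Mortal", ("name", "Socrates"))))            # True
```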

There is a different way of doing this. This starts with the Montague grammar way of thinking about things, on which words are taken to be functors from linguistic entities to linguistic entities. Let us ask, then, what kind of functors are represented by names. Here is the answer that I think is appealing. A name, say "Socrates", is a functor from wffs with an indicated blank to wffs. In English, the name takes a wff like "____ likes virtue" and returns the wff (in this case sentence) "Socrates likes virtue". (The competing way of thinking of names is as zero-ary functors. But if one does it this way, one also needs variables as another kind of zero-ary functor, which I think is unappealing since variables are really just a kind of slot, or else one has a mess in treating atomics differently depending on which slots are filled with names and which with variables.) We can re-formulate First Order Logic so that a name like "Socrates" is (or at least corresponds to) a functor from wff-variable pairs to new wffs. Thus, when we apply the functor "Socrates" to the wff "Mortal(x)" and the variable "x", we get the wff (sentence, actually) "Mortal(Socrates)". And the resulting wff no longer has the variable "x" freely occurring in it. But this is exactly what quantifiers do. For instance, the universal quantifier is a functor that takes a wff and a variable, and returns a new wff in which the variable does not freely occur.

If we wanted the grammar to indicate this with particular clarity, instead of writing "Rides(Alexander, Bucephalus)", we would write: "Alexanderx Bucephalusy Rides(x,y)". And this is syntactically very much like "∀x∀y Rides(x,y)".

And if we adopted this notation, the Tarski definition of satisfaction would change. We would add a new clause for the satisfaction of a name-quantified formula: s satisfies nxF, where "n" is a name, if and only if "x"/I("n"),s satisfies F. Now once we got to the satisfaction of an atomic, the predicate would only be applied to variables, never to names. And so we could more neatly say that s satisfies P(x1,...,xn) if and only if every variable occurs in the substitution sequence and u1,...,un stand in I(P) where ui is the first entity u occurring in s in a pair of the form "xi"/u.  Neater and simpler, I think.
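
Continuing the same sketch (and reusing lookup, DOMAIN, and INTERPRETATION from the block above), the name-as-quantifier treatment would look roughly like this: atomic formulas now contain only variables, and a name gets a clause of exactly the same shape as the quantifier clauses.

```python
# Names treated as quantifiers: a name-quantified formula is ("Socrates", "x", sub).

def satisfies2(seq, formula):
    op = formula[0]
    if op == "atom":                          # ("atom", predicate, "x1", ..., "xn")
        _, pred, *variables = formula
        objs = [lookup(v, seq) for v in variables]
        if None in objs:
            return False                      # some variable unbound
        return tuple(objs) in INTERPRETATION["predicates"][pred]
    if op == "not":
        return not satisfies2(seq, formula[1])
    if op == "or":
        return satisfies2(seq, formula[1]) or satisfies2(seq, formula[2])
    if op == "forall":
        _, var, sub = formula
        return all(satisfies2([(var, u)] + seq, sub) for u in DOMAIN)
    if op == "exists":
        _, var, sub = formula
        return any(satisfies2([(var, u)] + seq, sub) for u in DOMAIN)
    if op in INTERPRETATION["names"]:         # name-quantified formula
        _, var, sub = formula
        return satisfies2([(var, INTERPRETATION["names"][op])] + seq, sub)
    raise ValueError(op)

# "∀x Mortal(x)" and "Socratesx Mortal(x)" now have parallel forms:
print(satisfies2([], ("forall", "x", ("atom", "Mortal", "x"))))     # True
print(satisfies2([], ("Socrates", "x", ("atom", "Mortal", "x"))))   # True
```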

Names, thus, can be seen as quantifiers. It might be thought that there is a crucial disanalogy between names and the universal/existential quantifiers, in that there are many names, and only one universal and only one existential quantifier. But the latter point is not clear. In a typed logic, there may be as many universal quantifiers as types, and as many existential ones as types, once again. And the number of types may be world-dependent, just as the number of objects.

If I am right, then if we wanted to display the logical structure of (1) and (2), or of (3) and (4) for that matter, we would respectively say:
  5. ∀x Mortal(x)
  6. Socratesx Mortal(x).
And there is a deep similarity of logical structure—we simply have different quantifiers. And so the Aristotelian was right to see these two as similar.

Now, the final little bit of stuff. Obviously, if "m" and "n" are two names, then:
  1. "mnF(x,y)" is true iff "nmF(x,y)" is true,
just as:
  1. "∀xyF(x,y)" is true iff "∀yxF(x,y)" is true.
But the two sentences in (8), although they are logically equivalent, arguably express different propositions. And I submit that so do the two sentences in (7). And we even have a way of marking the difference in English, I think. Ordinarily, what the left hand side in (7) says is that u has the property PxnyF(x,y) while the right hand side in (7) says that v has the property PymxF(x,y), where u and v are what "m" and "n" respectively denote, and PxH(x) is the (abundant) property corresponding to the predicate H (the P-thingy is like the lambda functor, except it returns a property, not a predicate). These are distinct claims.
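
A quick illustration of that last point, with made-up denotations standing in for u and v: the two sides come out with the same truth value, but one attributes a property to u and the other to v.

```python
# F, u, v are placeholders; "Rides" and the objects are hypothetical.

def F(x, y):
    return (x, y) in {("Alexander", "Bucephalus")}   # Rides(x, y)

u, v = "Alexander", "Bucephalus"                      # denotations of "m" and "n"

prop_of_u = lambda x: F(x, v)    # Px nyF(x,y): being an x that F's v
prop_of_v = lambda y: F(u, y)    # Py mxF(x,y): being a y that u F's

# Same truth value either way, but one claim is about u, the other about v:
print(prop_of_u(u))   # True
print(prop_of_v(v))   # True
```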

The medievals then claim that in the case of God we have this. They say that "Godx nyF(x,y)" is true in virtue of "ny GodxF(x,y)" being true. It is to the referent of "n" that the property Py GodxF(x,y) is attributed, and the sentence that seems to attribute a property to God is to be analyzed in terms of the one that attributes a property to the referent of "n".

Friday, June 18, 2010

Radical Essentiality of Origins and a crazy theory of names

Radical Essentiality of Origins (REO) is the thesis that the complete origins of an entity x are necessary and sufficient for the identity of x. In other words, for any x, if D is a complete description of the origins of x (all the history prior to x, as well as x's initial state), then necessarily something is x if and only if it satisfies D. In other words, according to REO, origins function like haecceities.

REO has a number of benefits. It reduces the number of brute facts (one doesn't have to explain why I exist instead of someone just like me), it reduces transworld identity facts to REO and diachronic identity, it explains how God knows whom he will create, and so on. It has one cost: it reduces the number of possibilities, eliminating apparent possibilities that people might think are real (like the possibility that I might have lived your life, or had a slightly different causal origin, or that there be two indiscernibles). Moreover, Kripke's quantified modal logic has the deficiency that it has no room for names; but if REO is true, names aren't needed, as we can just use definite descriptions, pace Kripke.

Here's something really crazy you can do with REO. You can rescue something like the definite description theory of names. You say: a name functions as an abbreviation (in a broad sense—I will say a bit more) for a definite description in terms of origins. Thus, "Socrates" abbreviates: "The son of Phaenarete and Sophroniscus, grandson of ..., conceived at ..."

An apparent problem is that it seems that to grasp an abbreviation, you must grasp what it is an abbreviation for. But that may well be false. One can have the concept of the U.N. or of someone's being a POW without knowing what the abbreviations stand for. (We could explain this datum by saying that there are in fact two words "POW"—one is an abbreviation and the other is a word in the idiolect of those who don't know it's an abbreviation. But the proposal that one can understand an abbreviation without knowing what it stands for is simpler.)

Maybe, though, some user of the abbreviation has to know what it's an abbreviation for. But suppose that Sally wrote down on a piece of paper some definite description, and sealed the piece in an envelope. Then she told me some facts about what satisfies the description, without telling me the description. Maybe it's a riddle and I'm supposed to guess what the description is. So I say: "Let 'D' abbreviate that definite description." I can then use "D" grammatically as a definite description. For instance, if Sally's hints imply that the description isn't (even de facto) rigid, then I can say: "D might not have been D." So I can use "D" in my language. Now, there is a question whether such use is sufficient for counting as grasp. It either does or does not. If it does, then there is no objection to the REO account of names—we just suppose that names function as abbreviations for definite descriptions of the origins, but we don't know what these definite descriptions are. But if it is not sufficient for counting as grasp, we can still say the same thing about how names function. We just have to say that we don't grasp names, though we are able to use them. Maybe only God grasps my name. Or, perhaps, the better thing to do is to distinguish different kinds of grasp. We have a sufficient grasp of a name to use it.

I don't know how this can be extended to fictional names. But Kripke's account of names has that problem, too.

A problem with this theory of names is that there isn't a unique definite description of origins. Conjuncts can be re-ordered, etc. I think to some people this isn't going to be a great cost—maybe a name is an ambiguous definite description (i.e., it's ambiguous which definite description it is), but the ambiguity does not affect extension in any possible world. (Then, maybe a fictional name is an ambiguous definite description where the ambiguity does affect extension?) I don't like that. Another problem with this is that I actually think "Tully" and "Cicero" mean different things. To me, this is a very strong, maybe fatal, objection to the crazy REO-based theory of names. But it is a standard view that "Tully" and "Cicero" have the same sense, so to others this won't be much of an objection.

Enough fun for now.

Wednesday, July 23, 2008

Naming and taming

Since jumping into amateur astronomy, I find that the sky has lost some of its aweful majesty for me. It's beautiful, but not aweful. I think this has something to do with the way that, by having names to attach to objects and ways of classifying them, we have in a sense tamed them. That beautiful glow over there—that's "just" the Lagoon Nebula. There is a way in which this is deceptive. We encompass a whole galaxy in a word, completely ignorant of the billions of fascinating lives that, for all we know, are unfolding there. But there is also a way in which the galaxy itself, leaving aside any life in it (I think it would be strange to talk of a dog, much less a human, as "part of the Milky Way Galaxy"), really is not aweful. It is a creature of God, and in itself not as wondrous as a human being with reason and volition.
I think the above gives me reason to be even more sympathetic to the Thomistic doctrine that God is not a member of any of the genera, and the early Christian insistence on God not having a name.

Saturday, December 15, 2007

God has no name

Early Christians considered it important that God has no name, in contradistinction to pagans who had multiple gods and naturally wanted to know which god the Christians worshiped. Eusebius reports of Attalus, being roasted on an iron chair, that "when asked what was the name of God, he answered, 'God has no name like a human being has'." St. Justin Martyr in his second Apology argues that names are given by one's elders, and hence God has no name. Aristides in his Apology says: "He has no name, for everything which has a name is kindred to things created." After quoting Trismegistus to the same effect, Lactantius writes: "God, therefore, has no name, because He is alone; nor is there any need of a proper name, except in cases where a multitude of persons requires a distinguishing mark, so that you may designate each person by his own mark and appellation. But God, because He is always one, has no peculiar name." (The difference in reasons given suggests that there was a well-established doctrine that God had no name, but the reasons for the doctrine were not universally agreed on.)

The idea of God's namelessness is fruitful. It is true that there is the tetragrammaton, but that seems to have been completely unused by Christians until very recently. The early Christians would have thought that the use of a proper name made God too much like a pagan deity. (And, indeed, there is evidence of pagan deities with names akin to the tetragrammaton, e.g., in Ugaritic texts.) The Jewish cessation of use of a proper name for God, and its systematic oral replacement by "Adonai" or "Elohim", would have been seen not as protection against uttering a name too holy for our sinful lips, but as a deepening of the understanding of monotheism, of God's utter transcendence.

But in a way God has a name. The man Jesus Christ is his name to us. Christ is the Logos, the Word that reveals God, the word pointing towards God (I am reading "pros ton Theon" in John 1:1 in a way complementary to the usual reading of "with God"). But his name is unlike the names of humans. His name is a person, consubstantial with him. Nothing less than himself is sufficient for us to call him by. Yet, like a name, he is made sensible in the incarnation.