Showing posts with label quantifiers. Show all posts

Thursday, June 5, 2025

What is an existential quantifier?

The inferentialist answer is that an existential quantifier is any symbol that has the syntactic features of a one-place quantifier and obeys the logical rules of an existential quantifier (we can precisely specify both the syntax and the logic, of course). Since Carnap, we’ve had good reason to reject this answer (see, e.g., here).

Here is a modified suggestion. Consider all possible symbols that have the syntactic features of a one-place quantifier and obey the rules of an existential quantifier. Now say that a symbol is an existential quantifier provided that it is a symbol among these that maximizes naturalness, in the David Lewis sense of “naturalness”.

Moreover, this provides the quantifier variantist or pluralist (who thinks there are multiple existential quantifiers, none of them being the existential quantifier) with an answer to a thorny problem: Why not simply disjoin all the existential quantifiers to make a truly unrestricted existential quantifier, and say that that is the existential quantifier? The quantifier variantist can say: Go ahead and disjoin them, but a disjunction of quantifiers is less natural than its disjuncts and hence isn’t an existential quantifier.

This account also allows for quantifier variance, the possibility that there is more than one existential quantifier, as long as none of these existential quantifiers is more natural than any other. But it also fits with quantifier invariance as long as there is a unique maximizer of naturalness.

Until today, I thought that the problem of characterizing existential quantifiers was insoluble for a quantifier variantist. I was mistaken.

It is tempting to take the above to say something deep about the nature of an existential quantifier, and maybe even the nature of being. But I think it doesn’t quite. We have a characterization of existential quantifiers among all possible symbols, but this characterization doesn’t really tell us what they mean, just how they behave.

Monday, May 5, 2025

Unrestricted quantification and Tarskian truth

It is well-known—a feature and not a bug—that Tarski’s definition of truth needs to be given in a metalanguage rather than the object language. Here I want to note a feature of this that I haven’t seen before.

Let’s start by considering how Tarski’s definition of truth would work for set theory.

We can define satisfaction as a relation between finite gappy sequences of objects (i.e., sets) and formulas where the variables are x1, .... We do this by induction on formulas.

How does this work? Following the usual way to formally create an inductive definition, we will do something like this:

  1. A satisfaction-like relation is a relation between finite sequences of sets and formulas such that:

    1. the relation gets right the base cases, namely, a sequence s satisfies xn ∈ xm if and only if the nth entry of s is a member of the mth entry of s, and satisfies xn = xm if and only if the nth entry of s is identical to the mth entry

    2. the relation gets right the inductive cases (e.g., s satisfies ∀xnϕ if and only if every sequence s′ that includes an nth place and agrees with s on all the places other than the nth place satisfies ϕ, etc.)

  2. A sequence s satisfies a formula ϕ provided that every satisfaction-like relation holds between s and ϕ.
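For a small finite toy universe the two-step definition is unnecessary: the inductive clauses can simply be run as a recursion on formulas. Here is a minimal Python sketch of that (my own illustration; the formula encoding and helper names are invented), which makes vivid that the detour through satisfaction-like relations only matters when the domain is too big to be a set:

```python
def v_stage(k):
    """Hereditarily finite sets up to rank k, coded as frozensets."""
    stage = {frozenset()}
    for _ in range(k):
        elems = list(stage)
        # add every subset of the current stage
        for mask in range(2 ** len(elems)):
            stage = stage | {frozenset(e for i, e in enumerate(elems)
                                       if mask >> i & 1)}
    return stage

DOMAIN = v_stage(2)  # a tiny stand-in for the universe of sets

def satisfies(s, phi):
    """s is a gappy sequence, coded as a dict from variable indices to sets."""
    op = phi[0]
    if op == 'in':                    # base case: x_n is a member of x_m
        return s[phi[1]] in s[phi[2]]
    if op == 'eq':                    # base case: x_n = x_m
        return s[phi[1]] == s[phi[2]]
    if op == 'not':
        return not satisfies(s, phi[1])
    if op == 'or':
        return satisfies(s, phi[1]) or satisfies(s, phi[2])
    if op == 'all':                   # inductive case: vary the nth place
        n, body = phi[1], phi[2]
        return all(satisfies({**s, n: u}, body) for u in DOMAIN)
    raise ValueError(op)

# The empty sequence satisfies "for all x1, x1 = x1":
print(satisfies({}, ('all', 1, ('eq', 1, 1))))  # True
```
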

The problem is that in (2) we quantify over satisfaction-like relations. A satisfaction-like relation is not a set in ZF, since any satisfaction-like relation includes ((a),ϕ=) for every set a, where (a) is the sequence whose only entry is a at the first location and ϕ= is x1 = x1. Thus, a satisfaction-like relation needs to be a proper class, and we are quantifying over these, which suggests ontological commitment to these proper classes. But ZF set theory does not have proper classes. It only has virtual classes, where we identify a class with the formula defining it. And if we do that, then (2) comes down to:

  3. A sequence s satisfies a formula ϕ provided that for every satisfaction-like formula F the sentence F(s,ϕ) is true.

And that presupposes the concept of truth. (Besides which, I don’t know if we can define a satisfaction-like formula.) So that’s a non-starter. We need genuine and not merely virtual classes to give a Tarski-style definition of truth for set theory. In other words, it looks like the meta-language in which we give the Tarski-style definition of truth for set theory not only needs a vocabulary that goes beyond the object-language’s vocabulary, but it needs a domain of quantification that goes beyond the object-language’s domain.

Now, suppose that we try to give such a Tarskian definition of truth for a language with unrestricted quantification, namely quantification over literally everything. This is very problematic. For now the satisfaction-like relation includes the pair ((a),ϕ=) for literally every object a. This relation, then, can neither be a set, nor a class, nor a proper superclass, nor a supersuperclass, etc.

I wonder if there is a way of getting around this difficulty by having some kind of a primitive “inductive definition” operator instead of quantifying over satisfaction-like relations.

Another option would be to be a realist about sets but a non-realist about classes, and have some non-realist story about quantification over classes.

I bet people have written on this stuff, as it’s a well-explored area. Anybody here know?

Monday, April 28, 2025

Inferentialism and the completeness of geometry

The Quinean criterion for existential commitment is that we incur existential commitment precisely by affirming existentially quantified sentences. But what’s an existential quantifier?

The inferentialist answer is that an existential quantifier is anything that behaves logically like an existential quantifier by obeying the rules of inference associated with quantifiers in classical logic.

Here is a fun little problem with the pairing of the above views. Tarski proved that, with an appropriate axiomatization, Euclidean geometry is complete and consistent, i.e., for every sentence ϕ of the language L of geometry, exactly one of ϕ and its negation is provable from the axioms. Now let us stipulate a philosophically curious language L*. Syntactically, the symbols of L* are the symbols of L but with asterisks added after every logical symbol and predicate, and the sentences of L* are the sentences of L with an asterisk added after every connective, quantifier and predicate. The semantics of L* is as follows: a sentence ϕ of L* means that the sentence of L formed by dropping the asterisks from ϕ is provable from the axioms of Euclidean geometry.

Inferentially, the asterisked connectives of L* behave exactly like the corresponding non-asterisked connectives of L.
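Since the stipulated semantics works by a purely syntactic transformation, it can be written down in one line (a trivial sketch; the function name is invented):

```python
def unasterisk(phi):
    """Map an L* sentence to the L sentence whose provability it asserts."""
    return phi.replace('*', '')

# The L* sentence on the left means: the L sentence on the right is provable.
print(unasterisk('∃*x(x =* x)'))  # ∃x(x = x)
```
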

Consider the sentence ϕ of L* that is written ∃*x(x =* x). This sentence, by stipulation, means that ∃x(x = x) is provable from the axioms of Euclidean geometry. According to the Quinean criterion plus inferentialism, it incurs existential commitment, because ∃*x, since it behaves inferentially just like an existential quantifier, is an existential quantifier. Now, it is intuitively correct that ∃*x(x =* x) does incur existential commitment: it claims that there is a proof of ∃x(x = x), so it incurs existential commitment to the existence of a proof. So in this case, the inferentialist Quinean gets right that there is existential commitment. But rather clearly only coincidentally so! For now consider the sentence ψ that is written ∀*x(x =* x). Since ∀*x behaves inferentially just like ∀x, by inferentialist Quineanism it incurs no existential commitment. But ψ means that there is a proof of ∀x(x = x), and hence incurs exactly the same kind of existential commitment as ϕ did, which said that there was a proof of ∃x(x = x).

What can the inferentialist Quinean respond? Perhaps this: The language L* is syntactically and inferentially compositional, but not semantically so. The meaning of p ∨* q, namely that the unasterisked version of p ∨* q has a proof, is not composed from the meanings of p and of q, which respectively mean that p has a proof and that q has a proof. But that’s not quite right. For meaning-composition is just a function from meanings to meanings, and there is a function from the meanings of p and of q to the meaning of p ∨* q—it’s just a messy function, rather than the nice function we normally associate with disjunction.

Perhaps what the inferentialist Quinean should do is to insist on the intuitive non-inferentialist semantic compositional meanings for the truth-functional connectives, but not for the quantifiers. This feels ad hoc.

Even apart from Quineanism, I think the above constitutes an argument against inferentialism about logical connectives. For the asterisked connectives of L* do not mean the same thing as their unasterisked variants in L.

Thursday, September 12, 2019

Ordinary language and "exists"

In Material Beings, Peter van Inwagen argues that his view that there are no complex artifacts does not contradict (nearly?) universal human belief. The argument is based on his view that the propositions expressed by ordinary statements like “There are three valuable chairs in this room” do not entail the negation of the Radical Claim that there are no artifacts, for such a proposition does not entail that there exist chairs.

I think van Inwagen is right that such ordinary propositions do not entail the negation of the Radical Claim. But he is wrong in thinking that the Radical Claim does not contradict nearly universal human belief. Van Inwagen makes much of the analogy between his view and the Copernican view that the sun does not move. When ordinary people say things like “The sun moved behind the elms”, they don’t contradict Copernicus. Again, I think he is right about the ordinary claims, but nonetheless Copernicus contradicted nearly universal human belief. That was why Copernicus’ view was so surprising, so counterintuitive (cf. some remarks by Merricks on van Inwagen). One can both say that when people prior to Copernicus said “The sun moved behind the elms” they didn’t contradict Copernicanism and that they believed things that entailed that Copernicus is wrong.

People do not assert everything they believe. They typically assert what is salient. What is normally salient is not that the sun actually moved, but that there was a relative motion between the rays pointing to the elms and to the sun. Nonetheless, if ordinary pre-Copernicans said “The sun doesn’t stand still”, they might well have been contradicting the Copernican hypothesis. But rarely in ordinary life is there occasion to say “The sun doesn’t stand still.” Because of the way pragmatics affects semantics (something that van Inwagen apparently agrees on), we simply cannot assume that the proposition expressed by the English sentence “The sun moved behind the elms” entails the proposition expressed by the English sentence “The sun doesn’t stand still.”

Something similar, I suspect, is true for existential language. When an ordinary person says “There are three chairs in the room”, the proposition they express does not contradict the Radical Thesis. But if an ordinary person says things like “Chairs exist” or “Artifacts exist”, they likely would contradict the Radical Thesis, and moreover, these are statements that the ordinary person would be happy to make in denial of the Radical Thesis. But in the ordinary course of life, there is rarely an occasion for such statements.

This is more a function of pragmatics than of the precise choice of words. Thus, one can say: “Drive slower. Speed limits exist.” The second sentence does not carry ontological commitment to speed limits.

So, how can we check whether an ordinary person believes that tables and chairs exist? I think the best way may be by ostension. We can bid the ordinary person to consider:

  1. People, dogs, trees and electrons.

  2. Holes, shadows and trends.

We remind the ordinary person that we say “There are three holes in this road” or “The shadow is growing”, but of course there are no holes or shadows, while there are people (we might remind them of the Cogito), dogs, trees and (as far as we can tell) electrons. I think any intelligent person will understand what we mean when we say there are no holes or shadows. And then we ask: “So, are tables and chairs in category 2 or in category 1? Do they exist like people, dogs, trees and electrons, or fail to exist like holes, shadows and trends?” This should work even if, like Roy Sorensen, they disagree that there are no shadows; they will still understand what we meant when we said that there are no shadows, and that’s enough for picking out what we meant by “exist”. To put it in van Inwagen’s terms, this brief ostensive discussion will bring intelligent people into the “ontology room”.

And I suspect, though this is an empirical question and I could be wrong, once inducted into the discussion, most people will say that tables and chairs exist (and that they have believed this all along). But, van Inwagen should say, this nearly universal belief is mistaken.

This story neatly goes between van Inwagen’s view that ordinary people don’t believe things patently incompatible with the Radical Theory and Merricks’ view that ordinary people contradict the Radical Theory all the time. Ordinary people do believe things patently incompatible with the Radical Theory, but they rarely express these beliefs. Most ordinary “there exist” statements—whether concerning artifacts or people or particles—do not carry ontological commitment, and those of us who accept the Radical Theory normally aren’t lying when we say “There are three chairs in the room”. But the Radical Theory really is radical.

Tuesday, April 23, 2019

Presentism and vagueness

If presentism is true, then vagueness about the exact moment of cessation of existence implies vagueness about existence: for if it is vague whether an object has ceased to exist at t, then at time t it was, is or will be vague whether the object exists. But it is plausible that there is vagueness about the exact moment of cessation of existence for typical organisms (horses, trees, etc.). On the other hand, vagueness about existence seems to be a more serious logical problem: it makes unrestricted quantifiers vague.

Of course, the eternalist will have a similar problem with vagueness about existence-at-t. But existence-at-t is not fundamental logical existence on eternalism, so perhaps the problem is less serious.

Friday, August 3, 2018

World shuffling and quantifiers

Let ψ be a non-trivial one-to-one map from all worlds to all worlds. (By non-trivial, I mean that there is a w such that ψ(w)≠w.) We now have an alternate interpretation of all sentences. Namely, if I is our “standard” method of interpreting sentences of our favorite language, we have a reinterpretation Iψ where a sentence s reinterpreted under Iψ is true at a world w if and only if s interpreted under I is true at ψ(w). Basically, under Iψ, s says that s correctly describes ψ(actual world).

Under the reinterpretation Iψ all logical relations between sentences are preserved. So, we have here a familiar Putnam-style argument that the logical relations between sentences do not determine the meanings of the sentences. And if we suppose that ψ leaves fixed the actual world, as we surely can, the argument also shows that truth plus the logical relations between sentences do not determine meanings. Moreover, we can suppose that ψ is a probability-preserving map. If so, then all probabilistic relations between sentences will be preserved, and hence the meanings of sentences are not determined by truth and the probabilistic and logical relations between sentences. This is all familiar ground.
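Here is a tiny concrete version of the shuffling argument (my own illustration; the three-world model and all names are invented). An interpretation is modeled as a map from sentences to the set of worlds at which they are true, and entailment as subsethood, which the permutation visibly preserves:

```python
worlds = {'w1', 'w2', 'w3'}
psi = {'w1': 'w2', 'w2': 'w1', 'w3': 'w3'}  # a non-trivial permutation of worlds

# The "standard" interpretation I: each sentence's set of verifying worlds.
I = {'s': {'w1'}, 't': {'w1', 'w3'}}        # s entails t under I

def reinterpret(I, psi):
    # Under I_psi, a sentence is true at w iff under I it is true at psi(w).
    return {sent: {w for w in worlds if psi[w] in ext}
            for sent, ext in I.items()}

I_psi = reinterpret(I, psi)
# Entailment (subsethood of verifying worlds) is preserved by the shuffle:
print(I['s'] <= I['t'], I_psi['s'] <= I_psi['t'])  # True True
```
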

But here is the application that I want. Apply the above to English with its intended interpretation. This results in a language English* that is syntactically and logically just like English but where the intended interpretation is weird. The homophones of the English existential and universal quantifiers in English* behave logically in the same way, but they are not in fact the familiar quantifiers. Hence quantifiers are not defined by their logical relations. I’ve been looking for a simple argument to show this, and this is about as simple as can be.

Saturday, July 21, 2018

Trivial universalizations

Students sometimes find trivial universalizations, like "All unicorns have three horns", confusing. I just overheard my teenage daughter explain this in a really elegant way: She said she has zero Australian friends and zero Australian friends came to her birthday party, so all her Australian friends came to her birthday party.

The principle that if there are n Fs, and n of the Fs are Gs, then all the Fs are Gs is highly intuitive. However, the principle does need to be qualified, which may confuse students: it only works for finite values of n. Still, it seems preferable to except only the infinite case rather than both zero and infinity.
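The zero case is exactly how vacuous universal quantification behaves in programming languages; in Python, for instance, `all` over an empty collection is true (a trivial check, with invented variable names):

```python
australian_friends = []   # she has zero Australian friends
at_party = set()          # and zero of them came to the party

# "All her Australian friends came to her birthday party" is vacuously true:
print(all(f in at_party for f in australian_friends))  # True
```
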

Monday, March 12, 2018

The usefulness of having two kinds of quantifiers

A central Aristotelian insight is that substances exist in a primary way and other things—say, accidents—in a derivative way. This insight implies that use of a single existential quantifier ∃x for both substances and forms does not cut nature at the joints as well as it can be cut.

Here are two pieces of terminology that together not only capture the above insight about existence, but do a lot of other (but closely related) ontological work:

  1. a fundamental quantifier ∃ over substances.

  2. for any y, a quantifier ∃_y x over all the (immediate) modes (tropes) of y.

We can now define:

  • a is a substance iff ∃u(u = a)

  • b is an (immediate) mode of a iff ∃_a x(x = b)

  • f is a substantial form of a substance a iff a is a substance and ∃_a x(x = f): substantial forms are immediate modes of substances

  • b is a (first-level) accident of a substance a iff a is a substance and ∃_a x ∃_x y(y = b & y ≠ x): first-level accidents are immediate modes of substantial forms, distinct from these forms (this qualifier is needed so that God wouldn’t count as having any accidents)

  • f is a substantial form iff ∃u ∃_u x(x = f)

  • b is a (first-level) accident iff ∃u ∃_u x ∃_x y(y = b & y ≠ x).

This is a close variant on the suggestion here.

Thursday, February 2, 2017

Characterizing quantifiers by rules of inference

One standard characterization of quantifiers is that they are bits of speech that follow certain rules of inference (e.g., universal instantiation, existential generalization, etc.). This characterization is surely incorrect.

Let p be any complicated logical truth. Consider the symbols ∃* and ∀* such that ∃*xF and ∀*xF are abbreviations for p ∧ ∃xF and p ∧ ∀xF. Then ∃* and ∀* satisfy exactly the same rules of inference as ∃ and ∀, but they are not quantifiers. The sentence ∃*xF expresses a conjunctive proposition rather than a quantified one.

In other words, the characterization of quantifiers by logical rules misses the hyperintensionality at issue. The same is true of the characterization of any other logical connectives by logical rules.

Tuesday, December 27, 2016

Some weird languages

Platonism would allow one to reduce the number of predicates to a single multigrade predicate Instantiates(x1, ..., xn, p), by introducing a name p for every property. The resulting language could have one fundamental quantifier ∃, one fundamental predicate Instantiates(x1, ..., xn, p), and lots of names. One could then introduce a “for a, which exists” existential quantifier ∃_a in place of every name a, and get a language with one fundamental multigrade predicate, Instantiates(x1, ..., xn, p), and lots of fundamental quantifiers. In this language, we could say that Jim is tall as follows: ∃_Jim x Instantiates(x, tallness).

On the other hand, once we allow for a large plurality of quantifiers we could reduce the number of predicates to one in a different way by introducing a new n-ary existential quantifier ∃_F(x1, …, xn) (with the corresponding ∀_F defined by De Morgan duality) in place of each n-ary predicate F other than identity. The remaining fundamental predicate is identity. Then instead of saying F(a), one would say ∃_F x(x = a). One could then remove names from the language by introducing quantifiers for them as before. The resulting language would have many fundamental quantifiers, but only one fundamental binary predicate, identity. In this language we would say that Jim is tall as follows: ∃_Jim x ∃_Tall y(x = y).

We have two languages, in each of which there is one fundamental predicate and many quantifiers. In the Platonic language, the fundamental predicate is multigrade but the quantifiers are all unary. In the identity language, the fundamental predicate is binary but the quantifiers have many arities.

And of course we have standard First Order Logic: one fundamental quantifier (say, ∃), many predicates and many names. We can then get rid of names by introducing an IsX(x) unary predicate for each name X. The resulting language has one quantifier and many predicates.

So in our search for fundamental parsimony in our language we have a choice:

  • one quantifier and many predicates
  • one predicate and many quantifiers.

Are these more parsimonious than many quantifiers and many predicates? I think so: for if there is only one quantifier or only one predicate, then we can collapse levels—to be a (fundamental) quantifier just is to be ∃ and to be a (fundamental) predicate just is to be Instantiates or identity.

I wonder what metaphysical case one could make for some of these weird fundamental language proposals.

Friday, May 6, 2016

An alternative to quantifier variance

According to quantifier variance (QV), there are many families of quantifiers, including the ordinary English family that quantifies over such things as dogs, chairs, holes and shadows, the abstemious ontologist's family that quantifies only over elementary particles, the organicist family that quantifies over elementary particles and living things, and so on. None of these is in any way primary or more fundamental than the others.

The main motivation of QV is to protect apparently reasonable people (whether ordinary or ontologist) against making lots of false statements. While the abstemious ontologist says "There are no chairs anywhere", she doesn't actually disagree with the ordinary person who says "There are five chairs in the room", as they use different quantifiers.

Here is an interesting alternative. Keep the QV story's claim that the ordinary person's "There are" and the ontologist's "There are" mean something different. But deny that they are both quantifiers. Only the ontologist is quantifying. The ordinary person is doing something else.

What about the plurality of ontologists? Are they all quantifying, or is only one camp quantifying? I suspect that most ontologists are quantifying. And most are saying things that are false. So, unlike QV, I am only interested in saving the ordinary person from making lots of false statements. Ontologists doing ontology do so at their own risk.

Thursday, April 24, 2014

The God quantifier

Hypothesis: There is no fundamental quantifier that includes within its domain both God and something other than God. (Obviously, this is inspired by Jon Jacobs' work on apophaticism.)

The hypothesis is compatible with saying in ordinary English that both God and human beings exist, and that nothing (not even God) is a unicorn. But if we speak Ontologese, a language where all our quantifiers are fundamental, we will need to modify these locutions. Perhaps we will have a fundamental divine existential quantifier D and a fundamental creaturely quantifier ∃, and if in Ontologese we want to give the truth conditions for the ordinary English "Nothing is a unicorn", we may say something like:

  • ~Dx(Unicorn(x)) & ~∃x(Unicorn(x)).
And if we want to give truth conditions for "Something is alive", we may say something like:
  • Dx(Alive(x)) or ∃x(Alive(x)).
(Assuming that Alive(x) is a predicate of Ontologese.)

Of course, it could be that Ontologese doesn't just have a single quantifier for creatures. It might, for instance, have "metaphysically Aristotelian quantification": a quantifier ∃ over (created) substances and a subscripted quantifier ∃_x over the accidents of the substance x. In that case, "Nothing is a unicorn" will have truth conditions:

  • ~Dx(Unicorn(x)) & ~∃x(Unicorn(x)) & ~∃x∃_xy(Unicorn(y)).
(It might seem excessive to say that no accident is a unicorn, but better be safe than sorry.) Likewise, "Something is alive" has the truth conditions:
  • Dx(Alive(x)) or ∃x(Alive(x)) or ∃x∃_xy(Alive(y)).

Now, it may seem wacky to think of a quantifier D that quantifies only over God. But it shouldn't seem so wacky if we recall that Montague-inspired linguistics classifies names as quantifiers (they correspond to functors that lower the arity of a predicate, after all).

Now this leads to an interesting question. Speaking in the ontology room, where we insist that our language cut at the joints, should we say "God exists"? That's a choice. We could adapt the English "exists" when used in the ontology room to go with the fundamental quantifier D or the fundamental quantifier ∃.

We might want to, this being the ontology room after all, make the decision that we will adapt words to the most fundamental meanings we can. But in some sense surely the divine quantifier D is more fundamental than the creaturely quantifier ∃, so in the ontology room we could say: "Only God exists." It is said that Jesus said to St Catherine of Siena: "I am he who is, and you are she who is not." Maybe St Catherine's mystical theology room wasn't that different from the ontology room.

Or we might want to keep as many of the ordinary existence claims unchanged as possible, and so say "Photons exist". Then we might want to say something like "God does not exist but divinely-exists."

But since the ontology room isn't the ordinary context, this is really a matter of decision. My own preference would be to say "Only God exists" in the maximally fundamental ontology room, but to spend a lot of time in less fundamental ontology rooms, ones in which one can say "God exists" and "Photons exist" but not "Holes exist" or "Tables exist."

Friday, March 28, 2014

Non-sentences

Consider:

  1. Many a philosopher was celebrated during his life. His work was culturally influential. And then after he died, he was all but forgotten.
Notice that "His work was culturally influential" has the grammatical form of a sentence, but is not a sentence. It is an open formula with "His" being a free variable, bound ultimately by the "Many a philosopher" quantifier. But there seem to be other contexts in which "His work was culturally influential" seems to be a sentence:
  2. Bergson was celebrated during his life. His work was culturally influential. And then after he died, he was all but forgotten.
So it seems whether something is a sentence is sometimes contextually determined.

Maybe we can say that "his" is actually two words: one word functions as a variable and the other as a name (with reference anaphorically coming from another name). But on grounds of Ockham's razor this seems a poor move.

There is another move one can make here that seems better: The third-person pronoun is always a variable. In (1), it is bound by the "Many a philosopher" quantifier. In (2), it is bound by the "Bergson" quantifier. (Here I am following Montague's insight that names can be seen as functioning as quantifiers.)

Note added later: I think the name-as-variable move doesn't get one out of the contextuality of what's a sentence. Suppose that after I said (2), you said: "He is still quite influential." What you said is clearly a genuine sentence.

Sunday, December 8, 2013

A nominalist reduction

Suppose that there were only four possible properties: heat, cold, dryness and moistness. Then the Platonic-sounding sentences that trouble nominalists could have their Platonic commitments reduced away. For instance, van Inwagen set the challenge of how to get rid of the commitment to properties (or features) in:

  1. Spiders and insects have a feature in common.
On our hypothesis of four properties, this is easy. We just replace the existential quantification by a disjunction over the four properties:
  2. Spiders and insects are both hot, or spiders and insects are both cold, or spiders and insects are both dry, or spiders and insects are both moist.
And other sentences are handled similarly. Some, of course, turn into a mess. For instance,
  3. All but one property are instantiated
becomes:
  4. Something is hot and something is cold and something is dry but nothing is moist, or something is hot and something is cold and something is moist but nothing is dry, or something is hot and something is dry and something is moist but nothing is cold, or something is cold and something is dry and something is moist but nothing is hot.
Of course, this wouldn't satisfy Deep Platonists in the sense of this post, but that post gives reason not to be a Deep Platonist.

And of course there are more than four properties. But as long as there is a finite list of all the possible properties, the above solution works. But in fact the solution works even if the list is infinite, as long as (a) we can form infinite conjunctions (or infinite disjunctions—they are interdefinable by de Morgan) and (b) the list of properties does not vary between possible worlds. Fortunately in regard to (b), the default view among Platonists seems to be that properties are necessary beings.
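With a fixed finite property list the translation scheme is mechanical. A small sketch (the function name and encoding are invented for illustration) that compiles "have a feature in common" into the quantifier-free disjunction:

```python
PROPERTIES = ['hot', 'cold', 'dry', 'moist']  # the assumed exhaustive list

def common_feature_sentence(a, b):
    """Replace 'a and b have a feature in common' by a disjunction
    over the finite list of possible properties."""
    return ', or '.join('%s and %s are both %s' % (a, b, p) for p in PROPERTIES)

print(common_feature_sentence('spiders', 'insects'))
```
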

Wednesday, December 4, 2013

Introducing doppelgangers

Yesterday, I mentioned that one might reinterpret the quantifier symbols in a language so as to introduce doppelgangers: extra quasi-entities that just don't exist, but get to be talked about using standard quantifier inference rules. Here I want to give a bit more detail, and then offer a curious application to the philosophy of mind: an account of how a materialist could use a doppelganged reinterpretation of language to talk like a hard-core dualist. The application shows that it is philosophically crucial that we have a way to distinguish between real quantifiers and mere quasi-quantifiers (in the terminology of the previous post) if we want to distinguish between materialism and dualism.

Onward! Let L be a first-order language with identity. A model M for L will be a pair (D,P), where D is a non-empty set ("domain") and P is a set of subsets ("properties") of D. A doppel-interpretation of L is a pair (I,s) where I is a function from the names and predicates other than identity to D and P respectively and s is a function from names to the set {0,1} ("signature"). The signature function s tells us which name is attached to an ordinary object (0) and which to a doppelganger (1).

Now a substitution vector for a doppel-interpretation (I,s) will be a partial function v from the names and variables of L to D such that v(a)=I(a) whenever a is a name. A signature vector is a partial function f from the names and variables of L to {0,1} such that f(a)=s(a) whenever a is a name. If x is a variable and u is in D, then I will write v(x/u) for the substitution vector that agrees with v except that v(x/u)(x)=u. I.e., v(x/u) takes v and adds or changes the substitution of u for x. Likewise, if f is a signature vector, then f(x/n) agrees with f except that f(x/n)(x)=n for n in {0,1}.

We can now define the notion of a substitution and signature vector pair (v,f) doppel-satisfying a formula F in L under a doppel-interpretation (I,s). We begin with the normal Tarskian inductive stuff for truth-functional connectives:

  • (v,f) doppel-satisfies F or G iff (v,f) doppel-satisfies F or (v,f) doppel-satisfies G
  • (v,f) doppel-satisfies F and G iff (v,f) doppel-satisfies F and (v,f) doppel-satisfies G
  • (v,f) doppel-satisfies ~F iff (v,f) doesn't doppel-satisfy F.
Now we need our quasi-quantifiers:
  • (v,f) doppel-satisfies ∃xF iff for some u in D, (v(x/u),f(x/0)) doppel-satisfies F or for some u in D, (v(x/u),f(x/1)) doppel-satisfies F
  • (v,f) doppel-satisfies ∀xF iff for every u in D, (v(x/u),f(x/0)) doppel-satisfies F and for every u in D, (v(x/u),f(x/1)) doppel-satisfies F.
Then we need atomic formulae:
  • (v,f) doppel-satisfies h=k (where h and k are variables-or-names) iff v(h)=v(k) and f(h)=f(k)
  • (v,f) doppel-satisfies Q(h1,...,hn) where Q is other than identity iff (v(h1),...,v(hn))∈I(Q).
And finally we can define doppel-truth: a sentence S of L is doppel-true provided that it is doppel-satisfied by every substitution and signature vector pair.

It is easy to see that if sm is the usual sentence that asserts that there are m objects, and if D has n objects, then sm is doppel-true if and only if m=2n. It is also easy to see that all the first order rules for quantifiers are valid for our quasi-quantifiers ∃ and ∀.
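These definitions are mechanical enough to check by machine. Here is a minimal Python sketch (my own, not from the post) of doppel-satisfaction for the fragment with identity, connectives, and the quasi-quantifiers; it confirms that with a one-object domain the "there are exactly two objects" sentence comes out doppel-true.

```python
# A minimal sketch (my own, not from the post) of doppel-satisfaction for the
# fragment with identity, connectives, and the quasi-quantifiers. Formulas are
# tuples: ('E', x, F), ('A', x, F), ('not', F), ('and', F, G), ('or', F, G),
# ('=', h, k); v maps variables to D, f maps variables to {0,1}.
def doppel_satisfies(F, v, f, D):
    op = F[0]
    if op == '=':
        _, h, k = F
        return v[h] == v[k] and f[h] == f[k]   # identity needs matching signatures
    if op == 'not':
        return not doppel_satisfies(F[1], v, f, D)
    if op == 'and':
        return all(doppel_satisfies(G, v, f, D) for G in F[1:])
    if op == 'or':
        return any(doppel_satisfies(G, v, f, D) for G in F[1:])
    if op == 'E':   # try each object under each signature bit
        _, x, G = F
        return any(doppel_satisfies(G, {**v, x: u}, {**f, x: n}, D)
                   for u in D for n in (0, 1))
    if op == 'A':
        _, x, G = F
        return all(doppel_satisfies(G, {**v, x: u}, {**f, x: n}, D)
                   for u in D for n in (0, 1))
    raise ValueError(op)

# s2, "there are exactly two objects": ExEy(x≠y & ~Ez(z≠x & z≠y))
neq = lambda a, b: ('not', ('=', a, b))
s2 = ('E', 'x', ('E', 'y', ('and', neq('x', 'y'),
      ('not', ('E', 'z', ('and', neq('z', 'x'), neq('z', 'y')))))))

# With a one-object domain, s2 is doppel-true: the object and its doppelganger.
print(doppel_satisfies(s2, {}, {}, ['a']))  # True
```

The key move is in the quantifier clauses: each object is tried under both signature bits, so the quasi-quantifiers behave as if every object had a doppelganger, while the identity clause keeps an object and its doppelganger distinct.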

Now on to our fake dualism. We need a more complex doppelganging. Specifically, we need to divide our stock of predicates into the mental and non-mental predicates. Start with a materialist's first order language L that includes mental predicates (the materialist may think they are in some sense reducible). Add a predicate SoulOf(x,y), which will count for us as neither mental nor non-mental. I will assume for simplicity that the mental predicates are all unary (e.g., "thinks that the sky is blue")—things get more complicated otherwise, but one can still produce the fake dualism. Now, instead of doppelganging all the objects, we only doppelgang the minded objects. Thus, our models will be triples (D,Dm,P) where Dm is a subset of D (the minded objects, in the intended interpretation). We say that a substitution and signature vector pair (v,f) is licit if and only if f(a)=1 implies v(a)∈Dm ("only members of Dm have doppelgangers"), and in all our definitions we only work with licit pairs. Moreover, our reinterpretations do not need to give any extension to the predicate SoulOf(x,y): that's handled in the semantics.

Finally, we modify doppel-satisfaction for quantifiers and predicates:

  • (v,f) doppel-satisfies ∃xF iff for some u in D, (v(x/u),f(x/0)) doppel-satisfies F or for some u in Dm, (v(x/u),f(x/1)) doppel-satisfies F
  • (v,f) doppel-satisfies ∀xF iff for every u in D, (v(x/u),f(x/0)) doppel-satisfies F and for every u in Dm, (v(x/u),f(x/1)) doppel-satisfies F.
  • (v,f) doppel-satisfies h=k (where h and k are variables-or-names) iff v(h)=v(k) and f(h)=f(k)
  • (v,f) doppel-satisfies Q(h1,...,hn) where Q is non-mental and other than identity iff (v(h1),...,v(hn))∈I(Q) and f(h1)=...=f(hn)=0
  • (v,f) doppel-satisfies Q(h) where Q is mental iff v(h)∈I(Q) and f(h)=1
  • (v,f) doppel-satisfies SoulOf(h,k) iff v(h)=v(k), f(h)=1 and f(k)=0.
And then we say that a sentence is doppel-true when it is doppel-satisfied by every licit pair.

The intended materialist doppel-interpretation (I,s) has the usual materialist interpretation I of the names and predicates other than SoulOf(x,y) (which, recall, gets no extension), sets Dm to be the set of minded objects, and has a signature s such that s(n)=1 when n is a name of a minded object and s(n)=0 otherwise.

Now let's speak an informal version of our doppelganged materialist language. Let M be any mental predicate and Q any non-mental one. Say that a soul is any x such that ∃y(SoulOf(x,y)). Suppose "Jill" is the name of a minded object. The following sentences will be doppel-true:

  • Some objects have souls.
  • Every object that has a soul is a non-soul.
  • Only souls satisfy M.
  • No soul satisfies Q.
  • Jill is a soul.
  • Jill does not satisfy Q.
The doppel-language is strongly dualist. Only the souls have mental properties predicated of them and only the non-souls have non-mental properties (or stand in non-mental relations). A materialist community could stipulate that henceforth their language bears this kind of doppel-interpretation. They could then talk like dualists. But they wouldn't be dualists. Hence the doppelganged ∃ and ∀ aren't really quantifiers.
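The modified clauses can likewise be checked mechanically. Here is a minimal Python sketch (my own; the domain, the predicate extensions THINKS and HEAVY, and the name assignment are all hypothetical) of the existential clause, the mental and non-mental predicate clauses, and the SoulOf clause:

```python
# Sketch (my own; the domain, the predicates, and the name assignment are
# hypothetical) of the modified doppel-satisfaction clauses.
D  = ['rock', 'jill']     # the materialist's domain
Dm = ['jill']             # the minded objects
THINKS = {'jill'}         # extension of a sample mental predicate M
HEAVY  = {'rock', 'jill'} # extension of a sample non-mental predicate Q

def sat(F, v, f):
    op = F[0]
    if op == 'E':              # the signature-1 branch ranges only over Dm (licit pairs)
        _, x, G = F
        return (any(sat(G, {**v, x: u}, {**f, x: 0}) for u in D) or
                any(sat(G, {**v, x: u}, {**f, x: 1}) for u in Dm))
    if op == 'not':
        return not sat(F[1], v, f)
    if op == 'mental':         # M(h): in the extension, and signature 1
        _, ext, h = F
        return v[h] in ext and f[h] == 1
    if op == 'nonmental':      # Q(h): in the extension, and signature 0
        _, ext, h = F
        return v[h] in ext and f[h] == 0
    if op == 'soulof':         # SoulOf(h,k): same object, soul side and body side
        _, h, k = F
        return v[h] == v[k] and f[h] == 1 and f[k] == 0
    raise ValueError(op)

jill = ({'x': 'jill'}, {'x': 1})   # "Jill" names a minded object, signature 1

print(sat(('E', 'y', ('soulof', 'x', 'y')), *jill))   # True: Jill is a soul
print(sat(('nonmental', HEAVY, 'x'), *jill))          # False: Jill doesn't satisfy Q
print(sat(('mental', THINKS, 'x'), *jill))            # True: Jill satisfies M
```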

Assume materialism. If there are n material objects, including m minded objects, then in the doppelganged language it will be true to say something like "There are n+m objects." For every one of the minded objects has a doppelganger.
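The counting claim can be seen directly: the quasi-quantifiers range over the pairs (u,0) for u in D together with (u,1) for u in Dm. A trivial sketch with a hypothetical domain:

```python
# Hypothetical materialist domain with n = 4 objects, m = 2 of them minded.
D  = ['rock', 'tree', 'jill', 'jack']
Dm = ['jill', 'jack']

# The licit (object, signature) pairs that the quasi-quantifiers range over:
doppel_domain = [(u, 0) for u in D] + [(u, 1) for u in Dm]
print(len(doppel_domain))  # 6, i.e. n + m
```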

Tuesday, December 3, 2013

Quasi-quantifiers

In First Order Logic (FOL), there are three aspects to a quantifier:

  • grammar: a quantifier attaches to a formula and generates a new formula binding one variable
  • inference: we have the FOL universal and existential introduction and elimination rules
  • semantics: the Tarskian definition of truth in a model treats quantifiers in a particular way with respect to a domain.
The first two aspects are normally lumped together under "syntactic aspects", but I think keeping them separate is important.

A quasi-quantifier, then, is something that has the grammar and inferential structure of a quantifier, but may have different semantics. Every quantifier is also a quasi-quantifier. A quasi-quantifier that isn't a quantifier—i.e., that has aberrant semantics—will be a mere quasi-quantifier. Quasi-quantifiers can be of types, like "existential" or "universal", that correspond to those of quantifiers. One can have formal languages with existential and universal quasi-quantifiers. In fact, to an approximation English is a language with quasi-quantifiers: "there is" is a mere quasi-quantifier. I will argue for the possibility of mere quasi-quantifiers, connect the issue with fundamentality and then make my suggestion about English.

For any natural number n and quantifier E, let sn(E) be the analogue of the FOL sentence using ∃ that asserts that there are n objects. For instance, s2(E) is the sentence

  • ExEy(x≠y & ~Ez(z≠x & z≠y)).
A sufficient condition for E to be an existential mere quasi-quantifier is that E has the grammar and inferential rules of ∃ but is interpreted in such a way that sn(E) is false in some model with a domain with n objects.

An uninteresting way to get an existential mere quasi-quantifier is by domain restriction. Restrict interpretations in such a way that names must all be in a subdomain of the model and quantifiers are restricted to the subdomain. A non-trivial quasi-quantifier is a mere quasi-quantifier that isn't just a restricted quantifier.

A sufficient condition for E to be an existential non-trivial quasi-quantifier is that E has the grammar and inferential rules of ∃ but is interpreted in such a way that sn(E) is true in some model with a domain with fewer than n objects.

It isn't hard to generate languages with interpretations that make them have non-trivial quasi-quantifiers, though we will have to reinterpret the identity as well. For instance, it's not hard to generate a pair of existential and universal "doppelganging quantifiers"[note 1], that have the same inferential rules as the existential and universal quantifiers, but a sentence gets interpreted in a model as if each item in the model had a doppelganger, where a doppelganger of x stands in the same relations as x, except for identity (x=x but x's doppelganger isn't identical with x), and yet without adding any objects to the domain.[note 2]

Whether a quasi-quantifier is a quantifier depends on how that quasi-quantifier is treated in a Tarski-style definition of truth. Now, when we quasi-quantify also over non-fundamental objects, like holes and shadows, I think the Tarski-style definition of truth will give the truth conditions in terms of how the fundamental objects are (say, perforate or shadowing). This is going to be controversial, but, hey, this is only a blog post.

It follows immediately that when we quasi-quantify also over non-fundamental objects, we have a mere quasi-quantifier. Moreover, it's not going to be a restricted quantifier, so it's a non-trivial quasi-quantifier.

Now the English "there is" to an approximation is a quasi-quantifier. (It's not quite a quasi-quantifier, as the rules of inference for it will not quite match those of ∃ due to vagueness.) Moreover, it quasi-quantifies also over things like holes and defects and chairs, which are non-fundamental. Therefore, it is a mere quasi-quantifier. Nor is it just a restriction of a quantifier, so it is a non-trivial quasi-quantifier.

Once we see this, temptations to quantifier pluralism should be decreased. Of course, we have quasi-quantifier pluralism: There are quantifiers, there are doppelganging quasi-quantifiers, there are English quasi-quantifiers, there are mereological quasi-quantifiers, and so on. But only those of the first kind are quantifiers.

Now, in the formal examples, like that of my doppelganging quantifiers, one can give a paraphrase of the quasi-quantifiers in terms of quantifiers: one just writes out the Tarski definition of truth for each sentence. But in natural language examples, the Tarski definition of truth is not going to be formally statable (at least not in any way tractable to us). And so there won't be a paraphrase of the quasi-quantifier sentences in quantified sentences. Quine won't like that. And what I said above about the Tarski definition when I characterized quasi-quantifiers won't be easy to say in the natural language case. There is much more work to be done here.

And of course just as there is no entity without identity, there is no quasi-quantifier without quasi-identity.

Tuesday, April 23, 2013

Length and other predicates

The length of a pencil is measured in a straight line from tip to end. This is equal to the length of the region of space occupied by the pencil. The length of a rope is measured along the rope, so that the length does not change much when the rope is coiled or uncoiled, and so unless the rope is straightened out, the length of the rope is not equal to any dimension of the region of space occupied by the rope. The length of a bow is (typically) the length of the string plus three inches. On the other hand, the length of a computer program is something quite different, not measured in length-dimensions, but in units like lines or lines-of-code or characters.

Similar points apply to almost all other predicates. These extensions are a matter of decision rather than discovery. When we extend our language to start talking about pencils, ropes, bows and programs, we also need to decide how all the many predicates that could apply to them are to be extended. Quantifier pluralism requires predicate pluralism.

Thursday, September 6, 2012

Hintikka's criticism of the Fregean view of quantifiers

On the Fregean view of quantifiers, quantifiers express properties of properties. Thus, ∀ expresses a property U of Universality and ∃ expresses a property I of instantiatedness. So, ∀xFx says that Fness has universality, while ∃xFx says that Fness has instantiatedness.

One of Hintikka's criticisms is that it is hard to make sense of nested quantifiers. Consider for instance

  1. ∀x∃yF(x,y).
Properties correspond to formulae open in one variable. But in the inner expression ∃yFxy the quantifier is applied to F(x,y) which is open in two variables.

But the Fregean can say this about ∀x∃yFxy. For any fixed value of x, there is a unary predicate λyFxy such that (λyFxy)(y) just in case Fxy. The λ functor takes a variable and any expression possibly containing that variable and returns a predicate. Thus, λy(y=2y) is the predicate that says of something that it is equal to twice itself.

Now, for any predicate Q, there is a property of Qness. So, for any x, there is a property of (λyFxy)ness. In other words, there is a function f from objects to properties, such that f(x) is a property that is had by y just in case F(x,y). We can write f(x)=(λyFxy)ness.

Now, we can replace the inner quantification by its Fregean rendering:

  2. (λyFxy)ness has I.
But (2) defines a predicate that is being applied to x, a predicate we can refer to as λx[(λyFxy)ness has I]. This predicate in turn expresses a property: (λx[(λyFxy)ness has I])ness. And then the outer ∀x quantifier in (1) says that this property has universality. Thus our final Fregean rendering of (1) is:
  3. (λx[(λyFxy)ness has I])ness has U.
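One can mimic this rendering in code, treating properties as predicates over a fixed finite domain (my own illustration; the domain and the sample relation are arbitrary):

```python
# A sketch of the Fregean rendering in code (my own illustration): properties
# are modeled as predicates over a fixed finite domain, and I (instantiatedness)
# and U (universality) are properties of properties.
DOMAIN = range(4)

def has_I(prop):   # the property is instantiated
    return any(prop(y) for y in DOMAIN)

def has_U(prop):   # the property is universal
    return all(prop(x) for x in DOMAIN)

F = lambda x, y: x == y   # an arbitrary sample relation

# λx[(λyFxy)ness has I]: fix x, form the unary predicate λyFxy, ascribe I to it.
outer = lambda x: has_I(lambda y: F(x, y))

# The rendering of ∀x∃yF(x,y) then says that this property has U:
print(has_U(outer))  # True
```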

We can now ask which proposition formation rules were used in the above construction. These seem to be it:

  1. If R is an n-ary relation and 1≤k≤n, then for any x there is an (n−1)-ary relation Rk,x which we might call the <k,x>-contraction of R such that x1,...,xk−1,x,xk+1,...,xn stand in R if and only if x1,...,xk−1,xk+1,...,xn stand in Rk,x.
  2. If p is a function from objects to propositions, then there is a property p* which we might call the propertification of p such that x has p* iff p(x) is true.
  3. There are the properties I and U of instantiation and universality, respectively.
We can think of propertification and contraction as related in an inverse fashion. Given an n-ary relation, contraction can be used to define a function from objects to (n−1)-ary relations, and propertification takes a function from objects to 0-ary relations and defines a 1-ary relation from it (this could be generalized to an operation that takes a function from objects to (n−1)-ary relations and defines an n-ary relation from it).

Observe that if P is a property, i.e., a unary relation, then the contraction P1,x is a proposition (propositions are 0-ary relations), equivalent to the proposition that says of x that it has P.

With these two rules and the relation R that is expressed by the predicate F, start by defining the function f(x) that maps an object x to the property R1,x, and then define the function g(x) that maps an object x to the proposition I1,f(x). Thus, g(x) says that x stands in R to something. Now, we can form the propertification g* of the function g, and to get (1) we just say that g* has U. Thus the proposition that is expressed by (1) will be U1,g*.

One worry about proposition formation rules is that we might fear that if we allow too many, we will be able to form a liar-type sentence. A somewhat arbitrary restriction in the above is that we only get to form a propertification for functions of first-order objects.

Another worry I have is that I made use of the concept of a function, and I'd like to do without that.

Wednesday, February 29, 2012

Sketches towards a theory of quantifiers and quantifier variance

Quantifier variance theorists think that there can be multiple kinds of quantifiers. Thus, there could be quantifiers that range over only fundamental entities, but there could also be quantifiers that range over arbitrary mereological sums. I will call all the quantifiers with a particular range a "quantifier family". A given quantifier family will include its own "for all x (∀x)" and "for some x (∃x)", but may also include "for most x", "for at least two x", and similar quantifiers. I will, however, not include any negative quantifiers like "for no x" or "for at most seven x", or partially negative ones like "for exactly one x". I will also include "singular quantifiers", which we express in English with "for x=Jones". In fact, I will be working with a language that has no names in it as names are normally thought of in logic. Instead of names, there will be a full complement of singular quantifiers, one for each name-as-ordinarily-thought-of; I am happy to identify names with singular quantifiers for the purposes of logic.

Say that quantifier-candidates are operators that take a variable and a formula and return a formula in which that variable is not open. Consider a set F of quantifier-candidates with a partial ordering ≤, where I will read "Q≤R" as "R is at least as strong as Q", and with a symmetric "duality" relation D on F. There is also a subset N of elements of F which will be called "singular". Then F will be a quantifier family provided that

  1. There is a unique maximally strong operator ∀
  2. There is an operator ∃ dual to ∀
  3. If Q is dual to R then it can be proved that QxP iff ~Rx~P
  4. ∃ is minimally strong
  5. If R in F is at least as strong as Q in F, then from RxP one can prove QxP
  6. From P one can prove QxP for any Q
  7. From ∀xP one can prove P (note: open formulae can stand in provability relations)
  8. If Q is singular, then Q is self-dual and Qx(A&B) can be proved from QxA and QxB
We may need to add some more stuff. But this will do for now.

One can set up a model-theory as well. A domain-model for a quantifier family will include a set O of "objects", and a set S of sets of subsets of O, such that if E is a member of S, then E is an upper set, i.e., if a subset A of O is in E, then so is any superset of A as long as it is still a subset of O. A member of S will be called an "evaluator". To get a model from a domain-model, we add the usual set of relations. An interpretation I in a given model for a language with a quantifier family will then involve an ordinary interpretation of the language's predicates, plus an assignment of quantifiers to members of S subject to the constraints that (a) if Q≤R, then I(R) is a subset of I(Q), (b) I(∀) is the evaluator {O}, (c) if Q is dual to R, then A is a member of I(Q) if and only if the complement O∖A is not a member of I(R), (d) if Q is singular, then I(Q) is a filter-base. We can then define truth under I using the basic idea that QxP is true if and only if the set of objects o such that o satisfies P when put in for x is a member of I(Q) (this should be all done more carefully, of course).
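Here is a small Python sketch (my own, under the above definitions) of evaluators over a finite domain, with ∀ interpreted as {O}, its dual ∃ as the non-empty subsets, a "for most" evaluator, and a singular quantifier interpreted as a principal ultrafilter (the domain and the choice of "Jones" are hypothetical):

```python
# Sketch (my own): evaluators as upward-closed families of subsets of a finite
# domain O, with QxP true iff the set of satisfiers of P belongs to Q's evaluator.
from itertools import combinations

O = frozenset({1, 2, 3, 4})

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

forall_eval = {O}                                         # I(∀) = {O}
exists_eval = {A for A in subsets(O) if A}                # dual of ∀: the non-empty sets
most_eval   = {A for A in subsets(O) if len(A) > len(O) / 2}
jones_eval  = {A for A in subsets(O) if 3 in A}           # singular "for x=Jones": principal ultrafilter at 3

def true(evaluator, pred):
    satisfiers = frozenset(x for x in O if pred(x))
    return satisfiers in evaluator

even = lambda x: x % 2 == 0
print(true(exists_eval, even))  # True: some x is even
print(true(most_eval, even))    # False: only 2 of the 4 are even
print(true(jones_eval, even))   # False: "Jones", i.e. 3, is odd
```

Each of these evaluators is an upper set, and the duality constraint (c) checks out: a set belongs to the ∃-evaluator exactly when its complement does not belong to the ∀-evaluator.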

(The interpretation of a name is always an ultrafilter. If we wanted to, we could restrict names to being interpreted as principal ultrafilters, in which case names would correspond to objects, but I think things are more interesting as they are.)

Ideally, we'd want to make sure we have soundness and completeness at this point. I'm basically just making this up as I go along, so there may be a literature on this, and if there is, there will presumably be results about soundness and completeness in it. And maybe we need more rules of inference and maybe I screwed up above. This is just a blog post. Moreover, we might want some further restrictions about how particular quantifiers, like "for most", are interpreted (the above just constrains it to having an evaluator between the evaluators for ∀ and ∃). The point of the above is more to give an example of what a formal characterization of a quantifier family might look like than to give the correct one.

But now it is time for some metaphysics. The notion of a quantifier family is a purely formal one. Moreover, the model-theoretic notion of interpretation that I used above won't be helpful for quantifier variance discussions because it talks of "sets of objects", whereas what "metaphysically counts as an object" varies between quantifier families.

It is easy to come up with quantifier families and perverse interpretations such that under such an interpretation, we would not want to count the members of the quantifier family as "quantifiers". Nor would it be a quantifier variance thesis to say that there are many such families and interpretations, since the existence of such is not controversial.

I think a Thomist can give an answer: a quantifier family in the formal sense is a bona fide quantifier family provided that the family is analogous to some privileged family of quantifiers, say quantifiers over substances. In other words, the different kinds of existence are defined by analogy to existence proper. This won't satisfy typical quantifier variance folk, as I think they don't want a privileged family of quantifiers. But that's the best I can do right now.

Monday, March 21, 2011

Names, quantifiers, Aristotelian logic and one-sided relations

This is going to be a pretty technically involved post and it will be written very badly, as it's really just notes for self. Start with this objection to Aristotelian logic. A good logical system reveals the deep logical structure of sentences. But Aristotelian logic takes as fundamental sentences like:
  1. Everyone is mortal.
  2. Socrates is mortal.
In so doing, Aristotelian logic creates the impression that (1) and (2) have similar logical form, and it is normally thought that modern quantified logic has shown that (1) and (2) have different logical forms, namely:
  3. ∀x(Mortal(x))
  4. Mortal(Socrates).
I shall show, however, that there is a way of thinking about (1) and (2), as well as about (3) and (4), that makes them have the same deep logical form, as the Aristotelian logician makes it seem. (This is a very surprising result for me. Until I discovered these ideas this year, I had a strong antipathy to Aristotelian logic.) Moreover, this will give us some hope of understanding the medieval idea of one-sided relations. The medievals thought, very mysteriously, that creation is a one-sided relation: we are related to God by the created by relation, but God is not related to us by the creates relation.

Now to the technical stuff. Recall Tarski's definition of truth in terms of satisfaction. I think the best way to formulate the definition is by means of a substitution sequence. A substitution sequence s is a finite sequence of variable-object pairs, which I will write using a slash. E.g., "x1"/Socrates,"x2"/Francis,"x3"/Bucephalus is a substitution sequence. The first pair in my example consists of the variable letter "x1", a linguistic entity (actually in the best logic we might have slot identifiers instead of variable letters) and Socrates—not the name "Socrates" (which is why the quotation marks are as they are). We then inductively define the notion of a substitution sequence satisfying a well-formed formula (wff) under an interpretation I. An interpretation I is a function from names and predicates to objects and properties respectively. And then we have satisfaction simpliciter, which is satisfaction under the intended interpretation, and that's what will interest me. So henceforth, I will be the intended interpretation. (I've left out models, because I am interested in truth simpliciter.) We proceed inductively. Thus, s satisfies a disjunction of wffs if and only if it satisfies at least one of the wffs, the negation of a wff if and only if it does not satisfy the wff, and so on.

Quantifiers are a little more tricky. The sequence s satisfies the wff ∀xF iff for every object u, the sequence "x"/u,s (i.e., the sequence obtained by prepending the pair "x"/u) satisfies F. The sequence s satisfies ∃xF iff for some object u, the sequence "x"/u,s satisfies F.

What remains is to define s's satisfaction of an atomic wff, i.e., one of the form P(a1,...,an) where a1,...,an are a sequence of names or variables. The standard way of doing this is as follows. Let u1,...,un be a sequence of objects defined as follows. If ai is a variable "x", then we let ui be the first object u occurring in s paired with the variable "x". If for some i there is no such pair in s, then we say s doesn't satisfy the formula. If ai is a name "n", then we let ui=I("n"). We then say that s satisfies P(a1,...,an) if and only if u1,...,un stand in I(P).

Now notice that while the definition of satisfaction for quantified sentences is pretty neat, the definition of satisfaction for atomics is really messy, because it needs to take into account the question of which slot of the predicate has a variable in it and which one has a name.

There is a different way of doing this. This starts with the Montague grammar way of thinking about things, on which words are taken to be functors from linguistic entities to linguistic entities. Let us ask, then, what kind of functors are represented by names. Here is the answer that I think is appealing. A name, say "Socrates", is a functor from wffs with an indicated blank to wffs. In English, the name takes a wff like "____ likes virtue" and returns the wff (in this case sentence) "Socrates likes virtue". (The competing way of thinking of names is as zero-ary functors. But if one does it this way, one also needs variables as another kind of zero-ary functor, which I think is unappealing since variables are really just a kind of slot, or else one has a mess in treating atomics differently depending on which slots are filled with names and which with variables.) We can re-formulate First Order Logic so that a name like "Socrates" is (or at least corresponds to) a functor from wff-variable pairs to new wffs. Thus, when we apply the functor "Socrates" to the wff "Mortal(x)" and the variable "x", we get the wff (sentence, actually) "Mortal(Socrates)". And the resulting wff no longer has the variable "x" freely occurring in it. But this is exactly what quantifiers do. For instance, the universal quantifier is a functor that takes a wff and a variable, and returns a new wff in which the variable does not freely occur.

If we wanted the grammar to indicate this with particular clarity, instead of writing "Rides(Alexander, Bucephalus)", we would write: "Alexanderx Bucephalusy Rides(x,y)". And this is syntactically very much like "∀x∀y Rides(x,y)".

And if we adopted this notation, the Tarski definition of satisfaction would change. We would add a new clause for the satisfaction of a name-quantified formula: s satisfies nxF, where "n" is a name, if and only if "x"/I("n"),s satisfies F. Now once we got to the satisfaction of an atomic, the predicate would only be applied to variables, never to names. And so we could more neatly say that s satisfies P(x1,...,xn) if and only if every variable occurs in the substitution sequence and u1,...,un stand in I(P) where ui is the first entity u occurring in s in a pair of the form "xi"/u.  Neater and simpler, I think.
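A small Python sketch (my own) of this revised definition, with names as quantifiers and atomic satisfaction applied only to variables (the domain, the property, and the name interpretations are toy examples):

```python
# Sketch (my own) of Tarski satisfaction with names treated as quantifiers.
# Substitution sequences are lists of (variable, object) pairs, newest first,
# so atomic wffs are only ever evaluated on variables.
MORTAL = {'socrates', 'plato'}                       # toy unary property
INTERP = {'Socrates': 'socrates', 'Plato': 'plato'}  # intended interpretation of names
DOMAIN = {'socrates', 'plato'}

def satisfies(s, formula):
    op = formula[0]
    if op == 'atom':                  # ('atom', property, variable)
        _, prop, x = formula
        for var, obj in s:            # the first pair for x wins
            if var == x:
                return obj in prop
        return False                  # no pair for x: not satisfied
    if op == 'forall':                # ('forall', x, F): ordinary ∀x
        _, x, F = formula
        return all(satisfies([(x, u)] + s, F) for u in DOMAIN)
    if op == 'name':                  # ('name', n, x, F): the name-quantifier n_x
        _, n, x, F = formula
        return satisfies([(x, INTERP[n])] + s, F)
    raise ValueError(op)

# "∀x Mortal(x)" and "Socrates_x Mortal(x)" now have the same logical form,
# differing only in which quantifier clause gets used:
print(satisfies([], ('forall', 'x', ('atom', MORTAL, 'x'))))            # True
print(satisfies([], ('name', 'Socrates', 'x', ('atom', MORTAL, 'x'))))  # True
```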

Names, thus, can be seen as quantifiers. It might be thought that there is a crucial disanalogy between names and the universal/existential quantifiers, in that there are many names, and only one universal and only one existential quantifier. But the latter point is not clear. In a typed logic, there may be as many universal quantifiers as types, and as many existential ones as types, once again. And the number of types may be world-dependent, just as the number of objects is.

If I am right, then if we wanted to display the logical structure of (1) and (2), or of (3) and (4) for that matter, we would respectively say:
  5. ∀x Mortal(x)
  6. Socratesx Mortal(x).
And there is a deep similarity of logical structure—we simply have different quantifiers. And so the Aristotelian was right to see these two as similar.

Now, the final little bit of stuff. Obviously, if "m" and "n" are two names, then:
  7. "mxnyF(x,y)" is true iff "nymxF(x,y)" is true,
just as:
  8. "∀x∀yF(x,y)" is true iff "∀y∀xF(x,y)" is true.
But the two sentences in (8), although they are logically equivalent, arguably express different propositions. And I submit that so do the two sentences in (7). And we even have a way of marking the difference in English, I think. Ordinarily, what the left hand side in (7) says is that u has the property Px[nyF(x,y)] while the right hand side in (7) says that v has the property Py[mxF(x,y)], where u and v are what "m" and "n" respectively denote, and PxH(x) is the (abundant) property corresponding to the predicate H (the P-thingy is like the lambda functor, except it returns a property, not a predicate). These are distinct claims.

The medievals then claim that in the case of God we have this. They say that "Godx nyF(x,y)" is true in virtue of "ny GodxF(x,y)" being true. It is to the referent of "n" that the property Py[GodxF(x,y)] is attributed, and the sentence that seems to attribute a property to God is to be analyzed in terms of the one that attributes a property to the referent of "n".