Showing posts with label logic.

Thursday, November 13, 2025

Symmetric relations and logic

Suppose Alice and Bob are friends, and that friendship is a fundamental relation. Consider the facts expressed by these two sentences:

  1. Alice and Bob are friends.

  2. Bob and Alice are friends.

It is implausible that these are different facts. For if they were different facts, they would both be fundamental facts (otherwise, which of them would be the fundamental one?), and we would be multiplying fundamental facts beyond necessity—only one of the two is needed in the totality of fundamental facts.

Furthermore, I think that the propositions expressed by (1) and (2) are the same. Here’s one reason to think this. Imagine a three-dimensional written language where plural symmetric predicates like “are friends” are written (say, laser-inscribed inside a piece of glass) with “and are friends” on one horizontal layer, with “Alice” on a layer below the “and” and “Bob” on a layer above it. If (1) and (2) express different propositions, we would have to ask which of them is a better translation of the three-dimensional language. But surely there is no fact about that.

If this is right, then First Order Logic (FOL) fails to accurately represent propositions about fundamental relations, by having two atomic sentences, F(a,b) and F(b,a), where there is only one fundamental fact. Moreover, FOL will end up having non-trivial proofs whose conclusion expresses the same proposition as the premise, since we will presumably have an axiom like ∀x∀y(F(x,y)→F(y,x)) that lets us prove F(b,a) from F(a,b). This is not the only example of this phenomenon. Take the proof that ∀xF(x) follows from ∀yF(y), even though surely the two express the same proposition, namely that everything is F.
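The contrast can be made concrete with a toy encoding (my illustration, not part of FOL itself): represent atomic sentences first as ordered tuples, and then with the argument places of a symmetric predicate forming an unordered set, in the spirit of the three-dimensional glass language.

```python
# Toy illustration (not part of FOL): atomic sentences with ordered
# argument places vs. an unordered representation for a symmetric
# predicate like "are friends".

# FOL-style representation: argument order matters.
fol_1 = ("F", ("a", "b"))   # F(a,b): Alice and Bob are friends
fol_2 = ("F", ("b", "a"))   # F(b,a): Bob and Alice are friends
assert fol_1 != fol_2       # two distinct atomic sentences

# Unordered representation: the argument places form one unordered pair.
fact_1 = ("F", frozenset({"a", "b"}))
fact_2 = ("F", frozenset({"b", "a"}))
assert fact_1 == fact_2     # one and the same fact
```

On the second representation there is nothing for a symmetry axiom to prove: the "two" sentences are the same object.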

In particular, the logic of sentences appears to differ from the logic of propositions, since the proposition that Bob and Alice are friends follows by reiteration from the proposition that Alice and Bob are friends if they are the same proposition, but sentence (2) does not follow from sentence (1) by reiteration (nor is this true for the FOL versions).

If we think there is a One True Logic, it presumably will be a logic of propositions rather than sentences, then. But what it will be like is a difficult question, to answer which we will have to have a worked-out theory of when we have the same proposition and when we have different ones.

Monday, April 28, 2025

Inferentialism and the fictitious isolated hydrogen atom

This is another attempt at an argument against inferentialism about logical constants.

Given a world w, let w* be a world just like w except that it has added to it an extra spatiotemporally disconnected island universe containing exactly one hydrogen atom with a precisely specified wavefunction ψ0. Suppose that in the actual world there is no such isolated hydrogen atom. Now, given a nice first-order language L describing our world, let L* be a language whose constants are the same as the constants of L with an asterisk added to every logical constant, name and predicate. Given a sentence ϕ of L, let ϕ* be the corresponding sentence of L*—i.e., the sentence with all of L’s constants asterisked.

Let the rules of inference of L* be the same as those of L with asterisks added as needed.

Let the semantics of L* be as follows:

  • Every predicate P* in L* means the same thing as P in L.

  • Every name a* in L* means the same thing as a in L.

  • Any sentence ϕ* in L* without quantifiers means the same thing as ϕ in L.

  • But if ϕ* has a quantifier, then ϕ* means that ϕ would be true if there were an extra spatiotemporally disconnected island universe containing exactly one hydrogen atom with wavefunction ψ0.

Thus, ϕ* is true in world w if and only if ϕ is true in w*.

Observe that because L contains only names for things that exist in the actual world, and hence not for the extra hydrogen atom or its components, an atomic sentence P(a1,...,an) in L is true if and only if the corresponding sentence P*(a1*,...,an*) is true in L*.

Logical inferentialism tells us that the logical constants of L* mean the same thing as those of L, modulo asterisks. After all, modulo asterisks, we have the same inferences, the same meanings of names, and the same meanings of predicates. But this is false: for if ∃* in L* were an existential quantifier, then it would be true that there exists an isolated hydrogen atom with wavefunction ψ0. But there is none such.

Thursday, March 6, 2025

Definitions

In the previous post, I offered a criticism of defining logical consequence by means of proofs. A more precise way to put my criticism would be:

  1. Logical consequence is equally well defined by (i) tree-proofs or by (ii) Fitch-proofs.

  2. If (1), then logical consequence is either correctly defined by (i) and correctly defined by (ii) or it is not correctly defined by either.

  3. If logical consequence is correctly defined by one of (i) and (ii), it is not correctly defined by the other.

  4. Logical consequence is not both correctly defined by (i) and correctly defined by (ii). (By 3)

  5. Logical consequence is neither correctly defined by (i) nor by (ii). (By 1, 2, and 4)

When writing the post I had a disquiet about the argument, which I think amounts to a worry that there are parallel arguments that are bad. Consider the parallel argument against the standard definition of a bachelor:

  6. A bachelor is equally well defined as (iii) an unmarried individual that is a man or as (iv) a man that is unmarried.

  7. If (6), then a bachelor is either correctly defined by (iii) and correctly defined by (iv) or it is not correctly defined by either.

  8. If a bachelor is correctly defined by one of (iii) and (iv), it is not correctly defined by the other.

  9. A bachelor is not both correctly defined by (iii) and correctly defined by (iv). (By 8)

  10. A bachelor is neither correctly defined by (iii) nor by (iv). (By 6, 7, and 9)

Whatever the problems of the standard definition of a bachelor (is a pope or a widower a bachelor?), this argument is not a problem. Premise (8) is false: there is no problem with saying that both (iii) and (iv) are good definitions, given that they are equivalent as definitions.

But now can’t the inferentialist say the same thing about premise (3) of my original argument?

No. Here’s why. That ψ has a tree-proof from ϕ is a different fact from the fact that ψ has a Fitch-proof from ϕ. It’s a different fact because it depends on the existence of a different entity—a tree-proof versus a Fitch-proof. We can put the point here in terms of grounding or truth-making: the grounds of one involve one entity and the grounds of the other involve a different entity. On the other hand, that Bob is an unmarried individual who is a man and that Bob is a man who is unmarried are the same fact, and have the same grounds: Bob’s being unmarried and Bob’s being a man.

Suppose one polytheist believes in two necessarily existing and essentially omniscient gods, A and B, and defines truth as what A believes, while her coreligionist defines truth as what B believes. The two thinkers genuinely disagree as to what truth is, since for the first thinker the grounds of a proposition’s being true are beliefs by A while for the second the grounds are beliefs by B. That necessarily each definition picks out the same truth facts does not save the definition. A good definition has to be hyperintensionally correct.

Logical consequence

There are two main accounts of ψ being a logical consequence of ϕ:

  • Inferentialist: there is a proof from ϕ to ψ.

  • Model theoretic: every model of ϕ is a model of ψ.
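The model-theoretic account is easy to sketch for propositional logic (a simplification of the first-order case at issue here; the function names are mine): ψ follows from ϕ just in case every valuation making ϕ true makes ψ true.

```python
from itertools import product

# A minimal propositional sketch of the model-theoretic account:
# psi is a consequence of phi iff every valuation ("model") making
# phi true also makes psi true. Formulas are functions of a valuation.

def consequence(phi, psi, atoms):
    """Check that every model of phi is a model of psi."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if phi(v) and not psi(v):
            return False  # found a model of phi that is not a model of psi
    return True

phi = lambda v: v["p"] and v["q"]   # p ∧ q
psi = lambda v: v["q"] or v["r"]    # q ∨ r
print(consequence(phi, psi, ["p", "q", "r"]))  # True:  p ∧ q ⊨ q ∨ r
print(consequence(psi, phi, ["p", "q", "r"]))  # False: q ∨ r ⊭ p ∧ q
```

Already at this toy level the Benacerraf-style worry below applies: valuations could just as well have been coded as tuples, sets, or functions.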

Both suffer from a related problem.

On inferentialism, the problem is that there are many different concepts of proof, all of which yield an equivalent relation between ϕ and ψ. First, we have a distinction as to how the structure of a proof is indicated: is it a tree, a sequence of statements set off by subproof indentation, or something else? Second, we have a distinction as to the choice of primitive rules. Do we, for instance, have only pure rules like disjunction-introduction, or do we allow mixed rules like De Morgan? Do we allow conveniences like ternary conjunction-elimination, or idempotence rules? Which truth-functional symbols do we take as undefined primitives and which ones do we take as abbreviations for others (e.g., maybe we just have a Sheffer stroke)?

It is tempting to say that it doesn’t matter: any reasonable answers to these questions make exactly the same ψ be a logical consequence of the same ϕ.

Yes, of course! But that’s the point. All of these proof systems have something in common which makes them "reasonable"; other proof systems, like ones including the rule of arbitrary statement introduction, are not reasonable. What makes them reasonable is that the proofs they yield capture logical consequence: there is a proof from ϕ to ψ precisely when ψ logically follows from ϕ. The concept of logical consequence is thus something that goes beyond them.

None of these are the definition of proof. This is just like the point we learn from Benacerraf that none of the set-theoretic “constructions of the natural numbers” like 3 = {0, 1, 2} or 3 = {{{0}}} gives the definition of the natural numbers. The set theoretic constructions give a model of the natural numbers, but our interest is in the structure they all have in common. Likewise with proof.

The problem becomes even worse if we take a nominalist approach to proof like Goodman and Quine do, where proofs are concrete inscriptions. For then what counts as a proof depends on our latitude with regard to the choice of font!

The model theoretic approach has a similar issue. A model, on the modern understanding, is a triple (M,R,I) where M is a set of objects, R is a set of relations and I is an interpretation. We immediately have the Benacerraf problem that there are many set-theoretic ways to define triples, relations and interpretations. And, besides that, why should sets be the only allowed models?

One alternative is to take logical consequence to be primitive.

Another is not to worry, but to take the important and fundamental relation to be metaphysical consequence, and be happy with logical consequence being relative to a particular logical system rather than something absolute. We can still insist that not everything goes for logical consequence: some logical systems are good and some are bad. The good ones are the ones with the property that if ψ follows from ϕ in the system, then it is metaphysically necessary that if ϕ then ψ.

Wednesday, December 11, 2024

Correction to "Goodman and Quine's nominalism and infinity"

In an old post, I said that Goodman and Quine can’t define the concept of an infinite number of objects using their logical resources. Allen Hazen corrected me in a comment in the specific context of defining infinite sentences. But it turns out that I wasn’t just wrong about the specific context of defining infinite sentences: I was almost entirely wrong.

To see this, let’s restrict ourselves to non-gunky worlds, where all objects are made of simples. Suppose, further, that we have a predicate F(x) that says that an object x is finite. This is nominalistically and physicalistically acceptable by Goodman and Quine’s standards: it states a physical feature of a physical object, namely its size qua made of simples. (If the simples all have some finite amount of energy with some positive minimum, F(x) will be equivalent to saying x has a finite energy.)

Now, this doesn’t solve the problem by itself. To say that an object x is finite is not the same as saying that the number of objects with some property is finite. But I came across a cute little trick to go from one to the other in the proof of Proposition 7 of this paper. The trick, transposed to the non-gunky mereological setting, is this. The following two statements are equivalent in non-gunky worlds satisfying appropriate mereological axioms:

  1. The number of objects x satisfying G(x) is finite.

  2. There is a finite object z such that for any objects x and y with G(x) and G(y), if x ≠ y, then x and y differ inside z (i.e., there is a part of z that is a part of one object but not of the other).

To see the equivalence, suppose (2) is true. Then if z has n simples, and if x is any object satisfying G(x), then all objects y satisfying G(y) differ from x within these n simples, so there are at most 2^n objects satisfying G. Conversely, if there are finitely many satisfiers of G, there will be a finite object z that contains a simple of difference between x and y for every pair of satisfiers x and y of G (where a simple of difference is a simple that is a part of one but not the other), and any two distinct satisfiers of G will differ inside z.
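The bound can be checked concretely in a toy model where objects are sets of numbered simples (all the particular names and numbers are mine): since any two distinct satisfiers differ inside z, the map x ↦ x ∩ z is injective, so there are at most 2^|z| satisfiers.

```python
# Toy model: objects are frozensets of simples. If any two distinct
# satisfiers of G differ inside z, then x ↦ x ∩ z is injective on the
# satisfiers, so there are at most 2^len(z) of them.

z = frozenset({1, 2, 3})                 # a "finite object" with 3 simples
satisfiers = [frozenset(s) | {10}        # some toy satisfiers of G
              for s in [{1}, {2}, {1, 2}, {3}]]

# Every pair of distinct satisfiers differs inside z:
for x in satisfiers:
    for y in satisfiers:
        if x != y:
            assert (x ^ y) & z           # symmetric difference meets z

traces = {x & z for x in satisfiers}     # the "differences inside z"
assert len(traces) == len(satisfiers)    # x ↦ x ∩ z is injective
assert len(satisfiers) <= 2 ** len(z)    # hence at most 2^3 = 8 satisfiers
```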

I said initially that I was almost entirely wrong. In thoroughly gunky worlds, all objects are infinite in the sense of having infinitely many parts, so a mereologically-based finiteness predicate won’t help. Nor will a volume or energy-based one, because we can suppose a gunky world with finite total volume and finite total energy. So Goodman and Quine had better hope that the world isn’t thoroughly gunky.

Friday, November 8, 2024

No fix for Goodman and Quine's counting

In yesterday’s post, I noted that Goodman and Quine’s nominalist mereological definition of what it is to say that there are more cats than dogs fails if there are cats that are conjoint twins. This raises the question whether there is some other way of using the same ontological resources to generate a definition of “more” that works for overlapping objects as well.

I think the answer is negative. First, note that GQ’s project is explicitly meant to be compatible with there being a finite number of individuals. In particular, thus, it needs to be compatible with the existence of mereological atoms, individuals with no proper parts, which every individual is a fusion of. (Otherwise, there would have to be no individuals or infinitely many. For every individual has an atom as a part, since otherwise it has an infinite regress of parts. Furthermore, every individual must be a fusion of the atoms it has as parts; otherwise the supplementation axiom will be violated.) Second, GQ avail themselves of one non-mereological tool: size comparisons (which I think must be something like volumes). And then it is surely a condition of adequacy on their theory that it be compatible with the logical possibility that there are finitely many individuals, every individual is a fusion of its atoms, and the atoms are all the same size. I will call worlds like that “admissible”.

So, here are GQ’s theoretical resources for admissible worlds. There are individuals, made of atoms, and there is a size comparison. The size comparison between two individuals is equivalent to comparing the cardinalities of the sets of atoms the individuals are made of, since all the atoms are the same size. In terms of expressive power, their theory, in the case of admissible worlds, is essentially that of monadic second order logic with counting, MSO(#), restricted to finite models. (I am grateful to Allen Hazen for putting me on to the correspondence between GQ and MSO.) The atoms in GQ correspond to objects in MSO(#) and the individuals correspond to (extensions of) monadic predicates. The differences are that MSO(#) will have empty predicates and will distinguish objects from monadic predicates that have exactly one object in their extension, while in GQ the atoms are just a special (and definable) kind of individual.

Suppose now that GQ have some way of using their resources to define “more”, i.e., find a way of saying “There are more individuals satisfying F than those satisfying G.” This will be equivalent to MSO(#) defining a second-order counting predicate, one that essentially says “The set of sets of satisfiers of F is bigger than the set of sets of satisfiers of G”, for second-order predicates F and G.

But it is known that the definitional power of MSO(#) over finite models is precisely such as to define semi-linear sets of numbers. However, if we had a second-order counting predicate in MSO(#), it would be easy to define binary exponentiation. For the number of objects satisfying predicate F is equal to two raised to the power of the number of objects satisfying G just in case the number of singleton subsets of F is equal to the number of subsets of G. (Compare in the GQ context: the number of atoms of type F is equal to two to the power of the number of atoms of type G provided that the number of atoms of type F is one plus the number of individuals made of the atoms of type G.) And of course equinumerosity can be defined (over finite models) in terms of “more”, while the set of pairs (n, 2^n) is clearly not semi-linear.
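The equivalence behind this definition of exponentiation is easy to verify in a toy finite model (illustrative names only): the singleton subsets of F are in one-to-one correspondence with F, and the subsets of G number 2^|G|.

```python
from itertools import chain, combinations

# Toy check: |F| = 2^|G| just in case the number of singleton subsets
# of F equals the number of subsets of G.

def subsets(s):
    """All subsets of a finite set, as tuples."""
    return list(chain.from_iterable(
        combinations(sorted(s), r) for r in range(len(s) + 1)))

F = set(range(8))        # 8 objects satisfying F
G = {"a", "b", "c"}      # 3 objects satisfying G

singletons_of_F = [{x} for x in F]            # exactly |F| of them
assert len(singletons_of_F) == len(F)
assert len(subsets(G)) == 2 ** len(G)         # 2^3 = 8 subsets of G
assert len(singletons_of_F) == len(subsets(G))  # so |F| = 2^|G|
```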

One now wants to ask a more general question. Could GQ define counting of individuals using some other predicates on individuals besides size comparison? I don’t know. My guess would be no, but my confidence level is not that high, because this deals in logic stuff I know little about.

Tuesday, October 24, 2023

Does denying bivalence get us out of the logical argument for fatalism?

Consider this seemingly standard argument for logical fatalism.

  1. It is true that you will ϕ or it is true that you will not ϕ.

  2. If something true now is incompatible with its being true that p, then p is not within your power.

  3. If you are free with respect to ϕing, then it is within your power that you will ϕ and it is within your power that you not ϕ.

  4. That you will ϕ and that you will not ϕ are incompatible.

  5. So, if it is true that you will ϕ, then it is not within your power that you will not ϕ. (2, 4)

  6. So, if it is true that you will ϕ, then you are not free with respect to ϕing. (3, 5)

  7. Also, if it is true that you will not ϕ, then it is not within your power that you will ϕ. (2, 4)

  8. So, if it is true that you will not ϕ, then you are not free with respect to ϕing. (3, 7)

  9. So, you are not free with respect to ϕing. (1, 8)

Many open futurists want to refute arguments for logical fatalism by supposing that in cases of freedom, that you will ϕ is indeterminate (and hence neither true nor false), and that you will not ϕ is also indeterminate, which allows them to deny premise 1 of the above argument.

But now consider this argument.

  10. It is now indeterminate that you will ϕ.

  11. Necessarily, p if and only if it is true that p.

  12. So, it is true that it is now indeterminate that you will ϕ. (10, 11)

  13. That it is indeterminate that you will ϕ and that it is true you will ϕ are incompatible.

  14. That it is indeterminate that you will ϕ and that you will ϕ are incompatible. (11, 13)

  15. If something true now is incompatible with its being true that p, then p is not within your power.

  16. If you are free with respect to ϕing, then it is within your power that you will ϕ.

  17. So, that you will ϕ is not within your power. (12, 14, 15)

  18. So, you are not free with respect to ϕing. (16, 17)

Premise 15 of this argument is the same as premise 2 of the first argument. Premise 16 is an even less controversial version of premise 3. So anybody who is impressed by the first argument will be impressed by premises 15 and 16. Premise 13 is obviously true, and is an immediate consequence of the fact that a proposition that is indeterminate is neither true nor false.

Premise 11 is the plausible Tarski T-schema (necessitated, because we can think of the T-schema as an axiom). It has been questioned, but it is still very plausible.

Finally, premise 10 is a commitment of our open futurist.

So, unless our open futurist denies the T-schema, the supposition of indeterminacy leads to fatalism just as determinacy did!

Suppose we deny the T-schema. Nonetheless, even without the T-schema to back them up, 12 and 14 are still plausible as they stand, and so we still have a pretty plausible argument for fatalism, at least one that should be plausible by the open futurist’s lights.

I am not an open futurist. I just get out of the arguments by denying 2 and 15. Easy.

Monday, July 10, 2023

Partially defined predicates

Is cutting one head off a two-headed person a case of beheading?

Examples like this are normally used as illustrations of vagueness. It’s natural to think of cases like this as ones where we have a predicate defined over a domain and being applied outside it. Thus, “is being beheaded” is defined over n-headed animals that are being deprived of all heads or of no heads.

I don’t like vagueness. So let’s put aside the vagueness option. What else can we say?

First, we could say that somehow there are deep facts about the language and/or the world that determine the extension of the predicate outside of the domain where we thought we had defined it. Thus, perhaps, n-headed people are beheaded when all heads are cut off, or when one head is cut off, or when the number of heads cut off is sufficient to kill. But I would rather not suppose a slew of facts about what words mean that are rather mysterious.

Second, we could say that sentences using predicates outside of their domain lack truth value. But that leads to a non-classical logic. Let’s put that aside.

I want to consider two other options. The first, and simplest, is to take the predicates to never apply outside of their domain of definition. Thus,

  1. False: Cutting one head off Dikefalos (who is two headed) is a beheading.

  2. True: Cutting one head off Dikefalos is not a beheading

  3. False: Cutting one head off Dikefalos is a non-beheading.

  4. True: Cutting one head off Dikefalos is not a non-beheading.

(Since non-beheading is defined over the same domain as beheading). If a pre-scientific English-speaking people never encountered whales, then in their language:

  5. False: Whales are fish.

  6. True: Whales are not fish.

  7. False: Whales are non-fish.

  8. True: Whales are not non-fish.

The second approach is a way modeled after Russell’s account of definite descriptions: a sentence using a predicate includes the claim that the predicate is being used in its domain of definition and, thus, all of the eight sentences exhibited above are false.

I don’t like the Russellian way, because it is difficult to see how to naturally extend it to cases where the predicate is applied to a variable in the scope of a quantifier. On the other hand, the approach of taking the undefined predicates to be false is very straightforward:

  9. False: Every marine mammal is a fish.

  10. False: Every marine mammal is a non-fish.

This leads to a “very strict and nitpicky” way of taking language. I kind of like it.
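The strict option is easy to model (a sketch with made-up predicates and domains, not the post's own formalism): a partially defined predicate and its complement are both false outside the domain of definition, while their negations come out true.

```python
# Sketch of the "strict" semantics: a partial predicate and its
# complement are both false outside the domain of definition.
# All names (KNOWN, FISH, etc.) are illustrative.

KNOWN = {"trout", "shark", "dog", "cow"}   # domain over which "fish" is defined
FISH = {"trout", "shark"}

def is_fish(x):
    return x in KNOWN and x in FISH        # false outside the domain

def is_non_fish(x):
    return x in KNOWN and x not in FISH    # also false outside the domain

print(is_fish("whale"))       # False: "Whales are fish" is false
print(is_non_fish("whale"))   # False: "Whales are non-fish" is also false
print(not is_fish("whale"))   # True:  "Whales are not fish" is true
print(is_non_fish("dog"))     # True:  inside the domain, classical behavior
```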

Monday, July 5, 2021

Disjunctive predicates

I have found myself thinking these two thoughts, on different occasions, without ever noticing that they appear contradictory:

  1. Other things being equal, a disjunctive predicate is less natural than a conjunctive one.

  2. A predicate is natural to the extent that its expression in terms of perfectly natural predicates is shorter. (David Lewis)

For by (2), the predicates “has spin or mass” and “has spin and mass” are equally natural, but by (1) the disjunctive one is less natural.

There is a way out of this. In (2), we can specify that the expression is supposed to be done in terms of perfectly natural predicates and perfectly natural logical symbols. And then we can hypothesize that disjunction is defined in terms of conjunction (p ∨ q iff ∼(∼p ∧ ∼q)). Then “has spin or mass” will have the naturalness of “doesn’t have both non-spin and non-mass”, which will indeed be less natural than “has spin and mass” by (2) with the suggested modification.
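The modified measure can be sketched with formulas as nested tuples (the length function is my own toy metric, not Lewis's): expanding a disjunction via De Morgan makes it strictly longer than the corresponding conjunction.

```python
# Toy measure of naturalness: count symbols in an expression over
# perfectly natural predicates plus the primitives "and" and "not",
# with p ∨ q abbreviating ¬(¬p ∧ ¬q) as hypothesized above.

def length(formula):
    """Count symbols in a nested-tuple formula like ('and', 'spin', 'mass')."""
    if isinstance(formula, str):
        return 1
    return 1 + sum(length(part) for part in formula[1:])

conj = ("and", "spin", "mass")                             # has spin and mass
disj = ("not", ("and", ("not", "spin"), ("not", "mass")))  # has spin or mass, expanded

print(length(conj))  # 3
print(length(disj))  # 6: the disjunction comes out less natural
```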

Interestingly, this doesn’t quite solve the problem. For any two predicates whose expressions in terms of perfectly natural predicates and perfectly natural logical symbols are countably infinite are equally natural by the modified version of (2). And thus a countably infinite disjunction of perfectly natural predicates will be just as natural as a countably infinite conjunction of perfectly natural predicates, thereby contradicting (1) (the De Morgan expansion of the disjunction will not change the kind of infinity we have).

Perhaps, though, we shouldn’t worry about infinite predicates too much. Maybe the real problem with the above is the question of how we are to figure out which logical symbols are perfectly natural. In truth-functional logic, is it conjunction and negation, is it negation and material conditional, is it nand, is it nor, or is it some weird 7-ary connective? My intuition goes with conjunction and negation, but I think my grounds for that are weak.

Thursday, November 5, 2020

Is there a set of all set-theoretic truths?

Is there a set of all set-theoretic truths? This would be the set of sentences (in some encoding scheme, such as Goedel numbers) in the language of set theory that are true.

There is a serious epistemic possibility of a negative answer. If ZF is consistent, then there is a model M of ZFC such that every object in M is definable, i.e., for every object a of M, there is a defining formula ϕ(x) that is satisfied by a and by a alone in M (and if there is a transitive model of ZF, then M can be taken to be transitive). In such a model, it follows from Tarski’s Indefinability of Truth that there is no set of all set-theoretic truths. For if there were such a set, then that set would be definable, and we could use the definition of that set to define truth. So, if ZF is consistent, there is a model M of ZFC that does not contain a set of all the truths in M.

Interestingly, however, there is also a serious epistemic possibility of a positive answer. If ZF is consistent, then there is a model M of ZFC that does contain a set of all the truths in M. Here is a proof. If ZF is consistent, so is ZFC. Let ZFCT be a theory whose language is the language of set theory with an extra constant T, and whose axioms are the axioms of ZFC with the schemas of Separation and Replacement restricted to formulas of ZFC (i.e., formulas not using T), plus the axiom:

  1. ∀x(x ∈ T → S(x))

where S(x) is a sentence saying that x is the code for a sentence (this is a syntactic matter, so it can be specified explicitly), and the axiom schema that has for every sentence ϕ with code n:

  2. ϕ ↔ n ∈ T.

Any finite collection of the axioms of ZFCT is consistent. For let M be a model of ZFC (if ZF is consistent, so is ZFC, so it has a model). Then all the axioms of ZFC will be satisfied in M. Furthermore, for any finite subset of the additional axioms of ZFCT, there is an interpretation of the constant T under which those axioms are true. To see this, suppose that our finite subset contains (1) (no harm throwing that in if it’s not there) and the instances ϕi ↔ ni ∈ T of (2) for i = 1, ..., m. It is provable from ZF and hence true in M that there is a set t such that x ∈ t if and only if x = n1 and ϕ1, or x = n2 and ϕ2, …, or x = nm and ϕm.

Moreover, any such set can be proved in ZF to satisfy:

  3. ∀x(x ∈ t → S(x)).

Interpreting T to be that set t in M will make the finite subset of the additional axioms true.
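The finite step can be illustrated with made-up codes and truth values (nothing here is part of ZFC; it just shows why the displayed set t verifies each biconditional of the form ϕ_i ↔ n_i ∈ T):

```python
# Toy illustration of the finite-satisfiability step: given finitely
# many instances "ϕ_i ↔ n_i ∈ T", let t collect the codes of the true
# sentences. Codes and truth values are made up for illustration.

instances = [(17, True), (23, False), (31, True)]   # (code n_i, truth of ϕ_i)
t = {n for n, truth in instances if truth}

for n, truth in instances:
    assert truth == (n in t)    # each biconditional ϕ_i ↔ n_i ∈ t holds

print(sorted(t))  # [17, 31]
```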

So, by compactness, ZFCT has an interpretation I in some model M. In M there will be an object t such that t = I(T). That object t will be a set of all the truths in M that do not contain the constant T. Now consider the interpretation I′ of ZFC in M, which is I without any assignment of a value to the constant T (since T is not a constant of ZFC). Then ZFC will be true in M under I′. Moreover, the object t in M will be a set of all the truths in M.

So, if ZF is consistent, then there is a model of ZFC with a set of all set-theoretic truths and a model of ZFC without a set of all set-theoretic truths.

The latter claim may seem to violate the Tarski Indefinability of Truth. But it doesn’t. For that set of all truths will not itself be definable. It will exist, but there won’t be a formula of set theory that picks it out. There is nothing mathematically new in what I said above, but it is an interesting illustration of how one can come close to violating Indefinability of Truth without actually violating it.

Now, what if we take a Platonic view of the truths of set theory? Should we then say that there really is a set of all set-theoretic truths? Intuitively, I think so. Otherwise, our class of all sets is intuitively “missing” a subset of the set of all sentences. I am inclined to think that the Axioms of Separation and Replacement should be extended to include formulas of English (and other human languages), not just the formulas expressible in set-theoretic language. And the existence of the set of all set-theoretic truths follows from an application of Separation to the sentence “n is the code for a sentence of set theory that is true”.

Monday, November 18, 2019

Change, time and contradiction

According to Aristotle:

  1. Time is the measure of change.

  2. The law of non-contradiction says that a thing cannot have and lack the same property in the same respect at the same time.

The law of non-contradiction seems to be the fundamental basis of logic. Yet it presupposes the concept of time, which in turn presupposes that of change. Thus, it seems, for Aristotle, the concept of change is more fundamental than logic itself. That doesn’t seem very plausible to me.

But perhaps there is a different way to understand the “at the same time” qualifier in (2). Sometimes, we give a rule with something we call an exception, but it’s not really an exception. For instance, we could say: “It is an offense to lie to an officer of the law, except unintentionally.” Of course, there is no such thing as an unintentional lie, but it is useful to emphasize that unintentional falsehoods are not forbidden by the rule.

Now, Aristotle is, as far as I know, a presentist. On presentism, the only properties a thing has are its present properties, and it lacks precisely those properties it doesn’t presently have. So it’s not really possible for an object to have and lack the same property, since the having and lacking would have to be both present, and hence at the same time. But it is useful to emphasize that having the same property at one time and lacking it another is not forbidden by the law of non-contradiction, and hence the logically unnecessary qualifier “at the same time”. Strictly speaking, I think “in the same respect” isn’t needed, either.

Wednesday, November 7, 2018

Post-Goedelian mathematics as an empirical inquiry

Once one absorbs the lessons of the Goedel incompleteness theorems, a formalist view of mathematics as just about logical relationships such as provability becomes unsupportable (for me the strongest indication of this is the independence of logical validity). Platonism thereby becomes more plausible (but even Platonism is not unproblematic, because mathematical Platonism tends towards plenitude, and given plenitude it is difficult to identify which natural numbers we mean).

But there is another way to see post-Goedelian mathematics, as an empirical and even experimental inquiry into the question of what can be proved by beings like us. While the abstract notion of provability is subject to Goedelian concerns, the notion of provability by beings like us does not seem to be, because it is not mathematically formalizable.

We can mathematically formalize a necessary condition for something to be proved by us which we can call “stepwise validity”: each non-axiomatic step follows from the preceding steps by such-and-such formal rules. To say that something can be proved by beings like us, then, would be to say that beings like us can produce (in speech or writing or some other relevantly similar medium) a stepwise valid sequence of steps that starts with the axioms and ends with the conclusion. This is a question about our causal powers of linguistic production, and hence can be seen as empirical.
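Stepwise validity can be sketched for a toy system whose only rule is modus ponens (everything here—the rule set, the axioms, the formula encoding—is an illustrative simplification): each step must be an axiom or follow from earlier steps.

```python
# Toy stepwise-validity checker. Formulas are strings or tuples
# ("->", antecedent, consequent); the only rule is modus ponens.

def follows_by_mp(s, earlier):
    """s follows by modus ponens if some earlier step is ('->', p, s)
    and p is also an earlier step."""
    return any(e[0] == "->" and e[2] == s and e[1] in earlier
               for e in earlier if isinstance(e, tuple))

def stepwise_valid(steps, axioms):
    earlier = []
    for s in steps:
        if s not in axioms and not follows_by_mp(s, earlier):
            return False        # a step that is neither axiomatic nor derived
        earlier.append(s)
    return True

axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
proof = ["p", ("->", "p", "q"), "q", ("->", "q", "r"), "r"]
print(stepwise_valid(proof, axioms))   # True: every step checks out
print(stepwise_valid(["r"], axioms))   # False: r is neither an axiom nor derived
```

Checking such a sequence is mechanical; whether beings like us can produce one is the empirical question.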

Perhaps the surest way to settle the question of provability by beings like us is for us to actually produce the stepwise valid sequence of steps, and check its stepwise validity. But in practice mathematicians usually don’t: they skip obvious steps in the sequence. In doing so, they are producing a meta-argument that makes it plausible that beings like us could produce the stepwise valid sequence if they really wanted to.

This might seem to lead to a non-realist view of mathematics. Whether it does so depends, however, on our epistemology. If in fact provability by beings like us tracks metaphysical necessity—i.e., if B is provable by beings like us from A1, ..., An, then it is not possible to have A1, ..., An without B—then by means of provability by beings like us we discover metaphysical necessities.

Thursday, November 1, 2018

The centrality of the natural numbers

The more I think about the foundations of mathematics, the more wisdom I see in Kronecker’s famous saying: “God made the natural numbers; all else is the work of man.” There is something foundationally deep about the natural numbers. We see this in the way theories of the natural numbers are equivalent (e.g., via Goedel encoding) to the theories of strings of symbols that are central to logic, and in the way that when we fix our model of the natural numbers, we fix the foundational notion of provability.
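The equivalence between theories of numbers and theories of strings can be illustrated with a bijective Goedel-style encoding. The alphabet and base below are my own toy choices; any finite alphabet works the same way.

```python
# Sketch: a bijection between strings over a finite alphabet and the
# natural numbers (bijective base-k numeration), illustrating how talk
# of strings and talk of numbers are interchangeable.

ALPHABET = "abc"
K = len(ALPHABET)

def encode(s):
    n = 0
    for ch in s:
        n = n * K + ALPHABET.index(ch) + 1
    return n

def decode(n):
    s = ""
    while n > 0:
        n, r = divmod(n - 1, K)
        s = ALPHABET[r] + s
    return s

print(decode(encode("abcba")))  # abcba
```

Because the map is a bijection, every claim about strings of symbols translates into a claim about natural numbers and back, with nothing lost.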

Tuesday, October 30, 2018

Independence of FOL-validity

A sentence ϕ of a dialect of First Order Logic is FOL-valid if and only if ϕ is true in every non-empty model under every interpretation. By the Goedel Completeness Theorem, ϕ is FOL-valid if and only if ϕ is a theorem of FOL (i.e., has a proof from no axioms beyond any axioms of FOL). (Note: This does not use the Axiom of Choice, since we are dealing with a single sentence.)

Here is a meta-logic fact that I think is not as widely known as it should be.

Proposition: Let T be any consistent recursive theory extending Zermelo-Fraenkel set theory. Then there is a sentence ϕ of a dialect of First Order Logic such that according to some models of T, ϕ is FOL-valid (and hence a theorem of FOL) and according to other models of T, ϕ is not FOL-valid (and hence not a theorem of FOL).

Note: The claim that ϕ is FOL-valid according to a model M is shorthand for the claim that a certain complex arithmetical claim involving the Goedel encoding of ϕ is true according to M.

The Proposition is yet another nail in the coffins of formalism and positivism. It tells us that the mere notion of FOL-theoremhood has Platonic commitments, in that it is only relative to a fixed family of universes of sets (or at least a fixed model of the natural numbers or a fixed non-recursive axiomatization) that it makes unambiguous sense to predicate FOL-theoremhood or its lack. Likewise, the very notion of valid consequence, even of a finite axiom set, carries such Platonic commitments.

Proof of Proposition: Let G be a Rosser-tweaked Goedel sentence for T with G being Σ1 (cf. remarks in Section 51.3 here). Then G is independent of T. In ZF, and hence in T, we can prove that there is a Turing machine Q that halts if and only if G holds. (Just make Q iterate over all natural numbers, halting if the number witnesses the existential quantifier at the front of the Σ1 sentence G.) But one can construct an FOL-sentence ϕ such that one can prove in ZF that ϕ is FOL-valid if and only if Q halts (one can do this for any Turing machine Q, not just the one above).

Thus, in T it is provable that ϕ is FOL-valid if and only if G holds. And T is consistent by hypothesis. Since G is independent of T, it follows that the FOL-validity of ϕ is independent of T as well.
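The witness-searching machine Q from the proof can be sketched as follows. The predicate here is a toy stand-in of mine for the decidable matrix of the Σ1 sentence, and the optional cutoff is an artifact added so the sketch terminates; an actual Q would simply run forever when there is no witness.

```python
# Sketch of Q: given a Sigma_1 sentence "exists n, W(n)" with a decidable
# matrix W, iterate over the naturals and halt exactly when a witness is
# found. (W below is a toy decidable predicate, not a Goedel sentence.)

def Q(witness_predicate, bound=None):
    n = 0
    while bound is None or n < bound:
        if witness_predicate(n):
            return n  # halt: the Sigma_1 sentence is true
        n += 1
    return None  # artificial cutoff; the real Q would loop forever

# Toy example: "there exists n with n*n == 49"
print(Q(lambda n: n * n == 49))        # 7
print(Q(lambda n: n * n == 50, 1000))  # None: no witness below the cutoff
```

So Q’s halting behavior exactly mirrors the truth of the Σ1 sentence, which is what lets the proof trade the independence of G for the independence of ϕ’s FOL-validity.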

Friday, August 3, 2018

World shuffling and quantifiers

Let ψ be a non-trivial one-to-one map from all worlds to all worlds. (By non-trivial, I mean that there is a w such that ψ(w)≠w.) We now have an alternate interpretation of all sentences. Namely, if I is our “standard” method of interpreting sentences of our favorite language, we have a reinterpretation Iψ where a sentence s reinterpreted under Iψ is true at a world w if and only if s interpreted under I is true at ψ(w). Basically, under Iψ, s says that s correctly describes ψ(actual world).

Under the reinterpretation Iψ all logical relations between sentences are preserved. So, we have here a familiar Putnam-style argument that the logical relations between sentences do not determine the meanings of the sentences. And if we suppose that ψ leaves fixed the actual world, as we surely can, the argument also shows that truth plus the logical relations between sentences do not determine meanings. Moreover, we can suppose that ψ is a probability-preserving map. If so, then all probabilistic relations between sentences will be preserved, and hence the meanings of sentences are not determined by truth and the probabilistic and logical relations between sentences. This is all familiar ground.
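Here is a minimal sketch of the reinterpretation Iψ. The representation is entirely my own assumption: worlds are integers, a “sentence” is just the set of worlds at which it is true, and entailment is truth-preservation at every world. The point is that entailment survives the shuffle.

```python
# Sketch: shuffling worlds by a bijection psi reinterprets every sentence
# (evaluate s at psi(w) instead of w) while preserving all entailments.

WORLDS = [0, 1, 2, 3]
psi = {0: 1, 1: 0, 2: 3, 3: 2}  # a non-trivial bijection on worlds

def truth(sentence, w):
    # toy "sentences": sets of worlds at which they are true
    return w in sentence

def shuffled_truth(sentence, w):
    return truth(sentence, psi[w])

def entails(t, a, b):
    # a entails b iff b is true at every world where a is true
    return all(t(b, w) for w in WORLDS if t(a, w))

A, B = {0, 1}, {0, 1, 2}
print(entails(truth, A, B))           # True under the standard interpretation
print(entails(shuffled_truth, A, B))  # True under the shuffled one as well
```

Since ψ is a bijection, “b is true wherever a is true” holds before the shuffle iff it holds after, which is why no logical relation can detect the reinterpretation.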

But here is the application that I want. Apply the above to English with its intended interpretation. This results in a language English* that is syntactically and logically just like English but where the intended interpretation is weird. The homophones of the English existential and universal quantifiers in English* behave logically in the same way, but they are not in fact the familiar quantifiers. Hence quantifiers are not defined by their logical relations. I’ve been looking for a simple argument to show this, and this is about as simple as can be.

Existential quantifiers aren't defined by their logic

Start with a first order language L describing the concrete objects of our world and expand L to a language L* by adding a new name “obump”. Given an interpretation I of L in a model M, create a model M* such that:

  • The domain of M* is the domain of M together with one more object, o.

  • The non-unary relations of M* are the same as those of M, except that I(=) is replaced by a new relation J = I(=)∪{(o, o)}.

  • The unary relations of M* are all and only the relations R* for R a unary relation of M, where R* = R ∪ {o} if R is equal to I(F) either for a physical predicate F of L such that I(trump)∈I(F) or for a mental predicate F of L such that I(obama)∈I(F), and R* = R otherwise.

Define the interpretation I* of L* in M* as follows: I*(a)=I(a) for any name other than obump, I*(obump)=o, I*(F)=(I(F))* for a unary F, and I*(F)=I(F) for any non-unary F.

We can now give a semantics for L*: If Iw is the intended interpretation of L in a world w, then the intended interpretation of L* in w is given by Iw*. We can define validity for L* in an analogous way.

The symbols ∃ and ∀ of L* have the same logic as the same symbols of L. But the ∃ of L* is not really an existential quantifier. For if it were, then it would be true that there exists an entity that has all the mental properties of Obama and all the physical properties of Trump, which is false. Thus, logic is not sufficient to make a symbol be an existential quantifier.
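As a toy sketch of the M* construction (the dictionary representation, the particular names, and the sample predicates are all my own illustrative assumptions):

```python
# Sketch: build M* from M by adding a new object o whose unary properties
# are exactly Trump's physical properties plus Obama's mental properties.

def star_model(domain, unary, physical, mental, trump, obama):
    o = object()  # the fresh object interpreting "obump"
    new_domain = domain + [o]
    new_unary = {}
    for F, ext in unary.items():
        ext = set(ext)
        if (F in physical and trump in ext) or (F in mental and obama in ext):
            ext = ext | {o}  # o inherits this property
        new_unary[F] = ext
    return new_domain, new_unary, o

domain = ["trump", "obama"]
unary = {"tall": {"trump"}, "calm": {"obama"}}  # toy predicates
dom2, un2, o = star_model(domain, unary, {"tall"}, {"calm"}, "trump", "obama")
print(o in un2["tall"], o in un2["calm"])  # True True
```

Since o satisfies every unary predicate drawn from the two sources, the homophone of “∃” in L* quantifies over a domain containing an entity with Obama’s mental and Trump’s physical properties, which is exactly what the argument exploits.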

Saturday, July 21, 2018

Trivial universalizations

Students sometimes find trivial universalizations, like "All unicorns have three horns", confusing. I just overheard my teenage daughter explain this in a really elegant way: She said she has zero Australian friends and zero Australian friends came to her birthday party, so all her Australian friends came to her birthday party.

The principle that if there are n Fs, and n of the Fs are Gs, then all the Fs are Gs is highly intuitive. However, the principle does need to be qualified, which may confuse students: it only works for finite values of n. Still, it seems preferable to except only the infinite case rather than both zero and infinity.
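Python’s built-in all() happens to implement exactly this convention for the n = 0 case:

```python
# Vacuous truth in miniature: all() over an empty collection is True,
# so "all her Australian friends came" holds when she has zero of them.

friends = []          # zero Australian friends
came_to_party = []    # zero of them came to the party

print(all(f in came_to_party for f in friends))  # True: all of them came
```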

Thursday, April 26, 2018

Alethic Platonism

I’ve been thinking about an interesting metaphysical thesis about arithmetic, which we might call alethic Platonism about arithmetic: there is a privileged, complete and objectively correct assignment of truth values to arithmetical sentences, not relative to a particular model or axiomatization.

Prima facie, one can be an alethic Platonist about arithmetic without being an ontological Platonist: one can be an alethic Platonist without thinking that numbers really exist. One might, for instance, be a conceptualist, or think that facts about natural numbers are hypothetical facts about sequences of dashes.

Conversely, one can be an ontological Platonist without being an alethic Platonist about arithmetic: one can, for instance, think there really are infinitely many pluralities of abstracta each of which is equally well qualified to count as “the natural numbers”, with different such candidates for “the natural numbers” disagreeing on some of the truths of arithmetic.

Alethic Platonism is, thus, orthogonal to ontological Platonism. Similar orthogonal pairs of Platonist claims can be made about sets as about naturals.

One might also call alethic Platonism “alethic absolutism”.

I suspect causal finitism commits one to alethic Platonism.

Something close to alethic Platonism about arithmetic is required if one thinks that there is a privileged, complete and objectively correct assignment of truth values to claims about what sentence can be proved from what sentence. Specifically, it seems to me that such an absolutism about proof-existence commits one to alethic Platonism about the Σ1 sentences of arithmetic.

Thursday, March 15, 2018

Logical closure accounts of necessity

A family of views of necessity (e.g., Peacocke, Sider, Swinburne, and maybe Chalmers) identifies a family F of special true statements that get counted as necessary—say, statements giving the facts about the constitution of natural kinds, the axioms of mathematics, etc.—and then says that a statement is necessary if and only if it can be proved from F. Call these “logical closure accounts of necessity”. There are two importantly different variants: on one “F” is a definite description of the family and on the other “F” is a name for the family.

Here is a problem. Consider:

  1. Statement (1) cannot be proved from F.

If you are worried about the explicit self-reference in (1), I should be able to get rid of it by a technique similar to the diagonal lemma in Goedel’s incompleteness theorem. Now, either (1) is true or it’s false. If it’s false, then it can be proved from F. Since F is a family of truths, it follows that a falsehood can be proved from truths, and that would be the end of the world. So it’s true. Thus it cannot be proved from F. But if it cannot be proved from F, then it is contingently true.

Thus (1) is true but there is a possible world w where (1) is false. In that world, (1) can be proved from F, and hence in that world (1) is necessary. Hence, (1) is false but possibly necessary, in violation of the Brouwer Axiom of modal logic (and hence of S5). Thus:

  2. Logical closure accounts of necessity require the denial of the Brouwer Axiom and S5.

But things get even worse for logical closure accounts. For an account of necessity had better itself not be a contingent truth. Thus, a logical closure account of necessity if true in the actual world will also be true in w. Now in w run the earlier argument showing that (1) is true. Thus, (1) is true in w. But (1) was false in w. Contradiction! So:

  3. Logical closure accounts of necessity can at best be contingently true.

Objection: This is basically the Liar Paradox.

Response: This is indeed my main worry about the argument. I am hoping, however, that it is more like Goedel’s Incompleteness Theorems than like the Liar Paradox.

Here's how I think the hope can be satisfied. The Liar Paradox and its relatives arise from unbounded application of semantic predicates like “is (not) true”. By “unbounded”, I mean that one is free to apply the semantic predicates to any sentence one wishes. Now, if F is a name for a family of statements, then it seems that (1) (or its definite description variant akin to that produced by the diagonal lemma) has no semantic vocabulary in it at all. If F is a description of a family of statements, there might be some semantic predicates there. For instance, it could be that F is explicitly said to include “all true mathematical claims” (Chalmers will do that). But then it seems that the semantic predicates are bounded—they need only be applied in the special kinds of cases that come up within F. It is a central feature of logical closure accounts of necessity that the statements in F be a limited class of statements.

Well, not quite. There is still a possible hitch. It may be that there is semantic vocabulary built into “proved”. Perhaps there are rules of proof that involve semantic vocabulary, such as Tarski’s T-schema, and perhaps these rules involve unbounded application of a semantic predicate. But if so, then the notion of “proof” involved in the account is a pretty problematic one and liable to license Liar Paradoxes.

One might also worry that my argument that (1) is true explicitly used semantic vocabulary. Yes: but that argument is in the metalanguage.
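For readers curious about the diagonal-lemma technique mentioned above, a quine exhibits the same self-reference trick in executable form. The program below is my own illustration, not anything from the post: it refers to its own source without any explicit self-reference primitive, just as the diagonal lemma builds a sentence about itself without a “this sentence” device.

```python
# A quine: printing src % src reproduces the program's own source,
# because %r inserts the template into itself and %% becomes a literal %.

src = 'src = %r\nprint(src %% src)'
print(src % src)
```

The output is character-for-character the program itself, which is the fixed-point phenomenon the diagonal lemma generalizes.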

Tuesday, October 3, 2017

Infinite proofs

Consider this fun “proof” that 0=1:

      …

  • So, 3=4

  • So, 2=3

  • So, 1=2

  • So, 0=1.

What’s wrong with the proof? Each step follows from the preceding one, after all, and the only axiom used is an uncontroversial axiom of arithmetic that if x + 1 = y + 1 then x = y (by definition, 2 = 1 + 1, 3 = 1 + 1 + 1, 4 = 1 + 1 + 1 + 1 and so on).

Well, one problem is that intuitively a proof should have a beginning and an end. This one has an end, but no beginning. But that’s easily fixed. Prefix the above infinite proof with this infinite number of repetitions of “0=0”, to get:

  • 0=0

  • So, 0=0

  • So, 0=0

  • So, 0=0

      …

      …

  • So, 3=4

  • So, 2=3

  • So, 1=2

  • So, 0=1.

Now, there is a beginning and an end. Every step in the proof follows from a step before it (in fact, from the step immediately before it). But the conclusion is false. So what’s wrong?

The answer is that there is a condition on proofs that we may not actually bother to mention explicitly when we teach logic: a proof needs to have a finite number of steps. (We implicitly indicate this by numbering lines with natural numbers. In the above proof, we can’t do that: the “second half” of the proof would have infinite line numbers.)
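One way to see the failure concretely: chasing justifications in the regress never bottoms out in an axiom within any finite bound. The chaser and the bound below are my own toy illustration.

```python
# Toy justification-chaser for the regress above: the step "n = n+1" is
# licensed only by "n+1 = n+2", which is likewise not an axiom, so the
# search for an axiom at the bottom of the chain never succeeds.

def finds_axiom(n, bound):
    for _ in range(bound):
        n += 1  # climb to the next step; it is never itself an axiom
    return False  # no finite bound suffices: the "proof" has no finite core

print(finds_axiom(0, 10_000))  # False
```

A genuine proof, by contrast, lets every step be traced back to axioms in finitely many moves.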

So, our systems of proof depend on the notion of finitude. This is disquieting. The concept of finitude is connected to arithmetic (the standard definition of a finite set is one that can be numbered by a natural number). So is arithmetic conceptually prior to proof? That would be a kind of Platonism.

Interestingly, though, causal finitism—the doctrine that nothing can have an infinite causal history—gives us a metaphysical verificationist account of proof that does not presuppose Platonism:

  • A proof is a sequence of steps such that it is metaphysically possible for an agent to verify, by observation of each step, that each one follows by the rules from the preceding steps and/or the axioms.

For, given causal finitism, only a finite number of steps can be in the causal history of an act of verification of a proposition. (God can know all the steps in an infinite chain, but God isn’t an observer: an observer’s observational state is caused by the observations.)