
Tuesday, May 13, 2025

Truth-value realisms about arithmetic

Arithmetical truth-value realists hold that any proposition in the language of arithmetic has a fully determined truth value. Arithmetical truth-value necessitists add that this truth value is necessary rather than merely contingent. Although we know from the incompleteness theorems that there are alternate non-standard natural number structures, with different truth values (e.g., there is a non-standard natural number structure according to which the Peano Axioms are inconsistent), the realist and necessitist hold that when we engage in arithmetical language, we aren’t talking about these structures. (I am assuming either first-order arithmetic or second-order with Henkin semantics.)

Start by assuming arithmetical truth-value necessitism.

There is an interesting decision point for truth-value necessitism about arithmetic: Are these necessary truths twin-earthable? I.e., could there be a world whose denizens talk arithmetically like we do, and function physically like we do, but whose arithmetical sentences express different propositions, with different and necessary truth values? This would be akin to a world where instead of water there is XYZ, a world whose denizens would be saying something false if they said “Water has hydrogen in it”.

Here is a theory on which we have twin-earthability. Suppose that the correct semantics of natural number talk works as follows. Our universe has an infinite future sequence of days, and the truth-values of arithmetical language are fixed by requiring the Peano Axioms (or just the Robinson Axioms) together with the thesis that the natural number ordering is order-isomorphic to our universe’s infinite future sequence of days, and then are rigidified by rigid reference to the actual world’s sequence of future days. But in another world—and perhaps even in another universe in our multiverse if we live in a multiverse—the infinite future sequence of days is different (presumably longer!), and hence the denizens of that world end up rigidifying a different future sequence of days to define the truth values of their arithmetical language. The propositions expressed by their arithmetical sentences sometimes have different truth values from ours, but that’s because they are different propositions—and they’re still as necessary as ours. (This kind of theory will violate causal finitism.)

One may think of a twin-earthable necessitism about arithmetic as a kind of cheaper version of necessitism.

Should a necessitist go cheap and allow for such twin-earthing?

Here is a reason not to. On such a twin-earthable necessitism, there are possible universes for whose denizens the sentence “The Peano Axioms are consistent” expresses a necessary falsehood and there are possible universes for whose denizens the sentence expresses a necessary truth. Now, in fact, pretty much everybody with great confidence thinks that the sentence “The Peano Axioms are consistent” expresses a truth. But it is difficult to hold on to this confidence on twin-earthable necessitism. Why should we think that the universes with the non-standard future sequences of days are less likely?

Here is the only way I can think of to answer this question. The standard naturals embed into the non-standard naturals. There is a sense in which they are the simplest possible natural number structure. Simplicity is a guide to truth, and so the universes with simpler future sequences of days are more likely.

But this answer does not lead to a stable view. For if we grant that what I just said makes sense—that the simplest future sequences of days are the ones that correspond to the standard naturals—then we have a non-twin-earthable way of fixing the meaning of arithmetical language: assuming S5, we fix it by the shortest possible future sequence of days that can be made to satisfy the requisite axioms by adding appropriate addition and multiplication operations. And this seems a superior way to fix the meaning of arithmetical language, because it better fits with common intuitions about the “absoluteness” of arithmetical language. Thus it provides a better theory than twin-earthable necessitism did.

I think the skepticism-based argument against twin-earthable necessitism about arithmetic also applies to non-necessitist truth-value realism about arithmetic. On non-necessitist truth-value realism, why should we think we are so lucky as to live in a world where the Peano Axioms are consistent?

Putting the above together, I think we get an argument like this:

  1. Twin-earthable truth-value necessitism about arithmetic leads to skepticism about the consistency of arithmetic or is unstable.

  2. Non-necessitist truth-value realism about arithmetic leads to skepticism about the consistency of arithmetic.

  3. Thus, probably, if truth-value realism about arithmetic is true, non-twin-earthable truth-value necessitism about arithmetic is true.

The resulting realist view holds arithmetical truth to be fixed along both dimensions of Chalmers’ two-dimensional semantics.

(In the argument I assumed that there is no tenable way to be a truth-value realist only about Σ₁⁰ claims like “Peano Arithmetic is consistent” while resisting realism about higher levels of the hierarchy. If I am wrong about that, then in the above argument and conclusions “truth-value” should be replaced by “Σ₁⁰-truth-value”.)

Friday, March 28, 2025

Some stuff about models of PA+~Con(PA)

Assume Peano Arithmetic (PA) is consistent. Then it can’t prove its own consistency. Thus, there is a model M of PA according to which PA is inconsistent, and hence, according to M, there is a proof of a contradiction from a finite set of axioms of PA. This sounds very weird.

But it becomes less weird when we realize what these claims do and do not mean in M.

The model M will indeed contain an M-natural number a that according to M encodes a finite sequence of axioms of PA, and it will also contain an M-natural number p that according to M encodes a proof of a contradiction using the axioms encoded in a.

However, here are some crucial qualifications. Distinguish between the M-natural numbers that are standard, i.e., correspond to an actual natural number, one that from the point of view of the “actual” natural numbers is finite, and those that are not. The latter are infinite from the point of view of the actual natural numbers.

First, the M-natural number a is non-standard. For a standard natural number will only encode a finite number of axioms, and for any finite subtheory of PA, PA can prove its consistency (this is the “reflexivity of PA”, proved by Mostowski in the middle of the last century). Thus, if a were a standard natural number, according to M there would be no contradiction from the axioms in a.

Second, while every item encoded in a is according to M an axiom of PA, this is not actually true. This is because any M-finite sequence of M-natural numbers will either be a standardly finite length sequence of standard natural numbers, or will contain a non-standard number. For let n be the largest element in the sequence. If this is standard, then we have a standardly finite length sequence of standard natural numbers. If not, then the sequence contains a non-standard number. And the first option is ruled out here: if a encoded a standardly finite sequence of standard natural numbers, it would encode a finite set of actual axioms of PA, and then, as in the previous point, according to M there would be no proof of a contradiction from them. So the sequence encoded by a contains a non-standard number, and thus a contains something that is not an axiom of PA.

In other words, according to our model M, there is a contradictory collection of axioms of PA, but when we query M as to what that collection is, we find out that some of the things that M included in the collection are not actually axioms of PA. (In fact, they won’t even be well-formed formulas, since they will be infinitely long.) So a crucial part of the reason why M disagrees with the “true” model of the naturals about the consistency of PA is because M disagrees with it about what PA actually says!

Wednesday, March 26, 2025

A puzzle about consistency

Let T0 be ZFC. Let Tn be Tn−1 plus the claim Con(Tn−1) that Tn−1 is consistent. Let Tω be the union of all the Tn for finite n.

Here’s a fun puzzle. It seems that Tω should be able to prove its own consistency by the following reasoning:

If Tω is inconsistent, then for some finite n we have Tn inconsistent. But Con(Tn) is true for every finite n.

This sure sounds convincing! It took me a while to think through what’s wrong here. The problem is that although for every finite n, Tω can prove Con(Tn), it does not follow that Tω can prove that for every finite n we have Con(Tn).
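In symbols, the gap is between the following two claims (this is just a restatement of the point above, with ⊢ for provability):

    \forall n \;\bigl( T_\omega \vdash \mathrm{Con}(T_n) \bigr)
    \qquad\text{versus}\qquad
    T_\omega \vdash \forall n\, \mathrm{Con}(T_n)

The quantifier in the first claim lives in the metalanguage; in the second it lives inside the theory. The compactness argument below shows that the first does not yield the second.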

To make this point clearer, assume Tn is consistent for all n. Then Con(Tn) cannot be proved from Tn. Thus any finite subset of Tω is consistent with the claim that for some finite n the theory Tn is inconsistent. Hence by compactness there is a model of Tω according to which for some finite n the theory Tn is inconsistent. This model will have a non-standard natural number sequence, and “finite” of course will be understood according to that sequence.

Here’s another way to make the point. The theory Tω proves Tω consistent if and only if Tω is consistent according to every model M. But the sentence “Tω is consistent according to M” is ambiguous between understanding “Tω” internally and externally to M. If we understand it internally to M, we mean that the set that M thinks consists of the axioms of ZFC together with the ω-iteration of consistency claims is consistent. And this cannot be proved if Tω is consistent. But if we understand “Tω” externally to M, we mean that upon stipulating that S is the object in M’s universe whose members in the sense of M correspond naturally to the members in the sense of V of Tω (where V is “our true set theory”), the set S is consistent according to M. But there is a serious problem: there just may be no such object as S in the domain of M, and the stipulation may fail. (E.g., in non-standard analysis, the set of finite naturals is never an internal set.)

(One may think a second option is possible: There is such an object as S in M’s universe, but it can’t be referred to in M, in the sense that there is no formula ϕ(x) such that ϕ is satisfied by S and only by S. This option is not actually possible, however, in this case.)

Or so it looks to me. But all this is immensely confusing to me.

Tuesday, March 25, 2025

Non-formal provability

A simplified version of Goedel’s first incompleteness theorem (it’s really just a special case of Tarski’s indefinability of truth) goes like this:

  • Given a sound semidecidable system of proof that is sufficiently rich for arithmetic, there is a true sentence g that is not provable.

Here:

  • sound: if s is provable, s is true

  • semidecidable: there is an algorithm that given any provable sentence verifies in a finite number of steps that it is provable.

The idea is that we start with a precisely defined ‘formal’ notion of proof that yields semidecidability of provability, and show that this concept of proof is incomplete—there are truths that can’t be proved.
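To make the notion concrete, here is a minimal Python sketch of a semidecision procedure for provability: enumerate all finite strings and halt as soon as one of them passes a proof-checker for the target sentence. The checker checks_as_proof_of is a stand-in I am assuming, not a real library function.

    from itertools import count, product

    def all_strings(alphabet):
        """Enumerate every finite string over the alphabet, shortest first."""
        for n in count(0):
            for letters in product(alphabet, repeat=n):
                yield "".join(letters)

    def semidecide_provable(sentence, checks_as_proof_of, alphabet):
        """Halts (returning a proof) if some finite string is a proof of `sentence`;
        otherwise it runs forever. This is all that semidecidability guarantees."""
        for candidate in all_strings(alphabet):
            if checks_as_proof_of(candidate, sentence):
                return candidate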

But I am thinking there is another way of thinking about this stuff. Suppose that instead of working with a precisely defined concept of proof, we have something more like a non-formal or intuitive notion of proof, which itself is governed by some plausible axioms—if you can prove this, you can prove that, etc. That’s kind of how intuitionists think, but we don’t need to be intuitionists to find this approach attractive.

Note that I am not explicitly distinguishing axioms.

The idea is going to be this. The provability predicate P is not formally defined, but it still satisfies some formal constraints or axioms. These can be formulated in a formal language (Brouwer wouldn’t like this) that has a way of talking about strings of symbols and their concatenation and allows one to define a quotation function that given a string of symbols returns a string of symbols that refers to the first string.

One way to do this is to have a symbol ′α′ for any symbol α in the original language, which refers to α, and a concatenation operator +, so one can then quote αβγ as ′α′+′β′+′γ′. I assume the language is rich enough to define a quotation function Q such that Q(x) is the quotation of a string x.

To formulate my axioms, I will employ some sloppy quotation mark shorthand, partly to compensate for the difficulty of dealing with corner quotes on the web. Thus, ′αβγ′ is shorthand for ′α′+′β′+′γ′, and as needed I will allow substitution inside the quotation marks. If there are nested quotation marks, the inner substitutions are resolved first.

  1. For all sentences ϕ and ψ, if P(′ϕ↔︎∼ψ′) and P(′ϕ′), then P(′∼ϕ′).

  2. For all sentences ϕ and ψ, if P(′ϕ↔︎∼ψ′) and P(′ψ→ϕ′), then P(′ϕ′).

  3. For all sentences ϕ, we have P(′P(′ϕ′)→ϕ′).

  4. If ϕ has a formal intuitionistic proof from sufficiently rich axioms of concatenation theory, then P(′ϕ′).

Here, (1) and (2) embody a little bit of facts about proof, both of which facts are intuitionistically and classically acceptable. Assumption (3) is the philosophically heaviest one, but it follows from its being an axiom that if ϕ is provable, then ϕ, together with the fact that all axioms count as provable. That a formal intuitionistic proof is sufficient for provability is uncontroversial.

Using similar methods to those used to prove Goedel’s first incompleteness theorem, I think we should now be able to construct a sentence g and then prove, by a formal intuitionistic proof in a sufficiently rich concatenation theory, that:

  5. g ↔︎  ∼ P(′g′).

But these facts imply a contradiction. Since 5 can be proved in our formal way, we have:

  6. P(′g↔︎∼P(′g′)′). By 4.

  7. P(′P(′g′)→g′). By 3.

  8. P(′g′). By 6, 7 and 2.

  9. P(′∼g′). By 6, 8 and 1.

Hence the system P is inconsistent in the sense that it makes both g and ∼g provable.

This seems to me to be quite a paradox. I gave four very plausible assumptions about a provability property, and got the unacceptable conclusion that the provability property allows contradictions to be proved.

I expect the problem lies with 3: it lets one ‘cross levels’.

The lesson, I think, is that just as truth is itself something where we have to be very careful with the meta- and object-language distinction, the same is true of proof if we have something other than a formal notion.

Monday, February 17, 2025

Incompleteness

For years in my logic classes I’ve been giving a rough but fairly accessible sketch of the fact that there are unprovable arithmetical truths (a special case of Tarski’s indefinability of truth), using an explicit Goedel sentence built by concatenation of strings of symbols rather than by Goedel encoding and the diagonal lemma.

I’ve finally revised the sketch to give the full First Incompleteness theorem, using Rosser’s trick. Here is a draft.

Friday, June 30, 2023

Materialism and incompleteness

It is sometimes thought that Goedel’s incompleteness theorems yield an argument against materialism, on something like the grounds that we can see that the Goedel sentence for any recursively axiomatizable system of arithmetic is true, and hence our minds cannot operate algorithmically.

In this post, I want to note that materialism is quite compatible with being able to correctly decide the truth value of all sentences of arithmetic. For imagine that we live in an infinite universe which contains infinitely many brass plaques with a sentence of arithmetic followed by the word “true” or “false”, such that every sentence of arithmetic is found on exactly one brass plaque. There is nothing contrary to materialism in this assumption. Now add the further assumption that the word “true” is found on all and only the plaques containing a true sentence of arithmetic. Again, there is nothing contradicting materialism here. It could happen that way simply by chance movements of atoms! Next, imagine a machine where you type in a sentence of arithmetic, and the machine starts traveling outward in the universe in a spiral pattern until it arrives at a plaque with that sentence, reads whether the sentence is true or false, and comes back to you with the result. This could all be implemented in a materialist system, and yet you could then correctly decide the truth value of every sentence of arithmetic.
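Here is a toy Python sketch of the plaque machine, with the infinite array of plaques modeled by a lookup function plaque_at that the world (not the programmer) supplies; the names are made up for illustration. The point is that the procedure is a physical search plus a read-off, not a computation of the truth value.

    def plaque_machine(sentence, plaque_at):
        """Travel outward through plaque positions 0, 1, 2, ... until we find the
        (unique) plaque bearing `sentence`, then report the verdict stamped on it."""
        position = 0
        while True:
            inscribed_sentence, verdict = plaque_at(position)  # read the plaque
            if inscribed_sentence == sentence:
                return verdict  # "true" or "false", as stamped on the brass
            position += 1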

Note that we should not think of this as an algorithmic process. So the way that this example challenges the argument at the beginning of this post is by showing that materialism does not imply algorithmism.

Objection 1: The plaques are a part of the mechanism for deciding arithmetic, and so the argument only shows that an infinite materialistic machine could decide arithmetic. But our brains are finite.

Response: While our brains are finite, they are analog devices. An analog system contains an infinite amount of information. For instance, suppose that my brain particles have completely precise positions (e.g., on a Bohmian quantum mechanics). Then the diameter of my brain expressed in units of Planck length at some specific time t is some decimal number with infinitely many significant figures. It could turn out that this infinitely long decimal number encodes the truth values of all the sentences of arithmetic, and a machine that measures the diameter of my brain to arbitrary precision could then determine the truth value of every arithmetical statement. Of course, this might turn out not to be compatible with the details of our laws of nature—it may be that arbitrary precision is unachievable—but it is not incompatible with materialism as such.
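A small sketch of the encoding idea, under the purely hypothetical assumption that the nth digit after the decimal point of some physical magnitude is 1 exactly when the nth sentence of arithmetic, in some fixed enumeration, is true:

    from fractions import Fraction

    def nth_digit(x, n):
        """The nth digit after the decimal point of the exact rational x."""
        return int(x * 10**n) % 10

    def truth_value_from_magnitude(x, n):
        """Decode sentence number n: digit 1 means 'true', anything else 'false'."""
        return nth_digit(x, n) == 1

    # Toy magnitude encoding the pattern true, false, true, false, ...
    toy_magnitude = Fraction(10, 99)   # decimal expansion 0.101010...
    assert truth_value_from_magnitude(toy_magnitude, 1) is True
    assert truth_value_from_magnitude(toy_magnitude, 2) is False

Of course, nothing here shows that any actual magnitude does encode arithmetic; the point is only that an analog quantity could.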

Objection 2: In these kinds of scenarios, we wouldn’t know that the plaques are right.

Response: After verifying a large number of plaques to be correct, and finding none that we could tell are incorrect, it would be reasonable to conclude by induction that they are all right. However, if the plaques are in fact due to random processes, this inductive conclusion wouldn’t constitute knowledge, except on some versions of reliabilism (which seem implausible to me). But it could be a law of nature that the plaques are right—that’s compatible with materialism. In any case, here the discussion gets complicated.

Friday, May 10, 2019

Closure views of modality

Logical-closure views of modality have this form:

  1. There is a collection C of special truths.

  2. A proposition is necessary if and only if it is provable from C.

For instance, C could be truths directly grounded in the essences of things.

By Goedel Second Incompleteness considerations like those here, we can show that the only way a view of modality like this could work is if C includes at least one truth that provably entails an undecidable statement of arithmetic.

This is not a problem if C includes all mathematical truths, as it does on Sider’s view.

Anti-S5

Suppose narrowly logical necessity LL is provability from some recursive consistent set of axioms and narrowly logical possibility ML is consistency with that set of axioms. Then Goedel’s Second Incompleteness Theorem implies the following weird anti-S5 axiom:

  • ∼LLMLp for every statement p.

In particular, the S5 axiom MLp → LLMLp holds only in the trivial case where MLp is false.

For suppose we have LLMLp. Then MLp has a proof. But MLp is equivalent to ∼LLp. However, we can show that ∼LLp implies the consistency of the axioms: for if the axioms are not consistent, then by explosion they prove p and hence LLp holds. Thus, if LLMLp, then ∼LLp can be proved, and hence consistency can be proved, contrary to Second Incompleteness.

The anti-S5 axiom is equivalent to the axiom:

  • MLLLp.

In particular, every absurdity—even 0≠0—could be necessary.

I wonder if there is any other modality satisfying anti-S5.

Thursday, November 8, 2018

Provability from finite and infinite theories

Let #s be the Goedel number of s. The following fact is useful for thinking about the foundations of mathematics:

Proposition. There is a finite fragment A of Peano Arithmetic such that if T is a recursively axiomatizable theory, then there is an arithmetical formula PT(n) such that for all arithmetical sentences s, A → PT(#s) is a theorem of FOL if and only if T proves s.

The Proposition allows us to replace the provability of a sentence from an infinite recursive theory by the provability of a sentence from a finite theory.

Sketch of Proof of Proposition. Let M be a Turing machine that given a sentence as an input goes through all possible proofs from T and halts if it arrives at one that is a proof of the given sentence.

We can encode a history of a halting (and hence finite) run of M as a natural number such that there will be a predicate HM(m, n) and a finite fragment A of Peano Arithmetic independent of M (I expect that Robinson arithmetic will suffice) such that (a) m is a history of a halting run of M with input n if and only if HM(m, n) and (b) for all m and n, A proves whether HM(m, n).

Now, let PT(n) be ∃mHM(m, n). Then A proves PT(#s) if and only if there is an m0 such that A proves HM(m0, #s). (If A proves PT(#s), then because A is true, there is an m0 such that HM(m0, #s), and then by (b) A will prove HM(m0, #s). Conversely, if A proves HM(m0, #s), then it proves ∃mHM(m, #s).) And so A proves PT(#s) if and only if T proves s.

Wednesday, October 18, 2017

Are there multiple models of the naturals that are "on par"?

Assuming the Peano Axioms of arithmetic are consistent, we know that there are infinitely many sets that satisfy them. Which of these infinitely many sets is the set of natural numbers?

A plausible and tempting answer is: “It doesn’t matter—any one of them will do.”

But that’s not right. For the infinitely many sets each of which is a model of the Peano Axioms are not all isomorphic to one another. They disagree with each other on arithmetical questions. (Famously, one of the models “claims” that the Peano Axioms are consistent and another “claims” that they are inconsistent, where we know from Goedel that consistency is equivalent to an arithmetical question.)

So it seems that with regard to the Peano Axioms, the models are all on par, and yet they disagree.

Here’s a point, however, that is known to specialists, but not widely recognized (e.g., I only recognized the point recently). When one says that some set M is a model of the Peano Axioms, one isn’t saying quite as much as the non-expert might think. Admittedly, one is saying that for every Peano Axiom A, A is true according to M (i.e., M⊨A). But one is not saying that according to M all the Peano Axioms are true. One must be careful with quantifiers. The statement:

  1. For every Peano Axiom A, according to M, A is true.

is different from:

  2. According to M, all the Peano Axioms are true.

The main technical reason there is such a difference is that (2) is actually nonsense, because the truth predicate in (2) is ineliminable and cannot be defined in M, while the truth predicate in (1) is eliminable; we are just saying that for any Peano Axiom A, M⊨A.

There is an important philosophical issue here. The Peano Axiomatization includes the Axiom Schema of Induction, which schema has infinitely many formulas as instances. Whether a given sequence of symbols is an instance of the Axiom Schema of Induction is a syntactic matter that can be defined arithmetically in terms of the Goedel encoding of the sequence. Thus, it makes sense to say that some sequence of symbols is a Peano Axiom according to a model M, i.e., that according to M, its Goedel number satisfies a certain arithmetical formula, I(x).
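In symbols, the contrast between (1) and (2) comes to this (writing ⊨ for truth in M; True is the predicate that, by Tarski's theorem, is not definable in M):

    (1)\quad \text{for every Peano Axiom } A:\; M \models A
    \qquad\text{versus}\qquad
    (2)\quad M \models \forall x\,\bigl( I(x) \rightarrow \mathrm{True}(x) \bigr)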

Now, non-standard models of the naturals—i.e., models other than our “normal” model—will contain infinite naturals. Some of these infinite naturals will intuitively correspond, via Goedel encoding, to infinite strings of symbols. In fact, given a non-standard model M of the naturals, there will be infinite strings of symbols that according to M are Peano Axioms—i.e., there will be an infinite string s of symbols such that its Goedel number gs is such that I(gs). But then we have no way to make sense of the statement: “s is true according to M” or M⊨s. For truth-in-a-model is defined only for finite strings of symbols.

Thus, there is an intuitive difference between the standard model of the naturals and non-standard models:

  3. The standard model N is such that all the numbers that according to N satisfy I(x) correspond to formulas that are true in N.

  4. A non-standard model M is not such that all the numbers that according to M satisfy I(x) correspond to formulas that are true in M.

The reason for this difference is that the notion of “true in M” is only defined for finite formulas, where “finite” is understood according to the standard model.

I do not know how exactly to rescue the idea of many inequivalent models of arithmetic that are all on par.

Tuesday, September 12, 2017

Numerical experimentation and truth in mathematics

Is mathematics about proof or truth?

Sometimes mathematicians perform numerical experiments with computers. Goldbach’s Conjecture says that every even integer n greater than two is the sum of two primes. Numerical experiments have been performed that verified that this is true for every even integer from 4 to 4 × 10¹⁸.
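Here is a minimal Python sketch of the sort of numerical experiment in question: for each even n in a toy range, exhibit two primes summing to n. (The real verifications, going up to 4 × 10¹⁸, of course used far more sophisticated methods.)

    def is_prime(k):
        """Trial-division primality test; fine for a toy range."""
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True

    def goldbach_witness(n):
        """Return a pair of primes summing to the even number n, or None if there
        is none (no counterexample is known)."""
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return (p, n - p)
        return None

    for n in range(4, 101, 2):
        assert goldbach_witness(n) is not None  # verified by exhibiting the primes

Exhibiting the pair of primes is also what will make each verified instance provable, a point that comes up below.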

Let G(n) be the statement that n is the sum of two primes, and let’s restrict ourselves to talking about even n greater than two. So, we have evidence that:

  1. For an impressive sample of values of n, G(n) is true.

This gives one very good inductive evidence that:

  2. For all n, G(n) is true.

And hence:

  3. It is true that: for all n, G(n). I.e., Goldbach’s Conjecture is true.

Can we say a similar thing about provability? The numerical experiments do indeed yield a provability analogue of (1):

  4. For an impressive sample of values of n, G(n) is provable.

For if G(n) is true, then G(n) is provable. The proof would proceed by exhibiting the two primes that add up to n, checking their primeness and proving that they add up to n, all of which can be done. We can now inductively conclude the analogue of (2):

  5. For all n, G(n) is provable.

But here is something interesting. While we can swap the order of the “For all n” and the “is true” operator in (2) and obtain (3), it is logically invalid to swap the order of the “For all n” and the “is provable” operator in (5) to obtain:

  6. It is provable that: for all n, G(n). I.e., Goldbach’s Conjecture is provable.

It is quite possible to have a statement such that (a) for every individual n it is provable, but (b) it is not provable that it holds for every n. (Take a Goedel sentence g that basically says “I am not provable”. For each positive integer n, let H(n) be the statement that n isn’t the Goedel number of a proof of g. Then if g is in fact true, then for each n, H(n) is provably true, since whether n encodes a proof of g is a matter of simple formal verification, but it is not provable that for all n, H(n) is true, since then g would be provable.)

Now, it is the case that (5) is evidence for (6). For there is a decent chance that if Goldbach’s conjecture is true, then it is provable. But we really don’t have much of a handle on how big that “decent chance” is, so we lose a lot of probability when we go from the inductively verified (5) to (6).

In other words, if we take the numerical experiments to give us lots of confidence in something about Goldbach’s conjecture, then that something is truth, not provability.

Furthermore, even if we are willing to tolerate the loss of probability in going from (5) to (6), the most compelling probabilistic route from (5) to (6) seems to take a detour through truth: if G(n) is provable for each n, then Goldbach’s Conjecture is true, and if it’s true, it’s probably provable.

So the practice of numerical experimentation supports the idea that mathematics is after truth. This is reminiscent to me of some arguments for scientific realism.

Wednesday, April 22, 2015

System-relativity of proofs

There is a generally familiar way in which the question whether a mathematical statement has a proof is relative to a deductive system: for a proof is a proof in some system L, i.e., the proof starts with the axioms of L and proceeds by the rules of L. Something can be provable in one system—say, Euclidean geometry—but not provable in another—say, Riemannian geometry.

But there is a less familiar way in which the provability of a statement is relative. The question whether a sentence p is provable in a system L is itself a mathematical question. Proofs are themselves mathematical objects—they are directly the objects in a mathematical theory of strings of symbols and indirectly they are the objects of arithmetic when we encode them using something like Goedel numbering. The question whether there exists a proof of p in L is itself a mathematical question, and thus it makes sense to ask this question in different mathematical systems, including L itself.

If we want to make explicit both sorts of relativity, we can say things like:

  1. p has (does not have) a proof in a system L according to M.
Here, M might itself be a deductive system, in which case the claim is that the sentence "p has (does not have) a proof in L" can itself be proved in M (or else we can talk of the Goedel number translation of this), or M might be a model in which case the claim is that "p has a proof in L" is true in that model.

This is not just pedantry. Assume Peano Arithmetic (PA) is consistent. Goedel's second incompleteness theorem then tells us that the consistency of PA cannot be proved in PA. Skipping over the distinction between a sentence and its Goedel number, let "Con(PA)" say that PA is consistent. Then what we learn from the second incompleteness theorem is that:

  2. Con(PA) has no proof in PA.
Now, statement (2), while true, is itself not provable in PA.[note 1] Hence there are non-standard models of PA according to which (2) is false. But there are also models of PA according to which (2) is true, since (2) is in fact true. Thus, there are models of PA according to which Con(PA) has no proof and there are models of PA according to which Con(PA) has a proof.

This has an important consequence for philosophy of mathematics. Suppose we want to de-metaphysicalize mathematics, to move away from questions about which axioms are and are not actually true. Then we are apt to say something like this: mathematics is not about discovering which mathematical claims are true, but about discovering which mathematical claims can be proved in which systems. However, what we learn from the second incompleteness theorem is that the notion of provability carries the same kind of exposure to mathematical metaphysics, to questions about the truth of axioms, as naively looking for mathematical truths did.

And if one tries to de-metaphysicalize provability by saying that what we are after in the end is not the question whether p is provable in L, but whether p is provable in L according to M, then that simply leads to a regress. For the question whether p is provable in L according to M is in turn a mathematical question, and then it makes sense to ask according to which system we are asking it. The only way to arrest the regress seems to be to suppose that at some level we simply are talking of how things really are, rather than how they are in or according to a system.

Maybe, though, one could say the following to limit one's metaphysical exposure: Mathematics is about discovering proofs rather than about discovering what has a proof. However, this is a false dichotomy, since by discovering a proof of p, one discovers that p has a proof.

Monday, April 6, 2015

More against neo-conventionalism about necessity

Assume the background here. So, there is a privileged set N of true sentences from some language L, and N includes, among other things, all mathematical truths. There is also a provability-closure operator C on sets of L-sentences. And, according to our neo-conventionalist, a sentence p of L is necessarily true just in case p∈C(N).

Moreover, this is supposed to be an account of necessity. Thus, N cannot contain sentences with necessity operators and C must have the property that applying C to a set of sentences without necessity operators does not yield any sentence of the form Lp, where L is the necessity operator. (It may be OK to yield tautologies like "Lp or ~Lp" or conjunctions of tautologies like that with sentences in the input set, etc.) If these conditions are not met, then we have an account of necessity that presupposes a prior understanding of necessity.

Now consider an objection. Surely not only is L(1=1) true, but it is necessarily true. But now we have a problem. For C(N) by the conditions in the previous paragraph contains no Lp sentences. Hence it doesn't contain the sentence "L(1=1)".

But this was far too quick. For the neo-conventionalist can say that "L(1=1)" is short for something like "'1=1'∈C(N)". And the constraints on absence of necessity operators is compatible with the sentence "'1=1'∈C(N)" itself being a member of C(N).

This means that the language L must contain a name for N, say "N", or some more complex rigidly designating term for it (say a term expressing the union of some sets). Let's suppose that "N" is in L, then. Now, sentences are mathematical objects—finite sequences of symbols in some alphabet. (Or at least that seems the best way to model them for formal purposes.) We can then show (cf. this) that there is a mathematically definable predicate D such that D(y) holds if and only if y is the following sentence:

  • "For all x, if D(x), then ~(xN)."
But if y is this sentence, then y is a mathematical claim. If this mathematical claim isn't true, then some x satisfying D is a member of N; but y is the only thing satisfying D, so y is a member of N, and since N contains only truths, y is true. On the other hand, if y is true, then being a true mathematical claim it is a member of N, and hence, since y satisfies D, y is false. (This is, of course, structurally like the Liar. But it is legitimate to deploy a version of the Liar against a formal theory whose assumptions enable that deployment. That's what Goedel's incompleteness theorems do.)

To recap. We have an initial difficulty with neo-conventionalism in that no sentence with a necessity operator ends up necessary. That difficulty can be overcome by replacing sentences with a necessity operator with their neo-conventionalist analyses. But doing that gets us into contradiction.

(It's perhaps formally a bit nicer to formulate the above in terms of Goedel numbers. Then we replace Lp with n∈C*(N*) where n is the Goedel number of p, and C* and N* are the Goedel-number analogues of C and N. Diagonalization then yields a contradiction.)

One place where I imagine pushback is my assumption that C doesn't generate Lp sentences. One might think that C embodies the rule of necessitation, and hence in particular it yields Lp for any theorem p. But I think necessitation presupposes necessity, and so it is illegitimate to use rules that include necessitation to define necessity. However, this is a part of the argument that I am not deeply confident of.

Thursday, April 2, 2015

An argument against neo-conventionalism in modality

The neo-conventionalist account of necessity holds that necessity is just a messy property accidentally created by our conventions. We historically happened to distinguish a certain family N of true sentences. For instance, N might include the mathematical truths, the truths about the identities of natural kinds (e.g., "water = H2O"), the truths about the scope of composition, etc. Then we said that a sentence is necessarily true if and only if it is a member of the closure C(N) of N under some logical deduction rules. (Alternately, one might do this in terms of propositions.)

Here is a criterion of adequacy for a theory of modality. That theory must yield the following obvious, uncontroversial and innocuous-looking fact:

  1. Necessarily, some sentence is not necessary.
Some things just have to be possible. (Note: In System T, if p is any tautology, then necessarily ~p is not necessary.)

A neo-conventionalist proposal consists of a family N of true sentences and a closure operator C. For any neo-conventionalist proposal, we then can raise the question whether it satisfies condition (1). Formulating this condition precisely within neo-conventionalism takes a bit of work, but basically it'll say something like this:

  1. "Some sentence is not a member of C(N)" is a member of C(N).

There is a more intuitive way of thinking about the above condition. A family A of sentences is such that C(A) is all sentences if and only if the family A is C-inconsistent, i.e., inconsistent with respect to the rules defining C. (This is actually a fairly normal way to define inconsistency in a wide range of logics.) So (2) basically says:

  1. "N is C-consistent" is C-provable from N.

Put that way, we see that our innocuously weak assumption (1) is actually a pretty strong condition on a neo-conventionalist proposal. It is certainly not guaranteed to be satisfied. For instance, a neo-conventionalist proposal where N is a finite set of axioms and C is a formal system (with the axioms and formal system sufficient for the operations in C) will fail to satisfy (3) by Goedel's Second Incompleteness Theorem.

This last observation shows that the question of whether a neo-conventionalist proposal satisfies (3) can be far from trivial. Now, in practice nobody espouses a neo-conventionalist proposal with a finite set of axioms. All the proposals in the literature that I've seen just throw all mathematical truths in, so Goedel's Second Incompleteness Theorem is not applicable.

But even if it's not applicable, it shows that the question is far from trivial. And that is unsatisfactory. For (1) is obviously true. Yet on a neo-conventionalist proposal it becomes a very difficult question. That by itself is a reason to be suspicious of neo-conventionalism. In fact, we might say: We know (1) to be true; but if neo-conventionalism is true, we do not know (1) to be true; hence, neo-conventionalism is not true.

Now, one can probably craft neo-conventionalist proposals that satisfy our constraint. For instance, if N is just the set of mathematical truths (considered broadly enough to include truths about what sentences are C-provable from what) then "N is C-consistent" will be true, and hence a member of N, and hence C-provable from N. But of course that's just another proposal that nobody endorses: there are more necessities than the mathematical ones.

And here's the nub. The neo-conventionalist isn't just trying to craft some proposal or other that satisfies (1). She is proposing to let N be those truths that we have conventionally distinguished (she may not be making an analogous move about C; she could let C be closure under provability in the One True Logic). But we did not historically craft our choice of distinguished truths so as to ensure (3). Consider the following curious definition of an even number:

  4. A number is even if and only if it has the same parity as the number of words in my previous blog post.
This account might in fact get things right—if we are lucky enough that the number of words in my previous post is divisible by two. But I did not choose my wording in that post with that divisibility in mind. I chose the wording for completely different reasons. We don't have reason to think, without actually counting, that (4) is correct. And even if it is correct, it is only my luck that I happened to choose an even number of words, and we don't want a theory to rest on luck like that.

Monday, September 17, 2012

Vagueness and the foundations of mathematics

There are many set-theoretic constructions of the natural numbers. For instance, one might let 0 be the empty set ∅, 1 be {0}, 2 be {0,1}, and so on. Or one might let 0 be ∅, 1 be {∅}, 2 be {{∅}}, and so on. (The same point goes for the rationals, the reals, the complex numbers, and so on.) Famously, Benacerraf used this to argue that none of these constructions could be the natural numbers, since there is no reason to prefer one over another.
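For concreteness, here is a small Python sketch of the two constructions (von Neumann's and Zermelo's), with frozensets playing the role of pure sets; it is only meant to display the difference that Benacerraf's problem turns on.

    def von_neumann(n):
        """0 = {}, and n+1 = n ∪ {n}; so 2 = {0, 1}."""
        s = frozenset()
        for _ in range(n):
            s = s | {s}
        return s

    def zermelo(n):
        """0 = {}, and n+1 = {n}; so 2 = {{∅}}."""
        s = frozenset()
        for _ in range(n):
            s = frozenset({s})
        return s

    # The two "2"s are different sets, though either can play the role of the number 2.
    assert von_neumann(2) != zermelo(2)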

My graduate student John Giannini suggested to me that one might make a move of insisting that there really is a correct set of numbers, but we don't know what it is, a move analogous to epistemicism about vagueness. (Epistemicists say that there is a fact of the matter about exactly how much hair I need to lose to count as being bald, but we aren't in a position to know that fact.)

It then occurred to me that one might more strongly take the Benacerraf problem literally to be a case of vagueness. The suggestion is this. Provable intra-arithmetical claims like that 2+2=4 or that there are infinitely many primes are definitely true. Claims dependent on one particular construction of the naturals, however, are only vaguely true. Thus, it is vaguely true that 1={0}. Depending, though, on what sorts of naturalness constraints our usage might put on constructions, it could be that some conditional claims are definitely true, such as that if 3={0,1,2}, then 4={0,1,2,3}.

There are some choices about how to develop this further on the side of foundations of mathematics. For instance, one might wonder if some (all?) unprovable arithmetical claims might be vague. (If all, one might recover the Hilbert program, as regards the definite truths.) Likewise, extending this to set theory, one might wonder whether "set" and "member of" might not be vague in such a way that the Axiom of Choice, the Continuum Hypothesis and the like are all vague.

Vagueness, I think, comes from our linguistic practices underdetermining the meanings of terms. Likewise, our arithmetical practices arguably underdetermine the foundations.

The above account neatly fits with our intuition that intra-mathematical claims are much more "solid" than meta-mathematical claims. For the meta-mathematical claims are all vague.

The next step would be to consider what happens when we plug the above into various accounts of vagueness. Epistemicism is one option: our arithmetical terminology does have reference to one particular choice of foundation, but we aren't in a position to see what it is. I find promising a theistic variant on epistemicism. Supervaluationism seems particularly neat here. There will be one precisification which precisifies things consistently with one foundational story, and another with another. One can also consider other options.

There might even be some elements of epistemicism and some of supervaluationism. For there might be facts beyond our ken that say that some foundational stories are false—the epistemicism part of the story—but these facts may be insufficient to determine one foundational story to be right.

That said, I think I still prefer a more ordinary structuralism, though this story has the advantage that it takes the logical form of mathematical claims at face value rather than as disguised conditionals.

Thursday, September 8, 2011

A Goedel sentence in English

The sentence containing two quotations and that is such that (a) if you replace each quotation in it with an asterisk you get the text
"The sentence containing two quotations and that is such that (a) if you replace each quotation in it with an asterisk you get the text * and (b) each quotation in it is of the text * is unprovable."
and (b) each quotation in it is of the text
"The sentence containing two quotations and that is such that (a) if you replace each quotation in it with an asterisk you get the text * and (b) each quotation in it is of the text * is unprovable." 
is unprovable.
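The mechanics can be checked with a short Python script: start from the template (the quoted text), build the full sentence by replacing each asterisk with a quotation of the template, and confirm clauses (a) and (b). The wording of the template below is taken from the sentence above; only the choice of quotation marks is an implementation detail.

    # The template: the text that appears, quoted, inside the sentence.
    TEMPLATE = ("The sentence containing two quotations and that is such that "
                "(a) if you replace each quotation in it with an asterisk you get the text * "
                "and (b) each quotation in it is of the text * is unprovable.")

    def quote(text):
        """A quotation of a text: the text surrounded by quotation marks."""
        return '"' + text + '"'

    # The full sentence: the template with each asterisk replaced by a quotation of the template.
    SENTENCE = TEMPLATE.replace("*", quote(TEMPLATE))

    # Clause (a): replacing each quotation in the sentence with an asterisk gives back the template.
    assert SENTENCE.replace(quote(TEMPLATE), "*") == TEMPLATE

    # Clause (b): the sentence contains exactly two quotations, both of the template.
    assert SENTENCE.count(quote(TEMPLATE)) == 2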

Sunday, September 4, 2011

An easy constructive proof of a version of Tarski's Undefinability of Truth

Tarski's Undefinability of Truth theorem says that a language that contains enough material cannot have a truth predicate, i.e., a predicate that holds of all and only the true sentences. This yields Goedel's Incompleteness Theorem if you let the predicate be IsProvable.

Here's a proof in a string setting. Suppose that L is a language that (under some interpretation--I will generally drop that qualification for simplicity) lets you talk about finite strings of characters. Suppose L has a concatenation function +: a+b is a string consisting of the characters of a followed by the characters of b. Suppose further that every character has a name in L given by surrounding the character with asterisks. Thus, *+* is a name for the plus sign. Suppose that there is a function Q in L such that if a is a string, then Q(a) is a string that consists of the asterisk-based names for the characters in a interspersed with pluses. I will call Q(a) a quotation of a. Thus Q("abc")="*a*+*b*+*c*". I will say that a substring q of a string s is a quotation in s provided that q is a substring of s of the form "*a*+*b*+*c*+..." and q cannot be extended to a longer quotation. I will also use "*abc*" (etc.) to abbreviate "*a*+*b*+*c*".
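A tiny Python stand-in for the quotation function just described (not part of the proof, just a check of the example):

    def Q(a):
        """Quotation of a string: the asterisk-names of its characters, joined by pluses."""
        return "+".join("*" + ch + "*" for ch in a)

    assert Q("abc") == "*a*+*b*+*c*"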

Suppose that we can define a predicate T in L that is veridical, i.e., T(a) is true only if a is true. We will now construct a sentence g in L such that g is true and T(g) is false. This shows that there is no predicate true of all and only the true sentences of L. Here's how. Let g be the following sentence:
  • (x)[(z)(z=*AlmostMe* → ((FirstQuotes(x,z) & FirstQuoteRemoved(x,z)) → ~T(x)))]
Here, AlmostMe is an abbreviation (I will put abbreviations in bold) for the following sequence of characters:
  • (x)[(z)(z=() → ((FirstQuotes(x,z) & FirstQuoteRemoved(x,z)) → ~T(x)))]
I.e., AlmostMe abbreviates what g looks like with the quotation of AlmostMe inside g replaced by "()". FirstQuotes(x,z) is an abbreviation of a complex predicate that says that the first quotation inside x is a quotation of z. FirstQuoteRemoved(x,z) is an abbreviation of a predicate that says that z is what you get when you take x and replace the first quotation in it with "()".

Lemma 1. One can define FirstQuotes(x,z) and FirstQuoteRemoved(x,z) satisfying the above description.

I'll leave out the proof of this easy fact.

Lemma 2. The one and only string x that satisfies both FirstQuotes(x,*AlmostMe*) and FirstQuoteRemoved(x,*AlmostMe*) is g.

Here's an informal proof of Lemma 2. The first quotation in g is indeed a quotation of AlmostMe, and so FirstQuotes(g,*AlmostMe*) does indeed hold. Moreover, if we remove that quotation of AlmostMe and replace it with "()", we get AlmostMe. So g does satisfy both predicates.

Suppose now that h satisfies both predicates.  We must show that h=g.  Start with the fact that FirstQuoteRemoved(h,*AlmostMe*).  This shows that h is of the form:
  • (x)[(z)(z=*...* → ((FirstQuotes(x,z) & FirstQuoteRemoved(x,z)) → ~T(x)))]
where *...* is some quotation. But because FirstQuotes(h,*AlmostMe*), that first quotation must be a quotation of AlmostMe. But then h is g.

Given Lemma 2, the proof of our theorem is easy.  By First Order Logic, g is equivalent to:
  • (x)((FirstQuotes(x,*AlmostMe*) & FirstQuoteRemoved(x,*AlmostMe*))→~T(x))
But the one and only x that satisfies the antecedent of the conditional is g.  Hence, g is true if and only if ~T(g).  Now, g is either true or false.  If it is false, then ~T(g) is true as T is veridical, and so g is true, which is a contradiction.  Therefore, g is true.  But if it is true, then ~T(g) and so g does not satisfy T.  That completes the proof.

I'm going to try out a version of this proof on undergraduates one of these days.

Wednesday, August 31, 2011

A sketch of a proof of a version of Goedel's incompleteness theorem

Consider a first order language (FOL) L that, in addition to the standard ingredients of FOL, including equality and a store of names and predicates, has the following:
  • A binary function +.
  • A name for every symbol of the language. I will suppose that the name for the symbol "s" is "'s'". (Double quote marks are meta-language quotation; single quote marks are L-quotation.)
  • A binary predicate Quotes.
The intended interpretation of L will have as its domain the collection of all strings of finite length, including the empty string, of symbols from L. In the intended interpretation, "+" is string concatenation, so that if "a" denotes the string "xx" and "b" denotes the string "yy", then "a+b" denotes the string "xxyy". Finally, in the intended interpretation, a and b satisfy Quotes if and only if a is of the form "'a'+'b'+...+'m'" and b is of the form "'ab...m'".

Introduce the abbreviation Substring(u,v) for the expression: ∃xy(v=x+u+y).

Introduce the abbreviation Begins(u,v) for the expression: ∃x(v=u+x).

Introduce the abbreviation FirstQuotes(u,v) for the expression: ∃xy∃w (~Substring(''', x) & u=x+w+y & Quotes(w,v) & ~Begins('+',y)). Informally, this says that v is the first expression that is quoted in u.

Introduce the abbreviation FirstQuoteSubstituted(u, v) for the expression: ∃xyzw(FirstQuotes(u,y) & Quotes(w,y) & u=x+w+z & v=x+'()'+y). Informally, this says that v is what you get if you replace the first quoted expression in u with "()".

Introduce the abbreviation SelfSubstitutes(u) for the expression: ∃x(FirstQuotes(u,x) & FirstQuoteSubstituted(u,x)).  This says that the first thing quoted by u is u itself with the first quoted expression replaced by "()".

Now, enriching the language as needed (I think we need one enrichment: we need a "Balanced" predicate which checks if parentheses are balanced), define the predicate or abbreviation Provable(u). Finally, form the sentence:
  1. ∀x(∀z(z='*' → (SelfSubstitutes(x) & FirstQuoteSubstituted(x,z)))→~Provable(x)),
where we expand out all abbreviations, fixing up variables over which we quantify to avoid conflicts, and where "*" abbreviates:
  • ∀x(∀z(z=() → (SelfSubstitutes(x) & FirstQuoteSubstituted(x,z)))→~Provable(x)).
We can then check that (1) is the one and only value of x that satisfies the antecedent of the conditional in (1) whose consequent is ~Provable(x).

Consequently, (1) is true if and only if it is not Provable. If all Provable sentences are true, then (1) must be true and not Provable. (For if (1) were false, it would be Provable, and hence true. If (1) were true and Provable, (1) would be false.)

If this works mathematically, I think it could work very nicely didactically, with some refinement.

Of course, I probably got some details wrong.  Maybe I made a bigger mistake, too.

If morbid curiosity leads you to ask what my Goedel-like sentence looks like when expanded, it looks like the following if we take "Provable" to be a primitive in L: ∀g(∀h(h='∀'+'g'+'('+'∀'+'h'+'('+'h'+'='+'('+')'+'→'+'('+'∃'+'t'+'('+'∃'+'q'+'∃'+'r'+'∃'+'s'+'('+'~'+'∃'+'m'+'∃'+'n'+'('+'q'+'='+'m'+'+'+'''+'''+'''+'+'+'n'+')'+'&'+'g'+'='+'q'+'+'+'s'+'+'+'r'+'&'+'Q'+'u'+'o'+'t'+'e'+'s'+'('+'s'+','+'t'+')'+'&'+'~'+'∃'+'p'+'('+'r'+'='+'''+'+'+'''+'+'+'p'+')'+')'+'&'+'∃'+'x'+'∃'+'y'+'∃'+'z'+'∃'+'w'+'('+'∃'+'q'+'∃'+'r'+'∃'+'s'+'('+'~'+'∃'+'m'+'∃'+'n'+'('+'q'+'='+'m'+'+'+'''+'''+'''+'+'+'n'+')'+'&'+'g'+'='+'q'+'+'+'s'+'+'+'r'+'&'+'Q'+'u'+'o'+'t'+'e'+'s'+'('+'s'+','+'y'+')'+'&'+'~'+'∃'+'p'+'('+'r'+'='+'''+'+'+'''+'+'+'p'+')'+')'+'&'+'Q'+'u'+'o'+'t'+'e'+'s'+'('+'w'+','+'y'+')'+'&'+'g'+'='+'x'+'+'+'w'+'+'+'z'+'&'+'t'+'='+'x'+'+'+'''+'('+')'+'''+'+'+'y'+')'+'&'+'∃'+'x'+'∃'+'y'+'∃'+'z'+'∃'+'w'+'('+'∃'+'q'+'∃'+'r'+'∃'+'s'+'('+'~'+'∃'+'m'+'∃'+'n'+'('+'q'+'='+'m'+'+'+'''+'''+'''+'+'+'n'+')'+'&'+'g'+'='+'q'+'+'+'s'+'+'+'r'+'&'+'Q'+'u'+'o'+'t'+'e'+'s'+'('+'s'+','+'y'+')'+'&'+'~'+'∃'+'p'+'('+'r'+'='+'''+'+'+'''+'+'+'p'+')'+')'+'&'+'Q'+'u'+'o'+'t'+'e'+'s'+'('+'w'+','+'y'+')'+'&'+'g'+'='+'x'+'+'+'w'+'+'+'z'+'&'+'h'+'='+'x'+'+'+'''+'('+')'+'''+'+'+'y'+')'+')'+'→'+'~'+'P'+'r'+'o'+'v'+'a'+'b'+'l'+'e'+'('+'g'+')'+')'→(∃t(∃q∃r∃s(~∃m∃n(q=m+'''+n)&g=q+s+r&Quotes(s,t)&~∃p(r='+'+p))&∃x∃y∃z∃w(∃q∃r∃s(~∃m∃n(q=m+'''+n)&g=q+s+r&Quotes(s,y)&~∃p(r='+'+p))&Quotes(w,y)&g=x+w+z&t=x+'()'+y)&∃x∃y∃z∃w(∃q∃r∃s(~∃m∃n(q=m+'''+n)&g=q+s+r&Quotes(s,y)&~∃p(r='+'+p))&Quotes(w,y)&g=x+w+z&h=x+'()'+y))→~Provable(g)).

If we have an explicit formula for Provable(x), we need to fix up the "'P'+'r'+'o'+'v'+'a'+'b'+'l'+'e'+'('+'g'+')'" and "Provable(g)" parts.

Monday, February 28, 2011

Syntactic self-reference without diagonal lemma or Gödel numbers

For the proof of Goedel's incompleteness theorem and in work on the Liar Paradox it is usual to use the Diagonal Lemma to secure self-reference. The challenge of self-reference is this. Given a predicate R, find a syntactically definable predicate P such that
  1. (s)(P(s) → R(s))
is provably the one and only sequence of symbols satisfying P. Then (1) says that R holds of (1) itself. (To get the (Strengthened) Liar Paradox, just make R(s) say that s is not true.) But the proof of the diagonal lemma is hard to understand.

I find the following way of securing self-reference easier to understand. Start with a language that has nestable quotation marks, which I'll represent with ‘...’, and some string manipulation tools. I'll use straight double quotation marks for meta-language quotation. Add to the language a new symbol "@" which is ungrammatical (i.e., no well-formed formula may contain it). For any sequence of symbols s, we define two new sequences of symbols N(s) and Q(s) by the following rules. If s contains no quoted expressions or contains imbalanced opening and closing quotation marks, N(s) and Q(s) are just "@". If s contains a quoted expression, Q(s) is the first quoted expression, without its outermost quotation marks (but with any nested quotations being included), and N(s) is the result of taking s and replacing that first quoted occurrence of Q(s), as well as its surrounding single quotation marks, with "@". Thus:
  1. Q("abc‘def‘ghi’’+jkl")="def‘ghi’"
  2. N("abc‘def‘ghi’’+jkl")="abc@+jkl".
It is easy to see that Q and N are syntactically defined. Now, let M(s) be equal to N(s) if N(s)=Q(s) and let M(s) be an empty sequence "" otherwise. Again, M(s) is syntactic. Now consider this sentence:
  4. (s)(‘(s)(@=M(s) → R(s))’=M(s) → R(s)).
It is easy to prove (given a bit of string manipulation resources) that the only sequence s that satisfies the antecedent of the conditional is (4) itself. So we have constructed the syntactic predicate P(s). It is: ‘(s)(@=M(s) → R(s))’=M(s).
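Since the whole construction is just string manipulation, it can be checked mechanically. Here is a small Python sketch of Q, N and M (with a simplified balance check), which verifies the two examples and then confirms that sentence (4), built by quoting its own core, is the thing picked out by the antecedent. The helper names are mine.

    OPEN, CLOSE = "‘", "’"

    def first_quotation_span(s):
        """Return (i, j) so that s[i:j] is the first quoted expression including its
        outermost quotation marks, or None if there is none or the marks are imbalanced."""
        if s.count(OPEN) != s.count(CLOSE) or OPEN not in s:
            return None
        i = s.index(OPEN)
        depth = 0
        for j in range(i, len(s)):
            if s[j] == OPEN:
                depth += 1
            elif s[j] == CLOSE:
                depth -= 1
                if depth == 0:
                    return (i, j + 1)
        return None

    def Q(s):
        """The first quoted expression of s, without its outermost quotation marks."""
        span = first_quotation_span(s)
        if span is None:
            return "@"
        i, j = span
        return s[i + 1 : j - 1]

    def N(s):
        """s with its first quoted occurrence (marks included) replaced by '@'."""
        span = first_quotation_span(s)
        if span is None:
            return "@"
        i, j = span
        return s[:i] + "@" + s[j:]

    def M(s):
        return N(s) if N(s) == Q(s) else ""

    # The examples (2) and (3) from above:
    assert Q("abc‘def‘ghi’’+jkl") == "def‘ghi’"
    assert N("abc‘def‘ghi’’+jkl") == "abc@+jkl"

    # Sentence (4), built by splicing a quotation of its own core into the core:
    core = "(s)(@=M(s) → R(s))"
    sentence = core.replace("@", OPEN + core + CLOSE, 1)
    # The antecedent of (4) holds of (4) itself: M applied to the sentence
    # returns exactly the quoted core.
    assert M(sentence) == core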

One can also adapt this to work with Goedel numbers and hence presumably for use in proving incompleteness.

[Removed a nasty typo.]