Showing posts with label proofs. Show all posts

Wednesday, April 22, 2015

System-relativity of proofs

There is a generally familiar way in which the question whether a mathematical statement has a proof is relative to a deductive system: for a proof is a proof in some system L, i.e., the proof starts with the axioms of L and proceeds by the rules of L. Something can be provable in one system—say, Euclidean geometry—but not provable in another—say, Riemannian geometry.

But there is a less familiar way in which the provability of a statement is relative. The question whether a sentence p is provable in a system L is itself a mathematical question. Proofs are themselves mathematical objects: directly, they are the objects of a mathematical theory of strings of symbols; indirectly, they are objects of arithmetic when we encode them using something like Goedel numbering. The question whether there exists a proof of p in L is itself a mathematical question, and thus it makes sense to ask this question in different mathematical systems, including L itself.
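The point that strings of symbols can be traded in for numbers is easy to make concrete. Here is a minimal toy sketch in Python; the symbol table and the prime-exponent scheme are illustrative choices of mine, not Goedel's original coding:

```python
# Toy Goedel numbering: encode a string of symbols as one natural number,
# so that questions about strings become questions about numbers.
# The symbol table and scheme are illustrative, not Goedel's own.

SYMBOLS = {'0': 1, 'S': 2, '=': 3, '(': 4, ')': 5, '+': 6}

def primes():
    """Yield primes 2, 3, 5, ... by trial division (fine for short strings)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def goedel_number(formula: str) -> int:
    """Encode the i-th symbol as the exponent of the i-th prime."""
    g = 1
    for p, sym in zip(primes(), formula):
        g *= p ** SYMBOLS[sym]
    return g

def decode(g: int) -> str:
    """Recover the formula by reading off each prime's exponent."""
    inv = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(inv[e])
    return ''.join(out)

print(goedel_number('S0=S0'))          # prints 808500
print(decode(goedel_number('S0=S0')))  # prints S0=S0
```

Since encoding and decoding are purely arithmetical operations, a predicate like "g is the Goedel number of a proof of p in L" is an arithmetical predicate, which is exactly what lets the question of provability be asked inside arithmetic itself.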

If we want to make explicit both sorts of relativity, we can say things like:

  1. p has (does not have) a proof in a system L according to M.
Here, M might itself be a deductive system, in which case the claim is that the sentence "p has (does not have) a proof in L" can itself be proved in M (or else we can talk of its Goedel number translation), or M might be a model, in which case the claim is that "p has a proof in L" is true in that model.

This is not just pedantry. Assume Peano Arithmetic (PA) is consistent. Goedel's second incompleteness theorem then tells us that the consistency of PA cannot be proved in PA. Skipping over the distinction between a sentence and its Goedel number, let "Con(PA)" say that PA is consistent. Then what we learn from the second incompleteness theorem is that:

  2. Con(PA) has no proof in PA.
Now, statement (2), while true, is itself not provable in PA.[note 1] Hence there are non-standard models of PA according to which (2) is false. But there are also models of PA according to which (2) is true, since (2) is in fact true. Thus, there are models of PA according to which Con(PA) has no proof and there are models of PA according to which Con(PA) has a proof.
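In standard notation (writing Prov_PA for PA's provability predicate, with the usual arithmetization of syntax), the situation just described looks like this:

```latex
% Consistency expressed as an arithmetical sentence:
\mathrm{Con}(\mathrm{PA}) \;\equiv\; \neg\,\mathrm{Prov}_{\mathrm{PA}}(\ulcorner 0 = 1 \urcorner)

% Second incompleteness theorem (assuming PA is consistent):
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})

% Hence, by Goedel's completeness theorem, there is a (non-standard) model M with
M \models \mathrm{PA} \wedge \neg\,\mathrm{Con}(\mathrm{PA})

% while in the standard model
\mathbb{N} \models \mathrm{Con}(\mathrm{PA})
```

The two model claims at the end are the two halves of the relativity: according to M, Con(PA) has a proof in PA; according to the standard model, it does not.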

This has an important consequence for philosophy of mathematics. Suppose we want to de-metaphysicalize mathematics, move us away from questions about which axioms are and are not actually true. Then we are apt to say something like this: mathematics is not about discovering which mathematical claims are true, but about discovering which mathematical claims can be proved in which systems. However, what we learn from the second incompleteness theorem is that the notion of provability carries the same kind of exposure to mathematical metaphysics, to questions about the truth of axioms, as naively looking for mathematical truths did.

And if one tries to de-metaphysicalize provability by saying that what we are after in the end is not the question whether p is provable in L, but whether p is provable in L according to M, then that simply leads to a regress. For the question whether p is provable in L according to M is in turn a mathematical question, and then it makes sense to ask according to which system we are asking it. The only way to arrest the regress seems to be to suppose that at some level we are simply talking of how things really are, rather than how they are in or according to a system.

Maybe, though, one could say the following to limit one's metaphysical exposure: Mathematics is about discovering proofs rather than about discovering what has a proof. However, this is a false dichotomy, since by discovering a proof of p, one discovers that p has a proof.

Friday, February 14, 2014

Why is proving a mathematical theorem an a priori process?

When I prove a mathematical theorem using paper and a pen (I dislike using pencils for this), I constantly rely on empirical data: I look back at facts that I have already established, and work from them. That's partly an a posteriori process of reasoning: I look at stuff on the page. Now for simpler theorems, and likewise for more complex ones if I were smarter, I can do it all in my head. When I do it all in my head, my memory and imagination replace the paper and pen. But why isn't the use of memory to recall that I've proved a lemma, and the use of imagination to present myself with visual images of formulae, an a posteriori process?

The memory case might be answered thus. I needn't remember anything about myself--say, having proved a lemma. In an idealized case of proving a theorem without relying on empirical data, instead of remembering having proved the lemma that p, maybe I would simply have acquired the belief that p when I proved it, and then relied on the fact that p--and not the empirical fact that I believe that p--later in the proof. That might work. And perhaps the use of imagination is merely heuristic.

But now notice a somewhat interesting thing: gaining a piece of a priori knowledge and a piece of a posteriori knowledge can be internally exactly similar. In both cases, I might come to believe that q on the basis of the fact that p and logic. In both cases, I need only have the knowledge that p--I need not have any beliefs about how I acquired the knowledge that p, for instance. If I did need to have such beliefs for knowledge, then there would be no such thing as a priori knowledge, since beliefs about how I acquired a piece of knowledge are empirical. And in fact there need not even have been any differences in my own life. Imagine that my knowledge that p was innate--we evolved to have that piece of knowledge. Then whether my knowledge that q, and that p for that matter, is a priori seems to depend on the precise evolutionary forces that shaped my ancestors, not on anything in my life.

Wednesday, February 4, 2009

Arguments and sincerity

When we write down a complex logical argument, it seems there is a pretty good chance that while writing down the proof, we will write down sentences that express propositions which we do not believe. Some of these sentences will occur as part of a conditional proof, and those are not puzzling. But some of the sentences that express propositions which it seems we do not believe will be simply asserted. For instance, in the middle of our argument, we might make a claim that involves some complex logical or mathematical formula, which we then expand out using appropriate manipulation rules. But the expanded-out claim may well exceed our mental capacities: we can handle it on paper, but it is, it seems, just too complex for us to believe.

If this is right, then sincerity does not require that I believe what I say. (I assume the rules for sincerity do not depend on whether I am writing or speaking.) All that is required is that I believe that what I am saying is true. (What should I say about the speaker meaning in such a case?)

Or so it seems. But here is a curious test case. A politician reads a speech that her speechwriter wrote. She trusts her speechwriter to write only truths. The politician did not read over the speech ahead of time. She enunciates the sentences carefully, but she is distracted and pays no attention to what the text says. I think we would say that she is not being fully sincere. Maybe, though, our standards for sincerity are unfairly high in the case of politicians. Or maybe there are different kinds of sincerity—there is the bare sincerity which is one's duty in speech, and there is something that one might call "real sincerity" which entails conviction (where conviction is belief and more).

On the other hand, there is a different way of looking at the case of the complex sentence in an argument (this is inspired by some things that David Manley said based on his book with John Hawthorne). Maybe we can simply gain access to the proposition by means of the sentence, without having ourselves to understand or even parse the sentence.

Or maybe the things in the middle of proofs should not count as assertions. Perhaps making steps in a proof is a mechanical procedure, akin to punching buttons on a calculator and likewise intrinsically devoid of propositional content, aimed at producing empirical evidence of the truth of some entailment. (That the evidence produced by a complex proof is empirical in nature is clear to me, weird as it may sound. One reason is that memory is intricately involved.)

Tuesday, January 27, 2009

Some offers

1. Consider the offer: "If you give me a sound deductive argument that I'll give you $1000, then I'll give you $1000." It feels like something has been risked in making the offer. But surely nothing has been risked—neither one's integrity nor one's money.

Or is there really a risk that there is a sound argument for a contradiction, and hence for any conclusion?

2. Suppose Fred is a super-smart being who, while very malicious, exhibits perfect integrity (never lies, never cheats, never breaks promises) and is a perfect judge of argument validity. Fred offers me the following deal: If he can find a valid argument for a self-contradictory conclusion with the argument having no premises, he will torment me for eternity; otherwise, he'll give me $1. Should I go for the deal? Surely I should! But it seems too risky, doesn't it?

3. Suppose Kathy is a super-smart being who, while very malicious, exhibits perfect integrity and is omniscient about what is better than what for what persons or classes of persons. Kathy offers me the following deal: If horrible eternal pain is in every respect the best thing that could happen to anyone, then she will cause me to suffer horrible pain for eternity; otherwise, she'll give me $1. Shouldn't I go for this? After all, I either get a dollar, or I get that which is the best possible thing that could happen to anyone.

Do these cases show that we're not psychologically as sure of some things as we say we are? Or do they merely show that we're not very good at counterpossible reasoning or at the use of conditionals?

[The first version of this post had screwed-up formatting, and Larry Niven pointed that out in a comment. I deleted that version, and with it the said comment. My thanks to Larry!]