# [FOM] Proof. was: Checkers is a draw

Bill Taylor W.Taylor at math.canterbury.ac.nz
Sun Jul 29 01:51:40 EDT 2007

```->> To understand why this sort of rule is necessary, consider the example
->> of showing that 170 * 40 = 6800. No one will do this proof by appealing
->> to Peano's axioms, because it would be thousands of lines long.
->> Instead we will /calculate/ that 170 * 40 gives 6800, and then appeal
->> to reflexivity, which is a one-line proof plus some computation.
->>
->> Poincare foresaw the need for this sort of rule a century ago; the
->> story goes that upon seeing a formal proof of some trivial arithmetic
->> fact he is supposed to have asked, "And how will you prove a real
->> theorem?" In his essay, "Science and Hypothesis", he contrasted
->> computation (which he called verification) with deduction proper, and
->> argued that only deductive steps should be included in a formal proof,
->> since the calculations can be re-done by the person checking the proof.

IMHO this distinction between computation and deduction is rather artificial,
even if we all "think we know the difference".  A proof machine will not
know the difference; it will use abbreviations and earlier lemmas just as
we do.  Every arithmetic calculation is just a little theorem in its own
right, proved on the spot.
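
To make the quoted 170 * 40 example concrete: in a modern proof assistant
such an on-the-spot theorem really is a one-liner (a sketch in Lean 4;
any system with proof-by-computation would look much the same):

    -- The kernel evaluates 170 * 40 and checks that it equals 6800;
    -- the entire "proof" is a single appeal to reflexivity.
    example : 170 * 40 = 6800 := rfl

No appeal to Peano's axioms step by step; the checker simply re-does the
calculation, exactly as the quoted passage describes.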

Poincare's  "and how will you prove a real theorem?"  strikes me as being
a little bit disingenuous, or even willfully obstinate.  Certainly he was
writing at just about the last time one could respectably take that stance;
FOL and axiom systems became de rigueur just after his time.

I have written on this, in many forums.  There is no longer any need to
worry about what a proof really is, or, to an enormously large extent,
whether an alleged proof really is a proof or not.  Proofs proper,
we now know, are themselves mathematical objects that can (in principle!)
be written down in toto with the utmost lack of ambiguity.
OC, as Poincare hinted, and all here know, we cannot do this in practice
with any but the most baby examples.  So we use abbreviations, condensations
and so forth to make them more feasible.  What we normally call a proof,
as per journals etc, is *really* just a proof outline, from which we can
(in principle!) reconstruct the full proof if required, or at least
any dodgy-looking part of it.  In turn, what we call "proof outlines" now
are in fact merely *hopeful* suggestions toward a more formal proof outline
(called by journals a proof), which the author does not expect to have any
trouble producing, but which leave open a slight but significant doubt.

These successive degradations in everyday language are inevitable; but
the distinctions should be kept clearly in mind when this sort of debate
occurs - the blurring can easily fool the amateur or populariser into thinking
there is "no certainty in math".  Nonsense.  There can be certainty
(even if it's not as much as many glibly assume).

OC the ultrafinitist, wrapped up in semi-formal notions of feasibility,
will insist that the "in principle"s above are REAL distinctions, and will
cut his formal cloth appropriately; but most of the rest of us have no
difficulty with the "in principle" concept, and will regard journal
proofs as real proofs, even though strictly formally they fall short.

And OC we must always be aware that ANY proof may contain gaps or mistakes;
there are many examples.  But this is epistemology.  The ontology of proof
has been (for most) fully decided and crystal-clear since about 1906.

End of rant.

Bill Taylor.

```