[FOM] Epistemology and the Philosophy of Mathematics (was Mathematical Experiments)

Dan Goodman dog at fcbob.demon.co.uk
Sat Jul 5 19:49:08 EDT 2003


From: Don Fallis <fallis at email.arizona.edu>

> At least for finite combinatorial facts, evidence of the truth of a
> mathematical claim is also evidence of the existence of a proof-1 of
> the claim.  Thus, deductive proof cannot be epistemically better than
> probabilistic proof simply because it provides evidence of the
> existence of a proof-1.
>
> See page 181 and pages 184-185 of
>
> http://links.jstor.org/sici?sici=0022-362X%28199704%2994%3A4%3C165%3ATESOPP%3E2.0.CO%3B2-B
> for a more detailed discussion of this issue.

I have to say that I am uneasy about the idea of probabilistic proof,
despite basically agreeing that mathematical certainty is unattainable.
Suppose, as in that article, that you have a proof checker which
incorrectly validates a proof with some probability p (p may depend on
the length of the string, but that is fine as long as p is bounded
below by a nonzero constant). If your algorithm for finding a proof of
a theorem is to run through all possible strings and test each one with
this checker (which sounds implausible at the moment, but with
techniques like the DNA algorithm described in that article, or quantum
computing, it might not always be), then with probability 1 it will
eventually validate an incorrect proof: the enumeration contains
infinitely many non-proofs, and each one has at least a fixed nonzero
chance of being wrongly accepted. In general, with probabilistic
methods you are never in a position to say something like "The
probability that there is a proof of this theorem is 0.999999999",
because the probability that there is a proof of the theorem must be
either 1 or 0, according to whether there is or isn't one. Instead, you
find yourself having to analyse the entire process leading up to the
proof rather than just the proof itself.
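
To make the point concrete, here is a toy simulation (nothing here is
from the article; the checker and the value of p are invented): every
candidate is a non-proof, yet the noisy checker accepts each one with
probability p, so the wait until the first false acceptance is
geometric with mean 1/p, i.e. finite with probability 1.

import random

P_FALSE_ACCEPT = 1e-6  # arbitrary illustrative value of p

def noisy_check(candidate):
    # Pretend no candidate is a genuine proof; the checker still
    # accepts with probability p, independently on each call.
    return random.random() < P_FALSE_ACCEPT

def search_until_false_acceptance(seed=0):
    # Enumerate candidates until the checker (wrongly) accepts one.
    # The waiting time is geometric, so this loop terminates with
    # probability 1 -- the search *will* hand us a bogus "proof".
    random.seed(seed)
    candidate = 0
    while not noisy_check(candidate):
        candidate += 1
    return candidate

print("first false acceptance at candidate",
      search_until_false_acceptance())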

One reaction to the example above would be to say: yes, the algorithm
would eventually validate an incorrect proof, but if we think of it as
producing proof candidates which can subsequently be checked again, we
are OK. In other words, suppose the proof checker's errors are
independent of the proof itself (say the possibility of an incorrect
validation is due to random electrical fluctuations). If you run the
probabilistic proof checker on the proof candidate a second time, you
reduce the probability of an incorrect validation surviving both runs
to p^2, which is almost zero. I have some sympathy for this view, but
a danger still remains. Suppose there is an even smaller, but this
time systematic, sort of error, occurring with probability q (in some
sense). However small q is, the algorithm will eventually validate a
proof incorrectly because of it; and since the error is systematic,
running the proof checker on the same candidate again will incorrectly
validate it again.
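
Here is a small sketch of the contrast (both checkers are fabricated
for illustration): repeating an independently erring check k times
drives the chance of a false validation down to p^k, while a
systematic bug passes every repetition unchanged.

import random

P_RANDOM = 1e-3  # chance of a random mis-validation per run

def random_error_check(candidate):
    # Non-proof wrongly accepted with probability P_RANDOM,
    # afresh on every run.
    return random.random() < P_RANDOM

def systematic_check(candidate):
    # A deterministic bug: any candidate containing the substring
    # "XYZ" is always accepted, however often we re-check.
    return "XYZ" in candidate

def k_fold_check(check, candidate, k=5):
    # Accept only if all k runs accept.
    return all(check(candidate) for _ in range(k))

random.seed(0)
bogus = "lemma XYZ therefore QED"
print(k_fold_check(random_error_check, bogus))  # almost surely False
print(k_fold_check(systematic_check, bogus))    # always True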

Actually, this isn't a problem for finite questions like "Does this
graph have a Hamiltonian path?", because if there is any proof then
there must be a short one: the path itself is a witness whose length
is bounded by the size of the graph. So for small enough probabilities
p and q from above you don't have to worry. But most potential
theorems aren't like that: potential proofs could be of arbitrary
length.
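
For instance, a deterministic verifier for a claimed Hamiltonian path
is short and exact (the graph encoding and names below are my own
illustration): the certificate is just the path, whose length is
bounded by the number of vertices, and checking it involves no
probability at all.

def is_hamiltonian_path(adjacency, path):
    # The path must visit every vertex exactly once...
    if sorted(path) != sorted(adjacency):
        return False
    # ...and consecutive vertices must be joined by edges.
    return all(b in adjacency[a] for a, b in zip(path, path[1:]))

g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # path graph on 4 vertices
print(is_hamiltonian_path(g, [0, 1, 2, 3]))  # True
print(is_hamiltonian_path(g, [0, 2, 1, 3]))  # False: 0-2 not an edge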

One last objection: you might say that there is a possibility of a
systematic human proof-checking error, just as there is a possibility
of a systematic error in a probabilistic proof-checking program. I
think this is true, and worth thinking about. The difference is that
because the length of humanly surveyable proofs is bounded by a
relatively small size, and we feel that the probability of a
systematic human proof-checking error is small, this is very unlikely
ever to be a problem. If proof searching with the algorithm described
above became feasible, such errors might become an everyday event.

Either way, I certainly don't think we should disregard probabilistic
methods of proof, proof checking and proof searching like the above.
But I do feel that there should always be a place for ordinary sorts
of proof, and that as and when these algorithms become feasible we
should think very carefully about the way they are used, as well as
the way they work.

What would be really interesting is if one could write a program that
takes a computer-generated proof and attempts to transform it into
something humanly surveyable (e.g. by trying to extract lemmas and
definitions). It would be a sort of compression algorithm. This,
combined with a proof-finding algorithm, would be the best option. I
think I've gone far enough ahead into the realms of fantasy for the
moment, though.
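
Purely as a fantasy sketch of what such a compression pass might do
(every name here is hypothetical): treat the proof as a sequence of
steps and pull out recurring sub-sequences as candidate lemmas, in the
manner of dictionary compression.

from collections import Counter

def extract_lemmas(steps, window=3, min_uses=2):
    # Slide a window over the proof and count repeated sub-sequences;
    # any block recurring at least min_uses times is a lemma candidate.
    blocks = [tuple(steps[i:i + window])
              for i in range(len(steps) - window + 1)]
    counts = Counter(blocks)
    recurring = [b for b, c in counts.items() if c >= min_uses]
    return {"lemma_%d" % n: list(b) for n, b in enumerate(recurring)}

proof = ["expand", "rewrite", "simplify", "case-split",
         "expand", "rewrite", "simplify", "qed"]
print(extract_lemmas(proof))
# {'lemma_0': ['expand', 'rewrite', 'simplify']}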

Dan Goodman


