[FOM] Certificates are fully practical

Timothy Y. Chow tchow at alum.mit.edu
Sun Sep 29 15:28:49 EDT 2013


Arnold Neumaier wrote:

> No mathematician cares (unless they work in mathematical logic or set
> theory) what happens with proofs that are significantly longer than
> that. Thus a foundation of mathematics that guarantees this - for the
> collection of concepts they are using and generating with their
> definitions - is fully adequate.

I'm not sure if you're agreeing with Alan Weir or not.

I think it's overstating the case to say that no mathematician cares what 
happens with proofs that are too long to be actually verified.  Consider 
examples such as Catalan's conjecture or Kepler's conjecture.  In both 
cases, they were reduced to a finite but infeasible computation a rather 
long time before they were completely settled.  If it were the case that 
"no mathematician cared" then the reduction to a finite computation would 
presumably have been considered no better than no proof at all.  But 
that's not in fact how they were viewed.  The reduction to a finite 
computation was regarded as "almost" a proof of the conjecture itself.
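
To make concrete what "reduction to a finite computation" means here, 
consider a toy sketch in Python (purely illustrative; the search bound 
below is a tiny placeholder, whereas the bound produced by the actual 
reduction of Catalan's conjecture was far too large to enumerate):

    import itertools

    def catalan_solutions(bound):
        # Search for solutions of x^p - y^q = 1 with 2 <= x, y, p, q <= bound.
        # Catalan's conjecture (now Mihailescu's theorem) says (3, 2, 2, 3)
        # is the only one, i.e. 3^2 - 2^3 = 1.
        hits = []
        for x, y, p, q in itertools.product(range(2, bound + 1), repeat=4):
            if x**p - y**q == 1:
                hits.append((x, y, p, q))
        return hits

    print(catalan_solutions(20))   # prints [(3, 2, 2, 3)]

The point is that "check everything below the bound" is a perfectly 
definite procedure, and mathematicians treated having such a procedure as 
real progress even when carrying it out was hopeless.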

Alan Weir's contention is that there is no useful sense in which "finite" 
models "in principle possible."  Again, if that were true, it would be 
mysterious why the above reductions to finite computations accomplished 
anything at all.

The same sort of objection sometimes comes up in the context of 
computational complexity, where "polynomial time" is often used as a 
surrogate for "feasible."  The supposed objection is that this is a 
mistake, since any computation that exceeds our actual resources, or at 
least any polynomial with too large an exponent, doesn't deserve the name 
of "feasible."  But the presupposition behind the objection is that models 
have to be exactly right to be useful.  Anyone who actually does applied 
mathematics knows that such a principle is nonsense.  Computational 
complexity as traditionally formulated has been enormously useful in 
guiding the search for feasible computations.  Had we hamstrung ourselves 
by insisting on exact matches to reality, we'd still be arguing about 
whether the concept of an "algorithm," with its implicit extrapolation to 
the realm of infeasible computations, was a useful concept.  Meanwhile the 
practitioners would have come up with actual algorithms and would be using 
feasible versions of those algorithms to perform feasible computations.
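
As a toy illustration of that guidance (again in Python, and not tied to 
any particular result in the literature), the asymptotic analysis is 
precisely what tells you that the second routine below is worth writing 
down, even though both are "finite" procedures:

    from functools import lru_cache

    def fib_naive(n):
        # Exponential time: the recursion tree has roughly phi^n nodes.
        return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

    @lru_cache(maxsize=None)
    def fib_memo(n):
        # Linearly many distinct subproblems once results are cached.
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

    # fib_naive(40) already takes on the order of a minute in CPython;
    # fib_memo(400) returns essentially instantly.
    print(fib_memo(400))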

There's always room for improvement of models, of course.  But if we're 
going to talk about "useful" adjustments to models, then the touchstone 
should be whether they advance science, not whether they toe the line to 
some dubious a priori philosophical principle.

Tim

