[FOM] formal proofs

Harvey Friedman hmflogic at gmail.com
Mon Oct 13 21:24:07 EDT 2014


Concerning this thread regarding proof verification.

I think it might be useful to focus on some key issues. I have not
kept up enough with the details of what is going on in recent years,
so I view myself as an outsider. I mentioned in a previous posting a
specific f.o.m. issue that the development of this area has handled
remarkably well: namely, the "actual reality" of absolutely rigorous
difficult proofs.

1. Exactly why and exactly in what sense should we trust this kind of
"absolute rigor"? Since the answer is certainly at least "in several
really good and interesting senses", one should raise the bar for an
answer to this question very, very high. How high can the bar actually
be raised? Another way of talking about this is perhaps the
recognition that there has to be some "core" human involvement that is
"absolutely accepted to be absolutely obvious". But what should such a
"core" consist of? What kind of obviousness is involved? Can this
"core" be simplified? Can this "core" be ultimately simplified, so
that it cannot be simplified further? And what does all this mean -
say, what does all this mean, rigorously? I.e., where is the general
theory of "cores"?

2. In connection with 1, there is the prior issue of whether one has
properly formalized the very statements being "proved". Is this an
issue, or should it be an issue? If so, how do you deal with this
issue?
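
A toy illustration of the statement-formalization issue: the
irrationality of sqrt(2) is commonly formalized with no square roots
and no real numbers in sight, and one must judge whether the formal
statement says what the informal one does. A schematic Lean rendering
(the proof is elided; only the statement is at issue):

```lean
-- Schematic only: the informal claim "√2 is irrational" rendered
-- with no square roots and no real numbers in sight.  Whether this
-- rendering captures the informal statement is itself a judgment.
theorem sqrt_two_irrational :
    ¬ ∃ p q : Nat, q ≠ 0 ∧ p * p = 2 * (q * q) := by
  sorry  -- proof elided; only the statement is at issue here
```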

3. The price of proof verification for any kind of state of the art
mathematics is still extremely high. And the price of an error in the
usual state of the art mathematics is not all that high. At least not
all that high in several senses. One is that typically errors range
from being not fatal to being trivially fixable. One question is this:
under what circumstances and in which examples is there any kind of
proper balance between these prices? And can we argue that perhaps we
should not be comparing these prices?

4. What do we expect in terms of the trajectory of the price of
absolute rigor? In what kind of mathematics? What kinds of further
developments are needed in order to reduce this price? How responsive
is current research to the need for such further developments to
reduce the high price?

5. There seem to be opportunities for reducing the price on several
fronts. How are we doing with the exploitation of these opportunities?

a. The development of what has sometimes been called "little engines
of proof", which are specific decision procedures that work "under the
hood". An extremely old and useful one is the decision procedure for
Presburger arithmetic. My understanding is that full Presburger
rapidly leads to too great a use of computer resources (the decision
problem is provably at least doubly exponential in the worst case),
but there is a lot of interesting work that has tamed the resources
here and has been applied to interesting situations. My impression
also is that even just Presburger is fertile ground for deep new work
along these lines that might play a limited role in reducing the
price. Another, of course, is the various fragments of real closed
fields (real algebra). But there is also work on adding
exponentiation, trig functions, and so forth. What is the state of
the art here, and what is the state-of-the-art agenda?
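
For concreteness, here is roughly what invoking such an engine looks
like today. A minimal sketch, assuming the Z3 SMT solver and its
Python bindings (the z3-solver package); Z3 implements decision
procedures for linear integer arithmetic, i.e. the Presburger
fragment, including quantifiers:

```python
# A small Presburger-arithmetic fact checked by Z3, a modern "little
# engine of proof" (assumes: pip install z3-solver).
from z3 import And, Exists, ForAll, Implies, Ints, Not, Solver, unsat

x, a, b = Ints('x a b')

# Every integer x >= 8 is 3a + 5b with a, b >= 0: a "coin problem"
# fact, statable entirely within Presburger arithmetic.
claim = ForAll(x, Implies(x >= 8,
        Exists([a, b], And(a >= 0, b >= 0, x == 3 * a + 5 * b))))

s = Solver()
s.add(Not(claim))  # proving = failing to find a counterexample
print("proved" if s.check() == unsat else "not proved")
```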

One should also not be frightened of algorithmic undecidability.
Ultimately, one wants to work with really finite fragments. After all,
these "little engines of proof" are going to be used by actual people
in actual situations.

b. No matter what the state of the art is with regard to a, you are
eventually going to have to fly blind in an arena where there are no
useful "little engines of proof". This is the realm of general-purpose
proof search. Over long periods of time, perhaps this realm gets
pushed higher and higher as a gets more and more highly developed and
useful. Here, of course, decision procedures for pure logic enter, and
we can also think of these as part of a, of a different character than
the "mathy" ones.

c. So beyond a and b there is still inevitably going to be "flying
blind", where the human is sitting at a computer terminal and
completely "calling the shots" with just "contentless organizational
support". (Actually, what is this?) Then we get into the realm of
human interfaces. This is also crucially important, as right now, user
interfaces are notoriously painful almost everywhere - except when
large corporations really get serious about making money and move into
"user friendliness". (The US Government isn't so good at this -
Obamacare.) E.g., I am kind of impressed with WordPress, at least
compared to much earlier such software. Is there a theory of
"interface software" that can be developed? Perhaps with theorems that
certain user interfaces are optimal in various senses? Can we quantify
how good WordPress is?

Harvey Friedman

