[FOM] formal proofs

Harvey Friedman hmflogic at gmail.com
Fri Oct 17 22:48:54 EDT 2014


On Wed, Oct 15, 2014 at 3:01 PM, Till Mossakowski
<mossakow at iws.cs.uni-magdeburg.de> wrote:
> Am 14.10.2014 03:24, schrieb Harvey Friedman:
>> 1. Exactly why and exactly in what sense should we trust this kind of
>> "absolute rigor"? Since the answer is certainly at least "in several
>> really good and interesting senses", one should raise the bar for an
>> answer to this question very, very high. How high can the bar actually
>> be raised? Another way of talking about this is perhaps the
>> recognition that there has to be some "core" human involvement that is
>> "absolutely accepted to be absolutely obvious". But what should such a
>> "core" consist of? What kind of obviousness is involved? Can this
>> "core" be simplified? Can this "core" be ultimately simplified, so it
>> cannot be simplified further? And what does all this mean - say, what does
>> all this mean, rigorously? I.e., where is the general theory of
>> "cores"?
>>
> There is the LCF-style architecture used by some theorem provers, e.g.
> Isabelle.
> It entails that the type system of the programming language (typically
> ML) ensures that proofs can only be constructed by a small trusted
> code-base (typically a few hundred or thousand lines). The
> obviousness involved is that for more than a decade, no flaw has been
> found in the Isabelle trusted code-base (nor in the implementation of
> the ML type system, which of course has to be trusted as well).
>
> See e.g.
> http://events.inf.ed.ac.uk/Milner2012/slides/Paulson/LCF+-without-transitions.pdf
> or chapter 8 of "Mechanizing Proof: Computing, Risk, and Trust" by MacKenzie
>
> Then, there are also verified theorem provers, but as far as I know,
> only experimental versions, e.g.
> http://link.springer.com/chapter/10.1007%2F11541868_19
> In the future, this could be used for a kind of bootstrap procedure,
> where a prover like Isabelle is used to verify its own trusted code-base.
>
Thanks for this informative reply. It looks like, however impressive
what we have is, there is still a lot of room for something much
better in the way of absolute rigor.
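
Just to fix ideas about the LCF-style architecture described above,
here is a minimal sketch in OCaml (all names here are made up for
illustration, not taken from any actual prover): the abstract type thm
can only be populated by the kernel's own inference functions, so the
ML type checker confines what has to be trusted to those few lines.

    (* the kernel: the only code that can manufacture values of type thm *)
    module type KERNEL = sig
      type formula = Var of string | Imp of formula * formula | Neg of formula
      type thm                              (* abstract: no constructor leaks out *)
      val concl : thm -> formula            (* read off what was proved *)
      val axiom_k : formula -> formula -> thm    (* |- A -> (B -> A) *)
      val modus_ponens : thm -> thm -> thm  (* from |- A -> B and |- A, get |- B *)
    end

    module Kernel : KERNEL = struct
      type formula = Var of string | Imp of formula * formula | Neg of formula
      type thm = formula                    (* hidden: a thm is just its conclusion *)
      let concl t = t
      let axiom_k a b = Imp (a, Imp (b, a))
      let modus_ponens tab ta =
        match tab with
        | Imp (a, b) when a = ta -> b
        | _ -> failwith "modus_ponens: premises do not match"
    end

Everything outside Kernel can only obtain a thm by composing these
functions, which is the precise sense in which only the kernel (and the
type system enforcing the abstraction) has to be trusted.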

I haven't really worked on this, but have the ambition to do so in the
following way. Make a high-level design of a "dream system" from
scratch, at first paying no attention to existing developments. This
will surely lead to a number of interesting and perhaps clean
theoretical questions and theorems. Then present it to experts in the
existing practical systems, who will be able to readily compare it to
reality and offer criticism.

But we need to start somewhere to even define the problem. A purist
starting point would be primitive ZFC, where we have a claim that a
given file constitutes an absolutely correct proof of A in ZFC. Of
course, since we are working with primitive ZFC (no sugar), no human
is even going to look at this monstrosity. There is the problem of
even getting a real-life math statement into primitive notation
absolutely correctly. But I would like to separate off that question
and take it for granted here.
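
Just to indicate what "no sugar" means here, a hypothetical sketch (in
OCaml, purely for illustration) of the raw syntax one would be
committing to: atomic formulas are membership and equality between
variables, and nothing else, and a purported proof is nothing but a
finite sequence of such formulas.

    (* primitive ZFC syntax: no defined symbols, no abbreviations *)
    type var = int                        (* v0, v1, v2, ... *)
    type formula =
      | Mem of var * var                  (* v_i is a member of v_j *)
      | Eq  of var * var                  (* v_i equals v_j *)
      | Not of formula
      | Imp of formula * formula
      | All of var * formula              (* for all v_i, ... *)

    (* a purported proof is just a finite list of formulas; whether it
       really is a proof is entirely the checker's business *)
    type purported_proof = formula list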

So now the problem is to see just how absolutely sure we can be that a
given file really is a proof of A in pure ZFC. In this model system,
it is up to the proof assistants to create the file purporting to be a
proof of A in pure ZFC. The proof assistants don't have to be
absolutely correct. The only role that proof assistants play in this
model system is to export files that can then be independently checked
to be proofs of A in pure ZFC (certificates).
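
The division of labor might then look like the following sketch (again
OCaml, again with made-up names, building on the syntax sketched
above). The proof assistant, however elaborate and however untrusted,
only emits a file; the small checker re-derives the verdict from
scratch, with all of the real content sitting in the recognition of
axioms.

    (* sketch of the trusted checker.  axiom_ok must recognize the logical
       axiom schemes and the ZFC axiom schemes; everything else is
       bookkeeping: a line is accepted only if it is an axiom or follows
       from two strictly earlier lines by modus ponens, and the last line
       must be A itself. *)
    let is_proof_of (axiom_ok : formula -> bool) (a : formula)
        (lines : purported_proof) : bool =
      let justified earlier f =
        axiom_ok f
        || List.exists (fun g -> List.mem (Imp (g, f)) earlier) earlier
      in
      let rec run earlier = function
        | [] -> (match earlier with last :: _ -> last = a | [] -> false)
        | f :: rest -> justified earlier f && run (f :: earlier) rest
      in
      run [] lines

Nothing about the proof assistant that produced the file enters into
this verdict; the certificate stands or falls on its own.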

Now create a reasonably low-level formal theory T1 of finite strings
and a reasonably low-level programming language L1, with T1 designed to
state and prove statements like "code alpha in L1 correctly determines
whether or not a string delta is a proof in pure ZFC". Then we create
a proof P1 in T1 of this fact. But how do we know that P1 is a proof
in T1 as claimed?
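
Spelled out, the sentence to be proved in T1 would have, roughly, the
shape

    (for all finite strings delta)
       (alpha halts on delta, and alpha outputs "yes" on delta
        if and only if delta is a proof in pure ZFC)

where halting, output, and "is a proof in pure ZFC" are all predicates
on finite strings that T1 itself has to define. This is only one
hypothetical rendering of the intended claim, not a commitment to a
particular T1.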

We can repeat the process using T1 and L1 again. But it would seem
that for this purpose, we may be able to lower the level of T1 and of
L1, which may be a good idea.

Then we continue this process for many steps. Now are we gaining
anything by repetition here? We would if we actually get shorter and
shorter proofs in weaker and weaker L's and weaker and weaker T's. Of
course, at some point, we cannot continue reducing. Is there a
natural limit to such a procedure?
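
Schematically, and only as a hypothetical picture of the iteration:

    step 0:  a file delta, claimed to be a proof of A in pure ZFC
    step 1:  a checker alpha1 written in L1, and a proof P1 in T1 that
             alpha1 correctly recognizes proofs in pure ZFC
    step 2:  a checker alpha2 written in L2, and a proof P2 in T2 that
             alpha2 correctly recognizes proofs in T1
    ...

The gain, if any, would come from each Tn, Ln, and Pn being genuinely
simpler than its predecessor, and the question is where, if anywhere,
this bottoms out.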

I find this whole approach rather slippery in terms of what the
details would look like, and whether human imperfections are creeping
in, in subtle ways. I need to think more deeply about this.

Harvey Friedman

