[FOM] Responses re Certainty, Simplified Foundations

Harvey Friedman friedman at math.ohio-state.edu
Sat Jun 21 21:31:58 EDT 2003


Hodges 10:11AM  6/21/03:

>"Everything is exactly one of: a set, an ordered pair, or a real number"
>
>To have this as an axiom just means that the set of axioms for math
>makes an assertion about the real world which math has no business to
>make, an assertion which is in any case false (it denies the existence
>of the person making it, for one thing).   Rather, one should say that
>only reals, the null set, and constructions of sets and pairs based on
>these urelements are the subject of mathematics.
>
>To have "Everything is exactly one of: a set, an ordered pair, or a real
number" as an axiom gives rise to another, more subtle problem.

Perhaps Hodges didn't read my 5:51PM 6/20/03 before writing his 
posting. There I suggest removing the axiom that Hodges is criticizing.

My reason for this is different from Hodges'. Hodges' reason is not 
the reason that I would choose, since one need not read 
"everything" as anything more than "all mathematical objects". After 
all, this and related systems are proposed only as a foundation for 
mathematics, and not a foundation for all thinking. So the axiom does 
not "deny the existence of the person making it [the assertion]".

Rather, the reason I give is that the axiom is not at all required, 
and also mathematicians may want to introduce new objects that are 
not sets, ordered pairs, or functions. I have been influenced by 
Blass's posting that I refer to in 5:51PM 6/20/03.

Lerma 4:07PM 6/20/03:

>I do not think that there is any way to guarantee that
>a mathematical proof is completely correct with absolute
>certainty. To me a proof that is exposed for everybody to
>check and resists the test of time (i.e., time passes and
>nobody can find any serious error in it) is good enough.

I disagree with your pessimism about absolute certainty. See 
my next posting, #182, for more of my point of view. If a file 
appears that meets very elementary criteria according to a 
sufficiently low-level verified system, then that constitutes 
absolute certainty in a sense that is at a much higher level than 
humans becoming comfortable with a complicated proof through 
seminars, etc.

Of course, one would like to say more to back up the claim that I 
have just made, and to create some examples of suitably complicated 
proofs which have been transferred, through human/computer 
interaction, into the appropriate kind of files.

There may be a temporary period where only proofs of limited 
complexity have been verified in this sense, and where the level of 
"certainty" people already have about such proofs is so high that 
there may not be immediate acceptance of the idea that something 
very significant is gained on the "certainty" scale by formal 
methods.

However, in the verification of mathematical statements of the form 
"such and such program behaves in such and such a way", the whole 
business may be more convincing in several respects. Of course, I am 
aware that inroads are being made in formal verification, especially 
of hardware, but perhaps not yet in general purpose software.
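
To fix ideas, here is an invented example of a statement of that 
form, written as a small OCaml sketch; the function and the property 
are my own illustration, not anything drawn from an existing 
verification effort.

    (* An invented example of "such and such program behaves in such
       and such a way".  A formal verification would prove the
       property for all inputs, rather than merely test it. *)

    let max3 a b c = max a (max b c)

    (* Claimed behaviour: the result is at least as large as each
       argument and is equal to one of them.  This is the shape of
       statement whose formal proof would be checked. *)
    let max3_spec a b c =
      let m = max3 a b c in
      m >= a && m >= b && m >= c && (m = a || m = b || m = c)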

You say "..... is good enough". But I believe that the issue being 
addressed where that is NOT good enough, is of major foundational 
significance.

In fact, one way of looking at what I am talking about is to 
formalize a new kind of proof: the presentation of a file that 
shows that the statement to be proved has a proof in T.
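
As a rough illustration only (the record names and the shape of the 
rules are my own assumptions, not a description of any existing 
format), such a file might amount to something like the following 
OCaml data, where every step of the proof in T carries explicit 
pointers back to its premises, so that checking is a matter of 
trivial lookups:

    (* A hypothetical sketch of a proof-certificate file for a
       theory T. *)

    type formula = string       (* a fully explicit formula of T    *)

    type step = {
      concl    : formula;       (* the formula derived at this step *)
      rule     : string;        (* the rule of T that was applied   *)
      premises : int list;      (* pointers to earlier steps        *)
    }

    type certificate = {
      theorem : formula;        (* the statement claimed proved     *)
      steps   : step array;     (* the proof in T, step by step     *)
    }

    (* The checker need only confirm that every pointer points
       strictly backwards, that each rule application is locally
       correct, and that the last step derives the stated theorem.
       The backwards-pointer check, for instance, is trivial: *)
    let well_pointed (c : certificate) : bool =
      let ok = ref (Array.length c.steps > 0) in
      Array.iteri
        (fun i s ->
          if not (List.for_all (fun p -> 0 <= p && p < i) s.premises)
          then ok := false)
        c.steps;
      !ok && (c.steps.(Array.length c.steps - 1)).concl = c.theorem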

I think that there is great doubt, at the subconscious and 
unconscious level, among practicing mathematicians, that finished, 
polished mathematics is capable of complete formalization in the 
sense of complete proof. They grossly underestimate the power of 
formal methods and formalization generally.

It would seem that the project of seeing that one can go essentially 
all the way is of general intellectual interest.

Goodman:

>
>
>One essay discusses proof checking programs, and argues that we can't rely
>on them because as well as the unreliability of the software and hardware
>involved, the translation process from mathematics as usually written to a
>form the computer can understand is itself highly susceptible to error. The
>author (or possibly authors) give the example of the translation of the
>usual statement of Fourier's theorem into the language of the proof checking
>program - it is almost completely indigestible. This doesn't seem like a
>difficulty in principle though, more a comment on the current state of the
>art.

As far as I know, the automated deduction/proof checking people have 
paid very little attention to the crucial issue of language 
construction. For their purposes, they may not have to. 
Unfortunately, this does make the kind of thing I am talking about 
impossible in practice, using their tools, except possibly for a very 
small number of highly committed workers - and then it will 
undoubtedly hamper just what they will want to try with the limited 
time available to them (limited in the sense that everybody's time 
is limited).

Also, the construction of appropriate languages for the formalization 
of mathematics is not a straightforward topic, as one must strike a 
balance between ease of use in reading/writing, and the complexity of 
the system as a metamathematical object that this project requires 
one to prove theorems about. Also, one must understand semantics 
properly, and how mathematicians like to, or, more importantly, 
should like to, read and write. Certain notation built up over 
centuries in mathematics is fatally bad for this purpose, yet 
mathematicians cling to it. It must be replaced. Other notation is 
quite good, and must be conserved. The proper approach to this is 
something that I think I know, but it is not entirely obvious.

>Following on from this, another essay in the same book proposes that we look
>at the process of mathematical proof as being like engineering. We cannot be
>assured that the proof is correct because there are so many ways, known and
>unknown, to make a mistake.

The correctness issue is what is being addressed by this project. 
First one interacts with an interactive proof checker and produces a 
file. Then the file is shipped to the UMD and certified by its core 
proof checker. The world is then happy. Any human mistakes and 
omissions are going to be caught in the interaction with the 
interactive proof checker. Any bugs in the interactive proof checker 
are going to be exposed by the core proof checker.
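
Schematically, and with every identifier invented purely for this 
sketch, the division of labour just described looks something like 
this:

    (* A schematic OCaml signature for the two-stage workflow just
       described.  Every name here is a hypothetical placeholder. *)
    module type WORKFLOW = sig
      type informal_argument  (* whatever the human author starts from *)
      type certificate        (* the fully explicit file that is shipped *)

      (* Stage 1: a rich interactive proof checker, possibly large and
         possibly buggy, helps the author produce the file. *)
      val interactive_session : informal_argument -> certificate

      (* Stage 2: the small core proof checker re-verifies the shipped
         file.  Only its correctness matters for the final guarantee;
         any bug upstream shows up here as a rejected certificate. *)
      val core_check : certificate -> bool
    end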

>However, we can guard against the known ways in
>which proofs can fail. Methods include: finding an alternative proof or two,
>checking proofs by computer using two or more different proof checking
>programs, getting more than one person to check the numerical calculations
>(if there are any) using different methods, and so forth. In particular, if
>a different approach yields the same result, that tells us more than, say,
>running the same proof checking program through different compilers or on
>different computers.

These alternative methods depend on reasonable probabilistic 
reasoning, but the method I am talking about, in the case of 
verifying mathematical theorems, seems much preferable. One has a 
certified object coming out of it, which can be the source of various 
manipulations into other forms, etcetera.

>
>Ideally we want to make our proofs such that isolated small mistakes don't
>ruin the entire proof. What's interesting is that informal proofs are often
>more stable than purely formal proofs because they are driven by ideas
>rather than purely symbolic calculations, and if the ideas are right then a
>mistake in the formalism should be easily corrected. The slightly
>counterintuitive consequence of this is that maybe we should actually prefer
>a slightly informal proof driven by strong ideas to a formal proof checked
>by a computer.
>

Of course I "prefer" a slightly informal proof driven by strong ideas 
to a file that I cannot read. In fact, I "like" a quite informal 
proof sketch driven by strong ideas even better. But if you saying 
that the former creates more certainty than the latter, then I don't 
agree. This might be the case for sufficiently simple proofs.

Putting aside such perhaps controversial issues, I think it is quite 
interesting to see just how far one can go with this project.

Jones 7:31AM 6/21/03:

>Probably the most ambitious "practical" approach
>to realising near-absolute certainty in demonstrating
>mathematical propositions was the "CLINC stack".
>
>This was a project conceived by the now defunct
>Computation Logic Inc (CLINC) to build a stack of
>verified components.
>
>i.e. a verified microprocessor running verified
>operating system, language compilers and theorem
>provers.
>
>Computational Logic was a company formed by
>     * Robert S. Boyer
>     * Richard M. Cohen
>     * Donald I. Good
>     * J Strother Moore
>     * Michael K. Smith
>
>I think originally based around the "Boyer-Moore"
>theorem prover developed by Boyer and Moore
>and the Gypsie program verification system of
>Don Good.
>
>CLINC was particularly strong in hardware verification
>and Warren Hunt at CLINC accomplished the first complete
>formal verification of a microprocessor.
>
>This was largely funded by the US government during a
>period (long ago) in which mathematical proofs were thought
>to be an important way of realising more secure computer
>systems.
>
>CLINC => http://www.cli.com/corp/research.html

There is obviously great interest associated with verification, in 
many senses, of "a microprocessor running an operating system, 
language compilers and theorem provers", but I think that what I am 
proposing is more manageable, and avoids a great deal of that work.

>
>A more recent attempt at something similar was the
>EU funded PROCOS project.
>http://www.afm.sbu.ac.uk/procos/
>
>I would have thought that for FOM more theoretical or
>philosophical lines of investigation are more interesting.
>In particular:
>
>(1) The problem of regress in the foundations of semantics.
>and
>(2) How to achieve maximum confidence that formal systems
>     are sound with respect to their semantics.
>
>Of course, to address these, particularly (1), you need
>to recognise that there is some uncertainty about what
>(e.g.) set theory _means_, and be interested in saying something
>more informative than that set theory is "about sets".
>

The FOM pursues many "theoretical and philosophical lines of 
investigation", but I think this one is of real value, as many others 
are. Obviously, your (1) and (2) are ever-present, at least adjacent 
to the discussion.

In fact, what I am writing here is definitely controversial. 
Some people say what I am trying to do is impossible, whereas other 
people say that it is already known. Perhaps the truth is in between?

Jones 5:36PM 6/21/03:

>On Saturday 21 June 2003  6:04 am, Harvey Friedman wrote:
>
><snip>

Friedman:

>
>>  We now come to the novel(?) idea here.
>>
>>  The idea is that so much information in the form of pointers,
>>  etcetera, into the proof in T is given after the end of the
>>  proof in T, that this alleviates the need for parsers, string
>>  searchers, and the like in the core proof checker. (We have
>>  not yet come to the core proof checker).
>
><snip>
>
>>  Extreme annotation will be so explicit that the core proof
>>  checker has only to do completely trivial grunt work of the
>>  most elemental kind.
>>
>>  We think that one should be able to easily program the core
>>  proof checker in an appropriate MACHINE LANGUAGE, and have a
>>  mathematician write a completely readable correctness proof of
>>  the machine language program that is completely convincing to
>>  any mathematician. Special purpose hardware would circumvent a
>>  number of verification issues.

Jones 5:36PM 6/21/03:

>
>The idea that machine verification can and should be organised
>in such a way as to make the core proof checker a very simple
>program is by no means new.

Good.

>
>There are at least two well-known paradigms for verification
>which are based on this idea.
>
>The most famous and widely used is the "LCF paradigm",
>which originates in the proof assistant for Dana Scott's
>Logic for Computable Functions (LCF) which was developed
>at the University of Edinburgh in the 70s (i.e. about
>30 years ago), by a team led by Robin Milner.
>
>In this method the key code critical to
>proof checking is separated out into a small logical
>kernel and the more elaborate programming which supports
>user interfaces or sophisticated proof search algorithms
>is thereby rendered incapable of making mistakes which
>go undetected.

Yes, that is just as I like it.

I was hinting at an intensive study of just how low-level the core 
proof checker can be, subject to its being of reasonable size and 
humanly digestible, including the accompanying design of crude custom 
hardware. This would seem to me to be potentially quite interesting, 
and self-contained enough to also lend itself to formal verification. 
The hardware and physics issues may be quite amenable in a focused, 
limited context like this.
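
For readers who have not seen the LCF idea in code, here is a 
deliberately toy caricature in OCaml (a descendant of the ML that 
Jones mentions below); the miniature logic and its two rules are my 
own invention and do not correspond to any real system's kernel. The 
point is only the abstraction barrier: values of type thm can be 
created by the kernel's rules and in no other way, so mistakes in the 
code outside the kernel cannot manufacture false theorems.

    (* A toy caricature of an LCF-style kernel, for a trivial logic
       with one axiom and one rule.  Purely illustrative; this is not
       the kernel of HOL, Isabelle, Coq, NuPRL, or any real system. *)
    module Kernel : sig
      type formula = True | And of formula * formula
      (* thm is abstract: a value of this type can only be built by
         the rules below. *)
      type thm
      val truth    : thm                 (* the sole axiom: |- True    *)
      val conj     : thm -> thm -> thm   (* from A and B, infer A /\ B *)
      val concl_of : thm -> formula      (* inspect, never forge, a thm *)
    end = struct
      type formula = True | And of formula * formula
      type thm = Thm of formula
      let truth = Thm True
      let conj (Thm a) (Thm b) = Thm (And (a, b))
      let concl_of (Thm f) = f
    end

    (* Code outside Kernel -- user interfaces, proof search, tactics --
       can be arbitrarily elaborate, yet it can only obtain theorems
       by going through the two rules above. *)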

>
>As well as establishing an architecture for reliable proof
>verification, this research invented a new functional programming
>language for developing proof checkers, which was called
>"ML" (for meta-language).  Descendents of this language
>are used to this day in the programming of many proof assistants,
>which when implemented in these languages usually follow
>the LCF paradigm.  Among the many such tools are:
>
>	HOL (a tool for Higher Order Logic)
>	ISABELLE (A generic proof system)
>	COQ (for Coquands calculus of constructions)
>	NUPRL (constructive type theory)

Of course, what I don't like about all such systems that I am 
familiar with is that the language for the formalization of actual 
mathematics that is used by these systems is worse than bad. Bad 
enough to make a sensitive logician cry.

>A second well documented method of maximising the assurance
>with which proofs can be checked is that mandated by
>UK Defence Standard 00-55 for use in the development of
>safety critical military software.
>This involves the use of verification software which
>is capable of exporting a primitive proof script
>for verification by an independent proof checker.
>Such a proof checker is envisaged as being simple
>enough to be itself readily verifiable.
>
>I am myself less familiar with the application of this
>method, but I believe there are  proof tools which have
>the capability to export proofs for this purpose.

I'm glad that these ideas have credibility. I am proposing them with 
the phrase "novel(?)", with a question mark, because

a) I suspect they are not entirely new;
b) I can see that they are clearly very feasible: I could 
envision doing them myself in a handful of man-years;
c) since I suspect that more than a few man-years have gone into 
previous attempts, maybe I have clearer and cleaner ideas of how to 
go about doing these things than people did before.

>
>On a brief perusal, the least credible aspect of your
>proposal is that your language T (and other aspects
>of your proposal) should become a universal standard
>adhered to throughout the world.
>
>There is not the least chance that this could happen,
>nor should it!
>

I disagree strongly with you on both counts.

We need to have a gold standard for the communication of (verified) 
mathematical information (of the kind interesting to professional 
mathematicians). We need this even more urgently in the programming 
contexts, but it is much much much easier to set up standards for the 
communication of mathematical information.

In fact, a successful completion of this for mathematical information 
will point the way towards doing this in the programming contexts, 
and will even be an essential tool for it. But that is down the 
road...

It is also clearly very doable, and doable in an interesting way. 
There is a great deal of very standard implicitly recognized logical 
structure to mathematics.

I am falling short of asking for a gold standard way of presenting 
actual mathematical proofs that are meant to be written and read by 
human beings. That, too, is needed, but is much harder.

HTML, ASCII, MIDI, TeX, RTF, HTTP, etc., not to mention various 
engineering standards, and the EURO, are of varying degrees of 
success as standards, and they, and many others, play an essential 
role in the world.

When put in place, it will support very useful queries, e.g., "give 
me all sentences post 1995 with FGG in them". Here FGG would be the 
agreed upon unary relation symbol meaning "finitely generated group". 
A very useful index would be available online, for humans, so that 
they can see that FGG means "finitely generated group".
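
As a sketch of how mechanical such a query could be (the record 
layout is invented for the example and assumes each archived sentence 
is stored with its year and the symbols it uses):

    (* A hypothetical sketch of the query "all sentences post 1995
       with FGG in them".  Nothing here reflects an actual repository
       format. *)

    type entry = {
      year     : int;          (* year the sentence entered the archive *)
      symbols  : string list;  (* relation/function symbols it uses     *)
      sentence : string;       (* the formal sentence itself            *)
    }

    let post_1995_with_fgg (archive : entry list) : entry list =
      List.filter
        (fun e -> e.year > 1995 && List.mem "FGG" e.symbols)
        archive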

Harvey Friedman
