[FOM] postings on Lipton blog

Harvey Friedman friedman at math.ohio-state.edu
Sun Aug 22 20:39:29 EDT 2010


In the three previous postings listed below, I presented the part of the Lipton
blog dialog in which I have been participating.

http://www.cs.nyu.edu/pipermail/fom/2010-August/014982.html
http://www.cs.nyu.edu/pipermail/fom/2010-August/014986.html
http://www.cs.nyu.edu/pipermail/fom/2010-August/015000.html

I now continue with my remaining activity, which is up to date as of  
Sunday, 8:40PM, August 22, 2010.

Harvey Friedman PERMALINK
August 15, 2010 7:32 pm
It would be nice to capitalize as much as possible on this entire  
experience – which probably is not yet over. I’m thinking of two  
arenas.
1. Can we use this experience to generate ideas for modifying or  
upgrading the educational process?
2. Can we use this experience to generate ideas for modifying or  
upgrading the refereeing/publication process?
I already discussed arena 1 twice before. Basically, my idea was to  
add a necessary component to the Ph.D. process (geniuses excepted)  
that some limited portion of the thesis be “fully detailed” in a
precise, but reasonable and manageable, sense short of being machine
checkable (i.e., short of being created in, say, Mizar, Isabelle, Coq,  
etc). This idea would really have to be fleshed out carefully,  
theoretically and practically, to be fully workable. But this can be  
done. It would also serve to introduce scholars to at least some  
important general features of formal verification – a field in its  
infancy that is destined to become of great importance in the future  
– at least in industry.
The further discussion here reminds us that the obvious requirement that
“all terms must be defined totally unambiguously” is of central
importance in any “fully detailed” proof.
Of course, I do realize that this is obviously not going to  
“prevent” all flawed “proofs” of spectacular claims (and other  
claims) from being “released”. In fact, it has been suggested that  
this is so much fun and so valuable a way for people to meet, that we  
shouldn’t try to “prevent” it even if we could.
But I claim that reforms in arena 1, surrounding proof writing, if
implemented with the proper finesse, will undoubtedly do a lot of  
people at various levels of abilities, in various intensely  
mathematical fields, a great deal of good.
There are other aspects of proof writing that one can also think about  
regimenting in some way. I’m thinking of high level design. But here,  
we don’t have a good way to systematize the requirements – as we
do at the low level of the “fully detailed”. Also I get the feeling
that the present educational process is better, or much better, at  
requiring clear and sound statements of overall proof structure –  
correct me if I am wrong.
In arena 2, the following idea occurred to me, and surely to others. YOU
submit a paper to Journal J. J asks YOU whether YOU want private or
public refereeing. For private refereeing, business as usual,  
anonymous and all that. For public refereeing, J puts up a website for  
your paper, which is seen by the entire world. People all around the  
world are invited to NONANONYMOUSLY comment on the paper, at any level  
of detail that they choose. [Inevitably, there needs to be a moderator
for this, to sieve out nonsense.] YOU get to comment on their comments, at
any time. YOU may of course put up all kinds of corrected versions in  
the process. This goes on until YOU decide that there is enough there  
that the paper is ready to be accepted – in YOUR opinion. THEN the  
Editorial Board decides whether the paper should be published, or  
whether it should just hang there on the website, or suggest that YOU  
revise according to comments on the website, etcetera. YOU are allowed  
to even complain about the Editorial Board. As long as this remains  
civil, this continues. NOTE: Details to be worked out.

V PERMALINK
August 15, 2010 9:10 pm
Dear Professor Friedman,
I have the greatest respect for you. However, I do take issue with  
some of the points you raise:
1) You believe that formal verification is somehow going to  
revolutionize the industry. I think this is a sad mistake. I know this  
because I have a PhD in formal verification, and I know how flawed  
this dream is. I have become much more realistic. Given that formal  
verification cannot be used for even small real-world systems, there  
is no chance for formal verification to be applied to current or  
future real-world systems.
2) I agree with your public refereeing process. I think it is a good idea.
3) The value of teaching proofs through using proof verification  
software is also a good idea. CS PhD students should have some idea of  
how proofs work.

math grad student PERMALINK
August 15, 2010 9:56 pm
V:
a) Formal verification has loads of industrial applications; see John
Harrison at Intel, for example.
b) He was NOT talking about formal verification of software, but  
machine checkable proofs (the candidate proof is already there) and  
even mathematical constructivism!


		

Harvey Friedman PERMALINK
August 15, 2010 10:04 pm
Thanks, V, for the kind words – perhaps you have too much respect!
To be exact, I wrote
“[formal verification] is a field in its infancy that is destined to  
become of great importance in the future – at least in industry.”
which is not quite the same as what you said I said. In a few of the  
major computer companies, a serious amount of resources is now devoted  
to formal verification. See, e.g.,
http://asimod.in.tum.de/2010/Harrison_Abs_10.pdf
http://www.cl.cam.ac.uk/~jrh13/slides/nijmegen-21jun02/slides.pdf
http://www.computer.org/portal/web/csdl/doi/10.1109/ICVD.1997.568077
http://reports-archive.adm.cs.cmu.edu/anon/2009/CMU-CS-09-147.pdf
It is true that the most success thus far, by far, is in formal  
verification of hardware. There, the investment by industry apparently  
has a good looking bottom line. For software, the situation is of  
course trickier, for many reasons – including that it is generally  
much more difficult to verify software than hardware. But I am  
convinced that there will be major changes in the future in how  
software is produced and formally verified in industry. Perhaps not in  
2010. But, I conjecture, before P != NP is proved.	

Barkley Rosser PERMALINK
August 16, 2010 11:25 pm
Harvey, As a journal editor I find your last suggestion of allowing  
the possibility of public refereeing to be extremely interesting.

Harvey Friedman PERMALINK
August 17, 2010 12:01 am
Please note that I suggested that the author have the right to ask for  
a normal private review, or a public one with no anonymous reviewers.
There are a lot of details to be worked out and issues to be resolved,  
and what I suggested above on August 15, 2010 7:32 pm were my first  
thoughts.
There are some interesting possible side effects of this approach. It  
may highlight many big developments rather early on, to a wide  
community, and also fast track particularly important papers. Also, it  
may create an invaluable open forum for potentially paradigm shifting  
controversial innovations, as the disagreeing reviewers have it out on  
crucial issues, comfortably below the flame level.

am PERMALINK
August 15, 2010 7:45 pm
Prof. Friedman,
Your second idea in particular is an excellent one. Regardless of  
whether a particular paper gets published or not, it promotes
distribution of ideas and methods. I don’t know how many journals  
would be willing to go along with this, though.

Harvey Friedman PERMALINK
August 15, 2010 10:14 pm
anonymous –
I have the feeling that you think you are responding to me. Is that  
correct?
Assuming this is correct, I did not suggest that (parts of) proofs be
presented in machine checkable format in my earlier postings here – only
that, for part of a Ph.D., some limited proof be given in “full
detail”. The theoretical work that I suggested needs to be done must
explicate “full detail” so that this becomes something user  
friendly of the kind that many professionals who are particularly  
concerned about errors creeping in in critical parts of their proofs  
will do on their own.
It is, however, a research ambition to actually also make this not  
only very friendly and easy to read, but also machine checkable.  
However, that goal is secondary to what I am suggesting, which is  
educational.

anonymous PERMALINK
August 15, 2010 10:24 pm
prof friedman ok I understand this is a noble objective. I am just  
afraid that the sketchier publishable proofs become, the more
imperative mechanized proofs will be.

V PERMALINK
August 16, 2010 12:00 am
@ math grad student:
I have to respectfully disagree with your assessment that formal
verification has *loads* of industrial applications. (And by the way,
Professor Friedman did state clearly that formal verification is going
to revolutionize industry. Please read his first post carefully.)
Yes, formal methods have had some impact on hardware verification. I  
know most of the people in formal methods (including the Intel folks  
you are referring to). These guys understand the limitations. The only  
people who are most enthusiastic about formal verification are either
1) Have already heavily invested their careers in formal methods
2) Have never written significant software
3) Outsiders who don’t understand the difficulty of using formal  
tools, and don’t understand the cost/benefit of using formal tools  
(For certain hardware applications these tools makes sense. Not  
otherwise)
Formal verification can be a great crucible for new ideas, but not an  
effective end in itself.

SA PERMALINK
August 16, 2010 1:06 am
@V:
 > These guys understand the limitations.
Could you explain these limitations?

V PERMALINK
August 16, 2010 12:20 pm
Formal methods have the following serious issues:
1) Usability: Formal tools are exceedingly difficult to use, and  
despite decades of research (nearly 50 years) not much progress has  
been made in making them usable by the ordinary programmer.
2) Incorporating change: You have to constantly maintain  
correspondence between your code and your formal spec. Otherwise, they  
may go completely out of whack, and the formal spec becomes useless
3) Automation: Most of these tools are not as automatic as one would  
want them to be
4) Complexity: Most problems in the formal arena are PSPACE-hard or
harder. Model-checking is PSPACE-complete.
There is a whole host of other issues with formal verification.
However, the current orthodoxy in computer science is wedded to the  
idea.
Contrast software systems with biological systems. Bio systems are far  
more complex than software systems. There are no theorems we can prove  
about bio systems. Yet they work, most of the time.
The key point is that as the complexity of systems increases, proving stuff
about them is just too hard. We need a different paradigm for software  
assurance.

Aaron Sterling PERMALINK
August 16, 2010 12:58 pm
I don’t know where WordPress will put this comment, but it’s a  
followup to the concerns raised by V about formal verification.  
(Apologies to Messrs. Lipton and Regan for being off-topic, but things  
seem to be winding down, so I figured it was ok.)
It’s not quite correct to say that model-checking is PSPACE-complete.  
V knows this, I’m sure, and was being brief, but the reader  
interested in further information might want to check out The  
Complexity of Temporal Logic Model Checking by Schnoeblen for a fuller  
picture. Some logics are more tractable than others, in both theory  
and practice.
Also, model checking is now being applied to biological systems. One  
survey is Quantitative Verification Techniques for Biological  
Processes by Kwiatkowska et al.

math grad student PERMALINK
August 16, 2010 1:24 pm
Besides complexity, your argument is merely a matter of engineering
practice. In particular, if you are so concerned with “ordinary
programmers”, then it is indeed never going to be possible to make
formal methods accessible, since as far as I am aware, most of them do
not even bother to work out how to multiply two matrices!

V PERMALINK
August 16, 2010 2:04 pm
@math grad student:
You said “Besides complexity, your argument is merely a problem of  
engineering practice”.
Engineering practice is far harder than you think. Automating formal  
techniques is extremely hard, for example. If it were easy, it would  
have been done already.
We need to start thinking about self-evolving, self-correcting,
self-programming software. A formal approach is not going to take you far.

Anonymous PERMALINK
August 17, 2010 2:28 pm
Dear Prof. Friedman,
This is a very speculative question, but I am wondering whether you
think that it may be possible that there are natural questions directly
related to computational complexity (say involving the world inside
EXPTIME) that are independent of ZFC.
Actually it would be interesting enough if some strong assumption (say
about large cardinals) could be used to get some results about complexity.
Sorry if this is a hopelessly naive question!

Harvey Friedman PERMALINK
August 17, 2010 4:14 pm
It is of course *possible* that logical issues are what is holding up  
progress on the massive number of critical lower bound issues  
throughout theoretical computer science. E.g., that one has to use more
than ZFC in order to make progress on these problems.
Unfortunately, I cannot say that I have any kind of viable research  
program for showing that ZFC is inadequate for these crucial problems.
Let me also add that if I did have such a viable research program for
showing this, I would probably not be writing this posting.

Anonymous PERMALINK
August 17, 2010 5:08 pm
I’m assuming that you are being coy here, especially given your  
assertion from the preface of your BRT book: “In fact, it still seems  
rather likely that all concrete problems that have arisen from the  
natural course of mathematics can be proved or refuted within ZFC.”
But it looks like you will keep us guessing!

Harvey Friedman PERMALINK
August 17, 2010 9:04 pm
Responding to Dick Lipton.
I have been following this with indirect interest. Logical aspects of  
P != NP (intuitionism, bounds, provability in ZFC), online interactive  
refereeing, and educational reform (experience with small fully  
detailed proofs, formal verification).
From what I see, the consensus now is that the serious work needed to
prove P != NP has not really begun in the manuscript. If this is the  
case, as it appears to be, there is not going to be anything close to  
a proof of P != NP in the revised manuscript either, whether or not it  
is sent in for journal review.
So my interest is particularly focused on the thought process behind  
not retracting the “proof” at this point. Not so much for the  
social aspect (though I am curious about that too), but for the  
logical aspect. I.e.,
#) what role, if any, does lack of experience or familiarity with  
“fully detailed proofs” have in the release of the original  
manuscript, the release of the revised manuscript, and the absence of  
retraction of the claim?
60 manuscripts are listed at http://www.win.tue.nl/~gwoegi/P-versus-NP.htm
Most of them are available there.
Does anybody know anything about how many of these claims of P = NP or  
P != NP have been retracted? Or have none of these claims been  
retracted?
Of the ones not retracted, does anybody know anything about other  
research activities of these claimants? Or how my question #) above  
might relate to these claimants?
I, like probably most people here, would never even conceive of  
releasing a claim to having proved P != NP (or P = NP) unless it is
accompanied by a complete fully detailed proof. Absent that, I would  
at most put a very hedged statement like “we believe this approach  
has a reasonable chance of leading to a proof of …”, and probably  
weaker.
Obviously, the first priority here is to get to the bottom of the  
manuscript, any revised manuscripts, and the approach. But as things  
wind down, perhaps people will get interested in the questions I just  
raised.

Harvey Friedman PERMALINK
August 21, 2010 8:41 am
This question came up in the earlier discussion of the “proof”. I  
did not find the responses satisfactory. Let me state it more clearly.
Let A consist of the following data: a Turing machine algorithm  
accepting bit strings, and a polynomial time bound (absolute – i.e.,  
not asymptotic). We define n(A) to be the least length of a bit string  
x such that when the Turing machine algorithm runs on input x for the
number of steps given by the time bound declared in A (as a function  
of the length of x), the resulting computation witnesses that A is not  
an algorithm for SAT. I.e., either A encodes a tautology and the  
algorithm does not return “yes” within the time bound declared in  
A, or A does not encode a tautology and the algorithm does not return  
“no” within the time bound declared in A.
If P != NP is provable in a weak formal system, then we do get an  
upper bound on n(A) as a function of the size of A. Roughly speaking,  
the weaker the formal system, the better this upper bound is. This is  
at least implicit in the literature.
However, if we merely know that P != NP, then can we get any upper  
bound on n(A) at all? I was thinking “no”, but perhaps I am missing  
something important here.

Terence Tao PERMALINK
August 21, 2010 12:13 pm
(I assume you mean “x encodes a tautology” instead of “A encodes  
a tautology”.)
My guess is that there is no “compactness” property we know of that
prevents NP from being “arbitrarily close” to P, e.g. there might
exist an algorithm that solves (say) 3-SAT in time n^(log log log n),
and this is essentially best possible. (Of course, one could consider
far slower growth functions here than log log log, e.g. inverse
Ackermann.) With this intuition, I would say that n(A) could (in
principle) grow arbitrarily fast in A.
One could possibly bound n(A) by some sort of Busy Beaver function,  
but that would probably require P != NP to not just be true, but to be
provable in a specific formal system. But this is something you are  
more qualified to discuss than I.

Harvey Friedman PERMALINK
August 21, 2010 3:13 pm
Terry wrote
“(I assume you mean “x encodes a tautology” instead of “A
encodes a tautology”.)”
Yes, thanks for pointing out my typo!
If P != NP is true then n(A) is a recursive function of A, by search,  
and so is automatically eventually bounded by any of the usual Busy  
Beaver functions (see http://en.wikipedia.org/wiki/Busy_beaver).
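In code, this “by search” computation might look like the following
minimal sketch (a sketch only, assuming P != NP so that the search always
halts; simulate(M, x, t) – run machine M on input x for at most t steps,
returning “yes”, “no”, or None on timeout – and is_tautology(x) – a
brute-force truth-table check – are hypothetical helpers, not defined here):

from itertools import product

def n_of_A(M, p):
    # Search bit strings in order of increasing length for the least
    # length at which A = (M, p) fails to decide tautology-hood within
    # its declared time bound p(|x|).  If P != NP, every such A has a
    # witness, so the loop terminates.
    length = 0
    while True:
        for bits in product("01", repeat=length):
            x = "".join(bits)
            answer = simulate(M, x, p(len(x)))            # "yes", "no", or None
            correct = "yes" if is_tautology(x) else "no"
            if answer != correct:    # wrong answer, or no answer in time
                return length
        length += 1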
It is interesting to look into the relationship between running times  
for solving 3-SAT and the function n(A) in my previous message.
I don’t have the patience right now for this sort of tedious stuff,  
but the following should be reasonably close to the truth:
THEOREM. 3-SAT can be solved in running time at most n^(c^(f^-1(n))).  
If 3-SAT can be solved in running time at most n^g(n) then g(n) >=  
log_c(f^-1(n)). Here f is the function n(A) in my previous message.
Proof: Let x be a problem instance of length n. Let m be greatest such  
that f(m) <= n. Then there exists an algorithm y of length at most m  
that solves 3-SAT for all problem instances of length at most n  
running in time at most (2^m)(n^(2^m)). But at this point we don't  
know what y to choose. So we go through the y of length at most m,  
running them, and see if a falsifying assignment is generated in the  
obvious way. If so, then return yes for x. If not, then return no for x.
Suppose 3-SAT can be solved in running time at most n^g(n). Etcetera…  
QED
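For the satisfiable side, the “go through the y of length at most m”
step might be made concrete as a self-verifying enumeration, in the
spirit of Levin’s universal search. This is only one possible reading of
the sketch above; candidate_algorithms(m) (enumerate all programs of
description length at most m), run(y, x, t) (simulate y on x for at most
t steps, returning None on timeout), and satisfies(x, a) (check an
assignment against a 3-CNF instance) are hypothetical helpers:

def search_sat_side(x, m, t):
    # Try every program of length <= m, accepting only answers that come
    # with a verifiable certificate (a satisfying assignment), so that a
    # wrong program cannot fool us into answering "yes".
    for y in candidate_algorithms(m):
        out = run(y, x, t)
        if out is not None and satisfies(x, out):
            return out        # verified satisfying assignment
    return None               # no verified witness found within time t

The unsatisfiable side is the delicate part of the sketch, and is
presumably where the hypothesis f(m) <= n does its work.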

Terence Tao PERMALINK
August 21, 2010 7:16 pm
To contrast P != NP with the Riemann hypothesis, in the case of the
latter there is a quantifiable (but ineffective!) gap between the
bounds on the error term pi(x) - li(x) in the prime number theorem in
the case when RH is true, and when RH is false. Namely, the error term
is O(sqrt(x) log x) when RH is true (and there is even an explicit
value for the constant, namely 1/(8 pi) for x >= 2657), but will be
larger than x^sigma infinitely often for some sigma > 1/2 if RH fails
(indeed, one takes sigma to be the real part of a zero to the right of
the critical line). This comes easily from the “explicit formula”
linking the error term in the prime number theorem to the zeroes of
the zeta function (roughly speaking, the error term is something
like the sum of x^rho / rho, where rho ranges over zeroes of the zeta
function). But the bound is ineffective because we have no lower bound
on sigma - 1/2.
This gap between bounds when RH is true and RH is false can be used,
among other things, to phrase RH in Pi-0-1 in the arithmetic hierarchy
(as Dick pointed out some time ago on this blog).
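(One concrete example of such a Pi-0-1 form: by Lagarias’s elementary
criterion, RH holds if and only if, for every n >= 1,

   sigma(n) <= H_n + e^(H_n) log(H_n),

where H_n = 1 + 1/2 + ... + 1/n, sigma(n) is the sum of the divisors of
n, and log is the natural logarithm. Both sides are computable to any
desired accuracy, which is what makes the statement essentially Pi-0-1.)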
For P != NP, there is nothing remotely resembling the explicit formula,
and so we don’t have this gap in bounds; nor can we place P != NP or
its negation in Pi-0-1.

Harvey Friedman PERMALINK
August 21, 2010 7:44 pm
Dear Terry,
That’s exactly what I was getting at. That, at the moment, we should  
classify P != NP as being in Pi-0-2 (Greek capital pi, superscript 0,
subscript 2), which in terms of quantifiers is (forall)(thereexists),
or AE. Over the years, there seem to have been dozens of somewhat
different estimates connected with the Riemann Hypothesis, showing  
that it is in Pi-0-1, as Terry writes.
This business of classifying open mathematical questions in terms of  
their logical complexity is something that has not been pursued in a  
proper systematic fashion. This can also be done reasonably robustly  
for solved mathematical questions. Here one has to address the  
objection that solved mathematical questions are either 0 = 0 or 1 =  
0, and so in Pi-0-0, and there is nothing to do.
As a tangent to a tangent to the main points in my book at
http://www.math.ohio-state.edu/%7Efriedman/manuscripts.html (section 3),
I endeavored to take all of the Hilbert problems and classify them in
these terms.
See the Introduction only, and then only section 0.3. E.g., Twin
Prime is Pi-0-2, Goldbach is Pi-0-1, Collatz is Pi-0-2, and so forth.
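For instance: Goldbach says (forall even n >= 4)(thereexists p, q <= n)
(p and q are prime and p + q = n), where the existential quantifiers are
bounded, hence Pi-0-1; Twin Prime says (forall N)(thereexists p > N)
(p and p + 2 are prime), where the existential quantifier is unbounded,
hence Pi-0-2.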
I think I did at best only a passable job, and, given my limited  
research efforts in core mathematics, I could use plenty of help from  
mathematicians in various areas to make this tangent to a tangent much  
better.
In fact, when I wrote the Introduction, I had the ambition of doing  
the same for Smale’s Problem List, and for the Clay Problem List! To  
systematically treat a classification of this kind for these three  
problem sets, would itself take a large book.	

Harvey Friedman




