[FOM] P NP buzz

Harvey Friedman friedman at math.ohio-state.edu
Sat Aug 14 06:54:49 EDT 2010


I have now made three postings on the page "Fatal Flaws in Deolalikar's Proof?" (August 12, 2010) at Richard Lipton's blog Goedel's Lost Letter and P = NP:
http://rjlipton.wordpress.com/2010/08/12/fatal-flaws-in-deolalikars-proof/

Harvey Friedman
August 13, 2010 9:48 am

Now that a consensus seems to have formed that this P != NP paper is deeply flawed, and skepticism is even beginning to form as to whether it contains any new credible idea (though opinion might shift around on this), I thought I would put forward my impressions on some general issues suggested by this entire episode.

Imagine two quite different actions.

1. Someone with quite reasonable credentials circulates a well written (not in the technical sense) manuscript suggesting what they regard as "promising new approaches to P != NP", with many details included.
2. Someone with quite reasonable credentials circulates a well written (not in the technical sense) manuscript claiming that they have proved P != NP, with many details included.

The effects of these two actions are radically different. In the case of 2, a large number of very strong theoretical computer scientists and interested mathematicians are rightly compelled to pretty much drop whatever they are doing and look at this seriously NOW - or, in some cases, compelled to make public statements as to why they are not dropping whatever they are doing to look at it seriously.

So 2 amounts to forcing one's ideas to the top of the queue for a very  
large number of very powerful people who are not going to be able to  
continue doing the great things they normally do until at least a  
reasonable consensus forms.

Under 1, eventually some very powerful people will take a look, but  
when it is convenient for them.

The cost of this is, on balance, considerable enough to be of some concern. I'm not thinking of retribution, or anything negative like that.

But it is considerable enough that the question of how this came about, and how to prevent this kind of thing from happening in the future, becomes pretty interesting - both from the theoretical and the practical points of view.

From the theoretical point of view, broadly speaking, we have a very satisfactory understanding of what an ultimately rigorous mathematical proof is. Furthermore, ultimately rigorous mathematical proofs have actually been constructed - with the help of computers - for a substantial number of theorems, including rather deep ones. E.g., see http://www.ams.org/notices/200811/tx081101408p.pdf. That article also has a careful discussion at the end of what major advances are needed in order for the construction of ultimately rigorous mathematical proofs to become attractive for more than a few specialists.
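
For readers who have not seen one, here is a toy illustration of what an "ultimately rigorous", machine-checked proof looks like, written in the Lean proof assistant. This is purely my own illustration, not something drawn from the article above:

  -- every inference is checked by the machine; nothing is left to the reader's intuition
  example (a b : Nat) : a + b = b + a := Nat.add_comm a b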

On the practical side, I think that there is a reasonably clear notion  
of "completely detailed proof", written in ordinary friendly  
mathematical language, which definitely falls well short of  
"ultimately rigorous proof". I have, from time to time, felt compelled  
to construct "completely detailed proofs" - and it is rather  
exhausting. I don't do this as often as I should - perhaps because I  
don't prove something important enough as often as I should or would  
like. But there is some satisfaction that comes from writing up a  
completely detailed proof.

Bringing this to the matter at hand, I am under the impression that  
many people do not have a clear idea of what a completely detailed  
proof is, and rely on some sort of general intuition - which is  
required to come up with just about any serious proof  in any form -  
even for proofs for publication. Since the refereeing process  
generally consists mainly of whether the referee feels that they can  
"see" a correct proof based on what is written, authors are normally  
never compelled to get their hands dirty with anything remotely  
approaching a completely detailed proof. This works well as long as  
they are in an arena where their intuitions are really good enough.

Of course, (most) really strong researchers know just how dangerous it  
is to operate this way, particularly with notorious problems. But  
obviously reasonably strong people may not realize this. In fact, I  
have seen this kind of thing many times before in and around logic -  
and there is often a defiance stage.

So it might be valuable to

a. Flesh out both theoretically and practically what a "completely  
detailed proof" is  - still well short of "ultimately rigorous proof"  
which we know a lot about already.
b. Give examples of "completely detailed proofs".
c. Instill this in the educational process. E.g., all Ph.D.s in  
theoretical mathematical areas must have written a substantial  
"completely detailed proof".

I would be delighted to hear what people think of these comments.

Harvey Friedman
August 13, 2010 3:34 pm

Terry,

As you well know, depending on the mathematical context, the high level and the low level are more or less critical. The high level is always of first importance, because it is next to impossible to construct a fully detailed proof of something interesting without a good understanding at the high level.

But I am addressing the issue of how to prevent the present situation from arising in the first place (assuming it is as it now appears). I.e., a flat-out, apparently bogus claim of P != NP - sufficient to attract the press, and to pull a whole host of stars and superstars away from their normal brilliant activity.

We all know the essential importance of "high level" criteria. Yet this obviously did not prevent a claim being made by a credible-looking scholar, with a large number of yet more important scholars dropping what they are doing in order to spend serious time looking at this! And involving the press!!

When the dust settles, the best guess will be that the whole episode was a waste of a lot of people's time - relative to what else they would be doing. Even those of us who will have learned something from this probably would have learned a lot more from continuing the wonderful things we were doing before it occurred.

But adherence to my "low level" criteria would have almost certainly  
prevented the claim from being made. Instead of claiming P != NP, it  
would have read something like

"an approach to P != NP"

and then interested parties could take a look according to their  
inclinations and schedules. There would have been no press coverage.

So I repeat my suggestion,  with c) below as an *addition* to the  
usual process for getting a Ph.D. in pure mathematics, theoretical  
computer science, and related heavily mathematical areas:

a. [Those of us interested in logical issues] Flesh out both  
theoretically and practically what a “completely detailed proof” is –  
still well short of “ultimately rigorous proof” which we know a lot  
about already [but which is generally too cumbersome to create under  
current technology by nonprofessionals].
b. Give [a wide palette of] examples of “completely detailed proofs”.
c. Instill this in the educational process. E.g., all Ph.D.s in  
theoretical mathematical areas must have written a modest sized  
“completely detailed proof” in normal mathematical language.  
Exceptions always should be made for "geniuses".

This is my suggestion for prevention.

Of course, c) is *in addition* to the more commonly stated criteria at  
the "high level" for Ph.D.s.

Harvey Friedman
August 14, 2010 5:37 am

There are some powerful automatic conversion theorems in mathematical logic to the effect that if a sentence of a certain form is provable in the usual sense, then it is provable constructively. Of course, these terms need to be defined for the purpose of this family of theorems. Generally speaking, the results take the following form, for a large variety of axiom systems T. Let T- be the constructive (intuitionistic) form of T, where we simply require that the proofs avoid the law of excluded middle (in the usual sense as formulated by Arend Heyting). It is known that under general conditions on T, any AE sentence provable in T is already provable in T-, and there is an explicit conversion method for this, known as the A-translation. Here an AE sentence has the form (for all integers n)(there exists an integer m)(something readily testable holds).
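
In standard notation, the schema behind this family of results can be rendered roughly as follows (this is only a schematic sketch; the precise conditions on T, and on the matrix A(n,m) being readily testable, e.g. primitive recursive, are suppressed here):

\[ T \vdash (\forall n)(\exists m)\, A(n,m) \quad \Longrightarrow \quad T^{-} \vdash (\forall n)(\exists m)\, A(n,m), \]

with an explicit, effective procedure - the A-translation - for converting the classical proof into a constructive one.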

The most commonly quoted case of this is where T = Peano Arithmetic =  
PA, and T- = Heyting Arithmetic = HA. HA is the constructive form of  
PA. But the result applies flexibly to many other systems, both much  
stronger and much weaker than Peano Arithmetic/Heyting Arithmetic.

The statement P = NP takes the form EA, which means (numerical) existential universal, using SAT. So P != NP takes the form not(EA). This conversion theory tells us how to convert a proof of not(EA) in PA to a proof of not(EA) in HA, and more. Write P = NP in the form (En)(Am)(P(n,m)), so that P != NP is not(En)(Am)(P(n,m)), where P is totally innocent logically. Here "E" is "exists", and "A" is "for all".

If PA proves not(En)(Am)(P(n,m)), then HA proves not only not(En)(Am)(P(n,m)), but even (An)(Em)(not P(n,m)), and the conversion from the PA proof to the HA proof is rather explicit, using the A-translation.
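
In display form, using the usual quantifier symbols in place of the ASCII E and A, the preceding two paragraphs say:

\[ \mathrm{P} = \mathrm{NP}\colon\ (\exists n)(\forall m)\, P(n,m), \qquad \mathrm{P} \neq \mathrm{NP}\colon\ \neg(\exists n)(\forall m)\, P(n,m), \]

\[ \mathrm{PA} \vdash \neg(\exists n)(\forall m)\, P(n,m) \quad \Longrightarrow \quad \mathrm{HA} \vdash (\forall n)(\exists m)\, \neg P(n,m), \]

and the HA conclusion intuitionistically implies \neg(\exists n)(\forall m)\, P(n,m) as well; the passage from the PA proof to the HA proof is given explicitly by the A-translation.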

Returning to the specific case at hand, of P != NP, these considerations tell us, for example, that any (CORRECT!) proof in PA of P != NP can be explicitly converted to a proof in HA of P != NP. In fact, it can be converted to a proof in HA of the statement T(f) =

"for all constants c, any algorithm of size at most n with built-in running time <= (x+c)^c must give the wrong answer to SAT at some input of size x <= f(n,c)"

where f is a so-called <epsilon_0 recursive function. I am under the impression that in this Deolalikar P != NP "proof", the statement above in quotes is "proved" where f is an exponential(?).
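
On one natural reading of the quoted statement (with n, like c, read as universally quantified, |M| for the size of the algorithm M, time_M(x) for its running time on inputs of size x, and SAT(w) for the correct answer on input w - these conventions are my guesses, not fixed by the quote), T(f) could be formalized as:

\[ T(f)\colon\quad (\forall c)(\forall n)(\forall M)\, \Bigl[\, |M| \le n \ \wedge\ (\forall x)\bigl(\mathrm{time}_M(x) \le (x+c)^{c}\bigr) \ \Longrightarrow\ (\exists w)\bigl( |w| \le f(n,c) \ \wedge\ M(w) \neq \mathrm{SAT}(w) \bigr) \Bigr]. \]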

BTW, this raises some interesting questions that perhaps have been  
addressed(?) independently of any logical issues. What can we say  
about the statement T(f) when f is slow growing? I.e., what is the  
"strongest" statement that we can hope to prove, or at least can't  
refute, or believe, concerning the identification of, or size of, a  
mistake that any purported algorithm for SAT of a certain size running  
in a certain time must make? This question can obviously be raised in  
a very wide range of complexity contexts, and surely(?) has been  
addressed.

Harvey Friedman

More information about the FOM mailing list