[FOM] More Friedman/Carniero

Harvey Friedman hmflogic at gmail.com
Thu Apr 7 22:30:50 EDT 2016


More from https://plus.google.com/u/0/110536551627130071099/posts/6TiKLxjSCnu

Friedman:

Here I assume that you are talking about strictly finitary rules, with
derivations immediately subject to computer check? What if the sizes
are far beyond computers?? We actually have theorems to the effect
that there are situations like that where something bit sized is
derivable from bit sized rules, but the derivation is incredibly long,
and cannot be proved to exist by computer. In fact, the only proofs of
the mere existence of the finite derivations use massively infinite
reasoning - or if they don't use massively infinite reasoning, then
they must be far too long - at least 2^2^2^2^2^2^2^2^2^2^2^2 bytes.
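To give a sense of scale for such bounds, here is a small sketch
(illustrative only; the function name is mine) of how fast a
right-associated tower of 2s grows:

```python
def tower(height):
    """Right-associated power tower of 2s: tower(3) = 2 ** (2 ** 2) = 16."""
    result = 1
    for _ in range(height):
        result = 2 ** result
    return result

print(tower(3))            # 16
print(tower(4))            # 65536
print(len(str(tower(5))))  # 19729: 2**65536 already has ~2 * 10**4 digits
# tower(12), the bound quoted above, is far beyond any physical
# computation: even tower(6) has more digits than there are atoms in
# the observable universe.
```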

Carniero:

After some introspection (like any good philosophy discussion will
induce), I think I would actually classify myself as a
meta-ultrafinitist. I am not an ultrafinitist insofar as I am
perfectly comfortable "asserting the existence" of infinite objects
like the integers. But at the meta level, these are (very) finite
strings, and the theorems we prove about them are also (very) finite.

It is true that there are "proofs of proof existence". I have no
qualms whatsoever about rejecting these as proofs in the same way that
an ultrafinitist might reject that Skewes' number exists. These are
actually proofs that T proves "T proves A" for some statement A (where
T is your theory of interest, strong enough for Goedel's
incompleteness theorem).

Friedman:

I don't see anything you are buying into that an ultrafinitist
wouldn't. If you present a proof physically of A in system T, all
properly physically presented, then you can conclude that T proves A.

Carniero:

Correct me if I am wrong, but I think T proves "T proves "T proves A"
implies T proves A", so working one meta-level down is sufficient to
talk about these sorts of statements with ludicrously long "proofs".
It is up to you whether you want to infer T proves A from T proves "T
proves A", but if you do this it should be treated as an axiomatic
extension, because that's what it is. If you really want to prove A,
you should do so directly.

Friedman:

I think you may be making some technical error here. I'm not sure.

THEOREM. Suppose the following is provable in T: (T proves "T proves
A") implies (T proves A). Then in fact T proves "T proves A". This
result is provable in extremely weak fragments of arithmetic for very
general T.
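The first theorem is an instance of Löb's theorem. Writing □φ for the
arithmetized statement "T proves φ", a sketch of the standard argument
(my gloss, not part of the original post):

```latex
% Löb's theorem: if T proves (□φ → φ), then T proves φ.
\[
T \vdash \Box\varphi \to \varphi \;\Longrightarrow\; T \vdash \varphi
\]
% Take φ := □A. The hypothesis "(T proves T proves A) implies
% (T proves A), provably in T" becomes T ⊢ □□A → □A, and Löb's
% conclusion is T ⊢ □A, i.e. "T proves T proves A".
```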

THEOREM. Consider the hierarchy of statements 1 = 0, T proves 1 = 0, T
proves T proves 1 = 0, T proves T proves T proves 1 = 0, ... These
statements get successively strictly weaker within T, unless at some
point they become provable in T, in which case they remain provable in
T from that point on.
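In the same modal notation (□φ for "T proves φ", ⊥ for 1 = 0), the
second theorem can be sketched as follows; this is the standard
argument, offered as my own gloss rather than Friedman's proof:

```latex
% The hierarchy 1 = 0, T proves 1 = 0, ... is ⊥, □⊥, □□⊥, ...
% Since T ⊢ □φ → □□φ (formalized necessitation), each statement implies
% the next over T, so the sequence is weakly decreasing in strength.
% Strictness: if T ⊢ □^{n+1}⊥ → □^n⊥, then T + ¬□^n⊥ proves its own
% consistency (over T, Con(T + ¬□^n⊥) is equivalent to ¬□^{n+1}⊥); by
% Gödel's second incompleteness theorem T + ¬□^n⊥ is then inconsistent,
% so T ⊢ □^n⊥, and provability persists up the hierarchy by
% necessitation.
```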

Carniero:

Practically all of "concrete" mathematics (the sort we've been talking
about) doesn't need these sorts of contortions. You pick your theory,
and then (very) finitely deduce your theorems. Even large proofs like
Wiles' proof of FLT easily fit here.

The only real contenders for going past "very finite" into merely
"finite" are modern computer proofs such as the four color theorem or
Hales' proof of the Kepler Conjecture. These often require large
enumerations, which are not performed directly by the computer proof,
but are performed by writing a program to efficiently verify the
cases, then verifying the program. This is a special case of the
"computational reflection principle", and again, it's up to you if you
want to use it. I find it vaguely discomforting.
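As a toy illustration of that reflection pattern (the claim and all
names here are mine, not drawn from the four color or Kepler proofs):
instead of recording a separate proof step for each case, one writes a
checker, argues once that the checker is correct, and then accepts a
single run of it.

```python
def is_prime(n):
    """Trial-division primality test; adequate for this small range."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def check_all_cases(limit):
    """Check that every even n with 4 <= n <= limit is a sum of two primes.

    In a formal development this function would be proved correct once;
    trusting its single run for a given limit is the reflection step.
    """
    return all(
        any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))
        for n in range(4, limit + 1, 2)
    )

print(check_all_cases(1000))  # True: one program run stands in for 499 cases
```

The direct alternative, a proof object listing a witness for each even
number, grows with the size of the enumeration, which is exactly what
makes reflection attractive for proofs like Hales'.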

Friedman:

I developed a way of generating some rather large actual natural
numbers, and by varying simple parameters, I got a lot of control over
the size of the actual natural numbers. I.e., I was looking at certain
natural, reasonably natural, arguably natural, very natural -
depending on what view you take on naturalness - Sigma-0-1 sentences.
When the least witnesses get really very large, one also has as a
consequence that every proof in a system like PRA or RCA_0 or even PA
and higher, must have an utterly absurd number of bits, even if we use
sugar.

So what we get is a whole series of natural Sigma-0-1 sentences with a
great deal of control over the size of least witnesses, and with much
less but substantial control over the least size of proofs and sugared
proofs in various formal systems.

What this aims for, and to some unclear extent I have already done, is
this. There is a fixed natural Sigma-0-1 sentence A such that, as you
go from one formal system to stronger ones in a standard hierarchy of
systems, the least length of a proof of that Sigma-0-1 sentence drops
dramatically, until, when you get to a strong enough system, it becomes
normal journal article size.

Friedman:

"In my response, I moved to a difficulty for the finitist/formalist
positions, which is the choice of "good rules". One very important
criterion that (almost everybody thinks) has to be met is consistency
(this usually can be formulated without using jargon from math logic).
But how do you tell whether a given set of even very interesting and
bit sized rules is in fact consistent? That's a Pi-0-1 sentence, not
capable of Baez-witnessing like Sigma-0-1 sentences."

Carniero:

This is absolutely true. With my meta-ultrafinitist approach, Pi-0-1
statements are utterly pointless to consider. Our only recourse is to
blindly work away with our possibly-inconsistent system, and hope that
we don't find an inconsistency (and if we do, scramble to backtrack
and cordon off the bad axiom). Then again, people have done this for
years with ZFC and it seems to work out alright.

Friedman:

But the human mind seems to operate in various contexts with some
confidence, sometimes great confidence, that if some ideas make enough
clear sense, or are sufficiently philosophically coherent, then they
correspond to some sort of reality, and can be used with great
confidence. E.g., we are willing to drive cars, undergo surgical
procedures, declare programs as being correct for all inputs or all
reasonably large inputs, etcetera, and use them.

The problem with the formalist/finitist position(s) is that they only
seem to recognize some sort of "try it and see if it works" idea, which
doesn't explain why we are compelled to choose certain frameworks. You
cannot create a scenario whereby frameworks like EFA, PRA, PA, ACA,
ZC, ZFC, and the like, are generated by random trial and error to see
what works. The randomness must be rather contextual, and just what
context is that?

These are some of the main reasons why the formalist/finitist
position, as usually presented, has not been popular among
mathematical logicians, at least not for a very long time.

Carniero:

If ZFC is inconsistent, but the inconsistency is not very finite, then
it doesn't matter, because we will never discover it. If instead ZFC
proves "there is a proof of inconsistency of ZFC of length less than
10^4000" then I'm not sure what I'd make of it.

Friedman:

Well, ZFC "might" (I don't believe it) be inconsistent, with a very
very big least length proof even with sugar, yet the proof in ZFC +
"there exists a strongly inaccessible cardinal", or even in NBG, is
quite short.
Harvey Friedman
