[FOM] EFA/PA correspondence

Harvey Friedman friedman at math.ohio-state.edu
Thu Sep 30 01:39:04 EDT 2010


I have been in correspondence with Richard Borcherds at Berkeley. He  
has given permission for me to put the correspondence up on FOM and to  
continue the discussion on FOM (see below).

So here is the totality of the correspondence, including my reply to  
his last reply to me.

From: Harvey Friedman <friedman at math.ohio-state.edu>
Date: September 28, 2010 6:22:51 PM EDT
To: reb at math.berkeley.edu
Cc: Tim Gowers <W.T.Gowers at dpmms.cam.ac.uk>, tao at math.ucla.edu, Joel  
Spencer <spencer at cims.nyu.edu>, Ronald Graham <graham at ucsd.edu>
Subject: EFA/PA

Dear Richard (Borcherds),

From MO, I see that Richard is interested in "non Ramsey theoretic"
examples of extreme rates of growth.

As Richard knows from MO, I managed to reduce Ramsey theoretic  
statements to the existence of a point - or rather two adjacent  
points. There are many variants. E.g., the ones full of k's read:

THEOREM 1. Let t >> k. For all F:[t]^k into [k], there exist distinct  
x_1,...,x_k+1 such that F(x_1,...,x_k) = F(x_2,...,x_k+1).

THEOREM 2. Let t >> k. For all limited F:[t]^k into [t]^k, there exist  
distinct x_1,...,x_k+1 such that F(x_1,...,x_k) <=* F(x_2,...,x_k+1).

THEOREM 3. Let t >> k. For all limited F:[t]^k into [t], there exist  
distinct x_1,...,x_k+2 such that F(x_1,...,x_k) <= F(x_2,...,x_k+1) <=  
F(x_3,...,x_k+2).

Here limited means max(F(x)) <= max(x). And <=* is <= coordinatewise.
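Reading "distinct" literally (no ordering among the x's is assumed, which is my reading of the statement), the smallest case of Theorem 1 can be checked exhaustively. The sketch below verifies that t = 3 already suffices for k = 2 with 2 colors, while t = 2 trivially fails; the interest of the theorem is of course how fast the required t grows with k.

```python
from itertools import product, permutations

def has_adjacent_pair(F, t, k):
    """True if some distinct x_1,...,x_{k+1} in [t] have
    F(x_1,...,x_k) == F(x_2,...,x_{k+1})."""
    for xs in permutations(range(1, t + 1), k + 1):
        if F[xs[:k]] == F[xs[1:]]:
            return True
    return False

def theorem1_holds(t, k, r):
    """Exhaustively check Theorem 1's conclusion for every F:[t]^k -> [r]."""
    domain = list(product(range(1, t + 1), repeat=k))
    for values in product(range(1, r + 1), repeat=len(domain)):
        F = dict(zip(domain, values))
        if not has_adjacent_pair(F, t, k):
            return False  # counterexample found: this t is not large enough
    return True

# Three distinct points don't fit in [2], so t = 2 fails for k = 2:
print(theorem1_holds(2, 2, 2))  # False
# Exhausting all 2^9 = 512 functions F:[3]^2 -> [2] shows t = 3 suffices:
print(theorem1_holds(3, 2, 2))  # True
```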

Theorem 1 corresponds to iterated exponentials. Theorems 2,3  
correspond to just beyond Peano Arithmetic (this is far far beyond  
things like Ackermann's function).
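For calibration, the Ackermann function mentioned here can be sketched in a few lines (the standard two-argument Ackermann-Peter variant; an illustration, not part of the correspondence). Each successive first argument iterates the level below it, which is why the diagonal A(n,n) eventually outruns every primitive recursive function.

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100000)

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Ackermann-Peter function: level m iterates level m-1, so growth
    quickly leaves the primitive recursive world."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Row-by-row closed forms: A(1,n)=n+2, A(2,n)=2n+3, A(3,n)=2^(n+3)-3.
print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
print(ackermann(3, 5))  # 253, i.e. 2^8 - 3
# A(4, 2) = 2^65536 - 3 already has nearly 20,000 decimal digits; don't
# try to evaluate level 4 and beyond by recursion.
```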

I was wondering two things.

1. Does this in some way cross the line away from Ramsey theory into  
"general" math, in the sense that Richard was looking for?

2. Can statements like these be molded so that they are more clearly  
away from Ramsey theory?

Here is a direction to suppress Ramsey theoretic aspects yet further.  
I am curious as to what you might think of this approach.

TEMPLATE. Let V be a set of equalities between variables x_1,...,x_2k.  
For all F:N^k into [r], there exist distinct x_1,...,x_2k obeying V  
and F(x_1,...,x_k) = F(x_k+1,...,x_2k).

TEMPLATE. Let V be a piecewise linear subset of N^2k. For all F:N^k  
into [r], there exist x,y in N^k such that V(x,y) and F(x) = F(y).

The idea is that I am pretty sure we can give intelligible necessary  
and sufficient conditions on V for these Templates to hold. Theorem 1  
becomes (the obvious finite form of) a special case.

Similar approaches can be used to suppress the Ramsey theoretic aspects
of Theorems 2 and 3, making them special cases.

Harvey

PS: If you want something in another part of combinatorics, with much  
much higher rates of growth, you might take a look at

http://www.cs.nyu.edu/pipermail/fom/2010-September/015068.html

Also see

http://www.math.ohio-state.edu/~friedman/pdf/finiteseq10_8_98.pdf

and section 5 thereof.

From: reb at math.berkeley.edu
Date: September 28, 2010 7:35:08 PM EDT
To: "Harvey Friedman" <friedman at math.ohio-state.edu>
Subject: Re: EFA/PA
Reply-To: reb at math.berkeley.edu

> I was wondering two things.
>
> 1. Does this in some way cross the line away from Ramsey theory into
> "general" math, in the sense that Richard was looking for?
>

It's still some sort of Ramsey theory. By this I mean statements with  
the
flavor "in a sufficiently large set anything will happen". Ramsey theory
is perfectly good ordinary mathematics, but I was wondering if there  
were
any examples coming from areas like algebraic number theory/algebraic
geometry/Riemannian geometry/differential equations and so on. Why do
these areas with more "structure" never seem to involve functions with
growth rates more than "finite tower of exponentials"?

> 2. Can statements like these be molded so that they are more clearly
> away from Ramsey theory?
>

This wouldn't really help to answer my questions. I already know that
logicians can come up with statements very similar to "ordinary"
mathematical statements that have very high growth rates. But the odd
thing is that the statements ordinary mathematicians naturally come up
with almost never seem to have these high rates of growth. The main
exceptions I know of are the stuff you did on Kruskal's theorem and the
graph minor theorem. I guess what I want are examples of very high rates
of growth turning up naturally WITHOUT someone deliberately trying to
create high growth rates.

In your comments assigning "proof theoretic integers" to theories,
wouldn't it be more natural to assign "proof theoretic functions", with
the proof theoretic integers given by values of the functions at some
smallish argument? The function assigned to a theory might be something
like f(n)=max number of steps taken by a Turing machine that the
theory
can prove will halt with a proof of length at most n. Fixing n to be a
billion would give a reasonable proof-theoretic number for the theory.

I was also wondering if anyone has taken reverse mathematics down to  
lower
levels, around proof theoretic ordinals of size omega^3. The problem is
that the base system already seems far too strong for my work: I suspect
everything I have ever proved could be done in EFA (which I guess is a
corollary of your grand conjecture). So instead of asking which system  
is
needed for any theorem, I would like to assign proof theoretic  
ordinals to
individual mathematical theorems (such as mine), which would measure the
minimum amount of transfinite induction needed to prove the theorem.  
Maybe
one could assign explicit proof theoretic polynomials in omega (probably
mostly of degree 2) to most mathematical theorems. Then we would have a
completely precise way of saying whose theorem was deeper!

Richard Borcherds

From: Harvey Friedman <friedman at math.ohio-state.edu>
Date: September 28, 2010 9:35:43 PM EDT
To: reb at math.berkeley.edu
Subject: Re: EFA/PA

Thanks for your prompt reply! Did you send it to others in my original  
email? I am now just writing to you. Otherwise, I would like to cc to  
anybody else who received your very thoughtful reply!

>> I was wondering two things.
>>
>> 1. Does this in some way cross the line away from Ramsey theory into
>> "general" math, in the sense that Richard was looking for?
>>
> It's still some sort of Ramsey theory. By this I mean statements  
> with the
> flavor "in a sufficiently large set anything will happen". Ramsey  
> theory
> is perfectly good ordinary mathematics, but I was wondering if there  
> were
> any examples coming from areas like algebraic number theory/algebraic
> geometry/Riemannian geometry/differential equations and so on. Why do
> these areas with more "structure" never seem to involve functions with
> growth rates more than "finite tower of exponentials"?

So this clarifies my understanding of what you are looking for.

You might want to take a look at this from http://www.cs.nyu.edu/pipermail/fom/2010-January/014285.html

Also, recently, I constructed statements involving kernels on digraphs
where there are no growth rates whatsoever, yet the statements are
independent of ZFC, and can only be proved using large cardinals.

>> NOW for the restatement of the polynomials.
>>
>> Let n_1,...,n_k be in Z and 1 <= i <= k. The translates of
>> (n_1,...,n_k) in the i-th coordinate are obtained by adding an  
>> integer
>> to the i-th coordinate.
>>
>> POLYNOMIAL SHIFT TRANSLATION THEOREM. For all polynomials P:Z^k into
>> Z^k, there exist distinct positive integers n_1,...,n_k+1 such that,
>> in each coordinate, the number of translates of (n_1,...,n_k)  
>> achieved
>> by P is at most the number of translates of (n_2,...,n_k+1) achieved
>> by P.
>>
>> QUADRATIC SHIFT TRANSLATION THEOREM. For all quadratics P:Z^k into
>> Z^k, there exist distinct positive integers n_1,...,n_k+1 such that,
>> in each coordinate, the number of translates of (n_1,...,n_k)  
>> achieved
>> by P is at most the number of translates of (n_2,...,n_k+1) achieved
>> by P.
>>
>> THEOREM. Both of the above theorems are provable in ACA' but not
>> in Peano Arithmetic. They each imply 2-Con(PA) over EFA.
>>
>> Here ACA' is ACA_0 + "for all x,n, the n-th jump of x exists".
>>
>> I think that they are equivalent to 3-Con(PA) over EFA.


I mention it because it doesn't have the same "in a sufficiently large
set anything will happen" flavor. However, it was constructed by me.

You could ask for examples that fit reasonably well into those
subjects you mention, or ask for examples that are already present in
those subjects without someone like me going in there to create things
that are somewhat close, close, or very close, to what is already there.

My own view is that to some extent it is accidental what actually is  
present TODAY, as opposed to, say, after one more million years of  
professional mathematics. The latter is much larger, and I maintain,  
even 700 or 7000 years out, contains, say, Boolean Relation Theory (as  
on my website).

>> 2. Can statements like these be molded so that they are more clearly
>> away from Ramsey theory?
>>
>
> This wouldn't really help to answer my questions. I already know that
> logicians can come up with statements very similar to "ordinary"
> mathematical statements that have very high growth rates. But the odd
> thing is that the statements ordinary mathematicians naturally come up
> with almost never seem to have these high rates of growth. The main
> exceptions I know of are the stuff you did on Kruskal's theorem and  
> the
> graph minor theorem. I guess what I want are examples of very high  
> rates
> of growth turning up naturally WITHOUT someone deliberately trying to
> create high growth rates.

Yes, this is an interesting issue. Again, I think that the situation  
should be very different going way out into the future. I believe that  
under any scenario, incompleteness of concrete mathematical statements  
is rare. As you well know, the situation is radically different in set  
theory.

> In your comments assigning "proof theoretic integers" to theories,
> wouldn't it be more natural to assign "proof theoretic functions",  
> with
> the proof theoretic integers given by values of the functions at some
> smallish argument? The function assigned to a theory might be  
> something
> like f(n)=max number of steps taken by a Turing machine that the
> theory
> can prove will halt with a proof of length at most n. Fixing n to be a
> billion would give a reasonable proof-theoretic number for the theory.

That is the standard approach, and that is the usual foundations for  
growth rates. The buzzwords are "provably recursive function", and  
"provable ordinal", and variants of this terminology meaning the same  
thing. But I was laying a foundation for rigorously discussing  
individual integers. This is new and badly needed.

> I was also wondering if anyone has taken reverse mathematics down to  
> lower
> levels, around proof theoretic ordinals of size omega^3. The problem  
> is
> that the base system already seems far too strong for my work: I
> suspect
> everything I have ever proved could be done in EFA (which I guess is a
> corollary of your grand conjecture). So instead of asking which  
> system is
> needed for any theorem, I would like to assign proof theoretic  
> ordinals to
> individual mathematical theorems (such as mine), which would measure  
> the
> minimum amount of transfinite induction needed to prove the theorem.  
> Maybe
> one could assign explicit proof theoretic polynomials in omega  
> (probably
> mostly of degree 2) to most mathematical theorems. Then we would  
> have a
> completely precise way of saying whose theorem was deeper!
>

You may want to look at my paper on Strict Reverse Mathematics. See http://www.math.ohio-state.edu/%7Efriedman/manuscripts.html 
  section 1 paper number 59. This is not going to answer all of your  
issues, but I would be curious if you think it addresses some of them.  
We'll talk (email) later about this...

Harvey

From: reb at math.berkeley.edu
Date: September 29, 2010 12:16:55 AM EDT
To: "Harvey Friedman" <friedman at math.ohio-state.edu>
Subject: Re: EFA/PA
Reply-To: reb at math.berkeley.edu

I didn't send my reply to anyone else, but you are welcome to circulate
it:
I assume that all email I send is public.

A good example of what I am looking for is Tarjan's upper bound for the
run-time of the union-find data structure. This is a completely natural
mathematical problem that seems to have nothing to do with fast-growing
functions or Ramsey theory, yet the Ackermann function (or rather its
inverse) turns up unexpectedly. (By the way, has anyone shown that you
can't do better than this, which would imply that EFA cannot prove a
better
bound?)

> Also, recently, I constructed statements involving kernels on digraphs
> where there are no growth rates whatsoever, yet the statements are
> independent of ZFC, and can only be proved using large cardinals.

What are these statements? (The ones just following in your message seem
to be provable in much weaker systems). I don't know of any finitistic
combinatorial statements at all depending on large cardinal axioms  
(other
than routine encodings of consistency results). In fact I can't think of
any "natural" examples beyond second order arithmetic. ("Natural" here
means roughly something thought of by non-logicians.)

Here are a couple of questions in this area that puzzle me:

(1) We have a hierarchy of stronger and stronger systems, starting at
EFA
and passing through PRA, PA, second order arithmetics, set theory,  
various
large cardinal axioms, and so on. Why does almost all mathematics seem  
to
need just the very lowest level of this? Is this because most of us are
just too stupid to use the more powerful axioms effectively, or is there
some reason why most natural mathematical statements only need the  
lowest
levels? In 1000 years will mathematicians be making much heavier use of
these stronger systems, or will they still mainly be messing around at  
the
level of EFA?
A variation on this question: why is second order arithmetic used so  
much
in ordinary mathematics (via reals, p-adics, etc), while 3rd order
arithmetic does not seem to be used at all? I suppose measure theory  
does
in some sense use 3rd order constructions, but this can easily be  
avoided
most of the time by using Radon measures, etc.

(2) Is there a good way to measure the "difficulty" of a mathematical
result? Two obvious ways don't work: The "proof theoretic ordinal"
approach
would claim that Kruskal's theorem (provable in a couple of lines) is  
much
deeper than Fermat's last theorem (possibly provable in EFA), which does
not seem right. And "length of proof" does not work either, due to your
examples of statements that are provable in PA but need ridiculously  
long
proofs. So what measure of "difficulty" of a proof corresponds to our
feeling that the proof of FLT really is difficult?

Richard Borcherds

From: Harvey Friedman <friedman at math.ohio-state.edu>
Date: September 29, 2010 1:17:03 AM EDT
To: reb at math.berkeley.edu
Subject: Re: EFA/PA

> A good example of what I am looking for is Tarjan's upper bound for  
> the
> run-time of the union-find data structure. This is a completely  
> natural
> mathematical problem that seems to have nothing to do with fast-growing
> functions or Ramsey theory, yet the Ackermann function (or rather its
> inverse) turns up unexpectedly. (By the way, has anyone shown that you
> can't do better than this, which would imply that EFA cannot prove a
> better
> bound?)

From http://en.wikipedia.org/wiki/Disjoint-set_data_structure

"These two techniques complement each other; applied together, the  
amortized time per operation is only O(α(n)), where α(n) is the  
inverse of the function f(n) = A(n,n), and A is the extremely
quickly-growing Ackermann function. Since α(n) is the inverse of this
function, α(n) is less than 5 for all remotely practical values of n.  
Thus, the amortized running time per operation is effectively a small  
constant.

In fact, this is asymptotically optimal: Fredman and Saks showed in  
1989 that Ω(α(n)) words must be accessed by any disjoint-set data  
structure per operation on average.[3]
M. Fredman and M. Saks. The cell probe complexity of dynamic data  
structures. Proceedings of the Twenty-First Annual ACM Symposium on  
Theory of Computing, pp. 345–354. May 1989. "Theorem 5: Any  
CPROBE(log n) implementation of the set union problem requires Ω(m  
α(m, n)) time to execute m Find's and n−1 Union's, beginning with n  
singleton sets." "
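For concreteness, the data structure under discussion can be implemented in a few lines (an illustrative sketch, not part of the correspondence). The "two techniques" the quotation refers to are union by rank and path compression, and it is their combination that gives the O(α(n)) amortized bound.

```python
class DisjointSet:
    """Union-find with union by rank and path compression; m operations
    on n elements take O(m * alpha(n)) amortized time."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: after locating the root, repoint every node
        # on the search path directly at it.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

ds = DisjointSet(6)
ds.union(0, 1); ds.union(1, 2); ds.union(3, 4)
print(ds.find(0) == ds.find(2))  # True
print(ds.find(0) == ds.find(3))  # False
```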

You use the phrase "completely natural", which I think may mean  
something different from "natural".
Incidentally, although the Ackermann function naturally arises here
(more precisely, its inverse), perhaps if this is couched as an
explicit mathematical theorem without mentioning the Ackermann
function, it is not so strikingly natural?

Also I am not quite sure what role you have in mind for EFA in this  
situation - EFA is tied to iterated exponentiation, and the Ackermann  
function is far beyond the scope of EFA.

I would be curious how you view this kind of thing below - "natural",  
"not natural", "completely natural", or "natural but not completely  
natural"?

THEOREM 1. In any sufficiently long sequence of three letters, some  
x_i,...,x_2i is a subsequence of some x_j,...,x_2j, i < j.
THEOREM 2. In any sufficiently long sequence of k letters, some  
x_i,...,x_2i is a subsequence of some x_j,...,x_2j, i < j.

The first gives some sort of Ackermann type integer, whereas the  
second gives a growth rate that is incredibly faster than the  
Ackermann function.
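These are the "long finite sequences" statements: write n(k) for the length of the longest sequence from a k-letter alphabet in which no block x_i,...,x_2i is a (scattered) subsequence of a later block x_j,...,x_2j. The brute-force sketch below (my illustration, not part of the correspondence) recomputes the small values, finding n(1) = 3 and n(2) = 11; for k = 3 the search is hopeless, since n(3) already dwarfs Ackermann-level values.

```python
from itertools import product

def is_subsequence(a, b):
    """True if a occurs as a scattered subsequence of b."""
    it = iter(b)
    return all(ch in it for ch in a)

def avoids(seq):
    """True if no block x_i..x_2i embeds in a later block x_j..x_2j
    (1-based block indices i < j with 2j <= len(seq))."""
    n = len(seq)
    for j in range(2, n // 2 + 1):
        for i in range(1, j):
            if is_subsequence(seq[i - 1:2 * i], seq[j - 1:2 * j]):
                return False
    return True

def longest(k, limit=13):
    """Length of the longest avoiding sequence over a k-letter alphabet,
    searching exhaustively up to the given length limit."""
    best = 0
    for n in range(1, limit + 1):
        if any(avoids(s) for s in product(range(k), repeat=n)):
            best = n
    return best

print(longest(1))  # 3
print(longest(2))  # 11
```

A prefix of an avoiding sequence is itself avoiding, so the search can safely stop at the first length where every sequence fails.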

>> Also, recently, I constructed statements involving kernels on  
>> digraphs
>> where there is no growth rates whatsoever, yet the statements are
>> independent of ZFC, and can only be proved using large cardinals.
>
> What are these statements? (The ones just following in your message  
> seem
> to be provable in much weaker systems). I don't know of any finitistic
> combinatorial statements at all depending on large cardinal axioms  
> (other
> than routine encodings of consistency results). In fact I can't think of
> any "natural" examples beyond second order arithmetic. ("Natural" here
> means roughly something thought of by non-logicians.)

You might want to take a look at my book at http://www.math.ohio-state.edu/%7Efriedman/manuscripts.html 
  section 3, particularly the Preface and parts of the Introduction.  
In the Introduction you will also find a summary of what is in the  
book proper.

Basically, I am NOT a non-logician.

As you can see from the Preface, I am an optimist. There is clearly  
some notion of "natural" or "reasonably natural", and clearly there is  
a gap between what I can do and what occurs undoctored in nature.  
Optimistically, I believe the gap will be closed, and highly  
infinitistic methods, including large cardinals, will be used for  
things that are not only "natural" but of major interest. Yes, I am  
not likely to live to see this, but that is an artifact of when I was  
born - in my humble opinion.

> Here are a couple of questions in this area that puzzle me:
>
> (1) We have a hierarchy of stronger and stronger systems, starting
> at EFA
> and passing through PRA, PA, second order arithmetics, set theory,  
> various
> large cardinal axioms, and so on. Why does almost all mathematics  
> seem to
> need just the very lowest level of this?

My view is that part of the answer is that professional mathematics is  
just beginning, and that much larger time frames will tell a different  
story. Things like BRT (Boolean Relation Theory) are from the future.

> Is this because most of us are
> just too stupid to use the more powerful axioms effectively, or is  
> there
> some reason why most natural mathematical statements only need the  
> lowest
> levels? In 1000 years will mathematicians be making much heavier use  
> of
> these stronger systems, or will they still mainly be messing around  
> at the
> level of EFA?

I think that Concrete Mathematical Incompleteness is admittedly quite  
rare, but when you formulate 2^512 statements as in BRT with 2  
functions and 3 sets, something arises which requires large cardinals  
(see Introduction). Maybe only, say, a few hundred out of 2^512. That  
is not a large percentage.

> A variation on this question: why is second order arithmetic used so  
> much
> in ordinary mathematics (via reals, p-adics, etc), while 3rd order
> arithmetic does not seem to be used at all? I suppose measure theory  
> does
> in some sense use 3rd order constructions, but this can easily be  
> avoided
> most of the time by using Radon measures, etc.

I have been told by several people including Solovay that the natural  
setting for FLT involves serious use of sets of cardinality c, and  
hence 3rd order constructions. However, MacIntyre is convinced,  
without yet having a convincing proof, that PA is sufficient. See http://www.cwru.edu/artsci/phil/Proving_FLT.pdf

Again, this may be a temporary artifact of the fact that mathematics  
is so young, and the exceptions are so rare. But eventually,  
exceptions will be plentiful.

> (2) Is there a good way to measure the "difficulty" of a mathematical
> result? Two obvious ways don't work: The "proof theoretic ordinal"
> approach
> would claim that Kruskal's theorem (provable in a couple of lines)  
> is much
> deeper than Fermat's last theorem (possibly provable in EFA), which
> does
> not seem right. And "length of proof" does not work either, due to  
> your
> examples of statements that are provable in PA but need ridiculously  
> long
> proofs. So what measure of "difficulty" of a proof corresponds to our
> feeling that the proof of FLT really is difficult?

Now this is very unexplored. I think it is promising to work on the  
"difficulty" of mathematical results in the arena of near  
trivialities. I.e., way way way down in the hierarchy of "difficulty".  
This touches on issues in computer assisted proofs - what it takes to  
make the computers more helpful, or even not need human help. But at  
reasonable levels of "difficulty" this is a very difficult (no pun  
intended) topic. We have been comparatively successful with the other  
issues we have been talking about - although even there not nearly as  
successful as you would like (smile).

I WOULD LOVE TO SEND OUR CORRESPONDENCE IN TO THE FOM EMAIL LIST, AND  
CONTINUE IT THERE. ARE YOU UP FOR THAT?

Harvey

From: reb at math.berkeley.edu
Date: September 30, 2010 12:06:47 AM EDT
To: "Harvey Friedman" <friedman at math.ohio-state.edu>
Subject: Re: EFA/PA
Reply-To: reb at math.berkeley.edu

> You use the phrase "completely natural", which I think may mean
> something different from "natural".

No, I was using both as somewhat vague terms meaning something found by
someone not deliberately searching for fast-growing functions.

> Also I am not quite sure what role you have in mind for EFA in this
> situation - EFA is tied to iterated exponentiation, and the Ackermann
> function is far beyond the scope of EFA.
>

I'm not only interested in EFA; "natural" examples just beyond PRA or
Peano arithmetic and so on are also interesting, though they seem
incredibly rare.

> I would be curious how you view this kind of thing below - "natural",
> "not natural", "completely natural", or "natural but not completely
> natural"?
>
> THEOREM 1. In any sufficiently long sequence of three letters, some
> x_i,...,x_2i is a subsequence of some x_j,...,x_2j, i < j.
> THEOREM 2. In any sufficiently long sequence of k letters, some
> x_i,...,x_2i is a subsequence of some x_j,...,x_2j, i < j.
>
> The first gives some sort of Ackermann type integer, whereas the
> second gives a growth rate that is incredibly faster than the
> Ackermann function.
>

Nice examples. I had to read them several times to see the difference.
It
seems strange to me that 3 does not yet give the general case. However
these examples were (I assume) deliberately constructed by you to  
exhibit
high growth rates, so do not count as "natural" in the sense above.  
What I
find so strange about this is that your examples and the Paris-Harrington
theorem and so on seem only slightly different from questions that were
considered by non-logicians, yet the questions by non-logicians almost
never seem to give such high growth rates. Is there some subtle reason  
for
this, or is it just a historical accident?

> You might want to take a look at my book at
> http://www.math.ohio-state.edu/%7Efriedman/manuscripts.html
>  section 3, particularly the Preface and parts of the Introduction.
> In the Introduction you will also find a summary of what is in the
> book, proper.
>

Hmm, this will take a LONG time for me to digest...

> I have been told by several people including Solovay that the natural
> setting for FLT involves serious use of sets of cardinality c, and
> hence 3rd order constructions. However, MacIntyre is convinced,
> without yet having a convincing proof, that PA is sufficient.

I'm somewhat skeptical that Wiles makes any essential use of 3rd order
constructions, though I confess I have not looked at the proof in enough
detail, and if Solovay says 3rd order is used seriously that makes me
pause. My general impression in this area is that the 3rd order
constructions I have seen are caused by laziness and are trivial to  
reduce
to 2nd order, but that reducing 2nd order to 1st order takes serious
effort.

> See
> http://www.cwru.edu/artsci/phil/Proving_FLT.pdf

This is quite interesting, but also somewhat misleading in places.
The author seems to imply at various points that etale cohomology or
coherent cohomology or even group cohomology require large set-theoretic
constructions. It is obvious to anyone who understands these theories  
that
they do not require large amounts of set theory: this is just a minor
convenience in describing them.

> I WOULD LOVE TO SEND OUR CORRESPONDENCE IN TO THE FOM EMAIL LIST, AND
> CONTINUE IT THERE. ARE YOU UP FOR THAT?

Sure.

Richard Borcherds

********************************************

I now answer Richard's last email right here on the FOM.

*********************************************************

>> You use the phrase "completely natural", which I think may mean
>> something different from "natural".
>
> No, I was using both as somewhat vague terms meaning something found  
> by
> someone not deliberately searching for fast-growing functions.

So this leaves open the possibility that someone can be deliberately  
searching for fast-growing functions and find something intrinsically  
mathematically interesting, of competitive interest to things found by  
people not searching for fast-growing functions?

In my own experience, when I look for fast growing functions in new  
ways, the first versions are much less interesting than later  
versions, which are then designed to get rid of ad hoc features, put  
things in more general contexts, etcetera.

Normally, I look for thematic studies. So that's why I asked you about  
thematic investigations concerning the inequality f(x) <=* f(y),
where x,y are related.

>> I would be curious how you view this kind of thing below - "natural",
>> "not natural", "completely natural", or "natural but not completely
>> natural"?
>>
>> THEOREM 1. In any sufficiently long sequence of three letters, some
>> x_i,...,x_2i is a subsequence of some x_j,...,x_2j, i < j.
>> THEOREM 2. In any sufficiently long sequence of k letters, some
>> x_i,...,x_2i is a subsequence of some x_j,...,x_2j, i < j.
>>
>> The first gives some sort of Ackermann type integer, whereas the
>> second gives a growth rate that is incredibly faster than the
>> Ackermann function.
>>
>
> nice examples. I had to read them several times to see the  
> difference. It
> seems strange to me that 3 does not yet give the general case.

Theorem 1 simply asserts the existence of a particular integer.  
Theorem 2 asserts the existence of an integer depending on k. The  
first is called Sigma01, whereas the second is called Pi02. Theorem 2  
can be proved in 3 quantifier induction but not in 2 quantifier
induction. Theorem 1 can be proved in 2 quantifier induction in a  
couple of pages, but can only be proved in 1 quantifier induction  
using thousands (and probably many more) of inductions, and can only  
be proved in EFA using at least 2^2^2^2^2^2^2^2^2^2^2^2^2^2 symbols  
(and much more).
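The proof-length bound quoted here is an exponential tower fourteen levels high. For a sense of scale, a short sketch (illustrative, not part of the correspondence) of how quickly such towers outrun anything writable:

```python
def tower(n, base=2):
    """Iterated exponential: tower(0) = 1, tower(n) = base ** tower(n-1)."""
    v = 1
    for _ in range(n):
        v = base ** v
    return v

print(tower(3))            # 16
print(tower(4))            # 65536
print(len(str(tower(5))))  # 19729 decimal digits in 2^65536
# tower(6) = 2**tower(5) is already far beyond physical storage, let
# alone the 14-high tower in the bound quoted above.
```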

> However
> these examples were (I assume) deliberately constructed by you to  
> exhibit
> high growth rates, so do not count as "natural" in the sense above.  
> What I
> find so strange about this is that your examples and the Paris-Harrington
> theorem and so on seem only slightly different from questions that
> were
> considered by non-logicians, yet the questions by non-logicians almost
> never seem to give such high growth rates. Is there some subtle  
> reason for
> this, or is it just a historical accident?

My own view is that this is an historical accident. I believe that  
Theorems 1,2 above are inevitable if you go out a hundred, or
certainly several hundred or a thousand years. Of course, you may  
disagree with this view, or think there is no evidence for it. So the  
question naturally arises: what would constitute evidence,  
realistically, for one position or the other on such a matter?

> I'm somewhat skeptical that Wiles makes any essential use of 3rd order
> constructions ...
>
>> See
>> http://www.cwru.edu/artsci/phil/Proving_FLT.pdf
>
> This is quite interesting, but also somewhat misleading in places.
> The author seems to imply at various points that etale cohomology or
> coherent cohomology or even group cohomology require large set- 
> theoretic
> constructions. It is obvious to anyone who understands these  
> theories that
> they do not require large amounts of set theory: this is just a minor
> convenience in describing them.

I have written to Colin McLarty about your point.

Harvey Friedman
