[FOM] Re: A Newtonian computing design

Harvey Friedman friedman at math.ohio-state.edu
Fri Feb 13 00:16:16 EST 2004


This is a continuation of my recent posting, A Newtonian computing design,
2/11/04 3:24PM, combined with a reply to Chow.

On 2/12/04 4:48 PM, "Timothy Y. Chow" <tchow at alum.mit.edu> wrote:

> For further
> concreteness, suppose that one of the Turing machines in question
> searches for a contradiction from ZFC, or ZFC + "there exists a measurable
> cardinal" if you prefer.

The pure f.o.m. part of my proposal in that previous posting is, for
example, to 

*present an at most 100 quadruple Turing machine together with a proof in,
say, Peano arithmetic, that: ZFC + "there exists a measurable cardinal" is
consistent if and only if that Turing machine fails to halt at the empty
tape input.*

I picked the number 100 because

a) I believe that this may be tricky to do;
b) I am confident that I can easily do this for a few hundred.

So on the pure f.o.m. side, if someone succeeds in doing this with 100, then
it will be interesting to raise the bar and ask for this to be done with,
say, 50. I.e., just see how well one can do with more and more deep work.
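To fix conventions, here is a minimal sketch of what a quadruple-format Turing
machine is and what "halts at the empty tape input" means. This is not
Friedman's construction - the two-quadruple toy machine below is purely
hypothetical - but it shows the format being counted: each quadruple
(state, scanned symbol, action, new state) either writes a symbol or moves
the head, and the machine halts when no quadruple applies.

```python
from collections import defaultdict

def run_quadruple_tm(quads, max_steps=1000):
    """Simulate a quadruple-format Turing machine started on the empty
    (all-blank) tape.  Each quadruple is (state, scanned, action, new_state),
    where action is either a tape symbol to write or 'L'/'R' to move the
    head.  Returns ('halt', steps) when no quadruple applies, or
    ('running', max_steps) if the step budget runs out."""
    table = {(q, s): (a, q2) for (q, s, a, q2) in quads}
    tape = defaultdict(lambda: 'B')   # 'B' is the blank symbol
    head, state = 0, 0
    for step in range(max_steps):
        key = (state, tape[head])
        if key not in table:
            return ('halt', step)     # no applicable quadruple: machine halts
        action, state = table[key]
        if action == 'L':
            head -= 1
        elif action == 'R':
            head += 1
        else:
            tape[head] = action       # write the symbol; head stays put
    return ('running', max_steps)

# A hypothetical 2-quadruple toy: write 1, move right, then halt
# (no quadruple is defined for state 2).
toy = [(0, 'B', '1', 1), (1, '1', 'R', 2)]
print(run_quadruple_tm(toy))  # ('halt', 2)
```

The problem above asks for a machine of this format with at most 100
quadruples whose failure to halt is provably (in PA) equivalent to the
consistency of ZFC plus a measurable cardinal.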

Of course, I have thought about this kind of f.o.m. program over the years
in its pure form without connections to physics computers, etcetera. And I
always thought that this was worthwhile for a number of reasons, not the
least of which is just what deep constructions need to be made in order to
really pare down the number of quadruples needed.

However, I think that most of my colleagues around the world will need more
motivation than that in order to get into this difficult purely f.o.m.
problem. So perhaps the physical computer connection will be enough.

This problem, and the entire situation, are also quite interesting

**even if we remove any kind of infinities here.**

For let us consider the following problem:

#) DOES ZFC + 'there exists a measurable cardinal' HAVE AN INCONSISTENCY OF
SIZE AT MOST 2^100???

By size, we mean total number of bits in the presentation. This is not the
same as the length of the proof, since we must also take into account the
size of the lines in the proof.
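As a toy illustration of the distinction - the three "proof lines" below are
a hypothetical propositional fragment, and 8 bits per ASCII character is
merely one assumed encoding, with no abbreviation mechanisms - the length of
a proof counts lines, while its size counts the bits of the whole
presentation:

```python
# Hypothetical three-line proof fragment; the encoding assumptions are
# illustrative only.
proof_lines = ["A -> (B -> A)", "A", "B -> A"]

length_in_lines = len(proof_lines)                         # length of the proof
size_in_bits = sum(8 * len(line) for line in proof_lines)  # size of the proof

print(length_in_lines, size_in_bits)  # 3 160
```

A proof with few but enormous lines can thus have small length and huge size.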

Here we have to be a bit careful about how we formalize set theory. I claim
that we can, in a reasonably robust way, take a formalization of set theory
which allows abbreviations. I have over the years thought about just what is a
"complete set of abbreviation mechanisms", and got involved in the design of
such (work not completed and not available). But I remember drawing the
conclusion that once one has some standard abbreviation power, then having
much more does not change the size of proofs (total number of bits when
presented) by more than a small constant factor. Issues like "what constant
factor" are interesting and important, of course.

Again, one has the conjecture that one can present a specific TM with at
most 100 quadruples and a proof in Peano Arithmetic that

the negation of #) is equivalent to: this TM does not halt at the empty tape
input within 2^2^200 steps.

Note that I have a double exponential here. This is because, even if one
succeeds in finding the appropriate TM code with at most 100 quadruples, the
obvious algorithm associated with #) will require an amount of time
exponential in 2^100, since, at least superficially, one must examine all
strings of size at most 2^100. The number 100 probably has to be raised
somewhat to take into account the size of the small finite alphabet used to
write proofs. Raising to 200 looks safe.
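A back-of-the-envelope check of that safety margin, working only with
exponents since the numbers themselves are far too large to write down (the
8-bits-per-symbol alphabet encoding is an assumption):

```python
# Assumption: proofs are strings over an alphabet encoded in 8 bits per
# symbol.  There are at most 2^(b*(L+1)) strings of length <= L over a
# 2^b-symbol alphabet, so the exponent of the brute-force step count is
# roughly:
ALPHABET_BITS = 8
SIZE_BOUND = 2**100            # maximum inconsistency size, in bits

search_exponent = ALPHABET_BITS * (SIZE_BOUND + 1)   # about 2^103

# The proposed bound 2^2^200 has exponent 2^200.
bound_exponent = 2**200

print(search_exponent < bound_exponent)  # True: 2^2^200 is ample
```

So even a very generous estimate of the exhaustive search stays comfortably
below 2^2^200 steps.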

So now we have a (perhaps new) two-dimensional f.o.m. problem! Get a Turing
machine equivalent (over PA, say) to #) with a "small" number of quadruples
and with a "small" number of computation steps!! It is not clear to me just
what the tradeoffs here are.

Incidentally, what can we say about the relationships between the following
six statements?:

#) DOES ZFC + 'there exists a measurable cardinal' HAVE AN INCONSISTENCY OF
SIZE AT MOST 2^100? (as above)

##) DOES ZFC + 'there exists a nontrivial elementary embedding from some
rank into itself' HAVE AN INCONSISTENCY OF SIZE AT MOST 2^100??

###) DOES ZFC + 'there exists a measurable cardinal' HAVE AN INCONSISTENCY OF
SIZE AT MOST 2^2^100???

####) DOES ZFC + 'there exists a nontrivial elementary embedding from some
rank into itself' HAVE AN INCONSISTENCY OF SIZE AT MOST 2^2^100????

#####) DOES ZFC + 'there exists a measurable cardinal' HAVE AN INCONSISTENCY
OF SIZE AT MOST 100?????

######) DOES ZFC + 'there exists a nontrivial elementary embedding from some
rank into itself' HAVE AN INCONSISTENCY OF SIZE AT MOST 100??????

> Case 2: The Newtonian experiment resists finite simulation.  There could
> be something subtle in the equations of classical mechanics that implies
> that any fixed finite approximation to the system fails to correctly
> predict the collision/non-collision outcome for all initial conditions
> in the specified range.  In that case, my objection would be that "belief
> in" the outcome of the physical system requires
> 
> *believing that Newtonian mechanics remains valid in regions beyond
>  what we can measure*.
> 
> For being finite ourselves, we can only measure the system a finite number
> of times to some finite accuracy.  I do not see why we should believe in
> Newtonian mechanics when it is extrapolated to untested limits.  Note that
> Harvey's setup, if it is to fit in a normal-sized lab, involves distances
> close to the Planck length.

I was intending to address this issue of reliability. I do so now.

First of all, in some appropriate sense worth analyzing in detail (I
haven't), we need only believe the physical theory to a FINITE level of
"correctness" in the above new finite forms - i.e., where the Turing
machines run only for a specified long, but finite, number of steps.

Secondly, there is a seemingly convincing way we can become quite confident
in the results of my setup.

Specifically, we first establish the following conjecture:

%a truly startling number of deep theorems of mathematics can be presented
in the form that a specific Turing machine with at most 100 quadruples does
not halt at the empty tape input, demonstrably within a very weak system%

A couple of points about %.

1. It is not clear how to formulate % formally. For it is almost certainly
true that 

%%a truly startling number of deep theorems of mathematics can be proved
within a very weak system%%

2. The content of % can be conveyed INFORMALLY in the form

%%%a truly startling number of deep theorems of mathematics can be presented
in the form that a specific Turing machine with at most 100 quadruples does
not halt at the empty tape input, with a comparatively shallow proof of the
equivalence, or at least a proof of the equivalence which in no way, shape,
or form relies on any substantial aspect of the proof of that theorem.%%%

Now back to my main point about reliability.

We would run my Newtonian computer on the appropriate <= 100 quadruple
Turing machines for: FLT = Fermat's last theorem; the four-colorability of
planar maps; the existence of a prime in every interval [x,2x], x >= 2; the
existence of at least 26 sporadic simple groups; the existence of 27
sporadic simple groups. The first three are true Pi-0-1 sentences, the
fourth is a true Sigma-0-1 sentence, and the fifth is a false Sigma-0-1
sentence. Note that any proof that there are at least 26 sporadic simple
groups "should need" the construction of the Monster group, something that
moving Newtonian particles probably do not understand very well.

Let us assume that such a physics computer gives the correct answer to all
of these problems, and many, many more - in fact, the correct answer to
every single deep theorem of mathematics that has a corresponding <= 100
quadruple Turing machine.

I think that this would be regarded as overwhelming evidence of the
reliability of such a physics computer.

Of course, a rather deep question is: why is this such good evidence?

But in any case, we would then apply this physical computer to the
consistency problem for ZFC with large cardinals, probably with great
confidence. 

Harvey Friedman



