[FOM] Re: On Hypercomputation

Timothy Y. Chow tchow at alum.mit.edu
Sun Mar 14 18:23:59 EST 2004

Dmytro Taranovsky <dmytro at mit.edu> wrote:
> One of the strongest arguments against hypercomputation is that, by
> definition, a hypercomputer must accept input of arbitrary length--even
> inputs much longer than two to the number of particles in the universe.

Surely this is not an argument against hypercomputation.  By definition, a
Turing machine must also accept input of arbitrary length.  I don't think
you intend to say that this is one of the strongest arguments against the
Turing machine.

> Harvey Friedman introduces the notion of a weak hypercomputer, which is
> capable of solving all instances shorter than a certain length of the
> halting problem.  Weak hypercomputers are compatible with the physical
> Church-Turing thesis, yet may have almost all of the practical benefits
> of hypercomputation.

I don't know what exactly you mean by the "physical Church-Turing thesis" 
here, but in any case, it has seemed to me all along that there has been a 
tacit assumption that by an actual hypercomputer, people mean what you 
call here a "weak hypercomputer."  This is similar to the tacit assumption
that when people talk about ordinary computers and Turing machines in the 
same breath, they mean that a real, physical computer is a "weak Turing 
machine" that accepts inputs of a limited size.

So, for example, a machine that searches for a proof of a contradiction in 
ZFC can be written down explicitly in a small finite space.  Your "weak 
hypercomputer" could presumably solve this instance of the halting problem, 
and it is difficult to see in what sense this is "compatible with the 
physical Church-Turing thesis" (even though I don't know exactly what you 
mean by that term).
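The machine in question is just the standard proof-search program.  A
minimal sketch in Python, with the ZFC proof verifier left as a
hypothetical stub (the `checks` parameter is my placeholder, not anything
from the discussion), might look like this:

```python
from itertools import count, product

def candidate_proofs(alphabet="01"):
    """Enumerate all finite strings over the alphabet in
    length-lexicographic order, so every string appears eventually."""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def search_for_contradiction(checks, limit=None):
    """Return the first enumerated string that passes `checks`.

    Here `checks` stands in for a mechanical verifier deciding whether
    a string encodes a valid ZFC proof of "0 = 1".  Since proof
    checking is computable, the whole machine is an ordinary Turing
    machine, and it halts iff ZFC is inconsistent.  The `limit`
    argument bounds the search purely for demonstration purposes.
    """
    for i, s in enumerate(candidate_proofs()):
        if checks(s):
            return s          # halts: a contradiction was "found"
        if limit is not None and i + 1 >= limit:
            return None       # give up (demonstration only)
```

With a toy checker the halting behavior is visible; with a real ZFC
proof verifier plugged in, whether this program halts is exactly the
small, explicitly writable instance of the halting problem at issue.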

> Also, one can verify if something is a weak
> hypercomputer (see my posting "Experimentation, Incompleteness, and
> Undecidability" on Feb 11, 2004), but to verify that a physical
> procedure solves the halting problem, one must verify that the procedure
> does not break down when the size of the input is larger than the number
> of atoms in the universe--which is, as Martin Davis remarked, "fraught
> with difficulty."

I don't think that Davis, or I, or any of the other critics of
hypercomputation as it is currently being discussed, sees this as the
central issue.  The size of
the input is not the critical point, since even without carrying out
Harvey Friedman's program of creating succinct Turing machines, it is
clear that there are relatively small finite instances of the halting
problem that are problematic.  Let's put it this way: Suppose one builds
an alleged "weak hypercomputer" that can tell us whether an ordinary 
Turing machine that searches for an inconsistency in ZFC+MC (MC = 
measurable cardinals exist) halts.  How exactly do you propose to
verify that this alleged weak hypercomputer is really a weak hypercomputer 
and correctly answers that question, without going beyond the conventional
assumptions of (whatever version you prefer of) the physical Church-Turing
thesis?

If you were to say that the problem is that the *internal operation* of
a hypercomputer, weak or not weak, seems to require infinite precision
or infinite extrapolation of physical theories (in all incarnations that
have been proposed), then I would agree with you that this is (close to)
the crux of the matter.  This is quite different from the issue of the
size of the input.
