[FOM] Re: On Hypercomputation
Timothy Y. Chow
tchow at alum.mit.edu
Tue Mar 16 10:31:25 EST 2004
On Mon, 15 Mar 2004, Dmytro Taranovsky wrote:
> The physical Church-Turing thesis says that every physical process can be
> modeled--and the correct probabilities for the outcomes of the process
> obtained--by a Turing machine. A weak hypercomputer is compatible with
> the physical Church-Turing thesis because its operation can be modeled by
> a Turing machine.
O.K., I see what you mean now. However, to me this just seems to
underscore my claim that the recursive/non-recursive distinction is
the wrong distinction to be thinking about.
We have weak hypercomputers today. Pick any Pi^0_1 statement of interest
---say, that ZFC+MC is consistent. Then construct two computers, one of which
outputs "Halts" on all inputs, and one of which outputs "Doesn't Halt" on
all inputs. One of these is a weak hypercomputer that correctly computes
the answer to the ZFC+MC question. Who needs breakthroughs in physics and
mathematics?
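The two constant machines described above can be written out explicitly. Here is a minimal Python sketch (the function names and phrasing of the question are mine, purely illustrative):

```python
# Two trivial "weak hypercomputers" for a fixed Pi^0_1 question, e.g.
# "does a proof search for an inconsistency in ZFC+MC ever halt?"
# Each program ignores its input and answers instantly.

def oracle_yes(n):
    """Constant machine: claims the proof search halts."""
    return "Halts"

def oracle_no(n):
    """Constant machine: claims the proof search never halts."""
    return "Doesn't Halt"

# Both are trivially computable (indeed constant), yet exactly one of
# them gives the correct answer on every input -- we just don't know
# which one it is.
print(oracle_yes(0), oracle_no(0))
```

The point is that "computes the right answer" is cheap; the hard part, which this construction sidesteps entirely, is *knowing* which machine is the right one.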
In your terminology, weak hypercomputers are consistent with the physical
Church-Turing thesis for the trivial reason that any finite partial
function on the integers can be extended to a recursive function by
setting the function to zero on all sufficiently large inputs. But this
only shows that the formulation of the physical Church-Turing thesis that
you're using fails to capture the issue that is at stake---namely, whether
we can (for example) definitively determine the truth of specific Pi^0_1
statements of interest.
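The extension argument is concrete enough to spell out. A minimal sketch (my own illustration, not from the post):

```python
# Any finite partial function on the integers extends to a total
# recursive function: return the tabulated value on the finite domain,
# and 0 everywhere else.

def extend(finite_table):
    """Extend a finite partial function (given as a dict) to a total one."""
    def g(n):
        return finite_table.get(n, 0)
    return g

f = extend({3: 7, 10: 1})
print(f(3), f(10), f(999))  # prints "7 1 0"
```

Since any physical device only ever produces finitely many outputs, its behavior is always consistent with *some* recursive function in this way, which is why the thesis as formulated cannot fail.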
> Eventually, under the scenario, the theory in ZFC+PD will make a
> number of seemingly unrelated predictions which imply Con(ZFC) (in a
> weak base theory). The predictions will be confirmed by experiment.
> In addition, the theory together with experiment will
> resolve in seemingly the right way many outstanding mathematical
> questions, including the Riemann hypothesis.
This scenario doesn't look much different from what we have today. All
these sorts of things could be achieved using classical computation.
People today come to believe in the consistency of ZFC and of much
stronger systems, without needing to wait for hypercomputers. Inner
models and large cardinals give us a beautiful and coherent network of
mathematical statements about infinite set theory, a network that has come
to be accepted even though it cannot be "proved" in the classical sense.
Ordinary computers that search for zeroes of zeta(s) off the critical line,
and find none, seemingly resolve the question in the right way. So why the fuss over
hypercomputation?
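The kind of classical computation alluded to here is easy to sketch. The following toy Python code (mine, not from the post; the eta-series method and all sample points are illustrative, and the crude partial sums are a demonstration, not a serious numerical technique) evaluates |zeta(s)| at a few points off the critical line:

```python
import math

# Approximate zeta(s) for Re(s) > 0, s != 1, via the Dirichlet eta
# series:  zeta(s) = (1 / (1 - 2**(1-s))) * sum_{n>=1} (-1)**(n-1) / n**s.
# Plain partial sums converge slowly; this is only a demonstration.
def zeta_approx(s, terms=20000):
    eta = sum((-1) ** (n - 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

# Sanity check against the known value zeta(2) = pi**2 / 6.
assert abs(zeta_approx(2) - math.pi ** 2 / 6) < 1e-6

# Sample |zeta(s)| at a few points off the critical line Re(s) = 1/2;
# a genuine counterexample search would look for zeroes in this region.
for sigma in (0.6, 0.75, 0.9):
    print(sigma, abs(zeta_approx(complex(sigma, 5.0))))
```

Serious verifications of the Riemann hypothesis use far more sophisticated methods, but the epistemic situation is the same: an ordinary Turing machine accumulating evidence, never a definitive resolution.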
Tim