[FOM] Re: Davis's reply to Ord re hypercomputation

Timothy Y. Chow tchow at alum.mit.edu
Fri Mar 12 14:23:23 EST 2004


Neil Tennant <neilt at mercutio.cohums.ohio-state.edu> wrote:
>"Fraught with difficulty" might be something of an understatement here.
>Why not rather say that the truth in such a matter, if it obtains, must 
>be knowable? So, if a proposition of the form "This device is a
>hypercomputer" is not knowable (i.e. knowably true), then there is NO 
>FACT OF THE MATTER as to whether the device in question is a 
>hypercomputer.

Although I have more or less been arguing Davis's side and against Ord's 
side, I wouldn't go quite as far as Tennant suggests.

Recall my distinction between "experimental" statements that are (in
some suitable sense) directly verifiable in a finite experimental way, 
and "theoretical" statements that are theoretical extrapolations of
experimental statements.  This distinction has deficiencies but I think
it's good enough for the purpose at hand.

Theoretical statements typically cover an infinitude of scenarios and
hence it is always possible that they are wrong even if they are 
consistent with everything we know experimentally.  But I, and most
others, wouldn't therefore conclude that we can't know any theoretical
statements to be true, or that there is no fact of the matter about
theoretical statements.  If we can keep expanding the set of scenarios
that we test and keep finding that the theoretical statement holds as
predicted, then at some point we typically start using the word 
"knowledge."

A hypercomputational oracle could be investigated in this way, as Ord,
Friedman, and perhaps others have suggested.  Namely, we ask it various
questions whose answers we already know, and see whether it answers them
correctly.
In principle there is an unbounded number of such tests we could subject
it to, and so I think this is enough to declare that there is a fact of
the matter.
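
To make that testing protocol concrete, here is a minimal sketch in
Python of what such a test loop might look like, assuming (purely
hypothetically) that the device exposes a halts(program, input)
interface claiming to decide the halting problem; the names oracle,
halts, and KNOWN_CASES are mine for illustration, not part of anyone's
actual proposal:

    # Known instances, each verifiable by ordinary finite means.
    KNOWN_CASES = [
        # (program source, input, does it halt?)
        ("while True: pass", "", False),   # provably non-halting
        ("print('done')",    "", True),    # halts immediately
    ]

    def test_oracle(oracle, cases):
        """Return True if the oracle agrees with every known answer."""
        for program, data, expected in cases:
            if oracle.halts(program, data) != expected:
                return False   # a single wrong answer refutes the claim
        return True            # agreement so far, but not a proof

Of course, passing any finite battery of such tests only raises our
confidence; it cannot prove that the device answers correctly on
questions whose answers we do not already know.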

The objections I have to hypercomputation as it is typically advocated
are slightly more subtle.  First, the advocates typically exploit the
fact that we associate the word "computation" with *experimental*
statements rather than theoretical ones, hinting that a hypercomputation
of (say) the consistency of ZFC would somehow upgrade our confidence in
the consistency of ZFC from "theoretical" to "experimental."  That is,
the hypercomputer would free us from the chains of Goedel's 2nd theorem
by directly computing the answer just as ordinary computers directly
compute things.  Nothing in what I've seen of the hypercomputation
literature warrants such a conclusion, and indeed I doubt that such an
upgrade could possibly be justified, as I have discussed elsewhere.

Second, even if we treat the existence of a hypercomputer as a 
theoretical statement, there are difficulties because all the proposals
seem to rely on theoretical statements holding *all the way out to 
infinity*, into realms beyond what we can measure.  This is unlike all
ordinary theoretical extrapolations, which do not have to be valid all
the way out to infinity in order for their predictions in the measurable
realm to be correct.  Experience strongly suggests that all physical
theories break down eventually when you extrapolate them too far.  So
someone needs to come up with a proposal that circumvents this "problem
of infinity" somehow before I'd be willing to put stock in it.  Although
I am skeptical, and agree with Davis's phrase "fraught with difficulty,"
I am not quite ready to rule it out entirely.

Tim


