[FOM] Re: Tangential Epistemological Comment

Timothy Y. Chow tchow at alum.mit.edu
Wed Mar 24 11:28:30 EST 2004


Don Fallis <fallis at email.arizona.edu> wrote:
> PS.  Of course, even if mathematicians can accurately judge the
> probative value of non-deductive evidence, they still might want to
> restrict themselves to "a special kind of evidence."  But then what is
> the unique epistemic value of deductive evidence if it is (at least
> sometimes) less rationally convincing?

A vague idea that I have had on this subject is that it is a mistake to
visualize justified belief as being one-dimensional, spanning a line
segment from 0 (completely unjustified) to 1 (completely justified).
Rather, there are multiple axes.  One can be close to 1 along one axis 
and far from 1 along another axis.  In such cases there is plenty of
room for arguments between people who prefer a "min" metric and people
who prefer a "max" metric or some other metric.  On this multi-dimensional 
view, there's no canonical way to compare which is "better," a traditional 
proof that hasn't yet been fully scrutinized for errors or a non-rigorous
argument that is highly convincing and backed by extensive numerical
calculations.
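
To make this concrete, here is a minimal sketch in Python (the axis
names and scores below are invented purely for illustration, not
proposed measurements):

    # Justified belief as a vector of scores in [0,1], one per axis.
    # The axis names and numbers are illustrative assumptions only.
    profiles = {
        "unscrutinized traditional proof": {"rigor": 0.9, "numerics": 0.2},
        "convincing heuristic argument":   {"rigor": 0.4, "numerics": 0.8},
    }

    for name, scores in profiles.items():
        # Aggregate the same evidence profile under two different metrics.
        print(name, "min =", min(scores.values()), "max =", max(scores.values()))

Under the "max" metric the traditional proof ranks higher (0.9 vs. 0.8);
under the "min" metric the heuristic argument does (0.4 vs. 0.2).  The
two metrics genuinely disagree about which belief is better justified.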

Anyway, in spite of the subject line, I don't think this is all that
tangential to the subject of hypercomputation.  Consider something like
the consistency of ZFC.  This can be, and has been, approached along
various axes.  Goedel's 2nd theorem implies that the traditional axis for
mathematical statements---proof---is plagued by some difficulties that
many ordinary mathematical statements don't suffer from.  The question is,
would hypercomputation eliminate these difficulties from the traditional
axis?  Or if not, would it at least provide a qualitatively new axis?
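
To spell out the difficulty in symbols (this is just the standard
statement of the 2nd theorem, specialized to ZFC):

    $\mathrm{Con}(\mathrm{ZFC}) \;\Longrightarrow\; \mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC})$

So ZFC cannot certify its own consistency, and a proof of Con(ZFC) in a
stronger theory merely raises the same question one level up.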

I've argued that some of the rhetoric has exploited the connotations
of the word "computation" to hint that hypercomputation would indeed
fully eliminate the difficulties along the traditional axis.  The
hypercomputer would "directly compute" the answer, presumably yielding
the same *type* of certainty that finite computations currently yield.
As I've said before, I disagree with this point of view.

Whether hypercomputation provides a qualitatively new axis is also
unclear to me.  An
extremely fast classical computer could search extensively for a
contradiction in ZFC, and we could choose to extrapolate this finite
computation and claim that ZFC is consistent.  Is this different from
extrapolating finite physical theories all the way out to infinity and
believing the answer given by a hypercomputer?  I'm not sure, but either
way I doubt that hypercomputation is anything to get too excited about.
If the answer is no, then I conclude that hypercomputers are just
superfast classical computers: nice, but revolutionary only
quantitatively, not qualitatively.  If the answer is yes, then I think that what
makes hypercomputers qualitatively different is this notion of requiring
physical laws to be valid all the way out to infinity, rather than out to
some finite and sufficiently large (or small) realm.  This seems to me to
be a pretty shaky axis, given our past experiences with physical theories
breaking down whenever we push them too far.  Harvey Friedman's suggested
tests for reliability seem to me only to push our confidence out to a
larger finite realm, not all the way to infinity.
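
For concreteness, the finite search being extrapolated has roughly the
following shape (a minimal sketch in Python; encode and is_zfc_proof_of
are hypothetical stand-ins for a real proof encoding and proof checker,
not actual implementations):

    def encode(n):
        # Hypothetical: read the integer n as a candidate ZFC derivation.
        return n

    def is_zfc_proof_of(derivation, conclusion):
        # Hypothetical stand-in: a real checker would verify each
        # inference step.  Always returning False just keeps the sketch
        # runnable; it is not a real checker.
        return False

    def search_for_contradiction(bound):
        # A classical computer can only ever run this up to a finite bound.
        for n in range(bound):
            if is_zfc_proof_of(encode(n), "0 = 1"):
                return n  # a witness that ZFC is inconsistent
        return None  # nothing below the bound: evidence, not proof

Claiming Con(ZFC) because search_for_contradiction(10**9) returned None
is exactly the extrapolation step in question; trusting a hypercomputer's
verdict instead extrapolates the physics rather than the computation.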

Tim


