FOM: Indubitability, CH: reply to Pollack and Riis
Joe Shipman
shipman at savera.com
Wed Sep 16 11:48:05 EDT 1998
Pollack:
>You might tell me to run Rabin's test myself, but if I get a
>different result there is no basis for discussion. If you publish
>your trace from running the test, why should I believe the randomness
>of the "random" input. Surely no probabilistic analysis should
>convince me ex post facto.
>Perhaps your trace uses a pseudo-random stream published before you
>computed the trace. This is a bit more convincing, but Cook states I
>should not (yet) accept this as evidence
No, one should use a *physically generated* stream of random bits
published beforehand. Tables containing billions (maybe even
trillions?) of such bits have been published. And of course you won't
get a different result, because I really will have used random bits so
the number really will be prime (except maybe once in the entire history
of a googol of universes like this one when 500 consecutive
bad witnesses really were selected by chance). Anyway, whether I
publish the trace and ask you to verify that the trace is a consistent
computation whose random input really was published in Volume X of library
Y, or ask you to select your own random numbers and test n for
primality yourself, a consensus that "n is prime" will be established. When you
say "proofs serve as evidence for what they prove" you must remember
that verifying evidence still requires some effort.
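To make the procedure under discussion concrete, here is a sketch of Rabin's probabilistic primality test. The function name and the example numbers are mine, and the witness list here is a stand-in for numbers drawn from a pre-published table of physically generated random bits, as proposed above.

```python
def is_probable_prime(n, witnesses):
    """Rabin's test: return False iff some witness proves n composite.

    If n is composite, each independently random witness fails to
    expose it with probability at most 1/4, so k witnesses give an
    error bound of 4**-k.
    """
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        a %= n
        if a in (0, 1, n - 1):
            continue  # trivial witness, gives no information
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue  # n passes this witness
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break  # n passes this witness
        else:
            return False  # a exposes n as composite
    return True  # n passed every witness

# Illustrative examples (witnesses 2..21 standing in for a random table):
print(is_probable_prime(2**127 - 1, range(2, 22)))  # True (a Mersenne prime)
print(is_probable_prime(221, range(2, 22)))         # False (221 = 13 * 17)
```

Note that a prime is never wrongly declared composite; the only possible error is a composite surviving every witness, which is what the probabilistic bound controls.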
We seem to agree that proof requires independent checking resulting in a
consensus; Hersh denies that consensus requires completely rigorous (or
"rigorizable") proofs, and I claim that it does even when you expand the
notion of proof to allow computer-aided or probabilistic proofs, because
those expanded notions depend on underlying completely rigorous (or
"rigorizable") proofs that algorithms are correct. [By the way, these
are definitely distinct notions -- although a computer would be very
convenient for testing 100 witnesses to the primality of a 100-digit
number a la Rabin, a person could do this by hand in a few weeks, and
mathematicians have been willing to spend much more time than that to
verify a theorem (though I will admit that no one would be willing to do
this without a computer for such a trivial theorem; most would quit after
20 witnesses (p(error) < 2^-40) or fewer). It would be significantly
easier to verify by hand a computer trace of Rabin's algorithm using a
published table of random numbers than to generate your own random
numbers and rerun the algorithm by hand.]
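The error figures in the bracketed note can be checked mechanically: each witness cuts the error probability by at least a factor of 4 (this 1/4 bound is Rabin's theorem), so 20 witnesses give 4^-20 = 2^-40 and 100 witnesses give 2^-200. A small sketch (the function name is mine):

```python
from fractions import Fraction

def rabin_error_bound(k):
    """Upper bound on the probability that a composite n survives
    k independently chosen random witnesses in Rabin's test."""
    return Fraction(1, 4) ** k

assert rabin_error_bound(20) == Fraction(1, 2**40)    # the 2^-40 in the text
assert rabin_error_bound(100) == Fraction(1, 2**200)  # 100 witnesses
```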
Lest I be accused of misrepresenting Hersh here, let me clarify: Hersh
has not really said that rigorous proofs are unnecessary, he's said that
rigorous foundations are unnecessary. My reply that rigorous
foundations are in fact necessary has two parts:
1) Consensus that a theorem has been established comes from "proofs"
(reductions of the proposition to previously established propositions).
This is an observed sociological fact.
2) The possibility of consensus that a theorem is completely and
rigorously established depends on the possibility of unwinding the proof
all the way down to foundational axioms when necessary [as Zermelo did
when he defended his proof of the Well-Ordering Theorem by reducing it
to some axioms about sets]. This is also an observed sociological fact,
established more recently than 1). And this does NOT mean the proof has
to be literally reduced to a machine-checkable form,
just that it has to be (informally) clear to mathematicians that it
could be.
Riis's "proof" of CH calls for such treatment! Soren, can you please
try to formulate a statement in the language of set theory from which
your principle, that the result of the game should not depend on the
order in which the players move, follows?
Simply ~CH is not enough. One candidate (from my thesis): the
cardinality of the smallest nonmeasurable set is less than the
cardinality of the smallest partition of R into measure-0 sets. This
implies the "strong Fubini theorem in 2 dimensions" that Harvey had
earlier shown consistent:
If f is nonnegative and
/Integral/Integral f(x,y) dxdy and /Integral/Integral f(x,y) dydx
both exist,
those iterated integrals are equal.
There are two possible generalizations to n dimensions:
a) If all n! of the iterated integrals exist, they are equal.
b) Any two of the iterated integrals which exist are equal.
I showed that version a) is equivalent to the 2-dimensional case, and
version b) follows from a similar condition involving n cardinalities.
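For definiteness, the two n-dimensional versions can be restated in standard notation (a sketch; the symbol I_sigma is my abbreviation, not from the original):

```latex
% For nonnegative f : R^n -> [0, \infty) and a permutation \sigma of
% \{1,\dots,n\}, write
%   I_\sigma(f) = \int \cdots \int f(x_1,\dots,x_n)
%                   \, dx_{\sigma(1)} \cdots \, dx_{\sigma(n)}.
%
% Version a): if I_\sigma(f) exists for every permutation \sigma,
%             then all n! values coincide.
% Version b): if I_\sigma(f) and I_\tau(f) both exist for some
%             permutations \sigma, \tau, then I_\sigma(f) = I_\tau(f).
```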
These conditions are consistent with ZFC and follow from the existence
of a countably additive real-valued measure defined on *all* subsets of
R. Interestingly, certain unorthodox formulations of quantum mechanics
required the strong Fubini theorem b) in 3 dimensions to be FALSE
(because measuring noncommuting observables corresponded to iterating
integrals in different orders--from a "quantum" perspective Soren's game
is not so counterintuitive!), so my results implied that those physical
theories required axioms going beyond ZFC. (The physicists who had
developed these formulations, Pitowsky and Gudder, blithely assumed CH
in order to make their models work without noting its significance; they
were thus able to achieve a "local hidden variables" theory agreeing
with experiment, at the cost of introducing weird nonmeasurable sets
into physics and extending probability theory to handle them).
-- Joe Shipman