FOM: Consensus vs Indubitability JoeShipman at
Wed Sep 16 01:12:48 EDT 1998

> While I agree with the thrust of Joe's argument (that rigorous proofs
> are necessary to establish consensus), I don't agree that the second example,
> establishing primality using Rabin's probabilistic test, provides a rigorous
> proof.  The difficulty is that there is no practical source of random input
> bits which can be used to apply Rabin's algorithm.  In practice, a
> pseudo-random number generator is invariably used, which is just a
> deterministic algorithm which starts with a (hopefully random) initial seed.
> It is far from clear that this method will generate sufficiently random bits
> to make Rabin's algorithm work.  In fact, if one could prove in a general
> setting that some pseudo-random number generator is sufficiently random, it
> would follow that the probabilistic complexity class BPP coincides with P,
> a major open problem.

This is exactly why I was careful to specify "we believe we have selected 500
witnesses in {1,2,3,...,n-1} sufficiently randomly" WITHOUT referring to
deterministic pseudo-random number generators.  My argument in fact
contemplates using a physical source of random bits (or equivalently a
published table of random numbers).  Since I was talking about admitting more
general types of "proofs" which are to be verified more in the style of
physical sciences (e.g. by reproducible experiments), this presents no
obstacle to achieving a consensus that the claim has been established.
Neither does it vitiate my contention contra Hersh that rigorous proofs are
indispensable, because we still demand a conventional proof like Rabin's to
establish a probability of "failure" for individual trials bounded away from
1.  If one used the (incorrect) Fermat test instead of Rabin's -- the Fermat
test fails for almost all witnesses on certain rare composites known as
Carmichael numbers -- we would have a different situation, in which the same
kind of consensus could not be established.
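The contrast between the two tests can be made concrete in code.  The sketch
below (mine, not part of the original post) implements one Rabin trial and one
Fermat trial; note that the witness a is supplied as a parameter, so it may
come from a physical random source or a published table, exactly as
contemplated above.  561 = 3*11*17 is the smallest Carmichael number: the
Fermat test passes for every witness coprime to it, while Rabin's test still
detects its compositeness for most witnesses.

```python
def miller_rabin_trial(n, a):
    """One Rabin trial with witness a: returns False if a proves n
    composite, True if n passes (n is prime or a is a non-witness).
    Rabin proved that for odd composite n, at most 1/4 of the witnesses
    in {1, ..., n-1} make the trial pass, so 500 independent trials
    err together with probability at most 4**-500."""
    if n % 2 == 0:
        return n == 2
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def fermat_trial(n, a):
    """One Fermat trial: does a**(n-1) == 1 (mod n)?  Incorrect as a
    primality test: Carmichael numbers pass for every coprime witness."""
    return pow(a, n - 1, n) == 1
```

For example, fermat_trial(561, 2) passes even though 561 is composite,
whereas miller_rabin_trial(561, 2) exposes it; in fact most witnesses in
{2, ..., 560} expose it, which is what licenses the consensus Joe describes.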

By the way, Steve, in your opinion does BPP = P?  All that is required is
that a pseudo-random number generator be "sufficiently random" for the
particular problem at hand; in particular, the generator may be arbitrarily
more complex than the randomized algorithm itself while remaining polynomial
time, so it could have complexity O(n^Ackermann(k,k)) where the randomized
algorithm has running time O(n^k).  Given that latitude, I would guess so.
