FOM: more on Rabin's test

Stephen Cook sacook at cs.toronto.edu
Thu Sep 17 11:48:49 EDT 1998


Rabin's primality test is not a very good example to use in discussing
the quality of randomness needed for a probabilistic algorithm.  In
fact Rabin's test (which is often called the Miller/Rabin test)
was inspired by Gary Miller's STOC 75 paper, which
includes the result that, assuming the Extended Riemann Hypothesis (ERH), there
is a deterministic polynomial time algorithm for testing primality.
The algorithm is based on Ankeny's theorem (which assumes the ERH),
from which it follows that if n is composite and not a "Carmichael
number", then there is a witness c of size O(log^2 n) which violates
the Fermat test for primes.  (The exceptional Carmichael numbers are
easily factored.)
Roughly speaking, it follows that assuming the ERH, we don't need
random numbers in Rabin's test, but it suffices to use the numbers
1,2,3,...,k, where k is bounded by the square of the length of n.
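
To make this concrete, here is a minimal Python sketch of the
strong-pseudoprime witness check that underlies the Miller/Rabin test,
together with the deterministic variant that tries only small bases
under the ERH.  The function names are illustrative, and the
particular bound 2*(ln n)^2 on the least witness is Bach's later
sharpening of Miller's O(log^2 n) bound, so treat the constant as an
assumption.

import math
import random

def is_witness(a, n):
    """Return True if a proves that the odd number n > 2 is composite."""
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return False
    return True

def miller_rabin(n, trials=20):
    """Probabilistic test: a "composite" answer is always right, and a
    "prime" answer is wrong with probability at most 4**(-trials)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    return not any(is_witness(random.randrange(2, n - 1), n)
                   for _ in range(trials))

def miller_erh(n):
    """Deterministic variant: under the ERH it suffices to try the
    bases 2, 3, ..., floor(2 * (ln n)^2)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    bound = min(n - 2, int(2 * math.log(n) ** 2))
    return not any(is_witness(a, n) for a in range(2, bound + 1))

Since each witness check is one modular exponentiation, miller_erh
runs in time polynomial in the length of n once the ERH bound on the
least witness is granted.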

The larger issue here is: Suppose that a probabilistic polynomial
time algorithm A solves a decision problem D, in the sense that
(assuming a perfect random source) A can achieve error probability
at most p on input x by running in time bounded by a polynomial in 
the length of x and log (1/p).  Suppose we run
algorithm A a large number of times with some random (or pseudorandom)
source of bits, and get a consistent answer, say x is in D.   Under
what circumstances can we regard the result as a proof that x is in D?
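
For a one-sided-error test like Miller/Rabin, independent repetition
with fresh random bits drives the error down exponentially, which is
what makes the question bite.  A small Python sketch (the name amplify
and the contract assumed for one_sided_test are hypothetical):

import random

def amplify(one_sided_test, x, k):
    """Run a one-sided-error test k times with fresh randomness.
    Assumed contract: one_sided_test(x, rng) returns True or False, a
    False answer is always correct, and a True answer is wrong with
    probability at most 1/2 per run.  Then a True returned here is
    wrong with probability at most 2**(-k)."""
    rng = random.SystemRandom()   # stand-in for a perfect random source
    return all(one_sided_test(x, rng) for _ in range(k))

The 2**(-k) guarantee is computed against a perfect source; the
question is what survives when random.SystemRandom() is replaced by a
physical or pseudorandom generator.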

BPP is the class of decision problems D which have such algorithms.
The set of primes is in BPP, by the Miller/Rabin test, but by Miller's
result it seems reasonable to conjecture that the set of primes is in P.
A few other BPP algorithms are known, including Berlekamp's algorithm
and Schwartz' singularity test (discussed below).   Unfortunately
there are no known complete problems for BPP, because there is no
known r.e. set of indices for problems in BPP.

I think the best example of a BPP algorithm is Schwartz' test.  (Actually
this is an RP algorithm, since, like the Miller/Rabin test, a "no"
answer is never in error.)  Schwartz' problem is:  given an n by n
matrix M with multivariate polynomial entries, determine whether M is
singular.   A straightforward evaluation of the determinant is not
feasible, since the determinant expression grows exponentially with
the number of variables in the polynomials.  Schwartz' idea is to
pick small random integer values for the variables and evaluate the
determinant of the resulting integer matrix.  It is not hard to show
that if M is nonsingular, then most values will lead to a nonzero
determinant.
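
Here is a minimal Python sketch of Schwartz' test.  The representation
of the matrix (one callable per polynomial entry), the sample-set
size, and the function names are choices made for illustration; the
error bound comes from the Schwartz-Zippel lemma, which says that a
nonzero polynomial of total degree d vanishes at a point chosen
uniformly from S^m with probability at most d/|S|.

import random
from fractions import Fraction

def det_exact(A):
    """Exact determinant of a square integer matrix, by Gaussian
    elimination over the rationals."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    det = Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= factor * M[i][c]
    return det

def schwartz_singular(entries, num_vars, degree_bound, trials=20):
    """Randomized singularity test.  entries is an n x n matrix of
    callables, each mapping a tuple of num_vars integers to the integer
    value of that polynomial entry; degree_bound bounds the total
    degree of the determinant as a polynomial in the variables.  A
    "nonsingular" answer is always correct; a "singular" answer is
    wrong with probability at most (1/100)**trials."""
    S = range(100 * degree_bound)    # sample set with |S| = 100 * degree_bound
    for _ in range(trials):
        point = tuple(random.choice(S) for _ in range(num_vars))
        A = [[f(point) for f in row] for row in entries]
        if det_exact(A) != 0:
            return False             # definitely nonsingular
    return True                      # singular, with high probability

# Example: a 2 x 2 matrix in variables v[0], v[1] whose determinant is
# identically zero.
entries = [[lambda v: v[0], lambda v: v[1]],
           [lambda v: v[0], lambda v: v[1]]]
print(schwartz_singular(entries, num_vars=2, degree_bound=2))   # True

As with Miller/Rabin, a "nonsingular" answer comes with a checkable
certificate (an integer point where the determinant is nonzero), while
a "singular" answer is only statistical evidence.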

So we can rephrase Joe's and Harvey's question to:  Given a 100 by 100
matrix of 100-variable polynomials which Schwartz' test says is
singular, under what conditions do we consider the result proved?

Harvey asks 

"But would you agree that one does not need the physically generated bits to
be perfectly random? One needs them only to be approximately random - in
some sense."

Yes, this seems right.  The notion of quality of randomness is a topic
in the literature, with contributions by Wigderson, Nisan, Blum and
his students, etc.

Steve Cook





