# [FOM] normal numbers

Bill Taylor W.Taylor at math.canterbury.ac.nz
Thu Jul 19 01:36:21 EDT 2007

Rupert McCallum writes:

->I have a question about the definition of normal numbers.
->
->Here we have defined them as numbers in which the frequency with which
->finite sequences of digits occur are the same as would be expected of
->random distribution.

That is generally agreed as being the definition.  And it DOES cover
correlations between digits: they are *automatically* accounted for,
for proper randomness, by the above defn.
(This would make a good 3rd-year probability problem, BTW!)

However, the definition still does not include everything "reasonable"
that one could possibly ask for.
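The fixed-block frequency condition in that definition is easy to state computationally.  Here is a minimal sketch (the helper name `block_frequencies` is my own) that tallies the overlapping length-k blocks of a finite digit prefix; for a base-10 normal number these frequencies tend to 10^-k as the prefix grows, though of course no finite prefix can certify the limit.

```python
from collections import Counter

def block_frequencies(digits, k):
    """Relative frequency of each overlapping length-k block in a digit string."""
    total = len(digits) - k + 1
    counts = Counter(digits[i:i + k] for i in range(total))
    return {block: n / total for block, n in counts.items()}

# For a base-10 normal number, each length-k block's frequency -> 10**-k
# in the limit; a finite prefix can only suggest that behaviour.
freqs = block_frequencies("1212", 2)  # {"12": 2/3, "21": 1/3}
```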

-> I have seen elsewhere the claim that
-> sometimes further conditions are imposed,

Yes, but they are not of the correlation type.

It is rarely, but sometimes, observed that if we took a normal-random digit
generator and then surreptitiously changed the 1st, 4th, 9th, 16th, and so on,
of the output digits to zeros, the resulting infinite sequence would STILL PASS
all conceivable randomness tests based on testing fixed blocks of digits,
which is all the above definition guarantees.

However, it would clearly no longer be "random" in the intuitive sense.
It would (if one had been alerted to it) be trivially easy to show that
the sequence was not random in a slightly extended sense.  It would be FISHY!
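That massaging is easy to write down explicitly.  A minimal sketch (the function name is my own): zero the digits at the 1-based square positions 1, 4, 9, 16, ...  Since the squares have density zero among the positions, the change leaves every fixed-block frequency unaltered in the limit, which is exactly why the doctored sequence still passes everything the plain definition covers.

```python
def massage_squares(digits):
    """Zero out the digits at 1-based square positions 1, 4, 9, 16, ...

    The squares have density zero, so in the limit this does not change
    any fixed-block frequency -- yet the result is obviously non-random.
    """
    out = list(digits)
    i = 1
    while i * i <= len(out):
        out[i * i - 1] = "0"  # convert 1-based square position to 0-based index
        i += 1
    return "".join(out)
```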

So, a full description of what it means to be an infinite sequence of
random digits must agree with the above definition of normality not only
for the whole sequence, BUT ALSO for every recursive subsequence; those
being the ones that are susceptible to such previously-specified tests.
Fortunately there are only a countable number of such subsequences,
so that is not too absurdly prohibitive.
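A "recursive subsequence" here is just the digits read off along a computable, strictly increasing sequence of positions, against which the normality test would then be re-run.  A sketch of the extraction (names and the increasing-index assumption are mine):

```python
def subsequence(digits, index_fn):
    """Digits at positions index_fn(0), index_fn(1), ... (0-based).

    index_fn is assumed to be a computable, strictly increasing
    function of n, so the extraction stops at the first position
    that falls outside the finite prefix we hold.
    """
    out = []
    n = 0
    while True:
        i = index_fn(n)
        if i >= len(digits):
            break
        out.append(digits[i])
        n += 1
    return "".join(out)

# e.g. the even-indexed digits, one simple recursive subsequence
evens = subsequence("0123456789", lambda n: 2 * n)  # "02468"
```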

But it's even worse!  Not only must specified-in-advance subsequences
pass normality, but so must every conditionally-generated subsequence!

For instance, we could take a fully random generator of digit sequences,
and then find the first occurrence of (say) "111", and then go through
surreptitiously changing all the square-th positions after that to zeros.
This diabolically massaged sequence generator would now pass all possible
randomness tests of the recursive-subsequence type.
So all conditionally-generated subsequences must also be
accounted for in the most "proper" definition of randomness!
Again, there are only a countable number of such conditions,
so it's not so bad.  It doesn't stop almost all reals having full
randomness in this most sophisticated sense.  (I recall once seeing
a book in which it was claimed that this DID stop all reals being random;
but that was an error.  Almost all reals are still OK.)
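The conditionally-triggered massage described above, as a sketch (the function name and the choice of trigger are mine): wait for the first "111", then zero the square-th positions counted from just after it.  No fixed recursive set of positions is touched, so the recursive-subsequence tests all still pass, yet the output is plainly doctored.

```python
def conditional_massage(digits):
    """After the first occurrence of "111", zero the square-th positions
    (1st, 4th, 9th, ...) counted from just past that trigger.

    If the trigger never occurs in the prefix, nothing is changed.
    """
    start = digits.find("111")
    if start == -1:
        return digits
    base = start + 3  # first position after the trigger block
    out = list(digits)
    i = 1
    while base + i * i - 1 < len(out):
        out[base + i * i - 1] = "0"
        i += 1
    return "".join(out)
```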

AFAIK, that's as far as one needs to go!  It may still be that God could
devise even subtler massaging procedures that would pass all my above tests,
yet still be clearly non-random.  But I know of no such subtler massagings,
and I doubt that they can exist, in fact.

But even so, the whole shebang above is far too deviously paranoid in
its approach to testing (equivalently, defining) "perfectly random" digit
strings. So, the standard definition is always taken to be the obvious one.
That is, normal digit strings are random and vice versa.