Asymptotic growth of a non-computable sequence

Dennis E. Hamilton dennis.hamilton at acm.org
Thu Aug 12 18:55:16 EDT 2021


Caballero wrote:

> Kreinovich, Vladik wrote:

> > Since a(n) has n binary digits, it is between 2^n and 2^{n+1} = 2*2^n, so
> > this is a clear asymptotic

> where a(n) = the smallest number having n binary digits, which can't be 
> compressed concerning a fixed Turing machine

> I suspect that a(n)/2^n converges to 1 as n goes to infinity.

[orcmid responds]

The TL;DR: something more is meant about the fixed Turing machine and what 
it means to have incompressible cases.  That needs to be made explicit.

A thought experiment:

Not to be overly fussy, we can presume that the leading bit is always 1 (to 
avoid trivial compressibility).  In that case, the candidate binary numbers 
range from 2^(n-1) to 2^n - 1, and there are 2^(n-1) of them.  Caballero's 
suspicion can then be restated: as n approaches infinity, a(n)/2^(n-1) 
approaches 1.  In effect, the proportion compressible by any fixed TM 
approaches 0.
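
Just to make the counting concrete, here is a small brute-force check (a 
Python sketch; nothing in the argument depends on it):

    # Candidates: n-bit binary numbers whose leading bit is 1.
    # They run from 2^(n-1) to 2^n - 1, so there are 2^(n-1) of them.
    for n in range(1, 8):
        candidates = list(range(2**(n - 1), 2**n))
        assert all(len(format(k, 'b')) == n for k in candidates)
        assert len(candidates) == 2**(n - 1)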

We are left to deal with what it means to be incompressible.  Maybe 
information theory shows us enough about that without any consideration of 
Turing machines at all.

Presume, first, that lossless compression is intended: From the number 
representing a compressed value, we can recover the original.

I contrived an easy way to do that by assuming the a(n) candidates all have 
the first of their n bits being 1.  Suppose then that the number corresponding 
to a compressed value has a first bit of 0 and fewer than n bits in all.  If 
we allow exactly n bits, all we have to do is flip the leading bit to 0, or 
even flip all the bits.  We know how to decompress all of those with a trivial 
algorithm.  But there is no compression, and we have one wasted bit, the 
leading 1/0.
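
A minimal sketch of that trivial scheme, in Python (the names encode and 
decode are mine, purely for illustration):

    # "Compress" an n-bit original (leading bit 1) by flipping the
    # leading bit to 0.  Lossless, but the output has the same n bits:
    # no savings, and the leading 1/0 is the one wasted marker bit.
    def encode(bits):              # bits is a string such as '1011'
        assert bits[0] == '1'      # originals start with 1
        return '0' + bits[1:]

    def decode(code):
        assert code[0] == '0'      # compressed values start with 0
        return '1' + code[1:]

    assert decode(encode('10110')) == '10110'   # round trip recovers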

If there must be fewer than n bits in the compression, each code has at most 
n-1 bits, including the leading 0 bit.  There are only 2^(n-1) - 1 such codes 
-- one fewer than the 2^(n-1) originals -- so at least one original cannot be 
compressed by any method that is lossless and for which the compression is 
shorter than the original.  And if the compression must save at least two 
bits, there are at most 2^(n-2) - 1 codes, so more than half of the originals 
cannot be compressed that far.  There's probably a tighter proof by 
induction.  We don't need that to see the bind.
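
The count can be checked mechanically (a Python sketch of the pigeonhole 
arithmetic above):

    # Originals: 2^(n-1) strings of length n with leading bit 1.
    # Codes: strings with leading bit 0.  Those of length <= n-1
    # number 2^(n-1) - 1 (one too few); those of length <= n-2
    # number 2^(n-2) - 1 (fewer than half of the originals).
    for n in range(2, 12):
        originals = 2**(n - 1)
        codes_shorter = sum(2**(k - 1) for k in range(1, n))
        codes_two_shorter = sum(2**(k - 1) for k in range(1, n - 1))
        assert codes_shorter == originals - 1
        assert codes_two_shorter < originals // 2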

We are now up against a well-known characteristic of compression algorithms. 
The more successful the compression is on many cases, the more cases there 
must be where the compression result is worse -- has more bits than the 
original.  In practical schemes, the cost of the expanded cases is minimized 
(like that fixed initial bit taken to mean not compressed).  And in practical 
schemes, something is known about the nature of the strings of bits to be 
handled, so that the algorithm provides successful compressions in practice, 
with pessimistic cases producing tolerably larger results.
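
For a concrete illustration with an off-the-shelf compressor (a Python 
sketch using zlib; the expansion on random input is just the scheme's 
framing overhead):

    import os, zlib

    random_data = os.urandom(4096)     # typical incompressible input
    regular_data = b'a' * 4096         # highly regular input

    print(len(zlib.compress(random_data)))    # usually a bit over 4096
    print(len(zlib.compress(regular_data)))   # far fewer than 4096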

I suspect that the fixed Turing machine case concerns a different question, 
albeit constrained in at least the manner I have sketched.

 - Dennis





