[FOM] 204:Finite Extrapolation

Harvey Friedman friedman at math.ohio-state.edu
Mon Jan 19 19:53:08 EST 2004


On 1/19/04 4:57 PM, "Jacques Carette" <carette at mcmaster.ca> wrote:

> I do not understand why you state that Kolmogorov Complexity is
> 'asymptotic'.  True, a lot of *applications* have been asymptotic in nature,
> but the theory itself is very much 'finite', in that it deals primarily with
> finite strings, finite programs, and programs terminating in finite time.

It is asymptotic because there must be a choice of a model of computation.
Any two such models give complexity measures that differ by at most some
additive constant (as upper and lower bounds of each other), but what
constant? And even if a constant is chosen, whatever it is, that only gives
relative bounds, and not exact relationships, and of what significance are
such relative bounds when one is trying to deal with really short strings?
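
To spell out the standard point (the invariance theorem): for any two
universal machines U and V there is a constant c, depending only on U and V
and not on the string x, with

    K_U(x) <= K_V(x) + c   and   K_V(x) <= K_U(x) + c.

So the two measures agree only up to an additive constant, and for a really
short string x the value of K(x) is itself on the order of such constants.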

It seems rather unnatural and unwieldy to get hard data from a Turing
machine model. 

In contrast, what I propose is interesting, natural, and doable, I
believe. 
> 
> Kolmogorov Complexity has been used a lot to study randomness, and here
> things are indeed rather 'asymptotic'.  Using it to study *simplicity*
> instead, turns the tables around.

So now you can give us some of your favorite results. Are they of the form

"the actual Kolmogorov Complexity of this actual string is such and such".
> 
> Your language is clearly recursively enumerable, and your length function is
> computable, so yes, this is definitely a good place to start.

My language - just the language of the ordered ring of integers with an
obvious complexity measure - is of course not recursively enumerable.
(Obviously the syntax is extremely low-level recursive.) My complexity
measure is also of course not r.e., and not even arithmetical. (The
complexity of a formula is obviously extremely recursive, but that is again
pure syntax.)
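
To spell out the distinction: the complexity of an individual formula is a
matter of counting syntax and is trivially computable. But for a measure
that assigns to a set or sequence of integers the least complexity of a
formula defining it over the integers, one needs to know which formulas
define which sets - i.e., truth in the structure (Z,0,1,+,-,*,<) - and by
Tarski that is not even arithmetical.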

But that is asymptotic stuff. What happens with really simple strings is
what is interesting.

> But any r.e.
> language with computable length function will do.

Not for getting any interesting hard data of the kind discussed in my
original posting. Perhaps you do not understand what I was talking about.

> But that is not 'really'
> an interesting question, now is it?

It's your question, not mine.

I was talking about results of the form

"the simplest extrapolation of thus and such finite set or sequence is thus
and such."

> It is clear that in your language,
> interesting questions can be posed, but perhaps there could be a slightly
> different language with the same power but with completely different
> complexities?

No. I addressed that in my posting. I said that I believe there is a lot of
robustness, which will be revealed by the investigation.

E.g., one can look at minor modifications of my setup - say, adding
squaring as a primitive. For the simple strings I mentioned, where the
extrapolation pops out to the naked eye, I suspect that the results will be
the same - i.e., that for such simple strings, the simplest extrapolation is
invariant under sensible changes in the setup.

For that matter, one can use a good clean version of ZFC. I'll bet that the
simplest extrapolation of a lot of finite sets and sequences will be just
what is expected when a human being looks at the finite set or sequence.
> 
> I conjecture instead that there exist different languages than yours that
> can express the same ideas but which contain classes of objects whose
> complexity is not within a (universal) constant of the ones from your
> language.  This is what I mean by stability.

Obviously you can make up a language whose behavior on small strings is
pretty much anything you want. So you are again simply talking about
asymptotic stuff.
> 
>> Kolmogorov Complexity "succeeds" as you say, simply by becoming
>> asymptotic,
>> with no hard data.
> 
> This is why I have used MDL as my principal tool instead of KC itself.  And
> that I had to re-interpret it in an exact setting, based on axiomatically
> defined theories.  And there, you can get hard data.  See my paper for 2
> examples of 'hard data'.

Instead of commenting on what I wrote, why don't you present some hard data
of yours here on the FOM so we can all see what you are talking about?

I would like to see a full description of your basic results that you think
are relevant to my original posting.

If you do this, maybe you and I and the rest of the FOM email list will
learn something, instead of talking around in circles.
> 
> A bit of brute force could then prove these.

The number of formulas in my language of a given complexity grows wildly, so
a bit of brute force is completely out of the question, once the complexity
gets significant. If need be, I can certainly compute some lower bounds on
the number of formulas of a given complexity.
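
As a crude illustration of the growth - a toy sketch only, counting just
quantifier-free terms built from 0, 1, a single variable x, and the binary
operations +, -, *, with "complexity" crudely taken to be the number of
symbol occurrences, just to give a feel for the numbers - even this
fragment explodes:

    # Toy sketch (illustration only): count syntax trees of terms over the
    # atoms 0, 1, x and the binary operations +, -, *, where "complexity"
    # is the number of symbol occurrences.
    def term_counts(max_size):
        ATOMS, BINOPS = 3, 3     # atoms: 0, 1, x;  operations: +, -, *
        count = [0] * (max_size + 1)   # count[n] = terms of complexity n
        count[1] = ATOMS
        for n in range(2, max_size + 1):
            # a term of size n is op(t1, t2), size(t1) + size(t2) = n - 1
            count[n] = BINOPS * sum(count[i] * count[n - 1 - i]
                                    for i in range(1, n - 1))
        return count

    counts = term_counts(25)
    for n in range(1, 26):
        print(n, counts[n])

The counts grow roughly like 6^n, and the full language, with <, =,
connectives, and quantifiers, is of course far larger at each complexity.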

Also, remember that my language has quantifiers over integers. How does just
a bit of brute force tell us what the effect of those quantifiers is?

Recall that I left things open-ended. One can easily make the sequences a
bit more complicated, still with "obvious" extrapolations, where the obvious
definition in the language has complexity, say, n, with n sufficiently large
that the number of formulas of complexity at most n is truly staggering, far
beyond the number of electrons in the universe. How big is your computer?

> However, your
> sequences are very short, so their information content is very very low,
> possibly leading to a lot of ties.

This I doubt. Show me.

In fact, I picked these sequences sensing that, with some nontrivial effort,
one might be able to handle the conjectures. The idea, as indicated, was
that once one starts off with these early results, one can get far more
ambitious. E.g., see my last posting's response to Frank.

Harvey Friedman



