[FOM] Complexity of notions/intermediate degrees

Harvey Friedman friedman at math.ohio-state.edu
Tue Feb 1 12:16:29 EST 2005


I sometimes work on the project of creating a general purpose language for
the presentation of mathematics, first without proofs, and then
incorporating proofs.

Let us say that this suitably user-friendly language L is in place. We can
take L to be your favorite such language that already exists. For present
purposes, we do NOT care about incorporating proofs into the language.

We can define the L complexity of a mathematical object as the number of
symbols needed to define that object in L.

We need to clarify how L is used. A definition in L consists of a
hierarchical presentation of definitions, built up from the primitives of L.
The last definition in the presentation is the definition of the notion
being treated; the earlier definitions build up the notions it depends on.
The size of the definition is the size of the entire presentation.
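
To fix the counting convention, here is a toy sketch in Python. The
representation of a presentation as an ordered list of named definitions,
and the convention that symbols are whitespace-separated tokens, are purely
illustrative assumptions, not forced by anything above.

def L_complexity(presentation):
    # presentation: a list of (name, body) pairs in dependency order;
    # the last pair is the definition of the notion being treated.
    # Symbol count = whitespace-separated tokens, a fixed but arbitrary
    # convention chosen only for this sketch.
    return sum(len(body.split()) for _name, body in presentation)

# Toy example: building up to "n is even" from assumed primitives.
presentation = [
    ("divides", "d divides n iff exists k ( n = d * k )"),
    ("even",    "n is even iff 2 divides n"),
]
print(L_complexity(presentation))   # size of the entire presentation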

In practice, it appears to be extremely difficult to get any reasonable
lower bound on the L complexity of even very simple mathematical objects.

However, one can obviously give upper bounds, and presumably, with some
cleverness, reduce these upper bounds substantially. One can conjecture that
after some hard work to reduce upper bounds, one generally gets something
that is, or is quite close to, the actual L complexity.

We can consider the case of intermediate degrees. Specifically,

1. Find a set of natural numbers, or a multivariate relation on natural
numbers, or a (partial) function on natural numbers, perhaps of more than
one variable, together with a reasonably short definition in L of this set,
relation, or function, where the object is not recursive. (Also see D below).

2. Do the same, except that the set, relation, or function chosen must be
of degree strictly between 0 and 0'. Work hard to minimize this definition.
Perhaps a different researcher or researchers should do this, rather than
the one who does 1. (Also see D below).

3. Compare the sizes in cases 1 and 2. Also compare the sizes in 2 with the
sizes one gets by treating a variety of universally-agreed-upon "natural"
notions in various branches of mathematics. (Also see D below).

Now there are some problems with this approach.

A. One difficulty with this approach is that there may be a lot of overhead
needed to get 1 going. I.e., many definitions may be needed to lead up to
the final definition. This overhead may be so great that it dwarfs any
difference between 1 and 2.

B. I don't know whether or not A is really the case. But there is a way
around this potential difficulty. We can assume that L has, built in, an
agreed-upon library of definitions drawn from elementary mathematics. One
could be precise by saying that the material in baby undergraduate set
theory/foundations of mathematics forms the basis of the built-in notions.
Or one could take the notions from the now-standard texts for an
undergraduate course in Discrete Math (mainly a service course for computer
science majors).

C. One can instead use other specific languages that are well known to
logicians, e.g., the primitive recursion calculus. I.e., we only look for
definitions of multivariate functions from N into N given by a primitive
recursive development, and we count the total number of symbols used. One
can profitably be very precise about this.
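
For instance, one could represent primitive recursive developments as terms
and have the symbol count computed mechanically. The following toy Python
sketch is purely illustrative; the particular term syntax and the convention
of one symbol per constructor are assumptions made only to show the kind of
precision intended.

from dataclasses import dataclass
from typing import Tuple

class Term:
    pass

@dataclass
class Zero(Term):                 # the constant zero function
    def size(self): return 1

@dataclass
class Succ(Term):                 # the successor function
    def size(self): return 1

@dataclass
class Proj(Term):                 # projection U^n_i
    n: int
    i: int
    def size(self): return 1

@dataclass
class Comp(Term):                 # composition f(g1, ..., gk)
    f: Term
    gs: Tuple[Term, ...]
    def size(self):
        return 1 + self.f.size() + sum(g.size() for g in self.gs)

@dataclass
class PrimRec(Term):              # primitive recursion with base g, step h
    g: Term
    h: Term
    def size(self):
        return 1 + self.g.size() + self.h.size()

# Addition: add(x, 0) = x, add(x, y+1) = S(add(x, y)).
add = PrimRec(Proj(1, 1), Comp(Succ(), (Proj(3, 3),)))
print(add.size())    # total symbol count of this development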

D. In the L approach, one can instead talk about defining the DEGREE and not
a particular element of the degree. This might make the comparison between 1
and 2 more stark. After all, we can informally define the degree 0' as the
highest r.e. degree, and not have to get involved in the cost of defining
any specific universal machine.
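
For reference, the standard degree-theoretic formulation of that informal
definition (standard recursion theory, nothing new here) reads, in LaTeX
notation:

\[
  \mathbf{0}' = \deg_T(K), \quad \text{where } K = \{\, e \in \mathbb{N} :
  \varphi_e(e)\downarrow \,\},
\]
\[
  \text{and } A \le_T K \text{ for every r.e. set } A, \text{ so }
  \deg_T(A) \le \mathbf{0}'.
\]

Stated at this level, the definition of the degree stays short, whereas any
particular member of the degree drags in the cost of a specific machine.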

Harvey Friedman
 



