FOM: Inventional and Verificational Complexity

Joe Shipman shipman at
Wed Nov 3 12:35:05 EST 1999

Detlefsen's distinction between "inventional" and "verificational"
complexity is very reminiscent of the P-NP distinction.  The key
conceptual difference is that inventional and verificational complexity
are meant to be attributes of individual finite objects (namely,
theorems or proofs) rather than of infinite sets like SAT.

Just as Kolmogorov complexity is developed by first fixing a particular
basis for computation (e.g. a universal Turing machine), the
verificational complexity of a THEOREM can be conveniently defined by
fixing a particular proof system (e.g. first-order logic with the ZFC
axioms, or with the PA axioms) and taking the (suitably defined) length
of the minimal proof of the theorem.  The verificational complexity of a
PROOF appears much simpler to define.
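The "minimum over all proofs in a fixed system" definition can be sketched in a few lines of code. This is only an illustrative toy, not a real proof checker: the enumeration of proofs and the `verificational_complexity` helper are hypothetical names, and a proof here is just a (conclusion, length) pair.

```python
# Hypothetical sketch: given some enumeration of valid proofs in a fixed
# system (say, first-order logic with the PA axioms), the verificational
# complexity of a theorem is the length of its shortest proof.

def verificational_complexity(theorem, proofs):
    """proofs: iterable of (conclusion, length) pairs for the fixed system.

    Returns the minimal length among proofs whose conclusion is the theorem.
    """
    return min(length for conclusion, length in proofs
               if conclusion == theorem)

# Two proofs of "A" of lengths 12 and 7; the complexity of "A" is 7.
demo_proofs = [("A", 12), ("B", 5), ("A", 7)]
print(verificational_complexity("A", demo_proofs))  # 7
```

Of course, in the real setting the minimum ranges over an infinite set of proofs, which is what makes the notion nontrivial.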

Inventional complexity is more interesting.  A theorem may have an
easy-to-discover long proof and a hard-to-discover short proof.  (One
must take into account, however, that a short proof depending on more
mathematical infrastructure is not really shorter: one cannot simply
compare Newman's short complex-analysis proof of the Prime Number
Theorem with Erdos and Selberg's long "elementary" proof without somehow
measuring the length of the necessary development of complex analysis.)
But we could work within a fixed theory like PA, and ask about
properties of a proof other than its length that might measure its
inventional complexity.

For example, although proofs are normally written as a linear sequence
of statements, they really have a tree-like structure because each line
directly depends on a small number of earlier lines.  One could measure
the height of this tree, or the number of leaves (how many of the lines
were axioms).  In my opinion, both of these are good measures of
difficulty of invention.   While it would be easy to transform a given
proof and completely mess up these measures, they are appropriate when
applied to a proof that was independently found.  If one views one's own
mathematical knowledge as a web, then it is easy to see how one might
extend this web by making inferences from known statements, and theorems
which eventually "fall out" from the normal development of a subject
thus get a low inventional complexity by either of those measures.
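The two measures above (height and number of leaves) can be computed directly if one represents a proof as a list of lines, each citing the earlier lines it directly depends on. The representation and the function name below are my own illustrative choices, not a standard one.

```python
# Hypothetical sketch: a proof as a list of lines, where
# dependencies[i] is the list of earlier line indices that line i
# directly depends on (an axiom line cites nothing).

def proof_measures(dependencies):
    """Return (height, leaves) for a proof in dependency-list form.

    height: length of the longest chain of inferences ending at the
            final line (the theorem).
    leaves: number of axiom lines, i.e. lines citing no earlier line.
    """
    height = [0] * len(dependencies)
    for i, deps in enumerate(dependencies):
        if deps:  # an inferred line sits one level above its tallest premise
            height[i] = 1 + max(height[j] for j in deps)
    leaves = sum(1 for deps in dependencies if not deps)
    return height[-1], leaves

# Example: lines 0 and 1 are axioms; line 2 is inferred from 0 and 1;
# line 3 (the theorem) is inferred from line 2 and axiom 1.
demo = [[], [], [0, 1], [2, 1]]
print(proof_measures(demo))  # (2, 2)
```

Note that this counts each axiom line once even if it is cited several times; unfolding the dependency structure into a genuine tree would count repeated uses separately, which is another defensible convention.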

There are also cases where a single great idea leads to a completely new
approach, after which the proof may be easy to complete, whether or not
it has high complexity by the height or leaf-count measures.  For
example, Godel's first incompleteness theorem required two
brilliant new ideas (arithmetization of syntax plus diagonalization),
after which everything goes easily except for one technical obstacle
(Chinese Remainder theorem to represent sequences).  His second
incompleteness theorem required one brilliant idea (reflect on the proof
of the first incompleteness theorem to identify an unprovable
statement), after which everything goes easily (though if you want the
modern version there is a technical obstacle, solved by Rosser's trick
to get consistency rather than omega-consistency).

Some other examples would be very welcome.

-- Joe Shipman
