[FOM] Mathematical knowledge management, type theory, and automatic proof

Jeremy Bem jeremy1 at gmail.com
Sun Oct 24 22:14:01 EDT 2010


This is a reply to the 2008 paper "A Language for Mathematical
Knowledge Management" by Steven Kieffer, Jeremy Avigad, and Harvey
Friedman. That paper contains a description of a language, PST
("practical set theory"), proposed for use whenever one needs to
express a mathematical proposition for the purpose of rigorous
communication. Ultimately it is based on ZFC. The representation of
proofs is not addressed.

As someone steeped in type theory, I found it a fascinating read. Type
theory has been rather successful in formalizing mathematics (see, for
example, the performance of HOL Light on Freek Wiedijk's "100
Theorems" web page). This paper was clearly related to that endeavor.
However, the issues it took up, while clearly interesting and valid,
seemed almost orthogonal to the ones that are commonly addressed by
type-theoretic proof assistants.

For example, PST provides facilities for handling undefinedness as
well as a natural language syntax. The former, especially, plays a
central role in the presentation. But generally speaking, today's
proof assistants do not offer these facilities. They tend to use
workarounds such as default values and carefully chosen identifier
names.

Meanwhile, PST seems to have little use for the sine qua non of type
theory: that every term ought to be associated with a specific type
(read "set" or "class") of which it is a member. Given that
practicality is a stated goal, one might question this simply by
noting that mathematical objects are not, in practice, uniformly
conceptualized as sets. Should a PST-parser really accept "x \in 5" as
a valid formula without even a warning that it is likely a bug?
Similarly, PST offers syntactic support for function application, but
"3(5)" is not treated as an error of any kind. However, this is
probably a familiar
line of argument to many FOM'ers. I'd like to present a more unusual
reason to consider types.

Type theory and set theory, despite their names, are compatible and
arguably treat the same subject matter. PST offers several constructs
that are reminiscent of type theory, including syntactic support for
bounded quantifiers, such as "FORALL x \in s". If one were to use this
construct as often as possible, then many variables could be viewed as
having types. Moreover, the remaining variables could be viewed as
having type "set".  Looking over the PST examples, I suspect that
bounded quantifiers are the rule rather than the exception.
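
For instance, in a statement such as the Archimedean property (an
illustration of my own, not taken from the paper):

    FORALL x \in R, EXISTS n \in N, x < n

the variables x and n could be viewed as having types R and N
respectively, while a variable bound by a plain "FORALL s" would
receive the type "set".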

One could then identify a family of "well-typed" propositions: those
in which every term can be assigned a type, in such a way that the
assignment respects all term-forming operations. In PST these
operations include not only the application of defined function
symbols, but also lambda expressions, function application for sets,
tuple formation, and so forth. I won't attempt a fully rigorous
definition of well-typedness here, but here are some of its
properties:

(1) Defined function symbols would carry expectations about the types
of their arguments, together with a rule for computing the result
type. These data would be derived in a straightforward way from the
definition.

(2) To accommodate sets-as-functions, one would introduce a 2-ary
function symbol "->" such that X -> Y is interpreted as the set of all
functions from X to Y, to be used for example as the type of a lambda
expression. (I'm glossing over the possibility of
sets-as-partial-functions, but there are various approaches.)

(3) Relation symbols would also carry expectations about the types of
their arguments. In particular, an application of set membership would
be considered well-typed only when its arguments have type "set", and
equality only when its arguments have the same type.

In all cases, the types would be easy to compute syntactically. In
fact, what I've described could be seen as a form of the
Hindley-Milner type system, which even comes with a possibly useful
type inference algorithm.
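
To illustrate (with declarations of my own choosing, not taken from
the paper): suppose the defined function symbol "exp" is declared
with type R -> R, and "x" is bound by "FORALL x \in R". Types are
then computed bottom-up:

    x                        has type R         (from its binder)
    exp(x)                   has type R         (by (1))
    LAMBDA y \in R . exp(y)  has type R -> R    (by (2))
    x \in 5                  ill-typed          (by (3): "5" does not
                                                 have type "set")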

I hope it can be agreed that the majority of mathematical knowledge
is representable in a well-typed fashion. I do not claim, however,
that it is never useful or necessary to consider propositions that
are not well-typed: to develop the ordinals in a natural way, for
example, or to prove the Borel determinacy theorem using the axiom
of replacement, it may well be necessary to do so.

OK, so one can identify a family of PST-propositions as "well-typed",
and some users of a hypothetical PST-tool might endeavor to use them
when possible. But for those who are comfortable with ZFC, and do not
require their proof assistant to flag type incongruities as possible
errors, is there really a compelling reason to do so? Until recently,
I might have reluctantly conceded that there isn't.

It seems reasonable to suppose, however, that the user of a
mathematical knowledge management tool occasionally wishes to prove
something. Of course, developing fully explicit formal proofs can be
tedious. For this reason, proof assistants of all stripes have
explored the possibility of providing interfaces to first-order
automated theorem provers, to be used for establishing the validity of
relatively small steps. These provers, such as "E"
(http://eprover.org/), are mature and powerful tools. They tend to be
based on resolution. First-order proof techniques have been essential
to the success of higher-order proof assistants.

First-order logicians especially ought to be attracted to the
possibility of using such provers. In principle, one can take a goal
expressed in the language of set theory, together with whatever facts
its validity depends upon (such as set-theoretic axioms), and present
these data to a first-order prover in the hope that a proof will be
found.
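
As a toy illustration (with axioms and symbol names of my own
choosing): to establish that unordered pairs are commutative, one
might present a prover such as E with

    axiom:  FORALL x, FORALL a, FORALL b,
              x \in pair(a, b) IFF (x = a OR x = b)
    axiom:  FORALL s, FORALL t,
              (FORALL x, x \in s IFF x \in t) IMPLIES s = t
    goal:   FORALL a, FORALL b, pair(a, b) = pair(b, a)

and one would expect a resolution-based prover to dispatch this
immediately.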

Success is absolutely possible but depends on many factors. In the
context of a highly sugared representation such as PST, the one I'd
like to highlight is: how does one translate the syntactic extensions
back into a pure first-order representation that the prover can
accept? Even more specifically, I'd like to postpone issues like the
undefinedness facility for another day, and focus on the bounded
quantifiers.

How does one translate bounded quantifiers into pure first-order
logic? This isn't answered explicitly in the paper, perhaps because
the answer is arguably so standard: the construct "FORALL x \in s,
\phi" expands to "FORALL x, x \in s IMPLES \phi" whereas "EXISTS x \in
s, \phi" expands to "EXISTS x, x \in s AND \phi".

However, this is not the translation that is used in practice by
higher-order proof assistants!  Moreover, my own experiments suggest
that it leads to serious and unnecessary inefficiencies when working
with well-typed propositions.

The translations that are used in practice are due to Joe Hurd:
"First-order proof tactics in higher-order logic theorem provers"
(2003). For the purposes of this essay, only the "fully typed"
translation from that paper is relevant.

The main idea is to introduce a 2-ary function symbol ":" whose
arguments are a term and its type. Rather than translating the type of
each variable immediately after binding it, we instead translate the
type of every single term by replacing it with an appropriate
application of ":"!  This leads to large terms, and it was initially
counterintuitive to me that it could outperform the naive translation.
However, it interacts beautifully with the term indexing and
unification strategies that are employed by first-order provers.
Unification extends to types and even plays well with the possibility
of allowing variables in types (also known as parametric
polymorphism).
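
Applied to the same example as above (my own, as before), Hurd's
fully typed translation instead yields, roughly,

    FORALL x, EXISTS y, (f(x : R) : R) < (y : R)

where the quantifiers have become unbounded and every term,
including the compound term f(x), carries its type.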

The semantics of ":" have not been widely discussed. However, one way
to see that Hurd's translation is sound is to assign "x : s" the
following interpretation:

IF x \in s THEN
  return x
ELSE IF s is non-empty THEN
  return an arbitrary element of s
ELSE
  return an arbitrary set

In this way, ":" can be seen as a conservative extension of ZFC.
Moreover, with these semantics, it is not hard to see that Hurd's
translation is equivalent to the naive translation. There are two
cases, corresponding to the two quantifiers. It matters here that
types are presumed inhabited.
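
To sketch the universal case: the two translations of "FORALL x \in
s, \phi" are, roughly,

    naive:  FORALL x, x \in s IMPLIES \phi
    Hurd:   FORALL x, \phi[x := x : s]

If \phi holds of every member of s, then it holds of x : s, which by
the above semantics is always a member of s when s is non-empty;
conversely, every member x of s satisfies x : s = x. Inhabitedness
is used in the first direction.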

As far as I am aware, only Hurd's translation is used in practice by
higher-order proof assistants. My own experience is that with Hurd's
translation, the performance of a first-order prover matches my
expectations in terms of what ought to be trivial, whereas with the
naive translation, it underperforms my expectations, forcing me to
explicitly prove subgoals that should be automatic. This leads me to
envision a ZFC-based proof assistant that would use Hurd's translation
with entirely well-typed goals, resorting to the naive translation
only when type incongruities are present.

Should these experiences of the type-theoretic community prove robust,
they would seem to provide an incentive to keep to well-typed
propositions as much as possible, even if ZFC is to be the ultimate
foundation.

REFERENCES / LINKS

S. Kieffer, J. Avigad, and H. Friedman. "A Language for Mathematical
Knowledge Management", 2008.
http://www.hss.cmu.edu/philosophy/techreports/181_Avigad.pdf

See also: S. Kieffer. "A Language for Mathematical Knowledge Management", 2007.
http://www.andrew.cmu.edu/user/avigad/Students/kieffer.pdf

F. Wiedijk. "Formalizing 100 Theorems" (web page).
http://www.cs.ru.nl/~freek/100/

J. Hurd. "First-order proof tactics in higher-order logic theorem
provers", 2003.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.7239&rep=rep1&type=pdf


