# [FOM] Concerning definition of formulas

Vaughan Pratt pratt at cs.stanford.edu
Mon Oct 8 00:20:16 EDT 2007

```
Arnon Avron wrote:
> One can do what is
> actually done in many textbooks: define wffs as  finite sequences
> of certain symbols which satisfy certain simple conditions.

Those conditions, simple though they may seem, such as balancing the
parentheses or ending a (forward or reverse) Polish formula with the
stack as empty as it began, invariably entail that the set of wffs forms
a context-free language that is not regular.  Do you have a noninductive
way of specifying a context-free language?  And if so can you do so as
succinctly as my definition of a wff in terms of a chain E <= Q <= V of
variables, a set P of atoms, a set M (I typed m but meant M) of
disjuncts, and a linear ordering of Q?
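As a concrete illustration of the stack condition mentioned above (a minimal sketch of my own; the particular connectives and the choice of reverse Polish notation are illustrative assumptions, not part of Pratt's definition):

```python
# Hypothetical connective set for illustration; atoms are single letters.
ARITY = {"&": 2, "|": 2, "~": 1}

def is_rpn_wff(s: str) -> bool:
    """A reverse Polish string is a wff iff, scanning left to right,
    every k-ary symbol finds k operands on the stack, and exactly one
    value remains at the end (the stack 'as empty as it began')."""
    depth = 0
    for c in s:
        k = ARITY.get(c, 0)       # atoms have arity 0
        if depth < k:
            return False          # operand underflow mid-scan
        depth = depth - k + 1     # pop k operands, push one result
    return depth == 1
```

The acceptance condition tracks an unbounded counter, which is exactly why this language is context-free but not regular: no finite-state device can verify it.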

The additional requirement that the arities of the symbols be used
consistently, as is done in my definition, takes one beyond context-free
languages, albeit still within the context-sensitive languages.  Aho's
intermediate notion of an indexed language (JACM 1968) is adequate for
managing symbol tables.  If I recall correctly so are two-way
deterministic pushdown automata, which have the added benefit that their
membership problem is in deterministic linear time (Cook's theorem,
1970, the origin of the Knuth-Morris-Pratt pattern matcher).
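To make the symbol-table point concrete, here is a small sketch (my own, under assumptions about the term syntax; nothing here is taken from Pratt's definition) of checking that every symbol in a fully parenthesized term is used with one consistent arity, the requirement that pushes the language beyond context-free:

```python
import re

def arity_consistent(term: str) -> bool:
    """Parse a fully parenthesized term like 'f(x,g(x,y))' and check
    that each symbol is used with one fixed arity throughout, using a
    symbol table built on the fly."""
    table = {}

    def parse(s, i):
        m = re.match(r"[A-Za-z]\w*", s[i:])
        if not m:
            raise ValueError("symbol expected")
        name, i = m.group(), i + m.end()
        arity = 0
        if i < len(s) and s[i] == "(":
            i += 1
            while True:
                i = parse(s, i)
                arity += 1
                if s[i] == ",":
                    i += 1
                    continue
                if s[i] == ")":
                    i += 1
                    break
        if table.setdefault(name, arity) != arity:
            raise ValueError("inconsistent arity for " + name)
        return i

    try:
        return parse(term, 0) == len(term)
    except (ValueError, IndexError):
        return False
```

The dictionary `table` is the symbol table: its unbounded contents are what a pushdown automaton alone cannot maintain, while indexed languages and two-way deterministic pushdown automata can.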

> the finiteness condition. However, if you do so then what you will
> be defining might be an interesting concept, but not the one
> you were set out to define.

In what sense is it not the one I set out to define?  If you start with
a definition of vector space that restricts to finite dimension, and
later drop finiteness, are the finite-dimensional vector spaces any
different now?  All you have done is opened the door to additional
objects that the user may or may not care about, with two benefits: the
difficult concept of "finite" no longer features in the definition
(simpler), and the broader definition may serve new customers (more
general).  Most users of linear algebra don't care whether the subject
is defined for all dimensions or only finite dimensions because nobody
expects to be dropped suddenly into infinite-dimensional space when
they're accustomed to dealing with at most a few thousand dimensions.
Physicists and systems analysts on the other hand insist on working in
separable Hilbert space.  Why should wffs be any different?

I started my definition with the assumption that everything was finite,
not because it was essential but to avoid the distraction of having to
think about the meaning of my definition in the infinite case.  The one
problem I foresaw with infinite wffs was the need for an infinitary
Herbrand universe, whose consequences I don't yet understand but which
may well turn out to be no more problematic than the finitary Herbrand
universes obtained with the restriction of < to finite height (which
gives each variable of E only finitely many variables of Q\E to depend
on).  Complete semilattices, sigma-algebras, complete atomic Boolean
algebras, and compact Hausdorff spaces all form varieties with
infinitary operations (in fact for all but sigma-algebras the signature
is a proper class) whence there is no immediately obvious objection to
infinitary Herbrand algebras.  However it's worth working out the
details before making any promises.  I bet there are people on this list
for whom this is more their cup of tea than mine.
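For readers who want the finitary case in front of them, the usual stage-by-stage construction of a Herbrand universe can be sketched as follows (the signature is a hypothetical example of mine, not drawn from the post):

```python
from itertools import product

# Hypothetical signature: one constant 'a', a unary 's', a binary 'f'.
SIGNATURE = {"a": 0, "s": 1, "f": 2}

def herbrand_stage(n):
    """Return the set of ground terms of height at most n, i.e. the
    n-th stage of the inductive construction of the Herbrand universe."""
    terms = {sym for sym, ar in SIGNATURE.items() if ar == 0}
    for _ in range(n):
        new = set(terms)
        for sym, ar in SIGNATURE.items():
            if ar > 0:
                for args in product(terms, repeat=ar):
                    new.add("%s(%s)" % (sym, ",".join(args)))
        terms = new
    return terms
```

The finitary universe is the union of these stages; the open question raised above is what replaces this bottom-up iteration when the operations, like those of complete semilattices or complete atomic Boolean algebras, are infinitary.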

>
>> are those in prenex normal form with a full DNF matrix, sufficient for
>> all definitional purposes in bootstrapping up from naive set theory.
>
> My reservation here is less important than the previous one,
> but I would like to note that things are not that simple.
> One of the main thing one does with
> formulas is to reason with them, but the formulation of even a
> basic rule like Modus Ponens is rather difficult if one
> uses only wffs in prenex normal form. In fact, it seems to me
> that for doing this in an acceptable way one has to identify
> wffs which are not identical according to your definition of "wff".

Three comments.  First, I thought of Note 6 after sending my previous
message:

6.  Propositional calculus arises as the case V = A = 0 (\emptyset) for
all symbols (s,A) in L.  Every binding is then of type b: 0 --> 0 and
therefore uniquely determined, and the atoms are then simply
propositional variables, one per letter of the alphabet S used in W.

In that case theoremhood reduces to the requirement that the matrix M
be not just any set of disjuncts but the set of all disjuncts.
This of course doesn't work for monadic and higher arities.
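Under my reading of the representation above (each disjunct taken as a subset D of the atoms P, with atoms in D occurring positively and those in P\D negated), the propositional theoremhood test can be sketched directly; the encoding is an assumption of mine, not spelled out in the post:

```python
from itertools import combinations

def is_tautology(P, M):
    """A full-DNF formula over atoms P with matrix M is a tautology
    iff M contains all 2^|P| disjuncts, since each truth assignment
    satisfies exactly one disjunct (the set of atoms it makes true)."""
    all_disjuncts = {frozenset(c)
                     for r in range(len(P) + 1)
                     for c in combinations(sorted(P), r)}
    return {frozenset(D) for D in M} == all_disjuncts

# Example: p OR NOT p over P = {p} has matrix {{p}, {}}.
```

The check is a set comparison with no proof search at all, which is what makes the full-DNF case special; as noted, nothing this simple survives at monadic and higher arities.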

Second, the idea was to allow the user to put a stake in the ground to
define things with wffs, not to reason with them.  Once the rest of the
machinery is set up one can go back to the original wffs and confirm
that they indeed constrained things as promised.

Third, just because prenex normal form gives your favorite Hilbert
system conniptions is no reason to infer that it breaks all systems.  If
it seems philosophically suspect somehow not to be allowed to reason
with the initial definitions, simply dualize everything in my definition
of wff --- call E the universally quantified variables and Q\E the
existential, interpret each (former) disjunct D as the set of negative
literals of a clause and P\D as the positive, regard D now as a conjunct
formed as a disjunction of literals, and call the matrix M a set of
conjuncts, interpreted as their conjunction.  That negates my original
wff, and now everything is ready to begin a resolution proof chaining
backwards from the allegedly unsatisfiable negated formula to the empty
clause.
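In the propositional case the procedure just described can be sketched in a few lines (my own illustration of textbook resolution, not Pratt's machinery; clauses are represented as pairs of sets of positive and negative atoms):

```python
from itertools import combinations

def resolve(c1, c2):
    """All non-tautological resolvents of two clauses, each clause a
    pair (positive atoms, negative atoms) of frozensets."""
    out = set()
    for a, b in ((c1, c2), (c2, c1)):
        for x in a[0] & b[1]:              # x positive in a, negative in b
            pos = (a[0] - {x}) | b[0]
            neg = a[1] | (b[1] - {x})
            if not (pos & neg):            # drop tautological resolvents
                out.add((frozenset(pos), frozenset(neg)))
    return out

def refutes(clauses):
    """Saturate under resolution; True iff the empty clause is derived,
    i.e. the clause set is unsatisfiable."""
    clauses = {(frozenset(p), frozenset(n)) for p, n in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if r == (frozenset(), frozenset()):
                    return True
                if r not in clauses:
                    new.add(r)
        if not new:
            return False
        clauses |= new
```

Saturation terminates because only finitely many clauses exist over a finite set of atoms; deriving the empty clause is the "chaining backwards" endpoint of the refutation.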

Induction is the logician's hammer, making every problem look like a
recursive nail.  A greater variety of tools reveals a greater variety of
fasteners to pin down one's foundations.  In a subsequent message (this
one is already too long for a reasonable FOM posting) I'll say a bit
about topological and analytical tools suitable for managing the sorts
of proof structures that arise in the course of a resolution proof---or
for that matter a linear logic proof---without resorting to recursive
definitions.  Linear logic is all about the elimination of the
bureaucracy of recursion and iteration in proofs.

Vaughan
```