FOM: iterative conception of set: are sets natural for everything?

Vaughan Pratt pratt at cs.Stanford.EDU
Sat Feb 28 16:34:38 EST 1998

From: Charles Silver <csilver at>
>I think the fact that this question can be raised shows that not
>any set theory can be claimed to be foundationally superior to, say, CAT,
>in being firmly based on an underlying conception.  One can *say* that set
>theory is based on "collection", but not have it turn out that the
>resultant set theory is a satisfactory explication of this concept.  To my
>mind, this raises further questions about how to judge whether an
>intuitive conception is successfully embodied within a technical

I interpret Charlie to be questioning how far one can push the notion
of set as a universal representation while retaining the naturality that
motivated it in the first place.  This is a very good question.

When a set theorist walks up to a dyed-in-the-wool structuralist
and tells him that the real line is the set of its points, expect a
heated argument.  But structuralists are notoriously touchy on this
subject, and your average street nerd is much less likely to complain.
Some objects are naturally understood as sets.

Where the set theorist has to talk faster is in making the case
that each individual point on that line is a set.  Certainly you can
*represent* a real x as a set, e.g. the set of reals less than x, or the
set of rationals less than x, or the set of reals with finite decimal
expansions less than x.  But the many available representations, combined
with the lack of any obvious canonical one among them, cast serious doubt
on the appropriateness of "is" in the phrase "a real number is a set."  If you
and I represent pi with different sets, yet agree on every mathematical
theorem involving pi, clearly we cannot have *identified* pi with both
of those sets.
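This representational multiplicity is easy to make concrete.  The following
Python sketch (entirely illustrative; the function and variable names are
mine, and finite grids stand in for the infinite sets) shows two different
set-style encodings of the same real that are distinct as sets yet agree on
every comparison they can resolve:

```python
from fractions import Fraction

def cut(x, d):
    """Finite stand-in for the Dedekind cut {q rational : q < x},
    sampled on the grid of fractions k/d with k/d in [-4, 4].
    (A true cut is infinite; this is illustrative only.)"""
    return {Fraction(k, d) for k in range(-4 * d, 4 * d + 1)
            if Fraction(k, d) < x}

half = Fraction(1, 2)
eighths = cut(half, 8)    # one set-style representation of 1/2
tenths  = cut(half, 10)   # another representation of the same real

# Different sets...
assert eighths != tenths
# ...yet they answer every grid-resolvable comparison identically:
# a rational q lying on both grids is in one cut iff it is in the other.
common = [Fraction(m, 2) for m in range(-8, 2)]   # ..., -1/2, 0, 1/2
assert all((q in eighths) == (q in tenths) for q in common)
```

Since the two sets disagree on membership (3/8 is in one and not the other)
while inducing the same order-theoretic behavior, neither can be what the
real number 1/2 *is*.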

Going in the other direction, how can one reasonably view as a set the
process of *gradually* stretching the real line until all its points are
twice as far apart?  If you perform the stretch discretely, in one step,
it is reasonable to identify the stretch with its graph {(x,2x)|x in R}.
But modeling the intuitively natural process of *gradual* stretching as
a set requires both a baroquely complex setup *and* a commitment to the
rate and uniformity of the stretch.
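The one-step stretch is indeed trivial to render as a set: its graph records
input-output pairs and nothing more.  A minimal Python sketch (a finite
sample of points stands in for R):

```python
# The discrete, one-step stretch identified with its graph {(x, 2x)}.
# A finite sample of points stands in for the real line R.
sample = [-2.0, -0.5, 0.0, 1.0, 3.5]
graph = {(x, 2 * x) for x in sample}

# The graph is a bare set of pairs: it records where each point ends
# up and nothing else.  Any intermediate state of a *gradual* stretch
# is simply absent from this representation.
assert (1.0, 2.0) in graph
assert all(y == 2 * x for (x, y) in graph)
```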

In contrast, the representation of scaling up the real line by a factor
of two as a morphism  R ---> R  is ideally matched to the intuition.
The very arrow itself represents gradual stretching graphically.  (If you
see every morphism as a discrete transformation you are thinking like a
set theorist and not a category theorist.)  Such a stretch is discrete
only if for every pair R ---> X ---> R of morphisms whose composition
is this stretch, one of that pair is the identity on R.  Otherwise the
stretch factors nontrivially.  Scaling R by two factors in many ways,
including one for every rate of continuous stretching however nonuniform.
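The factorization claim can be sketched concretely.  In the following Python
fragment (my own parametrization, purely illustrative) the doubling map
factors through a nontrivial intermediate scaling, one such factorization
for every intermediate scale a stretching schedule might pass through:

```python
import math

def double(x):
    return 2 * x

def factor_at(s):
    """Factor the doubling map R -> R through an intermediate scaling
    by s, giving R --g--> R --h--> R with h o g = double.
    For 1 < s < 2 neither factor is the identity."""
    g = lambda x: s * x
    h = lambda x: (2 / s) * x
    return g, h

# E.g. uniform continuous stretching reaches scale sqrt(2) at its
# midpoint; factoring there splits the stretch into two equal halves.
g, h = factor_at(math.sqrt(2))
for x in [-1.0, 0.0, 2.5]:
    assert math.isclose(h(g(x)), double(x))
```

A nonuniform schedule simply factors at a different intermediate scale;
composition is what ties all these refinements back to the one morphism.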

Morphism is the natural abstraction of process.  On the one hand the
morphism itself makes no commitment to whether that process is performed
discretely, continuously, steadily, jerkily, accelerating, etc.  On the
other hand the notion of composition creates an immediate and simple
connection between the basic motion itself and these refinements of it.
Set theory offers no comparably simple and natural connection.

Collecting and transforming are fundamentally different activities,
and neither is done justice by the natural abstractions of the other.

But for this reason I regard *neither* on its own as the optimal
foundation for mathematics.

Computer scientists for whom flow, process, transformation, and function
are fundamental notions have been very receptive to other foundations
than set theory.  In fact there are hardly any programming languages
that start from sets and implement their remaining abstractions on top
of them.  The preferred fundamental abstractions in computer science have
traditionally been arrays, strings, and linked lists, and more recently
records or structs, which occupy positions in the mathematical universe
somewhere midway between sets and categories.
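A minimal sketch of those primitives taken at face value (Python; the names
are mine, purely illustrative): a record type and a linked list defined
directly, with no reduction to sets anywhere in sight.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A linked-list cell: a record with a value field and a
    successor pointer -- treated as primitive, not as a set of sets."""
    value: int
    next: "Optional[Node]" = None

def to_list(head):
    """Traverse the links in order, collecting the values."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

chain = Node(1, Node(2, Node(3)))
assert to_list(chain) == [1, 2, 3]
```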

Friedman and Simpson's insistence notwithstanding, the intellectual
debt owed by arrays, strings, linked lists, and records to sets is no
stronger than that to categories.  Most computer scientists view their
fundamental data structures as self-evident notions that are adequately
primitive in their own right, and see no advantage to reducing them to
sets, however one might go about it.

My claim that sets are not fundamental for computer science should not
be construed as saying that they are not useful.  Sets play a central
role in databases, not as a fundamental structure however but at a
relatively high level in the "food chain", namely in the definition
of a relation as a set of records all of the same type, the number
of whose fields or attributes constitutes the arity of that relation.
And one can find sets in many other places in CS.  They do not however
play the fundamental role for CS that FOM assigns to them for mathematics.
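The database usage is easy to make concrete.  A Python sketch (table and
field names are mine, purely illustrative): a relation as a set of records
all of one type, its arity the number of that type's fields.

```python
from collections import namedtuple

# A record type with two fields; the relation's arity is therefore 2.
Enrolled = namedtuple("Enrolled", ["student", "course"])

# A relation: a *set* of records, all of the same type.
enrolled = {
    Enrolled("alice", "cs101"),
    Enrolled("bob", "cs101"),
    Enrolled("alice", "math200"),
}

assert len(Enrolled._fields) == 2                      # arity
assert all(isinstance(r, Enrolled) for r in enrolled)  # same type
# Sets enter here, high in the food chain: duplicates collapse
# and membership is the basic query.
assert Enrolled("bob", "cs101") in enrolled
```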

Categorical foundations for CS have a following, not a huge one but
certainly no smaller than the following for set-theoretic foundations
for CS.  In fact it is considerably larger if one goes by number of
papers and conferences devoted to each.

The very first Principles of Programming Languages (POPL) conference,
in 1973 in Boston, featured a paper "Types are not sets" by Jim Morris,
whose recent research focus at Berkeley had been the implementation
of an operating system, although his 1968 MIT thesis had been on the
lambda calculus.  (He's the Morris in the Knuth-Morris-Pratt linear-time
pattern matcher.)  The following speaker, Barry Rosen, who was as much
a theorist as Jim was a systems programmer, had prepared his talk on
the premise that types were sets (what else could Type Int possibly be
but the set of integers?), so you can imagine the audience's merriment.
The program committee's ordering may have been a little unkind to Barry.

Vaughan Pratt
