FOM: Atiyah's Bakerian lecture

Lou van den Dries vddries at math.uiuc.edu
Fri Oct 17 00:21:25 EDT 1997


1.  Let me follow Steve in accentuating the positive below. But not
before having given a brief resume of my viewpoint on FOM (in the sense
of Simpson):

FOM will only occasionally play an *active* role in mathematics, namely
when basic notions and ways of reasoning about them are in the process
of changing, as happened a century ago. And I see no reason why current
FOM-specialists would be better equipped to deal with such a
situation (when it occurs) than mathematicians with a broad outlook
and familiarity with the trouble at hand. On the contrary. This is
confirmed by what happened in more recent times when subareas of
mathematics did undergo an overhaul of their foundations, as shown by 
the examples of probability theory (Kolmogorov), and algebraic
geometry (Zariski, Weil, Grothendieck). However, this activity, which
concerns "only" subfields of mathematics, is not the "lofty" kind of
FOM championed by Harvey and Steve. (The influence of such overhauls
has nevertheless been extensive and deep.) Also, I don't see that "state of
the art FOM" (like reverse math) has anything to say about applied
mathematics, and other ways in which mathematics relates to the rest
of the world, notwithstanding the claims we keep hearing.

2.  I emphatically view the role of *logic* in mathematics as going far 
beyond its role in FOM. There are a number of possibilities for
significant interaction between logic and other parts of mathematics,
and of course some of this potential has already been realized. 
I don't principally have in mind the obvious things, like independence
results, the unsolvability of word problems for groups, Hilbert's 10th
problem, where (negative) solutions necessarily had to rely on precise
notions of "proof" and "algorithm" as developed by mathematical
logicians. (Important as these things are, and fully recognizing the
many positive aspects of the negative solutions.) 

3. Below I try to sketch a certain way of looking at a (positive)
interaction of logic and various parts of pure mathematics which I do
find agreeable and productive. Nothing in what I will say is
particularly new; it's just a (very incomplete) articulation of views
that are probably held by many in the model-theoretic community, and 
perhaps beyond. I have held them for about 25 years, without ever
expressing them in public, so perhaps it's time. (And I fully
recognize other positive interactions, like non-standard analysis, and
the work of the Kechris school, which I won't go into here.)

4.  Logic made effective the old idea of Leibniz that statements *about* 
mathematical objects can themselves be regarded and manipulated as 
mathematical (essentially combinatorial) objects. Just as topologists
attach to a space various discrete and algebraic invariants such as
homotopy and (co)homology groups, logicians attach
to a mathematical object various "theories", each theory being the
collection of statements in some given language that are true about that
object (an ultrafilter in a certain boolean algebra, if you
like). These "theories" are themselves mathematical objects of a
nature in principle quite *different* from the objects they are describing. 
(Just like in the analogy with topology.) This "difference" is crucial.
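
To fix notation (M, N and L here stand for generic structures and a
generic language): the "theory" attached to M in L is

   \mathrm{Th}_L(M) = \{ \sigma : \sigma \text{ an } L\text{-sentence with } M \models \sigma \},

and M and N are called *elementarily equivalent*, written M \equiv N,
exactly when \mathrm{Th}_L(M) = \mathrm{Th}_L(N).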

5. KEY POINT (overlooked by many logicians and popularizers of logic):
For this to be an effective strategy for shedding light on the original
structure, the language had better not be too rich or expressive, on
pain of producing too complicated a theory to be of any use.
 
  And that seems to me one of the lessons of Goedel's Incompleteness
Theorem: if in some way your object contains a certain minimal
amount of discrete arithmetic structure AND your language allows one to
express that fact, then its theory in that language reflects too much of the
object itself and is---as all logicians know in exhaustive
detail---extremely complicated; in fact, this theory is then beyond 
any kind of effective description and unlikely to play a role in
creating "effective, positive understanding" of the object it was
supposed to describe. (Of course, knowing that the Goedel phenomena
apply to a certain mathematical object as described in a certain
language is itself valuable knowledge, but it is here interpreted as
a negative: the language, acting here as a kind of binoculars,
is too strong and prevents one from seeing the forest beyond the trees.)
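
One standard way of making this precise (again with M and L generic,
and stated for interpretations without parameters): if
(\mathbb{N}, +, \cdot) is interpretable in M without parameters, then

   \mathrm{Th}(\mathbb{N}, +, \cdot) \text{ is computably reducible to } \mathrm{Th}_L(M),

so \mathrm{Th}_L(M) is undecidable; and since
\mathrm{Th}(\mathbb{N}, +, \cdot) is not even arithmetically definable
(Tarski), such a theory indeed admits no effective description in the
above sense.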

For similar reasons, so-called second-order languages and most of the
"infinitary" languages have very limited use *in this connection*: too
expressive, and so on. Of course this is an oversimplification, and it
has not stopped people from writing 1000-page treatises on exactly
those languages and their logics. But by and large the whole
accumulated experience of model theory has shown that it is sensible
to restrict oneself to objects as described in first-order languages
in which the Goedel phenomena do not manifest themselves.  
(Similarly, in topology, at a certain stage of its
development, one does not consider completely arbitrary spaces,
but introduces sensible restrictions or extra structure, like being a
manifold, or a CW-complex, etc., when such restrictions are satisfied
for many spaces of interest, and lead to a coherent body of knowledge.) 

6.  The still widely popular but deeply mistaken view that by "staying
below the level where the Goedel phenomena manifest themselves"
there is not much scope left for logic or the axiomatic method has been
thoroughly discredited by events. This began roughly when Tarski proved
his theorem on the field of real numbers (which implies the Tarski-
Seidenberg projection property). By the way, Tarski narrowly
interpreted his result in the logical tradition as a decision
procedure, which I think rather missed the point.    
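
For reference, the theorem in question, in modern terms:

   \mathrm{Th}(\mathbb{R}, +, \cdot, <, 0, 1) \text{ admits elimination of quantifiers and is decidable,}

so every definable subset of \mathbb{R}^n is semialgebraic; in
particular, the projection of a semialgebraic subset of
\mathbb{R}^{n+1} to \mathbb{R}^n is again semialgebraic (the
Tarski-Seidenberg projection property).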

  In fact, for a surprisingly large variety of mathematical structures the
act of attaching to them their first-order (or elementary) theory
escapes the Goedel limitation, and has turned into a viable strategy
for understanding these structures. An easy example is the fact that two
algebraically closed fields are *elementarily equivalent*, i.e., have
the same elementary theory in the language of rings, if and only if
they have the same characteristic. This explains many instances of the
Lefschetz principle, and of the equivalence between
"characteristic 0" and "characteristic p for infinitely many (or
almost all) p". In the fifties Abraham Robinson (and others) found
several other interesting possibilities of this kind, and introduced
some useful notions like model completeness and some neat tricks that 
help in unraveling first-order theories. 
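
In symbols, the transfer being used here: since the theory of
algebraically closed fields of a fixed characteristic is complete, a
sentence \sigma in the language of rings satisfies

   \mathbb{C} \models \sigma
   \iff \overline{\mathbb{F}_p} \models \sigma \text{ for all sufficiently large primes } p
   \iff \overline{\mathbb{F}_p} \models \sigma \text{ for infinitely many primes } p,

the equivalences coming from completeness of ACF_p plus compactness.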
   In the Ax-Kochen work in the 60's this was taken much further, and led
to the "for almost all p" solution of several open questions of p-adic number
theory. (They showed that the elementary theory of a henselian valued
field, subject to certain restrictions, is completely determined by the
elementary theories of its residue field and of its value group.)
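
One concrete outcome of that work (the Ax-Kochen theorem on Artin's
conjecture, quoted here only as an illustration of the "for almost all
p" form): for every degree d there is a finite set of primes P(d) such
that for all primes p \notin P(d),

   \text{every homogeneous } f \in \mathbb{Q}_p[x_1, \dots, x_n] \text{ of degree } d \text{ with } n > d^2
   \text{ has a nontrivial zero in } \mathbb{Q}_p^{\,n}.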

7. More recently we have learnt to associate to a structure not only its
elementary theory, but also its category of definable sets and maps,
where "definable" means "definable in the structure by a formula in
the language considered". A couple of remarks are in order here:

a) Many of the techniques that were developed to understand
elementary theories (model completeness, QE, ultraproducts, saturation) 
are also crucial in accomplishing the more ambitious task of understanding
these categories in reasonable detail. In addition, the enormous
technical development of pure model theory that began in the 60's
with Morley and Baldwin-Lachlan, and was taken up foremost by Shelah,
is now being brought to bear on this issue. One could write
a long essay on this latter interaction, the coming together of pure and
applied strands of model theory: this is what has been happening in
the last decade, and is what I would like to understand better
myself. (It happened against expectations I had in the early 80's.)
  One remarkably fruitful kind of question is what one can say
about the group objects in such a category of definable sets and maps.
A lot of work by Zil'ber, Hrushovski, and Pillay deals with that.
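
(To spell out the terminology: a set X \subseteq M^n is *definable*
in the structure M if

   X = \{ a \in M^n : M \models \varphi(a, b) \}

for some formula \varphi(x, y) of the language and some tuple b of
parameters from M; a map between definable sets is definable when its
graph is. These are the objects and morphisms of the category meant
above.)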

b) This enlargement of the class of logical objects considered
(formulas rather than sentences) has turned out to be crucial for 
applications. Traditionally, logic focuses on 
*sentences* (which denote statements) rather than 
*formulas*  (each of which parametrizes a *whole family* of sentences):
logic tends to see formulas merely as building material
in the construction of sentences. This preoccupation with sentences, 
so evident in FOM-related logic, is one of the worst addictions
to which narrowly educated logicians are prone.
   The way to understand things, in mathematics as elsewhere, is, 
remarkably often, to embed a particular situation into a 
"continuously varying" family of similar situations, to vary the 
parameters, so to speak, and see what happens. (This is even true for
frogs, who can't catch a fly until it moves.) Model theory has
developed into the subject where one studies systematically the
logical aspects of "variation of parameters". Thus the various 
model-theoretic notions of dimension, generic points, independence
(going under the horrible name of non-forking), the routine of passing
to bigger structures with the same elementary properties to get 
elbow room, etcetera. Incidentally, while the ideas and techniques are
there, the way model theory is usually expounded leaves a lot to be
desired.


8. One attempts to classify structures not just up to elementary equivalence 
(i.e. by their first-order theory), but also up to "interdefinability"
or "bi-interpretability". Here one considers mathematical structures
described in possibly different languages, so elementary equivalence
wouldn't even make sense. 

A "definition" (or "interpretation") of one structure into another is 
roughly a kind of mapping between the structures that behaves formally
much like a homotopy class of continuous mappings between spaces. 
By considering structures up to interdefinability we try to get beyond
the particular language or primitive relations used to describe these 
structures. Nevertheless "definition" and "interpretation" are
notions that are very much in the spirit of logic. And just as 
properties like connectedness and compactness are preserved under 
continuous maps, there are rather surprising and hidden properties 
preserved under "interpretation", like stability, simplicity, the
non-independence property, etc. Perhaps a bit as with homology
groups, one can attach to a large variety of structures certain
combinatorial geometries, which are remarkably robust kinds of
invariants, in particular invariant under bi-interpretability.
Here we get into the arena of the Zil'ber Principle.
    One has the feeling we are just at the beginning of an enormous
development here, but it is perhaps worth pointing out once more
that in all this we are staying far below the "Goedel horizon":
the "good" properties (stability, simpleness, or o-minimality, etc.) that 
enable these invariants to exist at all, can only hold for structures 
at an immense distance below this horizon. But this does leave
a lot of room for actual and potential applications in mathematics.
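
(For concreteness, one standard formulation of the notion this section
relies on: an interpretation of a structure N in a structure M is
given by a definable set D \subseteq M^k, a definable equivalence
relation E on D, and definable relations on D inducing the primitives
of N, such that

   N \cong D/E \text{ with the induced structure};

M and N are bi-interpretable when there are interpretations in both
directions whose composites are, up to definable isomorphism, the
identity on M and on N.)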
 

9. I hope this shows that there are lessons one can learn from Goedel
and the development of logic in this century other than those of FOM.
The analogies mentioned are of course only intended as illustrations of
this view, not as an attempt to compete with or imitate topology, which is
a much larger, more developed, and older subject than model theory. 

Best regards,
              Lou van den Dries


