Vaughan R. Pratt pratt at cs.Stanford.EDU
Tue Nov 4 23:13:37 EST 1997

Harvey's request for more details about my proposed unification is very
reasonable, albeit a tad aggressively phrased.  I'll undertake to write
a short account of how multiplicative linear logic works and the sense
in which it complements Boolean logic.  (I'll give some of this in this
message, about 40 lines from now.)  Time permitting I'll then extend to
the additives and the exponentials and show how these complement first
order logic.  Since I haven't tried the latter before this will be a
good practice run for me.

I'm not sure what Harvey means by "what it has to do with any
substantive concept of mathematical proof", but if he's challenging me
to exhibit at the outset a complete logic sufficient to prove say
Rolle's theorem, I can't since I'm in the middle of learning this stuff
myself.  This is all going to be bottom-up, developing concepts in
about the same order as a conventional introductory logic text, just in
mirror image (proofs in place of propositions).

>Size does not loom large in the kind of state of the art foundations of
>mathematics that I do.

Not even large cardinals?

>"Everything is a set" is a gross oversimplification
>of the usual f.o.m. viewpoint.

This puts me in mind of the defense counsel who, when the prosecuting
attorney summed up his case with "So, you see, members of the jury, the
defendant simply stabbed the victim to death", jumped up and shouted
"Objection, your honor, that's a gross oversimplification!"

I thought that ZF was the usual f.o.m. viewpoint and that everything in
ZF was a set.

>The mathematicians have a trivial translation of all of mathematics into
>sets that is sufficiently powerful so as to obtain crucial nonderivability
>results. A forced reduction of physics to particles is comparatively very
>crippled.

It is?

>Please document for us, in the clearest possible basic terms, this "same
>underlying machinery."

Well, the machinery is very simple, but if I tell you what it is you
won't be happy and you'll say "that's too simple to do anything."  Let
me stupidly do it anyway.

Fix a set K.  In the case of physics K is the complex numbers, in the
case of classical logic it is 2, for 3-valued logic 3, for fuzzy logic
the unit interval of reals.

An object \A consists of two sets A and X and a function r:AxX->K.
Object \A = (A,r,X) is transformed into \B = (B,s,Y) by a pair (f,g) of
functions satisfying

s(f(a),y) = r(a,g(y))

for all a in A and y in Y.  Composition (f,g)(f',g') is defined by
(ff',g'g), easily seen to satisfy this condition.  This composition is
also easily seen to be associative and to have identity maps satisfying
the condition.  Therefore we have a category C.  The idea for this
construction goes back to George Mackey's 1945 Harvard thesis; it has
had a long history since, toward the latter part of which its objects
came to be called Chu spaces.
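
To make the condition concrete, here is a minimal sketch in Python (mine,
not from the original post): finite spaces over K = {0,1}, the classical
case, with the toy space and all names chosen purely for illustration.

```python
# A finite Chu space over K = {0,1} is a triple (A, r, X) with r a dict
# from A x X to K.  A transform (f, g) : (A,r,X) -> (B,s,Y) must satisfy
# s(f(a), y) == r(a, g(y)) for all a in A, y in Y.

def is_transform(f, g, sp1, sp2):
    A, r, X = sp1
    B, s, Y = sp2
    return all(s[(f[a], y)] == r[(a, g[y])] for a in A for y in Y)

def compose(fg1, fg2):
    """(f,g)(f',g') = (ff', g'g): the f's compose forward, the g's backward."""
    f1, g1 = fg1
    f2, g2 = fg2
    return ({a: f2[f1[a]] for a in f1}, {y: g1[g2[y]] for y in g2})

# A toy space and its identity transform.
A, X = [0, 1], ['p', 'q']
r = {(a, x): (a + X.index(x)) % 2 for a in A for x in X}
sp = (A, r, X)
ident = ({a: a for a in A}, {x: x for x in X})

assert is_transform(*ident, sp, sp)     # identities satisfy the condition
assert compose(ident, ident) == ident   # and composition behaves as stated
```

The backward g component is what makes duality work later on: reversing a
transform means swapping its two halves.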

On these objects one defines operations of dual \A', tensor product
\A@\B (with unit 1), direct sum \A+\B (with unit 0), and "exponential"
(dreadful name) !\A.  For brevity I'll skip the actual definitions (a
few lines, if you must know go to http://boole.stanford.edu/chuguide.html
and take the first paper).  Suffice it to say that @ and + are functors
C^2 -> C, ! is a functor C->C, and ' is the contravariant "op" functor
on C, C^op -> C.  (A functor is a structure preserving map of
categories, defined exactly analogously to group homomorphisms.)

The linear logic of this setup has for its propositions all terms built
by composition (as functors) from these primitive functors and
constants (as objects of C).  For the proofs of linear logic one wants
to say that these consist of all natural transformations between such
functors.  (Given categories D and E, e.g. C^3 and C^2 where C is as
above, a natural transformation from functor F:D->E to G:D->E is a
family of morphisms F(d)->G(d) of E indexed by objects d of D such that
for every morphism f:d->d' of D the evident diagram commutes.)
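
Spelled out, the "evident diagram" is the usual naturality square; in
LaTeX-style notation:

```latex
% For each morphism f : d -> d' of D, naturality of eta : F => G says
%
%     F(d)  --eta_d-->  G(d)
%      |                 |
%     F(f)              G(f)
%      v                 v
%     F(d') -eta_{d'}-> G(d')
%
\[ G(f) \circ \eta_{d} \;=\; \eta_{d'} \circ F(f) \]
```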

There are two catches with this definition.  First, duality (aka linear
negation) means that propositions may contain variables with mixed
variance.  Ordinarily one solves this by passing to dinatural
transformations.  The second catch is that these are not strong enough
for decent completeness results (something I only discovered this
summer), and therefore one passes (more accurately I was pushed, by
Gordon Plotkin who'd been insisting they were the right thing all
along) to *logical* transformations, where the morphism f:d->d' in the
definition of naturality is replaced by a logical relation (a concept
Harvey himself had a founding role in, back around 1970, in fact I may
have even attended a talk by him at Stanford then on these).

With this definition of semantic proof the goal is then to find a
bijection, if one exists, between the logical transformations of terms of linear
logic (a semantic concept) and the cut-free proofs of Girard's
axiomatization (a syntactic concept).  Abramsky and Jagadeesan have
proposed calling such a bijection of proofs a full completeness result,
reserving "ordinary completeness" to mean a bijection between valid
formulas and provable formulas (mere proof existence without trying for
any bijection thereof).

Such a full completeness result for the case of multiplicative linear
logic, just ' and @, would complement the corresponding completeness
theorem for Boolean algebras, that the axioms of a complemented
distributive lattice suffice to derive all equational properties of the
standard two-element Boolean algebra.  I'm not quite sure where the
next correspondence comes, but it is some variation on: the
corresponding result for the whole of linear logic would complement
Goedel's completeness proof for first order logic.
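
The Boolean half of that correspondence is easy to see on the semantic
side: an equation is valid in the two-element algebra iff it survives all
assignments, which is mechanically checkable.  A throwaway Python
illustration (the sample equations are my choices):

```python
from itertools import product

def valid_in_2(eq, nvars):
    """An equation holds in the 2-element Boolean algebra iff it holds
    under every assignment of 0/1 to its variables."""
    return all(eq(*vals) for vals in product((0, 1), repeat=nvars))

# De Morgan and distributivity, two derivable equational properties.
demorgan = lambda x, y: (1 - (x & y)) == ((1 - x) | (1 - y))
distrib  = lambda x, y, z: (x & (y | z)) == ((x & y) | (x & z))

assert valid_in_2(demorgan, 2)
assert valid_in_2(distrib, 3)
assert not valid_in_2(lambda x, y: x == y, 2)  # x = y is not a law
```

The completeness theorem says this semantic test and derivability from
the complemented-distributive-lattice axioms pick out the same equations.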

That's enough for now.

>>A proof denotes a transformation of the givens into the result(s).
>Attempts to illuminate proofs as transformations have been uninspiring,
>leading to uninformative messes. Do you have any better ideas?

See above.

>In what sense is linear logic "the appropriate logic of proof?"

It deals at a suitably primitive level with flow of information from
premises to conclusion.  Before Boolean logic there were Aristotelian
syllogisms.  Boolean logic dealt at a suitably primitive level with the
notion of partition into true and false, and as such replaced the less
suitably primitive (from a partition point of view) syllogism.
Syllogisms are still of interest from a flow perspective, but they are
not as combinatorially attractive a model of flow as linear logic.

>Do you think that no
>serious mathematical proof is cut-free, and if so, does this indicate that
>what you are talking about may be unimportant for the foundations of
>mathematics?

Cut-free proofs are only important for foundations in a dual sense to
Boolean logic being important.  Boolean logic is not everything, and
neither are cut-free proofs.  The two are siblings in a much larger
picture.  I hope you don't want to remove Boolean logic altogether from
foundations!  It's what computers are built of; moreover a lot of
people who do hardware verification and software verification, very big
with industry these days, believe that the logical connectives are
Boolean.  Conceivably computers will have to be built from linear logic
instead when Boolean logic runs into quantum problems in 2020 (ok, ok,
just kidding :).

>Can you state some metamathematical theorems you are shooting for and why
>they would have any impact on foundations of mathematics?

Completeness results of the kind I outlined above.  Did Goedel's
completeness result for first order logic have any impact on
foundations?  If it did, and if proofs do turn out to be as attractive
as propositions structurally, albeit for complementary reasons, then
one can hope that such completeness results might have a similar impact
on foundations.

>It's just
>that you are on the spot to say something basic and intelligible that lends
>credence to this idea.

Trying, trying.  There's an *awful* lot to say, it's just like trying
to write a logic text for someone who's never seen logic (if linear
logic doesn't seem bizarre then you haven't understood it).

>> Just as the basic
>>semantic entities for propositional calculus are 0 and 1 (the *'s in my
>>diagram), so is the basic semantic entity for MLL the abstract
>>transformation or morphism (the edge in my diagram).  Just as 0's and
>>1's combine under Boolean connectives, so do edges combine under
>>composition.
>
>Sounds like an algebraic trick, and not something fundamental; perhaps not
>a fundamental explanation of anything.

Hey Harv, lighten up a bit.  Bits are fundamental and graphs are too.

>I meant things like: using (set theoretic) functions as primitive instead
>of sets. I reject the analogy with photon/wave. The sense of translation
>between functions and sets is different than the relationship between
>photon and wave.

Let me try to narrow this gap a bit.  Won't be much but it'll have to
do for now.  Picture the sets as vertices and the functions as edges.
Now take a chain of n edges, transitively close it, and mark one end as
constant, making it a linear order with bottom.  Define its dual as the
chain of n+1 monotone bottom-preserving maps from it to 2, the
two-element such order, ordered pointwise and so corresponding to order
filters; call each such order filter a dual point.  There is an evident
bijection between the n nonconstant dual points and the n edges: each
nonconstant dual point "cuts" one edge in the evident way.  The
constant (aka bottom) dual point cuts an imaginary (n+1)-st edge
extending the original chain.  Such a dual point (as a function from
chain B to 2) when composed with any monotone map f:A->B between two
chains yields a dual point of A; in this way f induces a map of dual
points of B to dual points of A.  So going back to the edges and
vertices as functions and sets, we have a duality that interchanges
functions and sets, which is a true categorical duality (flips all
monotone functions around).  This is a bit primitive but it makes the
basic point.
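
The count of dual points can be checked mechanically.  A small Python
enumeration (my illustration, not Pratt's) for a chain with n edges:

```python
from itertools import product

def dual_points(n):
    """Monotone, bottom-preserving maps from the chain 0 < 1 < ... < n
    (n edges, with bottom 0) to the two-element chain, as 0/1 tuples."""
    return [f for f in product((0, 1), repeat=n + 1)
            if f[0] == 0 and all(f[i] <= f[i + 1] for i in range(n))]

for n in (1, 2, 5):
    pts = dual_points(n)
    assert len(pts) == n + 1                # n nonconstant dual points + bottom
    assert pts.count((0,) * (n + 1)) == 1   # the constant (bottom) dual point
```

Each nonconstant dual point is a threshold function, and its threshold is
exactly the edge it "cuts"; the all-zero map is the extra, bottom point.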

Now at the core of photon-wave duality (and for that matter Fourier
transforms) is the duality of locally compact abelian groups.  This
duality works in the same way, with the multiplicative topological
group T of complex numbers mod 1 (which inherits its topology from that
of the Euclidean plane) in place of the 2-element chain with bottom.
Continuous group homomorphisms from G to T, called group characters,
constitute the dual points of G.  Continuous group homomorphisms from G
to H induce ditto from the dual group H' to G'.
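
A finite toy version of this duality, with Z/n in place of a general
locally compact G (my illustration, not in the original): the characters
of Z/n are exactly x |-> exp(2*pi*i*k*x/n), so the dual of Z/n is again
Z/n.

```python
import cmath

def chi(n, k):
    """The k-th character of Z/n: a homomorphism into the circle group T."""
    return lambda x: cmath.exp(2j * cmath.pi * k * x / n)

def is_character(n, f):
    """Check the homomorphism law f((x + y) mod n) == f(x) * f(y)."""
    return all(cmath.isclose(f((x + y) % n), f(x) * f(y))
               for x in range(n) for y in range(n))

n = 6
assert all(is_character(n, chi(n, k)) for k in range(n))
# A map into T that is not a homomorphism fails the test:
assert not is_character(n, lambda x: cmath.exp(2j * cmath.pi * x * x / n))
```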

Many other dualities in mathematics work this way as well, e.g. finite
dimensional vector spaces (in place of 2 and T use the one-dimensional
vector space), and believe it or not the Chu spaces (A,r,X) described
above.

Trying to anticipate your objections, these dualities might all seem too
complicated to be foundational.  Ah, not all.  The last one on the list
is not complicated: simply transpose (A,r,X) to (X,r',A) where
r'(x,a)=r(a,x), which has the side effect of reversing the direction of
every (f,g) (becoming (g,f)) between Chu spaces.  Yet all the rich
algebraic structure common to the other dualities is present here too,
which is very unexpected given the simplicity of transposition!
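
The transposition is simple enough to execute.  A self-contained Python
sketch over K = {0,1} (toy space and transform mine):

```python
def transpose(sp):
    """The Chu dual: (A, r, X) |-> (X, r', A) with r'(x, a) = r(a, x)."""
    A, r, X = sp
    return (X, {(x, a): r[(a, x)] for a in A for x in X}, A)

def is_transform(f, g, sp1, sp2):
    """The Chu transform condition s(f(a), y) == r(a, g(y))."""
    A, r, X = sp1
    B, s, Y = sp2
    return all(s[(f[a], y)] == r[(a, g[y])] for a in A for y in Y)

# A toy space and a transform on it.
A, X = [0, 1], ['p', 'q']
r = {(a, x): (a + X.index(x)) % 2 for a in A for x in X}
sp = (A, r, X)
f, g = {0: 1, 1: 0}, {'p': 'q', 'q': 'p'}

assert transpose(transpose(sp)) == sp                    # duality is involutive
assert is_transform(f, g, sp, sp)                        # (f, g) on sp
assert is_transform(g, f, transpose(sp), transpose(sp))  # (g, f) on the dual
```

The last two asserts show the direction reversal: the transform (f, g)
becomes (g, f) on the transposed spaces.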

Even more unexpected is that all the other dualities *embed* fully and
faithfully in this one very simple duality: simply pick K to be the
*underlying set* (forget the structure!) of the dualizing object,
whether 2, T, or the one-dimensional vector space (aka the ambient
field).  The (f,g) maps are in bijection with the monotone maps, the
continuous group homomorphisms, and the linear transformations
respectively.  A trick?  Calling it names won't change the facts; it
strikes me as more like a miracle.
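
For the chain duality this embedding can even be verified exhaustively on
a small instance: represent a chain as the Chu space pairing its points
with its dual points, then count transforms by brute force.  All of the
following is my own illustration.

```python
from itertools import product

def dual_points(n):
    """Monotone bottom-preserving maps from the n-element chain to 2."""
    return [f for f in product((0, 1), repeat=n)
            if f[0] == 0 and all(f[i] <= f[i + 1] for i in range(n - 1))]

def chu_of_chain(n):
    """The chain 0 < ... < n-1 as a Chu space over the underlying set
    {0,1}: r(a, x) = x(a)."""
    A, X = list(range(n)), dual_points(n)
    return A, {(a, x): x[a] for a in A for x in X}, X

def chu_transforms(sp1, sp2):
    """All pairs (f, g) with s(f(a), y) == r(a, g(y)), by brute force."""
    A, r, X = sp1
    B, s, Y = sp2
    return [(f, g)
            for f in product(B, repeat=len(A))   # every function A -> B
            for g in product(X, repeat=len(Y))   # every function Y -> X
            if all(s[(f[a], y)] == r[(a, g[Y.index(y)])]
                   for a in A for y in Y)]

def monotone_bottom_maps(m, n):
    """Monotone bottom-preserving maps from an m-chain to an n-chain."""
    return [f for f in product(range(n), repeat=m)
            if f[0] == 0 and all(f[i] <= f[i + 1] for i in range(m - 1))]

sp = chu_of_chain(3)
# Full and faithful on this instance: Chu transforms and monotone
# bottom-preserving maps are equinumerous.
assert len(chu_transforms(sp, sp)) == len(monotone_bottom_maps(3, 3)) == 6
```

The brute-force search ranges over arbitrary functions in both
directions, so nothing about monotonicity is built in; the Chu condition
alone cuts the candidates down to exactly the structure-preserving maps.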

>So what does "an unacknowledged pathology of their framework" mean?
>There are over 10000 papers in which Borel measurable subsets of Euclidean
>spaces are mentioned.

The point is, to what extent can a theory neglect continuity while
claiming to have relevance to anything of practical value?  My take on
much of mathematics is that it has been conducted with blithe, and
presumably unintentional, disregard of the topology that is intrinsic
to many of its constructions.  That's too short
to satisfy you I know, but it's the direction my concerns are coming
from.

>In order for comments like this to be useful to the e-mail list, there has
>to be a great deal more explanation at a very basic level. You haven't even
>told us what linear logic is and why it is interesting.

Made a start.  More to come later.  I mainly talked about a semantics
for LL above; I still need to talk about the combinatorics of its syntax, in
particular what abstract proofs mean and how they tangle and untangle.
That the semantics perfectly matches up to the syntax in which this
tangling is taking place is really beautiful, since it then gives one a
concrete handle on tangling in semantics.  I see this as potentially of
profound importance to foundations.

Vaughan
