[FOM] "Salvaging" Voevodsky's talk

Timothy Y. Chow tchow at alum.mit.edu
Sun Jun 5 16:32:42 EDT 2011


Neel Krishnaswami, who doesn't subscribe to FOM but who saw my message on 
the FOM archives, sent me what I consider to be a very informative 
response, which I am forwarding here with his permission.

Tim

---------- Forwarded message ----------
Date: Sun, 5 Jun 2011 18:59:47 +0000
From: Neelakantan Krishnaswami <neelk at microsoft.com>
To: "tchow at alum.mit.edu" <tchow at alum.mit.edu>
Subject: RE: [FOM] "Salvaging" Voevodsky's talk

Voevodsky is coming to type theory from the point of view of a
homotopy theorist. The traditional set-theoretic formulations of that
subject make use of Grothendieck universes, and so go beyond
ZFC. However, the big discovery of his univalent foundations project
is that homotopy theory has a natural formalization in Martin-Lof type
theory, which has the proof-theoretic strength of Kripke-Platek set
theory (ie, vastly less than ZF). (IIRC, Anton Setzer has written a
survey paper on this, "Proof Theory of Martin-Lof Type Theory: an 
Overview".)

Generally, it's no surprise when a theorem can be encoded in a weaker
foundation, but the real shocker behind homotopy type theory is that
a suitable type-theoretic view seemingly requires *fewer* encoding tricks 
than the traditional view. The apparent need for large constructions
vanishes because the type structure of type theory prevents you from
performing constructions which are okay on points but which fail to
respect geometrical invariants like continuity.  As a result, you can
now talk about plain old functions instead of natural transformations
on sheaf categories or whatever.
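
A loose analogy (mine, not Voevodsky's): parametric polymorphism
already shows how a type discipline can force every definable
function to respect structure, with no side conditions to check.

    -- Parametricity in action: by Wadler's free theorem, *any*
    -- function of type [a] -> [a] commutes with map, i.e.
    --   f . map g == map g . f
    -- The type leaves no way to inspect elements and break the
    -- list structure.
    rev :: [a] -> [a]
    rev = foldl (flip (:)) []

    -- So rev (map g xs) == map g (rev xs) for every g -- not because
    -- we proved anything about rev, but because of rev's type.

The homotopy type theory phenomenon is analogous: the type structure
itself rules out constructions that would break paths or continuity.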

So even if the worst fears of the predicativists are true and ZFC is
inconsistent, the homotopy type theory crowd can make the argument 
that homotopy theory is still a coherent subject.

-*-*-*-

This suggests a thought-experiment: could a similar trick work for
PA? Few of the algorithms lurking in mathematical
arguments seem to be very complex, after all. So perhaps requiring
computations to run in polynomial time or space is enough for "most"
mathematics, in which case the full principle of induction is not
necessary. However, observe that simply restricting the principle of
induction while leaving logic/set theory untouched doesn't work well,
since it blocks too many natural constructions.
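
One known way to make the restriction structural rather than blunt is
Bellantoni and Cook's safe recursion, which characterizes polytime by
controlling where a recursive result may be used. A hedged Haskell
sketch follows; note that the safe/normal distinction is enforced
here only by convention, not by Haskell's type checker.

    -- Numbers in binary notation: E is empty, O/I append a bit
    -- (O n = 2n, I n = 2n+1).
    data Bin = E | O Bin | I Bin

    -- Safe recursion on notation.  Recursion runs on the "normal"
    -- argument; the recursive result is fed only into the "safe"
    -- position of h0/h1.  Forbidding the recursive result from
    -- driving further recursion is what blocks exponential blow-up.
    safeRec :: a -> (Bin -> a -> a) -> (Bin -> a -> a) -> Bin -> a
    safeRec g _  _  E     = g
    safeRec g h0 h1 (O n) = h0 n (safeRec g h0 h1 n)
    safeRec g h0 h1 (I n) = h1 n (safeRec g h0 h1 n)

    -- Binary successor is definable in this style; exponentiation is
    -- not, since it would need the recursive result in normal position.
    suc :: Bin -> Bin
    suc E     = I E
    suc (O n) = I n
    suc (I n) = O (suc n)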

This is in part due to the unstructured nature of set theory -- it
gives us a handful of very powerful axioms (eg, separation, powersets)
from which we construct things like function spaces. This makes the
theory very parsimonious, but if you weaken the primitives too much
then the higher levels of mathematics can't be built. OTOH,
type-theoretic approaches generally include many more primitives. This
is less parsimonious, but in exchange it offers more ability to
selectively weaken the foundations, since we can now try things like
eliminating powersets without giving up function spaces.
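
A small Haskell illustration of that trade (a sketch, with names of
my own choosing): the function space is a primitive type former that
needs no powerset behind it, and the nearest analogue of a powerset
is the far weaker type of Boolean-valued predicates.

    -- The function space a -> b is primitive: composition and
    -- currying come for free, with no encoding of functions as sets
    -- of ordered pairs.
    compose :: (b -> c) -> (a -> b) -> (a -> c)
    compose g f = g . f

    -- The closest thing to a powerset here is a type of decidable
    -- predicates -- much weaker than full Powerset(a).
    type Subset a = a -> Bool

    member :: a -> Subset a -> Bool
    member x s = s x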

Indeed, there are type theories which capture notions of computability
much more restrictive than what most constructivists demand, such as
polytime computability. See the work of Martin Hofmann and Kazushige
Terui in particular. Their work typically uses linear logic rather
than a paraconsistent logic, so false continues to explode even
though entailment is non-monotonic (since propositions represent
resources, and not merely states of knowledge).
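
GHC's LinearTypes extension gives a concrete, if partial, taste of
this discipline: a linear arrow rejects contraction (duplicating a
hypothesis), the structural rule that the light/soft linear logics
behind Hofmann's and Terui's work control in order to bound
complexity. To be clear, GHC gives no complexity guarantee; the
sketch below only shows the usage discipline.

    {-# LANGUAGE LinearTypes #-}

    -- A linear function must consume its argument exactly once.
    useOnce :: a %1 -> a
    useOnce x = x

    -- Contraction is rejected: uncommenting this definition makes
    -- GHC complain that 'x' is used more than once.
    -- dup :: a %1 -> (a, a)
    -- dup x = (x, x)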

I don't know if anyone has investigated how much of ordinary
mathematics fits in these frameworks, though. I suspect the answer to
this question is "not that much", but I also suspect/hope it will fail
for very interesting reasons rather than trivial ones.

