FOM: On CH/1

Harvey Friedman friedman at math.ohio-state.edu
Sun Dec 7 20:28:20 EST 1997


The issue of the status of CH = continuum hypothesis has spawned so much
discussion on the fom that I can no longer resist taking some time out from
preparing goodies for my "positive series" and getting into the fray.

I am making at least two initial postings about this. In this first
posting, I want to spell out some principal views of the matter, and what
research these lead to. In later postings, I want to specifically comment
on what people have said.

Of course, taking a view on CH entails taking a view on many of the
fundamental issues concerning the general status of set theory and set
theoretic statements. CH is a crucial focal point for wider issues.

Let me say at the outset that at this stage of research, it is not very
clear what view will ultimately prevail. I do think that some fairly
specific view will ultimately prevail in the long run - say by the year
2100. I.e., (the status of) CH is not like questions such as "is there a
beginning of time, and if so, how and why did it start?" or "what is human
consciousness? - is it a physical process, and if so, what?" I strongly
suspect that by 2100, the CH situation will be clarified, but by no means
these others.

Note that I said that a view will ultimately prevail - that the situation
will be clarified. Not necessarily that an answer, yes or no, will
ultimately be given to CH. On the other hand, as seen below, I don't think
one can rule out certain kinds of yes/no answer either.

Let's start from the beginning with some presumably noncontroversial
observations. One of the most striking features of mathematical thought,
only fully realized since Frege, Russell, Zermelo, and others, is that such
a little bit goes such a long way. That is, only a small handful of basic
concepts and construction principles suffice to support all of the commonly
accepted mathematical definitions ever published. And also this small
handful suffices to prove or refute such an overwhelming number of
propositions.

Currently, the best way we have of presenting this is in terms of axiomatic
set theory, with its customary division into the purely logical part
(represented by first order predicate calculus) and the finitely many
axioms of set theory. By the way, it is of course important that there are
only a small number of axioms. And of course, what I said is literally
false in the usual setup - one uses schemes with infinitely many axioms.
But the proper formulation that reflects the finiteness of the axioms is
through the use of the obvious extension of the usual predicate calculus by
schematic letters, with an appropriate rule of substitution of schematic
letters by formulas without schematic letters. Such modifications of the
usual predicate calculus should be studied more systematically, from a
model theoretic and proof theoretic point of view. For extreme conjectures
concerning schemes, see 12:30AM, 11/17/97, 8:Schemes, on the fom. However,
this is no substitute for a systematic study.
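
To make the idea of a schematic formulation concrete - this is a minimal
illustration of my own, not the precise setup alluded to above - consider the
Separation scheme. Instead of infinitely many axioms, one for each formula,
one takes a single axiom in a schematic predicate letter R,

   \forall a \exists b \forall x (x \in b \leftrightarrow (x \in a \wedge R(x)))

together with a substitution rule: from any theorem containing R, infer the
result of replacing R(x) throughout by an arbitrary formula (parameters
allowed) containing no schematic letters. With such devices the whole of ZFC
becomes a system with finitely many axioms and rules.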

For the "purely logical part" - and the exact nature of this delineation is
a hugely interesting issue outside the scope of this posting - one has a
striking explanation that such a little bit goes such a long way; namely,
that according to the completeness theorem, a little bit goes all the way.

And by the incompleteness theorems, for the system taken as a whole, a
little bit cannot go all the way. In fact, no finite axiomatization can go
all the way.
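
For reference, standard statements of the two theorems being invoked - the
wording here is mine, not part of the original argument:

   Completeness (Godel): for any set T of first order sentences and any
   sentence \phi,  T \vdash \phi  if and only if  T \models \phi.

   Incompleteness (Godel-Rosser form): if T is consistent, recursively
   axiomatized, and interprets a small amount of arithmetic, then there is
   a sentence \phi with  T \nvdash \phi  and  T \nvdash \neg\phi.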

So what are we to make of the fact that such a little bit goes such a
long way, but that no finite amount can go all the way? That is a crucial
issue.

I want to continue on with some more noncontroversial observations. At the
level of natural numbers and finitary mathematics, a little bit seems to go
an extremely long way. As one gets to more serious doses of set theoretic
ideas, a little bit still goes a strikingly long way, but of course a lot
less far. It is amazing just how much set theory can be done in ZF and its
fragments. And adding choice allows one to go an even longer way - this is
a truly major jump. Other major jumps are afforded by other basic
hypotheses, such as the continuum hypothesis, the axiom of constructibility,
various large cardinals, genericity axioms (stating the existence of
generic ultrafilters, starting with Martin's axiom), etcetera.

Notice how noncommittal I am being thus far. And it is interesting to note
that we now have naturally generated a number of important questions. One
of these is:

QUESTION: Is there a way to measure this kind of jump, and compare this
kind of jump, without regard to issues of absolute truth? Or at least
identify a significant jump in formal terms? E.g., can we identify and
compare the jumps given by, say, ZFC over ZF? I.e., ZFC/ZF, where /
indicates "division." ZFC + CH/ZFC? ZFC + all reals are constructible/ZFC +
CH? ZFC + MA (various forms)/ZFC? ZFC + GCH/ZFC + CH? Etcetera, etcetera.
One can also do this with large cardinals. One possible answer - only
satisfactory to a dedicated naturalist - would be to actually count the
number of open problems in the published literature settled by the top of
the fraction, and not settled by the bottom of the fraction (or maybe known
not to be settled by the bottom of the fraction), perhaps with weights
assigned to the significance of these publications and/or the interest
generated in the open problems - and this has to be debugged in terms of
possible overlap between open problems.
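
One crude way to formalize the naturalist's count just described - the
notation is illustrative and mine, not taken from any literature:

   \mu(T_1/T_2) = \sum_{P \in \mathcal{O}} w_P \cdot [T_1 \mbox{ settles } P
   \mbox{ and } T_2 \mbox{ does not settle } P]

where \mathcal{O} is a suitably deduplicated list of open problems from the
published literature, w_P is a weight reflecting the significance of the
relevant publications and the interest generated by P, and [ ] is 1 if the
condition inside holds and 0 otherwise.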

Now back to our crucial issue, which I repeat: So what are we to make of
the fact that such a little bit goes such a long way, but that no finite
amount can go all the way?

First of all, consider just how long a way one goes with very weak
fragments of ZFC in the context of actual discrete/finite mathematics. In
discrete mathematics, there are quantifiers over infinite sets of finite
objects, particularly universal quantifiers on the outside; in finite
mathematics all quantifiers range over finite objects. This is a workable
distinction, perhaps not completely in tune with common usage in math, but
the common usage in math is very informal.
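
Two illustrative examples of the distinction, chosen by me to match the
definition just given:

   discrete mathematics: every finite triangle-free graph G has at most
   |V(G)|^2/4 edges - the outermost universal quantifier ranges over the
   infinite set of all finite graphs;

   finite mathematics: every graph on 100 vertices with more than 2500 edges
   contains a triangle - every quantifier ranges over objects drawn from a
   fixed finite domain.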

In light of this and related phenomena such as the uniqueness theorems
about the natural number system, it is very natural to expect
mathematicians to think in terms of absolute truth in the strongest
possible terms in the context of finite mathematics. I.e.,

1) every statement in finite mathematics is objectively either true or
false, and we will be able to determine it by hard work.

Now the real fork in the road comes with the realization that by the
incompleteness theorems, certain strong formalizations of this expectation
are refutable. In particular, one refutes

2) every statement in finite mathematics can be proved or refuted in ZFC.

assuming that ZFC is consistent. And if one does not readily accept that
ZFC is consistent, one replaces it by one's favorite fragment that one has
confidence in the consistency of, and that easily suffices to do all of
published finite mathematics.
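
To make the refutation of 2) concrete - a standard example, not spelled out
above: assuming ZFC is consistent, the Rosser sentence for ZFC is a \Pi^0_1
arithmetic statement, hence a statement of finite mathematics, which ZFC
neither proves nor refutes. By the second incompleteness theorem, Con(ZFC)
itself is another statement of finite mathematics not provable in ZFC (and
not refutable either, granted slightly more than bare consistency).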

Here is one of many real forks in the road. Mathematicians naturally resist
the idea that the rules of the game can be up for grabs, and they like the
security blanket of believing that it doesn't matter what the rules really
are, since only a tiny number of the actual rules of the game are ever used
or ever needed for their work. So they operate at a subconscious level of
resistance, and restate this as:

3) every reasonable statement in finite mathematics can be proved or
refuted in ZFC.

For the mathematician, here reasonable is an informal notion that supports
the rejection of statements as unreasonable on very wide and possibly open
ended grounds. Nearly any verbalizable difference between a proposed
statement and actual statements made in the normal course of finite
mathematics is grounds for rejection as unreasonable.

Of course, they know that there is a problem with

4) every reasonable statement in set theory can be proved or refuted in ZFC.

because of CH and the like. So they gravitate towards the view that the
essence of mathematics is finitary. They are attracted to:

5) every reasonable statement in mathematics is finitary, or

6) the reasonable essence of every statement in mathematics is finitary, or
castable in finitary terms.

I don't hate mathematicians, or think they are stupid - at least in the
usual sense of the word. Most of them went into mathematics - especially
pure mathematics - because they liked rigid intellectual and cultural
structures, where the rules of the game are clear and fixed. They don't
want to be philosophers, biologists, politicians, psychologists, artists,
etcetera. Even in terms of their artistic interests, they gravitate to that
one art for which the rules of the game are comparatively immutable -
music. They are defending their turf. They have grown up thinking that
mathematics has a unique kind of absolute objectivity, with clear and fixed
rules that they find natural and attractive. They are not going to give
this up without great resistance.

The mathematicians also have another reason to put up great resistance to
the idea that the rules of the game are to be reexamined and perhaps
changed. The nature and style of research in this vein is very much unlike
their usual research. Sufficiently so that it is not easy for them to
absorb the main ideas. Even when they have a clear motivation to do so -
such as applications of model theory to algebra and number theory - they
vigorously resist. I.e., they won't take the trouble to absorb elementary
predicate calculus theory, which makes certain things so quick and easy -
they seem to be literally repulsed and disgusted by the philosophical
flavor of it, so much so that it overcomes the purely mathematical aspects
of it.

Now let's focus on CH = continuum hypothesis. I want to carefully discuss a
few specific attitudes towards CH and related issues:

1. Godelian viewpoint. There is a uniform objective reality to mathematical
objects. There is no difference in this regard between the natural numbers,
the real numbers, sets of real numbers, sets of sets of real numbers,
ordinals, and sets in the cumulative hierarchy. Or for that matter, between
these and the set consisting of 0 and 1. Propositions concerning these
notions may or may not be difficult to settle. It is true that very few
basic and known principles are usually needed to settle statements about
some of these. Unfortunately, this is known not to be true about others.
As the objects get larger in cardinality, there tends to be more difficulty
in settling basic questions. This is in direct analogy with: as physical
objects get smaller, or as we talk of more and more distant physical
objects, and larger and larger intervals of time, we also have to work much
harder in order to get answers. But we do make progress. There is no reason
whatsoever to be even remotely pessimistic about our abilities to answer
fundamental questions about any of these notions. We just have to work -
just like we worked on FLT - and succeeded with a great deal of effort!
More work has gone into FLT than into CH.

2. Mathematician's viewpoint. CH? I try not to care. Give me a reason to
care, and I might think about it. Otherwise, don't bother me. I have real
mathematics to do, involving matters of real concern. Every set of reals I
consider is pretty simple - at least low level Borel. And such sets realize
only two possible infinite cardinalities. Want to bother me with arbitrary sets
of reals, or really exotic ones? Why? Who cares? Get out of my office. I
have final exams to grade. See 10:Pathology, 12/8/97, 12:37AM.

3. Neo relativism. There is no uniform objective reality to mathematical
objects. There is only the following spectacular phenomenon. One identifies
certain pre-formal concepts which are yet to be explained and worked with.
Then one makes some explanatory remarks connecting these concepts with
other concepts that have long been discussed and worked with by
people. After these explanatory remarks, one then enunciates a number of
intuitively clear principles about them. These principles are not to be
thought of as evident or true - but rather as explanatory as to how to work
with these concepts. There is absolutely no attempt to say that one has
completely defined or delineated any of these concepts, or even stated any
truths. Rather, these principles attempt to fix aspects of a picture that
these concepts invoke, so that people can work together on a common ground.
Furthermore, these principles try to be strong enough so that all
fundamental aspects of the picture are fixed, in order to facilitate
communication. So that different people will not have different pictures.
One finds that this process is unexpectedly and spectacularly successful -
that a little bit goes a long way as discussed above. That the usual axioms
we work with in f.o.m. go a very long way. But not far enough for CH. Thus
this process is only more or less successful. One has relative success,
depending on the context. In arithmetic and real/complex algebra, etcetera,
it is much more successful than in the set theory of sets of real numbers.
CH is an instance of failure.

4. Redefinition. Having trouble settling CH? Maybe you need to be more
specific about the set theoretic universe, and then you can see about CH.
By far the most basic redefinition is L = constructible sets. This does the
trick. CH follows. Is V = L evident? This makes no sense. It is simply a
redefinition or re-explanation of a notion of set, for which a little bit
goes a very very very much longer way. Too technical? Perhaps. Make it less
technical? Yes. Does it reflect a fundamental conceptual picture such as in
3? Probably not.
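
Why V = L settles CH, in one step - a standard argument, recalled here only
as a reminder: by the Condensation Lemma, every constructible subset of
omega already belongs to some L_\alpha with \alpha countable, so there are
at most \aleph_1 constructible reals; if V = L, every real is constructible,
hence 2^{\aleph_0} \le \aleph_1, and Cantor's theorem gives the reverse
inequality, so 2^{\aleph_0} = \aleph_1.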

In one way or another, I work on all but 1 professionally. I think that 1 -
the Godelian viewpoint - has run out of steam, and I don't work on it
directly. I also have yet to be convinced that the set theorists are
getting or will get anywhere with the Godelian viewpoint. I will make my
views along these lines much clearer when I discuss what people have said
on the fom list.

Here are my feelings about Neo relativism:

a. That there is a concept of "conceptual picture" which can be formally
analyzed in a convincing way. There is a strong geometric component to it.
b. That many of the usual axiom systems of f.o.m. depict such conceptual
pictures, subject to completeness theorems.
c. That it is a THEOREM that many of the set theoretic statements such as
CH cannot be settled through any coherent conceptual picture.
d. That versions of certain large cardinal hypotheses do follow from
conceptual pictures. That their consistency is equivalent to the coherence
of certain conceptual pictures.

There is another approach to CH and the like which depends on an entirely
different view as to what an axiom of set theory is. In this point of view,
an axiom of set theory is a transfer principle from the hereditarily finite
sets to the hereditarily transfinite sets (the cumulative hierarchy). There
again is no question of truth or falsity or evidence here. Transfinite set
theory is simply viewed as an interesting purely formal extension of what
we know about the finite or the discrete. Some transfer principles lead to
inconsistent systems and some lead to apparently consistent systems which
are interesting and fruitful. In particular, there should be a consistent
transfer principle that refutes CH. However, there should be a theorem that
no consistent transfer principle proves CH. Thus CH is settled negatively -
in a very exotic and sophisticated new sense. And the only axiom of set
theory is: "every consistent transfer principle is true." And it can be
shown to be consistent if and only if one of the very large cardinal
hypotheses we study is consistent with ZFC.
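
A schematic template for the kind of principle being described - my own
illustrative rendering, not the actual transfer principles alluded to:

   for every sentence \phi in a suitably restricted class,
   if (V(\omega), \in) \models \phi then (V, \in) \models \phi,

where V(\omega) is the collection of hereditarily finite sets and V is the
full cumulative hierarchy. Different choices of the restricted class of
sentences yield different transfer principles; some are inconsistent with
ZFC, while others appear consistent and, as indicated above, calibrate
against large cardinal hypotheses.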

Of course, both of these approaches are quite speculative and far from
being realized. However, I do have some initial suggestive results which
will be discussed at a later time.