FOM: determinate truth values, coherent pragmatism

Harvey Friedman friedman at math.ohio-state.edu
Mon Sep 4 20:31:03 EDT 2000


Reply to Davis 1:43PM 9/4/00:

>As to the bet, alas it was a rhetorical flourish; being 72 years old, it's
>most unlikely that I'd be around to collect (or pay).

But you can legally commit your estate!

>I find the notions: "subset" and "power set" crystal clear. Likewise for
>omega in the sense of the von Neumann finite ordinals. Since CH is a very
>specific assertion involving these notions,  I regard it as having a
>determinate truth value.

Let me take a somewhat different tack. What does "determinate" mean? Is
there a notion of "determinate" that is independent of any specification of
how one can determine?

I, personally, am not comfortable with any notion of "determinate" that is
independent of any specification of how one can, even theoretically,
determine. That doesn't mean that I reject the concept out of hand. It just
means that I think the concept needs clarification.

>My reasons for the bet are much weaker. In principle, I have no problem
>with the possibility that although CH has a determinate truth value, the
>human race may never determine it.

Or any intelligent beings of any kind, even of far greater intelligence
than we happen to be on the scale of possible intelligence?

>After all, there are many such
>propositions.

I would like to see examples of mathematical problems whose difficulties
are similar in nature and/or similar in magnitude to those of CH.

In a separate posting, I will give some examples of problems - bounded
statements in the integers - that appear to me likely to have serious
underlying logical difficulties.

>I find myself optimistic about the direction of Woodin's
>recent work on CH. At last an effort is being made to get around the
>plethora of models of set theory the forcing method made it possible to
>produce which seemed to make progress on CH impossible. I agree that at
>this preliminary stage my optimism is hardly warranted by the facts. But so
>be it.

I view Woodin's program, and related programs by the set theorists, as
serious attempts to argue for new axioms for set theory along the lines of
convenience and usefulness. In a way, it promises to be somewhat similar to
the process that I described in my previous e-mail for the adoption of new
axioms by the mathematical community - only it is aimed at the set theory
community. It is not aimed at the general mathematical logic community or
the mathematical community, neither of which is likely to find such things
either convenient or useful.

>>Actually, one should ask: generally accepted by who? I take it that you
>>mean "the mathematical community." Then I will bet against you very
>>strongly.
>
>Yes that's what I mean. Think how much it has changed over the past twenty
>years. And your own work will probably do more than anything else to lead
>them to accept methods going beyond ZFC.

The principal relevant change in the mathematics community is the
intensification of the move away from set theoretic problems or problems
with any nontrivial set theoretic content - that started in earnest in the
1960's - and towards concrete mathematics. Look at, say, the nature of the
problems and concerns voiced in

[1]  V. Arnold, M. Atiyah, P. Lax, B. Mazur, eds., Mathematics: Frontiers
and Perspectives, American Mathematical Society, 2000.

[2]  F. Browder, ed., Mathematics into the Twenty-first Century, American
Mathematical Society Centennial Publications, Volume II, 1992.

>I agree: no "self evident" principles. I believe (and again your own work
>will play a key role) that mathematicians will become comfortable with the
>view that they must work with principles that are not "self evident".

Only because they have so much to gain by the acceptance of new axioms for
what they regard as normal mathematics. They are not going to entertain new
axioms for the purposes of abnormal mathematics.

>As you know very well, the existence of non constructible sets is implied
>by large cardinal axioms (specifically the existence of measurable
>cardinals). Now there is nothing self evident about these axioms. But the
>way in which they line up in linear order with respect to various criteria
>suggests very strongly that they are true.

Using "true" in this manner is utterly foreign to the general mathematical
community. They will take an entirely pragmatic position.

>This views them like a
>physicist.

In order to win the Nobel Prize in physics, it is said that there must at
least be experimental confirmation. The analog in mathematics is
nonexistent, or at most extremely weak compared to what goes on in physics.

>Set theorists explore this territory as utterly remote from
>everyday human experience as quarks. They must look for theoretical
>cohesiveness. As Goedel suggested years ago, such axioms can justify
>themselves on the basis of their consequences - and here again it is you
>who are doing so much to bring this about.

But the justification for the general math community is not along any idea
of truth. It is rather along the lines of coherent pragmatism. Because the
possibility of experimental confirmation is nonexistent - or at least
inconceivable at this point - truth plays no significant role in the
equation. Only coherent usefulness.

>Yes. But they will want more. They will hardly regard them as acceptable,
>however convenient, unless they have reason to believe that their
>arithmetic consequences are true.

The only way they have of testing the truth of arithmetic consequences is
to check that no contradiction arises. So this just amounts to faith in
consistency. This they will gradually accept once there is enough well
worked out theory, and enough smart people are using the axioms, to gain
confidence that there are no problems and to acquire a vested interest in
their use. Notions of truth seem utterly irrelevant for this process in the
general mathematical community.

There is only one other way I can conceive of in which some very concrete
statements might be confirmed, and that is probabilistically. That is, one
may derive from a large cardinal an estimate of the fraction of elements of
a specific finite set that have a computer-testable property, and then
actually run trials to confirm the estimate. And if that estimate can be
rigorously proved using large cardinals but not without - either in an
appropriate sense necessarily involving lengths of proofs, or at least with
no known way to remove the large cardinal - then that would amount to
something dramatic in favor of the large cardinals. However, it would still
be incomparably weaker than what routinely happens in physics, partly
because alternatives to large cardinals can also deliver the same
consequence. E.g., one can always use "continuum is real valued measurable"
instead of "there is a measurable cardinal" for any such purpose.

The idea that one can confirm arithmetic propositions via physical
observation is much too far fetched to be seriously contemplated for a
variety of reasons.

>>However, it seems extremely far fetched to think that this process is going
>>to apply to CH or to any statement that settles CH  - since such statements
>>appear to be in principle divorced from what the mathematicians view as
>>normal mathematics.
>
>Here you take an inappropriately narrow view. These things change, and we
>know that even the remotest large cardinal axioms have arithmetic
>consequences.

Let me put it more coherently. For every natural mathematical statement -
at least thus far - there is a large cardinal axiom (including possibly 1 =
0) which has exactly the same arithmetic consequences over ZFC (and the
same consequences somewhat higher up). Because of this, one can without
loss of generality just look at large cardinal axioms from the point of
view of normal mathematics. There is not even the remotest hint of a
counterexample to this.
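
Schematically, the pattern can be put as follows (a rough restatement of
the claim above, not a precise theorem): for a natural mathematical
statement $S$ there is a large cardinal axiom $L$ (possibly $1 = 0$) such
that, for every arithmetic sentence $\varphi$,

   $\mathrm{ZFC} + S \vdash \varphi \iff \mathrm{ZFC} + L \vdash \varphi$,

and similarly for sentences somewhat above the arithmetic level.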

>I take it that when "Argle" speaks of arbitrary collections, he does not
>intend them to be restricted to the cumulative hierarchy.

What is it about the cumulative hierarchy that makes the notion of
arbitrary collection there any clearer to you than the notion of arbitrary
collection anywhere else? Are you willing to say that, for you, there is no
difference in the level of determinateness between arbitrary statements
about subsets of V(kappa + 2) and Pi-0-1 sentences, where kappa is the
first measurable cardinal?
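
To fix notation in that question: $V(\kappa+2)$ is the $(\kappa+2)$-nd
level of the cumulative hierarchy, and a $\Pi^0_1$ sentence is a purely
universal arithmetic statement, each instance of which is machine
checkable:

   $V(\kappa+2) = P(P(V(\kappa)))$,   and   $\Pi^0_1$: $\forall n\,
   \varphi(n)$ with $\varphi(n)$ decidable.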

>Nevertheless, I'm
>glad you asked. Without the availability of long well-orderings, the
>cumulative hierarchy stops short. Methodologically, what the large cardinal
>axioms supply is precisely the well-orderings needed to extend the
>hierarchy. We know (Russell Paradox) that there is no end to the process.

But what does this have to do with truth? Granted, I am showing that
postulating extensions of the cumulative hierarchy in this way is
coherently useful and convenient for normal mathematics, with a general
feeling of consistency.

>But there is nothing to stop us from going on and on. And that is the great
>service that the set theorists do.

Cardinals far beyond our real understanding were already introduced in 1911
by Mahlo. A real challenge is to make sense of them and to use them for
normal mathematics. I'm doing the latter, but the former would really be a
great service, even for tiny ones. The process of making sense of them
probably needs to be done in a new way already starting at aleph-one. After
all, that really is a very large ordinal.

In summary, let me make some remarks:

1. The analogy with physics is severely strained. There are serious
differences that make it unconvincing.

2. The general mathematical community is going to adopt new axioms only on
the basis of coherent usefulness for normal mathematics, together with some
feeling of consistency. Truth will not enter into the equation for them. No
one knows how to get it to enter in any convincing way anyway.

3. Large cardinals will be the center of attention for the mathematics
community, since consistency and relations with concrete statements are the
only issues for them in connection with the adoption of new axioms.

4. I also see the possibility that the general math community may be more
comfortable with "there is an atomless probability measure on all subsets
of the unit interval" than with large cardinal axioms, in that this
involves objects that are far less abnormal than a large cardinal. Of
course, this particular one and a measurable cardinal are known to be
equivalent for normal mathematical purposes. (Actually this equivalence
passes through some sophisticated set theory that is not "convenient" for
mathematicians, and so there is a need to put things in a form that
mathematicians will find easy to use.) Of course, a byproduct of this
probability measure is that CH is very badly false. But, when viewed as an
approach to CH, this is seriously at odds with the approach currently taken
by the set theorists. After all, if they liked it, then CH would have been
viewed as settled long ago by them. They reject this probability measure as
an axiom candidate. However, given the coherent pragmatism that I see in
the math community, mathematicians may well accept it once shown how to use
it in an easy, coherent, and effective way for a sufficient body of
attractive normal mathematics. This is where the set theorists' approach
via some notion of truth and the mathematicians' approach via coherent
pragmatism would clash.





