FOM: determinate truth values, coherent pragmatism
Harvey Friedman
friedman at math.ohio-state.edu
Tue Sep 5 12:02:04 EDT 2000
Reply to Davis 11:15PM 9/4/00:
>>I, personally, am not comfortable with any notion of "determinate" that is
>>independent of any specification of how one can, even theoretically,
>>determine. That doesn't mean that I reject the concept out of hand. It just
>>means that I think the concept needs clarification.
>
>In the natural sciences, the means for determining the truth of specific
>propositions is often available long after the proposition itself arises.
The idea that any proposition of genuine scientific interest can in
principle be confirmed or rejected by observation or experimentation is
very much ingrained in existing science and engineering. Of course, there
is the realization that one may need a great deal of cleverness and perhaps
new technological advances at the practical level in order to successfully
complete such confirmations or refutations. That is the professional
business of the experimentalists, and the experimentalists are very
successful. Time and time again, these confirmations and refutations are
forthcoming, normally along the lines originally envisioned by theorists.
Such confirmations and refutations, carried out by actual experiments and
observations, are routinely expected and achieved within predicted and
manageable amounts of time.
Any attempt to justify or compare what is happening in set
theory/mathematics by appeal to the sciences must seriously take into
account these characteristic features of the sciences. When this is
appropriately taken into account, such substantive differences appear as to
render the comparisons totally unconvincing. In fact, the substantive
differences between the set theory/mathematics situation and the
science/experiment situation get highlighted.
>Your position is reminiscent of the failed positivist program that sought
>to equate meaning of empirical propositions with the availability of means,
>at least in principle, of verifying them.
When properly and carefully stated, a version of this is essentially
universally accepted by current scientists. Of course, one has to be
careful with the words "equate", "meaning", "empirical", "propositions",
"availability", "means", "in principle", and "verifying".
In fact, as I mentioned before, from what I gather, the scientists still
require that for a Nobel Prize, a theoretical advance must be
experimentally confirmed. If they do, in fact, require this, then that
speaks volumes as to their fundamental attitudes.
>The world we live in and the
>world of mathematics are both vast complex entities in which we with our
>finite minds can make only modest inroads. But to identify what is out
>there with what we are able to ascertain (even in principle) is entirely
>unjustified.
I wasn't talking about such an identification. I am not sure whether or not
there is an objective external reality of the kind you are assuming exists.
Such a reality may or may not exist. I do think that future advances may
well bear on this issue.
However, I thought that we were talking about knowledge. E.g., you wrote
>I even believe
>that CH has a determinate truth value, and would even bet that in twenty
>years that truth value will be known and generally accepted.
And the reason you believe such a statement is your analogies with how the
natural sciences operate. I say that those analogies are
fundamentally defective and do not provide any evidence in that direction.
And that, in fact, mathematics operates very differently, and that this
different way of operating makes your prediction most unlikely.
>Anyhow, in the cases we are discussing, Goedel already did indicate how one
>could "theoretically" hope to determine. By deducing from new axioms for
>the adoption of which there are compelling reasons. See the work of Harvey
>Friedman. And of course there's the famous example of Projective
>Determinacy and descriptive set theory.
We agree that the only axioms that have been accepted by the general
mathematical community up to now have been of the self evident kind, and
these have most likely run out - at least any new ones almost certainly
will not settle CH.
I'm saying that the general mathematical community may be compelled to
accept some new axioms that are not self evident. But this is going to
happen only through what I call coherent pragmatism. Issues of truth will
not enter the picture, as far as they are concerned. Only the issue of
consistency will be of concern. The analogy with the situation in the
sciences breaks down right here.
Issues of truth would enter the picture if truth could be appropriately
accessed as it is routinely in the sciences. But it cannot be so accessed
(independently of outright proofs and refutations) for the foreseeable
future. See below for much more detailed discussion.
In other words, the breakdown in the analogy occurs because - at least at
the moment - there appears to be no way to confirm or reject arithmetic
propositions (and higher up) other than by proving or refuting them -
perhaps with the aid of a computer.
And this leads me to some important challenges that lie at the heart of
this discussion. Major positive advances towards realizing these challenges
will of course go only a small way towards changing the situation with
regard to any possible appropriate analogy between set theory/mathematics
and science/experiments. Much more would have to be done. But at least
meeting these challenges would be a start.
CHALLENGE 1. Find a way to confirm or reject a Pi-0-1 sentence other than
finding a proof or refutation of that statement from accepted axioms.
CHALLENGE 2. Find a way to confirm or reject a Pi-0-1 sentence whose
quantifiers range over all bit strings of length at most 1000 other than
finding a proof or refutation of that statement from accepted axioms.
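To see why Challenge 2 cannot be met by exhaustive checking, it is worth noting the sheer size of the search space that the quantifiers range over. A minimal Python back-of-the-envelope calculation (the bound of 1000 is the one from the challenge):

```python
# Count the bit strings of length at most 1000 that the quantifiers
# of a Challenge 2 sentence range over.

def num_bit_strings(max_len: int) -> int:
    """Number of bit strings of length 0 through max_len inclusive."""
    # 2^0 + 2^1 + ... + 2^max_len = 2^(max_len + 1) - 1
    return 2 ** (max_len + 1) - 1

count = num_bit_strings(1000)
print(len(str(count)))  # the count has 302 decimal digits
```

A number with 302 decimal digits of cases rules out brute force entirely, which is why confirmation would have to come from some genuinely different method.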
On the other hand, we already know how to meet the following challenge by
the statistical method of repeated trials:
CHALLENGE 3. Find a way to confirm or reject a Pi-0-1 sentence of the form
"for most bit strings of length at most 1000, such and such feasibly
testable property holds" other than finding a proof or refutation of that
statement from accepted axioms.
In some very special and interesting cases, it is known that if something
holds for most bit strings of length at most 1000, then it holds for all
bit strings of length at most 1000. Then challenge 2 is met for that
special case.
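The statistical method of repeated trials mentioned above can be sketched in a few lines of Python. The property tested here is hypothetical, chosen only for illustration; any feasibly testable property would do:

```python
import random

# A hypothetical feasibly testable property, chosen only for
# illustration: the number of 1s is at least the number of 0s minus 100.
def property_P(s: str) -> bool:
    return s.count('1') >= s.count('0') - 100

def repeated_trials(trials: int = 2000, length: int = 1000) -> float:
    # Sample uniform random bit strings of length exactly 1000 (these
    # make up more than half of all bit strings of length at most 1000)
    # and estimate the fraction satisfying P.
    hits = 0
    for _ in range(trials):
        s = format(random.getrandbits(length), f'0{length}b')
        hits += property_P(s)
    return hits / trials

frac = repeated_trials()
# Standard concentration bounds make frac a reliable estimate of the
# true fraction, so "most strings satisfy P" is confirmed or rejected
# without any proof or refutation from accepted axioms.
print(frac)
```

The point is that the estimate comes with a statistical guarantee rather than a derivation from axioms, which is exactly the sense in which Challenge 3 is already met.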
But I am not aware of a single case where the statistical method of
repeated trials has actually been carried out for challenge 3 but where one
could not also provide a computer aided rigorous proof using very weak
axioms. Here some relevant computational number theoretic or algorithmic
experts could help me. So:
CHALLENGE 4. Meet challenge 3 in a case where one cannot also provide a
computer aided rigorous proof when pressed to do so by standard techniques.
When it comes to Pi-0-2 sentences and higher, the situation seems far more
difficult.
CHALLENGE 5. Find a way to confirm or reject a Pi-0-2 sentence other than
finding a proof or refutation of that statement from accepted axioms.
Here, if the Pi-0-2 sentence is not equivalent to a Pi-0-1 sentence, so the
existential quantifiers seriously involve numbers far bigger than the
preceding universal quantifiers, then this challenge is particularly
difficult.
But all of this is not going to help with the construction of any real
analogy between set theory/mathematics and science/experiments unless
additional challenges can be met:
CHALLENGE 6. In any of the cases, require that the sentence being confirmed
or rejected is proved or refuted by new axioms beyond ZFC, but without any
known way of converting that proof or refutation to one within ZFC.
Or much more convincing still:
CHALLENGE 7. In any of the cases above, require that the sentence being
confirmed or rejected is proved or refuted by new axioms beyond ZFC, but
cannot be proved or refuted within ZFC in any reasonable number of steps.
Until serious advances have been made on such challenges, any talk of
serious analogies between set theory/mathematics and science/experiments
seems to me to be unwarranted, and looks like far-fetched wild speculation.
I should add that there are some strong derandomization conjectures in
computer science which are strongly believed by the experts, and seem to
bear very negatively on the prospects for such challenges. I.e., they
conjecture that P = BPP, which informally means that any polynomial time
algorithm based on coin tossing can be replaced by a deterministic
polynomial time algorithm.
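The kind of coin-tossing polynomial-time algorithm the P = BPP conjecture concerns can be illustrated by polynomial identity testing, a standard example (not from the original post) of a problem with a fast randomized algorithm but no known fast deterministic one. A sketch, with illustrative polynomials:

```python
import random

def polys_equal_mod_p(f, g, p: int = 2**31 - 1, trials: int = 20) -> bool:
    """Randomized identity test for two low-degree polynomials f and g,
    given only as evaluation procedures over the integers mod p.  By the
    Schwartz-Zippel lemma, if f and g differ as polynomials of degree
    much smaller than p, a random evaluation point exposes the
    difference with high probability."""
    for _ in range(trials):
        x = random.randrange(p)
        if f(x) % p != g(x) % p:
            return False  # definitely unequal
    return True  # equal, up to error probability (deg/p)^trials

# (x + 1)^2 versus x^2 + 2x + 1: identical polynomials in disguise.
f = lambda x: (x + 1) ** 2
g = lambda x: x * x + 2 * x + 1
h = lambda x: x * x + 2 * x  # differs from f by the constant 1

print(polys_equal_mod_p(f, g))  # True
print(polys_equal_mod_p(f, h))  # False
```

If P = BPP, the coin tossing in such algorithms can in principle be eliminated, which is why the derandomization conjectures bear negatively on any hope of confirming sentences by statistical means alone.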
There is also the prospect of running quantum experiments whose outcomes
are predicted with the help of large cardinals but not predicted in ZFC.
There are corresponding challenges associated with this idea. At the moment
this is incomparably more difficult to pull off than major steps in the
quantum computing area that are already far beyond what we can do
theoretically or technologically. Furthermore, I have been told that papers
are just appearing now in the physics literature that start to give severe
physical limitations as to what incomparably more modest things can be
accomplished by quantum computing.
All of these incredibly difficult challenges - with next to nothing
accomplished yet - are stated against the background of corresponding
routine everyday achievements in the sciences, arguably for thousands of
years, many of which are permanent and celebrated icons. Newton and
Einstein are just the most famous of these.
>>I view Woodin's and related programs by the set theorists as serious
>>attempts to argue for new axioms for set theorists along the lines of
>>convenience and usefulness. In a way, it promises to be somewhat similar to
>>the process that I described in my previous e-mail for the adoption of new
>>axioms by the mathematical community. Only it is aimed at the set theory
>>community. It is not aimed at the general math logic community or the math
>>community, neither of which is likely to find such things either convenient
>>or useful.
>Today.
But you could make this same response to practically anything I would say
here.
>>The principal relevant change in the mathematics community is the
>>intensification of the move away from set theoretic problems or problems
>>with any nontrivial set theoretic content - that started in earnest in the
>>1960's - and towards concrete mathematics.
>And intellectual trends never reverse themselves? I didn't claim the change
>was in the direction I was talking about. Only that any static snapshot of
>that kind gives no reliable evidence concerning future developments.
But you could make this same response to practically anything I would say
here.
>>Only because they have so much to gain by the acceptance of new axioms for
>>what they regard as normal mathematics. They are not going to entertain new
>>axioms for the purposes of abnormal mathematics.
>As I said, they will only accept them if convinced that they give reliable
>results. And once accepted, they will naturally also accept other
>consequences of these axioms even if they lie in what you call "abnormal"
>mathematics and even if these consequences are remote from their interests.
I disagree on two counts. First of all, they do not have any working
concept of reliability for normal mathematical statements independent of
proofs or refutations. Those incredibly difficult challenges would have to
be met in just the right ways in order to begin to change this situation.
Secondly, they will recognize that the particular choice of axioms is
decided by coherent pragmatism. For instance, they will make a choice of
whether to accept the complete probability measure or a measurable cardinal
(or some combination), and they will realize that coherent pragmatism was
behind that choice. So they will resist the idea that they are deciding
anything like the "truth" of the consequences of these axioms for
statements higher up. They will remain agnostic about it. They will not
regard notCH as knowledge.
Specifically, they will give credit to people using either of the two
axioms when they use them to "settle" arithmetic questions. They won't even
like to use the word "true" here; they will avoid "true" and use "settle".
But they will not credit themselves for using these axioms to "settle"
questions relatively high up. In particular, after deciding to accept, say,
the existence of a complete probability measure, explicitly on the grounds
of coherent pragmatism, they are not going to turn around and credit each
other for having "determined the truth value of the CH to be false" or
"settled CH".
>>This way of using "true" in this manner is utterly foreign to the general
>>mathematical community. They will take an entirely pragmatic position.
>
>And they may be wrong. Emil Post said that Goedel's work "must inevitably
>result in at least partial reversal of the entire axiomatic trend of the
>late nineteenth and early twentieth centuries, with a return to meaning and
>truth as being of the essence of mathematics." The question of how to
>regard the large cardinal hierarchy forces the issue. Using words like
>"useful" or "convenient" or "pragmatic" just dodges the issue. What reason
>is there for believing that the combinatorial consequences you draw from
>these axioms are correct except that the axioms are in some sense true.
Correctness in the sense that you mean has virtually no prospect of being
accessed in any known way except through proofs and refutations, and
therefore correctness in the sense that you mean does not enter the
picture. Only coherent pragmatism with a feeling of consistency is expected
to enter into the picture.
If set theory/mathematics were in fact like science/experiments, then
correctness would enter the picture. But since they seem destined to be
radically different for the foreseeable future, correctness will not enter
the picture for the foreseeable future. Only consistency enters the picture
- because consistency can be accessed on the negative side. This is a very
natural outgrowth of the stark fact of the absence of methods for
confirmation and rejection.
I would like to point out a feature of the present axiom candidates which I
think is of significance.
Each one asserts the existence of some object that has strong properties
without naming or giving any example of such. In fact, the properties in
question are known to fail for all normal mathematical objects.
This is in contrast to the axioms of ZF, which are not of this character.
The AxC is of course of this character. However, it is distinguished by the
fact that it is regarded by so many as self evident, whereas the (stronger)
present axiom candidates are not regarded as self evident by many people.
For some, this lack of specificity in AxC is one of many coherently
pragmatic reasons for adopting V = L. Under V = L, one derives AxC and
gives a recipe for choosing.
On the other hand, I expect to eventually develop a body of attractive
normal mathematical statements that are equivalent to the 1-consistency
and/or consistency of very large cardinals. But these very large cardinals
are well known to be incompatible with V = L.
There will be a period of proposals for how to proceed after the new normal
mathematical statements have been worked out and accepted as having that
critical level of normality, naturalness and interest. There will probably
be conflicting proposals for new axioms, including versions of the very
large cardinals that are compatible with V = L, if that proves to be
elegant, coherent, simple, and useful enough. But coherent pragmatism will
prevail, and there will be a desire to usefully maximize the power of the
new axioms, with a premium paid to staying as close as possible to normal
mathematical objects.
At the core of any new axioms accepted will be statements that are
equivalent to large cardinals for the purposes of arithmetical (and
somewhat higher) consequences, thus assuring the permanent relevance of
large cardinals.