[FOM] On the Nature of Reals

Dmytro Taranovsky dmytro at mit.edu
Thu Oct 27 17:03:03 EDT 2016


There are two views of real numbers:
A. Every real number is given by some rule.
B. Some (almost all) real numbers are random and not rule-based.

I am not sure about the ultimate answer, but my conjecture is that as
rules become increasingly expressive, rule-based reals become
increasingly indistinguishable from random reals, so the distinction (if
there is one) cannot be made in (V, in).

While the true theory of V remains uncertain, a study of canonical 
models of increasing complexity shows a clear pattern.
1. There is a well-ordered progression from weaker to stronger axioms.
2. As the strength increases, so does the complexity of the
well-ordering of the reals in the model.
3. Non-existence of a well-ordering of the reals at a particular
complexity level corresponds to the model agreeing with V at that level.

At (in a sense) the lowest level, HYP (the hyperarithmetic sets) has
only the Delta^1_1 reals, can compute Sigma^1_1 truth (but is not
Sigma^1_1 correct), and requires little strength for a reasonable
axiomatization.  The constructible universe L goes further, with the
reals that are Delta^1_2 in a countable ordinal, and is Sigma^1_2
correct, but it fails badly at higher levels of expressiveness.  The
minimal iterable inner model with n Woodin cardinals, M_n, contains all
reals that are Delta^1_{n+2} in a countable ordinal, and it can compute
Sigma^1_{n+2} truth.
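
In symbols, the characterizations just described can be summarized as
follows (in LaTeX notation; the L case is a standard characterization,
and the M_n line records the containment asserted above):

    \mathbb{R} \cap L \;=\; \{\, x : x \text{ is } \Delta^1_2 \text{ in a countable ordinal} \,\}
    \mathbb{R} \cap M_n \;\supseteq\; \{\, x : x \text{ is } \Delta^1_{n+2} \text{ in a countable ordinal} \,\}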

In generic extensions of these models, the difference between rule-based
and other reals becomes increasingly complex.  For example, in the
generic collapse Coll(omega, <inaccessible)^L, the rule-based reals are
the ones in L, and the predicate for being rule-based is Sigma^1_2, but
in the analogues using higher canonical models, the predicate for being
rule-based (or just a predicate that holds for rule-based reals but
fails for some real) becomes increasingly complex.
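
As an aside on why the predicate is Sigma^1_2 in this case:
constructibility of a real can be expressed by quantifying over
countable well-founded models, roughly as follows (a standard sketch,
in LaTeX notation):

    x \in L \iff \exists \alpha < \omega_1 \, (x \in L_\alpha)
            \iff \exists M \, (M \text{ is a countable well-founded model of } V{=}L
                 \text{ (plus enough of ZF) with } x \in M)

With M coded by a real, the second form is Sigma^1_2 (well-foundedness
is Pi^1_1 and satisfaction is arithmetic in the code).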

At the Sigma^2_1 level, (3) is controversial (CH is controversial), and
current inner model theory (even aside from the iterability problem) is
limited to reals that are Delta^2_2 in a countable ordinal, so a skeptic
may claim that the pattern will end there.  However, a more optimistic
hypothesis is the following:
* For each definable level V_alpha, there is a large cardinal notion phi 
such that canonical models of phi are correct about V_alpha. (2) and (3) 
continue for all levels of expressiveness.

Finding a true reasonably complete axiomatization of set theory is the 
most important problem in foundations of mathematics (aside from 
sociological problems such as getting enough interest in foundations).  
If the above hypothesis is true, the problem can be solved by
- Finding large cardinal axioms.  Supercompact cardinals (and their
strengthenings) are a natural guess to reach beyond third-order arithmetic.
- Choosing additional principles to get a canonical theory, perhaps one 
with conditional generic absoluteness.
- Developing a deep theory (including inner model theory) showing 
consequences of the axioms as well as how different principles relate to 
each other, and making a compelling argument that the axioms are true.  
(Alternatively, for formalists, arguing that set theory is better with 
these axioms than otherwise.)

We do not know what the theory is, but a broad vision of definability,
including the definability of real numbers, is possible.

There are two measures of complexity of real numbers.
1. The complexity of definitions of the numbers.
2. The complexity of comparing the numbers in a well-ordering or,
equivalently, the complexity of definition from ordinal parameters.
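
One way to make the two measures precise for a single real r (a rough
sketch in LaTeX notation; Gamma ranges over levels of a definability
hierarchy such as the projective pointclasses):

    c_1(r) = \text{the least } \Gamma \text{ such that } \{r\} \text{ is } \Gamma\text{-definable}
    c_2(r) = \text{the least } \Gamma \text{ such that } \{r\} \text{ is } \Gamma\text{-definable from an ordinal parameter}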

While the first measure is more direct, the second measure is more 
'honest' when complexity is measured inside different canonical models.  
For example, in L there are reals that are not Delta^V_2 (or Delta^V_n) 
definable (in L), but L betrays the simplicity of its reals by having a 
Delta^1_2 well-ordering, and similarly with other canonical models.  If 
V has random data coded into it (for example, through Easton iteration), 
then random reals might be ordinal definable, but if there is no such 
coding and there is sufficient symmetry, then only rule-based reals
appear to be ordinal definable.
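
To sketch how the Delta^1_2 well-ordering arises: the canonical
well-ordering of the reals of L is by order of construction, roughly
(in LaTeX notation)

    x <_L y \iff x \text{ appears before } y \text{ in the } L_\alpha \text{ hierarchy,}

and, as with constructibility itself, this can be expressed by
quantifying over countable well-founded models of V=L, which makes it
Sigma^1_2 and, on the reals of L, Delta^1_2.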

The indistinguishability of complex rule-based reals from random reals
corresponds to the existence of rule-based reals of arbitrary complexity
from ordinal parameters.  A related second property, which for natural
models holds to the extent that the first property holds, is the
existence of definable uniformizations for definable subsets of R^2 of
arbitrary complexity.  Sometimes there is a complexity gap to get
uniformization:  to get a definable uniformization of
{(r,s) in R^2: s not in HOD(r)^{L(R)}} one has to go beyond L(R), and
with {(r,s) in R^2: s not in HOD(r)} one has to go beyond (V, in).  The
gap corresponds to the gap between the definability sufficient to say
"r is ordinal definable" and ordinal definability itself, and for
natural models, the gap is (in a sense) the minimal possible.
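
For reference, the uniformization notion used here: F uniformizes
A \subseteq R^2 if F picks exactly one witness from each nonempty
section of A, i.e. (in LaTeX notation)

    F \subseteq A \;\wedge\; \forall r \, \big( \exists s \, (r,s) \in A \;\rightarrow\; \exists! s \, (r,s) \in F \big)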

Using our general conjectures on definability of reals, and 
extrapolating from known canonical models, here are some specific 
conjectures:
1. HOD(R) satisfies AD.
2. For all positive integers m and n and real r, either Sigma^m_n(r) or 
Pi^m_n(r) has the scale property.
3. For every ordinal kappa, there is an ordinal lambda and an ordinal 
definable real r such that Theory(V_kappa) = Theory(V_lambda) and r is 
not ordinal definable in V_lambda.

(3) formalizes that for V as a whole, the gap between definability and 
ordinal definability is minimal.  One can extend (3) to ordinal 
definability in V_alpha (instead of V) for appropriate alpha, and even 
combine it with uniformization.
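
Written out, conjecture (3) is the following transcription of the prose
(in LaTeX notation; OD abbreviates "ordinal definable"):

    \forall \kappa \, \exists \lambda \, \exists r \in \mathbb{R} \cap \mathrm{OD} \;
    \big( \mathrm{Th}(V_\kappa) = \mathrm{Th}(V_\lambda) \;\wedge\; r \notin \mathrm{OD}^{V_\lambda} \big)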

As for the Continuum Hypothesis, five schools of thought can be 
summarized as follows:
- All sets are countable (or even finite).  ZFC is accepted for 
convenience, but it is not clear whether resolving CH through new axioms 
will eventually be desirable.  One can still entertain arguments on 
whether CH or its negation leads to a nicer theory of sets.
- Every real number is rule-based -- CH (and GCH) is true; likely V 
equals ultimate L.
- c is as large as possible -- c is real-valued measurable.  AD might
have been appealing if AC were not so ingrained in current mathematics
and if one viewed reals as random but sets of reals as rule-based.
- Sets of cardinality omega_1 have many of the nice properties of 
countable sets -- c=omega_2, and PFA (Proper Forcing Axiom) is popular.
- The most coherent theory of V as a whole -- GCH is true.

Sincerely,
Dmytro Taranovsky
http://web.mit.edu/dmytro/www/main.htm

