FOM: Church's thesis and complexity theory
Vladimir Sazonov
sazonov at logic.botik.ru
Mon Nov 2 16:03:50 EST 1998
Joe Shipman asks for a
> sharper definition of "effective computability"
in connection with complexity theory and physical reality.
In another posting he also gave some clarifications to avoid
confusion between "effective computability" and "feasible
computability". He also writes
> I am looking for an answer that is informed by our knowledge of physics
> (and possibly other sciences) -- there must be some dependence of the
> answer on our being inhabitants of THIS universe rather than disembodied
> intellects. I see Church's thesis as a statement of physics.
I am not sure that I understand the difference between "effective
computability" in the physical sense and "feasible computability"
(cf. Shipman's posting for his explanations). Of course,
feasible computability in a general sense is related to our
physical reality and may have various aspects (parallelism, quantum
computation, etc.). So, in fact, I do not see any difference at
all. Maybe Joe means a difference like the one proposed by
Friedman: arithmetic as a mathematical vs. a physical theory?
I personally think more in terms of mathematical theories, even if they
arise from some physical assumptions. A physical theory differs
crucially from a mathematical one in the role that (formal!) proofs
play in each science. Thus, when speaking of "physical effective
computability", I prefer to consider it a *purely
mathematical* theory (by its methods), based on or related,
perhaps not in a very direct way, to some reality.
Even (and maybe especially!) in this case I would not consider
Church's Thesis (CT) a statement of physics.
I think that CT can no more be violated than our present
*feeling* that the ordinary epsilon-delta definition of a
continuous function is a *reasonable* one (despite the existence
of continuous but nowhere differentiable functions and the like).
Also, I will try to demonstrate below that the meaning of CT
depends on the context.
When we have a *reasonable* definition or formalization, we may
only replace it with, or consider alongside it, another equally
reasonable definition. The previous version remains alive. CT
just expresses our great satisfaction with a given precise
definition *with respect to some kind of intuition* of
computability. Starting from a *different* intuition or reality,
we could in principle arrive at a different definition
(formalization) of the concept of computability, with a
corresponding new "thesis" that this definition is OK (if any).
So, we may have many different mathematical definitions
(formalisms) which are OK, each in its own sense, with its own
domain and limits of applicability and with corresponding
exceptional cases. Nothing rejects anything. Only our attention
may shift somewhat, depending on our goals, current
interests and the domain of applicability.
Also, our (formal!) definitions may strongly depend on the
framework in which they are given. The ordinary framework for
computability theory is PA or even ZFC and the corresponding
*concept of finiteness*. Then, taking into account even naive
complexity-theoretic considerations and our computing practice,
we "see" that the exponential function, however computable (from
the point of view of PA), is actually "bad". On the other
hand, polynomials, unlike the exponential, are in a sense
"inevitable". Take a finite domain with n elements and consider,
say, three variables ranging over this domain. Then we have
n^3 possible combinations of their values, i.e. a polynomial. More or
less in this way we "inevitably" arrive at PTIME-computability. On the
other hand, we know very well that polynomials of sufficiently
high degree are also very "bad", as is the exponential.
Mathematically, PTIME is very elegant, has nice closure properties,
etc. This leads us to S. Cook's thesis for PTIME. But, *as usual*
for any formalization, there are always some exceptions.
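The counting argument above can be sketched in a few lines of Python
(my own toy illustration, not from the posting): assignments to a
fixed number of variables over an n-element domain grow polynomially,
while, say, the subsets of that domain grow exponentially.

```python
# Toy illustration: the number of assignments to k variables over an
# n-element domain is the polynomial n^k, while the number of subsets
# of the domain is the exponential 2^n.
def tuple_count(n, k=3):
    """Number of k-tuples over an n-element domain: n^k."""
    return n ** k

def subset_count(n):
    """Number of subsets of an n-element domain: 2^n."""
    return 2 ** n

for n in (10, 20, 40):
    print(n, tuple_count(n), subset_count(n))
```

Already at n = 40 the exponential count (2^40) dwarfs the cubic one
(40^3 = 64000), which is the naive sense in which the exponential is
"bad" and polynomials are comparatively tame.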
We could choose another framework, different from PA or ZFC: say,
postulating that there exists a largest natural number \Box
(an abstract *resource bound*, without postulating anything about
how large or small it is; it may even be "non-standard"! or it may
even be fixed, say = 1000; the latter is in reply to Martin Davis's
note:
> I think (idiosyncratically) that talk about feasible computation as
> identifiable with PT-computability is fundamentally flawed because of the
> strong finiteness of everything real.
) we come very naturally to the notion of (\Box-)recursive functions
on this (indefinitely long) finite initial segment of the natural
numbers. By comparing this approach with ordinary complexity theory in
the framework of PA or ZFC, we conclude that "\Box-recursive" = (i.e.
corresponds to) "PTIME-computable" (a result of mine and, independently,
of Y. Gurevich) and that "\Box-primitive recursive" = "LOGSPACE-computable"
(Y. Gurevich). There are also analogous results in more logical terms
(N. Immerman and M. Vardi).
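A minimal sketch of what arithmetic with a largest number \Box looks
like (my own hypothetical model, with \Box fixed to 1000 purely for
illustration): all operations are defined by bounded recursion and
saturate at the bound.

```python
# Hypothetical toy model of \Box-arithmetic: the numbers are
# {0, 1, ..., BOX}, and the successor is truncated at BOX, since
# there is no number beyond the resource bound.
BOX = 1000  # the abstract resource bound, fixed here only for illustration

def succ(x):
    """Successor, truncated at BOX."""
    return min(x + 1, BOX)

def add(x, y):
    """Addition defined by bounded primitive recursion on y."""
    for _ in range(y):
        x = succ(x)
    return x

print(add(3, 4))    # ordinary addition inside the segment
print(add(999, 5))  # saturates at BOX
```

Of course this fixes \Box concretely, whereas the posting's point is
that nothing need be postulated about its size; the sketch only shows
the flavor of recursion over a finite initial segment.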
Alternatively, our framework could be just (some version of)
Bounded Arithmetic (BA) with the axiom \neg EXP, which says that
the ("bad") exponential function is simply a *partial* function
(undefined somewhere). (This is consistent with BA in the ordinary
sense of the word "consistency".) Strictly speaking, BA is a theory
of finite binary strings rather than of natural numbers. BA may be
considered a *weak* formalization of the feasibility concept,
because \neg EXP, which (weakly) reflects our intuition of
feasibility, is consistent with it. In a sense, all finite binary
strings of this theory are (weakly) feasible, and BA may be
considered as describing (weakly) our "real" world. It is more
"realistic" than PA.
Then we may formalize in this framework the notion of a Turing
machine, as in the case of PA, and define the class of Turing
computable functions. Then the ("bad") exponential proves to
be just *partial recursive*. All partial recursive functions in
this framework may be considered as intuitively corresponding to
(partial) computability in our "real" world. Also note that the
*provably total* Turing computable functions in BA are exactly the
PTIME-computable ones. Partial recursive functions in this
framework satisfy the ordinary properties: a universal function
exists, and the s-m-n and recursion (i.e. fixed point) theorems hold.
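The s-m-n theorem mentioned here says, informally, that hard-wiring
some arguments of a program yields, effectively, a program for a
function of the remaining arguments. A rough Python illustration (the
names `universal` and `s_1_1` are my own; real s-m-n is stated for a
Goedel numbering of programs, not for Python strings):

```python
# Informal illustration of the s-m-n idea: specializing a two-input
# "program" to a fixed first input yields a new program in one input.
def universal(program, x):
    # Stand-in universal function: run `program` on input x.
    return eval(program, {"x": x})  # illustration only

def s_1_1(program, a):
    """Return a new program with the input y hard-wired to the value a."""
    return program.replace("y", str(a))

prog = "y * x + 1"        # a two-input "program" with inputs y and x
spec = s_1_1(prog, 7)     # specialize y := 7, giving "7 * x + 1"
print(universal(spec, 5)) # 7*5 + 1 = 36
```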
Note that the arguments of a computable function are now feasible
and, in the case of halting, the result of a computation and the
time and space used are also feasible. It is possible to consider
an intuitionistic version of such a BA (an analogue of Heyting
arithmetic HA) with formalized Church's Thesis and Markov's
Principle, and with the corresponding Kleene realizability defined
almost word for word. (There are some important subtleties. It is
interesting that the constructiveness of Markov's Principle in its
*direct* formulation proves here to be equivalent to the equality
P=NP.) These considerations are my own. Note also some work
on polynomial-time realizability for an intuitionistic version of BA,
in somewhat different terms, by S. Buss, and by S. Cook and A. Urquhart.
Also, we could postulate, instead of \neg EXP, a *stronger* axiom
which says that the exponential function is partial at some fixed
and not very large argument, say 1000 (or postulate that log
log x < 10 for all x). I believe that in such a framework (of
a theory of feasible numbers) we can again define, in some reasonable
(it seems not unique) way, the corresponding notion of "feasible"
computability, "feasible" partial recursive functions with
corresponding version(s) of CT, and feasibly constructive
arithmetic. Thus, the exponential function is again "feasible" (*if*
it is defined, then the result is a feasible number), but partial.
The framework of feasible numbers is essentially very new and
surely much more delicate even than the case of BA. It is
interesting what the provably total recursive functions would be
here. (Say, multiplication is not provably total!)
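A back-of-the-envelope check (my own illustration) of why
multiplication fails to be total under the axiom "log log x < 10":
every number then satisfies x < 2^(2^10) = 2^1024, yet the product of
two such numbers can escape the bound.

```python
# Under "log log x < 10" (logs base 2), every number satisfies
# x < 2^(2^10) = 2^1024. Take two numbers below the bound whose
# product reaches it.
BOUND = 2 ** (2 ** 10)          # 2^1024
a = 2 ** 512                     # log log a = log 512 = 9 < 10: feasible
b = 2 ** 512                     # likewise feasible
assert a < BOUND and b < BOUND   # both factors satisfy the axiom
print(a * b >= BOUND)            # True: the product violates it
```

So a theory with this axiom cannot prove that every two numbers have a
product, which is the sense in which multiplication is not provably
total there.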
Thus, by changing our basic theory from PA to some more "realistic"
theories of feasibility, we get formally (almost) the same
CT, which in the new context has a different and actually rather more
"realistic" meaning than it had in the framework of PA. If PA
is not changed, then we impose some restrictions (on time, space,
etc.) or generalize something, say, by allowing oracles. In any case,
we get some *formal*, mathematical definition in some framework,
with a "thesis" that it is OK! This is ordinary mathematical
activity.
> Both Freeman Dyson (in the case
> of an open Universe) and Frank Tipler (in the case of a closed Universe)
> have argued that arbitrarily long computations are permitted by the laws
> of physics.
What do they mean by "arbitrarily long computations are permitted
by the laws of physics"? Say, as long as 2^{2^{2^1000}}? Is this
number "permitted" if it is known (from physics!) to be much, much
bigger than the number of electrons? Can anybody comment? Also, what
does "infinity" mean in physics? Say, I consider that the feasible
numbers, which are closed under successor but < 2^1000, constitute an
infinite "semiset" ("semiset" in the terms of P. Vopenka). Thus I
personally am lost when I see the word "infinity" in such a
nonmathematical context. Cf. also the last posting from Shipman
on 02 Nov 1998 14:43:52. It seems that the meaning of the terms
finite/infinite in physics strongly depends on what mathematical
foundations suggest to physicists, and it may change if we see that
the ordinary meaning is too rough. Thus, as follows from the above,
I consider that our physical universe is *infinite*, despite the
fact(?) that the number of electrons in it is < 2^1000.
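For a sense of scale (using the standard rough physics estimate of
about 10^80 particles in the observable universe, which is my own
illustrative input, not a claim from the posting): even the innermost
2^1000 in the tower 2^{2^{2^1000}} already dwarfs the electron count.

```python
import math

# Rough comparison: the commonly cited estimate of ~10^80 particles in
# the observable universe versus 2^1000.
electrons = 10 ** 80
print(math.log2(electrons))   # about 265.75, so 10^80 < 2^266
print(2 ** 1000 > electrons)  # True: 2^1000 already exceeds the estimate
```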
Vladimir Sazonov
-- | Tel. +7-08535-98945 (Inst.),
Computer Logic Lab., | Tel. +7-08535-98953 (Inst.),
Program Systems Institute, | Tel. +7-08535-98365 (home),
Russian Acad. of Sci. | Fax. +7-08535-20566
Pereslavl-Zalessky, | e-mail: sazonov at logic.botik.ru
152140, RUSSIA | http://www.botik.ru/~logic/SAZONOV/