FOM: Feasibility and (in)determinateness of the standard numbers

Vladimir Sazonov sazonov at logic.botik.ru
Fri Apr 3 12:58:36 EST 1998


Torkel Franzen wrote:
>
>   Vladimir Sazonov says:
>
>   >The stronger are axioms the more we will have examples of concrete Turing
>   >machines which could be (feasibly) proved to halt and therefore
>   >the more we will have corresponding "provably recursive/existing
>   >natural numbers". (Examples of such comparatively small numbers
>   >provably existing in PRA or even in much weaker systems of
>   >arithmetic:  2^1000, 2^{2^1000}, etc.)
>   >Therefore the "length" of the "standard model" seems to be rather
>   >indeterminate.
>
>   Your argument until the last sentence is clearly correct, but that last
> sentence is a non sequitur. How do you arrive at the conclusion "therefore..."?

Of course it is not by a direct application of the rule modus
ponens (cf. also the quotation marks I use).  You have certain
beliefs in the "standard model", and I have certain non-beliefs,
which are also a (very different) kind of belief.

>   Of course I suspect that I know how you arrived at it: by identifying
> "the standard model [according to a theory T]" with "the numbers that
> can feasibly be proved in T to have notations

or those which are < some number having a notation

> in the language of T" (or
> some closely related concept). Such a terminology is non-standard, and
> can only serve to confuse.

I do not know which confusion you mean. I am just trying to find
any possible explication of the "concept" of standard model,
possibly to reduce the thesis to an absurdity.

> Much better to explain your views by saying
> that you reject the concept of the standard model.

Rather, I do not believe in this concept as a meaningful one in
the way you do.  In particular, I do not understand why some of
our beliefs should acquire any absolute status.

>   Such formulations as "the length of the standard model is indeterminate"
> are of course more striking and newsworthy. Brouwer consistently formulated
> his criticism of classical mathematics in similarly dramatic terms.

What about "the length of the cumulative hierarchy"? Note that,
e.g., Martin Davis "has a problem with it". What is the
difference?  Why not then believe in the analogous concept of a
"standard cumulative hierarchy" or "standard class of all
ordinals" of maximal length?

>   This is clear, I think, already because "feasible" is a vague
> concept, and ordinary mathematical induction is invalid for vague
> concepts.

You know my opinion (or belief) that the so-called "standard
model", whatever it is, is even more vague. It satisfies the
induction axiom only for formulas in a specific signature which
does not involve the feasibility concept. This is the source of
the illusion that it is not vague.

>   >I only say that feasibility concept taken
>   >"foundationally" is not embeddable straightforwardly in the
>   >ordinary mathematics as we know it.  This seems to touch on some
>   >illusions we have.
>
>   I'm sure you're right about feasibility not being straightforwardly
> embeddable in ordinary mathematics. But the illusion you invoked
> before was an illusion involving the natural numbers as classically
> (non-feasibly) understood. It's the claim that our classical
> understanding of and use of arithmetic is somehow based on an illusion
> that I reject.

Thus, you insist that the ideal image of a first-order model
arising in our consciousness when working in a formal system is
not an illusion (possibly having some objective roots in the
real world, in the formal system, in our psychology, in the
objective laws of thinking, etc.), but corresponds to a real,
non-illusory thing or entity. Do you need to believe in this
thing? That is your personal right.  Beliefs can make our life
subjectively easier.  However, it is quite sufficient for me to
have and to use this image with the same effect, while
simultaneously knowing that it is only an illusion.

>   Undoubtedly it's difficult for most of us to absorb and take
> seriously concepts that we didn't absorb when we were young. However,
> I think many people will be prepared to study feasibility if they see
> it as potentially useful or enlightening. Even Yesenin-Volpin's stuff,
> with its tactics of attention and whatnot, is not beyond the realm of
> the potentially mathematically meaningful. But we lazy bums

Why such irritation?

> who are
> not captured by the intrinsic interest of such developments need some
> incentive to study them.

This is a very good question. I am not sure that I have an
equally good answer, but I will try to give one.

The feasibility concept may illuminate the relation of
mathematics to practical computability in a more direct way.
Potentially, this can be considered as an alternative approach
to complexity theory at the level of foundations, rather than
the traditional estimation of complexity measures by
asymptotics. Take, for example, chess as the algorithmic problem
of finding a winning strategy. What kind of asymptotics can be
considered in this case (an 8 x 8 board)?  Analogously, in real
computational practice, complexity problems like "P=NP?"
actually have such a restricted form and essentially do not
lose their meaning under such a restriction.  However, ordinary
mathematics, which completely neglects feasibility as a
primitive concept, considers these finite problems to be
trivially decidable. It seems that a general theory of such
problems can be properly approached via feasible numbers, and
that this could shed new light on the nature of these problems.
Anyway, the practical source of complexity theory is exactly
the need for feasible algorithms, in the sense of feasibility
discussed here. The asymptotic approach is very reasonable, and
seemingly the only one possible in the absence of a proper
concept of feasibility.

The feasibility concept may also help in formalizing vague
concepts, in treating paradoxes like that of the heap, in
understanding the relation between the discrete and the
continuous, and in developing "feasible" non-standard analysis
and feasible constructivism. (Some interesting notes on the
latter subject are presented in the book of Troelstra and
van Dalen, "Constructivism in Mathematics".)

>   >Which rule then generates 2^1000 (as a set of binary strings)?
>   >Of course, enumerating all these strings in the lexicographical
>   >order is irrelevant as an infeasible and highly non-realistic
>   >process.
>
>   My comment was ambiguous (in the ordinary way): by "generated by a rule"
> I didn't mean "actually generated" but "potentially generated". 2^1000 is
> generated by the rule(s)
>
>      s0 is in M, where s0 is the sequence of zeroes of length 1000
>      if s is in M then s' is in M, where s' results from s by
>                    changing a 0 to 1
>
>   There are no such rules that generate 2^N.

Yes, *such* a rule cannot generate 2^N.  But does this
nondeterministic procedure generate any *determinate* collection
of (intuitively more and more complex, say, of arbitrary
feasible complexity) binary strings, *unlike* the deterministic
procedure generating an *indeterminate* or vague collection of
(larger and larger) feasible numbers?  What is the difference?
Shall we consider only those applications of this rule really
existing now, or those existing in the future, or all imaginary
ones?  What do these "all" and "imaginary" mean?  Why should
"imaginary" be considered as something determinate at all, and
how does it differ from the concept of the also imaginary "set"
{0,1}^1000 itself?
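As a small illustration of the quoted rule itself (not of the
feasibility issue, which arises precisely when exhaustive
application becomes impossible), here is a sketch in Python that
takes length 4 in place of the infeasible 1000 and computes the
closure of the nondeterministic rule; the function names and the
choice of length are mine, for illustration only:

```python
N = 4  # stand-in for 1000; with 1000 the closure is infeasible to list

def successors(s):
    """All strings obtained from s by changing a single 0 to 1."""
    return {s[:i] + "1" + s[i+1:] for i, c in enumerate(s) if c == "0"}

def closure(start):
    """Everything reachable from start by repeated rule applications."""
    seen = {start}
    frontier = [start]
    while frontier:
        s = frontier.pop()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Starting from the all-zero string, the rule reaches exactly the
# "set" {0,1}^N, of size 2^N.
M = closure("0" * N)
print(len(M))  # 16 when N = 4
```

For N = 4 the closure is all sixteen binary strings of length 4;
the point under discussion is what, if anything, this procedure
determines when exhausting it is not a feasible process.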

I personally believe that both 2^N and 2^1000 are indeterminate.
You believe that only 2^N is indeterminate.  What is the reason
for the distinction you make?



It seems that our beliefs and non-beliefs related to the
"standard model" lead us into a vicious circle. A reasonable,
"neutral", and fruitful solution in general would consist in
representing our beliefs and intuitions by a formal system.
Then we may have solid ground even for rather unclear beliefs.
After formalization we can even get the impression that our
unclear beliefs have become crystal-clear.  This is the main
advantage of the mathematical approach in general. For example,
the limit and continuity concepts were rather unclear before
formalization.  Also, in the framework of the formal system ZFC,
Gödel and Cohen have *actually* demonstrated, in some concrete
terms, why and how 2^N is indeterminate.  Analogously, we should
consider suitable formalisms for the feasibility concept in
order to better understand the nature of 2^1000.  Say, my belief
in the indeterminateness of the "set" {0,1}^1000 is represented
by different, seemingly non-equivalent, forms of quantification
or abbreviation rules over binary strings allowed in a
corresponding formal system of feasible arithmetic. If it is
demonstrated that these rules are indeed inequivalent, then we
will know with more certainty that this indeterminateness
holds.  (There are some reasons to hope that such an
"indeterminateness" result could also give rise to a negative
solution of the problem "P=NP?". Thus, it is actually not an
easy question.)


Vladimir Sazonov
--
Program Systems Institute,  	| Tel. +7-08535-98945 (Inst.),
Russian Acad. of Sci.		| Fax. +7-08535-20566
Pereslavl-Zalessky,		| e-mail: sazonov at logic.botik.ru
152140, RUSSIA			| http://www.botik.ru/~logic/SAZONOV/


