[FOM] Feasible and Utterable Numbers

Mirco Mannucci mmannucc at cs.gmu.edu
Fri Aug 18 12:09:22 EDT 2006




---- V.Sazonov at csc.liv.ac.uk wrote (in his last postings):


> If the term "fuzzy" (because of participating "feasible"?) will remain
> non-formalized then what will you do with all of this? What EXACTLY do
> you mean by "fuzzy equivalence class"?
>
> For me this is the key question to all considerations related to
> feasibility. Do you intend to make a rigorous mathematics from these
> considerations or not? If yes, the HOW???
>
>

and
>
> My problem with your postings was that I did not understand well your
> point of view and whether what you suggest is really formalisable. (My
> understanding changed several times.) I am still unsure what do you
> really mean, in particular when you refer to vagueness or the like.
> Probably a full detailed text would be more helpful.
>

          $---------------------------------------$

Dear VS,

the fault was entirely mine. Indeed, I have been mixing together, all along, three
different (though loosely related) lines of thought that I am currently dabbling in. Their only
obvious common denominator is that they are attempts to establish a model theory for
ultrafinitism (UF) that would be acceptable to mainstream mathematicians (this is, if I
understand it, also your main goal. I think neither of us is interested in new "FOM cults",
only in clarifying a very important foundational concept).

Here they are:

1) Categorical models of UF based on Feasible Realizability (see my question/posting
"Feasible Realizability" of April, to which I got two excellent replies, yours and Andrej
Bauer's). I will sketch this approach in a separate posting, under the header
---Feasible Realizability II---, where I will provide at least the bare bones of the program;
(some) details are in a manuscript called --Feasible Objects in the Effective Topos--, which
will eventually see the light of day (for now it is just an unstructured and messy set of notes).

I can anticipate now that this approach USES topos theory à la Hyland, but it is NOT topos
theory (I am not a fan of intuitionistic logic; it is way too moderate for my taste and for the
purposes of UF). Instead, it aims at selecting a subcategory FS of (or a category over) the
Hyland topos that represents "feasible sets" inside the larger universe of "computable sets".
FS will not be a topos, but its "internal logic" will make it a nice vacation spot
for ultrafinitists (or so I hope!).


2) An approach in the Parikh-Sazonov spirit, i.e. using a "feasibility predicate" F() on
top of a standard arithmetical theory (such as Q). The chief novelties are:

  a) F() is not necessarily monotonic or downward closed (as I said in a previous posting,
     I look for a theory that can accommodate physical constraints, but also psychological,
     biological ones, etc.);
  b) there is no set-in-stone universal bound on what is feasible (I have repeated several
     times my belief that feasibility is a contextual notion; I leave physics to the physicists);
  c) F() can have parameters, allowing for a notion of RELATIVE FEASIBILITY (for example, if
     a previously unfeasible number N1 becomes feasible because of an increase in resources,
     one can consider a new class of numbers that can be obtained from N1 through a feasible
     number of steps);
  d) there can in principle be SEVERAL feasibility predicates, representing the "feasibility
     radius" of someone changing over time, or the feasibility radii of several "agents"
     (individuals, machines, etc.) exploring the arithmetical universe in parallel;
  e) I take Parikh's and your notion of "feasibly consistent theory" very seriously, and thus
     try to establish an ultrafinitistic version of the completeness theorem.
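
To make a) and c) a bit more concrete, here is a minimal axiom sketch (my own illustration,
under simplifying assumptions, not a finished axiomatization). Parikh's original predicate
satisfies, roughly,

           F(0),   F(x) -> F(x+1),   NOT F(t)     (t some huge closed term, e.g. 2^1000);

the variant envisaged here keeps the first and last axioms, deliberately drops the middle
one (no automatic monotonicity), and adds a parametrized family

           F_N1(x)  <->  "x is obtainable from N1 in a feasible number of steps",

so that feasibility can be relativized to a previously reached number N1.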

 This approach is partly based on the material I prepared for the Tennenbaum Memorial at
 MAMLS, and it is almost ready. I'll post the preprint by the end of September.


3) Finally, the more radical notion of NUMBERS as finite similarity CLASSES of terms ((3) would
more aptly be called the Model Theory of UltraFormalism). Notice that I removed the attribute
"equivalence", because I used it in my last postings in a sloppy way. Indeed, as you promptly
and correctly observed, there are issues with transitivity. In a crisp version (for instance,
provable equivalence of terms based on calculations strictly bounded in length and/or number
of symbols by a hard number, say 2^100), what one gets is a TOLERANCE RELATION (i.e. reflexive
and symmetric, but not necessarily transitive), a concept quite familiar to people in pattern
recognition and in the modeling of approximate knowledge (see for instance
www.ida.liu.se/~patdo/patdosite1/pub/2003/gerai03_final_www.pdf).
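
For the computationally minded, here is a toy Python sketch of such a budget-bounded tolerance
relation (entirely my own illustration: the cost measure, the budget, and the use of eval() as
a stand-in for bounded proof search are all arbitrary choices), with the failure of
transitivity on display:

    BUDGET = 40  # stand-in for the hard bound (say, 2^100 in a serious version)

    def work(t: str) -> int:
        # crude cost measure: the length of the term (any monotone measure works)
        return len(t)

    def tol(t1: str, t2: str) -> bool:
        """t1 ~ t2: the terms are equal, and checkably so within the budget."""
        if t1 == t2:
            return True                 # the empty reduction is always feasible
        if work(t1) + work(t2) > BUDGET:
            return False                # the equality check itself is infeasible
        return eval(t1) == eval(t2)     # eval() stands in for bounded proof search

    short = "4"
    long1 = "2*2*1*1*1*1*1*1*1*1*1*1*1*1*1"  # equals 4, via a long-winded term
    long2 = "1*1*1*1*1*1*1*1*1*1*1*1*1*2*2"  # ditto

    assert tol(long1, short) and tol(short, long2)
    assert not tol(long1, long2)  # reflexive, symmetric, NOT transitive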

The chief idea of this approach is deconstructionism: take the term model of arithmetic (aka N)
and tear it apart. There is a lattice of finite term models (FTMs) (T, ~), where T is a
hereditarily closed finite set of closed arithmetical terms and ~ is the tolerance relation.
The FTMs form a lattice under the "merging" operation: take two finite term models and "glue"
them together. The colimit over the entire lattice is precisely N. Of course, this colimit
does NOT exist from the ultrafinitist viewpoint (it is a countable colimit!).
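
Continuing the toy sketch above (again, purely illustrative): merging is just the union of the
carriers followed by a recomputation of the tolerance relation, and the induced inclusion
ordering is the one whose completion yields N:

    def merge(terms1: set, terms2: set, tol) -> tuple:
        """'Glue' two finite term models together (the join in the FTM lattice)."""
        terms = terms1 | terms2
        rel = {(s, t) for s in terms for t in terms if tol(s, t)}
        return terms, rel

    # e.g. merge({"4", "2*2"}, {"2+2"}, tol) yields a larger FTM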

In this sense N, alias the natural number series that we were brainwashed into believing to be
an immediate and crystal-clear entity, is actually the ideal completion of this lattice. Each
FTM is a "concrete arithmetical world" built out of some ground arithmetical theory (say Q).
One could see the entire lattice as a monumental LEGO game: start from the ground theory, which
gives you the rules of the arithmetical game, and play; an individual FTM is a miniature
segment of the natural number series. If Q is inconsistent, the game (alas!) will in due time
collapse: all FTMs above a certain threshold will turn into one-point FTMs!

I just wish to emphasize an important point: the ordering in the lattice is actually twofold.
You can expand an FTM in two ways: by INCREASING your computational power, or by CREATING new
definitions of primitive recursive terms. By moving to faster and faster notations, humanity
reaches farther out into the arithmetical realm. (By the way, Classical Recursion Theory
operates under the assumption that this endless collection of functions and terms is ALREADY
there: a convenient idealization, but at the same time a dangerous one, as it makes us forget
the great work of innovation behind the creation of new terms.)

Numbers in this scheme are not exactly equivalence classes (because ~ is not an equivalence
relation); instead, they are clusters of terms inside a (T, ~) (see again the cited reference,
or related ones, for clustering in tolerance spaces).
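
A brute-force sketch of such clusters (hypothetical, and viable only for very small T): a
cluster is a maximal set of pairwise-tolerant terms, i.e. a maximal clique in the tolerance
graph. Note that, unlike equivalence classes, distinct clusters may overlap.

    from itertools import combinations

    def clusters(terms, tol):
        """All maximal subsets of terms whose members are pairwise tolerant."""
        candidates = [set(c) for r in range(1, len(terms) + 1)
                      for c in combinations(terms, r)
                      if all(tol(s, t) for s, t in combinations(c, 2))]
        return [c for c in candidates
                if not any(c < d for d in candidates)]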

Now, if one does not like crisp tolerance relations, one can use fuzzy tolerance relations, or even fuzzy
equivalence relations (see for instance http://logcom.oxfordjournals.org/cgi/content/abstract/14/6/827).

These fuzzy similarities between terms come up naturally when one thinks of provable
equivalence under calculations whose constraints (length, number of symbols involved, etc.)
are NOT so sharply defined. In other words, when the class of allowable or feasible
computations is a fuzzy subset (or a semiset, if one likes to cast it in the AST framework)
of all calculations.
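
One possible (entirely hypothetical) way to generate such a fuzzy similarity: grade two terms
by how feasible the equality check between them is, instead of cutting it off at a hard
budget. In the style of the earlier toy sketch:

    import math

    def mu(t1: str, t2: str, scale: float = 40.0) -> float:
        """Fuzzy similarity in [0,1]: 1 on identical terms, symmetric, and
        decaying with the cost of the equality check; (min-)transitivity is
        not guaranteed, so this is a fuzzy tolerance, not an equivalence."""
        if eval(t1) != eval(t2):
            return 0.0              # provably distinct terms: no similarity
        if t1 == t2:
            return 1.0              # the empty reduction costs nothing
        return math.exp(-(len(t1) + len(t2)) / scale)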


Allow me to anticipate your reaction on reading this last statement: HOW can I talk about
feasible computations if I do not already have a framework for feasibility in place?

There are, I think, two answers (possibly more):

Answer a): add a feasibility MODAL OPERATOR FR() to Q.

FR(t1 = t2) means "there is a feasible reduction from t1 to t2" in Q.

FR(x = x) is certainly the case, as are FR(x = y) <-> FR(y = x) and transitivity. You then add
an axiom to the effect that, if two terms are feasibly equivalent, then their respective
successors remain feasibly equivalent. Finally, you postulate that there are two terms that
are equivalent in the standard sense, but not feasibly equivalent. If the terms are concrete,
the theory is classically (and intuitionistically) inconsistent (so no Kripke models; but
there will be "models", using FTMs).
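
In symbols, the axioms just described read:

           FR(t = t)                                     (reflexivity)
           FR(t1 = t2) -> FR(t2 = t1)                    (symmetry)
           FR(t1 = t2) & FR(t2 = t3) -> FR(t1 = t3)      (transitivity)
           FR(t1 = t2) -> FR(S(t1) = S(t2))              (successor)
           t1 = t2  AND  NOT FR(t1 = t2)                 (for two specific closed terms)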


Answer b): use the Parikh-Sazonov approach. Arithmetize metamathematics inside your theory and
declare that two terms are feasibly equivalent by means of a computation c iff c is coded by a
feasible number (notice that here the definition depends on how you code metamathematics
inside your theory):

           FR(t1 = t2)  :=  Exists c such that Comp(c, t1, t2) AND F(length(c)),

where Comp(c, t1, t2) means that c is the code of a reduction of the term t1 to t2, length(c)
is the length of that reduction, and F is the Parikh-Sazonov feasibility predicate.
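
Computationally, answer b) amounts to a bounded witness search. A schematic Python rendering
(comp, feasible, and length are mere placeholders for the arithmetized Comp, the predicate F,
and the coded length, so this is only an illustration):

    def feasibly_equivalent(t1, t2, codes, comp, feasible, length):
        """FR(t1 = t2): some c codes a reduction of t1 to t2 of feasible length."""
        return any(comp(c, t1, t2) and feasible(length(c)) for c in codes)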


I prefer a) above, even though it is more painful to spell out. The reason is that you can then DEFINE a
number to be feasible if one of its denoting terms is feasibly reducible to SSSSS...SS0, as I have
already suggested in a previous posting. In other words, in (3) feasibility of numbers is a
derived notion, not a primitive one. This is somewhat akin to what Karlis Podnieks has been
suggesting:

    computations are in a sense more fundamental than numbers themselves.
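
In symbols, the derived notion would read (my own paraphrase):

           Feasible(n)  :=  Exists t ( t denotes n  AND  FR(t = SS...S0) ),

with n occurrences of S in the numeral. For instance, 2^100 is denoted by a very short term,
but (presumably) by no term that is feasibly reducible to its numeral, so the number comes out
non-feasible even though some of its denoting terms are tiny.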


Indeed, as N is the colimit of all the finite term models, EACH number becomes different every
time we find new ways of denoting it, passing from an FTM to a more comprehensive one.

Best Wishes

Mirco Mannucci







