[FOM] The unreasonable effectiveness of Nonstandard Analysis

Sam Sanders sasander at cage.ugent.be
Fri Sep 4 11:38:12 EDT 2015


Discussing Nonstandard Analysis with Harvey Friedman and Frank Waaldijk, I realized
the following analogy regarding Reverse Mathematics and NSA might be helpful.  
This analogy is meant to dispel two commonly held myths about NSA, namely:

i) NSA does not have/provide/yield any numerical meaning.

ii) NSA does not bring much more to the table than short proofs of standard theorems (and pays a heavy logical price for that).  

Here we go:

Reverse Mathematics (RM) is arguably a successful research program in FOM.  Classifying theorems according to “which axioms are needed to prove them” 
turns out to yield a very nice and interesting picture (Big Five, the RM zoo, robustness,…).  The RM classification can also be viewed as being based on computability, i.e.
one wants to know "how non-computable" the objects claimed to exist by a given theorem are (as witnessed by the RM-equivalence class the theorem falls into).
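To have concrete examples in mind (these are the textbook results from Simpson's monograph, quoted from memory, so modulo the exact formulations):

   Over RCA_0:  WKL_0  <->  the Heine-Borel compactness of [0,1].
   Over RCA_0:  ACA_0  <->  the Bolzano-Weierstrass theorem.

In computability terms, a theorem in the ACA_0 class claims the existence of objects that in general compute the Turing jump, while a theorem in the WKL_0 class claims objects that cannot always be found computably, but which are far weaker than the jump.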

Two observations come to the fore here:

1) RM is neutral on whether mathematics should be finitist, computable, predicative, etc.  (IDEOLOGY FREE)

2) RM studies theorems from mathematics, rather than formula classes.   (SHIFT TO PRACTICE)

Both aspects of RM have been important for its success, in my opinion.  


Essentially, I have applied these two aspects (ideology-freeness and shift-to-practice) to NSA, which translates to the following: 

I) (ideology-freeness of NSA) Yes, infinitesimals have a lot of history, and can have properties which apparently even instill fear in, and cause nightmares for, some of the more ardent platonists.  

In light of these properties, one could easily claim, as Errett Bishop and Alain Connes did, that NSA must therefore lack any numerical meaning.  

This ideological claim lives quite strongly in the minds of mathematicians, CS people, philosophers (especially), etc.

This ideological claim is also COMPLETELY FALSE, as shown below.  


II) (shift-to-practice in NSA)  All flavors of NSA (ideally) come with a conservation result:  The NSA systems prove the same formulas (in the original language) as the original systems.
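For instance (and modulo the exact formulations, which should be checked in the original sources): Nelson's IST is conservative over ZFC, and the systems P and H of van den Berg-Briseid-Safarik (the APAL 2012 paper mentioned below) are conservative, for internal formulas, over E-PA^omega and E-HA^omega respectively.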

Why, then, should we do NSA at all, beyond some shorter proofs here and there?  Is all the overhead (logical and ontological) worth the hassle, except in a few specific areas?

This would be the view Paul Halmos expressed concerning NSA.  

However, if we shift our attention to *theorems* of *pure* NSA (where “pure” means “using only the well-known definitions of nonstandard continuity, differentiability, Riemann integration, compactness, …”), 
we observe that such theorems “always” have a very specific normal form, namely “forall standard x, there is standard y, such that A(x,y)”, where A is internal, i.e. does not involve the standardness predicate.   
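To make the normal form concrete, here is the usual example, sketched from memory (the careful version is in the paper below).  Nonstandard continuity of f on [0,1] reads

   (forall st x)(forall y)( x ≈ y  ->  f(x) ≈ f(y) ),

where “x ≈ y” abbreviates (forall st k)( |x-y| < 1/k ).  Unfolding “≈” and pulling the standard quantifiers to the front (using the nonstandard axioms of the system, e.g. idealization), one obtains the normal form

   (forall st x)(forall st k)(exists st N)(forall y)( |x-y| < 1/N  ->  |f(x)-f(y)| < 1/k ),

whose matrix is internal.  Dropping the “st” predicates yields exactly the usual epsilon-delta definition of continuity, with N playing the role of a modulus.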


These two observations lead to the following application, namely the extraction, from NSA, of LOTS of computational content (the technical term I have heard on the other side of the pond is “up the wazoo”):

There is a carefully crafted “term extraction” algorithm based on logical meta-theorems, among others those from the paper by van den Berg et al. (APAL 2012).

On input (proofs of) theorems of pure NSA, this term extraction algorithm will:

a) always yield theorems of usual mathematics, i.e. not involving NSA.

b) yield either theorems of constructive/computable mathematics, or equivalences from RM if the original theorem is non-constructive.  The extracted terms (primitive recursive in the sense of Goedel’s T) witness the existential quantifiers.  
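In slogan form, and with the details to be checked in the papers themselves, the meta-theorem behind the algorithm looks as follows.  If the nonstandard system proves a normal form

   (forall st x)(exists st y) A(x,y)     (A internal),

then from (the proof of) this fact one can extract a term t of Goedel’s T such that the original system (without the standardness predicate) proves

   (forall x)(exists y in t(x)) A(x,y),

where t(x) is a finite list of candidate witnesses; in the classical case one thus obtains a Herbrand-style finite disjunction of witnesses rather than a single one.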


The relevant paper for all this is:  http://arxiv.org/abs/1508.07434

The title of the paper is “The unreasonable effectiveness of NSA”, which seems apt in light of ii) and II) above.  


Best, 

Sam Sanders




