[FOM] Unreasonable effectiveness

Timothy Y. Chow tchow at alum.mit.edu
Sat Nov 2 11:40:51 EDT 2013


In 1960, Wigner argued for the unreasonable effectiveness of mathematics 
in the natural sciences, and his thesis has been enthusiastically accepted 
by many others.

Occasionally, someone will express a contrarian view.  The two main 
contrarian arguments I am aware of are:

1. The effectiveness of mathematics is about what one would expect at 
random, but humans have a notorious tendency to pick patterns out of 
random data and insist on an "explanation" for them when no such 
explanation exists.

2. The effectiveness of mathematics is higher than one would expect from a 
completely random process, but there is a form of natural selection going 
on.  Ideas are generated randomly, and ineffective ideas are silently 
weeded out, leaving only the most effective ideas as survivors.  The 
combination of random generation and natural selection suffices to explain 
the observed effectiveness of mathematics.  (A toy simulation of this 
weeding-out effect is sketched just below.)
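
To make the weeding-out argument concrete, here is a toy simulation, purely 
for illustration: "ideas" receive effectiveness scores uniformly at random, 
ineffective ones are silently discarded, and the survivors' average 
effectiveness is compared with the baseline over everything generated.  The 
0.9 threshold is arbitrary.

    # Toy illustration of argument 2: random generation plus silent
    # weeding-out makes the surviving ideas look far more effective
    # than the generating process really is.
    import random

    def simulate(n_ideas=100_000, threshold=0.9, seed=0):
        rng = random.Random(seed)
        ideas = [rng.random() for _ in range(n_ideas)]    # random "effectiveness"
        survivors = [x for x in ideas if x >= threshold]  # ineffective ideas vanish
        return sum(ideas) / len(ideas), sum(survivors) / len(survivors)

    baseline, observed = simulate()
    print(f"mean effectiveness, all ideas generated: {baseline:.3f}")  # ~0.50
    print(f"mean effectiveness, surviving ideas:     {observed:.3f}")  # ~0.95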

Unfortunately, the application of mathematics to the natural sciences is 
such a complex and poorly understood process that I see no way of modeling 
it in a way that would allow us to investigate the above controversy in a 
quantitative manner.  I am wondering, however, if recent progress in 
computerized formal proofs might enable one to investigate the analogous 
question of the (alleged) "unreasonable effectiveness of mathematics in 
mathematics."

I am not sure exactly how this might go, but here is a vague outline. 
Theorems are built on lemmas.  We want to construct some kind of model of 
the probability that Lemma X will be "useful" for proving Theorem Y.  This 
model would be time-dependent; that is, at any given time t, we would have 
a probabilistic model, trained on the corpus of mathematics known up to 
time t, that could be used to predict future uses of lemmas in theorems. 
This model would represent "reasonable effectiveness."  Then the thesis of 
"unreasonable effectiveness" would be that this model really does evolve 
noticeably over time---that the model at time t systematically 
underestimates uses of Lemma X in Theorem Y at times t' > t.
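
To make the shape of such a model concrete, here is a minimal sketch, under 
deliberately crude assumptions: each proof is a record (date, theorem, set 
of lemmas cited), the "model at time t" is nothing fancier than 
Laplace-smoothed per-lemma usage frequencies over proofs dated up to t, and 
surprise is measured as the ratio of a lemma's usage rate after t to the 
rate the time-t model assigns.

    # Minimal sketch of a time-dependent lemma-usage model.  The record
    # format (date, theorem, lemmas) and the frequency model are crude
    # placeholders, not a claim about what the "right" model is.
    from collections import Counter
    from datetime import date

    def train(corpus, t):
        """Laplace-smoothed lemma-usage frequencies over proofs dated <= t."""
        past = [rec for rec in corpus if rec[0] <= t]
        counts = Counter(lem for _, _, lemmas in past for lem in lemmas)
        n = len(past)
        return lambda lem: (counts[lem] + 1) / (n + 2)

    def surprise(corpus, t):
        """Observed post-t usage rate of each lemma over the time-t prediction."""
        predict = train(corpus, t)
        future = [rec for rec in corpus if rec[0] > t]
        observed = Counter(lem for _, _, lemmas in future for lem in lemmas)
        m = len(future)
        return {lem: (observed[lem] / m) / predict(lem) for lem in observed}

    # Tiny artificial corpus, only to show the interface.
    corpus = [
        (date(1950, 1, 1), "thm_A", {"lemma_X", "lemma_Y"}),
        (date(1960, 1, 1), "thm_B", {"lemma_Y"}),
        (date(1975, 1, 1), "thm_C", {"lemma_X"}),
        (date(1990, 1, 1), "thm_D", {"lemma_X", "lemma_Z"}),
    ]
    print(surprise(corpus, date(1965, 1, 1)))  # ratio 2.0 for lemma_X and lemma_Z

On this reading, "reasonable effectiveness" is whatever a well-calibrated 
model of this kind predicts, and "unreasonable effectiveness" is persistent, 
systematic surprise (ratios well above 1) that retraining at later times 
does not wash out; ratios hovering around 1 would favor the contrarian 
readings.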

I am wondering whether anyone else has thought along these lines.  I am also 
wondering whether there is any plausible way of using the growing body of 
computerized proofs to make the above outline more precise.  There is of 
course the problem that the "ontogeny" of computerized proofs does not 
exactly recapitulate the "phylogeny" of how the theorems were arrived at 
historically, but nevertheless maybe something can still be done.
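
On the data side everything would depend on which formal library one uses 
and on how dates are attached; the following is only hypothetical glue 
code, assuming the dependency information has already been exported to a 
plain-text file with lines of the form "theorem: lemma1 lemma2 ..." and 
that approximate historical dates (rather than formalization dates, to 
partly respect the phylogeny) are supplied separately.

    # Hypothetical glue code: turn an exported dependency dump into the
    # (date, theorem, lemmas) records used above.  The file format and
    # the date table are invented for illustration; no particular proof
    # assistant is claimed to export exactly this.
    def load_records(dep_path, dates):
        """dates maps each theorem/lemma name to an approximate historical date."""
        records = []
        with open(dep_path) as f:
            for line in f:
                if ":" not in line:
                    continue
                thm, deps = line.split(":", 1)
                thm, lemmas = thm.strip(), set(deps.split())
                if thm in dates:              # skip results we cannot date
                    records.append((dates[thm], thm, lemmas))
        # Sort by date so that "the corpus known up to time t" is a prefix.
        return sorted(records, key=lambda rec: rec[0])

    # e.g. records = load_records("deps.txt", {"thm_A": date(1950, 1, 1), ...})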

Tim

