[FOM] Unreasonable effectiveness

meskew at math.uci.edu
Sat Nov 2 16:48:29 EDT 2013


I believe there is another type of answer to Wigner.  At first glance, one
may be surprised at the effectiveness of analytic geometry and calculus in
synthetic geometry.  However, once the right synthetic axioms are laid
down, as in Hilbert's system, one can prove that there is an isomorphism
of R^n with the synthetic Euclidean space under which the arithmetic of the
real numbers corresponds to the synthetic notions of measure based on
congruence.
 So there is indeed a mathematical explanation for this, and ultimately no
mystery.
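
To state the point a bit more exactly (only a sketch, and the letters E and
phi are just notation for this post; I am taking Hilbert's system to include
the completeness axiom): one proves that there is a bijection phi : E -> R^3
from the synthetic space E onto R^3 that preserves betweenness and under
which congruence of segments becomes equality of Euclidean distance,

    AB \cong CD  \iff  \|\varphi(A) - \varphi(B)\| = \|\varphi(C) - \varphi(D)\|,

so every metric statement of analytic geometry translates back into a
synthetic one.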

Now synthetic geometry is much more like a physical theory than is an
abstract number system.  The explanation for why a synthetic geometrical
theory is effective in physics is just that the physical interpretation of
the theory gives (approximately) true statements about physical space. 
Assuming the approximation is close enough so that most deduced
consequences of the theory are also approximately true (a notion of which
I will not attempt to give an account here), the explanation for why the
calculus of real or complex vector spaces or manifolds applies to physical
space is then no mystery.  It reduces to the more general question of how
we come up with effective physical theories.

There are of course many uses of mathematics in the natural sciences not
covered by this example, but I think it serves to illustrate that
sometimes a good explanation is available, and we needn't resort to
probabilistic or evolutionary explanations.  Admittedly, there may be many
cases where such a careful explanation cannot be made, perhaps because
science just got lucky in that case and did not try to make the theory
very rigorous in the first place.

Now Tim, as for your idea on the effectiveness of mathematics in
mathematics, I'm not sure that a broad explanation is called for.  Perhaps
more experienced mathematicians will have a different view, but it is not
apparent to me that there is anything surprising about the degree to which
past mathematical work is useful in future mathematical work.  Sometimes
things go nowhere and sometimes they are extremely fruitful, but the
fruitful cases do not seem to occur with any surprising frequency.
Furthermore, when an old result does prove fruitful, there is a
self-contained explanation in the proof itself, and no further
philosophical question is raised.
Just my two cents, but perhaps you could give some historical examples
that seem to call for an explanation.
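
For concreteness, here is one very crude sketch, in Python, of the kind of
time-dependent model you outline below.  The data format, names, and numbers
are all invented; the "model" is nothing more than lemma-citation frequencies
in the corpus up to time t, compared with the actual shares of citations
after t.

    from collections import Counter

    def train_baseline(uses, t):
        # Frequency with which each lemma is cited in proofs up to time t.
        counts = Counter(lemma for (lemma, _thm, time) in uses if time <= t)
        total = sum(counts.values()) or 1
        return {lemma: c / total for lemma, c in counts.items()}

    def underestimation(uses, t):
        # For each lemma cited after time t, the gap between its actual share
        # of later citations and the share predicted by the time-t baseline.
        # Systematically positive gaps would be the "unreasonable" part.
        baseline = train_baseline(uses, t)
        later = Counter(lemma for (lemma, _thm, time) in uses if time > t)
        total_later = sum(later.values()) or 1
        return {lemma: later[lemma] / total_later - baseline.get(lemma, 0.0)
                for lemma in later}

    # Invented toy corpus of (lemma, theorem, year) citation records.
    corpus = [
        ("LemmaA", "Theorem1", 1900),
        ("LemmaA", "Theorem2", 1910),
        ("LemmaB", "Theorem3", 1912),
        ("LemmaA", "Theorem4", 1950),
        ("LemmaA", "Theorem5", 1960),
    ]
    print(underestimation(corpus, 1920))

Of course the real difficulty is in choosing a baseline less naive than raw
frequencies, but the comparison itself could in principle be run against a
corpus of formalized proofs.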

Best,
Monroe




> In 1960, Wigner argued for the unreasonable effectiveness of mathematics
> in the natural sciences, and his thesis has been enthusiastically accepted
> by many others.
>
> Occasionally, someone will express a contrarian view.  The two main
> contrarian arguments I am aware of are:
>
> 1. The effectiveness of mathematics is about what one would expect at
> random, but humans have a notorious tendency to pick patterns out of
> random data and insist on an "explanation" for them when no such
> explanation exists.
>
> 2. The effectiveness of mathematics is higher than one would expect from a
> completely random process, but there is a form of natural selection going
> on.  Ideas are generated randomly, and ineffective ideas are silently
> weeded out, leaving only the most effective ideas as survivors.  The
> combination of random generation and natural selection suffices to explain
> the observed effectiveness of mathematics.
>
> Unfortunately, the application of mathematics to the natural sciences is
> such a complex and poorly understood process that I see no way of modeling
> it in a way that would allow us to investigate the above controversy in a
> quantitative manner.  I am wondering, however, if recent progress in
> computerized formal proofs might enable one to investigate the analogous
> question of the (alleged) "unreasonable effectiveness of mathematics in
> mathematics."
>
> I am not sure exactly how this might go, but here is a vague outline.
> Theorems are built on lemmas.  We want to construct some kind of model of
> the probability that Lemma X will be "useful" for proving Theorem Y.  This
> model would be time-dependent; that is, at any given time t, we would have
> a probabilistic model, trained on the corpus of mathematics known up to
> time t, that could be used to predict future uses of lemmas in theorems.
> This model would represent "reasonable effectiveness."  Then the thesis of
> "unreasonable effectiveness" would be that this model really does evolve
> noticeably over time---that the model at time t systematically
> underestimates uses of Lemma X in Theorem Y at times t' > t.
>
> I am wondering if anyone else has thought along these lines.  Also I am
> wondering if there is any plausible way of using the growing body of
> computerized proofs to make the above outline more precise.  There is of
> course the problem that the "ontogeny" of computerized proofs does not
> exactly recapitulate the "phylogeny" of how the theorems were arrived at
> historically, but nevertheless maybe something can still be done.
>
> Tim



