[FOM] 2,3 Turing machine proof controversy

Bob Hearn robert.a.hearn at dartmouth.edu
Sat Nov 10 17:04:23 EST 2007

On Nov 9, 2007, at 12:26 PM, Vaughan Pratt wrote:

> ... whereas the definition in your proof allowed manual
> restarting.  The DARPA Grand Challenge race disallowed any team
> assistance during the race, which seems equally applicable here.
> ...
> I see two challenges here.  First, can you replace outside
> intervention by a single infinite initial condition that encodes at
> one time the infinitely many restarts your proof was performing
> manually, so that the subject 2,3 machine can do its own restarting?

My understanding is that this is in fact the way Smith's construction
works. There are no restarts, merely a single run from an infinite
initial condition. Did I misunderstand?

To me, these are the issues, in decreasing order of importance:

1. What is the nature of the computation that produces the infinite  
initial condition?

2. If this computation is agreed to be "sufficiently simple", is it
justifiable to provide an infinite non-repeating pattern to a
universal Turing machine? (I don't believe it is even universally
agreed yet that providing an infinite *repeating* input is
justifiable. The case of Wolfram's "rule 110", which does use an
infinite repeating input, is different, because it is a cellular
automaton, where all cells update at the same time; it makes some
sense to give each cell a finite initial input.)
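To make the rule-110 point concrete, here is a minimal sketch of one
rule-110 update step, run on a cyclic array so that a periodic
boundary stands in for an infinite repeating background. This is
illustrative only; Cook's and Smith's actual constructions use
carefully engineered background patterns, not this toy setup.

```python
# Rule 110: the new value of each cell is a function of its
# (left, center, right) neighborhood, encoded in the bits of
# the rule number 110 = 0b01101110.
RULE = 110

def step(cells):
    """Apply one rule-110 update; wrap-around models a repeating background."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2)   # left neighbor
                  | (cells[i] << 1)           # center cell
                  | cells[(i + 1) % n])) & 1  # right neighbor
        for i in range(n)
    ]

row = [0, 0, 0, 0, 1, 0, 0, 0]  # one live cell on a quiet background
for _ in range(4):
    row = step(row)
```

Note that every cell recomputes simultaneously from a finite
neighborhood, which is why a finite per-cell initial input is a
natural notion for a CA in a way it is not for a Turing machine tape.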

3. If we agree that the above are not problems, and the (2,3) machine
should be called universal, is it meaningful to refer to it, as
Wolfram does, as "the smallest universal Turing machine that exists"?
There are many kinds of computing machine. It seems to me that to
compete in a game of minimizing states * symbols, there must be an
established definition of what kind of machine one is using for the
result to be meaningful. In this case much of what would normally be
done by the Turing machine is offloaded to pre- and post-processing,
which is a way of redefining the problem to suit one's (Wolfram's)
purposes. How can we compare the information content of this kind of
Turing machine to a "standard" universal Turing machine? And how can
we compare it to, say, Conway's Game of Life, or Post's tag systems,
or any other deterministic model of computation, let alone
nondeterministic models? (There is, for example, a game model of
computation that is undecidable even with finite resources.)
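For readers unfamiliar with the states * symbols game, the sketch
below shows what is actually being minimized. The rule table here is
a made-up 2-state, 3-symbol example for illustration only -- it is
NOT the rule table of the Wolfram/Smith machine.

```python
# delta maps (state, symbol) -> (new_state, new_symbol, head_move).
# Hypothetical 2-state, 3-symbol table, for illustration only.
delta = {
    ('A', 0): ('B', 1, +1),
    ('A', 1): ('A', 2, -1),
    ('A', 2): ('B', 0, +1),
    ('B', 0): ('A', 2, -1),
    ('B', 1): ('B', 0, +1),
    ('B', 2): ('A', 1, -1),
}

states = {s for s, _ in delta}
symbols = {c for _, c in delta}
size = len(states) * len(symbols)  # the quantity being minimized: 2 * 3 = 6

def run(tape, state='A', pos=0, steps=100):
    """Run the table on a dict-backed tape (blank symbol 0)."""
    tape = dict(tape)
    for _ in range(steps):
        sym = tape.get(pos, 0)
        state, tape[pos], move = delta[(state, sym)]
        pos += move
    return tape
```

The size measure counts only the rule table; it does not count the
information in the initial tape contents or in any pre- and
post-processing, which is exactly the comparison problem raised above.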

There is also a more troubling meta-issue with this proof, which is  
the way Wolfram Research has handled it. As previously mentioned, the  
prize committee evidently did not sign off on the proof; Wolfram  
Research did. FOM postings from Wolfram Research have raised a number  
of excellent points, correctly in my view pointing out that issues  
such as the above are worthwhile, and that discussion of them is a  
contribution to the literature. However, that does not seem to be what  
is happening. There is no mention of the implicit redefinition of  
universality on www.wolframscience.com. Instead, we see "The lower  
limit on Turing machine universality is proved -- providing new  
evidence for Wolfram's Principle of Computational Equivalence."

Evidently Smith's proof will be published in the journal Complex  
Systems, so one could hope for objective peer review. However, this  
journal is an arm of Wolfram Research.

This entire episode seems to me a massive perversion of the scientific
process.
Incidentally, I am giving a short talk on this topic this evening, so  
if anyone has any immediate feedback to the above I would appreciate  
hearing it.

Bob Hearn

Robert A. Hearn
Neukom Institute for Computational Science, Dartmouth College
robert.a.hearn at dartmouth.edu
