[FOM] 2,3 Turing machine proof controversy

Vaughan Pratt pratt at cs.stanford.edu
Sun Nov 11 04:41:41 EST 2007

(My reply to Bob Hearn subsumes my reply to Alex.  By way of background 
information on Bob: he is a ClarisWorks principal who has more recently 
obtained some very nice results in collaboration with Erik Demaine and 
others.  Bob's ClarisWorks killed Microsoft Works in 1991; his 
collaboration with Erik began a decade later.)

Bob Hearn wrote:
> My understanding is that this is in fact the way Smith's construction 
> works. There are no  restarts, merely a single run from an infinite 
> initial condition. Did I misunderstand?

Although I still haven't figured out just where in Alex's paper it says 
this explicitly, I'll go with this interpretation.  (As I understand him, 
Alex is claiming that it's in the condition on page 4 of his proof, 
namely: "To be precise, 'finite-length initial condition', given in this 
and future conjectures, refers to an initial condition in which only 
finitely many of the cells are relevant, and no other cells are visited 
during the evolution of that initial condition."  I'm happy to defer to 
any competent judge who has been able to read into this sentence what 
Alex claims was his intended meaning.  Personally I find this all very 
fuzzy.)

> To me, these are the issues, in decreasing order of importance:
> 1. What is the nature of the computation that produces the infinite 
> initial condition?

This is a fair question.  Todd Rowland (one of the de facto judges of 
the submission) has a shot at this in
where he gives a precise recursive characterization of the initial 
condition as
Init[2n] = Init[n] + F[Init[n], n, k]
This seems like a reasonable answer to Bob's question 1.
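For concreteness, the recursive shape Rowland describes can be sketched as follows.  This is only a sketch: the suffix function F below is a hypothetical placeholder of my own (Smith's actual F depends on the details of his encoding); only the doubling recursion itself comes from Rowland's characterization.

```python
# Sketch of the doubling recursion  Init[2n] = Init[n] + F[Init[n], n, k].
# The suffix function F is a made-up stand-in, not Smith's actual construction.

def F(prefix, n, k):
    """Hypothetical placeholder for F[Init[n], n, k]:
    derive n new cells from the existing prefix."""
    return [(c + n + k) % 3 for c in prefix]

def init(n, k=1):
    """First n cells of the initial condition, for n a power of 2."""
    if n == 1:
        return [0]          # arbitrary seed cell
    half = init(n // 2, k)
    return half + F(half, n // 2, k)

# The defining property: Init[2n] extends Init[n].
print(init(8)[:4] == init(4))   # → True
```

Whatever the real F is, any initial condition built this way is determined by a fixed finite rule, which is what makes question 1 answerable at all.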

> 2. If this computation is agreed to be "sufficiently simple", is it 
> justifiable to provide an infinite non-repeating pattern to a universal 
> Turing Machine? (I don't believe it's even universally agreed yet that 
> it's justifiable to provide an infinite repeating input. The case of 
> Wolfram's "rule 110", which does use an infinite repeating input, is 
> different, because it is a CA, where all cells recompute at the same 
> time; it makes some sense to give each cell a finite initial input.)

Bob's concern here was what stopped me from a stronger statement than my 
"Let's say it's at least less objectionable than a machine that has to 
be repeatedly restarted manually (deus ex machina)" in my previous post. 
The ability to run autonomously on a precomputed set of restart 
instructions is better than being restarted manually.  Whether it's 
legitimate to self-restart is still up for grabs.

While I agree in principle with Bob that there's a question even about 
periodic "background", this bothers me less because with a few more 
states a Turing machine can generate such a background on its own.  To 
my thinking, all that a periodic background accomplishes is to feed a few 
steroids to the Turing machine without, however, converting a 
non-universal machine (in some "moral" sense) into a universal one.
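The claim that a periodic background costs only a few extra states can be illustrated with a toy two-state machine of my own devising (unrelated to Smith's machine), which lays down an infinite 0,1,0,1,... background on an initially blank tape:

```python
# Toy illustration: two states suffice to write a period-2 background
# 0,1,0,1,... rightward on a blank tape.  Not Smith's construction.

def run(steps):
    tape = {}
    pos, state = 0, 'A'
    for _ in range(steps):
        if state == 'A':
            tape[pos] = 0; pos += 1; state = 'B'   # write 0, move right
        else:
            tape[pos] = 1; pos += 1; state = 'A'   # write 1, move right
    return [tape[i] for i in range(pos)]

print(run(6))   # → [0, 1, 0, 1, 0, 1]
```

A real construction would interleave such background-writing states with the machine's working states, but the cost is of the same flavor: a constant number of additional states per period.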

I find the nonperiodic background Alex proposes more questionable as it 
starts to look like restarting.  The fact that the Turing machine can 
restart itself demonstrates a certain commendable degree of autonomy, 
but the fact that it needs to restart at all remains suspicious.  Why 
can't it just simulate the target machine in a single run?  Is it not 
sufficiently universal?

> 3. If we agree that the above are not problems, and the (2, 3) machine 
> should be called universal, is it meaningful to refer to it, as Wolfram 
> does, as "the smallest universal Turing machine that exists"? There are 
> many kinds of computation machine. It seems to me that to compete in a 
> game of minimizing states * symbols, for a result to be meaningful there 
> must be an established definition of what kind of machine one is using.

If the game is merely to minimize the state-symbol product, Alastair 
Hewitt's universal 3,2 machine posted at
on July 24 would seem to settle this question elegantly without having 
to agonize even over nonblank backgrounds let alone restarting 
strategies.  The crucial difference between 3,2 machines and 2,3 
machines is that the latter can only carry one bit from cell to cell, 
whereas the former can carry log_2(3) ≈ 1.58496 bits, making it that 
much harder for a 2,3 machine to simulate a universal 3,2 machine. (Not 
to say that it's impossible, in fact this would be how I'd go about 
trying to find a universal 2,3 machine, rather than trying to emulate 
Rule 110 directly.)
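The per-cell capacity figure is just the base-2 logarithm of the alphabet size; a quick check:

```python
import math

# A cell over a 3-symbol alphabet carries log2(3) bits of information,
# versus exactly 1 bit for a 2-symbol cell.
print(round(math.log2(3), 5))   # → 1.58496
```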

> In this case a lot of what would normally be done by the Turing Machine 
> is offloaded to pre- and post-processing, which is a way of redefining 
> the problem to suit your (Wolfram's)  purposes. How can we compare the 
> information content of this kind of Turing Machine to a "standard" 
> universal Turing Machine? And how can we compare it to, say, Conway's 
> Game of Life, or Post's tag systems, or any other deterministic models 
> of computation, let alone nondeterministic models? (There is a game 
> model of computation that is undecidable using finite resources, e.g.)

Hear, hear.

> There is also a more troubling meta-issue with this proof, which is the 
> way Wolfram Research has handled it. As previously mentioned, the prize 
> committee evidently did not sign off on the proof; Wolfram Research did. 
> FOM postings from Wolfram Research have raised a number of excellent 
> points, correctly in my view pointing out that issues such as the above 
> are worthwhile, and that discussion of them is a contribution to the 
> literature. However, that does not seem to be what is happening. There 
> is no mention of the implicit redefinition of universality on 
> www.wolframscience.com. Instead, we see "The lower limit on Turing 
> machine universality is proved -- providing new evidence for Wolfram's 
> Principle of Computational Equivalence."

Well, there is *some* mention of this "implicit redefinition" at


in the sentence preceding the section heading "Proofs of Universality" 
halfway down that page, namely

"For the purposes of this prize, the treatment of universality in any 
particular submission must be considered acceptable by the Prize Committee."

This is the key sentence licensing external kibitzers like Bob and me to 
be no more than an *amicus curiae*.  The whole competition revolves 
around the willingness of the relevant audience (the "committee," 
whoever that is from one day to the next) to accept whatever definition 
Smith is comfortable with.  That either Bob or I might be uncomfortable 
with his definition might be relevant to automata theorists, but it is 
irrelevant to the competition as defined by its rules.

> This entire episode seems to me a massive perversion of the scientific 
> process.

In the grand scheme of things I find "perversion of the scientific 
process" far less destructive than the wholesale starvation of computer 
science funding that the US government has been indulging itself in 
during the past decade or so.  Putting people in charge of funding an 
area that they have zero grasp of is a sure-fire recipe for destroying a 
country's leadership in that area.  The performance of certain officials 
at last week's DARPA Urban Challenge highlighted this to perfection, see 
my (admittedly flippant) remark on this race at the end of John 
Markoff's article on the race in today's NY Times,


The government's attitude during the past decade has been that research that 
cannot be evaluated directly by outsiders is not worth funding.  This is 
blatantly destructive stupidity.  It will take the US some years to 
recover from this insanity and catch up to where it could have been with 
more insightful funding decisions.

Vaughan Pratt
