[FOM] The Lucas-Penrose Thesis

Eray Ozkural examachine at gmail.com
Mon Oct 2 09:04:31 EDT 2006

On 10/1/06, Robbie Lindauer <robblin at thetip.org> wrote:
> It is almost certainly not true that the program can "use any system
> you like".  At least not any actual computer, and certainly not the
> traditional turing machine.  Different actual computers have different
> limitations.
> But remember, this proposal is not a contest, this proposal is,
> supposedly, a scientific hypothesis.
> It says "Formalism X is the formalism of person P."  There is no room
> for peekaboo here.

As far as I can tell, your objection does not stand.

The hypothetical "algorithm" can generate and execute any
computer program, and evaluate this program as would a
human mathematician. Thus, there is no fundamental limitation
to the "algorithm" that the human is free of. (For instance, we can
consider variants of Levin search that generate and test programs.)
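To make the generate-and-test idea concrete, here is a toy sketch in Python. It is not Levin search proper (which interleaves runs of all programs, giving each program p a time share proportional to 2^-length(p)); it only enumerates programs of a made-up two-instruction language in order of increasing length and tests each one against a target. The instruction set and names are invented for the example.

```python
from itertools import product

# Toy instruction set: a program is a string over {'i', 'd'}.
# 'i' increments the register, 'd' doubles it.
OPS = {'i': lambda x: x + 1, 'd': lambda x: x * 2}

def run(program, x):
    # Execute the program on initial register value x.
    for op in program:
        x = OPS[op](x)
    return x

def generate_and_test(target, x0=1, max_len=12):
    # Enumerate programs shortest-first (the bias Levin search
    # formalizes) and return the first one mapping x0 to target.
    for length in range(1, max_len + 1):
        for prog in product(OPS, repeat=length):
            if run(prog, x0) == target:
                return ''.join(prog)
    return None

print(generate_and_test(10))  # -> 'idid': 1 ->2 ->4 ->5 ->10
```

The point is only that the searcher is not tied to any fixed program: it ranges over the whole (here, toy) program space, testing candidates the way a mathematician tests conjectured constructions.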

Let me try to explain it a little more.

The computer mathematician is by no means restricted to a
single formal theory, i.e., a non-halting computer program that
generates a set of theorems. We can use any system
we like so long as the axioms fit in memory. Most computers,
unlike humans, have easily extensible memory, so this is not a
problem in practice. The axiom schemas we use are usually very
small; they would fit in, say, 1 kilobyte. A human, on the other hand,
will have a hard time remembering 1 kilobyte of compressed information.
This means that the usual computer has a much better chance at
"using any system that we like" than does a human. There is no
disadvantage for electronic computers on this front. [*]
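The picture of a formal theory as a non-halting theorem generator, with the axiom system swappable at will, can be sketched as follows. The example system is Hofstadter's MIU string-rewriting system, chosen only because it is tiny; a real proof system would use, say, modus ponens over first-order formulas, but the enumerator itself does not care which system it is handed.

```python
from collections import deque

def miu_steps(s):
    # The four MIU rewriting rules: a stand-in for a real
    # inference-rule set. Sorted output keeps the run deterministic.
    out = set()
    if s.endswith('I'):
        out.add(s + 'U')                    # xI  -> xIU
    if s.startswith('M'):
        out.add(s + s[1:])                  # Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i+3] == 'III':
            out.add(s[:i] + 'U' + s[i+3:])  # III -> U
    for i in range(len(s) - 1):
        if s[i:i+2] == 'UU':
            out.add(s[:i] + s[i+2:])        # UU  -> (deleted)
    return sorted(out)

def enumerate_theorems(axioms, step, limit):
    # Breadth-first closure of the axioms under the rules: in
    # principle a non-halting generator, cut off at `limit` here.
    seen, queue, theorems = set(axioms), deque(axioms), []
    while queue and len(theorems) < limit:
        t = queue.popleft()
        theorems.append(t)
        for u in step(t):
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return theorems

thms = enumerate_theorems(['MI'], miu_steps, limit=10)
print(thms[:4])  # -> ['MI', 'MII', 'MIU', 'MIIII']
```

Swapping in a different axiom list and step function replaces the formal system wholesale, which is the sense in which the machine can "use any system we like": the axioms are data, not part of the machine.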

I would like to explain a little more, but the Obtuseness
objection, as far as I can tell, does not even touch the crux
of the matter. Furthermore, it seems to rest on an
obviously wrong explanation. A robot that is as intelligent
as Lucas would not have to deliver any single algorithm
as a representation of itself. But this does not
mean that the robot does not have firmware. [+] Similarly, all humans
have a definite DNA code. Unfortunately, we are not yet smart enough to
know its semantics. So, to us, it could mean a number of things,
but in fact it means only _one_ thing. At least, that is what molecular
biologists seem to think.

As mathematicians, we can easily see that the "semantics" of the DNA code is
completely mechanical, as it rests solely on the laws of physics, which
do not change from person to person. We should in fact be able to deduce,
from the DNA plus the mechanics of the developmental environment, the exact
"algorithm" of the nervous system; we just cannot do it at the moment.
Thus, our current ignorance does not seem to support Lucas's position
(that there is no "program" that defines a human).

As an interesting note, Godel himself wrote that the human brain is
"a finite mechanism by all appearances". He seems to have thought
that this appearance is misleading. However, I am somewhat inclined
to think that Lucas and Penrose do not appreciate this simple "empirical"
observation about our brains. So, the newer arguments seem like
a step back from Godel.


[*] The human is of course free to use paper and pen to extend
her memory.
[+] The robot's programming may be such that it is not
allowed to examine the program in its firmware at all! Then, like Lucas,
the robot would not be able to tell which algorithm defines it.

Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
ai-philosophy: http://groups.yahoo.com/group/ai-philosophy
