[FOM] The Lucas-Penrose Thesis vs The Turing Thesis

Calvin Ostrum calvin.ostrum at gmail.com
Sun Oct 8 16:51:59 EDT 2006

On Saturday 07 October 2006 20:48, Robbie Lindauer wrote:
> If there is a machine that is consistent
> 	and
> If that machine can perform mathematics of at least the complexity
> of PA Then there exists a (possibly true) sentence which that
> machine can not decide.

If by "decide" you mean produce a formal proof of the
sentence in PA, then, assuming PA is consistent (something
that has not itself been "decided", or otherwise proven),
that is correct.
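For the record, the claim being confirmed here is in essence the Gödel-Rosser form of the first incompleteness theorem, which in standard notation can be put roughly as follows (a sketch; the precise hypotheses are that the theory be recursively axiomatizable and consistent):

```latex
% Gödel–Rosser: any consistent, recursively axiomatized theory T
% extending PA leaves some sentence G_T undecided.
\text{If } T \supseteq \mathrm{PA} \text{ is consistent and recursively axiomatized, then}
\quad \exists\, G_T:\ \ T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T .
```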

However, neither we nor God can do this either, regardless of
whether we are machines.  So why is this an issue?

I assume that the machine in question is rather complex,
and can do more than produce formal proofs in PA.  It can
also, I assume, talk in English with us, reason about its
own mental states, gather observations from the world
and reason about them, etc. (Put "reason" and 
"gather" in the machine case if you want, but that
appears to be begging the question in this context).

In doing all of this, it may end up doing what we also do,
and concluding that PA is consistent.   Why couldn't it?
We ourselves do this, although we cannot prove it (except
in still other systems whose consistency we are no more
certain of, and probably less certain of, these systems
being stronger in some ways but weaker in others).
On what grounds do we ourselves conclude
that PA is consistent?   Are those grounds not available
to the machine?   Are they even good grounds in
the first place?
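Such "still other systems" do exist, and the two standard examples illustrate the trade-off: Gentzen derived Con(PA) from transfinite induction up to epsilon_0 over a weak base theory, and Con(PA) is also a routine theorem of ZFC. In each case the second incompleteness theorem applies to the proving system in turn:

```latex
% Two standard routes to the consistency of PA; neither system
% can prove its own consistency (Gödel II), so the regress noted
% in the text begins again one level up.
\mathrm{PRA} + \mathrm{TI}(\varepsilon_0) \;\vdash\; \mathrm{Con}(\mathrm{PA})
\qquad \text{(Gentzen, 1936)}
\qquad\qquad
\mathrm{ZFC} \;\vdash\; \mathrm{Con}(\mathrm{PA})
```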
