FOM: Re: Godel, f.o.m.

charles silver silver_1 at
Wed Jan 26 12:09:37 EST 2000

> > Steve Stevenson wrote:
> > > James Fetzer has a really great article on this particular view of
> > > computing.

>charles silver wrote:
> >
> >     Geez, I remember thinking about 8 years ago when I read the article
> > that it had little content.   Wasn't the paper an extended argument against
> > program verification, and wasn't his main point just the standard
> > criticism that we can always make an error, no matter how much we try to
> > verify a program with a formal proof?   The proof that the program is
> > correct may itself be wrong, and other things can go wrong as well.

> > Steve Stevenson wrote:
>I believe that his point is something different, especially given the
>"formal systems" tenor of Vladimir's comment. The machine is inherently
>different, period. Your criticisms were used by Hoare and the rest of
>the formalist part of computer science.

    I wasn't aware anyone had responded to Fetzer.  As I recall, his remarks
begged for a response.  I'm glad to be in Hoare's company, except that I
hope you're not branding me a "formalist" in order to dismiss my points.  If
I'm wrong about Fetzer's points, what do you think they really were, and how
does "the 'formal systems' tenor of Vladimir's comment" matter to the
points Fetzer made in his article?

>It would seem to me that 21st Century mathematics must include the
>computer as part of the tool kit. If this is not the case, then the
>principles of computing will lie elsewhere. If it is the case, then we
>must recognize the limits to what it does and does not do. For
>example, is it possible to put enough constraints on a computation to
>correct machine errors when they occur?

    What you're saying above is difficult for me to fully understand, though
I feel somewhat sympathetic with your remarks.  Just to clear the air, it
seemed to me when I read Fetzer's paper that it would be loved by some
people and hated by others.  The people who loved it were the--enormous
generalization coming--non-mathematical types who had dominated c.s. for
several years and could feel their power slipping away, as the new
generation of more mathematically sophisticated c.s. people started asserting
themselves.   Whew!  The ones who hated it were the math people who for one
reason or another (often because they couldn't get good math jobs) landed in
the field of computer science and were using the authority of mathematics to
push the older generation of programmers, compiler writers, etc. into the
background.  Whew again!   I'm not taking a side here; I just wanted to
clear the air by saying that there were passionate views behind this.

>Turing machines are really boring. And there was a time when people
>actually used mathematics for things ....

    Sorry, I don't get your point here.  If it pertains to my allusion to
Turing, I meant only that the recursive unsolvability of the Halting Problem
makes it impossible for there to be a universal program checker.  I'd have
thought you'd be sympathetic with this.  As for Turing Machines being
boring, I don't think this is a disputable point, assuming you mean that
it's a struggle for a TM to do anything "interesting".   I don't want to get
preachy, and I know you know this anyway, but the value of a TM is not in
its actual use, but in its theoretical power.   I think I may be missing
your point, though, because in looking back at what you said, I re-read:
"there was a time when people
actually used mathematics for things ...." and am not sure what you're
getting at.  Is it that you think the mathematics behind TMs is not for
(doing) things?  If so, I agree.
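To spell out the Turing allusion above, here is a sketch of the standard diagonal argument in Python. This is purely illustrative and not from Fetzer's paper or anyone's post: `halts` stands in for a supposed universal program checker, and the point is that no correct implementation of it can exist.

```python
# Sketch of the diagonal argument behind the Halting Problem.
# `halts` is a hypothetical universal checker; any concrete
# implementation must be wrong somewhere, so we give it an
# arbitrary fixed answer to expose the contradiction below.

def halts(program, arg):
    # Suppose, for contradiction, this always answers correctly:
    # True iff program(arg) eventually halts.
    return True

def diag(program):
    # Loops forever exactly when `halts` claims program(program) halts.
    if halts(program, program):
        while True:
            pass
    return

# If halts(diag, diag) returns True, then diag(diag) loops forever,
# so the answer was wrong; if it returns False, diag(diag) halts at
# once, so the answer was again wrong.  Either way `halts` fails on
# `diag`, which is why a universal program checker is impossible.
print(halts(diag, diag))
```

Note that the script never actually calls `diag(diag)`; it only queries the would-be oracle, whose answer the construction guarantees is incorrect.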

> >     Please correct me if I am wrong, but as I recall, his arguments apply
> > equally well to mathematics (except for his specific comments about
> > compilers, of course).  That is, consider a formal proof checker that's
> > created to check the accuracy of math proofs.  It is possible for a proof
> > to be wrong and for the proof checker to err (for any number of reasons)
> > and incorrectly label it as a good proof.  (And, if we thought we could
> > patch this up with a proof checker to check the proof checker, then *this*
> > proof checker would need a proof checker, etc. etc.)  Again, this overly
> > general fact about fallibility doesn't seem to cut any ice about computer
> > science specifically.  Here, the Fetzer conclusion would be that math
> > is not formal, since one can't (in Descartes's sense) be *absolutely
> > certain* that a mistake in a math proof hasn't been made.

>I think you're dodging here.

    What am I dodging?

>For platonic logicians, I'm sure these are
>arguments that are easy to accept ...

    I'm not sure what your point is.

>for those of us who use this
>stuff in the field, there is a question about all this.

    Is Hoare not in the field?  (You cited him as arguing against Fetzer.)
I should mention that during the eight years I taught c.s. I encountered
lots of hostility toward program verification among older computer science
profs who preferred to work "from the seat of their pants" and resented the
intrusion of mathematics into c.s.   In my opinion, one effect of keeping
mathematics out of c.s. was that the textbooks I had to teach from were
often models of unclarity--especially the books on data structures.

>Let me bore you with a reality.  Nuclear weapons policy is being made
>by treating the output of simulations as 100% correct and the
>physics as 100% correct based on old test data. That's really bad
>science, let alone policy. This need to reason about scientific
>simulations is guided by the mathematics, numerical and otherwise;
>it is called validation. Now, it makes a lot of difference whether
>those simulations are *right* or *wrong*.

    I fully agree with you that in matters of such importance we need to
think clearly about several complex and subtle issues.   I never thought of
program verification as a panacea.  My disagreement with Fetzer pertains to
what I thought were simplistic arguments extending to everything under the
sun.

>The mathematical point is not that "Well, it could be right or it
>could be wrong and that's a philosophical point not worth caring
>about". The mathematical point is to ask "How reliable is your
>computation?"

    Yes.  One can always raise the question of the correctness of the
program checker, or, to use your well-chosen word, to question its
reliability.

>Historically, we let a countable number of grad students
>pore over proofs looking for a dissertation.

    A "countable number"?  (Just kidding.)

> Not very practical any
>more. Abel and Cauchy got into the foundations game because they saw
>unreliable computations (divergent series) being used without reliable
>ways of knowing when these computations were right or wrong. [Of
>course, we now understand asymptotic series well enough that physics
>uses them reliably quite often.] Time to revisit that reliability
>point, IMHO.

    I agree with what I take your general point to be (please correct me if
I'm wrong), which is that one should not place complete reliance on a
program checker, especially when the programs to be checked are used to
guide nuclear weapons. (Incidentally, I'm sure you know that in the area of
weaponry, it is difficult to test, in the old-fashioned way, whether a
program will function properly or not.  One can't just shoot off a missile
every other day.  Here, mathematical formulations are really essential, if
not in program verification, at least in computer simulation.)

