FOM: Floating point and A Program for Validation Research.

Steve Stevenson steve at cs.clemson.edu
Wed Feb 2 11:44:54 EST 2000


>Stevenson writes:
>>Are these viable conclusions?
>
>>This is the problem facing the simulation community right now and it's
>>called "validation of simulations" rather than "verification of
>>simulations".
>
>Obviously the problem is that you are using floating-point arithmetic
>rather than some sort of exact or interval arithmetic.  Plenty of work
>has been done on this, but I guess the interval arithmetic isn't
>implemented in a fast enough way to satisfy your physicist friend.  Why
>shouldn't this be the place to focus your research as a numerical
>analyst?

Yes, the interval arithmetic people have very active programs. I have
been working with them, but there are some problems. For example, they
return intervals with no a priori distribution over the possible
answers, and engineers need a number. There are several research (not
commercial) compilers available to support interval arithmetic, but
businesses only use trusted, commercially available tools.
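To make the first objection concrete, here is a minimal sketch (my own
illustration, not one of the research compilers mentioned above) of what
interval arithmetic hands back: a guaranteed enclosure of the true
result, but a bare interval with no distribution over where in it the
answer lies.

```python
# Minimal interval-arithmetic sketch.  Each value is carried as a
# [lo, hi] pair guaranteed to contain the true result.  (A real
# implementation would use directed/outward rounding; this sketch
# ignores that detail for clarity.)

class Interval:
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (hi if hi is not None else lo)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# An input known only to within a tolerance:
x = Interval(0.99, 1.01)
y = Interval(2.0)

z = x * y - x   # the true value of x*2 - x lies somewhere in z
print(z)        # an interval, not a number -- and no distribution
                # telling the engineer where in it the answer sits
```

Note also that the enclosure is wider than the true range of x*2 - x:
interval arithmetic treats the two occurrences of x as independent, so
the bounds grow with every operation.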

> We all know that physicists and numerical analysts are just
>pretending that floating-point arithmetic has stability properties it
>does not in fact have; so why aren't computers and compilers based on
>interval arithmetic used more widely?

As stated above, the lack of commercially available compilers.

>The other alternative is to make some assumptions about the randomness
>of the rounding error and prove some theorems about the probable
>reliability of the final result, dealing with the "cancellation" problem
>algebraically to the extent possible (I am sure you already know how to
>avoid ill-conditioned matrices and the like; if you know cancellation
>must be occurring you can rearrange your calculation to minimize the
>effect).

The size and complexity of the codes work against us here. Ken Kennedy
at Rice and many others have worked in this area, but they don't have
robust systems available.
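A toy example (mine, not drawn from the codes under discussion) of the
cancellation problem and the kind of algebraic rearrangement the quoted
paragraph suggests: computing 1 - cos(x) for small x.

```python
import math

x = 1e-8

# Naive form: in double precision, cos(1e-8) rounds to exactly 1.0,
# so the subtraction cancels every significant digit.
naive = 1.0 - math.cos(x)

# Rearranged via the identity 1 - cos(x) = 2 sin^2(x/2): no
# subtraction of nearly equal quantities, so full precision survives.
stable = 2.0 * math.sin(x / 2.0) ** 2

print(naive)   # 0.0 -- all information lost
print(stable)  # ~5e-17, the correct value
```

If you know in advance that cancellation must occur at a given step,
this kind of rewrite is routine; the difficulty in large simulation
codes is finding the step in the first place.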

>There are additional issues related to reversibility here - can the
>simulation be organized in such a way that running it backwards is
>possible and returns to the original state?  Theoretically we know this
>can be done, but what is the situation in practice?

I'm not sure about this. It is probably a case-by-case problem. Things
like simulated annealing probably can't be reversed, since they are
heuristics anyway.
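Even at the level of a single update step, exact reversibility fails in
floating point. A small illustration, assuming IEEE double precision
(again my own example, not from the question):

```python
# A state large enough that adding the increment is partially absorbed
# by rounding.  2**53 is the point at which doubles can no longer
# represent every integer.
x0 = 2.0 ** 53        # 9007199254740992.0
dt_v = 1.0            # think of this as dt * velocity

forward = x0 + dt_v        # rounds back to 2**53: the +1 is absorbed
backward = forward - dt_v  # 2**53 - 1, which IS representable

print(backward == x0)      # False: running the step backwards does
                           # not return the original state
```

So "run it backwards and check" is not even theoretically clean for a
floating-point code unless the integrator is designed to be bit-exact
reversible.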


This is my take on what I should be doing. There's lots here for the
philosophers and mathematicians to shoot at.

I.    There are many models (explanations) of a phenomenon.
II.   Scientists want consistency.
III.  Scientists work "model side" and not "axiom-rule" side.
IV.   I+II+III point to investigating topologies and their
      relationship to various algebras, then move to topoi.
V.    Reasoning is easier if you use "rules."
VI.   V+I means representing models in the Carnap/Hempel language of
      observations, etc.
VII.  Uncertainties in our understanding of the phenomenon lead to
      choices in the model (even for the same explanation). These
      choices are approximations.
VIII. Approximations are not analytically solvable, so we are stuck
      with numerical simulations.
IX.   VI+VII leads to one topology.
X.    VI+VIII leads to another topology which can be understood using
      denotational semantics.
XI.   The requirements are that X imply IX imply II
Grand Finale: how do you do X and X => IX reliably?

Just my take on the problem. The question isn't numbers, which is what 
classical numerical analysis is about. It's about reasoning with 
the simulations.

Best regards,

steve
-----
Steve (really "D. E.") Stevenson           Assoc Prof
Department of Computer Science, Clemson,   (864)656-5880.mabell
Support V&V mailing list: ivandv at cs.clemson.edu





