Uri Klein, Nir Piterman, and Amir Pnueli.
Effective synthesis of asynchronous systems from GR(1)
specifications.
In Viktor Kuncak and Andrey Rybalchenko, editors, Verification,
Model Checking, and Abstract Interpretation, volume 7148 of Lecture
Notes in Computer Science, pages 283–298. Springer Berlin / Heidelberg,
2012.
[ bib | DOI | .pdf ]
We consider automatic synthesis from linear temporal logic specifications for asynchronous systems. We aim for the produced reactive systems to be used as software in a multithreaded environment. We extend a previous reduction of asynchronous synthesis to synchronous synthesis to the setting of multiple input and multiple output variables. Much like synthesis for synchronous designs, this solution is not practical, as it requires determinization of automata on infinite words and the solution of complicated games. We follow advances in the synthesis of synchronous designs, which restrict the handled specifications but achieve scalability and efficiency. We propose a heuristic that, in some cases, maintains scalability for asynchronous synthesis. Our heuristic can prove that specifications are realizable and extract designs. This is done by a reduction to synchronous synthesis that is inspired by the theoretical reduction.
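As background for the abstract above: GR(1) specifications are conventionally written as an implication from environment assumptions to system guarantees, each a conjunction of an initial condition, a safety constraint, and justice (recurrence) goals. A standard rendering of this form (symbol names are the conventional ones, not taken from this paper) is:

```latex
\varphi \;=\;
\Bigl(\theta_e \,\wedge\, \Box\rho_e \,\wedge\, \bigwedge_{i=1}^{m}\Box\Diamond J^e_i\Bigr)
\;\rightarrow\;
\Bigl(\theta_s \,\wedge\, \Box\rho_s \,\wedge\, \bigwedge_{j=1}^{n}\Box\Diamond J^s_j\Bigr)
```

Here $\theta$, $\rho$, and $J$ are the initial, transition, and justice assertions of the environment ($e$) and the system ($s$); realizability of this class is decidable in polynomial time in the size of the state space.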
Tim King and Clark Barrett.
Exploring and categorizing error spaces using BMC and SMT.
In Proceedings of the 9th International Workshop on
Satisfiability Modulo Theories (SMT '11), July 2011.
[ bib | .pdf ]
We describe an abstract methodology for exploring and categorizing the space of error traces for a system using a procedure based on Satisfiability Modulo Theories and Bounded Model Checking. A key component required by the technique is a way to generalize an error trace into a category of error traces. We describe tools and techniques to support a human expert in this generalization task. Finally, we report on a case study in which the methodology is applied to a simple version of the Traffic Alert and Collision Avoidance System.
Uri Klein, Nir Piterman, and Amir Pnueli.
Effective Synthesis of Asynchronous Systems from GR(1) Specifications: CIMS, NYU – Technical Report TR2011-944 (an extended version of a VMCAI '12 paper).
Technical report, Courant Institute of Mathematical Sciences, NYU,
2011.
[ bib | .pdf ]
We consider automatic synthesis from linear temporal logic specifications for asynchronous systems. We aim for the produced reactive systems to be used as software in a multithreaded environment. We extend a previous reduction of asynchronous synthesis to synchronous synthesis to the setting of multiple input and multiple output variables. Much like synthesis for synchronous designs, this solution is not practical, as it requires determinization of automata on infinite words and the solution of complicated games. We follow advances in the synthesis of synchronous designs, which restrict the handled specifications but achieve scalability and efficiency. We propose a heuristic that, in some cases, maintains scalability for asynchronous synthesis. Our heuristic can prove that specifications are realizable and extract designs. This is done by a reduction to synchronous synthesis that is inspired by the theoretical reduction.
Clark Barrett, Christopher Conway, Morgan Deters, Liana Hadarean, Dejan
Jovanović, Tim King, Andrew Reynolds, and Cesare Tinelli.
CVC4.
In Ganesh Gopalakrishnan and Shaz Qadeer, editors, Computer Aided Verification, volume 6806 of Lecture Notes in Computer Science, chapter 14, pages 171–177. Springer Berlin / Heidelberg, Berlin, Heidelberg, 2011.
[ bib | DOI | .pdf ]
CVC4 is the latest version of the Cooperating Validity Checker. A joint project of NYU and U Iowa, CVC4 aims to support the useful feature set of CVC3 and SMT-LIBv2 while optimizing the design of the core system architecture and decision procedures to take advantage of recent engineering and algorithmic advances. CVC4 represents a completely new code base; it is a from-scratch rewrite of CVC3, and many subsystems have been completely redesigned. Additional decision procedures for CVC4 are currently under development, but for what it currently achieves, it is a lighter-weight and higher-performing tool than CVC3. We describe the system architecture and subsystems of note, and discuss some applications and continuing work.
Uri Klein and Kedar S. Namjoshi.
Formalization and automated verification of RESTful behavior.
In Ganesh Gopalakrishnan and Shaz Qadeer, editors, Computer
Aided Verification, volume 6806 of Lecture Notes in Computer Science,
pages 541–556. Springer Berlin / Heidelberg, 2011.
[ bib | DOI | .pdf ]
REST is a software architectural style used for the design of highly scalable web applications. Interest in REST has grown rapidly over the past decade, spurred by the growth of open web APIs. On the other hand, there is also considerable confusion surrounding REST: many examples of supposedly RESTful APIs violate key REST constraints. We show that the constraints of REST and of RESTful HTTP can be precisely formulated within temporal logic. This leads to methods for model checking and runtime verification of RESTful behavior. We formulate several relevant verification questions and analyze their complexity.
Uri Klein and Kedar S. Namjoshi.
Formalization and Automated Verification of RESTful Behavior: CIMS, NYU – Technical Report TR2011-938 (an extended version of a CAV '11 paper).
Technical report, Courant Institute of Mathematical Sciences, NYU,
2011.
[ bib | .pdf ]
REST is a software architectural style used for the design of highly scalable web applications. Interest in REST has grown rapidly over the past decade, spurred by the growth of open web APIs. On the other hand, there is also considerable confusion surrounding REST: many examples of supposedly RESTful APIs violate key REST constraints. We show that the constraints of REST and of RESTful HTTP can be precisely formulated within temporal logic. This leads to methods for model checking and runtime verification of RESTful behavior. We formulate several relevant verification questions and analyze their complexity.
Uri Klein and Amir Pnueli.
Revisiting synthesis of GR(1) specifications.
In Sharon Barner, Ian Harris, Daniel Kroening, and Orna Raz, editors,
Hardware and Software: Verification and Testing (Proceedings of HVC
'10), volume 6504 of Lecture Notes in Computer Science (LNCS), pages
161–181. Springer Berlin / Heidelberg, 2011.
[ bib | DOI | .pdf ]
The last few years have seen a rising interest in the problem
of synthesizing systems from temporal logic specifications. One major
contributor to this is the recent work of Piterman et al., which showed
how polynomial time synthesis could be achieved for a class of LTL
specifications that is large enough and expressive enough to cover an
extensive number of complex, real-world applications (despite a known
doubly-exponential time lower bound for general LTL formulae). That
approach has already been used extensively for the synthesis of various
applications and as basis for further theoretical work on synthesis.
Here, we expose a fundamental flaw in the initial processing of specifications
in that paper and demonstrate how it may produce incorrect results, declaring
that specifications could not be synthesized when, in fact, they could. We
then identify a class of specifications for which this initial processing is
sound and complete, thus giving insight into why this problem arises in the
first place. We also show that whether specifications belong to the sound and
complete class can easily be checked using the same synthesis techniques.
Finally, for specifications that do not fall into this category, we show how
to modify them so that their processing is indeed both sound and complete.
Dejan Jovanović and Clark Barrett.
Sharing is caring.
In Proceedings of the 8th International Workshop on
Satisfiability Modulo Theories (SMT '10), July 2010.
[ bib | .pdf ]
One of the main shortcomings of the traditional methods for combining theories is the complexity of guessing the arrangement of the variables shared by the individual theories. This paper presents a reformulation of the Nelson-Oppen method that takes into account explicit equality propagation and can ignore pairs of shared variables that the theories do not care about. We show the correctness of the new approach and present care functions for the theory of uninterpreted functions and the theory of arrays. The effectiveness of the new method is illustrated by encouraging experimental results.
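To make the cost mentioned in the abstract concrete: an "arrangement" is a partition of the shared variables into equality classes, so the number of candidate arrangements grows as the Bell numbers. A small illustrative sketch (not code from the paper) that enumerates them:

```python
def partitions(items):
    """Yield every partition of `items` into equality classes.

    Each partition is one candidate "arrangement" of the shared
    variables in a Nelson-Oppen style combination.
    """
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        # `first` forms its own equality class...
        yield [[first]] + p
        # ...or is merged into one of the existing classes.
        for i in range(len(p)):
            yield p[:i] + [p[i] + [first]] + p[i + 1:]

# Three shared variables already give Bell(3) = 5 arrangements;
# the count explodes quickly as more variables are shared.
print(len(list(partitions(["x", "y", "z"]))))   # 5
print(len(list(partitions(list("abcdef")))))    # Bell(6) = 203
```

Ignoring "don't care" pairs of shared variables, as the paper proposes, prunes exactly this search space.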
Dejan Jovanović and Clark Barrett.
Polite theories revisited.
In Proceedings of the 17th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR '10), pages 402–416, Berlin, Heidelberg, 2010. Springer-Verlag.
[ bib | DOI | .pdf ]
The classic method of Nelson and Oppen for combining decision procedures requires the theories to be stably infinite. Unfortunately, some important theories do not fall into this category (e.g., the theory of bit-vectors). To remedy this problem, previous work introduced the notion of polite theories. Polite theories can be combined with any other theory using an extension of the Nelson-Oppen approach. In this paper we revisit the notion of polite theories, fixing a subtle flaw in the original definition. We give a new combination theorem which specifies the degree to which politeness is preserved when combining polite theories. We also give conditions under which politeness is preserved when instantiating theories by identifying two sorts. These results lead to a more general variant of the theorem for combining multiple polite theories.
Christopher Conway and Clark Barrett.
Verifying low-level implementations of high-level datatypes.
In Tayssir Touili, Byron Cook, and Paul Jackson, editors,
Computer Aided Verification, volume 6174 of Lecture Notes in Computer
Science, chapter 28, pages 306–320. Springer Berlin / Heidelberg, Berlin, Heidelberg, 2010.
[ bib | DOI | .pdf ]
For efficiency and portability, network packet processing code is typically written in low-level languages and makes use of bit-level operations to compactly represent data. Although packet data is highly structured, low-level implementation details make it difficult to verify that the behavior of the code is consistent with high-level data invariants. We introduce a new approach to the verification problem, using a high-level definition of packet types as part of a specification rather than an implementation. The types are not used to check the code directly; rather, the types introduce functions and predicates that can be used to assert the consistency of code with programmer-defined data assertions. We describe an encoding of these types and functions using the theories of inductive datatypes, bit vectors, and arrays in the CVC SMT solver. We present a case study in which the method is applied to open-source networking code and verified within the Cascade verification platform.
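As a toy illustration of the gap the paper addresses (this is not code from the paper), consider checking a high-level invariant against a bit-packed header byte, here an IPv4-style version/IHL field:

```python
def parse_version_ihl(byte0: int):
    """Split the first header byte into its two 4-bit fields."""
    version = (byte0 >> 4) & 0x0F   # high nibble: IP version
    ihl = byte0 & 0x0F              # low nibble: header length in 32-bit words
    return version, ihl

def header_invariant(byte0: int) -> bool:
    """High-level invariant: an IPv4 header is at least 20 bytes (IHL >= 5)."""
    version, ihl = parse_version_ihl(byte0)
    return version == 4 and ihl >= 5

print(header_invariant(0x45))  # True: version 4, IHL 5
print(header_invariant(0x42))  # False: an IHL of 2 is too small
```

The paper's approach lets such invariants be stated over the high-level packet type while the checked code manipulates only the raw bits.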
Andrew Reynolds, Liana Hadarean, Cesare Tinelli, Yeting Ge, Aaron Stump, and
Clark Barrett.
Comparing proof systems for linear real arithmetic with LFSC.
In A. Gupta and D. Kroening, editors, Proceedings of the 8th
International Workshop on Satisfiability Modulo Theories (Edinburgh,
England), 2010.
[ bib | .pdf ]
LFSC is a high-level declarative language for defining proof systems and proof objects for virtually any logic. One of its distinguishing features is its support for computational side conditions on proof rules. Side conditions facilitate the design of proof systems that reflect closely the sort of high-performance inferences made by SMT solvers. This paper investigates the issue of balancing declarative and computational inference in LFSC, focusing on (quantifier-free) Linear Real Arithmetic. We discuss a few alternative proof systems for LRA and report on our comparative experimental results on generating and checking proofs in them.
Amir Pnueli and Uri Klein.
Synthesis of programs from temporal property specifications.
In Proceedings of the 7th IEEE/ACM International Conference on
Formal Methods and Models for Co-Design (MEMOCODE '09), pages 1–7, July
2009.
[ bib | DOI | .pdf ]
The paper investigates a development process for reactive programs, in which the program is automatically generated (synthesized) from a high-level temporal specification. The method is based on previous results that proposed a similar synthesis method for the automatic construction of hardware designs from their temporal specifications. Thus, the work reported here can be viewed as a generalization of existing methods for the synthesis of synchronous reactive systems into the synthesis of asynchronous systems.
In the synchronous case it was possible to identify a restricted subclass of formulas and present an algorithm that solves the synthesis problem for these restricted specifications in polynomial time. Here the results are less definitive, in the sense that we can offer some heuristics that may provide polynomial-time solutions only in some of the cases.
Christopher L. Conway, Dennis Dams, Kedar S. Namjoshi, and Clark Barrett.
Points-to analysis, conditional soundness, and proving the absence of
errors.
In Static Analysis Symposium (SAS), pages 62–77, Valencia,
Spain, July 2008.
[ bib | DOI | .pdf ]
It is well known that the use of points-to information can substantially improve the accuracy of a static program analysis. Commonly used algorithms for computing points-to information are known to be sound only for memory-safe programs. Thus, it appears problematic to utilize points-to information to verify the memory safety property without giving up soundness. We show that a sound combination is possible, even if the points-to information is computed separately and only conditionally sound. This result is based on a refined statement of the soundness conditions of points-to analyses and a general mechanism for composing conditionally sound analyses.
A. Cohen, A. Pnueli, and L. D. Zuck.
Verification of transactional memories that support non-transactional memory accesses, February 2008.
[ bib | .pdf ]
Transactional memory is a programming abstraction intended to simplify the synchronization of conflicting memory accesses (by concurrent threads) without the difficulties associated with locks. In a previous work we presented a formal framework for proving that a transactional memory implementation satisfies its specifications, and provided model-checking verification of some implementations using small instantiations. This paper extends the previous work to capture non-transactional accesses to memory, which occur, for example, when using legacy code. We provide a mechanical proof of the soundness of the verification method, as well as a mechanical verification of a version of the popular TCC implementation that includes non-transactional memory accesses. The verification is performed by the deductive temporal checker TLPVS.
Christopher L. Conway, Dennis Dams, Kedar S. Namjoshi, and Clark Barrett.
Points-to analysis, conditional soundness, and proving the absence of
errors.
Technical Report TR2008-910, New York University, Dept. of Computer
Science, 2008.
[ bib | .pdf ]
It is well known that the use of points-to information can substantially improve the accuracy of a static program analysis. Commonly used algorithms for computing points-to information are known to be sound only for memory-safe programs. Thus, it appears problematic to utilize points-to information to verify the memory safety property without giving up soundness. We show that a sound combination is possible, even if the points-to information is computed separately and only conditionally sound. This result is based on a refined statement of the soundness conditions of points-to analyses and a general mechanism for composing conditionally sound analyses.
Ariel Cohen and Kedar S. Namjoshi.
Local proofs for linear-time properties of concurrent programs.
In Computer Aided Verification, 20th International Conference,
CAV 2008, Princeton, NJ, USA, July 2008, Proceedings, volume 5123 of
Lecture Notes in Computer Science, pages 149–161. Springer, 2008.
[ bib | DOI | .pdf ]
This paper develops a local reasoning method to check linear time temporal properties of concurrent programs. In practice, it is often infeasible to model check over the product state space of a concurrent program. The method developed in this paper replaces such global reasoning with checks of (abstracted) individual processes. An automatic refinement step gradually exposes local state if necessary, ensuring that the method is complete. Experiments with a prototype implementation show that local reasoning can hold a significant advantage over global reasoning.
Ariel Cohen, Amir Pnueli, and Lenore D. Zuck.
Formal verification of transactional memories, 2008.
[ bib | .pdf ]
Ariel Cohen, Amir Pnueli, and Lenore D. Zuck.
Mechanical verification of transactional memories with
non-transactional memory accesses.
In Computer Aided Verification, 20th International Conference,
CAV 2008, Princeton, NJ, USA, July 2008, Proceedings, volume 5123 of
Lecture Notes in Computer Science, pages 121–134. Springer, 2008.
[ bib | DOI | .pdf ]
Transactional memory is a programming abstraction intended to simplify the synchronization of conflicting memory accesses (by concurrent threads) without the difficulties associated with locks. In a previous work we presented a formal framework for proving that a transactional memory implementation satisfies its specifications, and provided model-checking verification of some implementations using small instantiations. This paper extends the previous work to capture non-transactional accesses to memory, which occur, for example, when using legacy code. We provide a mechanical proof of the soundness of the verification method, as well as a mechanical verification of a version of the popular TCC implementation that includes non-transactional memory accesses. The verification is performed by the deductive temporal checker TLPVS.
A. Cohen, J. W. O'Leary, A. Pnueli, M. R. Tuttle, and L. D. Zuck.
Verifying correctness of transactional memories.
In Proceedings of the 7th International Conference on Formal Methods in Computer-Aided Design (FMCAD), pages 37–44, November 2007.
[ bib | DOI | .pdf ]
We show how to verify the correctness of transactional memory implementations with a model checker. We show how to specify transactional memory in terms of the admissible interchange of transaction operations, and give proof rules for showing that an implementation satisfies this specification. This notion of an admissible interchange is key to our ability to use a model checker, and lets us capture the various notions of transaction conflict as characterized by Scott. We demonstrate our work using the TLC model checker to verify several well-known implementations described abstractly in the TLA+ specification language.
Amir Pnueli and Yaniv Sa'ar.
All you need is compassion (TR).
Technical report, Department of Computer Science, New York
University, October 2007.
[ bib | .pdf ]
The paper presents a new deductive rule for verifying response properties
under the assumption of compassion (strong fairness) requirements. It
improves on previous rules in that the premises of the new rule are all
first-order. We prove that the rule is sound, and present a constructive
completeness proof for the case of finite-state systems. For the general
case, we present a sketch of a relative completeness proof. We report on the
implementation of the rule in PVS and illustrate its application on some
simple but nontrivial examples.
Prakash Chandrasekaran, Christopher L. Conway, Joseph M. Joy, and Sriram K.
Rajamani.
Programming asynchronous layers with CLARITY.
In Foundations of Software Engineering (FSE), pages 65–74,
Dubrovnik, Croatia, September 2007.
[ bib | DOI | .pdf ]
Asynchronous system components are hard to write, hard to reason about, and (not coincidentally) hard to mechanically verify. In order to achieve high performance, asynchronous code is often written in an event-driven style that introduces non-sequential control flow and persistent heap data to track pending operations. As a result, existing sequential verification and static analysis tools are ineffective on event-driven code.
We describe CLARITY, a programming language that enables analyzable design of asynchronous components. CLARITY has three novel features: (1) Non-blocking function calls, which allow event-driven code to be written in a sequential style. If a blocking statement is encountered during the execution of such a call, the call returns and the remainder of the operation is automatically queued for later execution. (2) Coords, a set of high-level coordination primitives, which encapsulate common interactions between asynchronous components and make high-level coordination protocols explicit. (3) Linearity annotations, which delegate coord protocol obligations to exactly one thread at each asynchronous function call, transforming a concurrent analysis problem into a sequential one.
We demonstrate how these language features enable both a more intuitive expression of program logic and more effective program analysis: most checking is done using simple sequential analysis. We describe our experience in developing a network device driver with CLARITY. We are able to mechanically verify several properties of the CLARITY driver that are beyond the reach of current analysis technology applied to equivalent C code.
Yeting Ge, Clark Barrett, and Cesare Tinelli.
Solving quantified verification conditions using satisfiability
modulo theories.
In Frank Pfenning, editor, Proceedings of the 21st International
Conference on Automated Deduction (CADE '07), volume 4603 of Lecture
Notes in Artificial Intelligence, pages 167–182. Springer-Verlag, July 2007.
[ bib | DOI | .ps ]
First-order logic provides a convenient formalism for describing a wide variety
of verification conditions. Two main approaches to checking such conditions
are pure first-order automated theorem proving (ATP) and automated theorem proving based on satisfiability modulo theories (SMT). Traditional ATP systems are designed to handle quantifiers easily, but often have difficulty reasoning with respect to theories. SMT systems, on the other hand, have built-in support for many useful theories, but have a much more difficult time with quantifiers. One clue on how to get the best of both worlds can be found in the legacy system Simplify, which combines built-in theory reasoning with quantifier instantiation heuristics. Inspired by Simplify and motivated by a desire to provide a competitive alternative to ATP systems, this paper describes a methodology for reasoning about quantifiers in SMT systems. We present the methodology in the context of the Abstract DPLL Modulo Theories framework. Besides adapting many of Simplify's techniques, we also introduce a number of new heuristics. Most important is the notion of instantiation level, which provides an effective mechanism for prioritizing and managing the large search space inherent in quantifier instantiation techniques. These techniques have been implemented in the SMT system CVC3. Experimental results show that our methodology enables CVC3 to solve a significant number of benchmarks that were not solvable with any previous approach.
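The "instantiation level" idea can be caricatured in a few lines (an illustrative sketch, not CVC3's implementation; the `generate` callback stands in for E-matching): instances produced from a level-k term get level k+1, and lower levels are dequeued first, which bounds how deep the matching loop runs.

```python
import heapq

def saturate(initial_terms, generate, max_level=2):
    """Process ground terms in order of instantiation level.

    `generate(term)` returns new ground terms produced by instantiating
    quantifiers against `term`; each inherits level + 1 and terms past
    `max_level` are not expanded further.
    """
    queue = [(0, t) for t in initial_terms]
    heapq.heapify(queue)
    seen = set(initial_terms)
    produced = []
    while queue:
        level, term = heapq.heappop(queue)
        produced.append((level, term))
        if level >= max_level:
            continue  # defer deeper instantiations
        for new in generate(term):
            if new not in seen:
                seen.add(new)
                heapq.heappush(queue, (level + 1, new))
    return produced

# Toy generator: applying f to a term yields one new, deeper term.
print(saturate(["a"], lambda t: ["f(" + t + ")"]))
```

Without the level cutoff, a matching loop like this one would run forever on the toy generator; prioritizing by level is what keeps the search space manageable.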
Clark Barrett and Cesare Tinelli.
CVC3.
In Werner Damm and Holger Hermanns, editors, Proceedings of the
19th International Conference on Computer Aided Verification (CAV '07),
volume 4590 of Lecture Notes in Computer Science, pages 298–302. Springer-Verlag, July 2007.
[ bib | DOI | .ps ]
CVC3, a joint project of NYU and U Iowa, is the latest version of the Cooperating Validity Checker. CVC3 extends and builds on the functionality of its predecessors and includes many new features such as support for additional theories, an abstract architecture for Boolean reasoning, and SMT-LIB compliance. We describe the system and discuss some applications and continuing work.
Clark Barrett, Igor Shikanian, and Cesare Tinelli.
An abstract decision procedure for satisfiability in the theory of
recursive data types.
In Byron Cook and Roberto Sebastiani, editors, Combined
Proceedings of the 4th Workshop on Pragmatics of Decision Procedures in
Automated Reasoning (PDPAR '06) and the 1st International Workshop on
Probabilistic Automata and Logics (PaUL '06), volume 174(8) of
Electronic Notes in Theoretical Computer Science, pages 23–37. Elsevier, June 2007.
[ bib | DOI | .ps ]
The theory of recursive data types is a valuable modeling tool for software verification. In the past, decision procedures have been proposed for both the full theory and its universal fragment. However, previous work has been limited in various ways. In this paper, we present a general algorithm for the universal fragment. The algorithm is presented declaratively as a set of abstract rules which are terminating, sound, and complete. We show how other algorithms can be realized as strategies within our general framework. Finally, we propose a new strategy and give experimental results showing that it performs well in practice.
Ittai Balaban.
Shape Analysis by Augmentation, Abstraction, and
Transformation.
PhD thesis, New York University, New York, NY, U.S.A., May 2007.
[ bib | .pdf ]
The goal of shape analysis is to analyze properties of
programs that perform destructive updating on
dynamically allocated storage (heaps).
In the past, various frameworks have been proposed, the most notable being
the line of work based on shape graphs and canonical abstraction.
Further frameworks have been proposed since, among them ones based on counter
automata, predicate abstraction, and separation logic.
However, among these efforts there has been little work on
liveness properties (e.g., termination) of systems whose verification depends
on deep heap properties.
This dissertation presents a new shape analysis framework based on
predicate abstraction, program augmentation, and
model checking. The combination of predicate abstraction and program
augmentation is collectively known as Ranking Abstraction and provides
a powerful model for verification of liveness properties of
sequential and concurrent systems.
A new predicate abstraction method is presented that allows for automatic
computation of abstract systems without relying on theorem provers.
This approach has several intrinsic limitations, most notably on the class of
analyzable heap shapes. Thus several extensions are described that allow for
complex heap shapes.
Hillel Kugler, Cory Plock, and Amir Pnueli.
Synthesizing reactive systems from LSC requirements using the Play-Engine.
In OOPSLA Companion, pages 801–802, 2007.
[ bib | DOI | .pdf ]
Live Sequence Charts (LSCs) is a scenario-based language for modeling object-based reactive systems with
liveness properties. A tool called the Play-Engine allows a user to create LSCs
using a point-and-click interface and view certain executable traces of
LSC specifications using a feature called smart play-out. The user can play the
role of an external environment by choosing an environment input action and watching the system respond
with a sequence of system actions called a superstep. If at least one superstep exists which does not violate
the requirements, smart play-out chooses one arbitrarily and executes it.
We are interested in constructing reactive software systems that are guaranteed never to violate the
LSC requirements. Non-violation is not guaranteed using smart play-out because it may
choose a superstep leading to a state from which no further supersteps exist.
In this work, we consider an extension to smart play-out that performs a complete analysis
of the state space and guarantees a system strategy for non-violation, provided a strategy exists.
Using this method, we may synthesize correct executable programs directly from LSC requirements.
Ittai Balaban, Amir Pnueli, and Lenore Zuck.
Shape analysis of single-parent heaps.
In Byron Cook and Andreas Podelski, editors, Proc. of the 8th
Int. Conference on Verification, Model Checking, and Abstract
Interpretation, Lect. Notes in Comp. Sci. Springer-Verlag, 2007.
[ bib | DOI | .pdf ]
We define the class of single-parent heap systems, which rely
on a singly-linked heap in order to model destructive updates on
tree structures. This encoding has the advantage of relying on a
relatively simple theory of linked lists in order to support
abstraction computation. To facilitate the application of this
encoding, we provide a program transformation that, given a program
operating on a multi-linked heap without sharing, transforms it into
one over a single-parent heap. It is then possible to apply shape
analysis by predicate and ranking abstraction as in
[51]. The technique has been successfully applied on
examples with trees of fixed arity (balancing of and insertion into
a binary sort tree).
Ariel Cohen and Kedar S. Namjoshi.
Local proofs for global safety properties.
In Computer Aided Verification, 19th International Conference,
CAV 2007, Berlin, Germany, July 3–7, 2007, Proceedings, volume 4590 of Lecture Notes in Computer Science, pages 55–67. Springer, 2007.
[ bib | DOI | .pdf ]
This paper explores the concept of locality in proofs of global safety
properties of asynchronously composed, multiprocess programs. Model
checking on the full state space is often infeasible due to state explosion.
A local proof, in contrast, is a collection of per-process invariants, which together imply the
global safety property.
Local proofs can be compact, but a central problem is that local reasoning is
incomplete. In this paper, we present a “completion” algorithm, which
gradually exposes facts about the internal state of
components, until either a local proof or a real error is discovered.
Experiments show that local reasoning can have significantly better
performance than a reachability computation. Moreover, for some parameterized protocols,
a local proof can be used to show correctness for all instances.
Anna Zaks and Amir Pnueli.
Compiler validation by program analysis of the cross-product.
Technical report, 2007.
[ bib | .pdf ]
The paper presents a framework for proving program equivalence and its application
to the verification of transformations performed by optimizing compilers. To leverage
existing program analysis techniques, we convert the equivalence checking problem to
the analysis of one system: a cross-product of the two input programs. Further, we
show how it can be applied in a setting where the compiler is treated as a black box,
and no compiler annotations are available. Finally, we report on the prototype tool
CoVaC that applies the developed methodology to verification of the LLVM compiler.
CoVaC handles many of the classical intraprocedural compiler optimizations, such as
constant folding, reassociation, common subexpression elimination, code motion, dead
code elimination, branch optimizations, and others.
Clark Barrett, Igor Shikanian, and Cesare Tinelli.
An abstract decision procedure for a theory of inductive data types.
Journal on Satisfiability, Boolean Modeling and Computation,
3:21–46, 2007.
[ bib | DOI | .ps ]
Inductive data types are a valuable modeling tool for software verification. In the past, decision procedures have been proposed for various theories of inductive data types, some focused on the universal fragment, and some focused on handling arbitrary quantifiers. Because of the complexity of the full theory, previous work on the full theory has not focused on strategies for practical implementation. However, even for the universal fragment, previous work has been limited in several significant ways. In this paper, we present a general and practical algorithm for the universal fragment. The algorithm is presented declaratively as a set of abstract rules which we show to be terminating, sound, and complete. We show how other algorithms can be realized as strategies within our general framework, and we propose a new strategy and give experimental results indicating that it performs well in practice. We conclude with a discussion of several useful ways the algorithm can be extended.
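One rule any such procedure for the universal fragment must implement is acyclicity: a variable cannot equal a constructor term that contains it. A toy illustration of the underlying occurs check (not the paper's abstract rules), with constructor applications encoded as tuples:

```python
def occurs(var, term):
    """Check whether variable `var` occurs inside constructor term `term`.

    Constructor applications are tuples, e.g. ("cons", head, tail);
    anything else is treated as a constant or a variable.
    """
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, arg) for arg in term[1:])
    return False

# Acyclicity: x = cons(1, x) has no solution over finite trees, so a
# decision procedure for inductive datatypes must report it unsatisfiable.
print(occurs("x", ("cons", 1, "x")))        # True -> equation is unsat
print(occurs("x", ("cons", 1, ("nil",))))   # False -> no cycle here
```

The abstract rules in the paper generalize this check (alongside congruence and constructor-disjointness reasoning) and prove the resulting system terminating, sound, and complete.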
Ittai Balaban, Amir Pnueli, and Lenore Zuck.
Modular ranking abstraction.
Int. J. Found. Comput. Sci., 18(1):5–44, 2007.
[ bib 
DOI 
.pdf ]
We define the class of single-parent heap systems, which rely on a singly-linked heap in order to model destructive updates on tree structures. This encoding has the advantage of relying on a relatively simple theory of linked lists in order to support abstraction computation. To facilitate the application of this encoding, we provide a program transformation that, given a program operating on a multi-linked heap without sharing, transforms it into one over a single-parent heap. It is then possible to apply shape analysis by predicate and ranking abstraction as in [51]. The technique has been successfully applied on examples with trees of fixed arity (balancing of and insertion into a binary sort tree).
Clark Barrett, Robert Nieuwenhuis, Albert Oliveras, and Cesare Tinelli.
Splitting on demand in SAT modulo theories.
In Miki Hermann and Andrei Voronkov, editors, Proceedings of the
13th International Conference on Logic for Programming, Artificial
Intelligence, and Reasoning (LPAR '06), volume 4246 of Lecture Notes in
Computer Science, pages 512–526. Springer-Verlag, November 2006.
[ bib 
DOI 
.ps ]
Lazy algorithms for Satisfiability Modulo Theories (SMT) combine a generic DPLL-based SAT engine with a theory solver for the given theory T that can decide the T-consistency of conjunctions of ground literals. For many theories of interest, theory solvers need to reason by performing internal case splits. Here we argue that it is more convenient to delegate these case splits to the DPLL engine instead. The delegation can be done on demand for solvers that can encode their internal case splits into one or more clauses, possibly including new constants and literals. This results in drastically simpler theory solvers. We present this idea in an improved version of DPLL(T), a general SMT architecture for the lazy approach, and formalize and prove it correct in an extension of Abstract DPLL Modulo Theories, a framework for modeling and reasoning about lazy algorithms for SMT. A remarkable additional feature of the architecture, also discussed in the paper, is that it naturally includes an efficient Nelson-Oppen-like combination of multiple theories and their solvers.
Clark Barrett, Robert Nieuwenhuis, Albert Oliveras, and Cesare Tinelli.
Splitting on demand in SAT Modulo Theories (tr).
Technical Report 0605, Department of Computer Science, University of
Iowa, August 2006.
[ bib 
.pdf ]
Lazy algorithms for Satisfiability Modulo Theories (SMT) combine a generic DPLL-based SAT engine with a theory solver for the given theory T that can decide the T-consistency of conjunctions of ground literals. For many theories of interest, theory solvers need to reason by performing internal case splits. Here we argue that it is more convenient to delegate these case splits to the DPLL engine instead. The delegation can be done on demand for solvers that can encode their internal case splits into one or more clauses, possibly including new constants and literals. This results in drastically simpler theory solvers. We present this idea in an improved version of DPLL(T), a general SMT architecture for the lazy approach, and formalize and prove it correct in an extension of Abstract DPLL Modulo Theories, a framework for modeling and reasoning about lazy algorithms for SMT. A remarkable additional feature of the architecture, also discussed in the paper, is that it naturally includes an efficient Nelson-Oppen-like combination of multiple theories and their solvers.
Nikhil Sethi and Clark Barrett.
CASCADE: C assertion checker and deductive engine.
In Thomas Ball and Robert B. Jones, editors, Proceedings of the
18th International Conference on Computer Aided Verification (CAV '06),
volume 4144 of Lecture Notes in Computer Science, pages 166169.
Springer-Verlag, August 2006.
[ bib 
DOI 
.ps ]
We present a tool, called CASCADE, to check assertions in C programs as part of a multi-stage verification strategy. CASCADE takes as input a C program and a control file (the output of an earlier stage) that specifies one or more assertions to be checked together with (optionally) some restrictions on program behaviors. For each assertion, CASCADE produces either a concrete trace violating the assertion or a deduction (proof) that the assertion cannot be violated.
Amir Pnueli, Aleksandr Zaks, and Lenore Zuck.
Monitoring interfaces for faults.
In Proceedings of the 5th Workshop on Runtime Verification (RV
2005), volume 144 of Electronic Notes in Theoretical Computer Science,
pages 73–89, May 2006.
[ bib 
DOI 
.pdf ]
We consider the problem of a module interacting with an external interface (environment),
where the interaction is expected to satisfy some system specification Φ.
While we have the full implementation details of the module, we are only given
a partial external specification for the interface. The interface specification being
partial (incomplete) means that the interface displays only a strict subset of the
behaviors allowed by the interface specification.
Based on the assumption that interface specifications are typically incomplete,
we address the question of whether we can tighten the interface specification
into a strategy, consistent with the given partial specification, that will guarantee
that all possible interactions resulting from possible behaviors of the module
will satisfy the system specification Φ. We refer to such a tighter specification
as a Φ-guaranteeing specification. Rather than verifying whether
the interface, which is often an off-the-shelf component, satisfies the tighter specification,
the paper proposes the construction of a runtime monitor which continuously checks
for the existence of a Φ-guaranteeing interface.
We view the module and the external interface as players in a 2-player game.
The interface has a winning strategy if it can guarantee that, no matter what the module does,
the overall specification Φ is met. The problem of incomplete specifications
is resolved by allowing the interface to follow any strategy consistent with the
interface specification. Our approach essentially combines traditional
runtime monitoring and static analysis. This allows going beyond the focus of
traditional runtime monitoring tools (error detection in the execution trace)
toward the focus of static analysis (bug detection in programs).
Amir Pnueli and Anna Zaks.
Translation validation of interprocedural optimizations.
In Proceedings of the 4th International Workshop on Software
Verification and Validation (SVV 2006), 2006.
[ bib 
.pdf ]
Translation Validation is an approach to ensuring compilation correctness in which each
compiler run is followed by a validation pass that proves that the target code produced by
the compiler is a correct translation (implementation) of the source code.
In this work, we extend the existing framework for translation validation of optimizing
compilers to deal with procedural programs and define the notion of correct translation
for reactive programs with intermediate inputs and outputs. We present an Interprocedural
Translation Validation algorithm that automatically proves correct translation of the source
program to the target program in the presence of interprocedural optimizations such as global
constant propagation, inlining, tail-recursion elimination, interprocedural dead code elimination,
dead argument elimination, and cloning.
Ittai Balaban, Ariel Cohen, and Amir Pnueli.
Ranking abstraction of recursive programs.
In VMCAI'2006: Verification, Model Checking, and Abstract
Interpretation, 2006.
[ bib 
DOI 
.pdf ]
We present a method for model checking of safety and liveness
properties over procedural programs, by combining state and
ranking abstractions with procedure summarization. Our
abstraction is an augmented finitary abstraction
[?], meaning that a concrete procedural program
is first augmented with a well-founded ranking function, and then
abstracted by a finitary state abstraction. This results in a
procedural abstract program with strong fairness requirements, which
is then reduced to a finite-state fair discrete system (FDS)
using procedure summarization. This FDS is then model
checked for the property.
Ittai Balaban, Amir Pnueli, and Lenore D. Zuck.
Invisible safety of distributed protocols.
In Automata, Languages and Programming, 33rd International
Colloquium, Part II, volume 4052 of Lecture Notes in Computer Science,
pages 528–539. Springer, 2006.
[ bib 
DOI 
.pdf ]
The method of “Invisible Invariants” has been applied successfully to
protocols that assume a “symmetric” underlying topology, be it cliques,
stars, or rings. In this paper we show how the method can be applied to
proving safety properties of distributed protocols running under arbitrary
topologies. Many safety properties of such protocols have reachability
predicates, which, at first glance, are beyond the scope of the Invisible
Invariants method. To overcome this difficulty, we present a technique,
called “coloring,” that allows, in many instances, replacing the second-order
reachability predicates by first-order predicates, resulting in
properties that are amenable to Invisible Invariants. We demonstrate our
techniques on several distributed protocols, including a variant of Luby's
Maximal Independent Set protocol, the Leader Election protocol used in the
IEEE 1394 (FireWire) distributed bus protocol, and various distributed
spanning tree algorithms.
All examples have been tested using the symbolic model checker TLV.
Sean McLaughlin, Clark Barrett, and Yeting Ge.
Cooperating theorem provers: A case study combining HOL Light and
CVC Lite.
In Alessandro Armando and Alessandro Cimatti, editors,
Proceedings of the 3rd Workshop on Pragmatics of Decision Procedures in
Automated Reasoning (PDPAR '05), volume 144(2) of Electronic Notes in
Theoretical Computer Science, pages 43–51. Elsevier, January 2006.
[ bib 
DOI 
.ps ]
This paper is a case study in combining theorem provers. We define a derived rule in HOL Light, CVC_PROVE, which calls CVC Lite and translates the resulting proof object back to HOL Light. As a result, we obtain a highly trusted proof checker for CVC Lite, while also fundamentally expanding the capabilities of HOL Light.
Ying Hu, Clark Barrett, Benjamin Goldberg, and Amir Pnueli.
Validating more loop optimizations.
In J. Knoop, G. C. Necula, and W. Zimmermann, editors,
Proceedings of the 4th International Workshop on Compiler Optimization meets
Compiler Verification (COCV '05), volume 141(2) of Electronic Notes in
Theoretical Computer Science, pages 69–84. Elsevier, December 2005.
[ bib 
DOI 
.ps ]
Translation validation is a technique for ensuring that a translator, such as a compiler, produces correct results. Because complete verification of the translator itself is often infeasible, translation validation advocates coupling the verification task with the translation task, so that each run of the translator produces verification conditions which, if valid, prove the correctness of the translation. In previous work, the translation validation approach was used to give a framework for proving the correctness of a variety of compiler optimizations, with a recent focus on loop transformations. However, some of these ideas were preliminary and had not been implemented. Additionally, there were examples of common loop transformations which could not be handled by our previous approaches. This paper addresses these issues. We introduce a new rule REDUCE for loop reduction transformations, and we generalize our previous rule VALIDATE so that it can handle more transformations involving loops. We then describe how all of this (including some previous theoretical work) is implemented in our compiler validation tool TVOC.
Lenore Zuck, Amir Pnueli, Benjamin Goldberg, Clark Barrett, Yi Fang, and Ying
Hu.
Translation and run-time validation of loop transformations.
Formal Methods in System Design (FMSD), 27(3):335–360,
November 2005.
[ bib 
DOI 
.ps ]
This paper presents new approaches to the validation of loop optimizations that compilers use to obtain the highest performance from modern architectures. Rather than verify the compiler, the approach of translation validation performs a validation check after every run of the compiler, producing a formal proof that the produced target code is a correct implementation of the source code.
As part of an active and ongoing research project on translation validation, we have previously described approaches for validating optimizations that preserve the loop structure of the code and have presented a simulation-based general technique for validating such optimizations. In this paper, for more aggressive optimizations that alter the loop structure of the code, such as distribution, fusion, tiling, and interchange, we present a set of permutation rules which establish that the transformed code satisfies all the implied data dependencies necessary for the validity of the considered transformation. We describe the extensions to our tool VOC-64 which are required to validate these structure-modifying optimizations.
This paper also discusses preliminary work on run-time validation of speculative loop optimizations. This involves using run-time tests to ensure the correctness of loop optimizations whose correctness cannot be guaranteed at compile time. Unlike compiler validation, run-time validation must not only determine when an optimization has generated incorrect code, but also recover from the optimization without aborting the program or producing an incorrect result. This technique has been applied to several loop optimizations, including loop interchange and loop tiling, and appears to be quite promising.
Clark Barrett, Leonardo de Moura, and Aaron Stump.
Design and results of the first satisfiability modulo theories
competition (SMT-COMP 2005).
Journal of Automated Reasoning, 35(4):373–390, November 2005.
[ bib 
DOI 
.ps ]
The Satisfiability Modulo Theories Competition (SMT-COMP) is intended to spark further advances in the decision procedures field, especially for applications in hardware and software verification. Public competitions are a well-known means of stimulating advancement in automated reasoning. Evaluation of SMT solvers entered in SMT-COMP took place while CAV 2005 was meeting. Twelve solvers were entered, and 1352 benchmarks were collected in seven different divisions.
Clark Barrett, Igor Chikanian, and Cesare Tinelli.
An abstract decision procedure for satisfiability in the theory of
recursive data types.
Technical Report TR2005-878, Department of Computer Science, New York
University, November 2005.
[ bib 
.ps ]
The theory of recursive data types is a valuable modeling tool for software verification. In the past, decision procedures have been proposed for both the full theory and its universal fragment. However, previous work has been limited in various ways, including an inability to deal with multiple constructors, multi-sorted logic, and mutually recursive data types. More significantly, previous algorithms for the universal case have been based on inefficient nondeterministic guesses and have been described in fairly complex procedural terms. We present an algorithm which addresses these issues for the universal theory. The algorithm is presented declaratively as a set of abstract rules which are terminating, sound, and complete. We also describe strategies for applying the rules and explain why our recommended strategy is more efficient than those used by previous algorithms. Finally, we discuss how the algorithm can be used within a broader framework of cooperating decision procedures.
CarlJohan H. Seger, Robert B. Jones, John W. O'Leary, Tom Melham, Mark D.
Aagaard, Clark Barrett, and Don Syme.
An industrially effective environment for formal hardware
verification.
IEEE Transactions on Computer-Aided Design of Integrated
Circuits and Systems, 24(9):1381–1405, September 2005.
[ bib 
DOI 
.pdf ]
The Forte formal verification environment for datapath-dominated hardware is described. Forte has proven to be effective in large-scale industrial trials and combines an efficient linear-time logic model-checking algorithm, namely symbolic trajectory evaluation (STE), with lightweight theorem proving in higher-order logic. These are tightly integrated in a general-purpose functional programming language, which both allows the system to be easily customized and at the same time serves as a specification language. The design philosophy behind Forte is presented, and the elements of the verification methodology that make it effective in practice are also described.
Clark Barrett, Yi Fang, Ben Goldberg, Ying Hu, Amir Pnueli, and Lenore Zuck.
TVOC: A translation validator for optimizing compilers.
In Kousha Etessami and Sriram K. Rajamani, editors, Proceedings
of the 17th International Conference on Computer Aided Verification (CAV
'05), volume 3576 of Lecture Notes in Computer Science, pages
291–295. Springer-Verlag, July 2005.
[ bib 
DOI 
.pdf ]
We describe a compiler validation tool called TVOC that uses the translation validation approach to check the validity of compiler optimizations: for a given
source program, TVOC proves the equivalence of the source code and the target
code produced by running the compiler. TVOC accepts as input the source and target representations of the code in the WHIRL intermediate representation format. Then, TVOC generates a set of verification conditions sufficient to show
that the two representations are equivalent. These verification conditions are validated using the automatic theorem prover CVC Lite. There are two phases to the verification process: the first phase verifies loop transformations using the proof rule PERMUTE; the second phase verifies structure-preserving optimizations using the proof rule VALIDATE. In this paper, we briefly describe
TVOC and illustrate its use with a simple example.
Clark Barrett, Leonardo de Moura, and Aaron Stump.
SMT-COMP: Satisfiability modulo theories competition.
In Kousha Etessami and Sriram K. Rajamani, editors, Proceedings
of the 17th International Conference on Computer Aided Verification (CAV
'05), volume 3576 of Lecture Notes in Computer Science, pages 20–23.
Springer-Verlag, July 2005.
[ bib 
DOI 
.ps ]
Report of the first competition for Satisfiability Modulo Theories (SMT-COMP 2005).
Clark Barrett and Jacob Donham.
Combining SAT methods with non-clausal decision heuristics.
In Wolfgang Ahrendt, Peter Baumgartner, Hans de Nivelle, Silvio
Ranise, and Cesare Tinelli, editors, Selected Papers from the Workshops
on Disproving and the Second International Workshop on Pragmatics of Decision
Procedures (PDPAR '04), volume 125(3) of Electronic Notes in
Theoretical Computer Science, pages 3–12. Elsevier, July 2005.
[ bib 
DOI 
.ps ]
A decision procedure for arbitrary first-order formulas can be viewed as combining a propositional search with a decision procedure for conjunctions of first-order literals, so Boolean SAT methods can be used for the propositional search in order to improve the performance of the overall decision procedure. We show how to combine some Boolean SAT methods with non-clausal heuristics developed for first-order decision procedures. The combination of methods leads to a smaller number of decisions than either method alone.
Sergey Berezin, Clark Barrett, Igor Shikanian, Marsha Chechik, Arie Gurfinkel,
and David L. Dill.
A practical approach to partial functions in CVC Lite.
In Wolfgang Ahrendt, Peter Baumgartner, Hans de Nivelle, Silvio
Ranise, and Cesare Tinelli, editors, Selected Papers from the Workshops
on Disproving and the Second International Workshop on Pragmatics of Decision
Procedures (PDPAR '04), volume 125(3) of Electronic Notes in
Theoretical Computer Science, pages 13–23. Elsevier, July 2005.
[ bib 
DOI 
.ps ]
Most verification approaches assume a mathematical formalism in which functions are total, even though partial functions occur naturally in many applications. Furthermore, although there have been various proposals for logics of partial functions, there is no consensus on which is "the right" logic to use for verification applications. In this paper, we propose using a three-valued Kleene logic, where partial functions return the "undefined" value when applied outside of their domains. The particular semantics are chosen according to the principle of least surprise to the user; if there is disagreement among the various approaches on what the value of the formula should be, its evaluation is undefined. We show that the problem of checking validity in the three-valued logic can be reduced to checking validity in a standard two-valued logic, and describe how this approach has been successfully implemented in our tool, CVC Lite.
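The three-valued semantics described in this abstract can be illustrated with a small sketch of Kleene connectives (an illustration only, not CVC Lite's implementation), where Python's `None` stands in for "undefined":

```python
# Illustrative sketch of three-valued (Kleene) connectives; None plays
# the role of "undefined". Not taken from CVC Lite.

def k_not(a):
    # not(undefined) is undefined
    return None if a is None else (not a)

def k_and(a, b):
    # A False operand dominates even an undefined one; otherwise any
    # undefined operand makes the conjunction undefined.
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    # Dually, a True operand dominates for disjunction.
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def k_div(x, y):
    # A partial function: undefined outside its domain (y == 0).
    return None if y == 0 else x / y
```

For example, a disjunction with an undefined left operand can still be true: `k_or(None, True)` yields `True`, matching the "least surprise" principle, while `k_and(True, None)` stays undefined.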
Benjamin Goldberg, Lenore Zuck, and Clark Barrett.
Into the loops: Practical issues in translation validation for
optimizing compilers.
In J. Knoop, G. C. Necula, and W. Zimmermann, editors,
Proceedings of the 3rd International Workshop on Compiler Optimization meets
Compiler Verification (COCV '04), volume 132(1) of Electronic Notes in
Theoretical Computer Science, pages 53–71. Elsevier, May 2005.
[ bib 
DOI 
.ps ]
Translation Validation is a technique for ensuring that the target code produced by a translator is a correct translation of the source code. Rather than verifying the translator itself, translation validation validates the correctness of each translation, generating a formal proof that it is indeed correct. Recently, translation validation has been applied to prove the correctness of compilation in general, and optimizations in particular.
Tvoc, a tool for the Translation Validation of Optimizing Compilers developed by the authors and their colleagues, successfully handles many optimizations employed by Intel's ORC compiler. Tvoc, however, is somewhat limited when dealing with loop reordering transformations. First, in the theory upon which it is based, separate proof rules are needed for different categories of loop reordering transformations. Second, Tvoc has difficulty dealing with combinations of optimizations that are performed on the same block of code. Finally, Tvoc relies on information, provided by the compiler, indicating which optimizations have been performed (in the case of the current ORC, this instrumentation is fortunately part of the compiler).
This paper addresses all the issues above. It presents a uniform proof rule that encompasses all reordering transformations performed by the Intel ORC compiler, describes a methodology for translation validation in the presence of combinations of optimizations, and presents heuristics for determining which optimizations occurred (rather than relying on the compiler for this information).
Hillel Kugler, David Harel, Amir Pnueli, Yuan Lu, and Yves Bontemps.
Temporal logic for scenario-based specifications.
In Proc. 11th Intl. Conference on Tools and Algorithms for the
Construction and Analysis of Systems (TACAS'05), 2005.
[ bib 
DOI 
.pdf ]
We provide semantics for the powerful scenario-based language of live sequence charts
(LSCs). We show how the semantics of live sequence charts can be captured using
temporal logic. This is done by studying various subsets of the LSC language and
providing an explicit translation into temporal logic. We show how a kernel subset of the
LSC language (which omits variables, for example) can be embedded within the temporal
logic CTL*. For this kernel subset the embedding is a strict inclusion. We show that
existential charts can be expressed using the branching temporal logic CTL, while
universal charts are in the intersection of linear temporal logic and branching temporal
logic, LTL ∩ CTL. Since our translations are efficient, the work described here may be
used in the development of tools for analyzing and executing scenario-based
requirements and for verifying systems against such requirements.
Ittai Balaban, Amir Pnueli, and Lenore Zuck.
Shape analysis by predicate abstraction.
In VMCAI'05, January 2005.
[ bib 
DOI 
.ps ]
The paper presents an approach for shape analysis based on predicate
abstraction. Using a predicate base which involves reachability
relations between program variables pointing into the heap, we are
able to analyze functional properties of programs with destructive
heap updates, such as list reversal and various in-place list
sorts. The approach allows verification of both safety and liveness
properties. The abstraction we use does not require any abstract
representation of the heap nodes (e.g. abstract shapes), but only
reachability relations between the program variables.
The computation of the abstract transition relation is precise and
automatic but does not require the use of a theorem prover. Instead,
we use a small model theorem to identify a truncated (small)
finite-state version of the program whose abstraction is identical to
the abstraction of the unbounded-heap version of the same program. The
abstraction of the finite-state version is then computed by BDD
techniques.
For proving liveness properties, we augment the original system by a
well-founded ranking function, which is abstracted together with the
system. Well-foundedness is then abstracted into strong fairness
(compassion). We show that, for a limited class of programs which
still includes many interesting cases, the small model theorem can be
applied to this joint abstraction.
Independently of the application to shape-analysis examples, we
demonstrate the utility of the ranking abstraction method, and its
advantages over the direct use of ranking functions in a deductive
verification of the same property.
Ittai Balaban, Amir Pnueli, and Lenore Zuck.
Ranking abstraction as companion to predicate abstraction.
In Formal Techniques for Networked and Distributed Systems
(FORTE) 2005, 2005.
[ bib 
DOI 
.pdf ]
Predicate abstraction has become one of the most successful methodologies
for proving safety properties of programs. Recently, several abstraction
methodologies have been proposed for proving liveness properties. This paper
studies ranking abstraction, where a program is augmented by a
non-constraining progress monitor, and further abstracted by predicate
abstraction, to allow for automatic verification of progress
properties. Unlike most liveness methodologies, the augmentation does not
require a complete ranking function that is expected to decrease with each
step. Rather, the inputs are component rankings from which a complete
ranking function may be formed.
The premise of the paper is an analogy between the methods of ranking
abstraction and predicate abstraction, one ingredient of which is refinement:
When predicate abstraction fails, one can refine it. When ranking abstraction
fails, one must determine whether the predicate abstraction or the ranking
abstraction need be refined. The paper presents strategies for determining
which case is at hand.
The other part of the analogy is that of automatically deriving deductive proof
constructs: Predicate abstraction is often used to derive program invariants for
proving safety properties as a boolean combination of the given predicates.
Deductive proof of progress properties requires well-founded ranking functions
instead of invariants. We show how to obtain concrete global ranking functions
from abstract programs.
We demonstrate the various methods on examples with nested loops,
including a bubble sort algorithm on linked lists.
Ittai Balaban, Yi Fang, Amir Pnueli, and Lenore Zuck.
IIV: An invisible invariant verifier.
In Computer Aided Verification (CAV) 2005, 2005.
[ bib 
DOI 
.pdf ]
This paper describes the Invisible Invariant Verifier (IIV), an automatic tool
for the generation of inductive invariants, based on the work in
[?].
The inputs to IIV are a parameterized system and an invariance
property p, and the output of IIV is “success” if it finds an inductive
invariant that strengthens p and “fail” otherwise.
Cory Plock, Benjamin Goldberg, and Lenore Zuck.
From requirements to specifications.
In Proc. 12th Annual IEEE Intl. Conference and Workshop on the
Engineering of ComputerBased Systems. IEEE Computer Society Press, 2005.
[ bib 
DOI 
.ps ]
Live Sequence Charts (LSCs) provide a formal visual framework
for creating scenario-based requirements for reactive systems.
An LSC imposes a partial order over a set of events, such as the
sending or receiving of messages. Each event is associated with
a liveness property, indicating whether it can or must occur.
Time extensions can further specify when these events should
occur. An individual LSC tells a story about a particular fragment of
system behavior, whereas LSC specifications, finite sets of
LSCs, collectively define the total behavior. An LSC
specification may require that more than one distinct scenario, or
multiple instances of the same scenario, execute concurrently. It
is natural to ask whether LSC specifications can be synthesized
into formal languages. Previous work offers translations from
untimed LSCs to finite state machines, and from single (non-concurrent)
LSCs to timed automata. Here, we show that LSC
specifications with concurrency can also be expressed as timed
automata.
Pierre Combes, David Harel, and Hillel Kugler.
Modeling and verification of a telecommunication application using
live sequence charts and the Play-Engine tool.
In Proc. 3rd Int. Symp. on Automated Technology for Verification
and Analysis (ATVA '05), volume 3707 of LNCS, pages 414428.
Springer, 2005.
[ bib 
DOI 
.pdf ]
We apply the language of live sequence charts (LSCs) and the Play-Engine
tool to a real-world complex telecommunication service. The service, called
Depannage, allows a user to make a phone call and ask for help from a doctor,
the fire brigade, a car maintenance service, etc. This kind of service is built on
top of an embedded platform, using both new and existing service
components. The complexity of such applications stems from their distributed
architecture, the various time constraints they entail, and the fact that the
underlying systems are rapidly evolving, introducing new components,
protocols, and associated hardware constraints, all of which must be taken into
account. We present the results of our work on the specification, animation,
and formal verification of the Depannage service, and draw some initial
conclusions as to an appropriate methodology for using a scenario-based
approach in the telecommunication domain. The complete specification of the
Depannage application in LSCs and some animations showing simulation and
verification results are made available as supplementary material.
Ying Hu, Clark Barrett, and Benjamin Goldberg.
Theory and algorithms for the generation and validation of
speculative loop optimizations.
In Proceedings of the 2nd IEEE International Conference on
Software Engineering and Formal Methods (SEFM '04), pages 281289. IEEE
Computer Society, September 2004.
[ bib 
DOI 
.ps ]
Translation validation is a technique that verifies the results of every run of a translator, such as a compiler, instead of the translator itself. Previous papers by the authors and others have described translation validation for compilers that perform loop optimizations (such as interchange, tiling, and fusion), using a proof rule that treats loop optimizations as permutations. In this paper, we describe an improved permutation proof rule which considers the initial conditions and invariant conditions of the loop. This new proof rule not only improves the validation process for compile-time optimizations, it can also be used to ensure the correctness of speculative loop optimizations, the aggressive optimizations which are only correct under certain conditions that cannot be known at compile time. Based on the new permutation rule, with the help of an automatic theorem prover, CVC Lite, an algorithm is proposed for validating loop optimizations. The same permutation proof rule can also be used (within a compiler, for example) to generate the runtime tests necessary to support speculative optimizations.
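The permutation view of loop optimizations can be illustrated with a toy example (an illustration only, not the paper's proof rule or its CVC Lite encoding): a loop interchange is valid when reordering the iteration space cannot change the final state, here because each iteration writes a distinct cell.

```python
# Toy illustration of a legal loop interchange. Each (i, j) iteration
# writes a distinct cell a[i][j], so the two iteration orders are
# permutations of independent updates and yield the same array.

def original(n, m):
    a = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            a[i][j] = i * m + j
    return a

def interchanged(n, m):
    a = [[0] * m for _ in range(n)]
    for j in range(m):      # loops interchanged
        for i in range(n):
            a[i][j] = i * m + j
    return a
```

In the speculative setting the abstract describes, such an independence condition cannot always be established at compile time, so a runtime test (e.g. checking that arrays do not alias) would guard the choice between the optimized and original loops.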
Clark Barrett and Sergey Berezin.
CVC Lite: A new implementation of the cooperating validity checker.
In Rajeev Alur and Doron A. Peled, editors, Proceedings of the
16th International Conference on Computer Aided Verification (CAV '04),
volume 3114 of Lecture Notes in Computer Science, pages 515–518.
Springer-Verlag, July 2004.
[ bib 
DOI 
.ps ]
We describe a tool called CVC Lite (CVCL), an automated theorem prover for formulas in a union of first-order theories. CVCL supports a set of theories which are useful in verification, including uninterpreted functions, arrays, records and tuples, and linear arithmetic. New features in CVCL (beyond those provided in similar previous systems) include a library API, more support for producing proofs, some heuristics for reasoning about quantifiers, and support for symbolic simulation primitives.
Clark Barrett, Benjamin Goldberg, and Lenore Zuck.
Runtime validation of speculative optimizations using CVC.
In Oleg Sokolsky and Mahesh Viswanathan, editors, Proceedings of
the 3rd International Workshop on Runtime Verification (RV '03), volume
89(2) of Electronic Notes in Theoretical Computer Science, pages
87–105. Elsevier, July 2003.
[ bib 
DOI 
.ps ]
Translation validation is an approach for validating the output of
optimizing compilers. Rather than verifying the compiler itself, translation
validation mandates that every run of the compiler generate a formal proof that
the produced target code is a correct implementation of the source code.
Speculative loop optimizations are aggressive optimizations which are only
correct under certain conditions which cannot be validated at compile time.
We propose using an automatic theorem prover together with the
translation validation framework to automatically generate runtime tests for
such speculative optimizations. This runtime validation approach must
not only detect the conditions under which an optimization generates incorrect
code, but also provide a way to recover from the optimization without aborting
the program or producing an incorrect result. In this paper, we apply the
runtime validation technique to a class of speculative reordering
transformations and give some initial results of runtime tests generated by
the theorem prover CVC.
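The runtime-test-with-recovery scheme described above can be sketched as follows (a toy example of the general pattern, not the paper's transformations or CVC's output; all names are invented). A speculatively reversed loop is guarded by a runtime condition, here non-aliasing of the two arrays, and falls back to the original loop when the condition fails:

```python
def source_loop(a, b):
    # Original iteration order; reads a[i+1], writes b[i].
    for i in range(len(b) - 1):
        b[i] = a[i + 1]
    return b

def reversed_loop(a, b):
    # Speculatively reordered (reversed) iteration order.
    for i in reversed(range(len(b) - 1)):
        b[i] = a[i + 1]
    return b

def run_speculative(a, b):
    # Runtime test for the speculation: the reordering is safe only if
    # a and b do not alias, so no iteration reads a location that
    # another iteration writes.
    if a is not b:
        return reversed_loop(a, b)
    return source_loop(a, b)  # recover by running the original loop
```

When `a` and `b` alias, the reversed loop would read already-overwritten elements and produce a different result, which is exactly the situation the generated runtime test must detect.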
Clark Barrett and Sergey Berezin.
A proof-producing Boolean search engine.
In Proceedings of the 1st International Workshop on Pragmatics
of Decision Procedures in Automated Reasoning (PDPAR '03), July 2003.
[ bib 
.ps ]
We present a proof-producing search engine for solving the Boolean
satisfiability problem. We show how the proof-producing infrastructure can be
used to track the dependency information needed to implement important
optimizations found in modern SAT solvers. We also describe how the
same search engine can be extended to work with decision procedures for
quantifier-free first-order logic. Initial results indicate that it is
possible to extend a state-of-the-art SAT solver with proof production in a way
that both preserves the algorithmic performance (e.g., the number of decisions
to solve a problem) and does not incur unreasonable overhead for the proofs.
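The dependency tracking mentioned in this abstract can be illustrated with a minimal sketch (this is an independent toy, not the paper's search engine). During unit propagation, a proof-producing solver records, for each implied literal, the antecedent clause that forced it; on a conflict, these antecedents are the ingredients of the resolution proof:

```python
def propagate(clauses, assignment):
    # Minimal proof-producing unit propagation. Clauses are lists of
    # nonzero ints (negation = sign); assignment is a set of true
    # literals. reason maps each implied literal to the clause that
    # forced it -- the dependency information a proof-producing SAT
    # solver needs to build resolution proofs.
    reason = {}
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause
                          if l not in assignment and -l not in assignment]
            if not unassigned:
                # Conflict: every literal is falsified. The reasons of
                # the falsified literals give the resolution chain.
                return "conflict", clause, reason
            if len(unassigned) == 1:
                lit = unassigned[0]  # unit clause: lit is forced true
                assignment.add(lit)
                reason[lit] = clause
                changed = True
    return "ok", None, reason
```

A real solver maintains this information incrementally with watched literals; the point here is only that each implication carries its antecedent.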
Lenore Zuck, Amir Pnueli, Yi Fang, and Benjamin Goldberg.
VOC: A methodology for the translation validation of optimizing
compilers.
Journal of Universal Computer Science, 2003.
[ bib 
DOI 
.pdf ]
There is a growing awareness, both in industry and academia, of the crucial role of formally
verifying the translation from high-level source code into low-level object code that is typically performed
by an optimizing compiler. Formally verifying an optimizing compiler, as one would verify any other large
program, is not feasible due to its size, ongoing evolution and modification, and, possibly, proprietary
considerations. Translation validation is a novel approach that offers an alternative to the verification of
translators in general and compilers in particular: Rather than verifying the compiler itself, one constructs
a validation tool which, after every run of the compiler, formally confirms that the target code produced
in the run is a correct translation of the source program. The paper presents voc, a methodology for the
translation validation of optimizing compilers. We distinguish between structure preserving optimizations,
for which we establish a simulation relation between the source and target code based on computational
induction, and structure modifying optimizations, for which we develop specialized "permutation rules".
The paper also describes voc64, a prototype translation validator tool that automatically produces
verification conditions for the global optimizations of the SGI Pro64 compiler.
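The simulation-relation approach for structure-preserving optimizations can be sketched concretely (a toy example under invented names, not the VOC tool itself, and a concrete test rather than a proof). For a strength-reduction optimization that replaces `i * 4` by a running accumulator `t`, the candidate simulation relation is `t = 4 * i` together with equality of the observable output `x`; computational induction then checks that corresponding steps preserve the relation:

```python
def source_step(state):
    # Source loop body: x = i * 4; i = i + 1
    i = state["i"]
    return {"i": i + 1, "x": i * 4}

def target_step(state):
    # Target after strength reduction: x = t; t = t + 4; i = i + 1
    i, t = state["i"], state["t"]
    return {"i": i + 1, "t": t + 4, "x": t}

def simulation_holds(steps=20):
    # Computational induction, exercised concretely: from related
    # initial states, each pair of steps must preserve the relation
    # t = 4 * i and keep the observable outputs x equal.
    src = {"i": 0, "x": 0}
    tgt = {"i": 0, "t": 0, "x": 0}
    for _ in range(steps):
        assert tgt["t"] == 4 * tgt["i"]  # relation before the step
        src, tgt = source_step(src), target_step(tgt)
        if src["x"] != tgt["x"]:
            return False
    return True
```

A translation validator discharges the same obligation symbolically, emitting the induction step as a verification condition for a theorem prover rather than running the two programs.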
The ACSysPublications CiteULike group is used to generate a BibTeX file.
bibtex2html is then used to generate this file.
