A computational environment is called concurrent when it allows several threads of sequential control, or processes, to overlap in time and to communicate with each other. Such an environment is called synchronous when the length of time it takes any process to execute any sequence of steps can be determined in advance. When no such determination is possible (at least to the precision required), the environment is called asynchronous.
Algorithms designed to work in the asynchronous concurrent environment have appeared in the literature for such data structures as B-trees, hash tables, and queues. The most common standard of correctness for a concurrent algorithm is serializability, which requires that the effects of a concurrent computation be equivalent to those of some serial composition of the same actions. However, several notions of ``equivalence'' exist, depending on whether they take into account the semantics of the data structure, or only the syntax of the computation.
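The serializability criterion can be illustrated with a minimal sketch (a hypothetical example, not drawn from the paper): several processes concurrently insert into a shared set, and because each insertion is atomic, the final state is identical to that produced by some serial ordering of the same insertions.

```python
# Minimal sketch of serializability (assumed example): four threads
# concurrently insert into a shared set. Each insert is made atomic
# with a lock, so the interleaved execution has the same effect as
# *some* serial composition of the four insert operations.
import threading

shared = set()
lock = threading.Lock()

def insert(x):
    # The lock makes each insert an indivisible step.
    with lock:
        shared.add(x)

threads = [threading.Thread(target=insert, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every serial order of these commutative inserts yields the same set,
# so the concurrent outcome is trivially serializable here.
assert shared == {0, 1, 2, 3}
```

Because set insertion is commutative, any serial order gives the same result; the interesting cases, which the standards discussed above distinguish, arise when operations do not commute and only some serial orders are semantically acceptable.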
We examine the advantages and drawbacks of several correctness standards, and identify one standard as being of general utility. Furthermore, we formalize the notion of decisive operations, and show how it can be applied to greatly simplify semantic serializability proofs.
We apply the concepts of syntactic and semantic serializability to the development of several novel algorithms, including an extension of the tree protocol to changing trees, a highly concurrent B-tree algorithm, and a wait-free set manipulation algorithm. Useful techniques appearing in the design are identified, and the correctness proofs serve as examples of the techniques previously described.