Sequential consistency is a strong safety property for concurrent systems. Informally, sequential consistency implies that operations appear to take place in some total order, and that that order is consistent with the order of operations on each individual process.
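To make the informal definition concrete, here is a toy brute-force check in Python (all names are illustrative sketches, not part of any real library): given each process's operations in program order, a history over a single register is sequentially consistent if some interleaving that preserves every process's program order makes each read return the most recently written value.

```python
# Toy brute-force sequential-consistency check for a register.
# Operation format: ("w", key, value) is a write; ("r", key, value) is a
# read that returned value. Purely illustrative.

def interleavings(hists):
    """Yield every total order that preserves each process's program order."""
    hists = {p: ops for p, ops in hists.items() if ops}
    if not hists:
        yield []
        return
    for p in hists:
        head = hists[p][0]
        rest = {**hists, p: hists[p][1:]}
        for tail in interleavings(rest):
            yield [head] + tail

def legal(order):
    """Reads must return the most recently written value for their key."""
    state = {}
    for op, key, value in order:
        if op == "w":
            state[key] = value
        elif state.get(key) != value:
            return False
    return True

def sequentially_consistent(hists):
    """True iff some program-order-preserving total order is legal."""
    return any(legal(order) for order in interleavings(hists))

# Process A writes x=1 and then reads x=2; process B writes x=2.
# Placing B's write between A's two operations explains the history.
print(sequentially_consistent({
    "A": [("w", "x", 1), ("r", "x", 2)],
    "B": [("w", "x", 2)],
}))  # True
```

Note that the check only asks whether *some* explaining order exists; it says nothing about when operations happened in real time, which is exactly what separates sequential consistency from linearizability.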
Sequential consistency cannot be totally or sticky available; in the event of a network partition, some or all nodes will be unable to make progress.
A process in a sequentially consistent system may be far ahead of, or behind, other processes. For instance, it may read arbitrarily stale state. However, once a process A has observed some operation from process B, it can never observe a state prior to that operation. This, combined with the total ordering property, makes sequential consistency a surprisingly strong model for programmers.
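Continuing the toy checker sketched above (again, purely illustrative), a history in which A reads a stale value is still sequentially consistent, but one in which A observes B's later write and then reads the earlier value is not:

```python
# B writes x=1 then x=2. A reading the stale value 1 is allowed...
print(sequentially_consistent({
    "A": [("r", "x", 1)],
    "B": [("w", "x", 1), ("w", "x", 2)],
}))  # True

# ...but once A has observed x=2, it can never go back and read x=1.
print(sequentially_consistent({
    "A": [("r", "x", 2), ("r", "x", 1)],
    "B": [("w", "x", 1), ("w", "x", 2)],
}))  # False
```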
When you need real-time constraints (e.g. you want to tell some other process about an event via a side channel, and have that process observe that event), try linearizability. When you need total availability, and a total order isn’t required, try causal consistency.
Formally
Leslie Lamport defined sequential consistency in his 1979 paper How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs. He uses “sequentially consistent” to imply…
… the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program.
Viotti and Vukolić decompose sequential consistency into three properties:
- SingleOrder (there exists some total order of operations)
- PRAM (each process observes another process's writes in the order in which that process issued them)
- RVal (the order must be consistent with the semantics of the datatype)
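Read against the toy checker above, the decomposition corresponds roughly to its moving parts (an informal mapping, not Viotti and Vukolić's formal definitions):

```python
# Informal correspondence with the toy checker above:
#   SingleOrder -> sequentially_consistent() asks whether a single total
#                  order over all processes' operations exists at all.
#   PRAM        -> interleavings() only generates orders that preserve each
#                  process's program order.
#   RVal        -> legal() enforces the register's semantics: every read
#                  returns the most recently written value.
```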