1 Introduction

Linearizability [23, 24] is a well-studied [16] condition that defines correctness of a concurrent object in terms of a sequential specification. It ensures that for each history (i.e., execution trace) of an implementation, there is a history of the specification such that (1) each thread makes the same method invocations in the same order, and (2) the order of non-overlapping operation calls is preserved. The condition, however, critically depends on the existence of a total order of memory events (e.g., as guaranteed by sequential consistency (SC) [31]) to guarantee contextual refinement [20] and compositionality [24]. Unfortunately, most modern execution environments can only guarantee a partial order of memory events, e.g., due to the effects of relaxed memory [3, 5, 8, 34]. It is known that a naive adaptation of linearizability to the partially ordered setting of weak memory is problematic from the perspective of contextual refinement [18]. In this paper, we propose a generalisation of linearizability to cope with partially ordered executions, which we show satisfies compositionality.

Fig. 1.
figure 1

Writing to shared variables

Fig. 2.
figure 2

Writing to shared stacks

To motivate the problem, consider the following. Figures 1 and 2 show two examplesFootnote 1 of multi-threaded programs on which weak memory model effects can be observed. Figure 1 shows two threads writing to and reading from two shared variables x and y. Under SC, the assert in process 2 never fails: if y equals 1, x must also equal 1. However, this is not true in weak memory models like C11 [8, 28]: if the writes to x and y are relaxed, process 2 may observe the write to y, yet also observe the initial value of x (missing the write to x by process 1).
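The behaviour in Fig. 1 can be reproduced with C++'s `std::atomic`, which implements the C11 model. The following sketch (the function names and the exact annotation placement are ours) shows the annotated variant in which the assert cannot fail; the comments indicate where relaxing the annotations re-enables the weak behaviour.

```cpp
#include <atomic>
#include <thread>

// Shared variables x and y from Fig. 1.
std::atomic<int> x{0}, y{0};

// Process 1: with both stores relaxed, another thread may observe
// y == 1 while still reading the initial value of x; making the
// store to y a release rules this out.
void process1() {
  x.store(1, std::memory_order_relaxed);
  y.store(1, std::memory_order_release);  // relaxed here => assert may fail
}

// Process 2: the acquire load of y synchronises with the release store,
// so observing y == 1 implies the write to x happens-before the read of x.
bool process2() {
  if (y.load(std::memory_order_acquire) == 1)       // relaxed here => weak behaviour
    return x.load(std::memory_order_relaxed) == 1;  // guaranteed under rel/acq
  return true;  // write to y not yet observed: nothing to check
}

// One round of the litmus test; returns whether process 2's check passed.
bool mp_litmus() {
  x = 0;
  y = 0;
  std::thread t1(process1);
  bool ok = true;
  std::thread t2([&] { ok = process2(); });
  t1.join();
  t2.join();
  return ok;
}
```

With both accesses relaxed, C11 permits process 2 to read y == 1 and x == 0; the release/acquire pair restores the intended ordering.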

Such effects are not surprising to programmers familiar with memory models [8, 28]. However, programmer expectations for linearizable objects, even in a weak memory model like C11, are different: if the two stacks \(\textsc {S}\) and \(\textsc {S}'\) in Fig. 2 are linearizable, the expectation is that the assert will never fail since linearizable objects are expected to be compositional [23, 24], i.e., any combination of linearizable objects must itself be linearizable. However, it is indeed possible for the two stacks to be linearizable (using the classical definition), yet for the program to generate an execution in which the assert fails, i.e., the composition of the two stacks is not linearizable. The issue here is that linearizability, when naively applied to a weak memory setting, allows too many operations to be considered “unordered”.

Failure of compositionality is repaired by strengthening the requirements of linearizability on partial orders. Namely, we require the ability to infer enough order in an execution to ensure that the method call S.Push(1) precedes S.Pop, forcing S.Pop to return 1, whenever S’.Push(1) precedes S’.Pop.

The contributions of this paper are as follows.

  • Our main contribution is the development of a new compositional notion of correctness; we call this condition causal linearizability.

  • We establish two meta-theoretical properties of causal linearizability. First, we show that, as expected, causal linearizability reduces to linearizability when the underlying memory model is totally ordered. Second, we show that causal linearizability is the weakest possible strengthening of linearizability that guarantees compositionality, i.e., any correctness condition stronger than linearizability that is also compositional must imply causal linearizability.

  • We present a new inductive simulation-style proof technique for verifying causal linearizability of weak memory implementations of concurrent objects, where the induction is over linear extensions of the happens-before relation. This is the first such proof method for weak memory, and one of the first that enables refinement-based verification, building on existing techniques for linearizability in SC [13, 16, 36].

  • We apply this proof technique and verify causal linearizability of a blocking version of the Treiber Stack executing in the C11 weak memory model. For the standard Treiber Stack under C11, we identify a synchronisation pitfall when using only release-acquire synchronisation.

Causal linearizability is so called because it takes into account the causal relationship between events in a way that is relevant to weak memory models. There is an earlier definition of a condition also called “causal linearizability” introduced by Doherty and Derrick in [11]. However, this earlier definition considers causality at the level of (interleaved) sequences and only applies to memory models such as TSO, that satisfy certain operational properties.Footnote 2 In contrast, the definition in this paper (Definition 6) considers causality directly over partial orders, making it applicable to a wider range of memory models.

The definition of causal linearizability in this paper is built on the same concerns as the earlier definition in [11], but is not a generalisation of it in a technical sense. Thus Definition 6 does not reduce to the condition in [11], or vice versa, although both reduce to classical linearizability [24]. All mentions of “causal linearizability” in this paper refer to Definition 6. Further comparisons to related correctness conditions are given in Sect. 8.

Causal linearizability is defined in terms of an execution structure [32], taking two different relations over operations into account: a “precedence order” (describing operations that are ordered in real time) and a “communication relation”. Execution structures allow one to infer additional precedence orders from communication orders (see Definition 3 (A4)). Applied to Fig. 2, for a weak memory execution in which the assert fails, the execution restricted to stack S would not be causally linearizable in the first place (see Sect. 3 for full details). Execution structures are generic, and can be constructed for any weak memory execution that includes method invocation/response events. We develop one such scheme for mapping executions to execution structures based on the happens-before relation of the C11 memory model.

This paper is structured as follows. We present our motivating example, the Treiber Stack in C11 in Sect. 2; describe the problem of compositionality and motivate our execution-structure based solution in Sect. 3; and formalise causal linearizability and compositionality in Sect. 4. Causal linearizability for C11 is presented in Sect. 5, and verification of the stack described in Sect. 6. Section 7 describes a synchronisation pitfall.

figure a

2 Treiber Stack in C11

The example we consider (see Algorithm 1) is the Treiber Stack [40] (well-studied in an SC setting, but not in a weak memory one), executing in a recent version of the C11 [30] memory model. In C11, commands may be annotated, e.g., R (for release) and A (for acquire), which introduces extra synchronisation, i.e., additional order over memory events [8, 28]. We assume that racy read and write accesses that are not part of an annotated command are unordered or relaxed, i.e., we do not consider the effects of non-atomic operations [8]. Full details of the C11 memory model are deferred until Sect. 5.

Due to weak memory effects, the events under consideration, including method invocation and response events, are partially ordered [5, 6, 14, 28, 30]. As we show in Sect. 3, one cannot simply reapply the standard notion of linearizability in this weaker setting; compositionality demands that we use a modified correctness condition, causal linearizability, which additionally requires “communication” across conflicting operations.

In Algorithm 1, all accesses to Top are via an annotated command. Thus, any read of Top (lines 7, 13) reading from a write to Top (lines 9, 16) induces happens-before order from the write to the read. This order, it turns out, is enough to guarantee invariants that are in turn strong enough to guaranteeFootnote 3 causal linearizability of the Stack (see Sect. 6).

Note that we modify the Treiber Stack so that the Pop operation blocks by spinning instead of returning empty. This is for good reason: it turns out that the standard Treiber Stack (with a non-blocking Pop operation) is not naturally compositional if the only available synchronisation is via release-acquire atomics (see Sect. 7).
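For concreteness, the blocking stack can be sketched with C++ atomics, under the assumption that Algorithm 1's annotated commands correspond to acquire loads of Top and release CASes on Top. The helper names are ours, and memory reclamation (and hence the ABA problem) is deliberately ignored:

```cpp
#include <atomic>

struct Node {
  int val;
  Node* nxt;
};

// All accesses to Top go through annotated (acquire/release) commands.
std::atomic<Node*> Top{nullptr};

void Push(int v) {
  Node* n = new Node{v, nullptr};
  Node* t = Top.load(std::memory_order_acquire);  // read Top (acquire)
  do {
    n->nxt = t;
  } while (!Top.compare_exchange_weak(            // CAS on Top (release)
      t, n, std::memory_order_release, std::memory_order_relaxed));
}

int Pop() {
  while (true) {
    Node* t = Top.load(std::memory_order_acquire);  // read Top (acquire)
    if (t == nullptr) continue;  // blocking: spin instead of returning empty
    if (Top.compare_exchange_weak(                  // CAS on Top (release)
            t, t->nxt, std::memory_order_release, std::memory_order_relaxed)) {
      // Note: t is leaked here; safe reclamation is out of scope.
      return t->val;
    }
  }
}
```

Reading Top via an acquire load of a value written by a release CAS induces the happens-before order discussed above.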

3 Compositionality and Execution Structures

This section describes the problems with compositionality for linearizability of concurrent objects under weak execution environments (e.g., relaxed memory) and motivates a generic solution using execution structures [32].

Notation. First we give some basic notation. Given a set X and a relation \(r \subseteq X \times X\), we say r is a partial order iff it is reflexive, antisymmetric and transitive, and a strict order iff it is irreflexive, antisymmetric and transitive. A partial or strict order r is a total order on X iff either \((a, b) \in r\) or \((b, a) \in r\) for all \(a, b \in X\). We typically use notation such as <, \(\prec \), to denote orders, and write, for example, \(a < b\) instead of \((a, b) \in \mathop<\). We let \(X^*\) denote the set of all finite sequences over X, let \(\langle \rangle \) denote the empty sequence and use \(\circ \) as a concatenation operator on sequences. For a sequence w, we let \(<_w\) be the (total) order on its elements: \(e <_w e'\) if \(w = w_1 \circ \langle e \rangle \circ w_2 \circ \langle e' \rangle \circ w_3\).

Fix a set of invocations Inv and a set of responses \( Res \). A pair from \( Inv \times Res \) represents an operation. Each invocation includes both a method name and any arguments passed to the method; each response includes any values returned from the method. For example, for a stack \(\textsc {S}\) of natural numbers, the invocations of the stack might be represented by the set \(\{\textsc {Push}(n) \mid n \in \mathbb {N}\} \cup \{\textsc {Pop}\}\), and the responses by \(\mathbb {N}\cup \{\bot , \mathtt{empty}\}\), and the set of operations of the stack is

$$\begin{aligned} \varSigma _\textsc {S} = {}&\{(\textsc {Push}(n),\bot ), (\textsc {Pop}, n) \mid n\in \mathbb {N}\} \cup \{(\textsc {Pop},\mathtt{empty})\}. \end{aligned}$$

In an execution, an occurrence of an invocation, response, or operation will take the form of an event. In a full treatment, events would have the form \(e=(l,t,g)\), where l is a label of type \( Inv \cup Res \) (for executions of a concrete implementation) or \( Inv \times Res \) (for executions of an abstract specification), t is a thread identifier from some given set of threads (or processes) Tid and g is a tag uniquely identifying the event in the execution. However, for clarity of presentation, we omit tags in this paper. Furthermore, for uniformity, we assume that all invocations, responses and operations are indexed by thread identifiers. For example, the invocations are now given by the set \(\{\textsc {Push}_t(n),\textsc {Pop}_t \mid t \in Tid \wedge n \in \mathbb {N}\}\). We only make thread ids explicit when necessary. We let \({ tid}(e)\) be the thread identifier of event e. For a sequence or partial order of events w, we let \(w|_t\) be the restriction of w to events e with \({ tid}(e)=t\).

The standard notion of linearizability is defined for a concurrent history, which is a sequence (or total order) of invocation and response events of operations. Since operations are concurrent, an invocation of an operation may not be directly followed by its matching response in this sequence, and hence, a history induces a partial order on operations (through the total order on events). For linearizability, we focus on the precedence order, where, for operations o and \(o'\), we say o precedes \(o'\) in a history iff the response of operation o occurs before the invocation of operation \(o'\) in the history. A concurrent implementation of an object is linearizable if the precedence order for any history of the object can be extended to a total order that is legal for the object’s specification [24]. It turns out that linearizability in this setting is compositional [23, 24]: any history of a family of linearizable objects is itself guaranteed to be linearizable.

Unfortunately, histories in modern execution contexts (e.g., due to relaxed memory or distributed computation) are only partially ordered since processes do not share a single global view of time. It might seem that this is unproblematic for linearizability and that the standard definition can be straightforwardly applied to this weaker setting. However, it turns out that a naive application fails to satisfy compositionality. To see this, consider the following example.

Example 1

Consider an execution of Fig. 2 where the operations are only ordered by a happens-before relation, which is a relation present in many weak-memory models [3, 5, 8, 34]. Since we do not have a global notion of time, we say operation o precedes \(o'\) if the response of o happens before the invocation of \(o'\) (also see [18]). For the C11 memory model, happens-before includes program order, and hence, the program in Fig. 2 may generate the following execution, where operations executed by the same thread are precedence ordered.

figure b

If we restrict the execution above to S only, we can obtain a legal stack behaviour by linearizing \((\textsc {S.Pop}, \mathtt{empty})\) before \((\textsc {S.Push}(1),\bot )\) without contradicting the precedence order in the diagram above. Similarly, the execution when restricted to \(\textsc {S}'\) is linearizable. However, the full execution is not linearizable: ordering the pop of S before its push, and the push of \(\textsc {S}'\) before its pop, contradicts the precedence order.    \(\square \)

A key contribution of this paper is the development of a correctness condition, causal linearizability, that recovers compositionality of concurrent objects with partially ordered histories. Our definition is based on two main insights.

Our first insight is that one must augment the precedence order with additional information about the underlying concurrent execution. In particular, one must introduce information about the communication between operations, e.g., when one operation sees the effects of another. In our example, a pop would see the effect of a push; in the Treiber algorithm it would specifically see the change of Top. Causal linearizability states that the ordering we impose during linearization has to (a) preserve the precedence order of operations and (b) be consistent with the communication order. We represent communication by a dedicated communication relation on operations.

Example 2

Consider again the partial order in Example 1. For stack \(\textsc {S}\), we must linearize pop before push, and for stack \(\textsc {S}'\), push before pop. Causal linearizability mandates the existence of a logical order that contains the precedence order, such that all linear extensions of the logical order are legal w.r.t. the specification object. Moreover, it requires that this logical order is contained within the communication relation. Hence, in Example 1, neither S nor \(\textsc {S}'\) is causally linearizable: for S, the only valid logical order is S.Pop before S.Push(1), but there is currently no communication from S.Pop to S.Push(1). Thus, the execution in Example 1 is not a counterexample to compositionality of causal linearizability. Now consider changing the example by introducing communication as follows:

figure c

Here, communication is introduced in a manner consistent with the logical order, which requires that \((\textsc {S.Pop}, \mathtt{empty})\) is linearized before \((\textsc {S.Push}(1),\bot )\) and that \((\textsc {S'.Push}(1), \bot )\) is linearized before \((\textsc {S'.Pop}, 1)\). So far, we would consider this to be a valid counterexample to compositionality. We describe why this cannot be the case below.    \(\square \)

Our second insight is that the operations (taken as events) together with the precedence order and the communication relation must form an execution structure [32].

Definition 3

(Execution structure). If E is a finiteFootnote 4 set of events, and \(\mathord \sqsubset , \mathord \rightsquigarrow \subseteq E \times E\) are relations over E (the precedence order and the communication relation, respectively), an execution structure is a tuple \(\mathbb {E} = (E, \mathord \sqsubset , \mathord \rightsquigarrow )\) satisfying the following axioms for \(e_1, e_2, e_3, e_4 \in E\).

A1.:

The relation \(\sqsubset \) is a strict order.

A2.:

Whenever \(e_1 \sqsubset e_2\), then \(e_1 \rightsquigarrow e_2\) and \(\lnot (e_2 \rightsquigarrow e_1)\).

A3.:

If \(e_1 \sqsubset e_2 \rightsquigarrow e_3\) or \(e_1 \rightsquigarrow e_2 \sqsubset e_3\), then \(e_1 \rightsquigarrow e_3\).

A4.:

If \(e_1 \sqsubset e_2 \rightsquigarrow e_3 \sqsubset e_4\), then \(e_1 \sqsubset e_4\).    \(\square \)
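For finite executions, the axioms of Definition 3 can be checked directly. The following brute-force sketch is our own encoding (events as integers, relations as sets of pairs) of axioms A1–A4 as stated above:

```cpp
#include <set>
#include <utility>

using Rel = std::set<std::pair<int, int>>;  // relation over event ids

static bool in(const Rel& r, int a, int b) { return r.count({a, b}) > 0; }

// Check axioms A1-A4 for relations prec (precedence) and comm
// (communication) over events 0..n-1.
bool is_execution_structure(int n, const Rel& prec, const Rel& comm) {
  // A1: precedence is a strict order (irreflexive and transitive).
  for (int a = 0; a < n; ++a)
    if (in(prec, a, a)) return false;
  for (int a = 0; a < n; ++a)
    for (int b = 0; b < n; ++b)
      for (int c = 0; c < n; ++c)
        if (in(prec, a, b) && in(prec, b, c) && !in(prec, a, c)) return false;
  // A2: e1 prec e2 implies e1 comm e2 and not e2 comm e1.
  for (auto [a, b] : prec)
    if (!in(comm, a, b) || in(comm, b, a)) return false;
  // A3: prec;comm and comm;prec are contained in comm.
  for (int a = 0; a < n; ++a)
    for (int b = 0; b < n; ++b)
      for (int c = 0; c < n; ++c) {
        if (in(prec, a, b) && in(comm, b, c) && !in(comm, a, c)) return false;
        if (in(comm, a, b) && in(prec, b, c) && !in(comm, a, c)) return false;
      }
  // A4: prec;comm;prec is contained in prec.
  for (int a = 0; a < n; ++a)
    for (int b = 0; b < n; ++b)
      for (int c = 0; c < n; ++c)
        for (int d = 0; d < n; ++d)
          if (in(prec, a, b) && in(comm, b, c) && in(prec, c, d) &&
              !in(prec, a, d))
            return false;
  return true;
}
```

Applied to the relations of Example 2 (with events 0 = S.Push(1), 1 = S'.Push(1), 2 = S'.Pop, 3 = S.Pop), the checker reports failure, matching the analysis in Example 4.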

Example 4

We apply Definition 3 to Example 2. The requirements of an execution structure, in particular axiom A4, necessitate that we introduce additional precedence order edges as follows.

figure d

For example, the precedence edge from \((\textsc {S.Push}(1),\bot )\) to \((\textsc {S.Pop}, \mathtt{empty})\) is induced by axiom A4: \((\textsc {S.Push}(1),\bot )\) precedes \((\textsc {S'.Push}(1),\bot )\) (program order), \((\textsc {S'.Push}(1),\bot )\) communicates to \((\textsc {S'.Pop}, 1)\), and \((\textsc {S'.Pop}, 1)\) precedes \((\textsc {S.Pop}, \mathtt{empty})\) (program order). However, this structure now fails to satisfy axiom A2 and is thus no longer a proper execution structure.    \(\square \)

Hence, for our running example, compositionality no longer fails. We conclude that for causally linearizable stacks, the assert in Fig. 2 always holds if it is executed.

4 Causal Linearizability

Causal linearizability extends linearizability to cope with partially ordered executions. Next, we formally define it and state its compositionality property.

Like ordinary linearizability, causal linearizability is defined by comparing the behaviour of concurrent executions to legal sequential ones. The comparison typically proceeds by arranging concurrent operations into a sequence under some given constraints. This basic principle is kept for the partially ordered case. Legality is defined by a sequential object specification, which we define operationally.

Definition 5

A sequential object is a 4-tuple \((\varSigma , \varGamma , { init}, \tau )\), where \(\varSigma \) is a set of labels, \(\varGamma \) is a set of states, \({ init}\in \varGamma \) is an initial state, and \( \tau \subseteq \varGamma \times \varSigma \times \varGamma \) is a transition relation.

The set \(\varSigma \subseteq Inv \times Res \) consists of pairs of invocations and responses. For our stack example, \(\varGamma = \mathbb {N}^*\), \({ init}= \langle \rangle \) and

$$\begin{aligned} \tau= & {} \{ (s, (\textsc {Push}(n),\bot ),\langle n \rangle \circ s) \mid n \in \mathbb {N} \} \cup \{ (\langle n \rangle \circ s, (\textsc {Pop}, n), s) \mid n \in \mathbb {N} \}\\&{} \cup \{ (\langle \rangle , (\textsc {Pop},\mathtt{empty}), \langle \rangle ) \} \end{aligned}$$

We write \(s \xrightarrow {op} s'\) for \((s,op,s') \in \tau \). For a sequence \(w \in \varSigma ^*\), we write \(s \xrightarrow {w} s'\) iff either \(w = \langle \rangle \) and \(s = s'\), or \(w = \langle op \rangle \circ w'\) and there exists an \(s''\) such that \(s \xrightarrow {op} s''\) and \(s'' \xrightarrow {w'} s'\). The set of legal histories of an object \(\mathbb {S}= (\varSigma , \varGamma , { init}, \tau )\) is given by \({ legal}_{\mathbb {S}} = \{w \in \varSigma ^* \mid \exists s \in \varGamma .\ { init} \xrightarrow {w} s\}\).
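The stack's transition relation \(\tau \) and the induced legality check can be rendered executably. The following sketch is our encoding (with \(\bot \) and \(\mathtt{empty}\) represented via `std::optional`); it replays a candidate history from the initial state \(\langle \rangle \):

```cpp
#include <optional>
#include <vector>

// One operation of the sequential stack object: Push(n) or Pop.
// For Pop, ret holds the popped value; std::nullopt encodes "empty".
struct Op {
  bool is_push;
  int arg;                 // pushed value (only meaningful for Push)
  std::optional<int> ret;  // Pop's return value; ignored for Push
};

// Legality: run the transition relation tau from init = <>.
bool legal(const std::vector<Op>& w) {
  std::vector<int> s;  // Gamma = N*, top of stack at the back
  for (const Op& op : w) {
    if (op.is_push) {
      s.push_back(op.arg);  // (s, (Push(n), bot), <n> o s)
    } else if (op.ret) {
      // (<n> o s, (Pop, n), s): top must match the returned value
      if (s.empty() || s.back() != *op.ret) return false;
      s.pop_back();
    } else if (!s.empty()) {
      return false;  // (<>, (Pop, empty), <>): only legal on the empty stack
    }
  }
  return true;
}
```

For instance, \(\langle (\textsc{Push}(1),\bot ), (\textsc{Pop},1) \rangle \) is legal, while \(\langle (\textsc{Pop},1) \rangle \) is not.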

In general, executions of concurrent processes might invoke operations on more than one object. To capture this, we define a notion of an object product. If \(\mathbb {S}_1 = (\varSigma _1, \varGamma _1, { init}_1, {\tau _1})\) and \(\mathbb {S}_2 = (\varSigma _2, \varGamma _2, { init}_2, {\tau _2})\) are two sequential objects with \(\varSigma _1 \cap \varSigma _2 = \emptyset \), the object product of \(\mathbb {S}_1\) and \(\mathbb {S}_2\) is defined by \(\mathbb {S}_1 \otimes \mathbb {S}_2 = (\varSigma _1 \cup \varSigma _2, \varGamma _1 \times \varGamma _2, ({ init}_1, { init}_2), {\tau _1} \otimes {\tau _2})\), where

$$\begin{aligned} {\tau _1} \otimes {\tau _2} =&\{((s_1, s_2), op_1, (s_1', s_2)) \mid op_1 \in \varSigma _1 \wedge (s_1, op_1, s_1') \in {\tau _1}\} \\&{} \cup \{((s_1, s_2), op_2, (s_1, s_2')) \mid op_2 \in \varSigma _2 \wedge (s_2, op_2, s_2') \in {\tau _2}\}. \end{aligned}$$

Clearly, this construction can be generalised to products of more than two objects, provided their sets of actions are pairwise disjoint. We abstain from such a treatment here since a compositionality result for two objects is sufficient to ensure compositionality over multiple objects.

Causal linearizability compares the concurrent execution given by an execution structure to the legal sequential behaviour. The constraint on this sequentialisation is that the precedence and the communication order of execution structures provide lower and upper bounds for the allowed ordering. More precisely, we use a partial order \(\mathop {<}\) that contains all orders necessary to ensure legality, i.e., there is no linear extension of < that is not legal. Causal linearizability requires (a) the precedence order of execution structures to be preserved by this order, and (b) this order to be contained in the communication relation. We say a strict partial order \(\mathop {<}\) is a logical order of an execution structure iff \(\mathop {<} \subseteq E \times E\), the precedence order is contained in \(\mathop {<}\), and \(\mathop {<}\) is contained in the communication relation. For concurrent objects, one possible instantiation of a logical order is given in [12], where the logical order corresponds to a conflict-based notion of causality.

For a partial order \(\mathop < \subseteq E \times E\), we let \( LE (<)\) be the set of linear extensions of \(\mathop {<}\).

Definition 6

Let \(\mathbb {S}\) be a sequential object. An execution structure \(\mathbb {E}\) is causally linearizable w.r.t. \(\mathbb {S}\) iff there exists a logical order \(\mathop {<}\) of \(\mathbb {E}\) such that \( LE (<) \subseteq { legal}_{\mathbb {S}}\).
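For small executions, Definition 6 can be checked by brute force. The sketch below is our encoding; it searches only total candidate logical orders, so it is sound but not complete w.r.t. partial logical orders. It enumerates the linear extensions of the precedence order and tests whether some order sandwiched between precedence and communication is legal:

```cpp
#include <algorithm>
#include <functional>
#include <set>
#include <utility>
#include <vector>

using Rel = std::set<std::pair<int, int>>;

// All linear extensions LE(<) of a strict partial order over events 0..n-1,
// enumerated naively by filtering permutations (fine for tiny executions).
std::vector<std::vector<int>> linear_extensions(int n, const Rel& lt) {
  std::vector<int> perm(n);
  for (int i = 0; i < n; ++i) perm[i] = i;
  std::vector<std::vector<int>> out;
  do {
    std::vector<int> pos(n);
    for (int i = 0; i < n; ++i) pos[perm[i]] = i;
    bool ok = true;
    for (auto [a, b] : lt) ok = ok && pos[a] < pos[b];
    if (ok) out.push_back(perm);
  } while (std::next_permutation(perm.begin(), perm.end()));
  return out;
}

// Definition 6, brute force: search for a total logical order that
// contains prec, is contained in comm, and is legal. For a total
// order <, LE(<) is the singleton containing exactly that sequence.
bool causally_linearizable(
    int n, const Rel& prec, const Rel& comm,
    std::function<bool(const std::vector<int>&)> legal) {
  for (const auto& seq : linear_extensions(n, prec)) {
    Rel lt;  // the total order induced by seq
    for (int i = 0; i < (int)seq.size(); ++i)
      for (int j = i + 1; j < (int)seq.size(); ++j)
        lt.insert({seq[i], seq[j]});
    bool within_comm = true;
    for (auto [a, b] : lt) within_comm = within_comm && comm.count({a, b}) > 0;
    if (within_comm && legal(seq)) return true;
  }
  return false;
}
```

On the execution of Example 2 restricted to S (events 0 = Push(1), 1 = Pop returning empty, no precedence between them), causal linearizability holds exactly when the communication edge from the pop to the push is present.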

Causal linearizability guarantees compositionality, i.e., the composition of causally linearizable concurrent objects is causally linearizable. More formally, for an execution structure \(\mathbb {E}\) with events E and a subset \(X \subseteq E\), we let \(\mathbb {E}|_X\) be the execution structure restricted to X, i.e., the set of events becomes X and both the precedence order and the communication relation are restricted to \(X \times X\).

Theorem 7

(Compositionality). If \(\mathbb {S}_1 = (\varSigma _1, \ldots )\) and \(\mathbb {S}_2 = (\varSigma _2, \ldots )\) are sequential objects with \(\varSigma _1 \cap \varSigma _2 = \emptyset \) and \(\mathbb {E}\) is an execution structure whose events are labelled by operations from \(\varSigma _1 \cup \varSigma _2\), then \(\mathbb {E}\) restricted to the \(\varSigma _1\)-events is causally linearizable w.r.t. \(\mathbb {S}_1\) and \(\mathbb {E}\) restricted to the \(\varSigma _2\)-events is causally linearizable w.r.t. \(\mathbb {S}_2\) iff \(\mathbb {E}\) is causally linearizable w.r.t. \(\mathbb {S}_1 \otimes \mathbb {S}_2\).    \(\square \)

Standard linearizability as introduced by Herlihy and Wing [24] is defined on executions (histories), which are totally ordered sequences of invocations and responses of operations, i.e., a history h is an element of \(( Inv \cup Res )^*\). Note that this allows executions in which operations are concurrent because invocation and response events are now separated. Histories are required to be well-formed and completeFootnote 5, which means that the projection of a history onto one thread forms a sequence of invocations and corresponding responses.

A strict order \(\prec \) on \( Inv \cup Res \) is well-formed and complete iff for all threads \(t\in Tid\), \(\mathord \prec |_t\) is sequential, i.e., forms an alternating total order of invocations and responses starting with an invocation. Invocations and responses thus form matching pairs as defined by a function mp. For a strict order \(\prec \) such that \(i \prec r\), \((i,r) \in mp(\prec )\) iff \({ tid}(i) = { tid}(r)\) and there is no event e such that \(i \prec e \prec r\) and \({ tid}(e) = { tid}(i)\). This allows us to derive an execution structure from any strict order, and thus also from a history h by using its ordering \(<_h\).

Definition 8

Let \(\prec \) be a well-formed and complete strict order on \( Inv \cup Res \). We say \( exec (\prec )\) is the execution structure corresponding to \(\prec \) if its events are the matching pairs \(mp(\prec )\), a pair \((i,r)\) precedes \((i',r')\) iff \(r \prec i'\), and \((i,r)\) communicates to \((i',r')\) iff \(i \prec r'\).

Note that this construction guarantees a saturation property: for two events \(e,e'\), we either have that e communicates to \(e'\) or that \(e'\) communicates to e.

The classical definition of linearizability only employs the precedence ordering of an execution structure. That is, \( exec (\prec )\) is linearizable w.r.t. a sequential object \(\mathbb {S}\) iff there exists a sequence \(hs \in { legal}_\mathbb {S}\) such that (i) for all \(t \in Tid \), the thread-t operations of \( exec (\prec )\) are exactly the thread-t operations of hs, in the same order (threads execute the same sequence of operations) and (ii) the precedence order of \( exec (\prec )\) is contained in \(<_{hs}\) (precedence ordering between operations is preserved). We say that a strict order \(\prec \) is linearizable iff \( exec (\prec )\) is linearizable and that a history h is linearizable iff \(<_h\) is linearizable.

Theorem 9

Suppose h is a history and \(\mathbb {S}\) a sequential object. Then h is linearizable w.r.t. \(\mathbb {S}\) iff \( exec (<_h)\) is causally linearizable w.r.t. \(\mathbb {S}\).    \(\square \)

We now provide an adequacy result for causal linearizability, i.e., we show that causal linearizability is, in fact, the weakest possible strengthening of linearizability that is compositional. The technical exposition is formalised in terms of correctness conditions that guarantee linearizability. Here, we regard a correctness condition as a function mapping a sequential object to the set of all well-formed strict orders on \( Inv \cup Res \) accepted as being correct for the object, where the mapping is closed under renaming of the operations.

We let \(\mathcal{S}\) be the set of all possible sequential objects and \(\mathcal{H}\) the set of all possible well-formed complete strict orders on \( Inv \cup Res \). To formalise closure under renaming, we assume a bijection \(b: X \rightarrow Y\) between sets X and Y. If \(\mathbb {S}= (X, \varGamma , { init}, \tau )\) is a sequential object, define \(b[\mathbb {S}] = (Y, \varGamma , { init}, b[\tau ])\), where \(b[\tau ] = \{(s, b(x), s') \mid (s, x, s') \in \tau \}\) and if \(w \in X^*\), define \(b[w] \in Y^*\) to be the sequence obtained from w by replacing each \(w_i\) by \(b(w_i)\).

Definition 10

We say a function \(\varDelta : \mathcal{S} \rightarrow 2^\mathcal{H}\) is a correctness condition iff \(\varDelta \) is closed under renaming of operations, i.e., for all bijective functions \(b: \varSigma \rightarrow \varSigma '\) and for all \(\mathbb {S}= (\varSigma , \dots ) \in \mathcal{S}\), we have \( \mathop \prec \in \varDelta (\mathbb {S})\) iff \(b[\mathop \prec ] \in \varDelta (b[\mathbb {S}])\).

Definition 11

We say \(\varDelta : \mathcal{S} \rightarrow 2^\mathcal{H}\) guarantees linearizability iff for all \(\mathbb {S}\in \mathcal{S}\), each \(\mathop \prec \in \varDelta (\mathbb {S})\) is linearizable w.r.t. \(\mathbb {S}\).

Our adequacy result for causal linearizability is defined for well-formed strict orders that have exactly one possible legal linearization. Formally, for a correctness condition \(\varDelta \) and sequential object \(\mathbb {S}\), we say \(\mathord \prec \in \varDelta (\mathbb {S})\) is strongly synchronised iff it is linearizable w.r.t. exactly one \(hs \in { legal}_\mathbb {S}\).

Theorem 12

Let \(\varDelta \) be a compositional correctness condition, and let \(\mathbb {S}\) be a sequential object. Then for any strongly synchronised strict order \(\mathord \prec \in \varDelta (\mathbb {S})\), \( exec (\mathord \prec )\) is causally linearizable.

Strong synchronisation arises naturally for some specifications, e.g., a data structure such as a stack, or a counter object that only provides a fetch-and-increment operation. In general, the execution of any object may be strongly synchronised due to interactions with other objects or a client (see [18]), causing additional precedence order to be introduced. For example, a client thread of a concurrent object may introduce precedence order via program order, by inserting fences between operation calls, or by calling objects that induce additional order [6, 18]. Thus for typical sequential objects, a correctness condition that prohibited strongly synchronised executions would be overly restrictive. Theorem 12 ensures that, for such executions, if the correctness condition is compositional then it is at least as strong as causal linearizability.

5 C11 Executions

We now briefly introduce the C11 memory model so as to be able to reason about programs executing within C11. To this end, we give a (condensed) adaptation of the programming-language-oriented presentation of C11 [14, 28], but ignore various features of C11 not needed for our discussion, including non-atomic operations and fences. For a more complete explanation see, e.g., [28].

The C11 Memory-Model. The memory model specifies how read and write operations access shared state. Let L be a set of such shared locations (ranged over by x, y) and let V be a set of values (ranged over by u, v). Our model employs a set of memory events, which can be partitioned into read events, R, write events, W, and update (read-modify-write) events, U. A read event, e.g., takes the form rd(x, 0). An update event occurs for instance when a CAS operation is executed: a shared location is read, compared to a local variable and then possibly written to. We let \({ Mod}= W \cup U\) be the set of events that modify a location, and \({ Qry}= R \cup U\) be the set of events that query a location. For any memory event e, let loc(e) be the event’s accessed location and \({ Loc}(x) = \{e \mid loc(e) = x\}\) the set of events accessing location x. For any query event e, let rval(e) be the value read; and for any modification event e, let wval(e) be the value written. An event may carry a synchronisation annotation, which (in our restricted setting) is either a release, \(\mathsf{R}\), or an acquire, \(\mathsf{A}\), annotation, and we let ann(e) be event e’s annotation.

Definition 13

A C11 execution is a tuple \(\mathbb {D}= (D, { sb}, { rf}, { mo})\), where D is a set of events, and \({ sb}, { rf}, { mo}\subseteq D \times D\) define the sequenced-before, reads-from and modification order relations, respectively.

We say a C11 execution is valid when it satisfies the following constraints. The sequenced-before relation orders the events of each process: (V1) \({ sb}\) is a strict order such that, for each process p, the projection of sb onto p is a total order. The reads-from relation specifies the write that a particular read event reads from: \({ rf}\subseteq { Mod}\times Qry\), (V2) for all \((w, r) \in { rf}\), \(loc(w) = loc(r)\) and \(wval(w) = rval(r)\), and (V3) for all \(r \in D \cap Qry\), there exists some \(w \in D \cap { Mod}\) such that \((w, r) \in { rf}\). The modification order relates writes to the same location, and these writes are totally ordered: (V4) for all \((w, w') \in { mo}\), \(loc(w) = loc(w')\); and (V5) for all distinct \(w, w' \in { Mod}\) such that \(loc(w) = loc(w')\), \((w, w') \in { mo}\) or \((w', w) \in { mo}\).
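Conditions (V2)–(V5) are simple first-order constraints and can be checked mechanically on a finite execution. The following sketch is our own encoding (events as integers with explicit loc/wval/rval maps):

```cpp
#include <map>
#include <set>
#include <utility>

// Data of a C11 execution relevant to (V2)-(V5); events are integer ids,
// mods and qrys are the Mod and Qry events, loc/wval/rval encode the
// accessed location, written value and read value, respectively.
struct Exec {
  std::set<std::pair<int, int>> rf, mo;
  std::set<int> mods, qrys;
  std::map<int, int> loc, wval, rval;
};

bool valid_data(const Exec& e) {
  // (V2): reads-from pairs agree on location, and value read = value written.
  for (auto [w, r] : e.rf)
    if (e.loc.at(w) != e.loc.at(r) || e.wval.at(w) != e.rval.at(r))
      return false;
  // (V3): every query reads from some modification.
  for (int r : e.qrys) {
    bool has = false;
    for (auto [w, q] : e.rf) has = has || q == r;
    if (!has) return false;
  }
  // (V4): mo only relates same-location modifications.
  for (auto [w, w2] : e.mo)
    if (e.loc.at(w) != e.loc.at(w2)) return false;
  // (V5): distinct same-location modifications are mo-ordered some way.
  for (int w : e.mods)
    for (int w2 : e.mods)
      if (w != w2 && e.loc.at(w) == e.loc.at(w2) &&
          !e.mo.count({w, w2}) && !e.mo.count({w2, w}))
        return false;
  return true;
}
```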

Other relations can be derived from these basic relations. For example, assuming \(D_\mathsf{R}\) and \(D_\mathsf{A}\) denote the sets of events with release and acquire annotations, respectively, the synchronises-with relation, \({ sw}= { rf}\cap (D_\mathsf{R}\times D_\mathsf{A})\), creates interthread ordering guarantees based on synchronisation annotations. The annotations \(\mathsf{R}\) and \(\mathsf{A}\) can thus be used by programmers to achieve certain visibility effects for their write events. The from-read relation, \({ fr}= ({ rf}^{-1} ; { mo}) {\setminus } Id\), relates each query to the events that are in modification order after the modification that it read from. Our final derived relation is the happens-before relation \({ hb}= ({ sb}\cup { sw})^+\), which formalises causality. We say that a C11 execution is consistent if (C1) \({ hb}\) is acyclic, and (C2) \({ hb}; ({ mo}\cup { rf}\cup { fr})\) is irreflexive.
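The derived relations can likewise be computed directly from sb, rf and the annotation sets. This sketch (our encoding) builds \({ sw}\) and the transitive closure \({ hb}= ({ sb}\cup { sw})^+\), and checks consistency condition (C1):

```cpp
#include <set>
#include <utility>

using Rel = std::set<std::pair<int, int>>;

// Derive hb = (sb ∪ sw)^+ where sw = rf ∩ (D_R × D_A); rel and acq are
// the sets of events carrying release and acquire annotations.
Rel derive_hb(const Rel& sb, const Rel& rf, const std::set<int>& rel,
              const std::set<int>& acq) {
  Rel sw;
  for (auto [w, r] : rf)
    if (rel.count(w) && acq.count(r)) sw.insert({w, r});
  Rel hb = sb;
  for (auto p : sw) hb.insert(p);
  // transitive closure by fixpoint iteration
  bool changed = true;
  while (changed) {
    Rel add;
    for (auto [a, b] : hb)
      for (auto [c, d] : hb)
        if (b == c && !hb.count({a, d})) add.insert({a, d});
    for (auto p : add) hb.insert(p);
    changed = !add.empty();
  }
  return hb;
}

// Consistency condition (C1): a transitively closed relation is
// acyclic iff it is nowhere reflexive.
bool acyclic(const Rel& r) {
  for (auto [a, b] : r)
    if (a == b) return false;
  return true;
}
```

On the message-passing execution of Fig. 1 (write to x, release write to y, acquire read of y, read of x), the computed hb orders the write to x before the read of x.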

Method Invocations and Responses. So far, the events appearing in our memory model are low-level read and write events. Our goal is to model algorithms such as the Treiber stack. Thus, we add method events to the standard model, namely, invocations, \( Inv \), and responses, \( Res \). Unlike weak memory at the processor architecture level, where program order may not be preserved [18], program order in C11 is consistent with happens-before order, and hence, invocation and response events can be introduced here in a straightforward manner. The only additional requirement is that validity also requires (V6) for each process, sb restricted to \( Inv \cup Res \) must be an alternating sequence of invocations and matching responses, starting with an invocation. In any execution of a well-formed program, this condition is naturally satisfied.

From C11 Executions to Execution Structures. A C11 execution with method invocations and responses naturally gives rise to an execution structure. First, for a C11 execution \(\mathbb {D}\) and \( IR = Inv \cup Res \), we let \({ hb}_{ir} = { hb}\cap ( IR \times IR )\), i.e., the happens-before relation of \(\mathbb {D}\) restricted to the invocation and response events. By (V6), \({ hb}_{ir}\) is well-formed and complete. Thus, we can apply the construction defined in Sect. 4 to build an execution structure \( exec ({ hb}_{ir})\).

Definition 14

We say that a C11 execution \(\mathbb {D}\) is causally linearizable w.r.t. a sequential object if \( exec ({ hb}_{ir})\) is.    \(\square \)

Compositionality of causal linearizability thus naturally carries over to C11 executions. Finally, we say that a data structure (like the Treiber stack) is causally linearizable on C11 when all its C11 executions are. In the following, we investigate how such a property can be proved.

6 Verification

We now describe an operational method for proving that a given C11 execution is causally linearizable w.r.t. a given sequential object. Our method is based on a simulation-based proof rule described in Sect. 6.1. We illustrate our technique on the Treiber Stack (Sect. 6.2), which is often used as an exercise in the verification literature [16]. Unlike these existing verifications, we consider weak memory, and hence, the stack in Algorithm 1 generates more behaviours than in a standard sequentially consistent setting. The proof in Sect. 6.2 below is the first to verify that the stack under C11 satisfies causal linearizability. Moreover, our proof technique, which considers simulation over a happens-before relation, is novel to this paper.

6.1 A Simulation Relation over Happens-Before

For the remainder of this section, fix a C11 execution \(\mathbb {D}= (D, { sb}, { rf}, { mo})\), and a sequential object \(\mathbb {S}= (\varSigma , \varGamma , { init}, \tau )\). We describe a method for proving that \(\mathbb {D}\) is causally linearizable w.r.t. \(\mathbb {S}\). In what follows, we write \(e \xrightarrow {{ hb}} e'\) when \((e, e') \in hb_{\mathbb {D}}\).

As in the interleaving setting, our method depends on assigning linearization points [16] to each operation. Therefore, the verifier must define a function \( lp : D \cap Inv \rightarrow D\), which returns the memory event that linearizes the given high-level operation, represented by its invocation. For simplicity, in this presentation, we require that our linearizations be injective.Footnote 6 Recall from the previous section that the operations in the execution structure \( exec ({ hb}_{ir})\) are elements of matching pairs from the set \(mp({ hb}_{ir})\). To recover the abstract order of operations corresponding to a linearization, we use the relation \(<_{ lp }\), which orders matching pairs by the happens-before order of their linearization points: \(((i, r), (i', r')) \in {<_{ lp }}\) iff \(( lp (i), lp (i')) \in { hb}\).

Definition 15

We say \( lp \) is a linearization iff for each \(i \in D \cap Inv\) and matching response r, \((i, lp (i)) \in { hb}\) and \(( lp (i), r) \in { hb}\). Furthermore, we say \( lp \) is a legal linearization iff \( LE (<_ lp ) \subseteq { legal}_{\mathbb {S}}\).

Note that, for a legal linearization, we require that every linear extension of \(<_ lp \) yields a legal history under the linearization function \( lp \). Of course, if \({ hb}\) were total, this would reduce to essentially the standard notion of linearization point, and thus our proof technique is a generalization of a standard technique.
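The "every linear extension" requirement of Definition 15 can be made concrete on a toy instance. The following Python sketch uses hypothetical operations and orderings: \(<_ lp \) is represented as a partial order on three stack operations, legality is checked by replaying each linear extension on a sequential stack, and a deliberately weak \(<_ lp \) (with a push left unordered) fails legality while a strengthened order succeeds.

```python
from itertools import permutations

# Operations as (name, argument, result); indices 0, 1, 2.
ops = [("push", 1, None), ("push", 2, None), ("pop", None, 1)]
# Weak <_lp as index pairs: push(1) <_lp pop (the pop returned 1),
# while push(2) is left unordered with respect to both.
weak_order = {(0, 2)}

def linear_extensions(n, order):
    """All total orders of range(n) consistent with the partial order."""
    for perm in permutations(range(n)):
        pos = {i: k for k, i in enumerate(perm)}
        if all(pos[a] < pos[b] for (a, b) in order):
            yield [ops[i] for i in perm]

def legal_stack_history(seq):
    """Replay on a sequential stack; each pop must return the top element."""
    stack = []
    for (name, arg, res) in seq:
        if name == "push":
            stack.append(arg)
        elif not stack or stack.pop() != res:
            return False
    return True

# Weak order: the extension push(1); push(2); pop -> 1 is illegal,
# so lp is NOT a legal linearization.
weak_legal = all(legal_stack_history(h)
                 for h in linear_extensions(3, weak_order))
# Additionally ordering pop <_lp push(2) rules that extension out.
strong_legal = all(legal_stack_history(h)
                   for h in linear_extensions(3, weak_order | {(2, 1)}))
```

The weak order fails precisely because an unordered push may be interleaved between the ordered push and the pop in some linear extension; recovering legality requires more hb ordering between operations, which foreshadows the synchronisation pitfall discussed in Sect. 7.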

The existence of a legal linearization is sufficient to prove causal linearizability of the C11 execution.

Theorem 16

(Legal linearizations guarantee causal linearizability). If there is a legal linearization \( lp \), then \(\mathbb {D}\) is causally linearizable w.r.t. \(\mathbb {S}\).

The key difficulty in using Theorem 16 is showing that a given linearization function is legal. To this end, we extend the standard simulation method to prove legality of a linearization function [16].

In the usual setting, a simulation relation relates states of the implementation to states of the specification, and this relation encodes an induction hypothesis for an induction on the executions of the specification. In our current setting, the simulation relation (which we denote \(\rho \), below) relates sets of low-level actions to abstract states. The simulation relation encodes an induction hypothesis for an induction on the linear extensions of the hb-relation. Thus, at each stage of the induction we can assume \(\rho (Z, \gamma )\) for some set of events \(Z \subseteq D\) and state \(\gamma \in \varGamma \), where Z is downwards-closed with respect to the hb order. The set Z is the set of low-level actions already considered by the induction. In order to be a simulation, the relation \(\rho \) must satisfy the conditions given in the following definition.

Definition 17

(hb-simulation). Suppose \( lp \) is a linearization. An hb-simulation is a relation \(\rho \subseteq 2^D \times \varGamma \) such that:

1. \(\rho (\emptyset , { init})\), and

2. whenever \(\rho (Z, \gamma )\) holds and \(e \in D \setminus Z\) is such that every hb-predecessor of e is in Z, then either

(a) \(\rho (Z \cup \{e\}, \gamma )\), or

(b) there is some invocation \(i \in D \cap Inv \) with \( lp (i) = e\) and a state \(\gamma '\) reached from \(\gamma \) by executing (via \(\tau \)) the operation invoked by i, such that \(\rho (Z \cup \{e\}, \gamma ')\).

Condition 1 ensures that the initial states match up: at the concrete level this is the empty set, and at the abstract level, the initial state. The induction considers the low-level actions in hb order: the low-level action under consideration must be an element of \(D \setminus Z\) such that all its hb-predecessors are already in Z. For each such event e, there are two possibilities: either e is a stutter step (in which case the abstract state is unchanged), or e linearizes the operation of some invocation i (in which case the abstract system takes a step). In either case, the event e is added to the set Z, and we must show that the simulation relation is preserved.
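For a finite execution, the induction scheme above can be checked mechanically. The following is a minimal Python sketch on a hypothetical toy object (a counter with a single inc() operation): the low-level events are the invocation, the linearizing write, and the response, and the candidate \(\rho \) records whether the linearizing write has been considered. The checker enumerates the downward-closed sets Z and verifies Conditions 1, 2a, and 2b of Definition 17.

```python
from itertools import combinations

# Hypothetical toy instance: inc() with invocation "i", linearizing
# write "w", response "r"; hb order is i -> w -> r, and lp(i) = w.
D = ["i", "w", "r"]
hb = {("i", "w"), ("w", "r"), ("i", "r")}
lp = {"i": "w"}
init = 0

def tau(state, op):            # sequential counter: inc is the only op
    return state + 1 if op == "inc" else None

def rho(Z, state):             # candidate simulation relation
    return state == (1 if "w" in Z else 0)

def downward_closed_sets(events, order):
    """All subsets of events closed under hb-predecessors."""
    sets = [frozenset()]
    for k in range(1, len(events) + 1):
        for sub in combinations(events, k):
            Z = frozenset(sub)
            if all(a in Z for (a, b) in order if b in Z):
                sets.append(Z)
    return sets

def check_simulation():
    if not rho(frozenset(), init):                 # Condition 1
        return False
    for Z in downward_closed_sets(D, hb):
        for state in (0, 1):
            if not rho(Z, state):
                continue
            for e in D:
                # e must be hb-minimal in D \ Z
                if e in Z or any(a not in Z for (a, b) in hb if b == e):
                    continue
                Z2 = Z | {e}
                stutter = rho(Z2, state)                      # Condition 2a
                lin = (e == lp.get("i")
                       and rho(Z2, tau(state, "inc")))        # Condition 2b
                if not (stutter or lin):
                    return False
    return True
```

Here the invocation and response are stutter steps, while the write forces the abstract step via \(\tau \); the checker confirms the candidate \(\rho \) is an hb-simulation on this toy execution.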

The existence of an hb-simulation guarantees that \( lp \) is a legal linearization. This fact is captured by the next theorem.

Theorem 18

(hb-simulation guarantees legal linearization). If \( lp \) is a linearization and \(\rho \) is an \({ hb}\)-simulation with respect to \( lp \), then \( lp \) is a legal linearization.

Thus, if we can exhibit a linearization \( lp \) and an hb-simulation \(\rho \), then \(\mathbb {D}\) is causally linearizable w.r.t. \(\mathbb {S}\).

6.2 Case Study: The Treiber Stack

We now describe a linearization function \( lp \) and an \({ hb}\)-simulation relation \(\rho \), demonstrating causal linearizability of the Treiber stack. We fix some arbitrary C11 execution \(\mathbb {D}= (D, { sb}, { rf}, { mo})\) that contains an instance of the Treiber stack. That is, the invocations in \(\mathbb {D}\) are the stack invocations, and the responses are the stack responses (as given in Sect. 3). Furthermore, the low-level memory operations between these invocations and responses are generated by executions of the operations of the Treiber stack (Algorithm 1). As before, we write \(e \xrightarrow {{ hb}} e'\) when \((e, e') \in hb_{\mathbb {D}}\).

The linearization function \( lp \) for the Treiber stack is completely standard: referring to Algorithm 1 on page 4, each operation is linearized at the unique update operation generated by the unique successful CAS at line 9 (for pushes) or line 16 (for pops).

The main component of our simulation relation \(\rho \) guarantees correctness of the data representation, i.e., the sequence of values formed by following next pointers starting with &Top forms an appropriate stack, and we focus on this aspect of the relation. As is typical with verifications of shared-memory algorithms, there are various other properties that would need to be considered in a full proof.

In a sequentially consistent setting, the data representation can easily be obtained from the state (which maps locations to values). However, for C11 executions calculating the data representation requires a bit more work. In what follows, we define various functions that depend on a set Z of events, representing the current stage of the induction.

We define the latest write in Z to a location x as the mo-maximal write to x among the events in Z,

$$ \begin{aligned} latest_Z(x) = \max {}_{ mo}\,\{w \in Z \mid w \text { is a write to } x\} \end{aligned}$$

and the current value of a location x in some set Z as \(cval_Z(x) = wval(latest_Z(x))\), which is the value written by the last write to x in modification order. It is now straightforward to construct the sequence of values corresponding to a location as \({ stackOf}_Z(x) = v \cdot { stackOf}_Z(y)\), where \(v = cval_Z(x.val)\) and \(y = cval_Z(x.nxt)\) (and \({ stackOf}_Z({ null})\) is the empty sequence).
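These representation functions are easy to execute on a concrete event set. The following Python sketch, with hypothetical event names and a two-node stack, implements \(latest_Z\), \(cval_Z\), and \({ stackOf}_Z\) directly over per-location write lists kept in modification order.

```python
# Writes per location, listed in modification order (oldest first),
# as (event id, written value) pairs. None marks the end of the stack.
writes = {
    "&Top":   [("w0", "n1")],    # Top points to node n1
    "n1.val": [("wv1", 10)],
    "n1.nxt": [("wn1", "n2")],   # n1's successor is n2
    "n2.val": [("wv2", 20)],
    "n2.nxt": [("wn2", None)],
}
Z = {"w0", "wv1", "wn1", "wv2", "wn2"}   # events already considered

def latest(Z, loc):
    """mo-latest write to loc among the events in Z (None if no such write)."""
    in_Z = [(e, v) for (e, v) in writes[loc] if e in Z]
    return in_Z[-1] if in_Z else None

def cval(Z, loc):
    """Current value: wval of the latest write in Z."""
    w = latest(Z, loc)
    return w[1] if w else None

def stack_of(Z, node):
    """Sequence of val fields reached by following nxt pointers from node."""
    if node is None:
        return []
    return [cval(Z, node + ".val")] + stack_of(Z, cval(Z, node + ".nxt"))

abstract_stack = stack_of(Z, cval(Z, "&Top"))   # expect [10, 20]
```

With the simulation requirement (1), this computed sequence must equal the state s of the sequential stack at the corresponding stage of the induction.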

Now, assuming that s is a state of the sequential stack, our simulation requires:

$$ \begin{aligned} { stackOf}_Z(cval_Z( \& Top)) = s \end{aligned}$$
(1)

Further, we require that all modifications of &Top are totally ordered by hb:

$$ \begin{aligned} \forall u, u' \in Z.\ u \ne u' \wedge u, u' \text { modify } \& Top \implies (u, u') \in { hb}\vee (u', u) \in { hb}\end{aligned}$$
(2)

to ensure that any new read considered by the induction sees the most recent version of &Top.

In what follows, we illustrate how to verify the proof obligations given in Definition 17, for the case where the new event e is a linearization point. Let e be an update operation that is generated by the CAS at line 9 of the push operation in Algorithm 1. The first step is to prove that every modification of &Top in Z happens before the update event e. Formally,

$$ \begin{aligned} \forall u \in Z.\ u \text { modifies } \& Top \implies (u, e) \in { hb}\end{aligned}$$
(3)

Proving this formally is somewhat involved, but the essential reason is as follows. Note that there is an acquiring read r of &Top executed at line 7 of e's operation and sb-prior to e. r reads from some releasing update u. Thus, by Property 2, and the fact that hb contains sb, e happens after u, and after all hb-prior updates. If there were some update \(u'\) of &Top such that \((u', e) \notin hb\), then \((u', u) \notin hb\), so by Property 2, \((u, u') \in hb\). But it can be shown in this case that the CAS that generated e could not have succeeded, because \(u'\) constitutes an update intervening between r and e. Therefore, there can be no such \(u'\).

Property 3 makes it straightforward to verify that Condition 2b of Definition 17 is satisfied. To see this, note that every linearization point of every operation is a modification of &Top. Thus, if \((i', r')\) is some operation such that \(lp(i') \in Z\) (so that this operation has already been linearized) then \((lp(i'), e) \in hb\).

Using Property 3 it is easy to see that both Properties 1 and 2 are preserved. We show by contradiction that \( latest_{Z'}( \& Top) = e\). Otherwise, we have \( (e, latest_{Z'}( \& Top)) \in mo\), and therefore, by consistency condition (C2), \( (latest_{Z'}( \& Top), e) \notin hb\); but \( latest_{Z'}( \& Top)\) is then a modification of &Top in Z, so this contradicts Property 3.

It follows from \( latest_{Z'}( \& Top) = e\) that \({ stackOf}_{Z'}(cval_{Z'}( \& Top)) = { stackOf}_{Z'}(wval(e))\). Given this, it is straightforward to show that Property 1 is preserved. This step of the proof relies on certain simple properties of push operations. Specifically, we need to show that the current value of the val field of the node being added to the stack (formally, \(cval_Z((wval(e)).val)\)) is the value passed to the push operation; and that the current value of the nxt field (formally, \(cval_Z((wval(e)).nxt)\)) is the current value of &Top when the successful CAS occurs. These properties can be proved using the model of dynamic memory given in Sect. 5.

7 A Synchronisation Pitfall

We now describe an important observation regarding the failure of causal linearizability for read-only operations, caused by weak-memory effects. The issue can be explained using our abstract notion of an execution structure; however, a solution to the problem is not naturally available in C11 with only release-acquire annotations. Note that this observation does not indicate that our definition of causal linearizability is too strong, but rather that release-acquire annotations cannot guarantee the communication from a read-only operation to a writing operation that is necessary for compositionality.

Fig. 3.
figure 3

Read-only operations without communication (not compositional)

Consider the Treiber Stack in Algorithm 1 that returns empty instead of spinning; namely, where the inner loop (lines 12–14) is replaced by the code block

top :=\(^\mathsf{A}\) Top ; if top = null then return empty

Such an implementation could produce executions such as the one in Fig. 3 which, like the examples in Sect. 3, is not compositional. Recovering compositionality requires one to introduce additional communication edges from the pops returning empty to the corresponding push operations. In the C11 memory model, these correspond to "from-read" anti-dependencies from a read to a write overwriting the value read. However, release-acquire synchronisation is not adequate for promoting from-read order in memory to happens-before order.

One fix would be to disallow read-only operations, e.g., by introducing, at the start of each operation, an always-successful release-acquire CAS on a special variable. However, such a fix is somewhat unnatural. Another would be to use C11's SC annotations, which can induce synchronisation across from-read edges. However, the precise meaning of these annotations is still a topic of active research [7, 30].

8 Conclusion and Related Work

We have presented causal linearizability, a new correctness condition for objects implemented on weak memory models that generalises linearizability and addresses the important problem of compositionality. Our condition is not tied to a particular memory model, but can be readily applied to memory models, such as C11, that feature a happens-before relation. We have presented a proof method for verifying causal linearizability. We emphasise that our proof method can be applied directly to a standard axiomatic memory model. Unlike other recent proposals [15, 25], we model C11's relaxed accesses without needing to prohibit their problematic dependency cycles (so-called "load-buffering" cycles). Although causal linearizability has been presented as a condition for concurrent objects, it is possible to extend this condition to cover, for example, transactional memory.

Causal linearizability is closely related to causal \({ hb}\)-linearizability defined in [18], which is a causal relaxation of linearizability that uses specifications strengthened with a happens-before relation. The compositionality result there requires that either a specification is commuting or that a client is unobstructive (does not introduce too much synchronisation). Our result is more general as we place no such restriction on the object or the client. In previous work (see also Sect. 1), we have defined a correctness condition that is only compositional when the client satisfies certain constraints [11]; in contrast, the treatment in this paper achieves a full decoupling between the client and object. Furthermore, that condition is only defined when the underlying memory model is given operationally, rather than axiomatically like C11. Early attempts, targeting TSO architectures, used totally ordered histories but allowed the response of an operation to be moved to a corresponding “flush” event [10, 17, 21, 39]. Others have considered the effects of linearizability in the context of a client abstraction. This includes a correctness condition for C11 that is strictly stronger than linearizability under SC [6]. Although we have applied causal linearizability to C11, causal linearizability itself is more general as it can be applied to any weak memory model with a happens-before relation. Causal consistency [4] is a related condition, aimed at shared-memory and data-stores, which has no notion of precedence order and is not compositional.

There exists a rich body of work on the semantics of weak memory models, including semantics for the C11 memory model [3, 5, 7, 8, 26, 30, 35]. This has been used as a basis for program logics [14, 15, 22, 25, 29, 38] and has given rise to automated tools for analysis of weak memory programs [1, 2, 27, 37]. These logics and associated verification tools, however, are typically not designed to reason about simulation and refinement, which is essential for proofs of (causal) linearizability of concurrent data structures [16]. There are several existing automated techniques for checking (classical) linearizability [9, 19, 33, 41] that use simulation-based techniques. We anticipate that such techniques could be extended to verify hb-simulation and causal linearizability; however, we leave such extensions as future work.