
1 Introduction

Solvers for Satisfiability Modulo Theories (SMT) are widely used as back ends for program analysis and verification tools. In a typical application, portions of a program’s source code together with one or more desired properties are translated into formulas which are then checked for satisfiability by an SMT solver. A key challenge in many of these applications is scalability: for larger programs, the solver often fails to report an answer within a reasonable amount of time because the generated formula is too complex. Thus, one key objective in program analysis and verification research is finding ways to reduce the complexity of SMT formulas arising from program analysis.

For imperative languages like C, the modeling of memory can play a significant role in formula complexity. For example, a flat model of memory represents all of memory as a single array of bytes. This model is simple and precise and can soundly and accurately represent type-unsafe constructs and operations like unions, pointer arithmetic, and casts. On the other hand, the flat model implicitly assumes that any two symbolic memory locations could alias or overlap, even when it is known statically that such aliasing is impossible. Introducing disjointness constraints for every pair of non-overlapping locations leads to a quadratic blow-up in formula size, quickly becoming a bottleneck for scalability.

The Burstall model [6] is a well-known alternative that addresses this by having a separate array for each distinct type in the language. It is based on the assumption that pointers with different types never alias, eliminating the need for disjointness constraints between such pointers. However, it cannot model many type-unsafe operations, making it unsuitable for languages like C.

This paper discusses a family of memory models, which we call partitioned memory models, whose goal is to provide much of the efficiency of the Burstall model without losing the accuracy of the flat model. These models rely on a points-to analysis to determine a conservative approximation of which areas of memory may alias or overlap. The memory is then partitioned into distinct arrays for each of these areas.

When deciding which points-to analysis performs best in this context, there are two key attributes to consider: (1) the precision of the analysis (i.e. how many alias groups are generated); and (2) the ability to track the access size of objects in the alias group. The first is important because more alias groups means more partitions and fewer disjointness constraints. The second is important because if we know that all objects in the group are of a certain access size, then we can model the corresponding partition as an array whose elements are that size. After a review of existing field-sensitive points-to analyses (including Steensgaard’s original analysis [27], Steensgaard’s field-sensitive analysis [26], and Data Structure Analysis (DSA) [18]), we introduce a new cell-based points-to analysis and show that it improves on both of these attributes when compared to existing analyses. This is supported by experiments using the Cascade verification platform [29].

2 Memory Models for C Program Analysis

Consider the C code in Fig. 1. We will look at how to model the code using the flat memory model, the Burstall memory model, and the partitioned memory models.

Fig. 1. Code with unsafe pointer cast.

Fig. 2. The points-to graph of the code in Fig. 1. Each \(\tau _i\) represents a distinct alias group.

Flat Model. In the flat model, a single array of bytes is used to track all memory operations, and each program variable is modeled as the content of some address in memory. Suppose M is the memory array, and suppose a and b are the locations in M that store the values of the first two variables assigned in Fig. 1. We could then model the first two lines of the code in Fig. 1 (following SMT-LIB syntax [3]) as follows:

figure a

This is typical of the flat model: each program statement layers another store on top of the current model of memory. As a result, the depth of nested stores can get very large. Also, note that C guarantees that the memory regions of any two variables are non-overlapping. But the flat model must explicitly model this using an assumption on each pair of variables. This can be done with the following disjointness predicate, where p and q denote locations of variables, and \( size (p)\) is the size of the memory region starting at location p:

$$\begin{aligned} disjoint (p, q) \equiv p + size (p) \le q \vee q + size (q) \le p. \end{aligned}$$

For the code in Fig. 1, the required assumption is: \( disjoint (a, b) \wedge disjoint (a, c) \wedge disjoint (c, b)\). Deeply nested stores and the need for such disjointness assertions severely limit the scalability of the flat model.
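To make the quadratic blow-up concrete, the disjointness predicate above can be sketched in Python (our own illustration; the three-variable layout below is hypothetical, not taken from Fig. 1):

```python
# Sketch (not from the paper): the flat model's pairwise disjointness
# constraints, using plain Python integers for locations and sizes.
from itertools import combinations

def disjoint(p, size_p, q, size_q):
    # disjoint(p, q) == p + size(p) <= q  or  q + size(q) <= p
    return p + size_p <= q or q + size_q <= p

# Hypothetical layout: three variables of 4 bytes each.
layout = {"a": (0, 4), "b": (4, 4), "c": (8, 4)}

# One constraint per unordered pair: n variables need n*(n-1)/2 constraints.
pairs = list(combinations(layout.values(), 2))
assert len(pairs) == 3
assert all(disjoint(p, sp, q, sq) for (p, sp), (q, sq) in pairs)
```

For n variables, the number of such pairwise constraints grows as n(n-1)/2, which is the quadratic blow-up described above.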

Burstall Model. In the Burstall model, memory is split into several arrays based on the type of the data being stored. In Fig. 1, there are four different types of data, so the model would use four arrays: \(M_{int}\), \(M_{char}\), \(M_{int*}\) and \(M_{char*}\). In this model, a is a location in \(M_{int}\), b is a location in \(M_{int*}\), and c is a location in \(M_{char*}\). Disjointness is guaranteed implicitly by the distinctness of the arrays. The depth of nested stores is also limited to the number of stores to locations having the same type rather than to the number of total stores to memory. Both of these features greatly improve performance. However, the model fails to prove the assertion in Fig. 1. The reason is that the assumption that pointers to different types never alias is incorrect for type-unsafe languages like C.
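The effect of splitting memory by type can be sketched as follows (a Python toy model for illustration, not the actual SMT encoding used by verification tools):

```python
# Sketch: the Burstall model keeps one memory array per static type.
# A store through M_int can never clobber a location tracked in another
# array, so no cross-type disjointness constraints are needed.
memory = {"int": {}, "char": {}, "int*": {}, "char*": {}}

def store(ty, loc, val):
    memory[ty][loc] = val

def select(ty, loc):
    return memory[ty].get(loc)

store("int", 0, 42)
store("char*", 0, 0xdead)        # same offset, but a different array
assert select("int", 0) == 42    # unaffected by the char* store
```

The unsoundness for C is also visible here: a type-unsafe program that reads the same bytes through two different types would see two unrelated values in this model.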

Partitioned Models. In the partitioned memory model, memory is divided into regions based on information acquired by running a points-to analysis. The result of a standard points-to analysis is a points-to graph whose vertices are sets of program locations called alias groups. An edge from an alias group \(\tau _1\) to an alias group \(\tau _2\) indicates that dereferencing a location in \(\tau _1\) gives a location in \(\tau _2\). As an example, the points-to graph for the code in Fig. 1 is shown in Fig. 2. Three alias groups are identified, one for each of the three variables. We can thus store the values for these variables in three different memory arrays (making their disjointness trivial). Note that according to the points-to graph, a dereference of either pointer variable must be modeled using the array containing the location of the variable they may point to, meaning that the model is sufficiently precise to prove the assertion. Like the Burstall model, the partitioned model divides memory into multiple arrays, reducing the burden on the solver. However, it does this in a way that is sound with respect to type-unsafe behavior.

The points-to analysis employed by the partitioned memory model must derive points-to relations that are also functions. That is, each node must have only a single successor in the points-to graph. This is because the successor determines the array to use when modeling a pointer dereference. The unification-based analyses proposed by Steensgaard [27] naturally guarantee this property. However, inclusion-based analyses proposed by Andersen [1] and others violate it. For this reason, we focus on Steensgaard’s method and its variants.
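The functionality requirement can be illustrated with a minimal Steensgaard-style union-find sketch (our own toy code, not the analysis of [27]): whenever a node would acquire a second points-to successor, the two successors are unified instead, so every equivalence class keeps at most one successor.

```python
# Sketch: unification keeps the points-to relation functional.
class Node:
    def __init__(self):
        self.parent = self
        self.succ = None   # the unique points-to successor, if any

def find(n):
    while n.parent is not n:
        n.parent = n.parent.parent  # path halving
        n = n.parent
    return n

def union(a, b):
    a, b = find(a), find(b)
    if a is b:
        return a
    b.parent = a
    if a.succ is None:
        a.succ = b.succ
    elif b.succ is not None:
        union(a.succ, b.succ)       # recursively merge successors
    return a

def add_points_to(p, t):
    p = find(p)
    if p.succ is None:
        p.succ = t
    else:
        union(p.succ, t)            # never create a second successor

# p may point to x or to y: x and y end up in one alias group.
p, x, y = Node(), Node(), Node()
add_points_to(p, x)
add_points_to(p, y)
assert find(x) is find(y)
```

An inclusion-based analysis would instead record both edges p → x and p → y, which is exactly what the partitioned model cannot use: it would leave no single array for modeling a dereference of p.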

Partitioned memory models that rely on unification-based points-to analyses are already used in a number of verification tools. For example, SMACK [24] leverages data structure analysis (DSA) [18], a field-sensitive and context-sensitive points-to analysis, to divide the memory. SeaHorn [13] uses a simpler context-insensitive variant of DSA. CBMC [7] and ESBMC [20] also use may-points-to information to refine their memory models. These tools illustrate the practical value of partitioned memory models. Note that in each case, the efficiency of the memory model depends heavily on the precision of the underlying points-to analysis. The more precise the analysis, the more partitions there are, which means fewer disjointness constraints and shallower array terms, both of which reduce the size and difficulty of the resulting SMT formulas.

3 Field-Sensitive Points-to Analyses

Field-sensitivity is one dimension along which the precision of a points-to analysis can be measured. We say a pointer analysis is field-sensitive if it tracks each record field as its own set of locations. That is, a store to one field does not affect other fields. In this sense, a field-sensitive analysis is much more precise than a field-insensitive one. Unfortunately, field-sensitivity is complicated by the weak type system of the C language. In the presence of union types, pointer casts, and pointer arithmetic, fields may not be guaranteed to be disjoint. Steensgaard's original points-to analysis [27] treats all fields of a record as a single alias group, while his field-sensitive analysis [26] improves precision by separating fields into distinct alias groups only if the fields do not participate in any type-unsafe operations. Yong et al. [30] also propose to collapse fields upon detecting a field access through a pointer whose type does not match the declared type of the field. However, none of these analyses are able to distinguish fields of heap data structures (dynamic memory allocations).

Lattner et al. [18] present a data structure based points-to analysis (DSA), a unification-based, field-sensitive, and context-sensitive analysis. DSA explicitly tracks the type information and data layout of heap data structures and performs a conservative but safe field-sensitive analysis. The other key feature of DSA is that two objects allocated at the same location in a function called from different call sites are distinguished. It is in this sense that the analysis is context-sensitive. This feature, however, is orthogonal to our focus in this paper.

DSA computes a Data Structure Graph (DS graph) to explicitly model the heap. A DS graph consists of a set of nodes (DS nodes) representing memory objects and a set of points-to edges. Each DS node tracks the type information and data layout of the represented objects. If two DS nodes or their inner fields may be pointed to by the same pointer, they are merged together. When two DS nodes are merged, their type information and data layouts are also merged if compatible; otherwise, both nodes are collapsed, meaning that all fields are combined into a single alias group. To illustrate the DS node merging and collapsing process, we use the code in Fig. 3 as a running example (for simplicity, we assume pointers are 32 bits in this example).

Fig. 3. Code for running example.

Fig. 4. DS graphs for the two functions in Fig. 3 (see [18]). Note that the numbers under each DS node are the offsets of its inner fields in bytes.

In the first function of Fig. 3, an alias relationship between two fields is introduced via a conditional assignment. The aliasing of the two fields, as mentioned above, causes the merging of their containing DS nodes. The merging process is shown in Fig. 4. Before being merged, the two DS nodes are disjoint, and each holds the type information of its structure, including the field layout. During the merge, one of the DS nodes is shifted by 4 bytes to align the aliased fields and is then merged with the other node. In the resulting graph, we can see that the aliased fields are placed together, but the subsequent fields are also placed in the same cell (colored gray), even though they are not aliased.

In the second function, the conditional assignment introduces an alias relationship between two fields in the same DS node. In this case, as shown in Fig. 4, the whole DS node is collapsed and all the fields are merged into a single alias group. The reason for this is that in DSA, if a DS node is merged with itself with shifting, this is considered an incompatible merge. In other words, DSA does not support field sensitivity for records with inner field aliasing.

As shown above, while DSA does support field-sensitive points-to analysis on heap data structures, the computed alias information is still rather conservative. By performing the merging process at the object level (DS nodes) rather than at the field level, invalid alias relationships are introduced. Partly to address these issues, we developed a novel cell-based field-sensitive (CFS) points-to analysis.

4 Cell-Based Points-to Analysis

A cell is a generalization of an alias group. Initially, each program expression that corresponds to a memory location at runtime (i.e. an l-value) is associated with a unique cell whose size is a positive integer denoting the size (in bytes) of the values that expression can have. In addition, each cell has a type, which is scalar unless its associated program expression is of structure or union type (in which case the cell type is record). Under certain conditions, the analysis may merge cells. If two cells of different sizes are merged, then the result is a cell whose size is \(\top \). The analysis maintains the invariant that the locations associated with any two isolated scalar cells are always disjoint, which is what makes memory partitioning based on the analysis possible.

Our analysis creates a points-to graph whose vertices are cells. The graph has two kinds of edges. A points-to edge plays the same role here as in the points-to graphs mentioned in the previous section: \(\alpha \rightharpoonup \beta \) denotes that dereferencing some expression associated with cell \(\alpha \) yields an address that must be in one of the locations associated with cell \(\beta \). A key contribution of our approach which improves precision is that unlike other field-sensitive analyses (e.g. [18, 26]), inner cells (fields) may be nested in more than one outer cell (record). Thus, we use additional graph edges to represent containment relations. A contains edge \({\alpha \hookrightarrow _{i, j} \beta }\) denotes that cell \(\alpha \) is of record type and that \(\beta \) is associated with a field of the record whose location is at an offset of i from the record start and whose size in bytes is \(j-i\). Each cell may have multiple outgoing contains edges to its inner cells and multiple incoming contains edges from its outer cells.
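As a toy illustration of this data structure (the names and field offsets below are our own, chosen to mirror the linked-list example of Fig. 5, not code from the analysis), a cell with its two kinds of outgoing edges might be represented as:

```python
# Sketch: a CFS points-to graph node with both edge kinds as plain data.
from dataclasses import dataclass, field

@dataclass
class Cell:
    size: object          # positive int, or "T" for the unknown size (top)
    kind: str             # "scalar" or "record"
    points_to: object = None                       # at most one points-to edge
    contains: list = field(default_factory=list)   # (i, j, inner_cell) triples

# A record of 8 bytes with a 4-byte data field and a 4-byte next pointer.
rec = Cell(size=8, kind="record")
data = Cell(size=4, kind="scalar")
nxt = Cell(size=4, kind="scalar")
rec.contains += [(0, 4, data), (4, 8, nxt)]
nxt.points_to = rec   # the next pointer points back to the record

assert nxt.points_to is rec
assert [(i, j) for i, j, _ in rec.contains] == [(0, 4), (4, 8)]
```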

Fig. 5. Concrete memory state and its graph representation.

Figure 5 shows a simple example. On the left is the memory layout of a singly-linked list with one element. The element is a record with two fields, a data value and a next pointer (which points back to the element in this case). The graph, shown on the right, contains three cells. The square cell is associated with the entire record element and the round cells with the inner fields (here and in the other points-to graphs below, we follow the convention that square cells are of record type and round cells are of scalar type). The solid edge is a points-to edge from the next field to the record cell, and the dashed edges are contains edges from the record cell to the field cells. These contains edges are labeled with their corresponding starting and ending offsets within the record. The following properties hold for contains edges:

  • reflexivity: \({\alpha \hookrightarrow _{0, s} \alpha }\), if \(\alpha \) is a cell with a numeric size s;

  • transitivity: if \({\alpha _1 \hookrightarrow _{i_1, j_1} \alpha _2}\) and \({\alpha _2 \hookrightarrow _{i_2, j_2} \alpha _3}\), then \({\alpha _1 \hookrightarrow _{i_1 + i_2, i_1 + j_2} \alpha _3}\);

  • linearity: if \({\alpha _1 \hookrightarrow _{i_1, j_1} \alpha _2}\) and \({\alpha _1 \hookrightarrow _{i_2, j_2} \alpha _3}\), then \({\alpha _2 \hookrightarrow _{i_2-i_1, j_2-i_1} \alpha _3}\) if \(i_1 \le i_2 < j_2 \le j_1\).
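These properties are just offset arithmetic; a small sketch (our own code, not part of the analysis) makes the computations concrete:

```python
# Sketch: composing and splitting contains edges by offset arithmetic,
# following the transitivity and linearity properties above.
def trans(e1, e2):
    # alpha1 -[i1,j1]-> alpha2 and alpha2 -[i2,j2]-> alpha3
    # give alpha1 -[i1+i2, i1+j2]-> alpha3.
    (i1, j1), (i2, j2) = e1, e2
    return (i1 + i2, i1 + j2)

def linear(e1, e2):
    # Two fields of the same record: derive the nested edge when the
    # second range [i2, j2) lies inside the first range [i1, j1).
    (i1, j1), (i2, j2) = e1, e2
    if i1 <= i2 < j2 <= j1:
        return (i2 - i1, j2 - i1)
    return None

# A record contains a field at [4, 12) that itself has a field at [0, 4):
assert trans((4, 12), (0, 4)) == (4, 8)
# Fields at [0, 8) and [4, 8) of the same record: the second nests in the first.
assert linear((0, 8), (4, 8)) == (4, 8)
# Disjoint ranges yield no containment edge.
assert linear((0, 4), (4, 8)) is None
```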

Fig. 6. CFS points-to graphs for the two functions in Fig. 3.

Let us revisit the running example from Fig. 3. The points-to graphs computed by our cell-based points-to analysis are shown in Fig. 6. In these graphs, record fields are separated out into individual cells. When field aliasing is detected, the individual field cells are merged (rather than their containing record cells), and any associated contains edges are kept unchanged. As shown in the graph on the left, in the first function, the aliased fields are merged into a single cell with contains edges from both record cells, while the unaliased fields are kept separate. In the second function, as shown in the graph on the right, the two aliased fields share the same cell, which has two incoming contains edges from the containing record cell, each labeled with different offsets. The record cell itself is not collapsed. Note that for these examples (and unlike DSA), our analysis introduces no extraneous alias relationships. Below, we explain our analysis in more detail by illustrating how it behaves in the presence of various type-unsafe operations.

Union types declare fields that share the same memory region. In a union, all fields with scalar types (e.g. float or int) are aliased. If there are nested records in a union, then two nested fields are aliased only if their offset ranges overlap. Both cases can be captured naturally in our analysis using contains edges.
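For instance, the overlap test on offset ranges can be sketched as follows (the union layout in the comment is a hypothetical example of ours):

```python
# Sketch: in a union, two nested fields alias only if their byte ranges
# overlap.
def ranges_overlap(i1, j1, i2, j2):
    # Half-open ranges [i1, j1) and [i2, j2) intersect.
    return i1 < j2 and i2 < j1

# Hypothetical union { struct {int a; int b;} s;
#                      struct {char pad[4]; int c;} t; }
a, b, c = (0, 4), (4, 8), (4, 8)
assert not ranges_overlap(*a, *c)  # a and c occupy disjoint bytes
assert ranges_overlap(*b, *c)      # b and c share bytes 4..7: merged
```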

Pointer arithmetic is particularly problematic for points-to analyses as it can (in principle) be used to access any location in memory. We follow the standard approach of assuming that pointer arithmetic will not move a pointer outside of the memory object it is pointing to [30]. This assumption, coupled with the appropriate checks in the verification tool, is still sound in the sense that if it is possible to access invalid memory, the tool will detect it [10]. In our algorithm, any cell pointed to by an operand of a pointer arithmetic expression is collapsed, meaning all of its outer record and inner field cells are merged into a single scalar cell. Consider the function in Fig. 7. The initial call induces the same graph as in Fig. 6. However, the pointer arithmetic expression results in this graph being collapsed into a single cell (colored gray), as shown on the left of Fig. 9.

Fig. 7. Code with pointer arithmetic.

Fig. 8. Code with pointer casting.

Pointer casting creates an alternative view of a memory region. To model this, a fresh cell representing the new view is added to the points-to graph. To illustrate, consider the function in Fig. 8. Again, the initial function call induces the same graph as in Fig. 6. After the call, the pointer is cast to a different record type. A new record cell (colored gray) is added to the graph, as shown in the middle of Fig. 9. The newly added cell is essentially a copy of the cell the pointer originally pointed to, except that the offset intervals are enlarged in order to match the size of the target type. The cast introduces an alias between two fields, as both may alias with the same third field. By applying the properties of contains edges, the graph can be simplified into the one on the right of Fig. 9, which precisely captures the alias introduced by the cast.

Fig. 9. CFS points-to graphs for the functions in Figs. 7 and 8.

Another contribution of our analysis is that we track the size of each alias group (either a numeric value or \(\top \)). The size enables further improvements in the memory model: the memory array for an alias group whose size is \(\top \) is modeled as an array of bytes, while the memory array for a group whose size is some numeric value n can be modeled as an array of n-byte elements. For these latter arrays, it then becomes possible to read or write n bytes with a single array operation (whereas with an array of bytes, n operations are needed). Not having to decompose array accesses into byte-level operations reduces the size and complexity of the resulting SMT formulas.
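The saving can be sketched in Python, with partitions modeled as dictionaries (a toy model of ours; actual partitions are SMT arrays):

```python
# Sketch: reading a 4-byte value from a byte-granularity partition takes
# four selects plus reassembly; from a 4-byte-element partition it takes
# a single select.
def read_word_bytewise(mem_bytes, addr, n=4):
    # n selects, little-endian reassembly
    return sum(mem_bytes[addr + k] << (8 * k) for k in range(n))

def read_word(mem_words, addr):
    return mem_words[addr]        # one select

mem_bytes = {0: 0x78, 1: 0x56, 2: 0x34, 3: 0x12}
mem_words = {0: 0x12345678}
assert read_word_bytewise(mem_bytes, 0) == read_word(mem_words, 0)
```

In the SMT encoding, each dictionary lookup above corresponds to an array select, so word-sized partitions replace n selects (and the shifts and concatenations joining them) with one.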

5 Constraint-Based Analysis

In this section, we formalize the cell-based field-sensitive points-to analysis described above using a constraint framework. Our constraint-based program analysis is divided into two phases: constraint generation and constraint resolution. The constraint generation phase produces constraints from the program source code in a syntax-directed manner. The constraint resolution phase then computes a solution of the constraints in the form of a cell-based field-sensitive points-to graph. The resulting graph describes a safe partitioning of the memory for all reachable states of the program.

5.1 Language and Constraints

For the formal treatment of our analysis, we consider the idealized C-like language shown in Fig. 10. To simplify the presentation, complex assignments are broken down into simpler assignments between expressions of scalar types, static arrays are represented as pointers to dynamically allocated regions, and a single type is used to represent all pointer types. Function definitions, function calls, and function pointers are omitted.

Fig. 10. Language syntax.

Let \(\mathbb {C}\) be an infinite set of cell variables (denoted \(\tau \) or \(\tau _i\)). We will use cell variables to assign program expressions to cells in the resulting points-to graph. To do so, we assume that each subexpression \(e'\) of an expression e is labeled with a unique cell variable \(\tau \), with the exception that program variables are always assigned the same cell variable, \(\tau _x\). Cell variables associated with program variables and heap allocations are called source variables. To avoid notational clutter, we do not make cell variables explicit in our grammar. Instead, we write \(e: \tau \) to indicate that the expression e is labeled by \(\tau \).

Constraints. The syntax of our constraint language is defined as follows:

figure b

Here, a term \(\eta \) denotes a cell size. The constant \(\top \) indicates an unknown cell size. A constraint \(\phi \) is a positive Boolean combination of cell size constraints, equalities on cell variables, points-to edges \(\tau _1 \rightharpoonup \tau _2\), contains edges \({\tau _1 \hookrightarrow _{i, j} \tau _2}\) and special predicates whose semantics we describe in detail below. We additionally introduce syntactic shorthands for certain constraints. Namely, we write \(i \sqsubseteq \eta \) to stand for the constraint \(i = \eta \vee \eta = \top \), \(i \le \eta \) to stand for \(i < \eta \vee i = \eta \), and \(i \preceq \eta \) to stand for \(i \le \eta \vee \eta = \top \).

Constraints are interpreted in cell-based field-sensitive points-to graphs (CFPGs). A CFPG is a tuple \(G = (C, cell , size , source , scalar , contains , ptsto )\) where

  • C is a finite set of cells,

  • \( cell : \mathbb {C}\rightarrow C\) is an assignment from cell variables to cells,

  • \( size : C \rightarrow \mathbb {N}\cup \{\top \}\) is an assignment from cells to cell sizes,

  • \( source \subseteq C\) is a set of source cells,

  • \( scalar \subseteq C\) is a set of scalar cells,

  • \( contains \subseteq C \times \mathbb {N}\times \mathbb {N}\times C\) is a containment relation on cells, and

  • \(ptsto: C \rightarrow C\) is a points-to map on cells.

For \(c_1,c_2 \in C\) and \(i,j \in \mathbb {N}\), we write \({c_1 \mathop {\hookrightarrow }\limits ^{G}_{i, j} c_2}\) as notational sugar for \((c_1,i,j,c_2) \in contains \), and similarly \(c_1 \mathop {\rightharpoonup }\limits ^{G} c_2\) for \( ptsto (c_1)=c_2\). Let \( contains '\) be the projection of \( contains \) onto \(C \times C\): \( contains '(c_1,c_2) \equiv \exists \,i,j.\, contains (c_1,i,j,c_2)\).

The functions and relations of G must satisfy the following consistency properties. These properties formalize the intuition of the containment relation and the roles played by source and scalar cells:

  • the \( contains '\) relation is reflexive (for cells of known size), transitive, and anti-symmetric. More specifically,

    $$\begin{aligned} \forall \,c \in C.\, size (c)\not =\top&\implies {c \mathop {\hookrightarrow }\limits ^{G}_{0, size(c)} c} \end{aligned}$$
    (1)
    $$\begin{aligned} \forall \,\left( \begin{array}{l} c_1,c_2,c_3\in C,\\ i_1,i_2,j_1,j_2 \in \mathbb {N}\end{array} \right) .\, {c_1 \mathop {\hookrightarrow }\limits ^{G}_{i_1, j_1} c_2} \wedge {c_2 \mathop {\hookrightarrow }\limits ^{G}_{i_2, j_2} c_3}&\implies {c_1 \mathop {\hookrightarrow }\limits ^{G}_{i_1 + i_2, i_1 + j_2} c_3}\\ \end{aligned}$$
    (2)
    $$\begin{aligned} \forall \,\left( \begin{array}{l} c_1,c_2\in C,\\ i_1,i_2,j_1,j_2 \in \mathbb {N}\end{array} \right) .\, {c_1 \mathop {\hookrightarrow }\limits ^{G}_{i_1, j_1} c_2} \wedge {c_2 \mathop {\hookrightarrow }\limits ^{G}_{i_2, j_2} c_1}&\implies c_1 = c_2 \end{aligned}$$
    (3)
  • cells that are of unknown size or that point to other cells must be scalar:

    $$\begin{aligned} \forall \,c \in C.\, size(c)=\top \implies c \in scalar \end{aligned}$$
    (4)
    $$\begin{aligned} \forall \,c,c' \in C.\, c \mathop {\rightharpoonup }\limits ^{G} c' \implies c \in scalar \end{aligned}$$
    (5)
  • the \( contains \) relation must satisfy the following linearity property:

    $$\begin{aligned} \forall \,\left( \begin{array}{l} c_1,c_2,c_3\in C,\\ i_1,i_2,j_1,j_2 \in \mathbb {N}\end{array} \right) .\, {c_1 \mathop {\hookrightarrow }\limits ^{G}_{i_1, j_1} c_2} \wedge {c_1 \mathop {\hookrightarrow }\limits ^{G}_{i_2, j_2} c_3} \wedge i_1 \le i_2 < j_2 \le j_1 \implies {c_2 \mathop {\hookrightarrow }\limits ^{G}_{i_2 - i_1, j_2 - i_1} c_3} \end{aligned}$$
    (6)
  • scalar cells do not contain other cells:

    $$\begin{aligned} \forall \,c,c' \in C,\ i,j \in \mathbb {N}.\, c \in scalar \wedge {c \mathop {\hookrightarrow }\limits ^{G}_{i, j} c'} \implies c = c' \end{aligned}$$
    (7)
  • overlapping scalar cells are equivalent, where \( overlap ^G(c, c')\) holds iff some cell contains both c and \(c'\) at intersecting byte ranges, i.e., there exist \(c'' \in C\) and \(i_1, j_1, i_2, j_2 \in \mathbb {N}\) such that \({c'' \mathop {\hookrightarrow }\limits ^{G}_{i_1, j_1} c}\), \({c'' \mathop {\hookrightarrow }\limits ^{G}_{i_2, j_2} c'}\), and \(i_1 \le i_2 < j_1\) or \(i_2 \le i_1 < j_2\). Formally,

    $$\begin{aligned} \forall \, c,c' \in C.\, c \in scalar \wedge c' \in scalar \wedge overlap^G(c, c') \implies c = c' \end{aligned}$$
    (8)
  • cell sizes must be consistent with the \( contains \) relation:

    $$\begin{aligned} \forall \,c,c' \in C,\ i,j \in \mathbb {N}.\, {c \mathop {\hookrightarrow }\limits ^{G}_{i, j} c'} \implies ( size (c') = j - i \vee size (c') = \top ) \wedge ( size (c) = \top \vee j \le size (c)) \end{aligned}$$
    (9)

Semantics of Constraints. Let G be a CFPG with components as above. For a cell variable \(\tau \in \mathbb {C}\), we define \(\tau ^G = cell (\tau )\), and for a size term \(\eta \) we define \(\eta ^G = size (\tau ^G)\) if \(\eta =\mathsf {size}( \tau )\) and \(\eta ^G=\eta \) otherwise. The semantics of a constraint \(\phi \) is given by a satisfaction relation \(G \models \phi \), which is defined recursively on the structure of \(\phi \) in the expected way. Size and equality constraints are interpreted using the term interpretation defined above. Note, however, that we define \(G \not \models i < \top \) and \(G \not \models i = \top \). Points-to constraints \(\tau _1 \rightharpoonup \tau _2\) are interpreted by the points-to map: \(\tau ^G_1 \mathop {\rightharpoonup }\limits ^{G} \tau ^G_2\); contains constraints \({\tau _1 \hookrightarrow _{i, j} \tau _2}\) are interpreted by the containment relation: \({\tau ^G_1 \mathop {\hookrightarrow }\limits ^{G}_{i, j} \tau ^G_2}\); and \(\mathsf {source}\) and \(\mathsf {scalar}\) are similarly interpreted by \( source \) and \( scalar \).

Intuitively, a cast predicate \(\mathsf {cast}(k, \tau _1, \tau _2)\) states that cell \(\tau _2\) is of size k and is obtained by a pointer cast from cell \(\tau _1\). Thus, any source cell that contains \(\tau _1\) at offset i must also contain \(\tau _2\) at that offset. That is, \(G \models \mathsf {cast}(k, \tau _1, \tau _2)\) iff:

$$\begin{aligned} size (\tau ^G_2) = k \wedge \forall \,c \in source ,\ i, j \in \mathbb {N}.\, {c \mathop {\hookrightarrow }\limits ^{G}_{i, j} \tau ^G_1} \implies {c \mathop {\hookrightarrow }\limits ^{G}_{i, i + k} \tau ^G_2}. \end{aligned}$$
The predicate \(\mathsf {collapsed}(\tau )\) indicates that \(\tau \) points to a cell c that may be accessed in a type-unsafe manner, e.g., due to pointer arithmetic; every cell containing c or contained in c must then coincide with c. Then, \(G \models \mathsf {collapsed}(\tau )\) iff

$$\begin{aligned} \forall \,c, c' \in C.\, \tau ^G \mathop {\rightharpoonup }\limits ^{G} c \wedge ( contains '(c', c) \vee contains '(c, c')) \implies c' = c. \end{aligned}$$
The predicate \(\tau _1 \trianglelefteq \tau _2\) (taken from [26]) states that the points-to content of \(\tau _1\) is subsumed by that of \(\tau _2\): if \(\tau _1\) points to a cell, then \(\tau _2\) points to the same cell. Formally, \(G \models \tau _1 \trianglelefteq \tau _2\) iff

$$\begin{aligned} \forall \,c \in C.\,\tau ^G_1 \mathop {\rightharpoonup }\limits ^{G} c \implies \tau ^G_2 \mathop {\rightharpoonup }\limits ^{G} c. \end{aligned}$$

5.2 Constraint Generation

The first phase of our analysis generates constraints from the target program in a syntax-directed, bottom-up fashion. The constraint generation is described by the inference rules in Fig. 11. Recall that each program expression e is labeled with a cell variable \(\tau \). The judgment form \(e:\tau |\phi \) means that for the expression e labeled by the cell variable \(\tau \), we infer the constraint \(\phi \) over the cell variables of e (including \(\tau \)). A box annotated with \(\phi \) in the premise of an inference rule indicates that \(\phi \) is the conjunction of the formulas contained in the box; that is, the formula in the box is the constraint inferred in the conclusion of the rule.

For simplicity, we assume the target program is well-typed. Our analysis relies on the type system to infer the byte sizes of expressions and the field layout within records (e.g. structures or unions). To this end, we assume a type environment \(\mathcal {T}\) that assigns C types to program variables. Moreover, we assume the following functions: \( typeof (\mathcal {T}, e)\) infers the type of an expression following the standard type inference rules of the C language; |t| returns the byte size of the type t; and a third function returns the offset of a field from the beginning of its enclosing record type t. Finally, \( isScalar (t)\) returns true iff the type t is an integer or pointer type.

Fig. 11. Constraint generation rules.

The inference rules are inspired by the formulation of Steensgaard's field-insensitive analysis due to Foster and Aiken [12]. We adapt them to our cell-based field-sensitive analysis. Note that implications of the form \( isScalar (t) \implies \mathsf {scalar}(\tau )\), which we use in some of the rules, are directly resolved during rule application and do not yield disjunctions in the generated constraints.

We only discuss some of the rules in detail. The rule Malloc generates the constraints for a malloc operation. We assume that each occurrence of malloc in the program is tagged with a unique identifier l and labeled with a unique cell variable \(\tau \) representing the memory allocated by that call. The return value of the malloc is a pointer with associated cell variable \(\tau '\). Thus, \(\tau '\) points to \(\tau \).

The rules Dir-Sel, Arith-Op, and Cast are critical for the field-sensitive analysis. In particular, Dir-Sel generates constraints for field selections. A field within a record expression e is associated with a cell variable \(\tau _f\). The rule states that there must be a contains edge from the cell variable \(\tau \) associated with e to \(\tau _f\) with appropriate offsets. Rule Arith-Op is for operations that may involve pointer arithmetic. The cell variables \(\tau _1\), \(\tau _2\) and \(\tau \) are associated with \(e_1\), \(e_2\), and \(e_1 \odot e_2\), respectively. Any cells pointed to by \(\tau _1\) and \(\tau _2\) must be equal, which is expressed by the constraints \(\tau _1 \trianglelefteq \tau \) and \(\tau _2 \trianglelefteq \tau \). Moreover, if \(\tau \) points to another cell \(\tau '\), then pointer arithmetic collapses all relevant cells containing \(\tau '\), since we can no longer guarantee structured access to the memory represented by \(\tau '\). Rule Cast handles pointer cast operations. A cast can change the points-to range of a pointer. In the rule, \(\tau _1\) and \(\tau _2\) represent the operand and result pointer, respectively, and \(\tau _1'\) and \(\tau _2'\) represent their points-to contents. Similar to malloc, each cast to a pointer type \(t*\) has a unique identifier l and is labeled with a unique cell variable \(\tau _2'\) that represents the points-to content of the result. The constraint \(\mathsf {cast}(s, \tau _1', \tau _2')\) specifies that both \(\tau _1'\) and \(\tau _2'\) occur within the same source containers at the same offsets. In particular, the size of \(\tau _2'\) must be consistent with |t| (the size of the type t).

5.3 Constraint Resolution

We next explain the constraint resolution step, which computes a CFPG G from the generated constraint \(\phi \) such that \(G \models \phi \). The procedure must be able to reason about containment between cells, which is a transitive relation. Inspired by a procedure for reachability in function graphs [15], we propose a rule-based procedure for this purpose.

The procedure is defined by a set of inference rules that infer new constraints from given constraints. The rules are shown in Fig. 12. They are derived directly from the semantics of the constraints and the consistency properties of CFPGs. Some of the rules make use of the following syntactic shorthand:

$$\mathsf {overlap}(\tau ,\tau _1,i_1,j_1,\tau _2,i_2,j_2) \equiv {\tau \hookrightarrow _{i_1, j_1} \tau _1} \wedge {\tau \hookrightarrow _{i_2, j_2} \tau _2} \wedge i_1 \le i_2 \wedge i_2 < j_1 $$
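Under the normalization \(i_1 \le i_2\) built into the shorthand, the overlap condition reduces to a plain interval check, which can be stated directly (a trivial sketch; the function name is ours):

```python
# overlap: two contains-edges tau ->[i1,j1] tau1 and tau ->[i2,j2] tau2
# from the same container overlap when, assuming i1 <= i2 (the rule's
# normalization), the second interval starts before the first ends.

def overlap(i1, j1, i2, j2):
    """True iff the byte ranges [i1, j1) and [i2, j2) overlap, given i1 <= i2."""
    return i1 <= i2 < j1

# A 4-byte field at offset 0 overlaps a 2-byte field at offset 2:
assert overlap(0, 4, 2, 4)
# ...but not a field that starts exactly where the first one ends:
assert not overlap(0, 4, 4, 8)
```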

We omit the rules for reasoning about equality and inequality constraints, as they are straightforward. We also omit the rules for detecting conflicts. The only possible conflicts are inconsistent equality constraints such as \(i = \top \) and inconsistent inequality constraints such as \(i < \top \).

Our procedure maintains a context of constraints currently asserted to be true. The initial context is the set of constraints collected in the first phase. At each step, the rewrite rules are applied to the current context: for each rule whose antecedent formulas match formulas in the context, the consequent formula is added to the context. The rules are applied until a conflict-free saturated context is obtained. The rule Split branches on disjunctions. Note that the rules do not generate new disjunctions; all disjunctions come from the constraints of the form \(i \preceq \eta \) and \(i \sqsubseteq \eta \) in the initial context, and each disjunction in the initial context has at least one satisfiable branch. Our procedure uses a greedy heuristic that first chooses, for each disjunction, the branch that preserves more information and then backtracks on a conflict to choose the other branch. For example, for a constraint \(i \sqsubseteq \eta \), we first try \(i = \eta \) before we choose \(\eta = \top \). Once a conflict-free saturated context has been derived, we construct the CFPG, using the equivalence classes of cell variables induced by the derived equality constraints as the cells of the graph. The other components can be constructed directly from the constraints.
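The saturation loop, greedy branch choice, and backtracking described above can be sketched as follows. The constraint language in this miniature is deliberately reduced to equalities and disequalities over cell-variable names so that the sketch is self-contained and runnable; the actual rules of Fig. 12 would take the place of `step`, and the conflict check would cover the inconsistent (in)equalities mentioned earlier.

```python
# Miniature of the resolution procedure: fixpoint saturation over a
# context of constraints, with greedy, backtracking disjunction handling.

def step(ctx):
    """One round of rule application: here, just closure of equalities
    under symmetry and transitivity (stand-in for the Fig. 12 rules)."""
    new = set()
    for (k1, a, b) in ctx:
        if k1 != "eq":
            continue
        new.add(("eq", b, a))                      # symmetry
        for (k2, c, d) in ctx:
            if k2 == "eq" and c == b:
                new.add(("eq", a, d))              # transitivity
    return new

def has_conflict(ctx):
    """Conflict: some pair is asserted both equal and disequal."""
    return any(("eq", a, b) in ctx for (k, a, b) in ctx if k == "neq")

def saturate(ctx, disjunctions):
    """Saturate, then branch on each disjunction; branches are ordered
    most-informative first, and we backtrack on conflict."""
    ctx = set(ctx)
    while True:                                    # fixpoint loop
        new = step(ctx) - ctx
        if not new:
            break
        ctx |= new
    if has_conflict(ctx):
        return None                                # trigger backtracking
    if not disjunctions:
        return ctx
    first, *rest = disjunctions
    for branch in first:                           # greedy order
        result = saturate(ctx | {branch}, rest)
        if result is not None:
            return result
    return None

# t1 = t2 and t2 != t3 force the second, less precise branch t3 = TOP:
ctx = saturate([("eq", "t1", "t2"), ("neq", "t2", "t3")],
               [[("eq", "t1", "t3"), ("eq", "t3", "TOP")]])
```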

Termination. To see that the procedure terminates, note that none of the rules introduce new cell variables \(\tau \). Moreover, the only rules that can increase the offsets i, j in containment constraints \({\tau _1 \hookrightarrow _{i, j} \tau _2}\) are Cast and Trans. The application of these rules can be restricted in such a way that the offsets in the generated constraints do not exceed the maximal byte size of any of the types in the input program. With this restriction, the rules generate only a bounded number of containment constraints.

Fig. 12. Constraint resolution rules

Soundness. The soundness proof of the analysis is split into three steps. First, we prove that the CFPG resulting from the constraint resolution indeed satisfies the original constraints that are generated from the program. The proof shows that the inference rules are all consequences of the semantics of the constraints and the consistency properties of CFPGs. The second step defines an abstract semantics of programs in terms of abstract stores. These abstract stores only keep track of the partition of the byte-level memory into alias groups according to the computed CFPG. We then prove that the computed CFPG is a safe inductive invariant of the abstract semantics. The safety of the abstract semantics is defined in such a way that it guarantees that the computed CFPG describes a valid partition of the reachable program states into alias groups. Finally, we prove that the abstract semantics simulates the concrete byte-level semantics of programs. The details of the soundness proof are omitted due to space restrictions.

6 Experiments

To assess the impact of different memory models in a verification setting, we implemented both the flat memory model and the partitioned memory model in the Cascade verification framework [29], a competitive C verification tool that uses bounded model checking and SMT solving. For the partitioned memory model, a points-to analysis is run as a preprocessing step, and the resulting points-to graph is used to: (i) determine the element size of the memory arrays; (ii) select which memory array to use for each read or write (as well as for each memory safety check); and (iii) add disjointness assumptions where needed (for distinct locations assigned to the same memory array). We implemented several points-to analyses, including Steensgaard’s original and field-sensitive analyses, the data structure analysis (DSA), and our cell-based points-to analysis.
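Point (iii) is where the partitioned model saves most of its formula size: disjointness assumptions are quadratic in the number of symbolic locations sharing an array, so splitting one array into several smaller ones shrinks their count substantially. A back-of-the-envelope sketch (group names and location counts here are hypothetical, not drawn from the benchmarks):

```python
# Count pairwise-disjointness assumptions when each alias group gets
# its own memory array: n*(n-1)/2 per group of n symbolic locations.

def disjointness_assumptions(groups):
    """groups maps each alias group (memory array) to its locations."""
    return sum(n * (n - 1) // 2 for n in map(len, groups.values()))

# 12 locations in one flat array vs. the same 12 split into 4 groups of 3:
flat = {"mem": list(range(12))}
part = {g: list(range(3)) for g in ("g0", "g1", "g2", "g3")}
assert disjointness_assumptions(flat) == 66   # 12*11/2
assert disjointness_assumptions(part) == 12   # 4 * (3*2/2)
```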

Benchmarks. For our experiments, we used a subset of the SV-COMP’16 benchmarks [4], specifically the Heap Data Structures category (consisting of the two sub-categories HeapReach and HeapMemSafety), as these contain many programs with heap-allocated data structures. For HeapReach, we checked for reachability of the label in the code. For HeapMemSafety, we checked for invalid memory dereferences, invalid memory frees, and memory leaks.

Configuration. Like other bounded model checkers, Cascade relies on function inlining and loop unrolling. It takes as parameters a function-inline depth d and a loop-unroll bound b. It then repeatedly runs its analysis, inlining all functions up to depth d, and using a set of successively larger unrolls until the bound b is reached. There are four possible results: unknown indicates that no result could be determined (for our experiments this happens only when the depth of function calls exceeds d); unsafe indicates that a violation was discovered; safe indicates that no violations exist within the given loop unroll bound; and timeout indicates the analysis could not be completed within the time limit provided. For the reachability benchmarks, we set \(d = 6\) and \(b = 1024\); for the memory safety benchmarks, we set \(d = 8\) and \(b = 200\) (these values were empirically determined to work well for SV-COMP). Note that Cascade may report safe even if a bug exists, if this bug requires executing a loop more times than is permitted by the unroll limit (the same is true of other tools relying on bounded model checking, like CBMC and LLBMC). For other undiscovered bugs, it reports unknown or timeout.

Table 1 reports results for the flat model (Flat) and partitioned models using the original (field-insensitive) Steensgaard analysis (St-fi), the field-sensitive Steensgaard analysis (St-fs), the context-insensitive DSA (DSA-local), and our cell-based field-sensitive points-to analysis (CFS). More detail is shown in the scatter plots in Fig. 13. These results show that the baseline partitioned model (St-fi) already improves significantly over the flat model. Of the partitioned models, the cell-based model (CFS) is nearly uniformly superior to the others.

Table 1. Comparison of various memory models. Experiments were performed on an Intel Core i7 (3.7 GHz) with 8 GB RAM. The timeout was 850 s. “#solved” is the number of benchmarks correctly solved within the given limits. In the columns labeled “False”, a benchmark is solved if Cascade found a bug that violates the specified property. In the columns labeled “True”, a benchmark is solved if Cascade completed its analysis up to the maximum unroll and inlining limits without finding a bug. “time” is the average time spent on the benchmarks (solved and unsolved) in each category. “ptsTo” is the average time spent on the points-to analysis in each category.
Fig. 13. Scatter plots showing a benchmark-by-benchmark comparison of various memory models over all 271 benchmarks. The timeout (TO) was set to 850 s.

To compare the precision of the different points-to analyses, we also computed the number of alias groups produced by each algorithm. Over the 190 benchmarks in HeapMemSafety, CFS always computes at least as many alias groups as the other analyses (it is better than DSA-local on 10 benchmarks, St-fs on 67 benchmarks, and St-fi on 165 benchmarks). The same is true of the 81 benchmarks in HeapReach (better than DSA-local on 2 benchmarks, St-fs on 28 benchmarks, and St-fi on 52 benchmarks). The precision advantage of CFS over DSA-local is somewhat limited because field aliasing does not occur very frequently in these programs.

The other advantage of CFS is that it tracks the access size of objects in each alias group. The only other analysis that does this is St-fs, and as a result, St-fs solves more benchmarks than DSA-local, even though the alias information computed by DSA-local is much more precise. This shows the advantage of tracking size information for memory modeling applications.

7 Related Work

Several memory models have been proposed as alternatives to the flat model. Cohen et al. [8] simulate a typed memory model over untyped C memory by adding additional disjointness axioms. Their approach introduces quantified axioms, which can be challenging for SMT solvers. Böhme et al. [5] propose a variant of Cohen et al.’s memory model. CCured [21] separates pointers by their usage (not by alias information) and skips bounds checking for “safe” pointers; it, too, introduces quantified disjointness axioms. Rakamarić et al. [25] propose a variant of the Burstall model that employs type unification to cope with type-unsafe behaviors; however, in the presence of type casting, the model can easily degrade into the flat model. Havoc [9, 16] refines the Burstall model with a form of field-sensitivity that is only applicable to field-safe code fragments. Lal et al. [17] split memory in order to reason about bitvector operations and integer operations separately. Frama-C [11] provides various memory models at different abstraction levels, but makes no attempt to further partition the memory.

For points-to analyses in general, we refer the reader to the survey by Hind [14]. Yong et al. [30] propose a framework for a spectrum of analyses ranging from complete field-insensitivity to various levels of field-sensitivity, but none of these analyses support field-sensitive analysis of heap data structures. Pearce et al. [22, 23] extend field-sensitivity to heap data structures with an inclusion-based approach, and Balatsouras et al. [2] further improve its precision. However, none of these analyses is precise enough to track field aliasing.

8 Conclusion

In this paper, we introduced partitioned memory models and showed how to use various points-to analyses to generate coarser or finer partitions. A key contribution is a new cell-based points-to analysis algorithm, which improves on earlier field-sensitive points-to analyses by modeling heap data structures and type-unsafe operations in the C language more precisely.

In SV-COMP 2016, Cascade won the bronze medal in the Heap Data Structures category using the cell-based partitioned memory model. We conclude that the cell-based partitioned memory model is a promising approach for obtaining both scalability and precision when modeling memory.