Abstract
In this paper, we describe Compact-Table (CT), a bitwise algorithm to enforce Generalized Arc Consistency (GAC) on table constraints. Although this algorithm is the default propagator for table constraints in or-tools and OscaR, two publicly available CP solvers, it has never been described so far. Importantly, CT has been recently improved further with the introduction of residues, resetting operations and a data-structure called reversible sparse bit-set, used to maintain tables of supports (following the idea of tabular reduction): tuples are invalidated incrementally on value removals by means of bit-set operations. The experimentation that we have conducted with OscaR shows that CT outperforms state-of-the-art algorithms STR2, STR3, GAC4R, MDD4R and AC5-TC on standard benchmarks.
1 Introduction
Table constraints, also called extension(al) constraints, explicitly express the allowed combinations of values for the variables they involve as sequences of tuples, which are called tables. Table constraints can theoretically encode any kind of constraints and are among the most useful ones in Constraint Programming (CP). Indeed, they are often required when modeling combinatorial problems in many application fields. The design of filtering algorithms for such constraints has generated a lot of research effort, see [1, 10, 12, 17, 20, 21, 23, 30, 33].
Over the last decade, many developments have thus been achieved for enforcing the well-known property called Generalized Arc Consistency (GAC) on binary and/or non-binary extensionally defined constraints. Among successful techniques, we find:
-
bitwise operations that allow performing parallel operations on bit vectors. Already exploited during the 70’s [27, 32], they have been applied more recently to the enforcement of arc consistency on binary constraints [3, 22].
-
residual supports (residues) that store the last found supports of each value. Initially introduced for ensuring optimal complexity [2], they have been shown efficient in practice [18, 19, 24] when used as simple sentinels.
-
tabular reduction, which is a technique that dynamically maintains the tables of supports. Based on the structure of sparse sets [4, 16], variants of Simple Tabular Reduction (STR) have been proved to be quite competitive [17, 20, 33].
-
resetting operations, which save substantial computing effort in some particular situations. They have been successfully applied to the algorithm GAC4 [30].
In this paper, we introduce a very efficient GAC algorithmFootnote 1 for table constraints that combines the use of bitwise operations, residual supports, tabular reduction, and resetting operations. It is called Compact-Table (CT), and originates from or-tools, the Google solver that won the latest MiniZinc Challenges. It is important to note that or-tools developers prefer to focus on highly-optimized implementations of a few important (global) constraints instead of having many of them. Through the years, CT has reached a good level of maturity because it has been continuously improved and extended with many cutting edge ideas such as those introduced earlier. Unfortunately, the core algorithm of CT has not been described in the literature so farFootnote 2 and is thus seldom used as a reference for practical comparisons. The first version of CT implemented in or-tools, with a bit-set representation of tables, dates back to 2012, whereas the version of CT presented in this paper is exactly the last one implemented in OscaR [29].
Outline. After presenting related works in Sect. 2, we introduce some technical background in Sect. 3. Then, we recall in Sect. 4 usual state restoration mechanisms implemented in CP solvers, and describe reversible sparse bit-sets in Sect. 5. In Sect. 6, we describe our algorithm CT. Before concluding, we present in Sect. 7 the results of an experimentation we have conducted with CT and its contenders on a large variety of benchmarks.
2 Related Work
Propagators for table constraints are filtering procedures used to enforce GAC. Given the importance of table constraints, it is not surprising that much research has been carried out in order to find efficient propagators. This section briefly describes the most efficient ones.
Generic Algorithms. On the one hand, GAC3 is a classical general-purpose GAC algorithm [25] for non-binary constraints. Each call to this algorithm for a constraint requires testing whether each value is still supported by a valid tuple accepted by the constraint. Several improvements to speed up the search for a support gave birth to variants such as GAC2001 [2] and GAC3\(^{rm}\) [19]. Unfortunately, the worst-case time complexity of all these algorithms grows exponentially with the arity of the constraints. On the other hand, GAC4 [28] is a value-based algorithm, meaning here that for each value, it maintains a set of valid tuples supporting it. Each time a value is removed, all supporting tuples are removed from the associated sets, which allows us to identify values without any remaining support. GAC4R is a recent improvement of GAC4 [30], which recomputes the sets of supporting tuples from scratch (referred to as a resetting operation) when this appears to be less costly than updating them based on the removed values.
AC5 Instantiations. In [12], Mairy et al. introduce several instantiations of the generic AC5 algorithm for table constraints, the best of them being AC5-TCOptSparse. This algorithm shares some similarities with GAC4 since it precomputes lists of supporting tuples, which allows new supports to be retrieved efficiently by iterating over these lists. Note that a reversible integer, i.e., an integer storage location with a facility to restore its successive values, is used to indicate the current position of a support in each list. This algorithm is implemented in Comet, and has been shown to be efficient on ternary and quaternary constraints.
Simple Tabular Reduction. STR1 [33] and STR2 [17] are algorithms that globally enforce GAC by traversing the constraint tables while dynamically maintaining them: each call to the algorithm for a constraint removes the invalid tuples from its table. The improvements brought in STR2 avoid unnecessary operations by considering only relevant subsets of variables when checking the validity of a tuple, and collecting supported values. Contrary to its predecessors, STR3 [20] is a fine-grained (or value-based) algorithm. For each value, it initially computes a static array of tuples supporting it, and keeps a reversible integer curr that indicates the position of the last valid tuple in the array. STR3 also maintains the set of valid tuples. STR3 is shown to be complementary to STR2, being more efficient when the tables are not reduced drastically during search.
Compressed Representations. Other algorithms rely on the compression of tables to reduce the time needed to enforce GAC. The most promising data structure allowing a more compact representation is the Multi-valued Decision Diagram (MDD) [31], but note that the order of variables used to build an MDD may significantly impact its size. Two notable algorithms using MDDs as their main data structure are mddc [6] and MDD4R [30]. The former does not modify the decision diagram and performs a depth-first search of the MDD during propagation to detect which parts of the MDD are consistent or not. MDD4R dynamically maintains the MDD by deleting nodes and edges that do not belong to a solution. Each value is matched with its corresponding edges in the MDD, so, when a value has none of its edges present in the MDD, it can be removed. Other forms of compression have also been studied, based on the concepts of compressed tuples [14, 35], short supports [13] and sliced tables [11].
3 Technical Background
A constraint network (CN) N is composed of a set of n variables and a set of e constraints. Each variable x has an associated domain, denoted by dom(x), that contains the finite set of values that can be assigned to it. Each constraint c involves an ordered set of variables, called the scope of c and denoted by scp(c), and is semantically defined by a relation, denoted by rel(c), which contains the set of tuples allowed for the variables involved in c. The arity of a constraint c is |scp(c)|, i.e., the number of variables involved in c. A (positive) table constraint c is a constraint such that rel(c) is defined explicitly by listing the tuples that are allowed by c.
Example 1
The constraint \(x \ne y\) with \(x \in \{1, 2, 3\}\) and \(y \in \{1, 2\}\) can be alternatively defined by the table constraint c such that \(scp(c)=\{x,y\}\) and \(rel(c)=\{(1, 2), (2, 1), (3, 1), (3,2)\}\). We also write:
Let \(\tau = (a_1,a_2,\dots ,a_r)\) be a tuple of values associated with an ordered set of variables \(X = \{x_1,x_2,\dots ,x_r\}\). The ith value of \(\tau \) is denoted by \(\tau [i]\) or \(\tau [x_i]\). The tuple \(\tau \) is valid iff \(\forall i \in 1..r, \tau [i] \in dom(x_i)\). An r-tuple \(\tau \) is a support on the r-ary constraint c iff \(\tau \) is a valid tuple that is allowed by c. If \(\tau \) is a support on a constraint c involving a variable x and such that \(\tau [x]=a\), we say that \(\tau \) is a support for (x, a) on c. Generalized Arc Consistency (GAC) is a well-known domain-filtering consistency defined as follows:
Definition 1
A constraint c is generalized arc consistent (GAC) iff \(\forall x \in scp(c), \forall a \in dom(x)\), there exists at least one support for (x, a) on c. A CN N is GAC iff every constraint of N is GAC.
Enforcing GAC is the task of removing from domains all values that have no support on a constraint. Many algorithms have been devised for establishing GAC according to the nature of the constraints. For table constraints, STR [33] is such an algorithm: it removes invalid tuples during the search for supports, using a sparse-set data structure that separates valid tuples from invalid ones. This method of seeking supports improves search time by avoiding redundant validity tests on tuples that have already been detected as invalid during previous GAC enforcements. STR2 [17], an optimization of STR, limits some operations concerning the validity of tuples and the identification of supports, through the introduction of two sets called \(S^{sup}\) and \(S^{val}\) (described later in Sect. 6).
4 Reversible Objects and Implementation Details
Trail and Timestamping. The issue of storing related states of the solving process is essential in CP. In many solversFootnote 3, a general mechanism is used for doing and undoing (on backtrack) changes to the current state. This mechanism is called a trail and it was first introduced in [9] for implementing non-deterministic search. A trail is a stack of pairs (location, value) where location stands for any piece of memory (e.g., a variable), which can be restored when backtracking. Typically, at each search node encountered during the solving process, the constraint propagation algorithm is executed. The same filtering procedure (propagator) can thus be executed several times at a given node. Consequently, if one is interested in storing some information concerning a filtering procedure, the value of the same memory location can be changed several times. However, timestamping, a technique that is part of the “folklore” of programming [15], can be used to avoid storing the same memory location on the trail more than once per search node. The idea behind timestamping is that only the final state of a memory location is relevant for its restoration on backtrack. The trail contains a general time counter that is incremented at each search node, and a timestamp is attached to each memory location indicating the time at which its last storage on the trail happened. If a memory location changes and its timestamp matches the current time of the trail, then there is no need to store it again. CP solvers generally expose some “reversible” objects to the users using this trail+timestamping mechanism. The most basic ones are the reversible versions of primitive types such as int or long values. In the following, we denote by rint and rlong the reversible versions of the int and long primitive types.
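The trail+timestamping mechanism can be sketched as follows (a minimal Python rendition with hypothetical names, not the actual API of any solver): a reversible int is saved on the trail at most once per search node, because only its final value matters for restoration.

```python
class Trail:
    def __init__(self):
        self.time = 0             # incremented at each new search node
        self.entries = []         # stack of restore closures

    def new_node(self):
        self.time += 1
        return len(self.entries)  # mark to backtrack to

    def backtrack_to(self, mark):
        while len(self.entries) > mark:
            self.entries.pop()()  # undo in reverse order
        self.time += 1            # future writes must be trailed again

class RInt:
    def __init__(self, trail, value):
        self.trail, self.value, self.stamp = trail, value, -1

    def set(self, v):
        if self.stamp != self.trail.time:     # first write at this node:
            old = self.value                  # save the old value once
            self.trail.entries.append(lambda: self._restore(old))
            self.stamp = self.trail.time
        self.value = v

    def _restore(self, old):
        self.value, self.stamp = old, -1

trail = Trail()
x = RInt(trail, 5)
mark = trail.new_node()
x.set(7)
x.set(9)                          # same node: not trailed a second time
assert len(trail.entries) == 1
trail.backtrack_to(mark)
assert x.value == 5
```

Note how the timestamp check makes the second `set` at the same node free of any trail storage: only the first value seen at the node is recorded.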
Reversible Sparse Sets. Reversible primitive types can be used to implement more complex data structures such as reversible sets. It was shown in [16] how to implement a reversible set using a single rint that represents the current size (limit) of the set. In this structure, which is called reversible sparse set, an array of size n is used to store the permutation from 0 to \(n-1\). All values in this permutation array at indices smaller than or equal to a variable limit are considered as part of the set, while the others are considered as removed. When iterating on current values of the set (with decreasing indices from limit to 0), the value at the current index can be removed in O(1) by just swapping it with the value stored at limit and decrementing limit. Making a sparse set reversible just requires managing a single rint for limit. On backtrack, when the limit is restored, all concerned removed values are restored in O(1).
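The O(1) removal-and-restoration scheme just described can be sketched as follows (a simplified Python rendition; a plain int stands in for the rint, and the names are ours, not the solver's):

```python
class SparseSet:
    """Sparse set over {0..n-1}: values[0..limit] are the current elements.
    Restoring `limit` alone restores the whole set on backtrack."""
    def __init__(self, n):
        self.values = list(range(n))   # a permutation of 0..n-1
        self.pos = list(range(n))      # pos[v] = index of v in values
        self.limit = n - 1             # in a solver, this would be an rint

    def contains(self, v):
        return self.pos[v] <= self.limit

    def remove(self, v):               # O(1): swap v with the value at limit
        i, j = self.pos[v], self.limit
        u = self.values[j]
        self.values[i], self.values[j] = u, v
        self.pos[v], self.pos[u] = j, i
        self.limit -= 1

s = SparseSet(5)
s.remove(2)
saved = s.limit                        # trailed once per node in a solver
s.remove(0)
s.remove(4)
s.limit = saved                        # "backtrack": 0 and 4 are back, O(1)
assert s.contains(0) and s.contains(4) and not s.contains(2)
```

Removed values are never moved out of the array; they simply end up past `limit`, which is what makes restoration a single integer assignment.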
Domains and Deltas. In OscaR [29], the implementation of domains relies on reversible sparse sets. One advantage is that one can easily retrieve the set of values removed from a domain between any two calls to a given filtering procedure. All we need to store in the filtering procedure is the last size of the domain. The delta set (set of values removed between the two calls) is composed of all the values located between the current size and the last recorded size. More details on this cheap mechanism to retrieve the delta sets can be found in [16].
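With this representation, delta sets come almost for free, as the following schematic illustration shows (plain lists, simplified from the sparse-set layout):

```python
# Removed values stay grouped just past the current size in the sparse array,
# so the delta between two calls is a contiguous slice (illustration only).
values = [1, 3, 5, 2, 4]    # sparse array of a domain; 2 and 4 were removed
size = 3                    # current domain: values[0..2] = {1, 3, 5}
last_size = 5               # size recorded at the previous propagator call

delta = values[size:last_size]   # values removed since the last call
assert delta == [2, 4]
last_size = size                 # record for the next call
```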
5 Reversible Sparse Bit-Sets
This section describes the class RSparseBitSet that is the main data structure for our algorithm to maintain the supports. In what follows, when we refer to an array t, t[0] denotes the first element (indexing starts at 0) and t.length the number of its cells (size). Also, \(0^k\) will stand for a sequence of k bits set to 0.
The class RSparseBitSet, which encapsulates four fields and six methods, is given in Algorithm 1. One important field is \(\mathtt {words}\), an array of p 64-bit words (actually, reversible long integers), which defines the current value of the bit-set: the ith bit of the jth word is 1 iff the \((j-1) \times 64 + i\)th element of the (initial) set is present. Initially, all words in this array have all their bits set to 1, except for the last word that may involve a suffix of bits set to 0. For example, if we want to handle a set initially containing 82 elements, then we build an array with \(p= \lceil 82/64 \rceil = 2\) words that initially looks like:
Because, in our context, only non-zero words (words having at least one bit set to 1) are relevant when processing operations on the bit-set, we rely on the sparse-set technique by managing in an array \(\mathtt {index}\) the indices of all words: the indices of all non-zero words are in \(\mathtt {index}\) at positions less than or equal to the value of a variable \(\mathtt {limit}\), and the indices of all zero-words are in \(\mathtt {index}\) at positions strictly greater than \(\mathtt {limit}\). For our example, we initially have:
If we suppose now that the 66 first elements of our set above are removed, we obtain:
The class invariant describing the state of a reversible sparse bit-set is the following:
-
\(\mathtt {index}\) is a permutation of \([0,\ldots ,p-1]\), and
-
\(\mathtt {words}[\mathtt {index}[i]] \ne 0^{64}\Leftrightarrow i \le \mathtt {limit}\), \(\forall i \in 0..p-1\)
Note that the reversible nature of our object comes from (1) an array of reversible long (denoted rlong) (instead of simple longs) to store the bit words, and (2) the reversible prefix size of non-zero words by using a reversible int (rint).
A RSparseBitSet also contains a local temporary array, called \(\mathtt {mask}\). It is used to collect elements with Method addToMask(), and can be cleared and reversed too. A RSparseBitSet can only be modified by means of Method intersectWithMask(), an operation that intersects the bit-set with the elements collected in \(\mathtt {mask}\). An illustration of the usage of these methods is given in the next example.
Example 2
Figure 1 illustrates the use of Methods addToMask() and intersectWithMask(). We assume that the current state of the bit-set is given by the value of \(\mathtt {words}\), and that clearMask() has been called such that \(\mathtt {mask}\) is initially empty. Then two bit-sets are collected in \(\mathtt {mask}\) by calling addToMask(). The value of \(\mathtt {mask}\) is represented after these two operations. Finally intersectWithMask() is executed and the new value of the bit-set \(\mathtt {words}\) is given at the last row of Fig. 1.
We now describe the implementation of the methods in RSparseBitSet. Method isEmpty() simply checks whether the number of non-zero words is equal to zero (if the limit is set to \(-1\), all words are zero and the set is empty). Method clearMask() sets to 0 all words of \(\mathtt {mask}\) corresponding to non-zero words of \(\mathtt {words}\), whereas Method reverseMask() reverses all words of \(\mathtt {mask}\). Method addToMask() applies a word-by-word logical bit-wise or operation. Once again, notice that this operation is only applied to words of \(\mathtt {mask}\) corresponding to non-zero words of \(\mathtt {words}\). Method intersectWithMask() considers each non-zero word of \(\mathtt {words}\) in turn and replaces it by its intersection with the corresponding word of \(\mathtt {mask}\). In case the resulting new word is zero, it is swapped with the last non-zero word and the value of \(\mathtt {limit}\) is decremented. Finally, Method intersectIndex() checks whether a given bit-set (array of longs) intersects the current bit-set: it returns the index of the first word on which an intersection can be proved, and \(-1\) otherwise.
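These methods can be sketched in Python as follows (our illustrative rendition, not the solver code: plain ints masked to 64 bits stand in for reversible longs, and trailing is omitted). Note that every loop runs only over \(\mathtt {index}[0..\mathtt {limit}]\), i.e., the non-zero words.

```python
WORD = (1 << 64) - 1                           # a 64-bit word, all bits set

class RSparseBitSet:
    def __init__(self, n):
        p = (n + 63) // 64
        self.words = [WORD] * p
        self.words[-1] >>= 64 * p - n          # zero suffix in the last word
        self.index = list(range(p))            # non-zero words at the front
        self.limit = p - 1                     # an rint in the solver
        self.mask = [0] * p

    def is_empty(self):
        return self.limit == -1

    def clear_mask(self):
        for i in range(self.limit + 1):
            self.mask[self.index[i]] = 0

    def reverse_mask(self):
        for i in range(self.limit + 1):
            w = self.index[i]
            self.mask[w] = ~self.mask[w] & WORD

    def add_to_mask(self, m):                  # m: an array of p words
        for i in range(self.limit + 1):
            w = self.index[i]
            self.mask[w] |= m[w]

    def intersect_with_mask(self):
        for i in range(self.limit, -1, -1):
            w = self.index[i]
            self.words[w] &= self.mask[w]
            if self.words[w] == 0:             # swap zero word past limit
                self.index[i] = self.index[self.limit]
                self.index[self.limit] = w
                self.limit -= 1

    def intersect_index(self, m):              # first word where m intersects
        for i in range(self.limit + 1):
            w = self.index[i]
            if self.words[w] & m[w] != 0:
                return w
        return -1

# The 82-element example: keep only elements 66..81 (bits 2..17 of word 1).
b = RSparseBitSet(82)
b.clear_mask()
b.add_to_mask([0, ((1 << 16) - 1) << 2])
b.intersect_with_mask()
assert b.limit == 0 and b.index[0] == 1        # word 0 became zero
assert not b.is_empty()
```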
6 Compact-Table (CT) Algorithm
Like STR2 and STR3, Compact-Table (CT) is a GAC algorithm that dynamically maintains the set of valid supports with respect to the current domain of each variable. The main difference is that CT is based on an RSparseBitSet object, in which each tuple is indexed by its position in the initial table. Invalid tuples are removed during initialization, as are values that are not supported by any tuple. The class ConstraintCT, Algorithm 2, allows us to implement any positive table constraint c while running the CT algorithm to enforce GAC.
6.1 Fields
As fields of Class ConstraintCT, we first find \(\mathtt {scp}\) for representing the scope of c and \(\mathtt {currTable}\) for representing the current table of c by means of a reversible sparse bit-set. If \(\langle \tau _0, \tau _1, \dots , \tau _{t-1} \rangle \) is the initial table of c, then \(\mathtt {currTable}\) is a RSparseBitSet object (of initial size t) such that the value i is contained (is set to 1) in the bit-set if and only if the ith tuple is valid:
We also have three fields \(\mathtt {S^{val}}\), \(\mathtt {S^{sup}}\) and \(\mathtt {lastSizes}\) in the spirit of STR2. The set \(\mathtt {S^{val}}\) contains the variables whose domains have been reduced since the previous invocation of the filtering algorithm on c. To set up \(\mathtt {S^{val}}\), we need to record the domain size of each modified variable x right after the execution of CT on c: this value is recorded in \(\mathtt {lastSizes}[x]\). The set \(\mathtt {S^{sup}}\) contains the unfixed variables (from the scope of the constraint c) whose domains each contain at least one value for which a support must be found. These two sets allow us to restrict loops on variables to the relevant ones.
We also have a field \(\mathtt {supports}\) containing static data. During the set up of the table constraint c, CT also computes a static array of words \(\mathtt {supports}[x,a]\), seen as a bit-set, for each variable-value pair (x, a) where \(x \in scp(c) \wedge a \in dom(x)\): the bit at position i in the bit-set is 1 if and only if the tuple \(\tau _i\) in the initial table of c is a support for (x, a).
Example 3
Figure 2 shows an illustration of the content of those bit-sets after the initialization of the following table constraint \(\langle x,y,z \rangle \in T\), with:
-
\(dom(x)=\{ a, b \}\), \(dom(y)=\{ a, b, d \}\), \(dom(z)=\{ a, b, c \}\)
-
\(T = \langle (a, a, a), (a, a, b), (a, b, c), (b, a, a), (a, c, b), (a, b, b), (b, a, b), (b, b, a), (b, b, b) \rangle \)
The tuple (a, c, b) is initially invalid because \(c \notin dom(y)\), and thus will not be indexed. Value d will be removed from dom(y) given that it is not supported by any tuple.
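The initialization step can be reproduced on the data of Example 3 (a schematic Python rendition; Python's unbounded ints stand in for the words arrays):

```python
# A sketch of the set-up of a ConstraintCT: invalid tuples are skipped, then
# one supports bit-set is built per variable-value pair (data of Example 3).
doms = {'x': {'a', 'b'}, 'y': {'a', 'b', 'd'}, 'z': {'a', 'b', 'c'}}
T = [('a','a','a'), ('a','a','b'), ('a','b','c'), ('b','a','a'),
     ('a','c','b'), ('a','b','b'), ('b','a','b'), ('b','b','a'), ('b','b','b')]
scp = ['x', 'y', 'z']

valid = [t for t in T if all(t[i] in doms[v] for i, v in enumerate(scp))]
supports = {(v, a): 0 for v in scp for a in doms[v]}
for i, t in enumerate(valid):          # bit i <-> ith valid tuple
    for j, v in enumerate(scp):
        supports[v, t[j]] |= 1 << i

assert len(valid) == 8                 # (a, c, b) was filtered out
assert supports['y', 'd'] == 0         # d has no support: removed from dom(y)
```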
Finally, we have an array \(\mathtt {residues}\) such that for each variable-value pair (x, a), \(\mathtt {residues}[x,a]\) denotes the index of the word where a support was found for (x, a) the last time one was sought for.
6.2 Methods
The main method in ConstraintCT is enforceGAC(). After the initialization of the sets \(\mathtt {S^{val}}\) and \(\mathtt {S^{sup}}\), CT updates \(\mathtt {currTable}\) to filter out (the indices of) tuples that are no longer supports, and then considers each variable-value pair to check whether these values still have a support.
Updating the Current Table. For each variable \(x \in \mathtt {S^{val}}\), i.e., each variable x whose domain has changed since the last time the filtering algorithm was called, updateTable() performs some operations. This method assumes an access to the set of values \(\varDelta _x\) removed from dom(x) since the last call to enforceGAC(). There are two ways of updating \(\mathtt {currTable}\), either incrementally or from scratch after resetting. Note that the idea of using resets has been proposed in [30] and successfully applied to GAC4 and MDD4, with the practical interest of saving computational effort in some precise contexts. This is the strategy implemented in updateTable(), by considering a reset-based computation when the size of the domain is smaller than the number of deleted values.
In case of an incremental update (line 10), the union of the tuples to be removed is collected by calling addToMask() for each bit-set (of supports) corresponding to a removed value, whereas in case of a reset-based update (line 14), we perform the union of the tuples to be kept. To get a mask ready to apply, we just need to reverse it when it has been built from removed values. Finally, the (indices of) tuples of \(\mathtt {currTable}\) not contained in the mask built from x are directly removed by means of intersectWithMask(). When there are no more tuples in the current table, a failure is detected, and updateTable() is stopped by means of a loop break.
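The dynamic choice just described can be summarized as follows (a sketch with Python ints standing in for the reversible sparse bit-set; deltas and doms are assumed to be provided by the solver):

```python
def update_table(curr_table, supports, s_val, deltas, doms):
    """Invalidate tuples for each modified variable, incrementally or by
    reset, mirroring lines 10 and 14 of Algorithm 2 (schematic version)."""
    for x in s_val:
        mask = 0
        if len(doms[x]) < len(deltas[x]):      # reset: union of kept tuples
            for a in doms[x]:
                mask |= supports[x, a]
        else:                                  # incremental: union of removed
            for a in deltas[x]:
                mask |= supports[x, a]
            mask = ~mask                       # reverseMask()
        curr_table &= mask                     # intersectWithMask()
        if curr_table == 0:
            break                              # failure: empty table
    return curr_table

# Example 4: a is removed from dom(x); its 4 supporting tuples are invalidated.
supports = {('x', 'a'): 0b00010111, ('x', 'b'): 0b11101000}
new_table = update_table(0b11111111, supports, ['x'],
                         {'x': {'a'}}, {'x': {'b'}})
assert new_table == 0b11101000
```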
Filtering of Domains. Values are removed from the domains of some variables during the search for a solution, which can lead to inconsistent values in the domains of other variables. As \(\mathtt {currTable}\) is a reversible and dynamically maintained structure, the value of some bits changes from 1 to 0 when tuples become invalid (or from 0 to 1 when the search backtracks). By contrast, the \(\mathtt {supports}\) bit-sets are only computed at the creation of the constraint and are not maintained during search. It follows from the definition of those bit-sets that (x, a) has a valid support if and only if
Therefore, each time a tuple becomes invalid, the constraint must check this condition for every variable-value pair (x, a) such that \(a \in dom(x)\), and remove a from dom(x) if the condition is not satisfied any more. This operation is efficiently implemented in filterDomains() with the help of residues and the method intersectIndex().
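The residue-based check can be sketched as follows (schematic Python over arrays of words; the names are ours, not the solver's):

```python
def intersect_index(words, m):           # first word proving an intersection
    for i, (w, mw) in enumerate(zip(words, m)):
        if w & mw != 0:
            return i
    return -1

def filter_domains(curr_words, supports, residues, s_sup, doms):
    """Check each (x, a) against currTable, trying the residue word first."""
    for x in s_sup:
        for a in list(doms[x]):
            r = residues[x, a]
            if curr_words[r] & supports[x, a][r] == 0:   # residue failed
                r = intersect_index(curr_words, supports[x, a])
                if r != -1:
                    residues[x, a] = r                   # cache new residue
                else:
                    doms[x].discard(a)                   # no support left

# Continuing Example 4: after the update, c loses its last support on z.
curr = [0b11101000]
supports = {('z', 'a'): [0b01001001], ('z', 'b'): [0b10110010],
            ('z', 'c'): [0b00000100]}
residues = {('z', v): 0 for v in 'abc'}
doms = {'z': {'a', 'b', 'c'}}
filter_domains(curr, supports, residues, ['z'], doms)
assert doms['z'] == {'a', 'b'}
```

In the common case the residue word alone proves the support, so most checks cost a single bit-wise and.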
Example 4
The same set of tuples as in Example 3 is considered. Suppose now that a was removed from dom(x) (by another constraint) after the initialization. Given that the domain of x has been reduced, when updateTable() is called by enforceGAC(), all tuples supporting (x, a) (because \(\varDelta _x = \{a\}\)) will be invalidated. Figure 3a illustrates the intermediary bit-sets used to compute the new value \(\mathtt {currTable}^{out}\) from \(\mathtt {currTable}^{in}\) and \(\mathtt {supports}[x,a]\). Then filterDomains() computes for each variable-value pair \((x_i,a_i)\) (with \(x_i \in \mathtt {S^{sup}}\) and \(a_i \in dom(x_i)\)) the intersection of its associated set of supports with \(\mathtt {currTable}\), as shown in Fig. 3b. Given that the intersection of \(\mathtt {supports}[z,c]\) and \(\mathtt {currTable}\) is empty, c is removed from dom(z).
6.3 Improvements
The algorithm in Sect. 6.2 can be improved to avoid unnecessary computations in some cases.
Filtering Out Bound Variables. The initialization of \(\mathtt {S^{val}}\) at line 32 can be restricted to the unbound variables (and the last assigned variable), instead of the whole scope. These variables can be maintained in a reversible sparse set.
Last Modified Variable. It is not necessary to attempt to filter values out from the domain of a variable x if this was the only modified variable since the last call to enforceGAC(). Indeed, when updateTable() is executed, the new state of \(\mathtt {currTable}\) will be computed from dom(x) or \(\varDelta _x\) only. Because every value of x had a support in \(\mathtt {currTable}\) the last time the propagator was called, we can omit filtering dom(x) by initially removing x from \(\mathtt {S^{sup}}\).
7 Experiments
We experimented with CT on 1,621 CSP instances involving (positive) table constraints (15 GB of uncompressed files in format XCSP 2.1). This corresponds to a large variety of instances, taken from 37 series. For guiding search, we used binary branching with domain over degree as variable ordering heuristic and min value as value ordering heuristic. A timeout of 1,000 s was used for each instance. The tested GAC algorithms are CT, STR2 [17], STR3 [20], GAC4 [28, 30], GAC4R [30], MDD4R [30] and AC5TCRecomp [26]. All scripts, code and benchmarks allowing our experiments to be reproduced are available at https://bitbucket.org/pschaus/xp-table. The experiments were run on a 32-core machine (1400 MHz CPU) with 100 GB of memory, using Java(TM) SE Runtime Environment (build 1.8.0_60-b27) with 10 GB of memory allocated (-Xmx option).
Performance Profiles. Let \(t_{i,\,s}\) represent the time obtained with filtering algorithm \(s \in S\) on instance \(i \in I\). The performance ratio is defined as follows: \(r_{i,\,s} = \frac{t_{i,\,s}}{\min \{t_{i,\,s} | s \in S\}}\). A ratio \(r_{i, \,s} = 1\) means that s was the fastest on instance i. The performance profile [8] is a cumulative distribution function of the performance of s (speedup) compared to other algorithms: \(\rho _s(\tau ) = \frac{1}{|I|} \times |\{i \in I | r_{i,\,s} \le \tau \}|\).
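The two formulas above translate directly into code (with toy timings, for illustration only):

```python
def performance_profile(times):
    """times[s][i] = solve time of algorithm s on instance i.
    Returns rho(s, tau): fraction of instances where s is within a
    factor tau of the best algorithm."""
    algos = list(times)
    n = len(next(iter(times.values())))
    best = [min(times[s][i] for s in algos) for i in range(n)]
    ratios = {s: [times[s][i] / best[i] for i in range(n)] for s in algos}
    return lambda s, tau: sum(r <= tau for r in ratios[s]) / n

rho = performance_profile({'CT': [1.0, 2.0, 3.0], 'STR2': [2.0, 2.0, 9.0]})
assert rho('CT', 1.0) == 1.0      # CT fastest (or tied) on every instance
assert rho('STR2', 2.0) == 2/3    # STR2 within a factor 2 on two instances
```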
Our results are visually aggregated to form a performance profile in Fig. 4, generated by means of the online tool [5] http://sites.uclouvain.be/performance-profile. Note that we filtered out the instances that (i) could not be solved within 1,000 s by all algorithms, (ii) were solved in less than 2 s by the slowest algorithm, or (iii) required less than 500 backtracks. The final set of instances used to build the profile is composed of 227 instances. An interactive performance profile is also available at https://www.info.ucl.ac.be/~pschaus/assets/publi/performance-profile-ct to let the interested reader deactivate some families of instances to analyze the results more closely.
Table 1 reports the speedup statistics of CT over the other algorithms. A first observation is that CT is the fastest algorithm on 94.47 % of the instances. Among all tested algorithms, AC5TCRecomp obtains the worst results. It is less clear which one among STR2, STR3, GAC4 and GAC4R is the second best algorithm. Based on the geometric mean speedup, STR3 seems to be the second best algorithm, followed by STR2, GAC4R and MDD4R. Importantly, one can observe that the geometric mean speedup of CT over the best of the other algorithms is about 2.75.
Impact of Resetting Operations. In Algorithm 2, the choice of being incremental or not, when updating currTable, depends on the size of several sets and is thus dynamic. We propose to analyze two variants of Algorithm 2 when this choice is static:
-
Full incremental (CTI): only the body of the ‘if’ at line 10 is executed (deltas are systematically used).
-
Full re-computation (CTR): only the body of the ‘else’ at line 14 is executed (domains are systematically used).
The performance profiles with these two variants are given in Fig. 5, and the speedup table of the static versions over the dynamic one is given in Table 2.
As can be seen from both the performance profiles and the speedup table, the dynamic version using the resetting operations dominates the static ones. The geometric mean speedup is around 4 % over CTI and 34 % over CTR.
Contradiction with Previous Results. In [26], AC5TCRecomp was presented as being competitive with STR2. When we analyzed the codeFootnote 4 of STR2 used in [26], it appeared that STR2 was implemented in Comet using built-in sets (triggering the garbage collection of Comet). We thus believe that the results and conclusions in [26] may over-penalize the performance of STR2. Our results also somewhat contradict the results in [30], where STR3 and STR2 were dominated by MDD4R and GAC4R. When analyzing the performance of the implementations of STR2 and STR3 used in [30] with or-tools, it appears that they are not as competitive as those in AbsCon (sometimes slower by a factor of 3). The results presented in [30] may thus also over-penalize STR2 and STR3.
One additional contribution of this work is a fine-tuned implementation of the best filtering algorithms for table constraints. The implementation of these algorithms in OscaR was optimized, and checked to be close in performance to the ones by the original authors. For CT, STR2 and STR3, a comparison was made with AbsCon, and for CT, MDD4R and GAC4R, a comparison was made with or-tools. Obtaining an efficient implementation of each algorithm required a development effort of 10 man-months, involving the expertise of several OscaR developers and a deep analysis of the existing implementations in AbsCon and or-tools. The implementation of all algorithms used in this paper is open-source and part of OscaR release 3.1.0.
8 Conclusion
In this paper, we have shown that Compact-Table (CT) is a robust algorithm that clearly dominates state-of-the-art propagators for table constraints. CT benefits from well-tried techniques: bitwise operations, residual supports, tabular reduction and resetting operations. We believe that CT can be easily implemented using the reversible sparse bit-set data structure.
Notes
- 1.
We are aware of an independent work [34] on a similar topic, but did not have the opportunity to read it at the time of writing our paper.
- 2.
Note that some parts of this paper were published in a Master Thesis report [7].
- 3.
One notable exception is Gecode, a copy-based solver.
- 4.
Available at http://becool.info.ucl.ac.be.
References
Bessiere, C., Régin, J.-C.: Arc consistency for general constraint networks: preliminary results. In: Proceedings of IJCAI 1997, pp. 398–404 (1997)
Bessiere, C., Régin, J.-C., Yap, R., Zhang, Y.: An optimal coarse-grained arc consistency algorithm. Artif. Intell. 165(2), 165–185 (2005)
Bliek, C.: Wordwise algorithms and improved heuristics for solving hard constraint satisfaction problems. Technical Report 12–96-R045, ERCIM (1996)
Briggs, P., Torczon, L.: An efficient representation for sparse sets. ACM Lett. Programm. Lang. Syst. 2(1–4), 59–69 (1993)
Van Cauwelaert, S., Lombardi, M., Schaus, P.: A visual web tool to perform what-if analysis of optimization approaches. Technical report, UCLouvain (2016)
Cheng, K., Yap, R.: An MDD-based generalized arc consistency algorithm for positive and negative table constraints and some global constraints. Constraints 15(2), 265–304 (2010)
Demeulenaere, J.: Efficient algorithms for table constraints. Technical report, Master's thesis, under the supervision of P. Schaus, UCLouvain (2015)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)
Floyd, R.W.: Nondeterministic algorithms. J. ACM 14(4), 636–644 (1967)
Gent, I.P., Jefferson, C., Miguel, I., Nightingale, P.: Data structures for generalised arc consistency for extensional constraints. In: Proceedings of AAAI 2007, pp. 191–197 (2007)
Gharbi, N., Hemery, F., Lecoutre, C., Roussel, O.: Sliced table constraints: combining compression and tabular reduction. In: Simonis, H. (ed.) CPAIOR 2014. LNCS, vol. 8451, pp. 120–135. Springer, Heidelberg (2014)
Van Hentenryck, P., Mairy, J.-B., Deville, Y.: Optimal and efficient filtering algorithms for table constraints. Constraints 19(1), 77–120 (2014)
Jefferson, C., Nightingale, P.: Extending simple tabular reduction with short supports. In: Proceedings of IJCAI 2013, pp. 573–579 (2013)
Katsirelos, G., Walsh, T.: A compression algorithm for large arity extensional constraints. In: Proceedings of CP 2007, pp. 379–393 (2007)
Knuth, D.E.: The Art of Computer Programming: Combinatorial Algorithms, vol. 4. Addison-Wesley (2015)
de Saint-Marcq, V.C., Schaus, P., Solnon, C., Lecoutre, C.: Sparse-sets for domain implementation. In: Proceedings of TRICS 2013, pp. 1–10 (2013)
Lecoutre, C.: STR2: Optimized simple tabular reduction for table constraints. Constraints 16(4), 341–371 (2011)
Lecoutre, C., Boussemart, F., Hemery, F.: Exploiting multidirectionality in coarse-grained arc consistency algorithms. In: Proceedings of CP 2003, pp. 480–494 (2003)
Lecoutre, C., Hemery, F.: A study of residual supports in arc consistency. In: Proceedings of IJCAI 2007, pp. 125–130 (2007)
Lecoutre, C., Likitvivatanavong, C., Yap, R.: STR3: A path-optimal filtering algorithm for table constraints. Artif. Intell. 220, 1–27 (2015)
Lecoutre, C., Szymanek, R.: Generalized arc consistency for positive table constraints. In: Proceedings of CP 2006, pp. 284–298 (2006)
Lecoutre, C., Vion, J.: Enforcing arc consistency using bitwise operations. Constraint Program. Lett. 2, 21–35 (2008)
Lhomme, O., Régin, J.-C.: A fast arc consistency algorithm for n-ary constraints. In: Proceedings of AAAI 2005, pp. 405–410 (2005)
Likitvivatanavong, C., Zhang, Y., Bowen, J., Freuder, E.C.: Arc consistency in MAC: a new perspective. In: Proceedings of CPAI 2004 Workshop held with CP 2004, pp. 93–107 (2004)
Mackworth, A.K.: Consistency in networks of relations. Artif. Intell. 8(1), 99–118 (1977)
Mairy, J.-B., van Hentenryck, P., Deville, Y.: An optimal filtering algorithm for table constraints. In: Proceedings of CP 2012, pp. 496–511 (2012)
McGregor, J.J.: Relational consistency algorithms and their application in finding subgraph and graph isomorphisms. Inf. Sci. 19, 229–250 (1979)
Mohr, R., Masini, G.: Good old discrete relaxation. In: Proceedings of ECAI 1988, pp. 651–656 (1988)
OscaR Team: OscaR: Scala in OR (2012). https://bitbucket.org/oscarlib/oscar
Perez, G., Régin, J.-C.: Improving GAC-4 for table and MDD constraints. In: Proceedings of CP 2014, pp. 606–621 (2014)
Srinivasan, A., Kam, T., Malik, S., Brayton, R.K.: Algorithms for discrete function manipulation. In: Proceedings of ICCAD 1990, pp. 92–95 (1990)
Ullmann, J.R.: An algorithm for subgraph isomorphism. J. ACM 23(1), 31–42 (1976)
Ullmann, J.R.: Partition search for non-binary constraint satisfaction. Inf. Sci. 177, 3639–3678 (2007)
Wang, R., Xia, W., Yap, R., Li, Z.: Optimizing simple table reduction with bitwise representation. In: Proceedings of IJCAI 2016 (2016)
Xia, W., Yap, R.: Optimizing STR algorithms with tuple compression. In: Proceedings of CP 2013, pp. 724–732 (2013)
Copyright information
© 2016 Springer International Publishing Switzerland
Cite this paper
Demeulenaere, J. et al. (2016). Compact-Table: Efficiently Filtering Table Constraints with Reversible Sparse Bit-Sets. In: Rueher, M. (ed.) Principles and Practice of Constraint Programming. CP 2016. Lecture Notes in Computer Science, vol. 9892. Springer, Cham. https://doi.org/10.1007/978-3-319-44953-1_14