
1 Constraint Programming and Metaheuristics

Constraint programming (CP) [1, 2] is an effective methodology for the solution of combinatorial problems that has been successfully applied in many domains. In a nutshell, CP is a declarative programming paradigm based on the idea of describing the relations (i.e., constraints) between variables that must hold in all solutions of the combinatorial problem at hand. For example, in the solution of a Sudoku puzzle, the numbers to be placed must be unique with respect to columns, rows, and blocks of the board.
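For instance, the uniqueness requirements of a (hypothetical) 4 × 4 Sudoku board can be read as a collection of all-different constraints over rows, columns, and blocks. The following self-contained Python sketch is purely illustrative and not part of the CP formalism itself:

```python
from itertools import product

# A hypothetical solved 4x4 Sudoku grid (2x2 blocks). In CP terms, each cell
# is a variable with domain {1..4}, and every row, column, and block is
# constrained to contain pairwise-distinct values.
grid = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]

def all_different(values):
    """The core Sudoku constraint: no value repeats in the group."""
    return len(set(values)) == len(values)

def is_valid(g):
    rows = g
    cols = [[g[r][c] for r in range(4)] for c in range(4)]
    blocks = [[g[r + dr][c + dc] for dr, dc in product(range(2), range(2))]
              for r in (0, 2) for c in (0, 2)]
    return all(all_different(group) for group in rows + cols + blocks)

print(is_valid(grid))  # True
```

A CP solver would, of course, search for such a grid rather than merely check one; the check above only illustrates what the constraints state.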

CP has an interdisciplinary nature, since it relies on contributions and methods from the communities of logic programming (LP), artificial intelligence (AI), and operations research (OR). Indeed, the simple declarative modeling language of CP, consisting of variables and constraints, is very similar to those available in classical LP languages such as Prolog. The solution method features constraint propagation which, in its essence, is a reasoning or inference procedure typical of AI. Finally, especially for optimization problems, the solution process makes use of OR-inspired branch and bound procedures and/or of dedicated OR solvers for specific types of variables/constraints (e.g., the simplex method for real variables and linear constraints).

A CP model is an encoding of the problem statement using the basic CP building blocks, i.e., variables and constraints. Once a CP model of the problem under consideration has been stated, a CP solver is used to systematically search the solution space by alternating deterministic phases (constraint propagation) and non-deterministic phases (variable assignment, tree search), thus exploring implicitly or explicitly the whole search space. In this respect, CP belongs to the family of complete (or exact) solution methods. In other words, CP guarantees finding the (optimal) solution of the problem or proving that the problem is not satisfiable.

A different approach is usually taken by metaheuristics [3], such as local search [4], evolutionary algorithms [5], and ant colony optimization [6], just to name a few. These methods are incomplete, since they rely on heuristic information to focus on interesting areas of the search space and, in general, do not explore it entirely but are stopped after a given time limit. As a consequence, these algorithms do not guarantee finding the (optimal) solution, trading completeness for a (possibly) greater (empirical) efficiency in the solution process.

Looking at completeness alone, it seems that the clear choice for solving combinatorial problems would be to always prefer CP over metaheuristics as the solution method. However, in practice completeness is hindered by the high computational effort due to the worst-case complexity of the problems considered (usually NP-complete or NP-hard). Therefore, for practical purposes, the execution of CP solvers is also terminated before the whole search space has been explored, and a number of heuristics is used to focus the search on the regions where it is more likely to find the solutions of the problem. Consequently, CP and metaheuristics can be seen as complementary approaches.

Although these two kinds of methods have been individually studied by separate scientific communities (for historical reasons), in recent years we have witnessed an increasing interest in the integration of the methods. In many cases, indeed, each approach has its own strengths and weaknesses, and the general aim of method integration is to create hybrid algorithms that enhance the strengths of both approaches and (possibly) overcome some of the weaknesses. In this respect, Yunes maintains a web page listing a number of success stories of hybrid solution methods [7], that is, papers describing integrated approaches that outperform single optimization methods.

A number of conferences and workshops specifically aiming at bringing together researchers working on the integration of solution techniques for combinatorial problems have also recently started. Notable examples are the series of CP-AI-OR conferences [8, 9], started in 1999, and the Hybrid Metaheuristics workshops [10, 11, 12, 13, 14, 15, 16], started in 2004. The scope of these conferences is not limited to the integration of CP techniques with metaheuristics, but they also consider hybridization among other methods.

Additionally, a few surveys on the integration of complete methods with metaheuristics have appeared in the literature [17, 18, 19]. However, these surveys either deal with a particular class of metaheuristics (i.e., local search) [17, 19] and/or with a different class of complete methods (integer linear programming) [17, 18]. Jourdan et al. [20] also took CP methods into account, but they provide mostly a taxonomy of cooperation between optimization methods rather than surveying the specific integrations. Wallace [21] and Azevedo et al. [22] survey hybrid algorithms, but from a constraint programming viewpoint and mainly in the setting of hybrid exact methods. In their recent review of hybrid metaheuristics, Blum et al. [23] include a section on the integration of CP with local search and ant colony optimization (ACO). However, to the best of our knowledge, at present no specific survey on the integration of metaheuristics and constraint programming has been published in the literature. This work tries to fill this gap and to review the different approaches specifically employed in the integration of CP methods within metaheuristics.

The chapter is organized as follows. In Sect. 62.2 the basic concepts of the constraint programming paradigm are introduced. They include modeling (Sect. 62.2.1), solution methods (Sect. 62.2.2), and CP systems (Sect. 62.2.3). The integration of CP with metaheuristics is presented in Sect. 62.3, which is organized on the basis of the metaheuristic type involved in the integration. Finally, in Sect. 62.4 some conclusions are drawn.

2 Constraint Programming Essentials

In this section, we will briefly describe the essential concepts of CP, which are needed to understand the following sections. The readers interested in a more detailed introduction to CP are referred to the book of Apt [1] and to the recent comprehensive Handbook of Constraint Programming [2].

In order to apply constraint programming to a combinatorial problem, one first needs to model it through the specific formalism of constraint satisfaction or constrained optimization problems. Afterwards, the model can be solved by a CP solver, which alternates the analysis of constraints with tree search. Let us review these basic concepts.

2.1 Modeling

Constraint satisfaction problems (CSPs) are a useful formalism for modeling many real-world problems, either discrete or continuous. Remarkable examples are planning, scheduling, and timetabling, just to name a few.

A CSP is generally defined as the problem of associating values (taken from a set of domains) to variables subject to a set of constraints. A solution of a CSP is an assignment of values to all the variables so that the constraints are satisfied. In some cases not all solutions are equally preferable, and we can associate a cost function to the variable assignments. In these cases, we talk about constrained optimization problems (COPs), and we are looking for a solution that (without loss of generality) minimizes the cost value. These concepts are formally introduced in the following.

2.1.1 Constraint Satisfaction Problems

Given:

  • X = {x_1, …, x_k} is a set of variables.

  • D = {D_1, …, D_k} is a set of domains associated to the variables. In other words, each variable x_i can assume value d_i if and only if d_i ∈ D_i.

  • C is a set of constraints, i.e., mathematical relations over Dom = D_1 × ⋯ × D_k.

We say that a tuple ⟨d_1, …, d_k⟩ ∈ Dom satisfies a constraint C ∈ C if and only if ⟨d_1, …, d_k⟩ ∈ C.

A constraint satisfaction problem (CSP) P, described by the triple ⟨X, D, C⟩, is the problem of finding the tuples d̄ = ⟨d_1, …, d_k⟩ ∈ Dom that satisfy every constraint C ∈ C. Such tuples are called solutions of the CSP, and the set of solutions of P is denoted by sol(P).

P is said to be consistent or satisfiable if and only if sol(P) ≠ ∅.

Notice that, depending on the modeling of the combinatorial problem at hand, we could be interested in determining different properties of the CSP. In the extreme case, for example, one could just want to know whether the problem is satisfiable, regardless of the actual solutions. The most common case is to search for and provide a single solution to the problem, whereas sometimes one could be interested in all the solutions.
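As a toy illustration of these definitions (the variables, domains, and constraints below are invented for the example), sol(P) for a small finite-domain CSP can be computed by brute-force enumeration of Dom:

```python
from itertools import product

# A toy CSP <X, D, C>: three variables with finite domains and two
# constraints. sol(P) is computed by enumerating Dom and filtering.
domains = {
    "x1": [1, 2, 3],
    "x2": [1, 2, 3],
    "x3": [1, 2, 3],
}

constraints = [
    lambda a: a["x1"] < a["x2"],       # x1 < x2
    lambda a: a["x2"] + a["x3"] == 4,  # x2 + x3 = 4
]

variables = sorted(domains)
solutions = []
for values in product(*(domains[v] for v in variables)):
    assignment = dict(zip(variables, values))
    # the tuple is a solution iff it satisfies every constraint C in C
    if all(c(assignment) for c in constraints):
        solutions.append(assignment)

print(solutions)
```

Enumeration is, of course, only feasible for tiny instances; the solution methods of Sect. 62.2.2 avoid it by interleaving propagation with tree search.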

2.1.2 Constrained Optimization Problems

A constrained optimization problem (COP) O = ⟨X, D, C, f⟩ is a CSP P = ⟨X, D, C⟩ with an associated objective function f : sol(P) → E, where ⟨E, ≤⟩ is a well-ordered set (typically, E is one of the sets ℕ, ℤ, ℝ).

Differently from the previous case, the tuples d̄ ∈ sol(O) that satisfy every constraint are called feasible solutions, and the set of these tuples is usually assumed to be non-empty. A solution of the COP O is a feasible solution ē ∈ sol(O) for which the value of the objective function f is minimized, i.e.,

∀ d̄ ∈ sol(O) : f(ē) ≤ f(d̄).
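Continuing the toy illustration (again with invented domains, constraints, and objective), a COP simply adds an objective function over the feasible tuples and asks for a minimizer:

```python
from itertools import product

# Extending the toy CSP into a COP: among the feasible assignments,
# return one minimizing the (invented) objective f.
domains = {"x1": [1, 2, 3], "x2": [1, 2, 3]}
constraints = [lambda a: a["x1"] != a["x2"]]

def f(a):
    # objective function: minimize 2*x1 + x2
    return 2 * a["x1"] + a["x2"]

feasible = []
for v1, v2 in product(domains["x1"], domains["x2"]):
    assignment = {"x1": v1, "x2": v2}
    if all(c(assignment) for c in constraints):
        feasible.append(assignment)

best = min(feasible, key=f)   # a solution of the COP
print(best, f(best))  # {'x1': 1, 'x2': 2} 4
```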

2.1.3 Observations

A few observations about this formalism are worth noting. First, notice that the general framework does not impose any restriction on either the type of domains and constraints or the form of the objective function that can be used to express the problem. The basic type of domain is a finite set of integer values (also known as a finite domain), but there are other possibilities that enhance the expressive power of the modeling framework and capture some combinatorial substructures of the problem more naturally. For example, it is possible to deal with variables whose values are finite (multi)sets, (hyper)graphs, real-valued intervals, or resources of a scheduling problem. Moreover, the kinds of constraints that can be employed are also quite rich and include arithmetic constraints, set constraints, permutation, counting and other types of combinatorial constraints, resource scheduling constraints, path constraints on graphs, and constraints expressible through regular expressions, just to name a few possibilities (see [24] for a comprehensive set of constraints and their implementation in actual CP systems).

These features clearly make the modeling phase easier and more precise with respect to other formalisms such as integer linear programming. Indeed, part of the combinatorial structure of the problem can be directly captured by the use of complex domains/constraints and, as for the objective function, there is no general limitation on its form; in particular, there is no assumption of linearity.

Another important point to be noticed regards the role of constraints. Differently from other modeling formalisms, which distinguish between constraints that must be satisfied (called hard constraints) and constraints that should preferably be satisfied (soft constraints), in the original CSP/COP framework all constraints are hard, and the solution methods, described in the following section, consider it mandatory to satisfy all of them. There have been several attempts in the CP literature to include soft constraints in the general framework (see, e.g., [25] for a review), but the most common way to handle them is to include a measure of their violation in the objective function of the problem.

2.2 Solution Methods

CP solution methods basically exploit a form of tree search that interleaves a branching phase with an analysis of constraints called constraint propagation. These two components are described in the following.

2.2.1 Branching and Tree Search

Once the combinatorial problem has been modeled as a CSP or a COP, CP solves it by constructing a solution through a process that exploits a non-deterministic variable assignment, where one variable is selected together with one value in its current domain. This phase is also called labeling, using (constraint) logic programming terminology, and a solution to the problem is a complete labeling. The process proceeds by recursively checking whether the current labeling can be extended to a consistent solution or, in the negative case, undoing the current assignment.

The pseudocode of the procedure, called (chronological) backtracking, is given in Algorithm 62.1. The procedure is first called with the full set of variables and an empty labeling, as Backtracking(X, ∅, C, Dom). The procedure performs an implicit form of tree search, where a branch is identified by the selection of one variable (a node of the search tree) and all the possible values for that variable (the edges).

Note that, at each step of the recursive procedure, the choice of the variable and the value to branch on is non-deterministic. Therefore, these choices are amenable to heuristics that enhance performance.

In addition, there are also other possibilities to define a branching rule. For example, instead of selecting a possible value for the selected variable (i.e., the assignment x_i := v), the branching rule could split the domain of a given variable x_i in two by selecting a value v ∈ D_i and adding the constraint x_i ≤ v on one branch and x_i > v on the other.
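A runnable Python rendering of chronological backtracking with naive consistency checking may help; the 4-queens model below (one variable per row, values are column positions) is a standard illustrative example, not taken from the chapter:

```python
# A minimal sketch of chronological backtracking (Algorithm 62.1): label
# one variable per recursive call and undo the assignment on failure.
def backtracking(unlabeled, labeling, constraints, domains):
    if not unlabeled:
        return labeling
    x = unlabeled[0]                      # variable selection (heuristic point)
    for v in sorted(domains[x]):          # value selection (heuristic point)
        labeling[x] = v
        if consistent(labeling, constraints):
            r = backtracking(unlabeled[1:], labeling, constraints, domains)
            if r is not None:
                return r
        del labeling[x]                   # undo and try the next value
    return None                           # backtrack to the previous variable

def consistent(labeling, constraints):
    # Naive test (cf. Algorithm 62.2): a constraint is checked only once
    # all of its variables are labeled.
    return all(
        any(v not in labeling for v in scope) or check(labeling)
        for scope, check in constraints
    )

# 4-queens: variable i = column of the queen placed in row i.
n = 4
doms = {i: range(n) for i in range(n)}
cons = [
    ((i, j), lambda a, i=i, j=j: a[i] != a[j] and abs(a[i] - a[j]) != j - i)
    for i in range(n) for j in range(i + 1, n)
]
solution = backtracking(list(range(n)), {}, cons, doms)
print(solution)  # one valid placement, e.g. row -> column
```

Real CP solvers replace the naive `consistent` test with constraint propagation, as discussed next.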

2.2.2 Consistency and Constraint Propagation

The check for solution consistency does not need all the variables to be instantiated, in particular for detecting the unsatisfiability of the CSP with respect to some constraint. For example, Algorithm 62.2 reports the most straightforward implementation of the procedure Consistent(L, C, Dom). The procedure simply checks whether the satisfiability of a given constraint can be ascertained according to the current labeling (i.e., if all of the constraint variables are assigned). However, reasoning about the current labeling with respect to the constraints of the problem and the domains of the unlabeled variables does not necessarily need all the variables appearing in a constraint to be instantiated. Moreover, the analysis can prune (as a side effect) the domains of the unlabeled variables while preserving the set of solutions sol(P), making the exploration of the subtree more effective. This phase is called constraint propagation and is interleaved with the variable assignment. In general, the analysis of each constraint is repeated until a fixed point for the current situation is reached. In the case that one of the domains becomes empty, consistency cannot be achieved and, consequently, the procedure returns a fail.

Different notions of consistency can be employed. For example, one of the most common and most studied notions is hyper-arc consistency [26]. For a k-ary constraint C, it checks the compatibility of a value v in the domain of one of the variables with the currently possible combinations of values for the remaining k − 1 variables, pruning v from the domain if no supporting combination is found. The algorithms that maintain hyper-arc consistency have a complexity that is polynomial in the size of the problem (measured in terms of the number of variables/constraints and the size of the domains). Other consistency notions have been introduced in the literature, each with different pruning capabilities and a computational complexity that is usually proportional to its effectiveness.
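The pruning rule behind hyper-arc consistency can be sketched in Python for a single constraint; the `revise` function and the example constraint below are illustrative assumptions, not a production propagation algorithm (in particular, there is no fixed-point loop over a set of constraints):

```python
from itertools import product

# A sketch of hyper-arc consistency for one constraint: a value survives
# only if some combination of values for the other variables supports it.
def revise(constraint, scope, domains):
    """Prune unsupported values in place; return True if a domain changed."""
    changed = False
    for var in scope:
        others = [v for v in scope if v != var]
        supported = set()
        for val in domains[var]:
            for combo in product(*(domains[o] for o in others)):
                assignment = dict(zip(others, combo))
                assignment[var] = val
                if constraint(assignment):   # a supporting combination exists
                    supported.add(val)
                    break
        if supported != domains[var]:        # unsupported values are pruned
            domains[var] = supported
            changed = True
    return changed

# Example: x + y = 10 with small domains.
doms = {"x": {3, 5, 7, 9}, "y": {1, 2, 3}}
revise(lambda a: a["x"] + a["y"] == 10, ["x", "y"], doms)
print(doms)  # x is pruned to {7, 9} and y to {1, 3}
```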

One of the major drawbacks of (practical) consistency notions is that they are local in nature, that is, they just look at the current situation (partial labeling and current domains). This means that it would be impossible to detect future inconsistencies due to the interaction of variables. A basic technique, called forward checking, can be used to mitigate this problem. This method exploits a one-step look-ahead with respect to the current assignment, i.e., it simulates the assignment of a pair of variables, instead of a single one, thus evaluating the next level of the tree through a consistency notion. This technique can be generalized to look further ahead in the tree.

Algorithm 62.1 Backtracking (U, L, C, Dom)

 1: if U = ∅ then
 2:  return L
 3: end if
 4: select a variable x_i ∈ U
 5: for v ∈ D_i do /*try to label x_i with value v*/
 6:   Dom′ ← Dom
 7:   if Consistent(L ∪ {x_i := v}, C, Dom′) then
 8:     r ← Backtracking(U ∖ {x_i}, L ∪ {x_i := v}, C, Dom′)
 9:     if r ≠ fail then
10:      return r /*a consistent assignment has been found for the variables in U ∖ {x_i} with respect to x_i := v*/
11:     end if
12:   end if
13: end for
14: return fail /*backtrack to the previous variable (no consistent assignment has been found for x_i)*/

Algorithm 62.2 Consistent (L, C, Dom)

1: for C ∈ C do
2:  if all variables in C are labeled in L and C is not satisfied by L then
3:   return fail
4:  end if
5: end for
6: return true

Algorithm 62.3 BranchAndBound (U, L, C, Dom, f, b, L_b)

 1: if U = ∅ then
 2:  if f(L) < b then
 3:    b ← f(L); L_b ← L
 4:  end if
 5: else
 6:  select a variable x_i ∈ U
 7:  for v ∈ D_i do /*try to label x_i with value v*/
 8:    Dom′ ← Dom
 9:    BranchAndBound(U ∖ {x_i}, L ∪ {x_i := v}, C, Dom′, f, b, L_b)
10:  end for
11: end if

2.2.3 Branch and Bound

In the case of a COP, the problem is solved by exploring the set sol(O) in the way described above, storing the best value of f found, as sketched in Algorithm 62.3. However, a constraint analysis (bound(f, L ∪ {x := v}, Dom′)) based on the partial assignment and on the best value already computed might allow the search tree to be pruned considerably. This complete search technique is called (with a slight ambiguity with respect to the same concept in operations research) branch and bound.
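A minimal Python sketch of this scheme, with an invented objective and an optimistic bounding function, may clarify how the bound prunes subtrees:

```python
# A sketch of CP-style branch and bound (cf. Algorithm 62.3): subtrees
# whose optimistic bound cannot beat the incumbent are pruned.
def branch_and_bound(unlabeled, labeling, domains, f, bound, best):
    if not unlabeled:
        cost = f(labeling)
        if cost < best[0]:
            best[0], best[1] = cost, dict(labeling)   # new incumbent
        return
    x = unlabeled[0]
    for v in sorted(domains[x]):
        labeling[x] = v
        # descend only if the bound may improve on the incumbent
        if bound(labeling, unlabeled[1:], domains) < best[0]:
            branch_and_bound(unlabeled[1:], labeling, domains, f, bound, best)
        del labeling[x]

# Toy COP: minimize the sum of three pairwise-distinct values.
doms = {x: {1, 2, 3, 4} for x in "abc"}

def f(lab):
    vals = list(lab.values())
    return sum(vals) if len(set(vals)) == 3 else float("inf")

def bound(lab, rest, domains):
    # optimistic completion: each free variable takes its domain minimum
    return sum(lab.values()) + sum(min(domains[x]) for x in rest)

best = [float("inf"), None]
branch_and_bound(list("abc"), {}, doms, f, bound, best)
print(best)  # [6, {'a': 1, 'b': 2, 'c': 3}]
```

In an actual CP solver, the bound is computed by propagating the constraint f < b over the current domains rather than by an ad hoc function as here.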

2.3 Systems

A number of practical CP systems are available. They mostly differ with regard to the targeted programming language and the modeling features available. For historical reasons, the first constraint programming systems were built around a Prolog system. For example, SICStus Prolog [27] was one of the first logic programming systems supporting constraint programming, and it is still developed and released under a commercial license. Another Prolog-based system specifically intended for constraint programming is ECLiPSe [28], which, differently from SICStus Prolog, is open source. Thanks to their longevity, both systems cover many of the modeling features described in the previous sections (such as different types of domains, rich sets of constraints, etc.).

Another notable commercial system specifically designed for constraint programming is the ILOG CP optimizer, now developed by IBM [29]. This system offers modeling capabilities either by means of a dedicated modeling language (called OPL [30]) or by means of a callable library accessible from different imperative programming languages such as C/C++, Java, and C#. The modeling capabilities of the system are mostly targeted at scheduling problems, featuring a very rich set of constructs for this kind of problem. Interestingly, this system is currently available at no cost for researchers through the IBM Academic Initiative.

Open source alternatives that can be interfaced with the most common programming languages are the C++ libraries of Gecode [31], and the Java libraries of Choco [32]. Both systems are well documented and constantly developed.

A different approach has been taken by other authors, who have developed a number of modeling languages for constraint satisfaction and optimization problems that can be interfaced to different types of general-purpose CP solvers. A notable example is MiniZinc [33], which is an expressive modeling language for CP. MiniZinc models are translated into a lower-level language, called FlatZinc, which can be compiled and executed, for example, by Gecode, ECLiPSe, or SICStus Prolog.

Finally, a mixed approach has been taken by the developers of Comet [34]. Comet is a hybrid CP system featuring a specific programming/modeling language and a dedicated solver. The system has been designed with hybridization in mind and, among other features, it natively supports the integration of metaheuristics (especially in the family of local search methods) with CP.

3 Integration of Metaheuristics and CP

Differently from Wallace [21], we will review the integration of CP with metaheuristics from the perspective of metaheuristics, and we classify the approaches on the basis of the type of metaheuristic employed. Moreover, following the categorization of Puchinger and Raidl [18], we are mostly interested in reviewing the integrative combinations of metaheuristics and constraint programming, i.e., those in which constraint programming is embedded as a component of a metaheuristic to solve a subproblem, or vice versa.

Indeed, the types of collaborative combinations are either straightforward (e.g., collaborative-sequential approaches using CP as a constructive algorithm for finding a feasible initial solution of a problem) or largely uninvestigated (e.g., parallel or intertwined hybrids of metaheuristics and CP).

3.1 Local Search and CP

Local search methods [4] are based on an iterative scheme in which the search moves from the current solution to an adjacent one on the basis of the exploration of a neighborhood, obtained by perturbing the current solution.

The hybridization of constraint programming with local search metaheuristics is the most studied, and there is an extensive literature on the subject.

3.1.1 CP Within Local Search

The integration of CP within local search methods is the most mature form of integration. It dates back to the mid 1990s [35], and two main streams are identifiable in this respect. The first one consists in defining the search for the candidate neighbor (e.g., the best one) as a constrained optimization problem. The neighborhoods induced by these definitions can be quite large; therefore, a variant of this technique is known by the name of large neighborhood search (LNS) [36]. The other kind of integration, lately named constraint-based local search (CBLS) [34], is based on the idea of expressing local search algorithms by exploiting constraint programming primitives in their control (e.g., for constraint checks during the exploration of the neighborhood) [37]. In fact, the two streams have a non-empty intersection, since the CP primitives employed in CBLS could be used to explore the neighborhood in an LNS fashion. In the following sections we review some of the work in these two streams.

A few surveys on the specific integration between local search and constraint programming exist, for example [38, 39].

3.1.1.1 Large Neighborhood Search

In LNS [36, 40] an existing solution is not modified just by applying small perturbations; rather, a large part of the problem is released and searched for improving solutions in a sort of re-optimization approach. This part can be represented by a set F ⊆ X of released variables, called a fragment, which determines the neighborhood relation N. Precisely, given a solution s̄ = ⟨d_1, …, d_k⟩ and a set F ⊆ {x_1, …, x_k} of free variables, then

N(s̄, F) = {⟨e_1, …, e_k⟩ ∈ sol(O) : (x_i ∉ F) ⟹ (e_i = d_i)}.

Given F, the neighborhood exploration is performed through CP methods (i.e., propagation and tree search).

The pseudocode of the general LNS procedure is shown in Algorithm 62.4. Notice that the procedure has a few hotspots that can be customized. Namely, one of the key issues of this technique consists in the criterion for the selection of the set F given the current solution s̄, which is denoted by SelectFragment(s̄) in the algorithm. The most straightforward way is to randomly release a percentage of the problem variables. However, the variables in F could also be chosen in a structured way, i.e., by releasing related variables simultaneously. In [41], the authors compare the effectiveness of these two alternative choices in the solution of a job-shop scheduling problem.

Also the upper bound employed for the branch and bound procedure is subject to a few design alternatives. One possibility, for example, is to set the bound value to f(s̄_b), the best solution value found so far, so that the procedure is forced to search at each step only for improving solutions. This alternative can enhance the technique when the propagation on the cost function is particularly effective in pruning the domains of the released variables. At the opposite extreme, the upper bound could instead be set to an infinite value, so that a solution is searched for regardless of whether it improves on the current incumbent.

Moreover, another design point is the solution acceptance criterion, which is implemented by the AcceptSolution function. In general, all the classical local search solution acceptance criteria are applicable, obviously in dependence on the neighborhood selection criterion employed. For example, in the case of randomly released variables, a Metropolis acceptance criterion could be adequate to implement a sort of simulated annealing.

Finally, the TerminateSearch criterion is one of those usually adopted in non-systematic search methods, such as the expiration of a time/iteration budget, either absolute or relative, or the discovery of an optimal solution.

Algorithm 62.4 LargeNeighborhoodSearch (X, C, Dom, f)

 1:  s̄_b ← s̄_0
 2:  i ← 0
 3: while not TerminateSearch(i, s̄_i, f(s̄_i), s̄_b) do
 4:   F ← SelectFragment(s̄_i)
 5:   L ← {x_j := d_j^i : x_j ∉ F}
 6:   U ← F
 7:   BranchAndBound(U, L, C, Dom, f, f(s̄_b), L) /*neighborhood exploration*/
 8:  if AcceptSolution(L) then
 9:    s̄_{i+1} ← L
10:   if f(s̄_{i+1}) < f(s̄_b) then
11:     s̄_b ← s̄_{i+1}
12:   end if
13:  else
14:    s̄_{i+1} ← s̄_i
15:  end if
16:   i ← i + 1
17: end while
18: return s̄_b
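The overall loop can be sketched in Python; the random fragment selection, the toy cost function, and the exhaustive sub-search (standing in for CP-based branch and bound) are all simplifications for illustration:

```python
import random
from itertools import product

random.seed(0)

def f(sol):
    # toy cost: number of equal adjacent values (to be minimized)
    return sum(sol[i] == sol[i + 1] for i in range(len(sol) - 1))

def lns(n, domain, fragment_size, iterations):
    """A minimal LNS loop: release a random fragment, re-optimize it."""
    current = [random.choice(domain) for _ in range(n)]
    best = list(current)
    for _ in range(iterations):
        fragment = random.sample(range(n), fragment_size)  # SelectFragment
        candidate = list(current)
        # exhaustive re-optimization of the released variables
        # (a CP solver would use propagation and branch and bound here)
        best_sub = None
        for values in product(domain, repeat=fragment_size):
            for pos, v in zip(fragment, values):
                candidate[pos] = v
            if best_sub is None or f(candidate) < f(best_sub):
                best_sub = list(candidate)
        if f(best_sub) <= f(current):                      # AcceptSolution
            current = best_sub
        if f(current) < f(best):
            best = list(current)
    return best

solution = lns(n=8, domain=[0, 1, 2], fragment_size=3, iterations=20)
print(solution, f(solution))
```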

LNS has been successfully applied to routing problems [36, 42, 43, 44, 45], nurse rostering [46], university course timetabling [47], protein structure prediction [48, 49], and car sequencing [50].

Cipriano et al. propose GELATO, a modeling language and a hybrid solver specifically designed for LNS [51, 52, 53]. The system has been tested on a set of benchmark problems, such as the asymmetric traveling salesman problem, minimum energy broadcast, and university course timetabling.

The developments of the LNS technique in the wider perspective of very large neighborhood search (VLNS) were recently reviewed by Pisinger and Ropke [54]. Carchrae and Beck [55] also propose a methodological contribution to this area with some design principles for LNS.

3.1.1.2 Constraint-Based Local Search

The idea of encoding a local search algorithm by means of constraint programming primitives was originally due to Pesant and Gendreau [35, 56], although in their papers they focus on a framework that allows neighborhoods to be expressed by means of CP primitives. The basic idea is to extend the original CP model of the problem with a sort of surrogate model comprising a set of variables and constraints that intentionally describe a neighborhood of the current solution.

A pseudocode of CBLS defined along these lines is reported in Algorithm 62.5. The core of the procedure is at line 4, which determines the neighborhood model on the basis of the current solution. The main components of the neighborhood model are the new set of variables Y and the constraints C_{X,Y} that act as an interface between the neighborhood variables Y and those of the original problem X. For example, the classical swap neighborhood, which perturbs two variables of the problem by exchanging their values, can be modeled by the set Y = {y_1, y_2}, consisting of the variables to exchange, and with the interface constraints

(y_1 = i ∧ y_2 = j) ⟹ (x_i = s_j ∧ x_j = s_i),  i, j ∈ {1, …, n}.

Moreover, an additional component of the neighborhood model is the evaluator of the move impact Δf, which can usually be computed incrementally on the basis of the single move.
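The incremental evaluation idea behind Δf can be illustrated with a plain-Python sketch for a swap move on a toy cost function (both invented for the example): only the cost terms touching the two swapped positions are recomputed, instead of the whole objective.

```python
def full_cost(s):
    # toy objective: number of equal adjacent pairs
    return sum(s[i] == s[i + 1] for i in range(len(s) - 1))

def local_terms(s, i):
    """Cost terms in which position i participates."""
    t = 0
    if i > 0:
        t += s[i - 1] == s[i]
    if i < len(s) - 1:
        t += s[i] == s[i + 1]
    return t

def swap_delta(s, i, j):
    """Incremental Δf of swapping positions i and j (assumed non-adjacent)."""
    before = local_terms(s, i) + local_terms(s, j)
    s[i], s[j] = s[j], s[i]        # tentative swap
    after = local_terms(s, i) + local_terms(s, j)
    s[i], s[j] = s[j], s[i]        # undo the tentative swap
    return after - before

s = [1, 1, 2, 3, 3, 2]
delta = swap_delta(s, 1, 4)
t = list(s)
t[1], t[4] = t[4], t[1]
print(delta, full_cost(t) - full_cost(s))  # both are -2
```

Systems such as Comet maintain this kind of incremental bookkeeping automatically through differentiable invariants, rather than by hand-written delta functions as above.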

It is worth noticing that the use of different modeling viewpoints is common practice in constraint programming. In classical CP modeling, the different viewpoints usually offer a convenient way to express some constraints in a more concise or more efficient manner. The consistency between the viewpoints is maintained through the use of channeling constraints that link the different models. Similarly, although with a different purpose, in CBLS the linking between the full problem model and the neighborhood model is achieved through interface constraints.

Algorithm 62.5 ConstraintBasedLocalSearch (X, C_X, Dom_X, f)

 1:  s̄_b ← s̄_0
 2:  i ← 0
 3: while not TerminateSearch(i, s̄_i, f(s̄_i), s̄_b) do
 4:   ⟨Y, C_{X,Y}, Dom_Y, Δf⟩ ← NeighborhoodModel(s̄_i)
 5:   L ← ∅
 6:   U ← Y
 7:   BranchAndBound(U, L, C_{X,Y}, Dom_Y, Δf) /*neighborhood exploration*/
 8:  if AcceptSolution(L) then
 9:    s̄_{i+1} ← Apply(L, s̄_i)
10:   if f(s̄_{i+1}) < f(s̄_b) then
11:     s̄_b ← s̄_{i+1}
12:   end if
13:  else
14:    s̄_{i+1} ← s̄_i
15:  end if
16:   i ← i + 1
17: end while
18: return s̄_b

This stream of research has been revamped thanks to the design of the Comet language [34, 57], the aim of which is specifically to support declarative components inspired by CP primitives for expressing local search algorithms. An example of such primitives is differentiable invariants [58], which are declarative data structures that support incremental differentiation to effectively evaluate the effect of local moves (i.e., the Δf in Algorithm 62.5). Moreover, Comet supports control abstractions [59, 60] specifically designed for local search, such as the neighbors construct, which aims at expressing the unions of heterogeneous neighborhoods. Finally, Comet has also been extended to support distributed computing [61].

The embedding of local search within a constraint programming environment and the employment of a common programming language make it possible to automate the synthesis of CBLS algorithms from a high-level model expressed in Comet [62, 63]. The synthesizer analyzes the combinatorial structure of the problem, expressed through the variables and the constraints, and combines a set of basic recommendations, which are the basic constituents of the synthesized algorithm.

3.1.1.3 Other Integrations

The idea of exploring with local search a space of incomplete solutions (i.e., those where not all variables have been assigned a value) while exploiting constraint propagation has been pursued, among others, by Jussien and Lhomme [64] for an open-shop scheduling problem. Constraint propagation in the spirit of forward checking and, more generally, look-ahead has been effectively employed by Schaerf [65] and Prestwich [66] for scheduling and graph coloring problems, respectively.

3.1.2 Local Search Within CP

Moving to the integration of local search within constraint programming, the most common use of local search-like techniques consists in limiting the tree search to paths that are ‘‘close’’ to a reference one. An example of such a procedure is limited discrepancy search (GlossaryTerm

LDS

) [67], an incomplete tree search method in which only paths neighboring a reference path are explored, proximity being measured by the number of decision points, called discrepancies, at which the paths differ. Only the paths (i. e., complete solutions) with at most k discrepancies are considered, as outlined in Algorithm 62.6.

Algorithm 62.6 LimitedDiscrepancySearch (X, C, Dom, f, k)

 1: s̄* ← FirstSolution(X, C, Dom, f)
 2: s̄_b ← s̄*
 3: for i ∈ {1, …, k} do
 4:   for all complete assignments t̄ with i discrepancies from s̄* do
 5:     if Consistent(t̄, Dom) ∧ f(t̄) < f(s̄_b) then
 6:       s̄_b ← t̄
 7:     end if
 8:   end for
 9: end for
10: return s̄_b
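A minimal Python sketch of the LDS scheme follows; it assumes that, at each decision point, the heuristically preferred alternative is listed first and that every other alternative counts as one discrepancy.

```python
def lds(choices, consistent, f, k):
    """Limited discrepancy search sketch: enumerate complete assignments
    that deviate from the heuristically preferred branch (index 0 of each
    choices[d]) in at most k decision points, keeping the best consistent
    one under objective f."""
    best = None

    def descend(depth, partial, discrepancies):
        nonlocal best
        if depth == len(choices):
            if consistent(partial) and (best is None or f(partial) < f(best)):
                best = list(partial)
            return
        for idx, value in enumerate(choices[depth]):
            d = discrepancies + (1 if idx > 0 else 0)
            if d > k:
                break  # all remaining alternatives also cost a discrepancy
            partial.append(value)
            descend(depth + 1, partial, d)
            partial.pop()

    descend(0, [], 0)
    return best
```

For instance, with three binary decisions and k = 1, only assignments containing at most one non-preferred choice are ever visited.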

Another approach due to Prestwich [68] is called incomplete dynamic backtracking. Differently from GlossaryTerm

LDS

, in this approach proximity is defined among partial solutions, and when backtracking needs to take place it is performed by randomly unassigning (at most) b variables. In this way, the method can be regarded as a local search over partial solutions. In addition, the method features other GlossaryTerm

CP

machinery, such as forward checking, which helps in boosting the search.

An alternative possibility is to employ local search within constraint propagation. Local probing [69, 70] is based on a partition of the constraints into a set of easy ones and a set of hard ones. At each choice point in the search tree, the easy constraints are dealt with by a local search metaheuristic (namely, simulated annealing), while the hard constraints are handled by classical constraint propagation. This idea generalizes the approach of Zhang and Zhang [71], who first presented such a combination. A similar approach was taken by Sellmann and Harvey [72], who used local search to propagate redundant constraints.
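The division of labor in local probing can be sketched as follows: simulated annealing handles the easy constraints, expressed here as a penalty function, while the hard constraints are only checked on the resulting probe (a real solver would propagate them at every choice point). All names and parameter values are illustrative.

```python
import math
import random

def local_probe(variables, domains, easy_penalty, hard_ok,
                steps=2000, t0=2.0):
    """Local-probing sketch: simulated annealing minimizes the penalty of
    the 'easy' constraints to produce a tentative full assignment (a
    probe); the 'hard' constraints are then checked separately (in a real
    solver, by constraint propagation)."""
    assign = {v: random.choice(domains[v]) for v in variables}
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9      # linear cooling schedule
        v = random.choice(variables)
        new = random.choice(domains[v])
        delta = easy_penalty({**assign, v: new}) - easy_penalty(assign)
        # Metropolis acceptance: always take improving moves, sometimes
        # take worsening ones depending on the temperature
        if delta <= 0 or random.random() < math.exp(-delta / t):
            assign[v] = new
    return assign if hard_ok(assign) else None
```

Recomputing `easy_penalty` from scratch on every move is deliberate simplification; an incremental evaluation (in the spirit of differentiable invariants) would be used in practice.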

In [73] the authors discuss the incorporation of the tabu search machinery within GlossaryTerm

CP

tree search. In particular, they look at memory mechanisms for limiting the size of the tree and at an elite candidate list that keeps the most promising choices so that they are evaluated first.

3.2 Genetic Algorithms and CP

A genetic algorithm [5] is an iterative metaheuristic in which a population of strings, representing candidate solutions, evolves toward better solutions in a process that mimics natural evolution. The main components of the evolution process are the crossover operator, which combines two parent solutions to generate an offspring, and the mutation operator, which perturbs a given solution. Another important component is the offspring selection strategy, which determines the population at the next iteration of the process.
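These three components (crossover, mutation, and selection) can be seen at work in a toy genetic algorithm for the one-max problem (maximize the number of ones in a bit string); the parameter values below are arbitrary illustrative choices.

```python
import random

def genetic_algorithm(n_bits=20, pop_size=30, generations=60, p_m=0.05):
    """Toy genetic algorithm on bit strings (one-max), showing the three
    components: crossover, mutation, and selection (binary tournament with
    generational replacement)."""
    f = sum  # fitness of a bit string = number of ones
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = random.sample(pop, 2)
            return a if f(a) >= f(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            # bit-flip mutation with per-bit probability p_m
            child = [1 - g if random.random() < p_m else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=f)
```

The hybrids discussed below replace or complement one of these components (typically decoding or offspring improvement) with a CP solver.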

To the best of our knowledge, one of the first attempts to integrate constraint programming and genetic algorithms is due to Barnier and Brisset [74]. They employ the following genetic representation: given a GlossaryTerm

CSP

with variables {X_1, …, X_k}, the i-th gene of a chromosome corresponds to the variable X_i and stores a subset of the domain D_i that is allowed to be searched. Each chromosome is then decoded by GlossaryTerm

CP

, which searches for the best solution of the sub-GlossaryTerm

CSP

induced by the restrictions in the domains. The genetic operators used are a mutation operator that changes values on the subdomain of randomly chosen genes and a crossover operator that is based on a recombination of the set-union of the subdomains of each pair of genes. The method was applied to a vehicle routing problem and outperformed both a GlossaryTerm

CP

and a GlossaryTerm

GA

solver.
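The genetic operators just described can be sketched in Python; the 0.5 inclusion probability and the handling of empty genes below are our own illustrative choices, not details taken from [74].

```python
import random

def subdomain_crossover(chrom1, chrom2):
    """Crossover in the spirit of Barnier and Brisset: each gene is a
    subset of a variable's domain, and the offspring's gene is a random
    non-empty subset of the set-union of the parents' subdomains."""
    child = []
    for g1, g2 in zip(chrom1, chrom2):
        union = g1 | g2
        picked = {v for v in union if random.random() < 0.5}
        # keep genes non-empty so the induced sub-CSP stays meaningful
        child.append(picked or {random.choice(sorted(union))})
    return child

def mutate(chrom, domains, p_m=0.2):
    """Mutation: replace the subdomain of randomly chosen genes with a
    fresh random non-empty subset of the full domain."""
    out = []
    for gene, dom in zip(chrom, domains):
        if random.random() < p_m:
            gene = ({v for v in dom if random.random() < 0.5}
                    or {random.choice(sorted(dom))})
        out.append(gene)
    return out
```

In the full method, each chromosome would then be handed to a CP solver, which searches the sub-CSP obtained by restricting every variable to its gene's subdomain.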

A different approach, somewhat similar to local probing, was used in [75] for tackling a production scheduling problem. In this case, the problem variables are split into two sets, defining two coupled subproblems. The first set of variables is dealt with by the genetic algorithm, which determines a partial schedule. This partial solution is then passed to GlossaryTerm

CP

for completing (and optimizing) the assignment of the remaining variables.

Finally, GlossaryTerm

CP

has been used as a post-processing phase for optimizing the current population in the spirit of memetic algorithms. In [76] GlossaryTerm

CP

actually acts as an unfeasibility repairing method for a university course timetabling problem, whereas in [77] the optimization performed by GlossaryTerm

CP

on a flow-shop scheduling problem is an alternative to the classical local search applied in memetic algorithms. This approach is illustrated in Algorithm 62.7.

Algorithm 62.7 A Memetic Algorithm with CP for Flow-Shop Scheduling (adapted from [77])

 1: generate an initial population P = {p_1, …, p_l} of permutations of n jobs (each composed of k tasks τ_ij, whose start and end times are denoted by σ_ij and η_ij, respectively)
 2: g ← 0
 3: while not TerminateSearch(g, P, min_{p ∈ P} f(p)) do
 4:   select p_1 and p_2 from P by binary tournament
 5:   c ← p_1 ⊗ p_2 /* apply crossover */
 6:   if f(c) ≥ min_{p ∈ P} f(p) then
 7:     mutate c with probability p_m
 8:   end if
 9:   decode c = ⟨c_1, …, c_n⟩ into the set of precedence constraints C = {η_{k c_j} ≤ σ_{1 c_{j+1}} : j = 1, …, n − 1}
10:   L ← ∅
11:   U ← {σ_ij, η_ij : i = 1, …, k, j = 1, …, n}
12:   BranchAndBound(U, L, C ∪ {η_ij ≤ σ_{(i+1) j} : i = 1, …, k − 1, j = 1, …, n}, Dom, f)
13:   if f(c) ≥ max_{p ∈ P} f(p) then
14:     discard c
15:   else
16:     select r from P by reverse binary tournament
17:     c replaces r in P
18:   end if
19:   g ← g + 1
20: end while
21: return arg min_{p ∈ P} f(p)
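As a small illustration of the decoding step (line 9 of Algorithm 62.7), the following function turns a job permutation into symbolic precedence constraints; representing the scheduling variables as tuples is a hypothetical encoding choice made here for clarity.

```python
def decode_to_precedences(perm, k):
    """Decode a job permutation into flow-shop precedence constraints:
    the last task (machine k) of job perm[j] must end before the first
    task (machine 1) of job perm[j+1] starts. Each constraint is returned
    symbolically as a pair (end_variable, start_variable) meaning
    end_variable <= start_variable."""
    return [(('eta', k, perm[j]), ('sigma', 1, perm[j + 1]))
            for j in range(len(perm) - 1)]
```

These symbolic pairs are exactly the set C that, together with the within-job precedences, is handed to the branch-and-bound step.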

3.3 ACO and CP

Ant colony optimization [6] is an iterative constructive metaheuristic, inspired by ant foraging behavior. The GlossaryTerm

ACO

construction process is driven by a probabilistic model, based on pheromone trails, which are dynamically adjusted by a learning mechanism.
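The probabilistic model and its learning mechanism can be sketched as follows; the rule shown is the standard random-proportional selection with evaporation, with α, β, and ρ playing their usual roles as ACO parameters.

```python
import random

def pheromone_choice(candidates, tau, eta, alpha=1.0, beta=2.0):
    """Standard ACO probabilistic selection rule: candidate v is chosen
    with probability proportional to tau[v]^alpha * eta[v]^beta, where
    tau is the pheromone trail and eta a heuristic desirability."""
    weights = [tau[v] ** alpha * eta[v] ** beta for v in candidates]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for v, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return v
    return candidates[-1]  # guard against floating-point round-off

def evaporate_and_deposit(tau, solutions, f, rho=0.1):
    """Pheromone update: evaporate all trails, then deposit an amount
    inversely related to the cost of each solution on its components."""
    for v in tau:
        tau[v] *= (1 - rho)
    for sol in solutions:
        for v in sol:
            tau[v] += 1.0 / (1.0 + f(sol))
```

Repeated construction and update make the trails concentrate on components that appear in good solutions, which is the learning mechanism exploited by the hybrids below.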

The first attempt to integrate GlossaryTerm

ACO

and GlossaryTerm

CP

is due to Meyer and Ernst [78], who apply the method for solving a job-shop scheduling problem. The proposed procedure employs GlossaryTerm

ACO

to learn the variable and value ordering used by GlossaryTerm

CP

for branching in the tree search. The solutions found by the GlossaryTerm

CP

procedure are fed back to the GlossaryTerm

ACO

in order to update its probabilistic model. In this approach, GlossaryTerm

ACO

can be conceived as a master online-learning branching heuristic aimed at enhancing the performance of a slave GlossaryTerm

CP

solver.

A slightly different approach was taken by Khichane et al. [79, 80]. Their hybrid algorithm works in two phases. In the first phase, GlossaryTerm

CP

is employed to sample the space of feasible solutions, and the information collected is processed by the GlossaryTerm

ACO

procedure for updating the pheromone trails according to the solutions found by GlossaryTerm

CP

. In the second phase, the learned pheromone information is employed as the value ordering used for GlossaryTerm

CP

branching. This approach, differently from the previous one, uses the learning capabilities of GlossaryTerm

ACO

in an offline-learning fashion.

More standard approaches in which GlossaryTerm

CP

is used to keep track of the feasibility of the solution constructed by GlossaryTerm

ACO

and to reduce the domains through constraint propagation have been used by a number of authors, for problems such as job-shop scheduling [78] and car sequencing [79, 81]. The general idea is outlined in Algorithm 62.8, where each ant maintains a partial assignment of values to variables. The choice of extending the partial assignment with a new variable/value pair is driven by the pheromone trails and the heuristic factors in lines 7–8 through a standard probabilistic selection rule. Propagation is employed at line 10 to prune the possible values for the variables not yet included in the current partial assignment.

Another work along these lines is due to Benedettini et al. [82], who integrate a constraint propagation phase for Boolean constraints to boost a GlossaryTerm

ACO

approach for a bioinformatics problem (namely, haplotype inference). Finally, in the same spirit, Crawford et al. [83, 84] employ a look-ahead technique within GlossaryTerm

ACO

and apply the method to solve set covering and set partitioning problems.

Algorithm 62.8 Ant Constraint Programming (adapted from [79])

 1: initialize all pheromone trails to τ_max
 2: g ← 0
 3: repeat
 4:   for k ∈ {1, …, n} do
 5:     A_k ← ∅
 6:     repeat
 7:       select a variable x_j ∈ X such that x_j ∉ var(A_k) according to the pheromone trail τ_j
 8:       select a value v ∈ Dom(x_j) according to the pheromone trails and the heuristic factors
 9:       add {x_j := v} to A_k
10:       Propagate(A_k, C)
11:     until var(A_k) = X or Failure
12:   end for
13:   update pheromone trails using {A_1, …, A_n}
14: until var(A_i) = X for some i ∈ {1, …, n} or TerminateSearch(g, A_i)
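A toy rendering of one ant's construction phase in Algorithm 62.8 follows; the `propagate` callback stands in for CP propagation and is assumed to return the reduced domains (or `None` on failure), and value selection is uniform here rather than pheromone-driven, for brevity.

```python
import random

def ant_construct(variables, domains, propagate, tau):
    """Sketch of one ant's construction: repeatedly pick an unassigned
    variable (biased by its pheromone trail tau), assign it a value from
    its current (propagated) domain, then prune the remaining domains via
    `propagate`, which returns reduced domains or None on failure."""
    assignment = {}
    doms = {v: set(domains[v]) for v in variables}
    while len(assignment) < len(variables):
        free = [v for v in variables if v not in assignment]
        weights = [tau.get(v, 1.0) for v in free]
        x = random.choices(free, weights=weights)[0]
        if not doms[x]:
            return None                  # dead end: empty domain
        assignment[x] = random.choice(sorted(doms[x]))
        doms = propagate(assignment, doms)
        if doms is None:
            return None                  # propagation detected failure
    return assignment
```

A failed construction corresponds to the Failure exit of the inner loop; in the full algorithm the surviving assignments A_1, …, A_n are then used to update the pheromone trails.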

4 Conclusions

In this chapter we have reviewed the basic concepts of constraint programming and its integration with metaheuristics. Our main contribution is the attempt to give a comprehensive overview of such integrations from the viewpoint of metaheuristics.

We believe that the reason why these integrations are so promising lies in the complementary merits of the two approaches. On the one hand, metaheuristics are, in general, better suited to optimization problems, but their treatment of constraints can be very awkward, especially in the case of tightly constrained problems. On the other hand, constraint programming is specifically designed for finding feasible solutions, but it is not particularly effective at handling optimization. Consequently, a hybrid algorithm that uses GlossaryTerm

CP

for finding feasible solutions and metaheuristics to search among them has a good chance of outperforming its individual components.

Despite the important steps made in this field during the last decade, there are still promising research opportunities, notably the collaborative hybridization of GlossaryTerm

CP

and metaheuristics, and the validation of existing integration approaches in the as yet uninvestigated area of multiobjective optimization. We believe that further research should devote more attention to these aspects.