
1 Introduction

Constraint-based scheduling models combine rich representational flexibility with compositional search procedures and heuristics, and this combination can provide significant leverage in addressing complex scheduling problems. These approaches start with a formulation of the problem of interest as a Constraint Satisfaction Problem (CSP), which involves specification of a set of decision variables together with a set of constraints on their mutual values. Much like the MILP formulations discussed in Chap. 2 of this handbook, there are different possibilities. One basic approach is to formulate the problem as one of finding a consistent assignment of start times for all constituent activities to be scheduled (analogous to the time-indexed formulations of Chap. 2 of this handbook). This fixed times assignment formulation, however, has a couple of drawbacks. From a constraint reasoning perspective, it promotes over-commitment (i.e., the problem constraints likely do not require commitment to specific start times to ensure feasibility), which results in a correspondingly larger solution space to search. From an operational perspective, a fixed-times solution offers no resilience against executional uncertainty, and is likely to quickly become invalid.

An alternative CSP formulation, which we adopt in this chapter, is to reduce the problem to one of establishing sufficient precedence constraints between activities competing for the same resources to eliminate all possible resource contention, and ensure both time and resource feasibility. This approach, referred to as Precedence Constraint Posting (PCP), addresses the over-commitment issue raised above by instead producing a so-called “flexible schedule” where activity start times are constrained to occur within an interval that is consistent with the problem constraints. The original formulation (Smith and Cheng 1993) was shown to achieve order-of-magnitude speedup in solving time over a corresponding fixed times assignment procedure in a basic job shop scheduling setting. More recently, this approach has been generalized to address cumulative resources, has been effectively applied to the resource constrained project scheduling problem (Cesta et al. 2002), and has been extended to generate Partial Order Schedules (POSs), activity networks with the property that any temporal solution to the graph is also resource-feasible (Policella et al. 2007). The ability to generate POSs provides a basis for efficiently responding to unexpected disruptions at execution time. It also provides a conceptual framework for optimizing solution robustness in the absence of knowledge about the uncertainties in the execution environment (similar in some ways to the robust scheduling techniques discussed in Chap. 40 in the second volume of this handbook, and in contrast to the stochastic models of Chap. 37 in the second volume of this handbook).

In this chapter we summarize this PCP approach to Project Scheduling. We present a basic set of core solving algorithms for generating a solution in the form of an activity network N, a directed graph N = (V, E), where the edges in E are simple precedence constraints imposed on the set of problem activities V. Via a polynomial time calculation, such a network N can always be turned into a Partial Order Schedule (POS). We also discuss incorporation of these core algorithms into extended optimizing search procedures targeted at two different objectives: improving the robustness of a solution (Policella et al. 2009) and minimizing the solution makespan (e.g., Oddi et al. 2010a).

The chapter is organized as follows: Sect. 6.2 reviews the state-of-the-art in Constraint-based Scheduling. After the introduction of the reference scheduling problem (RCPSP/max, Sect. 6.3) and its CSP representation (Sect. 6.4), the chapter then continues by describing the basic concepts behind PCP (Sect. 6.5). A core PCP framework is discussed in Sect. 6.6. The last part of the chapter then summarizes the results along two different directions of work: the first concerning the generation of robust schedules (Sect. 6.7), the second concerning the generation of optimal solutions (Sect. 6.8). The final conclusions are discussed in Sect. 6.9.

2 Constraint-Based Scheduling

Constraint Programming (see Rossi et al. 2006) is an approach to solving combinatorial search problems based on the Constraint Satisfaction Problem (CSP) paradigm (Montanari 1974; Kumar 1992; Tsang 1993). Constraints are just relations and a CSP states which relations should hold among the given problem decision variables. This framework is based on the combination of search techniques and constraint propagation. Constraint propagation consists of using constraints actively to prune the search space. Different propagation algorithms have been defined for different kinds of constraints. Their aim is to reduce the domains of variables involved in the constraints by removing the values that cannot be part of any feasible solution. The filtering algorithm is invoked any time a domain of some variable is changed (either as a result of search decisions or during constraint propagation) to propagate the consequences of the change over all decision variables. As described by Dechter and Rossi (2002), “in general, constraint satisfaction tasks, like finding one or all solutions or the best solution, are computationally intractable, \(\mathcal{N}\!\mathcal{P}\)-hard”. For this reason the constraint propagation process cannot be complete, that is, some infeasible values may still remain in the domains of the variables and thus decisions are necessary to find a complete feasible valuation of the variables. In general, constraint propagation is aimed at achieving local consistency among subsets of variables.

Constraint satisfaction and propagation rules have been successfully used to model, reason about, and solve many classes of problems in such diverse areas as scheduling, temporal reasoning, resource allocation, network optimization and graphical interfaces. In particular, CSP approaches have proven to be an effective way to model and solve complex scheduling problems (see, for instance, Fox 1990; Sadeh 1991; Smith 1994; Beck et al. 1998; Baptiste et al. 2001; Cesta et al. 2002). The use of variables and constraints provides representational flexibility and reasoning power. For example, variables can represent the start and the end times of an activity, and these variables can be constrained in arbitrary ways.

In the remainder of this section we first provide a formal definition of the Constraint Satisfaction Problem, next a brief description of a generic solver, and finally an overview of the different ways in which this paradigm has been used to solve scheduling problems.

2.1 Constraint Satisfaction Problem

A Constraint Satisfaction Problem, CSP, consists of a finite set of decision variables, each associated with a domain of values, and a set of constraints that define the relation between the values that the variables can assume. Therefore, a CSP is defined by the tuple \(\left \langle V,D,\mathfrak{C}\right \rangle\) where:

  • \(V =\{ v_{1},v_{2},\ldots,v_{n}\}\) is a set of n variables;

  • \(D =\{ D_{1},D_{2},\ldots,D_{n}\}\) is the set of corresponding domains of values, such that \(v_{i} \in D_{i}\), i = 1, …, n;

  • \(\mathfrak{C} =\{ c_{1},c_{2},\ldots,c_{\nu }\}\) is a set of ν constraints, \(c_{\mu }(v_{1},v_{2},\ldots,v_{n})\), that are predicates defined on the Cartesian product of the variable domains, \(D_{1} \times D_{2} \times \ldots \times D_{n}\).

A solution s is a value assignment to each variable \(v_{i}\) from its domain,

$$\displaystyle{(\lambda _{1},\lambda _{2},\ldots,\lambda _{n}) \in D_{1} \times D_{2} \times \ldots \times D_{n}}$$

such that the set of constraints is satisfied.

Constraint processing tasks include not only the satisfaction task, but also Constraint Optimization Problems (COP). In this case an objective function f(S) (or cost function) evaluates each feasible solution S. The goal is to find an optimal solution, that is, a feasible solution S which minimizes (or maximizes) the objective function f(S).

Finally, we observe that an instance of a CSP \(\left \langle V,D,\mathfrak{C}\right \rangle\) can be represented as a constraint graph, G = (V, E). For every variable \(v_{i} \in V\) there is a corresponding node in the graph. For every set of variables connected by a constraint \(c_{j} \in \mathfrak{C}\), there is a corresponding hyper-edge. In the particular case of binary constraints (each constraint involves at most two variables) the hyper-edges become simple edges. A well known example of a binary CSP (extensively used in this chapter) is the Simple Temporal Problem (STP) introduced by Dechter et al. (1991).

2.2 A Generic CSP Solver

A complete CSP solving procedure consists of the following three steps (Algorithm 6.1): (a) the current problem P is checked for consistency [CheckConsistency(P)] by the application of a propagation procedure; if at least one constraint is violated, the algorithm exits with failure. If the problem P is also a solution [IsSolution(P)], then the algorithm exits and returns the generated solution S = P. Otherwise the problem is still not solved and the following two steps are executed; (b) a variable v i is selected by a variable ordering heuristic; (c) a value λ i is chosen by a value ordering heuristic and added to P. The solver is recursively called on the updated problem \(P \cup \{\lambda _{i}\}\).

Algorithm 6.1: CSP-Solver(P)

      if CheckConsistency(P) then

          if IsSolution(P) then

              S ← P

              return S

          else

              v i  ← SelectVariable(P)

              λ i  ← Choose-Value(P, v i )

              CSP-Solver(P ∪{λ i })

          end if

    else

        return ∅

    end if
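To make the control flow of Algorithm 6.1 concrete, the following Python sketch implements the same recursive scheme for a finite-domain CSP. The representation (a dictionary of domains and constraints expressed as predicates over partial assignments), the minimum-remaining-values variable ordering and the trivial value ordering are illustrative assumptions, and the consistency check is a simple violation test rather than a full propagation procedure.

# A minimal Python sketch of Algorithm 6.1 for finite-domain CSPs.
# Assumptions: domains is a dict {var: [values]}, constraints is a list of
# predicates over partial assignments; CheckConsistency is approximated by
# a violation test (no domain filtering), SelectVariable uses MRV.

def csp_solver(domains, constraints, assignment=None):
    assignment = dict(assignment or {})
    # CheckConsistency(P): fail as soon as some constraint is violated.
    if not all(c(assignment) for c in constraints):
        return None
    # IsSolution(P): every variable has a value.
    if len(assignment) == len(domains):
        return assignment
    # SelectVariable(P): most constrained first (smallest domain).
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    # Choose-Value(P, v): try each value, recursing on P ∪ {λ}.
    for value in domains[var]:
        result = csp_solver(domains, constraints, {**assignment, var: value})
        if result is not None:
            return result
    return None

# Toy example: two activities of duration 2 on a unary resource.
domains = {"S1": [0, 1, 2], "S2": [0, 1, 2]}
no_overlap = lambda a: ("S1" not in a or "S2" not in a
                        or a["S1"] + 2 <= a["S2"] or a["S2"] + 2 <= a["S1"])
print(csp_solver(domains, [no_overlap]))      # {'S1': 0, 'S2': 2}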

2.3 CSP Approaches to Scheduling Problems

Scheduling problems are difficult combinatorial optimization problems and represent an important application area for constraint-directed search. Different constraint programming approaches have been developed in this direction; for instance, the reader can refer to Baptiste et al. (2001) for a thorough analysis of different constraint-based techniques for scheduling problems. The work on Constraint-directed Scheduling of the 1980s (see for example Fox 1990; Sadeh 1991; Smith 1994) has developed into Constraint-based Scheduling approaches in the late 1990s (see Baptiste and Le Pape 1995; Nuijten and Aarts 1996; Beck et al. 1998; Smith and Pyle 2004; Hentenryck and Michel 2009). These approaches all focus on the use of constraints as a basis for representing and managing the search for a solution to the scheduling problem at hand. As mentioned above, the search for a solution to a CSP can be viewed as modifying the constraint graph G = (V, E) through the addition and removal of constraints, where the constraint graph is an evolving representation of the search state, and a solution is a state in which a single value remains in the domain of each variable and all constraints are satisfied.

As mentioned at the outset, research in constraint-based scheduling has pursued two basic formulations of the scheduling problem. One set of approaches (e.g., Sadeh 1991; Nuijten and Le Pape 1998; Smith and Pyle 2004) has formulated the problem as that of finding a consistent assignment of start times for each activity to be performed. Under this model, decision variables are time points that designate the start times of various activities and CSP search focuses on determining a consistent assignment of start time values. The “serial and parallel” methods discussed in Chap. 1 of this handbook similarly aim to find consistent sets of start times. A second set of approaches has focused on a problem formulation more akin to least-commitment frameworks. In this model, which is based on a disjunctive graph representation (Adams et al. 1988) and is referred to as Precedence Constraint Posting (Smith and Cheng 1993), solving consists of posting additional precedence constraints between pairs of activities contending for the same resources to ensure feasibility with respect to time and capacity constraints. Solutions generated in this way generally represent a set of feasible schedules (i.e., the sets of activity start times that remain consistent with posted sequencing constraints), as opposed to a single assignment of start times.

3 The Reference Scheduling Problem: RCPSP/max

We adopt the Resource-Constrained Project Scheduling Problem with minimum and maximum time lags, RCPSP/max,Footnote 1 as a reference problem (see Bartusch et al. 1988). The basic entities of interest in this problem are activities. The set of activities is denoted by V = {1, 2, …, n}, where each activity has a fixed processing time, or duration, p i and must be scheduled without preemption.

A schedule is an assignment of start times to each activity in V, i.e., a vector \(S = (S_{1},S_{2},\ldots,S_{n})\) where S i denotes the start time of activity i. The time at which activity i has been completely processed is called its completion time and is denoted by C i . Since we assume that processing times are deterministic and preemption is not permitted, completion times are determined by:

$$\displaystyle{ C_{i} = S_{i} + p_{i}\quad (i \in V ) }$$
(6.1)

Schedules are subject to two types of constraints, temporal constraints and resource constraints. In their most general form temporal constraints designate arbitrary minimum and maximum time lags between the start times of any two activities,

$$\displaystyle{ d_{\mathit{ij}}^{\mathit{min}} \leq S_{j} - S_{i} \leq d_{\mathit{ij}}^{\mathit{max}}\quad (i,j \in V ) }$$
(6.2)

where d ij min and d ij max are the minimum and maximum time lags of activity j relative to activity i. A schedule \(S = (S_{1},S_{2},\ldots,S_{n})\) is time feasible if all inequalities given by the activity precedences/time lags (6.2) and durations (6.1) hold for the start times S i .

During their processing, activities require specific resource units from a set \(\mathcal{R} =\{ 1,2,\ldots,K\}\) of resources. Resources are reusable (renewable), i.e., they are released when no longer required by an activity and are then available for use by another activity. Each activity i ∈ V requires r ik units of the resource \(k \in \mathcal{R}\) during its processing time p i . Each resource \(k \in \mathcal{R}\) has a limited capacity of R k units.

A schedule S is resource feasible if at each time t the demand for each resource \(k \in \mathcal{R}\) does not exceed its capacity R k , i.e.,

$$\displaystyle{ r_{k}(S,t) =\sum _{i\in V:S_{i}\leq t<C_{i}}r_{\mathit{ik}} \leq R_{k}\quad (k \in \mathcal{R};\ t \in \left [0,T\right ]) }$$
(6.3)

A schedule S is called feasible if it is both time and resource feasible.
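The feasibility conditions (6.1) and (6.3) translate directly into a simple check over a candidate schedule. The sketch below, over an invented three-activity instance, verifies resource feasibility by evaluating the demand r k (S, t) at each activity start time; since demand can only increase at start events, checking these instants is sufficient.

# Sketch: check resource feasibility (Eq. 6.3) of a fixed-time schedule S.
# The three-activity instance below is invented for illustration only.

def completion(S, p):                         # Eq. 6.1: C_i = S_i + p_i
    return {i: S[i] + p[i] for i in S}

def resource_feasible(S, p, req, cap):
    """req[i][k] = r_ik, cap[k] = R_k."""
    C = completion(S, p)
    for k, R_k in cap.items():
        for t in set(S.values()):             # demand changes only at starts
            demand = sum(req[i].get(k, 0) for i in S if S[i] <= t < C[i])
            if demand > R_k:
                return False
    return True

p   = {1: 3, 2: 2, 3: 2}                      # durations
req = {1: {"k": 1}, 2: {"k": 1}, 3: {"k": 2}} # requirements r_ik
cap = {"k": 2}                                # capacity R_k
print(resource_feasible({1: 0, 2: 0, 3: 1}, p, req, cap))   # False: peak of 4
print(resource_feasible({1: 0, 2: 0, 3: 3}, p, req, cap))   # True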

The RCPSP/max problem is a complex scheduling problem: in fact not only the optimization version but also the feasibility problem is \(\mathcal{N}\!\mathcal{P}\)-hard (see Bartusch et al. 1988). The reason for this \(\mathcal{N}\!\mathcal{P}\)-hardness result lies in the presence of maximum time lags. In fact these imply the presence of deadline constraints, transforming feasibility problems for precedence-constrained scheduling to scheduling problems with time windows.

4 The RCPSP/max as a CSP

As introduced above, we formulate the project scheduling problem as a Constraint Satisfaction Problem (CSP); in particular we refer to the definition of the problem proposed by Bartusch et al. (1988), in which decision variables are the so-called forbidden sets, also known as Minimal Critical Sets (see Laborie and Ghallab 1995 or Chap. 2 of this handbook; we use MCS in the rest of the chapter).

Given a generic resource k, a conflict is a set of activities requiring resource k which can mutually overlap and whose combined resource requirement exceeds the resource capacity R k . A Minimal Critical Set, \(\mathit{MCS} \subseteq V\), is a resource conflict of minimal size (no proper subset is itself a conflict), which can be resolved by posting a single precedence constraint between two of the competing activities in the conflict set. Hence, in CSP terms, a decision variable is defined for each MCS and the domain of possible values is the set of all feasible precedence constraints i ≺ j which can be imposed between any pair of activities in the MCS.

A solution of the scheduling problem is a set of precedence constraints (added to the original problem described in the previous Sect. 6.3) whose addition resolves all the MCSs.

A solution takes the form of an activity network N S , a directed graph N S  = (V S , E), where V S  = V ∪{source, sink}, the set of problem activities V plus two fictitious activities source and sink, and E is the set of directed edges (i, j) representing the precedence constraints i ≺ j defined among the activities in V S . In particular, the set E is partitioned into two subsets, E = E prob  ∪ E post , where E prob is the set of precedence constraints originating from the problem definition and E post is the set of precedence constraints posted to resolve resource conflicts. In general, the directed graph N S  = (V S , E) represents a set of temporal solutions \((S_{1},S_{2},\ldots,S_{n})\), that is, a set of assignments to the activities’ start-times which are consistent with the set of constraints E and the set of imposed resource constraints.

We observe that in this new formulation the problem becomes a pure disjunctive temporal problem (Oddi et al. 2010b): the original resource constraints are compiled into a set of MCSs, and each MCS can be seen as a disjunctive temporal clause. A drawback of this MCS reduction is the large number of MCSs obtained for each resource k: if n k is the number of activities requiring resource k, the number of MCSs can be as large as \(\binom{n_{k}}{R_{k} + 1}\), hence \(\mathcal{O}(n_{k}^{R_{k}+1})\).
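To illustrate both the definition of an MCS and the blow-up just discussed, the sketch below enumerates by brute force all minimal sets of (potentially overlapping) activities whose combined demand exceeds the capacity of a single resource. The instance is invented, and actual solvers avoid this exhaustive enumeration, as discussed in the next sections.

# Brute-force enumeration of Minimal Critical Sets for one resource: a set of
# potentially overlapping activities is an MCS if its total demand exceeds R_k
# while every proper subset stays within R_k.  Invented data, for illustration.
from itertools import combinations

def minimal_critical_sets(demand, R_k):
    acts = list(demand)
    mcss = []
    for size in range(2, len(acts) + 1):
        for subset in combinations(acts, size):
            total = sum(demand[a] for a in subset)
            if total <= R_k:
                continue                      # not a conflict
            if all(total - demand[b] <= R_k for b in subset):
                mcss.append(set(subset))      # minimal: every subset is fine
    return mcss

demand = {"a": 2, "b": 2, "c": 1, "d": 1}     # r_ik for activities on resource k
print(minimal_critical_sets(demand, R_k=3))   # {a,b}, {a,c,d}, {b,c,d}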

The next section summarizes the works (Cesta et al. 1999, 2002) which describe how to overcome the limitation imposed by the large number of MCSs. The proposed approach, targeted at the identification of decision variables, attempts to reconcile two typically conflicting desiderata: (1) on one hand, to always center the decision on the most critical precedence constraint to post and (2) on the other, to minimize the amount of time spent in the analysis that leads to this decision. For details on the empirical evaluation of the algorithms the reader can refer to the original works (Cesta et al. 1999, 2002).

5 Precedence Constraint Posting

The Precedence Constraint Posting (PCP) approach, first introduced by Smith and Cheng (1993) for problems with binary resources and then extended to more general problems in subsequent research, aims at synthesizing additional precedence constraints between pairs of activities for the purpose of pruning all inconsistent allocations of resources to activities. The general schema of this approach is provided in Fig. 6.1 and consists of representing, analyzing, and solving different aspects of the problem in two separate layers. In the first, the temporal aspects of the scheduling problem, e.g., activity durations, constraints between pairs of activities, due dates, release times, etc., are considered. The second layer, instead, represents and analyzes the resource aspects of the problem. Let us now explain the details of the two layers.

Fig. 6.1 Precedence Constraint Posting schema

5.1 Time Layer

The temporal aspects of the scheduling problem are represented through an STP (Simple Temporal Problem) network (see Dechter et al. 1991). This is a temporal graph in which the set of nodes represents a set of temporal variables named time-points, tp μ , while linear temporal constraints of the form \(tp_{\nu } - tp_{\mu } \leq d_{\mu \nu }\) define the distances among them. Each time-point initially has a domain of possible values equal to [0, T], where T is the temporal horizon of the problem. The problem is represented by associating with each activity a pair of time-points which represent, respectively, the start and the end time of the activity. Therefore a temporal constraint may be imposed between a pair of time-points that “belong” to the same activity or not. In the latter case (when they do not belong to the same activity) the temporal constraint represents a constraint between two activities of the problem. If, alternatively, the two time-points belong to the same activity, the temporal constraint represents the duration, or processing time, of the activity. By propagating the temporal constraints it is possible to bound the domain of each time-point, tp μ  ∈ [lb μ , ub μ ]. If the domain of one or more time-points becomes empty, the temporal graph does not admit any solution. In Dechter et al. (1991) it was proved that it is possible to completely propagate the whole set of temporal constraints in polynomial time, \(\mathcal{O}(n^{3})\), and, moreover, that a solution can be obtained by selecting for each time-point its lower bound value, tp μ  = lb μ (this solution is referred to as the Earliest Start-Time Solution). The temporal layer, then, given the temporal aspects of a scheduling problem, provides in polynomial time (using constraint propagation) a set of solutions defined by a temporal graph. This result is taken as input by the second layer. In fact, at this stage we have a set of temporal solutions (time feasible) that still need to be proven resource feasible.
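As an illustration of the time layer, the following sketch propagates a small STP with an all-pairs shortest-path computation (the \(\mathcal{O}(n^{3})\) result cited above). Node 0 is the temporal origin, an edge (u, v, d) encodes tp v  − tp u  ≤ d, a negative cycle signals inconsistency, and the earliest start-time solution is obtained from the lower bounds; the encoding details are illustrative assumptions rather than the data structures of any particular solver.

# Sketch: full propagation of a Simple Temporal Problem via Floyd-Warshall.
# Each constraint tp_v - tp_u <= d becomes a weighted edge u -> v of weight d;
# node 0 is the temporal origin fixed at time 0.  Illustrative encoding only.
INF = float("inf")

def propagate_stp(n, constraints):
    """n time points (0 is the origin); constraints = [(u, v, d), ...]."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, d in constraints:
        dist[u][v] = min(dist[u][v], d)
    for k in range(n):                        # O(n^3) all-pairs shortest paths
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    if any(dist[i][i] < 0 for i in range(n)):
        return None                           # negative cycle: inconsistent
    # Bounds w.r.t. the origin: tp_i in [-dist[i][0], dist[0][i]].
    return [(-dist[i][0], dist[0][i]) for i in range(n)]

# Toy activity with start tp1 and end tp2: duration 3, release 2, deadline 10.
constraints = [(1, 2, 3), (2, 1, -3),         # tp2 - tp1 = 3   (duration)
               (1, 0, -2),                    # tp1 - tp0 >= 2  (release time)
               (0, 2, 10)]                    # tp2 - tp0 <= 10 (deadline)
bounds = propagate_stp(3, constraints)
print(bounds)                                 # [(0, 0), (2, 7), (5, 10)]
earliest = [lb for lb, ub in bounds]          # earliest start-time solution

The same propagation also supports the execution-time reaction discussed in Sect. 6.7: an increased duration or a delayed release time is just one more constraint appended to the list before re-propagating.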

5.2 Resource Layer

This layer takes into account the other aspect of the scheduling problem, namely the constraints on resources (i.e., capacity). In general, resources can be binary, multi-capacitive, or consumable (non-renewable). As described above, the input to this layer in the PCP approach is a temporally flexible solution—a set of temporal solutions (see also Fig. 6.1). As in the previous layer, it is possible to use constraint propagation to reduce the search space. Even though different methodologies are described in the literature, these propagation procedures are not sufficient in general (see Nuijten and Aarts 1996; Laborie 2003). In fact they are not complete, which implies that they are not able to prune all inconsistent temporal solutions. For this reason a PCP procedure uses a Resource Profile (see Cesta et al. 2002) to analyze resource usage over time and detect MCS decision variables. The procedure then proceeds to post further constraints to level (or solve) some of the detected conflicts. These new constraints are propagated in the underlying layer to check temporal consistency. The time layer then provides a new temporally flexible solution that is analyzed again using the resource profiles. The search stops when either the temporal graph becomes inconsistent or the resource profiles are consistent with the resource capacities.

6 The Core Constraint-Based Scheduling Framework

The core of the implemented framework is based on the greedy procedure described in Algorithm 6.2, which is an instance of the procedure described in Sect. 6.2.2. Within this framework, a solution is generated by progressively detecting time periods where resource demand is higher than resource capacity (conflicts) and posting sequencing constraints between competing activities to reduce demand and eliminate capacity conflicts. As explained above, after the current situation is initialized with the input problem, S 0 ← P, the procedure builds an estimate of the required resource profile according to the current temporal precedences in the network and detects resource conflicts—SELECTCONFLICTSET(S 0). If the set of conflicts F is not empty then new constraints are synthesized, SELECTLEVELINGCONSTRAINT(F), and posted on the current situation. The search proceeds until either the STP temporal graph becomes inconsistent or a solution is found.

Using the reference scheduling problem (RCPSP/max) introduced above, we proceed now to summarize the core components needed to fully specify the approach. In fact, the introduction of a specific scheduling problem allows us to set the general framework introduced above. The first issue in the framework implementation concerns how to identify activities that are in a conflicting situation. This allows identification of the points in the current solution state that need to be resolved. The second issue concerns the heuristics (variable and value ordering) used to respectively select and solve conflicts that have been identified.

6.1 Consider Resource Utilization

A first, important issue that needs to be explored is how to compute resource utilization profiles. In fact, the input temporal graph represents a set of solutions, possibly infinite, and to consider all the possible combinations is impossible in practice.

A more affordable alternative consists of computing bounds on resource utilization. Examples of bounding procedures can be found in Drabble and Tate (1994), Cesta and Stella (1997), Laborie (2003), and Muscettola (2002). It is worth noting that when resource bounds are used as resource profiles, removing all conflicts guarantees that every temporal solution represented by the temporal graph is also resource feasible.

A different approach to dealing with resources consists of focusing attention on a specific temporal solution and its resource utilization. In contrast to resource bounding approaches, this process only assures that the final temporal graph contains at least one resource feasible solution (the one for which the resource utilization is in fact considered); some of the temporal solutions may not be resource feasible. Since only a single feasible solution is computed instead of encapsulating all possible feasible solutions, this approach gains substantial computational efficiency.

Figure 6.2 summarizes the two alternative resource profiles. In the first case resource bounds are used to consider all the temporal solutions and their associated resource utilization (Fig. 6.2a). Alternatively, only one temporal solution of the set is considered in the second case (Fig. 6.2b).

Fig. 6.2 Two different ways to consider the resource utilization. (a) Bounds of the resource utilization for the set of solutions defined by a temporal graph. (b) Resource utilization of a single temporal solution

6.2 How to Identify Decision Variables

The starting point in identifying the possible conflicts is the computation of the possible contention peaks. A contention peak is a set of activities whose simultaneous execution exceeds the resource capacity. A contention peak designates a conflict of a certain size (corresponding to the number of activities in the peak). In Algorithm 6.2 the function SELECTCONFLICTSET(S 0) collects all maximal peaks Footnote 2 in the current schedule and then selects a decision variable (MCS) from the set of peaks. An alternative selection procedure first ranks the collected peaks, next picks the most critical one (e.g., the one with maximal size), and last selects a decision variable from the selected peak. The selected MCS is solved by imposing a single precedence constraint i ≺ j between two of the conflicting activities. Sect. 6.6.3 below gives more details about the selection procedures used.
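The following sketch illustrates the peak-detection step for a single resource: given the earliest start-time solution of the current temporal graph, it sweeps over the start events, collects the sets of simultaneously executing activities, and keeps the maximal sets whose combined demand exceeds the capacity. The data and the restriction to one resource are illustrative simplifications.

# Sketch: collect maximal contention peaks for one resource, using the
# earliest start-time solution (S_i, p_i, r_ik).  Invented data; a full
# implementation would repeat this for every resource.

def contention_peaks(S, p, req, capacity):
    peaks = []
    for t in sorted(set(S.values())):         # demand changes at start events
        running = {i for i in S if S[i] <= t < S[i] + p[i]}
        if sum(req[i] for i in running) > capacity and running not in peaks:
            peaks.append(running)
    # keep only maximal peaks (drop peaks contained in another peak)
    return [pk for pk in peaks
            if not any(pk < other for other in peaks)]

S   = {1: 0, 2: 0, 3: 1, 4: 4}                # earliest start times
p   = {1: 3, 2: 4, 3: 2, 4: 2}                # durations
req = {1: 1, 2: 1, 3: 1, 4: 1}                # requirements on the resource
print(contention_peaks(S, p, req, capacity=2))    # [{1, 2, 3}]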

Algorithm 6.2: GREEDYPCP(P)

Require: a problem P

Ensure: a solution S (or the empty set otherwise)

      S 0 ← P

      if Exists an unresolvable conflict in S 0 then

          S ← ∅

      else

          F ← SELECTCONFLICTSET(S 0)

          if F = ∅ then

              S ← S 0

          else

              {i ≺ j} ← SELECTLEVELINGCONSTRAINT(F)

            \(S^{0} \leftarrow S^{0} \cup \{ i \prec j\}\)

            S ← GREEDYPCP(S 0)

        end if

    end if

    return S

Cesta et al. (1999, 2002) showed that much of the advantage of this type of global conflict analysis can be retained by using an approximate polynomial procedure for computing MCSs. In particular, Cesta et al. (1999, 2002) proposed two polynomial strategies for sampling MCSs from a peak of size | F | . These strategies are based on the idea of sorting the activities of each peak according to their resource usage (greatest first); MCSs are then collected by visiting this list and extracting sub-sequences of activities corresponding to MCSs. The two methods are named linear and quadratic according to their complexity (they collect \(\mathcal{O}(\vert F\vert )\) and \(\mathcal{O}(\vert F\vert ^{2})\) elements, respectively). An additional and effective strategy is based on the idea of imposing a lexicographic order on the set of searched MCSs; the reader can refer to Cesta et al. (2002) for the details of the procedure.
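The following sketch gives one plausible reading of the linear sampling strategy described above, not the exact procedure of Cesta et al. (1999, 2002): the activities of a peak are sorted by decreasing resource requirement and, from each position in the sorted list, a window is extended until the capacity is first exceeded. Each such window is an MCS, because dropping any of its elements brings the total demand back within capacity.

# Sketch of a linear MCS-sampling strategy for a contention peak F.
# Illustrative reconstruction of the idea described in the text, not the
# published procedure; invented peak data.

def sample_mcss(peak_req, R_k):
    """peak_req: {activity: r_ik} for the activities of one peak."""
    acts = sorted(peak_req, key=peak_req.get, reverse=True)   # greatest first
    mcss = []
    for start in range(len(acts)):
        subset, total = [], 0
        for a in acts[start:]:
            subset.append(a)
            total += peak_req[a]
            if total > R_k:                   # first excess: minimal by
                mcss.append(list(subset))     # construction (sorted order)
                break
    return mcss                               # at most |F| sampled MCSs

peak = {"a": 3, "b": 2, "c": 2, "d": 1}       # invented peak, capacity 4
print(sample_mcss(peak, R_k=4))               # [['a', 'b'], ['b', 'c', 'd']]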

6.3 Selecting and Solving Conflicts

According to the proposed CSP framework, in all cases where no mandatory decisions can be deduced from the propagation phase, the methods used to select and to solve one of the identified conflicts are introduced by defining variable and value ordering heuristics for the decision variables. The basic idea is to repeatedly evaluate the decision variables and select the one with the best heuristic evaluation. The selection of which variable to assign next is based on the most constrained first (MCF) principle, and the selection of values follows the least constraining value (LCV) heuristic. More specifically, the following heuristics are assumed:

Ranking conflicts:

for evaluating MCSs we have used the heuristic estimator κ() described by Laborie and Ghallab (1995), where the MCS with highest value of κ(MCS) is then chosen. A conflict is unsolvable if no pair of activities in the conflict can be ordered. Basically, κ() will measure how close a given conflict is to being unsolvable.

Slack-based value selection:

to choose among the possible ordering decisions, the one which retains the most temporal slack is taken.

It is worth underscoring that the above PCP framework establishes resource feasibility strictly by sequencing conflicting activities. It remains non-committal on activity start times. As such, PCP preserves temporal flexibility that follows from problem constraints. Further, the two heuristic choices adopt a minimal commitment strategy with respect to preserving temporal slack, and this again favors temporal flexibility.
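A sketch of the slack-based value ordering is given below. Each temporally feasible ordering i ≺ j of two activities in the MCS is scored by the slack it would retain, here taken as lst(j) − eft(i) computed from the current time bounds; this simple estimator and the data are illustrative assumptions, and the exact measure used in the cited works may differ.

# Sketch: least-constraining value ordering for an MCS.  Each candidate
# precedence i -> j is scored by the temporal slack it retains, estimated as
# lst(j) - eft(i); bounds and the estimator are illustrative assumptions.
from itertools import permutations

def best_leveling_constraint(mcs, est, lst, p):
    """est/lst: earliest/latest start-time bounds from the temporal graph."""
    candidates = []
    for i, j in permutations(sorted(mcs), 2):
        slack = lst[j] - (est[i] + p[i])      # room left when posting i ≺ j
        if slack >= 0:                        # ordering is temporally feasible
            candidates.append((slack, (i, j)))
    if not candidates:
        return None                           # unsolvable conflict
    return max(candidates)[1]                 # retain the most temporal slack

est = {1: 0, 2: 0, 3: 2}                      # invented bounds and durations
lst = {1: 6, 2: 8, 3: 4}
p   = {1: 3, 2: 2, 3: 2}
print(best_leveling_constraint({1, 2, 3}, est, lst, p))   # (1, 2): slack 5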

7 Flexible Solutions, Robustness, and Partial Order Schedules

As shown in the previous section, the outcome of a Precedence Constraint Posting solver is a Simple Temporal Problem (STP) network, or STN, that contains not only the temporal constraints belonging to the initial problem, but also the additional precedence constraints that have been added during the resolution process. In an STN, each time point is associated with a bounded interval of values which represents the set of admissible values for that time point; hence, the adjective “temporally flexible” is often used to refer to this kind of solution. The PCP approach therefore tries to retain the temporal flexibility of the underlying STN to the extent possible (in effect, maximizing the domain sizes of the time points). In particular, the use of PCP approaches (and the schedules produced by them) can be justified in two ways:

  • As a means of retaining the flexibility implied by problem constraints (time and capacity) and avoiding over commitment;

  • As a means of establishing conditions for guaranteed executability.

In fact, in most practical scheduling environments, off-line schedules can have a very limited lifetime and scheduling is really an ongoing process of responding to unexpected and evolving circumstances. In such environments, ensuring a robust response is generally the first concern. Unfortunately, the lack of guidance that might be provided by a schedule often leads to myopic, sub-optimal decision-making.

One way to address this problem is reactively, through schedule repair. To keep pace with execution, the repair process must be both fast and complete. The response to a disruption must be fast because of the need to re-start execution of the schedule as soon as possible. A repair must also be complete in the sense of accounting for all changes that have occurred, while attempting to avoid the introduction of new changes. As these two goals can be conflicting, a compromise solution is often required. Different approaches exist and they tend to favour either timeliness (Smith 1994) or completeness (El Sakkout and Wallace 2000) of the reactive response.

An alternative, proactive approach to managing execution in dynamic environments is to focus on building schedules that retain flexibility and are able to absorb some amount of unexpected events without rescheduling. One technique consists of factoring time and/or resource redundancy into the schedule, taking into account the types and nature of uncertainty present in the target domain (Davenport et al. 2001; Hiatt et al. 2009). An alternative technique is to construct an explicit set of contingencies (i.e., a set of complementary solutions) and use the most suitable with respect to the actual evolution of the environment (Drummond et al. 1994). Both of these proactive techniques presume an awareness of the possible events that can occur in the operating environment, and in some cases, these knowledge requirements can present a barrier to their use.

Research approaches are based on different interpretations of the concept of a robust solution, e.g., the ability to preserve solution qualities or the ability to maintain a stable solution. The concept of robustness on which this work is based can be viewed as execution-oriented: a solution to a scheduling problem will be considered robust if it provides two general features: (1) the ability to absorb exogenous and/or unforeseen events without loss of consistency, and (2) the ability to keep pace with execution, guaranteeing a prompt response to the various events.

Figure 6.3a describes the execution of a schedule. This is given to an executor (either a machine or a human being) that manages the different activities. If something happens (i.e., an unforeseen event occurs) the executor gives feedback to a scheduler module asking for a new solution. Then, once a new solution is computed, it is given back to the executor. In Fig. 6.3b, instead, the execution of a flexible schedule is highlighted. The substantial difference in this case is that the use of flexible solutions allows the introduction of two separate rescheduling phases: the first enabling rapid response by immediate means like temporal constraint propagation over the set of activities, and the second entailing a more extensive re-computation of the schedule when the first phase cannot offer a response. In practice, the first phase exploits the flexibility characteristics of the solution (and for this reason we named this module the bounded repair scheduler). Of course it is possible that an unforeseen event will force the system outside of the bounds provided by the flexible solution. In this case, it will be necessary to invoke the second, more complete scheduling phase. This second phase involves re-computation of the overall flexible schedule, and performs a much more extensive constraint-based search procedure (global revision module). Note that the use of flexible schedules makes it possible to bypass this extended computation in many circumstances in favor of a prompt answer.Footnote 3

Fig. 6.3 Rescheduling actions during the execution. (a) General rescheduling phase. (b) Rescheduling phase using a flexible solution

7.1 Partial Order Schedules

To take maximum advantage of the opportunity to bypass extended computation in the event of unexpected events, a stronger form of flexible schedule is required. Policella et al. (2004) and Policella (2005) further elaborated the idea of exploiting temporal flexibility by adopting a graph formulation of the scheduling problem and focusing on the generation of Partial Order Schedules (POSs).

Definition 6.1 (Partial Order Schedule).

A Partial Order Schedule POS for a problem P is an activity network, such that any possible temporal solution is also a resource-consistent assignment.

Within a POS, each activity retains a set of feasible start times, and these options provide a basis for responding to unexpected disruptions.

An attractive property of a POS is that reactive response to many external changes can be accomplished via simple propagation in an underlying temporal network (a polynomial time calculation); only when an external change exhausts all options for an activity is it necessary to recompute a new schedule from scratch. In fact, an increased activity duration, as well as a delayed release time, can be modeled as a new temporal constraint to post on the graph. To propagate all the effects of the new edge over the entire graph it is necessary to achieve arc-consistency of the graph (that is, to ensure that every activity has a legal allocation with respect to the temporal constraints of the problem).

Note that, even though the propagation process does not explicitly consider consistency with respect to the resource constraints, it is guaranteed to obtain a feasible solution by definition. Therefore a partial order schedule provides a means to find a new solution and ensures that it can be computed quickly.

The common thread underlying a POS is the characteristic that activities which require the same resource units are linked via precedence constraints into precedence chains. Given this structure, each constraint becomes more than just a simple precedence constraint: it represents a producer-consumer relation, allowing each activity to be connected with the set of predecessors which supply the units of resource it requires for execution. In this way, the resulting network of chains can be interpreted as a flow of resource units through the schedule; each time an activity completes its execution, it passes its resource unit(s) on to its successors (a similar formulation is used in Chap. 2 of this handbook). Figure 6.4 shows an example of a Partial Order Schedule for a single resource with capacity five. Activities are represented as rectangles and edges represent the precedence constraints. The numbers inside the rectangles represent the resource requirements, and the labels on the directed edges represent the flow of resource units supplied to a generic activity i by its predecessors in order to satisfy the imposed resource constraint, independently of the start time values of the activities. For example, the activity which requires four units of resource receives two units of resource from each of its two predecessors and supplies one and three units of resource, respectively, to its two successors. In general, in a POS solution each activity has a set of predecessors which supply the units of resource needed for its execution.

Fig. 6.4 An example of Partial Order Schedule (POS)

Hence, an activity network N S (V S , E POS ) is in POS-form if for each resource k there exists a labeling function \(f_{k}: E_{\mathit{POS}}\longrightarrow [0..R_{k}]\) representing the flow of resource units among the activities such that for each activity i the following constraint holds:

$$\displaystyle{ \sum _{j\in \mathit{Pred}(i)}f_{k}(j,i) =\sum _{j\in \mathit{Succ}(i)}f_{k}(i,j) = r_{\mathit{ik}} }$$
(6.4)

where Pred(i) = { j ∈ V | (j, i) ∈ E POS } and Succ(i) = { j ∈ V | (i, j) ∈ E POS }.Footnote 4 Given an input solution S (represented either as a graph or as a set of start-time values) a polynomial transformation method, named CHAINING, can be defined that creates sets of activity chains (Policella et al. 2007). This operation can be accomplished in three steps (a sketch of the last step is given after the list):

  1. all the previously posted leveling constraints are removed from the input partial order;

  2. the activities are sorted by increasing earliest start times;

  3. for each resource and for each activity i (according to the increasing order of start times), one or more predecessors p are chosen which supply the units of resource required by i, and a precedence constraint p ≺ i is posted for each predecessor p. The last step is iterated until all the activities are linked by precedence chains and the constraints in (6.4) are satisfied.
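The sketch below illustrates step 3 under simplifying assumptions: a single resource of integer capacity R k , an input solution S that is already resource feasible, and the view of the resource as R k unit-capacity chains. Activity i is appended to r ik chains whose last activity has finished by S i , and a precedence constraint is posted from each chain's tail; the data structures are illustrative, not the implementation of Policella et al. (2007).

# Sketch of the chaining step for one resource of capacity R_k, assuming the
# input fixed-time solution S is resource feasible and the leveling
# constraints have already been removed (step 1).  Illustrative only.

def chaining(S, p, req, R_k):
    order = sorted(S, key=S.get)              # step 2: by earliest start time
    chains = [[] for _ in range(R_k)]         # each chain carries one unit
    posted = []                               # the new E_post
    for i in order:                           # step 3
        free = [c for c in chains
                if not c or S[c[-1]] + p[c[-1]] <= S[i]]
        for chain in free[:req[i]]:           # pick r_ik available chains
            if chain:
                posted.append((chain[-1], i)) # predecessor supplies one unit
            chain.append(i)
    return posted

S   = {1: 0, 2: 0, 3: 3, 4: 3}                # a feasible fixed-time solution
p   = {1: 3, 2: 3, 3: 2, 4: 2}                # durations
req = {1: 1, 2: 1, 3: 1, 4: 1}                # requirements r_ik
print(chaining(S, p, req, R_k=2))             # [(1, 3), (2, 4)]

In the resulting network each posted edge carries one unit of the resource, so the flow condition (6.4) holds by construction for unit requirements; an activity with a larger requirement simply occupies several chains, and the flow on an edge is the number of chains it inherits from that predecessor.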

Before concluding we make a further remark about partial order schedules. Roy and Sussman (1964) introduced the disjunctive graph representation of the classical job shop scheduling problem and described how a solution can be achieved by solving all the disjunctive constraints and transforming each into a conjunctive one. Also in our case of RCPSP/max, solution of all disjunctive constraints is required to achieve a POS. In essence, the disjunctive graph representation has been extended to the more general case where multi-capacity resources are defined. In this case “disjunctive” hyper-constraints among activities that use the same resource are introduced, the so-called MCSs. Based on this representation we can note that a partial order schedule is obtained once every disjunctive MCS is solved. In this case, a set of precedence constraints is posted to solve each MCS.

8 Extended Optimizing Search

In the previous sections we have presented a basic set of core solving algorithms for generating a solution in the form of an activity network N. In addition, we have also shown how such a network can always be turned into a Partial Order Schedule (POS) via a polynomial time calculation. In this section we summarize how to embed these core algorithms in a set of extended optimizing search procedures targeted at two different objectives: minimizing the solution makespan and improving the robustness of a solution.

A first procedure is described in Cesta et al. (2002), where an iterated version of Algorithm 6.2, called the ISES procedure, is proposed. This algorithm is an iterative method which at each step utilizes a randomized version of Algorithm 6.2 to produce different solutions. The key idea underlying the approach is to heuristically bias random choices in a dynamic fashion, according to how well (or how poorly) the available search heuristics (variable and value ordering) discriminate among several alternatives.

A generalization of the previous procedure is the so-called Iterative Flattening Search (IFS, Oddi et al. 2010a; Cesta et al. 2000). The concept of iterative flattening search is quite general and provides a framework for designing effective procedures for scheduling optimization; the concept is also known in the literature as Large Neighborhood Search (LNS) and was independently developed for solving vehicle routing problems by Shaw (1998). It iteratively applies two steps: (1) random relaxation of the current solution; (2) an incremental solving step to regain solution feasibility. Algorithm 6.3 introduces the generic IFS procedure. The algorithm alternates relaxation and flattening steps until a better solution is found or a maximal number of non-improving iterations is reached. The procedure takes two parameters as input: (1) an initial solution S; (2) a positive integer MaxFail, which specifies the maximum number of consecutive non-makespan-improving moves that the algorithm will tolerate before terminating. After initialization, a solution is repeatedly modified within the while loop by applying the RELAX procedure, and a PCP procedure is used as the solving step. At each iteration, the RELAX step reintroduces the possibility of resource contention, and the PCP step is called again to restore resource feasibility. If a better makespan solution is found, the new solution is saved in S best . If no improvement is found within MaxFail moves, the algorithm terminates and returns the best solution found. It is worth noting that Partial Order Schedules are, in the context of IFS (or LNS), an effective way of designing neighborhood structures; examples of neighborhoods for solving scheduling problems with cumulative renewable resources are given in Oddi et al. (2010a) and, as shown in Laborie and Godard (2007), the concept can be extended to various types of resources.

Algorithm 6.3: IFS(S, MaxFail)

      S best  ← S

      counter ← 0

      while counter ≤ MaxFail do

          Relax(S)

          S ← PCP(S)

          if C max (S) < C max (S best ) then

              S best  ← S

              counter ← 0

          else

            counter ← counter + 1

        end if

    end while

    return S best
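As a sketch of the RELAX step (the PCP step reuses the machinery of Sect. 6.6), the fragment below randomly retracts a fraction of the posted leveling constraints E post , reintroducing possible resource contention that the subsequent solving step must level again. The retraction ratio and the representation of the solution as a list of posted precedences are illustrative assumptions.

# Sketch of the RELAX step of iterative flattening search: randomly retract a
# fraction of the leveling constraints posted by previous PCP steps.  The
# retraction ratio and the solution representation are illustrative choices.
import random

def relax(e_post, ratio=0.3, rng=random):
    """Return the posted precedence constraints that survive retraction."""
    return [c for c in e_post if rng.random() > ratio]

e_post = [(1, 3), (2, 4), (3, 5), (4, 6)]     # invented posted precedences
random.seed(0)
print(relax(e_post))                          # e.g. [(1, 3), (2, 4), (3, 5)]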

The problem of increasing (optimizing) the robustness of generated POSs is addressed via an iterative (randomized) chaining procedure in Policella et al. (2009). In particular, the problem is addressed by separating the phase of problem solution, which may pursue a standard optimization criterion (e.g., minimal makespan), from a subsequent phase of solution robustification in which a more flexible set of solutions is obtained and compactly represented through a Partial Order Schedule. The paper focuses on specific heuristic algorithms for the synthesis of POSs, starting from a pre-existing schedule, and on different extensions of the CHAINING algorithm described in Sect. 6.7.1, which progressively introduce temporal flexibility into the representation of the solution. In fact, we can observe that, given the form of the output solution (an activity network), a “classical” objective like the minimization of project makespan, C max , co-exists very naturally with a POS solution and with the objective of increasing the solution’s robustness. In the best case, the early start time solution which minimizes makespan is executed; but in the event that unexpected events prevent this possibility, the accompanying POS provides a feasible, bounded relaxation strategy.

9 Conclusions

In this chapter we have summarized a constraint satisfaction problem (CSP) solving framework for project scheduling problems. We have advocated a somewhat unconventional Precedence Constraint Posting (PCP) approach, which exploits the expressiveness and computational efficiency of a simple temporal network (STN) to enforce complex temporal constraints and interdependencies between activities, and establishes resource feasibility by iteratively sequencing activities that are found to be competing for the same resources until all potential resource conflicts have been resolved. One important advantage of a PCP approach is that a generated solution consists of a set of feasible schedules, as opposed to the single assignment of activity start times that is the target of many scheduling approaches. In fact, such a flexible solution can be efficiently transformed into a Partial Order Schedule (POS), which guarantees resource feasibility over ranges of activity start times that are consistent with the posted constraints. POSs enable quick reaction to unforeseen events during execution via efficient temporal constraint propagation procedures.

We have discussed core technology components for managing complex temporal constraints and for detecting and responding to potential resource conflicts, which are necessary ingredients for configuring a PCP-based scheduling procedure. We have also illustrated their applicability to the resource constrained project scheduling problem with minimum and maximum lag times (RCPSP/max). Due to space considerations, we have not focused heavily on the issue of optimization of project schedules. However, it is important to note that the PCP approach we have described can be directly embedded as a sub-procedure in various forms of extended optimizing search (see Cesta et al. 2002; Oddi et al. 2010a; Policella et al. 2009).

Minimization of project makespan, for example, is an objective that co-exists very naturally with a POS solution: in the best case, the early start time solution which minimizes makespan is executed; but in the event that unexpected events prevent this possibility, the accompanying POS provides a feasible, bounded relaxation strategy. Overall, PCP provides a flexible and efficient framework for solving complex combinatorial problems like project scheduling.