
1 Introduction

Managing multiple projects is a complex decision-making process in which a number of projects must concurrently share a set of limited resources. Examples of multi-project environments are new product development, multi-product manufacturing, infrastructure construction, and maintenance of systems.

The basic multi-project scheduling problem (BMPSP) can be considered an extension of the well-known resource constrained project scheduling problem (RCPSP) where two or more projects which require the same scarce resources are scheduled simultaneously.

There are two main research streams in multi-project scheduling: the static and the dynamic project environment (Dumond and Mabert 1988). In this chapter we assume a closed project portfolio, which does not change over time. The BMPSP in a static environment has been studied, amongst others, by Fendley (1968), Pritsker et al. (1969), Kurtulus and Davis (1982), Kurtulus and Narula (1985), Lawrence and Morton (1993), Lova et al. (2000), Lova and Tormos (2001, 2002), Gonçalves et al. (2008), Krüger and Scholl (2010), Browning and Yassine (2010), Kumanam and Raja (2011), and Cai and Li (2012).

The existing solution methods apply either a single- or a multi-project approach. The single-project approach is equivalent to the RCPSP, since it merges all projects of the multi-project into an artificial super-project with dummy start and end activities. The multi-project approach keeps the projects separate. The approach considered in this chapter uses a single-project approach.

Scheduling involves the allocation of the given resources to projects to determine the start and completion times of a set of detailed activities. There may be multiple projects contending for limited resources, which makes the solution process more complex. The allocation of scarce resources then becomes a major objective of the problem and several compromises have to be made to solve the problem to the desired level of near-optimality.

In this chapter, we present a biased random-key genetic algorithm (BRKGA) approach to solve the BMPSP. The remainder of the chapter is organized as follows. Section 31.2 describes the problem and presents a conceptual model and Sect. 31.3 reviews the literature. Section 31.4 describes the approach used to solve the BMPSP. Section 31.5 describes the parameterized schedule-generation procedure and Sect. 31.6 reports on some computational experiments. Concluding remarks are made in Sect. 31.7.

2 Problem Description

The BMPSP consists of a set Q of projects, where each project q ∈ Q is composed of activities j ∈ V q , where activities α q and ω q are dummies and represent, respectively, the initial and final activities of project q. Let V be the set of all activities and let \(\mathcal{R} =\{ 1,\ldots,K\}\) represent the set of renewable resources. While being processed, activity j ∈ V requires r jk units of resource \(k \in \mathcal{R}\) during each time instant of its non-preemptable duration p j . Resource \(k \in \mathcal{R}\) has a limited availability of R k at any point in time. Parameters p j , r jk , and R k are assumed to be non-negative and deterministic. The activities are interrelated by the following two kinds of constraints:

  • Precedence constraints, which force each activity \(i \in V\) to be scheduled after all predecessor activities j ∈ Pred(i) are completed;

  • Resource constraints, which assure that the processing of the activities is subject to the availability of resources with limited capacities.

For the start and end activities of each project q, we have, for all q ∈ Q, that

$$\displaystyle{p_{\alpha _{q}} = p_{\omega _{q}} = 0\quad \text{and}\quad r_{\alpha _{q}k} = r_{\omega _{q}k} = 0\;\;(k \in \mathcal{R})}$$

Activities 0 and n + 1 are dummy activities, have no duration, and correspond to the start and end of all projects.

The BMPSP consists in finding a schedule for all the activities taking into account precedence constraints and the availability of resources, while minimizing some measure of performance. Let C j represent the finish time of activity j ∈ V. A schedule can be represented by a vector of finish times (C 1, …, C n+1). Let \(\mathcal{A}(t)\) be the set of activities being processed at time t. The conceptual model of the BMPSP can be described as

$$\displaystyle{ \mbox{ Min.}\,\,\,\,\,{\it \text{Measure of Performance}}\;(\,C_{1},\ldots,C_{n}\,) }$$
(31.1)
$$\displaystyle\begin{array}{rcl} \text{s.t.}& & \\ & & C_{i} \leq C_{j} - p_{j}(j \in V;\,i \in \mathit{Pred}(j)){}\end{array}$$
(31.2)
$$\displaystyle\begin{array}{rcl} \sum \limits _{j\in \mathcal{A}(t)}r_{\mathit{jk}} \leq R_{k}\,\,\,(k \in \mathcal{R};\,t \geq 0)& &{}\end{array}$$
(31.3)
$$\displaystyle\begin{array}{rcl} C_{j}\,\, \geq \,\, 0(j \in V )& &{}\end{array}$$
(31.4)

According to objective (31.1) we seek to minimize some performance measure. Constraints (31.2) impose the precedence relations between activities, and constraints (31.3) limit the resource usage imposed by the activities being processed at time t to the available capacity. Finally, constraints (31.4) force the finish times to be non-negative.
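
To make the conceptual model concrete, the following sketch (not from the chapter; the dictionary-based data layout and function name are illustrative assumptions) checks a candidate vector of finish times against constraints (31.2)-(31.4), assuming integer time instants:

```python
# Sketch: verify a schedule against the conceptual model (31.2)-(31.4).
# Data layout (dicts keyed by activity / resource) is an assumption.

def is_feasible(C, p, r, R, pred, horizon):
    """C[j]: finish time; p[j]: duration; r[j][k]: usage of resource k;
    R[k]: capacity of resource k; pred[j]: predecessors of activity j."""
    # (31.4): finish times must be non-negative
    if any(C[j] < 0 for j in C):
        return False
    # (31.2): each predecessor i must finish before j starts (C_i <= C_j - p_j)
    for j, preds in pred.items():
        if any(C[i] > C[j] - p[j] for i in preds):
            return False
    # (31.3): activities in process at time t must fit within capacities
    for t in range(horizon):
        active = [j for j in C if C[j] - p[j] <= t < C[j]]
        for k in R:
            if sum(r[j][k] for j in active) > R[k]:
                return False
    return True
```
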

A variety of measures of performance have been used for the BMPSP. Minimization of project duration has been used widely (Baker 1974). Other measures of performance include: minimization of total project delay, lateness, or tardiness (Kurtulus and Davis 1982), minimization of average project delay (Lova and Tormos 2001), minimization of total lateness or lateness penalty (Kurtulus 1985), minimization of the overall project cost (Talbot 1982), minimization of the total cost of delay (Kurtulus and Narula 1985), and maximization of resource leveling (Woodworth and Willie 1975). In this chapter, we seek to minimize a measure of performance which simultaneously incorporates the due date (tardiness), starting time (earliness), and work in process (flow time) of each project (Gonçalves et al. 2008). This measure is described below. The following notation will be used:

Table 1

The performance measure is defined as

$$\displaystyle{ w^{T}\,\sum \limits _{ q}\mathop{T}\nolimits _{q}^{3}\,\,\,\,\, +\,\,\, w^{E}\,\sum \limits _{ q}\mathop{E}\nolimits _{q}^{2}\,\,\,\,\, +\,\,\, w^{\mathit{FD}}\,\sum \limits _{ q}\mathop{\mathit{FD}}\nolimits _{q}^{2} }$$
(31.5)

where w^T, w^E, and w^FD are weights defined by the decision maker. Note that the tardiness has an exponent equal to 3 because in real-world settings it is considered more important than the earliness or the flow time (which have an exponent equal to 2). To overcome the problem of not knowing the target duration of a project in a real-world situation, we replace

$$\displaystyle{w^{\mathit{FD}}\sum _{q}\mathit{FD}_{q}^{2}\quad \mbox{by}\quad w^{\mathit{FD}}\sum \limits _{q}\frac{\left (C_{q} - S_{q}\right )^{2}} {\mathit{LB}_{0}^{q}} }$$
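
As an illustration, the performance measure of Eq. (31.5), with the normalized flow-time term, can be sketched as follows. The weights and the dictionary-based project data are hypothetical, not from the chapter:

```python
# Sketch of the performance measure in Eq. (31.5), with the flow-time
# deviation term replaced by the normalized (C_q - S_q)^2 / LB_0^q form.
# Weight defaults and the data layout are illustrative assumptions.

def performance(projects, wT=1.0, wE=1.0, wFD=1.0):
    """projects: list of dicts with tardiness T, earliness E,
    completion C, start S, and duration lower bound LB0."""
    total = 0.0
    for q in projects:
        total += wT * q['T'] ** 3                         # tardiness, cubed
        total += wE * q['E'] ** 2                         # earliness, squared
        total += wFD * (q['C'] - q['S']) ** 2 / q['LB0']  # normalized flow time
    return total
```
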

In the above model, the resource constraints are expressed by condition (31.3). However, there are other types of constraints, related to the start of a project, which cannot be modeled by that condition. To model this kind of constraint, we add

$$\displaystyle{C_{\alpha _{q}} \geq \mathit{ES_{q}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(q \in Q)}$$

to the model, where ES q represents the release date (earliest start time) of project q. These constraints are enforced in the model implicitly by assigning to the initial activity of each project a duration \(\mathit{ES}_{q} \geq \overline{\mathit{ES}}_{q}\), i.e.,

$$\displaystyle{p_{\alpha _{q}} = \mathit{ES}_{q} \geq \overline{\mathit{ES}}_{q}\qquad (q \in Q)}$$

3 Literature Review

The BMPSP is a generalization of the RCPSP. Blazewicz et al. (1983) show that the RCPSP, as a generalization of the classical job shop scheduling problem, belongs to the class of \(\mathcal{N}\mathcal{P}\)-hard optimization problems. Therefore the BMPSP, as a generalization of the RCPSP, is also \(\mathcal{N}\mathcal{P}\)-hard.

Exact methods to solve the BMPSP are proposed in the literature. The pioneering work of multi-project scheduling by Pritsker et al. (1969) proposed a zero-one programming approach. Mohanty and Siddiq (1989) studied the problem of assigning due dates to the projects in a multi-project environment. Drexl (1991) considered a non-preemptive variant of the resource-constrained assignment problem using a hybrid branch-and-bound/dynamic programming algorithm with a Monte Carlo-type upper bounding heuristic. Deckro et al. (1991) formulated the BMPSP as a block angular general integer programming model and employed a decomposition approach to solve large problems. Vercellis (1994) describes a Lagrangian decomposition technique for solving multi-project planning problems with resource constraints and alternative modes of performing each activity in the projects.

Several approaches to the BMPSP using heuristic methods have been proposed in the literature. For example, Fendley (1968) used multi-projects with three and five projects and considered three efficiency measurements in the computational analysis. Kurtulus and Davis (1982) designed multi-project instances whose projects have between 34 and 63 activities and resource requirements for each activity between two and six units.

Kurtulus and Narula (1985) studied penalties due to project delay. Dumond and Mabert (1988) studied the problem of assigning due dates to the projects in a multi-project environment. Tsubakitani and Deckro (1990) proposed a heuristic for multi-project scheduling with resource constraints using the Kurtulus and Davis (1982) approach to select appropriate heuristic decision rules. Bock and Patterson (1990) designed a computational experiment based on the work of Dumond and Mabert (1988) with three factors. Lawrence and Morton (1993) studied the due date setting problem of scheduling multiple resource-constrained projects with the objective of minimizing weighted tardiness costs. Shankar and Nagi (1996) proposed a two-level hierarchical approach consisting of the planning and scheduling stages.

Özdamar et al. (1998) examined different dispatching rules for the tardiness and the net present value objective embedded in a multi-pass heuristic. Ash (1999) proposed a deterministic simulation scheme using available project data to choose an activity scheduling heuristic which not only allows for the establishment of good project schedules, but determines a priori which resources will be assigned to specific project activities.

Lova et al. (2000) developed a multi-criteria heuristic that, lexicographically, improves two criteria: a temporal criterion (mean project delay in relation to the unconstrained critical path duration or multi-project duration increase) and a non-temporal criterion (project splitting, in-process inventory, resource leveling, or idle resources) that can be chosen by the user.

Mendes (2003) presents a genetic algorithm that uses a random-key representation and a modified parallel schedule-generation scheme (SGS).

4 Biased Random-Key Genetic Algorithm

We begin this section with an overview of the proposed solution process. This is followed by a discussion of the biased random-key genetic algorithm, including detailed descriptions of the solution encoding and decoding, evolutionary process, and fitness function.

4.1 Overview

Considering the difficulty of solving real-world problems with exact methods, a new solution approach is developed that combines a genetic algorithm with a schedule-generation scheme (SGS) that creates parameterized active schedules. The SGS constructs a schedule based on the priorities and delay times of the activities, and on the release dates of the projects.

The role of the genetic algorithm (GA) is to evolve the encoded solutions, or chromosomes, which encode the vectors of priorities (\(\varPi\)) and delays (Δ) of the activities and the vector of project release dates (ES). For each chromosome, the following phases are applied to decode the chromosome:

  1. Decoding of the priorities. This phase transforms a part of the chromosome supplied by the genetic algorithm into the vector of activity priorities (Π).

  2. Decoding of the delay times. This phase transforms a part of the chromosome supplied by the genetic algorithm into the vector of activity delays (Δ).

  3. Decoding of the release dates. This phase transforms a part of the chromosome supplied by the genetic algorithm into the vector of project release dates (ES).

  4. Schedule generation. This phase makes use of Π, Δ, and ES, generated in the previous phases, and constructs parameterized active schedules.

  5. Fitness evaluation. This phase computes the fitness of the solution (the measure of quality of the schedule) according to Eq. (31.5).

Figure 31.1 illustrates the sequence of steps applied to each chromosome generated by the BRKGA.

Fig. 31.1 Architecture of the algorithm

The remainder of this section details the genetic algorithm, the decoding procedure, and the SGS.

4.2 Biased Random-Key Genetic Algorithm

Genetic algorithms with random keys, or random-key genetic algorithms (RKGA), for solving problems like sequencing, whose solutions can be encoded as permutation vectors, were introduced in Bean (1994). In an RKGA, chromosomes are represented as vectors of randomly generated real numbers in the interval [0, 1]. The decoder, a deterministic algorithm, takes as input a chromosome and associates with it a solution of the combinatorial optimization problem for which an objective value or fitness can be computed.

Random-key GAs are particularly attractive for sequencing problems and/or when the chromosomes have several parts (see, for example, Gonçalves and Almeida 2002; Gonçalves and Resende 2004; Gonçalves and Sousa 2011). Unlike traditional GAs, which need special repair procedures to handle permutations or sequences, RKGAs move all feasibility issues into the objective evaluation procedure and guarantee that all offspring formed by crossover correspond to feasible solutions. When the chromosomes have several parts, traditional GAs need a different genetic operator for each part. However, since RKGAs use parameterized uniform crossover (instead of the traditional one-point or two-point crossover), they do not need different genetic operators for each part.

A RKGA evolves a population of random-key vectors over a number of generations (iterations). The initial population is made up of σ_pop^init vectors of n_key random keys, each component generated independently at random in the real interval [0, 1]. The fitness of each individual is then computed by the decoder. In generation g, the population is partitioned into two groups of individuals: a small group of n_elit < σ_pop^init/2 elite individuals, i.e., those with the best fitness values, and the remaining set of σ_pop^init − n_elit non-elite individuals. To evolve the population of generation g, a new generation of individuals is produced. All elite individuals of generation g are copied without modification to the population of generation g + 1. RKGAs implement mutation by introducing mutants into the population. A mutant is a vector of random keys generated in the same way as an element of the initial population. At each generation, a small number n_mut < σ_pop^init/2 of mutants is introduced into the population. With n_elit + n_mut individuals accounted for in the population of generation g + 1, σ_pop^init − n_elit − n_mut additional individuals need to be generated to complete the σ_pop^init individuals that make up the new population. This is done by producing σ_pop^init − n_elit − n_mut offspring solutions through the process of mating or crossover.

A biased random-key genetic algorithm, or BRKGA (Gonçalves and Resende 2011a), differs from a RKGA in the way parents are selected for mating. While in the RKGA of Bean (1994) both parents are selected at random from the entire current population, in a BRKGA each offspring is generated by combining a parent selected at random from the elite partition of the current population with one selected at random from the rest of the population. Repetition in the selection of a mate is allowed, so an individual can produce more than one offspring in the same generation. As in RKGAs, parameterized uniform crossover (Spears and De Jong 1991) is used to implement mating in BRKGAs. Let π_elit be the probability that an offspring inherits the vector component of its elite parent, and recall that n_key denotes the number of components in the solution vector of an individual. For l = 1, …, n_key, the l-th component c(l) of the offspring vector c takes on the value of the l-th component e(l) of the elite parent e with probability π_elit and the value of the l-th component ē(l) of the non-elite parent ē with probability 1 − π_elit.

When the next population is complete, i.e., when it has σ_pop^init individuals, fitness values are computed for all of the newly created random-key vectors and the population is partitioned into elite and non-elite individuals to start a new generation.
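
A minimal sketch of one BRKGA generation step, as described above, assuming a population of plain Python lists and any fitness function to be minimized. Names and the default π_elit = 0.7 are illustrative, not prescribed by the chapter:

```python
# Sketch of one BRKGA generation: elites copied, mutants injected,
# remaining slots filled by biased mating with parameterized uniform
# crossover. 'fitness' maps a random-key vector to a value to minimize.

import random

def next_generation(pop, fitness, n_elit, n_mut, n_key, pi_elit=0.7):
    pop = sorted(pop, key=fitness)                 # best individuals first
    elite, non_elite = pop[:n_elit], pop[n_elit:]
    new_pop = [list(e) for e in elite]             # elites copied unchanged
    # mutants: fresh random-key vectors, as in the initial population
    for _ in range(n_mut):
        new_pop.append([random.random() for _ in range(n_key)])
    # offspring: one elite and one non-elite parent per child
    while len(new_pop) < len(pop):
        e = random.choice(elite)                   # elite parent
        ne = random.choice(non_elite)              # non-elite parent
        child = [e[l] if random.random() < pi_elit else ne[l]
                 for l in range(n_key)]            # parameterized uniform crossover
        new_pop.append(child)
    return new_pop
```
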

A BRKGA searches the solution space of the combinatorial optimization problem indirectly by searching the continuous n_key-dimensional hypercube, using the decoder to map solutions in the hypercube to solutions in the solution space of the combinatorial optimization problem, where the fitness is evaluated.

To specify a biased random-key genetic algorithm, we need only specify how solutions are encoded and decoded and how their fitness values are computed. Next, we show how resource-constrained multi-project scheduling solutions are encoded and decoded, and how their fitness is evaluated.

4.3 Chromosome Representation

A chromosome represents a solution to the problem and is encoded as a vector of random keys. In a direct representation, a chromosome represents a solution of the original problem and is usually called a genotype; in an indirect representation it does not, and special procedures are needed to derive from it an actual solution, called a phenotype.

In the present context, the direct use of schedules as chromosomes is too complicated to represent and manipulate. In particular, it is difficult to develop corresponding crossover and mutation operations. Instead, solutions are represented indirectly by parameters that are later used by a schedule generator to obtain a solution. To obtain the solution (phenotype) we use the parameterized active schedule generator described in Sect. 31.5. Each solution chromosome is made of 2n + m genes, where n is the number of activities and m is the number of projects:

$$\displaystyle{\mathit{Chromosome} = (\underbrace{\mathop{\mathit{gene}_{1},\ldots,\mathit{gene}_{n}}}\limits _{\mathrm{Priorities}},\underbrace{\mathop{\mathit{gene}_{n+1},\ldots,\mathit{gene}_{2n}}}\limits _{\mathrm{DelayTimes}},\underbrace{\mathop{\mathit{gene}_{2n+1},\ldots,\mathit{gene}_{2n+m}}}\limits _{\mathrm{ReleaseDates}})}$$

The first n genes are used to determine the priorities of each activity. The genes n + 1 to 2n are used to determine the delay time used at each of the n iterations of the scheduling procedure, which schedules one activity per iteration. The last m genes are used to determine the release dates of each of the m projects.
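
The three-part layout above can be sketched as a simple slicing helper (the gene values in the usage example are arbitrary):

```python
# Sketch: split a 2n + m chromosome into its three gene groups,
# following the layout described in the text.

def split_chromosome(chrom, n, m):
    assert len(chrom) == 2 * n + m
    priority_genes = chrom[:n]          # genes 1..n: activity priorities
    delay_genes = chrom[n:2 * n]        # genes n+1..2n: delay times
    release_genes = chrom[2 * n:]       # genes 2n+1..2n+m: release dates
    return priority_genes, delay_genes, release_genes
```
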

4.4 Decoding of the Activity Priorities

As mentioned in Sect. 31.4.3, the first n genes are used to obtain activity priorities. Activity priorities are values between 0 and 1. The higher the value, the higher the priority will be. Below, we present the decoding procedure for the activity priorities.

Let \(\mathit{TF}_{j} = d_{q(j)} - l_{j}\) represent the slack of activity j, where d q(j) is the due date of the project q(j) to which activity j belongs and l j is the length of the longest path from the beginning of activity j to the end of project q(j). Furthermore, let TF max be the maximum slack amongst all activities of all projects.

The priority of each activity j is given by the following expression, which produces priority values between 70 and 100 % of the normalized slack:

$$\displaystyle{\varPi _{j} = \frac{\mathit{TF}_{j}} {\mathit{\mathit{TF^{max}}}} \times \left (0.7 + 0.3 \times \mathit{gene}_{j}\right )}$$
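
As a sketch, the decoding expression above translates directly into code; the slack and gene values used below are illustrative:

```python
# Sketch of the priority decoder: maps activity slack TF_j and gene_j
# to a priority in [0.7, 1.0] times the normalized slack TF_j / TF_max.

def decode_priority(TF_j, TF_max, gene_j):
    return (TF_j / TF_max) * (0.7 + 0.3 * gene_j)
```
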

4.5 Decoding of the Delays

Genes n + 1 to 2n are used to determine the delay times Δ i used at each scheduling iteration i. The delay time for iteration i is calculated by

$$\displaystyle{\varDelta _{i} = \mathit{gene}_{i} \times 1.5 \times \mathit{p^{max}}}$$

where p max is the maximum duration amongst all activity durations. The factor 1.5 was obtained after experimenting with values between 1.0 and 2.0 in increments of 0.1.

4.6 Decoding of the Release Dates

The last m genes of the chromosome (genes 2n + 1 to 2n + m) are used to determine the release date of each project q ∈ Q, via the following decoding expression:

$$\displaystyle{\mathit{ES}_{q} = \mathit{\overline{ES}}_{q} + \mathit{gene}_{2n+q} \times \left (\mathit{d_{q}} -\mathit{\overline{ES}}_{q}\right )}$$

Consequently, the duration of the initial activity of each project q is equal to

$$\displaystyle{p_{\alpha _{q}} = \mathit{ES}_{q}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(q \in Q)}$$
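
The release-date decoding and the corresponding dummy-activity duration can be sketched together; the bounds and gene value below are illustrative:

```python
# Sketch of the release-date decoder: the release date ES_q lies between
# the earliest possible release date ES_bar_q and the due date d_q, and
# is then assigned as the duration of the project's dummy start activity.

def decode_release(gene, ES_bar_q, d_q):
    ES_q = ES_bar_q + gene * (d_q - ES_bar_q)   # release date in [ES_bar_q, d_q]
    p_alpha_q = ES_q                            # duration of dummy start activity
    return ES_q, p_alpha_q
```
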

5 Schedule-Generation Procedure

The set of active schedules is usually very large and contains many schedules with relatively large delay times, which are therefore of poor quality in terms of the performance measure. To reduce the solution space, parameterized active schedules, introduced by Gonçalves and Beirão (1999) and Gonçalves et al. (2005), are used. The basic idea of parameterized active schedules consists in controlling the delay time allowed for each activity. By controlling the maximum delay time allowed, one can reduce or increase the solution space. A maximum delay time equal to zero restricts the solution space to non-delay schedules, while a maximum delay time equal to infinity allows general active schedules.

The procedure used to construct parameterized active schedules is based on a schedule-generation scheme that proceeds by time-increments. For each iteration μ, there is a scheduling time t μ . All activities which are active at t μ form the active set, i.e.,

$$\displaystyle{\mathcal{A}_{\mu } =\, \left \{j \in V \,\,\vert \,\,C_{j} - p_{j} \leq t_{\mu } < C_{j}\right \}}$$

The remaining resource capacity of resource k at instant time t μ is given by

$$\displaystyle{R'_{k}(t_{\mu }) = R_{k} -\sum \limits _{j\in \mathcal{A}_{\mu }}r_{\mathit{jk}}}$$

All activities that have been scheduled up to iteration μ are contained in the set \(\mathcal{C}_{\mu }\) and Γ μ denotes the set of finish times of the activities in \(\mathcal{C}_{\mu }\). Let Δ μ be the delay time associated with the activity being scheduled at iteration μ, and let the set \(\mathcal{D}_{\mu }\) comprise all activities which are precedence-feasible in the interval [t μ , t μ +Δ μ ], i.e.,

$$\displaystyle{\mathcal{D}_{\mu } = \left \{j \in V \setminus \mathcal{C}_{\mu -1}\,\,\vert \,\,C_{i} \leq t_{\mu } +\varDelta _{\mu }\,\,\,\,\forall i \in \mathit{Pred}(j)\right \}}$$

The algorithmic description of the schedule-generation scheme used to generate parameterized active schedules is given by the pseudocode shown in Fig. 31.2.

Fig. 31.2 Pseudocode to generate parameterized active schedules

The basic idea of parameterized active schedules is incorporated in the selection step of the procedure, i.e., in the step

$$\displaystyle{j^{{\ast}}:=\mathop{ \mbox{ argmax}}\limits _{ j\in \mathcal{D}_{\mu }}\,\left \{\,\,\varPi _{j}\,\,\right \}}$$

The set \(\mathcal{D}_{\mu }\) forces the selection to be made only amongst activities which will have a delay smaller than or equal to the maximum allowed delay.
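
Since the pseudocode of Fig. 31.2 is only referenced here, the following is a simplified, single-resource sketch of the scheme: at each iteration it forms the set of candidates whose predecessors finish by t_μ + Δ_μ, selects the candidate with the highest priority, and starts it at the earliest precedence- and resource-feasible time. The data layout (lists plus a predecessor dictionary) is an assumption, and the chapter's scheme handles multiple resources:

```python
# Simplified single-resource sketch of the parameterized active SGS.
# p[j]: duration; r[j]: resource usage; R: capacity; pred[j]: predecessors;
# Pi[j]: priority of activity j; Delta[mu]: delay allowed at iteration mu.

from collections import defaultdict

def generate_schedule(p, r, R, pred, Pi, Delta):
    n = len(p)
    finish = {}                                   # C_j of scheduled activities
    usage = defaultdict(int)                      # resource usage per time instant
    for mu in range(n):
        # eligible: unscheduled activities with all predecessors scheduled
        elig = [j for j in range(n)
                if j not in finish and all(i in finish for i in pred[j])]
        # scheduling time t_mu: earliest possible start among eligible activities
        t_mu = min(max((finish[i] for i in pred[j]), default=0) for j in elig)
        # D_mu: eligible activities startable within [t_mu, t_mu + Delta_mu]
        D = [j for j in elig
             if max((finish[i] for i in pred[j]), default=0) <= t_mu + Delta[mu]]
        j_star = max(D, key=lambda j: Pi[j])      # highest-priority candidate
        # earliest start after predecessors with enough remaining capacity
        s = max((finish[i] for i in pred[j_star]), default=0)
        while any(usage[t] + r[j_star] > R for t in range(s, s + p[j_star])):
            s += 1
        for t in range(s, s + p[j_star]):
            usage[t] += r[j_star]
        finish[j_star] = s + p[j_star]
    return finish
```

With Δ_μ = 0 for all iterations this sketch produces non-delay schedules, illustrating how the maximum allowed delay parameterizes the solution space.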

Table 31.1 Range of parameters for the evolutionary strategy

Parameters \(\varPi _{j}\) (the priority of activity j) and Δ μ (the delay allowed for the activity scheduled at iteration μ) are supplied by the genetic algorithm.

6 Computational Results

In the next subsections we present the details of the computational experiments used to illustrate the effectiveness of the algorithm described in this chapter.

6.1 Test Problems

The test problems used in the computational experiments are the ones proposed by Gonçalves et al. (2008). These test problems have known optimal solutions equal to zero for the measure of performance described in Sect. 31.2 (i.e., tardiness = 0, earliness = 0, and flow time deviation = 0).

Five types of multi-project instances were used, with 10, 20, 30, 40, and 50 single-project instances, each with 120 activities. For each problem type, 20 instances were used. Since each single-project instance has 120 activities, the multi-project instances have 1,200, 2,400, 3,600, 4,800, and 6,000 activities, respectively. Each activity can use up to four resources. The average number of overlapping projects in execution can be 3, 6, 9, 12, and 15. Table 31.3 shows the combinations of the number of overlapping projects used for the problems with 10, 20, 30, 40, and 50 single-projects.

6.2 BRKGA Configuration

Although there is no straightforward way to configure the parameters of a genetic algorithm, our past experience with genetic algorithms based on the same evolutionary strategy (see Gonçalves and Almeida 2002; Gonçalves and Resende 2004, 2011b, 2012, 2013, 2014; Gonçalves et al. 2005, 2008) has shown that good results can be obtained with the values of n_elit, n_mut, and crossover probability (π_elit) shown in Table 31.1.

For the population size we obtained good results by indexing it to the size of the problem, i.e., using small populations for small problems and larger populations for larger problems. With this past experience in mind, and in order to obtain a reasonable configuration, we conducted a factorial analysis on a small pilot set of problem instances not included in the experimental tests. The configuration shown in Table 31.2 was the best in terms of the sum of fitness values and the number of best results, and was held constant for all problem instances in the experiments. The experimental results demonstrate that this configuration not only provides high-quality solutions but is also very robust.

Table 31.2 Configuration of the BRKGA for the computational experiments

6.3 Results

Table 31.3 summarizes the experimental results. It lists the fitness, earliness, tardiness, and flow time deviation for each problem type. Let m be the number of projects in each problem instance. Averages and standard deviations were computed for the 20 problem instances included in each problem type. Columns Avg1 and SD1 list averages and standard deviations for the expression

$$\displaystyle{ \frac{1} {m}\,\left (w^{T}\,\sum \limits _{ i=1}^{m}T_{ i}^{3}\,\,\, +\,\,\, w^{E}\,\sum \limits _{ i=1}^{m}E_{ i}^{2}\,\,\, +\,\,\, w^{\mathit{FD}}\,\sum \limits _{ i=1}^{m}\mathit{FD}_{ i}^{2}\right )}$$

Columns Avg2 and SD2 list, respectively, averages and standard deviations for the expression

$$\displaystyle{ \frac{1} {m}\,\sum \limits _{i=1}^{m}E_{ i}}$$

Columns Avg3 and SD3 list, respectively, averages and standard deviations for the expression

$$\displaystyle{ \frac{1} {m}\,\sum \limits _{i=1}^{m}T_{ i}}$$

and columns Avg4 and SD4 list, respectively, averages and standard deviations for the expression

$$\displaystyle{ \frac{1} {m}\,\sum \limits _{i=1}^{m}\mathit{FD}_{ i}}$$
Table 31.3 Experimental results

The last column, with heading % Improv, represents the percentage improvement of the average last-generation fitness values over those of the first generation, i.e.,

$$\displaystyle{{100\,\%}\, \times \frac{\left (\text{Fitness at first generation} - \text{Fitness at last generation}\right )} {\text{Fitness at first generation}} }$$
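
For concreteness, the % Improv column corresponds to the following computation (the fitness values in the test are illustrative, not taken from the experiments):

```python
# Sketch: percentage improvement of the last-generation fitness over
# the first-generation fitness, as defined in the expression above.

def pct_improv(fitness_first, fitness_last):
    return 100.0 * (fitness_first - fitness_last) / fitness_first
```
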

Table 31.3 shows that all averages of the tardiness are close to zero and that the average values of the earliness are also close to zero for all instances with more than three overlapping projects. As expected, the fitness obtained gets smaller (i.e., improves) as the number of overlapping projects increases. This is because, as the number of overlapping projects increases, so does the flexibility in terms of capacity, allowing more possibilities for finding good schedules. Finally, the % Improv values show that the BRKGA achieves a substantial improvement in the quality of the solutions; sometimes the average percentage improvement is as large as 100 %.

The computational experiments were run on a PC with a 1.33 GHz AMD Thunderbird CPU on the MS Windows Me operating system and the algorithm was implemented in Visual Basic 6.0. Table 31.4 presents the average computational times, in seconds, for each problem instance and for 50 generations.

Table 31.4 Average elapsed time for 50 generations

7 Conclusions

This chapter presents the Basic Multi-Project Scheduling Problem and a solution approach using a biased random-key genetic algorithm. The chromosome representation of the problem is based on random keys. The schedules are constructed using a schedule-generation scheme that generates parameterized active schedules based on priorities, delay times, and release dates generated by the biased random-key genetic algorithm.

The approach is tested on a set of test problems with 10, 20, 30, 40, and 50 projects (having 1,200, 2,400, 3,600, 4,800, and 6,000 activities, respectively). In the computational experiments, the algorithm obtained values near the optimum (zero), therefore validating the effectiveness of the proposed approach.