
1 Introduction

In the flexible job shop problem (FJSP) a given set of jobs has to be scheduled. Each job consists of an ordered set of operations and for each operation a set of different machines is available to carry out the required processing. In most papers that address this classical version of the flexible job shop problem some given economic or production criteria like cost, total (weighted) tardiness or makespan have to be optimized while meeting some production constraints like adhering to the sequence of operations or preventing pre-emption.

In green scheduling this problem formulation is extended by considering additional aspects like energy cost, energy demand or carbon footprint. These additional aspects can be integrated via additional constraints or by adapting the objective function, for example in a multi-objective optimization approach. In this paper a hierarchical two-objective approach is pursued for a flexible job shop problem. This approach was motivated by a case study carried out at a mechanical engineering company. Besides the primary objective function that minimizes the makespan, an additional secondary objective function that minimizes peak energy demand (peak load) is introduced. We assume that energy demand is determined solely by electricity demand. Other forms of energy such as thermal energy or compressed air are not considered. As for the classical FJSP, the operations can be carried out on different machines, which, however, have different energy demands.

Obviously, the two objective functions are in conflict with one another. For example, in order to minimize peak energy demand, it could be advantageous not to use machines with high energy demand at the same time. However, not operating these machines in parallel can lead to an increased makespan. In order to ensure high capacity utilization and adherence to delivery dates, makespan minimization is often the most important objective of a company.

The motivation for considering peak load minimization as the secondary objective is that in the German electricity market bulk energy consumers such as industrial companies are mainly charged according to their peak energy demand, as power providers have to guarantee that peak energy demand can be satisfied. Thus, minimizing peak demand results in lower energy costs for a company. Merely minimizing the peak energy demand makes no sense from an economic point of view, since the proportion of energy costs in relation to total costs is low and, as described above, other criteria must also be considered in production planning.

The remainder of the paper is organized as follows. The relevant literature on the problem under consideration is reviewed in the next section. In the third section the problem is formulated as a non-linear integer programming problem. This problem formulation will later be used to evaluate the performance of a solution approach that is presented in section four. The main focus of the proposed approach is to minimize the peak energy demand for a given makespan. The problem of minimizing the makespan in a flexible job shop is not addressed in this paper; we refer to the large number of articles in the literature addressing that problem (e.g. [1, 2]). Results of comprehensive tests on extended benchmark problems taken from the literature are discussed in section five. The last section summarizes the main results and puts them into perspective.

2 Literature Review

Below we review the literature on the flexible job shop problem with respect to papers dealing with energy aspects. In addition, we will also discuss the literature on minimizing peak energy demand. As the number of papers addressing this objective for the flexible job shop problem is rather limited, we will discuss peak load minimization for different variants of scheduling problems. A comprehensive survey on the literature on different types of green scheduling problems like single or parallel machine problems, job shop problems as well as flow shop problems can be found in the paper of [16].

2.1 Green Scheduling Approaches for Flexible Job Shops

The flexible job shop problem, in which operations can be carried out on different machines that also have different energy demands, seems predestined for the consideration of energy aspects. Thus, several authors have recently addressed green scheduling for different variants of the flexible job shop problem. Surveys on flexible job shop approaches can be found in the papers of [8] or [23]. The latter also includes green scheduling approaches. Besides makespan optimization or minimization of total tardiness, most approaches use minimization of total energy demand (or total energy cost) as the secondary objective in a multi-objective optimization approach.

Many papers on green scheduling assume that energy demand at a given time depends on the state of the machine during that time. Therefore, states like idle, setup, loading, warm-up or processing are considered. For most of these states, the energy demand is known in advance and constant over time for each operation. Only the total energy demand during idle times is not known in advance, as the length of idle times depends on the resulting schedule. Thus, in many approaches the objective of minimizing total energy demand can be reduced to minimizing idle energy demand or even idle times (cf. the approaches of [10, 15, 18, 19, 21, 24, 34, 39, 42,43,44]).

In some approaches the production speed for the operations is an additional decision variable, for example in the articles of [42] or [15]. With a higher production speed processing times for operations can be reduced, resulting for example in a lower makespan, but energy demand is increased.

2.2 Approaches Minimizing Peak Energy Demand

Peak energy demand aspects for different types of scheduling problems have recently been addressed by several authors. In these approaches, peak energy demand is restricted either by a constraint that imposes a given maximum value (threshold) in every single period or by an objective function that minimizes peak energy demand.

[4, 27] as well as [36] addressed problems from the process industry. Single and parallel machine scheduling problems were discussed in the articles of [37] and [3] as well as [40], respectively. [14] as well as [25] addressed the general flow shop problem, whereas [6] and [12, 13] focused on flexible flow shops. Peak energy demand minimization for job shop problems was discussed by several authors, e.g. [32, 41, 45] (reactive scheduling), [22] and [26].

As mentioned in Sect. 1, when peak energy demand aspects are considered, the energy demand of a machine must be modeled more precisely. The same applies if energy prices vary over time, e.g. for time-of-use tariffs. Therefore, [38] proposed using energy load profiles instead of average energy demand values particularly when considering peak energy demand. Furthermore, [35] recommended a discretization of the energy demand as it avoids the modelling of complex functions and thus reduces the problem’s complexity.

Thus, it is interesting to see that in most papers energy demand solely depends on factors such as the operation, the machine, the machine load, the state of the machine or the chosen speed of the machine, but is kept constant over the processing time (cf. [3, 4, 6, 12,13,14, 25, 26, 29, 30, 37, 40]). Only a few authors consider energy demand that fluctuates over time.

[27] were the first to examine variable energy demand (steam energy) in their simulation model for the process industry. [36] consider varying energy demand while processing a job for a batch production problem in the process industry. [45] and [22] divided each operation of their job shop scheduling problem into two parts. Energy demand is constant within each part, but the demand of the first part is higher than that of the second. [32] model the energy demand of machines as an integral of the energy over the processing time. The authors address a reactive job shop scheduling problem for a flexible manufacturing system. [41] approximate the energy profile of a machine for a job shop problem by a mathematical power series.

In summary, it can be stated that the flexible job shop problem with different energy-oriented objectives and restrictions has been dealt with in the literature in recent years. The main focus, however, was on minimizing total energy consumption. There is also extensive literature on the topic of minimizing peak energy load, covering a variety of problem settings. In contrast, the intersection of the two subject areas, green scheduling for the flexible job shop problem and minimization of peak energy load, is hardly considered. Furthermore, where it is considered at all, optimization models other than the problem presented in Sect. 1 are discussed. This is surprising insofar as peak load minimization is particularly promising precisely in this setting, where the machines that can process an operation have different energy demands and the energy demand also changes over the course of the processing.

Below we will introduce a new solution approach to the flexible job shop problem in which we model the operation-dependent energy demand of the machines by step functions. As objective functions we will consider minimization of the makespan and of peak energy demand. We will implement a hierarchical approach minimizing in the first stage the time-based objective function and in the second stage peak energy demand. As far as the author is aware, this question has so far not been investigated for the flexible job shop problem. Similar optimization approaches have only been suggested for different scheduling problems: [6] proposed this hierarchical approach for the flow shop problem; [3] applied it to the parallel machine problem.

3 Problem Formulation

The problem under consideration consists of n jobs that have to be scheduled on a set of m machines. We will pursue a hierarchical two-objective optimization approach. The primary objective function minimizes the makespan. Minimization of peak energy demand serves as a secondary objective. Each job j consists of \(h_j\) operations. For each operation \(O_{j,h}\) a subset of the machines is permitted to carry out the operation. We assume that each operation can be processed by only one machine at a time and that each machine can only carry out one operation at a time. Pre-emption is not allowed.

Before stating the problem formally, we introduce some notation in Table 1. In the problem formulation used here, we dispense with time-indexed variables. The variables used instead lead to a non-linear constraint, but they increase comprehensibility and make clear how the sub-problems are separated from one another within the framework of the hierarchical solution approach. It should also be noted that the use of time-indexed variables does not improve the solvability of the problem, since the number of variables increases with the number of periods. For example, in the test cases used in Sect. 5, this leads to a few million variables.

Table 1. Parameters and decision variables used
$$\begin{aligned} min \varGamma \end{aligned}$$
(1)
$$\begin{aligned} min \varPi \end{aligned}$$
(2)

The two objective functions (1) and (2) minimize the makespan and the peak energy demand, respectively, subject to the following constraints.

$$\begin{aligned} \sum _{i=1}^{M} y_{i,j,h}=1 \quad j=1,...,N; \quad h=1,...,h_j \end{aligned}$$
(3)
$$\begin{aligned} \begin{aligned} y_{i,j,h} \le a_{i,j,h} \quad i=1,...,M; \quad j=1,...,N; \quad h=1,...,h_j\ \end{aligned} \end{aligned}$$
(4)
$$\begin{aligned} \sum _{j=1}^{N} \sum _{h=1}^{h_j} x_{i,j,h,k}\le 1 \quad i=1,...,M; \quad k=1,...,l \end{aligned}$$
(5)
$$\begin{aligned} \begin{aligned} \sum _{j=1}^{N} \sum _{h=1}^{h_j} x_{i,j,h,k} \ge \sum _{j=1}^{N} \sum _{h=1}^{h_j} x_{i,j,h,k+1} \quad i=1,...,M; \quad k=1,...,l-1 \end{aligned} \end{aligned}$$
(6)
$$\begin{aligned} \begin{aligned} \sum _{k=1}^{l} x_{i,j,h,k}=y_{i,j,h} \quad i=1,...,M; \quad j=1,...,N; \quad h=1,...,h_j \end{aligned} \end{aligned}$$
(7)

Constraint (3) guarantees that each operation \(O_{j,h}\) is assigned to exactly one machine. This assignment can only take place if the corresponding operation can be performed on the considered machine i. This is imposed by constraint (4). The following constraint (5) limits the number of operations that can be assigned to the k-th position of a machine to at most one. By constraint (6) the positions of a machine are assigned sequentially. Note that this constraint is not strictly required, as one can always find an optimal solution that satisfies it. Nevertheless, it limits the number of possible assignments of operations to machine positions. Finally, each operation assigned to a machine must also be assigned to one position in the sequence of operations on that machine; cf. constraint (7).

$$\begin{aligned} \begin{aligned} t_{j,h}+ \sum _{i=1}^{M} y_{i,j,h} \cdot p_{i,j,h} \le t_{j,h+1} \quad j=1,...,N; \quad h=1,...,h_j-1 \end{aligned} \end{aligned}$$
(8)
$$\begin{aligned} \begin{aligned} TM_{i,k} \le t_{j,h} + (1- x_{i,j,h,k}) \cdot L \\ \quad i=1,...,M; \quad j=1,...,N; \quad h=1,...,h_j; \quad k=1,...,l \end{aligned} \end{aligned}$$
(9)
$$\begin{aligned} \begin{aligned} t_{j,h} \le TM_{i,k} + (1- x_{i,j,h,k}) \cdot L \\ \quad i=1,...,M; \quad j=1,...,N; \quad h=1,...,h_j; \quad k=1,...,l \end{aligned} \end{aligned}$$
(10)
$$\begin{aligned} \begin{aligned} TM_{i,k} + \sum _{j=1}^{N} \sum _{h=1}^{h_j} p_{i,j,h} \cdot x_{i,j,h,k} \le TM_{i,k+1} \\ \quad i=1,...,M; \quad k=1,..., l-1 \end{aligned} \end{aligned}$$
(11)

Constraint (8) ensures that the start time of operation \(O_{j,h+1}\) is greater than or equal to the completion time of the previous operation of that job j. The following two constraints (9) and (10) guarantee the consistency of the time variables of operation \(O_{j,h}\), i.e. \(t_{j,h}\), with the time variables of machine i, i.e. \(TM_{i,k}\). Both constraints are only restrictive in the event that the operation is carried out on the considered machine as the k-th operation, i.e. if \(x_{i,j,h,k}=1\). Finally, constraint (11) guarantees that the time span between the starting times of two consecutive operations on a machine is large enough to carry out the operation. Again, this constraint is only restrictive if operation \(O_{j,h}\) is the k-th operation on machine i.

$$\begin{aligned} \begin{aligned} \sum _{i=1}^{M} \sum _{j=1}^{N} \sum _{h=1}^{h_j} \sum _{k=1}^{l} \sum _{d=1}^{p_{i,j,h}} x_{i,j,h,k} \cdot w_{i,k,t-d+1} \cdot E_{i,j,h}(d) \le \varPi \\ \quad t=1,...,T \end{aligned} \end{aligned}$$
(12)
$$\begin{aligned} \begin{aligned} t_{j,h_j}+\sum _{i=1}^{M} y_{i,j,h_j} \cdot p_{i,j,h_j} \le \varGamma \quad j=1,...,N \end{aligned} \end{aligned}$$
(13)

The non-linear constraint (12) imposes an upper bound on the energy demand for every single period of the planning horizon. If operation \(O_{j,h}\) is assigned to the k-th position in the sequence of machine i, i.e. \(x_{i,j,h,k}=1\), and the starting time of that operation was in period \(t-d+1\), i.e. \(w_{i,k,t-d+1}=1\), then an amount of \(E_{i,j,h}(d)\) must be added to the total energy demand in period t. In addition, constraint (13) ensures that the completion time of every job does not exceed the makespan \(\varGamma \).

$$\begin{aligned} t_{j,h} \ge 0 \quad j=1,...,N; \quad h=1,...,h_j \end{aligned}$$
(14)
$$\begin{aligned} TM_{i,k} \ge 0 \quad i=1,...,M; \quad k=1,...,l \end{aligned}$$
(15)
$$\begin{aligned} \begin{aligned} y_{i,j,h} \in \{0,1\} \quad i=1,...,M; \quad j=1,...,N; \quad h=1,...,h_j \end{aligned} \end{aligned}$$
(16)
$$\begin{aligned} \begin{aligned} x_{i,j,h,k} \in \{0,1\} \\ \quad i=1,...,M; \quad j=1,...,N; \quad h=1,...,h_j; \quad k=1,...,l \end{aligned} \end{aligned}$$
(17)
$$\begin{aligned} \begin{aligned} w_{i,k,r} \in \{0,1\} \quad i=1,...,M; \quad k=1,...,l; \quad r=0,...,T-1 \end{aligned} \end{aligned}$$
(18)

The remaining technical constraints (14) to (18) restrict the values of the decision variables.

Hereinafter we will focus on optimizing the problem with the secondary objective function, i.e. problem (2) to (18). Minimizing the first objective function corresponds to the classical FJSP, which is already an NP-hard combinatorial optimization problem. Several approaches have been proposed in the literature to approximately solve this problem. Instead of proposing a new solution algorithm for makespan minimization for the FJSP we refer to the comprehensive literature (cf. [8, 23]).

The focus of our investigation is to examine the potential of peak energy demand minimization under a strict makespan restriction. This corresponds to the hierarchical optimization approach, where minimizing the makespan is the main objective.

4 Solution Approach

The solution approach pursued in this paper is based on variable neighborhood search (VNS). VNS, originally proposed by [17, 28], is a meta-heuristic method for solving combinatorial optimization problems. A pseudo-code description of the solution approach is shown in Algorithm 1.

In a first step an initial solution is generated by solving a classical FJSP with respect to makespan minimization. For test runs carried out to assess the quality of the proposed approach we used known optimal solutions for test cases from the literature. It goes without saying that the proposed approach also works when the makespan is not set to its minimal value.

This starting solution is set as the incumbent solution s. Within the proposed approach a solution s is represented by a string that is a permutation of all operations supplemented by a feasible machine \(\mu \) and a starting time \(t_{j,h}\) for each operation \(O_{j,h}\), i.e. \(s_i(j,h,\mu ,t_{j,h})\) for \(i=1... \sum _{j=1}^{N} h_j\). One should keep in mind that in order to reduce peak energy demand it is not sufficient to consider only active schedules where all operations are scheduled as early as possible. Therefore, the starting time of each operation must also be specified in the solution string. The transformation of a solution string into a solution is carried out by the local search procedures (cf. Sect. 4.2).

It should be noted that only permutations in which the specified sequence of operations is adhered to for all jobs represent feasible solutions.

In the VNS approach a predetermined number of iterations \(i_{\text {max}}\) is carried out, each of them consisting of a shaking procedure and a local search step. In every iteration, a random solution \(s'\) is generated in the current neighborhood \(NH_g(s)\) of s (shaking). In total, \(g_{max}\) neighborhoods are available. The search starts in the first iteration with the first neighborhood, i.e. \(g:=1\). Whenever a solution s is replaced by a new solution, the value of g is reset to 1; otherwise g is increased by 1 in the subsequent iteration, i.e. \(g:=g+1\). If g exceeds the predetermined value \(g_{max}\), then g is reset to 1.

A subsequent local search procedure is applied to \(s'\) transforming this solution to \(s''\). If \(s''\) is better than s with respect to a specific objective function \(f(\cdot )\) or \(s''\) meets another acceptance criterion then \(s''\) replaces s. At the end of the VNS approach, the best solution \(s_{best}\) found during all iterations is returned as the final solution.
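The iteration scheme described above can be sketched as follows; `shake`, `local_search`, `accept` and the objective `f` are hypothetical stand-ins for the procedures detailed in Sects. 4.1 to 4.3:

```python
import random

def vns(s, f, shake, local_search, accept, i_max=20000, g_max=6):
    """Variable neighborhood search skeleton: in every iteration a random
    solution is generated in the current neighborhood (shaking) and then
    improved by local search; g is reset to 1 whenever the incumbent
    solution is replaced, otherwise the next neighborhood is tried."""
    s_best = s
    g = 1
    for _ in range(i_max):
        s_prime = shake(s, g)             # random solution in NH_g(s)
        s_double = local_search(s_prime)  # smoothing, shaving, updating machines
        if accept(s, s_double):           # improvement or relaxed acceptance
            s = s_double
            g = 1                         # restart with the first neighborhood
        else:
            g = g + 1 if g < g_max else 1
        if f(s) < f(s_best):
            s_best = s
    return s_best
```

The sketch deliberately separates the incumbent update from the tracking of the best solution \(s_{best}\), since deteriorating solutions may become incumbent under the acceptance criterion of Sect. 4.3.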

Algorithm 1. Pseudo-code of the VNS solution approach

The implemented neighborhoods, the local search procedures as well as the acceptance criteria are presented in detail below.

4.1 Implemented Neighborhoods

Neighborhoods are used to generate new, potentially better solutions. As proposed by [33] two different types of neighborhoods are used within the approach. For both types the parameter n represents the neighborhood width.

The first type of neighborhood (NH1) changes the assignment of operations to machines. Each time this neighborhood is invoked, n operations are randomly selected from the set of operations that can be processed on at least two different machines and are assigned to new machines. All eligible operations and all selectable machines for an operation have the same probability of being selected. After changing the machine assignment of n operations in the solution string, the starting times of all operations must be updated. This is done by the procedure presented in the next section.

The second type of neighborhood (NH2) changes the sequence of operations in the solution string. Whenever this neighborhood is invoked, up to n operations are shifted one after another. An operation \(O_{j,h}\) is selected by roulette wheel selection, where probabilities for selecting an operation are calculated using values \(\pi (j,h)\). When shifting operations to earlier or later periods in a classical FJSP while minimizing makespan there is always a direct relationship to the objective function value as the makespan is also measured in time periods. Unfortunately, there is no direct relationship between time and objective function value when minimizing peak energy demand is pursued. For this reason, we have to employ other mechanisms that detect which operations are difficult to plan and which are likely to cause high peak energy demands. This selection mechanism based on the \(\pi \)-values has been shown in test runs to be superior to neighborhoods in which the operations were selected according to their maximum energy demand. Variants in which operations were preferably selected that took place at peak times were also significantly worse. The \(\pi (j,h)\) values are updated by the local search procedures (cf. explanation below). The new position of the operation to be shifted is selected randomly following a uniform distribution for all positions that do not violate the processing sequence of the corresponding job. After shifting one operation the local search procedure is started in order to update the starting time of all operations in the solution string.
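The roulette wheel selection over the \(\pi \)-values can be sketched as follows; the dictionary of \(\pi \)-values keyed by operation (j, h) is an illustrative assumption:

```python
import random

def roulette_select(pi_values, rng=random):
    """Select one operation (j, h) with probability proportional to its
    pi-value; operations that repeatedly contributed to peak energy
    demand have larger pi-values and are therefore selected more often."""
    operations = list(pi_values)
    total = sum(pi_values[op] for op in operations)
    threshold = rng.uniform(0, total)
    cumulative = 0.0
    for op in operations:
        cumulative += pi_values[op]
        if cumulative >= threshold:
            return op
    return operations[-1]  # guard against floating-point round-off
```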

Because of the high computing effort for the generation of solutions, we have restricted ourselves to inserting individual operations and have neglected more complex operators in which several operations swap positions. In addition, we had to restrict the neighborhood width n to at most 3. It turned out that larger values for n no longer improved the solutions while merely increasing the required CPU times. Thus, the value of \(g_{max}\) was set to 6. The neighborhoods are applied in the following order: NH2/1, NH1/1, NH2/2, NH1/2, NH2/3, NH1/3, where the number after the slash represents the value of the neighborhood width n.

4.2 Local Search Procedures

A solution string can be transformed into a solution (Gantt chart) by scheduling the operations one by one starting with the first operation in the string. The optimal solution for minimizing the makespan always includes an active schedule in which all operations are scheduled as early as possible. To generate solutions with low makespan we limit ourselves to generating only active schedules when decoding a solution string into a solution. Therefore, each operation is scheduled as early as possible on the designated machine without any delays. It should be noted that an operation of a job can only be scheduled after the preceding operation of the job is completed. Additionally, scheduling an operation on a machine must not delay the beginning of another operation already scheduled on that machine.
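Decoding a solution string into an active schedule can be sketched as follows; the data layout with operation triples (j, h, machine) and a processing-time dictionary is an assumption for illustration:

```python
def decode_active(solution_string, proc_time):
    """Build an active schedule: every operation starts as early as
    possible, i.e. after the preceding operation of the same job and
    after the last operation already scheduled on its machine."""
    job_ready = {}      # completion time of each job's last scheduled operation
    machine_ready = {}  # completion time of the last operation on each machine
    start = {}
    for j, h, machine in solution_string:
        t = max(job_ready.get(j, 0), machine_ready.get(machine, 0))
        start[(j, h)] = t
        finish = t + proc_time[(machine, j, h)]
        job_ready[j] = finish
        machine_ready[machine] = finish
    return start
```

Since the string lists operations of a job in their technological order, taking the maximum of job and machine availability is sufficient to respect both precedence and machine-capacity constraints.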

In order to minimize the secondary objective function (peak energy demand) the following three different local search procedures have been implemented.

  • Smoothing: Delaying the start of operations

  • Shaving: Reducing energy demand in peak periods

  • Updating machines: Changing the assignment of operations to machines

They are invoked in the displayed order. It should be noted that the first two procedures do not change the sequence of operations on the machines. In all three local search procedures, shifts are carried out in such a way that the makespan of a solution \(s'\) generated by shaking does not increase. As a result, optimization is no longer limited to active schedules only.

Smoothing. Starting from a given solution string where the sequence of operations is already given for every machine, this procedure tries to reduce peak energy demand by delaying the beginning of operations. Remember that every operation has been scheduled as early as possible when decoding a solution string. If, for example, a machine is not used after the completion of an operation, the start of processing that operation may be delayed in order to reduce peak energy demand. Figure 1 shows an illustrative example of this situation with three jobs. Each job consists of up to two operations that are denoted by \(O_{j,h}\). The Gantt chart on the left shows the initial situation. Energy demand for the three jobs and total energy demand for all periods is shown in the table above the Gantt chart. The Gantt chart on the right shows the situation when the beginning of the first operation of job 1 on machine 2, i.e. operation \(O_{1,1}\), is delayed by three periods. This shift results in a reduction of peak energy demand from 14 to 10.

Fig. 1. Reducing peak energy demand from 14 (left) to 10 (right) by postponing the start of the first operation of job 1, i.e. operation \(O_{1,1}\), on machine 2 by three periods

The procedure tries to shift operations one by one, beginning with the operation that determines the makespan, i.e. the operation with the latest completion time. Thereafter, in every iteration of the smoothing procedure the operation with the latest completion time that has not yet been considered is identified, and the beginning of that operation is delayed as far as possible without increasing the makespan or delaying the next operation scheduled on that machine.
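The feasible delay of a single operation in the smoothing procedure can be sketched as follows; the function assumes that the operation's direct successors on the machine and within the job have been identified beforehand (all names are illustrative):

```python
def max_delay(finish, makespan, next_on_machine_start=None, next_in_job_start=None):
    """Largest admissible delay of an operation: the shift may neither
    exceed the given makespan nor push the operation's completion past
    the start of the next operation on the same machine or the start of
    the job's succeeding operation."""
    slack = makespan - finish
    if next_on_machine_start is not None:
        slack = min(slack, next_on_machine_start - finish)
    if next_in_job_start is not None:
        slack = min(slack, next_in_job_start - finish)
    return max(slack, 0)
```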

Shaving. This local search procedure considers the whole planning horizon. Starting from the incumbent solution \(s'\) the procedure tries to shift operations that are processed during the peak period in order to reduce peak energy demand. At the same time, however, makespan must not be increased.

Let \(t_{peak}^{'}\) denote the peak period of a solution \(s'\). The energy demand of every period t of the planning horizon, and therefore the peak period, can be calculated by the left-hand side of Eq. (12), as for a given solution \(s'\) the values of the decision variables \(x_{i,j,h,k}\) and \(w_{i,k,t-d+1}\) are known.
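For a fixed schedule, the left-hand side of Eq. (12) reduces to summing the step-function profile values of all operations running in a period; a minimal sketch under an assumed data layout:

```python
def energy_per_period(schedule, profiles, horizon):
    """schedule: {(j, h): (machine, start_period)};
    profiles: {(machine, j, h): [e_1, ..., e_p]} where e_d is the demand
    in the d-th processing period (a step function).
    Returns the demand of every period and the peak period (1-indexed)."""
    demand = [0] * horizon
    for (j, h), (machine, start) in schedule.items():
        for d, e in enumerate(profiles[(machine, j, h)]):
            demand[start + d] += e
    peak = max(demand)
    return demand, demand.index(peak) + 1
```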

Within the procedure the set P contains all operations that are carried out in solution \(s'\) during the peak period. In addition, let us denote by M(j,h) the machine an operation \(O_{j,h}\) is assigned to. The operations from set P are examined one by one, and the start of the considered operation is gradually delayed one period at a time. Note that when shifting an operation, it might also be necessary to shift the succeeding operations belonging to the same job as well as operations of other jobs that are assigned to a later position on the same machine. These resulting shifts are also considered when examining the effects on peak energy demand. Whenever shifting an operation causes the completion of an operation (either that one or a different one) to be postponed beyond the end of the specified makespan, the examination of further shifts of this operation is stopped and the starting period of that operation is reset to its original value.

A second stopping criterion is a reduction in peak energy demand. In this case the incumbent solution is replaced by the newly generated solution, the value of the peak period as well as the set P are updated and the procedure starts again in trying to postpone the beginning of operations that are processed during the newly generated peak period. Eventually, the local search procedure terminates when all operations from set P have been checked without finding any further improvement.

During the execution of the shaving procedure the values \(\pi (j,h)\) of the operations \(O_{j,h}\), which are used in the second type of neighborhood, are updated. At the beginning of the VNS approach all values \(\pi (j,h)\) are initialized with 1. Whenever an increase of the peak energy demand is observed during a shift in the shaving procedure, we identify the operations that are processed in the peak period. For each of these operations it is compared how high the energy demand at the peak period was before and after the shift. In this way, we can identify the operations that contribute to the peak energy demand. For each contributing operation we increase its value \(\pi (j,h)\) by 1. Thus, in the long run we can identify operations that are somehow critical and therefore should be considered during the shaking phase.
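One possible reading of this update rule can be sketched as follows; the criterion for a "contributing" operation, namely that its demand in the peak period grew, is an assumption for illustration:

```python
def update_pi(pi_values, peak_demand_before, peak_demand_after):
    """Raise the pi-value of every operation whose energy demand in the
    peak period grew after a shift that increased the peak; such
    operations are then preferred by the roulette wheel selection of
    neighborhood NH2. Both demand arguments map (j, h) to the
    operation's demand in the peak period."""
    for op, after in peak_demand_after.items():
        if after > peak_demand_before.get(op, 0):
            pi_values[op] = pi_values.get(op, 1) + 1
    return pi_values
```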

Updating Machines. A third local search procedure tries to reduce peak energy demand by changing the assignment of operations to machines. To this end, all operations are examined with respect to their feasible alternative machines. Additionally, the smoothing and the shaving local search procedures are invoked for each newly generated assignment. Whenever an improvement in the objective function is found, the new assignment is realized and the search starts again from the beginning. The procedure terminates when all operations have been checked and no further improvement could be found.

4.3 Acceptance Criteria

To decide whether a newly generated solution \(s''\) should be accepted, an objective function f() that consists of two summands is employed. The first summand represents the peak energy demand of the solution. The second summand penalizes makespan violations: whenever an operation is not completed within the given makespan, a penalty is incurred that is calculated by multiplying the length of the makespan violation with a penalty factor. This factor is initialized with 1. Whenever a new solution with makespan violations is accepted, the penalty factor is increased by multiplying it with a factor randomly chosen from the interval [1.05, 1.1]. In contrast, when the newly accepted solution has no makespan violations, the penalty factor is decreased by dividing it by a factor randomly chosen from the same interval. This type of acceptance criterion was first proposed by [9] for the dial-a-ride problem and has proven to be very effective in finding new feasible solutions for highly restricted optimization problems.

The newly generated solution \(s''\) replaces the incumbent solution s if \(f(s'')\) has a smaller value than f(s). Additionally, deteriorating solutions \(s''\) may also become incumbent solutions with probability \(exp(-[f(s'')-f(s_{best})]/\tau )\). The initial temperature \(\tau \) is set such that if \(f(s'')\) is \(0.5\%\) larger than \(f(s_{best})\), \(s''\) is accepted with a probability of 0.2. Thereafter, \(\tau \) is linearly decreased to 0 until the maximum number of iterations has been reached. In test runs it could be observed that after a strong reduction in the objective function value the temperature seemed to be too high, resulting in the acceptance of too many deteriorating solutions. Therefore, the temperature \(\tau \) is additionally recalculated, based on the new solution, whenever a new best solution has been found. The temperature is then again linearly decreased to 0 over the remaining number of iterations.
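The penalized objective, the adaptive penalty factor and the temperature-based acceptance can be sketched as follows (all function names are illustrative stand-ins):

```python
import math
import random

def penalized_objective(peak, violation, penalty_factor):
    """f(s): peak energy demand plus a penalty for makespan violations."""
    return peak + penalty_factor * violation

def update_penalty(penalty_factor, violated, rng=random):
    """Multiply the factor by a random value from [1.05, 1.1] after
    accepting an infeasible solution, divide by such a value after
    accepting a feasible one."""
    step = rng.uniform(1.05, 1.1)
    return penalty_factor * step if violated else penalty_factor / step

def initial_temperature(f_best):
    """Choose tau so that a solution 0.5% worse than the best is
    accepted with probability 0.2: exp(-0.005 * f_best / tau) = 0.2."""
    return -0.005 * f_best / math.log(0.2)

def accept(f_new, f_old, f_best, tau, rng=random):
    """Accept improvements always; accept deteriorations with
    probability exp(-(f_new - f_best) / tau)."""
    if f_new < f_old:
        return True
    return rng.random() < math.exp(-(f_new - f_best) / tau)
```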

5 Computational Results

The use of the proposed approach as part of the above-mentioned case study at the mechanical engineering company led to a reduction in peak energy demand of 23.6%. However, this result says little about the performance of the proposed approach in general. In order to enable a broader evaluation, modified test cases from the literature were also used (cf. [31]). To this end, the test cases were extended by adding randomly generated energy demand profiles for all operations on their eligible machines. The profiles were generated in such a way that the difference between the machines with the lowest and the highest total energy demand for each operation is at most 20%.

5.1 Benchmark Approach

In order to assess the performance of the proposed approach from Sect. 4, we implemented the following benchmark approach. To this end, the problem (2) to (18) is decomposed into two subproblems that are solved sequentially.

In the first subproblem the makespan is minimized subject to the classical FJSP constraints without considering energy demand, i.e. this problem consists of the objective function (1) and constraints (3) to (11) and (13) to (18). The optimal solution of this subproblem serves as the starting point for the second subproblem, which minimizes the objective function (2) subject to the constraints (3) to (18). With the solution of the first subproblem, the remaining problem can be simplified considerably.

This second subproblem could be solved for the test cases under consideration using the commercial solver CPLEX 12. In the following, this approach, in which both subproblems are solved sequentially, is referred to as the benchmark approach.

This benchmark approach corresponds to the procedure in many industrial companies where optimization is initially carried out subject to minimizing makespan. Only in the second step is an attempt made to minimize peak energy demand. Compared to the benchmark approach, the approach proposed in Sect. 4 possesses additional flexibility as operations might be rescheduled, processed in different orders on the machines or reassigned to other machines as long as makespan is not increased and the sequence of operations for each job is respected. The test results are intended to show whether or not this additional flexibility can be used to reduce peak energy demand.

5.2 Test Results

The proposed approach of Sect. 4 was coded in Delphi. Each test case was solved 20 times. The number of iterations was set to 20,000 in order to limit the computing effort.

Table 2 summarizes the aggregated results for the different classes of test cases. Columns two through four report the number of test cases considered, how often the proposed approach achieved a better solution than the benchmark approach, and how often both approaches led to the same result, respectively. The remaining three columns display the best, average and worst results obtained by the proposed solution approach, given as a share of the objective function value achieved by the benchmark approach. For example, if the benchmark approach achieves an average objective function value of 1000 for all test problems of this class and the value in the column min is 0.95, then the average of the best objective function values obtained by the proposed approach was 950. Thus, values in columns five through seven that are below 1 represent improvements obtained by the proposed solution approach over the benchmark.
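The relative values in these columns can be reproduced as follows (a trivial illustration with made-up numbers, not values from Table 2):

```python
def relative_value(proposed_obj, benchmark_obj):
    """Share of the benchmark objective achieved by the proposed approach.
    Values below 1 indicate an improvement over the benchmark."""
    return proposed_obj / benchmark_obj

# A benchmark average of 1000 and a column value of 0.95 correspond to
# an average best objective of 950 for the proposed approach.
```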

Table 2. Aggregated results for the test cases of [5, 7, 11, 20]

The results can be summarized as follows. The proposed algorithm clearly outperformed the benchmark approach for most of the 173 test cases. For five test cases, namely three test cases of Dauzère-Pérès and Paulli and one test case each of Brandimarte and Hurink et al., both approaches achieved the same solution. For 14 test cases a better solution than the benchmark could not be found in any of the twenty test runs: three test cases of Chambers and Barnes and eleven test cases from the edata set of Hurink et al. In the remaining 159 test cases, however, almost consistently better results were achieved compared to the benchmark approach, in the majority of cases with a very clear difference, whether measured by the best or worst individual values or by the average values over all 20 test runs.

Of the test cases of [20], the edata test cases were the most difficult to solve. For the rdata and even more so for the vdata test cases, more machines are eligible for each operation. Thus, this kind of flexibility can help to reduce peak energy demand considerably.

Finally, further experiments with the proposed approach were carried out to investigate whether, and if so to what extent, relaxing the strict limitation of the makespan has an impact on peak energy demand. To this end, the optimal value of the makespan, i.e. the value of \(\varGamma \) in Eq. (13), was increased in 1% steps from 1% to 15%. Test runs were carried out for 30 selected test cases from the edata set of Hurink et al., namely abz5 and abz6, la01 to la20, mt06, mt10 and mt20, and orb1 to orb10. Each test case was solved ten times for each step. With a larger makespan, peak energy demand can be reduced considerably. The maximum decrease in peak energy demand is almost 20% when the makespan is increased by 10%. Further reductions in peak energy demand could not be realized even with higher increases in makespan.
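The relaxation schedule used in these experiments can be sketched as follows (a hypothetical helper; only the 1% step logic is taken from the description above):

```python
def relaxed_makespan_limits(gamma_opt, max_pct=15):
    """Makespan limits Gamma * (1 + p/100) for p = 1, ..., max_pct,
    i.e. the optimal makespan relaxed in 1% steps up to max_pct%."""
    return [gamma_opt * (1 + p / 100) for p in range(1, max_pct + 1)]
```

For each limit in this list, the proposed approach is rerun with the relaxed bound in place of \(\varGamma \) in Eq. (13).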

6 Conclusion

In this paper an approach to minimize peak energy demand in a flexible job shop under narrow makespan specifications was presented. In particular, the last results of Sect. 5.2 show that peak energy demand can be reduced if the makespan is increased. Obviously, there is a tradeoff between these two objectives. But, as the test results also show, there is considerable potential for minimizing peak energy demand even when the makespan is kept at its lowest value. The greater the number of alternative machines (with different energy demands) available for processing individual operations, the greater this potential is.

Another finding from the computational tests is that a greater reduction in peak energy demand is possible if both objectives are considered at the same time. This becomes clear by comparing the results of the proposed algorithm with the benchmark approach. In the latter, the optimal solution for the objective of minimizing the makespan was first determined and then, based on this solution, peak energy demand was minimized in the second step. Although optimal solutions were generated for both subproblems, the approach presented in Sect. 4 outperformed the benchmark considerably. This is because the VNS approach can change the assignment of operations to machines, as well as the order of operations on the machines, provided that the makespan does not increase. This additional flexibility alone is sufficient to reduce peak energy demand significantly.

Apart from very energy-intensive industries, at today's energy prices many companies do not view minimizing energy costs as a top priority. The proposed optimization approach could nevertheless be of interest to such companies, especially if they want to cut peak energy demand for environmental reasons or if energy prices rise. Another application would be demand-side management, in which the local search procedures used in this article could also be employed to smooth energy demand.

Improvements would also be possible on the algorithmic side. Given the high computing times caused by the optimization of the energy demand, an obvious option would be to parallelize the computation.