1 Introduction

In the specialized literature on constrained numerical optimization problems with meta-heuristics, evolutionary algorithms (EAs) stand out as a valid option to solve them (Coello Coello 2002; Mezura-Montes 2009; Michalewicz and Schoenauer 1996). However, in recent years, the presence of some kind of dynamism in the objective function and/or the constraints has raised the interest of researchers and practitioners (Mezura-Montes and Coello 2011; Nguyen et al. 2012; Nguyen and Yao 2012, 2013). This type of problem is known as the dynamic constrained optimization problem (DCOP) (Nguyen et al. 2012; Nguyen and Yao 2009, 2012, 2013; Singh et al. 2009). A DCOP could be considered as a single search problem in which the objective function and/or the constraints change through time. Given those conditions, traditional EAs must be adapted to identify changes in the fitness landscape and/or in the feasible region so as to be able to find new feasible optimal solutions (Nguyen et al. 2012; Nguyen and Yao 2012, 2013).

The specialized literature on EAs shows a significant amount of research in dynamic unconstrained optimization, e.g., multimodal functions (Rohlfshagen and Yao 2013; Filipiak and Lipinski 2014; Li et al. 2014, 2015; Mukherjee et al. 2016; Umenai et al. 2016; Zhang et al. 2016; Yu and Wu 2016; Pekdemir and Topcuoglu 2016) and multi-objective optimization (Liu et al. 2014; Azzouz et al. 2015; Martínez-Peñaloza and Mezura-Montes 2015; Jiang and Yang 2016). Nevertheless, in the presence of dynamic constraints in continuous search spaces, the research is still scarce, as mentioned in a survey on evolutionary constrained optimization (Mezura-Montes and Coello 2011). In a recent review on EAs to solve DCOPs, the genetic algorithm (GA) appears as the most popular algorithm. However, there are new proposals based on other bio-inspired algorithms, e.g., differential evolution (DE) (Ameca-Alducin et al. 2014), the gravitational search algorithm (GSA) (Pal et al. 2013b), evolutionary algorithms (Sharma and Sharma 2012b), the T cell artificial immune system (Aragón et al. 2013) and multi-population algorithms (Bu et al. 2016). To deal with a dynamic constrained search space, two main types of mechanisms have been added to the abovementioned algorithms: (1) introduction and maintenance of diversity, as in a GA with elitism and random immigrants (RIGAElit) (Grefenstette 1992), a GA with elitism and hypermutation (HyperMElit) (Cobb 1990), a dynamic constrained T cell (DCTC) (Aragón et al. 2013), the intelligent constraint handling evolutionary algorithm (ICHEA) (Sharma and Sharma 2012b), a dynamic species-based particle swarm optimization (DSPSO) (Bu et al. 2016) and a DE with two variants (DE/rand/1/bin and DE/best/1/bin) called dynamic differential evolution with combined variants (DDECV) (Ameca-Alducin et al. 2014), and (2) repair mechanisms within a GA (GA \(+\) Repair) (Nguyen and Yao 2012), within DE (DE \(+\) Repair) (Pal et al. 2013a) and also within GSA (GSA \(+\) Repair) (Pal et al. 2013b).

Recently, an improved version of DDECV, called DDECV \(+\) Repair (Ameca-Alducin et al. 2015a, b), was proposed, which incorporates a simple but effective repair method based on the differential mutation operator and resampling. The novelty of this repair method with respect to the abovementioned ones is that no reference feasible solutions are required; instead, the differential mutation with randomly generated vectors is used to produce feasible solutions.

Even though DDECV \(+\) Repair was already proposed, its empirical validation was very limited [i.e., just a couple of measures were adopted in the experiments (Ameca-Alducin et al. 2015a, b)]. Therefore, this work aims to provide an in-depth analysis of DDECV \(+\) Repair, where the following unexplored issues are investigated:

  1. The role of each one of its elements in the search behavior and expected performance.

  2. The effects of (a) the change frequency and (b) the severity of the change, in both the objective function and the constraints, on its overall performance.

  3. Its ability to detect a change and recover after it, and the way diversity is handled (i.e., the maintenance of suitable infeasible solutions during the search).

  4. Its performance when solving problems where dynamism is present only in the objective function, only in the constraints, or in both of them.

For each one of those four issues, an experiment is designed. The empirical evidence provided in this work comprises both direct and indirect comparisons. Seven performance measures are computed (offline error, recovery rate, absolute recovery rate, percentage of infeasible solutions, detected change count, best error before change and feasible offline error), and statistical tests are applied to validate the findings observed in the samples of runs carried out. Eight approaches found in the specialized literature are adopted for comparison purposes. The contribution of this work is to add knowledge about the capabilities of this particular DE-based algorithm to deal with a dynamic numerical constrained search space.

The rest of the paper is organized as follows. In Sect. 2 the problem of interest is stated, while Sect. 3 briefly introduces DE and DECV (the base algorithm) and details DDECV \(+\) Repair. Section 4 presents the four experiments and their corresponding results, where a recently proposed benchmark is solved (Nguyen and Yao 2012). Finally, Sect. 5 includes the conclusions and directions for future research.

2 Problem statement

A DCOP can be seen as a search problem where its fitness landscape and feasible region change through time. Without loss of generality, a DCOP can be defined as follows:

Find \(\mathbf x \), at each time t, which:

$$\begin{aligned} \min _{{\mathbf{x}}\in F_t\subseteq [L, U]} f(\mathbf x , t) \end{aligned}$$

where \(t \in N^+\) is the current time,

$$\begin{aligned}{}[L, U] = \left\{ \mathbf x = \left( x_{1},x_{2},\ldots ,x_{D}\right) \mid L_i \le x_i \le U_i,\ i = 1, \ldots , D\right\} \end{aligned}$$

is the search space,

subject to:

$$\begin{aligned} F_t = \left\{ \mathbf x \in [L, U] \mid g_i(\mathbf x , t) \le 0,\ i = 1, \ldots , m\right\} \end{aligned}$$

where \(g_i(\mathbf x , t)\) is the ith inequality constraint at time t, and \(F_t\) is called the feasible region at time t.

If there exists a solution \(\mathbf x ^* \in F_t\) such that \(f(\mathbf x ^*,t)\le f(\mathbf x ,t)\) \(\forall \mathbf x \in F_t\), then \(\mathbf x ^*\) is called a feasible optimal solution and \(f(\mathbf x ^*,t)\) is called the feasible optimum value at time t.

Four types of DCOPs are defined: (1) a static objective function and static constraints (i.e., a static constrained optimization problem), (2) a dynamic objective function and static constraints, (3) a static objective function and dynamic constraints and (4) a dynamic objective function and dynamic constraints.
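To make the formulation above concrete, the following minimal Python sketch represents a DCOP through time-dependent objective and constraint callables. The toy objective and constraint are hypothetical and are not part of the benchmark used later; only the interface (evaluation at a given time t, feasibility check and sum of constraint violations) matters.

```python
import numpy as np

def f(x, t):
    # Hypothetical time-dependent objective (minimization).
    return float(np.sum((x - 0.1 * t) ** 2))

def g(x, t):
    # Hypothetical time-dependent inequality constraints g_i(x, t) <= 0.
    return np.array([np.sum(x) - (2.0 + 0.5 * t)])

def is_feasible(x, t):
    # x belongs to the feasible region F_t if every constraint holds at time t.
    return bool(np.all(g(x, t) <= 0.0))

def violation(x, t):
    # Sum of constraint violations, used by the feasibility rules of Sect. 3.2.
    return float(np.sum(np.maximum(0.0, g(x, t))))
```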

3 DDECV \(+\) Repair

3.1 Differential evolution

DE is a stochastic search algorithm which operates with a population of solutions called vectors (Price et al. 2005). The population is represented as: \(\mathbf x _{i, G}, i=1,\ldots , NP \), where \(\mathbf x _{i,G}\) represents vector i at generation G, and \( NP \) is the population size. Each target vector \(\mathbf x _{i, G}\) generates one trial vector \(\mathbf u _{i, G}\) by using a mutant vector \(\mathbf v _{i,G}\). The mutant vector is obtained as in Eq. (1), where \(\mathbf x _{r0,G}\), \(\mathbf x _{r1,G}\) and \(\mathbf x _{r2,G}\) are vectors chosen at random from the current population (\(r0 \ne r1 \ne r2 \ne i\)); \(\mathbf x _{r0,G}\) is known as the base vector, and \(\mathbf x _{r1,G}\) and \(\mathbf x _{r2,G}\) are the difference vectors. \(F>0\) is a scale factor defined by the user.

$$\begin{aligned} \mathbf v _{i,G}= \mathbf x _{r0,G}\,+\,F (\mathbf x _{r1,G} - \mathbf x _{r2,G}) \end{aligned}$$
(1)

After the mutant vector \(\mathbf v _{i, G}\) is generated, it is combined with the target vector \(\mathbf x _{i, G}\) to generate the trial vector \(\mathbf u _{i, G}\) by applying a crossover operator as shown in Eq. (2).

$$\begin{aligned} u_{i,j,G} = {\left\{ \begin{array}{ll} v_{i,j,G} &{}\quad \text{ if }\,(\mathrm{rand}_{j} \le \mathrm{CR}) \text{ or } (j=J_{\mathrm{rand}}) \\ x_{i,j,G} &{}\quad \text{ otherwise } \end{array}\right. } \end{aligned}$$
(2)

where \(\mathrm{CR} \in [0,1]\) defines the similarity between the trial vector and the mutant vector, \(\mathrm{rand}_{j}\) is a random real number generated with uniform distribution between 0 and 1, \(j \in \left\{ 1,\ldots , D\right\} \) is the jth variable of the D-dimensional vector, and \(J_{\mathrm{rand} }\in [1,D]\) is a random integer which prevents the trial vector from being a copy of its target vector.

Finally, between the target and the trial vector, the one with the better objective function value is chosen to remain in the population for the next generation, as shown in Eq. (3) (assuming minimization):

$$\begin{aligned} \mathbf x _{i,G+1} = {\left\{ \begin{array}{ll} \mathbf u _{i,G} &{}\quad \text{ if }\left( f(\mathbf u _{i,G}) \le f(\mathbf x _{i,G})\right) ,\\ \mathbf x _{i,G} &{}\quad \text{ otherwise } \end{array}\right. } \end{aligned}$$
(3)

This DE variant is known as DE/rand/1/bin, where “rand” means the criterion to choose the base vector \(\mathbf x _{r0,G}\), “1” indicates the number of vector differences, and “bin” is the type of crossover (binomial in this case, as in Eq. (2)).

Another DE variant is DE/best/1/bin, where the only difference with respect to DE/rand/1/bin is that the best vector in the current population, represented as \(\mathbf x _{\mathrm{best},G}\), is the base vector for all differential mutations [see Eq. (4)]. Details of other DE variants, particularly for constrained optimization, can be found in Mezura-Montes et al. (2010).

$$\begin{aligned} \mathbf v _{i,G}= \mathbf x _{\mathrm{best},G}\,+\,F (\mathbf x _{r1,G} - \mathbf x _{r2,G}) \end{aligned}$$
(4)
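As an illustration of Eqs. (1), (2) and (4), the following minimal Python sketch builds one trial vector; the function and variable names are ours, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial_vector(pop, i, F, CR, best_idx=None):
    """DE/rand/1/bin if best_idx is None, otherwise DE/best/1/bin."""
    NP, D = pop.shape
    # Three mutually distinct indices, all different from the target index i.
    r0, r1, r2 = rng.choice([k for k in range(NP) if k != i], size=3, replace=False)
    base = pop[best_idx] if best_idx is not None else pop[r0]
    mutant = base + F * (pop[r1] - pop[r2])        # differential mutation, Eqs. (1)/(4)
    j_rand = rng.integers(D)                       # at least one coordinate from the mutant
    mask = rng.random(D) <= CR
    mask[j_rand] = True
    return np.where(mask, mutant, pop[i])          # binomial crossover, Eq. (2)
```

The selection of Eq. (3), or the feasibility rules of Sect. 3.2 in DECV, then decides whether the trial vector replaces its target.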

3.2 Differential evolution with combined variants (DECV)

The DDECV \(+\) Repair algorithm is based on differential evolution with combined variants (DECV) (Mezura-Montes et al. 2010), where DE/rand/1/bin is used at the beginning of the search and, after a percentage of feasible vectors (PFV) defined by the user is reached, DE/best/1/bin is used instead. DECV was originally proposed to solve static constrained optimization problems (SCOPs) (Mezura-Montes et al. 2010). The feasibility rules proposed by Deb (2000) are used in DECV as selection criteria in Eq. (3), and also every time the best vector is selected in DE/best/1/bin. The three rules, sketched in code after the list, are as follows:

  1. Between two feasible vectors, the one with the best objective function value is selected.

  2. If one vector is feasible and the other one is infeasible, the feasible vector is selected.

  3. If both vectors are infeasible, the one with the lowest sum of constraint violation is selected.

To keep the paper self-contained, the complete pseudocode of DECV is detailed in Algorithm 1.

(Algorithm 1: Pseudocode of DECV)

3.3 DDECV \(+\) Repair

To deal with DCOPs, DDECV \(+\) Repair adds a change detection mechanism that covers modifications in both the objective function and the constraints. When a change is detected, DDECV \(+\) Repair uses DECV’s two-variant combination to promote exploration in the dynamic constrained search space. Besides promoting exploration after a change, approaching the (possibly different) feasible region quickly is essential as well. Therefore, a repair mechanism based on the differential mutation and resampling is applied to infeasible vectors. Finally, a set of randomly generated vectors called immigrants is added to increase diversity in the population. In the following subsections, those four DDECV \(+\) Repair elements are detailed (Ameca-Alducin et al. 2015a, b).

3.3.1 Change detection

A timely detection of changes in the objective function and/or the constraints of a DCOP is the desired starting point to deal with a dynamic search space (du Plessis 2012; Richter 2009b). Therefore, DDECV + Repair uses solution (i.e., vector) re-evaluation, also known as sensor-based detection (Richter 2009a). At each generation, before the first target vector and the target vector at the middle of the current population generate their corresponding trial vectors, they are evaluated again and their objective function values and constraint values are compared against their previous values. If any value is different, an indicator is activated and the best vector in the current population is stored in an archive, called the memory population. Furthermore, all vectors in the current population and also those in the memory population are re-evaluated so as to keep them updated. The memory population keeps promising solutions, based on the DCOP features before the detected change, that can be used afterward if similar conditions return later in the dynamic search (Nguyen et al. 2013). Using the first vector and the one located at the middle of the population for change detection aims to decrease the chance of missing a change, since the mechanism operates twice during a single generation while requiring just two extra solution evaluations. The pseudocode of the change detection mechanism (Ameca-Alducin et al. 2015b) is detailed in Algorithm 2.

(Algorithm 2: Change detection mechanism)
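A minimal sketch of this sensor-based test is given below; the caching of the previous objective and constraint values is our own bookkeeping and the details may differ from Algorithm 2.

```python
import numpy as np

def change_detected(pop, cached_f, cached_g, f, g, t):
    """Re-evaluate the first vector and the one at the middle of the current
    population and compare against their cached objective/constraint values."""
    for i in (0, len(pop) // 2):
        if (not np.isclose(f(pop[i], t), cached_f[i])
                or not np.allclose(g(pop[i], t), cached_g[i])):
            return True
    return False

# On detection, the best vector is archived into the memory population and both
# the current and the memory populations are re-evaluated (Algorithm 2).
```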

3.3.2 Exploration promotion

DDECV \(+\) Repair, as the original DECV, starts using DE/rand/1/bin. Once a change is detected (see Algorithm 2), the exploration promotion mechanism is activated as follows: the DE variant is switched to DE/best/1/bin, whose usage lasts a number of generations defined by the user (\(\mathrm{Gen}_{\mathrm{best}}\)), and the F value is increased during that period to favor larger movements that promote exploration. Furthermore, since DE/best/1/bin is used, the best vector can be chosen from either the current population or the memory population (the best vectors found in previous environments). The idea of using DE/best/1/bin with larger F values to promote diversity in a constrained search space comes from the findings in Mezura-Montes et al. (2010) and is adopted in this work. The details of the exploration promotion mechanism can be seen in Algorithm 3.

(Algorithm 3: Exploration promotion mechanism)
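A minimal sketch of this switching logic follows; the state dictionary is our own abstraction and the factor by which F is enlarged is an assumption, not the value used by the authors.

```python
def activate_exploration(state, gen_best, f_boost=1.5):
    """Switch to DE/best/1/bin for gen_best generations with a larger F."""
    state["variant"] = "best"     # base vector: best of the current or memory population
    state["F"] = state["F_base"] * f_boost
    state["countdown"] = gen_best

def end_of_generation(state):
    """Restore DE/rand/1/bin and the base F once the countdown expires."""
    if state.get("countdown", 0) > 0:
        state["countdown"] -= 1
        if state["countdown"] == 0:
            state["variant"] = "rand"
            state["F"] = state["F_base"]
```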

3.3.3 The repair method

Repairing, in the context of constrained optimization, is understood as the process of converting an infeasible solution into a feasible one. Reference feasible solutions are then required for that purpose (Michalewicz and Nazhiyath 1995; Nguyen and Yao 2012, 2009; Pal et al. 2013a, b). However, the repair method used in DDECV \(+\) Repair does not use feasible vectors as reference. It is a resampling approach based on the differential mutation operator which works as follows (Ameca-Alducin et al. 2015b):

After each trial vector is generated, if it is infeasible, three new temporary vectors are randomly generated with uniform distribution in order to apply the differential mutation operator [see Eq. (1)] in a similar way as a mutant vector is created in DE. Feasibility is then checked on the obtained vector. The process is repeated until either a feasible vector is generated or Repair_Limit iterations are computed. Regardless of the feasibility of the vector obtained after the repair process, it is taken as the trial vector so as to increase diversity in the population. As only constraints are evaluated by the repair method, such evaluations are not added to the total evaluations required by DDECV \(+\) Repair. Algorithm 4 includes the repair details.

(Algorithm 4: Repair method)
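A minimal sketch of the resampling repair is given below; the bound clipping is our assumption, and `is_feasible` evaluates the constraints only (such evaluations are not charged to the evaluation budget).

```python
import numpy as np

rng = np.random.default_rng(1)

def repair(trial, lower, upper, F, is_feasible, repair_limit):
    """Resample up to repair_limit times: mutate three freshly generated random
    vectors as in Eq. (1) and keep the first feasible candidate found."""
    candidate = trial
    for _ in range(repair_limit):
        r0, r1, r2 = (rng.uniform(lower, upper) for _ in range(3))
        candidate = np.clip(r0 + F * (r1 - r2), lower, upper)
        if is_feasible(candidate):
            break
    # The last candidate replaces the trial vector even if it is still
    # infeasible, which adds diversity to the population.
    return candidate
```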

3.3.4 The random immigrants

To add more diversity to the current population of DDECV \(+\) Repair, a number IB of immigrants (vectors generated at random with uniform distribution) is added to the population at the end of each generation. IB stands for “Immigrants Before a change”. Moreover, during the period of time in which the exploration promotion is active (controlled by the \(\mathrm{Gen}_{\mathrm{best}}\) parameter), the number of immigrants is increased to IA (“Immigrants After a change”). In both cases, the immigrants replace the worst vectors in the current population.
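A minimal sketch of the immigrant insertion follows; how the worst vectors are ranked (e.g., by the feasibility rules) is our assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def insert_immigrants(pop, n_immigrants, lower, upper, worst_indices):
    """Replace the n_immigrants worst vectors (indices given worst-first)
    with uniformly random vectors inside [L, U]."""
    for idx in worst_indices[:n_immigrants]:
        pop[idx] = rng.uniform(lower, upper)
    return pop

# n_immigrants equals IB at the end of every generation, and the larger IA
# while the exploration promotion of Sect. 3.3.2 is active.
```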

The pseudocode of DDECV \(+\) Repair is shown in Algorithm 5.

(Algorithm 5: DDECV \(+\) Repair)

4 Experiments and results

4.1 Experimental design

Recalling from Sect. 1, the aim of this paper is to deepen the empirical analysis of DDECV \(+\) Repair by considering (1) the role of each one of its elements, (2) how it is affected by different change frequencies and severities, (3) its ability to detect a change and recover after it, together with its diversity handling, and (4) its performance with dynamism in different parts of the problem. Therefore, four experiments were designed:

  • A comparison of DDECV \(+\) Repair against own versions, each one without one of its elements (exploration promotion, repair method and random immigrants).

  • A comparison of DDECV \(+\) Repair against recent approaches to solve DCOPs by varying the change frequency and severity.

  • A comparison of DDECV \(+\) Repair against recent approaches to solve DCOPs by measuring changes detected, recovery rate and balance between feasible and infeasible vectors in the population.

  • A comparison of DDECV \(+\) Repair against two recent approaches (ICHEA and DCTC) to analyze the presence of dynamism in different parts of the problem, i.e., dynamic objective function and static constraints, static objective function and dynamic constraints and dynamic objective function and dynamic constraints.

The four experiments solved a recently proposed benchmark for DCOPs (Nguyen and Yao 2012), which contains eighteen problems. The main features of those problems are summarized in Table 1, and the details can be found in Nguyen and Yao (2012, 2013). The parameters used for DDECV \(+\) Repair are those in Table 3. Such values were taken from Ameca-Alducin et al. (2015b) and were obtained by using the iRace tool for parameter tuning (López-Ibáñez and Stützle 2012). The parameters used for the benchmark problems are detailed in Table 2, where different change frequencies and severities were considered.

Table 1 Main features of the test problems (Nguyen and Yao 2012)
Table 2 Parameter values for the test problems taken from Nguyen and Yao (2012)
Table 3 DDECV \(+\) Repair parameter values taken from Ameca-Alducin et al. (2014, 2015b)

4.1.1 Performance measures

The following seven performance measures were used in this work (a computation sketch for the offline error, RR and ARR is given after the list):

  1. Offline error (Branke and Schmeck 2003): the most popular measure in the specialized literature of DCOPs (Nguyen and Yao 2012; Pal et al. 2013b). It is defined as the average of the errors computed at each generation over all the times considered. The offline error is always greater than or equal to zero, and a value of zero indicates a perfect performance (Nguyen et al. 2012). This measure is defined as in Eq. (5):

    $$\begin{aligned} \mathrm{offline}\_\mathrm{error}= \frac{1}{G_{\mathrm{max}}} \sum _{G = 1}^{G_{\mathrm{max}}} e(G) \end{aligned}$$
    (5)

    where \(G_{\mathrm{max}}\) is the number of generations computed by the algorithm and e(G) denotes the error in the current generation G [see Eq. (6)]:

    $$\begin{aligned} e(G)= |f(\mathbf x ^*,t) - f(\mathbf x _{\mathrm{best},G},t)| \end{aligned}$$
    (6)

    where \(f(\mathbf x ^*,t)\) denotes the feasible global optimum at the current time t, and \(f(\mathbf x _{\mathrm{best},G},t)\) is the objective function value of the best solution (feasible or infeasible) found so far at generation G in the current time t. Using the absolute value of the error mitigates the problems related to the presence of infeasible solutions with a better objective function value than the feasible optimum of the corresponding time.

  2. Detected change count: each detected change is counted to verify whether the algorithm has the ability to detect all of them during the search process.

  3. Recovery rate (RR) (Nguyen and Yao 2012): this measure was designed to analyze how quickly an algorithm recovers after a change and starts converging to the new best solution before the next change occurs. Such new best solution is not necessarily the global optimum. The RR value is 1 in the best case, where the algorithm is able to recover and converge to the best solution immediately after a change; a value closer to zero means the algorithm is unable to recover [see Eq. (7)]:

    $$\begin{aligned} \mathrm{RR}= \frac{1}{t_{\mathrm{max}}} \sum _{t = 1}^{t_{\mathrm{max}}} \frac{\sum _{G = 1}^{G_{\mathrm{max}}(t)}\left[ f(\mathbf x _{\mathrm{best},G},t)- f(\mathbf x _{\mathrm{best},1},t)\right] }{G_{\mathrm{max}}(t)\left[ f(\mathbf x _{\mathrm{best},G_{\mathrm{max}}(t)},t)-f(\mathbf x _{\mathrm{best},1},t)\right] } \end{aligned}$$
    (7)

    where \(t_{\mathrm{max}}\) is the total number of times (changes) in the environment and every change occurs at time t, \(G_{\mathrm{max}}(t)\) is the total number of generations at time t, \(f(\mathbf x _{\mathrm{best},G},t)\) is the objective function value of the best feasible solution found at current generation G in current time t, and \(f(\mathbf x _{\mathrm{best},1},t)\) is the objective function value of the best feasible solution at the first generation of the new time (i.e., the first best feasible solution obtained by the algorithm after a change).

  4. Absolute recovery rate (ARR) (Nguyen and Yao 2012): similar to RR, but used to analyze how fast an algorithm starts converging to the global optimum before the next change occurs. The ARR value is 1 in the best case, when the algorithm is able to recover and converge to the global optimum immediately after a change, and zero when the algorithm is unable to recover at all [see Eq. (8)]:

    $$\begin{aligned} \mathrm{ARR}= \frac{1}{t_{\mathrm{max}}} \sum _{t = 1}^{t_{\mathrm{max}}} \frac{\sum _{G = 1}^{G_\mathrm{max}(t)}[ f(\mathbf x _{\mathrm{best},G},t)- f(\mathbf x _{\mathrm{best},1},t)]}{G_\mathrm{max}(t)[f(\mathbf x ^*,t)-f(\mathbf x _{\mathrm{best},1},t)]} \end{aligned}$$
    (8)

    where \(f(\mathbf x ^*,t)\) is the feasible global optimum at the current time t. \(t_\mathrm{max}\), t, \(G_\mathrm{max}(t)\), G, \(f(\mathbf x _{\mathrm{best},G},t)\) and \(f(\mathbf x _{\mathrm{best},1},t)\) are defined as in Eq. (7) for RR.

  5. Percentage of selected infeasible individuals: this measure computes the average number of infeasible vectors over the \(G_\mathrm{max}\) generations. A value greater than zero is desirable for most of the search, since it indicates that the algorithm is able to maintain infeasible solutions so as to avoid feasible local optima.

  6. Best error before change: proposed in Trojanowski and Michalewicz (1999), this measure is calculated as the average of the smallest errors achieved at the end of each period of time, right before a change occurs [see Eq. (9)]:

    $$\begin{aligned} E_\mathrm{best}=\frac{1}{t_\mathrm{max}} \sum _{t = 1}^{t_\mathrm{max}}e_\mathrm{best}(G_\mathrm{max}(t)) \end{aligned}$$
    (9)

    where \(t_\mathrm{max}\) is the total number of times (changes) in the environment and every change occurs at time t, \(G_\mathrm{max}(t)\) is the total number of generations at time t, and \(e_\mathrm{best}(G_\mathrm{max}(t))\) is the difference between the feasible global optimum at the current time t and the best solution found at the last generation \(G_\mathrm{max}(t)\) of time t [see Eq. (6)].

  7. Feasible offline error (Aragón et al. 2013): similar to the offline error [see Eq. (5)], but only feasible solutions are considered; if an infeasible solution is found, nothing is added.
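As announced before the list, the following minimal sketch shows how the offline error [Eqs. (5)-(6)], RR [Eq. (7)] and ARR [Eq. (8)] can be computed once the best objective value per generation and the feasible optimum of each time are stored; the array layout is our assumption and minimization is assumed.

```python
import numpy as np

def offline_error(best_f, optima):
    """best_f[t][G]: best objective value found up to generation G of time t;
    optima[t]: feasible optimum of time t."""
    errors = [abs(optima[t] - fg) for t, fs in enumerate(best_f) for fg in fs]
    return float(np.mean(errors))

def recovery_rates(best_f, optima):
    """RR and ARR, using the best feasible value per generation; assumes some
    improvement occurs after each change (nonzero denominators)."""
    rr, arr = [], []
    for t, fs in enumerate(best_f):
        gains = [fs[0] - fg for fg in fs]              # improvement over the first generation
        rr.append(sum(gains) / (len(fs) * gains[-1]))  # denominator: improvement at G_max(t)
        arr.append(sum(gains) / (len(fs) * (fs[0] - optima[t])))
    return float(np.mean(rr)), float(np.mean(arr))
```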

4.1.2 Compared algorithms

The results obtained by DDECV \(+\) Repair were compared with those obtained by the following eight state-of-the-art algorithms in dynamic constrained optimization, which are briefly described below.

  • GAElit is a traditional genetic algorithm (GA) with elitism. In this algorithm, nonlinear ranking parent selection, arithmetic crossover and uniform mutation are employed. To cope with constraints, a static penalty function is used. To avoid obsolete data, the change detection mechanism (see Algorithm 2) is applied when elitism takes place.

  • HyperMElit is similar to GAElit, but with an adaptive mechanism to switch between two mutation rates: (1) a low (standard mutation) rate and (2) a high (hypermutation) rate, to increase diversity. If the best solution gets worse, the hypermutation is applied for a user-defined number of generations (Cobb 1990).

  • RIGAElit is also similar to GAElit. However, after the mutation operator is applied, a fraction of the current population is replaced with randomly generated individuals (random immigrants). This fraction is determined by a random immigrant rate (also named replacement rate). In the experiments of this work, one third of the current population is replaced by random immigrants in order to maintain diversity throughout the search process (Grefenstette 1992).

  • GA \(+\) Repair was proposed in Nguyen and Yao (2009). This algorithm is similar to GAElit but with a repair method based on GENOCOP III (Michalewicz and Nazhiyath 1995). The repair method converts infeasible solutions in the population (called search population) into feasible ones by using feasible solutions as reference. Those feasible solutions are in the so-called reference population, where only feasible solutions are allowed. To avoid obsolete data, the change detection mechanism is applied (see Algorithm 2).

  • DE + Repair is based on DE/rand/1/bin with a change detection mechanism, in which the solutions are re-evaluated in order to detect modifications in the environment. A modified repair method proposed by Michalewicz and Nazhiyath (1995) is used as a constraint handler, whose application is based on closeness in the variable space between the reference (feasible) solution and the infeasible solution to be repaired (Pal et al. 2013a).

  • GSA \(+\) Repair is based on the gravitational search algorithm (GSA) (Rashedi et al. 2009) and shares the same change detection mechanism and repair method with DE \(+\) Repair (Pal et al. 2013b).

  • ICHEA is a variation of an EA. This approach uses an intermarriage crossover operator which employs knowledge from the constraints rather than blindly searching the solution space. This crossover is intended to generate offspring that satisfy more than one constraint, because the parents are selected from two different feasible regions (Sharma and Sharma 2012a). The algorithm favors those offspring which satisfy more constraints by using Deb’s rules (Deb 2000). To avoid the loss of diversity in the population, this algorithm includes a diversity management mechanism and a management of stagnant local optimal solutions which works like a tabu search algorithm (Sharma and Sharma 2012b).

  • Dynamic constrained T cell (DCTC) is an algorithm inspired by the T cell model, which operates on four populations, corresponding to the groups in which the T cells are divided: (1) virgin cells to provide diversity, (2) effector cells CD4 to explore the search space, (3) effector cells CD8, which use a real-numbers representation, and (4) memory cells to explore the neighborhood of the best found solutions. Furthermore, a change detection mechanism based on the re-evaluation of solutions is added. The feasibility rules are used as the constraint handler (Aragón et al. 2013).

The parameter values used for the abovementioned compared algorithms, where direct comparisons were carried out (experiment 3), are given in Table 4.

Table 4 Parameter values for the tested algorithms (GAElit, RIGAElit, HyperMElit and GA \(+\) Repair), taken from Nguyen and Yao (2012)

4.2 Results

4.2.1 Experiment 1. DDECV \(+\) Repair element analysis

The first experiment compared DDECV+Repair against versions of itself in which, in each case, a single element was deactivated, as follows:

  • DDE_rand \(+\) Repair: a version without the exploration promotion mechanism (i.e., without the combined variants and the memory population), where only DE/rand/1/bin is used.

  • DDE_best \(+\) Repair: a version without the exploration promotion mechanism (i.e., without the combined variants and the memory population), where only DE/best/1/bin is used.

  • DDECV − Repair: a version with all the elements except the repair method.

  • DDECV − Imm \(+\) Repair: a version with all the elements except the random immigrants.

  • DDECV − Mem \(+\) Repair: a version with all the elements except the memory population (the best vectors from previous times).

The eighteen test problems were solved by each one of the six versions, and the offline error measure was computed. The change frequency was 1000 evaluations, and the severity of the change was medium (i.e., \(k=0.50\) and \(S=20\)). Those values are the most used in the specialized literature of DCOPs (Nguyen and Yao 2012). The results obtained are summarized in Table 5. The statistical validation was made with the 95 % confidence Kruskal–Wallis (KW) test and the Bergmann–Hommel post hoc test, as suggested in Derrac et al. (2011). Such tests indicated no significant differences among the DDECV \(+\) Repair versions when considering all eighteen test problems together.

Table 5 Average and standard deviation offline error values obtained by DDECV \(+\) Repair and its five incomplete versions with a change frequency of 1000 evaluations and a medium severity of change (\(k=0.50\) and \(S=20\))

To get a closer look, each test problem was analyzed separately. In this way, the 95 % confidence Wilcoxon rank-sum test was applied for each test problem in pairwise comparisons between DDECV \(+\) Repair and each one of its five versions. The results are presented in Table 6.
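A minimal sketch of how such nonparametric tests can be run with SciPy over per-run offline errors is shown below (placeholder data; the Bergmann–Hommel and Bonferroni–Dunn post hoc procedures used in the paper are applied on top of these and are not reproduced here).

```python
import numpy as np
from scipy import stats

# Offline error of 50 independent runs per algorithm on one test problem
# (random placeholder data, for illustration only).
rng = np.random.default_rng(3)
runs = {"DDECV+Repair": rng.random(50),
        "DDE_rand+Repair": rng.random(50),
        "DDE_best+Repair": rng.random(50)}

# Kruskal-Wallis test over all versions (95 % confidence -> alpha = 0.05).
h_stat, p_kw = stats.kruskal(*runs.values())

# Pairwise Wilcoxon rank-sum test between the full algorithm and one version.
z_stat, p_rs = stats.ranksums(runs["DDECV+Repair"], runs["DDE_rand+Repair"])
significant = p_rs < 0.05
```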

Table 6 95 % confidence Wilcoxon rank-sum test results on pairwise comparisons between DDECV \(+\) Repair against each one of its five incomplete versions based on the offline error results in Table 5

Based on such results, DDECV \(+\) Repair outperformed DDE_rand \(+\) Repair in fourteen test problems (g24_u, g24_1, g24_2, g24_3, g24_3b, g24_4, g24_5, g24_6a, g24_6b, g24_6c, g24_6d, g24_7, g24_8a and g24_8b), while DDE_rand \(+\) Repair was better in just one (unconstrained) test problem (g24_2u). In three test problems (g24_f, g24_uf and g24_3f), all of them static and the second of them unconstrained, no significant differences were observed.

DDECV \(+\) Repair outperformed DDE_best \(+\) Repair in nine test problems (g24_u, g24_3, g24_3b, g24_4, g24_6b, g24_6d, g24_7, g24_8a and g24_8b). On the other hand, DDE_best \(+\) Repair had a better performance in six test problems (g24_f, g24_uf, g24_2u, g24_3f, g24_6a and g24_6c). No significant differences were observed in three test problems (g24_1, g24_2 and g24_5). It is important to note that the problems where DDE_best \(+\) Repair provided a better performance have static constraints or are unconstrained.

Regarding DDECV − Repair, the results indicate that DDECV \(+\) Repair outperformed it in thirteen test problems (g24_1, g24_f, g24_2, g24_3, g24_3b, g24_3f, g24_4, g24_5, g24_6a, g24_6c, g24_6d, g24_7 and g24_8b). In contrast, DDECV − Repair was better in just one unconstrained test problem (g24_u). No significant differences were observed in four test problems (g24_uf, g24_2u, g24_6b and g24_8a), where the first two are unconstrained. Those results highlight the importance of the repair method in DDECV \(+\) Repair.

Furthermore, DDECV \(+\) Repair obtained better results with respect to DDECV − Imm \(+\) Repair in nine test problems (g24_u, g24_2u, g24_3b, g24_3f, g24_4, g24_5, g24_6d, g24_8a and g24_8b), while DDECV − Imm \(+\) Repair surpassed the results of DDECV \(+\) Repair in five test problems (g24_f, g24_3, g24_6b, g24_6c and g24_7). In four test problems (g24_1, g24_uf, g24_2, and g24_6a) no significant differences were obtained. The common feature of three test problems where the version with no immigrants was better than DDECV \(+\) Repair (g24_f, g24_3 and g24_7) is that the global optimum is in the boundaries of the feasible region, while in the other two test problems (g24_6b and g24_6c), the feasible global optimum is in the boundaries of the search space. Therefore, the addition of those randomly generated solutions into the current population seems to affect the convergence to those solutions located in the boundaries of either the feasible region or the search space.

Table 7 Average and standard deviation offline error values obtained by DDECV \(+\) Repair and the compared algorithms with a change frequency of 250 evaluations and a medium change severity (\(k=0.5\) and \(S=20\))
Table 8 Average and standard deviation offline error values obtained by DDECV \(+\) Repair and the compared algorithms with a change frequency of 500 evaluations and a medium change severity (\(k=0.5\) and \(S=20\))
Table 9 Average and standard deviation offline error values obtained by DDECV \(+\) Repair and the compared algorithms with a change frequency of 1000 evaluations and a medium change severity (\(k=0.5\) and \(S=20\))
Table 10 Average and standard deviation offline error values obtained by DDECV \(+\) Repair and the compared algorithms with a change frequency of 2000 evaluations and a medium change severity (\(k=0.5\) and \(S=20\))
Table 11 Average and standard deviation offline error values obtained by DDECV \(+\) Repair and the compared algorithms with a change frequency of 4000 evaluations and a medium change severity (\(k=0.5\) and \(S=20\))

Finally, DDECV \(+\) Repair outperformed DDECV − Mem \(+\) Repair in fifteen test problems (g24_u, g24_1, g24_f, g24_uf, g24_2, g24_3, g24_3b, g24_3f, g24_4, g24_5, g24_6b, g24_6c, g24_6d, g24_8a and g24_8b), while DDECV − Mem \(+\) Repair had a better performance in two test problems (g24_2u and g24_6a), where the first problem is unconstrained and the second has static constraints. No significant differences were observed in just one test problem (g24_7), whose main feature is that the global optimum is in the boundaries of the feasible region. The results suggest the importance of the memory population, because previous search conditions reappear later in the search.

The overall results of this first experiment lead to the following conclusions:

  • The most important mechanism in DDECV \(+\) Repair is precisely the repair mechanism. However, even without it, the algorithm can provide competitive results but only in unconstrained dynamic problems.

  • The absence of the exploration promotion mechanism also affects DDECV \(+\) Repair’s performance, but such impact is less significant. In fact, even without it, if DE/best/1/bin is used, problems with a dynamic objective function but with static constraints can still be solved.

  • The random immigrants have a positive effect on DDECV \(+\) Repair. However, the presence of random solutions at each generation may affect the algorithm’s performance when the feasible global optimum is located at the boundaries of either the feasible region or the search space.

  • Another important mechanism in DDECV \(+\) Repair is the memory population. Without it, the performance of DDECV \(+\) Repair is affected.

Table 12 \(95\,\%\) confidence Kruskal–Wallis test results on the offline error values in Tables 7, 8, 10 and 11, obtained by DDECV+Repair, GAElit, RIGAElit, HyperMElit and GA \(+\) Repair with different change frequencies and a medium change severity (\(k=0.5\) and \(S=20\))
Fig. 1 Bonferroni–Dunn post hoc test results based on the offline error values obtained by DDECV \(+\) Repair and the compared algorithms with a medium change severity (\(k=0.5\) and \(S=20\)), considering two change frequencies: a 250 evaluations and b 500 evaluations

4.2.2 Experiment 2: Change frequency and severity analysis

The second experiment analyzed the effects of different change frequencies and also different change severities in the performance provided by DDECV+Repair. The six algorithms already described were used for comparison purposes, and their results were taken from their corresponding documents (Cobb 1990; Cobb and Grefenstette 1993; Nguyen et al. 2012; Nguyen and Yao 2010; Pal et al. 2013a, b). The eighteen test problems, as in Experiment 1, were solved.

The first comparison of this second experiment focused on the change frequency. The results of the 95 % confidence Kruskal–Wallis test applied to the offline error results obtained by DDECV \(+\) Repair and the compared algorithms (GAElit, RIGAElit, HyperMElit, GA \(+\) Repair, DE \(+\) Repair and GSA \(+\) Repair) with five different change frequencies and a medium change severity (\(k=0.5\) and \(S=20\)) are presented in Table 12 (250, 500, 2000 and 4000 evaluations) and Table 13 (1000 evaluations, which was presented separately because DE \(+\) Repair and GSA \(+\) Repair only reported results for that change frequency).

The complete offline error values of this comparison are detailed in Tables 7, 8, 9, 10 and 11, for 250, 500, 1000, 2000 and 4000 evaluations, respectively.

Fig. 2 Bonferroni–Dunn post hoc test results based on the offline error values obtained by DDECV \(+\) Repair and the compared algorithms with a medium change severity (\(k=0.5\) and \(S=20\)), considering two change frequencies: a 2000 evaluations and b 4000 evaluations

Fig. 3 Bonferroni–Dunn post hoc test results based on the offline error values obtained by DDECV \(+\) Repair and the compared algorithms considering 1000 evaluations as a change frequency and a medium change severity (\(k=0.5\) and \(S=20\))

From Table 12 it was observed that DDECV \(+\) Repair outperformed all algorithms in all change frequencies, with the exception of GA \(+\) Repair with 250 evaluations. The results of the Bonferroni–Dunn post hoc test for 250 and 500 evaluations in Fig. 1 and for 2000 and 4000 evaluations in Fig. 2 confirm such finding. Regarding the change frequency of 1000 evaluations (Table 13), DDECV \(+\) Repair outperformed again GAElit, RIGAElit, HyperMElit and GA \(+\) Repair. On the other hand, its performance was similar with respect to DE \(+\) Repair and GSA \(+\) Repair. Figure 3, with the Bonferroni–Dunn post hoc test results, confirms the abovementioned.

The second comparison of the second experiment aimed to analyze the impact of the change severity on DDECV \(+\) Repair. The results of the 95 % confidence Kruskal–Wallis test applied to the offline error results obtained by DDECV \(+\) Repair and the compared algorithms (GAElit, RIGAElit, HyperMElit and GA+Repair) with a low (\(k=0.25\) and \(S=10\)) and a high (\(k=1.0\) and \(S=50\)) change severity, both with 1000 evaluations as change frequency, are presented in Table 16. DE \(+\) Repair and GSA \(+\) Repair were omitted in this comparison because no results were reported for those settings. The complete offline error values of this comparison are detailed in Tables 14 and 15, for the low change severity and the high change severity, respectively.

Table 13 \(95\,\%\) confidence Kruskal–Wallis test results on the offline error values in Table 9, obtained by DDECV+Repair, GAElit, RIGAElit, HyperMElit, GA \(+\) Repair DE \(+\) Repair and GSA \(+\) Repair with a change frequency of 1000 evaluations and a medium change severity (\(k=0.5\) and \(S=20\))

Based on the results in Table 16, DDECV \(+\) Repair outperformed the compared algorithms in both, low and high change severities. The only exception was GA \(+\) Repair when the severity is low. In that case, both algorithms performed in a similar way. The results of the Bonferroni–Dunn post hoc test in Fig. 4a, b uphold that observation.

The second experiment provided the following research findings:

  • DDECV \(+\) Repair was robust to different change frequencies. However, when the change frequency took medium values (i.e., 1000 evaluations), only comparable results were obtained with respect to the two other repair-based approaches, DE \(+\) Repair and GSA \(+\) Repair.

  • DDECV \(+\) Repair was not sensitive to low and high change severities in both, the objective function and the constraints of a DCOP.

Table 14 Average and standard deviation offline error values obtained by DDECV \(+\) Repair and the compared algorithms with a change frequency of 1000 evaluations and a low change severity (\(k=0.25\) and \(S=10\))
Table 15 Average and standard deviation offline error values obtained by DDECV \(+\) Repair and the compared algorithms with a change frequency of 1000 evaluations and a high change severity  (\(k=1.0\) and \(S=50\))
Table 16 \(95\,\%\) confidence Kruskal–Wallis test results on the offline error values in Tables 14 and 15, obtained by DDECV \(+\) Repair, GAElit, RIGAElit, HyperMElit and GA+Repair with a low (\(k=0.25\) and \(S=10\)) and a high (\(k=1.0\) and \(S=50\)) change severities, both with 1000 evaluations as change frequency
Fig. 4 Bonferroni–Dunn post hoc test results based on the offline error values obtained by DDECV \(+\) Repair and the compared algorithms with 1000 evaluations as the change frequency, considering two change severities: a low (\(k=0.25\) and \(S=10\)) and b high (\(k=1.0\) and \(S=50\))

4.2.3 Experiment 3: Change detection, recovery and diversity analysis

The third experiment studied the ability of DDECV \(+\) Repair to detect a change in the objective function and/or the constraints and its capacity to recover after it. Moreover, its diversity handling (feasible and infeasible solutions in the population) was also analyzed.

In Table 17, the average and standard deviation of the number of successful changes detected per each dynamic test problem in 50 independent runs by DDECV \(+\) Repair and HyperMElit are presented (static test problems were not considered in this experiment). Due to the fact that, among the compared algorithms, only HyperMElit uses a different detection mechanism (best objective function value decrease), it was selected for comparison purposes. The change frequency was set to 1000 evaluations, and the change severity was medium (\(k=0.5\) and \(S=20\)).

Recalling that the total number of changes in a single run is 10, the results suggest that the re-evaluation of solutions used by DDECV \(+\) Repair is more effective than detection based on a decrease of the best objective function value when solving DCOPs (e.g., in the g24_3 test problem HyperMElit was unable to detect changes in the dynamic constraints because the objective function is static; furthermore, in that problem the feasible global optimum switches between disconnected regions). This behavior was observed with the other change frequencies and severities as well.

After assessing how effective DDECV \(+\) Repair is at detecting a change, Figs. 5a–c and 6a, b depict the recovery rate (RR) and absolute recovery rate (ARR) diagrams of the following algorithms: GAElit, RIGAElit, HyperMElit, GA \(+\) Repair and DDECV \(+\) Repair, all of them with five change frequencies (250, 500, 1000, 2000 and 4000 evaluations) and a medium change severity (\(k=0.5\) and \(S=20\)). Such results indicate that DDECV \(+\) Repair recovers faster than the compared algorithms, regardless of the change frequency. The same behavior is observed in Fig. 7a, b, where a change frequency of 1000 evaluations coupled with a low change severity (\(k=0.25\) and \(S=10\)) and a high change severity (\(k=1.0\) and \(S=50\)), respectively, was considered. It was noted that the recovery ability of DDECV \(+\) Repair, like that of the compared algorithms, seems to be more affected by a fast change (i.e., 250 evaluations in Fig. 5a) than by a high change severity (i.e., \(k=1.0\) and \(S=50\) in Fig. 7b).

Table 17 Average number of changes successfully detected by DDECV \(+\) Repair and HyperMElit with a change frequency of 1000 evaluations and a medium change severity (\(k=0.5\) and \(S=20\))
Fig. 5 Mapping of the RR/ARR scores of GAElit, RIGAElit, HyperMElit, GA \(+\) Repair and DDECV \(+\) Repair to the RR-ARR diagram for three change frequencies: a 250 evaluations, b 500 evaluations and c 1000 evaluations. If a point is closer to the right-hand side area of the graph, it indicates a faster recovery. Moreover, if the point lies on the diagonal line, the algorithm has been able to recover from the change and also converge to the new global optimum. The RR-ARR diagram was proposed in Nguyen and Yao (2012)

Fig. 6 Mapping of the RR/ARR scores of GAElit, RIGAElit, HyperMElit, GA \(+\) Repair and DDECV \(+\) Repair to the RR-ARR diagram for two change frequencies: a 2000 evaluations and b 4000 evaluations. If a point is closer to the right-hand side area of the graph, it indicates a faster recovery. Moreover, if the point lies on the diagonal line, the algorithm has been able to recover from the change and also converge to the new global optimum. The RR-ARR diagram was proposed in Nguyen and Yao (2012)

Fig. 7 Mapping of the RR/ARR scores of GAElit, RIGAElit, HyperMElit, GA \(+\) Repair and DDECV \(+\) Repair to the RR-ARR diagram with a change frequency of 1000 evaluations and: a a low change severity (\(k=0.25\) and \(S=10\)) and b a high change severity (\(k=1.0\) and \(S=50\)). If a point is closer to the right-hand side area of the graph, it indicates a faster recovery. Moreover, if the point lies on the diagonal line, the algorithm has been able to recover from the change and also converge to the new global optimum. The RR-ARR diagram was proposed in Nguyen and Yao (2012)

Finally, for DDECV \(+\) Repair and the compared algorithms, the diversity handling, in the context of a constrained search space (i.e., the percentage of feasible and infeasible solutions in the population), was assessed by calculating the percentage of infeasible solutions in the population. The averages of infeasible solutions in the population considering the whole set of test problems with five different change frequencies (250, 500, 1000, 2000 and 4000 evaluations) and a medium change severity (\(k=0.5\) and \(S=20\)) are shown in Table 18. The same information is presented in Table 19, with 1000 evaluations as the change frequency, but now considering a low (\(k=0.25\) and \(S=10\)) and a high (\(k=1.0\) and \(S=50\)) change severities.

In both tables, regardless of the change frequency and severity, a similar behavior was observed: the two algorithms with a repair method kept a lower percentage of infeasible solutions than the approaches without it. Furthermore, it is worth noticing that, of the two repair-based algorithms, GA \(+\) Repair and DDECV \(+\) Repair, the latter maintained a higher proportion of infeasible solutions (approximately between 8 and 14 %). Such a ratio, considering the competitive overall results obtained by DDECV + Repair, seems to be suitable for an algorithm dealing with DCOPs. In fact, this finding agrees with the reported importance of keeping some infeasible solutions to favor a successful search in the presence of constraints (Mezura-Montes and Coello 2011).

From this third experiment the following can be summarized:

  • DDECV \(+\) Repair’s change detection mechanism based on solution re-evaluation was particularly effective in the set of dynamic test problems solved in this study.

  • DDECV \(+\) Repair provided the fastest recovery after a change. Moreover, it was found that faster changes affect this ability more than high severity changes.

  • As in static constrained optimization, keeping a low proportion of infeasible solutions (between 8 and 14 %) helps DDECV \(+\) Repair to provide competitive results when solving DCOPs.

4.2.4 Experiment 4: Dynamism in different parts of the problem and best-error-before-change analysis

The fourth experiment studied the performance of DDECV \(+\) Repair with dynamism in different parts of the problem. The ICHEA and DCTC algorithms, already described, were used for comparison purposes, and their results were taken from their corresponding papers (Sharma and Sharma 2012b; Aragón et al. 2013). Twelve out of the eighteen test problems used in Experiment 1 were solved.

The first comparison of this fourth experiment was between DDECV \(+\) Repair and ICHEA (Sharma and Sharma 2012a), and the best-error-before-change measure was computed. The change frequency was 1000 evaluations, and the severity of the change was medium (i.e., \(k=0.50\) and \(S=20\)). The results obtained are presented in Table 20. The statistical validation was made with the 95 % confidence Wilcoxon rank-sum test. Such test indicated no significant difference between DDECV \(+\) Repair and ICHEA when considering the four test problems (g24_u, g24_1, g24_3 and g24_4). Therefore, DDECV \(+\) Repair has a similar performance with respect to ICHEA in reaching the optimum solution before a change occurs.

The second comparison of this fourth experiment studied the effects of dynamism in different parts of the problem on the performance of DDECV \(+\) Repair and DCTC (Aragón et al. 2013), with several change frequencies and change severities. The measure used in this comparison was the feasible offline error. The change frequencies were 250, 500 and 1000 evaluations, and the change severities were \(k=0.25\), 0.5 and 1.0 and \(S=10\), 20 and 50. The statistical validation was made with the 95 % confidence Wilcoxon rank-sum test. The comparisons were divided according to the kind of test problem as follows:

  1. Dynamic objective function and dynamic constraints. In Table 21 the mean and standard deviation values of the feasible offline error are shown for DDECV \(+\) Repair and DCTC in three test problems (g24_3b, g24_4 and g24_5) with different change frequencies (250, 500 and 1000 evaluations) and different change severities (\(k=0.25\), 0.5 and 1.0 and \(S=10\), 20 and 50). Twenty-seven combinations were carried out for each test problem. The statistical test indicated that there were significant differences. Based on such results, DDECV \(+\) Repair outperformed DCTC in all combinations of two test problems (g24_3b and g24_4). In the remaining test problem (g24_5), DDECV \(+\) Repair outperformed DCTC in nineteen combinations, while DCTC was better in eight combinations, regardless of the change frequency, with low and medium change severities in the objective function and high change severities in the constraints.

  2. Dynamic objective function and static constraints. In Table 22 the mean and standard deviation values of the feasible offline error are presented for DDECV \(+\) Repair and DCTC in three test problems (g24_1, g24_2 and g24_8b) with different change frequencies (250, 500 and 1000 evaluations) and different change severities in the objective function (\(k=0.25\), 0.50 and 1.0). Nine combinations were computed for each test problem. The statistical test indicated that there were significant differences. According to those results, DCTC was better in all combinations of two test problems (g24_1 and g24_2) and in eight combinations of the remaining test problem (g24_8b). On the other hand, DDECV \(+\) Repair surpassed DCTC in just one combination, when the change frequency was 250 evaluations and the change severity was high (\(S=50\)). In Table 23, the mean and standard deviation values of the feasible offline error are presented for DDECV \(+\) Repair and DCTC in three test problems (g24_6a, g24_6c and g24_6d) with different change frequencies (250, 500 and 1000 evaluations) and no change severity parameter. The results suggest that the performance of DDECV \(+\) Repair was similar to that of DCTC, because no significant differences were observed.

  3. Static objective function and dynamic constraints. In Table 24 the mean and standard deviation values of the feasible offline error are presented for DDECV \(+\) Repair and DCTC in two test problems (g24_3 and g24_7) with different change frequencies (250, 500 and 1000 evaluations) and different change severities in the constraints (\(S=10\), 20 and 50). Nine combinations were calculated for each test problem. Such results indicated that the performance of DDECV \(+\) Repair was similar to that of DCTC, because no significant differences were observed.

Table 18 Average percentage of selected infeasible individuals with change frequencies of 250, 500, 1000, 2000 and 4000 evaluations
Table 19 Average percentage of selected infeasible individuals with a change frequency of 1000 evaluations and low and high change severities (\(k=0.25,\) 1.0 and \(S=20,\) 50)
Table 20 Average and standard deviation Best-error-before-change values obtained by DDECV \(+\) Repair and ICHEA with a change frequency of 1000 evaluations and a medium severity of change  (\(k=0.50\) and \(S=20\))

The fourth experiment generated the following conclusions:

  • DDECV \(+\) Repair outperformed the compared algorithms in the test problems with dynamism in both the objective function and the constraints.

  • In the test problems with a dynamic objective function and static constraints, DCTC outperformed DDECV \(+\) Repair.

  • Similar results were obtained between DDECV \(+\) Repair and DCTC in test problems with a static objective function and dynamic constraints.

Table 21 Average and standard deviation Feasible offline error values obtained by DDECV \(+\) Repair and DCTC with different change frequencies (250, 500 and 1000 evaluations) and several severities of change (\(k=0.25, 0.5\) and 1.0 and \(S=10, 20\) and 50), for test problems with dynamic objective function and dynamic constraints
Table 22 Average and standard deviation Feasible offline error values obtained by DDECV \(+\) Repair and DCTC with different change frequencies  (250, 500 and 1000 evaluations) and several severities of change  (\(k=0.25, 0.5\) and 1.0), for test problems with dynamic objective function and static constraints
Table 23 Average and standard deviation Feasible offline error values obtained by DDECV \(+\) Repair and DCTC with different change frequencies (250, 500 and 1000 evaluations), for test problems with dynamic objective function and static constraints
Table 24 Average and standard deviation Feasible offline error values obtained by DDECV \(+\) Repair and DCTC with different change frequencies (250, 500 and 1000 evaluations) and several severities of change (\(S=10, 20\) and 50), for test problems with static objective function and dynamic constraints

5 Conclusions and future work

This paper presented an empirical assessment of the dynamic differential evolution with combined variants plus a repair method (DDECV \(+\) Repair) in the solution of dynamic constrained optimization problems (DCOPs). Four unexplored issues of DDECV \(+\) Repair were addressed: (1) the role of the exploration promotion mechanism, the repair method, the random immigrants and the memory population in its performance, (2) its sensitivity to different change frequencies and severities, (3) its ability to detect a change and recover after it, as well as its diversity handling during the search, and (4) its performance when solving problems with dynamism in both the objective function and the constraints or in only one of them. Using seven performance measures, eight algorithms found in the specialized literature and a recently proposed set of test problems, four experiments were designed.

From the obtained results, statistically validated, it was found that the repair method is the most relevant mechanism in DDECV \(+\) Repair, while the presence of the random immigrants is positive but may affect the results in problems where the feasible global optimum is located at the boundaries of either the feasible region or the search space. The memory mechanism plays an important role in DDECV + Repair because the optimal solution sometimes returns to locations near its previous regions. Moreover, the sensitivity of DDECV \(+\) Repair to different change frequencies and severities was low, and of those two sources of difficulty, faster changes affect its ability to recover after a change more than high severities do. Furthermore, the re-evaluation-based change detection mechanism proved to be very effective. It was also shown that, as in static constrained optimization, keeping a low ratio of infeasible solutions during the search helps DDECV \(+\) Repair to reach competitive results in the resolution of DCOPs. Finally, DDECV \(+\) Repair had a better performance when solving test problems with dynamism in both the objective function and the constraints, and a competitive performance in problems with dynamism in only one of them.

Part of the future work includes testing DDECV \(+\) Repair with random change frequencies and severities. Also, the immigrants technique will be revisited so as to analyze the negative effect of a feasible global optimum located at the boundaries of either the search space or the feasible region. Furthermore, other modifications will be designed in order to improve the performance of DDECV \(+\) Repair when dealing with dynamism in the objective function and the constraints separately. Finally, other test problems with smaller feasible regions will be sought so as to test the repair method in those situations.