
1 Introduction

In science and engineering disciplines, a large number of constrained optimization problems (COPs) are encountered. During the past decades, researchers have widely used evolutionary algorithms (EAs) to deal with COPs [13] and have made considerable achievements. In recent years, with the development of multi-objective and adaptive evolutionary theories and methodologies, more and more works have applied these advances to solving constrained problems.

Coello [4] first used a dominance-based selection strategy to deal with constraints. In [5], Coello and Mezura proposed a new version of the Niched-Pareto Genetic Algorithm (NPGA). This approach uses a dominance-based selection scheme to assign fitness values and adopts an additional parameter called \(S_{r}\) to control the diversity of the population. Venkatraman and Yen [6] proposed a genetic algorithm-based two-phase framework for solving COPs. In the first phase the objective function is completely disregarded and only the constraints of the problem are considered; in the second phase, the objective function and the satisfaction of the constraints are treated as two objectives to be optimized simultaneously. Hsieh [7] proposed an algorithm based on the well-known multi-objective evolutionary algorithm NSGA-II. The procedure, used as a hybrid constraint handling mechanism, combines the \(\epsilon \)-comparison method of multi-objective optimization with the penalty method of constraint handling. Wang [8] presented a hybrid constrained optimization EA (HCOEA), which effectively combines multi-objective optimization with global and local search models. In the global model, Pareto-dominance-based tournament selection between parents and offspring, together with a similarity measure based on the Euclidean distance between individuals, is used to promote population diversity; in the local model, a parallel search in subpopulations is implemented to accelerate convergence. The penalty function is a classical method for solving COPs, but determining the penalty parameters is difficult. Deb [9] proposed a hybrid algorithm which combines a bi-objective evolutionary approach with the penalty function methodology. The bi-objective approach provides a good estimate of the penalty parameter, and the unconstrained penalty function constructed from this parameter generates the optimal solutions of the overall hybrid algorithm.
Zeng and Li [10, 11] used not only multi-objective optimization technology but also a dynamic constraint mechanism for COPs. They first convert a COP into a dynamic constrained multi-objective optimization problem, then adopt a dynamic constrained multi-objective optimization algorithm to solve it.

In this paper, we convert the COP into a many-objective optimization problem with m+1 objectives (where m is the number of constraints); in other words, each constraint function is converted into a violation objective function, so that many-objective optimization techniques can be introduced into our method. Besides, we adopt a dynamic constraint handling mechanism to deal with the constraints. The proposed many-objective optimization evolutionary algorithm with dynamic constraint handling, MaDC, uses DE to generate offspring and a reference-point-based nondominated sorting approach to create the next parent population.

The rest of this paper is organized as follows. Section 2 introduces the process of converting a COP into an equivalent dynamic constrained many-objective optimization problem (DCMOP). Section 3 describes the implementation of the MaDC algorithm. Experiments and results are presented in Sect. 4 to test whether the methodology is effective. Section 5 gives the conclusion.

2 Convert COP to DCMOP

This section first converts a COP into an equivalent constrained many-objective optimization problem (CMOP). Then the CMOP is converted into a dynamic constrained many-objective optimization problem (DCMOP), i.e., a series of CMOPs. In this way, the COP can be solved by solving the equivalent DCMOP.

2.1 Convert COP to CMOP

Without loss of generality, minimization optimization is assumed unless specified otherwise in this paper. A constrained optimization problem (COP) can be stated as follows:

$$\begin{aligned} \begin{array}{ll} \min &{} y=f({\varvec{x}})\\ \text {s.t.:} &{} {\varvec{g}}({\varvec{x}})=(g_1({\varvec{x}}),g_2({\varvec{x}}),\ldots ,g_{m}({\varvec{x}}))\le \mathbf {0}\\ \text {where} &{} {\varvec{x}}=(x_1,x_2,\ldots ,x_{n})\in \mathbf {X}\\ &{} \mathbf {X} = \{{\varvec{x}}\,|\,{\varvec{l}}\le {\varvec{x}}\le {\varvec{u}}\}\\ &{} {\varvec{l}}=(l_1,l_2,\ldots ,l_{n}),\ {\varvec{u}}=(u_1,u_2,\ldots ,u_{n}) \end{array} \end{aligned}$$
(1)

where \({\varvec{x}}\) is the solution vector and \(\mathbf {X}\) is the whole search space, \({\varvec{l}}\) and \({\varvec{u}}\) are the lower and upper bounds of the solution space, respectively, \({\varvec{g}}({\varvec{x}})\le \mathbf {0}\) is the constraint and \(\mathbf {0}\) is the constrained boundary. When an equality constraint \(h({\varvec{x}})=0\) is involved in the COP, it is usually transformed into an inequality constraint \(|h({\varvec{x}})|-\epsilon \le 0\), where \(\epsilon \) is a small positive number; \(\epsilon =0.0001\) in this paper.
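As an illustration, the equality-to-inequality transformation can be sketched in a few lines of Python (the helper name is ours, not from the paper):

```python
EPSILON = 1e-4  # the close-to-zero tolerance used in this paper

def equality_to_inequality(h):
    """Wrap an equality constraint h(x) = 0 as the inequality |h(x)| - eps <= 0."""
    def g(x):
        return abs(h(x)) - EPSILON
    return g

# Example: the equality constraint h(x) = x0 + x1 - 1 = 0
g = equality_to_inequality(lambda x: x[0] + x[1] - 1.0)
print(g([0.5, 0.5]))  # -> -0.0001 (satisfies g <= 0, so feasible)
```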

A solution \({\varvec{x}}=(x_1,x_2, ...,x_{n})\) is feasible if it satisfies the constraint conditions \({\varvec{g}}({\varvec{x}})\le \mathbf {0}\); otherwise it is infeasible. The feasible set \(\mathbf {S}_{\mathbf {F}}\) of a COP is defined as \(\mathbf {S}_{\mathbf {F}}=\{{\varvec{x}}\,|\,{\varvec{x}} \in \mathbf {X} \text { and } {\varvec{x}} \text { is feasible}\}\).

Now we construct a constrained many-objective optimization problem equivalent to the COP discussed above. This is done by converting the constraint function \({\varvec{g}}({\varvec{x}})=(g_1({\varvec{x}}),g_2({\varvec{x}}),...,g_{m}({\varvec{x}}))\) into a violation objective function \(\varvec{\varphi }({\varvec{x}})=(\varphi _1({\varvec{x}}),\varphi _2({\varvec{x}}),...,\varphi _{m}({\varvec{x}}))\) and inserting \(\varvec{\varphi }({\varvec{x}})\) into the COP as additional objectives without deleting the constraints \({\varvec{g}}({\varvec{x}})\le \mathbf {0}\); i.e., a constrained many-objective optimization problem (CMOP) is constructed as follows:

$$\begin{aligned} \begin{array}{ll} \min &{} {\varvec{y}}=(f({\varvec{x}}),\varphi _1({\varvec{x}}),\varphi _2({\varvec{x}}),\ldots ,\varphi _{m}({\varvec{x}}))\\ \text {s.t.:} &{} {\varvec{g}}({\varvec{x}})=(g_1({\varvec{x}}),g_2({\varvec{x}}),\ldots ,g_{m}({\varvec{x}}))\le \mathbf {0} \end{array} \end{aligned}$$
(2)

where \({\varvec{x}}=(x_1,x_2, ...,x_{n})\) is an n-dimensional search vector, \({\varvec{y}}\) and \({\varvec{g}}\) are functions of \({\varvec{x}}\), and \(\varphi _{i}({\varvec{x}})\) \((i=1,2,\cdots ,m)\) is a violation objective function converted from \(g_{i}({\varvec{x}})\); the conversion is stated as:

$$\begin{aligned} \varphi _{i}({\varvec{x}})= \max \{g_{i}({\varvec{x}}), 0\}, \quad i = 1, 2, \cdots ,m \end{aligned}$$
(3)
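The conversion in Eq. (3) is straightforward to implement; a minimal sketch (function name is ours):

```python
def violation_objectives(g_values):
    """Convert constraint values g_i(x) into violation objectives phi_i = max(g_i, 0).

    A satisfied constraint (g_i <= 0) contributes zero violation; a violated
    constraint contributes its positive amount of violation.
    """
    return [max(g, 0.0) for g in g_values]

print(violation_objectives([-1.0, 0.5, 0.0]))  # -> [0.0, 0.5, 0.0]
```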

Thus the COP is transformed into a CMOP with m+1 evolution objectives and m constraint conditions.

Obviously, the CMOP in Eq. (2) has the same feasible set and the same optimal solutions as the COP in Eq. (1). The CMOP is therefore equivalent to the COP, and we can solve the COP by solving the CMOP with a constrained many-objective optimization algorithm.

In multi-objective optimization, Pareto dominance is an essential relation for comparing two solutions. Given two solutions \({\varvec{x}}_1\) and \({\varvec{x}}_2\), \({\varvec{x}}_1\) is said to Pareto-dominate \({\varvec{x}}_2\) if and only if \(f_{i}({\varvec{x}}_1) \le f_{i}({\varvec{x}}_2)\) for every objective index i, and \(f_{j}({\varvec{x}}_1) < f_{j}({\varvec{x}}_2)\) for at least one index j. A solution \({\varvec{x}}^ {*}\) is a Pareto optimal (non-dominated) solution if there is no solution \({\varvec{x}}\) such that \(f({\varvec{x}})\) Pareto-dominates \(f({\varvec{x}}^ {*})\).
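A minimal Python sketch of this dominance test, for minimization as assumed throughout the paper:

```python
def dominates(y1, y2):
    """True if objective vector y1 Pareto-dominates y2 (minimization).

    y1 must be no worse in every objective and strictly better in at least one.
    """
    return (all(a <= b for a, b in zip(y1, y2))
            and any(a < b for a, b in zip(y1, y2)))

print(dominates([1.0, 2.0], [2.0, 2.0]))  # -> True
print(dominates([1.0, 2.0], [2.0, 1.0]))  # -> False (incomparable)
```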

2.2 Convert CMOP to DCMOP

A many-objective evolutionary algorithm (MOEA) solving the CMOP faces the same difficulty in handling constraints as an EA solving the COP. However, MOEAs perform very well on multi-objective optimization problems (MOPs) without constraints; if we can make the CMOP behave like an unconstrained MOP and solve it with an MOEA, we can obtain the optimal solution. The key issue is therefore to keep the population feasible at all times, which can be achieved by adopting a dynamic constraint handling technique.

First, the original constrained boundary \(\mathbf {0}\) of the CMOP in Eq. (2) is greatly broadened to \({\varvec{e}}^{(0)}\) at the beginning. Then the broadened boundary \({\varvec{e}}^{(0)}\) shrinks gradually back to \(\mathbf {0}\). Each change of the boundary is small enough that the whole population always remains nearly feasible.

This process constructs a sequence of CMOPs \(\{CMOP^{(s)}\}, s = 0, 1, 2, \cdots , S\), i.e., a dynamic constrained many-objective optimization problem (DCMOP) as follows:

$$\begin{aligned} \begin{array}{ll} CMOP^{(0)}&{}\left\{ \begin{aligned} &{}\min \ {\varvec{y}}=(f({\varvec{x}}),\varphi _1({\varvec{x}}),\varphi _2({\varvec{x}}),\ldots ,\varphi _{m}({\varvec{x}}))\\ &{}\text {s.t.:}\ {\varvec{g}}({\varvec{x}})\le {\varvec{e}}^{(0)} \end{aligned} \right. \\ CMOP^{(1)}&{}\left\{ \begin{aligned} &{}\min \ {\varvec{y}}=(f({\varvec{x}}),\varphi _1({\varvec{x}}),\varphi _2({\varvec{x}}),\ldots ,\varphi _{m}({\varvec{x}}))\\ &{}\text {s.t.:}\ {\varvec{g}}({\varvec{x}})\le {\varvec{e}}^{(1)} \end{aligned} \right. \\ &{}\qquad \vdots \\ CMOP^{(S)}&{}\left\{ \begin{aligned} &{}\min \ {\varvec{y}}=(f({\varvec{x}}),\varphi _1({\varvec{x}}),\varphi _2({\varvec{x}}),\ldots ,\varphi _{m}({\varvec{x}}))\\ &{}\text {s.t.:}\ {\varvec{g}}({\varvec{x}})\le {\varvec{e}}^{(S)}=\mathbf {0} \end{aligned} \right. \\ \end{array} \end{aligned}$$
(4)

where \({\varvec{e}}^{(s)} = (e^{(s)}_1, e^{(s)}_2, ..., e^{(s)}_{m}),s \in \{ 0, 1, 2, \cdots , S\}\), \({\varvec{e}}^{(0)}\ge {\varvec{e}}^{(1)}\ge \cdots \ge {\varvec{e}}^{(S)}=\mathbf {0} \).

\({\varvec{e}}^{(s)}\) is called elastic constrained boundary, and s is called environment state.

The initial boundary \({\varvec{e}}^{(0)}\) at the initial state \(s = 0\) must make the initial population \(\mathbf {P}(0)\) e-feasible. It is set as \(e^{(0)}_{i} = \max \limits _{{\varvec{x}} \in \mathbf {P}(0)}\{g_i({\varvec{x}}) \} \), where \(g_i({\varvec{x}})\) is the value of the ith constraint function, \(i = 1, 2, ... ,m\). At the final state \(s = S\), the boundary returns to \(\mathbf {0}\), i.e., \({\varvec{e}}^{(S)}=\mathbf {0}\). The boundary change at each environment state is modelled as follows:

$$\begin{aligned} \begin{array}{l} e_i^{(s)} = A_i e^{-(\frac{s}{B_i})^{2}} - \varepsilon , i=1,2,\cdots ,m\\ \end{array} \end{aligned}$$
(5)
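The paper does not state how the coefficients \(A_i\) and \(B_i\) in Eq. (5) are chosen. A natural choice, sketched here purely as an assumption, is \(A_i = e_i^{(0)} + \varepsilon \), so that the schedule starts exactly at the initial boundary, together with \(B_i = S/\sqrt{\ln (A_i/\varepsilon )}\), so that \(e_i^{(S)} = 0\):

```python
import math

EPS = 1e-7  # the epsilon of Eq. (5), as set in Sect. 4

def boundary_schedule(e0_i, S):
    """Return e_i^(s) for s = 0..S following Eq. (5): e = A*exp(-(s/B)^2) - EPS.

    A_i and B_i are not specified in the paper; here we ASSUME
    A_i = e_i^(0) + EPS (so e_i^(0) is reproduced at s = 0) and
    B_i = S / sqrt(ln(A_i / EPS)) (so e_i^(S) = 0). Requires e0_i > 0.
    """
    A = e0_i + EPS
    B = S / math.sqrt(math.log(A / EPS))
    return [A * math.exp(-(s / B) ** 2) - EPS for s in range(S + 1)]

sched = boundary_schedule(e0_i=10.0, S=100)  # starts at 10.0, decays to 0
```

Any other monotonically shrinking schedule with the same endpoints would serve the same purpose; the Gaussian form of Eq. (5) simply shrinks slowly at first and faster later.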

With regard to the elastic constrained boundary, if a solution satisfies the inequality \({\varvec{g}}({\varvec{x}}) \le {\varvec{e}}^{(s)}\), it is said to be e-feasible; otherwise, it is said to be e-infeasible. Obviously, a feasible solution is e-feasible, while an e-feasible solution may be feasible or infeasible.

Pareto-domination as defined in Subsect. 2.1 does not consider the constraints. An e-constrained Pareto-domination for the DCMOP in Eq. (4) is stated as follows:

Given two solutions:

  • if both are e-feasible, the one which dominates the other on all objectives (including the original objective and the violation objectives) wins;

  • if one is e-feasible and the other is e-infeasible, the e-feasible solution wins;

  • if both are e-infeasible, the one which dominates the other on the violation objectives wins.
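The three rules above can be sketched as a comparator. This is a hypothetical helper (the solution layout and function names are ours): each solution is a pair of its objective vector \((f, \varphi _1, \ldots , \varphi _m)\) and its constraint values \((g_1, \ldots , g_m)\).

```python
def dominates(y1, y2):
    """True if y1 Pareto-dominates y2 (minimization)."""
    return (all(a <= b for a, b in zip(y1, y2))
            and any(a < b for a, b in zip(y1, y2)))

def e_feasible(g_values, e):
    """True if every constraint value is within the elastic boundary e^(s)."""
    return all(g <= ei for g, ei in zip(g_values, e))

def e_constrained_wins(sol1, sol2, e):
    """Return 1 if sol1 wins, -1 if sol2 wins, 0 if neither.

    Each sol is (y, g) with y = (f, phi_1..phi_m) and g the constraint values.
    """
    y1, g1 = sol1
    y2, g2 = sol2
    feas1, feas2 = e_feasible(g1, e), e_feasible(g2, e)
    if feas1 and not feas2:          # rule 2: e-feasible beats e-infeasible
        return 1
    if feas2 and not feas1:
        return -1
    if feas1 and feas2:              # rule 1: compare on all m+1 objectives
        a, b = y1, y2
    else:                            # rule 3: compare on violation objectives only
        a, b = y1[1:], y2[1:]
    if dominates(a, b):
        return 1
    if dominates(b, a):
        return -1
    return 0
```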

3 Algorithm Description

This section gives the implementation of many-objective optimization algorithm with dynamic constraints (MaDC) for solving DCMOP.

Algorithm 1. The overall framework of MaDC (figure)

MaDC uses DE to generate offspring and the reference-point-based nondominated sorting approach to create the next population. The reference-point-based nondominated sorting approach was proposed in [12]; it is an evolutionary many-objective optimization technique combining nondominated sorting with a reference-point-based selection strategy.

Note that if the algorithm cannot evolve an e-feasible population at a certain state s, it would iterate infinitely at that state. A maximal run generation MaxG is set to abort such a run.

The generation of the offspring population in step 3 of Algorithm 1 is a combination of several genetic operators: affine mutation, crossover and uniform mutation. The details of the genetic operators are given in Algorithm 2:

Algorithm 2. Offspring generation (figure)
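Since Algorithm 2 is presented only as a figure, the following is a hypothetical sketch of a standard DE/rand/1 mutation (an affine combination of three individuals), binomial crossover, and uniform mutation, using the parameter names from Sect. 4 (F, CR, \(P_m\)); it is not the paper's exact operator set:

```python
import random

def de_offspring(pop, lower, upper, F=0.5, CR=0.9, Pm=0.01):
    """Generate one trial vector per parent (sketch; requires len(pop) >= 4)."""
    n = len(lower)
    offspring = []
    for i, parent in enumerate(pop):
        # DE/rand/1 mutation: affine combination of three distinct others
        r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
        mutant = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k]) for k in range(n)]
        # binomial crossover; jrand guarantees at least one mutant dimension
        jrand = random.randrange(n)
        trial = [mutant[k] if (random.random() < CR or k == jrand) else parent[k]
                 for k in range(n)]
        # uniform mutation, then clamp back into the box [lower, upper]
        for k in range(n):
            if random.random() < Pm:
                trial[k] = random.uniform(lower[k], upper[k])
            trial[k] = min(max(trial[k], lower[k]), upper[k])
        offspring.append(trial)
    return offspring
```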

Algorithm 3 gives the details of creating the next parent population. It uses the reference-point-based nondominated sorting method to select the individuals that constitute the next population.

Algorithm 3. Creating the next parent population (figure)

4 Experiments and Results

In this section, we apply the proposed methodology to the benchmark problems of Problem Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-Parameter Optimization [13], available online at http://www.ntu.edu.sg/home/epnsugan/. The 24 test instances are minimization problems. Details of the test problems can be found in [13].

4.1 Determination of Reference Points and Algorithm Parameters

The proposed algorithm uses a predefined set of reference points to ensure the diversity of many-objective optimization. We use the determination method presented in [12], which places points on a normalized hyper-plane (an \((M-1)\)-dimensional unit simplex) that is equally inclined to all objective axes and has an intercept of one on each axis. If p divisions are considered along each objective, the total number of reference points (H) in an M-objective problem is given by:

$$\begin{aligned} \begin{array}{ll} H=C_{M+p-1}^{p} \end{array} \end{aligned}$$
(6)

For example, in a three-objective problem (\(M= 3\)), if six divisions (\(p= 6\)) are chosen for each objective axis, \(H = 28\) reference points will be created [12]. When there are many objectives (\(M \ge 5\)), one layer of reference points is not appropriate. For an eight-objective problem, even \(p= 8\) (to have exactly one intermediate reference point) requires 6435 reference points by Eq. (6). To avoid such a situation, we use two layers of reference points (a boundary layer and an inside layer) in many-objective problems.
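Equation (6) is a simple binomial coefficient; it can be checked directly (the function name is ours):

```python
from math import comb

def num_reference_points(M, p):
    """H = C(M+p-1, p): reference points on a unit simplex with p divisions (Eq. 6)."""
    return comb(M + p - 1, p)

print(num_reference_points(3, 6))  # -> 28, the three-objective example above
```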

The population size N is set to the number of reference points. Table 1 shows the chosen numbers of reference points (H) and the corresponding population sizes.

Table 1. Number of reference points and corresponding population size
Table 2. Function values obtained by MADC, SAMO-DE, ECHT-EP2, DE-DPS, HCOEA and DCMOEA

The other parameters are as follows:

Number of repeats: 25.

In the offspring generation procedure (Algorithm 2), the scaling factor \(F = 0.5\), crossover rate \(CR = 0.9\), uniform mutation probability \(P_{m} = 0.01\).

The \(\varepsilon \) in Eq. 5 was set to \(10^{-7}\).

The number of environment changes was set to \(S=240\,000/N\).

The maximal run generation was \(MaxG = 10\,000\): if a problem has no feasible solutions, or the algorithm cannot find any, the run aborts after evolving 10 000 generations. No feasible solutions could be found for problems g20 and g22.

4.2 Results and Comparison

The detailed results of MaDC are provided in Table 2, along with those of state-of-the-art algorithms: (1) self-adaptive multioperator differential evolution (SAMO-DE) [14]; (2) ensemble of constraint handling techniques based on evolutionary programming (ECHT-EP2) [15]; (3) differential evolution with dynamic parameters selection (DE-DPS) [3]; (4) hybrid constrained optimization evolutionary algorithm (HCOEA) [8]; and (5) dynamic constrained multi-objective evolutionary algorithm (DCMOEA) [10]. All algorithms solved 22 of the test problems, except HCOEA and DCMOEA, which solved only 13.

From Table 2, MaDC was able to obtain the optimal solutions for all problems except g23. SAMO-DE, ECHT-EP2 and DE-DPS were able to obtain the optimal solutions for 20, 19 and 20 problems, respectively. HCOEA and DCMOEA obtained the optimal solutions for 9 and 12 out of 13 problems, respectively. With regard to the average results, MaDC is superior to SAMO-DE, ECHT-EP2, DE-DPS, HCOEA and DCMOEA on eight, four, three, three and two test problems, respectively. It can be seen that our proposed method performs better than, or is competitive with, the state-of-the-art algorithms.

5 Conclusion

In this paper, we have proposed a many-objective optimization algorithm with a dynamic constraint mechanism for solving constrained optimization problems. We first construct a dynamic constrained many-objective optimization problem equivalent to the COP, then adopt the MaDC algorithm to solve the DCMOP, and thus the COP is solved. The dynamic mechanism is implemented by setting an elastic boundary for the constrained problem, and the trade-off between population diversity and accuracy is mainly handled by the reference-point-based nondominated sorting method. The proposed algorithm was tested on a number of benchmark problems, and the experimental results show that it is competitive with the state-of-the-art algorithms referred to in this paper. Future work includes: (1) retaining more feasible solutions in each evolutionary generation by adopting other selection strategies, to improve the performance of the algorithm; (2) introducing better many-objective optimization techniques into the algorithm; (3) using a dynamic parameter selection mechanism in DE to speed up convergence; and (4) exploring other candidates for the dynamic environment.