Abstract
Nature-inspired optimization algorithms are typically designed for unconstrained problems. Real-world optimization problems, however, usually involve many limitations, whether bounds on the design variables or equality/inequality constraints. Extensive efforts have therefore been made to make these limitations tractable for optimization algorithms. More importantly, the chosen constraint handling approach affects an algorithm's performance considerably. In this study, several well-known strategies are incorporated into the particle swarm optimization (PSO) algorithm. The performance of the PSO algorithm is examined on several constrained benchmark problems, and the results are discussed comprehensively.
1 Introduction
Real-world optimization problems are often very complicated, with many decision variables and practical limitations on the range of feasible solutions [10, 14, 15, 17, 26, 27]. These complexities result in optimization problems that are non-convex, discontinuous, and high-dimensional, and they pose many challenges in developing algorithms that converge to the global optimal solution. Metaheuristic algorithms are designed for unconstrained optimization problems, so it is crucial to develop methods to account for constraints [6, 11,12,13, 16, 25, 32]. Most metaheuristic optimization algorithms are based on two crucial phases: (1) diversification and (2) intensification.
Diversification focuses on exploring the entire search space, often in random and chaotic ways, making the algorithm capable of overcoming difficulties related to the discontinuity of the solution space. Intensification, on the other hand, tends to focus the search on regions identified by the best solutions. Therefore, one of the most critical features of any constraint handling scheme is limiting the influence of infeasible solutions while preserving the stability of the algorithm. Many constrained optimization problems have optimal solutions located near the boundaries of the search space, so any constraint handling approach needs to enable the algorithm to explore those boundaries effectively.
Much research has been devoted to developing efficient methods for handling constraints in both single- and multi-objective algorithms. Homaifar et al. [22] and Hoffmeister and Sprave [21] proposed methods based on a static penalty function in which, for any violated constraint, a constant penalty term is added to the objective function value. The resulting optimization algorithm then attempts to decrease the penalty value while also finding the global optimum. Morales and Quezada [33] proposed another version of the static penalty function method, called the death penalty, in which a predetermined large value is assigned as the objective function for infeasible solutions. Joines and Houck [24], Kazarlis and Petridis [28], and Petridis et al. [35] developed dynamic penalty functions in which the penalty term increases with the iterations of the algorithm. Another iteration-dependent penalty function, called the annealing penalty function, was proposed by Carlson and Shonkwiler [4] based on simulated annealing: as the temperature decreases over the iterations, the penalty becomes higher. Coit and Smith [7] and Gen and Cheng [18] proposed adaptive penalty function methods that permit algorithms to explore somewhat beyond the feasible search space by adjusting the penalty term during the search. Other researchers have introduced a variety of hybrid methods, such as Lagrangian multipliers by Adeli and Cheng [1], a hybrid interior-Lagrangian penalty-based approach by Myung and Kim [34], and the application of fuzzy logic by Le [30]. Deb [9] proposed a method based on the separation of constraints and objectives, using a pair-wise comparison of solutions in every iteration.
In this context, a tournament selection operator was proposed to compare candidate solutions with the following strategy: 1—any feasible solution overcomes any infeasible solution; 2—between two feasible solutions, the fitter one wins; 3—between two infeasible solutions, the one with the lower constraint violation is preferred.
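Deb's three rules translate directly into a pair-wise comparator. The following is a minimal sketch (function and argument names are ours, not from the original paper), assuming minimization and a nonnegative aggregate violation measure where zero means feasible:

```python
def deb_compare(f_a, viol_a, f_b, viol_b):
    """Return True if candidate A wins Deb's tournament against B.

    f_*    : objective values (minimization assumed)
    viol_* : total constraint violation (0 means feasible)
    """
    feas_a, feas_b = viol_a == 0, viol_b == 0
    if feas_a and not feas_b:      # rule 1: feasible beats infeasible
        return True
    if feas_b and not feas_a:
        return False
    if feas_a and feas_b:          # rule 2: the fitter feasible solution wins
        return f_a < f_b
    return viol_a < viol_b         # rule 3: the smaller violation wins
```

Because the comparison never mixes objective values with violation magnitudes, this scheme needs no penalty coefficients, which is its main appeal over the penalty functions reviewed below.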
The impact of constraint handling methods on the efficiency of algorithms has also been the subject of several recent studies. Li et al. [31] explored the effect of different constraint handling strategies on the performance of evolutionary multi-objective algorithms: three approaches (the constrained-domination principle, a self-adaptive penalty, and an adaptive tradeoff model) were combined with the nondominated sorting genetic algorithm II to solve 18 test problems. Jamal et al. [23] explored three constraint handling methods (i.e., ε-level comparison, superiority of feasible solutions, and a penalty function) for a matrix adaptation evolution strategy to solve the CEC-2010 benchmark constrained problems. Biswas et al. [3] tackled the problem of optimal power flow solutions using differential evolution algorithms with three different constraint handling approaches: 1—superiority of feasible solutions (SF), 2—self-adaptive penalty (SP), and 3—an ensemble of these two techniques (SF and SP). Ameca-Alducin et al. [2] explored the efficiency of four constraint handling schemes (i.e., stochastic ranking, ε-constrained, penalty, and feasibility rules) for the differential evolution algorithm on dynamic constrained optimization problems. Zou et al. [37] developed a new constraint handling method for solving the combined heat and power economic dispatch problem using an improved genetic algorithm. Datta et al. [8] proposed a novel constraint handling approach based on an individual penalty in which all the constraints are scaled adaptively without needing problem-specific information; the approach was examined on 23 benchmark test problems and two engineering problems using a hybrid evolutionary algorithm.
In this paper, we present a review of six well-known penalty function approaches for constraint handling. Each scheme is incorporated into a particle swarm optimization (PSO) algorithm to evaluate its effectiveness and efficiency on a set of benchmark constrained optimization problems.
2 Particle Swarm Optimization
Kennedy and Eberhart [29] developed the particle swarm optimization (PSO) algorithm based on the behavior of bird flocks searching for food. The PSO algorithm searches the solution space with a population of particles, each representing a potential solution to the optimization problem. The algorithm changes the positions of the particles within the search space with the aim of finding better solutions. Each particle is characterized by two primary quantities, position and velocity, and its position changes iteratively based on its current position and velocity:
$$ X_{i}^{t + 1} = X_{i}^{t} + V_{i}^{t + 1} $$(1)
where \( X_{i}^{t + 1} \) is the updated position of the i-th particle, \( X_{i}^{t} \) is the current position, and \( V_{i}^{t + 1} \) is the velocity. A targeted search is conducted by moving each particle according to its velocity term. Each particle's velocity depends on two important reference points in each iteration: the particle's position relative to the global best-found solution Pg and relative to its own best-found solution Pi. Clerc and Kennedy [5] proposed updating the velocity term as
$$ V_{i}^{t + 1} = \chi \left( {V_{i}^{t} + C_{1} r_{1} \left( {P_{i} - X_{i}^{t} } \right) + C_{2} r_{2} \left( {P_{g} - X_{i}^{t} } \right)} \right) $$(2)
where \( V_{i}^{t + 1} \) and \( V_{i}^{t} \) are the new and old velocities of the i-th particle, respectively, C1 and C2 control the relative attraction to Pi and Pg, respectively, r1 and r2 are random numbers within [0, 1], and χ < 1 is the constriction factor, which slows the convergence rate and provides better exploration of the solution space (diversification).
Choosing appropriate values for parameters in the PSO algorithm is vital to obtaining the best performance. In Eq. (2), the values of χ, C1, and C2 impact how each particle will be attracted to its best position and the global best position. Clerc and Kennedy [5] proposed the following values: C1 = C2 = 2.05 and χ = 0.72984.
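The constriction-factor update with these parameter values can be sketched for a single particle as follows. This is a minimal illustration, not the authors' code; all names are ours:

```python
import random

# Clerc & Kennedy constriction-factor parameters, as cited in the text.
CHI, C1, C2 = 0.72984, 2.05, 2.05

def pso_step(x, v, p_best, g_best):
    """One velocity/position update for a single particle.

    x, v, p_best, g_best are lists of floats of equal length:
    current position, current velocity, the particle's own best
    position Pi, and the swarm's global best position Pg.
    """
    new_x, new_v = [], []
    for xi, vi, pi, pg in zip(x, v, p_best, g_best):
        r1, r2 = random.random(), random.random()
        vi_new = CHI * (vi + C1 * r1 * (pi - xi) + C2 * r2 * (pg - xi))
        new_v.append(vi_new)
        new_x.append(xi + vi_new)       # position update X = X + V
    return new_x, new_v
```

Note that when a particle sits exactly at both Pi and Pg, the attraction terms vanish and the velocity is simply damped by χ, which is how the constriction factor stabilizes the swarm.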
3 Constraint Handling Approaches
In general, an optimization problem for the objective function f(x) can be described as
$$ {\text{minimize}}\;f\left( x \right) $$(3)
subject to
$$ h_{j} \left( x \right) = 0,\quad j = 1, \ldots ,m $$(4)
$$ g_{i} \left( x \right) \le 0,\quad i = 1, \ldots ,p $$(5)
where x is a vector of design variables, h and g are the equality and inequality constraints, respectively, and m and p are the numbers of equality and inequality constraints, respectively.
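All the penalty schemes below need a per-constraint violation measure. A common convention (our assumption; the text does not spell it out) is to count max(0, g(x)) for an inequality and the amount by which |h(x)| exceeds a small tolerance for an equality:

```python
def total_violation(x, ineq=(), eq=(), tol=1e-6):
    """Aggregate constraint violation for a candidate x.

    ineq : callables g with g(x) <= 0 required for feasibility
    eq   : callables h with |h(x)| <= tol required for feasibility
    Returns 0.0 exactly when x is feasible.
    """
    v = sum(max(0.0, g(x)) for g in ineq)
    v += sum(max(0.0, abs(h(x)) - tol) for h in eq)
    return v
```

Treating equalities through a tolerance band is a standard trick, since an exact h(x) = 0 is almost never hit by a stochastic search.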
In this study, six different penalty function-based approaches are utilized to incorporate the effects of constraints into the PSO algorithm. Penalty function-based techniques are utilized to transform a constrained problem into an unconstrained one. In this way, the optimization algorithm generates a potential solution without considering its feasibility. At the next step, the constraints are checked, and a penalty value, based on the degree of violations of each constraint, is added to the objective function value. The penalty functions considered in this study are:
1. A simple static penalty function approach proposed by Homaifar et al. [22] is used to compute the penalized objective function F(x) as
$$ F\left( x \right) = f\left( x \right) + \mathop \sum \limits_{i = 1}^{p} R_{i,j} \,g_{i}^{2} \left( x \right) $$(6)
where Ri,j is the penalty coefficient, p is the number of constraints, and j = 1, 2, …, l, where l is the number of violation levels defined by the user. In this study, we used the same constant penalty coefficient for all the constraints.
2. The static penalty function method proposed by Hoffmeister and Sprave [21] is defined as
where
3. The death penalty function proposed by Morales and Quezada [33] for updating the objective value is given as
$$ F\left( x \right) = \left\{ {\begin{array}{*{20}c} {f\left( x \right)\quad if\;x\;is\;feasible} \\ {K - \mathop \sum \limits_{i = 1}^{s} \left( {\frac{K}{p}} \right)\quad otherwise} \\ \end{array} } \right. $$(9)
where K is a large constant, s is the number of satisfied constraints, and p is the total number of constraints.
4. The adaptive penalty function approach proposed by Coit and Smith [7] alters the penalty term based on feedback taken from the evolution process. The penalized objective function is
$$ F\left( x \right) = f\left( x \right) + \left( {B_{feasible} - B_{all} } \right)\mathop \sum \limits_{i = 1}^{p} \left( {\frac{{g_{i} \left( x \right)}}{{NFT\left( t \right)}}} \right)^{k} $$(10)
where Bfeasible is the best known feasible objective function value at generation t, Ball is the best known (unpenalized) objective function value at generation t, k is a constant that adjusts the “severity” of the penalty, and NFT is the so-called Near-Feasibility Threshold, defined as the threshold distance from the feasible region as
$$ NFT = \frac{{NFT_{0} }}{{1 + \lambda \times t}} $$(11)
where NFT0 is an upper bound for NFT, and λ is a constant which guarantees a search of the whole region between NFT0 and zero.
5. The dynamic penalty function developed by Joines and Houck [24] is defined as
$$ F\left( x \right) = f\left( x \right) + \left( {C \times t} \right)^{\alpha } \mathop \sum \limits_{i = 1}^{p} \left| {g_{i} \left( x \right)} \right|^{\beta } $$(12)
where C, α, and β are constants defined by the user.
6. The annealing-based penalty function method, proposed by Carlson and Shonkwiler [4], is defined as
$$ F\left( x \right) = f\left( x \right) + \exp \left( {\left( {C \times t} \right)^{\alpha } \mathop \sum \limits_{i = 1}^{p} \left| {g_{i} \left( x \right)} \right|^{\beta } } \right) $$(13)
These methods labeled from 1 to 6 are referred to in all tables and figures as Homaifar, Hoffmeister, Death, Adaptive, Dynamic, and Annealing, respectively.
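For concreteness, five of the six penalized objectives above can be sketched as plain functions of the objective value and the constraint violations (the Hoffmeister metric penalty is omitted, since its formula is not reproduced here). This is a minimal illustration, not the authors' implementation; all names are ours, and the default parameter values follow the experimental settings in Sect. 4:

```python
import math

def homaifar(f_val, violations, R=1e13):
    """Static penalty (Eq. 6); the per-level coefficients R_{i,j} are
    collapsed to a single constant R, as done in the experiments."""
    return f_val + R * sum(v * v for v in violations)

def death(f_val, n_satisfied, n_constraints, K=1e9):
    """Death penalty (Eq. 9): feasible points keep f(x); infeasible
    points get K reduced by K/p per satisfied constraint."""
    if n_satisfied == n_constraints:
        return f_val
    return K - n_satisfied * (K / n_constraints)

def adaptive(f_val, violations, b_feasible, b_all, t,
             nft0=10.0, lam=2.0, k=2.0):
    """Adaptive penalty (Eqs. 10-11) with NFT(t) = NFT0 / (1 + lam*t)."""
    nft = nft0 / (1.0 + lam * t)
    return f_val + (b_feasible - b_all) * sum((v / nft) ** k
                                              for v in violations)

def dynamic(f_val, violations, t, C=0.5, alpha=2.0, beta=1.0):
    """Dynamic penalty (Eq. 12): the multiplier (C*t)^alpha grows with
    the iteration counter t, tightening feasibility over time."""
    return f_val + (C * t) ** alpha * sum(abs(v) ** beta
                                          for v in violations)

def annealing(f_val, violations, t, C=0.05, alpha=1.0, beta=1.0):
    """Annealing penalty (Eq. 13): violations sit inside exp(), so even a
    feasible point carries a constant exp(0) = 1 offset, which does not
    change the ranking among feasible solutions."""
    return f_val + math.exp((C * t) ** alpha *
                            sum(abs(v) ** beta for v in violations))
```

Each function maps a candidate's raw objective value to the penalized value that PSO actually minimizes, so swapping constraint handling schemes amounts to swapping one of these functions.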
4 Numerical Simulation
In this section, we incorporate the above-mentioned constraint handling approaches into a PSO algorithm to solve several benchmark problems. In all the cases, the particle population size is 50, and the number of iterations is 1,000. Metaheuristic optimization algorithms search the solution space stochastically to find the global minimum. Therefore, we evaluated the performance of each constraint handling approach based on the best, mean, and standard deviation (STD) of solutions over a series of 50 independent runs. The best-found solutions are highlighted in bold in their relevant tables.
In all cases, the parameter values for each constraint handling method are held constant. For the Homaifar approach, the constant penalty coefficient is \( 10^{13} \) for all the constraints. In the Death method, K is \( 10^{9} \). In the Adaptive approach, NFT0, λ, and k are 10, 2, and 2, respectively. In the Dynamic scheme, α, β, and C are 2, 1, and 0.5, respectively. In the Annealing method, α, β, and C are 1, 1, and 0.05, respectively.
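The reporting protocol above (best, mean, and STD over 50 independent runs) can be sketched as a small harness; `optimize_once` is a hypothetical stand-in for one full PSO run returning its best objective value:

```python
import random
import statistics

def run_experiment(optimize_once, n_runs=50, seed0=0):
    """Best / mean / STD over independent runs, as reported in the tables.

    optimize_once : callable returning the best objective of one run
    """
    results = []
    for i in range(n_runs):
        random.seed(seed0 + i)          # independent, reproducible runs
        results.append(optimize_once())
    return min(results), statistics.mean(results), statistics.stdev(results)
```

Seeding each run separately keeps the runs statistically independent while making the whole experiment reproducible.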
4.1 Test Problems Series 1
In this section, we solve five single-objective constrained benchmark optimization problems (Wikipedia website [36]): 1—Rosenbrock function constrained with a cubic and a line, 2—Rosenbrock function constrained to a disk, 3—Mishra's Bird function, 4—Townsend function (modified), and 5—Simionescu function. Table 1 lists the objective function and constraints for each problem.
Table 2 lists the best-found solutions to the benchmark functions for each constraint handling method. Table 3 gives the mean ± STD of the 50 independent runs for each case. Table 4 lists the values of the constraints for the best solution in each case. The results in Table 4 indicate that none of the algorithms can satisfy the F1 constraints. Also, only two methods were successful in meeting all the F3 constraints. For all the remaining functions, all the constraints are satisfied.
For function F1, the constraint violations recorded by Hoffmeister, Death, Dynamic, and Annealing are negligible. Since the Annealing constraint handling method produced both the best solution and the lowest constraint violations, it is considered the best method for function F1; however, the Hoffmeister method has the lowest mean value over multiple runs. For function F2, the Hoffmeister method produced the best solution and had the lowest mean value. Function F3 posed more of a challenge than the other functions. In this case, the Homaifar and Death methods were able to solve the problem and recorded similar best and mean values; however, the Death method recorded a slightly lower STD. Results from the F4 simulations showed that the Death penalty function method found the lowest best value, while the Dynamic method had the lowest mean value. For the F5 function, all the methods except Dynamic found equal best values; however, the Adaptive method had the lowest STD value.
4.2 Test Problems Series 2
In this section, more complicated optimization problems with numerous constraints and design variables are considered to better evaluate the performance of the constraint handling approaches. To this end, we selected benchmark optimization problems presented by Dr. Abdel-Rahman Hedar on his official website [19, 20]. Table 5 lists the objective functions and constraints for the selected optimization problems.
Numerical simulations are conducted on these functions using PSO with the previously mentioned constraint handling schemes. Tables 6 and 7 tabulate the results for the best solution and the mean ± STD over a series of independent runs, respectively. Table 8 provides the constraint values for the best solution found using each constraint handling method. The results listed in Table 8 show that for functions G1 to G4, the Homaifar, Death, Adaptive, and Dynamic methods satisfy all the constraints successfully. A comparison of the best results for G1 shows that the Death method found the lowest objective function value and had a lower mean value than the other successful methods. For the G2 function, the Adaptive method found the highest objective function value and had the best mean and STD values. Results for the G4 function show that the lowest objective function value was recorded by the Adaptive method, while the associated mean and STD values are comparable with the other successful methods. For the G6 function, the only approach that satisfied both constraints was the Adaptive method. The G7 function proved challenging in that none of the methods could satisfy all the constraints. For the G8 function, all the methods except Hoffmeister and Annealing satisfied the constraints effectively. Among the successful approaches, the Dynamic method had higher best and mean values, while the other methods reached similar best values. A comparison of the mean values of the remaining successful methods showed that Homaifar and Death were very close, while Homaifar had a lower STD value. The results listed in Table 8 for the G9 function show that all the methods, except Annealing, were able to satisfy all the constraints; among them, the Hoffmeister method found the lowest best objective function value and had the lowest mean and STD values.
5 Summary
In this study, the performance of six popular penalty function-based constraint handling methods is explored. A PSO algorithm was selected as the testbed because of its robustness and ability to handle complex optimization problems. Each of the six penalty function methods was incorporated into a PSO algorithm, and twelve benchmark optimization problems were solved to examine the effectiveness of each constraint handling approach. For each combination of constraint handling method and objective function (72 cases in total), the best solution was reported, and the mean and standard deviation were computed over a series of 50 independent runs. For each method, the constraint values were reported for the best solution. In general, the results indicated that the Homaifar and Adaptive methods provide satisfactory performance, while the Hoffmeister and Annealing methods failed to satisfy the constraints in several cases.
References
Adeli, H., Cheng, N.T.: Augmented Lagrangian genetic algorithm for structural optimization. J. Aerosp. Eng. 7(1), 104–118 (1994)
Ameca-Alducin, M.Y., Hasani-Shoreh, M., Blaikie, W., Neumann, F., Mezura-Montes, E.: A comparison of constraint handling techniques for dynamic constrained optimization problems. In: 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1–8. IEEE (2018)
Biswas, P.P., Suganthan, P.N., Mallipeddi, R., Amaratunga, G.A.: Optimal power flow solutions using differential evolution algorithm integrated with effective constraint handling techniques. Eng. Appl. Artif. Intell. 68, 81–100 (2018)
Carlson, S.E., Shonkwiler, R.: Annealing a genetic algorithm over constraints. In: SMC’98 Conference Proceedings. 1998 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No. 98CH36218) (Vol. 4, pp. 3931–3936). IEEE (1998)
Clerc, M., Kennedy, J.: The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 6(1), 58–73 (2002)
Coello, C.A.C.: Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput. Methods Appl. Mech. Eng. 191(11–12), 1245–1287 (2002)
Coit, D.W., Smith, A.E.: Penalty guided genetic search for reliability design optimization. Comput. Ind. Eng. 30(4), 895–904 (1996)
Datta, R., Deb, K., Kim, J.H.: CHIP: Constraint Handling with Individual Penalty approach using a hybrid evolutionary algorithm. Neural Comput. Appl. 31(9), 5255–5271 (2019)
Deb, K.: An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 186(2–4), 311–338 (2000)
Gandomi, A.H., Kashani, A.R.: Automating pseudo-static analysis of concrete cantilever retaining wall using evolutionary algorithms. Measurement 115, 104–124 (2018)
Gandomi, A.H., Kashani, A.R.: Probabilistic evolutionary bound constraint handling for particle swarm optimization. Oper. Res. Int. J. 18(3), 801–823 (2018)
Gandomi, A.H., Kashani, A.R.: Evolutionary bound constraint handling for particle swarm optimization. In: 2016 4th International Symposium on Computational and Business Intelligence (ISCBI), pp. 148–152. IEEE (2016)
Gandomi, A.H., Kashani, A.R., Mousavi, M.: Boundary constraint handling affection on slope stability analysis. In: Engineering and Applied Sciences Optimization, pp. 341–358. Springer, Cham (2015)
Gandomi, A.H., Kashani, A.R., Mousavi, M., Jalalvandi, M.: Slope stability analysis using evolutionary optimization techniques. Int. J. Numer. Anal. Meth. Geomech. 41(2), 251–264 (2017)
Gandomi, A.H., Kashani, A.R., Zeighami, F.: Retaining wall optimization using interior search algorithm with different bound constraint handling. Int. J. Numer. Anal. Methods Geomech. (2017)
Gandomi, A.H., Yang, X.S.: Evolutionary boundary constraint handling scheme. Neural Comput. Appl. 21(6), 1449–1462 (2012)
Gandomi, A.H., Yang, X.S., Talatahari, S., Alavi, A.H.: Metaheuristic algorithms in modeling and optimization. In: Gandomi et al. (eds.) Metaheuristic Applications in Structures and Infrastructures, pp. 1–24. Elsevier, Waltham, MA (2013)
Gen, M., Cheng, R.: Interval programming using genetic algorithms. In: Proceedings of the Sixth International Symposium on Robotics and Manufacturing, Montpellier, France (1996)
Hedar, A.-R.: Dr. Abdel-Rahman Hedar’s official website (2020). http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar.html
Hedar, A.-R.: Test problems for constrained global optimization (2020). http://www-opti-ma.amp.i.kyotou.ac.jp/member/student/hedar/Hedar_files/TestGO_files/Page422.htm
Hoffmeister, F., Sprave, J.: Problem-independent handling of constraints by use of metric penalty functions. In: Proceedings of Evolutionary Programming, pp. 289–294 (1996)
Homaifar, A., Qi, C.X., Lai, S.H.: Constrained optimization via genetic algorithms. Simulation 62(4), 242–253 (1994)
Jamal, M., Ming, F., Zhengang, J.: Solving constrained optimization problems by using covariance matrix adaptation evolutionary strategy with constraint handling methods. In: Proceedings of the 2nd International Conference on Innovation in Artificial Intelligence, pp. 6–15 (201)
Joines, J.A., Houck, C.R.: On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GA’s. In: Proceedings of the First IEEE Conference on Evolutionary Computation, pp. 579–584. IEEE (1994)
Jordehi, A.R.: A review on constraint handling strategies in particle swarm optimisation. Neural Comput. Appl. 26(6), 1265–1275 (2015)
Kashani, A.R., Saneirad, A., Gandomi, A.H.: Optimum design of reinforced earth walls using evolutionary optimization algorithms. Neural Comput. Appl., 1–24 (2019)
Kashani, A.R., Gandomi, M., Camp, C.V., Gandomi, A.H.: Optimum design of shallow foundation using evolutionary algorithms. Soft. Comput. 24(9), 6809–6833 (2020)
Kazarlis, S., Petridis, V.: Varying fitness functions in genetic algorithms: Studying the rate of increase of the dynamic penalty terms. In: International conference on parallel problem solving from nature, pp. 211–220. Springer, Berlin, Heidelberg (1998)
Kennedy, J., Eberhart, R.C.: Swarm Intelligence. Morgan Kaufmann, San Francisco, CA (2001)
Le, T.V.: A fuzzy evolutionary approach to constrained optimization problems. In: Proceedings of the Second IEEE Conference on Evolutionary Computation, pp. 274–278. IEEE Perth (1995)
Li, J.P., Wang, Y., Yang, S., Cai, Z.: A comparative study of constraint-handling techniques in evolutionary constrained multiobjective optimization. In: 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 4175–4182. IEEE (2016)
Michalewicz, Z.: A survey of constraint handling techniques in evolutionary computation methods. Evol. Prog. 4, 135–155 (1995)
Morales, A.K., Quezada, C.V.: A universal eclectic genetic algorithm for constrained optimization. In: Proceedings of the 6th European Congress on Intelligent Techniques and Soft Computing, vol. 1, pp. 518–522 (1998)
Myung, H., Kim, J.H.: Hybrid interior-lagrangian penalty based evolutionary optimization. In: International Conference on Evolutionary Programming, pp. 85–94. Springer, Berlin, Heidelberg (1998)
Petridis, V., Kazarlis, S., Bakirtzis, A.: Varying fitness functions in genetic algorithm constrained optimization: the cutting stock and unit commitment problems. IEEE Trans. Syst., Man, Cybern., Part B (Cybern.) 28(5), 629–640 (1998)
Wikipedia.: Test functions for optimization (16 April 2020). https://en.wikipedia.org/wiki/Test_functions_for_optimization
Zou, D., Li, S., Kong, X., Ouyang, H., Li, Z.: Solving the combined heat and power economic dispatch problems by an improved genetic algorithm and a new constraint handling strategy. Appl. Energy 237, 646–670 (2019)
© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Rostamian, M., Kashani, A.R., Camp, C.V., Gandomi, A.H. (2021). Experimental Comparison of Constraint Handling Schemes in Particle Swarm Optimization. In: Kulkarni, A.J., Mezura-Montes, E., Wang, Y., Gandomi, A.H., Krishnasamy, G. (eds) Constraint Handling in Metaheuristics and Applications. Springer, Singapore. https://doi.org/10.1007/978-981-33-6710-4