Abstract
The Grey Wolf Optimizer (GWO) is a well-known swarm intelligence algorithm for solving global optimization problems and real-life engineering design problems. The GWO algorithm is unique among swarm-based algorithms in that it relies on a leadership hierarchy. In this paper, a Modified Grey Wolf Optimization Algorithm (MGWO) is proposed by modifying the position update equation of the original GWO algorithm. The leadership hierarchy is simulated using four types of grey wolves: lambda (\(\lambda\)), mu (\(\mu\)), nu (\(\nu\)), and xi (\(\xi\)). The effectiveness of the proposed MGWO is tested on the CEC 2005 benchmark functions, with sensitivity and convergence analyses, and the statistical results are compared with six other meta-heuristic algorithms. According to the results and discussion, MGWO is a competitive algorithm for solving global optimization problems. In addition, the MGWO algorithm is applied to three real-life optimization design problems: tension/compression spring design, gear train design, and three-bar truss design. The proposed MGWO algorithm performed well compared to the other algorithms.
1 Introduction
An optimization problem is a problem with multiple possible solutions, and the process of selecting the optimal solution among them is known as optimization [1]. An optimization problem consists of decision variables, constraints, and an objective function [2, 3]. Advances in science and technology have produced increasingly complicated optimization problems that require suitable tools. There are two types of strategies for addressing optimization problems: deterministic and stochastic [4, 5]. Deterministic approaches, categorized as gradient-based and non-gradient-based, excel at solving linear, convex, and basic optimization problems. However, these approaches are ineffective for complicated, non-differentiable, nonlinear, non-convex, and NP-hard problems, which are the primary characteristics of real-life optimization applications. Due to the limitations of deterministic methods, researchers have developed stochastic approaches such as meta-heuristic algorithms [6]. Meta-heuristic algorithms are widely regarded as the most effective optimization algorithms due to their robustness, performance reliability, simplicity, and ease of implementation. In the literature, meta-heuristic algorithms are categorized as follows: (1) Evolutionary-based algorithms are based on evolutionary theory. (2) Swarm-based algorithms mimic the social behavior and decision-making of different groups; these algorithms rely on bio-community information and collaborative action to achieve certain goals. (3) Physics-based algorithms are influenced by natural physical principles. (4) Human behavior-based algorithms are inspired by human social behavior. (5) Hybrid and advanced algorithms combine features from multiple optimization strategies to improve outcomes. Tables 1 and 2 show the classification of meta-heuristic algorithms in the literature.
The GWO algorithm is a meta-heuristic based on the hunting behavior and leadership hierarchy of grey wolves. It has been used to optimize key values in cryptography algorithms [70], time forecasting [71], feature subset selection [72], economic dispatch problems [73], optimal power flow problems [74], optimal design of double layer grids [75], and flow shop scheduling problems [76]. Several algorithms have been developed to improve the convergence performance of the GWO algorithm, including the binary GWO algorithm [77], the parallelized GWO algorithm [78, 79], a hybrid of the DE and GWO algorithms [80], a hybrid of the GA and GWO algorithms [81], a hybrid GWO algorithm using an Elite Opposition-Based Learning strategy and the simplex method [82], the Mean Grey Wolf Optimizer Algorithm (MGWOA) [83], an integration of the DE and GWO algorithms [84], and a hybrid of the PSO and GWO algorithms [85]. Optimization problems have also been simplified and solved using linear or integer programming techniques. Existing fuzzy and intuitionistic fuzzy optimization methods, such as FMODI, IFMODI, IFMZMCM, FHM, IFHM, and IFRM, can be complicated due to their numerous steps. Usually, fuzzy and intuitionistic fuzzy optimization problems are first translated into equivalent crisp optimization problems; the crisp problems are then transformed into equivalent linear programming problems and solved with the TORA software, which uses branch-and-bound methods. The use of fuzzy and intuitionistic fuzzy sets to solve optimization problems is documented in the literature [86,87,88,89,90,91]. Multivariate adaptive regression splines (MARS) is a methodology of estimation theory, statistical learning, and data mining [92]. Nowadays it is used in many different domains, including science, technology, management, and economics [93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118].
Non-parametric regression analyses such as MARS do not require preconditions on the functional relationships between the explanatory and response variables. Since MARS automatically models interactions and non-linearities, it can be considered an extension of linear models [119]. Despite these accomplishments, MARS cannot completely handle variable uncertainty.
The existing literature presents several issues and challenges. A fundamental one is that reaching the global optimum often entails sluggish convergence and considerable computational overhead. Many algorithms lack a proper balance between exploration and exploitation. A few converge prematurely to a local optimum, making them unsuitable for real-life engineering problems. Another drawback is that some algorithms have a large number of algorithm-specific parameters, and selecting optimal values for them incurs a high computational load.
The main research question in the study of meta-heuristic algorithms is whether there is still a need to propose new approaches despite the abundance of existing optimization algorithms. In response to this question, the No Free Lunch (NFL) theorem [120] states that an algorithm’s strong performance on one set of optimization problems does not guarantee the same performance on other optimization problems. Therefore, claiming that an algorithm is optimal for all optimization applications is inaccurate. The NFL theorem thus encourages researchers to propose innovative algorithms for solving optimization problems.
In this paper, a Modified Grey Wolf Optimization Algorithm (MGWO) is proposed by modifying the position update equation of the original GWO algorithm. The leadership hierarchy is simulated using four different types of grey wolves: lambda (\(\lambda\)), mu (\(\mu\)), nu (\(\nu\)), and xi (\(\xi\)). The MGWO algorithm addresses the shortcomings of leading wolves in GWO by enhancing their performance. The performance of the proposed algorithm has been evaluated on 23 benchmark functions, and the results were compared to popular meta-heuristic algorithms. MGWO is applied to three real-life engineering problems, and the results were compared to popular meta-heuristic algorithms.
The paper is organized as follows: Sect. 2 provides a brief introduction to the GWO algorithm. Sect. 3 proposes the modified version of GWO, named MGWO. Results and discussion on the CEC 2005 benchmark functions are presented in Sect. 4. Sect. 5 applies MGWO to real-life engineering problems. Finally, Sect. 6 concludes the paper and suggests future work.
2 Grey wolf optimization algorithm (GWO)
The GWO algorithm is inspired by the hierarchy and hunting behavior of grey wolf groups. The method optimizes grey wolf populations by mathematically replicating the tracking, surrounding, hunting, and attacking processes. The grey wolf hunting procedure consists of three steps: social hierarchy stratification, encircling the prey, and attacking the prey.
2.1 Social hierarchy
Grey wolves are gregarious canids that live at the top of the food chain and have a tight social dominance structure. The best solution is denoted lambda (\(\lambda\)); the second-best solution is denoted mu (\(\mu\)); the third-best solution is denoted nu (\(\nu\)); and the remaining solutions are denoted xi (\(\xi\)). Figure 1 illustrates this dominance hierarchy.
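The hierarchy above maps directly onto a fitness ranking. As an illustrative sketch (not taken from the paper; the function name and toy data are mine), the three leaders can be obtained by sorting the pack by objective value:

```python
# Illustrative sketch: ranking a wolf pack by fitness (minimization)
# to obtain the lambda, mu, nu leaders; all remaining wolves are xi.

def rank_pack(positions, fitness):
    """Sort wolves by fitness and split off the three leaders."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])
    lam, mu, nu = (positions[order[k]] for k in range(3))
    xi = [positions[i] for i in order[3:]]
    return lam, mu, nu, xi

positions = [[0.5], [2.0], [-1.0], [0.1], [3.0]]
fitness = [x[0] ** 2 for x in positions]          # sphere function as a toy objective
lam, mu, nu, xi = rank_pack(positions, fitness)
print(lam, mu, nu)   # best, second-best, and third-best positions
```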
2.2 Encircling the prey of GWO
The wolves’ encircling approach around the prey is mathematically represented by the following equations:
\[D = \left| N \cdot Z_{p}(t) - Z(t) \right|\]
\[Z(t+1) = Z_{p}(t) - L \cdot D\]
where \(Z\) is the grey wolf’s position vector, \(Z_{p}\) represents the position vector of the prey, \(t\) represents the current iteration, and \(L\) and \(N\) are coefficient vectors.
The vectors \(L\) and \(N\) are calculated as follows:
\[L = 2 b \cdot r_{1} - b, \qquad N = 2 r_{2}\]
where the components of \(b\) decrease linearly from 2 to 0 over the iterations and \(r_{1}\), \(r_{2}\) are random vectors in [0, 1].
2.3 Attacking the prey of GWO
Grey wolves can recognize possible prey locations, and the search is primarily carried out with the assistance of the \(\lambda\), \(\mu\), and \(\nu\) wolves. The best three wolves (\(\lambda\), \(\mu\), and \(\nu\)) in the current population are preserved in each iteration, while the positions of the other search agents are updated based on the leaders’ position information, using the following formulas:
\[D_{\lambda } = \left| N_{1} \cdot Z_{\lambda } - Z \right|, \quad D_{\mu } = \left| N_{2} \cdot Z_{\mu } - Z \right|, \quad D_{\nu } = \left| N_{3} \cdot Z_{\nu } - Z \right|\]
\[Z_{1} = Z_{\lambda } - L_{1} \cdot D_{\lambda }, \quad Z_{2} = Z_{\mu } - L_{2} \cdot D_{\mu }, \quad Z_{3} = Z_{\nu } - L_{3} \cdot D_{\nu }\]
\[Z(t+1) = \frac{Z_{1} + Z_{2} + Z_{3}}{3}\]
In the above equations, \(Z_{\lambda }\), \(Z_{\mu }\), and \(Z_{\nu }\) are the position vectors of the \(\lambda\), \(\mu\), and \(\nu\) wolves, respectively; \(L_{1}\), \(L_{2}\), and \(L_{3}\) are calculated like \(L\), and \(N_{1}\), \(N_{2}\), and \(N_{3}\) are calculated like \(N\). The distances between the current candidate wolf and the best three wolves are represented by \(D_{\lambda }\), \(D_{\mu }\), and \(D_{\nu }\).
Figure 2 shows that the candidate solution eventually falls within the random circle formed by \(\lambda\), \(\mu\), and \(\nu\). The other contenders then update their locations near the prey at random, guided by the current best three wolves. They begin searching for prey position information in a disorganized way before focusing on assaulting the prey.
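The encircling and attacking steps above can be sketched in Python using the canonical GWO update of Mirjalili et al. [16], rewritten with this paper's symbols (\(b\), \(L\), \(N\), \(Z\)). This is a minimal illustration, not the authors' code, and the leader positions are invented:

```python
import random

def gwo_update(z, z_lam, z_mu, z_nu, b):
    """One canonical GWO position update for a single wolf:
    b decreases linearly from 2 to 0 over iterations,
    L = 2*b*r1 - b and N = 2*r2 are the coefficient vectors."""
    new_z = []
    for j in range(len(z)):
        parts = []
        for leader in (z_lam, z_mu, z_nu):
            r1, r2 = random.random(), random.random()
            L = 2 * b * r1 - b
            N = 2 * r2
            D = abs(N * leader[j] - z[j])     # distance to this leader
            parts.append(leader[j] - L * D)   # candidate position Z1 / Z2 / Z3
        new_z.append(sum(parts) / 3)          # average of the three candidates
    return new_z

random.seed(0)
z = gwo_update([4.0, -3.0], [0.1, 0.2], [0.3, -0.1], [-0.2, 0.0], b=1.0)
print(z)  # new position pulled toward the three leaders
```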
3 A modified grey wolf optimization algorithm (MGWO)
The Modified Grey Wolf Optimization Algorithm (MGWO) is inspired by the GWO algorithm discussed in Sect. 2. The mathematical form of the MGWO algorithm is as follows:
3.1 Encircling the prey of MGWO
The wolves’ encircling approach around the prey is mathematically described by the following equations:
where \(Z ^{\prime }\) is the grey wolf’s position vector, \(Z ^{\prime }_{p}\) represents the position vector of the prey, \(t\) represents the current iteration, \(L ^{\prime }\) is a coefficient vector, the components of \(b ^{\prime }\) decrease linearly from 2 to 0 over the iterations, and \(r_{1}\) is a random vector in [0, 1].
where \(N ^{\prime }\) is a coefficient vector, the components of \(b ^{\prime }\) decrease linearly from 2 to 0 over the iterations, and \(r_{2}\) is a random vector in [0, 1].
3.2 Attacking the prey of MGWO
The grey wolf attacking technique can be mathematically described by approximating the prey position using the \(\lambda\), \(\mu\), and \(\nu\) solutions (wolves). Using this estimate, each wolf updates its position by
where \(Z ^{\prime }_{1}\), \(Z ^{\prime }_{2}\), and \(Z ^{\prime }_{3}\) are calculated by using Eq. 13.
where \(L ^{\prime }_{1}\), \(L ^{\prime }_{2}\), \(L ^{\prime }_{3}\), \(D ^{\prime }_{\lambda }\), \(D ^{\prime }_{\mu }\), and \(D ^{\prime }_{\nu }\) are calculated by using Eqs. 14 and 15.
where \(N ^{\prime }_{1}\), \(N ^{\prime }_{2}\), and \(N ^{\prime }_{3}\) are calculated by using Eq. 16.
As in GWO, the candidate solution eventually falls within the random circle formed by \(\lambda\), \(\mu\), and \(\nu\), and the other wolves then update their locations near the prey, guided by the current best three wolves: they first search for prey position information in a disorganized way before focusing on attacking the prey. The pseudo code of the MGWO algorithm is presented in Algorithm 1, and its flowchart is given in Fig. 3.
3.3 Computational complexity
The computational complexity of the proposed MGWO algorithm depends on three main processes: initialization, evaluation of the fitness function, and updating each particle. The computational complexity of the basic process with n particles is O(n), and updating the MGWO mechanism costs \(O(M \times n)+O(M \times n \times D)\), where M is the maximum number of iterations and D is the dimension of the problem. Therefore, the total computational complexity of the proposed MGWO equals \(O(n\times (M+M \times D+1))\).
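As an order-of-magnitude sketch (not a wall-clock estimate), plugging the experimental settings used later in Sect. 4 (n = 30, M = 1000, D = 30) into the stated bound gives:

```python
# Rough count of elementary update operations from the stated bound
# O(n * (M + M*D + 1)), with n = 30, M = 1000, D = 30 (Sect. 4 settings).
n, M, D = 30, 1000, 30
ops = n * (M + M * D + 1)
print(ops)  # 930030
```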
4 Results and discussion
In this section, to analyze the performance of the MGWO algorithm, seven uni-modal test functions, six multi-modal test functions, and ten fixed-dimension multi-modal test functions are selected. Table 4 lists these functions’ precise expressions, dimensions, search spaces, and optimal values. The uni-modal test functions are F1–F7, the multi-modal test functions are F8–F13, and the fixed-dimension multi-modal test functions are F14–F23. The uni-modal test functions are primarily used to assess MGWO’s convergence speed and solution accuracy, while the multi-modal test functions gauge MGWO’s global search capability. For a fair comparison, all compared algorithms use identical experimental parameters: swarm size (n = 30), dimension (D = 30), and maximum number of iterations (M = 1000); each algorithm is run 30 times independently and the results are recorded. The values of the control parameters of the competitor algorithms are given in Table 3. The experiments are performed on Windows 11, Intel Core i3, 2.10 GHz, 8.00 GB RAM, MATLAB R2022b (Table 4).
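The evaluation protocol above (independent runs per algorithm, with mean and standard deviation recorded) can be sketched as follows. The optimizer here is a random-search stand-in, since the point is the harness rather than MGWO itself; all names and settings in the snippet are illustrative:

```python
import random
import statistics

def random_search(obj, dim, lower, upper, iters):
    """Stand-in optimizer (pure random search); MGWO would be substituted here."""
    best = float("inf")
    for _ in range(iters):
        x = [random.uniform(lower, upper) for _ in range(dim)]
        best = min(best, obj(x))
    return best

def benchmark(obj, runs, dim, iters):
    """Run the optimizer independently `runs` times; report mean and std,
    as done for each algorithm/function pair in the result tables."""
    results = [random_search(obj, dim, -100.0, 100.0, iters) for _ in range(runs)]
    return statistics.mean(results), statistics.stdev(results)

random.seed(1)
sphere = lambda x: sum(v * v for v in x)   # F1 (sphere) as a toy test function
mean_val, std_val = benchmark(sphere, runs=5, dim=5, iters=200)
print(mean_val, std_val)
```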
4.1 Sensitivity analysis
The proposed MGWO algorithm employs two parameters, i.e., the number of grey wolves and the maximum number of iterations.
4.2 Number of grey wolves
The MGWO algorithm was simulated for different numbers of grey wolves (i.e., 10, 15, 20, 25, 30). Figure 4 shows the effect of the number of search agents on the benchmark test functions: the value of the fitness function decreases as the number of search agents rises.
4.3 Maximum number of iterations
The MGWO algorithm was run for different numbers of iterations: 200, 400, 600, 800, and 1000. Figure 5 demonstrates the impact of the number of iterations on the benchmark test functions. As the number of iterations increases, the MGWO algorithm converges to the optimum.
4.4 In comparison to other algorithms
To further evaluate the performance of MGWO, the algorithm was tested on the 23 benchmark functions, and the results were compared with the PSO [15], TSA [20], SSA [121], MVO [39], GWO [16], and IGWO [122] algorithms. Each algorithm is evaluated using the average value and standard deviation over 30 runs; the most accurate solution is bolded in the table. Tables 5 and 6 display the results for the 23 test functions; the average values and standard deviations shown there reflect MGWO’s convergence accuracy and optimization capacity. For the seven uni-modal functions, MGWO performs better in accuracy and standard deviation on F1–F5 and F7, even though the optimization accuracy falls short of the theoretical optimum of 0. For the six multi-modal functions, the optimization accuracy on F9 and F11 reaches the theoretical optimum of 0, clearly demonstrating the algorithm’s robustness and solution precision; on F8 and F10, MGWO also yields better results than the other optimization techniques. For the ten fixed-dimension multi-modal functions, MGWO outperforms the other algorithms on F14, F15, and F20–F23, while the outcomes for the remaining functions mostly agree with those of the compared algorithms.
4.5 Statistical analysis
This subsection presents a statistical analysis of the performance of the competitor algorithms and MGWO to establish whether MGWO has a statistically significant advantage. The Wilcoxon rank-sum test [123] is used to ascertain whether there is a statistically significant difference between the averages of two data samples; using the resulting p-value, it establishes whether the statistical advantage of MGWO over each competing algorithm is significant. Two-tailed t-tests [124] are used to compare the statistical outcomes at a significance level of 0.05. The t values are computed from the average and standard deviation values: a negative t value indicates that the optimization errors of MGWO are significantly smaller, and vice versa. The corresponding t value is highlighted if the difference in error is statistically significant. The symbols \(+/=/-\) denote the numbers of functions on which MGWO wins, ties, and loses, respectively. The statistical outcomes of the optimization errors demonstrate that MGWO achieves much better overall performance than the other algorithms. Tables 7 and 8 present the results of the Wilcoxon rank-sum test and t-test on the 23 benchmark functions, comparing the performance of the competing algorithms with MGWO. According to these data, MGWO statistically outperforms the corresponding algorithm when the p-value and t-value are less than 0.05.
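Since the t values are computed from the reported averages and standard deviations, the computation can be sketched as follows (Welch's t statistic from summary statistics; the sample numbers are invented for illustration and are not from the tables):

```python
import math

def welch_t(mean1, std1, mean2, std2, n=30):
    """t statistic computed from the reported averages and standard
    deviations of two algorithms over n independent runs (n = 30 here).
    A negative value means the first algorithm's mean error is smaller."""
    return (mean1 - mean2) / math.sqrt(std1 ** 2 / n + std2 ** 2 / n)

# Toy numbers: mean error 1e-30 for one algorithm vs 1e-8 for a competitor.
t = welch_t(1e-30, 1e-30, 1e-8, 2e-8, n=30)
print(t)  # negative: the first sample's mean error is significantly smaller
```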
4.6 Convergence analysis
Figures 6, 7, and 8 display the convergence graphs of MGWO and the other algorithms. As illustrated there, on uni-modal functions the proposed method follows a pattern that prioritizes the exploitation stage (functions F1 and F3). On multi-modal functions with numerous local optima, the proposed method exhibits a distinct pattern: it gives more attention to exploration in the early algorithmic stages, while exploration is carried out only intermittently (functions F12 and F13) during the final stages, which are typically the exploitation phase. The proposed algorithm offers a superior convergence pattern for almost all functions.
5 MGWO for solving real-life engineering problems
This section evaluates the proposed algorithm’s performance on three constrained real-life engineering benchmarks: the tension/compression spring, gear train, and three-bar truss design problems. MGWO runs independently 30 times for each engineering problem, with a grey wolf population size of 30, 1000 iterations, and 15,000 function evaluations (NFEs).
5.1 Tension/compression spring design problem
This problem aims to minimize the weight of a tension/compression spring [125], as shown in Fig. 9. The problem has constraints on minimum deflection, shear stress, and surge frequency, as well as limits on the outside diameter and the design variables. The design variables are the mean coil diameter D, the wire diameter d, and the number of active coils N. Table 9 presents the outcomes of this experiment; the MGWO algorithm outperformed the other algorithms on this problem.
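A sketch of this benchmark's widely used objective and constraint formulation from the literature (the sample design below is mine: feasible but far from optimal):

```python
def spring_weight(d, D, N):
    """Spring weight in the canonical formulation: f = (N + 2) * D * d**2."""
    return (N + 2) * D * d ** 2

def spring_constraints(d, D, N):
    """The four canonical inequality constraints; the design is feasible
    when every g_i <= 0."""
    g1 = 1 - (D ** 3 * N) / (71785 * d ** 4)
    g2 = ((4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
          + 1 / (5108 * d ** 2) - 1)
    g3 = 1 - 140.45 * d / (D ** 2 * N)
    g4 = (d + D) / 1.5 - 1
    return (g1, g2, g3, g4)

# A feasible (though not optimal) design: wire diameter d = 0.06,
# mean coil diameter D = 0.5, and N = 10 active coils.
weight = spring_weight(0.06, 0.5, 10)
feasible = all(g <= 0 for g in spring_constraints(0.06, 0.5, 10))
print(weight, feasible)
```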
5.2 Gear train design problem
Sandgren presented the gear train design problem [126, 127], an unconstrained discrete problem in mechanical engineering. This benchmark aims to minimize the deviation of the gear ratio, defined as the ratio of the angular velocity of the output shaft to that of the input shaft, from its required value. The numbers of teeth of gears \({\mathcal {C}}_{1}\), \({\mathcal {C}}_{2}\), \({\mathcal {C}}_{3}\), and \({\mathcal {C}}_{4}\) are the design variables, as shown in Fig. 10. Table 10 presents the outcomes of this experiment; the MGWO algorithm outperformed the other algorithms on this problem.
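A sketch of the canonical gear train objective (squared deviation of the achieved ratio from the required ratio 1/6.931), evaluated at the best-known integer tooth counts widely reported for this benchmark; variable names are mine:

```python
def gear_ratio_error(Ta, Tb, Td, Tf):
    """Canonical gear train objective: squared deviation of the achieved
    gear ratio (Td * Tb) / (Ta * Tf) from the required ratio 1/6.931."""
    return (1 / 6.931 - (Td * Tb) / (Ta * Tf)) ** 2

# Best-known integer solution reported in the literature:
# tooth counts (49, 16, 19, 43) give an error of about 2.7e-12.
err = gear_ratio_error(49, 16, 19, 43)
print(err)
```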
5.3 Three-bar truss design problem
This optimization problem from civil engineering has a constrained and difficult search space [128]. The primary goal is to minimize the weight of the bar structure. The restrictions are determined by the stress constraints on each bar, yielding a problem with a non-linear objective function and three non-linear constraints, as shown in Fig. 11. The results are presented in Table 11. The proposed method successfully identified the optimal value for this problem.
6 Conclusion
The original GWO algorithm suffers from premature convergence and poor accuracy when solving global optimization problems. In this study, a modified GWO (MGWO) is proposed to overcome these shortcomings by modifying the position update equation of the original GWO algorithm. We investigated 23 functions with various features, including uni-modal, multi-modal, and fixed-dimension multi-modal functions, and compared the outcomes with six algorithms. The experimental results indicate that the MGWO algorithm outperforms the six compared algorithms in optimization performance and stability. Three real-life engineering design problems (tension/compression spring, gear train, and three-bar truss), with various objective functions, constraint conditions, and features, were then solved. Meanwhile, the Wilcoxon rank-sum test and t-test were used to evaluate the results of the MGWO algorithm. The experimental results demonstrate that the MGWO algorithm outperforms the other comparison algorithms and is capable of dealing with engineering design problems. However, the proposed MGWO algorithm showed insignificant and mediocre results on one uni-modal (F6) and two multi-modal (F12 and F13) functions. In future work, several improvements to MGWO are suggested, such as adaptive inertia factors, levy flight distribution, binary and multi-objective variants, and applications to image segmentation and feature selection.
Data availability
All data generated or analyzed during this study are included in this article.
Abbreviations
- \(+/=/-\): The numbers of functions on which MGWO wins, ties, and loses
- \(r_{1}\), \(r_{2}\): Random vectors in [0, 1]
- \(\lambda\): The best solution
- \(b\), \(b^{\prime}\): Vectors linearly decreasing from 2 to 0
- \(D_{\lambda}\), \(D_{\mu}\), \(D_{\nu}\): The distances between the current candidate wolves and the best three wolves
- \(L\), \(N\), \(L^{\prime}\), \(N^{\prime}\): The coefficient vectors
- \(Z\), \(Z^{\prime}\): The grey wolf’s position vector
- \(Z_{\lambda}\), \(Z_{\mu}\), \(Z_{\nu}\): The position vectors of the \(\lambda\), \(\mu\), and \(\nu\) wolves
- \(Z_{p}\), \(Z^{\prime}_{p}\): The position vectors of the prey
- \(\mu\): The second-best solution
- \(\nu\): The third-best solution
- \(\xi\): The other solutions
- \(D\): The dimension of the problem
- \(M\): The maximum number of iterations
- \(n\): Swarm size
- \(t\): The current iteration
- ACO: Ant colony optimization
- AIS: Artificial immune system
- BBO: Biogeography-based optimization
- CA: Cultural algorithm
- CPA: Colony predation algorithm
- DE: Differential evolution
- ES: Evolutionary strategy
- FHM: Fuzzy Hungarian method
- FMODI: Fuzzy modified distribution method
- GA: Genetic algorithm
- GP: Genetic programming
- GWO: Grey wolf optimizer
- HHO: Harris hawks optimization
- HOA: Horse herd optimization algorithm
- IFHM: Intuitionistic fuzzy Hungarian method
- IFMODI: Intuitionistic fuzzy modified distribution method
- IFMZMCM: Intuitionistic fuzzy min-zero min-cost method
- IFRM: Intuitionistic fuzzy reduction method
- IGWO: Improved grey wolf optimization
- MBO: Monarch butterfly optimization
- MGWO: Modified grey wolf optimization algorithm
- MGWOA: Mean grey wolf optimizer algorithm
- MPA: Marine predators algorithm
- NFL: No free lunch
- PSO: Particle swarm optimization
- SMA: Slime mould algorithm
- SO: Snake optimization
- SSA: Salp swarm algorithm
- TSA: Tunicate swarm algorithm
- WOA: Whale optimization algorithm
- WSO: White shark optimization
References
Dhiman, G.: SSC: a hybrid nature-inspired meta-heuristic optimization algorithm for engineering applications. Knowl. Based Syst. 222, 106926 (2021)
Dehghani, M., Hubálovskỳ, Š, Trojovskỳ, P.: Tasmanian devil optimization: a new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access 10, 19599–19620 (2022)
Mohapatra, P., Das, K.N., Roy, S.: A modified competitive swarm optimizer for large scale optimization problems. Appl. Soft Comput. 59, 340–362 (2017)
Chen, X., Mei, C., Xu, B., Yu, K., Huang, X.: Quadratic interpolation based teaching-learning-based optimization for chemical dynamic system optimization. Knowl. Based Syst. 145, 250–263 (2018)
Mohapatra, P., Roy, S., Das, K.N., Dutta, S., Raju, M.S.S.: A review of evolutionary algorithms in solving large scale benchmark optimisation problems. Int. J. Math. Oper. Res. 21(1), 104–126 (2022)
Francisco, M., Revollar, S., Vega, P., Lamanna, R.: A comparative study of deterministic and stochastic optimization methods for integrated design of processes. IFAC Proc. Vol. 38(1), 335–340 (2005)
Storn, R., Price, K.: Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11, 341–359 (1997)
Whitley, D.: A genetic algorithm tutorial. Stat. Comput. 4, 65–85 (1994)
Espejo, P.G., Ventura, S., Herrera, F.: A survey on the application of genetic programming to classification. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 40(2), 121–144 (2009)
Beyer, H.-G., Schwefel, H.-P.: Evolution strategies: a comprehensive introduction. Nat. Comput. 1, 3–52 (2002)
Reynolds, R.G., Peng, B.: Cultural algorithms: computational modeling of how cultures learn to solve problems—an engineering example. Cybern. Syst. Int. J. 36(8), 753–771 (2005)
Castro, Ld., Timmis, J.I.: Artificial immune systems as a novel soft computing paradigm. Soft. Comput. 7, 526–544 (2003)
Hofmeyr, S.A., Forrest, S.: Architecture for an artificial immune system. Evolut. Comput. 8(4), 443–473 (2000)
Simon, D.: Biogeography-based optimization. IEEE Trans. Evolut. Comput. 12(6), 702–713 (2008)
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN’95-International Conference on Neural Networks, vol. 4, pp. 1942–1948. IEEE (1995)
Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014)
Dorigo, M., Birattari, M., Stutzle, T.: Ant colony optimization. IEEE Comput. Intell. Mag. 1(4), 28–39 (2006)
Faramarzi, A., Heidarinejad, M., Mirjalili, S., Gandomi, A.H.: Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst. Appl. 152, 113377 (2020)
Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016)
Kaur, S., Awasthi, L.K., Sangal, A., Dhiman, G.: Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 90, 103541 (2020)
Braik, M., Hammouri, A., Atwan, J., Al-Betar, M.A., Awadallah, M.A.: White shark optimizer: a novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl. Based Syst. 243, 108457 (2022)
MiarNaeimi, F., Azizyan, G., Rashki, M.: Horse herd optimization algorithm: a nature-inspired algorithm for high-dimensional optimization problems. Knowl. Based Syst. 213, 106711 (2021)
Hashim, F.A., Hussien, A.G.: Snake optimizer: a novel meta-heuristic optimization algorithm. Knowl. Based Syst. 242, 108320 (2022)
Li, S., Chen, H., Wang, M., Heidari, A.A., Mirjalili, S.: Slime mould algorithm: a new method for stochastic optimization. Future Gener. Comput. Syst. 111, 300–323 (2020)
Wang, G.-G., Deb, S., Cui, Z.: Monarch butterfly optimization. Neural Comput. Appl. 31, 1995–2014 (2019)
Tu, J., Chen, H., Wang, M., Gandomi, A.H.: The colony predation algorithm. J. Bionic Eng. 18, 674–710 (2021)
Heidari, A.A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., Chen, H.: Harris hawks optimization: algorithm and applications. Future Gener. Comput. Syst. 97, 849–872 (2019)
Wang, G.-G.: Moth search algorithm: a bio-inspired metaheuristic algorithm for global optimization problems. Memet. Comput. 10(2), 151–164 (2018)
Ahmadianfar, I., Heidari, A.A., Gandomi, A.H., Chu, X., Chen, H.: RUN beyond the metaphor: an efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 181, 115079 (2021)
Dhiman, G., Kumar, V.: Emperor penguin optimizer: a bio-inspired algorithm for engineering problems. Knowl. Based Syst. 159, 20–50 (2018)
Jiang, Y., Wu, Q., Zhu, S., Zhang, L.: Orca predation algorithm: a novel bio-inspired algorithm for global optimization problems. Expert Syst. Appl. 188, 116026 (2022)
Zhao, W., Wang, L., Mirjalili, S.: Artificial hummingbird algorithm: a new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 388, 114194 (2022)
Braik, M.S.: Chameleon swarm algorithm: a bio-inspired optimizer for solving engineering design problems. Expert Syst. Appl. 174, 114685 (2021)
Abualigah, L., Abd Elaziz, M., Sumari, P., Geem, Z.W., Gandomi, A.H.: Reptile search algorithm (RSA): a nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 191, 116158 (2022)
Bertsimas, D., Tsitsiklis, J.: Simulated annealing. Stat. Sci. 8(1), 10–15 (1993)
Dehghani, M., Montazeri, Z., Dhiman, G., Malik, O., Morales-Menendez, R., Ramirez-Mendoza, R.A., Dehghani, A., Guerrero, J.M., Parra-Arroyo, L.: A spring search algorithm applied to engineering optimization problems. Appl. Sci. 10(18), 6173 (2020)
Rashedi, E., Nezamabadi-Pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
Dehghani, M., Samet, H.: Momentum search algorithm: a new meta-heuristic optimization algorithm inspired by momentum conservation law. SN Appl. Sci. 2(10), 1720 (2020)
Mirjalili, S., Mirjalili, S.M., Hatamlou, A.: Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput. Appl. 27, 495–513 (2016)
Faramarzi, A., Heidarinejad, M., Stephens, B., Mirjalili, S.: Equilibrium optimizer: a novel optimization algorithm. Knowl. Based Syst. 191, 105190 (2020)
Eskandar, H., Sadollah, A., Bahreininejad, A., Hamdi, M.: Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 110, 151–166 (2012)
Hashim, F.A., Hussain, K., Houssein, E.H., Mabrouk, M.S., Al-Atabany, W.: Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems. Appl. Intell. 51, 1531–1551 (2021)
Hashim, F.A., Houssein, E.H., Mabrouk, M.S., Al-Atabany, W., Mirjalili, S.: Henry gas solubility optimization: a novel physics-based algorithm. Future Gener. Comput. Syst. 101, 646–667 (2019)
Ahmadianfar, I., Heidari, A.A., Noshadian, S., Chen, H., Gandomi, A.H.: Info: an efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. 195, 116516 (2022)
Kaveh, A., Dadras, A.: A novel meta-heuristic optimization algorithm: thermal exchange optimization. Adv. Eng. Softw. 110, 69–84 (2017)
Rao, R.V., Savsani, V.J., Vakharia, D.: Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems. Inf. Sci. 183(1), 1–15 (2012)
Dehghani, M., Mardaneh, M., Malik, O.: FOA:‘following’ optimization algorithm for solving power engineering optimization problems. J. Oper. Autom. Power Eng. 8(1), 57–64 (2020)
Atashpaz-Gargari, E., Lucas, C.: Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: 2007 IEEE Congress on Evolutionary Computation, pp. 4661–4667. IEEE (2007)
Moosavi, S.H.S., Bardsiri, V.K.: Poor and rich optimization algorithm: a new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 86, 165–181 (2019)
Zhang, Y., Jin, Z.: Group teaching optimization algorithm: a novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 148, 113246 (2020)
Kashan, A.H.: An efficient algorithm for constrained global optimization and application to mechanical engineering design: league championship algorithm (LCA). Comput. Aided Des. 43(12), 1769–1792 (2011)
Yang, Y., Chen, H., Heidari, A.A., Gandomi, A.H.: Hunger games search: visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 177, 114864 (2021)
Askari, Q., Younas, I., Saeed, M.: Political optimizer: a novel socio-inspired meta-heuristic for global optimization. Knowl. Based Syst. 195, 105709 (2020)
Shi, Y.: Brain storm optimization algorithm. In: Advances in Swarm Intelligence: Second International Conference, ICSI 2011, Chongqing, China, June 12–15, 2011, Proceedings, Part I 2, pp. 303–309. Springer (2011)
Wang, C., Zhang, X., Niu, Y., Gao, S., Jiang, J., Zhang, Z., Yu, P., Dong, H.: Dual-population social group optimization algorithm based on human social group behavior law. IEEE Trans. Comput. Soc. Syst. 10(1), 166–177 (2022)
Panwar, D., Saini, G., Agarwal, P.: Human eye vision algorithm (HEVA): a novel approach for the optimization of combinatorial problems. In: Artificial Intelligence in Healthcare, pp. 61–71 (2022)
Mousavirad, S.J., Ebrahimpour-Komleh, H.: Human mental search: a new population-based metaheuristic optimization algorithm. Appl. Intell. 47, 850–887 (2017)
Gopi, S., Mohapatra, P.: A modified whale optimisation algorithm to solve global optimisation problems. In: Proceedings of 7th International Conference on Harmony Search, Soft Computing and Applications: ICHSA 2022, pp. 465–477. Springer (2022)
Hussain, K., Neggaz, N., Zhu, W., Houssein, E.H.: An efficient hybrid sine-cosine Harris hawks optimization for low and high-dimensional feature selection. Expert Syst. Appl. 176, 114778 (2021)
Gopi, S., Mohapatra, P.: Opposition-based learning cooking algorithm (OLCA) for solving global optimization and engineering problems. Int. J. Mod. Phys. C 35, 1–28 (2023)
Mafarja, M.M., Mirjalili, S.: Hybrid binary ant lion optimizer with rough set and approximate entropy reducts for feature selection. Soft. Comput. 23(15), 6249–6265 (2019)
Sarangi, P., Mohapatra, P.: Evolved opposition-based mountain gazelle optimizer to solve optimization problems. J. King Saud Univ. Comput. Inf. Sci. 35(10), 101812 (2023)
Cheng, Z., Song, H., Wang, J., Zhang, H., Chang, T., Zhang, M.: Hybrid firefly algorithm with grouping attraction for constrained optimization problem. Knowl. Based Syst. 220, 106937 (2021)
Mohapatra, S., Sarangi, P., Mohapatra, P.: An improvised grey wolf optimiser for global optimisation problems. Int. J. Math. Oper. Res. 26(2), 263–281 (2023)
Sarangi, P., Mohapatra, P.: Modified hybrid GWO-SCA algorithm for solving optimization problems. In: International Conference on Data Analytics and Computing, pp. 121–128. Springer (2022)
Pelusi, D., Mascella, R., Tallini, L., Nayak, J., Naik, B., Deng, Y.: An improved moth-flame optimization algorithm with hybrid search phase. Knowl. Based Syst. 191, 105277 (2020)
Qin, A.K., Suganthan, P.N.: Self-adaptive differential evolution algorithm for numerical optimization. In: 2005 IEEE Congress on Evolutionary Computation, vol. 2, pp. 1785–1791. IEEE (2005)
Chandran, V., Mohapatra, P.: Enhanced opposition-based grey wolf optimizer for global optimization and engineering design problems. Alex. Eng. J. 76, 429–467 (2023)
Gopi, S., Mohapatra, P.: Fast random opposition-based learning Aquila optimization algorithm. Heliyon 10(4), 26187 (2024)
Shankar, K., Eswaran, P.: A secure visual secret share (VSS) creation scheme in visual cryptography using elliptic curve cryptography with optimization technique. Aust. J. Basic Appl. Sci. 9(36), 150–163 (2015)
Yusof, Y., Mustaffa, Z.: Time series forecasting of energy commodity using grey wolf optimizer (2015)
Emary, E., Zawbaa, H.M., Grosan, C., Hassenian, A.E.: Feature subset selection approach by gray-wolf optimization. In: Afro-European Conference for Industrial Advancement: Proceedings of the First International Afro-European Conference for Industrial Advancement AECIA 2014, pp. 1–13. Springer (2015)
Kamboj, V.K., Bath, S., Dhillon, J.: Solution of non-convex economic load dispatch problem using grey wolf optimizer. Neural Comput. Appl. 27, 1301–1316 (2016)
El-Fergany, A.A., Hasanien, H.M.: Single and multi-objective optimal power flow using grey wolf optimizer and differential evolution algorithms. Electric Power Compon. Syst. 43(13), 1548–1559 (2015)
Gholizadeh, S.: Optimal design of double layer grids considering nonlinear behaviour by sequential grey wolf algorithm. J. Optim. Civ. Eng. 5(4), 511–523 (2015)
Komaki, G., Kayvanfar, V.: Grey wolf optimizer algorithm for the two-stage assembly flow shop scheduling problem with release time. J. Comput. Sci. 8, 109–120 (2015)
Emary, E., Zawbaa, H.M., Hassanien, A.E.: Binary grey wolf optimization approaches for feature selection. Neurocomputing 172, 371–381 (2016)
Pan, T.-S., Dao, T.-K., Nguyen, T.-T., Chu, S.-C.: A communication strategy for paralleling grey wolf optimizer. In: Genetic and Evolutionary Computing: Proceedings of the Ninth International Conference on Genetic and Evolutionary Computing, August 26–28, 2015, Yangon, Myanmar-Volume II 9, pp. 253–262. Springer (2016)
Jayapriya, J., Arock, M.: A parallel GWO technique for aligning multiple molecular sequences. In: 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 210–215. IEEE (2015)
Jitkongchuen, D.: A hybrid differential evolution with grey wolf optimizer for continuous global optimization. In: 2015 7th International Conference on Information Technology and Electrical Engineering (ICITEE), pp. 51–54. IEEE (2015)
Tawhid, M.A., Ali, A.F.: A hybrid grey wolf optimizer and genetic algorithm for minimizing potential energy function. Memet. Comput. 9, 347–359 (2017)
Zhang, S., Luo, Q., Zhou, Y.: Hybrid grey wolf optimizer using elite opposition-based learning strategy and simplex method. Int. J. Comput. Intell. Appl. 16(02), 1750012 (2017)
Singh, N., Singh, S.: A modified mean gray wolf optimization approach for benchmark and biomedical problems. Evolut. Bioinform. 13, 1176934317729413 (2017)
Zhu, A., Xu, C., Li, Z., Wu, J., Liu, Z.: Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC. J. Syst. Eng. Electron. 26(2), 317–328 (2015)
Singh, N., Singh, S., et al.: Hybrid algorithm of particle swarm optimization and grey wolf optimizer for improving convergence performance. J. Appl. Math. 2017, 2030489 (2017)
Kumar, P.S.: The PSK method: a new and efficient approach to solving fuzzy transportation problems. In: Transport and Logistics Planning and Optimization, pp. 149–197 (2023)
Kumar, P.S.: The theory and applications of the software-based PSK method for solving intuitionistic fuzzy solid transportation problems. In: Perspectives and Considerations on the Evolution of Smart Systems, pp. 137–186 (2023)
Kumar, P.S.: Algorithms for solving the optimization problems using fuzzy and intuitionistic fuzzy set. Int. J. Syst. Assur. Eng. Manag. 11(1), 189–222 (2020)
Kumar, P.S.: Developing a new approach to solve solid assignment problems under intuitionistic fuzzy environment. Int. J. Fuzzy Syst. Appl. (IJFSA) 9(1), 1–34 (2020)
Kumar, P.S.: Intuitionistic fuzzy solid assignment problems: a software-based approach. Int. J. Syst. Assur. Eng. Manag. 10(4), 661–675 (2019)
Kumar, P.S.: Computationally simple and efficient method for solving real-life mixed intuitionistic fuzzy 3D assignment problems. Int. J. Softw. Sci. Comput. Intell. (IJSSCI) 14(1), 1–42 (2022)
Friedman, J.H.: Multivariate adaptive regression splines. Ann. Stat. 19(1), 1–67 (1991)
Özmen, A., Weber, G.-W., Kropat, E.: Robustification of conic generalized partial linear models under polyhedral uncertainty. Methods 20, 21–22 (2012)
Özmen, A., Zinchenko, Y., Weber, G.-W.: Robust multivariate adaptive regression splines under cross-polytope uncertainty: an application in a natural gas market. Ann. Oper. Res. 324(1), 1337–1367 (2023)
Özmen, A., Kropat, E., Weber, G.-W.: Robust optimization in spline regression models for multi-model regulatory networks under polyhedral uncertainty. Optimization 66(12), 2135–2155 (2017)
Kropat, E., Özmen, A., Weber, G.-W., Meyer-Nieberg, S., Defterli, O.: Fuzzy prediction strategies for gene-environment networks-fuzzy regression analysis for two-modal regulatory systems. RAIRO Oper. Res. Rech. Opér. 50(2), 413–435 (2016)
Kropat, E., Meyer-Nieberg, S.: A multi-layered adaptive network approach for shortest path planning during critical operations in dynamically changing and uncertain environments. In: 2016 49th Hawaii International Conference on System Sciences (HICSS), pp. 1369–1378. IEEE (2016)
Özmen, A., Kropat, E., Weber, G.-W.: Spline regression models for complex multi-modal regulatory networks. Optim. Methods Softw. 29(3), 515–534 (2014)
Weber, G.W., Kropat, E., Tezel, A., Belen, S.: Optimization applied on regulatory and eco-finance networks-survey and new developments (2010)
Kropat, E., Tikidji-Hamburyan, R.A., Weber, G.-W.: Operations research in neuroscience. Ann. Oper. Res. 258, 1–4 (2017)
Weber, G.-W., Batmaz, I., Köksal, G., Taylan, P., Yerlikaya-Özkurt, F.: CMARS: a new contribution to nonparametric regression with multivariate adaptive regression splines supported by continuous optimization. Inverse Probl. Sci. Eng. 20(3), 371–400 (2012)
Taylan, P., Weber, G.-W., Yerlikaya Özkurt, F.: A new approach to multivariate adaptive regression splines by using Tikhonov regularization and continuous optimization. TOP 18, 377–395 (2010)
Kalaycı, B., Purutçuoğlu, V., Weber, G.W.: Operation research in neuroscience: a recent perspective of operation research application in finance. In: Operations Research: New Paradigms and Emerging Applications, pp. 170–190 (2022)
Kalaycı, B., Özmen, A., Weber, G.-W.: Mutual relevance of investor sentiment and finance by modeling coupled stochastic systems with MARS. Ann. Oper. Res. 295, 183–206 (2020)
Graczyk-Kucharska, M., Szafrański, M., Gütmen, S., Çevik, A., Weber, G.-W., Włodarczyk, Z., Goliński, M., Özmen, A.: Modelling Problems in a Regional Labor Market in Poland with MARS (2019)
Çevik, A.: Computer-aided diagnosis of Alzheimer’s disease and mild cognitive impairment with MARS/CMARS classification using structural MR images (2017)
Çevik, A., Weber, G.-W., Eyüboğlu, B.M., Oğuz, K.K., Initiative, A.D.N.: Voxel-MARS: a method for early detection of Alzheimer’s disease by classification of structural brain MRI. Ann. Oper. Res. 258, 31–57 (2017)
Kuter, S., Akyurek, Z., Weber, G.-W.: Retrieval of fractional snow covered area from MODIS data by multivariate adaptive regression splines. Remote Sens. Environ. 205, 236–252 (2018)
Kuter, S., Weber, G.-W., Akyürek, Z., Özmen, A.: Inversion of top of atmospheric reflectance values by conic multivariate adaptive regression splines. Inverse Probl. Sci. Eng. 23(4), 651–669 (2015)
Baltas, I., Frangos, N., Yannacopoulos, A.: Optimal investment and reinsurance policies in insurance markets under the effect of inside information. Appl. Stoch. Model. Bus. Ind. 28(6), 506–528 (2012)
Baltas, I., Xepapadeas, A., Yannacopoulos, A.N.: Robust control of parabolic stochastic partial differential equations under model uncertainty. Eur. J. Control. 46, 1–13 (2019)
Savku, E., Weber, G.-W.: Stochastic differential games for optimal investment problems in a Markov regime-switching jump-diffusion market. Ann. Oper. Res. 312(2), 1171–1196 (2022)
Savku, E.: Memory and anticipation: two main theorems for Markov regime-switching stochastic processes. arXiv preprint arXiv:2302.13890 (2023)
Ghosh, S., Roy, S.K., Weber, G.-W.: Interactive strategy of carbon cap-and-trade policy on sustainable multi-objective solid transportation problem with twofold uncertain waste management. Ann. Oper. Res. 326(1), 157–197 (2023)
Pervin, M., Roy, S.K., Sannyashi, P., Weber, G.-W.: Sustainable inventory model with environmental impact for non-instantaneous deteriorating items with composite demand. RAIRO Oper. Res. 57(1), 237–261 (2023)
Belen, S.-C., Weber, G.-W., Ozel, M.E.: A search about why the earth may have already been discovered by the extraterrestrial (s) and their possible stochastic travels over interstellar distances. In: 43rd COSPAR Scientific Assembly. Held 28 January–4 February, vol. 43, p. 1936 (2021)
Gürbüz, B., Mawengkang, H., Husein, I., Weber, G.-W.: Rumour propagation: an operational research approach by computational and information theory. Cent. Eur. J. Oper. Res. 30, 345–365 (2022)
Gürbüz, B., Gökçe, A.: An algorithm and stability approach for the acute inflammatory response dynamic model. In: Operations Research: New Paradigms and Emerging Applications, pp. 192–217 (2022)
Kriner, M.: Survival analysis with multivariate adaptive regression splines. PhD thesis, LMU (2007)
Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evolut. Comput. 1(1), 67–82 (1997)
Mirjalili, S., Gandomi, A.H., Mirjalili, S.Z., Saremi, S., Faris, H., Mirjalili, S.M.: Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 114, 163–191 (2017)
Li, Y., Lin, X., Liu, J.: An improved gray wolf optimization algorithm to solve engineering problems. Sustainability 13(6), 3208 (2021)
Wilcoxon, F.: Individual comparisons by ranking methods. In: Breakthroughs in Statistics: Methodology and Distribution, New York, NY: Springer New York, pp. 196–202 (1992)
Mohapatra, P., Roy, S., Das, K.N., Dutta, S., Raju, M.S.S.: A review of evolutionary algorithms in solving large scale benchmark optimisation problems. Int. J. Math. Oper. Res. 21(1), 104–126 (2022)
Arora, J.S.: Introduction to Optimum Design. Elsevier (2004)
Sandgren, E.: Nonlinear integer and discrete programming in mechanical design optimization (1990)
Sandgren, E.: Nonlinear integer and discrete programming in mechanical design. In: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, vol. 26584, pp. 95–105. American Society of Mechanical Engineers (1988)
Nowacki, H.: Optimization in pre-contract ship design (1973)
Acknowledgements
The authors would like to thank VIT University for supporting this research work.
Funding
No funding was received for conducting this study.
Author information
Contributions
Gopi S: Conceptualization, Methodology, Writing-original draft.
Prabhujit Mohapatra: Conceptualization, Methodology, Supervision, Writing-review and editing.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Gopi, S., Mohapatra, P. A modified grey wolf optimization algorithm to solve global optimization problems. OPSEARCH (2024). https://doi.org/10.1007/s12597-024-00785-x