1 Introduction

Optimization is the process of finding an optimal solution within a feasible amount of time. The field has been highly active since the genetic algorithm (GA) and differential evolution (DE) were proposed. Meanwhile, optimization problems keep growing in number and complexity, so they require better optimization methods [1]. Several efficient algorithms may be able to solve a specific problem, but none of them can be called the best before being evaluated against the others on that problem; hence, different optimization algorithms can be effective on different problems [2]. Optimization algorithms fall into two classes: deterministic and randomized. A deterministic algorithm follows at most one path toward the solution and terminates otherwise, whereas a randomized (stochastic) technique executes with randomness and is not bound by deterministic constraints [3, 4]. Stochastic algorithms are further classified into heuristics and metaheuristics. Nature-inspired metaheuristics can solve real-world problems and standard mathematical functions efficiently through their exploration and exploitation phases; however, balancing these two phases is a crucial difficulty from which metaheuristics suffer [5].

Recent metaheuristic algorithms have been applied to NP-hard problems such as the job scheduling problem [6], task assignment problem [7], quadratic assignment problem [8], traveling salesman problem [9], vehicle routing problem [10], home health care scheduling problem [11], and frequency assignment problem [12]. The most common algorithms are the evolutionary algorithms (notably GA) [13], particle swarm optimization (PSO) [14], artificial bee colony (ABC) [15], the whale optimization algorithm (WOA) [16], and the grey wolf optimizer (GWO) [17].

Mirjalili and Lewis proposed WOA in [16]. This metaheuristic algorithm is motivated by the hunting mechanism of humpback whales. WOA showed significant results against other metaheuristics because it uses random numbers to balance its two phases. Its updating mechanism gives it strong exploration capability, and it uses a random search mechanism to change positions while looking for optimal solutions. Exploration and exploitation are executed independently, so WOA can avoid local optima and obtain better convergence speed. Unlike WOA, many other common metaheuristics have no dedicated operators to separate exploration from exploitation and therefore fall into local optima [16]. WOA outperformed PSO, the gravitational search algorithm (GSA), DE, and fast evolutionary programming (FEP) [16]. According to [18], WOA performs well in terms of convergence time and in balancing exploration and exploitation.

Despite its efficiency against common algorithms, WOA has some drawbacks. Using a randomization mechanism in WOA for complex problems increases computational time [19]. Convergence speed is inefficient in both phases because they depend on a single parameter, a [20]. WOA performs poorly at jumping out of local solutions because an encircling mechanism is used in the search space [21]. Furthermore, the encircling mechanism also limits how much the best solution improves [22], and the WOA exploitation phase requires improvement to obtain better solutions. In addition, controlling parameters is crucial for improving the performance of the algorithm. Parameter-setting strategies fall into three categories: deterministic, adaptive, and self-adaptive control parameters. Each affects the algorithm's performance, and these strategies can usefully be involved in improving WOA [23].

As a result, WOA has been hybridized with various algorithms. For example, WOA was integrated with a local search (LS) strategy to tackle the permutation form of the flow shop scheduling problem. A swap mutation operator was also used to diversify the population, and a reverse-insert operation allowed WOA to escape from local optima. The resulting hybrid whale algorithm (HWA) improved the performance and solution quality of WOA thanks to LS [24].

WOA has also been hybridized with colliding bodies optimization (CBO) to improve solution quality and convergence rate. In WOA-CBO, the whales are divided into two groups, explorers and imitators, a division derived from the original CBO: the lower half of the population are explorers and the upper half are imitators. An explorer whale updates its position according to the best solution, while an imitator updates its position based on the lower (explorer) half of the population [25].

Brain storm optimization (BSO) has been hybridized with WOA to tackle WOA's tendency to stagnate in local optima. In BS-WOA, the BSO update function is added inside WOA to update the positions of the whales based on a coefficient and the search area [26].

PSO is an efficient algorithm in the exploitation phase. Thus, it was embedded inside WOA to handle exploitation, while WOA works only in the exploration phase. The hybridized algorithm produced better results than either WOA or PSO alone [27].

Because of WOA's problems with local optima, the BAT algorithm was combined with WOA for the exploration phase. The results showed that WOA–BAT improved considerably over both WOA and BAT [18]. Further details on WOA modifications and hybridizations are described in [18].

Grey wolf optimization was proposed in 2014 [17]. It is a metaheuristic technique inspired by the behavior of grey wolves. The algorithm shows competitive results against other metaheuristics, for instance PSO, DE, GSA, and FEP [28]. It is very competitive in the exploitation phase compared with others, and it achieves merit results in the exploration phase. It also performed better on half of the 29 benchmark functions owing to its ability to avoid local optima [17].

Despite its good performance, GWO has issues in balancing exploration and exploitation [29]. It also struggles to solve nonlinear equation systems and unconstrained optimization problems [30]. Although its updating mechanism is efficient, it can still be improved and enhanced [31]. The grey wolf population is initialized randomly in order to diversify the population; this practice has a drawback, which was addressed in [32].

WOA has the following issues [21, 28], which are the main motivations for hybridizing GWO with WOA in this paper:

  1. WOA has difficulty escaping local optima because it uses an encircling search mechanism.

  2. The improvement of the solution in WOA after each iteration is insufficient.

  3. WOA has low performance in the exploitation phase.

The above problems motivated the authors to propose a hybridized algorithm. GWO was chosen as the partner for WOA because it offers strong exploitation performance, especially when evaluated on unimodal benchmark functions, and strong exploration capability on multimodal benchmark functions. Thus, this paper proposes a hybridized approach that overcomes WOA's problems in two effective ways. The first is to save the best solution at each iteration and, in the exploration phase, compare each new solution against the best solution: if the new result is better than the best solution, the agents' positions are changed; otherwise, they stay in their old positions. The second is to add the GWO hunting mechanism to the exploitation phase to enhance the performance of WOA.

The proposed WOAGWO differs from the above hybridizations in that WOA has not previously been hybridized with GWO. This hybridization combines the two techniques (WOA with GWO) and adds a condition to update positions inside the exploitation phase. The proposed algorithm is also distinguished from WOA and GWO by a new update method added to the exploration phase of WOA, and by the GWO hunting mechanism, added to the exploitation phase, for updating the positions of the whales. As a result, WOAGWO is a new hybridization that enhances the performance of WOA.

The rest of the paper is organized as follows. Section 2 presents WOA and its mechanisms, followed by WOA modifications and hybridizations. Section 3 describes GWO and its modifications and hybridizations. Section 4 describes the proposed WOAGWO approach in detail. In Sect. 5, WOAGWO is evaluated on 23 common benchmark test functions [16], 25 benchmark test functions from CEC2005, and 10 benchmark functions from CEC2019; statistical results are then presented, and WOAGWO is evaluated against other common algorithms, for example DE, ABC, BSO, and WOA. WOAGWO is then applied to an engineering problem, namely the pressure vessel design problem. Finally, the conclusion and future work are presented.

2 WOA

WOA is a metaheuristic algorithm derived from whale behavior; it was first developed by Mirjalili and Lewis [16]. A humpback whale targets schools of small fish swimming close to the water surface and hunts them by creating bubbles along a shrinking circle, following a '9'-shaped path. The algorithm is divided into two phases. The first, exploration, applies a random strategy for searching for the prey. The second, exploitation, encircles the prey with the spiral bubble-net attack. The following subsections present the details of each phase of WOA [33].

2.1 Encircling prey and bubble-net attacking mechanism

To begin the hunt, the whale must first locate the prey. Since the whale's position is not initially optimal, the whale must change its position to encircle the prey using Eqs. (1) and (2).

$$\vec{X}(i+1) = \vec{X}^{*}(i) - \vec{A} \cdot \vec{D}$$
(1)
$$\vec{D} = \left| \vec{C} \cdot \vec{X}^{*}(i) - \vec{X}(i) \right|$$
(2)

where \(\vec{X}^{*}(i)\) represents the best position of the whale found so far at iteration i, \(\vec{X}(i+1)\) is the new position of the whale, and \(\vec{D}\) is the distance vector between the whale and the prey, taken with absolute value. The coefficient vectors \(\vec{A}\) and \(\vec{C}\) are calculated, respectively, as:

$$\vec{A} = 2 \cdot \vec{a} \cdot \vec{r} - \vec{a}$$
(3)
$$\vec{C} = 2 \cdot \vec{r}$$
(4)

In both phases, the value of a decreases linearly from its initial value of 2 until it reaches 0 at the end of the iterations. The variable r is a random number in the range [0, 1]. The values of the vectors \(\vec{A}\) and \(\vec{C}\) control how closely the whale approaches the prey. By taking values of \(\vec{A}\) in the range [−1, 1], the new position of a search agent lies between its current position and the best position.

A spiral-shaped path between the best position \(\vec{X}^{*}(i)\) and the current position \(\vec{X}\) is created using Eq. (5):

$$\vec{X}(i+1) = e^{bk} \cdot \cos(2\pi k) \cdot \vec{D}^{*} + \vec{X}^{*}(i)$$
(5)

where \(\vec{D}^{*}\) represents the distance between the whale and the prey (the best solution obtained so far):

$$\vec{D}^{*} = \left| \vec{X}^{*}(i) - \vec{X}(i) \right|$$
(6)

where b is a constant that defines the shape of the logarithmic spiral and k denotes a random number in the range [−1, 1]. The shrinking encircling mechanism and the spiral-shaped mechanism each have a 50% chance of being chosen throughout the iterations, as shown in Eq. (7):

$$\vec{X}(i+1) = \begin{cases} \vec{X}^{*}(i) - \vec{A} \cdot \vec{D} & \text{if } p < 0.5 \\ e^{bk} \cdot \cos(2\pi k) \cdot \vec{D}^{*} + \vec{X}^{*}(i) & \text{if } p \ge 0.5 \end{cases}$$
(7)

where p is a random number in the range [0, 1].

2.2 Searching for prey

The exploration phase relies on random search rather than updating positions according to the best position found, which strengthens exploration. Finding the prey depends on how each whale changes its position, and the vector \(\vec{A}\) is used to drive a whale away from a reference whale. Throughout this phase, the positions of the whales change based on a randomly selected whale rather than the best one, which helps the search reach global optima and overcome local optima:

$$\vec{X}(i+1) = \vec{X}_{\text{rand}} - \vec{A} \cdot \vec{D}$$
(8)
$$\vec{D} = \left| \vec{C} \cdot \vec{X}_{\text{rand}} - \vec{X} \right|$$
(9)

where \(\vec{X}_{\text{rand}}\) is the position of a whale chosen at random from the population.

Algorithm 1 presents the WOA pseudocode. The population is initialized randomly and the fitness of each search agent is evaluated; this process continues until the best solution is reached. At each iteration, the coefficient variables are updated and a random number decides whether the position of an agent is updated using Eqs. (1)-(2), Eqs. (8)-(9), or Eq. (5).

WOA can converge because it updates positions according to the best solution obtained so far; for the same reason, it may become stuck in local optima. The parameter a, decreasing linearly from 2 to 0, is the main influence on the balance between the two phases.

[Algorithm 1: WOA pseudocode]

2.3 WOA modifications and hybridizations

Different modifications of WOA have been proposed since 2016; Table 1 summarizes the essential ones. WOA has also been hybridized with different metaheuristic algorithms, and Table 2 presents several of these hybridizations.

Table 1 WOA modifications
Table 2 WOA hybridizations

3 Grey wolf optimization

GWO was proposed in [17]; it is motivated by the hunting mechanism and the leadership hierarchy of grey wolves in the wild. In GWO, the grey wolves are classified into four categories: alpha (α), the leader; beta (β), which helps the leader; delta (δ), which follows the two previous wolves; and omega (ω) [17]. Figure 1 shows the grey wolves' hierarchy.

Fig. 1: Grey wolves' hierarchy

In this social hierarchy, the best solution is the alpha (α), the second best is the beta (β), and the third best is the delta (δ). The remaining candidate solutions are called omega (ω) and follow the three wolves above them in the hierarchy.

3.1 Encircling prey

Grey wolves encircle the prey during the hunt using Eqs. (10) and (11).

$$\vec{X}(i+1) = \vec{X}_l(i) - \vec{A} \cdot \vec{D}$$
(10)
$$\vec{D} = \left| \vec{C} \cdot \vec{X}_l(i) - \vec{X}(i) \right|$$
(11)

where \(\vec{X}_l(i)\) denotes the location of the prey at iteration i and \(\vec{X}(i+1)\) is the new location of a grey wolf. \(\vec{A}\) and \(\vec{C}\) are coefficient vectors, calculated as follows:

$$\vec{A} = 2 \cdot \vec{a} \cdot \vec{r}_1 - \vec{a}$$
(12)
$$\vec{C} = 2 \cdot \vec{r}_2$$
(13)

The value of a decreases from 2 to 0 during both phases until GWO reaches the maximum iteration. \(\vec{r}_1\) and \(\vec{r}_2\) are random vectors in the range [0, 1]. The values of \(\vec{A}\) and \(\vec{C}\) control how closely the wolves approach the prey.

3.2 Hunting

After the encircling step, the grey wolves start to hunt for the best solution. Since the position of the optimum is unknown, the alpha wolf stores the best solution in each iteration and replaces it whenever the solution improves, while the beta and delta wolves also help to identify the location of the prey. The best solutions saved by these three types of grey wolves are then used to update the positions of the pack using the following equations:

$$\vec{D}_{\alpha} = \left| \vec{C}_1 \cdot \vec{X}_{\alpha} - \vec{X} \right|, \quad \vec{D}_{\beta} = \left| \vec{C}_2 \cdot \vec{X}_{\beta} - \vec{X} \right|, \quad \vec{D}_{\delta} = \left| \vec{C}_3 \cdot \vec{X}_{\delta} - \vec{X} \right|$$
(14)
$$\vec{X}_1 = \vec{X}_{\alpha} - \vec{A}_1 \cdot \vec{D}_{\alpha}, \quad \vec{X}_2 = \vec{X}_{\beta} - \vec{A}_2 \cdot \vec{D}_{\beta}, \quad \vec{X}_3 = \vec{X}_{\delta} - \vec{A}_3 \cdot \vec{D}_{\delta}$$
(15)
$$\vec{X}(i+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$$
(16)

3.3 Attacking prey (exploitation)

In this step, a grey wolf tries to stop the movement of the prey in order to attack it. This is modeled by declining the value of a, which in turn shrinks the range of \(\vec{A}\) toward [−1, 1]. A grey wolf attacks the prey when \(\vec{A}\) is greater than −1 and less than 1. However, GWO can stagnate in local optima, and researchers keep devising mechanisms to solve this problem [17].

3.4 Search for prey (exploration)

Alpha, beta, and delta guide the searching mechanism. Since these three categories differ from one another, a mathematical rule is required for the pack to converge and attack the prey: when \(\vec{A}\) lies between −1 and 1 the wolves converge, whereas values greater than 1 or less than −1 force the search agents to diverge from the prey and look for better prey. \(\vec{C}\) is another component that influences the exploration phase of GWO.

Overall, GWO creates a random population, and alpha, beta, and delta estimate the location of the prey; the distances of the candidate solutions are then updated. The value of a is reduced from 2 to 0 to balance the two phases: if \(|\vec{A}| > 1\), the search agents move away from the prey, and if \(|\vec{A}| < 1\), they move toward it. GWO terminates once a satisfactory result is reached. Algorithm 2 describes GWO in detail.

[Algorithm 2: GWO pseudocode]

3.5 GWO modifications and hybridizations

Much research has been conducted to modify and hybridize GWO; paper [41] describes both modifications and hybridizations of GWO in detail. Tables 3 and 4 briefly list several crucial modifications and hybridizations, which also show that GWO has not previously been hybridized with WOA.

Table 3 GWO modifications
Table 4 GWO hybridizations

4 Our approach: WOAGWO

Building on the previous sections on WOA and GWO, this section explains the proposed approach, which combines the two algorithms to make WOA's exploitation phase more efficient and thus obtain better solutions.

In general, the standard WOA performs well in finding the best solution, but its refinement of the optimum solution in each iteration is insufficient. Therefore, WOA is hybridized with GWO to improve its performance; the hybridized algorithm is called WOAGWO. Two additions are made to the standard WOA. First, a condition is added inside the exploitation phase to improve the hunting mechanism: according to Eq. (16), A1, A2, and A3 have a great impact on exploitation performance, so a new condition requiring each A to lie between −1 and 1 is added to the standard exploitation phase to help avoid local optima, and Eqs. (14), (15), and (16) are adapted and used inside this condition. Second, another new condition is added to the exploration phase to move the current solution toward the best solution; it prevents a whale from moving to a position that is not better than its previous one.

The differences between WOAGWO and WOA are therefore Eqs. (14), (15), and (16), which are added to the exploitation phase of WOA, and the new mechanism added inside the exploration phase to improve the solution. The new condition, together with the GWO equations, improves the hunting mechanism of WOA, improves the best solution after each iteration, and yields better behavior with respect to local optima. Furthermore, the condition inside the exploration phase improves the searching capability because it keeps an improved solution whenever one exists.

WOAGWO starts by initializing the population of search agents (which act as both whales and wolves). The population then goes through a step that amends any agents lying beyond the search space, and the fitness function is calculated. If an agent's fitness is less than Alpha_score (the best score so far), Alpha_score is set to that fitness. After that, the variables a, A, C, L, and p are updated, and a random number is generated.

If the random number is less than 0.5, a further condition is checked. If |A| < 1, the new position is calculated using Eqs. (1)-(2); if this new position is better than the old one, the old position is updated. If instead |A| ≥ 1, the new position is found using the random search of Eqs. (8)-(9) and, as before, the position is updated only if the new fitness is better than the old one.

On the other hand, if the random value is greater than or equal to 0.5, the new condition is evaluated: whether A1, A2, and A3 each lie between −1 and 1. If so, the positions relative to Alpha, Beta, and Delta are calculated using Eqs. (14)-(15), and the new position is then obtained from Eq. (16).

After the above steps, the new position is checked against the boundaries of the search space; if it lies outside the feasible space, it is amended according to the limits. A new fitness value is then calculated, and finally the best fitness value is returned.

The WOAGWO pseudocode and flowchart are presented in Algorithm 3 and Fig. 2, respectively.

[Algorithm 3: WOAGWO pseudocode]
Fig. 2: WOAGWO flowchart

5 Experimental result and discussion

The WOAGWO algorithm is implemented and evaluated on 23 benchmark functions [28], 25 benchmark functions from CEC2005, and 10 benchmark functions from CEC2019. The following subsections describe the benchmark functions, the experimental setup, the evaluation criteria, the statistical results, and the evaluation of WOAGWO against other metaheuristic algorithms.

5.1 Benchmark functions

Three sets of benchmark functions are used to verify the proposed WOAGWO: the classical set of 23 functions, the 25 functions of CEC2005, and the 10 functions of CEC2019. These sets include unimodal, multimodal, expanded multimodal, and hybrid composition functions. The benchmark test functions are listed in [28].

5.2 Experimental setup

The code is implemented in MATLAB R2017b on Windows 10. The initial population is generated randomly to obtain accurate and unbiased results. Table 5 shows the parameter initialization used in the implementation.

Table 5 Parameter initialization

5.3 Evaluation criteria

WOAGWO is evaluated in several ways, listed below:

  1. Presenting the average and standard deviation of the results.

  2. Comparing WOAGWO with WOA.

  3. Comparing WOAGWO with GWO.

  4. Comparing WOAGWO with other metaheuristic algorithms (DE, ABC, BSO, and WOA).

  5. Creating a box-and-whisker plot comparing WOA, GWO, and WOAGWO (a minimal plotting sketch follows this list).

5.4 WOAGWO versus WOA

These functions evaluate the performance of WOAGWO. Functions f1–f7 are unimodal functions with a single optimum, so they assess the exploitation capability of WOAGWO. Table 6 shows that WOAGWO has better exploitation capability than the standard WOA on all seven functions.

Table 6 Comparison of WOAGWO with WOA

Functions f8–f23 are multimodal functions and are useful for assessing the exploration of the proposed algorithm. Table 6 shows that WOAGWO outperforms WOA in 13 out of 16 multimodal functions, so WOAGWO improves the exploration performance of WOA. WOAGWO obtains the same result as WOA on f16, while WOA performs better on two functions, f17 among them.

The CEC2005 benchmark functions are also used to evaluate WOAGWO. Table 7 shows that WOAGWO's exploitation performance is better than WOA's on f2, f3, f4, and f5, whereas WOA performs better only on f1. Functions f6–f12 are used to evaluate exploration capability: WOAGWO performs better on all of them except f7, where it ties WOA. Despite worse results than WOA in 4 functions, WOAGWO performs better in 10 out of 14 functions. Overall, WOAGWO improves on WOA in exploration and exploitation in 19 functions, WOA is better in four functions, and they tie in one function.

Table 7 WOA, WOAGWO, and GWO comparison results on CEC2005

Table 7 also compares WOAGWO with GWO: WOAGWO performs better than GWO in 4 out of 5 unimodal functions, although its exploration performance improves in only 4 multimodal functions; it is also better in 9 of the remaining functions. In general, WOAGWO is more efficient in 16 functions, GWO is better in 8 functions, and they tie on f7.

Overall, WOAGWO performs best in 14 benchmark functions compared with WOA and GWO and ties them in 1 function, while GWO obtains the best result in 7 functions and WOA in only 3 test functions.

CEC2019 is also used to test WOAGWO against WOA and GWO. Table 8 and Fig. 3 show that WOAGWO is better than WOA in seven functions (f1, f2, f4, f5, f7, f8, and f9) and has the same result as WOA on f3, while WOA is better on f6 and f10.

Table 8 WOA, WOAGWO, and GWO comparison results on CEC2019
Fig. 3: Box-and-whisker plot of WOA, GWO, and WOAGWO on CEC2019

The comparison of WOAGWO against GWO is also shown in Table 8: WOAGWO performs better in five functions, the two algorithms tie on f2 and f8, and GWO is better than WOAGWO in 3 functions.

Overall, WOAGWO is better than both WOA and GWO in 5 multimodal benchmark functions, with two functions tied; WOA is better than WOAGWO in 2 functions, and GWO is best in 1 function.

5.5 Statistical test

To determine whether the results in Tables 6, 7, and 8 are significant, the Wilcoxon rank-sum test is used to compute p values for all benchmark test functions. The results are shown in Table 9; the p values are obtained between WOA and WOAGWO for every benchmark function in each of the above tables.

Table 9 p Value of WOAGWO against WOA for 23 benchmark functions, CEC2005, CEC2019

Table 9 shows that, on the 23 classical functions (first column), WOAGWO obtains significant results against WOA in all unimodal and multimodal functions except f9 and f11; in total, WOAGWO fails to reach significance against WOA in 6 of these functions.

For CEC2005, the p values in Table 9 show that WOAGWO has significant results against WOA in 13 out of 25 functions.

For the CEC2019 test functions, Table 9 shows that WOAGWO obtains significant results against WOA in 6 out of 10 functions.

The reasons behind the results in Table 9 are as follows. WOA has an effective technique for updating the whale positions in the exploration phase, and GWO is very effective when used inside WOAGWO for exploitation. GWO greatly improves WOAGWO over both WOA and GWO because the beta and delta wolves save the best solutions, and these solutions are used to update the positions of the whales inside WOAGWO. Another reason is that decreasing the value of a, which confines \(\vec{A}\) to the range [−1, 1], increases the whales' ability to attain the best solution in each iteration. These factors give WOAGWO a significant advantage over the original WOA and GWO on the 23 classical benchmark functions, CEC2005, and CEC2019.

5.6 Comparing WOAGWO with hybrid and metaheuristic algorithms

As a hybrid algorithm, WOAGWO is compared with the WOA–BAT algorithm on the CEC2019 test functions; the WOA–BAT results are taken from [18]. Table 10 shows that WOAGWO performs better in 6 out of 10 functions, which indicates that using the GWO hunting technique in the exploitation phase of WOA is behind the achieved results. Although WOA–BAT also improved WOA, WOAGWO achieves the best results and is believed to perform better than WOA–BAT.

Table 10 Comparison results of WOA_BAT and WOAGWO using CEC2019

In addition, this section reports the results of different metaheuristic algorithms on CEC2005: DE, ABC, BSO, WOA, and WOAGWO. Table 11 shows that each algorithm is best on a different number of the 25 functions, as summarized below:

Table 11 ABC, DE, BSO, WOA, and WOAGWO comparison results on CEC2005
  • ABC does not achieve the best result on any function.

  • DE obtains the best results in 3 out of 25 functions.

  • BSO obtains the best results in 9 out of 25 functions.

  • WOA obtains the best results in 4 out of 25 functions.

  • WOAGWO achieves the best results in 9 out of 25 functions.

As a result, WOAGWO and BSO each achieve the best results in 9 out of 25 functions, outperforming the other three algorithms on those benchmarks. WOAGWO is best in 2 unimodal functions and in 7 hybrid benchmark functions, which indicates a sufficient capability of balancing exploration and exploitation. WOA, by comparison, performs best in 4 hybrid benchmark functions; WOAGWO thus raises WOA's tally from 4 to 9 functions.

BSO, however, is better than the other algorithms in three unimodal functions, which means it has strong exploitation capability, and it also performs best in three multimodal functions. Its exploration performance is therefore weaker than that of WOAGWO.

DE ranks third among the algorithms in Table 11, performing best in 2 unimodal functions as well as in multimodal functions, whereas ABC obtains the worst results of all. Finally, it can be said that WOAGWO balances exploitation and exploration better than BSO, WOA, ABC, and DE.

Overall, WOAGWO is very competitive against DE, ABC, BSO, and WOA: it performs best in 9 functions, as does BSO. WOAGWO improves the performance of WOA from 4 functions to 9 because of the conditioning technique added inside the exploration phase to improve solution quality and the second condition, focused on the value of A, added inside the exploitation phase, which improves the exploitation capability of WOAGWO. Adapting Eqs. (14)-(16) also improves the performance of WOA, as Table 11 shows that plain WOA is better than BSO in only 4 functions.

5.7 WOAGWO for solving pressure vessel design problem

Pressure vessel design is a classical engineering problem whose goal is to minimize the total cost of a cylindrical pressure vessel, comprising forming, material, and welding costs. The head of the vessel is hemispherical, and both ends of the vessel are capped. The problem has four design variables: shell thickness \(T_s\), head thickness \(T_h\), inner radius \(R\), and the length \(L\) of the cylindrical section excluding the head. The problem is subject to four constraints, described by the following equations.

$$\vec{x} = \left[ x_1\; x_2\; x_3\; x_4 \right] = \left[ T_s\; T_h\; R\; L \right],$$
$$f(\vec{x}) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3$$
(17)

subject to the variable bounds

$$\begin{aligned} & 0 \le x_{1} \le 99, \\ & 0 \le x_{2} \le 99, \\ & 10 \le x_{3} \le 200, \\ & 10 \le x_{4} \le 200, \\ \end{aligned}$$

and to the constraints

$$g_1(\vec{x}) = -x_1 + 0.0193\,x_3 \le 0$$
(18)
$$g_2(\vec{x}) = -x_2 + 0.00954\,x_3 \le 0$$
(19)
$$g_3(\vec{x}) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0$$
(20)
$$g_4(\vec{x}) = x_4 - 240 \le 0$$
(21)

WOA previously achieved the best results for this problem [16]. Here, the authors apply three metaheuristic algorithms to it: WOA, WOAGWO, and FDO [54]. Table 12 shows that WOAGWO outperforms both WOA and FDO, obtaining 1.63, 1.43, 67.07, and 10 for \(T_s\), \(T_h\), \(R\), and \(L\), respectively.

Table 12 Comparison WOA, WOAGWO, and FDO for pressure vessel design

6 Conclusion

To sum up, this paper presented WOA and GWO along with their modifications and hybridizations, highlighted their limitations, and described both algorithms in detail. The new approach, WOAGWO, was then presented, and experimental results were reported to assess its performance.

Several experiments were conducted to evaluate WOAGWO. It was tested on the 23 benchmark test functions to assess its performance in both exploitation and exploration: WOAGWO was superior in 20 out of 23 functions compared with WOA and GWO, tied WOA in 1 function, and WOA was slightly better in 2 functions.

In addition, the CEC2005 benchmark functions were used to evaluate WOAGWO. It performed best in 14 functions, tied WOA in 1 function, and was beaten by WOA in 3 others. Despite its better overall result, WOAGWO was better than GWO in only 14 functions out of 25.

Furthermore, WOAGWO was evaluated on the CEC2019 benchmark functions and compared with WOA and GWO. WOAGWO tied WOA in 1 function and obtained better results in 7, while WOA was better in 2 functions. Compared with GWO, WOAGWO was superior in 5 functions and tied in 2, while GWO performed better in 3 functions.

The Wilcoxon rank-sum test was used to evaluate WOAGWO statistically: it obtained significant results in 17 out of 23 classical benchmark functions, in 13 CEC2005 functions, and in 6 out of 10 CEC2019 test functions.

WOAGWO was then compared with DE, ABC, BSO, and WOA. Like WOAGWO, BSO was best in 9 benchmark functions, WOA was competitive in 4 functions, and DE ranked third. Overall, WOAGWO improved exploration capability, improved the solution quality after each iteration, and avoided local optima.

Finally, WOAGWO was used to solve a real-world engineering problem, the pressure vessel design problem, alongside WOA and FDO; WOAGWO attained an optimum solution better than both.

Generally, WOAGWO improved the standard WOA and could improve solutions for problems related to poor performance and falling into local optima in the exploration phase, producing significant results on almost all unimodal and multimodal functions. Its better results on the benchmark test functions come from the two techniques included in WOAGWO: the condition added inside the exploration phase, which prevents whales from moving to positions that are not better than their previous ones and thereby improves exploration performance; and the conditions related to the value of A, together with the adapted GWO equations in the exploitation phase, which force the whales toward better results. The improvement of WOAGWO over WOA is also due to the ability of the beta and delta wolves to save the best solutions, and to the decreasing value of a, which stops the movement of the prey so that the whales can hunt it.

Finally, the following research directions can be pursued in the future:

  1. Solving real-world problems, such as medical and other engineering problems.

  2. Hybridizing different techniques to improve the current results.

  3. Implementing chaotic maps on the proposed hybridization for further enhancement.