1 Introduction

Optimization problems arise in nearly every aspect of society and daily life, and are a research hotspot in many fields, such as image compression [1], path planning [2], structural optimization [3, 4], parameter estimation [5], skeletal structure size optimization [6], distribution system optimization [7], and resource scheduling [8]. In many practical applications, optimization problems often exhibit dynamic, nonlinear, uncertain and high-dimensional characteristics [9]. The complexity of engineering optimization problems increases the complexity of the algorithms used to solve them, which in turn tends to make the algorithms less stable and significantly raises their computational cost.

To solve such problems more effectively, many authors have proposed good optimization algorithms, which fall into two categories: deterministic algorithms and stochastic algorithms. Deterministic algorithms mainly include the hill-climbing method [10], the Newton iteration method [11], the simplex method [12], and the least-squares method [13, 14]. These algorithms have good convergence behavior, but for complex optimization problems they easily fall into local optima. Stochastic algorithms contain a random term, which gives them exploration and search capability. However, under the same initial conditions a stochastic algorithm inevitably produces different solutions, so its repeatability is poor [15].

Most stochastic algorithms can be regarded as meta-heuristics, which are inspired by the behavior of animals in nature. Examples include the bat algorithm (BA) [16, 17], the firefly algorithm (FA) [4, 18], the grey wolf optimizer (GWO) [19,20,21], moth-flame optimization (MFO) [22], the grasshopper optimisation algorithm (GOA) [23], bacterial foraging optimization (BFO) [24], ant lion optimization (ALO) [25], Harris hawks optimization (HHO) [26], and the barnacles mating optimizer (BMO) [27]. These emerging meta-heuristic algorithms [28,29,30] can suppress local optima to a certain extent when solving complex nonlinear problems, and are therefore widely applied to practical engineering problems [31].

In 2016, Seyedali Mirjalili proposed the whale optimization algorithm (WOA) [32]. After locating its prey, a humpback whale first dives below it and then releases a distinctive stream of bubbles along a circular path. Accordingly, the WOA works in three parts: spiral hunting, encircling prey, and searching for prey. Khaled et al. [33] used the WOA to optimize the scheduling of a power system and achieved optimal reactive power dispatch. Yu et al. [34] applied the WOA to controller parameter optimization, where the optimized parameters made the control system more robust.

However, the WOA still suffers from low exploration efficiency, poor convergence behavior, and the risk of falling into local optima. To strengthen its optimization performance, many variants of the WOA have been proposed. Huiling Chen et al. [35] argued that the traditional WOA easily falls into local optima in complex optimization scenarios. To solve this problem, they proposed a balanced variant called the BWOA, which is better suited than the WOA to optimizing complex scenarios. Mohammad Tubishat et al. [36] encountered the same tendency toward local optima when optimizing large data sets and proposed an improved algorithm (IWOA); compared with other meta-heuristic algorithms, the IWOA achieved the best sentiment analysis classification accuracy.

In 2017, Ying Ling et al. proposed a new whale optimization algorithm based on Lévy flights (LWOA) [37], and in 2018 Yongquan Zhou used the LWOA to solve engineering optimization problems [38]. Huiling Chen et al. [39] argued that the WOA has poor convergence behavior, low exploration efficiency and a tendency to fall into local optima when optimizing complex unconstrained continuous problems. To overcome these shortcomings, they proposed an enhanced variant called the RDWOA. Experimental data show that the RDWOA is a promising variant of the WOA, with better exploration efficiency than other state-of-the-art algorithms.

Although the WOA, IWOA, BWOA and RDWOA have distinctive search mechanisms for global optimization problems, they are not entirely suitable for solving complex multi-dimensional problems. To strengthen the global exploration efficiency of the WOA and improve its convergence behavior, we study it further. We believe that the switching mechanism of the WOA's position update formulas and the two strategies of the RDWOA have the following shortcomings.

  1. The switching mechanism among the WOA's three position update formulas relies on uniformly distributed random parameters, which makes it random, uncertain and blind.

  2. The RDWOA's random spare strategy makes each individual approach the optimal individual with a certain probability. This does improve convergence behavior, but it can also cause the RDWOA to converge prematurely, which reduces its overall exploration efficiency.

  3. The dual adaptive weight strategy of the RDWOA does improve the algorithm's exploitation accuracy and global exploration capability. However, the iterative curves of the two weight parameters show that, as the RDWOA iterates, the adaptive weights tend to diverge rather than converge. This not only makes the RDWOA deviate from the optimum it is approaching, but also degrades its convergence behavior.

To address these three shortcomings, while strengthening the WOA's global exploration capability and improving its convergence behavior, this paper proposes the EGE-WOA.

The novelties of the paper are as follows.

  1. A new judging mechanism for the whale position update: the original random switching value is replaced by the difference between the fitness of an arbitrary individual and that of the best individual.

  2. For constrained and unconstrained optimization problems, the EGE-WOA introduces Lévy flights in different ways.

  3. For constrained and unconstrained optimization problems, the EGE-WOA introduces different convergent adaptive weights.

The rest of this article is arranged as follows: Sect. 2 reviews related research on the WOA and RDWOA. Section 3 presents the proposed EGE-WOA. Section 4 records the performance comparison of the algorithms, together with the comparison data and the simulation data in low- and high-dimensional spaces. Section 5 demonstrates the efficiency of the EGE-WOA through five real-world application cases. Finally, the conclusions are given in Sect. 6.

2 Related research

The whale is the largest animal in the world; adult whales can grow to 33 m long and weigh 181 tons [40]. Whales have infrasonic/ultrasonic hearing and rely on echolocation to search for surrounding prey and to transmit information to each other. Within the whale family, humpback whales are huge baleen whales. Lacking chewing teeth, they prey mainly on schools of small fish and shrimp, and have therefore evolved a special foraging behavior called bubble-net feeding. Based on the hunting characteristics of humpback whales, their hunting process includes three stages: encircling prey, spiral search, and random search.

The sperm whale is a large whale with an extraordinary voice: its loudest sound reaches 234 decibels [41], the loudest in the animal kingdom, and could deafen a human standing beside it. The rumbling sound waves act like a bright light in the dark deep sea, enabling the sperm whale to detect giant squid within 500 m, as shown in sub-figures (a) and (b) of Fig. 1. The squid's hearing system cannot detect the high-frequency ultrasonic waves emitted by the sperm whale, so it cannot perceive the danger approaching, as shown in sub-figures (b), (c) and (d) of Fig. 1. This allows the sperm whale to quickly capture its prey, as shown in sub-figure (e).

Fig. 1
figure 1

A sperm whale uses ultrasound to catch squid

2.1 A brief introduction to the WOA

2.1.1 Surround prey

Whales can identify the location of their prey through echolocation and surround the prey. In the process of surrounding the prey, the current local optimal solution is the whale closest to the food. At this time, other whales gradually approach the whale that represents the optimal solution. In this way, the encirclement of the prey by the whale is completed.

The location update formula of each individual is as follows:

$$\left\{ \begin{gathered} X_{{{\text{local}}}} (t + 1) = X_{{{\text{best}}}} (t) - A \cdot B \hfill \\ B = |C \cdot X_{{{\text{best}}}} (t) - X_{{{\text{local}}}} (t)| \hfill \\ \end{gathered} \right.,$$
(1)

where \(X_{{{\text{local}}}} (t)\) represents the spatial position of the whale at the tth iteration, and \(X_{{{\text{best}}}} (t)\) represents the spatial position of the optimal whale at the tth iteration. A and C are the coefficient vectors:

$$\left\{ \begin{gathered} A = 2h \cdot {\text{rand}} - h \hfill \\ C = 2 \cdot {\text{rand}} \hfill \\ h = 2 - 2t/t_{\max } \hfill \\ \end{gathered} \right.,$$
(2)

where rand obeys a uniform random distribution on [0, 1], tmax is the maximum number of iterations, and h is an iteration variable that decreases linearly from 2 to 0.
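The encircling update of Eqs. (1)–(2) can be sketched in a few lines of NumPy; the function name `encircle` and the per-dimension sampling of rand are illustrative choices, not part of the original formulation.

```python
import numpy as np

def encircle(x_local, x_best, t, t_max, rng):
    """'Surround prey' step (Eqs. 1-2): move toward the best whale."""
    h = 2 - 2 * t / t_max                    # decreases linearly from 2 to 0
    A = 2 * h * rng.random(x_local.shape) - h
    C = 2 * rng.random(x_local.shape)
    B = np.abs(C * x_best - x_local)         # distance to the best individual
    return x_best - A * B

rng = np.random.default_rng(0)
x_new = encircle(np.zeros(3), np.ones(3), t=10, t_max=100, rng=rng)
```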

2.1.2 Spiral search prey

The whales gradually narrow the encircling circle while spiralling upward to obtain the food. This part of the algorithm includes two mechanisms, shrinking encirclement and a spiral update of the spatial position, and assumes that each mechanism is selected with probability 0.5:

$$\left\{ \begin{gathered} X_{{{\text{local}}}} (t + 1) = X_{{{\text{best}}}} (t) + B_{p} \cdot e^{bl} \cos (2\pi l) \hfill \\ B_{p} = |X_{{{\text{best}}}} (t) - X_{{{\text{local}}}} (t)| \hfill \\ \end{gathered} \right.,$$
(3)

where b is a constant and l obeys a uniform random distribution on [− 1, 1].
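Equation (3) can be sketched as follows (a hypothetical helper; the logarithmic-spiral constant b is taken as 1 here purely for illustration):

```python
import numpy as np

def spiral_update(x_local, x_best, b=1.0, rng=None):
    """Spiral position update (Eq. 3) around the best whale."""
    rng = np.random.default_rng() if rng is None else rng
    l = rng.uniform(-1, 1)                   # l ~ U[-1, 1]
    Bp = np.abs(x_best - x_local)            # distance to the prey (best whale)
    return x_best + Bp * np.exp(b * l) * np.cos(2 * np.pi * l)

x = np.array([1.0, 2.0])
at_prey = spiral_update(x, x.copy())         # a whale already at the prey stays there
```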

2.1.3 Random search prey

When the coefficient vector satisfies |A| ≥ 1, the whale is swimming outside the shrinking encirclement. In this case, individual whales conduct a random search based on each other's positions, and the mathematical model is as follows:

$$B = |C \cdot X_{{{\text{rand}}}} (t) - X_{{{\text{local}}}} (t)|,$$
(4)
$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{rand}}}} (t) - A \cdot B,$$
(5)

where \(X_{{{\text{rand}}}} (t)\) is the location of a random whale.
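Putting the three stages together, one WOA position update might look like the sketch below; the 0.5 probability split and the |A| ≥ 1 test follow Sects. 2.1.1–2.1.3, while the per-dimension treatment of A is an implementation assumption of ours.

```python
import numpy as np

def woa_step(x_local, x_best, pop, t, t_max, b=1.0, rng=None):
    """One WOA update: p < 0.5 picks encircle/random search by |A|, else spiral."""
    rng = np.random.default_rng() if rng is None else rng
    h = 2 - 2 * t / t_max
    A = 2 * h * rng.random(x_local.shape) - h
    C = 2 * rng.random(x_local.shape)
    if rng.random() < 0.5:
        if np.all(np.abs(A) < 1):                 # exploit: shrink toward the best
            return x_best - A * np.abs(C * x_best - x_local)
        x_rand = pop[rng.integers(len(pop))]      # explore: follow a random whale (Eqs. 4-5)
        return x_rand - A * np.abs(C * x_rand - x_local)
    l = rng.uniform(-1, 1)                        # spiral branch (Eq. 3)
    return x_best + np.abs(x_best - x_local) * np.exp(b * l) * np.cos(2 * np.pi * l)

pop = np.zeros((5, 3))
out = woa_step(np.ones(3), np.zeros(3), pop, t=1, t_max=100, rng=np.random.default_rng(1))
```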

2.2 Brief introduction of RDWOA

2.2.1 Random spare method

The method replaces components of the current individual's position vector with the best individual's corresponding values when a certain condition holds. The condition is shown in the following equation:

$${\text{tan}}(\pi \cdot ({\text{rand}} - 0.5)) < (1 - {\text{iter}}/{\text{Max}}\_{\text{iter}}),$$
(6)

where rand obeys [0 1] uniform random distribution. iter represents the current iteration value. Max_iter represents the maximum iteration value.

When inequality (6) is satisfied, the random spare mechanism is triggered. Although the method improves the convergence behavior and exploration ability of the RDWOA, it may also cause premature convergence, reducing the algorithm's exploration efficiency.
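Condition (6) compares a Cauchy-distributed draw (tan(π(rand − 0.5)) is a standard Cauchy variate) against a threshold that shrinks from 1 to 0 over the run; a minimal sketch:

```python
import numpy as np

def spare_condition(it, max_it, rng=None):
    """RDWOA random-spare trigger (Eq. 6)."""
    rng = np.random.default_rng() if rng is None else rng
    cauchy = np.tan(np.pi * (rng.random() - 0.5))   # standard Cauchy draw
    return bool(cauchy < (1 - it / max_it))

fired = spare_condition(0, 100, rng=np.random.default_rng(0))
```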

2.2.2 Dual adaptive weight

$$w_{1} = (1 - {\text{iter}}/{\text{Max}}\_{\text{iter}})^{{1 - \tan (\pi \times ({\text{rand}} - 0.5) \times s/{\text{Max}}\_{\text{iter}})}} .$$
(7)
$$w_{2} = (2 - 2 \times {\text{iter}}/{\text{Max}}\_{\text{iter}})^{{1 - \tan (\pi \times ({\text{rand}} - 0.5) \times s/{\text{Max}}\_{\text{iter}})}} .$$
(8)

When an individual's position is not updated, s automatically increases by 1.

w1 and w2 have the same curve characteristics. We take w1 as the research object.

Figure 2 shows the curve of w1 when s takes different constant values; over the whole iterative process, the curve characteristics change greatly with s. When s = 1, w1 converges linearly to 0. When s = 200, w1 converges to 0 non-linearly. When s = 500, the curve of w1 diverges and does not converge.

Fig. 2
figure 2

The curve characteristics of w1 with different s values
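The behavior described above can be reproduced numerically; this sketch implements Eq. (7) literally (s treated as a constant and rand redrawn on each call, which is our reading of the formula):

```python
import numpy as np

def rdwoa_w1(it, max_it, s, rng=None):
    """RDWOA adaptive weight w1 (Eq. 7); large s makes the exponent volatile."""
    rng = np.random.default_rng() if rng is None else rng
    expo = 1 - np.tan(np.pi * (rng.random() - 0.5) * s / max_it)
    return (1 - it / max_it) ** expo

w0 = rdwoa_w1(0, 1000, s=1, rng=np.random.default_rng(0))   # base is 1, so w1 = 1
```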

The first half of the RDWOA, used when FEs/MaxFEs ≤ 0.5, is as follows:

$$X_{{{\text{local}}}} (t + 1) = w_{1} \times X_{{{\text{best}}}} (t) - A \times B,$$
(9)
$$X_{{{\text{local}}}} (t + 1) = w_{1} \times X_{{{\text{best}}}} (t) + B_{p} \cdot e^{bl} \cos (2\pi l),$$
(10)
$$X_{{{\text{local}}}} (t + 1) = w_{1} \times X_{{{\text{rand}}}} (t) - A \times B.$$
(11)

The second half of the RDWOA, used when FEs/MaxFEs > 0.5, is as follows:

$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{best}}}} (t) - w_{2} \times A \times B,$$
(12)
$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{best}}}} (t) + w_{2} \times B_{p} \cdot e^{bl} \cos (2\pi l),$$
(13)
$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{rand}}}} (t) - w_{2} \times A \times B.$$
(14)

For unconstrained optimization problems, it has been demonstrated that the RDWOA has better global exploration performance than the FA, BA and IWOA.

3 Proposed EGE-WOA

Optimization problems can be divided into constrained and unconstrained optimization problems [42]. A constrained optimization problem is a nonlinear programming problem with constraints, such as the five real cases in Sect. 5.

An unconstrained optimization problem selects the optimal solution, according to a certain index, from all possible alternatives of a problem, such as the 33 benchmark functions in Table 1.

Table 1 Description of the 33 benchmark functions

3.1 The Lévy flights’ method

Lévy flights have been applied to optimization and optimal search, and the results show that they have strong global search capability [43]. Since the Lévy flight method can improve an algorithm's ability to explore the global space, the EGE-WOA introduces it, as shown in the following formula:

$${\rm{Levy}}(\eta )\sim \frac{\varphi \times \mu }{{|v|^{1/\eta } }},$$
(15)

where v and µ obey the standard normal distribution, and \(\varphi\) is given by the following formula:

$$\varphi = \bigg[\frac{\tau (1 + \eta ) \times \sin (\pi \times \eta /2)}{{\tau ((1 + \eta )/2) \times \eta \times 2^{(\eta - 1)/2} }}\bigg]^{1/\eta } ,(\eta = 1.5)$$
(16)

where τ is the standard Gamma function.

The Lévy flight method is needed only when the spatial positions of the individuals stop changing. In this paper, the method is applied when iter/Max_iter < 0.5:

$$X^{D} (t) = X^{D} (t) \times {\text{Lévy}}(D).$$
(17)

In this formula, D is the spatial dimension of each individual X(t); XD(t) is the D-dimensional position; and Lévy(D) is a D-dimensional vector of Lévy-distributed values.
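Equations (15)–(16) are the Mantegna-style recipe for generating Lévy-stable steps; a self-contained sketch (the function name `levy` is ours):

```python
import math
import numpy as np

def levy(dim, eta=1.5, rng=None):
    """Heavy-tailed Lévy step of Eqs. (15)-(16) with exponent eta."""
    rng = np.random.default_rng() if rng is None else rng
    num = math.gamma(1 + eta) * math.sin(math.pi * eta / 2)
    den = math.gamma((1 + eta) / 2) * eta * 2 ** ((eta - 1) / 2)
    phi = (num / den) ** (1 / eta)            # Eq. (16)
    mu = rng.standard_normal(dim)             # mu ~ N(0, 1)
    v = rng.standard_normal(dim)              # v  ~ N(0, 1)
    return phi * mu / np.abs(v) ** (1 / eta)  # Eq. (15)

step = levy(4, rng=np.random.default_rng(0))
```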

3.2 The new random spare method

The method replaces components of the current individual's position vector with the best individual's corresponding values when a certain condition holds. The condition is as shown in the following equation:

$${\text{rand}} < 1 - 0.5 \times {\text{iter}}/{\text{Max\_iter}}.$$
(18)
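Compared with condition (6), condition (18) removes the Cauchy term: the replacement probability now decays deterministically and linearly from 1.0 to 0.5 over the run. A minimal sketch:

```python
import numpy as np

def new_spare_condition(it, max_it, rng=None):
    """EGE-WOA spare trigger (Eq. 18): fires with probability 1 - 0.5*it/max_it."""
    rng = np.random.default_rng() if rng is None else rng
    return bool(rng.random() < 1 - 0.5 * it / max_it)

always = new_spare_condition(0, 100)          # threshold is 1.0, so this always fires
```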

3.3 The convergent adaptive weight

It can be seen from Fig. 2 that the weight curves of the RDWOA are divergent and unstable, which weakens its global exploration efficiency.

The paper proposes a new nonlinear, convergent adaptive weight:

$$\left\{ \begin{gathered} w_{11} = 2 \times ({\text{rand}} - 0.5) \cdot 1/\exp (\tan ({\text{iter}} \cdot \pi /{\text{Maxiter}})) \hfill \\ w_{22} = 0.5 \times ({\text{rand}} - 0.5) \cdot 1/\exp (\tan ({\text{iter}} \cdot \pi /{\text{Maxiter}})) \hfill \\ \end{gathered} \right..$$
(19)

Figure 3 shows the curve characteristic of the new adaptive weight. Compared with Fig. 2, the curve characteristic in Fig. 3 is convergent and nonlinear.

Fig. 3
figure 3

The curve characteristics of w11
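Equation (19) can be sketched as below. The implementation is literal, so note that the tan() argument crosses π/2 mid-run, where the damping factor collapses the weights toward zero:

```python
import numpy as np

def ege_weights(it, max_it, rng=None):
    """Convergent adaptive weights w11 and w22 of Eq. (19)."""
    rng = np.random.default_rng() if rng is None else rng
    damp = 1.0 / np.exp(np.tan(it * np.pi / max_it))
    w11 = 2.0 * (rng.random() - 0.5) * damp
    w22 = 0.5 * (rng.random() - 0.5) * damp
    return w11, w22

w11, w22 = ege_weights(0, 100, rng=np.random.default_rng(0))
```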

3.4 The judgment mechanism of the whale’s position update formula

Based on this habit of whales, we propose a new switching mechanism for the whale position update formula: whale ultrasound is used to judge the distance between individuals, replacing the WOA's original random switching mechanism. We design different judgment mechanisms for the two types of continuous optimization problems (unconstrained and constrained).

According to the difference between the fitness value of the current best individual and that of an arbitrary individual, a new judgment value for switching the position update formula is proposed:

$$d = |f(x^{*} (t)) - f(x_{{{\text{rand}}}} (t))|,$$
(20)

where f(x*(t)) is the fitness value of the best individual, f(xrand(t)) is the fitness value of an arbitrary individual, and d is the difference between the two.
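The switching value of Eq. (20) is just an absolute fitness gap; here it is sketched with a sphere function as a stand-in objective:

```python
import numpy as np

def judgment_value(f, x_best, x_rand):
    """Switching value d (Eq. 20): fitness gap between best and random individuals."""
    return abs(f(x_best) - f(x_rand))

sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
d = judgment_value(sphere, np.zeros(3), np.ones(3))   # |0 - 3| = 3
```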

3.4.1 For continuous unconstrained optimization problems

The EGE-WOA proposed in the paper consists of two parts.

The first half of the EGE-WOA, used when FEs/MaxFEs ≤ 0.2, is as follows:

$$X_{{{\text{local}}}} (t + 1) = w_{11} \times X_{{{\text{best}}}} (t) - A \times B,$$
(21)
$$X_{{{\text{local}}}} (t + 1) = w_{11} \times X_{{{\text{best}}}} (t) + B_{p} \cdot e^{bl} \cos (2\pi l),$$
(22)
$$X_{{{\text{local}}}} (t + 1) = w_{11} \times X_{{{\text{rand}}}} (t) - A \times B.$$
(23)

The second half of the EGE-WOA, used when FEs/MaxFEs > 0.2, is as follows:

$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{best}}}} (t) - w_{22} \times A \times B,$$
(24)
$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{best}}}} (t) + w_{22} \times B_{p} \cdot e^{bl} \cos (2\pi l),$$
(25)
$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{rand}}}} (t) - w_{22} \times A \times B.$$
(26)
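The two phases of Eqs. (21)–(26) differ only in where the weight multiplies: the guide position in the first 20% of evaluations, the step term afterwards. A dispatch-style sketch (the branch selection itself is driven by d from Eq. (20) and is omitted here):

```python
import numpy as np

def ege_update(x_local, x_best, x_rand, A, B, w11, w22, fes, max_fes,
               branch, b=1.0, rng=None):
    """One EGE-WOA position update for unconstrained problems (Eqs. 21-26)."""
    rng = np.random.default_rng() if rng is None else rng
    l = rng.uniform(-1, 1)
    spiral = np.abs(x_best - x_local) * np.exp(b * l) * np.cos(2 * np.pi * l)
    if fes / max_fes <= 0.2:                        # Eqs. (21)-(23): weight the guide
        return {"encircle": w11 * x_best - A * B,
                "spiral":   w11 * x_best + spiral,
                "random":   w11 * x_rand - A * B}[branch]
    return {"encircle": x_best - w22 * A * B,       # Eqs. (24)-(26): weight the step
            "spiral":   x_best + w22 * spiral,
            "random":   x_rand - w22 * A * B}[branch]

x = np.ones(2)
early = ege_update(x, x, x, A=np.zeros(2), B=np.zeros(2), w11=1.0, w22=1.0,
                   fes=10, max_fes=100, branch="encircle")
```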

3.4.2 For continuous constrained optimization problem

The EGE-WOA proposed in the paper consists of two parts:

$$L = {\text{Levy}}(1).$$
(27)

The first half of the EGE-WOA, used when FEs/MaxFEs ≤ 0.2, is as follows:

$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{best}}}} (t) - w_{22} \times A \times B,$$
(28)
$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{best}}}} (t) + B_{p} \cdot e^{bl} \cos (2\pi l) + {\text{sign}}({\text{rand}} - 0.5) \times L,$$
(29)
$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{rand}}}} (t) - w_{22} \times A \times B.$$
(30)

The second half of the EGE-WOA, used when FEs/MaxFEs > 0.2, is as follows:

$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{best}}}} (t) - w_{22} \times A \times B \times L,$$
(31)
$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{best}}}} (t) + B_{p} \cdot e^{bl} \cos (2\pi l) + {\text{sign}}({\text{rand}} - 0.5) \times L,$$
(32)
$$X_{{{\text{local}}}} (t + 1) = X_{{{\text{rand}}}} (t) - w_{22} \times A \times B.$$
(33)

3.5 Opposition-based learning method (OBL)

OBL was first proposed by Tizhoosh [44] in 2005. The detailed description of OBL is as follows:

  1. Suppose x is any real number in the interval [lb, ub]. The opposite number xop of x is defined as follows:

    $$x_{op} = lb + ub - x,$$
    (34)

    where lb ≤ ub, and lb and ub are any real numbers. The definition extends similarly to the multi-dimensional case.

  2. Suppose IP(x1, x2,…, xn) is a point in an n-dimensional space, where each xi lies in the interval [lb(i), ub(i)].

The opposite number OP of IP is defined as

$$x_{iop} = lb(i) + ub(i) - x_{i} ,\quad \forall i \in [1,n],$$
(35)

where xiop is the coordinate of OP.
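OBL simply reflects a point across the centre of its bounding box; a minimal NumPy sketch of Eqs. (34)–(35):

```python
import numpy as np

def opposite(x, lb, ub):
    """Opposition-based learning (Eqs. 34-35): x_op = lb + ub - x, per dimension."""
    return lb + ub - x

x_op = opposite(np.array([1.0, 4.0]), np.array([0.0, 0.0]), np.array([10.0, 5.0]))
```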

3.6 Random distribution method

After each iteration, the position of each individual whale is randomly redistributed within a region of radius k. The purpose of the method is to enhance the global search capability of the WOA:

$$X(t + 1) = X(t) + k \times {\text{rand}} \times {\text{sign}}({\text{rand}} - 0.5),$$
(36)

where \(k = |ub - lb|/(2s)\) and s is the population size.
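Reading k as |ub − lb|/(2s) (the grouping of the division is our interpretation of the formula above), Eq. (36) can be sketched as:

```python
import numpy as np

def redistribute(X, lb, ub, rng=None):
    """Random redistribution (Eq. 36): jitter each whale within radius k."""
    rng = np.random.default_rng() if rng is None else rng
    s = len(X)                                   # population size
    k = np.abs(ub - lb) / (2 * s)
    return X + k * rng.random(X.shape) * np.sign(rng.random(X.shape) - 0.5)

out = redistribute(np.zeros((4, 2)), lb=-1.0, ub=1.0, rng=np.random.default_rng(0))
```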


4 Numerical simulations

4.1 Benchmark function

We choose 33 benchmark functions [45, 46] in Table 1. The 33 benchmark functions belong to unconstrained optimization problems (Fig. 4; Table 2).

Fig. 4
figure 4

Flow chart of the EGE-WOA

Table 2 Algorithm parameter settings

4.2 Comparison with other meta-heuristic algorithms

The dimension D of the search space is 30. Each algorithm runs independently 20 times, and its mean and variance are recorded.

As can be seen from Table 3, the means and standard deviations (Std) of the EGE-WOA and BMO are the smallest, which reflects that the EGE-WOA and BMO have the highest exploration efficiency when optimizing the benchmark functions. For F5, F16 and F27, the mean and variance of the EGE-WOA are the smallest.

Table 3 Comparison of results for different variant algorithm

Compared with the GWO, FA, BA, MFO, BFO, FPA, GOA, ALO and HHO, the EGE-WOA has very obvious advantages. Its mean and variance are the smallest. Therefore, the EGE-WOA has a strong global exploration efficiency.

In Fig. 5, the convergence behaviors of the EGE-WOA and BMO are better than those of the GWO, FA, BA, MFO, BFO, FPA, GOA, ALO and HHO, and the EGE-WOA converges significantly better than the BMO. For F4, F6 and F7, the HHO converges better than the GWO, FA, BA, MFO, BFO, FPA, GOA and ALO; the BMO converges significantly better than the HHO; and the EGE-WOA converges best of all. Therefore, in general, the global exploration efficiency and convergence behavior of the EGE-WOA are the best.

Fig. 5
figure 5

Simulation curve of the selected function

4.3 Comparison with other WOA variants

To objectively verify the global optimization performance of the EGE-WOA on the 33 unconstrained benchmark functions, we compare it with five representative whale algorithms: the WOA, LWOA, BWOA, IWOA, and RDWOA.

In the experiment, for F1–F11 the number of search agents was set to 15. Each algorithm was run independently 50 times, with a maximum of 1000 iterations for all algorithms. The experimental results are recorded in Table 4.

Table 4 Results of different variant algorithms

The data in Table 4 show that, in optimizing the unconstrained benchmark functions, the mean and variance of the EGE-WOA are the smallest for F2, F3, F4, F5, F6, F12, F14, F15, F16, F17, F24, F25, F26, F27 and F28, indicating that the EGE-WOA has the best global exploration efficiency.

The BWOA, RDWOA and EGE-WOA have the same mean and variance for F1, F7, F8, F9, F10, F12, F18, F19, F20, F23, F29, F30 and F31. In these cases all three successfully avoided falling into local optima and obtained the global optimum, showing that they share the best global exploration efficiency.

The EGE-WOA and BWOA have the smallest mean and variance for F11, F22, F32, and F33.

To verify the global exploration efficiency of the EGE-WOA, the paper selects F1, F2, F3, F4, F5, F11, F12, F14, F15, F16, F22, F25, F26 and F27 to display the convergence curves of five algorithms in Fig. 6.

Fig. 6
figure 6

The convergence trend of test functions

From the convergence curves of the six algorithms in Fig. 6, it can be seen that the convergence behavior of the EGE-WOA is the best. The EGE-WOA can effectively enhance the global optimization efficiency of the WOA and improve its convergence behavior. It can effectively avoid the risk of falling into a local optimum.

For F2, F3 and F4, for example, the convergence curves of the WOA, IWOA, BWOA and RDWOA show that these algorithms are trapped in local optima: they obtain only different local optimal solutions and cannot reach the global optimum.

For example, for F11, F12, F14 and F15, the EGE-WOA has the best convergence behavior. It has strong global exploration capabilities and can effectively avoid falling into local optimum. The convergence behaviors of the BWOA and RDWOA are worse than that of the EGE-WOA. The convergence behaviors of the WOA, LWOA and IWOA are the worst. They are unable to obtain the global optimal solution because they fall into the trap of local optimality.

Although the LWOA introduces Lévy flights, it does not use them effectively. The simulation data show that, for unconstrained optimization problems in multi-dimensional spaces, the LWOA cannot prevent the WOA from falling into local optima.

In summary, the EGE-WOA has the best global exploration capability and convergence behavior. In contrast, the optimization efficiency of the RDWOA and BWOA for unconstrained optimization problems is significantly better than that of the IWOA and WOA (Table 5).

Table 5 Experimental setting of algorithm execution time

4.4 The execution time of different algorithms

The execution times of the different algorithms are tested on the same computer in the same environment. Each algorithm runs independently 20 times, and the experimental results are recorded in Table 6.

Table 6 The experimental results

Although Lévy flights enhance the exploration efficiency of the EGE-WOA in the search space, they also increase the execution time of the algorithm.

It can be seen from Table 7 that for F1–F11 the WOA has the shortest execution time and the EGE-WOA the longest. For the five real cases, the WOA and RDWOA have the shortest execution times; since the EGE-WOA introduces Lévy flights, its execution time is the longest.

Table 7 Experimental result data

5 Case studies of real-world applications

The purpose of this section is to verify the optimization performance of the six whale algorithms on constrained real engineering cases. The WOA, LWOA, IWOA, BWOA, RDWOA and EGE-WOA are evaluated on five real engineering applications: cantilever beam design [47], pressure vessel design [47], speed reducer design [47], three-bar truss design [47], and welded beam design [42] (Fig. 7).

Fig. 7
figure 7

Schematic of cantilever beam

5.1 Cantilever beam

$$\begin{gathered} \min f(x) = 0.0624 \times (x_{1} + x_{2} + x_{3} + x_{4} + x_{5} ), \hfill \\ {\rm S.T.}g = \frac{61}{{x_{1}^{3} }} + \frac{37}{{x_{2}^{3} }} + \frac{19}{{x_{3}^{3} }} + \frac{7}{{x_{4}^{3} }} + \frac{1}{{x_{5}^{3} }} - 1 \le 0, \hfill \\ \end{gathered}$$

where \(0.01 \le x_{1} ,x_{2} ,x_{3} ,x_{4} ,x_{5} \le 100.\)
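To hand such a constrained case to a WOA-style optimizer, a common device (not specified in the paper, so this is an assumption on our part) is a static penalty on the violated constraint:

```python
import numpy as np

def cantilever_penalized(x, penalty=1e6):
    """Cantilever-beam objective with a static penalty for the constraint g(x) <= 0."""
    x = np.asarray(x, dtype=float)
    f = 0.0624 * np.sum(x)
    g = 61/x[0]**3 + 37/x[1]**3 + 19/x[2]**3 + 7/x[3]**3 + 1/x[4]**3 - 1
    return float(f + penalty * max(g, 0.0) ** 2)

val = cantilever_penalized([100.0] * 5)          # deep inside the feasible region
```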

The abscissa of Fig. 8 is the number of iterations of each algorithm; the ordinate is the average of the values obtained at each iteration. It can be seen from Fig. 8 that, when the whale optimization algorithms optimize this constrained realistic engineering case, their convergence curves differ markedly from those obtained on the unconstrained optimization problems.

Fig. 8
figure 8

The convergence curves of different algorithms

The f(x) column in Table 7 records the optimal mean values of the six curves in Fig. 8. In Fig. 8 and Table 7, the optimal mean f(x) of the LWOA is the largest; those of the WOA and RDWOA are smaller than the LWOA's; that of the BWOA is smaller than those of the WOA and RDWOA; that of the IWOA is smaller than the BWOA's; and that of the EGE-WOA is the smallest. The convergence behavior of the EGE-WOA in Fig. 8 is the best (Fig. 9).

Fig. 9
figure 9

Schematic of pressure vessel. Ts: shell thickness; Th: spherical head thickness; R: radius of cylindrical shell; L: shell length

5.2 Pressure vessel design (PVD)

$$\begin{gathered} \min f(T_{s} ,T_{h} ,R,L) = 0.6224T_{s} RL + 1.7781T_{h} R^{2} + 3.1661T_{s}^{2} L + 19.84T_{h}^{2} L, \hfill \\ {\text{s.t}}.\left\{ \begin{gathered} g_{1} = - T_{s} + 0.0193R \le 0, \hfill \\ g_{2} = - T_{h} + 0.0095R \le 0, \hfill \\ g_{3} = - \pi R^{2} L - \frac{4}{3}R^{3} + 1296000 \le 0, \hfill \\ g_{4} = L - 240 \le 0, \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered}$$

where \(1.5 \times 0.0625 \le T_{s} ,T_{h} \le 99 \times 0.0625,{\text{and}}10 \le R,L \le 200\).

For this constrained practical engineering problem, comparing the mean iteration curves of the six whale optimization algorithms in Fig. 10 shows that the LWOA has the worst convergence behavior, the WOA converges better than the RDWOA, the BWOA converges slightly better than the WOA, and the EGE-WOA converges best of all.

Fig. 10
figure 10

The convergence curves of different algorithms

From the f(x) values of the six algorithms in Table 8, it can be seen that the f(x) of the LWOA is 15,331.3268, the largest, while the f(x) of the EGE-WOA is 5653.7587, the smallest. When optimizing this constrained real case, the optimization efficiency of the EGE-WOA is the best.

Table 8 Experimental result data

5.3 Speed reducer design (SRD)

The purpose of structural optimization is to minimize the total weight of the reducer (Fig. 11). The mathematical formula for this case is as follows:

$$\begin{gathered} \min f(b,m,z,l_{1} ,l_{2} ,d_{1} ,d_{2} ) = 0.7854bm^{2} (3.3333z^{2} + 14.9334z - 43.0934) - 1.508b(d_{1}^{2} + d_{2}^{2} ) + 7.477(d_{1}^{3} + d_{2}^{3} ) + 0.7854(l_{1} d_{1}^{2} + l_{2} d_{2}^{2} ), \hfill \\ {\text{s.t}}.\left\{ \begin{gathered} g_{1} = \frac{27}{{bm^{2} z}} - 1 \le 0, \hfill \\ g_{2} = \frac{397.5}{{bm^{2} z^{2} }} - 1 \le 0, \hfill \\ g_{3} = \frac{{1.93l_{1}^{3} }}{{mzd_{1}^{4} }} - 1 \le 0, \hfill \\ g_{4} = \frac{{1.93l_{2}^{3} }}{{mzd_{2}^{4} }} - 1 \le 0, \hfill \\ g_{5} = \frac{{\sqrt {(\frac{{745l_{1} }}{mz})^{2} + 16.9 \times 10^{6} } }}{{110d_{1}^{3} }} - 1 \le 0, \hfill \\ g_{6} = \frac{{\sqrt {(\frac{{745l_{2} }}{mz})^{2} + 157.5 \times 10^{6} } }}{{85d_{2}^{3} }} - 1 \le 0, \hfill \\ g_{7} = \frac{mz}{{40}} - 1 \le 0, \hfill \\ g_{8} = \frac{5m}{b} - 1 \le 0, \hfill \\ g_{9} = \frac{b}{12m} - 1 \le 0, \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered}$$
Fig. 11
figure 11

Speed reducer

where \(2.6 \le b \le 3.6,\quad 0.7 \le m \le 0.8,\quad 17 \le z \le 28,\quad 7.3 \le l_{1} ,l_{2} \le 8.3,\quad 2.9 \le d_{1} \le 3.9,\quad 5.0 \le d_{2} \le 5.5.\)

In Fig. 12, the convergence behaviors of the WOA and LWOA are the worst. The convergence curve of the BWOA is significantly better than that of the RDWOA, and the convergence behavior of the EGE-WOA is the best. From the f(x) in Table 9, the f(x) of the EGE-WOA is 2616.6264, the smallest, while the f(x) values of the WOA and LWOA are both 2695.7386, the largest. The comparison shows that, when optimizing the speed reducer design case, the exploration efficiency of the EGE-WOA is the best (Fig. 13).

Fig. 12
figure 12

The convergence curves of different algorithms

Table 9 Experimental result data
Fig. 13
figure 13

Schematic of three-bar truss

5.4 A three-bar truss design

$$\begin{gathered} \min f(A_{1} ,A_{2} ) = (2\sqrt 2 A_{1} + A_{2} )l, \hfill \\ {\text{s.t}}.\left\{ \begin{gathered} g_{1} = \frac{{\sqrt 2 A_{1} + A_{2} }}{{\sqrt 2 A_{1}^{2} + 2A_{1} A_{2} }}P - \sigma \le 0, \hfill \\ g_{2} = \frac{{A_{2} }}{{\sqrt 2 A_{1}^{2} + 2A_{1} A_{2} }}P - \sigma \le 0, \hfill \\ g_{3} = \frac{1}{{A_{1} + \sqrt 2 A_{2} }}P - \sigma \le 0, \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered}$$

with \(l = 100\,{\text{cm}},\;P = 2\,{\text{kN/cm}}^{2} ,\;{\text{and }}\sigma = 2\,{\text{kN/cm}}^{2} \;(0 \le A_{1} ,A_{2} \le 1).\)
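The three-bar truss model is small enough to transcribe directly. The sketch below follows the standard formulation of this benchmark; note that the reported optimum of about 2.8284 appears to correspond to the bracketed term 2√2·A1 + A2 without the factor l.

```python
import math

def three_bar_truss(A1, A2, l=100.0, P=2.0, sigma=2.0):
    """Objective and constraints of the three-bar truss (g_i <= 0 is feasible)."""
    f = (2 * math.sqrt(2) * A1 + A2) * l
    denom = math.sqrt(2) * A1**2 + 2 * A1 * A2
    g = [
        (math.sqrt(2) * A1 + A2) / denom * P - sigma,  # stress in bar 1
        A2 / denom * P - sigma,                        # stress in bar 2
        1.0 / (A1 + math.sqrt(2) * A2) * P - sigma,    # stress in bar 3
    ]
    return f, g
```

For instance, A1 = 0.8, A2 = 0.45 is a feasible interior point.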

The optimization of the three-bar truss design is a constrained optimization problem. It can be seen from Fig. 14 that, except for the LWOA and RDWOA, the convergence behaviors of the other four algorithms differ little. The convergence behavior of the EGE-WOA is slightly better than that of the WOA, IWOA and BWOA. It can be seen from Table 10 that the f(x) of the EGE-WOA is 2.8284, which is slightly smaller than that of the WOA, LWOA, IWOA, BWOA and RDWOA. The optimal average value of the LWOA is 2.8693, which is the largest (Fig. 15).

Fig. 14 The convergence curves of different algorithms

Table 10 Experimental result data
Fig. 15 Welded beam structure

5.5 Welded beam design

Consider: \(x = [h,l,t,b] = [x_{1} ,x_{2} ,x_{3} ,x_{4} ],\)

$$\min f(x) = 1.10471x_{1}^{2} x_{2} + 0.04811x_{3} x_{4} (L + x_{2} ),$$

subject to

$$\begin{gathered} g_{1} (x) = \tau_{\max } - \tau (x) \ge 0, \hfill \\ g_{2} (x) = \sigma_{\max } - \sigma (x) \ge 0, \hfill \\ g_{3} (x) = x_{4} - x_{1} \ge 0, \hfill \\ g_{4} (x) = 0.10471x_{1}^{2} + 0.04811x_{3} x_{4} (14 + x_{2} ) - 5 \le 0, \hfill \\ g_{5} (x) = 0.125 - x_{1} \le 0, \hfill \\ g_{6} (x) = \delta (x) - \delta_{\max } \le 0, \hfill \\ g_{7} (x) = P - P{}_{c}(x) \le 0, \hfill \\ 0.125 \le h \le 2,0.1 \le l,t \le 10,0.1 \le b \le 2, \hfill \\ \end{gathered}$$
$$\begin{gathered} \tau (x) = \sqrt {(\tau^{\prime} )^{2} + 2\tau^{\prime} \tau^{\prime \prime } \frac{{x_{2} }}{2R} + (\tau^{\prime \prime } )^{2} } ,\quad \tau^{\prime} = \frac{P}{{\sqrt 2 x_{1} x_{2} }},\quad \tau^{\prime \prime } = \frac{MR}{J},\quad M = P\bigg(L + \frac{{x_{2} }}{2}\bigg),\quad R = \sqrt {\bigg(\frac{{x_{2} }}{2}\bigg)^{2} + \bigg(\frac{{x_{1} + x_{3} }}{2}\bigg)^{2} } , \hfill \\ J = 2\bigg(\sqrt 2 x_{1} x_{2} \bigg(\frac{{x_{2}^{2} }}{12} + \bigg(\frac{{x_{1} + x_{3} }}{2}\bigg)^{2} \bigg)\bigg),\quad \sigma (x) = \frac{6PL}{{x_{3}^{2} x_{4} }},\quad \delta (x) = \frac{{4PL^{3} }}{{Ex_{3}^{3} x_{4} }},\quad P_{c} (x) = \frac{4.013E}{{L^{2} }}\sqrt {\frac{{x_{3}^{2} x_{4}^{6} }}{36}} \bigg(1 - \frac{{x_{3} }}{2L}\sqrt {\frac{E}{4G}} \bigg), \hfill \\ P = 6000\,{\text{lb}},\quad L = 14\,{\text{in}},\quad E = 30 \times 10^{6} \,{\text{psi}},\quad \tau_{\max } = 13600\,{\text{psi}},\quad \sigma_{\max } = 30000\,{\text{psi}},\quad \delta_{\max } = 0.25\,{\text{in}},\quad G = 12 \times 10^{6} \,{\text{psi}}. \hfill \\ \end{gathered}$$
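The intermediate quantities of the welded-beam model can be evaluated as follows (a sketch; x = [h, l, t, b] and the constants follow the values listed above):

```python
import math

# Problem constants (lb, in, psi), as listed in the text.
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
tau_max, sigma_max, delta_max = 13600.0, 30000.0, 0.25

def welded_beam(x):
    """Return fabrication cost and the quantities used in the constraints."""
    x1, x2, x3, x4 = x
    tau_p = P / (math.sqrt(2) * x1 * x2)               # tau' (primary shear)
    M = P * (L + x2 / 2)                               # bending moment
    R = math.sqrt((x2 / 2)**2 + ((x1 + x3) / 2)**2)
    J = 2 * (math.sqrt(2) * x1 * x2 * (x2**2 / 12 + ((x1 + x3) / 2)**2))
    tau_pp = M * R / J                                 # tau'' (torsional shear)
    tau = math.sqrt(tau_p**2 + 2 * tau_p * tau_pp * x2 / (2 * R) + tau_pp**2)
    sigma = 6 * P * L / (x3**2 * x4)                   # bending stress
    delta = 4 * P * L**3 / (E * x3**3 * x4)            # end deflection
    Pc = (4.013 * E / L**2) * math.sqrt(x3**2 * x4**6 / 36) \
         * (1 - x3 / (2 * L) * math.sqrt(E / (4 * G))) # buckling load
    cost = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (L + x2)
    return cost, tau, sigma, delta, Pc
```

An interior point such as x = (0.5, 5.0, 9.0, 0.5) satisfies the stress, deflection and buckling limits with margin.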

It can be seen from Fig. 16 that the convergence curve of the LWOA is the worst. The convergence curve of the EGE-WOA is slightly better than that of the WOA, IWOA and BWOA. It can be seen from Table 11 that the f(x) of the EGE-WOA is 1.8433, which is the smallest.

Fig. 16 The convergence curves of different algorithms

Table 11 Experimental result data

The five cases show that the LWOA has poor convergence behavior and poor exploration efficiency. This indicates that the Lévy flights introduced by the LWOA do not play a positive role: they do not resolve the algorithm's tendency to fall into local optima.
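For context, the Lévy-flight steps discussed here are commonly generated with Mantegna's algorithm. A minimal sketch (beta = 1.5 is a typical choice, not a value taken from the text):

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-stable step via Mantegna's algorithm (heavy-tailed jump length)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)   # scale of the numerator Gaussian
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

Most steps are small, but the heavy tail occasionally produces long jumps; whether those jumps help depends on how they are combined with the position-update rule, which is the point the comparison above makes.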

5.6 The simulation optimal design of section parameters of hydraulic support top beam

The structural parts of the hydraulic support mainly include the top beam, the cover beam, the front and rear connecting rods, and the base. They are all box-shaped multi-cavity structures welded from steel plates, and their weight accounts for more than 70% of the total weight of the support. This section carries out the simulation optimization design of the structural parameters of the MTZ7200-20/32 hydraulic support top beam. The paper optimizes the most dangerous section of the top beam, which is under the concentrated load at the middle end, shown as section D–D in Fig. 17. The simulation optimization aims at the lightest weight.

Fig. 17 The optimal design section of the MTZ7200 top beam

The simulation optimization problem of the top beam section parameters is a constrained minimization problem. The general form of its mathematical model is as follows:

The objective function: \(\min f(x).\)

The nonlinear constraints: \({\text{g}}_{i} ({\text{x}}) \le 0,i = 1,2,3, \cdots\).

Linear constraints: \(Ax \le B.\)

Here, A is the coefficient matrix of the linear constraints, x is the vector of design variables, and B is a column vector of bounds.
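A simple way to hand such a constrained model to a stochastic optimizer is an exterior penalty. A minimal sketch (the penalty weight mu is an assumption, not part of the model):

```python
import numpy as np

def penalized(f, gs, A=None, B=None, mu=1e6):
    """Wrap min f(x) s.t. g_i(x) <= 0 and A x <= B as an unconstrained objective."""
    def F(x):
        x = np.asarray(x, dtype=float)
        p = sum(max(0.0, g(x)) ** 2 for g in gs)       # nonlinear violations
        if A is not None:
            p += np.sum(np.maximum(0.0, A @ x - B) ** 2)  # linear violations
        return f(x) + mu * p
    return F
```

Feasible points are left unchanged, while violations are charged quadratically, so any of the compared whale-type optimizers can minimize F directly.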

5.6.1 The objective function and design variables

The external dimensions of the top beam of a hydraulic support are generally fixed in the overall design. Therefore, the lightest weight is taken as the ultimate goal of the simulation optimization design; that is, the goal is the minimum actual material area of the top beam section. According to Fig. 18, the objective function is as follows:

$$f(x) = 2[Ct_{1} + t_{4} (t_{2} + t_{3} )].$$
Fig. 18 The D–D sectional structure

In the top beam structure, the thickness of the upper and lower cover plates and the layout and size of the ribs should meet the strength and stiffness requirements of the top beam. The reasonable selection of these section parameters directly determines the weight, reliability and structural stress distribution of the top beam. Therefore, the section parameters of the top beam are selected as the design variables of its structural simulation optimization design.

Since the width of the top beam has been standardized, t1, t2, t3 and t4 are taken as the design variables:

$$x = [t_{1} ,t_{2} ,t_{3} ,t_{4} ]^{{\text{T}}} = [x_{1} ,x_{2} ,x_{3} ,x_{4} ]^{{\text{T}}} .$$

5.6.2 The constraint condition

The constraint conditions of a hydraulic support vary with the frame type, the external load condition and the basic shape of the section. In addition to the strength conditions, geometric constraints must also be met.

5.6.2.1 The strength condition
1. The bending strength condition

    The maximum bending stress at the calculated section is used for checking, and the bending strength condition is as follows:

    $$\frac{{\sigma_{{\text{s}}} }}{\sigma } \ge n_{{\text{s}}} ,$$
    $$\sigma = \frac{{3M(2t_{1} + t_{4} )}}{{t_{4}^{3} (t_{2} + t_{3} ) + Ct_{1}^{3} + 3Ct_{1} (t_{1} + t_{4} )^{2} }},$$

    where \(\sigma_{s}\) is the yield limit of the material, MPa; \(n_{s}\) is the allowable safety factor; \(\sigma\) is the maximum bending stress of the calculated section, MPa; and \(M\) is the maximum bending moment of the calculated section, N mm.

    Then, \(g_{1} (x) = n_{{\text{s}}} - \frac{{\sigma_{{\text{s}}} }}{\sigma (x)} \le 0.\)

2. The shear strength condition

    When the shear stress of this section is the maximum, the shear strength needs to be checked:

    $$\frac{[\tau ]}{\tau } \ge n_{\tau } ,$$
    $$\tau = \frac{{Q[t_{1} C\frac{{t_{1} + t_{4} }}{2} + \frac{{t_{4}^{2} }}{2}(t_{2} + t_{3} )]}}{{\left[ {\frac{{t_{4}^{3} (t_{2} + t_{3} )}}{3} + \frac{{Ct_{1}^{3} }}{3} + (t_{1} + t_{4} )^{2} t_{1} C} \right](t_{2} + t_{3} )}},$$

    where \(n_{\tau }\) is the allowable safety factor; \([\tau ]\) is the allowable shear stress, MPa; \(\tau\) is the maximum shear stress of the calculated section, MPa; and \(Q\) is the maximum shear force, N.

    Then, \(g_{2} (x) = n_{\tau } - \frac{[\tau ]}{{\tau (x)}} \le 0.\)

5.6.2.2 The geometric constraints
1. The limit of top beam thickness.

    The thicker the top beam is, the smaller the stress. However, considering the ventilation section of the support, gas emission, pedestrian passing and other factors, a limit thickness Tmax is usually given in the design:

    $$2t_{1} + t_{4} \le T_{\max } ,$$

    where Tmax is the ultimate thickness of top beam, mm.

2. The limitation of total web thickness.

    From the point of view of meeting the conditions of bending and shear strength, the thinner the web, the less material is used. However, considering that the top beam of the support should have a certain stiffness, a minimum thickness Cmin should be limited in the design:

    $$- 2(t_{2} + t_{3} ) \le - C_{\min } ,$$

    where Cmin is the lower bound of the total thickness of the web, mm.

3. The boundary conditions.

    The design variables are not only limited by the specifications of each plate, but also limited by the global or local stiffness and deformation, so their values cannot be too small:

    $$- t_{1} \le - 10, - t_{2} \le - 10, - t_{3} \le - 10, - t_{4} \le - 50.$$
5.6.2.3 The mathematical model

By substituting the known parameters \(M = 3041 \times 10^{6}\) N mm, \(C = 1430\) mm, \(n_{s} = n_{\tau } = 1.38\), \(\sigma_{s} = 330\) MPa, \([\tau ] = 165\) MPa, and \(Q = 3.566\) MN into the objective function and constraints, the mathematical model is as follows:

$$\min f({\text{x}}) = 2[1430x_{1} + x_{4} (x_{2} + x_{3} )],$$
$$g_{1} (x) = 1.38 - \frac{{330(x_{2} + x_{3} )x_{4}^{3} + 4.719 \times 10^{5} x_{1}^{3} + 1.416 \times 10^{6} x_{1} (x_{1} + x_{4} )^{2} }}{{9.123 \times 10^{9} (2x_{1} + x_{4} )}} \le 0,$$
$$g_{2} (x) = 1.38 - \frac{{165(x_{2} + x_{3} )\left[ {\frac{{x_{4}^{3} (x_{2} + x_{3} )}}{3} + \frac{{1430x_{1}^{3} }}{3} + 1430x_{1} (x_{1} + x_{4} )^{2} } \right]}}{{3.566 \times 10^{6} \times \left[ {715x_{1} (x_{1} + x_{4} ) + \frac{{x_{4}^{2} }}{2}(x_{2} + x_{3} )} \right]}} \le 0,$$
$$g_{3} (x) = 2x_{1} + x_{4} - T_{\max } \le 0,$$
$$g_{4} (x) = C_{\min } - 2(x_{2} + x_{3} ) \le 0,$$
$$- x_{1} \le - 10, - x_{2} \le - 10, - x_{3} \le - 10, - x_{4} \le - 50.$$
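The complete top-beam model can be transcribed directly. A sketch (it uses C = 1430 mm as given above, and reads the shear denominator as Q = 3.566 × 10^6 N, i.e., the stated 3.566 MN in consistent N–mm units; a design is feasible when every returned constraint value is ≤ 0):

```python
def top_beam(x, Tmax, Cmin):
    """Objective and all constraints of the top-beam section model."""
    x1, x2, x3, x4 = x
    f = 2 * (1430 * x1 + x4 * (x2 + x3))
    # Bending strength: g1 = n_s - sigma_s / sigma(x) <= 0
    g1 = 1.38 - (330 * (x2 + x3) * x4**3 + 4.719e5 * x1**3
                 + 1.416e6 * x1 * (x1 + x4)**2) / (9.123e9 * (2 * x1 + x4))
    # Shear strength: g2 = n_tau - [tau] / tau(x) <= 0
    g2 = 1.38 - (165 * (x2 + x3) * (x4**3 * (x2 + x3) / 3 + 1430 * x1**3 / 3
                 + 1430 * x1 * (x1 + x4)**2)) / (3.566e6 * (715 * x1 * (x1 + x4)
                 + x4**2 / 2 * (x2 + x3)))
    g3 = 2 * x1 + x4 - Tmax          # thickness limit
    g4 = Cmin - 2 * (x2 + x3)        # minimum web thickness
    bounds = [10 - x1, 10 - x2, 10 - x3, 50 - x4]
    return f, [g1, g2, g3, g4] + bounds
```

For example, x = (30, 50, 50, 300) is feasible for Tmax = 440 mm, Cmin = 160 mm, with sectional area f = 145,800 mm².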

The coefficient matrix A and column vector B of linear constraint are

$$A = \left[ {\begin{array}{*{20}c} 2 & 0 & 0 & 1 \\ 0 & { - 2} & { - 2} & 0 \\ { - 1} & 0 & 0 & 0 \\ 0 & { - 1} & 0 & 0 \\ 0 & 0 & { - 1} & 0 \\ 0 & 0 & 0 & { - 1} \\ \end{array} } \right],\quad B = \left[ \begin{gathered} T_{\max } \hfill \\ - C_{\min } \hfill \\ - 10 \hfill \\ - 10 \hfill \\ - 10 \hfill \\ - 50 \hfill \\ \end{gathered} \right].$$
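The linear part of the model (g3, g4 and the bounds) is equivalent to A x ≤ B with the matrix and vector above. A quick numerical check (the case values and test point are arbitrary examples):

```python
import numpy as np

Tmax, Cmin = 440.0, 160.0            # example case values
A = np.array([[ 2,  0,  0,  1],
              [ 0, -2, -2,  0],
              [-1,  0,  0,  0],
              [ 0, -1,  0,  0],
              [ 0,  0, -1,  0],
              [ 0,  0,  0, -1]], dtype=float)
B = np.array([Tmax, -Cmin, -10.0, -10.0, -10.0, -50.0])

x = np.array([30.0, 50.0, 50.0, 300.0])   # a trial design [t1, t2, t3, t4]
feasible = np.all(A @ x <= B)             # all linear constraints satisfied?
```

Row 1 of A reproduces the thickness limit 2·t1 + t4 ≤ Tmax, row 2 the web-thickness bound, and the remaining rows the lower bounds on the design variables.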

According to the different values of Tmax and Cmin, the minimization problem of the mathematical model is solved.

Considering the effect of different Tmax and Cmin values on the cross-sectional area, we choose eight different combinations of Tmax and Cmin for simulation optimization (Tables 12, 13).

Table 12 Experimental result data
Table 13 Experimental result data

The first case: when Tmax = 440 mm and Cmin = 80 mm. The optimization curves and data of the six whale optimization algorithms for the f(x) are as follows (Fig. 19).

Fig. 19 The convergence curves of different algorithms

The second case: when Tmax = 440 mm and Cmin = 120 mm. The optimization curves and data of the six whale optimization algorithms for the f(x) are as follows (Fig. 20).

Fig. 20 The convergence curves of different algorithms

The third case: when Tmax = 440 mm and Cmin = 160 mm. The optimization curves and data of the six whale optimization algorithms for the f(x) are as follows (Fig. 21).

Fig. 21 The convergence curves of different algorithms

Case: hydraulic support top beam

| Optimizer | t1 | t2 | t3 | t4 | f(x) |
|-----------|----------|----------|----------|----------|--------------|
| WOA | 24.40032 | 44.70096 | 44.90829 | 300.0924 | 104,895.2481 |
| LWOA | 46.67345 | 47.00782 | 82.80762 | 185.2508 | 147,086.8826 |
| IWOA | 20.15372 | 48.28946 | 32.12598 | 343.5569 | 97,294.6936 |
| BWOA | 22.49702 | 66.63666 | 14.20119 | 326.811 | 99,690.9320 |
| RDWOA | 29.26249 | 26.44685 | 73.99081 | 261.4931 | 114,215.2917 |
| EGE-WOA | 17.61107 | 27.77492 | 13.308 | 323.8953 | 97,100.2436 |

The fourth case: when Tmax = 440 mm and Cmin = 240 mm. The optimization curves and data of the six whale optimization algorithms for the f(x) are as follows (Fig. 22).

Fig. 22 The convergence curves of different algorithms

Case: hydraulic support top beam

| Optimizer | t1 | t2 | t3 | t4 | f(x) |
|-----------|----------|----------|----------|----------|--------------|
| WOA | 30.88702 | 90.23891 | 44.67687 | 241.2454 | 128,540.2534 |
| LWOA | 44.85422 | 63.15818 | 129.6596 | 173.4766 | 163,692.6104 |
| IWOA | 20.35711 | 73.59813 | 47.5478 | 317.4275 | 119,354.0369 |
| BWOA | 20.27847 | 50.32282 | 70.06599 | 318.5292 | 119,014.8264 |
| RDWOA | 30.91130 | 27.1493 | 110.7774 | 255.1978 | 132,821.1814 |
| EGE-WOA | 17.59403 | 10.08118 | 10.4889 | 266.9102 | 118,791.0842 |

The fifth case: when Tmax = 400 mm and Cmin = 160 mm. The optimization curves and data of the six whale optimization algorithms for the f(x) are as follows (Fig. 23).

Fig. 23 The convergence curves of different algorithms

Case: hydraulic support top beam

| Optimizer | t1 | t2 | t3 | t4 | f(x) |
|-----------|----------|----------|----------|----------|--------------|
| WOA | 30.26314 | 39.09114 | 49.06834 | 271.2596 | 111,059.3476 |
| LWOA | 36.90048 | 79.22142 | 49.25809 | 216.5744 | 134,363.2622 |
| IWOA | 22.72781 | 51.88702 | 28.51705 | 321.9593 | 99,184.5120 |
| BWOA | 23.78628 | 32.7138 | 48.68922 | 307.8309 | 99,727.9174 |
| RDWOA | 26.5803 | 40.33903 | 64.66502 | 280.8413 | 114,842.3755 |
| EGE-WOA | 18.97744 | 25.59272 | 15.29235 | 317.9446 | 97,152.6332 |

The sixth case: when Tmax = 360 mm and Cmin = 160 mm. The optimization curves and data of the six whale optimization algorithms for the f(x) are as follows (Fig. 24).

Fig. 24 The convergence curves of different algorithms

Case: hydraulic support top beam

| Optimizer | t1 | t2 | t3 | t4 | f(x) |
|-----------|----------|----------|----------|----------|--------------|
| WOA | 37.65714 | 44.42424 | 48.54884 | 214.1215 | 117,723.1160 |
| LWOA | 38.50779 | 161.3716 | 103.8331 | 195.9281 | 184,941.4289 |
| IWOA | 24.86709 | 33.70215 | 47.81462 | 299.0856 | 100,649.1497 |
| BWOA | 28.68981 | 47.134 | 34.06963 | 268.7986 | 103,480.5820 |
| RDWOA | 31.32396 | 35.98665 | 55.68149 | 251.4981 | 110,704.1973 |
| EGE-WOA | 23.37292 | 18.00547 | 12.62175 | 297.5980 | 99,315.4843 |

The seventh case: when Tmax = 320 mm and Cmin = 160 mm. The optimization curves and data of the six whale optimization algorithms for the f(x) are as follows (Fig. 25).

Fig. 25 The convergence curves of different algorithms

Case: hydraulic support top beam

| Optimizer | t1 | t2 | t3 | t4 | f(x) |
|-----------|----------|----------|----------|----------|--------------|
| WOA | 33.83567 | 73.77938 | 33.84293 | 228.454 | 119,554.4854 |
| LWOA | 53.5757 | 120.2834 | 170.555 | 142.1108 | 191,718.6022 |
| IWOA | 32.56013 | 31.65445 | 49.404 | 242.3646 | 107,234.3334 |
| BWOA | 32.93235 | 43.19313 | 38.09523 | 239.6881 | 107,679.7659 |
| RDWOA | 51.21735 | 104.5579 | 51.75729 | 168.4327 | 153,268.0284 |
| EGE-WOA | 29.75091 | 37.49873 | 28.57456 | 255.7566 | 104,284.7457 |

The eighth case: when Tmax = 300 mm and Cmin = 160 mm. The optimization curves and data of the six whale optimization algorithms for the f(x) are as follows (Fig. 26).

Fig. 26 The convergence curves of different algorithms

Case: hydraulic support top beam

| Optimizer | t1 | t2 | t3 | t4 | f(x) |
|-----------|----------|----------|----------|----------|--------------|
| WOA | 33.59803 | 33.59092 | 120.5783 | 219.466 | 143,773.5264 |
| LWOA | 47.45665 | 171.0472 | 116.2638 | 156.2057 | 202,026.0851 |
| IWOA | 37.12945 | 47.78879 | 38.6945 | 215.4439 | 114,859.8591 |
| BWOA | 39.69094 | 56.74338 | 24.04799 | 204.0625 | 115,766.3774 |
| RDWOA | 60.89707 | 101.6072 | 69.36508 | 128.3987 | 166,081.0218 |
| EGE-WOA | 34.44039 | 10.00754 | 10.40055 | 227.2008 | 109,277.5338 |

From the convergence curves and optimization results of the above eight cases, the EGE-WOA has the best convergence behavior and the highest optimization efficiency.

6 Conclusions

In global continuous optimization, the WOA suffers from poor exploration efficiency and weak convergence behavior. To improve the global exploration efficiency of the WOA, the IWOA and BWOA were proposed. Although these two algorithms improve the global exploration efficiency and convergence behavior of the WOA to a certain extent, they do not effectively avoid the risk of falling into a local optimum. The exploration efficiency of the RDWOA on unconstrained continuous optimization problems is significantly higher than that of the IWOA, BWOA and WOA. However, the experimental data in Sect. 5 show that, compared with the WOA, IWOA and BWOA, the RDWOA has very poor exploration efficiency on constrained continuous optimization problems, and its convergence behavior is also significantly worse than that of the other variants.

To enhance the exploration efficiency and convergence behavior of the WOA in unconstrained and constrained continuous optimization problems, we propose a novel whale optimization algorithm (EGE-WOA).

For the unconstrained global continuous optimization problem, it can be seen from the experimental results of 33 benchmark functions that compared with the WOA, IWOA, BWOA and RDWOA, the global exploration efficiency and convergence behavior of the EGE-WOA have been significantly improved. The EGE-WOA can effectively avoid the risk of the algorithm falling into a local optimum.

For the constrained global continuous optimization problem, it can be seen from the experimental results of five real engineering application cases that compared with the WOA, IWOA, BWOA and RDWOA, the EGE-WOA still has a strong global exploration efficiency and a better convergence curve.

In summary, the EGE-WOA can strengthen the global exploration efficiency and convergence behavior of the WOA.

In the future, we will research and apply the whale optimization algorithm in many areas, for example, multi-objective optimization. In practical applications, we will use the WOA to provide the best parameters for image segmentation and machine-learning models.