
1 Introduction

The Whale Optimization Algorithm (WOA) is a population-based intelligent optimization algorithm proposed by Mirjalili et al. in 2016 [1]. The algorithm uses mathematical formulas to simulate the predation behaviour of whales. Compared with traditional meta-heuristic optimization algorithms, the whale optimization algorithm has a simple principle, few parameter settings, and strong optimization ability. At the same time, it also has defects such as falling into local optima, slow convergence speed and low convergence accuracy [2, 3]. Because of this combination of advantages and disadvantages, its applicability is limited. Therefore, this paper proposes an improved whale optimization algorithm by multi-mechanism fusion to address the problems of the traditional whale optimization algorithm.

To address the defects of the whale optimization algorithm, namely that it easily falls into local optima and has low convergence accuracy, scholars have proposed many improvement strategies. For example, Kaur et al. introduced chaos theory into the WOA optimization process to adjust the parameters of the whale optimization algorithm and enhance its exploration and exploitation capacity [4]. Luo Jun et al. introduced a new position update strategy in the exploitation and exploration phases to avoid premature convergence of the whale optimization algorithm [5]. Saha et al. adjusted the control parameters and used correction factors to reduce the step size, improving the exploitation and exploration capability of the whale optimization algorithm [6].

In this paper, an improved whale optimization algorithm by multi-mechanism fusion is proposed. First, a nonlinear convergence parameter a is introduced so that the improved whale algorithm can adapt to nonlinear problems. Second, referring to the Harris hawks optimization algorithm [7], the shrinking encircling mechanism of the whale optimization algorithm is improved to speed up each whale's search for the optimal position and, as far as possible, avoid wasting computational resources on individuals exploring useless positions. Finally, at the end of each iteration, the fitness of the whale position is updated using the Gaussian detection mechanism [8], so that the whale algorithm explores better positions and converges faster.

2 Basic WOA Algorithm

The whale optimization algorithm simulates the predation behaviour of whales. According to its characteristics, the predation process is divided into three types of position updating: shrinking encircling, spiral position updating and random searching [9].

2.1 Shrinking Encircling Mechanism

The whale senses the area where the prey is located and surrounds it. Since the location of the optimum in the search space is not known in advance, the WOA assumes that the current best candidate solution is the target prey or close to the optimal solution. This best candidate defines the best search agent, and the other search agents try to move their positions towards it. The hunting behaviour of shrinking encircling is described by the following formulas:

$$\begin{aligned} {X(t + 1)} = {{X^*}(t)} - A \cdot {{D_1}} \end{aligned}$$
(1)
$$\begin{aligned} {{D_1}} = |C \cdot {{X^*}(t)} - {X(t)} | \end{aligned}$$
(2)

where t represents the current iteration number, A and C are coefficient vectors, X(t) is the position at the current moment, \({X(t+1)}\) is the position at the next moment, \({D_1}\) is the absolute value of the difference between C times the prey position and the current whale position, and \({{X^*}(t)}\) is the position vector of the best solution found so far. If a better solution appears in an iteration, i.e. the fitness value of that position is less than the fitness value of \({{X^*}(t)}\), then that position vector becomes the new \({{X^*}}\) in the next iteration.
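As an illustration, the shrinking encircling step of formulas (1) and (2) can be sketched in NumPy as follows. The coefficient definitions A = 2a·r1 − a and C = 2·r2 (with r1, r2 random in [0, 1]) follow the original WOA paper and are an assumption here, since this section does not restate them; the function name is illustrative only.

```python
import numpy as np

def shrinking_encircling(x, x_best, a, rng):
    """One shrinking-encircling update, formulas (1) and (2).

    x      -- current whale position X(t)
    x_best -- best position found so far X*(t)
    a      -- convergence factor, decreasing from 2 to 0 over the run
    rng    -- NumPy random generator
    """
    r1, r2 = rng.random(x.size), rng.random(x.size)
    A = 2.0 * a * r1 - a           # coefficient vector A (standard WOA definition)
    C = 2.0 * r2                   # coefficient vector C (standard WOA definition)
    D1 = np.abs(C * x_best - x)    # formula (2)
    return x_best - A * D1         # formula (1)
```

Note that when a (and hence A) reaches 0, the update collapses onto \(X^*(t)\), which matches the pure-exploitation end of the schedule.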

2.2 Location Update

Whales explore and update their positions in two further ways: spiral updating and random searching. To simulate a whale choosing between them with equal probability at each step, a random number p is drawn from the range [0, 1].

2.3 Spiral Updating Position

When \(p \ge 0.5\), the spiral updating method is selected; it updates the whale's next position by simulating the spiral path the whale follows around its prey. The calculation formula is as follows:

$$\begin{aligned} X(t + 1) = {D_2} \cdot {e^{bl}} \cdot \cos (2\pi l) + {X^*}(t) \end{aligned}$$
(3)
$$\begin{aligned} {D_2} = |{X^*}(t) - X(t)| \end{aligned}$$
(4)

where \(D_2\) represents the distance between the prey and the whale, b is a parameter controlling the shape of the spiral (set to 1 in this paper), and l takes values in the range [–2, 1].
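For concreteness, the spiral update of formulas (3) and (4) can be sketched as below; drawing l uniformly from [−2, 1] follows the range stated above, and the function name is illustrative.

```python
import numpy as np

def spiral_update(x, x_best, rng, b=1.0):
    """Spiral position update, formulas (3) and (4); used when p >= 0.5."""
    D2 = np.abs(x_best - x)             # formula (4): element-wise distance to the prey
    l = rng.uniform(-2.0, 1.0)          # l drawn from [-2, 1] as in the text
    return D2 * np.exp(b * l) * np.cos(2 * np.pi * l) + x_best  # formula (3)
```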

2.4 Random Searching

When \(p < 0.5\), the random-searching branch is selected, which itself splits into two cases. When \(|A| < 1\), the whale is moving towards the prey, so the shrinking encircling formula (1) is used to simulate its behaviour and encircle the prey.

When \(|A| \ge 1\), the whale is moving away from the prey's location; it then abandons its previous direction and searches randomly for a new position in another direction, which helps avoid falling into local optima:

Fig. 1. The flow chart of the improved whale optimization algorithm by multi-mechanism fusion

$$\begin{aligned} {{D}_{\mathrm{{rand}}}} = |C \cdot {X_{\mathrm{{rand}}}}(t) - X(t)| \end{aligned}$$
(5)
$$\begin{aligned} X(t + 1) = {X_{\mathrm{{rand}}}}(t) - A \cdot {{D}_{\mathrm{{rand}}}} \end{aligned}$$
(6)

where \({X_{\mathrm{{rand}}}}\) denotes a randomly chosen whale position vector, and \({D_{\mathrm{{rand}}}}\) denotes the absolute value of the difference between C times \({X_{\mathrm{{rand}}}}\) and X(t).
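The random-searching case of formulas (5) and (6) can be sketched as follows; as before, the definitions A = 2a·r1 − a and C = 2·r2 come from the original WOA paper and are assumed here.

```python
import numpy as np

def random_search(x, population, a, rng):
    """Random-searching update, formulas (5) and (6); used when p < 0.5 and |A| >= 1."""
    r1, r2 = rng.random(x.size), rng.random(x.size)
    A = 2.0 * a * r1 - a
    C = 2.0 * r2
    x_rand = population[rng.integers(len(population))]  # randomly chosen whale X_rand
    D_rand = np.abs(C * x_rand - x)    # formula (5)
    return x_rand - A * D_rand         # formula (6)
```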

3 Improved Whale Optimization Algorithm

In the basic whale optimization algorithm, the whale position is updated by randomly selecting one of three updating mechanisms, so the most effective updating method is not necessarily chosen. Moreover, during the search, the leader position \({{X^*}(t)}\) may remain unchanged over many iterations, which ends the convergence process early [10,11,12]; when solving an optimization problem the algorithm may then converge quickly to a local optimum, and the quality of the solution decreases. To address these problems of the traditional whale optimization algorithm, this paper proposes an improved whale optimization algorithm by multi-mechanism fusion.

Firstly, a new nonlinear parameter a is proposed to make the whale optimization algorithm adapt to complex nonlinear problems and accelerate its convergence. Secondly, the soft besiege mechanism of the Harris hawks optimization algorithm is introduced to accelerate the hunting speed of the whales. Finally, at the end of each hunting iteration, a position control mechanism using Gaussian detection is added to increase the optimization accuracy of the algorithm. The flow chart of the improved whale optimization algorithm by multi-mechanism fusion (IWOA) is shown in Fig. 1.

3.1 Nonlinear Parameter

For swarm intelligence optimization algorithms, exploration and exploitation capabilities are very important to optimization performance. In WOA, both shrinking encircling and random searching depend on the value of a, so selecting an appropriate convergence factor a to coordinate the exploration and exploitation abilities of WOA is a research problem worth further investigation. Exploration means that the population surveys a wider search area to avoid falling into local optima [13, 14]. Exploitation mainly uses the information already available to the population to conduct a local search in certain neighbourhoods of the solution space, and it has a decisive influence on the convergence speed of the algorithm. A large convergence factor a gives better global search ability and helps the algorithm avoid local optima, while a small convergence factor a gives stronger local search ability, which accelerates convergence. However, the convergence factor a in the whale optimization algorithm decreases linearly from 2 to 0 with the number of iterations, which cannot fully reflect the exploration and exploitation process of WOA.

In this paper, a nonlinear decreasing convergence factor a with rapid change in the early stage and relatively slow change in the later stage is designed to balance the exploration and exploitation of WOA. The calculation formula is as follows:

$$\begin{aligned} {a} = 2 \cdot (1 - \sqrt{\frac{t}{{{T_{max}}}}} ) \end{aligned}$$
(7)

where \({T_{max}}\) is the maximum number of iterations and t is the current iteration number. The parameter A is controlled by the coefficient a, so changing a alters both the random searching mechanism and the shrinking encircling mechanism.
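The two schedules can be compared directly. The sketch below implements formula (7) alongside the linear schedule of the basic WOA (the linear form 2·(1 − t/T_max) is the standard one from the original paper and is assumed here); the square-root schedule drops faster early and more slowly late, as described above.

```python
import numpy as np

def convergence_factor(t, t_max, nonlinear=True):
    """Convergence factor a: nonlinear per formula (7), or the basic WOA linear schedule."""
    if nonlinear:
        return 2.0 * (1.0 - np.sqrt(t / t_max))   # formula (7)
    return 2.0 * (1.0 - t / t_max)                # linear schedule of basic WOA (assumed)

# e.g. halfway through a run, the nonlinear a = 2*(1 - sqrt(0.5)) ~ 0.586
# is already well below the linear a = 1.0, favouring exploitation earlier.
```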

3.2 Harris Hawks Optimization Algorithm

The Harris hawks optimization algorithm uses mathematical formulas to simulate the predatory movements of Harris hawks. It vividly models their besiege-style predation mechanism, which makes the algorithm extremely powerful in global search.

In the traditional whale optimization algorithm, the search for the optimal position is a random exploration by individual whales. The lack of communication between individuals and the group causes some individuals to carry out several useless explorations far from the prey. Therefore, we refer to the soft besiege strategy of the Harris hawks optimization algorithm and improve the position update of the whale optimization algorithm as follows:

$$\begin{aligned} X(t+1)=\left\{ \begin{array}{ll}Y &{} f(Y)<f(X(t)) \\ Z &{} f(Z)<f(X(t))\end{array}\right. \end{aligned}$$
(8)
$$\begin{aligned} Y={X^*}(t) - A \cdot {D_1} \end{aligned}$$
(9)
$$\begin{aligned} Z=Y+S * L F(D) \end{aligned}$$
(10)

where S is a 1×D random vector drawn from the uniform distribution, and f(·) is the fitness function, i.e. a position is substituted into the fitness function to calculate its fitness value. LF(D) is a D-dimensional random step vector generated by the Lévy flight, as given in formulas (11) and (12).

$$\begin{aligned} LF(D)=0.01 \times \frac{u \times \sigma }{|v|^{\frac{1}{\beta }}} \end{aligned}$$
(11)
$$\begin{aligned} \sigma =\left( \frac{\varGamma (1+\beta ) \times \sin \left( \frac{\pi \beta }{2}\right) }{\varGamma \left( \frac{1+\beta }{2}\right) \times \beta \times 2^{\left( \frac{\beta -1}{2}\right) }}\right) ^{\frac{1}{\beta }} \end{aligned}$$
(12)

where u and v are random values in (0, 1), and \(\beta \) is set to 1.5.
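A minimal sketch of the soft-besiege step of formulas (8)-(12) follows. The function names are illustrative; u, v and S are drawn from (0, 1) as stated in the text, A and D1 are taken as given from the shrinking-encircling computation, and returning X(t) unchanged when neither candidate improves is an assumption, since formula (8) does not specify that case.

```python
import math
import numpy as np

def levy_flight(dim, rng, beta=1.5):
    """Levy-flight step vector LF(D), formulas (11) and (12)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(dim)   # u drawn from (0, 1) per the text
    v = rng.random(dim)   # v drawn from (0, 1) per the text
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)   # formula (11)

def soft_besiege_update(x, x_best, A, D1, fitness, rng):
    """Greedy soft-besiege step borrowed from HHO, formulas (8)-(10)."""
    Y = x_best - A * D1                    # formula (9)
    S = rng.random(x.size)                 # random vector S on (0, 1)
    Z = Y + S * levy_flight(x.size, rng)   # formula (10)
    # formula (8): accept a candidate only if it improves on the current position
    for cand in (Y, Z):
        if fitness(cand) < fitness(x):
            return cand
    return x                               # otherwise keep X(t) (assumption)
```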

3.3 Gaussian Detection Mechanism

This section applies a Gaussian mutation to the current position and compares the fitness of the mutated position with the fitness before detection, selecting the better position. The main purpose is to improve the algorithm's ability to jump out of local optima and enhance its optimization ability.

The formula of Gaussian detection mechanism is as follows:

$$\begin{aligned} X(N)=X(t)+X(t) * N(0,1) \end{aligned}$$
(13)
$$\begin{aligned} X(t+1)=\left\{ \begin{array}{cc}X(N) &{} f(X(N))<f(X(t)) \\ X(t) &{} f(X(N))\ge f(X(t))\end{array}\right. \end{aligned}$$
(14)

where N(0, 1) generates a random number drawn from the standard Gaussian distribution with mean 0 and standard deviation 1, and X(N) is the position vector generated by the Gaussian mutation.
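The mechanism of formulas (13) and (14) reduces to a mutate-then-greedy-select step, sketched below (the function name is illustrative):

```python
import numpy as np

def gaussian_detection(x, fitness, rng):
    """Gaussian detection step, formulas (13) and (14): mutate the current
    position and keep the mutant only if its fitness improves."""
    x_n = x + x * rng.standard_normal(x.size)   # formula (13): X(N) = X(t) + X(t)*N(0,1)
    return x_n if fitness(x_n) < fitness(x) else x   # formula (14)
```

Because the mutant is accepted only on strict improvement, the step can never worsen the incumbent position, which is what lets it sharpen accuracy at the end of each iteration.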

4 Experiment and Analysis

In this paper, eight variable-dimension benchmark functions are selected to test and evaluate the improved algorithm. Each benchmark function has a known theoretical optimal value, which is the extremum of that test function. The benchmark functions are shown in Table 1.

In this paper, the basic Whale Optimization Algorithm (WOA), the Gray Wolf Optimization Algorithm (GWO) [15], the Harris Hawks Optimization Algorithm (HHO), and two other improved whale optimization algorithms, WOABAT [16] and EWOA [17], are compared with the improved whale optimization algorithm by multi-mechanism fusion on the variable-dimension benchmark functions F1-F8.

Table 1. Benchmark function

In order to ensure the fairness of the comparison experiments, the population size and the number of iterations are set to the same values for all algorithms: the population is set to 30 and the number of iterations to 500. The convergence performance of the different optimization algorithms in different dimensions is analyzed by testing the multidimensional benchmark functions. To avoid randomness and ensure the accuracy of the experiments, every optimization algorithm is run 30 times independently. Since the mean reflects the optimization accuracy of an algorithm and the standard deviation reflects its robustness and stability, the average convergence accuracy and stability of the algorithms are analyzed by comparing the mean and standard deviation of the optimal fitness over the 30 independent runs [18, 19]. First, the mean results of the IWOA algorithm in different dimensions are analyzed with the Sign test to determine whether they are better than, equal to, or worse than those of the comparison algorithms. Second, the Friedman test is performed to compare the performance of the optimization algorithms by averaging their ranks over the eight benchmark functions.

Table 2. 30 dimension optimization results comparison
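As an illustration of how Friedman mean ranks of this kind can be computed, the sketch below ranks each algorithm within each benchmark function (1 = best, lower fitness wins) and averages the ranks per algorithm. The score matrix is made-up placeholder data, not the paper's results, and ties are broken by column order rather than averaged.

```python
import numpy as np

def friedman_mean_ranks(results):
    """Mean Friedman ranks of k algorithms over n benchmark functions.

    results -- (n_functions, n_algorithms) array of mean fitness values,
               lower is better. Returns the average rank per algorithm;
               the best algorithm gets the lowest mean rank.
    """
    # rank algorithms within each row (1 = best); ties broken by column order
    ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1.0
    return ranks.mean(axis=0)

# hypothetical fitness values for 3 algorithms on 4 functions (NOT the paper's data)
scores = np.array([[1e-9, 1e-3, 1e-1],
                   [2e-8, 5e-4, 3e-2],
                   [0.0,  1e-2, 1e-4],
                   [1e-7, 1e-5, 1e-2]])
```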

With the same dimensionality, population size and number of iterations, the WOA, GWO, HHO, WOABAT, EWOA and IWOA algorithms were compared in 30 and 100 dimensions on the variable-dimension benchmark functions F1-F8.

Figure 2 shows the convergence curves of the three basic optimization algorithms WOA, GWO, HHO and the improved whale optimization algorithm IWOA in 30 and 100 dimensions. Figure 3 shows the convergence curves of the three different improved whale optimization algorithms WOABAT, EWOA, IWOA and the basic whale optimization algorithm WOA in 30 and 100 dimensions, where the horizontal axis is the number of iterations and the vertical axis is the optimal fitness value. It is clear from the figures that the convergence characteristics of the selected optimization algorithms do not change significantly across dimensions, and that the IWOA algorithm shows excellent convergence accuracy and convergence speed.

Table 3. 100 dimension optimization results comparison
Fig. 2. Comparison of convergence curves for the base algorithms

Table 2 and Table 3 show the mean and standard deviation of the optimal fitness values of the six algorithms, each run independently 30 times, in 30 and 100 dimensions. According to the Friedman test results in Table 2 and Table 3, the mean results of IWOA are the best in every dimension, and its standard deviations are excellent, which shows that the IWOA algorithm has the best overall convergence accuracy and stability in 30 and 100 dimensions. Across the eight test functions in the different dimensions, the mean result of IWOA is worse than that of HHO only on the 100-dimensional F3 function; in all other cases it is better than or equal to the other five algorithms. Summarizing the results of the above tests, the IWOA algorithm is superior to, and more stable than, the WOA, GWO, HHO, WOABAT and EWOA algorithms on the 30-dimensional and 100-dimensional F1-F8 test functions.

5 Design Problems of Tension Spring

The tension spring design problem [20] is a classic engineering design optimization problem with four design variables: inner radius R, cylindrical length L, vessel thickness Ts and head thickness Th. The goal is to minimize the total cost while satisfying the production requirements, which is equivalent to minimizing an objective function under constraints. Denoting the four design variables by \(x_1\), \(x_2\), \(x_3\) and \(x_4\) and the total cost by f(x), the mathematical model is given in formulas (15) and (16).

Objective function:

$$\begin{aligned} \min \mathrm{{f}}(\mathrm{{x}}) = 0.6224{\mathrm{{x}}_1}{\mathrm{{x}}_3}{\mathrm{{x}}_4} + 1.7781{\mathrm{{x}}_2}\mathrm{{x}}_3^2 + 3.1661\mathrm{{x}}_1^2{\mathrm{{x}}_4} + 19.84\mathrm{{x}}_1^2{\mathrm{{x}}_3} \end{aligned}$$
(15)
Fig. 3. Comparison of convergence curves for improved algorithms

Constraints:

$$\begin{aligned} \left\{ {\begin{array}{*{20}{c}} {{\mathrm{{g}}_1}(\mathrm{{x}}) = - {\mathrm{{x}}_1} + 0.0193{\mathrm{{x}}_3} \le 0}\\ {{\mathrm{{g}}_2}(\mathrm{{x}}) = - {\mathrm{{x}}_2} + 0.00954{\mathrm{{x}}_3} \le 0}\\ {{\mathrm{{g}}_3}(\mathrm{{x}}) = - \pi \mathrm{{x}}_3^2{\mathrm{{x}}_4} - \frac{4}{3}\pi \mathrm{{x}}_3^2 + 1296000 \le 0}\\ {{\mathrm{{g}}_4}(\mathrm{{x}}) = {\mathrm{{x}}_4} - 240 \le 0} \end{array}} \right. \end{aligned}$$
(16)

where, the value range of \(x_1\) and \(x_2\) is [0, 99], and the value range of \(x_3\) and \(x_4\) is [10, 200].
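The model of formulas (15) and (16) can be transcribed directly, as in the sketch below. The paper does not state how IWOA handles the constraints, so the static-penalty scheme and the penalty weight here are assumptions, used only to show one common way of folding the constraints into a single fitness value for an unconstrained optimizer.

```python
import numpy as np

def cost(x):
    """Objective function f(x), formula (15)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def constraint_violations(x):
    """Constraint values g1..g4, formula (16); the point is feasible when all <= 0."""
    x1, x2, x3, x4 = x
    return np.array([
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -np.pi * x3 ** 2 * x4 - (4.0 / 3.0) * np.pi * x3 ** 2 + 1296000.0,
        x4 - 240.0,
    ])

def penalized_cost(x, penalty=1e6):
    """Static-penalty fitness (assumed scheme, hypothetical penalty weight)."""
    g = constraint_violations(x)
    return cost(x) + penalty * np.sum(np.maximum(g, 0.0) ** 2)
```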

The optimization algorithms discussed in this paper were applied to the tension spring design problem. All algorithms were run independently 30 times, and the best value, mean and standard deviation of the final results are compared in Table 4.

Table 4. The Optimization Results of Tension Spring

Table 4 shows that the improved whale optimization algorithm by multi-mechanism fusion proposed in this paper achieves the best optimal value, mean and standard deviation on the tension spring design problem, demonstrating the effectiveness and stability of IWOA in engineering applications.

6 Conclusion

Aiming at the performance deficiencies of the traditional whale optimization algorithm, this paper proposes an improved whale optimization algorithm by multi-mechanism fusion, which introduces the Harris hawks optimization algorithm and a Gaussian detection mechanism on top of an improved nonlinear parameter. The IWOA algorithm is analyzed with the Friedman test, the Sign test and the tension spring design problem. Compared with the Whale Optimization Algorithm (WOA), the Gray Wolf Optimization Algorithm (GWO), the Harris Hawks Optimization Algorithm (HHO), and the improved whale optimization algorithms WOABAT and EWOA, IWOA is shown to have excellent stability and convergence accuracy on the eight benchmark functions. The experimental results show that IWOA optimizes better than the original whale optimization algorithm and is more stable while maintaining convergence accuracy and speed, reflecting the effectiveness of the improved algorithm.