1 Introduction

Optimization is the process of searching for the best solution to a problem, i.e., finding the maximum or minimum value of an objective function. Optimization problems arise widely in engineering design, medicine, scientific research, economic management and other fields [34], so methods that handle them effectively are valuable across all of these domains. However, traditional mathematical optimization methods, such as quasi-Newton methods, conjugate gradient, steepest descent and sequential quadratic programming, are complex and highly restricted, so they cannot effectively solve the many optimization problems that are complex or non-differentiable [42, 49, 64].

Meta-heuristic algorithms are able to provide useful and elegant solutions to such optimization problems [16] owing to merits such as being free of problem-specific restrictions and requiring no gradient information. Meta-heuristic algorithms can be broadly divided into evolution-based, swarm intelligence-based and physics-based methods. Evolution-based algorithms are inspired by the theory of biological evolution: starting from initial solutions, they continuously obtain higher-quality solutions through evolution from generation to generation. The most famous evolution-based meta-heuristic is the genetic algorithm (GA) [31], which simulates Darwin's model of natural selection and the genetic mechanism, evolving a new generation of better individuals through parental chromosome selection, crossover and mutation. Other evolution-based algorithms include differential evolution (DE) [57], biogeography-based optimization (BBO) [56] and so on. In recent years, many new evolution-based meta-heuristic optimization algorithms have been proposed. [28] proposed an evolutionary algorithm based on ring theory (RTEA) and applied it to combinatorial optimization problems. [27] proposed an optimization algorithm based on group theory (GTOA), which was applied to a variety of knapsack problems. [62] proposed an improved GTOA and a new operator to enhance the interpretability of group theory on the discounted 0–1 knapsack problem. [29] proposed Taper-shaped transfer functions and, on this basis, a binary differential evolution algorithm based on them (T-NBDE) to solve the knapsack problem with a single continuous variable and the uncapacitated facility location problem. Physics-based algorithms simulate physical phenomena in nature, such as the simulated annealing algorithm (SA) [60], which simulates the annealing of objects, and the gravitational search algorithm (GSA) [52], which is based on universal gravitation and Newton's second law. Swarm intelligence-based algorithms simulate the collective behavior of groups in nature and seek optimal solutions through the wisdom of the group; examples include the particle swarm optimizer (PSO) [37], firefly algorithm (FA) [67], butterfly optimization algorithm (BOA) [3], bat algorithm (BA) [9], shuffled frog-leaping algorithm (SFLA) [41] and artificial bee colony (ABC) algorithm [24]. These swarm intelligence optimization algorithms have been applied to various practical optimization problems. For example, [69] proposed a distributed fault estimation algorithm based on a hybrid improved biogeography-based optimization (HIBBO), which introduces differential evolution into standard BBO to determine the moments when intermittent faults occur and disappear in discrete components. [65] proposed an adaptive competitive swarm optimization (ACSO) for intermittent fault estimation, which introduces a parameter-adaptive adjustment strategy into the original competitive swarm optimization and improves its search ability. There are also algorithms inspired by other ideas, such as teaching-learning-based optimization (TLBO) [51], inspired by the teaching of and feedback between teachers and students, and the sine cosine algorithm (SCA) [45] and golden sine algorithm (Gold-SA) [58], which are based on mathematical ideas.

The Harris Hawks Optimizer (HHO) is a swarm intelligence-based meta-heuristic algorithm proposed by [30] that simulates Harris hawks cooperatively foraging and surrounding prey with multiple strategies. HHO consists of an exploration phase and an exploitation phase, and switches between the two according to the prey escape energy. HHO is strongly competitive compared with other swarm intelligence optimization algorithms [38]. On the one hand, HHO is easy to use because of its simple structure and lack of extra parameters beyond the required ones. On the other hand, the multiple strategies of HHO in the exploitation phase give the algorithm a strong local search ability. In addition, HHO simulates the continuous loss of the prey's escape energy and switches between the exploration and exploitation phases based on its value, which fits the optimization process and gives the algorithm good performance. HHO has been applied to various numerical optimization and practical problems. For example, Hussain et al. [32] applied HHO to feature selection, [21] adapted HHO to the traditional traveling salesman problem, [53] applied HHO to image segmentation, [6] used HHO to cluster features, and [25] built a geohazard radar identification model using a convolutional neural network and HHO. However, according to the No Free Lunch (NFL) theorem [63], no algorithm can achieve superior results on all optimization problems. HHO also suffers from low convergence accuracy, slow convergence, and an imbalance between the exploration and exploitation phases. [50] pointed out that the global search ability of HHO's random strategy is insufficient in the exploration stage, so the algorithm easily falls into local optima; in addition, the prey escape energy cannot fully reflect the actual search process and thus cannot balance exploration and exploitation well. [38] pointed out that when HHO optimizes complex problems, it tends to converge prematurely and with low accuracy. In recent years, there has been a great deal of research on improving HHO. [35] combined the HHO algorithm with the SCA algorithm to enhance the exploration capability of HHO. [8] introduced a chaos strategy, a topological multi-population strategy, and a differential evolution strategy to enhance the performance of HHO. [13] used chaotic maps and modified the control parameters to enhance the ability of HHO. [2] discussed the influence of three selection strategies on the algorithm during the exploration phase. [18] added a quasi-reflection-based learning mechanism to HHO to increase population diversity and convergence speed. [22] used quantum particles to deal with dynamic optimization problems. In [33], opposition-based learning and a chaotic local search operator were combined to enhance the performance of HHO.

Self-organized criticality [5] refers to the property of a system far from equilibrium that evolves toward a critical state automatically, without the tuning of any parameters. When a small disturbance occurs in such a system, a chain reaction may follow and significantly affect the whole system. Extremal optimization (EO) is derived from the Bak-Sneppen (BS) model [4], which exhibits self-organized criticality. Unlike most optimization algorithms, which act on a population, EO acts on a single individual and always selects the worst component of that individual for mutation; this may trigger a large avalanche effect across the individual and thereby improve the solution. EO has very few parameters. Many works have applied EO to numerical optimization and practical problems. [12] proposed a swarm-based EO algorithm (PEO) to deal with single-objective function optimization problems. The application of EO to multi-objective problems in [11] shows that EO also performs well on them. [10] hybridized PSO and EO into a novel algorithm. An adaptive population extremal optimization-based multivariable proportional-integral-derivative (MPID) control method was proposed in [71].

In this paper, we propose an improved HHO hybridized with EO which is named IHHO-EO. The main contributions of this work are summarized as follows:

  • The Harris hawk individual’s own historical best position is retained and used in the exploration phase to enhance the exploration ability.

  • The prey escape energy updating formula is changed to enable a better balance between the exploration phase and the exploitation phase.

  • After the execution of the original HHO process, a refracted opposition-based learning operation with a dynamic k value is performed on all individuals. Refracted opposition-based learning is a variant of opposition-based learning that is important both for improving the population quality and for jumping out of local optima. The k value in the refracted opposition-based learning is generated dynamically to produce more diverse opposition-based solutions.

  • Inspired by [10] that hybridized the PSO algorithm with EO, EO is added to our algorithm to enhance the local search capability.

  • The proposed IHHO-EO is evaluated on a large number of test functions and compared with other algorithms to verify its excellent performance, and it is further applied to the pressure vessel design problem.

In order to verify the performance of our proposed algorithm, extensive experiments were carried out. Firstly, IHHO-EO is compared with the original HHO algorithm, other new intelligent optimization algorithms and various HHO variants on 23 classical basic functions [68], and the superiority of IHHO-EO is verified by the Friedman test [20], Quade test [15] and Wilcoxon signed rank test [14]. Subsequently, the effectiveness of each proposed strategy is verified on high-dimensional versions of the same 23 functions. Then, IHHO-EO is compared with many of the latest optimization algorithms on 29 CEC2017 functions [39]. Finally, IHHO-EO is applied to the pressure vessel design problem [54], a practical engineering problem with constraints. The experimental results show that the proposed IHHO-EO algorithm performs better than the original HHO algorithm and other models on numerical optimization and practical engineering optimization problems.

The remainder of the paper is organized as follows: Sect. 2 introduces the original HHO, refracted opposition-based learning, and the concept of EO. Section 3 presents the improved HHO hybridized with EO in detail. Section 4 performs experimental simulations and analyzes the results. Section 5 further applies the proposed algorithm to the pressure vessel design problem. Section 6 concludes the paper and gives an outlook on future work.

2 Background

2.1 Harris Hawks optimizer (HHO)

HHO is a new swarm intelligence optimization algorithm proposed by [30]. The algorithm simulates the predation behavior of Harris hawks. HHO is divided into two phases, exploration and exploitation, and uses the escape energy of the prey as the basis for switching between them. The most important characteristic of Harris hawks is cooperative foraging; they can display multiple attack patterns according to changes in the environment and the prey's escape patterns. In the Harris hawks optimization algorithm, the Harris hawk individuals constitute the candidate solutions, and the individual with the best fitness in each iteration is regarded as the prey. The phases of HHO are shown in Fig. 1.

Fig. 1 The phases of HHO [30]

2.1.1 Exploration phase

When exploring the prey location, Harris hawks often perch at a random location and quietly search for the prey. During this phase, Harris hawks update their positions by one of two strategies: one explores based on the position of a randomly selected individual and the hawk's own position, while the other explores based on the current optimal individual and the average position of all individuals. The choice between the two strategies is determined by the parameter q, and both have an equal chance of execution. The mathematical model is as follows:

$$\begin{aligned} X_{i}(t+1)=\begin{cases} X_{rand}(t)-r_{1}\left|X_{rand}(t)-2r_{2}X_{i}(t)\right| & q\ge 0.5 \\ \left(X_{prey}(t)-X_{m}(t)\right)-r_{3}\left(LB+r_{4}(UB-LB)\right) & q<0.5 \end{cases} \end{aligned}$$
(1)

where \(X_{i} (t+1)\) and \(X_{i} (t)\) denote the positions of individual i in iterations \(t+1\) and t respectively, \(q, r_{1}, r_{2}, r_{3}\) and \(r_{4}\) are random factors inside (0, 1), UB and LB are the upper and lower bounds of the search space, \(X_{rand} (t)\) is an individual randomly selected from the population in iteration t, \(X_{prey} (t)\) is the individual with the best fitness in iteration t, and \(X_{m} (t)\) is the average position of all individuals in iteration t, defined as follows:

$$\begin{aligned} X_{m} (t)=\frac{1}{N}\sum _{i=1}^{N}X_{i} (t) \end{aligned}$$
(2)

where N denotes the total number of individuals.
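As an illustration, the following minimal Python sketch implements the two exploration rules of Eqs. (1) and (2). The function and variable names are our own, and the final clipping to the bounds is an assumption of the sketch rather than part of the equations.

```python
import numpy as np

def hho_exploration_step(X, i, X_prey, lb, ub, rng):
    """One exploration update for individual i, following Eqs. (1)-(2).

    X is an (N, D) population matrix; lb/ub are D-dimensional bound
    vectors; rng is a numpy Generator. A sketch, not the authors'
    implementation.
    """
    N, D = X.shape
    q, r1, r2, r3, r4 = rng.random(5)
    if q >= 0.5:
        # Perch based on a randomly selected companion (Eq. (1), top branch)
        X_rand = X[rng.integers(N)]
        X_new = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    else:
        # Perch based on the best individual and the population mean (Eq. (2))
        X_mean = X.mean(axis=0)
        X_new = (X_prey - X_mean) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)   # clipping is our assumption
```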

2.1.2 Transition from exploration to exploitation

In HHO, the prey escape energy E is defined to simulate the physical exertion of the prey during its escape. When \(|E |\ge 1\), the algorithm performs the exploration phase, reflecting the Harris hawks searching for the prey. When \(|E |< 1\), the Harris hawks pursue the prey and enter the exploitation phase. The prey escape energy E changes as follows:

$$\begin{aligned} E_{1}=2\left(1-\frac{t}{T}\right) \end{aligned}$$
(3)
$$\begin{aligned} E=E_{2}\times E_{1} \end{aligned}$$
(4)

where \(E_{2}\) represents the initial energy, randomly regenerated within \((-1, 1)\) at each iteration, and T is the maximum number of iterations of the algorithm.
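Eqs. (3) and (4) translate directly into code; the sketch below draws \(E_{2}\) once per call, mirroring the per-iteration regeneration described above.

```python
def escape_energy(t, T, rng):
    """Prey escape energy of the original HHO, Eqs. (3)-(4).

    |E| >= 1 triggers the exploration phase, |E| < 1 the exploitation
    phase. rng is a numpy Generator. A minimal sketch.
    """
    E1 = 2.0 * (1.0 - t / T)        # Eq. (3): linear decay from 2 to 0
    E2 = rng.uniform(-1.0, 1.0)     # initial energy, redrawn each iteration
    return E2 * E1                  # Eq. (4)
```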

2.1.3 Exploitation phase

During the exploitation phase, the Harris hawks besiege the prey they have discovered, forming an encirclement and waiting for an opportunity for a surprise pounce. In the actual hunt, the prey has a chance to escape, and the Harris hawks make different chasing actions according to the behavior of the prey. To mimic the actual siege behavior of the Harris hawks more realistically, HHO introduces four strategies. The random parameter r represents the chance that the prey escapes successfully (\(r<0.5\)) or unsuccessfully (\(r\ge 0.5\)) before the surprise pounce. The choice of strategy is based on the parameter r and the prey escape energy E together. No matter how the prey reacts, the Harris hawks form a hard or soft encirclement to capture it. Along with the constant chase, the prey slowly loses energy and is eventually captured by the Harris hawks.

  • Soft besiege

When \(|E |\ge 0.5\) and \(r\ge 0.5\), the prey has enough energy to escape, and tries to escape the encirclement through some jumping behaviors, but ultimately fails. In this process, the hawks perform a high-altitude soft encirclement operation on the prey, constantly consuming the energy of the prey. After the prey is exhausted, the hawks swoop to capture the prey. The mathematical model is described as follows:

$$\begin{aligned} X_{i}(t+1)=\Delta X(t)-E\left|JX_{prey}(t)-X_{i}(t)\right| \end{aligned}$$
(5)
$$\begin{aligned} \Delta X(t)=X_{prey}(t)-X_{i}(t) \end{aligned}$$
(6)

where \(\Delta X (t)\) is the difference between the location of the prey and the current individual in iteration t, \(J=2 (1-r_{5})\) denotes the strength of the prey's jump during escape, and \(r_{5}\) is a random number in (0, 1).

  • Hard besiege

When \(|E |<0.5\) and \(r\ge 0.5\), the prey is exhausted and has neither the energy nor the chance to escape. The Harris hawks pounce on the prey in a hard-encircling fashion; the mathematical model is described as follows:

$$\begin{aligned} X_{i} (t+1)=X_{prey} (t)-E|\Delta X (t) | \end{aligned}$$
(7)
  • Soft besiege with progressive rapid dives

When \(|E |\ge 0.5\) and \(r<0.5\), the prey has enough energy to escape successfully. At this point, Harris hawks will adopt a more intelligent soft-encirclement strategy, perform multiple dives, and gradually adjust their directions and positions. Under this strategy, Harris hawks take two steps. The rule for the first step is as follows:

$$\begin{aligned} Y=X_{prey} (t)-E|JX_{prey} (t)-X_{i} (t) |\end{aligned}$$
(8)

The hawks then judge whether this dive achieves a better result; if not, they suddenly perform an irregular, rapid dive when approaching the prey. The mathematical model is described as follows:

$$\begin{aligned} Z=Y+S\times LF (D) \end{aligned}$$
(9)

where D is the dimension of the optimization problem, S is a random vector of \(1\times D\) dimension, and LF is the Levy flight [66]. The expression for LF is as follows:

$$\begin{aligned} LF(x)=0.01\times \frac{u\times \sigma }{|v|^{\frac{1}{\beta }}},\quad \sigma = \left(\frac{\Gamma (1+\beta )\times \sin (\frac{\pi \beta }{2})}{\Gamma (\frac{1+\beta }{2})\times \beta \times 2^{\frac{\beta -1}{2}}}\right)^{\frac{1}{\beta }} \end{aligned}$$
(10)

where u and v are random numbers in (0, 1) and \(\beta\) is a constant set to 1.5. The update formula for the soft besiege with progressive rapid dives of HHO is thus:

$$\begin{aligned} X_{i}(t+1)=\begin{cases} Y & \text {if } F(Y)<F(X_{i}(t)) \\ Z & \text {if } F(Z)<F(X_{i}(t)) \end{cases} \end{aligned}$$
(11)
  • Hard besiege with progressive rapid dives

When \(|E |<0.5\) and \(r<0.5\), the prey has a chance to escape, but does not have enough energy to do so. The hawks form a hard besiege before attacking, trying to reduce the average distance between their group and the prey, and then take a dive to kill the prey. The mathematical model is as follows:

$$\begin{aligned} \dot{Y}=X_{prey}(t)-E\left|JX_{prey}(t)-X_{m}(t)\right| \end{aligned}$$
(12)
$$\begin{aligned} \dot{Z}=\dot{Y}+S\times LF(D) \end{aligned}$$
(13)
$$\begin{aligned} X_{i}(t+1)=\begin{cases} \dot{Y} & \text {if } F(\dot{Y})<F(X_{i}(t)) \\ \dot{Z} & \text {if } F(\dot{Z})<F(X_{i}(t)) \end{cases} \end{aligned}$$
(14)

This process is similar to the soft besiege with progressive rapid dives, but it uses the average location information of the group. See Algorithm 1 for the pseudo-code of HHO; a compact sketch of the four besiege strategies is given after it.

Algorithm 1 Pseudo-code of HHO
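The following Python sketch shows how the four besiege strategies and the Levy flight of Eq. (10) fit together. It is a minimal illustration of Eqs. (5)-(14) under the branching rules above, not the authors' implementation; the helper names are ours.

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(D, beta=1.5, rng=None):
    """Levy flight step of Eq. (10) (Mantegna's scheme as used in HHO)."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = ((gamma(1 + beta) * sin(pi * beta / 2))
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.random(D), rng.random(D)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)

def hho_exploitation_step(X, i, X_prey, E, fitness, rng):
    """One exploitation update for individual i, Eqs. (5)-(14).

    `fitness` evaluates a position vector (minimization assumed).
    """
    D = X.shape[1]
    r = rng.random()
    J = 2.0 * (1.0 - rng.random())               # prey jump strength
    if r >= 0.5 and abs(E) >= 0.5:               # soft besiege, Eqs. (5)-(6)
        return (X_prey - X[i]) - E * np.abs(J * X_prey - X[i])
    if r >= 0.5:                                 # hard besiege, Eq. (7)
        return X_prey - E * np.abs(X_prey - X[i])
    # Progressive rapid dives: the soft variant (Eqs. (8)-(11)) uses X_i,
    # the hard variant (Eqs. (12)-(14)) uses the population mean X_m.
    ref = X[i] if abs(E) >= 0.5 else X.mean(axis=0)
    Y = X_prey - E * np.abs(J * X_prey - ref)            # Eq. (8) / (12)
    Z = Y + rng.random(D) * levy_flight(D, rng=rng)      # Eq. (9) / (13)
    if fitness(Y) < fitness(X[i]):
        return Y
    if fitness(Z) < fitness(X[i]):
        return Z
    return X[i]                                  # otherwise keep the position
```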

2.2 Extremal optimization (EO)

EO is inspired by the Bak-Sneppen model of biological evolution [7]. EO has no concept of a population and operates on a single individual. Specifically, each component of the individual is mutated one by one, and the fitness of each mutated individual is evaluated. The component corresponding to the new individual with the best fitness is considered the worst component of the original individual. The best new individual is then compared with the original one and replaces it if better. The basic EO has no extra parameters and only a mutation operation, and performs very well at local search [10]. The pseudo-code of EO is shown in Algorithm 2, and a minimal sketch follows it.

Algorithm 2 Pseudo-code of EO
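A minimal Python sketch of one EO pass, assuming minimization and a user-supplied scalar mutation operator; the function names are ours and the sketch is not the exact pseudo-code of Algorithm 2.

```python
def extremal_optimization(x, fitness, mutate, rng):
    """One EO pass over a single individual, following Sect. 2.2.

    Each component of x (a numpy array) is mutated in turn to create
    D candidate individuals; the best candidate identifies the "worst"
    component and replaces x if it improves the fitness. `mutate` is
    any scalar mutation operator, e.g. the G-C mutation of Sect. 3.4.
    """
    candidates = []
    for j in range(len(x)):
        y = x.copy()
        y[j] = mutate(y[j], rng)   # mutate the j-th component only
        candidates.append(y)
    best = min(candidates, key=fitness)
    return best if fitness(best) < fitness(x) else x
```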

2.3 Refracted opposition–based learning (ROBL)

2.3.1 Opposition–based learning (OBL)

Opposition-based learning was proposed by [59]. The basic idea is to expand the search by considering both the current solution and its opposite solution: searching simultaneously at a solution's location and its opposite location increases the chance of obtaining a better solution.

Suppose x is a solution in a one-dimensional space, then its opposite solution \(\tilde{x}\) is defined as follows:

$$\begin{aligned} \tilde{x}=lb+ub-x \end{aligned}$$
(15)

where \(x\in R\), lb and ub are the lower and upper bounds of x, respectively.

Extending to the D-dimensional problem, then the opposite solution of x in dimension i is defined as follows:

$$\begin{aligned} \tilde{x}_{i}=lb_{i}+ub_{i}-x_{i} \end{aligned}$$
(16)

where \(x_{i}\in R\), \(x_{i}\in (lb_{i},ub_{i})\), \(\forall i\in 1,2,\cdots ,D\), and \(lb_{i}\) and \(ub_{i}\) are the lower and upper bounds in the ith dimension.

Opposition-based learning is able to explore the opposite side of the current solution, making it more likely to obtain a high-quality solution. In recent years, opposition-based learning has been combined with various algorithms, such as GWO [70] and SCA [1]. However, even with the basic OBL strategy, algorithms can still fail to jump out of local optima in the late phase [17]. A minimal sketch of the operation follows.
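Eqs. (15) and (16) amount to a one-line operation; the sketch below applies them elementwise to numpy arrays (the names are ours).

```python
def opposite_solution(x, lb, ub):
    """Basic opposition-based solution of Eqs. (15)-(16); x, lb and ub
    are scalars or D-dimensional numpy arrays, applied elementwise."""
    return lb + ub - x

# Typical use: evaluate both x and its opposite and keep the better one,
# e.g.  x = min((x, opposite_solution(x, lb, ub)), key=fitness)
```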

2.3.2 Refracted opposition–based learning (ROBL)

In order to improve the effect of opposition-based learning, [17] presented a variant of OBL that incorporates the law of refraction of light to generate better opposition-based solutions. The geometric construction behind refracted opposition-based learning is shown in Fig. 2.

Fig. 2 Refracted opposition-based learning

In Fig. 2, the x-axis represents the dividing line and the y-axis denotes the normal line. The half-planes above and below the x-axis are regarded as two different media. \(l_{1}\) and \(l_{2}\) are the lengths of the incident ray and the refracted ray, respectively, and \(\alpha\) and \(\beta\) are the angles of incidence and refraction. lb and ub denote the range of points on the x-axis, and point o is the midpoint of the interval [lb, ub]. From Fig. 2, we can derive the following relations:

$$\begin{aligned} \sin \alpha =\frac{(lb+ub)/2-x}{l_{1}} \end{aligned}$$
(17)
$$\begin{aligned} \sin \beta =\frac{\tilde{x}-(lb+ub)/2}{l_{2}} \end{aligned}$$
(18)

From the definition of the refractive index n, the following equation can be derived:

$$\begin{aligned} n=\frac{\sin \alpha }{\sin \beta }=\frac{l_{2}\left((lb+ub)/2-x\right)}{l_{1}\left(\tilde{x}-(lb+ub)/2\right)} \end{aligned}$$
(19)

Let \(k=l_{1}/l_{2}\), then Eq. (19) can be transformed into:

$$\begin{aligned} kn=\frac{(lb+ub)/2-x}{\tilde{x}-(lb+ub)/2} \end{aligned}$$
(20)

After the transformation of Eq. (20), the expression of the refracted opposition-based solution \(\tilde{x}\) can be derived as follows:

$$\begin{aligned} \tilde{x}=\frac{lb+ub}{2}+\frac{lb+ub}{2kn}-\frac{x}{kn} \end{aligned}$$
(21)

If \(k=1\) and \(n=1\), Eq. (21) reduces to basic opposition-based learning, which shows that basic opposition-based learning is a special case of refracted opposition-based learning. ROBL can dynamically generate opposite solutions by adjusting the values of k and n, which improves the diversity of the opposite solutions and increases the chance of the algorithm jumping out of local optima. It has been verified in [40] and [61] that ROBL can effectively improve the performance of an algorithm. When n is set to 1 and the problem is multidimensional, the refracted opposition-based solution is given by:

$$\begin{aligned} \tilde{x}_{i,j}=\frac{lb_{j}+ub_{j}}{2}+\frac{lb_{j}+ub_{j}}{2k}-\frac{x_{i,j}}{k} \end{aligned}$$
(22)

where \(\tilde{x}_{i,j}\) refers to the refracted opposition-based solution of individual i in the jth dimension, and \(lb_{j}\) and \(ub_{j}\) are the lower and upper bounds in the jth dimension.
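A sketch of Eqs. (21)-(22) in Python, under the paper's usual setting n = 1 (the names are ours; with k = n = 1 it reduces to the basic OBL above):

```python
def refracted_opposite(x, lb, ub, k, n=1.0):
    """Refracted opposition-based solution of Eqs. (21)-(22), applied
    elementwise to scalars or numpy arrays. Setting k = n = 1 recovers
    the basic opposition-based solution of Eq. (16)."""
    mid = (lb + ub) / 2.0
    return mid + mid / (k * n) - x / (k * n)
```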

3 The proposed IHHO-EO algorithm

3.1 Introducing the individual’s own historical optimal location strategy

If \(|E |\ge 1\), the Harris hawks perform the exploration phase, searching for the prey. When the parameter \(q\ge 0.5\), a Harris hawk individual flies to its next location based on a randomly selected companion and its own current location. However, randomly selecting a companion makes the update overly stochastic and uses too little information; it can hardly guide the individual toward a better position, which limits the exploration ability. Therefore, when \(q\ge 0.5\), the individual's historical optimal location is introduced and combined with the location of the current global best individual to enhance exploration. At the same time, considering that the individual's historical optimal position has little reference value in the early stage but becomes more and more reliable as the number of iterations increases, a linearly increasing coefficient is applied to the individual's historical optimal position. The improved exploration formula is shown below:

$$\begin{aligned} X_{i}(t+1)=\begin{cases} r_{1}X_{i}(t)+r_{2}\left|X_{prey}(t)-\frac{t}{T}X_{ipbest}(t)\right| & q\ge 0.5 \\ \left(X_{prey}(t)-X_{m}(t)\right)-r_{3}\left(LB+r_{4}(UB-LB)\right) & q<0.5 \end{cases} \end{aligned}$$
(23)

where \(X_{ipbest} (t)\) represents the historical optimal position of individual i in iteration t, and T is the maximum number of iterations.
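A minimal sketch of the modified exploration rule of Eq. (23); as with the earlier exploration sketch, the names and the bound clipping are our own assumptions.

```python
import numpy as np

def improved_exploration(X, i, X_prey, X_pbest, t, T, lb, ub, rng):
    """Improved exploration rule of Eq. (23). X_pbest[i] is the
    historical best position of individual i; the t/T factor weights
    it more heavily as the iterations proceed. A sketch."""
    q, r1, r2, r3, r4 = rng.random(5)
    if q >= 0.5:
        X_new = r1 * X[i] + r2 * np.abs(X_prey - (t / T) * X_pbest[i])
    else:
        X_new = (X_prey - X.mean(axis=0)) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)
```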

3.2 Nonlinear change of prey escape energy coefficient \(E_{1}\)

In the basic HHO algorithm, the prey escape energy E controls whether the algorithm performs the exploration phase or the exploitation phase. From Eq. (4), E is determined by the initial energy \(E_{2}\) and a coefficient \(E_{1}\) related to the number of iterations t. However, \(E_{1}\) decreases linearly from 2 to 0, which cannot balance the exploration and exploitation phases well. Therefore, a nonlinear update of \(E_{1}\) is proposed, given by Eqs. (24) and (25):

$$\begin{aligned} A=\frac{t-\frac{T}{2}}{\frac{T}{2}} \end{aligned}$$
(24)
$$\begin{aligned} E_{1}=\begin{cases} 2-\sqrt{1-A^{2}} & t\le \frac{T}{2} \\ \sqrt{1-A^{2}} & t>\frac{T}{2} \end{cases} \end{aligned}$$
(25)
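Eqs. (24) and (25) translate directly into code (a sketch):

```python
import math

def nonlinear_E1(t, T):
    """Nonlinear escape-energy coefficient of Eqs. (24)-(25): decays
    from 2 to 1 over the first half of the run and from 1 to 0 over
    the second half, replacing the linear Eq. (3). A sketch."""
    A = (t - T / 2.0) / (T / 2.0)              # Eq. (24), A in [-1, 1]
    if t <= T / 2.0:
        return 2.0 - math.sqrt(1.0 - A * A)    # Eq. (25), first half
    return math.sqrt(1.0 - A * A)              # Eq. (25), second half
```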

Figure 3 shows a comparison between the original \(E_{1}\) update and the proposed \(E_{1}\) update.

Fig. 3 The changing curve of \(E_{1}\)

From Fig. 3, it can be seen that the proposed \(E_{1}\) decreases faster in the early stage, so the algorithm can quickly converge to the vicinity of the optimal solution early on. On the other hand, it decreases more slowly in the later stage and thus better models the energy loss of the prey. The algorithm can therefore switch adequately among the four exploitation strategies and perform a thorough local search.

3.3 ROBL with dynamic parameter k

Compared with basic opposition-based learning, the refracted opposition-based learning introduced in Sect. 2.3 has two parameters, k and n, which allow opposition-based solutions to be produced more flexibly and with greater diversity. It performs well at improving the quality of opposition-based solutions and helping the algorithm jump out of local optima. When n is set to 1, k alone controls the opposition effect, so if the value range of k is wide, diverse opposition-based solutions can be generated. The k value in basic ROBL is fixed, which cannot generate dynamic opposition-based solutions well. Therefore, we propose the following formula for the value of k:

$$\begin{aligned} k=\left(1-\cos \left(2\pi \frac{t}{T}\right)\right)^{10} \end{aligned}$$
(26)
Fig. 4 The changing curve of the k value

Figure 4 shows the change of k value with the number of iterations.

Combining Fig. 4 and Eq. (26), it can be concluded that when the number of iterations satisfies \(t<150\) or \(t>350\) (for the \(T=500\) used in the experiments), the value of k is close to 0, while k takes a much larger value for \(150<t<350\). The value of k therefore spans a relatively wide range and is highly dynamic, which effectively improves the diversity of the opposition-based solutions and increases the possibility of jumping out of local optima.

In view of HHO's tendency to fall into local optima, the proposed ROBL with dynamic parameter k is applied to all individuals after the basic HHO process is performed. The newly generated opposition-based population is merged with the original population, and the N individuals with the best fitness in the combined population are greedily selected as the new population. A sketch of this step follows.
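The following sketch combines the dynamic k of Eq. (26), the refracted opposition of Eq. (22) and the greedy merge described above. The small eps guard is our addition, since Eq. (26) yields k = 0 at t = 0 and t = T, where Eq. (22) would divide by zero; the paper does not specify how this boundary is handled.

```python
import numpy as np

def dynamic_k(t, T, eps=1e-12):
    """Dynamic k of Eq. (26); eps (our assumption) avoids k = 0."""
    return max((1.0 - np.cos(2.0 * np.pi * t / T)) ** 10, eps)

def robl_greedy_update(X, fit_vals, fitness, lb, ub, t, T):
    """Apply ROBL with dynamic k to all N individuals (Eq. (22) with
    n = 1), merge with the original population and greedily keep the
    N best, as described in Sect. 3.3 (minimization assumed)."""
    k = dynamic_k(t, T)
    mid = (lb + ub) / 2.0
    X_opp = np.clip(mid + mid / k - X / k, lb, ub)      # Eq. (22)
    merged = np.vstack([X, X_opp])
    f = np.concatenate([fit_vals,
                        np.apply_along_axis(fitness, 1, X_opp)])
    best = np.argsort(f)[: X.shape[0]]                  # N fittest
    return merged[best], f[best]
```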

3.4 Combination with EO

The literature [10] has verified the strong local search ability of EO through a large number of experiments. As introduced in Sect. 2.2, EO acts on an individual, mutates each of its components in turn, and generates as many new individuals as there are components. The new individual with the best fitness is compared with the original individual; if it is superior, it replaces the original. The same is done for each individual.

In order to enhance the exploitation capability of HHO, the EO operation is performed on all individuals after the ROBL process. Since mutation is the only operation EO applies to solution components, the choice of mutation operator strongly influences EO's effect. The research work of [68] shows that Cauchy mutation produces long search steps with a higher probability than Gaussian mutation, so Cauchy mutation is better suited for coarse tuning while Gaussian mutation is better suited for fine tuning. The hybrid Gaussian-Cauchy mutation (G-C mutation) [11] is used here. Cauchy mutation is applied first; if the newly generated variable falls outside the range, the Cauchy mutation is repeated up to a certain number of times (TC) to bring the new variable within range. If it still exceeds the range, Gaussian mutation is applied up to a certain number of times (TG) to produce a smaller step size. Finally, if the range requirement is still not met, the corresponding upper or lower boundary is used. The value of TC determines the coarse-grained search time and the value of TG determines the fine-grained search time, so neither parameter should be too large, or the search time will be greatly prolonged. [11] pointed out that appropriate values of TC and TG are 2 \(\sim\) 4; both are set to 3 in this paper.
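A sketch of the G-C mutation of a single component as described above; the step scale is a hypothetical parameter of this sketch, not a value from the paper.

```python
def gc_mutation(x_j, lb_j, ub_j, rng, TC=3, TG=3, scale=1.0):
    """Hybrid Gaussian-Cauchy (G-C) mutation of one component, as
    described in Sect. 3.4: up to TC Cauchy attempts (long, coarse
    steps), then up to TG Gaussian attempts (short, fine steps), and
    finally clipping to the bounds. `scale` is our assumption."""
    y = x_j
    for _ in range(TC):
        y = x_j + scale * rng.standard_cauchy()
        if lb_j <= y <= ub_j:
            return y
    for _ in range(TG):
        y = x_j + scale * rng.standard_normal()
        if lb_j <= y <= ub_j:
            return y
    return min(max(y, lb_j), ub_j)   # fall back to the nearest bound
```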

3.5 The IHHO-EO algorithm

In view of the shortcomings of HHO, we propose the above four improvement strategies. The introduction of the historical optimal position of the Harris hawk individual better guides the individual position update and enhances the exploration ability. The improved prey escape energy update formula lets the algorithm achieve a better balance between exploration and exploitation. The combination of ROBL and the proposed variation formula for the parameter k increases the diversity of the solutions and the possibility of jumping out of local optima. The hybridization with EO enhances the local exploitation capability of the algorithm. The pseudo-code of IHHO-EO is shown in Algorithm 3, and the flow chart is shown in Fig. 5.

Fig. 5 Flow chart of the IHHO-EO algorithm

3.6 Computational complexity analysis

Assume that the population size is N, the problem dimension is D, and the maximum number of iterations is T. In the original HHO algorithm, the computational complexity mainly consists of three parts: initialization, fitness evaluation and individual location update. The computational complexity of HHO is therefore \(O (HHO)=O(N)+O(T\times N)+O(T\times N\times D)\).

In the proposed IHHO-EO, the introduction of the individual's own historical optimal position and the improvement of the prey escape energy update formula add no extra complexity. The computational complexity of the EO operation is \(O(T\times N\times D)+O(T\times N\times D)\), covering mutation generation and fitness evaluation. The computational complexity of ROBL is \(O(T\times N\times D)+O(T\times 2N)\), where \(O(T\times N\times D)\) is the cost of the refracted opposition operation on all individuals and \(O(T\times 2N)\) is the cost of evaluating and greedily selecting from the merged population.

Algorithm 3 Pseudo-code of IHHO-EO

In summary, the complexity of IHHO-EO is \(O(N)+O(3\times T\times N)+O(4\times T\times N\times D)\). Although this is somewhat higher than that of the original HHO, which is \(O(N)+O(T\times N)+O(T\times N\times D)\), the performance of the proposed algorithm is considerably better.

4 Experimental results and analysis

4.1 Experimental setup

Table 1 Summary of experimental conditions
Table 2 The description of unimodal functions \(F_{1}\sim F_{7}\)
Table 3 The description of multimodal functions \(F_{8}\sim F_{13}\)
Table 4 The description of fixed-dimension multimodal functions \(F_{14}\sim F_{23}\)
Table 5 The description of CEC2017 benchmark functions
Table 6 Parameter settings

To fully illustrate the performance of IHHO-EO, the experiments are divided into 4 parts, as shown in Table 1. In order to verify the performance of the proposed IHHO-EO algorithm in solving numerical optimization problems, 23 classical standard test functions [68] and 29 CEC2017 test functions [39] are selected for evaluation. Among the 23 classical standard test functions, \(F_{1}\sim F_{7}\) are unimodal functions that test the exploitation performance of the algorithm, \(F_{8}\sim F_{13}\) are multimodal functions that test the exploration performance and the ability of the algorithm to jump out of local optima, and \(F_{14}\sim F_{23}\) are multimodal functions with fixed low dimensions that test the stability of the algorithm. The details of the 23 classical standard test functions are shown in Tables 2, 3, 4. The 29 CEC2017 test functions (\(f_{1}\sim f_{30}\)) include unimodal, multimodal and composite functions; they are more complex and are used to test the ability of the algorithm on complex numerical optimization. The details of the CEC2017 test functions are shown in Table 5. The bold data in all tables represent the best mean result achieved on each function among all compared algorithms.

In Exp.1, IHHO-EO is compared on the 23 classical standard test functions with several intelligent optimization algorithms proposed in recent years and some famous classical algorithms. The compared algorithms include the original HHO, Whale Optimization Algorithm (WOA) [47], Salp Swarm Algorithm (SSA) [46], Grey Wolf Optimizer (GWO) [48], Butterfly Optimization Algorithm (BOA) [3], Moth-Flame Optimization Algorithm (MFO) [43], Genetic Algorithm (GA) [31], Differential Evolution (DE) [57], Biogeography-Based Optimization (BBO) [56] and Particle Swarm Optimizer (PSO) [37]. The population size of each algorithm was set to 30, and each algorithm was run 30 times with 500 iterations per run.

Exp.2 compares IHHO-EO with three improved HHO variants: HHO-SCA [35], QRHHO [18], and THHO [2].

Exp.3 is conducted on the classical standard test functions in 50 dimensions to explore the effectiveness of the added strategies.

Exp.4 is conducted on the CEC2017 test functions with each function set to 50 dimensions. Several intelligent optimization algorithms (Grasshopper Optimisation Algorithm (GOA) [55], Dragonfly Algorithm (DA) [44], Sine Cosine Algorithm [45] and HHO) and a variant of HHO (SCHHO [32]) were compared. The population size of each algorithm was set to 30, and 30 runs were performed with 1500 iterations each.

The parameter settings of the algorithms used in the experiments are shown in Table 6. Parameter settings have a certain impact on the performance of an algorithm; therefore, the parameter values of the compared algorithms were taken directly from their original papers for a fair comparison. All experiments were performed on a computer with a 2.60 GHz Intel i5-3320M CPU and 8 GB RAM.

4.2 Comparison with advanced meta-heuristic algorithms and famous classical algorithms

This part of the experiment compares IHHO-EO with HHO, WOA, SSA, GWO, BOA, MFO, GA, DE, BBO and PSO, which include swarm intelligence optimization algorithms proposed in recent years and famous classical algorithms. Tables 7, 8, 9, 10, 11 show the experimental results.

4.2.1 Comparison with advanced meta-heuristic algorithms

In terms of mean, standard deviation, best and worst results, IHHO-EO has a significant advantage over all these algorithms, or the results are essentially equivalent; this is most obvious on functions \(F_{1}\sim F_{4}\). On the fixed-dimension functions \(F_{14}\sim F_{23}\), only the IHHO-EO algorithm was able to find the optimal value. Compared with the original HHO, IHHO-EO has similar performance on \(F_{5}\), \(F_{12}\) and \(F_{13}\), but its mean is slightly worse.

Table 7 Results of unimodal benchmark functions (\(F_{1}\sim F_{7}\))
Table 8 Results of multimodal benchmark functions (\(F_{8}\sim F_{13}\))
Table 9 Results of fixed-dimension multimodal benchmark functions (\(F_{14}\sim F_{23}\))

To further illustrate the difference between IHHO-EO and the other algorithms, the Wilcoxon signed rank test [14] was performed. If the p-value is less than 0.05, IHHO-EO can be considered significantly different from the compared algorithm on that function, and NaN indicates that there is essentially no difference between the two algorithms. It can be seen from Table 10 that IHHO-EO differs markedly from the compared algorithms on most functions. The symbol "+/\(\approx\)/-" reports the number of the 23 functions on which IHHO-EO achieves a better, similar, or worse mean than each compared algorithm; evidently IHHO-EO obtains better results than the other algorithms on most functions.

Table 10 Results of Wilcoxon signed rank test

4.2.2 Convergence analysis

The convergence curves of the experiments were analyzed to further illustrate the performance of IHHO-EO. Figure 6 shows the convergence of each algorithm on some functions. On most functions, the IHHO-EO algorithm has faster convergence speed and higher convergence accuracy. For function \(F_{1}\), IHHO-EO reached the optimal value 0 in the first half of the iterations, while the other algorithms could not find the optimal value by the end of the iterations. On functions \(F_{6}\), \(F_{7}\) and \(F_{8}\), IHHO-EO not only achieves better results but also converges faster, which reflects the effectiveness of the proposed strategies. On function \(F_{10}\), IHHO-EO converges very quickly compared with the other algorithms and is the first to find the optimal value of the function. On function \(F_{13}\), although the final result of IHHO-EO is slightly worse than that of the original HHO algorithm, its initial convergence speed is still higher than HHO's. IHHO-EO also maintains good convergence speed and accuracy on the fixed-dimension function \(F_{14}\).

Fig. 6 Convergence curves on some functions

4.2.3 Comparison with famous classical algorithms

As can be seen from Table 11, compared with GA, DE, PSO and BBO, IHHO-EO achieves the smallest mean and standard deviation on the 23 classical standard test functions, indicating its good performance. GA and BBO also find the theoretical optimal value on functions \(F_{14}\) and \(F_{18}\).

Table 11 Results of comparison with the famous classical algorithms

4.3 Comparison with three variants of HHO

Since the HHO algorithm was proposed, many improved HHO variants have been proposed and have achieved good results. In this experiment, three modified HHO variants (HHO-SCA, QRHHO, THHO) proposed in the past two years are selected for comparison. Table 12 shows the comparison results on the 23 classical standard test functions. It can be seen that IHHO-EO obtains the best or comparable mean values on most of the functions. The standard deviation of IHHO-EO is also small, indicating good stability. The Friedman test [20] and Quade test [15] are conducted on the results, as shown in Table 13. The results show that IHHO-EO ranks higher than the other algorithms, indicating better performance.

Table 12 Comparison results with other HHO variants
Table 13 Friedman and Quade tests of Exp.2

4.4 Impact of the added strategies on the optimizer

This experimental section explores the effectiveness of the various added strategies of the proposed IHHO-EO. Tests are performed on the 50-dimensional versions of \(F_{1}\sim F_{13}\) and on the fixed-dimension functions \(F_{14}\sim F_{23}\). IHHO1 denotes the addition of the own-historical-optimal-position strategy and the nonlinear prey-escape-energy strategy, IHHO2 denotes the addition of only the refracted opposition-based learning strategy, HHO-EO denotes the addition of only the EO strategy, and IHHO-EO denotes the addition of all strategies. As shown in Tables 14, 15, the original HHO, IHHO1, IHHO2, HHO-EO and IHHO-EO are all able to find the optimal values of functions \(F_{9}\), \(F_{10}\), \(F_{16}\) and \(F_{18}\), but the original HHO does not perform as well as the variants on the remaining functions, indicating the effectiveness of the added strategies. IHHO1 obtains better results than HHO on functions \(F_{1}\sim F_{4}\), \(F_{7}\), \(F_{8}\), \(F_{19}\) and \(F_{21}\). The improvement of IHHO2 on functions \(F_{1}\sim F_{4}\) and \(F_{21}\sim F_{23}\) is obvious, while HHO-EO achieves the best results on functions \(F_{8}\), \(F_{12}\), \(F_{13}\) and \(F_{15}\). IHHO-EO combines these strategies and achieves good results on most functions.

Table 14 Comparison of various improvement strategies (\(F_{1}\sim F_{13}\) of 50 dimensions)
Table 15 Comparison of various improvement strategies (\(F_{14}\sim F_{23}\))

4.5 Comparison with advanced meta-heuristic algorithms and one variant of HHO on CEC2017 functions

In order to test the performance of IHHO-EO on complex function optimization problems, IHHO-EO was applied to the more complex CEC2017 test functions. The results of IHHO-EO in comparison with GOA, DA, SCA, HHO and a modified HHO variant (SCHHO) are shown in Table 16. The mean in Table 16 is the difference between the actual run result and the optimal value of the function. Compared with GOA, DA, SCA and HHO, IHHO-EO performs better with respect to the mean. GOA has a smaller mean value than IHHO-EO on function \(f_{7}\), while it does not perform as well as IHHO-EO on the remaining functions. IHHO-EO achieves better average results than DA, SCA and HHO on all functions. Our algorithm outperforms SCHHO on 22 functions and is slightly inferior to SCHHO on the remaining 7 functions.

Table 16 Results of benchmark functions (CEC2017 \(f_{1}\sim f_{30}\))

5 IHHO-EO for pressure vessel design problem

In this section, the IHHO-EO algorithm is further applied to the pressure vessel design problem [36], a typical engineering design problem. The problem contains four variables: the inner radius (R), the thickness of the shell (\(T_{s}\)), the thickness of the head (\(T_{h}\)) and the length of the cylindrical section without the head (L). The objective is to minimize the total cost, including material, forming and welding costs. A diagram of the problem is shown in Fig. 7. The pressure vessel design problem can be abstracted as a single-objective minimization problem with four variables and four constraints. The mathematical model is as follows.

$$\begin{aligned}&\text {Consider} \quad \vec {x}=[x_{1}\ x_{2}\ x_{3}\ x_{4}]=[T_{s}\ T_{h}\ R\ L] \\&\text {Minimize} \quad f (\vec {x})=0.6224x_{1}x_{3}x_{4}+1.7781x_{2}x_{3}^{2}+3.1661x_{1}^{2}x_{4}+19.84x_{1}^{2}x_{3} \\&\text {Subject to} \quad g_{1} (\vec {x})=-x_{1}+0.0193x_{3}\le 0 \\&\qquad g_{2} (\vec {x})=-x_{2}+0.00954x_{3}\le 0 \\&\qquad g_{3} (\vec {x})=-\pi x_{3}^{2}x_{4}-\frac{4}{3}\pi x_{3}^{3}+1296000\le 0 \\&\qquad g_{4} (\vec {x})=x_{4}-240\le 0 \\&\text {Variable range} \quad 0\le x_{1},x_{2}\le 99, \quad 10\le x_{3},x_{4}\le 200 \end{aligned}$$
(27)
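For concreteness, the sketch below evaluates the objective and constraints of Eq. (27) with a static quadratic penalty. The penalty mechanism and its weight are our assumptions; the paper does not specify its constraint-handling method.

```python
import math

def pressure_vessel_cost(x, penalty_weight=1e6):
    """Penalized cost of the pressure vessel design problem, Eq. (27).
    x = (Ts, Th, R, L); the quadratic penalty is an assumption of this
    sketch, not the paper's constraint-handling mechanism."""
    Ts, Th, R, L = x
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)
    g = [
        -Ts + 0.0193 * R,                                     # g1
        -Th + 0.00954 * R,                                    # g2
        -math.pi * R ** 2 * L
        - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0,         # g3
        L - 240.0,                                            # g4
    ]
    violation = sum(max(0.0, gi) ** 2 for gi in g)
    return cost + penalty_weight * violation
```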
Fig. 7 Pressure vessel design problem

IHHO-EO is compared with the IEHHO [50], HHO, GWO [48], GA [47], HPSO [26], BA [19] and WOA [47] algorithms on this problem, and the best results they obtain are shown in Table 17. The table shows that the proposed IHHO-EO algorithm outperforms all compared algorithms on this constrained practical engineering optimization problem.

Table 17 Comparison of results for pressure vessel design problem

6 Conclusion and future work

In this paper, four improvement strategies were introduced into the original HHO and a novel algorithm, named IHHO-EO, was proposed. First of all, the historical optimal location of each individual is added to address the over-randomization and insufficient use of information in the exploration stage of HHO when updating locations: better locations already found are used to strengthen the guidance and reduce blind search. Second, the linear update of the prey escape energy is improved to allow a better balance between the exploration and exploitation phases. Refracted opposition-based learning is introduced, and a dynamic approach to the value of k is proposed to produce dynamic opposition-based solutions on a larger scale, improving the quality of the population and increasing the likelihood of jumping out of local optima. Finally, the EO operation is incorporated to enhance the local exploitation capability of the algorithm. To illustrate the performance of IHHO-EO, extensive experiments were conducted. Exp.1 tested IHHO-EO on the 23 classical benchmark functions in 30 dimensions and compared it with various optimization algorithms such as HHO to illustrate the effectiveness of the IHHO-EO algorithm. The results showed that IHHO-EO was superior to the other optimization algorithms in terms of the obtained best value, mean value and standard deviation. In the Wilcoxon signed rank test, the p-value of IHHO-EO on most functions was less than 0.05, indicating that IHHO-EO differed significantly from the other algorithms in performance. In addition, IHHO-EO had faster convergence speed and higher accuracy according to the convergence graphs of the functions. In Exp.2, to illustrate the superior performance of the proposed algorithm, IHHO-EO was compared with several other HHO variants; the Friedman test and Quade test showed that IHHO-EO obtained the highest ranking among the four compared HHO variants. Exp.3 verified the effectiveness of each improvement strategy of IHHO-EO. The results indicated that the mean and standard deviation of the algorithm improved on most functions after adding the proposed strategies; IHHO-EO combined these strategies and obtained the best performance. In Exp.4, IHHO-EO was compared with several optimization algorithms and one HHO variant on the complex 50-dimensional CEC2017 test functions. The experimental results indicated that IHHO-EO was also effective on complex function optimization problems. Finally, IHHO-EO was applied to the pressure vessel design problem and obtained the minimum manufacturing cost, which showed that the algorithm is also competitive on practical engineering design problems with constraints.

In future work, IHHO-EO can be applied to more engineering design problems, such as the compression spring design problem and the welded beam design problem [54], as well as typical combinatorial optimization problems such as the traveling salesman problem [23].