1 Introduction

Optimization refers to the process of finding, among all possible values, the optimal values for the parameters of a given system so as to maximize or minimize its output (Mirjalili 2016). Optimization problems arise in many fields, which makes optimization methods essential and provides an exciting research direction for researchers (Hussain et al. 2018). Optimization algorithms are an active field of research that has produced exceedingly important advances in the resolution of intractable optimization problems. Significant progress has been made since the first algorithm was proposed, and many new algorithms continue to appear (Dokeroglu et al. 2019). Conventional optimization methods such as the Newton method (Deuflhard 2011) and quadratic programming (Hillier and Hillier 2003) suffer from problems such as stagnation in local optima and the need for derivative information about the search space (Simpson et al. 1994). Stochastic optimization methods have become popular in the last two decades (Spall 2003; Parejo et al. 2012; Boussaïd et al. 2013). A meta-heuristic algorithm is an algorithmic framework that can be applied to various optimization problems with only slight modifications. The use of meta-heuristics significantly increases the ability to find high-quality solutions to hard optimization problems. In other words, a meta-heuristic is a heuristic method that searches the solution space to find high-quality answers. The common goal of meta-heuristics is to solve well-known, challenging optimization problems (Dorigo and Stützle 2004). Meta-heuristic methods share the following characteristics (Crawford et al. 2017):

  • These methods are somewhat probabilistic. This randomness helps the algorithm avoid becoming trapped in local optima.

  • A meta-heuristic is a high-level strategy that guides a heuristic search process.

  • The goal is to efficiently explore the search space in order to find (close to) optimal solutions.

  • Meta-heuristics are not problem-specific.

  • The basic concepts of meta-heuristics permit an abstract level of description.

  • Meta-heuristic algorithms are approximate and generally nondeterministic.

  • A common disadvantage of these methods is the difficulty of tuning and matching their parameters.

Different criteria are used to classify meta-heuristic algorithms (Talbi 2009). In general, meta-heuristic algorithms are divided into two categories: one-solution-based algorithms and population-based algorithms (Boussaïd et al. 2013). A one-solution-based algorithm modifies a single solution during the search process (Fig. 1), whereas a population-based algorithm maintains a population of solutions (Fig. 2). The characteristics of these two types of algorithms are complementary: one-solution-based meta-heuristics can focus on local search areas, whereas population-based meta-heuristics can lead the search to different regions of the solution space (Zhou et al. 2011).

Fig. 1
figure 1

One-solution-based meta-heuristic algorithm

Fig. 2
figure 2

Population-based meta-heuristic algorithm

Population-based optimization methods are inspired by human phenomena, collective intelligence, evolutionary concepts and physical phenomena (Shen et al. 2016; Wang et al. 2017; Zhang et al. 2018; Rizk-Allah 2018). Many studies have tried to classify optimization algorithms by their source of inspiration; Fig. 3 shows one such classification. These algorithms start with a random initial population (Heidari et al. 2017; Mafarja et al. 2018a), which is directed toward the optimal areas of the search space by search mechanisms (Aljarah et al. 2018; Mafarja et al. 2018b). The search process involves two stages: exploration and exploitation. In the exploration stage, a well-designed algorithm with a rich random component should explore different parts of the search space. The exploitation stage usually follows exploration: the algorithm focuses on the good solutions found so far and improves them by searching in their neighborhood. A good algorithm should balance the two stages to prevent premature or belated convergence. The structure of optimization algorithms is broadly similar; their main difference lies in how the exploration and exploitation phases are performed, and how the two phases are balanced is another indicator that differentiates the performance of algorithms.

Fig. 3
figure 3

Categorization of meta-heuristic algorithms

2 Related works

This section introduces a number of popular optimization algorithms. These methods include the genetic algorithm (GA) (Holland 1967; Holland and Reitman 1977), evolutionary programming (EP) (Fogel et al. 1966; Yao et al. 1999), particle swarm optimization (PSO) (Eberhart and Kennedy 2002), ant colony optimization (ACO) (Colorni et al. 1991), differential evolution (DE) (Storn and Price 1995) and harmony search (HS) (Manjarres et al. 2013). Although these algorithms can solve many real and challenging problems, there are still problems they have not been able to solve; an algorithm that helps solve one set of problems may be ineffective on another. Some of the newer algorithms are the gray wolf optimizer (GWO) (Mirjalili et al. 2014), artificial bee colony (ABC) algorithm (Basturk and Karaboga 2006), firefly algorithm (FA) (Yang 2010), imperialist competitive algorithm (ICA) (Atashpaz-Gargari and Lucas 2007), cuckoo search (CS) algorithm (Yang and Deb 2009, 2010; Rajabioun 2011), gravitational search algorithm (GSA) (Rashedi et al. 2009), charged system search (CSS) (Kaveh and Talatahari 2010), magnetic charged system search (Kaveh et al. 2013), thermal exchange optimization (TEO) (Kaveh and Dadras 2017), ray optimization (RO) algorithm (Kaveh and Khayatazad 2012), colliding bodies optimization (CBO) (Kaveh and Mahdavi 2014), sea lion optimization algorithm (SLOA) (Masadeh et al. 2019), biogeography-based optimizer (BBO) (Simon 2008), dolphin echolocation (Kaveh and Farhoudi 2013) and bat algorithm (BA) (Yang and Gandomi 2012). The ant lion optimizer (ALO) mimics the hunting mechanism of ant lions (Mirjalili 2015a). The whale optimization algorithm (WOA) is inspired by the social behavior of humpback whales and mimics their bubble-net hunting method (Mirjalili and Lewis 2016).
The Harris hawks optimizer (HHO) mimics the cooperative behavior and chasing style of Harris' hawks in nature, known as the surprise pounce (Heidari et al. 2019). The Lévy flight distribution (LFD) algorithm mimics the Lévy flight random walk to find the optimal solution (Houssein et al. 2020). The tunicate swarm algorithm (TSA) is inspired by the swarm behavior of tunicates and their jet propulsion during food search and navigation (Kaur et al. 2020). The wild horse optimizer (WHO) mimics the social behavior of wild horses (Naruei and Keynia 2021a).

In recent years, many optimization methods have been proposed. The question arises why, despite these algorithms, it is still necessary to propose new algorithms or improve existing ones. This question can be answered by the No Free Lunch (NFL) theorem (Wolpert and Macready 1997). The NFL theorem logically proves that no single algorithm can solve all optimization problems. It follows that an optimization algorithm's ability to solve a specific set of problems does not guarantee that it can solve other problems; averaged over all optimization problems, all optimizers perform equally, even when one shows higher performance on a subset of them. The NFL theorem therefore encourages researchers to propose new optimization methods, or to improve existing ones, for subsets of problems in different fields. This theorem motivated us to propose a novel optimization method for solving challenging problems in this area.

This paper proposes a new population-based optimization algorithm called the hunter–prey optimization (HPO) algorithm. This algorithm mimics the behavior of predators such as lions, leopards and wolves, and of prey such as deer and gazelles, following a different scenario from the previously proposed algorithms in this field.

The rest of the paper is organized as follows. Section 3 describes the inspiration and scenario for this work and proposes the HPO algorithm. In Sect. 4, the results of the proposed method on different test functions are presented. In Sect. 5, the performance of the proposed algorithm in solving real-world problems is evaluated. Finally, conclusions and possible directions for future research are presented in Sect. 6.

3 Hunter–prey optimization algorithm

In this section, the inspiration and hunting scenario of the HPO algorithm are first explained, and then the mathematical model and the HPO algorithm are described in detail.

3.1 Inspiration

Nature can be a rich source of inspiration for solving problems. In nature, organisms interact with each other in different ways (Han et al. 2001; Krebs 2009). One of these interactions is the behavior between hunter and prey. Hunter–prey cycles are among the most remarkable observations in population biology, and they remain the subject of serious debate among ecologists (Berryman 2002; Turchin 2003). There are many scenarios of how animals hunt, and some of them have been converted into optimization algorithms. The scenario considered in this paper differs from the others. In our scenario, the hunter searches for prey, and since prey usually move in herds, the hunter chooses a prey that is far from the herd (the average herd position). After the hunter finds its prey, it chases and captures it. At the same time, the prey searches for food, flees when attacked by a predator, and tries to reach a safe place (Berryman 1992; Krohne 2000). We consider this safe place to be the position of the best prey in terms of the fitness function. Figure 4 illustrates these behaviors.

Fig. 4
figure 4

Hunter behavior (left image a), prey behaviors (right image b) and next position adjustment

3.2 Mathematical model and algorithm

As mentioned in the previous section, the general structure of all optimization techniques is the same. First, the initial population is randomly set to \(\vec{X} = \{ \vec{x}_{1} ,\vec{x}_{2} , \ldots ,\vec{x}_{n} \}\), and then the objective function is computed as \(\vec{O} = \{ O_{1} ,O_{2} , \ldots ,O_{n} \}\) for all members of the population. The population is controlled and directed in the search space using a series of rules and strategies inspired by the proposed algorithm. This process is repeated until the algorithm is stopped. In each iteration, the position of each member of the population is updated according to the rules of the proposed algorithm, and the new position is evaluated with the objective function. This process causes the solutions to improve with each iteration. The position of each member of the initial population is randomly generated in the search space by Eq. (1)

$$ x_{i} = rand(1,d).*(ub - lb) + lb $$
(1)

where \(x_{i}\) is the position of the hunter or prey, lb is the minimum value of the problem variables (lower bound), ub is the maximum value of the problem variables (upper bound), and d is the number of variables (dimensions) of the problem. Equation (2) defines the lower and upper bounds of the search space. Note that a problem may have the same or different lower and upper bounds for each of its variables.

$$ lb = [lb_{1} ,\,lb_{2} ,...,\,lb_{d} ],\;ub = [ub_{1} ,\,ub_{2} ,...,\,ub_{d} ] $$
(2)
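As an illustration, the initialization of Eqs. (1) and (2) can be sketched in Python as follows (a minimal sketch with our own function name and example bounds; the paper's experiments were implemented in MATLAB):

```python
import random

def initialize_population(n, d, lb, ub):
    """Eq. (1): each agent's position is drawn uniformly at random
    between the per-variable lower bound lb[j] and upper bound ub[j],
    which may differ across dimensions (Eq. (2))."""
    return [[lb[j] + random.random() * (ub[j] - lb[j]) for j in range(d)]
            for _ in range(n)]

# Example: 5 agents in a 3-dimensional search space bounded by [-10, 10]
pop = initialize_population(5, 3, lb=[-10.0] * 3, ub=[10.0] * 3)
```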

After generating the initial population and determining each agent's position, each solution's fitness is calculated with the objective function, \(O_{i} = f(\vec{x}_{i} )\). The objective may be a quantity to maximize (efficiency, performance, etc.) or to minimize (cost, time, etc.). Evaluating the fitness function tells us which solutions are good or bad, but a single evaluation does not yield the optimal solution: a search mechanism must be defined and repeated many times to guide the search agents toward the optimal position. The search mechanism usually involves two steps: exploration and exploitation. Exploration refers to the algorithm's tendency toward highly random behavior, so that solutions change significantly. These significant changes cause further exploration of the search space and the discovery of its promising areas. Once promising regions have been found, the random behavior must be reduced so that the algorithm can search around those regions; this is exploitation. For the hunter search mechanism, we propose Eq. (3)

$$ x_{i,j} (t + 1) = x_{i,j} (t) + 0.5\left[ {(2CZP_{pos(j)} - x_{i,j} (t)) + (2(1 - C)Z\mu_{(j)} - x_{i,j} (t))} \right]. $$
(3)

Equation (3) updates the hunter position, where \(x_{i,j}(t)\) is the current hunter position, \(x_{i,j}(t+1)\) is the hunter's next position, \(P_{pos}\) is the prey position, \(\mu\) is the mean of all positions, and Z is an adaptive parameter calculated by Eq. (4)

$$ P = \vec{R}_{1} < C\,;\,\,\,\,IDX = (P = = 0);\;Z = R_{2} \otimes IDX + \vec{R}_{3} \otimes (\sim\;IDX) $$
(4)

where \(\vec{R}_{1}\) and \(\vec{R}_{3}\) are random vectors in the range [0, 1], P is a vector of 0s and 1s with length equal to the number of problem variables, \(R_{2}\) is a random number in the range [0, 1], and IDX indexes the elements of the vector \(\vec{R}_{1}\) that satisfy the condition (P == 0). The structure of parameter Z and its relationship with parameter C are shown in Fig. 5. Note that Fig. 5 is only an illustrative example: the numbers are random, and their values and order change in each iteration.

Fig. 5
figure 5

Z parameter structure. R2 is a random number (for example, with value 0.51) and R3 is a random vector (for example, [0.82, 0.19, 0.72, 0.25, 0.11, 0.66]); the dimension j has length 6

C is the balance parameter between exploration and exploitation, whose value decreases from 1 to 0.02 over the course of iterations. C is calculated as follows:

$$ C = 1 - it\left( {\frac{0.98}{{MaxIt}}} \right) $$
(5)
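The adaptive vector Z of Eq. (4) and the balance parameter C of Eq. (5) can be sketched as follows (Python; the helper names are ours, not from the paper):

```python
import random

def balance_parameter(it, max_it):
    """Eq. (5): C decreases linearly from 1 toward 0.02 over the run."""
    return 1.0 - it * (0.98 / max_it)

def adaptive_z(d, C):
    """Eq. (4): P = (R1 < C) elementwise; entries where P == 0 (marked by
    IDX) take the single random number R2, the rest take the random vector R3."""
    R1 = [random.random() for _ in range(d)]
    R2 = random.random()                      # one random number
    R3 = [random.random() for _ in range(d)]  # one random vector
    IDX = [1 if r >= C else 0 for r in R1]    # 1 exactly where P == 0
    return [R2 * IDX[j] + R3[j] * (1 - IDX[j]) for j in range(d)]
```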

where it is the current iteration and MaxIt is the maximum number of iterations. As shown in Fig. 4a, the position of the prey (Ppos) is determined by first calculating the mean of all positions (\(\mu\)) according to Eq. (6) and then the distance of each search agent from this mean position

$$ \mu = \frac{1}{n}\sum\limits_{i = 1}^{n} {\vec{x}_{i} } . $$
(6)

We calculate the distance based on Euclidean distance according to Eq. (7)

$$ D_{euc(i)} = \left( {\sum\limits_{j = 1}^{d} {(x_{i,j} - \mu_{j} )^{2} } } \right)^{\frac{1}{2}} . $$
(7)

According to Eq. (8), the search agent with the maximum distance from the mean of positions is considered prey (Ppos)

$$ \vec{P}_{pos} = \vec{x}_{i} |i\,is\,index\,of\,Max(end)sort(D_{euc} ). $$
(8)

If in every iteration we always considered the search agent with the maximum distance from the mean position (\(\mu\)) to be the prey, the algorithm would converge slowly. According to the hunting scenario, once the hunter captures the prey, the prey dies, and the hunter then moves on to a new prey. To account for this, we propose the decreasing mechanism of Eq. (9)

$$ kbest = round(C \times N) $$
(9)

where N is the number of search agents.

Now, we change Eq. (8) and calculate the prey position as Eq. (10)

$$ \vec{P}_{pos} = \vec{x}_{i} |i\,is\,sorted\,D_{euc} (kbest). $$
(10)

Figure 6 shows how kbest is calculated and the prey (Ppos) selected while the algorithm runs. At the beginning of the run, kbest equals N (the number of search agents), so the search agent farthest from the mean position (\(\mu\)) of the search agents is selected as prey and attacked by the hunter. As shown in Fig. 6, the value of kbest gradually decreases, so that at the end of the run the selected prey is the search agent with the shortest distance from the mean position (\(\mu\)). Note that the search agents are re-sorted in each iteration by their distance from the mean position (\(\mu\)).
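The prey-selection procedure of Eqs. (6)–(10) can be sketched in Python as follows (the function name is ours, and the guard keeping kbest at least 1 is our addition):

```python
import math

def select_prey(positions, C):
    """Eqs. (6)-(10): compute the mean position mu, the Euclidean distance
    of every agent from mu, sort the agents by that distance in ascending
    order, and pick the agent at rank kbest = round(C * N), so the prey
    shifts from the farthest agent (C near 1) to the nearest (C near 0)."""
    n, d = len(positions), len(positions[0])
    mu = [sum(x[j] for x in positions) / n for j in range(d)]       # Eq. (6)
    dist = [math.sqrt(sum((x[j] - mu[j]) ** 2 for j in range(d)))   # Eq. (7)
            for x in positions]
    order = sorted(range(n), key=lambda i: dist[i])
    kbest = max(1, round(C * n))                                    # Eq. (9)
    return positions[order[kbest - 1]]                              # Eq. (10)
```

For example, with agents at distances 3.5, 2.5, 0.5 and 6.5 from the mean, C = 1 selects the farthest agent and C = 0.25 the nearest.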

Fig. 6
figure 6

How to calculate Kbest and select prey (Ppos) during algorithm running

As shown in Fig. 4b, when prey is attacked, it tries to escape and reach its safe place.

We assume that the best safe place is the best global position found so far, because it gives the prey a better chance of survival, and the hunter may choose another prey. Equation (11) is proposed to update the prey position

$$ x_{i,j} (t + 1) = T_{pos(j)} + CZ\cos (2\pi R_{4} ) \times (T_{pos(j)} - x_{i,j} (t)) $$
(11)

where \(x_{i,j}(t)\) is the current position of the prey, \(x_{i,j}(t+1)\) is the next position of the prey, \(T_{pos}\) is the global optimum position found so far, \(Z\) is the adaptive parameter calculated by Eq. (4), and \(R_{4}\) is a random number in the range [−1, 1]. C is the balance parameter between exploration and exploitation, whose value decreases over the algorithm's iterations; it is calculated according to Eq. (5). The cosine function and its random argument allow the next prey position to lie at different radii and angles around the global optimum, which increases the performance of the exploitation phase.

The question that arises here is how to choose the hunter and prey in this algorithm.

To answer this question, we combine Eqs. (3) and (11) into Eq. (12)

$$ x_{i} (t + 1) = \left\{ {\begin{array}{*{20}l} {x_{i} (t) + 0.5\left[ {(2CZP_{pos} - x_{i} (t)) + (2(1 - C)Z\mu - x_{i} (t))} \right]} \hfill & {if\,R_{5} < \beta \;(12a)} \hfill \\ {T_{pos} + CZ\cos (2\pi R_{4} ) \times (T_{pos} - x_{i} (t))} \hfill & {else\;(12b)} \hfill \\ \end{array} } \right. $$
(12)

where R5 is a random number in the range [0, 1], and β is a regulatory parameter whose value in this study is set to 0.1. If the R5 value is smaller than β, the search agent is considered a hunter, and the next position of the search agent is updated with Eq. (12a); if the R5 value is larger than β, the search agent will be considered prey, and the next position of the search agent will be updated with Eq. (12b). The flowchart of the proposed algorithm is shown in Fig. 7.
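Putting the pieces together, one position update per Eq. (12) can be sketched as follows (Python; the names are ours, and Z is assumed to have been precomputed per Eq. (4)):

```python
import math
import random

BETA = 0.1  # regulatory parameter beta, set to 0.1 in the paper

def update_position(x, prey, mu, target, C, Z):
    """Eq. (12): with probability BETA the agent acts as a hunter and moves
    toward the prey and the mean position (Eq. 12a); otherwise it acts as
    prey and moves around the global best position target (Eq. 12b)."""
    d = len(x)
    if random.random() < BETA:                               # hunter, Eq. (12a)
        return [x[j] + 0.5 * ((2 * C * Z[j] * prey[j] - x[j])
                              + (2 * (1 - C) * Z[j] * mu[j] - x[j]))
                for j in range(d)]
    R4 = random.uniform(-1.0, 1.0)                           # prey, Eq. (12b)
    return [target[j] + C * Z[j] * math.cos(2 * math.pi * R4)
            * (target[j] - x[j]) for j in range(d)]
```

Note that as C approaches 0, the prey branch collapses onto the global best position, which matches the intended shift from exploration to exploitation.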

Fig. 7
figure 7

Flowchart of the hunter–prey optimization

3.3 Assumptions of the HPO algorithm

Theoretically, the proposed HPO algorithm can provide suitable solutions to various problems for the following reasons:

  • Exploring the search space by selecting the farthest search agent relative to the average search agent’s position as prey.

  • Exploring the search space is guaranteed by the random selection of hunter and prey and hunters’ random movements around the prey.

  • Due to random movements and random selection of hunter and prey, the probability of getting stuck in a local optimal is low.

  • The prey-selection mechanism, which chooses the prey with the greatest distance from the mean search agent position, is adaptively relaxed over the iterations, ensuring both the convergence and the exploitation of the HPO algorithm.

  • The magnitude of the hunter and prey movements is reduced over the iterations by the adaptive parameter, ensuring the HPO algorithm’s convergence.

  • During optimization, the hunter gradually moves toward the best prey position, and the balance between the exploration and exploitation phases is maintained.

  • The calculation of the adaptive parameter and the random parameter for each hunter and prey in each dimension increases the population diversity.

  • Each search agent (hunter or prey) in each iteration is compared with the best solution obtained so far, and the best solution is stored.

  • The hunter directs the prey to promising positions in the search space.

  • The HPO algorithm is a non-gradient approximation algorithm that treats the problem as a black box.

  • The HPO algorithm has few tuning parameters, and some of them are adjusted adaptively.

3.4 Computational complexity analysis

In general, the computational complexity of the HPO algorithm depends on four components, namely initialization, updating of hunters, updating of prey and fitness evaluation. With N search agents, the computational complexity of the initialization process is O(N). The computational complexity of the update process, which includes updating the position vectors of all prey and hunters to find the best position, is O(T × N) + O((1 − β) × T × N × D) + O(β × T × N × D), where T denotes the maximum number of iterations, D denotes the number of problem variables, and β is the regulatory parameter, set to 0.1 in this study. Therefore, the total complexity of HPO is O(N × (T + (1 − β)TD + βTD + 1)).

4 Results and discussion

In this section, the HPO algorithm is evaluated on 43 test functions and compared with advanced swarm-based optimization algorithms. In general, benchmark functions can be divided into four groups: unimodal, multi-modal, hybrid and composition functions. The first 13 benchmark functions are classical functions used by many researchers (Mirjalili 2016; Saremi et al. 2017); of these, the first seven are unimodal and the remaining six are multi-modal. The unimodal functions (f1–f7) are suitable for assessing an algorithm’s exploitation because they have one global optimum and no local optima. The multi-modal functions (f8–f13) have many local optima and are useful for examining exploration and local-optima avoidance. These benchmark functions are given in Tables 1 and 2, where DIM represents the dimension of the function, RANGE is the boundary of the function’s search space, and FMIN is the optimal value. The hybrid and composition functions are combinations of different shifted and rotated unimodal and multi-modal test functions from the CEC2017 session (Awad et al. 2017). The search spaces of these functions are very challenging; they closely resemble real search spaces and are useful for evaluating algorithms in terms of the balance between exploration and exploitation. To further evaluate the proposed algorithm, the HPO algorithm is applied to nine real engineering problems: the rolling element bearing problem, reducer design problem, cantilever beam design, multi-plate disk clutch brake, welded beam, three-bar truss, step-cone pulley problem, pressure vessel design and tension/compression spring. For a fair comparison, each algorithm is run 30 times. The Wilcoxon statistical test examines the null hypothesis that two samples come from the same distribution and can be used to determine whether two sets of solutions are statistically different.
The Wilcoxon test yields a p-value, which measures the significance level of the difference between algorithms: if the p-value is less than 0.05, the difference between the two algorithms is statistically significant. A nonparametric Wilcoxon statistical test was performed over the 30 runs to obtain statistically significant conclusions. Such statistical tests should be performed because of the random nature of meta-heuristic algorithms (García et al. 2009; Derrac et al. 2011).
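For readers who want to reproduce this comparison, the rank-sum test can be sketched as follows (a minimal pure-Python version using the large-sample normal approximation, with average ranks for ties and no tie correction; in practice one would use a statistics package):

```python
import math

def rank_sum_p_value(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation).
    Returns the p-value for the null hypothesis that samples a and b
    come from the same distribution."""
    n1, n2 = len(a), len(b)
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    # Assign average ranks to tied values
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        j = i
        while j < n1 + n2 and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2.0   # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg_rank
        i = j
    # Rank sum of sample a, compared with its null mean and std. deviation
    W = sum(ranks[k] for k in range(n1 + n2) if combined[k][1] == 0)
    mean_w = n1 * (n1 + n2 + 1) / 2.0
    sd_w = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mean_w) / sd_w
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Two clearly separated samples give a p-value well below 0.05, whereas identical samples give a p-value of 1.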

Table 1 Unimodal benchmark functions
Table 2 Multi-modal benchmark functions

4.1 Experimental setup

The experiments were run on a PC with Windows 10 Professional (64-bit) and 12 GB of RAM. The algorithms were implemented in MATLAB R2017b. The maximum number of iterations and the population size in all methods are 500 and 30, respectively. For a fair comparison, all methods are run 30 times independently and then compared using statistical indicators such as the minimum, maximum, average and standard deviation. To verify its performance, the proposed algorithm is compared with well-known and recent algorithms, including the Harris hawks optimizer, particle swarm optimization, ant lion optimizer, whale optimization algorithm, Lévy flight distribution and tunicate swarm algorithm. The parameters of the algorithms are set according to Table 3.

Table 3 Set parameters of algorithms

4.2 Convergence analysis of the proposed HPO algorithm

In optimization techniques, the search agents make sudden and abrupt movements at the beginning of the search process (exploration) and gradually shrink their movements (exploitation) (van den Bergh and Engelbrecht 2006). Figure 8 shows the convergence behavior of the proposed HPO algorithm. The first column shows the three-dimensional surface of the test functions. The search history of the search agents (in the first two dimensions of the solution only) is shown in the second column. While dealing with different cases, HPO exhibits a similar pattern: the search agents first maximize diversity while exploring desirable areas and then exploit around the best areas. The third column shows the search trajectories. In the early stages of the run, the movements of the search agents are abrupt; the movements gradually slow until, in the final stages, the search agents gather at one point. This is due to the adaptive reduction in prey selection given by Eq. (9), which makes the search agents gradually converge to the best point over the iterations. This behavior ensures that the HPO algorithm transitions from the exploration phase to the exploitation phase during optimization. The fourth column in Fig. 8 shows the average fitness of all search agents in each iteration, indicating that the rest of the population follows the same convergence trend. The fifth column is the convergence curve, which shows the best value in each iteration; fast convergence can also be seen in this graph.

Fig. 8
figure 8

Convergence behavior and search history of the proposed HPO algorithm

4.3 HPO algorithm exploitation analysis

The results in Table 4 show that the hunter–prey optimizer performs better on most of the unimodal functions (F1, F2, F3, F4 and F6) than the other algorithms compared in Table 4. On F5 and F7, HPO performed better than all compared algorithms except HHO. The p-values and number of function evaluations (NFE) in Table 5 also show that the proposed algorithm’s superiority is significant in most cases. Notably, the HHO algorithm achieved its results with about 30,000 function evaluations, whereas the HPO algorithm used only 15,000. This demonstrates the exploitation ability of the HPO algorithm, which stems from the movement of the prey toward the best position. An example is shown in Fig. 9.

Table 4 Results for the unimodal test functions
Table 5 P-values of the Wilcoxon test over all runs and number of function evaluations (NFE)
Fig. 9
figure 9

Convergence curve of the methods on classical functions

4.4 HPO algorithm exploration analysis

The results in Table 6 indicate that the HPO method provides competitive results compared with the other methods. On the F9, F10, F11 and F12 test functions, the proposed HPO algorithm performs better than the others. The p-values in Table 7 also show that the superiority of the HPO algorithm is significant in most cases. This indicates that the HPO method is capable of exploration, because the hunter moves toward a prey that is far from the group, which allows the search agents (hunters) to explore different areas of the search space.

Table 6 Results for the multi-modal benchmark functions
Table 7 P-values of the Wilcoxon test over all runs and number of function evaluations (NFE)

The results in Table 6 for the F9, F10 and F11 functions reveal that the proposed HPO algorithm and the HHO algorithm obtained the same results. The convergence diagrams of these functions in Fig. 9 indicate that the HPO algorithm reached the optimal point in fewer iterations than the HHO algorithm. Also, the NFE count of the HPO algorithm is less than half that of the HHO algorithm, showing that HPO performs much better than HHO.

4.5 Scalability analysis of the HPO algorithm

This section examines the effect of scalability on different test functions using the proposed HPO algorithm; the dimension of the test functions is varied over 10, 30, 50 and 100. The scalability results for all methods are shown in Fig. 10, which reveals the impact of dimension on solution quality and demonstrates the efficacy of the HPO algorithm on problems of both lower and higher dimension. This is due to the better capability of the proposed HPO for balancing exploitation and exploration.

Fig. 10
figure 10

Scalability of different methods on classical functions

To scrutinize the performance of the HPO algorithm further, the algorithms were tested on the classical functions (F1–F13) in high dimension (1000 variables), and the execution time of each algorithm was measured and is given in Table 8. The results in Table 8 show that the proposed HPO algorithm achieves competitive performance in solving high-dimensional classical functions compared to the other algorithms. According to the average execution times on the classical functions, HPO runs faster than the WOA, ALO, LFD and TSA algorithms, and it ran faster than the HHO algorithm on all functions except F1, F2 and F4.

Table 8 Comparison of average running time (with 1000 dimensions and 30 independent runs)

4.6 The performance of the hunter and prey optimizer on the CEC2017

In this section, we evaluate the performance of the proposed HPO algorithm on the CEC2017 challenging function set. The CEC2017 set includes 30 functions, of which the first ten are unimodal and multi-modal functions, the second ten are hybrid functions, and the last ten are composition functions (Awad et al. 2017). Details of these functions are given in Table 9.

Table 9 CEC 2017 test functions (Range = [-100, 100], Dimension = 10)

The hunter–prey optimization (HPO) was tested on the CEC2017 test functions and compared with the whale optimization algorithm (WOA), particle swarm optimization (PSO), ant lion optimizer (ALO), Harris hawks optimizer (HHO), tunicate swarm algorithm (TSA) and Lévy flight distribution (LFD). Each algorithm was run 30 times with 60 search agents and 1000 iterations, and the dimension of all functions was set to 10. The functions were divided into three groups: unimodal–multi-modal, hybrid and composition. The results for the unimodal and multi-modal functions are given in Table 10. The Friedman test detects differences in treatments or algorithms across multiple test attempts by ranking the data within each row (or block) and testing for differences across columns. We adopt the Friedman test to compare the overall performance of each algorithm on each group of CEC2017 benchmark problems (Derrac et al. 2011); it allows us to determine which algorithms are significantly better or worse. The results in Table 10 show that the PSO algorithm performed better than the other algorithms; however, the HPO algorithm ranks second and found the value closest to the optimum on most functions.

Table 10 Results for the CEC2017 test functions (unimodal and multi-modal)

The algorithms were run under the same conditions as for the first group on the second group of CEC2017 functions, the hybrid functions. The results of this experiment are shown in Table 11. From these results, it can be seen that the HPO algorithm has the best performance on most hybrid functions compared to the PSO, WOA, ALO, LFD, TSA and HHO algorithms. The proposed algorithm took first rank in solving the CEC2017 hybrid functions, which shows that the HPO algorithm strikes a good balance between the exploration and exploitation phases.

Table 11 Results for the CEC2017 hybrid test functions

Table 12 shows the results of running the algorithms on the third group of functions, the composition functions. The results show that the HPO and PSO algorithms outperform the other algorithms on the composition functions, with the proposed algorithm ranked first.

Table 12 Results for the CEC2017 composition test functions

The nonparametric Wilcoxon rank-sum test can effectively evaluate the overall performance of algorithms. The test was performed at the 95% confidence level (α = 0.05) to detect significant differences between the results obtained by the different algorithms. The results of the Wilcoxon test are shown in Table 13.
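A pairwise comparison of this kind can be sketched as follows; the run data are synthetic and the algorithm names illustrative, not the samples behind Table 13. `scipy.stats.ranksums` is the two-sided Wilcoxon rank-sum test.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)

# Hypothetical best-fitness values from 30 independent runs of two
# algorithms on the same function.
hpo_runs = rng.normal(1.00, 0.05, 30)
woa_runs = rng.normal(1.30, 0.10, 30)

# H0: both samples come from the same distribution. A p-value below
# alpha = 0.05 indicates a statistically significant difference.
stat, p_value = ranksums(hpo_runs, woa_runs)
significant = p_value < 0.05
print(f"p = {p_value:.2e}, significant = {significant}")
```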

Table 13 P-values of the Wilcoxon rank-sum test over all runs

The convergence curves of the methods on all 30 functions are shown in Fig. 11. On most functions, the proposed algorithm converges better than the other algorithms.

Fig. 11
figure 11

Convergence curves of the methods on the CEC2017 test functions

Figure 12 shows the Friedman mean rank of all compared methods for the unimodal and multi-modal functions (group 1), hybrid functions (group 2) and composition functions (group 3). As Fig. 12 shows, HPO exhibits reliable behavior on two of the groups (the hybrid and composition functions) compared with the other algorithms.

Fig. 12
figure 12

Friedman mean rank (CEC2017)

5 Performance of HPO algorithm on constrained problems

Engineers and decision makers face problems whose complexity increases daily. These problems arise in various fields such as operations research, mechanical systems, image processing and electronics design (Kumar et al. 2020). In all these areas, the problem can be expressed as an optimization problem, in which one or more objective functions are defined that should be minimized or maximized over all the parameters. Constraints are usually defined as well; every solution must satisfy them, otherwise the solution is infeasible. There are nine real-world engineering design problems used by many researchers, namely the speed reducer design, rolling element bearing, cantilever beam design, multi-plate disk clutch brake, welded beam, three-bar truss, step-cone pulley, pressure vessel design and tension/compression spring problems. Unlike basic test functions, real-world problems have equality and inequality constraints, so HPO must be equipped with a constraint handling method to optimize such problems.

The performance of an algorithm on constrained optimization problems is significantly influenced by the employed constraint handling technique (CHT). In recent decades, many CHTs have been developed for optimization algorithms; popular examples include the death penalty, co-evolutionary, adaptive, annealing, dynamic and static approaches (Coello Coello 2002). The death penalty function is the simplest method: it assigns a large objective value to infeasible solutions, so the optimization algorithm discards them during the search. The advantages of this method are its low computational cost and simplicity (Mirjalili and Lewis 2016). We used the death penalty method for the comparison because most of the compared algorithms use it as well. The results of the HPO algorithm were compared with algorithms that have previously solved these problems. For all problems, the number of search agents is set to 30 and the maximum number of iterations to 500. Details of the problems are given in Table 14.
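A minimal sketch of the death penalty idea is given below; the helper name and toy problem are illustrative, not the paper's implementation. Any candidate violating a constraint g(x) ≤ 0 simply receives a huge cost, so a minimizer never selects it.

```python
def death_penalty(objective, constraints, penalty=1e10):
    """Wrap `objective` so that any solution violating a constraint
    g(x) <= 0 receives a fixed large penalty instead of its true cost.
    Illustrative helper, not the paper's code."""
    def penalized(x):
        if any(g(x) > 0 for g in constraints):
            return penalty  # infeasible: effectively removed from the search
        return objective(x)
    return penalized

# Toy example: minimize x0 + x1 subject to g(x) = 1 - x0*x1 <= 0.
f = death_penalty(lambda x: x[0] + x[1], [lambda x: 1 - x[0] * x[1]])
feasible_cost = f([2.0, 2.0])    # g = -3 <= 0, so the true cost 4.0 is returned
infeasible_cost = f([0.5, 0.5])  # g = 0.75 > 0, so the penalty is returned
```

The wrapped function can then be handed to HPO (or any of the compared algorithms) unchanged, which is why this CHT requires no modification of the optimizer itself.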

Table 14 Details of the nine constrained problems. h is the number of equality constraints, g is the number of inequality constraints, and D is the number of problem variables

These real-world problems have been used by many researchers, each of whom may have tested them under different conditions, so only the best solution obtained by each algorithm is reported.

5.1 Three-bar truss design problem

Truss design is one of the most important problems in the field of civil engineering. The goal is to design a truss of minimum weight that violates none of the buckling, deflection and stress constraints. The structure of this problem and its parameters are shown in Fig. 13.

Fig. 13
figure 13

Three-bar truss design problem (Mirjalili et al. 2016)

The formulation of this problem and its constraints is given in Eq. (13).

$$ \begin{aligned} & Minimize:\;f(A_{1} ,A_{2} ) = (2\sqrt 2 A_{1} + A_{2} ) \times l \\ & Subject\,to: \\ & g_{1} = \frac{{\sqrt 2 A_{1} + A_{2} }}{{\sqrt 2 A_{1}^{2} + 2A_{1} A_{2} }}P - \sigma \le 0, \\ & g_{2} = \frac{{A_{2} }}{{\sqrt 2 A_{1}^{2} + 2A_{1} A_{2} }}P - \sigma \le 0, \\ & g_{3} = \frac{1}{{A_{1} + \sqrt 2 A_{2} }}P - \sigma \le 0, \\ & where \\ & 0 \le A_{1} \le 1\;and\;0 \le A_{2} \le 1;\quad l = 100\,{\text{cm}}, \\ & P = 2\,{\text{kN/cm}}^{2} ,\quad \sigma = 2\,{\text{kN/cm}}^{2} . \\ \end{aligned} $$
(13)
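Eq. (13) is easy to check numerically. The sketch below evaluates the weight and constraints at a design close to the optimum commonly reported in the literature, A₁ ≈ 0.7887, A₂ ≈ 0.4082 (function names are illustrative):

```python
import math

L_BAR, P, SIGMA = 100.0, 2.0, 2.0  # l, P and sigma from Eq. (13)

def truss_weight(a1, a2):
    """Objective of Eq. (13): structure weight (2*sqrt(2)*A1 + A2) * l."""
    return (2 * math.sqrt(2) * a1 + a2) * L_BAR

def truss_constraints(a1, a2):
    """Return (g1, g2, g3); each must be <= 0 for a feasible design."""
    denom = math.sqrt(2) * a1 ** 2 + 2 * a1 * a2
    g1 = (math.sqrt(2) * a1 + a2) / denom * P - SIGMA
    g2 = a2 / denom * P - SIGMA
    g3 = 1.0 / (a1 + math.sqrt(2) * a2) * P - SIGMA
    return g1, g2, g3

a1, a2 = 0.7887, 0.4082        # near-optimal design from the literature
w = truss_weight(a1, a2)       # ~263.9, matching the optima reported in Table 15
g = truss_constraints(a1, a2)  # all <= 0; g1 is active (~0) at the optimum
```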

The performance of the proposed HPO algorithm on this problem was compared with the cuckoo search algorithm (CS) (Gandomi et al. 2013), differential evolution with dynamic stochastic selection (DEDS) (Zhang et al. 2008), grasshopper optimization algorithm (GOA) (Saremi et al. 2017), Harris hawks optimizer (HHO) (Heidari et al. 2019), mine blast algorithm (MBA) (Sadollah et al. 2013), moth-flame optimization (MFO) algorithm (Mirjalili 2015c), multi-verse optimizer (MVO) (Mirjalili et al. 2016), poor and rich optimization (PRO) algorithm (Samareh Moosavi and Bardsiri 2019), particle swarm optimization with differential evolution (PSO-DE) (Liu et al. 2010), Ray and Saini (Ray and Saini 2001), sine cosine gray wolf optimizer (SC-GWO) (Gupta et al. 2020) and salp swarm algorithm (SSA) (Mirjalili et al. 2017). The comparison results in Table 15 show that the proposed HPO algorithm offers results competitive with the HHO, SSA, DEDS and PSO-DE algorithms and performs better than the remaining compared methods.

Table 15 Results for the three-bar truss design problem

5.2 Speed reducer design problem

The speed reducer design problem has seven design variables (Gandomi et al. 2013), as shown in Fig. 14. The objective of this test problem is to minimize the weight of a speed reducer subject to constraints on surface stress, bending stress, stresses in the shafts and transverse deflections of the shafts (Mezura-Montes and Coello 2005). The formulation of this problem and its constraints is given in Eq. (14)

$$ \begin{aligned} & Consider\;\overline{z} = [z_{1} \,z_{2} \,z_{3} \,z_{4} \,z_{5} \,z_{6} \,z_{7} ] = [b\,m\,p\,l_{1} \,l_{2} \,d_{1} \,d_{2} ], \\ & Minimize\;f(\overline{z}) = 0.7854z_{1} z_{2}^{2} (3.3333z_{3}^{2} + 14.9334z_{3} - 43.0934) \\ & \quad - 1.508z_{1} (z_{6}^{2} + z_{7}^{2} ) + 7.4777(z_{6}^{3} + z_{7}^{3} ) + 0.7854(z_{4} z_{6}^{2} + z_{5} z_{7}^{2} ), \\ & Subject\,to: \\ & g_{1} (\overline{z}) = \frac{27}{{z_{1} z_{2}^{2} z_{3} }} - 1 \le 0, \\ & g_{2} (\overline{z}) = \frac{397.5}{{z_{1} z_{2}^{2} z_{3}^{2} }} - 1 \le 0, \\ & g_{3} (\overline{z}) = \frac{{1.93z_{4}^{3} }}{{z_{2} z_{6}^{4} z_{3} }} - 1 \le 0, \\ & g_{4} (\overline{z}) = \frac{{1.93z_{5}^{3} }}{{z_{2} z_{7}^{4} z_{3} }} - 1 \le 0, \\ & g_{5} (\overline{z}) = \frac{{[(745(z_{4} /z_{2} z_{3} ))^{2} + 16.9 \times 10^{6} ]^{1/2} }}{{110z_{6}^{3} }} - 1 \le 0, \\ & g_{6} (\overline{z}) = \frac{{[(745(z_{5} /z_{2} z_{3} ))^{2} + 157.5 \times 10^{6} ]^{1/2} }}{{85z_{7}^{3} }} - 1 \le 0, \\ & g_{7} (\overline{z}) = \frac{{z_{2} z_{3} }}{40} - 1 \le 0, \\ & g_{8} (\overline{z}) = \frac{{5z_{2} }}{{z_{1} }} - 1 \le 0, \\ & g_{9} (\overline{z}) = \frac{{z_{1} }}{{12z_{2} }} - 1 \le 0, \\ & g_{10} (\overline{z}) = \frac{{1.5z_{6} + 1.9}}{{z_{4} }} - 1 \le 0, \\ & g_{11} (\overline{z}) = \frac{{1.1z_{7} + 1.9}}{{z_{5} }} - 1 \le 0, \\ & where, \\ & 2.6 \le z_{1} \le 3.6,\,\,0.7 \le z_{2} \le 0.8,\,\,17 \le z_{3} \le 28,\,\,7.3 \le z_{4} \le 8.3, \\ & 7.3 \le z_{5} \le 8.3,\,\,2.9 \le z_{6} \le 3.9,\,\,5.0 \le z_{7} \le 5.5. \\ \end{aligned} $$
(14)
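As a numerical check of Eq. (14), the sketch below evaluates the objective and the eleven constraints at the optimum commonly reported in the literature, z ≈ (3.5, 0.7, 17, 7.3, 7.7153, 3.3505, 5.2867) with f ≈ 2994.5 (function names are illustrative):

```python
import math

def reducer_weight(z):
    """Objective of Eq. (14): speed reducer weight."""
    z1, z2, z3, z4, z5, z6, z7 = z
    return (0.7854 * z1 * z2**2 * (3.3333 * z3**2 + 14.9334 * z3 - 43.0934)
            - 1.508 * z1 * (z6**2 + z7**2)
            + 7.4777 * (z6**3 + z7**3)
            + 0.7854 * (z4 * z6**2 + z5 * z7**2))

def reducer_constraints(z):
    """Return g1..g11 of Eq. (14); each must be <= 0."""
    z1, z2, z3, z4, z5, z6, z7 = z
    return (
        27.0 / (z1 * z2**2 * z3) - 1,
        397.5 / (z1 * z2**2 * z3**2) - 1,
        1.93 * z4**3 / (z2 * z6**4 * z3) - 1,
        1.93 * z5**3 / (z2 * z7**4 * z3) - 1,
        math.sqrt((745 * z4 / (z2 * z3))**2 + 16.9e6) / (110 * z6**3) - 1,
        math.sqrt((745 * z5 / (z2 * z3))**2 + 157.5e6) / (85 * z7**3) - 1,
        z2 * z3 / 40 - 1,
        5 * z2 / z1 - 1,
        z1 / (12 * z2) - 1,
        (1.5 * z6 + 1.9) / z4 - 1,
        (1.1 * z7 + 1.9) / z5 - 1,
    )

z_best = (3.5, 0.7, 17.0, 7.3, 7.7153, 3.3505, 5.2867)  # literature optimum
f_best = reducer_weight(z_best)           # ~2994.5
g_best = reducer_constraints(z_best)      # all ~<= 0 (g5, g6 active)
```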
Fig. 14
figure 14

Speed reducer design (Hassan et al. 2005)

The proposed HPO algorithm was tested on this problem, and the results were compared with artificial ecosystem-based optimization (AEO) (Zhao et al. 2020), chaotic multi-verse optimization (CMVO) (Sayed et al. 2018), Coot optimization algorithm (COOT) (Naruei and Keynia 2021b), emperor penguin optimizer (EPO) (Dhiman and Kumar 2018), genetic algorithm (GA), gray prediction evolution algorithm based on accelerated even (GPEAae) (Gao et al. 2020), gray wolf optimizer (GWO) (Mirjalili et al. 2014), sine cosine gray wolf optimizer (SC-GWO) (Gupta et al. 2020), artificial bee colony with enhanced food locations (I-ABC) (Sharma and Abraham 2020), spotted hyena optimizer (SHO) (Dhiman and Kumar 2017) and tunicate swarm algorithm (TSA) (Kaur et al. 2020). The compared results are shown in Table 16. As the results show, the proposed HPO algorithm performs very well on this problem and has found a better optimal value than the other compared methods.

Table 16 Comparison results for speed reducer design problem

5.3 Cantilever beam design problem

Cantilever beams are among the most important problems in the fields of mechanical and civil engineering. The purpose of this problem is to minimize the weight of the beam. As shown in Fig. 15, the cantilever beam consists of five hollow elements with square cross sections. Each element is defined by one variable, and the wall thickness is the same for all of them. The problem has a single constraint that must not be violated. The formulation of this problem and its constraint is given in Eq. (15)

$$ \begin{aligned} & Consider\;\overline{x} = [x_{1} \,x_{2} \,x_{3} \,x_{4} \,x_{5} ], \\ & Minimize:\;f(\overline{x}) = 0.6224(x_{1} + x_{2} + x_{3} + x_{4} + x_{5} ), \\ & Subject\,to:\;g(\overline{x}) = \frac{61}{{x_{1}^{3} }} + \frac{37}{{x_{2}^{3} }} + \frac{19}{{x_{3}^{3} }} + \frac{7}{{x_{4}^{3} }} + \frac{1}{{x_{5}^{3} }} \le 1, \\ & 0.01 \le x_{1} ,x_{2} ,x_{3} ,x_{4} ,x_{5} \le 100. \\ \end{aligned} $$
(15)
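Eq. (15) can be verified numerically. The sketch below evaluates the constraint at the near-optimal design commonly reported in the literature, x ≈ (6.016, 5.309, 4.494, 3.502, 2.153), where g(x) is active (≈ 1); note that the reported weight depends on the cost coefficient, which Eq. (15) prints as 0.6224 while some sources use 0.0624 (function names are illustrative):

```python
def cantilever_weight(x, coeff=0.6224):
    # Cost coefficient as printed in Eq. (15); some formulations use 0.0624.
    return coeff * sum(x)

def cantilever_constraint(x):
    """g(x) of Eq. (15); feasibility requires g(x) <= 1."""
    numerators = (61.0, 37.0, 19.0, 7.0, 1.0)
    return sum(c / xi**3 for c, xi in zip(numerators, x))

x_best = (6.016, 5.309, 4.494, 3.502, 2.153)  # literature near-optimum
g_val = cantilever_constraint(x_best)          # ~1.0: the constraint is active
```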
Fig. 15
figure 15

Cantilever beam design problem (Mirjalili et al. 2016)

The performance of the proposed HPO algorithm on this problem was compared with AEO, ALO, COOT, CS, GPEAae, MVO, the interactive autodidactic school (IAS) (Jahangiri et al. 2020) and symbiotic organisms search (SOS) (Cheng and Prayogo 2014). The comparison results in Table 17 show that the proposed HPO algorithm solves this problem with the lowest cost among the compared methods.

Table 17 Comparison results for the cantilever design problem

5.4 Welded beam design

This test problem’s objective is to minimize the fabrication cost of the welded beam shown in Fig. 16. The problem has constraints on the end deflection of the beam (δ), the buckling load on the bar (Pc), the bending stress in the beam (σ) and the shear stress in the weld (τ), as well as side constraints.

Fig. 16
figure 16

Welded beam design (Khalilpourazari and Khalilpourazary 2019)

The problem variables are the thickness of the bar (b), the height of the bar (t), the length of the attached part of the bar (l) and the thickness of the weld (h). The formulation of this problem and its constraints is given in Eq. (16)

$$ \begin{aligned} & \vec{x} = [x_{1} \,x_{2} \,x_{3} \,x_{4} ] = [h\,l\,t\,b], \\ & Minimize:\;f(\vec{x}) = 1.10471x_{1}^{2} x_{2} + 0.04811x_{3} x_{4} (14.0 + x_{2} ), \\ & Subject\,to: \\ & g_{1} (\vec{x}) = \tau (\vec{x}) - \tau_{\max } \le 0, \\ & g_{2} (\vec{x}) = \sigma (\vec{x}) - \sigma_{\max } \le 0, \\ & g_{3} (\vec{x}) = \delta (\vec{x}) - \delta_{\max } \le 0, \\ & g_{4} (\vec{x}) = x_{1} - x_{4} \le 0, \\ & g_{5} (\vec{x}) = P - P_{c} (\vec{x}) \le 0, \\ & g_{6} (\vec{x}) = 0.125 - x_{1} \le 0, \\ & g_{7} (\vec{x}) = 1.10471x_{1}^{2} x_{2} + 0.04811x_{3} x_{4} (14.0 + x_{2} ) - 5.0 \le 0, \\ & 0.1 \le x_{1} \le 2,\;0.1 \le x_{2} \le 10,\;0.1 \le x_{3} \le 10,\;0.1 \le x_{4} \le 2, \\ & where \\ & \tau (\vec{x}) = \sqrt {(\tau ^{\prime})^{2} + 2\tau ^{\prime}\tau ^{\prime\prime}\frac{{x_{2} }}{2R} + (\tau ^{\prime\prime})^{2} } , \\ & \tau ^{\prime} = \frac{P}{{\sqrt 2 x_{1} x_{2} }},\;\tau ^{\prime\prime} = \frac{MR}{J},\;M = P\left(L + \frac{{x_{2} }}{2}\right), \\ & R = \sqrt {\frac{{x_{2}^{2} }}{4} + \left( {\frac{{x_{1} + x_{3} }}{2}} \right)^{2} } , \\ & J = 2\left\{ {\sqrt 2 x_{1} x_{2} \left[ {\frac{{x_{2}^{2} }}{12} + \left( {\frac{{x_{1} + x_{3} }}{2}} \right)^{2} } \right]} \right\}, \\ & \sigma (\vec{x}) = \frac{6PL}{{x_{4} x_{3}^{2} }},\;\delta (\vec{x}) = \frac{{6PL^{3} }}{{Ex_{3}^{2} x_{4} }}, \\ & P_{c} (\vec{x}) = \frac{{4.013E\sqrt {x_{3}^{2} x_{4}^{6} /36} }}{{L^{2} }}\left(1 - \frac{{x_{3} }}{2L}\sqrt{\frac{E}{4G}} \right), \\ & P = 6000\,{\text{lb}},\;L = 14\,{\text{in}}.,\;\delta_{\max } = 0.25\,{\text{in}}., \\ & E = 30 \times 10^{6} \,{\text{psi}},\;G = 12 \times 10^{6} \,{\text{psi}}, \\ & \tau_{\max } = 13600\,{\text{psi}},\;\sigma_{\max } = 30000\,{\text{psi}}. \\ \end{aligned} $$
(16)

The optimal results of HPO versus those attained by AEO, an effective co-evolutionary differential evolution (CDE) (Huang et al. 2007), CMVO, GPEAae, GSA, HHO, HS, I-ABC, IAS, LFD, SHO and TSA are given in Table 18. The results show that the proposed HPO algorithm found a better optimal value than the other compared methods; notably, the optimal value found by the HPO algorithm differs markedly from those found by the other methods.

Table 18 Results for the welded beam

5.5 Tension/Compression spring design problem

The next engineering test problem is the tension/compression spring design problem. The goal is to minimize the weight of a spring with three design variables, namely the number of active coils (N), the mean coil diameter (D) and the wire diameter (d) (Arora 2017). Figure 17 shows the details of the spring and its parameters. The spring design problem has a number of inequality constraints, which are given in Eq. (17)

$$ \begin{aligned} & \vec{x} = [x_{1} \,x_{2} \,x_{3} ] = [d\,D\,N], \\ & Minimize:\;f(\vec{x}) = (x_{3} + 2)x_{2} x_{1}^{2} , \\ & Subject\,to: \\ & g_{1} (\vec{x}) = 1 - \frac{{x_{2}^{3} x_{3} }}{{71785x_{1}^{4} }} \le 0, \\ & g_{2} (\vec{x}) = \frac{{4x_{2}^{2} - x_{1} x_{2} }}{{12566(x_{2} x_{1}^{3} - x_{1}^{4} )}} + \frac{1}{{5108x_{1}^{2} }} - 1 \le 0, \\ & g_{3} (\vec{x}) = 1 - \frac{{140.45x_{1} }}{{x_{2}^{2} x_{3} }} \le 0, \\ & g_{4} (\vec{x}) = \frac{{x_{1} + x_{2} }}{1.5} - 1 \le 0, \\ & 0.05 \le x_{1} \le 2.00,\;0.25 \le x_{2} \le 1.30,\;2.00 \le x_{3} \le 15.0. \\ \end{aligned} $$
(17)
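The spring design objective and constraints can be sketched as below; the code evaluates them at the near-optimal design commonly reported in the literature, (d, D, N) ≈ (0.051689, 0.356718, 11.288966) with weight ≈ 0.012665 (function names are illustrative; g2 includes the trailing −1 of the standard formulation):

```python
def spring_weight(d, D, N):
    """Objective of Eq. (17): f = (N + 2) * D * d^2."""
    return (N + 2) * D * d**2

def spring_constraints(d, D, N):
    """Return (g1..g4); each must be <= 0 for a feasible design."""
    g1 = 1 - D**3 * N / (71785 * d**4)
    g2 = ((4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
          + 1 / (5108 * d**2) - 1)   # trailing -1 as in the standard form
    g3 = 1 - 140.45 * d / (D**2 * N)
    g4 = (d + D) / 1.5 - 1
    return g1, g2, g3, g4

d, D, N = 0.051689, 0.356718, 11.288966  # literature near-optimum
w = spring_weight(d, D, N)                # ~0.012665
g = spring_constraints(d, D, N)           # all ~<= 0 (g1 and g2 active)
```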
Fig. 17
figure 17

Tension/compression spring design problem (Mirjalili 2015b)

The spring design problem has been solved by many researchers. The proposed HPO algorithm was tested on this problem and compared with popular and recent methods such as AEO, BA, COOT, co-evolutionary particle swarm optimization (CPSO) (He and Wang 2007a), GPEAae, GSA, GWO, HHO, stochastic fractal search (SFS) (Salimi 2015), SHO, SSA, the water cycle algorithm (WCA) (Eskandar et al. 2012), water evaporation optimization (WEO) (Kaveh and Bakhshpoori 2016) and WOA. Table 19 shows the comparison results for the spring design problem. The proposed HPO algorithm solved this problem better than all the compared methods, finding better values for the problem variables with the least weight.

Table 19 Results for tension/compression spring

5.6 Step-cone pulley problem

One of the important problems in the field of engineering is the step-cone pulley problem. The goal is to minimize the weight of a four-step cone pulley. Figure 18 shows the structure and parameters of this problem: it has five variables, four for the pulley diameters and one for the pulley width. The step-cone pulley is to be designed to transmit a power of at least 0.75 hp. The formulation of this problem and its constraints is given in Eq. (18)

Fig. 18
figure 18

Step-cone pulley problem (Savsani and Savsani 2016)


Minimize:

$$ f(\overline{x}) = \rho \omega \left[ {d_{1}^{2} \left\{ {1 + \left( {\frac{{N_{1} }}{N}} \right)^{2} } \right\} + d_{2}^{2} \left\{ {1 + \left( {\frac{{N_{2} }}{N}} \right)^{2} } \right\} + d_{3}^{2} \left\{ {1 + \left( {\frac{{N_{3} }}{N}} \right)^{2} } \right\} + d_{4}^{2} \left\{ {1 + \left( {\frac{{N_{4} }}{N}} \right)^{2} } \right\}} \right] $$

Subject to:

$$ \begin{aligned} & h_{1} (\overline{x}) = C_{1} - C_{2} = 0, \\ & h_{2} (\overline{x}) = C_{1} - C_{3} = 0, \\ & h_{3} (\overline{x}) = C_{1} - C_{4} = 0, \\ & g_{i} (\overline{x}) = 2 - R_{i} \le 0,\quad i = 1, \ldots ,4, \\ & g_{i + 4} (\overline{x}) = (0.75 \times 745.6998) - P_{i} \le 0,\quad i = 1, \ldots ,4, \\ & where, \\ & C_{i} = \frac{{\pi d_{i} }}{2}\left( {1 + \frac{{N_{i} }}{N}} \right) + \frac{{\left( {\frac{{N_{i} }}{N} - 1} \right)^{2} }}{4a} + 2a,\;i = (1,2,3,4), \\ & R_{i} = \exp \left( {\mu \left\{ {\pi - 2\sin^{ - 1} \left\{ {\left( {\frac{{N_{i} }}{N} - 1} \right)\frac{{d_{i} }}{2a}} \right\}} \right\}} \right),\;i = (1,2,3,4), \\ & P_{i} = st\omega (1 - R_{i} )\frac{{\pi d_{i} N_{i} }}{60},\;i = (1,2,3,4), \\ & t = 8\,{\text{mm}},\;s = 1.75\,{\text{MPa}},\;\mu = 0.35,\;\rho = 7200\,{\text{kg/m}}^{3} ,\;a = 3\,{\text{mm}}. \\ \end{aligned} $$
(18)

A schematic view of this problem is shown in Fig. 18.

The proposed HPO algorithm was tested on this problem, and the results were compared with the artificial electric field algorithm (AEFA) (Anita and Kumar 2020), passing vehicle search (PVS) (Savsani and Savsani 2016), artificial bee colony (ABC) (Rao et al. 2011) and Coot optimization algorithm (COOT). As shown in Table 20, the HPO algorithm discovered better optimal values than the other algorithms and ranks first.

Table 20 Comparison of results for step-cone pulley problem

5.7 Multiple disk clutch brake

The purpose of this test problem is to minimize the mass of a multi-disk clutch brake. The problem has five discrete decision variables, namely the number of friction surfaces (Z), the inner radius (ri), the disk thickness (t), the actuating force (F) and the outer radius (ro). The structure and parameters of this problem are shown in Fig. 19. The formulation of this problem and its constraints is given in Eq. (19)

$$ \begin{aligned} & Minimize: \\ & f(\overline{x}) = \pi (x_{2}^{2} - x_{1}^{2} )x_{3} (x_{5} + 1)\rho , \\ & Subject\,to: \\ & g_{1} (\overline{x}) = - p_{\max } + p_{rz} \le 0, \\ & g_{2} (\overline{x}) = p_{rz} v_{sr} - v_{sr,\max } p_{\max } \le 0, \\ & g_{3} (\overline{x}) = \Delta R + x_{1} - x_{2} \le 0, \\ & g_{4} (\overline{x}) = - L_{\max } + (x_{5} + 1)(x_{3} + \delta ) \le 0, \\ & g_{5} (\overline{x}) = sM_{s} - M_{h} \le 0, \\ & g_{6} (\overline{x}) = T \ge 0, \\ & g_{7} (\overline{x}) = - v_{sr,\max } + v_{sr} \le 0, \\ & g_{8} (\overline{x}) = T - T_{\max } \le 0, \\ & where \\ & M_{h} = \frac{2}{3}\mu x_{4} x_{5} \frac{{x_{2}^{3} - x_{1}^{3} }}{{x_{2}^{2} - x_{1}^{2} }}\,{\text{N}}\,{\text{mm}}, \\ & \omega = \frac{\pi n}{{30}}\,{\text{rad/s}}, \\ & A = \pi (x_{2}^{2} - x_{1}^{2} )\,{\text{mm}}^{2} , \\ & p_{rz} = \frac{{x_{4} }}{A}\,{\text{N/mm}}^{2} , \\ & v_{sr} = \frac{{\pi R_{sr} n}}{30}\,{\text{mm/s}}, \\ & R_{sr} = \frac{2}{3}\frac{{x_{2}^{3} - x_{1}^{3} }}{{x_{2}^{2} - x_{1}^{2} }}\,{\text{mm}}, \\ & T = \frac{{I_{z} \omega }}{{M_{h} + M_{f} }}, \\ & \Delta R = 20\,{\text{mm}},\;L_{\max } = 30\,{\text{mm}},\;\mu = 0.6, \\ & v_{sr,\max } = 10\,{\text{m/s}},\;\delta = 0.5\,{\text{mm}},\;s = 1.5, \\ & T_{\max } = 15\,{\text{s}},\;n = 250\,{\text{rpm}},\;I_{z} = 55\,{\text{kg}}\,{\text{m}}^{2} , \\ & M_{s} = 40\,{\text{Nm}},\;M_{f} = 2\,{\text{Nm}},\;and\;p_{\max } = 1. \\ & with\,bounds: \\ & 60 \le x_{1} \le 80,\;90 \le x_{2} \le 110,\;1 \le x_{3} \le 3, \\ & 0 \le x_{4} \le 1000,\;2 \le x_{5} \le 9. \\ \end{aligned} $$
(19)
Fig. 19
figure 19

Multiple disk clutch brake (Azizyan et al. 2019)

The proposed HPO algorithm was tested on this problem, and the results were compared with AEO, CMVO, the flying squirrel optimizer (FSO) (Azizyan et al. 2019), HHO, I-ABC greedy, PVS, quantum-behaved simulated annealing algorithm-based moth-flame optimization (QSMFO) (Yu et al. 2020), TLBO and WCA. The compared results are shown in Table 21. As the results show, the proposed HPO algorithm performs very well on this problem and found a better optimal value than the other compared methods, which all performed almost identically.

Table 21 Comparison of optimized designs for multi-plate disk clutch brake

5.8 Pressure vessel design

The problem of designing pressure vessels is another common engineering test problem for optimization algorithms. As shown in Fig. 20, one end of the vessel is hemispherical, while the other end is flat. The parameters to optimize are the length of the cylindrical section without the head (L), the inner radius (R), the head thickness (Th) and the shell thickness (Ts). The pressure vessel design problem is formulated in Eq. (20)

$$ \begin{aligned} & \vec{x} = [x_{1} \,x_{2} \,x_{3} \,x_{4} ] = [T_{s} \,T_{h} \,R\,L], \\ & Minimize:\;f(\vec{x}) = 0.6224x_{1} x_{3} x_{4} + 1.7781x_{2} x_{3}^{2} + 3.1661x_{1}^{2} x_{4} + 19.84x_{1}^{2} x_{3} , \\ & Subject\,to: \\ & g_{1} (\vec{x}) = - x_{1} + 0.0193x_{3} \le 0, \\ & g_{2} (\vec{x}) = - x_{2} + 0.00954x_{3} \le 0, \\ & g_{3} (\vec{x}) = - \pi x_{3}^{2} x_{4} - \frac{4}{3}\pi x_{3}^{3} + 1296000 \le 0, \\ & g_{4} (\vec{x}) = x_{4} - 240 \le 0, \\ & 0 \le x_{1} \le 99,\;0 \le x_{2} \le 99,\;10 \le x_{3} \le 200,\;10 \le x_{4} \le 200. \\ \end{aligned} $$
(20)
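The cost and constraints of the pressure vessel problem can be sketched as below (with g2 in the standard form −Th + 0.00954R); the sample design is simply a feasible point chosen for illustration, not an optimum, and the function names are assumptions:

```python
import math

def vessel_cost(ts, th, r, l):
    """Objective of Eq. (20): material, forming and welding cost."""
    return (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * ts**2 * r)

def vessel_constraints(ts, th, r, l):
    """Return (g1..g4); each must be <= 0 for a feasible design."""
    g1 = -ts + 0.0193 * r
    g2 = -th + 0.00954 * r          # standard form of the head-thickness limit
    g3 = -math.pi * r**2 * l - (4.0 / 3.0) * math.pi * r**3 + 1296000
    g4 = l - 240
    return g1, g2, g3, g4

design = (1.0, 0.5, 45.0, 150.0)    # (Ts, Th, R, L): feasible, not optimal
cost = vessel_cost(*design)
g = vessel_constraints(*design)     # all <= 0 for this design
```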
Fig. 20
figure 20

Pressure vessel design (Khalilpourazari and Khalilpourazary 2019)

The proposed HPO algorithm was tested on the pressure vessel design problem. The results of this experiment were compared with methods such as AEO, BA, charged system search (CSS) (Kaveh and Talatahari 2010), GA, GPEAae, Gaussian quantum-behaved particle swarm optimization (G-QPSO) (dos Santos Coelho 2010), GWO, HHO, hybrid particle swarm optimization (HPSO) (He and Wang 2007b), MFO, SC-GWO, WEO and WOA. Table 22 shows the results of this comparison. As can be seen, the proposed HPO algorithm found better values for the variables of this problem and solved it better than the other methods.

Table 22 Results for the pressure vessel

5.9 Rolling element bearing design problem

Rolling element bearings have different geometric shapes that are optimized for different applications. The main goal of this problem is to maximize the dynamic load-carrying capacity, considering ten geometric design variables and nine constraints based on geometry and assembly. Of the ten design variables, one (the number of balls in the bearing) is restricted to integer values. The formulation of this problem and its constraints is given in Eq. (21)


Maximize:

$$ f(\overline{x}) = \left\{ {\begin{array}{*{20}l} {f_{c} Z^{2/3} D_{b}^{1.8} } \hfill & {,if\,D_{b} \le 25.4\,mm} \hfill \\ {3.647f_{c} Z^{2/3} D_{b}^{1.4} } \hfill & {,otherwise} \hfill \\ \end{array} } \right\} $$

Subject to:

$$ \begin{aligned} & g_{1} (\overline{x}) = Z - \frac{{\phi_{0} }}{{2\sin^{ - 1} (D_{b} /D_{m} )}} - 1 \le 0, \\ & g_{2} (\overline{x}) = 2D_{b} - K_{D\min } (D - d) > 0, \\ & g_{3} (\overline{x}) = K_{D\max } (D - d) - 2D_{b} \ge 0, \\ & g_{4} (\overline{x}) = \zeta B_{\omega } - D_{b} \le 0, \\ & g_{5} (\overline{x}) = D_{m} - 0.5(D + d) \ge 0, \\ & g_{6} (\overline{x}) = (0.5 + e)(D + d) - D_{m} < 0, \\ & g_{7} (\overline{x}) = 0.5(D - D_{m} - D_{b} ) - \varepsilon D_{b} \ge 0, \\ & g_{8} (\overline{x}) = f_{i} \ge 0.515, \\ & g_{9} (\overline{x}) = f_{0} \ge 0.515, \\ & where \\ & f_{c} = 37.91\left[ {1 + \left\{ {1.04\left( {\frac{1 - \gamma }{{1 + \gamma }}} \right)^{1.72} \left( {\frac{{f_{i} (2f_{0} - 1)}}{{f_{0} (2f_{i} - 1)}}} \right)^{0.41} } \right\}^{10/3} } \right]^{ - 0.3} ,\;\gamma = \frac{{D_{b} }}{{D_{m} }},\;f_{i} = \frac{{r_{i} }}{{D_{b} }},\;f_{0} = \frac{{r_{0} }}{{D_{b} }}, \\ & \phi_{0} = 2\pi - 2\cos^{ - 1} \left( {\frac{{\left\{ {(D - d)/2 - 3(T/4)} \right\}^{2} + \left\{ {D/2 - (T/4) - D_{b} } \right\}^{2} - \left\{ {d/2 + (T/4)} \right\}^{2} }}{{2\left\{ {(D - d)/2 - 3(T/4)} \right\}\left\{ {D/2 - (T/4) - D_{b} } \right\}}}} \right), \\ & T = D - d - 2D_{b} ,\;D = 160,\;d = 90,\;B_{w} = 30. \\ \end{aligned} $$
(21)

With bounds:

$$ \begin{gathered} 0.5(D + d) \le D_{m} \le 0.6(D + d), \hfill \\ 0.15(D - d) \le D_{b} \le 0.45(D - d), \hfill \\ 4 \le Z \le 50, \hfill \\ 0.515 \le f_{i} \le 0.6, \hfill \\ 0.515 \le f_{0} \le 0.6, \hfill \\ 0.4 \le K_{D\min } \le 0.5, \hfill \\ 0.6 \le K_{D\max } \le 0.7, \hfill \\ 0.3 \le \varepsilon \le 0.4, \hfill \\ 0.02 \le e \le 0.1, \hfill \\ 0.6 \le \zeta \le 0.85. \hfill \\ \end{gathered} $$

The structure and parameters of the rolling element bearing design problem are shown in Fig. 21.

Fig. 21
figure 21

Rolling element bearing problem (Heidari et al. 2019)

The proposed HPO algorithm was tested to solve the rolling element bearing design problem, which is a maximization problem. The results of this experiment were compared with methods such as Harris hawks optimizer (HHO), passing vehicle search (PVS), teaching–learning-based optimization (TLBO), genetic algorithm (GA) and sine cosine algorithm (SCA). The results of this comparison are shown in Table 23. As can be seen from the results, the proposed HPO algorithm was able to provide the best solution to solve this problem.

Table 23 Comparison of optimized designs for rolling element bearing design problem

As mentioned before, for real-world problems the best solutions obtained by the different algorithms are reported. Many of these algorithms achieved their results under more favorable conditions, including more iterations, larger populations and more function evaluations (NFE). Despite this, the proposed HPO algorithm still performed better in solving these problems. In summary, the best optimal values obtained by the proposed HPO algorithm for the real-world problems are given in Table 24.

Table 24 Results of the proposed HPO algorithm in solving real-world engineering problems

6 Conclusion

In this paper, a new population-based optimization algorithm is proposed, inspired by the behavior of predators such as lions and leopards and of prey such as deer and gazelle. Unique characteristics, such as hunting a prey that strays from the group and the prey moving toward the leader at the front of the group, are the main motivation for this optimization algorithm. To evaluate the algorithm in terms of exploration, exploitation and scalability, 43 test functions were used: seven unimodal functions to evaluate exploitation, six multi-modal functions to evaluate exploration, and the 30 CEC2017 functions, at least half of which are challenging hybrid and composition functions, to evaluate escape from local optima and the balance between the exploration and exploitation phases. The results showed that the proposed HPO algorithm has sufficient exploration and exploitation power to solve unimodal and multi-modal problems and establishes a good balance between these two phases. The HPO algorithm presented very competitive results compared with well-known and recent optimization algorithms. For further evaluation, the proposed algorithm was tested on nine real-world problems. The results showed that it provided better solutions to these problems, even though some of the compared algorithms were tested under more favorable conditions. For future work, the proposed algorithm can be applied to problems in different fields of study; the development of binary and multi-objective versions is also proposed.