1 Introduction

Optimisation remains a significant challenge in artificial computation (Sameer et al. 2019; Tariq et al. 2018; Zaidan et al. 2017). Accordingly, many algorithms have been developed to solve this problem. However, two issues must be addressed to guarantee a successful solution: how to identify the global and local optima and how to preserve these optima until the end of the search (Qu et al. 2012).

Over the last two decades, nature-inspired computation has attracted wide interest amongst researchers, given that nature is an important source of concepts, mechanisms and ideas for designing artificial computing systems that solve many complex mathematical problems (Barhen et al. 1997). Individuals must adapt to their surrounding environments to ensure the survival and long-term preservation of their breed (Whitley 2001). This process is known as evolution (Fogel 1995, 2009; Fogel et al. 1965; Schwefel 1995; Michalewicz and Attia 1994). Reproduction over time also sustains the features that foster the competitiveness of individuals and removes their weak features. Only the good individuals from the surviving species can transmit their genes to their descendants. This process, which is known as natural selection, has inspired ‘evolutionary algorithms’ (EAs), which are amongst the most widespread and successful algorithms applied in research. Several types of EAs have been employed in the literature, including genetic algorithms (Houck et al. 1995; Joines and Houck 1994; Kazarlis and Petridis 1998; Holland 1992), genetic programming (Koza 1992), evolutionary programming and evolutionary strategies (Yao et al. 1999; Rechenberg 1994; Whitley 2001; Yao et al. 1999b; Yao and Liu 1997).

Evolutionary optimisation techniques are the most widely used intelligent computing techniques for solving many problems, such as combinatorial and nonlinear optimisation problems (Fiacco and McCormick 1968). These techniques address different types of issues by integrating prior information into an evolutionary search process to efficiently explore the state space of possible solutions. However, despite these advantages, EAs are unable to find the optimal solution for numerous problems; therefore, many researchers have merged these algorithms with other techniques to improve their solutions (Whitley 2001).

Swarm intelligence (SI) is another form of intelligent computing technique. It includes particle swarm optimisation (PSO) (Birge 2003; Shi and Eberhart 1998; Kennedy and Eberhart 1995), which mimics the swarm behaviour of birds or fish (Li 2003); ant colony optimisation (Dorigo et al. 2006), which mimics the foraging behaviour of ants; and other algorithms, such as the gravitational search (Rashedi et al. 2009), grey wolf optimiser (GWO) (Mirjalili et al. 2014), artificial bee colony (Karaboga and Basturk 2007), moth–flame optimisation (Mirjalili 2015), whale optimisation (Mirjalili and Lewis 2016), group search optimiser (He et al. 2006, 2009) and ant lion optimiser (Mirjalili 2015) algorithms, as well as many algorithms derived from PSO, such as the comprehensive learning particle swarm optimiser (CLPSO) (Liang et al. 2006), fitness-distance-ratio-based particle swarm optimisation (FDR-PSO) (Peram et al. 2003) and ensemble particle swarm optimiser (EPSO) (Lynn and Suganthan 2017). SI solves many problems by simulating the natural behaviour of animals when moving from one place to another in search of food. This technique is generally influenced by the size and nonlinearity of the problem. Notably, the majority of the existing analytical methods are unable to converge on computational and combinatorial problems, whereas SI techniques can obtain optimal solutions to them.

SI offers many advantages. For example, each individual can improve its search efficiency by moving from one position to another, whilst all individuals within a swarm can improve their respective positions. In EAs, weak and inefficient individuals are neglected and replaced by highly competent ones. A swarm continuously explores new areas within the search space to rapidly reach the globally optimal regions. However, SI also has its disadvantages. For example, the collective movement may cause the entire swarm to descend into a local optimum, and the continued failure of individuals to escape from this area can lead to the premature termination of the exploration (Del Valle et al. 2008).

Nature-inspired techniques have been developed owing to their ability to address various issues and the possibility of integrating evolutionary techniques with swarm techniques to create new techniques that can solve these problems. Such techniques maximise the advantage offered by SI in searching around the best position in the swarm and capitalise on the capability of evolutionary techniques to explore the search space repeatedly and avoid local optima. Thus, introducing a technique such as the bald eagle search (BES), a nature-inspired technique for solving optimisation problems that mimics the behaviour of bald eagles when searching for food, is useful. To the best of our knowledge, an algorithm that mimics the behaviour of bald eagles, which are known for pack hunting, has yet to be developed. Bald eagles search for food in three stages. In the first stage, the eagles identify an area in which to conduct their search. In the second stage, the eagles search for food within the selected area. In the third stage, the eagles choose and attack a prey (Sörensen 2015).

The movement of bald eagles in the three stages depends on a centre point. In the first stage, the eagles move from the centre point towards the selected search area. In the second stage, the eagles search within the search space around the centre point. In the third stage, the eagles move from the centre point of the search area towards a prey. The centre point is taken as the mean of all search points and serves as the launch point for the eagles' search. The advantages of evolutionary and swarm techniques are integrated in the construction of the BES algorithm. The first and second stages are considerably similar to the search behaviour of evolutionary techniques. Specifically, the first stage of the BES algorithm relocates all search points from their original positions towards the best point, whilst the second stage is treated as an evolution stage for all search points and improves the preferred search points along the direction leading to the centre point. The third stage mimics the behaviour of SI in moving towards the best point whilst benefiting from the previous site of each search point. Treating the centre point as a local search point enables this stage to direct the search over a large area and overcome the shortcomings of the previous search stages.

The main objective of this study is to propose a novel nature-inspired technique for solving optimisation problems. The remainder of this paper is organised as follows. Section 2 briefly reviews the search behaviour of bald eagles and describes each stage of the proposed BES algorithm. Section 3 discusses the evaluation methodology and the results. Section 4 presents the conclusions.

2 BES algorithm

2.1 Behaviour of bald eagle during hunting

‘Bald’ is a derivation of the old English word ‘balde’, which means ‘white’; hence, bald eagles are not actually bald. Bald eagles are occasional predators and sit at the top of the food chain only because of their size. Furthermore, bald eagles are considered scavengers that feast on any available, easy and protein-rich food. They are opportunistic foragers that mainly select fish (alive or dead), especially salmon, as their primary food. Birds that make optimal hunting decisions can evaluate the energetic cost of a hunting attempt, the energy content of the prey and the probability of success in various habitats utilising multiple attack methods (Todd et al. 1982). Bald eagles frequently hunt from perches but may also hunt whilst in flight. They are capable of spotting fish at enormous distances; nevertheless, because obtaining fish from water is difficult, only 1 in 20 attack attempts may succeed (Stalmaster and Kaiser 1997). Thereafter, bald eagles rest because hunting consumes substantial energy. Figure 1 shows the behaviour of bald eagles during hunting time. When they start to search for food over a body of water, these eagles set off in a specific direction and select a certain area in which to begin the search. Accordingly, the search space is found by self-searching and by tracking other birds towards concentrations of fish (dead or alive) (Stalmaster 1987).

Fig. 1

Behaviour of bald eagle during hunting

Thereafter, bald eagles fly directly to the particular area. Space selection is thus the first stage of hunting behaviour (see Fig. 2). Foraging success is higher within 5 m of the shore (47%) than in deep water; that is, an important consideration for foraging habitat is that bald eagles select the middle space between the land surface and deep water, away from the shallow water (Stalmaster and Gessaman 1982). Specifically, a pair of eagles hunts daily over 250 ha of open pasture. When the eagles reach the area, they begin their search; the selected area is no farther than 700 m from their nest because energy is a critical factor in searching (Lasserre 2001).

Fig. 2

Co-sequences for the three main stages of hunting by BES

Furthermore, bald eagles take advantage of stormy air whilst flying high. Soaring is activated by increased wind speed, and eagles spend substantial time flying. Eagles have been observed in gliding, graceful, nearly motionless flights for hours at a time (Stalmaster and Kaiser 1997; Hansen 1986; Hansen et al. 1984). They also have outstanding eyesight, which enables them to observe fish in the water, or dead fish, from hundreds of feet up in the air. An eagle's eye is as large as a human eye but is more powerful, with vision approximately four times sharper than that of humans. Eagles can also see in two directions at the same time, with forward and side views. Whilst eagles fly thousands of feet in the air, scanning becomes easy with a twisting motion, and an eagle can spot a prey over an area of nearly 3 m2 (Hansen 1986). The second stage of hunting behaviour is spotting the prey (see Fig. 2). Once the eagles see the prey, they start the last stage of hunting behaviour, in which they descend with a gradual, flowing motion to reach the prey at high speed and snatch the fish from the water (see Fig. 2). Eagles expend substantial energy on these spiral search flights when hunting fish (Liang et al. 2006).

2.2 BES algorithm

The proposed BES algorithm mimics the behaviour of bald eagles during hunting to justify the co-sequences of each stage of hunting. Accordingly, this algorithm can be divided into three parts, namely, selecting the search space, searching within the selected search space and swooping.

2.2.1 Select stage

In the select stage, bald eagles identify and select the best area (in terms of amount of food) within the selected search space where they can hunt for prey. Equation (1) presents this behaviour mathematically.

$$ P_{i,new} = P_{best} + \alpha r\left( P_{mean} - P_{i} \right) $$
(1)

where α is a parameter that controls the changes in position and takes a value between 1.5 and 2, and r is a random number between 0 and 1. In the selection stage, bald eagles select an area on the basis of the information available from the previous stage and randomly choose another search area that differs from, but is located near, the previous one. \( P_{best} \) denotes the search space currently selected by the bald eagles on the basis of the best position identified during their previous search, and the eagles randomly search all points near this previously selected space. Meanwhile, \( P_{mean} \) is the mean of all positions, indicating that the eagles use all the information from the previous points. The current movement of bald eagles is determined by multiplying the randomly searched prior information by α, which randomly changes all search points (Hatamlou 2012).
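As a minimal sketch (not the authors' implementation), the select-stage update of Eq. (1) can be written in Python as follows; the function name `select_stage`, the pure-Python list representation of the population and the default α = 2.0 are illustrative assumptions:

```python
import random

def select_stage(pop, best, alpha=2.0):
    """One select-stage update, Eq. (1): P_new = P_best + alpha * r * (P_mean - P_i).

    alpha in [1.5, 2] controls the change in position; r is uniform in [0, 1].
    """
    dim = len(pop[0])
    # P_mean: coordinate-wise mean of all current search points
    mean = [sum(p[d] for p in pop) / len(pop) for d in range(dim)]
    new_pop = []
    for p in pop:
        r = random.random()  # fresh random factor for each point
        new_pop.append([best[d] + alpha * r * (mean[d] - p[d])
                        for d in range(dim)])
    return new_pop
```

Setting α = 0 collapses every point onto \( P_{best} \), which is a convenient sanity check; in practice α ∈ [1.5, 2], as stated above.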

To illustrate how this stage improves a random solution, we solve the Rosenbrock function with 10 search points. Figure 3 shows that the selection stage effectively improves all solutions within the search on the basis of the mean and best points. Figure 3a shows the locations of the best and mean solutions within the search space. The mean point has the best location in the search space, whereas the selected space depends on the difference between the search and mean points. Figure 3b shows that search point 10 is located near the best point. Figure 3c shows the improvement in all points within the search space and the new best point at search point 3.

Fig. 3

Improvised selection stage: a selection stage after one alteration, b selection stage after two alterations and c selection stage after three alterations

2.2.2 Search stage

In the search stage, bald eagles search for prey within the selected search space and move in different directions within a spiral space to accelerate their search. The best position for the swoop is mathematically expressed in Eq. (2).

$$ P_{i,new} = P_{i} + y(i)\left( P_{i} - P_{i + 1} \right) + x(i)\left( P_{i} - P_{mean} \right) $$
(2)
$$ \begin{aligned} & x(i) = \frac{xr(i)}{\max \left( \left| xr \right| \right)},\quad y(i) = \frac{yr(i)}{\max \left( \left| yr \right| \right)} \quad \left( \text{a} \right) \\ & xr(i) = r(i)\sin \left( \theta (i) \right),\quad yr(i) = r(i)\cos \left( \theta (i) \right) \quad \left( \text{b} \right) \\ & \theta (i) = a\pi \cdot rand \quad \left( \text{c} \right), \qquad r(i) = \theta (i) + R \cdot rand \quad \left( \text{d} \right) \end{aligned} $$

where a is a parameter that takes a value between 5 and 10 and determines the corner angle between the search points and the centre point, and R takes a value between 0.5 and 2 and determines the number of search cycles. Figure 4 shows that bald eagles move in a spiral direction within the selected search space and determine the best position for swooping on the prey. We use a polar plot to represent this movement mathematically. This representation also enables the BES algorithm to discover new spaces and increase diversification, by multiplying the difference between the current and next points by the polar coordinate on the y-axis and multiplying the difference between the current and centre points by the polar coordinate on the x-axis. We use the mean solution in the search equation because all search points move towards the centre point. All points in the polar plot take a value between − 1 and 1, and Eqs. (a)–(d) generate the spiral shape. Moreover, a and R are the parameters that change the spiral shape; Fig. 5 shows the spiral shape when these parameters are changed.
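As a rough illustration (not the authors' code), the spiral search of Eq. (2) with the polar quantities in (a)–(d) might be implemented as follows; the list-based representation, the wrap-around choice of \( P_{i+1} \) at the last point and the zero-guard on the normalisation are assumptions:

```python
import math
import random

def search_stage(pop, a=10.0, R=1.5):
    """One search-stage update, Eq. (2), with the polar quantities (a)-(d).

    a in [5, 10] sets the corner angle; R in [0.5, 2] sets the number of
    search cycles of the spiral.
    """
    n, dim = len(pop), len(pop[0])
    mean = [sum(p[d] for p in pop) / n for d in range(dim)]
    theta = [a * math.pi * random.random() for _ in range(n)]   # (c)
    r = [t + R * random.random() for t in theta]                # (d)
    xr = [r[i] * math.sin(theta[i]) for i in range(n)]          # (b)
    yr = [r[i] * math.cos(theta[i]) for i in range(n)]          # (b)
    mx = max(abs(v) for v in xr) or 1.0  # guard against an all-zero column
    my = max(abs(v) for v in yr) or 1.0
    x = [v / mx for v in xr]                                    # (a)
    y = [v / my for v in yr]                                    # (a)
    new_pop = []
    for i, p in enumerate(pop):
        nxt = pop[(i + 1) % n]  # P_{i+1}, wrapping around at the last point
        new_pop.append([p[d] + y[i] * (p[d] - nxt[d]) + x[i] * (p[d] - mean[d])
                        for d in range(dim)])
    return new_pop
```

The normalisation in (a) keeps every polar step x(i), y(i) within [− 1, 1], so the movement is a bounded combination of the offsets to the next point and to the mean point.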

Fig. 4

Bald eagles searching within a spiral space

Fig. 5

Shape of the spiral when parameters \( a \) and R are changed

The points move around the centre point during the search stage. When parameters a and R are changed, the algorithm increases diversification to escape from the local optimum and to continuously obtain an efficient solution. Figure 6 shows the improvement in the fitness function during the search stage, whilst Fig. 6a shows the location of the best and mean points. The best point has a better location compared with the mean point in the search space. Figure 6b shows the improvement in all points and the best solution that is obtained in point 4. Figure 6c shows the new location of all points in the search space and the best point that is obtained in point 7. The search space depends on the movement of points from one location to another, whereas the mean point is based on the movement of these points around a spiral.

Fig. 6

Improvements in the fitness function during the search stage a after one alteration, b after two alterations and c after three alterations

2.2.3 Swooping stage

In the swooping stage, bald eagles swing from the best position in the search space to their target prey. All points also move towards the best point. Equation (3) mathematically illustrates this behaviour.

$$ P_{i,new} = rand \cdot P_{best} + x1(i)\left( P_{i} - c_{1}P_{mean} \right) + y1(i)\left( P_{i} - c_{2}P_{best} \right) $$
(3)
$$ \begin{aligned} & x1(i) = \frac{xr(i)}{\max \left( \left| xr \right| \right)},\quad y1(i) = \frac{yr(i)}{\max \left( \left| yr \right| \right)} \\ & xr(i) = r(i)\sinh \left( \theta (i) \right),\quad yr(i) = r(i)\cosh \left( \theta (i) \right) \\ & \theta (i) = a\pi \cdot rand \quad and \quad r(i) = \theta (i), \\ & where\;\;c_{1}, c_{2} \in \left[ 1,2 \right]. \end{aligned} $$

The movement of the eagles during the swoop takes different shapes, and we use a polar equation to plot this movement. The new position is computed by multiplying the difference between the current and centre points by the polar coordinate on the x-axis and multiplying the difference between the current and best points by the polar coordinate on the y-axis. The best solution is multiplied by a random number, whilst parameters c1 and c2 increase the intensity of the eagles' movement towards the best and centre points (see Fig. 7).
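A hedged sketch of the swoop update of Eq. (3) follows; as with the earlier sketches, the data layout, the default parameter values and the guard against a zero normaliser are illustrative assumptions rather than the authors' implementation:

```python
import math
import random

def swoop_stage(pop, best, a=10.0, c1=2.0, c2=2.0):
    """One swoop-stage update, Eq. (3): hyperbolic (sinh/cosh) movement
    towards the best point; c1, c2 in [1, 2] intensify the movement."""
    n, dim = len(pop), len(pop[0])
    mean = [sum(p[d] for p in pop) / n for d in range(dim)]
    theta = [a * math.pi * random.random() for _ in range(n)]
    # here r(i) = theta(i), so the polar radius equals the angle
    xr = [t * math.sinh(t) for t in theta]
    yr = [t * math.cosh(t) for t in theta]
    mx = max(abs(v) for v in xr) or 1.0  # guard against an all-zero column
    my = max(abs(v) for v in yr) or 1.0
    x1 = [v / mx for v in xr]
    y1 = [v / my for v in yr]
    new_pop = []
    for i, p in enumerate(pop):
        new_pop.append([random.random() * best[d]
                        + x1[i] * (p[d] - c1 * mean[d])
                        + y1[i] * (p[d] - c2 * best[d])
                        for d in range(dim)])
    return new_pop
```

The sinh/cosh terms grow very quickly with θ, so after normalisation most points take small polar steps whilst a few take steps near ± 1, which is what gives the swoop its mixture of intensification and diversification.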

Fig. 7

Shape of swoop when parameter \( a \) is changed

When the parameters are changed, the points in the swoop equation move in circles towards the best point. The mean of the population in this stage helps the algorithm balance intensification and diversification, as all solutions approach the best solution. Figure 8 shows the improvised swoop process. Figure 8a shows the locations of the mean point, the best point and the other points, with the best and mean points in the same location. Figure 8b shows the improvement of all points in the search space; points 1, 2, 5 and 6 obtain locations very near the best solution, and the new best solution is obtained at point 6. Figure 8c shows all points in new locations that are better than the previous ones, with the new best solution obtained at point 5, which is better than the mean point. The three stages are critical in obtaining a good solution within a minimum number of iterations, where each stage depends on two crucial characteristics, namely, intensification and diversification. These characteristics are crucial for continuously obtaining new solutions and searching around the optimal solution.

Fig. 8

Improvised swoop stage: a swoop stage after one alteration, b swoop stage after two alterations and c swoop stage after three alterations

2.2.4 Complete BES algorithm

The preceding sections introduced the main components of BES, namely, the selection, searching and swooping stages. To describe the remaining operations and facilitate the implementation of BES, the pseudo-code of the complete algorithm is provided in Algorithm 1. The initialisation procedure is activated first in lines 1–2 of Algorithm 1: population P is randomly generated in the problem space and the iteration counter t is set to 0. Thereafter, the objective value of each solution is evaluated. The following steps are then executed for each solution in population P. The select stage searches an area around the best solution (lines 4–12). The search stage evaluates a new area using spiral movement, in which random numbers are generated on two axes and each solution moves towards the next point and the centre point; the new position for hunting is evaluated in lines 13–21. Thereafter, the swoop stage uses the new position in the search space to swoop towards the prey; the new solution is evaluated in lines 22–30. The iteration counter is incremented in line 31 after the three stages are run. This evolutionary phase is repeated until the pre-set maximum number of iterations is reached. Lastly, the solutions in P are reported as the final population, together with the best solution obtained for the problem.

Algorithm 1
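The complete loop can be sketched as below. This is a compact reading of Algorithm 1 under stated assumptions: greedy acceptance of improving positions, an unnormalised polar step in the search stage and per-point (rather than population-wide) normalisation in the swoop stage; the function name `bes` and all parameter defaults are illustrative:

```python
import math
import random

def bes(obj, dim, bounds, n=30, max_iter=100, alpha=2.0, a=10.0, R=1.5,
        c1=2.0, c2=2.0, seed=0):
    """Compact sketch of Algorithm 1: initialise a random population, then
    repeat select -> search -> swoop, accepting a new position only when it
    improves that point (greedy acceptance, as in the pseudo-code)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [obj(p) for p in pop]
    b = min(range(n), key=fit.__getitem__)
    best_p, best_f = pop[b][:], fit[b]

    def accept(i, cand):
        """Keep cand only if it improves point i; track the global best."""
        nonlocal best_p, best_f
        f = obj(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f
            if f < best_f:
                best_p, best_f = cand[:], f

    for _ in range(max_iter):
        mean = [sum(p[d] for p in pop) / n for d in range(dim)]
        # 1) select stage, Eq. (1)
        for i in range(n):
            r = rng.random()
            accept(i, [best_p[d] + alpha * r * (mean[d] - pop[i][d])
                       for d in range(dim)])
        # 2) search stage, Eq. (2): spiral movement around the mean point
        #    (unnormalised polar step; a simplification of (a)-(d))
        for i in range(n):
            th = a * math.pi * rng.random()
            rr = th + R * rng.random()
            x, y = rr * math.sin(th), rr * math.cos(th)
            nxt = pop[(i + 1) % n]
            accept(i, [pop[i][d] + y * (pop[i][d] - nxt[d])
                       + x * (pop[i][d] - mean[d]) for d in range(dim)])
        # 3) swoop stage, Eq. (3): hyperbolic swing towards the best point
        #    (per-point normalisation; a simplification of the paper's
        #    population-wide max|xr|, max|yr|)
        for i in range(n):
            th = a * math.pi * rng.random()
            x1, y1 = th * math.sinh(th), th * math.cosh(th)
            m = max(abs(x1), abs(y1)) or 1.0
            x1, y1 = x1 / m, y1 / m
            accept(i, [rng.random() * best_p[d]
                       + x1 * (pop[i][d] - c1 * mean[d])
                       + y1 * (pop[i][d] - c2 * best_p[d])
                       for d in range(dim)])
    return best_p, best_f

# usage sketch: minimise the sphere function in 5 dimensions
if __name__ == "__main__":
    sphere = lambda v: sum(x * x for x in v)
    print(bes(sphere, dim=5, bounds=(-10.0, 10.0)))
```

On a simple convex objective such as the sphere function, the greedy acceptance guarantees that the best objective value never worsens, and the three stages progressively contract the population; the parameter defaults follow the ranges given in Sects. 2.2.1–2.2.3.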

3 Computational experiment

This section evaluates the performance of the proposed BES algorithm. Firstly, we describe the evaluation methodology and present the results of experiments conducted on different optimisation problems. Secondly, we compare the performance of the BES algorithm with that of other intelligent computational techniques. Thirdly, we discuss our findings in detail. The no free lunch (NFL) theorem states that ‘for any algorithm, any elevated performance over one class of problems is exactly paid for in performance over another class’ (Wolpert and Macready 1997). A particular meta-heuristic may yield promising results for one set of problems but may perform poorly on another set. Owing to the NFL theorem, this field of study remains highly active; consequently, the extant approaches are continually enhanced, whilst new meta-heuristics are proposed every year.

3.1 Experimental settings and comparative methods

Firstly, we test the performance of the proposed BES algorithm on the 30 benchmark functions of the CEC 2014 Competition on Single Objective Real-Parameter Numerical Optimisation (Liang et al. 2013) and the 25 benchmark functions of CEC 2005, because these benchmark problems are the most frequently used by other researchers and cover the various types of single-objective function optimisation problems, as shown in Tables 1 and 2. Detailed definitions of the functions can be found in Suganthan et al. (2005).

Table 1 CEC 2005 test functions
Table 2 CEC 2014 test functions

The set of CEC 2014 benchmark functions (Liang et al. 2013) consists of 30 functions that are classified into four categories, namely, unimodal, simple multimodal, hybrid and composition functions. Table 2 describes the search range and global optimum value of each benchmark function.

On the test functions, we compare BES with six recent popular meta-heuristic methods:

  • Differential evolution (DE) algorithm A basic variant of the DE algorithm works by maintaining a population of candidate solutions (called agents). These agents are moved around the search space using simple mathematical formulas that combine the positions of existing agents in the population. If the new position of an agent is an improvement, then it is accepted and becomes part of the population; otherwise, the new position is simply discarded. The process is repeated and, in doing so, it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered (Storn and Price 1997).

  • GWO algorithm The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in the wild, as proposed by Mirjalili et al. (2014). Four types of grey wolf, namely, alpha, beta, delta and omega, are used to simulate the leadership hierarchy. Additionally, three main stages of hunting are implemented to perform the optimisation, namely, searching for, encircling and attacking the prey.

  • EPSO EPSO is an ensemble of PSO algorithms with a self-adaptive mechanism, proposed by hybridising several PSO variants (Lynn and Suganthan 2017).

  • FDR-PSO This algorithm was proposed to solve the problem of premature convergence observed in PSO. In comparison with PSO, FDR-PSO adds a social learning component that draws lessons from the experience of a neighbouring particle (nbest). The neighbouring particle is selected on the basis of two criteria: (1) it must be near the particle being updated and (2) it must be better adapted than the particle being updated. Whether a neighbouring particle meets these criteria is decided by the one-dimensional fitness-to-distance ratio, called the fitness-distance ratio (Peram et al. 2003).

  • CLPSO In PSO, the trajectory towards the global optimum is adjusted by the pbest and gbest particles. Although gbest is the best experience of the population, this particle may lie in a poor local optimum of a multimodal problem, far from the global optimum. To solve this problem, CLPSO was proposed in Liang et al. (2006). In CLPSO, the best experiences of all particles are used to guide the search of each particle.

Notably, fine-tuning the control parameters for each problem can improve the performance of an algorithm. However, finding separate parameter settings for each problem can take a long time, and such tuning can lead to an unfair comparison when evaluating an algorithm's overall performance over the entire test suite. Table 3 shows the recommended settings for the parameters of each algorithm.

Table 3 Parameter settings for the algorithms used in the comparative study

In the experiments, we use 30-D versions of the test problems and set the maximum number of function evaluations (NFE) at 100,000 for each algorithm on each problem to ensure a fair comparison. Each algorithm is run 30 times (with different initial random values) on each test problem, and the evaluation is based on the average performance over the 30 runs.

3.2 Evaluation procedure

The experimental results are described on the basis of the mean, standard deviation (SD), best point and Wilcoxon signed-rank test statistic of the function values.

  1. (a)

    Mean The mean \( \bar{x} \) is computed as the sum of all the observed outcomes from the sample divided by the total number of these outcomes.

    $$ \bar{x} = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} x_{i} $$
  2. (b)

    SD The standard deviation (SD) is a measure that quantifies the variation or dispersion of the function values in a set of data.

    $$ SD = \sqrt {\frac{1}{N - 1}\mathop \sum \limits_{i = 1}^{N} \left( {x_{i} - \bar{x}} \right)^{2} } $$
  3. (c)

    Best point The best point reflects the minimum function value obtained.

  4. (d)

    Wilcoxon signed-rank test The Wilcoxon signed-rank test statistic determines the difference between two samples (Derrac et al. 2011) and provides an alternative test of location that is based on the magnitudes and signs of these differences. This test evaluates the following hypotheses:

    $$ \begin{aligned} & H_{0 } : mean\left( A \right) = mean\left( B \right) \\ & H_{1 } : mean\left( A \right) \ne mean\left( B \right), \\ \end{aligned} $$

    where A and B denote the results of the first and second algorithms, respectively. This test also checks whether one algorithm outperforms the other. Let di denote the difference between the performance scores of the two algorithms on the ith of n problems. Let R+ denote the sum of ranks for the problems in which the first algorithm outperforms the second, and let R− represent the sum of ranks for the problems in which the second algorithm outperforms the first. The ranks of \( d_{i} = 0 \) are divided evenly between the two sums; if there is an odd number of them, one is ignored.

    $$ \begin{aligned} R^{ + } = & \mathop \sum \limits_{{d_{i} > 0}} rank\left( {d_{i} } \right) + \frac{1}{2}\mathop \sum \limits_{{d_{i} = 0}} rank\left( {d_{i} } \right) \\ R^{ - } = & \mathop \sum \limits_{{d_{i} < 0}} rank\left( {d_{i} } \right) + \frac{1}{2}\mathop \sum \limits_{{d_{i} = 0}} rank\left( {d_{i} } \right) \\ \end{aligned} $$

We use MATLAB to find the p value for comparing the algorithms at a significance level of \( \upalpha = 0.05 \). The null hypothesis is rejected when the p value is less than the significance level. A high \( R^{ + } \) indicates that an algorithm is superior to the other across the set of experiments. When \( {\text{R}}^{ + } = \frac{{{\text{n}} \times \left( {{\text{n}} + 1} \right)}}{2} \), the algorithm outperforms the other in all experiments.
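The rank sums \( R^{+} \) and \( R^{-} \) defined above can be computed directly; the following pure-Python sketch (the helper name `signed_rank_sums` is ours) uses average ranks for tied absolute differences and shares the ranks of zero differences evenly between the two sums, as in the formulas above:

```python
def signed_rank_sums(a, b):
    """Compute R+ and R- for paired samples a and b: rank |d_i| in
    ascending order (average ranks for ties), sum the ranks by the sign
    of d_i and split the ranks of zero differences evenly."""
    d = [x - y for x, y in zip(a, b)]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        # extend the group while the absolute differences are tied
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of the ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    r_plus = (sum(r for r, di in zip(ranks, d) if di > 0)
              + 0.5 * sum(r for r, di in zip(ranks, d) if di == 0))
    r_minus = (sum(r for r, di in zip(ranks, d) if di < 0)
               + 0.5 * sum(r for r, di in zip(ranks, d) if di == 0))
    return r_plus, r_minus
```

For n problems, \( R^{+} + R^{-} = n(n+1)/2 \), which matches the winning condition quoted above.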

3.3 Experimental results

Tables 4, 5, 6, 7, 8, 9, 10 and 11 present the experimental results of the unimodal, multimodal, hybrid and expanded functions, where ‘Mean’ and ‘Best’ denote the mean and minimum values, respectively, of the algorithm amongst the 30 runs; ‘STD’ denotes the standard deviation; the superscript index indicates the rank of the algorithm in terms of mean values amongst the six algorithms; and ‘Winner’ indicates the winner between the BES algorithm and each comparative algorithm according to the Wilcoxon test, where superscript + denotes that BES achieves a significant performance improvement over the comparative method and superscript − denotes otherwise. The best results amongst the comparative algorithms for each problem are shown in italics.

Table 4 Comparative results on the unimodal benchmark functions. CEC 2005 test functions
Table 5 Comparative results on multimodal benchmark functions. CEC 2005 test functions
Table 6 Comparative results on the expanded benchmark functions. CEC 2005 test functions
Table 7 Comparative results on hybrid benchmark functions. CEC 2005 test functions
Table 8 Comparative results on the unimodal benchmark functions. CEC 2014 test functions
Table 9 Comparative results on simple multimodal benchmark functions. CEC 2014 test functions
Table 10 Comparative results on the hybrid benchmark functions. CEC 2014 test functions
Table 11 Comparative results on the composition benchmark functions. CEC 2014 test functions

3.4 Results and discussion

3.4.1 CEC 2005 benchmark functions

Table 4 shows the results of the unimodal functions. BES obtains the best result in functions f2–f5 and performs significantly better than the other algorithms; on the remaining unimodal function, it obtains the second best result after DE/best/1.

Table 5 shows the results of the seven multimodal functions. BES obtains the best results in four functions, obtains the second best mean value in one function (f10) and ranks third in f11 and fourth in f9, thereby exhibiting the best overall performance.

  • DE/best/1 ranks first in f9, EPSO ranks first in f10 and GWO obtains the best mean values in f11. BES ranks fourth and surpasses three other algorithms in f9, ranks second and surpasses five other algorithms in f10 and ranks third and surpasses four other algorithms in f11.

  • DE/best/1, EPSO and FDR-PSO rank first, second and third, respectively, in f9, followed by BES, which is significantly different from (better than) DE/rand/1, GWO and CLPSO. f9, a shifted and rotated Rastrigin function, has a very large number of local optima, thereby making it difficult for the algorithms to obtain the global optimum in even one execution.

  • Of the remaining four benchmarks, BES consistently obtains the best results and its performance is significantly different from that of the other algorithms.

In particular, the BES results reach or substantially approximate the real optimum in functions such as f6 and f7, which have a narrow peak (or an extremely narrow valley leading from the local optimum to the global optimum).

Table 6 shows the results of the expanded functions. BES obtains the best result in both functions and performs significantly better than the other algorithms.

Table 7 shows the results of the 11 hybrid functions. BES ranks first in five functions (i.e. f16, f1, f21, f24 and f25), third in f22, fourth in f18, f19 and f23 and fifth in f15. The relatively low performance of BES in f15 is partially consistent with its performance on the constituent subfunctions, including f11, because hybrid functions also have many local optima. In summary, the overall performance of BES is the best amongst the compared algorithms on the benchmark suite, including the unimodal, multimodal, expanded and hybrid functions.

3.4.2 CEC 2014 benchmark functions

Table 8 shows the results of the unimodal functions. BES obtains the best result in all of them and is significantly better than the other algorithms. Notably, numerous algorithms, such as GWO, work well on unimodal functions (Mirjalili et al. 2014) but lose their performance on these functions; by comparison, BES remains effective at solving them.

Table 9 shows the results of the 13 multimodal functions. BES obtains the best results in four functions (f4, f13, f14 and f16). In addition, EPSO obtains the best results in four functions (f5, f9, f11 and f12), CLPSO in two functions (f8 and f10), FDR-PSO in two functions (f6 and f15) and DE/rand/1 in one function (f7). BES performs poorly on the remaining functions because their number of local optima is extremely large, which makes approaching the global optimum more difficult than in the other functions.

Table 10 shows the results of the six hybrid functions. BES obtains the best result in five of them (i.e. f17, f18, f20, f21 and f22), and the statistical tests show that its performance is significantly different from that of the other algorithms. Note that in this group of hybrid functions, the variables are randomly divided into subcomponents and different basic functions are applied to different subcomponents, which causes a significant reduction in the performance of algorithms such as GWO and DE, whereas BES remains as competitive as it is on the basic functions.
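The hybrid construction described above can be sketched as follows. This is a simplified illustration, not the exact CEC 2014 generator (which also applies shifts and rotations); the subcomponent ratios, seed and choice of basic functions here are arbitrary assumptions:

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def bent_cigar(x):
    return x[0] * x[0] + 1e6 * sum(xi * xi for xi in x[1:])

def high_conditioned_elliptic(x):
    n = len(x)
    return sum((10 ** (6 * i / max(n - 1, 1))) * xi * xi for i, xi in enumerate(x))

def make_hybrid(dim, ratios=(0.3, 0.3, 0.4), seed=0):
    """Hybrid function in the CEC 2014 style: variables are randomly
    permuted, split into subcomponents according to `ratios`, and each
    subcomponent is evaluated by a different basic function; the
    subcomponent values are summed."""
    rng = random.Random(seed)
    perm = list(range(dim))
    rng.shuffle(perm)  # random assignment of variables to subcomponents
    sizes = [int(r * dim) for r in ratios[:-1]]
    sizes.append(dim - sum(sizes))  # remainder goes to the last subcomponent
    basics = (sphere, bent_cigar, high_conditioned_elliptic)

    def hybrid(x):
        z = [x[i] for i in perm]  # permuted variables
        total, start = 0.0, 0
        for size, f in zip(sizes, basics):
            total += f(z[start:start + size])
            start += size
        return total

    return hybrid
```

Because each subcomponent has its own scaling and conditioning, an algorithm that implicitly assumes one uniform landscape across all variables degrades sharply on such functions.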

Table 11 shows the results of the eight composition functions. BES ranks first in six of them (i.e. f23, f24, f25, f26, f29 and f30). Its relatively low performance in f27 and f28 is partially consistent with its performance on the constituent subfunctions, including f9, f6 and f11, given that composition functions have numerous local optima.
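Why composition functions have so many local optima can be sketched with a toy example. This is a hypothetical simplification of the CEC scheme, which additionally uses several different basic functions, λ scalings and bias terms; here both components are plain spheres and only the distance-based weighting is shown:

```python
import math

def make_composition(optima, sigma=10.0):
    """Toy composition function: each component is a sphere shifted to its
    own optimum, and a distance-based weight makes the nearest component
    dominate, creating one local basin per optimum."""
    def composed(x):
        n = len(x)
        weights, values = [], []
        for opt in optima:
            z = [xi - oi for xi, oi in zip(x, opt)]
            d2 = sum(zi * zi for zi in z)
            # The weight blows up as x approaches this component's optimum,
            # so that component's value dominates the blend there.
            weights.append(math.exp(-d2 / (2 * n * sigma ** 2)) / math.sqrt(d2 + 1e-12))
            values.append(d2)  # sphere value of the shifted point
        return sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return composed

# Two components => two local basins with a barrier between them.
comp = make_composition([[0.0, 0.0], [3.0, 3.0]])
```

With k components, the landscape contains k separate basins; the CEC composition functions combine multimodal components in this way, so local optima multiply and a population that has converged into one basin struggles to reach another.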

In summary, the overall performance of BES is the best amongst the six comparative algorithms across both benchmark suites, CEC 2005 and CEC 2014. On some test functions with many local optima, however, the performance of BES is not fully satisfactory.

We mainly used a linearly reduced BES population size in our experiments. In later iterations, the number of solutions is thus reduced to a single digit, which makes veering away from a local optimum difficult. We also tested BES with a relatively large fixed population size, which effectively improves performance on such test functions but loses performance on many others. In general, the population-reduction strategy improves the overall performance of BES, whereas the fixed-size strategy yields an effective algorithm only for certain problems. The two strategies must therefore be compared so that the better one can be chosen for the majority of real optimisation problems.
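The linear reduction schedule can be written in a few lines. This is a minimal sketch; the exact maximum and minimum sizes and the rounding rule are illustrative assumptions, not the settings of our experiments:

```python
def population_size(t, t_max, n_max=52, n_min=4):
    """Linear population-size reduction: start with n_max solutions and
    shrink linearly to n_min by the final iteration t_max, so that late
    iterations keep only a handful of solutions."""
    return round(n_max - (n_max - n_min) * t / t_max)

print([population_size(t, 100) for t in (0, 25, 50, 75, 100)])  # [52, 40, 28, 16, 4]
```

The trade-off discussed above follows directly: the shrinking schedule concentrates evaluations on exploitation late in the run, while a fixed `n_max` retains diversity (helpful on highly multimodal functions) at the cost of slower convergence elsewhere.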

Compared with the other six algorithms, BES showed the best performance throughout the suite. However, no algorithm is consistently better than the others on all test functions; each achieves the best result on some of them. BES ranks first in 33 functions, whilst DE/best/1, DE/rand/1, GWO, EPSO, CLPSO and FDR-PSO rank first in 2, 7, 1, 6, 2 and 4 functions, respectively. Each algorithm thus shows advantages and disadvantages on the benchmark suite, and we expect the same to hold for various real-world problems. Therefore, when choosing an EA for a new optimisation problem, it is important to describe and quantify the boundaries of effective algorithm performance and to characterise problem instances by using objective measurements and tools.

Notably, BES is not highly competitive with the top-ranked algorithms in the CEC 2005 and CEC 2014 competitions. The majority of these algorithms use complex search mechanisms, such as blending operators, history memory, replacement strategies and hyper-heuristic controllers, as well as parameter settings fine-tuned for the test suite. However, our goal is simply to test the performance of BES on the test suite by using a simple framework and parameters. We expect that BES can also significantly improve its performance by introducing more complex mechanisms and by combining powerful operators from other heuristics. Figures 9 and 10 show the overall performance of BES compared with the other algorithms and illustrate its superiority amongst them.

Fig. 9

Comparison amongst the algorithms by using CEC 2005

Fig. 10

Comparison amongst the algorithms by using CEC 2014

4 Conclusion

This study proposed a novel optimisation algorithm that mimics the hunting strategy, social hierarchy and behaviour of bald eagles. The optimisation results and discussion confirm that the BES algorithm is the best competitor amongst the six comparative algorithms, namely GWO, DE/best/1, DE/rand/1, EPSO, FDR-PSO and CLPSO, on the benchmark suites of both CEC 2005 and CEC 2014.

On some test functions with numerous local optima, the performance of BES is not fully satisfactory because we used a linearly reduced population size in our experiments; in later iterations, the number of solutions is reduced to a single digit, which makes breaking away from a local optimum difficult. We also tested BES with a relatively large fixed population size, which effectively improves performance on such test functions but loses performance on many others. Generally, the population-reduction strategy improves the overall performance of BES, whereas the fixed-size strategy yields an effective algorithm only for certain problems. The two strategies must therefore be compared so that the better one can be chosen for the majority of real optimisation problems.

Compared with the other six algorithms, BES showed the best performance throughout the suite. However, no algorithm is consistently better than the others on all test functions; each achieves the best result on some of them. BES ranks first in 33 functions, whilst DE/best/1, DE/rand/1, GWO, EPSO, CLPSO and FDR-PSO rank first in 2, 7, 1, 6, 2 and 4 functions, respectively. Each algorithm thus shows advantages and disadvantages on the benchmark suite, and we expect the same to hold for various real-world problems. Therefore, when choosing an EA for a new optimisation problem, it is important to describe and quantify the boundaries of effective algorithm performance and to characterise problem instances by using objective measurements and tools.
Notably, BES is not highly competitive with the top-ranked algorithms in the CEC 2005 and CEC 2014 competitions. The majority of these algorithms use complex search mechanisms, such as blending operators, history memory, replacement strategies and hyper-heuristic controllers, as well as parameter settings fine-tuned for the test suite. However, our goal is simply to test the performance of BES on the test suite by using a simple framework and parameters. We expect that BES can also significantly improve its performance by introducing more complex mechanisms and by combining powerful operators from other heuristics. Future studies may examine the potential of using the prey identification process of bald eagles to minimise energy consumption by taking advantage of several factors, such as wind and gravity. Moreover, some searching patterns, such as cross-based ones, may outperform others in certain computational environments.