1 Introduction

During the last few decades, various algorithms have been proposed to solve a variety of engineering optimization problems [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]. These optimization problems are complex in nature because they have more than one local optimum. They can be categorized along several dimensions: constrained or unconstrained, discrete or continuous, static or dynamic, and single- or multi-objective.

In order to increase the efficiency and accuracy of solutions to these problems [22,23,24,25,26], researchers have been encouraged to rely on metaheuristic algorithms [27,28,29]. Metaheuristics have become popular in various fields because they do not require gradient information and can bypass the local optima problem.

Metaheuristics are classified into two main categories: single-solution and multiple-solution. In single-solution-based algorithms, the search process starts with one candidate solution, whereas in multiple-solution-based algorithms, optimization is performed using a set of solutions (i.e., a population). Multiple-solution or population-based metaheuristics have the following advantages over single-solution-based metaheuristics:

  • The search process starts with a randomly generated population, i.e., a set of multiple solutions.

  • The multiple solutions can share information with each other about the search space and thereby avoid locally optimal solutions.

  • The exploration capability of multiple-solution or population-based metaheuristics is better than that of single-solution-based metaheuristics.

The key phases of metaheuristic algorithms are exploration and exploitation. The exploration phase ensures that the algorithm investigates different promising regions of a given search space, whereas the exploitation phase ensures the search for optimal solutions around those promising regions. However, it is difficult to balance these phases due to the stochastic nature of metaheuristics. Therefore, fine-tuning of these two phases is required to achieve near-optimal solutions.

In recent years, a large number of metaheuristic algorithms have been developed. However, no single algorithm can solve all types of optimization problems, and some algorithms provide better results than others on particular problems. Therefore, developing new metaheuristic algorithms remains an open problem. This fact motivated us to develop a novel metaheuristic algorithm for solving optimization problems.

This paper presents a hybrid bio-inspired metaheuristic algorithm named the emperor penguin and salp swarm algorithm (ESA). It is inspired by the huddling behavior of emperor penguins and the swarming behavior of salps, as modeled by the emperor penguin optimizer (EPO) [30] and the salp swarm algorithm (SSA) [31], respectively. The main contributions of this work are as follows:

  • A hybrid bio-inspired swarm algorithm (ESA) is proposed.

  • The proposed ESA is implemented and tested on 53 benchmark test functions (i.e., classical and CEC-2017).

  • The performance of ESA is compared with well-known metaheuristics using sensitivity analysis, convergence analysis, and ANOVA test analysis.

  • The robustness of the proposed ESA and other metaheuristics is examined by solving engineering problems.

The rest of this paper is structured as follows: Sect. 2 presents the background and related works of optimization problems. The proposed ESA algorithm is discussed in Sect. 3. The experimental results and discussion are presented in Sect. 4. Section 5 focuses on the applications of ESA in engineering problems. Finally, the conclusion and some future research directions are given in Sect. 6.

2 Background and related works

This section first describes the recently developed EPO and SSA algorithms, followed by related works in the field of optimization.

2.1 Emperor penguin optimizer (EPO)

Emperor penguins are social animals that perform various activities for living, such as hunting and foraging in groups. Emperor penguins huddle during the extreme winters of the Antarctic to survive. Each penguin contributes equally while huddling, depicting a sense of collectiveness and unity in their social behavior [32]. The huddling behavior can be summarized as follows [30]:

  • Generate and determine the huddle boundary.

  • Compute the temperature around the huddle.

  • Calculate the distance between emperor penguins.

  • Relocate the effective mover.

2.1.1 Mathematical modeling

The main objective of the modeling is to identify the effective mover. An L-shaped polygon plane is considered as the shape of the huddle. After the effective mover is identified, the boundary of the huddle is recomputed.

2.1.1.1 Generate and determine the huddle boundary

To map the huddling behavior of emperor penguins, the first thing to consider is their polygon-shaped grid boundary. Every penguin is surrounded by at least two penguins while huddling. The huddle boundary is decided by the direction and speed of the wind flow, which is generally faster than the movement of the penguins. Mathematically, the huddle boundary can be formulated as follows: let \(\eta\) represent the velocity of wind and \(\chi\) the gradient of \(\eta\):

$$\begin{aligned} \begin{aligned}&\chi = \nabla \eta . \end{aligned} \end{aligned}$$
(1)

The vector \(\alpha\) is combined with \(\eta\) to obtain the complex potential:

$$\begin{aligned} \begin{aligned}&G = \eta + i\alpha , \end{aligned} \end{aligned}$$
(2)

where i represents the imaginary unit and G defines the polygon plane function.

2.1.1.2 Temperature profile around the huddle

Emperor penguins huddle to conserve their energy and maximize the huddle temperature. The temperature profile takes \(T=0\) if \({X}> 0.5\) and \(T=1\) if \({X}< 0.5\), where X is the polygon radius. This temperature measure helps emperor penguins to perform the exploration and exploitation tasks. The temperature is computed as:

$$\begin{aligned} \begin{aligned}&T'=\Bigg (T- \dfrac{\text {Max}_{\text {itr}}}{y-\text {Max}_{\text {itr}}}\Bigg )\\&T= {\left\{ \begin{array}{ll} 0,&{} \text {if } X > 0.5\\ 1,&{} \text {if } X < 0.5, \end{array}\right. } \end{aligned} \end{aligned}$$
(3)

where y represents the current iteration, \(\text {Max}_{\text {itr}}\) represents the maximum number of iterations, X is the polygon radius, and T is the time required to identify the best optimal solution.

2.1.1.3 Distance between emperor penguins

After the huddle boundary is computed, the distance between emperor penguins is calculated. The current optimal solution is the solution with a better fitness value than the previous optimum. The other search agents update their positions relative to the current optimal solution. The position update is mathematically represented as:

$$\begin{aligned} \begin{aligned}&\vec {M_{\text {ep}}}=\text {Abs}\Big (N(\vec {X})\cdot \vec {Q(x)}-\vec {C}\cdot \vec {Q_{\text {ep}}(x)}\Big ), \end{aligned} \end{aligned}$$
(4)

where \(\vec {M_{\text {ep}}}\) denotes the distance between the emperor penguin and the fittest search agent (i.e., the one with the lowest fitness value), and x represents the current iteration. The vectors \(\vec {X}\) and \(\vec {C}\) help to avoid collisions among penguins. \(\vec {Q}\) represents the best optimal solution (i.e., the fittest emperor penguin), and \(\vec {Q_{\text {ep}}}\) represents the position vector of an emperor penguin. N() denotes the social forces that help to identify the best optimal solution. The vectors \(\vec {X}\) and \(\vec {C}\) are calculated as follows:

$$\begin{aligned}&\vec {X}=(M\times (T'+ R_{\text {grid}}(\text {Accuracy}))\times \text {Rand}())-T' \end{aligned}$$
(5)
$$\begin{aligned}&R_{\text {grid}}(\text {Accuracy})=\text {Abs}(\vec {Q}-\vec {Q_{\text {ep}}}) \end{aligned}$$
(6)
$$\begin{aligned}&\vec {C}=\text {Rand}(), \end{aligned}$$
(7)

where M is the movement parameter that maintains a gap between search agents for collision avoidance; its value is set to 2. \(T'\) is the temperature profile around the huddle, \(R_{\text {grid}}(\text {Accuracy})\) defines the polygon grid accuracy by comparing the positions of emperor penguins, and Rand() is a random function whose values lie in the range [0, 1].

The function N() is calculated as follows:

$$\begin{aligned}&N(\vec {X})=\Bigg (\sqrt{f\cdot \text {e}^{-x/l} - \text {e}^{-x}}\Bigg )^{2}, \end{aligned}$$
(8)

where e is the base of the natural logarithm, and f and l are control parameters for better exploration and exploitation. The values of f and l lie in the ranges [2, 3] and [1.5, 2], respectively; it has been observed that the EPO algorithm provides better results within these ranges.

2.1.1.4 Relocate the mover

The best obtained solution (the mover) is used to update the positions of the emperor penguins. The selected mover leads the movement of the other search agents in the search space. To find the next position of an emperor penguin, the following equation is used:

$$\begin{aligned}&\vec {Q_{\text {ep}}}(x+1)=\vec {Q(x)}-\vec {X}\cdot \vec {M_{\text {ep}}}, \end{aligned}$$
(9)

where \(\vec {Q_{\text {ep}}}(x+1)\) denotes the updated position of the emperor penguin.
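For concreteness, the update in Eqs. (3)–(9) can be sketched in a few lines of NumPy. This is a simplified, illustrative reading rather than the authors' reference implementation; in particular, the function name `epo_step`, its signature, and the treatment of N() as a per-iteration scalar are our own assumptions.

```python
import numpy as np

def epo_step(positions, best, y, max_itr, rng):
    """One huddling update of EPO (Eqs. 3-9), simplified.

    positions : (n, d) array of emperor penguin positions Q_ep
    best      : (d,) array, current best solution Q
    y         : current iteration (0-based), max_itr : maximum iterations
    """
    M = 2.0                                  # movement parameter
    f = rng.uniform(2.0, 3.0)                # control parameters drawn from the
    l = rng.uniform(1.5, 2.0)                # ranges recommended for EPO
    X_rad = rng.random()                     # polygon radius X
    T = 0.0 if X_rad > 0.5 else 1.0          # temperature, Eq. (3)
    T_prime = T - max_itr / (y - max_itr)    # temperature profile, Eq. (3)
    N = (np.sqrt(f * np.exp(-y / l) - np.exp(-y))) ** 2   # social forces, Eq. (8)

    new_positions = np.empty_like(positions)
    for k, q_ep in enumerate(positions):
        R_grid = np.abs(best - q_ep)                         # Eq. (6)
        X = M * (T_prime + R_grid) * rng.random() - T_prime  # Eq. (5)
        C = rng.random()                                     # Eq. (7)
        M_ep = np.abs(N * best - C * q_ep)                   # distance, Eq. (4)
        new_positions[k] = best - X * M_ep                   # relocation, Eq. (9)
    return new_positions
```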

2.2 Salp swarm algorithm (SSA)

The salp swarm algorithm is a bio-inspired metaheuristic optimization algorithm developed by Mirjalili et al. [31]. It is based on the swarming behavior of salps when navigating and foraging in the deep sea. This swarming behavior is mathematically modeled as a salp chain, which is divided into two groups: the leader and the followers. The leader leads the whole chain from the front, while the followers follow each other. The updated position of the leader in an n-dimensional search environment is described as follows:

$$\begin{aligned} \begin{aligned}&x_i^1= {\left\{ \begin{array}{ll} F_i+c_1((ub_i-lb_i)c_2+lb_i),&{} c_3 \ge 0.5\\ F_i-c_1((ub_i-lb_i)c_2+lb_i),&{} c_3 < 0.5, \end{array}\right. } \end{aligned} \end{aligned}$$
(10)

where \(x_i^1\) represents the position of the first salp (i.e., the leader) in the \(i{\text {th}}\) dimension, \(F_i\) is the position of the food source, and \(ub_i\) and \(lb_i\) are the upper and lower bounds of the \(i{\text {th}}\) dimension, respectively. \(c_1, c_2,\) and \(c_3\) are random numbers.

The coefficient \(c_1\) balances exploration and exploitation and is defined as follows:

$$\begin{aligned}&c_1=2\text {e}^{-\Bigg (\dfrac{4l}{L}\Bigg )^2}, \end{aligned}$$
(11)

where l represents the current iteration and L is the maximum number of iterations, whereas the parameters \(c_2\) and \(c_3\) are random numbers in the range [0, 1].

The positions of the followers are updated as follows:

$$\begin{aligned}&x_i^j=\dfrac{1}{2}AT^2+V_0T,&j \ge 2, \end{aligned}$$
(12)

where \(x_i^j\) shows the position of the \(j{\text {th}}\) follower in the \(i{\text {th}}\) dimension, T represents time, and \(V_0\) represents the initial speed. The parameter A is calculated as follows:

$$\begin{aligned} A= & {} \dfrac{V_{\text {final}}}{V_0}\nonumber \\ V= & {} \dfrac{x-x_0}{T}. \end{aligned}$$
(13)

Considering \(V_0=0\) and taking the time step equal to one iteration, the update can be expressed as:

$$\begin{aligned}&x_i^j=\dfrac{1}{2}(x_i^j+x_i^{j-1}). \end{aligned}$$
(14)

The SSA algorithm is able to solve high-dimensional problems with low computational effort.
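The salp-chain update of Eqs. (10)–(14) can be sketched similarly. Again, this is an illustrative reading rather than the original SSA code; the function name, its signature, and the final boundary clipping are assumptions.

```python
import numpy as np

def ssa_step(positions, food, l, L, lb, ub, rng):
    """One salp-chain update (Eqs. 10-14), simplified.

    positions : (n, d) array of salp positions; row 0 is the leader
    food      : (d,) array, food source F (best solution so far)
    l, L      : current and maximum iteration; lb, ub : (d,) bound arrays
    """
    c1 = 2.0 * np.exp(-(4.0 * l / L) ** 2)           # Eq. (11)
    new = positions.copy()
    for i in range(positions.shape[1]):              # leader update, Eq. (10)
        c2, c3 = rng.random(), rng.random()
        step = c1 * ((ub[i] - lb[i]) * c2 + lb[i])
        new[0, i] = food[i] + step if c3 >= 0.5 else food[i] - step
    for j in range(1, positions.shape[0]):           # follower update, Eq. (14)
        new[j] = 0.5 * (positions[j] + positions[j - 1])
    return np.clip(new, lb, ub)                      # keep salps inside bounds
```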

2.3 Related works

Multiple-solution-based metaheuristic algorithms are further classified into three categories: evolutionary-based, physics-based, and swarm-based algorithms (see Fig. 1). The former are generic population-based metaheuristics inspired by biological evolution, i.e., mutation, recombination, and selection, and they make no assumptions about the fitness landscape. The most popular evolutionary algorithm is the genetic algorithm (GA) [33]. The evolution starts with randomly generated individuals from the given population. The fitness of each individual is computed in each generation, and crossover and mutation operators are applied to individuals to create a new population. The best individuals generate the new population over the course of iterations. Compared to other stochastic methods, the genetic algorithm has the advantages that it can be parallelized with little effort and does not necessarily remain trapped in a sub-optimal local maximum or minimum of the target function. However, GA may converge to a local minimum that steers the search in the wrong direction for some optimization problems. Differential evolution (DE) [34] is another evolutionary-based metaheuristic algorithm that optimizes a problem by maintaining a population of candidate solutions and creating new candidates by combining existing ones; it keeps whichever candidate solution has the best fitness value for the optimization problem. DE can handle non-differentiable and non-linear cost functions, and only a few parameters steer the minimization. Nevertheless, parameter tuning is a main challenge in DE because the same parameter settings do not guarantee the global optimum solution across problems. Apart from these, some other popular evolutionary-based algorithms are genetic programming (GP) [35], evolution strategy (ES) [36], and the biogeography-based optimizer (BBO) [37].

Fig. 1
figure 1

Classification of population-based metaheuristic algorithms

The second category is physics-based algorithms, in which each search agent moves throughout the search space according to physics rules such as gravitational force, electromagnetic force, inertia force, and many more. The well-known physics-based metaheuristic algorithms are simulated annealing (SA) [38] and the gravitational search algorithm (GSA) [39]. Simulated annealing is inspired by annealing in metallurgy, which involves the heating and controlled cooling of a material; these attributes depend on its thermodynamic free energy. SA is advantageous for dealing with non-linear models and noisy data, and its main advantage over other search methods is its ability to approach the global optimal solution. However, it suffers from high computational time, especially if the fitness function is complex and highly non-linear. The gravitational search algorithm is based on the law of gravity and mass interactions: the candidate solutions interact with each other through the gravity force, and their performance is measured by their masses. GSA requires only two parameters to adjust, i.e., mass and velocity, and it is easy to implement. Its ability to find solutions near the global optimum distinguishes GSA from other optimization algorithms. However, it suffers from high computational time and convergence problems if the initial population is not generated well. Some other popular algorithms are: big-bang big-crunch (BBBC) [40], charged system search (CSS) [41], the black hole (BH) algorithm [42], central force optimization (CFO) [43], the small-world optimization algorithm (SWOA) [44], the artificial chemical reaction optimization algorithm (ACROA) [45], the ray optimization (RO) algorithm [46], the galaxy-based search algorithm (GbSA) [47], and curved space optimization (CSO) [48].

The third category is swarm-based algorithms, which are inspired by the collective behavior of social creatures. This collective intelligence is based on the interactions of swarm members with each other. Swarm-based algorithms are easier to implement than evolutionary-based algorithms because they involve fewer operators (i.e., no selection, crossover, or mutation).

The most popular swarm-based algorithm is particle swarm optimization (PSO), which was proposed by Kennedy and Eberhart [49]. In PSO, particles move around the search space guided by a combination of the best solutions found so far [50]. The whole process is repeated until a termination criterion is satisfied. The main advantage of PSO is that it requires no overlapping or mutation computation, and during simulation the most optimistic particle transmits its information to the other particles. However, it suffers from the stagnation problem.

Ant colony optimization (ACO) is another popular swarm intelligence algorithm, proposed by Dorigo [51]. The main inspiration behind this algorithm is the social behavior of ants in an ant colony, whose collective intelligence finds the shortest path between a food source and the nest. ACO can solve the travelling salesman problem and similar problems efficiently, which is an advantage over other approaches. However, the theoretical analysis of a problem is very difficult with ACO, and the computational cost during convergence is high.

The bat-inspired algorithm (BA) [52] mimics the echolocation behavior of bats. Another well-known swarm-based metaheuristic is the artificial bee colony (ABC) algorithm [53], which is inspired by the collective behavior of bees searching for food sources. The spotted hyena optimizer (SHO) [16] is a bio-inspired metaheuristic algorithm that mimics the searching, hunting, and attacking behaviors of spotted hyenas in nature; the main concept behind this technique is the social relationship and collective hunting behavior of spotted hyenas. Cuckoo search (CS) [54] is inspired by the obligate brood parasitism of cuckoo species, which lay their eggs in the nests of other species. Each egg in a nest represents a solution, and a cuckoo egg represents a new solution.

Emperor penguin optimizer (EPO) [30] is a recently developed bio-inspired metaheuristic algorithm that mimics the huddling behaviors of emperor penguins. The main steps of EPO are to generate huddle boundary, compute temperature around the huddle, calculate the distance, and find the effective mover.

The grey wolf optimizer (GWO) [55] is a very popular bio-inspired algorithm for solving real-life constrained problems. It mimics the leadership hierarchy and hunting mechanisms of grey wolves, employing four types of wolves, namely alpha, beta, delta, and omega. The hunting, searching, encircling, and attacking mechanisms are also implemented. To investigate its performance, GWO was tested on well-known test functions and classical engineering design problems.

The multi-verse optimizer (MVO) is a promising optimization algorithm proposed by Mirjalili et al. [56]. It is inspired by the multi-verse theory in physics and builds on three main concepts, i.e., the white hole, black hole, and worm hole. The white hole and black hole concepts drive exploration, while the worm hole helps in the exploitation of the given search space.

The sine cosine algorithm (SCA) was proposed by Mirjalili [57] for solving numerical optimization problems. SCA generates multiple random solutions and fluctuates them towards the best solution using mathematical models based on sine and cosine functions. The convergence speed of SCA is very high, which helps with local optima avoidance.

Other well-known metaheuristic algorithms are the fireworks algorithm (FWA) [58,59,60,61], monkey search [62], the bacterial foraging optimization algorithm [63], the firefly algorithm (FA) [64], the fruit fly optimization algorithm (FOA) [65], the golden section line search algorithm [66], the Fibonacci search method [67], the bird mating optimizer (BMO) [68], krill herd (KH) [69], the artificial fish-swarm algorithm (AFSA) [70], dolphin partner optimization (DPO) [71], the bee collecting pollen algorithm (BCPA) [72], and hunting search (HS) [73].

3 Proposed algorithm

In this section, the motivation behind the proposed algorithm and its design are described in detail.

3.1 Motivation

Nature is a rich source of inspiration, and the majority of new metaheuristic algorithms are nature-inspired. Most nature-inspired algorithms are based on some characteristics of biological systems, so the largest fraction of them are biology-based or bio-inspired. Among bio-inspired algorithms, a special class has been developed by drawing inspiration from swarm intelligence. It has been observed from the literature that multiple-solution or population-based swarm intelligence algorithms are able to solve real-life optimization problems: they can explore throughout the search space and exploit the global optimum. Moreover, population-based techniques are more reliable than single-solution-based techniques because they perform more function evaluations.

According to the no free lunch theorem [74], no optimization algorithm is able to solve all optimization problems. This fact attracts researchers from different fields to propose new optimization algorithms, and it motivated us to propose a new population-based metaheuristic algorithm.

Researchers have pointed out convergence and diversity difficulties on real-life problems. Hence, there is a need to develop an algorithm that maintains both convergence and diversity. In this paper, the navigation and foraging behaviors of the SSA algorithm are used to maintain diversity. The reasons to select these behaviors over others are:

  1. The SSA algorithm eliminates the problem of missing selection individuals.

  2. The values of these behaviors are directly optimized, without any need for niching, which helps to maintain diversity.

  3. SSA ensures that any approximation set with a high-quality value for a particular problem contains all optimal solutions.

However, the calculation of the SSA parameters requires high computational effort, and SSA suffers from the overhead of maintaining the necessary information. To resolve these problems, the huddling behavior of the EPO algorithm is employed to maintain this information. Therefore, a novel hybrid algorithm is proposed that utilizes the features of both EPO and SSA.

3.2 Hybrid emperor penguin and salp swarm algorithm (ESA)

The first step is to initialize the population and the initial parameters of the ESA algorithm, as listed in Table 1. After initialization, the objective value of each search agent is calculated using the FITNESS function, as defined in line 4 of Algorithm 1, and the best search agent in the given search space is identified. Further, the huddling behavior defined by Eq. (9) is applied until a suitable result is found for each search agent. In line 6 of Algorithm 1, the position of each search agent is updated. Then, the leader and follower selection approaches are applied to update the positions of the search agents using Eq. (14).

The objective value of each search agent is then recalculated to find the optimal solutions. If any search agent goes beyond the boundary of the search space, its position is adjusted. The updated objective value of each search agent is calculated, and the parameters are updated whenever a better solution than the previous one is found. The algorithm stops when the stopping criterion is satisfied; this criterion, defined by the user, is typically the maximum number of iterations. Finally, the optimal solution is returned (see Fig. 2).
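Reusing the `epo_step` and `ssa_step` sketches from Sect. 2, the loop described above might look as follows. This is an illustrative rendering of Algorithm 1 under our earlier assumptions, not the authors' reference code.

```python
import numpy as np

def esa(fitness, lb, ub, n=30, max_itr=1000, seed=0):
    """Sketch of the ESA loop: EPO huddling (Eq. 9) followed by
    SSA leader/follower updates (Eq. 14); illustrative only."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n, lb.size))     # initialize population
    fit = np.apply_along_axis(fitness, 1, pop)       # FITNESS, line 4 of Alg. 1
    best, best_fit = pop[fit.argmin()].copy(), fit.min()
    for y in range(max_itr):
        pop = epo_step(pop, best, y, max_itr, rng)           # huddling behavior
        pop = ssa_step(pop, best, y, max_itr, lb, ub, rng)   # swarm behavior
        pop = np.clip(pop, lb, ub)                   # adjust out-of-bound agents
        fit = np.apply_along_axis(fitness, 1, pop)
        if fit.min() < best_fit:                     # keep the better solution
            best, best_fit = pop[fit.argmin()].copy(), fit.min()
    return best, best_fit

# Example: minimize the sphere function (F1) in 30 dimensions
best, val = esa(lambda z: np.sum(z ** 2), np.full(30, -100.0), np.full(30, 100.0))
```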

The pseudo-code of ESA algorithm is shown in Algorithm 1. There are some interesting points about the proposed ESA algorithm which are given below:

  • N(), A, and V assist the candidate solutions to behave more randomly in the search space and are responsible for avoiding collisions between search agents.

  • The convergence behaviors of common optimization algorithms suggest that exploitation tends to increase the speed of convergence, while exploration tends to decrease it. Therefore, better exploration and exploitation are achieved by adjusting the values of N(), A, and V.

  • The huddling and swarm behaviors of ESA in a search region define its effective collective behavior.

Algorithm 1 Pseudo-code of the proposed ESA algorithm
Fig. 2
figure 2

Flowchart of the proposed ESA algorithm

3.3 Computational complexity

In this subsection, the computational complexity of the proposed ESA algorithm is discussed. Both the time and space complexities are given below.

3.3.1 Time complexity

  1. The population initialization process requires \(\mathcal {O}(n \times d)\) time, where n indicates the population size and d indicates the dimension of the given problem.

  2. Evaluating the fitness of the agents over the whole run requires \(\mathcal {O}(\text {Max}_{\text {itr}} \times n \times d)\) time, where \(\text {Max}_{\text {itr}}\) is the maximum number of iterations used to simulate the proposed algorithm.

  3. The huddling and swarm behavior updates require \(\mathcal {O}(N)\) time, where N denotes the cost of the huddling and swarm behaviors of EPO and SSA used for better exploration and exploitation.

Hence, the total time complexity of the ESA algorithm is \(\mathcal {O}(\text {Max}_{\text {itr}}\times n \times d \times N)\).

3.3.2 Space complexity

The space complexity of the ESA algorithm is the maximum amount of space used at any one time, which is determined by its initialization process. Thus, the total space complexity of the ESA algorithm is \(\mathcal {O}(n \times d)\).

4 Experimental results and discussion

This section describes the experimentation on 53 standard benchmark test functions to evaluate the performance of the proposed algorithm. A detailed description of these benchmarks is presented below. Further, the results are compared with well-known metaheuristic algorithms.

4.1 Benchmark test functions

The 53 benchmark test functions are applied to the proposed algorithm to demonstrate its applicability and efficiency. These functions are divided into four main categories: unimodal [75], multimodal [64], fixed-dimension multimodal [64, 75], and IEEE CEC-2017 [76] test functions. The descriptions of these test functions are given in the “Appendix”, where Dim and Range indicate the dimension of the function and the boundary of the search space, respectively, and \(f_{\min }\) denotes the minimum value of the function.

The “Appendix” shows the characteristics of the unimodal, multimodal, fixed-dimension multimodal, and CEC-2017 benchmark test functions. Seven test functions (\(F_1\)–\(F_7\)) form the first category of unimodal test functions; these have only one global optimum. The second category consists of six test functions (\(F_8\)–\(F_{13}\)), and the third category includes ten test functions (\(F_{14}\)–\(F_{23}\)). The functions in these two categories have multiple local solutions, which makes them useful for examining the local optima problem. The fourth category consists of 30 CEC-2017 benchmark test functions (C1–C30).

4.2 Experimental setup

The proposed ESA is compared with the well-known algorithms spotted hyena optimizer (SHO) [16], grey wolf optimizer (GWO) [55], particle swarm optimization (PSO) [49], multi-verse optimizer (MVO) [56], sine cosine algorithm (SCA) [57], gravitational search algorithm (GSA) [39], salp swarm algorithm (SSA) [31], emperor penguin optimizer (EPO) [30], and jSO [77]. The parameter values of these algorithms are set as recommended in their original papers; Table 1 shows the parameter settings. The experimentation was done in Matlab R2014a (8.3.0.532) on a 64-bit Core i7 processor at 3.20 GHz with 8 GB main memory (Tables 2, 3).

Table 1 Parameter settings for algorithms
Table 2 The obtained optimal values on unimodal, multimodal, fixed-dimension multimodal, and CEC-2017 benchmark test functions using different simulation runs (i.e., 100, 500, 800, and 1000)
Table 3 The obtained optimal values on unimodal, multimodal, fixed-dimension multimodal, and CEC-2017 benchmark test functions where the number of iterations is fixed as 1000

4.3 Performance comparison

To demonstrate its effectiveness, the proposed algorithm is compared with the well-known optimization algorithms on the unimodal, multimodal, fixed-dimension multimodal, and CEC-2017 benchmark test functions. The mean and standard deviation of the best optimal solutions are reported in the tables. For each benchmark test function, the ESA algorithm is run for 30 independent runs of 1000 iterations each.

4.3.1 Evaluation of test functions \(F_1\)–\(F_7\)

The unimodal test functions (\(F_1\)–\(F_7\)) are used to assess the exploitation capability of a metaheuristic algorithm. Table 4 shows the mean and standard deviation of the best optimal solutions obtained by the above-mentioned algorithms on the unimodal test functions. For the \(F_1, F_2,\) and \(F_3\) test functions, SHO is the best optimizer, whereas ESA is the second best in terms of mean and standard deviation. ESA provides better results on the \(F_4, F_5, F_6,\) and \(F_7\) benchmark test functions. These results show that ESA is very competitive with the other algorithms and has the exploitation capability to find the best optimal solution very efficiently.

Table 4 Mean and standard deviation of best optimal solution for 30 independent runs on unimodal benchmark test functions

4.3.2 Evaluation of test functions \(F_8\)–\(F_{23}\)

Multimodal test functions evaluate the exploration ability of an optimization algorithm. Tables 5 and 6 depict the performance of the above-mentioned algorithms on the multimodal test functions (\(F_8\)–\(F_{13}\)) and fixed-dimension multimodal test functions (\(F_{14}\)–\(F_{23}\)), respectively. From these tables, it can be seen that ESA finds the optimal solution for nine test problems (i.e., \(F_8, F_{10}, F_{13}, F_{14}, F_{15}, F_{17}, F_{18}, F_{19},\) and \(F_{22}\)). For the \(F_9, F_{11},\) and \(F_{16}\) test functions, SHO provides better optimal results than ESA, which is the second best algorithm on these functions. PSO attains the best optimal solution for the \(F_{12}\) and \(F_{20}\) test functions. For the \(F_{21}\) test function, GWO and ESA are the first and second best optimization algorithms, respectively. The results reveal that ESA obtains competitive results on the majority of the test problems and has good exploration capability.

Table 5 Mean and standard deviation of best optimal solution for 30 independent runs on multimodal benchmark test functions
Table 6 Mean and standard deviation of best optimal solution for 30 independent runs on fixed-dimension multimodal benchmark test functions

4.3.3 Evaluation of IEEE CEC-2017 test functions (\(C_1\)–\(C_{30}\))

This benchmark suite targets real-parameter single-objective optimization without making use of the exact equations of the test functions. Table 7 shows the performance of the proposed ESA algorithm and the other competitive approaches on the IEEE CEC-2017 test functions. The results reveal that ESA achieves the best optimal solution for most of the CEC-2017 benchmark test functions (i.e., C-4, C-5, C-8, C-10, C-11, C-12, C-13, C-20, C-22, C-25, C-26, C-29, and C-30).

Table 7 Mean and standard deviation of best optimal solution for 30 independent runs on CEC-2017 benchmark test functions

4.4 Convergence analysis

Figure 3 shows the convergence curves of the proposed ESA and the competitor algorithms; ESA is very competitive on the benchmark test functions. ESA exhibits three convergence behaviors. In the initial iterations, it converges quickly through the search space due to its adaptive mechanism. Later, it converges steadily towards the optimum as the final iterations are reached. On some functions, rapid convergence is visible from the very first iterations. The results reveal that the ESA algorithm maintains a proper balance between exploration and exploitation to find the global optimum.

4.5 Sensitivity analysis

The proposed ESA algorithm uses two parameters, namely the maximum number of iterations and the number of search agents. The sensitivity of these parameters has been investigated by varying their values while keeping the other parameters fixed.

  1. Maximum number of iterations: the ESA algorithm was run for different numbers of iterations; the values of \(\text {Max}_{\text {iteration}}\) used in experimentation are 100, 500, 800, and 1000. Table 2 shows the effect of the iteration count on the benchmark test functions. The results reveal that ESA converges towards the optimum as the number of iterations increases.

  2. Number of search agents: the ESA algorithm was run for different numbers of search agents (i.e., 30, 50, 80, 100). Table 3 shows the effect of the number of search agents on the benchmark test functions. It is observed that the objective value decreases as the number of search agents increases.

4.6 Scalability study

This subsection describes the effect of scalability on various test functions for the proposed ESA and the other competitive algorithms, with the dimensionality of the test functions varied over different ranges. Figure 4 shows the performance of ESA and the other algorithms on the scalable benchmark test functions. The performance of ESA does not degrade much when the dimensionality of the search space is increased, i.e., ESA is the algorithm least affected by an increase in dimensionality. This is due to the capability of the proposed ESA to balance exploration and exploitation.

4.7 Statistical testing

In addition to standard statistics such as mean and standard deviation, an ANOVA test has been conducted to determine whether the results obtained by the proposed algorithm differ from those of the other competitor algorithms in a statistically significant way. The sample size for the ANOVA test is 30, with a 95% confidence level; a difference is considered statistically significant if its p value is less than 0.05. Table 8 shows the ANOVA test results on the benchmark test functions. The p values obtained for ESA are much smaller than 0.05 for all benchmark test functions; therefore, the proposed ESA is statistically different from the other competitor algorithms.
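For illustration, such a one-way ANOVA test can be reproduced with SciPy's `f_oneway`. The result samples below are synthetic stand-ins for the 30 per-run best values of each algorithm, not the paper's recorded data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Synthetic stand-ins for the recorded best value of each of 30 runs
# per algorithm on one test function (replace with the real logs).
esa_runs = rng.normal(0.10, 0.02, 30)
epo_runs = rng.normal(0.15, 0.03, 30)
ssa_runs = rng.normal(0.14, 0.03, 30)

stat, p = f_oneway(esa_runs, epo_runs, ssa_runs)   # one-way ANOVA
print(f"F = {stat:.3f}, p = {p:.3g}")
if p < 0.05:                                       # 95% confidence level
    print("The differences between algorithms are statistically significant.")
```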

Fig. 3
figure 3

Convergence analysis of the proposed ESA and other competitor algorithms on benchmark test problems

Fig. 4
figure 4

Effect of scalability on the performance of ESA, SSA, and EPO algorithms

Table 8 ANOVA test results

5 ESA for engineering problems

The proposed ESA algorithm has been tested and validated on six constrained and one unconstrained engineering design problems: pressure vessel, speed reducer, welded beam, tension/compression spring, 25-bar truss, rolling element bearing, and displacement of loaded structure [78, 79]. These design problems have different constraints of different natures. Several types of penalty functions can be used to handle such constraints: static, dynamic, annealing, adaptive, co-evolutionary, and death penalties [80].

The death penalty function handles solutions that violate the constraints: it assigns them a fitness value of zero, discarding infeasible solutions during optimization, i.e., it does not employ any information about infeasible solutions. Due to its low computational complexity and simplicity, the ESA algorithm is equipped with the death penalty function to handle both the constrained and unconstrained engineering design problems.
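A minimal sketch of this death penalty scheme is given below. The wrapper name is ours, and since the problems here are minimized, it returns \(+\infty\) as the discarding value rather than the zero fitness a maximization setting would use.

```python
import numpy as np

def death_penalty(objective, constraints):
    """Wrap a minimization objective so infeasible solutions are
    discarded outright (death penalty). The paper phrases this as
    assigning a zero fitness; for minimization, returning +inf
    plays the same discarding role."""
    def penalized(z):
        # constraints are functions g_i with feasibility g_i(z) <= 0
        if any(g(z) > 0 for g in constraints):
            return np.inf
        return objective(z)
    return penalized
```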

5.1 Constrained engineering problems

This subsection describes six constrained engineering problems and compares the proposed algorithm with other competitor approaches on them. A statistical analysis of these problems is also carried out to validate the efficiency and effectiveness of the proposed algorithm.

5.1.1 Pressure vessel design problem

This problem was first proposed by Kannan and Kramer [81] to minimize the total cost of the material, forming, and welding of a cylindrical vessel. A schematic view of the pressure vessel, which is capped at both ends by hemispherical heads, is shown in Fig. 5. The problem has four design variables:

  • \(T_s\) (\(z_1\), thickness of the shell).

  • \(T_h\) (\(z_2\), thickness of the head).

  • R (\(z_3\), inner radius).

  • L (\(z_4\), length of the cylindrical section without considering the head).

Among these four design variables, R and L are continuous, whereas \(T_s\) and \(T_h\) are integer multiples of 0.0625 in. The mathematical formulation of this problem is given below:

$$\begin{aligned} \begin{aligned}&\text {Consider}\;\vec {z} = [z_1\;z_2\; z_3 \; z_4] = [T_s\; T_h\; R \; L],\\&\text {Minimize}\;f(\vec {z}) = 0.6224z_1z_3z_4 + 1.7781z_2z_3^2 + 3.1661z_1^2z_4 + 19.84z_1^2z_3,\\&\text {Subject to:}\\&g_1(\vec {z}) = -z_1 + 0.0193z_3 \le 0,\\&g_2(\vec {z}) = -z_2 + 0.00954z_3 \le 0,\\&g_3(\vec {z}) = -\pi z_3^2z_4 - \dfrac{4}{3}\pi z_3^3 + 1{,}296{,}000 \le 0,\\&g_4(\vec {z}) = z_4 - 240 \le 0,\\&\text {where}\\&1\times 0.0625 \le z_1,\; z_2 \le 99 \times 0.0625,\; 10.0 \le z_3,\; z_4 \le 200.0. \end{aligned} \end{aligned}$$
(15)
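As a worked illustration, the cost and constraints of Eq. (15) can be evaluated with the death penalty as follows; the two sample designs are arbitrary points chosen only to show feasible versus infeasible behavior, not solutions reported in the paper.

```python
import numpy as np

def pressure_vessel(z):
    """Cost and constraints of Eq. (15); infeasible designs are
    discarded via the death penalty (returns +inf)."""
    z1, z2, z3, z4 = z
    g = [
        -z1 + 0.0193 * z3,                                                  # g1
        -z2 + 0.00954 * z3,                                                 # g2
        -np.pi * z3 ** 2 * z4 - (4.0 / 3.0) * np.pi * z3 ** 3 + 1296000.0,  # g3
        z4 - 240.0,                                                         # g4
    ]
    if max(g) > 0:
        return np.inf
    return (0.6224 * z1 * z3 * z4 + 1.7781 * z2 * z3 ** 2
            + 3.1661 * z1 ** 2 * z4 + 19.84 * z1 ** 2 * z3)

print(pressure_vessel([1.125, 0.625, 55.0, 100.0]))  # feasible: finite cost
print(pressure_vessel([1.125, 0.625, 55.0, 300.0]))  # violates g4 (L > 240): inf
```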

Table 9 compares the best solutions obtained by ESA and the competitor algorithms EPO, SHO, GWO, PSO, MVO, SCA, GSA, and SSA. The proposed ESA attains its optimal solution at \(z_{1-4} =(0.778092, 0.383236, 40.315052, 200.00000)\) with a corresponding fitness value of \(f(z_{1-4}) = 5879.9558\). From this table, it can be seen that the ESA algorithm is able to find the best optimal design with minimum cost.

Fig. 5
figure 5

Schematic view of pressure vessel problem

The statistical results for the pressure vessel design problem are tabulated in Table 10, which shows that ESA surpasses the other algorithms in terms of best, mean, and median values. The convergence behavior of the proposed ESA towards the best optimal design is shown in Fig. 6.

Fig. 6
figure 6

Convergence analysis of ESA for pressure vessel design problem

Table 9 Comparison of best solution obtained from different algorithms for pressure vessel design problem
Table 10 Statistical results obtained from different algorithms for pressure vessel design problem

5.1.2 Speed reducer design problem

The speed reducer design problem is a challenging benchmark because of its seven design variables [82], as shown in Fig. 7. The objective is to minimize the weight of the speed reducer subject to constraints on [83]:

  • Bending stress of the gear teeth.

  • Surface stress.

  • Transverse deflections of the shafts.

  • Stresses in the shafts.

The seven design variables (\(z_1\)–\(z_7\)) are the face width (b), module of teeth (m), number of teeth in the pinion (p), length of the first shaft between bearings (\(l_1\)), length of the second shaft between bearings (\(l_2\)), diameter of the first shaft (\(d_1\)), and diameter of the second shaft (\(d_2\)). The problem is formulated as follows:

$$\begin{aligned} \begin{aligned}&\text {Consider}\; \vec {z} = [z_1\; z_2\; z_3 \; z_4\; z_5\; z_6\; z_7] = [b\; m\; p\; l_1\; l_2\; d_1\; d_2],\\&\text {Minimize}\; f(\vec {z}) = 0.7854z_1z_2^2(3.3333z_3^2 + 14.9334z_3 - 43.0934)\\ {}&-\,1.508z_1(z_6^2 + z_7^2) + 7.4777(z_6^3 + z_7^3) + 0.7854(z_4z_6^2 + z_5z_7^2),\\&\text {Subject to:}\\&g_1(\vec {z}) = \dfrac{27}{z_1z_2^2z_3} - 1 \le 0,\\&g_2(\vec {z}) = \dfrac{397.5}{z_1z_2^2z_3^2} - 1 \le 0,\\&g_3(\vec {z}) = \dfrac{1.93z_4^3}{z_2z_6^4z_3} - 1 \le 0,\\&g_4(\vec {z}) = \dfrac{1.93z_5^3}{z_2z_7^4z_3} - 1 \le 0,\\&g_5(\vec {z}) = \dfrac{[(745(z_4/z_2z_3))^2 + 16.9 \times 10^6]^{1/2}}{110z_6^3} - 1 \le 0,\\&g_6(\vec {z}) = \dfrac{[(745(z_5/z_2z_3))^2 + 157.5 \times 10^6]^{1/2}}{85z_7^3} - 1 \le 0,\\&g_7(\vec {z}) = \dfrac{z_2z_3}{40} - 1 \le 0,\\&g_8(\vec {z}) = \dfrac{5z_2}{z_1} - 1 \le 0,\\&g_9(\vec {z}) = \dfrac{z_1}{12z_2} - 1 \le 0,\\&g_{10}(\vec {z}) = \dfrac{1.5z_6 + 1.9}{z_4} - 1 \le 0,\\&g_{11}(\vec {z}) = \dfrac{1.1z_7 + 1.9}{z_5} - 1 \le 0,\\&\text {where}\\&2.6 \le z_1 \le 3.6, \;0.7\le z_2 \le 0.8, \;17 \le z_3 \le 28,\;7.3 \le z_4 \le 8.3,\\&7.3 \le z_5 \le 8.3, \;2.9 \le z_6 \le 3.9, \;5.0 \le z_7 \le 5.5. \; \end{aligned} \end{aligned}$$
(16)

Table 11 compares the best solutions obtained by the various optimization algorithms. The proposed ESA attains its optimal solution at \(z_{1-7} =(3.50120, 0.7, 17, 7.3, 7.8, 3.33415, 5.26531)\) with a corresponding fitness value of \(f(z_{1-7}) = 2993.9584\). The statistical results of ESA and the competitor optimization algorithms are given in Table 12.

Table 11 Comparison of best solution obtained from different algorithms for speed reducer design problem
Table 12 Statistical results obtained from different algorithms for speed reducer design problem

The results show that ESA outperforms the other metaheuristic optimization algorithms. Figure 8 shows the convergence behavior of ESA on the speed reducer design problem.

Fig. 7
figure 7

Schematic view of speed reducer problem

Fig. 8
figure 8

Convergence analysis of ESA for speed reducer design problem

5.1.3 Welded beam design problem

The main objective of this design problem is to minimize the fabrication cost of the welded beam shown in Fig. 9. The optimization constraints are the shear stress (\(\tau\)), the bending stress (\(\sigma\)) in the beam, the buckling load (\(P_c\)) on the bar, and the end deflection (\(\delta\)) of the beam. The four design variables (\(z_1\)–\(z_4\)) are:

  • h (\(z_1\), thickness of weld)

  • l (\(z_2\), length of the clamped bar)

  • t (\(z_3\), height of the bar)

  • b (\(z_4\), thickness of the bar)

The mathematical formulation is described as follows:

$$\begin{aligned} \begin{aligned}&\text {Consider}\; \vec {z} = [z_1\; z_2\; z_3 \; z_4] = [h\; l\; t\; b],\\&\text {Minimize}\; f(\vec {z}) = 1.10471z_1^2z_2 + 0.04811z_3z_4(14.0 + z_2),\\&\text {Subject to:}\\&g_1(\vec {z}) = \tau {(\vec {z})} - 13{,}600 \le 0,\\&g_2(\vec {z}) = \sigma {(\vec {z})} - 30{,}000 \le 0,\\&g_3(\vec {z}) = \delta {(\vec {z})} - 0.25 \le 0,\\&g_4(\vec {z}) = z_1 - z_4 \le 0,\\&g_5(\vec {z}) = 6000 - P_c(\vec {z}) \le 0,\\&g_6(\vec {z}) = 0.125 - z_1 \le 0,\\&g_7(\vec {z}) = 1.10471z_1^2 + 0.04811z_3z_4(14.0 + z_2)- 5.0 \le 0,\\&\text {where} \\&0.1 \le z_1,\; z_4 \le 2.0,\; 0.1 \le z_2,\; z_3 \le 10.0,\\&\tau {(\vec {z})} = \sqrt{(\tau ^{'})^2 + (\tau ^{''})^2 + (l\tau ^{'}\tau ^{''})/\sqrt{0.25(l^2 + (h + t)^2)}},\\&\tau ^{'} = \dfrac{6000}{\sqrt{2}hl}, \qquad \sigma {(\vec {z})} = \dfrac{504{,}000}{t^2b}, \qquad \delta {(\vec {z})} = \dfrac{65{,}856{,}000}{(30\times 10^6)bt^3},\\&\tau ^{''} = \dfrac{6000(14 + 0.5l)\sqrt{0.25(l^2 + (h + t)^2)}}{2[0.707hl(l^2/12 + 0.25(h + t)^2)]},\\&P_c(\vec {z}) = 64{,}746.022(1 - 0.0282346t)tb^3.\\ \end{aligned} \end{aligned}$$
(17)

Table 13 compares the best solution obtained by the proposed ESA with those of the other metaheuristics. The proposed ESA attains its optimal solution at \(z_{1-4} =(0.203296, 3.471148, 9.035107, 0.201150)\) with a corresponding fitness value of \(f(z_{1-4})=1.721026\). Table 14 shows the statistical comparison of the proposed algorithm and its competitors; ESA is superior in terms of best, mean, and median values.

Fig. 9
figure 9

Schematic view of welded beam problem

Figure 10 shows the convergence of the best optimal solution obtained by ESA for the welded beam design problem.

5.1.4 Tension/compression spring design problem

The objective of this design problem is to minimize the weight of a tension/compression spring (see Fig. 11). The optimization constraints are:

  • Shear stress.

  • Surge frequency.

  • Minimum deflection.

Fig. 10
figure 10

Convergence analysis of ESA for welded beam design problem

There are three design variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (P). The mathematical formulation of this problem is given below:

Table 13 Comparison of best solution obtained from different algorithms for welded beam design problem
Table 14 Statistical results obtained from different algorithms for welded beam design problem
$$\begin{aligned} \begin{aligned}&\text {Consider}\; \vec {z} = [z_1\; z_2\; z_3] = [d\; D\; P],\\&\text {Minimize}\; f(\vec {z}) = (z_3 + 2)z_2z_1^2,\\&\text {Subject to:} \\&g_1(\vec {z}) = 1 - \dfrac{z_2^3z_3}{71785z_1^4} \le 0,\\&g_2(\vec {z}) = \dfrac{4z_2^2-z_1z_2}{12566(z_2z_1^3-z_1^4)} + \dfrac{1}{5108z_1^2} - 1 \le 0,\\&g_3(\vec {z}) = 1 - \dfrac{140.45z_1}{z_2^2z_3} \le 0,\\&g_4(\vec {z}) = \dfrac{z_1+z_2}{1.5} - 1 \le 0,\\&\text {where}\\&0.05 \le z_1 \le 2.0,\; 0.25 \le z_2 \le 1.3,\; 2.0 \le z_3 \le 15.0. \end{aligned} \end{aligned}$$
(18)

Table 15 compares the best solutions obtained by the proposed ESA and the competitor algorithms in terms of design variables and objective values. ESA obtains its best solution at \(z_{1-3}=(0.051080, 0.342895, 12.0895)\) with an objective function value of \(f(z_{1-3})=0.012655526\). The results reveal that ESA performs better than the other competitor algorithms. The statistical results for this problem are compared in Table 16, which shows that ESA provides better statistical results than the other optimization algorithms in terms of best, mean, and median values.

Table 15 Comparison of best solution obtained from different algorithms for tension/compression spring design problem
Table 16 Statistical results obtained from different algorithms for tension/compression spring design problem
Fig. 11
figure 11

Schematic view of tension/compression spring problem

Fig. 12
figure 12

Convergence analysis of ESA for tension/compression spring design problem

Figure 12 shows the convergence behavior of the best optimal solution obtained by the proposed ESA.

5.1.5 25-bar truss design problem

The truss design problem is a popular optimization problem [84, 85] (see Fig. 14). The structure has 10 nodes and 25 bars (cross-sectional members), which are grouped into eight categories:

  • Group 1: \(A_1\)

  • Group 2: \(A_2, A_3, A_4, A_5\)

  • Group 3: \(A_6, A_7, A_8, A_9\)

  • Group 4: \(A_{10}, A_{11}\)

  • Group 5: \(A_{12}, A_{13}\)

  • Group 6: \(A_{14}, A_{15}, A_{17}\)

  • Group 7: \(A_{18}, A_{19}, A_{20}, A_{21}\)

  • Group 8: \(A_{22}, A_{23}, A_{24}, A_{25}\)

The other parameters of this problem are as follows:

  • \(\rho\) = 0.0272 N/cm\(^3\) (0.1 lb/in.\(^3\))

  • E = 68,947 MPa (10,000 ksi)

  • Displacement limitation = 0.35 in.

  • Maximum displacement = 0.3504 in.

  • Design variable set = \(\{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4 \}\)

Table 17 lists the member stress limitations for this problem, and the loading conditions for the 25-bar truss are presented in Table 18. The best solutions obtained by several algorithms are compared in Table 19. The proposed ESA is better than the other algorithms in terms of best, average, and standard deviation values, and it converges very efficiently towards the optimal solution, as shown in Fig. 13.

Table 17 Member stress limitations for 25-bar truss design problem
Table 18 Two loading conditions for the 25-bar truss design problem
Table 19 Statistical results obtained from different algorithms for 25-bar truss design problem
Fig. 13
figure 13

Convergence analysis of ESA for 25-bar truss design problem

5.1.6 Rolling element bearing design problem

The main objective of this problem is to maximize the dynamic load carrying capacity of a rolling element bearing, as depicted in Fig. 15. There are ten decision variables: pitch diameter (\(D_m\)), ball diameter (\(D_b\)), number of balls (Z), inner (\(f_i\)) and outer (\(f_o\)) raceway curvature coefficients, \(K_{\text {Dmin}}\), \(K_{\text {Dmax}}\), \(\varepsilon\), e, and \(\zeta\). The mathematical representation of this problem is given below:

$$\begin{aligned} \begin{aligned}&\text {Maximize}\; C_d = {\left\{ \begin{array}{ll} f_cZ^{2/3}D_b^{1.8}, \quad \quad &{}\text {if} \; D_b \le 25.4\, \text {mm}\\ 3.647f_cZ^{2/3}D_b^{1.4}, \quad \quad &{}\text {if} \; D_b > 25.4\, \text {mm}\\ \end{array}\right. }\\&\text {Subject to:}\\&g_1(\vec {z}) = \dfrac{\phi _0}{2\sin ^{-1}(D_b/D_m)} - Z + 1 \le 0,\\&g_2(\vec {z}) = 2D_b - K_{\text {Dmin}}(D - d) \ge 0,\\&g_3(\vec {z}) = K_{\text {Dmax}}(D - d) - 2D_b \ge 0,\\&g_4(\vec {z}) = \zeta B_w - D_b \le 0,\\&g_5(\vec {z}) = D_m - 0.5(D + d) \ge 0,\\&g_6(\vec {z}) = (0.5 + e)(D + d) - D_m \ge 0,\\&g_7(\vec {z}) = 0.5(D - D_m - D_b) - \varepsilon D_b \ge 0,\\&g_8(\vec {z}) = f_i \ge 0.515,\\&g_9(\vec {z}) = f_o \ge 0.515,\\&\text {where}\\&f_c= 37.91\Bigg [1 + \Bigg \{1.04 \Bigg (\dfrac{1 - \gamma }{1 + \gamma } \Bigg )^{1.72} \Bigg (\dfrac{f_i(2f_o - 1)}{f_o(2f_i - 1)} \Bigg )^{0.41} \Bigg \}^{10/3} \Bigg ]^{-0.3}\\&\quad \times \Bigg [\dfrac{\gamma ^{0.3}(1 - \gamma )^{1.39}}{(1 + \gamma )^{1/3}} \Bigg ] \Bigg [\dfrac{2f_i}{2f_i - 1} \Bigg ]^{0.41}\\&x=[\{(D - d)/2 - 3(T/4)\}^2 + \{D/2 - T/4 - D_b\}^2 - \{d/2 + T/4\}^2]\\&y=2\{(D - d)/2 - 3(T/4)\}\{D/2 - T/4 - D_b\}\\&\phi _0 = 2\pi - 2\cos ^{-1}\Bigg (\dfrac{x}{y} \Bigg )\\&\gamma = \dfrac{D_b}{D_m}, \quad f_i = \dfrac{r_i}{D_b}, \quad f_o = \dfrac{r_o}{D_b}, \quad T = D - d - 2D_b\\&D = 160, \quad d = 90, \quad B_w = 30, \quad r_i = r_o = 11.033\\&0.5(D + d) \le D_m \le 0.6(D + d), \quad 0.15(D - d) \le D_b\\&\le 0.45(D - d), \quad 4 \le Z \le 50, \quad 0.515 \le f_i\; \text {and} \; f_o \le 0.6,\\&0.4 \le K_{\text {Dmin}} \le 0.5, \quad 0.6 \le K_{\text {Dmax}} \le 0.7, \quad 0.3 \le \varepsilon \le 0.4,\\&0.02 \le e \le 0.1, \quad 0.6 \le \zeta \le 0.85. \end{aligned} \end{aligned}$$
(19)
Fig. 14
figure 14

Schematic view of 25-bar truss problem

Table 20 compares the best optimal solutions obtained by the different algorithms. The proposed ESA attains its optimal solution at \(z_{1-10} =(125, 21.41750, 10.94109, 0.510, 0.515, 0.4, 0.7, 0.3, 0.02, 0.6)\) with a corresponding fitness value of \(f(z_{1-10}) = 85070.085\). The statistical results for the rolling element bearing design problem are compared in Table 21. The results reveal that the proposed ESA gives the best solution with considerable improvement.

Table 20 Comparison of best solution obtained from different algorithms for rolling element bearing design problem
Table 21 Statistical results obtained from different algorithms for rolling element bearing design problem
Fig. 15
figure 15

Schematic view of rolling element bearing problem

Figure 16 shows the convergence behavior of the ESA algorithm and reveals that ESA is able to achieve the best optimal solution.

5.2 Unconstrained engineering problem

This subsection describes the displacement of loaded structure design problem, whose objective is to minimize the potential energy.

5.2.1 Displacement of loaded structure design problem

A displacement is a vector that defines the shortest distance between the initial and final positions of a given point.

Fig. 16
figure 16

Convergence analysis of ESA for rolling element bearing design problem

The objective of this problem is to minimize the potential energy so as to reduce the excess load on the structure. The loaded structure, whose potential energy (\(f(\vec {z})\)) should be minimal, is shown in Fig. 17. The problem is stated as follows:

$$\begin{aligned} \begin{aligned}&f(\vec {z}) = \text {Minimize}_{z_1, z_2}\; \pi \\&\text {where}\\&\pi = \dfrac{1}{2}K_1u_1^2 + \dfrac{1}{2}K_2u_2^2 - F_zz_1 - F_yz_2\\&K_1 = 8\,\text {N/cm}, K_2 = 1\,\text {N/cm}, F_y = 5\,\text {N}, F_z = 5\,\text {N}\\&u_1 = \sqrt{z_1^2+(10-z_2)^2}-10, \quad u_2 = \sqrt{z_1^2+(10+z_2)^2}-10. \end{aligned} \end{aligned}$$
(20)
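Because this problem is unconstrained and two-dimensional, it can also be minimized directly as a sanity check. The sketch below uses SciPy's Nelder-Mead as a stand-in optimizer; it is not the paper's experimental setup.

```python
import numpy as np
from scipy.optimize import minimize

K1, K2, Fy, Fz = 8.0, 1.0, 5.0, 5.0   # spring constants and loads from Eq. (20)

def potential_energy(z):
    """Potential energy pi of the loaded structure, Eq. (20)."""
    z1, z2 = z
    u1 = np.sqrt(z1 ** 2 + (10.0 - z2) ** 2) - 10.0   # deflection of spring 1
    u2 = np.sqrt(z1 ** 2 + (10.0 + z2) ** 2) - 10.0   # deflection of spring 2
    return 0.5 * K1 * u1 ** 2 + 0.5 * K2 * u2 ** 2 - Fz * z1 - Fy * z2

res = minimize(potential_energy, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x, res.fun)   # displacements and the minimum potential energy found
```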

Table 22 compares the best optimal solutions obtained by ESA and the other metaheuristics, including EPO, SHO, GWO, PSO, MVO, SCA, GSA, and SSA. The proposed ESA attains the best optimum cost at \(\pi =167.2635\). It can be seen that ESA is able to minimize the potential energy of the loaded structure.

The statistical results for the reported algorithms are tabulated in Table 23, which shows that the results obtained by ESA are far better than those of the other competitor algorithms in terms of best, mean, and median values. Figure 18 shows the convergence of the best solution obtained by the proposed ESA algorithm (Tables 24, 25, 26, 27).

In summary, ESA is an effective optimizer for solving both constrained and unconstrained engineering design problems, offering low computational cost and fast convergence.

Table 22 Comparison of best solution obtained from different algorithms for displacement of loaded structure problem
Fig. 17
figure 17

Schematic view of displacement of loaded structure

Fig. 18
figure 18

Convergence analysis of ESA for displacement of loaded structure problem

Table 23 Statistical results obtained from different algorithms for displacement of loaded structure problem

6 Conclusion and future works

This paper has presented a hybrid swarm-based bio-inspired metaheuristic algorithm called the emperor penguin and salp swarm algorithm (ESA). The fundamental concepts behind this algorithm are the huddling and swarm behaviors of the EPO and SSA algorithms, respectively. The proposed ESA algorithm has been tested on fifty-three benchmark test functions, and the statistical analysis shows that ESA attains the global optimal solution with better convergence than the other competitive algorithms.

On the CEC-2017 benchmark test functions, the performance of ESA is accurate and consistent. The effect of scalability on the performance of ESA has also been investigated; the results reveal that ESA is less susceptible to increases in dimensionality than the other algorithms. A sensitivity analysis of ESA has also been carried out.

Moreover, ESA has been applied to six constrained and one unconstrained engineering design problems to show its effectiveness and efficiency. On the basis of these results, it can be concluded that the proposed ESA is applicable to engineering design problems. In the future, ESA may be extended to solve multi-objective optimization problems; binary and many-objective versions of ESA would be valuable contributions. ESA may also be extended to large-scale online optimization and further engineering applications.