Introduction

Efficient optimization techniques are essential to tackle the countless complex real-world optimization applications across multiple technical and academic disciplines [1]. Traditional search strategies, such as enumerative search and branch and bound, struggle to address major optimization problems with acceptable convergence and accuracy in a reasonable time frame. In contrast to conventional methods, metaheuristic algorithms deliver better, near-optimal results in a reasonable amount of time. These algorithms are relatively easy to implement and are designed to reach the global optimum without becoming trapped in local optima regions. Metaheuristics are generally developed by taking inspiration from the collaboration and indirect communication mechanisms found in natural and biological evolution, e.g., genetic programming (GP) [2], differential evolution (DE) [3], the immune network algorithm (INA) [4], and the dendritic cell algorithm (DCA) [5]; from the physical sciences, such as the equilibrium optimizer (EO) [6] and Henry's gas solubility optimization (HGSO) [7]; from swarm behavior, e.g., symbiotic organisms search (SOS) [8] and the whale optimization algorithm (WOA) [9]; and from imitating human problem-solving, such as teaching-learning-based optimization (TLBO) [10].

The development of new metaheuristic algorithms with novel concepts is a common practice; some recently published algorithms are discussed below. Abdollahzadeh et al. [11] designed the artificial gorilla troops optimizer (GTO) based on the social behavior of gorillas. Azizi [12] proposed the Atomic Orbital Search (AOS) algorithm, inspired by quantum mechanics and the quantum-based atomic model. Hashim et al. [13] developed the Honey Badger Algorithm (HBA), motivated by honey badgers' sophisticated foraging behavior. Hasani Zade and Mansouri [14] used the prey-predator interaction of animals to develop the predator-prey optimization (PPO) algorithm.

Metaheuristic algorithms provide effective solutions compared to traditional optimization techniques, especially when applied to highly nonlinear, multidimensional, and large-scale problems. Despite their different working and inspiration mechanisms, these algorithms share a common way of searching the solution space through exploration and exploitation. In the exploration phase, the algorithm covers the entire search space with largely random moves, while the exploitation phase is responsible for finding better solutions close to the current global best solution. Balancing the two phases is the most critical task and the key to an algorithm's success [15].

Besides the numerous advantages of metaheuristics, the no-free-lunch (NFL) theorem [16] argues that no single algorithm can solve all kinds of problems. There is no guarantee that an algorithm that provides the best solutions on one set of problems will perform consistently better on another set of functions or problems. Moreover, the quality of the final solution depends on the choice of values for the various metaheuristic parameters, and metaheuristics often struggle with inherent problems such as premature convergence and stagnation in local optima.

Several researchers have improved the WOA throughout the years to address its shortcomings. The following are some of WOA's recent enhancements and modifications. Kaur and Arora [17] developed the chaotic WOA (CWOA) by using chaotic maps to change WOA's parameters and speed up convergence. Sun et al. [18] proposed the modified WOA (MWOA), which used a non-linear dynamic strategy, Levy flight, and quadratic interpolation to avoid local optima and make solutions more accurate. Chen et al. [19] employed Levy flight and a chaotic local search mechanism in the balanced WOA (BWOA) to avoid early convergence by enhancing solution variety. Laskar et al. [20] incorporated WOA into particle swarm optimization (PSO), proposing the hybrid whale–PSO algorithm (HWPSO) to avoid the stagnation effect; they also added the forced whale and capping phenomena to avoid local optima and speed up convergence. Bozorgi et al. [21] presented two WOA variations, IWOA and IWOA+, which increase WOA's exploration capability by utilizing DE's superior exploration ability.

A DE-based WOA with chaotic map and opposition-based learning (DEWCO) was proposed by Elaziz et al. [22]. To increase the solution-finding speed of WOA, Yildiz [23] put forward the hybrid whale–Nelder–Mead algorithm (HWOANM), a hybrid WOA with the aid of the Nelder–Mead (NM) algorithm. Chakraborty et al. [24] devised a new version of the WOA algorithm called WOAmM. The authors changed the mutualism strategy of the SOS algorithm and then used it in WOA to balance the search process. Khadanga et al. [25] suggested a modified WOA (MWOA) by using the encircling prey phase and a bubble-net attacking phase to avoid trapping at local optima and used the algorithm in the load frequency controller design of a power system consisting of a PV grid and thermal generator. In the random spare reinforced WOA (RDWOA) [26], the authors used a double adaptive weight mechanism to improve the ability to explore at the beginning of the search and the ability to exploit at the end. In success history–based adaptive DE with WOA (SHADE-WOA) [27], the authors merged success history–based adaptive DE (SHADE) with updated WOA to create a hybrid algorithm. An information-sharing mechanism was used to assist the algorithms in efficiently exploring and exploiting the search space.

In [28], the authors introduced an improved version of WOA, called the Levy-flight-based WOA (LWOA); the levy-flight mechanism was incorporated with the WOA to enhance the ability to avoid premature convergence and boost global searchability. The method was used to solve the underwater image-matching problem in an unmanned underwater vehicle vision system. Kushwah et al. [29] suggested a new WOA variant with a roulette wheel selection strategy to enhance the convergence speed of WOA and applied it to the weight-updating technique of artificial neural networks. Fuqiang et al. [30] designed a bi-level WOA to solve the scheduling of risk management problems from IT projects.

Anitha et al. [31] designed a modified whale optimization algorithm (MWOA). The authors controlled the whale positions using the cosine function, and the whales’ movements were controlled by applying correction factors while updating their positions. The hunger search–based WOA (HSWOA) [32] was proposed by Chakraborty et al., integrating the concept of hunger into the WOA to minimize the demerits of WOA. An improved WOA (ImWOA) [33] was proposed by altering the exploration phase of the basic WOA and incorporating a new whale hunting concept, “Cooperative hunting,” in the exploration phase of the WOA to balance the search activity. Lin et al. [34] developed the niching hybrid heuristic WOA (NHWOA); the niching strategy was used to diversify the solutions and control early convergence. Parameters of WOA were modified heuristically to encourage search agents’ capacity for exploration during evolution. Avoidance of local solutions was ensured by executing a perturbation to the location of all the solutions. An enhanced WOA (EWOA) [35] was designed by Cao et al. to introduce improved dynamic opposition-based learning, and they converted the “Encircling Prey” phase into an adaptive phase. The modifications struck a balance between global and local searches in the algorithm.

Contrary to the previous research, only a local or global elite solution is used in this work, and an elite-based whale optimization algorithm (EBWOA) is proposed. Choosing a local elite solution from a group of random solutions allows the search process to shift the quest into different regions of the search domain. Thus, the algorithm explores around the local best solution and, using an inertia weight, examines the neighborhood of the potential solution during both exploration and exploitation. The algorithm's convergence speed is accelerated by the use of the global best solution during the bubble-net attack phase. The following are the main contributions of the study:

  • The encircling prey phase (with the local best solution) or the bubble-net attack phase (with the global best solution) is selected using a traversing parameter Ω, and an inertia weight is utilized to accomplish exploration or exploitation. The search prey phase of basic WOA is eliminated to reduce run time.

  • The numerical results of benchmark functions are compared with basic algorithms and WOA variants. The evaluated results of the IEEE CEC 2019 function set are compared with a list of modified variants.

  • Performance is verified using statistical tests and a variety of analytics.

  • EBWOA also solves two real engineering design problems and the classical cloud scheduling problem to schedule bag-of-tasks applications over cloud resources.

The rest of the work is structured as follows: “Whale Optimization Algorithm” presents the traditional WOA. “Proposed Elite-Based Whale Optimization Algorithm (EBWOA)” contains a complete discussion of the proposed algorithm. “Discussion of Numerical Results” compares the results of EBWOA with numerous basic and modified algorithms and on two real engineering problems. “Analysis of EBWOA’s Performance with Various Metrics” examines the performance of EBWOA using various performance measurement metrics. “Solving Cloud Scheduling Problem using EBWOA” describes the cloud scheduling problem and compares the evaluated results. “Conclusion” concludes the study with final remarks.

Whale Optimization Algorithm

The WOA was developed to mimic the behavior of humpback whales and belongs to the swarm-based techniques. WOA, like other metaheuristic algorithms, begins with a set of parameters and a set of search agents that make up the underlying population. The search cycle alternates between local and global search phases, with each iteration relying on parameter selection to discover the best solution. After a set number of cycles, the method terminates, returning the best objective function value and its corresponding solution. The different phases of the WOA are discussed below:

Exploration Phase

Highly random motions are preferable during this phase of the algorithm to explore the search space efficiently. The whales move around, investigating the whole search area. The procedure can be stated mathematically as follows:

$$\overline{Dt }=|C\cdot {Sol}_{r}^{\left(i\right)}-{Sol}^{\left(i\right)}|$$
(1)
$${Sol}^{\left(i+1\right)}= {Sol}_{r}^{\left(i\right)}-{A}^{\prime}\cdot \overline{Dt }$$
(2)

\(Sol\) represents a population solution, \(So{l}_{r}\) is selected arbitrarily from the present population, \(i\) is the current iteration, and \(\overline{Dt }\) is the difference between \({Sol}_{r}^{\left(i\right)}\) and \({Sol}^{\left(i\right)}\) in Eqs. (1) and (2). The (·) operator represents component-wise multiplication, and |·| denotes the absolute value.

The following equations are used to calculate parameters A′ and C:

$${A}^{\prime}=2\hat{a}\times rnd-\hat{a}$$
(3)
$$C=2\times rnd$$
(4)

As the iteration count rises, the variable \(\hat{a}\) decreases linearly from 2 to 0, and \(rnd\) is a random value in [0, 1].
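Equations (3) and (4) can be sketched in Python as follows; the linear decay schedule for the parameter and the function name are illustrative assumptions, since the paper only states that the parameter traverses from 2 to 0:

```python
import random

def woa_parameters(iteration, max_iterations):
    """Compute the WOA control parameters for one iteration.

    a_hat decreases linearly from 2 to 0 as the search progresses;
    A' and C are then drawn using uniform random numbers in [0, 1].
    """
    a_hat = 2 * (1 - iteration / max_iterations)   # linear decay 2 -> 0
    A_prime = 2 * a_hat * random.random() - a_hat  # Eq. (3): A' in [-a_hat, a_hat]
    C = 2 * random.random()                        # Eq. (4): C in [0, 2]
    return a_hat, A_prime, C
```

Because \(|A'|\le \hat{a}\), early iterations (large \(\hat{a}\)) permit large exploratory steps, while late iterations force small exploitative ones.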

Exploitation Phase

Two hunting tactics used in WOA to accomplish local search are encircling the target prey and the bubble-net attacking approach. The following is a summary of these phases:

Encircling Prey Phase

The search agent with the best objective function value is considered the target solution during this phase. Other whales in the population are updated using the present best whale value. The updating method can be stated mathematically as follows:

$$Dt1=\left|C\cdot {Sol}_{\mathrm{best}}^{\left(i\right)}-{Sol}^{\left(i\right)}\right|$$
(5)
$${Sol}^{\left(i+1\right)}= {Sol}_{\mathrm{best}}^{\left(i\right)}-{A}^{\prime}\cdot Dt1$$
(6)

\({Sol}_{\mathrm{best}}\) is the best solution evaluated up to the present iteration. \(Dt1\) is the distance between the best solution and the current solution.

Bubble-Net Phase

The whales move in a spiral path during the attack. The process is mathematically expressed as follows:

$$Dt2=|{Sol}_{\mathrm{best}}^{\left(i\right)}-{Sol}^{\left(i\right)}|$$
(7)
$${Sol}^{\left(i+1\right)}=Dt2\cdot {e}^{bl}\cdot \mathrm{cos}\left(2\pi l\right)+{Sol}_{\mathrm{best}}^{\left(i\right)}$$
(8)

The variable \(b\) in Eq. (8) defines the shape of the spiral path and has a constant value of 1; the value of \(l\) is a random number calculated using the equation below:

$$l=\left({a}_{2}-1\right)\cdot rnd+1$$
(9)

As the search process advances, the variable \({a}_{2}\) decreases linearly from −1 to −2, and \(rnd\) signifies a random value in [0, 1]. \(Dt2\) is the distance between the best solution and the current solution.

The criterion for switching between the global and local search stages is the absolute value of A′. If |A′| is greater than or equal to 1, the algorithm runs Eq. (2) and explores the search space; otherwise, exploitation is done with Eq. (6) or Eq. (8). A probability threshold of 0.5 determines the choice between the two exploitation strategies. The mathematical expression is as follows:

$$\left\{\begin{array}{ll}{Sol}^{\left(i+1\right)}={Sol}_{\mathrm{best}}^{i}-{A}^{\prime}\cdot Dt1& if\ pr< 0.5\\ {Sol}^{\left(i+1\right)}=Dt2\cdot {e}^{bl}\mathrm{cos}\left(2\pi l\right)+{Sol}_{\mathrm{best}}^{i}& if\ pr\ge 0.5\end{array}\right.$$
(10)

where \(pr\) is a random positive value between 0 and 1.
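Putting Eqs. (1) through (10) together, a single WOA position update can be sketched as follows. This is a minimal Python sketch with illustrative names; drawing \(l\) uniformly from [-1, 1] is a simplification of Eq. (9):

```python
import math
import random

def woa_update(sol, sol_best, sol_rand, a_hat, b=1.0):
    """Update one whale position (a list of floats) following Eqs. (1)-(10).

    sol      -- current solution
    sol_best -- best solution found so far
    sol_rand -- a randomly chosen solution from the population
    a_hat    -- linearly decreasing parameter (2 -> 0)
    """
    A_prime = 2 * a_hat * random.random() - a_hat
    C = 2 * random.random()
    pr = random.random()

    if abs(A_prime) >= 1:                      # exploration: Eqs. (1)-(2)
        return [sr - A_prime * abs(C * sr - s) for s, sr in zip(sol, sol_rand)]
    if pr < 0.5:                               # encircling prey: Eqs. (5)-(6)
        return [sb - A_prime * abs(C * sb - s) for s, sb in zip(sol, sol_best)]
    # bubble-net spiral attack: Eqs. (7)-(8)
    l = random.uniform(-1, 1)
    return [abs(sb - s) * math.exp(b * l) * math.cos(2 * math.pi * l) + sb
            for s, sb in zip(sol, sol_best)]
```

The distance terms \(\overline{Dt}\), \(Dt1\), and \(Dt2\) appear inline as the absolute differences inside each list comprehension.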

Proposed Elite-Based Whale Optimization Algorithm (EBWOA)

The whale optimization algorithm, a swarm-based metaheuristic designed by Mirjalili and Lewis [9], mimics humpback whales’ hunting behavior. WOA employs a basic yet effective mechanism with minimal control parameters [36]. However, WOA has a low convergence rate and cannot escape local optima because it explores the search zone insufficiently. The new variant proposed here overcomes these inherent limitations. Basic WOA conducts global and local searches through the search prey phase and the encircling prey or bubble-net attack approaches. EBWOA, on the other hand, uses only modified encircling prey and bubble-net attack strategies; the search prey phase of basic WOA is eliminated. The modified equations for the encircling prey and bubble-net attack phases are as follows:

Modified Encircling Prey Phase

$$Dt1=\left|C\cdot {l\_Sol}_{\mathrm{best}}^{\left(i\right)}-{Sol}^{\left(i\right)}\right|$$
(11)
$${Sol}^{\left(i+1\right)}= { {\omega }_{i}\cdot l\_Sol}_{\mathrm{best}}^{\left(i\right)}-{A}^{\prime}\cdot Dt1$$
(12)

In Eqs. (11) and (12), \({l\_Sol}_{\mathrm{best}}\) is the local best solution and \({\omega }_{i}\) is the inertia weight, calculated as

$${\omega }_{i}=0.3+0.3*rnd$$
(13)

Modified Bubble-Net Attack Phase

$$Dt2=|{Sol}_{\mathrm{best}}^{\left(i\right)}-{Sol}^{\left(i\right)}|$$
(14)
$${Sol}^{\left(i+1\right)}=Dt2\cdot {e}^{bl}\cdot \mathrm{cos}\left(2\pi l\right)+{\omega }_{i}\cdot {Sol}_{\mathrm{best}}^{\left(i\right)}$$
(15)

In Eqs. (14) and (15), \(So{l}_{\mathrm{best}}\) is the global best solution.

EBWOA uses local and global elite solutions to update the solutions during the search process. A group of random solutions is chosen; the solution with the minimum fitness value in the group is called the local elite solution, and the solution with the minimum fitness value in the entire population is used as the global elite solution. Choosing a local elite solution from a group of solutions allows the process to move to different regions of the search space. While exploring the search domain, updating solutions with the local elite value incrementally pushes the algorithm toward the best value, and the inertia weight \({\omega }_{i}\) allows the process to exploit the nearby region effectively. The algorithm’s convergence speed is accelerated by updating the other solutions with the global best solution during the bubble-net attack phase, where the area surrounding the global elite solution is searched using the global best solution and the inertia weight \({\omega }_{i}\). The selection parameter Ω is implemented to move between the phases; its value decreases from 1 to 0 as the iterations increase. A random probability value is compared with Ω, and if Ω is higher, the modified encircling prey phase is selected; otherwise, the modified bubble-net attack phase is used. Figure 1 displays the proposed EBWOA’s pseudo-code, and Fig. 2 shows the algorithm’s flowchart.
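The update logic can be sketched as follows. This is a hedged Python sketch, not the authors' implementation: the group size, the exact Ω schedule, and the direction of the Ω comparison are illustrative assumptions, since the precise pseudo-code appears only in Fig. 1:

```python
import math
import random

def ebwoa_update(population, fitness, idx, iteration, max_iter,
                 group_size=5, b=1.0):
    """One EBWOA position update for agent `idx` (minimization assumed).

    A local elite is the fittest member of a random group; the traversing
    parameter Omega decays from 1 to 0 and selects between the modified
    encircling-prey phase (exploration, Eqs. (11)-(12)) and the modified
    bubble-net phase (exploitation, Eqs. (14)-(15)).
    """
    n = len(population)
    sol = population[idx]
    a_hat = 2 * (1 - iteration / max_iter)
    A_prime = 2 * a_hat * random.random() - a_hat
    C = 2 * random.random()
    omega_i = 0.3 + 0.3 * random.random()      # Eq. (13): inertia weight
    Omega = 1 - iteration / max_iter           # selection parameter, 1 -> 0

    if random.random() < Omega:                # modified encircling prey
        group = random.sample(range(n), min(group_size, n))
        local_best = population[min(group, key=lambda k: fitness[k])]
        return [omega_i * lb - A_prime * abs(C * lb - s)
                for s, lb in zip(sol, local_best)]
    # modified bubble-net attack with the global elite
    global_best = population[min(range(n), key=lambda k: fitness[k])]
    l = random.uniform(-1, 1)
    return [abs(gb - s) * math.exp(b * l) * math.cos(2 * math.pi * l)
            + omega_i * gb for s, gb in zip(sol, global_best)]
```

Since Ω starts near 1, early iterations favor exploration via the local elite, while late iterations exploit around the global elite, matching the behavior described above.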

Fig. 1
figure 1

Pseudo-code of the proposed EBWOA algorithm

Fig. 2
figure 2

Flowchart of the proposed EBWOA algorithm

Discussion of Numerical Results

A total of twenty-five benchmark functions are used to assess the performance of the proposed EBWOA. The functions used in the study can be found in the Appendix (Table 21). Functions F1–F13 are of the unimodal type; they have a single global optimum and are used to assess the algorithm’s local search capacity and convergence speed. Functions F14–F25 are of the multimodal type; they have an abundance of local optima that grows exponentially with the problem dimension. Solving these functions tests the algorithm’s global search capacity and its ability to escape local optima. The results of evaluating the benchmark functions are compared with basic algorithms and modified WOA variants. EBWOA is also evaluated on the IEEE CEC 2019 function set, which contains ten multimodal, non-separable functions, most of which have many local optima. The definition of these functions can be found in [1]. The results on the IEEE CEC 2019 functions are compared against a list of modified algorithms.

The experiments were run on an Intel i3 processor with 8 GB of RAM using MATLAB 2015a. A population size of 30 is used, and 24,000 function evaluations serve as the termination criterion. Most WOA variants are judged using 500 or 1000 iterations as the termination criterion; our algorithm converges in around 500 to 800 iterations for most functions. For this reason, we set the termination criterion to 24,000 function evaluations, which is equivalent to 800 iterations. Because metaheuristic algorithms are stochastic, the comparison is based on the mean and standard deviation of results from 30 independent runs. All the compared algorithms use the same parameter settings as in their original studies.

Comparison of Optimization Results of Benchmark Functions with Basic Algorithms

Evaluated results of the benchmark functions are compared with the tunicate swarm algorithm (TSA) [37], bald eagle search (BES) [38], WOA, symbiotic organisms search (SOS), and teaching-learning-based optimization (TLBO). The comparison algorithms’ parameters were set to the values provided in their respective studies. Table 1 shows the mean and standard deviation (SD) values evaluated by all algorithms. EBWOA outperformed all other comparison algorithms on functions F1, F2, F3, F4, F6, F7, F8, F9, F10, F11, F12, F15, F17, F18, F20, F21, F22, F23, and F24. This shows that the algorithm can solve both unimodal and multimodal problems, which is only possible if the algorithm’s global and local search phases are balanced. The encircling prey phase is used for exploration with the local best solution, and choosing between exploration and exploitation allows the algorithm to move randomly between these phases. Since the local best value is chosen in the encircling prey phase, the search process gradually progresses to the optimal value.

Table 1 Comparison of EBWOA results with the basic algorithms

For this reason, omitting the search prey phase does not impair the exploratory capability of the algorithm. Table 2 shows the pairwise comparison of EBWOA with the different algorithms. EBWOA outperforms TSA, BES, WOA, SOS, and TLBO on 22, 20, 19, 21, and 22 functions, respectively, and obtains identical results on 3, 5, 6, 4, and 3 functions, respectively. The numerical results and statistical analysis in Table 3 show that EBWOA outperforms all other tested algorithms.

Table 2 EBWOA results of pairwise comparison with the basic algorithms using Table 1 data
Table 3 Statistical test results using Friedman’s rank test

Comparison of Optimization Results of Benchmark Functions with Modified WOAs

The modified algorithms used for comparison in this study are ESSAWOA [39], WOAmM, the whale optimization algorithm modified with SOS and DE (m-SDWOA) [1], SHADE-WOA, and HSWOA. All comparison techniques used here are effective and recently published, and all use the same parameter settings proposed in their respective studies. The evaluated results are shown in Table 4. The data in the table show that EBWOA can solve both unimodal and multimodal functions. The local best solution improves the exploration ability of the algorithm during the encircling prey phase: the method surveys the search domain around the local best solution while progressively exploiting its neighborhood. Through the Ω value, the algorithm alternates at random between exploration and exploitation, and the random inertia weight \({\omega }_{i}\) helps the search process explore and exploit the nearby region. Being highly balanced, the algorithm can solve both types of functions effectively. Analyzing the table data, EBWOA outperformed all other compared algorithms on eleven of the thirteen evaluated unimodal functions (F1, F2, F3, F4, F6, F7, F8, F9, F10, F11, and F12). On function F5, EBWOA is superior only to SHADE-WOA; all other algorithms evaluate optimal results like EBWOA. EBWOA obtained superior optimal outcomes on multimodal functions F18 and F21. On functions F14, F15, F17, F19, F20, and F25, EBWOA evaluated the optimal outcome, though a few other algorithms also generated similar results. ESSAWOA and SHADE-WOA outperformed EBWOA on three multimodal functions (F22, F23, and F24).

Table 4 Comparison of EBWOA results with the WOA variants

Table 5 shows the pairwise comparison of numerical results with the WOA variants. From Table 5, it can be seen that EBWOA outperformed the compared algorithms in most features. A statistical comparison of the algorithms given in Table 6 also confirms the improved performance of EBWOA.

Table 5 EBWOA results of pairwise comparison with the WOA variants using Table 4 data
Table 6 Friedman’s rank test with the WOA variants

Comparison of Optimization Results of IEEE CEC 2019 Function Set with Modified Algorithms

Along with EBWOA, the IEEE CEC 2019 functions are also evaluated using the following methods: the modified whale optimization algorithm with population reduction (mWOAPR) [40], the enhanced whale optimization algorithm (eWOA), the enhanced whale optimization algorithm integrated with the Salp Swarm Algorithm (ESSAWOA), the self-adaptation butterfly optimization algorithm (SABOA) [41], the sine cosine grey wolf optimizer (SC-GWO) [42], and the improved sine cosine algorithm (ISCA) [43]. The optimal value of each function in the IEEE CEC 2019 function set is 1. The results calculated by all algorithms are tabulated in Table 7; function numbers F26 to F35 denote the IEEE CEC 2019 functions. Table 8 shows a pairwise comparison of results from EBWOA and the other algorithms. According to the tabulated data, EBWOA outperformed all other comparison algorithms on five functions. On function F26, eWOA and ISCA achieved optimal results similar to EBWOA, and on function F27, ISCA did so. The data from Table 8 show that mWOAPR, eWOA, and SC-GWO outperform EBWOA on only 3, 2, and 1 functions, respectively. This confirms the superiority of the proposed EBWOA in solving complex optimization problems. The search process proceeds gradually to the optimal solution by examining the area surrounding the local or global best solution. All of these newly incorporated properties make the algorithm efficient. The statistical analysis results in Table 9 further support the dominance of EBWOA.

Table 7 EBWOA results compared with the modified algorithms with the IEEE CEC 2019 function set
Table 8 EBWOA results of pairwise comparison with the modified algorithms using Table 7 data
Table 9 Friedman’s rank test with the modified algorithms

Comparison of Design Concepts of WOA Variants Used for Comparison and EBWOA

EBWOA is compared with a total of nine WOA variants. ESSAWOA, WOAmM, m-SDWOA, SHADE-WOA, and HSWOA are compared using the classical benchmark functions, whereas LWOA, mWOAPR, eWOA, and ImWOA are compared using the IEEE CEC 2019 functions. In LWOA, the “Levy flight” mechanism was used with WOA to increase solution diversity and escape local optima. ESSAWOA was created as a hybrid of two algorithms, the Salp Swarm Algorithm (SSA) and WOA. Firstly, SSA was modified with a non-linear parameter to strengthen its convergence; then, it was merged with WOA, and a lens opposition-based learning strategy was used to amplify solution diversity. The mutualism phase of symbiotic organisms search (SOS) was modified and used in WOA to increase solution diversity; the new method was named WOAmM. In m-SDWOA, a modified mutualism phase and a DE mutation strategy were used to enhance the exploration capacity of WOA, and the commensalism phase of the SOS algorithm was used to increase solution accuracy. SHADE-WOA combined SHADE and WOA through an information-sharing mechanism and a new hunting strategy called “cooperative hunting.”

The concept of hunger from the Hunger Games Search (HGS) algorithm was introduced into WOA to develop HSWOA. mWOAPR was proposed by introducing random initialization of the solution in the “Search for Prey” phase of WOA; moreover, the values of parameters “A” and “C” were modified to explore early and exploit later in the search, and population reduction was employed to make convergence faster. Another variant of WOA, eWOA, was proposed by modifying the parameters “A” and “C” and introducing a random movement during exploration to lessen the computational burden, while an inertia weight ensured an exhaustive search near the potential solution. ImWOA is a recent variant of WOA designed by modifying the random solution selection process of the “Search Prey” phase; its other modifications include incorporating “cooperative hunting” to ease exploitation and dividing the total iterations into two halves, one for exploration and the other for exploitation. Unlike all of these WOA variants, EBWOA uses local and global elite solutions for exploration and exploitation. The local best solution is randomly selected from the group of cluster best solutions. In EBWOA, the “Search for Prey” phase used in WOA for exploration is omitted; instead, exploration is ensured through the “Encircling Prey” phase, and both exploration and exploitation are performed with either local or global elite solutions.

Real-World Engineering Problem

The gear train design problem, a real-world, unconstrained engineering problem, is resolved using EBWOA. “Gear Train Design” presents a description of the problem and an analysis of the evaluation results.

Gear Train Design

Sandgren [44] presented this design challenge, which is unconstrained in nature. There are four decision variables, y1, y2, y3, and y4, representing the number of teeth on each gear wheel. All variables are positive integers in the range [12, 60]. The gear ratio of a reduction gear train is defined as the ratio of the angular velocity of the output shaft to that of the input shaft. The objective of this design challenge is to bring the gear ratio as close to 1/6.931 as possible. The mathematical formulation of the problem is given below.

Objective Function

$$Min\ f\left(y\right)={\left[\left(\frac{1}{6.931}\right)-\left(\frac{{y}^{2}{y}^{3}}{{y}^{1}{y}^{4}}\right)\right]}^{2}$$
(16)

Subject To

$$\begin{array}{cc}12\le {y}^{p}\le 60,& p=1,2,3,4.\end{array}$$
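The objective is simple enough to code directly. The following Python sketch assumes the standard form of the gear train objective (squared deviation of the achieved ratio from 1/6.931); the test solution used in the usage note is the well-known best tooth-count combination reported in the gear train design literature:

```python
def gear_train_cost(y):
    """Gear train design objective, Eq. (16): squared error between the
    achieved gear ratio and the target 1/6.931.

    y -- four integer tooth counts (y1, y2, y3, y4), each in [12, 60].
    """
    y1, y2, y3, y4 = y
    return ((1.0 / 6.931) - (y2 * y3) / (y1 * y4)) ** 2
```

For example, the tooth counts (49, 16, 19, 43) yield a cost on the order of 1e-12, far below a naive equal-teeth design.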

Analysis of Outcome

Calculated results from EBWOA are compared with those of four basic metaheuristics and six WOA variants. Table 10 contains the results evaluated by the proposed algorithm and the algorithms used for comparison. EBWOA and SHADE-WOA achieved the optimal result, and their evaluated results are similar. WOA, the component algorithm of EBWOA, produced the worst result on this problem, which validates the proposed extension of WOA.

Table 10 Evaluated results of the gear train design problem

Three-Bar Truss Design

The problem involves minimizing the volume of a statically loaded three-bar truss while meeting three constraints on stress, deflection, and buckling. The problem optimizes two variables (\({x}^{1}\) and \({x}^{2}\)) that adjust the cross-sectional areas. The search space for this problem is constrained and challenging. The mathematical formulation is as follows:

$$\overrightarrow{{\varvec{x}}}=\left\{{x}^{1},{x}^{2}\right\}$$

Objective Function

$$Min. f\left(x\right)=L\left\{{x}^{2}+2\sqrt{2} {x}^{1}\right\},$$
(17)

Subject to

$${h}_{1}\left(x\right)=\frac{{x}^{2}}{2{x}^{2}{x}^{1}+\sqrt{2} {\left({x}^{1}\right)}^{2}} P-\sigma \le 0,$$
$${h}_{2}\left(x\right)=\frac{{x}^{2}+\sqrt{2}{x}^{1}}{2{x}^{2}{x}^{1}+\sqrt{2} {\left({x}^{1}\right)}^{2}} P-\sigma \le 0,$$
$${h}_{3}\left(x\right)=\frac{1}{{x}^{1}+\sqrt{2}{x}^{2}}P-\sigma \le 0,$$

where

$$\begin{array}{c}0\le {x}^{1},{x}^{2}\le 1,\ and\\ P=2,\ L=100\ \&\ \sigma =2.\end{array}$$
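The objective of Eq. (17) and the constraints \(h_1\)–\(h_3\) can be coded directly; the following Python sketch uses the stated values P = 2, L = 100, and σ = 2 (function names are illustrative):

```python
import math

def truss_volume(x, L=100.0):
    """Three-bar truss objective, Eq. (17): volume to be minimized."""
    x1, x2 = x
    return L * (x2 + 2 * math.sqrt(2) * x1)

def truss_constraints(x, P=2.0, sigma=2.0):
    """Constraint values h1-h3; each must be <= 0 for feasibility."""
    x1, x2 = x
    denom = 2 * x2 * x1 + math.sqrt(2) * x1 ** 2
    h1 = (x2 / denom) * P - sigma
    h2 = ((x2 + math.sqrt(2) * x1) / denom) * P - sigma
    h3 = (1.0 / (x1 + math.sqrt(2) * x2)) * P - sigma
    return (h1, h2, h3)
```

Near the reported optimum (x1 ≈ 0.78867, x2 ≈ 0.40825), the volume evaluates to about 263.8958 and constraint \(h_2\) is active (close to zero), which is consistent with the optimal value quoted in the next subsection.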

Analysis of Outcome

The optimal solution for this problem is 2.6389584338E+02. Table 11 shows the evaluated results. SOS, TLBO, m-SDWOA, SHADE-WOA, ImWOA, and EBWOA produce similar results; however, among them, the standard deviation of EBWOA is the smallest, which reflects the consistency of the algorithm. Therefore, EBWOA emerges as the best among the compared methods.

Table 11 Evaluated results of three bar truss design problems

Analysis of EBWOA’s Performance with Various Metrics

In this section, the solution-finding speed of the proposed method, the time needed to search for the optimal solution, the exploration and exploitation balance of the algorithm, and the performance index are analyzed.

Convergence Study

The algorithm’s ability to find solutions quickly is tested using the convergence curve. This section compares the solution-finding speed of the proposed algorithm with that of its component algorithm, WOA. The curves are plotted with a population size of 30, and the algorithms determine the best fitness value for a single function with a termination condition of 100 iterations. Figure 3 shows the comparison curves of some randomly chosen benchmark functions of the unimodal and multimodal types and of the IEEE CEC 2019 functions. The first six curves (a–f) are drawn using the benchmark functions, and curves (g–i) are generated using IEEE CEC 2019 functions. In each curve, EBWOA converges much faster than WOA, showing that the modification has increased WOA’s search speed.

Fig. 3
figure 3

Comparison of convergence curves of EBWOA with WOA

Runtime Analysis

Run time is the time taken by an algorithm to execute and produce its output. Here, we evaluated the execution time on the first function of the IEEE CEC 2019 set, i.e., F26 in this study. The execution times of all the compared algorithms are given in Table 12. The table data reveal that EBWOA takes slightly more time to execute than WOA; similarly, SOS and TLBO are faster than EBWOA, although EBWOA takes less time than BES and TSA. Among the seven WOA variants used for the runtime comparison, only two methods, m-SDWOA and mWOAPR, have less execution time than EBWOA. However, the analysis of the numerical outcomes already established that EBWOA performs far better than WOA, SOS, TLBO, m-SDWOA, and mWOAPR. Therefore, considering EBWOA’s high performance, a slight increase in run time compared to the component algorithm is acceptable.

Table 12 Comparison of Run time with basic and WOA variants

Analysis of Exploration with Exploitation Capacity

Exploration and exploitation are the two basic phases of an optimization algorithm. The distance between solutions grows during exploration and shrinks during exploitation. A diversity measure is therefore defined to quantify whether the search agents are moving closer together or farther apart.

$${div}^{j}=\frac{1}{n}\sum\nolimits_{i=1}^{n}\left|\mathrm{median}\left({Sol}^{j}\right)-{Sol}_{i}^{j}\right|$$
(18)
$$div=\frac{1}{dim}\sum\nolimits_{j=1}^{dim}{div}^{j}$$
(19)

\(n\) and \(dim\) stand for the number of search agents and design variables, respectively. \({Sol}_{i}^{j}\) is dimension \(j\) of the \(i\)th search agent, and \(\mathrm{median}\left({Sol}^{j}\right)\) is the median of the population in that dimension. \({div}^{j}\) is the diversity in each dimension, defined as the average distance between each search agent’s \(j\)th dimension and the dimension’s median. The diversity of the entire population \((div)\) is then determined by averaging \({div}^{j}\) over all dimensions.
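The diversity measure of Eqs. (18)–(19) can be sketched in Python as follows; the function name `population_diversity` and the NumPy array representation of the population are our own illustrative choices, not from the paper:

```python
import numpy as np

def population_diversity(sol):
    """Population diversity per Eqs. (18)-(19).

    sol: array of shape (n, dim) -- n search agents, dim design variables.
    Returns (div_per_dim, div): per-dimension diversities and their average.
    """
    # Eq. (18): mean absolute deviation from the median, per dimension j
    median = np.median(sol, axis=0)
    div_per_dim = np.mean(np.abs(median - sol), axis=0)
    # Eq. (19): average the per-dimension diversities over all dimensions
    div = np.mean(div_per_dim)
    return div_per_dim, div
```

In use, `div` would be recorded once per iteration of the optimizer to build the diversity history analyzed below.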

$$\mathrm{exploration\ percentage}= \left(\frac{div}{{div}_{\mathrm{maxi}}}\right)\times 100$$
(20)
$$\mathrm{exploitation\ percentage}= \left(\frac{\left|div-{div}_{\mathrm{maxi}}\right|}{{div}_{\mathrm{maxi}}}\right)\times 100$$
(21)

where \({div}_{maxi}\) is the maximum diversity value attained over the entire optimization process. The exploration percentage relates the diversity at each iteration to the maximum diversity found during the search. The exploitation level, measured by the exploitation percentage, is the normalized difference between the maximum diversity and the diversity at the current iteration; this difference is caused by the concentration of search agents.
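Given the per-iteration diversity values, the exploration and exploitation percentages of Eqs. (20)–(21) follow directly; this is a minimal sketch, and the function name and list-based interface are illustrative:

```python
def exploration_exploitation(div_history):
    """Exploration/exploitation percentages per Eqs. (20)-(21).

    div_history: population diversity values, one per iteration.
    Returns two lists of percentages; at every iteration they sum to 100.
    """
    div_max = max(div_history)  # maximum diversity over the whole run
    exploration = [100.0 * d / div_max for d in div_history]
    exploitation = [100.0 * abs(d - div_max) / div_max for d in div_history]
    return exploration, exploitation
```

Because \(div \le div_{maxi}\), the two percentages are complementary, which is why the graphs in Figs. 4 and 5 mirror each other.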

Figures 4 and 5 represent the exploration and exploitation graphs, showing their percentages for EBWOA and WOA, respectively, on six random benchmark functions. In both figures, diagrams (a), (b), and (c) depict the exploration and exploitation on unimodal functions, whereas (d), (e), and (f) show the graphs obtained by evaluating three multimodal functions. A comparison of the diagrams in the two figures reveals that the exploration and exploitation of EBWOA are more balanced than those of WOA for both unimodal and multimodal function types.

Fig. 4
figure 4

Exploration vs. exploitation percentage of EBWOA

Fig. 5
figure 5

Exploration vs. exploitation percentage of WOA

Performance Index Evaluation

The performance index (PI) of EBWOA is evaluated in terms of the increase or decrease in performance, which is calculated using the following formula.

$$PI\left(\mathrm{\%}\right)=\frac{\mathrm{Performance}\ \left(\mathrm{other\ algorithm}\right)-\mathrm{Performance}\ (\mathrm{EBWOA})}{\mathrm{Performance}\ (\mathrm{EBWOA})}\times 100\mathrm{\%}$$
(22)

The performance of EBWOA is compared to the modified methods using the evaluated outcomes of the IEEE CEC 2019 function set given in Table 7. Table 13 holds the function-wise comparison data. A positive value in the table indicates an increase, and a negative value a decrease, in the performance of EBWOA on that particular function relative to the algorithm concerned. A value of 0.00 designates no change in the performance of EBWOA on that function compared to the specific algorithm.
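Eq. (22) reduces to a one-line relative-change computation. The sketch below is illustrative (the function name is ours); since all objectives here are minimized, a positive return value means the other algorithm's objective value is higher than EBWOA's, i.e., EBWOA performs better:

```python
def performance_index(other, ebwoa):
    """Percentage change of EBWOA relative to another algorithm, Eq. (22).

    other: objective value obtained by the compared algorithm.
    ebwoa: objective value obtained by EBWOA on the same function.
    """
    return (other - ebwoa) / ebwoa * 100.0
```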

Table 13 Performance index of EBWOA showing the percentage of increase or decrease in capacity

Solving Cloud Scheduling Problem using EBWOA

In this section, the proposed EBWOA strategy is applied to the classical NP-hard cloud scheduling problem of executing multiple independent bag-of-tasks (BoT) applications over virtual machines (VMs) of a cloud computing system [45, 46]. Each BoT application consists of several independent tasks requiring an equal number of processing elements for execution [47, 48]. This paper aims to optimize both the makespan (users’ perspective) and energy consumption (service providers’ perspective) metrics of the scheduling problem. The next sub-sections briefly describe the related work, objectives, fitness function, workloads, experimental setup, results, and analysis associated with the undertaken cloud scheduling problem.

Related Works

Many researchers have applied metaheuristic algorithms to the cloud scheduling problem [46,47,48,49], because exhaustive solutions are not feasible for large-scale instances of the task scheduling problem [49]. The most common scheduling objectives addressed in the literature are makespan, utilization, energy efficiency, execution cost, and degree of imbalance. In [49], the authors proposed a fuzzy-based security-aware and energy-aware task scheduling algorithm called SAEA by introducing a parallel version of the squirrel search algorithm. SAEA achieved significant performance improvements over the baseline metaheuristics in terms of energy cost, makespan, degree of imbalance, and security levels. The authors in [50] introduced an improved ACO algorithm to schedule independent tasks over cloud resources, addressing three objectives: minimizing waiting time, improving the degree of resource load balance, and reducing task completion time. On the other hand, the authors in [51] presented a hybrid task scheduling algorithm combining PSO and GA, which reduced total task completion time and improved convergence accuracy compared to the baseline algorithms.

In [52], a multi-objective workflow scheduling method was presented for finding an optimal trade-off between makespan and execution cost by combining heterogeneous earliest finish time (HEFT) and the ACO algorithm. Recently, a task scheduling approach called Parallel Reinforcement Learning Caledonian Crow (PRLCC) was proposed by combining the New Caledonian crow learning algorithm (NCCLA), reinforcement learning (RL), and a parallel strategy, with the objectives of improving waiting time, energy consumption, security guarantees, and resource utilization [53]. The authors in [54] proposed a modified GA combined with a greedy strategy (MGGS) to optimize the task scheduling process, reduce the total completion time and average response time, and improve QoS parameters. The authors of [55] presented a multi-objective hybrid Fuzzy Hitchcock Bird-inspired approach (HBIA) with fuzzy logic and a Levy flight mechanism to address makespan and resource utilization goals. A recent work [56] introduced a hybrid multi-verse optimizer with a genetic algorithm (MVO-GA) for scheduling independent tasks in a cloud environment.

In another attempt, a hybrid metaheuristic solution was presented by combining WOA, Henry’s gas solubility optimization (HGSO), and comprehensive opposition-based learning (COBL) for task scheduling problems to reduce makespan [57]. The authors in [58] presented an enhanced version of the MVO algorithm (EMVO) for improving makespan, throughput, and utilization. Table 14 shows the comparison of a few task-scheduling algorithms.

Table 14 Comparison of task scheduling algorithms

Problem Objectives and Fitness Function

The objectives and fitness function of the cloud scheduling problem are described as follows:

Makespan Model

The makespan objective is the latest finish time of tasks in a set of BoT applications, which is calculated as per the following:

$$\mathrm{Makespan}={\mathrm{max}}_{j\in PTK }\left({\mathrm{FinishTime}}_{j}\right)$$
(23)

where \(j\) is a task belonging to a distinct BoT application and PTK is the set of BoT applications. A shorter makespan is desired since it indicates faster processing [48].
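The makespan of Eq. (23) is simply the maximum finish time over every task of every BoT application. A minimal sketch follows; the dictionary representation of finish times is an assumption made here for illustration only:

```python
def makespan(finish_times):
    """Makespan per Eq. (23): latest finish time over all tasks in PTK.

    finish_times: mapping of BoT application -> list of its tasks'
    finish times under the candidate schedule.
    """
    return max(t for tasks in finish_times.values() for t in tasks)
```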

Energy Model

The energy consumption of an individual CPU core \(\left({C}_{k}\right)\) can be expressed as follows:

$$\begin{aligned}\mathrm{EnergyConsumption}\left({C}_{k}\right)=&\;\int_{0}^{\mathrm{Makespan}}{\mathrm{EnergyConsumption}}_{\mathrm{comp}}\left({C}_{k},t\right)\\& +{\mathrm{EnergyConsumption}}_{\mathrm{idle}}\left({C}_{k} , t\right)dt,\end{aligned}$$
(24)

where \({\mathrm{EnergyConsumption}}_{\mathrm{comp}}\) and \({\mathrm{EnergyConsumption}}_{\mathrm{idle}}\) are the energy consumed during execution and during idle time, respectively [49].

The overall energy usage of the cloud data center, summed over the CPU cores of all \(Nvm\) virtual machines, can be represented as follows:

$$\mathrm{EnergyConsumption}=\sum\nolimits_{k=1}^{Nvm}\mathrm{EnergyConsumption}\left({C}_{k}\right)$$
(25)
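Assuming constant per-core power draws during computation and idling, the integral of Eq. (24) reduces to a sum of busy-time and idle-time contributions, and Eq. (25) sums these per-core energies. The sketch below is illustrative only; the constant-power assumption and all names are ours, not part of the energy model in [49]:

```python
def core_energy(comp_power, idle_power, busy_intervals, makespan):
    """Discrete form of Eq. (24) under constant power draws.

    comp_power / idle_power: power while executing / idling (assumed constant).
    busy_intervals: list of (start, end) times when the core executes tasks.
    makespan: total schedule length; idle time is whatever remains.
    """
    busy = sum(end - start for start, end in busy_intervals)
    idle = makespan - busy
    return comp_power * busy + idle_power * idle

def total_energy(core_energies):
    """Eq. (25): total data-center energy, summed over all CPU cores."""
    return sum(core_energies)
```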

Objective Function

The cloud scheduling problem is modeled as a bi-objective combinatorial optimization problem in this research, and the weighted-sum method is used to reduce the workload makespan and overall energy consumption of cloud computing resources. The following is the definition of the cloud scheduling problem’s fitness function:

$$\begin{aligned}\mathrm{Fitness\ function}, F\left(X\right)=&\;\mathrm{min}\left(w1\times \mathrm{Makespan}+w2\right. \\ & \left.\times\; \mathrm{EnergyConsumption}\right)\end{aligned}$$
(26)

The weights of the makespan and energy consumption objectives are \(w1\) and \(w2\), respectively. This paper determines optimal weight values by conducting several independent experiments with varying weights.
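The weighted-sum fitness of Eq. (26) is then a direct combination of the two objectives. A minimal sketch follows; the default weights are placeholders, since the paper determines the actual weight values empirically:

```python
def fitness(makespan, energy, w1=0.5, w2=0.5):
    """Weighted-sum fitness of Eq. (26) for a candidate schedule X.

    Lower is better: the optimizer minimizes this scalarized objective.
    w1, w2: objective weights (illustrative defaults, tuned in the paper).
    """
    return w1 * makespan + w2 * energy
```

In practice the two objectives are usually normalized to comparable scales before being combined, so that neither term dominates the weighted sum.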

Experimental Setup and Workloads

This section provides the details of the real benchmark workloads, experimental configuration, experimental results, and observations. The proposed EBWOA and baseline algorithms are implemented in Java using the JMetal 5.4 metaheuristic framework (http://jmetal.github.io/jMetal/) on the CloudSim 3.0.3 simulator. Experiments are conducted on a computer with an Intel i7-8550U processor running at 1.80–2.0 GHz (8 cores), 16 GB of RAM, and Windows 10. Each experiment is repeated 30 times with the same input workload and experimental settings to eliminate any bias.

The experimental workloads in this research were derived from the logs of two real supercomputing sites, CEA-Curie and HPC2N, available at http://www.cs.huji.ac.il/labs/parallel/workload. Table 15 shows that the cloud computing system has a single data center with five different pre-configured VMs [59].

Table 15 Description of the VM configuration adopted for experimentation

Results and Analysis

The results obtained by EBWOA on the cloud scheduling problem and their comparison with other algorithms are discussed in this sub-section.

Statistical Results in Terms of Best, Average, and Worst Values

This sub-section analyzes the performance of the proposed EBWOA against state-of-the-art baseline algorithms, viz. WOA [10], HSWOA [33], the Gaussian cloud-whale optimization algorithm (GCWOAS2) [60], binary-enhanced WOA (BE-WOA) [61], multi-objective particle swarm optimization (MOPSO) [62], the butterfly optimization algorithm (BOA) [63], moth flame optimization (MFO) [64], and improved WOA (IWOA) [22], using a few statistical indicators. Three measures, namely the best, average, and worst values of the obtained results, have been considered for the analysis. The best, average, and worst values are, respectively, the minimum, mean, and maximum values of the makespan and energy consumption metrics over the 30 repeated independent executions of each tested algorithm. Before conducting the final experiments, a convergence analysis was performed to determine the optimal parameter values of all metaheuristics involved in the cloud task scheduling problem (note: the details of this convergence study can be obtained from the corresponding author).
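Since both metrics are minimized, the three statistical measures can be computed directly from the list of per-run objective values; a minimal sketch with illustrative naming (the best value is the minimum and the worst is the maximum):

```python
def run_statistics(results):
    """Best, average, and worst values over repeated independent runs.

    results: objective values (e.g., makespan) from the 30 repetitions.
    For a minimization objective: best = min, worst = max.
    """
    return min(results), sum(results) / len(results), max(results)
```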

Tables 16 and 17 present the statistical findings of makespan and energy consumption for all scheduling algorithms for CEA-Curie workloads, whereas Tables 18 and 19 show the statistical results for HPC2N workloads. The least values are shown in bold. It is clear from Tables 16, 17, 18, and 19 that the proposed EBWOA method yielded significantly better minimum, average, and maximum values of makespan and energy consumption metrics than baseline algorithms.

Table 16 Statistical results of the makespan metric for CW0-CW9 workloads
Table 17 Statistical results of the energy consumption metric for CW0-CW9 workloads
Table 18 Statistical results of the makespan metric for HW0-HW9 workloads
Table 19 Statistical results of the energy consumption objective for HW0-HW9 workloads

Overall Makespan and Energy Consumption Results

The box plots of makespan and energy usage for all scheduling experiments conducted on the CEA-Curie and HPC2N workloads are shown in Figs. 6 and 7. The proposed EBWOA algorithm produced significantly better makespan and energy consumption than every tested baseline algorithm. The BE-WOA, GCWOAS2, and HSWOA algorithms ranked as the first, second, and third runners-up, respectively, behind the EBWOA approach. These findings show that EBWOA outperforms the baseline algorithms in terms of performance, robustness, and stability.

Fig. 6
figure 6

Box plots for CEA-Curie workloads. a Makespan. b Energy consumption

Fig. 7
figure 7

Box plots for HPC2N workloads. a Makespan. b Energy consumption

Summarizing the Overall Cloud Scheduling Results

Finally, the overall experimental outcomes of the proposed EBWOA strategy relative to the baseline approaches are reported using the overall mean, the median, and the percentage performance improvement rate (PIR%). PIR% expresses the percentage reduction in makespan and energy consumption achieved by EBWOA over the baseline techniques, and it is calculated as follows:

$$\mathrm{PIR}\left(\mathrm{\%}\right)=\frac{\mathrm{Performance}\ \left(\mathrm{other\ Algorithm}\right)-\mathrm{Performance}\ (\mathrm{EBWOA}) }{\mathrm{Performance}\ (\mathrm{EBWOA})}\times 100\mathrm{\%}$$
(27)

Table 20 shows that the EBWOA approach significantly reduces makespan and energy consumption, as indicated by outstanding PIR% results over baseline scheduling approaches for both CEA-Curie and HPC2N workloads. In the case of CEA-Curie workloads, EBWOA resulted in makespan and energy consumption reductions in the range of 1.44–18.96% and 1.08–13.27%, respectively, over the baseline metaheuristics. On the other hand, for HPC2N workloads, EBWOA’s performance improvement in the range of 0.63–24.81% (for makespan) and 2.68–26.89% (for energy consumption) has been observed over the baseline metaheuristics.

Table 20 Overall mean, median, and PIR% results

Conclusion

WOA has several advantages, such as a simple structure, few parameters, and ease of implementation. Alongside these advantages, WOA also has disadvantages, such as low exploratory ability, premature convergence, and low solution accuracy. This research presents a new WOA variant (EBWOA) with improved exploration capability and a better balance between exploration and exploitation. The method updates the solutions in the population using the local or global elite solution. The distinctiveness of the newly developed method is that exploration is carried out using the encircling prey phase, which basic WOA uses for exploitation. Unlike basic WOA, it updates solutions using only two phases: encircling prey and the bubble-net method. In the encircling prey phase, using the locally best solution to update other solutions promotes exploitation while still exploring the search region. The addition of inertia weight enables the method to conduct an exhaustive search around the best local and global solutions. The effectiveness of the proposed method is evaluated using twenty-five classic benchmark functions, the IEEE CEC 2019 functions, two design problems, and a cloud task scheduling problem. Comparisons of numerical results with various basic and modified algorithms, statistical analysis, convergence analysis, runtime analysis, exploration versus exploitation capability, and performance index verification all show that the proposed modifications make WOA better at finding solutions.