1 Introduction

Solving real-life nonlinear optimization problems arising from various branches of science and engineering is a great challenge for the research community. Researchers have developed various techniques to obtain the best results for such problems, yet many optimization problems remain unsolved, so work in this area continues and the literature on these methods keeps growing. For simplicity, the available methods can be classified into deterministic methods and probabilistic methods. Deterministic methods follow a fixed set of rules and, for a particular input, always produce the same output for a given problem; they are applicable only to a restricted class of problems. Probabilistic methods, in contrast, are more general and applicable to a wide range of problems. Most probabilistic methods draw their inspiration from natural laws; they are therefore also known as Nature Inspired Algorithms.

Over the last couple of decades, Nature Inspired Algorithms (NIA) have become increasingly popular for engineering application problems, since they are based on simple ideas and are easy to implement. A key strength of these algorithms is their ability to escape local optima. Classical optimization methods require information about the function, such as its gradient, whereas NIAs require no such information. They can therefore be applied to an extensive variety of problems covering diverse disciplines.

Nature Inspired optimization algorithms mimic biological or physical phenomena, and the available literature on NIA is vast. Genetic Algorithms (GA) (Holland 1992), Evolution Strategy (Rechenberg 1978), Genetic Programming (Koza 1992), Probability-based Incremental Learning (Dasgupta and Zbigniew 2013), Biogeography-Based Optimization (Simon 2008), Simulated Annealing (Kirkpatrick et al. 1983), Gravitational Search Algorithms (Rashedi et al. 2009; Singh and Deep 2015, 2017a, b), Central Force Optimization (Formato 2007), Curved Space Optimization (Moghaddam et al. 2012), Big-Bang Big-Crunch (Erol and Eksin 2006), Small-World Optimization Algorithm (Du et al. 2006), Particle Swarm Optimization (Kennedy 2011), Ant Colony Optimization (Dorigo et al. 2006), Marriage in Honey Bees Optimization (Abbass 2001), Cuckoo Search (Yang and Deb 2009), Artificial Bee Colony (Basturk and Karaboga 2006), Bat-inspired Algorithm (Yang 2010), Monkey Search (Mucherino and Seref 2007), Firefly Algorithm (Yang 2010), Teaching Learning Based Optimization (Rao et al. 2011), Tabu Search (Glover 1989, 1990), Harmony Search Algorithm (Geem et al. 2001) and Firework Algorithm (Tan and Zhu 2010) are among the popular nature inspired algorithms.

All NIAs start with a random population and share a common feature: their search process has an exploration phase and an exploitation phase. Exploration corresponds to global search, i.e., the algorithm's ability to explore the search space globally, whereas exploitation corresponds to local search, i.e., the algorithm's ability to investigate the promising area(s) of the search space in detail. Due to the stochastic nature of the optimization procedure, it is extremely difficult to maintain a proper balance between exploration and exploitation.

The literature on solving optimization problems is very rich, but no single technique can solve all problems (Wolpert and Macready 1997). Therefore, new algorithms keep being introduced and existing ones keep being refined in the hope that they offer some advantage over existing methods. Consequently, the search for new heuristic algorithms remains an open issue (Wolpert and Macready 1997). Algorithms are often hybridized with operators that search locally (to improve solution quality), globally (to avoid premature convergence), or both.

The Whale Optimization Algorithm (WOA) (Mirjalili and Lewis 2016) is a recently developed algorithm. It has been tested on various optimization problems with different levels of difficulty but, like other heuristic algorithms, WOA suffers from premature convergence, i.e., it may rapidly converge towards a local optimum instead of the global optimum, and from stagnation in local solutions. As a result, the quality of the final solution may deteriorate dramatically. These issues arise when the optimizer fails to maintain a fine balance between its exploration and exploitation.

WOA has been hybridized with simulated annealing (SA) to tackle feature selection problems (Mafarja and Mirjalili 2017). Kaveh and Ghazaan (2017) proposed a modified whale optimization algorithm for sizing optimization of skeletal structures. Oliva et al. (2017) utilized a chaos-embedded WOA to deal with parameter estimation of photovoltaic cells. WOA has been used to solve multilevel thresholding image segmentation (El Aziz et al. 2017), the optimal renewable resources placement problem in distribution networks (Reddy et al. 2017) and to find the optimal weights in neural networks (Aljarah et al. 2018). It has also been used to obtain the best parameters of an SVM classifier for predicting drug toxicity (Tharwat et al. 2017). Yan et al. (2018) proposed an ameliorative whale optimization algorithm to solve multi-objective water resource allocation optimization models. Nasiri and Khiyabani (2018) proposed a whale clustering optimization algorithm for clustering. Hasanien (2018) used the whale optimization algorithm to improve the performance of photovoltaic power systems. Kaur and Arora (2018) proposed a chaotic whale optimization algorithm. Jadhav and Gomathi (2018) proposed a technique for data clustering using the whale optimization algorithm. Elaziz and Oliva (2018) used opposition-based learning to enhance the exploration phase of the whale optimization algorithm and applied it to estimate the parameters of solar cells using three different diode models. Algabalawy et al. (2018) employed the whale algorithm to find the optimal design of a system minimizing the total annual cost and system emissions. Mehne and Mirjalili (2018) used the whale optimization algorithm to solve optimal control problems. Xiong et al. (2018) proposed an improved whale optimization algorithm to accurately extract the parameters of different solar photovoltaic models. Ala'm et al. (2018) proposed a hybrid machine learning model based on support vector machines and the whale optimization algorithm for identifying spammers in online social networks. Saidala and Devarakonda (2018) proposed an improved whale optimization algorithm and applied it to a clinical dataset of an anaemic pregnant woman, obtaining optimized clusters and cluster heads to secure a clear comprehension and meaningful insights in the clinical decision-making process. Horng et al. (2017) proposed a novel multi-objective method for optimal vehicle traveling based on the whale optimization algorithm. Hassan and Hassanien (2018) presented a novel automated approach for extracting the vasculature of retinal fundus images using the whale optimization algorithm. Mostafa et al. (2017) proposed a technique for liver segmentation in MRI images based on the whale optimization algorithm. Abdel-Basset et al. (2019) used a modified whale optimization algorithm to solve the 0–1 knapsack problem. El Aziz et al. (2018) proposed a new method for determining multilevel thresholding values for image segmentation using the whale optimization algorithm; in other work, El Aziz et al. (2018) proposed a non-dominated sorting technique based on a multi-objective whale optimization algorithm for content-based image retrieval. Nazari-Heris et al. (2017) used whale optimization to solve the combined heat and power economic dispatch problem. Sun et al. (2018) proposed a modified whale optimization algorithm for large-scale global optimization problems. Abdel-Basset et al. (2018) proposed a modified whale optimization algorithm for cryptanalysis of the Merkle-Hellman Knapsack Cryptosystem. Luo and Shi (2018) proposed a hybrid whale optimization algorithm based on modified differential evolution for global optimization problems. The whale optimization algorithm has also been used in application problems such as feature selection and land pattern classification (Bui et al. 2019). Sreenu and Sreelatha (2017) proposed a task scheduling algorithm for cloud computing based on a multi-objective model and the whale optimization algorithm. Eid (2018) proposed a binary whale optimisation algorithm for feature selection. Ghahremani-Nahr et al. (2019) used the whale optimization algorithm to minimize the total costs of a closed-loop supply chain network. Hussien et al. (2019) proposed a binary whale optimization algorithm, using a sigmoid transfer function, to select the optimal feature subset for dimensionality reduction and classification problems. Laskar et al. (2019) proposed a hybrid whale-particle swarm optimization algorithm for solving complex optimization problems. Yousri et al. (2019) proposed four chaotic whale optimization variants. Elhosseini et al. (2019) proposed A-C WOA and applied it to a biped robot to find the optimal settings of the hip parameters. A more extensive survey can be found in Mirjalili et al. (2020).

In this paper, the performance of WOA is improved by hybridizing it with Laplace Crossover, a well-known real coded genetic algorithm operator. The rest of the paper is organized as follows: Sect. 2 describes the Whale Optimization Algorithm; Sect. 3 reviews Laplace Crossover; Sect. 4 describes LXWOA; Sect. 5 analyses the numerical results; Sect. 6 discusses the problem of extraction of compounds from gardenia and solves it using LXWOA and WOA; finally, Sect. 7 draws the conclusions.

2 Whale optimization algorithm

The Whale Optimization Algorithm (WOA) was proposed by Mirjalili and Lewis (2016). It is inspired by the foraging behaviour of humpback whales, a mechanism known as the bubble-net feeding method, in which distinctive bubbles are created along a circle or a '9'-shaped path as shown in Fig. 1. Goldbogen et al. (2013) utilized tag sensors to investigate the foraging behaviour of humpback whales and identified two maneuvers: (1) upward-spirals, in which humpback whales dive around 12 m down and then start to rise in a spiral shape around the prey, swimming towards the surface; and (2) double-loops, which comprise three distinct stages: coral loop, lobtail and capture loop. More details can be found in Goldbogen et al. (2013).

Fig. 1
figure 1

Bubble-net feeding behavior of humpback whales (Mirjalili and Lewis 2016)

Humpback whales use three strategies during foraging: searching for prey, encircling prey and the bubble-net feeding method. When searching for prey, humpback whales search according to the positions of each other and explore new solutions randomly. When encircling prey, humpback whales perceive the area of the prey, recognize its location and then encircle it. Since there is no a priori information about the position of the optimal solution in the search space, the WOA algorithm assumes that the current best solution is the target prey, or is close to the optimum. Once the best agent is defined, the other search agents update their positions towards it. In the bubble-net attack, humpback whales swim around the prey within a shrinking circle and along a spiral-shaped path simultaneously. The working procedure of the whale optimization algorithm is as follows:

Let Np be the size of the population and Mitr the maximum number of iterations. The algorithm has two parameters a and b. The parameter a decreases linearly from 2 to 0 and the parameter b decreases linearly from − 1 to − 2 as the iteration count increases from 1 to Mitr. The coefficients \( A_{i} \), \( C_{i} \) and a random number \( l_{i} \) for agent i are then evaluated as follows:

$$ \begin{aligned} A_{i} &= a\,(2r_{1} - 1) \\ C_{i} &= 2r_{2} \end{aligned} $$
(1)
$$ l_{i} = \left( {b - 1} \right)\,r_{3} + 1 $$
(2)

where \( r_{1} ,r_{2} \) and \( r_{3} \) are three uniformly distributed random numbers in \( [0,\,1] \). Then a random number \( p_{r} \) is generated. If \( p_{r} < 0.5 \) and \( \left| {A_{i} } \right| \ge 1 \), the ith agent updates the dth component of its next position, i.e., \( x_{i}^{d} (t + 1) \), with respect to a random agent by the following equations:

$$ D_{i}^{d} = \left| {C_{i} \cdot x_{rand}^{d} (t)\, - x_{i}^{d} (t)} \right| $$
(3)
$$ x_{i}^{d} (t + 1) = x_{rand}^{d} (t) - A_{i} \cdot D_{i}^{d} $$
(4)

where \( x_{rand}^{d} (t) \) is the position in the dth dimension of a random agent chosen from the current population. If \( p_{r} < 0.5 \) and \( \left| {A_{i} } \right| < 1 \), the ith agent updates the dth component of its next position, i.e., \( x_{i}^{d} (t + 1) \), with respect to the best agent \( x_{best} \) by the following equations:

$$ D_{i}^{d} = \left| {C_{i} \cdot x_{best}^{d} (t)\, - x_{i}^{d} (t)} \right| $$
(5)
$$ x_{i}^{d} (t + 1) = x_{best}^{d} (t) - A_{i} \cdot D_{i}^{d} $$
(6)

If \( p_{r} \ge 0.5 \), the distance \( D_{i}^{'d} = \left| {x_{best}^{d} (t)\, - x_{i}^{d} (t)} \right| \) between the whale located at \( x_{i} (t) \) and the prey located at \( x_{best} (t) \) in the dth dimension is evaluated and used to mimic the helix-shaped movement of humpback whales through a spiral equation:

$$ x_{i}^{d} (t + 1) = D_{i}^{'d} \cdot e^{{bl_{i} }} \cdot \cos (2\pi \,l_{i} ) + x_{best}^{d} (t) $$
(7)

The constant b in Eq. (7) controls the shape of the spiral.
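For concreteness, the update rules of Eqs. (1)–(7) can be collected into the following minimal Python sketch. This is not code from the original paper: the function names and the minimization convention are illustrative assumptions, and the spiral constant of Eq. (7) is kept fixed (as in the original WOA paper), separately from the decreasing parameter b that bounds \( l_i \) in Eq. (2).

```python
import numpy as np

def woa_update(X, f, t, Mitr, b_spiral=1.0):
    """One WOA iteration over population X (Np x dim); f holds current fitness values."""
    Np, dim = X.shape
    a = 2.0 * (1.0 - t / Mitr)        # a decreases linearly from 2 to 0
    b = -1.0 - t / Mitr               # b decreases linearly from -1 to -2
    best = X[np.argmin(f)].copy()     # current best agent (minimization assumed)
    for i in range(Np):
        r1, r2, r3 = np.random.rand(3)
        A = a * (2.0 * r1 - 1.0)      # Eq. (1)
        C = 2.0 * r2                  # Eq. (1)
        l = (b - 1.0) * r3 + 1.0      # Eq. (2)
        if np.random.rand() < 0.5:
            # explore w.r.t. a random agent if |A| >= 1, else exploit w.r.t. the best
            ref = X[np.random.randint(Np)] if abs(A) >= 1 else best
            D = np.abs(C * ref - X[i])            # Eqs. (3)/(5)
            X[i] = ref - A * D                    # Eqs. (4)/(6)
        else:
            D_prime = np.abs(best - X[i])         # spiral (bubble-net) update, Eq. (7)
            X[i] = D_prime * np.exp(b_spiral * l) * np.cos(2 * np.pi * l) + best
    return X
```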

3 Laplace crossover

Deep and Thakur (2007) proposed the Laplace Crossover (LX) to generate a pair of offspring \( y_{1} = \left( {y_{1}^{1} ,y_{1}^{2} , \ldots ,y_{1}^{m} } \right) \) and \( y_{2} = \left( {y_{2}^{1} ,y_{2}^{2} , \ldots ,y_{2}^{m} } \right) \) from a pair of parents \( x_{1} = \left( {x_{1}^{1} ,x_{1}^{2} , \ldots ,x_{1}^{m} } \right) \) and \( x_{2} = \left( {x_{2}^{1} ,x_{2}^{2} , \ldots ,x_{2}^{m} } \right) \). LX produces the offspring in such a manner that both are placed symmetrically with respect to the positions of the parents. A Laplace distributed random number \( l_{i} \) is generated by the following rule:

$$ l_{i} = \begin{cases} p - q\,\log_{e} (u_{i} ), & v_{i} \le 1/2 \\ p + q\,\log_{e} (u_{i} ), & v_{i} > 1/2 \end{cases} $$
(8)

where \( u_{i} ,v_{i} \) are two uniformly distributed random numbers in \( [0,1] \). \( p \in \,R \) is the location parameter and \( q > 0 \) is the scale parameter. The offspring are created by the following rules:

$$ \begin{aligned} y_{1}^{i} &= x_{1}^{i} + l_{i} \left| {x_{1}^{i} - x_{2}^{i} } \right|, \\ y_{2}^{i} &= x_{2}^{i} + l_{i} \left| {x_{1}^{i} - x_{2}^{i} } \right| \end{aligned} $$
(9)

If a generated offspring falls outside the search space, i.e., \( y^{i} < y_{low}^{i} \) or \( y^{i} > y_{up}^{i} \) for some \( i \), then \( y^{i} \) is set to a random number from the interval \( [y_{low}^{i} ,y_{up}^{i} ] \).

For small values of q, LX is likely to generate offspring near the parents, whereas for large values of q it is likely to generate offspring far from them. For fixed p and q, LX deploys offspring proportionally to the spread of the parents (Deep and Thakur 2007). In the literature, LX has been used to improve the performance of GA (Deep and Thakur 2007), GSA (Singh and Deep 2015), BBO (Garg and Deep 2016), PSO (Deep and Bansal 2009), etc.
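A minimal Python sketch of LX as defined in Eqs. (8) and (9), together with the boundary repair described above, is given below; it is an illustrative rendering, not the authors' reference code.

```python
import numpy as np

def laplace_crossover(x1, x2, low, up, p=0.0, q=0.10):
    """Generate two offspring from parent vectors x1, x2 (numpy arrays), Eqs. (8)-(9)."""
    m = len(x1)
    u, v = np.random.rand(m), np.random.rand(m)
    l = np.where(v <= 0.5, p - q * np.log(u), p + q * np.log(u))   # Eq. (8)
    spread = np.abs(x1 - x2)
    y1 = x1 + l * spread                                           # Eq. (9)
    y2 = x2 + l * spread
    for y in (y1, y2):  # repair: out-of-range components get random feasible values
        bad = (y < low) | (y > up)
        y[bad] = low[bad] + np.random.rand(int(bad.sum())) * (up[bad] - low[bad])
    return y1, y2
```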

4 Proposed algorithm

In the present study, WOA is hybridized with the Laplace crossover defined above, a real coded crossover operator for real coded genetic algorithms, and the resulting algorithm, LXWOA, is proposed.

In the LXWOA algorithm, a random population of size \( N_p \) is first initialized. At each iteration, LXWOA first follows the WOA procedure; then two agents are selected: the best agent and an agent chosen randomly from the current population. Laplace crossover is applied to these two agents, generating two offspring. The fitness of each offspring is compared, one by one, with that of the worst agent in the current population; if an offspring has better fitness, it replaces the worst agent. The best agent is then updated and the iteration counter is incremented. The algorithm repeats this procedure until the termination criterion is satisfied. The pseudo code of the LXWOA algorithm is shown in Fig. 2.

Fig. 2
figure 2

Pseudo code of LXWOA algorithm
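Complementing the pseudo code in Fig. 2, a minimal Python sketch of the LXWOA main loop is given below. It builds on the illustrative `woa_update` and `laplace_crossover` sketches above; all names and the minimization convention are assumptions of this sketch.

```python
import numpy as np

def lxwoa(fitness, low, up, Np=30, Mitr=500, p=0.0, q=0.10):
    """LXWOA: WOA step followed by Laplace crossover of the best and a random agent."""
    dim = len(low)
    X = low + np.random.rand(Np, dim) * (up - low)   # random initial population
    for t in range(Mitr):
        f = np.array([fitness(x) for x in X])
        X = woa_update(X, f, t, Mitr)                # standard WOA step
        f = np.array([fitness(x) for x in X])
        best, rnd = X[f.argmin()].copy(), X[np.random.randint(Np)]
        y1, y2 = laplace_crossover(best, rnd, low, up, p, q)
        for y in (y1, y2):                           # test each offspring vs. the worst
            worst = f.argmax()
            fy = fitness(y)
            if fy < f[worst]:                        # better offspring replaces worst
                X[worst], f[worst] = y, fy
    f = np.array([fitness(x) for x in X])
    return X[f.argmin()], f.min()
```

For example, `lxwoa(lambda x: float(np.sum(x**2)), -100*np.ones(30), 100*np.ones(30))` would run the sketch on the sphere function F1.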

5 Results and discussion

The performance of LXWOA is tested on a set of 23 classical benchmark functions (Mirjalili and Lewis 2016) comprising three types of problems: (1) scalable unimodal functions (F1–F7), described in Table 1; (2) scalable multimodal functions (F8–F13), in Table 2; and (3) low dimensional multimodal functions with fixed dimension (F14–F23), in Table 3. The experiments are performed on an Intel(R) Core(TM) i3-2350M CPU @ 2.30 GHz with 4.00 GB RAM, running Windows 10, using MATLAB 2013 as the integrated development environment. The Laplace crossover parameter \( p \) is set to zero and the value of \( q \) is fine-tuned by conducting several experiments. In these experiments, \( q \) varies from 0.05 to 0.30 and the LXWOA algorithm is run 30 times with population size Np = 30 and a termination criterion of 500 iterations. For \( q = 0.05,\,0.10,\,0.15,\,0.20,\,0.25 \) and 0.30, the iteration-wise average best-so-far is plotted for the functions F1, F8, F14 and F16 in Fig. 3. From these figures, it is observed that the performance of LXWOA on F1 is best, and approximately the same, when \( q = 0.05 \) and \( q = 0.10 \), and worst when \( q = 0.30 \). On F8, LXWOA performs best when \( q = 0.10 \) and worst when \( q = 0.15 \). On F14, it performs best when \( q = 0.10 \) and worst when \( q = 0.05 \). On F16, it performs worst when \( q = 0.25 \), and best, and approximately the same, when \( q = 0.05 \) and \( q = 0.10 \). From these figures, it is concluded that the performance of LXWOA is best when \( q = 0.10 \).

Table 1 Scalable unimodal functions
Table 2 Scalable multimodal functions
Table 3 Low dimensional multimodal test functions with fixed dimension
Fig. 3
figure 3

Convergence plot of LXWOA at different values of q

LXWOA, WOA, GSA and LXGSA are each run 30 times with population size Np = 30, \( q = 0.10 \) and a termination criterion of 500 iterations for all the functions. For a fair comparison between WOA and LXWOA, the first randomly generated population is used for the first run of both WOA and LXWOA, the second randomly generated population for the second run of both, and so on. The Best, Average, Median and Standard deviation (SD) of the objective function values obtained from LXWOA, WOA, GSA and LXGSA are calculated over the 30 runs and shown in Table 4 for scalable unimodal functions with dimension 30, Table 5 for scalable multimodal functions with dimension 30, and Table 6 for low dimensional multimodal functions. The results of Particle Swarm Optimization (PSO) and Differential Evolution (DE) are taken from Mirjalili and Lewis (2016) for comparison with LXWOA and WOA; in that work, results are reported in terms of Average and SD.
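The fair-comparison protocol (the same initial population for the corresponding runs of WOA and LXWOA) can be realized, for instance, by seeding each run as in the short Python sketch below; the seeding scheme and the wrapper names in the comments are illustrative, not the authors' actual setup.

```python
import numpy as np

dim, Np = 30, 30
low, up = -100.0 * np.ones(dim), 100.0 * np.ones(dim)

initial_populations = []
for r in range(30):                      # one shared population per run
    rng = np.random.default_rng(seed=r)  # illustrative seeding scheme
    initial_populations.append(low + rng.random((Np, dim)) * (up - low))

# run r of WOA and run r of LXWOA would both start from
# initial_populations[r].copy(), e.g. run_woa(...) / run_lxwoa(...)  (hypothetical)
```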

Table 4 Best, average, median and standard deviation (SD) of objective function value obtained from LXWOA, WOA, GSA and LXGSA for scalable unimodal functions with dimension = 30
Table 5 Best, average, median and standard deviation (SD) of objective function value obtained from LXWOA, WOA, GSA and LXGSA for scalable multimodal functions with dimension = 30
Table 6 Best, average, median and standard deviation (SD) of objective function value obtained from LXWOA, WOA, GSA and LXGSA for low dimensional multimodal functions

From Table 4, it is observed that out of 7 problems there are 3, namely F1, F2 and F7, on which LXWOA performs better than WOA, PSO, DE, GSA and LXGSA, and 4 problems, namely F3, F4, F5 and F6, on which DE has better Average and SD. If only LXWOA and WOA are compared, LXWOA performs better on all 7 problems. Hence, from Table 4 it is concluded that LXWOA performs better than WOA on scalable unimodal functions when the dimension is 30.

From Table 5, it is observed that out of 6 problems there are 3, namely F9, F10 and F11, on which LXWOA performs better than PSO, DE, GSA and LXGSA. On F8 and F12, LXWOA has better Best and Median values than WOA, GSA and LXGSA. On F13, LXGSA has the better Best and LXWOA has the better Median, while DE has better Average and SD on F8, F12 and F13. When only LXWOA and WOA are considered, LXWOA performs better on 3 problems, namely F8, F11 and F12; on F9 and F10, LXWOA and WOA reach the same solution; on F13, WOA performs better than LXWOA. Hence, from Table 5 it is concluded that LXWOA performs better than WOA on scalable multimodal functions when the dimension is 30.

From Table 6, it is observed that out of 10 problems there are 2, namely F19 and F20, on which LXGSA performs better than the others. On F16 and F17 (up to 4 decimal places), all algorithms reach approximately the same solution. On F18, LXGSA, PSO and DE found the optimal solution, and the solutions found by LXWOA and WOA are very close to it. On F14, F15, F21, F22 and F23, DE has the lowest Average and SD among the algorithms considered in this study, which means that DE performs better on these problems. Hence, it is concluded that DE performs better than the others on low dimensional multimodal problems. When only LXWOA and WOA are considered, LXWOA is found to perform better than WOA.

In order to observe the behaviour of the objective function value with the passage of iterations, the convergence plots of WOA and LXWOA are shown in Figs. 4 and 5. The horizontal axis shows the iterations, while the vertical axis shows the average best-so-far, i.e., the average objective function value at each iteration over 30 runs. From the plots, it is concluded that LXWOA converges towards the optimum faster than WOA.

Fig. 4
figure 4

Iteration wise convergence plot of WOA and LXWOA

Fig. 5
figure 5

Iteration wise convergence plot of WOA and LXWOA

The performance of LXWOA has been compared with WOA, GSA and LXGSA using a pairwise one-tailed t-test with 29 degrees of freedom at the 0.05 level of significance over the objective function values of all the problems considered. The null hypothesis is that there is no difference between the algorithms, and the alternative hypothesis is that there is a difference. The p-values and conclusions are shown in Table 7, using the following criteria: A+ indicates p-value < 0.01, i.e., LXWOA is highly significantly better than algorithm 1; A indicates p-value < 0.05, i.e., LXWOA is significantly better than algorithm 1; B indicates p-value = 0.05, i.e., LXWOA is comparable to algorithm 1; C indicates p-value < 0.1, i.e., LXWOA is marginally significantly better than algorithm 1; and D indicates p-value > 0.1, i.e., the difference is not significant.

Table 7 A pairwise t-test results of objective function values with 95% confidence interval at 0.05 level of significance
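As an illustration of how such a comparison can be computed, the following Python sketch applies a paired one-tailed t-test to two arrays of 30 final objective values per algorithm; the data here are random placeholders, not the paper's results.

```python
import numpy as np
from scipy import stats

# placeholder data: 30 final objective values per algorithm on one problem
f_lxwoa = np.random.rand(30)
f_woa = np.random.rand(30)

# paired t-test with 30 - 1 = 29 degrees of freedom; halve p for a one-tailed test
t_stat, p_two = stats.ttest_rel(f_lxwoa, f_woa)
p_one = p_two / 2 if t_stat < 0 else 1 - p_two / 2  # H1: LXWOA < WOA (minimization)
print(f"t = {t_stat:.3f}, one-tailed p = {p_one:.4f}")
```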

From the results in Table 7, it can be concluded that, for WOA versus LXWOA, 8 out of the 20 problems show LXWOA to be highly significantly better than WOA. For GSA versus LXWOA, 15 out of the 23 problems show LXWOA to be highly significantly better than GSA. For LXGSA versus LXWOA, 12 out of the 22 problems show LXWOA to be highly significantly better than LXGSA. There are two problems, namely F1 and F16, for which the p-value cannot be computed because the standard error of the difference is 0.

In Experiment I above, the maximum number of iterations (i.e., 500) is fixed. In one iteration, WOA requires 30 function evaluations and LXWOA requires 32 for a population of size 30. Therefore, in a run of 500 iterations, WOA and LXWOA perform 15,000 and 16,000 function evaluations, respectively, which shows that the cost of LXWOA is higher than that of WOA. Keeping this in mind, Experiment II is conducted with the termination criterion set to "maximum number of function evaluations less than 5000". The available results of PSO and DE are reported for 500 iterations, hence Experiment I fixed the termination criterion at "maximum iterations = 500"; in Experiment II, however, the aim is to utilize the maximum exploration and exploitation capability of the algorithms to find the best quality of solution, so the maximum number of function evaluations is set to 5000 for both algorithms. The Best, Worst, Average, Median and Standard deviation (SD) of the objective function values obtained from LXWOA and WOA are reported in Table 8 for all the problems listed in Tables 1, 2 and 3.

Table 8 Best, worst, average, median and standard deviation (SD) of objective function value obtained from LXWOA and WOA, when number of function evaluation less than 5000

The results in Table 8 show that the solutions obtained from LXWOA are better than those from WOA on the scalable unimodal functions (F1–F7). Out of the 6 scalable multimodal functions (F8–F13), there are 2 problems, namely F9 and F10, on which LXWOA and WOA reach the same set of solutions; one problem, namely F13, on which WOA turns out to have the better solution; and one problem, namely F11, on which LXWOA has the better solution. On F8 and F12, LXWOA has the better Best but WOA has the better Worst, Average and Median. Out of the 10 low dimensional multimodal functions (F14–F23), there are 5 problems, namely F19, F20, F21, F22 and F23, on which LXWOA performs better than WOA, and 4 problems, namely F14, F16, F17 and F18, on which LXWOA and WOA show approximately the same solutions. On F15, LXWOA has the better Worst and Median, but the better Best and Average are found by WOA.

From the above analysis, it can be concluded that the performance of LXWOA is improved in comparison to WOA on scalable unimodal, scalable multimodal and low dimensional multimodal functions of fixed dimension.

6 The problem of extraction of compounds from Gardenia

Three bioactive compounds, namely crocin (\( Y_{1} \)), geniposide (\( Y_{2} \)) and total phenolic compounds (\( Y_{3} \)), are obtained from gardenia fruits; their yields are affected by three independent variables: the concentration of ethanol (\( X_{1} \)), the extraction temperature (\( X_{2} \)) and the extraction time (\( X_{3} \)). Yang et al. (2009) formulated a nonlinear multi-objective optimization problem using the method of least square fitting and solved it using the Response Surface Method. Shashi et al. (2010) used the DDX-LLM algorithm and Garg and Deep (2016) used the Laplacian biogeography based optimization (LX-BBO) algorithm to solve it. In this paper, the problem is solved again with WOA and LXWOA. The mathematical model of the problem, as given in Yang et al. (2009) and Shashi and Katiyar (2010), is as follows.

In this formulation, each yield is expressed as a function of the three independent variables. Each function is a second-order polynomial of the following form:

$$ Y_{k} = b_{0} + \mathop \sum \limits_{i = 1}^{3} \,b_{i} X_{i} + \mathop \sum \limits_{i = 1}^{3} \,b_{ii} X_{i}^{2} + \mathop \sum \limits_{i \ne j = 1}^{3} \,b_{ij} X_{i} X_{j} $$
(10)

where \( Y_{k} \) represents the yield, \( b_{0} \) is a constant, and \( b_{i} \), \( b_{ii} \) and \( b_{ij} \) are the linear, quadratic and interaction coefficients of the model, respectively; \( X_{i} \) and \( X_{j} \) are the independent variables. The resulting equations for the yields \( Y_{1} \), \( Y_{2} \) and \( Y_{3} \) are as follows:

$$ Y_{1} = 3.8384907903 + 0.0679672610 {\text{X}}_{1} + 0.0217802311{\text{X}}_{2} + 0.0376755412{\text{X}}_{3} - 0.0012103181{\text{X}}_{1}^{2} + 0.0000953785{\text{X}}_{2}^{2} - 0.0002819634{\text{X}}_{3}^{2} + 0.0005496524 {\text{X}}_{1} {\text{X}}_{2} - 0.0009032316{\text{X}}_{2} {\text{X}}_{3} + 0.0008033811{\text{X}}_{1} {\text{X}}_{3} $$
(11)
$$ Y_{2} = 46.6564201287 + 0.6726057655{\text{X}}_{1} + 0.4208752507{\text{X}}_{2} + 0.9999909858{\text{X}}_{3} - 0.0161053654{\text{X}}_{1}^{2} - 0.0034210643{\text{X}}_{2}^{2} - 0.0116458859{\text{X}}_{3}^{2} + 0.0122000907 {\text{X}}_{1} {\text{X}}_{2} - 0.0095644212{\text{X}}_{2} {\text{X}}_{3} + 0.0089464814{\text{X}}_{1} {\text{X}}_{3} $$
(12)
$$ Y_{3} = - 6.3629169281 + 0.4060552042{\text{X}}_{1} + 0.3277005337{\text{X}}_{2} + 0.3411029105{\text{X}}_{3} - 0.0053585731{\text{X}}_{1}^{2} - 0.0020487593{\text{X}}_{2}^{2} - 0.0042291040{\text{X}}_{3}^{2} + 0.0017226318 {\text{X}}_{1} {\text{X}}_{2} - 0.0011990977{\text{X}}_{2} {\text{X}}_{3} + 0.0007814998{\text{X}}_{1} {\text{X}}_{3} $$
(13)

This is a multi-objective optimization problem in which all three yields need to be maximized simultaneously.

A multi-objective optimization problem is one in which more than one objective function is optimized together. The weighted-sum approach is a simple and effective technique for handling such problems: the user converts the problem into a single-objective one by assigning a different or equal weight to each objective function. In this paper, the problem is solved with WOA and LXWOA by converting it into a single-objective problem, giving approximately equal weight to all the yields.

Mathematically, for the given yield functions \( Y_{1} \), \( Y_{2} \) and \( Y_{3} \), the objective is to maximize the following function:

$$ {\text{Maximize}}\;\;g = w_{1} Y_{1} + w_{2} Y_{2} + w_{3} Y_{3} $$
(14)

where \( w_{1} ,\,w_{2} \) and \( w_{3} \) are the weights given by the user. In this paper, \( w_{1} = 0.33,\,w_{2} = 0.33 \) and \( w_{3} = 0.34 \) are used.
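The single-objective function of Eq. (14) is straightforward to evaluate in code. The Python sketch below assembles the regression coefficients of Eqs. (11)–(13) into the polynomial of Eq. (10) and forms the weighted sum; the function and variable names are illustrative.

```python
# coefficients (b0, b1, b2, b3, b11, b22, b33, b12, b23, b13) from Eqs. (11)-(13)
COEF = {
    "Y1": (3.8384907903, 0.0679672610, 0.0217802311, 0.0376755412,
           -0.0012103181, 0.0000953785, -0.0002819634,
           0.0005496524, -0.0009032316, 0.0008033811),
    "Y2": (46.6564201287, 0.6726057655, 0.4208752507, 0.9999909858,
           -0.0161053654, -0.0034210643, -0.0116458859,
           0.0122000907, -0.0095644212, 0.0089464814),
    "Y3": (-6.3629169281, 0.4060552042, 0.3277005337, 0.3411029105,
           -0.0053585731, -0.0020487593, -0.0042291040,
           0.0017226318, -0.0011990977, 0.0007814998),
}

def yield_value(c, x1, x2, x3):
    """Second-order polynomial of Eq. (10) with coefficient tuple c."""
    b0, b1, b2, b3, b11, b22, b33, b12, b23, b13 = c
    return (b0 + b1*x1 + b2*x2 + b3*x3
            + b11*x1**2 + b22*x2**2 + b33*x3**2
            + b12*x1*x2 + b23*x2*x3 + b13*x1*x3)

def g(x, w=(0.33, 0.33, 0.34)):
    """Weighted-sum objective of Eq. (14); maximize g (or minimize -g)."""
    y = [yield_value(COEF[k], *x) for k in ("Y1", "Y2", "Y3")]
    return sum(wi * yi for wi, yi in zip(w, y))
```

Evaluating `g([63.482211, 100.0, 27.290530])` at the optimum reported below reproduces yields of approximately 9.64, 117.79 and 25.28 for \( Y_{1} \), \( Y_{2} \) and \( Y_{3} \), respectively.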

Results In this section, the results obtained by WOA and LXWOA for the problem of extraction of compounds from gardenia are presented and compared with the results available in the literature. The algorithms are run for a maximum of 100 iterations with a population size of 30. The results, obtained over 30 independent runs, are shown in boldface in Table 9. The ranges of \( X_{1} ,X_{2} \) and \( X_{3} \) are taken as 0–100.

Table 9 The yield obtained by different algorithm

When WOA and LXWOA are used to solve the above problem, it is found that when the concentration of ethanol is \( X_{1} = 63.482211 \)%, the extraction temperature is \( X_{2} = 100 \) °C and the extraction time is \( X_{3} = 27.290530 \) min, then crocin \( (Y_{1} = 9.6418) \), geniposide \( (Y_{2} = 117.7906) \) and phenolic compounds \( (Y_{3} = 25.2781) \) are obtained in maximum amounts.

7 Conclusions

In this paper, a novel Laplacian whale optimization algorithm (LXWOA) has been proposed and its results have been compared with those of the Whale Optimization Algorithm, Particle Swarm Optimization, Differential Evolution, the Gravitational Search Algorithm and the Laplacian Gravitational Search Algorithm. From the above study, it is concluded that LXWOA gives the most promising results among these algorithms on scalable unimodal and scalable multimodal functions when the dimension is set to 30. The performance of LXWOA is improved in comparison to WOA on scalable unimodal, scalable multimodal and low dimensional multimodal functions with fixed dimensions. The results obtained from LXWOA and WOA for the problem of extraction of compounds from gardenia are better than the results available in the literature.