Introduction

Multidisciplinary design optimization and multidisciplinary system design optimization are emerging areas for solving design and optimization problems that span several disciplines. With advances in computing, a new era of computer-based problem-solving methods is emerging and has become the common approach for complex problems. Conventional methods that require direct human involvement are comparatively slow, which has made computer-aided design, with its emphasis on using computers for design problems, widely adopted: such methods are fast and speed up the working process. Computer-aided design aims not only to simulate a system but also to design it, and a further step in this direction is to find the optimal design with high accuracy, low cost, high speed and reliability. Real engineering problems pose specific challenges and require dedicated tools; optimization techniques are considered among the best tools for solving engineering problems and finding their optimal results. These approaches treat the problem as a black box, initialize the optimization with a random set of candidate solutions for the specified problem, and then improve them over a predefined number of steps. The engineering problems to be tackled involve various difficulties such as constraints, uncertainties, local optima and multiple objectives, and an optimization technique should be able to address these issues.

Multi-objective optimization focuses on finding solutions for problems having more than one objective; this multi-objectivity makes such problems complex and difficult to solve. Two main approaches are used: a posteriori and a priori. In the a priori approach, the multi-objective problem is first converted into a single-objective one by aggregating the objectives, with a weight assigned to each objective according to its importance. The drawback of this approach is that the algorithm must be run multiple times to obtain the Pareto optimal set. The a posteriori approach works in the opposite way: it maintains the multi-objective nature of the problem and finds the Pareto optimal set in a single run. Although it incurs a high computational cost, this method is still widely used to solve real-world problems. Once the optimization algorithm is selected, feature selection and data mining also have to be considered. Data mining consists of intermediate steps such as data pre-processing, data cleaning, data integration, data transformation, mining itself and pattern evaluation to produce the final data. Feature selection is also a major step, aiming to eliminate irrelevant variables in the data; such methods are categorized as wrappers and filters. Filters evaluate feature subsets based on the data itself, whereas wrappers use a learning algorithm to evaluate subsets; filters generally run faster than wrappers. When evaluating an optimization problem, exploration and exploitation are the criteria to be taken into account; based on these two features, algorithms fall into two categories: population-based search algorithms, which are exploration oriented, and evolution-based algorithms, which are exploitation focused.
A good balance between exploration and exploitation is needed to enhance the working efficiency of the resulting algorithm. One way to achieve this balance is to hybridize two techniques into one; the resulting technique is called a memetic algorithm. In recent years, various meta-heuristic search algorithms have been implemented, such as Biogeography Based Optimizer (Simon 2008), Grey Wolf Optimizer (Mirjalili et al. 2014), Ant Lion Optimizer (Mirjalili 2015a, b, c), Moth Flame Optimizer (Mirjalili 2015a, b, c), Multi Verse Optimizer (Mirjalili et al. 2016a, b), Dragon Fly Algorithm (Mirjalili 2016a, b), Sine Cosine Algorithm (Mirjalili 2016a, b), Lightning Search Algorithm (Shareef et al. 2015), Seeker Optimization Algorithm (Chaohua et al. 2006), Virus Colony Search Algorithm (Li et al. 2016a, b), Whale Optimization Algorithm (Mirjalili and Lewis 2016), Wind Driven Optimization (Bayraktar et al. 2010), Water Cycle Algorithm (Eskandar et al. 2012), Salp Swarm Algorithm (Mirjalili et al. 2017), Symbiotic Organism Search (Cheng and Prayogo 2014), Search Group Algorithm (Gonçalves et al. 2015), Stochastic Fractal Search Algorithm (Salimi 2015), The Runner Root Algorithm (Merrikh-Bayat 2015), Ant Colony Optimization (Dorigo et al. 2006), Shuffled Frog Leaping Algorithm (Eusuff et al. 2006), Flower Pollination Algorithm (Yang 2012), Optics Inspired Optimization (Kashan 2014a, b), Cultural Evolution Algorithm (Kuo and Lin 2013), Grasshopper Optimization Algorithm (Saremi et al. 2017), Interior Search Algorithm (Gandomi 2014), Colliding Bodies Optimization (Kaveh and Mahdavi 2005), Krill Herd Algorithm (Gandomi and Alavi 2012), Competition over Resources (Mohseni et al. 2014), Binary Bat Algorithm (Nakamura et al. 2002), Mine Blast Algorithm (Sadollah et al. 2013), Biogeography Based Optimization (Du et al. 
2009), Adaptive Cuckoo Search Algorithm (Mareli and Twala 2018), Bat Algorithm (Yang 2010a), Animal Migration Optimization (Li et al. 2014), Gravitational Search Algorithm (Rashedi et al. 2009), Branch and Bound Method (Cohen and Yoshimura 1983), Expert System Algorithm (Kothari and Ahmad 1995), Genetic Algorithm (Kazarlis et al. 1996), Binary Gravitational Search Algorithm (Rashedi et al. 2010), Collective Animal Behavior Algorithm (Cuevas et al. 2012a, b), Bird Swarm Algorithm (Meng et al. 2016), Cognitive Behavior Optimization (Li et al. 2016a, b), Electromagnetic Field Optimization (Abedinpourshotorban et al. 2016), Firework Algorithm (Tan et al. 2015), Water Wave Optimization (Zheng 2015), Earthworm Optimization Algorithm (Wang et al. 2015), Forest Optimization Algorithm (Ghaemi and Feizi-Derakhshi 2014), Mean Variance Optimization Algorithm (Erlich et al. 2010), League Championship Algorithm (Kashan 2014a, b), Chaotic Krill Herd Algorithm (Wang et al. 2014), Elephant Herding Optimization (Wang et al. 2016), Differential Evolution Algorithm (Storn and Price 1997), Imperialistic Competition Algorithm (Atashpaz-Gargari and Lucas 2007), Invasive Weed Optimization (Karimkashi and Kishk 2010), Particle Swarm Optimization Algorithm (Kennedy and Eberhart 1995), Crow Search Algorithm (Askarzadeh 2016), Self-Adaptive Bat Algorithm (Bavafa et al. 2014), Random Walk GWO (Gupta and Deep 2018). Many of these methods rely on linear and nonlinear programming techniques that require extensive gradient information and generally attempt to locate an improved solution in the neighborhood of a starting point. The main drawbacks of numerical methods and dynamic programming are the size or dimensionality of the problem, the large computational time, and the programming complexity (Rust 1996). The branch and bound technique does not require priority ordering of units and can be extended to allow for probabilistic reserve constraints. 
The difficulty of this technique is the exponential growth of execution time for systems of a practical size (Cohen and Yoshimura 1983). The Lagrangian relaxation approach handles the short-term UC problem and offers a faster procedure, but it fails to guarantee solution feasibility and becomes convoluted as the number of units grows (Mukherjee and Adrian 1989). Mixed integer programming strategies for the unit commitment problem break down as the number of units increases, since they require extensive memory and suffer from computational delay (Dakin 1965). The fuzzy theory method uses fuzzy sets to handle approximate load schedules, but it suffers from complexity (Kadam et al. 2009). The Hopfield neural network can accommodate more constraints, but it may suffer from numerical convergence problems due to its training procedure (Swarup and Simia 2006). The artificial neural network technique has the advantages of good solution quality and rapid convergence; it can accommodate more complicated unit-wise constraints, and the solution process in each strategy is quite distinct (Saini and Soni 2002). The genetic algorithm is a general-purpose stochastic and parallel search method based on the mechanics of natural selection and natural genetics. It is a search technique capable of approaching the global minimum; it can also obtain accurate results within a short time, and constraints are incorporated easily (Kazarlis et al. 1996). Evolutionary programming is considerably faster than conventional GAs and obtains good solutions; it overcomes the "curse of dimensionality", and its computational burden grows almost linearly with the problem scale (Juste et al. 1999). 
The harmony search algorithm can solve non-linear, hard-to-satisfy and complex optimization problems within a reasonable time; however, it suffers from slow local convergence as the iterates approach the optimal solution and requires a large number of iterations to reach it (Afkousi-Paqaleh and Rashidinejad 2010). Particle swarm optimization is an intelligent technique with a very fast search speed and no mutation or overlapping calculations. However, it requires parameter tuning and selection, cannot handle problems of non-coordinate systems, and suffers from imprecise regulation of its speed and direction due to partial optimism (Kennedy and Eberhart 1995). The loss-of-load-probability algorithm is robust and swift and finds a solution close to the best one, although some of the problem's requirements are disregarded to keep it simple and fast (Booth 1972). The pattern search algorithm can operate on functions that are neither differentiable nor continuous, with high efficiency and speed; however, it is sensitive to the initial point estimate and presumes the closeness of that point to the global solution, which makes it more vulnerable to getting stuck in a local minimum (Yin and Cagan 2000). The binary fireworks algorithm has a fast convergence speed with high optimization accuracy, but when a bad firework occurs, the optimal solution can become inefficient and inaccurate (Tan et al. 2015).
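The a priori weighted-sum conversion described above can be sketched in a few lines. The two objective functions below are hypothetical stand-ins and the weights are arbitrary; this is only an illustration of the aggregation step, not part of any cited method.

```python
def weighted_sum(objectives, weights):
    """Collapse a vector of objective values into one scalar objective."""
    return sum(w * f for w, f in zip(weights, objectives))

# Hypothetical two-objective design problem: minimize cost and emission of x.
cost = lambda x: (x - 2.0) ** 2
emission = lambda x: (x + 1.0) ** 2

def scalarized(x, weights=(0.7, 0.3)):
    """Single-objective surrogate handed to any standard optimizer."""
    return weighted_sum((cost(x), emission(x)), weights)
```

Rerunning the optimizer on `scalarized` with different weight vectors traces out different points of the Pareto optimal set, which is exactly why the a priori approach needs multiple runs.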

The shuffled frog leaping algorithm is capable of solving continuous, discrete, non-differentiable, multi-modal and non-linear optimization problems with fast convergence, but it has limited local search ability, a non-uniform initial population, and sometimes suffers from premature convergence (Eusuff et al. 2006). Invasive weed optimization has a fast and accurate response and can be stretched to any duration and number of generating units for load scheduling, but its convergence and execution speed are reduced because it obtains the UC outputs from various other techniques (Karimkashi and Kishk 2010). The gravitational search algorithm is easy to implement and has low computational cost, but it easily falls into local optima and its convergence slows down in the late search phase (Rashedi et al. 2009). The bat-inspired algorithm requires little implementation effort and finds better solutions at high convergence speed, but it progresses slowly, lacks population diversity, and may converge improperly because of local optima (Yang 2010a). The artificial bee colony algorithm is flexible, uses few control parameters, and hybridizes easily with other algorithms, but its convergence toward local minima is slow and premature convergence sometimes occurs due to poor exploitation (Karaboga 2010). The imperialistic competition algorithm has a good convergence rate and a better global solution if its parameters are adjusted properly; otherwise, the quality of the global optimum decreases and the computational time increases (Atashpaz-Gargari and Lucas 2007). Multi-particle swarm optimization is more likely to obtain the global optimum, and results show this method to be more efficient than genetic algorithms (Mostaghim and Teich 2003). 
The self-adaptive bat algorithm has a high convergence speed in solving the unit commitment problem, increases population diversity, and improves the exploration power of the conventional bat algorithm (Bavafa et al. 2014). The whale optimization algorithm is much more decisive in comparison with other meta-heuristic algorithms and traditional approaches (Mirjalili and Lewis 2016). The crow search algorithm finds promising results more easily than numerous other optimization algorithms used so far (Askarzadeh 2016). Seeker optimization algorithm results have been compared with diverse other algorithms in the literature to establish its pre-eminence (Chaohua et al. 2006). The Hopfield method uses a linear input–output model for neurons and calculates the weighting factors, but it can only be applied to systems with purely linear constraints (Swarup and Simia 2006). Ant lion optimization has a high convergence rate, which yields quick outcomes, and it gives better results than differential evolution for multi-modal problems; such numerical optimization methods provide a useful way of reaching the global optimum in simple and idealized models (Mirjalili 2015a, b, c). Environmental economic dispatch using novel differential evolution applies the differential evolution method to the environmental economic hydrothermal dispatch problem, with the aim of reducing electricity generation fuel costs and the emissions of thermal units. The problem is constrained by generation limits, active power balance, and the amount of available water. Two modified techniques are used in that paper: modified mutation that balances local and global search, and modified selection to choose the best solution; the results are more promising than those of conventional algorithms (Le et al. 2018). Optimal generation scheduling with the biologically inspired grasshopper algorithm presents a solution for the economic dispatch problem of a power system. 
The feasibility and validity of that algorithm are evaluated by solving three different problems comprising small-, medium- and large-scale power systems, and the results show that the proposed method is quite promising for solving a wide range of ED problems efficiently (Rajput et al. 2017). The island bat algorithm is a vital strategy for controlling diversity during the search: the drawback of the conventional bat-inspired algorithm is its inability to preserve diversity, which leads to premature convergence. The island algorithm is evaluated on 15 benchmark functions and compared with 17 competitive methods, showing successful outcomes; applied to three real-world cases of the economic load dispatch problem, the results obtained prove considerably efficient in comparison to other methods (Ahmadpour et al. 2018).

However, some real-world engineering and scientific optimization problems are highly intricate and hard to solve using these techniques. If the problem has more than one local minimum, the outcome may depend on the choice of initial point, and the minimum obtained may not be the global one. Moreover, gradient search may become difficult and unstable when the objective function and constraints have multiple or sharp peaks. The computational drawbacks of existing numerical linear and nonlinear methods have compelled researchers to rely on simulation-based meta-heuristic algorithms for engineering and scientific optimization problems. Several traditional techniques are available to solve the unit commitment problem; however, each of these strategies requires an exact mathematical model of the system, and there is a chance of getting stuck at local optima. Furthermore, the No-Free-Lunch theorem for optimization encourages developers to devise new algorithms or improve existing ones, because it logically proves that no single optimization algorithm can solve all optimization problems with equal efficiency: some algorithms work best for a few problems and worst for the rest. So there is always scope for improvement in developing an algorithm that works well for most problems.

Hybrid Grey Wolf Optimizer

Grey Wolf Optimizer (Mirjalili et al. 2014) is a recently developed, swarm-intelligence-based meta-heuristic search algorithm inspired by the hunting mechanism and leadership hierarchy of grey wolves in nature. It requires few control parameters and was initially applied to 29 benchmark problems, three classical engineering design problems (tension/compression spring, welded beam and pressure vessel design) and a real-world optical engineering problem. It has since been successfully applied to various engineering optimization problems, such as the economic load dispatch problem (Vlachogiannis and Lee 2009), economic load dispatch with valve point effect (Herwan Sulaiman et al. 2015), unit commitment problem (Bhardwaj et al. 2012), training multi-layer perceptron (Mirjalili 2015a, b, c), solving optimal reactive power dispatch (Sulaiman et al. 2015), feature subset selection (Emary et al. 2015a, b), parameter estimation in surface wave (Song et al. 2015), power point tracking for photovoltaic system (Mohanty et al. 2016), multi-criterion optimization (Mirjalili et al. 2016a, b), shop scheduling problem (Komaki and Kayvanfar 2015), training q-Gaussian radial base function (Muangkote et al. 2014), combined heat and power dispatch problem (Jayakumar et al. 2016), automatic generation control problem (Sharma and Saikia 2015; Gupta and Saxena 2016), automatic generation control with TCPS (Lal et al. 2016), load frequency control of interconnected power system (Guha et al. 2016a, b), optimal control of DC motor (Madadi and Motlagh 2014), solving multi-input–multi-output systems (El-Gaafary et al. 2015), smart grid systems (Mahdad and Srairi 2015), multi-objective optimal power flow (El-Fergany and Hasanien 2015), combined economic emission dispatch problem (Mee Song et al. 2014), 3D stacked SoC (Zhu et al. 2015), economic load dispatch problem (Kamboj et al. 2016), hyperspectral band selection problem (Medjahed et al. 
2016), sizing of multiple distributed generation (Sultana et al. 2016), capacitated vehicle routing problem (Korayem et al. 2015), clustering analysis (Zhang and Zhou 2015), system reliability optimization (Kumar et al. 2017), stabilizer design problem (Shakarami and Davoudkhani 2016), dynamic scheduling in welding industry (Lu et al. 2017), photonic crystal filter optimization (Chaman-Motlagh 2015), attribute reduction (Emary et al. 2015a, b), tuning of fuzzy controller (Precup et al. 2017), tuning of fuzzy PID controller (Saxena and Kothari 2016), doubly fed induction generator-based wind turbine (Yang et al. 2017), robust generation control strategy (EBSCOhost 1033), aligning multiple molecular sequences (Jayapriya and Arock 2015), image registration (Rathee et al. 2015), training LSSVM for price forecasting (Mustaffa et al. 2015), unmanned combat aerial vehicle path planning (Zhang et al. 2016), automated offshore crane design (Hameed et al. 2016), decision tree classifier for cancer classification on gene expression data (Vosooghifard and Ebrahimpour 2015), human recognition system (Sánchez et al. 2017), optimizing key values in cryptography algorithms (Shankar and Eswaran 2016), and optimal design of double layer grids (Gholizadeh 2015). Several algorithms have also been developed to improve the convergence performance of Grey Wolf Optimizer, including parallelized GWO (Pan et al. 2016), hybrid GWO with Genetic Algorithm (GA) (Tawhid and Ali 2017), hybrid DE with GWO (Jitkongchuen 2015), Hybrid Grey Wolf Optimizer using Elite opposition-based learning strategy and simplex method (Zhang et al. 2017), Modified Grey Wolf Optimizer (mGWO) (Mittal et al. 2016), Mean Grey Wolf Optimizer (MGWO) (Singh and Singh 2017a), Hybrid particle swarm optimization with Grey Wolf Optimizer (HPSOGWO) (Singh and Singh 2017b) and RW-GWO (Gupta and Deep 2018).

Problem Formulation

The originally developed Grey Wolf Optimizer is an evolutionary algorithm that mimics the social hierarchy and hunting mechanism of grey wolves through three principal phases of hunting: searching for prey, encircling prey and attacking prey; its mathematical model is designed around the hierarchy levels of different wolves. The fittest solution is designated alpha (α); accordingly, the second and third best solutions are named beta (β) and delta (δ), respectively. The remaining candidate solutions are considered kappa (\(\kappa\)), lambda (\(\lambda\)) and omega (ω). In the fitness evaluation, the optimization (i.e., the hunt) is guided by α, β and δ, and the ω, \(\kappa\) and \(\lambda\) wolves follow these three. In GWO, encircling or trapping of prey is achieved by calculating the vectors \(\vec{D}\) and \(\vec{X}_{GWolf}\) described by Eqs. (1) and (2).

$$\vec{D} = \left| {\vec{C} \cdot \vec{X}_{Prey} (iter) - \vec{X}_{GWolf} (iter)} \right|$$
(1)
$$\vec{X}_{GWolf} (iter + 1) = \vec{X}_{Prey} (iter) - \vec{A} \cdot \vec{D}$$
(2)

where iter denotes the current iteration, \(\vec{A}\) and \(\vec{C}\) are coefficient vectors, \(\vec{X}_{Prey}\) is the position vector of the prey and \(\vec{X}_{GWolf}\) is the position vector of a grey wolf; the vectors \(\vec{A}\) and \(\vec{C}\) are calculated as follows:

$$\vec{A} = 2\vec{a} \cdot \overrightarrow {\mu }_{1} - \vec{a}$$
(3)
$$\vec{C} = 2 \cdot \overrightarrow {\mu }_{2}$$
(4)

where \(\overrightarrow {\mu }_{1}\) and \(\overrightarrow {\mu }_{2}\) are random vectors in \([0, 1]\), and \(\vec{a}\) decreases linearly from 2 to 0 over the course of iterations.

The hunting of prey is modeled by calculating the corresponding fitness scores and positions of the alpha, beta and delta wolves using Eqs. (5a, 5b), (6a, 6b) and (7a, 7b), respectively; the final position for attacking the prey is calculated by Eq. (8).

$$\vec{D}_{\text{Alpha}} = \left| {\vec{C}_{1} \cdot \vec{X}_{\text{Alpha}} - \vec{X}} \right|$$
(5a)
$$\vec{X}_{1} = \vec{X}_{\text{Alpha}} - \vec{A}_{1} \cdot \vec{D}_{\text{Alpha}}$$
(5b)
$$\vec{D}_{\text{Beta}} = \left| {\vec{C}_{2} \cdot \vec{X}_{\text{Beta}} - \vec{X}} \right|$$
(6a)
$$\vec{X}_{2} = \vec{X}_{\text{Beta}} - \vec{A}_{2} \cdot \vec{D}_{\text{Beta}}$$
(6b)
$$\vec{D}_{\text{Delta}} = \left| {\vec{C}_{3} \cdot \vec{X}_{\text{Delta}} - \vec{X}} \right|$$
(7a)
$$\vec{X}_{3} = \vec{X}_{\text{Delta}} - \vec{A}_{3} \cdot \vec{D}_{\text{Delta}}$$
(7b)
$$\vec{X}(iter + 1) = \frac{{(\vec{X}_{1} + \vec{X}_{2} + \vec{X}_{3} )}}{3}$$
(8)
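Taken together, Eqs. (1)-(8) define one position update per wolf. The NumPy sketch below is our own illustrative rendering, not the authors' code; the function name `gwo_step` and its argument layout are hypothetical.

```python
import numpy as np

def gwo_step(wolves, alpha, beta, delta, a, rng):
    """One GWO position update: each wolf moves to the average of three
    candidate positions steered by the alpha, beta and delta leaders."""
    new = np.empty_like(wolves)
    for i, X in enumerate(wolves):
        candidates = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(X.shape) - a   # Eq. (3), mu_1 in [0, 1)
            C = 2 * rng.random(X.shape)           # Eq. (4), mu_2 in [0, 1)
            D = np.abs(C * leader - X)            # Eqs. (5a)-(7a)
            candidates.append(leader - A * D)     # Eqs. (5b)-(7b)
        new[i] = np.mean(candidates, axis=0)      # Eq. (8)
    return new
```

With the linear decay of \(\vec{a}\) from 2 to 0 (e.g., `a = 2 - 2 * it / max_iter`), early iterations have |A| > 1 and favour exploration, while later iterations contract the wolves around the leaders.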

Pattern Search

The pattern search method, also known as a black-box method, is a derivative-free method with local search capability, suited to search problems where the derivative of the objective function is inconvenient to compute or unknown. The method alternates two moves: an exploratory search, a local probe that looks for an improving direction to move in, and a pattern move, a larger step along the improving direction whose size is increased as long as the improvement persists. The pattern move requires two points: the current point and another point with a better value of the objective function, which guides the search direction; the new point is generated according to Eq. (9).

$$x^{(2)} = x^{(0)} + \psi [x^{(1)} - x^{(0)} ]$$
(9)

where \(\psi\) is a positive acceleration factor used to scale the length of the improving-direction vector. The PSEUDO code for the pattern search algorithm is shown in Fig. 1.

Fig. 1
figure 1

PSEUDO code for pattern search method
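The exploratory and pattern moves just described can be sketched as a minimal runnable routine. This is an illustration, not the pseudo-code of Fig. 1: the initial step size, the acceleration factor \(\psi = 2\), the halving shrink rule and the stopping tolerance are all assumptions of ours.

```python
def pattern_search(f, x0, step=0.5, psi=2.0, tol=1e-6, max_iter=1000):
    """Derivative-free pattern search: axis-wise exploratory probes find an
    improving direction; a pattern move (Eq. 9) accelerates along it."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        # Exploratory move: perturb one coordinate at a time, keep improvements.
        base, fbase = list(x), fx
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(base)
                trial[i] += d
                ft = f(trial)
                if ft < fbase:
                    base, fbase = trial, ft
                    break
        if fbase < fx:
            # Pattern move: x2 = x0 + psi * (x1 - x0), kept only if it improves.
            cand = [xi + psi * (bi - xi) for xi, bi in zip(x, base)]
            fc = f(cand)
            x, fx = (cand, fc) if fc < fbase else (base, fbase)
        else:
            step *= 0.5                # no improving direction: shrink the mesh
            if step < tol:
                break
    return x, fx
```

Because only function comparisons are used, the routine works equally well on non-differentiable or discontinuous objectives, which is the property exploited in the hybrid below.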

In the proposed hybrid Grey Wolf Optimizer-Pattern Search (hGWO-PS) algorithm, the randomly generated position vector \(\vec{X}\) is further refined using the pattern search method, and the modified position vector \(\vec{X}\) is used by the grey wolves to evaluate the alpha, beta and delta scores. A heuristic procedure is adopted to hybridize GWO and PS. The effect of the newly obtained position vectors, shown as two-dimensional and three-dimensional position vectors with their conceivable neighbours, is outlined in Fig. 2: a grey wolf at position (X, Y) can update its position with respect to the newly obtained position vectors as indicated by the position of the prey (X*, Y*), and thereby exploit the search space more effectively. Better places around the search space can be explored by adjusting the values of the \(\vec{A}\) and \(\vec{C}\) vectors.

Fig. 2
figure 2

2D and 3D view of position vectors and possible next location with respect to Prey

The exploration phase in hGWO-PS is similar to that of classical GWO. To explore the search space globally, the vectors \(\vec{A}\) and \(\vec{C}\) are used, which mathematically model divergence: when the absolute value of \(\vec{A}\) is greater than 1, the grey wolves are forced to diverge from the prey in the hope of finding a fitter prey, as depicted in Fig. 3. The PSEUDO code of the proposed hGWO-PS algorithm is shown in Fig. 4, and the flow chart of the proposed hybrid algorithm is depicted in Fig. 5.

Fig. 3
figure 3

Exploration phase of grey wolves

Fig. 4
figure 4

PSEUDO code of proposed hGWO-PS algorithm

Fig. 5
figure 5

Flow chart of proposed hGWO-PS algorithm
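The hybridization scheme, refining each position with a pattern-search pass before the usual GWO leader update, can be sketched as below. This is a simplified illustration under our own assumptions (single exploratory pass per wolf, fixed refinement step, symmetric bounds), not the authors' exact procedure from Figs. 4 and 5.

```python
import numpy as np

def hgwo_ps(f, dim, n_wolves=10, max_iter=100, lb=-10.0, ub=10.0, seed=0):
    """Illustrative hGWO-PS loop: a pattern-search refinement of every wolf
    (exploitation) followed by the GWO alpha/beta/delta update (Eqs. 5-8)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))

    def refine(x, step=0.1):
        # One exploratory pattern-search pass: axis-wise probes, keep improvements.
        x, fx = x.copy(), f(x)
        for i in range(dim):
            for d in (step, -step):
                t = x.copy()
                t[i] += d
                ft = f(t)
                if ft < fx:
                    x, fx = t, ft
                    break
        return x

    for it in range(max_iter):
        X = np.array([refine(x) for x in X])            # PS exploitation step
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]     # three fittest wolves
        a = 2.0 - 2.0 * it / max_iter                   # linear decay of a
        for i in range(n_wolves):
            leads = []
            for L in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                leads.append(L - A * np.abs(C * L - X[i]))
            X[i] = np.clip(np.mean(leads, axis=0), lb, ub)
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], float(fit.min())
```

The design choice matches the text: pattern search only sharpens positions locally (exploitation), while the |A| > 1 regime of the unchanged GWO update still provides all of the exploration.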

Test System and Standard Benchmark

To validate the performance of the proposed hGWO-PS algorithm, 23 benchmark functions (Mirjalili et al. 2014) have been considered; they are listed in Tables 9, 10 and 11 in Appendix 1. Table 9 presents the unimodal benchmark functions, Table 10 the multi-modal benchmark functions and Table 11 the fixed-dimension benchmark problems. The 3D views of the unimodal, multi-modal and fixed-dimension benchmark problems are shown in Figs. 14, 15 and 16 in Appendix 2, respectively. To account for the stochastic nature of the proposed algorithm, 30 trial runs have been performed, and the results are evaluated for the mean, worst and best fitness values together with the standard deviation. Throughout the study, 30 search agents are used and the algorithm is simulated for a maximum of 500 iterations.
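The trial-run bookkeeping (best, worst, mean and standard deviation over repeated stochastic runs) can be expressed generically. The `optimizer(objective, seed=...)` signature below is a hypothetical stand-in for any seeded meta-heuristic returning a final fitness value.

```python
import statistics

def trial_statistics(optimizer, objective, runs=30):
    """Summarize repeated stochastic runs by the best, worst, mean and
    (population) standard deviation of the final fitness values."""
    scores = [optimizer(objective, seed=s) for s in range(runs)]
    return {
        "best": min(scores),
        "worst": max(scores),
        "mean": statistics.mean(scores),
        "std": statistics.pstdev(scores),
    }
```

These four summary values are exactly what the comparison tables in the next section report for each benchmark function.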

Results and Discussion

To account for the stochastic nature of the proposed hGWO-PS algorithm and validate the results, 30 trial runs are considered, and each objective function is evaluated for its average, standard deviation, worst and best values. To validate the exploitation phase of the proposed algorithm, the unimodal benchmark functions F1, F2, F3, F4, F5, F6 and F7 are considered. Table 1 shows the solutions of the unimodal benchmark functions using the hGWO-PS algorithm. The comparison results for the unimodal benchmark functions are shown in Table 2, where the proposed algorithm is compared, in terms of average and standard deviation, with other recently developed meta-heuristic search algorithms: GWO (Mirjalili et al. 2014), PSO (Kennedy and Eberhart 1995), GSA (Rashedi et al. 2009), DE (Storn and Price 1997), FEP (Yao et al. 1999), SMS (Cuevas et al. 2012a, b, 2014), BA (Yang 2010a), FPA (Yang 2012; Yang et al. 2014), CS (Yang and Deb 2009; Yang 2010a), FA (Yang 2010a, b, c), GA (John 1992), MVO (Mirjalili et al. 2016), BDA (Mirjalili 2016), BPSO (Kennedy and Eberhart 1997), BGSA (Rashedi et al. 2010), SCA (Mirjalili 2016) and SSA (Mirjalili et al. 2017). The convergence curves of hGWO-PS for the unimodal benchmark functions are shown in Fig. 6, and the trial solutions for the unimodal benchmark functions are shown in Fig. 7. To validate the exploration phase of the proposed algorithm, the multi-modal benchmark functions F8, F9, F10, F11, F12 and F13 are considered, as these functions have many local optima, with the number increasing exponentially with the dimension. Table 3 shows the solutions of the multi-modal benchmark functions using the hGWO-PS algorithm. The comparison results for the multi-modal benchmark functions are shown in Table 4, where the proposed algorithm is compared, in terms of average and standard deviation, with GWO (Mirjalili et al. 2014), PSO (Kennedy and Eberhart 1995), GSA (Rashedi et al. 2009), FEP (Yao et al. 1999), SMS (Cuevas et al. 2012a, b, 2014), BA (Yang 2010a), FPA (Yang 2012; Yang et al. 2014), CS (Yang and Deb 2009; Yang 2010a), FA (Yang 2010a, b, c), GA (John 1992), MVO (Mirjalili et al. 2016), DA (Mirjalili 2016), BDA (Mirjalili 2016), BPSO (Kennedy and Eberhart 1997), BGSA (Rashedi et al. 2010), SCA (Mirjalili 2016), SSA (Mirjalili et al. 2017) and DE (Storn and Price 1997). The convergence curves of hGWO-PS for the multi-modal benchmark functions are shown in Fig. 8 and the corresponding trial solutions in Fig. 9. The irregular shape of the convergence curves for the multi-modal benchmark functions is due to the presence of multiple optimal points. It has been experimentally observed that the computational time of the algorithm increases slightly owing to the larger number of fitness evaluations.

Table 1 Results of hybrid GWO-PS algorithm for unimodal benchmark function
Table 2 Comparison of unimodal benchmark functions
Fig. 6
figure 6

Convergence curve of hGWO-PS for unimodal benchmark functions

Fig. 7
figure 7

Trial solutions for unimodal benchmark functions

Table 3 Results of hybrid GWO-PS algorithm for multi-modal benchmark function
Table 4 Comparison of multi-modal benchmark functions
Fig. 8
figure 8

Convergence curve of hGWO-PS for multi-modal benchmark functions

Fig. 9
figure 9

Trial solutions of hGWO-PS for multi-modal benchmark functions

The test results for the fixed-dimension benchmark problems are shown in Tables 5, 6 and 7. The comparison results for the fixed-dimension benchmark functions are shown in Tables 6 and 7, where the proposed algorithm is compared, in terms of average and standard deviation, with other recently developed meta-heuristic search algorithms: GWO (Mirjalili et al. 2014), PSO (Kennedy and Eberhart 1995), GSA (Rashedi et al. 2009), FEP (Yao et al. 1999), SMS (Cuevas et al. 2012a, b, 2014), BA (Yang 2010a, d), CS (Yang and Deb 2009; Yang 2010a), FA (Yang 2010a, b, c), GA (John 1992), MVO (Mirjalili et al. 2016), SCA (Mirjalili 2016) and SSA (Mirjalili et al. 2017). The trial solutions for the fixed-dimension benchmark functions and their convergence curves are shown in Figs. 10 and 11, respectively.

Table 5 Results of hybrid GWO-PS algorithm for fixed dimension benchmark function
Table 6 Comparison of fixed dimension benchmark functions
Table 7 Comparison of results for Fixed Dimension Benchmark functions
Fig. 10
figure 10

Trial solutions of hGWO-PS for fixed dimension benchmark functions

Fig. 11
figure 11

Convergence curve of hGWO-PS for fixed dimension benchmark functions

To verify the performance of the proposed hGWO-PS algorithm on engineering optimization, two biomedical engineering problems (XOR, Iris) are considered; the corresponding results for GWO and hGWO-PS, evaluated over 30 trial runs, are reported in Table 8. The convergence curves and trial-run solutions for these biomedical engineering problems are depicted in Figs. 12 and 13, respectively.

Table 8 Solution of bio-medical-problems using GWO and hGWO-PS
Fig. 12
figure 12

Convergence of GWO and hGWO-PS for real world biomedical problems

Fig. 13
figure 13

Trial runs solutions of GWO and hGWO-PS for real world biomedical problems

Conclusion and Future Scope

In the present research, the authors have developed a hybrid version of the existing Grey Wolf Optimizer, named hGWO-PS, by combining the pattern search algorithm (a local search algorithm) with the Grey Wolf Optimizer (a global search algorithm) to improve the exploitation phase of the existing Grey Wolf Optimizer. Experimentally, it has been found that the exploitation phase of the existing GWO algorithm is improved; however, there is no improvement in its exploration phase. It can therefore be concluded that combining the pattern search algorithm with the Grey Wolf Optimizer is not a good choice overall, and that the combination should be used only to exploit the local search space.