1 Introduction

1.1 Background

Optimization is critical to achieving the best possible parameters for a given system while keeping the cost minimal (Izci and Ekinci 2021). As optimization problems arise in many fields, such as engineering, science, economics, and business, there has been an enormous surge in the development of optimization algorithms (Gupta and Deep 2020). However, not all real-world optimization problems can be solved effectively using conventional mathematical programming approaches. Problems with characteristics such as non-differentiability and large numbers of decision variables and objective functions require alternative methods (Abualigah et al. 2021). Such challenges have led to the development of new optimization techniques that can effectively solve complex problems that conventional methods cannot handle (Izci 2022). Therefore, metaheuristic algorithms have garnered immense attention as a powerful and effective alternative to tackle complex optimization problems (Gharehchopogh et al. 2023; Zaman and Gharehchopogh 2022; Gharehchopogh 2023; Shishavan and Gharehchopogh 2022; Mohammadzadeh and Gharehchopogh 2021a).

The realm of metaheuristic algorithms is characterized by an impressive diversity of approaches, each with unique strengths and limitations (Izci et al. 2023). These approaches, inspired by a range of physical and biological phenomena, are able to tackle complex optimization problems that cannot be effectively solved through conventional methods (Trojovský and Dehghani 2022). With their flexible and simple structure and their ability to avoid local optima through randomized search, they have been widely studied and employed to solve a broad range of problems (Bharti et al. 2017, 2021; Ekinci et al. 2023a; Niu et al. 2022; Mohammadzadeh and Gharehchopogh 2021). Therefore, metaheuristic techniques have taken the scientific community by storm in the past few years, emerging as a prominent research area for tackling complex real-world problems. Their remarkable ability to provide optimal solutions in a computationally efficient manner and their ease of implementation have made them a go-to tool for diverse engineering design applications (Carbas et al. 2021). This is because the problem is treated as a black box, and the algorithm attempts to solve it without being concerned about its internal nature (Izci et al. 2022). This property enables metaheuristic algorithms to be readily applicable to engineering optimization problems.

1.2 Motivation

Improved versions of the HGS algorithm have been proposed in the literature and used to solve a variety of problems, indicating the feasibility of designing more efficient algorithms. However, achieving a balance between exploration and exploitation remains a challenge. Thus, there is a need to establish an effective balance strategy for optimization problems (Ekinci et al. 2023b). Also, it is important to note that each metaheuristic algorithm possesses unique strengths and weaknesses, and the "No Free Lunch theorem" (Wolpert and Macready 1997) asserts that there is no single algorithm that can optimally solve all problems (Agushaka et al. 2022; Ezugwu et al. 2022). Therefore, the motivation of this study is to develop an improved version of the HGS algorithm that is capable of effectively tackling a wide range of engineering optimization problems. In line with this motivation, this paper proposes a novel HGS algorithm with elite opposite-solution capability as an improved HGS algorithm. The improved version of the HGS algorithm proposed in this study uses the pattern search (Torczon 1997) and elite opposition-based learning (Zhao et al. 2017) mechanisms. The latter is used to enhance the exploration, while the former is used to improve the exploitation of the HGS algorithm. To leverage superior performance, these mechanisms are embedded wisely within the HGS algorithm instead of being run sequentially.

1.3 Contributions

In line with the discussion provided in the previous subsections, the performance of the improved HGS algorithm is initially evaluated on the CEC2019 test suite (Chakraborty et al. 2021) using moth-flame optimization (Mirjalili 2015a), the capuchin search algorithm (Braik et al. 2021), Harris hawks optimization (Hussien et al. 2022), and the arithmetic optimization algorithm (Zheng et al. 2021) for comparisons. Then the performance of the improved version of the HGS algorithm is tested on the CEC2020 test suite (Snášel et al. 2023) as a more challenging platform. The latter platform is used to provide insight into the contributions of each technique (elite OBL and PS) adopted in the improved HGS algorithm. Besides, a hybrid HGS algorithm with differential evolution, chaotic local search, and evolutionary population dynamics techniques (Li et al. 2021) and a quantum Nelder-Mead HGS algorithm (Xu et al. 2022) are also used as different reported variants of the HGS algorithm. Furthermore, the state-of-the-art algorithms of the Runge–Kutta optimizer (Ahmadianfar et al. 2021), dwarf mongoose optimization (Agushaka et al. 2022), the gazelle optimization algorithm (Agushaka et al. 2022b), and prairie dog optimization (Ezugwu et al. 2022) are also used to test the performance of the improved version of the HGS algorithm from a wider perspective. The related evaluations show the superior performance of the proposed method in terms of optimization capability. To further demonstrate the superiority of the improved version of the HGS algorithm proposed in this study, three different engineering design problems, with different natures and complexities, are also considered. In this regard, the identification of an IIR model (Mohammadi et al. 2022), training an MLP (Agahian and Akan 2022), and designing a proportional-integral-derivative controller for a DFIG-based wind turbine system (Labdai et al. 2022) are considered in this paper in order to demonstrate the performance of the proposed method over a wider range of engineering design problems. To briefly list the contributions, the following remarks can be made:

  • A novel improved version of HGS algorithm is proposed by wisely integrating the elite OBL and PS mechanisms.

  • The better performance of the proposed algorithm is demonstrated on the CEC2019 and CEC2020 benchmark suites against state-of-the-art algorithms.

  • The proposed method is shown to perform well on a wide range of complex real-world engineering problems, namely the identification of an IIR model, training an MLP, and designing a proportional-integral-derivative controller for a DFIG-based wind turbine system.

  • The superior performance for IIR identification model is shown comparatively using reported methods relying on cat swarm optimization (Panda et al. 2011), genetic algorithm (Panda et al. 2011), differential evolution algorithm (Yang et al. 2018), and opposition-based hybrid coral reefs optimization algorithm (Yang et al. 2018) alongside the original HGS algorithm.

  • The performance of the proposed algorithm for training MLP is compared with the reported methods based on atom search optimization (Eker et al. 2021), dragonfly algorithm (Mirjalili 2016), bat optimization (Shehab et al. 2023), and grey wolf optimizer (Mirjalili 2015b) alongside the original HGS algorithm. The comparisons confirm the superior performance of the proposed method over the competitive methods in the literature.

  • In terms of DFIG based wind turbine system, this work particularly utilizes the proposed algorithm in conjunction with the reptile search algorithm (Izci et al. 2022), gravitational search algorithm (Bharti et al. 2021), particle swarm optimization (Bharti et al. 2021), and bacterial foraging optimization (Bharti et al. 2021) which are the reported methods in the literature. The analyses show that the proposed method is also a good candidate for better controllability, further indicating the performance of the proposed method for a wide range of engineering applications.

2 Related works

2.1 Infinite impulse response system identification

Identifying infinite impulse response (IIR) models is critical in signal processing and system identification. IIR models are widely used to model systems with long-term memory, such as filters and control systems. They have a compact representation that can provide efficient computational solutions (Sharifi and Mojallali 2015; Singh et al. 2019). However, identifying IIR models can be challenging due to their recursive nature, which requires specialized techniques such as maximum likelihood or subspace identification. Accurate identification of infinite impulse response models is essential for designing and optimizing systems that rely on them, such as digital signal processing and control systems (Zhao et al. 2020; Karaboga and Akay 2009). It can also lead to a better understanding and analysis of complex systems, enabling more effective and efficient decision-making.

Several metaheuristic algorithm examples for the IIR system identification problem can be found in the literature (Cuevas et al. 2023). For example, the application of the genetic algorithm, particle swarm optimization, the gravitational search algorithm, and inclined planes system optimization was explored for the design and optimization of digital IIR filters (Mohammadi et al. 2019). The performance and efficiency of these methods were evaluated using the mean squared error, demonstrating the success of the research in achieving accurate results. In (Mohammadi et al. 2021), a novel approach was introduced for designing optimal IIR filters using a variable length particle swarm optimization algorithm with a weighted sum fitness function. The approach incorporates the filter order as a discrete variable in the particle vector to intelligently minimize the order and reduce the complexity of IIR filters. The proposed algorithm was evaluated through simulation results and demonstrated improved identified structures and performance. In another work, a metaheuristic algorithm called average differential evolution with local search was proposed for identifying optimal coefficients of unknown IIR systems (Durmuş 2022). By minimizing the error between the unknown system output and the adaptive IIR filter output, the proposed algorithm enables rapid convergence to global solutions in system identification problems, resulting in precise prediction of filter coefficients on multimodal error surfaces. The performance of the average differential evolution with local search algorithm was demonstrated through comparisons with other methods, showing its efficiency in terms of convergence rate and mean square error value. Further metaheuristic approaches reported for the IIR system identification problem include the firefly algorithm (Upadhyay et al. 2016), the teaching-learning-based optimization algorithm (Singh et al. 2019), the whale optimization algorithm (Luo et al. 2020), the selfish herd optimization algorithm (Zhao et al. 2020), and the bat algorithm (Kumar et al. 2016).

2.2 Multilayer perceptron training

Training a multilayer perceptron (MLP) is a crucial task in machine learning and artificial neural networks. Multilayer perceptron is a feed-forward neural network consisting of multiple layers of interconnected nodes, with each layer performing a specific computation (Irmak et al. 2022). To train an MLP, the network weights and biases are adjusted based on the error between the predicted output and the actual output. This process, known as backpropagation, requires using an optimization algorithm, such as stochastic gradient descent, to minimize the error and improve the model's accuracy. Training an MLP requires careful tuning of various hyperparameters, such as the learning rate and the number of hidden layers, to balance the trade-off between overfitting and underfitting. Successful training of an MLP can lead to highly accurate and reliable models that can be used for a wide range of applications, including image recognition, natural language processing, and speech recognition.

Several metaheuristics-based approaches have also been reported for the MLP training. For example, in a study, the authors proposed a hybrid training technique that combines the ant lion optimizer with MLP (Heidari et al. 2020). In the related study, an encoding scheme and objective formula were introduced, and the model was validated on sixteen standard datasets. Comparative experiments demonstrated that the proposed approach outperforms other well-known metaheuristic algorithms in terms of classification accuracy and convergence rates. In another study (Hong et al. 2020), a methodological approach for generating a landslide susceptibility map was presented by classifying landslide variables, weighting them using the certainty factor method, and optimizing the neural network's structural parameters using a genetic algorithm. The proposed model outperformed logistic regression and random forest models in terms of prediction accuracy, area under the curve, and relative landslide density, demonstrating its effectiveness as a spatial investigation tool for landslide susceptibility mapping. In Lee and Lee (2022), the researchers focused on predicting runoff in urban streams using an MLP with a harmony search optimizer. The proposed approach outperformed MLPs using other existing optimizers, resulting in more accurate runoff predictions with the smallest error compared to observed runoff peak values. The findings demonstrated the effectiveness of the proposed approach in accurately predicting urban stream runoff based on pump station discharge and rainfall information. In Li et al. (2022), improving the accuracy of medical data classification was investigated by using a modified biogeography-based optimization algorithm. The proposed algorithm incorporates different probability distributions into the migration process of biogeography-based optimization to enhance the performance and overcome issues such as local minimum, slow convergence, and sensitivity to initial values. Experimental results demonstrated that the proposed algorithm outperforms both the standard biogeography-based optimization algorithm and other adopted algorithms, leading to improved classification accuracy in medical data analysis. In a different study (Turkoglu and Kaya 2020), the application of the artificial algae algorithm was introduced for training artificial neural networks in various problem domains, particularly in classification tasks. Ten different classification datasets were used to test the performance of the proposed approach. The results indicated that the artificial algae algorithm is a reliable and effective approach for training artificial neural networks, demonstrating its potential as an alternative method for optimizing neural network parameters. In Lee (2023), the researchers focused on predicting the inflow of a centralized reservoir in an urban drainage system as a non-structural measure for preemptive operation. A new MLP model was proposed, which combined existing optimizers with an improved harmony search algorithm, resulting in improved accuracy compared to other existing optimizers in terms of mean square error and mean absolute error. The study in Bacanin et al. (2022) proposed an enhanced brainstorm optimization-based algorithm for the training. 
The algorithm demonstrated improved performance in terms of classification accuracy and convergence speed compared to other state-of-the-art approaches on a variety of benchmark datasets, outperforming them by 1–2% on average in terms of accuracy and dominating in terms of mean accuracy on the majority of datasets. In Bansal et al. (2019) a combination of an MLP with the lion optimization algorithm was proposed to optimize the architecture and training of the MLP for classification tasks. The proposed approach outperformed existing state-of-the-art techniques in terms of accuracy on various classification problems, showcasing its effectiveness in achieving improved performance. The study in Ma (2022) introduces an improved version of the moth-flame optimizer called adaptive moth flame optimization with opposition-based learning, which addresses the issues of slow convergence and local stagnation. The performance of the proposed algorithm was evaluated through benchmark function tests and compared to other algorithms in MLP training, demonstrating its superior accuracy, convergence rate, and classification performance. Lastly in Moghanian et al. (2020), an intrusion detection system that utilizes an artificial neural network trained using the grasshopper optimization algorithm was presented to improve accuracy in detecting network intrusion patterns. The proposed method demonstrated higher accuracy compared to state-of-the-art techniques in detecting abnormal and malicious traffic and attacks based on evaluation with different datasets.

2.3 Doubly fed induction generator-based wind turbine system control

The last engineering design problem considered in this paper is the control of a doubly fed induction generator (DFIG) based wind turbine system, which is crucial for ensuring the safe and efficient operation of the system. DFIG-based wind turbines are widely used due to their high efficiency and ability to generate power at variable speeds. Still, they are also highly complex systems that require sophisticated control algorithms to maintain stable operation (Labdai et al. 2022). The control system for a DFIG-based wind turbine must regulate the power output, voltage, and frequency of the system while also protecting it from grid disturbances and other operational issues (Sudarsana Reddy and Mahalakshmi 2022). Failure to control the system effectively can result in voltage and frequency fluctuations, power losses, and even system failure, leading to downtime and reduced energy production. Effective control of DFIG-based wind turbine systems is therefore essential for ensuring the reliability, safety, and economic viability of wind energy generation.

Similar to the first two engineering design problems, metaheuristic approaches have also been employed for efficient control of DFIG-based wind turbine systems. For example, in a study (Qouarti et al. 2023), the researchers focused on wind energy generation using DFIG, aiming to maximize power extraction from wind. Two control approaches were proposed: a combined control based on maximum power point tracking and sliding mode control, and a super twisting control based on particle swarm optimization and grey wolf optimization. Results demonstrated that the super twisting control tuned by the particle swarm optimization algorithm outperforms the other strategies in optimizing power extraction, as evidenced by comparing generator speed signals across different control scenarios. In another study (Mostafa et al. 2023), the challenge of operating a wind power system at the optimum power point, especially in the presence of uncertain wind speeds, was addressed by using a metaheuristic optimization approach called the driving training algorithm. Three maximum power point tracking scenarios were considered, and the proposed approach was shown to achieve efficient maximum power point tracking under different wind speeds compared to the water cycle algorithm and particle swarm optimizer. In Benamor et al. (2019), a control strategy called root tree optimization was introduced to address chattering phenomena, minimize harmonic currents, and improve the performance of a DFIG system. The root tree optimization was utilized to adjust the parameters of a proportional-integral controller, resulting in improved dynamic and steady-state performance. The study in Palanimuthu et al. (2022) focused on the design of a fuzzy integral sliding mode control (FISMC) for a DFIG-based wind energy system using a membership function-dependent approach. The proposed approach utilized Takagi–Sugeno fuzzy modeling to represent the nonlinear DFIG-based wind energy system as a sum of local sub-models. A suitable FISMC was designed with a reaching law condition to handle disturbances, and a fuzzy-based Lyapunov function was constructed to evaluate the system's performance. The results showed that the membership function-dependent approach improves the performance index by 10% compared to the conventional approach, and simulation results demonstrated the stability and effectiveness of the proposed approach. In another work (Izci et al. 2022), the reptile search algorithm was reported to tune a proportional-integral-derivative controller for improving the DFIG-based wind turbine system's transient performance. Comparisons with other design approaches, such as the gravitational search algorithm, bacterial foraging optimization, and particle swarm optimization, using the same controller, confirmed the enhanced efficiency and reliability of the system. In Muisyo et al. (2022), the application of a static synchronous compensator to enhance the low voltage ride-through capability of a 9 MW DFIG-based wind power plant during grid faults was investigated. The static synchronous compensator was tuned using the water cycle algorithm, particle swarm optimization, and a hybrid version of those algorithms. Simulation results showed that incorporating the static synchronous compensator with hybrid algorithm tuning effectively improves the wind power plant's low voltage ride-through capability, reducing voltage fluctuations and achieving better performance. In Ahmed et al.
(2019), the researchers proposed the use of the Harris hawks algorithm for optimizing the tuning of integral classical controllers in load frequency control applications. The study focused on a two-area interconnected power system with a DFIG-based wind turbine in one area. By applying the Harris hawks algorithm, the performance of the system was enhanced, resulting in improved frequency and tie-line power oscillation damping, particularly when the DFIG participation level is high. In the work reported in Boureguig et al. (2023), the artificial bee colony and grey wolf optimizer were used for optimal feedback linearization control in a DFIG system. Simulation results using a 1.5 MW DFIG wind turbine demonstrated reduced overshoot, settling time, and steady-state error, indicating the effectiveness of the proposed approach. Moreover, a teaching learning-based optimization algorithm-assisted fractional order controller for the control of the rotor side converter and grid side converter in a DFIG-based wind turbine system was introduced (Karad and Thakur 2022). Simulation results demonstrated that the proposed approach outperforms the genetic algorithm and particle swarm optimization-based approaches in terms of DC link voltage control, solution time, and time domain performance parameters. Lastly, a swarm moth-flame optimizer was introduced in another work for optimizing the parameters of four interacting proportional-integral loops in a DFIG-based wind turbine (Huang et al. 2019). The proposed algorithm aimed to achieve maximum power point tracking and improved fault ride-through capability. The results of three case studies demonstrated that the proposed approach outperforms existing metaheuristic techniques in terms of global convergence, optimal power tracking, and fault ride-through capability.

2.4 Improvement strategies for hunger games search algorithm

The hunger games search (HGS) algorithm (Yang et al. 2021) is one of the metaheuristic approaches that has been developed to tackle challenging optimization problems. Despite the widespread use of the HGS algorithm, critical issues, such as stagnation in local minima and premature convergence, remain to be addressed. Achieving a balance between the exploration and exploitation stages is critical to enhancing its capabilities, as these problems are caused by the randomized exploration and exploitation operators of metaheuristics, which necessitates innovative strategies to mitigate them (Luo et al. 2020; Gülcü 2022). In this regard, different enhancement strategies have been offered to improve the performance of the HGS algorithm.

For example, in (Kutlu Onay and Aydemir 2022), ten different chaotic maps were applied to the HGS algorithm. The proposed chaotic HGS algorithm was evaluated on CEC2017 and 23 classical benchmark problems, as well as real engineering problems, such as cantilever beam design, tension/compression, and speed reducer. The results showed that chaotic HGS outperformed classical HGS and other state-of-the-art algorithms in the literature, indicating its promising potential for optimization tasks. In Nguyen and Bui (2021), the researchers proposed a novel soft computing model using HGS and an artificial neural network and utilized it for predicting ground vibration intensity induced by mine blasting in the mining industry. They compared the performance of the proposed model with three other benchmark models based on different metaheuristic algorithms and found that the proposed model achieved the best results in terms of statistical criteria and, thus, can be widely applied in open-pit mines to optimize blast patterns and minimize environmental effects. In Ma et al. (2022), the researchers developed a multi-strategy HGS and its binary variant by integrating HGS with a multi-strategy framework. The proposed algorithm was applied to global optimization and its binary variant to the feature selection problem. The experimental results showed that the proposed algorithm outperforms existing techniques in terms of classification accuracy, number of selected features, fitness values, and execution time, suggesting it to be a superior optimizer and a valuable feature selection technique. In Chakraborty et al. (2022), the researchers proposed a combination of the HGS algorithm with the whale optimization algorithm. By incorporating the hunger concept from the HGS algorithm and the food searching techniques of whales, the proposed hybrid algorithm aims to address limitations such as local optima trapping and premature convergence. Statistical analyses, complexity analysis, convergence analysis, and solving real-world engineering problems were conducted to demonstrate the efficacy of the newly designed algorithm. The results confirmed the improved performance of the proposed algorithm compared to other algorithms, supporting its effectiveness in optimization tasks. In Izci and Ekinci (2022), the researchers developed a novel control method for a buck converter system. The proposed method involved the utilization of a fractional-order proportional-integral-derivative controller and the development of an improved version of the HGS algorithm. The proposed algorithm in the latter work was enhanced by incorporating the Nelder-Mead simplex method and a random learning mechanism in order to improve intensification and diversification abilities. The results confirmed the superiority of the proposed method for controlling a buck converter system in terms of performance, robustness, and effectiveness compared to the state-of-the-art approaches. In Abushanab et al. (2021), the researchers developed an artificial intelligence-based predictive model for friction stir welding of dissimilar polymeric materials. The related model combines a random vector functional link model with the HGS algorithm. Comparative analysis demonstrated that the proposed HGS algorithm-based model outperforms other models optimized with state-of-the-art optimizers. In Izci et al. (2022c), the researchers developed a novel algorithm by incorporating a logarithmic spiral opposition-based learning technique into the HGS algorithm.
The performance of the proposed algorithm was evaluated for both function optimization using benchmark functions from the CEC 2017 test suite and controller design for a magnetic ball suspension system. Comparative assessments were conducted, and the results demonstrated that the proposed algorithm outperforms other state-of-the-art methods, providing significant improvements in terms of transient response-related parameters and bandwidth for the magnetic ball suspension system. In Izci and Ekinci (2023), the researchers addressed the challenge of designing an effective controlling scheme for power converters. They proposed a novel approach by utilizing a fractional-order proportional-integral-derivative controller and a hybrid metaheuristic algorithm that combines HGS algorithm with simulated annealing. The proposed algorithm effectively tuned the controller, resulting in improved performance in terms of time and frequency domains, as well as disturbance rejection. Comparative analysis with other existing approaches further confirmed the superior performance of the proposed hybrid algorithm-based controlling scheme for a buck converter system. In Izci et al. (2022d), the researchers developed a novel algorithm by combining a modified opposition-based learning technique with the HGS algorithm. The developed algorithm was designed to tune a fractional order proportional-integral-derivative controller for a magnetic ball suspension system. The algorithm's performance was evaluated using challenging benchmark functions from the CEC 2017 test suite, and its effectiveness in controlling the magnetic ball suspension system was demonstrated through various evaluations including statistical analysis, convergence profile, transient response, frequency response, disturbance rejection, and robustness. The results confirmed the superior ability of the proposed approach for controlling the magnetic ball suspension system.

3 HGS and proposed Imp-HGS algorithms

3.1 Hunger games search algorithm

Nature is filled with examples of animals surviving through hunger-driven activities. Their motions and behavioral choices are crucial to their survival. The Hunger Games Search (HGS) algorithm captures this reality and models it as a set of game rules, making it a powerful metaheuristic optimization tool (Yang et al. 2021). The game rules simulate cooperation between animals during foraging, accounting for their reluctance to cooperate. The HGS algorithm is represented mathematically by the game rule in Eq. (1):

$${\overline{X(t+1)}}=\left\{\begin{array}{ll}{Game}_{1}:{\overline{X(t)}}\cdot \left(1+randn\left(1\right)\right),&\quad {r}_{1}<C\\ {Game}_{2}:\overline{{W }_{1}}\cdot \overline{{X }_{b}}+\overline{R }\cdot \overline{{W }_{2}}\cdot \left|\overline{{X }_{b}}-{\overline{X(t)}}\right|, &\quad {r}_{1}>C, {r}_{2}>E\\ {Game}_{3}:\overline{{W }_{1}}\cdot \overline{{X }_{b}}-\overline{R }\cdot \overline{{W }_{2}}\cdot \left|\overline{{X }_{b}}-{\overline{X(t)}}\right|, &\quad {r}_{1}>C, {r}_{2}<E\end{array}\right.$$
(1)

Here, \(randn\left(1\right)\) is a normally distributed random number, whereas \({r}_{1}\) and \({r}_{2}\) are two other random numbers within \([\mathrm{0,1}]\). \(t\) denotes the current iteration, \(\overline{{X }_{b}}\) stands for the location of the best individual, \(\overline{{W }_{1}}\) and \(\overline{{W }_{2}}\) are the hunger weights, and \(\overline{{X(t) }}\) is the location of the individuals at the current iteration.

The process of selecting between designated rules (\({Game}_{1}\), \({Game}_{2}\) and \({Game}_{3}\)) is determined by the parameter \(C\), while the \(\overline{R }\) parameter is calculated as \(\overline{R }=2\times shr\times rand-shr\) where \(shr=2\times (1-\left(t/T\right))\). The \(E\) parameter controls the variations of all positions and is defined as \(E=\mathit{sec}h(\left|F\left(i\right)-BFit\right|)\) where \(F\left(i\right)\) is the fitness value of each individual, \(BFit\) is the best fitness obtained so far, and \(i\in \mathrm{1,2},\dots ,n\). The hyperbolic function, \(\mathit{sec}h(x)=\left(2/\left({e}^{x}+{e}^{-x}\right)\right)\), is used to calculate \(E\).
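
To make the game rules concrete, the following Python sketch implements Eq. (1) together with the definitions of \(\overline{R }\) and \(E\) given above, assuming the hunger weights \(\overline{{W }_{1}}\) and \(\overline{{W }_{2}}\) of Eqs. (2)–(3) have already been computed; the function name, the treatment of the weights as scalars, and the default value of \(C\) are illustrative assumptions rather than details of the reference implementation.

```python
import numpy as np

def hgs_position_update(X, X_b, W1, W2, t, T, F_i, best_fit, C=0.03):
    """Sketch of the HGS game rules in Eq. (1) for one individual.

    X, X_b   : current position and best position found so far (1-D arrays)
    W1, W2   : hunger weights from Eqs. (2)-(3)
    F_i      : fitness of this individual; best_fit: best fitness so far
    C        : rule-selection parameter (default value is an assumption)
    """
    r1, r2 = np.random.rand(), np.random.rand()
    shr = 2.0 * (1.0 - t / T)                       # shrinking factor
    R = 2.0 * shr * np.random.rand(X.size) - shr    # R drawn within [-shr, shr]
    E = 1.0 / np.cosh(abs(F_i - best_fit))          # sech(|F(i) - BFit|)

    if r1 < C:                                      # Game_1: random self-search
        return X * (1.0 + np.random.randn())
    if r2 > E:                                      # Game_2: approach the best individual
        return W1 * X_b + R * W2 * np.abs(X_b - X)
    return W1 * X_b - R * W2 * np.abs(X_b - X)      # Game_3: move away from the best
```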

The HGS algorithm consists of two search categories that simulate self-dependent individuals and teamwork to enhance diversification. Additionally, the algorithm mimics the starvation characteristics of each individual with the following equations, which are reflected in the hunger weights given by Eq. (1).

$$\overline{{W }_{1}\left(i\right)}=\left\{\begin{array}{ll} hng\left(i\right)\cdot \frac{N}{Shng}\times {r}_{4},&\quad {r}_{3}<C\\ 1,&\quad {r}_{3}>C\end{array}\right.$$
(2)
$$\overline{{W }_{2}\left(i\right)}=(1-{e}^{-\left|hng\left(i\right)-Shng\right|})\times {r}_{5}\times 2$$
(3)

In these weights, \(Shng\) denotes the sum of the hunger values of all individuals, while \(N\) represents the number of individuals. Equations (2) and (3) utilize \({r}_{3}\), \({r}_{4}\) and \({r}_{5}\), which are distinct random numbers selected from the range \([0, 1]\). The value of \(hng\left(i\right)\) is determined by the following definition, where \(Allfit(i)\) represents the fitness of each individual in the current iteration.

$$hng\left(i\right)=\left\{\begin{array}{ll}0,&\quad Allfit(i)==BFit\\ hng\left(i\right)+H, &\quad Allfit(i)!=BFit\end{array}\right.$$
(4)

The hunger sensation, \(H\), is defined as follows, where \(TH=\left(\left(F\left(i\right)-BFit\right)/\left(WF-BFit\right)\right)\times {r}_{6}\times 2\times (UB-LB)\).

$$H=\left\{\begin{array}{ll}LH\times \left(1+r\right), &\quad TH<LH\\ TH, &\quad TH\ge LH\end{array}\right.$$
(5)

Here, \(LH\) is the lower bound of the hunger sensation \(H\), and \(WF\) is the worst fitness value obtained so far. The feature space-related upper and lower bounds are represented by \(UB\) and \(LB\), respectively. Overall, the HGS algorithm strives to optimize the selection of rules and parameters to achieve desirable outcomes. Figure 1 provides a flowchart representing the logic of the HGS algorithm during the optimization tasks.
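
As a companion to the equations above, the hunger bookkeeping of Eqs. (2)–(5) can be sketched as follows; the small epsilon terms guard against division by zero, scalar bounds are assumed for simplicity, and the default values of \(C\) and \(LH\) are illustrative assumptions.

```python
import numpy as np

def update_hunger_and_weights(fitness, hunger, best_fit, worst_fit,
                              UB, LB, C=0.03, LH=100.0):
    """Sketch of Eqs. (2)-(5): update hng(i) and compute the hunger weights.

    fitness : fitness values Allfit(i) of all N individuals
    hunger  : current hunger values hng(i), updated in place
    UB, LB  : bounds of the feature space (assumed scalar here)
    """
    N = len(fitness)
    for i in range(N):
        if fitness[i] == best_fit:
            hunger[i] = 0.0                                        # Eq. (4), best individual
        else:
            r6 = np.random.rand()
            TH = (fitness[i] - best_fit) / (worst_fit - best_fit + 1e-12) \
                 * r6 * 2.0 * (UB - LB)
            H = LH * (1.0 + np.random.rand()) if TH < LH else TH   # Eq. (5)
            hunger[i] += H                                         # Eq. (4)

    Shng = hunger.sum()
    W1, W2 = np.ones(N), np.empty(N)
    for i in range(N):
        r3, r4, r5 = np.random.rand(3)
        if r3 < C:
            W1[i] = hunger[i] * N / (Shng + 1e-12) * r4            # Eq. (2)
        W2[i] = (1.0 - np.exp(-abs(hunger[i] - Shng))) * r5 * 2.0  # Eq. (3)
    return hunger, W1, W2
```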

Fig. 1
figure 1

Logic of HGS during optimization (Yang et al. 2021)

3.2 Improved hunger games search algorithm

3.2.1 Elite opposition-based learning

The original form of the opposition-based learning (OBL) technique (Tizhoosh 2005) has been a staple among researchers looking to enhance optimization algorithms (Izci et al. 2022c). Elite OBL is a variant of OBL that considers the best and current agents to generate the opposite solutions of those agents (Izci et al. 2023b; Özmen et al. 2023). For the definition of elite OBL, \({X}^{o}=\langle {x}_{1}^{o}, {x}_{2}^{o}\dots ,{x}_{m}^{o}\rangle\) can be used by considering \(X=\langle {x}_{1}, {x}_{2}\dots ,{x}_{m}\rangle\) to be an elite candidate solution with \(m\) decision variables, where \({x}_{i}^{o}=\delta \left(d{a}_{i}+d{b}_{i}\right)-{x}_{i}\), \(\delta\) is a parameter within (0, 1), and \(d{a}_{i}\) and \(d{b}_{i}\) are the dynamic boundaries defined as \(d{a}_{i}=min\left({x}_{i}\right)\) and \(d{b}_{i}=max\left({x}_{i}\right)\), respectively. In this study, three random variables \(a\), \(b\) and \(c\), all of which are within \([0, 1]\), are adopted to redefine the elite OBL as \({x}_{i}^{o}=\delta \left(a\cdot d{a}_{i}+b\cdot d{b}_{i}\right)-c\cdot {x}_{i}\). The solution in elite OBL is kept within the boundaries by setting \({x}_{i}^{o}=rand\left(L{b}_{i}, U{b}_{i}\right)\) whenever it falls outside them, where \(L{b}_{i}\) is the lower and \(U{b}_{i}\) is the upper limit, and \(rand\left(L{b}_{i}, U{b}_{i}\right)\) is a random number within (\(L{b}_{i}, U{b}_{i}\)). Figure 2a provides a visual representation of the OBL mechanism.
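
The modified elite OBL rule described above can be sketched as follows; the vectorized form over an \(N\times m\) population, the function name, and the handling of \(\delta\) as a random draw are assumptions made for illustration.

```python
import numpy as np

def elite_obl(X, lb, ub, delta=None):
    """Sketch of the modified elite OBL rule x_i^o = delta*(a*da_i + b*db_i) - c*x_i.

    X      : elite population, shape (N, m)
    lb, ub : search-space limits (scalars or arrays of length m)
    """
    N, m = X.shape
    da = X.min(axis=0)                        # dynamic lower boundary da_i
    db = X.max(axis=0)                        # dynamic upper boundary db_i
    delta = np.random.rand() if delta is None else delta
    a, b, c = np.random.rand(3)               # random coefficients in [0, 1]
    X_opp = delta * (a * da + b * db) - c * X
    # Out-of-range components are re-drawn uniformly within (lb, ub)
    rand_fill = lb + np.random.rand(N, m) * (ub - lb)
    return np.where((X_opp < lb) | (X_opp > ub), rand_fill, X_opp)
```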

Fig. 2
figure 2

OBL and PS mechanisms

3.2.2 Pattern search algorithm

The Pattern Search (PS) algorithm is an optimization technique that can find the optimal solution without using derivatives. The algorithm starts from a point that may or may not be close to the solution. Around this point, the algorithm creates a collection of points called a mesh, which is updated as the algorithm progresses. The user defines the starting point for the search, and in the first iteration, the mesh size is set to 1. The algorithm then constructs pattern vectors (\({X}_{0}+[0\;\;1]\), \({X}_{0}+[1\;\;0]\), \({X}_{0}+[-1\;\;0]\) and \({X}_{0}+[0\;\;-1]\)) and uses them to produce new mesh points. The objective function of each new point is calculated and compared to the current best point. If a better point is found, the search point is relocated to the new point, and the mesh size is expanded. If no better point is found, the mesh size is reduced, and the algorithm continues to search. This process continues until the optimal solution or a termination condition is met. Figure 2b shows the mesh points and the search directions used in the PS mechanism.
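
A minimal sketch of the coordinate pattern search described above is given below; the expansion and contraction factors and the stopping tolerance are illustrative assumptions, since only the initial mesh size of 1 is fixed by the description.

```python
import numpy as np

def pattern_search(f, x0, mesh=1.0, max_iter=200, tol=1e-8):
    """Sketch of PS: poll x + mesh*d for the pattern vectors d, expand the
    mesh after a successful poll and shrink it otherwise."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])    # e.g. [1 0], [0 1], [-1 0], [0 -1]
    for _ in range(max_iter):
        improved = False
        for d in directions:
            cand = x + mesh * d
            fc = f(cand)
            if fc < fx:                                # better point found: relocate
                x, fx, improved = cand, fc, True
                break
        mesh *= 2.0 if improved else 0.5               # expand or reduce the mesh
        if mesh < tol:
            break
    return x, fx
```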

3.2.3 Proposed algorithm

The improved hunger games search (Imp-HGS) algorithm proposed in this paper employs the elite opposition-based learning (OBL) mechanism (Zhao et al. 2017) to increase the exploration performance of the original HGS algorithm, while the pattern search (PS) mechanism (Torczon 1997) is used to further enhance its exploitation capability. Figure 3 demonstrates the process of the proposed Imp-HGS algorithm.

Fig. 3
figure 3

Flowchart of proposed Imp-HGS algorithm

As observed from the detailed flowchart in the latter figure, the original HGS algorithm is aided by the modified elite OBL mechanism and the PS strategy. The Imp-HGS reaches further explorative performance with the modified elite OBL mechanism and further exploitative performance with the PS mechanism. The Imp-HGS begins with the original HGS algorithm, and the produced best solution is further processed with the elite OBL mechanism. At this point, \(N\) best solutions are obtained, which are then used by the PS mechanism to reach better exploitation. It is worth noting that the PS mechanism is not performed in each iteration; instead, it operates two times during the entire process and runs for \(100\times D\) iterations, where \(D\) is the dimension size of the problem. Such a processing style was decided upon after extensive simulations. Consequently, the ability of the original HGS algorithm is improved significantly with the design proposed in this work. In the proposed method, all adopted mechanisms are used with their default parameter values in order to reach a fair conclusion on the performance during the optimization tasks. It is also worth noting that the adjustable parameters, namely the population size and the number of iterations, impose certain limitations on the performance of the proposed algorithm. Choosing larger population sizes and iteration numbers can considerably increase the performance of the proposed method at the expense of a relatively higher computational load; however, beyond a certain limit there is no further increase.
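
Under the assumptions stated in the earlier sketches (scalar bounds and scalar hunger weights), and reusing `hgs_position_update`, `update_hunger_and_weights`, `elite_obl`, and `pattern_search` defined above, the overall Imp-HGS flow of Fig. 3 can be summarized as follows; the exact iterations at which the two PS refinements are triggered are not specified above and are therefore an assumption in this sketch.

```python
import numpy as np

def imp_hgs(f, lb, ub, dim, pop=30, max_iter=500):
    """High-level sketch of Imp-HGS: HGS update + elite OBL every iteration,
    plus two pattern-search refinements of the best solution during the run."""
    X = lb + np.random.rand(pop, dim) * (ub - lb)      # initial population
    fit = np.apply_along_axis(f, 1, X)
    hunger = np.zeros(pop)
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    ps_triggers = {max_iter // 2, max_iter - 1}        # assumed PS schedule
    for t in range(max_iter):
        hunger, W1, W2 = update_hunger_and_weights(fit, hunger, best_fit,
                                                   fit.max(), ub, lb)
        for i in range(pop):                           # standard HGS move, Eq. (1)
            X[i] = hgs_position_update(X[i], best, W1[i], W2[i],
                                       t, max_iter, fit[i], best_fit)
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(f, 1, X)
        # Elite OBL: keep whichever of each solution and its opposite is better
        X_opp = elite_obl(X, lb, ub)
        fit_opp = np.apply_along_axis(f, 1, X_opp)
        keep = fit_opp < fit
        X[keep], fit[keep] = X_opp[keep], fit_opp[keep]
        if fit.min() < best_fit:
            best, best_fit = X[fit.argmin()].copy(), fit.min()
        if t in ps_triggers:                           # occasional exploitation boost
            cand, cf = pattern_search(f, best, max_iter=100 * dim)
            if cf < best_fit:
                best, best_fit = cand, cf
    return best, best_fit
```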

4 Experimental results on benchmark functions

4.1 CEC2019 benchmark functions

To evaluate the performance of the Imp-HGS algorithm, challenging test functions from the CEC2019 test suite were adopted. The details of those test functions are provided in Table 1. As the CEC2019 test suite is composed of complex and highly difficult benchmark functions, it serves as a good test bed for the overall performance evaluation of the algorithms. This set of challenging functions is specifically designed to push the limits of optimization algorithms and test their robustness and performance under extreme conditions, making it a widely used benchmarking tool in the optimization community.

Table 1 Details of CEC2019 benchmark function problems

For statistical comparison, moth-flame optimization (MFO) (Mirjalili 2015a), the capuchin search algorithm (CapSA) (Braik et al. 2021), Harris hawks optimization (HHO) (Heidari et al. 2019) and the arithmetic optimization algorithm (AOA) (Abualigah et al. 2021) are used in this study. The default parameter values for the latter listed algorithms are adopted during the tests. For the optimization of the CEC2019 test suite, a maximum of 500 iterations and a population size of 30 were used, and each of the algorithms was run 30 times in order to provide a fair comparison. Table 2 lists the corresponding values of the statistical metrics of average (mean), standard deviation, best, and worst; the bold font refers to the best results in the tables. From this table, one can see that the proposed Imp-HGS algorithm demonstrated consistently superior performance compared to the other algorithms used for comparison across the CEC 2019 test functions (F1–F10). With the lowest average values for all functions, Imp-HGS exhibited a remarkable level of efficiency in optimizing the objective functions. This indicates that Imp-HGS consistently converged towards optimal solutions and achieved highly competitive results. In contrast, the other algorithms (HGS, MFO, CapSA, HHO, and AOA) displayed higher average values, suggesting comparatively less efficient performance. The consistent dominance of Imp-HGS across all functions underscores its effectiveness and highlights it as a promising choice for solving optimization problems.

Table 2 Comparative statistical results of Imp-HGS, HGS, MFO, CapSA, HHO and AOA algorithms on CEC2019 benchmark functions

In addition to achieving the lowest average values, the Imp-HGS algorithm also demonstrated notable performance in other metrics when compared to the other algorithms across the CEC 2019 test functions (F1 to F10). Firstly, the Imp-HGS algorithm consistently exhibited lower standard deviation values, indicating a higher level of stability and robustness in its optimization process. This suggests that Imp-HGS consistently generated solutions that were closer to each other, resulting in a more reliable and predictable optimization outcome.

Furthermore, the Imp-HGS algorithm consistently achieved competitive results in terms of the best and worst values. While it may not always achieve the absolute best value for a given function, it consistently performed well and approached the optimal solutions. On the other hand, the worst values obtained by Imp-HGS were consistently better than or comparable to the other algorithms, indicating that even in worst-case scenarios, Imp-HGS produced solutions that were relatively close to the optimal or acceptable range.

Taken together, the Imp-HGS algorithm's performance in terms of standard deviation, best, and worst values further reinforces its efficacy and reliability. Its ability to consistently provide low standard deviation values, competitive best results, and satisfactory worst results indicates its capability to explore and exploit the search space effectively, leading to a robust and efficient optimization process. In addition, the convergence curves provided in Fig. 4 further support the claim that the proposed Imp-HGS algorithm is efficient and can reach better solutions.

Fig. 4
figure 4

Comparative convergence curves obtained for CEC2019 benchmark functions

4.2 CEC2020 benchmark functions

To further demonstrate the performance of the proposed Imp-HGS algorithm, benchmark functions from the CEC2020 test suite have also been adopted for this study. This test suite is used as a challenging platform for the performance evaluation of optimization algorithms. Table 3 lists the details of the CEC2020 benchmark functions (F1_CEC2020 to F10_CEC2020).

Table 3 Details of CEC2020 benchmark function problems

The evaluation of the Imp-HGS algorithm on the CEC2020 benchmark functions was performed to provide insight into the contributions of each technique (elite OBL and PS) adopted in the Imp-HGS algorithm. Besides, a hybrid HGS algorithm with differential evolution, chaotic local search, and evolutionary population dynamics techniques (DECEHGS) (Li et al. 2021) and a quantum Nelder-Mead HGS (IHGS) algorithm (Xu et al. 2022) are used in this study as different reported variants of the HGS algorithm. Furthermore, the state-of-the-art algorithms of the Runge–Kutta optimizer (RUN) (Ahmadianfar et al. 2021), the dwarf mongoose optimization (DMO) algorithm (Agushaka et al. 2022), the gazelle optimization algorithm (GOA) (Agushaka et al. 2022b), and the prairie dog optimization (PDO) algorithm (Ezugwu et al. 2022) are also used to test the performance of the proposed Imp-HGS algorithm from a wider perspective. The results listed in Table 4 are obtained with 30 independent runs using a population size of 50 and a maximum of 1000 iterations for a fair comparison.

Table 4 Statistical results of CEC-2020 test functions

As demonstrated in Table 4, the proposed Imp-HGS algorithm has the best statistical performance on these test functions when both the elite OBL and PS mechanisms are integrated. When considering the average performance, Imp-HGS consistently achieves competitive results. Across the different test functions (F1–F10), the average values obtained by Imp-HGS are comparable to or better than those of the other algorithms. This suggests that Imp-HGS demonstrates robustness and effectiveness in finding good solutions for a wide range of optimization problems.

Similarly, when examining the best and worst solutions found, the Imp-HGS algorithm consistently performs well. It is able to find high-quality solutions, as evidenced by the best solution values obtained, which are comparable to or better than other algorithms. Additionally, the worst solution values obtained by Imp-HGS are generally lower than those of other algorithms, indicating a more reliable and stable performance. In terms of standard deviation, which measures the spread or variability of solutions, the Imp-HGS algorithm demonstrates favorable results. The standard deviation values obtained by Imp-HGS are generally lower or comparable to other algorithms. A lower standard deviation indicates that the algorithm produces more consistent and reliable results.

Overall, the better performance of the Imp-HGS algorithm in the CEC 2020 test suite can be attributed to its ability to effectively explore the solution space, find high-quality solutions, and maintain stability and consistency in its optimization process. The algorithm's balance between exploration and exploitation enables it to efficiently navigate complex landscapes and converge to promising solutions.

5 Experimental results on IIR model identification

5.1 IIR filter design

System identification refers to the mathematical representation of an unknown system by considering input and output data. An optimization algorithm is used to minimize an error function (between the candidate model's output and the actual plant's output) in order to obtain an optimal model for the unknown plant. On the other hand, fewer model parameters can be used via infinite impulse response models to meet the performance specifications and produce a more accurate representation of physical plants for real-world applications (Mohammadi et al. 2022). An arbitrary system's infinite impulse response identification model is illustrated in Fig. 5, where \(y(k)\) and \(d(k)\) respectively represent the output of the infinite impulse response filter and the unknown plant. Here, \(x(k)\) stands for the applied input signal, whereas \(m\) and \(n\) are respectively the orders of the numerator and denominator that are described in the following subsection.

Fig. 5
figure 5

Detailed structure of IIR filter

5.2 Experimental setup and IIR model identification results

The following form represents the transfer function of an infinite impulse response (IIR) system considering the details provided in the previous subsection.

$$\frac{Y(z)}{X(z)}=\frac{{b}_{0}+{b}_{1}{z}^{-1}+{b}_{2}{z}^{-2}+\dots +{b}_{m}{z}^{-m}}{1+{a}_{1}{z}^{-1}+{a}_{2}{z}^{-2}+\dots +{a}_{n}{z}^{-n}}$$
(6)

Here, the pole and zero parameters of the infinite impulse response model are denoted by \({a}_{i}\) and \({b}_{j}\), where \(i=1, 2,\dots , n\) and \(j=0, 1,\dots , m\). The difference equation form of the transfer function can be written as follows, where \(x\left(k\right)\) and \(y\left(k\right)\) represent the input and the output of the filter, respectively.

$$y\left(k\right)+\sum_{i=1}^{n}{a}_{i}\cdot y\left(k-i\right)=\sum_{j=0}^{m}{b}_{j}\cdot x\left(k-j\right)$$
(7)

Figure 6 demonstrates the block diagram of an adaptive IIR system identification system designed via the Imp-HGS algorithm. Here, \(e(k)\) denotes the error between the model and the actual plant (\(e(k)=d(k)-y(k)\)), which can be used for representing the IIR model identification problem as a minimization problem using mean squared error (MSE) given in the following definition where \(W\) is the number of samples employed in the simulation.

Fig. 6
figure 6

Imp-HGS algorithm-based IIR system identification problem

$$f(\theta )=\frac{1}{W}\sum_{k=1}^{W}{\left(d(k)-y(k)\right)}^{2}$$
(8)
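
With the model of Eqs. (6)–(7) and the cost of Eq. (8), the objective minimized by Imp-HGS can be written compactly with a standard digital filter routine; the packing order of the coefficient vector below is an assumption made for illustration.

```python
import numpy as np
from scipy.signal import lfilter

def iir_mse(theta, x, d, n, m):
    """MSE fitness of Eq. (8) for the adaptive IIR model of Eqs. (6)-(7).

    theta : candidate coefficients, packed as [a_1..a_n, b_0..b_m] (assumed order)
    x, d  : input signal and output of the unknown plant over W samples
    """
    a = np.concatenate(([1.0], theta[:n]))    # 1 + a_1 z^-1 + ... + a_n z^-n
    b = theta[n:n + m + 1]                    # b_0 + b_1 z^-1 + ... + b_m z^-m
    y = lfilter(b, a, x)                      # adaptive filter output y(k), Eq. (7)
    return np.mean((d - y) ** 2)              # f(theta) in Eq. (8)
```

In the experiments below, such a function plays the role of the objective \(f(\theta )\) that the Imp-HGS algorithm minimizes over the filter coefficients.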

This work uses four benchmark examples of IIR system identification, presented in Table 5. In the experiments, the algorithm is run 30 times with a maximum of \(1000\) iterations (\({t}_{max}\)) and a population size of \(30\). The benchmark examples' statistical results are compared with the available literature. In this context, cat swarm optimization (CSO) (Panda et al. 2011), the genetic algorithm (GA) (Panda et al. 2011), the differential evolution (DE) algorithm (Yang et al. 2018) and the opposition-based hybrid coral reefs optimization (OHCRO) algorithm (Yang et al. 2018) are used in this study alongside the original HGS algorithm. The results are presented in Table 6.

Table 5 Four digital IIR system identification problems
Table 6 The statistical results obtained from the IIR system identification benchmark examples

The statistical results in Table 6 demonstrate the performance of various algorithms, including Imp-HGS, HGS, CSO, GA, DE, and OHCRO. In terms of the average values, Imp-HGS consistently outperformed HGS, CSO, GA, DE, and OHCRO across all the benchmark examples. This suggests that Imp-HGS was able to achieve lower average errors or losses in the system identification process compared to the other algorithms. Furthermore, when considering the standard deviation, Imp-HGS exhibited lower values than HGS, CSO, GA, DE, and OHCRO. A lower standard deviation indicates that Imp-HGS consistently produced more stable and reliable results. It indicates that the optimization process of Imp-HGS generated solutions that were closer to each other, leading to a higher level of consistency in performance. Examining the best values obtained, Imp-HGS achieved either the best or highly competitive results among the algorithms for the majority of the benchmark examples. This indicates that Imp-HGS was successful in finding optimal or near-optimal solutions in the system identification process. Regarding the worst values, Imp-HGS consistently outperformed HGS, GA, DE, and OHCRO, while being comparable to CSO. This suggests that even in worst-case scenarios, Imp-HGS produced solutions that were relatively close to the optimal or acceptable range, outperforming several other algorithms in this regard.

Taken together, the statistical results demonstrate the significance of Imp-HGS in the IIR system identification benchmark examples. It consistently achieved lower average errors, lower standard deviations, competitive best results, and satisfactory worst results compared to the other algorithms. This indicates that Imp-HGS offers a more reliable, robust, and efficient optimization approach for system identification tasks in comparison to the alternative algorithms.

In terms of computational time, the original version of the HGS algorithm achieved 1.0218 s and 0.7420 s for the same and reduced order cases of Example I, respectively. It achieved 1.4772 s and 1.3327 s for the same and reduced order cases of Example II; 1.8541 s and 1.6338 s for the same and reduced order cases of Example III; and, lastly, 2.3912 s and 2.1765 s for the same and reduced order cases of Example IV, respectively. In the case of the proposed Imp-HGS algorithm, those values were found to be 1.0998 s, 0.7938 s, 1.5717 s, 1.4240 s, 1.9677 s, 1.7386 s, 2.5351 s and 2.3073 s, respectively. As seen, although different mechanisms are embedded within the Imp-HGS algorithm, there is no significant difference between the computational times, signifying the efficacy of the proposed Imp-HGS algorithm in terms of computational complexity for the IIR filter design as well.

6 Experimental results on multilayer perceptron training

6.1 Multilayer perceptron

A multilayer perceptron (MLP) is a feed-forward neural network where neurons are arranged in a one-directional manner. The network is structured with parallel layers, including the input, hidden, and output layers, where data transition occurs. As shown in Fig. 7, the input layer consists of \(n\) nodes, the hidden layer has \(h\) nodes, and the output layer has \(m\) nodes. The output of the MLP is computed in two steps. Firstly, the weighted sums are calculated using \({s}_{j}=\sum_{i=1}^{n}({W}_{ij}{X}_{i})-{\theta }_{j}\) where \(j=\mathrm{1,2},\dots ,h\), \({W}_{ij}\) represents the connection weight from the \({i}^{th}\) node of the input layer to the \({j}^{th}\) node of the hidden layer, \({X}_{i}\) denotes the \({i}^{th}\) input, and \({\theta }_{j}\) is the bias of the \({j}^{th}\) hidden node. Next, each hidden node's output is determined via the sigmoid function, which is computed as \({S}_{j}=sigmoid({s}_{j})=1/\left(1+{e}^{-{s}_{j}}\right)\) where \(j=\mathrm{1,2},\dots ,h\) and \({s}_{j}\) is the weighted sum calculated in the previous step for the \({j}^{th}\) hidden node. Once the output of the hidden nodes is determined, the final output is computed using \({o}_{k}=\sum_{j=1}^{h}({\omega }_{jk}{S}_{j})-{\theta }_{k}{\prime}\) and \({O}_{k}=sigmoid({o}_{k})=1/\left(1+{e}^{-{o}_{k}}\right)\) where \(k=\mathrm{1,2},\dots ,m\). The connection weight from the \({j}^{th}\) hidden node to the \({k}^{th}\) output node is denoted as \({\omega }_{jk}\). In order to obtain the desired outputs for defined inputs, it is crucial to train the MLP by finding optimal values for the biases and connection weights. The quality of the MLP's final output is dependent on these factors.
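
The two-step forward computation described above translates directly into code; the matrix shapes below follow the notation of this subsection, while the function name is an assumption.

```python
import numpy as np

def mlp_forward(X, W, theta, omega, theta_out):
    """Forward pass of the MLP described above.

    X         : input vector of length n
    W         : (n x h) input-to-hidden weights W_ij
    theta     : hidden-layer biases theta_j, length h
    omega     : (h x m) hidden-to-output weights omega_jk
    theta_out : output-layer biases theta'_k, length m
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    S = sigmoid(X @ W - theta)            # s_j = sum_i W_ij X_i - theta_j, then S_j
    O = sigmoid(S @ omega - theta_out)    # o_k = sum_j omega_jk S_j - theta'_k, then O_k
    return O
```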

Fig. 7
figure 7

Structure of MLP neural network

6.2 Experimental setup and analysis of results on classification datasets

The implementation of the proposed Imp-HGS algorithm for MLP training is illustrated in Fig. 8. The classification datasets listed in Table 7 are used to evaluate the performance of the Imp-HGS algorithm for MLP training. The population size and maximum iteration number are selected to be 200 and 250, respectively, and the algorithms are run 30 times. The datasets are classified, and the comparative statistical results are obtained using other recent approaches in the literature. In this regard, atom search optimization (ASO) (Eker et al. 2021), the dragonfly algorithm (DA) (Gülcü 2022), bat optimization (BAT) (Gülcü 2022) and the grey wolf optimizer (GWO) (Mirjalili 2015b) are used alongside the original HGS algorithm in this paper for classification comparisons.

Fig. 8
figure 8

Imp-HGS algorithm-based MLP trainer

Table 7 Properties of the used datasets
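
When Imp-HGS (or any of the compared algorithms) trains the MLP, the decision vector encodes all weights and biases and the fitness is the training error. The sketch below illustrates one such encoding, reusing the `mlp_forward` sketch given earlier; the packing order and the mean-squared-error objective are assumptions about the setup rather than details taken from the study.

```python
import numpy as np

def mlp_fitness(theta, X_train, y_train, n, h, m):
    """MSE of an MLP whose weights and biases are unpacked from theta.

    theta packs, in order: W (n*h values), hidden biases (h), omega (h*m),
    output biases (m), so the search-space dimension is n*h + h + h*m + m.
    """
    i = 0
    W = theta[i:i + n * h].reshape(n, h); i += n * h        # input-to-hidden weights
    b_hidden = theta[i:i + h];            i += h            # hidden biases
    omega = theta[i:i + h * m].reshape(h, m); i += h * m    # hidden-to-output weights
    b_out = theta[i:i + m]                                  # output biases
    preds = np.array([mlp_forward(x, W, b_hidden, omega, b_out) for x in X_train])
    return np.mean((preds - y_train) ** 2)
```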

The statistical results obtained from those datasets are displayed in Table 8. Analyzing the average values, it can be observed that Imp-HGS achieved better results across all datasets, as it obtained significantly lower average values compared to the HGS, ASO, DA, BAT, and GWO algorithms. Considering the standard deviations, Imp-HGS demonstrated consistent and stable results across the datasets. In the Iris dataset, Imp-HGS achieved a lower standard deviation compared to HGS, ASO, DA, BAT, and GWO, indicating more reliable and consistent optimization outcomes. Similarly, in the XOR dataset, Imp-HGS exhibited a lower standard deviation than HGS, ASO, BAT, and GWO, suggesting a higher level of stability in its performance. In the Heart dataset, Imp-HGS obtained a competitive standard deviation compared to HGS, ASO, and BAT, while outperforming GWO. In summary, based on the statistical results, Imp-HGS demonstrates promising performance on the employed datasets. It achieved lower average values and exhibited more stable results, as indicated by lower standard deviations, in comparison to HGS, ASO, DA, BAT, and GWO in various datasets. Therefore, the results indicate the better performance of the Imp-HGS algorithm for MLP training.

Table 8 Statistical results obtained from different data sets

Table 9 provides the average classification rates obtained using the different algorithms. This table provides insights into the performance of the Imp-HGS, HGS, ASO, DA, BAT, and GWO algorithms for different datasets. By analyzing the results, we can observe that the proposed Imp-HGS algorithm consistently demonstrates better performance compared to the other algorithms in several datasets. In the Iris dataset, the Imp-HGS algorithm achieved an impressive average classification rate of 98.85%, outperforming HGS, ASO, DA, BAT, and GWO. This suggests that the Imp-HGS algorithm is effective in accurately classifying Iris flowers. For the XOR dataset, the Imp-HGS algorithm achieved a perfect classification rate of 100%, demonstrating its ability to accurately solve the XOR problem. In contrast, the other algorithms had lower classification rates, indicating the superiority of Imp-HGS in this particular dataset. In the Breast Cancer dataset, the Imp-HGS algorithm achieved a perfect average classification rate of 100%, surpassing ASO, DA, BAT, and GWO. This indicates that Imp-HGS is highly effective in classifying breast cancer cases, potentially aiding in accurate diagnosis. Similarly, in the Balloon dataset, the Imp-HGS algorithm achieved a perfect average classification rate of 100%, outperforming all other algorithms. This suggests that Imp-HGS is well-suited for accurately classifying instances in the Balloon dataset. In the Heart dataset, although the Imp-HGS algorithm obtained a lower average classification rate of 94.15%, it still outperformed HGS, ASO, DA, BAT, and GWO, indicating its effectiveness in heart disease classification. Although the Wine dataset does not provide classification rates for ASO and GWO, Imp-HGS achieved a higher average classification rate (96%) compared to HGS, indicating its superior performance in wine classification. In the Thyroid dataset, the Imp-HGS algorithm achieved an average classification rate of 86.35%, surpassing HGS, DA and BAT. This suggests that Imp-HGS is effective in accurately classifying thyroid instances. Overall, the Imp-HGS algorithm consistently demonstrates better performance in terms of average classification rates across various datasets, showcasing its effectiveness in solving classification problems. Its superior performance can be attributed to its optimization capabilities and ability to find optimal solutions for the classification tasks at hand. Therefore, the proposed Imp-HGS algorithm also serves as a good tool for MLP training purposes.

Table 9 Average classification rate (%)

In terms of computational time, the original HGS algorithm required 826.8312 s, 2.2250 s, 2734.0174 s, 34.4750 s, 983.1200 s, 936.2310 s, and 912.4155 s for the Iris, XOR, Breast Cancer, Balloon, Heart, Wine, and Thyroid datasets, respectively. The corresponding values for the proposed Imp-HGS algorithm were 892.1974 s, 2.5813 s, 3072.2021 s, 37.5413 s, 1102.3558 s, 1068.3757 s, and 1029.6366 s. As seen, although additional mechanisms are embedded within the Imp-HGS algorithm, there is no significant difference between the computational times, which also confirms the efficiency of the proposed Imp-HGS algorithm, in terms of computational complexity, for MLP training.
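To make the evaluation procedure concrete, the following minimal sketch illustrates how a flat candidate vector produced by a metaheuristic such as Imp-HGS can be mapped to the weights and biases of a single-hidden-layer MLP and scored. The topology, sigmoid activation, weight encoding, and one-hot label format are illustrative assumptions rather than the exact configuration used in the experiments.

```python
# Illustrative sketch: scoring one candidate solution for MLP training.
# The encoding and network topology are assumptions, not the paper's exact setup.
import numpy as np

def decode(candidate, n_in, n_hidden, n_out):
    """Unpack a flat candidate vector into MLP weight matrices and bias vectors."""
    i = 0
    W1 = candidate[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = candidate[i:i + n_hidden];                                  i += n_hidden
    W2 = candidate[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = candidate[i:i + n_out]
    return W1, b1, W2, b2

def forward(X, W1, b1, W2, b2):
    """Single-hidden-layer MLP with sigmoid activations."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

def evaluate(candidate, X, y_onehot, n_hidden):
    """Return (MSE, classification rate in %) for one candidate weight vector.
    The optimizer minimizes the MSE; the classification rate is what Table 9 reports."""
    n_in, n_out = X.shape[1], y_onehot.shape[1]
    out = forward(X, *decode(candidate, n_in, n_hidden, n_out))
    mse = np.mean((out - y_onehot) ** 2)
    rate = 100.0 * np.mean(out.argmax(axis=1) == y_onehot.argmax(axis=1))
    return mse, rate
```

Under such an encoding, the averages and standard deviations in Table 8 would correspond to the final error values collected over the independent runs, while Table 9 reports the resulting classification rates.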

7 Experimental results on controller design

7.1 Doubly fed induction generator-based wind turbine system

Figure 9 shows a simplified illustration of the doubly fed induction generator (DFIG)-based wind turbine system, which has become a prominent configuration in renewable energy. Comprising the wind turbine, drive train, induction generator, AC/DC/AC converters, and power transformer, the system harnesses wind energy in two stages: wind power is first converted into mechanical power, which is then converted into electrical power (Nasef et al. 2022).

Fig. 9 DFIG-based wind turbine system

The DFIG-based wind turbine system provides a versatile solution for harnessing wind energy. As shown in Fig. 9, it comprises a wind turbine, drive train, induction generator, AC/DC/AC converters, and a power transformer. Wind power is converted into mechanical power, which is subsequently converted into electrical power. To achieve variable-speed operation, the rotor circuitry of the DFIG is electrically controlled, and two operating modes exist: (i) the rotor speed is greater than the synchronous speed, and (ii) the opposite is true. In mode (i), super-synchronous operation occurs, in which both the rotor and stator windings deliver electrical power to the grid. In contrast, in mode (ii), sub-synchronous operation occurs, in which the stator winding supplies power to both the grid and the rotor winding (Bounar et al. 2019).

7.2 Experimental setup and analysis of results on PID controller design

The optimal controller design for a doubly fed induction generator-based wind energy conversion system was carried out using several optimization algorithms. In particular, the Imp-HGS algorithm has been evaluated alongside the reptile search algorithm (RSA) (Izci et al. 2022), gravitational search algorithm (GSA) (Bharti et al. 2021), particle swarm optimization (PSO) (Bharti et al. 2021), and bacterial foraging optimization (BFO) (Bharti et al. 2021). These algorithms were applied to the sixth-order transfer function model presented in Eq. (9) to study the transient behavior of the system.

$$G\left(s\right)=\frac{0.000324{s}^{6}-1.75{s}^{5}-2366{s}^{4}+7.9\times {10}^{6}{s}^{3}+7.5\times {10}^{9}{s}^{2}+5\times {10}^{12}s+2.18\times {10}^{14}}{{s}^{6}+2340{s}^{5}+8.67\times {10}^{6}{s}^{4}+4.79\times {10}^{9}{s}^{3}+2.7\times {10}^{12}{s}^{2}+1.27\times {10}^{14}s+9.6\times {10}^{14}}$$
(9)

A detailed discussion of this transfer function model can be found in the work of Ko et al. (2008). The step response of this model yields a percent overshoot \(\%OS=0\), a rise time \({t}_{r}=0.2359\) s, a settling time \({t}_{s}=0.4197\) s, and a peak time \({t}_{p}=0.7780\) s. However, in this study, we aimed to further enhance these values by applying the proposed Imp-HGS algorithm-based PID controller. The detailed implementation procedure of the Imp-HGS algorithm-based PID controller design for the DFIG-based wind turbine system is provided in Fig. 10.
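As a quick cross-check, the open-loop specifications quoted above can be reproduced numerically from Eq. (9). The sketch below assumes the third-party python-control package and is provided only as an illustration of how the plant model and its step-response metrics may be obtained.

```python
# Sketch (assuming the python-control package): open-loop step metrics of Eq. (9).
import control

num = [0.000324, -1.75, -2366, 7.9e6, 7.5e9, 5e12, 2.18e14]
den = [1, 2340, 8.67e6, 4.79e9, 2.7e12, 1.27e14, 9.6e14]
G = control.tf(num, den)      # DFIG-based wind turbine plant model, Eq. (9)

info = control.step_info(G)   # rise time, settling time, overshoot, peak time, ...
print(info["RiseTime"], info["SettlingTime"], info["Overshoot"], info["PeakTime"])
```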

Fig. 10 Imp-HGS algorithm-based PID controller design

The PID controller has the form given in Eq. (10), where the proportional, integral, and derivative gains are denoted by \({K}_{p}\), \({K}_{i}\), and \({K}_{d}\), respectively (Ekinci et al. 2022).

$$C\left(s\right)={K}_{p}+\frac{{K}_{i}}{s}+{K}_{d}s$$
(10)

To optimize the gains in Eq. (10) efficiently, the \({F}_{ZLG}\) cost function given in Eq. (11) has been utilized in this work (Ekinci et al. 2022).

$${F}_{ZLG}=\left(1-{e}^{-\rho }\right)\left(\frac{\%OS}{100}+{e}_{ss}\right)+{e}^{-\rho }({t}_{s}-{t}_{r})$$
(11)

Here, the balancing coefficient was set to \(\rho =1\), as this was determined to be the most suitable value. For the optimization task, the gain boundaries were set as \({10}^{-1}\le {K}_{p}\le 20\), \(1\le {K}_{i}\le 250\), and \({10}^{-3}\le {K}_{d}\le 1\). The total iteration number was set to 50 with a population size of 30, and each algorithm was run 30 times to obtain the results. Table 10 presents the closed-loop transfer functions obtained with the different algorithms. These transfer functions were formed from the gain parameters found by each algorithm (see Table 11 for the values) by implementing the procedure explained in Fig. 10 for each approach. The transient response specifications reported in Table 11 and the step responses shown in Fig. 11 illustrate the superior performance of the proposed method.
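For clarity, a minimal sketch of how the cost in Eq. (11) can be evaluated for a single candidate gain set is given below, again assuming the python-control package and the plant G built from Eq. (9). The function name and the steady-state error handling are illustrative assumptions; the optimizer simply minimizes this scalar over the stated gain bounds.

```python
# Sketch: F_ZLG cost of Eq. (11) for one candidate (Kp, Ki, Kd), assuming python-control.
import numpy as np
import control

def f_zlg(gains, G, rho=1.0):
    Kp, Ki, Kd = gains
    C = control.tf([Kd, Kp, Ki], [1, 0])        # PID controller of Eq. (10)
    T = control.feedback(C * G, 1)              # unity-feedback closed loop
    info = control.step_info(T)
    e_ss = abs(1.0 - info["SteadyStateValue"])  # steady-state error for a unit step
    return ((1 - np.exp(-rho)) * (info["Overshoot"] / 100.0 + e_ss)
            + np.exp(-rho) * (info["SettlingTime"] - info["RiseTime"]))
```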

Table 10 Closed-loop transfer functions obtained for different algorithms
Table 11 Controller parameters obtained from performing different algorithms and the obtained transient response specifications
Fig. 11 Step response obtained from different approaches

Comparing the results of the Imp-HGS algorithm with the HGS, RSA, GSA, BFO, and PSO algorithms, it is evident that the Imp-HGS algorithm performs better in several respects. Regarding the percent overshoot, the Imp-HGS algorithm achieves a value of 0%, indicating that it effectively eliminates overshoot in the system response, whereas HGS shows a percent overshoot of 0.4898 and the RSA, GSA, BFO, and PSO algorithms have even higher values. This suggests that the Imp-HGS algorithm provides a more stable response with minimal oscillation, ensuring better control of the wind turbine system. In terms of rise time, settling time, and peak time, the Imp-HGS algorithm achieves superior performance by attaining values of 0 for the rise and peak times and the lowest settling time (0.1131 s). This implies that the Imp-HGS algorithm provides the fastest response, with no significant delays or deviations from the desired output. In contrast, the other algorithms (HGS, RSA, GSA, BFO, and PSO) exhibit non-zero peak times and higher settling times, indicating comparatively slower responses.

The better performance of the Imp-HGS algorithm in terms of these transient response specifications can be attributed to its optimization capabilities. The algorithm fine-tunes the PID controller parameters efficiently, ensuring optimal system performance and minimizing transient effects such as overshoot, rise time, settling time, and peak time. Overall, the Imp-HGS algorithm demonstrates better control over the wind turbine system's transient response than the other algorithms considered. Its ability to eliminate overshoot and to achieve a faster rise time, a shorter settling time, and a minimal peak time demonstrates its effectiveness in achieving stable and efficient operation of the wind turbine system.

In addition, the original HGS algorithm required a computational time of 25.7423 s, while the proposed Imp-HGS algorithm required 28.5960 s for the PID design. As in the other engineering design cases, although additional mechanisms are embedded within the Imp-HGS algorithm, there is no significant difference between the computational times, which also confirms the efficiency of the proposed Imp-HGS algorithm, in terms of computational complexity, for the PID design.

8 Conclusion and future works

Based on the findings of this study, it can be concluded that the Imp-HGS algorithm represents a significant advancement in the field of metaheuristic optimization. By integrating the PS and elite OBL mechanisms, the algorithm addresses the limitations of the original HGS algorithm and demonstrates superior performance on complex optimization problems with a non-differentiable nature and many decision variables. The empirical evaluation using the CEC2019 and CEC2020 test suites, as well as diverse engineering problems such as IIR model identification, MLP training, and PID controller design for a DFIG-based wind turbine system, consistently showcases the algorithm's effectiveness and its outperformance of state-of-the-art algorithms. The contributions of this work are twofold. First, the introduction of the Imp-HGS algorithm provides a valuable tool for various engineering applications, surpassing the limitations of traditional optimization approaches and demonstrating improved performance over state-of-the-art approaches reported in the literature. Second, the extensive experiments conducted on diverse engineering problems provide empirical evidence of the algorithm's capabilities, highlighting its potential to improve optimization and decision-making processes.

Looking towards the future, further research can be conducted to enhance the algorithm's performance and adaptability. This may involve investigating additional pattern search strategies and intelligent learning mechanisms, as well as exploring hybridization with other metaheuristic techniques. Applications to further real-world problems and benchmarking against other existing optimization methods can provide deeper insight into the algorithm's practical applicability and identify areas for improvement. Additionally, comprehensive parameter tuning, sensitivity analysis, and scalability studies are essential for unlocking the algorithm's full potential. In conclusion, the Imp-HGS algorithm presents a promising and valuable solution for a wide range of complex engineering optimization problems. Its enhanced performance, demonstrated through the empirical evaluations, positions it as a significant advancement in the field of metaheuristic optimization. Further research and practical applications are encouraged to fully explore the algorithm's capabilities and to contribute to the ongoing development of efficient optimization techniques.