Abstract
In recent decades, the demand for optimization techniques has grown due to the rising complexity of real-world problems. Hence, this work introduces the Hyperbolic Sine Optimizer (HSO), an innovative metaheuristic specifically designed for scientific optimization. Unlike conventional approaches, HSO engages individual members of the population, ensuring a comprehensive exploration of solution spaces. Employing distinctive exploration and exploitation phases, coupled with hyperbolic \(sinh\) function convergence, the optimizer enhances speed, simplifies parameter adjustment, alleviates slow convergence, and demonstrates efficiency in high-dimensional optimization. This approach is designed to tackle optimization challenges and enhance adaptability in unpredictable real-world scenarios. The evaluation of HSO's performance unfolds through four distinct testing phases. Initially, a set of 65 widely recognized benchmark functions is employed. These functions cover both unimodal and multi-modal varieties across dimensions of 30, 100, 500, and 1000, including fixed-dimensional functions, to comprehensively assess the exploration, exploitation, local optima avoidance, and convergence capabilities of the proposed algorithm. The results of the HSO algorithm are then compared with those of 15 state-of-the-art metaheuristic algorithms and 8 recently published algorithms. Secondly, HSO's performance is assessed against the benchmark suites from the Institute of Electrical and Electronics Engineers (IEEE) Congress on Evolutionary Computation (CEC): 15 benchmark functions from CEC-2015 and a further 30 from CEC-2017. During the third phase, HSO tackles seven real-world classical engineering design problems by addressing both the constrained and unconstrained optimization challenges of IEEE CEC-2020. Finally, HSO undertakes training of a multilayer perceptron using four distinct datasets.
To qualitatively assess HSO's performance, two statistical analyses, the Friedman and t-tests, are employed. The findings showcase HSO's adaptability and effectiveness as a high-performing optimizer for engineering optimization challenges. Note that the source code of the HSO algorithm is publicly accessible via https://github.com/Shivankur07/Hyperbolic-Sine-Optimizer.git.
1 Introduction
The rapid progress of science and technology over the past decade has elevated the complexity of real-world optimization challenges, driving the creation of efficient and effective optimization algorithms. In recent times, addressing optimization challenges has emerged as a compelling and dynamic subject across various research domains, and a rapidly expanding range of decision-making problems can be characterized as instances of optimization problems. The first step of optimization involves creating an objective function that can be either maximized or minimized. After formulating the optimization problem, an optimization algorithm must then be employed to search for the optimal variables leading to the most favourable solution. Real-world optimization challenges are expressed in mathematical terms as outlined below:
\(\underset{X\in {R}^{n}}{\mathrm{min}}\ f\left(X\right)\quad \mathrm{subject\ to}\quad g\left(X\right)\le 0\)

where \(X=\{{x}_{1}, {x}_{2}, \dots , {x}_{n} \}\) is an \(n\)-dimensional solution, \({R}^{n}\) is the domain of definition, \(f\left(X\right)\) is the objective function, and \(g\left(X\right)\) is the constraint function.
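In code, such a constrained problem amounts to an objective function plus a constraint check. The sketch below is a minimal illustration; the sphere objective and the simple sum constraint are placeholders chosen for this example, not functions from the paper:

```python
import numpy as np

def objective(x: np.ndarray) -> float:
    """f(X): the quantity to minimize (sphere function as a placeholder)."""
    return float(np.sum(x ** 2))

def constraint(x: np.ndarray) -> float:
    """g(X) <= 0 must hold for a feasible solution (example: x1 + ... + xn <= 10)."""
    return float(np.sum(x) - 10.0)

def is_feasible(x: np.ndarray) -> bool:
    return constraint(x) <= 0.0

x = np.array([1.0, 2.0, 3.0])
print(objective(x), is_feasible(x))  # 14.0 True
```

An optimizer then searches the domain \(R^n\) for the feasible point with the smallest objective value.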
Generally, optimization challenges can be addressed through deterministic or stochastic approaches. Deterministic methods such as linear and non-linear programming, which rely on gradient information, are commonly used to discover optimal solutions. However, these traditional deterministic approaches may converge to local optima. In order to address the constraints associated with conventional methods, stochastic approaches like meta-heuristic algorithms can be applied to tackle complex real-world optimization problems. Therefore, metaheuristic algorithms have gained significant popularity and are extensively employed to address practical optimization challenges across various domains, including cloud computing [1], scheduling [2], neural networks [3], feature selection [4], image segmentation [5], fuzzy control [6], photovoltaic models [7], civil engineering [8], reliability-based design [9], and more.
The primary advantages of meta-heuristic algorithms include their simplicity, flexibility, capacity to evade local optima, and mechanisms that operate without relying on derivatives. The search process of meta-heuristic algorithms involves two distinct phases: exploration and exploitation. During the exploration stage, the algorithm extensively covers the search space to identify promising regions that may lead to the optimal solution. Inadequate exploration may result in being trapped in local optima. The exploitation phase concentrates on searching around the promising regions identified during exploration. Failure to execute effective exploitation can substantially diminish solution accuracy. Striking a balance between exploration and exploitation stands as a significant challenge for meta-heuristic approaches. An effective meta-heuristic algorithm is characterized by: (i) an optimal balance between exploration and exploitation; (ii) a high level of accuracy; (iii) convergence towards the optimal solution while avoiding local optima; and (iv) consistent and stable performance, ensuring minimal variation in results across independent runs.
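The interplay of the two phases can be pictured as a generic population loop. The sketch below is a schematic of a typical metaheuristic, not the HSO update rules: early iterations weight global random moves (exploration), later iterations pull agents towards the current best (exploitation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Simple test objective: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def generic_metaheuristic(f, lb, ub, dim=5, pop=20, iters=100):
    X = rng.uniform(lb, ub, size=(pop, dim))    # random initial population
    best = min(X, key=f).copy()
    for t in range(iters):
        w = 1.0 - t / iters                     # exploration weight decays over time
        for i in range(pop):
            step = w * rng.uniform(lb, ub, dim)         # exploration: global random move
            step += (1.0 - w) * (best - X[i])           # exploitation: pull towards best
            cand = np.clip(X[i] + rng.random(dim) * step, lb, ub)
            if f(cand) < f(X[i]):                       # greedy selection keeps improvements
                X[i] = cand
        best = min(X, key=f).copy()
    return best, f(best)

x_best, f_best = generic_metaheuristic(sphere, -5.0, 5.0)
print(f_best)
```

Too much weight on the first term and the search never settles; too much on the second and it stalls in a local optimum, which is exactly the balance criterion (i) above.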
In recent years, numerous metaheuristic algorithms have been developed and implemented in a wide range of fields. For an in-depth and comprehensive review of these algorithms, please refer to Table 1. Despite numerous recently proposed algorithms in the field of optimization, a fundamental question remains about the necessity of introducing more optimization techniques. The No Free Lunch (NFL) theorem logically establishes that no single algorithm can solve all optimization problems. Therefore, the success of an algorithm in addressing a specific set of problems does not ensure effectiveness across all optimization problems with varied types and characteristics. In simpler terms, most optimization methods perform similarly on average, even though they may excel at solving specific types of problems. The NFL theorem encourages researchers to develop new optimization algorithms tailored to specific subsets of problems in diverse fields. This motivation underlies the presented work, which introduces a simple yet effective optimization technique designed for real problems with unknown search spaces. However, despite the development of numerous metaheuristic algorithms aimed at discovering optimal solutions, significant gaps persist between the theoretical concepts and the practical implementation of these algorithms. Some of these gaps are as follows:
-
Metaheuristic algorithms may struggle to fully explore solution spaces.
-
Slow convergence can lead to premature settling for suboptimal solutions.
-
High-dimensional problems pose challenges due to increased computational demands.
-
Some methods require complex parameter tuning, making them resource-intensive.
-
Performance varies based on specific optimization problem characteristics.
-
Adaptation to real-world problems, especially in dynamic or uncertain environments, may be challenging for metaheuristics.
To address the problems in existing algorithms, researchers are encouraged to develop new and creative algorithms with practical solutions for current challenges. It is suggested to create algorithms that reliably reach a solution, have a strong system for controlling and adjusting parameters, and perform well within reasonable time limits. This study also demonstrates that a meta-heuristic doesn't necessarily require direct inspiration; instead, basic mathematical functions, such as the hyperbolic \(sinh\) function, can be employed to create optimization algorithms in this domain. The proposed algorithm utilizes the hyperbolic \(sinh\) function to navigate and exploit the search space, leading to improved solutions. This research introduces the Hyperbolic Sine Optimizer (HSO), a novel meta-heuristic optimization algorithm designed to address a diverse range of optimization challenges in scientific applications. The algorithm places a primary emphasis on mathematical concepts, specifically exploring mechanisms related to algebraic and hyperbolic functions within the context of population members. A distinctive aspect of this study lies in the development of exploration and exploitation phases for metaheuristic algorithms, departing from conventional mechanisms and relying entirely on behavioural patterns associated with hyperbolic function convergence. The exploration and exploitation phases are pivotal in any meta-heuristic algorithm, and this research introduces a unique approach to these stages. Unlike many existing population-based meta-heuristic optimization algorithms, the HSO ensures that population members actively contribute to the exploration of the search region. This deviation from the usual practice, where the population commonly stays inactive during the exploration phase, underscores the importance of involving the contributions of individual population members to enhance the efficiency of investigating the search region. 
The researchers have addressed this gap by developing an optimizer that utilizes the collective impact of population members to effectively explore the search region. To delve deeper into the functioning of the optimizer, the subsequent subsection provides additional insights. The effectiveness of any meta-heuristic algorithm is contingent upon achieving a well-balanced interplay between exploration and exploitation. In this context, the authors have presented the exploration and exploitation stages using the hyperbolic \(sinh\) function, marking a notable advancement for the HSO. The application of the \(sinh\) hyperbolic function in behavior analysis adds an exciting dimension to the optimizer, showcasing its potential to expedite problem-solving processes and reach global minima or maxima. The exploitation phase, crucial in optimization, relies on a set of algebraic equations based on population elements, further underlining the algorithm's versatility and efficacy in handling complex optimization challenges. Overall, the Hyperbolic Sine Optimizer presents a promising avenue for meta-heuristic optimization, offering a fresh perspective on exploration and exploitation dynamics in the pursuit of global extrema for diverse problem domains.
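The paper's exact sinh-based convergence schedule is not reproduced in this section, but the general idea, a control coefficient whose magnitude shrinks smoothly via the hyperbolic sine as iterations progress, can be sketched as follows. The specific formula below is an illustrative assumption, not HSO's published equation:

```python
import math

def sinh_decay(t: int, t_max: int, a: float = 2.0) -> float:
    """Illustrative control coefficient: equals 1 at t = 0 and decays
    smoothly to 0 at t = t_max via the hyperbolic sine."""
    return math.sinh(a * (1.0 - t / t_max)) / math.sinh(a)

vals = [sinh_decay(t, 100) for t in (0, 50, 100)]
print(vals)  # [1.0, ~0.324, 0.0]
```

Because \(sinh\) is steep away from zero and flat near it, such a coefficient permits large exploratory moves early on and increasingly fine exploitative moves later, which is the convergence behaviour the authors attribute to the \(sinh\) function.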
The main contributions of this study are as follows.
-
HSO actively involves individual population members, deviating from conventional practices, ensuring a more comprehensive exploration of solution spaces.
-
Employs distinctive exploration and exploitation phases with hyperbolic \(sinh\) function convergence to enhance convergence speed, mitigate premature convergence, and actively involve population members in addressing slow convergence.
-
Demonstrates efficiency in handling high-dimensional optimization problems, with comprehensive evaluations across dimensions (30, 100, 500, and 1000) showcasing applicability.
-
Utilizes mathematical principles to simplify parameter adjustment, employing behavioural dynamics with algebraic and hyperbolic \(sinh\) functions. This potentially reduces resource demands compared to complex-tuned algorithms.
-
Emphasizes mathematical concepts and unique exploration/exploitation phases, providing a robust, versatile approach to address variability in performance across different optimization problem characteristics.
-
Active involvement of population members in exploration enhances adaptability, and unique dynamics introduced by hyperbolic functions contribute to improved adaptability in real-world and uncertain environments.
The main objective of this paper is to develop a robust optimization algorithm suitable for addressing various challenges encountered in real-world optimization scenarios. This research effectively addresses substantial challenges in the research field related to the theoretical foundation by presenting a clear and straightforward layout. In contrast to other metaheuristic algorithms, this study stands out for its precise mathematical and theoretical underpinnings, exemplified by the incorporation of the hyperbolic \(sinh\) function.
2 Literature review
The focus of current research is on using metaheuristic algorithms for complex engineering problems, driven by their proven effectiveness. This has led to a substantial growth in literature on metaheuristic techniques. The theoretical research within literature can be categorized into three primary avenues: enhancing existing techniques, combining diverse algorithms, and suggesting novel algorithms. In the initial approach, scholars endeavour to enhance algorithmic performance by incorporating various mathematical or stochastic operators. The second prevalent research direction involves merging different algorithms to enhance overall performance or address particular issues. Finally, proposing new algorithms stands out as a widely pursued research path. The inception of innovative algorithms often draws inspiration from evolutionary phenomena, the collective behaviour of creatures (utilizing swarm intelligence techniques), fundamental physical principles, and concepts related to human experiences. The literature is progressively adopting classification methods based on these sources of inspiration, leading to the identification of four main categories of algorithms: swarm-based, evolutionary-based, physics/chemistry-based, and social/human-based algorithms. Table 1 summarizes notable metaheuristic algorithms.
-
Swarm intelligence based algorithms: Swarm Intelligence simulates labour division and collaboration among organisms, evolving the population through interactions. Examples of swarm intelligence based algorithms include: PSO [10], a widely recognized swarm-based metaheuristic algorithm introduced by Kennedy and Eberhart in 1995. Another classical algorithm is Ant Colony Optimization (ACO), proposed by Dorigo et al. [11], inspired by ants' foraging behaviours and the communication of chemical pheromone trails to find optimal paths. The Artificial Bee Colony (ABC) algorithm [12] is inspired by bee foraging, involving employed bees, onlooker bees, and scouts. Various swarm-based metaheuristic algorithms have gained attention, such as the Grey Wolf Optimizer (GWO) [13], simulating grey wolves' cooperative hunting with alpha, beta, delta, and omega groups. The Whale Optimization Algorithm (WOA) [14] mimics whale foraging, demonstrating strong optimization convergence. The Salp Swarm Algorithm (SSA) [15] is inspired by the salp chain, attracting attention across diverse fields. Harris Hawks Optimization (HHO) [16] excels in engineering optimization, simulating hawks' preying with distinct chasing patterns. The Marine Predator Algorithm [17] draws from predator–prey behaviours, utilizing velocity ratio, Levy, and Brownian movement. Seagull Optimization Algorithm (SOA) [18] mimics seagull foraging for optimization. 
Other notable algorithms include Monarch Butterfly [19], Lion [20], Pity Beetle [21], Squirrel Search [22], Butterfly [23], Slime Mould [24], Golden Eagle [25], Red Fox [26], Starling Murmuration Optimizer (SMO) [27], Aquila Optimizer (AO) [28], Gannet optimization algorithm (GOA) [29], Bacteria phototaxis optimizer (BPO) [30], Sea horse optimizer (SHO) [31], Coati Optimization Algorithm (COA) [32], Dwarf Mongoose Optimization Algorithm (DMOA) [33], Snake Optimizer (SO) [34], Reptile Search Algorithm (RSA) [35], Prairie Dog Optimization Algorithm (PDO) [36], Ebola Optimization Search Algorithm [37] and more.
-
Evolutionary algorithms: The second class of metaheuristic algorithms falls under the evolutionary-based category. These algorithms imitate biological evolution through processes like crossover, mutation, and selection, achieving evolution by preserving highly adaptable individuals (solutions). For instance, GA, introduced by Holland in 1975 [38], stands as a pioneer in metaheuristics, drawing inspiration from Darwin's theory of natural competition. This approach proves suitable for a diverse range of optimization problems. Storn and Price developed Differential Evolution (DE) [39], a widely used algorithm for optimization. Biogeography-based Optimization [40] is derived from the migration and mutation of biological organisms, with the best solution obtained by updating the habitat suitability index through migration and mutation. Additionally, various variants of evolutionary-based metaheuristics have been introduced, including Evolution Strategy [41], Gene Expression Programming [42], and Memetic Algorithm [43, 44].
-
Physics and chemistry based algorithms: Physics and chemistry-based algorithms, inspired by physical forces like electromagnetic and gravity, as well as chemical concepts, emphasize theoretical and mathematical aspects. This makes their convergence easier to demonstrate with field-specific theorems. Examples include simulated annealing [45], gravitational search algorithm [46], big-bang big-crunch algorithm [47], charged system search [48], ray optimization [49], stochastic fractal search [50], equilibrium optimizer [51], sine cosine algorithm [52], water cycle algorithm [53], thermal exchange optimization [54], Archimedes optimization algorithm (AOA) [55], Kepler Optimization Algorithm (KOA) [56], Fick's Law Algorithm (FLA) [57], Propagation Search Algorithm (PSA) [58], weighted mean of vector (INFO) [59] and others. Simulated annealing [45], inspired by the physical law of metal cooling and annealing, is effective in solving complex optimization problems. The gravitational search algorithm [46], drawing inspiration from the law of gravity, attracts particles based on mass weight to achieve optimal solutions.
-
Human based algorithms: The final category of metaheuristic algorithms is based on social or human behaviours. Brainstorm Optimization [60], developed by Shi, simulates intense ideological collisions among people. Each idea represents a candidate solution, and solution updates involve clustering and fusion. Teaching–Learning-Based Optimization [61] draws inspiration from the teaching and learning process in classrooms, where students learn not only from teachers but also from peers. TLBO is recognized as a high-quality algorithm in the metaheuristic field. Other notable social or human-inspired metaheuristics include Tabu Search [62], Harmony Search [63], Political Optimizer [64], Imperialist Competitive Algorithm [65], League Championship Algorithm [66], Interactive Autodidactic School [67], Arithmetic Optimization Algorithm [68], Language Education Optimization (LEO) [69], Skill Optimization Algorithm (SOA) [70], Group learning algorithm (GLO) [71], Growth Optimizer (GO) [72], Success History Intelligent Optimizer (SHIO) [73], Child Drawing Development Optimization (CDDO) [74] and more.
The constant progress and improvement of initial metaheuristic optimization algorithms represent an ongoing process of getting better and more innovative. Researchers continuously work on refining these fundamental algorithms by incorporating inventive methods and strategies (Table 2). Hybrid approaches take advantage of how different algorithms complement each other, making the exploration of the search space more effective and efficiently using potential solutions. This blending of ideas not only pushes the limits of optimization performance but also creates adaptable and strong algorithms that can handle various complex optimization challenges more effectively. In recent years, various enhanced and hybridized versions of existing metaheuristic algorithms have emerged. These variants are essential for optimizing algorithm performance, overcoming limitations, and adapting to diverse problem characteristics, ensuring efficient solutions. For instance, the Conscious Neighbourhood-based Crow Search Algorithm (CCSA) [75] addresses crow search algorithm concerns with three novel strategies, achieving outstanding results in benchmarks and engineering problems, surpassing state-of-the-art swarm intelligence. The Quantum-based Avian Navigation Optimizer Algorithm (QANA) [76] refines differential evolution, introducing self-adaptive quantum elements, success-based distribution, and V-echelon communication. QANA proves superior to DE and other swarm algorithms. The Enhanced Whale Optimization Algorithm (E-WOA) [77] improves feature selection, outperforming existing WOA variants. Binary E-WOA (BE-WOA) excels in medical datasets like COVID-19 detection. The Improved Binary QANA (IBQANA) [78] enhances binary metaheuristics for medical data pre-processing, surpassing seven alternatives. Binary QANA (BQANA) [79] optimizes medical feature selection, outperforming alternatives. Binary Starling Murmuration Optimizer (BSMO) [80] excels in feature selection, surpassing rivals.
The Levy Arithmetic Algorithm [81] enhances the Arithmetic Optimization Algorithm, proving superior in diverse benchmarks and real-world scenarios. E-mPSOBSA [82] integrates the modified Backtracking Search Algorithm (BSA) and Particle Swarm Optimization (PSO) to improve global exploration and local exploitation. The hybrid HSMA [83] combines quadratic approximation with SMA, while the novel m-SCBOA [84] merges a modified Butterfly Optimization Algorithm with the Sine Cosine Algorithm, enhancing both exploratory and exploitative searches. E-SOSBSA [85], a hybrid of SOS and BSA, addresses SOS limitations through adaptive mutation and crossover operators. NwSOS [86] modifies the symbiotic organisms search algorithm, enhancing exploration/exploitation balance. ImBSA [87] improves BSA with a multi-population approach, diverse mutation strategies, and adaptive control parameters. MlSOS [88], an improved SOS variant, tackles local optima and weak convergence issues, demonstrating enhanced search performance on benchmarks. The modified DE algorithm in paper [89] addresses control parameter challenges in optimizing real-world problems. QRSMA [90] integrates SMA with quasi-reflection-based learning for improved diversity, convergence, and search efficiency. Similarly, gQR-BSA [91] modifies the backtracking search algorithm with quantum Gaussian mutations, adaptive parameters, and quasi-reflection-based jumping. mLBOA [92] is a BOA variant utilizing self-adaptive parameters and Lagrange interpolation, while m-DMFO [93] addresses premature convergence issues in the moth flame optimization algorithm.
The preceding paragraphs showcase a range of metaheuristics devised by researchers and the broad domains where these algorithms find application. In addition, Table 25 presents a comprehensive overview of the advantages and disadvantages associated with various metaheuristic techniques, offering insights into their strengths and limitations for optimization tasks. The subsequent section presents the Hyperbolic Sine Optimizer (HSO), a novel metaheuristic optimization algorithm crafted to tackle a diverse array of optimization challenges in scientific applications.
3 The proposed algorithm: Hyperbolic Sine Optimizer
The Hyperbolic Sine Optimizer (HSO) is a novel meta-heuristic optimization method for scientific problems. HSO stands out for its utilization of algebraic and hyperbolic functions, particularly focusing on population members. The algorithm comprises two phases, exploration and exploitation, uniquely relying on behaviour mechanisms associated with hyperbolic function convergence and departing from conventional methods. Highlighting the significant role of population members in efficient search-region exploration, HSO diverges from traditional population-based meta-heuristic optimization algorithms: while most algorithms minimize the influence of population members in exploring the search region, HSO emphasizes their importance for effective exploration. To address this research gap, the study proposes an optimizer that actively involves population members in the exploration process. Balancing exploration and exploitation is critical for any meta-heuristic algorithm, and the authors introduce the hyperbolic \(sinh\) function to accelerate convergence towards global minima or maxima. In the exploitation phase, algebraic equations based on population constituents are employed, and a specific equation is proposed for position updates during the exploratory phase; these mathematical tools play a crucial role in achieving efficient optimization results. Overall, HSO introduces a new approach that considers population dynamics, behaviour mechanisms, and mathematical functions to enhance the exploration and exploitation phases in meta-heuristic optimization.
3.1 Exploration phase
The exploration stage within the framework of the Hyperbolic Sine Optimizer (HSO) signifies a revolutionary transformation in the field of meta-heuristic optimization. Unlike traditional methodologies where populations remain passive during exploration, the HSO takes an active approach by involving individual members and acknowledging their distinct contributions. This approach enhances overall efficiency by tapping into the combined impact of population members, introducing dynamism to the exploration process. Emphasizing the vital interplay between exploration and exploitation, the HSO utilizes the hyperbolic \(sinh\) function during the exploration phase, providing a new outlook on shaping exploration dynamics. The incorporation of the \(sinh\) hyperbolic function in behaviour analysis introduces an inventive aspect, prompting problem-solving and facilitating the effective traversal of the search region. Closely connected to mathematical principles, the exploration phase capitalizes on algebraic and hyperbolic functions, marking a significant advancement. In conclusion, the exploration phase of the HSO, characterized by active involvement and guided by mathematical principles, sets it apart with an innovative approach, ensuring adept handling of intricate optimization challenges and pursuit of global extrema across various problem domains.
In the exploration phase, we evaluate a random number \({x}_{i,j}^{\alpha }\) based on a uniform random distribution between the population member \({x}_{i,j}^{p}\) and \({r}_{1}\), as in Eq. (1). The quantity \({r}_{1}\) is defined in Eq. (2); the original population members are thereafter replaced with these newly generated members. Here \({x}_{j}^{best}\) represents the population local best.
After all population members have been replaced, the best local solution \({[x]}_{1\times n}^{{\alpha }_{best}}\) is found. The exploration phase then ends, and the exploitation phase begins.
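Since Eqs. (1) and (2) are not reproduced in this excerpt, the exploration step can only be sketched. The code below follows the described procedure (draw each member uniformly between its current position and a reference point \(r_1\) built from the local best, then keep the best of the new population); the particular construction of `r1` is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

def explore(X, f, lb, ub):
    """Exploration sketch: resample each member uniformly between its
    current position and a reference r1 derived from the local best.
    (Eq. (2) for r1 is not given in this excerpt; the construction
    below is an illustrative assumption.)"""
    m, n = X.shape
    x_best = X[np.argmin([f(x) for x in X])]        # population local best
    r1 = x_best + rng.random(n) * (ub - lb) * 0.1   # assumed reference near the best
    lo = np.minimum(X, r1)
    hi = np.maximum(X, r1)
    X_alpha = np.clip(rng.uniform(lo, hi), lb, ub)  # uniform draw between member and r1
    alpha_best = X_alpha[np.argmin([f(x) for x in X_alpha])]
    return X_alpha, alpha_best

f = lambda x: float(np.sum(x ** 2))
X = rng.uniform(-5, 5, size=(6, 3))
X_new, best = explore(X, f, -5.0, 5.0)
print(f(best))
```

Note how every member contributes a fresh sample, which is the active-population behaviour the authors emphasize.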
3.2 Exploitation phase
The Hyperbolic Sine Optimizer (HSO) relies on a crucial exploitation phase for its meta-heuristic optimization. Unlike traditional methods, HSO prioritizes mathematical concepts, especially algebraic and hyperbolic functions within the population, recognizing the importance of exploration and exploitation in meta-heuristic algorithms. The HSO stands out by actively involving population members during search region exploration. In the exploitation phase, departing from common inactivity, HSO prioritizes individual contributions to enhance search region investigation efficiency, emphasizing the algorithm's commitment to harnessing the collective impact of population members. Researchers stress the exploitation phase's importance, relying on algebraic equations based on population elements. This underscores HSO's adaptability and effectiveness in handling complex optimization challenges, presenting a distinctive approach to meta-heuristic optimization.
During the exploitation phase, the following equations are used to update the population:
and \({r}_{4}\) is a random number between \(lb\) and \(ub\).
Hereafter, the target objective function value \({[f]}_{m\times 1}^{{\beta }_{obj}}\) is evaluated for each search agent of the population \({[X]}_{m\times n}^{\beta }\). Subsequently, the best-fit search agent \({[x]}_{1\times n}^{{\beta }_{best}}\) is found with respect to the objective function value \({f}_{i}^{{\beta }_{obj}}\). These procedures are then repeated over a number of iterations (Fig. 1).
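The exploitation update equations themselves are not reproduced in this excerpt, so the loop can only be sketched at the procedural level described here: perturb each agent (using a random \(r_4\) drawn between \(lb\) and \(ub\)), evaluate the objective for every agent, track the best-fit agent, and repeat. The perturbation rule below is an illustrative stand-in for the paper's equations:

```python
import numpy as np

rng = np.random.default_rng(7)

def exploit(X, f, lb, ub, iters=50):
    """Exploitation sketch: move agents towards the best-fit agent with a
    small random component scaled by r4 in [lb, ub]. (The paper's exact
    update equations are not given in this excerpt; this rule is an
    illustrative assumption.)"""
    best = X[np.argmin([f(x) for x in X])].copy()
    for _ in range(iters):
        r4 = rng.uniform(lb, ub)                       # random number between lb and ub
        cand = X + rng.random(X.shape) * (best - X) + 0.01 * r4
        cand = np.clip(cand, lb, ub)
        improved = np.array([f(c) < f(x) for c, x in zip(cand, X)])
        X[improved] = cand[improved]                   # greedy replacement
        best = X[np.argmin([f(x) for x in X])].copy()  # track the best-fit agent
    return X, best

f = lambda x: float(np.sum(x ** 2))
X = rng.uniform(-5, 5, size=(8, 4))
X_final, best = exploit(X.copy(), f, -5.0, 5.0)
print(f(best))
```

The greedy replacement step guarantees that no agent ever worsens, so the best fitness value recorded per iteration is monotonically non-increasing.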
3.3 Hypotheses about the HSO algorithm
For the following reasons, the HSO algorithm theoretically achieves the global optimum of optimization problems (Fig. 2):
-
Effective search space exploration is guaranteed by the population dispersion behaviour of each generation around the population members.
-
The population members' adaptively diverse boundaries guarantee efficient utilization of search space.
-
Utilizing random adaptive variables and population update techniques greatly increases the likelihood of escaping the local optimum stagnation.
-
A new exploration and exploitation phase based on hyperbolic functions gradually decreases the rates at which population members are modified over the course of iterations to assure convergence of the HSO algorithm.
-
Each iteration's population diversity is increased by producing random population members within the search boundary for each search agent.
-
Individual search agents are directed by the population to investigate more fruitful areas of the search space.
-
Each iteration's best fitness value is noted and compared with the other previous values.
-
HSO has a great ability to avoid local optima due to its population-based characteristics.
-
There aren't many parameters to adjust when using the HSO algorithm.
-
The HSO algorithm approaches the problem as a "black box," not using gradients.
4 Experimental results and discussions
The performance of the proposed HSO algorithm is assessed using 110 well-known standard benchmark functions, and the outcomes are compared with those of 15 other metaheuristic algorithms: GA [38], PSO [10], BBO [40], FPA [103], GWO [13], BAT [104], FA [105], CS [106], MFO [107], GSA [46], DE [39], TSA [108], SCA [52], WOA [14], and AOA [68]. In addition, Table 8 showcases the outcomes produced by the proposed algorithm (HSO) and contrasts them with the results derived from eight recently introduced algorithms. Following the experimental setup and comparison algorithms, details of the benchmark problems are described first. The qualitative, quantitative, and scalability examinations of HSO are then completed. Seven real constrained and unconstrained optimization tasks are used to evaluate the performance of HSO.
4.1 Benchmark problems and experimental setup
A set of 23 standard, well-known optimization benchmark functions, including unimodal, multimodal, and fixed-dimensional functions, as well as 42 additional functions with various modalities (many local minima, bowl-shaped, plate-shaped, valley-based, steep ridges and drops, and others), is used to evaluate the effectiveness of the proposed HSO. Next, all 15 benchmark functions from the IEEE CEC-2015 suite and all 30 from the IEEE CEC-2017 suite are applied. A summary of the 23 standard and 42 further benchmarking problems is provided in Appendices A and B, respectively. The IEEE CEC-2015 and IEEE CEC-2020 test suite collections are included in Appendices C and D, respectively. To make fair comparisons between the proposed HSO and all fifteen other algorithms, each algorithm is subjected to 30 independent runs with 500 iterations per run and a population size of 30. The best, worst, mean, and standard deviation values for each function are reported for each algorithm. Results are computed on an Intel(R) Core(TM) i5 7200U processor running at 2.5–2.71 GHz, with 8 GB of primary memory and 1 TB of secondary memory.
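The reporting protocol above (30 independent runs per algorithm, with best, worst, mean, and standard deviation recorded per function) can be sketched as follows; `run_once` is a hypothetical stand-in for one full optimizer run, not the HSO implementation:

```python
import numpy as np

def run_once(seed: int) -> float:
    """Stand-in for one independent optimizer run (a real run would
    execute 500 iterations with a population of 30 and return the
    final best objective value)."""
    r = np.random.default_rng(seed)
    return float(np.min(r.uniform(0.0, 1.0, size=500)))

results = np.array([run_once(s) for s in range(30)])  # 30 independent runs
report = {
    "best": results.min(),
    "worst": results.max(),
    "mean": results.mean(),
    "std": results.std(),
}
print(report)
```

Seeding each run independently keeps the trials statistically distinct while making the whole experiment reproducible, which is what makes the cross-algorithm comparison fair.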
4.2 Exploitation efficiency of proposed HSO
The findings of the first experiment, carried out on the 23 common benchmark functions suggested by Yao et al., are shown in Appendix Table 40. A set of seven unimodal and six multimodal problems is considered, with a maximum dimension of 30. The maximum number of function evaluations is 15,000 (30 × 500). Table 3 presents the results for these 13 functions, of which 7 are unimodal and 6 are multimodal. Because unimodal functions have a single (global) optimum, they serve as evidence of the proposed algorithm's exploitation capability. As evidenced by the mean values in Table 3, the proposed HSO outperforms the other metaheuristics for unimodal functions F1, F2, F3, F4, and F7 in 30 dimensions, and it successfully achieves the exact global optimum for F1, F2, F3, and F4. Tables 4, 5, and 6 show the results for 100, 500, and 1000 dimensions, respectively. Even when the problems are scaled up to 100 and 500 dimensions, significant progress is still apparent: HSO continues to locate the exact global optimum for all four of these functions (F1, F2, F3, and F4). The standard deviation further illustrates the proposed algorithm's stability and predictability. The Rosenbrock function (F5 in Appendix Table 40), also known as the Banana function or Rosenbrock's valley, is a non-convex function created by Howard H. Rosenbrock in 1960 to evaluate optimization strategies. Its global minimum lies in a long, narrow, parabolic flat valley; locating the valley is simple, but converging to the global optimum is difficult. For F5, the proposed approach performed better than the alternative metaheuristics in all dimensions (30, 100, 500, and 1000). The proposed algorithm also produces adequate results for functions F6 and F7.
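As a concrete reference, the Rosenbrock function used as F5 can be written in a few lines; this is the standard textbook form, shown here only to illustrate the valley structure discussed above:

```python
def rosenbrock(x):
    """Rosenbrock (banana) function; global minimum f = 0 at x = (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```

Any point on the parabolic valley \(x_{i+1}=x_i^2\) keeps the first term at zero, which is why optimizers find the valley quickly but converge slowly along it toward the global optimum.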
4.3 Exploration efficiency of proposed HSO
Multimodal functions have more local optima than unimodal functions, which makes them harder to solve. In order to find a global optimum, it is, therefore, essential to avoid local optimal stagnation. Tables 3, 4, 5, and 6 present the results for the 30, 100, 500, and 1000 dimensions, respectively. When the proposed HSO is compared to various metaheuristics, it can be seen that it outperforms them significantly for functions F8, F9, F10, F11, F12, and F13. The results demonstrate that applied mechanisms for population distribution behaviour improve the HSO's exploration capability while exploitation mechanisms with hyperbolic function \(sinh\) strike a good balance between exploration and exploitation to improve convergence in the later stages of a generation.
4.4 Performance of proposed HSO over fixed dimension multimodal functions
These benchmark problems also exhibit multimodality in fixed dimensions. Each problem is run for 500 iterations to arrive at the solution. The findings in Table 7 make it apparent that the proposed HSO can locate the global optimum for functions F14, F15, F16, F17, F18, F21, F22, and F23. Additionally, the results for functions F19 and F20 are notably better than those of the other metaheuristics. The best, worst, mean, and standard deviation values of the proposed HSO for the 23 common benchmark functions with dimensions of 30, 100, 500, and 1000 are shown in Table 8.
4.5 Performance of proposed HSO over some 42 extra functions with different modalities
To evaluate the effectiveness of the proposed HSO algorithm, 42 supplementary functions (listed in Appendix Table 42) are utilized, covering many local minima, bowl-shaped, plate-shaped, valley-based, and steep-ridge-and-drop features, among others (Table 9). The best, worst, mean, and standard deviation results for the 42 supplementary functions are shown in Table 10. Functions G1 through G15 belong to the many-local-minima category. For functions G3, G4, G6, G7, G11, G12, G13, and G15, HSO finds the exact global optimum, and it also yields good results for the remaining functions in this category. The bowl-shaped category comprises seven functions spanning G16–G22. Here too, HSO demonstrates its usefulness by producing the exact global optimum for G16, G18, G19, G20, and G21, and performing satisfactorily otherwise. For the plate-shaped category (G23–G27), five functions are taken into account; HSO produces the exact global optimum for G24 and G27. Four functions (G28–G31) represent the valley-shaped category: HSO attains the exact global optimum for G28 and G29 and performs well on the others. Four functions (G32–G35) evaluate the steep-ridges-and-drops category; HSO attains the exact global optimum for G33 and yields acceptable results for the rest. Finally, seven further functions with diverse forms (G36–G42) constitute the other categories, and HSO produces the exact global minimum for all functions in this range.
4.6 Performance of the proposed algorithm over IEEE CEC benchmark functions
4.6.1 Experimental set up for IEEE CEC benchmark problems
A set of 15 standard IEEE CEC-2015 and 30 standard IEEE CEC-2017 benchmark functions, renowned for their optimization characteristics, is utilized to evaluate the effectiveness of the proposed HSO. These functions include shifted, rotated, hybrid, and composite variants, and a brief summary is provided in Appendix Table 43. In the experimental setup, each algorithm undergoes 30 independent runs, with 1000 iterations per run and a population size of 30. This thorough configuration ensures fair comparisons between the HSO algorithm and the remaining fifteen algorithms. Mean and standard deviation values are recorded for every function. The results are obtained on an Intel(R) Core(TM) i5-7200U processor operating at 2.5–2.71 GHz, equipped with 8 GB of primary memory and 1 TB of secondary memory.
4.6.2 Analysis of outcomes for IEEE CEC-2015 benchmark functions
Table 11 presents the performance evaluation of the Hyperbolic Sine Optimizer (HSO) on the CEC benchmark functions, revealing noteworthy achievements, particularly when compared to competing algorithms such as MPA, TSA, WOA, GWO, TLBO, GSA, GA, and PSO. To assess the efficacy of HSO, the mean values obtained for each function were scrutinized, with smaller mean values indicative of better performance. Among the benchmark functions, HSO demonstrated superior mean values in several instances, outperforming its competitors.
For CEC-3, CEC-6, CEC-7, CEC-9, CEC-12, and CEC-15, HSO secured the best mean values compared to MPA, TSA, WOA, GWO, TLBO, GSA, GA, and PSO. Notably, in CEC-3, HSO exhibited an average of 3.04E+02 with a standard deviation of 1.30E+00, showcasing its efficiency in minimizing the objective function. Similarly, for CEC-6, HSO obtained an average of 6.00E+02 with a minimal standard deviation of 7.40E−02, signifying its robustness in handling the complexities of this function. The consistent outperformance across various functions underscores the effectiveness of HSO in achieving superior mean values. It is worth highlighting that these results signify HSO's capability to converge towards optimal solutions, as reflected by the smaller mean values. The standard deviations, indicating the variability of results, were also competitive, emphasizing the stability of HSO across different functions. This comparative analysis establishes HSO as a promising optimization algorithm for addressing complex optimization problems, especially in scenarios where achieving lower objective function values is critical. The robust performance of HSO across these CEC benchmark functions positions it as a noteworthy candidate for diverse optimization applications.
4.6.3 Analysis of outcomes for IEEE CEC-2017 benchmark functions
In this section, we evaluate the effectiveness of the proposed method in optimizing the CEC 2017 test suite. This suite consists of thirty benchmark functions, categorized into three unimodal functions (C17-F1 to C17-F3), seven multimodal functions (C17-F4 to C17-F10), ten hybrid functions (C17-F11 to C17-F20), and ten composition functions (C17-F21 to C17-F30). The exclusion of C17-F2 from the simulation studies is due to its unstable behaviour, and a comprehensive description of the CEC 2017 test suite can be found in Appendix Table 43. To conduct a scalability analysis, we employed HSO and competitor algorithms to solve the test suite for problem dimensions (number of decision variables) set at 10, 30, 50, and 100. The results of implementing the proposed HSO approach and competitor algorithms on the CEC 2017 test suite are presented in Tables 12, 13, 14 and 15. The simulation outcomes indicate that, for dimensions equal to 10, HSO emerges as the most effective optimizer for handling C17-F1, C17-F3, C17-F5, C17-F6, C17-F8, C17-F9, C17-F10, C17-F19, C17-F20, C17-F21, C17-F22, C17-F24, C17-F27, and C17-F30. For dimensions equal to 30, HSO stands out as the best optimizer for solving C17-F1 to C17-F9, C17-F13 to C17-F15, C17-F18, C17-F23, C17-F25, and C17-F28. Likewise, for dimensions equal to 50, HSO proves to be the most efficient optimizer for handling C17-F1 to C17-F5, C17-F6, C17-F7, C17-F9, C17-F11, C17-F12, C17-F13, C17-F15, C17-F16, C17-F18, C17-F21, C17-F26, C17-F27, and C17-F30. Finally, for dimensions equal to 100, HSO excels as the optimal optimizer for solving C17-F1 to C17-F7 and C17-F9 to C17-F23, C17-F26, C17-F29, and C17-F30.
4.7 Analysis of exploration and exploitation using diversity graph
This section analyses the exploration and exploitation behaviour of HSO using the 23 standard benchmark functions with 30 dimensions. The preceding experiments examined the effectiveness of HSO using measures such as the mean value and standard deviation and showed that the HSO algorithm can outperform its competitors on the 23 benchmark functions. Hussain et al. [109] presented a method to evaluate the exploration and exploitation abilities of metaheuristic algorithms, which builds on a mathematical model of dimension-wise diversity. The equivalent formulas are as follows:
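The dimension-wise diversity measure of [109] is commonly written as follows (reconstructed here in standard notation, as an assumption consistent with the surrounding text; \(m\) is the population size and \(n\) the number of dimensions):

```latex
Div^{j} = \frac{1}{m}\sum_{i=1}^{m}\left| median(z^{j}) - z_{i}^{j} \right|, \qquad
Div = \frac{1}{n}\sum_{j=1}^{n} Div^{j}
```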
where \(median({z}^{j})\) denotes the median of the \(jth\) dimension. The formulas used to determine the exploration percentage and exploitation percentage are then outlined as follows:
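These percentages are commonly defined as follows (again a reconstruction consistent with the surrounding definitions, not a verbatim copy of the original display):

```latex
Epl\% = \frac{Div}{Div_{max}} \times 100, \qquad
Ept\% = \frac{\left| Div - Div_{max} \right|}{Div_{max}} \times 100
```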
where \({Div}_{max}\) stands for the highest level of diversity, and \(Epl\%\) and \(Ept\%\) denote the exploration percentage and exploitation percentage, respectively. When applied to difficult benchmarks such as unimodal, multimodal, and fixed-dimensional functions, the HSO algorithm preserves population diversity and successfully prevents premature convergence, because it maintains an appropriate exploration percentage. In conclusion, the HSO algorithm effectively realizes a trade-off between exploration and exploitation, as illustrated in Table 24. Figure 3 depicts the diversity graphs of the 23 standard benchmark functions.
4.8 Sensitivity analysis
The proposed HSO algorithm exhibits the capability to address optimization problems through a process centred on repetitions, utilizing examinations of the search space by members within the population. Consequently, the performance of HSO is subject to the influence of both the population size \((N)\) and the number of iterations \((T)\) performed by the algorithm.
Within this specific section, a sensitivity analysis of HSO with regard to parameters \((N)\) and \((T)\) is undertaken. To investigate the sensitivity to parameter \((N)\), the proposed HSO is employed with various population sizes, including 20, 30, 50, and 100, to address problems F1 to F23. The simulation results of the sensitivity analysis regarding parameter \((N)\) are presented in Table 16. The analysis of these simulation results indicates that an increase in the population enhances the search capabilities of HSO, resulting in a reduction of the values associated with the objective functions. For an examination of sensitivity to parameter \((T)\), the proposed HSO algorithm is executed with diverse values of \((T)\), specifically 100, 500, 1000, and 2000, applied to benchmark functions F1 to F23. The outcomes of the HSO sensitivity analysis under changes in parameter \((T)\) are documented in Table 17. Clearly evident from the simulation results of the sensitivity analysis is that an escalation in the values of \((T)\) prompts the algorithm to converge towards improved solutions, thereby decreasing the values associated with the objective functions.
5 Statistical measures
5.1 Student t tests
The Student t-test [110] is a statistical method used to compare the means of two groups and determine whether their differences are statistically significant. Named after William Sealy Gosset, who published under the pseudonym "Student," this parametric test is widely employed in research. It assumes that the data follow a normal distribution and calculates a t-statistic based on sample means, standard deviations, and sample sizes. A smaller p-value indicates stronger evidence against the null hypothesis, suggesting a significant difference between the group means. The t-test is valuable in experimental design, allowing researchers to assess the significance of observed differences and draw informed conclusions about the populations from which the samples are drawn. In this experimental configuration, the proposed algorithm HSO underwent a t-test in comparison to 15 competitor metaheuristics: GA, FA, BAT, DE, TSA, GSA, PSO, WOA, GWO, CS, BBO, SCA, MFO, FPA, and AOA. Evaluations were conducted on the 23 standard benchmark functions across 30 dimensions, with p-values recorded; p-values less than 0.05 indicate statistically significant results.
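As an illustrative sketch (not necessarily the exact variant used in the paper, which may assume equal variances or paired samples), a two-sample t statistic with Welch's correction can be computed from raw run results as follows:

```python
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)  # sample variances
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    se2 = var_a / na + var_b / nb      # squared standard error of the difference
    t = (mean_a - mean_b) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((var_a / na) ** 2 / (na - 1) + (var_b / nb) ** 2 / (nb - 1))
    return t, df
```

The p-value is then obtained from the t distribution with `df` degrees of freedom (e.g. via `scipy.stats`); values below 0.05 are reported as significant in Table 18.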
5.1.1 Discussion of the results of the proposed algorithm (HSO) using t-tests across 30 dimensions
Table 18 contains the p-values resulting from the Student t-tests, which compare the performance of HSO with different metaheuristics across functions F1 to F23. The table is organized with each row corresponding to a specific function and each column representing a distinct metaheuristic: GA, FA, BAT, DE, TSA, GSA, PSO, WOA, GWO, CS, BBO, SCA, MFO, FPA, and AOA. The reported p-values are in scientific notation, and the accompanying symbols (+), (−), and (=) signify whether the p-value is less than 0.05 (indicating statistical significance), greater than 0.05 (indicating lack of statistical significance), or equal to 0.05, respectively. P-values approaching zero (e.g., 3.49E−15) signify highly statistically significant outcomes. For instance, in function F1, all metaheuristics show p-values less than 0.05, signifying significant differences on F1. In contrast, in function F2, all metaheuristics except GWO have p-values less than 0.05. Furthermore, in function F5, GA, FA, BAT, DE, TSA, GSA, PSO, WOA, and CS show p-values less than 0.05, indicating statistically significant differences on F5. It is crucial to note that a lower p-value indicates stronger evidence against the null hypothesis, emphasizing the statistical significance of the observed differences. Additionally, the practical significance and the context of the study must be considered when interpreting these results.
5.2 Friedman ranking tests
The Friedman test [111] is a non-parametric statistical test used to determine if there are significant differences among the means of three or more related groups. Named after economist Milton Friedman, this test is applicable when the data are measured on an ordinal scale and the assumptions for parametric tests cannot be met. The procedure involves ranking the data within each group, calculating average ranks, and then computing a test statistic based on the differences between the observed and mean ranks. The null hypothesis assumes no significant differences among the groups. If the resulting test statistic is statistically significant, it indicates that there are variations among the groups. Post-hoc tests can be applied to pinpoint specific group differences. The Friedman test is particularly valuable for analyzing repeated measures data where normality assumptions may not hold, providing a robust method for assessing differences in central tendencies across multiple related groups. In this experimental configuration, the proposed algorithm HSO underwent Friedman ranking tests in comparison to 15 competitor metaheuristics, encompassing GA, FA, BAT, DE, TSA, GSA, PSO, WOA, GWO, CS, BBO, SCA, MFO, FPA, and AOA. Evaluations were carried out on 23 standard benchmark functions across varying dimensions, specifically 30, 100, 500, and 1000.
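The ranking step described above can be sketched in a few lines; this is a minimal illustration of the Friedman statistic (assuming lower objective values rank better and ties share averaged ranks), not the exact tool used by the authors:

```python
def ranks(row):
    """Average ranks (1 = smallest value), with ties sharing the mean rank."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    r = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # mean of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def friedman_chi2(results):
    """Friedman statistic for an n-blocks x k-treatments matrix of results."""
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for row in results:                 # rank algorithms within each function
        for j, rv in enumerate(ranks(row)):
            rank_sums[j] += rv
    return 12.0 / (n * k * (k + 1)) * sum(R * R for R in rank_sums) - 3.0 * n * (k + 1)
```

The statistic is compared against a chi-square distribution with k − 1 degrees of freedom; the average ranks `rank_sums[j] / n` are the per-algorithm figures reported in Tables 19–23.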
5.2.1 Discussion of the results of the proposed algorithm (HSO) using Friedman ranking tests across 30 dimension
Table 19 displays the results of the Friedman ranking tests conducted on the proposed algorithm (HSO) in comparison to 15 competitor metaheuristics across 30 dimensions. Assessed against the fifteen optimization methods on benchmark functions F1 to F13 in 30 dimensions, HSO secured the top overall position. Outperforming algorithms such as GA, FA, and BAT, HSO consistently demonstrated effectiveness on individual functions, asserting the top rank in F1, F2, F3, F4, F6, F7, F9, F10, F11, F12, and F13. With an average rank of 2.69, HSO showcased superiority across a diverse range of functions, emerging as a promising choice for optimization tasks. In conclusion, HSO stands out as a robust and effective optimization method, consistently exceeding its counterparts across the benchmark functions.
5.2.2 Discussion of the results of the proposed algorithm (HSO) using Friedman ranking tests across 100 dimension
The proposed HSO ranking algorithm is evaluated across benchmark functions (F1 to F13) in 100 dimensions. Table 20 results show that HSO consistently outperforms, securing top ranks. In F1, it stands out, while the Genetic Algorithm (GA) lags at 14. In F2, HSO leads at 1, surpassing GA and Firefly Algorithm (FA). This trend persists, emphasizing HSO's superiority. GA averages 11th, while FA and BAT Algorithm rank 8 and 16. Overall, HSO maintains dominance with an average rank of 1, affirming its efficacy. In conclusion, HSO demonstrates promising results, positioning it as a robust choice for optimization tasks compared to other algorithms.
5.2.3 Discussion of the results of the proposed algorithm (HSO) using Friedman ranking tests across 500 dimension
The HSO algorithm excels across benchmark functions (F1 to F13) in 500 dimensions, outperforming fifteen well-known optimization algorithms, as represented in Table 21. In F1, HSO secures the top rank, surpassing GA, FA, BAT, and others. This trend persists across functions; for example, in F5, HSO ranks 4, showcasing its effectiveness. With an average rank of 1.69, HSO demonstrates superior overall performance. Conversely, GA averages the 11th rank, while FA and BAT average ranks of 8 and 15. Overall, HSO emerges as a robust and competitive optimization approach, consistently surpassing or matching other algorithms. The average ranks affirm HSO's efficacy, making it a promising choice for diverse optimization tasks.
5.2.4 Discussion of the results of the proposed algorithm (HSO) using Friedman ranking tests across 1000 dimension
The HSO algorithm excels when compared against fifteen different optimization algorithms across functions F1 to F13 in 1000 dimensions, as illustrated in Table 22. For instance, in F1, HSO claims the top position with a rank of 1, surpassing PSO (rank 2) and leaving GA and WOA behind with ranks 15 and 16. This trend repeats across functions, emphasizing HSO's consistent superiority. In F5, HSO secures a competitive rank of 5, showcasing its adaptability, even though PSO claims the top spot. Overall, HSO maintains robust performance, exemplified by its rank of 3 in F6, outshining GA and WOA. HSO's lead in F7 and F8, with ranks 1, emphasizes its adaptability and consistent excellence. Function F10 sees HSO with a rank of 2, highlighting its strong performance, while in F11, HSO secures the top position, showcasing versatility. Even in F13, HSO maintains its robustness, securing the third position. The average ranks affirm HSO's dominance with an average of 1.85, surpassing GA and FA with average ranks of 9.77 and 11.69. This concise analysis highlights HSO's exceptional optimization capabilities across a diverse set of functions, making it a promising choice for various optimization tasks.
5.2.5 Discussion of the results of the proposed algorithm (HSO) using Friedman ranking tests across fixed dimension
The proposed HSO is ranked against fifteen optimization algorithms across the fixed-dimension benchmark functions (F14 to F23). The performance is assessed based on the average ranks of each algorithm, as shown in Table 23. For function F14, HSO secures the top position with an average rank of 4.10, outperforming the rest of the algorithms. Similarly, in F15, HSO maintains its dominance with an average rank of 3, indicating superior performance compared to the others. In F16, HSO again emerges as the top-ranked algorithm with an average rank of 2. However, in F17, HSO experiences a slight decline in performance, securing the 7th position with an average rank of 7.40. In F18, HSO is outranked by GA and secures the 2nd position. In F19, HSO bounces back, regaining the top position with an average rank of 1. HSO consistently performs well in F20, F21, and F22, securing the 1st, 1st, and 4th positions, respectively. In F23, HSO maintains a competitive edge, securing the 3rd position with an average rank of 5.90. The overall average ranks highlight HSO's strong performance, securing the 1st position. In summary, the proposed HSO demonstrates robust performance across the fixed-dimension benchmark functions, consistently outperforming the other optimization algorithms in terms of average rank.
6 Computational time complexities
The algorithm initiates a search optimization procedure utilizing a population of \(m\) search agents within an \(n\)-dimensional domain. In the initialization phase, random values are assigned to the population in \(O(m\times n)\) time, while the evaluation of the target objective function, performed once for each of the \(m\) search agents, costs \(O(m)\). The primary iteration loop has a time complexity of \(O(iter\times m\times n)\), reflecting its dependence on the number of iterations \(iter\): in each iteration, the loop traverses each of the \(m\) search agents and each of the \(n\) dimensions. Within the exploration phase, the nested loops responsible for updating the population contribute \(O(m\times n)\), and the evaluation of the target objective function contributes \(O(m)\). Similarly, in the exploitation phase, the population-update loops and the objective-function evaluations cost \(O(m\times n)\) and \(O(m)\), respectively. The conditional checks within both phases involve only fundamental arithmetic operations and comparisons and have minimal impact on the overall complexity. In conclusion, the aggregate time complexity of the algorithm is \(O(iter\times m\times n)\), where \(iter\) represents the number of iterations, \(m\) the number of search agents, and \(n\) the number of dimensions of the search space.
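The complexity argument above can be made concrete with a generic population-based skeleton. The update rule below is a hypothetical placeholder (HSO's actual sinh-based operators are described earlier in the paper), so only the loop structure, which realizes the \(O(iter\times m\times n)\) bound, is meaningful:

```python
import random

def hso_skeleton(obj, m, n, iters, lo, hi):
    """Generic population loop; the candidate update is a placeholder, not HSO's."""
    pop = [[random.uniform(lo, hi) for _ in range(n)] for _ in range(m)]  # O(m*n) init
    fit = [obj(x) for x in pop]                         # O(m) evaluations
    best = min(pop, key=obj)                            # current best agent
    for _ in range(iters):                              # main loop: O(iters*m*n)
        for i in range(m):                              # each of the m agents ...
            cand = [xj + random.gauss(0.0, 0.3) * (bj - xj)
                    for xj, bj in zip(pop[i], best)]    # ... over n dimensions
            cand = [min(max(v, lo), hi) for v in cand]  # keep inside the bounds
            fc = obj(cand)
            if fc < fit[i]:                             # greedy replacement
                pop[i], fit[i] = cand, fc
        best = min(pop, key=obj)
    return best, min(fit)
```

Every per-iteration cost is either \(O(m\times n)\) (the nested update loops) or \(O(m)\) (fitness bookkeeping), so the total matches the stated bound.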
7 Classical real-world engineering design problems
7.1 Constrained optimization based on IEEE−CEC-20 benchmarks suits
7.1.1 Pressure vessel design
The primary purpose of this pressure vessel design [120] is to keep the overall cost to a minimum. This total cost includes the material, forming, and welding of a cylindrical vessel. The vessel is capped on both ends, and the head is hemispherical in shape. The four variables in this problem are the thickness of the shell (Ts), the thickness of the head (Th), the inner radius (R), and the length of the cylindrical section without considering the head (L).
This problem is subject to four constraints. Its mathematical model is as follows:
Variable range
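For reference, the standard literature formulation of this problem can be sketched as follows (a reconstruction under that assumption, not a verbatim copy of the model above; the design point in the check is a widely reported near-optimal solution, included only as a sanity test):

```python
import math

def pressure_vessel_cost(x):
    """Total cost of material, forming and welding; x = (Ts, Th, R, L)."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def pressure_vessel_constraints(x):
    """Standard constraints; g_i(x) <= 0 means feasible."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,                                   # shell thickness vs. radius
        -th + 0.00954 * r,                                  # head thickness vs. radius
        -math.pi * r ** 2 * l - (4.0 / 3.0) * math.pi * r ** 3 + 1296000.0,  # volume
        l - 240.0,                                          # length limit
    ]
```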
This problem is very common across optimizers: researchers have solved it using both conventional optimization algorithms and nature-inspired metaheuristics. The comparison of HSO with several nature-inspired algorithms is as follows:
Pressure vessel design is one of the most popular constrained optimization problems due to the complexity of its constrained search space (Tables 24 and 25), and it is included in the CEC 2020 real-world constrained optimization benchmarks. Eight competitive metaheuristic algorithms are used to solve this problem to ensure fair comparisons. In the experimental results, HSO ranked first among all algorithms due to its optimal cost value, while TDO and TSA ranked second and third, respectively. According to Tables 26 and 27, the HSO results are superior. Figure 4 depicts the convergence curve of HSO over 500 iterations; 30 runs are performed, each consisting of 500 iterations. The parameter charts for all algorithms are presented in Figs. 5 and 6.
7.1.2 Welded beam design
The major goal of this problem is to minimize the fabrication cost of a welded beam [121]. There are five constraints to consider: the shear stress in the beam (\({\varvec{\uptau}}\)), the bending stress (\(\theta \)), the buckling load on the bar (\({P}_{c}\)), the beam's end deflection (\(\delta \)), and the side constraints.
The four variables in this problem are the thickness of the weld (h), the length of the attached part of the bar (l), the height of the bar (t), and the thickness of the bar (b). The following is the mathematical model for this problem:
Consider \(\overrightarrow{X}=\left[{X}_{1}, {X}_{2}, {X}_{3}, {X}_{4}\right]=\left[h, l, t, b\right]\)
Minimize \(f\left(\overrightarrow{X}\right)=1.10471{X}_{1}^{2}{X}_{2}+0.04811{X}_{3}{X}_{4}\left(14.0+{X}_{2}\right),\)
Variable range
where \(\tau \left(\overrightarrow{X}\right)=\sqrt{{\left({\tau }^{\prime}\right)}^{2}+2{\tau }^{\prime}{\tau }^{\prime\prime}\frac{{X}_{2}}{2R}+{\left({\tau }^{\prime\prime}\right)}^{2}},\)
\({\sigma }_{max}= 30000 psi\).
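The objective given above can be evaluated directly; the design point used in the check below is a solution commonly reported in the literature for this problem, which is an assumption of this sketch rather than a result claimed by the paper:

```python
def welded_beam_cost(x):
    """Fabrication cost f(X) = 1.10471*h^2*l + 0.04811*t*b*(14 + l); x = (h, l, t, b)."""
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
```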
The comparison findings for the welded beam design between HSO and several nature-inspired optimization algorithms are as follows:
The experimental results for the welded beam design problem are displayed in Table 28. To ensure fairness, HSO is compared to seven other competitor algorithms. HSO outperforms other algorithms when determining optimal beam design costs. HSO was ranked first for this problem, while TDO and TSA were ranked second and third, respectively (Figs. 7 and 8). The CEC 2020 benchmark suite for real-world constraint optimization includes welded beam design. The statistical comparison between HSO and other algorithms for this problem is displayed in Table 29. Figures 7 and 9 depict, respectively, a statistical graph and a convergence analysis of HSO over 500 iterations. Figure 8 is a comparison of the HSO parameters graph to other algorithms.
7.1.3 Tension compression spring design
The purpose of this tension/compression spring design [122] challenge is to design a spring that is as light as possible. Shear stress, surge frequency, and minimum deflection must all be considered during the minimization. The optimization is governed by the wire diameter (d), the mean coil diameter (D), and the number of active coils (N) (Figs. 9 and 10).
The following is the mathematical model for this problem:
Variable range
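A sketch of the standard literature formulation follows (objective and the four usual constraints; this is a reconstruction, and the evaluated design point is a commonly reported near-optimum included only as a sanity check):

```python
def spring_weight(x):
    """Weight of the tension/compression spring; x = (d, D, N)."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """Standard constraints; g_i(x) <= 0 means feasible."""
    d, D, N = x
    return [
        1.0 - D ** 3 * N / (71785.0 * d ** 4),               # minimum deflection
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
            + 1.0 / (5108.0 * d ** 2) - 1.0,                 # shear stress
        1.0 - 140.45 * d / (D ** 2 * N),                     # surge frequency
        (D + d) / 1.5 - 1.0,                                 # outer diameter limit
    ]
```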
This optimization problem has been solved by both mathematical and heuristic approaches. The results of solving it with HSO, compared against various nature-inspired algorithms, are reported in the comparison table:
Tension compression spring design is another benchmark from the CEC 2020 real-world engineering problem suite, recognized for its complex mathematical model and constrained search space. The experimental results of HSO for this problem are shown in Tables 30 and 31, where it is compared with seven additional metaheuristic algorithms. The results indicate that HSO performed exceptionally well and placed first due to its optimal cost, while TDO and TSA placed second and third, respectively. Statistical and parameter graphs are depicted in Figs. 11 and 12, respectively, and the convergence curve of HSO over 500 iterations is depicted in Fig. 10.
7.1.4 Speed reducer design
The speed reducer design problem [123] is a discrete problem with four design constraints: the bending stress of the gear teeth, the covering stress, the transverse deflections of the shafts, and the stresses in the shafts. The main goal is to determine the minimum weight of the speed reducer while satisfying these constraints. The problem involves six continuous variables and one discrete variable. Here, \({x}_{1}\) represents the face width, \({x}_{2}\) the module of the teeth, and \({x}_{3}\) (the discrete design variable) the number of teeth in the pinion. Likewise, \({x}_{4}\) represents the length of the first shaft between bearings and \({x}_{5}\) the length of the second shaft between bearings. The diameters of the first and second shafts, \({x}_{6}\) and \({x}_{7}\), are the sixth and seventh design variables.
The mathematical model of this problem is as follows:
Consider: \(X=[{x}_{1}, {x}_{2}, {x}_{3},{x}_{4}, {x}_{5},{x}_{6},{x}_{7}]\)
Minimize: \(f\left(x\right)=0.7854{x}_{1}{x}_{2}^{2}\left(3.3333{x}_{3}^{2}+14.9334{x}_{3}-43.0934\right)-1.508{x}_{1}\left({x}_{6}^{2}+{x}_{7}^{2}\right)+7.4777\left({x}_{6}^{3}+{x}_{7}^{3}\right)+0.7854({x}_{4}{x}_{6}^{2}+{x}_{5}{x}_{7}^{2})\)
Subject to: \({g}_{1}\left(x\right)=\frac{27}{{x}_{1}{x}_{2}^{2}{x}_{3}}-1\le 0,\)
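The objective above translates directly to code; the check below uses a design commonly reported in the literature (weight ≈ 2994), which is an assumption of this sketch, not a result from the paper:

```python
def speed_reducer_weight(x):
    """Weight f(x) from the model above; x = (x1, ..., x7)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

def speed_reducer_g1(x):
    """First constraint g1(x) = 27/(x1*x2^2*x3) - 1 <= 0."""
    x1, x2, x3 = x[0], x[1], x[2]
    return 27.0 / (x1 * x2 ** 2 * x3) - 1.0
```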
Speed reducer design is yet another constrained real-world engineering design problem (Fig. 13) and falls under the CEC 2020 benchmark suite. It is recognized for its complex nature, complicated search space, and high dimensionality with constraints. The problem is solved using HSO, and the experimental results are shown in Tables 32 and 33. They show that HSO performs moderately well on this problem: TDO and TSA outperform it and are ranked first and second, respectively. The statistical and parameter graphs of HSO and the other algorithms are shown in Figs. 14 and 15, respectively, while the convergence curve of HSO is shown in Fig. 13.
7.1.5 Three-bar truss design
The goal of this engineering design problem is to build a three-bar truss [121] of minimum weight. The search space of this problem is highly constrained.
The mathematical model of this problem is as follows:
Consider: \(\left[{x}_{1}, {x}_{2}\right],\)
Minimize: \(f\left(x\right)=\left(2\sqrt{2}{x}_{1}+{x}_{2}\right)\times L,\)
Subject to:
The comparison table of HSO with various algorithms is as follows:
The three-bar truss problem is another entry in the CEC 2020 real-world constrained optimization benchmarking suite. Table 34 shows the comparison results of HSO on this problem. DEDS and SSA performed similarly and were ranked first, with MBA and PSO-DE ranked second and third. HSO was ranked sixth in this category, with a slight advantage over the remaining competing algorithms in determining the optimal weight. Figures 16 and 17 depict the convergence and parameter graphs of the three-bar truss design problem, respectively.
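Since the constraint set is not reproduced above, the sketch below uses the stress-constraint formulation and constants commonly reported for this problem in the literature (assumed values: \(L=100\), \(P=2\), \(\sigma =2\)); it is illustrative, not the authors' implementation.

```python
import math

# Assumed literature constants: bar length L, load P, allowable stress sigma.
L, P, SIGMA = 100.0, 2.0, 2.0

def truss_weight(x1, x2):
    """Objective: total weight (2*sqrt(2)*x1 + x2) * L."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L

def truss_constraints(x1, x2):
    """Stress constraints of the common formulation; feasible if all <= 0."""
    denom = math.sqrt(2.0) * x1**2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = 1.0 / (math.sqrt(2.0) * x2 + x1) * P - SIGMA
    return g1, g2, g3

# Near-optimal cross-sections reported in the literature (approximate).
w = truss_weight(0.7887, 0.408)
print(round(w, 3))  # ~263.9
```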
7.1.6 Optimal gas production facilities
Because the consumption of gas products is so high today, upgrading gas production facilities is imperative. Supplying the required production facilities [124] everywhere is quite challenging, since many locations cannot host them. Determining the ideal capacities of the production facilities that together form an oxygen producing and storing system is therefore an optimization problem.
This optimization problem can be modelled mathematically as:
Consider: \(X=[{X}_{1},{X}_{2}]\)
Minimize: \(F\left(x\right)=61.8+5.72{x}_{1}+0.2623{\left[\left(40-{x}_{1}\right)\mathrm{ln}\left(\frac{{x}_{2}}{200}\right)\right]}^{-0.85}+0.087\left(40-{x}_{1}\right)\mathrm{ln}\left(\frac{{x}_{2}}{200}\right)+700.23{x}_{2}^{-0.75}\)
Subject to: \({x}_{1}\ge 17.5, {x}_{2}\ge 300\)
Bounds: \(17.5\le {x}_{1}\le 40, 300\le {x}_{2}\le 600\)
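The objective above can be evaluated directly, and a coarse grid scan over the bounded box gives a simple baseline for comparison (an illustrative sketch only, not part of the study; note that \(x_1=40\) is excluded because the bracketed term vanishes there).

```python
import math

def gas_cost(x1, x2):
    """Objective F(x) as given above; valid for 17.5 <= x1 < 40, 300 <= x2 <= 600."""
    t = (40.0 - x1) * math.log(x2 / 200.0)
    return (61.8 + 5.72 * x1
            + 0.2623 * t**(-0.85)
            + 0.087 * t
            + 700.23 * x2**(-0.75))

# Coarse grid scan over the feasible box (x1 = 40 excluded: t = 0 is singular).
best = min((gas_cost(x1, x2), x1, x2)
           for x1 in [17.5 + i * 0.5 for i in range(45)]
           for x2 in [300 + j * 10 for j in range(31)])
print(best)
```

On this grid the minimum sits on the boundary at \(x_1=17.5\), \(x_2=600\), which matches the intuition that the \(5.72x_1\) term dominates the objective.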
Determining the optimal capacity of gas production facilities is another IEEE CEC-2020-based constrained optimization problem. HSO was evaluated on this problem and outperformed four other algorithms (Fig. 18). With only 120 evaluations, HSO obtained the best score and ranked first, followed by ALO and DE, in that order. The comparison of HSO with the other algorithms on this problem and the parameter graph are displayed in Table 35 and Fig. 19, respectively. The convergence curve of this problem is illustrated in Fig. 18.
7.2 Unconstrained real-world engineering design problems
7.2.1 Gear train design problem
Cost minimization of the gear ratio of a complex gear train [125] is the goal of this unconstrained discrete optimization gear train design problem. In terms of the numbers of teeth \({n}_{1},\dots ,{n}_{4}\) of the four gears, the gear ratio is defined as \(\frac{{n}_{3}{n}_{2}}{{n}_{1}{n}_{4}}\).
Four gears make up the gear design problem, and the discrepancy between the required gear ratio (1/6.931) and the realised gear ratio should be as small as possible. The decision variables are the numbers of teeth on the gearwheels. The goal is therefore to determine the tooth configuration of the four-gear train that best matches the required gear ratio.
The mathematical model of the problem is:
Consider:\(X=\left[{X}_{1}, {X}_{2}, {X}_{3}, {X}_{4}\right]=\left[{n}_{1}, {n}_{2}, {n}_{3},{n}_{4}\right],\)
Minimize: \(f\left(X\right)={\left(\frac{1}{6.931}-\frac{{X}_{3}{X}_{2}}{{X}_{1}{X}_{4}}\right)}^{2},\)
Subject to: \(12\le {X}_{1}, {X}_{2}, {X}_{3}, {X}_{4}\le 60,\)
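Because the teeth counts are integers in \([12, 60]\), the search space is small enough for an exhaustive check, which makes this problem a convenient sanity test for any metaheuristic's reported optimum. A minimal sketch (illustrative, not the authors' implementation); it exploits the symmetry of the objective in \((x_2,x_3)\) and in \((x_1,x_4)\):

```python
def gear_error(x1, x2, x3, x4):
    """Squared deviation of the realised gear ratio from the target 1/6.931."""
    return (1.0 / 6.931 - (x3 * x2) / (x1 * x4)) ** 2

# Exhaustive search: the objective is symmetric in (x2, x3) and in (x1, x4),
# so each pair can be enumerated in sorted order.
best = (float("inf"), None)
for x1 in range(12, 61):
    for x4 in range(x1, 61):
        for x2 in range(12, 61):
            for x3 in range(x2, 61):
                e = gear_error(x1, x2, x3, x4)
                if e < best[0]:
                    best = (e, (x1, x2, x3, x4))
print(best)
```

The best reachable error is on the order of \(10^{-12}\), so any algorithm reporting a substantially larger value has not found the global optimum.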
The gear train design problem is an unconstrained real-world engineering design problem (Fig. 20). Table 36 shows the results of testing HSO on this problem against eight other competing algorithms. HSO excelled in this category, finishing first with 500 total evaluations; ALO and CS were ranked second and third, respectively. Figure 21 shows the parameter graph of HSO and the other algorithms, while Fig. 20 shows the convergence curve of HSO on this problem.
7.2.2 Parameter estimation for frequency-modulated (FM) sound waves
Parameter estimation for frequency-modulated (FM) sound waves [126] is one of the most challenging problems. FM sound-wave synthesis is among the most demanding components of today's modern music systems and plays a significant role in sound systems. The main difficulty in optimizing the FM synthesizer parameters is that the problem has six dimensions. The six-parameter vector \(X\) is as follows:
\(X=\left[{a}_{1}, {\omega }_{1}, {a}_{2}, {\omega }_{2}, {a}_{3}, {\omega }_{3}\right].\) This vector is used to optimize a sound wave via the equation below. This is one of the most difficult problems because it is multimodal with a strong mathematical underpinning, and it has been optimized by various nature-inspired optimization algorithms. The global minimum of this problem is \(f\left(\overrightarrow{{X}_{sol}}\right)=0\). The mathematical model of this problem is as follows:
Here \(\theta =2\pi /100\), the parameters are defined in the range \(\left[-6.4, 6.35\right]\), and the cost function is calculated as the sum of squared errors between the estimated wave and the target wave, as follows:
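The wave equations referenced above are not reproduced in the text. The sketch below follows the standard formulation of this benchmark from the literature (assumed here to be the one intended), in which the estimated and target waves are nested sinusoids and the cost is the sum of squared errors over \(t=0,\dots ,100\):

```python
import math

THETA = 2.0 * math.pi / 100.0  # theta = 2*pi/100

def estimated_wave(x, t):
    """y(t) produced by the six-parameter FM synthesizer X = [a1,w1,a2,w2,a3,w3]."""
    a1, w1, a2, w2, a3, w3 = x
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

def target_wave(t):
    """Target wave y0(t) of the standard formulation (assumed)."""
    return 1.0 * math.sin(5.0 * t * THETA
                          - 1.5 * math.sin(4.8 * t * THETA
                                           + 2.0 * math.sin(4.9 * t * THETA)))

def fm_cost(x):
    """Sum of squared errors over t = 0..100; the known global optimum is 0."""
    return sum((estimated_wave(x, t) - target_wave(t)) ** 2 for t in range(101))

# The parameter vector that reproduces the target drives the cost to zero.
print(fm_cost([1.0, 5.0, -1.5, 4.8, 2.0, 4.9]))  # prints 0.0
```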
Frequency-modulated sound waves represent an unconstrained real-world engineering design problem. HSO yields only marginal results on this problem: it achieved satisfactory results and ranked fifth on the minimization task, whereas GWO and MFO achieved outstanding results and ranked first and second, respectively. The results of all algorithms on this problem are summarised in Table 37. Figures 22 and 23 depict the parameters graph and convergence curve for this problem, respectively.
8 Effect of the proposed algorithm (HSO) on training multilayer perceptron
The proposed HSO is applied in this section to the training of a multilayer perceptron (MLP). The term MLP refers to neural networks that have a single hidden layer and connections that only go in one direction between their neurons. In these MLPs, the first layer is the input layer and the last layer is the output layer. Given the weights, inputs, and biases, the outputs of the MLP are calculated with the following steps:
1.
The weighted sum of inputs is first calculated as follows:
$${s}_{j}=\sum_{i=1}^{n}{W}_{i,j}{X}_{i}-{\theta }_{j}, \quad j=1,2,\dots ,h$$
where \({W}_{i,j}\) represents the connection weight from the \({i}^{th}\) input node to the \({j}^{th}\) node in the hidden layer, \({X}_{i}\) is the \({i}^{th}\) input, and \({\theta }_{j}\) is the threshold (bias) of the \({j}^{th}\) node in the hidden layer.
2.
The output of each node in the hidden layer is then calculated as follows:
$${S}_{j}=\frac{1}{1+{e}^{{-s}_{j}}}=sigmoid\left({s}_{j}\right), \quad j=1,2,\dots ,h$$
3.
Finally, the outputs are computed from the outputs of the hidden-layer nodes:
$${o}_{k}=\sum_{j=1}^{h}{W}_{j,k}{S}_{j}-{\theta }_{k}{\prime}, \quad k=1,2,\dots ,m$$
$${o}_{k}=\frac{1}{1+{e}^{{-o}_{k}}}=sigmoid\left({o}_{k}\right), \quad k=1,2,\dots ,m$$
where \({W}_{j,k}\) indicates the connection weight from the \({j}^{th}\) hidden node to the \({k}^{th}\) output node and \({\theta }_{k}{\prime}\) is the threshold (bias) of the \({k}^{th}\) output node. The weights and biases of an MLP are thus what ultimately determine its output for a given set of inputs, and the goal of training an MLP is to achieve the highest classification accuracy on both training and testing samples. A common metric for evaluating an MLP is the mean square error (MSE), defined as follows:
$$MSE=\sum_{i=1}^{m}{({o}_{i}^{k}-{d}_{i}^{k})}^{2}$$where \({d}_{i}^{k}\) and \({o}_{i}^{k}\) denote the desired and actual outputs of the \({i}^{th}\) output node when the \({k}^{th}\) training sample is presented, and \(m\) is the number of outputs. The MLP is effective when it is optimised to produce the smallest mean squared error across all training samples, so consider the following definition of the average mean squared error \(F\):
$$F=\frac{\sum_{k=1}^{N}\sum_{i=1}^{m}{({o}_{i}^{k}-{d}_{i}^{k})}^{2}}{N}$$Here \(N\) denotes the number of training samples. Training an MLP thus amounts to minimizing the objective function \(F\) above, with the weights and biases as decision variables.
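The objective \(F\) can be evaluated directly from a flat decision vector, which is how a metaheuristic such as HSO interacts with the MLP. A minimal sketch, with illustrative layer sizes and the XOR data (the flattening order is an assumption, not the authors' implementation):

```python
import math, random

def mlp_mse(params, samples, n_in, n_hidden, n_out):
    """Average MSE objective F for a single-hidden-layer sigmoid MLP.

    `params` flattens, in this assumed order: input-to-hidden weights
    (n_hidden*n_in), hidden biases (n_hidden), hidden-to-output weights
    (n_out*n_hidden), output biases (n_out).
    """
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    p = iter(params)
    W = [[next(p) for _ in range(n_in)] for _ in range(n_hidden)]
    th = [next(p) for _ in range(n_hidden)]
    V = [[next(p) for _ in range(n_hidden)] for _ in range(n_out)]
    th2 = [next(p) for _ in range(n_out)]
    total = 0.0
    for x, d in samples:
        # Hidden-layer outputs S_j, then final outputs o_k, as in the equations above.
        S = [sig(sum(W[j][i] * x[i] for i in range(n_in)) - th[j]) for j in range(n_hidden)]
        O = [sig(sum(V[k][j] * S[j] for j in range(n_hidden)) - th2[k]) for k in range(n_out)]
        total += sum((O[k] - d[k]) ** 2 for k in range(n_out))
    return total / len(samples)

# XOR: 2 inputs, 2N+1 = 5 hidden nodes, 1 output; random weights in [-10, 10].
xor = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
dim = 5 * 2 + 5 + 1 * 5 + 1  # 21 decision variables
random.seed(0)
f = mlp_mse([random.uniform(-10, 10) for _ in range(dim)], xor, 2, 5, 1)
print(0.0 <= f <= 1.0)  # prints True: each squared output error is bounded by 1
```

An optimizer simply treats `params` as its search vector and `mlp_mse` as the fitness function to minimize.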
Fig. 24 illustrates this process of training an MLP.
In this section, the effectiveness of the proposed HSO for training MLPs is evaluated on four different classification datasets, namely XOR, balloon, iris, and breast cancer. These datasets were obtained from the Machine Learning Repository at the University of California, Irvine (UCI). The experimental results are compared with those of eight popular competitor metaheuristic algorithms: Sine Cosine Algorithm (SCA) [52], Equilibrium Optimizer (EO) [51], Whale Optimization Algorithm (WOA) [14], Grey Wolf Optimizer (GWO) [13], Moth Flame Optimization (MFO) [107], Arithmetic Optimization Algorithm (AOA) [68], Particle Swarm Optimization (PSO) [127], and Multiverse Optimizer (MVO) [128]. In this scenario, the decision variables are constrained to the range [-10, 10]. For the XOR and balloon datasets, the population size of all algorithms is set to 50, while for the other datasets it is set to 200. Each algorithm has a hard limit of 250 iterations. Table 38 provides information about the datasets. The number of hidden nodes is set to 2N+1 in this work, where N is the number of inputs of the dataset. Table 39 displays the mean and standard deviation of the objective function F (average MSE) over 10 replicate runs of each algorithm on each dataset, together with the classification success rates of all algorithms. The proposed HSO outperforms the other classical metaheuristic algorithms in terms of MSE, standard deviation, classification rate, and statistical outcomes, suggesting that it is an effective MLP trainer. The reduced MSE stems from HSO's enhanced ability to avoid local optima during the search procedure. The comparative study thus shows that the proposed HSO is preferable to other classical metaheuristics for practical optimization tasks such as MLP training.
9 Conclusion and future scope
In summary, this research presents the Hyperbolic Sine Optimizer (HSO), an innovative meta-heuristic algorithm developed to address scientific optimization issues. What distinguishes HSO from traditional approaches is its emphasis on mathematical principles, particularly the investigation of algebraic and hyperbolic \(sinh\) function in the context of population dynamics. This study stands out by uniquely focusing on the behavioral patterns linked to hyperbolic function convergence in both exploration and exploitation phases, thereby promoting the active involvement of population members and improving overall efficiency.
The conducted experiments, spanning 23 standard and 42 varied benchmark functions, including prominent IEEE CEC-2015 and CEC-2017 benchmarks, demonstrate the superior performance of HSO. Notably, HSO outperforms other metaheuristics for unimodal functions in 30 dimensions and achieves exact global optima for specific functions across various dimensions. The stability and predictability of HSO are further affirmed in results for 100, 500, and 1000 dimensions. In addressing multimodal functions, where local stagnation avoidance is crucial, HSO exhibits remarkable performance, surpassing various metaheuristics for specific functions and providing exceptional outcomes for others. The evaluation extends to 42 supplementary functions, showcasing HSO's ability to attain precise global optima across diverse categories. Extensive analysis and qualitative assessment, including trajectory plots, average fitness values, contour plots, and diversity graphs, reinforce the optimization prowess of HSO. Real-world engineering design problems, both constrained and unconstrained, further validate HSO's superiority in solution quality and computational efficiency over well-known optimization algorithms. Moreover, our exploration extends to the application of HSO in training multilayer perceptrons, revealing its superiority over alternative metaheuristics.
As we look towards the future, this study identifies key research directions to enhance optimization strategies and problem-solving capabilities. These directions include combining metaheuristics, developing real-time adaptive metaheuristics for dynamic environments, scaling for large problems using parallel and distributed computing, and improving explainability and interpretability for critical decision-making. As a future extension of this research, incorporating arithmetic and evolutionary operators, along with integrating complementary methods such as Levy flight, disruption, mutation, and opposition-based learning, is planned. This aims to propose new and enhanced versions of HSO tailored for optimization problems involving binary, discrete, or multiple objectives. Boosting the efficiency of HSO will involve further integration with stochastic elements like local search or global search strategies. Exploring the diverse applications of HSO in various fields signifies a significant advancement and opens avenues for continued innovation in meta-heuristic optimization.
References
Dinesh-Babu, L.D., Venkata Krishna, P.: Honey bee behavior inspired load balancing of tasks in cloud computing environments. Appl Soft Comput 13(5), 2292–2303 (2013). https://doi.org/10.1016/j.asoc.2013.01.025
Li, K., Xu, G., Zhao, G., Dong, Y., Wang, D.: Cloud task scheduling based on load balancing ant colony optimization,” In: 2011 Sixth Annual Chinagrid Conference, 2011, pp. 3–9. doi: https://doi.org/10.1109/ChinaGrid.2011.17
Goh, A.T.C.: Back-propagation neural networks for modeling complex systems. Artif. Intell. Eng. 9(3), 143–151 (1995). https://doi.org/10.1016/0954-1810(94)00011-S
Pedrycz, W.: Fuzzy sets in pattern recognition: methodology and methods. Pattern Recognit. 23(1), 121–146 (1990). https://doi.org/10.1016/0031-3203(90)90054-O
Yin, P.-Y.: A fast scheme for optimal thresholding using genetic algorithms. Signal Process. 72(2), 85–95 (1999). https://doi.org/10.1016/S0165-1684(98)00167-4
Moghadam, A., Seifi, A.R.: Fuzzy-TLBO optimal reactive power control variables planning for energy loss minimization. Energy Convers. Manag. (2014). https://doi.org/10.1016/j.enconman.2013.09.036
Abdel-Basset, M., Mohamed, R., Jameel, M., Abouhawwash, M.: Spider wasp optimizer: a novel meta-heuristic optimization algorithm. Artif. Intell. Rev. 56(10), 11675–11738 (2023). https://doi.org/10.1007/s10462-023-10446-y
Kaveh, A., Zolghadr, A.: Cyclical parthenogenesis algorithm: a new meta-heuristic algorithm. Asian J. Civ. Eng. 18(5), 673–701 (2017)
Ettappan, M., Vimala, V., Ramesh, S., Kesavan, V.T.: Optimal reactive power dispatch for real power loss minimization and voltage stability enhancement using Artificial Bee Colony Algorithm. Microprocess. Microsyst. 76, 103085 (2020). https://doi.org/10.1016/j.micpro.2020.103085
Kennedy, J., Eberhart, R.: Particle swarm optimization, In: Proceedings of ICNN’95 - International conference on neural networks, vol. 4, pp. 1942–1948, (1995) doi: https://doi.org/10.1109/ICNN.1995.488968.
Dorigo, M., Di Caro G.: Ant colony optimization: a new meta-heuristic, In: Proceedings of the 1999 congress on evolutionary computation-CEC99 (Cat. No. 99TH8406), (1999), vol. 2, pp. 1470–1477
Karaboga, D., Basturk, B.: Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems, vol. 4529, pp. 789–798. (2007) doi https://doi.org/10.1007/978-3-540-72950-1_77.
Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014). https://doi.org/10.1016/j.advengsoft.2013.12.007
Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016). https://doi.org/10.1016/j.advengsoft.2016.01.008
Mirjalili, S., Gandomi, A.H., Mirjalili, S.Z., Saremi, S., Faris, H., Mirjalili, S.M.: Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 114, 163–191 (2017). https://doi.org/10.1016/j.advengsoft.2017.07.002
Heidari, A.A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., Chen, H.: Harris hawks optimization: algorithm and applications. Futur. Gener. Comput. Syst. 97, 849–872 (2019). https://doi.org/10.1016/j.future.2019.02.028
Faramarzi, A., Heidarinejad, M., Mirjalili, S., Gandomi, A.H.: Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst. Appl. 152, 113377 (2020). https://doi.org/10.1016/j.eswa.2020.113377
Dhiman, G., Kumar, V.: Seagull optimization algorithm: theory and its applications for large-scale industrial engineering problems. Knowledge-Based Syst. 165, 169–196 (2019). https://doi.org/10.1016/j.knosys.2018.11.024
Wang, G.-G., Deb, S., Cui, Z.: Monarch butterfly optimization. Neural Comput. Appl. 31(7), 1995–2014 (2019). https://doi.org/10.1007/s00521-015-1923-y
Yazdani, M., Jolai, F.: Lion optimization algorithm (LOA): a nature-inspired metaheuristic algorithm. J. Comput. Des. Eng. 3(1), 24–36 (2016). https://doi.org/10.1016/j.jcde.2015.06.003
Kallioras, N.A., Lagaros, N.D., Avtzis, D.N.: Pity beetle algorithm – a new metaheuristic inspired by the behavior of bark beetles. Adv. Eng. Softw. 121, 147–166 (2018). https://doi.org/10.1016/j.advengsoft.2018.04.007
Jain, M., Singh, V., Rani, A.: A novel nature-inspired algorithm for optimization: squirrel search algorithm. Swarm Evol. Comput. 44, 148–175 (2019). https://doi.org/10.1016/j.swevo.2018.02.013
Arora, S., Singh, S.: Butterfly optimization algorithm: a novel approach for global optimization. Soft. Comput. 23(3), 715–734 (2019). https://doi.org/10.1007/s00500-018-3102-4
Li, S., Chen, H., Wang, M., Heidari, A.A., Mirjalili, S.: Slime mould algorithm: a new method for stochastic optimization. Futur. Gener. Comput. Syst. 111, 300–323 (2020). https://doi.org/10.1016/j.future.2020.03.055
Mohammadi-Balani, A., Dehghan Nayeri, M., Azar, A., Taghizadeh-Yazdi, M.: Golden eagle optimizer: a nature-inspired metaheuristic algorithm. Comput. Ind. Eng. 152, 107050 (2021). https://doi.org/10.1016/j.cie.2020.107050
Połap, D., Woźniak, M.: Red fox optimization algorithm. Expert Syst. Appl. 166, 114107 (2021). https://doi.org/10.1016/j.eswa.2020.114107
Zamani, H., Nadimi-Shahraki, M.H., Gandomi, A.H.: Starling murmuration optimizer: a novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 392, 114616 (2022). https://doi.org/10.1016/j.cma.2022.114616
Abualigah, L., Yousri, D., Abd Elaziz, M., Ewees, A.A., Al-qaness, M.A.A., Gandomi, A.H.: Aquila optimizer: a novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 157, 107250 (2021). https://doi.org/10.1016/j.cie.2021.107250
Pan, J.-S., Zhang, L.-G., Wang, R.-B., Snášel, V., Chu, S.-C.: Gannet optimization algorithm: a new metaheuristic algorithm for solving engineering optimization problems. Math. Comput. Simul 202, 343–373 (2022). https://doi.org/10.1016/j.matcom.2022.06.007
Pan, Q., Tang, J., Zhan, J., Li, H.: Bacteria phototaxis optimizer. Neural Comput. Appl. 35, 1–32 (2023). https://doi.org/10.1007/s00521-023-08391-6
Zhao, S., Zhang, T., Ma, S., Wang, M.: Sea-horse optimizer: a novel nature-inspired meta-heuristic for global optimization problems. Appl. Intell. 53(10), 11833–11860 (2023). https://doi.org/10.1007/s10489-022-03994-3
Dehghani, M., Montazeri, Z., Trojovská, E., Trojovský, P.: Coati Optimization Algorithm: a new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 259, 110011 (2023). https://doi.org/10.1016/j.knosys.2022.110011
Agushaka, J.O., Ezugwu, A.E., Abualigah, L.: Dwarf mongoose optimization algorithm. Comput. Methods Appl. Mech. Eng. 391, 114570 (2022). https://doi.org/10.1016/j.cma.2022.114570
Hashim, F.A., Hussien, A.G.: Snake optimizer: a novel meta-heuristic optimization algorithm. Knowledge-Based Syst. 242, 108320 (2022). https://doi.org/10.1016/j.knosys.2022.108320
Abualigah, L., Elaziz, M.A., Sumari, P., Geem, Z.W., Gandomi, A.H.: Reptile search algorithm (RSA): a nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 191, 116158 (2022). https://doi.org/10.1016/j.eswa.2021.116158
Ezugwu, A.E., Agushaka, J.O., Abualigah, L., Mirjalili, S., Gandomi, A.H.: Prairie dog optimization algorithm. Neural Comput. Appl. 34(22), 20017–20065 (2022). https://doi.org/10.1007/s00521-022-07530-9
Oyelade, O.N., Ezugwu, A.E.-S., Mohamed, T.I.A., Abualigah, L.: Ebola optimization search algorithm: a new nature-inspired metaheuristic optimization algorithm. IEEE Access 10, 16150–16177 (2022). https://doi.org/10.1109/ACCESS.2022.3147821
Holland, J.H.: Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT press, Cambridge (1992)
Storn, R., Price, K.: Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11(4), 341–359 (1997). https://doi.org/10.1023/A:1008202821328
Simon, D.: Biogeography-based optimization. IEEE Trans. Evol. Comput. 12(6), 702–713 (2008). https://doi.org/10.1109/TEVC.2008.919004
Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.: Natural Evolution Strategies, In: 2008 IEEE Congress on evolutionary computation (ieee world congress on computational intelligence), pp. 3381–3387. (2008) https://doi.org/10.1109/CEC.2008.4631255
Zhong, J., Feng, L., Ong, Y.: Gene expression programming: a survey [Review Article]. IEEE Comput. Intell. Mag. 12, 54–72 (2017). https://doi.org/10.1109/MCI.2017.2708618
Eusuff, M., Lansey, K., Pasha, F.: Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization. Eng. Optim. 38(2), 129–154 (2006). https://doi.org/10.1080/03052150500384759
Barkat Ullah, A.S.S.M., Sarker, R., Comfort, D., Lokan, C.: An agent-based memetic algorithm (AMA) for solving constrained optimization problems, In: 2007 IEEE congress on evolutionary computation, pp. 999–1006 (2007). https://doi.org/10.1109/CEC.2007.4424579
Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983). https://doi.org/10.1126/science.220.4598.671
Rashedi, E., Nezamabadi-pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. (Ny) 179(13), 2232–2248 (2009). https://doi.org/10.1016/j.ins.2009.03.004
Erol, O.K., Eksin, I.: A new optimization method: Big Bang-Big Crunch. Adv. Eng. Softw. 37(2), 106–111 (2006). https://doi.org/10.1016/j.advengsoft.2005.04.005
Kaveh, A.: Charged system search algorithm. In: Advances in metaheuristic algorithms for optimal design of structures, pp. 45–89. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-46173-1_3
Zhao, W., Zhang, Z., Wang, L.: Manta ray foraging optimization: an effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 87, 103300 (2020). https://doi.org/10.1016/j.engappai.2019.103300
Salimi, H.: Stochastic fractal search: a powerful metaheuristic algorithm. Knowledge-Based Syst. 75, 1–18 (2015). https://doi.org/10.1016/j.knosys.2014.07.025
Faramarzi, A., Heidarinejad, M., Stephens, B., Mirjalili, S.: Equilibrium optimizer: a novel optimization algorithm. Knowledge-Based Syst. 191, 105190 (2020). https://doi.org/10.1016/j.knosys.2019.105190
Mirjalili, S.: SCA: a sine cosine algorithm for solving optimization problems. Knowledge-Based Syst. 96, 120–133 (2016). https://doi.org/10.1016/j.knosys.2015.12.022
Eskandar, H., Sadollah, A., Bahreininejad, A., Hamdi, M.: Water cycle algorithm – a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 110–111, 151–166 (2012). https://doi.org/10.1016/j.compstruc.2012.07.010
Kaveh, A.: Thermal exchange metaheuristic optimization algorithm, pp. 733–782. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-59392-6_23
Hashim, F.A., Hussain, K., Houssein, E.H., Mabrouk, M.S., Al-Atabany, W.: Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems. Appl. Intell. 51(3), 1531–1551 (2021). https://doi.org/10.1007/s10489-020-01893-z
Abdel-Basset, M., Mohamed, R., Azeem, S.A.A., Jameel, M., Abouhawwash, M.: Kepler optimization algorithm: a new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowledge-Based Syst. 268, 110454 (2023). https://doi.org/10.1016/j.knosys.2023.110454
Hashim, F.A., Mostafa, R.R., Hussien, A.G., Mirjalili, S., Sallam, K.M.: Fick’s Law Algorithm: a physical law-based algorithm for numerical optimization. Knowledge-Based Syst. 260, 110146 (2023). https://doi.org/10.1016/j.knosys.2022.110146
Qais, M.H., Hasanien, H.M., Alghuwainem, S., Loo, K.H.: Propagation search algorithm: a physics-based optimizer for engineering applications. Mathematics (2023). https://doi.org/10.3390/math11204224
Ahmadianfar, I., Heidari, A.A., Noshadian, S., Chen, H., Gandomi, A.H.: INFO: an efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. 195, 116516 (2022). https://doi.org/10.1016/j.eswa.2022.116516
Shi, Y.: Brain storm optimization algorithm. In: Advances in Swarm intelligence, pp. 303–309. Springer, Berlin (2011)
Rao, R.V., Savsani, V.J., Vakharia, D.P.: Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Des. 43(3), 303–315 (2011). https://doi.org/10.1016/j.cad.2010.12.015
Wang, F.-S., Chen, L.-H.: Tabu search. In: Dubitzky, W., Wolkenhauer, O., Cho, K.-H., Yokota, H. (eds.) Encyclopedia of systems biology, p. 2120. Springer, New York (2013). https://doi.org/10.1007/978-1-4419-9863-7_413
Kim, J.H.: Harmony search algorithm: a unique music-inspired algorithm. Procedia Eng. 154, 1401–1405 (2016). https://doi.org/10.1016/j.proeng.2016.07.510
Askari, Q., Younas, I., Saeed, M.: Political optimizer: a novel socio-inspired meta-heuristic for global optimization. Knowledge-Based Syst. 195, 105709 (2020). https://doi.org/10.1016/j.knosys.2020.105709
Atashpaz-Gargari, E., Lucas, C.: Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition, In: 2007 IEEE congress on evolutionary computation, pp. 4661–4667. (2007) doi: https://doi.org/10.1109/CEC.2007.4425083
Husseinzadeh Kashan, A.: League championship algorithm (LCA): an algorithm for global optimization inspired by sport championships. Appl. Soft Comput. 16, 171–200 (2014). https://doi.org/10.1016/j.asoc.2013.12.005
Jahangiri, M., Hadianfard, M.A., Najafgholipour, M.A., Jahangiri, M., Gerami, M.R.: Interactive autodidactic school: a new metaheuristic optimization algorithm for solving mathematical and structural design optimization problems. Comput. Struct. 235, 106268 (2020). https://doi.org/10.1016/j.compstruc.2020.106268
Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M., Gandomi, A.H.: The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng 376, 113609 (2021). https://doi.org/10.1016/j.cma.2020.113609
Dehghani, P., Milková, E.: Language education optimization: a new human-based metaheuristic algorithm for solving optimization problems. Comput. Model. Eng. Sci. 136, 1–47 (2023). https://doi.org/10.32604/cmes.2023.025908
Givi, H., Hubálovská, M.: Skill optimization algorithm: a new human-based metaheuristic technique. Comput. Mater. Contin. 74, 179–202 (2023). https://doi.org/10.32604/cmc.2023.030379
Rahman, C.M.: Group learning algorithm: a new metaheuristic algorithm. Neural Comput. Appl. 35(19), 14013–14028 (2023). https://doi.org/10.1007/s00521-023-08465-5
Zhang, Q., Gao, H., Zhan, Z.-H., Li, J., Zhang, H.: Growth optimizer: a powerful metaheuristic algorithm for solving continuous and discrete global optimization problems. Knowledge-Based Syst. 261, 110206 (2023). https://doi.org/10.1016/j.knosys.2022.110206
Fakhouri, H., Hamad, F., Alawamrah, A.: Success history intelligent optimizer. J. Supercomput. (2022). https://doi.org/10.1007/s11227-021-04093-9
Abdulhameed, S., Rashid, T.A.: Child drawing development optimization algorithm based on child’s cognitive development. Arab. J. Sci. Eng. 47(2), 1337–1351 (2022). https://doi.org/10.1007/s13369-021-05928-6
Zamani, H., Nadimi-Shahraki, M.H., Gandomi, A.H.: CCSA: conscious neighborhood-based crow search algorithm for solving global optimization problems. Appl. Soft Comput. 85, 105583 (2019). https://doi.org/10.1016/j.asoc.2019.105583
Zamani, H., Nadimi-Shahraki, M.H., Gandomi, A.H.: QANA: quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell. 104, 104314 (2021). https://doi.org/10.1016/j.engappai.2021.104314
Nadimi-Shahraki, M.H., Zamani, H., Mirjalili, S.: Enhanced whale optimization algorithm for medical feature selection: a COVID-19 case study. Comput. Biol. Med. 148, 105858 (2022). https://doi.org/10.1016/j.compbiomed.2022.105858
Fatahi, A., Nadimi-Shahraki, M.H., Zamani, H.: An improved binary quantum-based avian navigation optimizer algorithm to select effective feature subset from medical data: a COVID-19 case study. J. Bionic Eng. (2023). https://doi.org/10.1007/s42235-023-00433-y
Nadimi-Shahraki, M.H., Fatahi, A., Zamani, H., Mirjalili, S.: Binary approaches of quantum-based avian navigation optimizer to select effective features from high-dimensional medical data. Mathematics (2022). https://doi.org/10.3390/math10152770
Nadimi-Shahraki, M.H., Asghari Varzaneh, Z., Zamani, H., Mirjalili, S.: Binary starling murmuration optimizer algorithm to select effective features from medical data. Appl. Sci. (2023). https://doi.org/10.3390/app13010564
Barua, S., Merabet, A.: Lévy arithmetic algorithm: an enhanced metaheuristic algorithm and its application to engineering optimization. Expert Syst. Appl. 241, 122335 (2024). https://doi.org/10.1016/j.eswa.2023.122335
Nama, S., Saha, A.K., Chakraborty, S., Gandomi, A.H., Abualigah, L.: Boosting particle swarm optimization by backtracking search algorithm for optimization problems. Swarm Evol. Comput. 79, 101304 (2023). https://doi.org/10.1016/j.swevo.2023.101304
Chakraborty, P., Nama, S., Saha, A.K.: A hybrid slime mould algorithm for global optimization. Multimed. Tools Appl. 82(15), 22441–22467 (2023). https://doi.org/10.1007/s11042-022-14077-3
Sharma, S., Saha, A.K., Roy, S., Mirjalili, S., Nama, S.: A mixed sine cosine butterfly optimization algorithm for global optimization and its application. Cluster Comput. 25(6), 4573–4600 (2022). https://doi.org/10.1007/s10586-022-03649-5
Nama, S., Saha, A.K., Sharma, S.: Performance up-gradation of symbiotic organisms search by backtracking search algorithm. J. Ambient. Intell. Humaniz. Comput. 13(12), 5505–5546 (2022). https://doi.org/10.1007/s12652-021-03183-z
Chakraborty, S., Nama, S., Saha, A.K.: An improved symbiotic organisms search algorithm for higher dimensional optimization problems. Knowledge-Based Syst. 236, 107779 (2022). https://doi.org/10.1016/j.knosys.2021.107779
Nama, S., Saha, A.K.: A bio-inspired multi-population-based adaptive backtracking search algorithm. Cognit. Comput. 14(2), 900–925 (2022). https://doi.org/10.1007/s12559-021-09984-w
Nama, S.: A modification of I-SOS: performance analysis to large scale functions. Appl. Intell. 51(11), 7881–7902 (2021). https://doi.org/10.1007/s10489-020-01974-z
Nama, S., Saha, A.K.: A new parameter setting-based modified differential evolution for function optimization. Int. J. Model. Simulation Sci. Comput. 11(4), 2050029 (2020). https://doi.org/10.1142/S1793962320500294
Nama, S.: A novel improved SMA with quasi reflection operator: Performance analysis, application to the image segmentation problem of Covid-19 chest X-ray images. Appl. Soft Comput. 118, 108483 (2022). https://doi.org/10.1016/j.asoc.2022.108483
Nama, S., Sharma, S., Saha, A.K., Gandomi, A.H.: A quantum mutation-based backtracking search algorithm. Artif. Intell. Rev. 55(4), 3019–3073 (2022). https://doi.org/10.1007/s10462-021-10078-0
Sharma, S., Chakraborty, S., Saha, A.K., Nama, S., Sahoo, S.K.: mLBOA: a modified butterfly optimization algorithm with lagrange interpolation for global optimization. J. Bionic Eng. 19(4), 1161–1176 (2022). https://doi.org/10.1007/s42235-022-00175-3
Sahoo, S.K., Saha, A.K., Nama, S., Masdari, M.: An improved moth flame optimization algorithm based on modified dynamic opposite learning strategy. Artif. Intell. Rev. 56(4), 2811–2869 (2023). https://doi.org/10.1007/s10462-022-10218-0
Abdel-Basset, M., Mohamed, R., Jameel, M., Abouhawwash, M.: Nutcracker optimizer: a novel naturE−inspired metaheuristic algorithm for global optimization and engineering design problems. KnowledgE−Based Syst. 262, 110248 (2023). https://doi.org/10.1016/j.knosys.2022.110248
Deng, L., Liu, S.: Snow ablation optimizer: a novel metaheuristic technique for numerical optimization and engineering design. Expert Syst. Appl. 225, 120069 (2023). https://doi.org/10.1016/j.eswa.2023.120069
Han, M., Du, Z., Yuen, K.F., Zhu, H., Li, Y., Yuan, Q.: Walrus optimizer: a novel naturE−inspired metaheuristic algorithm. Expert Syst. Appl. 239, 122413 (2024). https://doi.org/10.1016/j.eswa.2023.122413
ALRahhal, H., Jamous, R.: AFOX: a new adaptive naturE−inspired optimization algorithm. Artif. Intell. Rev. 56(12), 15523–15566 (2023). https://doi.org/10.1007/s10462-023-10542-z
Xue, J., Shen, B.: Dung beetle optimizer: a new meta-heuristic algorithm for global optimization. J. Supercomput. 79(7), 7305–7336 (2023). https://doi.org/10.1007/s11227-022-04959-6
Ghaedi, A., Bardsiri, A.K., Shahbazzadeh, M.J.: Cat hunting optimization algorithm: a novel optimization algorithm. Evol. Intell. 16(2), 417–438 (2023). https://doi.org/10.1007/s12065-021-00668-w
Braik, M., Hammouri, A., Atwan, J., Al-Betar, M.A., Awadallah, M.A.: White Shark Optimizer: a novel bio-inspired meta-heuristic algorithm for global optimization problems. KnowledgE−Based Syst. 243, 108457 (2022). https://doi.org/10.1016/j.knosys.2022.108457
Hashim, F.A., Houssein, E.H., Hussain, K., Mabrouk, M.S., Al-Atabany, W.: Honey Badger Algorithm: new metaheuristic algorithm for solving optimization problems. Math. Comput. Simul 192, 84–110 (2022). https://doi.org/10.1016/j.matcom.2021.08.013
Zhong, C., Li, G., Meng, Z.: Beluga whale optimization: a novel naturE−inspired metaheuristic algorithm. KnowledgE−Based Syst. 251, 109215 (2022). https://doi.org/10.1016/j.knosys.2022.109215
Yang, X.-S.: Flower pollination algorithm for global optimization,” In: Unconventional Computation and Natural Computation, pp. 240–249 (2012).
Fister, jr I., Fister, I., Yang, X.-S., Fong, S., Zhuang, Y.: Bat algorithm: recent advances, In: CINTI 2014 - 15th IEEE International Symposium Computer Intelligences Informatics, Proceedings, pp. 163–167, (2014) doi: https://doi.org/10.1109/CINTI.2014.7028669
Johari, N., Zain, A., Mustaffa, N., Udin, A.: Firefly algorithm for optimization problem. Appl. Mech. Mater. (2013). https://doi.org/10.4028/www.scientific.net/AMM.421.512
Yang, X.-S., Deb, S.: Cuckoo search via Lévy flights. In: 2009 World congress on nature & biologically inspired computing (NaBIC), 2009, pp. 210–214.
Mirjalili, S.: Moth-flame optimization algorithm: a novel naturE−inspired heuristic paradigm. KnowledgE−Based Syst. 89, 228–249 (2015). https://doi.org/10.1016/j.knosys.2015.07.006
Kiran, M.S.: TSA: treE−seed algorithm for continuous optimization. Expert Syst. Appl. 42(19), 6686–6698 (2015). https://doi.org/10.1016/j.eswa.2015.04.055
Hussain, K., Salleh, M.N.M., Cheng, S., Shi, Y.: On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 31(11), 7665–7683 (2019). https://doi.org/10.1007/s00521-018-3592-0
Mishra, P., Singh, U., Pandey, C.M., Mishra, P., Pandey, G.: Application of student’s t-test, analysis of variance, and covariance. Ann. Card. Anaesth. 22(4), 407–411 (2019). https://doi.org/10.4103/aca.ACA_94_19
Jussila, J.J.: Using Friedman test for creating comparable group results of nonparametric innovation competence data using Friedman test for creating comparable group results of nonparametric innovation competence Data 2 specific features of nonnumeric and nonparametric, No. December 2008 (2014)
Gholizadeh, S., Danesh, M., Gheyratmand, C.: A new Newton metaheuristic algorithm for discrete performancE−based design optimization of steel moment frames. Comput. Struct. 234, 106250 (2020). https://doi.org/10.1016/j.compstruc.2020.106250
Moazzeni, A.R., Khamehchi, E.: Rain optimization algorithm (ROA): A new metaheuristic method for drilling optimization solutions. J. Pet. Sci. Eng. 195, 107512 (2020). https://doi.org/10.1016/j.petrol.2020.107512
Askari, Q., Saeed, M., Younas, I.: Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst. Appl. 161, 113702 (2020). https://doi.org/10.1016/j.eswa.2020.113702
Liu, Y., Li, R.: PSA: a photon search algorithm. J. Inf. Process. Syst. 16(2), 478–493 (2020)
Qais, M.H., Hasanien, H.M., Alghuwainem, S.: Transient search optimization: a new meta-heuristic optimization algorithm. Appl. Intell. 50(11), 3926–3941 (2020). https://doi.org/10.1007/s10489-020-01727-y
Anita, Yadav, A.: AEFA: artificial electric field algorithm for global optimization. Swarm Evol. Comput 48, 93–108 (2019). https://doi.org/10.1016/j.swevo.2019.03.013
Hosseini, E., Sadiq, A.S., Ghafoor, K.Z., Rawat, D.B., Saif, M., Yang, X.: Volcano eruption algorithm for solving optimization problems. Neural Comput. Appl. 33(7), 2321–2337 (2021). https://doi.org/10.1007/s00521-020-05124-x
Zhang, Y., Jin, Z.: Group teaching optimization algorithm: a novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 148, 113246 (2020). https://doi.org/10.1016/j.eswa.2020.113246
Sharma, R., Pachauri, A.: A review of pressure vessels regarding their design, manufacturing, testing, materials, and inspection. Mater. Today Proc. (2023). https://doi.org/10.1016/j.matpr.2023.03.258
Erdoğan Yildirim, A., Karci, A.: Application of three bar truss problem among engineering design optimization problems using artificial atom algorithm, pp. 1–5 (2018) doi https://doi.org/10.1109/IDAP.2018.8620762.
Celik, Y., Kutucu, H.: Solving the tension/compression spring design problem by an improved firefly algorithm. In: IDDM, (2018)
Lin, M.-H., Tsai, J.-F., Hu, N.-Z., Chang, S.-C.: Design optimization of a speed reducer using deterministic techniques. Math. Probl. Eng. 2013, 1–7 (2013). https://doi.org/10.1155/2013/419043
Krishnamoorthy, D., Fjalestad, K., Skogestad, S.: Optimal operation of oil and gas production using simple feedback control structures. Control. Eng. Pract. 91, 104107 (2019). https://doi.org/10.1016/j.conengprac.2019.104107
Babu, A.H., Naresh, P., Madhava, V., Reddy, M.S.: Minimum weight optimization of a gear train by using GA. IJETAS 1, 43–50 (2016)
Bogere, P., Akol, R., Butime, J.: Optimization of frequency modulation band for terrestrial radio broadcasting: the Case of Uganda, (2015) doi: https://doi.org/10.1109/COMCAS.2015.7360389.
Eberhart, Shi, Y.: Particle swarm optimization: development, applications and resources, In: Proceedings of the IEEE conference on evolutionary computation, ICEC, September, vol. 1, pp. 81–86 (2001) doi: https://doi.org/10.1109/CEC.2001.934374.
Mirjalili, S., Mirjalili, S., Hatamlou, A.: Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput. Appl. (2015). https://doi.org/10.1007/s00521-015-1870-7
Acknowledgements
The first author wishes to express his gratitude to Doon University in Uttarakhand, India, for providing all of the essential resources for this study.
Author information
Contributions
Shivankur Thapliyal and Narender Kumar wrote the main manuscript text. All authors reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Thapliyal, S., Kumar, N. Hyperbolic Sine Optimizer: a new metaheuristic algorithm for high performance computing to address computationally intensive tasks. Cluster Comput 27, 6703–6772 (2024). https://doi.org/10.1007/s10586-024-04328-3