1 Introduction

Present-day information, knowledge-based, and expert systems operate with a limited feature space and are constrained by priorities and financial limits. Researchers therefore rely on a variety of algorithms to find practicable solutions and adequate information for problems in areas such as image segmentation [34], optimization [31], target tracking systems [50], QoS-aware and social recommendation [28,29,30], scheduling [40], gold price prediction [48], etc.

Optimization is the process of determining appropriate values for the variables of a given problem so as to minimize or maximize an objective function. Optimization problems arise in many fields of analysis, and solving one involves several steps. First, the parameters of the problem need to be defined; depending on the type of parameters, problems are categorized as continuous or discrete. Secondly, it is important to consider the restrictions applied to the parameters [42].

Constraints split optimization problems into constrained and unconstrained ones. Finally, the objectives of the problem should be examined and addressed [10, 36]. Mathematical optimization depends primarily on gradient-based information about the functions involved to obtain an optimum solution. While these methods are still widely used, they have several drawbacks. In particular, they are prone to local optima entrapment: a method may take a local solution for the global one and fail to reach the true optimum. They are also often unsuccessful on problems whose derivatives are unknown or computationally costly [39]. Stochastic optimization is another form of optimization that removes these major disadvantages [45].

Stochastic approaches rely on random operators to avoid local optima. They begin the optimization by generating one or a set of random candidate solutions. In contrast to mathematical optimization methods, there is no need to compute the gradient of a solution; candidates are simply evaluated with the objective function(s), and decisions on how to improve them are taken based on the measured fitness values. The problem is thus treated as a black box, which makes these methods very effective for real problems with unknown search spaces. Because of these advantages, stochastic methods are commonly used [37]. Nature-inspired, population-based methods are the most common among stochastic optimization methods [51].

These approaches emulate natural problem-solving strategies, often those employed by living species. Survival is the primary objective of every species, and they have evolved and adapted in various ways to accomplish this goal; it is therefore prudent to seek guidance from nature, the greatest and oldest optimizer on earth. Such algorithms fall into two major groups: single-solution and multi-solution based. In the first, a single random solution to a given problem is generated and improved; in the second, multiple solutions are generated and improved. Multi-solution methods are more common than single-solution approaches [38]. Multi-solution models have inherently higher local-optimum avoidance because several solutions are improved during the optimization process, and a solution stuck in a local optimum can be helped by the others to escape from it.

Since multiple solutions search a larger portion of the search space than single-solution methods, the likelihood of reaching the global optimum is higher. Commonly used single-solution methods are simulated annealing (SA) and hill climbing [14, 27]. Both are useful, but SA avoids local optima better owing to its stochastic cooling schedule. More recent single-solution methods include iterated local search and tabu search [18, 20, 32]. Popular multi-solution methods include particle swarm optimization (PSO) [15], genetic algorithms (GA) [23], differential evolution (DE) [46], and ant colony optimization (ACO) [11]. The GA method was influenced by Darwin's theory of evolution: solutions are viewed as individuals and solution parameters take the place of genes. Survival of the fittest is the key motivation of this method, where the fittest individuals contribute most to improving poor solutions.

Previous studies describe various swarm intelligence optimization methods such as the firefly algorithm (FA) [52, 57], dolphin echolocation (DEL) [25, 26], the grey wolf optimizer (GWO) [39], and the bat algorithm (BA) [53]. DEL and BA mimic the echolocation that dolphins and bats use to find prey and navigate, whereas FA mimics the flashing (mating) behaviour of fireflies. In the Cuckoo Search (CS) method [54, 56], the cuckoo's reproductive behaviour drives the optimization. GWO is a swarm method that imitates the hunting behaviour of grey wolves. Other methods, such as the states of matter search (SMS) [12, 13], utilize the different states of matter for problem optimization.

In contrast, the flower pollination algorithm (FPA) [55] draws on the pollination and survival behaviour of flowering plants. The question then arises: why are new methods needed when so many already exist? The answer lies in the No Free Lunch (NFL) theorem [49], which proves that no single optimization strategy can solve all optimization problems; averaged over all problems, all techniques in this area perform equally well. This theorem has motivated the fast-growing number of algorithms proposed in recent decades, and it is also one of the reasons behind this paper.

HGS is inspired by the social behaviour of animals, whose search for food is driven by their degree of hunger; the method is built on characteristics of this food search that are common across animals. In this paper, we propose a hybrid of two meta-heuristic methods, AOA and HGS. Because nature-inspired methods are gradient-free, structurally simple, better at avoiding local optima, and able to treat a problem as a black box, they are widely used in engineering and other problems [8, 21, 59] [34]. We are, therefore, still exploring the use of such methods to solve real-world problems.

The main contributions of the paper are:

  1. The application of AOA-HGS to global optimization gives better results when the experimental results are compared with HGSO, AOA, GOA, GWO, and SCA.

  2. AOA-HGS reduces the computational complexity and works efficiently for both high- and low-dimensional problems.

The paper is structured as follows: Section 2 expounds HGS, Section 3 discusses AOA, Section 4 presents the proposed work, Section 5 discusses the results, and Section 6 gives the conclusion and future scope.

2 Hunger games search (HGS) optimization

2.1 Approach food

To express this approaching behaviour mathematically, the following formulas are proposed to imitate the contraction mode [58]:

$$ \overrightarrow{X\left(t+1\right)}=\begin{cases}\overrightarrow{X(t)}\cdot \left(1+ randn(1)\right), & {r}_1<l\\ \overrightarrow{W_1}\cdot \overrightarrow{X_b}+\overrightarrow{R}\cdot \overrightarrow{W_2}\cdot \left|\overrightarrow{X_b}-\overrightarrow{X(t)}\right|, & {r}_1>l,\ {r}_2>E\\ \overrightarrow{W_1}\cdot \overrightarrow{X_b}-\overrightarrow{R}\cdot \overrightarrow{W_2}\cdot \left|\overrightarrow{X_b}-\overrightarrow{X(t)}\right|, & {r}_1>l,\ {r}_2<E\end{cases} $$
(1)

where \( \overrightarrow{R} \) is in the range [−a, a]; r1 and r2 are random numbers in the range [0, 1]; randn(1) is a random number drawn from a normal distribution; t denotes the current iteration; \( \overrightarrow{W_1} \) and \( \overrightarrow{W_2} \) are the hunger weights; \( \overrightarrow{X_b} \) is the location of a randomly chosen individual among the best individuals; \( \overrightarrow{X(t)} \) is the location of each individual; and the value of l is discussed in the parameter-setting experiment.

The formula of E is as follows:

$$ E=\operatorname{sech}\left(\left|F(i)- BF\right|\right) $$
(2)

where i ∈ {1, 2, …, n}; F(i) represents the fitness value of each individual, and BF is the best fitness obtained in the current iteration. sech is the hyperbolic secant function \( \left(\operatorname{sech}(x)=\frac{2}{e^x+{e}^{-x}}\right) \).

The formula of \( \overrightarrow{R} \) is as follows:

$$ \overrightarrow{R}=2\times a\times \mathit{\operatorname{rand}}-a $$
(3)
$$ a=2\times \left(1-\frac{t}{\mathit{\operatorname{Max}}\_ iter}\right) $$
(4)

where rand is a random number in the range [0, 1], and Max_iter stands for the maximum number of iterations.
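The piecewise rule in Eqs. (1)-(4) can be summarised in a short NumPy sketch; the function name, argument layout, and the default value of l are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def approach_food(x, x_best, w1, w2, F_i, BF, t, max_iter, l=0.08):
    """Hedged sketch of the HGS approach-food update, Eqs. (1)-(4).

    x, x_best : current position and the position of a best individual (length-Dim arrays)
    w1, w2    : hunger weights of this individual (scalars or length-Dim arrays)
    F_i, BF   : fitness of this individual and the best fitness so far
    l         : switching parameter (0.08 is only a placeholder value)
    """
    E = 1.0 / np.cosh(abs(F_i - BF))           # Eq. (2): sech(|F(i) - BF|)
    a = 2.0 * (1.0 - t / max_iter)             # Eq. (4): shrinks from 2 towards 0
    R = 2.0 * a * np.random.rand(x.size) - a   # Eq. (3): vector in [-a, a]

    r1, r2 = np.random.rand(), np.random.rand()
    if r1 < l:                                  # first branch of Eq. (1)
        return x * (1.0 + np.random.randn())
    if r2 > E:                                  # second branch: move around the food source
        return w1 * x_best + R * w2 * np.abs(x_best - x)
    return w1 * x_best - R * w2 * np.abs(x_best - x)  # third branch
```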

2.2 Hunger role

The starvation characteristics of individuals during the search are modelled mathematically as follows.

The formula of \( \overrightarrow{W_1} \) in Eq. (5) is as follows:

$$ \overrightarrow{W_1(i)}=\begin{cases} hungry(i)\cdot \frac{N}{SHungry}\times {r}_4, & {r}_3<l\\ 1, & {r}_3>l\end{cases} $$
(5)

The formula of \( \overrightarrow{W_2} \) in Eq. (6) is shown as follows:

$$ \overrightarrow{W_2(i)}=\left(1-\exp\left(-\left| hungry(i)- SHungry\right|\right)\right)\times {r}_5\times 2 $$
(6)

where hungry(i) represents the hunger of each individual; N is the number of individuals; SHungry is the sum of the hunger of all individuals, i.e. sum(hungry); and r3, r4, and r5 are random numbers in the range [0, 1].

The formula for hungry(i) is provided below:

$$ hungry(i)=\begin{cases}0, & AllFitness(i)= BF\\ hungry(i)+H, & AllFitness(i)\ne BF\end{cases} $$
(7)

where AllFitness(i) preserves the fitness of each individual in the current iteration.

The formula for H can be seen as follows:

$$ TH=\frac{F(i)- BF}{WF- BF}\times {r}_6\times 2\times \left( UB- LB\right) $$
(8)
$$ H=\left\{\begin{array}{c} LH\times \left(1+r\right),\kern2.75em TH< LH\\ {} TH,\kern4.75em TH\ge LH\end{array}\right. $$
(9)

where r6 is a random number in the range [0, 1]; F(i) is the fitness value of each individual; BF and WF are the best and worst fitness obtained in the current iteration, respectively; and UB and LB are the upper and lower bounds of the search space. The hunger increment H is limited below by a lower bound LH.
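For the hunger bookkeeping of Eqs. (5)-(9), a vectorised sketch such as the following may help; the value of LH, the small stabilising constants, and the reuse of the switching parameter l are assumptions made purely for illustration.

```python
import numpy as np

def hunger_weights(hungry, fitness, BF, WF, UB, LB, LH=100.0, l=0.08):
    """Hedged sketch of the hunger update and weights W1, W2, Eqs. (5)-(9).

    hungry  : current hunger level of every individual, shape (N,)
    fitness : AllFitness of the current iteration, shape (N,)
    BF, WF  : best and worst fitness of the current iteration
    UB, LB  : scalar upper and lower bounds of the search space
    LH      : lower bound on the hunger increment H (illustrative value)
    """
    N = hungry.size
    r6 = np.random.rand(N)
    # Eq. (8): tentative hunger increment TH (small epsilon keeps the division defined)
    TH = (fitness - BF) / (WF - BF + 1e-12) * r6 * 2.0 * (UB - LB)
    H = np.where(TH < LH, LH * (1.0 + np.random.rand(N)), TH)        # Eq. (9)
    # Eq. (7): the best individual is not hungry, all others accumulate H
    hungry = np.where(fitness == BF, 0.0, hungry + H)

    SHungry = hungry.sum()
    r3, r4, r5 = np.random.rand(N), np.random.rand(N), np.random.rand(N)
    W1 = np.where(r3 < l, hungry * N / (SHungry + 1e-12) * r4, 1.0)  # Eq. (5)
    W2 = (1.0 - np.exp(-np.abs(hungry - SHungry))) * r5 * 2.0        # Eq. (6)
    return hungry, W1, W2
```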


3 Arithmetic optimization algorithm

AOA [2] is a new meta-heuristic method that uses the common arithmetic operations Division (D), Addition (A), Multiplication (M), and Subtraction (S), as shown in Fig. 1; it is modelled to perform optimization across a wide variety of search domains [8]. Population-based algorithms (PBA) commonly launch their improvement process from a set of randomly generated candidate solutions. These candidates are then enhanced incrementally by a set of optimization rules and evaluated repeatedly with a particular objective function; this is the basis of optimization techniques. Although a PBA stochastically searches for a good solution to an optimization problem, finding it in a single run is not guaranteed; however, a large set of candidate solutions and of optimization iterations increases the chance of locating the global optimum [21]. Despite the differences among meta-heuristic methods [43, 44], a PBA optimization process comprises two phases: exploration and exploitation. The former refers to a broad coverage of the search space by the search agents so as to bypass local solutions, whereas the latter refines the solutions obtained during exploration to increase their accuracy.

Fig. 1 AOA search phases [2]

3.1 Motivation

Arithmetic is a key component of mathematics and one of the most important parts of modern mathematics, alongside analysis, geometry, and algebra. Arithmetic operators (AO) [2] have traditionally been used for the study of numbers [59]. In optimization, these basic operations are used to find the ideal element among a set of candidate solutions. Optimization [1, 3, 4, 16, 17, 22, 41] problems appear in all quantitative fields, from engineering [5,6,7, 9, 19, 24, 47], economics, and computer science to organizational analysis and technology, and the advancement of optimization methods has repeatedly drawn the attention of mathematicians [33]. The key motivation of the new AOA is the use of AO to solve optimization problems; the behaviour of the operators, their effect on existing algorithms, and their arrangement by dominance are shown in Fig. 2. AOA is then formulated from this mathematical model [35].

Fig. 2 Arithmetic operators according to superiority

3.2 Initial stage

The optimization process starts with a set of candidate solutions, denoted by A in Eq. 10 and generated randomly; the best candidate in each iteration is taken as the best-obtained solution so far [2].

$$ A=\begin{bmatrix}{a}_{1,1}& {a}_{1,2}& \cdots & {a}_{1,j}& \cdots & {a}_{1,n}\\ {a}_{2,1}& {a}_{2,2}& \cdots & {a}_{2,j}& \cdots & {a}_{2,n}\\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ {a}_{N-1,1}& {a}_{N-1,2}& \cdots & {a}_{N-1,j}& \cdots & {a}_{N-1,n}\\ {a}_{N,1}& {a}_{N,2}& \cdots & {a}_{N,j}& \cdots & {a}_{N,n}\end{bmatrix} $$
(10)

AOA must first decide between exploration and exploitation. This choice is governed by the Math Optimizer Accelerated (MOA) coefficient, defined in Eq. 11.

$$ MOA\left({C}_{iter}\right)=\operatorname{Min}+{C}_{iter}\times \left(\frac{\operatorname{Max}-\operatorname{Min}}{M_{iter}}\right) $$
(11)

where

MOA(Citer) = value of the function at the current (ith) iteration.

Miter = maximum number of iterations.

Max & Min = maximum and minimum values of the accelerated function.

Citer = current iteration (between 1 and Miter).
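A one-line helper for Eq. (11) illustrates how MOA grows over the run; the numeric bounds used as defaults are illustrative assumptions, not prescribed values.

```python
def moa(c_iter, m_iter, moa_min=0.2, moa_max=0.9):
    """Math Optimizer Accelerated coefficient of Eq. (11): increases
    linearly from moa_min to moa_max as the iterations proceed."""
    return moa_min + c_iter * (moa_max - moa_min) / m_iter
```

In the original AOA description [2], a random number is compared against MOA in each iteration: the exploration operators of the next subsection are applied when the random number exceeds MOA, and the exploitation operators otherwise.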

3.3 Exploration stage

This subsection discusses the exploratory behaviour of AOA. Among the arithmetic operators, the Division (D) and Multiplication (M) operators produce highly dispersed values, which makes them well suited to the exploration phase; because of this high dispersion, however, they cannot approach the target as easily as the S and A operators. The AOA exploration operators therefore search the space randomly across many regions and seek a better solution using the two main search strategies, the M and D strategies, as shown in Eq. 12 [58].

$$ {a}_{i,j}\left({C}_{iter}+1\right)=\left\{\begin{array}{c} best{a}_j\div \left( MOP\div \varepsilon \right)\times \left(\left(U{B}_j-L{B}_j\right)\times \mu +L{B}_j\right),{r}_2<0.5\\ {} best{a}_j\times MOP\times \left(\left(U{B}_j-L{B}_j\right)\times \mu +L{B}_j\right), otherwise\end{array}\right. $$
(12)

where,

ai,j(Citer + 1) = jth position of the ith solution at the next iteration.

bestaj = jth position of the best solution obtained so far.

μ = control parameter used to adjust the search process (≤ 0.5).

LBj & UBj = lower and upper bound of the jth position.

ε = a small integer number (it keeps the division well-defined).

r2 = a random number in [0, 1].

$$ MOP\left({C}_{iter}\right)=1-\frac{{C_{iter}}^{\frac{1}{\alpha }}}{{M_{iter}}^{\frac{1}{\alpha }}} $$
(13)

where,

MOP = Math Optimizer Probability coefficient.

MOP(Citer) = value of the function at the current (ith) iteration.

Citer = current iteration.

Miter = maximum number of iterations.

α = sensitive parameter that defines the exploitation accuracy over the iterations (set to 5 in [2]).
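The exploration update of Eqs. (12)-(13) can be sketched as follows; the defaults chosen for μ and α, and the use of MOP + ε to guard the division, are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def aoa_exploration(best, UB, LB, c_iter, m_iter, mu=0.5, alpha=5.0, eps=1e-6):
    """Hedged sketch of the AOA exploration operators (D and M), Eqs. (12)-(13).

    best : best solution found so far, shape (Dim,)
    """
    mop = 1.0 - (c_iter ** (1.0 / alpha)) / (m_iter ** (1.0 / alpha))  # Eq. (13)
    scale = (UB - LB) * mu + LB
    r2 = np.random.rand(best.size)
    # Division operator when r2 < 0.5, Multiplication operator otherwise
    return np.where(r2 < 0.5,
                    best / (mop + eps) * scale,
                    best * mop * scale)
```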

3.4 Exploitation stage

This subsection discusses the exploitative behaviour of AOA. Among the arithmetic operators, Addition (A) and Subtraction (S) produce low-dispersion (high-density) results, which makes them well suited to exploitation. The AOA exploitation operators therefore search the space deeply within promising regions and seek a better solution using the two main search strategies, the A and S strategies, as shown in Eq. 14 [2].

$$ {a}_{i,j}\left({C}_{iter}+1\right)=\left\{\begin{array}{c} best{a}_j- MOP\times \left(\left(U{B}_j-L{B}_j\right)\times \mu +L{B}_j\right),{r}_3<0.5\\ {} best{a}_j+ MOP\times \left(\left(U{B}_j-L{B}_j\right)\times \mu +L{B}_j\right), otherwise\end{array}\right. $$
(14)
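The exploitation counterpart of Eq. (14) differs only in the operators applied; again, this is a sketch under the same illustrative assumptions as the exploration helper above.

```python
import numpy as np

def aoa_exploitation(best, UB, LB, mop, mu=0.5):
    """Hedged sketch of the AOA exploitation operators (S and A), Eq. (14).

    best : best solution found so far, shape (Dim,)
    mop  : Math Optimizer Probability from Eq. (13)
    """
    scale = (UB - LB) * mu + LB
    r3 = np.random.rand(best.size)
    # Subtraction operator when r3 < 0.5, Addition operator otherwise
    return np.where(r3 < 0.5,
                    best - mop * scale,
                    best + mop * scale)
```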

4 Proposed work

Several population-based approaches have been proposed recently. Despite their adoption across a wide range of engineering applications, their use for solving real problems is still being examined. As a result, researchers have to substantially modify and improve these methods, often by building on major evolutionary processes, to achieve faster convergence and a more consistent balance between solution quality and efficiency. Therefore, a new hybrid method using Hunger Games Search (HGS) and the Arithmetic Optimization Algorithm (AOA) is proposed in this paper. HGS is a recently proposed population-based optimization method that balances its search features and performs efficiently on both unconstrained and constrained problems.

AOA, in contrast, is a modern meta-heuristic optimization method. Both can be applied to many problems, including image processing, machine learning, wireless networks, power systems, and engineering design. The proposed method is analyzed against HGS and AOA. To evaluate the performance, each method is tested with the same parameters, such as population size and number of iterations. The proposed method (AOA-HGS) is evaluated for varying dimensions; varying the dimension is a standard test used in previous studies of optimization test functions to show its effect on efficiency, and it indicates that AOA-HGS works efficiently for both high- and low-dimensional problems. In high-dimensional problems, population-based methods give efficient search results. Figure 3 demonstrates the step-by-step flow and implementation of the proposed model.

Fig. 3 Step-by-step flow and implementation of the proposed model

Figure 3 shows the steps of the proposed method in the order of their implementation. The first step defines the different parameters to be used. The second step generates the initial solutions using these parameters. The third step estimates the fitness function, and the fourth step selects the best solution. In the fifth step, a random number rand decides which search mechanism is applied: if rand is larger than 0.5 the HGS update is applied, otherwise the AOA update is applied. In the sixth step, if the stopping criteria are met, the best solution is returned in the seventh step; otherwise, the process feeds back to the third step to re-estimate the fitness function.
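The flow of Fig. 3 can be summarised in a minimal sketch that reuses the helper sketches given in Sections 2 and 3 (approach_food, hunger_weights, moa, aoa_exploration, aoa_exploitation); the 0.5 switching threshold follows the step description above, while everything else (function names, bound handling, use of the global best as BF) is an assumption for illustration, not the authors' reference code.

```python
import numpy as np

def aoa_hgs(obj, dim, N, max_iter, LB, UB):
    """Hedged sketch of the hybrid AOA-HGS loop of Fig. 3 (minimisation)."""
    X = LB + np.random.rand(N, dim) * (UB - LB)        # step 2: random initial solutions
    fitness = np.apply_along_axis(obj, 1, X)           # step 3: fitness estimation
    best = fitness.argmin()
    x_best, f_best = X[best].copy(), fitness[best]     # step 4: best solution so far
    hungry = np.zeros(N)

    for t in range(1, max_iter + 1):
        hungry, W1, W2 = hunger_weights(hungry, fitness, f_best, fitness.max(), UB, LB)
        for i in range(N):
            if np.random.rand() > 0.5:                 # step 5: HGS search mechanism
                X[i] = approach_food(X[i], x_best, W1[i], W2[i],
                                     fitness[i], f_best, t, max_iter)
            else:                                      # step 5: AOA search mechanism
                mop = 1.0 - (t ** 0.2) / (max_iter ** 0.2)   # Eq. (13) with alpha = 5
                if np.random.rand() > moa(t, max_iter):
                    X[i] = aoa_exploration(x_best, UB, LB, t, max_iter)
                else:
                    X[i] = aoa_exploitation(x_best, UB, LB, mop)
            X[i] = np.clip(X[i], LB, UB)               # keep solutions inside the bounds
        fitness = np.apply_along_axis(obj, 1, X)       # feedback loop to step 3
        if fitness.min() < f_best:                     # steps 6-7: track the best solution
            best = fitness.argmin()
            x_best, f_best = X[best].copy(), fitness[best]
    return x_best, f_best

# Illustrative usage: minimise the sphere function in 10 dimensions
# x_best, f_best = aoa_hgs(lambda x: np.sum(x**2), dim=10, N=30,
#                          max_iter=500, LB=-100.0, UB=100.0)
```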

The complexity of the proposed AOA-HGS follows from the complexity of the original AOA and HGS and is given as follows:

$$ O\left( AOA HGS\right)=(N)\times O(AOA)\times O(HGS) $$
(15)
$$ O(AOA)=O\ \left(N\times \left(t\times Dim+1\right)\right) $$
(16)
$$ O(HGS)=O\ \left(N\times \left(t\times Dim+1\right)\right) $$
(17)

Therefore, the total complexity of the proposed AOAHGS is given as follows:

$$ O(AOAHGS)=O\ \left(t\times N\times \left( Dim+N\right)\right) $$
(18)

where N denotes the number of solutions, Dim the solution size (dimension), and t the number of iterations.

5 Results & discussion

The proposed approach is examined, and its efficiency is compared with that of existing methods. Implementation and testing are done on an Intel i5 1.70 GHz processor using MATLAB. The performance of the proposed method (AOA-HGS) is examined on the 23 test functions described in Fig. 4, and the results are compared with HGSO, AOA, GOA, GWO, and SCA.

Fig. 4 Description of the tested functions

As described in Section 4, each method is tested with the same parameters (population size and number of iterations), and the proposed AOA-HGS is evaluated for varying dimensions. In this work, AOA-HGS is tested on the scalable multimodal and unimodal test functions (F1-F13) with two different dimensions (10 and 1000), and the comparative methods are additionally tested on ten further benchmark functions (F14-F23). For the 13 functions (F1-F13) with the two dimensions, the methods are compared using the standard deviation (SD) and the average fitness value (Avg).

AOA-HGS gives the best results at dimension 10, which is expected, since most optimization methods perform well in low dimensions. AOA-HGS is then tested with a high dimension (1000), where it also gives the best results. For verification, the proposed AOA-HGS is compared with state-of-the-art algorithms using the same dimensions and test functions. The analysis shows that AOA-HGS is the best performer across the different cases.

The convergence behaviour of AOA-HGS is compared with that of the other methods in Fig. 5, which shows the convergence curves for the test functions (F1-F13). From Fig. 5 it is clear that AOA-HGS converges to lower values and does so more stably than the other methods, attaining better results on the same test functions in terms of both convergence speed and global search capability.

Fig. 5 Convergence behavior of AOA-HGS along with other methods: (a) F1 (b) F2 (c) F3 (d) F4 (e) F5 (f) F6 (g) F7 (h) F8 (i) F9 (j) F10 (k) F11 (l) F12 (m) F13 (n) F14 (o) F15 (p) F16 (q) F17 (r) F18 (s) F19 (t) F20 (u) F21 (v) F22 (w) F23

The average runtime of the proposed AOA-HGS algorithm is compared with that of the other methods in Table 1 for 10 dimensions and in Table 2 for 1000 dimensions. Because AOA-HGS is a population-based method that requires no additional optimization step, its running time in seconds is lower than that of the other methods, so its computational efficiency is better. The results show that AOA-HGS ranks first, followed by the other methods.

Table 1 The comparative methods result using thirteen benchmark functions (F1-F13), where the dimension is fixed to 10
Table 2 The comparative methods result using thirteen benchmark functions (F1-F13), where the dimension is fixed to 100

The values in Table 3 show that AOA-HGS is very competitive and superior to the other methods on the test functions (F14-F23). A number of the optimization methods obtain good results, but AOA-HGS performs best of all, confirming its ability to reach optimum results. This is because the proposed method combines the advantages of two powerful methods and thereby tackles the weaknesses of a single search method, as the tables and figures clearly show.

Table 3 The results of the comparative methods using ten benchmark functions (F14-F23)

6 Conclusion & future scope

This work introduces a hybrid, population-based method (AOA-HGS) for optimization problems, combining a model of the food-searching behaviour of social animals (HGS) with a meta-heuristic optimization method (AOA). The proposed AOA-HGS method is validated on an exhaustive collection of 23 functions (F1-F23), and the results are compared with other state-of-the-art methods such as HGSO, AOA, GOA, GWO, and SCA. The experimental results show that AOA-HGS converges faster and has better global search capability; it is very competitive and superior to the other methods on the test functions. Several optimization methods obtain good results, but AOA-HGS performs best of all, demonstrating its ability to reach optimum results. The main advantage of the proposed method is that it can find better results by combining the search processes of two powerful methods; its main limitation is that further applications still need to be tested to validate it more broadly.

Further, AOA and HGS can be integrated with other existing state-of-the-art algorithms, improving the algorithm and giving more precise results with less computational time, which is needed in real-time applications and problems.