1 Introduction

Optimization techniques have been applied to complex problems in many domains, such as data mining, engineering, machine learning, energy, networks, economics, text mining, and medicine. They are mainly used to obtain an optimal solution by determining the decision values that together form a candidate solution to the underlying problem. Meta-heuristic optimization algorithms typically search for the optimal value by minimizing or maximizing an objective function in order to reach a better decision. The main objective of decision making is to identify the best decision value from a range of available options; the final result of an optimization procedure is therefore the best set of decision values over all given alternatives, taken together as one unit (solution). Here, "best" refers to a satisfactory solution, i.e., the best option obtained so far for the underlying problem. Accordingly, the best solution obtained over a series of improvement steps is considered a satisfactory solution [1].

Many research topics in optimization remain compelling and still lack satisfactory outcomes because of their presence in real life and nature (mathematical problems, industrial problems, real-world problems, and others). These problems are considered NP-hard (hard optimization problems) [2]. The principal purpose of the available optimization techniques is to obtain the best decisions (a problem solution) by optimizing (minimizing or maximizing) an objective or fitness function. Optimization problems can be classified along four main axes: the first distinguishes constrained from unconstrained problems, the second continuous from discrete problems, the third single-objective from multi-objective problems, and the fourth static from dynamic problems.

Recently, several algorithms inspired by nature or biology have been introduced to find near-optimal solutions to different classes of difficult optimization problems. These algorithms have been used effectively to solve such optimization problems in real-world cases. Moreover, their strong search ability and capacity to handle high-dimensional instances make them a better choice than other methods (such as calculus-based methods). Typically, these algorithms model natural phenomena, such as animals searching for food [3]. The popular kinds of these optimization algorithms, shown in Fig. 1, are divided into local, evolutionary, and swarm search algorithms as follows:

  • The first kind, local search algorithms, operate on a single solution that is improved (its fitness increased) over a specified number of iterations (the termination criterion); examples include tabu search [4], simulated annealing [5], β-hill climbing [6], and hill climbing [6].

  • The second kind, evolutionary search algorithms, operate on a population (a set of initial solutions) that is improved iteratively over a specified number of iterations by manipulating the available solutions in order to reach the optimal solution; examples include the genetic algorithm [7], genetic programming [8], the salp swarm algorithm [9], the firefly algorithm [10], the harmony search algorithm [11], and ant colony optimization [12].

  • Finally, the third kind, swarm search algorithms, maintain a population at each iteration and evolve it using the historical information of the available solutions. Examples include the artificial bee colony algorithm [13], the krill herd algorithm [37], the bacterial foraging algorithm [14], the flower pollination algorithm [15], particle swarm optimization [16], biogeography-based optimization [17], and similar algorithms surveyed in [9] (see Fig. 1).

Fig. 1 Kinds of optimization algorithms

The main purpose of this paper is to review the published papers that use the grasshopper optimization algorithm. It studies all perspectives of the grasshopper optimization algorithm and the viewpoints of the researchers who use it, and it investigates how researchers apply the algorithm to various optimization problems in complex applications. The work highlights the strengths of the grasshopper optimization algorithm and the modifications needed to overcome its weaknesses.

In this work, all previously published papers that study the grasshopper optimization algorithm are collected and reviewed. These papers come from several well-known publishers in computer science (IEEE, Springer, Elsevier, Taylor and Francis, Hindawi, Inderscience, and others). Figure 2 presents the number of publications (journal papers, books, conference papers, book chapters, and others) arranged by publisher, and Fig. 3 presents the distribution of these publications by the kind of problem (application) addressed.

Fig. 2 Number of publications of grasshopper optimization algorithm per publisher

Fig. 3 Applications of the grasshopper optimization algorithm

This paper reviews, explains, and presents the grasshopper optimization algorithm based on two main classes:

  • Theoretical perspectives of the grasshopper optimization algorithm, covering its modified, hybridized, binary, chaotic, and multi-objective versions. Figure 3 presents the classification of the theoretical perspectives of the grasshopper optimization algorithm based on the kind of change.

  • Applications of the grasshopper optimization algorithm, involving test functions, machine learning, engineering, image processing, networks, parameter control, and other applications.

This paper is arranged as follows: the grasshopper optimization algorithm and its procedures are presented in Sect. 2. Section 3 summarizes the variants of the grasshopper optimization algorithm and the details of its improvements. Section 4 highlights applications of the grasshopper optimization algorithm. Results and comparisons are provided in Sect. 5, and an assessment and evaluation of the algorithm is given in Sect. 6. Finally, Sect. 7 concludes the paper and suggests future work and potential research directions for the grasshopper optimization algorithm.

2 Grasshopper optimization algorithm

This section presents the general procedures of the grasshopper optimization algorithm, a recent swarm optimization algorithm proposed by Saremi et al. [18], and illustrates its main components. Moreover, the exploitation (intensification) and exploration (diversification) search strategies of the algorithm are fully explained.

2.1 The behavior of grasshopper optimization algorithm

Grasshoppers are insects that are usually seen individually in nature, but they also join together in some of the largest swarms of all creatures. The general life cycle of grasshoppers is presented in Fig. 4.

The swarm may reach a continental scale and is a nightmare for farmers. A unique feature of grasshoppers is that swarming behavior is found in both the nymph and the adult stages. As nymphs they move on the ground and eat almost all vegetation in their path; once they become adults, they form a swarm in the air.

Fig. 4 a Real grasshopper and b life cycle of grasshoppers [18]

Consequently, a mathematical model was introduced in [18] to reproduce this behavior in a new nature-inspired optimization algorithm called the grasshopper optimization algorithm.

2.2 The general procedure of grasshopper optimization algorithm

This section presents the mathematical model used to mimic the swarming behavior of grasshoppers as follows [18, 20]:

$$\begin{aligned} X_{i}=S_{i}+G_{i}+A_{i} \end{aligned}$$
(1)

where \(X_{i}\) represents the position of the ith grasshopper, \(S_{i}\) the social interaction, \(G_{i}\) the gravity force on the ith grasshopper, and \(A_{i}\) the wind advection.

To provide random behavior, the equation can be rewritten as Eq. (2).

$$\begin{aligned} X_{i} = r_{1}S_{i} + r_{2}G_{i} + r_{3}A_{i} \end{aligned}$$
(2)

where \(r_{1}\), \(r_{2}\), and \(r_{3}\) are random values in [0,1].

$$\begin{aligned} S_{i} = \sum _{j=1, j\ne i}^{N} s\left( d_{ij} \right) \widehat{d_{ij}} \end{aligned}$$
(3)

where \(d_{ij}\) represents the distance value between the ith and the jth grasshopper, measured by Eq. (4).

$$\begin{aligned} d_{ij} =\left| x_{j} - x_{i} \right| \end{aligned}$$
(4)

where s is a function that defines the strength of the social forces, as shown in Eq. (5), and \(\widehat{d_{ij}}=\frac{x_{j}-x_{i}}{d_{ij}}\) is a unit vector from the ith grasshopper to the jth grasshopper.

The social forces value is defined as s, which is calculated as follows:

$$\begin{aligned} s\left( r \right) =f e^{\frac{-r}{l}} - e^{-r} \end{aligned}$$
(5)

where f denotes the intensity of attraction and l is the attractive length scale. The impact of the function s on the social interaction (attraction and repulsion) of grasshoppers is shown in Fig. 5.
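As a small illustration (not the authors' released code), the social force of Eq. (5) can be evaluated numerically with the paper's default parameters f = 0.5 and l = 1.5; the grid resolution and the zero-crossing search below are our own choices.

```python
import numpy as np

def social_force(r, f=0.5, l=1.5):
    """Social force s(r) = f*exp(-r/l) - exp(-r) from Eq. (5)."""
    return f * np.exp(-r / l) - np.exp(-r)

# s(r) < 0 means repulsion, s(r) > 0 means attraction; the zero crossing
# near r ~ 2.079 is the comfort distance reported in [18].
r = np.linspace(0.01, 15, 1500)
s = social_force(r)
crossing = np.where(np.diff(np.sign(s)) > 0)[0][0]
print(f"approximate comfort distance: {r[crossing]:.3f}")
```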

Fig. 5 (Left) function s with l = 1.5 and f = 0.5; (right) range of function s [18]

As stated in [18] and observed in Fig. 5, distances from zero to fifteen are studied. Repulsion occurs in the interval [0, 2.079]. When the distance between two grasshoppers is exactly 2.079 units, there is neither repulsion nor attraction; this is called the comfort distance. Figure 5 also shows that the attraction increases from 2.079 up to a distance of almost 4 and then gradually decreases. Adjusting the parameters l and f in Eq. (5) results in different social behaviors in artificial grasshoppers. To view the influence of these two parameters, the function s is redrawn in Fig. 6 while varying l and f individually. This figure confirms that the two parameters significantly change the comfort zone, attraction region, and repulsion region [18]. Note that the attraction and repulsion regions are very small for some values (e.g., l = 1.0 or f = 1.0); from the tested values, l = 1.5 and f = 0.5 are selected.

Fig. 6 The impact of l or f on the behavior of the function s [18]

A conceptual representation of the interactions between grasshoppers and the comfort zone defined by the function s is given in Fig. 7. Notably, this social interaction is considered an important driving force in early models of grasshopper swarming patterns [21].

Fig. 7 Primitive corrective patterns between individuals in a swarm of grasshoppers

Although the function s is able to divide the space between two grasshoppers into three regions (repulsion, comfort, and attraction), it returns values close to zero for distances larger than 10, as shown in Figs. 5 and 6. Hence, this function cannot apply a strong force between grasshoppers that are far apart. To resolve this problem, the distances between grasshoppers are mapped into the interval [1, 4]. The shape of the function s in this interval is shown in Fig. 7 [18].
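A minimal sketch of one way to map a pairwise distance into [1, 4], as described above, so that s still exerts a meaningful force at large separations; the linear rescaling used here is an assumption, since [18] only states that distances are mapped into that interval.

```python
def normalize_distance(d, d_min, d_max, lo=1.0, hi=4.0):
    """Linearly rescale a distance into [lo, hi] (assumed mapping)."""
    if d_max == d_min:          # degenerate case: all grasshoppers coincide
        return lo
    return lo + (hi - lo) * (d - d_min) / (d_max - d_min)
```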

The G part in Eq. (1) is computed as follows:

$$\begin{aligned} G_{i}=-g\widehat{e_{g}} \end{aligned}$$
(6)

where g is the gravitational constant and \(\widehat{e_{g}}\) is a unit vector toward the center of the earth.

The A part in Eq. (1) is computed as follows:

$$\begin{aligned} A_{i}=u\widehat{e_{w}} \end{aligned}$$
(7)

where u is a constant drift and \(\widehat{e_{w}}\) is a unit vector in the direction of the wind. Grasshoppers in the nymph stage have not yet developed wings, so their motion depends strongly on the wind.

Substituting S, G, and A into Eq. (1), the model can be expanded as follows:

$$\begin{aligned} X_{i}=\sum _{j=1, j\ne i}^{N} s\left( \left| x_{j}-x_{i} \right| \right) \frac{x_{j}-x_{i}}{d_{ij}}- g\widehat{e_{g}}+u\widehat{e_{w}} \end{aligned}$$
(8)

where \(s\left( r \right) =fe^{\frac{-r}{l}}-e^{-r}\) and N is the number of grasshoppers.

Since nymph grasshoppers land on the ground, their positions should not go below a threshold value. However, this equation is not used directly in the swarm simulation and optimization algorithm, because it would prevent the algorithm from exploring and exploiting the search space around a solution (agent); in particular, the model used for the swarm operates in free space. Hence, Eq. (8) is utilized for the swarm and it mimics the interaction among grasshoppers. The behavior of two swarms using this equation is shown in Figs. 8 and 9, in which twenty grasshoppers are simulated over ten time units [18].

Figure 8 shows how Eq. (8) brings the initial random solutions (population) closer together until they form a unified, organized swarm. After ten time units, all grasshoppers reach the comfort zone and no longer move. The same behavior is observed in a 3D space in Fig. 9. This shows that the mathematical model is able to mimic a swarm of grasshoppers in 2D, 3D, and hyper-dimensional spaces [18].
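To make Eq. (8) concrete, a hedged sketch that evaluates it for a small swarm is given below; the gravity and wind settings (g, u, and their directions) are illustrative assumptions rather than values taken from [18].

```python
import numpy as np

def s_func(r, f=0.5, l=1.5):
    """Social force, Eq. (5)."""
    return f * np.exp(-r / l) - np.exp(-r)

def eq8_positions(X, g=0.0, u=0.1):
    """One literal evaluation of Eq. (8) for every grasshopper in X (n x dim)."""
    n, dim = X.shape
    e_g = np.zeros(dim); e_g[-1] = 1.0   # assumed "downward" unit vector
    e_w = np.zeros(dim); e_w[0] = 1.0    # assumed wind-direction unit vector
    X_new = np.empty_like(X)
    for i in range(n):
        social = np.zeros(dim)
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(X[j] - X[i])
            social += s_func(d) * (X[j] - X[i]) / (d + 1e-14)
        X_new[i] = social - g * e_g + u * e_w
    return X_new

# Iterating this update on a random 20 x 2 population reproduces the kind of
# aggregation shown in Figs. 8 and 9, although [18] does not spell out the
# exact integration scheme used for those figures.
X = np.random.default_rng(0).uniform(-5, 5, (20, 2))
for _ in range(10):
    X = eq8_positions(X)
```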

Fig. 8 The behavior of the swarm in a 2D space

Fig. 9 The behavior of the swarm in a 3D space

However, this mathematical model cannot be used directly to solve optimization problems, because the grasshoppers quickly reach the comfort zone and the swarm does not converge to a specified point. A modified version of the equation is therefore introduced to solve optimization problems:

$$\begin{aligned} X_{i}^{d}=c\left( \sum _{j=1, j\ne i}^{N} c\frac{ub_{d}- lb_{d}}{2} s \left( \left| x_{j}^{d} - x_{i}^{d} \right| \right) \frac{x_{j}-x_{i}}{d_{ij}} \right) +\widehat{T_{d}} \end{aligned}$$
(9)

where \(ub_{d}\) is the upper bound of the dth dimension, \(lb_{d}\) is the lower bound of the dth dimension, \(\widehat{T_{d}}\) is the value of the dth dimension in the target (the best solution found so far), and c is a decreasing coefficient used to shrink the comfort, repulsion, and attraction regions. Note that the summation term plays almost the same role as the S component in Eq. (1); however, gravity is not considered (there is no G component), and the wind direction (the A component) is assumed to always point toward the target \(\widehat{T_{d}}\).

Equation (9) shows that the next position of a grasshopper is determined by its current position, the position of the target (best solution), and the positions of all other grasshoppers. The first part of the equation accounts for the position of the current grasshopper relative to the other grasshoppers; in effect, the status of all grasshoppers is considered when defining the search area around the best solution.

There are two main reasons to use the adaptive parameter c twice in Eq. (9) as follows:

  • The first (outer) c is similar to the inertia weight (w) used in particle swarm optimization: it reduces the movements of grasshoppers around the best solution. In other words, this parameter balances exploration and exploitation of the region around the best solution.

  • The second (inner) c shrinks the attraction, comfort, and repulsion regions between grasshoppers. Consider the component \(c\frac{ub_{d}- lb_{d}}{2} s \left( \left| x_{j}^{d} - x_{i}^{d} \right| \right)\) in Eq. (9): the factor \(c\frac{ub_{d}- lb_{d}}{2}\) linearly reduces the region that the grasshoppers should explore and exploit, while the factor \(s \left( \left| x_{j}^{d} - x_{i}^{d} \right| \right)\) indicates whether a grasshopper should be repelled from (explore) or attracted toward (exploit) the best solution.

Note that the inner c decreases the attraction and repulsion forces between grasshoppers as the number of generations grows, while the outer c reduces the search coverage around the current best solution as the generation number increases.

In summary, the first part of Eq. (9), the sum, accounts for the positions of the other grasshoppers and models their interaction, while the second term, \(\widehat{T_{d}}\), simulates their tendency to move toward the source of food. The adaptive parameter c mimics the deceleration of grasshoppers approaching the food source and eventually consuming it. To add further randomness, both terms of Eq. (9) can be weighted with random values; alternatively, the individual terms can be weighted with random values to randomize either the interaction of grasshoppers or their tendency toward the food source.

The proposed mathematical model is able to explore and exploit the search space. Nevertheless, a mechanism to tune the transition between the search strategies (from exploration to exploitation and vice versa) is required. In nature, grasshoppers first move and search for food locally because, in the nymph stage, they have no wings; they later move freely in the air and explore a much larger region. In stochastic optimization, however, exploration comes first because of the need to discover promising regions of the search space; after promising regions are found, exploitation obliges the search agents to search locally so as to obtain an accurate approximation of the global optimum.

In order to balance exploration and exploitation, the adaptive parameter c must be decreased proportionally to the number of generations; this promotes exploitation as the iteration count increases. The coefficient c reduces the comfort zone proportionally to the number of generations and is computed by Eq. (10).

$$\begin{aligned} c=c_{\max }-l\frac{c_{\max }-c_{\min }}{L} \end{aligned}$$
(10)

where \(c_{\max }\) is the maximum value, \(c_{\min }\) is the minimum value, l is the current generation, and L is the maximum number of generations. The values of \(c_{\max }\) and \(c_{\min }\) are commonly set to 1 and 0.00001, respectively.
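The decreasing coefficient of Eq. (10) is straightforward to compute; a short sketch with the commonly used bounds c_max = 1 and c_min = 0.00001 is shown below.

```python
def coefficient_c(l, L, c_max=1.0, c_min=0.00001):
    """Eq. (10): linearly decrease c from c_max toward c_min over L generations."""
    return c_max - l * (c_max - c_min) / L

# e.g. coefficient_c(1, 100) ~ 0.99 and coefficient_c(100, 100) == 0.00001
```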

Fig. 10 a The behavior of grasshoppers around a stationary target (the best solution) in 2D space, b in 3D space, and c the behavior of grasshoppers on a uni-modal and a multi-modal test function

The impact of this adaptive parameter on the movement and convergence of grasshoppers is illustrated in Fig. 10. The sub-figures show the position history of the grasshoppers over one hundred generations. Experiments with stationary and moving targets are provided to show how the swarm moves toward a particular goal and chases it. The figure shows that the swarm converges gradually toward a stationary target in both 2D and 3D spaces; this is caused by the shrinking of the comfort zone by the parameter c. Figure 10 also shows that the swarm properly chases a moving target, which is due to the last term of Eq. (9), \(\widehat{T_{d}}\), by which grasshoppers are pulled toward the target. An interesting pattern is the gradual convergence of grasshoppers toward the target over the course of iterations, again caused by the decreasing factor c. This behavior helps the grasshopper optimization algorithm not to converge toward the target too quickly and, consequently, not to become trapped in local optima. In the final stages of optimization, however, the grasshoppers converge toward the target as much as possible, which is essential for exploitation.

The above analysis shows that the proposed mathematical model requires grasshoppers to move toward a target gradually over the course of generations. In a real search space, however, there is no known target, because the location of the global optimum is unknown. Consequently, a target must be chosen at each step of the optimization. In the grasshopper optimization algorithm, the fittest grasshopper found so far during optimization is taken as the target. This helps the algorithm keep the most promising point in the search space at each iteration and drives the grasshoppers toward it, with the aim of finding a better and more accurate approximation of the true global optimum. Figure 10 includes two test functions and shows that the proposed model and target-updating procedure are also effective on problems whose optimal solutions are unknown [18].

The pseudo-code of the grasshopper optimization algorithm is presented in Algorithm 1. The algorithm begins by generating a population (a set of random solutions represented as the matrix in Eq. (11)). The search agents then update their positions using Eq. (9). The best position obtained so far (the target) is updated in each iteration. Moreover, the factor c is computed using Eq. (10), and the distances between grasshoppers are normalized in each iteration [22, 23]. Position updating is repeated until a termination criterion is met. The position and fitness of the best target are finally returned as the best solution for the investigated problem [18].

Algorithm 1 Pseudo-code of the grasshopper optimization algorithm
$$\begin{aligned} X=\begin{bmatrix} x_{1}^{1} & x_{1}^{2} & \cdots & x_{1}^{d} \\ x_{2}^{1} & x_{2}^{2} & \cdots & x_{2}^{d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n}^{1} & x_{n}^{2} & \cdots & x_{n}^{d} \end{bmatrix} \end{aligned}$$
(11)

Earlier simulations and investigations have illustrated the effectiveness of the grasshopper optimization algorithm in locating the global optimum in a search space. Note that free source code of the grasshopper optimization algorithm is available at http://www.alimirjalili.com/Projects.html and http://au.mathworks.com/matlabcentral/profile/authors/2943818-seyedali-mirjalili.
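To make the procedure concrete, a compact Python sketch of the main loop described in Algorithm 1 is given below (population initialization as in Eq. (11), position update per Eq. (9), coefficient c per Eq. (10)). It is a minimal reimplementation for illustration, not the authors' released MATLAB code; the distance mapping and the boundary clipping follow our reading of [18] and of common public implementations.

```python
import numpy as np

def s_func(r, f=0.5, l=1.5):
    """Social force, Eq. (5)."""
    return f * np.exp(-r / l) - np.exp(-r)

def goa(objective, lb, ub, n=30, dim=10, max_iter=500,
        c_max=1.0, c_min=0.00001, seed=0):
    """Minimal grasshopper optimization algorithm sketch (minimization)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.full(dim, lb, float), np.full(dim, ub, float)
    X = lb + rng.random((n, dim)) * (ub - lb)           # population matrix, Eq. (11)
    fitness = np.apply_along_axis(objective, 1, X)
    target = X[np.argmin(fitness)].copy()
    target_fit = fitness.min()

    for it in range(1, max_iter + 1):
        c = c_max - it * (c_max - c_min) / max_iter     # Eq. (10)
        X_new = np.empty_like(X)
        for i in range(n):
            social = np.zeros(dim)
            for j in range(n):
                if i == j:
                    continue
                dist = np.linalg.norm(X[j] - X[i])
                r = 2.0 + dist % 2.0                    # distance mapping (assumed, as in common implementations)
                unit = (X[j] - X[i]) / (dist + 1e-14)
                social += c * (ub - lb) / 2.0 * s_func(r) * unit
            X_new[i] = c * social + target              # Eq. (9)
        X = np.clip(X_new, lb, ub)                      # keep solutions inside the bounds
        fitness = np.apply_along_axis(objective, 1, X)
        if fitness.min() < target_fit:                  # update the target (best solution so far)
            target_fit = fitness.min()
            target = X[np.argmin(fitness)].copy()
    return target, target_fit

# usage: minimize the sphere function on [-10, 10]^10
best_x, best_f = goa(lambda x: np.sum(x**2), -10, 10)
```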

3 Variants of grasshopper optimization algorithm

The grasshopper optimization algorithm, proposed in 2017, is a recent meta-heuristic compared with the whale optimization algorithm, krill herd algorithm, cuckoo optimization algorithm, seeker optimization algorithm, firefly algorithm, harmony search algorithm, and ant colony optimization, which were introduced in 2016, 2012, 2011, 2010, 2008, 2001, and 1992, respectively. Nevertheless, the grasshopper optimization algorithm has already been adapted in many different ways by researchers in order to handle various hard optimization problems. Most of these changes and modifications (binary, modified, hybridized, and other versions described below) are presented here broadly but not exhaustively. A short summary of all variants and versions of the grasshopper optimization algorithm is given in Table 1.

3.1 Binary grasshopper optimization algorithm

Feature selection (FS) is a challenging machine learning task that aims to reduce the number of selected features by removing non-informative ones while retaining the highest possible classification accuracy. Feature selection is regarded as a complex optimization problem in machine learning; because of its difficulty and the large number of candidate subsets, stochastic algorithms are promising methods for tackling it. As an influential attempt, binary versions of the grasshopper optimization algorithm are introduced in [24] to select the optimal subset of features for classification within a wrapper-based framework. Two techniques are applied to produce a binary grasshopper optimization algorithm: the first uses S-shaped (sigmoid) and V-shaped transfer functions, yielding BGOA-S and BGOA-V, respectively, while the second utilizes a novel method that exploits the best solution reached so far. Moreover, a mutation operator is applied to improve the exploration strategy of the grasshopper optimization algorithm. The proposed techniques are tested and validated on 25 standard datasets from the UCI repository, and the results are compared with eight well-known wrapper-based techniques and six well-known filter-based techniques. The results show that the proposed technique performs better than the related techniques published in the literature.
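To clarify the transfer-function idea behind BGOA-S and BGOA-V, a hedged sketch of S-shaped and V-shaped binarization is given below; the exact transfer functions used in [24] may differ, so these are the standard sigmoid and |tanh| forms from the binary-metaheuristic literature.

```python
import numpy as np

def s_shaped(x):
    """S-shaped (sigmoid) transfer function."""
    return 1.0 / (1.0 + np.exp(-x))

def v_shaped(x):
    """V-shaped transfer function."""
    return np.abs(np.tanh(x))

def binarize(step, transfer, current_bit, rng):
    """Turn a continuous GOA step into a 0/1 feature-selection decision."""
    p = transfer(step)
    if transfer is s_shaped:
        return 1 if rng.random() < p else 0          # S-shaped: probability of selecting the feature
    return 1 - current_bit if rng.random() < p else current_bit  # V-shaped: probability of flipping the bit

rng = np.random.default_rng(0)
bits = [binarize(0.7, v_shaped, 1, rng) for _ in range(5)]
```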

In engineering domains, many hard optimization problems need to be solved efficiently. A large group of these engineering problems is NP-hard and can hardly be solved by exact methods. Hence, producing binary versions of swarm-based meta-heuristic optimization algorithms is a popular topic in operations research and artificial intelligence. A general binarization technique based on the percentile concept is proposed in [25] and applied to the grasshopper optimization algorithm to address the multi-dimensional knapsack problem. Experiments illustrate the efficiency of the percentile concept in the binarized version, and the performance of the proposed algorithm is tested on benchmark instances. The results show that the binary grasshopper optimization algorithm achieves satisfactory results when assessed against other state-of-the-art methods.

Many common optimization problems solved in industrial settings are combinatorial (arrangements of elements in sets), and a non-negligible subset of these is NP-hard. Designing optimization algorithms for such problems by adapting continuous swarm-intelligence meta-heuristics is an important field of interest in industry. A general binarization technique for continuous meta-heuristic optimization algorithms, based on the percentile concept, is proposed in [26] and incorporated into the grasshopper optimization algorithm to address the set covering problem. Experiments illustrate the advantage of the percentile idea in the binarized version, and the performance of the proposed algorithm is verified on standard instances. The results show that the binary version of the grasshopper optimization algorithm achieves satisfactory results compared with other methods.
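One plausible reading of a percentile-based binarization, sketched below under that assumption (the exact operator of [25, 26] is not reproduced here), is to threshold each continuous position against a chosen percentile of the population's values.

```python
import numpy as np

def percentile_binarize(X, q=50):
    """Map a continuous population X (n x d) to binary by comparing each
    column to its q-th percentile (illustrative assumption, not the exact
    operator of [25, 26])."""
    thresholds = np.percentile(X, q, axis=0)
    return (X > thresholds).astype(int)

X = np.random.default_rng(1).random((6, 4))
B = percentile_binarize(X, q=60)
```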

3.2 Modifications of grasshopper optimization algorithm

A modified version of the grasshopper optimization algorithm is proposed in [27] for tackling two kinds of problems: continuous optimization problems and financial stress prediction problems. The grasshopper algorithm is a recently introduced optimization algorithm motivated by the swarming behavior of grasshoppers in nature, and it has proved useful in solving several global optimization problems (constrained and unconstrained). However, the original version has some drawbacks: it is likely to become stuck in local optima, and its convergence can be slow. To address these shortcomings, an enhanced grasshopper optimization algorithm that combines three search operators to obtain a better balance between exploration and exploitation was introduced. First, a Gaussian mutation operator is used to improve solution diversity, giving the algorithm a stronger local search capability. Second, a Levy-flight procedure is utilized to increase the randomness of the search, providing a stronger global search ability. Finally, an opposition-based learning strategy is incorporated into the grasshopper optimization algorithm to search the space more thoroughly. The experimental results show that the three proposed operators increase the effectiveness of the grasshopper optimization algorithm and that the proposed learning system yields a more stable kernel model with higher performance than other published methods.
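The three operators named above are standard building blocks; a hedged sketch of Gaussian mutation and a Lévy-flight step (via Mantegna's algorithm) is shown below. The step sizes and the β parameter are illustrative choices, not the settings of [27].

```python
import numpy as np
from math import gamma

def gaussian_mutation(x, sigma=0.1, rng=None):
    """Perturb a solution with zero-mean Gaussian noise (local search)."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-distributed step via Mantegna's algorithm (global jumps)."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)
```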

3.3 Hybridizations of grasshopper optimization algorithm

A hybrid grasshopper optimization algorithm with an opposition-based learning (OBL) strategy, called OBLGOA, is proposed in [28] for solving two common classes of problems: benchmark test functions and engineering optimization problems. The proposed algorithm consists of two stages. In the first stage, the initial population and its opposite are generated using the opposition-based learning strategy. In the second stage, opposition-based learning is incorporated as an additional step applied to the population in each generation; however, it is applied to only part of the population in order to reduce execution time. To examine the effectiveness of the proposed hybrid algorithm, six groups of experiments are conducted, covering 23 standard test functions and four engineering optimization problems. The experiments show that OBLGOA produces superior results compared with other well-known published algorithms, and the achieved results confirm its advantage in solving engineering problems as well.
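A minimal sketch of the opposition-based learning step (the opposite of x in [lb, ub] is lb + ub − x) is given below; applying it to only part of the population, as described above, is illustrated with an assumed fraction of one half, which is not necessarily the split used in [28].

```python
import numpy as np

def opposition(X, lb, ub):
    """Opposite population: x_opp = lb + ub - x, element-wise."""
    return lb + ub - X

def obl_step(X, fitness_fn, lb, ub, fraction=0.5, rng=None):
    """Apply OBL to a fraction of the population and keep the better of each pair."""
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    idx = rng.choice(n, size=int(fraction * n), replace=False)
    X_opp = opposition(X[idx], lb, ub)
    f_cur = np.apply_along_axis(fitness_fn, 1, X[idx])
    f_opp = np.apply_along_axis(fitness_fn, 1, X_opp)
    X[idx] = np.where((f_opp < f_cur)[:, None], X_opp, X[idx])
    return X
```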

A hybrid classification technique utilizing the support vector machine and the grasshopper optimization algorithm, called GOA-SVM, is introduced in [29] for automated seizure detection in EEG. Several tuning parameters are selected and used to train support vector machines with a radial basis function (RBF) kernel. The grasshopper optimization algorithm is used to choose the most useful feature subset and the best values of the support vector machine parameters in order to achieve robust EEG classification. The test results show that the proposed technique is able to recognize the onset of epileptic seizures and can thus improve the diagnosis of epilepsy, with a high accuracy ratio (100%) on normal subjects. Moreover, the proposed technique is compared with a particle swarm optimization-based SVM using the same RBF kernel; the results show that the proposed technique achieves better classification accuracy and outperforms the comparative methods.

Bio-inspired swarm intelligence algorithms have performed strongly in the optimization domain over recent decades, and a large number of novel swarm intelligence algorithms continue to be produced. Existing optimization algorithms are also frequently altered, either by modifying or hybridizing them with components of other algorithms or by incorporating local exploitation strategies. A novel local search approach based on the jumping mechanism of grasshoppers, called the GH local search procedure, is proposed in [30]. The proposed approach is embedded into an effective swarm intelligence algorithm (artificial bee colony optimization), and the resulting hybrid is examined on thirty-seven numerical benchmark functions. The results show that the hybrid algorithm is a satisfactory method for addressing numerical problems.

3.4 Chaotic grasshopper optimization algorithm

With the growth of research on nonlinear dynamic problems, chaos has been widely applied to enhance both search strategies (exploration and exploitation) [31]. So far, chaotic maps have been successfully combined with various meta-heuristic optimization algorithms to improve their performance [32].

The grasshopper algorithm is a new meta-heuristic optimization algorithm motivated by the swarming behavior of grasshoppers. Chaotic maps are combined into the optimization procedure of the grasshopper optimization algorithm in [33] so as to accelerate its global convergence. The chaotic maps are used to balance exploration and exploitation and to reduce the repulsion and attraction forces between grasshoppers during optimization. The proposed chaotic grasshopper optimization algorithm is evaluated on 13 standard benchmark test functions. The results confirm that chaotic maps are, in general, effective in improving performance, and that the circle map in particular yields the best results for the grasshopper optimization algorithm.
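As a hedged illustration of how a chaotic map can replace random or linearly decreasing components, the circle map (reported in [33] as the best-performing map) can be iterated to produce values in [0, 1); the parameters a = 0.5 and b = 0.2 are common choices in the chaotic-metaheuristic literature, not necessarily those of [33].

```python
import numpy as np

def circle_map(x, a=0.5, b=0.2):
    """One iteration of the circle chaotic map, values kept in [0, 1)."""
    return (x + b - (a / (2 * np.pi)) * np.sin(2 * np.pi * x)) % 1.0

# chaotic sequence that could modulate the coefficient c across generations
x, seq = 0.7, []
for _ in range(10):
    x = circle_map(x)
    seq.append(x)
```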

Generation and transmission expansion strategies suited to smart grid technology typically involve load shifting, resource characteristics, and cost reduction. Demand response in smart grids has become part of transmission expansion planning, particularly with regard to maintaining system security. A new transmission expansion planning approach with demand response resources is built in [34]; it reduces cost by reducing the peak load of the base case. Chaotic strategies are utilized in the grasshopper optimization algorithm to find the optimal solutions for the transmission expansion plans. The proposed model gives system planners and network operators reports that are adaptable to variations in planning and capable of determining the best way to utilize demand response resources for system objectives.

3.5 Multi-objective grasshopper optimization algorithm

A new multi-objective optimization algorithm based on the grasshopper optimization algorithm, motivated by the navigation of grasshopper swarms in nature, is proposed in [35]. First, a mathematical model is applied to the interaction of individuals in the swarm, including the attraction force, the repulsion force, and the comfort zone. Second, a mechanism is proposed for approximating a near-optimal global solution in a single-objective search space. Finally, an archive and a target-selection technique are integrated into the grasshopper optimization algorithm to estimate the Pareto optimal front for multi-objective optimization problems. To test the effectiveness of the proposed algorithm, a collection of standard multi-objective test problems is employed. The results are compared with the most well-known multi-objective optimization algorithms in the literature using three performance indicators quantitatively, as well as diagrams qualitatively. The results show that the proposed multi-objective grasshopper algorithm produces better results in terms of the accuracy of the obtained solutions and their diversity.
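For readers unfamiliar with the archive mechanism, a minimal Pareto-dominance check and archive update are sketched below; the actual target-selection probabilities of the multi-objective grasshopper algorithm (e.g., based on crowded archive regions) are not reproduced here.

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization)."""
    return np.all(f1 <= f2) and np.any(f1 < f2)

def update_archive(archive, candidate):
    """Keep only non-dominated solutions after inserting a candidate.
    Each entry is a tuple (solution, objective_vector)."""
    _, f_c = candidate
    if any(dominates(f_a, f_c) for _, f_a in archive):
        return archive                       # candidate is dominated: discard
    archive = [(x, f) for x, f in archive if not dominates(f_c, f)]
    archive.append(candidate)
    return archive
```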

Recently, locating the most suitable path in robot exploration and navigation has become a critical challenge, and a number of techniques have been suggested to tackle this path-planning problem. Heuristic techniques are a group of efficient techniques that have been employed effectively in numerous complex optimization problems. A novel algorithm for robot path planning in a static environment is proposed in [36]. The main objective is to handle a multi-objective formulation by minimizing several outputs such as cost, energy, distance, and time; the most effective use of the range, path smoothness, and path-planning time are studied to find the best alternatives. The contribution of the paper is to compute a suitable fitness function at each generation to obtain the best solution so far. The results are analyzed and compared with the particle swarm optimization algorithm: the proposed algorithm exhibits better execution characteristics in terms of time and path smoothness, and the obtained path distances are shorter than those produced by particle swarm optimization.

Table 1 Summary of grasshopper optimization algorithm variants

4 Applications of grasshopper optimization algorithm

Many applications of GOA have been reported in various fields; for instance, GOA has been employed to solve benchmark optimization and real-world problems. The GOA applications are described in more detail below, and a brief summary of the main applications is given in Table 2.

4.1 Constrained and unconstrained test functions

The grasshopper optimization algorithm is a modern optimization algorithm used for solving various optimization problems, such as constrained and unconstrained problems [30, 37]. It is a population-based, nature-inspired algorithm that simulates and mathematically models the behavior of grasshopper swarms in nature.

The algorithm studied in [38] can be utilized for tackling many different engineering problems. For this purpose, the original grasshopper optimization algorithm (GOA) is examined on various benchmark test functions to verify and validate its effectiveness. The results achieved by GOA are analyzed and compared with the known best results of the benchmark functions, and they show that the algorithm is able to produce reliable results in this domain; the results on constrained and unconstrained test functions confirm that the algorithm provides trustworthy solutions. A constraint-handling approach is employed to transform constrained problems into unconstrained ones so that they can be handled by the grasshopper optimization algorithm, with a static penalty approach used as the constraint-handling technique in that paper. Various engineering optimization problems arising in real life can be solved by this algorithm.

Mathematics, computer science, and operations research all involve numerous easy and hard optimization problems. Optimization methods are used to tackle engineering problems and to implement the most common algorithms from the available sources; the aim is to obtain a near-optimal solution to a particular problem, which is the demand of today. By performing global (exploration) and local (exploitation) search in the given domain, various search algorithms have been produced. The purpose of [39] is to obtain near-optimal solutions using several optimization algorithms published in the recent literature. Three optimization algorithms (the bat optimization algorithm, the particle swarm optimization algorithm, and the grasshopper optimization algorithm) are chosen and tested. The bat algorithm is motivated by the echolocation behavior of bats; particle swarm optimization is a nature-inspired algorithm motivated by flocks of birds searching for food; and the grasshopper optimization algorithm is motivated by the swarms formed by adult grasshoppers combined with the behavior of the larval (nymph) stage. The approaches are investigated on standard benchmark functions, and the results show that the grasshopper optimization algorithm produces better results than the bat and particle swarm optimization algorithms.

4.2 Machine learning applications

4.2.1 Feature selection

The support vector machine (SVM) is recognized as one of the most powerful machine learning algorithms and is used for a wide range of real-world problems [40]. The effectiveness and performance of the SVM chiefly rely on the kernel model and its main tuning parameters [41]. Moreover, the selection of the optimal subset of features used to train the SVM model is another significant factor with an important influence on its classification accuracy; selecting the optimal subset of features is a highly critical step in machine learning, particularly for high-dimensional datasets [42]. Most recent research has examined these critical factors individually [43].

A hybrid strategy using the grasshopper optimization algorithm, a recent algorithm motivated by the biological behavior of grasshopper swarms, is proposed in [44] to solve the feature selection problem. The goal of the proposed algorithm is to find the optimal values of the SVM parameters and the optimal subset of features simultaneously. Eighteen benchmark datasets of low and high dimensionality are utilized to assess the efficiency of the proposed algorithm. For validation, the proposed algorithm is compared with seven well-known published algorithms in the domain, and its results are also compared with grid search, the common technique for tuning the main parameters of the SVM. The results show that the proposed grasshopper-based algorithm obtains superior classification accuracy in most cases while reducing the number of selected features.
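A hedged sketch of the kind of wrapper fitness such studies optimize: a binary mask selects features, an SVM is cross-validated on them, and accuracy is traded off against the number of selected features. The weight alpha = 0.99 is a common convention in the wrapper-based literature, not necessarily the value used in [44].

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fs_fitness(mask, X, y, C=1.0, gamma="scale", alpha=0.99):
    """Wrapper fitness to MINIMIZE: weighted error rate plus feature ratio."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:                      # empty subsets are penalized
        return 1.0
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    acc = cross_val_score(clf, X[:, selected], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * selected.size / X.shape[1]
```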

Epilepsy is one of the most well-known neurological diseases in the world. It is diagnosed by inspecting long electroencephalogram (EEG) recordings in a clinical context, which is error-prone and time-consuming. A new method for the classification of epileptic seizures from EEG signals is introduced in [45]. The EEG signal is decomposed into intrinsic mode functions (IMFs) using empirical mode decomposition (EMD), and a fusion of selected nonlinear and spike-based features is extracted from each IMF. The tuning parameters of several common machine learning techniques, including the k-nearest neighbor technique, the extreme learning machine, random forest, the support vector machine, and artificial neural networks, are optimized, along with the selection of the most important features, using the grasshopper optimization algorithm. These classifiers, with their optimized parameters, are then combined for the analysis of epileptic seizures. The results confirm that the combined classifiers perform better than the individual ones. A comparison of the proposed algorithm with other well-known epileptic seizure detection systems is also presented.

The feature selection problem is one of the main difficulties in machine learning: finding the small number of informative features, among a huge feature space, that yields the maximum classification accuracy. A novel feature selection technique is proposed in [19] based on the mathematical model of grasshopper swarm interaction while searching for food sources. Several further modifications are made to the grasshopper optimization algorithm in order to make it more stable when solving the feature subset selection problem. The proposed method (GOFS) uses statistical measures during its iterations to replace duplicated features with more promising (informative) ones. Several common datasets with different dimensionalities, numbers of instances, and target classes are used to assess the effectiveness of the approach. Experiments are conducted on 12 well-known datasets, and several modern feature selection approaches from the literature are compared with the proposed algorithm. The results show the superiority of the proposed algorithm over the other well-known methods.

Nonstationary components and noise in bearing vibration signals are particularly challenging: extracting fault features with minimum entropy deconvolution, maximum correlated kurtosis deconvolution, or fast spectral kurtosis does not produce sufficient results. Moreover, the filter length and fault period of multipoint optimal minimum entropy deconvolution adjusted (MOMEDA) must be fixed in advance, so it is hard to obtain adequate filtering results. To address these problems, a method that selects the optimal MOMEDA parameters using the grasshopper optimization algorithm is introduced in [46]. First, the multipoint kurtosis of the filtered signal is used as the optimization objective, and the optimal filter length and period value matching the vibration signal are determined through multiple generations of the grasshopper optimization algorithm. Second, the periodic impulses contained in the vibration signal are extracted by the optimized MOMEDA, and the fault features in the impulse signal are obtained by Hilbert envelope demodulation. Finally, simulated and measured bearing signals are processed by the proposed method. The results reveal that introducing the grasshopper optimization algorithm not only solves the parameter-selection problem of MOMEDA but also yields better results than other well-known optimization techniques. Meanwhile, the utility and superiority of the proposed approach are verified by comparing it with three other methods after finding the optimal parameter values. Hence, this approach provides a new process and solution for fault diagnosis of rolling bearings.

4.2.2 Data clustering

Data clustering is a well-known procedure used to group related data in a given dataset according to a distance measure between data objects: similar objects are grouped together and dissimilar objects are placed in different clusters as appropriate [47, 48]. Clustering has been employed in various domains such as pattern classification, decision making, data compression, image segmentation, data mining, document retrieval, text categorization, machine learning, and business applications such as traffic management, marketing analysis, and document management [48].

Dividing a large dataset into separate groups of similar structure, known as data clustering, constitutes a critical problem in data analysis. This problem can be solved by a wide range of clustering techniques based on mathematical or heuristic methods; the heuristic methods usually adopt mechanisms observed in nature, which are known to serve as useful components of effective optimizers. The possibility of utilizing the new grasshopper optimization algorithm to produce accurate and more precise data clustering is investigated in [49]. A clustering validation measure, the Calinski-Harabasz index, is applied as the metric for evaluating candidate solutions. The paper presents a full description of the proposed approach (the grasshopper optimization algorithm for data clustering) along with experimental tests on a collection of benchmark datasets. Over the course of that study, it was observed that clustering with the proposed algorithm obtained a higher accuracy ratio than the standard K-means method.
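A hedged sketch of how a GOA candidate solution (a flat vector of k cluster centroids) could be scored with the Calinski-Harabasz index mentioned above; since the index grows with better-separated clusters, its negative is returned for a minimizing optimizer. The encoding and the penalty for degenerate partitions are our assumptions, not details from [49].

```python
import numpy as np
from sklearn.metrics import calinski_harabasz_score

def clustering_fitness(centroids_flat, X, k):
    """Assign points to the nearest centroid and score the partition."""
    centroids = centroids_flat.reshape(k, X.shape[1])
    # label each point by its nearest centroid
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    if len(np.unique(labels)) < 2:              # degenerate partitions are penalized
        return np.inf
    return -calinski_harabasz_score(X, labels)  # negate: higher index = better clustering
```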

4.2.3 Training neural networks

As one of the world's deadliest diseases, cancer must be classified accurately, and the precise classification of the disease is needed at an early stage. Early detection calls for a reliable and precise technique that gives doctors knowledge of the patient's cancer and thereby enables good decision making [50].

A cancer classification approach using the grasshopper optimization algorithm and gene expression data is introduced in [51]. The approach, called GOA-based DBN, trains a deep belief network with the grasshopper optimization algorithm. It classifies cancer more accurately through the application of a logarithmic transformation and the Bhattacharyya distance: the logarithmic transformation is a pre-processing step for the genetic data that reduces the complexity associated with classification, while the Bhattacharyya distance selects the most informative genes. The network weights are updated with the grasshopper optimization algorithm and the gradient descent technique. Experiments are performed on data from the human colon and from patients diagnosed with leukaemia. The proposed algorithm performs better, with an accuracy rate of 0.9534 and an expression ratio of 0.9666.

4.3 Engineering applications

The grasshopper optimization algorithm has been utilized to solve a number of optimization problems arising in engineering applications, such as scheduling, image processing, control of power systems, and renewable power systems. The following subsections explain the effectiveness and performance of the grasshopper optimization algorithm in solving these problems.

4.3.1 Scheduling

Economic dispatch is a critical optimization problem in modern energy systems: it determines the optimal output of a number of committed power generating units so as to meet the system load demand at the lowest possible cost, subject to operational constraints [52].

The grasshopper optimization algorithm, a recent approach that simulates the swarming behavior of grasshoppers, is introduced in [53] for solving the economic dispatch problem in electrical energy scheduling. To investigate its utility and validity, it is applied to three kinds of economic dispatch problems covering small-, medium-, and large-scale energy systems with different complexity levels. The experimental results reveal that the grasshopper optimization algorithm is very promising for addressing a wide variety of economic dispatch problems accurately, and its effectiveness is superior to that of other recently published algorithms.
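To show the shape of the objective such studies minimize, a hedged sketch of a quadratic-cost economic dispatch fitness with a power-balance penalty is given below; the cost coefficients and penalty weight are illustrative values, not data from [53].

```python
import numpy as np

def dispatch_cost(P, a, b, c, demand, penalty=1e6):
    """Total fuel cost sum(a_i + b_i*P_i + c_i*P_i^2) plus a penalty for
    violating the power balance sum(P_i) = demand (losses ignored here)."""
    fuel = np.sum(a + b * P + c * P ** 2)
    imbalance = abs(np.sum(P) - demand)
    return fuel + penalty * imbalance

# illustrative 3-unit system
a = np.array([100.0, 120.0, 90.0])      # $/h
b = np.array([2.0, 1.8, 2.2])           # $/MWh
c = np.array([0.002, 0.0025, 0.0018])   # $/MW^2 h
cost = dispatch_cost(np.array([150.0, 200.0, 120.0]), a, b, c, demand=470.0)
```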

4.3.2 Control of power systems

This work concerns trajectory optimization for cooperative target tracking by solar-powered UAVs (SUAVs) in urban environments. In [54], the grasshopper optimization algorithm is introduced as the solver for a distributed model predictive control scheme. First, tracking targets in urban environments is difficult: it involves kinematic constraints, urban obstacles, and limitations due to blocked sight lines. In that paper, sight occlusions in an urban environment are taken into account for solar-powered UAVs for the first time, and a measurement model of sight occlusions is introduced to make the computation of the energy index more accurate. Second, based on this accurate modeling, distributed model predictive control is utilized as the framework for trajectory optimization. Third, an improved grasshopper optimization algorithm, a novel technique that simulates the behavior of grasshoppers in nature, is proposed as the distributed model predictive control solver. It achieves greater search capability than the original grasshopper optimization algorithm and some comparative algorithms by adding enhancements such as a natural selection strategy, a common decision-making mechanism, and an effective feedback mechanism based on the 1/5 rule. Finally, simulation results demonstrate the better performance of the proposed approach.

A new technique is proposed in [55] for solving the simultaneous optimal network reconfiguration and optimal distributed generation placement problem by minimizing real power loss. The grasshopper optimization algorithm, a meta-heuristic motivated by the swarming life of grasshoppers, is employed to find the optimal solution to this problem. The proposed technique is applied to the 33-bus and 69-bus test cases, and the achieved results are analyzed against other optimization algorithms. The comparisons demonstrate the effectiveness of the proposed technique in finding the optimal solution.

High energy demand and voltage deviation problems in the distribution system are common concerns for energy operators because of the large penetration of electric vehicles. Battery swapping stations are an essential part of the infrastructure for recharging electric vehicles. In [56], power loss reduction and voltage stability are improved through optimal placement of distributed generation and battery swapping stations using the grasshopper optimization algorithm. A zone-based approach with dispersed distributed generation guarantees adequate spacing between battery swapping stations to serve motorists appropriately and improve system effectiveness. The operational constraints of battery swapping stations, distributed generation, and loads in each zone are examined for 33-bus and 69-bus networks. The grasshopper optimization algorithm is compared with established and common methods, and the results reveal its superiority with regard to system effectiveness and convergence speed. The optimal positions and sizes for the distributed generation and battery swapping station arrangement are useful to electric vehicle users, battery swapping station developers, the electricity network, and utilities.

A coordinated model for a multi-energy integrated power system based on linearized coupling is verified in [57]. To create a precise design for integrated energy hubs and to address the energy management problem, a combined approach is proposed that integrates an extended weighted-score method and the grasshopper optimization algorithm. The proposed method enhances overall energy efficiency and achieves optimal regional coordination. The results indicate that the running cost of the multi-energy integrated system with coordination is decreased by almost 3.2%, which demonstrates the scalability, adaptability, and economic effectiveness of the combined approach.

A new method based on the grasshopper optimization algorithm is introduced in [58] to solve the reconfiguration of partially shaded PV arrays efficiently. The purpose of the method is to extract the maximum power from the array through the proposed objective function. The PV array configuration is obtained by the proposed strategy combined with the grasshopper optimization algorithm. Moreover, various shading patterns during a day are examined, and the reconfiguration arrangement is computed at each hour. The results demonstrate the performance and responsiveness of the proposed grasshopper optimization algorithm in reaching the global maximum power point of the partially shaded PV array.

An integrated method for economic dispatch is proposed in [59]. A hybrid system containing traditional thermal generators and renewable power sources such as wind farms is used to evaluate the grasshopper optimization algorithm. A standard system, including six thermal units and two wind farms, is used to examine the dispatch for three different load levels. The grasshopper optimization algorithm results are compared with those of recently published optimization algorithms and show its effectiveness and efficiency in terms of convergence speed and the smallest fitness value. A study with wind power integration and emission minimization further demonstrates the strength of the grasshopper optimization algorithm.

An increase in power demand in an energy system can produce a rapid drop in the voltage profile that can disturb the system balance and thus cause a complete system failure, as the operation has to run under emergency and stress conditions. The grasshopper optimization algorithm is used in [60] for tackling reactive power planning, examining a line outage occurring in a system equipped with a Thyristor Controlled Series Compensator that reduces transmission power loss. A standard IEEE 30-bus test system is employed. The optimal configuration of all control variables is determined by the grasshopper optimization algorithm and by power flow studies. The examination is conducted through simulation in MATLAB/MATPOWER.

Despite the advantages of adding renewable power sources to traditional power grids, issues of stability and power quality emerge when the sources are combined. Large shifts in voltage may result in too much or too little power, and distortions may also arise from high inverter switching rates. Hence, the development of a powerful and intelligent controller for the grid-connected micro-grid is currently in demand. A robust optimal power flow controller using the grasshopper optimization algorithm is introduced in [61] to obtain the optimal dynamic response and power quality of the grid-connected micro-grid. To prove the performance of the proposed grasshopper optimization algorithm-based controller, its effectiveness in producing the desired power sharing level with optimal response and power quality is compared with a particle swarm optimization-based controller under micro-grid injection and sudden load-changing conditions. The proposed method provides a good response in all cases.

4.3.3 Micro-grid power system

Because of the lack of inertia and the uncertainty in determining the optimal proportional-integral controller gains, the power and frequency deviations become larger. An approach for controlling the energy produced by photovoltaic (PV) grids using the grasshopper optimization algorithm is presented in [62].

The ability of the grasshopper optimization algorithm is employed in [63] to find the optimal proportional-integral controller parameters. To verify the effectiveness of the proposed control structure, its performance in controlling micro-grid voltage, frequency, and power quality is compared with recent artificial intelligence-based control structures designed for similar control aims. The performance of the proposed grasshopper optimization algorithm is also verified by comparing its behavior on the micro-grid system with several common algorithm-based techniques. The results confirm that the grasshopper optimization algorithm converges faster than the compared algorithms and results in the smallest power and frequency overshoot. Another paper proposed load frequency control of a multi-area system combined with a micro-grid power system; the grasshopper optimization algorithm is used for tuning the controller gains in [64]. The system performance is analyzed and compared with and without energy storage systems in the micro-grids, and also under parameter changes and random load perturbations.

Because renewable energy sources are unpredictable, the grasshopper optimization algorithm is used in [65] to investigate various real-world scenarios and find the best controller gains. The frequency responses of the suggested micro-grid are compared with those of various standard controllers and several successful optimization algorithms. Eventually, the proportional-integral-derivative controller tuned with the grasshopper optimization algorithm is chosen for the case studies under four instances of source variation with step load disturbance and one case of simultaneous source and step load changes. The results of these scenarios show satisfactory performance in terms of the obtained responses.

4.4 Image processing

Multilevel thresholding is an essential method in the domain of image processing, particularly for image segmentation, and it has attracted much attention in recent years. The entropy criterion is widely adopted for its effectiveness and reliability. Although it is useful and gives adequate results in the case of bi-level thresholding, its computational complexity grows as the number of thresholds increases. To solve this problem, meta-heuristic algorithms are utilized in [66] for finding the optimal threshold values. A modified grasshopper optimization algorithm is used to provide efficient multilevel Tsallis entropy thresholding and to decrease its computational complexity. A Levy flight operator is incorporated into the grasshopper optimization algorithm to enhance the original algorithm and to balance its exploration and exploitation search strategies. Experiments are carried out using five other meta-heuristic optimization algorithms and the proposed grasshopper optimization algorithm for image segmentation. In addition, the proposed algorithm is analyzed and compared with thresholding methods based on the between-class variance (Otsu) criterion and the Renyi entropy criterion. Two classes of images, real-life and plant images, are employed in the experiments to examine the effectiveness of all involved algorithms. The qualitative results illustrate that the proposed segmentation algorithm requires fewer generations and achieves higher segmentation precision.
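To make the thresholding objective concrete, the sketch below evaluates the Otsu between-class variance referenced above for a candidate threshold vector; a meta-heuristic such as the grasshopper optimization algorithm would maximize this score over the thresholds, and the same skeleton applies to a Tsallis or Renyi entropy criterion by swapping the per-class score. The 256-bin histogram and the synthetic image are illustrative assumptions, not data from [66].

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style objective for multilevel thresholding.

    hist       : 256-bin grayscale histogram (counts)
    thresholds : candidate thresholds, e.g. [t1, t2, t3]
    Returns the between-class variance; a meta-heuristic such as GOA
    maximizes this value over the threshold vector.
    """
    p = hist / hist.sum()                      # normalized histogram
    levels = np.arange(len(hist))
    mu_total = np.sum(levels * p)
    edges = [0] + [int(t) for t in sorted(thresholds)] + [len(hist)]
    var_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                     # class probability
        if w <= 0:
            continue
        mu = np.sum(levels[lo:hi] * p[lo:hi]) / w
        var_b += w * (mu - mu_total) ** 2      # weighted class separation
    return var_b

# illustrative usage with a synthetic image
img = np.random.randint(0, 256, size=(128, 128))
hist, _ = np.histogram(img, bins=256, range=(0, 256))
print(between_class_variance(hist, [80, 160]))
```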

4.5 Network applications

Online social networks have become an essential component of today's society, and issues of privacy and anonymity have the potential to make communication via social networks dangerous. A novel architecture for a de-anonymization attack is proposed in [67] using the grasshopper optimization algorithm. An anonymity metric is further evaluated by the grasshopper optimization algorithm, which allows the comparative ranking of nodes based on their re-identification speeds. Finally, the robustness of the proposed grasshopper optimization algorithm in solving the node division task is characterized, and the privacy procedure is improved, which helps conceal structural knowledge.

Various optimization methods have been employed for distribution system reconfiguration, such as the particle swarm optimization algorithm, bacterial foraging optimization algorithm, genetic algorithm, artificial bee colony algorithm, and grey wolf optimization algorithm. The grasshopper optimization algorithm is applied in [68] to the distribution network reconfiguration problem to discover the optimal combination of switches for maximizing power loss reduction while taking into account the constraints of the network architecture. The proposed algorithm mathematically formulates feeder reconfiguration motivated by the swarming behavior of grasshoppers in nature and is then utilized to obtain the optimal configuration of the distribution network. The proposed algorithm is examined on the IEEE 33-bus radial distribution system and produces the optimal solution for the reconfiguration problem in a reduced computation time. The simulation results show consistent computing times and the high effectiveness of the proposed algorithm in comparison with other published optimization algorithms.

The growing cost of power generation and the constraints of limited fuel supplies make it important to optimize distribution networks. In this regard, optimal conductor selection can decrease electrical energy losses while improving the load profile in a cost-effective way. A full overview of the optimal conductor selection problem is given in [69]. Moreover, a new method is introduced to address the optimal conductor selection problem of radial distribution networks utilizing the grasshopper optimization algorithm. A conductor library is established based on real utility data covering 29 different types of conductors. The objective function is to reduce the combined yearly cost of power losses and the investment cost of the conductors. The investigated constraints are the bus voltage limits and the conductors' current-carrying capacities. The proposed method is applied to two separate systems: the first is a small-scale 26-bus system, and the second is a large-scale 85-bus system. The achieved results are analyzed and compared with other methods published in the literature to confirm the performance of the proposed grasshopper optimization algorithm in decreasing network losses and increasing overall profits while serving the high yearly load.

4.6 Parameter controller

The mode number and frequency bandwidth control parameters have a vital impact on the decomposition result of the variational mode decomposition method. In the standard method, the values of these decomposition parameters are preset, which makes it challenging to obtain adequate results. To tackle this issue, a parameter-adaptive variational mode decomposition method based on the grasshopper optimization algorithm is proposed in [70] to analyze vibration signals from rotating machinery. In this proposal, the optimal mode number and frequency bandwidth control parameters matching the investigated vibration signal can be determined adaptively. Firstly, an evaluation index termed the weighted kurtosis index is constructed by combining the correlation coefficient and the kurtosis index. Then, the grasshopper optimization algorithm is used to search for the decomposition parameters using the maximum weighted kurtosis index as the optimization objective. Eventually, weak fault features can be obtained by analyzing the sensitive mode with the maximum objective function value. Two case comparisons show that the proposed approach is useful for analyzing machinery vibration signals for fault diagnosis. Furthermore, comparisons with the fixed-parameter standard method and another well-known fast method highlight the benefits of the proposed approach.
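The fitness described above, an index that combines the kurtosis of a decomposed mode with its correlation to the raw signal, can be sketched as follows. The vmd callable is a hypothetical stand-in for a variational mode decomposition implementation, and the product used as the weighting is an assumption, since [70] only states that the two indices are combined into a weighted kurtosis index.

```python
import numpy as np
from scipy.stats import kurtosis

def weighted_kurtosis_fitness(signal, K, alpha, vmd):
    """Sketch of the objective used to tune decomposition parameters with GOA.

    signal : 1-D vibration signal
    K      : number of modes (integer decision variable)
    alpha  : bandwidth penalty (continuous decision variable)
    vmd    : hypothetical callable returning a (K, len(signal)) array of modes

    The cited work combines the kurtosis of each mode with its correlation to
    the raw signal; the product used here is one plausible weighting, not the
    authors' exact formula.
    """
    modes = vmd(signal, K, alpha)
    scores = []
    for m in modes:
        k = kurtosis(m, fisher=False)            # impulsiveness of the mode
        rho = abs(np.corrcoef(m, signal)[0, 1])  # relevance to the raw signal
        scores.append(k * rho)                   # weighted kurtosis index
    return max(scores)  # the optimizer maximizes the most sensitive mode

# a GOA run (or any optimizer) would search over (K, alpha) to maximize this value
```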

A new design approach is presented in [71] to find the optimal proportional-integral-derivative (PID) controller parameters of an automatic voltage regulator (AVR) system, and it utilizes the grasshopper optimization algorithm for the optimization. The proposed approach is a simple yet powerful algorithm that is capable of addressing various optimization problems, even those with unknown search spaces. The strength of the algorithm lies in producing high-quality tuning of the optimal PID controller parameters. The integral of time-weighted squared error is adopted as the performance index to verify the effectiveness of the proposed grasshopper optimization algorithm for tuning the PID controller. When analyzed and compared with other PID controllers, the proposed approach is observed to be very powerful, with the strength to improve the AVR system response.
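A minimal sketch of the integral of time-weighted squared error (ITSE) index used as the performance measure may help. The first-order plant below is an assumed stand-in for the full AVR loop (which normally comprises amplifier, exciter, generator, and sensor blocks), so only the error-weighting formula, not the plant model or the constants, reflects the cited work.

```python
import numpy as np

def itse_cost(kp, ki, kd, t_end=5.0, dt=1e-3):
    """Integral of time-weighted squared error for a unit step reference.

    The controlled plant is a simple first-order lag (an assumption standing
    in for the full AVR model); the ITSE index itself, sum(t * e(t)^2 * dt),
    is the performance measure named in the text.
    """
    tau = 0.5                       # assumed plant time constant
    y, integ, prev_e = 0.0, 0.0, 0.0
    cost = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        e = 1.0 - y                 # unit step reference minus output
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        y += dt * (-y + u) / tau    # first-order plant: tau*dy/dt = -y + u
        prev_e = e
        cost += t * e * e * dt      # time-weighted squared error
    return cost

# a GOA run would search (kp, ki, kd) to minimize itse_cost
print(itse_cost(2.0, 1.0, 0.1))
```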

The grasshopper optimization algorithm is introduced in [72] for speed control enhancement under transient and steady-state conditions for an electronically commutated motor drive. An objective function is formulated to minimize the integral square error in such a way that the gains of the speed controller are optimally adjusted. To verify the efficacy of the proposed modified grasshopper optimization algorithm, simulation tests with conventional tuning of the speed controller are run in MATLAB/Simulink, and the same tests are then carried out off-line for the device coding. The proposal achieves superior speed control and better effectiveness under transient and steady-state conditions.

4.7 Other applications

Meta-heuristic algorithms employ randomized strategies and devised mechanisms to carry out optimization. Among optimization algorithms, nature-inspired and population-based algorithms are the ones most commonly recommended [47, 73]. Such algorithms simulate natural phenomena, usually the survival behavior of living creatures; survival is the foremost goal of all creatures, and to accomplish this aim they have evolved and adapted in various ways. The grasshopper optimization algorithm is applied in [74] to obtain optimal solutions for three-dimensional architectural design problems. The final designs demonstrate that the proposed grasshopper optimization algorithm is effective in producing superior results compared to the state-of-the-art algorithms published in the literature.

Next-generation sequencers produce billions of short deoxyribonucleic acid (DNA) sequences in a massively parallel manner. The grasshopper optimization algorithm is employed in [75] as an assembler, a tool that reconstructs a genome from its fragments. It follows the overlap-layout-consensus approach. The assembler uses an efficient GPU implementation for sequence alignment during the graph construction phase and a greedy hyper-heuristic approach at the fork detection stage. A two-part fork detection technique is used to recognize repeated parts of a genome and to resolve them without misassemblies. Benchmark bacterial datasets are assessed with the widely used gold-standard tool QUAST. Compared with other well-known assemblers, the proposed grasshopper optimization algorithm produced contigs that covered the largest portion of the genomes and, simultaneously, obtained good values for the other measures, e.g., NG50 and the misassembly count.
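Since the assembler follows the overlap-layout-consensus approach, its basic operation is scoring suffix-prefix overlaps between reads. The short sketch below illustrates only this step (the GPU alignment, graph construction, and hyper-heuristic stages of [75] are not reproduced), and the example reads are made up.

```python
def suffix_prefix_overlap(a, b, min_len=3):
    """Length of the longest suffix of read `a` that exactly matches a prefix
    of read `b` (the edge weight used when building an overlap graph)."""
    max_olap = 0
    limit = min(len(a), len(b))
    for k in range(min_len, limit + 1):
        if a[-k:] == b[:k]:
            max_olap = k
    return max_olap

# illustrative reads (made up); a greedy layout would merge the best-overlapping pair first
reads = ["ATTAGACCTG", "CCTGCCGGAA", "GCCGGAATAC"]
pairs = {(i, j): suffix_prefix_overlap(reads[i], reads[j])
         for i in range(len(reads)) for j in range(len(reads)) if i != j}
print(max(pairs, key=pairs.get), max(pairs.values()))
```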

The fundamental notions of the grasshopper optimization algorithm are covered in [76]. The motivation, mathematical representation, and the algorithm itself are fully presented, and a short literature survey of the algorithm, covering several variants, developments, hybrids, and applications, is also provided. The effectiveness of the grasshopper optimization algorithm is examined on a collection of benchmark functions comprising three types (i.e., unimodal, multi-modal, and composite). The results confirm the ability of the grasshopper optimization algorithm to improve the quality of a set of random solutions, to transition from diversification (exploration) to intensification (exploitation), to cover the search space quickly, and to accelerate the convergence trajectory over the course of the generations. In addition, the grasshopper optimization algorithm is applied to a difficult problem in the domain of hand posture estimation. The obtained results show that the grasshopper optimization algorithm finds an accurate configuration of a 3D hand model to fit a given hand image captured by a camera.
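For readers who want the mechanics rather than the survey view, the sketch below implements the position update commonly used to describe the grasshopper optimization algorithm: a social-interaction term shaped by the force function s(r) = f·e^(-r/l) - e^(-r) and a linearly decreasing coefficient c that shrinks the comfort zone over the iterations. The parameter values, the distance mapping, and the sphere test function are illustrative assumptions rather than a reproduction of the reference implementation.

```python
import numpy as np

def goa_minimize(obj, lb, ub, n_agents=30, max_iter=200,
                 c_max=1.0, c_min=1e-5, f=0.5, l=1.5):
    """Minimal sketch of the grasshopper optimization algorithm (GOA).

    obj    : objective function mapping a 1-D numpy array to a scalar
    lb, ub : 1-D arrays with the lower/upper bounds of the search space
    The social-force function s(r) = f*exp(-r/l) - exp(-r) and the linearly
    decreasing coefficient c follow the usual GOA description; population
    size, iteration count, and the distance mapping are illustrative.
    """
    dim = len(lb)
    X = lb + np.random.rand(n_agents, dim) * (ub - lb)   # initial swarm
    fitness = np.array([obj(x) for x in X])
    best = X[np.argmin(fitness)].copy()                  # target position T

    s = lambda r: f * np.exp(-r / l) - np.exp(-r)        # social force

    for t in range(max_iter):
        # c shrinks the comfort zone, moving from exploration to exploitation
        c = c_max - t * (c_max - c_min) / max_iter
        X_new = np.empty_like(X)
        for i in range(n_agents):
            social = np.zeros(dim)
            for j in range(n_agents):
                if i == j:
                    continue
                dist = np.linalg.norm(X[j] - X[i]) + 1e-12
                unit = (X[j] - X[i]) / dist
                # distances are often mapped into [1, 4] before applying s()
                d_hat = 2.0 + np.remainder(dist, 2.0)
                social += c * (ub - lb) / 2.0 * s(d_hat) * unit
            X_new[i] = np.clip(c * social + best, lb, ub)
        X = X_new
        fitness = np.array([obj(x) for x in X])
        if fitness.min() < obj(best):
            best = X[np.argmin(fitness)].copy()
    return best, obj(best)

# illustrative usage on a simple sphere function
lb, ub = -5 * np.ones(5), 5 * np.ones(5)
best, val = goa_minimize(lambda x: np.sum(x ** 2), lb, ub)
print(best, val)
```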

Recently, origami tessellation has been vigorously investigated by researchers across the world. The repeated folding pattern in the tessellation produces a folded element that is developable into a 3-D covering, providing a class of surfaces suitable for adaptive building construction. Because of its complex folded surface, it is hard to model quickly using a design tool, and existing simulation tools for rigid origami tessellation have weaknesses, since the folding motion is constrained by the large number of degrees of freedom in the folding pattern. However, if the position of the actuator that controls the flexibility of the origami is detected and accurately controlled, a rigid structure can be achieved. The constraint can be recognized by examining the loops connecting the facets joined by a crease or a collection of creases. Hence, a new method to model and generate the folding of origami tessellation is proposed in [77]. The proposed method needs only a simple programming mechanism such as the grasshopper optimization algorithm and does not require any heavy computation. The folded state is obtained using the grasshopper optimization algorithm; however, the user is required to provide the mountain-valley pattern for the origami tessellation and the rotation angle for all the actuators to effect the folding. The resulting tool is used to reproduce various well-known varieties of origami tessellation.

One of the common traditional techniques used for estimating the suspended sediment of rivers and lakes is the sediment rating curve. For a reliable calculation of the amount of transported sediment based on the sediment rating curve relations, it is desirable to find optimal coefficients, and one way of finding the coefficients of the sediment rating curve equation is to take advantage of optimization algorithms. The main aim of this study is to adapt the grasshopper optimization algorithm to find the optimal relationship between water discharge and sediment discharge and to compare the results of this design with well-known optimization algorithms [78]. With regard to the objective function, which reduces the variance between the computed sediment values and the measured values, the optimal values of these coefficients are obtained. The achieved results show that the proposed grasshopper optimization algorithm performs better than the well-known algorithms in terms of the objective function; thus, the grasshopper optimization algorithm achieved the best performance on this problem, followed by the particle swarm optimization algorithm and the genetic algorithm.
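A small sketch of the coefficient-fitting problem may clarify the setup: the rating curve is taken in the common power-law form Qs = a·Q^b, the data points are made up, and scipy's differential evolution is used only as a stand-in for the grasshopper optimization algorithm described in [78].

```python
import numpy as np
from scipy.optimize import differential_evolution

# measured water discharge Q and sediment discharge Qs (made-up sample data)
Q  = np.array([12.0, 30.0, 55.0, 80.0, 140.0])
Qs = np.array([ 4.0, 18.0, 50.0, 95.0, 260.0])

def rating_curve_error(coeffs):
    """Sum of squared differences between the rating curve a*Q**b and the
    measured sediment discharge; the optimizer (GOA in the cited study,
    a stand-in here) searches the coefficients a and b that minimize it."""
    a, b = coeffs
    return np.sum((a * Q ** b - Qs) ** 2)

result = differential_evolution(rating_curve_error,
                                bounds=[(1e-4, 10.0), (0.1, 3.0)])
print(result.x, result.fun)
```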

Table 2 Summary of grasshopper optimization algorithm applications

5 Results and comparisons

This section presents a wide range of benchmark test problems with different characteristics (as shown in Table 3) to investigate, analyze, and confirm the effectiveness of the GOA [18] compared to other similar optimization algorithms (i.e., particle swarm optimization (PSO) [79], genetic algorithm (GA) [80], bat algorithm (BA) [81], firefly algorithm (FA) [10], and gravitational search algorithm (GSA) [82]). The result values are normalized between 0 and 1 to analyze and compare the results on all benchmark functions. To assess the significance of the obtained results, a ranking statistical test, the Friedman ranking test, is conducted and reported in Table 4.
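The normalization and ranking procedure can be sketched as follows; the result matrix is made-up illustrative data, and scipy's friedmanchisquare is used as a stand-in for the Friedman ranking test of Table 4 (it returns the test statistic and p-value, while the final ordering is obtained from the average ranks computed separately below).

```python
import numpy as np
from scipy.stats import friedmanchisquare

# rows: benchmark functions, columns: algorithms (made-up average results)
algorithms = ["GOA", "PSO", "GA", "BA", "FA", "GSA"]
results = np.array([
    [0.12, 0.30, 0.55, 0.80, 0.25, 0.40],
    [1.5e3, 2.1e3, 4.0e3, 6.2e3, 1.9e3, 2.8e3],
    [0.02, 0.08, 0.15, 0.30, 0.05, 0.11],
])

# min-max normalization per benchmark so every function contributes on a [0, 1] scale
norm = (results - results.min(axis=1, keepdims=True)) / \
       (np.ptp(results, axis=1, keepdims=True) + 1e-12)
print(dict(zip(algorithms, norm.sum(axis=0))))   # lower total = better overall

# Friedman test over the per-function results of each algorithm
stat, p_value = friedmanchisquare(*[results[:, j] for j in range(results.shape[1])])
print(stat, p_value)

# average rank per algorithm (rank 1 = best on a function; ties broken arbitrarily)
ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1
print(dict(zip(algorithms, ranks.mean(axis=0))))
```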

Table 3 The average results for solving benchmark functions
Table 4 The results of the Friedman ranking test

As shown in Table 3, the obtained results indicate that the GOA achieves better results in almost all test cases. Firstly, the GOA gives better results on three out of six unimodal benchmark functions. Given the properties of the unimodal test functions, these results confirm that the GOA has a high exploitation capability and convergence rate. Secondly, as shown in Table 3, the results confirm that the GOA outperforms all the compared optimization algorithms on the multi-modal benchmark functions (F7, F9, F11, and F12). These results confirm that the GOA benefits from a strong exploration capability and avoids being trapped in local optima. Finally, the results of the GOA on the composite benchmark functions confirm its excellence in solving optimization problems with extended search spaces. Based on the normalized results, the overall performance of all comparative algorithms can also be compared: the last row of Table 3 shows the summation of the average results of all algorithms on all benchmark functions, where the GOA gives the minimum average value and reliably outperforms the other comparative algorithms. Table 4 shows the summation, average, and final ranking of all algorithms on all benchmark functions. The ranking results in Table 4 show that the performance of the GOA is statistically significant; it obtained the first rank, followed by FA, PSO, GSA, GA, and BA.

6 Assessment and evaluation of grasshopper optimization algorithm

As reviewed earlier, the grasshopper optimization algorithm has been widely employed to address various optimization problems since it was proposed. The simple motivation, few parameters, and adaptive exploratory behavior are the essential reasons for the success of this algorithm. In comparison with other meta-heuristic optimization algorithms, however, it has several limitations and suffers from certain drawbacks.

The main limitations arise from the no free lunch (NFL) theorem in the search and optimization domain, which states that no single optimization algorithm is suitable for solving all varieties of optimization problems. In other words, the performance of all optimization algorithms, averaged over a standard finite set F of benchmark test functions, is the same if and only if F is closed under permutation. This means that the grasshopper optimization algorithm may need modification, adjustment, and tuning when solving real-world optimization problems. Another limitation is the single-objective nature of the algorithm, which makes it able to address only single-objective problems; other classes (binary, dynamic, discrete, multi-objective, and so on) can be tackled with the grasshopper optimization algorithm only by introducing specialized operators or dedicated variants.

The foremost drawback of the grasshopper optimization algorithm is its limited ability to manage the complexities of multi-modal search landscapes, as its adjusting parameter (c) drives the swarm toward the current near-optimal solution. Adding a further randomized, more sophisticated mechanism to improve the solutions during the optimization process would enhance the likelihood of finding the optimum when addressing complex multi-modal optimization problems. The performance of the grasshopper optimization algorithm also degrades as the number of decision variables increases, possibly because the initial population stagnates in a local optimum when addressing such problems. Meanwhile, there is no specific procedure or operator in the literature to address this premature convergence to local optima.

The authors of the grasshopper optimization algorithm conducted sufficient experiments and observed that the investigation using four test functions yields the best average performance. Moreover, they also used a collection of real-world problems with common dimensional spaces to test the algorithm's performance. Last but not least, the fast convergence speed, obtained through the stimulated local-search-like mechanism that rushes toward a local optimum, can be problematic when addressing difficult optimization problems with a large number of decision variables. Mechanisms should be provided to reduce the convergence velocity and the exploitation pressure whenever the algorithm is trapped in local solutions. Adaptive mechanisms are considered beneficial tools in this regard, adjusting the convergence acceleration according to the number of iterations since the best solution found so far was last improved.

7 Conclusion and possible future directions

The grasshopper optimization algorithm is a promising algorithm that has been successful in solving various optimization problems. Its advantages over other optimization algorithms include ease of implementation, speed in searching, and ease of modifying its components. Furthermore, it has a unique feature, namely the adjustable parameter c, which balances the search strategies (exploration and exploitation) [83]. However, it suffers from slow convergence and is prone to getting stuck in local optima [84].

For this review, over 50 research papers published between 2017 and the beginning of 2019 were collected and analyzed to highlight the advantages and disadvantages, robustness, successes, and weaknesses of the grasshopper optimization algorithm and its variants. Many of these papers proposed variants of the grasshopper optimization algorithm which enhance the performance of the original grasshopper optimization algorithm and enable it to solve many kinds of optimization problems. The algorithm has been applied in many different fields, including machine learning (feature selection, data clustering, and training neural networks), engineering (scheduling, control of power systems, and micro-grid power systems), image processing, and others.