
1 Introduction

Over the decades, evolutionary computation (EC) algorithms have been successfully applied to solve a variety of practical computational problems, such as optimization of objective functions [1, 2], optimization of filter parameters [3, 4], optimization of parameters for improving image processing results [5, 6], and optimal feature selection in pattern recognition [7–9]. These engineering applications are essentially motivated by the near-global optimization character of evolutionary computational methods, which allows such algorithms to accomplish a task within a very large search space (of a given problem). However, certain ECs have not yet been investigated for solving a particular problem efficiently. The scope of the proposed idea is to bridge this gap, and it may generate interest among readers doing research in this area. There is a strong need to develop new hybrid algorithms to suit the particular problem at hand. Further, by presenting widespread simulation results on the performance of two relatively new ECs, we try to convince readers of their importance. In this chapter, an attempt is made to draw researchers' attention to the suitability of the proposed technique for solving the function optimization problem.

Many types of heuristic search ECs have been proposed by investigators: the Genetic algorithm (GA) [10], Ant colony algorithm (ACA) [11], Particle swarm optimization (PSO) algorithm [12], Bacterial foraging optimization (BFO) algorithm [13], Cuckoo search (CS) algorithm [14–19], Gravitational search algorithm (GSA) [20], etc. However, no single algorithm is efficient for all types of optimization problems; certain algorithms offer the best solutions only for particular (given) problems. Thus, it is necessary to devise hybridized heuristic population-based optimization methods for solving different applications efficiently. In this chapter, a population-based hybridized optimization algorithm is proposed. The main thrust of this work is to combine the social thinking capability found in Cuckoo birds with the local search capability observed in the Gravitational search method. An efficient optimization scheme is well judged by its two key features: (i) exploration and (ii) exploitation. Note that exploration is the capability of an EC to explore the complete search space, and exploitation is the skill to converge to a better result. By combining these two features, the best solutions can be obtained with a smaller number of function evaluations. Therefore, here an attempt is made to hybridize CS with GSA in order to balance exploration and exploitation ability. This chapter describes the use of both methods in a balanced manner. Note that the hybridized approach provides significant improvements in the solutions. Interestingly, the CS is popular for its simplicity and its ability to search for a near-global solution. Further, GSA provides a better local search method with good initial estimates to solve a particular problem.

In fact, EC techniques were initially proposed by different researchers as an approach to artificial intelligence. Later, they became more popular and were used directly to solve analytical and combinatorial optimization problems. However, the demerit of an EC method is its slow convergence in solving multimodal optimization problems when searching for near-global optimum values. In this context, various EC techniques have been proposed, and recently there has been a strong need to develop new hybridized EC techniques to find better results. This chapter discusses the application of a new hybrid EC method for optimization. Extensive empirical evaluations are important to measure the strengths and weaknesses of a particular algorithm. In this connection, we consider 23 standard benchmark functions [21]. The proposed CS–GSA algorithm performs better than the GSA and CS techniques for multimodal functions with many local minima. In addition, the proposed algorithm is faster than GSA: the Lévy flight incorporated in the algorithm helps it reach the near-global optimum very quickly. The idea behind this is simple and straightforward, and the thinking ability of the Cuckoo birds is very useful for exploitation. In this study, we propose the hybridization of these two algorithms. Here, the 23 standard benchmark functions [21] are utilized to evaluate the performance of our method, and the results are compared to both the CS and GSA algorithms. The solutions presented in this chapter reveal that our proposed algorithm is well suited for function minimization.

For this work, we consider 23 benchmark functions, given in Table 1. More details on these test functions are discussed in the Appendix. Note that the functions F 1–F 13 refer to high-dimensional problems, whereas the functions F 1–F 7 are known as unimodal functions. The function F 5 is a step function; it has only one minimum and is discontinuous. The function F 7 is the quartic function with noise, where rand(0, 1) is a uniformly distributed random variable in the range (0, 1). We also consider the multimodal functions F 8–F 13 for our experimental study. For these functions, the number of local minima increases exponentially with the problem size, so solving such functions is important for validating our algorithm. The functions F 14–F 23 are low-dimensional functions with only a few local minima. It is relatively easy to optimize the unimodal functions; the optimization is much more significant in the case of multimodal functions. The ability of an algorithm is well judged by its behavior in escaping from poor local optima while localizing the best near-global solution. Note that the functions F 14–F 23 are multimodal in nature, but are fixed-dimensional functions. These functions are also very useful for measuring the accuracy of hybrid soft computing techniques.

Table 1 Unimodal benchmark functions

The organization of the rest of the chapter is as follows: Sect. 1 is the introduction. Related work is discussed in Sect. 2. The hybridized population-based CS–GSA algorithm is presented in Sect. 3. Extensive results and discussions are presented in Sect. 4. Finally, conclusions are drawn in Sect. 5.

2 Related Works

The main objective of an optimization algorithm is to find a global or near-global optimal solution. Several EC algorithms are presented in the literature, but the CS algorithm [14–16] has its own importance: it involves fewer search parameters, and so it is a faster optimization method. Recently, Chetty and Adewumi [22] have presented a case study on an annual crop planning problem using a CS algorithm along with the GA and glow worm swarm optimization (GSO); the former showed superior results. Chen and Do [23] used the CS to train feed-forward neural networks for predicting student academic performance. Swain et al. [24] have proposed a neural network based on CS and applied it to noise cancellation. Khodier [25] used the CS algorithm for antenna array optimization. Although the CS gives good performance, some researchers add further characteristics to the search process; Zhao and Li [26], for example, proposed opposition-based learning to increase the exploration capability of the CS algorithm.

Other non-evolutionary algorithms are also good at finding near-global optima. The GSA [20] is based on the law of gravity and has good convergence in the search space. Saha et al. [27] and Rashedi et al. [28] used the GSA for filter design and modeling. Many authors have proposed new schemes to enhance the performance of the GSA, such as the Disruption operator [29], black hole operator [30], Niche GSA [31], and binary GSA (BGSA) [32].

Every algorithm has its own merits and demerits. To improve the search performance of one algorithm, some researchers have proposed hybrid algorithms combining the features of more than one algorithm. Mirjalili and Hashim [33] proposed the hybrid algorithm PSOGSA by integrating the exploitation ability of PSO and the exploration ability of GSA. Jiang et al. [34] proposed HPSO-GSA for solving economic emission load dispatch problems by updating the particle position with the PSO velocity and the GSA acceleration. Ghodrati and Lotfi [35] proposed the hybrid algorithm CS/PSO by combining the Cuckoo birds' awareness of each other's positions with the swarm intelligence of PSO. A hybrid GSA–ABC algorithm was proposed by Guo [36] to update the ant colony with the help of an artificial bee colony (ABC) algorithm and GSA. Sun and Zhang [37] have proposed the GA–GSA hybrid algorithm for image segmentation based on the GA and GSA. Yin et al. [38] proposed a hybrid IGSAKHM approach for clustering by combining the K-harmonic means (KHM) clustering technique and an improved gravitational search algorithm (IGSA).

In this section, we discuss the potential features of two different algorithms used for optimization.

2.1 Cuckoo Search (CS) Algorithm

The Cuckoo search (CS) method is a nature-inspired technique introduced by Yang and Deb [14–17]. The CS is inspired by the interesting way in which the Cuckoo bird lays its eggs in the nest of another host bird. The number of available host nests is fixed. The egg laid by the Cuckoo bird may be discovered by the host bird with a probability \(p_{a} \in \left[ {0,1} \right]\). The host bird then either throws those eggs away or abandons the present nest; sometimes the host bird builds a new nest in a totally different location [18] to deceive the Cuckoo bird. Here, each egg in the nest represents a solution. It is interesting to note that the CS has similarity to the well-known hill climbing algorithm. In the Cuckoo search, a particular pattern corresponds to a nest, whereas an individual feature of that pattern corresponds to an egg of the Cuckoo bird.

Interestingly, the CS method is based on the following three idealized rules:

  1. Every Cuckoo bird lays one egg at a time and dumps it in a randomly selected nest.

  2. Nests containing the best quality of eggs are carried over to the subsequent generations.

  3. The number of available host nests is always fixed. Note that the egg laid by a Cuckoo bird is discovered by the host bird with a probability p a Є [0, 1]. The worst nests are then revealed and discarded from future calculations.

The CS algorithm is mathematically modeled as follows: For a new search space \({\rm X}_{i} \left( {t + 1} \right)\) for Cuckoo i \(\left( {{\text{for }}i = 1,2, \ldots ,N} \right)\) at time \(t + 1\),

$${\rm X}_{i}^{t + 1} = {\rm X}_{i}^{t} + \alpha \oplus \text{Lévy} \left( \lambda \right),$$
(1)

where \({\rm X}_{i}\left( t \right)\) is the current search space at time t, represented as \({\rm X}_{i} = \left( {x_{i}^{1} , \ldots ,x_{i}^{d} , \ldots ,x_{i}^{n} } \right)\), \(\alpha > 0\) is the step size related to the scale of the problem, \(\oplus\) is the entry-wise multiplication, and \(\text{Lévy}\left( \lambda \right)\) is the random walk through the Lévy flight. The Lévy flight [19] provides a random walk whose step size is drawn from the Lévy distribution \(\left( {\text{Lévy}\sim u = t^{ - \lambda } } \right)\), with \(\lambda\) chosen such that \(1 < \lambda < 3\).

Here, time t denotes the current generation number (t = 1, 2, 3, …, t max), and t max represents the predetermined maximum number of generations. The initial value of the dth attribute of the ith pattern is found by

$$x_{i}^{d} \left( {t = 0} \right) = \text{rand} \cdot \left( {u^{d} - l^{d} } \right) + l^{d} ,$$
(2)

where l d and u d are the lower and the upper search space limits of the dth attribute, respectively. These limits are useful in implementations, as they provide control over the boundary conditions at every calculation step: whenever the value of the corresponding attribute exceeds the allowed search space limits, it is reset to the value of the nearest limit of that attribute. The CS method identifies the most successful pattern as the X best pattern before starting the iterative search. In the CS algorithm, the iterative evolution of the pattern matrix starts with the computation of the factor \(\Phi\) as

$$\varPhi = \left( {\frac{{\varGamma \left( {1 + \gamma } \right) \cdot \sin \left( {\pi \cdot \gamma /2} \right)}}{{\varGamma \left( {\frac{1 + \gamma }{2}} \right) \cdot \gamma \cdot 2^{{\frac{\gamma - 1}{2}}} }}} \right)^{{\frac{1}{\gamma }}} .$$
(3)

The \(\varGamma\) in the above expression represents the gamma function, and \(\gamma = \lambda - 1\).

The evolution phase of the Xi pattern is defined by the donor vector v, where v = X i. The required step size value is calculated as

$${\text{stepsize}}_{d} = 0.01 \cdot \left( {\frac{{u_{d} }}{{v_{d} }}} \right)^{{\frac{1}{\gamma }}} \cdot (v - {\rm X}_{\text{best}} ).$$

Note that u = Φ · randn[n] and v = randn[n].

Here, a donor pattern is randomly mutated as

$$v = v + {\text{stepsize}}_{d} \cdot {\text{randn}}[n].$$
(4)

In this scheme, the update procedure of X best in CS is stated as

$${\text{X}}_{\text{best}} \leftarrow f\left( {{\text{X}}_{\text{best}} } \right) \le f\left( {{\text{X}}_{i} } \right).$$
(5)

It is noteworthy to mention here that the control parameters of the CS technique are the scale factor (λ) and the mutation probability (p a ). The flow chart for the CS algorithm is shown in Fig. 1.

Fig. 1 Flow chart of CS

PseudoCode for CS

First, identify the search space: the dimension of the search problem ‘n,’ the range of the objective function, and the objective function F(X). Then choose the important parameters N, p a , α, λ, t max, and set t = 1. Also, randomly initialize the population of N host nests \({\rm X}_{i} \left( t \right) = \left( {x_{i}^{1} , \ldots ,x_{i}^{d} , \ldots ,x_{i}^{n} } \right)\) with dimension n for i = 1, 2, …, N.
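The steps described above (random initialization, a Lévy-flight perturbation following Eqs. (1), (3), and (4), greedy replacement as in Eq. (5), and abandonment of the worst nests with probability p a) can be condensed into the short Python sketch below. This is only an illustrative reconstruction of the standard CS formulation, not the authors' reference code; the helper name `cuckoo_search`, the Mantegna-style step generation, and the default parameter values are assumptions.

```python
import numpy as np
from math import gamma, sin, pi

def cuckoo_search(f, lower, upper, N=25, pa=0.25, alpha=0.01, lam=1.5, t_max=1000):
    """Minimal Cuckoo Search sketch (assumed standard formulation of Eqs. (1)-(5))."""
    rng = np.random.default_rng()
    n = len(lower)
    gam = lam  # Mantegna index used in Eq. (3); set directly here as an assumption
    # Eq. (3): scale factor Phi of Mantegna's construction
    phi = ((gamma(1 + gam) * sin(pi * gam / 2)) /
           (gamma((1 + gam) / 2) * gam * 2 ** ((gam - 1) / 2))) ** (1 / gam)
    nests = rng.uniform(lower, upper, size=(N, n))      # Eq. (2): random initialization
    fit = np.array([f(x) for x in nests])
    best = nests[fit.argmin()].copy()
    for t in range(t_max):
        # Levy-flight step (Eqs. (1) and (4)) built from u = Phi*randn and v = randn
        u = phi * rng.standard_normal((N, n))
        v = rng.standard_normal((N, n))
        step = u / np.abs(v) ** (1 / gam)
        new = np.clip(nests + alpha * step * (nests - best), lower, upper)
        new_fit = np.array([f(x) for x in new])
        better = new_fit < fit                           # Eq. (5): keep improvements only
        nests[better], fit[better] = new[better], new_fit[better]
        # abandon a fraction pa of the worst nests and re-initialize them randomly
        worst = np.argsort(fit)[-max(1, int(pa * N)):]
        nests[worst] = rng.uniform(lower, upper, size=(len(worst), n))
        fit[worst] = np.array([f(x) for x in nests[worst]])
        best = nests[fit.argmin()].copy()
    return best, fit.min()
```

For example, `cuckoo_search(lambda x: float(np.sum(x ** 2)), np.full(10, -100.0), np.full(10, 100.0))` would minimize a simple sphere-type function in ten dimensions under these assumptions.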

2.2 Gravitational Search Algorithm (GSA)

The GSA is based on Newton’s law of gravitation and was introduced in [20]. The law states that ‘Every particle in the universe attracts every other particle with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.’ Note, however, that in the GSA formulation the force is taken to be inversely proportional to the distance R between the agents (rather than to R²), as reflected in Eq. (6).

This algorithm is considered as a collection of N agents (masses), where each mass corresponds to a solution of the given optimization problem. It is interesting to note that a heavier mass has a greater force of attraction and may be very near to the global optimum. Let us initialize the search space of the ith agent as \({\rm X}_{i} = \left( {x_{i}^{1} , \ldots ,x_{i}^{d} , \ldots ,x_{i}^{n} } \right)\) \(\left( {{\text{for }}i = 1,2, \ldots ,N} \right)\), where n signifies the dimension of the search space. At a time t, the force of attraction on mass ‘i’ from mass ‘j’ is defined as

$$F_{\text{ij}} \left( t \right) = G\left( t \right)\frac{{M_{\text{pi}} \left( t \right) \times M_{\text{aj}} \left( t \right)}}{{R_{{{\text{ij}}}} \left( t \right) + \varepsilon }}\left( {x_{j} \left( t \right) - x_{i} \left( t \right)} \right),$$
(6)

where \(M_{\text{aj}}\) is the active gravitational mass of agent j, \(M_{\text{pi}}\) is the passive gravitational mass of agent i, \(G\left( t \right)\) is the gravitational constant at a particular time t, and \(R_{\text{ij}} \left( t \right)\) is the Euclidean distance between agents ‘i’ and ‘j,’ written as

$$R_{\text{ij}} \left( t \right) = \left\| {X_{i} \left( t \right) - X_{j} \left( t \right)} \right\|_{2} .$$
(7)

It is noteworthy to mention here that the gravitational constant G gradually decreases with time. This helps the algorithm reach the minima in the search space. Therefore, the gravitational constant G is considered as a function of its initial value G 0 and the time t. The phenomenon can mathematically be modeled as

$$G\left( t \right) = G_{0} \times e^{{\left( { - \beta \frac{t}{{t_{\hbox{max} } }}} \right)}} ,$$
(8)

where \(\beta\) is the descending coefficient and t max is the maximum number of iterations considered for the simulation work.

Thus, the total amount of the force that acts on the agent i is \(F_{i} \left( t \right)\), which can be computed from Eq. (6) as

$$F_{i} \left( t \right) = \sum\nolimits_{j = 1,j \ne i}^{N} {{\text{rand}}_{j}\, F_{\text{ij}} \left( t \right)} .$$
(9)

Here, different masses are computed from the fitness values. Further, the masses are updated by the following set of equations:

$$M_{\text{ai}} = M_{\text{pi}} = M_{\text{ii}} = M_{i} ,i = 1,2, \ldots ,N,$$
(10)
$$m_{i} \left( t \right) = \frac{{{\text{fit}}_{i} \left( t \right) - {\text{worst}}\left( t \right)}}{{{\text{best}}\left( t \right) - {\text{worst}}\left( t \right)}},$$
(11)
$$M_{i} \left( t \right) = \frac{{m_{i} \left( t \right)}}{{\sum\nolimits_{j = 1}^{N} {m_{j} \left( t \right)} }},$$
(12)

where \({\text{fit}}_{i} \left( t \right)\) designates the fitness value of agent i at a particular time t, best(t) signifies the best fitness value, and worst(t) the worst fitness value among the N agents. The acceleration of agent i at time t is then given by

$$a_{i} \left( t \right) = \frac{{F_{i} \left( t \right)}}{{M_{i} \left( t \right)}},$$
(13)

where \(M_{i} \left( t \right)\) is coined as the mass of an agent i.

Finally, the velocity and the position of an agent in the search space are computed as given below:

$$x_{i} \left( {t + 1} \right) = x_{i} \left( t \right) + v_{i} \left( {t + 1} \right),$$
(14)
$${\text{and}},v_{i} \left( {t + 1} \right) = {\text{rand}}_{i} \times v_{i} \left( t \right) + a_{i} \left( t \right).$$
(15)

Note that the positions are updated iteratively using the above equations until the GSA reaches the global or near-global minimum. No further change in the masses should occur after the global or near-global minimum is attained. The flow chart for GSA is displayed in Fig. 2.

Fig. 2 Flow chart of GSA

PseudoCode for GSA

In the beginning, the algorithm recognizes the search space: the dimension of the search problem ‘n,’ the range of the objective function, and the objective function itself, i.e., F(X). Then choose the important parameters N, G 0, β, t max, and set t = 1. After choosing the parameters, randomly initialize the population of N agents (or masses) \({\rm X}_{i} \left( t \right) = \left( {x_{i}^{1} , \ldots ,x_{i}^{d} , \ldots ,x_{i}^{n} } \right)\) with dimension n for i = 1, 2, …, N.
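For completeness, the GSA update loop described by Eqs. (6)–(15) can be sketched as follows. Again, this is a minimal illustrative implementation under the same assumptions as the CS sketch above; the function name, the default parameters, and the small constant `eps` that guards against division by zero are ours, not the chapter's.

```python
import numpy as np

def gsa(f, lower, upper, N=25, G0=100.0, beta=20.0, t_max=1000, eps=1e-10):
    """Minimal GSA sketch following Eqs. (6)-(15); an illustrative reconstruction."""
    rng = np.random.default_rng()
    n = len(lower)
    X = rng.uniform(lower, upper, size=(N, n))
    V = np.zeros((N, n))
    for t in range(t_max):
        fit = np.array([f(x) for x in X])
        best, worst = fit.min(), fit.max()
        # Eqs. (10)-(12): masses computed from fitness values (minimization)
        m = (fit - worst) / (best - worst + eps)
        M = m / (m.sum() + eps)
        # Eq. (8): exponentially decaying gravitational constant
        G = G0 * np.exp(-beta * t / t_max)
        # Eqs. (6), (9): total, randomly weighted force acting on each agent
        F = np.zeros((N, n))
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue
                R = np.linalg.norm(X[i] - X[j])            # Eq. (7)
                F[i] += rng.random() * G * M[i] * M[j] * (X[j] - X[i]) / (R + eps)
        a = F / (M[:, None] + eps)                          # Eq. (13)
        V = rng.random((N, n)) * V + a                      # Eq. (15)
        X = np.clip(X + V, lower, upper)                    # Eq. (14) plus bound control
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], fit.min()
```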

3 A Hybrid CS–GSA Algorithm

It is well known from the literature that the CS is a heuristic search method based on an evolutionary computational approach. The beauty of the CS algorithm is that it uses a randomized walk via a Lévy flight, as described in [14, 15]. The Lévy flight is quite effective in exploring the search space, and the step size is drawn from the Lévy distribution as explained in [19].

Let us choose α as 1 (as \(\alpha > 0\)). So Eq. (1) is reduced to

$${\rm X}_{i} \left( {t + 1} \right) = {\rm X}_{i} \left( t \right) + \text{Lévy}\left( \lambda \right).$$
(16)

From Eq. (16), it is clear that the new search space (the new solution) depends only on a Lévy distribution. Let us introduce a term \(l\,{\text{Best}}\;\left( t \right)\), which provides the best local solution among i = 1, 2, …, N at time t. Here, \(l{\text{Best}}\left( t \right)\) can be expressed by

$$l{\text{Best}}\left( t \right) = {\rm X}_{j} \left( t \right){\text{ such that }}f\left( {{\rm X}_{j} \left( t \right)} \right) = \mathop {\hbox{min} }\limits_{i = 1,2, \ldots ,N} f\left( {{\rm X}_{i} \left( t \right)} \right){\text{ at time }}t.$$
(17)

Let us again incorporate an additional (proportionate) term into the new solution, namely the difference between the current solution and the local best solution at time t. Therefore, Eq. (16) can be expressed by

$${\rm X}_{i} \left( {t + 1} \right) = {\rm X}_{i} \left( t \right) + \text{Lévy}\left( \lambda \right) \times \left( {l{\text{Best}}\left( t \right) - {\rm X}_{i} \left( t \right)} \right).$$
(18)

Further, let us see how every solution differs from the other at time t. In this sense, the acceleration of an agent i at a time t is used to provide enhancement to the local search in the GSA algorithm. Once more, we incorporate Eq. (13) in Eq. (18) as expressed by

$${\rm X}_{i} \left( {t + 1} \right) = {\rm X}_{i} \left( t \right) + \text{Lévy}\left( \lambda \right) \times \left( {l{\text{Best}}\left( t \right) - {\rm X}_{i} \left( t \right)} \right) + a_{i} \left( t \right).$$
(19)

It is noteworthy to mention here that \(a_{i} (t)\) is defined in Eq. (13). If we choose α as the proportionate measure of the step size, then Eq. (19) can be re-organized as

$${\rm X}_{i} \left( {t + 1} \right) = {\rm X}_{i} \left( t \right) + \alpha \times \text{Lévy}\left( \lambda \right) \times \left( {l{\text{Best}}\left( t \right) - {\rm X}_{i} \left( t \right)} \right) + a_{i} \left( t \right).$$
(20)

Thus, Eq. (20) provides the new solution space for Cuckoo Search–Gravitational Search Algorithm (CS–GSA) from the list of current solutions obtained in this method. The flow chart for our new hybrid CS–GSA algorithm is shown in Fig. 3.

Fig. 3 Flow chart of CS–GSA

PseudoCode for CS–GSA

At the start, the objective function \(f\left( {\rm X} \right)\) with dimension n is identified. Then choose the parameters N, p a , G 0, α, λ, β, t max, and set t = 1 to control the algorithm during the iterations. Randomly initialize the population of N host nests \({\rm X}_{i} \left( t \right) = \left( {x_{i}^{1} , \ldots ,x_{i}^{d} , \ldots ,x_{i}^{n} } \right)\) with dimension n for i = 1, 2, …, N at t = 1.

Finally, report the best \(f\left( {{\rm X}_{i} } \right)\) over i = 1, 2, …, N, together with the corresponding \({\rm X}_{i}\).
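Putting the two pieces together, a minimal sketch of the CS–GSA loop built around the position update of Eq. (20) might look as follows. The function name `cs_gsa`, the default parameter values, and the way the worst nests are abandoned with probability p a are assumptions; the chapter specifies the update rule itself but not these implementation details.

```python
import numpy as np
from math import gamma, sin, pi

def cs_gsa(f, lower, upper, N=25, pa=0.25, alpha=1.0, lam=1.5,
           G0=100.0, beta=20.0, t_max=1000, eps=1e-10):
    """Minimal CS-GSA sketch built around Eq. (20); an illustrative reconstruction."""
    rng = np.random.default_rng()
    n = len(lower)
    gam = lam
    phi = ((gamma(1 + gam) * sin(pi * gam / 2)) /
           (gamma((1 + gam) / 2) * gam * 2 ** ((gam - 1) / 2))) ** (1 / gam)
    X = rng.uniform(lower, upper, size=(N, n))
    for t in range(t_max):
        fit = np.array([f(x) for x in X])
        lbest = X[fit.argmin()].copy()                      # Eq. (17): local best pattern
        # GSA part: masses, forces, and acceleration (Eqs. (6)-(13))
        m = (fit - fit.max()) / (fit.min() - fit.max() + eps)
        M = m / (m.sum() + eps)
        G = G0 * np.exp(-beta * t / t_max)                  # Eq. (8)
        F = np.zeros((N, n))
        for i in range(N):
            for j in range(N):
                if i != j:
                    R = np.linalg.norm(X[i] - X[j])
                    F[i] += rng.random() * G * M[i] * M[j] * (X[j] - X[i]) / (R + eps)
        a = F / (M[:, None] + eps)
        # CS part: Levy-flight step toward lBest (Mantegna construction)
        u = phi * rng.standard_normal((N, n))
        v = rng.standard_normal((N, n))
        levy = u / np.abs(v) ** (1 / gam)
        # Eq. (20): combined CS-GSA position update
        X = np.clip(X + alpha * levy * (lbest - X) + a, lower, upper)
        # abandon a fraction pa of the worst nests (CS behaviour)
        fit = np.array([f(x) for x in X])
        worst = np.argsort(fit)[-max(1, int(pa * N)):]
        X[worst] = rng.uniform(lower, upper, size=(len(worst), n))
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], fit.min()
```

Under these assumptions, the routine finally reports the best f(X i) and the corresponding X i, as prescribed by the pseudocode above.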

4 Results and Discussions

In this work, the main thrust is to improve upon the standard CS methodology, and also to bring some improvement over the GSA. In this context, the new CS–GSA algorithm has been developed in the previous section.

4.1 Performance Evaluation Using Standard Benchmark Functions

In the performance evaluation of the proposed algorithm, we consider 23 standard benchmark functions [21] displayed in Tables 1, 2 and 3. Note that the convergence rate of the unimodal benchmark functions is important to validate an optimization algorithm. These useful functions are listed in Table 1.

Table 2 Multimodal benchmark functions
Table 3 Multimodal benchmark functions with fixed dimension

It is noteworthy to mention here that the multimodal benchmark functions also have a significant role in validating optimization algorithms. These multimodal functions have many local minima, so it is difficult to optimize these functions. Such functions are displayed in Table 2 and are used for the performance measure.

Further, multimodal functions with fixed dimensions are also considered in this work. These types are displayed in Table 3. Generally, all types of optimization algorithms show similar performance on these functions.

In this simulation study, the range of x i is different for different functions; these ranges are described in the Appendix. The choice of parameters is important for evaluating the optimization algorithms. We have chosen the best parameters for our study based on extensive simulations, and the same parameters are used for all three algorithms. The parameter settings of GSA, CS, and CS–GSA used for validating the benchmark functions given in Tables 1, 2 and 3 are shown in Table 4.

Table 4 Parameter setting for GSA, CS, and CS–GSA for the benchmark functions (F 1–F 23)

The benchmark functions are categorized into three different tables: unimodal test functions (F 1–F 7), multimodal test functions (F 8–F 12), and multimodal test functions with fixed dimensions (F 13–F 23). The ranges of the objective functions and the global minima are given in the Appendix. The search dimension ‘n’ is taken as 10 and 50 for the unimodal functions given in Table 1 and the multimodal benchmark functions given in Table 2. The search dimension ‘n’ for the multimodal functions with fixed dimension is given in Table 3.

The performance evaluations of GSA, CS, and CS–GSA for the unimodal benchmark functions are presented in Tables 5 and 6.

Table 5 Performance evaluation of GSA, CS, and CS–GSA for the unimodal benchmark functions (displayed in Table 1) with n = 10
Table 6 Performance evaluation of GSA, CS, and CS–GSA for the unimodal benchmark functions (displayed in Table 1) with n = 50

From Table 5, it is observed that the performance of GSA for the unimodal benchmark functions F 1, F 2, and F 4 with n = 10 is better than that of the other two algorithms, but for the other functions the CS–GSA algorithm performs better. For all benchmark functions, the final results are reported as the ‘Best,’ ‘Median,’ and ‘Ave’ among 50 independent runs. Here, ‘Best’ is the best fitness value obtained over the 50 runs, ‘Median’ is the median of the 50 fitness values, and ‘Ave’ is their average. Within a function, the performance of GSA, CS, and CS–GSA is compared, and the best solutions among the three algorithms are shown in boldface.

The performance evaluation of GSA, CS, and CS–GSA for the unimodal benchmark functions F 1 to F 7 with n = 50 is displayed in Table 6. Here, the proposed CS–GSA algorithm performs well as compared to other algorithms.

A comparison of these algorithms for the unimodal benchmark functions F 4 and F 7 with n = 50 is shown in Fig. 4. It is seen that CS–GSA offers the best values compared to the other algorithms. Note that the maximum number of iterations considered here is 1000, and GSA is the second best.

Fig. 4 Performance comparison of GSA, CS, and CS–GSA for the unimodal functions F 4 and F 7 with n = 50

The performance evaluation of GSA, CS, and CS–GSA for the multimodal benchmark functions F 8 to F 12 with n = 10 is displayed in Table 7. Here, the proposed CS–GSA algorithm performs well for all functions except F 12.

Table 7 Performance evaluation of GSA, CS, and CS–GSA for the multimodal benchmark functions given in Table 2 with n = 10

The performance evaluation of GSA, CS, and CS–GSA for the multimodal benchmark functions F 8 to F 12 with n = 50 is displayed in Table 8. Here, the new hybrid CS–GSA algorithm performs well for all functions except F 9. For the function F 9, GSA performs better.

Table 8 Performance evaluation of GSA, CS, and CS–GSA for the multimodal benchmark functions given in Table 2 with n = 50

A comparison of these algorithms for the multimodal benchmark functions F 8 and F 12 with n = 50 is shown in Fig. 5. It is observed that the performance of the CS–GSA algorithm is better compared to other algorithms. Here, the maximum number of iterations is 1000. From Fig. 5, it is observed that the GSA is the second contestant.

Fig. 5 Performance comparison of CS–GSA, CS, and GSA for the multimodal functions F 8 and F 12 with n = 50

The performance evaluation of GSA, CS, and CS–GSA for the multimodal benchmark functions F 13 to F 23 with fixed dimension is displayed in Table 9. Here, the new hybrid CS–GSA algorithm performs well for all functions except F 14 and F 15. For the function F 14, CS performs better than the GSA and CS–GSA algorithms, and for the function F 15, GSA performs better than the CS and CS–GSA algorithms. From the ‘Best,’ ‘Median,’ and ‘Average’ values, one can claim that CS–GSA can be used for the optimization of such types of functions.

Table 9 Performance evaluation of GSA, CS, and CS–GSA for the multimodal benchmark functions given in Table 3 with fixed dimension

A comparison of these algorithms for the multimodal benchmark functions F 15 and F 17 with fixed dimension is shown in Fig. 6. It is observed that the performance of the CS–GSA algorithm is better compared to other algorithms. In this study, the maximum number of iterations is 1000. From Fig. 6, it is seen that the GSA is the second contestant.

Fig. 6 Performance comparison of CS–GSA, CS, and GSA for the multimodal functions F 15 and F 17 with fixed dimension

The performances of the proposed algorithm are summarized as follows:

  • For the unimodal test functions (F 1–F 7): As far as the best results are concerned, CS–GSA outperforms GSA and CS. As far as the median and the average values are concerned, GSA outperforms CS–GSA and CS. However, CS–GSA shows significant improvements over the CS.

  • For the multimodal test functions (F 8–F 12): For the functions F 8 to F 12 (except F 11), the results of CS–GSA dominate those of GSA and CS.

  • For the multimodal test functions with fixed dimensions (F 13–F 23): The results for these functions do not vary much, but CS–GSA still outperforms the other two algorithms, GSA and CS.

The convergence of four benchmark functions, out of the 23 such functions, is shown in Figs. 4, 5 and 6 using CS–GSA, GSA, and CS. Here, we consider 1000 iterations. In most cases, CS–GSA has shown a better convergence rate than GSA and CS. The reason is that CS has the ability to abandon the worst solutions while searching for the best solutions quickly. From Figs. 4, 5 and 6, it is also observed that CS–GSA provides the best fitness function values compared to the GSA and CS algorithms, because the GSA provides the best local search mechanism. Hence, by combining these features of CS and GSA in the hybridized CS–GSA, we get the best results.

4.2 Solving the Constrained Optimization Problems

In this section, we discuss the use of the CS–GSA algorithm for solving constrained optimization problems. Here, we consider two different constrained optimization problems. These examples are interesting and may encourage readers to explore the idea further.

4.2.1 Minimizing the Function

Here, we present an application of the proposed CS–GSA algorithm to function minimization. This is a constrained optimization problem: we wish to minimize the function given in Eq. (21) [39]

$$\begin{aligned} f\left( x \right) = &\, \left( {x_{1} - 10} \right)^{2} + \,5\left( {x_{2} - 12} \right)^{2} + \,x_{3}^{4} + 3\left( {x_{4} - 11} \right)^{2} \, + \,10x_{5}^{6} \, + \,7x_{6}^{2} + x_{7}^{4} - 4x_{6} x_{7} \\ & - 10x_{6} - 8x_{7} \\ \end{aligned}$$
(21)

subject to the following constraints [39]:

$$\begin{aligned} & g_{1} \left( x \right) = 127 - 2x_{1}^{2} - 3x_{2}^{4} - x_{3} - 4x_{4}^{2} - 5x_{5} \ge 0 \\ & g_{2} \left( x \right) = 282 - 7x_{1} - 3x_{2} - 10x_{3}^{2} - x_{4} + x_{5} \ge 0 \\ & g_{3} \left( x \right) = 196 - 23x_{1} - x_{2}^{2} - 6x_{6}^{2} + 8x_{7} \ge 0 \\ & g_{4} \left( x \right) = - \,4x_{1}^{2} - x_{2}^{2} + 3x_{1} x_{2} - 2x_{3}^{2} - 5x_{6} + 11x_{7} \ge 0 \\ & \quad \quad \quad \;\, - 10 \le x_{i} \le 10,\quad \;i = 1,2,3,4,5,6,7. \\ \end{aligned}$$
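The chapter does not spell out how the constraints g 1–g 4 are handled inside the search; one common option is a static penalty added to Eq. (21), as in the hedged sketch below (the penalty weight `mu` and the function names are ours, not the chapter's).

```python
import numpy as np

def g(x):
    """Constraint functions g1-g4 of the problem; feasible when every value is >= 0."""
    return np.array([
        127 - 2 * x[0] ** 2 - 3 * x[1] ** 4 - x[2] - 4 * x[3] ** 2 - 5 * x[4],
        282 - 7 * x[0] - 3 * x[1] - 10 * x[2] ** 2 - x[3] + x[4],
        196 - 23 * x[0] - x[1] ** 2 - 6 * x[5] ** 2 + 8 * x[6],
        -4 * x[0] ** 2 - x[1] ** 2 + 3 * x[0] * x[1] - 2 * x[2] ** 2 - 5 * x[5] + 11 * x[6],
    ])

def penalized_objective(x, mu=1e6):
    """Eq. (21) plus a static penalty for violated constraints (illustrative only)."""
    fx = ((x[0] - 10) ** 2 + 5 * (x[1] - 12) ** 2 + x[2] ** 4 + 3 * (x[3] - 11) ** 2
          + 10 * x[4] ** 6 + 7 * x[5] ** 2 + x[6] ** 4 - 4 * x[5] * x[6]
          - 10 * x[5] - 8 * x[6])
    violation = np.minimum(g(x), 0.0)   # negative entries correspond to violations
    return fx + mu * np.sum(violation ** 2)
```

The search is then carried out in the box −10 ≤ x i ≤ 10, for example by passing `penalized_objective` to any of the three algorithms sketched earlier.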

For the evaluation of the constrained problem described in this section, we have taken the parameters of the various algorithms given in Table 4. From Table 10, it is seen that the proposed CS–GSA scheme is well suited for this constrained optimization problem. The attribute values obtained by CS–GSA (marked in boldface letters) are very close to the optimal values, which are presented in the table for ready reference. From Table 11, it is seen that the ‘Best,’ the ‘Median,’ and the ‘Average’ values obtained by CS–GSA (marked in boldface letters) are better than those of the other two algorithms.

Table 10 Comparison of the best solutions given by GSA, CS, and CS–GSA with the optimal solution for the constrained problem
Table 11 Comparisons of the statistical results of GSA, CS, and CS–GSA for the constrained problem

4.2.2 One-Dimensional (1-D) Recursive Filter Design

Recently, an increasing interest has been seen in applying EC algorithms to the problems of traditional filter design methods, the merit being that a good first estimate of the filter coefficients is not required. Here, we present the design of 1-D recursive filters using the GSA, CS, and CS–GSA algorithms. Note that the design of the IIR digital filter is treated here as a constrained optimization problem, and inbuilt constraint handling is used to guarantee stability. The best results are obtained through the convergence of the proposed CS–GSA method, confirming the quality of the solutions, and a faster result is achieved through the convergence of this meta-heuristic hybrid algorithm. To be precise, the proposed constraint management capability makes the method very attractive for the design of 1-D IIR digital filters. Results are compared to the GSA and CS techniques.

The block diagram of IIR filter optimization is displayed in Fig. 7. Evolutionary computational techniques are very useful for the synthesis of digital IIR filters: the filter coefficients are chosen accurately and optimized to reduce the error before the filter is used in different applications. For this reason, EAs have proved to be beneficial for the design of 1-D digital IIR filters. In this section, we discuss the design of 1-D IIR digital filters by three fairly new EC methods.

Fig. 7 One-dimensional (1-D) recursive filter optimization using EC methods

The GSA was used earlier for the design of 1-D recursive filters [27] and IIR filters [40]. However, those works are silent regarding the constraint treatment and the damping mechanism required to guarantee stability of the 1-D recursive filters; the design was not treated as a constrained optimization problem. Recently, the authors in [40] considered it as a constrained optimization task and resolved it in designing the 1-D digital IIR filters. In this section, the three optimization algorithms GSA, CS, and CS–GSA are deployed to design 1-D IIR filters.

Note that a system is called recursive provided the present output depends on the present input, the past inputs, and the past outputs of the system. Thus, a 1-D system can be represented as

$$y\left( m \right) = \left\{ {x\left( m \right),x\left( {m - 1} \right), \ldots ,x\left( {m - M} \right),y\left( {m - 1} \right), \ldots ,y\left( {m - M} \right)} \right\}.$$
(22)

For the evaluation of our proposed algorithm, let us consider a 1-D recursive filter transfer function. The transfer function can be represented as

$$H\left( z \right) = H_{0} \frac{{\sum\nolimits_{i = 0}^{S} {a_{i} z^{i} } }}{{\sum\nolimits_{i = 0}^{S} {b_{i} z^{i} } }}, \, a_{0} = 1,{\text{ and }}b_{0} = 1.$$
(23)

The stability conditions are described as

$$H\left( z \right) = \frac{A\left( z \right)}{B\left( z \right)},{\text{ with }}B\left( z \right) \ne 0,{\text{ and }}\left| z \right| \ge 1.$$
(24)

For S = 2, one can write

$$H\left( z \right) = H_{0} \frac{{1 + a_{1} z + a_{2} z^{2} }}{{1 + b_{1} z + b_{2} z^{2} }}.$$
(25)

In this section, the objective is to optimize the vector (X), where \(X = \left[ {a_{1} ,a_{2} ,b_{1} ,b_{2} ,H_{0} } \right]\), subject to the constraints given below:

$$\left( {1 + b_{i} } \right) > 0,\,{\text{and}}\,\left( {1 - b_{i} } \right) > 0.$$
(26)

Hence, this problem is known as a constrained optimization problem. Here, an attempt is made to solve this problem using three different EC algorithms GSA, CS, and CS–GSA. To evaluate the proposed 1-D recursive function, let us consider M d as the desired magnitude response of the digital IIR filter expressed as a function of frequency \(\omega \in \left[ {0,\pi } \right]\):

$$M_{d} \left( \omega \right) = \left[ {\begin{array}{*{20}l} 1 \hfill & {{\text{if }}\omega \le 0.05\pi } \hfill \\ {\frac{1}{\sqrt 2 }} \hfill & {{\text{if }}0.05\pi < \omega \le 0.08\pi } \hfill \\ {\frac{1}{2\sqrt 2 }} \hfill & {{\text{if }}0.08\pi < \omega \le 0.1\pi } \hfill \\ 0 \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right..$$
(27)

Here, our objective is to find H(z) such that it closely approximates the desired magnitude response. The approximation can be attained by minimizing a fitness function J such as

$$J = \sum\limits_{n = 0}^{P} {\left[ {\left| {M\left( \omega \right)} \right| - M_{d} \left( \omega \right)} \right]^{2} } ,$$
(28)

where \(M\left( \omega \right)\) is the Fourier transform of the \(H\left( z \right)\), i.e., \(M\left( \omega \right) = H\left( Z \right)\left| {_{{z = e^{ - j\omega } }} } \right.\) and \(\omega = \left( {{\pi \mathord{\left/ {\vphantom {\pi P}} \right. \kern-0pt} P}} \right)n\).
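As an illustration, the fitness J of Eq. (28) for the second-order filter of Eq. (25) can be evaluated as below. The number of frequency samples P is not stated in the chapter, so P = 100 is an assumption, as are the function names.

```python
import numpy as np

def desired_response(w):
    """Desired magnitude response M_d(w) from Eq. (27)."""
    if w <= 0.05 * np.pi:
        return 1.0
    if w <= 0.08 * np.pi:
        return 1.0 / np.sqrt(2.0)
    if w <= 0.1 * np.pi:
        return 1.0 / (2.0 * np.sqrt(2.0))
    return 0.0

def filter_fitness(X, P=100):
    """Fitness J of Eq. (28) for the second-order filter of Eq. (25).

    X = [a1, a2, b1, b2, H0], evaluated at w = (pi/P)*n for n = 0..P.
    """
    a1, a2, b1, b2, H0 = X
    J = 0.0
    for n_idx in range(P + 1):
        w = (np.pi / P) * n_idx
        z = np.exp(-1j * w)                       # z = e^{-j*w}, as in the chapter
        M = H0 * (1 + a1 * z + a2 * z ** 2) / (1 + b1 * z + b2 * z ** 2)
        J += (abs(M) - desired_response(w)) ** 2  # squared magnitude error
    return J
```

The stability conditions of Eq. (26) on b 1 and b 2 can then be enforced with a penalty in the same way as sketched in Sect. 4.2.1.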

The results (the filter parameters) are shown in Table 12.

Table 12 Comparisons of the best solution given by GSA, CS, and CS–GSA for the 1-D recursive filter design. “EP” refers to the estimated parameters

From Table 12, it is observed that the filter parameters obtained by the CS–GSA algorithm are better than those obtained by the CS and GSA algorithms; the parameters are optimized by CS–GSA so as to reduce the error. A comparison of the statistical results of GSA, CS, and CS–GSA for the 1-D recursive filter design is presented in Table 13.

Table 13 Comparisons of statistical results of GSA, CS, and CS–GSA for 1-D recursive filter design

From Table 13, it is seen that the ‘Best,’ ‘Median,’ and ‘Average’ values over 50 independent runs obtained by the CS–GSA algorithm are better than those of the CS and GSA algorithms, confirming the better performance of CS–GSA.

Figure 8 displays the frequency responses of the 1-D recursive filters designed using CS–GSA, GSA, and CS as three competitive methods. The amplitude responses are plotted using the solutions achieved by minimizing J over 50,000 function evaluations, with the parameters given in Table 4. Note that in the cited numerical example, the solution attained by the CS–GSA method offers a better approximation of the desired transfer function than the other methods; it is closer to the desired frequency response. The GSA method is the second contestant in this work. From Fig. 8, we see that the frequency response attained by the CS–GSA method is better than the frequency responses exhibited by the other two algorithms, CS and GSA.

Fig. 8 Amplitude responses of the 1-D recursive filter: the desired response and those obtained using CS–GSA, GSA, and CS

5 Conclusions

In this chapter, the proposed hybrid algorithm CS–GSA outperforms both the CS and GSA algorithms in terms of obtaining the best solutions. In this hybrid, the GSA contributes the local search ability, whereas CS is used to speed up the convergence rate. The convergence speed of the proposed hybrid algorithm is faster than that of the CS and GSA algorithms. Interestingly, CS simulates the social behavior of Cuckoo birds, while GSA is inspired by a physical phenomenon. This proposal can easily be extended to multi-objective optimization applications by considering different inbuilt constraints. Finally, it may be noted that the good convergence of the CS algorithm and the local search ability of the GSA together produce good results that are beneficial.