Abstract
The chapter presents a hybridized population-based Cuckoo search–Gravitational search algorithm (CS–GSA) for optimization. The central idea is to inject the exploration capability of the Gravitational search algorithm (GSA) into the Cuckoo search (CS) algorithm, which is well known for its exploitation behavior. A further motivation is to obtain a faster and more stable solution. Twenty-three standard test functions are considered to compare the performance of the hybridized algorithm with both the CS and the GSA methods. Extensive simulation results show that the proposed algorithm outperforms both CS and GSA, converging faster than either, so that the best solutions are found with significantly fewer function evaluations. The chapter also explains how to handle constrained optimization problems, with suitable examples.
1 Introduction
Over the decades, evolutionary computation (EC) algorithms have been successfully applied to many practical computational problems, such as optimization of objective functions [1, 2], optimization of filter parameters [3, 4], optimization of parameters to improve image processing results [5, 6], and optimal feature selection in pattern recognition [7–9]. These engineering applications are motivated by the near-global optimization behavior of EC methods, which permits the algorithms to accomplish their task within the very large search space of a given problem. However, some ECs have not yet been investigated for solving particular problems efficiently, and the aim of the proposed idea is to bridge that gap. It may generate interest among readers doing research in this area. There is a strong need to develop new hybrid algorithms suited to the particular problem at hand. Further, by presenting widespread simulation results on the performance of two relatively new ECs, we try to convince the readers of their importance. In this chapter, an attempt is made to show researchers the aptness of the proposed technique for solving the function optimization problem.
Many heuristic search ECs have been proposed, such as the Genetic algorithm (GA) [10], Ant colony algorithm (ACA) [11], Particle swarm optimization (PSO) algorithm [12], Bacterial foraging optimization (BFO) algorithm [13], Cuckoo search (CS) algorithm [14–19], and Gravitational search algorithm (GSA) [20]. But no single algorithm solves all types of optimization problems efficiently; a given algorithm offers the best solutions only for particular problems. It is therefore necessary to devise hybridized heuristic population-based optimization methods for solving different applications efficiently. In this chapter, a population-based hybridized optimization algorithm is proposed. The thrust is mainly to combine the social thinking capability found in Cuckoo birds with the local search capability observed in the Gravitational search method. An efficient optimization scheme is well judged by its two key features: (i) exploration and (ii) exploitation. Exploration is the capability of an EC to explore the complete search space, and exploitation is the ability to converge to a better result. Combining these two features, the best solutions can be obtained with fewer function evaluations. Therefore, an attempt is made here to hybridize CS with GSA in order to balance exploration and exploitation. This chapter describes the use of both methods in a balanced manner; the hybridized approach provides significant improvements in the solutions. Interestingly, the CS is popular for its simplicity and its ability to search for a near-global solution, while GSA provides a better local search method given good initial estimates.
In fact, EC techniques were initially proposed by different researchers as an approach to artificial intelligence. Later they became more popular and are now used directly to solve analytical and combinatorial optimization problems. However, the demerit of an EC method is its slow convergence toward near-global optimum values when solving multimodal optimization problems. In this context, various EC techniques have been proposed, and recently there has been a strong need to develop new hybridized EC techniques that find better results. This chapter discusses the application of a new hybrid EC method for optimization. Extensive empirical evaluations are important to measure the strengths and weaknesses of a particular algorithm; in this connection, we consider 23 standard benchmark functions [21]. The proposed CS–GSA algorithm performs better than GSA and CS for multimodal functions with many local minima, and it is also faster than GSA: the Lévy flight incorporated in the algorithm helps it reach the near-global optimum very quickly. The idea is simple and straightforward, since the searching behavior of Cuckoo birds is very useful for exploitation. The results, compared against both the CS and GSA algorithms on the 23 benchmark functions [21], reveal that the proposed algorithm is well suited for function minimization.
For this work, we consider 23 benchmark functions, given in Table 1; more details on these test functions are discussed in the Appendix. The functions F 1–F 13 are high-dimensional problems, of which F 1–F 7 are unimodal. The function F 5 is a step function with a single minimum, and it is discontinuous. The function F 7 is the quartic function with noise, where rand(0, 1) is a uniformly distributed random variable in the range (0, 1). We also consider the multimodal functions F 8–F 13 in our experimental study; for these functions, the number of local minima grows exponentially with the problem size, so solving them is important for validating an optimization algorithm. The functions F 14–F 23 are low-dimensional functions with only a few local minima. Unimodal functions are relatively easy to optimize; the optimization is much more demanding for multimodal functions. The ability of an algorithm is well judged by its capacity to escape poor local optima while localizing the best near-global solution. Note that the functions F 14–F 23 are multimodal but of fixed dimension; they are also very useful for measuring the accuracy of hybrid soft computing techniques.
The rest of the chapter is organized as follows: Sect. 1 is the introduction. Related work is discussed in Sect. 2. The hybridized population-based CS–GSA algorithm is presented in Sect. 3. Extensive results and discussions are given in Sect. 4, and conclusions are drawn in Sect. 5.
2 Related Works
The main objective of an optimization algorithm is to find a global or near-global optimal solution. Several EC algorithms are presented in the literature, but the CS algorithm [14–16] has its own importance: it has fewer search parameters, which makes it a faster optimization method. Recently, Chetty and Adewumi [22] carried out a case study on an annual crop planning problem using a CS algorithm alongside the GA and glow worm swarm optimization (GSO); the CS showed superior results. Chen and Do [23] used the CS to train feed-forward neural networks for predicting student academic performance. Swain et al. [24] proposed a CS-based neural network and applied it to noise cancellation. Khodier [25] used the CS algorithm for antenna array optimization. Although the CS gives good performance, some researchers have added further characteristics to the search process; Zhao and Li [26] proposed opposition-based learning to increase the exploration proficiency of the CS algorithm.
Other, non-evolutionary algorithms are also good at finding near-global optima. The GSA [20] is founded on the law of gravity and shows good convergence in the search space. Saha et al. [27] and Rashedi et al. [28] used the GSA for filter design and modeling. Many authors have proposed schemes to enhance the performance of the GSA, such as the Disruption operator [29], the black hole operator [30], Niche GSA [31], and binary GSA (BGSA) [32].
Every algorithm has its own merits and demerits. To improve the search performance of a single algorithm, researchers have proposed hybrid algorithms combining the features of more than one algorithm. Mirjalili and Hashim [33] proposed the hybrid algorithm PSOGSA by integrating the exploitation ability of PSO with the exploration ability of GSA. Jiang et al. [34] proposed HPSO-GSA for solving economic emission load dispatch problems by updating the particle position with the PSO velocity and the GSA acceleration. Ghodrati and Lotfi [35] proposed the hybrid algorithm CS/PSO by combining the Cuckoo birds' awareness of each other's position with the swarm intelligence of PSO. A hybrid GSA–ABC algorithm was proposed by Guo [36] to update the ant colony with the help of an artificial bee colony algorithm (ABC) and GSA. Sun and Zhang [37] proposed the GA–GSA hybrid algorithm for image segmentation based on the GA and GSA. Yin et al. [38] proposed a hybrid IGSAKHM approach for clustering by combining the K-harmonic means (KHM) clustering technique with an improved gravitational search algorithm (IGSA).
In the remainder of this section, we discuss the salient features of the two algorithms used in this work.
2.1 Cuckoo Search (CS) Algorithm
The Cuckoo search (CS) method is a nature-inspired technique introduced by Yang and Deb [14–17]. The CS is inspired by the interesting way a Cuckoo bird lays its eggs in the nest of another host bird. The number of available host nests is fixed. An egg laid by the Cuckoo may be discovered by the host bird with a probability \(p_{a} \in \left[ {0,1} \right]\); the host bird then either throws the egg away or abandons the present nest, sometimes building a new nest in a totally different location [18] to evade the Cuckoo. Here, each egg in a nest represents a solution. It is interesting to note that the CS has similarities to the well-known hill climbing algorithm. In the Cuckoo search, a particular pattern corresponds to a nest, and an individual feature of that pattern corresponds to an egg of the Cuckoo.
The CS method is based on the following three idealized rules:
i. Every Cuckoo bird lays one egg at a time and dumps it in a randomly selected nest.

ii. The nests with the best quality of eggs are carried over to the subsequent generations.

iii. The number of available host nests is always fixed. An egg laid by a Cuckoo bird is discovered by the host bird with a probability p a Є [0, 1]; the worst nests are then abandoned and discarded from future calculations.
The CS algorithm is mathematically modeled as follows: For a new search space \({\rm X}_{i} \left( {t + 1} \right)\) for Cuckoo i \(\left( {{\text{for }}i = 1,2, \ldots ,N} \right)\) at time \(t + 1\),
where \({\rm X}{}_{i}\left( t \right)\) is the current search space at time t, represented as \({\rm X}_{i} = \left( {x_{i}^{1} , \ldots ,x_{i}^{d} , \ldots ,x_{i}^{n} } \right)\), \(\alpha > 0\) is the step size related to the scale of the problem, \(\oplus\) denotes entry-wise multiplication, and \({\text{L{\'e}vy}}\left( \lambda \right)\) is the random walk through the Lévy flight. The Lévy flight [19] draws the random-walk step size from the Lévy distribution \(\left( {{\text{L{\'e}vy}}\sim u = t^{ - \lambda } } \right)\), with \(\lambda\) satisfying \(1 < \lambda < 3\).
Here, t denotes the index of the current generation (t = 1, 2, 3, …, t max), and t max represents the predetermined maximum number of generations. The initial values of the dth attribute of the ith pattern are found by
where l d and u d are the lower and upper search-space limits of the dth attribute, respectively. This mechanism provides control over the boundary conditions at every calculation step: whenever the value of an attribute exceeds the allowed search-space limits, it is reset to the value of the nearest limit of the corresponding attribute. The CS method identifies the most successful pattern as the X best pattern before starting the iterative search. The iterative improvement of the pattern matrix then begins with the computation of the step-size factor \(\Phi\) as
The \(\varGamma\) in the above expression represents a gamma function and \(\gamma - 1 = \lambda\).
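The boundary handling described above, resetting an attribute to its nearest search-space limit whenever it escapes, can be sketched as follows (a minimal illustration; the per-dimension limit vectors `lower` and `upper` correspond to the l d and u d of the text):

```python
def clamp(x, lower, upper):
    """Reset any attribute that leaves [l_d, u_d] to the nearest limit."""
    return [min(u, max(l, v)) for v, l, u in zip(x, lower, upper)]
```

For example, `clamp([-2.0, 5.0, 1.0], [0.0, 0.0, 0.0], [3.0, 3.0, 3.0])` returns `[0.0, 3.0, 1.0]`.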
The evolution phase of the Xi pattern is defined by the donor vector v, where v = X i. The required step size value is calculated as
Note that u = Ф randn[n] and v = randn[n].
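The \(\Phi\) expression with the gamma function, together with u = Ф randn[n] and v = randn[n], matches Mantegna's algorithm for generating Lévy-stable step sizes. A minimal sketch, under the assumption of the usual Mantegna formulation (the chapter's exact equations are not reproduced in this excerpt, so the exponent naming here is an assumption):

```python
import math
import random

def phi(lam):
    """The step-size factor (Mantegna's sigma) for a Levy exponent lam."""
    num = math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
    den = math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2)
    return (num / den) ** (1 / lam)

def levy_step(lam=1.5, n=3):
    """Per-dimension Levy step: u / |v|^(1/lam), u ~ N(0, phi^2), v ~ N(0, 1)."""
    p = phi(lam)
    return [random.gauss(0.0, p) / abs(random.gauss(0.0, 1.0)) ** (1.0 / lam)
            for _ in range(n)]
```

The heavy tail of the resulting step distribution is what lets the CS occasionally take long exploratory jumps.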
Here, a donor pattern is randomly mutated as
In this scheme, the update procedure of X best in CS is stated as
It is noteworthy that the control parameters of the CS technique are the scale factor (λ) and the mutation probability (p a ). The flow chart for the CS algorithm is shown in Fig. 1.
PseudoCode for CS
First, identify the search space: the dimension of the search problem n, the range of the objective function, and the objective function F(X) itself. Then choose the important parameters N, p a , α, λ, t max, and set t = 1. Also, randomly initialize the population of N host nests \({\rm X}_{i} \left( t \right) = \left( {x_{i}^{1} , \ldots ,x_{i}^{d} , \ldots ,x_{i}^{n} } \right)\) of dimension n for i = 1, 2, …, N.
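The loop the pseudocode describes can be sketched as follows. This is a hedged reconstruction of the standard CS cycle (Lévy-flight move, greedy replacement, abandonment of a fraction p a of the worst nests); the Gaussian stand-in for the Lévy step, the scalar bounds, and the default parameter values are assumptions, not the chapter's exact settings:

```python
import random

def cuckoo_search(f, n, lo, hi, N=15, pa=0.25, alpha=0.5, t_max=200):
    """Minimize f over [lo, hi]^n with a simplified Cuckoo search."""
    nests = [[random.uniform(lo, hi) for _ in range(n)] for _ in range(N)]
    fit = [f(x) for x in nests]
    for _ in range(t_max):
        best = nests[fit.index(min(fit))][:]
        for i in range(N):
            # Levy-flight move (Gaussian stand-in), scaled by distance to best
            new = [min(hi, max(lo, x + alpha * random.gauss(0.0, 1.0) * (x - b)))
                   for x, b in zip(nests[i], best)]
            fn = f(new)
            if fn < fit[i]:                      # greedy replacement
                nests[i], fit[i] = new, fn
        # abandon a fraction pa of the worst nests and rebuild them at random
        worst_first = sorted(range(N), key=lambda k: fit[k], reverse=True)
        for k in worst_first[:int(pa * N)]:
            nests[k] = [random.uniform(lo, hi) for _ in range(n)]
            fit[k] = f(nests[k])
    j = fit.index(min(fit))
    return nests[j], fit[j]
```

Because replacement is greedy and only the worst nests are abandoned, the best fitness found is monotone nonincreasing over the iterations.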
2.2 Gravitational Search Algorithm (GSA)
The GSA is based on the underlying principle of Newton's theory, and was introduced in [20]. Newton's law states that 'Every particle in the universe attracts every other particle with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.' Note, however, that in the GSA the force is taken to be inversely proportional to the distance itself rather than to its square.
This algorithm considers a collection of N agents (masses), where the masses relate to candidate solutions of the given optimization problem. It is interesting to note that a heavier mass has a greater force of attraction and may lie very near the global optimum. Let us initialize the search space of the ith agent as \({\rm X}_{i} = \left( {x_{i}^{1} , \ldots ,x_{i}^{d} , \ldots ,x_{i}^{n} } \right)\) \(\left( {{\text{for }}i = 1,2, \ldots ,N} \right)\), where n is the dimension of the search space. At time t, the force acting on mass 'i' due to mass 'j' is defined as
where \(M_{\text{aj}}\) is the active gravitational mass of agent j, \(M_{\text{pi}}\) is the passive gravitational mass of agent i, \(G\left( t \right)\) is the gravitational constant at time t, and \(R_{\text{ij}} \left( t \right)\) is the Euclidean distance between agents 'i' and 'j,' written as
It is noteworthy that the gravitational constant G gradually decreases with time, which helps the search reach the minima in the search space. The gravitational constant G is therefore considered a function of the initial value G 0 and the time t. The phenomenon can be mathematically modeled as
where \(\beta\) is the descending coefficient and t max is the maximum number of iterations considered in the simulation work.
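Since the equation itself is not reproduced in this excerpt, the decaying gravitational constant can be illustrated with the exponential form commonly used for GSA (an assumption about the chapter's exact formula):

```python
import math

def gravitational_constant(t, g0=100.0, beta=20.0, t_max=1000):
    """G(t) = G0 * exp(-beta * t / t_max): decays from G0 toward 0 as t grows."""
    return g0 * math.exp(-beta * t / t_max)
```

Early iterations thus apply strong forces (exploration), while late iterations apply weak forces (fine local search).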
Thus, the total force acting on agent i is \(F_{i} \left( t \right)\), which can be computed from Eq. (6) as
Here, different masses are computed from the fitness values. Further, the masses are updated by the following set of equations:
where \({\text{fit}}_{i} \left( t \right)\) designates the fitness value of agent i at time t, and best(t) and worst(t) are the best and worst fitness values among the N agents. The acceleration of agent i at time t is then given by
where \(M_{i} \left( t \right)\) is the mass of agent i.
Finally, the velocity and the position of an agent in the search space are computed as given below:
The positions are updated iteratively using the above equations until the GSA reaches the global or near-global minimum, after which the masses should undergo no further change. The flow chart for GSA is displayed in Fig. 2.
PseudoCode for GSA
In the beginning, identify the search space: the dimension of the search problem n, the range of the objective function, and the objective function itself, F(X). Then choose the parameters N, G 0, β, t max, and set t = 1. After choosing the parameters, randomly initialize the population of N agents (masses) \({\rm X}_{i} \left( t \right) = \left( {x_{i}^{1} , \ldots ,x_{i}^{d} , \ldots ,x_{i}^{n} } \right)\) of dimension n for i = 1, 2, …, N.
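One GSA iteration, as described above, computes masses from fitness, pairwise forces, accelerations, and then velocity and position updates. The following is a hedged sketch of that loop for minimization; the exponential G(t), the random force weighting, the omission of the Kbest reduction used in some GSA variants, and the default parameter values are assumptions where this excerpt omits the equations:

```python
import math
import random

def gsa(f, n, lo, hi, N=20, g0=100.0, beta=20.0, t_max=300, eps=1e-12):
    """Minimize f over [lo, hi]^n with a simplified Gravitational Search Algorithm."""
    X = [[random.uniform(lo, hi) for _ in range(n)] for _ in range(N)]
    V = [[0.0] * n for _ in range(N)]
    for t in range(1, t_max + 1):
        fit = [f(x) for x in X]
        best, worst = min(fit), max(fit)
        # masses from fitness (minimization): better fitness -> heavier mass
        m = [(worst - fi) / (worst - best + eps) for fi in fit]
        s = sum(m) + eps
        M = [mi / s for mi in m]
        G = g0 * math.exp(-beta * t / t_max)
        for i in range(N):
            acc = [0.0] * n
            for j in range(N):
                if j == i:
                    continue
                R = math.dist(X[i], X[j])
                for d in range(n):
                    # randomized sum of F_ij / M_i = G * M_j * (x_j - x_i) / (R + eps)
                    acc[d] += random.random() * G * M[j] * (X[j][d] - X[i][d]) / (R + eps)
            for d in range(n):
                V[i][d] = random.random() * V[i][d] + acc[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    fit = [f(x) for x in X]
    k = fit.index(min(fit))
    return X[k], fit[k]
```

Heavier (fitter) agents exert stronger pulls, so the population drifts toward the current best regions while G(t) shrinks the step sizes over time.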
3 A Hybrid CS–GSA Algorithm
It is well known from the literature that the CS is a heuristic search method based on an evolutionary computational approach. The beauty of the CS algorithm is that it uses a randomized walk via a Lévy flight, as described in [14, 15]. The Lévy flight is quite effective in exploring the search space, with the step size drawn from the Lévy distribution as explained in [19].
Let us choose α as 1 (as \(\alpha > 0\)). So Eq. (1) is reduced to
From Eq. (16), it is clear that the new search space (the new solution) depends only on a Lévy distribution. Let us introduce a term \(l\,{\text{Best}}\;\left( t \right)\), which denotes the best local solution among i = 1, 2, …, N at time t. The \(l{\text{Best}}\left( t \right)\) can be expressed by
Let us incorporate an additional (proportionate) term into the new solution, capturing the difference between the current solution and the local best solution at time t. Eq. (16) can then be expressed by
Further, to account for how every solution differs from the others at time t, the acceleration of agent i at time t is used to enhance the local search inherited from the GSA. Incorporating Eq. (13) into Eq. (18) gives
Here, \(a_{i} (t)\) is defined in Eq. (13). If we choose α as the proportionate measure of the step size, then Eq. (19) can be reorganized as
Thus, Eq. (20) provides the new solution space for Cuckoo Search–Gravitational Search Algorithm (CS–GSA) from the list of current solutions obtained in this method. The flow chart for our new hybrid CS–GSA algorithm is shown in Fig. 3.
PseudoCode for CS–GSA
At the start, the objective function \(f\left( {\rm X} \right)\) with dimension n is identified. Then choose the parameters N, p a , G 0, α, λ, β, t max, and set t = 1 to control the algorithm during iteration. Randomly initialize the population of N host nests \({\rm X}_{i} \left( t \right) = \left( {x_{i}^{1} , \ldots ,x_{i}^{d} , \ldots ,x_{i}^{n} } \right)\) of dimension n for i = 1, 2, …, N at t = 1.
Finally, report the best \(f\left( {{\rm X}_{i} } \right)\) over i = 1, 2, …, N, along with the corresponding \({\rm X}_{i}\).
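As a hedged illustration of the hybrid update of Eq. (20) (the equation itself is not reproduced in this excerpt), one iteration of the CS–GSA position update combines a Lévy move, an attraction toward lBest(t), and the GSA acceleration a i (t). The random weighting of the lBest term and the Gaussian stand-in for the Lévy step are assumptions:

```python
import random

def cs_gsa_step(x, l_best, accel, alpha=0.01, levy=None):
    """Sketch of x(t+1) = x(t) + alpha * Levy(lambda) + rand * (lBest - x) + a(t)."""
    if levy is None:
        # Gaussian stand-in for the Levy-distributed step
        levy = [random.gauss(0.0, 1.0) for _ in x]
    return [xi + alpha * li + random.random() * (bi - xi) + ai
            for xi, li, bi, ai in zip(x, levy, l_best, accel)]
```

The Lévy term keeps the global exploration of CS, while the acceleration term injects the GSA's local search pressure into every move.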
4 Results and Discussions
In this work, the main thrust is to improve upon the standard CS methodology, and also to bring some improvement over the GSA. In this context, the new CS–GSA algorithm was developed in the previous section.
4.1 Performance Evaluation Using Standard Benchmark Functions
In the performance evaluation of the proposed algorithm, we consider 23 standard benchmark functions [21] displayed in Tables 1, 2 and 3. Note that the convergence rate of the unimodal benchmark functions is important to validate an optimization algorithm. These useful functions are listed in Table 1.
It is noteworthy to mention here that the multimodal benchmark functions also have a significant role in validating optimization algorithms. These multimodal functions have many local minima, so it is difficult to optimize these functions. Such functions are displayed in Table 2 and are used for the performance measure.
Further, multimodal functions with fixed dimensions are also considered in this work; these are displayed in Table 3. Most optimization algorithms perform similarly on these functions.
In this simulation study, the range of x i differs from function to function; the ranges are described in the Appendix. The choice of parameters is important for evaluating an optimization algorithm. We have chosen the best parameters for our study based on extensive simulation results, and the same parameters are used for all three algorithms. The parameter settings for GSA, CS, and CS–GSA used to validate the benchmark functions of Tables 1, 2 and 3 are shown in Table 4.
The benchmark functions are categorized into three tables: unimodal test functions (F 1–F 7), multimodal test functions (F 8–F 12), and multimodal test functions with fixed dimensions (F 13–F 23). The ranges of the objective functions and the global minima are given in the Appendix. The search dimension 'n' is taken as 10 and 50 for the unimodal functions of Table 1 and the multimodal benchmark functions of Table 2. The search dimension 'n' for the multimodal functions with fixed dimension is given in Table 3.
The performance evaluations of GSA, CS, and CS–GSA for the unimodal benchmark functions are presented in Tables 5 and 6.
From Table 5, it is observed that the performance of GSA for the unimodal benchmark functions F 1, F 2, and F 4 with n = 10 is better than that of the other two algorithms; for the remaining functions, the CS–GSA algorithm performs better. For all benchmark functions, the final results are reported as the 'Best,' 'Median,' and 'Ave' among 50 independent runs: 'Best' is the best fitness value obtained over the 50 runs, 'Median' is the median of the 50 fitness values, and 'Ave' is their average. Within each function, the performance of GSA, CS, and CS–GSA is compared, and the best solutions among the three algorithms are shown in boldface.
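The 'Best,' 'Median,' and 'Ave' statistics used throughout the tables can be computed from the 50 end-of-run fitness values as, for example:

```python
import statistics

def run_stats(final_values):
    """'Best', 'Median', and 'Ave' of fitness values over independent runs."""
    return {"Best": min(final_values),
            "Median": statistics.median(final_values),
            "Ave": statistics.fmean(final_values)}
```

Reporting the median alongside the average makes the comparison robust to a few outlier runs.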
The performance evaluation of GSA, CS, and CS–GSA for the unimodal benchmark functions F 1 to F 7 with n = 50 is displayed in Table 6. Here, the proposed CS–GSA algorithm performs well as compared to other algorithms.
A comparison of these algorithms for the unimodal benchmark functions F 4 and F 7 with n = 50 is shown in Fig. 4. It is seen that CS–GSA offers the best values compared to the other algorithms, with GSA the second best. The maximum number of iterations considered here is 1000.
The performance evaluation of GSA, CS, and CS–GSA for the multimodal benchmark functions F 8 to F 12 with n = 10 is displayed in Table 7. Here, the proposed CS–GSA algorithm performs well for all functions except F 12.
The performance evaluation of GSA, CS, and CS–GSA for the multimodal benchmark functions F 8 to F 12 with n = 50 is displayed in Table 8. Here, the new hybrid CS–GSA algorithm performs well for all functions except F 9. For the function F 9, GSA performs better.
A comparison of these algorithms for the multimodal benchmark functions F 8 and F 12 with n = 50 is shown in Fig. 5. It is observed that the performance of the CS–GSA algorithm is better than that of the other algorithms, with GSA the second best. The maximum number of iterations is 1000.
The performance evaluation of GSA, CS, and CS–GSA for the multimodal benchmark functions F 13 to F 23 with fixed dimension is displayed in Table 9. Here, the new hybrid CS–GSA algorithm performs well for all functions except F 14 and F 15. For the function F 14, CS performs better than GSA and CS–GSA algorithms. For the function F 15, GSA performs better than CS and CS–GSA algorithms. From the knowledge of the ‘Best,’ ‘Median,’ and ‘Average’ values, one can claim that CS–GSA can be used for optimization of such type of functions.
A comparison of these algorithms for the multimodal benchmark functions F 15 and F 17 with fixed dimension is shown in Fig. 6. It is observed that the performance of the CS–GSA algorithm is better than that of the other algorithms, with GSA the second best. In this study, the maximum number of iterations is 1000.
The performances of the proposed algorithm are summarized as follows:
-
For the unimodal test functions (F 1–F 7): in terms of the best results, CS–GSA outperforms GSA and CS. In terms of the median and average values, GSA outperforms CS–GSA and CS. However, CS–GSA shows significant improvements over the CS.
-
For the multimodal test functions (F 8–F 12): for all these functions except F 11, the results of CS–GSA dominate those of GSA and CS.
-
For the multimodal test functions with fixed dimensions (F 13–F 23): the results for these functions do not vary much, but CS–GSA still outperforms the other two algorithms, GSA and CS.
The convergence of four of the 23 benchmark functions is shown in Figs. 4, 5 and 6 for CS–GSA, GSA, and CS, using 1000 iterations. In most cases, CS–GSA shows a better convergence rate than GSA and CS, because CS is able to abandon the worst solutions while quickly searching for the best ones. From Figs. 4, 5 and 6, it is also observed that CS–GSA provides the best fitness values compared to the GSA and CS algorithms, because GSA contributes the best local search mechanism. Hence, by combining these features of CS and GSA in the hybridized CS–GSA, we get the best results.
4.2 Solving the Constrained Optimization Problems
In this section, we discuss the use of the CS–GSA algorithm for solving constrained optimization problems. We consider two different constrained optimization problems; these examples are very interesting and may motivate readers to explore the idea further.
4.2.1 Minimizing the Function
Here, we present an application of the proposed CS–GSA algorithm to function minimization, posed as a constrained optimization problem. We wish to minimize the function given in Eq. (21) [39]
subject to the following constraints [39]:
For the evaluation of the constrained problem described in this section, we used the parameters for the various algorithms given in Table 4. From Table 10, it is seen that the proposed CS–GSA scheme is well suited for this constrained optimization problem: the attribute values obtained by CS–GSA (marked in boldface) are very close to the optimal values, listed in the table for ready reference. From Table 11, the 'Best,' 'Median,' and 'Average' values obtained by CS–GSA (marked in boldface) are better than those of the other two algorithms.
4.2.2 One-Dimensional (1-D) Recursive Filter Design
Recently, there has been increasing interest in applying EC algorithms to the problems of traditional filter design, with the merit of not requiring a good first estimate of the filter coefficients. Here, we present the design of 1-D recursive filters using the GSA, CS, and CS–GSA algorithms. The design of the IIR digital filter is treated here as a constrained optimization problem, with inbuilt constraint handling to guarantee stability. The best results are obtained through the convergence of the proposed CS–GSA method, confirming its quality, and the meta-heuristic hybrid CS–GSA algorithm also converges faster. The proposed constraint-management capability makes the method very attractive for the design of 1-D IIR digital filters. Results are compared with the GSA and CS techniques.
A block diagram of the IIR filter optimization is displayed in Fig. 7. Evolutionary computation is very useful for the synthesis of digital IIR filters: the filter coefficients are chosen accurately, being optimized to reduce the error before the filter is used in different applications. For this reason, EAs have proved beneficial for the design of 1-D digital IIR filters. In this section, we discuss 1-D IIR digital filter design using three fairly new EC methods.
The GSA has been used for the design of 1-D recursive filters [27] and for IIR filter design [40]. However, those works are silent regarding constraint handling, which is required to guarantee the stability of 1-D recursive filters; the design was not treated as a constrained optimization problem. Recently, the authors in [40] considered the design as a constrained optimization task and solved it for 1-D digital IIR filters. In this section, the three optimization algorithms GSA, CS, and CS–GSA are deployed to design 1-D IIR filters.
Note that a system is called recursive if the present output depends on the present input, the past inputs, and the past outputs of the system. A 1-D system can thus be represented as
For the evaluation of our proposed algorithm, let us consider a 1-D recursive filter transfer function. The transfer function can be represented as
The stability conditions are described as
For S = 2, one can write
In this section, the objective is to optimize the vector (X), where \(X = \left[ {a_{1} ,a_{2} ,b_{1} ,b_{2} ,H_{0} } \right]\), subject to the constraints given below:
Hence, this problem is a constrained optimization problem, and an attempt is made to solve it using the three EC algorithms GSA, CS, and CS–GSA. To evaluate the proposed 1-D recursive function, let M d be the desired magnitude response of the digital IIR filter, expressed as a function of frequency \(\omega \in \left[ {0,\pi } \right]\):
Here, our objective is to find H(z) that closely approximates the desired magnitude response. The approximation is attained via a fitness function J such as
where \(M\left( \omega \right)\) is the Fourier transform of the \(H\left( z \right)\), i.e., \(M\left( \omega \right) = H\left( Z \right)\left| {_{{z = e^{ - j\omega } }} } \right.\) and \(\omega = \left( {{\pi \mathord{\left/ {\vphantom {\pi P}} \right. \kern-0pt} P}} \right)n\).
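Evaluating J on the frequency grid ω = (π/P)n can be sketched as follows. This is a hedged illustration that assumes a second-order numerator and denominator for H(z) (matching the five design variables a1, a2, b1, b2, H0 above) and a squared-error form for J; the chapter's exact expressions are not reproduced in this excerpt. The desired response is passed in as a callable `Md`:

```python
import cmath
import math

def magnitude(H0, a1, a2, b1, b2, w):
    """|M(w)| = |H(z)| at z = e^{-jw}, for
    H(z) = H0 * (1 + a1*z + a2*z^2) / (1 + b1*z + b2*z^2)."""
    z = cmath.exp(-1j * w)
    return abs(H0 * (1 + a1 * z + a2 * z * z) / (1 + b1 * z + b2 * z * z))

def fitness_J(params, Md, P=50):
    """Sum of squared errors between |M(w)| and Md(w) on w = pi*k/P, k = 0..P."""
    H0, a1, a2, b1, b2 = params
    return sum((magnitude(H0, a1, a2, b1, b2, math.pi * k / P)
                - Md(math.pi * k / P)) ** 2
               for k in range(P + 1))
```

An optimizer such as CS–GSA would then search over the vector X = [a1, a2, b1, b2, H0] to minimize `fitness_J`, subject to the stability constraints.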
The results (the filter parameters) are shown in Table 12.
From Table 12, it is observed that the filter parameters obtained by the CS–GSA algorithm are better than those of the CS and GSA algorithms, since they are optimized by CS–GSA to reduce the error. A comparison of the statistical results of GSA, CS, and CS–GSA for the 1-D recursive filter design is presented in Table 13.
From Table 13, it is seen that the ‘Best,’ ‘Median,’ and ‘Average’ values over 50 independent runs obtained by the CS–GSA algorithm are better than those obtained by the CS and GSA algorithms.
Figure 8 displays the frequency responses of the 1-D recursive filters designed using the three competing methods, CS–GSA, GSA, and CS. The amplitude responses are plotted using the solutions obtained by minimizing J over 50,000 function evaluations, with the parameters given in Table 4. Note that in this numerical example the solution attained by the CS–GSA method approximates the desired frequency response more closely than those of the other two methods, CS and GSA, with GSA the second-best contestant.
5 Conclusions
In this chapter, the proposed hybrid CS–GSA algorithm outperforms both the CS and GSA algorithms in terms of obtaining the best solutions. In fact, GSA is used to strengthen the local search ability, whereas CS is used to speed up the convergence rate; as a result, the hybrid converges faster than either CS or GSA alone. Interestingly, CS simulates the brood-parasitic behavior of cuckoo birds, while GSA is inspired by a physical phenomenon, Newton's law of gravity. This proposal can easily be extended to multi-objective optimization applications by considering different inbuilt constraints. Finally, it may be noted that combining the better convergence of the CS algorithm with the local search ability of the GSA produces good results.
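For readers who wish to experiment, the exploration step that CS contributes is a Lévy flight, commonly generated with Mantegna's algorithm. The sketch below is a generic illustration of that step for stability index beta = 1.5, not the authors' exact implementation:

```python
import math
import random

# Generic Mantegna's algorithm for a Levy-stable step (beta commonly 1.5).
# This is an illustrative sketch, not the chapter's exact CS-GSA code.
def levy_step(beta=1.5):
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta *
                2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)       # numerator ~ N(0, sigma_u^2)
    v = random.gauss(0.0, 1.0)           # denominator ~ N(0, 1)
    return u / abs(v) ** (1 / beta)      # heavy-tailed step length
```

A new cuckoo candidate is then typically formed as `x_new = x + alpha * levy_step() * (x - x_best)` for a small step size `alpha`; the occasional very large steps are what give CS its global exploration.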
References
Du W, Li B (2008) Multi-strategy ensemble particle swarm optimization for dynamic optimization. Inf Sci 178:3096–3109
Panda R, Naik MK (2012) A crossover bacterial foraging optimization algorithm. Appl Comput Intell Soft Comput, 1–7. Hindawi Publication
Mastorakis NE, Gonos IF, Swamy MNS (2003) Design of two-dimensional recursive filters using genetic algorithm. IEEE Trans Circuits Syst-I Fundam Theory Appl 50:634–639
Panda R, Naik MK (2013) Design of two-dimensional recursive filters using bacterial foraging optimization. In: Proceedings of the 2013 IEEE Symposium on Swarm Intelligence (SIS), pp 188–193
Cordon O, Damas S, Santamari J (2006) A fast and accurate approach for 3D image registration using the scatter search evolutionary algorithm. Pattern Recogn Lett 26:1191–1200
Panda R, Agrawal S, Bhuyan S (2013) Edge magnitude based multilevel thresholding using cuckoo search technique. Expert Syst Appl 40:7617–7628
Panda R, Naik MK, Panigrahi BK (2011) Face recognition using bacterial foraging strategy. Swarm Evol Comput 1:138–146
Liu C, Wechsler H (2000) Evolutionary pursuit and its application to face recognition. IEEE Trans Pattern Anal Mach Intell 22:570–582
Zheng WS, Lai JH, Yuen PC (2005) GA-Fisher: a new LDA-based face recognition algorithm with selection of principal components. IEEE Trans Syst Man Cybern Part B 35:1065–1078
Mitchell M (1998) An introduction to genetic algorithms. MIT Press, Cambridge
Dorigo M, Maniezzo V, Colorni A (1996) The ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern Part B 26:29–41
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks vol 4, pp 1942–1948
Gazi V, Passino KM (2004) Stability analysis of social foraging swarms. IEEE Trans Syst Man Cybern Part B 34:539–557
Yang XS, Deb S (2009) Cuckoo search via Lévy flights. In: Proceedings of the world congress on nature and biologically inspired computing, (NaBIC 2009), pp 210–214
Yang XS, Deb S (2013) Cuckoo search: recent advances and applications. Neural Comput Appl 24(1):169–174
Yang XS (ed) (2014) Cuckoo search and firefly algorithm: theory and applications. Springer. doi:10.1007/978-3-319-02141-6
Civicioglu P, Besdok E (2011) A conceptual comparison of the cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif Intell Rev. doi:10.1007/s10462-011-9276-0
Chakraverty S, Kumar A (2011) Design optimization for reliable embedded system using cuckoo search. In: Proceedings of the international conference on electronics, computer technology, pp 164–268
Barthelemy P, Bertolotti J, Wiersma DS (2008) A Lévy flight for light. Nature 453:495–498
Rashedi E, Nezamabadi-pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179:2232–2248
Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3:82–102
Chetty S, Adewumi AO (2014) Comparison study of swarm intelligence techniques for annual crop planning problem. IEEE Trans Evol Comput 18:258–268
Chen J-F, Do QH (2014) Training neural networks to predict student academic performance: a comparison of cuckoo search and gravitational search algorithms. Int J Comput Intell Appl 13(1):1450005
Swain KB, Solanki SS, Mahakula AK (2014) Bio inspired cuckoo search algorithm based neural network and its application to noise cancellation. In: Proceedings of the international conference on signal processing and integrated networks (SPIN), pp 632–635
Khodier M (2013) Optimisation of antenna arrays using the cuckoo search algorithm. IET Microwaves Antennas Propag 7(6):458–464
Zhao P, Li H (2012) Opposition based Cuckoo search algorithm for optimization problems. In: Proceedings of the 2012 fifth international symposium on computational intelligence and design, pp 344–347
Saha SK, Kar R, Mandal D, Ghoshal SP (2013) Gravitational search algorithm: application to the optimal IIR filter design. J King Saud Univ Eng Sci, 1–13
Rashedi E, Nezamabadi-pour H, Saryazdi S (2011) Filter modeling using gravitational search algorithm. Eng Appl Artif Intell 24:117–122
Rashedi E, Nezamabadi-pour H, Saryazdi S (2011) Disruption: A new operator in gravitational search algorithm. Sci Iranica D 18:539–548
Doraghinejad M, Nezamabadi-pour H, Sadeghian AH, Maghfoori M (2012) A hybrid algorithm based on gravitational search algorithm for unimodal optimization. In: Proceedings of the 2nd international conference on computer and knowledge engineering (ICCKE), pp 129–132
Yazdani S, Nezamabadi-pour H, Kamyab S (2013) A gravitational search algorithm for multimodal optimization. Swarm Evol Comput 1–14
Rashedi E, Nezamabadi-pour H, Saryazdi S (2009) BGSA: binary gravitational search algorithm. Nat Comput 9(3):727–745
Mirjalili S, Hashim SZM (2010) A new hybrid PSOGSA algorithm for function optimization. In: 2010 international conference on computer and information application, pp 374–377
Jiang S, Ji Z, Shen Y (2014) A novel hybrid particle swarm optimization and gravitational search algorithm for solving economic emission load dispatch problems with various practical constraints. Electr Power Energy Syst 55:628–644
Ghodrati A, Lotfi S (2012) A hybrid CS/PSO algorithm for global optimization. Lect Notes Comput Sci 7198:89–98
Guo Z (2012) A hybrid optimization algorithm based on artificial bee colony and gravitational search algorithm. Int J Digit Content Technol Appl 6(17):620–626
Sun G, Zhang A (2013) A hybrid genetic algorithm and gravitational search algorithm for multilevel thresholding. Pattern Recognit Image Anal, LNCS 7887:707–714
Yin M, Hu Y, Yang F, Li X, Gu W (2011) A novel hybrid K-harmonic means and gravitational search algorithm approach for clustering. Expert Syst Appl 38:9319–9324
Liu H, Cai Z, Wang Y (2010) Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl Soft Comput 10:629–640
Sarangi SK, Panda R, Dash M (2014) Design of 1-D and 2-D recursive filters using crossover bacterial foraging and cuckoo search techniques. Eng Appl Artif Intell 34:109–121
He J (2008) An experimental study on the self-adaption mechanism used by evolutionary programming. Prog Nat Sci 10:167–175
Ji M (2004) A single point mutation evolutionary programming. Inf Process Lett 90:293–299
Appendix: Benchmark Functions
a. Sphere Model
$$F_{1} \left( {\rm X} \right) = \sum\nolimits_{i = 1}^{n} {x_{i}^{2} } , \, - 95 \le x_{i} \le 95,{\text{ and min}}\left( {F_{1} } \right) = F_{1} \left( {0, \ldots ,0} \right) = 0$$
b. Schwefel’s Problem 2.22 [21, 42]
$$F_{2} \left( {\rm X} \right) = \sum\nolimits_{i = 1}^{n} {\left| {x_{i} } \right| + \prod\nolimits_{i = 1}^{n} {\left| {x_{i} } \right|} } , \, - 12 \le x_{i} \le 12 , {\text{ and min}}\left( {F_{2} } \right) = F_{2} \left( {0, \ldots ,0} \right) = 0$$
c. Schwefel’s Problem 1.2
$$F_{3} \left( {\rm X} \right) = \sum\nolimits_{i = 1}^{n} {\left( {\sum\nolimits_{j = 1}^{i} {x_{j} } } \right)^{2} } , \, - 90 \le x_{i} \le 90,{\text{ and min}}\left( {F_{3} } \right) = F_{3} \left( {0, \ldots ,0} \right) = 0$$
d. Schwefel’s Problem 2.21
$$F_{4} \left( {\rm X} \right) = \mathop {\hbox{max} }\limits_{i} \left\{ {\left| {x_{i} } \right|,1 \le i \le n} \right\} , \, - 90 \le x_{i} \le 90,{\text{ and min}}\left( {F_{4} } \right) = F_{4} \left( {0, \ldots ,0} \right) = 0$$
e. Generalized Rosenbrock’s Function
$$\begin{aligned} & F_{5} \left( {\rm X} \right) = \sum\nolimits_{i = 1}^{n - 1} {\left[ {100\left( {x_{i + 1} - x_{i}^{2} } \right)^{2} + \left( {x_{i} - 1} \right)^{2} } \right]} , \, - 30 \le x_{i} \le 30 \\ & \hbox{min} \left( {F_{5} } \right) = F_{5} \left( {1, \ldots ,1} \right) = 0. \\ \end{aligned}$$
f. Step Function
$$F_{6} \left( {\rm X} \right) = \sum\nolimits_{i = 1}^{n} {\left( {\left\lfloor {x_{i} + 0.5} \right\rfloor } \right)^{2} } , \, - 100 \le x_{i} \le 100,{\text{ and min}}\left( {F_{6} } \right) = F_{6} \left( {0, \ldots ,0} \right) = 0.$$
g. Quartic Function, i.e., Noise
$$\begin{aligned} & F_{7} \left( {\rm X} \right) = \sum\nolimits_{i = 1}^{n} {i x_{i}^{4} } + {\text{random}}\left[ {0,1} \right), \, - 1.28 \le x_{i} \le 1.28 \\ & \hbox{min} \left( {F_{7} } \right) = F_{7} \left( {0, \ldots ,0} \right) = 0 \\ \end{aligned}$$
h. Generalized Rastrigin’s Function
$$\begin{aligned} & F_{8} \left( {\rm X} \right) = \sum\nolimits_{i = 1}^{n} {\left[ {x_{i}^{2} - 10\cos \left( {2\pi x_{i} } \right) + 10} \right]} , \, - 5.12 \le x_{i} \le 5.12 \\ & \hbox{min} \left( {F_{8} } \right) = F_{8} \left( {0, \ldots ,0} \right) = 0. \\ \end{aligned}$$
i. Ackley’s Function
$$\begin{aligned} & F_{9} \left( {\rm X} \right) = - 20\exp \left( { - 0.2\sqrt {\frac{1}{n}\sum\nolimits_{i = 1}^{n} {x_{i}^{2} } } } \right) - \exp \left( {\frac{1}{n}\sum\nolimits_{i = 1}^{n} {\cos \left( {2\pi x_{i} } \right)} } \right) + 20 + e \\ & - 32 \le x_{i} \le 32,{\text{ and min}}\left( {F_{9} } \right) = F_{9} \left( {0, \ldots ,0} \right) = 0. \\ \end{aligned}$$
j. Generalized Griewank Function
$$\begin{aligned} & F_{10} \left( {\rm X} \right) = \frac{1}{4000}\sum\nolimits_{i = 1}^{n} {x_{i}^{2} } - \prod\limits_{i = 1}^{n} {\cos \left( {\frac{{x_{i} }}{\sqrt i }} \right)} + 1 , \, - 600 \le x_{i} \le 600 \\ & \hbox{min} \left( {F_{10} } \right) = F_{10} \left( {0, \ldots ,0} \right) = 0. \\ \end{aligned}$$
k. Generalized Penalized Function 1
$$\begin{aligned} F_{11} \left( {\rm X} \right) = & \frac{\pi }{n}\left\{ {10\sin^{2} \left( {\pi y_{1} } \right) + \sum\nolimits_{i = 1}^{n - 1} {\left( {y_{i} - 1} \right)^{2} \left[ {1 + 10\sin^{2} \left( {\pi y_{i + 1} } \right)} \right]} + \left( {y_{n} - 1} \right)^{2} } \right\} \\ & + \sum\nolimits_{i = 1}^{n} {u\left( {x_{i} ,10,100,4} \right)} , \\ \end{aligned}$$
where
$$\begin{aligned} & u\left( {x_{i} ,a,k,m} \right) = \left\{ {\begin{array}{*{20}c} {k\left( {x_{i} - a} \right)^{m} , \, x_{i} > a} \\ {0, \, - a < x_{i} < a} \\ {k\left( { - x_{i} - a} \right)^{m} , \, x_{i} < - a} \\ \end{array} } \right.{\text{ and }}y_{i} = 1 + \frac{1}{4}\left( {x_{i} + 1} \right) \\ & - 50 \le x_{i} \le 50,{\text{ and min}}\left( {F_{11} } \right) = F_{11} \left( { - 1, \ldots , - 1} \right) = 0. \\ \end{aligned}$$
l. Generalized Penalized Function 2
$$\begin{aligned} F_{12} \left( {\rm X} \right) = & \, 0.1\left\{ {\sin^{2} \left( {3\pi x_{1} } \right) + \sum\nolimits_{i = 1}^{n - 1} {\left( {x_{i} - 1} \right)^{2} \left[ {1 + \sin^{2} \left( {3\pi x_{i + 1} } \right)} \right]} + \left( {x_{n} - 1} \right)^{2} \left[ {1 + \sin^{2} \left( {2\pi x_{n} } \right)} \right]} \right\} \\ & + \sum\nolimits_{i = 1}^{n} {u\left( {x_{i} ,5,100,4} \right)} , \\ \end{aligned}$$
where
$$\begin{aligned} & u\left( {x_{i} ,a,k,m} \right) = \left\{ {\begin{array}{*{20}c} {k\left( {x_{i} - a} \right)^{m} , \, x_{i} > a} \\ {0, \, - a < x_{i} < a} \\ {k\left( { - x_{i} - a} \right)^{m} , \, x_{i} < - a} \\ \end{array} } \right. \\ & - 50 \le x_{i} \le 50,{\text{ and min}}\left( {F_{12} } \right) = F_{12} \left( {1, \ldots ,1} \right) = 0. \\ \end{aligned}$$
m. Generalized Schwefel’s Problem 2.26
$$\begin{aligned} & F_{13} \left( {\rm X} \right) = \sum\nolimits_{i = 1}^{30} { - x_{i} \sin \left( {\sqrt {\left| {x_{i} } \right|} } \right)} , \, - 500 \le x_{i} \le 500 \\ & \hbox{min} \left( {F_{13} } \right) = F_{13} \left( {420.9687, \ldots ,420.9687} \right) = - 12569.5. \\ \end{aligned}$$
n. Shekel’s Foxholes Function
$$\begin{aligned} & F_{14} \left( {\rm X} \right) = \left( {\frac{1}{500} + \sum\nolimits_{j = 1}^{25} {\frac{1}{{j + \sum\nolimits_{i = 1}^{2} {\left( {x_{i} - a_{ij} } \right)^{6} } }}} } \right)^{ - 1} , \, - 65.536 \le x_{i} \le 65.536 \\ & \hbox{min} \left( {F_{14} } \right) = F_{14} \left( { - 32, - 32} \right) \approx 1, \\ \end{aligned}$$
where
$$a_{ij} = \left( {\begin{array}{*{20}l} { - 32} \hfill & { - 16} \hfill & 0 \hfill & {16} \hfill & {32} \hfill & { - 32} \hfill & \cdots \hfill & 0 \hfill & {16} \hfill & {32} \hfill \\ { - 32} \hfill & { - 32} \hfill & { - 32} \hfill & { - 32} \hfill & { - 32} \hfill & { - 16} \hfill & \cdots \hfill & {32} \hfill & {32} \hfill & {32} \hfill \\ \end{array} } \right).$$
o. Kowalik’s Function
$$F_{15} \left( {\rm X} \right) = \sum\nolimits_{i = 1}^{11} {\left[ {a_{i} - \frac{{x_{1} \left( {b_{i}^{2} + b_{i} x_{2} } \right)}}{{b_{i}^{2} + b_{i} x_{3} + x_{4} }}} \right]^{2} } , \, - 5 \le x_{i} \le 5$$
\(\hbox{min} \left( {F_{15} } \right) \approx F_{15} \left( {0.1928,0.1908,0.1231,0.1358} \right) \approx 0.0003075\). The coefficients are displayed in Table 14 [21, 41, 42].
p. Six-Hump Camel-Back Function
$$F_{16} \left( {\rm X} \right) = 4x_{1}^{2} - 2.1x_{1}^{4} + \frac{1}{3}x_{1}^{6} + x_{1} x_{2} - 4x_{2}^{2} + 4x_{2}^{4} , \, - 5 \le x_{i} \le 5$$
$$\begin{aligned} & X_{\hbox{min} } = \left( {0.08983, - 0.7126} \right),\left( { - 0.08983,0.7126} \right) \\ & \hbox{min} \left( {F_{16} } \right) = - 1.0316285. \\ \end{aligned}$$
q. Branin Function
$$\begin{aligned} & F_{17} \left( {\rm X} \right) = \left( {x_{2} - \frac{5.1}{{4\pi^{2} }}x_{1}^{2} + \frac{5}{\pi }x_{1} - 6} \right)^{2} + 10\left( {1 - \frac{1}{8\pi }} \right)\cos x_{1} + 10 \\ & - 5 \le x_{1} \le 10, \, 0 \le x_{2} \le 15 \\ \end{aligned}$$
$$\begin{aligned} & X_{\hbox{min} } = \left( { - 3.142,12.275} \right),\left( {3.142,2.275} \right),\left( {9.425,2.425} \right) \\ & \hbox{min} \left( {F_{17} } \right) = 0.398. \\ \end{aligned}$$
r. Goldstein-Price Function
$$\begin{aligned} & F_{18} \left( {\rm X} \right) = \left[ {1 + \left( {x_{1} + x_{2} + 1} \right)^{2} \left( {19 - 14x_{1} + 3x_{1}^{2} - 14x_{2} + 6x_{1} x_{2} + 3x_{2}^{2} } \right)} \right] \\ & \quad \quad \quad \quad \times \left[ {30 + \left( {2x_{1} - 3x_{2} } \right)^{2} \left( {18 - 32x_{1} + 12x_{1}^{2} + 48x_{2} - 36x_{1} x_{2} + 27x_{2}^{2} } \right)} \right] \\ & - 2 \le x_{i} \le 2,{\text{ and }}\hbox{min} \left( {F_{18} } \right) = F_{18} \left( {0, - 1} \right) = 3. \\ \end{aligned}$$
s. Hartman’s Family
$$F\left( {\rm X} \right) = - \sum\limits_{i = 1}^{4} {c_{i} \exp \left( { - \sum\limits_{j = 1}^{n} {a_{ij} \left( {x_{j} - p_{ij} } \right)^{2} } } \right)} , \, 0 \le x_{j} \le 1, \, n = 3,6$$
for \(F_{19} \left( {\rm X} \right)\) and \(F_{20} \left( {\rm X} \right)\), respectively. \(X_{\hbox{min} } {\text{ of }}F_{19} = \left( {0.114,0.556,0.852} \right)\), and \(\hbox{min} \left( {F_{19} } \right) = - 3.86\). \(X_{\hbox{min} } {\text{ of }}F_{20} = \left( {0.201,0.150,0.477,0.275,0.311,0.657} \right)\), and \(\hbox{min} \left( {F_{20} } \right) = - 3.32\).
The coefficients are shown in Tables 15 and 16, respectively.
t. Shekel’s Family
$$F\left( {\rm X} \right) = - \sum\nolimits_{i = 1}^{m} {\left[ {\left( {X - a_{i} } \right)\left( {X - a_{i} } \right)^{T} + c_{i} } \right]^{ - 1} } , \, m = 5,7,{\text{ and }}10,{\text{ for }}F_{21} ,F_{22} ,{\text{ and }}F_{23} ;\; 0 \le x_{j} \le 10,\; x_{{{\text{local\_optima}}}} \approx a_{i} ,\;{\text{and}}\;F\left( {x_{{{\text{local\_optima}}}} } \right) \approx 1/c_{i} \;{\text{for}}\;1 \le i \le m.$$
These functions have five, seven, and ten local minima for \(F_{21} ,F_{22} , {\text{ and }}F_{23}\), respectively. The coefficients are shown in Table 17.
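A few of the benchmark functions above, coded as a quick sanity check of the listed optima (a sketch only; dimension and search ranges are left to the caller):

```python
import math

def sphere(x):
    """F1 Sphere Model: min 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """F8 Generalized Rastrigin: min 0 at the origin."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):
    """F9 Ackley: min 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

Evaluating each at its listed minimizer confirms the tabulated optimum, which is a useful sanity check before benchmarking any of the three algorithms.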
© 2016 Springer India
Naik, M.K., Samantaray, L., Panda, R. (2016). A Hybrid CS–GSA Algorithm for Optimization. In: Bhattacharyya, S., Dutta, P., Chakraborty, S. (eds) Hybrid Soft Computing Approaches. Studies in Computational Intelligence, vol 611. Springer, New Delhi. https://doi.org/10.1007/978-81-322-2544-7_1
Publisher Name: Springer, New Delhi
Print ISBN: 978-81-322-2543-0
Online ISBN: 978-81-322-2544-7