1 Introduction

By observing the great laws of nature, many modern metaheuristic algorithms (Gandomi et al. 2013a; Yang et al. 2013) have been widely applied to address a myriad of complex optimization problems (Yang 2010b). These problems include feature selection (Li and Yin 2013a), image segmentation (Zhang et al. 2011), flow shop scheduling (Li and Yin 2013b), reliability problems (Zou et al. 2010, 2011) and the knapsack problem (Zou et al. 2011). Such metaheuristic algorithms are well capable of finding the best solutions by extracting useful information from a group of individuals. Since genetic algorithms (GAs) (Goldberg 1998) were put forward during the 1950s and 1960s, many metaheuristic algorithms have been proposed, such as the artificial plant optimization algorithm (APOA) (Cai et al. 2012), ant colony optimization (ACO) (Zhang and Feng 2012; Zhang et al. 2014; Dorigo et al. 1996), differential evolution (DE) (Storn and Price 1997; Li and Yin 2012), the bat algorithm (BA) (Gandomi et al. 2013b; Yang and Gandomi 2012; Yang 2010a; Mirjalili et al. 2013), artificial physics optimization (Xie et al. 2012), biogeography-based optimization (BBO) (Simon 2008; Wang et al. 2013a; Mirjalili et al. 2014a), krill herd (KH) (Gandomi and Alavi 2012; Wang et al. 2014a), harmony search (HS) (Geem et al. 2001; Wang et al. 2014b), monarch butterfly optimization (MBO) (Wang et al. 2015), the flower pollination algorithm (FPA) (Yang et al. 2014), animal migration optimization (AMO) (Li et al. 2014), particle swarm optimization (PSO) (Kennedy and Eberhart 1995; Mirjalili and Lewis 2013; Talatahari et al. 2013; Zhao et al. 2014a, b), the grey wolf optimizer (GWO) (Mirjalili et al. 2014b) and the firefly algorithm (FA) (Gandomi et al. 2011; Yang et al. 2012; Wang et al. 2014c).

Recently, Yang and Deb (2010) proposed a new metaheuristic optimization algorithm, the cuckoo search (CS) method. CS is inspired by the smart brood-parasitic incubation behavior of cuckoos in nature. For the single-objective case, the numbers of eggs, cuckoos and nests in CS are equal, and each represents an available solution. CS is well capable of finding the best solutions by continuously replacing a not-so-good cuckoo in the population with a new and potentially better solution. Being conceptually extremely simple, CS is very easy to implement.

In most cases, CS performs local search well, but sometimes it may be unable to escape from local optima, which restricts its ability to carry out a full global search. To overcome this and enhance the search ability of the CS method, a strategy has been proposed (Li and Yin 2015) that uses a self-adaptive method to adjust the parameters.

On the other hand, many scholars have paid increasing attention to nonlinear dynamics, especially chaos, and chaos theory has recently found applications in the area of metaheuristics. To date, chaotic sequences have been combined with several metaheuristic algorithms, such as the imperialist competitive algorithm (Talatahari et al. 2012), FA (Gandomi et al. 2013c), charged system search (Nouhi et al. 2013), chaotic swarming of particles (CSP) (Kaveh et al. 2014), BA (Gandomi and Yang 2014), the KH algorithm (Wang et al. 2014d), a memetic DE algorithm (Jia et al. 2011) and PSO (Gandomi et al. 2013d).

The present manuscript introduces a chaotic CS-based method (CCS), intended to accelerate convergence. In this series of chaotic CS-based algorithms, various one-dimensional chaotic maps are used in place of the fixed step size of CS; in other words, chaotic maps serve as efficient alternatives to pseudorandom sequences. The chaotic CS-based methods are tested on 27 benchmark functions and an engineering case. Performance comparison with other approaches demonstrates the superiority of the proposed strategy: a series of simulations shows that CCS performs more efficiently and accurately than the basic CS and other metaheuristic methods. The improvement of the new methods stems from the use of deterministic chaotic signals in place of the original fixed step size.

The paper is organized as follows: Sect. 2 describes the basic CS algorithm and 12 chaotic maps. The proposed CCS approach is described in Sect. 3. Subsequently, the tuning of the step size \(\alpha \) and the selection of the best chaotic CS variant are discussed in Sect. 4, where the CCS algorithm is also evaluated on 27 functions and an engineering case in comparison with eight other methods. Finally, Sect. 5 provides a summary of our work.

2 Preliminary

First and foremost, we provide brief preliminaries on the CS algorithm and the 12 chaotic maps.

2.1 The CS algorithm

CS is a swarm intelligence optimization method put forward by idealizing and simplifying the brood behavior of some cuckoo species in combination with Lévy flights.

In order to describe the CS method more easily, Yang and Deb have proposed the following three hypothetical rules:

  1. each cuckoo lays and places one egg in a randomly chosen nest at a time;

  2. the optimal nests cannot be corrupted;

  3. the number of host nests is fixed, and the egg can be found by the host bird with a fixed probability \(p_\mathrm{a} \in [0,1]\).

Based on these three rules, the pseudo-code of CS can be represented as shown in Fig. 1.

Fig. 1
figure 1

Pseudo-code of the CS algorithm

For cuckoo \(i\), a new solution \(x_i^{(t+1)}\) is generated by performing a Lévy flight

$$\begin{aligned} x_i^{(t+1)} = x_i^{(t)} + \alpha \oplus \mathrm{L\acute{e}vy}(\lambda ) \end{aligned}$$
(1)

where \(\alpha > 0\) is the step size. In most cases, \(\alpha \) is simply set to 1. The product \(\oplus \) denotes entrywise multiplication.
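As a sketch of how Eq. (1) can be realized, the fragment below draws Lévy-distributed steps with Mantegna's algorithm, a common choice in CS implementations (the exponent \(\lambda = 1.5\), the function names and the use of NumPy are illustrative assumptions, not taken from this paper):

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(lam, size, rng):
    # Mantegna's algorithm: scale for the numerator Gaussian
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, size)   # numerator samples
    v = rng.normal(0.0, 1.0, size)     # denominator samples
    return u / np.abs(v) ** (1 / lam)  # heavy-tailed Levy steps

def cuckoo_update(x, alpha, lam=1.5, rng=None):
    # One Levy-flight move of Eq. (1): x <- x + alpha (entrywise) Levy(lam)
    rng = rng or np.random.default_rng()
    return x + alpha * levy_flight(lam, x.shape, rng)
```

The heavy tail of the Lévy distribution occasionally produces long jumps, which is what lets CS mix local refinement with global exploration.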

2.2 Chaotic maps

In the optimization field, the chaotic optimization algorithm (COA) is a class of search methods that use chaotic rather than random variables. Because chaos has the properties of non-repetition and ergodicity, COA can carry out a full search at higher speeds than stochastic searches that rely on probabilities. To serve this objective, 12 one-dimensional non-invertible maps are applied to generate chaotic sets (see Table 1). Note that M1, M2, \(\ldots \), M12 in Tables 3 and 4 are short for the 12 corresponding chaotic maps. More details about the COA method and the 12 chaotic maps can be found in Wang et al. (2013b).
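For illustration, two one-dimensional maps commonly used in this literature can be iterated as follows (the Logistic and Sinusoidal forms and the parameter values \(a = 4\) and \(a = 2.3\) are conventional choices in the chaotic-metaheuristics literature, not quoted from Table 1):

```python
import math

def logistic_map(x, a=4.0):
    # Logistic map: x_{k+1} = a * x_k * (1 - x_k); fully chaotic at a = 4
    return a * x * (1.0 - x)

def sinusoidal_map(x, a=2.3):
    # Sinusoidal map: x_{k+1} = a * x_k^2 * sin(pi * x_k)
    return a * x * x * math.sin(math.pi * x)

def chaotic_sequence(map_fn, x0, n):
    # Iterate a one-dimensional map n times from seed x0
    seq, x = [], x0
    for _ in range(n):
        x = map_fn(x)
        seq.append(x)
    return seq
```

Despite being fully deterministic, such sequences never repeat and densely cover their range, which is the ergodicity property COA exploits.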

Table 1 Twelve different chaotic maps

3 Chaotic CS

As presented in Eq. (1), the main parameters of CS are the step size \(\alpha \) and the discovery rate \(p_{\mathrm{a}}\); their values have a great influence on the convergence speed and on how well CS performs.

The basic CS is well capable of finding the best solutions, but the solutions still oscillate slightly around the optima. Following the previous literature (Yang 2010b), we used \({\alpha }= O(L/10)\) and \(p_{\mathrm{a}}= 0.25\) for the standard CS, where \(L\) is the characteristic scale of the problem of interest.

The step size used in the basic CS remains fixed throughout the search. An improved CS with a chaotically varying step size \(\alpha \) may outperform the basic one and accelerate its convergence. All chaotic maps are normalized so that their values always lie in [0, 2]. After normalization, the chaotic maps are used to tune the step size \(\alpha \), and this improved CS method is referred to as chaotic CS (CCS).
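A minimal sketch of this step-size schedule, assuming the Logistic map (native range [0, 1]) as the chaotic source and the [0, 2] normalization described above (the function name and seed are illustrative):

```python
def chaotic_alpha_schedule(max_gen, x0=0.7):
    # One step size per generation: iterate the Logistic map x <- 4x(1-x)
    # (values in [0, 1]) and rescale into the normalized range [0, 2]
    alphas, x = [], x0
    for _ in range(max_gen):
        x = 4.0 * x * (1.0 - x)
        alphas.append(2.0 * x)
    return alphas
```

Each generation then performs the Lévy-flight updates of Eq. (1) with the corresponding entry of this schedule instead of a constant \(\alpha \).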

The pseudo-code of the CCS algorithm is shown in Fig. 2.

Fig. 2
figure 2

Pseudo-code of the CCS algorithm

In addition, to protect the best cuckoos, an elitism strategy is introduced into CCS. This forbids the best cuckoos from being corrupted by the cuckoo updating operator. Note that the elitism strategy saves the properties of the best cuckoos during the CCS process, so even if the cuckoo updating operation destroys a cuckoo, the best cuckoos can be restored if needed.
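The elitism step can be sketched as follows; the routine assumes minimization and that the elite cuckoos were saved before the updating operator ran (array and function names are illustrative):

```python
import numpy as np

def elitism(pop, fit, saved_pop, saved_fit, n_elite=2):
    # Replace the n_elite worst cuckoos of the updated population with the
    # n_elite best cuckoos saved before the update (minimization assumed)
    worst = np.argsort(fit)[-n_elite:]            # indices of current worst
    best_saved = np.argsort(saved_fit)[:n_elite]  # indices of saved best
    pop, fit = pop.copy(), fit.copy()
    pop[worst] = saved_pop[best_saved]
    fit[worst] = saved_fit[best_saved]
    return pop, fit
```

This guarantees the best fitness found so far never worsens between generations, at the cost of slightly reduced diversity among the replaced individuals.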

4 Simulation experiments

In order to illustrate the benefits of the CCS method, it is tested through an array of experiments on benchmark functions and an engineering case. To obtain real and fair results, all implementations were run under the same conditions as in Wang et al. (2014e) and Guo et al. (2014). The benchmark functions shown in Table 2 are standard test functions, and their detailed properties can be found in Wang et al. (2014f, g). In the following experiments, the best value for each function is shown in bold. The dimension of each function is 20 in the present work.

Table 2 Benchmark functions

4.1 CCS with different chaotic maps

Different chaotic CS variants were benchmarked on 14 well-known high-dimensional numerical examples (see Table 2). In this subsection, the step size \(\alpha \) is tuned by replacing it with each of the 12 different chaotic maps (see Sect. 2.2).

For the CCS algorithm, we set the population size, elitism parameter and maximum generation number to 50, 2 and 50, respectively. 1000 independent runs are carried out to obtain representative performance. The results are shown in Tables 3 and 4.

From Tables 3 and 4, it can be seen that CCS performs more effectively with M9 (Sine map) and M11 (Sinusoidal map) than with the other maps. Of these two, on average, M11 significantly outperforms M9 on most benchmarks. Further, Table 4 shows that M11 yields outputs that differ only slightly from the optimal value across multiple runs. Comprehensive consideration therefore leads us to select M11 as the final map for chaotic CS (CCS). In addition, the numerical simulations indicate that choosing an appropriate chaotic map is crucial to making full use of the advantages of chaotic CS: M11 provides more information to guide the CS search, while the other chaotic maps do not.

Table 3 Minimum values obtained by CCS algorithm with different chaotic maps
Table 4 Best values obtained by CCS algorithm with different chaotic maps

4.2 General performance of CCS

In order to investigate the benefits of CCS, it was compared with eight other metaheuristic algorithms: ACO (Dorigo and Stutzle 2004), DE (Storn and Price 1997), ES (Beyer 2001), GA (Goldberg 1998), HS (Geem et al. 2001), KH (Gandomi and Alavi 2012), PBIL (Shumeet 1994) and PSO (Kennedy and Eberhart 1995).

In the following experiments, CS and CCS use the same discovery rate \(p_{\mathrm{a}}=0.25\). In addition, the population size, elitism parameter and maximum generation number are the same as in Sect. 4.1. The parameters of the other methods are set as follows (Wang et al. 2014g). For ACO, initial pheromone value \(\tau _{0}=1\mathrm{E}{-}6\), pheromone update constant \(Q=20\), exploration constant \(q_{0}=1\), global pheromone decay rate \(\rho _{\mathrm{g}}=0.9\), local pheromone decay rate \(\rho _{\mathrm{l}}=0.5\), pheromone sensitivity \(\alpha =1\) and visibility sensitivity \(\beta =5\). For DE, a weighting factor \(F=0.5\) and a crossover constant \(\mathrm{CR}=0.5\). For ES, the number of offspring produced in each generation \(\lambda =10\) and standard deviation \(\sigma =1\) for changing solutions. For GA, roulette wheel selection and single-point crossover with a crossover probability of 1 and a mutation probability of 0.01. For HS, a harmony memory accepting rate of 0.75 and a pitch adjusting rate of 0.7. For KH, the foraging speed \(V_{\mathrm{f}}= 0.02\), the maximum diffusion speed \(D^{\mathrm{max}}= 0.005\) and the maximum induced speed \(N^{\mathrm{max}}= 0.01\). For PBIL, a learning rate of 0.05, 1 good and 0 bad population members used to update the probability vector each generation, an elitism parameter of 1 and a probability-vector mutation rate of 0. For PSO, an inertial constant of 0.3, a cognitive constant of 1 and a social constant for swarm interaction of 1.

It is well known that metaheuristic methods are generally based on random distributions. To remove the influence of randomness, 1000 independent runs are carried out for each method. Tables 5, 6 and 7 present the results of the simulations.
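The per-method statistics reported in Tables 5, 6 and 7 can be collected with a harness along these lines (a sketch; the `optimizer` callable, function name and seeding scheme are assumptions, not from the paper):

```python
import numpy as np

def run_trials(optimizer, n_runs=1000, seed=0):
    # optimizer: callable taking an rng and returning a final objective value.
    # Each run gets an independent child rng derived from one master seed.
    master = np.random.default_rng(seed)
    finals = [optimizer(np.random.default_rng(int(master.integers(2**32))))
              for _ in range(n_runs)]
    # Mean, best and worst final values over the independent runs
    return float(np.mean(finals)), float(np.min(finals)), float(np.max(finals))
```

Deriving each run's seed from a single master generator keeps the whole experiment reproducible while keeping the runs statistically independent.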

Table 5 Mean optimization results
Table 6 Best optimization results
Table 7 Worst optimization results

From Table 5, we see that, on average, CCS is well capable of finding the best values except on F03, F04, F07, F10, F19 and F27; ACO performs best on the F07, F19, F24 and F27 functions. Table 6 shows that CCS performs best except on F03, F04, F06, F07, F09, F10, F12, F19 and F27; GA finds the best values on F04, F06, F09 and F19, and ACO on F07, F12 and F27. Moreover, for the worst values shown in Table 7, CCS finds the best values on 23 of the 27 benchmarks (F01, F03–F06, F08–F18 and F20–F26), while ACO performs best on the F01, F02, F07, F10, F19 and F27 functions. Evidently, replacing the step size \(\alpha \) with chaotic maps can definitely improve the performance of CS.

Moreover, a careful look at Tables 5, 6 and 7 shows that incorporating chaotic maps into the CS model leads to a significant increase in performance compared with the other methods. This verifies the effectiveness of the proposed CCS algorithm in solving global numerical optimization problems.

Further, to demonstrate the superiority of CCS more clearly, several representative convergence curves for CCS and CS are shown in Figs. 3, 4, 5, 6 and 7. The values shown are the average objective function optima.

Fig. 3
figure 3

Convergence curves of the F01, F02, F05 and F06 functions

Fig. 4
figure 4

Convergence curves of the F07–F10 functions

Fig. 5
figure 5

Convergence curves of the F11, F12, F14 and F17 functions

Fig. 6
figure 6

Convergence curves of the F18–F20 and F22 functions

Fig. 7
figure 7

Convergence curves of the F23–F25, F05 and F27 functions

From Fig. 3, we can conclude that CCS is significantly superior to the CS method, especially on F01.

Figure 4 illustrates the function values for the F07–F10 functions. For this case, the figure clearly shows that CCS performs significantly better than the CS method.

From Fig. 5, CCS is significantly superior to the CS algorithm, though CCS and CS perform similarly on F11 at the end of the optimization.

Figure 6 shows the results for the F18–F20 and F22 functions. From Fig. 6, CCS clearly outperforms the CS method on these examples.

Figure 7 shows the performance achieved on the F23–F25, F05 and F27 functions. It can be seen that CCS converges faster than the CS method.

From the above analyses of Figs. 3, 4, 5, 6 and 7, we conclude that substituting the step size with an appropriate chaotic map yields superior performance in solving global optimization problems.

4.3 Sensor selection problem

In addition to the standard functions discussed in the section above, one more engineering optimization problem, the sensor selection problem, is used to validate the CCS method.

The Modular Aero-Propulsion System Simulation (MAPSS) (Simon 2008) is used as the engine simulation in the sensor selection problem. The model of the turbofan engine can be represented by the discretized time-invariant equations

$$\begin{aligned} \begin{array}{l} x(k+1)=f[x(k),u(k),p(k)]+w_x (k) \\ p(k+1)=p(k)+w_p (k) \\ y(k)=g[x(k),u(k),p(k)]+e(k), \\ \end{array} \end{aligned}$$
(2)

where \(k\), \(x\), \(u\), \(p\) and \(y\) are the time index, the three-element state vector, the three-element control vector, the ten-element health parameter vector and the measurement vector, respectively. Between measurement times, the deviations of the health parameters can be approximated by the zero-mean noise \(w_{p}(k)\); \(w_{x}(k)\) and \(e(k)\) denote inaccuracies in the system model and measurement noise, respectively.

The state vector \(x\) and the health parameter vector \(p\) in Eq. (2) can be estimated by a Kalman filter, and the estimation uncertainty is given by the error covariance \(\Sigma \). Here, the covariance \(\Sigma \) is a \(13\times 13\) matrix, because this problem includes three states and ten health parameters. In the sensor selection problem, only the health parameter estimation errors need to be taken into account, so attention can be restricted to the diagonal elements \(\Sigma (i, i)\) \((i=4, 5, \ldots , 13)\).

Based on the above analyses, the objective function of the health estimation problem can be expressed as

$$\begin{aligned} J=\sum \limits _{i=4}^{13} {\sqrt{\frac{\Sigma (i,i)}{\Sigma _0 (i,i)}}} +\frac{\alpha C}{C_0 }, \end{aligned}$$
(3)

where \(\Sigma _{0}\) and \(C_{0}\) are used for normalization: \(\Sigma _{0}\) is the error covariance, and \(C_{0}\) the cost, obtained when the aircraft engine is equipped with all 11 sensors. \(\alpha \) is a scale factor that balances the importance of financial cost against estimation accuracy.
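Under the definitions above, Eq. (3) can be evaluated as in the following sketch (0-based indexing shifts the health-parameter entries \(i = 4, \ldots , 13\) to positions 3..12 of the diagonal; the function name is illustrative):

```python
import numpy as np

def sensor_cost(Sigma, Sigma0, C, C0, alpha):
    # Eq. (3): normalized health-parameter estimation error plus
    # alpha-weighted, normalized sensor cost. Health parameters occupy
    # entries i = 4..13 (1-indexed) of the 13x13 covariance matrix.
    idx = np.arange(3, 13)  # 0-based positions of the health parameters
    err = np.sum(np.sqrt(np.diag(Sigma)[idx] / np.diag(Sigma0)[idx]))
    return err + alpha * C / C0
```

When the candidate sensor set reproduces the full-sensor covariance and cost exactly (Sigma = Sigma0, C = C0), the error term sums to 10 and J reduces to \(10 + \alpha \).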

It is clear that selecting the sensor set that minimizes \(J\) is essentially an optimization problem.

Optimization methods can thus be used to solve the sensor selection problem. Here, nine optimization methods are used to search for a sub-optimal sensor set. The results are recorded in Table 8. It can be seen that CCS yields better average, best and worst performance than the other eight methods. Furthermore, the standard deviation (Std) of CCS is much smaller than that of the other methods; that is, CCS has a relatively high probability of finding a satisfactory sensor set.

Table 8 The final values for the sensor selection problem

5 Conclusion

In this study, chaos has been combined with the standard CS to yield an improved version of the algorithm, namely the CCS algorithm. Twelve chaotic maps were used to tune the step size \(\alpha \) of CS. Through a series of simulations of the various chaotic CS variants, the algorithm using the Sinusoidal map in place of \(\alpha \) was identified as the best chaotic CS. The experimental results show that the tuned CS significantly enhances both the global search ability and the quality of the results. From the results of the nine approaches on the benchmarks and the sensor selection problem, it can be seen that CCS significantly enhances the search ability of CS on most benchmark problems as well as on the engineering problem.

Various issues in optimization remain worthy of further study, and more efficient methods may be put forward for specific engineering problems. In the future, we will focus on these challenging issues. On the one hand, the CCS method will be applied to other practical engineering problems; we strongly believe that CCS can become an efficient and promising method for solving real-world engineering problems. On the other hand, new hybrid metaheuristic approaches will be put forward to address optimization problems, building on the current studies.