Abstract
Cuckoo search (CS) is an excellent population-based algorithm and has shown promising performance on single- and multi-objective optimization problems. However, CS cannot be directly employed for many-objective optimization problems (MaOPs), and so far few papers have reported using CS to solve MaOPs. In this paper, we propose a hybrid many-objective cuckoo search (HMaOCS) for MaOPs. In HMaOCS, the standard CS is first modified to deal effectively with MaOPs. Then, non-dominated sorting and the strategy of reference points are employed to ensure convergence and diversity. To verify the performance of HMaOCS, the DTLZ and WFG benchmark sets are utilized in the experiments. Experimental results show that HMaOCS achieves promising performance compared with five other well-known many-objective optimization algorithms.
1 Introduction
In the real world, there are many complex optimization problems (Abdel-Baset et al. 2018; Cortés et al. 2018; Cui et al. 2017a, b; Wang et al. 2018), and many researchers have devoted great effort to solving them over the past decades (Pandey et al. 2018; Shan et al. 2018; Sun et al. 2014; Zhao et al. 2018). Although many efficient methods (Cai et al. 2016, 2018; Cui et al. 2019b; Fan et al. 2018) have been proposed, there are still intractable problems (Cui et al. 2018; Pooja et al. 2018; Wang et al. 2017; Yigit et al. 2018) whose complexity is beyond currently available methods. Multi-objective optimization problems (MOPs) are problems with two or three objectives, and they arise in many areas, such as engineering, economics and biology (Coelho et al. 2013; Cui et al. 2019a; Niu et al. 2018). Such problems often have multiple conflicting objectives that cannot be optimized simultaneously, so no single best solution can be obtained. Instead, a set of trade-off solutions is needed to approximate the true Pareto front. Over the past several years, many efficient and effective algorithms have been proposed to tackle MOPs, such as NSGA-II (Deb et al. 2002b), MOEA/D (Zhang and Li 2007) and SPEA2 (Zitzler et al. 2001). However, many practical problems have more than three objectives; these are called many-objective optimization problems (MaOPs) (Cui et al. 2019c). MaOPs pose various challenges to existing multi-objective optimization algorithms. The first is the high proportion of non-dominated solutions: as the number of objectives increases, more and more solutions become non-dominated, sometimes exceeding the number of population slots, which slows down the search process. The second challenge is diversity maintenance.
Crowding distance and clustering are two common diversity-maintenance operators, and both become computationally expensive when dealing with MaOPs. The third difficulty is visualization of the approximated front, which makes it harder for decision-makers to evaluate an algorithm.
In recent years, many researchers have paid attention to the above challenges. Although the third difficulty cannot be avoided entirely, algorithmic changes to existing methods can address the first two. According to their characteristics, recently proposed many-objective optimization algorithms can be divided into the following three categories.
1.
The first category modifies the classical Pareto dominance to enhance the selection pressure toward the true front. This idea has been widely used, and many variants of dominance have been proposed. Laumanns et al. (2002) proposed ε-dominance. Deb et al. (2005) designed a steady-state algorithm (ε-MOEA) based on the concept of ε-dominance, which achieved a well-distributed and well-converged set of solutions. Yang et al. (2013) presented grid dominance to strengthen the selection pressure toward the optimal direction and applied it to a multi-objective evolutionary algorithm. These modifications aim to enlarge the dominating area so that more solutions are dominated. Zou et al. (2008) proposed another variant of Pareto dominance called L-dominance, which takes into account not only the number of improved objective values but also, if all objectives have the same importance, the values of the improved objective functions. Experimental results demonstrated that the new algorithm based on L-dominance obtained good performance.
2.
The second type reduces the cost of diversity maintenance. Adra and Fleming (2011) proposed a diversity management mechanism, namely DMI, to determine whether or not to activate diversity promotion, which is deactivated when the population is excessively diverse. In terms of both convergence and diversity, the DMI-based evolutionary multi-objective optimization algorithm shows excellent performance. Li et al. (2010) employed grid dominance to define an adaptive neighborhood for each individual, whose size varies with the number of objectives. Moreover, the adaptive neighborhood and average ranking are integrated to pick out well-converged individuals and to prohibit or postpone the archiving of adjacent individuals. Similar to Li et al. (2010), Yang et al. (2013) modified grid dominance and further proposed grid ranking, grid crowding distance and grid coordinate point distance to maintain an extensive and uniform distribution among solutions. Li et al. (2014) developed a general modification of density estimation (SDE) to make Pareto-based algorithms suitable for MaOPs. Different from traditional density estimation, which considers only the distribution of individuals in the population, SDE shifts the position of other solutions according to their convergence relative to the current solution, so that both the distribution and the convergence information of the solutions are considered.
3.
The third category is intuitive: it employs aggregation functions to differentiate many-objective solutions. One representative algorithm is the multi-objective evolutionary algorithm based on decomposition (MOEA/D) (Zhang and Li 2007). The main idea of MOEA/D is to decompose a complex many-objective optimization problem into a number of scalar optimization subproblems using aggregation functions; all subproblems are then optimized simultaneously. Different from MOEA/D, Hughes (2003) employed vector angle distance scaling and weighted Tchebycheff methods to rank individuals and proposed multiple single-objective Pareto sampling (MSOPS), which performs a parallel search over multiple conventional target-vector-based optimizations. Based on NSGA-II (Deb et al. 2002b), Deb and Jain suggested a reference point-based many-objective evolutionary algorithm framework (NSGA-III) (Deb and Jain 2014), which emphasizes population members that are non-dominated yet close to a set of supplied reference points. These ideas are effective, and they avoid the dominance relationship, which is a great challenge for MaOPs.
There are also many other many-objective optimization algorithms that employ ideas different from the above three categories. For example, to tackle the high computational effort required for hypervolume calculation, Bader and Zitzler (2011) used Monte Carlo simulation to approximate the exact hypervolume values; the actual indicator values are not important, but rather the ranking of solutions induced by the hypervolume indicator. Zitzler and Kunzli (2004) analyzed how decision-maker preference information can be integrated into the search process and proposed using a predefined optimization goal to measure the contribution of each solution.
Cuckoo search (CS) (Yang and Deb 2010b) is an effective swarm intelligence-based algorithm. Since its introduction, CS has attracted much attention due to its simplicity and efficiency in dealing with nonlinear optimization problems and real-world engineering applications. In recent years, many researchers have proposed modified CS variants for MOPs and practical applications (Zhang et al. 2018). However, to the best of our knowledge, no modified CS for MaOPs has been reported in the literature. In this paper, we modify CS for MaOPs. First, we remove the global best solution from the global search, because there is no single best solution in MaOPs, and add a new factor to the global search. Second, we employ non-dominated sorting and the strategy of reference points from NSGA-III to update the population, which is the combination of the parent and offspring individuals. To verify the performance of the proposed algorithm, two well-known benchmark sets, DTLZ and WFG, are utilized in the experiments.
The organization of this paper is as follows. In Sect. 2, the standard CS is briefly introduced. The literature of CS for MOPs is given in Sect. 3. The proposed approach is described in Sect. 4. In Sect. 5, experimental results and analysis are presented. Finally, the work is concluded and summarized in Sect. 6.
2 Standard Cuckoo search
Cuckoo search (CS) (Yang and Deb 2010a, b) was first proposed by Yang and Deb in 2009, inspired by the brood parasitism of cuckoos. Cuckoos are very interesting birds: instead of building their own nests, they lay their eggs in the nests of other species. Although these eggs may look very similar to the host birds' eggs, a fraction of them are still discovered by the host birds. In this situation, the host birds either throw the alien eggs away or abandon their nests. To describe this phenomenon, Yang and Deb idealized the following three rules (Yang and Deb 2010b):
1.
Each cuckoo lays one egg at a time and dumps it in a randomly selected nest.
2.
The best nests with high-quality eggs are carried over to the next generation.
3.
Once the alien eggs (solutions) are discovered by host birds, the nests will be abandoned with a probability \( p_{a} \in [0,1] \).
In fact, the standard CS not only mimics the behavior of cuckoos, but also employs Lévy flights to enhance its search ability. A Lévy flight is a random walk process; studies have shown that the flight behavior of various birds demonstrates the typical characteristics of Lévy flights (Barthelemy et al. 2008; Reynolds and Frye 2007).
In CS, each cuckoo i corresponds to a solution \( X_{i} \), which can be updated with the following Lévy flight:
$$ X_{i} (t + 1) = X_{i} (t) + \alpha \oplus {\text{Levy}}(\lambda ) $$(1)
where α > 0 is the step size, which should be related to the scale of the specified problem, and \( \oplus \) denotes entry-wise multiplication. The step size α can be defined as follows:
$$ \alpha = \alpha_{0} \left( {X_{i} (t) - X_{\text{best}} } \right) $$(2)
where \( \alpha_{0} = 0.01 \) and \( X_{\text{best}} \) is the best position in the population. \( {\text{Levy}}(\lambda ) \) satisfies the following Lévy distribution:
$$ {\text{Levy}}(\lambda )\sim u = t^{ - \lambda } $$(3)
where \( 1 < \lambda < 3 \). Essentially, Eq. (3) has an infinite variance with an infinite mean. Thus, the steps follow a random walk process with a power-law step-length distribution with a heavy tail.
However, once the alien eggs are discovered by the host birds, they may be abandoned with a probability \( p_{\text{a}} \). In this situation, new eggs are regenerated in the following way:
$$ X_{i} (t + 1) = X_{i} (t) + r\left( {X_{k} (t^{\prime}) - X_{j} (t^{\prime})} \right) $$(4)
where \( X_{k} (t^{'} ) \) and \( X_{j} (t^{'} ) \) are two randomly selected positions from the whole population. r is a random value ranging from 0 to 1. To have a clear description of the standard CS, we present its pseudo-code in Algorithm 1.
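The pseudo-code of Algorithm 1 is not reproduced in this excerpt. As an illustrative sketch only (assuming minimization, a generic fitness function `f`, and Mantegna's algorithm for sampling the Lévy steps of Eq. (3); these implementation details are assumptions, not the authors' exact listing), the standard CS loop can be written as:

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Sample a Levy-distributed step vector via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, lower, upper, n=25, pa=0.25, alpha0=0.01, iters=200, seed=0):
    """Standard single-objective CS: Levy-flight global search plus abandonment."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    nests = rng.uniform(lower, upper, (n, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        # Global search: Levy flight biased by the current best nest (Eqs. 1-2)
        for i in range(n):
            step = alpha0 * (nests[i] - best) * levy_step(dim, rng=rng)
            cand = np.clip(nests[i] + step, lower, upper)
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        # A fraction pa of nests is abandoned and regenerated (Eq. 4)
        for i in range(n):
            if rng.random() < pa:
                k, j = rng.choice(n, 2, replace=False)
                cand = np.clip(nests[i] + rng.random() * (nests[k] - nests[j]),
                               lower, upper)
                fc = f(cand)
                if fc < fit[i]:
                    nests[i], fit[i] = cand, fc
    return nests[np.argmin(fit)], fit.min()
```

For example, running this sketch on a two-dimensional sphere function drives the best fitness toward zero within a few hundred iterations.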
3 Related work
The cuckoo search (CS) algorithm (Yang and Deb 2010a, b) was first proposed by Yang and Deb in 2009 and was initially designed for single-objective optimization problems. Since then, CS has received much recognition for its excellent performance in handling complex problems. In fact, many practical engineering problems are MOPs, and some of them have more than three objectives. Over the past decade, many efforts have been made to modify CS for MOPs. In this section, a brief review of CS for MOPs is presented.
According to the No-Free-Lunch theorem (Wolpert and Macready 1997), no one algorithm can deal with all kinds of problems effectively. Thus, hybridization of multiple strategies may improve the performance of cuckoo search. Chandrasekaran and Simon (2012) employed fuzzy set theory to create the fuzzy membership search domain and proposed a hybrid cuckoo search algorithm for solving multi-objective unit commitment problem. Rani et al. (2013) combined a modified cuckoo search (MCS) with particle swarm optimization (PSO) and genetic algorithm (GA) in weighted-sum multi-objective optimization. Coelho et al. (2013) presented an improved multi-objective cuckoo search (IMCS), which employs a nearest neighbor density estimation method to obtain the density value for selecting the global best nest. Zhang et al. (2018) proposed a hybrid multi-objective cuckoo search with dynamical local search (HMOCS), in which non-dominated sorting is employed to generate the Pareto front and a dynamical local search is used to enhance the local search.
To solve the multi-objective job shop scheduling problem, Hanoun et al. (2012) developed a Pareto archived multi-objective cuckoo search (PAMOCS) to find the set of non-dominated Pareto-optimal solutions. Wang et al. (2012) combined cuckoo search with the non-dominated sorting procedure of NSGA-II to solve optimal design problems for water distribution systems. Zhou et al. (2016) proposed a multi-objective discrete cuckoo search algorithm with local search (MDCL) for the community detection problem. In MDCL, the nest location updating strategy and abandon operator of the cuckoo are redefined in discrete form, while a local search strategy and a clone operator are proposed to obtain an optimal initial population. To minimize heat transfer and fluid friction irreversibility degrees in plate-fin heat exchangers (PFHEs), Wang and Li (2015) proposed an improved multi-objective cuckoo search algorithm (IMOCS), in which a non-uniform mutation operator is used to create a balance between exploration and exploitation. Moreover, a differential evolution operator is employed to boost cooperation and information exchange among the groups and to enhance the quality of convergence.
The above CS variants have shown good performance in dealing with MOPs. However, many practical MOPs have become increasingly complex, and some of them are MaOPs (Raja et al. 2017; Tozer et al. 2017) with more than three objectives, for which the above-mentioned CS variants are not effective. Moreover, to the best of our knowledge, CS and its modifications have not yet been applied to MaOPs. In this paper, we design a hybrid many-objective CS to tackle MaOPs.
4 Proposed approach
To solve MaOPs, this paper proposes a hybrid many-objective CS (HMaOCS), which combines a modified CS with non-dominated sorting and the strategy of reference points. The standard CS is modified to adapt it to MaOPs. Non-dominated sorting is used to ensure convergence, and the strategy of reference points from NSGA-III is employed to obtain a uniform distribution of solutions.
In single-objective optimization algorithms, any two solutions can be compared directly, and the global best solution can easily be identified. In many-objective optimization algorithms, however, two relationships exist between solutions: dominated and non-dominated. That is to say, there is no single global best solution, and the comparison between two solutions is more complex. Based on this analysis, two problems must be addressed before modifying CS for MaOPs: first, how to modify the single-objective CS for MaOPs, since the Lévy flight relies on a global best solution; and second, how to update the population in each iteration to ensure convergence and diversity. The following two parts address these problems, respectively.
4.1 Modified CS
In Eq. (2), there exists a global best solution. To incorporate Lévy flight into MaOPs, the new updating equation is redefined as follows:
where \( \alpha > 0 \) is the scale of the movement controlling the step size, generally set to 1.0; L is a random number drawn from the Lévy distribution; and r is a random number sampled from the standard Gaussian distribution N(0,1). \( X_{\text{upper}} \) and \( X_{\text{lower}} \) are the upper and lower bounds of the variables, respectively. \( r_{1} \) is a controlling factor, defined as follows:
where \( C_{\text{evaluation}} \) is the total number of evaluations and \( C_{\text{evaluated}} \) is the current number of evaluations. By employing \( (X_{\text{upper}} - X_{\text{lower}} ) \times r_{1} \), individual \( X_{i} (t) \) not only converges as the algorithm evolves, but also avoids being affected by other individuals. However, in practical problems, each dimension of \( (X_{\text{upper}} - X_{\text{lower}} ) \times r_{1} \) is not equally important for improving individual \( X_{i} (t) \). Hence, we write \( X_{i} (t) = (x_{i}^{1} ,x_{i}^{2} , \ldots ,x_{i}^{D - 1} ,x_{i}^{D} ) \) with corresponding objectives \( F(X_{i} (t)) = (f_{1} (X_{i} (t)),f_{2} (X_{i} (t)), \ldots ,f_{M} (X_{i} (t))) \), where D is the number of variables and M is the number of objectives. As shown in Fig. 1, every dimension of \( X_{i} (t) \) may be involved in \( f_{1} (X_{i} (t)) \), while only \( x_{i}^{D - 2} \), \( x_{i}^{D - 1} \) and \( x_{i}^{D} \) may be involved in \( f_{M - 1} (X_{i} (t)) \). We therefore introduce a parameter \( r_{2} = (r^{1} ,r^{2} , \ldots ,r^{i} , \ldots ,r^{D} ) \), where each \( r^{i} \) is randomly set to − 1, 0, or 1. \( r^{i} = 0 \) means that the ith dimension will not be changed, and \( r^{i} = 1 \) or − 1 means that the ith dimension will be updated accordingly.
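Since Eqs. (5) and (6) are not reproduced in this excerpt, the following sketch rests on two assumptions stated in the comments: the global-search step is formed as the product α · L · r · (X_upper − X_lower) · r₁, masked per dimension by r₂, and r₁ decays linearly with the consumed evaluation budget. These are plausible readings of the description above, not the authors' exact formulas.

```python
import math
import numpy as np

def modified_global_search(X, lower, upper, evaluated, total_evals,
                           alpha=1.0, beta=1.5, rng=None):
    """One HMaOCS-style global-search move without a global best (sketch).

    Assumptions (Eqs. 5-6 are not given in the text):
      - the step is alpha * L * r * (upper - lower) * r1, masked by r2;
      - r1 decays linearly with the consumed evaluation budget.
    """
    rng = rng or np.random.default_rng()
    dim = X.shape[0]
    # L: a Levy-distributed random number (Mantegna's algorithm, scalar form)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    L = rng.normal(0.0, sigma) / abs(rng.normal(0.0, 1.0)) ** (1 / beta)
    r = rng.normal(0.0, 1.0)                      # standard Gaussian N(0, 1)
    r1 = (total_evals - evaluated) / total_evals  # assumed linearly decaying factor
    r2 = rng.integers(-1, 2, size=dim)            # per-dimension mask in {-1, 0, 1}
    step = alpha * L * r * (upper - lower) * r1 * r2
    return np.clip(X + step, lower, upper)
```

Note that once the evaluation budget is exhausted (r₁ = 0), the move degenerates to the identity, which matches the intended convergence behavior of r₁.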
The probability pa plays an important role in updating individuals. In the standard CS and most of its modifications, pa is set to 0.25 (Yang and Deb 2010a, b; Zhang et al. 2018). However, for MaOPs, pa = 0.25 may not be the best choice. Thus, to find a suitable pa for HMaOCS, we investigate it systematically in later experiments.
For Eq. (4), we further modify it as follows:
where \( X_{k} (t^{{\prime }} ) \) and \( X_{j} (t^{{\prime }} ) \) are two individuals randomly selected from the current population, and r is a random value ranging from 0 to 1. \( r_{3} = (r^{1} ,r^{2} , \ldots ,r^{i} , \ldots ,r^{D} ) \) is defined as follows:
where \( {\text{rand}} \) is a random value from 0 to 1 and M is the number of objectives. \( r^{i} = 0 \) means that the ith dimension of \( X_{i} (t^{{\prime }} ) \) will not be updated, and \( r^{i} = 1 \) means that it will be changed accordingly. Thus, by introducing the parameter M, Eq. (8) can not only dynamically adjust the probability, but also handle MaOPs with various numbers of objectives.
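Eq. (8) itself is not reproduced in this excerpt, so the sketch below assumes one plausible rule consistent with the description: each dimension's mask entry is set to 1 with a probability of 1/M (M being the number of objectives). This assumed threshold, and the helper name `local_search`, are illustrative only.

```python
import numpy as np

def local_search(X, Xk, Xj, n_obj, rng=None):
    """Eq. (7)-style regeneration sketch with a per-dimension mask r3.

    Assumption: Eq. (8) is not given in the text, so each dimension is
    updated with probability 1/M, where M = n_obj is the objective count.
    """
    rng = rng or np.random.default_rng()
    r = rng.random()                                     # r in [0, 1)
    r3 = (rng.random(X.shape[0]) < 1.0 / n_obj).astype(int)
    return X + r * (Xk - Xj) * r3                        # masked differential move
```

Under this assumption, a larger number of objectives makes each dimension less likely to change, which is one way the probability could "dynamically adjust" with M.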
4.2 Population updating
In many-objective optimization algorithms, there are two important goals which must be considered. One is to make the final solutions as close to the true front as possible, and the other is to get an even distribution of solutions. Convergence and diversity are two main factors for many-objective optimization algorithms. Thus, to ensure good convergence and diversity, we employ non-dominated sorting and the strategy of reference points in NSGA-III (Deb and Jain 2014; Jain and Deb 2014) to update the population.
Assume that \( P_{t} \) and \( Q_{t} \) are the parent and offspring populations at the tth generation, respectively, each with N individuals. The next step is to select the best N individuals from the combined population \( R_{t} = P_{t} \cup Q_{t} \). To implement this task, non-dominated sorting is employed. First, the combined population is sorted into multiple non-dominated levels (\( F_{1} ,F_{2} \) and so on). Individuals are included in the new population \( S_{t} \) level by level until \( |S_{t} | \ge N \) for the first time. If \( |S_{t} | > N \), the last level is included only partially. Let the last included non-dominated level be level l; the number of individuals to accept from level l is \( K = N - |F_{1} \cup F_{2} \cup \cdots \cup F_{l - 1} | \). To select these K individuals from level l, we employ the strategy of reference points used in NSGA-III (Deb and Jain 2014; Jain and Deb 2014).
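The level-by-level filling step above can be sketched as follows (a minimal Python version of fast non-dominated sorting for minimization; the helper names are ours, not the authors'):

```python
import numpy as np

def non_dominated_sort(F):
    """Sort objective vectors F (N x M, minimization) into non-dominated fronts."""
    N = len(F)
    dominated_by = [[] for _ in range(N)]   # indices that solution i dominates
    n_dom = np.zeros(N, dtype=int)          # how many solutions dominate i
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].append(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                n_dom[i] += 1
    fronts, current = [], [i for i in range(N) if n_dom[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

def fill_fronts(F, N):
    """Accept whole fronts while they fit; return them plus the split front F_l."""
    selected, last = [], []
    for front in non_dominated_sort(F):
        if len(selected) + len(front) <= N:
            selected.extend(front)
        else:
            return selected, front   # K = N - len(selected) picked from this front
    return selected, last
```

The K individuals still missing from the split front are then chosen by the reference-point niching described next, rather than by crowding distance.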
To give the reference points [which can be predefined in a structured manner (Das and Dennis 2006)] and the objective values an identical range, the objective values are first normalized, so that the ideal point of the set becomes the zero vector. Each individual in \( S_{t} \) is then associated with a reference point according to its proximity to the reference line obtained by joining the ideal point with that reference point. This determines the number of population members in \( S_{t} \backslash F_{l} \) associated with each supplied reference point. A niching procedure is then conducted to select members from \( F_{l} \) that are not yet included: the reference points with the fewest associations in \( S_{t} \backslash F_{l} \) are given priority in finding an associated point in \( F_{l} \). Individuals in \( F_{l} \) are thus selected one at a time until \( |P_{t + 1} | = N \). Although the strategy of reference points has a computational complexity of \( O(N^{2} \log N) \), where N is the population size, it helps in dealing with MaOPs. The pseudo-code of population updating is presented in Algorithm 2.
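The structured reference points of Das and Dennis (2006) are all points on the unit simplex whose coordinates are multiples of 1/p and sum to 1. A small self-contained generator (written here via a stars-and-bars enumeration; the function name is ours):

```python
from itertools import combinations

import numpy as np

def das_dennis(M, p):
    """Structured reference points on the unit simplex (Das and Dennis 2006).

    Generates every point whose M coordinates are non-negative multiples of
    1/p summing to 1; the count is C(M + p - 1, p).
    """
    points = []
    # Stars-and-bars: choose M - 1 bar positions among p + M - 1 slots
    for bars in combinations(range(p + M - 1), M - 1):
        prev, coords = -1, []
        for b in bars:
            coords.append(b - prev - 1)   # stars between consecutive bars
            prev = b
        coords.append(p + M - 2 - prev)   # stars after the last bar
        points.append(np.array(coords) / p)
    return np.array(points)
```

For example, M = 3 objectives with p = 2 divisions yields C(4, 2) = 6 reference points, including the three unit vectors.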
4.3 Framework of HMaOCS
Based on modified CS and population updating method, we give the pseudo-code of our proposed HMaOCS in Algorithm 3.
From Algorithm 3, it can be seen that there is no guidance information (i.e., no global best individual) in Eq. (5), so no individual in the population is affected by the others during the global search. When the population updating method is conducted, the new population is generated from the combined parent and offspring populations to ensure better convergence and diversity. A fraction (pa) of the population is updated according to Eq. (7), which in effect performs a local search around the current individuals to exploit promising regions. The newly generated population is then combined with the former population. The population is thus updated twice per generation: once after the Lévy flight and once after Eq. (7). These two updates prevent the loss of individuals with better convergence while keeping an even distribution of the population.
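Since the listing of Algorithm 3 is not reproduced in this excerpt, the overall flow described above can be summarized as follows (structure inferred from the description, not the authors' exact pseudo-code):

```text
Initialize population P_0 of N nests randomly within [X_lower, X_upper]; t = 0
while the stopping criterion (MaxIter) is not met do
    Q_t   = global search on P_t via the modified Levy flight (Eq. 5)   // no global best
    P_t   = PopulationUpdating(P_t ∪ Q_t)                               // Algorithm 2: first update
    Q'_t  = for each nest, with probability pa, regenerate via Eq. (7)  // local search
    P_t+1 = PopulationUpdating(P_t ∪ Q'_t)                              // second update
    t = t + 1
end while
return the final non-dominated set of P_t
```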
5 Experimental results and analysis
In order to verify the performance of our approach HMaOCS, two well-known benchmark sets are utilized, including DTLZ (Deb et al. 2002a) and WFG (Huband et al. 2006) with 2, 3, 4, 6, 8 and 10 objectives. The experiments consist of two parts:
1.
Investigation of the effects of pa on the performance of HMaOCS;
2.
Comparison of HMaOCS with five other famous many-objective optimization algorithms.
In these experiments, inverted generational distance (IGD) (Zou et al. 2008) is used as a performance indicator. The involved algorithms are listed as follows.
- MOEA/D (Zhang and Li 2007);
- NSGA-III (Deb and Jain 2014);
- KnEA (Zhang et al. 2015);
- GrEA (Yang et al. 2013);
- HypE (Bader and Zitzler 2011);
- the proposed HMaOCS.
5.1 Parameter settings
To ensure a fair comparison, the parameter settings of all compared algorithms follow their original papers. Readers can also refer to Zhang et al. (2015), which lists most of the parameters used in this paper.
1.
Population size (N) and parameters (p1, p2): Table 1 presents the parameters p1, p2 and N under different numbers of objectives. The parameters p1 and p2 control the number of reference points in NSGA-III and MOEA/D (Deb and Jain 2014; Zhang and Li 2007). Note that the two-layered reference point strategy is used, as suggested by the developers in the original papers, to generate uniformly distributed weight vectors (Deb and Jain 2014).
2.
Test instances: To validate the performance of HMaOCS, two benchmark sets DTLZ and WFG are employed. Table 2 shows some parameter settings on the DTLZ benchmark set with seven test instances DTLZ1 to DTLZ7, where M is the number of objectives, D is the number of variables, and MaxIter is the maximum number of iterations. Table 3 presents some parameter settings on the WFG benchmark set with nine test instances WFG1 to WFG9.
3.
Stopping condition and number of runs: For each test instance, each algorithm is run 20 independent times on the same machine with an Intel Core i5-2400 3.10 GHz CPU, 6.00 GB memory and the Windows 7 operating system with Matlab 7.9. The maximum number of iterations (MaxIter) is used as the stopping criterion. For the DTLZ test suite, MaxIter is listed in Table 2. For WFG1 and WFG2, MaxIter is set to 1000 and 700, respectively; for WFG3 to WFG9, MaxIter is set to 250.
4.
Performance assessment: Inverted generational distance (IGD) (Li and Zheng 2009; Zou et al. 2008) is a widely used indicator for evaluating many-objective optimization algorithms. IGD evaluates not only the convergence (closeness to the true Pareto-optimal front), but also the distribution of the obtained final solutions. Let P* be a set of uniformly distributed points in the objective space along the true Pareto front (PF), and let P be the set of obtained solutions. Then, IGD is defined by
$$ IGD(P,P^{*} ) = \frac{{\sum\nolimits_{{v \in P^{*} }} {dist(v,P)} }}{{|P^{*} |}}, $$(9)
where dist(v, P) is the minimum Euclidean distance between v and the points in P. Note that P* is a set of sampling points whose size is predefined. For all test problems, the number of reference points closest to 500 among the possible values is used for the IGD calculation.
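Eq. (9) translates directly into a few lines of vectorized code (a straightforward implementation of the definition above; the function name is ours):

```python
import numpy as np

def igd(P, P_star):
    """Inverted generational distance of solution set P against reference set P*.

    Averages, over every reference point v in P*, the minimum Euclidean
    distance from v to the obtained set P (Eq. 9); smaller is better.
    """
    P, P_star = np.asarray(P, float), np.asarray(P_star, float)
    # Pairwise distances: dists[i, j] = ||P_star[i] - P[j]||
    dists = np.linalg.norm(P_star[:, None, :] - P[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

If P coincides with P*, the IGD is exactly zero; a set that misses part of the front is penalized because the missed reference points contribute large minimum distances.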
5.
Compared algorithms: Table 4 shows the parameter settings for NSGA-III and MOEA/D (Deb and Jain 2014; Zhang and Li 2007), where D is the number of decision variables. The parameter div in Table 5 is the number of divisions in each dimension in GrEA. To find the best setting of div, we tested many values following the suggestions of Yang et al. (2013), and the values that gave the best performance are listed in Table 5. Table 6 shows the settings of T in KnEA, which controls the ratio of knee points to non-dominated solutions.
5.2 Investigation of the effect of pa on the performance of HMaOCS
As discussed in Sect. 4.1, the parameter pa plays an important role in selecting the fraction of the current population to regenerate. In single-objective optimization, many references (Yang and Deb 2010a, b; Zhang et al. 2018) have suggested proper settings of pa, but few such studies exist for many-objective optimization. Hence, to investigate the effect of pa, HMaOCS is tested on the DTLZ test suite with different pa values. In the experiments, pa is set to 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9, respectively. pa = 0 means that all individuals are updated according to Eq. (7), whereas pa = 1 would mean that no individual is updated; therefore, pa = 1 is not investigated.
Table 7 presents the mean IGD results of HMaOCS with different pa values on DTLZ1–DTLZ7, where the best results are shown in boldface. From the results, we can conclude that pa = 0.6, 0.7 and 0.8 show similar performance and obtain better IGD values than the other settings.
For a clearer comparison, we use the Friedman test to calculate the mean ranks of HMaOCS under different pa values. The Friedman test is commonly used in the intelligent optimization field to compare algorithms: the resulting rank values indicate the comparison results, and the smallest rank corresponds to the best algorithm. A basic explanation of the Friedman test can be found in Sun et al. (2014). The procedure is as follows. First, the average IGD results of HMaOCS with different pa settings on test problems with various numbers of objectives are computed (as presented in Table 7). Then, the rank values of the different pa settings are computed using the Friedman test function in IBM SPSS Statistics, a widely used statistical software package. Table 8 tabulates the resulting rank values, with the best rank shown in boldface. As shown in Table 8, pa = 0.7 obtains the smallest rank, which demonstrates that it is the relatively best setting for this test suite.
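The paper uses IBM SPSS Statistics for this step; the same rank computation and Friedman test can be reproduced in Python with SciPy. The IGD table below is purely illustrative (invented numbers, not the values from Table 7), with rows as test instances and columns as pa settings:

```python
import numpy as np
from scipy import stats

# Hypothetical IGD results: rows = test instances, columns = pa settings.
# (Illustrative numbers only; not the values from Table 7.)
igd_results = np.array([
    [0.30, 0.25, 0.21, 0.20, 0.22],
    [0.41, 0.36, 0.30, 0.29, 0.31],
    [0.52, 0.45, 0.40, 0.38, 0.42],
    [0.28, 0.24, 0.20, 0.19, 0.21],
])

# Mean rank per setting: rank within each row (smaller IGD = better = rank 1)
ranks = np.apply_along_axis(stats.rankdata, 1, igd_results)
mean_ranks = ranks.mean(axis=0)

# Friedman test across the settings (each column passed as a separate sample)
stat, p_value = stats.friedmanchisquare(*igd_results.T)
```

The setting with the smallest mean rank is the relatively best one; the accompanying p-value indicates whether the rank differences across settings are statistically significant.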
5.3 Comparison of HMaOCS with other algorithms on the DTLZ test suite
As mentioned before, the second experiment focuses on comparing HMaOCS with five other well-known many-objective optimization algorithms. Table 9 presents the mean IGD results of MOEA/D, NSGA-III, KnEA, GrEA, HypE and HMaOCS on the DTLZ test suite, where the best results are shown in boldface. The comparison results between HMaOCS and each algorithm are summarized as "w/l/t", which means that HMaOCS wins on w functions, loses on l functions and ties on t functions.
From the last line of Table 9, it is obvious that HMaOCS outperforms KnEA, GrEA and HypE. However, HMaOCS is slightly worse than MOEA/D, which is famous for its high computational efficiency; the same holds between HMaOCS and NSGA-III. In terms of individual instances, HMaOCS is superior to MOEA/D on DTLZ2 and DTLZ4 with 2 and 4 objectives. A possible reason is that DTLZ4 is a non-uniform many-objective optimization problem, and the evenly distributed weights used in MOEA/D cannot deal with it effectively. Both NSGA-III and HMaOCS employ non-dominated sorting and the strategy of reference points; they differ in their selection and crossover operations. The comparison results illustrate that the selection and crossover operations play a similar role to the modified Lévy flight and Eq. (7). On DTLZ2, DTLZ4 and DTLZ6 with 2, 4 and 6 objectives, HMaOCS performs better than KnEA. Compared with GrEA, HMaOCS works better on DTLZ1, DTLZ3 and DTLZ6 with 4, 6, 8 and 10 objectives. HMaOCS clearly outperforms HypE on DTLZ2, DTLZ4 and DTLZ7, and on DTLZ1 with 4, 6, 8 and 10 objectives it also shows better performance than HypE. This analysis demonstrates that HMaOCS is promising for most many-objective optimization problems.
For a clearer comparison, Fig. 2 presents the solutions obtained by the six algorithms on DTLZ4 with 3 objectives. As exhibited in Fig. 2, HMaOCS, MOEA/D and NSGA-III show similar performance regarding diversity, whereas GrEA, KnEA and HypE fail to keep an even distribution of the obtained solutions. The reason may be that the knee points in KnEA are used to enhance convergence, while HypE performs poorly because the hypervolume indicator (Deb and Kalyanmoy 2001) used in HypE is biased toward typical knee points (Jain and Deb 2014). It is easy to understand that HMaOCS performs similarly to NSGA-III, because both employ the same strategy of reference points.
5.4 Comparison of HMaOCS with other algorithms on the WFG test suite
Table 10 presents the mean IGD results of MOEA/D, NSGA-III, KnEA, GrEA, HypE and HMaOCS on the WFG test suite, where the best results are shown in boldface. From the last line of Table 10, HMaOCS outperforms MOEA/D on 39 functions, while MOEA/D is better than HMaOCS on 6 functions. Similar conclusions can easily be drawn from the comparison results of HMaOCS with KnEA and HypE.
Although HMaOCS is based on careful modifications of CS and further incorporates the strategy of reference points used in NSGA-III, it does not show an obvious advantage over NSGA-III, winning on only 23 functions. On WFG4 to WFG8, HMaOCS clearly outperforms KnEA, and it also obtains better results on WFG2 with 2, 4, 6 and 8 objectives. Compared with GrEA, HMaOCS performs better on all WFG test instances except WFG1, WFG8 and WFG9. HypE is better than HMaOCS on WFG3 and on other instances with 2 objectives. Overall, this analysis demonstrates that HMaOCS shows excellent performance.
Figure 3 presents the solutions obtained by the six algorithms on WFG6 with 3 objectives. In terms of diversity, NSGA-III shows the best performance. HMaOCS has no obvious advantage over MOEA/D, and both are slightly worse than NSGA-III. KnEA, GrEA and HypE show the worst distributions of obtained solutions.
Table 11 summarizes the comparison results between HMaOCS and the five other many-objective optimization algorithms on the DTLZ and WFG benchmark sets, where "w/l/t" means that HMaOCS wins on w functions, loses on l functions and ties on t functions. From the results, though HMaOCS is slightly worse than MOEA/D on the DTLZ benchmark set, it performs much better than MOEA/D on the WFG benchmark set, so the total results show that HMaOCS outperforms MOEA/D. NSGA-III and HMaOCS obtain similar performance: NSGA-III is slightly better on the DTLZ benchmark set, while HMaOCS is better on the WFG benchmark set, and the total results show NSGA-III slightly ahead. Compared to KnEA, GrEA and HypE, HMaOCS achieves better results on both benchmark sets. In summary, although HMaOCS is slightly worse than NSGA-III, it outperforms MOEA/D, KnEA, GrEA and HypE.
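The "w/l/t" counts in Table 11 follow from pairwise comparison of mean IGD values per test instance. A trivial sketch with hypothetical values (since IGD is minimized, the smaller value counts as a win; a tolerance absorbs rounding of tabulated means):

```python
def win_lose_tie(igd_a, igd_b, tol=1e-12):
    """Count instances where algorithm A's mean IGD beats, loses to,
    or ties algorithm B's (IGD is minimized, so smaller wins)."""
    w = l = t = 0
    for a, b in zip(igd_a, igd_b):
        if a < b - tol:
            w += 1
        elif b < a - tol:
            l += 1
        else:
            t += 1
    return w, l, t
```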
6 Conclusion and future work
In this paper, we propose a hybrid many-objective cuckoo search (HMaOCS) for MaOPs. Since the standard CS is only suitable for single-objective optimization problems, it is modified to deal with MaOPs. Moreover, non-dominated sorting and the strategy of reference points from NSGA-III are employed to ensure convergence and diversity. To evaluate the performance of HMaOCS, two well-known benchmark sets, DTLZ and WFG, are utilized in the experiments.
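The structured reference points borrowed from NSGA-III are commonly generated on the unit simplex with the systematic approach of Das and Dennis (cited above). A minimal sketch via a stars-and-bars enumeration; `m` is the number of objectives and `p` the number of divisions per objective:

```python
from itertools import combinations

def das_dennis(m, p):
    """All points on the m-objective unit simplex whose coordinates are
    non-negative multiples of 1/p (Das and Dennis systematic approach).

    Each choice of m-1 'dividers' among p + m - 1 slots yields one
    composition of p into m non-negative integer parts."""
    points = []
    for dividers in combinations(range(p + m - 1), m - 1):
        prev = -1
        coords = []
        for d in dividers:
            coords.append(d - prev - 1)
            prev = d
        coords.append(p + m - 2 - prev)   # remaining slots after last divider
        points.append([c / p for c in coords])
    return points
```

This produces C(p + m - 1, m - 1) points; e.g. m = 3 objectives with p = 2 divisions gives 6 points, each summing to 1.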
The parameter pa can affect the performance of HMaOCS. Simulation results show that a fixed pa is not suitable for all test instances; pa = 0.7 is the relatively best choice for the test suites considered. Compared to other well-known many-objective optimization algorithms, HMaOCS achieves promising performance: MOEA/D performs better than HMaOCS on the DTLZ benchmark set, while HMaOCS is much better than MOEA/D on the WFG benchmark set; over the two benchmark sets, NSGA-III and HMaOCS obtain similar performance; and HMaOCS outperforms KnEA, GrEA and HypE on both the DTLZ and WFG benchmark sets.
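In standard CS (Yang and Deb 2010a), pa is the probability that a host discovers an alien egg, upon which the nest is abandoned and rebuilt at random. The paper's exact abandonment rule is not reproduced here, so the following is only a sketch of the standard step with pa = 0.7; the bounds and all names are ours.

```python
import random

def abandon_nests(population, lower, upper, pa=0.7, rng=random):
    """Replace each nest with probability pa by a fresh random solution
    drawn uniformly within the per-dimension bounds [lower, upper]."""
    new_pop = []
    for nest in population:
        if rng.random() < pa:
            new_pop.append([rng.uniform(lo, hi) for lo, hi in zip(lower, upper)])
        else:
            new_pop.append(list(nest))
    return new_pop
```

A larger pa replaces more nests per generation, trading exploitation for exploration, which is consistent with the sensitivity of HMaOCS to this parameter reported above.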
References
Abdel-Baset M, Zhou Y, Ismail M (2018) An improved cuckoo search algorithm for integer programming problems. Int J Comput Sci Math 9(1):66–81
Adra S, Fleming P (2011) Diversity management in evolutionary many-objective optimization. IEEE Trans Evol Comput 15(2):183–195
Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol Comput 19(1):45–76
Barthelemy P, Bertolotti J, Wiersma D (2008) A Lévy flight for light. Nature 453(7194):495
Cai X, Gao X, Xue Y (2016) Improved bat algorithm with optimal forage strategy and random disturbance strategy. Int J Bio-inspired Comput 8(4):205–214
Cai X, Wang H, Cui Z, Cai J, Xue Y, Wang L (2018) Bat algorithm with triangle-flipping strategy for numerical optimization. Int J Mach Learn Cybernet 9(2):199–215
Chandrasekaran K, Simon S (2012) Multi-objective scheduling problem: hybrid approach using fuzzy assisted cuckoo search algorithm. Swarm Evol Comput 5:1–16
Coelho L, Guerra F, Batistela N (2013) Multiobjective cuckoo search algorithm based on duffing’s oscillator applied to jiles-atherton vector hysteresis parameters estimation. IEEE Trans Magn 49(5):1745–1748
Cortés P, Muñuzuri J, Onieva L, Guadix J (2018) A discrete particle swarm optimisation algorithm to operate distributed energy generation networks efficiently. Int J Bio-Inspired Comput 12(4):226–235
Cui Z, Cao Y, Cai X, Cai J, Chen J (2017a) Optimal LEACH protocol with modified bat algorithm for big data sensing systems in internet of things. J Parallel Distrib Comput 10:1–12. https://doi.org/10.1016/j.jpdc.2017.12.014
Cui Z, Sun B, Wang G, Xue Y, Chen J (2017b) A novel oriented cuckoo search algorithm to improve DV-Hop performance for cyber-physical systems. J Parallel Distrib Comput 103:42–52
Cui Z, Xue F, Cai X, Cao Y, Wang G, Chen J (2018) Detection of malicious code variants based on deep learning. IEEE Trans Industr Inf 14(7):3187–3196
Cui Z, Du L, Wang P, Cai X, Zhang W (2019a) Malicious code detection based on CNNs and multi-objective algorithm. J Parallel Distrib Comput 129:50–58
Cui Z, Li F, Zhang W (2019b) Bat algorithm with principal component analysis. Int J Mach Learn Cybernet 10(3):603–622
Cui Z, Zhang J, Wang Y, Cao Y, Cai X, Zhang W, Chen J (2019c) A pigeon-inspired optimization algorithm for many-objective optimization problems. Sci China Inf Sci 62(7):070212. https://doi.org/10.1007/s11432-018-9729-5
Das I, Dennis J (2006) Normal-Boundary Intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J Optim 8(3):631–657
Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601
Deb K, Kalyanmoy D (2001) Multi-objective optimization using evolutionary algorithms. Wiley, New York
Deb K, Thiele L, Laumanns M, Zitzler E (2002a) Scalable multi-objective optimization test problems. In: Proceedings of the 2002 congress on IEEE evolutionary computation. CEC ‘02, pp 825–830
Deb K, Pratap A, Agarwal S, Meyarivan T (2002b) A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
Deb K, Mohan M, Mishra S (2005) Evaluating the ε-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions. Evol Comput 13(4):501–525
Fan J, Li Y, Tang L, Wu G (2018) RoughPSO: rough set-based particle swarm optimization. Int J Bio-Inspired Comput 12(4):245–253
Hanoun S, Nahavandi S, Creighton D, Kull H (2012) Solving a multiobjective job shop scheduling problem using Pareto archived cuckoo search. Emerg Technol Factory Autom IEEE 43:1–8
Huband S, Hingston P, Barone L, While L (2006) A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Comput 10(5):477–506
Hughes E (2003) Multiple single objective Pareto sampling. In: The 2003 congress on IEEE evolutionary computation (CEC), vol 4, pp 2678–2684
Jain H, Deb K (2014) An Evolutionary Many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: handling constraints and extending to an adaptive approach. IEEE Trans Evol Comput 18(4):602–622
Laumanns M, Thiele L, Deb K, Zitzler E (2002) Combining convergence and diversity in evolutionary multi- objective optimization. MIT Press, Cambridge, pp 263–282
Li M, Zheng J (2009) Spread assessment for evolutionary multi-objective optimization. In: International conference on evolutionary multi-criterion optimization, Springer, pp 216–230
Li M, Zheng J, Li K, Yuan Q, Shen R (2010) Enhancing diversity for average ranking method in evolutionary many-objective optimization. In: Parallel problem solving from nature, PPSN XI. Springer, Berlin, pp. 647–656
Li M, Yang S, Liu X (2014) Shift-based density estimation for Pareto-based algorithms in many-objective optimization. IEEE Trans Evol Comput 18(3):348–365
Niu Y, Tian Z, Zhang M, Cai X, Li J (2018) Adaptive two-SVM multi-objective cuckoo search algorithm for software defect prediction. Int J Comput Sci Math 9(6):547–554
Pandey H, Chaudhary A, Mehrotra D (2018) Bit mask-oriented genetic algorithm for grammatical inference and premature convergence. Int J Bio-Inspired Comput 12(1):54–69
Pooja P, Chaturvedi P, Kumar P, Tomar A (2018) A novel differential evolution approach for constraint optimization. Int J Bio-Inspired Comput 12(4):254–265
Raja B, Jhala R, Patel V (2017) Many-objective optimization of cross-flow plate-fin heat exchanger. Int J Therm Sci 118:320–339
Rani K, Malek M, Neoh S (2013) Hybrid multiobjective optimization using modified cuckoo search algorithm in linear array synthesis. In: IEEE antennas and propagation conference, pp 1–4
Reynolds AM, Frye MA (2007) Free-flight odor tracking in Drosophila is consistent with an optimal intermittent scale-free search. PLoS ONE 2(4):e354
Shan X, Ye B, Zhang L (2018) Analysis of flow field of hydrodynamic suspension polishing disk based on multi-fractal method. Int J Comput Sci Math 9(1):13–20
Sun B, Cui Z, Dai C (2014) DV-hop localization algorithm with cuckoo search. Sensor Lett 12(2):444–447
Tozer B, Mazzuchi T, Sarkani S (2017) Many-objective stochastic path finding using reinforcement learning. Expert Syst Appl. https://doi.org/10.1016/j.eswa.2016.10.045
Wang Z, Li Y (2015) Irreversibility analysis for optimization design of plate fin heat exchangers using a multi-objective cuckoo search algorithm. Energy Convers Manag 101:126–135
Wang Q, Liu S, Wang H (2012) Multi-objective cuckoo search for the optimal design of water distribution systems. In: International conference on civil engineering and urban planning. https://doi.org/10.1061/9780784412435.072
Wang H, Wang W, Zhou X, Sun H, Zhao J, Yu X, Cui Z (2017) Firefly algorithm with neighborhood attraction. Inf Sci 382(383):374–387
Wang H, Wang W, Cui Z, Zhou X, Zhao J, Li Y (2018) A new dynamic firefly algorithm for demand estimation of water resources. Inf Sci 438:95–106
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
Yang X, Deb S (2010a) Cuckoo search via Lévy flights. In: Nature and biologically inspired computing 2009, NaBIC 2009, world congress on IEEE, pp 210–214
Yang X, Deb S (2010b) Engineering optimisation by cuckoo search. Int J Math Model Numer Optim 1(4):330–343
Yang S, Li M, Liu X, Zheng J (2013) A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 17(5):721–736
Yigit T, Unsal O, Deperlioglu O (2018) Using the metaheuristic methods for real-time optimisation of dynamic school bus routing problem and an application. Int J Bio-Inspired Comput 11(2):123–133
Zhang QF, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731
Zhang X, Tian Y, Jin Y (2015) A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 19(6):761–776
Zhang M, Wang H, Cui Z, Chen J (2018) Hybrid multi-objective cuckoo search with dynamical local search. Memet Comput 10(2):199–208
Zhao B, Xue Y, Xu B, Ma T, Liu J (2018) Multi-objective classification based on NSGA-II. Int J Comput Sci Math 9(6):539–546
Zhou X, Liu Y, Li B (2016) A multi-objective discrete cuckoo search algorithm with local search for community detection in complex networks. Mod Phys Lett B 30(07):1650080
Zitzler E, Kunzli S (2004) Indicator-based selection in multi objective search. In: Lecture notes in computing science, pp 832–842
Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm for multiobjective optimization. In: Giannakoglou K, et al. (eds) EUROGEN 2001. International Center for Numerical Methods in engineering (CIMNE), pp 95–100
Zou X, Chen Y, Liu M, Kang L (2008) A new evolutionary algorithm for solving many-objective optimization problems. IEEE Trans Syst Man Cybern B Cybern 38(5):1402–1412
Funding
This study is funded by the National Natural Science Foundation of China under Grant Nos. 61806138, U1636220, 61663028, 71771176, 51775385, 61703279 and 71371142, Natural Science Foundation of Shanxi Province under Grant No. 201801D121127, PhD Research Startup Foundation of Taiyuan University of Science and Technology under Grant No. 20182002, the Distinguished Young Talents Plan of Jiang-xi Province under Grant No. 20171BCB23075, the Natural Science Foundation of Jiang-xi Province under Grant No. 20171BAB202035.
Ethics declarations
Conflict of interest
Author Zhihua Cui declares that he has no conflict of interest. Author Maoqing Zhang declares that he has no conflict of interest. Author Hui Wang declares that he has no conflict of interest. Author Xingjuan Cai declares that she has no conflict of interest. Author Wensheng Zhang declares that he has no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Communicated by A. Di Nola.
Cui, Z., Zhang, M., Wang, H. et al. A hybrid many-objective cuckoo search algorithm. Soft Comput 23, 10681–10697 (2019). https://doi.org/10.1007/s00500-019-04004-4