Abstract
Many evolutionary algorithms have been proposed for multi-/many-objective optimization problems; however, balancing convergence and diversity remains a challenge. In this paper, we propose a modified particle swarm optimization based on the decomposition framework with a different ideal point on each reference vector, called MPSO/DD, for many-objective optimization problems. In the MPSO/DD algorithm, the decomposition strategy is used to maintain the diversity of the population, and the ideal point on each reference vector drives the population to converge faster to the optimal front. The position of each individual is updated by learning from the demonstrators in its neighborhood that have a smaller distance to the ideal point along the reference vector. Eight state-of-the-art evolutionary multi-/many-objective optimization algorithms are compared with MPSO/DD on many-objective optimization problems. The experimental results on seven DTLZ test problems with 3, 5, 8, 10, 15 and 20 objectives, respectively, show the efficiency of the proposed method on problems with a high-dimensional objective space.
Introduction
Multi-objective optimization problems (MOPs) arise widely in real-world applications, for example, industrial scheduling [21], software engineering [19], and control system design [10]. The mathematical model of a multi-objective optimization problem is given as follows:

\[ \min _{\mathbf {x}} F(\mathbf {x}) = (f_1(\mathbf {x}), f_2(\mathbf {x}), \ldots , f_m(\mathbf {x}))^{T}, \quad \mathbf {x} \in \mathfrak {R}^D \qquad (1) \]
where \(\mathbf {x} = (x_1, x_2, \ldots , x_D) \in \mathfrak {R}^D\) is a solution in the D-dimensional decision space, \(F: \mathfrak {R}^D \rightarrow \mathfrak {R}^m\) consists of m objective functions \(f_i(\mathbf {x}), i=1,2,\ldots ,m\), and \(\mathfrak {R}^m\) denotes the m-dimensional objective space. In general, due to the conflicting nature of the objectives, no single solution can optimize all objective functions simultaneously; instead, a set of trade-off solutions, called Pareto-optimal solutions or the non-dominated solution set [7], is sought. The set of all Pareto-optimal solutions is called the Pareto set (PS), and its mapping to the objective space is the Pareto front (PF).
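As an illustration of these definitions, the dominance check and non-dominated filtering can be sketched in a few lines of Python (a minimal sketch assuming minimization; the function names are ours, not the paper's):

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization assumed)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a <= b) and np.any(a < b))

def non_dominated(F):
    """Return the non-dominated subset of a set of objective vectors."""
    F = np.asarray(F, float)
    keep = [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]
    return F[keep]
```

For example, among the objective vectors (1, 2), (2, 1), (3, 3) and (1.5, 1.5), only (3, 3) is dominated, so the other three form the non-dominated set.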
Different evolutionary multi-objective optimization (EMO) methods have been proposed for solving multi-objective optimization problems [1, 2, 9, 31]. In recent years, optimization methods for problems with more than three objectives, called many-objective optimization problems (MaOPs), have attracted more and more attention, because the performance of canonical multi-objective algorithms degrades quickly as the number of objectives increases. Generally, the approaches proposed for solving MaOPs can be roughly classified into three categories.
The first category comprises multi-/many-objective optimization algorithms based on the dominance relationship. The most representative one for multi-objective problems is NSGA-II [9], proposed by Deb et al. in 2002. However, the performance of NSGA-II deteriorates as the number of objectives increases because of the loss of selection pressure. Therefore, researchers have focused on more efficient strategies for dominance-based evolutionary algorithms on many-objective optimization problems, such as \(\varepsilon \)-dominance [12], \(\theta \)-dominance [30], and fuzzy Pareto dominance [23]. Yang et al. [27] proposed a grid-based many-objective evolutionary algorithm (GrEA), in which grid dominance and grid difference were used to improve the selection pressure. Zhang et al. [32] proposed a knee-point-driven many-objective evolutionary algorithm (KnEA), in which the distance between a hyperplane and a knee point was used to select better non-dominated solutions, which greatly improves the selection pressure.
The second category comprises multi-/many-objective optimization algorithms based on the decomposition strategy, which can further be divided into two types: in one, the multi-/many-objective optimization problem is transformed into a set of single-objective optimization problems [14, 15, 25, 29, 31]; in the other, the complex multi-objective problem is transformed into a set of simple multi-objective optimization problems [8, 18]. In [26], Xiang et al. proposed a vector angle-based many-objective evolutionary algorithm (VaEA), which uses the maximum-vector-angle-first principle and the worse-elimination principle to maintain the diversity and convergence of the population. Cheng et al. [4] proposed a many-objective evolutionary algorithm guided by a set of reference vectors (RVEA), and Jiang and Yang [11] proposed a many-objective evolutionary algorithm based on reference directions (SPEA/R).
The indicator-based evolutionary algorithms fall into the third category, in which a performance indicator is used instead of fitness to select individuals. Zitzler and Künzli proposed the indicator-based evolutionary algorithm (IBEA) [33], in which a binary performance measure was used in the selection process. Bader and Zitzler [1] proposed a hypervolume estimation algorithm for many-objective optimization, called HypE, in which Monte Carlo simulation was utilized to approximate the exact hypervolume values. Tian et al. [22] proposed an indicator-based multi-objective evolutionary algorithm with reference point adaptation (AR-MOEA), which adjusts the positions of the reference points based on their indicator contributions to improve the performance on problems with irregular Pareto fronts.
In recent years, other algorithms have combined the above three strategies. Li et al. [13] proposed the MOEA/DD algorithm, in which both decomposition and dominance strategies were utilized. Based on a performance indicator and the dominance relationship, Wang et al. [24] proposed the Two_Arch2 algorithm. Deb and Jain [8] extended the well-known NSGA-II and proposed the NSGA-III algorithm for many-objective optimization problems, in which a set of reference points is utilized, together with a non-dominated sorting mechanism, to maintain the diversity of the population during the search.
Literature reviews show that only a small number of algorithms within the PSO framework have been proposed for solving many-objective optimization problems. We attribute this to the quick convergence of the PSO algorithm, which may not provide sufficient diversity for finding the optimal Pareto front. In this paper, a modified particle swarm optimization with a decomposition strategy and different ideal points, called MPSO/DD, is proposed, in which the decomposition strategy is adopted to ensure the uniformity of the final outputs, and multiple ideal points are utilized to drive the population to converge quickly to the optimal front. The learning strategy proposed by Cheng and Jin [3] is adopted to update the position of each individual, in which the demonstrators are those with a smaller distance to the ideal point along the reference vector.
The paper is organized as follows: Section 2 describes the proposed method in detail. Experimental results are given in Section 3 with some discussions. Finally, Section 4 concludes the paper and outlines future work.
The proposed MPSO/DD
Overall framework
Algorithm 1 gives the pseudocode of the proposed MPSO/DD algorithm. A set of reference vectors \({\lambda }_i=(\lambda _{i1},\lambda _{i2},\ldots ,\lambda _{i,m}),i=1,2,\ldots ,N\) is generated in the objective space first. Then a population, in which each individual has its own position \(\mathbf {x}_i = (x_{i1}, x_{i2}, \ldots , x_{iD}), i=1, 2, \ldots , N\) and velocity \(\mathbf {v}_i = (v_{i1}, v_{i2}, \ldots , v_{iD}), i=1, 2, \ldots , N\), is generated within the upper and lower bounds and evaluated on the objective functions. All non-dominated solutions in the population are saved to the archive Arc. While the stopping criterion is not met, the following steps are repeated. First, the ideal point of each reference vector is determined using all non-dominated solutions. Next, the Tchebycheff values of an individual on all reference vectors are sorted in ascending order, the first reference vector after sorting that has not been associated with any individual is found, and this reference vector is assigned to the current individual; therefore, each individual is associated with one and only one reference vector. After that, the neighbors of each individual are used to update its position, and correspondingly a new offspring population is generated. Then, a new parent population is selected from the parent and offspring populations according to the environmental selection strategy proposed in [20]. Finally, the non-dominated solutions stored in the external archive are updated using the current population obtained by environmental selection, and are output when the termination condition is satisfied, as can be seen in Step 13 of Algorithm 1.
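The Tchebycheff-based one-to-one association step described above can be sketched as follows (an illustrative Python sketch; the greedy processing order and the function name are our assumptions, not taken verbatim from Algorithm 1):

```python
import numpy as np

def associate(F, Lambda, z):
    """Assign each individual to exactly one reference vector: sort the
    Tchebycheff values of the individual over all vectors in ascending
    order and take the first vector not yet associated with anyone."""
    F, Lambda, z = np.asarray(F, float), np.asarray(Lambda, float), np.asarray(z, float)
    assigned, taken = [], set()
    for f in F:
        g = np.max(Lambda * np.abs(f - z), axis=1)  # Tchebycheff value per vector
        for k in np.argsort(g):                     # ascending order
            if int(k) not in taken:
                assigned.append(int(k))
                taken.add(int(k))
                break
    return assigned
```

With two individuals (0.1, 0.9) and (0.9, 0.1), the axis vectors (1, 0) and (0, 1), and ideal point (0, 0), the first individual is associated with the first vector and the second with the second, so every vector ends up with exactly one individual.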
In the following, we give a detailed description of the main parts of Algorithm 1:
Ideal point generation
Different from previously proposed decomposition-based methods, where only one ideal point is used during the whole evolution, in our method each reference vector has its own ideal point, which is determined by the objective values of the individuals in the non-dominated archive, to speed up the convergence along the reference vector. Figure 1 gives a simple example of our strategy for generating the ideal point of each reference vector. In Fig. 1, given an arbitrary reference vector \({\lambda }_i\), the circles in red represent the non-dominated individuals in Arc, and the circle in yellow is the ideal point, which has the minimum distance to the origin along the reference vector \({\lambda }_i\) among the five non-dominated individuals. Equation (2) gives the distance of each individual in Arc along the reference vector to the origin:

\[ d_j = \frac{\mathbf {F}_{norm}(\mathbf {x}_j)\, {\lambda }_i^{T}}{\Vert {\lambda }_i \Vert }, \quad j = 1, 2, \ldots , K \qquad (2) \]

where

\[ \mathbf {F}_{norm,k}(\mathbf {x}_j) = \frac{f_k(\mathbf {x}_j) - f_{\min ,k}}{f_{\max ,k} - f_{\min ,k}}, \quad k = 1, 2, \ldots , m \qquad (3) \]

In Eqs. (2) and (3), \(\mathbf {F}_{norm}=(\mathbf {F}_{norm, 1}, \mathbf {F}_{norm, 2}, \ldots , \mathbf {F}_{norm,m})\) is the objective vector after normalization, and \({\lambda }_i\) refers to the current reference vector. \(f_{\max ,k}\) and \(f_{\min ,k}\) are the maximum and minimum objective values on the kth objective in the non-dominated solution set Arc, respectively. K is the size of the current non-dominated archive Arc.
Algorithm 2 gives the pseudocode for determining the ideal points. In Algorithm 2, |Arc| and \(|{\lambda }|\) represent the number of non-dominated solutions in the archive Arc and the number of reference vectors, respectively. For each non-dominated solution in Arc, the distance between its projection on the reference vector \({\lambda }_i\) and the origin is calculated, and the point with the minimal distance to the origin along the reference vector becomes the ideal point of this reference vector.
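Algorithm 2 can be sketched as follows (an illustrative Python sketch of the normalization and projection of Eqs. (2)–(3); the function name and zero-span guard are ours):

```python
import numpy as np

def ideal_points(arc_F, Lambda):
    """For each reference vector, pick the normalized archive member whose
    projection onto the vector lies closest to the origin."""
    arc_F = np.asarray(arc_F, float)
    fmin, fmax = arc_F.min(axis=0), arc_F.max(axis=0)
    span = np.where(fmax > fmin, fmax - fmin, 1.0)   # guard against division by zero
    Fn = (arc_F - fmin) / span                       # min-max normalization
    IP = []
    for lam in np.asarray(Lambda, float):
        d = Fn @ lam / np.linalg.norm(lam)           # projected distance to the origin
        IP.append(Fn[np.argmin(d)])
    return np.array(IP)
```

For an archive {(1, 3), (2, 2), (3, 1)} and the two axis vectors, the ideal points are the normalized extreme members (0, 1) and (1, 0).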
The offspring generation
In the original social learning particle swarm optimization proposed by Cheng and Jin [3], the velocity and position of each individual are updated as follows:

\[ v_{ij}(t+1) = r_1 v_{ij}(t) + r_2 (x_{wj}(t) - x_{ij}(t)) + r_3 (\bar{x}_{j}(t) - x_{ij}(t)) \qquad (4) \]

\[ x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1) \qquad (5) \]

where \(v_{ij}\) and \(x_{ij}\) are the jth velocity and position components of individual i, respectively, \(x_{wj}\) is the jth position component of an individual w whose fitness is better than that of individual i, and \(\bar{x}_{j}\) is the mean position of the current population on the jth dimension. \(r_1\), \(r_2\) and \(r_3\) are random numbers generated uniformly between 0 and 1. The original social learning particle swarm optimization was proposed for single-objective problems and has shown good performance in finding better optimal solutions, especially on large-scale optimization, because of its good diversity. However, in multi-/many-objective optimization, individuals normally do not dominate each other, and it is difficult to tell which individual is better based on the objective values, especially as the number of objectives increases. Therefore, in our method, for an individual i, we first calculate the distances between the individuals in its neighborhood and the ideal point of the reference vector that individual i is associated with (shown in Eq. (6)). In Eq. (6), \(\mathbf {F}(\mathbf {x}_j), j = 1, 2 ,\ldots ,|NI|\) is the objective vector of individual j in the neighborhood of individual i, |NI| is the number of neighbors of individual i, \(\mathbf {IP}_i\) is the ideal point of reference vector \({\lambda }_i\), and \(d1_{i,j}\) is the distance between individual j and the origin along the reference vector \({\lambda }_i\). All distances are sorted in descending order, and correspondingly, individual i can learn from those neighbors that have better convergence to the Pareto front. Both Eqs. (7) and (8) are used for updating the velocity of an individual, with certain probabilities, to prevent the population from falling into local optima. The convergence speed of the social learning particle swarm optimization algorithm is limited because of its good diversity; therefore, the coefficient proposed in [5], i.e., 0.729, is utilized in Eq. (7) to speed up the convergence. In Eq. (7), \(x_{wj}\) represents the jth dimension of an individual w whose distance to the origin along the current reference vector is better than that of individual i, and \(r_1\) and \(r_2\) are random numbers generated uniformly between 0 and 1. Equation (8) randomly re-initializes the velocity so as to jump out of a local optimum.
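The probabilistic velocity update described above can be sketched as follows (a minimal sketch; the exact forms of Eqs. (7)–(8) are paraphrased here, and the helper signature is our assumption):

```python
import numpy as np

def update_particle(x, v, x_w, x_mean, lb, ub, rng, threshold=0.99):
    """With probability `threshold`, learn from a better neighbor x_w and the
    mean position x_mean, damped by the coefficient 0.729 from [5];
    otherwise re-randomize the velocity to escape a local optimum."""
    D = len(x)
    if rng.random() < threshold:
        r1, r2 = rng.random(D), rng.random(D)
        v = 0.729 * (v + r1 * (x_w - x) + r2 * (x_mean - x))  # assumed form of Eq. (7)
    else:
        v = lb + rng.random(D) * (ub - lb)                    # random re-initialization, Eq. (8)
    x = np.clip(x + v, lb, ub)                                # position update with bound repair
    return x, v
```

The clipping step is a common bound-repair choice and is our assumption; the paper does not specify how out-of-bound positions are handled.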
Algorithm 3 gives the pseudocode for generating an offspring. For each reference vector, the distances of its neighbor individuals to the ideal point are first calculated and sorted in descending order. Figure 2 gives a simple example of how to select the demonstrator according to the distance to the ideal point. The best position on the right-hand side is the individual that has the minimal distance to the ideal point along the reference vector. A threshold, 0.99, is given empirically in line 7 of Algorithm 3 for determining which equation is used for the velocity update. To examine the effect of these parameter settings, we conducted three empirical experiments on DTLZ3 with different numbers of objectives:
Case1: Without the coefficient 0.729 in Eq. (7).
Case2: Only Eq. (7) is utilized in the proposed method.
Case3: The threshold using Eq. (8) is set to a half of 0.99, i.e. 0.495.
Table 1 gives the results of the three cases as well as our proposed setting. From Table 1, we can see that the results obtained in Case2 and Case3 are all worse than those obtained by MPSO/DD, which shows that 0.99 is the best threshold for selecting the velocity update equation. Compared to Case1, our proposed MPSO/DD obtained better or competitive results on the DTLZ3 problem with more than 10 objectives, which shows that the coefficient 0.729 is significant for the convergence of the algorithm on problems with high-dimensional objective spaces.
The environmental selection
After the objective evaluation of each offspring, the parent and offspring individuals are combined, and the CAD value proposed in [20] is calculated on each reference vector:

\[ CAD_{i,j} = \frac{\cos \theta _{i,j}}{d1_{i,j}} \qquad (9) \]

In Eq. (9), \(\cos \theta _{i,j}\) represents the cosine of the angle between the ith reference vector and the jth individual in the combined population of parents and offspring. The larger \(\cos \theta \) is (the smaller the angle is), the closer the individual is to the reference vector, which promotes an even distribution of the individuals in the objective space. \(d1_{i,j}\) is calculated using Eq. (6). Obviously, the larger the CAD value is, the better the balance between the diversity and convergence of the individuals.
Algorithm 4 gives the pseudocode of the environmental selection. The CAD value of each individual in the parent and offspring populations is calculated with respect to each reference vector, and the individual with the maximum CAD value on each reference vector is kept for the next generation.
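The per-vector selection in Algorithm 4 can be sketched as follows (an illustrative Python sketch; the CAD score is passed in as a precomputed matrix, since its exact definition comes from [20], and the function names are ours):

```python
import numpy as np

def cos_angles(F, Lambda):
    """Cosine of the angle between every reference vector and every
    individual; entry [i, j] pairs vector i with individual j."""
    Fn = np.asarray(F, float)
    Ln = np.asarray(Lambda, float)
    Fn = Fn / np.linalg.norm(Fn, axis=1, keepdims=True)
    Ln = Ln / np.linalg.norm(Ln, axis=1, keepdims=True)
    return Ln @ Fn.T

def environmental_selection(cad):
    """Keep, per reference vector, the index of the combined-population
    individual with the largest CAD value."""
    return [int(np.argmax(row)) for row in cad]
```

Using the cosine itself as a stand-in score, the individuals (1, 0) and (0, 1) are the ones selected for the two axis vectors.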
The archive updating
All non-dominated solutions are saved in the archive Arc. When a new population is generated and evaluated on the objective functions, it is used to update the archive Arc. Algorithm 5 gives the pseudocode of the archive update. In Algorithm 5, \(Arc(t-1)\) represents the archive at the \((t-1)\)th generation and P is the offspring population. Note that the size of the archive is fixed to the population size N.
In Algorithm 5, the offspring population P is first combined with the non-dominated individuals in the archive \(Arc(t-1)\). If the size of Arc(t) is larger than the population size, only the individual with the maximum CAD value on each reference vector is kept in Arc(t). Otherwise, all individuals in \(Arc(t-1)\) are kept in Arc(t).
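Algorithm 5 can be sketched as follows (an illustrative Python sketch; the oversize truncation here uses a simple norm-based stand-in, whereas the paper truncates by the CAD value on each reference vector):

```python
import numpy as np

def update_archive(arc_F, off_F, N):
    """Merge the offspring objective vectors into the archive, keep the
    non-dominated members, and truncate to at most N if oversized."""
    F = np.vstack([np.asarray(arc_F, float), np.asarray(off_F, float)])
    nd = [i for i in range(len(F))
          if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                     for j in range(len(F)) if j != i)]
    F = F[nd]
    if len(F) > N:
        # stand-in truncation: keep the N members closest to the origin
        F = F[np.argsort(np.linalg.norm(F, axis=1))[:N]]
    return F
```

For instance, merging offspring (0.5, 0.5) and (3, 3) into an archive {(1, 2), (2, 1)} leaves only (0.5, 0.5), which dominates everything else.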
Experimental results and discussion
Parameter setting
To verify the effectiveness of the proposed MPSO/DD algorithm on many-objective optimization problems, seven DTLZ test functions are selected and tested with 3, 5, 8, 10, 15 and 20 objectives, respectively. The obtained results are compared with those of NSGA-III, KnEA, RVEA, MOEA/DD, SPEA/R, GrEA and BiGE [16], which are state-of-the-art algorithms for many-objective problems, and also with NMPSO [17], a PSO-based algorithm proposed for many-objective optimization. All compared algorithms are run on the PlatEMO platform proposed by Tian et al. [28]. The parameters of MPSO/DD are given in Table 2, together with the relationship between the number of objectives and the dimension of the decision variables.
Table 3 gives the other parameters used in the experiments that are related to the number of objectives, including the number of objective evaluations and, correspondingly, the size of the reference vector set. The reference vectors are generated uniformly according to the strategy proposed in MOEA/D [31]. The population size equals the number of reference vectors. All other parameters that need to be set in our experiments are analyzed and given in Sect. 2.3. The parameters of the comparison algorithms are set as in their original publications.
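The uniform reference vectors mentioned above follow the simplex-lattice design used in MOEA/D; a sketch of that enumeration (the function name is ours):

```python
from itertools import combinations
import numpy as np

def uniform_reference_vectors(m, H):
    """All m-dimensional weight vectors with components in {0, 1/H, ..., 1}
    summing to 1 (a stars-and-bars enumeration of the simplex lattice)."""
    vecs = []
    for bars in combinations(range(H + m - 1), m - 1):
        prev, v = -1, []
        for pos in bars:
            v.append(pos - prev - 1)
            prev = pos
        v.append(H + m - 2 - prev)
        vecs.append(np.array(v, float) / H)
    return np.array(vecs)
```

For m = 3 objectives and H = 2 divisions this yields C(4, 2) = 6 vectors, each summing to 1, which is why the population size grows combinatorially with the number of objectives.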
Performance metrics
To compare the performance of the different algorithms, the inverted generational distance (IGD) [6] is used as the indicator. Suppose \(P^*\) is a set of points uniformly distributed on the optimal Pareto front in the objective space and P is a set of obtained non-dominated solutions; then the IGD value is defined as follows:

\[ IGD(P^*, P) = \frac{1}{|P^*|} \sum _{\mathbf {x} \in P^*} \min _{\mathbf {y} \in P} dist(\mathbf {x}, \mathbf {y}) \]

where \(dist(\mathbf {x},\mathbf {y})\) represents the Euclidean distance between two positions \(\mathbf {x}\) and \(\mathbf {y}\). Therefore, the IGD is the average of the minimum distances from the points in \(P^*\) to P, which measures both the convergence and the diversity of the obtained non-dominated solution set P. In our experiments, we sampled 10,000 solutions from the true Pareto front to form \(P^*\). The smaller the IGD value is, the better P is.
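The IGD computation can be expressed directly (a minimal Python sketch of the definition above):

```python
import numpy as np

def igd(P_star, P):
    """Mean, over the reference points in P_star, of the Euclidean distance
    to the nearest obtained solution in P (smaller is better)."""
    P_star, P = np.asarray(P_star, float), np.asarray(P, float)
    d = np.linalg.norm(P_star[:, None, :] - P[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

For example, with P* = {(0, 0), (1, 1)} and P = {(0, 0)}, the two minimum distances are 0 and sqrt(2), giving an IGD of sqrt(2)/2.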
Experimental result
Table 4 gives the statistical IGD results of the proposed MPSO/DD algorithm and the other seven algorithms on the seven DTLZ test functions. The results of the Wilcoxon rank-sum test are also given: ‘\(+\)’, ‘−’ and ‘\(\approx \)’ indicate that the results of the MPSO/DD algorithm are superior, inferior and similar to those of the comparative algorithm, respectively. The data in boldface in Table 4 represent the best results of all algorithms. All results are obtained over 20 independent runs. From Table 4, we can clearly see that the proposed MPSO/DD method obtained better results on the DTLZ2, DTLZ3, DTLZ5 and DTLZ6 problems with high-dimensional objective spaces. Except for DTLZ2 with 20 objectives, MPSO/DD obtained better results on these problems with 10, 15 and 20 objectives, which shows the competitiveness of the proposed method on problems with high-dimensional objective spaces. Table 5 summarizes the results given in Table 4. From Table 5, we can clearly see that, in general, MPSO/DD obtained better results than NSGA-III, KnEA, RVEA, SPEA/R, GrEA and BiGE, and competitive results compared with MOEA/DD.
To show the effectiveness of the proposed algorithm, Figs. 3 and 4 plot the parallel coordinates of the non-dominated solution sets obtained by the different algorithms on the 5-objective and 20-objective DTLZ1 test problems, respectively. From Figs. 3 and 4, we can see that the objective values of MOEA/DD and MPSO/DD decline much more quickly than those of the other algorithms, and the solutions are better distributed within a limited number of evaluations. Compared with MOEA/DD, the performance of MPSO/DD is comparable on the 5-objective DTLZ1, but not better on the 20-objective DTLZ1. We attribute this to the diversity of MPSO/DD still being inferior to that of MOEA/DD on DTLZ1. Therefore, to see whether the proposed MPSO/DD algorithm is competitive with PSO-based many-objective optimization algorithms, we compare the results on DTLZ with those of NMPSO [17], in which a balanceable fitness estimation (BFE) method and a novel velocity update equation were presented to effectively solve many-objective optimization problems. Table 6 shows the mean IGD results of MPSO/DD compared with NMPSO, where the best results are highlighted. From Table 6, we can see that MPSO/DD obtained better results than NMPSO on DTLZ1–DTLZ6 with high-dimensional objective spaces, which further shows that the proposed MPSO/DD is competitive on problems with high-dimensional objective spaces. However, the results obtained by MPSO/DD on DTLZ7 are not better than those of NMPSO; the reason, we believe, may lie in the BFE method proposed in NMPSO, which strongly prefers well-converged solutions in less crowded regions.
Table 5 also summarizes the results given in Table 6. From Table 5, we can clearly see that the proposed MPSO/DD obtained better results than NMPSO on 26 of 42 instances, which indicates the better overall performance of the proposed MPSO/DD.
Conclusion
This paper proposed a modified particle swarm optimization algorithm for many-objective problems, in which a decomposition strategy and different ideal points are utilized. The experimental results showed that the proposed algorithm has advantages in solving problems with high-dimensional objective spaces, but much work remains for further study. In the future, we will try to add strategies that prevent the algorithm from falling into local optima so as to achieve better results on all problems.
References
Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol Comput 19(1):45–76
Beume N, Naujoks B, Emmerich M (2007) SMS-EMOA: multiobjective selection based on dominated hypervolume. Eur J Oper Res 181(3):1653–1669
Cheng R, Jin Y (2015) A social learning particle swarm optimization algorithm for scalable optimization. Inf Sci 291(6):43–60
Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 20(5):773–791
Clerc M, Kennedy J (2002) The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evol Comput 6(1):58–73
Coello CAC, Sierra MR (2004) A study of the parallelization of a coevolutionary multi-objective evolutionary algorithm. In: Mexican international conference on artificial intelligence
Deb K (2001) Multiobjective optimization using evolutionary algorithms. Springer, New York
Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part i: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601
Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
Herrero JG, Berlanga A, López JMM (2009) Effective evolutionary algorithms for many-specifications attainment: application to air traffic control tracking filters. IEEE Trans Evol Comput 13(1):151–168
Jiang S, Yang S (2017) A strength pareto evolutionary algorithm based on reference direction for multi-objective and many-objective optimization. IEEE Trans Evol Comput 21(3):329–346
Laumanns M, Thiele L, Deb K, Zitzler E (2002) Combining convergence and diversity in evolutionary multiobjective optimization. Evol Comput 10(3):263–282
Li K, Deb K, Zhang Q, Kwong S (2015) An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Trans Evol Comput 19(5):694–716
Li K, Kwong S, Zhang Q, Deb K (2015) Interrelationship-based selection for decomposition multiobjective optimization. IEEE Trans Cybern 45(10):2076–2088
Li K, Zhang Q, Kwong S, Li M, Wang R (2014) Stable matching based selection in evolutionary multiobjective optimization. IEEE Trans Evol Comput 18(6):909–923
Li M, Yang S, Liu X (2015) Bi-goal evolution for many-objective optimization problems. Artif Intell 228:45–65
Lin Q, Liu S, Zhu Q, Tang C, Song R, Chen J, Coello CAC, Wong KC, Zhang J (2016) Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems. IEEE Trans Evol Comput 22(1):32–46
Liu HL, Gu F, Zhang Q (2014) Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems. IEEE Trans Evol Comput 18(3):450–455
Mkaouer MW, Kessentini M, Bechikh S, Deb K, Ó Cinnéide M (2014) High dimensional search-based software engineering: finding tradeoffs among 15 objectives for automating software refactoring using NSGA-III. In: Proceedings of the 2014 annual conference on genetic and evolutionary computation, ACM, pp 1263–1270
Qin S, Sun C, Jin Y, Lan L, Tan Y (2019) A New Selection Strategy for Decomposition-based Evolutionary Many-Objective Optimization. In: 2019 IEEE congress on evolutionary computation (CEC), IEEE, pp 2426–2433
Sülflow A, Drechsler N, Drechsler R (2007) Robust multi-objective optimization in high dimensional spaces. In: International conference on evolutionary multi-criterion optimization, Springer, New York, pp 715–726
Tian Y, Cheng R, Zhang X, Cheng F, Jin Y (2017) An indicator based multi-objective evolutionary algorithm with reference point adaptation for better versatility. IEEE Trans Evol Comput PP(99):1–1
Wang G, Jiang H (2007) Fuzzy-dominance and its application in evolutionary many objective optimization. In: International conference on computational intelligence security workshops
Wang H, Jiao L, Yao X (2015) Two_arch2: An improved two-archive algorithm for many-objective optimization. IEEE Trans Evol Comput 19(4):524–541
Wang Z, Zhang Q, Li H (2015) Balancing convergence and diversity by using two different reproduction operators in moea/d: some preliminary work. In: 2015 IEEE international conference on systems, man, and cybernetics, pp 2849–2854. https://doi.org/10.1109/SMC.2015.496
Xiang Y, Zhou Y, Li M, Chen Z (2017) A vector angle based evolutionary algorithm for unconstrained many-objective optimization. IEEE Trans Evol Comput 21(1):131–152
Yang S, Li M, Liu X, Zheng J (2013) A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 17(5):721–736
Tian Y, Cheng R, Zhang X, Jin Y (2017) PlatEMO: a MATLAB platform for evolutionary multi-objective optimization. IEEE Comput Intell Mag 12(4):73–87
Ying S, Li L, Zheng W, Li W, Wang W (2017) An improved decomposition-based multiobjective evolutionary algorithm with a better balance of convergence and diversity. Appl Soft Comput 57:S1568494617301655
Yuan Y, Xu H, Wang B, Yao X (2016) A new dominance relation-based evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 20(1):16–37
Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731
Zhang X, Tian Y, Jin Y (2015) A knee point driven evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 19(6):761–776
Zitzler E, Künzli S (2004) Indicator-based selection in multiobjective search. In: International conference on parallel problem solving from nature, Springer, New York, pp 832–842
Acknowledgements
This work was supported in part by National Natural Science Foundation of China (Grant no.: 61876123), Natural Science Foundation of Shanxi Province (201801D121131, 201901D111264, 201901D111262), Fund Program for the Scientific Activities of Selected Returned Overseas Professionals in Shanxi Province, Shanxi Science and Technology Innovation project for Excellent Talents (201805D211028), Postgraduate Education Innovation Project of Shanxi Province (2019SY494), the Doctoral Scientific Research Foundation of Taiyuan University of Science and Technology (20162029), and the China Scholarship Council (CSC).
Cite this article
Qin, S., Sun, C., Zhang, G. et al. A modified particle swarm optimization based on decomposition with different ideal points for many-objective optimization problems. Complex Intell. Syst. 6, 263–274 (2020). https://doi.org/10.1007/s40747-020-00134-7