1 Introduction

Since many real-world problems require optimizing multiple conflicting objectives simultaneously (Tate et al. 2012), multi-objective optimization problems (MOPs) have received widespread attention in recent decades. Solving a MOP means finding a set of trade-off solutions whose objective vectors form the Pareto front (PF). As evolutionary algorithms (EAs) are population-based and can find multiple solutions in a single run, researchers have extended them to solve MOPs; multi-objective evolutionary algorithms (MOEAs) have thus been studied intensively and extensively (Deb 2001; Xia et al. 2014).

The particle swarm optimization (PSO) algorithm (Kennedy and Eberhart 1995; Shi and Eberhart 1998) is a population-based algorithm that simulates the movements of a flock of birds searching for food. Because of its fast convergence and simple implementation, PSO has also been extended to solve MOPs (Margarita et al. 2006), yielding MOPSO, and has been successfully applied to many real-world problems (Lalwani et al. 2013; Xue et al. 2013). Compared with other algorithms (such as MOEAs), MOPSO has a unique particle update method and uses the personal best (pbest) to record each particle's experience. In fact, there are two types of archives (Coello et al. 2004) in MOPSOs: one is composed of global optima (gbest) and the other is composed of all of the pbest recorded by the particles.
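The canonical single-objective PSO update that MOPSOs build on can be sketched as follows. This is a minimal sketch, not any specific MOPSO variant from the literature above; the values of w, c1 and c2 are those used later in this paper, and the function name is illustrative:

```python
import random

def pso_update(x, v, pbest, gbest, w=0.729, c1=2.05, c2=2.05):
    """One canonical PSO step for a single particle.

    Each dimension of the velocity is a blend of inertia (w * v), a pull
    towards the particle's own best position (pbest) and a pull towards
    the global best (gbest), with uniform random weights r1, r2.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

When a particle sits exactly at both its pbest and the gbest, the attraction terms vanish and only the inertia term remains, which is why a converged swarm explores little without extra diversity mechanisms.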

However, like other multi-objective optimization algorithms, MOPSOs also suffer from premature convergence on complex MOPs. Many researchers have noticed this problem and borrowed from the genetic algorithm (GA), which uses a mutation operation to enhance the ability to escape local optima; mutation operations have thus been introduced into PSOs and MOPSOs as well (Paul 2006). Other researchers have combined a variety of algorithms to solve MOPs (Luis et al. 2006; Daneshyari and Yen 2011). Besides, the comprehensive learning PSO (CLPSO) (Liang et al. 2006) aims to enhance population diversity to improve the algorithm's ability to escape from local optima. Based on it, the multi-objective comprehensive learning particle swarm optimizer (MOCLPSO) (Huang et al. 2006) was proposed to solve MOPs with excellent performance.

Because of its simplicity and effectiveness, most MOPSOs use the Pareto dominance relation to update gbest and pbest. Moore and Chapman (1999) utilized a ring topology to solve MOPs; in this topology, the pbest of a particle is a list of all non-dominated solutions found along its trajectory, and when updating the velocity and position, one pbest is selected randomly from the list. Li (2003) proposed the non-dominated sorting particle swarm optimizer (NSPSO), which incorporates the main mechanisms of NSGA-II (Deb et al. 2002). In NSPSO, once a particle has updated its position, all the pbest of the particles and all the newly obtained positions are combined into one solution set and sorted into non-domination levels using the non-dominated sorting technique. Niche count and crowding distance are used to estimate density, and the N (population size) best particles are selected as the new swarm. This approach applies a mutation operator only to the particle with the smallest crowding distance value (or the largest niche count). Mostaghim and Teich (2003) proposed sigma-MOPSO, in which the gbest is selected according to the closest sigma value. The speed-constrained multi-objective PSO (SMPSO) (Nebro et al. 2009) is an improved version of OMOPSO (Sierra and Coello 2005); it uses an adaptive grid to update the external archive and a roulette-wheel method to select gbest. In MOCLPSO, the gbest is selected randomly from the archive, but the pbest is selected from different particles of the swarm to enhance diversity. The co-evolutionary multi-swarm PSO (CMPSO) (Zhan and Zhang 2013) introduced the multiple populations for multiple objectives (MPMO) technique, with each objective corresponding to one population; an elitist learning strategy (ELS) is also applied to the archive to strengthen exploration.

Inspired by MOEA/D (Zhang and Li 2007), which decomposes the original MOP into a number of scalar aggregation problems, Moubayed et al. (2010) proposed a smart MOPSO using decomposition (SDMOPSO), which associates every particle with a weight vector according to the best scalar aggregated fitness value and utilizes archival storage of non-dominated solutions. Hybrid techniques have also been applied to enhance MOPSOs: in Tang and Wang (2013), a hybrid multi-objective evolutionary algorithm (HMOEA) was reported in which a self-adaptive selection mechanism chooses an appropriate crossover operator from several candidates.

MOPSOs exhibit a strong ability to learn but are weak in exploration. If there were only one global optimum, the particles would soon fly to it and begin to explore new areas; as the population concentrates in a small region, the ability to explore new areas weakens. MOPSOs benefit from the introduction of an archive containing multiple global optima, but their exploration ability is not thereby strengthened.

Fig. 1

a Archive size and convergence curve for the ZDT1 and ZDT4 problems. For the ZDT1 problem, the convergence value is multiplied by 100 to facilitate observation. b The distribution of the archive at several generations for the ZDT4 problem

An investigation of the archive size shows that it quickly grows to its maximum value on simple problems, whereas on complex problems it tends to stay small and remain unchanged over long periods. For verification, Fig. 1a shows the trend of the archive size (na) for the ZDT1 and ZDT4 problems obtained by the conventional PSO algorithm. As can be seen from the figure, the archive size for ZDT1 quickly increases to the archive limit (NA = 100) and remains unchanged, while for ZDT4 the archive size never reaches NA and stays very small. Furthermore, over some intervals (e.g. generations 400 to 500) the archive size does not change at all, and the convergence value (generational distance indicator, GD) does not change during the same period. To observe this intuitively, the solutions in the archive at generations 400, 500, 600, 700, 800 and 900 are also shown: Fig. 1b plots the distribution of the archive for generations 400-500 and 600-900, and the archive is seen to remain unchanged because no new particle is added during the archive update. For the archive to change, one of two events must occur: a new particle dominates some solution in the archive, which can shrink the archive; or the population generates a particle that is non-dominated with respect to every solution in the archive, which enlarges it. When neither happens over many generations, the population is failing to produce effective particles to update the archive, which is a typical symptom on complex problems.
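The two events can be made precise with a standard Pareto-dominance check for minimization. The following sketch, including the hypothetical helper `archive_would_change` (not from the paper), shows when an archive update leaves the archive untouched:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

def archive_would_change(new, archive):
    """The archive stays unchanged only if `new` neither dominates any
    member (which would shrink the archive) nor is non-dominated with
    respect to all members (which would add it and grow the archive)."""
    shrinks = any(dominates(new, s) for s in archive)
    grows = not any(dominates(s, new) or s == new for s in archive)
    return shrinks or grows
```

A stagnating archive thus means every particle the swarm produces is dominated by (or identical to) something already stored.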

To overcome this problem, a population recombination strategy (PRS) and a new mutation strategy are proposed. The main idea is to discard most of the current position information, which is useless for generating better solutions, and to effectively use the information contained in the archive and in all of the pbest to construct new particles. The PRS uses the best variable information found so far to construct the new population, which increases the probability that particles will be included in the archive; a mutation operator is combined with it to strengthen the ability to jump out of local optima. Both strategies are described in detail below. Based on the PRS and the new mutation strategy, a multi-objective particle swarm optimization algorithm called RMOPSO is proposed and verified.

The novelties of proposed RMOPSO algorithm lie in:

  1.

    Unlike existing algorithms, in which the swarm is updated continuously, the PRS recombines the swarm at certain stages to discard information that is likely to mislead the direction of evolution. The chance for the swarm to regroup therefore increases.

  2.

    Unlike existing algorithms that mutate particles by probability or by a density value, as NSPSO does, the new mutation strategy mutates the particles that have just been included in the archive. It is therefore more effective when the algorithm falls into a local optimum.

The rest of this paper is organized as follows. Section 2 constructs a MOPSO framework based on previous research and describes the PRS and the new mutation strategy in detail. The experimental analysis of the proposed strategies is presented in Sect. 3. Section 4 reports and analyzes the experimental results on benchmark problems, including a comparison with other state-of-the-art MOEAs and MOPSOs. Finally, conclusions are drawn in Sect. 5.

Fig. 2

The pseudo-codes of the basic RMOPSO: a the swarm initialization process; b the swarm update process; c the archive update process; d the basic framework of RMOPSO

2 RMOPSO

2.1 Basic framework of RMOPSO

Based on previous studies, RMOPSO, which utilizes the Pareto dominance relation to evaluate the quality of solutions, is constructed. The basic framework is composed of three processes: the swarm initialization process, the swarm update process and the archive update process. The detailed pseudo-code can be found in Fig. 2.

Fig. 3

The pseudo-code of the PRS. random[0,1] is used to generate a random number between 0 and 1, and A denotes the archive

In the Swarm_update_process described in Fig. 2b, the updated velocity and position may exceed their corresponding boundaries, so they are adjusted to satisfy the velocity constraints on \( V_{i}\) and the position constraints on \(X_{i}\) after the update:

  Case 1: if \(V_{i}> V_{{max},i}\), then set \( V_{i} = V_{{max},i}\).

  Case 2: if \(V_{i}< V_{{min},i}\), then set \( V_{i} = V_{{min},i}\).

Here \(V_{{max},i}= \theta X_{{max},i}\) and \( V_{{min},i} = \theta X_{{min},i}\). Zhan et al. (2009) suggest \(\theta = 0.2\) (i.e. \(V_{{max},i }= 0.2 X_{{max},i}\) and \( V_{{min},i }= 0.2X_{{min},i}\)). However, it was found that \(\theta =1.0\) improves the diversity of the algorithm, so 1.0 is used in this paper. The position constraints are given as follows:

  Case 1: if \(X_{i}> X_{{max},i}\), then \(X_{i}= X_{{max},i} \) and \(V_{i}= -V_{i}\).

  Case 2: if \(X_{i}< X_{{min},i}\), then \(X_{i}= X_{{min},i}\) and \(V_{i}= -V_{i}\).

The archive, denoted A, is adopted to store all non-dominated solutions found so far; it is initialized to be empty and is updated in every generation. Research shows that it is better to use an archive with a fixed maximum size, as the number of non-dominated solutions may grow very fast. Therefore, the maximum size of the archive is denoted NA and its current size na. In the archive update process, all the particles in the swarm and the solutions in the archive are first combined into one set, and all particles dominated by any other particle are deleted. The density of each remaining solution is then calculated using the crowding distance technique and the solutions are sorted by it; if the size of the set exceeds the maximum limit NA, the most crowded solutions are deleted.
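This archive update can be sketched as follows, using the NSGA-II-style crowding distance; function names are illustrative and the truncation loop removes one most-crowded solution at a time under that assumption:

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crowding_distance(front):
    """NSGA-II crowding distance for a list of objective vectors."""
    n = len(front)
    if n <= 2:
        return [float('inf')] * n
    dist = [0.0] * n
    for k in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float('inf')  # keep the extremes
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist

def update_archive(swarm_objs, archive_objs, NA):
    """Merge swarm and archive, drop dominated points, then truncate
    to NA by repeatedly deleting the most crowded solution."""
    pool = swarm_objs + archive_objs
    nd = [p for p in pool if not any(dominates(q, p) for q in pool if q != p)]
    seen, front = set(), []
    for p in nd:                      # drop exact duplicates, keep order
        if tuple(p) not in seen:
            seen.add(tuple(p))
            front.append(p)
    while len(front) > NA:
        d = crowding_distance(front)
        front.pop(d.index(min(d)))    # smallest distance = most crowded
    return front
```

Extreme points receive infinite distance, so truncation always preserves the boundary of the front.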

2.2 Population recombination strategy

The pseudo-code of the PRS is given in Fig. 3. The position information of a current particle is not completely useless, so the PRS does not discard all of the position information belonging to the particles: each dimension keeps its current value with probability \(p_{1}\), and is otherwise replaced by the corresponding dimension of a pbest or gbest.

As a storage pool, the archive saves all optimal solutions found so far. However, when the algorithm falls into a local optimum, the stored information is not reliable. pbest is the best position that each particle has discovered and recorded. In most cases, the pbest should also be included in the archive, but when the archive size is smaller than the population size, some pbest of the particles in the swarm are obviously different from the solutions in the archive; the pbest thus collectively carry some diversity information about the swarm. Therefore, all of the information stored in the archive and in the pbest is important and should be fully utilized. However, it is difficult to know which information is valid during the search, so the probability \(p_{2}\) is introduced to determine the use of pbest and gbest.

After the PRS, the velocity is reset to 0. It should be noted that the PRS uses existing variable information directly and does not create any new position information, and the pbest belonging to each recombined particle is unchanged.

A problem arises when the PRS is used to construct the new swarm: it cannot be applied in every generation, because it creates no new position information but only rearranges existing variables. Therefore, a parameter called the generation interval T is introduced, meaning the PRS is applied every T generations. In Sect. 3, an experimental analysis of this parameter shows that the algorithm performs better when T is chosen between 5 and 10.
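Based on the description above, the PRS can be sketched as follows. This is an interpretation of Fig. 3 with illustrative names: with probability \(p_{1}\) a dimension keeps its current value; otherwise it copies the corresponding dimension of the particle's own pbest (probability \(p_{2}\)) or of a gbest drawn at random from the archive A:

```python
import random

def recombine(positions, pbests, archive, p1=0.05, p2=0.5):
    """Population recombination sketch.  Velocities are reset to 0 and
    each particle's pbest is left unchanged, as stated in the text."""
    new_pop = []
    for i, x in enumerate(positions):
        new_x = []
        for d in range(len(x)):
            if random.random() < p1:
                new_x.append(x[d])            # keep current position info
            elif random.random() < p2:
                new_x.append(pbests[i][d])    # learn from own pbest
            else:
                g = random.choice(archive)    # learn from a gbest in A
                new_x.append(g[d])
        new_pop.append(new_x)
    velocities = [[0.0] * len(x) for x in positions]
    return new_pop, velocities
```

With the default \(p_{1} = 0.05\), almost every dimension is rebuilt from archive or pbest information, which is why the strategy sharply raises the chance that a recombined particle enters the archive.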

2.3 New mutation strategy

Since the mutation operator can enhance population diversity and strengthen the exploration ability of the algorithm, it has been introduced into PSO and MOPSO by researchers. Similar to the MOEAs, the probability of mutating one particle is often 1/n, where n is the number of variables. Besides, MOPSO uses a mutation operation whose probability decreases with increasing iterations, while NSPSO mutates the particle that has the smallest density value.

The mutation operator is also employed here. During the archive update process, particles are added to the archive when they dominate, or are non-dominated with respect to, solutions in the archive. When the population is updated, especially after the algorithm has fallen into a local optimum, some particles in the swarm are themselves global optima located in the archive, and such a particle will choose a gbest from the archive to learn from. However, the particle and its exemplars are then at the same non-domination level, so the effectiveness of exploring the variable space is reduced. Accordingly, mutating these particles is a good choice. When a particle is selected for mutation, a random dimension d is chosen and a Gaussian perturbation is applied as:

$$\begin{aligned}&\text {If } random [0,1] > 0.5\\&\quad POP_{i} [d] = POP _{i}[d]\\&\qquad +\, (X_{max,d} - X_{min,d} )*gaussian(0,1)\\&\text {else}\\&\quad POP_{i} [d] = POP_{i}[d] \\&\qquad -\, (X_{max,d} - X_{min,d} )*gaussian(0,1) \end{aligned}$$

where i is the index of the particle selected for mutation, \(X_{max,d}\) and \( X_{min,d}\) are the upper and lower bounds of the dth dimension, respectively, and gaussian(0,1) is a random value drawn from a Gaussian distribution with mean 0 and standard deviation 1.
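A direct sketch of this mutation rule, with `random.gauss` playing the role of gaussian(0,1) and the function name chosen for illustration:

```python
import random

def mutate(pop_i, x_max, x_min):
    """Apply the Gaussian perturbation above to one randomly chosen
    dimension d of particle pop_i; other dimensions are untouched."""
    x = list(pop_i)
    d = random.randrange(len(x))
    step = (x_max[d] - x_min[d]) * random.gauss(0, 1)
    # the +/- branch of the equation: direction chosen uniformly at random
    x[d] = x[d] + step if random.random() > 0.5 else x[d] - step
    return x
```

Scaling the perturbation by the full variable range makes the jump large enough to leave a local basin, which is the stated purpose of the operator.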

2.4 Population recombination multi-objective particle swarm optimization algorithm

Based on the PRS and the new mutation strategy, the population recombination multi-objective particle swarm optimization algorithm (RMOPSO) is implemented. The detailed flowchart is given in Fig. 4. In the flowchart, t is the generation counter of RMOPSO, T is the generation interval, and \(t \ {\%} \ T\) is the modulo operation used to determine whether the PRS is invoked. The swarm update process is the same as the Swarm_update_process described in Fig. 2b except for the new mutation operation.

Fig. 4

The pseudo-code of the complete RMOPSO algorithm

The archive update process is similar to the Archive_update_process described in Fig. 2c, except that the particles entering the archive from the swarm are marked for the subsequent mutation operation. The variable count records the number of function evaluations (FEs).
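Since the flowchart itself is in Fig. 4, the overall control flow can be summarized structurally, with the three processes passed in as callables. This is a sketch of the loop shape only, not the paper's pseudo-code:

```python
def rmopso(init, swarm_update, prs, archive_update, T, max_gen):
    """Control-flow skeleton of RMOPSO: every T-th generation
    (t % T == 0) the swarm is rebuilt by the PRS instead of the
    usual PSO update; the archive is updated every generation."""
    state = init()
    for t in range(1, max_gen + 1):
        state = prs(state) if t % T == 0 else swarm_update(state)
        state = archive_update(state)
    return state
```

Passing the processes in as functions makes the schedule easy to test: with T = 5 and 10 generations, the PRS fires exactly twice and the ordinary update eight times.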

3 Experimental analysis

To verify the effectiveness of the new mutation strategy and the PRS described in Sects. 2.2 and 2.3, an experimental analysis is carried out in this section. It includes an analysis of different generation intervals T for the PRS, and of the individual impacts of the PRS and the mutation operator. The basic framework of RMOPSO described in Fig. 2d, without the PRS and the mutation operation, is used as the baseline algorithm to demonstrate the advantages of the two strategies.

3.1 Test problems

Six benchmark problems are adopted as test problems in this section. Five (ZDT1, ZDT2, ZDT3, ZDT4 and ZDT6) are selected from the ZDT test set, and WFG1 is selected from the WFG test set (Huband et al. 2005). ZDT1 and ZDT2 are simple problems whose PFs are convex and concave, respectively, while the PF of ZDT3 is convex and disconnected. WFG1 has a mixed PF. ZDT4 is a multimodal problem with many local optima, so it is used to test the ability of the algorithm to jump out of traps. These test problems cover various characteristics of MOPs and can be used to evaluate the performance of the proposed algorithm on different issues.

3.2 Performance metric

In this section, the generational distance (GD) and \(\Delta \) indicators are used to evaluate convergence and diversity performance, respectively. Their detailed definitions can be found in Appendix 1. As these indicators require sufficient reference points to represent the PF, 500 reference points are generated from the known PF of each problem.
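A common formulation of GD averages the distance from each obtained solution to its nearest reference point; the sketch below uses that averaged form, while Appendix 1 may normalize slightly differently:

```python
import math

def gd(obtained, reference):
    """Generational distance: mean Euclidean distance from each obtained
    solution to its nearest reference point sampled on the true PF.
    Small values mean the obtained set lies close to the PF."""
    return sum(min(math.dist(s, r) for r in reference)
               for s in obtained) / len(obtained)
```

Note that GD only measures closeness: a single point sitting on the PF scores 0 even though it covers none of the front, which is why the \(\Delta \) indicator is needed alongside it.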

3.3 The impact of generation interval T

In this subsection, the impact of T on RMOPSO is studied. The related parameter settings of RMOPSO are: \(\textit{NP} = 50\), \(\textit{NA}= 100\), \(w = 0.729\), \(c_{1} = c_{2}= 2.05\), \(p_{1} = 0.05\), \(p_{2} = 0.5\). In the experiment, each algorithm is run independently 30 times and the maximum number of function evaluations (FEs) is 25,000 (unless otherwise specified, this setting also applies in Sect. 3.4). T is set to 100, 50, 20, 10, 7, 5 and 3 for comparison; \( T = 0\) means that the PRS is not used, i.e. the original MOPSO.

Table 1 shows the results for RMOPSO with different values of T. For convergence, the convergence value improves significantly as T decreases, especially for the complex problem ZDT4. Overall, the standard deviation also decreases with T, which indicates that the probability of the algorithm converging to the optimal front increases. It is worth mentioning that for the simple problems, the convergence values for \(T = 10\), 7, 5 and 3 still tend to decrease, but the differences between them are not significant, whereas for the complex ZDT4 problem the differences remain large. Thus, for complex problems, a smaller T makes the proposed algorithm more likely to jump out of local optima.

Table 1 Generational distance (GD) and the diversity performance (\(\Delta )\) of RMOPSO with different values of T

The diversity results are also summarized in Table 1. The diversity value likewise decreases with decreasing T, but unlike the convergence values (ZDT4 excluded) it does not decrease monotonically: diversity performance is best when \(T = 10\). For ZDT4 and DTLZ1, however, the diversity value keeps decreasing with T and is best when \(T = 3\).

From the convergence and diversity results, it can be concluded that a smaller T yields better convergence even for simple problems, and a small T (less than 5) affects diversity performance only insignificantly. Hence, to improve convergence without deteriorating diversity, the interval T is recommended to be between 5 and 10.

3.4 The impacts of the PRS and new mutation strategy

The effect of T was studied in the previous subsection; it is also necessary to investigate the impacts of the PRS and the new mutation strategy separately, compared with the original MOPSO. As a supplement to the PRS, the purpose of the new mutation strategy is to strengthen population diversity and thereby enhance the ability of the algorithm to avoid local optima. To analyze the impact of the mutation operation independently, the corresponding operation is removed from RMOPSO.

Therefore, the results obtained with the mutation operator removed can be used to investigate the effects of the PRS alone, while the results given in Sect. 3.3 reflect the additional effect of the new mutation strategy.

Table 2 shows the convergence and diversity values with only the PRS employed. The results show that: (1) for convergence, the convergence value becomes smaller as T decreases for the simple problems; the complex problem ZDT4 exhibits the same trend, but the results do not fully converge to the PF until \(T = 3\). (2) For diversity, the trend is similar; however, the best values, obtained at \(T = 3\) (at \(T = 7\) for ZDT6), are always larger than the corresponding results in Table 1. This is because the PRS introduces no new variable information into the population and therefore cannot enhance diversity.

Table 2 The generational distance (GD) and diversity performance (\(\Delta \)) of RMOPSO for different values of T with population recombination only

Therefore, it can be concluded that the PRS strengthens the convergence performance of the MOPSO, but its effect on diversity is not significant.

Fig. 5

The comparison between RMOPSO (denoted PRS + Mutation) and RMOPSO with the mutation operation removed (denoted only PRS). 1 Comparison of convergence performance, where a is ZDT1, b ZDT2, c ZDT3, d ZDT4, e ZDT6, f WFG1. 2 Comparison of diversity performance, with the same panel assignment

Figure 5 compares the results in Tables 1 and 2. In the figure, "PRS \(+\) Mutation" denotes the algorithm combining the PRS and the new mutation strategy, while "only PRS" denotes the algorithm with the PRS only. The comparison shows that: (1) in terms of convergence, for ZDT1, ZDT2, ZDT3 and ZDT4, the PRS-only variant performs better when \(T > 20\), while the combination of PRS and mutation performs better when \(T < 20\). For ZDT6 and WFG1, the combination performs better throughout; notably, its best value is again obtained when \(T < 20\), similar to the simple problems. (2) With the mutation operation employed, the diversity performance is better than with the PRS only. Thus the mutation operation improves population diversity and simultaneously strengthens the algorithm's ability to jump out of local optima. It can be concluded from the above results that:

  1.

    The PRS can effectively improve the convergence performance of the algorithm, because it uses the variable information of the personal and global optima to generate the new population, so the new particles have a higher probability of approaching the PF than the original particles.

  2.

    The new mutation strategy improves population diversity and thereby strengthens the ability to jump out of local optima.

  3.

    The combination of the PRS and the mutation strategy improves both convergence and diversity performance simultaneously.

3.5 Conclusion of the experimental analysis

In this section, the experimental study of the influence of T on RMOPSO shows that convergence and diversity performance can be improved at the same time by decreasing T. The impacts of the PRS and the mutation operator were investigated separately: the PRS alone effectively strengthens convergence performance but offers no obvious advantage in diversity, while the mutation operator enhances population diversity and thereby strengthens the ability to jump out of local optima. The experimental results also verify that the combination of the two strategies consistently performs better in both convergence and diversity.

4 Comparison with state-of-the-art algorithms

This section presents the comparison between the proposed algorithm and state-of-the-art algorithms. A brief introduction and the corresponding parameter settings of the compared algorithms are given first. Next, the benchmarks and performance indicator used to evaluate convergence and diversity are described. Finally, the results obtained by the different algorithms are compared.

4.1 Algorithm and parameters settings

Six MOEAs and MOPSOs are selected for comparison with RMOPSO: NSGA-II, generalized differential evolution 3 (GDE3) (Kukkonen and Lampinen 2009), MOCLPSO, SMPSO, MOEA/D-DE (Li and Zhang 2009) and CMPSO. NSGA-II is the classical multi-objective evolutionary algorithm based on the Pareto dominance relation. GDE3 is a competitive MOEA based on differential evolution (DE). MOCLPSO is the multi-objective version of CLPSO, known for its ability to avoid local optima. SMPSO introduces a velocity constriction mechanism and extends the range of the parameters \(c_{i}\) of OMOPSO; it has been found to perform much better than several MOPSOs (Durillo et al. 2009). MOEA/D-DE is the DE version of MOEA/D, which replaces the SBX crossover operator with a DE operator. CMPSO, based on MPMO and ELS, is competitive with several MOPSOs. These algorithms are therefore representative and make the comparison more comprehensive and convincing.

The parameters of all algorithms are set according to the original literature, as given in Table 3. In the experiments, each test problem is executed independently 30 times for each algorithm. Considering the different properties of the problems, the maximum number of FEs is 25,000 for the ZDT problems, \(1\times 10^{5}\) for the DTLZ problems and \(3\times 10^{5}\) for the UF problems. Except for RMOPSO and CMPSO, the population size is set to 100 for the ZDT problems, 200 for the DTLZ problems and 300 for the UF problems, respectively, while the population size is 50 for RMOPSO and 20 for CMPSO. The results of the different algorithms are compared with those of RMOPSO using the Wilcoxon rank sum test with significance level \(\alpha = 0.05\).

Table 3 Parameters setting for all algorithms

4.2 Test problems

To verify the ability of the algorithm to deal with various problems, the three-objective problems DTLZ1, DTLZ2 and DTLZ3 are added in this section, as they are complex problems with local optima. The UF test set (UF1-UF7) (Zhang et al. 2009), with complicated Pareto sets, is adopted here as well.

4.3 Performance metric

In this section, the inverted generational distance (IGD) indicator is adopted, because it measures convergence and diversity performance simultaneously and is widely used in the MOP community. The main difference between the IGD and GD indicators is the reversed roles of the two sets: IGD computes, for each reference point sampled from the true PF before the run, the minimum distance to the Pareto set obtained by the algorithm. If the obtained set is not widely distributed, this distance is large for some reference points, resulting in a large IGD value. Therefore, a small IGD value is obtained only when the solutions are both close to the PF and widely distributed. The detailed description of the IGD indicator is given in Appendix 1.
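A sketch of IGD consistent with the description above; Appendix 1 gives the precise definition, and the averaging used here is one common variant:

```python
import math

def igd(reference, obtained):
    """Inverted generational distance: mean, over reference points
    sampled on the true PF before the run, of the Euclidean distance
    to the nearest obtained solution.  Gaps in the obtained set leave
    some reference points far from any solution, inflating the value."""
    return sum(min(math.dist(r, s) for s in obtained)
               for r in reference) / len(reference)
```

Compared with GD, the direction of the minimization is reversed, so an algorithm can only score well if its solutions both reach the PF and cover it.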

4.4 Experimental results

The experimental results for the IGD indicator are given in Table 4. The table reveals that the proposed RMOPSO obtains the best values on 7 out of the 15 test MOPs; NSGA-II and GDE3 each obtain the best result on 1 problem, while MOEA/D-DE and CMPSO each achieve the best results on 3 problems. Moreover, for the ZDT series, RMOPSO obtains the best result except on ZDT1 and ZDT2, and even there its values are comparable with the best results obtained by CMPSO. For the three-objective DTLZ series, RMOPSO obtains the best result on 2 out of 3 problems, both of which are complex. For the DTLZ3 problem, the result obtained by RMOPSO is much better than the others. For the DTLZ2 problem, RMOPSO does not obtain the best result, but it is still close to the best value obtained by CMPSO. In addition, RMOPSO has the best standard deviation (Std.), indicating good robustness.

Table 4 IGD performance of all algorithms on different problems

ZDT4, DTLZ1 and DTLZ3 are the complex problems used in the experiment to verify the algorithm's ability to jump out of local optima. For the ZDT4 problem, RMOPSO performs much better than the others, with a value of 0.0425; note that this value is the mean over the 30 independent runs, while the best single-run value is 0.0036, compared with about 0.05 for the best value obtained by any other algorithm. RMOPSO thus exhibits the strongest ability to jump out of local optima on ZDT4. For the DTLZ1 problem, the results for MOCLPSO, SMPSO and CMPSO, all particle-swarm-based algorithms, are 18.75, 0.65 and 0.0567, respectively, so RMOPSO obtains a much better result than these MOPSOs. For DTLZ3, none of the algorithms fully converges, but the result obtained by RMOPSO is the closest to the PF.

From the above results, RMOPSO is found to have a stronger ability to overcome premature convergence and exhibits the best performance among the compared MOPSOs.

For the UF series, RMOPSO obtains the best values on UF2 and UF5, MOEA/D-DE performs best on UF1, UF3 and UF7, CMPSO performs best on UF4, and NSGA-II performs best on UF6. Among the MOPSOs, RMOPSO performs best on UF2 and UF5 and is close to the best value (obtained by CMPSO) on UF4. On the remaining problems, MOEA/D-DE is clearly the best performer. Except for MOEA/D-DE, the compared algorithms use the Pareto dominance relation to judge the merit of solutions. Pareto-domination-based selection drives the whole population towards the PS or PF, but it has no direct control over the movement of each individual and no mechanism to distribute its computational effort over different regions of the PF or PS. In contrast, MOEA/D decomposes the MOP into a set of single-objective sub-problems, so the computational effort can be evenly distributed among them. Therefore, MOEA/D-DE outperforms the others on the UF series, whose Pareto sets are complicated.

Since no algorithm outperforms all others on all problems, the ranks of each algorithm on the different problems are summed to evaluate overall performance, as shown in Table 4. By this measure, the proposed RMOPSO ranks first among all algorithms.

4.5 Impact of parameter settings

In Sect. 4.1, the parameter settings for RMOPSO are \(w = 0.729\), \(c_{1} = 2.05\) and \(c_{2} = 2.05\). To investigate the impact of w and \( c_{i}\), the performance of RMOPSO with different values of these parameters is studied on unimodal objective functions (ZDT1 and DTLZ2) and multimodal objective functions (ZDT4 and DTLZ1).

The parameter settings used in SMPSO and MOCLPSO are adopted for comparison. The detailed settings are as follows:

Parameter 1: \(w = 0.729\), \(c_{1} = 2.05\) and \(c_{2} = 2.05\).

Parameter 2: \(w =\) random(0.1,0.5), \(c_{1} =\) random(1.5,2.5) and \(c_{2 }=\) random(1.5,2.5).

Parameter 3: w is linearly decreasing from 0.9 to 0.2, and \(c_{1} = c_{2} = 2.05\).

The two new parameter settings (parameters 2 and 3) are applied in RMOPSO, respectively, and the results are compared with those for the original setting (parameter 1). The IGD indicator is employed to evaluate performance.

Table 5 shows the results obtained with the different parameter settings. It reveals that parameter 2 yields better performance on the complex problems (e.g. ZDT4 and DTLZ1). This is because RMOPSO recombines the population every T generations and the population distribution changes after each recombination, so the impact of linearly changing parameters is not pronounced; the random variation used by SMPSO therefore achieves better results.

Table 5 The IGD performance of RMOPSO with different parameter settings

4.6 Discussion

In the pseudo-code of the PRS given in Fig. 3, \(p_{1}\) determines whether recombination is applied to a given dimension. If \(p_{1} = 1\), the PRS has no effect; if \(p_{1} = 0\), every dimension is replaced. To maximize the effectiveness of the PRS, \(p_{1}\) is set to 0.05. \(p_{2}\) governs the choice between pbest and gbest: if \(p_{2} = 1\), the gbest is never selected; if \(p_{2} = 0\), the pbest is never selected; and if \(p_{2} = 0.5\), pbest and gbest are selected with equal probability. Therefore, \(p_{2}\) balances the influence of the pbest and the gbest. The precise tuning of \(p_{1}\) and \(p_{2}\) requires further experimental study. The PRS is similar to the crossover operation used in GA, although a comparison with crossover operators such as SBX (replacing only the PRS with SBX in RMOPSO) found that the PRS outperforms SBX. The relevant investigation is left as future work.

5 Conclusion

In this paper, the population recombination strategy is proposed to avoid the premature convergence that occurs in MOPSOs. The PRS effectively uses all the variable information found by the MOPSO to construct a new population and makes the new population inherit the original particles' position information. Since no new variable information is created in the recombination process, a new mutation strategy is introduced to strengthen diversity. Experiments analyzing the effects of the proposed mutation strategy show that the mutation operation can improve diversity and convergence performance simultaneously.

The proposed algorithm is also compared with other state-of-the-art MOEAs and MOPSOs. According to the results, for the simple test problems the performance of the proposed algorithm is comparable with that of the other algorithms, while for the complex test problems it is better than, or even superior to, the others. It is also shown that the use of the PRS and the new mutation strategy strengthens the ability to jump out of local optima.