1 Introduction

Multi-objective optimization problems (MOPs) involve the simultaneous optimization of multiple (often conflicting) objectives. When the number of objectives exceeds three, MOPs are often termed many-objective optimization problems (MaOPs) (Ishibuchi et al. 2008). Since the first Pareto-based MOEA was reported to solve MOPs in the mid-1980s (Schaffer 1985), a variety of nature-inspired heuristic algorithms have been proposed and have attracted the interest of many researchers, such as evolutionary algorithms (EAs) (Deb et al. 2002; Zitzler et al. 2003; Wang et al. 2015), artificial immune algorithms (Chen et al. 2011; Lin and Chen 2013; Lin et al. 2015, 2016), and particle swarm optimization algorithms (Coello Coello and Lechuga 2002; Al et al. 2013; Zhan et al. 2013; Huaping and Yao 2013), showing promising performance in tackling MOPs. In particular, during the last decades, a number of multi-objective evolutionary algorithms (MOEAs) have been presented (Coello et al. 2006; Li and Zhang 2009; Li and Landasilva 2011; Li et al. 2012; Chen et al. 2014). The early MOEAs were designed based on the Pareto dominance relationship, as this relationship is a simple and straightforward way to reflect the relative quality of different solutions. For example, NSGA-II (Deb et al. 2002) and SPEA2 (Zitzler et al. 2001) are two well-known and frequently used MOEAs. Although they show effective performance on MOPs, they fail to solve MaOPs, as pointed out in Jaimes and Coello (2015). In fact, MaOPs pose a number of new challenges to Pareto-based MOEAs, which are often referred to as the curse of dimensionality in MaOPs (Powell 2011; Joshi and Deshpande 2014).

The main challenges brought by MaOPs can be categorized into three classes. The first is the inefficiency of selection operators. In most Pareto-based MOEAs, a Pareto-based sorting approach is first used to select the solutions with better convergence, and diversity indicators are then adopted to choose some less-crowded ones. However, in MaOPs, the number of non-dominated solutions increases exponentially with the number of objectives, so that solutions with poor convergence but good diversity are preferred. Such behaviour greatly weakens the selection pressure and slows down the overall search process. Over the past few years, many state-of-the-art MOEAs have been redesigned to enhance the underlying selection pressure, such as \(\epsilon \)-dominance (Zou et al. 2008; Hadka et al. 2012), adaptive \(\epsilon \)-ranking (Aguirre and Tanaka 2009), and fuzzy dominance (Wang and Wu 2007). The second is the high computational cost of MaOPs. If a continuous MOP with M objectives satisfies the regularity property, the dimensionality of its PF is \(M-1\); thus, the number of points needed to approximate the entire PF increases exponentially with M (Corne and Knowles 2009), which incurs considerable computational cost. To overcome this problem, a number of effective MOEAs have been presented, e.g. decomposition-based MOEAs (Zhang and Li 2007; Mashwani and Salhi 2014), indicator-based MOEAs (Bader and Zitzler 2011; Zitzler and Künzli 2015), MOEAs with decision makers (Parmee and Cvetkovic 2002; Giagkiozis and Fleming 2014), and objective reduction (OR) for MOEAs (Fonseca and Fleming 1995; Gal and Hanne 1999; Deb and Saxena 2005; Coello 2005; Jaimes et al. 2008; Brockhoff and Zitzler 2009; Jaimes et al. 2014). However, the difficulties caused by a large number of objectives still exist. For decomposition-based MOEAs, an unsuitable arrangement of weight vectors leads to poor performance on MaOPs with highly correlated objectives (Ishibuchi et al. 2009, 2011). Indicator-based MOEAs may yield unsatisfactory diversity on the PF (Hadka and Reed 2011). With respect to MOEAs with decision makers, different decision makers have different aims for different objectives, which requires multi-target search (Wang et al. 2013). Finally, OR methods for MOEAs have to identify, effectively and efficiently, the conflicting objectives that are the main components representing the true PF. However, most OR methods adopt only linear statistical tools to measure both linear and nonlinear correlation when identifying the conflicting objectives. In this way, the nonlinear correlation may be substantially weakened by the linear description and may mislead the objective reduction. The last challenge is the difficulty in visualizing the objective space of MaOPs, which in turn makes the task of decision making more difficult (Walker et al. 2012). Although some techniques, such as self-organizing maps (Obayashi and Sasaki 2003) and decision maps (Lotov et al. 2004), have been proposed to aid visualization, they require a large number of solutions.

For MaOPs, the difficulties are mainly induced by the large number of conflicting objectives. One natural remedy is to reduce the dimensionality of the objective space while retaining as much information about the MaOP as possible; the existing MOEAs can then be applied directly and solve MaOPs with redundant objectives more easily. Generally, different explanations of the conflicting objectives give rise to different understandings of OR. A conflict between two objectives means that an improvement in one objective deteriorates the other. The main idea of OR is to select a subset with the smallest number of objectives that does not change the structure of the original problem (Gal and Hanne 1999). The dataset used to analyse the conflicting objectives is constructed from the approximated PF found by the underlying MOEA. To a certain extent, the Pareto dominance relationship among the individuals can be used to represent the conflict and the redundancy among the objectives.

Roughly speaking, OR methods can be classified into two kinds. The first class is OR based on dominance relation preservation. This approach uses a measure to estimate the changes of the dominance relation (Gal and Hanne 1999; Brockhoff and Zitzler 2006, 2009). For instance, Brockhoff and Zitzler (2006) use an additional term \(\delta \) to measure the difference of the dominance relation between two subsets. However, this technique is only applicable to linear PFs with a fairly uniform distribution (Saxena et al. 2013). The second class is machine learning based OR. For example, principal component analysis (PCA) (Deb and Saxena 2005) and maximum variance unfolding (MVU) (Saxena and Deb 2007) are used to identify the main conflicting objectives and thereby find the correct lower-dimensional interactions of the objective space. Another machine learning technique can be seen as a process of feature selection. This method uses the Pearson correlation to estimate the conflict information among the objectives (Jaimes et al. 2008), targeting the objectives with negative correlation. A modification of the correlation matrix of the objectives is then employed to obtain the conflict degree among the objectives. With this conflict degree, a greedy approach sets up a q-neighbourhood for each objective; the most compact neighbourhood is selected, its central objective is retained, and the other \(q-1\) nearest objectives in this neighbourhood are deleted. Building on this feature selection technique, the same authors further proposed a scheme to partition the objective space into several non-overlapping subspaces to solve MaOPs (Jaimes et al. 2014). This scheme can be considered a new OR framework: it does not delete the redundant (low conflicting) objectives, but integrates all the objectives to cover the entire PF. Its subspaces are created in terms of the conflict information among the objectives, and the same correlation matrix of objectives as in Jaimes et al. (2008) is used to represent the conflict degree between each pair of objectives. A new subspace is always selected with a greedy approach, taking the most compact neighbourhood from the remaining objective set; therefore, the last subspace is the least compact one (combining the most conflicting objectives). However, the Pearson correlation used in Jaimes et al. (2008, 2014) only captures linear correlation, whereas in MaOPs the supporting objectives that can be enhanced simultaneously, namely the redundant objectives, are in most cases either linearly or nonlinearly correlated. Thus, when linear statistical tools (e.g. Pearson correlation) are used to measure both linear and nonlinear correlation, the nonlinear correlation may be weakened by the linear representation. Moreover, in most real-world MaOPs, the conflict relation among the objectives is more complicated and cannot be correctly reflected by the Pearson correlation.

In this paper, we introduce a novel approach to measure the conflict information among the objectives. The database used to analyse the new conflict information (NCI) is the optimal approximation found by the underlying MOEA. For each objective, we use NCI to compute its conflict contribution rate on the target problem, and we then sort the objectives by their conflict contribution rates: the larger an objective's conflict contribution rate, the higher its position in the sorting. If the conflict contribution rate of an objective is no less than a preset parameter, it is a main objective of the target problem. Accordingly, we select these main objectives and further extract several overlapped subspaces whose intersection is composed of these main objectives. Our NCI has two merits: (i) it is simple and straightforward, introducing no additional parameters while giving high accuracy; (ii) its accuracy is not significantly affected by the dataset obtained from the underlying MOEA. Moreover, we introduce a new version of OR in order to improve the diversity along the approximate PF. This new OR version is implemented by extracting the conflicting objectives using the objective sorting; that is, the proposed OR framework is based on objective space extraction, and our method is therefore abbreviated as OSEOR. We aim to extract several subspaces in such a way that the union of the searches in these subspaces represents the information of the original problem.

In order to evaluate the effectiveness of the proposed OSEOR method, we embed it into a well-known and frequently used MOEA (NSGA-II) and then compare it with the recently proposed OR method based on objective space partitioning (OSP) (Jaimes et al. 2014) and with the original NSGA-II. The benchmark problems include four irreducible problems (DTLZ1–DTLZ4) and a reducible problem (DTLZ5). The experimental results show that our method outperforms the OSP method in most cases. Moreover, both our method and OSP exceed NSGA-II on all the test problems considered in this study, except the biased problem DTLZ4. Compared with OSP, our OSEOR method achieves an approximation set with better convergence and diversity.

The rest of this paper is organized as follows. Section 2 presents the problem description and related definitions. Section 3 introduces the new conflict information and the objective sorting. Section 4 presents the proposed OR framework, while Sect. 5 gives the experimental comparison of our method with OSP and NSGA-II. Finally, Sect. 6 concludes this paper and outlines future work.

2 Problem description and definitions

2.1 Problem description

Without loss of generality, an MOP can be simply formulated as follows.

$$\mathrm{Min}\; \mathbf{F}(\mathbf{x}) = [f_1(\mathbf{x}), \ldots, f_M(\mathbf{x})]^{\mathrm{T}}, \quad \mathbf{x} \in \Omega$$
(1)

where \(\mathbf{x}\) is a decision variable vector from the decision space \(\Omega \subseteq \mathbb {R}^{V}\), and \(\mathbf{F}:\Omega \rightarrow \Phi \subseteq \mathbb {R}^{M}\) consists of M objective functions, mapping the decision space \(\Omega \) to the M-dimensional objective space \(\Phi \). If \(M\ge 4\), the problem is often called an MaOP.

2.2 Definitions

Definition 1

Pareto dominance relation: Let \(\mathbf{x}_1, \mathbf{x}_2 \in \Omega \). \(\mathbf{x}_1\) is said to dominate \(\mathbf{x}_2\), denoted by \(\mathrm{F}(\mathbf{x}_1) \prec \mathrm{F}(\mathbf{x}_2)\), if and only if \(f_i(\mathbf{x}_1) \le f_i(\mathbf{x}_2)\) for all \(i \in \{1,\ldots ,M\}\) and \(f_i(\mathbf{x}_1) < f_i(\mathbf{x}_2)\) for at least one i.
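For readers who prefer code, this dominance test translates directly into a few lines. The following is a minimal NumPy sketch (the function name is ours, introduced only for illustration):

```python
import numpy as np

def dominates(f1: np.ndarray, f2: np.ndarray) -> bool:
    """Return True if objective vector f1 Pareto-dominates f2 (minimization).

    Per Definition 1: f1 is no worse in every objective and strictly
    better in at least one.
    """
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))
```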

Definition 2

Pareto optimality: A solution \(\mathbf{x}^{*}\in \Omega \) is said to be Pareto-optimal in the decision space \(\Omega \) if there does not exist another solution \(\mathbf{x}\in \Omega \) such that \(\mathrm{F}(\mathbf{x}) \prec \mathrm{F}(\mathbf{x}^{*})\).

Definition 3

Pareto-optimal set: \(P_\mathrm{opt} =\{\mathbf{x}\in \Omega \mid \lnot \exists \,\mathbf{y}\in \Omega : \mathrm{F}(\mathbf{y}) \prec \mathrm{F}(\mathbf{x})\}\) is termed the Pareto-optimal set.

Definition 4

Pareto-optimal front: \(PF_\mathrm{opt} =\{\mathbf{z}=[f_1(\mathbf{x}),\ldots ,f_M(\mathbf{x})]^{\mathrm{T}} \mid \mathbf{x}\in P_\mathrm{opt}\}\) is called the Pareto-optimal front.

Definition 5

Pareto-optimal front approximation: Let \(PF_\mathrm{approx} \subset \Phi \) be a set of objective function vectors. \(PF_\mathrm{approx} \) is called an approximation set if no element of \(PF_\mathrm{approx} \) dominates or equals any other element of \(PF_\mathrm{approx} \).

Definition 6

Subspace: If the dimension of a space \(\psi \) is lower than the dimension of the space \(\Phi \) and \(\psi \subset \Phi \), \(\psi \) is said to be a subspace of \(\Phi \).

Definition 7

Overlapped subspaces: Two subspaces \(\psi _1 \subset \Phi \) and \(\psi _2 \subset \Phi \) are said to be overlapped if \(\psi _1 \cap \psi _2 \ne \emptyset \).

Definition 8

Subspace extraction: Suppose a space \(\Phi \) is decomposed into \(N_s\) overlapped subspaces \(\Psi =\{\psi _1 ,\ldots ,\psi _{N_s}\}\), where \(\psi _i \subset \Phi ,\;i=1,\ldots ,N_s \), and \(\psi _1 \cap \cdots \cap \psi _{N_s} \ne \emptyset \). Then \(\Psi \) is said to be a subspace extraction of \(\Phi \).

3 New conflict information and objective sorting

3.1 New conflict information

In this paper, we introduce a new measure of conflict information among the objectives, which is described as follows.

Given an MaOP with M objectives, suppose that there is an N-individual population

$$\mathbf{POP}=\{\mathbf{x}_1 ,\ldots ,\mathbf{x}_N \}, \quad \mathbf{x}_i \in \Omega ,\; i=1,\ldots ,N$$

with \(C_N^2 \) different pairs of individuals \((\mathbf{x}_k ,\mathbf{x}_l ),\; k,l=1,\ldots ,N,\;k\ne l\), in the decision space \(\Omega \). For any two objectives \(f_i ,f_j \in \Phi ,\; i,j=1,\ldots ,M\), if \(K\;(0\le K\le C_N^2)\) different pairs of individuals satisfy the following conditions:

$$[f_i(\mathbf{x}_k) > f_i(\mathbf{x}_l)] \wedge [f_j(\mathbf{x}_k) < f_j(\mathbf{x}_l)] \quad\text{or}\quad [f_i(\mathbf{x}_k) < f_i(\mathbf{x}_l)] \wedge [f_j(\mathbf{x}_k) > f_j(\mathbf{x}_l)]$$
(2)

the conflict information between the i-th objective \(f_i \) and the j-th objective \(f_j \) will be simply evaluated as

$$P_{i,j} = K \big/ C_N^2$$
(3)

Clearly, the NCI \(P_{i,j}\) between the i-th objective \(f_i \) and the j-th objective \(f_j\) has the following properties.

(1) Reflexivity: \(P_{i,j} =0\) for \(i=j\);

(2) Symmetry: \(P_{i,j} =P_{j,i} \).

Finally, a symmetric NCI matrix \(C=[P_{i,j} ]_{M\times M} \) with zero diagonal can be constructed. In particular, \(P_{i,j} =1\) implies that the two objectives \(f_i\) and \(f_j \) absolutely conflict with each other.
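The construction of the NCI matrix from Eqs. (2) and (3) is straightforward to implement. The following is a minimal NumPy sketch (our own illustrative code, not an official implementation; `objs` holds the objective values of the analysed population, one row per individual):

```python
import numpy as np

def nci_matrix(objs: np.ndarray) -> np.ndarray:
    """Build the M x M NCI matrix C = [P_ij] from Eqs. (2)-(3).

    objs: (N, M) array; row k is the objective vector of individual x_k.
    P_ij is the fraction of the C(N,2) individual pairs on which
    objectives f_i and f_j order the two individuals oppositely.
    """
    n, m = objs.shape
    pairs = n * (n - 1) // 2                      # C(N, 2)
    c = np.zeros((m, m))
    for k in range(n):
        for l in range(k + 1, n):
            d = objs[k] - objs[l]                 # sign of f_i(x_k) - f_i(x_l)
            # outer product of signs: a negative entry means the two
            # objectives rank x_k and x_l in opposite directions (Eq. 2)
            c += np.outer(np.sign(d), np.sign(d)) < 0
    return c / pairs                              # Eq. (3)
```

Note that ties contribute a sign of zero and are therefore not counted, the diagonal is automatically zero (reflexivity), and the matrix is symmetric by construction.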

In short, our NCI matrix can be used directly in its original form, while the conflict information presented in OSP (Jaimes et al. 2014) needs some modifications to the conflict information matrix. In OSP, if the original Pearson correlation matrix were used directly, highly conflicting objectives might be placed in a subspace with low conflict. To solve this problem, the correlation matrix is modified as follows: (i) find the maximum conflict value of each row i in the matrix; (ii) add this value to column i. This means that if objective \(f_i \) is in conflict with some objectives, it is treated as being in conflict with all the objectives. Therefore, our NCI method is simpler and more straightforward than that of OSP.

Fig. 1 Flowchart of the proposed OSEOR embedded in an MOEA

3.2 Objective sorting

The NCI estimates are used to compute the conflict contribution rate of each objective on the target problem, using the following formula.

$$\lambda _{i} = \sum _{j=1}^{M} P_{i,j} \Bigg/ \left( \sum _{i=1}^{M} \sum _{j=1}^{M} P_{i,j} \right).$$
(4)

where M is the number of objectives, \(f_i \) and \(f_j\) are the ith and jth objectives of the target problem, \(P_{i,j} \) is the NCI between \(f_i\) and \(f_j\), \(\sum _{i=1}^M \sum _{j=1}^M P_{i,j}\) is the sum of the NCI values over all objective pairs of the target problem, \(\sum _{j=1}^M P_{i,j}\) is the sum of the NCI values between the ith objective and each of the other objectives, and \(\lambda _i\; (0\le \lambda _i \le 1)\) is the conflict contribution rate of the ith objective on the target problem. Clearly, according to Eq. (4), we have

$$\sum _{i=1}^{M} \lambda _i = 1.$$
(5)

More importantly, based on Eq. (4), the objective with the largest \(\lambda _i \) is placed in the first position of the objective sorting, which indicates that it is the most important one among all the objectives. This process continues until all the objectives are sorted. Obviously, the objective in the last position is the least important one in terms of its conflict contribution rate on the target problem.
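The computation of Eq. (4) and the resulting objective sorting can be sketched as follows (illustrative code, under the assumption that at least one pair of objectives conflicts, so the denominator of Eq. (4) is positive):

```python
import numpy as np

def conflict_contribution_rates(c: np.ndarray) -> np.ndarray:
    """Eq. (4): lambda_i = (row sum of the NCI matrix) / (total sum).

    Assumes at least one pair of objectives conflicts (c.sum() > 0).
    """
    return c.sum(axis=1) / c.sum()

def objective_sorting(c: np.ndarray) -> np.ndarray:
    """Objective indices ordered by decreasing conflict contribution rate."""
    return np.argsort(-conflict_contribution_rates(c))
```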

4 Objective reduction based on subspace extraction

4.1 Basic idea

Our OSEOR uses several extracted subspaces, so that different subpopulations can focus their search on different extracted subspaces. It is similar to the framework of OSP proposed in Jaimes et al. (2014); the main differences are as follows: (i) our technique to measure the conflict relation between the objectives is different from that of OSP, and the subspaces are extracted with different approaches; (ii) the intersection of the extracted subspaces is not empty in our method; (iii) in our method, the combination of the parent population and the offspring population is partitioned according to the different subspaces. The flowchart of our new OR version is shown in Fig. 1.

Before describing the new OR version, note that, as observed in Jaimes et al. (2014), the conflict relation among the objectives is local rather than global. The conflict relation might therefore change continuously during the search, while local conflicts among the objectives persist in some cases. Furthermore, the conflict relation among the solutions in \(PF_\mathrm{opt} \) might differ from that observed in the current \(PF_\mathrm{approx} \); that is, the NCI values must be updated during the execution of the MOEA. Therefore, the proposed OR version (as shown in Fig. 1) repeats G cycles of a two-stage search process consisting of an integration phase and an extraction phase.

In the integration phase (Phase 1 in Fig. 1), an MOEA searches for \(G_\Phi \) generations in the original M-dimensional objective space \(\Phi =\{f_1 ,\ldots ,f_M \}\) to update POP, and we obtain an offspring population Q. The combination of the parent population POP and the offspring population Q, \(\hbox {R = POP} \cup \hbox {Q}\), is the database used to analyse the NCI among the objectives in Phase 2. If the stop criterion of the algorithm is reached, R is sorted and the top individuals, up to the population size, are exported to form the final \(PF_\mathrm{approx} \).

In the extraction phase (Phase 2 in Fig. 1), the database R is first used to estimate the conflict relation in the objective space \(\Phi =\{f_1 ,\ldots ,f_M \}\), as described in Sect. 3.1, and the NCI matrix is created. Next, according to the NCI matrix, the conflict contribution rate is obtained for each objective, and the objectives are sorted by their conflict contribution rates (this process is introduced in Sect. 3.2). The main objectives are then extracted by applying a preset threshold \(\alpha \) to the conflict contribution rates: if the conflict contribution rate of an objective is not less than \(\alpha \), the objective is extracted as a main objective (i.e. a highly conflicting objective); otherwise, it is regarded as a redundant objective (i.e. a low conflicting objective). The process of generating the subspaces is introduced in detail in Sect. 4.2. The extracted subspaces constitute a subspace extraction, represented by \(\Psi =\{\psi _1 ,\ldots ,\psi _{N_S } \}\). At the same time, the combination of POP and Q is ranked by non-dominated sorting (Deb et al. 2002) in each subspace. Then, according to the non-dominated sorting in each subspace, the individuals with the highest ranks on the non-dominated fronts are extracted into the corresponding subspace and evolve under the MOEA in this subspace. The number of extracted individuals in each subspace is \((\left| \hbox {POP} \right| +\left| \hbox {Q} \right| )/N_s \). The flowchart of the population extraction is shown in Fig. 2.

Fig. 2 Flowchart of the population extraction in Phase 2

The MOEA then searches concurrently for \(G_\Psi \) generations in the subspaces. The subspace extraction \(\Psi \) and the population extraction are updated after the integration phase in every cycle. The population sizes for the integration phase and for each extracted subspace in the extraction phase are \(\left| \hbox {POP} \right| \) and \((\left| \hbox {POP} \right| +\left| \hbox {Q} \right| )/N_s \), respectively. When the extraction phase finishes after \(G_\Psi \) generations, the best \(\left| \hbox {POP} \right| /N_s \) individuals from each subspace are selected, and all the selected individuals together form the new population POP for the integration phase of the next cycle. This process is repeated until the number of cycles reaches the maximum G (the stop criterion), and the output is \(PF_\mathrm{approx} \), selected from R using the non-dominated sorting approach. In this algorithm, the total number of generations is \(\mathrm{Gen}=G\times (G_\Phi +G_\Psi )\), and NSGA-II is adopted as the underlying MOEA for both the integration and extraction phases.
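The two-phase cycle described above can be summarized by the following skeleton, in which every step is an injected callable; this sketch fixes only the control flow of Fig. 1, not any particular MOEA, and all names are ours:

```python
def oseor(evolve_full, evolve_sub, extract_subspaces, partition, merge,
          pop, G, G_phi, G_psi):
    """Control flow of the OSEOR cycle sketched in Fig. 1.

    evolve_full(pop, gens)      -> (pop, offspring), full-space search
    evolve_sub(subpop, s, gens) -> subpop evolved within subspace s
    extract_subspaces(R)        -> overlapped subspaces (Sect. 4.2)
    partition(R, subspaces)     -> one subpopulation per subspace (Fig. 2)
    merge(subpops, size)        -> best size/N_s individuals per subspace
    """
    for _ in range(G):
        pop, off = evolve_full(pop, G_phi)        # Phase 1: integration
        r = pop + off                             # database R = POP u Q
        subspaces = extract_subspaces(r)          # Phase 2: extraction
        subpops = partition(r, subspaces)
        subpops = [evolve_sub(sp, s, G_psi)
                   for sp, s in zip(subpops, subspaces)]
        pop = merge(subpops, len(pop))            # POP for the next cycle
    return r  # non-dominated sorting of R yields the final PF_approx
```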

4.2 Subspace extraction using objective sorting

On the basis of the objective sorting, our proposed method extracts several overlapped subspaces, which compose a subspace extraction \(\Psi \). Throughout the paper, it is assumed that the dimension of the original problem is M, the number of subspaces is \(N_s \), and the selected MOEA can effectively deal with MOPs having m objectives \((m<M)\). Before the extraction phase, a set \(S_\mathrm{dynamic} \) is created to store the not-yet-extracted objectives. Initially, \(S_\mathrm{dynamic} =\{f_1 ,\ldots ,f_M \}\), where \(f_i (i=1,\ldots ,M)\) is the ith objective of the problem; this set is updated continually during the subspace extraction until it is empty. A set \(S_\mathrm{main} =\)Ø is also initialized to store the most conflicting (i.e. most important) objectives. One or more objectives may appear in more than one of the extracted subspaces. The procedure to extract subspaces is described as follows.

Step 1 Calculate the NCI value \(P_{i,j} \in [0,1]\) between each pair of objectives \((f_i ,f_j )\) using the combination of the current population POP and the offspring Q. An NCI matrix \(C=[P_{i,j}]_{M\times M}\) is then created.

Step 2 Calculate the conflict contribution rate \(\lambda _i \) for each objective \(f_i \;(i=1,\ldots ,M)\) on the target problem.

Step 3 Sort the objectives in the set \(S_\mathrm{dynamic} \) in descending order of \(\lambda _i \). Then \(S_\mathrm{dynamic} \) is updated to \(S_\mathrm{dynamic} =\{f_1^\lambda ,\ldots ,f_M^\lambda \}\), where objective \(f_i^\lambda \) occupies the ith place according to its conflict contribution rate. In \(S_\mathrm{dynamic} \), the top objectives whose conflict contribution rates are not less than \(\alpha \) are selected and added to the set \(S_\mathrm{main} \); \(S_\mathrm{dynamic} \) is then updated to contain the remaining objectives.

Step 4 Randomly select l objectives from \(S_\mathrm{dynamic}\) to form \(S_\mathrm{select} \), such that \(\left| {S_\mathrm{main}} \right| +l\le m\). The objectives from \(S_\mathrm{main} \) and \(S_\mathrm{select} \) then construct the first subspace \(\psi _1 \); this process is drawn in Fig. 3. Meanwhile, these l objectives are removed from \(S_\mathrm{dynamic} \), and \(S_\mathrm{select} =\)Ø. If the updated \(S_\mathrm{dynamic} \) is not empty, the same method is used to construct a new \(S_\mathrm{select} \), and the second subspace is \(\psi _2 =S_\mathrm{main} \cup S_\mathrm{select} \). The same procedure is run to create the following subspaces until \(S_\mathrm{dynamic} \) is empty.

Step 5 Finally, assuming that the number of subspaces is \(N_s \), a set \(\Psi =\{\psi _1 ,\ldots ,\psi _{N_s } \}\) of \(N_s \) subspaces is created, and \(\Psi \) is a subspace extraction (see Definition 8).
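Steps 1–5 can be condensed into the following sketch, reusing the `nci_matrix` and `conflict_contribution_rates` helpers sketched in Sect. 3 (illustrative code; it assumes that fewer than m objectives exceed the threshold \(\alpha \)):

```python
import random
import numpy as np

def extract_subspaces(objs: np.ndarray, alpha: float, m: int) -> list:
    """Steps 1-5 of Sect. 4.2 (sketch; helper and variable names are ours).

    objs : (N, M) array of objective values of R = POP u Q.
    alpha: threshold on the conflict contribution rate.
    m    : largest objective count the underlying MOEA handles well.
    """
    c = nci_matrix(objs)                                   # Step 1
    lam = conflict_contribution_rates(c)                   # Step 2
    order = list(np.argsort(-lam))                         # Step 3
    s_main = [i for i in order if lam[i] >= alpha]
    s_dynamic = [i for i in order if lam[i] < alpha]
    assert len(s_main) < m, "S_main must leave room for selected objectives"
    subspaces = []
    while s_dynamic:                                       # Step 4
        l = min(m - len(s_main), len(s_dynamic))
        s_select = random.sample(s_dynamic, l)
        subspaces.append(s_main + s_select)                # psi = S_main u S_select
        s_dynamic = [i for i in s_dynamic if i not in s_select]
    return subspaces                                       # Step 5: Psi
```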

Fig. 3 Flowchart of generating the first subspace using OSEOR

In the subspace extraction, it is ensured that the conflict contributions of the selected subspaces on the MOP are substantially significant. Importantly, the dimensionality of the MOP is reduced while the less important objectives are still retained. In other OR algorithms, the less important objectives are all omitted and, as a result, some information about the MOP may be lost. In the OSP algorithm (Jaimes et al. 2014), a greedy approach is adopted to partition the objective space into several subspaces based on the conflict information (calculated by the Pearson correlation): the first selected subspace contains the objectives with the lowest conflict, while the last one contains those with the highest conflict. Hence, OSP also retains all the less important objectives. However, since the objectives with the lowest conflicts are kept in the same subspace, the search effect in this subspace is limited.

In contrast, our approach allows the same objectives to be part of several highly conflicting subspaces. In this way, our method searches effectively in each subspace, which contributes most to approximating the trade-off solutions of \(PF_\mathrm{opt} \).

5 Experimental analysis

In this paper, OSEOR uses NSGA-II as the underlying MOEA for the integration and extraction phases. OSEOR with NSGA-II is then compared with the recent OSP with NSGA-II (Jaimes et al. 2014) and with the original NSGA-II on five continuous scalable test problems, DTLZ1–DTLZ5. In the following subsections, our method is named NSGA-II-OSEOR, while OSP (Jaimes et al. 2014) with NSGA-II is named NSGA-II-OSP.

5.1 Test instances

Firstly, we choose four irreducible scalable test problems, DTLZ1, DTLZ2, DTLZ3, and DTLZ4, from the DTLZ test function family (Deb et al. 2006). They have the following properties: (i) they can be scaled to any number of decision variables and objectives; (ii) the exact shape and location of the resulting \(PF_\mathrm{opt} \) are known; and (iii) the difficulty of converging to the true \(PF_\mathrm{opt} \) and of maintaining a widely distributed set of solutions can be controlled. These three properties are the reasons we choose them for testing our algorithm.

In addition, we choose the reducible problem DTLZ5(I, M) (Saxena 2006) for performance evaluation. DTLZ5(I, M) is an M-objective problem with I conflicting objectives. Its objectives can be categorized into two subsets: the low conflicting subset \(F_R =\{f_1 ,\ldots ,f_{M-I+1} \}\) and the main subset \(F_N =\{f_{M-(I-2)} ,f_{M-(I-3)}, \ldots ,f_M \}\) composed of the last \(I-1\) objectives. The \(PF_\mathrm{opt} \) can be formed by only one objective from \(F_R \) together with all objectives in \(F_N \). There are conflicts among all objectives in \(F_N \), but no conflict among the objectives in \(F_R \); nevertheless, conflicts always exist between the elements of \(F_R \) and those of \(F_N \).

5.2 Parameter settings

In the following subsections, all the algorithms are repeated for 30 independent runs and stop after \(\mathrm{Gen}=200\) generations. SBX crossover is applied with probability 1.0 and \(\eta = 15\), while polynomial mutation is performed with probability 0.1 and \(\eta = 15\). The population size is \(\left| \hbox {POP} \right| =200\). For the NSGA-II-OSEOR and NSGA-II-OSP algorithms, \(G=10\), \(G_\Phi =30\% \times (\mathrm{Gen}/G)\), and \(G_\Psi =70\% \times (\mathrm{Gen}/G)\), as introduced in Sect. 4.1. The numbers of objectives are set to \(M=\{5, 10\}\) on DTLZ1–DTLZ4, and to \(M=\{4, 6, 8, 10\}\) with \(I=M/2\) on DTLZ5(I, M). The numbers of decision variables are set to \(V=M+9\) on DTLZ2–DTLZ4 and DTLZ5(I, M), and to \(V=M+4\) on DTLZ1. For all the test problems, the parameter \(\alpha \) is set to 0.1, and the number of extracted subspaces is set to \(N_{s}= 2\), except that \(N_{s}=3\) for \(M=10\).
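For concreteness, these settings can be recorded as a plain Python configuration (the dictionary layout is ours; the values transcribe the text above):

```python
GEN = 200                              # total generations: Gen = G * (G_phi + G_psi)
SETTINGS = {
    "independent_runs": 30,
    "pop_size": 200,                   # |POP|
    "sbx_crossover": {"prob": 1.0, "eta": 15},
    "polynomial_mutation": {"prob": 0.1, "eta": 15},
    "G": 10,                           # integration/extraction cycles
    "G_phi": int(0.30 * GEN / 10),     # 6 generations per cycle, full space
    "G_psi": int(0.70 * GEN / 10),     # 14 generations per cycle, subspaces
    "alpha": 0.1,                      # main-objective threshold
    "N_s": {"default": 2, "M=10": 3},  # number of extracted subspaces
}
```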

5.3 Quality indicators

Since 3D plots cannot be used to interpret the results of MaOPs, we rely on the results obtained with quality indicators.

In order to directly test the convergence of the algorithms, the additive \(\varepsilon \)-indicator (Zitzler et al. 2003) is adopted. Given two non-dominated sets A and B, this indicator is defined as \(I_{\varepsilon +} (A,B)=\inf _{\varepsilon \in \mathbb {R}} \{\forall \mathbf{z}_2 \in B,\;\exists \mathbf{z}_1 \in A:\mathbf{z}_1 \prec _{\varepsilon +} \mathbf{z}_2 \}\), where \(\mathbf{z}_1 \prec _{\varepsilon +} \mathbf{z}_2 \) if \(z_1 ^{i}\le \varepsilon +z_2 ^{i}\) for all i and a given \(\varepsilon \). Since in general \(I_{\varepsilon +} (A,B)\ne I_{\varepsilon +} (B,A)\), both have to be computed. A smaller value of \(I_{\varepsilon +} (A,B)\) together with a larger value of \(I_{\varepsilon +} (B,A)\) indicates the better performance of A over B.
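A compact way to compute this indicator for two sets of objective vectors is sketched below (our own illustrative NumPy code for the standard additive \(\varepsilon \)-indicator):

```python
import numpy as np

def additive_epsilon(a: np.ndarray, b: np.ndarray) -> float:
    """Additive epsilon indicator I_eps+(A, B) for minimization.

    a, b: (|A|, M) and (|B|, M) arrays of objective vectors. Returns the
    smallest eps such that every z2 in B is eps-dominated by some z1 in A:
    max over z2 of (min over z1 of (max over objectives of z1 - z2)).
    """
    diffs = a[:, None, :] - b[None, :, :]     # diffs[i, j, k] = a[i,k] - b[j,k]
    return float(diffs.max(axis=2).min(axis=0).max())
```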

Additionally, to evaluate the diversity of the algorithms, the spacing indicator (SS) (Schott 1995) is used. The SS-indicator is defined as \(SS(A)=\sqrt{\sum _{i=1}^{|A|} {(d^{*}-d_i )^{2}/(|A|-1)} }\), where A is an approximation set, \(d_i \) is the Euclidean distance from the ith element to its nearest element in A, and \(d^{*}\) is the average of all the \(d_i\). A smaller SS value means better diversity for A. In particular, \(SS(A)=0\) represents the best diversity, with all the solutions distributed uniformly.
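A minimal sketch of this indicator, following the definition above (our own code; it assumes \(|A|>1\)):

```python
import numpy as np

def spacing(a: np.ndarray) -> float:
    """Spacing indicator SS(A); smaller means a more uniform spread.

    a: (|A|, M) array of an approximation set. d_i is the Euclidean
    distance from the i-th point to its nearest neighbour in A, and
    d* is the mean of the d_i.
    """
    dist = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)            # exclude self-distance
    d = dist.min(axis=1)
    return float(np.sqrt(((d.mean() - d) ** 2).sum() / (len(a) - 1)))
```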

Moreover, in order to assess convergence and diversity together, the hypervolume indicator (HV) (Auger et al. 2009) is employed. For DTLZ1–DTLZ4, the objective value of the reference point in each dimension is set to 22 (Auger et al. 2009). For DTLZ5(I, M), the reference point is set to \(\mathbf{z}^{\mathrm{ref }}= 2^{\mathrm{M}}\). The larger the HV value, the better the algorithm's overall performance.
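Exact hypervolume computation is expensive in high dimensions; for illustration only, a generic Monte Carlo approximation can be sketched as follows (this is not the exact method of Auger et al. 2009, and all names are ours):

```python
import numpy as np

def hypervolume_mc(a: np.ndarray, ref: np.ndarray,
                   samples: int = 100_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the hypervolume of A w.r.t. reference
    point ref (minimization): sample uniformly in the box [min(A), ref]
    and count the fraction of points dominated by at least one member of A.
    """
    rng = np.random.default_rng(seed)
    lo = a.min(axis=0)                   # box enclosing the dominated region
    pts = rng.uniform(lo, ref, size=(samples, a.shape[1]))
    dominated = np.zeros(samples, dtype=bool)
    for z in a:                          # point counted if some z dominates it
        dominated |= (z <= pts).all(axis=1)
    return float(dominated.mean() * np.prod(ref - lo))
```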

Fig. 4 Box plots of the SS values for NSGA-II-OSEOR, NSGA-II-OSP, and the original NSGA-II on DTLZ1–DTLZ4. On the horizontal axis, "1" indicates the original NSGA-II, "2" NSGA-II-OSP, and "3" NSGA-II-OSEOR

Fig. 5 Box plots of the \({I}_{\varepsilon +}\) values for NSGA-II-OSEOR, NSGA-II-OSP, and the original NSGA-II on DTLZ1 with \(M=5\) and \(M=10\). In the first column, "1" is \({I}_{\varepsilon +}\)(OSE, NSGA-II) and "2" is \({I}_{\varepsilon +}\)(NSGA-II, OSE); in the second column, "1" is \({I}_{\varepsilon +}\)(OSE, OSP) and "2" is \({I}_{\varepsilon +}\)(OSP, OSE); in the third column, "1" is \({I}_{\varepsilon +}\)(OSP, NSGA-II) and "2" is \({I}_{\varepsilon +}\)(NSGA-II, OSP)

Since the analytical forms of the true \(PF_\mathrm{opt} \) of DTLZ1–DTLZ4 are known a priori, IGD (the average distance from the true \(PF_\mathrm{opt} \) to the obtained \(PF_\mathrm{approx} \)) (Czyżak and Jaszkiewicz 1998) is also used to assess both convergence and diversity on DTLZ1–DTLZ4. For a fair comparison, we randomly generate a large number of reference points (100,000 samples) for all benchmark problems. The smaller the IGD value, the better the algorithm's performance.
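IGD reduces to a nearest-neighbour average and can be sketched in a few lines (our own code, using SciPy's pairwise distances):

```python
import numpy as np
from scipy.spatial.distance import cdist

def igd(true_front: np.ndarray, approx: np.ndarray) -> float:
    """Inverted generational distance: the average Euclidean distance from
    each reference point on the true PF_opt to its nearest neighbour in
    the obtained PF_approx. Smaller is better.
    """
    return float(cdist(true_front, approx).min(axis=1).mean())
```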

Fig. 6 Box plots of the \({I}_{\varepsilon +}\) values for NSGA-II-OSEOR, NSGA-II-OSP, and the original NSGA-II on DTLZ2 with \(M=5\) and \(M=10\); the column legend is the same as in Fig. 5

Fig. 7 Box plots of the \({I}_{\varepsilon +}\) values for NSGA-II-OSEOR, NSGA-II-OSP, and the original NSGA-II on DTLZ3 with \(M=5\) and \(M=10\); the column legend is the same as in Fig. 5

Fig. 8 Box plots of the \({I}_{\varepsilon +}\) values for NSGA-II-OSEOR, NSGA-II-OSP, and the original NSGA-II on DTLZ4 with \(M=5\) and \(M=10\); the column legend is the same as in Fig. 5

Table 1 IGD and HV values of NSGA-II-OSEOR, NSGA-II-OSP, and the original NSGA-II on DTLZ1–DTLZ4

5.4 Experimental results on quality indicators

5.4.1 Performance comparison on irreducible problems

There are no objectives that can be ignored globally in the irreducible problems DTLZ1–DTLZ4. To show the behaviour of our approach clearly, NSGA-II-OSEOR is compared to NSGA-II-OSP that integrates all the objectives (including those with low conflicts) in order to cover the entire \(PF_\mathrm{opt} \), and the original NSGA-II on DTLZ1–DTLZ4. The detailed comparison results on irreducible problems are presented as follows.

The comparison results of NSGA-II-OSEOR with the other two algorithms in terms of the SS, \({I}_{\varepsilon +}\), IGD, and HV values are presented in Figs. 4, 5, 6, 7 and 8 and in Table 1, which report the mean values obtained from 30 independent runs. Box plots of the SS values (Fig. 4) and of the \({I}_{\varepsilon +}\) values (Figs. 5, 6, 7, 8) are given for the continuous irreducible problems DTLZ1–DTLZ4 with \(M=5\) and \(M=10\) objectives. In Fig. 4, "1" indicates the original NSGA-II, "2" NSGA-II-OSP, and "3" NSGA-II-OSEOR, as marked on the horizontal axis. In Figs. 5, 6, 7 and 8, since \(I_{\varepsilon +} (A,B)\ne I_{\varepsilon +} (B,A)\), the \({I}_{\varepsilon +}\) values of NSGA-II-OSEOR, NSGA-II-OSP, and the original NSGA-II are shown in separate figures for the different problems. These results are interpreted as follows: in Fig. 5, for example, "1: \({I}_{\varepsilon +}\)(OSE, NSGA-II)" indicates the \({I}_{\varepsilon +}\) values from the \(PF_\mathrm{approx} \) of NSGA-II-OSEOR to the \(PF_\mathrm{approx} \) of NSGA-II, and the results correspond to position "1" on the horizontal axis; the same interpretation applies to the \({I}_{\varepsilon +}\) values of the other algorithm pairs. Table 1 reports the IGD and HV values of the algorithms, where the best metric values are highlighted in bold face. Moreover, to provide statistical quantification of both IGD and HV, the Wilcoxon rank sum test was conducted to assess the statistical significance of the differences between the results of NSGA-II-OSEOR and those of the other algorithms at a significance level of 0.05. The statistical results are also shown in Tables 1 and 2, following the metric values of each competing MOEA other than NSGA-II-OSEOR. The symbols "+", "\(\approx \)", and "−" denote that NSGA-II-OSEOR performs statistically better than, similarly to, and worse than the compared algorithm, respectively. From these empirical results, it is clear that NSGA-II-OSEOR is the best optimizer, as it wins most of the comparisons with the competing MOEAs. In the following paragraphs, we discuss these results instance by instance.

The \(PF_\mathrm{opt} \) of DTLZ1 is a linear hyper-plane on which the objective values of a Pareto-optimal solution \(\mathbf{x}^{*}\) satisfy \(\sum _{i=1}^M {f_i (\mathbf{x}^{*})=0.5} \). The presence of local optima in the search space makes it difficult to converge to the global \(PF_\mathrm{opt} \). In terms of the SS-indicator in Fig. 4, the diversity of the final solutions obtained by NSGA-II-OSEOR is better than that of NSGA-II-OSP on the five-objective DTLZ1 instance, whereas NSGA-II-OSP shows better diversity than NSGA-II-OSEOR on the ten-objective instance, and both of them improve on the diversity of NSGA-II. From the \({I}_{\varepsilon +}\)-indicator results in Fig. 5, NSGA-II-OSEOR gives the best convergence performance on both the five- and ten-objective DTLZ1 problems, and the convergence of NSGA-II-OSP is better than that of NSGA-II. Table 1 also indicates that NSGA-II-OSEOR obtains the best IGD and HV results on the DTLZ1 problems, while NSGA-II performs worst. The SS- and \({I}_{\varepsilon +}\)-indicator results are consistent with the IGD and HV results. For example, on the ten-objective DTLZ1 instance, although the SS values of NSGA-II-OSEOR are worse than those of NSGA-II-OSP, the IGD and HV values of NSGA-II-OSEOR are better: the SS values of NSGA-II-OSP are only slightly better than those of NSGA-II-OSEOR, whereas its \({I}_{\varepsilon +}\) values are much worse.

DTLZ2 is a relatively easy test problem with a spherical \(PF_\mathrm{opt} \), on which the objective values of a Pareto-optimal solution \(\mathbf{x}^{*}\) satisfy \(\sum _{i=1}^M f_i (\mathbf{x}^{*})^2=1\). Figure 4 shows the SS-indicator performance on the five- and ten-objective DTLZ2 problems. According to the results in all DTLZ2 test cases, NSGA-II-OSEOR shows the best diversity among the three algorithms, and the diversity of NSGA-II-OSP is slightly worse than that of NSGA-II. Figure 6 plots the pairwise convergence comparisons (\({I}_{\varepsilon +}\)-indicator). From these results, NSGA-II obtains the best convergence performance on the five-objective DTLZ2 problem, whereas NSGA-II-OSEOR achieves better convergence than NSGA-II-OSP. Interestingly, on the ten-objective DTLZ2 problem, the convergence of NSGA-II-OSEOR is the best, with NSGA-II-OSP in second place. The IGD and HV values in Table 1 show comparisons similar to those on DTLZ1: NSGA-II-OSEOR achieves the best convergence and diversity on DTLZ2 in all cases, while NSGA-II-OSP performs worse than NSGA-II on the five-objective problem but better on the ten-objective one. On the five-objective DTLZ2 problem, although the \({I}_{\varepsilon +}\) value of NSGA-II is better than that of NSGA-II-OSEOR, the IGD and HV values of NSGA-II-OSEOR are better; the reason is that the SS value of NSGA-II-OSEOR is much better than that of NSGA-II, which again demonstrates that the joint analysis of the SS- and \({I}_{\varepsilon +}\)-indicators agrees with the analysis by the IGD- or HV-indicator.

Fig. 9 Box plots of the SS values for NSGA-II-OSEOR, NSGA-II-OSP, and the original NSGA-II on DTLZ5\((I=M/2, M)\)

Fig. 10 Box plots of the \({I}_{\varepsilon +}\) values for NSGA-II-OSEOR, NSGA-II-OSP, and the original NSGA-II on DTLZ5\((I=M/2, M)\); the column legend is the same as in Fig. 5

DTLZ3 has the same \(PF_\mathrm{opt} \) as DTLZ2 but is designed to challenge different capacities of an algorithm. Based on DTLZ2, DTLZ3 presents a huge number of local fronts parallel to the global \(PF_\mathrm{opt} \), which poses a stiff challenge for MOEAs to converge to the global \(PF_\mathrm{opt} \). As shown in Fig. 4, the SS-indicator value of NSGA-II-OSEOR is worse than that of NSGA-II-OSP in the five-objective case, and NSGA-II shows the worst SS value. In the ten-objective case, NSGA-II-OSEOR presents the best diversity in terms of the SS-indicator, and NSGA-II still obtains the worst. Figure 7 shows the convergence performance (\({I}_{\varepsilon +}\)-indicator) of the compared algorithms. In the five-objective case, both NSGA-II-OSEOR and NSGA-II-OSP improve on the convergence of NSGA-II, although NSGA-II-OSEOR has slightly more difficulty converging to the \(PF_\mathrm{opt} \) than NSGA-II-OSP. In the ten-objective case, similarly to the ten-objective DTLZ2 problem, the convergence of NSGA-II-OSEOR ranks first and that of NSGA-II-OSP second, while NSGA-II obtains the worst convergence. The IGD- and HV-indicator results are presented in Table 1. Unlike the results on DTLZ1 and DTLZ2, in the ten-objective DTLZ3 case, the IGD value of NSGA-II-OSEOR is the worst and NSGA-II-OSP performs best; nevertheless, NSGA-II-OSEOR presents the best HV result in this case, with NSGA-II-OSP in second place. In the five-objective case, NSGA-II-OSEOR still achieves the best IGD and HV performance, with NSGA-II-OSP second.

Table 2 HV value of NSGA-II-OSEOR and NSGA-II-OSP and the original NSGA-II on DTLZ5\((I=M/2,M)\)

DTLZ4 also has the same geometrical shape as DTLZ2; however, in order to challenge an algorithm's ability to maintain diversity in the objective space, DTLZ4 introduces a biased density of solutions along the \(PF_\mathrm{opt} \). From the SS-indicator results in Fig. 4, NSGA-II-OSEOR always maintains the best diversity among the three algorithms in the five- and ten-objective cases, while NSGA-II-OSP only struggles to achieve better diversity than NSGA-II. To investigate the convergence of the proposed algorithm, the \({I}_{\varepsilon +}\)-indicator results are presented in Fig. 8. It can be observed that both NSGA-II-OSEOR and NSGA-II-OSP are outperformed by NSGA-II in the five- and ten-objective cases; NSGA-II-OSEOR outperforms NSGA-II-OSP in the five-objective case but is outperformed by it in the ten-objective case. Table 1 presents the IGD- and HV-indicator results: on both the five- and ten-objective DTLZ4 problems, neither NSGA-II-OSEOR nor NSGA-II-OSP works well with regard to these two indicators.

5.4.2 Performance comparison on reducible problems

After demonstrating the performance of NSGA-II-OSEOR on irreducible problems, in this subsection its performance is further analysed on the reducible problem DTLZ5(I, M) with \(M=\{4, 6, 8, 10\}\) and \(I=M/2\). We choose \(I=M/2\) because it is the maximum value for which NSGA-II-OSP can assign all the conflicting objectives to one subspace.

In this paper, the dimensionality analysis is carried out using the final population obtained by NSGA-II, along the lines suggested in Jaimes et al. (2014). Figure 9 shows the diversity performance (SS-indicator) for all numbers of objectives on DTLZ5(I, M). By inspecting the SS values, it is observed that NSGA-II-OSEOR is the best optimizer in terms of maintaining diversity, and NSGA-II-OSP ranks second on this indicator. Figure 10 presents the convergence performance (\({I}_{\varepsilon +}\)-indicator) for all the reducible test problems. In the four-objective case, both NSGA-II-OSEOR and NSGA-II-OSP perform worse than NSGA-II on this indicator, although NSGA-II-OSEOR works slightly better than NSGA-II-OSP. When the number of objectives is larger than four, the convergence of NSGA-II is significantly improved by both NSGA-II-OSEOR and NSGA-II-OSP, and NSGA-II-OSEOR always outperforms the other two algorithms in all high-dimensional cases. Table 2 presents the overall performance (HV-indicator) of the three algorithms. Regarding this indicator, NSGA-II-OSEOR obtains the best HV values on DTLZ5 for all numbers of objectives, and NSGA-II gives the worst performance in most cases (it outperforms NSGA-II-OSP only in the four-objective case).

5.5 Findings and discussions

According to the no free lunch theorem (Wolpert and Macready 1997), each algorithm has its unique advantages on one class of MOPs. Therefore, it is more persuasive to provide a comprehensive analysis according to the different kinds of test MOPs.

For benchmark problems with multimodal characteristics, there are many local fronts in the objective space. The main power of NSGA-II-OSEOR in handling multimodality comes from its process of reducing the objective space based on objective subspace extraction, as the subspace extraction strategy allows overlapping between the selected subspaces and only selects the subspaces with the highest contribution to the target problem. In the extraction phase, the evolutionary population is divided and distributed into different subspaces, so that diversity is always maintained, which prevents the entire population from being trapped in local optima. Regarding benchmark problems with bias, such as DTLZ2 and DTLZ4, the diversity improvement mechanism of NSGA-II-OSEOR still yields the best overall performance on DTLZ2 with five and ten objectives; however, because DTLZ4 is modified from DTLZ2 and the diversity of NSGA-II-OSEOR on DTLZ4 improves only slightly on that of NSGA-II, its convergence ability is much worse than that of NSGA-II, and the overall performance of NSGA-II-OSEOR on DTLZ4 is therefore worse than that of NSGA-II. It is reasonable to conclude that NSGA-II-OSEOR is able to maintain good diversity on irreducible problems, but does not provide powerful convergence on biased problems.

Table 3 summarizes the statistical test results from Table 1. It compares the proposed NSGA-II-OSEOR with each competing MOEA in pairs and counts the number of times that NSGA-II-OSEOR is better than (+), similar to (\(\approx \)), or worse than (−) the compared MOEA. From the results in Table 3, NSGA-II-OSEOR shows clear advantages over the original NSGA-II and NSGA-II-OSP. These results are consistent with the above analysis.

Table 3 Summary of the statistical test results from Table 1

With respect to the reducible benchmark problems DTLZ5(I, M), our OSEOR strategy maintains good diversity in all cases. Meanwhile, it also keeps good convergence in most cases, except for the lower-dimensional problem DTLZ5\((I=2, M=4)\). More importantly, because of its ability to maintain good diversity, NSGA-II-OSEOR still has the best overall performance in this case. Hence, NSGA-II-OSEOR indeed improves the performance of the original NSGA-II on the reducible test problems, and it outperforms NSGA-II-OSP in all cases.

Table 4 summarizes the statistical test results from Table 2. These results also confirm the advantages of NSGA-II-OSEOR over the compared algorithms in solving the reducible problems.

Table 4 Summary of the statistical test results from Table 2

6 Conclusion

Due to the loss of selection pressure in fitness evaluation and the ineffectiveness of evolutionary operators in a large objective space, MOEAs perform poorly on MaOPs. In this paper, in order to solve MaOPs effectively, we introduce NSGA-II-OSEOR with two main stages: (1) subspace extraction based on the NCI, in which the extracted subspaces overlap with each other and have the highest contribution to the target problem; and (2) the division of the population into different subspaces with different contributions. This strategy improves the diversity of the population, while the population update ensures the convergence ability. The proposed algorithm is embedded into NSGA-II and compared with NSGA-II-OSP and the original NSGA-II.

Four representative types of irreducible benchmark problems, DTLZ1–DTLZ4, and the most typical reducible problem, DTLZ5(I, M), are used to challenge the performance of our scheme. Based on the experimental results and the analysis of the quality indicators SS, \({I}_{\varepsilon +}\), IGD, and HV, NSGA-II-OSEOR shows better performance in both convergence and diversity. In future work, we would like to study an adaptive strategy to control the size and the number of subspaces. The performance of our OR strategy will also be further investigated by embedding it into other MOEAs and by solving various kinds of MOPs and practical problems.