1 Introduction

In the past decades, metaheuristic algorithms have been favored by many researchers and successfully applied to real-world optimization problems in various fields due to their strong search capability, low computational complexity, and strong generalization capability. Researchers have made great efforts to find reasonable and effective search operators for single-objective optimization problems (SOPs), to improve the search capability of metaheuristic algorithms, and to balance their exploration and exploitation, achieving satisfactory results. Single-objective optimization algorithms have successfully solved problems such as cluster analysis (Cui et al. 2022), forest fire rescue (Chen et al. 2022), and crack identification (Benaissa et al. 2021). However, many real-world problems are composed of multiple conflicting objective functions; these are called multi-objective optimization problems (MOPs). Since MOPs are widespread in the real world and difficult to solve efficiently, algorithms for dealing with them have gradually become a popular research topic (Zeng et al. 2021). Taking minimization as an example, the mathematical definition of an MOP is presented in Eq. (1).

$$\begin{gathered} {\text{minimize }}F({\mathbf{x}}) = [f_{1} ({\mathbf{x}}),f_{2} ({\mathbf{x}}), \cdots ,f_{M} ({\mathbf{x}})] \hfill \\ {\text{subject to }}g_{i} ({\mathbf{x}}) \le 0, \, i = 1,2, \cdots ,l \hfill \\ \, h_{j} ({\mathbf{x}}) = 0, \, j = 1,2, \cdots ,c \hfill \\ {\text{with }}lb \le x_{k} \le ub,k = 1,2, \cdots ,d. \hfill \\ \end{gathered}$$
(1)

where \({\mathbf{x}} = [x_{1} ,x_{2} , \cdots ,x_{d} ]^{{\text{T}}}\) is the vector of decision variables, \(d\) is the number of decision variables, \(f_{i} :{\mathbb{R}}^{d} \to {\mathbb{R}},i = 1,2, \cdots ,M\) are the objective functions, \(M\) is the number of objectives, \({\mathbb{R}}^{d}\) is the d-dimensional solution space, \(g_{i} :{\mathbb{R}}^{d} \to {\mathbb{R}},i = 1,2, \cdots ,l\) are the inequality constraint functions, \(l\) is the number of inequality constraints, \(h_{j} :{\mathbb{R}}^{d} \to {\mathbb{R}},j = 1,2, \cdots ,c\) are the equality constraint functions, \(c\) is the number of equality constraints, \(lb\) is the lower bound of the decision variables, and \(ub\) is the upper bound.

Unlike SOPs, MOPs do not have a single globally optimal solution; instead, they have a set of trade-off solutions, each of which may represent the optimum. In order to compare solutions during the iteration process and to provide search agents with environmental selection pressure, the concept of Pareto dominance is introduced (Coello 2009).

Definition 1

(Pareto dominance) Given two solutions \({\mathbf{x}},{\mathbf{y}} \in {\mathbb{R}}^{d}\), if \(\forall i \in \{ 1,2, \cdots ,M\}\), \(f_{i} ({\mathbf{x}}) \le f_{i} ({\mathbf{y}})\), and \(\exists i \in \{ 1,2, \cdots ,M\}\), such that \(f_{i} ({\mathbf{x}}) < f_{i} ({\mathbf{y}})\), then solution \({\mathbf{x}}\) is said to dominate \({\mathbf{y}}\), denoted as \({\mathbf{x}} \prec {\mathbf{y}}\).

Definition 2

(Pareto optimality) Given \({\mathbf{x}} \in {\mathbf{X}} \subset {\mathbb{R}}^{d}\), if there exists no \({\mathbf{y}} \in {\mathbf{X}}\) such that \({\mathbf{y}} \prec {\mathbf{x}}\), then solution \({\mathbf{x}}\) is called Pareto optimal in the solution set \({\mathbf{X}}\).

Definition 3

(Pareto set) The Pareto set (PS) is the set of all Pareto optimal solutions in the solution set \({\mathbf{X}} \subset {\mathbb{R}}^{d}\).

Definition 4

(Pareto front) The Pareto front (PF) is the image of the Pareto set (PS) in the objective space.
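Under the minimization convention of Eq. (1), Definitions 1–3 can be illustrated by a short Python sketch (the function names are ours, for illustration only):

```python
import numpy as np

def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (Definition 1, minimization)."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def pareto_set_indices(F):
    """Indices of the non-dominated rows of the objective matrix F (Definitions 2-3)."""
    F = np.asarray(F)
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]
```

Evaluating the objective functions at the returned indices gives the PF of Definition 4.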

A challenging task is to consider multiple conflicting objective functions simultaneously and then find the best possible trade-off solutions that satisfy all constraints, if any. According to Definition 1, a non-dominated solution is one for which no other solution is at least as good in every subfunction and strictly better in at least one. The easiest way to deal with MOPs is to assign a specific weight to each subfunction, converting the multiple objective functions into a single objective. In practice, design variables that are attainable for all the objective functions do not necessarily exist under this method, which means that the aggregated objective function may not represent the PF correctly, and the competition between objectives may lead to an inappropriate Pareto optimal solution (Ali and Shimoda 2022). An appropriate PF contains a solution \({\mathbf{x}}\) such that for each subobjective function \(f_{i}\), there exists at least one \(f_{j}\) such that \(f_{j} ({\mathbf{x}}) < f_{j} ({\mathbf{y}})\) and \({{(f_{i} ({\mathbf{y}}) - f_{i} ({\mathbf{x}}))} / {(f_{j} ({\mathbf{x}}) - f_{j} ({\mathbf{y}}))}} \le \varepsilon\), where \({\mathbf{y}} \in {\mathbf{X}} \subset {\mathbb{R}}^{d}\) and \(\varepsilon\) is a scalar larger than 0.

Generally, there are three crucial factors to consider when designing multi-objective optimization algorithms (MOAs): convergence, diversity, and spread (Li and Zheng 2009). Convergence reflects the distance between the obtained PF and the true PF; diversity reflects whether the points on the obtained PF are evenly spaced; and spread reflects the distribution range of the obtained PF. It is very challenging to ensure that MOAs find an evenly spaced and highly convergent PF when solving MOPs (Zhao et al. 2022). MOAs can be classified into three categories based on the participation of decision makers in the optimization process: a priori methods (Jin et al. 2001), a posteriori methods (Branke et al. 2004), and interactive methods (Kollat and Reed 2007). In a priori methods, the decision maker provides preference weights in advance, so that multiple objectives can be combined into a single objective through weight allocation. In a posteriori methods, the decision maker makes a decision at the end of the optimization, so the algorithm needs to generate a set of alternative non-dominated solutions under conflicting objectives. Finally, in interactive methods, the decision maker participates in the optimization process itself, so this treatment is less efficient. Compared with the other two approaches, the a posteriori method maintains the original formulation of the MOP, has stronger randomness, does not require much decision maker intervention, does not depend on specific problems, and has stronger generalization capability. Based on these advantages, the a posteriori method has become the most popular multi-objective optimization approach, and many multi-objective biologically inspired algorithms belong to this category.
The posteriori method usually employs the concept of Pareto dominance to evaluate the advantages and disadvantages of non-dominated solutions, employs a crowding distance mechanism to improve the distribution of the PF, and employs an archive to preserve the optimal Pareto solutions found so far.

When solving MOPs using multi-objective biologically inspired algorithms, each candidate solution generates a new solution through the search operator of the algorithm. The better non-dominated solutions are then selected to update the archive. There are three main methods for archive updating: indicator-based, decomposition-based, and Pareto-based. Indicator-based MOAs use performance indicators (such as generational distance (GD) (Ayala et al. 2017), inverted generational distance (IGD) (Champasak et al. 2020), and hypervolume (HV) (Gong et al. 2020)) to guide the direction of evolution. The indicator-based evolutionary algorithm (IBEA) (Zitzler and Künzli 2004) is one of the most famous algorithms; its main idea is to formalize preferences by successive generalizations of the dominance relation, so that two solutions can be directly compared by the designed indicators. The performance of IBEA depends on the indicator designed for a certain class of problems, and its generalization capability is weak. Decomposition-based MOAs decompose the problem into a set of single-objective subproblems and optimize each subproblem using neighborhood information. The decomposition-based multi-objective evolutionary algorithm (MOEA/D) (Zhang and Li 2007) is a representative algorithm. MOEA/D utilizes an objective aggregation strategy to decompose MOPs into multiple single-objective subproblems, and each subproblem can be optimized by simply combining the information of its neighboring subproblems, which reduces the computational cost. However, breakpoints can make the MOEA/D strategy inefficient when solving MOPs with discontinuous PFs (Qi et al. 2014). Pareto-based MOAs utilize the Pareto dominance principle to evaluate the proximity between the currently obtained PF and the true PF. Representative algorithms are the non-dominated sorting genetic algorithm (NSGA-II) (Deb et al. 2002), NSGA-III (Deb and Jain 2014), the Pareto envelope-based selection algorithm (PESA-II) (Corne et al. 2001), the strength Pareto evolutionary algorithm (SPEA2) (Zitzler et al. 2001), the multi-objective particle swarm optimizer (MOPSO) (Coello et al. 2004), the multi-objective seagull optimization algorithm (MOSOA) (Dhiman et al. 2021), the multi-objective water cycle algorithm (MOWCA) (Sadollah et al. 2015), the multi-objective grasshopper optimization algorithm (MOGOA) (Mirjalili et al. 2018), the multi-objective gradient-based optimizer (MOGBO) (Premkumar et al. 2021a), the multi-objective artificial bee colony (MOABC) (Hancer et al. 2015), and so on.

Although researchers have proposed some effective MOAs to solve MOPs, the no free lunch (NFL) theorem (Wolpert and Macready 1997) logically proves that no one algorithm is universally superior in handling all MOPs. That is, there is no universal criterion for trade-offs between multiple objectives for different types of problems. In addition, existing MOAs are usually designed based on the search framework of the most basic single-objective optimization algorithms, and their search operators have weak global search capability in the decision space, which largely affects the convergence performance of the algorithms in the objective space. Therefore, the design of efficient MOAs needs to focus on convergence and diversity in both decision space and objective space.

Biologically inspired algorithms simulate the collaborative foraging behavior of organisms in nature, with strong self-organization and adaptive search capability. Many biological systems are composed of individuals with no intelligence, but these individuals can effectively self-organize into systems that achieve a good balance between efficiency and robustness. Slime mould is a promising organism with strong path planning capability due to its unique oscillatory foraging behavior. Tero et al. (2010) simulated the foraging characteristics of the slime mould to design a mathematical model to map the Tokyo subway network. Li et al. (2011) designed two local routing protocols for wireless sensor networks using two different mechanisms in the formation process of slime mould tubular networks. Qian et al. (2013) designed an ant colony system based on slime mould to solve the traveling salesman problem. Becker (2015) developed a slime mould algorithm to solve graph optimization problems. Subsequently, Li et al. (2020) summarized the previous design experience and proposed a general global optimization algorithm named slime mould algorithm (SMA). SMA simulates the positive and negative feedback of slime mould using adaptive weights and has a strong local search capability. Due to its clear structure and easy implementation, SMA has received much attention from scholars since its proposal and has been successfully applied to image threshold segmentation (Naik et al. 2022), photovoltaic parameter extraction (Liu et al. 2021), power system optimization (Wei et al. 2021), job shop scheduling (Wei et al. 2022), optimal economic emission scheduling (Hassan et al. 2021), classification and diagnosis of diseases (Wazery et al. 2021), big data forecasting (Chen and Liu 2020), and other optimization problems. However, SMA has rarely been designed to deal with MOPs, especially in terms of enhancing the performance of the basic SMA.

Premkumar et al. (2021b) designed a multi-objective SMA (MOSMA) and evaluated its performance on benchmark functions and engineering problems. Subsequently, Hassan et al. (2021) designed an improved multi-objective SMA based on the sine cosine algorithm and applied it to a multi-objective economic emission dispatch problem. Finally, Houssein et al. (2022) also proposed a multi-objective SMA and analyzed its performance in the decision space on the CEC2020 functions. There are three main shortcomings of the existing MOSMAs: (1) the existing algorithms do not properly deal with the sorting process in SMA, and the non-dominated sorting is not well combined with the fitness weight, leading to poor convergence accuracy on complex problems; (2) the crowding distance mechanism does not maintain the diversity of solutions in the archive well, leading to poor distribution of the PF; and (3) the global search capability of SMA is not improved, which makes the exploration of MOSMA in the decision space inadequate.

Due to the weak exploration capability of SMA, existing MOSMAs usually employ the non-dominated sorting strategy of NSGA-II to construct the archive. This mainly enhances the global search of the algorithm in the objective space, without improving the algorithm in the decision space. When dealing with complex MOPs, this not only leads to slow convergence of MOSMA but also makes it easy to fall into local optima. To improve the search capability of the algorithm in the decision space, Yin et al. (2022b) designed an equilibrium optimizer slime mould algorithm (EOSMA) and revealed its efficiency by comparing it with CEC winner algorithms. The significant advantages of EOSMA on SOPs motivate us to propose its multi-objective version (MOEOSMA) to solve more real-world MOPs. The main contributions of this study are as follows:

  1. The elite archive component is applied to EOSMA, which can store the Pareto optimal solutions found so far.

  2. The equilibrium pool and crowding distance method are applied to EOSMA to retain the diversity of Pareto optimal solutions.

  3. The constant factors in MOEOSMA are replaced by dynamic exploration and exploitation coefficients to enhance exploration and exploitation.

  4. The effectiveness of MOEOSMA is verified on the CEC2020 functions and compared with nine advanced multi-objective algorithms.

  5. The convergence results of MOEOSMA are compared with eleven algorithms on eight real-world engineering problems and four truss optimization problems, and the performance is evaluated using the Hypervolume and Spacing-to-Extent indicators.

The rest of this paper is organized as follows. Section 2 briefly introduces EOSMA. Section 3 describes the optimization principle and implementation of MOEOSMA. Section 4 analyzes the performance of MOEOSMA on CEC2020 functions in the decision space and objective space. Section 5 evaluates the efficiency of MOEOSMA on real-world engineering problems and truss optimization problems. Finally, Sect. 6 concludes this paper.

2 Related work

Recently, Yin et al. (2022b) proposed an efficient hybrid equilibrium optimizer slime mould algorithm (EOSMA) and verified its performance on the CEC2019 and CEC2021 test functions and many real-world engineering optimization problems. In EOSMA, the EO search operator (Faramarzi et al. 2020) is used to guide the oscillatory foraging behavior of slime mould, which gives the anisotropic search of slime mould a specific orientation, resulting in a better balance between exploration and exploitation. Moreover, the random differential mutation operator and greedy selection strategy are integrated into the algorithm's search process to enhance the exploration and exploitation capabilities simultaneously. The search principle of EOSMA is described as follows.

2.1 Population initialization and equilibrium pool

The first phase of a swarm intelligence optimization algorithm is population initialization. EOSMA uses randomly generated, uniformly distributed positions in the search space as the initial solutions of the population. The initialization formula is shown in Eq. (2).

$${\mathbf{X}}_{i} = {\mathbf{LB}} + {\mathbf{r}} \cdot ({\mathbf{UB}} - {\mathbf{LB}})$$
(2)

where \(\cdot\) indicates Hadamard product, \({\mathbf{LB}} = [lb_{1} ,lb_{2} , \cdots ,lb_{d} ]\) is the lower bound of variables, \({\mathbf{UB}} = [ub_{1} ,ub_{2} , \cdots ,ub_{d} ]\) is the upper bound, \({\mathbf{X}}_{i}\) represents the ith initial solution, and \({\mathbf{r}}\) is the random number vector between [0,1].

To further balance the exploration and exploitation capability, EOSMA integrates an equilibrium pool. The equilibrium pool contains five positions, the first four of which are locally optimal positions to help exploration and the last one of which is their average position to help exploitation. The equilibrium pool is shown in Eq. (3).

$${\mathbf{X}}_{eq} = \left[ {{\mathbf{x}}_{eq,1} ,{\mathbf{x}}_{eq,2} ,{\mathbf{x}}_{eq,3} ,{\mathbf{x}}_{eq,4} ,{\mathbf{x}}_{eq,avg} } \right]^{{\text{T}}}$$
(3)

where \({\mathbf{x}}_{eq,1} ,{\mathbf{x}}_{eq,2} ,{\mathbf{x}}_{eq,3} ,{\mathbf{x}}_{eq,4}\) denote the four individuals with the best fitness in the current population, and \({\mathbf{x}}_{eq,avg} = {{({\mathbf{x}}_{eq,1} + {\mathbf{x}}_{eq,2} + {\mathbf{x}}_{eq,3} + {\mathbf{x}}_{eq,4} )} \mathord{\left/ {\vphantom {{({\mathbf{x}}_{eq,1} + {\mathbf{x}}_{eq,2} + {\mathbf{x}}_{eq,3} + {\mathbf{x}}_{eq,4} )} 4}} \right. \kern-0pt} 4}\).

In the iterative process, a position is randomly selected from the equilibrium pool \({\mathbf{X}}_{eq}\) as the best food source \({\mathbf{X}}_{b}\) to guide the updating direction of the search agent, enabling slime mould to utilize multiple food sources at the same time.
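For a minimization problem, Eq. (3) and the random choice of \({\mathbf{X}}_{b}\) can be sketched as follows (a NumPy sketch with illustrative names, not the authors' code):

```python
import numpy as np

def build_equilibrium_pool(X, fit):
    """Equilibrium pool of Eq. (3): the four fittest individuals plus their mean."""
    order = np.argsort(fit)                  # ascending fitness (minimization)
    eq = X[order[:4]]                        # x_eq,1 ... x_eq,4
    return np.vstack([eq, eq.mean(axis=0)])  # last row is x_eq,avg

def pick_food_source(pool, rng):
    """Randomly select X_b from the pool to guide the current search agent."""
    return pool[rng.integers(pool.shape[0])]
```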

2.2 Optimization process of EOSMA

According to the foraging behavior of slime mould, the EOSMA optimization process can be divided into an anisotropic search stage and a vein-like tube formation stage. The anisotropic search stage is replaced by the search operator of EO, and the vein-like tube formation stage is updated by the search operator of SMA. The optimization process can be expressed as Eq. (4).

$${\mathbf{X}}_{i}^{*} = \left\{ {\begin{array}{*{20}l} {{\mathbf{X}}_{b} + \left( {{\mathbf{X}}_{i} - {\mathbf{X}}_{b} } \right) \cdot {\mathbf{F}} + \left( {{\mathbf{G}}./{{\varvec{\uplambda}}}} \right) \cdot \left( {1 - {\mathbf{F}}} \right)} \hfill & {r_{1} < z} \hfill \\ {{\mathbf{X}}_{i} + {\mathbf{vb}} \cdot \left( {{\mathbf{W}}_{i} \cdot {\mathbf{X}}_{R1} - {\mathbf{X}}_{R2} } \right)} \hfill & {r_{2} < p} \hfill \\ {{\mathbf{X}}_{b} + {\mathbf{vc}} \cdot \left( {{\mathbf{W}}_{i} \cdot {\mathbf{X}}_{R1} - {\mathbf{X}}_{R2} } \right)} \hfill & {others} \hfill \\ \end{array} } \right.$$
(4)

where \(\cdot\) indicates Hadamard product,\(./\) indicates the division of the corresponding elements of the matrix, \({\mathbf{X}}_{i}^{*}\) denotes the updated ith solution, \(i = 1,2, \cdots ,N\), \(N\) is the population size, \({\mathbf{X}}_{i}\) denotes the current solution, \({\mathbf{X}}_{b}\) is a solution randomly selected from the equilibrium pool, \({\mathbf{F}}\) is the exponential term coefficient, \({\mathbf{F}} = a_{1} sign({\mathbf{r}} - 0.5) \cdot \left( {e^{{ - {{\varvec{\uplambda}}}t_{1} }} - 1} \right)\), \(t_{1} = \left( {1 - {t \mathord{\left/ {\vphantom {t {\max \_t}}} \right. \kern-0pt} {\max \_t}}} \right)^{{\left( {{{a_{2} t} \mathord{\left/ {\vphantom {{a_{2} t} {\max \_t}}} \right. \kern-0pt} {\max \_t}}} \right)}}\), \(a_{1} = 2\) and \(a_{2} = 1\) are adaptive parameters that adjust the exploration and exploitation, \(t\) and \(\max \_t\) denote the number of current and maximum iterations, \({\mathbf{G}}\) is the mass generation rate, \({{\varvec{\uplambda}}}\) is a random number vector between [0,1], \({\mathbf{W}}_{i}\) is the fitness weight of the ith slime mould individual, \({\mathbf{vb}}\) is a random number vector between \([ - a,a]\), \(a = {\text{atanh}} \left( {1 - {t \mathord{\left/ {\vphantom {t {\max \_t}}} \right. \kern-0pt} {\max \_t}}} \right)\), \({\mathbf{vc}}\) decreases linearly from 1 to 0, \(r_{1}\) and \(r_{2}\) are random numbers between [0,1], \({\mathbf{X}}_{R1}\) and \({\mathbf{X}}_{R2}\) represent two randomly selected solutions from the current population, \(z = 0.6\) is a hybrid parameter, \(p = \tanh \left| {{\mathbf{Fit}}_{i} - BF} \right|\), \({\mathbf{Fit}}_{i}\) denotes the fitness of the ith individual, \(BF\) is the best fitness. The mass generation rate \({\mathbf{G}}\) is calculated as shown in Eq. (5).

$${\mathbf{G}} = \left\{ {\begin{array}{*{20}l} {0.5r_{1} \cdot \left( {{\mathbf{X}}_{b} - {{\varvec{\uplambda}}} \cdot {\mathbf{X}}_{i} } \right) \cdot {\mathbf{F}}} \hfill & {r_{2} \ge 0.5} \hfill \\ 0 \hfill & {others} \hfill \\ \end{array} } \right.$$
(5)

where \(r_{1}\) and \(r_{2}\) are random numbers between [0,1]. The adaptive weight \({\mathbf{W}}\) is calculated as shown in Eq. (6).

$$\begin{gathered} {\mathbf{W}}({\mathbf{FitIdx}}_{i} ) = \left\{ {\begin{array}{*{20}l} {1 + {\mathbf{r}} \cdot \log \left( {\frac{{bF - {\mathbf{Fit}}_{i} }}{bF - wF} + 1} \right)} \hfill & {i < \frac{N}{2}} \hfill \\ {1 - {\mathbf{r}} \cdot \log \left( {\frac{{bF - {\mathbf{Fit}}_{i} }}{bF - wF} + 1} \right)} \hfill & {others} \hfill \\ \end{array} } \right. \hfill \\ {\mathbf{FitIdx}} = sort({\mathbf{Fit}}) \hfill \\ \end{gathered}$$
(6)

where \({\mathbf{FitIdx}}\) represents fitness sorted in ascending order, \({\mathbf{r}}\) is the random number vector between [0,1], \(bF\) and \(wF\) are the best and worst fitness values in the current population, \({\mathbf{Fit}}_{i}\) represents the fitness of the ith individual, \(N\) is the population size.
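Eq. (6) can be sketched as below (a minimal NumPy version for a minimization problem; the logarithm base follows the common SMA reference implementation, and the zero-denominator guard is our addition):

```python
import numpy as np

def adaptive_weight(fit, rng):
    """Adaptive fitness weight W of Eq. (6) (minimization)."""
    N = len(fit)
    order = np.argsort(fit)                    # FitIdx: ascending fitness
    bF, wF = fit[order[0]], fit[order[-1]]
    den = bF - wF if bF != wF else -1.0        # guard: all fitness values equal
    W = np.empty(N)
    for rank, i in enumerate(order):
        term = np.log10((bF - fit[i]) / den + 1)
        r = rng.random()
        # the better half of the population is weighted up, the rest down
        W[i] = 1 + r * term if rank < N / 2 else 1 - r * term
    return W
```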

After the search agent is updated by Eq. (4), the random difference mutation mechanism and greedy selection strategy are employed to enhance the search agent’s exploration and exploitation capability, helping the search agent to escape the local optimum. The mathematical formula of the random difference mutation operator is shown in Eq. (7).

$${\mathbf{X}}_{i}^{*} = \left\{ {\begin{array}{*{20}l} {{\mathbf{X}}_{i} + CF\left( {{\mathbf{LB}} + r_{3} \left( {{\mathbf{UB}} - {\mathbf{LB}}} \right)} \right) \cdot {\mathbf{L}}} \hfill & {r_{4} < q} \hfill \\ {{\mathbf{X}}_{i} + SF\left( {{\mathbf{X}}_{R1} - {\mathbf{X}}_{R2} } \right)} \hfill & {others} \hfill \\ \end{array} } \right.$$
(7)

where \(CF = \left( {1 - {t \mathord{\left/ {\vphantom {t {\max \_t}}} \right. \kern-0pt} {\max \_t}}} \right)^{{\left( {{{a_{1} t} \mathord{\left/ {\vphantom {{a_{1} t} {\max \_t}}} \right. \kern-0pt} {\max \_t}}} \right)}}\) is an adaptive compression factor, \({\mathbf{L}}\) is a vector with elements 0 or 1, \(SF\) is a random number between [0.2,1], \(r_{3}\) and \(r_{4}\) are random numbers between [0,1], and \(q = 0.2\) is a tunable parameter. To avoid invalid searches, after updating the location of the search agent, the solution outside the boundary is updated using the dichotomy method, as shown in Eq. (8).

$${\mathbf{X}}_{ij}^{*} = \left\{ {\begin{array}{*{20}l} {({\mathbf{X}}_{ij} + ub_{j} )./2} \hfill & {{\mathbf{X}}_{ij}^{*} > ub_{j} } \hfill \\ {({\mathbf{X}}_{ij} + lb_{j} )./2} \hfill & {{\mathbf{X}}_{ij}^{*} < lb_{j} } \hfill \\ {{\mathbf{X}}_{ij}^{*} } \hfill & {others} \hfill \\ \end{array} } \right.$$
(8)

where \(i = 1,2, \cdots ,N\), \(j = 1,2, \cdots ,d\), \(N\) is the population size, \(d\) is the dimension of the decision variable, \(ub_{j}\) and \(lb_{j}\) are the upper and lower bounds of the jth decision variable. After boundary checking, the fitness of each search agent is evaluated, and a greedy selection strategy is applied to retain the better individuals, as shown in Eq. (9).
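The boundary repair of Eq. (8) replaces each violated component with the midpoint between the previous in-bounds position and the violated bound. A vectorized sketch (names are ours):

```python
import numpy as np

def repair_bounds(X_new, X_old, lb, ub):
    """Dichotomy boundary repair of Eq. (8), applied component-wise."""
    X = np.where(X_new > ub, (X_old + ub) / 2, X_new)  # over upper bound
    X = np.where(X_new < lb, (X_old + lb) / 2, X)      # under lower bound
    return X
```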

$${\mathbf{X}}_{i}^{*} = \left\{ {\begin{array}{*{20}l} {{\mathbf{X}}_{i}^{*} } \hfill & {F({\mathbf{X}}_{i}^{*} ) < F({\mathbf{X}}_{i} )} \hfill \\ {{\mathbf{X}}_{i} } \hfill & {others} \hfill \\ \end{array} } \right.$$
(9)

where \(F( \cdot )\) means to evaluate the fitness of an individual, \({\mathbf{X}}_{i}^{*}\) represents the current individual, and \({\mathbf{X}}_{i}\) represents the individual of the previous generation.
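Eq. (9) can be sketched as follows (single-objective minimization; `f` is any fitness function, and the names are illustrative):

```python
import numpy as np

def greedy_select(X_new, X_old, f):
    """Greedy selection of Eq. (9): keep a new solution only if it improves."""
    fit_new = np.apply_along_axis(f, 1, X_new)
    fit_old = np.apply_along_axis(f, 1, X_old)
    better = fit_new < fit_old
    X = np.where(better[:, None], X_new, X_old)
    return X, np.where(better, fit_new, fit_old)
```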

In EOSMA, a greedy selection strategy and an equilibrium pool are introduced. The greedy selection strategy maintains an archive of the same capacity as the population size, which stores the best solution that each search individual has found so far. After each iteration, the archive is updated by Eq. (9). The greedy strategy also provides search agents with a powerful memory, allowing slime mould to recall successful foraging areas from the past. There are five solutions in the equilibrium pool: four local optimal solutions and the central position of those four. The equilibrium pool allows slime mould to exploit multiple food sources at the same time, increasing the probability of obtaining the global optimal solution. The flow chart and pseudo-code of EOSMA can be found in (Yin et al. 2022b).

3 Proposed MOEOSMA

3.1 Pareto archive

The purpose of multi-objective optimization is to present decision makers with a large number of Pareto optimal alternatives. These solutions trade off between multiple objectives, and there is no dominance relationship between them. To design a multi-objective EOSMA, an archive with a defined maximum capacity, similar to that of MOPSO (Coello and Lechuga 2002), is first integrated into the algorithm. During optimization, the Pareto dominance operator is employed to compare each updated slime mould position with the positions in the archive. If the updated slime mould individual is not dominated by the individuals in the archive, it is preserved; otherwise, it is discarded. When the number of non-dominated solutions in the archive exceeds the maximum capacity, the crowding distance and roulette method are used to remove solutions in dense regions of the population, improving the distribution of the PF.

3.2 Crowding distance

The crowding distance calculation method determines the selection of the archiving solution and consequently has a significant impact on the diversity of the final PF. Three different crowding distances have been proposed in the literature (Mirjalili et al. 2017b; Zeng et al. 2021; Houssein et al. 2022). By combining the characteristics of existing crowding distances, this study proposes a simple and effective approach for measuring crowding distance. As shown in Fig. 1, the crowding distance of the slime mould individual \(m\) represents the number of neighborhood solutions in the hypercube centered on itself. The length of the ith side of the hypercube is \(2d_{i}\), and the distance \(d_{i}\) is defined as Eq. (10).

$$d_{i} = \frac{{f_{i}^{\max } - f_{i}^{\min } }}{As},i = 1,2, \cdots ,M$$
(10)

where \(f_{i}^{\max }\) and \(f_{i}^{\min }\) are the maximum and minimum values of the ith objective function, \(M\) is the number of objective functions and \(As\) is the archive size.
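The discrete crowding distance described here, i.e., counting neighbors inside a hypercube with half-widths \(d_i\) from Eq. (10), can be sketched as follows (our illustrative implementation, not the authors' code):

```python
import numpy as np

def hypercube_crowding(F, archive_size):
    """For each archive member, count the neighbours whose objective vector lies
    inside the hypercube of half-width d_i (Eq. (10)) centred on it."""
    F = np.asarray(F, dtype=float)
    d = (F.max(axis=0) - F.min(axis=0)) / archive_size   # Eq. (10)
    counts = np.empty(len(F), dtype=int)
    for m in range(len(F)):
        inside = np.all(np.abs(F - F[m]) <= d, axis=1)
        counts[m] = inside.sum() - 1                     # exclude the point itself
    return counts
```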

Fig. 1 The crowding distance of the Pareto solution

As shown in Fig. 1, the non-dominated solutions with smaller crowding distances lie in sparser regions and are important for approximating the true PF, while the non-dominated solutions with larger crowding distances lie in denser regions, have the least influence on the distribution of the PF if removed, and are therefore given a higher probability of being removed. Accordingly, when the archive is full, the probability that each solution is removed from the archive is given by Eq. (11).

$$P_{i} = \frac{{N_{i} }}{C}$$
(11)

where \(C\) denotes the sum of the crowding distances of all solutions in the archive, and \(N_{i}\) denotes the crowding distance of the ith solution.
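A minimal sketch of the roulette removal of Eq. (11), assuming at least one solution has a nonzero crowding count (names are ours):

```python
import numpy as np

def roulette_removal_index(crowding, rng):
    """Roulette removal of Eq. (11): a solution's removal probability is its
    crowding count divided by the total, so dense regions are thinned first."""
    crowding = np.asarray(crowding, dtype=float)
    P = crowding / crowding.sum()          # Eq. (11)
    return rng.choice(len(crowding), p=P)
```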

In this way, MOEOSMA can store better non-dominated solutions in the archive and constantly improve them during iterations. Using less crowded solutions as food sources can prompt slime mould to find other food sources nearby. This naturally attracts search agents to regions of the PF with fewer non-dominated solutions, improving the coverage of the final obtained PF. It is worth noting that, in contrast to the existing MOSMA (Premkumar et al. 2021b; Houssein et al. 2022), the crowding distance used in this research is a discrete value rather than a continuous value, which not only preserves the randomness of the solutions selected from the archive but also facilitates the selection of elite individuals to simulate the multiple food sources found by slime mould. In MOEOSMA, the solutions in the archive with the minimum crowding distance are placed in the equilibrium pool. During the iteration, each slime mould individual randomly selects a solution from the equilibrium pool as the food source \({\mathbf{X}}_{b}\) that the current search agent decides to exploit.

3.3 Dynamic coefficient

The parameters \(a_{1}\) and \(a_{2}\) that adjust exploration and exploitation in EOSMA are set to fixed values during iteration, which means that the algorithm's exploration and exploitation trend remains unchanged during optimization. Experience from improving many algorithms suggests that a metaheuristic should focus on exploration in the early stage, conducting a broad exploration of the whole search space, and then gradually shift to exploitation, searching the more promising regions. Therefore, during MOEOSMA optimization, \(a_{1}\) and \(a_{2}\) are controlled according to the current number of iterations and randomly generated values, which are translated into dynamic coefficients to better adjust the exploration and exploitation trends. The parameters \(a_{1}\) and \(a_{2}\) are calculated as in Eq. (12).

$$\begin{gathered} a_{1} = \left( {1 + \left( {1 - {t \mathord{\left/ {\vphantom {t {\max \_t}}} \right. \kern-0pt} {\max \_t}}} \right)^{{{{2t} \mathord{\left/ {\vphantom {{2t} {\max \_t}}} \right. \kern-0pt} {\max \_t}}}} } \right) \times r \hfill \\ a_{2} = \left( {2 - \left( {1 - {t \mathord{\left/ {\vphantom {t {\max \_t}}} \right. \kern-0pt} {\max \_t}}} \right)^{{{{2t} \mathord{\left/ {\vphantom {{2t} {\max \_t}}} \right. \kern-0pt} {\max \_t}}}} } \right) \times r \hfill \\ \end{gathered}$$
(12)

where \(r\) represents the random number between [0,1], \(t\) represents the current iteration number, and \(\max \_t\) represents the maximum iteration number.
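Eq. (12) can be sketched as follows (illustrative Python; at \(t = 0\) the decay term is 1, so \(a_1 = 2r\) favors exploration, while at \(t = \max\_t\) it is 0, so \(a_2 = 2r\) favors exploitation):

```python
import numpy as np

def dynamic_coefficients(t, max_t, rng):
    """Dynamic exploration/exploitation coefficients of Eq. (12)."""
    decay = (1 - t / max_t) ** (2 * t / max_t)   # 1 at t = 0, 0 at t = max_t
    r = rng.random()
    a1 = (1 + decay) * r                         # exploration coefficient
    a2 = (2 - decay) * r                         # exploitation coefficient
    return a1, a2
```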

3.4 Optimization process of MOEOSMA

Based on the foraging behavior of slime mould, the MOEOSMA optimization process can be divided into two stages: the anisotropic search stage and the vein-like tube formation stage. The anisotropic search stage is replaced by the EO search operator, with the goal of guiding the search direction of the search agents, expanding the search range, and avoiding premature convergence. The vein-like tube formation stage is updated by SMA's most crucial search operator. It is worth noting that slime mould individuals in the two stages exist in the population simultaneously, as shown in Eq. (13).

$${\mathbf{X}}_{i}^{*} = \left\{ {\begin{array}{*{20}l} {{\mathbf{X}}_{b} + \left( {{\mathbf{X}}_{i} - {\mathbf{X}}_{b} } \right) \cdot {\mathbf{F}} + \left( {{\mathbf{G}}./{{\varvec{\uplambda}}}} \right) \cdot \left( {1 - {\mathbf{F}}} \right)} \hfill & {r < z} \hfill \\ {{\mathbf{X}}_{b} + {\mathbf{vb}} \cdot \left( {{\mathbf{W}}_{i} \cdot {\mathbf{X}}_{R1} - {\mathbf{X}}_{R2} } \right)} \hfill & {others} \hfill \\ \end{array} } \right.$$
(13)

where \(\cdot\) indicates Hadamard product,\(./\) indicates the division of the corresponding elements of the matrix, \({\mathbf{X}}_{i}^{*}\) denotes the updated ith solution, \({\mathbf{X}}_{i}\) denotes the current solution, \({\mathbf{X}}_{b}\) is a solution randomly selected from the equilibrium pool, \({\mathbf{F}}\) is the exponential term coefficient, \({\mathbf{F}} = a_{1} sign({\mathbf{r}} - 0.5) \cdot \left( {e^{{ - {{\varvec{\uplambda}}}t_{1} }} - 1} \right)\), \(t_{1} = \left( {1 - {t \mathord{\left/ {\vphantom {t {\max \_t}}} \right. \kern-0pt} {\max \_t}}} \right)^{{\left( {{{a_{2} t} \mathord{\left/ {\vphantom {{a_{2} t} {\max \_t}}} \right. \kern-0pt} {\max \_t}}} \right)}}\), \(a_{1}\) and \(a_{2}\) are calculated by Eq. (12), \({\mathbf{G}}\) is calculated by Eq. (5), \({{\varvec{\uplambda}}}\) is a random number vector between [0,1], \({\mathbf{W}}_{i}\) is the fitness weight of the ith slime mould individual, \({\mathbf{vb}}\) is a random number vector between \([ - a,a]\), \(a = {\text{atanh}} \left( {1 - {t \mathord{\left/ {\vphantom {t {\max \_t}}} \right. \kern-0pt} {\max \_t}}} \right)\), \(r\) is a random number between [0,1], \({\mathbf{X}}_{R1}\) and \({\mathbf{X}}_{R2}\) represent two randomly selected solutions from the current population, and \(z = 0.6\) is an adjustable parameter controlling the balance of exploration and exploitation (Yin et al. 2022b). The adaptive weight \({\mathbf{W}}\) is calculated as shown in Eq. (14).

$$\begin{gathered} {\mathbf{W}}({\mathbf{FitIdx}}(i)) = \left\{ {\begin{array}{*{20}l} {1 + {\mathbf{r}} \cdot \log \left( {\frac{{bF - {\mathbf{Fit}}_{ik} }}{bF - wF} + 1} \right)} \hfill & {i < \frac{N}{2}} \hfill \\ {1 - {\mathbf{r}} \cdot \log \left( {\frac{{bF - {\mathbf{Fit}}_{ik} }}{bF - wF} + 1} \right)} \hfill & {others} \hfill \\ \end{array} } \right. \hfill \\ {\mathbf{FitIdx}} = sort({\mathbf{Fit}}_{k} ),k = rem(t,M) + 1 \hfill \\ \end{gathered}$$
(14)

where \(\cdot\) indicates the Hadamard product, \({\mathbf{FitIdx}}\) represents the indices of the fitness values sorted in ascending order, \({\text{rem}} ( \cdot )\) is the remainder function, \(t\) is the current iteration number, \(M\) is the number of objective functions, \({\mathbf{r}}\) is a random vector in [0,1], \(bF\) and \(wF\) are the best and worst fitness values in the current population, \({\mathbf{Fit}}_{ik}\) represents the fitness of the ith individual on the kth objective function, and \(N\) is the population size.
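The weight computation of Eq. (14) can be sketched as follows. This is a simplified, hypothetical version that assigns one scalar weight per individual rather than a random vector per dimension, and it guards against a degenerate population where all fitness values coincide.

```python
import numpy as np

def adaptive_weights(Fit, t):
    """Adaptive fitness weight W (Eq. 14), cycling through the objectives.

    Fit : (N, M) objective values of the population (minimisation)
    Returns a (N,) weight array aligned with the row order of Fit.
    """
    N, M = Fit.shape
    k = t % M                              # paper: k = rem(t, M) + 1 with 1-based indexing
    order = np.argsort(Fit[:, k])          # ascending fitness: best individual first
    bF, wF = Fit[order[0], k], Fit[order[-1], k]
    if bF == wF:                           # degenerate population: log term vanishes
        ratio = np.zeros(N)
    else:
        ratio = (bF - Fit[order, k]) / (bF - wF)   # lies in [0, 1] for minimisation
    term = np.random.rand(N) * np.log(ratio + 1.0)
    W = np.empty(N)
    half = N // 2
    W[order[:half]] = 1.0 + term[:half]    # better half weighted above 1
    W[order[half:]] = 1.0 - term[half:]    # worse half weighted below 1
    return W
```

Cycling \(k\) over the objectives lets each objective in turn drive the weights, matching the \(rem(t,M)+1\) schedule in Eq. (14).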

After updating the locations of the slime mould by Eq. (13), a random difference mutation operator and a greedy selection strategy are introduced to improve the exploration and exploitation capability, helping the search agents escape local optima and obtain solutions with higher accuracy. The mathematical formula of the mutation operator is shown in Eq. (15).

$${\mathbf{X}}_{i}^{*} = {\mathbf{X}}_{i} + SF\left( {{\mathbf{X}}_{R1} - {\mathbf{X}}_{R2} } \right)$$
(15)

where \({\mathbf{X}}_{i}^{*}\) denotes the updated ith solution, \({\mathbf{X}}_{i}\) denotes the current solution, \(SF\) is a random number in [0.2,1], and \({\mathbf{X}}_{R1}\) and \({\mathbf{X}}_{R2}\) are two solutions randomly selected from the current population.

To avoid invalid searches, after the slime mould locations are updated, any \({\mathbf{X}}^{*}\) beyond the search range is repaired using Eq. (8). The fitness is then evaluated, and the greedy selection strategy of Eq. (9) retains the better slime mould individuals. The pseudo-code of MOEOSMA is presented in Algorithm 1, and the flow chart is displayed in Fig. 2.
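The mutation-plus-selection step can be sketched as follows. This is a hypothetical single-objective surrogate: the `evaluate` callable stands in for the multi-objective comparison of Eq. (9), and simple clipping stands in for the boundary repair of Eq. (8).

```python
import numpy as np

def mutate_and_select(X, Fit, evaluate, lb, ub):
    """Random-difference mutation (Eq. 15) with bound repair and greedy selection.

    evaluate : callable mapping a (d,) position to a scalar fitness (minimised)
    """
    rng = np.random.default_rng()
    N, d = X.shape
    for i in range(N):
        R1, R2 = rng.choice(N, size=2, replace=False)
        SF = rng.uniform(0.2, 1.0)                 # scale factor in [0.2, 1]
        trial = X[i] + SF * (X[R1] - X[R2])        # Eq. (15)
        trial = np.clip(trial, lb, ub)             # keep the agent in the search range
        f = evaluate(trial)
        if f < Fit[i]:                             # greedy selection: keep the better one
            X[i], Fit[i] = trial, f
    return X, Fit
```

Because a trial replaces an individual only when it improves the fitness, the population fitness is monotonically non-increasing under this step.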

Fig. 2

Flow chart of the MOEOSMA

3.5 Computational complexity

The proposed MOEOSMA mainly consists of the following subcomponents: initialization, position update, fitness evaluation, fitness sorting, equilibrium pool update, fitness weight update, greedy selection, archive update, and mutation operator. Initialization, position update, mutation, and fitness weight update all have \(O(N * d)\) time complexity, fitness sorting has \(O(N * \log N)\) time complexity, greedy selection and equilibrium pool update have \(O(N)\) time complexity, and archive update has \(O(As^{2} * M)\) time complexity. Therefore, the total time complexity of the algorithm is \(O(\max \_t * (N * d + N * \log N + F + As^{2} * M))\), where \(F\) is the evaluation time of the fitness function, \(N\) is the population size, \(As\) is the archive size, \(M\) is the number of objectives, \(d\) is the dimensionality of the problem, and \(\max \_t\) is the maximum number of iterations. The space complexity is \(O(N * d)\).

4 Experimental and analysis of test functions

To verify the effectiveness of the proposed MOEOSMA, the CEC2020 functions are used to analyze the convergence behavior of the algorithm in both the objective space and the decision space. Unlike previous test suites, CEC2020 provides not only the true PF for each test problem but also the associated local and global PSs, allowing researchers to evaluate an algorithm's performance in both spaces. The MATLAB code of the CEC2020 benchmark functions can be downloaded at https://github.com/P-N-Suganthan. The specifics of these test functions are described in (Yue et al. 2019; Liang et al. 2020).

4.1 Experimental setup

To evaluate the performance of MOEOSMA relative to other competing algorithms, it is compared with nine well-known MOAs: the multi-objective slime mould algorithm (MOSMA) (Premkumar et al. 2021b), multi-objective ant lion optimizer (MOALO) (Mirjalili et al. 2017c), multi-objective grey wolf optimizer (MOGWO) (Mirjalili et al. 2016), multi-objective multi-verse optimization (MOMVO) (Mirjalili et al. 2017b), multi-objective particle swarm optimizer (MOPSO) (Coello et al. 2004), multi-objective Salp swarm algorithm (MSSA) (Mirjalili et al. 2017a), MOEA/D (Zhang and Li 2007), PESA-II (Corne et al. 2001), and SPEA2 (Zitzler et al. 2001). The source codes of the comparison algorithms are available at https://aliasgharheidari.com/SMA.html, https://seyedalimirjalili.com, and https://yarpiz.com. All algorithms are executed in MATLAB R2020b under Windows 10 on an AMD A8-7410 APU with AMD Radeon R5 Graphics (2.20 GHz) and 12 GB RAM. For a fair comparison, the population size and archive capacity of all comparison algorithms are set to \(200 \times N\_ops\), with a maximum of \(10000 \times N\_ops\) evaluations and 21 independent runs, where \(N\_ops\) is the number of local and global PSs. The remaining parameters of each algorithm follow the settings in its original paper and are listed in Table 1.

Table 1 Parameter settings of the comparison algorithms

4.2 Experimental results and analysis

Since the CEC2020 functions contain multiple global optimal PSs, good performance in the objective space does not imply that all global optimal PSs have been found. The IGD (Zhang et al. 2008) in the decision space (IGDX) (Zhou et al. 2009) and in the objective space (IGDF) (Zhou et al. 2009) is used to evaluate the quality of the obtained PS and PF, respectively. In the decision space, the smaller the IGDX value, the closer the obtained PS is to the true PS. In the objective space, the smaller the IGDF value, the closer the obtained PF is to the true PF.

The mean and standard deviation of the IGDX obtained by MOEOSMA and the comparison algorithms are shown in Table 2. The IGDX value quantifies the convergence of the obtained PS in the decision space. It can be seen from Table 2 that MOEOSMA and MOSMA obtained the minimum values on 12 and 8 functions, respectively, while MSSA, MOEA/D, and SPEA2 obtained the best results on a few functions. Friedman's statistical test results reveal that MOEOSMA ranks first, far ahead of the other comparison algorithms. By comparing the IGDX values, it can be seen that MOEOSMA outperforms the other algorithms in searching the decision space and is able to find multiple global optimal PSs. MOEOSMA's superior performance in the decision space is mainly due to the equilibrium pool in the EOSMA framework, which stores non-dominated solutions with minimum crowding distance. During the iteration, each slime mould individual randomly selects a solution from the equilibrium pool as the current best food source. This expands the search range of the slime mould in the decision space and enables the algorithm to explore multiple local optimal regions simultaneously. The equilibrium pool strategy not only increases the probability of finding multiple global PSs, but also helps to improve the distribution of the PF. In addition, the dynamic exploration and exploitation coefficients improve the search capability of EOSMA, thus improving the convergence accuracy of the algorithm.

Table 2 The IGDX values obtained by all comparison algorithms

Table 3 displays the mean and standard deviation of the IGDF obtained by MOEOSMA and the other comparison algorithms. The IGDF reflects the convergence and diversity of the generated PF in the objective space. The results in Table 3 show that MOEOSMA ranks first on 22 functions; it does not reach the minimum on MMF15_a and MMF16_l2 but still achieves good results there. Combined with the statistical results in Table 2, it can be seen that the performance of MOEOSMA is close to MOSMA in the decision space, but its convergence in the objective space is clearly better. This indicates that the Pareto archive and crowding distance evaluation mechanism used by MOEOSMA are better than those of MOSMA, not only improving the convergence rate of the PF but also maintaining its diversity well. MOEOSMA differs from MOSMA in its archive. Owing to the weak exploration capability of SMA, existing MOSMA variants (Premkumar et al. 2021b; Houssein et al. 2022) select archived solutions based on non-domination level, while MOEOSMA first keeps the solutions at the highest non-domination level and then selects among them by crowding distance. The archive of MOSMA is beneficial to exploration, but good non-dominated solutions are easily discarded, while the archive of MOEOSMA reduces damage to the existing archived solutions. As a result, MOEOSMA can provide better convergence than MOSMA and most archive-based MOAs.
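The crowding-distance-based archive truncation described here can be sketched as follows. This is an illustrative version; the actual MOEOSMA archive update may differ in details such as tie-breaking and incremental updates.

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of each solution (rows of F are objective vectors)."""
    n, M = F.shape
    dist = np.zeros(n)
    for m in range(M):
        order = np.argsort(F[:, m])
        span = F[order[-1], m] - F[order[0], m]
        dist[order[0]] = dist[order[-1]] = np.inf     # boundary solutions are kept
        if span > 0:
            dist[order[1:-1]] += (F[order[2:], m] - F[order[:-2], m]) / span
    return dist

def truncate_archive(F, capacity):
    """Drop the most crowded members one by one until the archive fits."""
    keep = np.arange(len(F))
    while len(keep) > capacity:
        d = crowding_distance(F[keep])
        keep = np.delete(keep, np.argmin(d))          # remove the most crowded solution
    return keep
```

Recomputing the distances after each removal, rather than removing several solutions at once, avoids discarding two neighbours that were only crowded by each other.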

Table 3 The IGDF values obtained by all comparison algorithms

Figures 3, 4, 5, and 6 exhibit the best convergence results of PS and PF obtained by all comparison algorithms on MMF2 and MMF16_l3, respectively. These figures display only the non-dominated solutions obtained by the comparison algorithms, not all the solutions of the final population. As shown in Figs. 3 and 4, MOEOSMA obtains more non-dominated solutions on MMF2 and is significantly superior to other algorithms in terms of convergence and distribution. Moreover, Fig. 3 shows that MOEOSMA can also find multiple global PSs, indicating that the algorithm is also suited for handling multimodal multi-objective optimization problems.

Fig. 3

The optimal PS obtained by all comparison algorithms on MMF2

Fig. 4

The optimal PF obtained by all comparison algorithms on MMF2

Fig. 5

The optimal PS obtained by all comparison algorithms on MMF16_l3

Fig. 6

The optimal PF obtained by all comparison algorithms on MMF16_l3

MMF16_l3 is a complex three-objective test function in which local and global PSs coexist. Figures 5 and 6 exhibit the comparison algorithms' search performance on this function. It can be intuitively seen from the figures that MOEOSMA obtains the best results among all the comparison algorithms. It can not only jump out of the local PS but also find multiple uniformly distributed global PSs. The obtained PF is close to the true PF and provides the decision-maker with more alternative Pareto optimal solutions. An interesting phenomenon is that the PS obtained by MOSMA tends to spread near the boundary of the search space, while MOEOSMA avoids this problem through the improved boundary updating method. In conclusion, MOEOSMA is highly competitive with other multi-objective algorithms and can achieve better results.

To further illustrate the MOEOSMA’s effectiveness and efficiency, Figs. 7 and 8 show the best PS and PF obtained by MOEOSMA on all CEC2020 benchmark functions. As shown in Fig. 7, MOEOSMA can jump out of the local PS, and the obtained PS can cover the true global PS uniformly. It shows that MOEOSMA has a powerful search capability in decision space, which provides a solid platform for solving multimodal multi-objective optimization problems. As shown in Fig. 8, MOEOSMA can approximate the true PF for various types of PF and achieves satisfactory results in terms of convergence, diversity, and uniformity. Overall, MOEOSMA shows superior search performance on CEC2020 benchmark functions. It is verified that the equilibrium pool and the crowding distance can improve the algorithm’s convergence accuracy and speed to obtain well-distributed PS and PF.

Fig. 7

The optimal PS obtained by MOEOSMA on all CEC2020 multimodal multi-objective benchmark functions

Fig. 8

The optimal PF obtained by MOEOSMA on all CEC2020 multimodal multi-objective benchmark functions

5 Real-world constrained engineering problems

To test the potential of MOEOSMA, it was applied to eight real-world constrained engineering problems and four large-scale truss optimization problems: speed reducer design, spring design, hydrostatic thrust bearing design, vibrating platform design, car side impact design, water resource management, bulk carrier design, multi-product batch plant, and the 60-bar, 72-bar, 200-bar, and 942-bar truss optimization problems. These problems contain 2 to 5 objective functions, 3 to 59 decision variables, and 5 to 942 constraints, allowing a comprehensive analysis of the optimization performance of MOEOSMA on various MOPs.

5.1 Real-world optimization problems

The first multi-objective engineering design problem is the speed reducer design problem studied by Kurpati et al. (2002). The objective is to minimize the weight and stress of the reducer. As indicated in Fig. 9, this problem contains seven decision variables: the surface width of the gear (\(b\)), the number of pinion teeth (\(z\)), the module of teeth (\(m\)), the length of the first and second shafts between bearings (\(l_{1} , \, l_{2}\)), and the diameter of the first and second shafts (\(d_{1} , \, d_{2}\)). The number of pinion teeth (\(z\)) is an integer and other variables are continuous. This is a mixed integer problem whose mathematical model is shown in Appendix 1.1 (Dhiman and Kumar 2018).

Fig. 9

Schematic diagram of the speed reducer problem

The second is the spring design problem, as shown in Fig. 10 (Yin et al. 2022a). The objective of this problem is to minimize both stress and volume (Tawhid and Savsani 2019). The design variables are the wire diameter (\(d\)), the average coil diameter (\(D\)), and the number of active coils (\(N\)). The constraints involve the outside diameter, shear stress, fluctuation frequency, and minimum deflection. This problem is unusual in that the design variables are of different types: the number of active coils must be an integer, the wire diameter must be selected from a standardized set of available diameters, and the average coil diameter can be treated as a continuous variable. This problem can be formulated as in Appendix 1.2.

Fig. 10

Schematic diagram of the spring design problem

Third, the objective of the hydrostatic thrust bearing design problem is to minimize the power loss of the hydrostatic thrust bearing during operation while satisfying some constraints (Rao and Savsani 2012; Kumar et al. 2021a). The hydrostatic thrust bearing must bear a specified load when providing axial support. In this study, an objective function is added to minimize the pressure loss of oil inlet and outlet. As shown in Fig. 11, four design variables are considered in this problem: oil viscosity (\(\mu\)), oil inlet rate (\(Q\)), bearing step radius (\(R\)), and recess radius (\(R_{o}\)). There are seven constraints related to minimum load carrying capacity, inlet oil pressure requirement, oil temperature increase, oil film thickness, and some physical constraints. It is assumed that all variables are continuous. The mathematical formula for this problem is described in Appendix 1.3.

Fig. 11

Schematic diagram of the hydrostatic thrust bearing

The fourth problem is a modification of the vibrating platform design problem proposed by Messac (1996). It was originally formulated as an SOP to maximize the fundamental frequency, with the estimated cost as one of the constraints. Here the problem is modified to include cost as a second objective function and to make the problem combinatorial. Geometry and materials are synthesized in the design process (Narayanan and Azarm 1999). The problem is to design a platform for mounting a motor, as shown in Fig. 12. The setup of the machine is simplified to a pin-pin supported beam bearing the weight. A vibration disturbance is applied from the motor to the beam, which has length \(L\) and width \(b\) and is symmetrical about its middle. Variables \(d_{1}\) and \(d_{2}\) locate the interfaces between materials 1 and 2 and between materials 2 and 3, respectively. Variable \(d_{3}\) locates the bottom of the beam. The combinatorial variable \(M_{i}\) specifies the type of material forming each layer of the beam. The mass density (\(\rho\)), Young's modulus of elasticity (\(E\)), and cost per unit volume (\(c\)) of each material type are shown in Table 4. The objective is to design a sandwich beam that minimizes the vibration caused by the motor disturbance while minimizing the cost. The complete formulation of the problem is shown in Appendix 1.4.

Fig. 12

Schematic diagram of the vibrating platform apparatus

Table 4 Material properties of vibration platform design problem

Fifth, Jain and Deb (2014) developed the car side impact design problem. The objective is to minimize the weight of the car while also minimizing the pubic force experienced by the passenger and the average velocity of the V-pillar responsible for withstanding the impact load. All three objectives conflict with each other, so a three-dimensional trade-off PF is expected. There are ten constraints, involving limiting values of the abdominal load, pubic force, velocity of the V-pillar, rib deflection, etc. There are eleven design variables describing the thicknesses of the B-pillar, floor, cross members, door beam, roof rail, etc. Its mathematical description is given in Appendix 1.5.

Sixth, the water resource management problem concerns the optimal planning of storm-drainage systems in urban areas, originally proposed by Musselman and Talavage (1980). The formulation essentially consists of a hierarchically structured linear program with a simulation model as a constraint. The drainage system has three decision variables denoting the local detention storage capacity (\(x_{1}\)), maximum treatment rate (\(x_{2}\)), and maximum allowable overflow rate (\(x_{3}\)). The objectives to be optimized are the drainage network cost (\(f_{1}\)), storage facility cost (\(f_{2}\)), treatment facility cost (\(f_{3}\)), expected flood damage cost (\(f_{4}\)), and expected flood economic loss (\(f_{5}\)). With five objective functions, this problem allows the performance of MOEOSMA and the comparison algorithms to be evaluated on a many-objective optimization problem. The mathematical model is given in Appendix 1.6 (Ray et al. 2001).

Seventh, the bulk carriers design problem is another challenging constraint optimization problem, extracted from (Parsons and Scott 2004). The objectives of the problem are to reduce the transportation cost (\(f_{1}\)), to reduce the weight of the ship (\(f_{2}\)) and to increase the annual cargo volume (\(f_{3}\)). The decision variables of this problem are the length (\(L\)), beam (\(B\)), depth (\(D\)), draft (\(T\)), speed (\(V_{k}\)) and block coefficient (\(C_{B}\)) of the ship. The mathematical description of the problem is shown in Appendix 1.7.

Eighth, the multi-product batch plant problem is a complex scheduling problem. Early formulations of this type of problem generally aim to reduce manufacturing cost and makespan. The multi-product batch plant test problem is taken from (Kumar et al. 2021a); it considers three objective functions simultaneously, with ten decision variables and ten inequality constraints. The mathematical formulation of this mixed integer linear programming problem is described in detail in Appendix 1.8.

Finally, four truss optimization problems (60-bar, 72-bar, 200-bar, and 942-bar) are selected from (Pholdee and Bureerat 2013; Tejani et al. 2019; Chou and Truong 2020; Kumar et al. 2021b; Panagant et al. 2021) for validating the performance of MOEOSMA in solving large-scale structural optimization problems. The structural mass and compliance are specified as objective functions subject to allowable stress constraints. The truss optimization problem can be formulated as Eq. (16) (Pholdee and Bureerat 2013; Panagant et al. 2021).

$$\begin{gathered} {\text{Consider }}{\mathbf{A}} = [A_{1} ,A_{2} , \cdots ,A_{m} ] \hfill \\ {\text{Minimize }}f_{1} ({\mathbf{A}}) = \sum\limits_{i = 1}^{m} {A_{i} \rho_{i} L_{i} } \hfill \\ \, f_{2} ({\mathbf{A}}) = {\mathbf{u}}^{{\text{T}}} {\mathbf{F}} \hfill \\ {\text{subject to }}\left| {\sigma_{i} } \right| - \sigma_{i}^{\max } \le 0{\text{ (stress constraint)}} \hfill \\ \, A_{i}^{lb} \le A_{i} \le A_{i}^{ub} {\text{ (side constraint)}}{.} \hfill \\ \end{gathered}$$
(16)

where \(f_{1}\) denotes the structural mass, \(f_{2}\) denotes the compliance, \(A_{i}\) is the design variable, \(\rho_{i}\) is the density, \(L_{i}\) is the element length, \({\mathbf{u}}\) denotes the displacement vector, \({\mathbf{F}}\) denotes the loading vector, \(\sigma_{i}\) is the element stress, and \(\sigma_{i}^{\max }\) is the maximum allowable stress of the element.
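Given per-element data and the displacement and load vectors from finite element analysis, the two objectives and the stress-constraint violation of Eq. (16) can be evaluated as in this sketch (the FEA itself is assumed to be available separately):

```python
import numpy as np

def truss_objectives(A, rho, L, u, F_load):
    """Mass and compliance objectives of Eq. (16) from per-element data.

    A, rho, L : (m,) element areas, densities, and lengths
    u, F_load : (n_dof,) nodal displacement and load vectors from FEA
    """
    f1 = float(np.sum(A * rho * L))   # structural mass: sum of A_i * rho_i * L_i
    f2 = float(u @ F_load)            # compliance: u^T F
    return f1, f2

def stress_violation(sigma, sigma_max):
    """Total violation of |sigma_i| - sigma_max <= 0, usable as a penalty term."""
    return float(np.sum(np.maximum(0.0, np.abs(sigma) - sigma_max)))
```

The side constraints on \(A_{i}\) are handled directly by the bounded (discrete) search space, so only the stress constraint needs a violation measure.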

The displacement and loading vectors in Eq. (16) are obtained from finite element analysis. The material density, modulus of elasticity, and allowable stress are set to 7850 kg/m³, 200 GPa, and 400 MPa, respectively. In practice, the size of each structural member is usually a discrete design variable due to standard beam sizing; therefore, the sizing variables are specified as discrete. The structures of the four trusses are shown in Fig. 13, where the 60-bar and 200-bar trusses are planar (2D) and the 72-bar and 942-bar trusses are spatial (3D). The number of design variables may not equal the number of truss members because of grouped design variables; it is 25, 16, 29, and 59 for the 60-bar, 72-bar, 200-bar, and 942-bar problems, respectively. Table 5 summarizes the features of these real-world constrained engineering problems.

Fig. 13

Schematic diagram of the truss structure. a 60-bar truss, b 72-bar truss, c 200-bar truss, d 942-bar truss

Table 5 Characteristics of real-world constrained engineering problems

5.2 Constraint handling method

The penalty function is the most popular approach for handling constraints because it provides an unconstrained equivalent of the constrained problem. A good penalty function works in such a way that a feasible solution has a smaller penalized objective value than an infeasible one. Of two feasible solutions, the one with the lower objective function value is better; of two infeasible solutions, the one with the smaller constraint violation is better. Therefore, during the optimization process, a penalty value is added to infeasible solutions to guide the search agents away from the infeasible region, as shown in Eq. (17) (Savsani and Savsani 2016).

$$O_{n} ({\mathbf{x}}) = f_{n} ({\mathbf{x}}) + w \cdot \sum\limits_{i = 1}^{l} {\max \left( {0,g_{i} ({\mathbf{x}})} \right)} , \, n = 1,2, \cdots ,M$$
(17)

where \(O_{n} ({\mathbf{x}})\) indicates the nth penalized objective function value, \(f_{n} ({\mathbf{x}})\) indicates the objective function value without taking the constraints into account, and \(w = 10^{8}\) is the static penalty coefficient.
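The static-penalty transformation of Eq. (17) can be sketched as follows. As in Eq. (17), only inequality constraints \(g_{i}({\mathbf{x}}) \le 0\) are penalized; the same penalty is added to every objective, so an infeasible solution is pushed away from the front in all objectives at once.

```python
def penalized_objectives(x, objectives, constraints, w=1e8):
    """Static-penalty transformation of Eq. (17).

    objectives  : list of callables f_n(x) -> float (minimised)
    constraints : list of callables g_i(x) -> float, feasible when g_i(x) <= 0
    """
    # total constraint violation, counting only violated constraints
    penalty = w * sum(max(0.0, g(x)) for g in constraints)
    return [f(x) + penalty for f in objectives]
```

With the large coefficient \(w = 10^{8}\), any infeasible solution is dominated by every feasible one in the penalized objective space, which is the intended effect of a static penalty.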

5.3 Performance metrics

Two performance metrics, the Hypervolume (HV) (Panagant et al. 2021) and the Spacing-to-Extent (STE) (Tejani et al. 2019), are used to measure the performance of the optimization algorithm. HV is used to measure the convergence and extension of the PF, while STE is the ratio between spacing and extent of the PF. In this research, the STE value is set to 100 if there is only one solution in the PF; if there are two solutions in the PF, the STE value is set to 10; otherwise, the STE value is calculated according to Eq. (18).

$$\begin{gathered} STE = {{Spacing} \mathord{\left/ {\vphantom {{Spacing} {Extent}}} \right. \kern-0pt} {Extent}} \hfill \\ Spacing = \frac{1}{{\left| {{\text{PF}}} \right| - 1}}\sum\limits_{i = 1}^{{\left| {{\text{PF}}} \right|}} {\left( {d_{i} - \overline{d}} \right)^{2} } \hfill \\ Extent = \sum\limits_{i = 1}^{M} {\left| {f_{i}^{\max } - f_{i}^{\min } } \right|} \hfill \\ \end{gathered}$$
(18)

where \(\left| {{\text{PF}}} \right|\) denotes the number of solutions in the obtained PF, \(d_{i}\) is the Euclidean distance between the objective function vector of the ith solution and its nearest neighbor, \(\overline{d}\) is the average of all \(d_{i}\), \(M\) is the number of objective functions, \(f_{i}^{\max }\) and \(f_{i}^{\min }\) are the maximum and minimum values of the ith objective function in the PF, respectively.
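The STE computation, including the fallback values for degenerate fronts, can be sketched as follows. This follows the Spacing formula exactly as printed in Eq. (18), i.e. without the square root that appears in some definitions of the Spacing metric.

```python
import numpy as np

def ste(F):
    """Spacing-to-Extent (Eq. 18) of a front F, rows = objective vectors."""
    n, M = F.shape
    if n == 1:
        return 100.0          # a single solution: Spacing undefined, worst score
    if n == 2:
        return 10.0           # two solutions: Spacing still undefined
    D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    d = D.min(axis=1)                                 # nearest-neighbour distances
    spacing = np.sum((d - d.mean()) ** 2) / (n - 1)   # as printed in Eq. (18)
    extent = np.sum(F.max(axis=0) - F.min(axis=0))
    return spacing / extent
```

A perfectly uniform front has identical nearest-neighbour distances, so Spacing (and hence STE) is zero.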

The superior PF has a larger HV value and a smaller STE value. For the HV metric, the reference point for each test problem is 1.1 times the maximum objective value of the PF obtained by 100 independent runs of MOEOSMA, as shown in Table 6. If all solutions in the PF obtained by the algorithm are dominated by the reference point, the HV value is set to 0.
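For a two-objective minimization front, the HV with respect to such a reference point can be computed by a simple sweep, as in this sketch (higher-dimensional HV requires a dedicated algorithm; the zero fallback for fully dominated fronts matches the rule above):

```python
import numpy as np

def hv_2d(F, ref):
    """Hypervolume of a 2-objective minimisation front w.r.t. reference point
    ref; returns 0 when no point of the front dominates ref."""
    P = np.asarray(F, float)
    P = P[np.all(P < ref, axis=1)]       # keep only points dominating ref
    if len(P) == 0:
        return 0.0
    P = P[np.argsort(P[:, 0])]           # sweep along the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in P:
        if f2 < prev_f2:                 # skip points dominated within the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

Sorting by the first objective lets each non-dominated point contribute one disjoint rectangle, so the rectangles sum exactly to the dominated area.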

Table 6 Reference points of real-world engineering problems

5.4 Discussion of results

To verify the efficiency of the proposed algorithm, the results obtained by MOEOSMA were compared with those of eleven well-known MOAs: MOSMA (Premkumar et al. 2021b), MOALO (Mirjalili et al. 2017c), MOGWO (Mirjalili et al. 2016), the multi-objective marine predator algorithm (MOMPA) (Zhong et al. 2021), MOMVO (Mirjalili et al. 2017b), MOPSO (Coello et al. 2004), MSSA (Mirjalili et al. 2017a), the multi-objective dragonfly algorithm (MODA) (Mirjalili 2016), MOEA/D (Zhang and Li 2007), PESA-II (Corne et al. 2001), and SPEA2 (Zitzler et al. 2001). All algorithms have a population size of 100 and an archive capacity of 100, and each is run 30 times independently. The maximum number of iterations is 200 for CMOP01 to CMOP08 and 500 for the four truss optimization problems.

The mean and standard deviation of the HV values obtained by MOEOSMA and the other comparison algorithms on the twelve engineering problems are presented in Table 7 and Fig. 14. As can be seen from Table 7, MOEOSMA, MOMPA, MOMVO, and PESA-II obtained the best HV values on 8, 1, 2, and 1 engineering problems, respectively. It is worth noting that MOEOSMA outperforms the other algorithms on engineering problems with two objective functions, but its performance on problems with more than two objectives needs improvement. This is because MOEOSMA is updated based on the elite Pareto optimal solutions in the equilibrium pool. When solving many-objective optimization problems, the number of non-dominated solutions increases exponentially, and the low selection pressure exerted by the Pareto dominance relationship makes it difficult for MOEOSMA to select elite non-dominated solutions. For the three-objective engineering problems, MOMPA and MOMVO achieved better results, and for the five-objective engineering problem, PESA-II achieved the best results, but PESA-II runs 1000 times slower than MOEOSMA. Although MOEOSMA does not perform best on problems with more than two objectives, it remains strongly competitive. In addition, MOEOSMA performs better than the other algorithms on the large-scale truss optimization problems. According to the NFL theorem (Wolpert and Macready 1997), no algorithm performs best on all problems, and MOEOSMA is best suited to real-world engineering problems with two objectives. Friedman test results further show that MOEOSMA, MOMPA, and MOMVO are superior to the other competing algorithms in terms of convergence and diversity of the PF.

Table 7 The HV values obtained by all comparison algorithms on engineering problems
Fig. 14

The box plot of HV values obtained by all comparison algorithms

For the HV metric, a larger value indicates better convergence and coverage of the PF. As can be seen from Fig. 14, the minimum HV value obtained is 0, meaning that all solutions obtained by that algorithm are dominated by the problem's reference point; such a PF is the worst. In addition, MOEOSMA has the highest boxes with the fewest outliers and the narrowest spread, indicating good generalization ability and stability. MOEOSMA outperforms the existing MOSMA on all problems except the car side impact problem, which verifies the effectiveness of the improved strategies used in this study. Although MOEOSMA does not obtain the best results on the many-objective optimization problems, it remains at the same level as several state-of-the-art algorithms.

The STE values of PF produced by MOEOSMA and other comparison algorithms are recorded in Table 8 and Fig. 15. As shown in Table 8, MOEOSMA, MOMVO, MSSA, and SPEA2 obtain the most uniformly distributed PF on 2, 1, 2, and 7 engineering problems, respectively. The Friedman statistical test results show that the overall distribution of PF obtained by SPEA2 is the best, followed by MOEOSMA. The effectiveness and efficiency of the equilibrium pool strategy and crowding distance method on various MOPs are verified. In addition, the Friedman rankings in Tables 7 and 8 show that MODA, MOSMA and MOEA/D do not solve these engineering problems well. The search efficiency of MODA and MOSMA needs to be improved, while MOEA/D may not be good at handling engineering problems with constraints.

Table 8 The STE values obtained by all comparison algorithms on engineering problems
Fig. 15

The box plot of STE values obtained by all comparison algorithms

For the STE metric, smaller values indicate better uniformity and extensiveness of the PF. As can be seen from Fig. 15, the maximum STE value obtained is 100. This occurs when an algorithm obtains a PF containing only one solution, so the Spacing cannot be calculated; since such a PF is the worst, its STE is set to a relatively large value. If the obtained PF has only two solutions, the Spacing likewise cannot be calculated, and the STE value is set to 10. For a PF with more than two solutions, the STE value (usually less than 1) is calculated by Eq. (18). According to Fig. 15, most algorithms can obtain a well-distributed PF. However, MOSMA, MOPSO, MODA, and MOEA/D perform poorly on some problems and can obtain only one or two non-dominated solutions. In addition, SPEA2 obtains the best PF distribution on many problems, but its convergence accuracy is not as good as that of MOEOSMA.

To avoid the influence of randomness, the Wilcoxon rank-sum test was employed to verify whether the HV and STE values obtained by the paired algorithms are significantly different. Tables 9 and 10 show the results of the paired sample Wilcoxon rank-sum test for MOEOSMA and the other comparison algorithms. Table 9 reveals that MOEOSMA significantly outperforms other algorithms for most optimization problems and outperforms all comparison algorithms for speed reducer, spring design, hydrostatic thrust bearing, and four large-scale truss optimization problems. It is verified that the comprehensive performance of MOEOSMA is better than other comparison algorithms, and there are significant differences.

Table 9 Wilcoxon p-value test results for the HV metrics (two-tailed)
Table 10 Wilcoxon p-value test results for the STE metrics (two-tailed)

Table 10 illustrates that in terms of uniformity and extensiveness, the PF obtained by MOEOSMA is not significantly different from MOMPA, MOMVO, and PESA-II on 9, 5, and 7 problems, respectively. These algorithms all obtained well-distributed PF. However, there are significant differences between MOEOSMA and MOSMA on eleven problems. As can be seen from Table 8, the STE values of MOEOSMA are smaller than that of MOSMA, indicating that MOEOSMA is significantly better than MOSMA in terms of uniformity. In addition, MOEOSMA is significantly different from MOALO, MOGWO, MSSA, MODA, and MOEA/D. The former obtains better PF distribution, which verifies the performance of MOEOSMA. When the primary search operator of MOAs has sufficient exploration capability, the distribution of PF obtained using the elite archiving mechanism based on the crowding distance (similar to MOPSO) is more uniform than that of the non-dominated ranking (similar to NSGA-II). For high-dimensional complex PF, the distribution does not deteriorate significantly. In contrast, the archiving mechanism based on the non-dominated ranking is more suitable to combine with the search operator with strong exploitation capability, thus improving the exploration capability of MOAs.

In MOPs, the number of non-dominated solutions in the PS obtained by an algorithm is crucial: if too few solutions are found, the user has little basis for trading off between objectives. The number of Pareto optimal solutions obtained can therefore be regarded as a diversity indicator. Table 11 reports the average number of Pareto optimal solutions obtained by each comparison algorithm over 30 independent runs on each problem, with the best results shown in bold. Because the archive capacity is set to 100 for all algorithms, the averages in Table 11 are at most 100. According to Table 11, MOEOSMA obtains the most non-dominated solutions on twelve engineering problems, while each of the other algorithms obtains more non-dominated solutions on only some of the problems. This demonstrates that using MOEOSMA to solve real-world MOPs gives users more options when weighing the objectives and selecting the most satisfactory solution.
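Counting the Pareto optimal solutions reported in Table 11 presupposes a non-dominance filter. A minimal sketch for minimization problems (again our own helper, not the authors' code):

```python
import numpy as np

def nondominated_mask(F):
    """Boolean mask of the non-dominated rows of an (n, m) objective matrix."""
    F = np.asarray(F, dtype=float)
    mask = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        if not mask[i]:
            continue
        # rows strictly dominated by solution i:
        # worse-or-equal in every objective and strictly worse in at least one
        dominated = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
        mask[dominated] = False
    return mask
```

For each run, `nondominated_mask(archive).sum()` would give the per-run count that is averaged in Table 11.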

Table 11 The average number of Pareto solutions obtained by all comparison algorithms

Measuring the quality of a PF is difficult, and no unary quality measure can indicate that an approximation set A is superior to a set B (Zitzler et al. 2003). Because the theoretical PF is usually unknown in real-world optimization problems, it is also challenging to design a satisfactory binary quality measure, and such metrics have the further disadvantage of not being objective. A direct comparison of the PFs can therefore demonstrate the advantages of different algorithms more accurately. Figures 16, 17, 18, and 19 present the optimal PF obtained by all comparison algorithms for the hydrostatic thrust bearing design, car side impact design, water resource management, and 60-bar truss optimization problems, respectively. The PFs for the other engineering problems are shown in Appendix Figs. 20, 21, 22, 23, 24, 25, 26, and 27.
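Despite the limitation noted above, HV remains the standard unary indicator in this comparison. In the two-objective minimization case it reduces to a sum of rectangle areas dominated by the front up to a reference point; a compact sketch (the front and reference point below are illustrative):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective non-dominated front (minimization)."""
    # On a non-dominated front, ascending f1 implies descending f2
    pts = sorted(map(tuple, front))
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # rectangle slab for this point
        prev_f2 = f2
    return hv
```

A larger HV means the front both converges closer to the true PF and covers more of it, which is why HV is used here as a combined convergence-diversity measure.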

Fig. 16

The optimal PF obtained by all comparison algorithms on the hydrostatic thrust bearing design problem

Fig. 17

The optimal PF obtained by all comparison algorithms on the car side impact design problem

Fig. 18

The optimal PF obtained by all comparison algorithms on the water resource management problem

As shown in Fig. 16, on the hydrostatic thrust bearing design problem the PFs obtained by MOALO and MOSMA are strictly dominated by the PF obtained by MOEOSMA. MOSMA and MOPSO discover only a few non-dominated solutions that satisfy all constraints, and MOEA/D cannot find any feasible solutions, indicating that the search operators of these three algorithms are inefficient on this problem. Compared with the other algorithms, MOEOSMA obtains a more uniform and extensive PF.

The optimal PF of the car side impact design problem, depicted in Fig. 17, illustrates the search performance of MOEOSMA relative to the other algorithms on a three-objective optimization problem. In this instance, the HV values indicate that MOMVO obtained the PF with the best convergence and diversity, but its distribution is not the most uniform, with many non-dominated solutions concentrated in the same region. As can be seen from Fig. 17, MOPSO, MOMPA, and SPEA2 are the three best-performing algorithms in this respect, followed by MOEOSMA and PESA-II.

Figure 18 displays the PF distribution of the first three objective functions of the five-objective water resource management problem. The result illustrates that MOPSO, MOEOSMA, and MOMPA provide the best PF distributions, but the HV values of MOMVO and MOSMA are larger, indicating that the latter two have superior convergence and are closer to the true PF.

For the truss optimization problems, taking the 60-bar truss as an example, Fig. 19 presents the PF obtained by all comparison algorithms. MOEOSMA demonstrates the best convergence performance on this problem, with a PF closest to the true PF. Moreover, the PFs obtained by many algorithms cover only part of the true PF; only MOEOSMA, MOGWO, and MOMPA achieve high coverage.

Fig. 19

The optimal PF obtained by all comparison algorithms on the 60-bar truss optimization problem

In summary, MOEOSMA shows the strongest competitiveness on all two-objective engineering problems, indicating that the multi-objective algorithm proposed in this research is very efficient for two-objective optimization. The dynamic exploration and exploitation coefficients enhance the search capability of the algorithm, and the elite archiving mechanism based on crowding distance promotes the convergence and diversity of the PF. For engineering problems with three or more objectives, MOEOSMA's performance degrades because Pareto dominance provides too little selection pressure in higher-dimensional objective spaces. Nevertheless, it remains strongly competitive and outperforms most comparable MOAs.

6 Conclusion and future work

This study developed MOEOSMA, a novel multi-objective version of the recently proposed EOSMA; EOSMA's superior performance in the decision space is the primary motivation for this extension. To handle MOPs efficiently, the proposed algorithm introduces several important components. First, dynamic exploration and exploitation coefficients are used to improve the algorithm's search ability in the decision space. Second, a rotation-based method is designed to sort the fitness of slime mould individuals when evaluating the fitness weights. Third, a Pareto archive with fixed capacity stores the best non-dominated solutions found so far, improving the convergence of solutions in the objective space. Finally, a crowding distance assessment method is developed to maintain the archive and update the equilibrium pool, promoting the diversity of solutions in the objective space.

The performance of MOEOSMA was verified on the CEC2020 functions, where the convergence of the algorithm in the decision space and the objective space was evaluated using IGDX and IGDF, respectively. The experimental results show that MOEOSMA outperforms nine well-known MOAs. In addition, eight real-world engineering problems and four truss optimization problems were tested to demonstrate the efficiency and practicality of MOEOSMA; the convergence and diversity of the algorithms were evaluated by HV, and the uniformity and extensiveness by STE. In terms of convergence and diversity, MOEOSMA is clearly superior to the other comparison algorithms; in terms of extensiveness, it is second only to SPEA2. Moreover, MOEOSMA obtained the largest number of non-dominated solutions, providing more alternatives to decision-makers. In future research, MOEOSMA can be applied to more practical optimization problems, such as the multi-objective feature selection problem (Hu et al. 2022) and the wing aeroelastic optimization problem (Wansasueb et al. 2022). In addition, the crowding distance method can be further improved to enhance the performance of MOEOSMA in solving multimodal multi-objective optimization problems.