1 Introduction

One of the critical research areas in optimization is multi-objective optimization problems (MOOPs), because a wide variety of applications can be formulated as MOOPs, such as data mining [8], economic and finance issues [43, 54, 82], utility theory, game theory, linear production theory and economics [31], preferences in MOPs [13], airline operations [22], scheduling problems [45, 55, 86], portfolio optimization [23], combinatorial problems [10, 24, 25, 37], engineering problems [5, 14, 32, 49], manpower planning [77], reservoir management [2], automatic cell planning problems [48], radiation therapy [38], and jury selection [75]. We refer the interested reader to further applications in survey articles [69, 97] and books [21, 29, 52].

A MOOP has more than one objective function, and these objective functions need to be optimized simultaneously. Solving MOOPs is challenging because the objectives conflict with one another, and the solution set obtained has the following characteristic: no solution is dominated by any other solution in the set, and each solution represents a trade-off among the objectives. This set is known as the Pareto-optimal set. In most of the applications mentioned above, the number of solutions in the Pareto-optimal set grows exponentially with the problem size; consequently, MOOPs are in general NP-hard problems.

It may therefore be computationally expensive to solve MOOPs with traditional methods. Among the drawbacks of the traditional techniques are the following: they require differentiability of the objective functions and/or constraints; computing the Jacobian and/or Hessian is expensive; they need a good initial point; they are inefficient, and tend to get stuck at sub-optimal solutions, when the underlying problems are discontinuous and/or discrete in the search space; and the objective functions and/or constraints might be non-smooth.

It is therefore appealing to develop effective techniques and algorithms that solve MOOPs while overcoming the drawbacks of the traditional methods. Among these efficient algorithms are evolutionary algorithms (EAs). It has been shown [98] that EAs can search, within a reasonable computation time, for the Pareto-optimal solutions of MOOPs that are too difficult to be solved by traditional methods.

In the past decades, many biologically and nature-inspired algorithms have been proposed to solve MOOPs, such as the multi-objective genetic algorithm (MOGA) [30], the niched Pareto genetic algorithm (NPGA) [35], the non-dominated sorting genetic algorithm (NSGA) [79] and its elitist improvement NSGA-II [19], the strength Pareto evolutionary algorithm (SPEA) [99] and its improved version SPEA2 [101], the Pareto-archived evolution strategy (PAES) [42], the Pareto envelope-based selection algorithms (PESA and PESA-II) [15], multi-objective particle swarm optimization (MOPSO) [11] and its improved version [16], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [95] and its improved versions described in detail in [97], the indicator-based evolutionary algorithm (IBEA) [100], archived multi-objective simulated annealing (AMOSA) [9], and the preference-inspired co-evolutionary algorithm (PICEA) [85]. Swarm-intelligence-based search algorithms for multi-objective optimization [1, 4, 6, 7, 11, 12, 33, 36, 50, 56, 58, 65, 81, 84, 89, 94] have also been developed and applied to a variety of MOOPs. Lately, several multi-objective algorithms [62, 90, 92] based on search techniques other than those used in genetic algorithms, EAs, and particle swarm optimization [41] have also shown great potential in obtaining Pareto fronts close to the true Pareto front.

Nature-based meta-heuristics have caught the attention of many researchers due to their capability to solve problems that are multi-dimensional, multi-modal, combinatorial, or have large search spaces. The concept of a nature-based meta-heuristic was suggested by Holland in 1975 [34] with the introduction of the genetic algorithm (GA); in the following three decades many effective population-based algorithms were developed, such as ant colony optimization (ACO) [20], the artificial immune algorithm (AIA) [28], differential evolution (DE) [80], the bacteria foraging algorithm (BFA) [61], shuffled frog leaping (SFL) [27], and particle swarm optimization (PSO) [41]. These algorithms have been applied to various real-life engineering, scientific, and management problems. Their effectiveness has further motivated researchers to develop new algorithms based on the principles of nature. The development of nature-based algorithms has been remarkable since 2005, and many new algorithms have appeared since then. It is difficult to mention them all, but a few examples are invasive weed optimization (IWO) [51], the artificial bee colony (ABC) [39], biogeography-based optimization (BBO) [78], the gravitational search algorithm (GSA) [68], grey wolf optimization (GWO) [53], the firefly algorithm (FA) [87], the cuckoo search (CS) algorithm [91], the bat algorithm (BA) [88], the grenade explosion method (GEM) [3], charged system search (CSS) [40], teaching-learning based optimization (TLBO) [66, 67], animal migration optimization (AMO) [47], the water cycle algorithm (WCA) [26], the runner root algorithm (RRA) (Merrikh-Bayat, 2015), the mine blast algorithm (MBA) [71], heat transfer search (HTS) [63], the lightning search algorithm (LSA) [76], the lion optimization algorithm (LOA) [93], passing vehicle search (PVS) [72], and many more. Many of these algorithms have proven their ability in real-life applications, and many have effective multi-objective versions, such as GA [12, 17], PSO [69, 84], ABC [4, 58, 94], ACO [6, 86], AIA [7, 33], GSA [54], BBO [36, 70], IWO [57], FA [90], CS [92], BA [89], and TLBO [43, 62, 63].

In addition to the algorithms listed above, the artificial algae algorithm (AAA) is an effective, recently developed algorithm for single-objective optimization [83] that mimics the living behavior of microalgae. So far, a multi-objective version of AAA has not been developed or investigated. The basic search mechanism of AAA and its effectiveness for single-objective optimization motivate the development of a multi-objective version and the assessment of its efficacy on challenging optimization problems. This paper is therefore focused on developing a useful technique that combines a multi-objective optimization method with the artificial algae algorithm (AAA); the proposed method is denoted MO-AAA. AAA is an effective meta-heuristic developed in 2015 [83] that has proved its effectiveness and robustness regarding accuracy, computational effort, and convergence. Later in this paper it is shown that MO-AAA is capable of generating non-dominated solutions and approximating the true Pareto front (PF). The first contribution of this paper is to introduce the multi-objective AAA algorithm; the second contribution is to demonstrate the effectiveness of MO-AAA on engineering design problems. The proposed method is investigated on different multi-objective benchmark problems (the 20 challenging unconstrained and constrained multi-objective benchmark problems from CEC 2009) and on engineering design problems; these benchmarks pose significant challenges to multi-objective optimization algorithms. We compare the proposed method with 16 other (recently developed) algorithms on the unconstrained problems and 9 (recently developed) algorithms on the constrained problems, as well as with other well-known multi-objective optimization algorithms.

The organization of the paper is as follows: Section 2 defines multi-objective optimization problems and Pareto optimality. Section 3 presents the artificial algae algorithm (AAA) in detail. Section 4 describes the proposed MO-AAA algorithm. Section 5 investigates the performance of MO-AAA on various benchmark problems and on different engineering design problems. Finally, Section 6 offers concluding remarks and directions for future work.

2 Multi-objective optimization problem

In this paper, we assume that all objective functions are of the minimization type; a maximization objective can be transformed into a minimization one by multiplying it by negative one. A minimization multi-objective decision problem (multi-objective optimization problem) with n objectives is defined as follows [10, 44, 52]:

Given a k-dimensional decision variable vector \(\vec{x}=\left\{ x_{1},x_{2},\mathellipsis ,x_{k} \right\}\) in the solution space S, find a vector \(\vec{x}^{\ast}\) that minimizes a given set of n objective functions

\(f\left(\vec{x}^{\ast} \right)=\left\{ f_{1}\left(\vec{x}^{\ast}\right),\thinspace f_{2}\left(\vec{x}^{\ast} \right),\mathellipsis ,f_{n}\left(\vec{x}^{\ast} \right) \right\}\). The solution space S is generally confined by a series of constraints, such as \(g_{i}\left(\vec{x} \right)\le 0,\thinspace i = 1,\thinspace 2,\mathellipsis ,I\), \(h_{j}\left(\vec{x} \right)= 0,\thinspace j = 1,2,\mathellipsis ,J\), and bounds on the decision variables, \({(\vec{x})}^{l}\le \vec{x}\le \left(\vec{x} \right)^{u}\). The objective space is defined as \(F=\{\vec{f}=\left(f_{1},f_{2},\mathellipsis ,f_{n} \right):f_{i}=f_{i}\left(\vec{x} \right),\thinspace \forall \vec{x}\in S,\thinspace i = 1,2,\mathellipsis ,n\}\).

In various real-life problems, the objective functions under consideration conflict with each other. Thus, a solution x that minimizes a single objective is usually not suitable for the other objectives, and a solution that simultaneously optimizes every objective generally does not exist. A reasonable approach to a multi-objective problem is therefore to seek a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. If all objective functions are minimized, a feasible solution x is said to dominate another feasible solution y (x ≺ y) if and only if fi(x) ≤ fi(y) for i = 1, 2,…,n and fj(x) < fj(y) for at least one objective function j. A solution is Pareto optimal if it is not dominated by any other solution in the solution space; a Pareto-optimal solution cannot be improved with respect to any objective without deteriorating at least one other objective. The set of all feasible non-dominated solutions in S is referred to as the Pareto-optimal set, and the corresponding objective function values in the objective space are called the Pareto front. For many problems, the number of Pareto-optimal solutions is enormous (perhaps infinite).
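To make the dominance test concrete, the following minimal Python sketch (our illustration, not part of any cited implementation) checks whether one objective vector dominates another under minimization:

```python
def dominates(fx, fy):
    """Return True if objective vector fx dominates fy (minimization).

    fx dominates fy when it is no worse in every objective and strictly
    better in at least one objective.
    """
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))


# Example: (1, 3) dominates (2, 3); (1, 4) and (2, 3) are mutually non-dominated.
assert dominates((1, 3), (2, 3))
assert not dominates((1, 4), (2, 3)) and not dominates((2, 3), (1, 4))
```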

Our aim in a multi-objective optimization algorithm is to find solutions in the Pareto optimal set. Nonetheless, finding the entire Pareto optimal set, for many multi-objective problems, is practically impossible due to the large size of the problems. Therefore, a practical approach to multi-objective optimization is to examine a set of solutions (the best-known Pareto set) that represent the Pareto optimal set as well as possible.

3 Artificial Algae Algorithm (AAA)

The AAA algorithm, proposed in [83], is based on the living behavior of microalgae. Artificial algae update each solution in the feasible region by idealizing the characteristics of algae. Based on the natural behavior of algae, AAA consists of three basic parts, namely the evolutionary process, the adaptation process, and helical movement. In AAA, each population member is called an algal colony. In the evolutionary process, if an algal colony receives enough light under sufficient nutrient conditions, it grows and reproduces, generating new algal cells over time; a colony that does not receive enough light survives only for a short duration and eventually dies. Based on this concept, an algal colony representing a good solution grows and replicates, while the smallest algal colony dies during the evolutionary process. The evolutionary process therefore requires the size of each algal colony (G); in the algorithm, G is the fitness value of the objective function. In the adaptation process, an insufficiently grown algal colony tries to resemble the biggest algal colony in the environment. The algorithm handles this process in the form of starvation: the initial starvation value is zero for each artificial alga and increases with time, and the artificial alga with the highest starvation level is adapted according to the adaptation parameter Ap, a constant in the interval [0,1]. Algae generally swim and try to stay close to the water surface to receive adequate light for survival. They swim helically in the liquid to counter gravity and the drag force of the liquid. In AAA, the movement counteracting gravity is taken as 0, and viscous drag is considered in the form of a shear force proportional to the size of the algal cell. Since the cell is spherical, the friction surface is the surface of a hemisphere:

$$ \tau \left( x_{i} \right)= 2\pi \left( \sqrt[3]{\frac{3G_{i}}{4\pi }} \right)^{2} $$
(1)

where τ(xi) is the friction surface of colony i.

For a one-dimensional problem the algal colony moves in a single direction, for a two-dimensional problem the algal movement is sinusoidal, and for problems with three or more dimensions the algae follow a helical movement.

The step-wise implementation of the basic AAA is given below; a code sketch of its core update follows the steps:

Step-1:

Set the algorithm parameters: shear force (Δ), loss of energy (e), energy (E), and adaptation parameter (Ap); define the number of design variables (DV) and the maximum number of generations.

Define the function to be optimized:

$$\textit{Minimize } f(X),\ \textit{subject to } X=\{x_{1}, x_{2},\mathellipsis ,x_{DV}\}. $$
Step-2:

Initialize a population of algal colonies, evaluate the algal size (Gi) of each colony, and calculate the friction surface τi.

Step-3:

Set starvation = true and update the solution using the following procedure while E(xi) > 0 and starvation = true.

Choose a colony j from the population and three dimensions k, l, and m of j at random for the helical movement.

Update the solution using the helical movement equations between xi and xj:

$$x_{ik}^{t + 1}=x_{ik}^{t}+\left( x_{jk}^{t}-x_{ik}^{t} \right)\left( {\Delta}-\tau_{i} \right)\cos\alpha $$
$$x_{il}^{t + 1}=x_{il}^{t}+\left( x_{jl}^{t}-x_{il}^{t} \right)\left( {\Delta} -\tau_{i} \right)\sin\beta $$
$$x_{im}^{t + 1}=x_{im}^{t}+\left( x_{jm}^{t}-x_{im}^{t} \right)\left( {\Delta} -\tau_{i} \right)p $$

where α and β are random angles in the range [0, 2π], and p is a random number in the range [-1, 1].

Calculate the energy loss as E(xi) = E(xi) − (e/2).

Check whether the new solution is better than the current solution; if so, set starvation = false. Otherwise, reduce the energy E(xi) again; if E(xi) > 0 and starvation = true still hold, increase the adaptation parameter (Ap) and set xi = xs.

Step-4:

Evaluate the size (G) of the population, choose one dimension r for reproduction, and replace \(x_{\mathrm{worst},r}^{t}=x_{\mathrm{best},r}^{t}\).

Step-5:

Generate a random number (rand); if rand < Ap, update the starving solution (xs) using the following equation:

$$x_{s}^{t + 1}={x_{s}^{t}}+rand\left( x_{\mathrm{best}}^{t}-{x_{s}^{t}} \right) $$
Step-6:

Repeat the procedure until the termination criteria are satisfied.
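As a minimal sketch of the core of Step-3 and of the friction surface of Eq. (1), the helical-movement update for a single colony might be coded as follows. The function names and the uniform random choice of the three dimensions are our assumptions, and the problem must have at least three dimensions.

```python
import math
import random


def friction_surface(G_i):
    """Friction surface of a hemispherical algal colony of size G_i, Eq. (1)."""
    return 2.0 * math.pi * (3.0 * G_i / (4.0 * math.pi)) ** (2.0 / 3.0)


def helical_move(x_i, x_j, tau_i, shear):
    """One helical-movement update of colony x_i toward colony x_j (Step-3).

    Three randomly chosen dimensions k, l, m are moved toward x_j with step
    sizes scaled by (shear - tau_i); alpha and beta are random angles in
    [0, 2*pi] and p is a random number in [-1, 1].
    """
    x_new = list(x_i)
    k, l, m = random.sample(range(len(x_i)), 3)   # requires at least 3 dimensions
    alpha = random.uniform(0.0, 2.0 * math.pi)
    beta = random.uniform(0.0, 2.0 * math.pi)
    p = random.uniform(-1.0, 1.0)
    x_new[k] += (x_j[k] - x_i[k]) * (shear - tau_i) * math.cos(alpha)
    x_new[l] += (x_j[l] - x_i[l]) * (shear - tau_i) * math.sin(beta)
    x_new[m] += (x_j[m] - x_i[m]) * (shear - tau_i) * p
    return x_new
```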

As seen from the above procedure, AAA is a population-based method that follows the living behavior of microalgae to update its solutions. The genetic algorithm (GA) is the basic nature-based algorithm, and most recent algorithms can be algorithmically linked to it in order to understand the different algorithms better [18, 59]. Such algorithmic linking can provide an appropriate direction for modifying existing algorithms and can even be useful for predicting the effect of specific modifications to an algorithm. The basic GA uses recombination and mutation to update solutions. AAA also uses a recombination operator involving two parent solutions: one is the serial parent, taken in the sequence of the population, and the other parent is randomly chosen from the population. Mutation in AAA is performed on the worst solution in the population by replacing one of its dimensions directly from the best solution of that generation. AAA follows an elite-preservation operation in which new solutions are compared directly with the current ones, and the better one is preserved. Also, AAA does not change all the dimensions of the current solution to obtain a new one; rather, it updates only three dimensions at a time, which provides the perturbation used to update a solution.

4 Multi-Objective Artificial Algae Algorithm (MO-AAA)

We introduce the elitist non-dominated sorting method and the diversity-preserving crowding distance approach of NSGA-II into the proposed MO-AAA algorithm for sorting the population into different non-domination levels with computed crowding distances. We first describe the elitist non-dominated sorting used to obtain the different non-domination levels, and then explain the crowding distance approach used to preserve diversity among the optimal set of solutions.

Firstly, for each solution obtained from the basic AAA algorithm or from the initially generated random population Po, all the objectives of the objective vector F are evaluated. A domination count np, defined as the number of solutions dominating solution p, and the set Sp of solutions dominated by solution p are computed for each solution. Secondly, all solutions p with a domination count of zero are placed in the first non-dominated level, also known as the Pareto front (PF), and their non-domination rank (NDRp) is set to 1. Thirdly, for each solution p with np = 0, each member q of the set Sp is visited and its domination count nq is reduced by one; whenever nq falls to zero, the corresponding solution q is placed in the second non-domination level and NDRq is set to 2. The procedure is repeated for each member of the second non-domination level to obtain the third non-domination level, and so on, until the whole population is sorted into non-domination levels.
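A compact Python sketch of this sorting, in the style of the fast non-dominated sorting of NSGA-II (assuming the `dominates` helper sketched in Section 2; our illustration, not the authors' code), is:

```python
def fast_non_dominated_sort(objs):
    """Assign a non-domination rank (1 = first front) to each objective vector.

    objs: list of objective vectors. Returns (fronts, rank), where fronts is a
    list of index lists (one per non-domination level) and rank[i] is NDR_i.
    """
    n = len(objs)
    S = [[] for _ in range(n)]        # S[p]: indices of solutions dominated by p
    n_dom = [0] * n                   # n_dom[p]: number of solutions dominating p
    rank = [0] * n
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(objs[p], objs[q]):
                S[p].append(q)
            elif dominates(objs[q], objs[p]):
                n_dom[p] += 1
        if n_dom[p] == 0:
            rank[p] = 1
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        next_front = []
        for p in fronts[i]:
            for q in S[p]:
                n_dom[q] -= 1
                if n_dom[q] == 0:
                    rank[q] = i + 2
                    next_front.append(q)
        i += 1
        fronts.append(next_front)
    fronts.pop()                      # drop the trailing empty front
    return fronts, rank
```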

In the crowding distance approach for maintaining diversity among the obtained solutions, the population is first sorted in ascending order of each objective function value. An infinite crowding distance is then assigned to the boundary solutions, i = 1 and i = l, of each objective, where l is the total number of solutions in the particular non-dominated set and the boundary solutions are those with the minimum (i = 1) and maximum (i = l) function values. Every other solution of the sorted population (i = 2 to l-1), for each objective j (j = 1,2,…,m), is assigned the crowding distance (\({d_{j}^{i}}\)) as:

$${d_{j}^{i}}=\frac{f_{j}^{i + 1}-f_{j}^{i-1}}{f_{j}^{\max}-f_{j}^{\min}} $$

In the above equation, the numerator on the right-hand side is the difference in the values of objective function j for the two neighboring solutions (i + 1 and i-1) of solution i. Once all the solutions in the sorted population of a particular non-dominated set have been assigned a crowding distance, each solution i carries two entities: a non-domination rank NDRi and a crowding distance CDi. A crowded-comparison operator (≺n) is then used to compare two solutions i and j: i ≺n j if (NDRi < NDRj) or ((NDRi = NDRj) and (CDi > CDj)). That is, between two solutions, the one with the lower non-domination rank is preferred, and if both solutions have the same non-domination rank, the one with the higher crowding distance is preferred.
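The crowding-distance assignment for one non-dominated front and the crowded-comparison test can be sketched as follows (our hypothetical helpers; per-objective contributions are accumulated into CDi, and boundary solutions receive an infinite distance):

```python
def crowding_distance(front_objs):
    """Crowding distance CD_i for the objective vectors of one non-dominated front."""
    l = len(front_objs)
    m = len(front_objs[0])
    cd = [0.0] * l
    for j in range(m):
        order = sorted(range(l), key=lambda i: front_objs[i][j])
        f_min, f_max = front_objs[order[0]][j], front_objs[order[-1]][j]
        cd[order[0]] = cd[order[-1]] = float("inf")   # boundary solutions
        if f_max == f_min:
            continue
        for pos in range(1, l - 1):
            i = order[pos]
            cd[i] += (front_objs[order[pos + 1]][j]
                      - front_objs[order[pos - 1]][j]) / (f_max - f_min)
    return cd


def crowded_better(ndr_i, cd_i, ndr_j, cd_j):
    """Crowded-comparison operator: True if solution i is preferred over j."""
    return ndr_i < ndr_j or (ndr_i == ndr_j and cd_i > cd_j)
```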

It can be observed from the previous section that the AAA algorithm cannot be used directly for multi-objective optimization. Critical modifications to the basic AAA are required to make it suitable for multi-objective optimization, because a multi-objective problem has a set of solutions (Pareto solutions) rather than a single optimum solution. All Pareto solutions are equally important with respect to the objective functions, so it is difficult to single out any solution as the best (x_best) or the worst (x_worst). It is also difficult to evaluate a condition such as f(X′) < f(X), where X′ and X are the new solution and the existing solution, respectively, because more than one objective is involved. Modifications are therefore made to the basic AAA algorithm to adapt it to multi-objective optimization problems. To compare the updated solution with the existing solution, we use the concepts of rank and crowding distance: the updated solution is considered better if it has a better rank than the existing solution, and if both solutions have the same rank, the one with the better crowding distance value is considered superior. The solution with the best rank and distance value is taken as the best solution in the population, i.e., x_best, and by the same reasoning the solution with the worst rank and distance value is taken as x_worst. Thus, the modification preserves the concept of the basic AAA algorithm, and the mathematical expressions for updating the solutions remain unaltered from the basic AAA. It should also be noted that an updated solution may fly outside the variable bounds and must be set back within them. Padhye et al. [60] present different concepts to handle boundary violation; MO-AAA uses the set-on-boundary approach, in which an updated variable below its lower limit is set to the lower limit value and an updated variable above its upper limit is set to the upper limit value.
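The set-on-boundary repair described above amounts to clamping each violated variable to its nearest bound; a minimal sketch (function name is ours):

```python
def set_on_boundary(x, lower, upper):
    """Clamp every decision variable of x back onto its nearest violated bound."""
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]


# Example: with bounds [0, 1] per variable, [-0.2, 0.4, 1.7] becomes [0.0, 0.4, 1.0].
```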

Algorithm 1 Pseudocode of the proposed MO-AAA algorithm

The procedure of the proposed MO-AAA algorithm is shown in Algorithm 1. Firstly, parameters such as the population size (Npop), the termination criteria, and the maximum number of generations (Ngen) are initialized. Secondly, a random parent population Po in the feasible region S is generated, and each objective function of the objective vector F is evaluated for Po. Next, the elitist non-dominated sorting and crowding distance computation explained in the previous section are applied to Po. Thirdly, the AAA algorithm is employed to create a new population Pj, which is then merged with Po to form the merged population Pi. Pi is sorted based on elitist non-domination and on the computed values of NDR and CD, and the best Npop solutions are retained to form the new parent population. This process is repeated until the maximum number of generations (iterations) is reached; the same algorithm can also be used with a termination criterion based on the number of function evaluations. Since the non-dominated sorting and crowding distance assignment of MO-AAA are adopted from NSGA-II, the computational complexity of MO-AAA is the same as that of NSGA-II, i.e., O(m·Npop²), where m is the number of objective functions and Npop is the population size.
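A sketch of the environmental selection performed in each generation of MO-AAA (merging the parent and offspring populations and keeping the best Npop solutions by rank and crowding distance, as in NSGA-II) is given below. It reuses the `fast_non_dominated_sort` and `crowding_distance` helpers sketched earlier and is our illustration rather than the authors' implementation.

```python
def environmental_selection(merged_pop, merged_objs, n_pop):
    """Keep the best n_pop solutions of the merged population by (NDR, CD).

    Fronts are admitted in order of non-domination rank; the last, partially
    fitting front is truncated by descending crowding distance.
    """
    fronts, _ = fast_non_dominated_sort(merged_objs)
    survivors = []
    for front in fronts:
        if len(survivors) + len(front) <= n_pop:
            survivors.extend(front)
        else:
            cd = crowding_distance([merged_objs[i] for i in front])
            ranked = sorted(range(len(front)), key=lambda k: cd[k], reverse=True)
            survivors.extend(front[k] for k in ranked[: n_pop - len(survivors)])
            break
    return ([merged_pop[i] for i in survivors],
            [merged_objs[i] for i in survivors])
```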

5 Numerical experimentation

This section presents the numerical experimentation and performance evaluation of MO-AAA for classical multi-objective benchmark problems, challenging benchmark problems of CEC 2009 and engineering design benchmark problems. The results of MO-AAA are compared with other efficient as well as recently developed multi-objective optimization algorithms. The controlling parameters for MO-AAA are given in Table 1.

Table 1 Controlling parameter for MO-AAA

5.1 Benchmark problems-1

We carry out numerical experiments on various classical benchmark multi-objective problems in this subsection. We consider classical and well-established multi-objective benchmark functions such as SCH, ZDT1, ZDT2, ZDT3, and LZ [46, 74, 98], which have distinct characteristics. For example, SCH is a one-dimensional problem with a convex Pareto front, ZDT1 has a convex Pareto front, ZDT2 has a non-convex Pareto front, and ZDT3 has a discontinuous Pareto front. The objective functions of the LZ function are multi-modal, which adds an additional challenge for the algorithm in finding the true Pareto front. In this work, the performance of the algorithms is measured by Generational Distance (GD), Spacing (S), and Spread (Δ) [17]. GD is calculated between the true Pareto front (PFt) and the obtained Pareto front (PFO) and indicates the average distance between them. GD is defined as:

$$GD=\frac{{\sum}_{p = 1}^{k} d_{p} }{k} $$
$$Where,\thinspace d_{p}=\left( \sum\limits_{i = 1}^{n} \left( PF_{i,p}^{O}-PF_{i,p}^{t} \right)^{2} \right)^{1/2} $$

Where, k is the number of Pareto solutions, n is the number of objective functions, \(PF_{i,p}^{O}\) indicates the ith objective value of the pth obtained Pareto solution, and \(PF_{i,p}^{t}\) indicates the nearest point on the true Pareto front to \(PF_{i,p}^{O}\). Spacing is defined as follows:

$$S=\frac{1}{k-1}\sum\limits_{p = 2}^{k} \left( D_{p}-\bar{D} \right)^{2} $$

Where, Dp is the absolute difference between two consecutive solutions in the obtained Pareto front (PFO). It is defined as:

$$D_{p}={\sum\limits_{i}^{n}} \left| PF_{i,(p-1)}^{o}-PF_{i,p}^{o} \right| $$

\(\bar {D}\) is the average of all Dp.

Spacing (S) characterizes the distribution of the obtained Pareto front: it is the standard deviation of the Dp values, and a small value of S indicates uniform spacing of the obtained Pareto solutions. Spread is defined as:

$${\Delta} =\frac{{\sum}_{j = 1}^{n} d_{j}^{ex} +{\sum}_{p = 2}^{k} \left| d_{p}-\bar{d} \right| }{{\sum}_{j = 1}^{n} d_{j}^{ex} +(k-1)\bar{d}} $$

Where, dp is the Euclidean distance between two consecutive points of the obtained Pareto front, \(d_{j}^{ex}\) is the Euclidean distance between the extreme solutions of the true Pareto front and the boundary solutions of the obtained Pareto front, and \(\bar{d}\) is the average of the dp values. Spread measures how well the obtained Pareto front covers the true Pareto front; a smaller value of Δ indicates a better spread of the obtained Pareto front with a more uniform distribution.
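Under the definitions above, GD and S can be computed as in the following sketch (our illustration; the true front is assumed to be given as a finite set of sampled points, and Δ follows analogously using the consecutive-point distances dp together with the extreme-point distances \(d_{j}^{ex}\)):

```python
import math


def generational_distance(obtained, true_front):
    """GD: mean distance from each obtained point to its nearest true-front point."""
    def nearest(p):
        return min(math.dist(p, q) for q in true_front)
    return sum(nearest(p) for p in obtained) / len(obtained)


def spacing(obtained):
    """Spacing S: deviation of the gaps D_p between consecutive obtained points."""
    pts = sorted(obtained)            # order the points along the first objective
    D = [sum(abs(a - b) for a, b in zip(pts[p - 1], pts[p]))
         for p in range(1, len(pts))]
    D_bar = sum(D) / len(D)
    return sum((d - D_bar) ** 2 for d in D) / (len(pts) - 1)
```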

For the numerical experiments, the population size is set equal to the number of Pareto solutions required in the PF, and the maximum number of generations is set to 500 for the ZDT1, ZDT2, ZDT3, and SCH functions. For the LZ function, the number of generations is set to 1000 to maintain uniformity of the experimental setup with the published results. For all the problems, the number of Pareto solutions (k) is taken as 200. The Pareto fronts for SCH, ZDT1, ZDT2, ZDT3, and LZ are shown in Figs. 1, 2, 3, 4 and 5, respectively. Figures 2 to 5 also show the convergence behaviour of MO-AAA for ZDT1, ZDT2, ZDT3, and LZ on these various types of multi-objective problems, presenting the Pareto front at the 1st, 50th, 100th, and final generations. All the figures show the true Pareto front and the obtained Pareto front. It can be noted from the results that, for all the functions, MO-AAA is capable of approximating the true Pareto front, and the convergence figures show that MO-AAA converges for these multi-objective functions. The rate of convergence is highest in the first 100 generations, during which the solutions approach near-Pareto solutions; with further computation, the solutions approach the true Pareto front.

The effectiveness of the results is assessed using the above-mentioned performance measures. The GD results for all the functions are summarized in Table 2, where they are compared with state-of-the-art multi-objective algorithms such as NSGA-II, SPEA, VEGA, MODE, DEMO, and MO-Bees. The results other than those of MO-AAA are reproduced from Yang, 2013 and Yang and Deb, 2013. The results summarized in Table 2 are averages over 25 independent runs. For the SCH function, MO-AAA produced a better GD than all the other algorithms; for ZDT1 and ZDT2, MO-AAA is better than all the compared algorithms; for ZDT3 it is inferior only to DEMO; and for LZ, MO-AAA is inferior to MODE and SPEA. In summary, MO-AAA is capable of finding true Pareto solutions for problems with convex, non-convex, and discrete Pareto fronts. The results are also compared in terms of the spread (Δ) with other multi-objective optimization algorithms, namely NSGA-II (real coded), NSGA-II (binary coded), SPEA, and PAES; the Δ results are shown in Table 3. MO-AAA shows a better Δ for ZDT1, ZDT2, and ZDT3.

Fig. 1 Pareto front for SCH function

Fig. 2 Convergence ability and Pareto front for ZDT1 function

Fig. 3 Convergence ability and Pareto front for ZDT2 function

Fig. 4 Convergence ability and Pareto front for ZDT3 function

Fig. 5 Convergence ability and Pareto front for LZ function

Table 2 Results (GD) for benchmark multi-objective problems
Table 3 Results (Δ) for the multi-objective benchmark problems

5.2 Benchmark problems-2

Several challenging benchmark functions are provided by the Congress on Evolutionary Computation (CEC) for testing the performance of multi-objective optimization algorithms; benchmark functions for multi-objective optimization are provided in CEC 2009 [96]. The effectiveness of the MO-AAA algorithm is evaluated on 20 well-defined benchmark functions of CEC 2009. Of these 20 benchmark problems, ten are unconstrained multi-objective problems (UF1-UF10) and the remaining ten are constrained multi-objective functions (CF1-CF10); the detailed mathematical formulations of the test functions are given in [96]. The performance of the algorithms is evaluated by the inverted generational distance (IGD) measure. The value of IGD indicates the distance of the approximate Pareto front from the true Pareto front, and a low value of IGD indicates that the obtained Pareto front closely approaches the true Pareto front. To calculate IGD, let P* be a set of uniformly distributed points along the true Pareto front in the objective space, and let A be the set of points representing the obtained Pareto front. The average distance from P* to A is the IGD, defined as follows:

$$IGD\left( {A,P^{\ast }} \right)=\frac{\sum\nolimits_{\nu \,\in P^{\ast }}{d\left( {\nu ,A} \right)} }{\left| {P^{\ast }} \right|} $$

Where, d(υ,A) is the minimum Euclidean distance between υ and the points in A. If |P*| is large enough to represent the Pareto front well, both the diversity and the convergence of the approximated set A can be measured by IGD(A, P*). An optimization algorithm tries to minimize IGD(A, P*); to obtain small values of this measure, the approximated set A must be very close to the true Pareto front and must not miss any part of it. To maintain consistency in the comparison of the competing algorithms, a common experimental platform is provided in [96]: the total number of function evaluations is set to 3.0E05, the population size is set to 50, and the comparison is based on the average IGD obtained over 30 independent runs. The results of MO-AAA are compared with 16 other well-known algorithms for the unconstrained problems. These algorithms are derived from other optimization methods such as genetic algorithms, differential evolution, the artificial bee colony technique, biogeography-based optimization, and teaching-learning-based optimization. Table 4 lists the average IGD over 30 runs for the ten unconstrained multi-objective benchmark problems (UF1-UF10). Table 5 compares MO-AAA with eight other algorithms on the constrained benchmark problems and lists the average IGD. All results other than those of MO-AAA are taken from [4, 64, 73] for comparison purposes. A Friedman rank test is performed to rank the performance of the algorithms, separately for the unconstrained and constrained problems, based on the average IGD. The variation of the normalized Friedman rank value is shown in Fig. 6 for the unconstrained problems and in Fig. 7 for the constrained problems. The results show that, for the unconstrained multi-objective problems, MO-AAA ranks 3rd among the 17 algorithms and can be considered competitive with MOEAD, MTS, MOABC, and MO_BBO-ACO within the range above 90%. MO-AAA secures 2nd rank among the nine algorithms for the constrained problems and is comparable with DMOEADD within the top 90% range.
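The IGD measure defined above can be computed with a sketch symmetric to the GD helper of the previous subsection (again our illustration, with P* given as a finite set of reference points):

```python
import math


def inverted_generational_distance(A, P_star):
    """IGD(A, P*): mean distance from each reference point in P* to its nearest point in A."""
    def nearest(v):
        return min(math.dist(v, a) for a in A)
    return sum(nearest(v) for v in P_star) / len(P_star)
```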

Fig. 6 Comparison of multi-objective optimization algorithms for unconstrained functions based on the Friedman rank test

Fig. 7 Comparison of multi-objective optimization algorithms for constrained functions based on the Friedman rank test

Fig. 8 Four-bar truss problem

Table 4 Results (mean IGD) for unconstrained multi-objective benchmark problems of CEC 2009

5.3 Engineering design benchmark problems

In this subsection, MO-AAA is tested on six different multi-objective engineering design problems that are widely used in the literature to investigate multi-objective optimization algorithms. The considered problems possess different characteristics. The algorithm parameters for MO-AAA are the same as those used for the benchmark problems in the previous subsections. All the engineering problems are solved with a population size of 100 (the number of solutions required in the Pareto front) and 500 iterations. For all the problems, 100 Pareto solutions are generated, and the results reported in this work are averages over 30 runs, to match the experimental settings of [71].

5.3.1 Four bar truss problem

The aim of this problem is to minimize the volume of the truss and the displacement of the joint simultaneously. The area of each link is a design variable. This is an unconstrained problem in which all the design variables are continuous. The system is shown in Fig. 8.

The problem can be stated as follows:

$$Minimize:f_{1}\left( x \right)=L\left( 2x_{1}+\sqrt{2}\,x_{2} +\sqrt{x_{3}} +x_{4} \right) $$
$${Minimize:\thinspace f}_{2}(x)=\left( F\frac{L}{E} \right)\left( \frac{2}{x_{1}}+\frac{2\sqrt{2}}{x_{2}}-\frac{2\sqrt{2}}{x_{3}}+\frac{2}{x_{4}} \right) $$

Where,

$$F = 10, E = 2e5, L = 200 $$
$$1\le x_{1},\thinspace x_{4}\le 3,\thinspace \sqrt 2 \le x_{2},\thinspace x_{3}\le 3. $$
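For use with MO-AAA, the two truss objectives can be encoded directly from the formulation above (a minimal sketch under the stated constants F = 10, E = 2e5, L = 200; the function name is ours):

```python
import math

F, E, L = 10.0, 2.0e5, 200.0


def four_bar_truss(x):
    """Objectives of the four-bar truss: structural volume f1 and joint displacement f2."""
    x1, x2, x3, x4 = x
    f1 = L * (2.0 * x1 + math.sqrt(2.0) * x2 + math.sqrt(x3) + x4)
    f2 = (F * L / E) * (2.0 / x1 + 2.0 * math.sqrt(2.0) / x2
                        - 2.0 * math.sqrt(2.0) / x3 + 2.0 / x4)
    return f1, f2


# Bounds: 1 <= x1, x4 <= 3 and sqrt(2) <= x2, x3 <= 3.
```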

This problem has a convex Pareto front in the objective space. The extreme points found for this problem are (1174.200, 0.0341) and (1727.739, 0.00276). As a closed-form expression for the exact Pareto front is not available in the literature for this problem, the performance is assessed using the spacing (S) value, which is available in the literature for different algorithms. The value of S for MO-AAA, along with those of NSGA-II, MOPSO, Micro-GA, PAES, and MOWCA [71], is given in Table 6. It can be observed from the results that MO-AAA produces a better or nearly equal spacing compared with all these algorithms. The obtained Pareto front is presented in Fig. 9, which supports the results presented in Table 6, and it matches the Pareto front available in the literature [71] well.

Fig. 9 Pareto front for the four-bar truss problem

Table 5 Results (mean IGD) for constrained multi-objective benchmark problems of CEC 2009
Table 6 Result (S) for the four-bar truss problem

5.3.2 Gear train problem

The purpose of this problem is to minimize the size of the largest gear while simultaneously minimizing the gear-ratio error with respect to the reference gear ratio of 1/6.931. All the design variables in this example can take only integer values, as they represent the numbers of teeth on the gears. Hence, this is a min-max problem with discrete design variables, which poses an additional challenge to an optimization algorithm. The system is shown in Fig. 10.

Fig. 10 Gear train

The problem can be stated as:

$${Minimize\thinspace f}_{1}(x)\thinspace =\thinspace \left( \left( \frac{1}{6.931} \right)-\left( \frac{x_{1}x_{2}}{x_{3}x_{4}} \right) \right)^{2} $$
$${Minimize\thinspace f}_{2}(x)=\thinspace max\left( \left[ x_{1}x_{2}x_{3}x_{4} \right] \right) $$

Where, 12 ≤ x1, x2, x3, x4 ≤ 60.
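A direct encoding of the two gear-train objectives (a minimal sketch; the function name is ours and the tooth counts are integers in [12, 60]):

```python
def gear_train(x):
    """Gear-train objectives: squared gear-ratio error w.r.t. 1/6.931 and largest gear size."""
    x1, x2, x3, x4 = x                  # integer numbers of teeth, 12 <= xi <= 60
    f1 = (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2
    f2 = max(x1, x2, x3, x4)
    return f1, f2
```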

The performance measures GD, S, and Δ are not available in the literature for the gear train problem, so the results are compared on the basis of the extreme Pareto solutions available in the literature. The results are compared with NSGA-II and MOWCA and are summarized in Table 7.

Table 7 Result (extreme Pareto solutions) for the gear train problem

It can be observed from the results that the extreme Pareto solutions obtained by MO-AAA are (9.92e-10, 47) and (7.32e-1, 12). These extreme solutions are better than those of NSGA-II and MOWCA at both ends. This problem has a discrete Pareto front, which is shown in Fig. 11 as obtained by MO-AAA.

Fig. 11 Pareto front for gear train problem

5.3.3 Multi-plate disk brake design

The multi-plate disk brake finds application in airplanes, providing effective braking during landing. An exploded view of the multi-plate disk brake is shown in Fig. 12. The purpose of the problem is to simultaneously minimize the mass of the brake and the stopping time. There are four design variables: the inner radius, the outer radius, the engaging (applied) force, and the number of friction surfaces (friction plates). The number of friction surfaces can take only discrete integer values, which makes this a mixed-integer problem. This is also a constrained optimization problem, with five restrictions on the difference between the radii of the friction plates, the length of the brake, the pressure sustained by the plates, the maximum generated temperature, and the braking torque. The problem can be stated as follows:

$${Minimize\thinspace f}_{1}(x)= 4.9e-5\left( {x_{2}^{2}}-{x_{1}^{2}} \right)\left( x_{4}-1 \right) $$
$${Minimize\thinspace f}_{2}(x)=\left( 9.82e6 \right)\frac{{x_{2}^{2}}-{x_{1}^{2}}}{x_{3}x_{4}\left( {x_{2}^{3}}-{x_{1}^{3}} \right)} $$
$$g_{1}= 20+x_{1}-x_{2} $$
$$g_{2}= 2.5\left( x_{4}+ 1 \right)-30 $$
$$g_{3}=\frac{x_{3}}{3.14\left( {x_{2}^{2}}-{x_{1}^{2}} \right)^{2}}-0.4 $$
$$g_{4}= 2.22e-3x_{3}\frac{{x_{2}^{3}}-{x_{1}^{3}}}{\left( {x_{2}^{2}}-{x_{1}^{2}} \right)^{2}}-1 $$
$$g_{5}= 900-\left( \frac{2.66e-2x_{3}x_{4}\left( {x_{3}^{3}}-{x_{1}^{3}} \right)}{{x_{2}^{2}}-{x_{1}^{2}}} \right) $$
$$\forall g\le 0 $$

55 ≤ x1 ≤ 80, 75 ≤ x2 ≤ 110, 1000 ≤ x3 ≤ 3000, 2 ≤ x4 ≤ 20, x4 ∈ I.

Fig. 12 Exploded view of the multi-plate disk brake

The extreme Pareto solutions obtained are (0.12740, 16.6549) and (2.7915, 2.07204). The results are compared on the value of the spread (Δ) and shown in Table 8, where the performance of MO-AAA is compared with NSGA-II, paεODEMO, and MOWCA. It can be noted from the results that MO-AAA produces a better Δ than NSGA-II, paεODEMO, and MOWCA. The Pareto front obtained by MO-AAA is shown in Fig. 13 and is as good as or better than the Pareto front given in [71]. It is useful for the designer to note that the Pareto solutions become nearly stagnant for f1 at f1 ≅ 0.2 and for f2 at f2 ≅ 1.75.

Fig. 13 Pareto front for disk brake problem

Table 8 Result (Δ) for the disk brake problem

5.3.4 Speed reducer problem

The aim of this problem is to simultaneously optimize the weight of the gear assembly and the transverse deflection of the shaft. The assembly of the speed reducer is shown in Fig. 14. The problem is subject to constraints on the bending stress of the gear teeth, the surface stress, the transverse deflections of the shafts, and the stresses in the shafts. The design variables are the face width, the module of the teeth, the number of teeth on the pinion, the length of the first shaft between bearings, the length of the second shaft between bearings, and the diameters of the first and second shafts, respectively; all the variables are indicated in Fig. 14. The third variable is an integer and the rest are continuous, so this problem can be classified as a mixed-integer problem.

Fig. 14 Speed reducer

The problem can be stated as follows:

$$\begin{array}{@{}rcl@{}} Minimize\thinspace f_{1}&=&0.7854x_{1}{x_{2}^{2}}(3.3333{x_{3}^{2}}+ 14.9334x_{3}\\ &&-43.0934)-\thinspace 1.508x_{1}({x_{6}^{2}}+{x_{7}^{2}})\\ &&+ 7.4777({x_{6}^{3}}+{x_{7}^{3}})\\ &&+ 0.7854(x_{4}{x_{6}^{2}}+x_{5}{x_{7}^{2}}) \end{array} $$
$$Minimize\thinspace f_{2}=\frac{\sqrt {\left( 745\frac{x_{4}}{x_{2}x_{3}} \right)^{2}+ 1.69e7} }{0.1\ast {x_{6}^{3}}} $$
$$g_{1}=-(27/(x_{1}{x_{2}^{2}}x_{3})-1) $$
$$g_{2}=-(397.5/(x_{1}{x_{2}^{2}}{x_{3}^{3}})-1) $$
$$g_{3}=-\left( 1.93\frac{{x_{4}^{3}}}{x_{2}x_{3}{x_{6}^{4}}}-1 \right) $$
$$g_{4}=-\left( 1.93\frac{{x_{5}^{3}}}{x_{2}x_{3}{x_{7}^{4}}}-1 \right) $$
$$g_{5}=-\left( \frac{\sqrt {\left( 745\frac{x_{4}}{x_{2}x_{3}} \right)^{2}+ 16.9e6} }{0.1{x_{6}^{3}}}-1 \right) $$
$$g_{6}=-\left( \frac{\sqrt {\left( 745\frac{x_{5}}{x_{2}x_{3}} \right)^{2}+ 157.5e6} }{0.1{x_{7}^{3}}}-1 \right) $$
$$g_{7}=-\left( x_{2}\ast \frac{x_{3}}{40}-1 \right) $$
$$g_{8}=-\left( 5\ast \frac{x_{2}}{x_{1}}-1 \right) $$
$$g_{9}=-\left( \frac{x_{1}}{12\ast x_{2}}-1 \right) $$
$$g_{10}=-\left( \frac{1.5\ast x_{6}+ 1.9}{x_{4}}-1 \right) $$
$$g_{11}=-\left( \frac{1.1\ast x_{7}+ 1.9}{x_{5}}-1 \right) $$
$$\forall g\ge 0 $$
$$2.6\!\le\! x_{1}\!\le\! 3.6,\thinspace 0.7\!\le\! x_{2}\!\le\! 0.8,\thinspace 17\!\le\! x_{3}\!\le\! 28,\thinspace 7.3\!\le\! x_{4},x_{5}\!\le\! 8.3,\thinspace $$
$$2.9\le x_{6}\le 3.9,\thinspace 5\le x_{7}\le 5.5 $$

The extreme Pareto solutions obtained by MO-AAA are (3001.592, 1091.24) and (5904.21, 694.728). The results are compared on the basis of Spacing (S) with the other algorithms NSGA-II, Micro-GA, PAES, and MOWCA [71], and are presented in Table 9.

Table 9 Result (S) for speed reducer problem

It can be observed from the results that the spacing of NSGA-II is better than that of all the other algorithms, but the extreme points of the Pareto front obtained by NSGA-II are inferior to those of the other algorithms. The value of S for MO-AAA is also inferior to those of the other algorithms. The Pareto front is given in Fig. 15. It is useful for the designer to note that the Pareto solutions become nearly stagnant for f2 at f2 ≅ 697.

Fig. 15 Pareto front for speed reducer problem

5.3.5 Welded beam design

The purpose of this problem is to minimize the cost and the end deflection simultaneously. The problem is subject to constraints on the shear stress, the bending stress, the weld length, and the buckling load. The four design variables are the height and length of the welded joint and the thickness and width of the beam; all design variables are continuous. The schematic diagram is shown in Fig. 16, where all the variables are indicated. The two objective functions differ greatly in magnitude, which makes such problems difficult to solve with the weighted-sum method. The problem can be stated as:

$$Minimize\thinspace f_{1}= 1.10471{x_{1}^{2}}x_{2}+ 0.04811x_{3}x_{4}\left( 14+x_{2} \right) $$
$$Minimize\thinspace f_{2}=del $$
$$g_{1}=-\left( tau-taumax\right) $$
$$g_{2}=-\left( sig-sigmax \right) $$
$$g_{3}=-\left( x_{1}-x_{4}\right) $$
$$g_{4}=-\left( P-pc \right) $$
$$Where $$
$$pc=\left( \frac{4.013\ast E\left( \sqrt{\frac{x_{3}^{2}{x_{4}^{6}}}{36}} \right)}{L^{2}} \right) \left( 1-\left( \left( \frac{x_{3}}{2L} \right)\sqrt{\frac{E}{4G}} \right)\right) $$
$$del=\frac{4PL^{3}}{E{x_{3}^{3}}x_{4}} $$
$$sig=\frac{6PL}{x_{4}{x_{3}^{2}}} $$
$$J = 2\left( \sqrt{2}\,x_{1}x_{2}\left( \frac{{x_{2}^{2}}}{12} +\left( \frac{x_{1}+x_{3}}{2} \right)^{2} \right) \right) $$
$$R=\sqrt {\left( \frac{{x_{2}^{2}}}{4} \right)+\left( \frac{x_{1}+x_{3}}{2} \right)^{2}} $$
$$M1=P\ast \left( L+\frac{x_{2}}{2} \right) $$
Fig. 16 Welded beam

$$tau2=\frac{M1R}{J} $$
$$tau1=\frac{P}{\sqrt{2}\,x_{1}x_{2}} $$
$$tau=\sqrt {tau1^{2}+ 2\ast tau1\ast tau2\ast \frac{x_{2}}{2\ast R}+tau2^{2}} $$
$$\forall g\ge 0 $$
$$P\,=\,6000,L\,=\,14,E\,=\,30e6,G\,=\,12e6,taumax\,=\,13600,sigmax\,=\,30000, $$
$$delmax= 0.25 $$
$$0.125\le x_{1},x_{4}\le 5,\thinspace 0.1\le x_{2},x_{3}\le 10. $$

The extreme Pareto solutions obtained by MO-AAA are (1.800, 0.0243) and (35.552, 0.000449). This problem is also solved with other techniques such as NSGA-II, paεODEMO, and MOWCA [71]. The results are compared on the basis of Δ, and the mean values along with the calculated standard deviations are summarized in Table 10. It can be observed from the results that MO-AAA possesses a better spread than the other algorithms and nearly the same as that of MOWCA. The Pareto front obtained by MO-AAA is shown in Fig. 17 and supports these results.

Fig. 17 Pareto front for welded beam problem

Table 10 Results (Δ) for welded beam problem

5.3.6 Spring design problem

The purpose of this problem is to minimize the stress and the volume simultaneously. Constraints are imposed on the minimum deflection, the shear stress, the surge frequency, the outside diameter, and the design variables. The design variables are the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). A schematic view of the spring is shown in Fig. 18. This problem is special because the design variables possess different characteristics: the number of turns can take only integer values, the wire diameter is standardized and has to be selected from a set of available diameters, and the mean coil diameter can be treated as a continuous variable. This is therefore a mixed-integer-discrete problem. The problem can be stated as follows:

$${Minimize\thinspace f}_{1}(x)=\left( 0.25{3.14}^{2}{x_{2}^{2}}x_{3}\left( x_{1}+ 2 \right) \right) $$
Fig. 18 Spring

$${Minimize\thinspace f}_{2}\left( x \right)= 8KP_{\max}\frac{x_{3}}{3.14{x_{2}^{3}}} $$
$$g_{1}= 1.05x_{2}\left( x_{1}+ 2 \right)+\frac{P_{\max }}{k}-l_{\max} $$
$$g_{2}=dmin-x_{2} $$
$$g_{3}=x_{2}+x_{3}-D_{\max} $$
$$g_{4}= 3-C $$
$$g_{5}=del_{p}-del_{pm} $$
$$g_{6}=del_{w}-\frac{P_{\max }-P}{k} $$
$$g_{7}= 8KP_{\max}\frac{x_{3}}{3.14{x_{2}^{3}}}-S $$
$$g_{8}=\left( 0.25{3.14}^{2}{x_{2}^{2}}x_{3}\left( x_{1}+ 2 \right) \right)-V_{\max } $$
$$Where $$
$$P = 300 $$
$$D_{\max}= 3 $$
$$V_{\max}= 30 $$
$$P_{\max}= 1000 $$
$$del_{w}= 1.25 $$
$$l_{\max}= 14 $$
$$del_{pm}= 6 $$
$$dmin= 0.2 $$
$$S = 189000 $$
$$G = 11500000 $$
$$C=\frac{x_{3}}{x_{2}} $$
$$k=G\frac{{x_{2}^{4}}}{8x_{1}{x_{3}^{3}}} $$
$$del_{p}=\frac{P}{k} $$
$$K=\frac{4C-1}{4C-4}+\left( 0.615\frac{x_{2}}{x_{3}} \right) $$
$$\forall g\le 0 $$
$$1\le x_{1}\le 32,\thinspace 1\le x_{3}\le 30,\thinspace x_{1}\in I $$

x2∈[0.009, 0.0095, 0.0104, 0.0118, 0.0128, 0.0132, 0.014, 0.015, 0.0162, 0.0173, 0.018, 0.020, 0.023, 0.025, 0.028, 0.032, 0.035, 0.041, 0.047, 0.054, 0.063, 0.072, 0.080, 0.092, 0.105, 0.120, 0.135, 0.148, 0.162, 0.177, 0.192, 0.207, 0.225, 0.244,0.263, 0.283, 0.307, 0.331, 0.362, 0.394, 0.4375, 0.5].

Values of GD, S, or Δ are not available in the literature for this problem, so the results are compared using the extreme Pareto solutions, against NSGA-II and MOWCA; the results are presented in Table 11. It can be observed that the extreme points obtained by MO-AAA are nearly the same as those of NSGA-II and MOWCA [71]. The Pareto front is shown in Fig. 19 and supports these results. The Pareto front of this problem is discontinuous and of an overlapping nature; this occurs because each discrete value of d contributes a certain portion of the Pareto front, which can be observed in the Pareto front for the different discrete values of d.

Table 11 Result (extreme Pareto solutions) for spring problem
Fig. 19 Pareto front for spring problem

6 Conclusions and future work

In this work, we propose a new multi-objective version of the artificial algae algorithm (AAA), called MO-AAA. We test the proposed algorithm on various numerical benchmark problems with different Pareto-front characteristics. We also apply MO-AAA to different multi-objective benchmark problems (the 20 challenging unconstrained and constrained multi-objective benchmark problems from CEC 2009) and to engineering design problems, and evaluate its performance using generational distance, spacing, and spread.

These benchmark problems offer challenges to multi-objective optimization algorithms. We have compared the proposed method with 16 (recently developed) algorithms for the unconstrained problems and 9 (recently developed) algorithms for the constrained problems.

Moreover, we compare the proposed algorithm with multi-objective versions of GA, PSO, DE, the Bees algorithm, and WCA. Our results show that the proposed method is better than, or on a par with, the existing algorithms. This work motivates us to investigate several directions of future research, as follows:

  • Apply MO-AAA to the economic load dispatch problem with incommensurable objectives, multi-objective flow shop scheduling problems, and the analysis of Pareto improvement models in electricity markets.

  • Modify MO-AAA so that it can solve many-objective optimization problems and large-scale multi-objective optimization problems.

  • Carry out theoretical analysis of the MO-AAA algorithm, such as convergence analysis and complexity analysis.