1 Introduction

Optimisation problems in many industrial and academic research sectors generally have more than one objective. Problems with conflicting and incommensurable objectives are called multi-objective optimisation problems (MOPs). In contrast to single-objective optimisation (SOO) problems, MOPs have no single optimal solution; instead, they have a set of trade-off solutions known as Pareto optimal solutions [33]. Because the search space grows dramatically with problem size, the high-dimensional search spaces of MOPs make traditional optimisation algorithms based on exact techniques (e.g. exhaustive search) no longer suitable.

Recent decades have seen significant growth in algorithms inspired by natural phenomena [9]. These algorithms show good performance in solving complex computational problems with high-dimensional objectives (i.e. MOPs). Although various heuristic algorithms have been proposed and perform efficiently and effectively, such as the ant colony search algorithm [7], artificial bee colony algorithm [15], genetic algorithm [27], bat algorithm (BAT) [19], particle swarm optimisation (PSO) [11], simulated annealing (SA) [12] and gravitational search algorithm (GSA) [21], none of them can be considered the best at solving all MOPs. Hence, the need for new optimisation algorithms to address MOPs remains a challenge [21].

Basically, exploration and exploitation (also referred to as diversification and intensification, respectively) are the two main aspects of population-based heuristic algorithms, and the balance between them in any metaheuristic algorithm is the measure of its success in solving a given MOP. Exploration is the capability to scan wide regions of the search space without falling into local optima, whereas exploitation is the capability to search locally in the search space to provide accurate search and convergence [22].

Although population-based search algorithms have achieved good performance [29], no metaheuristic algorithm is superior in solving all problems. In practice, the performance of an algorithm in solving MOPs may vary from one problem to another. Thus, developing a hybrid metaheuristic algorithm by combining different metaheuristic concepts can improve performance and achieve a promising balance between diversity and convergence [14].

Many previous researchers have proposed algorithms that attempt to obtain Pareto optimal solutions with efficient exploitation and globally diverse exploration, such as the BAT algorithm, which mimics the echolocation behaviour of bats. Although the BAT algorithm shows good exploration and exploitation quality, it shows poor convergence and lower accuracy on some multi-dimensional functions because it can become trapped in local minima [22]. The GSA, which is derived from Newton's law of gravity and mass interactions, performed effectively in SOO and has shown promising results on MOPs, as presented by [8], who proposed a multi-objective GSA (MOGSA) that shows a good reduction in the loss of population diversity in the search space.

Similar to particle swarm optimisation [2], the GSA has two key issues that need to be addressed when tackling multi-objective optimisation problems. The first relates to the selection of leaders from the population. The second relates to methods for retaining the good alternative solutions already obtained [26]. Some recent works have attempted to handle these problems by using an external archive for selecting leaders [8], whereas others depend on non-dominated sorting to select leaders from the external archive and the population [17]. To address these issues and improve the balance between diversity and convergence in MOPs, we propose a hybrid metaheuristic algorithm called the multi-objective GSA and bat algorithm (MOGSABAT).

The main contributions of this paper are as follows. Firstly, we establish a new equation for calculating the masses of individuals in the population, building on the theoretical work of the strength Pareto evolutionary algorithm 2 (SPEAII) [34], so that the GSA can move from a single objective function to problems with more than one objective. Secondly, we apply the theoretical work of multi-objective particle swarm optimisation (MOPSO) to the BAT algorithm so that it can handle multiple objective functions. Thirdly, we integrate the multi-objective GSA and the multi-objective BAT to obtain the proposed hybrid MOGSABAT algorithm.

The proposed MOGSABAT algorithm is compared with five state-of-the-art techniques on three common MOP test suites, namely ZDT, UF and BT. In addition, two well-known performance metrics, generational distance (GD) and reversed GD (RGD), are statistically applied in this study to assess the performance of MOGSABAT against the other algorithms.

We organise the remainder of this paper as follows. In Sect. 2, we provide an overview of the theoretical background of the proposed MOGSABAT algorithm. In Sect. 3, we describe the evaluation methodology. In Sect. 4, we provide the statistical measurements based on the mean, standard deviation (SD) and Wilcoxon signed-rank test. Finally, in Sect. 5, we draw our conclusions.

2 Methodology

The present study aims to solve the following type of problem (without loss of generality, only minimisation problems are assumed):

$$\begin{aligned} {\mathrm{Min}}\,Z =(f_{1}(x ),f_{2}(x),\ldots ,f_{k}(x)) \end{aligned}$$
(1)

subject to \(g_{i}(x)\ge 0 ,\,\,i=1,2,\ldots ,p\); \(h_{j}(x)=0, \,\,j=1,2,\ldots ,q\); \(x_{i}^{l}\le x_{i}\le x_{i}^{u},\,\,i=1,2,\ldots ,n\), where \(k\) is the number of objective functions, \(p\) is the number of inequality constraints, \(q\) is the number of equality constraints, and f, g and h are linear, quadratic or higher-order functions. Multi-objective optimisation problems of the form of Eq. (1) arise when ratios such as profit/revenue, profit/time and raw-material wastage/used raw-material quantity are to be maximised. These problems are frequently linear, or at least concave/convex, fractional programming problems, and algorithmic schemes for them provide successful solutions to numerous real optimisation problems, which are naturally NP-hard [23].
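To make the form of Eq. (1) concrete, the following minimal sketch (in Python, which the paper itself does not provide) encodes a bi-objective instance; ZDT1, one of the benchmark problems used in Sect. 3, serves as the example.

```python
import numpy as np

def zdt1(x):
    """ZDT1: a bi-objective minimisation problem of the form of Eq. (1),
    with k = 2 objectives and box constraints x in [0, 1]^n."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

# Evaluate one random candidate with 30 decision variables.
print(zdt1(np.random.rand(30)))
```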

To describe the target concept of optimality, we introduce the following definitions [23]; a small code sketch of the dominance test is given after them.

Definition 1 (Pareto dominance) Given two solutions x and y \(\in R^{n}\), we say that \(x \le y\) if \(x_{i} \le y_{i}\) for \(i=1,2,\ldots ,n\), and that x dominates y if \(x \le y\) and \(x \ne y\).

Definition 2 (Non-dominance) A vector of decision variables \(x\in X \subset R^{n}\) is non-dominated with respect to X if no \(y\in X\) exists such that \(f(y)\le f(x)\).

Definition 3 (Pareto optimality) A vector of decision variables \(x^{*}\in F \subset R^{n}\) (F is the feasible region) is Pareto optimal if it is non-dominated with respect to F.

Definition 4 (Pareto optimal set) The Pareto optimal set \(P^{*}\) is defined as \(P^{*}=\{ x\in F : x\) is Pareto optimal\(\}\).

Definition 5 (Pareto front) The Pareto front \(PF^{*}\) is defined as \(PF^{*}=\{ f(x)\in R^{k} : x\in P^{*}\}\).
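As a minimal illustration of Definitions 1 and 2, the following sketch (our own illustration, not code from the paper) tests Pareto dominance between objective vectors and filters a set down to its non-dominated members.

```python
import numpy as np

def dominates(x, y):
    """True if objective vector x Pareto-dominates y (minimisation):
    x <= y in every component and x != y (Definition 1)."""
    x, y = np.asarray(x), np.asarray(y)
    return bool(np.all(x <= y) and np.any(x < y))

def non_dominated(points):
    """Naive O(n^2) filter returning the non-dominated subset (Definition 2)."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# (1, 2) dominates (2, 3); (1, 2) and (2, 1) are mutually non-dominated.
print(dominates([1, 2], [2, 3]), non_dominated([[1, 2], [2, 3], [2, 1]]))
```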

2.1 Multi-objective gravitational search algorithm

The gravitational search algorithm (GSA) is a relatively recent method that uses Newton's laws of motion and gravity [24], and several modified variants have been developed since its introduction. In GSA, each mass (agent) has four specifications: inertial mass, position, active gravitational mass and passive gravitational mass, computed as follows:

$$\begin{aligned}&m_{i}(t)=[{\mathrm{fit}}_{i}(t)-{\mathrm{worst}}(t)]/[{\mathrm{best}}(t)-{\mathrm{worst}}(t)],\quad i=1,2,3,\ldots ,n_{\mathrm{mass}} \end{aligned}$$
(2)
$$\begin{aligned}&M_{i}(t)=m_{i}(t)/\sum _{j=1}^{N}m_{j}(t),\quad 0\le M_{i}(t)\le 1 \end{aligned}$$
(3)

where \({\mathrm{fit}}_{i}(t)\) represents the objective function value of the ith agent; worst(t) represents the worst (highest, for a minimisation problem) objective value; best(t) represents the best (lowest) value; t represents the iteration; and N in Eq. 3 represents the size of the swarm (the number of agents). To calculate the acceleration of an agent, the sum of the gravitational forces exerted by the other masses is considered on the basis of the law of gravity, and the acceleration of the agent follows from the law of motion in Eq. 4. The velocity is then updated by adding the acceleration to a randomly weighted current velocity using Eq. 5, and the position is updated using Eq. 6.

$$\begin{aligned} a_{i}^{d}(t)=G(t)\sum _{j=1}^{n_{\mathrm{mass}}}r_{j}\times \left[ M_{j}(t)/\left( R_{i,j}+\varepsilon \right) \right] \times \left( x_{j}^{d}(t)-x_{i}^{d}(t)\right) \end{aligned}$$
(4)

Thus, the following equations are obtained:

$$\begin{aligned} v_{i}^{d}{(t+1)}=r_{i}\times v_{i}^{d}{(t)}+a_{i}^{d}{(t)}\end{aligned}$$
(5)
$$\begin{aligned} x_{i}^{d}{(t+1)}=x_{i}^{d}{(t)}+v_{i}^{d}{(t+1)} \end{aligned}$$
(6)

In Eq. (4), G(t) represents a constant of gravity and the \(\varepsilon\) value is considerably small. \(R_{i,j}\) is the Euclidean distance between two agents, namely i and j. \(r_{i}\) and \(r_{j}\) represent two random numbers between 0 and 1 that ensure the random properties of the algorithm.
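The single-objective GSA update of Eqs. (2)–(6) can be sketched as follows; this is an illustrative Python reading of the formulas, not the authors' implementation, and the handling of the degenerate case in Eq. (2) (all agents with equal fitness) is our assumption.

```python
import numpy as np

def gsa_step(X, V, fit, G, eps=1e-9):
    """One GSA iteration for a minimisation problem, following Eqs. (2)-(6).
    X: (N, d) positions, V: (N, d) velocities, fit: (N,) objective values,
    G: gravitational constant at the current iteration."""
    best, worst = fit.min(), fit.max()
    denom = best - worst
    m = (fit - worst) / denom if denom != 0 else np.ones_like(fit)   # Eq. (2)
    M = m / m.sum()                                                  # Eq. (3)
    N, d = X.shape
    A = np.zeros((N, d))
    for i in range(N):
        for j in range(N):
            if j == i:
                continue
            R = np.linalg.norm(X[i] - X[j])          # Euclidean distance R_ij
            A[i] += np.random.rand() * G * M[j] / (R + eps) * (X[j] - X[i])  # Eq. (4)
    V = np.random.rand(N, 1) * V + A                 # Eq. (5)
    X = X + V                                        # Eq. (6)
    return X, V
```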

In MOPs, a single fitness value must be assigned to each solution, but the mass of Eq. 2 cannot be computed directly from a vector of objective values. We therefore use the SPEA II method to determine the equivalent mass on which the MOGSA algorithm depends. Following SPEA II, both dominating and dominated solutions are taken into account so that individuals dominated by the same archive members do not receive identical fitness values. Each individual i in the population \(P_{t}\) is assigned a strength S(i), Eq. 7, which represents the number of solutions it dominates

$$\begin{aligned} S(i)=\mid \{ j:j\in x_{t}+\bar{x}_{t} \wedge i \succ j \}\mid \end{aligned},$$
(7)

where \(|\cdot |\) signifies the cardinality of the set, \(+\) denotes the union over the sets, and the symbol \(\succ\) denotes the Pareto dominance relation. Based on the values of S(i), the raw fitness \(R_{i}\) of each individual is calculated using Eq. 8 as follows:

$$\begin{aligned} R_{i}=\sum _{j\in x_{t}+\bar{x}_{t} \wedge i \prec j} S_{j} \end{aligned}$$
(8)

The raw fitness \(R_{i}\) is thus determined by the strengths of the dominators in both the archive and the population, in contrast to the original SPEA, in which only archive members are considered. Notably, the fitness is to be minimised: \(R_{i}=0\) corresponds to a non-dominated individual, whereas a high \(R_{i}\) value indicates that i is dominated by many individuals (which in turn may themselves dominate many individuals).

Although the raw fitness assignment provides a sort of niching mechanism based on the concept of Pareto dominance, it may fail when most individuals do not dominate one another. Therefore, additional density information is incorporated to discriminate between individuals that have identical raw fitness values. The density estimation technique is an adaptation of the kth nearest neighbour method [25], in which the density at any point is a decreasing function of the distance to the kth nearest data point.

The present study takes the inverse of the distance to the kth nearest neighbour as the density estimate. Specifically, for each individual i, the distances to all individuals j in the archive and the population are calculated and stored in a list. After sorting the list in ascending order, the kth element gives the required distance, which is denoted by \(Q_{i}^{k}\).

The present study uses k, which is equal to the square root of sample size [25]; thus, \(k=\sqrt{(N+\bar{N})}\). Subsequently, density \(D_{i}\), which corresponds to i, can be defined as follows:

$$\begin{aligned} D_{i}=1/\left( Q_{i}^{k}+2\right) \end{aligned}$$
(9)

The number 2 is added to the denominator to ensure that its value is greater than 0 and that \(D_{i}< 1\). Finally, the density \(D_{i}\) is added to the raw fitness \(R_{i}\) of individual i to obtain its fitness \(F_{i}\), as shown in Eq. 10:

$$\begin{aligned} F_{i}=R_{i}+D_{i} \end{aligned}$$
(10)

In Eq. 11, the fitness \(F_{i}\) is mapped to a mass through an exponential function, thereby adapting the SPEA II fitness assignment to the GSA.

$$\begin{aligned} m_{i{(t)}}=\exp {(-F{(i)})} \end{aligned}$$
(11)
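The chain from Eq. (7) to Eq. (11) can be summarised in a short sketch that maps the objective vectors of the combined archive and population to masses; this is our illustrative reading, assuming minimisation and \(k=\sqrt{N}\) on the combined set, not the authors' code.

```python
import numpy as np

def spea2_mass(F_obj):
    """Assign masses via Eqs. (7)-(11).
    F_obj: (N, k) objective values of archive + population combined."""
    N = len(F_obj)
    # dom[i, j] is True when individual i dominates individual j.
    dom = np.array([[bool(np.all(F_obj[i] <= F_obj[j]) and np.any(F_obj[i] < F_obj[j]))
                     for j in range(N)] for i in range(N)])
    S = dom.sum(axis=1)                                   # strength, Eq. (7)
    R = np.array([S[dom[:, i]].sum() for i in range(N)])  # raw fitness, Eq. (8)
    k = int(np.sqrt(N))                                   # k-th nearest neighbour
    dist = np.linalg.norm(F_obj[:, None, :] - F_obj[None, :, :], axis=-1)
    Qk = np.sort(dist, axis=1)[:, k]                      # distance to k-th neighbour
    D = 1.0 / (Qk + 2.0)                                  # density, Eq. (9)
    F = R + D                                             # fitness, Eq. (10)
    return np.exp(-F)                                     # mass, Eq. (11)
```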
figure a

2.2 Multi-objective bat algorithm

Bats are winged mammals with echolocation ability. Approximately 996 different bat species have been identified worldwide, accounting for approximately 20% of all mammal species [30]. In Ref. [4], a new optimisation algorithm known as BAT is proposed on the basis of swarm intelligence and observation of bats; it simulates part of the echolocation behaviour of microbats. The advantages of this algorithm include simplicity, flexibility and easy implementation. Furthermore, the algorithm efficiently solves a wide range of problems, including highly nonlinear ones [10]. BAT also provides promising optimal solutions quickly and works well on complicated problems. Its disadvantages are that convergence occurs quickly at the early stages and the convergence rate then decreases. In addition, no mathematical analysis links its parameters with convergence rates, and the most suitable parameter values for most applications remain unclear [20].

Equations 12–14 simulate the movement of the virtual bat agents:

$$\begin{aligned}&Q_{i}=Q_{\mathrm{min}}+(Q_{\mathrm{max}}-Q_{\mathrm{min}})\beta \end{aligned}$$
(12)
$$\begin{aligned}&v_{i}^{t}=v_{i}^{t-1}+(P_{i}^{t-1}-P_{\mathrm{best}})Q_{i} \end{aligned}$$
(13)
$$\begin{aligned}&P_{i}^{t}=P_{i}^{t-1}+v_{i}^{t} \end{aligned}$$
(14)

where Q is the frequency used by the bats to locate prey; \(Q_{\mathrm{min}}\) and \(Q_{\mathrm{max}}\) represent the minimum and maximum frequency limits, respectively; \(P_{i}\) represents the location of the ith bat in the search space; \(v_{i}(t)\) represents the velocity of the bat; and t indicates the iteration number. In addition, \(\beta\) is drawn from a uniform distribution over [0, 1]. \(P_{\mathrm{best}}\) represents the best solution found by the whole population.
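A compact sketch of the movement rules in Eqs. (12)–(14) follows; the frequency bounds are placeholder values of our own, and the loudness and pulse-rate components of the full BAT algorithm are omitted for brevity.

```python
import numpy as np

def bat_move(P, V, p_best, q_min=0.0, q_max=2.0):
    """One movement step of the bat agents, following Eqs. (12)-(14).
    P: (N, d) positions, V: (N, d) velocities, p_best: (d,) best position."""
    N, _ = P.shape
    beta = np.random.rand(N, 1)                  # beta ~ U[0, 1]
    Q = q_min + (q_max - q_min) * beta           # frequency, Eq. (12)
    V = V + (P - p_best) * Q                     # velocity, Eq. (13)
    P = P + V                                    # position, Eq. (14)
    return P, V
```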

To obtain a better optimisation procedure for multi-objective functions using BAT, we develop an algorithm called MOBAT [28] by introducing two new components (i.e. an archive and a leader), as found in the MOPSO algorithm of Ref. [3]. The archive is responsible for saving and retrieving the best non-dominated Pareto optimal solutions obtained so far. The archive has a controller that decides whether a new non-dominated solution should enter the archive, particularly when the archive is full. During each iteration, the newly obtained non-dominated solutions are compared against the archive members, and four different situations can occur (a sketch of the archive controller follows the list).

  1. If the new solution is dominated by a member of the archive, it is not allowed to enter the archive.

  2. If the new solution dominates one or more solutions in the archive, the dominated archive members are deleted and the new solution enters the archive.

  3. If neither the new solution nor any archive member dominates the other, the new solution is added to the archive.

  4. If the archive is full, the grid mechanism is run first to repartition the target space, determine the busiest segment and delete one of its existing solutions. The new solution is then placed into a less crowded segment to improve the final diversity of the approximated Pareto front.
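The four cases above can be summarised in the following archive-controller sketch; `dominates` is the Pareto test of Definition 1, and `crowding_key` is a hypothetical helper standing in for the hypercube grid, so this is an illustration rather than the paper's implementation.

```python
def update_archive(archive, new, max_size, dominates, crowding_key):
    """Insert `new` (an objective vector) into `archive` following the
    four cases described above."""
    # Case 1: the new solution is dominated by an archive member -> rejected.
    if any(dominates(a, new) for a in archive):
        return archive
    # Case 2: delete archive members dominated by the new solution.
    archive = [a for a in archive if not dominates(new, a)]
    # Case 3: mutual non-dominance -> the new solution enters the archive.
    archive.append(new)
    # Case 4: archive full -> remove a member from the most crowded segment.
    if len(archive) > max_size:
        archive.remove(max(archive[:-1], key=crowding_key))
    return archive
```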

The probability of deleting a solution increases with the number of solutions in its hypercube (segment). A special case exists in which a new solution is inserted outside the current hypercubes; in this case, all segments are extended to cover the new solution, so the segments of the other solutions may also change. The second mechanism is the selection of a leader, who directs the selected members within the search space. In the MOBAT algorithm, the best solution obtained is used as the leader, directing members within the search space towards a solution near the best one.

However, in a multi-objective search space, solutions cannot be compared directly because no single best solution exists under the Pareto optimality concept. The leader selection mechanism is designed to handle this issue. The archive contains the best non-dominated solutions obtained so far, and the leader is selected from the less crowded segments of the solution space among these non-dominated solutions. Selection is performed through a roulette wheel with the following probability for each hypercube:

$$\begin{aligned} P_{i}=c/N_{i} \end{aligned}$$
(15)

where c is a constant number higher than 1, and \(N_{i}\) is the number of Pareto optimal solutions obtained in the ith segment. The equation indicates that the less crowded a hypercube is, the higher the probability that it proposes the new leader; a sketch of this roulette-wheel selection follows.
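The roulette-wheel selection of Eq. (15) can be sketched as follows, where `segments` groups the archived non-dominated solutions by hypercube; the grouping itself is assumed to be available and is not shown.

```python
import numpy as np

def select_leader(segments, c=2.0):
    """Pick a leader with hypercube probability P_i = c / N_i (Eq. 15),
    so that less crowded segments are favoured."""
    counts = np.array([len(s) for s in segments], dtype=float)
    p = c / counts
    p /= p.sum()                                       # normalise to a distribution
    seg = segments[np.random.choice(len(segments), p=p)]
    return seg[np.random.randint(len(seg))]            # any member of that segment
```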

figure b

In other words, the probability of selecting a hypercube from which to choose a leader increases as the number of solutions obtained in that hypercube decreases. The procedure shows the operation of the BAT algorithm for multi-objective functions and how it uses the Pareto optimality mechanism to solve MOPs.

2.3 Multi-objective gravitational search algorithm with bat algorithm

This section discusses the hybridisation of the two algorithms by means of a communication strategy between them [1]. The idea is based on three stages. The first stage is the creation of MOGSA from SPEA II theory as follows. We benefit from the theoretical work of the SPEA II algorithm concerning the selection of the most suitable solutions under multiple objective functions, because this algorithm is designed to work with more than one objective function. In particular, we use the equation that combines the number of non-dominated solutions and the distance between two individuals in the population.

The fitness \(F_{i}\) is the criterion used to assign the mass of each individual in the population: mass is inversely proportional to \(F_{i}\), so a low \(F_{i}\) gives a solution a high mass and vice versa. Remarkably, MOGSA operates when \((\beta <0.5)\), where \((\beta )\) is a control parameter. Consequently, we can decide which algorithm works first in the multi-objective space.

The second stage is the creation of a multi-objective BAT from MOPSO theory as follows. We benefit from the theoretical work of MOPSO concerning the selection of the most suitable solutions under multiple objective functions, because this algorithm is designed for more than one objective function. MOPSO contains two important components for selecting the individual with the most suitable solution, namely the leader and the archive: the leader of the swarm is selected from the internal archive located in the solution space and represents the best solution for the MOBAT algorithm. Furthermore, the algorithm is controlled by the same parameter used by the MOGSA algorithm: if \(\beta\) is higher than 0.5, MOBAT operates. Moreover, the population generated by the MOGSA algorithm in one iteration becomes the input of the MOBAT algorithm in the next iteration and vice versa; this exchange is performed at every iteration.

figure c

The final stage combines the two algorithms to obtain the hybrid algorithm, which is our objective. The two algorithms are run, and the hybrid algorithm updates the new solutions; the update depends on the best solution found so far. These solutions are then processed in the same manner as in the MOBAT algorithm [31]. A population is obtained from both algorithms, all non-dominated solutions are archived, and the process is repeated until an improved solution close to the best solution is obtained. These stages can be observed clearly in the procedure, and a high-level sketch is given below.
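In the following sketch, `mogsa_step`, `mobat_step`, `evaluate` and `archive` are hypothetical helpers standing in for the components described above, so the code illustrates only the β-based switching and the exchange of populations between the two algorithms; it is a sketch under those assumptions, not the paper's implementation.

```python
import numpy as np

def mogsabat_loop(pop, evaluate, mogsa_step, mobat_step, archive, iters=20):
    """Top-level sketch of the hybrid MOGSABAT scheme."""
    for _ in range(iters):
        beta = np.random.rand()
        if beta < 0.5:
            pop = mogsa_step(pop)        # MOGSA operates this iteration
        else:
            pop = mobat_step(pop)        # MOBAT operates this iteration
        # The output population of one algorithm is the input of the other
        # in the next iteration; all non-dominated solutions are archived.
        archive.update(evaluate(pop))
    return archive
```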

3 Evaluation methodology, results and discussion

This section evaluates the performance of the proposed MOGSABAT algorithm. Firstly, we describe the evaluation methodology and present the results of the experiments, which are conducted in one scenario with different optimisation problems. Secondly, we compare the performance of the MOGSABAT algorithm with that of other intelligent computational techniques. Thirdly, we discuss our findings in detail.

3.1 Evaluation methodology

The evaluation methodology employed in this work is divided into three parts. The first part describes the benchmark optimisation problems (bi- and tri-objective) used to evaluate the performance of the proposed MOGSABAT algorithm. The second part compares the performance of the MOGSABAT algorithm with that of other intelligent computation algorithms. The third part explains the procedure of the proposed MOGSABAT algorithm.

3.1.1 Benchmark of the bi- and tri-objective optimisation problems

According to the no free lunch (NFL) theorem, for any algorithm, any elevated performance over one class of problems is exactly paid for in performance over another class [13, 32]. A particular metaheuristic may yield promising results for one set of problems but may perform poorly on another set. Owing to the NFL theorem, this field of study remains highly active: extant approaches are continually enhanced, and new metaheuristics are proposed every year.

The detailed formulations of the nine multi-objective test problems with bias features are as follows: BT1 has two objective functions and 30 variables. BT2 has the same variables as BT1, with \(x\in [0,1]^{30}\).

The functions BT3–BT9 share the same details; only BT7 differs in its search space, that is, \(x\in [0,1]\times [-1,1]^{29}\).

In this subsection, we also use two groups of benchmark optimisation problems (i.e. UF and ZDT); these two groups of functions are described in detail in [32]. In addition, we test these functions with \(i=20\) iterations and a population of \(n=100\).

3.2 Performance-based metrics for multiple objectives

  1.

    Generational Distance (GD) To determine how close the obtained solution set Q is to the Pareto optimal set \(P^{*}\), the GD metric is appropriate [18] because it estimates the average distance of the solutions in Q from \(P^{*}\) as follows:

    $$\begin{aligned} {\mathrm{GD}}= \left( \sum _{i=1}^{|Q|} d_{i}^{p}\right) ^{1/p}\Big /|Q| \end{aligned}$$
    (16)

    with \(p=2\) typically used. The parameter \(d_{i}\) denotes the Euclidean distance (in the objective space) from the solution \(i\in Q\) to the nearest member of \(P^{*}\), which can be obtained as follows:

    $$\begin{aligned} d_{i}={\mathrm{min}}_{k\in P^{*}}\sqrt{\sum _{m=1}^{M}\left( f_{m}^{i}-f_{m}^{*(k)}\right) ^{2}} \end{aligned}$$
    (17)

    where \(f_{m}^{*(k)}\) denotes the mth objective function value of the kth member of \(P^{*}\). Intuitively, algorithms with lower GD values are more reliable.

  2.

    Reversed Generational Distance (RGD) The RGD metric is computed in the same way as GD but with the roles of Q and \(P^{*}\) exchanged, so that the distances are measured from each member of the reference set \(P^{*}\) to its nearest solution in Q (a sketch computing both metrics follows):

    $$\begin{aligned} {\mathrm{RGD}}= \left( \sum _{i=1}^{|P^{*}|} d_{i}^{p}\right) ^{1/p}\Big /|P^{*}| \end{aligned}$$
    (18)
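Both metrics can be sketched in a few lines; this is our illustrative reading of Eqs. (16)–(18) under the usual choice \(p=2\), not the authors' code.

```python
import numpy as np

def gd(Q, P_star, p=2):
    """Generational distance, Eq. (16): Q and P_star are (n, M) arrays of
    objective vectors; d_i (Eq. 17) is the distance to the nearest reference point."""
    d = np.min(np.linalg.norm(Q[:, None, :] - P_star[None, :, :], axis=-1), axis=1)
    return (np.sum(d ** p) ** (1.0 / p)) / len(Q)

def rgd(Q, P_star, p=2):
    """Reversed GD, Eq. (18): the same computation with the roles of Q and
    P_star exchanged."""
    return gd(P_star, Q, p)
```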

3.2.1 Evaluation procedure

The experimental results are described based on the mean, SD and Wilcoxon signed-rank test statistic of the function values.

4 Statistical measurement

  1.

    Mean: the mean \((\bar{x})\) is computed as the sum of all observed outcomes from the sample divided by the total number of outcomes, as follows:

    $$\begin{aligned} \bar{x}={1/n}\sum _{i=1}^{n}{x_{i}} \end{aligned}$$
    (19)
  2.

    Standard Deviation (SD): SD quantifies the variation or dispersion of the function values, as follows:

    $$\begin{aligned} {\mathrm{SD}}= \sqrt{\frac{1}{n-1}\sum _{i=1}^{n}({x_{i}-\bar{x}})^2} \end{aligned}$$
    (20)
  3.

    Wilcoxon Signed-Rank Test: the Wilcoxon signed-rank test determines the difference between two samples [6] and provides an alternative test of location that is affected by the magnitudes and signs of these differences. This test also checks whether one algorithm outperforms the other. Let \(d_{i}\) denote the difference between the performance scores of the two algorithms on the ith of n problems. Let \(R^{+}\) denote the sum of ranks for the problems in which the first algorithm outperforms the second (Eq. 21), and let \(R^{-}\) represent the sum of ranks for the problems in which the second algorithm outperforms the first (Eq. 22). The ranks of \(d_{i} = 0\) are split evenly between the two sums; if there is an odd number of them, one is ignored.

    $$\begin{aligned} R^{+}=\sum _{d_{i}>{0}}{{\mathrm{rank}}(d_{i})}+\frac{1}{2} \sum _{d_{i}={0}}{{\mathrm{rank}}(d_{i})}\end{aligned}$$
    (21)
    $$\begin{aligned} R^{-}=\sum _{d_{i}<{0}}{{\mathrm{rank}}(d_{i})}+\frac{1}{2} \sum _{d_{i}={0}}{{\mathrm{rank}}(d_{i})} \end{aligned}$$
    (22)

We use MATLAB to find the p value for comparing the algorithms at a significance level of \(\alpha =0.05\); an equivalent check using SciPy is sketched below. The null hypothesis is rejected when the p value is less than the significance level. A large \(R^{+}\) indicates that the first algorithm outperforms the second over the set of experiments; when \(R^{+}= n\times (n+1)/2\), the first algorithm outperforms the second in every experiment.
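As an illustration of this test (using SciPy here rather than MATLAB, and with made-up numbers rather than the paper's results), the following sketch compares hypothetical per-problem GD scores of two algorithms.

```python
from scipy.stats import wilcoxon

# Hypothetical GD scores of two algorithms on the same benchmark problems.
gd_mogsabat = [0.012, 0.034, 0.021, 0.045, 0.018, 0.027]
gd_peer     = [0.020, 0.041, 0.019, 0.060, 0.025, 0.031]

stat, p_value = wilcoxon(gd_mogsabat, gd_peer)
print(f"W = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:   # significance level alpha = 0.05
    print("Reject the null hypothesis: the two algorithms differ significantly.")
```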

4.1 Results

The results of the experiments are described based on one scenario. The proposed MOGSABAT algorithm is evaluated on the benchmarking functions developed from the three groups of unconstrained optimisation problems in Sect. 3.1.1 and compared with the intelligent computation techniques described in Sect. 3.2. The mean, SD and Wilcoxon signed-rank test statistics of the algorithms are discussed in Sect. 4.

Each benchmark function has 30 runs, and each algorithm has 100 iterations and 30 variables, as shown in Tables 1, 2, 3, 4, 5 and 6. Each compared algorithm independently performs 30 runs, and the symbols \(+\), \(=\) and − denote whether the GD and RGD results of the proposed MOGSABAT are statistically better than, equal to or worse than those of the corresponding peer competitors at a significance level of 0.05, respectively.

Table 1 GD results of MOGSABAT against MOGSA, MOBAT, MPSO, NSGA II and SPEA II over ZDT1–ZDT6 test problems
Table 2 RGD results of MOGSABAT against MOGSA, MOBAT, MPSO, NSGA II and SPEA II over ZDT1–ZDT6
Table 3 GD results of MOGSABAT against MOGSA, MOBAT, MPSO, NSGA II and SPEA II over UF1–UF7
Table 4 RGD results of MOGSABAT against MOGSA, MOBAT, MPSO, NSGA II and SPEA II over UF1–UF7
Table 5 GD results of MOGSABAT against MOGSA, MOBAT, MPSO, NSGA II and SPEA II over BT1–BT9
Table 6 RGD results of MOGSABAT against MOGSA, MOBAT, MPSO, NSGA II and SPEA II over BT1–BT9

4.2 Discussion

Table 1 compares the algorithms and instances based on the GD, mean and SD criteria. The results show that the hybrid MOGSABAT algorithm performs excellently in three instances, ZDT1, ZDT2 and ZDT3, whereas SPEAII and MOGSA are the most suitable algorithms for the ZDT4 and ZDT6 functions, respectively.

In Fig. 1, the calculations and comparisons among the algorithms are shown to determine which one is best. Using the Wilcoxon signed-rank test on the GD performance, the proposed MOGSABAT algorithm is better than the MOGSA algorithm on functions ZDT1 and ZDT4. On functions ZDT1 and ZDT2, the proposed algorithm clearly outperforms the MOBAT, MOPSO and non-dominated sorting genetic algorithm II (NSGAII) algorithms [16]; however, on ZDT4, MOGSABAT is outperformed by the SPEAII algorithm.

Table 2 compares the algorithms and instances based on the RGD, mean and SD criteria. The results indicate the excellent performance of MOGSABAT in four instances, ZDT1, ZDT2, ZDT3 and ZDT6. For ZDT4, SPEAII displays the best mean and SD values.

In Fig. 2, the calculations and comparisons among the algorithms are demonstrated to determine which one is best. Based on RGD performance, the proposed MOGSABAT algorithm is better than the MOGSA algorithm on functions ZDT1 and ZDT4. On functions ZDT1 and ZDT6, the proposed algorithm clearly outperforms the MOBAT and NSGAII algorithms; however, on ZDT4, MOGSABAT is outperformed by the SPEAII algorithm.

Table 3 compares the algorithms and instances depending on the GD, mean and SD. Results demonstrate the superiority of NSGAII [5] in three instances of multiple functions UF1, UF2 and UF3. This algorithm also shows a good SD result for instance UF5. In instance UF4, the MOPSO algorithm obtains the most remarkable mean and SD values. SPEAII also shows the most desirable SD value in instance UF5, and NSGAII displays the most remarkable SD value in instance UF6. In addition, MOGSABAT shows superiority in terms of mean and SD values in instance UF7.

As shown in Fig. 3, the proposed MOGSABAT algorithm has better GD performance than the MOGSA and MOBAT algorithms on functions UF1 to UF7 at a significance level of \(\alpha = 0.001\). Although the proposed algorithm performs equally with the NSGAII algorithm on function UF1, it is better on functions UF2, UF4, UF5 and UF6. Finally, the hybrid algorithm is better than the SPEAII algorithm on functions UF1, UF4, UF5 and UF6.

Table 4 compares the algorithms and instances based on the RGD, mean and SD criteria. The results show that NSGAII displays the best mean values in the instances UF2, UF3, UF5 and UF6 and the best SD values in UF2 and UF3. SPEAII obtains the best mean in instance UF7 and the best SD values in instances UF2, UF4 and UF7. The hybrid MOGSABAT algorithm shows the best mean values in instances UF1, UF5 and UF6. The MOPSO algorithm also presents a preferable mean value in instance UF4.

As shown in Fig. 4, the proposed MOGSABAT algorithm has better RGD performance than the MOGSA and MOBAT algorithms on functions UF1 to UF7 at a significance level of \(\alpha = 0.001\). Although the proposed algorithm performs equally with the MOPSO algorithm on function UF4, it is better on functions UF2, UF3, UF5, UF6 and UF7. Finally, the hybrid algorithm is better than the SPEAII algorithm on functions UF1, UF3, UF4, UF5 and UF6.

Table 5 compares the algorithms and instances based on the GD, mean and SD criteria. The results show that NSGAII performs best in two instances, BT1 and BT7, and presents good SD results in instances BT1, BT2, BT4, BT5 and BT7. SPEAII performs best in instances BT3 and BT9 and yields good mean results in instances BT4 and BT5. The hybrid MOGSABAT algorithm displays the best value in instance BT8 and improved mean values in instance BT2.

As shown in Fig. 5, our proposed MOGSABAT algorithm has better GD performance than the MOGSA algorithm on functions BT1 and BT7 at a significance level of \(\alpha = 0.01\). It fails only on functions BT1 and BT9 in comparison with the MOPSO algorithm. However, the proposed MOGSABAT algorithm performs better than the MOBAT algorithm on functions BT1, BT3, BT4, BT5 and BT7. It performs equally with the NSGAII algorithm in instances BT1, BT5 and BT6. Finally, the hybrid algorithm is better than the SPEAII algorithm on functions BT1, BT3, BT5 and BT6.

Table 6 compares the algorithms and instances based on the RGD, mean and SD criteria. The results show that NSGAII achieves the best SD values in instances BT1, BT5 and BT7 and presents good mean results in instance BT7. SPEAII yields good SD results in instances BT6 and BT9 and better mean results in instances BT1, BT3, BT4, BT5 and BT9. The hybrid MOGSABAT algorithm displays the best mean values in instances BT2, BT6 and BT8 and improved SD values in instance BT8.

In Fig. 6, the proposed MOGSABAT algorithm has better RGD performance than the MOGSA, MOBAT and MOPSO algorithms on functions BT1 to BT9 at a significance level of \(\alpha = 0.001\). On functions BT1, BT5 and BT9, it performs better than the NSGAII algorithm. Finally, the proposed MOGSABAT algorithm is better than the SPEAII algorithm on functions BT2, BT5, BT6, BT7 and BT8.

Fig. 1
figure 1

Visualisation of GD performance metric by MOGSABAT against the five competitive algorithms in ZDT

Fig. 2
figure 2

Visualisation of RGD performance metric by MOGSABAT against the five competitive algorithms in ZDT

Fig. 3
figure 3

Visualisation of GD performance metric by MOGSABAT against the five competitive algorithms in UF

Fig. 4
figure 4

Visualisation of RGD performance metric by MOGSABAT against the five competitive algorithms in UF

Fig. 5
figure 5

Visualisation of GD performance metric by MOGSABAT against the five competitive algorithms in BT

Fig. 6
figure 6

Visualisation of RGD performance metric by MOGSABAT against the five competitive algorithms in BT

5 Conclusion

A new hybrid GSA and BAT algorithm called MOGSABAT is proposed to solve MOPs. The proposed algorithm is compared with the multi-objective GSA, the multi-objective BAT algorithm, NSGAII, MOPSO and SPEAII to verify its efficiency in solving MOPs. The numerical experiments show that the proposed algorithm is promising and efficient. In comparison with the aforementioned algorithms, MOGSABAT can reach the global minimum or near-global minimum of the MOPs faster.