1 Introduction

In the real world, resources are limited, so making the best use of the available resources is highly desirable. Optimization is thus a means of obtaining the best possible solution to a specific problem. Based on their mathematical foundation, optimization methods are broadly classified into two categories: deterministic and stochastic. Deterministic optimization algorithms require gradient information to obtain the optimal solution; well-known methods include linear and nonlinear programming. These methods work well for linear search spaces but are not effective for nonlinear search spaces (Faramarzi and Afshar 2014). Stochastic optimization does not require gradient information; instead, it generates and uses random variables to obtain the optimal solution. Metaheuristic optimization algorithms are built on stochastic operators and offer the advantages of simplicity, flexibility, and gradient-free, problem-independent operation (Mirjalili et al. 2014). Because metaheuristic algorithms are gradient-free and problem independent, the problem can be treated as a black box with identified input and output variables. Metaheuristic algorithms also do not require an initial guess of the solution, which is why they have received major attention in recent years (Naik et al. 2020).

Based on the number of solutions maintained, metaheuristic algorithms are classified into two types (Talbi 2009): single-solution-based and population-based. Single-solution-based optimization works on only one solution throughout the optimization stages. Population-based optimization, on the other hand, generates a set of solutions at every generation of the optimization stages and is mostly inspired by natural phenomena. A well-known example of single-solution-based optimization is simulated annealing (SA) (Kirkpatrick et al. 1983); for population-based optimization, it is the genetic algorithm (GA) (Holland 1975).

Alternatively, based on the source of inspiration, metaheuristic algorithms are mostly classified into four categories: evolutionary algorithms (EAs), swarm intelligence (SI), physics-based, and human-based. EAs are inspired by biological evolution operators such as crossover, mutation, and selection. The most popular EA is the genetic algorithm (GA) (Holland 1975), which is based on the Darwinian theory of evolution. The GA uses crossover to generate offspring from parents, which helps to explore the search space; the next generation of parents then evolves through selection. Another popular EA is differential evolution (DE) (Feoktistov 2006). SI algorithms are inspired by the intelligent social behavior of organisms living in swarms, herds, schools, or flocks. The most widely used SI algorithm is particle swarm optimization (PSO) (Kennedy and Eberhart 1995), which simulates bird flocking behavior with each bird considered a candidate solution. Each bird updates its path based on the best one to reach an optimal solution. Other popular SI algorithms are cuckoo search (CS) (Yang 2014), ant colony optimization (ACO) (Dorigo and Stützle 2004), the firefly algorithm (FA) (Yang 2014), and artificial bee colony (ABC) (Karaboga and Basturk 2008). Physics-based algorithms are inspired by physical laws in nature. Well-known examples are simulated annealing (SA) (Kirkpatrick et al. 1983) and the gravitational search algorithm (GSA) (Rashedi et al. 2009). SA uses the thermodynamic law of deformation due to heating and cooling, whereas GSA uses Newton's law of gravitation to obtain the optimal position based on mass, force, and the distance between masses. Human-based algorithms are based on human interactions and behavior. The most prominent algorithms in this category are teaching–learning-based optimization (TLBO) (Rao et al. 2012), which is based on the impact of teachers on learners, and tabu search (TS) (Glover 1989, 1990).

Apart from the well-known algorithms presented above, researchers have come up with many more nature-inspired metaheuristic algorithms to address function optimization and engineering problems (Naik et al. 2020). Some recent population-based metaheuristic algorithms that perform quite well on function optimization include the gray wolf optimizer (GWO) (Mirjalili et al. 2014), whale optimization algorithm (WOA) (Mirjalili and Lewis 2016), moth search algorithm (MSA) (Wang 2018), monarch butterfly optimization (MBO) (Wang et al. 2019), squirrel search algorithm (SSA) (Jain et al. 2019), Harris hawks optimization (HHO) (Heidari et al. 2019), sailfish optimizer (SFO) (Shadravan et al. 2019), equilibrium optimizer (EO) (Faramarzi et al. 2020), slime mould algorithm (SMA) (Li et al. 2020), manta ray foraging optimization (MRFO) (Zhao et al. 2020), hunger games search (HGS) (Yang et al. 2021), RUNge Kutta optimizer (RUN) (Ahmadianfar et al. 2021), and Colony Predation Algorithm (CPA) (Tu et al. 2021). The GWO mimics the hunting mechanism of gray wolves (Canis lupus), such as searching, encircling, and attacking prey within a leadership hierarchy. The WOA is inspired by the social and hunting behavior of humpback whales (Megaptera novaeangliae), including searching, encircling, and bubble-net attacking. The MSA is inspired by the phototaxis movement of a family of moths of the order Lepidoptera that follow Lévy flights. The MBO is inspired by the migration behavior of monarch butterflies between two lands (USA and Mexico). The SSA simulates the foraging behavior of the southern flying squirrel (Glaucomys volans) and its effective locomotion, known as gliding. The HHO is inspired by the cooperative chasing behavior of Harris's hawk (Parabuteo unicinctus), based on the surprise pounce or "seven kills" strategy. The SFO models the group hunting behavior of sailfish (Istiophorus platypterus) attacking their prey, the sardine (Sardinella aurita). The EO is inspired by a physics principle, the controlled mass-balance model used to estimate an equilibrium state. The SMA is inspired by how the plasmodial slime mould (Physarum polycephalum) establishes an optimal path to reach a solution. The MRFO is inspired by the intelligent foraging strategies of manta rays, such as chain, cyclone, and somersault foraging. The HGS is designed around a fitness-wise search method based on the hunger-driven activities of animals. The RUN is a metaphor-free optimizer based on slope variation computed by the Runge–Kutta method widely used in mathematics. The CPA is based on the communal predation of animals, with strategies such as dispersing prey, encircling prey, assisting the successful predator, and looking for another target.

As the original optimization algorithms are designed on basic nature-inspired principles, they possess flaws and constraints. Furthermore, as per the "no free lunch" (NFL) theorem (Wolpert and Macready 1997), no single algorithm works best for all classes of problems. So, there is always room for improvement in an algorithm's performance. This has provoked researchers to enhance the original optimization algorithms through different initial design techniques (Wunnava et al. 2020b), by remodeling/modifying the search patterns (Qian et al. 2020; Wunnava et al. 2020a, c; Guha et al. 2020), or by hybridizing optimization algorithms (Zhu et al. 2020). In this context, opposition-based learning (OBL) (Tizhoosh 2005) is a method used to enrich the exploration of any optimization algorithm, because it utilizes the information of opposite relationships among entities to select the better one. OBL has been successfully applied in various optimization algorithms, artificial neural networks, fuzzy systems, and reinforcement learning to improve performance (Mahdavi et al. 2018; Dhargupta et al. 2020). The SMA (Li et al. 2020) is quite an interesting swarm-based optimization algorithm, proposed in 2020, with good performance and convergence. The SMA has been further improved by many researchers and applied in various fields. Some of the prominent works are: CNMSMA (Liu et al. 2021), which uses a chaotic map with the Nelder–Mead simplex strategy to enhance the performance of SMA and is used to estimate the parameters of photovoltaic cells; DASMA (Zhao et al. 2021), which uses diffusion and association strategies in SMA to enhance performance and is applied to medical multilevel image thresholding to help doctors; and WQSMA (Yu et al. 2021), which boosts the original SMA by adding the water-cycle concept from the water-cycle algorithm (WCA) and a quantum rotation gate mechanism.

In the same way, the authors of this article are primarily inspired by the performance of SMA (Li et al. 2020) in function optimization. It is worth mentioning that the SMA effectively uses the exploration and exploitation phases to reach an optimal or near-optimal solution. However, the SMA uses two random search agents from the whole population to decide the future displacement and direction relative to the best search agent. This feature limits its exploitation and exploration capabilities, which motivated us to propose a new algorithm for function optimization. In this work, we suggest an adaptive approach that decides whether or not to use opposition-based learning (OBL); this further enriches the exploration capability. In addition, the proposed method maximizes the exploitation phase by replacing one random search agent with the best one in the position update. The proposed method is coined the adaptive opposition slime mould algorithm (AOSMA). Qualitative and quantitative analyses of AOSMA are reported using 29 test functions, composed of 23 classical test functions (Yao et al. 1999; Naik and Panda 2016) and 6 recent composition functions from the IEEE CEC 2014 test suite (Liang et al. 2013). The results are compared with recently developed (state-of-the-art) optimization algorithms such as SMA, MRFO, EO, SFO, HHO, SSA, and WOA, and show AOSMA's superiority among them. AOSMA is validated using Wilcoxon's rank-sum test, and it ranks first in Friedman's mean rank test. Exemplar solutions are presented for the reader's reference.

The paper is organized as follows. Section 1 provides a brief introduction. The development of the proposed AOSMA is discussed in Sect. 2. The qualitative and quantitative result analysis of AOSMA, with a comparison against state-of-the-art algorithms, is reported in Sect. 3. Concluding remarks are drawn in Sect. 4.

2 Proposed work

The development of the adaptive opposition slime mould algorithm (AOSMA) is based on remodeling the approaching behavior of slime mould discussed in (Li et al. 2020) with an adaptive decision for opposition-based learning. The SMA is a stochastic optimizer based on the oscillation mode of the plasmodial slime mould (Physarum polycephalum). The slime mould uses this oscillation mode, along with positive–negative feedback, to establish the optimal path to connect to food.

2.1 Mathematical formulation of AOSMA

Let us assume \(N\) slime moulds are present in the search space with upper boundary (\(UB\)) and lower boundary (\(LB\)). Then, the \(i\)th slime mould position in \(d\) dimensions can be expressed as \(X_{i} = \left( {x_{i}^{1} ,x_{i}^{2} , \ldots ,x_{i}^{d} } \right),\forall i \in \left[ {1,N} \right]\), and the fitness (odor) of the \(i\)th slime mould is represented as \(f\left( {X_{i} } \right),\forall i \in \left[ {1,N} \right]\). So, the positions and fitness values of the \(N\) slime moulds at the current time (iteration) \(t\) are expressed as:

$$ X\left( t \right) = \left[ {\begin{array}{*{20}l} {x_{1}^{1} } \hfill & {x_{1}^{2} } \hfill & \cdots \hfill & {x_{1}^{d} } \hfill \\ {x_{2}^{1} } \hfill & {x_{2}^{2} } \hfill & \cdots \hfill & {x_{2}^{d} } \hfill \\ \vdots \hfill & \vdots \hfill & \vdots \hfill & \vdots \hfill \\ {x_{N}^{1} } \hfill & {x_{N}^{2} } \hfill & \cdots \hfill & {x_{N}^{d} } \hfill \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {X_{1} } \\ {X_{2} } \\ \vdots \\ {X_{N} } \\ \end{array} } \right] $$
(1)
$$ f\left( X \right) = \left[ {f\left( {X_{1} } \right),f\left( {X_{2} } \right), \ldots ,f\left( {X_{N} } \right)} \right] $$
(2)
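For concreteness, a minimal Python sketch of this population representation is given below. It is a sketch under illustrative assumptions: the helper name `initialize_population`, the sphere objective, and the bound values are our own choices, not part of the formulation.

```python
import numpy as np

# Illustrative sketch of Eqs. (1)-(2): N positions in d dimensions within
# [LB, UB], plus their fitness (odor) values.
def initialize_population(f, N, d, LB, UB, rng):
    X = LB + rng.random((N, d)) * (UB - LB)   # rows of X are the X_i of Eq. (1)
    fitness = np.array([f(x) for x in X])     # f(X) of Eq. (2)
    return X, fitness

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))      # e.g., the classical unimodal f1
X, fitness = initialize_population(sphere, N=30, d=30, LB=-100.0, UB=100.0, rng=rng)
```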

The position of a slime mould for the next iteration (\(t + 1\)) in SMA (Li et al. 2020) is updated using Eq. (3).

$$ X_{i} \left( {t + 1} \right) = \left\{ {\begin{array}{*{20}l} {X_{{{\text{LB}}}} \left( t \right) + V_{{\text{b}}} \left( {W \cdot X_{{\text{A}}} \left( t \right) - X_{{\text{B}}} \left( t \right)} \right)} \hfill & {r_{1} \ge \delta\; {\text{and}}\; r_{2} < p_{i} } \hfill \\ {V_{{\text{c}}} \cdot X_{i} \left( t \right)} \hfill & {r_{1} \ge \delta\; {\text{and}}\; r_{2} \ge p_{i} } \hfill \\ {{\text{rand}} \cdot \left( {{\text{UB}} - {\text{LB}}} \right) + {\text{LB}}} \hfill & {r_{1} < \delta} \hfill \\ \end{array} } \right.,\forall i \in \left[ {1,N} \right] $$
(3)

Here, \(X_{{{\text{LB}}}}\) represents the local best individual for the current iteration, \(X_{{\text{A}}}\) and \(X_{{\text{B}}}\) are randomly pooled slime moulds from the current population, \(W\) is the weight factor, and \(V_{{\text{b}}}\) and \(V_{{\text{c}}}\) are random velocities. The \(r_{1}\) and \(r_{2}\) are random numbers in the range \(\left[ {0,1} \right]\). The \(\delta\) is the chance that a slime mould re-initializes to a random search location, which is fixed at \(0.03\).

The \(p_{i}\) is the threshold value of the \(i\)th slime mould that decides whether its position for the next iteration is updated using the best individual or itself; it is evaluated as:

$$ p_{i} = \tanh \left| {f\left( {X_{i} } \right) - f_{{{\text{GB}}}} } \right|, \forall i \in \left[ {1,N} \right] $$
(4)

where \(f\left( {X_{i} } \right)\) is the fitness value of the \(i\)th slime mould \(X_{i}\), and \(f_{{{\text{GB}}}}\) is the global best fitness value, evaluated using Eq. (5) at the global best position \(X_{{{\text{GB}}}}\).

$$ f_{{{\text{GB}}}} = f\left( {X_{{{\text{GB}}}} } \right) $$
(5)

Then, the weight \(W\) for the \(N\) slime moulds at the current iteration \(t\) is determined as:

$$ W\left( {{\text{SortInd}}_{{\text{f}}} \left( i \right)} \right) = \left\{ {\begin{array}{*{20}l} {1 + {\text{rand}} \cdot \log \left( {\frac{{f_{{{\text{LB}}}} - f\left( {X_{i} } \right)}}{{f_{{{\text{LB}}}} - f_{{{\text{LW}}}} }} + 1} \right)} \hfill & {1 \le i \le \frac{N}{2}} \hfill \\ {1 - {\text{rand}} \cdot {\text{log}}\left( {\frac{{f_{{{\text{LB}}}} - f\left( {X_{i} } \right)}}{{f_{{{\text{LB}}}} - f_{{{\text{LW}}}} }} + 1} \right)} \hfill & {\frac{N}{2} < i \le N} \hfill \\ \end{array} } \right. $$
(6)

where \({\text{rand}}\) is a random number in \(\left[ {0,1} \right]\), \(f_{{{\text{LB}}}}\) is the local best fitness value, and \(f_{{{\text{LW}}}}\) is the local worst fitness value. The \(f_{{{\text{LB}}}}\) and \(f_{{{\text{LW}}}}\) are determined from the fitness values \(f\) given in Eq. (2). For a minimization problem, the fitness values are sorted in ascending order as:

$$ \left[ {{\text{Sort}}_{{\text{f}}} ,{\text{SortInd}}_{{\text{f}}} } \right] = {\text{sort}}\left( f \right) $$
(7)

The local best fitness \(f_{{{\text{LB}}}}\) and corresponding local best individual \(X_{{{\text{LB}}}}\) are extracted as:

$$ f_{{{\text{LB}}}} = f\left( {{\text{Sort}}_{{\text{f}}} \left( 1 \right)} \right) $$
(8)
$$ X_{{{\text{LB}}}} = X\left( {{\text{SortInd}}_{{\text{f}}} \left( 1 \right)} \right) $$
(9)

The local worst fitness \(f_{{{\text{LW}}}}\) is extracted as:

$$ f_{{{\text{LW}}}} = f\left( {{\text{Sort}}_{{\text{f}}} \left( N \right)} \right) $$
(10)

The \(V_{{\text{b}}}\) and \(V_{{\text{c}}}\) are random velocities drawn from continuous uniform distributions over the intervals \(\left[ { - b,b} \right]\) and \(\left[ { - c,c} \right]\), respectively. The \(b\) and \(c\) for iteration \(t\) are chosen as:

$$ b = {\text{arctanh}}\left( { - \left( \frac{t}{T} \right) + 1} \right) $$
(11)

and

$$ c = 1 - \frac{t}{T} $$
(12)

where \(T\) is the maximum number of iterations.
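A compact Python sketch of Eqs. (6)–(12), continuing the illustrative population sketch above, is as follows. For brevity it assigns one scalar weight per individual, and the small constant `eps` guards the division when \(f_{{{\text{LB}}}} = f_{{{\text{LW}}}}\); these are implementation choices, not part of the formulation.

```python
import numpy as np

def weights_and_steps(fitness, t, T, rng):
    # Sketch of Eqs. (6)-(12) for a minimization problem.
    eps = np.finfo(float).eps
    N = fitness.size
    sort_idx = np.argsort(fitness)              # Eq. (7): ascending sort
    f_lb = fitness[sort_idx[0]]                 # Eq. (8): local best fitness
    f_lw = fitness[sort_idx[-1]]                # Eq. (10): local worst fitness
    # (f_LB - f(X_i)) / (f_LB - f_LW), rewritten with a positive denominator
    ratio = (fitness[sort_idx] - f_lb) / (f_lw - f_lb + eps)
    W = np.empty(N)
    half = N // 2
    W[sort_idx[:half]] = 1 + rng.random(half) * np.log(ratio[:half] + 1)      # best half
    W[sort_idx[half:]] = 1 - rng.random(N - half) * np.log(ratio[half:] + 1)  # worse half
    b = np.arctanh(1 - t / T)                   # Eq. (11); arctanh(0) = 0 at t = T
    c = 1 - t / T                               # Eq. (12)
    return W, b, c, sort_idx
```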

The SMA has shown promising exploration and exploitation capability in solving function optimization and engineering design problems, as discussed in (Li et al. 2020). However, there is scope for improvement in the search process for the slime mould's optimal food path described by Eq. (3). The next-generation position-updating rule of the slime mould in SMA depends mainly on three cases based on \(\delta\) and \(p_{i}\), which are:

Case 1:

When \(r_{1} \ge \delta\) and \(r_{2} < p_{i}\), the slime mould search trajectory is guided by the local best individual \(X_{{{\text{LB}}}}\) and two randomly pooled slime moulds \(X_{{\text{A}}}\) and \(X_{{\text{B}}}\) from the population of \(N\) slime moulds, with velocity \(V_{{\text{b}}}\). This step helps in balancing exploration and exploitation.

Case 2:

When \(r_{1} \ge \delta\) and \(r_{2} \ge p_{i}\), the slime mould trajectory is guided by its own position with velocity \(V_{c}\). This step helps in exploitation.

Case 3:

When \(r_{1} < \delta\), the slime mould is re-initialized in the search space, which helps in exploration.

Case 1 shows that \(X_{{\text{A}}}\) and \(X_{{\text{B}}}\) are two randomly pooled slime moulds, so the resulting solutions are not guided properly to explore and exploit. This limitation can be overcome by replacing one randomly pooled slime mould, \(X_{{\text{A}}}\), with the local best individual \(X_{{{\text{LB}}}}\). So, the \(i\)th \(\left( {i = 1,2, \ldots ,N} \right)\) slime mould position-updating rule of Eq. (3) is remodeled as Eq. (13).

$$ Xn_{i} \left( t \right) = \begin{array}{*{20}c} {X_{{{\text{LB}}}} \left( t \right) + V_{{\text{b}}} \left( {W \cdot X_{{{\text{LB}}}} \left( t \right) - X_{{\text{B}}} \left( t \right)} \right),} & {{\text{if}}\; r_{1} \ge \delta\; {\text{and}}\; r_{2} < p_{i} } \\ \end{array} $$
(13a)
$$ Xn_{i} \left( t \right) = \begin{array}{*{20}c} {V_{{\text{c}}} \cdot X_{i} \left( t \right),} & {{\text{if}}\; r_{1} \ge \delta\; {\text{and}}\; r_{2} \ge p_{i} } \\ \end{array} $$
(13b)
$$ Xn_{i} \left( t \right) = \begin{array}{*{20}c} {{\text{rand}} \cdot \left( {{\text{UB}} - {\text{LB}}} \right) + {\text{LB}},} & {{\text{if}}\; r_{1} < \delta } \\ \end{array} $$
(13c)
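Continuing the sketch, the remodeled rule of Eq. (13) for one slime mould can be written as below, with \(\delta = 0.03\) and the velocities \(V_{{\text{b}}}\) and \(V_{{\text{c}}}\) drawn uniformly from \(\left[ { - b,b} \right]\) and \(\left[ { - c,c} \right]\); the helper names are illustrative.

```python
import numpy as np

def candidate_position(X, i, X_lb, W, b, c, p, LB, UB, rng, delta=0.03):
    # Sketch of Eq. (13); X_lb is the local best individual X_LB.
    N, d = X.shape
    r1, r2 = rng.random(), rng.random()
    if r1 < delta:                              # Eq. (13c): random re-initialization
        return LB + rng.random(d) * (UB - LB)
    if r2 < p[i]:                               # Eq. (13a): guided by X_LB
        Vb = rng.uniform(-b, b, d)              # velocity in [-b, b]
        X_B = X[rng.integers(N)]                # one randomly pooled slime mould
        return X_lb + Vb * (W[i] * X_lb - X_B)
    Vc = rng.uniform(-c, c, d)                  # Eq. (13b): velocity in [-c, c]
    return Vc * X[i]
```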

As per Case 2, the slime mould exploits a nearby place, so it may follow a path with a worse fitness value than before. To overcome this limitation, an adaptive decision mechanism is a better choice. As per Case 3, the SMA has a provision for dedicated exploration; however, as \(\delta\) is a small value, this exploration is limited. To overcome this limitation, we need to supplement SMA with extra exploration, which helps it escape local minima. As a combined effort to overcome the limitations of Cases 2 and 3, we use an adaptive decision strategy to determine whether further exploration using OBL (Tizhoosh 2005) is needed.

2.1.1 Opposition-based learning

OBL uses an estimate \(Xo_{i}\) in the search space that is the exact opposite of the position \(Xn_{i}\) of each slime mould \(\left( {i = 1,2, \ldots ,N} \right)\) and compares the two to update the position for the next iteration. This step helps to avoid being trapped in local minima and improves convergence. So, the \(Xo_{i}\) for the \(i\)th slime mould in the \(j\)th dimension is estimated as:

$$ Xo_{i}^{j} \left( t \right) = \min \left( {Xn_{i} \left( t \right)} \right) + \max \left( {Xn_{i} \left( t \right)} \right) - Xn_{i}^{j} \left( t \right) $$
(14)

where \(i = 1,2, \ldots ,N\) and \(j = 1,2, \ldots ,d\).

Let \(Xs_{i}\) denote the \(i\)th slime mould position, which for a minimization problem is selected as:

$$ Xs_{i} \left( t \right) = \left\{ {\begin{array}{*{20}c} {Xo_{i} \left( t \right)} & {{\text{if}}\; f\left( {Xo_{i} \left( t \right)} \right) < f\left( {Xn_{i} \left( t \right)} \right)} \\ {Xn_{i} \left( t \right)} & {{\text{if}}\; f\left( {Xo_{i} \left( t \right)} \right) \ge f\left( {Xn_{i} \left( t \right)} \right)} \\ \end{array} } \right. $$
(15)

2.1.2 Adaptive decision strategy

When the slime mould is following a declining nutrient path, an adaptive decision is taken based on the current fitness value \(f\left( {Xn_{i} \left( t \right)} \right)\) and the old fitness value \(f\left( {X_{i} \left( t \right)} \right)\). The adaptive decision supplements extra exploration via OBL when needed. Finally, the position for the next iteration is updated using the adaptive decision strategy of AOSMA, which is modeled as:

$$ X_{i} \left( {t + 1} \right) = \left\{ {\begin{array}{*{20}c} {Xn_{i} \left( t \right)} & {{\text{if}}\; f\left( {Xn_{i} \left( t \right)} \right) \le f\left( {X_{i} \left( t \right)} \right)} \\ {Xs_{i} \left( t \right)} & {{\text{if}}\; f\left( {Xn_{i} \left( t \right)} \right) > f\left( {X_{i} \left( t \right)} \right)} \\ \end{array} } \right., \forall i \in \left[ {1,N} \right] $$
(16)
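These two steps can be sketched together as follows. Note that the opposite estimate of Eq. (14) is evaluated only for agents whose fitness has worsened, so the extra function evaluations are spent on demand; the function name is again illustrative.

```python
import numpy as np

def adaptive_opposition_step(f, f_old, Xn, fn):
    # Sketch of Eqs. (14)-(16) for a minimization problem.
    X_next, f_next = Xn.copy(), fn.copy()
    for i in range(len(fn)):
        if fn[i] > f_old[i]:                    # Eq. (16): new fitness is worse
            lo, hi = Xn[i].min(), Xn[i].max()
            Xo = lo + hi - Xn[i]                # Eq. (14): opposite estimate
            fo = f(Xo)
            if fo < fn[i]:                      # Eq. (15): keep the better of the two
                X_next[i], f_next[i] = Xo, fo
    return X_next, f_next
```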

Interestingly, the proposed AOSMA enhances the efficiency of SMA with an adaptive decision strategy that determines whether OBL is needed during the search. The pseudocode is presented in Sect. 2.2, and a flowchart is shown in Fig. 1.

Fig. 1 Flowchart of AOSMA

2.2 Pseudocode of AOSMA

In the beginning, identify the number of slime moulds \(N\) to be employed, the objective function \(f\) of dimension \(d\), the search space boundaries \(UB\) and \(LB\), and the maximum number of iterations allowed, \(T\). The pseudocode of the suggested AOSMA is as follows:

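The listing below is a minimal end-to-end Python rendering of this pseudocode, assembled from the helper sketches of Sect. 2.1 (`initialize_population`, `weights_and_steps`, `candidate_position`, and `adaptive_opposition_step`); all names are illustrative, and the listing is a sketch rather than a reference implementation.

```python
import numpy as np

def aosma(f, N, d, LB, UB, T, seed=0):
    # Sketch of the AOSMA main loop, reusing the illustrative helpers above.
    rng = np.random.default_rng(seed)
    X, fitness = initialize_population(f, N, d, LB, UB, rng)
    g = int(np.argmin(fitness))
    X_gb, f_gb = X[g].copy(), fitness[g]        # global best, Eq. (5)
    for t in range(1, T + 1):
        W, b, c, sort_idx = weights_and_steps(fitness, t, T, rng)
        X_lb = X[sort_idx[0]]                   # local best individual, Eq. (9)
        p = np.tanh(np.abs(fitness - f_gb))     # threshold p_i of Eq. (4)
        Xn = np.array([candidate_position(X, i, X_lb, W, b, c, p, LB, UB, rng)
                       for i in range(N)])
        Xn = np.clip(Xn, LB, UB)                # keep candidates inside [LB, UB]
        fn = np.array([f(x) for x in Xn])
        X, fitness = adaptive_opposition_step(f, fitness, Xn, fn)  # Eqs. (14)-(16)
        if fitness.min() < f_gb:                # track the global best
            g = int(np.argmin(fitness))
            X_gb, f_gb = X[g].copy(), fitness[g]
    return X_gb, f_gb

# Example usage: X_best, f_best = aosma(sphere, N=30, d=30, LB=-100.0, UB=100.0, T=500)
```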

3 Results and discussion

This section presents the performance evaluation of the proposed AOSMA on a set of 29 test functions that includes 23 classical test functions (Yao et al. 1999; Wunnava et al. 2020b) and 6 composition test functions from the IEEE CEC 2014 test suite (Liang et al. 2013). In this study, the search history, trajectory, average fitness history, convergence curves, and boxplots are used as qualitative metrics, whereas the average fitness value ('Ave') and standard deviation ('Std') are used as quantitative metrics for comparison and validation. The proposed AOSMA is also validated using statistical methods, namely Friedman's mean rank test and the Wilcoxon rank-sum test.

3.1 Test functions, compared algorithms, and experimental setup

Here, the 29 test functions considered for the performance evaluation comprise four core groups of benchmark landscapes: unimodal \(\left( {f_{1} - f_{7} } \right)\), multimodal \(\left( {f_{8} - f_{13} } \right)\), multimodal with fixed dimensions \(\left( {f_{14} - f_{23} } \right)\), and composition (\({\text{CEC}}14 - {\text{F}}23\) to \({\text{CEC}}14 - {\text{F}}28\)). The unimodal test functions have a unique global optimum, which helps to assess the exploitation ability of an optimization algorithm. The multimodal test functions have more than one optimum and many local minima, so they help to assess the exploration ability of an algorithm, i.e., its ability to reach the global optimum without being trapped in local minima. The composition test functions, which have many local optima, cover hybrid composite shifted, rotated, and extended multimodal test functions, mimicking the elevated complexity of real search domains. They help to assess the tradeoff between exploitation and exploration required to reach the global optimum while avoiding local optima.

The performance of AOSMA on the test functions is compared with recently developed optimization algorithms: SMA (Li et al. 2020), MRFO (Zhao et al. 2020), EO (Faramarzi et al. 2020), SFO (Shadravan et al. 2019), HHO (Heidari et al. 2019), SSA (Jain et al. 2019), and WOA (Mirjalili and Lewis 2016). The experimental parameter settings, shown in Table 1, are taken as reported in the original works. To maintain consistency, the maximum number of function evaluations and the population size are set to 15,000 and 30 for all optimization algorithms. The results are compared using the average fitness value ('Ave') and the standard deviation of the fitness values ('Std') over 51 independent runs of each optimization algorithm.

Table 1 Parameter settings

3.2 Qualitative analysis of AOSMA

The qualitative analysis of AOSMA is presented in Fig. 2, which comprises the search history, trajectory, and optimization history metrics. This study shows how AOSMA searches for optimal solutions. The first column of Fig. 2 shows a three-dimensional representation, with contour lines, of randomly selected unimodal \(\left( {f_{1} \,{\text{and}}\,f_{7} } \right)\), multimodal \(\left( {f_{10} } \right)\), multimodal with fixed dimensions \(\left( {f_{14} } \right)\), and composition (\({\text{CEC}}14 - {\text{F}}27\) and \({\text{CEC}}14 - {\text{F}}28\)) test functions.

Fig. 2 Qualitative analysis (search history, trajectory, and average fitness history) of AOSMA

The search history metric is shown in the second column of Fig. 2, which plots the first two dimensions (\(x^{1}\) and \(x^{2}\)) of the \(N\) slime moulds from the first iteration to the last. The positions are shown on contour lines of the search space for better insight. The positions of the slime moulds aggregate near the optimal region, which reflects that AOSMA performs effective exploitation. However, some positions are also scattered around the search space, which reflects that AOSMA performs exploration as well. From the search history metric, it is observed that AOSMA balances exploitation and exploration well, as needed for solving real-world optimization problems.

The trajectory metric is shown in the third column of Fig. 2, which plots the position of the first slime mould \(X = \left\{ {x^{1} ,x^{2} , \ldots ,x^{d} } \right\}\) in the \(d\)-dimensional space within the search boundary \(\left[ {{\text{LB}},{\text{UB}}} \right]\). From Fig. 2, it is evident that the positions are initialized at random locations covering the whole search boundary. As the generation (iteration) count increases, the positions converge toward the optimal solution, which demonstrates exploitation efficiency. Occasionally, the trajectory changes abruptly due to exploration. Finally, at a later stage (near the maximum iteration), the trajectory flattens, which stabilizes the search and converges to the global/local optimum. The AOSMA thus shows well-tuned exploitation and exploration.

The optimization history (convergence curve) metric is shown in the fourth column of Fig. 2, which plots the fitness value of the best slime mould over the iterations. A decreasing trend in the optimization history reveals the effectiveness of AOSMA.

3.3 AOSMA’s comparative performance on test functions

In this section, the results of AOSMA are compared with various optimization algorithms in Table 2 for the 29 test functions: unimodal \(\left( {f_{1} - f_{7} } \right)\), multimodal \(\left( {f_{8} - f_{23} } \right)\), and composition test functions (\({\text{CEC}}14 - {\text{F}}23\) to \({\text{CEC}}14 - {\text{F}}28\)). The results in Table 2 are evaluated over 31 independent runs to obtain the average value 'Ave' and standard deviation 'Std' for 30-dimensional test cases (except the fixed-dimension multimodal test functions \(f_{14} - f_{23}\)), with a maximum of 15,000 function evaluations.

Table 2 Results of the optimization algorithm and Friedman mean rank

The unimodal test functions \(f_{1} - f_{7}\) are designed to test the exploitation ability of an optimization algorithm. From the results in Table 2, AOSMA obtains the optimal minima for test functions \(f_{1} - f_{4}\). AOSMA shows superior results for test function \(f_{7}\) compared with the other optimization algorithms. For test function \(f_{6}\), AOSMA shows a significant improvement over its predecessor SMA and superiority over SMA, SFO, HHO, and WOA. For test function \(f_{5}\), AOSMA's results are similar to those of SMA, yet better than those of MRFO, EO, and WOA.

The multimodal test functions \(\left( {f_{8} - f_{23} } \right)\) have a large number of local optima, so they are used to verify how efficiently an optimizer explores the search space to reach the global minimum. From the results in Table 2, it is noted that AOSMA shows superiority among the optimization algorithms in obtaining the optimal values. AOSMA outperforms the other algorithms on test functions \(f_{8}\) and \(f_{21} - f_{23}\) in reaching the global minimum. AOSMA obtains the global minima for test functions \(f_{9}\) and \(f_{11}\) and consistent optimal solutions for test function \(f_{10}\) (as do SMA, MRFO, EO, and HHO). However, for test functions \(f_{12}\) and \(f_{13}\), AOSMA shows significant improvement over SMA, although its results are not the best among the optimizers. Moreover, the multimodal test functions with fixed dimensions \(\left( {f_{14} - f_{23} } \right)\) show similar results, with AOSMA achieving globally optimal results on \(f_{14}\), \(f_{16} - f_{19}\), and \(f_{21} - f_{23}\).

The composition test functions from the IEEE CEC 2014 test suite simulate real search domains with many local minima. A comparison of AOSMA with the other optimization algorithms on the six composition functions (\({\text{CEC}}14 - {\text{F}}23\) to \({\text{CEC}}14 - {\text{F}}28\)) is presented in Table 2. AOSMA shows results similar to MRFO, SMA, and WOA for all reported composition functions. However, AOSMA shows some improvement over SFO, HHO, and SSA on composition functions such as \({\text{CEC}}14 - {\text{F}}26\).

The results presented in Table 2 are obtained from 31 independent runs, so for better understanding, boxplots of four test functions from each of the unimodal, multimodal, multimodal with fixed dimensions, and composition categories are presented in Fig. 3. From Fig. 3, one can see that AOSMA shows superior consistency among the various optimization algorithms.

Fig. 3 Boxplots of several benchmark test functions

Further, a statistical analysis using Friedman's mean rank test on the average values ('Ave') in Table 2 is conducted, where AOSMA ranks first among the optimization algorithms, as reported at the bottom of Table 2. In addition, a Wilcoxon rank-sum test is conducted to obtain the \(p\)-values at \(\alpha = 0.05\), presented in Table 3. The Wilcoxon rank-sum test detects a substantial difference between the fitness values obtained by the various methods based on the \(p\)-value. The comparison of AOSMA with each other optimization algorithm is marked as significantly better (\(+\)), significantly equal (\(\approx\)), or significantly poorer (\(-\)). From Table 3, it is seen that AOSMA is significantly better in 48.3% of cases over SMA, 51.7% over MRFO, 82.8% over EO, 96.6% over SFO, 69.0% over HHO, 96.6% over SSA, and 89.7% over WOA. From these data, it can be observed that AOSMA has evolved as a better optimization algorithm.
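For illustration, the snippet below shows how such a rank-sum comparison at \(\alpha = 0.05\) can be computed with SciPy; the two result vectors are synthetic placeholders, not the values behind Table 3.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
aosma_runs = rng.normal(0.0, 1e-3, 31)      # placeholder per-run best fitness values
other_runs = rng.normal(5e-3, 1e-3, 31)     # placeholder competitor results

stat, p_value = ranksums(aosma_runs, other_runs)
if p_value >= 0.05:
    verdict = 'significantly equal'
elif aosma_runs.mean() < other_runs.mean():  # minimization: lower mean is better
    verdict = 'significantly better'
else:
    verdict = 'significantly poorer'
print(f"p = {p_value:.3e} -> {verdict}")
```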

Table 3 \(p\)-values with 5% significance for the test functions using Wilcoxon rank-sum test (\(p\)-values greater than 0.05 are shown in boldface)

3.4 AOSMA’s comparative convergence analysis

This section presents comparative convergence curves of AOSMA and the other optimization algorithms for iterations 1 through 500. For the analysis, we consider 5 unimodal (\(f_{1} - f_{4}\) and \(f_{7}\)), 5 multimodal (\(f_{10}\), \(f_{11}\), \(f_{13}\), \(f_{15}\), and \(f_{23}\)), and 2 composition (\({\text{CEC}}14 - {\text{F}}24\) and \({\text{CEC}}14 - {\text{F}}27\)) test functions. Figure 4 shows the comparative convergence curves, from which we draw the following observations:

  • \(f_{1} - f_{4}\): At the beginning of the search process, AOSMA lags behind MRFO; however, at the later stages it shows superiority in obtaining the optimal results.

  • \(f_{7}\): The AOSMA shows superior convergence over the other optimization algorithms.

  • \(f_{10}\) and \(f_{11}\): The AOSMA convergence is similar to that of SMA, MRFO, and HHO in reaching the globally optimal solution.

  • \(f_{13}\): The AOSMA convergence shows a small improvement over SMA; however, it still lags behind SSA, HHO, SFO, and EO.

  • \(f_{15}\) and \(f_{23}\): All the optimization algorithms reach the optimal solution efficiently; however, AOSMA shows better convergence.

  • \({\text{CEC}}14 - {\text{F}}24\) and \({\text{CEC}}14 - {\text{F}}27\): The AOSMA convergence is the same as MRFO's.

Fig. 4 Convergence curves

Based on these analyses, AOSMA shows improved convergence due to its enhanced exploitation and exploration abilities.

3.5 AOSMA’s comparative scalability analysis

In this section, a scalability analysis is performed to understand the impact of dimensionality on AOSMA's performance, compared with the other optimization algorithms. For the experiment, the unimodal/multimodal test functions \(f_{1} - f_{13}\) are chosen with scalable dimensions \(d = \left\{ {10,\,30,\,50,100,200,300} \right\}\). Comparative results based on the average fitness value 'Ave' over 31 independent runs with a maximum of 15,000 function evaluations are reported in Fig. 5. From Fig. 5, we find that in most cases, such as \(f_{1} - f_{4}\), \(f_{7}\), and \(f_{9} - f_{11}\), AOSMA's optimal results are independent of dimensionality changes. For test function \(f_{8}\), the scalability results are similar to those of SMA and HHO (SSA, MRFO, and SFO are not considered in this test case, as these algorithms produce large deviations from the optimal value). For the other test functions, AOSMA shows marginally degraded results with increasing dimensionality. For a better understanding of the overall performance of AOSMA on the scalable test functions, a Friedman mean rank test is conducted on the average fitness values obtained and reported in Fig. 6. Based on the results shown in Fig. 6, AOSMA ranks first irrespective of dimension.

Fig. 5 Scalability analysis

Fig. 6 Friedman mean rank score and ranking based on scalability results of test functions \(f_{1} - f_{13}\)

4 Conclusion

This work presented an adaptive opposition slime mould algorithm (AOSMA) for function optimization. The proposed algorithm exhibits better exploration and exploitation because it adaptively decides whether or not to use OBL. It should be mentioned that the use of position information from the opposite search space greatly enhances the performance of AOSMA. From Fig. 4, it is seen that the suggested AOSMA shows better convergence than other state-of-the-art methods because of its enhanced exploration and exploitation capabilities. Figure 5 shows the scalability of the proposed AOSMA. From the statistical analysis, it is observed that its performance is better than that of the recent methods. Moreover, the suggested AOSMA is more consistent than the other optimizers, as is implicit in Fig. 3. From Table 3, it is observed that AOSMA shows explicitly better results. In summary, this algorithm uses only one random search agent, as opposed to two in the SMA, reducing by 50 percent the chance of misguided exploration in certain instances. Furthermore, it inherently includes an adaptive mechanism that invokes opposition-based learning on demand to supplement the exploration phase. This, in essence, is the reason behind the improved performance achieved. Finally, it is believed that the suggested algorithm will be useful for function optimization in solving real-world engineering problems.