1 Introduction

Optimization is a commonly encountered problem in all engineering disciplines, and there is a wide range of optimization problems. The goal of solving an optimization problem is to find the best possible solution or a near-optimal one. Exact and approximate algorithms for finding near-optimal solutions have been developed over a short time [1, 2]. Exact algorithms can find the optimal solution precisely; however, they cannot solve complex and NP-hard problems, because as the dimensions of such problems grow, their execution time increases exponentially with problem size. Therefore, approximate algorithms have been developed to overcome these limitations; they can find a near-optimal solution for NP-hard and complex problems in much less time. Approximate algorithms are divided into heuristic and metaheuristic algorithms, both of which use approximate methods to solve optimization problems.

In most cases, heuristic algorithms seek a good, near-optimal solution in reasonable computational time; however, they do not guarantee an optimal solution. The main disadvantages of heuristic algorithms include producing a single solution or a limited number of solutions, getting trapped in local optima, and premature convergence. Metaheuristic algorithms have been developed to overcome these weaknesses [3,4,5,6,7,8,9,10]. Each metaheuristic algorithm uses a specific mechanism to escape local optima or avoid getting trapped in them. As a result, metaheuristic algorithms can solve optimization problems and find high-quality solutions for a broad class of optimization problems without stagnating in local optima [11,12,13,14].

Typically, metaheuristic algorithms improve step by step: the search begins with a random structure at one or more points in the search space. An improvement operation is then used to find a new structure that further improves the objective function. This improved structure is taken as the new system structure, and the process continues until the best achievable improvement for the system is reached.

In the last decade, metaheuristic optimization algorithms have been used in engineering applications to solve NP-hard and complex optimization problems more commonly than other algorithms [11,12,13,14], because they have the following advantages: (1) they are conceptually simple and easy to implement; (2) they do not need derivative information about the objective function; (3) they can escape local optima; and (4) they can be applied to a wide variety of NP-hard and complex optimization problems. Metaheuristic methods attempt to solve various NP-hard and complex optimization problems by drawing inspiration from nature and evolution. Nature-inspired metaheuristic algorithms solve optimization problems by imitating biological or physical phenomena. These algorithms fall into three main categories: evolution-based, physics-based, and swarm-based methods. All metaheuristic optimization algorithms, regardless of their source of inspiration, share several common features [15, 16].

Yang (2008), inspired by the flashing behavior of fireflies, developed the FA [17]. The FA is considered one of the most successful and low-cost algorithms. Each individual in a group of fireflies moves toward the point where its best personal experience has occurred. The SOS algorithm [18] is also one of the most recent methods for solving optimization problems, based on the interactions of organisms in nature. It has three natural processes, Mutualism, Commensalism, and Parasitism, that can be used as new processes to improve other algorithms.

The differential evolution (DE) algorithm, thanks to its mutation operators, is well suited to hybridization with other metaheuristic algorithms. Accordingly, in [19], Zhang et al. presented a new FA for global optimization that hybridizes the operators of the DE algorithm, combining the advantages of the FA and DE to provide a robust hybrid algorithm for solving optimization problems. In [20], a hybrid algorithm has also been developed based on the FA and DE for global optimization; the proposed design covers the FA's weaknesses using the DE algorithm. Finally, a new adaptive hybrid evolutionary FA, a hybridization of the DE algorithm and the FA, is proposed in [21] to optimize the size and shape of truss structures under multiple frequency constraints. The results showed that the proposed method's convergence rate was significantly better than that of the DE algorithm, the FA, and several approaches from the literature.

In [22], a hybrid model was introduced to improve the FA's performance by introducing automated learning to regulate firefly behavior and using a genetic algorithm (GA) to enhance global search and generate new solutions. In another study, a hybridization of the GA and the FA was proposed to optimize complex problems and search for a more accurate global solution; the GA searches the solution space for the global minimum, and the FA improves the precision of the candidate solutions [23]. Rahmani and MirHassani proposed a hybrid optimization method combining the discrete FA with the GA [24]. The authors present a detailed description of the problem and adapt the algorithm accordingly: a discrete FA is proposed to solve a facility location problem, and an improvement method based on the GA is used to transform a trial solution into a better one.

A chaotic adaptive FA was proposed for mechanical design optimization problems in [25], in which fewer computations are carried out with the help of adaptive firefly motions. Additionally, with adaptive parameters, the search accuracy is higher, which yields better objective values; it was therefore shown that the FA's performance can be improved with the help of chaotic functions. The most crucial drawback of this model is scattering in the search space and failure to discover the optimal points within few iterations. In [26], the respective advantages of particle swarm optimization (PSO) and the FA are exploited, together with three new operators, in a hybrid optimizer called FAPSO. In the first step, the population is divided into two sub-populations that use the FA and PSO, respectively, as the base algorithm for the optimization process. To exchange information between the two sub-populations, and thus make efficient use of the advantages of PSO and the FA, the sub-populations share their optimal solutions whenever their improvement falls below a predefined threshold, i.e., when they encounter stagnation. In the second step, each dimension of the search space is divided into many small sub-regions, over which historical knowledge is recorded to help detect the best current solution. The most crucial drawback of this hybrid model is overpopulation and increased time complexity.

In [27], a hybridization of the FA and the social spider algorithm (SSA) is presented by Gupta and Arora for multi-objective optimization functions. The notion of hybridization is proposed in this model to tackle optimization problems by making optimal use of exploration and exploitation throughout the search space. The algorithm is constructed by combining the FA with the SSA's biological processes. It was tested on standard benchmark problems and compared with the FA and the SSA; the results showed that it outperforms both on the benchmark functions.

In [28], a new optimization model, called CLA-FA, is suggested. This model combines cellular learning automata (CLA) and the firefly algorithm (FA). In the proposed model, each dimension of the search space is assigned to one cell of the CLA, and in each cell a swarm of fireflies is located, which is responsible for optimizing that specific dimension. The experimental results showed that the proposed method can find the global optima and improve the global search and exploration rate of the standard FA. The most crucial drawback of this model is the lack of adjustment of the FA parameters and the failure to achieve the best optimal solution. Hybrid FAs are also used in image processing. A hybrid of the FA and the fuzzy c-means algorithm was proposed to segment MRI brain images [29] and proved efficient in improving the clustering performance of the traditional fuzzy c-means algorithm. The simulation results indicated the proposed algorithm's superiority over other segmentation methods in quantitative and qualitative terms.

In [30], a hybrid of the WOA and the GWO, called WOAGWO, is proposed. Simulations were run on three standard benchmark suites: 23 common functions, 25 CEC2005 functions, and 10 CEC2019 functions. The results showed that WOAGWO achieves better solutions than WOA and GWO. Furthermore, the improved raven roosting optimization algorithm (IRRO) and the CSO algorithm are combined in a hybrid (IRRO–CSO) methodology [31]. The CSO algorithm was chosen for its efficiency in balancing local and global search, while the IRRO algorithm was chosen for its resistance to premature convergence and its superior performance in larger search spaces. Finally, the proposed hybrid IRRO–CSO algorithm was compared with various swarm intelligence algorithms on the CEC2017 benchmark functions. Implemented in MATLAB, the hybrid IRRO–CSO algorithm showed an improvement in average best fitness compared with the WOA, GWO, CSO, BAT, and PSO algorithms.

The related work discussed several algorithms with notable limitations. These algorithms cannot guarantee optimal solutions because they stagnate in local optima. Poor population diversity and the lack of an optimal initial population were important limitations of the algorithms in the literature. Convergence is an essential property that allows an algorithm to approach the optimal solution, but the described algorithms lacked optimal convergence behavior.

This paper presents two approaches that use the SOS algorithm's processes and the opposition-based learning (OBL) method to improve exploration and exploitation, increase data sharing, and avoid getting trapped in the FA's local optima.

The goal of the OBL strategy is to increase the efficiency of metaheuristic algorithms. OBL considers both the candidate solutions generated by a random iteration scheme and their opposite solutions in different areas of the search space.

The main contributions of this paper are as follows:

  • Improving exploration and exploitation operations using OBL

  • Improving the firefly algorithm using OBL

  • Hybridizing IOFA and SOS to discover optimal solutions

  • Evaluation of the proposed models on 24 different mathematical functions.

The rest of the paper has been structured as follows: Sect. 2 describes the FA and the SOS algorithm, and in Sect. 3, two different approaches are proposed to improve the FA in solving optimization problems. Then, Sect. 4 shows experiments and experimental results. Finally, conclusions and suggestions for future work are presented in Sect. 5.

2 Materials and methods

2.1 Firefly algorithm

The FA was presented by Xin-She Yang in 2007 at the University of Cambridge, based on the flashing behavior and patterns of fireflies in nature [17]. The FA is a nature-inspired metaheuristic algorithm used in the optimization of a wide range of engineering and NP-hard problems. It is a stochastic algorithm; that is, a random search is used to find a set of solutions. At its most basic level, the FA focuses on generating solutions within a search space and selecting the best surviving solution. The random search helps avoid getting trapped in local optima. For metaheuristic algorithms, exploration means discovering diverse solutions within the search space, while exploitation means focusing the search on the best adjacent solutions. Figure 1 presents the pseudo-code of the FA.

The rate and mode of illumination and the time interval between the emitted light signals attract the two sexes to each other. The intensity of light, I, decreases with increasing distance R from the light source. Three essential features of the FA are [32]:

  1. All fireflies are of the same sex, and a firefly becomes brighter and more attractive as it moves randomly.

  2. The attractiveness of a firefly is proportional to its brightness and decreases with the distance from it. The light absorption coefficient γ models the reduction in light intensity, and the value of the objective function determines the firefly's luminance.

  3. The distance between fireflies i and j is obtained from Eq. (1), where x_{i,k} is the kth component of the spatial coordinates of the ith firefly.

    $$r_{i,j} = \sqrt {\sum\limits_{k = 1}^{d} {\left( {x_{i,k} - x_{j,k} } \right)^{2} } }$$
    (1)

The movement of a firefly i attracted toward a brighter firefly j is determined by Eq. (2):

$$X_{i} = X_{i} + B_{0} e^{{ - \gamma r_{ij}^{2} }} \left( {X_{j} - X_{i} } \right) + a\left( {{\text{rand}} - 1/2} \right)$$
(2)

Here, a is a randomization parameter; rand is a random number drawn from [0, 1]; B0 indicates the attractiveness of the light source at distance r = 0; and the light absorption coefficient γ determines how the attractiveness changes with distance.
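As a concrete illustration, the distance and movement rules of Eqs. (1) and (2) can be sketched as below. This is a minimal sketch: the function name and the parameter values for B0, γ, and a are illustrative choices, not values fixed by the paper.

```python
import numpy as np

B0 = 1.0     # attractiveness at distance r = 0 (illustrative value)
GAMMA = 1.0  # light absorption coefficient gamma (illustrative value)
A = 0.2      # randomization parameter a (illustrative value)

def move_firefly(x_i, x_j, rng):
    """Move firefly i toward a brighter firefly j, following Eq. (2)."""
    r_ij = np.linalg.norm(x_i - x_j)           # Euclidean distance, Eq. (1)
    beta = B0 * np.exp(-GAMMA * r_ij ** 2)     # attractiveness decays with distance
    step = A * (rng.random(x_i.shape) - 0.5)   # random term a * (rand - 1/2)
    return x_i + beta * (x_j - x_i) + step
```

With B0 near 1 and a small γ, nearby fireflies exert a strong pull, while the random term keeps the search stochastic.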

FA has two inner loops when going through the population n and one outer loop for iteration t. Therefore, the complexity at the extreme case is \(O({n}^{2}t)\). As n is small (typically, n = 30), and t is large (for example, t = 5000), the computation cost is relatively inexpensive because the algorithm complexity is linear in terms of t.

One disadvantage of the firefly algorithm is its weak exploration. Because members of the population always move in the same direction, the agents do not explore the entire search space. Moreover, because of the light-intensity law, search agents do not precisely follow the best individual in the population.

2.2 Symbiotic organisms search

Cheng and Prayogo [18] proposed the SOS algorithm in 2014. The SOS algorithm is one of the newest methods for solving optimization problems, based on the interactions of organisms in nature. Organisms rarely live in isolation, because they rely on other species for nutrition and even survival. This reliance-based relationship is known as a symbiotic relationship. The SOS algorithm begins with an initial population called the ecosystem. In the initial ecosystem, a group of organisms is generated randomly in the search region, and each organism represents a candidate solution for the problem at hand. The SOS algorithm produces new solutions by imitating the biological relationships between two living organisms in an ecosystem. Figure 2 shows the SOS algorithm's pseudo-code, which includes the three types of biological relationship found in nature: Mutualism, Commensalism, and Parasitism [33,34,35,36]. The time complexity of the SOS algorithm is \(O({n}^{2})\).

2.2.1 Mutualism

At this point, both organisms benefit from the relationship they have with each other. Consider honeybees' relationship with flowers, for example, where the bees fly through the flowers to collect the nectar required for producing honey. It is also beneficial to the flowers because bees, scattering the pollen, facilitate their pollination [18]. In the SOS algorithm, Xi is an organism corresponding to the ith individual in the ecosystem. The next organism, Xj, is randomly selected and associated with the Xi in the ecosystem. Finally, Xi and Xj are updated in the Mutualism phase according to Eqs. (3) and (4):

$$X_{{i{\text{new}}}} = X_{i} + {\text{rand}}\left( {0,1} \right) \times \left( {X_{{{\text{best}}}} - {\text{Mutual}}\_{\text{Vector}} \times BF_{1} } \right)$$
(3)
$$X_{{j{\text{new}}}} = X_{j} + {\text{rand}}\left( {0,1} \right) \times \left( {X_{{{\text{best}}}} - {\text{Mutual}}\_{\text{Vector}} \times BF_{2} } \right)$$
(4)
$${\text{Mutual}}\_{\text{Vector}} = \left( {X_{i} + X_{j} } \right)/2$$
(5)

In Eqs. (3) and (4), rand(0,1) is a random vector of numbers. BF1 and BF2 are the benefit factors of Xi and Xj, which reflect each organism's level of benefit. In Eq. (5), Mutual_Vector represents the relationship between Xi and Xj. The Mutual_Vector × BF terms in Eqs. (3) and (4) represent the attempt to increase the survival of the organisms. Following the Darwinian theory of the survival of the fittest, all organisms have to increase their compatibility with their surroundings; here, Xbest represents the highest level of compatibility.
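The Mutualism update of Eqs. (3)–(5) can be sketched as follows. The function name is illustrative, and the benefit factors BF1 and BF2 are drawn randomly as 1 or 2, as in the original SOS description.

```python
import numpy as np

def mutualism(x_i, x_j, x_best, rng):
    """Mutualism phase of SOS: both organisms try to benefit (Eqs. (3)-(5))."""
    bf1 = rng.integers(1, 3)  # benefit factor BF1, randomly 1 or 2
    bf2 = rng.integers(1, 3)  # benefit factor BF2, randomly 1 or 2
    mutual_vector = (x_i + x_j) / 2.0                                       # Eq. (5)
    x_i_new = x_i + rng.random(x_i.shape) * (x_best - mutual_vector * bf1)  # Eq. (3)
    x_j_new = x_j + rng.random(x_j.shape) * (x_best - mutual_vector * bf2)  # Eq. (4)
    return x_i_new, x_j_new
```

In the full algorithm, the new positions are kept only if their fitness improves on the old ones (greedy acceptance).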

2.2.2 Commensalism

At this stage, one of the organisms benefits while the other neither gains nor loses from the relationship. For example, consider the relationship between remora fish and sharks: the remora sticks to the shark and feeds on leftover food, while the shark gains very little or no benefit [18]. As in the Mutualism phase, an organism Xj is randomly selected and associated with Xi. In this case, Xi strives to gain benefit, but Xj experiences neither benefit nor loss. Thus, Xi is updated by Eq. (6).

$$X_{{i{\text{new}}}} = X_{i} + {\text{rand}}\left( { - 1,1} \right) \times \left( {X_{{{\text{best}}}} - X_{j} } \right)$$
(6)

where \((X_{{\text{best}}} - X_{j})\) represents the benefit provided by Xj to increase the survival of Xi.
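A minimal sketch of the Commensalism update (Eq. (6)); the function name is illustrative:

```python
import numpy as np

def commensalism(x_i, x_j, x_best, rng):
    """Commensalism phase of SOS: only organism i benefits (Eq. (6))."""
    r = rng.uniform(-1.0, 1.0, size=x_i.shape)  # rand(-1, 1)
    return x_i + r * (x_best - x_j)             # (X_best - X_j): the benefit from X_j
```

Note that when Xj already coincides with Xbest, the update leaves Xi unchanged, since the benefit term vanishes.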

2.2.3 Parasitism

At this stage, one organism benefits from the relationship while the other suffers a loss. Take as an example the malaria parasite that the malaria mosquito transmits to the human body; when the parasite proliferates in the human body, it may cause the person to die. In the SOS algorithm, Xi (playing the role of the mosquito) creates an artificial parasite called the "Parasite_Vector." The Parasite_Vector is created in the search space by replicating Xi and randomly modifying some of its dimensions. An organism Xj is then selected randomly from the ecosystem and serves as the parasite's host. The Parasite_Vector attempts to take Xj's place in the ecosystem, and both are evaluated to measure their fitness. The Parasite_Vector reaches its highest fitness when it destroys Xj and takes its place in the ecosystem; the highest fitness for Xj is achieved when it resists the parasite, so that the parasite can no longer live in the ecosystem.
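The Parasitism phase can be sketched as below. The mutation-rate parameter and helper names are illustrative assumptions; the clone is mutated in a random subset of dimensions within the variable bounds, and a minimization objective is assumed.

```python
import numpy as np

def parasite_vector(x_i, lower, upper, rng, mutation_rate=0.5):
    """Clone X_i and randomly overwrite some dimensions to create the Parasite_Vector."""
    parasite = x_i.copy()
    mask = rng.random(x_i.shape) < mutation_rate  # dimensions to overwrite
    parasite[mask] = rng.uniform(lower, upper, size=x_i.shape)[mask]
    return parasite

def host_competition(parasite, x_j, fitness):
    """The parasite replaces the host X_j only if it is fitter (minimization assumed)."""
    return parasite if fitness(parasite) < fitness(x_j) else x_j
```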

2.3 Opposition-based learning

Opposition-based learning (OBL) is an algorithmic component inspired by the Eastern philosophical notions of the dual nature of the whole and the similarity of opposites [15, 37]. It generates extra points using a hyper-rectangle and a central-symmetry technique. Machine learning algorithms optimize estimated solutions and search for a solution in large spaces. Typically, when looking for the solution x of a given problem, they obtain an estimate \(\overline{x }\) of it. This estimate is not an exact solution and may be based on experience or, in general, on a random guess. In some cases the estimate \(\overline{x }\) already satisfies the required conditions; in others, one tries to reduce the difference between the estimated value and the optimal value, known directly or indirectly. However, if the workflow is framed as function approximation, there is a possibility of decreasing the computational complexity, and an optimal solution can be obtained [15].

In many cases, learning begins from a random point. For example, in artificial neural networks the weights are initialized randomly; in the genetic algorithm the population parameters are random; and in reinforcement learning the agent starts with a random policy. If the random guess is not far from the optimal solution, the search converges quickly. However, it is just as natural for a random start to land far from the existing solutions, in the worst case in the location "opposite" the optimal solution, in which case approximation, search, and optimization become considerably time-consuming. Logically, one should therefore search simultaneously in the current direction and in the opposite one: if x is the current guess, the first step is to compute its opposite number \(\overline{x }\).

If x \(\in\) R is a real number defined on a given interval x \(\in\) [a, b], then the opposite number \(\overline{x }\) is defined as \(\overline{x }=a+b-x\).

For example, if a = 0 and b = 1, the opposite number equals \(\overline{x }=1-x\), and if a = − ∞ and b =  + ∞, the opposite number equals \(\overline{x }=-x\), the usual symmetric form of a number. In optimization problems, the OBL method uses opposite numbers to search for the optimal point. Assume that \(X=({x}_{1},{x}_{2},\dots ,{x}_{n})\) is a search point in an n-dimensional space, with \({x}_{i}\in \left[{a}_{i},{b}_{i}\right]\) for \(i=\mathrm{1,2},\dots ,n\). The opposite point \({X}^{*}=({x}_{1}^{*},{x}_{2}^{*},\dots ,{x}_{n}^{*})\) is calculated from Eq. (7).

$$x_{i}^{*} = a_{i} + b_{i} - x_{i} ,\quad i = 1,2, \ldots ,n$$
(7)

The principle of the OBL method in optimization, based on X and \({X}^{*}\), is as follows: in each iteration, \({X}^{*}\) is calculated from X, and then f(X) and f(\({X}^{*}\)) are evaluated as the fitness values of X and \({X}^{*}\), respectively. In iterations where \(f(X)\ge f({X}^{*})\), X is kept as the solution vector. As a simple example of the OBL optimization process, the one-dimensional case (n = 1) is shown in Fig. 3, where K is the iteration counter.
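The opposite-point computation of Eq. (7) and the fitness comparison above can be sketched as follows. For concreteness, this sketch assumes a minimization objective, so the point with the lower objective value is kept; the function names are illustrative.

```python
def opposite(x, a, b):
    """Opposite point of x on the interval [a, b], Eq. (7): x* = a + b - x."""
    return a + b - x

def obl_select(x, a, b, objective):
    """Keep the better of a candidate and its opposite (minimization assumed)."""
    x_op = opposite(x, a, b)
    return x if objective(x) <= objective(x_op) else x_op
```

For example, on [0, 1] with the objective f(v) = v, the candidate x = 0.9 has opposite 0.1, and the opposite is kept.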

As shown in Fig. 3, as the number of iterations grows, the OBL method reduces the search range recursively. For example, if in the first iteration X is selected as the candidate, the right limit of the range is reduced to \(b^{\prime}_{1} = b_{1} /2\); likewise, if \({X}^{*}\) is selected as the candidate, the left limit of the interval becomes \(a^{\prime}_{1} = a_{1} /2\). This process eventually converges once X and \({X}^{*}\) are close enough. In optimization problems, because solutions are continually changing, a good way to maintain population diversity is critical; by applying OBL, this diversity can be achieved, improving the metaheuristic algorithm's performance [38,39,40].

2.4 Proposed method

Based on the studies in Sect. 2, it can be argued that the FA can be hybridized with other metaheuristic algorithms at the level of their biological processes in order to use its advantages in exploration and exploitation effectively. Therefore, an algorithm with strong biological processes was needed to improve the FA and to exploit exploration and exploitation effectively. The SOS algorithm is advantageous in solving optimization problems. In [41], the OBL method was used to overcome early convergence and prevent the FA from getting trapped in local optima.

One way to improve accuracy and escape local optima is to change the initial population of the firefly algorithm and its number of iterations. With the OBL method, the distribution of fireflies in the problem space increases, and so does the chance of finding the global optimum. Accordingly, the SOS algorithm's processes and the OBL method can be used to achieve the two main goals of this paper: removing weaknesses and increasing the exploration and exploitation of the FA. This section is divided into two subsections to illustrate the two proposed approaches. In Sect. 2.5, the first proposed approach presents a new IOFA to accelerate convergence and improve exploration in the FA. In Sect. 2.6, the second proposed approach (IOFASOS) uses the SOS algorithm's processes to improve the exploration and exploitation of the first approach; these two goals are pursued in the second approach with two new parameters, T and U.

2.5 Improved OBL firefly algorithm (IOFA)

The FA may start with an unsuitable population and may fail to perform significant exploration through to the end. Indeed, several studies have shown that the OBL method increases population diversity [42], improves the convergence rate [43, 44], improves accuracy and speed [45], and improves the efficiency and quality of solutions as well as the coverage of the search space and the production of more precise solutions [46]. The opposition-based method therefore has high potential to accelerate convergence and improve exploration in the FA. Using the OBL method, the blind randomness of the FA is reduced, and the search concentrates on promising points. Thus, the opposition-based method can be used during iterations to search the opposite space and to generate a suitable initial population for the FA. The proposed approach uses the opposition-based model for both purposes: to accelerate convergence and to improve exploration in the FA. In [41], there is also a version of the opposition-based method for improving the FA, which overcomes the FA's early-convergence problem by replacing the worst firefly; here, an entirely different version of the opposition-based method is used, as outlined below. First, generating an appropriate initial population for the FA using the opposition-based method is discussed. Figure 4 shows a flowchart of the proposed method.

The FA begins by randomly generating an initial population and then attempts to move toward an optimal solution. This step is called the initialization of the population, which is initialized according to Eq. (8).

$${\text{First}}\;{\text{Random}}\;{\text{Firefly}} \left( {{\text{FRF}}} \right) = \left[ {\begin{array}{*{20}c} {x{\text{rand}}_{1}^{1} } & {x{\text{rand}}_{2}^{1} } & \ldots & {x{\text{rand}}_{n}^{1} } \\ {x{\text{rand}}_{1}^{2} } & {x{\text{rand}}_{2}^{2} } & \ldots & {x{\text{rand}}_{n}^{2} } \\ \vdots & \vdots & \ddots & \vdots \\ {x{\text{rand}}_{1}^{Npop} } & {x{\text{rand}}_{2}^{Npop} } & \ldots & {x{\text{rand}}_{n}^{Npop} } \\ \end{array} \left| {\begin{array}{*{20}c} {f\left( {x{\text{rand}}^{1} } \right)} \\ {f\left( {x{\text{rand}}^{2} } \right)} \\ \vdots \\ {f\left( {x{\text{rand}}^{Npop} } \right)} \\ \end{array} } \right.} \right]$$
(8)
$$x{\text{rand}} = {\text{rand}}\left( {{\text{Min}},{\text{ Max}},D} \right)$$
(9)

In Eq. (8), FRF is the initial firefly population, randomly generated by the xrand function over the given interval. The xrand function is defined in Eq. (9), where Min is the minimum possible value of a variable, Max is its maximum possible value, and D is the dimension of the optimization problem. Once the initial population is randomly generated, each solution's opposite must be determined according to the opposition-based method. Before explaining the generation of opposition-based populations, some definitions should be clarified.

Definition 1: If x is a real number on the interval [a, b], its mirror number is represented by \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x}\) and defined as Eq. (10) [47].

$$\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} = a + b - x$$
(10)

Definition 2: If p(x1,x2,…,xn) is a point in n-dimensional space, where each xi \(\in\) [ai, bi] is a real number, then its mirror point is represented by \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{P}\) (\(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x}\)1,\(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x}\)2,…,\(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x}\)n) and defined as Eq. (11) [47]:

$$\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{x} _{i} = a_{i} + b_{i} - x_{i} ,\quad i = 1,2, \ldots ,n$$
(11)

Therefore, Eqs. (10) and (11) describe how opposition-based solutions are generated. The initial IOFA population can then be defined according to Eq. (12).

$${\text{Opposition - based FRF }}\left( {{\text{OFRF}}} \right) = \left[ {\begin{array}{*{20}c} {xop_{1}^{1} } & {xop_{2}^{1} } & \ldots & {xop_{n}^{1} } \\ {xop_{1}^{2} } & {xop_{2}^{2} } & \ldots & {xop_{n}^{2} } \\ \vdots & \ldots & \ldots & \ldots \\ {xop_{1}^{Npop} } & {xop_{2}^{Npop} } & \ldots & {xop_{n}^{Npop} } \\ \end{array} \left| {\begin{array}{*{20}c} {f\left( {xop^{1} } \right)} \\ {f\left( {xop^{2} } \right)} \\ \vdots \\ {f\left( {xop^{Npop} } \right)} \\ \end{array} } \right.} \right]$$
(12)
$$xop_{i} = {\text{Min}}\left( x \right)_{i} + {\text{Max}}\left( x \right)_{i} - {\text{FRF}}\left( {x_{i} } \right),\quad i = 1,2, \ldots ,D$$
(13)

In Eq. (12), OFRF represents the opposition-based population generated from the fireflies' initial random population, where each opposition-based individual is calculated using xop; the xop variable is defined in Eq. (13). In this way, the FA has two populations of solutions to start with: the main one, named FRF, is filled by random functions, and the second, called OFRF, is generated by the opposition-based method. Given the objective function, if Fitness(xop) ≥ Fitness(x), the solution in the OFRF population replaces the corresponding solution in the FRF population; otherwise, the algorithm starts with the solution from the FRF population. An initial random population and its mirror population are thus evaluated simultaneously so that the fitter point is used as the starting point.
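Putting Eqs. (8), (9), (12), and (13) together, the opposition-based initialization can be sketched as follows. A minimization objective is assumed here (the text's Fitness comparison is written with "larger is better"), and all names are illustrative.

```python
import numpy as np

def opposition_init(n_pop, dim, lo, hi, objective, rng):
    """Generate FRF randomly, build its opposite OFRF, and keep the better
    of each pair as the IOFA starting population."""
    frf = rng.uniform(lo, hi, size=(n_pop, dim))   # random population, Eqs. (8)-(9)
    ofrf = lo + hi - frf                           # opposite population, Eqs. (12)-(13)
    f_frf = np.array([objective(x) for x in frf])
    f_ofrf = np.array([objective(x) for x in ofrf])
    keep_opposite = f_ofrf < f_frf                 # where the opposite point is fitter
    return np.where(keep_opposite[:, None], ofrf, frf)
```

Each starting individual is therefore guaranteed to be at least as good as its mirror point.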

So far, the opposition-based method has been used to generate a suitable initial population and promote convergence at the beginning of the FA; however, the FA may still suffer from inadequate exploration and exploitation during iterations. Hence, the opposition-based method is also used to improve the FA's main process in the first proposed approach. The FA's essential process is Eq. (2): if one firefly is brighter than another, the less bright one moves toward the brighter firefly, and if there is no brighter firefly, there is no movement. In the proposed method, if there is no brighter firefly, the mirror point of the other firefly is compared with the current firefly in terms of brightness. The motion of the firefly and its attraction to the brighter opposition-based firefly are determined according to Eqs. (14) and (15):

$$Xop_{j} = {\text{Min}}\left( {X_{j} } \right)_{ } + {\text{Max}}\left( {X_{j} } \right)_{ } - X_{j}$$
(14)
$$X_{i} = X_{i} + B_{0} e^{{ - \gamma r_{ij}^{2} }} \left( {Xop_{j} - X_{i} } \right) + a\left( {{\text{rand}} - 1/2} \right)$$
(15)

In Eq. (15), a is the randomization parameter; rand is a random number in [0, 1]; B0 indicates the light source's attractiveness; \(r_{ij}\) is the distance between the fireflies; and the parameter γ determines the change in attractiveness. Xopj is a firefly generated by the opposition-based method, according to Eq. (14). Thus, the improved opposition-based FA incorporating these changes is presented in Fig. 5. According to the pseudo-code in Fig. 5, the time complexity of the proposed method remains \(O({n}^{2}t)\), where n is the population size and t is the number of iterations. The FA and SOS components dominate this cost; the complexity of the OBL step is linear and does not increase the overall complexity.
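The modified movement rule of Eqs. (14) and (15) can be sketched as below. The parameter values are illustrative, and the `brighter` flag indicates whether firefly j outshines firefly i (in which case the standard move of Eq. (2) applies).

```python
import numpy as np

B0, GAMMA, A = 1.0, 1.0, 0.2  # illustrative parameter values, not fixed by the paper

def iofa_move(x_i, x_j, lo, hi, brighter, rng):
    """If firefly j is brighter, apply the standard move of Eq. (2);
    otherwise move toward the opposite of x_j (Eqs. (14)-(15))."""
    target = x_j if brighter else lo + hi - x_j   # Eq. (14): Xop_j = Min + Max - X_j
    r = np.linalg.norm(x_i - target)
    beta = B0 * np.exp(-GAMMA * r ** 2)           # attractiveness term
    return x_i + beta * (target - x_i) + A * (rng.random(x_i.shape) - 0.5)
```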

2.6 The Hybrid of IOFA and SOS (IOFASOS)

The first proposed approach presented an improved opposition-based FA to improve the convergence rate and exploration capability. Nonetheless, the FA's main process in Eq. (2), and its modified version in Eq. (15), may sometimes fail to maintain the balance between exploitation and exploration. Therefore, in this subsection, according to Fig. 5, three robust SOS processes (Mutualism, Commensalism, and Parasitism) are applied to cover the potential weaknesses of IOFA and improve the balance between exploitation and exploration. A hybridization of IOFA and the SOS algorithm is thus created, and to balance exploitation and exploration in the first approach, some parameters must be set. Two new parameters, T and U, are used to control the number of fireflies altered by the SOS algorithm's natural processes. In the following, these new parameters, their use, and their impact on the first approach are outlined. The pseudo-code of the second proposed approach, which improves the FA using the SOS algorithm to solve continuous optimization problems, is presented in Fig. 6.

The advantages of the SOS algorithm over the FA include simplicity and the ability to control the convergence speed. In the FA, attractiveness and distance affect convergence, but they cannot control the convergence speed as easily as the SOS algorithm can. On the other hand, OBL in the FA is effective in increasing the convergence rate. Nevertheless, the main limitations of the FA are early convergence and getting caught in local minima. Therefore, the best positions of the fireflies should be changed in each iteration to prevent this. To this end, involving the operators of the SOS algorithm increases the diversity among members of the firefly population and reduces the likelihood of being trapped in local minima.
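The three SOS processes named above can be sketched in a few lines. This is an illustrative rendering of the standard mutualism, commensalism, and parasitism updates from the SOS literature, not the exact implementation used in the paper; the benefit factors BF1 and BF2 drawn from {1, 2} and the bounds `lb`/`ub` follow the usual SOS formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutualism(xi, xj, xbest):
    # Both organisms benefit from the interaction; BF1 and BF2 are
    # benefit factors drawn from {1, 2}.
    mutual = (xi + xj) / 2.0
    bf1, bf2 = rng.integers(1, 3), rng.integers(1, 3)
    xi_new = xi + rng.random(xi.size) * (xbest - mutual * bf1)
    xj_new = xj + rng.random(xj.size) * (xbest - mutual * bf2)
    return xi_new, xj_new

def commensalism(xi, xj, xbest):
    # One organism benefits while the other is unaffected.
    return xi + rng.uniform(-1.0, 1.0, xi.size) * (xbest - xj)

def parasitism(xi, lb, ub):
    # A parasite vector: a copy of xi with randomly chosen dimensions
    # resampled in [lb, ub]; it replaces a random host only if fitter.
    parasite = xi.copy()
    mask = rng.random(xi.size) < 0.5
    parasite[mask] = rng.uniform(lb, ub, np.count_nonzero(mask))
    return parasite
```

In each process, the new vector replaces the old one only if its fitness improves, which is what injects diversity without discarding good solutions.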

To conduct balanced exploration and exploitation in IOFA, more exploration should be performed on the population in the initial iterations; then, with each iteration, exploration should be reduced and exploitation increased. Accordingly, in the second proposed approach, the biological processes of the SOS algorithm are first applied to a larger share of the IOFA population and, toward the end, to a smaller share; that is, the SOS processes initially affect the entire firefly population in IOFA, and with each iteration, fewer individuals are affected. Thus, a parameter T was used in the second approach, whose value is updated across iterations according to Eq. (16):

$$T = T*(T - U)$$
(16)

In Eq. (16), U is a number in [0, 0.5], determined at initialization, and T's initial value is 1. In the second proposed approach, U is set close to zero so that exploration is reduced and exploitation increased gradually with each iteration. Once the value of T is computed, it is combined with the number of fireflies according to Eq. (17):

$${\text{NE}} = {\text{Round}}(T*N_{{{\text{fireflies}}}} )$$
(17)

In Eq. (17), \(N_{fireflies}\) is the number of fireflies in the FA, and NE is the number of fireflies in IOFA that are changed by the SOS algorithm's three processes. With each iteration, T's value is updated using Eq. (16); according to this equation, T decreases with each generation, based on which the proposed method maintains the balance between exploration and exploitation. On the other hand, applying the SOS processes to the fireflies adds diversity to the FA population and changes its convergence rate. Note that the SOS processes require at least four solutions; therefore, the condition shown in Fig. 6 was added. Hence, two different approaches to improve the FA have been presented; Section 5 shows the results of both proposed approaches.
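The interaction of Eqs. (16) and (17) can be made concrete with a short sketch of the resulting schedule. This is our own illustration under the stated settings (T initialized to 1, U close to zero, and the four-solution floor of Fig. 6); the function name `sos_schedule` is hypothetical:

```python
def sos_schedule(n_fireflies, max_iter, u=0.1):
    """Sketch of Eqs. (16)-(17): T starts at 1 and shrinks every iteration
    via T = T * (T - U); NE = round(T * N_fireflies) fireflies are passed
    through the SOS processes, and the SOS phase is skipped once fewer
    than four solutions would be involved (the condition of Fig. 6)."""
    t = 1.0
    schedule = []
    for _ in range(max_iter):
        ne = round(t * n_fireflies)          # Eq. (17)
        if ne < 4:                           # SOS needs at least 4 solutions
            ne = 0                           # skip the SOS phase
        schedule.append(ne)
        t = t * (t - u)                      # Eq. (16)
    return schedule

print(sos_schedule(30, 8, u=0.1))
```

With a population of 30 and U = 0.1, the number of SOS-affected fireflies shrinks monotonically from the full population toward zero, which is exactly the exploration-to-exploitation handover described above.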

2.7 Experiment results

The two proposed approaches and the comparative algorithms were simulated in MATLAB on a system with an Intel Core i5 processor at 2.30 GHz and 10 GB of RAM. Twenty-four mathematical benchmark functions, presented in Sect. (4.1), were used for evaluation. The two proposed approaches and other basic metaheuristic algorithms, namely HS [48], ABC [49], FA [17], and the SOS algorithm [18], were implemented on the benchmark functions for comparison. The results of comparing all the algorithms with the two proposed approaches are presented graphically, and Sect. (4.2) gives the initial settings and parameters with which the two proposed approaches and the comparative algorithms were initialized. Two experiments were conducted in Sect. (4.3): in the first, the best solution found in each iteration for each function was illustrated graphically and separately for the algorithms mentioned in the literature; in the second, the two proposed approaches and the other algorithms were compared in a table in terms of the best and worst values of the objective function, the mean of the objective function for the total population, and other statistical parameters.

In this paper, four measurement criteria were used to evaluate the results: the worst, best, and mean fitness values and the standard deviation (SD). They are mathematically defined as follows:

$${\text{Best}} = \mathop {\max }\limits_{i = 1}^{{t_{{\text{Max}}}}} BS_{i}$$
(18)
$${\text{Worst}} = \mathop {\min }\limits_{i = 1}^{{t_{{\text{Max}}}}} BS_{i}$$
(19)
$${\text{Mean}} = \frac{1}{{t_{{\text{Max}}}}}\mathop \sum \limits_{i = 1}^{{t_{{\text{Max}}}}} BS_{i}$$
(20)
$${\text{SD}} = \sqrt {\frac{{\mathop \sum \nolimits_{i = 1}^{{t_{{\text{Max}}}}} \left(BS_{i} - {\text{Mean}}\right)^{2} }}{{t_{{\text{Max}}}}}}$$
(21)

where \(t_{\text{Max}}\) and \(BS_{i}\) are the maximum number of iterations and the best score obtained up to iteration i, respectively.
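Equations (18)-(21) can be computed directly from the per-iteration best scores; a minimal sketch follows, where the function name `run_statistics` is illustrative:

```python
import math

def run_statistics(best_scores):
    """Eqs. (18)-(21): summary statistics over the best scores BS_i
    recorded at each of the t_Max iterations."""
    t_max = len(best_scores)
    best = max(best_scores)                  # Eq. (18)
    worst = min(best_scores)                 # Eq. (19)
    mean = sum(best_scores) / t_max          # Eq. (20)
    # Eq. (21): population standard deviation around the mean of Eq. (20)
    sd = math.sqrt(sum((bs - mean) ** 2 for bs in best_scores) / t_max)
    return best, worst, mean, sd

best, worst, mean, sd = run_statistics([3.0, 1.0, 2.0, 2.0])
```

For the toy sequence [3.0, 1.0, 2.0, 2.0] this yields best 3.0, worst 1.0, mean 2.0, and SD \(\sqrt{0.5}\).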

2.8 Standard test functions

Standard mathematical functions, called benchmark functions, are used to evaluate optimization and metaheuristic algorithms. The benchmark functions used to evaluate the two proposed approaches are introduced with a comprehensive description in [50]. For performance evaluation and comparison, we use 24 benchmark functions; Table 1 provides a general description of each function and its numerical range.
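Table 1 itself is not reproduced here; as an illustration of the unimodal/separable (U/S) and multimodal/separable (M/S) categories it lists, two widely used benchmarks from such suites can be defined as follows. These are generic examples, not necessarily the exact F1-F24 of this paper:

```python
import numpy as np

def sphere(x):
    # Unimodal, separable; global minimum f(0, ..., 0) = 0,
    # typically evaluated on a range such as [-100, 100]^n.
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def rastrigin(x):
    # Multimodal, separable; global minimum f(0, ..., 0) = 0,
    # typically evaluated on [-5.12, 5.12]^n.
    x = np.asarray(x, dtype=float)
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))
```

Unimodal functions such as the sphere test exploitation and convergence speed, while multimodal functions such as Rastrigin test the ability to escape local minima.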

2.9 Initial parameters

At the beginning, it was necessary to set and initialize some fundamental parameters of the algorithms, including the population size, the dimension of each solution, the minimum and maximum values of each dimension, and the number of generation iterations of each algorithm. In all the simulations and comparisons of the metaheuristic algorithms, some of the algorithms' quantitative parameters were set equal to enable a fair comparison. The number of objective function evaluations is one of the criteria for which all algorithms must have the same value so that each algorithm's actual performance can be studied; it was set to 2000 in the simulation section. The comparative algorithms' other quantitative and qualitative parameters are given in Table 2.

Previous studies have demonstrated the performance of a new model by comparing it against algorithms proposed in earlier years. Therefore, the HS, ABC, FA, and SOS algorithms were used for comparison and evaluation in this paper.

2.10 Evaluation of the proposed methods

In this subsection, the two proposed approaches were evaluated in two ways. In the first experiment, the two proposed approaches and the other comparative algorithms were run with the same number of objective function evaluations (3000 here), and the results are shown graphically; each plot shows only the best solution found so far as a function of objective function evaluations. The second experiment concerns the statistical criteria of the algorithms' total populations, shown in Table 3: the best and worst values of the objective function, the mean of the objective function for the total population, and other statistical parameters are given in tables. The two proposed approaches were implemented on benchmark functions of different dimensions, including 2D, 4D, 10D, and 30D functions, to demonstrate how they perform as the dimension varies.

The results of implementing the two proposed approaches and the other basic algorithms on ten 2D benchmark functions are shown in Figs. 7, 8, and 9. Based on these results, both proposed approaches perform relatively better than the other metaheuristic algorithms in terms of convergence on the two-dimensional benchmark functions. In general, the two proposed approaches, like all the metaheuristic algorithms, successfully solve small-dimensional problems.

Figure 7 shows that the convergence speed of the IOFASOS model on the F1, F2, F3, and F4 functions is higher than that of the other algorithms. In the IOFA model, the search begins from more promising points of the search space thanks to OBL. On the other hand, the IOFASOS model uses SOS to intensify exploration and refine solutions to find the best spots in the search space.

The implementation of the two proposed approaches and the other basic algorithms on four 4D benchmark functions is shown in Fig. 10 for comparison. Based on these results, it can be concluded that both proposed approaches outperformed the other metaheuristic algorithms in terms of the convergence rate on the 4D benchmark functions; overall, the second proposed approach performs more satisfactorily on these functions.

The implementation of the two proposed approaches and the other basic algorithms on the 10D benchmark functions is shown in Figs. 11 and 12. Based on the results in these figures, both proposed approaches relatively outperformed the other metaheuristic algorithms in terms of the convergence rate on these benchmark functions and converged well toward the objective on higher-dimensional functions. Moreover, the second proposed approach shows better results than the first.

The proposed algorithms performed better on the F13 to F16 functions because these functions have a smaller range; in particular, the IOFASOS algorithm was able to find the best points and brought the function values close to the optimal point.

The results of implementing the two proposed approaches and the other basic algorithms on the 30D benchmark functions are shown in Fig. 13. Based on these results, it can be deduced that when the dimension increases to 30, some algorithms, such as HS and ABC, show weaker performance, whereas the two proposed approaches maintain their performance in higher dimensions. Moreover, the second proposed approach shows better results than the other metaheuristic algorithms in higher dimensions.

This test further illustrates the effect of the algorithms on the population as a whole. The results are compared in Table 3, where the two proposed approaches are compared with the SOS, ABC, FA, and HS algorithms in low, medium, and high dimensions.

Table 3 shows that the IOFASOS method has a high ability to find the optimal points. The SOS algorithm finds the optimal solution for the F5, F8, F19, and F21 functions, and the HS and ABC algorithms find optimal solutions for the F8 function. Nevertheless, the IOFASOS method has been able to find the best solutions overall by using SOS and OBL.

For a better evaluation in this experiment, the number of objective function evaluations was set to 3000 so that all algorithms have the same evaluation budget and can be compared directly. Considering the results of implementing the two proposed approaches and the SOS, ABC, HS, and FA algorithms in low, medium, and high dimensions, as shown in Table 3, the two proposed approaches perform well in terms of statistical criteria such as the worst, mean, and standard deviation. These criteria indicate that the two proposed approaches change all the solutions in the population and converge them toward the objective. Furthermore, the second proposed approach outperforms the first approach and the other algorithms on most benchmark functions, which indicates that the opposition-based generation of new, opposite fireflies has affected the entire population and improved the performance of the FA.

To further evaluate the two proposed approaches, they were compared with the algorithms Qi et al. presented in [51] in 2017, two new types of butterfly algorithms, i.e., ABO1 and ABO2. Qi et al. implemented ABO1, ABO2, and basic metaheuristic algorithms, including ABC [49], GA [52], PSO [53], and cooperative PSO (CPSO) [54], on 22 benchmark functions; those 22 functions are the same ones reported in Table 1. The authors in [51] considered the initial population size for all algorithms to be 20 and the maximum number of objective function evaluations to be 100,000. Moreover, the C1 and C2 learning rates were both set to 2.05 for the CPSO and PSO algorithms, the compression factor X was set to 0.729, and the division factor for CPSO was set equal to the dimension. In addition, the GA used a one-point crossover operator, with the crossover probability set to 0.95 and the mutation probability set to 0.1. In this test, the parameters of the two proposed approaches were set with population = 20 and maximum objective function evaluations = 100,000, so that a proper comparison could be obtained. The results are reported in Table 4, which shows that the improvement of IOFA on the F14 function compared to GA, CPSO, PSO, ABC, ABO2, and ABO1 was 100%. Furthermore, the results of the IOFA and IOFASOS methods on the F16 function were equal to zero, which indicates that the proposed algorithms have a high power of exploration and exploitation.

Table 4 compares the two proposed approaches with other robust algorithms, i.e., PSO, cooperative PSO, ABC, and GA, implemented on the 30D benchmark optimization functions. To demonstrate the two proposed approaches' performance and evaluate them more precisely, the algorithms are also evaluated in Table 4 on 20D functions. The results showed that the second approach outperformed the other basic metaheuristic algorithms as well as the first approach on higher-dimensional problems, namely in dimensions 20 and 30. Moreover, the second approach continued to improve the results significantly as the dimension of the problem increased.

2.11 Conclusion and future works

The FA is a metaheuristic optimization algorithm inspired by the flashing behavior of fireflies in nature. Studies on the FA have revealed the strengths and weaknesses of this algorithm; therefore, researchers have presented various hybrid, adaptive, modified, and improved versions of it in recent years to solve various optimization problems. In this paper, two approaches to improve the FA were proposed using the opposition-based method and the natural processes of the SOS algorithm. In the first proposed approach, IOFA, a new method was proposed to accelerate the convergence rate and improve the FA's exploration capability. In the second approach (IOFASOS), the processes of the SOS algorithm were used to improve the exploration and exploitation of the first proposed approach; these two objectives were controlled in the second approach with two new parameters, T and U. After introducing the two proposed approaches, 24 mathematical benchmark functions were introduced for their evaluation, and the two proposed approaches were compared with the HS, ABC, FA, and SOS algorithms in all experiments. The experiments were carried out on functions with 2 to 30 dimensions, in two parts, i.e., in terms of convergence rate and statistical criteria. The results in different dimensions showed that the first proposed approach performs better in low and medium dimensions and relatively moderately in higher dimensions, while the second proposed approach performed better overall. Moreover, the second approach provided better results than the other algorithms and the first proposed approach in terms of statistical criteria such as the best and worst values of the objective function, the mean of the total population's objective function, and other statistical parameters. Future work will evaluate the proposed algorithms on more complex problems and improve their operators through hybrid algorithms.

Fig. 1 Pseudo-code of the FA
Fig. 2 The pseudo-code of the SOS algorithm
Fig. 3 Using the OBL method in optimizing a single one-dimensional problem
Fig. 4 The flowchart of the proposed method
Fig. 5 The pseudo-code of the first proposed approach for IOFA
Fig. 6 The pseudo-code of the second proposed approach
Fig. 7 The convergence rate of the two proposed approaches on functions 1–4 with two dimensions
Fig. 8 The convergence rate of the two proposed approaches on functions 5–8 with two dimensions
Fig. 9 The convergence rate of the two proposed approaches on functions 9–10 with two dimensions
Fig. 10 The convergence rate of the two proposed approaches on 4D benchmark functions 11–14
Fig. 11 The convergence rate of the two proposed approaches on 10D functions 15–18
Fig. 12 The convergence rate of the two proposed approaches on 10D functions 19–20
Fig. 13 The convergence rate of the two proposed approaches on 30D functions 21–24

Table 1 Standard benchmark functions (Type: M: multimodal, N: non-separable, U: unimodal, S: separable) for evaluating the two proposed approaches
Table 2 Initialization of the parameters of the two proposed approaches and other algorithms
Table 3 Evaluation of the two proposed approaches based on statistical criteria
Table 4 Evaluation of the proposed algorithm and its comparison with other algorithms reported in 30D