1 Introduction

No single optimization technique can effectively solve every form of optimization problem [1, 2]. Consequently, one of the most prominent challenges in the optimization community is to find appropriate optimization methods for solving complicated non-linear optimization problems. Optimization began with many conventional deterministic approaches. However, if the objective function is discontinuous, a gradient-based algorithm will not perform well. In such cases, gradient-free nature-inspired optimization algorithms are the superior alternative for solving these types of optimization problems.

Moreover, some prevalent gradient-free nature-inspired optimization algorithms found in the literature are GA [3]; PSO [4], based on the foraging behavior of bird flocks; DE [5], based on the Darwinian theory of evolution; BBO [6], based on mathematical models of biogeography; HS [7], based on the musical process of searching for a perfect state of harmony; GSA [8], based on the law of gravity and mass interactions; WCA [9], based on observation of the water cycle and how rivers and streams flow to the sea; BSA [10], based on the three basic and well-known operators of evolutionary algorithms, namely selection, mutation, and crossover; SOS [11], based on the symbiotic relationships among organisms in an ecosystem; and so on. A huge number of real-world applications of these algorithms can be found in the literature across almost all branches of the humanities, science, and technology. Some other types of optimization problems on networks found in the literature can be seen in references [44,45,46]. The critical advantages of these algorithms are their information-exchange mechanisms and collaborative behavior, which help them explore the search space more competently, avoid skipping over good solutions, and escape being stuck in local optima.

Cheng and Prayogo proposed Symbiotic Organisms Search (SOS) [11], a robust and efficient metaheuristic algorithm for numerical optimization and engineering problems. SOS mimics the symbiotic interaction strategies that organisms have developed to survive and propagate within the ecosystem [11].

Recently, SOS has been applied to tackle different real-world applications. Although SOS has demonstrated adequate capability in terms of exploration and exploitation, in some cases it still suffers from stagnation at local optima and an improper balance between exploration and exploitation. Consequently, several efforts have been made in the literature to improve the search mechanism of SOS. Some of them are:

Umam and Santosa [12] proposed the SOS-VNS strategy, which hybridizes SOS with Variable Neighborhood Search (VNS) so that it can be used to solve the asymmetric traveling salesman problem (ATSP). SOS-VNS combines SOS and VNS to improve the convergence and computing time of SOS.

Ref. [13] proposes a new adaptive chaotic symbiotic organisms search algorithm (A-CSOS) based on global competitive ranking. The aim of A-CSOS is to reduce entrapment in local optima and improve the convergence of SOS in order to find better solutions for increasingly complex, nonlinear, and multi-modal optimization problems, for example, ORPD [13].

Ref. [14] proposes an improved variant of SOS (ISOS) based on quasi-oppositional-based learning and an efficient alternative to the parasitism phase. To make the algorithm perform a comprehensive search around the best solution and thereby further improve the search behavior of ISOS, a chaotic local search based on the piecewise linear chaotic map is coupled into the algorithm [14]. The goal of this strategy is to avoid the over-exploration issue of the original parasitism phase, which causes an undesirably long search in inferior regions of the search space once the solution is already refined [14].

To estimate the parameters of smooth and non-smooth fuel cost functions and improve the solution accuracy of economic dispatch problems, the improved symbiotic organisms search (R-SOS) algorithm is proposed [15].

Using a multi-group coordination technique and quantum behavior, a new improved variant of the SOS algorithm called MQSOS is suggested [16]. MQSOS has good speed and convergence capability and performs strongly with multiple populations on functional problems [16].

An improved variant of the metaheuristic optimization algorithm called Opposition-based Symbiotic Organisms Search (OSOS) is proposed to tackle the problem of color image segmentation [17]. OSOS is applied to multilevel image thresholding for the segmentation of color images, using opposition-based learning principles to improve the performance of standard SOS [17].

An Enhanced Symbiotic Organisms Search (ESOS) algorithm based on local search improvement strategies is suggested to solve the unrelated parallel machine manufacturing scheduling problem with setup times [18].

A new complex-valued encoding symbiotic organisms search (CSOS) algorithm is proposed by introducing the concept of high diploid coding [19].

A Modified SOS (MSOS) algorithm [20] is proposed to improve the accuracy and effectiveness of its search (exploitation) along with its exploration, by adding an adaptive benefit factor and a modified parasitism vector.

However, only a few efforts have been made to deal with the time-cost trade-off problem (TCTP) in large-scale construction projects, and the existing optimization methods are somewhat limited by the difficulty of parameter tuning.

Since the time-cost trade-off problem (TCTP) [21] is known to be NP-hard, a new variant of the Symbiotic Organisms Search (SOS) algorithm that contains no control parameters, called DSOS (Discrete Symbiotic Organisms Search), is proposed; it generates the parasite organism using a heuristic rule based on the network levels.

By combining Quasi-Opposition-Based Learning (QOBL) and Chaotic Local Search (CLS) with SOS, QOCSOS has been proposed for higher-quality solutions and quicker convergence [22].

Nama et al. [23] proposed an Improved Symbiotic Organisms Search (ISOS) algorithm to boost the efficiency of the original algorithm, adding a random weighted reflection vector to enhance the search capability of SOS.

Nama et al. [24] incorporated Simple Quadratic Interpolation (SQI) into SOS to balance the exploration capability of SQI with the exploitation capacity of SOS, as well as to enhance the robustness of the algorithm.

In [25], HSOS [24] is used to find the seismic bearing capacity of a shallow strip footing under pseudo-dynamic conditions, formulated by the method of limit analysis.

By introducing adaptive benefit factors into the basic SOS algorithm, three modified versions of the SOS algorithm are proposed to improve its efficiency. These establish a good balance between exploration and exploitation of the search space [26].

Therefore, with the aid of an updated benefit factor, an updated parasitism phase, and a random weighted number, the present research aims to improve the search component of conventional SOS. The updated benefit factor is introduced into SOS to reduce entrapment in local optima and to improve the convergence of SOS. The updated parasitism phase is implemented to improve both the exploration and exploitation potential of traditional SOS, so that an acceptable balance can be formed between exploration and exploitation. The random weighted number is used to balance the exploitation capacity and to increase the convergence rate. To the best of our knowledge, there is no variant comparable to the one created in this paper. Within the proposed algorithm, a greedy selection approach is also utilized to draw SOS toward promising regions of the search space. The proposed algorithm is tested on twenty benchmark test problems with dimension 100. Various success tests are also used to validate the significance of the enhanced outcomes. The comparisons show that, in addition to a quicker convergence speed, the suggested approach reaches the global best with higher accuracy. The effect of the suggested algorithm is also explored in the paper on several engineering optimization problems. The findings on these problems likewise confirm the superior search performance of the suggested algorithm compared with the other comparative algorithms.

The remainder of this paper is organized as follows: Section 2 presents the traditional version of SOS and its search mechanism. In Section 3, the proposed mISOS is described in detail. The experimentation and validation of the proposed algorithm on benchmark test problems are conducted in Section 4. The performance of the proposed algorithm on actual engineering problems is also discussed in Section 5. Finally, the conclusion of the work is provided in Section 6, along with some future research directions.

2 Overview of basic SOS algorithm

In nature, many species maintain relationships that are important for survival and development. Such partnerships may be helpful or harmful. In a beneficial relationship, species live together for mutual advantage and survival, which makes it quite different from other natural relationships. Considering the natural interactions among organisms struggling for survival in an ecosystem, Cheng and Prayogo [11] presented a convincing and unassuming metaheuristic algorithm called SOS, which simulates the interactive behavior among organisms found in nature. In an ecosystem, a community of organisms corresponds to the SOS algorithm population. Each organism represents one individual candidate solution to the optimization problem, and each organism in the ecosystem is associated with an explicit fitness value, i.e., an objective function value that reflects its degree of adaptation to the defined goal. SOS explores the search space through seven main components: initialization, the mutualism phase, Selection-I, the commensalism phase, Selection-II, the parasitism phase, and Selection-III.

Initialization

At the initial stage, the initial organisms are generated at random, uniformly within the search space, according to Eq. (1).

$$ {O}_{i,d}={O}_{lb,d}+\mathit{\operatorname{rand}}\left(0,1\right)\ast \left({O}_{ub,d}-{O}_{lb,d}\right) $$
(1)

Here Olb, d and Oub, d are the lower and upper bounds of the ith organism, respectively, and d represents the dimension of the problem.
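To make the initialization concrete, a minimal NumPy sketch of Eq. (1) is given below; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def initialize_ecosystem(pop_size, dim, lb, ub, seed=None):
    """Eq. (1): draw each organism uniformly between the lower bound
    O_lb,d and the upper bound O_ub,d, one value per dimension."""
    rng = np.random.default_rng(seed)
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    # rand(0, 1) is an independent uniform draw for every entry
    return lb + rng.random((pop_size, dim)) * (ub - lb)
```

Each row of the returned array is one organism of the ecosystem.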

Mutualism phase

In a mutualistic relationship, an organism Oi interacts with a randomly selected organism Oj from the ecosystem, and both organisms interact with the aim of increasing their mutual survival capability within the ecosystem, according to Eqs. (2), (3), and (4).

$$ {O}_{i,d}^{new}={O}_{i,d}+\mathit{\operatorname{rand}}\left(0,1\right)\ast \left({O}_{best}-{M}_V\ast BF1\right) $$
(2)
$$ {O}_{j,d}^{\mathrm{n} ew}={O}_{j,d}+\mathit{\operatorname{rand}}\left(0,1\right)\ast \left({O}_{best}-{M}_V\ast BF2\right) $$
(3)
$$ {M}_V=0.5\ast \left({O}_{i,d}+{O}_{j,d}\right) $$
(4)

Here Obest is the best organism in the ecosystem. The benefit factors (BF1 and BF2) are randomly set to either 1 or 2. These factors reflect the level of benefit to each organism, i.e., whether the interaction benefits an organism partially or fully.

At the end of the mutualism phase, the Selection-I operator is applied by comparing the objective function value of the new candidate organism with that of the corresponding old organism, using Eqs. (5) and (6).

$$ {O}_i=\left\{\begin{array}{c}{O}_i^{new}\kern5em if\ f\left({O}_i^{new}\right)<f\left({O}_i\right)\\ {}{O}_i\kern7.25em Otherwise\kern4.75em \end{array}\right. $$
(5)
$$ {O}_j=\left\{\begin{array}{c}{O}_j^{new}\kern4.5em if\ f\left({O}_j^{new}\right)<f\left({O}_j\right)\kern6.25em \\ {}{O}_j\kern6em Otherwise\kern8.5em \end{array}\right. $$
(6)
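The mutualism phase and its greedy Selection-I can be sketched as follows; this is an illustrative NumPy rendering of Eqs. (2)-(6), with assumed function names.

```python
import numpy as np

def mutualism_phase(eco, fit, f, seed=None):
    """One pass of the mutualism phase with greedy Selection-I
    (Eqs. 2-6). `eco` holds organisms row-wise, `fit` their objective
    values, and `f` is the objective function to minimize."""
    rng = np.random.default_rng(seed)
    n, d = eco.shape
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])  # random partner
        best = eco[np.argmin(fit)].copy()
        mv = 0.5 * (eco[i] + eco[j])                     # Eq. (4), mutual vector
        bf1, bf2 = rng.integers(1, 3, size=2)            # BF1, BF2 in {1, 2}
        oi_new = eco[i] + rng.random(d) * (best - mv * bf1)  # Eq. (2)
        oj_new = eco[j] + rng.random(d) * (best - mv * bf2)  # Eq. (3)
        fi_new, fj_new = f(oi_new), f(oj_new)
        if fi_new < fit[i]:                              # Eq. (5), greedy selection
            eco[i], fit[i] = oi_new, fi_new
        if fj_new < fit[j]:                              # Eq. (6)
            eco[j], fit[j] = oj_new, fj_new
    return eco, fit
```

Because the selection is greedy, every organism's fitness is non-increasing across a pass.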

Commensalism phase: Similar to the mutualism phase, an organism Oi interacts with a randomly selected organism Ok from the ecosystem, but only Oi gains a benefit from the relationship, according to Eq. (7),

$$ {O}_{i,d}^{new}={O}_{i,d}+\mathit{\operatorname{rand}}\left(-1,1\right)\ast \left({O}_{best}-{O}_{k,d}\right) $$
(7)

At the end of the commensalism phase, the Selection-II operator is applied by comparing the objective function value of the new candidate organism with that of the corresponding old organism, using Eq. (8).

$$ {O}_i=\left\{\begin{array}{c}{O}_i^{new}\kern5em if\ f\left({O}_i^{new}\right)<f\left({O}_i\right)\\ {}{O}_i\kern6.25em Otherwise\kern4.75em \end{array}\right. $$
(8)
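A minimal sketch of the commensalism phase with Selection-II (Eqs. 7-8) follows; names are illustrative.

```python
import numpy as np

def commensalism_phase(eco, fit, f, seed=None):
    """One pass of the commensalism phase with greedy Selection-II
    (Eqs. 7-8): only organism O_i is updated; its partner O_k is
    left untouched."""
    rng = np.random.default_rng(seed)
    n, d = eco.shape
    for i in range(n):
        k = rng.choice([m for m in range(n) if m != i])  # random partner
        best = eco[np.argmin(fit)].copy()
        # Eq. (7): rand(-1, 1) scales the step toward the best organism
        oi_new = eco[i] + rng.uniform(-1, 1, d) * (best - eco[k])
        fi_new = f(oi_new)
        if fi_new < fit[i]:                              # Eq. (8)
            eco[i], fit[i] = oi_new, fi_new
    return eco, fit
```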

Parasitism phase

By duplicating organism Oi, an artificial parasite called Op, d is produced according to Eq. (9). Once Op, d has been produced, the Selection-III operator determines whether Op, d

$$ {O}_{p,d}=\left\{\begin{array}{c}{O}_{lb,d}+\mathit{\operatorname{rand}}\left(0,1\right)\ast \left({O}_{ub,d}-{O}_{lb,d}\right)\kern2em if\ a<b\\ {}{O}_{i,d}\kern2em Otherwise\end{array}\right.;\kern0.5em a,b\in \left(0,1\right) $$
(9)

replaces (kills) the randomly selected organism Oi, d, utilizing Eq. (10).

$$ {O}_i=\left\{\begin{array}{c}{O}_{p,d}\kern2em if\ f\left({O}_{p,d}\right)<f\left({O}_i\right)\\ {}{O}_i\kern2em Otherwise\end{array}\right. $$
(10)
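The parasitism phase of Eqs. (9)-(10) can be sketched as below. Two points are interpretive assumptions: the `a < b` test is applied per dimension, and, following Eq. (10) as written, the parasite challenges the duplicated organism itself (in some SOS descriptions it challenges a different randomly chosen host instead).

```python
import numpy as np

def parasitism_phase(eco, fit, f, lb, ub, seed=None):
    """One pass of the parasitism phase (Eqs. 9-10). The parasite is a
    copy of O_i with some dimensions re-sampled inside the search
    bounds; it replaces its host if it is fitter."""
    rng = np.random.default_rng(seed)
    n, d = eco.shape
    for i in range(n):
        parasite = eco[i].copy()
        a, b = rng.random(d), rng.random(d)
        mask = a < b                       # Eq. (9): which dimensions mutate
        parasite[mask] = lb[mask] + rng.random(mask.sum()) * (ub[mask] - lb[mask])
        fp = f(parasite)
        if fp < fit[i]:                    # Eq. (10), Selection-III
            eco[i], fit[i] = parasite, fp
    return eco, fit
```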

The flowchart of the SOS algorithm is shown in Fig. 1.

Fig. 1.

Flowchart of the SOS algorithm

3 The procedure of the proposed mISOS algorithm

The search efficiency of conventional SOS is upgraded in this section by combining modified adaptive benefit factors, a modification of the parasitism phase, and a random weighted reflection vector. These alterations are described and discussed in detail below.

3.1 Modification of beneficial factor

In the basic SOS algorithm, the value of the benefit factor is taken as 1 or 2, indicating that an organism benefits partially or fully from the interaction. The benefit factors are, however, the main control element acting on the mutual vector in the SOS mutualism phase: BF1 and BF2 are stochastically set to either one or two, which determines whether the organism benefits partially or fully from the interaction [20]. These benefit factors are heuristic in nature, because one organism may benefit partially or fully compared with another [11]. The new organisms \( {\mathrm{O}}_{\mathrm{i}}^{\mathrm{new}} \) and \( {\mathrm{O}}_{\mathrm{j}}^{\mathrm{new}} \) enter the ecosystem if their objective function values are better than those of the corresponding pre-interaction organisms \( {\mathrm{O}}_{\mathrm{i}}^{\mathrm{old}} \) and \( {\mathrm{O}}_{\mathrm{j}}^{\mathrm{old}} \). This process is equivalent to greedy selection.

On the other hand, the organisms Oi and Oj can benefit partially or fully from the mutual vector. A larger benefit factor produces larger search steps, which favors exploration but diminishes the convergence of the algorithm [26]; it can also carry the search past nearby promising values, which decreases the exploitation ability of the algorithm [26, 27]. Therefore, the benefit factor should vary within the range of 1 to 2. This motivates us to change the benefit factors (BF1 and BF2) adaptively, which yields good convergence, superior search capability, and harmony between exploration and exploitation. The modified forms of the benefit factors (aBF1 and aBF2) are shown in Algorithm 1.

Algorithm 1 (presented as an image in the original)

Here, fi, fj, \( {f}_{wors{t}_{eco}} \) and \( {f}_{bes{t}_{eco}} \) are the objective function values of the ith, jth, worst, and best organisms, respectively. Thus, during execution, the values of the benefit factors are updated repeatedly. In the mutualism phase, because the adaptive benefit factors are used, the term MV ∗ aBF offers a better balance between exploring and exploiting the relationship between the organisms Oi and Oj than the original BF1 and BF2. Hence, the mutualism component (Obest − MV ∗ aBF) can produce good diversity with faster convergence. Finally, the new organisms \( {\mathrm{O}}_{\mathrm{i}}^{\mathrm{new}} \) and \( {\mathrm{O}}_{\mathrm{j}}^{\mathrm{new}} \) attempt to minimize (for a minimization problem) the fitness value over the entire ecosystem. Accordingly, aBF drives the suggested method to investigate unexplored regions of the search space when an organism (‘i’ or ‘j’) is far from the best organism, and it helps increase the convergence rate. This demonstrates that the proposed method with aBF moves toward the globally optimal solution, with a good balance between exploration and exploitation.
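Since Algorithm 1 appears only as a figure, the following is a hypothetical sketch of an adaptive benefit factor consistent with the quantities named above (fi, fbest, fworst); the normalized rule mapping fitness into [1, 2] is an assumption made purely for illustration, not the paper's exact formula.

```python
def adaptive_benefit_factor(f_i, f_best, f_worst):
    """Illustrative adaptive benefit factor (assumed form, not the
    paper's Algorithm 1): maps an organism's relative fitness into
    [1, 2], so organisms far from the best organism receive a larger
    benefit factor and hence take larger steps."""
    if f_worst == f_best:          # degenerate ecosystem: fall back to 1
        return 1.0
    return 1.0 + (f_i - f_best) / (f_worst - f_best)
```

Under this rule the best organism gets aBF = 1 (pure exploitation) and the worst gets aBF = 2 (strongest exploratory push).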

3.2 Modification of parasitism phase

The parasitism step is primarily responsible for the exploration capability of SOS. However, over-exploration also results in higher computational cost: a significant number of new candidate solutions are rejected because of inferior fitness values compared with the previous ones. In the parasitism phase of basic SOS, the exploitation rate is weak, since the parasite vector is produced in the search space by combining structure variables with randomly generated variables [20]. The primary purpose of the modification is to remove this drawback of low exploitation ability in the parasitism phase. Numerous studies show that the exploitation ability of the parasitism phase in SOS is considerably low compared with its explorative ability [20, 26, 27]. Expanding the number of function evaluations (FEs) also increases the convergence time. Moreover, many studies demonstrate improvements in the efficiency of the algorithm when the parasitism phase is changed [20, 26, 27, 28]. Along these lines, this phase is upgraded by modifying the parasitism stage, attempting to improve its exploitation ability while maintaining the global search over the domain space. Our motivation is thus to set an ideal balance between the exploration and exploitation of the algorithm. In the suggested algorithm, exploration is empowered using the best solution and one current solution. This procedure also enhances the diversity of the population and of the solutions. The suggested methodology allows the algorithm to investigate various areas of the search space simultaneously, avoid population concentration in one region, and avoid premature convergence. Algorithm 2 represents the modification of the parasite vector.

In Algorithm 2, \( {O}_{\mathit{\max},d}^{best} \) and \( {O}_{\mathit{\min},d}^{best} \) are the maximum and minimum components of the best organism, respectively. From the algorithm, it can be seen that if a is less than b, the selected dimension is re-sampled within the search boundary; otherwise, the dimension is modified with the help of the best organism. In this case, if the fitness value of the newly calculated parasitism vector is better (for a minimization problem) than the previous one, the parasitism vector takes the new location, eliminating the previous organism. Consequently, the suggested algorithm can converge more quickly while maintaining good diversity. Due to the aforementioned alteration of the parasitism phase, the exploitation capacity increases, together with a high convergence rate and a stable optimal solution.

Algorithm 2 (presented as an image in the original)
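As Algorithm 2 is given only as a figure, the sketch below paraphrases the textual description: if a < b the chosen dimension is re-sampled over the search bounds, otherwise it is re-sampled between the smallest and largest components of the best organism. The single-dimension choice and the sampling details are assumptions for illustration.

```python
import numpy as np

def modified_parasite(host, best, lb, ub, seed=None):
    """Hypothetical sketch of the modified parasite vector
    (Algorithm 2, paraphrased; details are assumptions)."""
    rng = np.random.default_rng(seed)
    parasite = host.copy()
    d = rng.integers(len(host))          # one randomly selected dimension
    a, b = rng.random(2)
    if a < b:
        # re-sample inside the global search boundary
        parasite[d] = lb[d] + rng.random() * (ub[d] - lb[d])
    else:
        # re-sample between O^best_min,d and O^best_max,d
        o_min, o_max = best.min(), best.max()
        parasite[d] = o_min + rng.random() * (o_max - o_min)
    return parasite
```

The parasite then competes with the previous organism under the usual greedy selection.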

3.3 Weighted random number

In conventional SOS, the phases make the algorithm proceed by moving the organisms toward the global optimum. To obtain a new set of improved organisms, a random weighted number is formed and added to the current organisms of the ecosystem. Likewise, in the mutualism phase, the algorithm proceeds through random cooperation among organisms to improve their relationship. Here, the random weighted number is formed as a combination of the random weighted differential vector [29] and the random weighted reflection vector [23]. This is shown in Algorithm 3.

Algorithm 3 (presented as an image in the original)

This random weighted number is proposed to upgrade the search capability of the algorithm. As a result, the robustness of the algorithm, assessed by the standard deviation, increases, and the algorithm requires fewer function evaluations to converge to a globally optimal solution. Ultimately, the exploration capability of the phases increases because of the use of the random weighted number.
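Algorithm 3 is likewise shown only as a figure; the sketch below is a hypothetical combination of a random weighted differential vector [29] with a random weighted reflection vector [23], added to the current organism. The exact mixing rule is an assumption made for illustration, not the paper's formula.

```python
import numpy as np

def weighted_random_update(o_i, o_r1, o_r2, seed=None):
    """Illustrative weighted random perturbation (assumed form):
    combines a random weighted differential of two organisms with a
    random weighted reflection toward one of them."""
    rng = np.random.default_rng(seed)
    w1, w2 = rng.random(2)
    differential = w1 * (o_r1 - o_r2)          # random weighted differential
    reflection = (1 - w2) * o_i + w2 * o_r1    # random weighted reflection
    return o_i + differential + rng.random() * (reflection - o_i)
```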

3.4 The framework of mISOS

In this subsection, the proposed mISOS architecture is introduced. It was observed from the literature that, because of the benefit factor in the mutualism phase, traditional SOS gets caught at local optima, with less diversity and a slow convergence rate [20, 26]. Accordingly, the modified benefit factor is introduced into SOS to keep the organisms from stagnating at local optima [26]. The parasitism phase is used to explore and exploit the domain space around the best candidate solution while increasing the convergence speed of the original algorithm [20]. The updated parasitism process fundamentally focuses on saving computational time while still maintaining the global search capability over the domain space. Integrating the random weighted number improves the organisms’ exploration capacity, which is also productive when solving optimization problems with a huge number of local optima [26].

The procedures applied within mISOS can be summarized as follows:

Algorithm 4 (presented as an image in the original)

1. The modified benefit factor is utilized within mISOS to jump out of local optima and to accelerate the search process.

2. The modified parasitism phase is introduced to upgrade the exploration of the search space during the search process, especially when the current organism is far from the unknown optimal organism.

3. The random weighted number is utilized to move the current organism to other, more promising regions of the search space.

4. After each phase, i.e., the mutualism, commensalism, and parasitism phases, a selection operator applied between two consecutive iterations of the algorithm prevents the organisms from wandering away from promising search areas and controls the level of diversity within the algorithm.

Exploration, exploitation, and the balance between them are fundamental components of any nature-inspired algorithm. In mISOS, these components are improved through the modified benefit factor, the modified parasite vector, and the random weighted number. The steps of the algorithm are exhibited in Algorithm 4 and the flowchart is presented in Fig. 2.
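As a structural illustration of the overall loop, a simplified stand-in for Algorithm 4 is sketched below: it initializes the ecosystem and repeats a mutualism-style update with an assumed adaptive benefit factor and greedy selection. The modified parasitism and weighted random steps are omitted, and all names are illustrative, so this is a sketch of the loop's shape, not the paper's exact algorithm.

```python
import numpy as np

def misos_sketch(f, lb, ub, pop_size=20, max_fes=4000, seed=0):
    """Structural sketch in the spirit of Algorithm 4 (simplified)."""
    rng = np.random.default_rng(seed)
    d = len(lb)
    eco = lb + rng.random((pop_size, d)) * (ub - lb)   # initialization, Eq. (1)
    fit = np.array([f(o) for o in eco])
    fes = pop_size
    while fes < max_fes:
        best = eco[np.argmin(fit)].copy()
        i = rng.integers(pop_size)
        j = rng.choice([k for k in range(pop_size) if k != i])
        mv = 0.5 * (eco[i] + eco[j])                   # mutual vector
        # adaptive benefit factor in [1, 2] (illustrative form only)
        abf = 1.0 + (fit[i] - fit.min()) / (fit.max() - fit.min() + 1e-30)
        trial = np.clip(eco[i] + rng.random(d) * (best - mv * abf), lb, ub)
        ft = f(trial); fes += 1
        if ft < fit[i]:                                # greedy selection
            eco[i], fit[i] = trial, ft
    k = np.argmin(fit)
    return eco[k], float(fit[k])
```

Even this reduced loop converges on simple unimodal functions, which is what the greedy selection after every phase guarantees.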

Fig. 2.

Flowchart of mISOS Algorithm

4 Performance analysis on 20 test function experiments and comparison of results

4.1 Benchmark set analysis of results and parameter setting

The proposed mISOS is evaluated on a set of twenty well-known classical benchmark problems [30]. These functions are scalable, and thus the scalability of mISOS is also evaluated on them, considering dimension one hundred. Table 1 presents the names, mathematical forms, search ranges, and optimum values (Fmin) of the classical benchmark test functions. On these test functions, every algorithm is executed thirty times independently. The parameter settings of all the algorithms are presented in Table 2. Tables 3, 4, 5, 6, and 7 report the statistical parameters obtained, such as the mean and standard deviation of the benchmark problem values.

Table 1 Benchmark function applied for the validation of the proposed method (Fmin = 0, S=Search space)
Table 2 Parameter values of all the algorithm specific control parameters
Table 3 Performance results of SOS [11], I-SOS [23], SOS-ABF1 [26], SOS-ABF2 [26], SOS-ABF1&2 [26] and mISOS at dimension (D) 100 after reaching D*100 FEs of 20 test functions over 30 runs with 50 population size
Table 4 Performance results of ten function (F1-F10) at dimension (D) 100 after reaching D*100 FEs over 30 runs with 50 population size
Table 5 Performance results of ten function (F11-F20) at dimension (D) 100 after reaching D*100 FEs over 30 runs with 50 population size
Table 6 Performance of the corresponding competitor is worse than, better than, and similar to that of mISOS
Table 7 Comparison of mean error and standard deviation in objective function value for 30 dimensions of classical test problems

A comparison of algorithms is important for selecting the appropriate algorithm for an optimization problem from among several optimizers. This is frequently done using optimum results from exact examinations on benchmarks [39], because the no-free-lunch theorem [2] implies that there is no universally best algorithm. The algorithms are compared based on the quality of the solution achieved on the benchmark functions and on the essential computational requirements. In this work, the output findings are compared with some SOS variants, namely SOS [11], I-SOS [23], SOS-ABF1 [27], SOS-ABF2 [27] and SOS-ABF1&2 [27], as well as with some other optimization algorithms: BSA [10], ABSA [31], CLPSO [32], CPSO-H [33], FDR-PSO [34], FI-PS [35], UPSO [36], EPSDE [37], TSDE [38], CPI-DE [39], ACoS-PSO [40], HBSA [30] and DSOS variants [47]. These algorithms were chosen for comparison because they are widely used for solving diverse complex problems from different branches of science and engineering.

The common control parameter values of all the algorithms are thirty independent trials, fifty organisms, and D*100 function evaluations. The values of all algorithm-specific control parameters are taken as given in their original manuscripts. For every algorithm, the maximum number of function evaluations is considered the stopping criterion, and every algorithm is run in Matlab R2010a on a Lenovo machine with an Intel(R) Core(TM) i5-8250U 8th-generation CPU @ 1.60 GHz/1.80 GHz, 8 GB RAM, Windows 10 Home, and an x64-based processor.

The outcomes are listed in the “Mean ± STD” format, where “Mean” and “STD” denote the average and standard deviation of the benchmark problem values, respectively. Boldface represents the best outcome among the algorithms compared in this study. F: Function, S: Search space.

In the tables, the signs ─/+/≈ are used to show that the output of the corresponding competitor in terms of numerical results is worse than, better than, or identical to that of mISOS, respectively.

4.2 Comparison with conventional SOS and some improved SOS variants

The proposed mISOS is compared in this section with conventional SOS and some modified SOS variants on twenty test problems. Table 3 provides a comparison of the results for dimension 100, with the test problem values specified in terms of mean and standard deviation. The output findings are compared with SOS [11], I-SOS [23], SOS-ABF1 [27], SOS-ABF2 [27] and SOS-ABF1&2 [27] at dimension (D) 100 after reaching D*100 FEs of the 20 test functions over 30 runs with a population size of 50. Only one optimum, the global optimum, is present for F6. As shown in Table 3, mISOS performs better than SOS, I-SOS, SOS-ABF1, SOS-ABF2, and SOS-ABF1&2 on fifteen, thirteen, thirteen, and nineteen test functions, respectively. In the tables, bold font indicates better results. Overall, as shown in Table 3, except for the F5, F8, F9, F13, and F17 functions, mISOS obtained better results than the other compared algorithms.

4.3 Comparison with other optimizers

The performance of the suggested mISOS is contrasted in this subsection with other optimizers in terms of the numerical values of the test functions used in the previous section. These optimizers are BSA [10], ABSA [31], CLPSO [32], CPSO-H [33], FDR-PSO [34], FI-PS [35], UPSO [36], EPSDE [37], TSDE [38], CPI-DE [39], ACoS-PSO [40] and HBSA [30], and they are used to evaluate the mISOS performance. For a fair comparison, we kept the same population size, and the same initial population was chosen for every algorithm in a given run. Algorithm parameter settings are the same as those used in their original papers. The comparison is presented in Tables 4 and 5, which demonstrate the search effectiveness and solution accuracy of mISOS. On a large portion of the test problems, the proposed mISOS either achieved the optimum, outperformed, or was competitive with the other algorithms.

In Table 6, the signs ─/+/≈ are used to indicate that the performance of the corresponding competitor in terms of numerical results is worse than, better than, or comparable to that of mISOS. As shown in Table 6, mISOS performs better than BSA, ABSA, CLPSO, CPSO-H, FDR-PSO, FI-PS, UPSO, EPSDE, TSDE, CPI-DE, ACoS-PSO and HBSA on 18/20, 19/20, 19/20, 19/20, 18/20, 18/20, 18/20, 18/20, 18/20, 19/20, 18/20, and 18/20 test functions, respectively.

4.4 Comparison with some recent algorithms

Table 7 compares the performance of the suggested mISOS with some recent optimizers in terms of the numerical values of the test functions used in the previous section. These optimizers are SOS and its variants called DSOS [47]. For a fair comparison, we kept the same population size, and the same initial population was chosen for every algorithm in a given run. Algorithm parameter settings are the same as those used in their original papers [47]. The comparison presented in Table 7 demonstrates the search effectiveness and solution accuracy of mISOS, which either reached the optimum, outperformed, or was highly competitive with the other algorithms. In Table 7, the signs ─/+/≈ indicate that the performance of the corresponding competitor in terms of numerical results is worse than, better than, or comparable to that of mISOS. As shown in Table 7, mISOS performs better than SOS, DSOS1, DSOS2, DSOS3, DSOS4, DSOS5 and DSOS6 on 11/5/1, 11/5/1, 12/7/0, 12/7/0, 12/7/0, 12/7/0, and 12/7/0 test functions, respectively.

4.5 Statistical analysis

In addition, the multiple-problem Wilcoxon test [41] and Friedman’s test were executed simultaneously using SPSS software to evaluate the significance of the outperformance of mISOS on the test functions. The Bonferroni-Dunn technique [42] was chosen as the post-hoc procedure for Friedman’s test. For the 100D test functions, the average objective function values over 30 runs and the results of the multiple-problem Wilcoxon and Friedman tests are summarized in Tables 8, 9, 10, 11, 12 and 13, respectively.

Table 8 Results of the multiple-problem based Wilcoxon's test for mISOS and some selected SOS variants on 20 test functions with 100D (α = 0.05)
Table 9 Ranking of mISOS and some selected SOS variants by Friedman's test on 20 test functions with 100D
Table 10 Results of the multiple-problem based Wilcoxon's test for mISOS and some other selected algorithms on 20 test functions with 100D (α = 0.05)
Table 11 Ranking of mISOS and some other selected algorithm variants by Friedman's test on 20 test functions with 100D
Table 12 Results of the multiple-problem based Wilcoxon's test for mISOS and recent SOS variants on 17 test functions with 30D (α = 0.05)
Table 13 Ranking of mISOS and some recent algorithm variants by Friedman's test on 17 test functions with 30D

In Tables 8, 10, and 12, all R+ values are greater than the R− values, showing that the efficiency of mISOS is higher than that of the other competitors. Also, mISOS achieves the first rank in Friedman's test, as shown in Tables 9, 11, and 13. Consequently, the test results show that mISOS outperforms the competitors on the classical test functions with 30D and 100D.

On the basis of the statistical analysis, mISOS ranks first, followed by the other approaches. The suggested mISOS can therefore be viewed as a better optimizer than the other algorithms under consideration for finding solutions to optimization problems with high precision.
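To illustrate how the R+ and R− values reported in Tables 8, 10, and 12 arise, the following sketch computes a multiple-problem Wilcoxon signed-rank test in Python with SciPy. The numbers below are hypothetical placeholders, not the paper's data (which was analyzed with SPSS); the ranking logic is the standard Wilcoxon procedure.

```python
# Illustrative sketch (hypothetical data): multiple-problem Wilcoxon
# signed-rank test comparing two optimizers' mean errors over a set of
# test functions, as in Tables 8, 10, and 12.
from scipy.stats import wilcoxon, rankdata

# Hypothetical mean objective values per test function (smaller is better).
misos = [0.0, 1.2e-8, 3.4e-5, 0.0, 2.1e-3, 5.0e-6, 7.7e-4, 0.0]
rival = [1.3e-2, 4.5e-6, 3.9e-4, 2.2e-5, 6.0e-3, 5.0e-6, 9.1e-3, 1.0e-4]

# Rank the absolute nonzero differences, then split the rank sum by sign.
diffs = [r - m for m, r in zip(misos, rival)]
nonzero = [d for d in diffs if d != 0.0]
ranks = rankdata([abs(d) for d in nonzero])
r_plus = sum(rk for d, rk in zip(nonzero, ranks) if d > 0)   # rival worse
r_minus = sum(rk for d, rk in zip(nonzero, ranks) if d < 0)  # rival better

stat, p = wilcoxon(misos, rival)  # zero differences are dropped by default
print(f"R+ = {r_plus}, R- = {r_minus}, p = {p:.4f}")
```

With R+ far exceeding R− and p below α = 0.05, the null hypothesis of equal performance would be rejected, which is the reading applied to Tables 8, 10, and 12.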

4.6 Convergence rate comparison

The convergence curves illustrated in Fig. 3 were plotted using the objective function values of the respective functions with dimension 100. The curves are plotted for the test functions F1 (Sphere), F3 (Schwefel 1.2), F4 (Schwefel 2.21), F6 (Step), F11 (Griewank), F14 (Salomon), F15 (Zakharov), F16 (Axis parallel hyper-ellipsoid), F18 (Cigar), F19 (Exponential), and F20 (Cosine mixture). It can easily be seen from the figures that mISOS converges rapidly towards the global optima and achieves high accuracy compared with the other optimization algorithms. In contrast, the other algorithms easily fall into local optima, and their convergence speed is moderate while they keep searching for the global optima. The global optima of the F14 and F15 functions lie in a narrow valley, and converging to them is difficult for optimization algorithms. Nonetheless, the suggested mISOS attains the global optimum for dimensions up to one hundred and offers solutions close to the global optima for dimensions greater than one hundred. Therefore, the suggested mISOS shows high precision and a higher convergence rate compared with the other optimization methods.

Fig. 3
figure 3

Convergence curves of eleven test functions (F1, F3, F4, F6, F11, F14, F15, F16, F18, F19, and F20) with D = 100 and 10,000 FEs: objective function value vs. function evaluations (FEs)

Consequently, the overall investigation of the results on the test functions shows a substantial improvement in the search strategy of mISOS compared with the SOS variants and other comparable algorithms, in terms of achieving a better convergence rate and of improving and balancing the exploration and exploitation qualities of the algorithm. Thus, the suggested mISOS can be viewed as a better optimizer than the others for solving test problems with a large number of variables and with high solution accuracy.

5 Applications of mISOS on engineering benchmark problems

In this section, the suggested modified version of SOS is applied to four well-known engineering test problems [43], namely, the frequency modulation sound parameter identification problem, the Lennard-Jones potential problem, the Tersoff potential function minimization problem, and the spread spectrum radar polyphase code design problem. These problems are taken from ref. [43]. A summary of these engineering test problems is as follows:

5.1 Frequency modulation sounds parameter identification problem [42]

The frequency-modulated (FM) sound wave problem is a global optimization problem that plays a significant role in several modern music systems. FM synthesis is an efficient, though not always easily controlled, technique for producing interesting sounds, in which a small number of parameters can deliver a wide range of sound timbres. The task here is to use mISOS within FM matching synthesis to find optimized parameter values: a set of parameter values is determined by mISOS, and the FM synthesizer generates the corresponding sound, which should be analogous to the target sound. The distance between the features of the target sound and those of the synthesized sound is taken as the fitness value (or objective function value). This engineering benchmark problem is a highly complex multimodal problem, and the dimension of the sound wave parameter vector is six. The objective function is defined as follows:

$$ \boldsymbol{\operatorname{Minimize}}\ f(y)={\sum}_{t=0}^{100}{\left(y(t)-{y}_0(t)\right)}^2 $$
(19)

where the estimated sound wave and the target sound wave are given by Eqs. (20) and (21), respectively.

$$ y(t)={a}_1\mathit{\sin}\left({\omega}_1 t\theta +{a}_2\mathit{\sin}\left({\omega}_2 t\theta +{a}_3\mathit{\sin}\left({\omega}_3 t\theta \right)\right)\right) $$
(20)
$$ {y}_0(t)=1.0\ast \mathit{\sin}\left(5.0\ast t\theta +1.5\ast \mathit{\sin}\left(4.8\ast t\theta +2.0\ast \mathit{\sin}\left(4.9\ast t\theta \right)\right)\right) $$
(21)

The six variables are y = {a1, ω1, a2, ω2, a3, ω3}, and the domain of each variable is [−6.4, 6.35]. Here θ = 2π/100.
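The objective of Eqs. (19)–(21) can be sketched directly in Python. This is a minimal illustration with our own function names, not code from the paper; any optimizer (such as mISOS) would minimize `fm_objective` over the six-dimensional box [−6.4, 6.35]^6.

```python
# A minimal sketch of the FM parameter-identification objective,
# Eqs. (19)-(21); variable names follow the text.
import math

THETA = 2.0 * math.pi / 100.0  # theta = 2*pi/100

def fm_wave(t, a1, w1, a2, w2, a3, w3):
    """Three-level nested FM sound wave, Eq. (20)."""
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

def fm_objective(y):
    """Summed squared error against the target wave of Eq. (21), Eq. (19)."""
    a1, w1, a2, w2, a3, w3 = y
    total = 0.0
    for t in range(101):  # t = 0, 1, ..., 100
        target = fm_wave(t, 1.0, 5.0, 1.5, 4.8, 2.0, 4.9)
        total += (fm_wave(t, a1, w1, a2, w2, a3, w3) - target) ** 2
    return total

# The target parameters themselves reach the known global optimum:
print(fm_objective([1.0, 5.0, 1.5, 4.8, 2.0, 4.9]))  # → 0.0
```

The highly nested sinusoids make this landscape strongly multimodal, which is why it is a standard stress test for global optimizers.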

5.2 Lennard-Jones potential problem [42]

The Lennard-Jones (LJ) potential problem is the minimization of the molecular potential energy associated with a pure LJ cluster. The lattice structure of the LJ cluster has an icosahedral core and a combination of surface lattice points. The mISOS algorithm is executed on this function to determine the molecular structure in which the atoms are arranged so that the cluster has the least energy. This is a multimodal potential-energy minimization problem with an enormous number of local minima. In the LJ problem, the cluster configuration of N atoms is given by the Cartesian coordinates

$$ \overrightarrow{p_i}=\left({x}_i,{y}_i,{z}_i\right),\kern0.5em i=1,2,3,\dots, N $$
(22)

The mathematical formulation of the Lennard-Jones pair potential for N atoms is as follows:

$$ {V}_N(p)=\sum \limits_{i=1}^{N-1}\sum \limits_{j=i+1}^N\left({r}_{i,j}^{-12}-2\,{r}_{i,j}^{-6}\right) $$
(23)

where

$$ {r}_{i,j}={\left\Vert \overrightarrow{p_j}-\overrightarrow{p_i}\right\Vert}_2 $$
(24)

and the gradient is

$$ {\nabla}_j{V}_N(p)=-12\sum \limits_{i=1,i\ne j}^{N}\left({r}_{i,j}^{-14}-{r}_{i,j}^{-8}\right)\left(\overrightarrow{p_j}-\overrightarrow{p_i}\right),\kern0.5em j=1,2,3,\dots, N $$
(25)

In this case, a 10-atom problem is considered, so the total number of variables is 30. Since VN(p) depends only on the pair distances, any permutation or translation of the ordering of {pj} yields an equivalent function value. The domains of the first three variables are [0, 4], [0, 4], and [0, π], and the domain of the i-th variable (i ≥ 4) is \( \left[-4-\frac{1}{4}\left\lfloor \frac{i-4}{3}\right\rfloor, 4+\frac{1}{4}\left\lfloor \frac{i-4}{3}\right\rfloor \right] \), where ⌊r⌋ denotes the greatest integer less than or equal to r.
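The pair potential of Eq. (23) can be sketched as follows. This is a minimal illustration under our own conventions (the decision vector flattens the atom coordinates as x1, y1, z1, x2, …), not the paper's implementation.

```python
# A minimal sketch of the Lennard-Jones pair potential of Eq. (23).
# The decision vector holds the Cartesian coordinates of N atoms,
# flattened as (x1, y1, z1, x2, y2, z2, ...).
import math

def lj_potential(coords):
    n = len(coords) // 3
    atoms = [coords[3 * i:3 * i + 3] for i in range(n)]
    v = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            r = math.dist(atoms[i], atoms[j])  # Euclidean pair distance, Eq. (24)
            v += r ** -12 - 2.0 * r ** -6
    return v

# Two atoms at the equilibrium distance r = 1 give the pair minimum -1:
print(lj_potential([0.0, 0.0, 0.0, 1.0, 0.0, 0.0]))  # → -1.0
```

Because only pair distances enter the sum, rigid translations and relabelings of the atoms leave `lj_potential` unchanged, which is exactly the degeneracy the variable-domain restrictions above are designed to reduce.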

5.3 Tersoff potential function minimization problem for model Si (B)

The total Tersoff potential energy of the system is the sum of the individual potentials of the atoms and is defined as

$$ f\left(\overrightarrow{X_1},\overrightarrow{X_2},\overrightarrow{X_3},\dots ..,\overrightarrow{X_N}\right)={E}_1\left(\overrightarrow{X_1},\overrightarrow{X_2},\overrightarrow{X_3},\dots ..,\overrightarrow{X_N}\right)+\dots +{E}_N\left(\overrightarrow{X_1},\overrightarrow{X_2},\overrightarrow{X_3},\dots ..,\overrightarrow{X_N}\right) $$
(26)

The Tersoff potential [11] of individual atoms can be formally defined as follows:

$$ {E}_i=\frac{1}{2}\sum \limits_{i\ne j}{f}_c\left({r}_{i,j}\right)\left({V}_R\left({r}_{i,j}\right)-{B}_{i,j}{V}_A\left({r}_{i,j}\right)\right),\kern0.75em \forall i $$
(27)

Here ri, j is the distance between atoms i and j, VR is a repulsive term, VA is an attractive term, fc(ri, j) is a switching function, and Bi, j is a many-body term that depends on the positions of atoms i and j and on the neighbors of atom i. The term Bi, j is given by

$$ {B}_{i,j}={\left(1+{\gamma}^{n_1}{\xi}_{i,j}^{n_1}\right)}^{-1/2{n}_1} $$
(28)

In Eq. (28), n1 and γ are known fitted parameters, and the term ξi, j for atoms i and j (i.e., for bond ij) is given by:

$$ {\xi}_{i,j}=\sum \limits_{k\ne i,j}{f}_c\left({r}_{i,k}\right)g\left({\theta}_{ijk}\right)\exp \left({\lambda}_3^3{\left({r}_{ij}-{r}_{ik}\right)}^3\right) $$
(29)

In Eq. (29), the term θijk is the bond angle between bonds ij and ik, and the function g is given by:

$$ g\left({\theta}_{ijk}\right)=1+{c}^2/{d}^2-{c}^2/\left({d}^2+{\left(h-\cos \left({\theta}_{ijk}\right)\right)}^2\right) $$
(30)

The quantities λ3, c, d, and h appearing in Eqs. (29) and (30) are also known fitted parameters. The terms VR(ri, j), VA(ri, j) and the switching function fc(ri, j) appearing in Eq. (27) are given by:

$$ {V}_R\left({r}_{i,j}\right)=A{e}^{-{\lambda}_1{r}_{ij}} $$
(31)
$$ {V}_A\left({r}_{i,j}\right)=B{e}^{-{\lambda}_2{r}_{ij}} $$
(32)
$$ {f}_c\left({r}_{i,j}\right)=\left\{\begin{array}{ll}1, & {r}_{i,j}\le R-D\\ {}\frac{1}{2}-\frac{1}{2}\sin \left(\frac{\pi \left({r}_{i,j}-R\right)}{2D}\right), & R-D<{r}_{i,j}<R+D\\ {}0, & {r}_{i,j}\ge R+D\end{array}\right. $$
(33)

where A, B, λ1, λ2, R, and D are given fitted parameters.

Thus the cost function now can be redefined as:

$$ f(x)={E}_1(x)+{E}_2(x)+\dots +{E}_N(x),\kern0.5em x\in \Omega $$
(34)

In general, for a system of N atoms, the number of unknown variables is n = 3N − 6. The cluster X of N atoms can then be redefined as

$$ x=\left\{{x}_1,{x}_2,{x}_3,\dots {x}_n\right\},x\in {IR}^{3N-6} $$
(35)

The search region for the Si(B) model of the Tersoff potential is

$$ \Omega =\left\{\left\{{x}_1,{x}_2,{x}_3,\dots {x}_n\right\}:-4.25\le {x}_1,{x}_i\le 4.25;i=4,5,\dots, n;0\le {x}_2\le 4;0\le {x}_3\le \pi \right\} $$
(36)

In this case, the number of atoms is taken as 10, so the total number of variables is 30. A detailed description of this problem can be found in ref. [43].
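Two of the Tersoff building blocks can be sketched compactly: the cutoff (switching) function fc of Eq. (33) and the angular function g of Eq. (30). This is our own illustration, not the paper's code; the parameter values below are placeholders rather than the fitted Si(B) constants, and the cutoff is written with the standard sin(π(r − R)/(2D)) argument so that the switch is continuous at r = R − D and r = R + D.

```python
# A minimal sketch of two Tersoff building blocks: the cutoff function
# f_c of Eq. (33) and the angular function g of Eq. (30). Parameter
# values here are placeholders, not the fitted Si(B) constants.
import math

def f_cutoff(r, R, D):
    """Smooth switch: 1 inside R - D, 0 outside R + D."""
    if r <= R - D:
        return 1.0
    if r >= R + D:
        return 0.0
    return 0.5 - 0.5 * math.sin(0.5 * math.pi * (r - R) / D)

def g_angular(theta, c, d, h):
    """Angular term of Eq. (30) for bond angle theta."""
    return 1.0 + c**2 / d**2 - c**2 / (d**2 + (h - math.cos(theta))**2)

# The cutoff interpolates between 1 and 0 across the switching window:
R, D = 3.0, 0.2
print(f_cutoff(2.5, R, D), f_cutoff(3.0, R, D), f_cutoff(3.5, R, D))  # → 1.0 0.5 0.0
```

The full objective of Eqs. (26)–(33) combines these pieces over all atom pairs and bond angles, which is what makes the Tersoff landscape so rugged for optimizers.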

5.4 Spread spectrum radar polyphase code design problem [43]

The spread spectrum radar polyphase code design problem is one of the most popular engineering design optimization problems for the application of global optimization algorithms. When designing a radar system that uses pulse compression, great attention must be given to the choice of the appropriate waveform. Many methods of radar pulse modulation that make pulse compression possible are known. Polyphase codes are attractive because they offer lower side-lobes in the compressed signal and easier implementation of digital processing techniques. This polyphase pulse compression code synthesis is based on the properties of the aperiodic autocorrelation function and the assumption of coherent radar pulse processing in the receiver. The problem is a continuous min-max non-linear non-convex optimization problem with numerous local optima. The mathematical formulation of this problem is as follows:

$$ \boldsymbol{\operatorname{Minimize}}\ f(X)=\max \left\{{\varphi}_1(X),{\varphi}_2(X),\dots, {\varphi}_{2m}(X)\right\} $$
(37)

where X = {(x1, x2, x3, …, xD) ∈ RD | 0 ≤ xj ≤ 2π, j = 1, 2, 3, …, D} and m = 2D − 1, with

$$ {\varphi}_{2i-1}(X)=\sum \limits_{j=i}^D\cos \left(\sum \limits_{k=\left|2i-j-1\right|+1}^j{X}_k\right),\kern0.5em i=1,2,3,\dots, D $$
(38)
$$ {\varphi}_{2i}(X)=0.5+\sum \limits_{j=i+1}^D\cos \left(\sum \limits_{k=\left|2i-j\right|+1}^j{X}_k\right),\kern0.5em i=1,2,3,\dots, D-1 $$
(39)
$$ {\varphi}_{m+i}(X)=-{\varphi}_i(X),\kern0.5em i=1,2,3,\dots, m $$
(40)

Here the goal is to minimize the modulus of the largest among the samples of the autocorrelation function associated with the compressed radar pulse at the optimal receiver output, where the variables represent symmetrized phase differences. It is an NP-hard optimization problem. The problem belongs to the class of continuous min-max global optimization problems because the objective function is piecewise smooth. The problem has twenty variables, and the domain of each variable is [0, 2π].
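The min-max objective of Eqs. (37)–(40) can be sketched as follows. This is our own illustration (function name and the 1-based indexing helper are ours); the order in which the φ terms are collected differs from the interleaved indexing in the text, but the max over all 2m terms is the same.

```python
# A minimal sketch of the polyphase code objective of Eqs. (37)-(40).
# The text indexes x from 1, so a dummy leading entry keeps the slice
# bounds aligned with the formulas.
import math

def radar_polyphase(x):
    D = len(x)
    m = 2 * D - 1
    xs = [0.0] + list(x)  # 1-based indexing as in the text
    phi = []
    for i in range(1, D + 1):            # phi_{2i-1}, Eq. (38)
        phi.append(sum(math.cos(sum(xs[abs(2 * i - j - 1) + 1: j + 1]))
                       for j in range(i, D + 1)))
    for i in range(1, D):                # phi_{2i}, Eq. (39)
        phi.append(0.5 + sum(math.cos(sum(xs[abs(2 * i - j) + 1: j + 1]))
                             for j in range(i + 1, D + 1)))
    phi += [-p for p in phi[:m]]         # phi_{m+i} = -phi_i, Eq. (40)
    return max(phi)                      # Eq. (37)

# At the all-zero point every cosine is 1, so phi_1 = D dominates:
print(radar_polyphase([0.0] * 5))  # → 5.0
```

Because the objective is a max over piecewise-smooth terms, small moves in x can switch which φ attains the maximum, producing the many local optima noted above.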

5.5 Result and discussion

The mISOS algorithm solved the above problems, and the optimum solutions are reported in Tables 14, 15, 16, and 17. Each problem is solved using 1000 function evaluations, an ecosystem size of 50, and 25 independent runs.

Table 14 Parameter Estimation for Frequency-Modulated (FM) Sound Waves
Table 15 Lennard-Jones Potential Problem
Table 16 Tersoff Potential Function Minimization Problem
Table 17 Spread spectrum radar polyphase problem

For the parameter estimation for frequency-modulated (FM) sound waves problem, the obtained results are compared with ISOS, SOS, ASOS_BF1, ASOS_BF2, ASOS_BF1&2, BSA, ABSA, and DE. The performance results for this problem are presented in Table 14, which reports the best, median, worst, and mean values and the standard deviation. Table 14 shows that the best solution is obtained by ASOS_BF2 and the worst by ABSA, but the best median and mean results are obtained by mISOS. Hence, from the results presented in Table 14, one can conclude that the overall performance of the mISOS algorithm on this optimization problem is the best among the compared algorithms.

For the Lennard-Jones potential problem, the obtained results are also compared with ISOS, SOS, ASOS_BF1, ASOS_BF2, ASOS_BF1&2, BSA, ABSA, and DE. The performance results for this problem are presented in Table 15, which reports the best, median, worst, and mean values and the standard deviation. Table 15 shows that the best, median, and mean results are all obtained by mISOS. Hence, from the results presented in Table 15, one can conclude that the performance of the mISOS algorithm on this optimization problem is the best among the compared algorithms.

For the Tersoff potential for the Si(B) model, the obtained results are compared with ISOS, SOS, ASOS_BF1, ASOS_BF2, ASOS_BF1&2, BSA, ABSA, and DE. The performance results for this problem are presented in Table 16, which reports the best, median, worst, and mean values and the standard deviation. Table 16 shows that the best solution is obtained by BSA and the worst by ISOS, but the best median and mean results are obtained by mISOS. From the reported results, we observe that mISOS yields better solutions than SOS and the other algorithms.

For the spread spectrum radar polyphase problem, the obtained results are compared with ISOS, SOS, ASOS_BF1, ASOS_BF2, ASOS_BF1&2, BSA, ABSA, and DE. Table 17 shows that the best solution is obtained by BSA and the worst by SOS, but the best median and mean results are obtained by mISOS. Hence, from Table 17, one can see that the mISOS algorithm performs better than the other algorithms, which indicates the superiority of mISOS in approximating the global optimum of this problem. Thus, the above discussion concludes that the performance of the mISOS algorithm in solving this problem is acceptable.

From the above discussion, results, and evaluations, it is clear that mISOS offers improved quality, efficiency, and robustness for solving real-life problems. Hence, mISOS can be regarded as a promising alternative optimizer for real-life problems.

6 Conclusions and future directions

A modified version of SOS, called the modification-based I-SOS (mISOS) algorithm, is developed to overcome the weaknesses of the SOS algorithm. Three altered search approaches, namely an adaptive benefit factor, a modified parasitism phase, and a random weighted number-based search, are employed in the suggested mISOS algorithm. A modified parasite vector is suggested here to improve the exploitation ability of the parasitism phase of the basic SOS algorithm. To improve the effectiveness of the basic SOS on complex structures, an adaptive benefit factor is introduced. The random weighted number is used to move the current organism towards more promising regions of the search space. The suggested mISOS has been executed on twenty test functions, and its robustness has been tested with dimension 100. The results obtained by the mISOS algorithm are compared with various existing algorithms such as SOS, I-SOS, SOS-ABF1, SOS-ABF2, SOS-ABF1&2, BSA, ABSA, CLPSO, CPSO-H, FDR-PSO, FI-PS, UPSO, EPSDE, TSDE, CPI-DE, ACoS-PSO, and HBSA.

The analysis of the results through convergence behavior and statistical validation ensures the superior performance of the mISOS on high-dimensional optimization problems as compared to conventional SOS, variants of SOS, and some other recent optimization algorithms. The performance comparison on several engineering optimization test cases demonstrates the applicability of the mISOS on real-world problems.

Since the suggested mISOS demonstrates its efficiency and reliability in solving problems with a large number of variables, it can subsequently be used to solve complex non-linear problems. In the future, mISOS can also be extended, with some fundamental modifications, to solve discrete, constrained, and multi-objective test problems.