1 Introduction

There are many methodologies for searching for the best solution, such as robust optimization [102], single-objective optimization [30], large scale optimization [17, 19], multiobjective optimization [15], memetic optimization [42], many-objective optimization [16, 18], and fuzzy optimization [26]. Whatever the methodology, an optimization core is a required step in almost any kind of problem in data science and industry. Examples include, but are not limited to, image enhancement optimization [131], deployment optimization in sensor networks [14], Artificial Neural Networks (ANN) [93, 161], parameter optimization [159], water-energy optimization [25], deep learning tasks [28, 71, 98-100], decision-making processes [77, 79, 139], sustainable development [48, 78, 178], mechanical parameter optimization [13], mechanical and temperature optimization [12], optimal resource allocation [150], and many other domains [21, 49, 84, 96, 143, 148, 149, 157]. The reason for such wide application is that most of these tasks require high accuracy and strong modeling to better understand the relations among the constraints and objectives systematically [2, 46, 47, 158, 181]. One main category of optimization methods has a swarm and evolutionary basis. In recent years, swarm intelligence (SI) has gained enormous attention because of its simple and efficient search mechanism [107, 108, 116, 120, 135, 171]. It is a well-known and popular branch of population-based metaheuristic solvers, in which multiple search agents participate in the search of the optimization task. Together, these search agents are called a "swarm" in SI-based algorithms [20]. These algorithms are used to solve several important real-life applications in science, engineering, and medicine.
In the literature, several SI-based algorithms have been developed, inspired by the food foraging and social behaviors of various creatures such as bees, ants, and hawks.

Some comprehensive examples of SI-based algorithms along with their successful real-world applications are Particle Swarm Optimizer (PSO) [29, 153], Differential Evolution (DE) [118], Differential Search (DS) [75], Ant Colony Optimization (ACO) [173, 176, 177], Harris Hawks Optimizer (HHO) [30, 35, 60], Slime Mould Algorithm (SMA) [74], Grey Wolf Optimizer (GWO) [8, 22, 57, 58, 119, 175], Whale Optimizer (WOA) [21, 27, 59, 83, 86, 123, 125], Bacterial Foraging Optimization (BFO) [144], Moth-flame Optimizer (MFO) [61, 128, 145, 146, 156, 164, 165], Fruit Fly Optimization (FFO) [31, 36, 37, 110, 134, 155, 169, 170], and Salp Swarm Algorithm (SSA) [6].

Regardless of the variety of search mechanisms among metaheuristic algorithms, two features are common to all of them: exploration (diversification) and exploitation (intensification), which are responsible for the success of the optimization process [44]. Different operators are applied to introduce both of these features in an algorithm and to keep an appropriate balance between them. In exploration, the algorithm uses randomized search operators to visit widely separated areas of the solution space. Hence, the exploratory feature of search agents allows finding all possible promising areas of the solution space. The exploitation feature, on the other hand, represents the capacity for neighborhood search around the already-located promising regions of the search space. It is generally performed after exploration and can therefore be viewed as the local-search phase of the algorithm. A well-performing algorithm should be capable of establishing an appropriate balance between exploitation and exploration; an imbalance between them causes issues such as slow convergence, premature convergence, and a tendency toward sub-optimal solutions.

The Salp Swarm Algorithm (SSA), inspired by the swarming behavior of salps, was introduced in 2017 by [92]. In the literature, researchers have proposed modified variants of the SSA aiming to remove shortcomings present in the classical algorithm. The classical SSA shows a good convergence rate and sufficient exploration during the search; nevertheless, in some cases it falls into sub-optimal solutions. Therefore, researchers have adopted different operators and search mechanisms to improve its search efficacy and provide better results. To improve the level of exploration as well as exploitation, the SSA was hybridized with PSO [67]. The hybrid algorithm, denoted SSAPSO, utilizes the advantages of both the SSA and PSO to develop a comparatively better optimizer. Sayed et al. [109] embedded chaos theory in the SSA to speed up convergence and obtain more accurate optimization results; chaotic signals are employed to inject pseudo-random motions into the searching behavior, based on well-known chaos-based properties [111, 130, 138, 140]. Tubishat et al. [124] used opposition-based learning and a new local search strategy to improve swarm diversity and exploitation capability. Gupta et al. [52] introduced a new variant of the SSA called harmonized salp chain-built optimization, in which levy-flight search and opposition-based learning increase the convergence speed and prevent the salps from falling into sub-optimal solutions. An inertia weight-based search mechanism was introduced into the SSA by Hegazy et al. [56] to adjust the present best salp; the inertia weight enhances solution accuracy, reliability, and convergence speed. Singh et al. [115] hybridized the SCA search strategy into the SSA to improve the convergence rate and exploration capabilities. Wu et al. [137] used a dynamic weight to update the state of each salp, together with adaptive mutation, with an aim to achieve a better balance between exploration and exploitation. The SSA has been applied to intricate domains, and its enhanced variants have shown effective exploratory search patterns in global optimization [53, 162] and photovoltaic models [1]. There are also several in-depth studies on the structure and analysis of the SSA, including the ensemble mutation-driven SSA with restart mechanism proposed by [172], which shows robust efficacy and can be regarded as one of the most notable studies on the SSA. A multi-strategy SSA proposed by [163] demonstrates results much better than those of the SSA in terms of local optima avoidance. Chaotic multi-swarm SSA [80], multiobjective dynamic SSA [9], time-varying hierarchical SSA [38], asynchronous binary SSA [7], and efficient binary SSA are further notable research on this algorithm. Alongside these improvements, the SSA has been utilized to solve various real-life problems such as scheduling [117], image segmentation [142], feature selection [66], parameter estimation for the soil water retention curve [160], and training of neural networks [4]. All these applications demonstrate the wide applicability of the SSA. For a comprehensive survey on the SSA, readers can refer to the literature reviews in [3, 39].

In the direction of modifying the SSA to obtain better results on global optimization problems, this paper proposes a new version of the SSA based on mutation schemes. The motivation of this work is supported by the No Free Lunch theorem [136], which implies that no single optimizer is best on all problems and thereby justifies modifying existing algorithms to obtain better results on particular classes of problems. The proposed strategies are tested and compared on 23 standard benchmark optimization problems. In addition, the best-performing optimizer among the SSA with Gaussian mutation, the SSA with Cauchy mutation, and the SSA with levy-flight mutation is compared with some state-of-the-art optimization methods. The results illustrate that the mutation scheme is successful in improving the search efficacy of the classical SSA.

The rest of the paper is divided into four sections: A simple description of the SSA is provided in Sect. 2. Section 3 presents a brief description of the mutation schemes and the framework of proposed mutation-based SSA versions. Section 4 conducts the experiments to test and compare the performance of the proposed mutation-based SSA versions. Finally, Sect. 5 provides the conclusion of the study and suggests some future works.

2 Overview of the Salp Swarm Algorithm (SSA)

The SSA was developed in 2017 [92], inspired by the swarming behavior of salps. Salps are free-floating, barrel-shaped tunicates from the family Salpidae. They generally float together in a formation called a salp chain when foraging and navigating in oceans; the colony moves in this form for better locomotion and foraging. Like other SI-based metaheuristic methods, the SSA initializes the swarm with a predefined number of salps. Each salp in the swarm represents a search agent, which performs the search process for a targeted optimization problem. The swarm contains two categories of salps: leading salps and follower salps. During the search procedure, the follower salps follow the leading salps in order to locate the optimal solution. The swarm S consisting of N salps is represented as follows:

$$\begin{aligned} S=\begin{pmatrix} x_{11}&{}x_{12}&{}\cdots &{}x_{1D} \\ x_{21}&{}x_{22}&{}\cdots &{}x_{2D} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ x_{N1}&{}x_{N2}&{}\cdots &{}x_{ND} \end{pmatrix} \end{aligned}$$
(1)

In the mathematical model of the SSA, the movements of the leading salps and the follower salps are modeled separately. The leading salps update their states with the help of Eq. (2)

$$\begin{aligned} L_{j} = \left\{ \begin{matrix} F_{j}+r_1\times ({\text {lb}}+r_2\times ({\text {ub}}-{\text {lb}})) &{} r_3\ge 0.5 \\ F_{j}-r_1\times ({\text {lb}}+r_2\times ({\text {ub}}-{\text {lb}})) &{} r_3 < 0.5 \end{matrix}\right. \end{aligned}$$
(2)

where \(L_{j}\) and \(F_{j}\) are the jth coordinates of the states of the leading salps and the food source, respectively. ub and lb are the upper and lower boundary limits of the solution space. \(r_2\) and \(r_3\) are random numbers between 0 and 1. \(r_1\) is a coefficient that decreases as the iterations increase; its mathematical formulation is given by Eq. (3)

$$\begin{aligned} r_1=2\times {\text {exp}} \left[ -\left( \frac{4t}{T}\right) ^2\right] \end{aligned}$$
(3)

where t and T are the current iteration number and maximum iterations, respectively.

In the second phase of the search, the follower salps update their states. They utilize Newton's law of motion, given by

$$\begin{aligned} X_{i,j}=\frac{1}{2} at^2+v_0t \end{aligned}$$
(4)

where \(a=(v_f-v_0)/\delta t\) and \(v_0=(x-x_0)/t\). Time in the optimization process corresponds to iterations, and therefore the discrepancy between consecutive iterations is 1. Considering \(v_0=0\), Eq. (4) becomes Eq. (5)

$$\begin{aligned} X_{i,j}=\frac{1}{2} \left( X_{i,j}+X_{i-1,j}\right) \end{aligned}$$
(5)

where \(i>1\). \(X_{i,j}\) and \(X_{i-1,j}\) represent the jth coordinates of the follower salps i and \((i-1)\), respectively. Hence, like other swarm intelligence based methods, the SSA first initializes the swarm of salps within the provided solution space. In the second step, the leading salps and followers update their states to reposition at better locations. This process continues until the prefixed maximum number of iterations is completed. The pseudo-code of the classical SSA is presented in Algorithm 1.

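The pseudo-code of Algorithm 1 can be sketched in Python. The following is a minimal illustrative implementation of Eqs. (2), (3), and (5) under common assumptions (half the swarm acts as leaders, and \(r_2\), \(r_3\) are drawn per dimension); it is not the authors' exact code:

```python
import numpy as np

def ssa(obj, lb, ub, n_salps=30, dim=30, max_iter=500, seed=0):
    """Minimal sketch of the classical SSA; names and structure are illustrative."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n_salps, dim)) * (ub - lb)   # initialize swarm within [lb, ub]
    fit = np.array([obj(x) for x in X])
    best = fit.argmin()
    food, food_fit = X[best].copy(), fit[best]        # food source F = best salp so far
    for t in range(1, max_iter + 1):
        r1 = 2 * np.exp(-(4 * t / max_iter) ** 2)     # Eq. (3): decreasing coefficient
        for i in range(n_salps):
            if i < n_salps // 2:                      # leading salps: Eq. (2)
                r2 = rng.random(dim)
                r3 = rng.random(dim)
                step = r1 * (lb + r2 * (ub - lb))
                X[i] = np.where(r3 >= 0.5, food + step, food - step)
            else:                                     # follower salps: Eq. (5)
                X[i] = 0.5 * (X[i] + X[i - 1])
            X[i] = np.clip(X[i], lb, ub)
            f = obj(X[i])
            if f < food_fit:                          # memorize the elite salp
                food, food_fit = X[i].copy(), f
    return food, food_fit

# usage: minimize the sphere function in 10 dimensions
x, fx = ssa(lambda x: np.sum(x ** 2), lb=-100.0, ub=100.0, dim=10, max_iter=200)
```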

3 Proposed improved Salp Swarm Algorithm with mutation strategies

Although the classical SSA is endowed with characteristics such as fast convergence and simple implementation, it may easily become trapped at sub-optimal solutions in some cases when handling more complex optimization problems. The interaction between the leading and follower salps characterizes the performance of the SSA. If a single salp is trapped at a sub-optimal solution, the pull effect of the leading salps can still draw it away from that local solution. However, when the whole swarm of salps falls into a sub-optimal solution, the algorithm becomes trapped at that local solution and eventually stagnates.

To explore the solution space more effectively, this paper introduces a mutation strategy into the SSA. Three different mutation schemes, namely Cauchy mutation, Gaussian mutation, and levy-flight mutation, are embedded into the classical SSA. The developed versions are denoted Cauchy-SSA (CSSA), Gaussian-SSA (GSSA), and levy-SSA (LSSA), respectively. In the proposed method, before applying the mutation scheme, a greedy search is adopted between the states \(X^t\) of the tth iteration and \(X^{t+1}\) of the \((t+1)\)th iteration, using Eq. (6)

$$\begin{aligned} Y^{t+1}= \left\{ \begin{matrix} X^{t} &{} \quad \text {if} \quad f(X^t)<f(X^{t+1}) \\ X^{t+1} &{} \quad \text {if} \quad f(X^{t+1})\le f(X^{t}) \end{matrix}\right. \end{aligned}$$
(6)

Once the greedy search is completed for each salp, the mutation scheme is applied with mutation rate \((m_r)\). Increasing the mutation rate increases diversity, which helps on complex and large-dimensional optimization problems. During the mutation scheme, new mutated salps are generated and compared with their parent salps. If a newly obtained mutated salp is better than its parent salp in terms of fitness, it replaces the parent salp; otherwise the mutant is discarded and the original salp is retained. This rule is applied in the proposed improved SSA for each mutation scheme.
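The greedy search of Eq. (6) and the mutation-rate-gated replacement rule described above can be sketched as follows; the function names and the placeholder mutation operator are illustrative, not the paper's exact code:

```python
import numpy as np

rng = np.random.default_rng(1)

def greedy_select(x_old, x_new, f):
    """Eq. (6): keep whichever of the two states has the better (lower) fitness."""
    return x_old if f(x_old) < f(x_new) else x_new

def mutate_if_allowed(x, mutate, f, m_r=0.5):
    """Apply a mutation operator with rate m_r; accept the mutant only if it improves fitness."""
    if rng.random() < m_r:
        x_mut = mutate(x)
        if f(x_mut) < f(x):          # replace the parent only on improvement
            return x_mut
    return x                         # otherwise the original salp is retained

# usage with a placeholder Gaussian-style mutation and the sphere function
sphere = lambda x: np.sum(x ** 2)
x = np.array([1.0, -2.0, 0.5])
x = greedy_select(x, x + 0.1, sphere)
x = mutate_if_allowed(x, lambda s: s * (1 + rng.normal(0, 0.1, s.shape)), sphere)
```

Because both steps are purely greedy, the fitness of a salp can never get worse across an iteration under this rule.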

3.1 Gaussian-SSA (GSSA)

In this section, the Gaussian mutation [62], often used in GA and PSO, is employed to mutate the salps based on the mutation rate. The aim is not to make every salp of the classical SSA simply jump to another state of the solution space at a predefined mutation rate, independently of the other salps, but rather to introduce a controlled amount of randomness into the transition to the next iteration through the Gaussian mutation. This mutation is defined by Eq. (7)

$$\begin{aligned} \hat{x_i}=x_i\times (1+{\text {Gaussian}} (\delta )) \end{aligned}$$
(7)

where \(x_i\) denotes the ith salp and \({\text {Gaussian}} (\delta )\) is a random number generated using the Gaussian distribution. The density function of the Gaussian distribution is given by Eq. (8).

$$\begin{aligned} f_{{\text {Gaussian}}(0,\sigma ^2)} (\beta )=\frac{1}{\sigma \sqrt{2\pi }} \exp \left( -\frac{(\beta -\mu )^2}{2\sigma ^2}\right) \end{aligned}$$
(8)

where \(\sigma ^2\) is the variance for each salp. To generate the random numbers, the above equation is used with mean \(\mu =0\) and standard deviation \(\sigma =1\). The Gaussian mutation is integrated to cope with the diversity loss during the search process. In our approach, this mutation is used to locally explore the search space around the visited regions of the solution space.
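A minimal sketch of the Gaussian mutation of Eq. (7) with \(\mu =0\) and \(\sigma =1\), assuming NumPy's normal generator (function name illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mutation(x):
    """Eq. (7): multiplicative Gaussian perturbation with mu = 0, sigma = 1."""
    return x * (1 + rng.normal(0.0, 1.0, size=x.shape))

# usage: mutate one salp
x = np.array([1.0, 2.0, 3.0])
x_mut = gaussian_mutation(x)
```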

3.2 Cauchy-SSA (CSSA)

Similar to other swarm intelligence methods and metaheuristics in general, the SSA tends to fall into sub-optimal solutions when diversity is insufficient and it cannot escape from sub-optimal regions. Therefore, a strategy is needed that occasionally provides large jumps throughout the search process. The Cauchy distribution can help in this situation, as it infrequently generates large values, which provide a large mutation step size [51, 147]. In the Cauchy-SSA, a random number is generated, and if its value allows the mutation based on the mutation rate, then each salp of the swarm is mutated as follows

$$\begin{aligned} \hat{x_i}=x_i\times (1+{\text {Cauchy}} (\delta )) \end{aligned}$$
(9)

where \(x_i\) denotes the ith salp and \({\text {Cauchy}} (\delta )\) is a random number generated using the Cauchy distribution function given by

$$\begin{aligned} y=\frac{1}{\pi } {\text {arctan}}\left( \frac{\alpha -\alpha _0}{\gamma } \right) +\frac{1}{2} \end{aligned}$$
(10)

and the density function (DF) is given by Eq. (11)

$$\begin{aligned} f_{{\text {cauchy}} (0,\gamma )}(\alpha )=\frac{1}{\pi } \frac{\gamma }{\gamma ^2+\alpha ^2} \end{aligned}$$
(11)

where y is a uniformly distributed random number within (0, 1), \(\alpha _0=0\) is the location parameter, and \(\gamma =1\) is the scale parameter [126]. The Cauchy mutation has a higher chance of making longer jumps than the Gaussian mutation.
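The Cauchy mutation of Eq. (9) can be sketched by inverting the CDF of Eq. (10), which gives \(\alpha =\alpha _0+\gamma \tan (\pi (y-1/2))\) for uniform y; the defaults follow the text (\(\alpha _0=0\), \(\gamma =1\)) and the function name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_mutation(x, alpha0=0.0, gamma=1.0):
    """Eq. (9): multiplicative Cauchy perturbation.

    The Cauchy deviate comes from inverting the CDF of Eq. (10):
    alpha = alpha0 + gamma * tan(pi * (y - 0.5)), y uniform in (0, 1).
    """
    y = rng.random(x.shape)
    cauchy = alpha0 + gamma * np.tan(np.pi * (y - 0.5))
    return x * (1 + cauchy)

# usage: mutate one salp; heavy tails occasionally produce very large jumps
x_mut = cauchy_mutation(np.array([1.0, 2.0, 3.0]))
```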

3.3 Levy-SSA (LSSA)

In this section, the levy-mutation [82, 151] is used to improve salp diversity in the SSA. The levy-mutation can handle the global search more effectively by mutating the salps when the mutation rate allows. Each salp in the levy-SSA is mutated as follows

$$\begin{aligned} \hat{x_i}=x_i\times (1+{\text {Levy}} (\delta )) \end{aligned}$$
(12)

where \(x_i\) denotes the ith salp and \({\text {Levy}}(\delta )\) is a random number generated using the levy distribution function. A simplified version of the levy distribution is defined by Eq. (13)

$$\begin{aligned} {\text {Levy}}(\beta )\sim y=t^{-\beta -1}, \quad 0<\beta \le 2 \end{aligned}$$
(13)

where \(\beta \) is the stability index. A levy-distributed random number can be obtained using

$$\begin{aligned} {\text {Levy}}(\beta )\sim \frac{\psi \times u}{|v|^{1/\beta }} \end{aligned}$$
(14)

where u and v are standard normally distributed random numbers. The value of \(\psi \) is defined by Eq. (15)

$$\begin{aligned} \psi =\left[ \frac{\varGamma (1+\beta ) {\text {sin}}(\pi \beta /2)}{\varGamma \left( \frac{1+\beta }{2}\right) \times \beta \times 2^{(\beta -1)/2}} \right] ^{1/\beta } \end{aligned}$$
(15)

The value of \(\beta \) is fixed to 1.5. The levy-mutation generates diverse offspring salps because the distribution is long-tailed. This feature is helpful for jumping out of sub-optimal regions when stagnation occurs during the search.
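The levy-distributed step of Eqs. (14) and (15) with \(\beta =1.5\) can be sketched with Mantegna's algorithm; the function names are illustrative:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def levy_step(size, beta=1.5):
    """Mantegna's algorithm (Eqs. 14-15) for levy-distributed step sizes."""
    psi = (gamma(1 + beta) * sin(pi * beta / 2)
           / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, 1.0, size)        # standard normal numerator
    v = rng.normal(0.0, 1.0, size)        # standard normal denominator
    return psi * u / np.abs(v) ** (1 / beta)

def levy_mutation(x, beta=1.5):
    """Eq. (12): multiplicative levy perturbation of a salp."""
    return x * (1 + levy_step(x.shape, beta))

# usage: mutate one salp
x_mut = levy_mutation(np.array([1.0, 2.0, 3.0]))
```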

A general framework of all the mutation-based SSA is presented in Algorithm 2.


3.4 Computational complexity

To determine the computational complexity of the proposed mutation-based SSA, the following components are considered: initialization of the salp swarm, fitness evaluation of each salp, position update of the leading salps, position update of the follower salps, fitness evaluation of the updated salps, greedy search, mutation scheme, and memorization of the elite salp. The complexity of swarm initialization is \(O(N\times D)\), and the fitness evaluation of the salps takes O(N) computational effort. The complexity of the position update of the leading and follower salps is \(O(N\times D)\); the fitness evaluation of the updated salps takes O(N), the greedy search and the mutation scheme each take O(N), and the memorization of the elite salp takes O(N). Hence, summing over T iterations, the total complexity of the proposed mutation-based SSA is \(O(N\times D\times T)\), the same as the classical SSA.

4 Experimental results and validation of proposed mutation-based variants

In this section, the proposed mutation-based SSA variants are evaluated and tested in three phases. In the first phase, a comparison is conducted among the classical SSA and all mutation-based variants, and the best-performing variant is selected. In the second phase, this best-performing variant is compared with some state-of-the-art optimization methods. In the third and last phase, all the SSA variants, the classical SSA, and the other state-of-the-art algorithms are used to solve some real engineering test cases. The benchmark comparison is conducted on a set of 23 scalable benchmark problems, which are provided in Table 1; their source is [54, 55, 81]. We follow the same testing methodology for all the compared methods [112,113,114]. This is an accepted way to ensure that no method takes a competitive advantage from the user's system and conditions [85, 94, 154, 166, 167]. The Best, average (Avg.), and standard deviation (Std.) of the objective function values obtained by the different mutation-based SSA versions are used to evaluate their potential. The best results are highlighted in bold. Furthermore, the non-parametric Wilcoxon signed-rank test [45] at the 0.05 significance level is employed to investigate whether the achieved results are significantly better. These statistical results are represented by the symbols "\(+/-/=\)" in Table 4, indicating that our proposed method is superior, worse, or statistically equivalent to its competitive optimization method, respectively.

Table 1 Description of unimodal benchmark functions

4.1 Comparison among classical SSA, GSSA, CSSA, and LSSA

This section compares the classical SSA with the mutation-based SSA versions GSSA, CSSA, and LSSA on the set of 23 benchmarks given in Table 1. These test problems are also used by many other numerical optimization methods. Furthermore, a comparison among the GSSA, CSSA, and LSSA is conducted to select the best-performing variant. In these experiments, the dimension of the test functions is set to 30 and 100. The swarm size is set to 30, and the maximum iterations and maximum function evaluations are set to 500 and 15,000, respectively. As can be observed from Tables 2 and 3, the proposed mutation-based SSA variants outperform the classical SSA on approximately 92% of the problems at dimension 30, with the same result on one remaining problem and a worse result on only one problem (F7). On the 100-dimensional problems, the mutation-based SSA variants outperform the classical SSA on 100% of the problems. It is noticed that the GSSA has the smallest Avg. objective function value among the mutation variants on 20 problems out of 23 for both dimensions 30 and 100. In addition, the GSSA provides a near-optimal solution on most of the test problems. Hence, based on this comparison, the GSSA is selected for further comparison with other swarm-intelligence-based methods. To demonstrate the superiority of the proposed GSSA in terms of convergence rate, the convergence curves are plotted in Figs. 1 and 2. In these figures, the Avg. value of the best objective function obtained over 30 trials is shown and compared for the classical SSA, GSSA, CSSA, and LSSA. From these figures, it can be seen that according to the convergence rate, the GSSA takes first place, followed successively by the CSSA, LSSA, and classical SSA. The fast convergence rate results from the mutation scheme and the greedy search approach applied in the proposed method.
Hence, from the curves, it can be seen that the mutation scheme has improved the convergence rate of the proposed method, with the Gaussian mutation rule being the most effective. Furthermore, the Wilcoxon signed-rank test is utilized to determine whether the GSSA performs better than the other mutation variants. The results obtained by employing this test between methods A and B (A/B) are listed with the symbols "\(+/-/=\)" to indicate that A is significantly better, worse, or equal to its competitive method. All the results are listed in Table 4 for dimensions 30 and 100. From this table, it is found that the GSSA outperforms the classical SSA on 20 problems for 30 dimensions and 21 problems for 100 dimensions. Compared with the CSSA and LSSA, the GSSA is excellent at providing significantly better or statistically equivalent results; it is clear from the table that the GSSA is not worse even on a single problem. Moreover, out of the 23 test problems, the CSSA outperforms the LSSA on 15 problems, is statistically equivalent on seven problems, and is worse on only one problem.

Table 2 Optimization results on 23 problems for the dim 30
Table 3 Optimization results on 23 problems for the dim 100
Table 4 Statistical comparison through Wilcoxon signed rank test for 23 benchmark problems
Fig. 1

Convergence curves for benchmark problems

Fig. 2

Convergence curves for benchmark problems

4.2 Comparison with other metaheuristic methods

The comparison conducted above illustrates the superior solution accuracy of the GSSA among the classical SSA and the other mutation-based variants. In this section, the same set of benchmark problems with dimension 100 is used to compare the results of the GSSA with other metaheuristic methods under the same parameter environment (population size and function evaluations) as in the previous section. The metaheuristic methods used for comparison are: Firefly Algorithm (FA) [151], Grey Wolf Optimizer (GWO) [91], Moth-flame Optimizer (MFO) [89], Sine Cosine Algorithm (SCA) [90], Teaching-learning-based Optimization (TLBO) [103], hybrid SSA with SCA (mod-SSA) [115], and Improved Salp Swarm Algorithm (ISSA) [56]. Each of these algorithms is independently run 30 times on the benchmark set, and the simulated results in terms of Best, Avg., and Std. are presented in Table 5. To validate that the GSSA outperforms the other metaheuristic algorithms, the non-parametric Wilcoxon signed-rank test is used at the 0.05 significance level. These statistical results are presented in Table 6 with p-values and the symbols "\(+/-/\approx \)" to indicate that the GSSA is significantly superior, inferior, or statistically equivalent to its competitive method.

The table indicates that the proposed GSSA outperforms FA, GWO, MFO, SCA, TLBO, mod-SSA, and ISSA on 22, 19, 21, 22, 14, 19, and 18 problems, and is inferior to them on 0, 3, 1, 0, 5, 2, and 3 problems, respectively. Thus, the results show that the proposed Gaussian mutation-based SSA (GSSA) is superior in solution accuracy to its competitive metaheuristic methods.

Table 5 Comparison with other metaheuristic methods on dim 100
Table 6 Statistical comparison through Wilcoxon signed rank test for 23 benchmark problems

4.3 Application of proposed GSSA on engineering design problems

In this subsection, the proposed GSSA is applied to optimize three constrained engineering design cases: the three-bar truss design, the tension-compression spring design, and the speed reducer design problem. These optimization cases contain inequality and equality constraints [182], so a constraint-handling method must be employed in the GSSA. Methods based on adding a penalty to the objective function to construct a fitness function can be used in such situations. In this study, the death penalty method, one of the most popular and simplest ways to deal with constraints, is used [34]. In this approach, the algorithm automatically discards infeasible solutions, which has the advantages of low computational cost and simple implementation. However, this approach does not exploit the information carried by infeasible solutions, which may be useful for problems whose solution space is dominated by infeasible regions. To verify its efficacy, the GSSA is combined with the death penalty approach to solve the constrained engineering cases.
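A minimal sketch of the death penalty approach described above; the wrapper name and the toy constraint are illustrative, not the paper's exact implementation:

```python
import numpy as np

def death_penalty(obj, constraints):
    """Wrap an objective so infeasible solutions receive an 'infinite' fitness
    and are therefore automatically discarded by any greedy selection."""
    def fitness(x):
        # a solution is infeasible if any constraint g_i(x) <= 0 is violated
        if any(g(x) > 0 for g in constraints):
            return np.inf
        return obj(x)
    return fitness

# usage: minimize x0 + x1 subject to g(x) = 1 - x0 - x1 <= 0
f = death_penalty(lambda x: x[0] + x[1], [lambda x: 1 - x[0] - x[1]])
```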

4.3.1 Three-bar truss design

This design case was first introduced by Nowacki [95]. The objective is to minimize the volume of a statically loaded 3-bar truss, expressed through the cross-sectional areas as \(f(A_1,A_2)\), with restrictions in the form of stress constraints on each truss member. The mathematical formulation of the problem is given as follows:

$$\begin{aligned} {\text {Minimize}} \quad f(A_1,A_2)= (2 \sqrt{2}\times A_1+A_2)\times l \end{aligned}$$
(16)

subject to

$$\begin{aligned} g_1(A_1,A_2)&=\frac{\sqrt{2}A_1+A_2}{2A_1A_2+\sqrt{2}A_1^2}\times P-\sigma \le 0 \end{aligned}$$
(17)
$$\begin{aligned} g_2(A_1,A_2)&=\frac{A_2}{2A_1A_2+\sqrt{2}A_1^2}\times P-\sigma \le 0 \end{aligned}$$
(18)
$$\begin{aligned} g_3(A_1,A_2)&=\frac{1}{A_1+\sqrt{2}A_2}\times P-\sigma \le 0 \end{aligned}$$
(19)

where \(0 \le A_1,A_2 \le 1\), \(l=100\) cm, \(P=\sigma =2\) kN/cm\(^2\).
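For illustration, the objective and constraints of Eqs. (16)-(19) can be coded directly; the design values below are the near-optimal ones commonly reported in the literature (approximately \(A_1 \approx 0.7887\), \(A_2 \approx 0.4082\)), used here only to check the formulation:

```python
import numpy as np

# Three-bar truss (Eqs. 16-19) with l = 100 cm and P = sigma = 2 kN/cm^2
l, P, sigma = 100.0, 2.0, 2.0

def volume(A):
    A1, A2 = A
    return (2 * np.sqrt(2) * A1 + A2) * l            # Eq. (16)

def constraints(A):
    A1, A2 = A
    g1 = (np.sqrt(2) * A1 + A2) / (2 * A1 * A2 + np.sqrt(2) * A1 ** 2) * P - sigma
    g2 = A2 / (2 * A1 * A2 + np.sqrt(2) * A1 ** 2) * P - sigma
    g3 = 1 / (A1 + np.sqrt(2) * A2) * P - sigma
    return np.array([g1, g2, g3])                    # feasible when all entries <= 0

# near-optimal design reported in the literature
A = np.array([0.7887, 0.4082])
```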

The problem formulation shows that this is a non-linear optimization problem with continuous decision variables. This problem has been solved by [105] and [121], and was also attempted in the study of [97]. Furthermore, the cuckoo search metaheuristic [43] has been adopted to solve this problem, [152] have used the bat algorithm (BA), and [50] have used the classical sine cosine algorithm (SCA) and their modified SCA (m-SCA). In this paper, the GSSA is applied to this problem using a swarm of 25 salps and 1000 iterations. Numerical results of all the optimization methods are presented in Table 7. The table indicates that the GSSA provides the best results among all the methods. This study also verifies that the proposed GSSA can deal with the constraints of this optimization case effectively.

Table 7 Comparison results of the GSSA for the three-bar truss design problem

4.3.2 Tension-compression spring design

Another engineering optimization case is the tension-compression spring design. In this problem, the optimization task is to minimize the weight of a spring. Three decision variables are involved: the wire diameter d, the mean coil diameter D, and the number of active coils N. The mathematical description of this case is as follows:

$$\begin{aligned} {\text {Minimize}} \quad f(d,D,N)=(N+2)d^2D \end{aligned}$$
(20)

subject to

$$\begin{aligned} g_1(d,D,N)&=1-\frac{D^3N}{71785d^4}\le 0 \end{aligned}$$
(21)
$$\begin{aligned} g_2(d,D,N)&=\frac{4D^2-dD}{12566(d^3D-d^4)}+\frac{1}{5108d^2}\le 0 \end{aligned}$$
(22)
$$\begin{aligned} g_3(d,D,N)&=1-\frac{140.45d}{D^3N}\le 0 \end{aligned}$$
(23)
$$\begin{aligned} g_4(d,D,N)&=\frac{d+D}{1.5}-1\le 0 \end{aligned}$$
(24)

where

\(0.05 \le d \le 2.0\), \(0.25 \le D \le 1.30\), \(2.0 \le N \le 15.0\).

In previous studies, several metaheuristic methods have been utilized to solve this optimization case. In our experiments, we performed 15,000 function evaluations with a swarm of 25 salps, and the results obtained by the proposed GSSA are presented in Table 8. In this table, the results of various other methods such as GSA [104], ES [87], GA [33], mathematical optimization [11], and constraint correction [10] are presented for comparison with the GSSA. The table indicates that the GSSA provides better results than the other compared methods.

Table 8 Comparison results of the GSSA for tension-compression spring design problem

4.3.3 Speed reducer design

In this subsection, the proposed GSSA is applied to the optimization task of designing a speed reducer, where the weight of the speed reducer is minimized. This is a structural optimization problem with the decision variables: the module of teeth m, the face width b, the number of teeth on the pinion z, the lengths of shaft-I and shaft-II between bearings, \(l_1\) and \(l_2\), respectively, and the diameters of shaft-I and shaft-II, \(d_1\) and \(d_2\), respectively. The constraints in this problem are imposed on the bending stress of the gear teeth, the transverse deflections of shafts I and II due to the transmitted force, the surface stress, and the stresses in shaft-I and shaft-II. In mathematical form, the problem is stated as follows:

$$\begin{aligned}&{\text {Minimize}} \quad f(b,m,z,l_1,l_2,d_1,d_2)=0.7854bm^2(14.9334z+3.3333z^2-43.0934)\nonumber \\&\quad -1.508b(d_1^2+d_2^2)+0.7854(l_1d_1^2+l_2d_2^2)+7.477(d_1^3+d_2^3) \end{aligned}$$
(25)

subject to

$$\begin{aligned} g_1&=\frac{27}{bm^2z}-1\le 0 \end{aligned}$$
(26)
$$\begin{aligned} g_2&=\frac{397.50}{bm^2z^2}-1\le 0 \end{aligned}$$
(27)
$$\begin{aligned} g_3&=\frac{1.93l_1^3}{mzd_1^4}-1\le 0 \end{aligned}$$
(28)
$$\begin{aligned} g_4&=\frac{1.93l_2^3}{mzd_2^4}-1\le 0 \end{aligned}$$
(29)
$$\begin{aligned} g_5&=\frac{\sqrt{16.9\times 10^6 + (745l_1/mz)^2}}{110d_1^3}-1\le 0 \end{aligned}$$
(30)
$$\begin{aligned} g_6&=\frac{\sqrt{157.50\times 10^6 + (745l_2/mz)^2}}{85d_2^3}-1\le 0 \end{aligned}$$
(31)
$$\begin{aligned} g_7&=\frac{mz}{40}-1\le 0 \end{aligned}$$
(32)
$$\begin{aligned} g_8&=\frac{5m}{b}-1\le 0 \end{aligned}$$
(33)
$$\begin{aligned} g_9&=\frac{b}{12m}-1\le 0 \end{aligned}$$
(34)

where

\(2.60 \le b \le 3.60\), \(0.70\le m \le 0.80\), \(17 \le z \le 28\)

\(7.30 \le l_1 \le 8.30\), \(7.80 \le l_2 \le 8.30\)

\(2.90 \le d_1 \le 3.90\), \(5.00 \le d_2 \le 5.50\)

In our study, we fixed the swarm size to 50 and the iterations to 5,000 to obtain the solution of this problem. In the previous literature, many studies [5, 63, 69, 88, 105] have performed this optimization task, and [43] applied the CS to solve it. The simulated results obtained by the GSSA on this problem are presented in Table 9, which indicates that the GSSA produces superior results compared to the other methods.

Table 9 Comparison results of the GSSA for speed reducer design problem

5 Conclusions and future directions

In this study, the performance of the recently proposed salp swarm algorithm is enhanced using new search rules based on greedy search and mutation strategies, specifically to improve the exploitation feature and to balance exploration and exploitation in the algorithm. Further, the swarm diversity is managed by three different mutation rules: Gaussian, Cauchy, and levy. Based on these mutation rules, the variants GSSA, CSSA, and LSSA are proposed. These rules significantly improve the SSA's exploitation and exploration abilities. Experimental results show that these rules are fruitful in preventing local optima and providing faster convergence. For evaluating these variants and choosing the best mutation rule for the SSA, 23 benchmark test problems with dimensions 30 and 100 are utilized. Convergence analysis and statistical tests confirm that the GSSA is the best variant for solving global optimization problems. Furthermore, the best-chosen method, the GSSA, is used to solve some engineering design optimization cases. The results and comparison indicate the superiority of the GSSA over other studies that have addressed these engineering cases.

There are many future domains for the suggested SSA-based optimizers, given the many areas that require optimization and solution finding in general. We have initiated research works to employ the mutation-based SSA for sensor networks [40, 41], structural health assessment [32], optimization features of solar systems [129, 132], GIS [122, 179, 180, 183], and a set of industrial tasks in power engineering [65, 106]. As the proposed method has a more stable exploratory basis, we suggest applying it to the data modeling of location-based services (LBS) [72], landslide prediction and forecasting [133], prediction methods [68, 101], and modeling in environmental scenarios [23, 73, 168]. In the future, applying the proposed mutation-based variants to more practical engineering cases, such as multiobjective and binary optimization, is a worthwhile research direction. The proposed method and its resulting variants can be a suitable tool for operation in intelligent systems and medical diagnosis cases [24, 64, 70, 76, 127, 141, 174]. The proposed mutation-based variants can also be hybridized with other metaheuristic methods to propose new, better optimization methods, and the mutation-based SSA can also be used in other areas, such as optimizing the structure and weights of machine learning models.