1 Introduction

The last few decades have witnessed the introduction of many robust population-based meta-heuristics, such as swarm intelligence and evolutionary algorithms, for finding the best possible solutions to many real-life applications. Although such algorithms have been useful in tackling many real-life problems, an extensive literature review reveals that no single algorithm works well in all applications. Consequently, researchers have been developing new, modified and hybrid techniques to resolve and eliminate many of the disadvantages of existing algorithms, such as: differential evolution (DE) [1, 2], genetic algorithm (GA) [3, 4], hybrid genetic algorithm (HGA) [5], biogeography-based optimization algorithm (BBO) [6], gravitational search algorithm (GSA) [7], particle swarm optimization (PSO) [8], Tabu search (TS) [9], harmony search algorithm (HSA) [10], artificial neural network (ANN) [11], dragonfly algorithm (DA) [12], grey wolf optimization (GWO) [13], krill herd algorithm (KHA) [14], black-hole-based optimization (BHBO) [15], robust optimization (RO) [16], ant lion optimizer (ALO) [17], one-half personal best position particle swarm optimization (OHGBPPSO) [18], EPO [19], SHO [20], EGWO [21], AGWO [22], PSOGWO [23], MFOA [79], RSO [80] and many other algorithms [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38].

Particle swarm optimization is a swarm intelligence approach inspired by the social behavior of animals, such as bird flocking and fish schooling [39, 40]. As a stochastic search method, the particle swarm optimization algorithm is characterized by rapid convergence and simple computation. The approach has been applied to various industrial problems, such as modeling [41,42,43], prediction [41], parameter learning of neural networks [44, 45], power systems [46,47,48,49,50], control [51], and parameter optimization of fuzzy systems [52].

Over the years, researchers have developed several hybrid techniques to enhance the balance between exploration and exploitation of meta-heuristics and to obtain higher-quality solutions to real-life applications. For instance, Ali and Tawhid [53] developed a hybrid approach by integrating PSO and GA to minimize a simplified model of the molecular energy problem. The advantages of their approach were demonstrated using three different mechanisms, and its accuracy was verified on the potential energy function and thirteen unconstrained global optimization functions. Simulation results revealed that the method is efficient and promising. Similarly, Yu et al. [37] presented a hybrid approach called HPSO, in which they integrated space transformation search with a new modified velocity model. Simulation results on eight standard problems demonstrated that the hybrid particle swarm optimization achieves superior accuracy on standard functions. Additionally, Mao et al. [54] proposed a hybrid method integrating DE and PSO, known as DEMPSO, to find the global optimal solution of the nonlinear forward kinematics model. Five configurations with distinct positions and orientations were used to demonstrate the efficiency of the variant in finding the global optimal solution of the kinematic function of parallel manipulators. The experimental comparison with other meta-heuristics also revealed that the hybrid approach provides superior performance regarding global search properties and convergence rate.

Researchers in different fields have found best optimal solutions to several real-life applications, such as environmental/economic dispatch [55], cost-based feature selection in classification [56], feature selection on high-dimensional data [57], dynamic economic dispatch with valve-point effect [58], robot path planning in uncertain environments [59] and many others.

In our previous research work, we presented several modified and hybrid nature-inspired approaches, applied them to several real-life applications, and tried to improve and extend the working strength of the proposed meta-heuristics [18, 31, 33, 35, 36, 23, 60]. In this research, we extend our previous work by proposing a new hybrid algorithm that employs SSA and PSO to benefit from their exploitation and exploration. The proposed method aggregates the advantages of SSA and PSO with an integrated feedback mechanism to achieve a better optimum. In particular, PSO is used to enhance the exploitation and exploration of SSA. As a result, the proposed method balances local and global search abilities to guarantee better convergence. The major contributions of this research work can be summarized as follows:

  • The proposed methodology enhances the balance between exploration and exploitation through a feedback mechanism.

  • The search tendency is redefined to improve the capability of finding the best optimal solution in the search space during the search process.

  • Under this methodology, the proposed strategy quickly converges toward the global optimum in the search space during the search process.

The experimental results reveal that the proposed approach possesses a superior capability of finding the global optimum compared with other meta-heuristics. The performance of the approach is demonstrated by the results obtained on the standard CEC 2005 and CEC 2017 test suites and on real-life functions, in comparison to other algorithms.

The rest of the article is structured as follows. Section 2 describes related research covering the main aspects of the salp swarm algorithm and the particle swarm optimization algorithm; Sect. 3 presents the hybrid SSA–PSO algorithm; the numerical and statistical experiments are described in Sect. 4; in Sect. 5, experiments and results are presented. The engineering application is described in Sect. 6, and Sect. 7 provides the conclusions.

2 Related work

In this section, we overview existing research on different aspects of the two main algorithms employed in this work: the salp swarm algorithm and the particle swarm optimization algorithm. This is done by presenting existing studies from the literature and showing their relatedness to the current approach.

In 2017, Mirjalili et al. [61] developed a nature-inspired approach, known as the salp swarm algorithm, that mimics the special swarming behavior of salps in oceans. Salps usually live in groups and often form a swarm called a salp chain. The first salp is denoted as the leader, while the others are followers. The position of the leader is updated using the following equation:

$$x_{j}^{1} = \left\{ {\begin{array}{*{20}c} {F_{j} + c_{1} \left( {\left( {ub_{j} - lb_{j} } \right)c_{2} + lb_{j} } \right)} & {c_{3} \ge 0.5} \\ {F_{j} - c_{1} \left( {\left( {ub_{j} - lb_{j} } \right)c_{2} + lb_{j} } \right)} & {c_{3} < 0.5} \\ \end{array} } \right.$$
(1)

where \(x_{j}^{1}\) is the position of the leader in the \(j{\text{th}}\) dimension, \(ub_{j}\) and \(lb_{j}\) are the upper and lower bounds of the \(j{\text{th}}\) dimension, \(F_{j}\) is the position of the food source in the \(j{\text{th}}\) dimension, \(c_{2}\) and \(c_{3}\) are two random numbers in the range \(\left[ {0,1} \right]\), and \(c_{1}\) is the most important parameter of this algorithm; it gradually decreases over the course of generations to allow high exploration in the early stages of the optimization process and high exploitation in the last steps:

$$c_{1} = 2e^{{ - \left( {\frac{4l}{L}} \right)^{2} }}$$
(2)

where \(L\) represents the maximum number of iterations and \(l\) indicates the current iteration. The followers' positions are updated using the following equation:

$$x_{j}^{i} = \frac{1}{2}\left( {x_{j}^{i} + x_{j}^{i - 1} } \right)$$
(3)

where \(x_{j}^{i}\) indicates the position of the \(i{\text{th}}\) follower at the \(j{\text{th}}\) dimension and \(i \ge 2\).
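For illustration, the leader and follower updates of Eqs. (1)–(3) can be sketched in Python as follows. This is a minimal sketch; the array layout of the population and the clipping of salps back into the bounds are our own assumptions, not part of the original SSA description.

```python
import numpy as np

def ssa_step(positions, food, lb, ub, l, L):
    """One iteration of the salp-chain update, Eqs. (1)-(3).

    positions : (n, dim) array of salp positions; row 0 is the leader.
    food      : (dim,) food source F, i.e. the best solution found so far.
    lb, ub    : (dim,) lower and upper bounds of the search space.
    l, L      : current iteration and maximum number of iterations.
    """
    n, dim = positions.shape
    c1 = 2.0 * np.exp(-(4.0 * l / L) ** 2)          # Eq. (2): decreasing coefficient
    for j in range(dim):                            # leader update, Eq. (1)
        c2, c3 = np.random.rand(), np.random.rand()
        step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
        positions[0, j] = food[j] + step if c3 >= 0.5 else food[j] - step
    for i in range(1, n):                           # follower update, Eq. (3)
        positions[i] = 0.5 * (positions[i] + positions[i - 1])
    return np.clip(positions, lb, ub)               # keep salps inside the bounds
```

Note how the follower loop uses the already-updated position of salp \(i-1\), so the chain propagates the leader's movement down the swarm within a single iteration.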

figure a

Besides the SSA, the original version of the PSO approach was proposed by Kennedy et al. [39]. Their algorithm emulates the flocking behavior of birds to solve global optimization functions. In this approach, each candidate solution is considered an agent (particle). All candidates have their own velocities and fitness values. These agents fly through the D-dimensional search area by learning from the historical information of the swarm. Each agent adjusts its velocity and position according to Eqs. (4) and (5):

$$v_{i}^{k + 1} = w \times v_{i}^{k} + c_{4} r_{1} \left( {p_{i}^{k} - x_{i}^{k} } \right) + c_{5} r_{2} \left( {g_{best} - x_{i}^{k} } \right)$$
(4)
$$x_{i}^{k + 1} = x_{i}^{k} + v_{i}^{k + 1}$$
(5)

where \(v_{i}^{k}\) indicates the velocity of particle \(i \in \left[ {1,2, \ldots ,n_{x} } \right]\) at time step \(k\), \(w\) is the inertia weight, \(p_{i}^{k}\) is the personal best position of the agent, \(g_{best}\) is the best position found by the neighborhood, and \(x_{i}^{k}\) represents the position of particle \(i\) at time step \(k\). The constants \(c_{4}\) and \(c_{5}\) are acceleration coefficients, and \(r_{1}\) and \(r_{2}\) are random values drawn uniformly from \(\left[ {0,1} \right]\) at each time step \(k\).
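A minimal sketch of the velocity and position updates of Eqs. (4)–(5) follows; the coefficient values and the velocity clamping to \(\pm v_{\hbox{max}}\) are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.9, c4=2.0, c5=2.0, vmax=6.0):
    """One PSO update, Eqs. (4)-(5): new velocity, then new position.

    x, v         : (n, dim) positions and velocities of the particles.
    pbest, gbest : personal bests (n, dim) and global best (dim,).
    """
    r1 = np.random.rand(*x.shape)               # random factors in [0, 1)
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c4 * r1 * (pbest - x) + c5 * r2 * (gbest - x)   # Eq. (4)
    v_new = np.clip(v_new, -vmax, vmax)         # velocity clamping (common practice)
    return x + v_new, v_new                     # Eq. (5)
```

The inertia term \(w v\) preserves the previous search direction, while the two stochastic terms pull the particle toward its own best and the swarm's best, respectively.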

figure b

3 Hybrid SSA–PSO algorithm

No single algorithm is able to solve every type of real-world problem; an algorithm may fail to reach the goal for complex functions due to its own weaknesses in the search space. Therefore, according to the demands of present real-world applications, researchers in different fields are developing modified and hybrid versions that reduce the weaknesses of existing versions, so that the resulting algorithms can deliver the best global optimal solution for complex applications. Following these strategies, we develop a new hybrid algorithm for complex functions in this work.

Although the SSA method reveals competitive accuracy in contrast with other recent meta-heuristics, one of its limitations is that it might get trapped in local optima. Additionally, it is not well suited to highly complex problems or functions, suffering from slow convergence speed, low diversity and premature convergence. To further boost the exploration and exploitation of SSA, a more focused particle swarm optimizer is incorporated into the algorithm to form a new hybrid SSA–PSO algorithm. In order to reduce these weaknesses and to improve the search capability of the SSA method, the merits of the two different methods have been merged into a new approach that we call the HSSAPSO algorithm. The present method has been applied to identify the best score of complex optimization functions. The particle swarm optimization phase operates in the direction of exploring the vector of optimal solutions, while a local search is invoked within the HSSAPSO approach to enhance solution quality. This methodology also helps in quickly reaching the global optimum and avoiding local optima in the search area during the search process. Therefore, with this method we can strengthen the search ability and obtain accurate convergence by accelerating the search. Further details of the hybrid SSAPSO algorithm are given step by step below:

3.1 Step 1: parameter setting

The proposed hybrid approach starts by setting its parameter values, such as the number of search agents (\(s = 30\)), current iteration (\(l = 2\)), maximum iterations (\(L = 50{-}500\)), maximum weight (\(w_{\hbox{max} } = 0.9\)), minimum weight (\(w_{\hbox{min} } = 0.2\)) and maximum velocity (\(v_{\hbox{max} } = 6\)).

3.2 Step 2: initialization

The population is first initialized randomly according to the given problem, where the algorithm assigns a random \(n\)-dimensional vector to the \(i{\text{th}}\) salp; \(X = x_{i}\), \(i = 1,2,3, \ldots ,n\).

3.3 Step 3: evaluation

In this step, each search agent of the population is evaluated according to the quality of its own location. The fitness of each search agent is computed using the objective function, after which each agent moves to a new location in the search space according to its fitness value.

3.4 Step 4: leader position updating

The position of the main search agent (the leader) is updated using Eqs. (1)–(2) during the search process.

3.5 Step 5: velocity initialization

The literature shows that velocity initialization plays an important role in population-based algorithms. With its help, we can reduce the search agents' effort during the search process and avoid wrong positions. While searching for the global optimum, search agents may leave the intended boundaries of the search area, which wastes search effort and can have a bad impact on the accuracy of the algorithm; hence, velocity initialization plays an important role in reaching the best global optimum and avoiding local optima in the search space. The velocity can be initialized in three different ways: to small random values, to random values close to zero, or to zero. Distinct initialization strategies have different impacts on the accuracy of the algorithm.

3.6 Step 6: follower's position updating

The positions of the followers in the search space are updated with the help of the modified Eq. (6). Here, the velocity concept plays a key role in quickly reaching the global optimum and avoiding wrong positions during the search process.

3.7 Step 7: stopping condition

Finally, a stopping condition is applied to the search for the global optimum for all kinds of problems. The evaluation of each search agent and the replacement of the best agent's position are repeated until the stopping criterion is satisfied, for example when the maximum number of iterations is reached or the solution is found. The rest of the operations are the same as in the salp swarm algorithm. Furthermore, the working process of the proposed algorithm is shown in the flowchart of Fig. 1.
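The steps above can be sketched as a single loop. This is a simplified illustration only: the exact modified follower update of Eq. (6) is not reproduced here, so a PSO-style velocity term toward the food source stands in for it, and the linearly decreasing inertia schedule is our own assumption.

```python
import numpy as np

def hssapso(obj, lb, ub, n=30, L=50, wmax=0.9, wmin=0.2, vmax=6.0):
    """Simplified HSSAPSO loop following Steps 1-7 (illustrative sketch)."""
    dim = len(lb)
    x = lb + np.random.rand(n, dim) * (ub - lb)       # Step 2: random initialization
    v = np.zeros((n, dim))                            # Step 5: velocities set to zero
    fit = np.apply_along_axis(obj, 1, x)              # Step 3: evaluation
    food = x[fit.argmin()].copy()                     # best salp becomes the food source
    for l in range(1, L + 1):
        c1 = 2.0 * np.exp(-(4.0 * l / L) ** 2)        # Eq. (2)
        w = wmax - (wmax - wmin) * l / L              # linearly decreasing inertia
        for j in range(dim):                          # Step 4: leader update, Eq. (1)
            c2, c3 = np.random.rand(), np.random.rand()
            step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
            x[0, j] = food[j] + step if c3 >= 0.5 else food[j] - step
        for i in range(1, n):                         # Step 6: followers with velocity term
            v[i] = np.clip(w * v[i] + np.random.rand(dim) * (food - x[i]), -vmax, vmax)
            x[i] = 0.5 * (x[i] + x[i - 1]) + v[i]
        x = np.clip(x, lb, ub)
        fit = np.apply_along_axis(obj, 1, x)
        if fit.min() < obj(food):                     # elitism: keep best-so-far
            food = x[fit.argmin()].copy()
    return food, obj(food)                            # Step 7: stop after L iterations

# usage: minimize the sphere function in 5 dimensions
best, score = hssapso(lambda z: np.sum(z**2), np.full(5, -10.0), np.full(5, 10.0))
```

The elitism test at the end of each iteration is what guarantees the first advantage listed below: the best solution found so far can never be lost, even if the whole population deteriorates.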

Fig. 1
figure 1

Flowchart of Hybrid SSAPSO algorithm

The main structure of the proposed HSSAPSO approach is presented in Algorithm 3.

figure c

The proposed approach plays an important role in locating solutions in the search space during the search process, and its advantages can be described as follows:

  • The proposed method retains the best solution after each iteration, so it can never be lost even if the entire population deteriorates.

  • The proposed algorithm updates the location of each search agent according to the best goal obtained so far; hence the best search agent explores and exploits the search space for an even better goal.

  • HSSAPSO updates the positions of follower agents with the help of the velocity concept. This concept plays an important role in avoiding wrong positions and quickly reaching the global optimum in the search space. It also enhances the exploration–exploitation balance for fast convergence during the search process.

  • Gradual movements of the follower search agents during the search process prevent the HSSAPSO algorithm from easily stagnating in local optima.

  • The main parameter of the SSA algorithm helps HSSAPSO reduce complexity and makes it easy to implement.

4 Numerical and statistical experiments

The numerical and statistical results of the proposed approach are compared with those of the SSA, PSO, MFO, SCA and DA algorithms on twenty-two standard benchmark functions. The HSSAPSO, SSA, PSO, MFO, SCA and DA algorithms are programmed in Matlab 2015. Further details about the experiments are discussed in the following subsections.

4.1 Parameter settings

The literature review shows that researchers have mostly used constants for controlling the speed of the agents and the exploration and exploitation tendency. Different parameter settings have been employed in the search process. The following parameter settings are used in this experiment to enhance the convergence ability of the meta-heuristics (Table 1).

Table 1 Parameter settings

4.2 Benchmark or standard functions

The initial ability of HSSAPSO was verified on unimodal, multimodal and fixed-dimension multimodal functions. Brief information on these test functions is given in Tables 2, 3 and 4, respectively.

Table 2 Unimodal benchmark functions
Table 3 Multimodal benchmark functions
Table 4 Fixed-dimension multimodal benchmark functions

4.3 The convergence performance of HSSAPSO on 100–500 dimensions

In order to verify the effectiveness of the partitioning process and of integrating the standard SSA and PSO algorithms, the general convergence behavior of the proposed HSSAPSO approach and of the other algorithms is presented on the benchmark functions by plotting the function values versus the number of generations, as shown in Fig. 2. In Fig. 2, the red line indicates the standard SSA, while the black line indicates the proposed HSSAPSO variant. These convergence graphs show that the velocity concept plays an important role in helping the search agents avoid wrong positions or directions and quickly reach the global optimum in the search space. The proposed approach can find the solution for the given functions in fewer iterations than the others, saving computation time. The data in Fig. 2 are plotted after d (dim) generations. The convergence results of Fig. 2 reveal that the new approach is superior to the standard SSA and the others, which verifies that the applied partitioning mechanism and the integration of the standard salp swarm strategy with the standard particle swarm optimization algorithm can accelerate the convergence of the hybrid approach.

Fig. 2
figure 2

Convergence performance graphs of meta-heuristics on 100–500 dimensions

5 Experiment and results

In this part, twenty-two standard test problems [26, 61] are used to demonstrate the quality, superiority and accuracy of the proposed hybrid approach, whose optimal solutions are compared with those of the standard SSA, PSO, MFO, SCA and DA algorithms. The benchmark set comprises unimodal, multimodal and fixed-dimension multimodal functions. The description and the convergence graphs of all problems are given in Tables 5, 6 and 7 and Figs. 3, 4 and 5, respectively. The statistical measures recorded are the maximum and minimum objective function values, average, best score and standard deviation. Furthermore, the welded beam design problem is solved by the HSSAPSO algorithm, and the results are reported in the last section of this text. Based on the obtained solutions, the importance and advantages of the proposed hybrid approach are discussed in the following subsections.

Table 5 Results of unimodal benchmark problems on 200 and 500 dimensions
Table 6 Results of multimodal benchmark problems on 200 and 500 dimensions
Table 7 Results of fixed dimension multi-modal benchmark problems on 50–80 dimensions
Fig. 3
figure 3figure 3

HSSAPSO, SSA, PSO, MFO, SCA and DA algorithms convergence curves for unimodal test functions

Fig. 4
figure 4

HSSAPSO, SSA, PSO, MFO, SCA and DA algorithms convergence curves for multi-modal test functions

Fig. 5
figure 5figure 5

HSSAPSO, SSA, PSO, MFO, SCA and DA algorithms convergence curves for fixed dimension multi-modal test functions

5.1 Results on unimodal benchmark test functions

In this subsection, the unimodal functions are used to verify the performance of the algorithms. The numerical results on these functions are given in Table 5, which compares the HSSAPSO, SSA, PSO, MFO, SCA and DA algorithms. It can be easily seen that the hybrid algorithm gives superior and highly effective optimal solutions compared with the others. As previously discussed, the unimodal benchmark functions are suitable for benchmarking the exploitation of the approaches. Therefore, the obtained solutions prove the high exploitation potential of the hybrid algorithm.

5.2 Results on multi-modal benchmark test functions

The optimal solutions of the meta-heuristic approaches on the multimodal problems are discussed in this section to further verify the algorithms' capabilities. The numerical results are given in Table 6. The accuracy and ability of the presented approach are verified in terms of best scores, minimum and maximum objective function values, and statistical experiments on different dimensions. The results in Table 6 prove that the proposed hybrid approach gives the best score values and the best minimum and maximum values on different dimensions.

Furthermore, the results strongly demonstrate that the high exploration of the hybrid approach allows it to explore the search space extensively and identify promising regions.

5.3 Results on fixed dimension multi-modal benchmark test functions

Further, in this subsection, the results of HSSAPSO on the fixed-dimension multimodal problems are discussed, and its convergence performance is compared with the other meta-heuristics in Table 7. The results prove that the hybrid approach is able to deliver the best quality of solutions in the search area of these functions.

Hence, it can be concluded that the HSSAPSO algorithm has better characteristics in terms of the quality and robustness of the optimal solutions.

5.4 Statistical results of the algorithms

In this subsection, the performance of the proposed method is verified by statistical results on the fixed-dimension multimodal, multimodal and unimodal functions. These results are presented in Tables 5, 6 and 7. In order to make a fair comparison of the HSSAPSO algorithm with the SSA, PSO, MFO, SCA and DA algorithms, the average and standard deviation over multiple runs are reported. In Tables 5, 6 and 7, the lowest statistic scores indicate that the HSSAPSO algorithm is the most robust: it reproduces the best solution with the minimum discrepancy and depends less on chance than the others.
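The reported statistics can be reproduced from raw run data as follows; the scores below are hypothetical placeholders, not the paper's results.

```python
import numpy as np

# Hypothetical best scores from 10 independent runs of one algorithm
# on one benchmark function (placeholder data only).
scores = np.array([3.2e-8, 1.1e-8, 5.4e-8, 2.7e-8, 9.9e-9,
                   4.1e-8, 2.2e-8, 7.6e-9, 1.8e-8, 6.3e-8])

stats = {
    "best":  scores.min(),        # minimum objective value over the runs
    "worst": scores.max(),        # maximum objective value over the runs
    "mean":  scores.mean(),       # average, used for the fair comparison
    "std":   scores.std(ddof=1),  # sample standard deviation (robustness)
}
```

A small standard deviation relative to the mean indicates that the algorithm reproduces near-identical solutions across runs, which is the robustness criterion applied to Tables 5, 6 and 7.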

5.5 Convergence graphs

Finally, the convergence behavior of the presented approach and the others is plotted in Figs. 3, 4 and 5. The accuracy of the meta-heuristics was tested on different dimensions of the functions, and each algorithm's code was run several times to ensure a fair comparison. Figures 3, 4 and 5 show that the HSSAPSO algorithm finds the best optimal solution of the problems in the fewest generations, whereas the other algorithms require more iterations to find the solution in the search areas of the functions.

Hence, the graphs prove that the HSSAPSO approach is more suitable and faster on the benchmark functions than the others.

5.6 Comparison of the algorithms on high dimensional (CEC 2017) functions

Due to their high complexity, the CEC test functions [62], designed for global optimization, are used to evaluate the quality of the proposed algorithm in comparison with the others. The accuracy of the algorithms is compared on 100 dimensions with 5000 iterations. The CEC 2017 test suite contains twenty-nine standard functions; function F2 has been excluded because it exhibits unstable behavior, as noted in [62]. These functions fall into four categories: unimodal functions (F1 and F3), multimodal functions (F4–F10), hybrid functions (F11–F20) and composition functions (F21–F30). The multimodal functions are used to evaluate whether the competing meta-heuristics avoid falling into local optima, since these functions have several local minima. Further, the composition functions are used to evaluate the balance between the exploration and exploitation phases.

Table 8 reports the simulation results of the algorithms. Their convergence behavior is compared in Fig. 6; these graphs are plotted over 5000 iterations at 100 dimensions.

Table 8 Results of CEC 2017 on 100 dimensions of algorithms
Fig. 6
figure 6figure 6figure 6figure 6figure 6

Convergence graphs of algorithms on CEC 2017 test functions at 100 dimensions and 5000 iterations

The experimental results of the SSA, PSO, MFO, SCA, EGWO, AGWO, GWOPSO and HSSAPSO algorithms are compared in terms of the minimum objective function value, maximum objective function value, mean value and standard deviation. The simulation results on all functions reveal that the proposed algorithm is superior to the others at finding the best global optimum in the search space. Hence, it is capable of delivering the best solution for complex functions. According to these findings, the following observations can be made.

  • Unimodal functions: these functions have only one global optimum and are therefore used to evaluate the exploitation phase. The obtained solutions show that the proposed algorithm exhibits a superior exploitation capability, outperforming the others.

  • Multimodal functions: these have many local optima, and the number of design variables increases exponentially with the problem size compared to the unimodal functions. These test suites are generally used to assess the exploration ability of meta-heuristics. Here, the obtained solutions show that the proposed algorithm yields a superior exploration capability.

  • Composition and hybrid functions are generally used to assess local optima avoidance and the balance between exploitation and exploration, since they have a huge number of local optima. The obtained simulation results prove that the proposed method is able to strike a superior balance between exploration and exploitation.

  • Hence, the proposed methodology is the most efficient approach across the CEC 2017 test suite, outperforming the others.

5.7 Testing performance of the HSSAPSO algorithm through Wilcoxon signed ranks method

In this subsection, the performance of the HSSAPSO algorithm is confirmed by applying the Wilcoxon signed ranks (WSR) method to the median values of the algorithms (see Tables 9, 10, 11) for a more rigorous assessment [63]. The WSR test is a non-parametric test applied to two distinct samples or data columns to determine the significance of the difference between two samples (or two algorithms). With the help of this test, we can easily identify the better algorithm and locate significant differences in the behavior of two meta-heuristics.

Table 9 Median values of the algorithms on seven uni-modal problems
Table 10 Median values of the algorithms on six multi-modal problems
Table 11 Median values of the algorithms on nine fixed dimension multi-modal problems

After applying the WSR test, we obtain the z value and p value for each algorithm; these values are given in Tables 12, 13 and 14. We then use the standard decision rule to verify the significance for each algorithm: \(p < 0.05\) represents a rejection of the null hypothesis \(H_{0}\), whereas \(p > 0.05\) represents a failure to reject it. Since the p values are less than 0.05, it can be concluded that the HSSAPSO algorithm is significantly superior to the other meta-heuristics; otherwise, the obtained improvements would not be statistically significant.
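The decision rule can be illustrated with SciPy's implementation of the signed-rank test; the median values below are hypothetical placeholders, not the values of Tables 9, 10 and 11.

```python
from scipy.stats import wilcoxon

# Hypothetical median values of two algorithms on ten benchmark functions.
hssapso_medians = [0.001, 0.004, 0.002, 0.010, 0.003, 0.006, 0.002, 0.008, 0.001, 0.005]
other_medians   = [0.020, 0.015, 0.030, 0.012, 0.025, 0.018, 0.022, 0.016, 0.028, 0.011]

# Paired, non-parametric signed-rank test on the two samples.
stat, p = wilcoxon(hssapso_medians, other_medians)
if p < 0.05:
    print("reject H0: the difference between the algorithms is significant")
else:
    print("fail to reject H0: no significant difference")
```

Because the test is paired, each benchmark function contributes one signed difference, so the two samples must list the functions in the same order.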

Table 12 Wilcoxon method for the comparison of solutions in Table 9
Table 13 Wilcoxon method for the comparison of solutions in Table 10
Table 14 Wilcoxon method for the comparison of solutions in Table 11

To sum up, all numerical and statistical results prove that the HSSAPSO algorithm is helpful in enhancing and improving the convergence performance of the salp swarm algorithm in terms of both solution quality and computational effort.

Finally, the results of Tables 12, 13 and 14 illustrate that the proposed method has better characteristics, such as strength of the global optimum found and superiority of the optimal solution, with significant gains in both global exploration and local exploitation. The solutions evaluated with the WSR method prove that the proposed algorithm performs better than recent meta-heuristics. Hence, the results obtained by the proposed algorithm are statistically superior, and this has not happened by chance.

6 Welded beam design function

Welding can be described as a procedure for joining metallic parts by heating them to a suitable temperature, with or without the application of pressure [64]. A welding process that uses heat alone is known as fusion welding. The components involved in this procedure are kept in position while the molten metal is supplied to the joint. The molten metal can come from the parts themselves, called parent metal, or from an external filler metal. The joining surfaces of the parts become plastic under the action of heat; when consolidated, the two parts fuse into a unit.

A beam is a member subjected to loads applied transverse to its long dimension, causing it to bend. A beam usually rests on supports or reactions. A beam supported by smooth surfaces, rollers or pins at its ends is known as a simple beam; such supports provide a reaction to the load but do not create a moment at the reactions. If a beam projects beyond its supports at either end, it is called a simple beam with overhang.

A beam (\(A\)) is to be welded to a rigid member (\(B\)). The welded beam consists of 1010 steel and is to support a force \(P\) of 6000 lb. The beam dimensions should be selected so that the cost of the system is minimized. The structure of the beam is illustrated in Fig. 7, in which the beam is welded to the rigid member and a load is applied at the free end. The beam is to be optimized for minimum cost by varying the weld and member dimensions \(\overrightarrow {y} = \left[ {y_{1} \,y_{2} \,y_{3} \,y_{4} } \right] = \left[ {h\,\,l\,\,t\,\,b} \right]\). The constraints include limits on the end deflection, bending stress, buckling load and shear stress. The variables \(y_{1}\) and \(y_{2}\) are in practice multiples of 0.0625 in., but for this problem they are assumed continuous [65, 66].

Fig. 7
figure 7

Weld Beam Structure

The main objective of this function is to minimize the total material and fabrication cost of a beam that is loaded in bending, as shown in Fig. 8 [67]. The beam dimensions are varied to decrease the total mass (thus reducing fabrication cost); however, the cost of welding is also measured, introducing more complexity to the problem. The problem is subject to constraints on \(l\): shear stress, \(m\): bending stress in the beam, \(n\): buckling load on the bar, \(o\): end deflection of the beam, and side constraints. The function depends on four variables: \(a(y_{1} )\): thickness of the weld \(\left( h \right)\), \(b(y_{2} )\): length of the clamped bar \(\left( l \right)\), \(c(y_{3} )\): height of the bar \(\left( t \right)\) and \(d(y_{4} )\): thickness of the bar \(\left( b \right)\) [67]. The function can be stated mathematically as follows:

$${\text{Consider}}\;\overrightarrow {y} = \left[ {y_{1} \,y_{2} \,y_{3} \,y_{4} } \right] = \left[ {h\,\,l\,\,t\,\,b} \right]$$
(8)
$${\text{Minimize}}:f(Y) = 1.10471y_{1}^{2} y_{2} + 0.04811y_{3} y_{4} \left( {14.0 + y_{2} } \right)$$
(9)
$$r_{1} \left( Y \right) = l\left( Y \right) - l_{\hbox{max} } \le 0,$$
(10)
$$r_{2} \left( Y \right) = m\left( Y \right) - m_{\hbox{max} } \le 0,$$
(11)
$$r_{3} \left( Y \right) = y_{1} - y_{4} \le 0,$$
(12)
$$r_{4} \left( Y \right) = 0.10471y_{1}^{2} + 0.04811y_{3} y_{4} \left( {14.0 + y_{2} } \right) - 5.0 \le 0,$$
(13)
$$r_{5} \left( Y \right) = 0.125 - y_{1} \le 0,$$
(14)
$$r_{6} \left( Y \right) = o\left( Y \right) - o_{\hbox{max} } \le 0,$$
(15)
$$r_{7} \left( Y \right) = n - n\left( Y \right) \le 0,$$
(16)

where \(l\left( Y \right) = \sqrt {\left( {l'} \right)^{2} + 2l'l''\left( {y_{2} /2R} \right) + \left( {l''} \right)^{2} }\), \(l' = n/\left( {\sqrt 2 y_{1} y_{2} } \right)\), \(l'' = MR/J\), \(M = n\left( {L + y_{2} /2} \right)\), \(R = \sqrt {y_{2}^{2} /4 + \left( {\left( {y_{1} + y_{3} } \right)/2} \right)^{2} }\), \(J = 2\left\{ {\sqrt 2 y_{1} y_{2} \left[ {y_{2}^{2} /12 + \left( {\left( {y_{1} + y_{3} } \right)/2} \right)^{2} } \right]} \right\}\), \(m\left( Y \right) = 6nL/\left( {y_{4} y_{3}^{2} } \right)\), \(o\left( Y \right) = 6nL^{3} /\left( {Ey_{3}^{3} y_{4} } \right)\), \(n\left( Y \right) = \left( {4.013E\sqrt {y_{3}^{2} y_{4}^{6} /36} /L^{2} } \right)\left( {1 - \left( {y_{3} /2L} \right)\sqrt {E/4G} } \right)\), \(n = 6000\;{\text{lb}}\), \(L = 14\;{\text{in}}\), \(E = 30 \times 10^{6} \;{\text{psi}}\), \(G = 12 \times 10^{6} \;{\text{psi}}\), \(l_{\hbox{max} } = 13{,}600\;{\text{psi}}\), \(m_{\hbox{max} } = 30{,}000\;{\text{psi}}\), \(o_{\hbox{max} } = 0.25\;{\text{in}}\), \(Y = \left( {y_{1} ,y_{2} ,y_{3} ,y_{4} } \right)\) and \(\left( {0.1,0.1,0.1,0.1} \right) \le Y \le \left( {2,10,10,2} \right)\).
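Under these definitions the objective (9) and constraints (10)–(16) can be evaluated directly. The sketch below is an illustration in Python rather than the authors' MATLAB code; it follows the standard welded-beam formulation, and all helper names are ours.

```python
import math

# Problem constants (from the text): load n, length L, moduli E and G,
# and the allowable shear stress, bending stress and end deflection.
P = 6000.0      # n, lb
L = 14.0        # in
E = 30e6        # psi
G = 12e6        # psi
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def cost(y):
    """Fabrication cost f(Y) of Eq. (9)."""
    y1, y2, y3, y4 = y
    return 1.10471 * y1**2 * y2 + 0.04811 * y3 * y4 * (14.0 + y2)

def constraints(y):
    """Constraints r1..r7 of Eqs. (10)-(16); each value must be <= 0."""
    y1, y2, y3, y4 = y
    tau_p = P / (math.sqrt(2.0) * y1 * y2)                  # l'
    M = P * (L + y2 / 2.0)
    R = math.sqrt(y2**2 / 4.0 + ((y1 + y3) / 2.0)**2)
    J = 2.0 * math.sqrt(2.0) * y1 * y2 * (y2**2 / 12.0 + ((y1 + y3) / 2.0)**2)
    tau_pp = M * R / J                                      # l''
    tau = math.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * y2 / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (y4 * y3**2)                      # m(Y), bending stress
    delta = 6.0 * P * L**3 / (E * y3**3 * y4)               # o(Y), end deflection
    p_c = (4.013 * E * math.sqrt(y3**2 * y4**6 / 36.0) / L**2) * \
          (1.0 - (y3 / (2.0 * L)) * math.sqrt(E / (4.0 * G)))  # n(Y), buckling load
    return [tau - TAU_MAX,
            sigma - SIGMA_MAX,
            y1 - y4,
            0.10471 * y1**2 + 0.04811 * y3 * y4 * (14.0 + y2) - 5.0,
            0.125 - y1,
            delta - DELTA_MAX,
            P - p_c]
```

For a design near the best-known optimum, \(y \approx (0.2057,\,3.4705,\,9.0366,\,0.2057)\), the cost evaluates to roughly 1.72.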

Fig. 8
figure 8

Welded beam design function: a schematic of the weld; b stress distribution evaluated at the optimum design; c displacement distribution at the optimum design

The welded beam design function has been solved by the new HSSAPSO approach in this section. During this study, compatibility with extreme parameter values has been tested with different solvers. Since the results of these algorithms are stochastic, they have been obtained over 20 trials. The results of the proposed algorithm have been compared with the literature results of ABC (Artificial Bee Colony) [68], CPSO (Co-evolutionary PSO) [69], CDE (Co-evolutionary Differential Evolution) [70], IHS (Improved Harmony Search) [71], MFO (Moth Flame Optimizer) [29], AFA (Adaptive Firefly Algorithm) [72], CSS (Charged System Search) [73], LSA-SM (Hybrid Lightning Search Algorithm-Simplex Method) [74] and WCMFO (Water Cycle and Moth-Flame Optimization) [75]. Table 14 reports the obtained solutions in terms of the thickness of the weld, length of the clamped bar, height of the bar, thickness of the bar and minimum fabrication cost of the beam. The thickness of the weld (\(y_{1}\)) remains constant across LSA-SM, CSS, AFA, MFO, IHS and ABC, and is larger for the CPSO and CDE algorithms. The length of the clamped bar (\(y_{2}\)) is larger for LSA-SM, CSS, AFA, MFO, IHS, CDE, CPSO and ABC, while the HSSAPSO algorithm attains the smallest value. The height of the bar (\(y_{3}\)) remains constant in HSSAPSO, LSA-SM, CSS, AFA, MFO, IHS and ABC, while CDE attains the smallest value and CPSO a larger value than the others. The thickness of the bar (\(y_{4}\)) remains constant in all algorithms. Finally, the minimum fabrication cost of the beam is reported in Table 15. These experimental results confirm that the proposed algorithm finds a lower beam cost than the others. Hence, the HSSAPSO approach is more suitable and capable for the welded beam design function than the ABC, CPSO, CDE, IHS, MFO, AFA, CSS, LSA-SM and WCMFO algorithms.

Table 15 Best optimal solutions of the welded beam design function by different algorithms

7 Rolling element bearing

The main objective of this function is to increase the reliability and service life of the bearing [76, 77]. Depending on the operating requirements, several objectives can be proposed for this function, such as longest fatigue life and resistance to contact-fatigue failure, among others. Three critical design parameters are involved: the pitch diameter \(\left( {D_{m} } \right)\), the number of balls \(\left( Z \right)\) and the ball diameter \(\left( {D_{b} } \right)\). In this research, the basic dynamic capacity is taken as the objective to be maximized, and the expression of the basic dynamic capacity for a ball bearing is given as:

$${\text{Max}}\left[ {C_{d} \left( X \right)} \right] = \left\{ {\begin{array}{*{20}l} {f_{c} Z^{2/3} D_{b}^{1.8} } \hfill & {{\text{if}}\;D_{b} \le 25.4\;{\text{mm}}} \hfill \\ {3.647f_{c} Z^{2/3} D_{b}^{1.4} } \hfill & {{\text{if}}\;D_{b} > 25.4\;{\text{mm}}} \hfill \\ \end{array} } \right.$$

Subject to:

$$g_{1} \left( x \right) = \frac{{\phi_{o} }}{{2sin^{ - 1} \left( {D_{b} /D_{m} } \right)}} - Z + 1 \ge 0,$$
(17)
$$g_{2} \left( x \right) = 2D_{b} - K_{Dmin} \left( {D - d} \right) \ge 0$$
(18)
$$g_{3} \left( x \right) = K_{Dmax} \left( {D - d} \right) - 2D_{b} \ge 0$$
(19)
$$g_{4} \left( x \right) = \xi B_{w} - D_{b} \ge 0$$
(20)
$$g_{5} \left( x \right) = D_{m} - 0.5\left( {D + d} \right) \ge 0$$
(21)
$$g_{6} \left( x \right) = \left( {0.5 + e} \right)\left( {D + d} \right) - D_{m} \ge 0$$
(22)
$$g_{7} \left( x \right) = 0.5\left( {D - D_{m} - D_{b} } \right) - \xi D_{b} \ge 0$$
(23)
$$g_{8} \left( x \right) = f_{i} \ge 0.515$$
(24)
$$g_{9} \left( x \right) = f_{o} \ge 0.515$$
(25)

where

$$f_{c} = 37.91\left\{ {1 + \left[ {1.04\left( {\frac{1 - \gamma }{1 + \gamma }} \right)^{1.72} \left( {\frac{{f_{i} \left( {2f_{o} - 1} \right)}}{{f_{0} \left( {2f_{i} - 1} \right)}}} \right)^{0.41} } \right]^{{{\raise0.7ex\hbox{${10}$} \!\mathord{\left/ {\vphantom {{10} 3}}\right.\kern-0pt} \!\lower0.7ex\hbox{$3$}}}} } \right\}^{ - 0.3} \times \left[ {\frac{{\gamma^{0.3} \left( {1 - \gamma } \right)^{1.39} }}{{\left( {1 + \gamma } \right)^{{{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 3}}\right.\kern-0pt} \!\lower0.7ex\hbox{$3$}}}} }}} \right]\left[ {\frac{{2f_{i} }}{{2f_{i} - 1}}} \right]^{0.41}$$
(26)
$$\gamma = \frac{{D_{b} cos\alpha }}{{D_{m} }}$$
(27)
$$f_{i} = \frac{{r_{i} }}{{D_{b} }}$$
(28)
$$\phi_{o} = 2\pi - 2\cos^{ - 1} \frac{{\left\{ {\left( {D - d} \right)/2 - 3\left( {T/4} \right)} \right\}^{2} + \left\{ {D/2 - T/4 - D_{b} } \right\}^{2} - \left\{ {d/2 + T/4} \right\}^{2} }}{{2\left\{ {\left( {D - d} \right)/2 - 3\left( {T/4} \right)} \right\}\left\{ {D/2 - T/4 - D_{b} } \right\}}}$$
(29)
$$T = D - d - 2D_{b}$$
(30)
$${\text{D}} = 160, \quad {\text{d}} = 90,\quad B_{w} = 30$$
(31)
$$0.5\left( {D + d} \right) \le D_{m} \le 0.6\left( {D + d} \right)$$
(32)
$$0.15\left( {D - d} \right) \le D_{b} \le 0.45\left( {D - d} \right)$$
(33)
$$\begin{aligned} & 4 \le Z \le 50;\quad 0.515 \le f_{i} \le 0.6;\quad 0.515 \le f_{o} \le 0.6;\quad 0.4 \le K_{Dmin} \le 0.5;\quad 0.6 \le K_{Dmax} \le 0.7 \\ & 0.3 \le \varepsilon \le 0.4;\quad 0.02 \le e \le 0.1;\quad 0.6 \le \xi \le 0.85. \\ \end{aligned}$$

where \(\left( d \right)\) is the bearing bore, \(\left( D \right)\) is the outside diameter (see Fig. 9), \(\left( {f_{i} = r_{i} /D_{b} } \right)\) is the curvature radius coefficient of the inner raceway groove, \(\left( {f_{o} = r_{o} /D_{b} } \right)\) is the curvature radius coefficient of the outer raceway groove, \(\left( {r_{o} } \right)\) and \(\left( {r_{i} } \right)\) are the outer and inner raceway groove curvature radii (see Fig. 11), \(\left( {K_{\text{Dmax}} } \right)\) is the maximum roller diameter limiter (0.8), \(\left( {K_{\text{Dmin}} } \right)\) is the minimum roller diameter limiter (0.5), \(\left( e \right)\) is the mobility-condition parameter (0.1), \(\left( \varepsilon \right)\) is the outer-ring strength parameter (0.1) and \(\left( {\phi_{o} } \right)\) is the maximum tolerable assembly angle (see Fig. 10), which depends upon the bearing geometry (Fig. 11).
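Equations (26), (27) and the piecewise capacity expression can be sketched as follows. This Python fragment is illustrative only (the function names are ours), and it uses the conventional 25.4 mm threshold for switching between the two \(D_b\) exponents.

```python
import math

def capacity_factor(gamma, f_i, f_o):
    """Capacity factor f_c of Eq. (26)."""
    bracket = (1.04 * ((1.0 - gamma) / (1.0 + gamma))**1.72
               * (f_i * (2.0 * f_o - 1.0) / (f_o * (2.0 * f_i - 1.0)))**0.41)
    return (37.91 * (1.0 + bracket**(10.0 / 3.0))**-0.3
            * (gamma**0.3 * (1.0 - gamma)**1.39 / (1.0 + gamma)**(1.0 / 3.0))
            * (2.0 * f_i / (2.0 * f_i - 1.0))**0.41)

def dynamic_capacity(D_m, D_b, Z, f_i, f_o, alpha=0.0):
    """Basic dynamic capacity C_d, the maximization objective."""
    gamma = D_b * math.cos(alpha) / D_m        # Eq. (27)
    f_c = capacity_factor(gamma, f_i, f_o)
    if D_b <= 25.4:
        return f_c * Z**(2.0 / 3.0) * D_b**1.8
    return 3.647 * f_c * Z**(2.0 / 3.0) * D_b**1.4
```

Evaluating a design near the values typically reported for this problem (e.g. \(D_m \approx 125.7\), \(D_b \approx 21.4\), \(Z = 11\), \(f_i = f_o = 0.515\)) yields a capacity on the order of \(10^{4}\)–\(10^{5}\).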

Fig. 9
figure 9

Rolling Element Bearing macro-geometry

Fig. 10
figure 10

Ball bearing showing the assembly angle

Fig. 11
figure 11

Cut sections of bearing races

For analyzing the performance of such algorithms, usually only three parameters (pitch diameter, ball diameter and number of balls) are considered, for reasons of complexity. In this research, however, five design parameters \(\left( {D_{m} ,D_{b} ,Z,f_{i} ,f_{o} } \right)\) have been applied. In the constraints, the constant values \({\text{K}}_{\text{Dmax}} = 0.8\), \({\text{K}}_{\text{Dmin}} = 0.5\), \(e = 0.1\), \(\varepsilon = 0.1\) and \(\phi_{o} = 4.7124\) rad have been used, which are obtained from considerations of ring strength, ball strength and ball mobility. In the Matlab code of the proposed algorithm, the parameter values used are: number of search agents (30) and maximum iterations (500). The performance of the proposed approach has been compared with the literature results of the SSA, SCA, GWO, GSA, MVO, PSO, EPO, SHO and ESA algorithms [78]. The simulation results of all algorithms are presented in Table 16.

Table 16 Results of rolling element bearing function on different algorithms

The simulation results in Table 16 reveal that the proposed method is able to find the best (maximum) value of \(C_{d} \left( X \right)\) for this function, outperforming the others. On the basis of these solutions, we can say that the proposed method can improve and enhance the life of the bearings.
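The section does not spell out how HSSAPSO handles the inequality constraints of these design problems. A common choice for population-based solvers is a static penalty that leaves feasible designs untouched and penalizes violations. The Python sketch below is our illustration of that idea, with a plain random search standing in for the swarm update and the paper's budget (30 agents, 500 iterations) as defaults; none of this is the authors' code.

```python
import random

def penalized(objective, constraints, y, mu=1e6):
    """Static-penalty fitness: add mu * (violation)^2 for every g(y) > 0,
    where feasibility means g(y) <= 0."""
    violation = sum(max(0.0, g)**2 for g in constraints(y))
    return objective(y) + mu * violation

def random_search(objective, constraints, lb, ub, agents=30, iters=500, seed=1):
    """Toy stand-in for a swarm optimizer: sample `agents` points uniformly
    in the box [lb, ub] each iteration and keep the best penalized fitness."""
    rng = random.Random(seed)
    best, best_f = None, float("inf")
    for _ in range(iters):
        for _ in range(agents):
            y = [rng.uniform(lo, hi) for lo, hi in zip(lb, ub)]
            f = penalized(objective, constraints, y)
            if f < best_f:
                best, best_f = y, f
    return best, best_f
```

Any of the design problems above can be plugged in by supplying its objective, constraint list and variable bounds; a real swarm optimizer would replace the uniform sampling with velocity-driven position updates.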

8 Conclusion and future works

To improve the exploration and exploitation of the algorithm, we have developed a new hybrid approach called HSSAPSO in this article. This approach integrates the advantages of the salp swarm algorithm and the particle swarm optimization algorithm to eliminate disadvantages such as trapping in local optima and unbalanced exploitation. The welded beam design function, the rolling element bearing function, twenty-two CEC 2005 and twenty-nine CEC 2017 functions have been used to verify the performance of the new approach. The results and convergence graphs of these functions prove that the HSSAPSO algorithm is capable and faster on these functions, outperforming recent metaheuristics.

Careful examination of the evidence reveals the following advantages of the HSSAPSO algorithm:

  1. It can powerfully improve the searching capability of the exploration phase by introducing PSO.

  2. It can recognize the accurate position of the target by tuning the HSSAPSO coefficients around it.

  3. It can find superior global optimum solutions in the search space for practical, real-life functions.

  4. It can enhance the convergence accuracy and extend the performance by reducing the computational time.

Future studies will investigate a new method, with the help of statistical models, to accelerate the speed of recent meta-heuristics, and will apply it to solving other constrained nonlinear optimization functions, big-data statistical applications and model-training applications.