Abstract
The salp swarm algorithm (SSA) has shown fast search speed on several challenging problems. Research shows that no single nature-inspired approach suits all applications and functions, nor does any provide the best exploration and exploitation for every function during the search process. Consequently, there have been several research attempts to improve the exploration and exploitation of meta-heuristics by developing new hybrid approaches. This inspired our current research, and we therefore developed a new hybrid approach, the hybrid salp swarm algorithm with particle swarm optimization, for finding high-quality optimal solutions of standard and engineering functions. The hybrid variant integrates the advantages of SSA and PSO to eliminate drawbacks such as trapping in local optima and unbalanced exploitation. We use the velocity phase of the PSO approach within the salp swarm approach to avoid premature convergence in the search space, escape local minima and improve the exploitation tendencies. The new approach has been verified on different dimensions of the given functions. Additionally, the proposed technique has been compared with a wide range of algorithms to confirm its efficiency in solving the standard CEC 2005 and CEC 2017 test suites and engineering problems. The simulation results show that the proposed hybrid approach provides competitive, often superior, results compared to other existing algorithms in the research community.
1 Introduction
The last few decades have witnessed the introduction of many robust population-based meta-heuristics, such as swarm intelligence and evolutionary algorithms, for finding the best possible optimal solutions to many real-life applications. Although such algorithms have been useful in tackling many real-life problems, an extensive literature review reveals that no single algorithm works well in all applications. Consequently, researchers have been developing new, modified and hybrid techniques to resolve and eliminate many of the disadvantages of existing algorithms such as: differential evolution (DE) [1, 2], genetic algorithm (GA) [3, 4], hybrid genetic algorithm (HGA) [5], biogeography based optimization algorithm (BBO) [6], gravitational search algorithm (GSA) [7], particle swarm optimization (PSO) [8], Tabu search (TS) [9], harmony search algorithm (HSA) [10], artificial neural network (ANN) [11], dragonfly algorithm (DA) [12], grey wolf optimization (GWO) [13], krill herd algorithm (KHA) [14], black-hole-based optimization (BHBO) [15], robust optimization (RO) [16], ant lion optimizer (ALO) [17], one half personal best position particle swarm optimization (OHGBPPSO) [18], EPO [19], SHO [20], EGWO [21], AGWO [22], PSOGWO [23], MFOA [79], RSO [80] and many other algorithms [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38].
Particle swarm optimization is a swarm intelligence approach inspired by the social behavior of animals such as bird flocking and fish schooling [39, 40]. As a stochastic search method, the particle swarm optimization algorithm is characterized by rapid convergence and simple computation. The approach has been applied to various industrial problems such as modeling [41,42,43], prediction [41], parameter learning of neural networks [44, 45], power systems [46,47,48,49,50], control [51], and parameter optimization of fuzzy systems [52].
Over the years, researchers have developed several hybrid techniques to enhance the balance between exploration and exploitation of meta-heuristics and to find higher-quality solutions to real-life applications. For instance, Ali and Tawhid [53] developed a hybrid approach integrating PSO and GA to minimize a simplified model of the molecular energy problem. The advantages of the approach were demonstrated using three different mechanisms, and its accuracy was verified on the potential energy function and thirteen unconstrained global optimization functions. Simulation results revealed that the method is efficient and promising. Similarly, Yu et al. [37] presented a hybrid approach called HPSO, which integrates space transformation search with a new modified velocity model. Simulation results on eight standard problems demonstrated that the hybrid particle swarm optimization achieves superior accuracy on standard functions. Additionally, Mao et al. [54] proposed a hybrid method integrating DE and PSO, known as DEMPSO, to find the best possible global optimal solution of the nonlinear forward kinematics model. Five configurations with distinct positions and orientations were used to demonstrate the efficiency of this variant in finding the global optimal solution of the kinematic function of parallel manipulators. The experimental comparison of DEMPSO with other meta-heuristics also revealed that the hybrid approach provides superior performance in terms of global search properties and convergence rate.
Researchers in different fields have found the best optimal solutions of several real-life applications, such as environmental/economic dispatch [55], cost-based feature selection in classification [56], feature selection on high-dimensional data [57], dynamic economic dispatch with valve-point effect [58], robot path planning in uncertain environments [59] and many others.
In our previous research work, we presented several modified and hybrid nature-inspired approaches, applied them to several real-life applications, and tried to improve and extend the working strength of existing meta-heuristics [18, 31, 33, 35, 36, 23, 60]. In this research, we extend our previous work by proposing a new hybrid algorithm employing SSA and PSO to benefit from their exploitation and exploration. The proposed method aggregates the advantages of SSA and PSO with an integrated feedback mechanism to achieve a better optimum. In fact, PSO is used to enhance the exploitation and exploration of SSA. As a result, the new method balances local and global search abilities to guarantee better convergence. The major contributions of this work are as follows:
-
The proposed methodology enhances the balance between exploration and exploitation through a feedback mechanism.
-
The search tendency is redefined to improve the capability of finding the best optimal solution in the search space during the search process.
-
Under this methodology, the proposed strategy quickly locates the global optimum in the search space during the search process.
The experimental results reveal that the new approach possesses a superior capability of finding the global optimum compared to other meta-heuristics. The performance of the approach is demonstrated by the results obtained on the standard CEC 2005 and CEC 2017 test suites and real-life functions, in comparison with other algorithms.
The rest of the article is structured as follows. Section 2 describes related research covering the main aspects of the salp swarm algorithm and particle swarm optimization; Sect. 3 presents the hybrid SSA–PSO algorithm; the numerical and statistical experiments are described in Sect. 4; Sect. 5 presents the experiments and results; the engineering application is described in Sect. 6; and Sect. 7 provides the conclusions.
2 Related work
In this section, we review existing research on different aspects of the two main algorithms employed in this work: the salp swarm algorithm and particle swarm optimization. This is done by presenting existing studies from the literature and showing their relatedness to the current approach.
In 2017, Mirjalili et al. [61] developed a nature-inspired approach known as the salp swarm algorithm, which mimics the special swarming behavior of salps in oceans. Salps usually live in groups and often form a swarm called a salp chain. The first salp is denoted as the leader, while the others are followers. The position of the leader is updated using the following mathematical equations:
where \(x_{j}^{1}\) is the position of the leader in the \(j{\text{th}}\) dimension, \(ub_{j}\) and \(lb_{j}\) are the upper and lower bounds of the \(j{\text{th}}\) dimension, \(F_{j}\) is the food source position in that dimension, \(c_{2}\) and \(c_{3}\) are two random numbers in the range \(\left[ {0,1} \right]\), and \(c_{1}\) is the most important variable in this algorithm: it gradually decreases over the course of generations to allow high exploration in the early stages of the optimization process and high exploitation in the last stages.
where \(L\) represents the maximum number of iterations and \(l\) indicates the current iteration. The followers' positions are updated using the following mathematical equation:
where \(x_{j}^{i}\) indicates the position of the \(i{\text{th}}\) follower at the \(j{\text{th}}\) dimension and \(i \ge 2\).
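Since Eqs. (1)–(3) are rendered as images in the source, the following sketch reconstructs the standard SSA updates from Mirjalili et al. [61] in Python; the sign rule on \(c_{3}\) and the bound clipping are conventional choices from the original paper's common implementations, not prescribed by this text.

```python
import numpy as np

def ssa_step(X, F, l, L, lb, ub):
    """One iteration of the standard SSA position updates.

    X      : (n, d) array of salp positions (row 0 is the leader)
    F      : (d,) food source (best solution found so far)
    l, L   : current and maximum iteration
    lb, ub : (d,) lower/upper bounds of the search space
    """
    n, d = X.shape
    # Eq. (2): c1 decays over iterations, shifting from exploration
    # to exploitation
    c1 = 2.0 * np.exp(-(4.0 * l / L) ** 2)
    # Eq. (1): the leader moves around the food source
    for j in range(d):
        c2, c3 = np.random.rand(), np.random.rand()
        step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
        X[0, j] = F[j] + step if c3 >= 0.5 else F[j] - step
    # Eq. (3): each follower moves to the midpoint of itself and
    # its predecessor in the chain
    for i in range(1, n):
        X[i] = 0.5 * (X[i] + X[i - 1])
    return np.clip(X, lb, ub)
```

The midpoint rule in Eq. (3) is what keeps the chain coherent: followers drift gradually toward the leader rather than jumping directly to the food source.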
Besides the SSA, the original version of the PSO approach was proposed by Kennedy et al. [39]. Their algorithm emulates the flocking behavior of birds to solve global optimization functions. In this approach, each candidate solution is considered an agent (particle). All candidates have their own velocities and fitness values. These agents fly through the D-dimensional search space by learning from the historical information of the swarm. Each agent adjusts its velocity and position according to Eqs. (4) and (5):
where \(v_{ij}^{k}\) indicates the velocity of particle \(\,i \in \left[ {1,2, \ldots ,n_{x} } \right]\) in dimension \(j\) at time step \(k\), \(w\) is the inertia constant, \(p_{i}^{k}\) is the personal best position of the agent, \(g_{best}\) is the best position found by the neighborhood, and \(x_{ij}^{k}\) represents the position of particle \(i\) in dimension \(j\) at time step \(k\). The constants \(c_{4} ,c_{5}\) are acceleration coefficients, and \(r_{1j}^{k} ,r_{2j}^{k} \in U\left( {0,1} \right)\) are random values in the range \(\left[ {0,1} \right]\) at time step \(k\).
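The updates in Eqs. (4)–(5) can be sketched as follows; the velocity clamping with \(v_{\hbox{max} }\) is an assumption based on the parameter settings given later in Sect. 3.1, and the coefficient values are illustrative defaults.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.9, c4=2.0, c5=2.0, v_max=6.0):
    """One PSO velocity/position update per Eqs. (4)-(5).

    x, v   : (n, d) positions and velocities
    pbest  : (n, d) personal best positions
    gbest  : (d,) best position found by the swarm
    w, c4, c5 : inertia weight and acceleration coefficients
    """
    n, d = x.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    # Eq. (4): inertia term + cognitive pull toward pbest
    #          + social pull toward gbest
    v = w * v + c4 * r1 * (pbest - x) + c5 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)   # clamp to avoid velocity explosion
    # Eq. (5): move each particle along its new velocity
    return x + v, v
```

The clamping step is the mechanism the hybrid approach later relies on to keep follower salps from overshooting the search-space boundaries.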
3 Hybrid SSA–PSO algorithm
No single algorithm is able to solve every type of real-world problem; an algorithm may fail to find the goal for a complex function because of its own weaknesses in the search space. Therefore, driven by the demands of present real-world applications, researchers in different fields are developing new modified and hybrid versions that reduce the weaknesses of existing versions so that they can provide the best global optimal solution for complex applications. Following these strategies, we have developed a new hybrid algorithm for complex functions.
Although the SSA method reveals good accuracy in contrast with other recent meta-heuristics, one of its limitations is that it might get trapped in local optima. Additionally, it is not well suited to highly difficult problems and suffers from drawbacks such as slow convergence speed, low diversity and premature convergence. To further boost the exploration and exploitation of SSA, a more focused particle swarm optimizer is incorporated into the algorithm to form a new hybrid SSA–PSO algorithm. To reduce these weaknesses and improve the search capability of the SSA method, the merits of the two methods have been merged into a new approach that we call the HSSAPSO algorithm. The method has been applied to identify the best scores of complex optimization functions. The particle swarm optimization phase explores the vector of optimal solutions, while the salp chain acts as a local search mechanism that enhances solution quality. This methodology also helps to quickly locate the global optimal solution and avoid local optima in the search space during the search process. Therefore, with this method we can strengthen the search ability and obtain accurate convergence by accelerating the search. The hybrid SSAPSO algorithm is detailed step by step below:
3.1 Step 1: parameter setting
The hybrid approach starts by setting its constant values: the number of search agents (\(s\) = 30), current iteration (\(l = 2\)), maximum iterations (\(L = 50 - 500\)), maximum weight (\(w_{\hbox{max} } = 0.9\)), minimum weight (\(w_{\hbox{min} } = 0.2\)) and maximum velocity (\(v_{\hbox{max} } = 6\)).
3.2 Step 2: initialization
The swarm is first initialized randomly according to the given problem, where the algorithm assigns a random \(n\)-dimensional vector to the \(i{\text{th}}\) salp: \(X = x_{i} \,\,\,\sim \,\,\,\left( {i = 1,2,3, \ldots ,n} \right)\).
3.3 Step 3: evaluation
In this step, each search agent of the swarm is evaluated according to the quality of its own location. The fitness value of each search agent is computed from the objective function, after which each search agent moves to its next location according to its fitness value in the search space.
3.4 Step 4: leader position updating
The position of the main search agent (the leader) is updated using Eqs. (1)–(2) during the search process.
3.5 Step 5: velocity initialization
The literature shows that velocity initialization plays an important role in population-based algorithms. With this concept we can reduce the search agents' effort during the search process and avoid wrong positions. While searching for the global optimum, search agents may leave the intended boundaries of the search space, which wastes search effort and can negatively affect the accuracy of the algorithm; hence velocity initialization plays an important role in locating the best global optimum and avoiding local optima. The velocity can be initialized in three different ways: to small random values, to random values close to zero, or to zero. These initialization strategies have different impacts on the accuracy of the algorithm.
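The three initialization strategies mentioned above can be sketched as follows; the scale factors are illustrative assumptions, since the text does not prescribe exact magnitudes.

```python
import numpy as np

def init_velocity(n, d, strategy="zero", scale=0.1, v_max=6.0):
    """Initialize an (n, d) velocity matrix using one of the three
    strategies described in the text.

    'zero'       : start all agents at rest
    'near_zero'  : small Gaussian perturbations around zero
    'small_rand' : uniform values in a small fraction of [-v_max, v_max]
    """
    if strategy == "zero":
        return np.zeros((n, d))
    if strategy == "near_zero":
        return np.random.randn(n, d) * 1e-3
    if strategy == "small_rand":
        return np.random.uniform(-scale * v_max, scale * v_max, (n, d))
    raise ValueError(f"unknown strategy: {strategy}")
```

Zero initialization is the most conservative choice: agents gain momentum only from the attraction terms, which reduces the risk of immediately leaving the feasible region.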
3.6 Step 6: follower's position updating
The followers' positions in the search space are updated using the modified Eq. (6). Here the velocity concept plays an important role in quickly locating the global optimum and avoiding wrong positions during the search process.
3.7 Step 7: stopping condition
Finally, a stopping condition is applied to the search for the global optimum for all kinds of problems. The evaluation of each search agent and the updating of the best agent's position are repeated until the stopping criterion is satisfied, for example when the maximum number of iterations is reached or the solution is found earlier. The remaining operations are the same as in the salp swarm algorithm. The working process of the proposed algorithm is illustrated in the flowchart of Fig. 1.
The main structure of the HSSAPSO approach is presented in Algorithm 3.
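Since Algorithm 3 and the modified Eq. (6) are not reproduced in this text, the following is a minimal sketch, under stated assumptions, of how the steps above could fit together. In particular, the velocity-assisted follower update that blends the SSA chain with PSO-style attraction terms is our interpretation of the description, not the authors' exact formula.

```python
import numpy as np

def hssapso(f, lb, ub, n=30, L=500, w_max=0.9, w_min=0.2, v_max=6.0):
    """Sketch of the HSSAPSO loop. f is the objective to minimize."""
    d = len(lb)
    X = np.random.uniform(lb, ub, (n, d))    # Step 2: random initialization
    V = np.zeros((n, d))                     # Step 5: zero-velocity strategy
    fit = np.apply_along_axis(f, 1, X)       # Step 3: evaluation
    F = X[fit.argmin()].copy()               # food source = best so far
    for l in range(1, L + 1):
        c1 = 2.0 * np.exp(-(4.0 * l / L) ** 2)
        w = w_max - (w_max - w_min) * l / L  # linearly decreasing inertia
        for j in range(d):                   # Step 4: leader update, Eq. (1)
            c2, c3 = np.random.rand(), np.random.rand()
            step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
            X[0, j] = F[j] + step if c3 >= 0.5 else F[j] - step
        for i in range(1, n):                # Step 6: velocity-assisted followers
            r1, r2 = np.random.rand(d), np.random.rand(d)
            V[i] = np.clip(w * V[i] + 2 * r1 * (F - X[i])
                           + 2 * r2 * (X[i - 1] - X[i]), -v_max, v_max)
            X[i] = X[i] + V[i]
        X = np.clip(X, lb, ub)               # keep agents inside the bounds
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < f(F):                 # elitism: keep the best solution
            F = X[fit.argmin()].copy()
    return F, f(F)                           # Step 7: stop at max iterations
```

The elitist update of `F` corresponds to the first advantage listed below: the best solution found so far can never be lost, even if the whole swarm deteriorates.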
The proposed approach plays an important role in locating the solution in the search space during the search process, and its advantages can be described as follows:
-
The proposed method retains the best optimal solution after each iteration, so it cannot be lost even if the entire swarm deteriorates.
-
The proposed algorithm updates the location of each search agent according to the best goal obtained so far; hence the best search agent explores and exploits the search space toward a single best goal.
-
HSSAPSO updates the positions of the follower agents with the help of the velocity concept. This concept plays an important role in avoiding wrong positions and quickly locating the global optimum in the search space. It also enhances the balance between exploration and exploitation during the search process.
-
The gradual movements of the follower search agents during the search process prevent the HSSAPSO algorithm from easily stagnating in local optima.
-
The main parameter of the SSA algorithm helps HSSAPSO reduce complexity and makes it easy to implement.
4 Numerical and statistical experiments
The numerical and statistical results of the proposed approach are compared with the SSA, PSO, MFO, SCA and DA algorithms on twenty-two standard benchmark functions. The HSSAPSO, SSA, PSO, MFO, SCA and DA algorithms are programmed in Matlab 2015. Further details about the experiments are discussed in the following subsections.
4.1 Parameter settings
The literature review shows that researchers have mostly used constants to control the speed of the agents and the exploration and exploitation tendency. Different parameter settings have been employed in the search process. The following parameter settings are used in this experiment to enhance and improve the convergence ability of the meta-heuristics (Table 1).
4.2 Benchmark or standard functions
The initial ability of HSSAPSO was verified on unimodal, multimodal and fixed-dimension multimodal functions. Brief information on these test functions is given in Tables 2, 3 and 4, respectively.
4.3 The convergence performance of HSSAPSO on 100–500 dimensions
To confirm the effectiveness of the partitioning process and of integrating the standard SSA and PSO algorithms, the general convergence behavior of the HSSAPSO approach and the other algorithms on the benchmark functions is presented by plotting the function values versus the number of generations, as shown in Fig. 2. In Fig. 2, the red line indicates the standard SSA, while the black line indicates the new HSSAPSO variant. These convergence graphs clearly show that the velocity concept plays an important role in helping the search agents avoid wrong positions and quickly locate the global optimum during the search process. The hybrid approach finds the solution of the given functions in fewer iterations than the others, saving computation time. The data in Fig. 2 are plotted after d (dim) generations. The convergence results in Fig. 2 reveal that the new approach is superior to the standard SSA and the others, which verifies that the applied partitioning mechanism and the integration of the standard salp swarm strategy and standard particle swarm optimization can accelerate the convergence of the hybrid approach.
5 Experiment and results
In this part, twenty-two standard test problems [26, 61] are used to demonstrate the quality, superiority and accuracy of the new hybrid approach; the optimal solutions obtained by the new approach are compared with the standard SSA, PSO, MFO, SCA and DA algorithms. The standard benchmark functions comprise unimodal, multimodal and fixed-dimension multimodal functions. The description and the convergence graphs of all problems are presented in Tables 5, 6 and 7 and Figs. 3, 4 and 5, respectively. The statistical measures recorded are the maximum and minimum objective function values, average, best score and standard deviation. Further, the welded beam design function is solved with the HSSAPSO algorithm and the results are reported in the last section of this text. On the basis of the obtained solutions, the importance and advantages of the new hybrid approach are discussed in the following subsections.
5.1 Results on unimodal benchmark test functions
In this subsection, the unimodal functions are used to verify the performance of the algorithms. The numerical experimental results on these functions are given in Table 5, which compares the HSSAPSO, SSA, PSO, MFO, SCA and DA algorithms. It can easily be seen that the hybrid algorithm gives superior, highly effective optimal solutions compared to the others. As previously discussed, the unimodal benchmark functions are suitable for benchmarking the exploitation of the approaches. Therefore, the obtained optimal solutions prove the high exploitation potential of the hybrid algorithm.
5.2 Results on multi-modal benchmark test functions
The optimal solutions of the meta-heuristic approaches on the multimodal functions are discussed in this section to verify the algorithms' performance. These numerical experimental results are presented in Table 6. The accuracy and ability of the presented approach are verified in terms of best scores, minimum and maximum objective function cost, and statistical experiments on different dimensions. The results in Table 6 prove that the new hybrid approach gives the best score values and the best minimum and maximum values across different dimensions.
Further, the results strongly demonstrate that the high exploration of the new hybrid approach enables it to explore the search space extensively and identify promising regions of the search space.
5.3 Results on fixed dimension multi-modal benchmark test functions
Further, in this subsection the results of HSSAPSO on fixed-dimension multimodal problems are discussed, and its convergence performance is compared with other meta-heuristics in Table 7. The results prove that the hybrid approach is capable of giving the best quality of solutions in the search space of these functions.
Hence, it can be concluded that the HSSAPSO algorithm produces high-quality optimal results with strong robustness of the optimal solutions.
5.4 Statistical results of the algorithms
In this subsection, the performance of the new method is verified by statistical results on the fixed-dimension multimodal, multimodal and unimodal functions, presented in Tables 5, 6 and 7. For a fair comparison of the HSSAPSO algorithm with the SSA, PSO, MFO, SCA and DA algorithms, the average and standard deviation over multiple runs are reported. In Tables 5, 6 and 7, the smallest statistic scores indicate that the HSSAPSO algorithm is the most robust: it reproduces the best solution with the minimum discrepancy across runs compared to the others.
5.5 Convergence graphs
Finally, the convergence performance of the presented approach and the others is plotted in Figs. 3, 4 and 5. The accuracy of the meta-heuristics is tested on different dimensions of the functions' search spaces, and each algorithm's code is run several times to ensure a fair comparison. Figures 3, 4 and 5 show that the HSSAPSO algorithm finds the best optimal solution of the problems in the fewest generations, while the other recent algorithms take longer to find the solution in the search spaces of the functions.
Hence, the graphs prove that the HSSAPSO approach is more suitable and faster for the benchmark functions than the others.
5.6 Comparison of the algorithms on high dimensional (CEC 2017) functions
Due to their high complexity, the CEC 2017 test functions [62], customized for global optimization, are used to evaluate the performance of the proposed algorithm against the others. The accuracy of the algorithms is compared at 100 dimensions over 5000 iterations. The CEC 2017 test suite contains twenty-nine standard functions; function F2 has been excluded because it exhibits unstable behavior, as noted in [62]. These functions fall into four categories: unimodal functions (F1 and F3), multimodal functions (F4–F10), hybrid functions (F11–F20) and composition functions (F21–F30). The multimodal functions are used to evaluate whether the competing meta-heuristics avoid falling into local optima, because these functions have several local minima. Further, the composition functions are used to evaluate the balance between the exploration and exploitation phases.
Table 8 reports the global optimum solutions obtained by the algorithms. The convergence performance of the algorithms is compared in Fig. 6; these graphs are plotted for 5000 iterations at 100 dimensions.
The experimental results of the SSA, PSO, MFO, SCA, EGWO, AGWO, GWOPSO and HSSAPSO algorithms are compared in terms of the minimum objective function value, maximum objective function value, mean and standard deviation. The simulation results on all functions reveal that the proposed algorithm is superior to the others at finding the best global optimum in the search space, and is hence capable of producing the best solutions for complex functions. From these findings, the following observations can be made.
-
Unimodal functions: these functions have only one global optimum and are therefore used to evaluate the exploitation phase. The obtained global optimal solutions show that the proposed algorithm exhibits a superior exploitation capability, outperforming the others.
-
Multimodal functions: these have many local optima, and the number of design variables increases exponentially with problem size compared to the unimodal functions. These test suites are generally used to assess the exploration ability of meta-heuristics. Here, the obtained solutions show that the proposed algorithm yields a superior exploration capability.
-
Composition and hybrid functions are generally used to assess local optima avoidance and the balance between exploitation and exploration, as they have a huge number of local optima. The obtained simulation results prove that the proposed method is able to create a superior balance between exploration and exploitation.
-
Hence, the proposed methodology is the most efficient approach on all CEC 2017 test suites, outperforming the others.
5.7 Testing performance of the HSSAPSO algorithm through Wilcoxon signed ranks method
In this subsection, the performance of the HSSAPSO algorithm is confirmed by applying the Wilcoxon signed ranks (WSR) method to the median values of the algorithms (see Tables 9, 10, 11) for a more rigorous assessment [63]. The WSR test is a non-parametric test applied to two distinct samples or data columns to establish the significance of the difference between two samples or algorithms. With this test, we can easily identify the better algorithm and locate significant differences in the behavior of two meta-heuristics.
After applying the WSR test, we obtain the z value and p value for each algorithm; these values are presented in Tables 12, 13 and 14. We use the standard decision rule to verify the significance of each algorithm: if \(p < 0.05\), the null hypothesis \(H_{0}\) is rejected, whereas \(p > 0.05\) represents a failure to reject the null hypothesis. Where the p values are less than 0.05, it can be concluded that the HSSAPSO algorithm is significantly superior to the other meta-heuristics; otherwise, the obtained improvements are not statistically significant.
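The decision rule above can be illustrated with a minimal, self-contained signed-rank computation using the normal approximation for the p value; the median values below are hypothetical placeholders, not data from Tables 9, 10 and 11.

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test on paired samples a, b.

    Minimal version: zero differences are dropped, tied absolute
    differences get average ranks, and the p value uses the normal
    approximation (adequate for the sample sizes used here).
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    if n == 0:
        return 0.0, 1.0
    # rank the absolute differences, averaging ranks over ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1              # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    w = min(w_plus, w_minus)
    # normal approximation of the W statistic's null distribution
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma
    p = 1 + math.erf(z / math.sqrt(2))     # two-sided p value
    return z, min(p, 1.0)

# hypothetical per-function median errors for two algorithms
alg_a = [0.0012, 0.034, 0.51, 0.0003, 1.2, 0.08, 0.91, 0.002]
alg_b = [0.0100, 0.120, 0.95, 0.0100, 3.4, 0.31, 1.70, 0.010]
z, p = wilcoxon_signed_rank(alg_a, alg_b)
# since every paired difference favors alg_a, p falls below 0.05
```

Since `alg_a` is smaller on every pair, the signed ranks all point one way and the test rejects \(H_{0}\), which is exactly the situation the tables report for HSSAPSO.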
To sum up, all numerical and statistical results prove that the HSSAPSO algorithm is helpful in enhancing and improving the convergence performance of the salp swarm algorithm in terms of both solution quality and computational effort.
Finally, the results in Tables 12, 13 and 14 illustrate that the proposed method has better characteristics, namely the strength of the global optimal target and the quality of the optimal solution, with significant capability in both global exploration and local exploitation. The WSR-based analysis shows that the proposed algorithm performs better than recent meta-heuristics; hence the results obtained by the proposed algorithm are statistically superior, and this has not happened by chance.
6 Welded beam design function
Welding can be described as a procedure for joining metallic parts by heating them to a suitable temperature, with or without the application of pressure [64]. A welding process that uses heat alone is known as fusion welding. The components involved in this procedure are held in position while molten metal is supplied to the joint. The molten metal can come from the parts themselves (the parent metal) or from an external filler metal. The joining surfaces of the parts become plastic under the action of heat, and when consolidated the two parts fuse into a single unit.
A beam is a member subjected to loads applied transverse to its long dimension, causing the member to bend. A beam is usually carried on supports that provide reactions. A beam supported by smooth surfaces, rollers or pins at its ends is known as a simple beam: such supports develop reaction forces but no moments at the reactions. If a beam projects beyond its supports at either end, it is called a simple beam with overhang.
A beam (\(A\)) is to be welded to a rigid member (\(B\)). The welded beam is made of 1010 steel and is to support a force \(P\) of 6000 lb. The beam dimensions should be selected so that the cost of the system is minimized. The structure of the beam is illustrated in Fig. 7, in which the beam is welded to the rigid member and a load is applied at the free end. Here, the beam is to be optimized to minimize cost by varying the weld and member dimensions \(\overrightarrow {y} = \left[ {y_{1} \,y_{2} \,y_{3} \,y_{4} } \right] = \left[ {h\,\,l\,\,t\,\,b} \right]\). The constraints include limits on the end deflection, bending stress, buckling load and shear stress. The variables \(y_{1}\) and \(y_{2}\) are normally multiples of 0.0625 in., but for this problem they are assumed continuous [65, 66].
The main objective of this function is to minimize the total material and fabrication cost of a beam loaded in bending, as shown in Fig. 8 [67]. The beam dimensions are varied to decrease the total mass (thus reducing fabrication cost); however, the cost of welding is also measured, adding complexity to the problem. The problem is subject to constraints on \(l\): shear stress, \(m\): bending stress in the beam, \(n\): buckling load on the bar, \(o\): end deflection of the beam, and side constraints. The function depends on four variables: \(a(y_{1} )\): thickness of the weld \(\left( {h\,} \right)\), \(b(y_{2} )\): length of the clamped bar \(\left( l \right)\), \(c(y_{3} )\): height of the bar \(\left( t \right)\) and \(d(y_{4} )\): thickness of the bar \(\left( b \right)\) [67]. The function can be expressed by the following mathematical equations:
where \(l\left( Y \right) = \sqrt {\left( {l'} \right)^{2} + 2l'l''\left( {y_{2} /2R} \right) + \left( {l''} \right)^{2} }\), \(l' = P/\left( {\sqrt 2 y_{1} y_{2} } \right)\), \(l'' = MR/J\), \(M = P\left( {L + y_{2} /2} \right)\), \(R = \sqrt {y_{2}^{2} /4 + \left( {\left( {y_{1} + y_{3} } \right)/2} \right)^{2} }\), \(J = 2\left\{ {\sqrt 2 y_{1} y_{2} \left[ {y_{2}^{2} /12 + \left( {\left( {y_{1} + y_{3} } \right)/2} \right)^{2} } \right]} \right\}\), \(m\left( Y \right) = 6PL/\left( {y_{4} y_{3}^{2} } \right)\), \(o\left( Y \right) = 6PL^{3} /\left( {Ey_{3}^{3} y_{4} } \right)\), \(n\left( Y \right) = \left( {4.013E\sqrt {y_{3}^{2} y_{4}^{6} /36} /L^{2} } \right)\left( {1 - \left( {y_{3} /2L} \right)\sqrt {E/4G} } \right)\), \(P = 6000\;{\text{lb}}\), \(L = 14\;{\text{in}}\), \(E = 30 \times 10^{6} \;{\text{psi}}\), \(G = 12 \times 10^{6} \;{\text{psi}}\), \(l_{\hbox{max} } = 13{,}600\;{\text{psi}}\), \(m_{\hbox{max} } = 30{,}000\;{\text{psi}}\), \(o_{\hbox{max} } = 0.25\;{\text{in}}\), \(Y = \left( {y_{1} ,y_{2} ,y_{3} ,y_{4} } \right)\) and \(\left( {0.1,0.1,0.1,0.1} \right) \le Y \le \left( {2,10,10,2} \right)\).
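For concreteness, the cost and constraints of this design problem can be evaluated in code. The sketch below follows the standard welded beam formulation from the literature [67]; the static-penalty wrapper `penalized` and its weight `mu` are illustrative additions for use with any metaheuristic, not the paper's implementation:

```python
import math

# Welded beam design constants (standard formulation).
# Y = [y1, y2, y3, y4] = [h (weld thickness), l (weld length), t (bar height), b (bar thickness)]
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIG_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def cost(y):
    """Fabrication cost of the welded beam (objective to minimize)."""
    y1, y2, y3, y4 = y
    return 1.10471 * y1**2 * y2 + 0.04811 * y3 * y4 * (14.0 + y2)

def constraints(y):
    """Constraint values g_i(Y); the design is feasible when all g_i <= 0."""
    y1, y2, y3, y4 = y
    tau_p = P / (math.sqrt(2) * y1 * y2)                     # primary shear stress
    M = P * (L + y2 / 2)                                     # bending moment at the weld
    R = math.sqrt(y2**2 / 4 + ((y1 + y3) / 2) ** 2)
    J = 2 * (math.sqrt(2) * y1 * y2 * (y2**2 / 12 + ((y1 + y3) / 2) ** 2))
    tau_pp = M * R / J                                       # secondary (torsional) shear
    tau = math.sqrt(tau_p**2 + 2 * tau_p * tau_pp * y2 / (2 * R) + tau_pp**2)
    sigma = 6 * P * L / (y4 * y3**2)                         # bending stress
    delta = 6 * P * L**3 / (E * y3**3 * y4)                  # end deflection
    p_c = (4.013 * E * math.sqrt(y3**2 * y4**6 / 36) / L**2
           * (1 - (y3 / (2 * L)) * math.sqrt(E / (4 * G))))  # buckling load
    return [tau - TAU_MAX, sigma - SIG_MAX, y1 - y4,
            0.125 - y1, delta - DELTA_MAX, P - p_c]

def penalized(y, mu=1e6):
    """Static-penalty fitness usable by any population-based solver."""
    return cost(y) + mu * sum(max(0.0, g) ** 2 for g in constraints(y))
```

Evaluating `cost` at a near-optimal design reported in the literature, roughly \(Y \approx (0.2057, 3.4705, 9.0366, 0.2057)\), yields a fabrication cost of about 1.725, with all constraints active or satisfied to within rounding.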
The welded beam design function has been solved by the new HSSAPSO approach in this section. During this study, compatibility with extreme values of the parameters has been tested with different solvers. Since the algorithms are stochastic, the results have been collected over 20 independent trials. The results obtained by the proposed algorithm have been compared with literature results for ABC (Artificial Bee Colony) [68], CPSO (Co-evolutionary PSO) [69], CDE (Co-evolutionary Differential Evolution) [70], IHS (Improved Harmony Search Algorithm) [71], MFO (Moth-Flame Optimizer) [29], AFA (Adaptive Firefly Algorithm) [72], CSS (Charged System Search) [73], LSA-SM (Hybrid Lightning Search Algorithm-Simplex Method) [74] and WCMFO (Water Cycle and Moth-Flame Optimization) [75]. Table 14 reports the obtained solutions in terms of the thickness of the weld, the length of the clamped bar, the height of the bar, the thickness of the bar and the minimum fabrication cost of the beam. Here, the thickness of the weld (\(y_{1}\)) remains constant for LSA-SM, CSS, AFA, MFO, IHS and ABC, and is larger for the CPSO and CDE algorithms. The length of the clamped bar (\(y_{2}\)) is larger for the LSA-SM, CSS, AFA, MFO, IHS, CDE, CPSO and ABC algorithms, while HSSAPSO attains the smallest value in comparison with the others. The height of the bar (\(y_{3}\)) remains constant for the HSSAPSO, LSA-SM, CSS, AFA, MFO, IHS and ABC algorithms, while CDE attains the smallest value; CPSO gives a higher value of the height of the bar than the others. The thickness of the bar (\(y_{4}\)) remains constant across all algorithms. Finally, the minimum fabrication cost of the beam is reported in Table 15. These experimental results confirm that the proposed algorithm finds the lowest cost for this problem, outperforming the others.
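Because the solvers are stochastic, per-algorithm statistics of this kind are gathered over repeated independent runs, as with the 20 trials above. A minimal illustration of the bookkeeping, using a toy random-search stand-in (not the paper's code) on a simple objective:

```python
import random
import statistics

def toy_solver(objective, bounds, iters=200, rng=None):
    """Toy random-search 'solver' standing in for any stochastic metaheuristic."""
    rng = rng or random.Random()
    best = float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        best = min(best, objective(x))
    return best

sphere = lambda x: sum(v * v for v in x)   # simple test objective
bounds = [(-5.0, 5.0)] * 4

# 20 independent seeded trials, as in the paper's stochastic comparisons.
trials = [toy_solver(sphere, bounds, rng=random.Random(seed)) for seed in range(20)]
best, mean, std = min(trials), statistics.mean(trials), statistics.pstdev(trials)
```

The best, mean and standard deviation over the trials are then the quantities tabulated when comparing algorithms.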
Hence, the HSSAPSO approach is more suitable and capable for the welded beam design function than the ABC, CPSO, CDE, IHS, MFO, AFA, CSS, LSA-SM and WCMFO algorithms.
7 Rolling element bearing
The main objective of this function is to increase the reliability and service life of the bearing [76, 77]. Depending on the operating requirements, several objectives can be proposed for this function, such as the longest fatigue life, the contact-fatigue mode of failure and many others. Three critical design parameters are used: the pitch diameter \(\left( {D_{m} } \right)\), the number of balls \(\left( Z \right)\) and the ball diameter \(\left( {D_{b} } \right)\). In this research, the basic dynamic capacity has been taken as the objective to maximize, and the expression of the basic dynamic capacity of a ball bearing is given as:
Subject to:
where
where \(\left( d \right)\) is the bearing bore, \(\left( D \right)\) is the outside diameter (see Fig. 9), \(f_{i} = r_{i} /D_{b}\) is the curvature radius coefficient of the inner raceway groove, \(f_{o} = r_{o} /D_{b}\) is the curvature radius coefficient of the outer raceway groove, \(\left( {r_{o} } \right)\) and \(\left( {r_{i} } \right)\) are the outer and inner raceway groove curvature radii (see Fig. 11), \(\left( {K_{\text{Dmax}} } \right)\) is the maximum roller diameter limiter (0.8), \(\left( {K_{\text{Dmin}} } \right)\) is the minimum roller diameter limiter (0.5), \(\left( e \right)\) is the mobility condition parameter (0.1), \(\left( \varepsilon \right)\) is the outer ring strength parameter (0.1) and \(\left( {\phi_{o} } \right)\) is the maximum tolerable assembly angle (see Fig. 10), which depends upon the bearing geometry (Fig. 11).
Analyses of the performance of algorithms on this problem usually consider only three parameters, the pitch diameter, the ball diameter and the number of balls, for reasons of complexity. In this research, however, five design parameters \(\left( {D_{m} ,D_{b} ,Z,f_{i} ,f_{o} } \right)\) have been applied. In the constraints, the constant values \({\text{K}}_{\text{Dmax}} = 0.8\), \({\text{K}}_{\text{Dmin}} = 0.5\), \(e = 0.1\), \(\varepsilon = 0.1\) and \(\phi_{o} = 4.7124\) rad have been used, which are obtained from considerations of the strength of the rings, the strength of the balls and the mobility of the balls. In the Matlab code of the proposed algorithm, the parameter values used are 30 search agents and a maximum of 500 iterations. The performance of the proposed approach has been compared with the literature results of the SSA, SCA, GWO, GSA, MVO, PSO, EPO, SHO and ESA algorithms [78]. The simulation solutions of all the algorithms are presented in Table 16.
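The experimental setup just described (30 search agents, 500 iterations, five bounded design variables) can be mirrored in a schematic loop. The sketch below is not the paper's exact HSSAPSO update; it only illustrates the stated idea of injecting a PSO-style velocity pull toward the best solution (the food source) into the SSA follower update, with assumed weights `w` and `c`, demonstrated on a simple sphere objective:

```python
import numpy as np

def hssapso_sketch(obj, lb, ub, agents=30, iters=500, seed=0):
    """Schematic SSA loop with a PSO-style velocity term on the followers.

    Illustrative only: the exact HSSAPSO update is given in the paper; the
    inertia weight w and pull coefficient c here are assumed values.
    """
    rng = np.random.default_rng(seed)
    dim = lb.size
    X = rng.uniform(lb, ub, (agents, dim))      # salp positions
    V = np.zeros_like(X)                        # PSO-style velocities
    fit = np.apply_along_axis(obj, 1, X)
    food = X[fit.argmin()].copy()               # best solution found so far
    w, c = 0.7, 1.5                             # assumed inertia / pull weights
    for t in range(1, iters + 1):
        c1 = 2 * np.exp(-((4 * t / iters) ** 2))  # SSA exploration/exploitation balance
        for i in range(agents):
            if i == 0:                          # leader: SSA move around the food source
                step = c1 * ((ub - lb) * rng.random(dim) + lb)
                X[i] = food + np.where(rng.random(dim) < 0.5, step, -step)
            else:                               # follower: SSA average + velocity pull
                V[i] = w * V[i] + c * rng.random(dim) * (food - X[i])
                X[i] = (X[i] + X[i - 1]) / 2 + V[i]
            X[i] = np.clip(X[i], lb, ub)        # keep agents inside the bounds
        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < obj(food):               # update food source only on improvement
            food = X[fit.argmin()].copy()
    return food, obj(food)

lb, ub = np.full(5, -10.0), np.full(5, 10.0)
best_x, best_f = hssapso_sketch(lambda x: float(np.sum(x**2)), lb, ub)
```

On this 5-dimensional sphere function the loop converges to a near-zero objective value within the 500 iterations, reflecting the fast convergence the paper reports for HSSAPSO.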
The simulation results of Table 16 reveal that the proposed method is able to find the best (maximum) value of \(C_{d} \left( X \right)\) for this function, outperforming the others. On the basis of the solutions obtained by the proposed method, the service life of bearings can therefore be improved.
8 Conclusion and future works
To improve the exploration and exploitation of the algorithm, we have developed a new hybrid approach called HSSAPSO in this article. This approach integrates the advantages of the salp swarm algorithm and the particle swarm optimization algorithm to eliminate disadvantages such as trapping in local optima and unbalanced exploitation. The welded beam design function, the rolling element bearing problem, twenty-two CEC 2005 functions and twenty-nine CEC 2017 functions have been used to verify the performance of the new approach. The results and convergence graphs for these functions prove that the HSSAPSO algorithm is capable and fast on these functions, outperforming recent metaheuristics.
Careful examination of the evidence reveals the following advantages of the HSSAPSO algorithm:
(1) It powerfully improves the search capability of the exploration phase by introducing the PSO velocity update.
(2) It can recognize the accurate position of the target by tuning the control constants around it.
(3) It can find superior global optimum solutions in the search space for practical and real-life functions.
(4) It enhances the convergence accuracy and extends the performance by reducing the computational time.
Future studies will investigate a new method, with the help of a statistical model, to accelerate the speed of recent meta-heuristics, and will apply it to solving other constrained nonlinear optimization functions as well as big data, statistical and training applications.
References
Kalaiselvi K, Kumar VS, Chandrasekar K (2010) Enhanced genetic algorithm for optimal electric power flow using TCSC and TCPS. World congress on engineering, vol II, WCE 2010, June 30–July 2, 2010, London, UK
Bakirtzis AG, Biskas PN, Zoumas CE, Petridis V (2002) Optimal power flow by enhanced genetic algorithm. IEEE Power Eng Rev 22(2):60
Chung TS, Li YZA (2000) A hybrid GA approach for OPF with consideration of FACTS devices. IEEE Power Eng Rev 20(8):54–57
Cai LJ, Erlich I, Stamtsis G (2004) Optimal choice and allocation of FACTS devices in deregulated electricity market using genetic algorithms. In: IEEE PES power systems conference and exposition, 10–13 Oct 2004, New York, NY, USA
Slimani L, Bouktir T (2012) Optimal power flow solution of the Algerian electrical network using differential evolution algorithm. TELKOMNIKA 10(2):199–210
Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713
Duman S, Guvenc U, Sonmez Y, Yorukeren N (2012) Optimal power flow using gravitational search algorithm. Energy Convers Manag 59:86–95
Kennedy J (2011) Particle swarm optimization. In: Sammut C, Webb GI (eds) Encyclopedia of machine learning. Springer, Boston
Abido MA (2002) Optimal power flow using Tabu search algorithm. Electr Power Compon Syst 30:469–483
Sinsupan N, Leeton U, Kulworawanichping T (2010) Application of harmony search to optimal power flow problems. In: 2010 International conference on advances in energy engineering, 19–20 June 2010, Beijing, China
Alrashydah EI, Qudais SAA (2018) Modeling of creep compliance behavior in asphalt mixes using multiple regression and artificial neural networks. Constr Build Mater 159:635–641
Mirjalili S (2015) Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput Appl 27:1053–1073
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
Mukherjee A, Mukherjee V (2015) Solution of optimal power flow using chaotic krill herd algorithm. Chaos Solitons Fractals 78:10–21
Bouchekara HREH (2014) Optimal power flow using black-hole-based optimization approach. Appl Soft Comput 24:879–888
Ben-Tal A, El Ghaoui L, Nemirovski A (2009) Robust optimization. Princeton series in applied mathematics. Princeton University Press, Princeton, pp 1–576
Mirjalili S (2015) The ant lion optimizer. Adv Eng Softw 83:80–98
Singh N, Singh SB (2011) One half global best position particle swarm optimization algorithm. Int J Sci Eng Res 2(8):1–9
Roger JM, Chauchard F, Maurel VB (2003) EPOPLS external parameter orthogonalisation of PLS application to temperature-independent measurement of sugar content of intact fruits. Chemometr Intell Lab 66:191–204
Fausto F, Cuevas E, Valdivia A, González A (2017) A global optimization algorithm inspired in the behavior of selfish herds. Biosystems 160:39–55
Joshi H, Arora S (2017) Enhanced grey wolf optimization algorithm for constrained optimization problems. Int J Swarm Intell 3(2/3):126–151
Qais MH, Hasanien HM, Alghuwainem S (2018) Augmented grey wolf optimizer for grid-connected PMSG-based wind energy conversion systems. Appl Soft Comput 69:505–515
Singh N, Singh SB (2017) Hybrid algorithm of particle swarm optimization and grey wolf optimizer for improving convergence performance. J Appl Math 2017:1–15
Soares J, Sousa T, Vale ZA, Morais H, Faria P (2011) Ant colony search algorithm for the optimal power flow problem. In: 2011 IEEE power and energy society general meeting, 24–28 July 2011, Detroit, MI, USA
Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47
Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl Based Syst 96:120–133
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Singh S, Singh SB (2017) A novel hybrid GWO-SCA approach for optimization problems. Eng Sci Technol Int J 20(6):1586–1601
Mirjalili S (2015) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl Based Syst 89:228–249
Daryani N, Hagh MT, Teimourzadeh S (2016) Adaptive group search optimization algorithm for multi-objective optimal power flow problem. Appl Soft Comput 38:1012–1024
Singh N, Singh S, Singh SB (2017) A new hybrid MGBPSO-GSA variant for improving function optimization solution in search space. Evolut Bioinform 13:1–13
Mirjalili S, Mirjalili SM, Hatamlou A (2016) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl 27:495–513
Singh N, Singh S, Singh SB, Arora S (2012) Half mean particle swarm optimization algorithm. Int J Sci Eng Res 3(80):1–10
Rao RM, Babu AVN (2013) Optimal power flow using cuckoo optimization algorithm. Int J Adv Res Electr Electron Instrum Eng 2(9):1–6
Singh N, Singh SB (2012) Personal best position particle swarm optimization. J Appl Comput Sci Math Suceava 12(6):69–76
Singh N, Hachimi H (2018) A new hybrid whale optimizer algorithm with mean strategy of grey wolf optimizer for global optimization. Math Comput Appl 23(1):1–32
Yu S, Wu Z, Wang H, Chen Z (2010) A hybrid particle swarm optimization algorithm based on space transformation search and a modified velocity model. In: Schaeffer J (ed) High performance computing and applications. Springer, Berlin, pp 522–527
Liang RH, Tsai SR, Chen YT, Tseng WT (2011) Optimal power flow by a fuzzy based hybrid particle swarm optimization approach. Electr Power Syst Res 81(7):1466–1474
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the ICNN’95—international conference on neural networks. IEEE, pp 1942–1948. https://doi.org/10.1109/icnn.1995.488968
Kennedy J, Eberhart RC, Shi Y (2001) Swarm intelligence. Morgan Kaufmann Publishers, Burlington
Liu W, Wang K, Sun B, Shao K (2006) A hybrid particle swarm optimization algorithm for predicting the chaotic time series. In: 2006 international conference on mechatronics and automation, 25–28 June 2006, Luoyang, Henan, China
Marinke R, Araujo E, Coelho LS, Matiko L (2005) Particle swarm optimization (PSO) applied to fuzzy modeling in a thermal-vacuum system. In: Fifth international conference on hybrid intelligent system (HIS’05), 6–9 Nov 2005, Rio de Janeiro, Brazil
Angeline PJ (1998) Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In: Porto VW, Saravanan N, Waagen D, Eiben AE (eds) Evolutionary programming VII. EP 1998. Lecture notes in computer science, vol 1447. Springer, Berlin
Juang CF (2004) A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Trans Syst Man Cybern B (Cybern) 34(2):997–1006
Zhang C, Shao H, Li Yu (2000) Particle swarm optimisation for evolving artificial neural network. In: SMC 2000 conference proceedings. 2000 IEEE international conference on systems, man and cybernetics. ‘cybernetics evolving to systems, humans, organizations, and their complex interactions’, 8–11 Oct 2000, Nashville, TN, USA
Esmin AAA, Torres GL, Souza ACZD (2005) A hybrid particle swarm optimization applied to loss power minimization. IEEE Trans Power Syst 20(2):859–866
Esmin AAA, Torres GL, Alvarenga GB (2006) Hybrid evolutionary algorithm based on PSO and GA mutation. In: 2006 sixth international conference on hybrid intelligent systems (HIS’06’), 13–15 Dec 2006, Rio de Janeiro, Brazil
Zhao B, Guo CX, Cao YJ (2005) A multiagent-based particle swarm optimization approach for optimal reactive power dispatch. IEEE Trans Power Syst 20:1070–1078
Vlachogiannis JG, Leet KY (2006) A comparative study on particle swarm optimization for optimal steady-state performance of power systems. IEEE Trans Power Syst 21(4):1718–1728
Huang CM, Huang CJ, Wang ML (2005) A particle swarm optimization to identifying the ARMAX model for short-term load forecasting. IEEE Trans Power Syst 20(2):1126–1133
Esmin AAA, Torres GL (2012) Application of particle swarm optimization to optimal power systems. Int J Innov Comput Inf Control 8(3):1705–1716
Esmin AAA, Torres GL (2006) Fitting fuzzy membership functions using hybrid particle swarm optimization. In: 2006 IEEE international conference on fuzzy systems, 16–21 July 2006
Ali AA, Tawhid MA (2017) A hybrid particle swarm optimization and genetic algorithm with population partitioning for large scale optimization problems. Ain Shams Eng J 8(2):191–206
Mao B, Xie Z, Wang Y, Handroos H, Wu H, Shi S (2017) A hybrid differential evolution and particle swarm optimization algorithm for numerical kinematics solution of remote maintenance manipulators. Fusion Eng Des 124:587–590
Hadji B, Mahdad B, Srairi K, Mancer N (2015) Multi-objective PSO-TVAC for environmental/economic dispatch problem. Energy Procedia 74:102–111
Zhang Y, Gong DW, Cheng J (2017) Multi-objective particle swarm optimization approach for cost-based feature selection in classification. IEEE/ACM Trans Comput Biol Bioinf 14(1):64–75
Song XF, Zhang Y, Guo YN, Sun XY, Wang YL (2020) Variable-size cooperative co-evolutionary particle swarm optimization for feature selection on high-dimensional data. IEEE Trans Evolut Comput. https://doi.org/10.1109/TEVC.2020.2968743
Zhang Y, Gong DW, Geng N, Sun XY (2014) Hybrid bare-bones PSO for dynamic economic dispatch with value-point effects. Appl Soft Comput 18:248–260
Zhang Y, Gong DW, Zhang J (2013) Robot path planning in uncertain environment using multi-objective particle swarm optimization. Neurocomputing 103:172–185
Singh N, Singh SB (2017) A modified mean gray wolf optimization approach for benchmark and biomedical problems. Evol Bioinform 13:1–28
Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191
Awad N, Ali M, Liang J, Qu B, Suganthan P (2017) Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Technical report
Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput 1(1):3–18
David DCN, Stephen CEA (2018) Cost minimization of welded beam design problem using non-traditional optimization through Matlab and validation through analysis simulation. Int J Mech Eng Technol (IJMET) 9(8):180–192
Mosavi A, Vaezipour A (2012) Reactive search optimization: application to multi-objective optimization problems. Appl Math 3:1572–1582
Li HS, Au SK (2010) Solving constrained optimization problems via subset simulation. In: 4th international workshop on reliable engineering computing, pp 439–453
Coello CAC (2000) Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 41:113–127
Akay B, Karaboga D (2012) Artificial bee colony algorithm for large-scale problems and engineering design optimization. J Intell Manuf 23(4):1001–1014
He Q, Wang L (2007) An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng Appl Artif Intell 20(1):89–99
Huang FZ, Wang L (2007) An effective co-evolutionary differential evolution for constrained optimization. Appl Math Comput 186(1):340–356
Mahdavi M, Fesangharg M, Damangir E (2007) An improved harmony search algorithm for solving optimization problems. Appl Math Comput 188(2):1567–1579
Baykasoğlu A, Ozsoydan FB (2015) Adaptive firefly algorithm with chaos for mechanical design optimization problems. Appl Soft Comput 36:152–164
Kaveh A, Talatahari S (2010) A novel heuristic optimization method: charged system search. Acta Mech 213:267–289
Lu Y, Zhou Y, Wu X (2017) A hybrid lightning search algorithm-simplex method for global optimization. Discrete Dyn Nat Soc 8342694:1–23
Khalilpourazari S, Khalilpourazary S (2019) An efficient hybrid algorithm based on water cycle and moth-flame optimization algorithms for solving numerical and constrained engineering optimization problems. Soft Comput 23:1699–1722. https://doi.org/10.1007/s00500-017-2894-y
Kumar A (2016) Ball bearing design through Jaya algorithm. Int J Adv Res Sci Eng 5(12):458–467
Chakraborty I, Kumar V, Nair SB (2003) Rolling element bearing design through genetic algorithms. Eng Optim 35(6):1–26
Dhiman G (2019) ESA: a hybrid bio-inspired metaheuristic optimization approach for engineering problems. Eng Comput. https://doi.org/10.1007/s00366-019-00826-w
Yuan X, Dai X, Zhao J, He Q (2014) On a novel multi-swarm fruit fly optimization algorithm and its application. Appl Math Comput 233:260–271
Mosavi A, Vaezipour A (2012) Reactive search optimization; application to multiobjective optimization problems. Appl Math 3(10):1572–1582
Acknowledgements
The authors are very grateful to the referees for their valuable suggestions, which helped to improve the quality of the paper significantly.
Cite this article
Singh, N., Singh, S.B. & Houssein, E.H. Hybridizing salp swarm algorithm with particle swarm optimization algorithm for recent optimization functions. Evol. Intel. 15, 23–56 (2022). https://doi.org/10.1007/s12065-020-00486-6