1 Introduction

The concept of randomization has encouraged researchers to draw inspiration from natural behaviors and to propose new meta-heuristic algorithms for solving complicated mathematical problems [1]. Nature-inspired algorithms can be bio-inspired or physics-inspired. Bio-inspiration is based on the behavior of living creatures as they locate food and adapt to their habitats. Bio-inspired algorithms fall into two types: 1) swarm-based algorithms, which imitate the collective foraging behavior of animal societies; and 2) evolutionary-based algorithms, which imitate the evolution of species. Physics-inspiration, on the other hand, is based on scientific laws and equations from disciplines such as chemistry, astronomy, electrical engineering, and natural disasters. Mathematical equations from these sciences can be randomized in a suitable manner, then tested and re-evaluated until better results are reached, and finally applied to many engineering applications. Simplicity, robustness, computational time, and a sound formulation are the general criteria by which competing algorithms are compared.

A literature review shows that tens of meta-heuristic algorithms have been proposed in recent decades and applied to many engineering problems. In the swarm-based group [2], particle swarm optimization (PSO) is inspired by the flocking behavior of birds and fish [3], ant colony optimization (ACO) by the shortest-path behavior of ants between the food source and the colony [4], the bat algorithm (BA) by the echolocation method of detecting food [5], the cuckoo search (CS) algorithm by the way cuckoos explore and memorize the best nests for their eggs, and the artificial bee colony (ABC) by the group activities of honey bees [6]. The grey wolf optimizer (GWO) is motivated by the leadership hierarchy of the pack while pursuing prey [7, 8], and the whale optimization algorithm (WOA) by the prey-encircling behavior of humpback whales [9,10,11]. Furthermore, many swarm-based algorithms have appeared in the last five years to enter the competition and solve several engineering problems, such as: the coyote optimizer [12], sunflower optimizer [13, 14], salp swarm algorithm [15], squirrel search algorithm [16], butterfly optimization algorithm [17], pity beetle algorithm [18], moth-flame optimization algorithm [19], mouth-brooding fish algorithm [20], dolphin echolocation [21], spotted hyena optimizer [22], emperor penguin optimizer [23], virus colony search [24], krill herd algorithm [25, 26], and the firefly algorithm (FA) [27].

In the evolutionary-based group [28], the genetic algorithm (GA) simulates biological selection and genetics, the biogeography-based optimizer (BBO) is inspired by the evolution of species (for example, predators and prey) through migration and mutation to find the best habitat [29], differential evolution (DE) relies on the weighted difference between two population vectors [30], and fast evolutionary programming (FEP) mutates solutions using Cauchy-distributed steps [31]. All of these algorithms have been utilized to solve several engineering optimization problems.

Physics-based algorithms have also been devised for the same purpose. In this regard, many algorithms have been proposed, such as the gravitational search algorithm (GSA), motivated by Newton's law of gravity [32]; ray optimization (RO), which imitates Snell's law of light refraction [33]; charged system search (CSS), inspired by Coulomb's law and Newton's laws of motion [34]; and colliding bodies optimization (CBO), motivated by simple collisions between masses [35]. Moreover, electromagnetic field optimization [36], thermal exchange optimization [37], the ions motion algorithm [38], water evaporation optimization [39], and the water cycle algorithm [40] have been created as competitive algorithms.

According to the No Free Lunch theorem [41], no optimization algorithm can efficiently solve all optimization problems: an algorithm may solve one problem well yet fail on others. Therefore, the establishment of new meta-heuristic optimization algorithms is welcome as long as they contribute something significant to the field. Recently, a tremendous expansion of such evolutionary computation algorithms has emerged. This represents the principal impetus for the authors to present the proposed transient search optimization (TSO) algorithm.

In this paper, the TSO algorithm is inspired by the transient response of electrical circuits that include energy storage elements (inductors and capacitors). The computational complexity of the mathematical model and flowchart of the TSO algorithm is kept as low as possible. The exploitation of the TSO algorithm is examined using uni-modal benchmark functions, while its exploration is examined using multi-modal benchmark functions. The statistical results, the non-parametric sign test, and the convergence curves verify the superiority of the TSO algorithm over 15 other metaheuristic algorithms. Furthermore, the TSO algorithm is applied to the optimal design of three practical engineering problems and compared with different algorithms.

The rest of the paper is arranged as follows. Section 2 has two subsections: Subsection 2.1 presents a brief background on the transient response of first-order and second-order RLC circuits, and Subsection 2.2 describes the TSO algorithm. In Section 3, the TSO algorithm is verified using 23 benchmark functions and analyzed statistically. In Section 4, the TSO algorithm is applied to find the optimal design of three engineering problems. Finally, a brief conclusion is drawn in Section 5.

2 Transient Search Optimization Algorithm

2.1 Background

The complete response of electrical circuits that contain resistive elements (R) and energy storage elements, such as capacitors (C), inductors (L), or both (LC), consists of a transient response and a steady-state (final) response, as shown in Eq. (1). Circuits that contain a single storage element (RL or RC) are called first-order circuits, as indicated in Fig. 1, while circuits that involve two storage elements (RLC) are known as second-order circuits, as depicted in Fig. 2. Switching these circuits cannot move them instantaneously to the next steady state: the capacitor or inductor takes time to charge or discharge until it reaches the steady-state value. The transient response of a first-order circuit can be computed from the differential equation in Eq. (2), whose solution x(t) is given in Eq. (3). The transient response of the first-order circuit is shown in Fig. 3, where it is an exponential response for the charging and discharging operations [42, 43].

$$ \text{Complete response}= \text{Transient response}+ \text{Final response} $$
(1)
$$ \frac{d}{dt}x(t)+\frac{x(t)}{\tau }=K $$
(2)
$$ x(t)=x\left(\infty \right)+\left(x(0)-x\left(\infty \right)\right){e}^{\frac{-t}{\tau }} $$
(3)

where t is the time; x(t) can be the capacitor voltage v(t) of the RC circuit or the inductor current i(t) of the RL circuit; τ is the circuit time constant (τ = RC for the RC circuit and τ = L/R for the RL circuit); K is a constant that depends on the initial value x(0); and x(∞) is the final (steady-state) value. The transient response of the second-order circuit can be computed from the differential equation shown in Eq. (4). The solution of this second-order differential equation is shown in Eq. (5), where the response of the RLC circuit is assumed to be under-damped.

$$ \frac{d^2}{d{t}^2}x(t)+2\alpha \frac{d}{dt}x(t)+{w}_0^2x(t)=f(t) $$
(4)
$$ x(t)={e}^{-\alpha t}\left({B}_1\cos \left(2\pi {f}_dt\right)+{B}_2\sin \left(2\pi {f}_dt\right)\right)+x\left(\infty \right) $$
(5)

where α is the damping coefficient, ω0 is the resonant frequency, fd is the damped resonant frequency, and B1 and B2 are constants. The under-damped response occurs when α < ω0, which causes damped oscillations in the transient response of the RLC circuit, as shown in Fig. 3.
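As a numerical illustration of Eqs. (3) and (5), the sketch below evaluates both transient responses in Python; the function names and the sample parameter values are ours, not from the paper.

```python
import math

def first_order_response(t, x0, x_inf, tau):
    """Eq. (3): exponential charge/discharge toward the steady-state value x_inf."""
    return x_inf + (x0 - x_inf) * math.exp(-t / tau)

def second_order_response(t, alpha, f_d, B1, B2, x_inf):
    """Eq. (5): under-damped oscillation decaying toward the steady-state value."""
    return (math.exp(-alpha * t)
            * (B1 * math.cos(2 * math.pi * f_d * t)
               + B2 * math.sin(2 * math.pi * f_d * t))
            + x_inf)

# example: RC charging from x(0) = 1 toward x(inf) = 5 with tau = 2 s
print(first_order_response(0.0, 1.0, 5.0, 2.0))    # starts at x(0) = 1
print(first_order_response(100.0, 1.0, 5.0, 2.0))  # settles near x(inf) = 5
```

At t = τ the first-order response has covered about 63% of the gap between x(0) and x(∞), which is exactly the time-constant behavior sketched in Fig. 3.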

Fig. 1

First-order circuits: a) RC circuit; b) RL circuit

Fig. 2

Second-order (RLC) circuit

Fig. 3

Transient response of first-order and second-order circuits

2.2 Inspiration of the TSO Algorithm

In this section, the TSO algorithm is modeled in three stages: 1) initialization of the search agents between the lower and upper bounds of the search space; 2) searching for the best solution (exploration); and 3) reaching the steady state, i.e., the best solution (exploitation). First, the search agents are initialized randomly as in Eq. (6). Second, the exploration behavior of TSO is inspired by the oscillations of second-order RLC circuits around zero, as depicted in Fig. 3, while the exploitation of TSO is inspired by the exponential decay of the first-order discharge, also displayed in Fig. 3. A random number r1 balances between exploration (r1 ≥ 0.5) and exploitation (r1 < 0.5) of the TSO algorithm. The mathematical model of the exploitation and exploration of the TSO algorithm is given in Eq. (7), which is inspired by Eq. (3) and Eq. (5). The best solution (Yl*) of the TSO algorithm imitates the steady-state (final) value x(∞) of the electrical circuit, and B1 = B2 = |Yl − C1·Yl*|.

$$ Y= lb+\mathit{\operatorname{rand}}\times \left( ub- lb\right) $$
(6)
$$ {Y}_{l+1}=\begin{cases}{Y_l}^{\ast }+\left({Y}_l-{C}_1\,{Y_l}^{\ast}\right){e}^{-T} & {r}_1<0.5\\ {Y_l}^{\ast }+{e}^{-T}\left[\cos \left(2\pi T\right)+\sin \left(2\pi T\right)\right]\left|{Y}_l-{C}_1\,{Y_l}^{\ast}\right| & {r}_1\ge 0.5\end{cases} $$
(7)
$$ T=2\times z\times {r}_2-z $$
(8)
$$ {C}_1=k\times z\times {r}_3+1 $$
(9)
$$ z=2-2\left(l/{L}_{\mathrm{max}}\right) $$
(10)

where lb and ub are the lower and upper bounds of the search space; rand is a uniformly distributed random number; z is a variable that decreases from 2 to 0 as in Eq. (10); T and C1 are random coefficients; r1, r2, and r3 are random numbers distributed uniformly in [0, 1]; Yl is the position of the search agents; Yl* is the best position; l is the iteration number; k is a constant (k = 0, 1, 2, …); and Lmax is the maximum number of iterations. Furthermore, the balance between the exploration and exploitation processes is realized by the coefficient T, which varies in [−2, 2]. The exploitation process of the TSO algorithm is achieved when T > 0, while the exploration process is achieved when T < 0, as demonstrated in Fig. 4. The transient response in Fig. 4 starts at a high value, damps to its smallest value when T > 0, and then oscillates toward higher values again when T < 0. The pseudo-code of the TSO algorithm is depicted in Fig. 5. Note that the proposed algorithm is not complex: a single equation performs the position update and balances the exploration and exploitation procedures.
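The update rules of Eqs. (6)–(10) can be sketched in Python as follows. This is a minimal illustrative implementation, not the authors' code: the greedy best-solution update, the clipping to [lb, ub], and the parameter defaults are our assumptions around the pseudo-code of Fig. 5.

```python
import math
import random

def tso(cost, lb, ub, dim, n_agents=30, max_iter=500, k=1, seed=0):
    """Minimal TSO sketch following Eqs. (6)-(10) for a minimization problem."""
    rng = random.Random(seed)
    # Eq. (6): random initialization of the search agents in [lb, ub]
    Y = [[lb + rng.random() * (ub - lb) for _ in range(dim)]
         for _ in range(n_agents)]
    fits = [cost(y) for y in Y]
    f_best = min(fits)
    Y_best = list(Y[fits.index(f_best)])
    for l in range(max_iter):
        z = 2 - 2 * l / max_iter                      # Eq. (10): z decays from 2 to 0
        for i in range(n_agents):
            r1, r2, r3 = rng.random(), rng.random(), rng.random()
            T = 2 * z * r2 - z                        # Eq. (8): T in [-z, z]
            C1 = k * z * r3 + 1                       # Eq. (9)
            for d in range(dim):
                if r1 < 0.5:                          # exploitation branch of Eq. (7)
                    y = Y_best[d] + (Y[i][d] - C1 * Y_best[d]) * math.exp(-T)
                else:                                 # exploration branch of Eq. (7)
                    y = Y_best[d] + math.exp(-T) * (
                        math.cos(2 * math.pi * T) + math.sin(2 * math.pi * T)
                    ) * abs(Y[i][d] - C1 * Y_best[d])
                Y[i][d] = min(max(y, lb), ub)         # clip to bounds (our choice)
            f = cost(Y[i])
            if f < f_best:                            # greedy update of the best
                f_best, Y_best = f, list(Y[i])
    return Y_best, f_best

# usage on the sphere benchmark (F1)
sphere = lambda x: sum(v * v for v in x)
best, best_f = tso(sphere, -100.0, 100.0, dim=10, n_agents=30, max_iter=200)
```

With the standard settings of the paper (30 agents, 500 iterations), only the single piecewise update of Eq. (7) is evaluated per agent per iteration, which is what keeps the algorithm simple.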

Fig. 4

Exploration and exploitation process

Fig. 5

Pseudo-code of the TSO algorithm

In addition, the computational complexity of the TSO algorithm can be expressed in big-O notation. The TSO process begins with the initialization of the search agents, evaluates them with the cost function, and then updates them according to the function evaluations. The initialization costs O(N), where N is the number of search agents. The search agents then enter the main loop, which runs for at most Lmax iterations, so the function evaluations of all search agents cost O(N × Lmax). Finally, updating all search agents of dimension D over Lmax iterations costs O(N × Lmax × D). Therefore, the overall computational complexity of the TSO algorithm is O(N × (Lmax × D + Lmax + 1)).

2.3 Verification of the TSO Algorithm

In this section, the robustness of the TSO algorithm is tested using 23 well-known benchmark functions. These functions are sorted into three categories [44]: uni-modal, multi-modal, and fixed-dimension multi-modal functions, as depicted in Tables 1, 2, and 3. Uni-modal functions are usually used to check the exploitation ability of algorithms, while multi-modal functions are usually used to check their exploration capability. The experiments were performed in MATLAB R2016b, and all tests were executed on a PC (Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz (8 CPUs), 16 GB RAM, Windows 7, 64-bit). First, the TSO algorithm is compared with eight famous algorithms that are widely applied to engineering problems: the salp swarm algorithm (SSA) [15], grey wolf optimizer (GWO) [7], whale optimization algorithm (WOA) [9], PSO, CS, DE, FEP, and GSA. For a fair comparison, all algorithms use the same population size (30), the same maximum number of iterations (500), and the same number of independent runs (30). The algorithm-specific parameters are listed in Table 4. The statistical results (average and standard deviation) of the 30 independent runs are reported in Table 5, where the TSO algorithm is compared with the other eight algorithms. The comparison reveals that the TSO algorithm achieved more best values (15/23) than any other algorithm, so it holds the first rank, with DE in second place. It can also be noticed that an algorithm may achieve a good result on the Rastrigin function (F9) yet a poor result on the Rosenbrock function (F5).
The TSO algorithm, however, achieved good results on both functions, which indicates that it balances the exploration and exploitation processes, as shown in Table 5. Furthermore, Schwefel's function (F8) is a challenging test function for most metaheuristic algorithms, but the TSO algorithm succeeded in finding its best solution. For further investigation, the TSO algorithm is compared with recently emerged algorithms, namely the sandpiper optimization algorithm (SOA) [45], hybrid sine cosine algorithm (HSCA) [46], enhanced salp swarm algorithm (ESSA) [47], augmented grey wolf optimizer (AGWO) [48], GA, ABC [49], and FA [50], as shown in Tables 6, 7, and 8. This comparison reveals that the TSO algorithm achieved the first rank in best results (17/23).

Table 1 Uni-modal test functions
Table 2 Multi-modal benchmark functions
Table 3 Fixed-dimension multi-modal benchmark functions
Table 4 Specific setting of the parameters of compared algorithms
Table 5 Average and standard deviation of compared algorithms for twenty-three benchmark functions
Table 6 Extra comparison with recent algorithms for uni-modal benchmark functions
Table 7 Extra comparison with recent algorithms for multi-modal benchmark functions
Table 8 Extra comparison with recent algorithms for fixed-dimension multi-modal benchmark functions

Moreover, the superiority of the TSO algorithm is tested using the non-parametric Wilcoxon signed-rank test at the 5% significance level, as shown in Table 9. The test computes the rank of each algorithm on each benchmark function, and the ranks are then summed over all 23 benchmark functions, as shown in Table 9. This test reveals that the TSO algorithm achieved the first rank compared with the other algorithms. The p value is also calculated for each benchmark function; the null hypothesis (that there is no difference between algorithms) is rejected because all p values are below the significance level (5%). On the other hand, the execution times of the TSO algorithm and the other algorithms are compared in Table 10. The PSO algorithm has the shortest execution time due to its simplicity, and the TSO algorithm has the second shortest. The DE algorithm, in contrast, has the second longest execution time. Finally, Fig. 6 shows that the TSO algorithm mostly converges faster than the other algorithms (PSO, SSA, GWO, DE, CS, and WOA).
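The idea behind such a non-parametric comparison can be illustrated with a simple two-sided sign test, a close relative of the Wilcoxon signed-rank test used in the paper; the run data below are hypothetical best-cost values, not the paper's results.

```python
import math

def sign_test_p(x, y):
    """Two-sided sign test on paired samples: under H0 each difference is
    equally likely to be positive or negative (ties are dropped)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    wins = sum(d < 0 for d in diffs)          # runs where x had the lower cost
    k = min(wins, n - wins)
    # two-sided p value from Binomial(n, 0.5)
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# hypothetical best costs of two algorithms over 10 independent runs
alg_a = [0.8, 0.5, 0.6, 0.7, 0.4, 0.9, 0.3, 0.5, 0.6, 1.2]
alg_b = [1.0, 0.9, 0.8, 1.1, 0.7, 1.0, 0.6, 0.9, 1.0, 1.0]
p = sign_test_p(alg_a, alg_b)   # alg_a wins 9 of 10 runs
print(p < 0.05)                 # True: reject H0 at the 5% level
```

The Wilcoxon signed-rank test used in Table 9 additionally weights each win by the magnitude of the paired difference, which makes it more powerful than this plain sign test.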

Table 9 Rank test of algorithms using Wilcoxon signed rank test
Table 10 Execution time for 23 benchmark functions
Fig. 6

Convergence curves of twenty-three benchmark functions

3 Application of the TSO to Classical Engineering Problems

In this section, the TSO algorithm is examined using three well-known classical engineering design problems: tension coil spring design, welded beam design, and pressure vessel design. These are multi-constrained problems that assess the capability of the TSO algorithm. Each engineering problem is solved 30 times, with a maximum of 500 iterations and a population size of 30 for all optimization algorithms.

3.1 Coil Spring Design

The fitness function of this engineering test problem is the weight of a tension coil spring, to be minimized, as shown in Fig. 7 [51]. The optimal weight is subject to four constraints: deflection, shear stress, surge wave frequency, and outer diameter, as written in Eq. (11). There are three design variables: wire diameter (d), mean coil diameter (D), and number of active coils (N). The TSO algorithm is employed to optimize this problem and is compared with several algorithms. The statistical results (minimum, average, and standard deviation over 30 runs) are listed in Table 11. The TSO algorithm and the interactive search algorithm (ISA) [55] offer the minimum weight compared with the other algorithms, which proves the capability of the TSO algorithm to solve this optimization problem.

$$ {\displaystyle \begin{array}{l}x=\left[{x}_1\ {x}_2\ {x}_3\right]=\left[d\ D\ N\right]\\ {}f(x)={x_1}^2{x}_2\left({x}_3+2\right)\\ {}{g}_1(x)=1-\frac{{x_2}^3{x}_3}{71785{x_1}^4}\le 0\\ {}{g}_2(x)=\frac{4{x_2}^2-{x}_1{x}_2}{12566{x_1}^3\left({x}_2-{x}_1\right)}+\frac{1}{5108{x_1}^2}-1\le 0\\ {}{g}_3(x)=1-\frac{140.45{x}_1}{x_3{x_2}^2}\le 0\\ {}{g}_4(x)=\frac{x_1+{x}_2}{1.5}-1\le 0\end{array}} $$
(11)
Fig. 7

Tension coil spring

Table 11 Optimum results of tension coil spring

The ranges of the variables are

$$ {\displaystyle \begin{array}{c}0.05\le {x}_1\le 2\\ {}0.25\le {x}_2\le 1.3\\ {}2.00\le {x}_3\le 15\end{array}} $$
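For reference, the objective and constraints of Eq. (11) can be coded directly; the feasibility check below uses an arbitrary feasible design point of our choosing, not the optimum.

```python
def spring_cost(x):
    """Objective of Eq. (11): spring weight, with x = [d, D, N]."""
    x1, x2, x3 = x
    return x1 ** 2 * x2 * (x3 + 2)

def spring_constraints(x):
    """Constraints g1..g4 of Eq. (11); a design is feasible when all are <= 0."""
    x1, x2, x3 = x
    return [
        1 - (x2 ** 3 * x3) / (71785 * x1 ** 4),                       # deflection
        (4 * x2 ** 2 - x1 * x2) / (12566 * x1 ** 3 * (x2 - x1))
        + 1 / (5108 * x1 ** 2) - 1,                                   # shear stress
        1 - 140.45 * x1 / (x3 * x2 ** 2),                             # surge frequency
        (x1 + x2) / 1.5 - 1,                                          # outer diameter
    ]

x = [0.06, 0.4, 15]                                 # arbitrary feasible design
print(spring_cost(x))                               # ~0.02448
print(all(g <= 0 for g in spring_constraints(x)))   # True
```

Any candidate produced by the optimizer can be screened with `spring_constraints` before its weight is compared against Table 11.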

3.2 Welded Beam Design

The objective function of the welded beam design problem is the minimum cost, as depicted in Fig. 8 [56]. The cost function is subject to seven constraints, as formulated in Eq. (12). Four design variables must be optimized to minimize the cost: weld thickness (h), length of the attached part of the bar (l), bar height (t), and bar thickness (b). The TSO algorithm is applied to attain the lowest cost of the welded beam design and is compared with other algorithms. The statistical results (minimum, average, and standard deviation over 30 runs) are listed in Table 12. The TSO, ISA, and interactive fuzzy search algorithm (IFSA) [57] offer the minimum cost among the compared algorithms; however, ISA and IFSA are hybrid algorithms that require a longer execution time.

$$ {\displaystyle \begin{array}{l}x=\left[{x}_1\ {x}_2\ {x}_3\ {x}_4\right]=\left[h\ l\ t\ b\right]\\ {}f(x)=1.10471{x_1}^2{x}_2+0.04811{x}_3{x}_4\left(14.0+{x}_2\right)\\ {}{g}_1(x)=\tau -{\tau}_{\mathrm{max}}\le 0\\ {}{g}_2(x)=\sigma -{\sigma}_{\mathrm{max}}\le 0\\ {}{g}_3(x)=\delta -{\delta}_{\mathrm{max}}\le 0\\ {}{g}_4(x)={x}_1-{x}_4\le 0\\ {}{g}_5(x)=P-{P}_c\le 0\\ {}{g}_6(x)=0.125-{x}_1\le 0\\ {}{g}_7(x)=0.10471{x_1}^2+0.04811{x}_3{x}_4\left(14.0+{x}_2\right)-5.0\le 0\end{array}} $$
(12)
Fig. 8

Welded beam design

Table 12 Optimum results of welded beam design

The range of variables is

$$ {\displaystyle \begin{array}{l}0.1\le {x}_1\le 2\\ {}0.1\le {x}_2\le 10\\ {}0.1\le {x}_3\le 10\\ {}0.1\le {x}_4\le 2\end{array}} $$

where

$$ {\displaystyle \begin{array}{l}P=6000\ lb;L=14\ in;E=30\times {10}^6\ psi;G=12\times {10}^6\ psi\\ {}{\tau}_{\mathrm{max}}=13600\ psi;{\sigma}_{\mathrm{max}}=30000\ psi;{\delta}_{\mathrm{max}}=0.25\ in\\ {}M=P\left(L+\frac{x_2}{2}\right);R=\sqrt{\frac{{x_2}^2}{4}+{\left(\frac{x_1+{x}_3}{2}\right)}^2};{\tau}^{\prime }=\frac{P}{\sqrt{2}{x}_1{x}_2};{\tau}^{\prime \prime }=\frac{MR}{J}\end{array}} $$
$$ {\displaystyle \begin{array}{l}J=2\sqrt{2}{x}_1{x}_2\left(\frac{{x_2}^2}{12}+{\left(\frac{x_1+{x}_3}{2}\right)}^2\right);\tau =\sqrt{{\tau}^{\prime 2}+{\tau}^{\prime }{\tau}^{\prime \prime}\frac{x_2}{R}+{\tau}^{\prime \prime 2}}\\ {}{P}_c=4.013\frac{E}{L^2}\sqrt{\frac{x_3^2{x}_4^6}{36}}\left(1-\frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)\\ {}\sigma =\frac{6 PL}{x_4{x_3}^2};\delta =\frac{4P{L}^3}{E{x_3}^3{x}_4}\end{array}} $$
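A sketch of the cost and constraint evaluation of Eq. (12) follows. We use the standard forms τ′ = P/(√2·x1·x2) and δ = 4PL³/(E·x3³·x4) for this benchmark, and the test point is an arbitrary feasible design of our choosing, not the optimum.

```python
import math

# constants from the problem statement
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam(x):
    """Eq. (12): return (cost, [g1..g7]); feasible when all g_i <= 0."""
    x1, x2, x3, x4 = x
    cost = 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    M = P * (L + x2 / 2)
    R = math.sqrt(x2 ** 2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 * math.sqrt(2) * x1 * x2 * (x2 ** 2 / 12 + ((x1 + x3) / 2) ** 2)
    tau_p = P / (math.sqrt(2) * x1 * x2)            # primary shear stress
    tau_pp = M * R / J                              # secondary (torsional) shear
    tau = math.sqrt(tau_p ** 2 + tau_p * tau_pp * x2 / R + tau_pp ** 2)
    sigma = 6 * P * L / (x4 * x3 ** 2)              # bending stress
    delta = 4 * P * L ** 3 / (E * x3 ** 3 * x4)     # end deflection
    Pc = (4.013 * E / L ** 2) * math.sqrt(x3 ** 2 * x4 ** 6 / 36) \
         * (1 - x3 / (2 * L) * math.sqrt(E / (4 * G)))  # buckling load
    g = [tau - TAU_MAX, sigma - SIGMA_MAX, delta - DELTA_MAX, x1 - x4,
         P - Pc, 0.125 - x1,
         0.10471 * x1 ** 2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0]
    return cost, g

cost, g = welded_beam([0.3, 4.0, 9.0, 0.5])   # arbitrary feasible design
print(round(cost, 4))                         # ~4.2946
print(all(gi <= 0 for gi in g))               # True
```

This evaluator makes explicit how the seven constraints of Eq. (12) couple the four design variables through the shear, bending, deflection, and buckling formulas.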

3.3 Pressure Vessel Design

The purpose of this design problem is to find the minimum cost of a cylindrical pressure vessel, as shown in Fig. 9. The objective function is subject to four constraints, as demonstrated in Eq. (13). Four parameters are to be optimized: shell thickness (Ts), head thickness (Th), inner radius (R), and length of the cylindrical section without the heads (L). The TSO algorithm is applied to obtain the minimum cost of the cylindrical pressure vessel design, and the results are compared with those of other algorithms. The statistical results (minimum, average, and standard deviation over 30 runs) are listed in Table 13. It is worth noting that the TSO algorithm offers the minimum cost compared with the other algorithms, while the ISA and IFSA algorithms are hybrid algorithms that take a longer time to find the minimum solution.

$$ {\displaystyle \begin{array}{l}x=\left[{x}_1\ {x}_2\ {x}_3\ {x}_4\right]=\left[{T}_s\ {T}_h\ R\ L\right]\\ {}f(x)=0.6224{x}_1{x}_3{x}_4+1.7781{x}_2{x_3}^2+3.1661{x_1}^2{x}_4+19.84{x_1}^2{x}_3\\ {}{g}_1(x)=-{x}_1+0.0193{x}_3\le 0\\ {}{g}_2(x)=-{x}_2+0.00954{x}_3\le 0\\ {}{g}_3(x)=-\pi {x_3}^2{x}_4-\frac{4}{3}\pi {x_3}^3+1296000\le 0\\ {}{g}_4(x)={x}_4-240\le 0\end{array}} $$
(13)
Fig. 9

Cylindrical vessel design

Table 13 Optimum results of cylindrical pressure vessel design

The ranges of the variables are

$$ {\displaystyle \begin{array}{l}0\le {x}_1\le 99\\ {}0\le {x}_2\le 99\\ {}10\le {x}_3\le 200\\ {}10\le {x}_4\le 200\end{array}} $$
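The pressure vessel objective and constraints of Eq. (13) can likewise be coded directly. The test point below is a near-optimal design often reported for this benchmark (cost ≈ 6059.7); since it sits essentially on the constraint boundaries, the feasibility check uses a small tolerance.

```python
import math

def vessel_cost(x):
    """Eq. (13) objective: fabrication cost, with x = [Ts, Th, R, L]."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x):
    """Constraints g1..g4 of Eq. (13); feasible when all are <= 0."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,                                   # shell thickness
        -x2 + 0.00954 * x3,                                  # head thickness
        -math.pi * x3 ** 2 * x4
        - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000,         # minimum volume
        x4 - 240,                                            # length limit
    ]

x = [0.8125, 0.4375, 42.098445, 176.636596]   # near-optimal design (rounded)
print(round(vessel_cost(x), 2))               # ~6059.71
print(vessel_constraints(x)[3] < 0)           # True: length limit satisfied
```

Because g1 and g3 are active at this design, tiny rounding of the variables can push them marginally positive, which is why penalty-based handling of these constraints matters in practice.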

4 Conclusion

This paper has presented a novel physics-inspired algorithm called the TSO algorithm. The suggested algorithm is inspired by the transient behavior of switched electrical RLC circuits. The TSO algorithm is verified using 23 benchmark functions and three well-known classical engineering problems. The statistical results (mean, standard deviation, rank, and p values) prove the superiority and significance of the TSO algorithm compared with 15 algorithms (SSA, WOA, GWO, GSA, DE, FEP, PSO, CS, ESSA, AGWO, SOA, HSCA, GA, ABC, and FA). The convergence curves verify the fast convergence of TSO in comparison with the other algorithms. Constrained problems are used to check the TSO algorithm, where the constraints make it much more difficult to find the optimal values. The TSO algorithm offers the minimum weight of the coil spring design, the minimum cost of the welded beam design, and the minimum cost of the cylindrical pressure vessel design. The superiority of the TSO algorithm reflects its flexibility, robustness, and proper design. In future work, the TSO algorithm will be applied in different fields, such as feature selection and optimal power flow. The simplicity of the algorithm and the quality of its results encourage researchers to apply the TSO algorithm in other disciplines.