1 Introduction

In systems engineering, the order of the plant or of the designed controller can be very high, which places an additional burden on simulation and computation. If such a higher-order system or controller can be approximated by one of lower order, computation time and cost are saved. System approximation methods may be classified under three headings: classical methods, optimization-based methods, and mixed methods.

The classical methods, proposed by different researchers, are based on simple techniques such as dominant pole preservation, partitioning of the state matrix, retention of dominant eigenvalues, continued fraction expansion, time-moment matching, the factor division method, clustering of poles and zeros, and the Routh stability criterion. These classical methods have also been combined by many researchers to obtain reduced-order systems. Their disadvantages are a higher computational cost and, in some cases, loss of stability. Recently, the Improved Balanced Realization technique [24] was proposed, in which the denominator is calculated by balanced realization and the numerator by a simple mathematical procedure. Some other classical methods based on second-order optimization are reported in [9, 10].

In the second category, optimization techniques are used for order reduction. They are based on minimizing the error between the original and the reduced systems [13]. The error functions used include the Integral of Absolute Error (IAE) [4], Integral of Time multiplied by Absolute Error (ITAE) [30], Integral of Square Error (ISE) [4], Integral of Time multiplied by Square Error (ITSE) [30], the L1, L2, and L-infinity norms [14], weighted error functions [12], etc. Nature-inspired meta-heuristic algorithms are widely used in this category [23]. The first of these is the Genetic Algorithm (GA) [16], which works on survival of the fittest. Other models include the Pachycondyla apicalis (API) metaheuristic [21], based on the foraging behavior of a population of Pachycondyla apicalis ants; Particle Swarm Optimization (PSO) [17], based on the flocking behavior of birds; the Harmony Search Algorithm (HSA) [18], based on musical improvisation; the Cuckoo Search Algorithm (CSA) [3, 27], based on cuckoo birds that lay their eggs in the nests of other host birds; and Big Bang–Big Crunch [5], based on the evolution of the Universe. These techniques have been applied successfully in various fields, for example API in parameter identification of a chaotic electrical system [19], Cuckoo Search in order reduction [27], and PSO in fuzzy predictive adaptive controller design [6]. Furthermore, these optimization techniques can be combined to obtain better results in any field of research [8].

In the third category, order reduction is performed by combining classical methods with optimization algorithms. Nadi [2] proposed a mixed method for single-input single-output (SISO) and multi-input multi-output (MIMO) systems in which the dominant poles were retained from the original system while the remaining parameters were selected by PSO. Big Bang–Big Crunch optimization was combined with Routh approximation by Desai et al. [11] and, more recently, with time-moment matching [5]. In these two methods, the numerator is calculated by Big Bang–Big Crunch optimization while the denominator is estimated by Routh approximation and time-moment matching, respectively. Two unified hybrid metaheuristic algorithms for discrete-time systems were proposed by Ganguli [15], in which Grey Wolf optimization and the Firefly algorithm were combined for order reduction. Despite this large number of methods, new algorithms are still in great demand.

The system approximation techniques reported so far are either problem-specific or yield large errors in the resulting reduced system; moreover, they are time-consuming and computationally expensive. This motivates researchers to explore new approximation techniques that are more effective, more general, and computationally cheaper. This article presents a new technique based on ALO that overcomes the problems listed above. ALO reduces the error between the two systems exceptionally well because of its high efficiency, high computational speed, and fast convergence. The competency of the proposed algorithm is demonstrated by reducing several benchmark systems, including a system of order 84, and the results obtained are better than those of existing techniques in the literature. The performance of the proposed approximated system is analyzed not only in the time domain but also in the frequency domain. The final ISE and CPU usage time are lower than those of other optimization techniques such as the genetic algorithm (GA) and particle swarm optimization (PSO). Therefore, this article contributes a more effective, computationally less expensive, and general technique of system approximation.

2 Problem Formulation

2.1 Definition of Order Reduction

Let us define a system transfer function of order k as

$$\begin{aligned} G_k(s):u\rightarrow y_k\ \end{aligned}$$
(1)

The objective of order reduction is to find out a new transfer function \(R_r(s)\) of reduced order r which is defined as

$$\begin{aligned} R_r(s):u\rightarrow y_r \quad {\text { with }}\; r<k \end{aligned}$$
(2)

provided the input–output characteristics of the two systems are the same, that is, for the same input u(t), the outputs satisfy \(y_k(t)\approx y_r(t)\). For definiteness, the systems considered in this paper are proper LTI systems. The proposed method is limited to transfer function matrices such that

  • \(G_k(s)\) is rational and stable, i.e., its poles lie in the left half of the s-plane.

  • In the frequency domain, \(G_k(j\omega )\) is nonzero for all \(\omega \), including \(\omega =\infty \).

In other words, MOR leads to an optimization problem: find the reduced model \(R_r(s)\) of the full-order plant \(G_k(s)\) such that

$$\begin{aligned} {\hbox {Minimize}}\, {\hbox {ISE}}=\int \limits _{0}^{\infty }{e{{(t)}^{2}}{\hbox {d}}t} \end{aligned}$$
(3)

where \(e(t)=y_k(t)-y_r(t)\).

The ISE criterion is selected because it penalizes large errors quickly, in both the transient and the steady state, compared with other integral performance measures such as IAE, ITAE, and ITSE.
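As a sketch of how Eq. (3) can be evaluated numerically, the helper below approximates the ISE between the unit-step responses of the full and reduced models. SciPy is assumed available, and the time horizon and grid density are illustrative choices rather than values taken from the paper.

```python
import numpy as np
from scipy import signal

def ise(full_sys, reduced_sys, t_end=10.0, n_points=2000):
    """Approximate the ISE of Eq. (3) for unit-step inputs.

    full_sys, reduced_sys : scipy.signal.TransferFunction objects.
    t_end, n_points       : discretization choices (assumptions, not from the paper).
    """
    t = np.linspace(0.0, t_end, n_points)
    _, y_full = signal.step(full_sys, T=t)
    _, y_red = signal.step(reduced_sys, T=t)
    e = y_full - y_red                      # e(t) = y_k(t) - y_r(t)
    return np.trapz(e ** 2, t)              # trapezoidal approximation of the integral
```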

2.2 Single-Input Single-Output (SISO) Systems

Let the higher-order system be given as follows:

$$\begin{aligned} {G_k}(s) = \frac{{{N_{k - 1}}(s)}}{{{D_k}(s)}} = \frac{{\sum \limits _{i = 0}^{k - 1} {{b_i}{s^i}} }}{{\sum \limits _{i = 0}^k {{a_i}{s^i}} }} \end{aligned}$$
(4)

where \({a_i}\) and \({b_i}\) are constants; for unity steady-state output, \({a_0} = {b_0}\).

The objective is to obtain a reduced system of order \(r\) \((r < k)\) that preserves all essential characteristics of the higher-order system and is represented as follows:

$$\begin{aligned} {R_r}(s) = \frac{{{N_{r - 1}}(s)}}{{{D_r}(s)}} = \frac{{\sum \limits _{i = 0}^{r - 1} {{d_i}{s^i}} }}{{\sum \limits _{i = 0}^r {{c_i}{s^i}} }} \end{aligned}$$
(5)

where \({c_i}\) and \({d_i}\) are unknown constants.
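For illustration, a model in the form of Eq. (4) can be constructed from its coefficient vectors as sketched below (SciPy assumed); note that SciPy expects coefficients in descending powers of s, whereas Eqs. (4)–(5) index them in ascending order. The numerical values are those of the fourth-order benchmark used later in Example 1.

```python
from scipy.signal import TransferFunction

# Ascending-order coefficients b_0..b_{k-1} and a_0..a_k as in Eq. (4); the values
# are those of the fourth-order benchmark of Example 1.
b = [24.0, 24.0, 7.0, 1.0]          # b0 + b1*s + b2*s^2 + b3*s^3
a = [24.0, 50.0, 35.0, 10.0, 1.0]   # a0 + a1*s + ... + a4*s^4

Gk = TransferFunction(b[::-1], a[::-1])   # SciPy expects descending powers of s
print("DC gain b0/a0 =", b[0] / a[0])     # equals 1 when a0 = b0 (unity steady state)
```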

3 Ant Lion Optimization (ALO)

Ant Lion Optimization is a meta-heuristic optimization technique developed in 2015 by Mirjalili [20]; it models the trapping of ants by antlions. Antlions, sometimes called doodlebugs, belong to the family Myrmeleontidae and live in two phases, larva and adult. In the larval form they have a very interesting hunting mechanism: the antlion digs a small cone-shaped pit to trap ants, sits at the bottom, and waits for prey to fall in. It consumes the trapped prey and throws the leftovers out of the pit, which is then rebuilt for the next hunt. A hungrier antlion digs a larger pit, and this is the inspiration for the ALO algorithm. The hunting behavior comprises the random walk of ants, building traps, entrapment of ants in traps, catching prey, and rebuilding traps. The mathematical model of ALO can be summarized in the following six steps.

Step 1: Random walk of ants. The random walk can be written mathematically as

$$\begin{aligned} {{X}_{i}}=\left[ 0,\,{{f}_{r}}(1),\,{{f}_{r}}(1)+{{f}_{r}}(2),\,\ldots ,\,\sum \limits _{j=1}^{N-1}{{{f}_{r}}(j)},\,\sum \limits _{j=1}^{N}{{{f}_{r}}(j)} \right] \end{aligned}$$
(6)

where \(i=1,2,\ldots ,din\), with din the dimension of each ant and antlion (the number of decision variables), N is the number of iterations, and \(f_r(j)\) is a stochastic function given as

$$\begin{aligned} f_r(j)=\left\{ \begin{matrix} 1 \qquad \;\;\hbox {if}\; P_\mathrm{rand}> 0.5 \\ -1\qquad \hbox {if}\; P_\mathrm{rand}\le 0.5 \\ \end{matrix} \right. \end{aligned}$$
(7)

where \(P_\mathrm{rand}\) is a pseudo-random number generated in the interval [0, 1]. The random walks above are mapped into the actual search space, bounded by lower and upper limits, using the following formula

$$\begin{aligned} {{Y}_{i}}=\left( \frac{{{X}_{i}}-{{a}_{i}}}{{{b}_{i}}-{{a}_{i}}} \right) \times \left( {{d}_{i}}-{{c}_{i}} \right) +{{c}_{i}} \end{aligned}$$
(8)

where \(a_i\) and \(b_i\) are the minimum and maximum values of \(X_i\), and \(c_i\) and \(d_i\) are the minimum and maximum of the ith variable. In Eq. (8), \(X_i\) is first normalized to the domain [0, 1] using \(\left( \frac{{{X}_{i}}-{{a}_{i}}}{{{b}_{i}}-{{a}_{i}}} \right) \) and then scaled to the domain \(\left[ {{c}_{i}},\,{{d}_{i}} \right] \).
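A minimal sketch of Eqs. (6)–(8) in Python (NumPy assumed): a cumulative ±1 walk followed by min–max scaling into the interval \([c_i, d_i]\). Function names are illustrative.

```python
import numpy as np

def random_walk(N, rng=None):
    """Cumulative random walk of Eq. (6), using the step function of Eq. (7)."""
    rng = np.random.default_rng() if rng is None else rng
    steps = np.where(rng.random(N) > 0.5, 1.0, -1.0)    # f_r(j)
    return np.concatenate(([0.0], np.cumsum(steps)))     # X_i, length N + 1

def normalise(X, c, d):
    """Map the walk into the interval [c, d] as in Eq. (8)."""
    a, b = X.min(), X.max()
    return (X - a) / (b - a) * (d - c) + c
```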

Step 2: Trapping ants in antlions’ pit

If the minimum and maximum changing limits at the current iteration are denoted by \(c_{\min }^{'}\) and \(d_{\max }^{'}\), and the position of the antlion selected on the basis of its fitness value by the Roulette wheel is denoted by \(P_\mathrm{Antlion}\), then the ants' movement can be written mathematically as

$$\begin{aligned} c_{\min }={c_{\min }^{'}}+P_\mathrm{Antlion},\qquad d_{\max }={d_{\max }^{'}}+P_\mathrm{Antlion} \end{aligned}$$
(9)

Step 3: Building traps

The probability that an antlion builds a trap is proportional to its fitness value, and the antlion is selected by the Roulette wheel.
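Roulette wheel selection, used in Steps 2 and 3, can be sketched as follows for a minimization problem such as Eq. (3). The inverse-fitness weighting is an implementation choice (an assumption), since the text only states that selection is fitness-proportional.

```python
import numpy as np

def roulette_wheel(fitness, rng=None):
    """Return the index of one antlion; smaller fitness (ISE) means higher probability.

    The inverse-fitness weighting is an illustrative choice for minimization;
    any monotone weighting that favors fitter antlions would serve.
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = 1.0 / (np.asarray(fitness, dtype=float) + 1e-12)  # avoid division by zero
    return rng.choice(len(fitness), p=weights / weights.sum())
```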

Step 4: Ants sliding

Once an ant is stuck, it tries to get away, and the antlion slides it toward the bottom of the pit. The updated values of \(c_{\min }^{'}\) and \(d_{\max }^{'}\) are calculated by the following equations

$$\begin{aligned} {c_{\min }^{'}}=\frac{lb}{{{10}^{w}}\times \left( {t}/{N}\; \right) },\qquad {d_{\max }^{'}}=\frac{ub}{{{10}^{w}}\times \left( {t}/{N}\; \right) } \end{aligned}$$
(10)

where t is the current iteration, lb and ub are the lower and upper bounds, and w is a constant that increases with the iteration number to intensify exploitation.
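The boundary-shrinking rule of Eq. (10) can be sketched as below. The stepwise schedule for w is not given explicitly in the text; the thresholds used here follow common ALO implementations and should be read as an assumption.

```python
def exploitation_exponent(t, N):
    """Stepwise schedule for w (an assumption following common ALO implementations)."""
    frac = t / N
    if frac > 0.95: return 6
    if frac > 0.90: return 5
    if frac > 0.75: return 4
    if frac > 0.50: return 3
    if frac > 0.10: return 2
    return 1

def shrink_bounds(lb, ub, t, N):
    """Adaptive limits of Eq. (10); t is the current iteration (1-based)."""
    ratio = (10 ** exploitation_exponent(t, N)) * (t / N)
    return lb / ratio, ub / ratio
```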

Step 5: Elitism

The location of each ant is updated according to the random walks around the antlion selected by the Roulette wheel and around the elite; mathematically, this is written as

$$\begin{aligned} P_\mathrm{update}=\frac{{{R}_\mathrm{antlion}}+{{R}_\mathrm{elite}}}{2} \end{aligned}$$
(11)

where \(P_\mathrm{update}\) denotes the updated location, \(R_\mathrm{antlion}\) and \(R_\mathrm{elite}\) are the random walks around the antlion and elite selected by the Roulette wheel.

Step 6: Catching Prey

When an ant reaches the bottom of the pit, the antlion catches it and takes its place. When prey is caught, the update \(Antlion=Ant\) is made, provided \(f(Ant)<f(Antlion)\).
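Steps 5 and 6 reduce to the two small helpers sketched below: the elitism average of Eq. (11) and the replacement rule for a caught ant. Function names are illustrative, not taken from [20].

```python
def elitism_update(R_antlion, R_elite):
    """Eq. (11): average of the random walks around the selected antlion and the elite."""
    return (R_antlion + R_elite) / 2.0

def catch_prey(ant_pos, ant_fit, antlion_pos, antlion_fit):
    """Step 6: the antlion relocates to the ant's position if the ant is fitter."""
    if ant_fit < antlion_fit:              # f(Ant) < f(Antlion)
        return ant_pos, ant_fit
    return antlion_pos, antlion_fit
```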

ALO has fewer parameters to initialize, better search performance, and faster convergence [20] compared with GA and PSO. The algorithm has also been tested for training Multi-Layer Perceptrons [31], where its classification rates were the best (>90%) compared with GA, PSO, and Ant Colony Optimization (ACO). A tabular comparison is shown in Table 1, which indicates that ALO has lower computational complexity and a balanced exploration–exploitation trade-off.

Table 1 A comparison of ALO

4 Proposed Methodology

The theory of optimization is employed here to calculate the reduced numerator and denominator by minimizing the predefined fitness function. Although the reduced system may in principle be of any type or order, the following second-order reduced system is considered in this paper.

$$\begin{aligned} {R_2}(s) = \frac{{{d_0} + {d_1}s}}{{{c_0} + {c_1}s + {c_2}{s^2}}} \end{aligned}$$
(12)

where \(d_0\), \(d_1\) and \(c_0\), \(c_1\), \(c_2\) are the unknown coefficients of the numerator and denominator, respectively, which are to be determined.

The proposed algorithm is therefore used to obtain the best values of the coefficients in Eq. (12) for the reduced system by minimizing the performance index of Eq. (3). The algorithmic steps for finding the numerator and denominator coefficients of the reduced system of Eq. (5) are given in Algorithm 1.

Algorithm 1 Pseudocode of the proposed ALO-based order reduction
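The fitness evaluated inside Algorithm 1 can be sketched as follows: a candidate decision vector \([d_0, d_1, c_0, c_1, c_2]\) is mapped to the second-order model of Eq. (12) and scored by the ISE of Eq. (3), reusing the `ise` helper sketched in Sect. 2.1. The stability check and the penalty value are implementation choices, not prescriptions from the paper.

```python
from scipy.signal import TransferFunction

def fitness(x, full_sys):
    """ISE of the candidate reduced model R_2(s) of Eq. (12).

    x = [d0, d1, c0, c1, c2].  Degenerate or unstable candidates are penalized
    with a large value; the penalty is an implementation choice.
    """
    d0, d1, c0, c1, c2 = x
    if c2 == 0.0:
        return 1.0e6
    reduced = TransferFunction([d1, d0], [c2, c1, c0])
    if any(p.real >= 0 for p in reduced.poles):   # keep only stable candidates
        return 1.0e6
    return ise(full_sys, reduced)                 # helper from Sect. 2.1
```

In an ALO run this function is simply the objective: the five coefficients are the search dimensions (din = 5), bounded by user-chosen lower and upper limits lb and ub.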

5 Computational Experiments

This section discusses four different examples to show the superiority of the proposed MOR method. The first and second examples are fourth- and eighth-order SISO systems. The third example has a time delay and becomes a non-minimum-phase system after Pade approximation of the delay. The fourth example is of 84th order and is reduced to a second-order system by the proposed algorithm. The results of all examples are compared with up-to-date order reduction techniques available in the literature as well as with the GA and PSO methods. Several trials were made, and the best solutions (lowest ISE values) are reported, with the ALO parameters shown in Table 2. All problems are solved on a computer with an Intel(R) Core(TM) i3-5005U 2.00 GHz processor and 4.0 GB RAM.

Table 2 ALO parameters

Example 1

Consider a fourth-order benchmark problem ([5, 28, 29]) with the following transfer function

$$\begin{aligned} G(s)=\frac{{{s}^{3}}+7{{s}^{2}}+24s+24}{{{s}^{4}}+10{{s}^{3}}+35{{s}^{2}}+50s+24} \end{aligned}$$
(13)

The second-order approximation of the above system with the proposed algorithm has the following transfer function

$$\begin{aligned} {{G}_{r\mathrm{Prop}}}(s)=\frac{14.29s+28.78}{18.66{{s}^{2}}+45.18s+28.87} \end{aligned}$$
(14)

This system is also reduced to second order by the GA and PSO techniques, and the cost versus iterations curves are shown in Fig. 1. The figure shows that the convergence of ALO is slower for roughly the first 50 iterations, but it then overtakes GA and PSO and attains a lower ISE. Figures 2 and 3 compare the different systems in the time and frequency domains; the proposed reduced system is consistent with the original in both. Table 3 compares the systems in terms of the ISE criterion and CPU simulation time: the proposed system has the smallest error \((6.39\times 10^{-05})\) and the shortest simulation time. Rise time, settling time, overshoot, and peak time are listed in Table 4; these values are comparable with GA and PSO and better than those of the other order reduction methods.
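For reference, the reported figures for this example can be checked with the helpers sketched earlier: the coefficients below are taken directly from Eqs. (13) and (14), while the time grid inside `ise` remains an assumption, so the computed value should only be expected to be of the same order as the entry in Table 3.

```python
from scipy.signal import TransferFunction

G  = TransferFunction([1, 7, 24, 24], [1, 10, 35, 50, 24])     # Eq. (13)
Gr = TransferFunction([14.29, 28.78], [18.66, 45.18, 28.87])   # Eq. (14)
print("ISE =", ise(G, Gr))   # compare with the 6.39e-05 reported in Table 3
```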

Fig. 1
figure 1

Cost versus iterations of Example 1

Fig. 2
figure 2

Time domain comparison of Example 1

Fig. 3
figure 3

Frequency domain comparison of Example 1

Table 3 ISE and CPU time comparison for different systems of Example 1
Table 4 Transient response comparison for different systems of Example 1

Example 2

Consider an eighth-order benchmark problem ([1, 5, 28]) with transfer function given by the following equation

$$\begin{aligned} G(s)=\frac{18{{s}^{7}}+514{{s}^{6}}+5982{{s}^{5}}+36380{{s}^{4}}+122664{{s}^{3}}+222088{{s}^{2}}+185760s+40320}{{{s}^{8}}+36{{s}^{7}}+546{{s}^{6}}+4536{{s}^{5}}+22449{{s}^{4}}+67284{{s}^{3}}+118124{{s}^{2}}+109584s+40320} \end{aligned}$$
(15)

The above system is reduced to second order by the proposed technique, which results in the following transfer function

$$\begin{aligned} {{G}_{r\mathrm{Prop}}}(s)=\frac{71.23s+21.68}{4.19{{s}^{2}}+28.99s+21.83} \end{aligned}$$
(16)

The cost versus iterations plots are shown in Fig. 4. Although the convergence rates are similar in this example, the final ISE value is lowest for ALO. The step responses and Bode plots of the original and reduced systems are shown in Figs. 5 and 6. The proposed system matches the original over the entire time and frequency ranges; its step response and Bode plot are closer to the original system than those of the other reduced models and comparable with GA and PSO. Table 5 compares ISE and CPU time, and the simulation time obtained is shorter than that of GA and PSO. Table 6 lists rise time, settling time, overshoot, and peak time; these values match the original system more closely than the methods of Biradar, Sikander, Desai, and Abdullah.

Fig. 4
figure 4

Cost versus iterations of Example 2

Fig. 5
figure 5

Time domain comparison of Example 2

Fig. 6
figure 6

Frequency domain comparison of Example 2

Table 5 ISE and CPU time comparison for different systems of Example 2
Table 6 Transient response comparison for different systems of Example 2
Fig. 7
figure 7

Cost versus iterations of Example 3

Table 7 ISE comparison for different systems of Example 3
Fig. 8
figure 8

Time domain comparison of Example 3

Fig. 9
figure 9

Frequency domain comparison of Example 3

Example 3

Consider a seventh-order delayed system [5] with the following transfer function

$$\begin{aligned} G(s)=\frac{(4000s+50000){{e}^{-0.3s}}}{{{s}^{7}}+69{{s}^{6}}+1764{{s}^{5}}+20280{{s}^{4}}+102500{{s}^{3}}+221375{{s}^{2}}+187500s+50000} \end{aligned}$$
(17)

In the above system, the time delay of 0.3 s is approximated to third order by the Pade approximation, so the system of Eq. (17) is converted to a tenth-order system of the form \(G(s)=N(s)/D(s)\), where N(s) and D(s) are given below

$$\begin{aligned} \begin{aligned}&N(s)=-4000{{s}^{4}}+110000{{s}^{3}}-666700{{s}^{2}}-15560000s +222200000 \\&D(s)={{s}^{10}}+109{{s}^{9}}+5191{{s}^{8}}+141300{{s}^{7}}+2396000{{s}^{6}} +25680000{{s}^{5}}\\&\quad +167500000{{s}^{4}}+610500000{{s}^{3}}\\&\quad +1111000000{{s}^{2}}+866700000s+222200000 \end{aligned} \end{aligned}$$
(18)
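The delay-free tenth-order model of Eq. (18) can be reproduced, up to rounding of the printed coefficients, with a third-order Pade approximation of the 0.3 s delay; the sketch below uses the python-control package (assumed available) for the Pade coefficients and the series connection.

```python
import control

# Delay-free part of Eq. (17)
plant = control.tf([4000, 50000],
                   [1, 69, 1764, 20280, 102500, 221375, 187500, 50000])

num_d, den_d = control.pade(0.3, 3)         # third-order Pade of e^(-0.3s)
delay_approx = control.tf(num_d, den_d)

G10 = control.series(plant, delay_approx)   # tenth-order model, cf. Eq. (18)
print(G10)
```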

This tenth-order system is reduced to a second-order system with the following transfer function

$$\begin{aligned} {{G}_{r\mathrm{Prop}}}(s)=\frac{-0.1394s+0.2491}{0.8484{{s}^{2}}+0.8339s+0.2512} \end{aligned}$$
(19)

The convergence rate of ALO lies between those of GA and PSO, as shown in Fig. 7, whereas its simulation (CPU) time and ISE are the smallest, as shown in Table 7. The step responses and Bode plots are shown in Figs. 8 and 9, respectively. The Bode plot shows that none of the methods is reliable at higher frequencies, whereas all perform well over the lower frequency range. The ISE comparison in Table 7 shows that ALO, GA, and PSO all perform well. The step response specifications in Table 8 show that the GA and PSO results are comparable with ALO.

Table 8 Transient response comparison for different systems of Example 3

Example 4

Consider the 84th-order problem [7], represented in state-space form as follows

$$\begin{aligned} A = \left[ \begin{array}{cccccccccc} A_1 & 0 & 0 & 0 & 0 & 0 & 0 & a_{78} & 0 & 0\\ 0 & A_2 & 0 & 0 & 0 & 0 & 0 & 0 & \ddots & 0\\ 0 & 0 & A_3 & 0 & 0 & 0 & 0 & 0 & 0 & a_{154}\\ 0 & 0 & 0 & A_4 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & A_5 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & A_6 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & A_7 & 0 & 0 & 0\\ a_1 & 0 & 0 & 0 & 0 & 0 & 0 & A_8 & 0 & 0\\ 0 & \ddots & 0 & 0 & 0 & 0 & 0 & 0 & \ddots & 0\\ 0 & 0 & a_{77} & 0 & 0 & 0 & 0 & 0 & 0 & A_{42} \end{array} \right] \end{aligned}$$
Fig. 10
figure 10

Cost versus iterations of Example 4

Fig. 11
figure 11

Time domain comparison of Example 4

where \({A_i} = \left[ \begin{array}{cc} -734 & 171\\ -9 & -734 \end{array} \right] ;\;\;i = 1,2,\ldots ,42\)

and \({a_j} = 196;\;\;j = 1,2 \ldots 154\)

\(B = {[{B_{i,1}}]_{84 \times 1}};\quad i = 1,2 \ldots 84\)

The values of \([{B_i}]\) are given in Table 11 in the Appendix, and \(C = {[{C_{1,j}}]_{1 \times 84}} = {B^T};\quad j = 1,2,\ldots ,84\).

Fig. 12
figure 12

Frequency domain comparison of Example 4

Table 9 ISE comparison for different systems of Example 4

The lower-order system obtained by the proposed method is given below

$$\begin{aligned} {{G}_{r\mathrm{Prop}}}(s)=\frac{606.47s+818.5}{0.2339{{s}^{2}}+69.04s+92.75} \end{aligned}$$
(20)

The cost versus iterations graphs for the different optimization techniques are shown in Fig. 10. The initial convergence rate of ALO is comparable with GA and PSO; although it slows later, the final ISE value obtained is the lowest. This higher-order system has also been reduced to second order by Sikander [26], with the following transfer function

$$\begin{aligned} {{G}_{r\mathrm{Sikander}}}(s)=\frac{371.3s+919.3}{0.1432{{s}^{2}}+42.43s+104.2} \end{aligned}$$
(21)
Table 10 Transient response comparison for different systems of Example 4

The step responses of the original and reduced systems are shown in Fig. 11 and the Bode plots in Fig. 12. The responses in the time and frequency domains are very well approximated by the proposed ALO method, which provides a good approximation at lower as well as higher frequencies, as the Bode plot shows. For this particular problem, ALO, GA, and PSO all perform well in terms of ISE (of order \(10^{-8}\)) compared with [26] (ISE \(=2.1023\times 10^{-3}\)), as shown in Table 9. The ISE obtained by the proposed method is slightly lower than that of GA and PSO, and its CPU usage time is the shortest. Table 10 compares rise time, settling time, overshoot, peak value, and peak time for the different systems; the second-order system obtained by ALO matches the original system very closely. The proposed algorithm thus performs well even for a very high-order system.

6 Conclusion

A novel reduction approach for linear time-invariant (LTI) system approximation is proposed in this paper. The approach is based on optimization, and the fast and accurate Ant Lion Optimization (ALO) technique is employed for this purpose. The unknown parameters of the approximated system are calculated by minimizing the integral square error (ISE) between the original and the approximated system. The efficacy of the proposed approach is demonstrated through several computational experiments, which show that it gives the best results for SISO systems as well as for a real 84th-order system arising from partial differential equations, whose complexity is reduced to a second-order model. A time-delay system has also been considered, again confirming the strength of the technique. The proposed approach is compared with the latest available methods, and the comparative results confirm its efficacy in terms of ISE and CPU usage time. The following conclusions are drawn about ALO from the order reduction point of view.

  • In general, the error ISE is reduced by ALO to the lowest value as compared to GA and PSO techniques.

  • The initial convergence rate of ALO is comparable to the GA and PSO techniques.

  • ALO takes the minimum CPU time, which results in fast order reduction.

  • The proposed technique provides better results as compared to classical methods available in the literature.

  • The proposed technique is faster and more accurate than existing optimization-based MOR techniques.

Looking ahead, ALO can also be applied to discrete-time and interval systems, as well as to real-world problems such as power systems, the single machine infinite bus (SMIB) system, and the two-wheeled mobile robot (TWMR). Alternatively, the algorithm can be modified further to obtain a faster convergence rate.