1 Introduction

Scientists and engineers routinely face problems in the analysis, design, and synthesis of complex physical systems. The first step in studying such a system is to develop a mathematical model that acts as a representative of the physical system. The mathematical model enables virtual experimentation when physical experimentation on the system is too expensive, tedious, infeasible, or even impossible to conduct. However, the mathematical modelling of such complex physical systems leads to higher-order differential equations, transfer functions, or state-space formulations. These higher-order mathematical models are difficult to use for both analysis and controller design. Hence, model order reduction methods are necessary to find an appropriate reduced-order model that reflects the dominant characteristics of the original higher-order system under consideration. The reduced-order model (ROM) helps in better understanding the original system, lowers the computational complexity, and simplifies the control design. Several model order reduction methods for linear continuous-time SISO systems are available in the literature, in the time domain as well as in the frequency domain (Schilders et al. 2008; Jamshidi 1997; Antoulas and Sorensen 2001). Some recently published model order reduction methods in the time and frequency domains are given by Vishwakarma and Prasad (2014) and Vasu et al. (2016a, b). Methods in either domain offer the user certain advantages and, most likely, certain disadvantages. Owing to their simplicity and the wealth of design information available in the frequency domain, frequency-domain model order reduction methods have become the more prominent.

Frequency-domain methods are roughly classified into two categories. Methods in the first category are based on retaining dominant system parameters in the ROM, while methods in the second category are developed by preserving the stability of the original system in the ROM. Basically, the first-category methods (Lucas 1983; Munro and Lucas 1991; Aguirre 1992) match as many system time moments, shifted time moments, and Markov parameters as possible between the original and reduced models. Their main disadvantage is that they may not always preserve the stability of the original system in the ROM. Vasu et al. (2016a, b) attempted to overcome this disadvantage by proposing a conventional model order reduction method that retains the stability of the original system in the ROM and improves upon the methods discussed above. The second-category methods (Krishnamurthy and Seshadri 1978; Chen et al. 1979; Choo 1999) preserve stability but are unable to give ROMs as satisfactory as those of the first category. Later, several reduction methods (Hwang 1984; Lucas 1993; Mukherjee and Mittal 2005; Vishwakarma and Prasad 2009; Desai and Prasad 2013; Sikander and Prasad 2015) were suggested that combine the features of two different reduction methods to obtain a ROM that preserves stability with a good approximation. In these methods the denominator polynomial of the reduced-order model is calculated first by a stability-preserving method, and the numerator polynomial is then obtained by an optimal/suboptimal reduction method. Hwang (1984) used the Routh approximation method and the ISE criterion to obtain stable ROMs that approximate the transient portion of the system response. Lucas (1993) applied the popular multipoint Padé approximation technique iteratively to generate optimal models efficiently.
Mukherjee and Mittal (2005) used the secant method to minimize the ISE and generate a ROM with a desired pole pattern. Vishwakarma and Prasad (2009) used modified pole clustering and a genetic algorithm for order reduction. Desai and Prasad (2013) combined the merits of the BBBC optimization technique and the stability equation method. Sikander and Prasad (2015) used the stability equation and PSO to determine an approximate ROM of the original system. However, it is generally recognized that the ROMs produced by these methods may not be globally optimal, unlike models developed using modern heuristic algorithms. More recently, numerous model order reduction methods have been developed using evolutionary algorithms such as BBBC, PSO, MPSO, DE, and ABC (Boby and Pal 2010; Pamar et al. 2007; Deepa and Sugumaran 2011; Santosh et al. 2012; Bansal et al. 2011). These methods are based on integral square error (ISE) minimization, with both the numerator and denominator coefficients treated as free parameters in the optimization process. Despite the many reduction methods available in the literature, the development of a globally optimal reduction method remains of great significance. Researchers are still trying to attain a reduction method that can be applied to all types of systems and that yields accurate ROMs with less computational effort.

Recently, stochastic search algorithms have been widely used to tackle complex real-life optimization problems. Swarm intelligence algorithms (Kennedy and Eberhart 1995; Kennedy et al. 2001) and evolutionary algorithms (EA) (Fogel et al. 1966; Yao 1999) are two important members of this family. Swarm intelligence algorithms are based on simulating the collective behaviour of a flock of birds or the movements of a school of fish; the key concept in swarm-based optimization is cooperation among the population members. Evolutionary algorithms, on the other hand, employ selection and mutation operators to locate the global maxima/minima of a complex objective function; the main concept in EA is to maintain competition within the population. The most familiar swarm intelligence and evolutionary algorithms are particle swarm optimization (PSO) and differential evolution (DE). The mutual synergism of PSO with DE leads to a more powerful global search algorithm (Das et al. 2008). It incorporates a selection mechanism into PSO, saving limited computational resources by prohibiting the particles from visiting useless regions of the search space. It also incorporates the differential vector operator borrowed from DE into the velocity-update scheme of PSO; hence, the algorithm is named PSO-DV (particle swarm with differentially perturbed velocity).

The main objective of this paper is to simplify a higher-order linear continuous SISO system by determining an optimal reduced-order model through minimization of the step response ISE, using the hybrid evolutionary (PSO-DV) algorithm. In the PSO-DV algorithm, the reduced-order model numerator and denominator coefficients are treated as free parameters for ISE optimization. The initialization/selection of ROM coefficients must be done carefully so as to satisfy the Routh–Hurwitz stability criterion (Kuo and Farid 2007). The final ROM coefficients are those for which the ISE value is optimal. Hence, the method has inbuilt stability- and accuracy-preserving features. The applicability of the proposed method is shown with two typical numerical examples taken from the literature. Its superiority over other existing evolutionary and conventional methods is shown by comparing the time responses, error responses, ISE values pertaining to step input, and impulse response energies (IRE) of the results.

2 A Synergism of PSO and DE: The PSO-DV Algorithm

The canonical PSO algorithm (Kennedy and Eberhart 1995) updates the velocity of a particle using three terms. The first is a previous-velocity term that provides the particle with the necessary momentum. The remaining two are the social and cognitive terms, which focus on cooperation among the particles. With memory, each particle can track its personal best performance and the best performance achieved in its neighbourhood throughout its lifetime. However, particles in PSO are not eliminated even when they exhibit poor fitness (in terms of the objective function value) and thus waste the limited computational resources. Moreover, the velocity of the particles is damped quickly for a small inertia weight \(W\). The advantages of the differential evolution algorithm can compensate for these shortcomings of PSO; therein lies the motivation to develop a hybrid algorithm based on PSO and DE.

Based on the complementary properties of PSO and DE, Das et al. (2008) coupled a differential operator with the velocity-update scheme of PSO and proposed a new hybrid evolutionary algorithm named PSO-DV. The operator is invoked on the position vectors of two randomly chosen particles (population members), not on their individual best positions. Further, unlike the PSO scheme, a particle is actually shifted to a new location only if the new location yields a better fitness value, i.e. a selection strategy has been incorporated into the swarm dynamics.

Step 1: PSO-DV starts with a population of particles, called a swarm, of size \(N\), each particle being a \(d\)-dimensional search-variable vector. At iteration \(g = 1\), the population is initialized with random positions \(X_{i,j} (1)\) and random velocities \(v_{i,j} (1)\), where \(i\in \left[ {1,N} \right] \) and \(j\in \left[ {1,d} \right] \), given as

$$\begin{aligned} \left. {{\begin{array}{l} {X_{i,j} (1)=X_{\mathrm{min}} +\mathrm{rand}({0,1})*\left( {X_{\mathrm{max}} -X_{\mathrm{min}} } \right) } \\ {v_{i,j} (1)=v_{\mathrm{min}} +\mathrm{rand}({0,1})*\left( {v_{\mathrm{max}} -v_{\mathrm{min}} } \right) } \\ \end{array} }} \right\} \end{aligned}$$
(1)

where \(\left( {X_{\mathrm{max}} ,X_{\mathrm{min}} } \right) \) are the permissible bounds of the search space and \(\left( {v_{\mathrm{max}} ,v_{\mathrm{min}} } \right) \) are the permissible bounds of the velocities. Once all particles are initialized, an iterative optimization process begins.
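A minimal sketch of this initialization (illustrative Python only — the paper's own implementation is in C, and the function name is hypothetical):

```python
import random

def init_swarm(N, d, x_min, x_max, v_min, v_max, seed=None):
    """Step 1 / Eq. (1): draw N particles of dimension d with positions and
    velocities uniformly distributed within their permissible bounds."""
    rng = random.Random(seed)
    X = [[x_min + rng.random() * (x_max - x_min) for _ in range(d)]
         for _ in range(N)]
    V = [[v_min + rng.random() * (v_max - v_min) for _ in range(d)]
         for _ in range(N)]
    return X, V
```

Every position component lies in \([X_{\mathrm{min}}, X_{\mathrm{max}})\) and every velocity component in \([v_{\mathrm{min}}, v_{\mathrm{max}})\), matching Eq. (1).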

Step 2: For each particle \(i\) in the swarm, two other distinct particles, say \(l\) and \(k\) \(\left( {i\ne l\ne k} \right) \), are selected randomly. The difference between their positional coordinates is taken as a difference vector:

$$\begin{aligned} \vec {\delta } = \vec {X}_{k} - \vec {X}_{l}. \end{aligned}$$
(2)

Then the \(j^{th}\) velocity component \(\left( {1\le j \le d} \right) \) of the target particle \(i\) is updated as:

$$\begin{aligned} \left. {{\begin{array}{l} v_{i,j} \left( {g+1} \right) =W\,v_{i,j} (g)+\beta \,\delta _j +C_2 *\mathrm{rand}({0,1})\\ \qquad *\left( {gbest_j (g)-X_{i,j} (g)} \right) ;\quad \mathrm{if}\ \mathrm{rand}_j ({0,1})\le CR \\ v_{i,j} \left( {g+1} \right) =v_{i,j} (g);\quad \mathrm{otherwise} \\ \end{array}}} \right\} \end{aligned}$$
(3)

where \(W\) is the inertia weight factor, varying randomly in \(\left[ {0.4, 0.9} \right] \) as \(W=W_{\mathrm{min}} +\mathrm{rand}({0,1})\cdot \left( W_{\mathrm{max}} -W_{\mathrm{min}} \right) \); \(CR\) is the crossover probability constant; \(\delta _j \) is the \(j^{th}\) component of the difference vector \(\vec {\delta }\) defined in Eq. (2); and \(\beta \) is a scale factor varying randomly in \(\left[ {0, 1} \right] \). \(C_2\) is a swarm confidence acceleration factor that varies randomly in \(\left[ {C_{\mathrm{min}} , C_{\mathrm{max}} } \right] \) as \(C_2 =C_{\mathrm{min}} +\left( {C_{\mathrm{max}} -C_{\mathrm{min}} } \right) \cdot \mathrm{rand}({0,1})\). Clearly, for \(\mathrm{rand}_j({0,1}) > CR\), some of the velocity components retain their old values.
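The component-wise update of Eq. (3) can be sketched as follows (illustrative Python, not the paper's C code; `W`, `beta`, and `C2` are assumed to have been drawn per particle as described above, and `rng` is any object with a `random()` method returning values in \([0,1)\)):

```python
def update_velocity(v_i, x_i, x_k, x_l, gbest, W, beta, C2, CR, rng):
    """Eq. (3): perturb each velocity component by the difference vector
    delta = x_k - x_l with probability CR; otherwise keep the old component."""
    v_new = []
    for j in range(len(v_i)):
        if rng.random() <= CR:
            delta_j = x_k[j] - x_l[j]
            v_new.append(W * v_i[j] + beta * delta_j
                         + C2 * rng.random() * (gbest[j] - x_i[j]))
        else:
            v_new.append(v_i[j])
    return v_new
```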

Step 3: A new trial location \({\overrightarrow{Tr}}_i\) is created for the particle by adding the updated velocity to the previous position \( \vec {X}_{i} (g) \):

$$\begin{aligned} \overrightarrow{Tr} _{i} = ~\vec {X}_{i} (g) + \vec {v}_{i} ({g + 1}) \end{aligned}$$
(4)

Step 4: The particle is placed at this new location only if the coordinates of the new location yield a better fitness value. Thus, if we are seeking the minimum of a \(d\)-dimensional function \( f\left( {\vec {X}} \right) \), the target particle is relocated as follows:

$$\begin{aligned} \left. {\begin{array}{l} {\vec {X}_{i} ({g + 1}) = \overrightarrow{{Tr}} _{i} ;\quad \mathrm{if} \quad f\left( {\overrightarrow{{Tr}} _{i} } \right) < f\left( {\vec {X}_{i} (g)} \right) } \\ {\vec {X}_{i} ({g + 1}) = \vec {X}_{i} (g); \quad \mathrm{Otherwise}} \\ \end{array} } \right\} ~ \end{aligned}$$
(5)

Therefore, every time its velocity changes, the particle either moves to a better position in the search space or sticks to its previous location. If a particle \(i\) in the swarm stagnates at a particular point, i.e. it does not move to a better position (its fitness value does not improve) for \(m\) consecutive iterations, then its position and velocity are shifted to a new location by a random mutation, as given below. This technique helps escape local minima and also keeps the swarm moving:

$$\begin{aligned}&\hbox {If } \left( f\left( \vec {X}_{i} (g) \right) = f\left( \vec {X}_{i} ({g + 1}) \right) = \cdots = f\left( \vec {X}_{i} \left( {g + m} \right) \right) \right) \nonumber \\&\hbox {then } \left. {{\begin{array}{l} {X_{i,j} ({g+m+1})=X_{\mathrm{min}} +\mathrm{rand}({0,1})*\left( {X_{\mathrm{max}} -X_{\mathrm{min}} } \right) } \\ {v_{i,j} ({g+m+1})=v_{\mathrm{min}} +\mathrm{rand}({0,1})*\left( {v_{\mathrm{max}} -v_{\mathrm{min}} } \right) } \\ \end{array} }} \right\} \nonumber \\&\mathrm{for}\ j=1,2,\ldots ,d. \end{aligned}$$
(6)

where \(m\) is the maximum number of iterations for which stagnation is tolerated.
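Steps 3–4 together with the stagnation restart of Eq. (6) can be sketched as follows (illustrative Python; the helper name and return convention are hypothetical, not the paper's C implementation):

```python
import random

def step_particle(x_i, v_new, fit_i, stall, f, x_bounds, v_bounds, m, rng=random):
    """Form the trial point Tr = x + v (Eq. 4), accept it only on
    improvement (Eq. 5), and reinitialize position and velocity after more
    than m stagnant iterations (Eq. 6).
    Returns (position, velocity, fitness, stall counter)."""
    trial = [x + v for x, v in zip(x_i, v_new)]   # Eq. (4)
    f_trial = f(trial)
    if f_trial < fit_i:                           # Eq. (5): greedy selection
        return trial, v_new, f_trial, 0
    stall += 1
    if stall > m:                                 # Eq. (6): random restart
        x_min, x_max = x_bounds
        v_min, v_max = v_bounds
        x_new = [x_min + rng.random() * (x_max - x_min) for _ in x_i]
        v_rand = [v_min + rng.random() * (v_max - v_min) for _ in x_i]
        return x_new, v_rand, f(x_new), 0
    return x_i, v_new, fit_i, stall
```

The greedy acceptance rule makes each particle's fitness non-increasing between restarts, which is the selection mechanism that distinguishes PSO-DV from canonical PSO.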

Step 5: Repeat Steps 2 to 4 until the maximum iteration count \(({g_{\mathrm{max}}})\) is reached or a specified termination criterion is satisfied.

3 Description of the Method

3.1 Model Order Reduction Problem Formulation

Consider an \(n^{th}\)-order linear dynamic system represented by:

$$\begin{aligned} G_n (s)=\frac{Y(s)}{U(s)} =\frac{a_{n-1} s^{n-1}+\cdots +a_1 s+a_0 }{b_n s^{n}+b_{n-1} s^{n-1}+\cdots +b_1 s+b_0 } \end{aligned}$$
(7)

The objective is to find an \(r^{th}\)-order reduced model \(\left( {r<n} \right) \) with transfer function:

$$\begin{aligned} R(s)=\frac{Y_r (s)}{U(s)}=\frac{c_{r-1} s^{r-1}+\cdots +c_1 s+c_0 }{s^{r}+d_{r-1} s^{r-1}+\cdots +d_1 s+d_0 } \end{aligned}$$
(8)

The ROM coefficients are determined by minimizing the integral square error (ISE) between the step responses of the original system \(G_n (s)\) and the reduced-order model \(R(s)\) using the PSO-DV algorithm. The ISE (George and Luss 1990) is defined by

$$\begin{aligned} \hbox {ISE }\left( E \right) =\mathop {\int }\limits _0^\infty \left[ {e(t)} \right] ^{2}.\mathrm{d}t=\mathop {\int }\limits _0^\infty \left[ {y(t)-y_r (t)} \right] ^{2}.\mathrm{d}t \end{aligned}$$
(9)

where \(y(t)\) and \(y_r (t)\) are the unit step responses of the original system \(G_n (s)\) and the reduced-order model \(R(s)\), respectively. The step response of a system gives information on its stability and on its ability to reach one steady state when starting from another. Thus, the unit step response captures the dynamic behaviour of the system. Hence, by determining the reduced-order model coefficients through optimization of the ISE between the step responses of the original system and the ROM, the overall dynamic behaviour of the original system can be embodied in the ROM.

3.2 Model Order Reduction Method Using PSO-DV Algorithm

The objective of the proposed model order reduction method is to find an optimal reduced-order model for a given large-scale linear continuous-time system while preserving the stability of the system in the reduced-order model. The numerator and denominator coefficients of the reduced-order model are treated as free parameters in the optimization process. The PSO-DV algorithm searches, within the permissible bounds, for the reduced-order model coefficients that minimize the ISE between the step responses of the original system and the reduced-order model. The stability of the reduced-order model is preserved by choosing denominator polynomial coefficients within the permissible bounds of the search space that satisfy the Routh–Hurwitz stability condition. To retain the steady-state response of the original system in the reduced-order model, the condition below is imposed when determining the constant numerator coefficient \(c_0\) of the reduced-order model.

$$\begin{aligned} c_0 =\frac{a_0 }{b_0 }d_0 \end{aligned}$$
(10)
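As a small illustrative sketch (Python; the paper's code is in C and the helper name is hypothetical), Eq. (10) simply pins the ROM's DC gain to that of the original system:

```python
def c0_for_dc_gain(a0, b0, d0):
    """Eq. (10): choose the ROM's constant numerator term so that the
    reduced model's DC gain c0/d0 equals the original gain a0/b0."""
    return (a0 / b0) * d0
```

For instance, with the Example 1 values \(a_0=400\), \(b_0=1.15\) and the ROM's \(d_0=0.881711\), this yields \(c_0 \approx 306.68\), consistent with Eq. (17).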

3.2.1 Procedural Steps for Model Order Reduction Technique Using PSO-DV Algorithm

  • Step 1: Specify the parameters of the PSO-DV algorithm: the swarm size \(\left( N \right) \), the number of components per particle \((d)\), the maximum number of iterations \(({g_{\mathrm{max}}})\), the permissible bounds for the particle positions \(\left( {X_{\mathrm{max}} ,X_{\mathrm{min}} } \right) \), and the permissible bounds for the particle velocities \(\left( {v_{\mathrm{max}} ,v_{\mathrm{min}} } \right) \). Each particle in the swarm represents a set of reduced-order model coefficients, and the number of components per particle \((d)\) equals the number of unknown numerator and denominator coefficients of the reduced-order model.

  • Step 2: At iteration \((g)=1\), initialize the \(N\) reduced-order models in the swarm, each having \(d\) unknown coefficients, such that the coefficients satisfy the Routh–Hurwitz condition. Their respective velocities are chosen randomly within the permissible limits as given in Eq. (1).

  • Step 3: Evaluate the integral square error of each ROM in the swarm using Eq. (9).

  • Step 4: Update the target velocities of the ROM coefficients in the swarm using Eq. (3).

  • Step 5: Create the trial ROMs by adding the updated velocities to the previous ROM coefficients in the swarm as per Eq. (4).

  • Step 6: Evaluate the ISE values of the trial ROMs and compare them with the corresponding previous ISE values. Update the swarm for the next iteration with whichever ROM gives the lower ISE value, as per Eq. (5).

  • Step 7: Repeat Steps 4 to 6 of Sect. 3.2.1 until the iteration number reaches the maximum \(({g_{\mathrm{max}}})\) or until the population reaches a truly global optimum ISE value (i.e. the global ISE value remains unchanged for any further number of iterations). At termination, the set of ROM coefficients offering the lowest ISE value in the swarm is the final optimum reduced-order model obtained from the PSO-DV algorithm. In addition to the step ISE, another performance index, retention of the impulse response energy (IRE) of the original system, is used to evaluate the effectiveness of the ROM. Preserving the full IRE of the original system in the ROM leads to coincidence of the time moments of the impulse response and also matches the frequency response. The full impulse response energy (Lepschy et al. 1988) of a system/model is given by:

    $$\begin{aligned} \mathrm{IRE}\left( I \right) =\mathop {\int }\limits _0^\infty \left[ {g(t)} \right] ^{2}.\mathrm{d}t \end{aligned}$$
    (11)

    where g(t) is the impulse response of the system/model.
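The Routh–Hurwitz screening used in Step 2 to admit only stable candidate denominators can be sketched as follows (illustrative Python, not the paper's C implementation; singular Routh-array cases with a zero first-column entry are simply reported as unstable, which is adequate for discarding candidate particles):

```python
def is_hurwitz(coeffs):
    """Routh-Hurwitz test on a polynomial with positive leading coefficient,
    given in descending powers of s. Returns True iff the Routh array has an
    all-positive first column."""
    c = [float(x) for x in coeffs]
    if len(c) < 2 or c[0] <= 0:
        return False
    row1 = c[0::2]                       # coefficients of even-index rows
    row2 = c[1::2]
    width = len(row1)
    row2 += [0.0] * (width - len(row2))  # pad to a rectangular table
    table = [row1, row2]
    for _ in range(len(c) - 2):
        prev, cur = table[-2], table[-1]
        if cur[0] == 0.0:                # singular case: reject the candidate
            return False
        nxt = [(cur[0] * prev[j + 1] - prev[0] * cur[j + 1]) / cur[0]
               for j in range(width - 1)]
        table.append(nxt + [0.0])
    return all(row[0] > 0 for row in table)
```

For example, the denominator of Eq. (17), \(s^2 + 26.312263\,s + 0.881711\), passes the test.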

The computational flowchart of the PSO-DV algorithm is as shown in Fig. 1.

Fig. 1
figure 1

Flow chart of the PSO-DV algorithm

3.3 Procedure to Compute Step ISE

The integral square step response error between the original system and the reduced-order model can be defined as \(\Vert e(t)\Vert _2 \buildrel \Delta \over = \Vert y(t)-y_r (t)\Vert _2 =\int _0^\infty \left[ {y(t)-y_r (t)} \right] ^{2}\mathrm{d}t =\int _0^\infty \left[ {h(t)-h_r (t)} \right] ^{2}\mathrm{d}t\), where \(h(t)=y\left( \infty \right) -y(t)\) and \(h_r (t)=y_r \left( \infty \right) -y_r (t)\) are the transient parts of the step responses. For zero steady-state response error, the condition in Eq. (10) must be satisfied when determining the constant term \(c_0\) of the numerator polynomial of the reduced-order model. Therefore, \(y_r \left( \infty \right) =\frac{c_0 }{d_0 }=y\left( \infty \right) =\frac{a_0}{b_0}\).

Fig. 2
figure 2

A block diagram of IEEE type1 (DC) Excitation model

Then, \(H(s)=L\left[ {h(t)} \right] \, \mathrm{and}\, H_r (s)=L\left[ {h_r (t)} \right] \) can be expressed as

$$\begin{aligned} H(s)= & {} \frac{{y\left( \infty \right) }}{s} - \frac{{G_{n} (s)}}{s} = \frac{{\bar{a}_{0} + \bar{a}_{1} s + \cdots + \bar{a}_{{n - 1}} s^{{n - 1}} }}{{b_{0} + b_{1} s + \cdots {\text { + }}b_{n} s^{n} }}\nonumber \\= & {} \frac{{N(s)}}{{D(s)}}\end{aligned}$$
(12)
$$\begin{aligned} H_{r} (s)= & {} \frac{{y_{r} \left( \infty \right) }}{s} - \frac{{R(s)}}{s} = \frac{{\bar{c}_{0} + \bar{c}_{1} s + \cdots + \bar{c}_{{r - 1}} s^{{r - 1}} }}{{d_{0} + d_{1} s + \cdots {\text { + }}d_{r} s^{r} }} \nonumber \\= & {} \frac{{N_{r} (s)}}{{D_{r} (s)}} \end{aligned}$$
(13)

where \(\bar{a}_{i} = \frac{{a_{0} }}{{b_{0} }}b_{{i + 1}} - a_{{i + 1}} ;i = 0,1, \cdots ,n - 2;~\bar{a}_{{n - 1}} = \frac{{a_{0} }}{{b_{0} }}b_{n}\). and \( \bar{c}_{i} = \frac{{c_{0} }}{{d_{0} }}d_{{i + 1}} - c_{{i + 1}} ;i = 0,1, \cdots ,r - 2;~\bar{c}_{{r - 1}} = \frac{{c_{0} }}{{d_{0} }}d_{r}\).
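These coefficient formulas can be checked numerically. The illustrative Python sketch below (helper names hypothetical) builds the \(\bar{a}_i\) of Eq. (12) and verifies the underlying identity \(\frac{a_0}{b_0} D(s) - \left( a_0 + a_1 s + \cdots \right) = s\,N(s)\), which holds because the constant terms cancel:

```python
def polyval(c, s):
    """Evaluate a polynomial with ascending-power coefficients c at s."""
    return sum(ci * s ** i for i, ci in enumerate(c))

def transient_numerator(a, b):
    """Coefficients abar_i of N(s) in Eq. (12), with a = [a_0..a_{n-1}] and
    b = [b_0..b_n] given in ascending powers of s."""
    k = a[0] / b[0]
    n = len(b) - 1
    abar = [k * b[i + 1] - a[i + 1] for i in range(n - 1)]
    abar.append(k * b[n])   # abar_{n-1} = (a_0/b_0) * b_n
    return abar
```

The \(\bar{c}_i\) of Eq. (13) follow from the same routine applied to the ROM coefficients.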

Using Parseval’s theorem, the step integral square error can be alternatively specified in the frequency-domain as

$$\begin{aligned} \mathrm{ISE}\left( E \right) =\frac{1}{2\pi j}\mathop {\int }\limits _{-j\infty }^{j\infty } E(s).E\left( {-s} \right) .ds, j=\sqrt{-1} \end{aligned}$$
(14)

where \(E(s)\) is the Laplace transform of the error function \(e(t)\). Hence, we can write \(E(s)=H(s)-H_r (s)\), which leads to an \(\left( {n+r} \right)^{th}\)-order transfer function. Therefore, the integral in Eq. (14) can be evaluated from the \(\gamma \)- and \(\delta \)-Routh tables corresponding to \(E(s)\). Finally, the value of \(\mathrm{ISE}\left( E \right) \) is given by:

$$\begin{aligned} \mathrm{ISE}\left( E \right) =\mathop \sum \limits _{i=1}^{n+r} \frac{\delta _i^{2}}{2\gamma _i} \end{aligned}$$
(15)

The constants \(\gamma _i \hbox { and } \delta _i\) for \(i=1,2,\ldots ,n+r\) are computed from the \(\gamma \)- and \(\delta \)-Routh tables as given in Hwang and Wang (1984).

4 Numerical Examples and Results

To validate the flexibility and effectiveness of the proposed method, two typical numerical examples are considered from the literature and the results are compared in terms of the ISE and IRE.

Example 1

Consider a linearized version of the IEEE type 1 (DC) excitation model, one of the most prevalent models in industry. The block diagram of the IEEE type 1 excitation model is shown in Fig. 2.

A set of parameter values for the IEEE type 1 excitation model is taken from Choi et al. (2008), i.e.

$$\begin{aligned}&T_R =0.005,\quad K_A =400.0, \quad T_A =0.02, \quad V_{RMAX} =6.36, \\&T_F =1.0, \quad V_{\mathrm{RMIN}} =-5.72, \quad T_B =0.0, \quad T_C =0.0, \\&K_E =1.0,\quad T_E =1.14,\quad K_F =0.08, \quad E_1 =2.58,\\&E_2 =3.44, \quad S_{E_1 } =0.64, \quad S_{E_2 } =0.85. \end{aligned}$$

The linearized version of the nonlinear IEEE type 1 excitation model is of fourth order, and its transfer function is as follows:

$$\begin{aligned} G_4 (s)= \frac{400 s+400}{ {0.0001s^{4}+0.03s^{3}+1.36s^{2}+34.32s+1.15}} \end{aligned}$$
(16)

The impulse response energy of the dynamic model \(G_4 (s)\) is 3901.4965. For this model, an optimal second-order reduced model is to be found using the proposed method. The specified parameters of the PSO-DV algorithm are given in Table 1.

Table 1 Parameters used for PSO-DV algorithm
Fig. 3
figure 3

Convergence of ISE for Example 1

Fig. 4
figure 4

Comparison of unit step response of original system and reduced-order model given by Eqs. (16) and (17)

Applying the proposed method of Sect. 3 to Example 1, the optimal second-order reduced model is obtained after \(g = 848\) iterations and is given below:

$$\begin{aligned} R_2 (s)=\frac{306.565826 s+306.681976}{s^{2}+26.312263s+0.881711} \end{aligned}$$
(17)

Its step ISE value is 0.410628, and the IRE of \(R_2 (s)\) is 3812.9482. The computation time for the complete PSO-DV simulation with 50 particles and 1000 iterations to determine the optimal second-order reduced model is 12 s.
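The reported IRE of the second-order model can be verified without simulation: for a stable \(R(s)=(c_1 s + c_0)/(s^2 + d_1 s + d_0)\), Parseval's theorem gives the standard closed form \(\mathrm{IRE} = (c_0^2 + c_1^2 d_0)/(2 d_0 d_1)\) (a textbook \(H_2\)-norm result, not derived in the paper). An illustrative Python sketch:

```python
def ire_second_order(c1, c0, d1, d0):
    """Impulse response energy of R(s) = (c1*s + c0)/(s^2 + d1*s + d0),
    assuming d1 > 0 and d0 > 0 (stability), via the closed-form
    IRE = (c0^2 + c1^2 * d0) / (2 * d0 * d1)."""
    return (c0 ** 2 + c1 ** 2 * d0) / (2 * d0 * d1)
```

Evaluating this at the coefficients of Eq. (17) gives approximately 3812.95, matching the reported IRE to within rounding.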

The best ISE value of each iteration of the PSO-DV algorithm is plotted against the iteration number \((g)\) in Fig. 3. The step and frequency responses of the original fourth-order system and the reduced second-order model are shown in Figs. 4 and 5. The IRE values and the ISE values pertaining to step input of the proposed ROM and of the ROMs obtained from familiar reduction methods are evaluated and compared in Table 2.

Fig. 5
figure 5

Comparison of frequency response of original system and reduced-order model given by Eqs. (16) and (17)

Table 2 Comparison of proposed method with other existing methods results by using ISE and IRE values

From Figs. 4 and 5, it is observed that the gap between the step and frequency responses of the original system and the proposed ROM is quite small. For complete matching of the frequency response of the ROM with the original system, it is necessary to retain the full IRE value of the original system in the ROM. From Table 2, it is observed that the ROM obtained by the proposed method retains the stability and the maximum impulse response energy of the original system, and also gives the least step ISE value compared with the other familiar reduction methods available in the literature. Hence, the simplified (reduced-order) DC exciter model obtained from the proposed method is the best match with the original system, and it should be very useful for dynamic simulation of large-scale power systems, reducing computational complexity, simulation time, and memory requirements.

Example 2

Consider a ninth-order system described by the transfer function given in (Mukherjee and Mittal 2005):

$$\begin{aligned} G_9 (s)=\frac{s^{4}+35s^{3}+291s^{2}+1093s+1700}{s^{9}+9s^{8}+66s^{7}+294s^{6}+1029s^{5}+2541s^{4}+4684s^{3}+5856s^{2}+4620s+1700} \end{aligned}$$
(18)

The impulse response energy of \(G_9 (s)\) is 0.470518.

Applying the proposed PSO-DV algorithm to the system of Eq. (18), with the parameters specified in Table 3, yields optimal second- and third-order reduced models.

The optimal second-order reduced model is obtained by the proposed method after \(g = 371\) iterations and is given by Eq. (19):

$$\begin{aligned} R_2 (s)=\frac{-0.597217 s+1.021683}{s^{2}+1.464934s+1.021683} \end{aligned}$$
(19)

Its step ISE value is 0.019602, and the IRE of \(R_2 (s)\) is 0.470448. The computation time for the complete PSO-DV simulation with 20 particles and 750 iterations to determine the optimal \(R_2 (s)\) is 17 s.

The optimal third-order reduced model is obtained by the proposed method after \(g = 700\) iterations and is given below:

$$\begin{aligned} R_3 (s)=\frac{0.320848s^{2}-1.455979 s+2.780751}{s^{3}+2.565194s^{2}+4.36444s+2.780751} \end{aligned}$$
(20)

Its step ISE value is 0.001490, and the IRE of \(R_3 (s)\) is 0.470472. The computation time for the complete PSO-DV simulation with 20 particles and 750 iterations to determine the optimal \(R_3 (s)\) is 34 s.

Table 3 Parameters used for PSO-DV algorithm
Fig. 6
figure 6

Comparison of unit step responses of the original system and some of the ROM’s tabulated in Table 4

Fig. 7
figure 7

Comparison of Nyquist plots of the original system and some of the ROM’s tabulated in Table 4

Fig. 8
figure 8

Comparison of step error response between the original system \(G_9 (s)\), and some of the reduced-order models tabulated in Table 4

Table 4 Comparison of proposed method with other existing method results by using ISE and IRE

The step, frequency, and step error \(\left( {e(t)=y(t)-y_r (t)} \right) \) responses of the ROMs obtained from the proposed and other recent familiar reduction methods are compared in Figs. 6, 7, and 8. In addition, the ISE and IRE values of the ROMs are tabulated in Table 4 to show the effectiveness of the proposed method over the prevalent reduction methods available in the literature. It can be observed from Figs. 6 and 7 that the time and frequency responses of the proposed ROMs are very close to those of the original system. From Fig. 8, it can be seen that the step error response of the proposed third-order reduced model is the smallest among all the compared reduction methods. Finally, the ROMs obtained from the proposed method give comparatively lower ISE values than many of the existing heuristic and non-heuristic model order reduction methods. As seen from the two examples, the proposed method is simple and effective in retaining the stability and dominant characteristics of the original system in the ROM. The time and frequency responses of the proposed ROMs closely match those of the original system. It is also observed from Tables 2 and 4 that the IRE value of the proposed ROM matches the original system's IRE value more closely than those of the other methods.

5 Conclusion

In this paper, the hybridization of PSO with DE, which leads to the more powerful global search PSO-DV algorithm, is employed to develop a new model order reduction technique that minimizes the integral square error between the step responses of the original system and the reduced-order model. Once the desired order of the model is known, the PSO-DV model order reduction technique searches for the coefficients of the reduced-order model for which the step ISE is optimal. The proposed method is simple, effective, and completely computer oriented. The proposed algorithm is developed and verified in the C language, and the computational time required for the complete simulation of each example with the specified parameters is less than 35 s. The ROM obtained from the proposed method retains the stability and IRE of the original system and also yields a low ISE value. To show its applicability, the method is applied to an IEEE type 1 DC exciter model, and convincing results are obtained. To show the efficacy of the proposed method over other familiar and recently published reduction methods, a typical ninth-order system is also considered from the literature, and the results are illustrated and compared in Example 2. Hence, the reduced-order models obtained by the proposed method will be effectively helpful for further investigation of higher-order linear time-invariant systems.