Introduction

There are many methods available for the analysis of stability and the design of control systems. However, these methods become computationally tedious when applied to large-scale practical systems, such as power systems, which involve a large number of variables. To avoid these problems, model order reduction techniques are used to approximate higher order models by reduced ones at a lower computational cost. Every physical system can be transformed into a mathematical model, and it is often possible to find an equation of the same type but of lower order. The reduced order model should retain the essential physical characteristics of the original higher order system (HOS), such as stability and time response.

For a HOS, the classical way to obtain a low-order model based controller is to apply model reduction techniques to an accurate HOS model. Several classical methods for model reduction are available in the literature, given by Krishnamurthy and Seshadri [1], Hutton and Friedland [2], Heydari and Pedram [3], Soloklo and Farsangi [4], etc., as well as methods based on particle swarm optimization (PSO) [5] and differential evolution (DE) [6].

Multi-objective optimization problems represent an important class of practical real-world design and decision-making problems. Soft computing techniques based on multi-objective non-dominated sorting yield a set of optimal solutions called non-dominated or Pareto-optimal solutions. The primary reason for concentrating on these algorithms is their ability to obtain multiple Pareto-optimal solutions in a single run; i.e., evolutionary algorithms are particularly suited to processing a set of solutions in parallel. Over the past decade, several multi-objective evolutionary algorithms (MOEAs) have been suggested by Horn et al. [7], Fonseca and Fleming [8], and Zitzler and Thiele [9].

The non-dominated sorting genetic algorithm (NSGA) was one of the earliest such evolutionary algorithms, proposed by Srinivas and Deb [10]. An improved version of NSGA known as NSGA-II [11], multi-objective PSO (MOPSO) [12] and the multi-objective gravitational search algorithm (MOGSA) [13] are among the more recent techniques. A weighted-sum multi-objective order reduction using the harmony search algorithm is suggested by Soloklo and Farsangi [14].

In this paper, algorithms for model reduction based on multi-objective DE and PSO (MODE and MOPSO) are developed. Some existing classical model reduction techniques for multi-input multi-output (MIMO) systems are also considered. The interlacing property and coefficients matching (IPCM) method [15], when compared with other methods, offers better performance indices such as the integral square error (ISE), integral absolute error (IAE), etc. In this method, the denominator is reduced by the interlacing property and the reduced numerator is obtained by matching the coefficients of the HOS.

It is observed with classical methods that a method which satisfies one objective may not satisfy another. Multi-objective model reduction using MODE and MOPSO is therefore suggested to obtain optimal reduced models that satisfy objectives such as ISE, IAE and the integral time-weighted absolute error (ITAE). These multi-objective methods are applied to the transfer function matrix of a 10th order two-input two-output linear time invariant power system model. Simulation results are compared for the proposed multi-objective and single-objective optimizations to test the optimality of the proposed techniques. The MODE and MOPSO algorithms for model reduction are described in detail in the following sections and applied to a numerical example.

Problem Formulation of MIMO Systems

Let the transfer function of the HOS of order ‘n’ having ‘p’ inputs and ‘m’ outputs be

$$\begin{aligned}{}[G(s)]=\frac{1}{D_{n} (s)}\left[ {{\begin{array}{cccc} a_{11} (s) &{} a_{12} (s)&{} a_{13} (s)\cdots &{}a_{1p} (s) \\ a_{21} (s)&{} a_{22} (s) &{} a_{23} (s)\cdots &{}a_{2p} (s) \\ \vdots &{}\vdots &{} \quad \vdots \qquad \cdots &{}\vdots \\ a_{m1} (s)&{}a_{m2} (s)&{} a_{m3} (s)\cdots &{}a_{mp} (s)\\ \end{array}}}\right] , \end{aligned}$$
(1)

or

$$\begin{aligned}{}[G(s)]=\left[ g_{ij} (s)\right] ,\quad i=1,\,2,\ldots , m;\quad j=1,\,2,\ldots ,p, \end{aligned}$$

where the general form of each \(g_{ij} (s)\) of [G(s)] is

$$\begin{aligned} g_{ij} (s)= & {} \frac{a_{ij}(s)}{D_{n} (s)}\nonumber \\= & {} \frac{a_{0} +a_{1} s+a_{2} s^{2}+\cdots +a_{n-1} s^{n-1}}{b_{0} +b_{1}s+b_{2} s^{2}+\cdots +b_{n-1} s^{n-1}+s^{n}}. \end{aligned}$$
(2)

Let the transfer function matrix of the lower order system (LOS) of order ‘r’ having ‘p’ inputs and ‘m’ outputs be:

$$\begin{aligned}{}[R(s)]=\frac{1}{D_r (s)}\left[ {{\begin{array}{cccc} c_{11} (s)&{}c_{12} (s)&{}c_{13} (s)\cdots &{}c_{1p} (s) \\ c_{21} (s)&{}c_{22} (s)&{}c_{23} (s)\cdots &{}c_{2p} (s) \\ \vdots &{}\vdots &{} \quad \vdots \qquad \cdots &{}\vdots \\ c_{m1} (s)&{}c_{m2} (s)&{}c_{m3} (s)\cdots &{}c_{mp} (s) \\ \end{array}}}\right] , \end{aligned}$$
(3)

or

$$\begin{aligned}{}[R(s)]=\left[ r_{ij} (s)\right] ,\quad i=1,\,2,\ldots ,m;\quad j=1,\,2, \ldots ,p, \end{aligned}$$

where the general form of each \(r_{ij} (s)\) of [R(s)] is

$$\begin{aligned} r_{ij} (s)= & {} \frac{c_{ij}(s)}{D_{r}(s)}\nonumber \\= & {} \frac{c_{0} +c_{1} s+c_{2} s^{2}+\cdots +c_{r-1} s^{r-1}}{d_{0} +d_{1} s+d_{2} s^{2}+\cdots +d_{r-1} s^{r-1}+s^{r}}. \end{aligned}$$
(4)

Performance Indices for Model Order Reduction Techniques

The performance of model reduction techniques is measured by performance indices such as ISE, IAE, ITAE, the integral time square error (ITSE), etc. The definitions of ISE, IAE and ITAE are given below.

Integral Square Error (ISE)

ISE integrates the square of the error over time. It penalizes large errors heavily and therefore tends to eliminate them rapidly, but it tolerates small errors that persist for a long period of time.

$$\begin{aligned} \mathrm{ISE}=\int \nolimits _{0}^{\infty } e^{2}(t)dt. \end{aligned}$$
(5)

Integral Absolute Error (IAE)

IAE integrates the absolute error, giving equal weight to small and large errors. It produces a slower response than ISE but with less sustained oscillation.

$$\begin{aligned} \mathrm{IAE}=\int \nolimits _{0}^{\infty }|e(t)|dt. \end{aligned}$$
(6)

Integral Time-Weighted Absolute Error (ITAE)

ITAE integrates the absolute error multiplied by time. It weights errors that persist for a long time much more heavily than those at the start of the response.

$$\begin{aligned} \mathrm{ITAE}=\int \nolimits _{0}^{\infty } t |e(t)|dt, \end{aligned}$$
(7)

where the error \(e(t)=y(t)-y_{r}(t),\) and y(t) and \(y_{r}(t)\) are the unit step responses of the original and reduced order systems, respectively.
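As a concrete illustration, the three indices can be evaluated numerically from sampled step responses. The sketch below uses a simple trapezoidal rule; the function names are our own, not from the paper:

```python
import numpy as np

def _trapz(f, t):
    # Explicit trapezoidal rule, to keep the sketch free of version-specific helpers.
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

def performance_indices(t, y, y_r):
    """ISE, IAE and ITAE of Eqs. (5)-(7) from sampled step responses y and y_r."""
    e = y - y_r                       # error signal e(t) = y(t) - y_r(t)
    ise = _trapz(e**2, t)
    iae = _trapz(np.abs(e), t)
    itae = _trapz(t * np.abs(e), t)
    return ise, iae, itae

# Toy check: with e(t) = exp(-t), analytically ISE = 1/2, IAE = 1, ITAE = 1.
t = np.linspace(0.0, 50.0, 200001)
ise, iae, itae = performance_indices(t, np.exp(-t), np.zeros_like(t))
```

In practice, y and y_r would come from step-response simulations of the original and reduced models on a common time grid.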

MIMO Model Reduction

Many model reduction techniques are available in the literature for single variable systems, but there are only a few methods for the reduction of multivariable systems. However, the methods for single variable systems can be extended to the reduction of linear multi-input multi-output (MIMO) systems. Model order reduction for multivariable systems can be carried out using both classical and soft computing techniques.

Classical Methods

Classical methods for model order reduction are mathematical approaches to finding the reduced order model. For mixed methods applied to MIMO systems, the reduced order model is obtained by combining methods based on error minimization with stability preserving methods. Some of the existing classical methods for MIMO model reduction, such as the continued fraction and dominant pole method [16] and Pade approximants with the dominant eigenvalue concept [17], are discussed below.

Method 1 (Shieh and Wei) [16]

This method takes advantage of the matrix continued fraction approach and the dominant eigenvalue concept. For the reduced model, the matrix continued fraction approach is used to obtain the numerator polynomials and the common denominator is formed from the dominant eigenvalues. The procedure is simple and the reduced model is easily obtained with good approximation. The disadvantage is that the method is applicable only to systems with an equal number of inputs and outputs.

Method 2 (Shamash) [17]

This method combines Pade-type approximants with the dominant eigenvalue concept. For the reduced model, the Pade-type approximants are used to obtain the numerator polynomials and the common denominator is formed from the dominant eigenvalues. Unlike the Shieh technique above, this method is applicable to general multivariable systems, where the number of inputs need not equal the number of outputs. The method never fails to produce a model, since there is no requirement that any matrix be non-singular.

Method 3 (Liaw) [18]

The common denominator is obtained by preserving the dynamic modes with dominant energy contributions, and the coefficients of the numerator are obtained by the continued fraction method. The reduced model obtained by this method is always stable if the original system is stable, and it gives good approximations to both the transient and steady state responses of the original system.

Method 4 (Prasad et al.) [19]

In this method, the reduced order common denominator is determined by the modified stability equation method and the numerator matrix polynomial is formed by Pade approximation. This method is also applicable to general multivariable systems and overcomes the drawback of the stability equation method of approximating the non-dominant poles of the original system. The reduced model obtained by this method is stable if the original system is stable.

Method 5 (Viswakarma and Prasad) [20]

In this method, the differentiation method is used to determine the denominator polynomial and the factor division method is used to obtain the numerator polynomial of the reduced order model. For a linear time invariant system, the reduced model is stable if the original model is stable. This method also avoids errors between the initial and final values of the time responses of the original and reduced order systems.

Method 6 (Habib and Prasad) [21]

This method combines the advantages of the differentiation method and the modified Cauer continued fraction method to find biased reduced order models for linear dynamic systems, and it is computationally simple. The poles are determined by the biased differentiation method and the zeros are synthesized by matching the coefficients of the reduced denominator using a modified Routh array.

Method 7 (Agarwal and Mittal) [22]

This method is a combination of eigen spectrum analysis and Cauer second form. The denominator of the reduced order model is determined by eigen spectrum analysis and the numerator is found by Cauer second form. The reduced order model retains the steady state value and stability of the original system.

Method 8 (Rama Jaya Lakshmi et al.) [15]

The numerator of the lower order model is obtained by matching the coefficients of the HOS with those of the denominator of the LOS. The denominator of the reduced order model is obtained using the interlacing property. This method is flexible and simple, and the LOS retains stability. The steps involved in the method are discussed below.

Step-1 the denominator is given by

$$\begin{aligned} D_n (s)=b_{0} +b_{1} s+b_{2} s^{2}+\cdots +b_{n-1} s^{n-1}+s^{n}. \end{aligned}$$
(8)

The denominator polynomial is separated into even and odd parts.

For n is even

$$\begin{aligned} D^{even}(s)= & {} b_0 +b_{2} s^{2}+b_{4} s^{4}+\cdots +b_n s^{n},\nonumber \\ \frac{D^{odd}(s)}{s}= & {} b_{1} +b_{3} s^{2}+b_{5} s^{4}+\cdots +b_{n-1} s^{n-2}. \end{aligned}$$
(9)

For n is odd

$$\begin{aligned} D^{even}(s)= & {} b_0 +b_{2} s^{2}+b_{4} s^{4}+\cdots +b_{n-1} s^{n-1},\nonumber \\ \frac{D^{odd}(s)}{s}= & {} b_{1} +b_{3} s^{2}+b_{5} s^{4}+\cdots +b_n s^{n-1}. \end{aligned}$$
(10)

Let \(\pm j\omega _{e,i}\) and \(\pm j\omega _{o,i}\) denote the roots of \(D^{even}(s)\) and \(\frac{D^{odd}(s)}{s},\) respectively. Then it can be observed that the root magnitudes interlace:

$$\begin{aligned} 0<\omega _{e,1} <\omega _{o,1} <\omega _{e,2} <\omega _{o,2}<\omega _{e,3} \cdots \end{aligned}$$
(11)

Step-2 the even and odd polynomials of the reduced denominator can be written as follows.

For r is even

$$\begin{aligned} D_r^{even} (s)= & {} \left( s^{2}+\omega _{e,1}^{2}\right) \left( s^{2}+\omega _{e,2}^{2}\right) \cdots \left( s^{2}+\omega _{e,\frac{r}{2}}^{2}\right) ,\nonumber \\ \frac{D_r^{odd} (s)}{s}= & {} \left( s^{2}+\omega _{o,1}^{2}\right) \left( s^{2}+\omega _{o,2}^{2}\right) \cdots \left( s^{2}+\omega _{o,\frac{r}{2}-1}^{2}\right) .\nonumber \\ \end{aligned}$$
(12)

For r is odd

$$\begin{aligned} D_r^{even} (s)= & {} \left( s^{2}+\omega _{e,1}^{2}\right) \left( s^{2}+\omega _{e,2}^{2}\right) \cdots \left( s^{2}+\omega _{e,\frac{r-1}{2}}^{2}\right) \nonumber \\ \frac{D_r^{odd} (s)}{s}= & {} \left( s^{2}+\omega _{o,1}^{2}\right) \left( s^{2}+\omega _{o,2}^{2}\right) \cdots \left( s^{2}+\omega _{o,\frac{r-1}{2}}^{2}\right) .\nonumber \\ \end{aligned}$$
(13)

Modified reduced denominators are

$$\begin{aligned} D_m^{even} (s)=I_{1} *D_r^{even} (s);\quad D_m^{odd} (s)=I_{2}*D_r^{odd} (s). \end{aligned}$$
(14)

Now \(D_r (s)=D_m^{even} (s)+D_m^{odd} (s),\) where \(I_{1}\) is the ratio of the constant terms of \(D^{even}(s)\) and \(D_r ^{even}(s),\) and \(I_{2}\) is the ratio of the constant terms of \(D^{odd}(s)\) and \(D_r^{odd} (s).\)

Step-3 the coefficients \(c_{i}\) of each \(c_{ij} (s)\) of the numerator polynomial of the reduced order model are obtained using the equation

$$\begin{aligned} c_i =\frac{a_i }{b_i }d_i\quad \mathrm{for}\quad 0\le i\le r-1. \end{aligned}$$
(15)

Step-4 the rth order reduced order model is obtained in the form of Eq. (4).
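The steps above can be sketched in code for a single element \(g_{ij}(s)\). The sketch below assumes the even and odd parts of the denominator have the interlacing, purely imaginary roots of Eq. (11); `ipcm_reduce` and its ascending-coefficient convention are our own illustrative choices, not notation from the paper:

```python
import numpy as np

def _poly_from_freqs(ws):
    # Ascending-coefficient polynomial prod_k (x + w_k^2) in the variable x = s^2.
    p = np.array([1.0])
    for w in ws:
        p = np.convolve(p, [w**2, 1.0])
    return p

def ipcm_reduce(num_asc, den_asc, r):
    """IPCM reduction of one g_ij(s) to order r (sketch).

    num_asc = [a_0, ..., a_{n-1}] and den_asc = [b_0, ..., b_{n-1}, 1] hold
    ascending coefficients, as in Eq. (2).
    """
    num_asc = np.asarray(num_asc, float)
    den_asc = np.asarray(den_asc, float)

    # Step 1: even and odd parts of D_n(s), viewed as polynomials in x = s^2.
    even_x = den_asc[0::2]
    odd_x = den_asc[1::2]
    w_e = np.sort(np.sqrt(np.abs(np.roots(even_x[::-1]))))
    w_o = np.sort(np.sqrt(np.abs(np.roots(odd_x[::-1]))))

    # Step 2: keep the smallest (dominant) frequencies, Eqs. (12)-(13).
    n_e, n_o = (r // 2, r // 2 - 1) if r % 2 == 0 else ((r - 1) // 2, (r - 1) // 2)
    dr_even = _poly_from_freqs(w_e[:n_e])
    dr_odd = _poly_from_freqs(w_o[:n_o])
    dr_even *= den_asc[0] / dr_even[0]          # I_1 of Eq. (14)
    dr_odd *= den_asc[1] / dr_odd[0]            # I_2 of Eq. (14)

    d = np.zeros(r + 1)                         # recombine D_r(s), ascending in s
    d[0::2] = dr_even
    d[1::2] = dr_odd
    d /= d[-1]                                  # monic, as in Eq. (4)

    # Step 3: coefficient matching c_i = (a_i / b_i) d_i, Eq. (15).
    c = num_asc[:r] / den_asc[:r] * d[:r]
    return c, d
```

Note that Eq. (15) forces \(c_{0}/d_{0}=a_{0}/b_{0},\) so the reduced model preserves the DC gain of the original.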

Drawbacks of Classical Methods

Though the IPCM method is efficient, it may not guarantee minimization of all the objective functions simultaneously. The following drawbacks of classical methods are observed for the model reduction of the MIMO power system model.

  • If one method offers a low ISE value, it does not guarantee a low ITAE value.

  • Classical methods are case-specific. A method that gives better results for one problem may not do the same for another application.

  • With classical methods it is uncertain which method is best suited for a given objective, such as minimum ISE, IAE or ITAE.

Soft Computing Techniques

Recently, soft computing techniques such as GA [23], modified GA [24] and PSO [5] have become popular for solving optimization problems. Soft computing techniques are combined with classical methods for model reduction to address real-world applications having thousands of parameters. With soft computing techniques, the best decisions can be based on the minimization of any objective chosen at the beginning of the problem. The denominator coefficients are reduced by stability preserving methods such as Routh approximation, the Routh stability criterion, etc.

Here the numerator coefficients are determined by soft computing techniques [25–27] based on minimization of the ISE between the original and reduced order models. In ISE, large errors are magnified and small errors (\(<\)1) become still smaller because of the squaring. But there are other performance indices (objectives) for model reduction, such as IAE, ITAE, etc., related to small, large and long-persisting errors. The characteristics of ISE, IAE and ITAE are discussed in the “Performance Indices for Model Order Reduction Techniques” section. Soft computing techniques are suited not only to single-objective optimization but also to multi-objective model reduction problems.

Multi-objective Optimization

Many real-world decision-making problems involve the simultaneous optimization of several competing objectives. In multi-objective optimization there exists a set of solutions, known as non-dominated or Pareto optimal solutions, in which every solution may be acceptable and which carries information about alternative optima. The set of Pareto optimal solutions is superior to the rest of the solutions when all objectives are considered, but its members may be inferior to other solutions when a single objective is considered.

Problem Formulation

To apply optimization techniques to any problem, the objective function and the constraints must first be formulated [28]. A general form of the multi-objective optimization problem (MOOP), subject to a set of equality and inequality constraints, is given as follows:

$$\begin{aligned} \mathrm{Minimize}/\mathrm{maximize}\,f_{k}(x)&k=1,\,2,\ldots ,K, \end{aligned}$$
(16)
$$\begin{aligned} \mathrm{Subjected\,to}\,p_{i} (x)\ge 0&i=0,\,1,\,2,\ldots ,I, \end{aligned}$$
(17)
$$\begin{aligned} \quad \qquad \qquad q_{j}(x)=0&j=0,\,1,\,2,\ldots ,J, \end{aligned}$$
(18)

where \(f_{k}\) is the kth objective function and x is a design or decision vector that represents a solution; \(K,\,I\) and J are the numbers of objective functions, inequality constraints and equality constraints, respectively.

Let \(x_{1}\) and \(x_{2}\) be two solutions of the MOOP. A solution \(x_{1}\) is said to dominate \(x_{2}\) if it satisfies the following two conditions:

  1. The solution \(x_{1}\) is not worse than \(x_{2}\) for all objectives, i.e.,

    $$\begin{aligned} \mathrm{for\,all}\, i\in \{1,\,2,\ldots ,K\}{\text {:}}\,f_{i}\left( x_{1}\right) \le f_{i}\left( x_{2}\right) . \end{aligned}$$
    (19)
  2. The solution \(x_{1}\) is strictly better than \(x_{2}\) for at least one objective, i.e.,

    $$\begin{aligned} \mathrm{there\,exists}\,j\in \{ {1,\,2,\ldots ,K}\}{\text {:}}\,f_j\left( x_{1}\right) <f_{j}\left( x_{2}\right) . \end{aligned}$$
    (20)
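For minimization, conditions (19)–(20) translate directly into a few array operations; `dominates` is a hypothetical helper name used only for illustration:

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 dominates f2 under minimization:
    no worse in every objective (Eq. (19)) and strictly better
    in at least one (Eq. (20))."""
    f1 = np.asarray(f1, float)
    f2 = np.asarray(f2, float)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))
```

For example, (1, 2) dominates (2, 2), but two identical vectors never dominate each other, and (1, 3) versus (2, 2) is mutually non-dominated.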

Reduced Order Modeling Using Proposed Multi-objective Optimization: MOPSO and MODE

Algorithms for PSO and DE based on multi-objective optimization (MOPSO and MODE) are developed. These algorithms incorporate elitism, no sharing parameter needs to be chosen, and the population is initialized as usual. The population is then sorted by non-domination into fronts, the first front being the completely non-dominated set in the current population. The second front is dominated only by the individuals of the first front, and so on. Individuals in the first front are assigned a fitness (rank) value of 1, individuals in the second front a value of 2, and so on.

In addition to the fitness value, a new parameter known as the crowding distance is calculated for each individual. The crowding distance measures how close an individual is to its neighbours; a large average crowding distance results in better diversity in the population. Parents are selected by binary tournament based on rank and crowding distance. The selected population is updated using the PSO or DE operators. For a population of size N, the current population and its offspring are again sorted by non-domination, and only the N best individuals are selected.
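The ranking and crowding computations described above can be sketched as follows, in the spirit of the scheme of [11]; all names here are illustrative:

```python
import numpy as np

def dominates(f1, f2):
    # Pareto dominance for minimization (Eqs. (19)-(20)).
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def non_dominated_sort(F):
    """Return a rank per individual: 1 for the first (non-dominated) front,
    2 for the front dominated only by front 1, and so on."""
    F = np.asarray(F, float)
    n = len(F)
    dominated_by_me = [[] for _ in range(n)]
    n_dominators = [0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(F[i], F[j]):
                dominated_by_me[i].append(j)
            elif dominates(F[j], F[i]):
                n_dominators[i] += 1
    rank = [0] * n
    front = [i for i in range(n) if n_dominators[i] == 0]
    level = 1
    while front:
        nxt = []
        for i in front:
            rank[i] = level
            for j in dominated_by_me[i]:
                n_dominators[j] -= 1
                if n_dominators[j] == 0:
                    nxt.append(j)
        front, level = nxt, level + 1
    return rank

def crowding_distance(F):
    """Per-objective normalized distance to the two nearest neighbours;
    boundary individuals get infinity so they are always preferred."""
    F = np.asarray(F, float)
    n, m = F.shape
    d = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        d[order[0]] = d[order[-1]] = np.inf
        span = F[order[-1], k] - F[order[0], k]
        if span > 0:
            d[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return d
```

Sorting first by ascending rank and then by descending crowding distance gives the selection order used when trimming the combined parent-offspring population back to size N.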

Detailed Description of Proposed MODE and MOPSO Algorithm

This section describes application of DE and PSO for solving multi-objective model order reduction problem. The flowchart for MOPSO and MODE for order reduction is shown in Fig. 1.

Fig. 1
figure 1

Flow chart of multi-objective order reduction

The Following Steps Describe the Detailed Procedure of Proposed Methods

Step-1 generate randomly an initial population for the coefficients of the numerator of the reduced model with size \(N_{pop} \times N\) across the problem domain and store it in X as given below.

where \(N=r-1\) for the rth order reduced model.

$$\begin{aligned} X=\left[ X^{1}\, X^{2}\,\ldots ,X^{Npop}\right] ^{T}, \end{aligned}$$
(21)

where

$$\begin{aligned} X^{i}=\left[ c_{0}^{i},\, c_{1}^{i},\ldots ,c_{r-1}^i\right] . \end{aligned}$$
(22)

The elements of \(X^{i}\) form the set of decision variables.

Step-2 handle constraint violations as shown in Eq. (23):

$$\begin{aligned}&X_{\min }^i <X^{i}<X_{\max }^{i}.\nonumber \\&\quad \mathrm{If}\,X^{i}>X_{\max }^{i}\quad \mathrm{then}\quad X^{i}=X_{\max }^{i}\quad \mathrm{and}\nonumber \\&\quad \mathrm{If}\,X^{i}<X_{\min }^{i}\quad \mathrm{then}\quad X^{i}=X_{\min }^{i}. \end{aligned}$$
(23)

Step-3 the constraints are to be satisfied for each decision variable; that is, the decision variables, which are the coefficients of the reduced order numerator, must lie within the given limits.
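The clamping rule of Eq. (23) is a simple element-wise clip; a one-line sketch (helper name is our own):

```python
import numpy as np

def enforce_bounds(X, x_min, x_max):
    # Element-wise clamp of every decision vector into [x_min, x_max], Eq. (23).
    return np.clip(X, x_min, x_max)
```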

Step-4 find the objective functions for the initial population.

Step-5 the model order reduction by soft computing is based on objective functions such as ISE, IAE and ITAE.

Step-6 based on the non-dominated sorting and the crowding distance described in [11], the population is sorted.

Step-7 set iteration count \(k=0.\)

Step-8 \(k=k+1.\)

Step-9 select the best \(N_s\) sets of decision variables giving non-dominated solutions in the archive \(X_n\) and store them in an archive \(X_{Best}\) for the PSO and DE operations.

For PSO operation

Step-10 the decision variables are treated as particles. At each iteration, the velocities of all particles are updated as given in the equation below.

$$\begin{aligned} v_{id}^{n+1} =wv_{id}^n +c_{1} r_{1}^n \left( {P_{id}^n -x_{id}^n }\right) +c_{2} r_{2}^n \left( {P_{gd}^n -x_{id}^n}\right) , \end{aligned}$$
(24)

where w is the inertia weight, \(c_{1}\) and \(c_{2}\) are the cognitive and social acceleration coefficients respectively, and \(r_{1},\,r_{2}\) are random numbers uniformly distributed in the range (0, 1).

The ith particle in the swarm population is represented by a d-dimensional vector \(X_i=(x_{i1},\,x_{i2},\ldots ,x_{id}).\) Its velocity is denoted by another d-dimensional vector \(V_{i}=(v_{i1},\, v_{i2},\ldots ,v_{id}).\) The best previously visited position of the ith particle is represented by \(P_{i}=(p_{i1},\,p_{i2},\ldots , p_{id}),\) and the best position visited by the whole swarm is \(P_{g}=(p_{g1},\,p_{g2},\ldots ,p_{gd}).\)

Step-11 the positions of all particles are updated as given in the equation below for each iteration.

$$\begin{aligned} x_{id}^{n+1} =x_{id}^n +v_{id}^{n+1}. \end{aligned}$$
(25)

Step-12 compare each particle’s performance with its \(p_{best}.\) If the current value is better than \(p_{best},\) set \(p_{best}\) equal to the current value and its location to the current location; otherwise \(p_{best}\) remains the same. The overall best is \(g_{best}.\)
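One PSO update step might look as follows; the coefficient values are typical default choices, not parameters taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One particle update. p_best is the particle's own best position,
    g_best the swarm (or archive-selected) best; w, c1, c2 are typical values."""
    r1 = rng.random(x.shape)          # fresh uniform randoms per dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new           # position update follows the velocity update
```

When a particle already sits at both its personal and the global best, the attraction terms vanish and only the inertia term \(wv\) remains.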

For DE operation

Step-10 the mutation operation is performed on each individual target vector \(x_{i}^{G}\) to obtain a mutant vector \(V_{i}^{G+1},\) given by

$$\begin{aligned} V_i^{G+1} =X_{r1}^{G} +F\cdot \left( X_{r2}^{G} -X_{r3}^{G}\right) . \end{aligned}$$
(26)

The randomly chosen indexes \(r_{1},\,r_{2},\,r_{3} \in [1,\,N_{P}]\) are integers, mutually different and different from the running index i. The mutation factor \(F>0\) is a real constant \(\in [0,\,2]\) which controls the amplification of the differential variation \((X_{r2}^G -X_{r3}^{G}).\)

Step-11 the crossover operation is performed to obtain the trial vector

$$\begin{aligned} U_i^{G+1}= & {} (U_{1,i}^{G+1},\,U_{2,i}^{G+1},\ldots ,U_{D,i}^{G+1}) \hbox { is formed where}\nonumber \\ U_{j,i}^{G+1}= & {} \left\{ {{\begin{array}{l} V_{j,i}^{G+1}\quad \mathrm{if\,rand}(j)\le \textit{CR}\quad \mathrm{or}\, j=rn(i), \\ X_{j,i}^G \quad \mathrm{if\,rand}(j)>CR\quad \mathrm{and}\,j\ne rn(i),\\ \end{array}}}\right. \end{aligned}$$
(27)

where rand\((j)\in [0,\,1]\) is the jth evaluation of a uniform random number generator. CR is the crossover constant \(\in [0,\,1].\)

Step-12 the target vector \(x_{i}^{G}\) is compared with the trial vector \(U_i^{G+1}\) in the selection operation. The one with the better fitness value is admitted to the next generation, and the algorithm is repeated for the given number of iterations.
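A compact sketch of one DE generation, combining rand/1 mutation, binomial crossover and greedy selection; for brevity it uses a single scalar fitness rather than the multi-objective sorting, and the parameter values are typical defaults rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def de_step(X, fitness, F=0.8, CR=0.9):
    """One DE generation over population X (minimization of a scalar fitness)."""
    n_pop, dim = X.shape
    X_next = X.copy()
    for i in range(n_pop):
        others = [k for k in range(n_pop) if k != i]
        r1, r2, r3 = rng.choice(others, 3, replace=False)
        v = X[r1] + F * (X[r2] - X[r3])          # mutant vector
        cross = rng.random(dim) <= CR            # rand(j) <= CR
        cross[rng.integers(dim)] = True          # j = rn(i): at least one gene from v
        u = np.where(cross, v, X[i])             # trial vector
        if fitness(u) <= fitness(X[i]):          # greedy selection
            X_next[i] = u
    return X_next
```

Because selection is greedy per individual, the best fitness in the population can never get worse from one generation to the next.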

For PSO or DE operation

Step-13 recalculate the fitness of all best particles and sort them again using the non-dominated sorting method and crowding distance [11].

Step-14 check for the stopping criterion. If the number of iterations reaches the maximum then go to the next step, otherwise go to step 8.

Step-15 acquire the set of Pareto optimal solutions from the final iteration.

Step-16 the best compromise solution, which satisfies all objectives, is obtained from the Pareto optimal front.

In model reduction, the best compromise solution gives the coefficients of the reduced model that yield the optimal solution.

Best Compromise Solution

Once the Pareto optimal set of non-dominated solutions is obtained, the best compromise solution can be offered to the decision maker using a fuzzy membership function. In this paper, the fuzzy membership approach [13] is used to find the best compromise solution. The ith objective function \(f_{i}\) of individual j is represented by a membership function \(\mu _{i}^{j}\) defined as:

$$\begin{aligned} \mu _i^j =\left\{ {{\begin{array}{ll} 1&{}\quad {f_i \le f_i^{\min }}, \\ {\frac{f_i^{\max } -f_i }{f_i^{\max } -f_i^{\min } }}&{}\quad {f_i^{\min } \le f_i\le f_i^{\max } }, \\ 0&{}\quad {f_i \ge f_i^{\max }}, \\ \end{array} }} \right. \end{aligned}$$

where \(f_i^{\min }\) and \(f_i^{\max }\) are the minimum and maximum values of the ith objective function among all non-dominated solutions. For every non-dominated solution j, the normalized membership function \(\mu ^{j}\) is calculated as:

$$\begin{aligned} \mu ^{j}=\frac{\sum \nolimits _{i=1}^{N}\mu _{i}^{j}}{\sum \nolimits _{j=1}^{P} \sum \nolimits _{i=1}^{N}\mu _{i}^{j}}, \end{aligned}$$

where P is the total number of non-dominated solutions. The best compromise solution is the one having the maximum value of \(\mu ^{j}.\)
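The two membership equations above reduce to a few array operations; a sketch with illustrative names (the guard against a zero objective span is our own addition):

```python
import numpy as np

def best_compromise(F):
    """Index of the best compromise among non-dominated objective vectors
    (rows of F), using the fuzzy memberships defined above."""
    F = np.asarray(F, float)
    f_min = F.min(axis=0)
    f_max = F.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)  # guard degenerate objectives
    mu = np.clip((f_max - F) / span, 0.0, 1.0)          # membership mu_i^j
    mu_norm = mu.sum(axis=1) / mu.sum()                 # normalized mu^j
    return int(np.argmax(mu_norm))
```

For instance, among the front (0, 10), (4, 4), (10, 0) the middle vector balances both objectives and is returned as the compromise.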

Numerical Example

The Phillips–Heffron model of a single-machine infinite bus (SMIB) power system is shown in Fig. 2 [23]. The system consists of a three-phase 160-MVA synchronous machine with an automatic excitation control system, i.e. a standard IEEE type-I exciter with rate feedback and a power system stabilizer (PSS). The numerical values of the parameters defining the total operating system, as well as the operating point, are given in [23]. Based on the state variables, parameter values and the operating point, the system (without accounting for the limiters) may be described in state space form as:

$$\begin{aligned} {\dot{x}}= & {} Ax+Bu, \end{aligned}$$
(28)
$$\begin{aligned} y= & {} Cx, \end{aligned}$$
(29)

where the state vector is

$$\begin{aligned} x^{T}=\left[ E_{q}^{\prime }\;\omega \;\delta \;v_{1}\; v_{2}\; v_{3}\;v_{4}\;v_{5}\;v_{R}\;E_{FD}\right] . \end{aligned}$$

The control (input) vector is

$$\begin{aligned} u^{T}=\left[ \varDelta V_{Ref}\; \varDelta T_{m}\right] . \end{aligned}$$

The output vector is

$$\begin{aligned} y^{T}=\left[ \delta \;V_{t}\right] . \end{aligned}$$

State space equations describing each individual block of Fig. 2 in terms of the state variables are given in the Appendix and organized in vector-matrix form as in Eqs. (28) and (29). The matrices \({{\mathbf {A}}}\) and \({{\mathbf {B}}}\) are defined below:

$$\begin{aligned} A = \left[ {\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} -\frac{1}{{K_{3} \tau _{{d_{o}}}^{\prime }}} &{} 0&{}-\frac{{K_{4}}}{{\tau _{{d_{o}}}^{'}}} &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{}\frac{1}{{\tau _{{d_{o}}}^{\prime }}}\\ -\frac{{K_{2}}}{{6H}}&{}0&{}-\frac{{K_{1}}}{{6H}}&{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} {\omega _{R}} &{} 0 &{} 0 &{} 0 &{} 0&{} 0 &{} 0 &{} 0 &{} 0 \\ \frac{{K_{R} K_{6}}}{{\tau _{R}}} &{} 0 &{}\frac{{K_{R} K_{5}}}{{\tau _{R}}}&{}-\frac{1}{{\tau _{R}}}&{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{}0 \\ 0 &{} 0 &{} 0 &{} 0 &{} -\frac{1}{{\tau _{F}}} &{} 0 &{} 0 &{} 0 &{} \frac{{K_{F}}}{{\tau _{E}\tau _{F}}}&{}-\frac{{K_{F} (K_{{F + S_{E}}})}}{{\tau _{E}\tau _{F}}} \\ -\frac{{K_{2}\tau _{3}}}{{6H\tau _{4}}}&{}\frac{1}{{\tau _{4}}}&{}-\frac{{K_{1} \tau _{3}}}{{6H\tau _{4}}} &{} 0 &{} 0 &{}-\frac{1}{{\tau _{4}}} &{} 0 &{} 0 &{} 0 &{} 0 \\ -\frac{{K_{2}\tau _{1} \tau _{3}}}{{6H\tau _{2} \tau _{4}}}&{}\frac{{\tau _{1}}}{{\tau _{2} \tau _{4}}} &{}-\frac{{K_{1} \tau _{1} \tau _{3}}}{{6H\tau _{2} \tau _{4}}} &{} 0 &{} 0 &{} \frac{{\tau _{4} -\tau _{1}}}{{\tau _{2} \tau _{4}}} &{}-\frac{1}{{\tau _{2}}}&{} 0 &{} 0 &{} 0 \\ -\frac{{K_{0} K_{2} \tau _{1} \tau _{3}}}{{6H\tau _{2} \tau _{4}}}&{}\frac{{K_{0} \tau _{1}}}{{\tau _{2} \tau _{4}}} &{} -\frac{{K_{0} K_{1} \tau _{1} \tau _{3}}}{{6H\tau _{2} \tau _{4}}} &{} 0 &{} 0 &{}\frac{{K_{0} (\tau _{4}-\tau _{1} )}}{{\tau _{2} \tau _{4}}} &{} \frac{{K_{0}}}{{\tau _{2}}}&{}-\frac{1}{{\tau _{0}}} &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} -\frac{{K_{A}}}{{\tau _{A}}} &{}-\frac{{K_{A}}}{{\tau _{A}}}&{} 0 &{} 0 &{} \frac{{K_{A}}}{{\tau _{A}}} &{}-\frac{1}{{\tau _{A}}} &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \frac{1}{{\tau _{E}}} &{} -\frac{{K_{E}}}{{\tau _{E}}}\\ \end{array} } \right] , \end{aligned}$$
$$\begin{aligned} B^{T}=\left[ {{\begin{array}{cccccccccc} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} {\frac{K_A }{\tau _A }}&{} 0 \\ 0&{} {\frac{1}{2H}}&{} 0&{} 0&{} 0&{}{\frac{\tau _{3} }{2H\tau _{4} }}&{} {\frac{\tau _{1} \tau _{3} }{2H\tau _{2} \tau _{4} }}&{} {\frac{K_o \tau _{1} \tau _{3} }{2H\tau _{2} \tau _{4} }}&{} 0&{} 0 \\ \end{array}}} \right] . \end{aligned}$$

The numerical values of the state space matrices A, B and C are obtained by substituting the data given in [23]. The transfer function matrix of the 10th order two-input two-output linear time invariant power system model, obtained from the state space matrices, is given by

$$\begin{aligned} \left[ {G(s)} \right] =\frac{1}{D(s)} \left[ {{\begin{array}{cc} a_{11} (s)&{} a_{12} (s) \\ a_{21} (s)&{} a_{22} (s) \\ \end{array} }} \right] , \end{aligned}$$
(30)

where the common denominator D(s) is given by

$$\begin{aligned} D(s)= & {} s^{10 }+ 64.21s^{9}+ 1596s^{8}+ 1.947\times 10^{4}s^{7}\\&+\,1.268\times 10^{5}s^{6}+ 5.036\times 10^{5}s^{5}+ 1.569\\&\times \,10^{6}s^{4} +3.24\times 10^{6}s^{3}+ 4.061\times 10^{6}s^{2}\\&+\,2.095\times 10^{6}s + 2.531\times 10^{5} \end{aligned}$$

and

$$\begin{aligned} a_{11}(s)= & {} 0\cdot s^{8}+ 0\cdot s^{7}+0\cdot s^{6}-2298s^{5}\\&-\,9.85\times 10^{4} s^{4} -1.38 \times 10^{6} s^{3}\\&-\,6.838 \times 10^{6}s^{2}-6.1\times 10^{6}s -5.43\times 10^{5}\\ a_{12}(s)= & {} 29.09s^{8}+ 1868s^{7}+ 4.61\times 10^{4}s^{6}\\&+\,5.459\times 10^{5}s^{5}+ 3.185\times 10^{6}s^{4}\\&+\,8.702\times 10^{6}s^{3}+ 1.206\times 10^{7}s^{2}\\&+\,7.606\times 10^{6}s +6.483\times 10^{5}\\ a_{21}(s)= & {} 0\cdot s^{8}+ 85.23s^{7}+ 3651s^{6}+5.208\times 10^{4}s^{5}\\&+\,2.98\times 10^{5}s^{4}+ 8.471\times 10^{5}s^{3}+ 3.105\\&\times \,10^{6}s^{2}+2.752\times 10^{6}s+ 2.45\times 10^{5}\\ a_{22} (s)= & {} -1.26s^{8}-85.18s^{7}-2089s^{6}-2.568\times 10^{4}s^{5}\\&-\,1.909\times 10^{5}s^{4}-7.123\times 10^{5}s^{3}-1.084\\&\times \,10^{6}s^{2}-2.972 \times 10^{5}s-1.942\times 10^{4}. \end{aligned}$$
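The common denominator D(s) above is simply the characteristic polynomial \(\det(sI-A)\). The sketch below demonstrates this conversion on a small stand-in matrix; for the actual model one would substitute the 10 × 10 A built from the data of [23]:

```python
import numpy as np

# Stand-in 2x2 system matrix (illustrative only, not the SMIB model's A).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# np.poly returns the descending coefficients of det(sI - A).
den = np.poly(A)
# Here det(sI - A) = s^2 + 3s + 2 = (s + 1)(s + 2).
```

The numerators \(a_{ij}(s)\) follow from B and C as well, e.g. via a full state-space-to-transfer-function conversion routine.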

The reduced order transfer function matrix for the original two-input two-output system is given by

$$\begin{aligned}{}[ {R(s)}]=\frac{1}{\tilde{D} (s)}\left[ {{\begin{array}{cc} b_{11} (s)&{} b_{12} (s) \\ b_{21} (s)&{} b_{22} (s) \\ \end{array} }} \right] . \end{aligned}$$
(31)
Fig. 2
figure 2

Block diagram of Phillips–Heffron model of single-machine infinite bus (SMIB) power system

All methods are applied to the 10th order transfer function matrix of the two-input two-output linear time invariant power system model to obtain the reduced 3rd order model.

Results and Discussions

The reduced order models obtained by all eight classical methods are shown in Table 1. In the table, \(b_{11} (s)\) and \(b_{12} (s)\) indicate the reduced models for a step change in the input \(\Delta V_{Ref}\) and the disturbance \(\Delta T_{m}\) respectively for the torque angle \((\delta)\) output. Similarly, \(b_{21} (s)\) and \(b_{22} (s)\) indicate the reduced models for the same step changes for the terminal voltage \((V_{t})\) output. The objective functions ISE, IAE and ITAE and the stability-response measures settling time, overshoot and undershoot of the reduced order models are given in Table 2 for the eight multivariable classical methods.

Table 1 Reduced order models by various classical methods
Table 2 Comparison of some objective functions for the various classical model reduction methods
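The three error indices reported in Table 2 can be evaluated numerically from sampled responses. The following is a minimal sketch of their discrete approximations, assuming uniformly spaced samples; the function name is illustrative, not from the paper:

```python
import numpy as np

def performance_indices(t, y_full, y_red):
    """Approximate ISE, IAE and ITAE between two sampled step responses.

    t      -- 1-D array of sample times (uniform spacing assumed)
    y_full -- response of the original high-order model
    y_red  -- response of the reduced-order model
    """
    e = np.asarray(y_full) - np.asarray(y_red)  # instantaneous error
    dt = t[1] - t[0]                            # sample period
    ise = np.sum(e**2) * dt                     # integral of squared error
    iae = np.sum(np.abs(e)) * dt                # integral of absolute error
    itae = np.sum(t * np.abs(e)) * dt           # time-weighted absolute error
    return ise, iae, itae
```

ITAE weights late-time errors more heavily, which is why it is the index of choice for penalising errors that persist for a long time.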

Each classical method yields a single solution, which makes decision making among alternative solutions (minimization of absolute error, squared error, etc.) difficult. It has been observed that when the denominator is reduced by the interlacing property, the four output responses have the overall minimum ISE, IAE and ITAE values compared to the other seven classical methods. The reduced-order models whose denominator is reduced by the dominant-pole-retention method offer settling-time, overshoot and undershoot responses very close to those of the original model.

However, the ISE, IAE and ITAE values (which are good measures of the efficiency of model reduction methods) for the dominant-pole-retention method are not as small as those for the interlacing property. The reduced models obtained by the interlacing property and the dominant-pole-retention method are tested for adequacy by comparing the \(\delta \) and \(V_{t}\) output time responses of the original 10th-order multivariable system with those of the 3rd-order reduced models, as shown in Figs. 3 and 4 respectively.
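This adequacy check amounts to simulating the step responses of both models and overlaying them. As a rough sketch of the mechanics, a SISO transfer function can be simulated by building its controllable canonical state-space form and integrating with forward Euler; the coefficients used here are hypothetical first-order examples, not the paper's \(b_{ij}(s)\) models:

```python
import numpy as np

def step_response(num, den, t):
    """Unit-step response of the strictly proper SISO transfer function
    num(s)/den(s), via controllable canonical form and forward-Euler
    integration over the time grid t (uniform spacing assumed)."""
    num, den = np.asarray(num, float), np.asarray(den, float)
    n = len(den) - 1                                   # system order
    a = den[1:] / den[0]                               # monic denominator a1..an
    b = np.zeros(n); b[n - len(num):] = num / den[0]   # padded numerator b1..bn
    A = np.zeros((n, n)); A[:-1, 1:] = np.eye(n - 1); A[-1] = -a[::-1]
    B = np.zeros(n); B[-1] = 1.0
    C = b[::-1]
    x = np.zeros(n); dt = t[1] - t[0]; y = np.empty_like(t)
    for k in range(len(t)):
        y[k] = C @ x
        x = x + dt * (A @ x + B)                       # unit-step input u = 1
    return y
```

With `step_response` evaluated for both the full and the reduced model on the same grid, the responses can be compared point by point, exactly as Figs. 3 and 4 do graphically.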

Fig. 3 Torque angle response for input step change. a \(\Delta V_{ref} = 0.05\) p.u. and \(\Delta T_{m} = 0\) p.u. b \(\Delta V_{ref} = 0\) p.u. and \(\Delta T_{m} = 0.05\) p.u.

Fig. 4 Terminal voltage response for input step change. a \(\Delta V_{ref} = 0.05\) p.u. and \(\Delta T_{m} = 0\) p.u. b \(\Delta V_{ref} = 0\) p.u. and \(\Delta T_{m} = 0.05\) p.u.

From the comparisons in Table 2, it is observed that if one method offers a lower ISE value, this does not guarantee a lower ITAE value. Although the interlacing property is identified as the most efficient classical method, it does not satisfy all objective functions. Therefore, multi-objective optimization using the MOPSO and MODE algorithms is implemented for reduced-order modeling of the higher-order multivariable system (Figs. 3, 4).

These methods minimize multiple objectives (ISE, IAE and ITAE) so as to suppress the small, normal and large errors that persist over time between the original and reduced-order models. Figures 5 and 6 show the Pareto-optimal fronts obtained by MODE and MOPSO for the \(b_{11} (s)\) and \(b_{12} (s)\) reduced models respectively, for the three objectives. Similarly, Figs. 7 and 8 show the Pareto-optimal fronts obtained by MODE and MOPSO for the \(b_{21} (s)\) and \(b_{22} (s)\) reduced models. These Pareto fronts consist of sets of non-dominated solutions. An interactive fuzzy-membership approach is used to select the compromise solution among the non-dominated solutions [13].
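A common form of the fuzzy-membership selection (used here as a sketch; [13] should be consulted for the exact formulation) assigns each objective a linear membership between its best and worst values on the front, then picks the solution with the largest normalised total membership:

```python
import numpy as np

def compromise_solution(F):
    """Index of the compromise point on a Pareto front.

    F is an (n_solutions x n_objectives) array of objective values
    (e.g. ISE, IAE, ITAE columns), all to be minimised. Each objective
    gets a linear fuzzy membership: 1 at the column minimum, 0 at the
    column maximum. The row with the largest normalised membership sum
    is the compromise solution.
    """
    F = np.asarray(F, dtype=float)
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    span = np.where(fmax > fmin, fmax - fmin, 1.0)  # guard degenerate columns
    mu = (fmax - F) / span                          # per-objective membership
    score = mu.sum(axis=1) / mu.sum()               # normalised per solution
    return int(np.argmax(score))
```

The extreme points of the front (minimum of a single column of `F`) correspond to the single-objective choices discussed below, while `compromise_solution` returns the multi-objective trade-off point.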

Fig. 5 Pareto front of the \(b_{11}(s)\) reduced model with ISE, IAE and ITAE objectives. a MODE. b MOPSO

Fig. 6 Pareto front of the \(b_{12}(s)\) reduced model with ISE, IAE and ITAE objectives. a MODE. b MOPSO

Fig. 7 Pareto front of the \(b_{21}(s)\) reduced model with ISE, IAE and ITAE objectives. a MODE. b MOPSO

Fig. 8 Pareto front of the \(b_{22}(s)\) reduced model with ISE, IAE and ITAE objectives. a MODE. b MOPSO

The parameters considered for PSO and DE are shown in Table 3. The reduced models obtained by DE- and PSO-based model reduction using single and multiple objectives are shown in Table 4, and the corresponding ISE, IAE and ITAE values are compared. It is observed that a DE or PSO method minimizing a single objective satisfies only that objective, whereas the multi-objective variants minimize all objectives. DE offered better results than PSO for both single and multiple objectives.

Table 3 Parameters used for PSO and DE Algorithms
Table 4 Comparison of DE and PSO model reduction based on single and multi-objectives
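The kind of DE loop parameterized in Table 3 can be sketched as follows. This is a generic DE/rand/1/bin implementation, not the paper's code; the mutation factor `F`, crossover rate `CR`, population size and the sphere test objective are illustrative values only:

```python
import numpy as np

def de_minimise(obj, bounds, pop_size=20, F=0.8, CR=0.9, gens=100, seed=0):
    """Minimal DE/rand/1/bin minimiser over box constraints `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))       # random initial population
    fit = np.array([obj(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: three distinct donors, none equal to i
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with at least one mutant gene
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # greedy one-to-one selection
            f_trial = obj(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

In the model reduction setting, `obj` would map candidate numerator coefficients to an error index such as ISE between the full and reduced step responses; the small number of control parameters visible here (`F`, `CR`, `pop_size`) is the basis of the robustness claim made for DE below.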

From the Pareto-optimal fronts in Figs. 5, 6, 7 and 8, reduced models are chosen from the non-dominated solutions based on single objectives (solutions at the extreme points) and multiple objectives (the compromise solution), and their corresponding ISE, IAE and ITAE values are tabulated in Table 5. The same observation made for Table 4 holds for Table 5: a MODE or MOPSO run minimizing any single objective satisfies only that objective. For example, 'minimum ISE' satisfies only ISE, while the other objectives (IAE and ITAE) are not minimized; methods based on multiple objectives satisfy all of them.

Table 5 Comparison of ISE, IAE and ITAE values using MODE and MOPSO

In the multi-objective approach based on non-dominated sorting, there exists a set of solutions that are superior to the others when all objectives are considered, yet inferior to some solutions when a single objective is considered. From Tables 4 and 5, it is observed that the ISE, IAE and ITAE values of the reduced models obtained by DE are lower than those obtained by PSO for both single and multiple objectives. The best solutions obtained in a single run using MODE are compared with those obtained using MOPSO for the three objectives. These observations are supported by Figs. 5, 6, 7 and 8, where the set of Pareto-optimal points found by MODE (each point representing a different combination of ISE, IAE and ITAE values) is larger than that found by MOPSO. Moreover, MODE is very easy to implement, with few control parameters compared to MOPSO. PSO, on the other hand, is more sensitive to parameter changes: when the problem changes, the parameters usually need to be retuned to sustain optimal performance. As a result, PSO must be executed several times to ensure good results, whereas one run of DE usually suffices. Even so, MOPSO outperforms many other evolutionary algorithms, with a computational time of less than 10 min, and works well in model reduction with good proximity to the optimum.
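The dominance relation underlying this sorting is simple to state in code. A sketch (minimisation assumed for every objective; an O(n²) filter, sufficient for the small fronts here):

```python
import numpy as np

def dominates(u, v):
    """True if objective vector u Pareto-dominates v: u is no worse in
    every objective and strictly better in at least one."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def non_dominated(F):
    """Indices of the non-dominated rows of F (n_solutions x n_objectives)."""
    F = np.asarray(F, dtype=float)
    keep = []
    for i, fi in enumerate(F):
        if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i):
            keep.append(i)
    return keep
```

Points kept by `non_dominated` are exactly the first Pareto front: no other candidate improves one of ISE, IAE or ITAE without worsening another.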

However, MOPSO suffers from slower convergence, taking more than 7 min, whereas MODE converges in less than 5 min. The results demonstrate that the MODE algorithm is well able to find the non-dominated solutions for the model reduction problem. The algorithms were implemented in MATLAB 7.10 on an Intel Core processor. The time responses of \(\delta \) for the multi-objective methods, compared with single-objective methods based on only the ISE, IAE or ITAE objective, are shown in Figs. 9 and 10 for the step-change conditions \(\Delta V_{ref}= 0.05\) p.u., \(\Delta T_m = 0\) p.u. and \(\Delta V_{ref}= 0\) p.u., \(\Delta T_m = 0.05\) p.u. respectively. The same comparisons for the \(V_{t}\) output response are shown in Figs. 11 and 12 for the same step changes.

Fig. 9 Torque angle response for \(\Delta V_{ref} = 0.05\) p.u. and \(\Delta T_{m}=0\) p.u. a MODE. b MOPSO

Fig. 10 Torque angle response for \(\Delta V_{ref} = 0\) p.u. and \(\Delta T_{m}= 0.05\) p.u. a MODE. b MOPSO

Fig. 11 Terminal voltage response for \(\Delta V_{ref} = 0.05\) p.u. and \(\Delta T_{m} = 0\) p.u. a MODE. b MOPSO

Fig. 12 Terminal voltage response for \(\Delta V_{ref}=0\) p.u. and \(\Delta T_{m} = 0.05\) p.u. a MODE. b MOPSO

It is clear from the simulation results in Figs. 9, 10, 11 and 12 that the reduced-order models obtained by the proposed MODE algorithm are adequate, because their output time responses coincide closely with those of the original system for the same input step change.

Conclusions

In this paper, a MODE method based on the non-dominated sorting approach is suggested for multi-objective order reduction. Initially, several existing classical model reduction methods are applied to a multivariable system and compared against the original system model for settling time, overshoot and undershoot, as well as for the performance indices ISE, IAE and ITAE. It is observed that the interlacing-property (IPCM) method offered better performance indices than the other classical methods. Classical model reduction approaches are purely mathematical and are not based on minimization of any objective; a method that gives a lower ISE value may not give a lower ITAE value. Further, no single classical method satisfies all objective functions.

The multi-objective model reduction approach is used to determine the numerator coefficients, while the denominator is reduced by the interlacing-property method. The objectives considered for model reduction are ISE, IAE and ITAE, which penalize the small and large errors persisting over time between the full-order and reduced-order models. A choice can then be made from the set of non-dominated solutions to obtain the desired solution. The adequacy of the lower-order models obtained by the proposed methods is judged by comparing their output time responses with those of the original model. It is observed that the MODE method is simple and robust and finds the optimum in almost every run. It has few parameters to set, and the same settings can be used for many different problems. It outperforms MOPSO with faster convergence and less computational time.