1 Introduction

Model approximation replaces the mathematical model of a high-order system by an approximant of lower order while preserving most of the characteristics of the original system. The lower-order approximant simplifies system analysis and controller design and saves simulation time. Many methods [1,2,3,4,5,6,7,8] have been proposed in the literature for approximating high-order discrete as well as continuous systems. Recently, some approximation techniques [9,10,11,12] have also appeared for interval systems.

The Padé approximation method has attracted the attention of many researchers [13,14,15,16,17] due to its simplicity. In Padé approximation, the first 2r time moments of the system are fully retained in its lower-order approximant. However, the Padé method sometimes produces an unstable approximant even when the high-order system is stable. To eliminate this limitation of the Padé method, various improvements such as the stability equation method [17], the Mihailov criterion [18], Hurwitz polynomial based approximation [19], Routh approximation [20, 21], etc., have been suggested in the literature.

Shamash [22] showed that, in addition to the time moments of the high-order system, some Markov parameters are required to achieve better time response approximation. In [14], Singh proposed a Luus–Jaakola algorithm based reduction technique in which both time moments and Markov parameters are considered for deriving the approximant. Soloklo and Farsangi [16] proposed a Routh–Padé approximation using the harmony search optimization algorithm, in which a multi-objective optimization approach is presented for deriving the approximant by minimizing the errors between the time moments and between the Markov parameters of the system and the approximant. However, this method [16] does not guarantee the matching of the steady states of the approximant and the system.

In this paper, an analytic hierarchy process (AHP) based technique is proposed to derive an approximant which ensures the matching of the steady states of the system and the approximant. The approach retains the first time moment of the system in the approximant to guarantee steady state matching. Additionally, the errors between some subsequent time moments and some Markov parameters are minimized. This multi-objective problem of minimizing the errors of the time moments and of the Markov parameters is converted into a single objective function by assigning weights using the AHP method [23,24,25]. This single objective is then minimized using a recently proposed optimization technique, namely teacher–learner-based-optimization (TLBO) [26, 27]. TLBO is chosen for minimizing the objective function because of its simplicity and its freedom from algorithm-specific parameters. To ensure the stability of the proposed approximant, constraints obtained from the Hurwitz criterion [28, 29] are imposed. The efficiency of the proposed technique is investigated and demonstrated on two test systems.

The organization of the paper is as follows: Sect. 2 describes the problem formulation, Sect. 3 discusses the TLBO algorithm, Sect. 4 provides simulation results for two test systems, and Sect. 5 concludes the work.

2 Problem formulation

Consider an \(n\hbox {th-order}\) continuous system, \(G_n (s)\), given by

$$\begin{aligned} G_n (s)=\frac{N(s)}{D(s)}=\frac{B_0 +B_1 s+\cdots +B_{n-1} s^{n-1}}{A_0 +A_1 s+\cdots +A_n s^{n}} \end{aligned}$$
(1)

where \(B_i \), for \(i=0,1,\ldots ,n-1\), represent coefficients of numerator, N(s), and \(A_i \), for \(i=0,1,\ldots ,n\), denote coefficients of denominator, D(s).

Suppose, an \(r\hbox {th-order}\) approximant, \(H_r (s)\), given by

$$\begin{aligned} H_r (s)=\frac{\bar{{N}}(s)}{\bar{{D}}(s)}=\frac{b_0 +b_1 s+\cdots +b_{r-1} s^{r-1}}{a_0 +a_1 s+\cdots +a_r s^{r}} \end{aligned}$$
(2)

is desired for system described by (1) such that \(n>r\).

The system expansions of (1) around \(s=0\) and \(s=\infty \) are given as

$$\begin{aligned}&G_n (s) = T_0 +T_1 s+\cdots +T_k s^{k} + \cdots \nonumber \\&~~~~~~~~~~~~~~~ (\hbox {expansion around }s=0) \end{aligned}$$
(3)
$$\begin{aligned}&G_n (s) = M_1 s^{-1}+M_2 s^{-2}+\cdots +M_k s^{-k} +\cdots \nonumber \\&~~~~~~~~~~~~~~~ (\hbox {expansion around }s=\infty ) \end{aligned}$$
(4)

where \(T_i \), for \(i=0,1,\ldots \), and \(M_i \), for \(i=1,2,\ldots \), are known as time moments and Markov parameters [13] of system, (1), respectively.

The expansions of approximant, (2), in terms of time moments and Markov parameters can be written as

$$\begin{aligned} H_r (s)= & {} t_0 +t_1 s+\cdots +t_k s^{k}+\cdots \end{aligned}$$
(5)
$$\begin{aligned} H_r (s)= & {} m_1 s^{-1}+m_2 s^{-2}+\cdots +m_k s^{-k}+\cdots \end{aligned}$$
(6)

where \(t_i \) for \(i=0,1,\ldots \), and \(m_i \) for \(i=1,2,\ldots \) are, respectively, the time moments and the Markov parameters of approximant, (2).
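Both expansions can be generated by recursive long division of the numerator by the denominator. A minimal Python sketch follows (the function names are illustrative; coefficient lists are in ascending powers of s, and the sixth-order system (17) of Sect. 4 is used as a check):

```python
def time_moments(num, den, k):
    # Expansion of N(s)/D(s) about s = 0, Eq. (3): returns [T0, ..., T_{k-1}].
    # num = [B0, B1, ...], den = [A0, A1, ...] in ascending powers of s.
    B = lambda i: num[i] if i < len(num) else 0.0
    T = []
    for j in range(k):
        acc = B(j) - sum(den[i] * T[j - i] for i in range(1, j + 1) if i < len(den))
        T.append(acc / den[0])
    return T

def markov_parameters(num, den, k):
    # Expansion about s = infinity, Eq. (4): returns [M1, ..., Mk].
    n = len(den) - 1
    B = lambda i: num[i] if 0 <= i < len(num) else 0.0
    A = lambda i: den[i] if 0 <= i < len(den) else 0.0
    M = []
    for j in range(1, k + 1):
        acc = B(n - j) - sum(A(n - i) * M[j - i - 1] for i in range(1, j))
        M.append(acc / den[n])
    return M

# Sixth-order test system (17) of Sect. 4
num = [1, 8, 20, 16, 3, 2]
den = [1, 18.3, 102.42, 209.46, 155.94, 33.6, 2]
print([round(v, 4) for v in time_moments(num, den, 3)])       # [1.0, -10.3, 106.07]
print([round(v, 4) for v in markov_parameters(num, den, 3)])  # [1.0, -15.3, 187.07]
```

The printed values reproduce the expansions (19) and (20) obtained later for test system 1.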

The parameters of approximant are determined by

  (i)

    matching the first time moments of system and approximant, such that

    $$\begin{aligned} T_0 =t_0 , \end{aligned}$$
    (7)

    and

  (ii)

    minimizing the errors between subsequent \((r-1)\) time moments and first r Markov parameters of the system and its approximant, such that

    $$\begin{aligned} J=\sum _{i=1}^{r-1} {w_i^{\mathrm{T}} \left( {1-\frac{t_i }{T_i }} \right) ^{2}} +\sum _{j=1}^r {w_j^{\mathrm{M}} \left( {1-\frac{m_j }{M_j }} \right) ^{2}} \end{aligned}$$
    (8)

    where \(w_i^{\mathrm{T}} \) for \(i=1,2,\ldots ,(r-1)\) and \(w_j^{\mathrm{M}} \) for \(j=1,2,\ldots ,r\) are the weights.

The multi-objective problem, (8), is in normalized form and contains a total of \(\left( {2r-1} \right) \) objectives. The first \(\left( {r-1} \right) \) objectives, normalized with the corresponding time moments of the system, minimize the errors between \(\left( {r-1} \right) \) time moments of the system and its approximant. The remaining r objectives, normalized with the respective Markov parameters of the system, minimize the errors between r Markov parameters of the system and its approximant. A total of \((2r-1)\) weights, \(w_i^{\mathrm{T}} \) for \(i=1,2,\ldots ,(r-1)\) and \(w_j^{\mathrm{M}} \) for \(j=1,2,\ldots ,r\), are included in the objective function to assign appropriate importance to the different objectives.

2.1 Determination of weights

The relative weights appearing in the objective function, (8), are determined using the analytic hierarchy process (AHP). The AHP technique is simple and is one of the most widely used analytic methods [23, 30, 31].

In AHP, each attribute is assigned a performance measure, and the relative weights are obtained from these measures. The performance measures take the values 1, 3, 5, 7 and 9. The value 1 is assigned when an attribute is compared with itself. The values 3, 5, 7 and 9 correspond to the verbal judgements ‘moderately important’, ‘strongly important’, ‘very strongly important’ and ‘absolutely important’, respectively. The values 2, 4, 6 and 8 may also be used as compromises between these values. With the help of the performance measures, a pair-wise square comparison matrix is formed as

$$\begin{aligned} C=\left[ {{\begin{array}{cccc} 1&{} {c_{1,2} }&{} \cdots &{} {c_{1,\left( {2r-1} \right) } } \\ {c_{2,1} }&{} 1&{} \cdots &{} {c_{2,\left( {2r-1} \right) } } \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {c_{\left( {2r-1} \right) ,1} }&{} {c_{\left( {2r-1} \right) ,2} }&{} \cdots &{} 1 \\ \end{array} }} \right] \end{aligned}$$
(9)

where \(c_{i,j} =1\) when \(i=j\) and \(c_{j,i} =c_{i,j}^{-1} \) when \(i\ne j\). The relative weights \(w_i , i=1,2,\ldots ,\left( {2r-1} \right) \) for all \(\left( {2r-1} \right) \) objectives \(J_i , i=1,2,\ldots ,\left( {2r-1} \right) \), are obtained by computing the normalized geometric mean of the \(i\hbox {th}\) row as

$$\begin{aligned} w_i ={GM_i }\Bigg /{\sum _{i=1}^{2r-1} {GM_i } } \end{aligned}$$
(10)

where \(GM_i \) is the geometric mean of \(i\hbox {th}\) row of (9) which is computed as

$$\begin{aligned} GM_i =\left[ {\prod _{j=1}^{\left( {2r-1} \right) } {c_{i,j} } } \right] ^{1/{\left( {2r-1} \right) }} \end{aligned}$$
(11)

The resultant objective function, J, can be written in the following form

$$\begin{aligned} J\left( {J_1 ,J_2 ,\ldots ,J_{\left( {2r-1} \right) } } \right)&=w_1 J_1 +w_2 J_2 +\cdots \nonumber \\ {}&\quad + w_{\left( {2r-1} \right) } J_{\left( {2r-1} \right) } \end{aligned}$$
(12)

where \(w_i , i=1,2,\ldots ,\left( {2r-1} \right) \), are the scalar weights obtained using (10) for the objectives \(J_i , i=1,2,\ldots , \left( {2r-1} \right) \), respectively.
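As a concrete illustration, Eqs. (10)–(11) take only a few lines of Python. The function name is illustrative, and the matrix C below is reconstructed from the verbal judgements stated for test system 1 in Sect. 4 (J1 = second time moment, J2 = first Markov parameter, J3 = second Markov parameter):

```python
import math

def ahp_weights(C):
    # C: pairwise comparison matrix with c[i][i] = 1 and c[j][i] = 1 / c[i][j]
    gm = [math.prod(row) ** (1.0 / len(row)) for row in C]  # Eq. (11)
    total = sum(gm)
    return [g / total for g in gm]                          # Eq. (10)

C = [[1.0, 1 / 9, 1 / 5],
     [9.0, 1.0, 3.0],
     [5.0, 1 / 3, 1.0]]
w = ahp_weights(C)
print([round(x, 2) for x in w])  # [0.06, 0.67, 0.27], matching Eq. (27)
```

For a perfectly consistent matrix the geometric-mean weights coincide with the principal-eigenvector weights of standard AHP; for mildly inconsistent judgements they remain a close approximation.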

2.2 Steady state matching

The constraint given by (7) guarantees the steady state matching of system and approximant since

$$\begin{aligned} T_0 =t_0 \quad \Rightarrow \quad \frac{B_0 }{A_0 }=\frac{b_0 }{a_0 } \end{aligned}$$
(13)

2.3 Stability of approximant

The stability of proposed approximant, (2), is ensured with the help of Hurwitz criterion [28, 29] such that

$$\begin{aligned} \bar{{D}}(s),\;\hbox {i.e. the denominator of approximant, is Hurwitz.} \end{aligned}$$
(14)

The Hurwitz criterion is used in this work to achieve a stable approximant because it is simple to apply.
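A direct way to test condition (14) numerically is to form the Hurwitz matrix of \(\bar{{D}}(s)\) and check that all leading principal minors are positive. A Python sketch, assuming the leading coefficient \(a_r \) is positive (function names are illustrative):

```python
def det(M):
    # Determinant by Laplace expansion (fine for the small matrices here)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_hurwitz(den):
    # den = [a0, a1, ..., ar] in ascending powers; the leading coefficient
    # a_r is assumed positive.  Builds the r x r Hurwitz matrix and checks
    # that every leading principal minor is positive.
    c = den[::-1]                                   # c[k] = coeff. of s^(r-k)
    r = len(c) - 1
    get = lambda k: c[k] if 0 <= k <= r else 0.0
    H = [[get(2 * j - i + 1) for j in range(r)] for i in range(r)]
    return all(det([row[:k] for row in H[:k]]) > 0 for k in range(1, r + 1))

print(is_hurwitz([43.77, 377.9, 10.72]))  # True: denominator of (31) is stable
print(is_hurwitz([1.0, -2.0, 1.0]))       # False: (s - 1)^2, unstable
```

For a second-order denominator the minors reduce to \(a_1 >0\) and \(a_0 a_1 >0\), i.e. exactly the constraints that appear later in (30).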

Hence, the problem of obtaining the approximant, (2), reduces to the minimization of the objective function, (12), subject to the constraints given by (13) and (14).

3 Teacher–learner-based-optimization (TLBO) algorithm

The TLBO algorithm was proposed by Rao et al. [26, 27] in 2011 and is inspired by the teaching–learning process of a class. The learners in a class constitute the population, and the subjects offered to the learners are analogous to the decision variables. The functioning of the TLBO algorithm is divided into two parts, namely the teacher phase and the learner phase.

Teacher phase Suppose, the performance of \(i\hbox {th}\) learner in \(j\hbox {th}\) subject is \(X_{i,j} \), with \(i=1,2,\ldots ,P\) and \(j=1,2,\ldots ,Q\) where P and Q denote, respectively, the population size and number of decision variables.

In this phase, class teacher tries to improve the knowledge of learners to his level as given by

$$\begin{aligned} X_{i,j}^{\mathrm {new}} =X_{i,j}^{\mathrm {old}} +r_i \left( {X_{\mathrm {teacher},j} -T_{\mathrm{f}} M_j } \right) \end{aligned}$$
(15)

where \(X_{i,j}^{\mathrm {new}} \) and \(X_{i,j}^{\mathrm {old}} \) are the new and old performances of the learner, \(r_i \) is a random number in the range \((0,1)\), \(X_{\mathrm {teacher},j} \) is the class teacher (i.e. the best learner of the class), \(T_{\mathrm{f}} \) is the teacher factor, and \(M_j \) is the mean performance of the class. The value of \(T_{\mathrm{f}} \) is either 1 or 2, selected randomly. \(X_{i,j}^{\mathrm {new}} \) is accepted if it has a better fitness value; otherwise \(X_{i,j}^{\mathrm {old}} \) is retained. This new solution becomes the input to the learner phase.

Learner phase Each learner of the class interacts with other randomly selected learner to improve his knowledge. The performance of \(p\hbox {th}\) learner is modified as

$$\begin{aligned}&X_{p,j}^{\mathrm {new}} = X_{p,j}^{\mathrm {old}} +r_j \left( {X_{p,j} -X_{q,j} } \right) \nonumber \\&\qquad \qquad \hbox {if } f\left( {X_{p,j} } \right) \le f\left( {X_{q,j} } \right) \nonumber \\&X_{p,j}^{\mathrm {new}} =X_{p,j}^{\mathrm {old}} +r_j \left( {X_{q,j} -X_{p,j} } \right) \nonumber \\&\qquad \qquad \hbox {if } f\left( {X_{q,j} } \right) <f\left( {X_{p,j} } \right) \end{aligned}$$
(16)

where \(p\ne q\); \(X_{p,j}^{\mathrm {new}} \) and \(X_{p,j}^{\mathrm {old}} \) are the new and old performances of the \(p\hbox {th}\) learner. \(f\left( {X_{p,j} } \right) \) and \(f\left( {X_{q,j} } \right) \) represent the fitness values of the \(p\hbox {th}\) and \(q\hbox {th}\) learners. \(X_{p,j}^{\mathrm {new}} \) is accepted if it provides a better fitness value.

The output of the learner phase again becomes the input to the teacher phase, and this iterative procedure continues until the termination criterion is met. Table 1 shows the pseudo-code of the TLBO algorithm.

Table 1 Pseudo-code of TLBO
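As Table 1 is not reproduced here, the following Python sketch shows how the two phases fit together for unconstrained minimization over box bounds (all names and parameter defaults are illustrative, not taken from the original):

```python
import random

def tlbo(f, bounds, pop_size=20, iters=100, seed=1):
    # Minimal TLBO sketch for minimizing f; bounds = [(lo, hi), ...] per variable
    rng = random.Random(seed)
    Q = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        # Teacher phase, Eq. (15): move each learner toward the best solution
        teacher = pop[min(range(pop_size), key=lambda i: fit[i])][:]
        mean = [sum(x[j] for x in pop) / pop_size for j in range(Q)]
        for i in range(pop_size):
            Tf = rng.choice((1, 2))        # teacher factor, randomly 1 or 2
            r = rng.random()
            cand = clip([pop[i][j] + r * (teacher[j] - Tf * mean[j])
                         for j in range(Q)])
            fc = f(cand)
            if fc < fit[i]:                # greedy acceptance
                pop[i], fit[i] = cand, fc
        # Learner phase, Eq. (16): interact with a randomly chosen peer
        for p in range(pop_size):
            q = rng.choice([k for k in range(pop_size) if k != p])
            sign = 1.0 if fit[p] <= fit[q] else -1.0
            r = rng.random()
            cand = clip([pop[p][j] + r * sign * (pop[p][j] - pop[q][j])
                         for j in range(Q)])
            fc = f(cand)
            if fc < fit[p]:
                pop[p], fit[p] = cand, fc
    b = min(range(pop_size), key=lambda i: fit[i])
    return pop[b], fit[b]

# Smoke test on a sphere function: the minimum is 0 at the origin
x_best, f_best = tlbo(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 3)
print(f_best < 0.1)  # True
```

Constraints such as those in (13)–(14) can be handled, for example, by adding a penalty term to the objective f when a candidate violates them.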

3.1 Steps of implementation of TLBO algorithm

The detailed steps of implementation of the TLBO algorithm for obtaining the approximant of a high-order system are as follows:

Suppose, an \(n\hbox {th-order}\) system is given and an \(r\hbox {th-order}\) approximant is to be obtained such that \(n>r\).

  • Step 1 Formulate an \(r\hbox {th-order}\) approximant as given by (2).

  • Step 2 Obtain the time moments and Markov parameters of the system and approximant as discussed in (3)–(6).

  • Step 3 Formulate the multi-objective function as given in (8).

  • Step 4 Form the comparison matrix, given in (9), and obtain the relative weights using (10).

  • Step 5 Obtain single-objective problem as discussed in (12).

  • Step 6 Obtain the constraints given in (13) and (14).

  • Step 7 Obtain the approximant by minimizing the problem obtained in step 5 using TLBO algorithm subject to the constraints obtained in step 6. The steps for minimization using TLBO algorithm are provided in Table 1.

4 Simulation results

To demonstrate the efficacy and systematic nature of the proposed method, two test systems are considered.

Test system 1 Consider the transfer function of a sixth order system [16]

$$\begin{aligned} G_6 (s)= \frac{1+8s+20s^{2}+16s^{3}+3s^{4}+2s^{5}}{1+18.3s+102.42s^{2}+209.46s^{3}+155.94s^{4}+33.6s^{5}+2s^{6}}\nonumber \\ \end{aligned}$$
(17)

Suppose, a second-order approximant, \(\left( {r=2} \right) \), given by

$$\begin{aligned} H_2 (s)=\frac{b_0 +b_1 s}{a_0 +a_1 s+a_2 s^{2}} \end{aligned}$$
(18)

is desired.

For the system (17), (3) and (4) turn out to be

$$\begin{aligned} G_6 (s)= & {} 1-10.3s+106.07s^{2}+\cdots \end{aligned}$$
(19)
$$\begin{aligned} G_6 (s)= & {} s^{-1}-15.3s^{-2}+187.07s^{-3}+\cdots \end{aligned}$$
(20)

Similarly, for approximant (18), (5) and (6) become

$$\begin{aligned} H_2 (s)= & {} \left( {\frac{b_0 }{a_0 }} \right) +\left( {\frac{b_1 a_0 -b_0 a_1 }{a_0^2 }} \right) s \nonumber \\&+\,\left( {\frac{b_0 a_1^2 -b_1 a_0 a_1 -b_0 a_0 a_2 }{a_0^3 }} \right) s^{2}+\cdots \end{aligned}$$
(21)
$$\begin{aligned} H_2 (s)= & {} \left( {\frac{b_1 }{a_2 }} \right) s^{-1}+\left( {\frac{b_0 a_2 -b_1 a_1 }{a_2^2 }} \right) s^{-2}\nonumber \\&+\,\left( {\frac{b_1 a_1^2 -b_0 a_1 a_2 -b_1 a_0 a_2 }{a_2^3 }} \right) s^{-3}+\cdots \end{aligned}$$
(22)

For this problem \(\left( {r=2} \right) \), the objective function, (8), takes the following form

$$\begin{aligned} J= & {} \sum _{i=1}^1 {w_i^{\mathrm{T}} \left( {1-\frac{t_i }{T_i }} \right) ^{2}} +\sum _{j=1}^2 {w_j^{\mathrm{M}} \left( {1-\frac{m_j }{M_j }} \right) ^{2}} \end{aligned}$$
(23)
$$\begin{aligned} J= & {} w_1^{\mathrm{T}} \left( {1-\frac{t_1 }{T_1 }} \right) ^{2}+w_1^{\mathrm{M}} \left( {1-\frac{m_1 }{M_1 }} \right) ^{2} \nonumber \\&+\,w_2^{\mathrm{M}} \left( {1-\frac{m_2 }{M_2 }} \right) ^{2} \end{aligned}$$
(24)

Putting the values of time moments and Markov parameters from (19)–(22), (24) becomes

$$\begin{aligned} J= & {} w_1^{\mathrm{T}} \left( {1+\frac{1}{10.3}\left( {\frac{b_1 a_0 -b_0 a_1 }{a_0^2 }} \right) } \right) ^{2} \nonumber \\&+\,w_1^{\mathrm{M}} \left( {1-\frac{b_1 }{a_2 }} \right) ^{2}+w_2^{\mathrm{M}} \left( {1+\left( {\frac{b_0 a_2 -b_1 a_1 }{15.3a_2^2 }} \right) } \right) ^{2}\nonumber \\ \end{aligned}$$
(25)

The Markov parameters and time moments are, respectively, responsible for transient response matching and steady state response matching [22]. The matching of the first time moment is necessary for retaining the steady state of the system; this is enforced using the constraint provided in (13). For improved transient response approximation, the matching of the first and second Markov parameters is considered ‘absolutely important’ and ‘strongly important’, respectively, relative to the second time moment. The matching of the first Markov parameter is treated as ‘moderately important’ relative to the second Markov parameter. Hence, the comparison matrix, (9), turns out to be

$$\begin{aligned} C=\left[ {{\begin{array}{ccc} 1&{} {1/9}&{} {1/5} \\ 9&{} 1&{} 3 \\ 5&{} {1/3}&{} 1 \\ \end{array} }} \right] \end{aligned}$$
(26)

The weights calculated using (10) are

$$\begin{aligned} w_1^T =0.06,\quad w_1^M =0.67,\quad w_2^M =0.27 \end{aligned}$$
(27)

Putting the values of weights given in (27), the objective function, (25), becomes

$$\begin{aligned} J= & {} 0.06\left( {1+\frac{1}{10.3}\left( {\frac{b_1 a_0 -b_0 a_1 }{a_0^2 }} \right) } \right) ^{2} \nonumber \\&+\,0.67\left( {1-\frac{b_1 }{a_2 }} \right) ^{2}+0.27\left( {1+\left( {\frac{b_0 a_2 -b_1 a_1 }{15.3a_2^2 }} \right) } \right) ^{2}\nonumber \\ \end{aligned}$$
(28)

The constraints given by (13) and (14) modify to (29) and (30), respectively.

$$\begin{aligned} b_0= & {} a_0 \end{aligned}$$
(29)
$$\begin{aligned} a_1> & {} 0,\quad a_0 a_1 >0 \end{aligned}$$
(30)

By minimizing (28) using TLBO algorithm, subject to (29) and (30), the second-order approximant, (18), obtained is

$$\begin{aligned} H_2 (s)=\frac{43.77+10.66s}{43.77+377.9s+10.72s^{2}} \end{aligned}$$
(31)

The second-order approximants proposed in [16] are

$$\begin{aligned} H_2^{\mathrm{S}} (s)= & {} \frac{0.81849+7.80016s}{0.81657+12.43314s+87.58712s^{2}} \end{aligned}$$
(32)
$$\begin{aligned} H_2^{\mathrm{P}} (s)= & {} \frac{1.09228+6.87815s}{0.99732+12.96860s+89.54625s^{2}} \end{aligned}$$
(33)

The step responses of approximants, given in (31)–(33), along with system, (17), are illustrated in Fig. 1. The time domain specifications and integral of squared errors (ISEs) for system and approximants are provided in Table 2.

Fig. 1
figure 1

The output responses of system and approximants

From Fig. 1, it is clearly observed that the output response of the proposed approximant, (31), is closer to that of the system, (17), than the other two approximants, (32) and (33). Table 2 shows that the steady state value of the proposed approximant is the same as that of the system, while the steady state values of the other two approximants deviate by 0.23% and 9.52%. The integral of squared error (ISE), computed over 100 minutes, is also minimum for the proposed approximant. The peak overshoot of the proposed approximant, (31), is the same as that of the system, while it deviates considerably for the other two approximants, (32) and (33). Also, the peak time, rise time and settling time of the proposed approximant, (31), are closer to those of the system than the respective values of the other two approximants. This confirms the superiority of the proposed approximant in terms of steady state and transient response matching.
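The step-response comparison can be reproduced with standard tools; a sketch assuming SciPy is available (scipy.signal expects coefficients in descending powers, so the lists below are the reversed coefficients of (17) and (31)):

```python
import numpy as np
from scipy import signal

# System (17) and proposed approximant (31), descending powers of s
G6 = signal.TransferFunction(
    [2, 3, 16, 20, 8, 1],
    [2, 33.6, 155.94, 209.46, 102.42, 18.3, 1])
H2 = signal.TransferFunction([10.66, 43.77], [10.72, 377.9, 43.77])

t = np.linspace(0.0, 100.0, 5001)
_, y6 = signal.step(G6, T=t)
_, y2 = signal.step(H2, T=t)

# Integral of squared error between the two step responses (rectangle rule)
ise = float(((y6 - y2) ** 2).sum() * (t[1] - t[0]))
# Both responses settle to the common DC gain B0/A0 = b0/a0 = 1
print(round(float(y6[-1]), 2), round(float(y2[-1]), 2))  # 1.0 1.0
```

The identical final values confirm the steady state matching enforced by constraint (29); the exact ISE figure depends on the simulation grid.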

Table 2 Comparison of approximants

Test system 2 Suppose, a ninth-order boiler system [16, 32] is described as

$$\begin{aligned} {\dot{x}}(t)= & {} Ax(t)+Bu \nonumber \\ y(t)= & {} Cx(t) \end{aligned}$$
(34)

where

$$\begin{aligned} A= & {} \left[ {{\begin{array}{ccccccccc} {-\,0.910}&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} {-\,4.449}&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} {-\,10.262}&{} {571.479}&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} {-\,571.479}&{} {-\,10.262}&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} {-\,10.987}&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0&{} {-\,15.214}&{} {11.622}&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0&{} {-\,11.622}&{} {-\,15.214}&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} {-89.874}&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} {-\,502.665} \\ \end{array} }} \right] \\ B= & {} \left[ {{\begin{array}{ccccccccc} {-\,4.336}&{} {-\,3.691}&{} {10.141}&{} {-\,1.612}&{} {16.629}&{} {-\,242.476}&{} {-\,14.261}&{} {13.672}&{} {82.187} \\ \end{array} }} \right] ^{T}\\ C= & {} \left[ {{\begin{array}{ccccccccc} {-\,0.422}&{} {-\,0.736}&{} {-\,0.00416}&{} {0.232}&{} {-\,0.816}&{} {-\,0.715}&{} {0.546}&{} {-\,0.235}&{} {-\,0.080} \\ \end{array} }} \right] \end{aligned}$$

A third-order approximant, \(\left( {r=3} \right) \), given as

$$\begin{aligned} H_3 (s)=\frac{b_0 +b_1 s+b_2 s^{2}}{a_0 +a_1 s+a_2 s^{2}+a_3 s^{3}} \end{aligned}$$
(35)

is desired for the system, (34).

For the system, (34), and approximant, (35), (3)–(6) become (36)–(39), respectively.

$$\begin{aligned} G_9 (s)= & {} 12.7275-2.7282s+2.4587s^{2}+\cdots \end{aligned}$$
(36)
$$\begin{aligned} G_9 (s)= & {} 146.3569s^{-1}+1530.6433s^{-2} \nonumber \\&-1559917.2779s^{-3}+\cdots \end{aligned}$$
(37)
$$\begin{aligned} H_3 (s)= & {} \left( {\frac{b_0 }{a_0 }} \right) +\left( {\frac{b_1 a_0 -b_0 a_1 }{a_0^2 }} \right) s \nonumber \\&+\,\left( {\frac{a_0^2 b_2 -a_0 a_2 b_0 -a_0 a_1 b_1 +a_1^2 b_0 }{a_0^3 }} \right) s^{2}+\cdots \nonumber \\\end{aligned}$$
(38)
$$\begin{aligned} H_3 (s)= & {} \left( {\frac{b_2 }{a_3 }} \right) s^{-1}+\left( {\frac{a_3 b_1 -a_2 b_2 }{a_3^2 }} \right) s^{-2}\nonumber \\&+\,\left( {\frac{a_3^2 b_0 -a_1 a_3 b_2 -a_2 a_3 b_1 +a_2^2 b_2 }{a_3^3 }} \right) s^{-3}+\cdots \nonumber \\ \end{aligned}$$
(39)
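The expansions (36) and (37) follow directly from the state-space data, since for \(G(s)=C\left( {sI-A} \right) ^{-1}B\) the time moments are \(T_k =-CA^{-\left( {k+1} \right) }B\) and the Markov parameters are \(M_k =CA^{k-1}B\). A numerical check, assuming NumPy is available:

```python
import numpy as np

# State matrices of the ninth-order boiler system (34)
A = np.zeros((9, 9))
A[0, 0], A[1, 1] = -0.910, -4.449
A[2, 2], A[2, 3] = -10.262, 571.479
A[3, 2], A[3, 3] = -571.479, -10.262
A[4, 4] = -10.987
A[5, 5], A[5, 6] = -15.214, 11.622
A[6, 5], A[6, 6] = -11.622, -15.214
A[7, 7], A[8, 8] = -89.874, -502.665
B = np.array([-4.336, -3.691, 10.141, -1.612, 16.629,
              -242.476, -14.261, 13.672, 82.187])
C = np.array([-0.422, -0.736, -0.00416, 0.232, -0.816,
              -0.715, 0.546, -0.235, -0.080])

Ainv = np.linalg.inv(A)
T0 = float(-C @ Ainv @ B)          # first time moment
T1 = float(-C @ Ainv @ Ainv @ B)   # second time moment
M1 = float(C @ B)                  # first Markov parameter
M2 = float(C @ A @ B)              # second Markov parameter
print(round(T0, 4), round(T1, 4))  # cf. 12.7275 and -2.7282 in Eq. (36)
print(round(M1, 4), round(M2, 4))  # cf. 146.3569 and 1530.6433 in Eq. (37)
```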

For this problem, the objective function, (8), takes the form

$$\begin{aligned} J= & {} \sum _{i=1}^2 {w_i^{\mathrm{T}} \left( {1-\frac{t_i }{T_i }} \right) ^{2}} +\sum _{j=1}^3 {w_j^{\mathrm{M}} \left( {1-\frac{m_j }{M_j }} \right) ^{2}} \end{aligned}$$
(40)
$$\begin{aligned} J= & {} w_1^{\mathrm{T}} \left( {1-\frac{t_1 }{T_1 }} \right) ^{2}+w_2^{\mathrm{T}} \left( {1-\frac{t_2 }{T_2 }} \right) ^{2} \nonumber \\&+\,w_1^{\mathrm{M}} \left( {1-\frac{m_1 }{M_1 }} \right) ^{2}+w_2^{\mathrm{M}} \left( {1-\frac{m_2 }{M_2 }} \right) ^{2}\nonumber \\&+\,w_3^{\mathrm{M}} \left( {1-\frac{m_3 }{M_3 }} \right) ^{2} \end{aligned}$$
(41)
Fig. 2
figure 2

The output responses of system and approximants

Table 3 Comparison of approximants

The comparison matrix, (9), for this test system is formed as

(42)

The weights obtained using (10) are

$$\begin{aligned} w_1^T= & {} 0.06, w_2^T =0.04, w_1^M =0.47,\nonumber \\ w_2^M= & {} 0.28, w_3^M =0.15 \end{aligned}$$
(43)

Putting the values of time moments and Markov parameters from (36)–(39) and the values of weights from (43), the objective function, (41), turns out to be

$$\begin{aligned} J&=0.06\left( {1+\frac{1}{2.7282}\left( {\frac{b_1 a_0 -b_0 a_1 }{a_0^2 }} \right) } \right) ^{2} \nonumber \\&\quad +\,0.04\left( {1-\frac{1}{2.4587}\left( {\frac{a_0^2 b_2 -a_0 a_2 b_0 -a_0 a_1 b_1 +a_1^2 b_0 }{a_0^3 }} \right) } \right) ^{2} \nonumber \\&\quad +\, 0.47\left( {1-\frac{1}{146.3569}\left( {\frac{b_2 }{a_3 }} \right) } \right) ^{2} \nonumber \\&\quad +\,0.28\left( {1-\frac{1}{1530.6433}\left( {\frac{a_3 b_1 -a_2 b_2 }{a_3^2 }} \right) } \right) ^{2} \nonumber \\&\quad +\, 0.15\left( {1+\frac{1}{1559917.2779}\left( {\frac{a_3^2 b_0 -a_1 a_3 b_2 -a_2 a_3 b_1 +a_2^2 b_2 }{a_3^3 }} \right) } \right) ^{2}\nonumber \\ \end{aligned}$$
(44)

The constraints given by (13) and (14) take the form as given in (45) and (46), respectively.

$$\begin{aligned} b_0= & {} 12.7275a_0 \end{aligned}$$
(45)
$$\begin{aligned} a_2> & {} 0,\quad a_1 a_2 -a_0 a_3>0,\quad a_0 \left( {a_1 a_2 -a_0 a_3 } \right) >0 \end{aligned}$$
(46)

The third-order approximant, (35), obtained by minimizing (44) using TLBO algorithm, subject to (45) and (46), is

$$\begin{aligned} H_3 (s)=\frac{\hbox {2363.5}+\hbox {2201.5}s+\hbox {73.8}s^{2}}{\hbox {185.7}+\hbox {213.5}s+\hbox {15.76}s^{2}+\hbox {0.49885}s^{3}} \end{aligned}$$
(47)

The third-order approximants derived in [16] are

$$\begin{aligned} H_3^{\mathrm{S}} (s)= & {} \frac{\hbox {4725.72521}+\hbox {4398.96963}s+\hbox {148.12856}s^{2}}{\hbox {371.99085}+\hbox {429.17178}s+\hbox {29.90996}s^{2}+s^{3}} \nonumber \\\end{aligned}$$
(48)
$$\begin{aligned} H_3^P (s)= & {} \frac{4701.85734+4312.31031s+145.36242s^{2}}{371.29177+420.38264s+23.239s^{2}+s^{3}}\nonumber \\ \end{aligned}$$
(49)

The step responses of approximants, (47)–(49), and system, (34), are shown in Fig. 2.

Table 3 provides the values of time domain specifications such as peak overshoot, rise time, peak time and steady state value, along with the ISEs, for the system and the different approximants.

Figure 2 shows that the output response of the proposed approximant, (47), matches the step response of the system better than the responses of the other two approximants, (48) and (49). Table 3 shows that the steady state value of (47) is the same as that of the system, (34), while the steady state values of (48) and (49) deviate by 0.16% and 0.47%, respectively. The ISE value, computed over 100 minutes, is minimum for (47) when compared to those of (48) and (49). Table 3 also shows that the rise time, peak time and settling time of (47) are closer to those of (34) than the respective values of (48) and (49). This confirms the dominance of the proposed approximant, (47), over the others.

5 Conclusion

An analytic hierarchy process (AHP) based technique is proposed for obtaining a stable approximant of a stable high-order system using the teacher–learner-based-optimization (TLBO) algorithm. The proposed method guarantees steady state matching between the approximant and the high-order system. To obtain the approximant, a multi-objective function, formulated in terms of the errors between the Markov parameters and between the time moments of the system and its approximant, is considered. This multi-objective optimization problem is then converted into a single objective function by assigning weights using the AHP method. To ensure the stability of the approximant, the Hurwitz criterion is used. The efficacy, simplicity and systematic nature of the proposed method are illustrated using two different test systems.