1 Introduction

Model reduction methods, which approximate a large-scale dynamical system by a lower-order system, have received considerable attention in recent years. For linear time-invariant (LTI) systems, many model reduction methods are available, such as the \(H_{2}\) optimal model reduction methods [1,2,3], the \(H_{\infty }\) optimal model reduction methods [4, 5] and the balanced truncation methods [6]. For large-scale systems, the Krylov subspace methods have proven well suited, since a prescribed number of moments or Markov parameters can be matched by the transfer function of the reduced system [7,8,9,10]. Moreover, if the original system is stable, the one-sided Krylov subspace methods generically lead to a stable reduced system [11, 12]. For more details on model reduction, one can refer to [13,14,15,16,17,18,19,20].

Second-order systems arise in many engineering applications, such as the modeling of electric circuits, structural systems, mechanical systems and microelectromechanical systems (MEMS). However, when these systems are large-scale, simulating them is computationally expensive. It is therefore desirable to replace the original system with a lower-order system that reduces the computational burden while retaining important properties of the original system. In [21,22,23,24], accurate and effective reduced systems were applied for steady state analysis, transient analysis and sensitivity analysis. A feasible approach to reducing a second-order system is to convert it into an equivalent first-order system, so that the existing methods for first-order systems can be applied [25,26,27,28,29]. Moreover, it is worth mentioning that some second-order systems can be represented by equivalent strictly dissipative systems [30], whose dissipativity is preserved by one-sided projection model reduction methods. However, the resulting reduced systems may not preserve the structure of the original second-order system. To address this issue, some researchers have studied structure-preserving model reduction for large-scale second-order systems. For instance, some reduced first-order systems were converted into equivalent second-order systems by similarity transformations [31, 32]. Taking the structure of second-order systems into consideration, the second-order Krylov subspace methods were proposed to obtain the reduced second-order system directly [33, 34]. In addition, [35] presented a factorization of the error system by Krylov subspace methods, in which the transfer function of the error system was written as the product of the corresponding transfer functions.
On the basis of the Cholesky decomposition and the logarithmic norm, the product was used to obtain the \(H_{2}\) and \(H_{\infty }\) error bounds [36, 37]. We also note that some researchers focused on the \(H_{\infty }\) control for linear discrete-time systems and continuous-time systems with polytopic uncertainties [38, 39].

Motivated by the importance of second-order systems and the efficiency of the Krylov subspace methods, we explore structure-preserving model reduction methods for second-order systems in this paper. First, the second-order system is converted into a strictly dissipative system and the \(H_{2}\) norm of the strictly dissipative system is studied. Then, based on the Krylov subspace and the second-order Krylov subspace, two structure-preserving model reduction methods are proposed to reduce the strictly dissipative system. It is verified that the resulting reduced first-order systems preserve the dissipativity. Further, exploiting the fact that the 1st Markov parameters of the strictly dissipative system and of its reduced first-order systems are 0, these reduced first-order systems can be converted into corresponding reduced second-order systems. Thus, the structure of the original second-order system is preserved, which is one of the main contributions of this paper. Moreover, based on the factorization of the error system, the \(H_{2}\) error bounds are derived by using the Kronecker product and the vectorization operator.

This paper is organized as follows. In Sect. 2, we introduce the strictly dissipative realization of the second-order system and derive the \(H_{2}\) norm. In Sect. 3, two structure-preserving model reduction methods for the second-order system are proposed. The \(H_{2}\) error bounds are discussed in Sect. 4 while two numerical examples are given in Sect. 5. Finally, some conclusions are drawn in Sect. 6.

2 The strictly dissipative realization and the \(H_{2}\) norm

In this section, we introduce the strictly dissipative realization [30] and the \(H_{2}\) norm, which are used to obtain the model reduction methods and the \(H_{2}\) error bounds.

We consider the following LTI second-order system:

$$\begin{aligned} \left\{ \begin{aligned}&E_{2}\frac{d^{2}z(t)}{dt^{2}}+E_{1}\frac{dz(t)}{dt}+E_{0}z(t)=B_{1}u(t),\\&y_{1}(t)=C_{1}z(t), \end{aligned}\right. \end{aligned}$$
(1)

where \(E_{i}\in \mathcal {R}^{n\times n}\) (\(i=0, 1, 2\)) are symmetric positive definite matrices, \(B_{1}\in \mathcal {R}^{n\times m}\) and \(C_{1}\in \mathcal {R}^{p\times n}\). The variables \(z(t)\in \mathcal {R}^{n}, u(t)\in \mathcal {R}^{m}\) and \(y_{1}(t)\in \mathcal {R}^{p}\) are the state, the input and the output of the system (1), respectively. Assume that \(z(0)=\dot{z}(0)=0\). Applying the Laplace transformation to the system (1), we can obtain the transfer function

$$\begin{aligned} G(s)=C_{1}\left( s^{2}E_{2}+sE_{1}+E_{0}\right) ^{-1}B_{1}. \end{aligned}$$
(2)

Let \(E=\text {diag}\{E_{0}, E_{2}\}\) and

$$\begin{aligned} A=\begin{bmatrix} 0&E_{0}\\ - E_{0}&-E_{1} \end{bmatrix},\quad B=\begin{bmatrix} 0\\ B_{1} \end{bmatrix},\quad C=\begin{bmatrix} C_{1}\quad 0 \end{bmatrix}. \end{aligned}$$

The system (1) can be converted into the equivalent first-order system

$$\begin{aligned} \left\{ \begin{aligned}&E \frac{dx(t)}{dt}=Ax(t)+Bu(t),\\&y(t)=Cx(t), \end{aligned}\right. \end{aligned}$$
(3)

where \(x(t)=[\;z^{T}(t)\; \frac{dz^{T}(t)}{dt}\;]^{T}\). The transfer function of the system (3) is given by \(H(s)=C(sE-A)^{-1}B\). One can easily verify that the 1st Markov parameter of H(s) equals 0.
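To make these relations concrete, the following sketch (a toy example with randomly generated symmetric positive definite matrices standing in for \(E_{0}, E_{1}, E_{2}\); all names are illustrative, not from the paper) builds the first-order realization (3) and checks numerically that \(G(s)=H(s)\) and that the 1st Markov parameter \(CE^{-1}B\) is 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 2, 2

def spd(k):
    # random symmetric positive definite matrix (toy data)
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

E0, E1, E2 = spd(n), spd(n), spd(n)
B1 = rng.standard_normal((n, m))
C1 = rng.standard_normal((p, n))

# first-order realization (3): E dx/dt = A x + B u, y = C x
E = np.block([[E0, np.zeros((n, n))], [np.zeros((n, n)), E2]])
A = np.block([[np.zeros((n, n)), E0], [-E0, -E1]])
B = np.vstack([np.zeros((n, m)), B1])
C = np.hstack([C1, np.zeros((p, n))])

s = 1.0 + 2.0j
G = C1 @ np.linalg.solve(s**2 * E2 + s * E1 + E0, B1)  # transfer function (2)
H = C @ np.linalg.solve(s * E - A, B)                  # H(s) = C (sE - A)^{-1} B
M1 = C @ np.linalg.solve(E, B)                         # 1st Markov parameter
```

Both checks hold in exact arithmetic: \(G(s)=H(s)\) because (3) is an equivalent realization of (1), and \(CE^{-1}B=0\) because of the zero blocks in B and C.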

Let the transformation matrix

$$\begin{aligned} T=\begin{bmatrix}I&\alpha I\\ \alpha E_{2}E_{0}^{-1}&I \end{bmatrix} \end{aligned}$$
(4)

be a nonsingular matrix, where I is an identity matrix and \(\alpha \) is a positive constant. Multiplying the state equation of the system (3) from the left by the matrix T yields the following system

$$\begin{aligned} \left\{ \begin{aligned}&\widetilde{E} \frac{d\widetilde{x}(t)}{dt}=\widetilde{A}\widetilde{x}(t)+\widetilde{B}u(t),\\&\widetilde{y}(t)=\widetilde{C}\widetilde{x}(t), \end{aligned}\right. \end{aligned}$$
(5)

where

$$\begin{aligned} \widetilde{E}=\begin{bmatrix}E_{0}&\alpha E_{2}\\ \alpha E_{2}&E_{2} \end{bmatrix},\quad \widetilde{A}=\begin{bmatrix}-\alpha E_{0}&E_{0}-\alpha E_{1}\\ - E_{0}&\alpha E_{2}- E_{1} \end{bmatrix}, \quad \widetilde{B}=\begin{bmatrix} \alpha B_{1}\\ B_{1} \end{bmatrix},\quad \widetilde{C}=C. \end{aligned}$$

The transfer function of the system (5) is written as \(\widetilde{H}(s)=\widetilde{C}(s\widetilde{E} -\widetilde{A})^{-1}\widetilde{B}\).

Let sym\((\widetilde{A})=(\widetilde{A}+\widetilde{A}^{T})/2\) and let \(\lambda _{\max }(\text {sym}(\widetilde{A}))\) denote the maximum eigenvalue of the matrix sym\((\widetilde{A})\). The system (5) is termed a strictly dissipative realization if \(\widetilde{E}\) is positive definite and \(\mu _{2}(\widetilde{A})=\lambda _{\max }(\text {sym}(\widetilde{A}))<0\) [30]. The reference [30] also shows that if \(\alpha \in (\;0,\; \alpha ^{*}\;)\), then the second-order system (1) can be transformed into the strictly dissipative system (5), where \(\alpha ^{*}=\lambda _{\min }(E_{1}(E_{2}+E_{1}E_{0}^{-1}E_{1}/4)^{-1})\) and \(\lambda _{\min }(M)\) denotes the minimum eigenvalue of the matrix M. Notice that the matrix T is determined by \(\alpha \). In the following, we always choose an appropriate \(\alpha \) so as to guarantee the strict dissipativity of the system (5).
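As a numerical illustration of this transformation (with toy random SPD data, not a system from the paper), the sketch below picks \(\alpha =\alpha ^{*}/2\), forms \(\widetilde{E}\) and \(\widetilde{A}\), and verifies the two strict-dissipativity conditions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def spd(k):
    # random symmetric positive definite matrix (toy data)
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

E0, E1, E2 = spd(n), spd(n), spd(n)

# alpha* = lambda_min( E1 (E2 + E1 E0^{-1} E1 / 4)^{-1} )
S = E2 + E1 @ np.linalg.solve(E0, E1) / 4
alpha_star = np.min(np.real(np.linalg.eigvals(E1 @ np.linalg.inv(S))))
alpha = alpha_star / 2            # any alpha in (0, alpha*)

Et = np.block([[E0, alpha * E2], [alpha * E2, E2]])
At = np.block([[-alpha * E0, E0 - alpha * E1],
               [-E0, alpha * E2 - E1]])

mu2 = np.max(np.linalg.eigvalsh((At + At.T) / 2))  # mu_2(At) = lambda_max(sym(At))
Et_min = np.min(np.linalg.eigvalsh(Et))            # > 0 iff Et positive definite
```

For \(\alpha \) in the stated interval one observes \(\mu _{2}(\widetilde{A})<0\) and \(\widetilde{E}\succ 0\), consistent with [30].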

Since the system (5) is strictly dissipative, one can easily verify that \(\widetilde{E}^{-1}\widetilde{A}\) is stable, i.e., all its eigenvalues have negative real parts. Since T is nonsingular, it holds that \(\widetilde{H}(s)=H(s)\). Let \(\widetilde{P}\) be the controllability Gramian and \(\widetilde{Q}\) be the observability Gramian of the system (5), which satisfy the Lyapunov equations, respectively,

$$\begin{aligned}&\widetilde{A}\widetilde{P}\widetilde{E}^{T}+\widetilde{E} \widetilde{P}\widetilde{A}^{T}+\widetilde{B}\widetilde{B}^{T}=0, \end{aligned}$$
(6)
$$\begin{aligned}&\widetilde{E}^{T}\widetilde{Q}\widetilde{A} +\widetilde{A}^{T}\widetilde{Q}\widetilde{E}+\widetilde{C}^{T}\widetilde{C}=0. \end{aligned}$$
(7)

The following theorem studies the \(H_{2}\) norm of the transfer function \(\widetilde{H}(s)\), which will be employed to generate the \(H_{2}\) error bounds.

Theorem 1

Given the strictly dissipative system (5) with the transfer function \(\widetilde{H}(s)\). Then, the \(H_{2}\) norm of \(\widetilde{H}(s)\) is computed by

$$\begin{aligned} \left\| \widetilde{H}(s)\right\| _{H_{2}}^{2}=\mathrm{tr}\left( \widetilde{C}\widetilde{P}\widetilde{C}^{T}\right) =\mathrm{tr}\left( \widetilde{B}^{T}\widetilde{Q}\widetilde{B}\right) , \end{aligned}$$

where \(\widetilde{P}\) and \(\widetilde{Q}\) satisfy (6) and (7), respectively.

Proof

Since \(\widetilde{E}\) is nonsingular, we can get

$$\begin{aligned} \widetilde{H}(s)=\widetilde{C}\left( sI-\widetilde{E}^{-1} \widetilde{A}\right) ^{-1}\widetilde{E}^{-1}\widetilde{B} =\widetilde{C}\widetilde{E}^{-1}\left( sI-\widetilde{A}\widetilde{E}^{-1}\right) ^{-1}\widetilde{B}, \end{aligned}$$
(8)

which corresponds to the following two system realizations

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{aligned} &{}\frac{d\breve{x}(t)}{dt}=\widetilde{E}^{-1} \widetilde{A}\breve{x}(t)+\widetilde{E}^{-1}\widetilde{B}u(t),\\ &{}\breve{y}(t)=\widetilde{C}\breve{x}(t), \end{aligned} \end{array}\right. } \end{aligned}$$
(9)

and

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{aligned} &{}\frac{d\check{x}(t)}{dt}=\widetilde{A}\widetilde{E}^{-1} \check{x}(t)+\widetilde{B}u(t),\\ &{}\check{y}(t)=\widetilde{C}\widetilde{E}^{-1}\check{x}(t). \end{aligned} \end{array}\right. } \end{aligned}$$
(10)

The transfer function of (9) is written as \(\breve{H}(s)=\widetilde{C}(sI-\widetilde{E}^{-1}\widetilde{A})^{-1}\widetilde{E}^{-1}\widetilde{B}\), and the transfer function of the system (10) is expressed by \(\check{H}(s)=\widetilde{C}\widetilde{E}^{-1}(sI-\widetilde{A} \widetilde{E}^{-1})^{-1}\widetilde{B}\). Let \(\breve{P}\) be the controllability Gramian of the system (9) and \(\check{Q}\) be the observability Gramian of the system (10), which satisfy the Lyapunov equations, respectively,

$$\begin{aligned}&\widetilde{E}^{-1}\widetilde{A}\breve{P}+\breve{P} \widetilde{A}^{T}\widetilde{E}^{-T}+\widetilde{E}^{-1} \widetilde{B}\widetilde{B}^{T}\widetilde{E}^{-T}=0, \end{aligned}$$
(11)
$$\begin{aligned}&\widetilde{E}^{-T}\widetilde{A}^{T}\check{Q}+\check{Q} \widetilde{A}\widetilde{E}^{-1}+\widetilde{E}^{-T}\widetilde{C}^{T} \widetilde{C}\widetilde{E}^{-1}=0. \end{aligned}$$
(12)

It is well known that \(\Vert \breve{H}(s)\Vert _{H_{2}}^{2}=\text {tr}(\widetilde{C}\breve{P}\widetilde{C}^{T})\) and \(\Vert \check{H}(s)\Vert _{H_{2}}^{2}=\text {tr}(\widetilde{B}^{T}\check{Q}\widetilde{B})\). Since \(\widetilde{E}^{-1}\widetilde{A}\) is stable, multiplying (6) by \(\widetilde{E}^{-1}\) from the left and by \(\widetilde{E}^{-T}\) from the right recovers (11), and we conclude that \(\breve{P}=\widetilde{P}\). Then, it holds that \(\Vert \breve{H}(s)\Vert _{H_{2}}^{2}=\text {tr}(\widetilde{C}\widetilde{P}\widetilde{C}^{T})\). Similarly, we have \(\Vert \check{H}(s)\Vert _{H_{2}}^{2}=\text {tr}(\widetilde{B}^{T}\widetilde{Q}\widetilde{B})\). Therefore, we get

$$\begin{aligned} \left\| \widetilde{H}(s)\right\| _{H_{2}}^{2}= \text {tr}\left( \widetilde{C}\widetilde{P}\widetilde{C}^{T}\right) = \text {tr}\left( \widetilde{B}^{T}\widetilde{Q}\widetilde{B}\right) . \end{aligned}$$
(13)

This completes the proof. \(\square \)
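Theorem 1 can be checked numerically. The sketch below (toy random data; SciPy's `solve_continuous_lyapunov` solves the standard-form equations (11) and (12), whose solutions coincide with \(\widetilde{P}\) and \(\widetilde{Q}\) as in the proof) compares the two trace formulas:

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(2)
n, m, p = 3, 2, 2

def spd(k):
    # random symmetric positive definite matrix (toy data)
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

E0, E1, E2 = spd(n), spd(n), spd(n)
B1 = rng.standard_normal((n, m))
C1 = rng.standard_normal((p, n))

# strictly dissipative realization (5) with alpha in (0, alpha*)
S = E2 + E1 @ np.linalg.solve(E0, E1) / 4
alpha = 0.5 * np.min(np.real(np.linalg.eigvals(E1 @ np.linalg.inv(S))))
Et = np.block([[E0, alpha * E2], [alpha * E2, E2]])
At = np.block([[-alpha * E0, E0 - alpha * E1], [-E0, alpha * E2 - E1]])
Bt = np.vstack([alpha * B1, B1])
Ct = np.hstack([C1, np.zeros((p, n))])

# controllability Gramian via (11): (Et^{-1}At) P + P (Et^{-1}At)^T + Gc Gc^T = 0
Fc = np.linalg.solve(Et, At)
Gc = np.linalg.solve(Et, Bt)
P = linalg.solve_continuous_lyapunov(Fc, -Gc @ Gc.T)

# observability Gramian via (12): (Et^{-T}At^T) Q + Q (Et^{-T}At^T)^T + Mo Mo^T = 0
Fo = np.linalg.solve(Et.T, At.T)
Mo = np.linalg.solve(Et.T, Ct.T)
Q = linalg.solve_continuous_lyapunov(Fo, -Mo @ Mo.T)

h2_P = np.trace(Ct @ P @ Ct.T)   # tr(C P C^T)
h2_Q = np.trace(Bt.T @ Q @ Bt)   # tr(B^T Q B)
```

The two values agree up to rounding, as Theorem 1 asserts.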

3 Structure-preserving model reduction methods for the second-order system

In this section, we reduce the strictly dissipative system (5) and obtain the reduced second-order system of the system (1). Based on the Krylov subspace and the second-order Krylov subspace, two structure-preserving model reduction methods are established.

Our goal is to find the transformation matrices \(W_{r}, V_{r}\in \mathcal {R}^{n\times r}\), such that the following reduced second-order system is obtained by applying the transformation \(z(t)\approx V_{r}\widehat{z}(t)\) to the system (1):

$$\begin{aligned} \left\{ \begin{aligned}&\widehat{E}_{2}\frac{d^{2}\widehat{z}(t)}{dt^{2}} +\widehat{E}_{1}\frac{d\widehat{z}(t)}{dt}+\widehat{E}_{0} \widehat{z}(t)=\widehat{B}_{1}u(t),\\&{\widehat{y}_{1}(t)}=\widehat{C}_{1}\widehat{z}(t), \end{aligned}\right. \end{aligned}$$
(14)

where \(\widehat{E}_{i}=W_{r}^{T}E_{i}V_{r}\) (\(i=0, 1, 2\)), \(\widehat{B}_{1}=W_{r}^{T}B_{1}\), \(\widehat{C}_{1}=C_{1}V_{r}\) and \(r \ll n\). Since the system (1) can be converted into the system (5), we can also generate the following reduced first-order system of the system (5) by using a pair of transformation matrices \(W, V\in \mathcal {R}^{2n\times 2r}\):

$$\begin{aligned} \left\{ \begin{aligned}&\widehat{E} \frac{d\widehat{x}(t)}{dt}=\widehat{A}\widehat{x}(t)+\widehat{B}u(t),\\&{\widehat{y}(t)}=\widehat{C}\widehat{x}(t), \end{aligned}\right. \end{aligned}$$
(15)

where \(\widehat{E}=W^{T}\widetilde{E}V, \widehat{A}=W^{T}\widetilde{A}V, \widehat{B}=W^{T}\widetilde{B}\) and \(\widehat{C}=\widetilde{C}V\). Further, the reduced system (14) is obtained by transforming the system (15) into the reduced second-order system. The transfer function of the system (15) is \(\widehat{H}(s)=\widehat{C}(s\widehat{E}-\widehat{A})^{-1}\widehat{B}\). Then, the transfer function of the error system between the systems (5) and (15) can be denoted by \(\widetilde{H}_{e}(s)=\widetilde{H}(s)-\widehat{H}(s)\).

3.1 The structure-preserving model reduction method based on the Krylov subspace

In the following, based on the Krylov subspace, we explore the model reduction method of the strictly dissipative system (5). Before this, we first introduce the Krylov subspace MOR method [9]. Choose the expansion point \(s_{0}\ne \infty \) such that \(\widetilde{A}-s_{0}\widetilde{E}\) is nonsingular. For the given matrix \(V\in \mathcal {R}^{2n\times 2rp}\), if the matrix \(W\in \mathcal {R}^{2n\times 2rp}\) satisfies

$$\begin{aligned} K_{2r}\left( \left( \widetilde{A}-s_{0}\widetilde{E}\right) ^{-T}\widetilde{E}^{T}; \left( \widetilde{A}-s_{0}\widetilde{E}\right) ^{-T}\widetilde{C}^{T}\right) \subseteq \text {colspan}\{W\}, \end{aligned}$$
(16)

or

$$\begin{aligned} K_{2r}\left( \widetilde{E}^{-T}\widetilde{A}^{T}; \widetilde{E}^{-T}\widetilde{C}^{T}\right) \subseteq \text {colspan}\{W\}, \end{aligned}$$
(17)

and \(\widehat{E}=W^{T}\widetilde{E}V\), \(\widehat{A}=W^{T}\widetilde{A}V\) are nonsingular, then \(\widehat{H}(s)\) matches the first 2r moments of \(\widetilde{H}(s)\) at \(s_{0}\) [for (16)] or the first 2r Markov parameters of \(\widetilde{H}(s)\) [for (17)]. Analogously, for the given matrix \(W\in \mathcal {R}^{2n\times 2rm}\), if \(V\in \mathcal {R}^{2n\times 2rm}\) satisfies

$$\begin{aligned} K_{2r}\left( \left( \widetilde{A}-s_{0}\widetilde{E}\right) ^{-1}\widetilde{E}; \left( \widetilde{A}-s_{0}\widetilde{E}\right) ^{-1}\widetilde{B}\right) \subseteq \text {colspan}\{V\}, \end{aligned}$$
(18)

or

$$\begin{aligned} K_{2r}\left( \widetilde{E}^{-1}\widetilde{A}; \widetilde{E}^{-1}\widetilde{B}\right) \subseteq \text {colspan}\{V\}, \end{aligned}$$
(19)

and \(\widehat{E}\) and \(\widehat{A}\) are nonsingular, then \(\widehat{H}(s)\) matches the first 2r moments of \(\widetilde{H}(s)\) at \(s_{0}\) [for (18)] or the first 2r Markov parameters of \(\widetilde{H}(s)\) [for (19)].
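The Markov-parameter matching property can be illustrated numerically. In the sketch below (toy random matrices stand in for \(\widetilde{E}, \widetilde{A}, \widetilde{B}, \widetilde{C}\); q plays the role of the subspace dimension 2r), a one-sided projection with \(W=V\) onto the Krylov subspace of (19) matches the first q Markov parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, p, q = 6, 1, 1, 3

def spd(k):
    # random symmetric positive definite matrix (toy data)
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

E = spd(n)                        # stands in for Et (nonsingular)
A = rng.standard_normal((n, n))   # stands in for At
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# V spans K_q(E^{-1}A; E^{-1}B) as in (19); one-sided projection W = V
F = np.linalg.solve(E, A)
G = np.linalg.solve(E, B)
K = np.hstack([np.linalg.matrix_power(F, i) @ G for i in range(q)])
V, _ = np.linalg.qr(K)
Er, Ar = V.T @ E @ V, V.T @ A @ V
Br, Cr = V.T @ B, C @ V

def markov(E_, A_, B_, C_, i):
    # i-th Markov parameter  C (E^{-1} A)^{i-1} E^{-1} B
    X = np.linalg.solve(E_, B_)
    for _ in range(i - 1):
        X = np.linalg.solve(E_, A_ @ X)
    return C_ @ X

matched = all(np.allclose(markov(E, A, B, C, i), markov(Er, Ar, Br, Cr, i))
              for i in range(1, q + 1))
```

The matching is exact in exact arithmetic, since each Krylov vector \((E^{-1}A)^{i}E^{-1}B\), \(i<q\), lies in \(\text {colspan}\{V\}\).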

Next, expanding the transfer function \(\widetilde{H}(s)\) at infinity yields

$$\begin{aligned} \widetilde{H}(s)=\sum _{i=1}^{\infty }\widetilde{M}_{i}s^{-i} =\sum _{i=1}^{\infty }\widetilde{C}\left( \widetilde{E}^{-1} \widetilde{A}\right) ^{i-1}\widetilde{E}^{-1}\widetilde{B}s^{-i}, \end{aligned}$$

where \(\widetilde{M}_{i}\) is the ith Markov parameter and

$$\begin{aligned} \widetilde{M}_{1}=\widetilde{C}\widetilde{E}^{-1}\widetilde{B} =C(TE)^{-1}TB=CE^{-1}B=\begin{bmatrix} C_{1}&0 \end{bmatrix}\begin{bmatrix} 0\\ E_{2}^{-1}B_{1} \end{bmatrix}=0. \end{aligned}$$
(20)

This means that H(s) and \(\widetilde{H}(s)\) have the same 1st Markov parameters.

Let \(W=V\), where V is constructed by (17) or (19), and use V to obtain the reduced system (15). Thereby, the reduced system (15) can be converted into the second-order system (14) [9, 25]. Concretely, since \(\widehat{E}\) is nonsingular, we multiply (15) from the left by the matrix \(\widehat{E}^{-1}\). Choose a suitable matrix S and construct the matrix \(C_{z}=[\;\widehat{C}^{T}\; S^{T}\;]^{T}\) such that \(S\widehat{E}^{-1}\widehat{B}=0\). Let \(\bar{z}(t)=C_{z}\hat{x}(t)\). Then, we have

$$\begin{aligned} \begin{bmatrix} \bar{z}^{T}(t)&\frac{d\bar{z}^{T}(t)}{dt} \end{bmatrix}^{T}= \begin{bmatrix} C_{z}^{T}&\left( C_{z}\widehat{E}^{-1}\widehat{A}\right) ^{T} \end{bmatrix}^{T}\hat{x}(t)=\widetilde{T}^{-1}\hat{x}(t), \end{aligned}$$

where \(\widetilde{T}=[\;C_{z}^{T}\; (C_{z}\widehat{E}^{-1}\widehat{A})^{T}\;]^{-T}\). The reduced system (15) can be rewritten as

$$\begin{aligned} \left\{ \begin{aligned}&\begin{bmatrix} I&0\\ 0&\overline{E}_{2}\\ \end{bmatrix} \begin{bmatrix} \frac{d\bar{z}(t)}{dt}\\ \\ \frac{d^{2}\bar{z}(t)}{dt^{2}} \end{bmatrix}= \begin{bmatrix} 0&I\\ -\overline{E}_{0}&-\overline{E}_{1} \end{bmatrix} \begin{bmatrix} \bar{z}(t)\\ \\ \frac{d\bar{z}(t)}{dt} \end{bmatrix}+ \begin{bmatrix} 0\\ \\ \overline{B}_{1} \end{bmatrix}u(t),\\&\bar{y}(t)= \begin{bmatrix} \overline{C}_{1}&0 \end{bmatrix} \begin{bmatrix} \bar{z}(t)\\ \\ \frac{d\bar{z}(t)}{dt} \end{bmatrix}, \end{aligned}\right. \end{aligned}$$
(21)

where \(\overline{E}_{2}=I, [\;-\overline{E}_{0}\; -\overline{E}_{1}\;]=C_{z}(\widehat{E}^{-1}\widehat{A})^{2}\widetilde{T}, \overline{B}_{1}=C_{z}\widehat{E}^{-1}\widehat{A}\widehat{E}^{-1}\widehat{B}\) and \(\overline{C}_{1}=[\;I_{p}\; 0\;]\). Notice that the systems (21) and (3) have the same structure. Therefore, the reduced second-order system shown as (14) is obtained based on the system (21). We should point out that the fact that the 1st Markov parameters of \(\widehat{H}(s)\) and \(\widetilde{H}(s)\) both equal 0 plays an important role in transforming the first-order system (15) into the second-order system (14).
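The conversion steps above can be sketched as follows. The toy reduced system below is built from a small second-order model so that, like (15), its 1st Markov parameter is 0; taking the rows of S from the left null space of \(\widehat{E}^{-1}\widehat{B}\) is one possible (assumed) way to satisfy \(S\widehat{E}^{-1}\widehat{B}=0\):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(3)
r, m, p = 3, 1, 1

def spd(k):
    # random symmetric positive definite matrix (toy data)
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

# a toy "reduced" first-order system (Eh, Ah, Bh, Ch) with zero 1st Markov
# parameter, built here from a small second-order model
e0, e1, e2 = spd(r), spd(r), spd(r)
b1 = rng.standard_normal((r, m))
c1 = rng.standard_normal((p, r))
Eh = np.block([[e0, np.zeros((r, r))], [np.zeros((r, r)), e2]])
Ah = np.block([[np.zeros((r, r)), e0], [-e0, -e1]])
Bh = np.vstack([np.zeros((r, m)), b1])
Ch = np.hstack([c1, np.zeros((p, r))])

EinvA = np.linalg.solve(Eh, Ah)
EinvB = np.linalg.solve(Eh, Bh)

# S Eh^{-1} Bh = 0: take r - p rows from the left null space of Eh^{-1} Bh
S = linalg.null_space(EinvB.T).T[:r - p]
Cz = np.vstack([Ch, S])                   # r x 2r
Tinv = np.vstack([Cz, Cz @ EinvA])        # = T~^{-1}, assumed nonsingular
T = np.linalg.inv(Tinv)

# coefficients of (21): E2bar = I, [-E0bar  -E1bar] = Cz (Eh^{-1}Ah)^2 T~
blk = Cz @ EinvA @ EinvA @ T
E0bar, E1bar = -blk[:, :r], -blk[:, r:]
B1bar = Cz @ EinvA @ EinvB
C1bar = np.hstack([np.eye(p), np.zeros((p, r - p))])

# the second-order form reproduces the first-order transfer function
s = 0.7 + 1.3j
H_first = Ch @ np.linalg.solve(s * Eh - Ah, Bh)
H_second = C1bar @ np.linalg.solve(s**2 * np.eye(r) + s * E1bar + E0bar, B1bar)
```

Since the change of variables is an exact state transformation, the two transfer function values coincide.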

The specific model reduction method is presented as below.

Algorithm 1 (presented as a figure)

3.2 The structure-preserving model reduction method based on the second-order Krylov subspace

In this subsection, the relationship between the Krylov subspace and the second-order Krylov subspace is discussed, and the second-order Krylov subspace is employed to establish the structure-preserving model reduction method. Additionally, the strict dissipativity of the reduced system is studied.

Let \(V_{r}\) be the orthogonal basis of the second-order Krylov subspace [9, 34], which is defined as

$$\begin{aligned} \text {colspan}\{V_{r}\}=K_{r}\left( -E_{2}^{-1}E_{0}, -E_{2}^{-1}E_{1}; E_{2}^{-1}B_{1}\right) . \end{aligned}$$
(22)

In the following, the relationship between the Krylov subspace and the second-order Krylov subspace is presented.

Theorem 2

Given the orthogonal matrix \(V_{r}\in \mathcal {R}^{n\times rm}\) generated by (22). Let \(V=\mathrm{diag}\{V_{r}, V_{r}\}\in \mathcal {R}^{2n\times 2rm}\). Then, it holds

$$\begin{aligned} K_{r}\left( E^{-1}A; E^{-1}B\right) \subseteq \mathrm{colspan}\{V\}. \end{aligned}$$

Proof

Let

$$\begin{aligned} \begin{aligned} A_{1}=-E_{2}^{-1}E_{1},&\quad A_{2}=-E_{2}^{-1}E_{0}, \quad P_{0}=E_{2}^{-1}B_{1},\quad P_{1}=A_{1}P_{0},\\ \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} P_{i}=A_{1}P_{i-1}+A_{2}P_{i-2}, \quad i=2, 3, \ldots , r-1. \end{aligned} \end{aligned}$$

It follows that \(K_{r}(-E_{2}^{-1}E_{0}, -E_{2}^{-1}E_{1};\) \( E_{2}^{-1}B_{1})=K_{r}(A_{2}, A_{1}; P_{0})\). We can get

$$\begin{aligned} E^{-1}A=\begin{bmatrix} 0&I\\ A_{2}&A_{1} \end{bmatrix},\quad E^{-1}B= \begin{bmatrix} 0\\ {E}^{-1}_{2}B_{1} \end{bmatrix}=\begin{bmatrix} 0\\ P_{0} \end{bmatrix}, \quad \left( E^{-1}A\right) ^{i}E^{-1}B =\begin{bmatrix} P_{i-1}\\ P_{i} \end{bmatrix}. \end{aligned}$$

It yields

$$\begin{aligned} \begin{bmatrix} 0&P_{0}&\cdots&P_{r-2}\\ P_{0}&P_{1}&\cdots&P_{r-1} \end{bmatrix}= \begin{bmatrix} E^{-1}B\quad E^{-1}AE^{-1}B\quad \cdots \quad \left( E^{-1}A\right) ^{r-1}E^{-1}B \end{bmatrix}. \end{aligned}$$
(23)

Applying the second-order Arnoldi procedure [40, 41] to the second-order Krylov subspace \(K_{r}(-E_{2}^{-1}E_{0}, -E_{2}^{-1}E_{1}; E_{2}^{-1}B_{1})\), we have

$$\begin{aligned} \begin{bmatrix} P_{0}\quad P_{1}\quad \cdots \quad P_{i} \end{bmatrix}=V_{i+1}S_{i+1}, \quad i=0, 1, \cdots , r-1, \end{aligned}$$
(24)

where \(S_{i+1}\) is a nonsingular matrix with compatible dimension. Then, it yields

$$\begin{aligned} \text {colspan}\left\{ E^{-1}B\quad E^{-1}AE^{-1}B\quad \cdots \quad \left( E^{-1}A\right) ^{r-1}E^{-1}B\right\} \subseteq \text {colspan}\{V\}, \end{aligned}$$
(25)

which implies that the conclusion of this theorem holds. This completes the proof of Theorem 2. \(\square \)
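Theorem 2 can be verified numerically. The sketch below (toy random data; QR orthonormalization stands in for the second-order Arnoldi procedure of [40, 41]) builds the sequence \(P_{i}\), the basis \(V_{r}\), and checks the containment \(K_{r}(E^{-1}A; E^{-1}B)\subseteq \mathrm{colspan}\{V\}\):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, r = 4, 1, 3

def spd(k):
    # random symmetric positive definite matrix (toy data)
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

E0, E1, E2 = spd(n), spd(n), spd(n)
B1 = rng.standard_normal((n, m))

# recurrence P0 = E2^{-1}B1, P1 = A1 P0, Pi = A1 P_{i-1} + A2 P_{i-2}
A1 = -np.linalg.solve(E2, E1)
A2 = -np.linalg.solve(E2, E0)
P = [np.linalg.solve(E2, B1)]
P.append(A1 @ P[0])
for i in range(2, r):
    P.append(A1 @ P[-1] + A2 @ P[-2])

Vr, _ = np.linalg.qr(np.hstack(P))        # orthonormal basis of (22)
V = np.block([[Vr, np.zeros_like(Vr)], [np.zeros_like(Vr), Vr]])

# Krylov subspace K_r(E^{-1}A; E^{-1}B) of the first-order system (3)
E = np.block([[E0, np.zeros((n, n))], [np.zeros((n, n)), E2]])
A = np.block([[np.zeros((n, n)), E0], [-E0, -E1]])
B = np.vstack([np.zeros((n, m)), B1])
F = np.linalg.solve(E, A)
G = np.linalg.solve(E, B)
K = np.hstack([np.linalg.matrix_power(F, i) @ G for i in range(r)])

# containment: projecting onto colspan{V} leaves the Krylov vectors unchanged
resid = K - V @ (V.T @ K)
```

The residual vanishes because, by (23), each Krylov block column stacks \([P_{i-1}^{T}\; P_{i}^{T}]^{T}\) with \(P_{i}\in \text {colspan}\{V_{r}\}\).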

Based on the above analysis, the matrices \(V_{r}\) and V are used to construct the reduced system (15). Notice that \(K_{r}(\widetilde{E}^{-1}\widetilde{A}; \widetilde{E}^{-1}\widetilde{B})=K_{r}(E^{-1}A; E^{-1}B)\). Then, we find that the 1st Markov parameter of \(\widehat{H}(s)\) is equal to that of \(\widetilde{H}(s)\), i.e., their 1st Markov parameters are equal to 0. Thus, we propose the following model reduction method.

Algorithm 2 (presented as a figure)

In Algorithm 2, the second-order Krylov subspace is directly utilized to construct the transformation matrix V. Thus, Algorithm 2 is more efficient than Algorithm 1. Moreover, the computation cost can be further reduced when the SOAR procedure [23, 41] is used in Algorithm 2.

In Algorithm 2, Steps 4–6 are implemented to transform the reduced system (15) into the second-order system. Taking the structure of the transformation matrix V into account, we can present another way to achieve it. Let

$$\begin{aligned} \widehat{T}=\left[ \begin{array}{cc} I &{} \alpha I \\ \alpha \widetilde{E}_{2}\widetilde{E}_{0}^{-1} &{} I \\ \end{array} \right] , \end{aligned}$$

where \(\widetilde{E}_{0}=V_{r}^{T}E_{0}V_{r}\) and \(\widetilde{E}_{2}=V_{r}^{T}E_{2}V_{r}\). Then, we have

$$\begin{aligned} \widehat{T}^{-1}=\left[ \begin{array}{lc} \left( I-\alpha ^{2}\widetilde{E}_{2}\widetilde{E}_{0}^{-1}\right) ^{-1} &{} -\alpha \left( I-\alpha ^{2}\widetilde{E}_{2}\widetilde{E}_{0}^{-1}\right) ^{-1}\\ -\alpha \left( I-\alpha ^{2}\widetilde{E}_{2}\widetilde{E}_{0}^{-1}\right) ^{-1} \widetilde{E}_{2}\widetilde{E}_{0}^{-1} &{} \left( I-\alpha ^{2}\widetilde{E}_{2}\widetilde{E}_{0}^{-1}\right) ^{-1} \\ \end{array} \right] , \end{aligned}$$
(26)

Let \(\widetilde{E}_{1}=V_{r}^{T}E_{1}V_{r}\), \(\widetilde{B}_{1}=V_{r}^{T}B_{1}\) and \(\widetilde{C}_{1}=C_{1}V_{r}\). The coefficient matrices of (15) can be given by

$$\begin{aligned} \begin{aligned} \widehat{E}=\left[ \begin{array}{cc} \widetilde{E}_{0} &{} \alpha \widetilde{E}_{2} \\ \alpha \widetilde{E}_{2} &{} \widetilde{E}_{2} \\ \end{array} \right] ,\; \widehat{A}=\left[ \begin{array}{cc} -\alpha \widetilde{E}_{0} &{} \widetilde{E}_{0}-\alpha \widetilde{E}_{1} \\ -\widetilde{E}_{0} &{} \alpha \widetilde{E}_{2}-\widetilde{E}_{1} \\ \end{array} \right] ,\; \widehat{B}=\left[ \begin{array}{c} \alpha \widetilde{B}_{1} \\ \widetilde{B}_{1} \\ \end{array} \right] ,\; \widehat{C}=\left[ \begin{array}{cc} \widetilde{C}_{1} &{} 0 \\ \end{array} \right] . \end{aligned} \end{aligned}$$

Multiplying the state equation of (15) from the left by \(\widehat{T}^{-1}\) yields

$$\begin{aligned} \widehat{T}^{-1}\widehat{E}=\left[ \begin{array}{cc} \widetilde{E}_{0} &{} \\ &{} \widetilde{E}_{2} \\ \end{array} \right] ,\; \widehat{T}^{-1}\widehat{A}=\left[ \begin{array}{cc} 0 &{} \widetilde{E}_{0} \\ -\widetilde{E}_{0} &{} -\widetilde{E}_{1} \\ \end{array} \right] ,\; \widehat{T}^{-1}\widehat{B}=\left[ \begin{array}{c} 0 \\ \widetilde{B}_{1} \\ \end{array} \right] , \end{aligned}$$
(27)

which corresponds to the following second-order system:

$$\begin{aligned} \left\{ \begin{aligned}&\widetilde{E}_{2}\frac{d^{2}\widetilde{z}(t)}{dt^{2}} +\widetilde{E}_{1}\frac{d\widetilde{z}(t)}{dt} +\widetilde{E}_{0}\widetilde{z}(t)=\widetilde{B}_{1}u(t),\\&\widetilde{y}_{1}(t)=\widetilde{C}_{1}\widetilde{z}(t). \end{aligned}\right. \end{aligned}$$
(28)

According to (26), \(\widehat{T}^{-1}\) is determined by the matrices \((I-\alpha ^{2}\widetilde{E}_{2}\widetilde{E}_{0}^{-1})^{-1} \widetilde{E}_{2}\widetilde{E}_{0}^{-1}\) and \((I-\alpha ^{2}\widetilde{E}_{2}\widetilde{E}_{0}^{-1})^{-1}\). It is worth mentioning that \(\widehat{T}^{-1}\) can be computed efficiently since \(r\ll n\).
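The effect of \(\widehat{T}^{-1}\) in (27) can be checked directly. In the sketch below (toy random data; \(V_{r}\) is an arbitrary orthonormal basis and \(\alpha \) a small constant, both assumptions of this illustration), multiplying \(\widehat{E}\) and \(\widehat{A}\) by \(\widehat{T}^{-1}\) recovers the block pattern of the second-order system (28):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 5, 2

def spd(k):
    # random symmetric positive definite matrix (toy data)
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

E0, E1, E2 = spd(n), spd(n), spd(n)
Vr, _ = np.linalg.qr(rng.standard_normal((n, r)))   # toy orthonormal basis
alpha = 0.1

E0r, E1r, E2r = Vr.T @ E0 @ Vr, Vr.T @ E1 @ Vr, Vr.T @ E2 @ Vr

# reduced first-order matrices of (15) for V = diag(Vr, Vr), W = V
Eh = np.block([[E0r, alpha * E2r], [alpha * E2r, E2r]])
Ah = np.block([[-alpha * E0r, E0r - alpha * E1r], [-E0r, alpha * E2r - E1r]])

That = np.block([[np.eye(r), alpha * np.eye(r)],
                 [alpha * E2r @ np.linalg.inv(E0r), np.eye(r)]])
Ti = np.linalg.inv(That)     # cheap: only r x r blocks are involved

TiE = Ti @ Eh   # should equal diag(E0r, E2r)
TiA = Ti @ Ah   # should equal [[0, E0r], [-E0r, -E1r]]
```

This mirrors (27): the transformed pair has exactly the block structure of the first-order form (3), from which (28) is read off.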

Since Algorithms 1 and 2 are one-sided model reduction methods with \(W=V\) of full column rank, \(V^{T}\widetilde{E}V\) inherits the positive definiteness of \(\widetilde{E}\) and \(\text {sym}(V^{T}\widetilde{A}V)=V^{T}\text {sym}(\widetilde{A})V\) inherits the negative definiteness of \(\text {sym}(\widetilde{A})\). Therefore, the reduced first-order systems resulting from Algorithms 1 and 2 are strictly dissipative.

4 The \(H_{2}\) error bounds

In this section, based on the factorization of the error system [35,36,37], we derive the \(H_{2}\) error bounds by means of the Kronecker product and the vectorization operator.

Let \(\mathcal {P}_{\widetilde{E}^{T}W}\) and \(\mathcal {P}_{W}\) denote two orthogonal projectors, which are given by

$$\begin{aligned} \mathcal {P}_{\widetilde{E}^{T}W}=\widetilde{E}^{T}W\left( W^{T} \widetilde{E}\widetilde{E}^{T}W\right) ^{-1}W^{T}\widetilde{E}, \quad \mathcal {P}_{W}=V\widehat{E}^{-1}W^{T}\widetilde{E}, \end{aligned}$$

where \(\widehat{E}=W^{T}\widetilde{E}V\). If W is generated by (16), the transfer function of the error system can be factorized as

$$\begin{aligned} \widetilde{H}_{e}(s)=\widetilde{H}(s)-\widehat{H}(s) =\underbrace{\left[ \widehat{C}\left( s\widehat{E}-\widehat{A}\right) ^{-1} \widehat{\mathcal {B}}+I\right] }_{\widehat{H}_{\widehat{\mathcal {B}}}(s)}\cdot \underbrace{\widetilde{C}_{\bot }\left( s\widetilde{E}-\widetilde{A}\right) ^{-1} \widetilde{B}}_{\widetilde{H}_{W, \bot }(s)}, \end{aligned}$$
(29)

where

$$\begin{aligned} \begin{aligned}&\widetilde{C}_{\bot }=\widetilde{C}(I-\mathcal {P}_{W}),\quad \widetilde{C}_{\bot , \widetilde{E}^{T}W} =\widetilde{C}\left( I-\mathcal {P}_{\widetilde{E}^{T}W}\right) ,\\&\widehat{\mathcal {B}}=W^{T}\widetilde{A}\widetilde{C}_{\bot , \widetilde{E}^{T}W}^{T}\left( \widetilde{C}_{\bot , \widetilde{E}^{T}W}\widetilde{C}_{\bot , \widetilde{E}^{T}W}^{T}\right) ^{-1}. \end{aligned} \end{aligned}$$
(30)

If W satisfies (17), the transfer function of the error system is given by

$$\begin{aligned} \widetilde{H}_{e}(s)=\widetilde{H}(s)-\widehat{H}(s) =\underbrace{\left[ \widehat{C}\left( s\widehat{E} -\widehat{A}\right) ^{-1} \widehat{\mathcal {B}}_{\infty }\right] }_{\widehat{H}_{\widehat{\mathcal {B}}_{\infty }}(s)} \cdot \underbrace{\widetilde{C}_{\bot , \infty }\left( s\widetilde{E}-\widetilde{A}\right) ^{-1}\widetilde{B}}_{\widetilde{H}_{W, \infty }(s)}, \end{aligned}$$
(31)

where

$$\begin{aligned} \begin{aligned} \widetilde{C}_{\bot , \widetilde{E}^{T}W, \infty }&=\widetilde{C}\left( \widetilde{E}^{-1}\widetilde{A}\right) ^{2r} \left( I-\mathcal {P}_{\widetilde{E}^{T}W}\right) , \quad \widetilde{C}_{\bot , \infty }=\widetilde{C}_{\bot , \widetilde{E}^{T}W, \infty }(I-\mathcal {P}_{W}),\\ \widehat{\mathcal {B}}_{\infty }&=W^{T}\widetilde{A}\widetilde{C}_{\bot , \widetilde{E}^{T}W, \infty }^{T}\left( \widetilde{C}_{\bot , \widetilde{E}^{T}W, \infty }\left( I-\mathcal {P}_{\widetilde{E}^{T}W}\right) ^{2}\widetilde{C}_{\bot , \widetilde{E}^{T}W, \infty }^{T}\right) ^{-1}. \end{aligned} \end{aligned}$$
(32)

In the following, the \(H_{2}\) error bounds between the system (5) and the reduced system (15) are derived. First, we reformulate the \(H_{2}\) norm of the transfer function \(\widetilde{H}(s)\).

Lemma 1

Given the strictly dissipative realization (5) with the transfer function \(\widetilde{H}(s)\). Assume that \(\widetilde{A}^{T}\otimes \widetilde{E}^{T}+\widetilde{E}^{T}\otimes \widetilde{A}^{T}\) is nonsingular. Then, the \(H_{2}\) norm of \(\widetilde{H}(s)\) is rewritten as

$$\begin{aligned} \left\| \widetilde{H}(s)\right\| _{H_{2}}^{2}= \mathrm {vec}(I)^{T}\left( \widetilde{B}^{T}\otimes \widetilde{B}^{T}\right) \left( -\widetilde{A}^{T}\otimes \widetilde{E}^{T}-\widetilde{E}^{T}\otimes \widetilde{A}^{T}\right) ^{-1} \left( \widetilde{C}^{T}\otimes \widetilde{C}^{T}\right) \mathrm {vec}(I), \end{aligned}$$
(33)

where \(\mathrm {vec}(\cdot )\) denotes the vectorization operator and \(\otimes \) denotes the Kronecker product.

Proof

To prove this lemma, we first note the following important properties:

$$\begin{aligned} \text {tr}\left( X^{T}Y\right) =\text {vec}(X)^{T}\text {vec}(Y),\quad \text {vec}(XYM)=\left( M^{T}\otimes X\right) \text {vec}(Y). \end{aligned}$$

Since \(\widetilde{Q}\) is the observability Gramian of the system (5), by vectorizing both sides of the Lyapunov equation (7), we get

$$\begin{aligned} \left( \widetilde{A}^{T}\otimes \widetilde{E}^{T}+\widetilde{E}^{T}\otimes \widetilde{A}^{T}\right) \text {vec}\left( \widetilde{Q}\right) +\text {vec}\left( \widetilde{C}^{T}\widetilde{C}\right) =0. \end{aligned}$$

According to the assumption that \(\widetilde{A}^{T}\otimes \widetilde{E}^{T}+\widetilde{E}^{T}\otimes \widetilde{A}^{T}\) is nonsingular, we obtain

$$\begin{aligned} \text {vec}\left( \widetilde{Q}\right) =\left( -\widetilde{A}^{T}\otimes \widetilde{E}^{T}-\widetilde{E}^{T}\otimes \widetilde{A}^{T}\right) ^{-1}\text {vec}\left( \widetilde{C}^{T}\widetilde{C}\right) . \end{aligned}$$

From Theorem 1, we have

$$\begin{aligned} \begin{aligned} \left\| \widetilde{H}(s)\right\| _{H_{2}}^{2}&= \text {tr}\left( \widetilde{B}^{T}\widetilde{Q}\widetilde{B}\right) \\&=\text {vec}\left( \widetilde{B}\right) ^{T}\left( \widetilde{B}^{T}\otimes I\right) \text {vec}\left( \widetilde{Q}\right) \\&=\text {vec}\left( \widetilde{B}\right) ^{T}\left( \widetilde{B}^{T}\otimes I\right) \left( -\widetilde{A}^{T}\otimes \widetilde{E}^{T}-\widetilde{E}^{T}\otimes \widetilde{A}^{T} \right) ^{-1}\text {vec}\left( \widetilde{C}^{T}\widetilde{C}\right) \\&=\left( \left( \widetilde{B}\otimes I\right) \text {vec} \left( \widetilde{B}\right) \right) ^{T}\left( -\widetilde{A}^{T}\otimes \widetilde{E}^{T}-\widetilde{E}^{T}\otimes \widetilde{A}^{T}\right) ^{-1} \text {vec}\left( \widetilde{C}^{T}\widetilde{C}\right) \\&= \text {vec}(I)^{T}\left( \widetilde{B}^{T}\otimes \widetilde{B}^{T}\right) \left( -\widetilde{A}^{T}\otimes \widetilde{E}^{T}-\widetilde{E}^{T}\otimes \widetilde{A}^{T}\right) ^{-1} \left( \widetilde{C}^{T}\otimes \widetilde{C}^{T}\right) \text {vec}(I). \end{aligned} \end{aligned}$$

This completes the proof. \(\square \)
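Lemma 1 can be illustrated numerically. The sketch below uses a toy strictly dissipative pair (\(\widetilde{E}\) SPD and \(\widetilde{A}\) symmetric negative definite, an assumption made only for brevity) and compares the Gramian-based value \(\mathrm{tr}(\widetilde{B}^{T}\widetilde{Q}\widetilde{B})\) with the Kronecker/vec formula (33):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(6)
n, m, p = 3, 2, 2

def spd(k):
    # random symmetric positive definite matrix (toy data)
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

Et, At = spd(n), -spd(n)           # Et > 0, sym(At) < 0: strictly dissipative
Bt = rng.standard_normal((n, m))
Ct = rng.standard_normal((p, n))

# observability Gramian via the standard-form equation (12)
Fo = np.linalg.solve(Et.T, At.T)
Mo = np.linalg.solve(Et.T, Ct.T)
Q = linalg.solve_continuous_lyapunov(Fo, -Mo @ Mo.T)
h2_gram = np.trace(Bt.T @ Q @ Bt)

# Kronecker/vec formula (33); vec(.) is column-major stacking
Kmat = -np.kron(At.T, Et.T) - np.kron(Et.T, At.T)
vec_Im = np.eye(m).reshape(-1, order="F")
vec_Ip = np.eye(p).reshape(-1, order="F")
h2_kron = vec_Im @ (np.kron(Bt.T, Bt.T)
                    @ np.linalg.solve(Kmat, np.kron(Ct.T, Ct.T) @ vec_Ip))
```

The agreement of the two values reflects the vectorized Lyapunov equation (7) used in the proof.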

Theorem 3

Let \(\widetilde{H}_{e}(s)\) be the transfer function of the error system related to the system (5) and the reduced system (15). Assume that \(\widetilde{A}^{T}\otimes \widetilde{E}^{T}+\widetilde{E}^{T}\otimes \widetilde{A}^{T}\) is nonsingular. If the orthogonal matrix W satisfies (16), then the \(H_{2}\) error bound is given by

$$\begin{aligned} \begin{aligned} \left\| \widetilde{H}_{e}(s)\right\| _{H_{2}}^{2}&\le \sup _{\omega }\left\| \widehat{H}_{\widehat{\mathcal {B}}}(i\omega ) \right\| _{F}^{2}\cdot \mathrm {vec}(I)^{T}\left( \widetilde{B}^{T}\otimes \widetilde{B}^{T}\right) \\&\quad \times \left( -\widetilde{A}^{T}\otimes \widetilde{E}^{T}-\widetilde{E}^{T}\otimes \widetilde{A}^{T}\right) ^{-1} \left( \widetilde{C}_{\bot }^{T}\otimes \widetilde{C}_{\bot }^{T}\right) \mathrm {vec}(I). \end{aligned} \end{aligned}$$
(34)

If the orthogonal matrix W satisfies (17), then the \(H_{2}\) error bound is given by

$$\begin{aligned} \begin{aligned} \left\| \widetilde{H}_{e}(s)\right\| _{H_{2}}^{2}&\le \sup _{\omega }\left\| \widehat{H}_{\widehat{\mathcal {B}}_{\infty }}(i\omega ) \right\| _{F}^{2}\cdot \mathrm {vec}(I)^{T}\left( \widetilde{B}^{T}\otimes \widetilde{B}^{T}\right) \\&\quad \times \left( -\widetilde{A}^{T}\otimes \widetilde{E}^{T}-\widetilde{E}^{T}\otimes \widetilde{A}^{T}\right) ^{-1}\left( \widetilde{C}_{\bot , \infty }^{T}\otimes \widetilde{C}_{\bot , \infty }^{T}\right) \mathrm {vec}(I), \end{aligned} \end{aligned}$$
(35)

where \(\widetilde{C}_{\bot }\), \(\widehat{\mathcal {B}}\), \(\widetilde{C}_{\bot , \infty }\) and \(\widehat{\mathcal {B}}_{\infty }\) are given by (30) and (32), respectively.

Proof

When the columns of the matrix W form an orthogonal basis of the subspace (16), the factorization of the transfer function \(\widetilde{H}_{e}(s)\) is given by (29). According to Theorem 1 and Lemma 1, we find that

$$\begin{aligned} \begin{aligned} \left\| \widetilde{H}_{e}(s)\right\| _{H_{2}}^{2}&=\frac{1}{2\pi } \int ^{+\infty }_{-\infty } \text {tr}\left( \widetilde{H}_{e}^{T}(-i\omega )\widetilde{H}_{e}(i\omega )\right) d\omega \\&=\frac{1}{2\pi } \int ^{+\infty }_{-\infty }\left\| \widehat{H}_{\widehat{\mathcal {B}}}(i\omega )\cdot \widetilde{H}_{W, \bot }(i\omega )\right\| _{F}^{2}d\omega \\&\le \sup _{\omega }\left\| \widehat{H}_{\widehat{\mathcal {B}}}(i\omega ) \right\| _{F}^{2}\cdot \frac{1}{2\pi } \int ^{+\infty }_{-\infty } \text {tr}\left( \widetilde{H}_{W, \bot }^{T}(-i\omega )\widetilde{H}_{W, \bot }(i\omega )\right) d\omega \\&= \sup _{\omega }\left\| \widehat{H}_{\widehat{\mathcal {B}}}(i\omega ) \right\| _{F}^{2}\cdot \text {vec}(I)^{T}\left( \widetilde{B}^{T}\otimes \widetilde{B}^{T}\right) \left( -\widetilde{A}^{T}\otimes \widetilde{E}^{T}-\widetilde{E}^{T}\otimes \widetilde{A}^{T}\right) ^{-1}\\&\quad \times \left( \widetilde{C}_{\bot }^{T}\otimes \widetilde{C}_{ \bot }^{T}\right) \text {vec}(I), \end{aligned} \end{aligned}$$

where the last step applies Lemma 1 to \(\widetilde{H}_{W, \bot }(s)\).

If W is instead an orthogonal basis of the subspace (17), then the factorization of the transfer function \(\widetilde{H}_{e}(s)\) is given by (31), and the corresponding \(H_{2}\) error bound (35) follows in the same way. This completes the proof of the theorem. \(\square \)
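The factor \(\sup _{\omega }\| \widehat{H}_{\widehat{\mathcal {B}}}(i\omega )\| _{F}\) in the bound is not available in closed form; a common expedient is to approximate the supremum by sampling the imaginary axis on a logarithmic frequency grid. The sketch below illustrates this for a generic transfer function \(\widehat{C}(s\widehat{E}-\widehat{A})^{-1}\widehat{B}\); the function name and grid parameters are our illustrative choices, not part of the paper's algorithms.

```python
import numpy as np

def sup_frobenius_on_iaxis(Ahat, Ehat, Bhat, Chat,
                           wmin=1e-2, wmax=1e8, npts=400):
    """Estimate sup_w || Chat (i*w*Ehat - Ahat)^{-1} Bhat ||_F by sampling
    logarithmically spaced frequencies; a practical surrogate for the sup."""
    best = 0.0
    for w in np.logspace(np.log10(wmin), np.log10(wmax), npts):
        H = Chat @ np.linalg.solve(1j * w * Ehat - Ahat, Bhat)
        best = max(best, np.linalg.norm(H, "fro"))
    return best

# Scalar sanity check: H(s) = 1/(s+1) satisfies |H(i*w)| -> 1 as w -> 0,
# so the sampled supremum should be just below 1.
val = sup_frobenius_on_iaxis(np.array([[-1.0]]), np.eye(1),
                             np.array([[1.0]]), np.array([[1.0]]))
```

A grid-based estimate only bounds the supremum from below, so in practice the grid should be refined until the estimate stabilizes.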

Remark 1

Theorem 3 gives two \(H_{2}\) error bounds resulting from the Krylov subspaces (16) and (17), respectively. By a similar analysis, two further \(H_{2}\) error bounds can be obtained from the Krylov subspaces (18) and (19). Let \(\widehat{H}_{\widehat{\mathcal {C}}}(s)=\widehat{\mathcal {C}}(s\widehat{E}-\widehat{A})^{-1}\widehat{B}+I\) and \(\widehat{H}_{\widehat{\mathcal {C}}_{\infty }}(s)=\widehat{\mathcal {C}}_{\infty }(s\widehat{E}-\widehat{A})^{-1}\widehat{B}\). Assume that \(\widetilde{A}^{T}\otimes \widetilde{E}^{T}+\widetilde{E}^{T}\otimes \widetilde{A}^{T}\) is nonsingular. If the orthogonal basis V satisfies (18), then the \(H_{2}\) error bound is given by

$$\begin{aligned} \begin{aligned} \left\| \widetilde{H}_{e}(s)\right\| _{H_{2}}^{2}&\le \sup _{\omega }\left\| \widehat{H}_{\widehat{\mathcal {C}}}(i\omega ) \right\| _{F}^{2}\cdot \mathrm {vec}(I)^{T}\left( \widetilde{B}_{\bot }^{T}\otimes \widetilde{B}_{\bot }^{T}\right) \\&\quad \times \left( -\widetilde{A}^{T}\otimes \widetilde{E}^{T}-\widetilde{E}^{T}\otimes \widetilde{A}^{T}\right) ^{-1} \left( \widetilde{C}^{T}\otimes \widetilde{C}^{T}\right) \mathrm {vec}(I). \end{aligned} \end{aligned}$$
(36)

If V satisfies (19), then the following \(H_{2}\) error bound is obtained

$$\begin{aligned} \begin{aligned} \left\| \widetilde{H}_{e}(s)\right\| _{H_{2}}^{2}&\le \sup _{\omega }\left\| \widehat{H}_{\widehat{\mathcal {C}}_{\infty }}(i\omega ) \right\| _{F}^{2}\cdot \mathrm {vec}(I)^{T}\left( \widetilde{B}_{\bot , \infty }^{T}\otimes \widetilde{B}_{\bot , \infty }^{T}\right) \\&\quad \times \left( -\widetilde{A}^{T}\otimes \widetilde{E}^{T}-\widetilde{E}^{T}\otimes \widetilde{A}^{T}\right) ^{-1} \left( \widetilde{C}^{T}\otimes \widetilde{C}^{T}\right) \mathrm {vec}(I), \end{aligned} \end{aligned}$$
(37)

where

$$\begin{aligned}&\mathcal {P}_{\widetilde{E}V}=\widetilde{E}V\left( V^{T}\widetilde{E}^{T} \widetilde{E}V\right) ^{-1}V^{T}\widetilde{E}^{T},\quad \mathcal {P}_{V}=\widetilde{E}V\widehat{E}^{-1}W^{T}, \\&\widetilde{B}_{\bot }=\left( I-\mathcal {P}_{V}\right) \widetilde{B},\quad {\widetilde{B}_{\bot , \widetilde{E}V} =\left( I-\mathcal {P}_{\widetilde{E}V}\right) \widetilde{B}},\\&\widehat{\mathcal {C}}=\left( \widetilde{B}_{\bot , \widetilde{E}V}^{T} \widetilde{B}_{\bot , \widetilde{E}V}\right) ^{-1}\widetilde{B}_{\bot , \widetilde{E}V}^{T} \widetilde{A}V,\\&\widetilde{B}_{\bot , \widetilde{E}V, \infty }= \left( I-\mathcal {P}_{\widetilde{E}V}\right) \left( \widetilde{A}\widetilde{E}^{-1}\right) ^{2r}\widetilde{B},\quad \widetilde{B}_{\bot , \infty }=\left( I-\mathcal {P}_{V}\right) \widetilde{B}_{\bot , \widetilde{E}V, \infty },\\&\widehat{\mathcal {C}}_{\infty }=\left( \widetilde{B}_{\bot , \widetilde{E}V, \infty }^{T} \left( I-\mathcal {P}_{\widetilde{E}V}\right) ^{2} \widetilde{B}_{\bot , \widetilde{E}V, \infty }\right) ^{-1} \widetilde{B}_{\bot , \widetilde{E}V, \infty }^{T}\widetilde{A}V. \end{aligned}$$
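The matrix \(\mathcal {P}_{\widetilde{E}V}\) defined above is the orthogonal projector onto \(\mathrm{range}(\widetilde{E}V)\), which can be verified directly from its formula. The small NumPy sketch below (with randomly generated stand-ins for \(\widetilde{E}\) and \(V\); not from the paper's algorithms) checks the defining projector properties.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 8, 3
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in for E tilde
V, _ = np.linalg.qr(rng.standard_normal((n, r)))    # orthonormal columns
EV = E @ V

# P = EV (EV^T EV)^{-1} EV^T : orthogonal projector onto range(EV)
P = EV @ np.linalg.solve(EV.T @ EV, EV.T)

assert np.allclose(P @ P, P)     # idempotent
assert np.allclose(P.T, P)       # symmetric, hence an orthogonal projector
assert np.allclose(P @ EV, EV)   # acts as the identity on range(EV)
```

Consequently, \(I-\mathcal {P}_{\widetilde{E}V}\) annihilates \(\mathrm{range}(\widetilde{E}V)\), which is what makes the residual terms \(\widetilde{B}_{\bot , \widetilde{E}V}\) above well defined.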

5 Numerical examples

In this section, two numerical examples are presented to illustrate the effectiveness of our methods. The orthogonal matrix \(V_{r}\) in Algorithm 2 is constructed by the SOAR procedure. All experiments are performed in MATLAB R2010b.
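The SOAR (second-order Arnoldi) procedure builds an orthonormal basis of a second-order Krylov subspace. A minimal sketch of the basic recurrence, without the deflation handling of the full algorithm, is given below; the matrices A, B and the start vector u are generic placeholders rather than the specific system matrices of Algorithm 2, and the code is our illustration, not the paper's implementation.

```python
import numpy as np

def soar(A, B, u, r):
    """Basic SOAR recurrence (no deflation): returns Q with orthonormal
    columns spanning the second-order Krylov subspace
    span{r0, ..., r_{r-1}},  r0 = u, r1 = A r0, rj = A r_{j-1} + B r_{j-2}."""
    n = len(u)
    Q = np.zeros((n, r))
    P = np.zeros((n, r))        # auxiliary sequence of the recurrence
    Q[:, 0] = u / np.linalg.norm(u)
    for j in range(r - 1):
        w = A @ Q[:, j] + B @ P[:, j]
        s = Q[:, j].copy()
        t = Q[:, : j + 1].T @ w          # Gram-Schmidt coefficients
        w -= Q[:, : j + 1] @ t
        s -= P[:, : j + 1] @ t           # same coefficients update the aux vectors
        h = np.linalg.norm(w)
        if h < 1e-12:                    # breakdown; deflation not handled here
            return Q[:, : j + 1]
        Q[:, j + 1] = w / h
        P[:, j + 1] = s / h
    return Q
```

For a generic pair (A, B), the columns of Q span the same subspace as the raw sequence \(r_{0},\dots ,r_{r-1}\), but are orthonormal and hence numerically much better conditioned.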

Both examples simulated in this section are second-order systems with proportional damping [24], whose coefficient matrices \(E_{i}\) \((i=0, 1, 2)\) are given by

$$\begin{aligned} E_{0}&=\frac{\beta }{\gamma }\left[ \begin{array}{ccccc} \frac{2-\sqrt{1-\beta \gamma }}{\sqrt{1-\beta \gamma }} & -1 & 0 & \cdots & 0 \\ -1 & \frac{2}{\sqrt{1-\beta \gamma }} & \ddots & & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & \frac{2}{\sqrt{1-\beta \gamma }} & -1 \\ 0 & \cdots & 0 & -1 & \frac{2-\sqrt{1-\beta \gamma }}{\sqrt{1-\beta \gamma }} \end{array} \right] _{n\times n},\\ E_{2}&=\left[ \begin{array}{ccccc} \frac{2+\sqrt{1-\beta \gamma }}{\sqrt{1-\beta \gamma }} & 1 & 0 & \cdots & 0 \\ 1 & \frac{2}{\sqrt{1-\beta \gamma }} & \ddots & & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & \frac{2}{\sqrt{1-\beta \gamma }} & 1 \\ 0 & \cdots & 0 & 1 & \frac{2+\sqrt{1-\beta \gamma }}{\sqrt{1-\beta \gamma }} \end{array} \right] _{n\times n}, \end{aligned}$$

and \(E_{1}=\beta E_{2}+\gamma E_{0}\), where \(\beta , \gamma \in (0, 1)\). The orders of the two systems, however, are different.
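For reference, the coefficient matrices above can be assembled directly. The following sketch is a plain transcription of the formulas into NumPy; the function name is ours.

```python
import numpy as np

def proportional_damping_matrices(n, beta, gamma):
    """Assemble the tridiagonal E0, E2 defined above and E1 = beta*E2 + gamma*E0."""
    s = np.sqrt(1.0 - beta * gamma)
    off = np.ones(n - 1)
    # E0: diagonal 2/s, off-diagonals -1, modified corner entries, scaled by beta/gamma
    E0 = np.diag(np.full(n, 2.0 / s)) - np.diag(off, 1) - np.diag(off, -1)
    E0[0, 0] = E0[-1, -1] = (2.0 - s) / s
    E0 *= beta / gamma
    # E2: diagonal 2/s, off-diagonals +1, modified corner entries
    E2 = np.diag(np.full(n, 2.0 / s)) + np.diag(off, 1) + np.diag(off, -1)
    E2[0, 0] = E2[-1, -1] = (2.0 + s) / s
    E1 = beta * E2 + gamma * E0
    return E0, E1, E2
```

For example, `proportional_damping_matrices(200, 0.45, 0.6)` assembles the matrices of order \(n=200\) used below. For large \(n\), a sparse representation (e.g. `scipy.sparse.diags`) would be preferable.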

Example 1

In the first example, the order of the second-order system is set to \(n=200\), where \( B_{1}=\left[ \begin{array}{ccccc} 1 & 0 & 0 & \cdots & 0 \end{array} \right] ^{T},\quad C_{1}=\frac{1}{n}\left[ \begin{array}{ccccc} 1 & 1 & 1 & \cdots & 1 \end{array} \right] .\)

Fig. 1

The Bode plot for the reduced order \(r=8\) in Example 1 (the left one is the magnitude plot, and the right one is the phase plot)

Set \(\beta =0.45\) and \(\gamma =0.6\). Then we get \(\alpha ^{*}=0.2782\) and let \(\alpha =\alpha ^{*}/2\). With these parameters, the original second-order system is converted into a strictly dissipative system. After that, the SP-Krylov algorithm and the SPS-Krylov algorithm are used to produce reduced systems of different orders. We simulate the strictly dissipative system and its reduced systems on the frequency interval \([10^{-2}, 10^{8}]\). Figures 1 and 2 show the corresponding simulation results for the reduced orders \(r=8\) and \(r=20\). Moreover, Table 1 lists the computational time for generating the reduced systems and the simulation time of the original system and the reduced systems.

Fig. 2

The Bode plot for the reduced order \(r=20\) in Example 1 (the left one is the magnitude plot, and the right one is the phase plot)

From Figs. 1 and 2, we observe that the proposed methods efficiently reduce the order of the original system and that the reduced systems approximate the original system well over the given frequency range. Table 1 implies that the reduced systems effectively save simulation time. Furthermore, compared with the SP-Krylov algorithm, the SPS-Krylov algorithm spends less time generating the reduced system. Therefore, the effectiveness of the two algorithms is illustrated in this example.

Table 1 Computational time (s) and simulation time (s) in Example 1

Example 2

In the second example, we consider a second-order system of order \(n=2000\), where \( B_{1}^{T}=C_{1}=\left[ \begin{array}{ccccc} 1 & 0 & 0 & \cdots & 0 \end{array} \right] .\) The parameters are chosen as \(\beta =0.9\) and \(\gamma =0.3\), which gives \(\alpha ^{*}=0.2485\); we again let \(\alpha =\alpha ^{*}/2\). Then the strictly dissipative system is obtained.

Fig. 3

The Bode plot for the reduced order \(r=7\) in Example 2 (the left one is the magnitude plot, and the right one is the phase plot)

The SP-Krylov algorithm and the SPS-Krylov algorithm are applied to reduce this strictly dissipative system. The corresponding simulation results for the reduced systems with \(r=7\) and 17 are presented in Figs. 3 and 4, respectively. Moreover, the computational time for generating the reduced systems and the simulation time of the original system and the reduced systems are listed in Table 2.

Fig. 4

The Bode plot for the reduced order \(r=17\) in Example 2 (the left one is the magnitude plot, and the right one is the phase plot)

From Figs. 3 and 4, it is observed that the proposed methods significantly reduce the order of the original system and that the reduced systems approximate it well. Table 2 shows that the SPS-Krylov algorithm spends less time obtaining the reduced system and that our methods greatly reduce the simulation time. In conclusion, these results indicate that the proposed methods are effective in this example.

Table 2 Computational time (s) and simulation time (s) in Example 2

6 Conclusions

In this paper, structure-preserving model reduction methods for second-order systems are presented. First, the second-order system is represented by a strictly dissipative realization. Next, the SP-Krylov algorithm and the SPS-Krylov algorithm are proposed to obtain reduced systems that preserve the structure of the original second-order system. Additionally, a factorization of the error system is used to derive the \(H_{2}\) error bounds relating the original system to its reduced system. Finally, the effectiveness of the two algorithms is illustrated by two numerical examples.