2.1 Introduction

As integrated circuits become more complex and additional physical effects have to be taken into account, efficient modeling and simulation tools for very large networks are in great demand. In this context, model order reduction is of crucial importance, especially if the simulation of large-scale systems has to be carried out quickly or repeated for different input signals. The general idea of model order reduction is to approximate a large-scale dynamical system by a reduced-order model that preserves essential properties like stability and passivity. It is also required that the approximation error be small.

Many different model reduction approaches have been developed in computational fluid dynamics, control design and electrical and mechanical engineering, see [3, 13, 61, 64] for books on this topic. One of the most widely used model reduction techniques in circuit simulation is the moment matching approximation based on Krylov subspace methods, see, e.g., [6, 25, 30]. Although these methods are efficient for very large sparse problems, the resulting reduced-order systems have only locally good approximation properties, and stability and passivity are not necessarily preserved. Passivity-preserving model reduction methods based on Krylov subspaces have also been developed for structured systems arising in circuit simulation [26, 27, 42, 48] and for general systems [4, 38, 66]. However, none of these methods provides computable global error bounds. Another drawback of Krylov subspace methods is the ad hoc choice of interpolation points, which strongly influences the approximation quality. An optimal point selection strategy based on tangential interpolation has been presented in [5, 32]; it provides an optimal \(\mathcal{H}_{2}\)-approximation.

In this paper, we present a survey on passivity-preserving balanced truncation model reduction methods for linear circuit equations developed in [54, 56, 72]. They involve computing the spectral projectors onto the left and right deflating subspaces corresponding to the finite and infinite eigenvalues of an underlying pencil and solving projected matrix equations. An important property of these methods is the existence of computable error bounds that allow an adaptive choice of the order of the approximate model.

Furthermore, we consider model reduction of nonlinear circuits based on decoupling linear and nonlinear subcircuits followed by reduction of the linear part [68]. This model reduction approach can also be combined with the POD-based reduction technique for semiconductor devices, see Chap. 1, and further with the hierarchical reduction methods studied in Chap. 4. The developed model reduction algorithms for circuit equations were implemented in the MATLAB toolbox PABTEC and tested on practical problems.

Notation

Throughout the paper, \(\mathbb{R}^{n,m}\) and \(\mathbb{C}^{n,m}\) denote the spaces of n × m real and complex matrices, respectively. The open left and right half-planes are denoted by \(\mathbb{C}_{-}\) and \(\mathbb{C}_{+}\), respectively, and \(i = \sqrt{-1}\). The matrices \(A^{T}\) and \(A^{*}\) denote, respectively, the transpose and the conjugate transpose of \(A \in \mathbb{C}^{n,m}\), and \(A^{-T} = (A^{-1})^{T}\). An identity matrix of order n is denoted by \(I_{n}\) or simply by I. We use rank(A), im(A) and ker(A) for the rank, the range and the kernel of A, respectively. A matrix \(A \in \mathbb{C}^{n,n}\) is positive definite (semidefinite) if \(v^{*}Av > 0\) (\(v^{*}Av \geq 0\)) for all non-zero \(v \in \mathbb{C}^{n}\). Note that positive (semi)definiteness of A does not require A to be Hermitian. For \(A,B \in \mathbb{C}^{n,n}\), we write A > B (A ≥ B) if A − B is positive definite (semidefinite). Furthermore, \(\mathrm{diag}(A_{1},\ldots,A_{s})\) denotes a block diagonal matrix with block entries \(A_{j}\), j = 1, …, s, on the diagonal.

2.2 Circuit Equations

In this section, we briefly describe the modeling of electrical circuits via differential-algebraic equations (DAEs) and discuss their properties. For more details on graph theory and network analysis, we refer to [1, 20, 22, 40, 75].

2.2.1 Graph-Theoretic Concepts

A general circuit can be modeled as a directed graph \(\mathfrak{G}\) whose vertices (nodes) \(\mathfrak{n}_{k}\) correspond to the nodes of the circuit and whose branches (edges) \(\mathfrak{b}_{k_{1},k_{2}} =\langle \mathfrak{n}_{k_{1}},\mathfrak{n}_{k_{2}}\rangle\) correspond to the circuit elements like capacitors, inductors and resistors. For the ordered pair \(\mathfrak{b}_{k_{1},k_{2}} =\langle \mathfrak{n}_{k_{1}},\mathfrak{n}_{k_{2}}\rangle\), we say that \(\mathfrak{b}_{k_{1},k_{2}}\) leaves \(\mathfrak{n}_{k_{1}}\) and enters \(\mathfrak{n}_{k_{2}}\). In this case, \(\mathfrak{b}_{k_{1},k_{2}}\) is called incident with \(\mathfrak{n}_{k_{1}}\) and \(\mathfrak{n}_{k_{2}}\). An alternating sequence \((\mathfrak{n}_{k_{1}},\mathfrak{b}_{k_{1},k_{2}},\mathfrak{n}_{k_{2}},\ldots,\mathfrak{n}_{k_{s-1}},\mathfrak{b}_{k_{s-1},k_{s}},\mathfrak{n}_{k_{s}})\) of vertices and branches in \(\mathfrak{G}\) is called a path connecting \(\mathfrak{n}_{k_{1}}\) and \(\mathfrak{n}_{k_{s}}\) if the branches \(\mathfrak{b}_{k_{j-1},k_{j}}\) are incident with the vertices \(\mathfrak{n}_{k_{j-1}}\) and \(\mathfrak{n}_{k_{j}}\) for 2 ≤ j ≤ s. A path is closed if \(\mathfrak{n}_{k_{1}} = \mathfrak{n}_{k_{s}}\). A closed path is called a loop if \(\mathfrak{n}_{k_{i}}\neq \mathfrak{n}_{k_{j}}\) for 1 ≤ i < j ≤ s except for \(\mathfrak{n}_{k_{1}}\) and \(\mathfrak{n}_{k_{s}}\). A graph \(\mathfrak{G}\) is called connected if for every two vertices there exists a path connecting them. A cutset is a minimal set of branches of a connected graph whose removal disconnects the graph. A subgraph of the graph \(\mathfrak{G}\) is called a tree if it contains all nodes of \(\mathfrak{G}\), is connected and does not contain loops.

A directed graph \(\mathfrak{G}\) with n v vertices, n b branches and n l loops can be described by an incidence matrix \(\mathbf{A}_{0} = [a_{pq}] \in \mathbb{R}^{n_{v},n_{b}}\) with

$$\displaystyle{ a_{pq} = \left \{\begin{array}{rl} 1& \text{if branch }q\text{ leaves vertex }p, \\ - 1& \text{if branch }q\text{ enters vertex }p, \\ 0& \text{otherwise}, \end{array} \right.\qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad }$$

or by a loop matrix \(\mathbf{B}_{0} = [b_{pq}] \in \mathbb{R}^{n_{l},n_{b}}\) with

$$\displaystyle{ b_{pq} = \left \{\begin{array}{rl} 1& \text{if branch }q\text{ belongs to loop }p\text{ and has the same orientation,} \\ - 1& \text{if branch }q\text{ belongs to loop }p\text{ and has the contrary orientation,} \\ 0& \text{otherwise}. \end{array} \right. }$$

For a connected graph, the matrices A 0 and B 0 satisfy the following relations

$$\displaystyle{ \mathrm{ker}(\mathbf{B}_{0}) =\mathrm{ im}(\mathbf{A}_{0}^{T}),\qquad \mathrm{rank}(\mathbf{A}_{ 0}) = n_{v} - 1,\qquad \mathrm{rank}(\mathbf{B}_{0}) = n_{b} - n_{v} + 1, }$$

see [22, p. 213]. Removing linearly dependent rows from A 0 and B 0, we obtain the full-rank matrices \(\mathbf{A} \in \mathbb{R}^{n_{v}-1,n_{b}}\) and \(\mathbf{B} \in \mathbb{R}^{n_{b}-n_{v}+1,n_{b}}\), which are called the reduced incidence matrix and the reduced loop matrix, respectively.
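For illustration, the incidence matrix and the rank relations above can be checked numerically. The small directed graph below is a hypothetical example (not from the chapter); an orthonormal kernel basis of A 0 plays the role of a loop matrix B 0 up to row scaling and orientation, since ker(B 0) = im(A 0 T) is equivalent to B 0 A 0 T = 0 with maximal row rank. NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical connected directed graph with 4 nodes and 5 branches;
# each branch is an ordered (leave, enter) pair of node indices.
branches = [(0, 1), (1, 2), (2, 0), (1, 3), (3, 2)]
n_v, n_b = 4, len(branches)

# Incidence matrix A0: a_pq = +1 if branch q leaves node p, -1 if it enters.
A0 = np.zeros((n_v, n_b))
for q, (leave, enter) in enumerate(branches):
    A0[leave, q] = 1.0
    A0[enter, q] = -1.0

# rank(A0) = n_v - 1 for a connected graph
assert np.linalg.matrix_rank(A0) == n_v - 1

# Rows of a loop matrix span ker(A0); an orthonormal basis serves as a
# (scaled) stand-in for B0 and has the stated rank n_b - n_v + 1.
B0 = null_space(A0).T
assert B0.shape[0] == n_b - n_v + 1
assert np.allclose(B0 @ A0.T, 0)   # ker(B0) = im(A0^T)
```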

2.2.2 Modified Nodal Analysis and Modified Loop Analysis

We now consider a general nonlinear RLC circuit that contains resistors, inductors, capacitors, \(n_{\mathscr{V}}\) independent voltage sources and \(n_{\mathscr{I} }\) independent current sources. Such circuits are completely described by graph-theoretic relations like Kirchhoff’s current and voltage laws together with the branch constitutive relations that characterize the circuit elements. Kirchhoff’s current law states that the sum of the currents along all branches leaving and entering a circuit node is zero. Kirchhoff’s voltage law states that the sum of the voltages along the branches of any loop is zero. Let

$$\displaystyle{ j = \left [\begin{array}{c} j_{\mathscr{R}}\\ j_{\mathscr{C}}\\ j_{\mathscr{L}}\\ j_{\mathscr{V}}\\ j_{\mathscr{I}}\end{array} \right ],\qquad v = \left [\begin{array}{c} v_{\mathscr{R}}\\ v_{\mathscr{C}}\\ v_{\mathscr{L}}\\ v_{\mathscr{V}}\\ v_{\mathscr{I}}\end{array} \right ] }$$

denote the vectors of branch currents and branch voltages, respectively, and let the reduced incidence and loop matrices

$$\displaystyle{ \mathbf{A} = [\,A_{\mathscr{R}},\;A_{\mathscr{C}},\;A_{\mathscr{L}},\;A_{\mathscr{V}},\;A_{\mathscr{I}}\,],\qquad \mathbf{B} = [\,B_{\mathscr{R}},\;B_{\mathscr{C}},\;B_{\mathscr{L}},\;B_{\mathscr{V}},\;B_{\mathscr{I}}\,] }$$

be partitioned accordingly, where the subscripts \(\mathscr{R}\), \(\mathscr{C}\), \(\mathscr{L}\), \(\mathscr{V}\) and \(\mathscr{I}\) stand for resistors, capacitors, inductors, voltage sources and current sources, respectively. Then Kirchhoff’s current and voltage laws can be expressed in the compact form as

$$\displaystyle{ \mathbf{A}\,j = 0,\qquad \qquad \mathbf{B}\,v = 0, }$$

respectively, or, equivalently,

$$\displaystyle{ \mathbf{B}^{T}\iota = j,\qquad \qquad \mathbf{A}^{T}\eta = v, }$$

where \(\iota \in \mathbb{R}^{n_{b}-n_{v}+1}\) and \(\eta \in \mathbb{R}^{n_{v}-1}\) denote the vectors of loop currents and node potentials.
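The compact form of Kirchhoff’s laws can be verified numerically: every voltage vector of the form v = A T η automatically satisfies B v = 0, and every current vector j = B T ι satisfies A j = 0. The reduced matrices below belong to a hypothetical 4-node, 5-branch graph (node 0 grounded) and are illustrative only.

```python
import numpy as np

# Reduced incidence matrix A (ground node row removed) and a matching
# reduced loop matrix B for a hypothetical graph; one can check B @ A.T = 0.
A = np.array([[-1.,  1., 0.,  1.,  0.],   # node 1
              [ 0., -1., 1.,  0., -1.],   # node 2
              [ 0.,  0., 0., -1.,  1.]])  # node 3
B = np.array([[1.,  1., 1., 0., 0.],      # loop: b0 -> b1 -> b2
              [0., -1., 0., 1., 1.]])     # loop: b3 -> b4 -> (b1 reversed)

rng = np.random.default_rng(0)
eta = rng.standard_normal(3)    # arbitrary node potentials
iota = rng.standard_normal(2)   # arbitrary loop currents

v = A.T @ eta    # branch voltages from node potentials
j = B.T @ iota   # branch currents from loop currents

assert np.allclose(B @ v, 0)    # Kirchhoff's voltage law: B v = 0
assert np.allclose(A @ j, 0)    # Kirchhoff's current law: A j = 0
```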

The branch constitutive relations for nonlinear capacitors, inductors and resistors are given by

$$\displaystyle{ j_{\mathscr{C}} = \frac{\mathrm{d}} {\mathrm{d}t}\,q(v_{\mathscr{C}}),\qquad v_{\mathscr{L}} = \frac{\mathrm{d}} {\mathrm{d}t}\,\phi (j_{\mathscr{L}}),\qquad j_{\mathscr{R}} = g(v_{\mathscr{R}}), }$$
(2.1)

where the functions ϕ, q and g describe electromagnetic fluxes in the inductors, capacitor charges and resistor voltage-current characteristics, respectively. For current-controlled resistors, we have also the relation \(v_{\mathscr{R}} = r(j_{\mathscr{R}})\), where r is the resistor current-voltage characteristic function. We assume that

  1. (A1)

    the functions ϕ, q and g are continuously differentiable and their Jacobians

    $$\displaystyle{ \mathcal{L}(j_{\mathscr{L}}) = \frac{\mathrm{d}\phi (j_{\mathscr{L}})} {\mathrm{d}j_{\mathscr{L}}},\qquad \mathcal{C}(v_{\mathscr{C}}) = \frac{\mathrm{d}q(v_{\mathscr{C}})} {\mathrm{d}v_{\mathscr{C}}},\qquad \mathcal{G}(v_{\mathscr{R}}) = \frac{\mathrm{d}g(v_{\mathscr{R}})} {\mathrm{d}v_{\mathscr{R}}} }$$

    are positive definite for all \(j_{\mathscr{L}}\), \(v_{\mathscr{C}}\) and \(v_{\mathscr{R}}\), respectively.

This assumption guarantees that inductors, capacitors and resistors are locally passive, see [19] for details.

Using Kirchhoff’s laws and the branch constitutive relations, the dynamical behaviour of a nonlinear circuit can be described using modified nodal analysis (MNA) [37] by the following system of DAEs

$$\displaystyle{ \begin{array}{rcl} \mathcal{E}(x) \frac{\mathrm{d}} {\mathrm{d}t}x & = & \mathcal{A}\,x + f(x) + \mathcal{B}\,u, \\ y & = & \mathcal{B}^{T}x,\end{array} }$$
(2.2)

where \(x = (\eta ^{T},\,j_{\mathscr{L}}^{T},\,j_{\mathscr{V}}^{T})^{T}\),

$$\displaystyle{ \mathcal{E}(x) = \left [\begin{array}{ccc} A_{\mathscr{C}}\,\mathcal{C}(A_{\mathscr{C}}^{T}\eta )A_{\mathscr{C}}^{T}& 0 & 0\\ 0 &\mathcal{L}(j_{\mathscr{L}})&0\\ 0 & 0 &0 \end{array} \right ],\quad \mathcal{A} = \left [\begin{array}{ccc} 0 & -A_{\mathscr{L}}& -A_{\mathscr{V}}\\ A_{\mathscr{L}}^{T}& 0 & 0\\ A_{\mathscr{V}}^{T}& 0 & 0 \end{array} \right ],\quad \mathcal{B} = \left [\begin{array}{cr} -A_{\mathscr{I}}& 0\\ 0 &0\\ 0 & -I \end{array} \right ],\quad f(x) = \left [\begin{array}{c} -A_{\mathscr{R}}\,g(A_{\mathscr{R}}^{T}\eta )\\ 0\\ 0 \end{array} \right ], }$$
(2.3)

and the input u and the output y have the form

$$\displaystyle{ u = \left [\begin{array}{c} j_{\mathscr{I}}\\ v_{\mathscr{V}}\end{array} \right ],\qquad y = \left [\begin{array}{c} -v_{\mathscr{I}}\\ -j_{\mathscr{V}}\end{array} \right ], }$$
(2.4)

respectively. Another approach for modeling electrical circuits is based on modified loop analysis (MLA), see [79]. In this case, the circuit equations take the form (2.2) with

$$\displaystyle{ \mathcal{E}(x) = \left [\begin{array}{ccc} B_{\mathscr{L}}\,\mathcal{L}(B_{\mathscr{L}}^{T}\iota )B_{\mathscr{L}}^{T}& 0 & 0\\ 0 &\mathcal{C}(v_{\mathscr{C}})&0\\ 0 & 0 &0 \end{array} \right ],\quad \mathcal{A} = \left [\begin{array}{ccc} 0 & -B_{\mathscr{C}}& -B_{\mathscr{I}}\\ B_{\mathscr{C}}^{T}& 0 & 0\\ B_{\mathscr{I}}^{T}& 0 & 0 \end{array} \right ],\quad \mathcal{B} = \left [\begin{array}{rc} 0& -B_{\mathscr{V}}\\ 0 & 0\\ -I & 0 \end{array} \right ],\quad f(x) = \left [\begin{array}{c} -B_{\mathscr{R}}\,r(B_{\mathscr{R}}^{T}\iota )\\ 0\\ 0 \end{array} \right ], }$$
(2.5)

where now \(x = (\iota ^{T},\,v_{\mathscr{C}}^{T},\,v_{\mathscr{I}}^{T})^{T}\),

and the input and the output are as in (2.4).

We assume that the circuit is well-posed in the sense that

  1. (A2)

    the circuit does not contain cutsets consisting of current sources (I-cutsets),

  2. (A3)

    the circuit does not contain loops consisting of voltage sources (V-loops).

These assumptions avoid open-circuit current sources and short-circuit voltage sources, respectively. Assumption (A2) is equivalent to

$$\displaystyle{ \mathrm{rank}([\,A_{\mathscr{R}},\;A_{\mathscr{C}},\;A_{\mathscr{L}},\;A_{\mathscr{V}}\,]) = n_{v} - 1, }$$

which is, on the other hand, equivalent to \(\mathrm{rank}(B_{\mathscr{I} } ) = n_{\mathscr{I} }\). In terms of rank conditions, (A3) means that \(\mathrm{rank}(A_{\mathscr{V}}) = n_{\mathscr{V}}\) or, equivalently, \(\mathrm{rank}([\,B_{\mathscr{R}},\;B_{\mathscr{C}},\;B_{\mathscr{L}},\;B_{\mathscr{I}}\,]) = n_{b} - n_{v} + 1\).

The index concept plays an important role in the analysis of DAEs. To characterize different analytical and numerical properties of DAE systems, several index concepts have been introduced in the literature, e.g., [17, 29, 33, 43]. For example, the differentiation index is roughly defined as the minimum number of times that all or part of a DAE system must be differentiated with respect to t in order to determine the derivative of x as a continuous function of t and x. In the sequel, we will use the shorter term “index” instead of “differentiation index”. The following proposition characterizes the index of the MNA system (2.2), (2.3).

Proposition 2.2.1 ( [24, 68])

Consider a circuit satisfying assumptions (A1)–(A3).

  1.

    The index of the MNA system  (2.2), (2.3) is at most two.

  2.

    The index of the MNA system  (2.2), (2.3) is equal to zero if and only if

    $$\displaystyle{ n_{\mathscr{V}} = 0\qquad \text{and}\qquad \mathrm{rank}(A_{\mathscr{C}}) = n_{v} - 1. }$$
    (2.6)
  3.

    The index of the MNA system  (2.2), (2.3) is equal to one if and only if

    $$\displaystyle{ \mathrm{rank}([\,A_{\mathscr{C}},\;A_{\mathscr{V}}\,]) =\mathrm{ rank}(A_{\mathscr{C}}) + n_{\mathscr{V}}\qquad \text{and}\qquad \mathrm{rank}([\,A_{\mathscr{C}},\;A_{\mathscr{R}},\;A_{\mathscr{V}}\,]) = n_{v} - 1. }$$
    (2.7)

Similar rank conditions can also be formulated for the MLA system (2.2), (2.5). In terms of the topological structure of the circuit, the conditions (2.6) mean that the circuit does not contain voltage sources and that the circuit graph contains a capacitive tree, respectively. Furthermore, the first condition in (2.7) implies that the circuit does not contain loops consisting of capacitors and/or voltage sources (CV-loops) except for loops consisting of capacitors only (C-loops), whereas the second condition in (2.7) means that the circuit does not contain cutsets consisting of inductors and/or current sources (LI-cutsets).

In the following, we will distinguish between nonlinear circuits, which contain nonlinear elements, and linear circuits consisting exclusively of linear capacitors, inductors and resistors. A circuit element is called linear if its current-voltage relation is linear. Otherwise, the circuit element is called nonlinear. Without loss of generality we may assume that the circuit elements are ordered such that the incidence matrices are partitioned as

$$\displaystyle{ A_{\mathscr{C}} = [\,\bar{A}_{\mathscr{C}},\;\tilde{A}_{\mathscr{C}}\,],\qquad A_{\mathscr{L}} = [\,\bar{A}_{\mathscr{L}},\;\tilde{A}_{\mathscr{L}}\,],\qquad A_{\mathscr{R}} = [\,\bar{A}_{\mathscr{R}},\;\tilde{A}_{\mathscr{R}}\,], }$$
(2.8)

where the incidence matrices \(\bar{A}_{\mathscr{C}}\), \(\bar{A}_{\mathscr{L}}\) and \(\bar{A}_{\mathscr{R}}\) correspond to the linear circuit components, and \(\tilde{A}_{\mathscr{C}}\), \(\tilde{A}_{\mathscr{L}}\) and \(\tilde{A}_{\mathscr{R}}\) are the incidence matrices for the nonlinear devices. We also assume that the linear and nonlinear elements are not mutually connected, i.e., the element relations decouple as

$$\displaystyle{ q(v_{\mathscr{C}}) = \left [\begin{array}{c} \bar{C}\,\bar{v}_{\mathscr{C}}\\ \tilde{q}(\tilde{v}_{\mathscr{C}})\end{array} \right ],\qquad \phi (j_{\mathscr{L}}) = \left [\begin{array}{c} \bar{L}\,\bar{j}_{\mathscr{L}}\\ \tilde{\phi }(\tilde{j}_{\mathscr{L}})\end{array} \right ],\qquad g(v_{\mathscr{R}}) = \left [\begin{array}{c} \bar{G}\,\bar{v}_{\mathscr{R}}\\ \tilde{g}(\tilde{v}_{\mathscr{R}})\end{array} \right ], }$$

where \(\bar{C}\), \(\bar{L}\) and \(\bar{G}\) are the capacitance, inductance and conductance matrices for the corresponding linear elements, whereas \(\tilde{q}\), \(\tilde{\phi }\) and \(\tilde{g}\) describe the corresponding nonlinear components, and \(\tilde{j}_{\mathscr{L}}\) is the current vector through the nonlinear inductors.

2.2.3 Linear RLC Circuits

For simplification of notation, a linear RLC circuit containing \(n_{R}\) linear resistors, \(n_{L}\) linear inductors, \(n_{C}\) linear capacitors, \(n_{I}\) current sources and \(n_{V}\) voltage sources will be described by the linear DAE system

$$\displaystyle{ \begin{array}{rcl} E \frac{\mathrm{d}} {\mathrm{d}t}x & = & Ax + Bu, \\ y & = & B^{T}x,\end{array} }$$
(2.9)

with the MNA matrices

$$\displaystyle{ E = \left [\begin{array}{ccc} A_{C}^{}CA_{C}^{T}& 0 & 0 \\ 0 & L& 0\\ 0 & 0 & 0 \end{array} \right ],\quad A = \left [\begin{array}{ccc} - A_{R}^{}GA_{R}^{T}& - A_{L}^{}& - A_{V }^{} \\ A_{L}^{T} & 0 & 0 \\ A_{V }^{T} & 0 & 0 \end{array} \right ],\quad B = \left [\begin{array}{cr} - A_{I}& 0 \\ 0 & 0\\ 0 & - I \end{array} \right ],\quad }$$
(2.10)

or the MLA matrices

$$\displaystyle{ E = \left [\begin{array}{ccc} B_{L}^{}LB_{L}^{T}& 0 & 0 \\ 0 & C & 0\\ 0 & 0 & 0 \end{array} \right ],\quad A = \left [\begin{array}{ccc} - B_{R}^{}RB_{R}^{T}& - B_{C}^{}& - B_{I}^{} \\ B_{C}^{T} & 0 & 0 \\ B_{I}^{T} & 0 & 0 \end{array} \right ],\quad B = \left [\begin{array}{rc} 0& - B_{V } \\ 0& 0\\ - I & 0 \end{array} \right ].\quad }$$
(2.11)

Here, the subscripts R, C, L, V and I stand for linear resistors, linear capacitors, linear inductors, voltage sources and current sources, respectively, and \(L \in \mathbb{R}^{n_{L},n_{L}}\), \(C \in \mathbb{R}^{n_{C},n_{C}}\), \(R \in \mathbb{R}^{n_{R},n_{R}}\) and \(G = R^{-1}\) are the inductance, capacitance, resistance and conductance matrices, respectively. Linear circuits are often used to model interconnects, transmission lines and pin packages in VLSI networks. They also arise in the linearization of nonlinear circuit equations around DC operating points. According to (A1), we assume that

  1. (A1​´)

    the matrices L, C and G are symmetric and positive definite.

This condition together with (A2) and (A3) guarantees that the pencil λE − A is regular, i.e., \(\det (\lambda E - A)\neq 0\) for some \(\lambda \in \mathbb{C}\), see [27]. In this case, we can define the transfer function

$$\displaystyle{ \mathbf{G}(s) = B^{T}(sE - A)^{-1}B }$$

of the DAE system (2.9). Applying the Laplace transform to (2.9) with an initial condition \(x(0) = x_{0}\) satisfying \(Ex_{0} = 0\), we obtain y(s) = G(s)u(s), where u(s) and y(s) are the Laplace transforms of the input u(t) and the output y(t), respectively. Thus, the transfer function G(s) describes the input-output relation of (2.9) in the frequency domain. Note that the MNA system (2.9), (2.10) and the MLA system (2.9), (2.11) have the same transfer function.
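As an illustration, G(s) = B T(sE − A)−1 B can be evaluated directly from the MNA matrices (2.10). The circuit below (a resistor from node 1 to node 2, a capacitor and an inductor from node 2 to ground, a current source at node 1) and its element values are hypothetical; NumPy is assumed. The result is compared against the driving-point impedance known from elementary circuit analysis.

```python
import numpy as np

# Hypothetical RLC one-port, unknowns x = (eta_1, eta_2, j_L);
# R = 2 Ohm, C = 1 mF, L = 10 mH are illustrative values only.
R, Cval, Lval = 2.0, 1e-3, 1e-2
G = 1.0 / R

E = np.array([[0.0, 0.0,  0.0],
              [0.0, Cval, 0.0],
              [0.0, 0.0,  Lval]])
A = np.array([[-G,  G,  0.0],
              [ G, -G, -1.0],
              [0.0, 1.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0]])

def transfer(s):
    """Evaluate G(s) = B^T (sE - A)^{-1} B at a complex frequency s."""
    return (B.T @ np.linalg.solve(s * E - A, B)).item()

# Compare with the impedance R + (capacitor || inductor) at node 1.
s = 1j * 2 * np.pi * 50.0
Z = R + 1.0 / (s * Cval + 1.0 / (s * Lval))
assert np.isclose(transfer(s), Z)
```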

For any rational matrix-valued function G(s), there exist matrices E, A, \(B_{\mathrm{in}}\) and \(B_{\mathrm{out}}\) such that \(\mathbf{G}(s) = B_{\mathrm{out}}(sE - A)^{-1}B_{\mathrm{in}}\), see [21]. Then the DAE system

$$\displaystyle{ \begin{array}{rcl} E \frac{\mathrm{d}} {\mathrm{d}t}x & = & Ax + B_{\mathrm{in}}u,\\ y & = & B_{ \mathrm{out}}x, \end{array} }$$

is said to form a realization of G(s). We will also denote a realization of G(s) by G = (E, A, B in, B out).

The transfer function G(s) is called proper if \(\lim \limits _{s\rightarrow \infty }\mathbf{G}(s) <\infty\), and improper otherwise. If G(s) is proper and analytic in \(\mathbb{C}_{+}\), then the \(\mathcal{H}_{\infty }\)-norm of G(s) is defined as

$$\displaystyle{ \|\mathbf{G}\|_{\mathcal{H}_{\infty }} =\sup _{s\in \mathbb{C}_{+}}\|\mathbf{G}(s)\| =\lim _{\begin{array}{c} \sigma \rightarrow 0\\ \sigma> 0 \end{array} }\sup _{\omega \in \mathbb{R}}\|\mathbf{G}(\sigma +i\omega )\|, }$$

where ∥⋅ ∥ denotes the spectral matrix norm.
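For a proper, asymptotically stable transfer function, the supremum is attained on the imaginary axis, so a coarse grid sweep over ω gives a lower estimate of the \(\mathcal{H}_{\infty }\)-norm. The scalar RC example below, G(s) = 1/(sC + 1/R), is hypothetical; for this positive real function the maximum |G(iω)| = R is attained at ω = 0.

```python
import numpy as np

# Grid-based H-infinity estimate for G(s) = 1/(s C + 1/R); values illustrative.
R, Cval = 5.0, 1e-3
Gfun = lambda w: abs(1.0 / (1j * w * Cval + 1.0 / R))

omega = np.logspace(-3, 6, 2000)
hinf_estimate = max(Gfun(0.0), max(Gfun(w) for w in omega))

# For this first-order low-pass behaviour the norm equals the DC gain R.
assert np.isclose(hinf_estimate, R)
```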

2.2.3.1 Passivity

Passivity is the most important property of circuit equations. System (2.9) with x(0) = 0 is passive if

$$\displaystyle{ \int _{0}^{t}u(\tau )^{T}y(\tau )\,d\tau \geq 0 }$$
(2.12)

for all t ≥ 0 and all admissible u such that u T y is locally integrable. Passive elements can store and dissipate energy, but they do not generate energy. Thus, capacitors, resistors and inductors are passive, while current and voltage sources are not.

It is well known in linear network theory [1] that the DAE system (2.9) is passive if and only if its transfer function G(s) = B T(sE − A)−1 B is positive real, i.e., G is analytic in \(\mathbb{C}_{+}\) and \(\mathbf{G}(s) + \mathbf{G}(s)^{*} \geq 0\) for all \(s \in \mathbb{C}_{+}\). Since the system matrices in (2.10) satisfy E = E T ≥ 0 and A + A T ≤ 0, the transfer function of (2.9), (2.10) is positive real and, hence, this system is passive.

2.2.3.2 Stability

Stability is a qualitative property of dynamical systems which describes the behaviour of their solutions under small perturbations in the initial data. For the linear DAE system (2.9), stability can be characterized in terms of the finite eigenvalues of the pencil λEA, e.g., [21]. System (2.9) is stable if all the finite eigenvalues of λEA lie in the closed left half-plane and the eigenvalues on the imaginary axis are semi-simple, i.e., they have the same algebraic and geometric multiplicity. System (2.9) is asymptotically stable if the pencil λEA is c-stable, i.e., all its finite eigenvalues lie in the open left half-plane. Note that passivity of the MNA system (2.9), (2.10) implies that this system is stable [1, Theorem 2.7.2]. Topological conditions for the asymptotic stability of the MNA equations (2.9), (2.10) can be found in [58, 59].
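Numerically, the finite eigenvalues of the pencil λE − A are generalized eigenvalues of the pair (A, E); for singular E, the eigenvalue at infinity shows up as a non-finite value. The pencil below is a small hand-made illustration (not a specific circuit), chosen to be c-stable; SciPy is assumed.

```python
import numpy as np
from scipy.linalg import eig

# Illustrative regular DAE pencil: E singular, so one eigenvalue at infinity.
E = np.diag([1.0, 1.0, 0.0])
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.0, -2.0,  1.0],
              [ 0.0,  1.0, -1.0]])

w = eig(A, E, right=False)        # generalized eigenvalues of (A, E)
finite = w[np.isfinite(w)]

# c-stability: all finite eigenvalues lie in the open left half-plane.
assert np.all(finite.real < 0)
assert np.sum(~np.isfinite(w)) == 1   # one infinite eigenvalue
```

Here det(λE − A) = (λ + 1)², so the two finite eigenvalues are both −1.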

2.2.3.3 Reciprocity

Another relevant property of circuit equations is reciprocity. We call a matrix \(S \in \mathbb{R}^{m,m}\) a signature matrix if S is diagonal and \(S^{2} = I_{m}\). System (2.9) is reciprocal with an external signature \(S_{\mathrm{ext}} \in \mathbb{R}^{m,m}\) if its transfer function satisfies

$$\displaystyle{\mathbf{G}(s) = S_{\mathrm{ext}}\mathbf{G}(s)^{T}S_{\mathrm{ ext}}}$$

for all \(s \in \mathbb{C}\). Obviously, the MNA system (2.9), (2.10) with symmetric L, C and G is reciprocal with the external signature \(S_{\mathrm{ext}} =\mathrm{ diag}(I_{n_{I}},-I_{n_{V }})\).
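The reciprocity relation can be checked pointwise in s. The tiny network below (one resistor between nodes 1 and 2, a current source at node 1, a voltage source at node 2, state x = (η₁, η₂, j_V)) and its conductance value are hypothetical; with one current source and one voltage source, S_ext = diag(1, −1).

```python
import numpy as np

# Hypothetical RIV network in MNA form (2.10) with no capacitors/inductors.
G = 0.5
E = np.zeros((3, 3))
A = np.array([[-G,   G,   0.0],
              [ G,  -G,  -1.0],
              [0.0, 1.0,  0.0]])
B = np.array([[1.0,  0.0],
              [0.0,  0.0],
              [0.0, -1.0]])

S_ext = np.diag([1.0, -1.0])   # one I-source, one V-source

s = 1j * 3.0
Gs = B.T @ np.linalg.solve(s * E - A, B)
assert np.allclose(Gs, S_ext @ Gs.T @ S_ext)   # reciprocity at s
```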

2.3 Model Reduction of Linear Circuits

Consider the linear MNA system (2.9), (2.10) with \(E,A \in \mathbb{R}^{n,n}\) and \(B \in \mathbb{R}^{n,m}\). We aim to approximate this system by a reduced-order model

$$\displaystyle{ \begin{array}{rcl} \hat{E}\, \frac{\mathrm{d}} {\mathrm{d}t}\hat{x} & = & \hat{A}\,\hat{x} +\hat{ B}\,u,\\ \hat{y}&=&\hat{C}\,\hat{x}, \end{array} }$$
(2.13)

where \(\hat{E}\), \(\hat{A} \in \mathbb{R}^{n_{r},n_{r}}\), \(\hat{B} \in \mathbb{R}^{n_{r},m}\), \(\hat{C} \in \mathbb{R}^{m,n_{r}}\) and \(n_{r} \ll n\). It is required that the approximate system (2.13) has a small approximation error \(y -\hat{ y}\) and also preserves passivity and reciprocity. In the frequency domain, the error can be measured via \(\mathbf{G} -\hat{\mathbf{G}}\) in an appropriate system norm, where \(\hat{\mathbf{G}}(s) =\hat{ C}(s\hat{E} -\hat{ A})^{-1}\hat{B}\) is the transfer function of system (2.13).

A classical approach for computing the reduced-order model (2.13) is based on the projection of system (2.9) onto lower dimensional subspaces. In this case, the system matrices in (2.13) have the form

$$\displaystyle{ \hat{E} = W^{T}\!E\,T,\qquad \hat{A} = W^{T}\!A\,T,\qquad \hat{B} = W^{T}B,\qquad \hat{C} = B^{T}\,T, }$$
(2.14)

where the projection matrices W, \(T \in \mathbb{R}^{n,n_{r}}\) determine the subspaces of interest. In interpolation-based passivity-preserving model reduction methods like PRIMA [48], SPRIM [26, 27] and spectral zero interpolation [38, 66], the columns of these matrices span certain (rational) Krylov subspaces associated with (2.9).

Balanced truncation also belongs to the projection-based model reduction techniques. This method consists in transforming the dynamical system into a balanced form whose appropriately chosen controllability and observability Gramians are both equal to a diagonal matrix. Then a reduced-order model (2.13), (2.14) is obtained by projecting (2.9) onto the subspaces corresponding to the dominant diagonal elements of the balanced Gramians. In order to capture specific system properties, different balancing techniques have been developed in the last 30 years [31, 46, 47, 52, 55, 69]. An important property of these techniques is the existence of computable error bounds that allow us to approximate (2.9) to a prescribed accuracy.

In Sect. 2.3.1, we consider a passivity-preserving model reduction method for general RLC circuits developed in [54, 72]. This method is based on balancing the Gramians that satisfy the projected Lur’e matrix equations. For RC circuits consisting only of resistors, capacitors, current sources and/or voltage sources, this method can significantly be simplified. In Sect. 2.3.2, we present passivity-preserving model reduction methods for RC circuits developed in [56] that rely on balancing the solutions of the projected Lyapunov equations. Thereby, we will distinguish three cases: RC circuits with current sources (RCI circuits), RC circuits with voltage sources (RCV circuits) and RC circuits with both current and voltage sources (RCIV circuits). Finally, in Sect. 2.3.3, we discuss the numerical aspects of the presented balancing-related model reduction algorithms.

2.3.1 Balanced Truncation for RLC Circuits

First, we consider model reduction of general RLC circuits. Note that passivity of the MNA system (2.9), (2.10) can be characterized via the projected Lur’e equations

$$\displaystyle{ \begin{array}{l} E\,X\,(A - BB^{T})^{T} + (A - BB^{T})\,XE^{T} + 2P_{l}^{}BB^{T}P_{l}^{T} = -2K_{c}^{}K_{c}^{T}, \\ EXB - P_{l}^{}BM_{0}^{T} = -K_{c}^{}J_{c}^{T},\quad I - M_{0}^{}M_{0}^{T} = J_{c}^{}J_{c}^{T},\quad X = P_{r}^{}XP_{r}^{T} \geq 0,\end{array} }$$
(2.15)

and

$$\displaystyle{ \begin{array}{l} E^{T}Y (A - BB^{T}) + (A - BB^{T})^{T}Y E + 2P_{r}^{T}\!BB^{T}P_{r} = -2K_{o}^{T}K_{o}^{}, \\ - E^{T}Y B + P_{r}^{T}\!BM_{0} = -K_{o}^{T}J_{o}^{},\quad I - M_{0}^{T}M_{0}^{} = J_{o}^{T}J_{o}^{},\quad Y = P_{l}^{T}Y P_{l}^{} \geq 0 \end{array} }$$
(2.16)

with unknowns \(X \in \mathbb{R}^{n,n}\), \(K_{c} \in \mathbb{R}^{n,m}\), \(J_{c} \in \mathbb{R}^{m,m}\) and \(Y \in \mathbb{R}^{n,n}\), \(K_{o} \in \mathbb{R}^{m,n}\), \(J_{o} \in \mathbb{R}^{m,m}\), respectively. Here, \(P_{r}\) and \(P_{l}\) are the spectral projectors onto the right and left deflating subspaces of the pencil \(\lambda E - (A - BB^{T})\) corresponding to the finite eigenvalues along the right and left deflating subspaces corresponding to the eigenvalue at infinity, and

$$\displaystyle{ M_{0} = I - 2\lim _{s\rightarrow \infty }B^{T}(sE - A + BB^{T})^{-1}B. }$$
(2.17)

In general, the solvability of the projected Lur’e equations (2.15) and (2.16) requires that system (2.9) is passive and R-minimal, i.e.,

$$\displaystyle{\mathrm{rank}([\,\lambda E - A\,,\;B\,]) =\mathrm{ rank}([\,\lambda E^{T} - A^{T},\;B\,]) = n}$$

for all \(\lambda \in \mathbb{C}\). For the circuit equations (2.9), (2.10), however, the R-minimality condition can be removed.

Theorem 2.3.1 ( [54])

Consider an MNA system  (2.9) (2.10) satisfying (A1​´), (A2) and (A3). Then the projected Lur’e equations  (2.15) and  (2.16) are solvable.

Note that the solutions X and Y of (2.15) and (2.16) are not unique. However, there exist unique minimal solutions \(X_{\min }\) and \(Y _{\min }\) that satisfy \(0 \leq X_{\min } \leq X\) and \(0 \leq Y _{\min } \leq Y\) for all symmetric solutions X and Y of (2.15) and (2.16), respectively. These minimal solutions \(X_{\min }\) and \(Y _{\min }\) of (2.15) and (2.16), respectively, are called the controllability and observability Gramians of system (2.9). This system is called balanced if \(X_{\min } = Y _{\min } =\mathrm{ diag}(\varGamma,0)\), where \(\varGamma =\mathrm{ diag}(\gamma _{1},\ldots,\gamma _{n_{f}})\) with \(\gamma _{1} \geq \ldots \geq \gamma _{n_{f}} \geq 0\) and \(n_{f} =\mathrm{ rank}(P_{r})\). The values \(\gamma _{j}\) are called the characteristic values of (2.9). Based on the energy interpretation of the Gramians \(X_{\min }\) and \(Y _{\min }\), see [55], one can conclude that the truncation of the states of a balanced system corresponding to the small characteristic values does not change the system properties significantly. The characteristic values and balancing transformation matrices can be determined from the singular value decomposition of the matrix \(\tilde{Y }^{T}E\tilde{X}\), where \(\tilde{X}\) and \(\tilde{Y }\) are the Cholesky factors of the Gramians \(X_{\min } =\tilde{ X}\tilde{X}^{T}\) and \(Y _{\min } =\tilde{ Y }\tilde{Y }^{T}\). Taking into account the block structure of the MNA matrices in (2.10), we have \(E^{T} = S_{\mathrm{int}}ES_{\mathrm{int}}\) and \(A^{T} = S_{\mathrm{int}}AS_{\mathrm{int}}\) with

$$\displaystyle{ S_{\mathrm{int}} =\mathrm{ diag}(I_{n_{v}-1},-I_{n_{L}},-I_{n_{V }}). }$$
(2.18)

This implies that \(Y _{\min } = S_{\mathrm{int}}X_{\min }S_{\mathrm{int}}\). Then instead of the more expensive singular value decomposition of \(\tilde{Y }^{T}E\tilde{X}\), we can compute the eigenvalue decomposition of the symmetric matrix \(\tilde{X}^{T}S_{\mathrm{int}}E\tilde{X}\). In this case, the numerical solution of only one Lur’e equation is required. If \(\lambda _{j}\) are the eigenvalues of \(\tilde{X}^{T}S_{\mathrm{int}}E\tilde{X}\), then \(\gamma _{j} = \vert \lambda _{j}\vert \). Thus, the reduced-order model (2.13), (2.14) can be determined by projecting (2.9) onto the subspaces corresponding to the dominant eigenvalues of \(\tilde{X}^{T}S_{\mathrm{int}}E\tilde{X}\).
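The symmetry shortcut can be demonstrated on synthetic data: if E T = S E S and Y = S X S with a signature matrix S, then the singular values of Ỹ T E X̃ coincide with the absolute eigenvalues of the symmetric matrix X̃ T S E X̃. All matrices below are random stand-ins, not circuit data; NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
S = np.diag([1.0, 1.0, 1.0, -1.0, -1.0])   # signature matrix, S @ S = I

M = rng.standard_normal((n, n))
E = (M + S @ M.T @ S) / 2                  # enforce E^T = S E S
F = rng.standard_normal((n, n))
X = F @ F.T                                # symmetric positive definite "Gramian"
Xtil = np.linalg.cholesky(X)               # X = Xtil Xtil^T
Ytil = S @ Xtil                            # then Ytil Ytil^T = S X S = Y

# SVD of Ytil^T E Xtil vs. eigenvalues of the symmetric Xtil^T S E Xtil
sv = np.linalg.svd(Ytil.T @ E @ Xtil, compute_uv=False)
lam = np.linalg.eigvalsh(Xtil.T @ S @ E @ Xtil)
assert np.allclose(np.sort(sv), np.sort(np.abs(lam)))
```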

One can also truncate the states that are uncontrollable and unobservable at infinity. Such states do not contribute to the energy transfer from the input to the output, and, therefore, they can be removed from the system without changing the input-output relation [69, 71]. For general DAE systems, such states can be determined from the solution of certain projected discrete-time Lyapunov equations [69]. Exploiting again the structure of the MNA equations (2.9), (2.10), the required states can be determined from the eigenvalue decomposition of the symmetric matrix \((I - M_{0})S_{\mathrm{ext}}\) with

$$\displaystyle{ S_{\mathrm{ext}} =\mathrm{ diag}(I_{n_{I}},-I_{n_{V }}). }$$
(2.19)

We summarize the resulting model reduction method for RLC circuits in Algorithm 2.1.

Algorithm 2.1 Passivity-preserving balanced truncation for RLC circuits

Given a passive MNA system (2.9) with E, A, B as in (2.10), compute a reduced-order model (2.13).

  1.

    Compute the full-rank Cholesky factor \(\tilde{X}\) of the minimal solution \(X_{\min } =\tilde{ X}\tilde{X}^{T}\) of the projected Lur’e equation (2.15).

  2.

    Compute the eigenvalue decomposition

    $$\displaystyle{ \tilde{X}^{T}S_{\mathrm{ int}}E\tilde{X} = [\,U_{1},\,U_{2}\,]\left [\begin{array}{cc} \varLambda _{1} & 0 \\ 0& \varLambda _{2}\end{array} \right ][\,U_{1},\,U_{2}\,]^{T}, }$$

    where S int is as in (2.18), the matrix [ U 1, U 2 ] is orthogonal, \(\varLambda _{1} =\mathrm{ diag}(\lambda _{1},\ldots,\lambda _{r})\) and \(\varLambda _{2} =\mathrm{ diag}(\lambda _{r+1},\ldots,\lambda _{q})\).

  3.

    Compute the eigenvalue decomposition

    $$\displaystyle{ (I - M_{0})S_{\mathrm{ext}} = U_{0}^{}\varLambda _{0}^{}U_{0}^{T}, }$$

    where M 0 is as in (2.17), S ext is as in (2.19), U 0 is orthogonal and \(\varLambda _{0} =\mathrm{ diag}(\hat{\lambda }_{1},\ldots,\hat{\lambda }_{m})\).

  4.

    Compute the reduced-order system (2.13) with

    $$\displaystyle{ \begin{array}{l} \hat{E} = \left [\begin{array}{cc} I& 0\\ \;0 & 0\end{array} \right ],\qquad \hat{A} = \left [\begin{array}{cc} \quad \;W^{T}A\,T & W^{T}BC_{ \infty }/\sqrt{2} \\ - B_{\infty }B^{T}T/\sqrt{2}& I - B_{\infty }C_{\infty }/2\end{array} \right ], \\ \hat{B} = \left [\begin{array}{cc} W^{T}B \\ - B_{\infty }/\sqrt{2}\end{array} \right ],\qquad \hat{C} = \left [\,B^{T}T, C_{ \infty }/\sqrt{2}\,\right ],\end{array} }$$

    where

    $$\displaystyle{ \begin{array}{l} B_{\infty } = S_{0}\vert \varLambda _{0}\vert ^{1/2}U_{0}^{T}S_{\mathrm{ext}},\quad C_{\infty } = U_{0}^{}\vert \varLambda _{0}\vert ^{1/2}, \\ W = S_{\mathrm{int}}\tilde{X}U_{1}^{}\vert \varLambda _{1}\vert ^{-1/2},\quad T =\tilde{ X}U_{1}^{}S_{1}\vert \varLambda _{1}\vert ^{-1/2}, \\ S_{0} =\mathrm{ diag}(\mathrm{sign}(\hat{\lambda }_{1}),\ldots,\mathrm{sign}(\hat{\lambda }_{m})),\quad \vert \varLambda _{0}\vert =\mathrm{ diag}(\vert \hat{\lambda }_{1}\vert,\ldots,\vert \hat{\lambda }_{m}\vert ), \\ S_{1} =\mathrm{ diag}(\mathrm{sign}(\lambda _{1}),\ldots,\mathrm{sign}(\lambda _{r})),\quad \vert \varLambda _{1}\vert =\mathrm{ diag}(\vert \lambda _{1}\vert,\ldots,\vert \lambda _{r}\vert ).\end{array} }$$

One can show that the reduced-order model computed by Algorithm 2.1 preserves not only passivity but also reciprocity. Moreover, we have the following error bound

$$\displaystyle{ \|\hat{\mathbf{G}} -\mathbf{G}\|_{\mathcal{H}_{\infty }} \leq \frac{\|I + \mathbf{G}\|_{\mathcal{H}_{\infty }}^{2}(\gamma _{r+1} +\ldots +\gamma _{q})} {1 -\| I + \mathbf{G}\|_{\mathcal{H}_{\infty }}(\gamma _{r+1} +\ldots +\gamma _{q})}, }$$

provided \(\|I + \mathbf{G}\|_{\mathcal{H}_{\infty }}(\gamma _{r+1} +\ldots +\gamma _{q}) <1\), see [54] for details. Note that this error bound requires the computation of the \(\mathcal{H}_{\infty }\)-norm of G, which is expensive for large-scale systems. If r is chosen in Algorithm 2.1 such that

$$\displaystyle{\|I +\hat{ \mathbf{G}}\|_{\mathcal{H}_{\infty }}(\gamma _{r+1} +\ldots +\gamma _{q}) <1,}$$

then we can estimate

$$\displaystyle{ \|\hat{\mathbf{G}} -\mathbf{G}\|_{\mathcal{H}_{\infty }} \leq \frac{\|I +\hat{ \mathbf{G}}\|_{\mathcal{H}_{\infty }}^{2}(\gamma _{r+1} +\ldots +\gamma _{q})} {1 -\| I +\hat{ \mathbf{G}}\|_{\mathcal{H}_{\infty }}(\gamma _{r+1} +\ldots +\gamma _{q})}, }$$
(2.20)

where only the evaluation of the \(\mathcal{H}_{\infty }\)-norm of the reduced-order system \(\hat{\mathbf{G}}\) is required.
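The bound (2.20) is simple arithmetic once the truncated characteristic values and \(\|I +\hat{ \mathbf{G}}\|_{\mathcal{H}_{\infty }}\) are available. A small helper, with illustrative numbers:

```python
def bt_error_bound(gamma_tail_sum: float, norm_I_plus_Ghat: float) -> float:
    """Right-hand side of (2.20); valid only if ||I+Ghat|| * sum(gamma) < 1."""
    c = norm_I_plus_Ghat * gamma_tail_sum
    if c >= 1.0:
        raise ValueError("bound not applicable: ||I + Ghat|| * sum(gamma) >= 1")
    return norm_I_plus_Ghat * c / (1.0 - c)

# e.g. truncated values gamma_{r+1} + ... + gamma_q = 1e-4 and ||I+Ghat|| = 10
bound = bt_error_bound(1e-4, 10.0)
assert abs(bound - 10.0**2 * 1e-4 / (1.0 - 10.0 * 1e-4)) < 1e-15
```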

If the matrix \(I - M_{0}^{}M_{0}^{T}\) is nonsingular, then the projected Lur’e equation (2.15) can be written as the projected Riccati equation

$$\displaystyle{ EXF^{T}\! + FXE^{T}\! + EXB_{ c}^{T}\!B_{ c}^{}XE^{T} + P_{ l}^{}B_{o}^{}B_{o}^{T}\!P_{ l}^{T}\! = 0,\quad X = P_{ r}^{}XP_{r}^{T}, }$$
(2.21)

where

$$\displaystyle{ \begin{array}{l} F = A - BB^{T} - 2P_{l}BM_{0}^{T}(I - M_{0}^{}M_{0}^{T})^{-1}B^{T}P_{r}, \\ B_{c} = \sqrt{2}\,J_{c}^{-1}B^{T}P_{r},\qquad B_{o} = -\sqrt{2}\,BJ_{o}^{-1}, \\ J_{c}^{}J_{c}^{T} = I - M_{0}^{}M_{0}^{T},\quad J_{o}^{T}J_{o}^{} = I - M_{0}^{T}M_{0}^{}. \end{array} }$$
(2.22)

Note that the invertibility of \(I - M_{0}^{}M_{0}^{T}\) depends on the topological structure of the circuit.

Theorem 2.3.2

Consider an MNA system  (2.9), (2.10). Let the matrix M 0 be as in  (2.17). Then \(I - M_{0}^{}M_{0}^{T}\) is nonsingular if and only if

$$\displaystyle{ \mathrm{rank}(Z_{C}^{T}[\,A_{ I},\;A_{V }\,]) = n_{I} + n_{V },\qquad Z_{RC}^{T}[\,A_{ I},\;A_{V }\,] = 0, }$$
(2.23)

where \(Z_{C}\) and \(Z_{RC}\) are basis matrices for \(\mathrm{ker}(A_{C}^{T})\) and \(\mathrm{ker}([\,A_{R},\;A_{C}\,]^{T})\), respectively.

Proof

The result immediately follows from [54, Theorem 7].

The first condition in (2.23) is equivalent to the absence of loops of capacitors, voltage sources and current sources (CVI-loops) except for loops consisting of capacitive branches (C-loops). The second condition in (2.23) means that the circuit does not contain cutsets consisting of branches of inductors, voltage sources and current sources (LVI-cutsets) except for cutsets consisting of inductive branches (L-cutsets).
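The rank conditions (2.23) can be checked directly from the incidence matrices. The circuit below is hypothetical (a capacitor from node 1 to ground, a resistor between nodes 1 and 2, a current source branch incident with node 2, no voltage sources); SciPy's `null_space` provides the kernel basis matrices.

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical 2-node (plus ground) circuit.
A_C = np.array([[1.0], [0.0]])            # capacitor branch leaves node 1
A_R = np.array([[1.0], [-1.0]])           # resistor between nodes 1 and 2
A_I = np.array([[0.0], [1.0]])            # current source branch at node 2
A_V = np.zeros((2, 0))                    # no voltage sources

Z_C = null_space(A_C.T)                   # basis of ker(A_C^T)
Z_RC = null_space(np.hstack([A_R, A_C]).T)  # basis of ker([A_R, A_C]^T)

AIV = np.hstack([A_I, A_V])
n_I, n_V = A_I.shape[1], A_V.shape[1]

cond1 = np.linalg.matrix_rank(Z_C.T @ AIV) == n_I + n_V
cond2 = Z_RC.size == 0 or np.allclose(Z_RC.T @ AIV, 0)
assert cond1 and cond2   # (2.23) holds: I - M0 M0^T is nonsingular
```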

2.3.2 Balanced Truncation for RC Circuits

We now present a Lyapunov-based balanced truncation model reduction approach for RC circuits. In this approach, the Gramians of system (2.9) are defined as unique symmetric, positive semidefinite solutions of the projected continuous-time Lyapunov equations

$$\displaystyle{ \begin{array}{ll} EXA^{T} + AXE^{T} = -P_{l}^{}BB^{T}P_{l}^{T}, &\quad X = P_{r}^{}XP_{r}^{T}, \\ E^{T}Y A + A^{T}Y E = -P_{r}^{T}BB^{T}P_{r}^{},&\quad Y = P_{l}^{T}Y P_{l}^{}.\end{array} }$$

The numerical solution of such equations is much less expensive than that of the projected Lur’e or Riccati equations. For a balanced system, these Gramians are both equal to a diagonal matrix

$$\displaystyle{ X = Y =\mathrm{ diag}(\varSigma,0), }$$

where \(\varSigma =\mathrm{ diag}(\sigma _{1},\ldots,\sigma _{n_{f}})\). The values σ j are called the proper Hankel singular values of system G = (E, A, B, B T). They determine which states are important and which states can be removed from the system.

Note that Lyapunov-based balanced truncation does not, in general, guarantee the preservation of passivity in the reduced-order model. However, the RC circuit equations either have a symmetric structure

$$\displaystyle{ E = E^{T} \geq 0,\qquad A = A^{T} \leq 0, }$$
(2.24)

or they can be transformed under preservation of passivity into a symmetric form. Then Lyapunov-based balanced truncation applied to symmetric systems is known to be structure-preserving [45] and, hence, also passivity-preserving. All model reduction algorithms presented in this section have been developed in [56].

2.3.2.1 RCI Circuits

First, we consider RCI circuits consisting of resistors, capacitors and current sources only. The MNA matrices are then given by

$$\displaystyle{ E = A_{C}^{}CA_{C}^{T},\qquad A = -A_{ R}^{}GA_{R}^{T},\qquad B = -A_{ I}. }$$
(2.25)

Obviously, the symmetry condition (2.24) is fulfilled. In this case, the reduced-order system (2.13) can be computed by Algorithm 2.2.

Algorithm 2.2 Passivity-preserving balanced truncation for RCI circuits

Given a passive MNA system (2.9) with E, A, B as in (2.25), compute a reduced-order model (2.13).

  1.

    Compute the full column rank matrices Z C and Z R such that

    $$\displaystyle{ \mathrm{im}(Z_{C}) =\mathrm{ ker}(A_{C}^{T}),\qquad \mathrm{im}(Z_{ R}) =\mathrm{ ker}(A_{R}^{T}). }$$
  2.

    Compute a full-rank Cholesky factor \(\tilde{X}\) of the solution \(X =\tilde{ X}\tilde{X}^{T}\) of the projected Lyapunov equation

    $$\displaystyle{ EXA + AXE = -P^{T}BB^{T}P,\qquad X = PXP^{T}, }$$

    where

    $$\displaystyle{ P = I - Z_{R}^{}(Z_{R}^{T}EZ_{ R}^{})^{-1}Z_{ R}^{T}E - Z_{ C}^{}(Z_{C}^{T}AZ_{ C}^{})^{-1}Z_{ C}^{T}A }$$

    is the spectral projector onto the right deflating subspace of \(\lambda E - A\) corresponding to the finite eigenvalues with negative real part.

  3.

    Compute the eigenvalue decomposition

    $$\displaystyle\begin{array}{rcl} \tilde{X}^{T}E\tilde{X} = [\,U_{ 1},\,U_{2}\,]\left [\begin{array}{*{10}c} \varSigma _{1} & 0 \\ 0& \varSigma _{2}\end{array} \right ][\,U_{1},\,U_{2}\,]^{T},& & {}\end{array}$$
    (2.26)

    where [ U 1, U 2 ] is orthogonal, Σ 1 = diag(σ 1, , σ r ) and Σ 2 = diag(σ r+1, , σ q ).

  4.

    Compute the full-rank Cholesky factors \(B_{0} \in \mathbb{R}^{r_{0},m}\) and \(B_{\infty }\in \mathbb{R}^{r_{\infty },m}\) of the matrices \(R_{0} = B_{0}^{T}B_{0}^{}\) and \(R_{\infty } = B_{\infty }^{T}B_{\infty }^{}\) given by

    $$\displaystyle\begin{array}{rcl} R_{0} = B^{T}Z_{ R}^{}(Z_{R}^{T}EZ_{ R})^{-1}Z_{ R}^{T}B,\qquad R_{ \infty } = -B^{T}Z_{ C}^{}(Z_{C}^{T}AZ_{ C})^{-1}Z_{ C}^{T}B.& & {}\end{array}$$
    (2.27)
  5.

    Compute the reduced-order system (2.13) with

    $$\displaystyle{ \hat{E} = \left [\begin{array}{ccc} I_{r}& 0 & 0 \\ 0 & I_{r_{0}} & 0 \\ 0 & 0 & 0\end{array} \right ],\quad \hat{A} = \left [\begin{array}{ccc} A_{s}& 0& 0 \\ 0 & 0& 0\\ 0 & 0 & - I_{ r_{\infty }}\end{array} \right ],\quad \hat{B} =\hat{ C}^{T} = \left [\begin{array}{c} B_{s} \\ B_{0} \\ B_{\infty }\end{array} \right ], }$$
    (2.28)

    where

    $$\displaystyle\begin{array}{rcl} A_{s} =\varSigma _{ 1}^{-1/2}U_{ 1}^{T}\tilde{X}^{T}A\tilde{X}U_{ 1}\varSigma _{1}^{-1/2}\quad and\quad B_{ s} =\varSigma _{ 1}^{-1/2}U_{ 1}^{T}\tilde{X}^{T}B.& & {}\end{array}$$
    (2.29)
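To make steps 1, 2 and 4 of Algorithm 2.2 concrete, the following sketch assembles the projector P and the matrices (2.27) for a hypothetical three-node RCI circuit with hand-chosen incidence matrices; the projected Lyapunov solve of step 2 and the truncation steps are omitted:

```python
import numpy as np

def null_space(M, rtol=1e-10):
    """Orthonormal basis of ker(M) via the SVD (numpy-only helper)."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > rtol * s[0]))
    return vt[rank:].T

# hypothetical RCI circuit with 3 non-ground nodes: one capacitor from node 1
# to ground, two resistors, one current source (incidence matrices by hand)
A_C = np.array([[1.0], [0.0], [0.0]])
A_R = np.array([[0.0, 1.0], [1.0, -1.0], [-1.0, 0.0]])
A_I = np.array([[1.0], [0.0], [-1.0]])
Cm = np.array([[1.0]])
Gm = np.eye(2)

E = A_C @ Cm @ A_C.T          # MNA matrices (2.25)
A = -A_R @ Gm @ A_R.T
B = -A_I

Z_C = null_space(A_C.T)       # step 1: im(Z_C) = ker(A_C^T)
Z_R = null_space(A_R.T)       #         im(Z_R) = ker(A_R^T)

# step 2: spectral projector P (the projected Lyapunov solve is omitted here)
P = (np.eye(3)
     - Z_R @ np.linalg.solve(Z_R.T @ E @ Z_R, Z_R.T @ E)
     - Z_C @ np.linalg.solve(Z_C.T @ A @ Z_C, Z_C.T @ A))

# step 4: the matrices (2.27) carrying the improper part of the model
R_0 = B.T @ Z_R @ np.linalg.solve(Z_R.T @ E @ Z_R, Z_R.T @ B)
R_inf = -B.T @ Z_C @ np.linalg.solve(Z_C.T @ A @ Z_C, Z_C.T @ B)
```

Checking \(P^{2} = P\) is a cheap consistency test for the computed projector.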

One can show that the reduced-order system (2.13), (2.28) has the transfer function

$$\displaystyle{ \hat{\mathbf{G}}(s) =\hat{ C}(s\hat{E} -\hat{ A})^{-1}\hat{B} = B_{ s}^{T}(sI - A_{ s})^{-1}B_{ s}^{} + \frac{1} {s}R_{0} + R_{\infty }, }$$

where the matrices \(R_{0}\) and \(R_{\infty }\) are as in (2.27), and \(A_{s}\) and \(B_{s}\) are given in (2.29). Furthermore, we have the following \(\mathcal{H}_{\infty }\)-norm error bound.

Theorem 2.3.3 ( [56])

Let an RCI circuit  (2.9) (2.25) fulfill (A1​´) and (A2). Then a reduced-order model  (2.13) (2.28) obtained by Algorithm  2.2 is passive and reciprocal with an external signature \(S_{\mathrm{ext}} = I_{n_{I}}\) . Moreover, for the transfer functions G and \(\hat{\mathbf{G}}\) of the original system  (2.9) (2.25) and the reduced-order model  (2.13) (2.28), we have the \(\mathcal{H}_{\infty }\) -norm error bound

$$\displaystyle{ \|\mathbf{G} -\hat{\mathbf{G}}\|_{\mathcal{H}_{\infty }} \leq 2(\sigma _{r+1} +\ldots +\sigma _{q}), }$$

where σ j are the proper Hankel singular values of G = (E, A, B, B T) obtained in  (2.26).
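This bound suggests an adaptive choice of the reduced order: keep the smallest r whose truncated tail meets a prescribed tolerance. A sketch of such a selection rule (our helper, not part of [56]):

```python
import numpy as np

def choose_order(sv, tol):
    """Smallest r with 2*(sigma_{r+1} + ... + sigma_q) <= tol, following the
    error bound of Theorem 2.3.3 (sv: proper Hankel singular values)."""
    sv = np.sort(np.asarray(sv, dtype=float))[::-1]
    # tail[r] = 2 * sum(sv[r:]) for r = 0, ..., q (with tail[q] = 0)
    tail = 2.0 * np.concatenate([np.cumsum(sv[::-1])[::-1], [0.0]])
    return int(np.argmax(tail <= tol))
```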

2.3.2.2 RCV Circuits

We now consider RCV circuits consisting of resistors, capacitors and voltage sources. Unfortunately, the MNA equations for such circuits do not satisfy the symmetry conditions (2.24). We can, however, transform the MLA equations with the system matrices

$$\displaystyle{ E = \left [\begin{array}{cc} 0& 0\\ 0 & C \end{array} \right ],\qquad A = \left [\begin{array}{cc} - B_{R}^{}RB_{R}^{T}& - B_{C} \\ B_{C}^{T} & 0 \end{array} \right ],\qquad B = \left [\begin{array}{c} - B_{V }\\ 0 \end{array} \right ] }$$
(2.30)

into a symmetric system. Such a transformation is the frequency inversion

$$\displaystyle{ \mathbf{G}^{\star }(s) = \mathbf{G}(s^{-1}). }$$

The transfer function of the transformed system can be realized as

$$\displaystyle{ \mathbf{G}^{\star }(s) = B_{ \star }^{T}(sE_{ \star }^{} - A_{\star }^{})^{-1}B_{ \star }^{}, }$$

where

$$\displaystyle{ E_{\star } = B_{C}^{}C^{-1}B_{ C}^{T},\qquad A_{ \star } = -B_{R}^{}RB_{R}^{T},\qquad B_{ \star } = -B_{V }. }$$
(2.31)

Reducing this system and applying the back transformation, we obtain a reduced-order model. The resulting model reduction method is given in Algorithm 2.3.

Algorithm 2.3 Passivity-preserving balanced truncation for RCV circuits

Given a passive MLA system (2.9) with E, A, B as in (2.30), compute a reduced-order model (2.13).

  1.

    Compute the full column rank matrices Y C and Y R such that im(Y C ) = ker(B C T) and im(Y R ) = ker(B R T).

  2.

    Compute a full-rank Cholesky factor \(\tilde{X}\) of the solution \(X =\tilde{ X}\tilde{X}^{T}\) of the projected Lyapunov equation

    $$\displaystyle{ E_{\star }XA_{\star } + A_{\star }XE_{\star } = -P^{T}B_{\star }^{}B_{\star }^{T}P,\quad X = PXP^{T}, }$$

    where \(E_{\star }\), \(A_{\star }\), \(B_{\star }\) are as in (2.31) and

    $$\displaystyle{ P = I - Y _{R}^{}(Y _{R}^{T}E_{ \star }^{}Y _{R}^{})^{-1}Y _{ R}^{T}E_{ \star }^{} - Y _{C}^{}(Y _{C}^{T}A_{ \star }^{}Y _{C}^{})^{-1}Y _{ C}^{T}A_{ \star }^{} }$$

    is the projector onto the right deflating subspace of \(\lambda E_{\star } - A_{\star }\) corresponding to the finite eigenvalues with negative real part.

  3.

    Compute the eigenvalue decomposition

    $$\displaystyle{ \tilde{X}^{T}E_{ \star }\tilde{X} = [\,U_{1},\,U_{2}\,]\left [\begin{array}{cc} \varSigma _{1} & 0 \\ 0& \varSigma _{2}\end{array} \right ][\,U_{1},\,U_{2}\,]^{T}, }$$
    (2.32)

    where [ U 1, U 2 ] is orthogonal, Σ 1 = diag(σ 1, , σ r ) and Σ 2 = diag(σ r+1, , σ q ).

  4.

    Compute the matrices

    $$\displaystyle{ \begin{array}{l} A_{s} =\varSigma _{ 1}^{-1/2}U_{1}^{T}\tilde{X}^{T}A_{\star }\tilde{X}U_{1}^{}\varSigma _{1}^{-1/2},\quad B_{s} =\varSigma _{ 1}^{-1/2}U_{1}^{T}\tilde{X}^{T}B_{\star }, \\ R_{0} = B_{\star }^{T}Y _{R}^{}(Y _{R}^{T}E_{\star }^{}Y _{R}^{})^{-1}Y _{R}^{T}B_{\star }^{},\quad R_{\infty } = -B_{\star }^{T}Y _{C}^{}(Y _{C}^{T}A_{\star }^{}Y _{C}^{})^{-1}Y _{C}^{T}B_{\star }^{}, \\ \tilde{R}_{\infty } = R_{\infty }- B_{s}^{T}A_{s}^{-1}B_{s}^{}.\end{array} }$$
  5.

    Compute the eigenvalue decomposition

    $$\displaystyle{ \left [\begin{array}{cc} \tilde{R}_{\infty }&R_{0} \\ R_{0} & 0\end{array} \right ] = [\,V _{1},\;V _{2}\,]\left [\begin{array}{cc} \varLambda _{0} & 0\\ 0 &0\end{array} \right ][\,V _{1},\;V _{2}\,]^{T}, }$$

    where [ V 1,  V 2 ] is orthogonal and Λ 0 is nonsingular.

  6.

    Compute the reduced-order system (2.13) with

    $$\displaystyle{ \hat{E} = \left [\begin{array}{cc} I & 0\\ 0 &\hat{E}_{\infty }\end{array} \right ]\!,\qquad \hat{A} = \left [\begin{array}{cc} \hat{A}_{1} & 0 \\ 0 &\hat{A}_{\infty }\end{array} \right ]\!,\qquad \hat{B} = -\hat{C}^{T} = \left [\begin{array}{cc} \hat{B}_{1} \\ \hat{B}_{\infty }\end{array} \right ]\!, }$$
    (2.33)

    where \(\hat{A}_{1} = A_{s}^{-1}\), \(\hat{B}_{1} = A_{s}^{-1}B_{s}^{}\) and

    $$\displaystyle{ \hat{E}_{\infty } = V _{1}^{T}\left [\begin{array}{cc} R_{0} & 0 \\ 0 &0\end{array} \right ]V _{1}^{},\quad \hat{A}_{\infty } = V _{1}^{T}\left [\begin{array}{cc} \tilde{R}_{\infty }&R_{0} \\ R_{0} & 0\end{array} \right ]V _{1}^{},\quad \hat{B}_{\infty } = V _{1}^{T}\left [\begin{array}{cc} \tilde{R}_{\infty } \\ R_{0}\end{array} \right ]. }$$

The following theorem provides the error bound for the reduced model (2.13), (2.33).

Theorem 2.3.4 ( [56])

Let an RCV circuit fulfill assumptions (A1​´) and (A3). Then a reduced-order model  (2.13) (2.33) obtained by Algorithm  2.3 is passive and reciprocal with an external signature \(S_{\mathrm{ext}} = I_{n_{V }}\) . Moreover, for the transfer functions G and \(\hat{\mathbf{G}}\) of the original system  (2.9) (2.10) and the reduced-order model  (2.13) (2.33), we have the \(\mathcal{H}_{\infty }\) -norm error bound

$$\displaystyle{ \|\mathbf{G} -\hat{\mathbf{G}}\|_{\mathcal{H}_{\infty }} \leq 2(\sigma _{r+1} +\ldots +\sigma _{q}), }$$

where σ j are the proper Hankel singular values of \(\mathbf{G}^{\star } = (E_{\star },A_{\star },B_{\star },B_{\star }^{T})\) obtained in  (2.32).

2.3.2.3 RCIV Circuits

Finally, we consider RCIV circuits that contain resistors, capacitors and both current as well as voltage sources. Such circuits are modeled by the linear system (2.9) with the MNA matrices

$$\displaystyle{ E = \left [\begin{array}{cc} A_{C}^{}CA_{C}^{T}& 0 \\ 0 & 0 \end{array} \right ],\quad A = \left [\begin{array}{cc} - A_{R}^{}GA_{R}^{T}& - A_{V } \\ A_{V }^{T} & 0 \end{array} \!\right ],\quad B = \left [\!\begin{array}{cc} - A_{I}& 0 \\ 0 & - I \end{array} \!\right ] }$$
(2.34)

or the MLA matrices

$$\displaystyle{ E =\! \left [\!\begin{array}{ccc} 0& 0 & 0\\ 0 & C & 0 \\ 0& 0 & 0 \end{array} \!\right ]\!,\quad A =\! \left [\!\begin{array}{ccc} - B_{R}^{}RB_{R}^{T}& - B_{C}& - B_{I} \\ B_{C}^{T} & 0 & 0 \\ B_{I}^{T} & 0 & 0 \end{array} \!\right ]\!,\quad B =\! \left [\!\begin{array}{rc} 0& - B_{V } \\ 0& 0\\ - I & 0 \end{array} \!\right ]. }$$
(2.35)

Due to the reciprocity, the transfer function of this system can be partitioned in blocks as

$$\displaystyle{ \mathbf{G}(s) = \left [\begin{array}{rr} \mathbf{G}_{II}(s)& \mathbf{G}_{IV }(s) \\ -\mathbf{G}_{IV }^{T}(s)& \mathbf{G}_{V V }(s) \end{array} \right ], }$$

see [56]. Assume that

  (A4) the circuit does not contain cutsets of current and voltage sources.

Then G VV (s) is invertible and G(s) has a (2,2) partial inverse defined as

$$\displaystyle{ \mathbf{G}^{(2,2)}(s) = \left [\begin{array}{cc} \mathbf{G}_{II}(s) + \mathbf{G}_{IV }^{}(s)\,\mathbf{G}_{V V }^{-1}(s)\,\mathbf{G}_{ IV }^{T}(s)&\quad -\mathbf{G}_{ IV }^{}(s)\,\mathbf{G}_{V V }^{-1}(s) \\ -\mathbf{G}_{V V }^{-1}(s)\,\mathbf{G}_{IV }^{T}(s) & \quad \mathbf{G}_{V V }^{-1}(s) \end{array} \right ]. }$$

This rational function can be realized as \(\mathbf{G}^{(2,2)}(s) = B_{2,2}^{T}(sE_{2,2}^{} - A_{2,2}^{})^{-1}B_{2,2}^{}\) with

$$\displaystyle{ E_{2,2} = A_{C}^{}CA_{C}^{T},\qquad A_{ 2,2} = -A_{R}^{}GA_{R}^{T},\qquad B_{ 2,2} = [\,-A_{I},\,-A_{V }\,]. }$$
(2.36)

Note that the (2,2) partial inversion can be interpreted as the replacement of all voltage sources by current sources. The system G (2,2) = (E 2,2, A 2,2, B 2,2, B 2,2 T) is symmetric and passive. Applying balanced truncation to this system and reversing the voltage replacement, we obtain the required reduced-order model, see Algorithm 2.4.
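Pointwise in s, the (2,2) partial inversion is a plain block operation on a matrix value. A generic sketch (our helper); with \(\mathbf{G}_{21} = -\mathbf{G}_{12}^{T}\), as imposed by reciprocity, the (1,1) block reduces to \(\mathbf{G}_{11} + \mathbf{G}_{12}\mathbf{G}_{22}^{-1}\mathbf{G}_{12}^{T}\) as in the display above, and applying the operation twice returns the original matrix:

```python
import numpy as np

def partial_inverse_22(G, n1):
    """(2,2) partial inverse of a square block matrix, with the blocks split
    after row/column n1; the (2,2) block must be invertible."""
    G11, G12 = G[:n1, :n1], G[:n1, n1:]
    G21, G22 = G[n1:, :n1], G[n1:, n1:]
    S = np.linalg.inv(G22)
    return np.block([[G11 - G12 @ S @ G21, -G12 @ S],
                     [S @ G21, S]])
```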

Algorithm 2.4 Passivity-preserving balanced truncation for RCIV circuits—I

Given a passive MNA system (2.9) with E, A, B as in (2.34), compute a reduced-order model (2.13).

  1.

    Compute a reduced-order model \(\hat{\mathbf{G}}^{(2,2)} = (\hat{E}_{2,2},\,\hat{A}_{2,2},\,\hat{B}_{2,2},\,\hat{C}_{2,2})\) by applying Algorithm 2.2 to the system G (2,2) = (E 2,2, A 2,2, B 2,2, B 2,2 T) as in (2.36).

  2.

    Compute the reduced-order system (2.13) with

    $$\displaystyle{ \hat{E} = \left [\begin{array}{cc} \hat{E}_{2,2} & 0 \\ 0 & 0\end{array} \right ],\qquad \hat{A} = \left [\begin{array}{cc} \;\hat{A}_{2,2} & \hat{B}_{2} \\ -\hat{ B}_{2}^{T}& 0\end{array} \right ],\qquad \hat{B} =\hat{ C}^{T} = \left [\begin{array}{cc} \hat{B}_{1} & 0 \\ 0 & - I_{n_{V }}\end{array} \right ]. }$$
    (2.37)

    where \(\hat{B}_{1} =\hat{ B}_{2,2}[I_{n_{I}},\,0]^{T}\) and \(\hat{B}_{2} =\hat{ B}_{2,2}[0,\,I_{n_{V }}]^{T}\).
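Step 2 of Algorithm 2.4 is a direct block assembly of (2.37); a sketch with a hypothetical helper in which only the shapes of the reduced matrices matter:

```python
import numpy as np

def assemble_rciv(E22, A22, B22, n_I, n_V):
    """Assemble Ehat, Ahat, Bhat of (2.37) from a reduced (2,2) partially
    inverted model (hypothetical helper; Chat = Bhat^T)."""
    B1, B2 = B22[:, :n_I], B22[:, n_I:]          # Bhat_1 and Bhat_2
    k = E22.shape[0]
    Z = np.zeros
    Ehat = np.block([[E22, Z((k, n_V))], [Z((n_V, k)), Z((n_V, n_V))]])
    Ahat = np.block([[A22, B2], [-B2.T, Z((n_V, n_V))]])
    Bhat = np.block([[B1, Z((k, n_V))], [Z((n_V, n_I)), -np.eye(n_V)]])
    return Ehat, Ahat, Bhat
```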

The following theorem establishes the properties of the reduced-order model (2.13), (2.37) and gives an error bound.

Theorem 2.3.5 ( [56])

Consider an RCIV circuit fulfilling assumptions (A1​´), (A3) and  (A4). Let Z R and Z C be the basis matrices for ker(A R T) and ker(A C T), respectively, and let Z R ′ be the basis matrix for im(A R ). Assume that A I T Z R (Z R T A C CA C T Z R )−1 Z R T A V = 0 and Z C T A V has full column rank. Then the reduced-order model  (2.13), (2.37) obtained by Algorithm  2.4 is passive and reciprocal with the external signature \(S_{\mathrm{ext}} =\mathrm{ diag}(I_{n_{I}},-I_{n_{V }})\) . Moreover, for the transfer functions G and \(\hat{\mathbf{G}}\) of the original system  (2.9), (2.34) and the reduced-order model  (2.13), (2.37), we have the error bound

$$\displaystyle{ \|\mathbf{G} -\hat{\mathbf{G}}\|_{\mathcal{H}_{\infty }} \leq 2\,(1 + c_{1}^{2} + c_{ 1}^{2}\,c_{ 2}^{2})(\sigma _{ r+1} +\ldots +\sigma _{q}), }$$

where σ j are the proper Hankel singular values of the (2,2) partially inverted system G (2,2) ,

$$\displaystyle{\begin{array}{rcl} c_{1} & =&\;\|(A_{V }^{T}Z_{C}^{}(Z_{C}^{T}A_{R}^{}GA_{R}^{T}Z_{C}^{})^{-1}Z_{C}^{T}A_{V }^{})^{-1}\|, \\ c_{2} & =&\;\|A_{V }^{T}HA_{V }^{}\|^{1/2}\|A_{I}^{T}HA_{I}^{}\|^{1/2}\end{array} }$$

with

$$\displaystyle{H = QZ_{R}^{{\prime}}\left ((Z_{ R}^{{\prime}})^{T}A_{ R}GA_{R}^{T}Z_{ R}^{{\prime}}\right )^{-1}\!(Z_{ R}^{{\prime}})^{T}Q^{T}\mathit{\mbox{ and }}Q = I\!-\!Z_{ R}^{}(Z_{R}^{T}A_{ C}^{}CA_{C}^{T}Z_{ R}^{})^{-1}Z_{ R}^{T}A_{ C}^{}CA_{C}^{T}.}$$

An alternative approach for model reduction of RCIV circuits is based on considering the frequency-inverted MLA system

$$\displaystyle{ \mathbf{G}^{\star }(s) = \mathbf{G}(s^{-1}) = B_{ \star }^{T}(sE_{ \star }^{} - A_{\star }^{})^{-1}B_{ \star }^{} }$$

with the matrices

$$\displaystyle{ E_{\star } = \left [\begin{array}{cc} B_{C}^{}\,C^{-1}B_{ C}^{T}& 0 \\ 0 & 0 \end{array} \right ],\quad A_{\star } = \left [\begin{array}{cc} - B_{R}^{}RB_{R}^{T}& - B_{I} \\ B_{I}^{T} & 0 \end{array} \right ],\quad B_{\star } = \left [\begin{array}{rc} 0& - B_{V } \\ - I & 0 \end{array} \right ]. }$$

Let G (s) be partitioned in blocks as

$$\displaystyle{ \mathbf{G}^{\star }(s) = \left [\begin{array}{cc} \mathbf{G}_{11}^{}(s) & \mathbf{G}_{12}^{}(s) \\ -\mathbf{G}_{12}^{T}(s)& \mathbf{G}_{22}^{}(s) \end{array} \right ]. }$$

Assume that

  (A5) the circuit does not contain loops of current and voltage sources.

Then G 11(s) is invertible and \(\mathbf{G}^{\star }(s)\) has a (1,1) partial inverse defined as

$$\displaystyle{ \begin{array}{rcl} (\mathbf{G}^{\star })^{(1,1)}(s)& =&\left [\begin{array}{cc} \mathbf{G}_{11}^{-1}(s) & \quad \mathbf{G}_{11}^{-1}(s)\,\mathbf{G}_{12}^{}(s) \\ \mathbf{G}_{12}^{T}(s)\,\mathbf{G}_{11}^{-1}(s)&\quad \mathbf{G}_{22}^{}(s) + \mathbf{G}_{12}^{T}(s)\,\mathbf{G}_{11}^{-1}(s)\,\mathbf{G}_{12}^{}(s) \end{array} \right ] \\ & =&B_{1,1}^{T}(s\,E_{1,1}^{} - A_{1,1}^{})^{-1}B_{1,1}^{}, \end{array} }$$

where

$$\displaystyle{ E_{1,1} = B_{C}^{}C^{-1}B_{ C}^{T},\qquad A_{ 1,1} = -B_{R}^{}RB_{R}^{T},\qquad B_{ 1,1} = [\,-B_{I},\,-B_{V }\,]. }$$
(2.38)

Reducing this symmetric system and reversing the initial transformation, we obtain the required reduced-order model. This model reduction method is presented in Algorithm 2.5.

Algorithm 2.5 Passivity-preserving balanced truncation for RCIV circuits—II

Given a passive MLA system (2.9) with E, A, B as in (2.35), compute a reduced-order model (2.13).

  1.

    Compute a reduced-order model \(\hat{\mathbf{G}}_{1} = (\hat{E}_{1},\,\hat{A}_{1},\,\hat{B}_{1},\,\hat{C}_{1})\) using Algorithm 2.3, where E , A and B are replaced, respectively, by E 1,1, A 1,1 and B 1,1 as in (2.38).

  2.

    Compute the reduced-order system (2.13) with

    $$\displaystyle{ \begin{array}{c} \hat{E} = \left [\begin{array}{cc} \hat{E}_{1}& 0\\ 0 & 0\end{array} \right ]\!,\quad \hat{A} = \left [\begin{array}{cc} \hat{A}_{1} & \hat{B}_{11} \\ \hat{C}_{11}& 0\end{array} \right ]\!,\quad \hat{B} = \left [\begin{array}{cc} 0 &\hat{B}_{12} \\ I_{n_{I}}& 0\end{array} \right ]\!,\quad \hat{C} = \left [\begin{array}{cc} 0 & - I_{n_{I}} \\ \hat{C}_{21}& 0\end{array} \right ]\!,\end{array} }$$
    (2.39)

    where \(\hat{B}_{11} =\hat{ B}_{1}[I_{n_{I}},\,0]^{T}\), \(\hat{B}_{12} =\hat{ B}_{1}[0,\,I_{n_{V }}]^{T}\), \(\hat{C}_{11} = [I_{n_{I}},\,0]\hat{C}_{1}\) and \(\hat{C}_{21} = [0,\,I_{n_{V }}]\hat{C}_{1}\).

The following theorem establishes the properties of the reduced-order model (2.13), (2.39) and gives an error bound.

Theorem 2.3.6 ( [56])

Consider an RCIV circuit fulfilling assumptions (A1​´), (A2) and  (A5). Let Y R and Y C be the basis matrices for ker(B R T) and ker(B C T), respectively, and let Y R ′ be the basis matrix for im(B R ). Assume that B V T Y R (Y R T B C C −1 B C T Y R )−1 Y R T B I = 0 and Y C T B I has full column rank. Then the reduced-order model  (2.13), (2.39) obtained by Algorithm  2.5 is passive and reciprocal with the external signature \(S_{\mathrm{ext}} =\mathrm{ diag}(I_{n_{I}},-I_{n_{V }})\) . Moreover, for the transfer functions G and \(\hat{\mathbf{G}}\) of the original system  (2.9), (2.35) and the reduced-order model  (2.13), (2.39), we have the error bound

$$\displaystyle{ \|\mathbf{G} -\hat{\mathbf{G}}\|_{\mathcal{H}_{\infty }} \leq 2(1 +\tilde{ c}_{1}^{2} +\tilde{ c}_{ 1}^{2}\tilde{c}_{ 2}^{2})(\sigma _{ r+1} +\ldots +\sigma _{q}), }$$

where σ j are the proper Hankel singular values of the system (G )(1,1) ,

$$\displaystyle{\begin{array}{rcl} \tilde{c}_{1} & =&\;\|(B_{I}^{T}Y _{C}(Y _{C}^{T}B_{R}RB_{R}^{T}Y _{C})^{-1}Y _{C}^{T}B_{I})^{-1}\|, \\ \tilde{c}_{2} & =&\;\|B_{V }^{T}\tilde{H}B_{V }\|^{1/2}\|B_{I}^{T}\tilde{H}B_{I}\|^{1/2}\end{array} }$$

with

$$\displaystyle{\tilde{H}\! =\tilde{ Q}Y _{R}^{{\prime}}\left ((Y _{ R}^{{\prime}})^{T}B_{ R}^{}RB_{R}^{T}Y _{ R}^{{\prime}}\right )^{-1}(Y _{ R}^{{\prime}})^{T}\tilde{Q}^{T},\tilde{Q} = I\!-\!Y _{ R}^{}(Y _{R}^{T}B_{ C}^{}C^{-1}B_{ C}^{T}Y _{ R}^{})^{-1}Y _{ R}^{T}B_{ C}^{}C^{-1}B_{ C}^{T}.}$$

Remark 2.3.7

Model reduction methods for RC circuits can also be extended to RL circuits which contain resistors, inductors, voltage and/or current sources. Observing that the frequency-inverted MNA equations for RLI circuits as well as the MLA equations for RLV circuits yield symmetric systems, we can design balanced truncation model reduction methods for RL circuits similar to Algorithms 2.2–2.5.

2.3.3 Numerical Aspects

The most expensive step in the presented model reduction algorithms is solving matrix equations. The numerical solution of the projected Lyapunov and Riccati equations will be discussed in Sects. 2.5.1 and 2.5.2, respectively. Here, we consider the computation of the matrix M 0 and the projectors P r and P l required in Algorithm 2.1 as well as the basis matrices required in Algorithms 2.2–2.5.

Fortunately, using the MNA structure of the system matrices in (2.10), the matrix M 0 and the projectors P r and P l can be computed in explicit form

$$\displaystyle\begin{array}{rcl} M_{0}& =& \left [\begin{array}{*{10}c} I - 2A_{I}^{T}ZH_{0}^{-1}Z^{T}\!A_{I}^{}& \qquad \quad 2A_{I}^{T}ZH_{0}^{-1}Z^{T}\!A_{V }^{} \\ -2A_{V }^{T}ZH_{0}^{-1}Z^{T}\!A_{I}^{} &\quad - I\! +\! 2A_{V }^{T}ZH_{0}^{-1}Z^{T}\!A_{V }^{} \end{array} \right ],{}\end{array}$$
(2.40)
(2.41)

where S int is given in (2.18), and

$$\displaystyle{ \begin{array}{l} H_{0} = Z^{T}(A_{R}^{}GA_{R}^{T} + A_{I}^{}A_{I}^{T} + A_{V }^{}A_{V }^{T})Z, \\ H_{1} = Z_{CRIV }^{T}A_{L}^{}L^{-1}A_{L}^{T}Z_{CRIV }^{}, \\ H_{2} = A_{R}^{}GA_{R}^{T} + A_{I}^{}A_{I}^{T} + A_{V }^{}A_{V }^{T} + A_{L}^{}L^{-1}A_{L}^{T}Z_{CRIV }^{}H_{1}^{-1}Z_{CRIV }^{T}A_{L}^{}L^{-1}A_{L}^{T},\quad \\ H_{3} = Z_{C}^{T}H_{2}Z_{C}^{}, \\ H_{4} = Z_{C}^{}H_{3}^{-1}Z_{C}^{T}, \\ H_{5} = Z_{CRIV }^{}H_{1}^{-1}Z_{CRIV }^{T}A_{L}^{}L^{-1}A_{L}^{T} - I, \\ H_{6} = I - L^{-1}A_{L}^{T}Z_{CRIV }^{}H_{1}^{-1}Z_{CRIV }^{T}A_{L}^{}, \\ Z = Z_{C}^{}Z_{RIV -C}^{{\prime}}, \\ Z_{C} \text{is a basis matrix for}\;\mathrm{ker}(A_{C}^{T}), \\ Z_{RIV -C}^{{\prime}}\;\text{is a basis matrix for}\;\mathrm{im}(Z_{C}^{T}[\,A_{R},\,A_{I},\,A_{V }\,]), \\ Z_{CRIV } \text{is a basis matrix for}\;\mathrm{ker}([\,A_{C},\,A_{R},\,A_{I},\,A_{V }\,]^{T}), \end{array} }$$

see [54, 72] for details. The basis matrices Z C and Z CRIV can be computed by analyzing the corresponding subgraphs of the given network graph as described in [23]. For example, the matrix Z C can be constructed in the form

$$\displaystyle{ Z_{C} = \Pi _{C}\left [\begin{array}{ccc} \mathbf{1}_{k_{1}} & & \\ & \ddots & \\ & & \mathbf{1}_{k_{s}} \\ & 0& \end{array} \right ] }$$

by searching for the connected components of the C-subgraph consisting of the capacitive branches only. Here, \(\mathbf{1}_{k_{i}}\! =\! [1,\ldots,1]^{T}\! \in \! \mathbb{R}^{k_{i}}\), i = 1, …, s, and \(\Pi _{C}\) is a permutation matrix. For this purpose, we can use graph search algorithms like breadth-first search [40]. As a consequence, the nonzero columns of

$$\displaystyle{ A_{RIV -C} = Z_{C}^{T}[A_{ R},\,A_{I},\,A_{V }\,] }$$

form again an incidence matrix. In order to compute the basis matrix \(Z_{RIV - C}^{{\prime}}\), we first determine the basis matrix

$$\displaystyle{ Z_{RIV -C} = \Pi _{RIV -C}\left [\begin{array}{*{10}c} \mathbf{1}_{l_{1}} & &\\ & \ddots& \\ & & \mathbf{1}_{l_{t}} \\ & 0& \end{array} \right ] }$$

for \(\mathrm{ker}(A_{RIV - C}^{T})\) from the associated graph. Then the complementary matrix \(Z_{RIV - C}^{{\prime}}\) can be determined as

$$\displaystyle{ Z_{RIV - C}^\prime = \Pi _{RIV -C}S_{RIV -C}, }$$

where \(S_{RIV - C}\) is a selector matrix constructed from the identity matrix by removing the 1st, (l 1 + 1)-st, …, (l 1 + ⋯ + l t−1 + 1)-st columns. One can see that the resulting basis matrices and also the matrices H 0, H 1, H 2, H 3, H 5 and H 6 are sparse. Of course, the projector P r will never be constructed explicitly. Instead, only projector-vector products are formed, as required in the numerical solution of the Riccati equation.
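As an illustration of the graph-based construction of Z C , the following sketch labels the connected components of the C-subgraph by breadth-first search, treating branches with a single nonzero incidence entry as branches to the ground node, and emits one indicator column per component not connected to ground (our implementation, not the one of [23, 40]):

```python
import numpy as np
from collections import deque

def basis_ker_ACT(A_C):
    """Basis Z_C of ker(A_C^T) from the connected components of the C-subgraph.

    A_C is the reduced incidence matrix of the capacitive branches
    (rows = non-ground nodes; the ground node is implicit)."""
    n = A_C.shape[0]
    ground = n                                 # extra vertex for the ground node
    adj = [[] for _ in range(n + 1)]
    for b in range(A_C.shape[1]):
        nz = list(np.nonzero(A_C[:, b])[0])
        u = nz[0]
        v = nz[1] if len(nz) > 1 else ground   # single entry: branch to ground
        adj[u].append(v)
        adj[v].append(u)
    label = [-1] * (n + 1)
    comp = 0
    for s in range(n + 1):                     # BFS component labeling
        if label[s] == -1:
            label[s] = comp
            queue = deque([s])
            while queue:
                w = queue.popleft()
                for x in adj[w]:
                    if label[x] == -1:
                        label[x] = comp
                        queue.append(x)
            comp += 1
    # one indicator column (a block 1_{k_i}, up to the permutation Pi_C)
    # per component not containing the ground node
    cols = [np.array([1.0 if label[i] == c else 0.0 for i in range(n)])
            for c in range(comp) if c != label[ground]]
    return np.column_stack(cols) if cols else np.zeros((n, 0))
```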

Algorithms 2.3 and 2.5 require the knowledge of the reduced loop matrix B that can be obtained by the search for a loop basis in the circuit graph [2, 22, 39, 40]. Since the efficiency in the numerical solution of the projected Lyapunov equations can be improved if the matrix coefficients are sparse, it is preferable to choose a basis of loops with length as small as possible. This kind of problem was treated in [49].

The basis matrices Z R , Z R ′ and Z C required in Algorithms 2.2 and 2.4 can be computed using graph search algorithms as described above. The basis matrices Y R and Y C required in Algorithms 2.3 and 2.5 can be determined by searching for dependent loops in the graphs \(\mathfrak{G}_{R}\) and \(\mathfrak{G}_{C}\) consisting of the resistive and capacitive branches, respectively. Furthermore, the basis matrix Y R ′ can be obtained by removing the linearly dependent columns of B R . Such columns can be determined by searching for cutsets in the graph \(\mathfrak{G}_{R}\), e.g., [2]. For the analysis of loop dependency and the search for cutsets in a graph, there exists a variety of efficient algorithms, see [40] and the references therein.

2.4 Model Reduction of Nonlinear Circuits

In this section, we present a model reduction approach for nonlinear circuits containing large linear subnetworks. This approach is based on decoupling the nonlinear circuit equations (2.2) into linear and nonlinear subsystems in an appropriate way. The linear part is then approximated by a reduced-order model using one of the model reduction algorithms from Sect. 2.3 depending on the topological structure of the linear subcircuit. The nonlinear part either remains unchanged or is approximated by a trajectory piece-wise linear (TPWL) approach [57] based on linearization, or proper orthogonal decomposition (POD), e.g., [65], which relies on snapshot calculations. If the circuit contains semiconductor devices modeled by instationary nonlinear partial differential equations [67, 73], these equations can first be discretized in space and then reduced using the POD method as described in [35, 36], see also Chap. 1 in this book. Finally, combining the reduced-order linear and nonlinear models, we obtain a reduced-order nonlinear model that approximates the MNA system (2.2). The concept of this model reduction approach is illustrated in Fig. 2.1. We now describe this approach in more detail.

Fig. 2.1
figure 1

A model reduction approach for nonlinear circuits

First, we consider the decoupling procedure developed in [68] that allows us to extract a linear subcircuit from a nonlinear circuit. This procedure is based on the formal replacement of nonlinear inductors by controlled current sources, nonlinear capacitors by controlled voltage sources and nonlinear resistors by equivalent circuits consisting of two serial linear resistors and one controlled current source connected parallel to one of the introduced linear resistors. Such replacements are demonstrated in Fig. 2.2, where we present two circuits before and after replacements. It should be noted that the suggested replacements introduce additional nodes and state variables, but neither additional CV-loops nor LI-cutsets occur in the decoupled linear subcircuit meaning that its index does not exceed the index of the original system (2.2). The following theorem establishes the decoupling on the equation level.

Fig. 2.2
figure 2

Replacements of nonlinear circuit elements

Theorem 2.4.1

​[68] Let and satisfy the relation , and let be symmetric, positive definite. Assume that and satisfy

(2.42)

Then system  (2.2) together with the relations

(2.43)
(2.44)

for the additional unknowns and has the same components η, η z , , , and in the state vector as the system

(2.45)

coupled with the linear DAE system

$$\displaystyle{ \begin{array}{rcl} E \frac{\mathrm{d}} {\mathrm{d}t}x_{\ell} & = & Ax_{\ell} + Bu_{\ell}, \\ y_{\ell} & = & B^{T}x_{\ell}, \end{array} }$$
(2.46)

where , ,

$$\displaystyle{ E = \left [\begin{array}{ccc} A_{C}^{}CA_{C}^{T}& 0 & 0 \\ 0 & L& 0\\ 0 & 0 & 0\\ \end{array} \right ], A = \left [\begin{array}{ccc} - A_{R}^{}GA_{R}^{T}& - A_{L}& - A_{V } \\ A_{L}^{T} & 0 & 0 \\ A_{V }^{T} & 0 & 0\\ \end{array} \right ], B = \left [\begin{array}{cc} - A_{I}& 0 \\ 0 & 0\\ 0 & -I \\ \end{array} \right ], }$$
(2.47)

and the incidence and element matrices are given by

(2.48)

Note that the system matrices in the decoupled linear system (2.46)–(2.48) are in the MNA form. This system has the state space dimension

and the input space dimension . It should also be noted that the state equivalence in Theorem 2.4.1 is independent of the choice of the matrices G 1 and G 2 satisfying the assumptions in the theorem. The substitution of nonlinear resistors with equivalent circuits as described above implies that G 1 and G 2 are both diagonal and their diagonal elements are conductances of the first and the second linear resistors, respectively, in the replacement circuits.

The following theorem establishes the well-posedness of the decoupled system (2.46)–(2.48).

Theorem 2.4.2 ( [68])

Let a nonlinear circuit satisfy assumptions  (A1)–(A3). Assume that it contains neither loops consisting of nonlinear capacitors and voltage sources (\(\widetilde{\mathrm{C}}\) V-loops ) nor cutsets of nonlinear inductors and/or current sources (\(\widetilde{\mathrm{L}}\) I-cutsets ). Then the decoupled linear DAE system  (2.46)–(2.48) modeling the linear subcircuit is well-posed in the sense that

  1.

    the matrices C, L and G are symmetric and positive definite,

  2.

    the matrix A V has full column rank,

  3.

    the matrix [ A C ,  A L ,  A R ,  A V ] has full row rank.

Note that the presence of \(\widetilde{\mathrm{C}}\) V-loops and \(\widetilde{\mathrm{L}}\) I-cutsets in the original circuit would lead after the replacement of the nonlinear capacitors and nonlinear inductors by voltage sources and current sources, respectively, to V-loops and I-cutsets in the decoupled circuit that would violate its well-posedness.

The next theorem shows that slightly stronger conditions for the original nonlinear circuit guarantee that the decoupled linear DAE system (2.46)–(2.48) is well-posed and, in addition, has index at most one.

Theorem 2.4.3 ( [68])

Let a nonlinear circuit satisfy assumptions  (A1)–(A3). If this circuit contains neither CV-loops except for \(\bar{\mathrm{C}}\)-loops with linear capacitors nor LI-cutsets, then the linear system  (2.46)–(2.48) modeling the extracted linear subcircuit is well-posed and of index at most one.

The index one condition for system (2.46)–(2.48) implies that its transfer function is proper. The approximation of such systems is much easier than that of systems with an improper transfer function [71].

Depending on the topology of the extracted linear subcircuit, we can now apply one of the model reduction algorithms presented in Sect. 2.3 to the linear system (2.46)–(2.48). As a result, we obtain a reduced-order model (2.13) which can be combined with the nonlinear subsystem in order to get a reduced-order nonlinear model.

According to the block structure of the input and output vectors of the extracted linear DAE system (2.46)–(2.48), the reduced-order model (2.13) can be written in the form

(2.49)

where \(\hat{y}_{\ell j} =\hat{ C}_{j}\hat{x}_{\ell}\), j = 1, , 5, approximate the corresponding components of the output of (2.46). Taking into account that and , Eqs. (2.43) and (2.45) are approximated by

(2.50)

respectively, where and are approximations to and , respectively. Furthermore, for j z and η z defined in (2.42) and (2.44), respectively, we have

Since , this equation is approximated by

(2.51)

where approximates . Combining (2.45), (2.49), (2.50) and (2.51), we obtain the reduced-order nonlinear model

(2.52)

where ,

and

(2.53)
(2.54)

This model can now be used for further investigations in steady-state analysis, transient analysis or sensitivity analysis of electrical circuits. Note that the error bounds for the reduced-order linear subsystem (2.49) presented in Sect. 2.3 can be used to estimate the error in the output of the reduced-order nonlinear system (2.52)–(2.54), see [34] for such estimates for a special class of nonlinear circuits.

2.5 Solving Matrix Equations

In this section, we consider numerical algorithms for solving the projected Lyapunov and Riccati equations developed in [55, 70, 71]. In practice, the numerical rank of the solutions of these equations is often much smaller than the dimension of the problem. Then such solutions can be well approximated by low-rank matrices. Moreover, these low-rank approximations can be determined directly in factored form. Replacing the Cholesky factors of the Gramians in Algorithms 2.1–2.5 by their low-rank factors significantly reduces the computational complexity and storage requirements of the balancing-related model reduction methods and makes them well suited for large-scale circuit equations.
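The low-rank idea can be illustrated directly: a symmetric positive semidefinite solution of large dimension but small numerical rank is stored through a thin factor Z with \(X \approx ZZ^{T}\) (a toy example with a random rank-5 matrix standing in for a Gramian):

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((200, 5))
X = F @ F.T                          # 200 x 200 "Gramian" of rank 5

w, V = np.linalg.eigh(X)             # eigenvalues in ascending order
keep = w > 1e-10 * w.max()           # numerical rank
Z = V[:, keep] * np.sqrt(w[keep])    # low-rank factor: X ~ Z Z^T

# Z stores 200*5 numbers instead of the 200*200 entries of X
```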

2.5.1 ADI Method for Projected Lyapunov Equations

We focus first on solving the projected Lyapunov equation

$$\displaystyle{ E\,X\,A^{T} + A\,X\,E^{T}\! = -P_{ l}^{}BB^{T}\!P_{ l}^{T},\qquad X = P_{ r}^{}X\,P_{r}^{T} }$$
(2.55)

using the alternating direction implicit (ADI) method. Such an equation has to be solved in Algorithms 2.2–2.5. The ADI method was first proposed for standard Lyapunov equations [14, 44, 50, 76] and then extended in [70] to projected Lyapunov equations. The generalized ADI iteration for the projected Lyapunov equation (2.55) is given by

$$\displaystyle{ \begin{array}{rcl} (E +\tau _{k}A)X_{k-1/2}A^{T} + AX_{k-1}(E -\tau _{k}A)^{T} & = & - P_{l}^{}BB^{T}P_{l}^{T}, \\ (E + \overline{\tau }_{k}A)X_{k}^{T}A^{T} + AX_{k-1/2}^{T}(E -\overline{\tau }_{k}A)^{T} & = & - P_{l}^{}BB^{T}P_{l}^{T}\end{array} }$$
(2.56)

with an initial matrix \(X_0 = 0\) and shift parameters \(\tau _{1},\ldots,\tau _{k} \in \mathbb{C}_{-}\). Here, \(\overline{\tau }_{k}\) denotes the complex conjugate of \(\tau_k\). If the pencil \(\lambda E - A\) is c-stable, then \(X_k\) converges towards the solution of the projected Lyapunov equation (2.55). The rate of convergence depends strongly on the choice of the shift parameters. The optimal shift parameters providing superlinear convergence satisfy the generalized ADI minimax problem

$$\displaystyle{ \{\tau _{1},\ldots,\tau _{q}\} =\mathop{\arg \min }\limits_{\{\tau _{1},\ldots,\tau _{q}\} \in \mathbb{C}_{-}}\;\max _{t\in \text{Sp}_{f}(E,A)}\frac{\vert (1 -\overline{\tau }_{1}t) \cdot \ldots \cdot (1 -\overline{\tau }_{q}\,t)\vert } {\vert (1 +\tau _{1}t) \cdot \ldots \cdot (1 +\tau _{q}\,t)\vert }, }$$

where \(\mathrm{Sp}_f(E,A)\) denotes the finite spectrum of the pencil \(\lambda E - A\). If the matrices E and A satisfy the symmetry condition (2.24), then \(\lambda E - A\) has real non-positive eigenvalues. In this case, the optimal real shift parameters can be determined by the selection procedure proposed in [78] once the spectral bounds

$$\displaystyle{ a =\min \{\,\lambda _{k}\,:\,\lambda _{k} \in \mathrm{ Sp}_{-}(E,A)\,\},\qquad b =\max \{\,\lambda _{k}\,:\,\lambda _{k} \in \mathrm{ Sp}_{-}(E,A)\,\} }$$

are available. Here, \(\mathrm{Sp}_-(E,A)\) denotes the set of finite eigenvalues of \(\lambda E - A\) with negative real part. In the general case, suboptimal ADI parameters can be obtained from a set of the largest and smallest in modulus approximate finite eigenvalues of \(\lambda E - A\) computed by an Arnoldi procedure [50, 70]. Other parameter selection techniques developed for standard Lyapunov equations [15, 63, 77] can also be used for the projected Lyapunov equation (2.55).
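The spectral data feeding the shift selection can be sketched as follows on a small hypothetical test pencil. This is only an illustration with a dense eigensolver; a large-scale implementation would use an Arnoldi procedure instead, as noted above.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 30
E = np.eye(n)
A = -np.diag(rng.uniform(0.1, 100.0, size=n))   # hypothetical pencil, real spectrum

lam = eig(A, E, right=False)          # generalized eigenvalues of A v = lambda E v
lam = lam[np.isfinite(lam)].real      # keep the finite spectrum (real here)
lam_neg = lam[lam < 0]                # Sp_-(E, A)

# spectral bounds a, b used by the selection procedure of [78]
a, b = lam_neg.min(), lam_neg.max()

# a few largest/smallest in modulus eigenvalues as shift candidates
order = np.argsort(np.abs(lam_neg))
candidates = np.concatenate([lam_neg[order[:3]], lam_neg[order[-3:]]])
assert a <= b < 0 and np.all(candidates < 0)
```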

A low-rank approximation to the solution of the projected Lyapunov equation (2.55) can be computed in factored form \(X \approx Z_kZ_k^T\) using a low-rank version of the ADI method (LR-ADI) as presented in Algorithm 2.6.

Algorithm 2.6 The LR-ADI method for the projected Lyapunov equation

Given E, \(A \in \mathbb{R}^{n,n}\), \(B \in \mathbb{R}^{n,m}\), a projector \(P_l\) and shift parameters \(\tau _{1},\ldots,\tau _{q} \in \mathbb{C}_{-}\), compute a low-rank approximation \(X \approx Z_kZ_k^T\) to the solution of the projected Lyapunov equation (2.55).

  1.

    \(Z^{(1)} = \sqrt{-2\mathrm{Re }(\tau _{1 } )}\,(E +\tau _{1}A)^{-1}P_{ l}B\), \(Z_1 = Z^{(1)}\);

  2.

    FOR k = 2, 3, …

    $$\displaystyle{ \begin{array}{rcl} Z^{(k)} & = & \sqrt{ \frac{\mathrm{Re }(\tau _{k } )} {\mathrm{Re}(\tau _{k-1})}}\left (I - (\overline{\tau }_{k-1} +\tau _{k})(E +\tau _{k}A)^{-1}A\right )Z^{(k-1)}, \\ Z_{k} & = & [\,Z_{k-1}, Z^{(k)}\,];\end{array} }$$

    END FOR

In order to guarantee that the factors \(Z_k\) are real in the case of complex shift parameters, we take these parameters in complex conjugate pairs \(\{\tau _{k},\tau _{k+1} = \overline{\tau }_{k}\}\). Then a novel approach for the efficient handling of complex shift parameters in the LR-ADI method developed in [16] can also be extended to the projected Lyapunov equation (2.55). At each ADI iteration we have \(Z_{k} = [\,Z^{(1)},\ldots,Z^{(k)}\,] \in \mathbb{R}^{n,mk}\). To keep the low-rank structure in \(Z_k\) for large \(mk\), we can compress the columns of \(Z_k\) using the rank-revealing QR factorization [18] as described in [9].

Finally, note that the matrices \((E + \tau_k A)^{-1}\) in Algorithm 2.6 do not have to be computed explicitly. Instead, we solve linear systems of the form \((E + \tau_k A)x = P_l b\) either by computing (sparse) LU factorizations and forward/backward substitutions or by using iterative Krylov subspace methods [62].
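The recursion of Algorithm 2.6 can be sketched as follows, under the simplifying assumption of the standard case E = I with identity projectors and a small hypothetical diagonal test matrix. For a real diagonal spectrum, choosing the shifts as the reciprocal eigenvalues makes each factor of the ADI minimax product vanish, so the iteration reproduces the exact solution here.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lr_adi(E, A, B, shifts):
    """LR-ADI sketch of Algorithm 2.6 with P_l = I (assumption)."""
    tau = shifts[0]
    Zk = np.sqrt(-2.0 * tau.real) * np.linalg.solve(E + tau * A, B)
    Z = Zk
    for k in range(1, len(shifts)):
        tk, tkm1 = shifts[k], shifts[k - 1]
        # Z^(k) = sqrt(Re tk / Re tk-1) (I - (conj(tk-1)+tk)(E+tk A)^{-1} A) Z^(k-1)
        W = np.linalg.solve(E + tk * A, A @ Zk)
        Zk = np.sqrt(tk.real / tkm1.real) * (Zk - (np.conj(tkm1) + tk) * W)
        Z = np.hstack([Z, Zk])          # accumulate Z_k = [Z^(1), ..., Z^(k)]
    return Z

n = 5
lam = np.array([-1.0, -2.0, -3.5, -5.0, -10.0])
E, A = np.eye(n), np.diag(lam)
B = np.arange(1.0, n + 1).reshape(n, 1)
shifts = 1.0 / lam                      # exact shifts for this diagonal test matrix
Z = lr_adi(E, A, B, shifts)
X_ref = solve_continuous_lyapunov(A, -B @ B.T)   # solves A X + X A^T = -B B^T
print(np.allclose(Z @ Z.T, X_ref))               # True
```

In realistic use the shifts would come from the selection procedures discussed above, and the linear solves would use sparse factorizations as noted in the text.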

2.5.2 Newton’s Method for Projected Riccati Equations

We consider now the numerical solution of the projected Riccati equation

$$\displaystyle{ EXF^{T}\! + FXE^{T}\! + EXB_{ c}^{T}\!B_{ c}^{}XE^{T} + P_{ l}^{}B_{o}^{}B_{o}^{T}\!P_{ l}^{T}\! = 0,\quad X = P_{ r}^{}XP_{r}^{T} }$$
(2.57)

with F, \(B_c\) and \(B_o\) as in (2.22). Such an equation has to be solved in Algorithm 2.1. The minimal solution \(X_{\min}\) of (2.57) is at least semi-stabilizing in the sense that all finite eigenvalues of the pencil \(\lambda E - F - EX_{\min}B_c^TB_c\) are in the closed left half-plane.

Consider the spaces

$$\displaystyle{ \begin{array}{l} \mathbb{S}_{P_{r}} =\{\, X \in \mathbb{R}^{n,n}\,:\, X = X^{T},\,X = P_{r}^{}XP_{r}^{T}\,\}, \\ \mathbb{S}_{P_{l}} =\{\, X \in \mathbb{R}^{n,n}\,:\, X = X^{T},\,X = P_{l}^{}XP_{l}^{T}\,\}. \end{array} }$$

Since \(X = P_rXP_r^T\), \(EP_r = P_lE\) and \(FP_r = P_lF\), the Riccati operator given by

$$\displaystyle{ \mathcal{R}(X) = EXF^{T}\! + FXE^{T}\! + EXB_{ c}^{T}\!B_{ c}^{}XE^{T} + P_{ l}^{}B_{o}^{}B_{o}^{T}\!P_{ l}^{T} }$$

maps \(\mathbb{S}_{P_{r}}\) into \(\mathbb{S}_{P_{l}}\). Then the Fréchet derivative of \(\mathcal{R}\) at \(X \in \mathbb{S}_{P_{r}}\) is a linear operator \(\mathcal{R}_{X}^{{\prime}}\!: \mathbb{S}_{P_{r}} \rightarrow \mathbb{S}_{P_{l}}\) defined as

$$\displaystyle{ \mathcal{R}_{X}^{{\prime}}(N) =\lim _{\delta \rightarrow 0}\,\frac{1} {\delta } {\bigl (\mathcal{R}(X +\delta N) -\mathcal{R}(X)\bigr )} }$$

for \(N \in \mathbb{S}_{P_{r}}\). Taking into account that \(N = P_rN = NP_r^T\), we have

$$\displaystyle{ \mathcal{R}_{X}^{{\prime}}(N) = (F + EXB_{ c}^{T}\!B_{ c}^{})NE^{T} + EN(F + EXB_{ c}^{T}\!B_{ c}^{})^{T}. }$$

Then Newton’s method for the projected Riccati equation (2.57) can be written as

$$\displaystyle{ \begin{array}{rcl} N_{j}& =& - (\mathcal{R}_{X_{j}}^{{\prime}})^{-1}(\mathcal{R}(X_{j})), \\ X_{j+1} & =&X_{j} + N_{j}. \end{array} }$$

The standard formulation of this method is given in Algorithm 2.7.

Algorithm 2.7 Newton’s method for the projected Riccati equation

Given E, \(F \in \mathbb{R}^{n,n}\), \(B_{c} \in \mathbb{R}^{m,n}\), \(B_{o} \in \mathbb{R}^{n,m}\), projectors \(P_r\), \(P_l\) and a stabilizing initial guess \(X_0\), compute an approximate solution of the projected Riccati equation (2.57).

FOR j = 0, 1, 2, …

  1.

    Compute \(F_j = F + EX_jB_c^TB_c\).

  2.

    Solve the projected Lyapunov equation

    $$\displaystyle{ F_{j}^{}N_{j}^{}E^{T} + EN_{ j}^{}F_{j}^{T} = -P_{ l}^{}\mathcal{R}(X_{j})P_{l}^{T},\quad N_{ j}^{} = P_{r}^{}N_{j}^{}P_{r}^{T}. }$$
  3.

    Compute \(X_{j+1} = X_j + N_j\).

END FOR

As in the standard case [41], we can combine the second and third steps in Algorithm 2.7 and compute the new iterate X j+1 directly from the projected Lyapunov equation as presented in Algorithm 2.8.

Algorithm 2.8 The Newton-Kleinman method for the projected Riccati equation

Given E, \(F \in \mathbb{R}^{n,n}\), \(B_{c} \in \mathbb{R}^{m,n}\), \(B_{o} \in \mathbb{R}^{n,m}\), projectors \(P_r\), \(P_l\) and a stabilizing initial guess \(X_0\), compute an approximate solution of the projected Riccati equation (2.57).

FOR j = 1, 2, …

  1.

    Compute \(K_j = EX_{j-1}B_c^T\) and \(F_j = F + K_jB_c\).

  2.

    Solve the projected Lyapunov equation

    $$\displaystyle{ EX_{j}^{}\,F_{j}^{T} + F_{ j}^{}X_{j}^{}\,E^{T} = -P_{ l}^{}(B_{o}^{}B_{o}^{T} - K_{ j}^{}K_{j}^{T})P_{ l}^{T},\qquad X_{ j}^{} = P_{r}^{}X_{j}^{}P_{r}^{T}. }$$

END FOR

Although Algorithms 2.7 and 2.8 are equivalent, they behave differently in finite precision arithmetic, and there are significant differences in their implementation, especially for large-scale problems.

Similarly to the standard state space case [8, 74], one can show that if \(\lambda E - F\) is c-stable, then for \(X_0 = 0\), all pencils \(\lambda E - F_j\) are also c-stable and \(\lim \limits _{j\rightarrow \infty }X_{j} = X_{\min }\), see [10]. The convergence is quadratic if the pencil \(\lambda E - F - EX_{\min}B_c^TB_c\) is c-stable. Some difficulties may occur if the pencil \(\lambda E - F\) has eigenvalues on the imaginary axis. For circuit equations, these eigenvalues are uncontrollable and unobservable [54]. In that case, similarly to [12], one could choose a special stabilizing initial guess \(X_0\) that ensures the convergence of the Newton-Kleinman iteration. However, the computation of such a guess for large-scale problems remains an open problem.
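For illustration, a minimal sketch of the Newton-Kleinman iteration of Algorithm 2.8, under the simplifying assumption of the standard state space case E = I with identity projectors and small hypothetical matrices. SciPy's solve_continuous_lyapunov(A, Q) solves A X + X A^H = Q, which matches the Lyapunov step of the algorithm.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman(F, Bc, Bo, maxit=20, tol=1e-12):
    """Sketch of Algorithm 2.8 with E = I, P_l = P_r = I (assumption):
    solve X F^T + F X + X Bc^T Bc X + Bo Bo^T = 0."""
    n = F.shape[0]
    X = np.zeros((n, n))                 # stabilizing initial guess X_0 = 0
    for _ in range(maxit):
        K = X @ Bc.T                     # K_j = E X_{j-1} Bc^T with E = I
        Fj = F + K @ Bc
        X_new = solve_continuous_lyapunov(Fj, -(Bo @ Bo.T - K @ K.T))
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X_new), 1.0):
            return X_new
        X = X_new
    return X

F = np.array([[-2.0, 0.5], [0.0, -3.0]])   # hypothetical c-stable matrix
Bc = np.array([[0.2, 0.1]])                # m x n
Bo = np.array([[1.0], [0.5]])              # n x m
X = newton_kleinman(F, Bc, Bo)
residual = X @ F.T + F @ X + X @ Bc.T @ Bc @ X + Bo @ Bo.T
print(np.linalg.norm(residual) < 1e-10)    # True
```

In the large-scale projected setting, the dense Lyapunov solver would be replaced by the LR-ADI method of Sect. 2.5.1, as described in the text.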

A low-rank approximation to the minimal solution of the projected Riccati equation (2.57) can be computed in factored form \(X_{\min } \approx \tilde{ R}\tilde{R}^{T}\) with \(\tilde{R} \in \mathbb{R}^{n,k}\), \(k \ll n\), using the same approach as in [11]. Starting with \(K_1 = EX_0B_c^T\) and \(F_1 = F + K_1B_c\), we solve in each Newton-Kleinman iteration two projected Lyapunov equations

$$\displaystyle\begin{array}{rcl} EX_{1,j}\,F_{j}^{T} + F_{ j}^{}X_{1,j}\,E^{T}& =& -P_{ l}^{}B_{o}^{}B_{o}^{T}P_{ l}^{T},\qquad X_{ 1,j} = P_{r}^{}X_{1,j}\,P_{r}^{T},{}\end{array}$$
(2.58)
$$\displaystyle\begin{array}{rcl} EX_{2,j}\,F_{j}^{T} + F_{ j}^{}X_{2,j}\,E^{T}& =& -P_{ l}^{}K_{j}^{}K_{j}^{T}P_{ l}^{T},\qquad X_{ 2,j} = P_{r}^{}X_{2,j}\,P_{r}^{T},{}\end{array}$$
(2.59)

for the low-rank approximations \(X_{1,j} \approx R_{1,j}R_{1,j}^T\) and \(X_{2,j} \approx R_{2,j}R_{2,j}^T\), respectively, and then compute \(K_{j+1} = E(R_{1,j}R_{1,j}^T - R_{2,j}R_{2,j}^T)B_c^T\) and \(F_{j+1} = F + K_{j+1}B_c\). If convergence is observed after \(j_{\max}\) iterations, then an approximate solution \(X_{\min } \approx \tilde{ R}\tilde{R}^{T}\) of the projected Riccati equation (2.57) can be computed in factored form by solving the projected Lyapunov equation

$$\displaystyle{ EXF^{T} + FXE^{T} = -P_{ l}^{}QQ^{T}P_{ l}^{T},\qquad X = P_{ r}^{}XP_{r}^{T} }$$
(2.60)

with \(Q = [\,B_{o}, E(X_{1,j_{\max }} - X_{2,j_{\max }})B_{c}^{T}]\), provided \(\lambda E - F\) is c-stable. For computing low-rank factors of the solutions of the projected Lyapunov equations (2.58)–(2.60), we can use the generalized LR-ADI method presented in Sect. 2.5.1. Note that in this method we need to compute the products \((E + \tau F_j)^{-1}w\), where \(w \in \mathbb{R}^{n}\) and \(E +\tau F_{j} = E +\tau (A - BB^{T}) -\tau \hat{ K}_{j}B_{c}\) with the low-rank matrices \(B_{c} \in \mathbb{R}^{m,n}\) and \(\hat{K}_{j} = \sqrt{2}P_{l}BM_{0}^{T}J_{c}^{-T} - K_{j} \in \mathbb{R}^{n,m}\). One can use the Sherman-Morrison-Woodbury formula [28, Sect. 2.1.3] to compute these products as

$$\displaystyle{ (E +\tau F_{j})^{-1}w = w_{ 1} + M_{\hat{K}_{j}}{\Bigl ((I_{m} - B_{c}M_{\hat{K}_{j}})^{-1}B_{ c}w_{1}\Bigr )}, }$$

where \(w_1 = (E + \tau(A - BB^T))^{-1}w\) and \(M_{\hat{K}_{j}} =\tau (E +\tau (A - BB^{T}))^{-1}\hat{K}_{j}\) can be determined by solving linear systems with the sparse matrix \(E + \tau(A - BB^T)\), either by computing a sparse LU factorization or by using Krylov subspace methods [62].
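The Sherman-Morrison-Woodbury update above can be checked on a small dense example with hypothetical random stand-ins for the matrices; in practice \(S = E + \tau(A - BB^T)\) would be kept sparse and factorized once, and only the low-rank correction would change per Newton step.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, tau = 8, 2, -0.5
E = np.eye(n)
A = -np.eye(n) - np.diag(rng.uniform(0.0, 1.0, n))   # hypothetical stable part
B = rng.standard_normal((n, m))
Khat = rng.standard_normal((n, m))                   # stand-in for Khat_j
Bc = rng.standard_normal((m, n))
w = rng.standard_normal(n)

# E + tau*F_j = S - tau*Khat Bc with S = E + tau*(A - B B^T)
S = E + tau * (A - B @ B.T)
w1 = np.linalg.solve(S, w)                           # w_1 = S^{-1} w
M = tau * np.linalg.solve(S, Khat)                   # M = tau S^{-1} Khat
x = w1 + M @ np.linalg.solve(np.eye(m) - Bc @ M, Bc @ w1)

# reference: direct dense solve with the full matrix
x_ref = np.linalg.solve(S - tau * Khat @ Bc, w)
print(np.allclose(x, x_ref))                         # True
```

Only two solves with the fixed matrix S and one small m-by-m solve are needed per product, which is the point of the formula.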

2.6 MATLAB Toolbox PABTEC

In this section, we briefly describe the MATLAB toolbox PABTEC, which provides functions for the analysis, model reduction, simulation and visualization of circuit equations. PABTEC stands for PAssivity-preserving Balanced Truncation for Electrical Circuits.

Figure 2.3 shows the main strategy of PABTEC. First, the user has to specify the electrical circuit under consideration. The input data for the main routine pabtec.m are incidence matrices describing the topological structure of the circuit, element matrices for linear circuit components, element relations for nonlinear circuit components and some parameters that can be initialized and verified in the routine inipar.m.

Fig. 2.3
figure 3

MATLAB toolbox PABTEC

Once the circuit is specified, it can be analyzed with the PABTEC routine sysana.m, which delivers information on the topological structure of the circuit, well-posedness and index. If the circuit contains nonlinear elements, it is decoupled into linear and nonlinear parts as described in Sect. 2.4. Model reduction of the linear (sub)circuit is implemented in the routine pabtecl.m. Linear circuits that contain neither inductors nor capacitors and circuits without resistors cannot be reduced with PABTEC. For model reduction of large resistive networks, one can use a graph-based algorithm presented in [60], which is not yet included in PABTEC. For other network structures, an appropriate model reduction method is chosen automatically. RC and RL circuits are reduced by the Lyapunov-based balanced truncation algorithms presented in Sect. 2.3.2, while for model reduction of general RLC circuits, Algorithm 2.1 is applied. If the circuit does not contain CVI-loops and LIV-cutsets, then the Gramian in this algorithm is determined by solving the projected Riccati equation (2.57). Otherwise, we have to solve the projected Lur'e equation (2.15).

Independent of the topological structure of the circuit, the model reduction algorithms include the following steps: preprocessing, solving matrix equations, model reduction and postprocessing. In the preprocessing step, the basis matrices required in the projector \(P_r\) or P are computed using graph-theoretical algorithms. If necessary, the auxiliary matrices \(H_k\) and the matrix \(M_0\) are also computed. The projected Lyapunov equations are solved by the LR-ADI method described in Sect. 2.5.1, while the projected Riccati equation is solved by the Newton or Newton-Kleinman method presented in Sect. 2.5.2. The numerical methods for large-scale projected Lur'e equations proposed in [53] will be included in a future release of PABTEC.
Note that the matrix equation solvers in PABTEC can be seen as extensions of the corresponding functions in the MATLAB toolbox LyaPACK [51] and its successor MESS.

Postprocessing involves the computation of error bounds and reduced-order initial vectors required for simulation.

The last step in PABTEC is the combination of the reduced-order linear model with the unchanged nonlinear part. PABTEC also includes the routines simorih.m and simredh.m for the simulation of the original and reduced-order models, respectively.

PABTEC provides the following routines:

  • main routines listed in Table 2.1, which can be called by the user in the main program;

  • supplementary routines listed in Table 2.2, which are used in the main routines;

  • auxiliary routines listed in Table 2.3, which are used in the main and supplementary routines;

  • user supplied routines listed in Table 2.4, which provide the information on the circuit.

Table 2.1 Main subroutines in PABTEC
Table 2.2 Supplementary subroutines in PABTEC
Table 2.3 Auxiliary subroutines in PABTEC
Table 2.4 User supplied subroutines for PABTEC

The MATLAB toolbox PABTEC can be used by a line command or via a graphical user interface (GUI). PABTEC-GUI contains four tab panels shown in Figs. 2.4 and 2.5 that enable the user to upload the circuit, set up all required parameters, perform model reduction, simulate the original and reduced-order systems and save these systems and simulation data.

Fig. 2.4
figure 4

PABTEC-GUI: general panel

Fig. 2.5
figure 5

PABTEC-GUI: simulation panel

2.7 Numerical Examples

In this section, we present some results of numerical experiments to demonstrate the properties of the presented model reduction methods for linear and nonlinear circuits. The computations were done on an IBM RS 6000 44P Model 270 with machine precision \(\varepsilon = 2.22 \times 10^{-16}\) using MATLAB 7.0.4.

Example 2.7.1

The first example is a transmission line model from [7] consisting of a scalable number of RLC ladders. We have a single-input single-output reciprocal passive DAE system (2.9), (2.10) of order n = 60,000. The minimal solution of the projected Riccati equation (2.21) was approximated by a low-rank matrix \(X_{\min } \approx \widetilde{ R}\,\widetilde{R}^{T}\) with \(\widetilde{R} \in \mathbb{R}^{n,58}\) using Newton's method as presented in Algorithm 2.7.

The original system was approximated by a reduced model of order \(n_r = 16\) using Algorithm 2.1 with the error bound \(\gamma = 2.72 \cdot 10^{-5}\), where

$$\displaystyle{ \gamma = \frac{\|I +\hat{ \mathbf{G}}\|_{\mathcal{H}_{\infty }}^{2}(\gamma _{r+1} +\ldots +\gamma _{q})} {1 -\| I +\hat{ \mathbf{G}}\|_{\mathcal{H}_{\infty }}(\gamma _{r+1} +\ldots +\gamma _{q})},\qquad r = 15. }$$
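For illustration, this bound can be evaluated as follows with hypothetical placeholder values (not the values of this example) for \(\|I +\hat{\mathbf{G}}\|_{\mathcal{H}_{\infty }}\) and the tail sum \(\gamma_{r+1} +\ldots +\gamma_q\); note that the bound is only valid when the product of the two is less than one.

```python
hinf = 1.5        # assumed placeholder for ||I + G_hat||_Hinf
tail = 2.0e-3     # assumed placeholder for gamma_{r+1} + ... + gamma_q
assert hinf * tail < 1.0                        # validity condition of the bound
gamma = hinf**2 * tail / (1.0 - hinf * tail)
print(gamma)      # ~4.51e-3
```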

For comparison, we have also computed reduced models of order \(n_r = 117\) using the PRIMA and SPRIM algorithms [26, 48]. This order was chosen as the smallest integer such that the absolute error \(\vert \hat{\mathbf{G}}(\,j\omega ) -\mathbf{G}(\,j\omega )\vert\) for the SPRIM model is below γ on the frequency interval \([10^{-5}, 10^{10}]\). In Fig. 2.6, we display the magnitude of the frequency responses \(\mathbf{G}(\,j\omega )\) and \(\hat{\mathbf{G}}(\,j\omega )\) of the original and reduced-order models. Figure 2.7 shows the absolute errors \(\vert \hat{\mathbf{G}}(\,j\omega ) -\mathbf{G}(\,j\omega )\vert\) and also the error bound γ. One can see that PABTEC provides a much smaller system while maintaining better global approximation properties. It should also be noted that the result for SPRIM is presented here for the best choice of the expansion point, which was found after several runs of this algorithm. Taking this into account, the reduction time for the PABTEC method becomes comparable to the actual reduction time for SPRIM.⊲

Fig. 2.6
figure 6

Transmission line: the frequency responses of the original system and the reduced-order models computed by the PABTEC, PRIMA and SPRIM methods

Fig. 2.7
figure 7

Transmission line: the absolute errors and the error bound (2.20)

Example 2.7.2

Next we consider the nonlinear circuit shown in Fig. 2.8. It contains 1501 linear capacitors, 1500 linear resistors, 1 voltage source and 1 diode. Such a circuit is described by the DAE system (2.2), (2.3) of state space dimension n = 1503. We simulate this system on the time interval \(\mathbb{I} = [0\,\mathrm{s},0.07\,\mathrm{s}]\) with a fixed stepsize of \(10^{-5}\) s using the BDF method of order 2. The voltage source is given by \(v_{\mathcal{V}}(t) = 10\sin (100\pi t)^{4}\) V, see Fig. 2.9. The linear resistors have the same resistance \(R = 2\,\mathrm{k}\Omega\), the linear capacitors have the same capacitance C = 0.02 μF and the diode has the characteristic curve \(g(v_{\widetilde{\mathcal{R}}}) = 10^{-14}(\exp (40 \frac{1} {\mathrm{V}}v_{\widetilde{\mathcal{R}}}) - 1)\) A.

Fig. 2.8
figure 8

Nonlinear RC circuit

Fig. 2.9
figure 9

Voltage source for the RC circuit

The diode was replaced by an equivalent linear circuit as described in Sect. 2.4. The resulting linear system of order n = 1504 was approximated by a reduced model of order \(n_r = r + r_0\), where \(r_0 = \mathrm{rank}(I - M_0)\) and r satisfies the condition \(\gamma_{r+1} +\ldots +\gamma_q < tol\) with a prescribed tolerance tol. For comparison, we compute the reduced-order linear models for the different tolerances \(tol = 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\). The numerical results are given in Fig. 2.10. The upper plots show the outputs \(y(t) = -j_{\mathcal{V}}(t)\) and \(\hat{y}(t)\) of the original and reduced-order nonlinear systems, respectively, whereas the lower plots show the error \(\vert \hat{y}(t) - y(t)\vert\).

Fig. 2.10
figure 10

Outputs of the original and the reduced-order nonlinear systems and the errors in the output for the different tolerances (a) 10−2, (b) 10−3, (c) 10−4, (d) 10−5

Table 2.5 demonstrates the efficiency of the proposed model reduction method. One can see that as the tolerance decreases, the dimension of the reduced-order system increases while the error in the output decreases. The speedup is defined as the simulation time for the original system divided by the simulation time for the reduced-order model. For example, a speedup of 219 in the simulation of the reduced-order nonlinear model of dimension \(\hat{n} = 13\) with the error \(\|\hat{y} - y\|_{\mathbb{L}_{2}(\mathbb{I})} = 6.2 \cdot 10^{-7}\) was achieved compared to the simulation of the original system. ⊲

Table 2.5 Statistics for the RC circuit

Example 2.7.3

We consider now the nonlinear circuit shown in Fig. 2.11. It contains 1000 repetitions of a subcircuit consisting of one inductor, two capacitors and two resistors. Furthermore, at the beginning and at the end of the chain, we have a voltage source with \(v_{\mathcal{V}}(t) =\sin (100\pi t)^{10}\) V as in Fig. 2.12 and an additional linear inductor, respectively. In the 1st, 101st, 201st, etc., subcircuits, a linear resistor is replaced by a diode, and in the 100th, 200th, 300th, etc., subcircuits, a linear inductor is replaced by a nonlinear inductor. The resulting nonlinear circuit contains one voltage source, 1990 linear resistors with \(R_{1} = 20\,\Omega\) and \(R_{2} = 1\,\Omega\), 991 linear inductors with L = 0.01 H, 2000 linear capacitors with C = 1 μF, ten diodes with \(\widetilde{g}(v_{\widetilde{\mathcal{R}}}) = 10^{-14}(\exp (40 \frac{1} {\mathrm{V}}v_{\widetilde{\mathcal{R}}}) - 1)\) A, and ten nonlinear inductors with

Fig. 2.11
figure 11

Nonlinear RLC circuit

Fig. 2.12
figure 12

Voltage source for RLC circuit

$$\displaystyle{ \widetilde{\mathcal{L}}(\,j_{\widetilde{\mathcal{L}}}) = L_{\min } + (L_{\max } - L_{\min })\exp (-j_{\widetilde{\mathcal{L}}}^{2}L_{\mathrm{ scl}}), }$$

where \(L_{\min} = 0.001\) H, \(L_{\max} = 0.002\) H and \(L_{\mathrm{scl}} = 10^{4} \frac{1} {\mathrm{A}}\). The state space dimension of the resulting DAE system is n = 4003.

The numerical simulation is done on the time interval \(\mathbb{I} = [0\,\mathrm{s},0.05\,\mathrm{s}]\) using the BDF method of order 2 with a fixed stepsize of \(5 \cdot 10^{-5}\) s. In Fig. 2.13, we again present the outputs \(y(t) = -j_{\mathcal{V}}(t)\) and \(\hat{y}(t)\) of the original and reduced-order nonlinear systems, respectively, as well as the error \(\vert \hat{y}(t) - y(t)\vert\) for the different tolerances \(tol = 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\) for the model reduction of the decoupled linear subcircuit. Table 2.6 demonstrates the efficiency of the model reduction method. As in the previous example, one can see that as the tolerance decreases, the dimension of the reduced-order system increases while the error in the output becomes smaller. In particular, for the approximate nonlinear model of dimension \(\hat{n} = 189\) with the error \(\vert \vert \hat{y} - y\vert \vert _{\mathbb{L}_{2}(\mathbb{I})} = 4.10 \cdot 10^{-5}\), the simulation time is only 57 s instead of 1 h and 13 min for the original system, which implies a speedup of 76.8.

Fig. 2.13
figure 13

The outputs of the original and the reduced-order nonlinear systems and the errors in the output for the different tolerances (a) 10−2, (b) 10−3, (c) 10−4, (d) 10−5

Table 2.6 Statistics for the RLC circuit

Other results of numerical experiments with PABTEC can be found in Chaps. 14 and 5 in this book.