Abstract
In this paper, we consider model reduction of linear and nonlinear differential-algebraic equations arising in circuit simulation. Circuit equations obtained using modified nodal or loop analysis have a special structure that can be exploited to construct efficient model reduction algorithms. For linear systems, we review passivity-preserving balanced truncation model reduction methods that are based on solving projected Lur’e or Lyapunov matrix equations. Furthermore, a topology-based index-preserving procedure for extracting large linear subnetworks from nonlinear circuits is given. Finally, we describe a new MATLAB toolbox PABTEC for model reduction of circuit equations and present some results of numerical experiments.
2.1 Introduction
As integrated circuits become more complex and more physical effects have to be taken into account, efficient modeling and simulation tools for very large networks are indispensable. In this context, model order reduction is of crucial importance, especially if the simulation of large-scale systems has to be done in a short time or has to be repeated for different input signals. The general idea of model order reduction is to approximate a large-scale dynamical system by a reduced-order model that preserves essential properties like stability and passivity. It is also required that the approximation error be small.
Many different model reduction approaches have been developed in computational fluid dynamics, control design and electrical and mechanical engineering, see [3, 13, 61, 64] for books on this topic. One of the most widely used model reduction techniques in circuit simulation is moment matching approximation based on Krylov subspace methods, see, e.g., [6, 25, 30]. Although these methods are efficient for very large sparse problems, the resulting reduced-order systems have only locally good approximation properties, and stability and passivity are not necessarily preserved. Furthermore, passivity-preserving model reduction methods based on Krylov subspaces have been developed for structured systems arising in circuit simulation [26, 27, 42, 48] and also for general systems [4, 38, 66]. However, none of these methods provides computable global error bounds. Another drawback of Krylov subspace methods is the ad hoc choice of interpolation points, which strongly influences the approximation quality. A point selection strategy based on tangential interpolation that provides an optimal \(\mathcal{H}_{2}\)-approximation has been presented in [5, 32].
In this paper, we present a survey on passivity-preserving balanced truncation model reduction methods for linear circuit equations developed in [54, 56, 72]. They involve computing the spectral projectors onto the left and right deflating subspaces corresponding to the finite and infinite eigenvalues of an underlying pencil and solving projected matrix equations. An important property of these methods is the existence of computable error bounds that allow an adaptive choice of the order of the approximate model.
Furthermore, we consider model reduction of nonlinear circuits based on decoupling linear and nonlinear subcircuits followed by reduction of the linear part [68]. This model reduction approach can also be combined with the POD-based reduction technique for semiconductor devices, see Chap. 1, and further with the hierarchical reduction methods studied in Chap. 4. The developed model reduction algorithms for circuit equations were implemented in the MATLAB toolbox PABTEC and tested on practical problems.
Notation
Throughout the paper, \(\mathbb{R}^{n,m}\) and \(\mathbb{C}^{n,m}\) denote the spaces of n × m real and complex matrices, respectively. The open left and right half-planes are denoted by \(\mathbb{C}_{-}\) and \(\mathbb{C}_{+}\), respectively, and \(i = \sqrt{-1}\). The matrices A T and A ∗ denote, respectively, the transpose and the conjugate transpose of \(A \in \mathbb{C}^{n,m}\), and A −T = (A −1)T. An identity matrix of order n is denoted by I n or simply by I. We use rank(A), im(A) and ker(A) for the rank, the range and the kernel of A, respectively. A matrix \(A \in \mathbb{C}^{n,n}\) is positive definite (semidefinite), if v ∗ Av > 0 (v ∗ Av ≥ 0) for all non-zero \(v \in \mathbb{C}^{n}\). Note that positive (semi)definiteness of A does not require A to be Hermitian. For \(A,B \in \mathbb{C}^{n,n}\), we write A > B (A ≥ B) if A − B is positive definite (semidefinite). Furthermore, diag(A 1, …, A s ) denotes a block diagonal matrix with block entries A j , j = 1, …, s, on the diagonal.
2.2 Circuit Equations
In this section, we briefly describe the modeling of electrical circuits via differential-algebraic equations (DAEs) and discuss their properties. For more details on graph theory and network analysis, we refer to [1, 20, 22, 40, 75].
2.2.1 Graph-Theoretic Concepts
A general circuit can be modeled as a directed graph \(\mathfrak{G}\) whose vertices (nodes) \(\mathfrak{n}_{k}\) correspond to the nodes of the circuit and whose branches (edges) \(\mathfrak{b}_{k_{1},k_{2}} =\langle \mathfrak{n}_{k_{1}},\mathfrak{n}_{k_{2}}\rangle\) correspond to the circuit elements like capacitors, inductors and resistors. For the ordered pair \(\mathfrak{b}_{k_{1},k_{2}} =\langle \mathfrak{n}_{k_{1}},\mathfrak{n}_{k_{2}}\rangle\), we say that \(\mathfrak{b}_{k_{1},k_{2}}\) leaves \(\mathfrak{n}_{k_{1}}\) and enters \(\mathfrak{n}_{k_{2}}\). In this case, \(\mathfrak{b}_{k_{1},k_{2}}\) is called incident with \(\mathfrak{n}_{k_{1}}\) and \(\mathfrak{n}_{k_{2}}\). An alternating sequence \((\mathfrak{n}_{k_{1}},\mathfrak{b}_{k_{1},k_{2}},\mathfrak{n}_{k_{2}},\ldots,\mathfrak{n}_{k_{s-1}},\mathfrak{b}_{k_{s-1},k_{s}},\mathfrak{n}_{k_{s}})\) of vertices and branches in \(\mathfrak{G}\) is called a path connecting \(\mathfrak{n}_{k_{1}}\) and \(\mathfrak{n}_{k_{s}}\) if the branches \(\mathfrak{b}_{k_{j-1},k_{j}}\) are incident with the vertices \(\mathfrak{n}_{k_{j-1}}\) and \(\mathfrak{n}_{k_{j}}\) for 2 ≤ j ≤ s. A path is closed if \(\mathfrak{n}_{k_{1}} = \mathfrak{n}_{k_{s}}\). A closed path is called a loop if \(\mathfrak{n}_{k_{i}}\neq \mathfrak{n}_{k_{j}}\) for 1 ≤ i < j ≤ s except for \(\mathfrak{n}_{k_{1}}\) and \(\mathfrak{n}_{k_{s}}\). A graph \(\mathfrak{G}\) is called connected if for every two vertices there exists a path connecting them. A cutset is a set of branches of a connected graph whose removal disconnects the graph, and this set is minimal with this property. A subgraph of the graph \(\mathfrak{G}\) is called a tree if it has all nodes of \(\mathfrak{G}\), is connected and does not contain loops.
A directed graph \(\mathfrak{G}\) with n v vertices, n b branches and n l loops can be described by an incidence matrix \(\mathbf{A}_{0} = [a_{pq}] \in \mathbb{R}^{n_{v},n_{b}}\) with
or by a loop matrix \(\mathbf{B}_{0} = [b_{pq}] \in \mathbb{R}^{n_{l},n_{b}}\) with
For a connected graph, the matrices A 0 and B 0 satisfy the following relations
see [22, p. 213]. Removing linear dependent rows from A 0 and B 0, we obtain the full rank matrices \(\mathbf{A} \in \mathbb{R}^{n_{v}-1,n_{b}}\) and \(\mathbf{B} \in \mathbb{R}^{n_{b}-n_{v}+1,n_{b}}\) which are called the reduced incidence matrix and the reduced loop matrix, respectively.
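The relation cited above is the standard orthogonality between the incidence and loop matrices, \(\mathbf{A}_{0}\mathbf{B}_{0}^{T} = 0\). As a minimal sketch, the following snippet builds these matrices for a hypothetical three-node, three-branch loop (a circuit graph not taken from this chapter) and checks the orthogonality and the ranks of the reduced matrices numerically.

```python
import numpy as np

# Hypothetical 3-node, 3-branch loop: b1 = <n1,n2>, b2 = <n2,n3>, b3 = <n3,n1>.
# Entry a_pq is +1 if branch q leaves node p, -1 if it enters node p, 0 otherwise.
A0 = np.array([[ 1,  0, -1],
               [-1,  1,  0],
               [ 0, -1,  1]])

# The single loop traverses b1, b2, b3 in their reference directions.
B0 = np.array([[1, 1, 1]])

# Orthogonality between the cut and loop spaces: A0 * B0^T = 0.
print(A0 @ B0.T)

# Reduced matrices: drop one (linearly dependent) row of A0;
# B0 already has n_b - n_v + 1 = 1 row.
A = A0[:-1, :]   # shape (n_v - 1, n_b) = (2, 3), full row rank
B = B0           # shape (n_b - n_v + 1, n_b) = (1, 3), full row rank
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))
```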
2.2.2 Modified Nodal Analysis and Modified Loop Analysis
We now consider a general nonlinear RLC circuit that contains resistors, inductors, capacitors, independent voltage sources and \(n_{\mathscr{I} }\) independent current sources. Such circuits are completely described by the graph-theoretic relations like Kirchhoff’s current and voltage laws together with the branch constitutive relations that characterize the circuit elements. Kirchhoff’s current law states that the sum of the currents along all branches leaving and entering a circuit node is zero. Kirchhoff’s voltage law states that the sum of the voltages along the branches of any loop is zero. Let
denote the vectors of branch currents and branch voltages, respectively, and let the reduced incidence and loop matrices
be partitioned accordingly, where the subscripts and \(\mathscr{I}\) stand for resistors, capacitors, inductors, voltage sources and current sources, respectively. Then Kirchhoff’s current and voltage laws can be expressed in the compact form as
respectively, or, equivalently,
where \(\iota \in \mathbb{R}^{n_{b}-n_{v}+1}\) and \(\eta \in \mathbb{R}^{n_{v}-1}\) denote the vectors of loop currents and node potentials.
The branch constitutive relations for nonlinear capacitors, inductors and resistors are given by
where the functions , and describe electromagnetic fluxes in the inductors, capacitor charges and resistor voltage-current characteristics, respectively. For current-controlled resistors, we have also the relation , where is the resistor current-voltage characteristic function. We assume that
-
(A1)
the functions ϕ, and g are continuously differentiable and their Jacobians
are positive definite for all , and , respectively.
This assumption guarantees that inductors, capacitors and resistors are locally passive, see [19] for details.
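To illustrate assumption (A1) on a concrete element, consider a hypothetical diode-like resistor with characteristic \(g(v) = I_{s}(e^{v/V_{t}} - 1)\); the parameter values and the characteristic itself are illustrative and not taken from the chapter. Its Jacobian \(g'(v) = (I_{s}/V_{t})e^{v/V_{t}}\) is strictly positive for every voltage, so the element is locally passive in the sense described above.

```python
import numpy as np

# Hypothetical diode-like conductance g(v) = Is*(exp(v/Vt) - 1).
# Its Jacobian dg/dv = (Is/Vt)*exp(v/Vt) is strictly positive everywhere,
# which is exactly the local-passivity condition of assumption (A1).
Is, Vt = 1e-12, 0.026

def dg_dv(v):
    return (Is / Vt) * np.exp(v / Vt)

samples = np.linspace(-0.5, 0.8, 50)
print(all(dg_dv(v) > 0 for v in samples))
```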
Using Kirchhoff’s laws and the branch constitutive relations, the dynamical behaviour of a nonlinear circuit can be described using modified nodal analysis (MNA) [37] by the following system of DAEs
where
and the input u and the output y have the form
respectively. Another approach for modeling electrical circuits is based on modified loop analysis (MLA), see [79]. In this case, the circuit equations take the form (2.2) with
and the input and the output are as in (2.4).
We assume that the circuit is well-posed in the sense that
-
(A2)
the circuit does not contain cutsets consisting of current sources (I-cutsets),
-
(A3)
the circuit does not contain loops consisting of voltage sources (V-loops).
These assumptions avoid open-circuit current sources and short-circuit voltage sources, respectively. Assumption (A2) is equivalent to
which is, on the other hand, equivalent to \(\mathrm{rank}(B_{\mathscr{I} } ) = n_{\mathscr{I} }\). In terms of rank conditions, (A3) means that or, equivalently, .
The index concept plays an important role in the analysis of DAEs. To characterize different analytical and numerical properties of DAE systems, several index concepts have been introduced in the literature, e.g., [17, 29, 33, 43]. For example, the differentiation index is roughly defined as the minimum number of times that all or part of a DAE system must be differentiated with respect to t in order to determine the derivative of x as a continuous function of t and x. In the sequel, we will use the shorter term “index” instead of “differentiation index”. The following proposition characterizes the index of the MNA system (2.2), (2.3).
Proposition 2.2.1 ( [24, 68])
Consider a circuit satisfying assumptions (A1)–(A3).
- 1.
-
2.
The index of the MNA system (2.2), (2.3) is equal to zero if and only if
(2.6) -
3.
The index of the MNA system (2.2), (2.3) is equal to one if and only if
(2.7)
Similar rank conditions can also be formulated for the MLA system (2.2), (2.5). In terms of the topological structure of the circuit, the conditions (2.6) mean that the circuit does not contain voltage sources and that the circuit graph contains a capacitive tree, respectively. Furthermore, the first condition in (2.7) implies that the circuit does not contain loops consisting of capacitors and/or voltage sources (CV-loops) except for loops consisting of capacitors only (C-loops), whereas the second condition in (2.7) means that the circuit does not contain cutsets consisting of inductors and/or current sources (LI-cutsets).
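The topological reading of the index-zero conditions can be checked mechanically. The following sketch (a hypothetical helper, since the displayed rank conditions (2.6) are not reproduced in this excerpt) tests the two ingredients named above: absence of voltage sources and existence of a capacitive tree, i.e. \(\mathrm{rank}(A_{C}) = n_{v} - 1\).

```python
import numpy as np

def mna_index_at_most_zero(AC, nV, n_nodes):
    """Hypothetical check of the topological reading of (2.6): index zero iff
    the circuit has no voltage sources and the capacitive subgraph contains a
    tree, i.e. rank(A_C) = n_v - 1 for the reduced incidence matrix A_C."""
    return nV == 0 and np.linalg.matrix_rank(AC) == n_nodes - 1

# Two grounded capacitors on a circuit with 2 non-ground nodes (n_v = 3):
# the capacitive subgraph contains a tree.
AC_tree = np.array([[1, 0],
                    [0, 1]])
# A single capacitor between the two nodes: no capacitive tree.
AC_no_tree = np.array([[ 1],
                       [-1]])

print(mna_index_at_most_zero(AC_tree, nV=0, n_nodes=3))     # True
print(mna_index_at_most_zero(AC_no_tree, nV=0, n_nodes=3))  # False
```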
In the following, we will distinguish between nonlinear circuits, which contain nonlinear elements, and linear circuits consisting exclusively of linear capacitors, inductors and resistors. A circuit element is called linear if the current-voltage relation for this element is linear. Otherwise, the circuit element is called nonlinear. Without loss of generality we may assume that the circuit elements are ordered such that the incidence matrices are partitioned as
where the incidence matrices , and correspond to the linear circuit components, and , and are the incidence matrices for the nonlinear devices. We also assume that the linear and nonlinear elements are not mutually connected, i.e., where , and are the capacitance, inductance and conductance matrices for the corresponding linear elements, whereas
describe the corresponding nonlinear components, and is the current vector through the nonlinear inductors.
2.2.3 Linear RLC Circuits
For simplification of notation, a linear RLC circuit containing n R linear resistors, n L linear inductors, n C linear capacitors, n I current sources and n V voltage sources will be described by the linear DAE system
with the MNA matrices
or the MLA matrices
Here, the subscripts R , C , L , V and I stand for linear resistors, linear capacitors, linear inductors, voltage sources and current sources, respectively, and \(L \in \mathbb{R}^{n_{L},n_{L}}\), \(C \in \mathbb{R}^{n_{C},n_{C}}\), \(R \in \mathbb{R}^{n_{R},n_{R}}\) and G = R −1 are the inductance, capacitance, resistance and conductance matrices, respectively. Linear circuits are often used to model interconnects, transmission lines and pin packages in VLSI networks. They arise also in the linearization of nonlinear circuit equations around DC operating points. According to (A1), we assume that
-
(A1´)
the matrices L, C and G are symmetric and positive definite.
This condition together with (A2) and (A3) guarantees that the pencil λE − A is regular, i.e., det(λE − A) ≠ 0 for some \(\lambda \in \mathbb{C}\), see [27]. In this case, we can define a transfer function
of the DAE system (2.9). Applying the Laplace transform to (2.9) with an initial condition x(0) = x 0 satisfying Ex 0 = 0, we obtain y(s) = G(s)u(s), where u(s) and y(s) are the Laplace transforms of the input u(t) and the output y(t), respectively. Thus, the transfer function G(s) describes the input-output relation of (2.9) in the frequency domain. Note that the MNA system (2.9), (2.10) and the MLA system (2.9), (2.11) have the same transfer function.
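As a minimal numerical sketch of evaluating G(s), consider a hypothetical one-node RCI circuit (a resistor and a capacitor in parallel to ground, driven by a current source). Assuming the usual RCI MNA form \(E = A_{C}CA_{C}^{T}\), \(A = -A_{R}GA_{R}^{T}\), \(B = -A_{I}\) (the displayed matrices (2.25) are not reproduced in this excerpt), the system reduces to scalars and the transfer function is the impedance \(1/(sC + G)\):

```python
import numpy as np

# Hypothetical single-node RC example: the MNA matrices reduce to scalars
#   E = C_val,  A = -G_val,  B = 1,
# so G(s) = B^T (sE - A)^{-1} B = 1 / (s*C_val + G_val).
C_val, G_val = 2.0, 0.5
E = np.array([[C_val]])
A = np.array([[-G_val]])
B = np.array([[1.0]])

def transfer(s):
    # Evaluate G(s) = B^T (sE - A)^{-1} B via a linear solve.
    return B.T @ np.linalg.solve(s * E - A, B)

s = 1j * 3.0
G_num = transfer(s)[0, 0]
G_exact = 1.0 / (s * C_val + G_val)
print(abs(G_num - G_exact))
```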
For any rational matrix-valued function G(s), there exist matrices E, A, B in and B out such that G(s) = B out(sE − A)−1 B in, see [21]. Then the DAE system
is said to form a realization of G(s). We will also denote a realization of G(s) by G = (E, A, B in, B out).
The transfer function G(s) is called proper if \(\lim \limits _{s\rightarrow \infty }\mathbf{G}(s) <\infty\), and improper, otherwise. If G(s) is proper and analytic in \(\mathbb{C}_{+}\), then the \(\mathcal{H}_{\infty }\)-norm of G(s) is defined as
where ∥⋅ ∥ denotes the spectral matrix norm.
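For the scalar RC impedance from the previous hypothetical example, \(|\mathbf{G}(i\omega)|\) is maximal at \(\omega = 0\), so the \(\mathcal{H}_{\infty}\)-norm equals \(1/G\). A simple (and only heuristic) numerical estimate samples the imaginary axis on a frequency grid, which yields a lower bound on the true supremum:

```python
import numpy as np

# Sample-based estimate of the H-infinity norm of G(s) = 1/(s*C + G):
# the maximum of the spectral norm over a frequency grid is a lower bound
# on the supremum over the whole imaginary axis.
C_val, G_val = 2.0, 0.5

def G(s):
    return 1.0 / (s * C_val + G_val)

freqs = np.logspace(-4, 4, 2000)
hinf_est = max(np.linalg.norm(np.array([[G(1j * w)]]), 2) for w in freqs)
print(hinf_est, 1.0 / G_val)   # estimate vs. exact value 1/G
```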
2.2.3.1 Passivity
Passivity is the most important property of circuit equations. System (2.9) with x(0) = 0 is passive if
for all t ≥ 0 and all admissible u such that u T y is locally integrable. Passive elements can store and dissipate energy, but they do not generate energy. Thus, capacitors, resistors and inductors are passive, while current and voltage sources are not.
It is well known in linear network theory [1] that the DAE system (2.9) is passive if and only if its transfer function G(s) = B T(sE − A)−1 B is positive real, i.e., G is analytic in \(\mathbb{C}_{+}\) and G(s) + G(s)∗ ≥ 0 for all \(s \in \mathbb{C}_{+}\). Since the system matrices in (2.10) satisfy E = E T ≥ 0 and A + A T ≤ 0, the transfer function of (2.9), (2.10) is positive real, and, hence, this system is passive.
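Positive realness can be spot-checked numerically. For the scalar RC impedance from the hypothetical example above, \(\mathbf{G}(s) + \mathbf{G}(s)^{*} \geq 0\) reduces to \(\mathrm{Re}\,\mathbf{G}(s) \geq 0\); the sketch below samples random points in the open right half-plane (a sampling check, not a proof of positive realness):

```python
import numpy as np

# Sampling check of positive realness for G(s) = 1/(s*C + G):
# at each sample point s with Re(s) > 0, G(s) + conj(G(s)) must be >= 0.
rng = np.random.default_rng(0)
C_val, G_val = 2.0, 0.5
ok = True
for _ in range(100):
    s = complex(rng.uniform(0.01, 10.0), rng.uniform(-10.0, 10.0))
    Gs = 1.0 / (s * C_val + G_val)
    ok = ok and (Gs + np.conj(Gs)).real >= 0.0
print(ok)
```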
2.2.3.2 Stability
Stability is a qualitative property of dynamical systems which describes the behaviour of their solutions under small perturbations in the initial data. For the linear DAE system (2.9), stability can be characterized in terms of the finite eigenvalues of the pencil λE − A, e.g., [21]. System (2.9) is stable if all the finite eigenvalues of λE − A lie in the closed left half-plane and the eigenvalues on the imaginary axis are semi-simple, i.e., they have the same algebraic and geometric multiplicity. System (2.9) is asymptotically stable if the pencil λE − A is c-stable, i.e., all its finite eigenvalues lie in the open left half-plane. Note that passivity of the MNA system (2.9), (2.10) implies that this system is stable [1, Theorem 2.7.2]. Topological conditions for the asymptotic stability of the MNA equations (2.9), (2.10) can be found in [58, 59].
2.2.3.3 Reciprocity
Another relevant property of circuit equations is reciprocity. We call a matrix \(S \in \mathbb{R}^{m,m}\) a signature if S is diagonal and S 2 = I m . System (2.9) is reciprocal with an external signature \(S_{\mathrm{ext}} \in \mathbb{R}^{m,m}\) if its transfer function satisfies
for all \(s \in \mathbb{C}\). Obviously, the MNA system (2.9), (2.10) with symmetric L, C and G is reciprocal with the external signature \(S_{\mathrm{ext}} =\mathrm{ diag}(I_{n_{I}},-I_{n_{V }})\).
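For an RCI circuit (current sources only) the external signature is the identity and reciprocity reduces to symmetry of G(s). The following hypothetical two-node example (grounded capacitor and resistor at each node, a coupling resistor, and a current source injected at each node; the RCI MNA form \(E = A_{C}CA_{C}^{T}\), \(A = -A_{R}GA_{R}^{T}\), \(B = -A_{I}\) is assumed) verifies this numerically:

```python
import numpy as np

# Hypothetical 2-node RCI circuit with symmetric E and A.
AC = np.eye(2)                      # grounded capacitor at each node
AR = np.array([[1.0, 0.0,  1.0],    # grounded R at node 1, at node 2,
               [0.0, 1.0, -1.0]])   # and a resistor between the nodes
AI = -np.eye(2)                     # a current source into each node
Cm = np.diag([1.0, 2.0])
Gm = np.diag([0.5, 0.25, 1.0])

E = AC @ Cm @ AC.T
A = -AR @ Gm @ AR.T
B = -AI

s = 1j * 2.0
Gs = B.T @ np.linalg.solve(s * E - A, B)
# Since sE - A is (complex) symmetric, B^T (sE - A)^{-1} B is symmetric,
# i.e. G(s) = G(s)^T: reciprocity with S_ext = I.
print(np.allclose(Gs, Gs.T))
```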
2.3 Model Reduction of Linear Circuits
Consider the linear MNA system (2.9), (2.10) with \(E,A \in \mathbb{R}^{n,n}\) and \(B \in \mathbb{R}^{n,m}\). We aim to approximate this system by a reduced-order model
where \(\hat{E}\), \(\hat{A} \in \mathbb{R}^{n_{r},n_{r}}\), \(\hat{B} \in \mathbb{R}^{n_{r},m}\), \(\hat{C} \in \mathbb{R}^{m,n_{r}}\) and n r ≪ n. It is required that the approximate system (2.13) has a small approximation error \(y -\hat{ y}\) and also preserves passivity and reciprocity. In the frequency domain, the error can be measured via \(\mathbf{G} -\hat{\mathbf{G}}\) in an appropriate system norm, where \(\hat{\mathbf{G}}(s) =\hat{ C}(s\hat{E} -\hat{ A})^{-1}\hat{B}\) is the transfer function of system (2.13).
A classical approach for computing the reduced-order model (2.13) is based on the projection of system (2.9) onto lower dimensional subspaces. In this case, the system matrices in (2.13) have the form
where the projection matrices W, \(T \in \mathbb{R}^{n,n_{r}}\) determine the subspaces of interest. In interpolation-based passivity-preserving model reduction methods like PRIMA [48], SPRIM [26, 27] and spectral zero interpolation [38, 66], the columns of these matrices span certain (rational) Krylov subspaces associated with (2.9).
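A minimal sketch of such a projection: if T spans the order-one Krylov subspace \(\mathrm{span}\{(s_{0}E - A)^{-1}B\}\) and W = T (a one-sided projection), the projected model (2.14) interpolates the transfer function at the expansion point \(s_{0}\). The matrices below come from a small hypothetical two-node RC example, not from the chapter:

```python
import numpy as np

# Hypothetical 2-node RC example, single current source at node 1.
E = np.diag([1.0, 2.0])
A = -np.array([[ 1.5, -1.0],
               [-1.0,  1.25]])
B = np.array([[1.0],
              [0.0]])
s0 = 1.0   # hypothetical expansion point

# One-sided projection onto span{(s0*E - A)^{-1} B}.
v = np.linalg.solve(s0 * E - A, B)
T = v / np.linalg.norm(v)
W = T

# Reduced matrices as in (2.14).
Eh, Ah, Bh = W.T @ E @ T, W.T @ A @ T, W.T @ B
Ch = B.T @ T

# Moment matching: the reduced model interpolates G at s0.
G_full = (B.T @ np.linalg.solve(s0 * E - A, B))[0, 0]
G_red = (Ch @ np.linalg.solve(s0 * Eh - Ah, Bh))[0, 0]
print(abs(G_full - G_red))
```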
Balanced truncation also belongs to the projection-based model reduction techniques. This method consists of transforming the dynamical system into a balanced form whose appropriately chosen controllability and observability Gramians are both equal to a diagonal matrix. Then a reduced-order model (2.13), (2.14) is obtained by projecting (2.9) onto the subspaces corresponding to the dominant diagonal elements of the balanced Gramians. In order to capture specific system properties, different balancing techniques have been developed over the last 30 years [31, 46, 47, 52, 55, 69]. An important property of these techniques is the existence of computable error bounds that allow us to approximate (2.9) to a prescribed accuracy.
In Sect. 2.3.1, we consider a passivity-preserving model reduction method for general RLC circuits developed in [54, 72]. This method is based on balancing the Gramians that satisfy the projected Lur’e matrix equations. For RC circuits consisting only of resistors, capacitors, current sources and/or voltage sources, this method can be simplified significantly. In Sect. 2.3.2, we present passivity-preserving model reduction methods for RC circuits developed in [56] that rely on balancing the solutions of the projected Lyapunov equations. Thereby, we will distinguish three cases: RC circuits with current sources (RCI circuits), RC circuits with voltage sources (RCV circuits) and RC circuits with both current and voltage sources (RCIV circuits). Finally, in Sect. 2.3.3, we discuss the numerical aspects of the presented balancing-related model reduction algorithms.
2.3.1 Balanced Truncation for RLC Circuits
First, we consider model reduction of general RLC circuits. Note that passivity of the MNA system (2.9), (2.10) can be characterized via the projected Lur’e equations
and
with unknowns \(X \in \mathbb{R}^{n,n}\), \(K_{c} \in \mathbb{R}^{n,m}\), \(J_{c} \in \mathbb{R}^{m,m}\) and \(Y \in \mathbb{R}^{n,n}\), \(K_{o} \in \mathbb{R}^{m,n}\), \(J_{o} \in \mathbb{R}^{m,m}\), respectively. Here, P r and P l are the spectral projectors onto the right and left deflating subspaces of the pencil λE − (A − BB T) corresponding to the finite eigenvalues along the right and left deflating subspaces corresponding to the eigenvalue at infinity, and
In general, the solvability of the projected Lur’e equations (2.15) and (2.16) requires that system (2.9) is passive and R-minimal, i.e.,
for all \(\lambda \in \mathbb{C}\). For the circuit equations (2.9), (2.10), however, the R-minimality condition can be removed.
Theorem 2.3.1 ( [54])
Consider an MNA system (2.9), (2.10) satisfying (A1´), (A2) and (A3). Then the projected Lur’e equations (2.15) and (2.16) are solvable.
Note that the solutions X and Y of (2.15) and (2.16) are not unique. However, there exist unique minimal solutions X min and Y min that satisfy 0 ≤ X min ≤ X and 0 ≤ Y min ≤ Y for all symmetric solutions X and Y of (2.15) and (2.16), respectively. These minimal solutions X min and Y min of (2.15) and (2.16), respectively, are called the controllability and observability Gramians of system (2.9). This system is called balanced if X min = Y min = diag(Γ, 0), where \(\varGamma =\mathrm{ diag}(\gamma _{1},\ldots,\gamma _{n_{f}})\) with \(\gamma _{1} \geq \ldots \geq \gamma _{n_{f}} \geq 0\) and n f = rank(P r ). The values γ j are called the characteristic values of (2.9). Based on the energy interpretation of the Gramians X min and Y min, see [55], one can conclude that the truncation of the states of a balanced system corresponding to the small characteristic values does not change the system properties significantly. The characteristic values and balancing transformation matrices can be determined from the singular value decomposition of the matrix \(\tilde{Y }^{T}E\tilde{X}\), where \(\tilde{X}\) and \(\tilde{Y }\) are the Cholesky factors of the Gramians \(X_{\min } =\tilde{ X}\tilde{X}^{T}\) and \(Y _{\min } =\tilde{ Y }\tilde{Y }^{T}\). Taking into account the block structure of the MNA matrices in (2.10), we have E T = S int ES int and A T = S int AS int with
This implies that Y min = S int X min S int. Then instead of the more expensive singular value decomposition of \(\tilde{Y }^{T}E\tilde{X}\), we can compute the eigenvalue decomposition of the symmetric matrix \(\tilde{X}^{T}S_{\mathrm{int}}E\tilde{X}\). In this case, the numerical solution of only one Lur’e equation is required. If λ j are eigenvalues of \(\tilde{X}^{T}S_{\mathrm{int}}E\tilde{X}\), then γ j = | λ j |. Thus, the reduced-order model (2.13), (2.14) can be determined by projecting (2.9) onto the subspaces corresponding to the dominant eigenvalues of \(\tilde{X}^{T}S_{\mathrm{int}}E\tilde{X}\).
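The internal symmetry \(E^{T} = S_{\mathrm{int}}ES_{\mathrm{int}}\), \(A^{T} = S_{\mathrm{int}}AS_{\mathrm{int}}\) can be verified on a small example. The block layout and signs below follow the MNA form commonly written for (2.10) (node/inductor/voltage-source blocks); since the displayed matrices are not reproduced in this excerpt, this is a sketch under that assumption, on a hypothetical two-node circuit:

```python
import numpy as np

# Hypothetical 2-node circuit: grounded capacitor, resistor between the nodes,
# grounded inductor and grounded voltage source; assumed MNA block layout
# (node potentials, inductor currents, voltage-source currents).
AC = np.array([[1.0], [0.0]])
AR = np.array([[1.0], [-1.0]])
AL = np.array([[0.0], [1.0]])
AV = np.array([[0.0], [1.0]])
Cm, Lm, Gm = np.array([[1.0]]), np.array([[2.0]]), np.array([[0.5]])

n, nL, nV = 2, 1, 1
E = np.block([[AC @ Cm @ AC.T,   np.zeros((n, nL)),   np.zeros((n, nV))],
              [np.zeros((nL, n)), Lm,                  np.zeros((nL, nV))],
              [np.zeros((nV, n)), np.zeros((nV, nL)),  np.zeros((nV, nV))]])
A = np.block([[-AR @ Gm @ AR.T,  -AL,                  -AV],
              [AL.T,              np.zeros((nL, nL)),  np.zeros((nL, nV))],
              [AV.T,              np.zeros((nV, nL)),  np.zeros((nV, nV))]])

# Internal signature S_int = diag(I_{n_v-1}, -I_{n_L}, -I_{n_V}).
S_int = np.diag([1.0, 1.0, -1.0, -1.0])

print(np.allclose(E.T, S_int @ E @ S_int),
      np.allclose(A.T, S_int @ A @ S_int))
```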
One can also truncate the states that are uncontrollable and unobservable at infinity. Such states do not contribute to the energy transfer from the input to the output, and, therefore, they can be removed from the system without changing the input-output relation [69, 71]. For general DAE systems, such states can be determined from the solution of certain projected discrete-time Lyapunov equations [69]. Exploiting again the structure of the MNA equations (2.9), (2.10), the required states can be determined from the eigenvalue decomposition of the symmetric matrix (I − M 0)S ext with
We summarize the resulting model reduction method for RLC circuits in Algorithm 2.1.
Algorithm 2.1 Passivity-preserving balanced truncation for RLC circuits
Given a passive MNA system (2.9) with E, A, B as in (2.10), compute a reduced-order model (2.13).
-
1.
Compute the full-rank Cholesky factor \(\tilde{X}\) of the minimal solution \(X_{\min } =\tilde{ X}\tilde{X}^{T}\) of the projected Lur’e equation (2.15).
-
2.
Compute the eigenvalue decomposition
$$\displaystyle{ \tilde{X}^{T}S_{\mathrm{ int}}E\tilde{X} = [\,U_{1},\,U_{2}\,]\left [\begin{array}{cc} \varLambda _{1} & 0 \\ 0& \varLambda _{2}\end{array} \right ][\,U_{1},\,U_{2}\,]^{T}, }$$where S int is as in (2.18), the matrix [ U 1, U 2 ] is orthogonal, Λ 1 = diag(λ 1, …, λ r ) and Λ 2 = diag(λ r+1, …, λ q ).
-
3.
Compute the eigenvalue decomposition
$$\displaystyle{ (I - M_{0})S_{\mathrm{ext}} = U_{0}^{}\varLambda _{0}^{}U_{0}^{T}, }$$where M 0 is as in (2.17), S ext is as in (2.19), U 0 is orthogonal and \(\varLambda _{0} =\mathrm{ diag}(\hat{\lambda }_{1},\ldots,\hat{\lambda }_{m})\).
-
4.
Compute the reduced-order system (2.13) with
$$\displaystyle{ \begin{array}{l} \hat{E} = \left [\begin{array}{cc} I& 0\\ \;0 & 0\end{array} \right ],\qquad \hat{A} = \left [\begin{array}{cc} \quad \;W^{T}A\,T & W^{T}BC_{ \infty }/\sqrt{2} \\ - B_{\infty }B^{T}T/\sqrt{2}& I - B_{\infty }C_{\infty }/2\end{array} \right ], \\ \hat{B} = \left [\begin{array}{cc} W^{T}B \\ - B_{\infty }/\sqrt{2}\end{array} \right ],\qquad \hat{C} = \left [\,B^{T}T, C_{ \infty }/\sqrt{2}\,\right ],\end{array} }$$where
$$\displaystyle{ \begin{array}{l} B_{\infty } = S_{0}\vert \varLambda _{0}\vert ^{1/2}U_{0}^{T}S_{\mathrm{ext}},\quad C_{\infty } = U_{0}^{}\vert \varLambda _{0}\vert ^{1/2}, \\ W = S_{\mathrm{int}}\tilde{X}U_{1}^{}\vert \varLambda _{1}\vert ^{-1/2},\quad T =\tilde{ X}U_{1}^{}S_{1}\vert \varLambda _{1}\vert ^{-1/2}, \\ S_{0} =\mathrm{ diag}(\mathrm{sign}(\hat{\lambda }_{1}),\ldots,\mathrm{sign}(\hat{\lambda }_{m})),\quad \vert \varLambda _{0}\vert =\mathrm{ diag}(\vert \hat{\lambda }_{1}\vert,\ldots,\vert \hat{\lambda }_{m}\vert ), \\ S_{1} =\mathrm{ diag}(\mathrm{sign}(\lambda _{1}),\ldots,\mathrm{sign}(\lambda _{r})),\quad \vert \varLambda _{1}\vert =\mathrm{ diag}(\vert \lambda _{1}\vert,\ldots,\vert \lambda _{r}\vert ).\end{array} }$$
One can show that the reduced-order model computed by Algorithm 2.1 preserves not only passivity but also reciprocity. Moreover, we have the following error bound
provided \(\|I + \mathbf{G}\|_{\mathcal{H}_{\infty }}(\gamma _{r+1} +\ldots +\gamma _{q}) <1\), see [54] for details. Note that this error bound requires the computation of the \(\mathcal{H}_{\infty }\)-norm of G, which is expensive for large-scale systems. If r is chosen in Algorithm 2.1 such that
then we can estimate
where only the evaluation of the \(\mathcal{H}_{\infty }\)-norm of the reduced-order system \(\hat{\mathbf{G}}\) is required.
If the matrix I − M 0 M 0 T is nonsingular, then the projected Lur’e equation (2.15) can be written as the projected Riccati equation
where
Note that the invertibility of I − M 0 M 0 T depends on the topological structure of the circuit.
Theorem 2.3.2
Consider an MNA system (2.9), (2.10). Let the matrix M 0 be as in (2.17). Then I − M 0 M 0 T is nonsingular if and only if
where Z C and Z RC are the basis matrices for ker(A C T) and ker([ A R , A C ]T), respectively.
Proof
The result immediately follows from [54, Theorem 7].
The first condition in (2.23) is equivalent to the absence of loops of capacitors, voltage sources and current sources (CVI-loops) except for loops consisting of capacitive branches (C-loops). The second condition in (2.23) means that the circuit does not contain cutsets consisting of branches of inductors, voltage sources and current sources (LVI-cutsets) except for cutsets consisting of inductive branches (L-cutsets).
2.3.2 Balanced Truncation for RC Circuits
We now present a Lyapunov-based balanced truncation model reduction approach for RC circuits. In this approach, the Gramians of system (2.9) are defined as unique symmetric, positive semidefinite solutions of the projected continuous-time Lyapunov equations. The numerical solution of such equations is much less expensive than that of the projected Lur’e or Riccati equations. For a balanced system, these Gramians are both equal to a diagonal matrix
The numerical solution of such equations is much less exhausting than of the projected Lur’e or Riccati equations. For a balanced system, these Gramians are both equal to a diagonal matrix
where \(\varSigma =\mathrm{ diag}(\sigma _{1},\ldots,\sigma _{n_{f}})\). The values σ j are called the proper Hankel singular values of system G = (E, A, B, B T). They determine which states are important and which states can be removed from the system.
Note that Lyapunov-based balanced truncation does not, in general, guarantee the preservation of passivity in the reduced-order model. However, the RC circuit equations either have a symmetric structure
or they can be transformed under preservation of passivity into a symmetric form. Then Lyapunov-based balanced truncation applied to symmetric systems is known to be structure-preserving [45] and, hence, also passivity-preserving. All model reduction algorithms presented in this section have been developed in [56].
2.3.2.1 RCI Circuits
First, we consider RCI circuits consisting of resistors, capacitors and current sources only. The MNA matrices are then given by
Obviously, the symmetry condition (2.24) is fulfilled. In this case, the reduced-order system (2.13) can be computed by Algorithm 2.2.
Algorithm 2.2 Passivity-preserving balanced truncation for RCI circuits
Given a passive MNA system (2.9) with E, A, B as in (2.25), compute a reduced-order model (2.13).
-
1.
Compute the full column rank matrices Z C and Z R such that
$$\displaystyle{ \mathrm{im}(Z_{C}) =\mathrm{ ker}(A_{C}^{T}),\qquad \mathrm{im}(Z_{ R}) =\mathrm{ ker}(A_{R}^{T}). }$$ -
2.
Compute a full-rank Cholesky factor \(\tilde{X}\) of the solution \(X =\tilde{ X}\tilde{X}^{T}\) of the projected Lyapunov equation
$$\displaystyle{ EXA + AXE = -PBB^{T}P^{T},\qquad X = PXP^{T}, }$$where
$$\displaystyle{ P = I - Z_{R}^{}(Z_{R}^{T}EZ_{ R}^{})^{-1}Z_{ R}^{T}E - Z_{ C}^{}(Z_{C}^{T}AZ_{ C}^{})^{-1}Z_{ C}^{T}A }$$is the spectral projector onto the right deflating subspace of λE − A corresponding to the finite eigenvalues with negative real part.
-
3.
Compute the eigenvalue decomposition
$$\displaystyle\begin{array}{rcl} \tilde{X}^{T}E\tilde{X} = [\,U_{ 1},\,U_{2}\,]\left [\begin{array}{*{10}c} \varSigma _{1} & 0 \\ 0& \varSigma _{2}\end{array} \right ][\,U_{1},\,U_{2}\,]^{T},& & {}\end{array}$$(2.26)where [ U 1, U 2 ] is orthogonal, Σ 1 = diag(σ 1, …, σ r ) and Σ 2 = diag(σ r+1, …, σ q ).
-
4.
Compute the full-rank Cholesky factors \(B_{0} \in \mathbb{R}^{r_{0},m}\) and \(B_{\infty }\in \mathbb{R}^{r_{\infty },m}\) of the matrices R 0 = B 0 T B 0 and R ∞ = B ∞ T B ∞ given by
$$\displaystyle\begin{array}{rcl} R_{0} = B^{T}Z_{ R}^{}(Z_{R}^{T}EZ_{ R})^{-1}Z_{ R}^{T}B,\qquad R_{ \infty } = -B^{T}Z_{ C}^{}(Z_{C}^{T}AZ_{ C})^{-1}Z_{ C}^{T}B.& & {}\end{array}$$(2.27) -
5.
Compute the reduced-order system (2.13) with
$$\displaystyle{ \hat{E} = \left [\begin{array}{ccc} I_{r}& 0 & 0 \\ 0 & I_{r_{0}} & 0 \\ 0 & 0 & 0\end{array} \right ],\quad \hat{A} = \left [\begin{array}{ccc} A_{s}& 0& 0 \\ 0 & 0& 0\\ 0 & 0 & - I_{ r_{\infty }}\end{array} \right ],\quad \hat{B} =\hat{ C}^{T} = \left [\begin{array}{c} B_{s} \\ B_{0} \\ B_{\infty }\end{array} \right ], }$$(2.28)where
$$\displaystyle\begin{array}{rcl} A_{s} =\varSigma _{ 1}^{-1/2}U_{ 1}^{T}\tilde{X}^{T}A\tilde{X}U_{ 1}\varSigma _{1}^{-1/2}\quad and\quad B_{ s} =\varSigma _{ 1}^{-1/2}U_{ 1}^{T}\tilde{X}^{T}B.& & {}\end{array}$$(2.29)
One can show that the reduced-order system (2.13), (2.28) has the transfer function
where the matrices R 0 and R ∞ are as in (2.27), and A s and B s are given in (2.29). Furthermore, we have the following \(\mathcal{H}_{\infty }\)-norm error bound.
Theorem 2.3.3 ( [56])
Let an RCI circuit (2.9), (2.25) fulfill (A1´) and (A2). Then a reduced-order model (2.13), (2.28) obtained by Algorithm 2.2 is passive and reciprocal with an external signature \(S_{\mathrm{ext}} = I_{n_{I}}\) . Moreover, for the transfer functions G and \(\hat{\mathbf{G}}\) of the original system (2.9), (2.25) and the reduced-order model (2.13), (2.28), we have the \(\mathcal{H}_{\infty }\) -norm error bound
where σ j are the proper Hankel singular values of G = (E, A, B, B T) obtained in (2.26).
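The steps of Algorithm 2.2 and the bound of Theorem 2.3.3 can be exercised end-to-end in the easiest setting: every node carries a grounded capacitor and a grounded resistor, so A C and A R have full row rank, Z C and Z R are void, P = I and the R 0, R ∞ blocks vanish. The circuit below (3 nodes, resistor ladder, one current source) is hypothetical, and the symmetric substitution used to solve the generalized Lyapunov equation is an implementation choice, not the toolbox's method:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigh, sqrtm

# Hypothetical 3-node RCI circuit with A_C = I (grounded C at each node) and
# A_R of full row rank, so P = I and R_0 = R_inf = 0 in Algorithm 2.2.
Cm = np.diag([1.0, 2.0, 0.5])
E = Cm                                   # E = A_C C A_C^T
A = -np.array([[ 1.5, -1.0,  0.0],
               [-1.0,  2.5, -1.0],
               [ 0.0, -1.0,  1.5]])      # A = -A_R G A_R^T (symmetric, neg. def.)
B = np.array([[1.0], [0.0], [0.0]])      # single current source at node 1

# Step 2: solve E X A + A X E = -B B^T via the substitution X = E^{-1/2} Y E^{-1/2},
# which turns it into the standard Lyapunov equation A' Y + Y A' = -Q.
Ei = np.real(sqrtm(np.linalg.inv(E)))
Ap = Ei @ A @ Ei
Y = solve_continuous_lyapunov(Ap, -(Ei @ B) @ (Ei @ B).T)
X = Ei @ Y @ Ei
w, V = eigh(X)
w = np.clip(w, 0.0, None)
Xt = V @ np.diag(np.sqrt(w))             # factor with X = Xt Xt^T

# Step 3: proper Hankel singular values from the eigenvalues of Xt^T E Xt.
sig, U = eigh(Xt.T @ E @ Xt)
order = np.argsort(sig)[::-1]
sig, U = sig[order], U[:, order]

# Steps 4-5: truncate to order r; reduced model (2.28)-(2.29) with R_0 = R_inf = 0.
r = 2
U1, S1 = U[:, :r], np.diag(sig[:r] ** -0.5)
As = S1 @ U1.T @ Xt.T @ A @ Xt @ U1 @ S1
Bs = S1 @ U1.T @ Xt.T @ B

def tf(Em, Am, Bm, Cmat, s):
    return (Cmat @ np.linalg.solve(s * Em - Am, Bm))[0, 0]

# Sampled frequency-response error vs. the bound 2*(sigma_{r+1} + ... + sigma_q).
bound = 2.0 * sig[r:].sum()
err = max(abs(tf(E, A, B, B.T, 1j * w_) - tf(np.eye(r), As, Bs, Bs.T, 1j * w_))
          for w_ in np.logspace(-3, 3, 200))
print(err <= bound + 1e-10)
```

The sampled error is a lower bound on the \(\mathcal{H}_{\infty}\)-norm error, so it must respect the bound of Theorem 2.3.3.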
2.3.2.2 RCV Circuits
We now consider RCV circuits consisting of resistors, capacitors and voltage sources. Unfortunately, the MNA equations for such circuits do not satisfy the symmetry conditions (2.24). We can, however, transform the MLA equations with the system matrices
into a symmetric system. Such a transformation is the frequency inversion
The transfer function of the transformed system can be realized as
where
Reducing this system and applying the back transformation, we obtain a reduced-order model. The resulting model reduction method is given in Algorithm 2.3.
Algorithm 2.3 Passivity-preserving balanced truncation for RCV circuits
Given a passive MLA system (2.9) with E, A, B as in (2.30), compute a reduced-order model (2.13).
-
1.
Compute the full column rank matrices Y C and Y R such that im(Y C ) = ker(B C T) and im(Y R ) = ker(B R T).
-
2.
Compute a full-rank Cholesky factor \(\tilde{X}\) of the solution \(X =\tilde{ X}\tilde{X}^{T}\) of the projected Lyapunov equation
$$\displaystyle{ E_{\star }XA_{\star } + A_{\star }XE_{\star } = -P^{T}B_{\star }^{}B_{\star }^{T}P,\quad X = PXP^{T}, }$$where E ⋆, A ⋆, B ⋆ are as in (2.31) and
$$\displaystyle{ P = I - Y _{R}^{}(Y _{R}^{T}E_{ \star }^{}Y _{R}^{})^{-1}Y _{ R}^{T}E_{ \star }^{} - Y _{C}^{}(Y _{C}^{T}A_{ \star }^{}Y _{C}^{})^{-1}Y _{ C}^{T}A_{ \star }^{} }$$is the projector onto the right deflating subspace of λE ⋆ − A ⋆ corresponding to the finite eigenvalues with negative real part.
-
3.
Compute the eigenvalue decomposition
$$\displaystyle{ \tilde{X}^{T}E_{ \star }\tilde{X} = [\,U_{1},\,U_{2}\,]\left [\begin{array}{cc} \varSigma _{1} & 0 \\ 0& \varSigma _{2}\end{array} \right ][\,U_{1},\,U_{2}\,]^{T}, }$$(2.32)where [ U 1, U 2 ] is orthogonal, Σ 1 = diag(σ 1, …, σ r ) and Σ 2 = diag(σ r+1, …, σ q ).
-
4.
Compute the matrices
$$\displaystyle{ \begin{array}{l} A_{s} =\varSigma _{ 1}^{-1/2}U_{1}^{T}\tilde{X}^{T}A_{\star }\tilde{X}U_{1}^{}\varSigma _{1}^{-1/2},\quad B_{s} =\varSigma _{ 1}^{-1/2}U_{1}^{T}\tilde{X}^{T}B_{\star }, \\ R_{0} = B_{\star }^{T}Y _{R}^{}(Y _{R}^{T}E_{\star }^{}Y _{R}^{})^{-1}Y _{R}^{T}B_{\star }^{},\quad R_{\infty } = -B_{\star }^{T}Y _{C}^{}(Y _{C}^{T}A_{\star }^{}Y _{C}^{})^{-1}Y _{C}^{T}B_{\star }^{}, \\ \tilde{R}_{\infty } = R_{\infty }- B_{s}^{T}A_{s}^{-1}B_{s}^{}.\end{array} }$$ -
5.
Compute the eigenvalue decomposition
$$\displaystyle{ \left [\begin{array}{cc} \tilde{R}_{\infty }&R_{0} \\ R_{0} & 0\end{array} \right ] = [\,V _{1},\;V _{2}\,]\left [\begin{array}{cc} \varLambda _{0} & 0\\ 0 &0\end{array} \right ][\,V _{1},\;V _{2}\,]^{T}, }$$where [ V 1, V 2 ] is orthogonal and Λ 0 is nonsingular.
-
6.
Compute the reduced-order system (2.13) with
$$\displaystyle{ \hat{E} = \left [\begin{array}{cc} I & 0\\ 0 &\hat{E}_{\infty }\end{array} \right ]\!,\qquad \hat{A} = \left [\begin{array}{cc} \hat{A}_{1} & 0 \\ 0 &\hat{A}_{\infty }\end{array} \right ]\!,\qquad \hat{B} = -\hat{C}^{T} = \left [\begin{array}{cc} \hat{B}_{1} \\ \hat{B}_{\infty }\end{array} \right ]\!, }$$(2.33)where \(\hat{A}_{1} = A_{s}^{-1}\), \(\hat{B}_{1} = A_{s}^{-1}B_{s}^{}\) and
$$\displaystyle{ \hat{E}_{\infty } = V _{1}^{T}\left [\begin{array}{cc} R_{0} & 0 \\ 0 &0\end{array} \right ]V _{1}^{},\quad \hat{A}_{\infty } = V _{1}^{T}\left [\begin{array}{cc} \tilde{R}_{\infty }&R_{0} \\ R_{0} & 0\end{array} \right ]V _{1}^{},\quad \hat{B}_{\infty } = V _{1}^{T}\left [\begin{array}{cc} \tilde{R}_{\infty } \\ R_{0}\end{array} \right ]. }$$
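Steps 5 and 6 of Algorithm 2.3 involve only small dense matrices of size 2m × 2m. A Python/NumPy sketch of these two steps (the function name `step56` and the eigenvalue cutoff used to extract the nonsingular part Λ 0 are our assumptions):

```python
import numpy as np

def step56(Rt_inf, R0):
    # Step 5: eigendecomposition of the 2x2 block matrix [[Rt_inf, R0],[R0, 0]];
    # V1 spans the range of the nonsingular eigenvalue block Lambda_0.
    m = R0.shape[0]
    Z = np.zeros((m, m))
    M = np.block([[Rt_inf, R0], [R0, Z]])
    w, V = np.linalg.eigh(M)
    keep = np.abs(w) > 1e-12 * max(np.abs(w).max(), 1.0)
    V1 = V[:, keep]
    # Step 6: assembly of the "infinite" part of the reduced model
    E_inf = V1.T @ np.block([[R0, Z], [Z, Z]]) @ V1
    A_inf = V1.T @ M @ V1
    B_inf = V1.T @ np.vstack([Rt_inf, R0])
    return E_inf, A_inf, B_inf, V1
```

If R 0 is nonsingular, the 2×2 block matrix has full rank, so V 1 is square orthogonal and Â ∞ equals the diagonal eigenvalue matrix Λ 0.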
The following theorem provides the error bound for the reduced model (2.13), (2.33).
Theorem 2.3.4 ( [56])
Let an RCV circuit fulfill assumptions (A1´) and (A3). Then a reduced-order model (2.13), (2.33) obtained by Algorithm 2.3 is passive and reciprocal with an external signature \(S_{\mathrm{ext}} = I_{n_{V }}\) . Moreover, for the transfer functions G and \(\hat{\mathbf{G}}\) of the original system (2.9), (2.10) and the reduced-order model (2.13), (2.33), we have the \(\mathcal{H}_{\infty }\) -norm error bound
where σ j are the proper Hankel singular values of G ⋆ = (E ⋆, A ⋆, B ⋆, B ⋆ T) obtained in (2.32).
2.3.2.3 RCIV Circuits
Finally, we consider RCIV circuits that contain resistors, capacitors and both current as well as voltage sources. Such circuits are modeled by the linear system (2.9) with the MNA matrices
or the MLA matrices
Due to the reciprocity, the transfer function of this system can be partitioned in blocks as
see [56]. Assume that
-
(A4)
the circuit does not contain cutsets of current and voltage sources.
Then G VV (s) is invertible and G(s) has a (2,2) partial inverse defined as
This rational function can be realized as G (2,2)(s) = B 2,2 T(sE 2,2 − A 2,2)−1 B 2,2 with
Note that the (2,2) partial inversion can be interpreted as replacing all voltage sources by current sources. The system G (2,2) = (E 2,2, A 2,2, B 2,2, B 2,2 T) is symmetric and passive. Applying balanced truncation to this system and reversing the voltage replacement, we obtain the required reduced-order model, see Algorithm 2.4.
Algorithm 2.4 Passivity-preserving balanced truncation for RCIV circuits—I
Given a passive MNA system (2.9) with E, A, B as in (2.34), compute a reduced-order model (2.13).
-
1.
Compute a reduced-order model \(\hat{\mathbf{G}}^{(2,2)} = (\hat{E}_{2,2},\,\hat{A}_{2,2},\,\hat{B}_{2,2},\,\hat{C}_{2,2})\) by applying Algorithm 2.2 to the system G (2,2) = (E 2,2, A 2,2, B 2,2, B 2,2 T) as in (2.36).
-
2.
Compute the reduced-order system (2.13) with
$$\displaystyle{ \hat{E} = \left [\begin{array}{cc} \hat{E}_{2,2} & 0 \\ 0 & 0\end{array} \right ],\qquad \hat{A} = \left [\begin{array}{cc} \;\hat{A}_{2,2} & \hat{B}_{2} \\ -\hat{ B}_{2}^{T}& 0\end{array} \right ],\qquad \hat{B} =\hat{ C}^{T} = \left [\begin{array}{cc} \hat{B}_{1} & 0 \\ 0 & - I_{n_{V }}\end{array} \right ]. }$$(2.37)where \(\hat{B}_{1} =\hat{ B}_{2,2}[I_{n_{I}},\,0]^{T}\) and \(\hat{B}_{2} =\hat{ B}_{2,2}[0,\,I_{n_{V }}]^{T}\).
The following theorem establishes the properties of the reduced-order model (2.13), (2.37) and gives an error bound.
Theorem 2.3.5 ( [56])
Consider an RCIV circuit fulfilling assumptions (A1´), (A3) and (A4). Let Z R and Z C be the basis matrices for ker(A R T) and ker(A C T), respectively, and let Z R ′ be the basis matrix for im(A R ). Assume that A I T Z R (Z R T A C CA C T Z R )−1 Z R T A V = 0 and Z C T A V has full column rank. Then the reduced-order model (2.13), (2.37) obtained by Algorithm 2.4 is passive and reciprocal with the external signature \(S_{\mathrm{ext}} =\mathrm{ diag}(I_{n_{I}},-I_{n_{V }})\) . Moreover, for the transfer functions G and \(\hat{\mathbf{G}}\) of the original system (2.9), (2.34) and the reduced-order model (2.13), (2.37), we have the error bound
where σ j are the proper Hankel singular values of the (2,2) partially inverted system G (2,2) ,
with
An alternative approach for model reduction of RCIV circuits is based on considering the frequency-inverted MLA system
with the matrices
Let G ⋆(s) be partitioned in blocks as
Assume that
-
(A5)
the circuit does not contain loops of current and voltage sources.
Then G 11(s) is invertible and G ⋆(s) has a (1,1) partial inverse defined as
where
Reducing this symmetric system and reversing the initial transformation, we obtain the required reduced-order model. This model reduction method is presented in Algorithm 2.5.
Algorithm 2.5 Passivity-preserving balanced truncation for RCIV circuits—II
Given a passive MLA system (2.9) with E, A, B as in (2.35), compute a reduced-order model (2.13).
-
1.
Compute a reduced-order model \(\hat{\mathbf{G}}_{1} = (\hat{E}_{1},\,\hat{A}_{1},\,\hat{B}_{1},\,\hat{C}_{1})\) using Algorithm 2.3, where E ⋆, A ⋆ and B ⋆ are replaced, respectively, by E 1,1, A 1,1 and B 1,1 as in (2.38).
-
2.
Compute the reduced-order system (2.13) with
$$\displaystyle{ \begin{array}{c} \hat{E} = \left [\begin{array}{cc} \hat{E}_{1}& 0\\ 0 & 0\end{array} \right ]\!,\quad \hat{A} = \left [\begin{array}{cc} \hat{A}_{1} & \hat{B}_{11} \\ \hat{C}_{11}& 0\end{array} \right ]\!,\quad \hat{B} = \left [\begin{array}{cc} 0 &\hat{B}_{12} \\ I_{n_{I}}& 0\end{array} \right ]\!,\quad \hat{C} = \left [\begin{array}{cc} 0 & - I_{n_{I}} \\ \hat{C}_{21}& 0\end{array} \right ]\!,\end{array} }$$(2.39)where \(\hat{B}_{11} =\hat{ B}_{1}[I_{n_{I}},\,0]^{T}\), \(\hat{B}_{12} =\hat{ B}_{1}[0,\,I_{n_{V }}]^{T}\), \(\hat{C}_{11} = [I_{n_{I}},\,0]\hat{C}_{1}\) and \(\hat{C}_{21} = [0,\,I_{n_{V }}]\hat{C}_{1}\).
The following theorem establishes the properties of the reduced-order model (2.13), (2.39) and gives an error bound.
Theorem 2.3.6 ( [56])
Consider an RCIV circuit fulfilling assumptions (A1´), (A2) and (A5). Let Y R and Y C be the basis matrices for ker(B R T) and ker(B C T), respectively, and let Y R ′ be the basis matrix for im(B R ). Assume that B V T Y R (Y R T B C C −1 B C T Y R )−1 Y R T B I = 0 and Y C T B I has full column rank. Then the reduced-order model (2.13), (2.39) obtained by Algorithm 2.5 is passive and reciprocal with the external signature \(S_{\mathrm{ext}} =\mathrm{ diag}(I_{n_{I}},-I_{n_{V }})\) . Moreover, for the transfer functions G and \(\hat{\mathbf{G}}\) of the original system (2.9), (2.35) and the reduced-order model (2.13), (2.39), we have the error bound
where σ j are the proper Hankel singular values of the system (G ⋆)(1,1) ,
with
Remark 2.3.7
Model reduction methods for RC circuits can also be extended to RL circuits, which contain resistors, inductors and voltage and/or current sources. Observing that the frequency-inverted MNA equations for RLI circuits as well as the MLA equations for RLV circuits yield symmetric systems, we can design balanced truncation model reduction methods for RL circuits similar to Algorithms 2.2–2.5.
2.3.3 Numerical Aspects
The most expensive step in the presented model reduction algorithms is solving matrix equations. The numerical solution of the projected Lyapunov and Riccati equations will be discussed in Sects. 2.5.1 and 2.5.2, respectively. Here, we consider the computation of the matrix M 0 and the projectors P r and P l required in Algorithm 2.1 as well as the basis matrices required in Algorithms 2.2–2.5.
Fortunately, using the MNA structure of the system matrices in (2.10), the matrix M 0 and the projectors P r and P l can be computed in explicit form
where S int is given in (2.18), and
see [54, 72] for details. The basis matrices Z C and Z CRIV can be computed by analyzing the corresponding subgraphs of the given network graph as described in [23]. For example, the matrix Z C can be constructed in the form
by searching for the connected components of the C-subgraph consisting of the capacitive branches only. Here, \(\mathbf{1}_{k_{i}}\! =\! [1,\ldots,1]^{T}\! \in \! \mathbb{R}^{k_{i}}\), i = 1, …, s, and \(\Pi _{C}\) is a permutation matrix. For this purpose, we can use graph search algorithms like breadth-first search [40]. As a consequence, the nonzero columns of
form again an incidence matrix. In order to compute the basis matrix Z RIV −C ′, we first determine the basis matrix
for ker(A RIV −C T) from the associated graph. Then the complementary matrix Z RIV −C ′ can be determined as
where S RIV −C is a selector matrix constructed from the identity matrix by removing the 1st, (l 1 + 1)st, …, (l 1 + … + l t + 1)st columns. One can see that the resulting basis matrices and also the matrices H 0, H 1, H 2, H 3, H 5 and H 6 are sparse. Of course, the projector P r will never be constructed explicitly. Instead, we use projector-vector products as required in the numerical solution of the Riccati equation.
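The construction of Z C from the connected components of the C-subgraph can be sketched as follows in Python (the function name is ours, nodes are numbered 0, …, n−1, and the usual elimination of the ground node in the reduced incidence matrix is ignored for simplicity):

```python
from collections import deque
import numpy as np

def zc_from_c_subgraph(n_nodes, cap_edges):
    # Basis of ker(A_C^T): vectors constant on each connected component
    # of the C-subgraph; one indicator column of ones per component.
    adj = [[] for _ in range(n_nodes)]
    for u, v in cap_edges:
        adj[u].append(v)
        adj[v].append(u)
    comp = [-1] * n_nodes
    c = 0
    for s in range(n_nodes):          # BFS over every component
        if comp[s] != -1:
            continue
        comp[s] = c
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if comp[v] == -1:
                    comp[v] = c
                    q.append(v)
        c += 1
    Z = np.zeros((n_nodes, c))
    for i, ci in enumerate(comp):
        Z[i, ci] = 1.0
    return Z
```

Each column is the indicator vector of one component, so A C T Z C = 0 holds by construction; a suitable column ordering and row permutation recovers the block form with the ones vectors 1 k i .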
Algorithms 2.3 and 2.5 require the knowledge of the reduced loop matrix B that can be obtained by the search for a loop basis in the circuit graph [2, 22, 39, 40]. Since the efficiency of the numerical solution of the projected Lyapunov equations can be improved if the matrix coefficients are sparse, it is preferable to choose a basis of loops whose lengths are as small as possible. This kind of problem was treated in [49].
The basis matrices Z R , Z R ′ and Z C required in Algorithms 2.2 and 2.4 can be computed using graph search algorithms as described above. The basis matrices Y R and Y C required in Algorithms 2.3 and 2.5 can be determined by searching for dependent loops in the graphs \(\mathfrak{G}_{R}\) and \(\mathfrak{G}_{C}\) consisting of the resistive and capacitive branches, respectively. Furthermore, the basis matrix Y R ′ can be obtained by removing the linearly dependent columns of B R . Such columns can be determined by searching for cutsets in the graph \(\mathfrak{G}_{R}\), e.g., [2]. For the analysis of loop dependency and the search for cutsets in a graph, there exists a variety of efficient algorithms, see [40] and the references therein.
2.4 Model Reduction of Nonlinear Circuits
In this section, we present a model reduction approach for nonlinear circuits containing large linear subnetworks. This approach is based on decoupling the nonlinear circuit equations (2.2) into linear and nonlinear subsystems in an appropriate way. The linear part is then approximated by a reduced-order model using one of the model reduction algorithms from Sect. 2.3 depending on the topological structure of the linear subcircuit. The nonlinear part either remains unchanged or is approximated by a trajectory piece-wise linear (TPWL) approach [57] based on linearization, or proper orthogonal decomposition (POD), e.g., [65], which relies on snapshot calculations. If the circuit contains semiconductor devices modeled by instationary nonlinear partial differential equations [67, 73], these equations can first be discretized in space and then reduced using the POD method as described in [35, 36], see also Chap. 1 in this book. Finally, combining the reduced-order linear and nonlinear models, we obtain a reduced-order nonlinear model that approximates the MNA system (2.2). The concept of this model reduction approach is illustrated in Fig. 2.1. We now describe this approach in more detail.
First, we consider the decoupling procedure developed in [68] that allows us to extract a linear subcircuit from a nonlinear circuit. This procedure is based on the formal replacement of nonlinear inductors by controlled current sources, nonlinear capacitors by controlled voltage sources and nonlinear resistors by equivalent circuits consisting of two linear resistors in series and one controlled current source connected in parallel with one of the introduced linear resistors. Such replacements are demonstrated in Fig. 2.2, where we present two circuits before and after the replacements. It should be noted that the suggested replacements introduce additional nodes and state variables, but neither additional CV-loops nor LI-cutsets occur in the decoupled linear subcircuit, meaning that its index does not exceed the index of the original system (2.2). The following theorem establishes the decoupling on the equation level.
Theorem 2.4.1
[68] Let and satisfy the relation , and let be symmetric, positive definite. Assume that and satisfy
Then system (2.2) together with the relations
for the additional unknowns and has the same components η, η z , , , and in the state vector as the system
coupled with the linear DAE system
where , ,
and the incidence and element matrices are given by
Note that the system matrices in the decoupled linear system (2.46)–(2.48) are in the MNA form. This system has the state space dimension
and the input space dimension . It should also be noted that the state equivalence in Theorem 2.4.1 is independent of the choice of the matrices G 1 and G 2 satisfying the assumptions in the theorem. The substitution of nonlinear resistors with equivalent circuits as described above implies that G 1 and G 2 are both diagonal and their diagonal elements are conductances of the first and the second linear resistors, respectively, in the replacement circuits.
The following theorem establishes the well-posedness of the decoupled system (2.46)–(2.48).
Theorem 2.4.2 ( [68])
Let a nonlinear circuit satisfy assumptions (A1)–(A3). Assume that it contains neither loops consisting of nonlinear capacitors and voltage sources (\(\widetilde{\mathrm{C}}\) V-loops ) nor cutsets of nonlinear inductors and/or current sources (\(\widetilde{\mathrm{L}}\) I-cutsets ). Then the decoupled linear DAE system (2.46)–(2.48) modeling the linear subcircuit is well-posed in the sense that
-
1.
the matrices C, L and G are symmetric and positive definite,
-
2.
the matrix A V has full column rank,
-
3.
the matrix [ A C , A L , A R , A V ] has full row rank.
Note that the presence of \(\widetilde{\mathrm{C}}\) V-loops and \(\widetilde{\mathrm{L}}\) I-cutsets in the original circuit would lead after the replacement of the nonlinear capacitors and nonlinear inductors by voltage sources and current sources, respectively, to V-loops and I-cutsets in the decoupled circuit that would violate its well-posedness.
The next theorem shows that slightly stronger conditions for the original nonlinear circuit guarantee that the decoupled linear DAE system (2.46)–(2.48) is well-posed and, in addition, has index at most one.
Theorem 2.4.3 ( [68])
Let a nonlinear circuit satisfy assumptions (A1)–(A3). If this circuit contains neither CV-loops except for \(\bar{\mathrm{C}}\) -loops with linear capacitors nor LI-cutsets, then the linear system (2.46)–(2.48) modeling the extracted linear subcircuit is well-posed and is of index at most one.
The index one condition for system (2.46)–(2.48) implies that its transfer function is proper. The approximation of such systems is much easier than that of systems with an improper transfer function [71].
Depending on the topology of the extracted linear subcircuit, we can now apply one of the model reduction algorithms presented in Sect. 2.3 to the linear system (2.46)–(2.48). As a result, we obtain a reduced-order model (2.13) which can be combined with the nonlinear subsystem in order to get a reduced-order nonlinear model.
According to the block structure of the input and output vectors of the extracted linear DAE system (2.46)–(2.48), the reduced-order model (2.13) can be written in the form
where \(\hat{y}_{\ell j} =\hat{ C}_{j}\hat{x}_{\ell}\), j = 1, …, 5, approximate the corresponding components of the output of (2.46). Taking into account that and , Eqs. (2.43) and (2.45) are approximated by
respectively, where and are approximations to and , respectively. Furthermore, for j z and η z defined in (2.42) and (2.44), respectively, we have
Since , this equation is approximated by
where approximates . Combining (2.45), (2.49), (2.50) and (2.51), we obtain the reduced-order nonlinear model
where ,
and
This model can now be used for further investigations in steady-state analysis, transient analysis or sensitivity analysis of electrical circuits. Note that the error bounds for the reduced-order linear subsystem (2.49) presented in Sect. 2.3 can be used to estimate the error in the output of the reduced-order nonlinear system (2.52)–(2.54), see [34] for such estimates for a special class of nonlinear circuits.
2.5 Solving Matrix Equations
In this section, we consider numerical algorithms for solving the projected Lyapunov and Riccati equations developed in [55, 70, 71]. In practice, the numerical rank of the solutions of these equations is often much smaller than the dimension of the problem. Such solutions can then be well approximated by low-rank matrices, and these low-rank approximations can be determined directly in factored form. Replacing the Cholesky factors of the Gramians in Algorithms 2.1–2.5 by their low-rank factors significantly reduces the computational complexity and storage requirements of the balancing-related model reduction methods and makes them well suited for large-scale circuit equations.
2.5.1 ADI Method for Projected Lyapunov Equations
We focus first on solving the projected Lyapunov equation
$$\displaystyle{ EXA^{T} + AXE^{T} = -P_{l}^{}BB^{T}P_{ l}^{T},\qquad X = P_{r}^{}XP_{r}^{T}, }$$(2.55)using the alternating direction implicit (ADI) method. Such an equation has to be solved in Algorithms 2.2–2.5. The ADI method was first proposed for standard Lyapunov equations [14, 44, 50, 76] and then extended in [70] to projected Lyapunov equations. The generalized ADI iteration for the projected Lyapunov equation (2.55) is given by
with an initial matrix X 0 = 0 and shift parameters \(\tau _{1},\ldots,\tau _{k} \in \mathbb{C}_{-}\). Here, \(\overline{\tau }_{k}\) denotes the complex conjugate of τ k . If the pencil λE − A is c-stable, then X k converges towards the solution of the projected Lyapunov equation (2.55). The rate of convergence depends strongly on the choice of the shift parameters. The optimal shift parameters providing superlinear convergence satisfy the generalized ADI minimax problem
where Sp f (E, A) denotes the finite spectrum of the pencil λE − A. If the matrices E and A satisfy the symmetry condition (2.24), then λE − A has real non-positive eigenvalues. In this case, the optimal real shift parameters can be determined by the selection procedure proposed in [78] once the spectral bounds
are available. Here Sp−(E, A) denotes the set of finite eigenvalues of λE − A with negative real part. In the general case, suboptimal ADI parameters can be obtained from a set of approximate finite eigenvalues of λE − A that are largest and smallest in modulus, computed by an Arnoldi procedure [50, 70]. Other parameter selection techniques developed for standard Lyapunov equations [15, 63, 77] can also be used for the projected Lyapunov equation (2.55).
A low-rank approximation to the solution of the projected Lyapunov equation (2.55) can be computed in factored form X ≈ Z k Z k T using a low-rank version of the ADI method (LR-ADI) as presented in Algorithm 2.6.
Algorithm 2.6 The LR-ADI method for the projected Lyapunov equation
Given E, \(A \in \mathbb{R}^{n,n}\), \(B \in \mathbb{R}^{n,m}\), projector P l and shift parameters \(\tau _{1},\ldots,\tau _{q} \in \mathbb{C}_{-}\), compute a low-rank approximation X ≈ Z k Z k T to the solution of the projected Lyapunov equation (2.55).
-
1.
\(Z^{(1)} = \sqrt{-2\mathrm{Re }(\tau _{1 } )}\,(E +\tau _{1}A)^{-1}P_{ l}B\), Z 1 = Z (1);
-
2.
FOR k = 2, 3, …
$$\displaystyle{ \begin{array}{rcl} Z^{(k)} & = & \sqrt{ \frac{\mathrm{Re }(\tau _{k } )} {\mathrm{Re}(\tau _{k-1})}}\left (I - (\overline{\tau }_{k-1} +\tau _{k})(E +\tau _{k}A)^{-1}A\right )Z^{(k-1)}, \\ Z_{k} & = & [\,Z_{k-1}, Z^{(k)}\,];\end{array} }$$END FOR
In order to guarantee that the factors Z k are real in the case of complex shift parameters, we take these parameters in complex conjugate pairs \(\{\tau _{k},\tau _{k+1} = \overline{\tau }_{k}\}\). The approach developed in [16] for the efficient handling of complex shift parameters in the LR-ADI method can then also be extended to the projected Lyapunov equation (2.55). At each ADI iteration we have \(Z_{k} = [\,Z^{(1)},\ldots,Z^{(k)}\,] \in \mathbb{R}^{n,mk}\). To keep the low-rank structure in Z k for large mk, we can compress the columns of Z k using the rank-revealing QR factorization [18] as described in [9].
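The column compression just mentioned can be prototyped with an SVD in place of a rank-revealing QR factorization (the name `compress_columns` and the relative tolerance are our assumptions); it returns a thinner factor with the same Gramian product:

```python
import numpy as np

def compress_columns(Z, tol=1e-12):
    # Find Z2 with fewer columns such that Z2 @ Z2.T ~ Z @ Z.T.
    # An SVD is used here for brevity instead of a rank-revealing QR.
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))     # numerical rank of Z
    return U[:, :r] * s[:r]             # scale the leading left singular vectors
```

Since Z Z^T = U diag(s)^2 U^T, discarding negligible singular values leaves the product Z Z^T unchanged up to the tolerance.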
Finally, note that the matrices (E + τ k A)−1 in Algorithm 2.6 do not have to be computed explicitly. Instead, we solve linear systems of the form (E + τ k A)x = P l b either by computing (sparse) LU factorizations and forward/backward substitutions or by using iterative Krylov subspace methods [62].
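For small dense test problems, Algorithm 2.6 can be written down almost verbatim in Python/NumPy. The sketch below is ours (dense solves with E + τ k A for illustration only, whereas a production code would use sparse LU factorizations or Krylov solvers), assuming the projected Lyapunov equation in the form E X A^T + A X E^T = −P l B B^T P l^T:

```python
import numpy as np

def lr_adi(E, A, B, shifts, Pl=None):
    # LR-ADI iteration of Algorithm 2.6, returning Z with X ~ Z Z^T.
    PB = B if Pl is None else Pl @ B
    tau = shifts[0]
    V = np.sqrt(-2.0 * np.real(tau)) * np.linalg.solve(E + tau * A, PB)
    Z = V
    for k in range(1, len(shifts)):
        t, tp = shifts[k], shifts[k - 1]
        # Z^{(k)} = sqrt(Re t / Re tp) (I - (conj(tp)+t)(E+tA)^{-1}A) Z^{(k-1)}
        V = np.sqrt(np.real(t) / np.real(tp)) * (
            V - (np.conj(tp) + t) * np.linalg.solve(E + t * A, A @ V))
        Z = np.hstack([Z, V])
    return Z
```

For a diagonal test problem with reciprocal-eigenvalue shifts the iteration terminates exactly after n steps, which makes a convenient correctness check.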
2.5.2 Newton’s Method for Projected Riccati Equations
We consider now the numerical solution of the projected Riccati equation
$$\displaystyle{ EXF^{T} + FXE^{T} + EXB_{c}^{T}B_{ c}^{}XE^{T} + P_{ l}^{}B_{o}^{}B_{o}^{T}P_{ l}^{T} = 0,\quad X = P_{ r}^{}XP_{r}^{T}, }$$(2.57)
with F, B c and B o as in (2.22). Such an equation has to be solved in Algorithm 2.1. The minimal solution X min of (2.57) is at least semi-stabilizing in the sense that all the finite eigenvalues of λE − F − EX min B c T B c are in the closed left half-plane.
Consider the spaces
Since X = P r XP r T, EP r = P l E and FP r = P l F, the Riccati operator given by
$$\displaystyle{ \mathcal{R}(X) = P_{l}^{}B_{o}^{}B_{o}^{T}P_{ l}^{T} + EXF^{T} + FXE^{T} + EXB_{ c}^{T}B_{ c}^{}XE^{T} }$$
maps \(\mathbb{S}_{P_{r}}\) into \(\mathbb{S}_{P_{l}}\). Then the Fréchet derivative of \(\mathcal{R}\) at \(X \in \mathbb{S}_{P_{r}}\) is a linear operator \(\mathcal{R}_{X}^{{\prime}}\!: \mathbb{S}_{P_{r}} \rightarrow \mathbb{S}_{P_{l}}\) defined as
for \(N \in \mathbb{S}_{P_{r}}\). Taking into account that N = P r N = NP r T, we have
Then Newton’s method for the projected Riccati equation (2.57) can be written as
The standard formulation of this method is given in Algorithm 2.7.
Algorithm 2.7 Newton’s method for the projected Riccati equation
Given E, \(F \in \mathbb{R}^{n,n}\), \(B_{c} \in \mathbb{R}^{m,n}\), \(B_{o} \in \mathbb{R}^{n,m}\), projectors P r , P l and a stabilizing initial guess X 0, compute an approximate solution of the projected Riccati equation (2.57).
FOR j = 0, 1, 2, …
-
1.
Compute F j = F + EX j B c TB c .
-
2.
Solve the projected Lyapunov equation
$$\displaystyle{ F_{j}^{}N_{j}^{}E^{T} + EN_{ j}^{}F_{j}^{T} = -P_{ l}^{}\mathcal{R}(X_{j})P_{l}^{T},\quad N_{ j}^{} = P_{r}^{}N_{j}^{}P_{r}^{T}. }$$ -
3.
Compute X j+1 = X j + N j .
END FOR
As in the standard case [41], we can combine the second and third steps in Algorithm 2.7 and compute the new iterate X j+1 directly from the projected Lyapunov equation as presented in Algorithm 2.8.
Algorithm 2.8 The Newton-Kleinman method for the projected Riccati equation
Given E, \(F \in \mathbb{R}^{n,n}\), \(B_{c} \in \mathbb{R}^{m,n}\), \(B_{o} \in \mathbb{R}^{n,m}\), projectors P r , P l and a stabilizing initial guess X 0, compute an approximate solution of the projected Riccati equation (2.57).
FOR j = 1, 2, …
-
1.
Compute K j = EX j−1 B c T and F j = F + K j B c .
-
2.
Solve the projected Lyapunov equation
$$\displaystyle{ EX_{j}^{}\,F_{j}^{T} + F_{ j}^{}X_{j}^{}\,E^{T} = -P_{ l}^{}(B_{o}^{}B_{o}^{T} - K_{ j}^{}K_{j}^{T})P_{ l}^{T},\qquad X_{ j}^{} = P_{r}^{}X_{j}^{}P_{r}^{T}. }$$
END FOR
Although Algorithms 2.7 and 2.8 are equivalent, they behave differently in finite precision arithmetic, and there are significant differences in their implementation, especially for large-scale problems.
Similarly to the standard state space case [8, 74], one can show that if λE − F is c-stable, then for X 0 = 0, all λE − F j are also c-stable and \(\lim \limits _{j\rightarrow \infty }X_{j} = X_{\min }\), see [10]. The convergence is quadratic if the pencil λE − F − EX min B c T B c is c-stable. Some difficulties may occur if the pencil λE − F has eigenvalues on the imaginary axis. For circuit equations, these eigenvalues are uncontrollable and unobservable [54]. In that case, similarly to [12], one could choose a special stabilizing initial guess X 0 that ensures the convergence of the Newton-Kleinman iteration. However, the computation of such a guess for large-scale problems remains an open problem.
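The Newton-Kleinman iteration is easy to prototype in the standard state-space case, i.e. with E = I and trivial projectors. The following Python sketch is ours (function names, a dense Kronecker-based Lyapunov solver, and a fixed iteration count instead of a convergence test are assumptions), assuming the Riccati equation in the form implied by Algorithm 2.8, F X + X F^T + X B c^T B c X + B o B o^T = 0:

```python
import numpy as np

def lyap(F, Q):
    # Solve F X + X F^T = -Q (small dense) via Kronecker vectorization.
    n = F.shape[0]
    K = np.kron(np.eye(n), F) + np.kron(F, np.eye(n))
    x = np.linalg.solve(K, -Q.reshape(-1, order='F'))
    return x.reshape(n, n, order='F')

def newton_kleinman(F, Bc, Bo, maxit=30):
    # Newton-Kleinman iteration started at X_0 = 0, cf. Algorithm 2.8
    # with E = I and trivial projectors.
    X = np.zeros((F.shape[0], F.shape[0]))
    for _ in range(maxit):
        K = X @ Bc.T                 # K_j = X_{j-1} Bc^T
        Fj = F + K @ Bc              # F_j = F + K_j Bc
        X = lyap(Fj, Bo @ Bo.T - K @ K.T)
    return X
```

A fixed point X of this iteration satisfies F X + X F^T + X B c^T B c X + B o B o^T = 0, as can be verified by expanding F j and K j; for stable F and a sufficiently small quadratic term, the iterates converge quadratically to the minimal solution.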
A low-rank approximation to the minimal solution of the projected Riccati equation (2.57) can be computed in factored form \(X_{\min } \approx \tilde{ R}\tilde{R}^{T}\) with \(\tilde{R} \in \mathbb{R}^{n,k}\), k ≪ n using the same approach as in [11]. Starting with K 1 = EX 0 B c T and F 1 = F + K 1 B c , we solve in each Newton-Kleinman iteration two projected Lyapunov equations
for the low-rank approximations X 1,j ≈ R 1,j R 1,j T and X 2,j ≈ R 2,j R 2,j T, respectively, and then compute K j+1 = E(R 1,j R 1,j T − R 2,j R 2,j T)B c T and F j+1 = F + K j+1 B c . If the convergence is observed after j max iterations, then an approximate solution \(X_{\min } \approx \tilde{ R}\tilde{R}^{T}\) of the projected Riccati equation (2.57) can be computed in factored form by solving the projected Lyapunov equation
with \(Q = [\,B_{o}, E(X_{1,j_{\max }} - X_{2,j_{\max }})B_{c}^{T}]\) provided λE − F is c-stable. For computing low-rank factors of the solutions of the projected Lyapunov equations (2.58)–(2.60), we can use the generalized LR-ADI method presented in Sect. 2.5.1. Note that in this method we need to compute the products (E + τF j )−1 w, where \(w \in \mathbb{R}^{n}\) and \(E +\tau F_{j} = E +\tau (A - BB^{T}) -\tau \hat{ K}_{j}B_{c}\) with the low-rank matrices \(B_{c} \in \mathbb{R}^{m,n}\) and \(\hat{K}_{j} = \sqrt{2}P_{l}BM_{0}^{T}J_{c}^{-T} - K_{j} \in \mathbb{R}^{n,m}\). One can use the Sherman-Morrison-Woodbury formula [28, Sect. 2.1.3] to compute these products as
where w 1 = (E + τ(A − BB T))−1 w and \(M_{\hat{K}_{j}} =\tau (E +\tau (A - BB^{T}))^{-1}\hat{K}_{j}\) can be determined by solving linear systems with the sparse matrix E + τ(A − BB T) either by computing sparse LU factorization or by using Krylov subspace methods [62].
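The Sherman-Morrison-Woodbury solve described above can be encapsulated in a small generic helper (the names `smw_solve` and `solve_M` are ours; in the setting of this section, `solve_M` would typically apply a stored sparse LU factorization of E + τ(A − BB^T)):

```python
import numpy as np

def smw_solve(solve_M, U, V, w):
    # Solve (M + U V) x = w given a solver for M, via
    #   x = M^{-1}w - M^{-1}U (I_k + V M^{-1}U)^{-1} V M^{-1}w,
    # so that only systems with M and a small k x k matrix are solved.
    w1 = solve_M(w)
    MU = solve_M(U)
    k = U.shape[1]
    return w1 - MU @ np.linalg.solve(np.eye(k) + V @ MU, V @ w1)
```

The cost per call is one (block) solve with M plus dense operations with k columns, which is what makes the low-rank update of F j cheap to handle.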
2.6 MATLAB Toolbox PABTEC
In this section, we briefly describe the MATLAB toolbox PABTEC, which provides functions for the analysis, model reduction, simulation and visualization of circuit equations. PABTEC stands for PAssivity-preserving Balanced Truncation for Electrical Circuits.
Figure 2.3 shows the main strategy of PABTEC. First, the user has to specify the electrical circuit under consideration. The input data for the main routine pabtec.m are incidence matrices describing the topological structure of the circuit, element matrices for linear circuit components, element relations for nonlinear circuit components and some parameters that can be initialized and verified in the routine inipar.m.
Once the circuit is specified, it can be analyzed with the PABTEC routine sysana.m, which delivers information on the topological structure of the circuit, its well-posedness and its index. If the circuit contains nonlinear elements, it will be decoupled into linear and nonlinear parts as described in Sect. 2.4. Model reduction of the linear (sub)circuit is implemented in the routine pabtecl.m. Linear circuits that contain neither inductors nor capacitors and circuits without resistors cannot be reduced with PABTEC. For model reduction of large resistive networks, one can use the graph-based algorithm presented in [60], which is not yet included in PABTEC. For other network structures, an appropriate model reduction method will be chosen automatically. RC and RL circuits are reduced by the Lyapunov-based balanced truncation algorithms presented in Sect. 2.3.2, while for model reduction of general RLC circuits, Algorithm 2.1 is applied. If the circuit does not contain CVI-loops and LIV-cutsets, then the Gramian in this algorithm is determined by solving the projected Riccati equation (2.57). Otherwise, we have to solve the projected Lur’e equation (2.15). Independent of the topological structure of the circuit, the model reduction algorithms include the following steps: preprocessing, solving matrix equations, model reduction and postprocessing. In the preprocessing step, the basis matrices required in the projector P r or P are computed using graph-theoretical algorithms. If necessary, the auxiliary matrices H k and the matrix M 0 are also computed. The projected Lyapunov equations are solved by the LR-ADI method described in Sect. 2.5.1, while the projected Riccati equation is solved by the Newton or Newton-Kleinman method presented in Sect. 2.5.2. The numerical methods for large-scale projected Lur’e equations proposed in [53] will be included in a future release of PABTEC.
Note that the matrix equation solvers in PABTEC can be seen as extensions of the corresponding functions in the MATLAB toolbox LyaPACK
Footnote 1[51] and its successor MESS.
Footnote 2 Postprocessing involves the computation of error bounds and reduced-order initial vectors required for simulation.
The last step in PABTEC is the combination of the reduced-order linear model with the unchanged nonlinear part. PABTEC also includes the routines simorih.m and simredh.m for simulation of the original and reduced-order models, respectively.
PABTEC provides the following routines:
-
main routines listed in Table 2.1, which can be called by the user in the main program;
-
supplementary routines listed in Table 2.2, which are used in the main routines;
-
auxiliary routines listed in Table 2.3, which are used in the main and supplementary routines;
-
user supplied routines listed in Table 2.4, which provide the information on the circuit.
The MATLAB toolbox PABTEC can be used by a line command or via a graphical user interface (GUI). PABTEC-GUI contains four tab panels shown in Figs. 2.4 and 2.5 that enable the user to upload the circuit, set up all required parameters, perform model reduction, simulate the original and reduced-order systems and save these systems and simulation data.
2.7 Numerical Examples
In this section, we present some results of numerical experiments to demonstrate the properties of the presented model reduction methods for linear and nonlinear circuits. The computations were done on an IBM RS 6000 44P Model 270 with machine precision \(\varepsilon = 2.22 \times 10^{-16}\) using MATLAB 7.0.4.
Example 2.7.1
The first example is a transmission line model from [7] consisting of a scalable number of RLC ladders. We have a single-input single-output reciprocal passive DAE system (2.9), (2.10) of order n = 60,000. The minimal solution of the projected Riccati equation (2.21) was approximated by a low-rank matrix \(X_{\min} \approx \widetilde{R}\,\widetilde{R}^{T}\) with \(\widetilde{R} \in \mathbb{R}^{n,58}\) using Newton's method as presented in Algorithm 2.7.
The original system was approximated by a reduced model of order \(n_r = 16\) using Algorithm 2.1 with the error bound \(\gamma = 2.72 \cdot 10^{-5}\).
For comparison, we have also computed reduced models of order \(n_r = 117\) using the PRIMA and SPRIM algorithms [26, 48]. This order was chosen as the smallest integer such that the absolute error \(\vert \hat{\mathbf{G}}(\,j\omega ) -\mathbf{G}(\,j\omega )\vert\) for the SPRIM model is below γ on the frequency interval [10−5, 1010]. In Fig. 2.6, we display the magnitude of the frequency responses G( jω) and \(\hat{\mathbf{G}}(\,j\omega )\) of the original and reduced-order models. Figure 2.7 shows the absolute errors \(\vert \hat{\mathbf{G}}(\,j\omega ) -\mathbf{G}(\,j\omega )\vert\) together with the error bound γ. One can see that PABTEC provides a much smaller system while maintaining better global approximation properties. It should also be noted that the result for SPRIM is presented here for the best choice of the expansion point, found after several runs of this algorithm. Taking this into account, the reduction time for the PABTEC method becomes comparable to the actual reduction time for SPRIM.⊲
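The frequency-domain comparison above amounts to evaluating \(\mathbf{G}(\,j\omega) = C(\,j\omega E - A)^{-1}B + D\) on a frequency grid and taking pointwise absolute differences. A minimal sketch for a descriptor system (toy data, not the transmission-line model):

```python
import numpy as np

def freq_response(E, A, B, C, D, omegas):
    # G(jw) = C (jw E - A)^{-1} B + D for a regular pencil (E, A)
    return np.array([C @ np.linalg.solve(1j * w * E - A, B) + D
                     for w in omegas])

# Toy descriptor system with singular E; here G(jw) = 2/(jw + 1) + 1
E = np.diag([1.0, 1.0, 0.0])
A = -np.eye(3)
B = np.ones((3, 1))
C = np.ones((1, 3))
D = np.zeros((1, 1))
omegas = np.logspace(-2, 2, 50)
G = freq_response(E, A, B, C, D, omegas)
```

Given a second response G_hat of a reduced model on the same grid, the plotted absolute error is simply np.abs(G_hat - G), and its maximum over the grid can be compared against the error bound γ.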
Example 2.7.2
Next we consider the nonlinear circuit shown in Fig. 2.8. It contains 1501 linear capacitors, 1500 linear resistors, 1 voltage source and 1 diode. Such a circuit is described by the DAE system (2.2), (2.3) of state space dimension n = 1503. We simulate this system on the time interval \(\mathbb{I} = [0\,\mathrm{s},0.07\,\mathrm{s}]\) with a fixed stepsize of \(10^{-5}\,\mathrm{s}\) using the BDF method of order 2. The voltage source is given by \(v_{\mathcal{V}}(t) = 10\sin(100\,\pi\,t)^{4}\) V, see Fig. 2.9. The linear resistors all have the same resistance \(R = 2\,\mathrm{k}\Omega\), the linear capacitors all have the same capacitance \(C = 0.02\) μF, and the diode has the characteristic curve \(g(v_{\widetilde{\mathcal{R}}}) = 10^{-14}(\exp(40 \tfrac{1}{\mathrm{V}}\,v_{\widetilde{\mathcal{R}}}) - 1)\) A.
The diode was replaced by an equivalent linear circuit as described in Sect. 2.4. The resulting linear system of order \(n_{\ell} = 1504\) was approximated by a reduced model of order \(n_r = r + r_0\), where \(r_0 = \operatorname{rank}(I - M_0)\) and r satisfies the condition \(\gamma_{r+1} + \ldots + \gamma_q < \mathrm{tol}\) with a prescribed tolerance tol. For comparison, we computed the reduced-order linear models for the tolerances tol = 10−2, 10−3, 10−4, 10−5. The numerical results are given in Fig. 2.10. The upper plot shows the outputs \(y(t) = -j_{\mathcal{V}}(t)\) and \(\hat{y}(t)\) of the original and reduced-order nonlinear systems, respectively, whereas the lower plot shows the error \(\vert \hat{y}(t) - y(t)\vert\).
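The truncation order r can be read off directly from the decreasingly ordered characteristic values; a small helper implementing the tail-sum condition above (the function name select_order is our own, not a PABTEC routine):

```python
import numpy as np

def select_order(gammas, tol):
    # Smallest r such that gamma_{r+1} + ... + gamma_q < tol,
    # for characteristic values gamma_1 >= gamma_2 >= ... >= gamma_q.
    g = np.asarray(gammas, dtype=float)
    tails = np.concatenate([np.cumsum(g[::-1])[::-1], [0.0]])  # tails[r] = sum g[r:]
    return int(np.argmax(tails < tol))
```

For example, with characteristic values [1, 0.1, 0.01, 0.001] and tol = 0.05, two states are kept, since the truncated tail 0.011 is below the tolerance.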
Table 2.5 demonstrates the efficiency of the proposed model reduction method. One can see that as the tolerance decreases, the dimension of the reduced-order system increases while the error in the output decreases. The speedup is defined as the simulation time for the original system divided by the simulation time for the reduced-order model. For example, a speedup of 219 was achieved in the simulation of the reduced-order nonlinear model of dimension \(\hat{n} = 13\) with the error \(\|\hat{y} - y\|_{\mathbb{L}_{2}(\mathbb{I})} = 6.2 \cdot 10^{-7}\), compared to the simulation of the original system. ⊲
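The quantities reported in Table 2.5 are straightforward to compute from the simulation data; a short sketch (function names are ours):

```python
import numpy as np

def l2_error(t, y, y_hat):
    # Discrete L2(I)-norm of y_hat - y on the time grid t (trapezoidal rule)
    e2 = (np.asarray(y_hat) - np.asarray(y)) ** 2
    dt = np.diff(t)
    return np.sqrt(np.sum(dt * (e2[:-1] + e2[1:]) / 2))

def speedup(time_original, time_reduced):
    # Simulation time of the original system divided by that of the reduced model
    return time_original / time_reduced
```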
Example 2.7.3
We now consider the nonlinear circuit shown in Fig. 2.11. It contains 1000 repetitions of a subcircuit consisting of one inductor, two capacitors and two resistors. Furthermore, at the beginning of the chain we have a voltage source with \(v_{\mathcal{V}}(t) =\sin (100\pi t)^{10}\) V as in Fig. 2.12, and at the end an additional linear inductor. In the 1st, 101st, 201st, etc. subcircuits, a linear resistor is replaced by a diode, and in the 100th, 200th, 300th, etc. subcircuits, a linear inductor is replaced by a nonlinear inductor. The resulting nonlinear circuit contains one voltage source, 1990 linear resistors with \(R_{1} = 20\,\Omega\) and \(R_{2} = 1\,\Omega\), 991 linear inductors with \(L = 0.01\) H, 2000 linear capacitors with \(C = 1\) μF, ten diodes with \(\widetilde{g}(v_{\widetilde{\mathcal{R}}}) = 10^{-14}(\exp(40 \tfrac{1}{\mathrm{V}}\,v_{\widetilde{\mathcal{R}}}) - 1)\) A, and ten nonlinear inductors with
where \(L_{\min} = 0.001\) H, \(L_{\max} = 0.002\) H and \(L_{\mathrm{scl}} = 10^{4}\,\tfrac{1}{\mathrm{A}}\). The state space dimension of the resulting DAE system is n = 4003.
The numerical simulation is done on the time interval \(\mathbb{I} = [0\,\mathrm{s},0.05\,\mathrm{s}]\) using the BDF method of order 2 with a fixed stepsize of \(5 \cdot 10^{-5}\,\mathrm{s}\). In Fig. 2.13, we again present the outputs \(y(t) = -j_{\mathcal{V}}(t)\) and \(\hat{y}(t)\) of the original and reduced-order nonlinear systems, respectively, as well as the error \(\vert \hat{y}(t) - y(t)\vert\) for the tolerances tol = 10−2, 10−3, 10−4, 10−5 used in the model reduction of the decoupled linear subcircuit. Table 2.6 demonstrates the efficiency of the model reduction method. As in the previous example, one can see that as the tolerance decreases, the dimension of the reduced-order system increases while the error in the output becomes smaller. In particular, for the approximate nonlinear model of dimension \(\hat{n} = 189\) with the error \(\vert \vert \hat{y} - y\vert \vert _{\mathbb{L}_{2}(\mathbb{I})} = 4.10 \cdot 10^{-5}\), the simulation time is only 57 s instead of 1 h and 13 min for the original system, which implies a speedup of 76.8.
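The fixed-stepsize BDF scheme of order 2 used in these experiments replaces the derivative by the two-step backward difference, so that for a linear DAE \(E\dot{x} = Ax + Bu(t)\) each step requires one linear solve with the matrix \(\tfrac{3}{2h}E - A\). A dense sketch with an implicit-Euler starting step follows; for the nonlinear circuit equations (2.2), (2.3), a Newton iteration per time step would be needed instead.

```python
import numpy as np

def bdf2_descriptor(E, A, B, u, x0, h, nsteps):
    # Fixed-stepsize BDF2 for the linear DAE E x' = A x + B u(t),
    # started with one implicit-Euler step; x0 must be consistent.
    n = len(x0)
    X = np.empty((nsteps + 1, n))
    X[0] = x0
    # One backward-Euler step to generate the second starting value
    X[1] = np.linalg.solve(E / h - A, E @ X[0] / h + B @ u(h))
    M = 1.5 * E / h - A
    for k in range(1, nsteps):
        # BDF2: E (3 x_{k+1} - 4 x_k + x_{k-1}) / (2h) = A x_{k+1} + B u(t_{k+1})
        rhs = E @ (2.0 * X[k] - 0.5 * X[k - 1]) / h + B @ u((k + 1) * h)
        X[k + 1] = np.linalg.solve(M, rhs)
    return X
```

Note that for a singular E the algebraic equations are enforced in every step through the solve with \(\tfrac{3}{2h}E - A\), which is nonsingular for a regular pencil and small enough h.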
⊲
Other results of numerical experiments with PABTEC can be found in Chaps. 1, 4 and 5 of this book.
References
Anderson, B., Vongpanitlerd, S.: Network Analysis and Synthesis. Prentice Hall, Englewood Cliffs, NJ (1973)
Andrásfai, B.: Graph Theory: Flows, Matrices. Adam Hilger, New York/Bristol (1991)
Antoulas, A.: Approximation of Large-Scale Dynamical Systems. SIAM, Philadelphia, PA (2005)
Antoulas, A.: A new result on passivity preserving model reduction. Syst. Control Lett. 54(4), 361–374 (2005)
Antoulas, A., Beattie, C., Gugercin, S.: Interpolatory model reduction of large-scale dynamical systems. In: Mohammadpour, J., Grigoriadis, K. (eds.) Efficient Modeling and Control of Large-Scale Systems, pp. 3–58. Springer, New York (2010)
Bai, Z.: Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems. Appl. Numer. Math. 43, 9–44 (2002)
Bechtold, T., Verhoeven, A., ter Maten, E., Voss, T.: Model order reduction: an advanced, efficient and automated computational tool for microsystems. In: Cutello, V., Fotia, G., Puccio, L. (eds.) Applied and Industrial Mathematics in Italy II, Selected Contributions from the 8th SIMAI Conference. Advances in Mathematics for Applied Sciences, vol. 75, pp. 113–124. World Scientific, Singapore (2007)
Benner, P.: Numerical solution of special algebraic Riccati equations via exact line search method. In: Proceedings of the European Control Conference (ECC97), paper 786. BELWARE Information Technology, Waterloo (1997)
Benner, P., Quintana-Ortí, E.: Solving stable generalized Lyapunov equations with the matrix sign function. Numer. Algorithms 20(1), 75–100 (1999)
Benner, P., Stykel, T.: Numerical solution of projected algebraic Riccati equations. SIAM J. Numer. Anal. 52(2), 581–600 (2014)
Benner, P., Quintana-Ortí, E., Quintana-Ortí, G.: Efficient numerical algorithms for balanced stochastic truncation. Int. J. Appl. Math. Comput. Sci. 11(5), 1123–1150 (2001)
Benner, P., Hernández, V., Pastor, A.: The Kleinman iteration for nonstabilizable systems. Math. Control Signals Syst. 16, 76–93 (2003)
Benner, P., Mehrmann, V., Sorensen, D. (eds.): Dimension Reduction of Large-Scale Systems. Lecture Notes in Computational Science and Engineering, vol. 45. Springer, Berlin/Heidelberg (2005)
Benner, P., Li, J.R., Penzl, T.: Numerical solution of large Lyapunov equations, Riccati equations, and linear-quadratic control problems. Numer. Linear Algebra Appl. 15, 755–777 (2008)
Benner, P., Mena, H., Saak, J.: On the parameter selection problem in the Newton-ADI iteration for large-scale Riccati equations. Electron. Trans. Numer. Anal. 29, 136–149 (2008)
Benner, P., Kürschner, P., Saak, J.: Efficient handling of complex shift parameters in the low-rank Cholesky factor ADI method. Numer. Algorithms 62(2), 225–251 (2013)
Brenan, K., Campbell, S., Petzold, L.: The Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. Classics in Applied Mathematics, vol. 14. SIAM, Philadelphia, PA (1996)
Chan, T.: Rank revealing QR factorizations. Linear Algebra Appl. 88/89, 67–82 (1987)
Chua, L.: Dynamic nonlinear networks: state-of-the-art. IEEE Trans. Circuits Syst. 27(11), 1059–1087 (1980)
Chua, L., Desoer, C., Kuh, E.: Linear and Nonlinear Circuits. McGraw-Hill, New York (1987)
Dai, L.: Singular Control Systems. Lecture Notes in Control and Information Sciences, vol. 118. Springer, Berlin/Heidelberg (1989)
Deo, N.: Graph Theory with Application to Engineering and Computer Science. Prentice-Hall, Englewood Cliffs, NJ (1974)
Estévez Schwarz, D.: A step-by-step approach to compute a consistent initialization for the MNA. Int. J. Circuit Theory Appl. 30, 1–16 (2002)
Estévez Schwarz, D., Tischendorf, C.: Structural analysis for electric circuits and consequences for MNA. Int. J. Circuit Theory Appl. 28, 131–162 (2000)
Freund, R.: Model reduction methods based on Krylov subspaces. Acta Numer. 12, 267–319 (2003)
Freund, R.: SPRIM: structure-preserving reduced-order interconnect macromodeling. In: Technical Digest of the 2004 IEEE/ACM International Conference on Computer-Aided Design, pp. 80–87. IEEE Computer Society, Los Alamos, CA (2004)
Freund, R.: Structure-preserving model order reduction of RCL circuit equations. In: Schilders, W., van der Vorst, H., Rommes, J. (eds.) Model Order Reduction: Theory, Research Aspects and Applications. Mathematics in Industry, vol. 13, pp. 49–73. Springer, Berlin/Heidelberg (2008)
Golub, G., Van Loan, C.: Matrix Computations, 3rd edn. The Johns Hopkins University Press, Baltimore/London (1996)
Griepentrog, E., März, R.: Differential-Algebraic Equations and Their Numerical Treatment. Teubner-Texte zur Mathematik, vol. 88. B.G. Teubner, Leipzig (1986)
Grimme, E.: Krylov projection methods for model reduction. Ph.D. thesis, University of Illinois, Urbana-Champaign (1997)
Gugercin, S., Antoulas, A.: A survey of model reduction by balanced truncation and some new results. Int. J. Control 77(8), 748–766 (2004)
Gugercin, S., Antoulas, A., Beattie, C.: \(\mathcal{H}_{2}\) model reduction for large-scale linear dynamical systems. SIAM J. Matrix Anal. Appl. 30(2), 609–638 (2008)
Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II - Stiff and Differential-Algebraic Problems, 2nd edn. Springer, Berlin (1996)
Heinkenschloss, M., Reis, T.: Model reduction for a class of nonlinear electrical circuits by reduction of linear subcircuits. Technical Report 702–2010, DFG Research Center Matheon, Technische Universität Berlin (2010). http://www.math.tu-berlin.de/~reis/Publicat/pr_10_702.pdf
Hinze, M., Kunkel, M.: Residual based sampling in pod model order reduction of drift-diffusion equations in parametrized electrical networks. Z. Angew. Math. Mech. 92(2), 91–104 (2012)
Hinze, M., Kunkel, M., Steinbrecher, A., Stykel, T.: Model order reduction of coupled circuit-device systems. Int. J. Numer. Model. Electron. Networks Devices Fields 25, 362–377 (2012)
Ho, C.W., Ruehli, A., Brennan, P.: The modified nodal approach to network analysis. IEEE Trans. Circuits Syst. 22(6), 504–509 (1975)
Ionutiu, R., Rommes, J., Antoulas, A.: Passivity-preserving model reduction using dominant spectral-zero interpolation. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 27(12), 2250–2263 (2008)
Ipach, H.: Graphentheoretische Anwendungen in der Analyse elektrischer Schaltkreise. Bachelorarbeit, Universität Hamburg (2013)
Jungnickel, D.: Graphs, Network and Algorithms. Springer, Berlin/Heidelberg (2005)
Kleinman, D.: On an iterative technique for Riccati equation computations. IEEE Trans. Autom. Control 13, 114–115 (1968)
Knockaert, L., De Zutter, D.: Laguerre-SVD reduced-order modeling. IEEE Trans. Microwave Theory Tech. 48(9), 1469–1475 (2000)
Kunkel, P., Mehrmann, V.: Differential-Algebraic Equations. Analysis and Numerical Solution. EMS Publishing House, Zürich (2006)
Li, J.R., White, J.: Low rank solution of Lyapunov equations. SIAM J. Matrix Anal. Appl. 24(1), 260–280 (2002)
Liu, W., Sreeram, V., Teo, K.: Model reduction for state-space symmetric systems. Syst. Control Lett. 34(4), 209–215 (1998)
Moore, B.: Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans. Autom. Control 26(1), 17–32 (1981)
Ober, R.: Balanced parametrization of classes of linear systems. SIAM J. Control Optim. 29(6), 1251–1287 (1991)
Odabasioglu, A., Celik, M., Pileggi, L.: PRIMA: passive reduced-order interconnect macromodeling algorithm. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 17(8), 645–654 (1998)
Ovalekar, V., Narayanan, H.: Fast loop matrix generation for hybrid analysis and a comparison of the sparsity of the loop impedance and MNA impedance submatrices. In: Proceedings of the IEEE International Symposium on Circuits and Systems, ISCAS ’92, vol. 4, pp. 1780–1783 (1992)
Penzl, T.: A cyclic low-rank Smith method for large sparse Lyapunov equations. SIAM J. Sci. Comput. 21(4), 1401–1418 (1999/2000)
Penzl, T.: LYAPACK Users Guide. Preprint SFB393/00-33, Fakultät für Mathematik, Technische Universität Chemnitz, Chemnitz (2000). Available from http://www.tu-chemnitz.de/sfb393/sfb00pr.html
Phillips, J., Daniel, L., Silveira, L.: Guaranteed passive balancing transformations for model order reduction. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 22(8), 1027–1041 (2003)
Poloni, F., Reis, T.: A deflation approach for large-scale Lur’e equations. SIAM. J. Matrix Anal. Appl. 33(4), 1339–1368 (2012)
Reis, T., Stykel, T.: PABTEC: Passivity-preserving balanced truncation for electrical circuits. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 29(9), 1354–1367 (2010)
Reis, T., Stykel, T.: Positive real and bounded real balancing for model reduction of descriptor systems. Int. J. Control 83(1), 74–88 (2010)
Reis, T., Stykel, T.: Lyapunov balancing for passivity-preserving model reduction of RC circuits. SIAM J. Appl. Dyn. Syst. 10(1), 1–34 (2011)
Rewieński, M.: A trajectory piecewise-linear approach to model order reduction of nonlinear dynamical systems. Ph.D. thesis, Massachusetts Institute of Technology (2003)
Riaza, R., Tischendorf, C.: Qualitative features of matrix pencils and DAEs arising in circuit dynamics. Dyn. Syst. 22, 107–131 (2007)
Riaza, R., Tischendorf, C.: The hyperbolicity problem in electrical circuit theory. Math. Methods Appl. Sci. 33(17), 2037–2049 (2010)
Rommes, J., Schilders, W.: Efficient methods for large resistor networks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 29(1), 28–39 (2010)
Roos, J., Costa, L. (eds.): Scientific Computing in Electrical Engineering SCEE 2008. Mathematics in Industry, vol. 14. Springer, Berlin/Heidelberg (2010)
Saad, Y.: Iterative Methods for Sparse Linear Systems. PWS Publishing Company, Boston, MA (1996)
Sabino, J.: Solution of large-scale Lyapunov equations via the block modified Smith method. Ph.D. thesis, Rice University, Houston (2006)
Schilders, W., van der Vorst, H., Rommes, J. (eds.): Model Order Reduction: Theory, Research Aspects and Applications. Mathematics in Industry, vol. 13. Springer, Berlin/Heidelberg (2008)
Sirovich, L.: Turbulence and the dynamics of coherent structures. I: coherent structures. II: symmetries and transformations. III: dynamics and scaling. Q. Appl. Math. 45, 561–590 (1987)
Sorensen, D.: Passivity preserving model reduction via interpolation of spectral zeros. Syst. Control Lett. 54(4), 347–360 (2005)
Soto, M.S., Tischendorf, C.: Numerical analysis of DAEs from coupled circuit and semiconductor simulation. Appl. Numer. Math. 53(2–4), 471–488 (2005)
Steinbrecher, A., Stykel, T.: Model order reduction of nonlinear circuit equations. Int. J. Circuits Theory Appl. 41, 1226–1247 (2013)
Stykel, T.: Gramian-based model reduction for descriptor systems. Math. Control Signals Syst. 16, 297–319 (2004)
Stykel, T.: Low-rank iterative methods for projected generalized Lyapunov equations. Electron. Trans. Numer. Anal. 30, 187–202 (2008)
Stykel, T.: Balancing-related model reduction of circuit equations using topological structure. In: Benner, P., Hinze, M., ter Maten, E. (eds.) Model Reduction for Circuit Simulation. Lecture Notes in Electrical Engineering, vol. 74, pp. 53–80. Springer, Berlin/Heidelberg (2011)
Stykel, T., Reis, T.: The PABTEC algorithm for passivity-preserving model reduction of circuit equations. In: Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems, MTNS 2010, Budapest, 5–9 July 2010, paper 363. ELTE, Budapest (2010)
Tischendorf, C.: Coupled systems of differential algebraic and partial differential equations in circuit and device simulation. Habilitation thesis, Humboldt-Universität Berlin (2004)
Varga, A.: On computing high accuracy solutions of a class of Riccati equations. Control Theory Adv. Technol. 10, 2005–2016 (1995)
Vlach, J., Singhal, K.: Computer Methods for Circuit Analysis and Design. Van Nostrand Reinhold, New York (1994)
Wachspress, E.: Iterative solution of the Lyapunov matrix equation. Appl. Math. Lett. 1, 87–90 (1988)
Wachspress, E.: The ADI minimax problem for complex spectra. In: Kincaid, D., Hayes, L. (eds.) Iterative Methods for Large Linear Systems, pp. 251–271. Academic, San Diego (1990)
Wachspress, E.: The ADI Model Problem. Monograph, Windsor, CA (1995)
Weinberg, L., Ruehli, A.: Combined modified loop analysis, modified nodal analysis for large-scale circuits. IBM Research Report RC 10407, IBM Research Division (1984)
Acknowledgements
The work reported in this paper was supported by the German Federal Ministry of Education and Research (BMBF), grant no. 03STPAE3. Responsibility for the contents of this publication rests with the authors.
© 2017 Springer International Publishing AG
Steinbrecher, A., Stykel, T. (2017). Element-Based Model Reduction in Circuit Simulation. In: Benner, P. (eds) System Reduction for Nanoscale IC Design. Mathematics in Industry, vol 20. Springer, Cham. https://doi.org/10.1007/978-3-319-07236-4_2
Print ISBN: 978-3-319-07235-7
Online ISBN: 978-3-319-07236-4