1 Introduction

Complex high-speed interconnect systems have become dominant factors in determining the performance of VLSI circuits. Interconnect networks usually lead to large-scale system equations, and model-order reduction (MOR) techniques are crucial to reduce the interconnect system complexity and the computational effort of simulation tools. Furthermore, the uncertainty in the manufacturing process of interconnects leads to variations in the width and in the interlayer dielectric thickness of interconnect wires, which in turn can make the integrated circuit performance uncertain, as well as result in manufacturing faults and yield loss. In the presence of such process variations, parameterized model-order reduction (PMOR) is a useful simulation technique for the analysis of parameterized interconnect circuits and for statistical static timing analysis.

Parameterized model-order reduction techniques have become an area of intense research in the past few years [2, 13,14,15, 19, 20, 23,24,25, 27, 34, 37, 40, 41], owing to the increase in process variability as the technology keeps scaling. Early methods, such as the perturbation technique [27], mainly capture small variations around the nominal design values. Although useful to model strong nonlinear effects caused by intra-die parameter variations [24], they eventually turn out to be rather ineffective. Applying MOR to some parameters based on moment-matching techniques [7, 20] can instead preserve the structure of the original system, thus naturally guaranteeing the passivity of the reduced-order model. Nevertheless, these reduced-order methods present numerical stability issues because of the explicit calculation of the projection basis that they perform. To overcome such a drawback, [8] and [9] suggested to first obtain a linear combination vector with respect to a set of orthonormal bases, and to later determine the matrix–vector products based on the orthonormal basis vectors. However, even such a method is not a fully implicit reduced-order method, and it still presents the above-mentioned numerical stability issues. Reference [6] proposes a union Krylov subspace for moment matching, which however requires a large number of Krylov basis vectors, significantly larger than the dimension of the original system space. Moving further, some parameterized reduced-order methods based on a multi-series expansion of the transfer function are proposed in [10] and [12], which in turn are rather difficult to apply to many parameterized systems. The CORE algorithm [24] combines explicit and implicit moment matching to generate basis vectors. However, it preserves neither the structure of the original system nor the passivity of the system in the reduced-order process. For rational systems, a multi-parameter moment-matching method is introduced in [14], which uses a subspace projection approach to provide a parameterized MOR. Its structure, however, exhibits some computational problems, and the resulting reduced models usually suffer from oversizing when the number of moments to match is high, either because high accuracy is required or because the number of parameters is large. The technique presented in [14, 15] instead combines traditional passivity-preserving model-order reduction methods and interpolation schemes based on a class of positive interpolation operators.

Another type of parameterized reduced-order approach, based on truncated balanced realization (TBR), has also been proposed as a variation of MOR [34, 35]. Typically, TBR-based reduced-order methods exhibit higher computational complexity because of the calculation of the two system Gramians. The stochastic spectral Galerkin reduced-order method proposed by J. M. Wang is another approach to investigate interconnect systems affected by process variations. Homogeneous chaos is here used to capture the impact of parameter variations on the interconnect system response (i.e., the variation parameters appear in the generic Hermite polynomial basis functions). Nonetheless, such a method is not a true PMOR technique, nor does it preserve the structure of the original interconnect system. One more PMOR technique is based on rational interpolation, which includes the variations of electrical and geometrical parameters in lossy transmission lines [38, 39] and delayed systems [15].

We present here a multi-parameter moment-matching method based on PMOR techniques for parameterized RC and RLC interconnect systems, which combines traditional structure-preserving model-order reduction methods and adjacency uniformity techniques. We refer to it as the structure-preserving parameterized interconnect model-order reduction (SPMOR) algorithm. The SPMOR algorithm embeds the advantages of structure-preserving moment-matching-based methods, and thus preserves the structure of the original state equations, like the structure-preserving reduced-order interconnect macromodeling [16]. The main benefit of SPMOR is that it preserves the statistical characteristics of the system, in contrast to previous PMOR approaches. To date, all multi-parameter moment-matching-based PMOR methods have been applied to low-dimensional parameter spaces, although parameter variations in modern integrated circuit processes sometimes span a high-dimensional space. In [11], an approach to construct a compact model of parameterized systems is introduced, which however reduces the simulation accuracy of the interconnect circuit system. In this paper, we consider interconnect circuit parameters (i.e., resistance, capacitance and inductance) which include all physical parameter variations.

This paper is organized as follows. In Sect. 2, the parameter-varying interconnect system problem and the structure-preserving MOR methods are outlined. In Sect. 3, we present the reduction of homogeneous RC and RLC parameterized interconnect systems. The extension of the SPMOR algorithm from homogeneous to inhomogeneous parameter systems is described in Sect. 4. Simulation results, together with a comparison with the original parameterized interconnect systems, are shown in Sect. 5. Conclusions and remarks are finally drawn in Sect. 6.

2 Problem Formulation

2.1 Parameter Variations in Interconnect Systems

For the purposes of this paper, all of the variations of the system interconnects are lumped into a set of parameters \(\sigma =\sigma _{1} ,\ldots ,\sigma _{m}\) according to which the matrices of the interconnect model vary. Following the notation in [27] and [7], the system equations are assumed to be of the form (1).

$$\begin{aligned} C(\sigma )\frac{\mathrm{d}x}{\mathrm{d}t}= & {} G(\sigma )x+Bu(t) \nonumber \\ y= & {} L^{T}x \end{aligned}$$
(1)

The first challenge in generating reduced models from (1) is the number of parameters, i.e., the dimension of the parameter variable space. Since linearity of the process variations in VLSI interconnect systems is typically assumed, as in [1], expression (1) can be rewritten as

$$\begin{aligned} C(\sigma )= & {} C_0 +\sigma _1 C_1 +\cdots +\sigma _m C_m\\ G(\sigma )= & {} G_0 +\sigma _1 G_1 +\cdots +\sigma _m G_m \end{aligned}$$

where \(C_{i}\) and \(G_{i}\) (for \(i=1,\ldots ,m\)) are \(N\times N\) constant matrices. The linear dynamic interconnect system described by (1) applies to parameterized interconnect networks, where \(C(\sigma )\) and \(G(\sigma )\) arise from parameter-dependent memory and memoryless elements of the interconnect networks [1, 21]. In general, the system matrices \(C(\sigma )\) and \(G(\sigma )\) can be approximated by the power series expansion described in [21]. Our proposed method requires neither a linear approximation model nor a series-expansion approximation; indeed, it can be applied to any model form, provided that the structure-preserving projected matrices can be efficiently computed.
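As an illustration of the affine dependence above, the following Python sketch (the dimensions, sensitivity matrices and parameter values are hypothetical) assembles \(C(\sigma )\) and \(G(\sigma )\) from the nominal matrices and the per-parameter sensitivity matrices.

```python
import numpy as np

def assemble_affine(M0, M_sens, sigma):
    """Evaluate M(sigma) = M0 + sum_i sigma_i * M_i for an affine parameter model."""
    M = M0.copy()
    for s_i, M_i in zip(sigma, M_sens):
        M = M + s_i * M_i
    return M

# Hypothetical 3-node example with m = 2 variation parameters.
N, m = 3, 2
rng = np.random.default_rng(0)
C0, G0 = np.eye(N), -2.0 * np.eye(N)
C_sens = [0.10 * np.eye(N) for _ in range(m)]                     # C_i matrices
G_sens = [0.05 * rng.standard_normal((N, N)) for _ in range(m)]   # G_i matrices

sigma = np.array([0.3, -0.2])                                     # one parameter sample
C_sigma = assemble_affine(C0, C_sens, sigma)
G_sigma = assemble_affine(G0, G_sens, sigma)
```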

Secondly, the difficulty in the proper choice of the projection matrix must be considered. In general, multi-dimensional moment-matching algorithms can hardly generate a large projection matrix, whereas the proposed SPMOR procedure for parameter-varying systems can efficiently generate projection matrices, as will be explained in detail in Sects. 3 and 4.

2.2 Projection-Based MOR in Literature

What follows is a brief review of some basic ideas about projection-based reduced-order models of interconnect systems. Consider a linear time-invariant system of the form

$$\begin{aligned} E\frac{\mathrm{d}x(t)}{\mathrm{d}t}= & {} Gx(t)+Bu(t) \nonumber \\ y(t)= & {} L^{T}x(t) \end{aligned}$$
(2)

where E, \(G \in \mathbb {R}^{N\times N}\) and L, \(B\in \mathbb {R}^{N}\).

The reduced-order model described by (3) has the same form as (2), but with a reduced dimension \(r\) (\(r<N\)).

$$\begin{aligned} E_r \frac{\mathrm{d}\overline{x} (t)}{\mathrm{d}t}= & {} G_r \overline{x} (t)+B_r u(t) \nonumber \\ \overline{y} (t)= & {} L_r ^{T}\overline{x} (t) \end{aligned}$$
(3)

where \(E_{\mathrm{r}}\), \(G_{\mathrm{r}}\in \mathbb {R}^{r\times r}\) and \(L_{\mathrm{r}}\), \(B_{\mathrm{r}}\in \mathbb {R}^{r}\). Thus, the order-reduction problem lies in finding a reduced state space of dimension r, i.e., a reduced system, and, accordingly, ensuring that its output is a good approximation of that of the original system.

A very common way to construct reduced-order models is to employ a subspace projection. Let the matrix \(V_{r}\) (\(V_r \in \mathbb {R}^{N\times r}\), with full column rank) be given. It follows that the reduced-order system matrices can be written as:

$$\begin{aligned} E_r =V_r^T EV_r, G_r =V_r^T GV_r, \quad B_r =V_r^T B\;\hbox { and }\;L_r^T =V_r^T L. \end{aligned}$$

The matrix \(V_{r}\) spans a certain Krylov subspace which, by the Cayley–Hamilton theorem, contains the solution space of the first equation in (2).
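The congruence projection step above is a one-liner per matrix; a minimal Python sketch follows (the construction of \(V_r\) itself is deferred to the Krylov subspace discussion below).

```python
import numpy as np

def project_system(E, G, B, L, Vr):
    """Reduce (E, G, B, L) by a projection matrix Vr of size N x r with full column rank."""
    Er = Vr.T @ E @ Vr
    Gr = Vr.T @ G @ Vr
    Br = Vr.T @ B
    Lr = Vr.T @ L            # so that Lr^T = Vr^T L, as above
    return Er, Gr, Br, Lr

# The reduced transfer function is then Hr(s) = Lr^T (s*Er - Gr)^{-1} Br,
# to be compared against H(s) = L^T (s*E - G)^{-1} B of the original system.
```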

2.3 Krylov Subspace Methods and PMOR

The above discussion has also pointed out that a possible approach for calculating \(V_{r}\) is the Krylov subspace method, also referred to as the “moment-matching method.” To introduce such a method, we first write the transfer function in the frequency domain, based on the Laplace transform of system (2).

$$\begin{aligned} H(s)=L^{T}(sE-G)^{-1}B=-\sum \limits _{i=0}^\infty {m_i^{s_0 } (s-s_0 )^{i}} \end{aligned}$$
(4)

The coefficients \(m_i^{s_0}\), called the moments of the interconnect system, are the negative Taylor expansion coefficients of the transfer function \(H(s)\) around the expansion point \(s_{0}\) in (4). Based on the expansion point \(s_{0}\), the Krylov vectors can be written as

$$\begin{aligned} v_k^j =((G-s_0 E)^{-1}E)^{j-1}(G-s_0 E)^{-1}\,B \end{aligned}$$

and then the input Krylov sequences are

$$\begin{aligned} K^{r}\,(s_0 )=sp\{v_k^1 ,v_k^2 ,\ldots ,v_k^r \} \end{aligned}$$
(5)

where the \(v_{k}^{j}\) are the Krylov space vectors (the index k stands for “Krylov space”). For the sake of clarity, we only consider single-parameter variation systems, as most of the bibliographic references do.

$$\begin{aligned} E(\sigma )\frac{\mathrm{d}x(t)}{\mathrm{d}t}= & {} G(\sigma )x(t)+Bu(t) \nonumber \\ y= & {} L^{T}x(t) \end{aligned}$$
(6)

where \(\sigma \) is a single scalar parameter. Similarly to (4), the transfer function of (6) in the frequency domain is

$$\begin{aligned} H(s,\sigma )=L^{T}(E(\sigma )s-G(\sigma ))^{-1}\,B \end{aligned}$$
(7)

Similarly, expanding in series around zero for both s and \(\sigma \), and retaining the terms whose order is lower than r, yields:

$$\begin{aligned} H(s,\sigma )\approx \sum \limits _{i=0}^r {m_{s,i} \sigma ^{i}+\sum \limits _{i=0}^r {m_{\sigma ,i} s^{i}+\sum \limits _{i=2}^r {\sum \limits _{j=1}^{i-1} {m_{\mathrm{mixed},i,j} s^{j}\sigma ^{i-j}} } } } \end{aligned}$$

where \(m_{\cdot ,i}\) are called the ith moments of the expansion, as in (4). Likewise, the basis V of the reduced-order projection subspace is the column span of the moments \(V=\{m_{s,0} \ldots m_{s,q},\, m_{\sigma ,0} \ldots m_{\sigma ,q},\, m_{\mathrm{mixed},2,1} \ldots m_{\mathrm{mixed},q,q-1} \}\). There are two ways to compute such moments. The first is the direct method [7, 19, 29, 30], while the other constructs each moment from previously computed moments. In both of them, the basis is generated by orthonormalizing the vectors with respect to each other through a modified Gram–Schmidt procedure. The moment-constructing methods include both explicit and implicit moment matching [36]. However, none of the above moment-matching-based methods provides an easy way to calculate the moments, especially the mixed moments, in PMOR.
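For the single-expansion-point case, the implicit moment-matching construction can be sketched as follows: the Krylov vectors \(v_k^j\) of (5) are generated with a single LU factorization of \(G-s_0E\) and orthonormalized on the fly by modified Gram–Schmidt (essentially the Arnoldi process). This is a minimal sketch; the handling of parameter and mixed moments in PMOR is deliberately omitted.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def krylov_basis(E, G, B, s0, r):
    """Orthonormal basis of K_r((G - s0*E)^{-1} E, (G - s0*E)^{-1} B),
    built with modified Gram-Schmidt orthogonalization."""
    lu = lu_factor(G - s0 * E)          # factor once, reuse for every Krylov vector
    v = lu_solve(lu, np.asarray(B, dtype=float).ravel())
    V = []
    for _ in range(r):
        for q in V:                     # modified Gram-Schmidt against previous columns
            v = v - (q @ v) * q
        nrm = np.linalg.norm(v)
        if nrm < 1e-12:                 # subspace exhausted
            break
        v = v / nrm
        V.append(v)
        v = lu_solve(lu, E @ v)         # next Krylov direction
    return np.column_stack(V)
```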

2.4 Balanced Truncation/Interpolatory Used in PMOR

This class of PMOR methods was originally proposed in [3] and is based on polynomial interpolation. A hybrid approach combining balanced truncation (BT) with interpolation was later proposed in [4]. It makes use of rational interpolation and assumes that the interpolation points are uniformly distributed over the parameter interval. Balanced truncation is then applied, leading to order reduction of the interconnect systems.

The reduced-order transfer function over the whole parameter interval is obtained by rational interpolation, i.e., by means of the barycentric formula [5]. A global error bound can be derived by combining the interpolation error with the BT error bound. Thus, the global error estimate over a given parameter interval [a, b] is expressed as

$$\begin{aligned} {\mathop {\mathop {\sup }\limits _{s\in C^{+}}}\limits _{\sigma \in [a,b]}} \left\| {H(s,\sigma )-H_n (s,\sigma )} \right\|\le & {} {\mathop {\mathop {\sup }\limits _{s\in C^{+}}}\limits _{\sigma \in [a,b]}} \left\| {R_k (H,s,\sigma )} \right\| \nonumber \\&+\,\,\mathrm{tol}\cdot \mathop {\sup }\limits _{\sigma \in [a,b]} \left\| {\frac{1}{\sum _{j=1}^k {\frac{u_j }{\sigma -\sigma _j }} }} \right\| \end{aligned}$$
(8)

where tol is the error tolerance (more details can be found in [5, 22]).

The main problem of this method is the number of interpolation points, which grows exponentially for high-dimensional parameter spaces. This increases the computational complexity of the BT step and, furthermore, the order of the reduced system also grows exponentially.
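For reference, the barycentric formula used in this interpolation-based approach can be evaluated as in the sketch below, where the reduced transfer-function samples \(H_j(s)\) at the parameter points \(\sigma _j\) and the barycentric weights \(u_j\) are assumed to be already available (their computation follows the cited scheme and is not reproduced here).

```python
import numpy as np

def barycentric_eval(sigma, sigma_nodes, H_nodes, u):
    """Barycentric rational interpolation over the parameter sigma at a fixed frequency s:
    r(sigma) = sum_j (u_j/(sigma-sigma_j)) H_j / sum_j (u_j/(sigma-sigma_j))."""
    diff = sigma - sigma_nodes
    exact = np.isclose(diff, 0.0)
    if np.any(exact):                        # sigma coincides with an interpolation node
        return H_nodes[np.argmax(exact)]
    w = u / diff
    return np.sum(w * H_nodes) / np.sum(w)

# Hypothetical usage: 5 parameter samples of a scalar transfer function at one frequency.
sigma_nodes = np.linspace(0.8, 1.2, 5)
H_nodes = 1.0 / (1.0 + 1j * sigma_nodes)     # placeholder values standing in for H_j(s)
u = np.array([1.0, -4.0, 6.0, -4.0, 1.0])    # placeholder weights (equispaced polynomial case)
H_est = barycentric_eval(1.05, sigma_nodes, H_nodes, u)
```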

2.5 Problem Statement

To summarize, there is no easy method, to date, for accomplishing parameterized model-order reduction using Krylov subspace methods or balanced truncation methods. Even for a simple expansion along a subset of the parameter space (i.e., without computing the moments of any mixed term), the selection of the expansion points already becomes rather involved. This paper introduces a structure-preserving parameterized model-order reduction method which can partly address the above-mentioned issue, facilitating the investigation of parameterized systems while preserving their main probability characteristics.

3 Homogeneous RC and RLC Parameterized Interconnects Reduction Method

In this section, we briefly review the formulation of the circuit equations for parameterized RC and RLC circuits. The dynamics of a circuit can be described by differential state equations, and its connectivity by a directed graph. The nodes of the graph correspond to the nodes of the circuit, and the edges of the graph correspond to the circuit elements. An arbitrary direction is assigned to the graph edges to distinguish between source and destination nodes. The adjacency matrix of the directed graph describes the connectivity of the circuit.

3.1 RC Circuit State-Space Formulation and PMOR

A lumped, linear, time-invariant circuit is best described using differential state equations. While this is not the only possible description, it is the simplest to understand and formulate. Such a network description usually exists and is given by

$$\begin{aligned} \frac{\mathrm{d}x(t)}{\mathrm{d}t}= & {} Ax(t)+bu(t) \nonumber \\ y(t)= & {} c^{T}x(t) \end{aligned}$$
(9)

where \(x(t)\) is the N-dimensional state vector, \(u(t)\) is the m-dimensional excitation vector, and \(y(t)\) is the vector of the required outputs. As a simple example, let us consider the network in Fig. 1, whose state-space equations are given in (10).

$$\begin{aligned} \left[ {{\begin{array}{l} {\frac{\mathrm{d}v_1 }{\mathrm{d}t}} \\ {\frac{\mathrm{d}v_2 }{\mathrm{d}t}} \\ \end{array} }} \right]= & {} \left[ {{\begin{array}{c@{\quad }c} {-\left( {\frac{1}{R_1 C_1 }+\frac{1}{R_2 C_1 }} \right) }&{} {\frac{1}{R_2 C_1 }} \\ {\frac{1}{R_2 C_2 }}&{} {-\frac{1}{R_2 C_2 }} \\ \end{array} }} \right] \left[ {{\begin{array}{l} {v_1 } \\ {v_2 } \\ \end{array} }} \right] +\left[ {{\begin{array}{c} {\frac{1}{R_1 C_1 }} \\ 0 \\ \end{array} }} \right] v_{in} \nonumber \\ v_\mathrm{out}= & {} \left[ {{\begin{array}{l@{\quad }l} {-1}&{} 0 \\ \end{array} }} \right] \left[ {{\begin{array}{l} {v_1 } \\ {v_2 } \\ \end{array} }} \right] \end{aligned}$$
(10)
Fig. 1 Simple RC interconnect network

We assume that the electrical parameters of the interconnect network are homogeneous (i.e., \(R_{1}=R_{2}=R\) and \(C_{1}=C_{2}=C\)). This is in general true in distributed RC interconnect lines due to the local homogeneity condition. Therefore, (10) can be reformulated as

$$\begin{aligned} \left[ {{\begin{array}{l} {\frac{\mathrm{d}v_1 }{\mathrm{d}t}} \\ {\frac{\mathrm{d}v_2 }{\mathrm{d}t}} \\ \end{array} }} \right]= & {} \frac{1}{RC}\left( {\left[ {{\begin{array}{c@{\quad }c} {-2}&{} 1 \\ 1&{} {-1} \\ \end{array} }} \right] \left[ {{\begin{array}{l} {v_1 } \\ {v_2 } \\ \end{array} }} \right] +\left[ {{\begin{array}{l} 1 \\ 0 \\ \end{array} }} \right] v_{in} } \right) \nonumber \\ v_\mathrm{out}= & {} \left[ {{\begin{array}{l@{\quad }l} {-1}&{} 0 \\ \end{array} }} \right] \left[ {{\begin{array}{l} {v_1 } \\ {v_2 } \\ \end{array} }} \right] \end{aligned}$$
(11)

Considering now

$$\begin{aligned} A=\left[ {{\begin{array}{c@{\quad }c} {-2}&{} 1 \\ 1&{} {-1} \\ \end{array} }} \right] , \qquad b=\left[ {{\begin{array}{l} 1 \\ 0 \\ \end{array} }} \right] \end{aligned}$$

then the Krylov subspace basis for (11) can be expressed in the form

$$\begin{aligned} V=\left[ {{\begin{array}{l@{\quad }l} {\frac{1}{RC}b}&{} {\left( {\frac{1}{RC}} \right) ^{2}Ab} \\ \end{array} }} \right] =\left[ {{\begin{array}{l@{\quad }l} b&{} {Ab} \\ \end{array} }} \right] \left[ {{\begin{array}{c@{\quad }c} {\frac{1}{RC}}&{} 0 \\ 0&{} {\left( {\frac{1}{RC}} \right) ^{2}} \\ \end{array} }} \right] \end{aligned}$$
(12)

We now provide an example to demonstrate a way to use parameter-extracting MOR. The circuit chosen for such an example is an interconnect circuit consisting of a single bus. The order of the interconnect system state matrix is N, while that of the reduced system is r. H(s) denotes the original system transfer function, while the reduced system transfer function is denoted as \(H_{\mathrm{r}}(s)\).

$$\begin{aligned} H(s)= & {} c^{T}\left( sI-\frac{1}{RC}A\right) ^{-1}\frac{1}{RC}b \end{aligned}$$
(13)
$$\begin{aligned} H_r (s)= & {} {c_r}^{T}\left( sI_r -\frac{1}{RC}A_r \right) ^{-1}\frac{1}{RC}b_r \end{aligned}$$
(14)

where \(A\in \mathbb {R}^{N\times N}\).

Fig. 2 Transfer function variation in 10 sets of RC variation

As expected from the parameterized model-order reduction method, the variations of the resistances and capacitances (R and C) are preserved in the reduced system. The only restriction is that the R and C parameters must be homogeneous; this will be further addressed in Sect. 4. Figure 2 shows the variation of the transfer function for 10 different sets of R and C parameters, which are Gaussian-distributed with mean values \(\mu _{r}=0.027\,\Omega \) and \(\mu _{c}=0.1\) nF and standard deviations \(\sigma _{r}=0.001\,\Omega \) and \(\sigma _{c}=0.01\) nF, respectively. It is worth noting that the transfer function of the parameterized MOR system is similar to that of the original RC system over a large frequency range.
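The experiment of Fig. 2 can be reproduced in spirit with the following sketch (the chain length, reduced order and numerical values are illustrative): since A and b of the homogeneous RC chain are parameter-independent, the Krylov basis is built once, and the factor 1/RC is applied per sample when evaluating (13) and (14).

```python
import numpy as np

def rc_chain(N):
    """Parameter-free part of a homogeneous RC chain: tridiagonal A, input b = e1, output c."""
    A = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
    A[-1, -1] = -1.0                      # last node has a single neighbour, as in (11)
    b = np.zeros(N); b[0] = 1.0
    c = np.zeros(N); c[0] = -1.0          # output selection as in (10)
    return A, b, c

def krylov(A, b, r):
    """Orthonormal basis of span{b, Ab, ..., A^(r-1) b} via modified Gram-Schmidt."""
    V, v = [], b.copy()
    for _ in range(r):
        for q in V:
            v = v - (q @ v) * q
        v = v / np.linalg.norm(v)
        V.append(v)
        v = A @ v
    return np.column_stack(V)

N, r = 10, 4
A, b, c = rc_chain(N)
V = krylov(A, b, r)                       # built once from the parameter-free A and b
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c

rng = np.random.default_rng(1)
w = 2j * np.pi * 1e6                      # a single frequency point
for R, C in zip(rng.normal(0.027, 0.001, 10), rng.normal(0.1e-9, 0.01e-9, 10)):
    k = 1.0 / (R * C)                     # the only place where the sampled R, C enter
    H  = c  @ np.linalg.solve(w * np.eye(N) - k * A,  k * b)   # Eq. (13)
    Hr = cr @ np.linalg.solve(w * np.eye(r) - k * Ar, k * br)  # Eq. (14)
    print(abs(H), abs(Hr))
```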

3.2 State-Space Formulation and PMOR of RLC Circuits

In deep nanoscale CMOS technologies, the effect of the inductance of interconnect lines becomes significant. We therefore extend our analysis to RLC circuits. Assuming ideal voltage and current sources, V(t) and I(t) will hereafter denote the node voltage vector and the branch current vector, respectively. The transient analysis can be formulated using modified nodal analysis as follows:

$$\begin{aligned} \left[ {{\begin{array}{c@{\quad }c} C&{} 0 \\ 0&{} L \\ \end{array} }} \right] \left[ {{\begin{array}{l} {\frac{\mathrm{d}V}{\mathrm{d}t}} \\ {\frac{\mathrm{d}I}{\mathrm{d}t}} \\ \end{array} }} \right] +\left[ {{\begin{array}{c@{\quad }c} 0&{} {-A_l } \\ {-A_l^T }&{} {-R} \\ \end{array} }} \right] \left[ {{\begin{array}{l} {V(t)} \\ {I(t)} \\ \end{array} }} \right] =\left[ {{\begin{array}{c} {U(t)} \\ 0 \\ \end{array} }} \right] \end{aligned}$$
(15)
Fig. 3 Simple RLC interconnect network

where C, L and R are matrices of coefficients for capacitors, inductors and resistors, respectively, while \(U(\hbox {t})\) is the input vector generated by the ideal voltage and current sources. The simple RLC network shown in Fig. 3 can be considered as an example of a typical interconnect line in modern integrated circuits. The linear state equations of such a circuit can be expressed as

$$\begin{aligned} \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {C_1 }&{} 0&{} 0&{} 0 \\ 0&{} {C_2 }&{} 0&{} 0 \\ 0&{} 0&{} {L_1 }&{} 0 \\ 0&{} 0&{} 0&{} {L_2 } \\ \end{array} }} \right] \left[ {{\begin{array}{l} {\frac{\mathrm{d}v_1 }{\mathrm{d}t}} \\ {\frac{\mathrm{d}v_2 }{\mathrm{d}t}} \\ {\frac{\mathrm{d}i_1 }{\mathrm{d}t}} \\ {\frac{\mathrm{d}i_2 }{\mathrm{d}t}} \\ \end{array} }} \right] =\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0&{} 0&{} 1&{} {-1} \\ 0&{} 0&{} 0&{} 1 \\ 1&{} 0&{} {R_1 }&{} 0 \\ {-1}&{} 1&{} 0&{} {R_2 } \\ \end{array} }} \right] \left[ {{\begin{array}{c} {v_1 } \\ {v_2 } \\ {i_1 } \\ {i_2 } \\ \end{array} }} \right] +\left[ {{\begin{array}{c} 0 \\ 0 \\ {-v_s } \\ 0 \\ \end{array} }} \right] \end{aligned}$$
(16)

We assume here that the electrical parameters of the interconnect network are homogeneous (i.e., \(R_{1}=R_{2}=R\), \(L_{1}=L_{2}=L\) and \(C_{1}=C_{2}=C\)), which is generally true in distributed RLC interconnect lines due to the local homogeneity condition. The system equation (16) is therefore rewritten as

$$\begin{aligned} \left[ {{\begin{array}{c@{\quad }c} {C\left( {{\begin{array}{c@{\quad }c} 1&{} 0 \\ 0&{} 1 \\ \end{array} }} \right) }&{} {{\begin{array}{c@{\quad }c} 0&{} 0 \\ 0&{} 0 \\ \end{array} }} \\ {\;\;{\begin{array}{c@{\quad }c} 0&{} 0 \\ 0&{} 0 \\ \end{array} }}&{} {L\left( {{\begin{array}{c@{\quad }c} 1&{} 0 \\ 0&{} 1 \\ \end{array} }} \right) } \\ \end{array} }} \right] \left[ {{\begin{array}{l} {\frac{\mathrm{d}v_1 }{\mathrm{d}t}} \\ {\frac{\mathrm{d}v_2 }{\mathrm{d}t}} \\ {\frac{\mathrm{d}i_1 }{\mathrm{d}t}} \\ {\frac{\mathrm{d}i_2 }{\mathrm{d}t}} \\ \end{array} }} \right] =\left[ {{\begin{array}{c@{\quad }c} {{\begin{array}{cc} 0&{} 0 \\ 0&{} 0 \\ \end{array} }}&{} {{\begin{array}{c@{\quad }c} 1&{} {-1} \\ 0&{} 1 \\ \end{array} }} \\ {{\begin{array}{c@{\quad }c} 1&{} 0 \\ {-1}&{} 1 \\ \end{array} }}&{} {R\left( {{\begin{array}{c@{\quad }c} 1&{} 0 \\ 0&{} 1 \\ \end{array} }} \right) } \\ \end{array} }} \right] \left[ {{\begin{array}{c} {v_1 } \\ {v_2 } \\ {i_1 } \\ {i_2 } \\ \end{array} }} \right] +\left[ {{\begin{array}{c} 0 \\ 0 \\ {-v_s } \\ 0 \\ \end{array} }} \right] \end{aligned}$$
(17)

We let

$$\begin{aligned} E=\left[ {{\begin{array}{c@{\quad }c} {C\left( {{\begin{array}{c@{\quad }c} 1&{} 0 \\ 0&{} 1 \\ \end{array} }} \right) }&{} {{\begin{array}{c@{\quad }c} 0&{} 0 \\ 0&{} 0 \\ \end{array} }} \\ {\;\;{\begin{array}{c@{\quad }c} 0&{} 0 \\ 0&{} 0 \\ \end{array} }}&{} {L\left( {{\begin{array}{c@{\quad }c} 1&{} 0 \\ 0&{} 1 \\ \end{array} }} \right) } \\ \end{array} }} \right] ,\, G=\left[ {{\begin{array}{c@{\quad }c} {{\begin{array}{cc} 0&{} 0 \\ 0&{} 0 \\ \end{array} }}&{} {{\begin{array}{c@{\quad }c} 1&{} {-1} \\ 0&{} 1 \\ \end{array} }} \\ {{\begin{array}{c@{\quad }c} 1&{} 0 \\ {-1}&{} 1 \\ \end{array} }}&{} {R\left( {{\begin{array}{c@{\quad }c} 1&{} 0 \\ 0&{} 1 \\ \end{array} }} \right) } \\ \end{array} }} \right] . \end{aligned}$$

It is evident from the structure of matrices E and G that the system in (17) describes an RLC circuit. According to [16], the heaviest computation in structure-preserving reduced-order interconnect macromodeling (SPRIM) is the generation of a suitable basis for the rth Krylov subspace \(K_r (M,R)\). In this section, for simplicity, we assume that the basis matrix \(V_{r}\) is constructed according to the Krylov subspace method (further details are given in Sect. 4).

Let

$$\begin{aligned} V_r =\left[ {{\begin{array}{l} {V_1 } \\ {V_2 } \\ \end{array} }} \right] \end{aligned}$$

be the partitioning of \(V_r\). Then

$$\begin{aligned} \overline{E} =\left[ {{\begin{array}{c@{\quad }c} {V_1^T CI_1 V_1 }&{} 0 \\ 0&{} {V_2^T LI_2 V_2 } \\ \end{array} }} \right] , \overline{G} =\left[ {{\begin{array}{c@{\quad }c} 0&{} {V_1^T A_l V_2 } \\ {(V_1^T A_l V_2 )^{T}}&{} {V_2^T RI_3 V_2 } \\ \end{array} }} \right] \end{aligned}$$

where

$$\begin{aligned} A_l =\left[ {{\begin{array}{c@{\quad }c} 1&{} {-1} \\ 0&{} 1 \\ \end{array} }} \right] \end{aligned}$$

Formally set

$$\begin{aligned} M=\left[ {{\begin{array}{c@{\quad }c} 0&{} {\frac{1}{C}A_{12} } \\ {\frac{1}{L}A_{21} }&{} {\frac{R}{L}I} \\ \end{array} }} \right] ,\, R=\left[ {{\begin{array}{l} 0 \\ {b_{21} } \\ \end{array} }} \right] \end{aligned}$$

where

$$\begin{aligned} A_{12} =\left[ {{\begin{array}{c@{\quad }c} 1&{} {-1} \\ 0&{} 1 \\ \end{array} }} \right] , A_{21} =\left[ {{\begin{array}{c@{\quad }c} 1&{} 0 \\ {-1}&{} 1 \\ \end{array} }} \right] , b_{21} =\left[ {{\begin{array}{l} 1 \\ 0 \\ \end{array} }} \right] \end{aligned}$$

The Krylov matrix generated by the partitioned matrices M and R is given as

$$\begin{aligned} K(M,R)=\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} R&{} {MR}&{} {M^{2}R}&{} {M^{3}R} \\ \end{array} }} \right] \end{aligned}$$

Let

$$\begin{aligned} A=\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0&{} 0&{} 1&{} {-1} \\ 0&{} 0&{} 0&{} 1 \\ 1&{} 0&{} 1&{} 1 \\ {-1}&{} 1&{} 1&{} 1 \\ \end{array} }} \right] , b=\left[ {{\begin{array}{l} 0 \\ 0 \\ 1 \\ 0 \\ \end{array} }} \right] \end{aligned}$$
(18)

then

$$\begin{aligned} M=A\times K, \,R=b \end{aligned}$$

where

$$\begin{aligned} K=\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\frac{1}{L}}&{} 0&{} {\frac{RCL}{LC}}&{} {-\frac{1}{C}} \\ 0&{} {\frac{1}{L}}&{} {\frac{RC-2L}{LC}}&{} {\frac{RC-2L}{LC}} \\ 0&{} 0&{} {\frac{1}{C}}&{} 0 \\ 0&{} 0&{} 0&{} {\frac{1}{C}} \\ \end{array} }} \right] \end{aligned}$$

Assuming now that the simple interconnect system is reduced to a second-order system, the projection matrix V would then be

$$\begin{aligned} V=[A^{0}{\begin{array}{l@{\quad }l} {K^{0}b}&{} {A^{1}K^{1}b} \\ \end{array} }]=\left[ {{\begin{array}{l@{\quad }l} {A^{0}}&{} {A^{1}} \\ \end{array} }} \right] \left[ {{\begin{array}{c@{\quad }c} {K^{0}b}&{} 0 \\ 0&{} {K^{1}b} \\ \end{array} }} \right] \end{aligned}$$
(19)

Another representation of the Krylov subspace is shown below, where the terms A and b are as in (18).

Then

$$\begin{aligned} M=K\times A, \, R=b \end{aligned}$$

where

$$\begin{aligned} K=\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\frac{1}{C}}&{} 0&{} 0&{} 0 \\ 0&{} {\frac{1}{C}}&{} 0&{} 0 \\ {\frac{R-1}{L}}&{} {\frac{R-2}{L}}&{} {\frac{1}{L}}&{} 0 \\ {-\frac{1}{L}}&{} {\frac{R-2}{L}}&{} 0&{} {\frac{1}{L}} \\ \end{array} }} \right] \end{aligned}$$

The projection matrix V is

$$\begin{aligned} V=[{\begin{array}{l@{\quad }l} {A^{0}b}&{} {A^{1}b} \\ \end{array} }]=\left[ {{\begin{array}{l@{\quad }l} {K^{0}}&{} {K^{1}} \\ \end{array} }} \right] \left[ {{\begin{array}{c@{\quad }c} {A^{0}b}&{} 0 \\ 0&{} {A^{1}b} \\ \end{array} }} \right] \end{aligned}$$
(20)

So, from (19) and (20), we can see that the projection matrix V has a rather involved parameter dependence in the PMOR method. Still considering R, C, and L variations around their mean values, we will hereafter make use of such mean values to generate the Krylov subspace matrix V, which is the constant matrix shown in (21).

$$\begin{aligned} V=\left[ {{\begin{array}{l@{\quad }l} {A^{0}b}&{} {A^{1}b} \\ \end{array} }} \right] =\left[ {{\begin{array}{l} {V_1 } \\ {V_2 } \\ \end{array} }} \right] \end{aligned}$$
(21)

A is here constructed using the mean values of R, C and L. The original interconnect system can therefore be reduced by structure-preserving MOR methods. The mean values adopted to define the projection matrix are listed in Sect. 4. In the following, we present a \(10{\mathrm{th}}\)-order single-bus interconnect system, where the homogeneous resistances (R), capacitances (C), and inductances (L) are Gaussian-distributed random variables, whose means and standard deviations are, respectively, \(R_{\mu }=0.027\,\Omega \), \(R_{\sigma }=0.0081\), \(C_{\mu }=1\hbox { pF}\), \(C_{\sigma }=0.3\hbox { pF}\), \(L_{\mu }=1\hbox { nH}\), \(L_{\sigma }=0.3\hbox { nH}\).
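A sketch of how the projection matrix of (21) can be built from the mean parameter values and split into voltage and current blocks is given below; the MNA assembly is a simplified stand-in for the single-bus line described above (segment count, values and incidence structure are illustrative).

```python
import numpy as np

def rlc_mna(n_seg, R, L, C):
    """Simplified MNA matrices of an n_seg-segment homogeneous RLC line, structured as in (15)."""
    Al = np.eye(n_seg) - np.eye(n_seg, k=-1)          # illustrative incidence-like block
    Z = np.zeros((n_seg, n_seg))
    E = np.block([[C * np.eye(n_seg), Z], [Z, L * np.eye(n_seg)]])
    G = np.block([[Z, Al.T], [Al, R * np.eye(n_seg)]])
    b = np.zeros(2 * n_seg); b[n_seg] = -1.0          # source enters through the first branch
    return E, G, b

n_seg, r = 10, 4
E0, G0, b0 = rlc_mna(n_seg, 0.027, 1e-9, 1e-12)       # mean values R_mu, L_mu, C_mu
A = np.linalg.solve(E0, G0)                           # system matrix of the mean-value model
x = np.linalg.solve(E0, b0)
Vr = np.column_stack([np.linalg.matrix_power(A, j) @ x for j in range(r)])
Vr = np.linalg.qr(Vr)[0]                              # orthonormalize the Krylov columns

V1, V2 = Vr[:n_seg], Vr[n_seg:]                       # voltage rows / current rows, as in (21)
```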

Fig. 4 Transfer function variation in 10 sets of RLC variation

Fig. 5 Transfer function probability distribution at the \(\omega =10^{7}\) point

The wire parasitic resistances, capacitances, and inductances are variable parameters because of the variations in the thickness and width of the metal and interlayer dielectric, as well as because of the finite resistivity of the material. It has been shown that a 10% increase in the width leads to about a 10% increase in the total capacitance, a 12% increase in the coupling capacitance and a 10% reduction of the resistance [31]. Thus, we will hereafter assume that the variations of the interconnect electrical parameters are Gaussian-distributed with a standard deviation of about 30%, whereas in practice it is generally around 20%.

Figure 4 shows the magnitude frequency response of the RLC interconnect system affected by parameter variations, while Fig. 5 shows its probability density function. Both of them are based on the \(s_{0}=10^{7}\) expansion point. From Figs. 4 and 5, it can be seen that the parameterized reduced system based on the structure-preserving method is similar to the original interconnect system in terms of both its frequency response and its probability density distribution.

In summary, the order of a homogeneous parameterized interconnect system can be reduced while maintaining its statistical characteristics under the two following conditions:

(1) the RC parameterized interconnect system projection matrix is as in (12);

(2) the RLC parameterized interconnect system projection matrix is as in (21).

4 Non-homogeneous Parameterized Interconnects Reduction Method

The two major sources of variability of device parameters are the limited control over the manufacturing process (extrinsic causes of variation) and the fundamental atomic-scale randomness (intrinsic causes of variation) [33]. The variations of the interconnect parameters are mainly caused by the former. Indeed, processing steps such as chemical–mechanical polishing and etching may induce variations in interconnect and wire dimensions. This results in interconnect electrical parameters which vary from place to place, although they are still locally homogeneous.

The key questions in parameterized MOR are, firstly, how to stably and efficiently compute an orthonormal basis V of the solution subspace and, secondly, how to preserve the variation parameters in the reduced interconnect system. In this section, a method for defining V based on the mean values of the electrical parameter variations is proposed. To preserve the variation parameters, we use the structure-preserving reduction method, which heavily exploits the local homogeneity of the electrical parameters. In the following, we first review the structure-preserving reduced-order method and then provide details about the SPMOR algorithm for non-homogeneous RC and RLC parameterized interconnect systems.

4.1 SPRIM Algorithm Review

The SPRIM (structure-preserving reduced-order interconnect macromodeling) algorithm was originally introduced for RLC circuits including only current sources (no voltage sources). Let us consider a modified nodal formulation of an RLC circuit equation in the frequency domain:

$$\begin{aligned} Gx(s)+sCx(s)= & {} Bi(s) \nonumber \\ v(s)= & {} B^{T}x(s) \end{aligned}$$
(22)

where x(s) is the state variable vector, G and \(C \,\in \mathbb {R}^{N\times N}\) are state matrices and B is the incidence matrix. Thus, the transfer function H(s) is

$$\begin{aligned} H(s)=B^{T}(G+sC)^{-1}B \end{aligned}$$
(23)

According to the Krylov subspace projection method, we can find a projection matrix V whose columns span the r-th Krylov subspace \(K_r (A,R)\), where \(A=(G+s_0 C)^{-1}C\), \(R=(G+s_0 C)^{-1}B\), and \(s_{0}\) is the expansion point, which typically is a real number. The transfer function of the reduced-order system can therefore be expressed as

$$\begin{aligned} H_r (s)=B_r ^{T}(G_r +sC_r )^{-1}B_r, \end{aligned}$$

where \(G_r =V^{T}\,GV\), \(C_r =V^{T}\,CV\), and \(B_r =V^{T}\,B\). The first r moments of the expansion of \(H_{r}(s)\) are identical to those of the original transfer function H(s).

The above method, like standard MOR, does not preserve the structure of the original model. This means that if the transfer function H(s) is symmetric, that of the reduced system \(H_{r}(s)\) need not be. The primary goal of structure-preserving model-order reduction is to keep the transfer function symmetric and to preserve passivity. This is achieved by using a split projection matrix V.

SPRIM starts with the \(2\times 2\) structured modified nodal analysis (MNA) circuit matrices. Thus, a structured projection matrix can be built by splitting the projection matrix V obtained from the Krylov subspace method as

$$\begin{aligned} V=\left[ {{\begin{array}{l} {V_1} \\ {V_2} \\ \end{array} }} \right] , \overline{V} =\left[ {{\begin{array}{c@{\quad }c} {V_1 }&{} 0 \\ 0&{} {V_2 } \\ \end{array} }} \right] \end{aligned}$$
(24)

where \(V_1 \in \mathbb {R}^{n\times r}\), \(V_2 \in \mathbb {R}^{m\times r}\) and hence \(\overline{V} \in \mathbb {R}^{N\times 2r}\). As a result, the number of columns in \(\overline{V}\) is double that of V (and the number of matched moments doubles). The main benefit of the SPRIM algorithm is that the reduced model matches 2r moments while being built from an r-th order Krylov subspace. The 2r-matching property mainly holds because both the original and the reduced system transfer functions are symmetric. The same result can be obtained from the structure-preserving reduction method for RLC circuits which include voltage sources.

The rank of the projection matrix \(V\in \mathbb {R}^{N\times r}\) is r. After the splitting operation in (24), the rank usually turns out to be less than 2r, since some of the columns become dependent (i.e., the columns in \(\overline{V}\) are no longer orthogonal with respect to each other). To solve such a problem, it is necessary to re-orthonormalize each sub-block, thus obtaining a new projection matrix T:

$$\begin{aligned} T=\left[ {{\begin{array}{c@{\quad }c} {\overline{V}_1 }&{} 0 \\ 0&{} {\overline{V}_2 } \\ \end{array} }} \right] \end{aligned}$$
(25)

It is worth noting that the moment-matching property still holds after the re-orthonormalization process [28].
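The splitting and re-orthonormalization of (24)–(25) amount to a few lines of code; a minimal sketch is given below, assuming V comes from any Krylov subspace routine and has n voltage rows on top of m current rows (rank-revealing truncation of nearly dependent columns is omitted).

```python
import numpy as np

def sprim_split(V, n):
    """Split V into voltage/current sub-blocks, re-orthonormalize each one, and
    assemble the block-diagonal structured projection matrix T of (25)."""
    V1, V2 = V[:n], V[n:]
    Q1 = np.linalg.qr(V1)[0]             # re-orthonormalize each sub-block separately
    Q2 = np.linalg.qr(V2)[0]
    T = np.block([[Q1, np.zeros((Q1.shape[0], Q2.shape[1]))],
                  [np.zeros((Q2.shape[0], Q1.shape[1])), Q2]])
    return T

# The structured reduced matrices then follow by congruence:
#   Cr = T.T @ C @ T,  Gr = T.T @ G @ T,  Br = T.T @ B.
```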

4.2 Non-homogeneous Parameterized RC MOR

When the electrical parameters of an interconnect circuit are non-homogeneous, as in the case of part 3.2 of Sect. 3, it is rather complicated to extract their variations because the variation parameters are coupled across the block structure. However, the physical parameters of interconnect wires exhibit local consistency in space. We can therefore assume that the parameters of adjacent resistors R and capacitors C are the same. Furthermore, we will consider the interconnect system divided into k locally homogeneous blocks, each of size m. The system state equation is thus expressed as

$$\begin{aligned} \left[ {{\begin{array}{c} {\frac{\mathrm{d}v_1 }{\mathrm{d}t}} \\ \vdots \\ {\frac{\mathrm{d}v_m }{\mathrm{d}t}} \\ \vdots \\ {\frac{\mathrm{d}v_{2m} }{\mathrm{d}t}} \\ \vdots \\ {\frac{\mathrm{d}v_{km} }{\mathrm{d}t}} \\ \end{array} }} \right]= & {} \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {\frac{1}{R_1 C_1 }}&{} &{} &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} &{} &{} \\ &{} &{} {\frac{1}{R_1 C_1 }}&{} &{} &{} &{} \\ &{} &{} &{} \ddots &{} &{} &{} \\ &{} &{} &{} &{} {\frac{1}{R_2 C_2 }}&{} &{} \\ &{} &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} &{} {\frac{1}{R_k C_k }} \\ \end{array} }} \right] \nonumber \\&\times \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {-2}&{} 1&{} &{} &{} &{} &{} \\ 1&{} {-2}&{} 1&{} &{} &{} &{} \\ &{} &{} \ddots &{} &{} &{} &{} \\ &{} &{} &{} \ddots &{} &{} &{} \\ &{} &{} &{} &{} \ddots &{} &{} \\ &{} &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} 1&{} {-1} \\ \end{array} }} \right] \left[ {{\begin{array}{c} {v_1 } \\ \vdots \\ {v_{km} } \\ \end{array} }} \right] \nonumber \\&+\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {\frac{1}{R_1 C_1 }}&{} &{} &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} &{} &{} \\ &{} &{} {\frac{1}{C_1 }}&{} &{} &{} &{} \\ &{} &{} &{} \ddots &{} &{} &{} \\ &{} &{} &{} &{} {\frac{1}{C_2 }}&{} &{} \\ &{} &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} &{} {\frac{1}{C_k }} \\ \end{array} }} \right] \left[ {{\begin{array}{c} 1 \\ 0 \\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ 0 \\ \end{array} }} \right] V_{in} \end{aligned}$$
(26)

where \(k=N/m\). The projection matrix is considerably hard to calculate, as was the case in Sect. 3. We will therefore use the mean values of the interconnect electrical parameters to construct such a projection matrix, which turns out to be similar to that of the globally homogeneous interconnect system introduced in Sect. 3. The moment-matching property will be verified in part 4.4 of Sect. 4.

The projection matrix \(V_{r}\) is

$$\begin{aligned} V_r =\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} b&{} {Ab}&{} \ldots &{} {A^{r}b} \\ \end{array} }} \right] \end{aligned}$$
(27)

where

$$\begin{aligned} A= & {} \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {\frac{1}{R_{10} C_{10} }}&{} &{} &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} &{} &{} \\ &{} &{} {\frac{1}{R_{10} C_{10} }}&{} &{} &{} &{} \\ &{} &{} &{} \ddots &{} &{} &{} \\ &{} &{} &{} &{} {\frac{1}{R_{20} C_{20} }}&{} &{} \\ &{} &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} &{} {\frac{1}{R_{k0} C_{k0} }} \\ \end{array} }} \right] \nonumber \\&\times \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {-2}&{} 1&{} &{} &{} &{} &{} \\ 1&{} {-2}&{} 1&{} &{} &{} &{} \\ &{} &{} \ddots &{} &{} &{} &{} \\ &{} &{} &{} \ddots &{} &{} &{} \\ &{} &{} &{} &{} \ddots &{} &{} \\ &{} &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} 1&{} {-1} \\ \end{array} }} \right] ,\\ b= & {} \frac{1}{R_{10} C_{10} }\left[ {{\begin{array}{c} 1 \\ 0 \\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ 0 \\ \end{array} }} \right] \end{aligned}$$

\(R_{n0}\) and \(C_{n0}\) (\(n=1,\ldots ,k\)) are the mean values of the locally homogeneous resistors and capacitors, respectively. A structure-preserving projection matrix V is consequently constructed by splitting the projection matrix \(V_{r}\) obtained from the Krylov subspace method.

$$\begin{aligned} V=\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {V_1 }&{} &{} &{} \\ &{} {V_2 }&{} &{} \\ &{} &{} \ddots &{} \\ &{} &{} &{} {V_k } \\ \end{array} }} \right] \end{aligned}$$
(28)

where \(V_n \in \mathbb {R}^{m\times r}\) (\(n=1,{\ldots },k)\), \(V\in \mathbb {R}^{N\times kr}\). Finally, the reduced-order state equation is

$$\begin{aligned} \left[ {{\begin{array}{c} {\frac{\mathrm{d}x_1 }{\mathrm{d}t}} \\ \vdots \\ {\frac{\mathrm{d}x_r }{\mathrm{d}t}} \\ \vdots \\ {\frac{\mathrm{d}x_{kr} }{\mathrm{d}t}} \\ \end{array} }} \right] =\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {\frac{1}{R_1 C_1 }}&{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} \\ &{} &{} {\frac{1}{R_1 C_1 }}&{} &{} \\ &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} {\frac{1}{R_k C_k }} \\ \end{array} }} \right] V^{T}\,TV\left[ {{\begin{array}{c} {x_1 } \\ \vdots \\ {x_{kr} } \\ \end{array} }} \right] +V^{T}\left[ {{\begin{array}{c} {\frac{1}{R_1 C_1 }} \\ 0 \\ \vdots \\ 0 \\ 0 \\ \end{array} }} \right] V_{in} \end{aligned}$$
(29)

It is worth mentioning that every locally homogeneous block equation of order m is reduced to order r, while the structure of all the varying electrical parameters, and hence their probability characteristics, is preserved.
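The block-diagonal projection of (28) can be sketched as follows: the Krylov sequence (27) is built once from the block mean values \(R_{n0}\), \(C_{n0}\), then split row-wise into the k local blocks, each of which is re-orthonormalized (the block size and the assumption \(m\ge r\) are illustrative).

```python
import numpy as np
from scipy.linalg import block_diag

def spmor_rc_projection(T, D_mean, b_mean, k, r):
    """Structure-preserving projection (28) for a k-block RC chain.

    T      : N x N parameter-free tridiagonal matrix, as in (26)
    D_mean : N x N diagonal matrix holding the mean 1/(R_n0*C_n0) values
    b_mean : length-N input vector built from the mean values
    """
    N = T.shape[0]
    m = N // k                         # local block size (m >= r assumed)
    A = D_mean @ T
    x = b_mean.copy()
    cols = []
    for _ in range(r):                 # Krylov sequence of the mean-value system, as in (27)
        cols.append(x)
        x = A @ x
    Vr = np.linalg.qr(np.column_stack(cols))[0]
    # split row-wise into the k local blocks and re-orthonormalize each block
    blocks = [np.linalg.qr(Vr[n * m:(n + 1) * m])[0] for n in range(k)]
    return block_diag(*blocks)         # V of (28), of size N x (k*r)
```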

4.3 Non-homogeneous Parameterized RLC MOR

When the electrical parameters of an interconnect RLC circuit are non-homogeneous, as is evident from Eq. (16), generating the projection matrix is rather difficult. The electrical parameters of the interconnect wires are locally uniform, as already mentioned in part 4.2 of Sect. 4. We can therefore assume that the parameters of adjacent resistors R, inductors L and capacitors C are the same (i.e., their probability distributions are equal). By also assuming that the interconnect system is divided into k equal blocks, the system state equation is expressed as

$$\begin{aligned} \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {C_1 }&{} &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} &{} \\ &{} &{} {C_k }&{} &{} &{} \\ &{} &{} &{} {L_1 }&{} &{} \\ &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} {L_k } \\ \end{array} }} \right] \left[ {{\begin{array}{c} {\frac{\mathrm{d}v_1 }{\mathrm{d}t}} \\ \vdots \\ {\frac{\mathrm{d}v_{k\times m} }{\mathrm{d}t}} \\ {\frac{\mathrm{d}i_1 }{\mathrm{d}t}} \\ \vdots \\ {\frac{\mathrm{d}i_{k\times m} }{\mathrm{d}t}} \\ \end{array} }} \right]= & {} \left[ {{\begin{array}{c@{\quad }c} 0&{} Q \\ {Q^{T}}&{} {{\begin{array}{c@{\quad }c@{\quad }c} {R_1 }&{} &{} \\ &{} \ddots &{} \\ &{} &{} {R_{k\times m} } \\ \end{array} }} \\ \end{array} }} \right] \left[ {{\begin{array}{c} {v_1 } \\ \vdots \\ {v_{k\times m} } \\ {i_1 } \\ \vdots \\ {i_{k\times m} } \\ \end{array} }} \right] \nonumber \\&+B\times v_{in} \end{aligned}$$
(30)

where \(C_n ,L_n\) and \(R_n \in \mathbb {R}^{m\times m}\) (\(n=1,\ldots ,k\), \(k=N/2m\)), the entries of Q can assume the values 1, \(-1\) and 0, while B is the input matrix, shown in the following.

$$\begin{aligned} B=\left[ {{\begin{array}{c} 0 \\ \vdots \\ 0 \\ 1 \\ \vdots \\ 0 \\ \end{array} }} \right] \end{aligned}$$

As previously described in part 4.2 of Sect. 4, we can still make use of the mean values of the electrical parameters of interconnects in order to construct the projection matrix.

We also assume that the projection matrix \(V_{r}\) obtained by applying the Krylov subspace method can be written as

$$\begin{aligned} V_r =\left[ {{\begin{array}{l@{\quad }l@{\quad }l@{\quad }l} b&{} {Ab}&{}\ldots &{} {A^{r}b} \\ \end{array} }} \right] \end{aligned}$$
(31)

where

$$\begin{aligned} A= & {} \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {C_{10} }&{} &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} &{} \\ &{} &{} {C_{k0} }&{} &{} &{} \\ &{} &{} &{} {L_{10} }&{} &{} \\ &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} {L_{k0} } \\ \end{array} }} \right] ^{-1}\left[ {{\begin{array}{c@{\quad }c} 0&{} Q \\ {Q^{T}}&{} {{\begin{array}{c@{\quad }c@{\quad }c} {R_{10} }&{} &{} \\ &{} \ddots &{} \\ &{} &{} {R_{k0} } \\ \end{array} }} \\ \end{array} }} \right] \\ b= & {} \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {C_{10} }&{} &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} &{} \\ &{} &{} {C_{k0} }&{} &{} &{} \\ &{} &{} &{} {L_{10} }&{} &{} \\ &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} {L_{k0} } \\ \end{array} }} \right] ^{-1}\left[ {{\begin{array}{c} 0 \\ \vdots \\ 0 \\ 1 \\ \vdots \\ 0 \\ \end{array} }} \right] \end{aligned}$$

where \(R_{n0}\), \(L_{n0}\) and \(C_{n0}\in \mathbb {R}^{m\times m}\) (\(n=1,\ldots ,k\), \(k=N/2m\)) are the block mean values. A structure-preserving projection matrix V is thus derived by splitting \(V_{r}\) into 2k parts.

$$\begin{aligned} V=\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {V_1 }&{} &{} &{} \\ &{} {V_2 }&{} &{} \\ &{} &{} \ddots &{} \\ &{} &{} &{} {V_{2k} } \\ \end{array} }} \right] \end{aligned}$$
(32)

where \(V_{n}\in \mathbb {R}^{m\times r}\) (\(n=1,\ldots ,2k\)) and \(V\in \mathbb {R}^{N\times 2kr}\). The reduced-order state equation results to be of the form

$$\begin{aligned} \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {C_{1r} }&{} &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} &{} \\ &{} &{} {C_{kr} }&{} &{} &{} \\ &{} &{} &{} {L_{1r} }&{} &{} \\ &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} {L_{kr} } \\ \end{array} }} \right] \left[ {{\begin{array}{c} {\frac{\mathrm{d}v_1 }{\mathrm{d}t}} \\ \vdots \\ {\frac{\mathrm{d}v_{k\times r} }{\mathrm{d}t}} \\ {\frac{\mathrm{d}i_1 }{\mathrm{d}t}} \\ \vdots \\ {\frac{\mathrm{d}i_{k\times r} }{\mathrm{d}t}} \\ \end{array} }} \right]= & {} \left[ {{\begin{array}{c@{\quad }c} 0&{} {Q_r } \\ {Q_r ^{T}}&{} {{\begin{array}{c@{\quad }c@{\quad }c} {R_{1r} }&{} &{} \\ &{} \ddots &{} \\ &{} &{} {R_{kr} } \\ \end{array} }} \\ \end{array} }} \right] \left[ {{\begin{array}{c} {v_1 } \\ \vdots \\ {v_{k\times r} } \\ {i_1 } \\ \vdots \\ {i_{k\times r} } \\ \end{array} }} \right] \nonumber \\&+B_r \times v_{in} \end{aligned}$$
(33)

where \(C_{nr}\), \(L_{nr}\), \(Q_{r}\) and \(R_{nr} \,\in \mathbb {R}^{r\times r}\) (\(n=1,\ldots ,k\)), i.e., every homogeneous block equation of order m is reduced to order r. Furthermore, the structure of all the varying electrical parameters, and hence their probability characteristics, is preserved.

4.4 Moment-Matching Characteristics of the SPMOR Algorithm

The main invariants preserved by MOR techniques are the coefficients of some power series expansion of the transfer function H(s). The techniques proposed in this paper yield a reduced-order model that accurately matches the leading kr or 2kr coefficients of the transfer function. This improved moment-matching property can be stated as follows.

Theorem 1

Let the projection matrix \(V_{r}\) span the Krylov subspace \(\kappa _r (A,b)\). Then, the first r moments of the expansions of the transfer function H(s) and of the r-th order transfer function \(H_{r}({s})\) around \(s_{0}\) are identical [17, 32]:

$$\begin{aligned} H(\hbox {s}) = H_{r}({s}) + O ((s-s_{0})^{r}) \end{aligned}$$

Theorem 2

Let \(\overline{V_{r}}\) be a matrix, with possibly more than r columns, such that \(\kappa _r (A,b)\subseteq \hbox {span}\,\overline{V_r}\). Then, the first r moments of the expansions of the transfer function \(H(s)\) and of the r-th order transfer function \(H_{r}({s})\) around \(s_{0}\) are identical [16, 18]:

$$\begin{aligned} H(\hbox {s}) = H_{r}({s}) + O ((s-s_{0})^{r}) \end{aligned}$$

Theorem 3

Let \(s_{0}\in \mathbb {R}\) and let V be the projection matrix described in Eqs. (28) and (32), i.e., the one used in the SPMOR algorithm. Then, the first kr or 2kr moments of the expansions of the transfer function \(H(s)\) and of the reduced transfer function \(H_{r}({s})\) around \(s_{0}\) are identical. For the special case \(s_{0}=0\), the results reported in [42] can be readily extended to this slightly more general scenario. The proof of [16, Theorem 3] is instead readily extendable to all of the other cases of Theorem 3.

It is worth noting that the above theorems hold under the assumption that the electrical parameters of the interconnect system (resistor R, capacitor C and inductor L) are invariant. We will now discuss the moment-matching characteristics of the SPMOR algorithm.

As recalled in Theorem 1, the basic idea behind Krylov subspace-based order reduction for electronic circuits with one parameter is to reuse the lower-upper (LU) factorization exploited to obtain \(H(s_{0})\) in order to generate the information contained in the leading Taylor coefficients of H(s) expanded around \(s_{0}\). Accordingly, the transfer function is rewritten as:

$$\begin{aligned} H(s)=B^{T}(s_0 C-G+(s-s_0 )C)^{-1}B=B^{T}(I+(s-s_0 )M)^{-1}R \end{aligned}$$
(34)

where

$$\begin{aligned} M:=(s_0 C-G)^{-1}C\hbox { and }R:=(s_0 C-G)^{-1}B \end{aligned}$$

The Taylor expansion of H(s) around \(s_{0}\) is given by

$$\begin{aligned} H(s)=\sum \limits _{j=0}^\infty {(-1)^{j}} B^{T}M^{j}R(s-s_0 )^{j}. \end{aligned}$$
(35)
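The moment definitions in (34)–(35) suggest a direct numerical check of Theorems 1–3: compute the leading coefficients \(B^{T}M^{j}R\) of the original and of the reduced models and compare them. A sketch follows; \(s_0\) and the system matrices are assumed to be given.

```python
import numpy as np

def moments(C, G, B, s0, n_mom):
    """Leading Taylor coefficients B^T M^j R of H(s) around s0, with
    M = (s0*C - G)^{-1} C and R = (s0*C - G)^{-1} B, as in (34)-(35)."""
    K = s0 * C - G
    M = np.linalg.solve(K, C)
    R = np.linalg.solve(K, B)
    out = []
    for _ in range(n_mom):
        out.append(B.T @ R)            # B^T M^j R (the sign factor (-1)^j is omitted)
        R = M @ R
    return out

# Usage sketch: with V the SPMOR projection, Cr = V.T @ C @ V, Gr = V.T @ G @ V and
# Br = V.T @ B, the first kr (or 2kr) entries of moments(C, G, B, s0, q) and
# moments(Cr, Gr, Br, s0, q) should agree up to numerical precision.
```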

For the sake of clarity, we first show the basic ideas of multi-parameter independent model-order reduction methods on a two-parameter system, which indeed corresponds to a homogeneous RC interconnect system. We then generalize them to any n-parameter interconnect system. The transfer function with two parameters is defined as

$$\begin{aligned} H(s_1 ,s_2 )=B^{T}\left[ Is_1 -\left( Gs_2 +G_0 \right) \right] ^{-1}B \end{aligned}$$
(36)

where \(s_{1}=s\) and \(s_{2}=1/{ RC}\). Similarly to the model reduction method with one parameter, the Taylor expansion of \(H(s_{1},s_{2})\) is given by:

$$\begin{aligned} H(s_1 ,s_2 )= & {} -B^{T}\left[ I-\left( s_1 G_0^{-1} -s_2 G_0^{-1} G\right) \right] ^{-1}G_0 ^{-1}B \nonumber \\= & {} -B^{T}\sum \limits _{i=0}^\infty {\left( s_1 G_0^{-1} -s_2 G_0^{-1} G\right) ^{i}} G_0 ^{-1}B \end{aligned}$$
(37)

By expanding each of the terms of the above series, Eq. (37) is expressed as

$$\begin{aligned} H(s_1 ,s_2 )=-B^{T}\sum \limits _{i=0}^\infty {\sum \limits _{j=0}^i {\left( F_j^i \left( G_0^{-1} ,G_0^{-1} G\right) \right) } } G_0^{-1} Bs_1^{i-j} s_2^j \end{aligned}$$
(38)

where the \(F_j^i\) are the multipliers of \(s_1^{i-j} s_2^j\) when \(\left( s_1 G_0^{-1} -s_2 G_0^{-1} G\right) ^{i}\) is expanded into separate terms. For example, when \(i=2\), \(\left( s_1 G_0^{-1} -s_2 G_0^{-1}G\right) ^{2}\) is expanded into three terms, which correspond to \(F_{0}^{2}\), \(F_1^2\) and \(F_2^2\), respectively,

$$\begin{aligned} F_0^2 =\left( G_0^{-1}\right) ^{2},F_1^2 =\left( G_0^{-1}\right) ^{2}G+G_0^{-1} GG_0^{-1} ,F_2^2 =\left( G_0^{-1} G\right) ^{2}. \end{aligned}$$
(39)

Assuming that the coupling term (i.e., \(F_{1}^{2}\)) can be neglected, the projection matrix V is then constructed based only on the terms \(F_j^i \left( G_0^{-1} ,G_0^{-1} G\right) G_0 ^{-1}B\),

$$\begin{aligned} \mathrm{spancol}\{V\}=\mathrm{spancol}\left\{ {B,G_0 B,G_0 ^{2}B,\ldots } \right\} . \end{aligned}$$

where \(G=\sigma G_{0}\); the above Krylov subspace is derived from a spatial transformation of \(\mathrm{spancol}\left\{ {B,{G_0}^{-1}B,{G_0} ^{-2} B,\ldots } \right\} \). It has been demonstrated that the aforementioned projection matrix ensures that the reduced system matches the terms \(F_{j}^{i}\) appearing in the right-hand side of Eq. (38) [42]. Thus, for a two-parameter system, the moments are matched except for the coupling terms (i.e., \(B^{T}F_k^m \left( G_0^{-1} ,G_0^{-1} G\right) G_0^{-1} B=B_r^T F_k^m \left( G_{0r}^{-1} ,G_{0r}^{-1} G_r \right) G_{0r}^{-1} B_r ,k=0,\ldots ,m,m=0,\ldots ,J\)). The two-parameter system can be generalized to an n-parameter one, as also discussed in [7]. The state variable x can be expanded into a series around \(s_{1}\), \(s_{2},\ldots ,s_{n}\) as

$$\begin{aligned} x= & {} [I-(s_1 G_0^{-1} G_1 +\cdots +s_n G_0^{-1} G_n )]^{-1}G_0^{-1} Bu(t) \\= & {} \sum _{m=0}^\infty {(s_1 G_0^{-1} G_1 +\cdots +s_n G_0^{-1} G_n )^{m}} G_0^{-1} Bu(t) \\= & {} \sum _{m=0}^\infty {\sum _{k_2 =0}^{m-(k_3 +\cdots +k_n )} \cdots } \sum _{k_{n-1} =0}^{m-k_n } \sum _{k_n =0}^m (F_{k_2 ,\ldots ,k_n }^m \\&\quad (G_0^{-1} G_1 ,\ldots ,G_0^{-1} G_n )G_0^{-1} Bu(t)) s_1^{m-(k_2 +\cdots +k_n )} s_2^{k_2 } \ldots s_n^{k_n} \end{aligned}$$

where \(F_{k_2 ,\ldots ,k_n }^m\) is defined as in (39). The projection matrix V, still neglecting the coupling terms, is constructed as

$$\begin{aligned} \mathrm{spancol}\{V\}=\mathrm{spancol}\{B_M ,M_1 B_M ,M_2 B_M ,\ldots M_n B_M ,M_1^2 B_M ,M_2^2 B_M \ldots \} \end{aligned}$$

where \(M_{i}=-G_{0}^{-1}G_{i}\), \({i}=1,2,{\ldots },{n}\), and \(B_{M}= G_{0}^{-1}B\). In the experimental examples provided in [7], although only the first few F terms with \(m=0,1\) were used to generate the projection matrix V, the resulting reduced models already exhibit sufficient accuracy. In contrast, the proposed SPMOR algorithm is able to preserve more terms, thus ensuring an even higher accuracy. Apart from its moment-matching characteristics, another advantage of the SPMOR algorithm is that it preserves the structure parameters, which justifies neglecting the coupling terms of the Taylor expansion.
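The parameter-wise basis \(\mathrm{spancol}\{B_M ,M_i B_M ,M_i^2 B_M ,\ldots \}\) used above (coupling terms neglected) can be generated as in the sketch below, where \(M_i=-G_0^{-1}G_i\), \(B_M=G_0^{-1}b\), and a single input vector b is assumed.

```python
import numpy as np

def multiparam_basis(G0, G_list, b, depth):
    """Orthonormal basis for span{B_M, M_i B_M, ..., M_i^depth B_M}, one Krylov chain
    per parameter direction i; the mixed (coupling) powers are deliberately neglected."""
    BM = np.linalg.solve(G0, b)
    cols = [BM]
    for Gi in G_list:                             # one chain per parameter matrix G_i
        x = BM
        for _ in range(depth):
            x = -np.linalg.solve(G0, Gi @ x)      # apply M_i = -G0^{-1} G_i
            cols.append(x)
    return np.linalg.qr(np.column_stack(cols))[0]
```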

4.5 Passivity

It is well known that H is passive if, and only if, the following three conditions are satisfied (see, e.g., [43]):

(a) H(s) has no poles in \(\mathbb {C}_+ =\{s\in \mathbb {C}\,|\,\mathrm{Re}\,s>0\}\);

(b) \(H(\overline{s})=\overline{H(s)}\) for all \(s\in \mathbb {C}\);

(c) \(\hbox {Re}(x^{\mathrm{H}}H({s})x)\ge 0\) for all \(s\in \mathbb {C}_+\) and \(x\in \mathbb {C}^{m}\).

In particular, the RC and RLC transfer functions H are passive. In [16] and [32], it is shown that the reduced-order models of RLC circuits obtained by projection preserve passivity. This property also applies to the SPMOR model, which justifies the results we have obtained. We can therefore conclude that the SPMOR reduced-order models \(H_{r}\) given by (29) and (33) are passive.
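Since the reduction is a congruence transformation, a quick numerical sanity check is to verify that the projected MNA matrices retain the definiteness of the original ones; for systems in the form (22), \(C_r=C_r^{T}\succeq 0\) together with \(G_r+G_r^{T}\succeq 0\) is a standard sufficient condition for passivity. A minimal sketch of such a check follows.

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Positive semidefiniteness of the symmetric part of M."""
    S = 0.5 * (M + M.T)
    return bool(np.all(np.linalg.eigvalsh(S) >= -tol))

def passivity_check(Cr, Gr):
    """Sufficient passivity test for a reduced MNA model in the form (22):
    Gr x + s Cr x = Br u, y = Br^T x."""
    return is_psd(Cr) and is_psd(Gr)
```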

4.6 Summary

To summarize, the procedure for RLC systems with multiple electrical parameters (i.e., non-homogeneous systems) is similar to that adopted for single-parameter systems (i.e., homogeneous systems), with the only difference that the projection matrix is larger, since more structure needs to be preserved. The procedure for RC systems with multiple electrical parameters (i.e., non-homogeneous systems) is instead slightly different from the single-parameter case, since the former is equivalent to the MOR method with a fixed parameter value, apart from the fact that the projection results need to be multiplied by a parameter-preserving matrix. For RC systems with a single electrical parameter (i.e., homogeneous systems), the SPMOR technique is typically adopted, or at least serves as a reference for other PMOR techniques.

The computational complexity of the SPMOR algorithm for parameter variations in interconnect systems, including affine linear variations of electrical (or geometric) parameters and nonlinearly varying parameter systems, depends on the number of basis vectors of the projection subspace V. For simplicity, let us assume that all of the electrical parameters are of the same order m in every block, and that the reduced order is r. From the application of the Arnoldi method to calculate the projection matrix, it is quite straightforward to conclude that such a method is numerically reliable and can be implemented in \(O({ nr}^{2})\) operations with a storage requirement of \(O({ nr})\), where n is the size of the original system matrix. It is therefore evident how this algorithm avoids the curse of dimensionality that affects moment-matching-based PMOR methods.

5 Numerical Results

In this section, we present two numerical examples to demonstrate the accuracy, stability and efficiency of the proposed SPMOR method. All of the experiments have been performed using MATLAB, run on a PC with a 3.3 GHz Intel Core (TM) i3-3220 processor.

Example 1

We here illustrate the accuracy of the structure-preserving parameterized model-order reduction technique for a simple interconnect circuit consisting of a one-bit bus (see Fig. 3). The near end of the line is driven by a voltage source, and the far-end voltage across the capacitor is our output signal. An RLC modified nodal analysis formulation is used to model the capacitive and inductive effects of the interconnect lines. The description matrices are

$$\begin{aligned} C=\left[ {{\begin{array}{c@{\quad }c} Q&{} 0 \\ 0&{} H \\ \end{array} }} \right] , G=\left[ {{\begin{array}{c@{\quad }c} 0&{} E \\ {-E^{T}}&{} N \\ \end{array} }} \right] \end{aligned}$$

where Q, H, and N are the capacitance, inductance, and resistance matrices, respectively, while E is the incidence matrix associated with the inductive connectivity. The order of Q, H, and N is 1000, while the electrical parameter variations due to process spreads are assumed to be bounded within ±15% of the mean values. The RLC distributed circuit is divided into 10 equal subcircuits, for a total of 30 variation parameters. To the best of our knowledge, all parameterized model-order reduction methods are generalizations of the traditional model-order reduction method for single-parameter systems based on projection techniques. However, the frequency parameter generated by the Laplace transform is coupled to the other electrical parameters. Therefore, no comparison with [7, 20] and [26] is proposed.

Fig. 6 Transfer function of the SPMOR algorithm at different reduction orders

Table 1 Electrical parameters of a single interconnect wire

Figure 6 shows the simulated transfer function of a single interconnect wire, where all of the electrical parameters are Gaussian-distributed. The details of the electrical parameters are given in Table 1, where \(r_{i}\), \(c_{i}\) and \(l_{i}\) (\(i=1,\ldots ,10\)) are, respectively, the unit resistance, capacitance, and inductance of the 10 different interconnect segments. Given the almost perfect overlap between the two curves, it is evident that the SPMOR algorithm with a reduced order of 10 closely matches the original results. Moreover, the algorithm is stable at high frequencies.

From a computational complexity standpoint, we propose a comparison between the traditional single-parameter structure-preserving MOR and the multi-parameter SPMOR method. For the sake of simplicity, we purposely avoid a direct comparison among multi-parameter moment-matching methods because of their complexity (an idea of such complexity can be gathered from [26]). Table 2 shows the approximation order of the reduced systems and the CPU time elapsed during the model-order reduction. The number of homogeneous interconnect segments is 10 in all of the presented conditions. It can be seen that the computational complexity of the SPMOR algorithm is comparable to that of the single-parameter structure-preserving MOR. The reason is that the main step of SPMOR is structure preserving (i.e., the parameterized interconnect system is treated as a specially structured interconnect system). Moreover, the computational complexity is independent of the number of homogeneous interconnect segments.

Table 2 Comparison of classic structure-preserving and SPMOR methods

Finally, the main advantage of the SPMOR method is that it retains the probability features of the original varying interconnect system. Figure 7 presents a comparison between the probability densities of the transfer function H of the original system and of the SPMOR reduced system at 1 MHz. The number of varying electrical parameters is 3 (i.e., the whole wire is treated as one segment). Figure 7 also shows the probability density of the transfer function of the system reduced to order 20, which is similar to the probability density of the original system (a useful characteristic to preserve for the statistical static timing analysis of a circuit).

Fig. 7 Probability comparison between original and reduced system

Fig. 8 RLC grid circuit

Fig. 9 Transfer function of the SPMOR algorithm at different reduction orders

Fig. 10 Comparison of the probability densities between the original and reduced systems

Example 2

Let us consider an RLC network with three different electrical parameters. The interconnect circuit consists of a 3-bit bus employed in an industrial application (Fig. 8), which can be modeled as an RLC mesh. The voltage is probed at node 20. The circuit parameters are Gaussian-distributed, with mean values \(R_{s}=R=1\,\Omega \), \(L=1\,\hbox {nH}\), \(C=1\,\hbox {pF}\) and \({ CC}=0.5\,\hbox {pF}\), and standard deviations equal to 15% of their mean values. The near end of the first line is driven by a voltage source (\(V_{s}\)). An RLC modified nodal analysis formulation is used to model the capacitive and magnetic coupling effects between any two of these lines. The description matrices are

$$\begin{aligned} C=\left[ {{\begin{array}{c@{\quad }c} Q&{} 0 \\ 0&{} H \\ \end{array} }} \right] , G=\left[ {{\begin{array}{c@{\quad }c} 0&{} E \\ {-E^{T}}&{} N \\ \end{array} }} \right] \end{aligned}$$

where Q, H, and N are, respectively, the capacitance, inductance, and resistance matrices, while E is the incidence matrix associated with the inductive connectivity.

Figure 9 shows that the SPMOR model with approximation order 19 visually almost overlaps the exact transfer function, although the relative error at high frequency is slightly larger due to the frequency expansion point \(s=0\). The number of matched moments is 38 (19 \(\times \) 2). The CPU times elapsed in the SPMOR and in the common structure-preserving algorithm are also reasonably comparable.

Figure 10 shows the probability density of the reduced-order system based on the SPMOR model. The frequency point chosen to compare the probability densities of the original and reduced systems is 100 MHz. It is worth noting that the probability density of the SPMOR model reduced to order 15 appreciably matches that of the original system (i.e., all of the probability features are preserved).

6 Conclusions

The main innovative contributions of this paper can be summarized as follows: the introduction of a standard model suitable as a reference for parameterized MOR of homogeneous RC interconnect systems (Eq. (12)), and the presentation of a stable and efficient numerical procedure to generate an orthonormal basis of the projection subspace in parameterized MOR. The new parameterized MOR method (SPMOR) efficiently preserves the probability features, while the SPMOR reduced-order model has the same form as the original system. Its computational complexity is similar to that of the nonparameterized MOR algorithm, while the passivity property is preserved. As supported by the numerical examples, we can therefore conclude that the SPMOR method is highly flexible with regard to multiple parameters.