Abstract
We consider Uncertainty Quantification (UQ) by expanding the solution in so-called generalized Polynomial Chaos (gPC) expansions. In these expansions the solution is decomposed into a series of orthogonal polynomials in which the parameter dependency becomes an argument of the orthogonal polynomial basis functions; the time and space dependency remains in the coefficients. In UQ two main approaches are in use: Stochastic Collocation (SC) and Stochastic Galerkin (SG). Practice shows that in many cases SC is more efficient than SG at similar accuracy. In SC the coefficients in the expansion are approximated by quadrature, which leads to a large series of deterministic simulations for several parameter values. We consider strategies to efficiently perform this sequence of deterministic simulations within SC.
Keywords
- Failure Probability
- Model Order Reduction
- Uncertainty Quantification
- Sensitivity Matrix
- Time Step Integration
1 Polynomial Chaos for Dynamical Systems with Random Parameters
We will denote parameters by \(\mathbf{p} = {(p_{1},\ldots,p_{q})}^{T}\) and assume a probability space \((\varOmega,\mathcal{A},\mathcal{P})\) to be given, where \(\mathcal{A}\) represents a σ-algebra, \(\mathcal{P}:\, \mathcal{A}\,\rightarrow \, [0,1]\) is a probability measure, and \(\mathbf{p} = \mathbf{p}(\omega ):\varOmega \;\rightarrow \; Q \subseteq {\mathbb{R}}^{q}\). Here we will assume that the \(p_{i}\) are independent.
For a function \(f: Q\; \rightarrow \; \mathbb{R}\), the mean or expected value is defined by
\[ <f>\; =\;\int _{Q}f(\mathbf{p})\,\rho (\mathbf{p})\,\mathrm{d}\mathbf{p}. \qquad (1) \]
The specific probability distribution density is given by the function ρ(p). Because \(\mathcal{P}(\varOmega ) = 1\), we have \(<1>\, =\, 1\). A bilinear form (with associated norm, in \(L_{\rho }^{2}\)) is defined by
\[ <f,g>\; =\;\int _{Q}f(\mathbf{p})\,g(\mathbf{p})\,\rho (\mathbf{p})\,\mathrm{d}\mathbf{p}\; =\; <f\,g>. \qquad (2) \]
The last form is convenient when products of more than two functions are involved. Similar definitions hold for vector- or matrix-valued functions \(\mathbf{f}: Q\; \rightarrow \; {\mathbb{R}}^{m\times n}\).
We assume a complete orthonormal basis of polynomials \((\phi _{i})_{i\in \mathbb{N}}\), \(\phi _{i}: {\mathbb{R}}^{q}\; \rightarrow \; \mathbb{R}\), given with \(<\phi _{i},\phi _{j} >=\delta _{ij}\) (i, j ≥ 0). When q = 1, \(\phi_{i}\) has degree i. To treat a uniform distribution (i.e., for studying effects caused by robust variations) Legendre polynomials are optimal in some sense; for a Gaussian distribution one can use Hermite polynomials [17, 28]. A polynomial \(\phi_{i}\) on \({\mathbb{R}}^{q}\) can be defined from one-dimensional polynomials: \(\phi _{i}(\mathbf{p}) =\prod _{ d=1}^{q}\phi _{i_{d}}(p_{d})\). Here the single index i enumerates the multi-indices \(\mathbf{i} = {(i_{1},\ldots,i_{q})}^{T}\).
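The tensor-product construction can be sketched in Python for the uniform case. This is an illustrative helper, not code from the paper: `phi_1d` rescales the classical Legendre polynomials so that they become orthonormal with respect to ρ(p) = 1/2 on [−1, 1], and `phi_multi` forms the product basis for a multi-index.

```python
import numpy as np
from numpy.polynomial import legendre

def phi_1d(i, p):
    """Orthonormal Legendre polynomial: <phi_i, phi_j> = delta_ij
    w.r.t. the density rho(p) = 1/2 on [-1, 1]."""
    c = np.zeros(i + 1)
    c[i] = 1.0
    # The classical P_i has squared norm 1/(2i+1) under rho = 1/2, so rescale.
    return legendre.legval(p, c) * np.sqrt(2 * i + 1)

def phi_multi(ivec, pvec):
    """Multi-dimensional basis: phi_i(p) = prod_d phi_{i_d}(p_d)."""
    return np.prod([phi_1d(i, p) for i, p in zip(ivec, pvec)])
```

Orthonormality can be checked numerically with a Gauss–Legendre rule of sufficiently high order.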
We will denote a dynamical system by
\[ \mathbf{F}(\mathbf{x}(t,\mathbf{p}),\,t,\,\mathbf{p})\; =\; \mathbf{0},\qquad t \in [t_{0},t_{1}]. \qquad (3) \]
Here F may contain differential operators. The solution \(\mathbf{x} \in {\mathbb{R}}^{n}\) depends on t and on p. In addition initial and boundary values are assumed. In general these may depend on p as well.
A solution \(\mathbf{x}(t,\mathbf{p}) = {(x_{1}(t,\mathbf{p}),\ldots,x_{n}(t,\mathbf{p}))}^{T}\) of the dynamical system becomes a random process. We assume that second moments are finite: \(<x_{j}^{2}(t,\mathbf{p})>\, <\, \infty \), for all \(t \in [t_{0},t_{1}]\) and \(j = 1,\ldots,n\). We express x(t, p) in a Polynomial Chaos expansion
\[ \mathbf{x}(t,\mathbf{p})\; =\;\sum _{i=0}^{\infty }\mathbf{v}_{i}(t)\,\phi _{i}(\mathbf{p}), \qquad (4) \]
where the coefficient functions \(\mathbf{v}_{i}(t)\) are defined by
\[ \mathbf{v}_{i}(t)\; =\; <\mathbf{x}(t,\cdot ),\phi _{i}>\; =\;\int _{Q}\mathbf{x}(t,\mathbf{p})\,\phi _{i}(\mathbf{p})\,\rho (\mathbf{p})\,\mathrm{d}\mathbf{p}. \qquad (5) \]
A finite approximation \({\mathbf{x}}^{m}(t,\mathbf{p})\) to x(t, p) is defined by
\[ {\mathbf{x}}^{m}(t,\mathbf{p})\; =\;\sum _{i=0}^{m}\mathbf{v}_{i}(t)\,\phi _{i}(\mathbf{p}). \qquad (6) \]
For traditional random distributions \(\rho(\cdot)\), convergence rates for \(\Vert \mathbf{x}(t,\cdot ) -{\mathbf{x}}^{m}(t,\cdot )\Vert \) are known for functions x(t, p) that depend smoothly on p (see [2] and [28] for an expansion in Hermite or in Legendre polynomials, respectively). For more general distributions \(\rho(\cdot)\), convergence is not guaranteed. For instance, polynomials in a lognormal variable are not dense in \(L_{\rho }^{2}\). For convergence one requires that the probability measure is uniquely determined by its moments [8]. At the very least, the expected value of each polynomial must exist.
The integrals (5) can be computed by (quasi) Monte Carlo, or by multi-dimensional quadrature. We assume quadrature grid points \({\mathbf{p}}^{k}\) and quadrature weights \(w_{k}\), with 0 ≤ k ≤ K, such that
\[ \mathbf{v}_{i}(t)\;\approx \;\sum _{k=0}^{K}w_{k}\,\mathbf{x}(t,{\mathbf{p}}^{k})\,\phi _{i}({\mathbf{p}}^{k}). \qquad (7) \]
Typically, Gaussian quadrature is used with corresponding weights. We solve (3) for \(\mathbf{x}(t,{\mathbf{p}}^{k})\), k = 0, …, K (K + 1 deterministic simulations). Here any suitable numerical solver for (3) can be used. By post-processing we determine the \(\mathbf{v}_{i}(t)\) in (7).
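The whole SC pipeline can be sketched for one uniform parameter on [−1, 1] at a fixed time t. The "solution" `x_of_p` below is a hypothetical stand-in for a deterministic simulation run; the quadrature step mirrors (7).

```python
import numpy as np
from numpy.polynomial import legendre

def x_of_p(p):
    # Hypothetical toy model standing in for one deterministic simulation.
    return np.exp(0.3 * p) + p**2

def phi(i, p):
    # Orthonormal Legendre basis w.r.t. rho(p) = 1/2 on [-1, 1].
    c = np.zeros(i + 1); c[i] = 1.0
    return legendre.legval(p, c) * np.sqrt(2 * i + 1)

K = 8
nodes, w = legendre.leggauss(K + 1)   # Gauss-Legendre points and weights
w = w / 2.0                           # absorb the density rho(p) = 1/2
sims = np.array([x_of_p(pk) for pk in nodes])   # K+1 deterministic "runs"
# Post-processing: quadrature approximation of the coefficients, as in (7).
v = np.array([np.sum(w * sims * phi(i, nodes)) for i in range(5)])
mean = v[0]                           # first coefficient carries the mean
```

The deterministic runs are fully independent, so in practice they can be executed in parallel before the post-processing step.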
As an alternative approach, Stochastic Galerkin can be used. Then the sum (6) is inserted into Eq. (3) and the residues are made orthogonal to the basis functions. This results in one big system for the coefficient functions \(\mathbf{v}_{i}(t)\) [17, 22, 28]. Due to the averaging, this system no longer depends on particular parameter values.
2 Statistical Information and Sensitivity
We note that the expansion \({\mathbf{x}}^{m}(t,\mathbf{p})\), see (6), gives fully detailed information when varying p; it serves as a response surface model. From this the actual (and possibly skewed) range of solutions can be determined, which can differ from envelope approximations based on means and variances.
Let \(\phi_{0}\) be the polynomial that is constant c; orthonormality implies that c = 1. By further use of the orthogonality, the mean of x(t, p) is given by
\[ <\mathbf{x}(t,\cdot )>\; =\; <\mathbf{x}(t,\cdot ),\phi _{0}>\;\approx \;\mathbf{v}_{0}(t) \qquad (8) \]
(for the finite expansion with exact coefficients the equality sign holds). This involves all \({\mathbf{p}}^{k}\) together. One may want to consider effects of \(p_{i}\) and \(p_{j}\) separately. This restricts the parameter space \(Q \subseteq {\mathbb{R}}^{q}\) to a one-dimensional subset with individual distribution densities \(\rho_{i}(p)\) and \(\rho_{j}(p)\). A covariance function of x(t, p) can also be expressed easily,
\[ \mathrm{Cov}_{\mathbf{x}}(t_{1},t_{2})\;\approx \;\sum _{i=1}^{m}\mathbf{v}_{i}(t_{1})\,\mathbf{v}_{i}^{T}(t_{2}). \qquad (9) \]
Having a gPC expansion, the sensitivity (matrix) w.r.t. p is also easily obtained,
\[ \mathbf{S}(t,\mathbf{p})\; =\; \frac{\partial {\mathbf{x}}^{m}(t,\mathbf{p})}{\partial \mathbf{p}}\; =\;\sum _{i=0}^{m}\mathbf{v}_{i}(t)\,{\bigl (\nabla _{\mathbf{p}}\phi _{i}(\mathbf{p})\bigr )}^{T}. \qquad (10) \]
From this a relative sensitivity can be defined by
\[ \bar{\mathbf{S}}(t,\mathbf{p})\; =\; \mathbf{S}(t,\mathbf{p}) \circ \mathbf{R}(t,\mathbf{p}),\qquad R_{ij}\; =\; \frac{p_{j}}{x_{i}^{m}(t,\mathbf{p})}. \qquad (11) \]
It describes the amplification of a relative error in \(p_{j}\) to the relative error in \(x_{i}(t,\mathbf{p})\) (here ∘ denotes the Hadamard product of two matrices).
The sensitivity matrix is also subject to stochastic variations. With a gPC expansion it is possible to determine a mean global sensitivity matrix by
\[ <\mathbf{S}(t,\cdot )>\; =\;\sum _{i=0}^{m}\mathbf{v}_{i}(t)\int _{Q}{\bigl (\nabla _{\mathbf{p}}\phi _{i}(\mathbf{p})\bigr )}^{T}\rho (\mathbf{p})\,\mathrm{d}\mathbf{p}. \qquad (12) \]
Note that the integrals at the right-hand side can be determined in advance and stored in tables.
3 Failure and Tolerance Analysis
Failure may be defined after introducing a criterion function g(t, x(t, p)), e.g., \(g(t,\mathbf{x}(t,\mathbf{p})) \equiv \mathbf{x}(t,\mathbf{p}) -\theta \), with a threshold θ. Then failure is measured by a function χ,
\[ \chi (t,\mathbf{p})\; =\; \left \{\begin{array}{ll} 1,\quad &\mbox{ if }g(t,\mathbf{x}(t,\mathbf{p})) \leq 0,\\ 0,\quad &\mbox{ otherwise.} \end{array} \right. \qquad (13) \]
The Failure Probability is then
\[ P_{F}(t)\; =\;\int _{Q}\chi (t,\mathbf{p})\,\rho (\mathbf{p})\,\mathrm{d}\mathbf{p}\;\approx \;\int _{Q}{\chi }^{m}(t,\mathbf{p})\,\rho (\mathbf{p})\,\mathrm{d}\mathbf{p}, \qquad (14) \]
where \({\chi }^{m}\) denotes χ evaluated with the expansion \({\mathbf{x}}^{m}(t,\mathbf{p})\) from (6).
In (14) the expression at the left of the approximation symbol may be obtained using Monte Carlo methods for the original problems, possibly sped up by methods like Importance Sampling [7, 26]. In [26], after applying results from Large Deviations Theory, realistic yet sharp upper bounds were also derived on the number of samples that have to be drawn.
Alternatively, after having spent the effort of determining \({\mathbf{x}}^{m}(t,\mathbf{p})\) in (6), the evaluation for different p is surprisingly cheap. Monte Carlo, Quasi Monte Carlo and Importance Sampling can be used again for statistics, but at a much lower price [21]. The determination of the Failure Probability, however, deserves additional attention, because the expansion \({\mathbf{x}}^{m}(t,\mathbf{p})\) in (6) may be less accurate in the areas of interest for this kind of statistics. The software tool RODEO of Siemens AG seems to be the only industrial implementation of failure probability calculation that fits within the polynomial chaos framework [20].
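The cheap surrogate-based estimate can be sketched with a toy response surface (the linear `surrogate` and the threshold value are hypothetical, for illustration only); failure is understood as g ≤ 0, as in Sect. 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(p):
    # Hypothetical stand-in for the gPC response surface x^m(t, p).
    return 1.0 + 0.5 * p

theta = 1.2                                   # illustrative threshold
samples = rng.uniform(-1.0, 1.0, 100_000)     # uniform rho on [-1, 1]
g = surrogate(samples) - theta                # criterion g = x - theta
p_fail = np.mean(g <= 0.0)                    # Monte Carlo estimate of P_F
```

Each sample costs one polynomial evaluation instead of one deterministic simulation, which is what makes the post-processing statistics so cheap.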
A hybrid method to compute small failure probabilities that exploits surrogate models was introduced in [18]. Their method can be slightly generalized as follows; in this way one can determine the effect of the approximation on the Failure Probability. For each sample \({\mathbf{z}}^{i}\) we assume a numerically obtained approximation \(\tilde{{\mathbf{z}}}^{i}\). In addition, g is approximated by \(\tilde{g}\). The probabilities one checks are
\[ \tilde{P}_{\varepsilon }(t)\; =\;\int _{Q}\mathbf{1}_{\{\tilde{g}(t,\tilde{\mathbf{z}}(t,\mathbf{p}))\,\leq \,-\varepsilon \}}\,\rho (\mathbf{p})\,\mathrm{d}\mathbf{p}, \qquad (15) \]
\[ \tilde{Q}_{\varepsilon }(t)\; =\;\int _{Q}\mathbf{1}_{\{\tilde{g}(t,\tilde{\mathbf{z}}(t,\mathbf{p}))\,\geq \,-\varepsilon \}}\,\mathbf{1}_{\{\tilde{g}(t,\tilde{\mathbf{z}}(t,\mathbf{p}))\,\leq \,\varepsilon \}}\,\mathbf{1}_{\{g(t,\mathbf{z}(t,\mathbf{p}))\,\leq \,0\}}\,\rho (\mathbf{p})\,\mathrm{d}\mathbf{p}. \qquad (16) \]
Note that in \(\tilde{P}_{\varepsilon }(t)\) one deals with \(\tilde{g}(t,\tilde{\mathbf{z}}(t,\mathbf{p})) \leq -\varepsilon\). In \(\tilde{Q}_{\varepsilon }\) the first two factors involve \(\vert \tilde{g}(t,\tilde{\mathbf{z}}(t,\mathbf{p}))\vert \leq \varepsilon\). The two quantities result in a Failure Probability \(\tilde{P}_{F}(t) =\tilde{ P}_{\varepsilon }(t) +\tilde{ Q}_{\varepsilon }(t)\). The impact of the last factor in \(\tilde{Q}_{\varepsilon }\) is that one additionally evaluates the exact g(t, z(t, p)) (or one approximates it more accurately) when its approximation \(\tilde{g}(t,\tilde{\mathbf{z}}(t,\mathbf{p}))\) is small.
Now let
\[ \tilde{D}_{\varepsilon }(t)\; =\;\int _{Q}\mathbf{1}_{\{\vert \tilde{g}(t,\tilde{\mathbf{z}}(t,\mathbf{p}))-g(t,\mathbf{z}(t,\mathbf{p}))\vert \,\geq \,\varepsilon \}}\,\rho (\mathbf{p})\,\mathrm{d}\mathbf{p} \]
be the combined quality of both approximations. One should be able to make this small. Note that \(\vert \tilde{g}(\tilde{\mathbf{z}}(t,\mathbf{p})) - g(\mathbf{z}(t,\mathbf{p}))\vert < \vert \tilde{g}(\tilde{\mathbf{z}}(t,\mathbf{p})) -\tilde{ g}(\mathbf{z}(t,\mathbf{p}))\vert + \vert \tilde{g}(\mathbf{z}(t,\mathbf{p})) - g(\mathbf{z}(t,\mathbf{p}))\vert \). The first term needs Lipschitz continuity of \(\tilde{g}\) to deal with \(\tilde{\mathbf{z}}(t,\mathbf{p}) -\mathbf{z}(t,\mathbf{p})\); the second one deals with \(\vert \tilde{g} - g\vert \). By this, and exploiting the finite probability measure, one may assume, for instance, that \(\tilde{D}_{\varepsilon }(t) <\delta P_{F}(t)\), for 0 < δ < 1.
One can prove (similar to [18], Theorem 4.1) that
\[ \vert P_{F}(t) -\tilde{ P}_{F}(t)\vert \;\leq \;\tilde{D}_{\varepsilon }(t). \]
One may order the (remaining) approximative samples \(\tilde{{g}}^{(i)}(t) =\tilde{ g}(t,\tilde{\mathbf{z}}(t,{\mathbf{p}}^{i}))\) according to \(\vert \tilde{{g}}^{(i)}(t)\vert \), replace the smallest ones by \({g}^{(i)}(t) = g(t,\tilde{\mathbf{z}}(t,{\mathbf{p}}^{i}))\), and reduce the set of (remaining) approximative samples accordingly. One can stop as soon as the Failure Probability hardly changes anymore [18]. This procedure resembles algorithmic steps in [20].
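The ranking-and-re-evaluation loop can be sketched as follows; the function and parameter names are illustrative, and [18] gives the precise algorithm and stopping analysis.

```python
import numpy as np

def refine_failure_prob(p_samples, g_tilde, g_exact, batch=100, tol=1e-4):
    """Replace the most ambiguous surrogate values (smallest |g_tilde|) by
    exact (or more accurate) evaluations, until P_F stabilizes."""
    order = np.argsort(np.abs(g_tilde))   # most ambiguous samples first
    g = g_tilde.copy()
    prev = np.mean(g <= 0.0)              # initial surrogate-based P_F
    for start in range(0, len(order), batch):
        idx = order[start:start + batch]
        g[idx] = g_exact(p_samples[idx])  # expensive evaluations, batched
        cur = np.mean(g <= 0.0)
        if abs(cur - prev) <= tol:        # stop if P_F hardly changes
            return cur
        prev = cur
    return prev
```

Because only samples near the failure boundary can flip sign, most exact evaluations are avoided while the estimate converges to the exact-criterion value.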
4 Strategies for Efficient Stochastic Collocation
Stochastic Collocation implies that the problem has to be solved for a sequence (or sweep) of parameter settings \({\mathbf{p}}^{0},\ldots,{\mathbf{p}}^{K}\). One can obtain some benefit by exploiting knowledge derived in earlier runs.
In [16], the parameters \({\mathbf{p}}^{k}\) are grouped in blocks and in each block one representative simulation is made, say for \({\mathbf{p}}^{k_{0}}\). For this \({\mathbf{p}}^{k_{0}}\) the solution \(\mathbf{x}(t,{\mathbf{p}}^{k_{0}})\) is calculated at some higher accuracy (for instance, with a smaller stepsize \(h_{0}\)). The solution is used to estimate the truncation error of the time integration for \(\mathbf{x}(t,{\mathbf{p}}^{k})\). One determines the residue \(\mathbf{r}(t,\mathbf{x}(t,{\mathbf{p}}^{k_{0}}))\) for \(\mathbf{x}(t,{\mathbf{p}}^{k_{0}})\) using the same integration method as intended to be used for \(\mathbf{x}(t,{\mathbf{p}}^{k})\), with stepsize h, but using \({\mathbf{p}}^{k_{0}}\) in all expressions. By this the discretization error for \(\mathbf{x}(t,{\mathbf{p}}^{k})\) is estimated automatically when \({\mathbf{p}}^{k_{0}}\) is close to \({\mathbf{p}}^{k}\). By subtracting \(\mathbf{r}(t,\mathbf{x}(t,{\mathbf{p}}^{k_{0}}))\) from the equations for \(\mathbf{x}(t,{\mathbf{p}}^{k})\), one may expect a larger stepsize h to be admissible than without this modification. Note that
\[ \frac{\mathrm{d}\,\mathbf{r}(t,\mathbf{x}(t,\mathbf{p}))}{\mathrm{d}\mathbf{p}}\; =\; \frac{\partial \mathbf{r}}{\partial \mathbf{x}}\cdot \frac{\partial \mathbf{x}}{\partial \mathbf{p}}. \]
Here the first factor equals the last Jacobian. The second factor is the sensitivity matrix of the solution with respect to the parameter variation [13, 14]; it can be estimated from its value at \({\mathbf{p}}^{k_{0}}\). When the usual error control is too pessimistic, this approach may be an alternative.
In [25] the solution for \({\mathbf{p}}^{k_{0}}\) is likewise calculated first, at the next time discretization point, and then used as a predictor for the time step integration of the problems for the other \({\mathbf{p}}^{k}\). Here as well the prediction can be improved by additional sensitivity estimates. If parameters are values of capacitors, inductors or resistors, they are bound to the model. Then hierarchy techniques [11] can be exploited to bypass certain parts of the circuit during the Newton iteration. Of course, the time step integration for the other \({\mathbf{p}}^{k}\) can be done in parallel.
In [2, 20] one builds an estimator by a moderately-sized gPC approximation
\[ \tilde{{\mathbf{x}}}^{{m}^{{\prime}} }(t,\mathbf{p})\; =\;\sum _{i=0}^{{m}^{{\prime}} }\tilde{\mathbf{v}}_{i}(t)\,\phi _{i}(\mathbf{p}). \qquad (17) \]
As before, the best \(\tilde{\mathbf{v}}_{i}(t)\) satisfies \(\tilde{\mathbf{v}}_{i}(t) =\int _{Q}\mathbf{x}(t,\mathbf{p})\,\phi _{i}(\mathbf{p})\,\rho (\mathbf{p})\,\mathrm{d}\mathbf{p}\). We can approximate the coefficients by a Least Squares procedure at each time t,
\[ \min _{\mathbf{y}}\Vert \mathbf{M}\,\mathbf{y} -\mathbf{b}\Vert _{2},\qquad M_{ki} = \sqrt{w_{k}}\,\phi _{i}({\mathbf{p}}^{k}),\quad \mathbf{b}_{k} = \sqrt{w_{k}}\,\mathbf{x}(t,{\mathbf{p}}^{k}), \qquad (18) \]
with solution \(\mathbf{y} ={ \mathbf{M}}^{+}\mathbf{b}\) collecting the \(\tilde{\mathbf{v}}_{i}(t)\).
In [2, 20] one applies the Least Squares procedure (18) not to the final solution values \(\mathbf{x}(t,{\mathbf{p}}^{0})\), …, \(\mathbf{x}(t,{\mathbf{p}}^{K})\), but after splitting the sequence into already determined values \(\mathbf{x}(t,{\mathbf{p}}^{0})\), …, \(\mathbf{x}(t,{\mathbf{p}}^{\tilde{K}})\), and approximated values \(\tilde{\mathbf{x}}(t,{\mathbf{p}}^{\tilde{K}+1})\), …, \(\tilde{\mathbf{x}}(t,{\mathbf{p}}^{K})\). Clearly the error Δy is determined by \(\varDelta \mathbf{y} ={ \mathbf{M}}^{+}\varDelta \mathbf{b}\), where Δb comes from the errors in the \(\mathbf{z}_{k} \equiv \sqrt{w_{k}}\tilde{\mathbf{x}}(t,{\mathbf{p}}^{k})\), \(k =\tilde{ K} + 1,\ldots,K\). One can sort the \(\mathbf{z}_{k}\) and update the \(\tilde{\mathbf{x}}(t,{\mathbf{p}}^{k})\) to final solution values \(\mathbf{x}(t,{\mathbf{p}}^{k})\) for the \(\varDelta \tilde{K}\) largest \(\mathbf{z}_{k}\). This allows one to update \(\tilde{{\mathbf{x}}}^{{m}^{{\prime}} }\) iteratively, and the approximated values \(\tilde{\mathbf{x}}(t,{\mathbf{p}}^{\tilde{K}+1})\), …, \(\tilde{\mathbf{x}}(t,{\mathbf{p}}^{K})\) may come from the previous \(\tilde{{\mathbf{x}}}^{{m}^{{\prime}} }\). Interpreting the values \(\mathbf{x}(t,{\mathbf{p}}^{0})\), …, \(\mathbf{x}(t,{\mathbf{p}}^{\tilde{K}})\), \(\tilde{\mathbf{x}}(t,{\mathbf{p}}^{\tilde{K}+1})\), …, \(\tilde{\mathbf{x}}(t,{\mathbf{p}}^{K})\) as coming from a function \(\hat{\mathbf{x}}(t,\mathbf{p})\), the mean, variance and sensitivity of \(\hat{\mathbf{x}}(t,\mathbf{p})\) simply follow from the gPC expansion. The mean and variance can be used to check their change after an update. Note that here one can exploit the average sensitivity as well, which also follows directly from the gPC expansion. In this way one can ensure that dominant parameters are included first. We finally note that the approximations may come from (parameterized) Model Order Reduction.
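The weighted least-squares fit itself can be sketched as follows (Legendre case with uniform density; rows are scaled by \(\sqrt{w_k}\) as in the matrix M above, and the pseudoinverse solve is done via `lstsq`):

```python
import numpy as np
from numpy.polynomial import legendre

def phi(i, p):
    # Orthonormal Legendre basis w.r.t. rho(p) = 1/2 on [-1, 1].
    c = np.zeros(i + 1); c[i] = 1.0
    return legendre.legval(p, c) * np.sqrt(2 * i + 1)

def gpc_least_squares(p_nodes, w, x_vals, m):
    """Fit gPC coefficients v~_0, ..., v~_m from (approximate) solution
    values x_vals at collocation points, weighted by sqrt(w_k)."""
    sw = np.sqrt(w)
    M = np.column_stack([sw * phi(i, p_nodes) for i in range(m + 1)])
    b = sw * x_vals
    y, *_ = np.linalg.lstsq(M, b, rcond=None)   # y = M^+ b
    return y
```

Re-running the fit after each batch of updated solution values is cheap, since only the right-hand side b changes.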
5 Parameterized Model Order Reduction
Model Order Reduction (MOR) techniques can be applied to reduce the size of the deterministic problems that have to be simulated using SC. For good general introductions we refer to [1, 5, 23]. For parameterized MOR we refer to [3, 9, 10, 24].
We consider a linear system of circuit equations with capacitance matrix C = C(p), conductance matrix G = G(p) and source u(t) = u(t, p) that involve parameters p,
\[ \mathbf{C}(\mathbf{p})\,\frac{\mathrm{d}\mathbf{x}(t,\mathbf{p})}{\mathrm{d}t} + \mathbf{G}(\mathbf{p})\,\mathbf{x}(t,\mathbf{p})\; =\; \mathbf{B}\,\mathbf{u}(t,\mathbf{p}),\qquad \mathbf{y}(t,\mathbf{p})\; =\;{ \mathbf{L}}^{T}\mathbf{x}(t,\mathbf{p}). \qquad (19) \]
Here y(t, p) is some output result. This separation of p and x in the expressions in each equation of (19) is quite common in circuit simulation (capacitors, inductors and resistors depend on p), but for more general expressions (like when using controlled sources) this may require some organization in the evaluation tree of the expression handler. In [10] a parameterized system in the frequency domain is considered in which the coefficient matrices have been expanded. We consider, however, the nonexpanded form. Let s be the (angular) frequency. It is assumed that a set \({\mathbf{p}}^{1},{\mathbf{p}}^{2},\ldots,{\mathbf{p}}^{K}\) is given in advance, together with frequencies \(s_{1},s_{2},\ldots,s_{K}\). In our case the \({\mathbf{p}}^{1},{\mathbf{p}}^{2},\ldots,{\mathbf{p}}^{K}\) can come from quadrature points in SC. Let \({\varPsi }^{k} = (s_{k},{\mathbf{p}}^{k})\). Furthermore, let \(\mathbf{A} = s\mathbf{C}(\mathbf{p}) + \mathbf{G}(\mathbf{p})\) and AX = B, where X is the Laplace Transform of x. Similarly, let \(\mathbf{A}_{k} = \mathbf{A}{(\varPsi }^{k}) = s_{k}\mathbf{C}({\mathbf{p}}^{k}) + \mathbf{G}({\mathbf{p}}^{k})\) and A k X k = B.
A projection matrix V (with orthonormal columns v i ) is searched for such that \(\mathbf{X}(s,\mathbf{p}) \approx \bar{\mathbf{X}}(s,\mathbf{p}) \equiv \mathbf{V}\hat{\mathbf{X}}(s,\mathbf{p}) \equiv \sum _{i=1}^{K^{\prime}}\alpha _{i}(s,\mathbf{p})\mathbf{v}_{i}\).
We assume that we have already found some part of the (orthonormal) basis, V = (v 1, …, v k ). Then for any \({\varPsi }^{j}\) that was not selected before to extend the basis, the actual error is formally given by \({\mathbf{E}}^{j} = \mathbf{X}{(\varPsi }^{j}) -\sum _{i=1}^{k}\alpha _{i}{(\varPsi }^{j})\mathbf{v}_{i}\), and thus for the residue we have \({\mathbf{R}}^{j} = \mathbf{A}_{j}{\mathbf{E}}^{j} = \mathbf{B} -\sum _{i=1}^{k}\alpha _{i}{(\varPsi }^{j})\mathbf{A}_{j}\mathbf{v}_{i}\). Note that the residues deal with B and with x, and not with the effect on y. For UQ one may consider a two-sided projection here, which will bring in the effect of the quadrature weights. The method of [10] was used in [6] (using expansions of the matrices in moments of p). In [6] the parameter variation in C and G came from parameterized layout extraction of RC circuits. In the extraction it was assumed that B, as well as the fill-in patterns of C(p) and of G(p), did not depend on p. When B also becomes dependent on p, one should determine a basis for the range of B(p). In fact one then needs MOR for multi-input, multi-output systems [4, 15, 27].
The selection of the next parameter introduces a notion of “dominancy” from an algorithmic point of view: this parameter most urgently needs an extension of the Krylov subspace. Investing in this parameter will automatically reduce the work for the other parameters (several may even drop out of the list because of zero residues).
We finally describe two ideas to include sensitivity in parameterized MOR. One can calculate the sensitivities of the solution of the reduced system by adjoint techniques as described in [13, 14]. Alternatively one can exploit the sensitivity indication based on the gPC expansion of the combined list of exact evaluations and outcomes of approximations as mentioned in Sect. 4.
If first-order sensitivity matrices are available, i.e., \(\mathbf{C}(\mathbf{p}) = \mathbf{C}_{0}(\mathbf{p}_{0}) +{ \mathbf{C}}^{{\prime}}(\mathbf{p}_{0})\mathbf{p}\) and \(\mathbf{G}(\mathbf{p}) = \mathbf{G}_{0}(\mathbf{p}_{0}) +{ \mathbf{G}}^{{\prime}}(\mathbf{p}_{0})\mathbf{p}\), one can apply a Generalized Singular Value Decomposition [12] to both pairs \((\mathbf{C}_{0}^{T}(\mathbf{p}_{0}),{[{\mathbf{C}}^{{\prime}}]}^{T}(\mathbf{p}_{0}))\) and \((\mathbf{G}_{0}^{T}(\mathbf{p}_{0}),{[{\mathbf{G}}^{{\prime}}]}^{T}(\mathbf{p}_{0}))\). In [19] this was applied in MOR for linear coupled systems. The low-rank approximations of \({\mathbf{C}}^{{\prime}}(\mathbf{p}_{0})\) and \({\mathbf{G}}^{{\prime}}(\mathbf{p}_{0})\) provide a way to extend the basis for the columns of the source matrix B. Note that by this one automatically needs MOR methods that can deal with many terminals [4, 15, 27].
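The augmentation step can be illustrated with a plain truncated SVD standing in for the Generalized SVD of [12] (the GSVD couples the pair of matrices; the sketch below only shows how a low-rank factor widens B):

```python
import numpy as np

def lowrank_augment(B, C_prime, tol=1e-8):
    """Illustrative sketch: extend the input matrix B by the dominant
    left singular vectors of a sensitivity matrix C'(p_0)."""
    U, s, Vt = np.linalg.svd(C_prime)
    r = int(np.sum(s > tol * s[0]))        # numerical rank of C'(p_0)
    return np.column_stack([B, U[:, :r]])  # widened source/terminal matrix
```

The number of added columns equals the numerical rank, which is why a many-terminal MOR method becomes necessary as soon as the sensitivity matrices are not of very low rank.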
6 Conclusion
We have described strategies to efficiently determine the coefficients in generalized polynomial chaos expansions. When these are determined by Stochastic Collocation and numerical quadrature, a large number of deterministic simulations results. Parameterized Model Order Reduction is a natural choice to reduce the problem sizes. For selecting the next parameter in the subspace extension, different options have been described: residue size and sensitivity-based criteria. For UQ, however, one should involve the influence of the quadrature weights, and one may check the contribution to global statistical quantities. A related algorithm can be used for Failure Probabilities.
References
Antoulas, A.C.: Approximation of Large-Scale Dynamical Systems. SIAM Publications, Philadelphia (2005)
Augustin, F., Gilg, A., Paffrath, M., Rentrop, P., Wever, U.: Polynomial chaos for the approximation of uncertainties: chances and limits. Eur. J. Appl. Math. 19, 149–190 (2008)
Baur, U., Beattie, C., Benner, P., Gugercin, S.: Interpolatory projection methods for parameterized model reduction. SIAM J. Comput. 33, 2489–2518 (2011)
Benner, P., Schneider, A.: Model reduction for linear descriptor systems with many ports. In: Günther, M., Bartel, A., Brunk, M., Schöps, S., Striebel, M. (eds.) Progress in Industrial Mathematics at ECMI 2010. Mathematics in Industry, vol. 17, pp. 137–143. Springer, Berlin (2012)
Benner, P., Hinze, M., ter Maten, E.J.W. (eds.): Model Reduction for Circuit Simulation. Lecture Notes in Electrical Engineering, vol. 74. Springer, Berlin (2011)
Bi, Y., van der Kolk, K.-J., Fernández Villena, J., Silveira, L.M., van der Meijs, N.: Fast statistical analysis of RC nets subject to manufacturing variabilities. In: Proceedings of DATE 2011, Grenoble, 14–18 March 2011
Doorn, T.S., Croon, J.A., ter Maten, E.J.W., Di Bucchianico, A.: A yield centric statistical design method for optimization of the SRAM active column. In: Proceedings of the IEEE ESSCIRC’09, 35th European Solid-State Circuits Conference, Athens, pp. 352–355 (2009)
Ernst, O.G., Mugler, A., Starkloff, H.-J., Ullmann, E.: On the convergence of generalized polynomial chaos expansions. ESAIM Math. Model. Numer. Anal. 46, 317–339 (2012)
Feng, L., Benner, P.: A robust algorithm for parametric model order reduction. PAMM Proc. Appl. Math. Mech. 7, 1021501–1021502 (2007)
Fernández Villena, J., Silveira, L.M.: Multi-dimensional automatic sampling schemes for multi-point modeling methodologies. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 30(8), 1141–1151 (2011)
Fijnvandraat, J.G., Houben, S.H.M.J., ter Maten, E.J.W., Peters, J.M.F.: Time domain analog circuit simulation. J. Comput. Appl. Math. 185, 441–459 (2006)
Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore (1996)
Ilievski, Z.: Model order reduction and sensitivity analysis. Ph.D. thesis, TU Eindhoven (2010). http://alexandria.tue.nl/extra2/201010770.pdf
Ilievski, Z., Xu, H., Verhoeven, A., ter Maten, E.J.W., Schilders, W.H.A., Mattheij, R.M.M.: Adjoint transient sensitivity analysis in circuit simulation. In: Ciuprina, G., Ioan, D. (eds.) Scientific Computing in Electrical Engineering SCEE 2006. Mathematics in Industry, vol. 11, pp. 183–189. Springer, Berlin (2007)
Ionutiu, R.: Model order reduction for multi-terminal systems - with applications to circuit simulation. Ph.D. thesis, TU Eindhoven (2011). http://alexandria.tue.nl/extra2/716352.pdf
Jansen, L., Tischendorf, C.: Effective numerical computation of parameter dependent problems. In: Michielsen, B.L., Poirier, J.-R. (eds.) Scientific Computing in Electrical Engineering SCEE 2010. Mathematics in Industry, vol. 16, pp. 49–57. Springer, Berlin (2012)
Le Maître, O.P., Knio, O.M.: Spectral Methods for Uncertainty Quantification, with Applications to Computational Fluid Dynamics. Springer, Science+Business Media B.V., Dordrecht (2010)
Li, J., Xiu, D.: Evaluation of failure probability via surrogate models. J. Comput. Phys. 229, 8966–8980 (2010)
Lutowska, A.: Model order reduction for coupled systems using low-rank approximations. Ph.D. thesis, TU Eindhoven (2012). http://alexandria.tue.nl/extra2/729804.pdf
Paffrath, M., Wever, U.: Adapted polynomial chaos expansion for failure detection. J. Comput. Phys. 226, 263–281 (2007)
Pulch, R.: Polynomial chaos for the computation of failure probabilities in periodic problems. In: Roos, J., Costa, L.R.J. (eds.) Scientific Computing in Electrical Engineering SCEE 2008. Mathematics in Industry, vol. 14, pp. 191–198. Springer, Berlin (2010)
Pulch, R.: Polynomial chaos for linear differential algebraic equations with random parameters. Int. J. Uncertain. Quantif. 1(3), 223–240 (2011)
Schilders, W.H.A., van der Vorst, H.A., Rommes, J. (eds.): Model Order Reduction: Theory, Research Aspects and Applications. Mathematics in Industry, vol. 13. Springer, Berlin (2008)
Stavrakakis, K., Wittig, T., Ackermann, W., Weiland, T.: Parametric model order reduction by neighbouring subspaces. In: Michielsen, B., Poirier, J.-R. (eds.) Scientific Computing in Electrical Engineering SCEE 2010. Series Mathematics in Industry, vol. 16, pp. 443–451. Springer, Berlin (2012)
Tasić, B., Dohmen, J.J., ter Maten, E.J.W., Beelen, T.G.J., Schilders, W.H.A., de Vries, A., van Beurden, M.: Robust DC and efficient time-domain fast fault simulation. COMPEL (2014, accepted)
ter Maten, E.J.W., Wittich, O., Di Bucchianico, A., Doorn, T.S., Beelen, T.G.J.: Importance sampling for determining SRAM yield and optimization with statistical constraint. In: Michielsen, B.L., Poirier, J.-R. (eds.) Scientific Computing in Electrical Engineering SCEE 2010. Mathematics in Industry, vol. 16, pp. 39–48. Springer, Berlin (2012)
Ugryumova, M.V.: Applications of model order reduction for IC modeling. Ph.D. thesis, TU Eindhoven (2011). http://alexandria.tue.nl/extra2/711015.pdf
Xiu, D.: Numerical Methods for Stochastic Computations - A Spectral Method Approach. Princeton University Press, Princeton (2010)
Acknowledgements
The first and last authors did part of this work within the project ARTEMOS (Ref. 270683-2), http://www.artmeos.eu/ (ENIAC Joint Undertaking).
© 2014 Springer International Publishing Switzerland
Maten, E.J.W.t., Pulch, R., Schilders, W.H.A., Janssen, H.H.J.M. (2014). Efficient Calculation of Uncertainty Quantification. In: Fontes, M., Günther, M., Marheineke, N. (eds) Progress in Industrial Mathematics at ECMI 2012. Mathematics in Industry(), vol 19. Springer, Cham. https://doi.org/10.1007/978-3-319-05365-3_50