Abstract
This work constructs a new class of multirate schemes based on the recently developed generalized additive Runge–Kutta (GARK) methods (Sandu and Günther, SIAM J Numer Anal, 53(1):17–42, 2015). Multirate schemes use different step sizes for different components and for different partitions of the right-hand side based on the local activity levels. We show that the new multirate GARK family includes many well-known multirate schemes as special cases. The order conditions theory follows directly from the GARK accuracy theory. Nonlinear stability and monotonicity investigations show that these properties are inherited from the base schemes provided that additional coupling conditions hold.
1 Introduction
Generalized additive Runge–Kutta (GARK) methods were introduced in [11] to solve initial value problems for additively partitioned systems of ordinary differential equations
where the right-hand side \(f: {\mathbb R}^d \rightarrow {\mathbb R}^d\) is split into N different parts with respect to, for example, stiffness, nonlinearity, dynamical behavior, and evaluation cost. Additive partitioning includes the case of component partitioning as follows. The set of indices \(\{1,2,\ldots ,d\}\) that number the solution components \(y^i\) is split into N subsets \(\mathcal{I}^{\{m\}}\) to define
where \(e_i\) denotes the ith unit vector in \({\mathbb R}^d\). A GARK method advances the numerical solution as follows [11]
and is characterized by the extended Butcher tableau
Generalized additive Runge–Kutta methods show excellent stability properties and flexibility to exploit the different behavior of the partitions. In contrast to additive Runge–Kutta schemes introduced in [7], GARK schemes allow for different stage values in the different partitions of f. Note that additive Runge–Kutta schemes are a special case of GARK with \({\mathbf {A}}^{\{m,\ell \}}:={\mathbf {A}}^{\{\ell \}}\) for all \(m,\ell =1,\ldots ,N\).
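As a concrete illustration of the component-partitioning construction above, the following sketch builds the additive parts \(f^{\{m\}}\) from index sets and checks that they sum to the full right-hand side. The three-component ODE and the index sets are invented for illustration, not taken from the paper.

```python
import numpy as np

def f(y):
    # illustrative right-hand side with one fast and two slow components
    return np.array([-100.0 * y[0], -y[1] + y[0], -0.5 * y[2]])

# component partitioning: split the index set {0, 1, 2} into two subsets
I_fast, I_slow = [0], [1, 2]

def f_part(y, idx):
    # f^{m}(y) = sum_{i in I^{m}} e_i * f_i(y): keep only components in idx
    g = np.zeros_like(y)
    g[idx] = f(y)[idx]
    return g

y = np.array([1.0, 2.0, 3.0])
# the additive parts reassemble the full right-hand side
assert np.allclose(f_part(y, I_fast) + f_part(y, I_slow), f(y))
```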
This study develops new multirate schemes in the generalized additive Runge–Kutta framework. The paper is organized as follows. Section 2 introduces the multirate GARK family and discusses their computational effort. The algebraic stability results for GARK schemes are transferred to multirate GARK schemes and order conditions are derived. Section 3 shows that many existing multirate Runge–Kutta schemes can be represented and analyzed in the GARK framework. The generalization of additive Runge–Kutta schemes to multirate versions is considered in Sect. 4. Section 5 discusses absolutely monotonic multirate GARK schemes and shows how to construct such schemes. Finally, conclusions are drawn in Sect. 6.
2 Generalized additive multirate schemes
2.1 Formulation of generalized additive multirate schemes
We consider a two-way partitioned system (1) with one slow component \(\{\mathfrak {s}\}\) and one active (fast) component \(\{\mathfrak {f}\}\). The slow component is solved with a large step H, and the fast one with M small steps \(h=H/M\). Denote by \(\widetilde{y}\) the intermediate solutions computed by the fast micro-steps, starting with \(\widetilde{y}_n:= y_n\). A multirate generalization of (3a, 3b) proceeds as follows. The slow stage values are given by:
The fast micro-steps are:
The full (macro-step) solution is given by:
After eliminating the micro-step solutions \(\widetilde{y}\) from the multirate GARK method (5a, 5b, 5c, 5d) we arrive at the following form.
Definition 1
(Multirate GARK method) One macro-step of a generalized additive multirate Runge–Kutta method with M equal micro-steps reads
The base schemes are Runge–Kutta methods, \((A^{\{\mathfrak {s},\mathfrak {s}\}},b^{\{\mathfrak {s}\}})\) for the slow component and \((A^{\{\mathfrak {f},\mathfrak {f}\}},b^{\{\mathfrak {f}\}})\) for the fast component. The coefficients \(A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}\), \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\) realize the coupling between the two components.
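To make the macro-step/micro-step mechanics concrete, here is a minimal sketch of one macro-step, assuming forward Euler as both base schemes and the simplest possible coupling (the slow tendency frozen at \(y_n\) over the whole macro-step). The test problem and all choices here are illustrative, not a scheme from the paper; the resulting method is only first order.

```python
import numpy as np

def mr_euler_step(y, H, M, f_slow, f_fast):
    # one macro-step: a single slow (Euler) stage frozen at y_n,
    # and M fast micro-steps of size h = H/M in the spirit of (6a)-(6c)
    h = H / M
    fs = f_slow(y)            # slow tendency, evaluated once per macro-step
    for _ in range(M):
        y = y + h * f_fast(y) + h * fs
    return y

# illustrative problem: y' = -10 y (fast part) - 0.1 y (slow part)
f_fast = lambda y: -10.0 * y
f_slow = lambda y: -0.1 * y

y, H, M = np.array([1.0]), 0.01, 10
for _ in range(100):          # integrate to t = 1
    y = mr_euler_step(y, H, M, f_slow, f_fast)
print(float(y[0]))            # close to the exact value exp(-10.1)
```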
The method (6a, 6b, 6c) can be written as a GARK scheme (3a, 3b) over the macro-step H with the fast stage vectors \(Y^{\{\mathfrak {f}\}}:=[Y^{\{\mathfrak {f},1\}}\,^T,\ldots ,Y^{\{\mathfrak {f},M\}}\,^T]^T\). The corresponding Butcher tableau (4) reads
Example 1
(Simple MR GARK) A simple version of (6a, 6b, 6c) uses the same coupling in all micro-steps,
As we will see later, for stability reasons it is necessary in general to introduce additional freedom by using different coupling matrices for the micro-steps.
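The fast-fast block of tableau (7) for this simple coupling can be assembled programmatically. The sketch below reflects one plausible reading of the macro-step structure (micro-step quantities rescaled by 1/M, completed micro-steps entering later ones through the full weights \(1\!\!1\, b^T\)); the assembly is then verified against the internal-consistency row sums.

```python
import numpy as np

def mr_gark_fast_tableau(A, b, M):
    """Assemble the fast-fast block A^{ff} and the weights b^{f} of the
    macro-step tableau for M equal micro-steps of the base scheme (A, b).
    Completed micro-steps enter later ones through full weights 1 b^T;
    the factor 1/M rescales the micro-step h = H/M to the macro-step H."""
    s = len(b)
    Aff = np.zeros((M * s, M * s))
    for lam in range(M):
        for mu in range(lam):
            Aff[lam*s:(lam+1)*s, mu*s:(mu+1)*s] = np.outer(np.ones(s), b)
        Aff[lam*s:(lam+1)*s, lam*s:(lam+1)*s] = A
    return Aff / M, np.tile(b, M) / M

# base scheme: explicit trapezoidal (Heun) method, used for illustration
A = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([0.5, 0.5])
c = A.sum(axis=1)
M = 3

Aff, bf = mr_gark_fast_tableau(A, b, M)
# internal consistency: row sums give the fast abscissae ((lambda-1) + c_i)/M
c_ff = np.concatenate([(lam + c) / M for lam in range(M)])
assert np.allclose(Aff.sum(axis=1), c_ff)
assert np.isclose(bf.sum(), 1.0)
```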
Example 2
(Telescopic MR GARK) Of particular interest are methods (6a, 6b, 6c) which use the same base scheme for both the slow and the fast components,
Such methods can be easily extended to systems with more than two scales by applying them in a telescopic fashion.
2.2 Computational considerations
The general formulation of the method (5a, 5b, 5c, 5d) leads to a system of coupled equations for all the fast and the slow stages, and the resulting computational effort is larger, not smaller, than solving the coupled system with a small step. For an efficient computational process the macro and micro-steps need to stay mostly decoupled.
A very effective approach is to have the slow stages (5a) for \(i=1,\dots , s^{\{\mathfrak {s}\}}\) coupled only with the first fast micro-step,
Equations (9) and (5b) with \(\lambda =1\) are solved together. When both methods are implicit, this first computation has a cost similar to that of one step of the coupled system. The subsequent fast micro-steps (5b) with \(\lambda \ge 2\) are solved independently. The corresponding slow-fast coupling matrix is
When the slow stages are computed in succession, e.g., when the slow method is explicit or diagonally implicit, a more complex approach to decouple the computations is possible. Namely, the slow stages are coupled only with the micro-steps that have been computed already, and vice-versa, the micro-steps are coupled only with the macro-stages whose solutions are already available. The fast and the slow methods proceed side by side. This decoupling can be achieved by choosing
where \(j(\lambda )\) is the number of slow stages that have been computed before the current micro-step, i.e., \(c^{\{\mathfrak {s}\}}_{j(\lambda )} \le (\lambda -1)/M\). The micro-step \(\lambda \) is only coupled to these (known) stages. Similarly,
where the first \(s^{\{\mathfrak {s}\}}-i(\lambda )\) slow stages are computed before the micro-step \(\lambda \), and therefore they are not coupled to the current micro-step. An example of such methods is discussed in detail in Sect. 3.3.
2.3 Nonlinear stability
We consider systems (1) where each of the component functions is dissipative:
with respect to the same scalar product \(\langle \cdot , \cdot \rangle \). For two solutions y(t) and \(\widetilde{y}(t)\) of (1), each starting from a different initial condition, the norm of the solution difference \(\Delta y(t) = \widetilde{y}(t)- y(t)\) is non-increasing, \(\lim _{\varepsilon >0, \varepsilon \rightarrow 0} \left\| \Delta y(t+\varepsilon ) \right\| \le \left\| \Delta y(t) \right\| \).
This section investigates the conditions under which the multirate scheme (6a, 6b, 6c) is nonlinearly stable, i.e. the inequality
holds for any two numerical approximations \(y_{n+1}\) and \(\widetilde{y}_{n+1}\) obtained by applying the scheme to the ODE (1) with (11) and with initial values \(y_n\) and \(\widetilde{y}_n\).
2.3.1 Algebraic stability
Following the stability analysis in [11], a multirate GARK scheme is algebraically stable if the following matrix is non-negative definite
Algebraic stability guarantees unconditional nonlinear stability of the multirate GARK scheme [11]. If the base schemes \((A^{\{\mathfrak {f},\mathfrak {f}\}},b^{\{\mathfrak {f}\}})\) and \((A^{\{\mathfrak {s},\mathfrak {s}\}},b^{\{\mathfrak {s}\}})\) are algebraically stable, one can easily verify that \(\mathbf {P}^{\{\mathfrak {f},\mathfrak {f}\}} \ge 0\) and \(\mathbf {P}^{\{\mathfrak {s},\mathfrak {s}\}} \ge 0\) hold, since
where the matrices \({P}^{\{\mathfrak {s},\mathfrak {s}\}}\) and \(P^{\{\mathfrak {f},\mathfrak {f}\}}\) are defined for both base schemes analogously to (12a, 12b, 12c). The scheme (6a, 6b, 6c) is called stability-decoupled [11] if \(\mathbf {P}^{\{\mathfrak {f},\mathfrak {s}\}}=\mathbf {0}\) (and therefore \(\mathbf {P}^{\{\mathfrak {s},\mathfrak {f}\}}= \mathbf {P}^{\{\mathfrak {f},\mathfrak {s}\}}\,^T=\mathbf {0}\)). In this case the individual stability of the slow and fast schemes is a sufficient condition for the stability of the overall multirate method. We have the following result.
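The algebraic stability of a given base scheme can be tested numerically. A small sketch, using the standard 2-stage RADAU-IA coefficients (order 3); only the stability matrix of a single scheme is checked here, not the full multirate coupling blocks.

```python
import numpy as np

def alg_stability_matrix(A, b):
    """P = B A + A^T B - b b^T with B = diag(b); the scheme (A, b) is
    algebraically stable when b >= 0 and P is positive semi-definite."""
    B = np.diag(b)
    return B @ A + A.T @ B - np.outer(b, b)

# RADAU-IA, s = 2, order 3 (standard coefficients)
A = np.array([[1/4, -1/4], [1/4, 5/12]])
b = np.array([1/4, 3/4])

P = alg_stability_matrix(A, b)
eigs = np.linalg.eigvalsh((P + P.T) / 2)
assert np.all(b > 0) and np.all(eigs >= -1e-14)   # algebraically stable

# forward Euler is not algebraically stable: P = -b b^T = [[-1]]
assert alg_stability_matrix(np.array([[0.0]]), np.array([1.0]))[0, 0] < 0
```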
Theorem 1
(Stability of multirate GARK schemes) Consider a multirate GARK scheme (6a, 6b, 6c) with positive fast weights, \(b^{\{\mathfrak {f}\}}_i > 0\) for \(i=1,\dots ,s^{\{\mathfrak {f}\}}\). The scheme is stability-decoupled iff \(\mathbf {A}^{\{\mathfrak {f},\mathfrak {s}\}}\) is given by
Proof
Equation (13) follows directly from setting (12b) to zero and solving for \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\). \(\square \)
Remark 1
If we use component partitioning, no additional coupling conditions have to be fulfilled as shown in [11], provided that both base schemes \((A^{\{\mathfrak {f},\mathfrak {f}\}},b^{\{\mathfrak {f}\}})\) and \((A^{\{\mathfrak {s},\mathfrak {s}\}},b^{\{\mathfrak {s}\}})\) are algebraically stable.
2.3.2 Conditional stability for coercive problems
Following Higueras [4], consider partitioned systems (1) where each of the component functions is coercive:
Assume that there exists an \(r \ge 0\) such that the following matrix is positive definite
The next result extends the one in [11].
Theorem 2
(Conditional stability of multirate GARK methods) Consider a partitioned system (1) with coercive component functions (14) solved by a multirate GARK method, and assume that (15) holds. The solution is nonlinearly stable under the step size restriction
If the GARK method is stability decoupled then the weight r in (15) ensures stability of the slow component for \(H \le -2\mu /r\), and of the fast component under the step restriction \(h \le -2\mu /(r M)\). The multirate GARK method imposes no additional step size restrictions for conditional stability.
2.4 Order conditions
As the multirate method (6a, 6b, 6c) is a particular instance of a generalized additive Runge–Kutta scheme (3a, 3b), the order conditions follow directly from the derivation in [11]. The order conditions for the multirate GARK methods (6a, 6b, 6c) are given in Tables 1 and 2. Here \(I\) denotes the identity matrix in \({\mathbb R}^{s^{\{\mathfrak {s}\}}\times s^{\{\mathfrak {s}\}}}\) or \({\mathbb R}^{s^{\{\mathfrak {f}\}}\times s^{\{\mathfrak {f}\}}}\), respectively, and \(1\!\!1=(1,\ldots ,1)^T\) the vector of ones in \({\mathbb R}^{s^{\{\mathfrak {s}\}}}\) or \({\mathbb R}^{s^{\{\mathfrak {f}\}}}\), respectively.
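For a base scheme, the classical conditions up to order three can be verified numerically; the multirate coupling conditions of Tables 1 and 2 are contractions of the same kind involving the coupling matrices. A sketch with the standard 2-stage RADAU-IIA coefficients:

```python
import numpy as np

# Classical order conditions up to order three for a base scheme,
# here the 2-stage RADAU-IIA method (order 3, standard coefficients).
A = np.array([[5/12, -1/12], [3/4, 1/4]])
b = np.array([3/4, 1/4])
c = A @ np.ones(2)                     # internal consistency: c = A*1

assert np.isclose(b.sum(), 1.0)        # order 1: b^T 1 = 1
assert np.isclose(b @ c, 1/2)          # order 2: b^T c = 1/2
assert np.isclose(b @ c**2, 1/3)       # order 3: b^T c^2 = 1/3
assert np.isclose(b @ (A @ c), 1/6)    # order 3: b^T A c = 1/6
```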
2.4.1 Simplifying assumptions
Consider base schemes \((A^{\{\mathfrak {f},\mathfrak {f}\}},b^{\{\mathfrak {f}\}})\) and \((A^{\{\mathfrak {s},\mathfrak {s}\}},b^{\{\mathfrak {s}\}})\) of order three or higher. The multirate order conditions simplify considerably if the following internal consistency conditions [11] hold
or, in equivalent form
If (16a, 16b) hold then all order two conditions and most of the order three conditions are automatically fulfilled. The only remaining order three conditions are
or, in equivalent form,
When only the first fast microstep is coupled to the slow part (10) the second simplifying condition (16b) becomes
The first simplifying condition (16a) can be fulfilled by setting
with
which transforms the last order three coupling condition (17b) into
2.4.2 Additive partitioning
We now consider the case of additive partitioning. If the base methods \((A^{\{\mathfrak {s},\mathfrak {s}\}},b^{\{\mathfrak {s}\}})\) and \((A^{\{\mathfrak {f},\mathfrak {f}\}},b^{\{\mathfrak {f}\}})\) are algebraically stable then (13) ensures the nonlinear stability of the overall method. However, the stability conditions (13) cannot be fulfilled when the simplifying conditions (16a, 16b) hold.
Theorem 3
(Internally consistent multirate GARK schemes are not stability decoupled) When only the first fast microstep is coupled to the slow part (10), the stability conditions (13) are not compatible with the first simplifying condition (16a) for multirate GARK schemes with \(M>1\).
Proof
Assume that the multirate GARK scheme fulfills the first simplifying conditions (16a, 16b) and the stability decoupling condition (13) for all \(1 \le \lambda \le M\):
Eliminating \({A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}}\) leads to
When only the first fast microstep is coupled to the slow part (10)
yielding a contradiction for \(M > 1\). \(\square \)
Consider the case where the base schemes are of order three or higher and the stability decoupling conditions (13) hold. The coefficients of a stability decoupled multirate GARK scheme have to fulfill the following nine independent order conditions:
where
Example 3
Consider the case (8) where the same order three scheme (A, b) is used for both the fast and the slow components. To easily construct schemes with an order higher than one we set \(A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}=0\) for \(\lambda =2,\ldots ,M\). For \(s=2\) the only second order condition (22c) leads to
assuming \(b_1 \ne 0\) and since \(b_2 \ne 0\) for a method of order two. If the underlying scheme is explicit, this choice preserves its explicit structure. Note that the base method does not depend on M—only the coefficients of the coupling matrices \(A^{\{\mathfrak {s},\mathfrak {f}\}}\) and D depend on M.
2.4.3 Component partitioning
For component partitioning the stability of the fast and slow base schemes ensures the overall stability, and no coupling conditions are needed. Consequently, there are additional degrees of freedom for choosing \(\mathbf {A^{\{\mathfrak {f},\mathfrak {s}\}}}\). The general order conditions for this case have been given in Tables 1 and 2.
Example 4
(First fast step coupling) We construct a multirate scheme from two arbitrary order three base schemes \((b^{\{\mathfrak {f}\}},A^{\{\mathfrak {f},\mathfrak {f}\}})\) and \((b^{\{\mathfrak {s}\}},A^{\{\mathfrak {s},\mathfrak {s}\}})\) by coupling only the first active micro-step to the slow part and keeping the flexibility in the coupling matrices \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\). The order two coupling conditions are
The order three conditions read
The only degrees of freedom to fulfill these conditions are the parameters of the coupling coefficient matrices \(A^{\{\mathfrak {s},\mathfrak {f},1\}}\) and \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\) (\(\lambda =1,\ldots ,M\)).
3 Traditional multirate Runge–Kutta methods formulated in the GARK framework
In this section we discuss several important multirate Runge–Kutta methods proposed in the literature, and show how they can be represented and analyzed in the GARK framework.
3.1 Kvaerno–Rentrop methods
The mRK class of multirate Runge–Kutta methods proposed by Kvaerno and Rentrop [10] can be formulated in the GARK framework. Kvaerno and Rentrop obtain order conditions that are nearly independent of M by making the following choices of coefficients.
-
(a)
The mRK schemes are based on coupling only the first fast microstep to the slow part, which, in this paper’s notation, reads
$$\begin{aligned} A^{\{\mathfrak {s},\mathfrak {f}\}}:=A^{\{\mathfrak {s},\mathfrak {f},1\}}, \quad A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}=0, \quad \lambda =2,\ldots ,M. \end{aligned}$$ -
(b)
The slow to fast coupling is
$$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda +1\}}:= & {} A^{\{\mathfrak {f},\mathfrak {s}\}} + F(\lambda ) \\ F(\lambda )= & {} 1\!\!1^{\{\mathfrak {f},\mathfrak {s}\}}\, \begin{bmatrix} \eta _1(\lambda ) \dots \eta _{s^{\{\mathfrak {f},\mathfrak {s}\}}}(\lambda ) \end{bmatrix}, \quad \lambda =0,\ldots ,M-1, \end{aligned}$$i.e., \(F_{i,j}(\lambda )=\eta _j(\lambda )\). The scalar functions \(\eta _j(\lambda )\) fulfill the condition
$$\begin{aligned} \sum _{j=1}^{s^{\{\mathfrak {f},\mathfrak {s}\}}} \eta _j(\lambda ) = \lambda \quad \Leftrightarrow \quad F(\lambda )\, 1\!\!1= \lambda \, 1\!\!1. \end{aligned}$$(25) -
(c)
In [10] the matrix \(A^{\{\mathfrak {s},\mathfrak {f}\}}\) is scaled by M, and the matrix \(A^{\{\mathfrak {f},\mathfrak {s}\}}\) is scaled by 1/M, i.e., the function evaluations of the active part are always performed with the micro-step size, and those of the slow part with the macro-step size.
Note that this choice corresponds to the simplifying conditions (16a, 16b) with the special choice (18b). With the notation
the corresponding GARK order conditions are given in Table 3.
Choose two order three schemes \((b^{\{\mathfrak {f}\}},A^{\{\mathfrak {f},\mathfrak {f}\}})\) and \((b^{\{\mathfrak {s}\}},A^{\{\mathfrak {s},\mathfrak {s}\}})\). To obtain a multirate method of order three the free parameters \(A^{\{\mathfrak {f},\mathfrak {s}\}}\), \(A^{\{\mathfrak {s},\mathfrak {f}\}}\), and \(F(\lambda )\) have to fulfill the two remaining order three coupling conditions, together with the three simplifying conditions,
Note that the additional condition
imposed by Kvaerno and Rentrop [10], which transforms the second condition (26b) into
ensures that the active solution has order three at all microsteps.
Example 5
(A multirate GARK scheme of order three with only two stages) We are now interested in constructing schemes of order three with only two stages. Note that the overall scheme is stable whenever the base schemes are stable, owing to the componentwise partitioning. We use the simplifying conditions (26c) and (26d) together with
If we use an order three base scheme (b, A), the remaining order conditions (26a, 26b, 26c, 26d) for the free parameters \(\widetilde{A}\) and \(F(\lambda )\) read
The first two conditions coincide if we set
When \(c_{2} \ne c_{1}\) the choice
fulfills this additional condition and condition (27d) at the same time. The remaining conditions (27a) and (27c) can be fulfilled, for example, by setting
For the RADAU-IA scheme (\(p=3,s=2\)) we obtain
and for RADAU-IIA (\(p=3,s=2\))
3.2 Dense output coupling
The use of dense output interpolation for coupling the slow and fast components was developed by Savcenco, Hundsdorfer, and co-workers in the context of Rosenbrock methods [12–16]. This approach can be immediately extended to Runge–Kutta methods, and the overall scheme can be formulated in the multirate GARK framework.
For a traditional Runge–Kutta method the dense output provides highly accurate approximations of the solution at intermediate points
or highly accurate approximations of the function values at intermediate points
The slow terms in the micro-steps (5b) can be viewed as approximations of the function values at the micro-steps
Consequently, using dense output of function values (29) leads to the standard multirate GARK approach with the coupling given by the dense output coefficients
Alternatively, one can use the dense solution values (28) in the micro-steps (5b)
where
The dense output of the fast variable can be applied either for the current micro-step
or for the previous micro-step, i.e., in extrapolation mode
where the dense output coefficients \(b^{\{\mathfrak {s}\}}(\lambda )\), \(b^{\{\mathfrak {f}\}}(\lambda )\) are appropriately redefined.
The solution interpolation approach (30) can be cast in the GARK framework by adding the additional slow stage values \(Y_i^{\{\mathfrak {s},\lambda \}}\), with no contribution to the output (\(b_i^{\{\mathfrak {s},\lambda \}}=0\)). This is less convenient for analysis, however, as the number of slow stages becomes equal to the number of fast stages.
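As a small worked example of dense-output coefficients \(b_j(\theta )\) in the sense of (28)–(29), the sketch below uses the collocation form \(b_j(\theta )=\int _0^\theta \ell _j(s)\,ds\), where \(\ell _j\) are the Lagrange basis polynomials at the abscissae; this is a standard construction for collocation methods, worked out here for the 2-stage RADAU-IIA abscissae \(c=(1/3,1)\).

```python
import numpy as np

# Dense-output weights for the 2-stage RADAU-IIA collocation scheme:
# b_j(theta) = int_0^theta ell_j(s) ds with Lagrange basis on c = [1/3, 1].
# Such continuous weights can populate the coupling matrices in (30).
def b_dense(theta):
    b1 = 1.5 * (theta - theta**2 / 2)     # integral of ell_1(s) = 3(1 - s)/2
    b2 = (3 * theta**2 - 2 * theta) / 4   # integral of ell_2(s) = (3s - 1)/2
    return np.array([b1, b2])

assert np.allclose(b_dense(1.0), [3/4, 1/4])        # recovers the weights b
for theta in np.linspace(0.0, 1.0, 11):
    assert np.isclose(b_dense(theta).sum(), theta)  # consistency: sum = theta
```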
3.3 Multirate infinitesimal step methods
Multirate infinitesimal step (MIS) methods [21] discretize the slow component with an explicit Runge–Kutta method. The fast component is advanced between consecutive stages of this method as the exact solution of a fast ODE system. The fast ODE has a right-hand side composed of the original fast component of the function, plus a piecewise constant “tendency” term representing the discretized slow component of the function. The order conditions of the overall method assume that the fast ODE can be solved exactly, which justifies the “infinitesimal step” name. We show here that a multirate infinitesimal step method can be cast in the GARK framework when the inner fast ODEs are solved by a Runge–Kutta method with small steps.
We focus on the particular method of Knoth and Wolke [8], which was the first MIS approach, and which has the best practical potential. This approach has been named recursive flux splitting multirate (RFSMR) in [18–20], and has been cast as a traditional partitioned Runge Kutta method in [20]. Applications to the solution of atmospheric flows are discussed in [17–19]. The approach below can be applied to any MIS scheme where the internal ODEs are solved by Runge Kutta methods.
Consider an outer (slow) explicit Runge Kutta scheme with the abscissae \(c^\textsc {o}_1=0\), \(c^\textsc {o}_i < c^\textsc {o}_j\) for \(i<j\), and \(c^\textsc {o}_s<1\). The inner (fast) scheme can be explicit or implicit. If the same explicit scheme is used in both the inner and the outer loops then the method can be applied in a telescopic fashion, where an even faster method is obtained by sub-stepping recursively.
The scheme proceeds, in principle, by solving an ODE between each pair of consecutive stages of the slow explicit method:
The numerical scheme solves the inner ODEs using several steps of an inner Runge–Kutta method [18]. For the present analysis we consider the case when only one step of the internal Runge–Kutta method \((A^\textsc {i},b^\textsc {i})\) is taken to solve the ODE for \(v_i\) for each subinterval \(i=2,\dots ,s^\textsc {o}\). This does not restrict generality, as any sequence of M sub-steps can be written as a single step of a larger method.
We interpret this scheme as a GARK method—note that we formally solve the ODEs for the active part with one step of the inner fast method \((A^\textsc {i},b^\textsc {i})\) with \(c^\textsc {i} = A^\textsc {i}1\!\!1\).
Theorem 4
(The MIS scheme is a particular instance of a GARK method)
-
(i)
The MIS scheme defined above can be written as a GARK method with the corresponding Butcher tableau (7) given by \(\mathbf {A}^{\{\mathfrak {s},\mathfrak {s}\}}=A^\textsc {o}\), \(\mathbf {b}^{\{\mathfrak {s}\}}=b^\textsc {o},\)
where
$$\begin{aligned} \mathbf {e}_i = \begin{bmatrix} 0&\dots&1&\dots&0 \end{bmatrix} ^T \in {\mathbb R}^{s^\textsc {o}}, \quad \mathbf {g}_i = \begin{bmatrix} 0&\dots&1&\dots&1 \end{bmatrix} ^T \in {\mathbb R}^{s^\textsc {o}} \end{aligned}$$ -
(ii)
The coefficients fulfill the simplifying “internal consistency” conditions (16a, 16b) given in matrix form by
$$\begin{aligned} \mathbf {c}^{\{\mathfrak {s},\mathfrak {s}\}} = \mathbf {c}^{\{\mathfrak {s},\mathfrak {f}\}} = \mathbf {c}^{\{\mathfrak {s}\}} = c^\textsc {o}, \end{aligned}$$and
$$\begin{aligned} \mathbf {c}^{\{\mathfrak {f},\mathfrak {s}\}} = \mathbf {c}^{\{\mathfrak {f},\mathfrak {f}\}} = \mathbf {c}^{\{\mathfrak {f}\}} = \begin{bmatrix} ( c^\textsc {o}_{2} )\,c^\textsc {i} \\ \vdots \\ c_{s^\textsc {o}}^\textsc {o} \, 1\!\!1+ ( b^\textsc {o}\,^T c^\textsc {o} -c^\textsc {o}_{s^\textsc {o}} )\,c^\textsc {i} \end{bmatrix} \in {\mathbb R}^{s^\textsc {o}s^\textsc {i}}. \end{aligned}$$ -
(iii)
Assuming that both the fast and the slow methods have order at least two, the simplifying assumptions imply that the overall scheme is second order.
-
(iv)
Assuming that both the fast and the slow methods have order at least three, the third order coupling conditions reduce to the single condition
$$\begin{aligned} \frac{1}{3}= & {} \sum _{i=2}^{s^\textsc {o}} (c_i^\textsc {o}-c_{i-1}^\textsc {o})\, ( \mathbf {e}_{i}+\mathbf {e}_{i-1} )^T \, A^\textsc {o} c^\textsc {o} + (1-c_{s^\textsc {o}}^\textsc {o}) \, \left( \frac{1}{2} + \mathbf {e}_{s^\textsc {o}}^T\, A^\textsc {o} c^\textsc {o} \right) . \end{aligned}$$(31)
Proof
The proof is an application of the multirate GARK order conditions. Details are given in [2]. \(\square \)
Remark 2
The condition (31) corresponds to the additional order three condition derived in the original MIS paper [8].
4 Multirate (traditional) additive Runge–Kutta methods
A special case of multirate GARK schemes (6a, 6b, 6c) is the class of multirate additive Runge–Kutta schemes. They are obtained in the GARK framework by setting
The scheme proceeds as follows
and has the following extended Butcher tableau
4.1 Additive partitioning
The coupling condition (13) for nonlinear stability yields
and only \(b^{\{\mathfrak {s}\}},b^{\{\mathfrak {f}\}}\) and \(A^{\{\mathfrak {f}\}}\) remain as free parameters. As a consequence, the algebraic stability of the base method \((b^{\{\mathfrak {s}\}},A^{\{\mathfrak {s}\}})\) is equivalent to the algebraic stability of \((b^{\{\mathfrak {s}\}},D)\), where \(D=B^{\{\mathfrak {f}\}}\,^{-1} (A^{\{\mathfrak {f}\}})^T B^{\{\mathfrak {s}\}} \).
Example 6
(A nonlinearly stable additive Runge–Kutta scheme of order two) Besides the algebraic stability of \((b^{\{\mathfrak {s}\}},A^{\{\mathfrak {s}\}})\) and \((b^{\{\mathfrak {s}\}},D)\), the following conditions have to be fulfilled for a method of order two:
A simple choice of parameters is
and
In this case both base methods are not only algebraically stable but also symplectic.
4.2 Componentwise partitioning
For component partitioning there are no additional nonlinear stability conditions. Following again the lines of [10], we set
We also consider the simplifying assumption
The corresponding order conditions are given in Table 4.
If, in addition to (33a, 33b), a second simplifying condition holds, namely
any pair \((b^{\{\mathfrak {f}\}}, A^{\{\mathfrak {f}\}})\) and \((b^{\{\mathfrak {s}\}}, A^{\{\mathfrak {s}\}})\) of algebraically stable order three schemes leads to an order three multirate scheme, provided that the following compatibility conditions hold:
These compatibility conditions are fulfilled by a scheme with s stages if
with free parameters \(\eta _2(\lambda ),\ldots ,\eta _{s-1}(\lambda )\), provided that \(c_1 \ne c_s\).
It is easy to see that the scheme must have at least four stages, as \(s=3\) would yield \(b^{\{\mathfrak {f}\}}=b^{\{\mathfrak {s}\}}\) and consequently \(M=1\).
Example 7
(An algebraically stable additive Runge–Kutta scheme of order three) To construct an algebraically stable scheme of order \(p=3\), we first choose a pair of algebraically stable schemes \((A,b^{\{\mathfrak {f}\}})\) and \((A,b^{\{\mathfrak {s}\}})\) with \(c:=A 1\!\!1\) and then define \(A^{\{\mathfrak {f}\}}:=A+(M-1) \widetilde{A}^{\{\mathfrak {f}\}}\) and \(A^{\{\mathfrak {s}\}}:=A+(M-1) \widetilde{A}^{\{\mathfrak {s}\}}\). It is straightforward to show that this yields a stable multirate additive Runge–Kutta scheme, if the following conditions hold:
Such a pair can be constructed by extending (doubling) any algebraically stable scheme. For the RADAU-IA method, for example, the extension to the pair \((A,b^{\{\mathfrak {f}\}})\) and \((A,b^{\{\mathfrak {s}\}})\) is given by:
A possible choice for \(\widetilde{A}^{\{\mathfrak {f}\}}\) and \(\widetilde{A}^{\{\mathfrak {s}\}}\) fulfilling all conditions above is
Finally, we set
with \(\eta _2\) and \(\eta _3\) arbitrary. For \(\eta _2:=\eta _3:=0\) we get \(\eta _1=\frac{c_4}{c_4-c_1} \lambda \) and \(\eta _4=-\frac{c_1}{c_4-c_1} \lambda \). For the extended RADAU-IA scheme we have \(\eta _1=\lambda , \eta _2=\eta _3=\eta _4=0\).
5 Monotonicity properties
Consider the method (6a, 6b, 6c) in the general form, represented by the Butcher tableau (7) and let
We are concerned with partitioned systems (1) where there exists a \(\rho > 0\) such that for a semi-norm \(\Vert \cdot \Vert \) and for any y
This implies that condition (36) holds for any \(0 \le \tau \le \rho \), i.e., the solutions of forward Euler steps with the slow and fast subsystems, respectively, are monotone under this step size restriction. The condition (36) also implies that the system (1) has a solution of non-increasing norm. To see this, consider \(\alpha ,\beta > 0\) with \(\alpha +\beta =1\) and write an Euler step with the full system as a convex combination
if \(\theta \le \min \{\alpha , \beta /M\}\, \rho \). For \(\beta \rightarrow 1\) the aggregate Euler step is monotonic for time steps \(0 < \theta < \rho /M\), and therefore the solution of (1) has non-increasing norm, \((d/dt) \Vert y \Vert \le 0\) [5].
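The convex-combination argument can be checked directly. The sketch below verifies the splitting identity for an Euler step with the full right-hand side and the monotonicity of one partial step; the linear right-hand sides are illustrative choices, not from the paper.

```python
import numpy as np

# Convex-combination identity behind the monotonicity argument:
#   y + theta*(f_s + f_f) = alpha*(y + (theta/alpha)*f_s)
#                         + beta *(y + (theta/beta )*f_f),  alpha + beta = 1,
# so monotonicity of the partial Euler steps carries over to the full step.
f_s = lambda y: -0.1 * y     # slow, dissipative part (illustrative)
f_f = lambda y: -10.0 * y    # fast, dissipative part (illustrative)

y, theta = np.array([1.0, -2.0]), 0.01
alpha, beta = 0.5, 0.5

full = y + theta * (f_s(y) + f_f(y))
split = alpha * (y + theta / alpha * f_s(y)) + beta * (y + theta / beta * f_f(y))
assert np.allclose(full, split)

# each partial Euler step is monotone when its effective step is small enough
assert np.linalg.norm(y + theta / beta * f_f(y)) <= np.linalg.norm(y)
```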
We seek multirate schemes which guarantee a monotone numerical solution \(\Vert y_{n+1} \Vert \le \Vert y_{n} \Vert \) for (36) under suitable step size restrictions. Specifically, we seek schemes where the macro step is not subject to the above \(\theta < \rho /M\) bound.
The following definition and results follow from the general ones for GARK schemes in [11].
Definition 2
(Absolutely monotonic multirate GARK) Let \(r>0\) and
A multirate GARK scheme (3a, 3b) defined by \(\widetilde{\mathbf {A}} \ge 0\) is called absolutely monotonic (a.m.) at \(r \in {\mathbb R}\) if
where \(\hat{s}=M s^{\{\mathfrak {f}\}}+ s^{\{\mathfrak {s}\}} +1\). Here all the inequalities are taken component-wise.
Let
We note that conditions (38a, 38b) are equivalent to
These are precisely the monotonicity relations for a simple Runge–Kutta matrix. Consequently, the machinery developed for assessing the monotonicity of single-rate Runge–Kutta schemes [3, 9] can be directly applied to the multirate GARK case as well.
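In that single-rate form, absolute monotonicity at a given r can be tested numerically via the standard Kraaijevanger conditions. The sketch below applies them, for illustration, to the extended matrix of a single explicit SSP scheme (Heun's method, for which \(\mathcal {R}=1\)); for a multirate scheme one would use the full extended matrix \(\widetilde{\mathbf {A}}\) of (39).

```python
import numpy as np

def absolutely_monotonic(A_tilde, r):
    """Kraaijevanger-type check for a Runge-Kutta matrix: absolute
    monotonicity at r requires I + r*A_tilde to be invertible with
    r*A_tilde*(I + r*A_tilde)^{-1} >= 0 and (I + r*A_tilde)^{-1}*1 >= 0."""
    n = A_tilde.shape[0]
    try:
        Minv = np.linalg.inv(np.eye(n) + r * A_tilde)
    except np.linalg.LinAlgError:
        return False
    tol = -1e-12
    return (np.all(r * A_tilde @ Minv >= tol)
            and np.all(Minv @ np.ones(n) >= tol))

# extended matrix for the explicit SSP2 (Heun) scheme:
# stage coefficients stacked above the output weights
A = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([0.5, 0.5])
A_tilde = np.block([[A, np.zeros((2, 1))], [b[None, :], np.zeros((1, 1))]])

assert absolutely_monotonic(A_tilde, 1.0)        # a.m. up to R = 1
assert not absolutely_monotonic(A_tilde, 1.5)    # fails beyond the radius
```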
Definition 3
(Radius of absolute monotonicity) The radius of absolute monotonicity of the multirate GARK scheme (3a, 3b) is the largest number \(\mathcal {R}\ge 0\) such that the scheme is absolutely monotonic (40a, 40b) for any \(r \in [0,\mathcal {R}]\).
Theorem 5
(Monotonicity of solutions) Consider the GARK scheme (3a,3b) applied to a partitioned system with the property (36). For any macro-step size obeying the restriction
the stage values and the solution are monotonic
In practice we are interested in the largest upper bound for the time step that ensures monotonicity.
We next consider the particular case of telescopic multirate GARK methods, where both the fast and the slow components use the same irreducible monotonic base scheme (A, b). The classical Runge–Kutta monotonicity theory [3, 9] states that an irreducible base scheme has a nonzero radius of absolute monotonicity if and only if
where Inc denotes the incidence matrix (i.e., a matrix with entries equal to one or zero for the non-zero and zero entries of the original matrix, respectively).
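The incidence condition (43) is easy to check mechanically. A sketch for the extended matrix of the explicit SSP2 (Heun) scheme, chosen here only as a small concrete instance:

```python
import numpy as np

def inc(X):
    # incidence matrix: 1 where the entry is nonzero, 0 elsewhere
    return (np.abs(X) > 1e-14).astype(int)

# extended matrix of the explicit SSP2 (Heun) scheme
A = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([0.5, 0.5])
A_tilde = np.block([[A, np.zeros((2, 1))], [b[None, :], np.zeros((1, 1))]])

# condition (43): Inc(A_tilde^2) <= Inc(A_tilde), entrywise
assert np.all(inc(A_tilde @ A_tilde) <= inc(A_tilde))
```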
The matrix (39) of the resulting multirate scheme is
The extra stages added to obtain (44) from (39) do not impact the final solution, therefore the underlying numerical solution is not changed. Denote the upper left block of (44) by \(A_M\). Note that (43) implies that \(Inc(A_M^2) \le Inc(A_M)\) as \(A_M\) represents M concatenated steps of the base method.
By similar arguments as in the classical theory [3, 9], the multirate scheme is absolutely monotonic if the incidence of the extended matrix satisfies
Theorem 6
(Conditions for absolutely monotonic telescopic multirate GARK schemes) A multirate GARK scheme with the same base scheme for the fast and slow components is absolutely monotonic if the following conditions hold:
for all \(i,j=1,\dots ,M\).
Proof
The square of matrix (44) is
with the following blocks:
Writing the incidence inequality (45) by blocks yields (46a, 46b, 46c, 46d, 46e). \(\square \)
Remark 3
-
1.
A comparison of (46b) with (43) reveals that the monotonicity of the base scheme is a necessary, but not sufficient condition for the absolute monotonicity of the multirate version. The coupling coefficients have to be chosen appropriately in order to preserve this property. For example, since \(A_M\) and \(A_M^2\) are block lower triangular, a necessary condition for (46a) is that \( A^{\{\mathfrak {f},\mathfrak {s},i\}} A^{\{\mathfrak {s},\mathfrak {f},j\}} = \mathbf {0}\) for \(i > j\).
2. If all weights of the base method are nonzero, then condition (46e) is automatically satisfied.
3. An interesting choice of coupling coefficients is to use only the first micro-step solution in the slow calculation,
$$\begin{aligned} A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} = \mathbf {0}, \quad \lambda = 2, \dots , M, \end{aligned}$$
and to include the slow term contribution only in the last micro-step,
$$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} = \mathbf {0},\quad \lambda = 1,\dots ,M-1. \end{aligned}$$
In this case the incidence conditions (46a)–(46d) take the much simpler form:
$$\begin{aligned} Inc( A^2 + A^{\{\mathfrak {f},\mathfrak {s},M\}} A^{\{\mathfrak {s},\mathfrak {f},1\}} ) \le Inc( A ), \end{aligned}$$ (47a)
$$\begin{aligned} Inc( b^T A + b^T A^{\{\mathfrak {f},\mathfrak {s},M\}} ) \le Inc( b^T ), \end{aligned}$$ (47b)
$$\begin{aligned} Inc( A \, A^{\{\mathfrak {f},\mathfrak {s},M\}} + A^{\{\mathfrak {f},\mathfrak {s},M\}} \, A ) \le Inc( A^{\{\mathfrak {f},\mathfrak {s},M\}} ), \end{aligned}$$ (47c)
$$\begin{aligned} Inc( A^{\{\mathfrak {s},\mathfrak {f},1\}} \, A + A \, A^{\{\mathfrak {s},\mathfrak {f},1\}} ) \le Inc( A^{\{\mathfrak {s},\mathfrak {f},1\}} ). \end{aligned}$$ (47d)
Condition (47b) is automatically satisfied if \(b > 0\). Moreover, if the couplings are multiples of the base scheme, \(A^{\{\mathfrak {s},\mathfrak {f},1\}} = c_1\, A\) and \(A^{\{\mathfrak {f},\mathfrak {s},M\}} = c_2\, A\), then (43) implies that all conditions (47a)–(47d) are satisfied.
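The claim that coupling by multiples of the base matrix satisfies (47a)–(47d) is easy to verify numerically. The following sketch is illustrative only: the values of \(c_1, c_2\) are arbitrary, and the two-stage SSP2 tableau is an assumed example base scheme.

```python
import numpy as np

def inc(M):
    """Incidence matrix: 1 where M is nonzero, 0 elsewhere."""
    return (np.asarray(M) != 0).astype(int)

def check_47(A, b, Asf1, AfsM):
    """Incidence conditions (47a)-(47d) for the simplified coupling:
    Asf1 = A^{s,f,1} (slow uses first micro-step only),
    AfsM = A^{f,s,M} (slow term enters last micro-step only)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    Asf1 = np.asarray(Asf1, dtype=float)
    AfsM = np.asarray(AfsM, dtype=float)
    c47a = (inc(A @ A + AfsM @ Asf1) <= inc(A)).all()
    c47b = (inc(b @ A + b @ AfsM) <= inc(b)).all()
    c47c = (inc(A @ AfsM + AfsM @ A) <= inc(AfsM)).all()
    c47d = (inc(Asf1 @ A + A @ Asf1) <= inc(Asf1)).all()
    return bool(c47a and c47b and c47c and c47d)

A = np.array([[0.0, 0.0], [1.0, 0.0]])   # assumed SSP2 example tableau
b = np.array([0.5, 0.5])
c1, c2 = 3.0, 2.0                        # arbitrary positive multiples
print(check_47(A, b, c1 * A, c2 * A))    # expected True
```

With \(A^2 = \mathbf {0}\) for this explicit two-stage tableau, every left-hand side in (47a), (47c), (47d) vanishes, so the conditions hold trivially; (47b) holds because \(b > 0\).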
Example 8
(Monotonicity of an explicit multirate GARK scheme of order two) The base scheme for both the fast and the slow components is the following explicit, second-order, strong stability preserving method with radius of absolute monotonicity \(\mathcal {R}=1\):
The general coupling conditions for a second order multirate scheme read:
We consider three different couplings.
- The coupling coefficients that respect the nonlinear stability decoupling condition have negative values, and the resulting GARK method is not absolutely monotonic.
- Coupling the slow step with only the first fast step is achieved by
$$\begin{aligned} A^{\{\mathfrak {s},\mathfrak {f},1\}} = \begin{bmatrix} 0&\quad 0 \\ M&\quad 0 \end{bmatrix}, \quad A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} = \mathbf {0}, \quad \lambda \ge 2. \end{aligned}$$ (49)
The second order condition can be fulfilled by taking
$$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} = A, \quad \forall \, \lambda . \end{aligned}$$
For \(M \ge 2\) we have \(\mathcal {R}=0\), since (39) corresponds to an irreducible Runge–Kutta scheme and (46c) is not satisfied.
- A scheme with (49) that includes the slow terms only in the last micro-step is
$$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} = \mathbf {0}, \quad \lambda = 1,\dots ,M-1, \quad A^{\{\mathfrak {f},\mathfrak {s},M\}} = M\, A. \end{aligned}$$
With this coupling the multirate scheme maintains the radius of absolute monotonicity of the base method for any M, as all conditions (47a)–(47d) are satisfied.
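As a numerical sanity check on this last coupling, the sketch below verifies conditions (47a)–(47d) for several values of M. It assumes the standard two-stage SSP2 tableau for the base scheme; this is only an illustration, not part of the paper's derivation.

```python
import numpy as np

def inc(Mat):
    """Incidence matrix: 1 where Mat is nonzero, 0 elsewhere."""
    return (np.asarray(Mat) != 0).astype(int)

A = np.array([[0.0, 0.0], [1.0, 0.0]])   # assumed SSP2 example tableau
b = np.array([0.5, 0.5])

def conditions_47_hold(M):
    """Check (47a)-(47d) for the coupling (49) combined with
    A^{f,s,M} = M*A (slow terms only in the last micro-step)."""
    Asf1 = np.array([[0.0, 0.0], [float(M), 0.0]])  # coupling (49)
    AfsM = M * A
    return bool(
        (inc(A @ A + AfsM @ Asf1) <= inc(A)).all()
        and (inc(b @ A + b @ AfsM) <= inc(b)).all()
        and (inc(A @ AfsM + AfsM @ A) <= inc(AfsM)).all()
        and (inc(Asf1 @ A + A @ Asf1) <= inc(Asf1)).all()
    )

print(all(conditions_47_hold(M) for M in range(1, 6)))  # expected True
```

Since both coupling matrices are strictly lower triangular multiples of the base pattern, all products in (47a)–(47d) vanish for this tableau, so the conditions hold uniformly in M.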
We note that monotonicity conditions for several multirate and partitioned explicit Runge–Kutta schemes are discussed by Hundsdorfer, Mozartova, and Savcenco in a recent report [6].
6 Conclusions and future work
This work develops multirate generalized additive Runge–Kutta schemes, which exploit the different levels of activity within the partitions of the right-hand side and/or components by using an appropriate time step for each of them. Multirate GARK schemes inherit the high level of flexibility of GARK schemes [11], which allow for different stage values as arguments of different components of the right-hand side. Many well-known multirate Runge–Kutta schemes, such as the Kvaerno–Rentrop methods [10], the multirate infinitesimal step methods [8], and methods based on dense output coupling, are particular members of the new family of methods.
The paper develops the order conditions (up to order three) for the generalized additive multirate schemes. We extend the GARK algebraic stability and monotonicity analysis [11] to the new multirate family, and show that these properties are inherited from the base schemes provided that some coupling conditions hold.
Future work will construct multirate GARK methods tailored to special applications in, for example, circuit design, vehicle system dynamics, or air quality modeling, and will extend the new family of schemes to differential-algebraic equations. First numerical results obtained with multirate GARK schemes applied to an electro-thermal coupled problem from network analysis can be found in [1].
References
Günther, M., Hachtel, Ch., Sandu, A.: Multirate GARK schemes for multiphysical problems. In: Bartel, A. et al. (eds.) Proceedings of the SCEE2014. Springer, Berlin (2015) (to be published)
Günther, M., Sandu, A.: Multirate GARK methods. In: Technical Report CSL-TR-6/2013, Virginia Tech, Computational Science Laboratory (2013). arXiv:1310.6055
Higueras, I.: On strong stability preserving time discretization methods. J. Sci. Comput. 21, 193–223 (2004)
Higueras, I.: Monotonicity for Runge–Kutta methods: inner product norms. J. Sci. Comput. 24, 97–117 (2005)
Higueras, I.: Strong stability for additive Runge–Kutta methods. SIAM J. Numer. Anal. 44, 1735–1758 (2006)
Hundsdorfer, W., Mozartova, A., Savcenco, V.: Monotonicity conditions for multirate and partitioned explicit Runge–Kutta methods. In: Ansorge, R., Bijl, H., Meister, A., Sonar, T. (eds.) Recent Developments in the Numerics of Nonlinear Hyperbolic Conservation Laws, pp. 177–195. Springer, New York (2013)
Kennedy, Ch., Carpenter, M.: Additive Runge–Kutta schemes for convection–diffusion–reaction equations. Appl. Numer. Math. 44, 139–181 (2003)
Knoth, O., Wolke, R.: Implicit–explicit Runge–Kutta methods for computing atmospheric reactive flows. Appl. Numer. Math. 28, 327–341 (1998)
Kraaijevanger, J.F.B.M.: Contractivity of Runge–Kutta methods. BIT Numer. Math. 31, 482–528 (1991)
Kvaerno, A., Rentrop, P.: Low Order Multirate Runge–Kutta Methods in Electric Circuit Simulation, Preprint 99/1. University of Karlsruhe, IWRMM, Karlsruhe (1999)
Sandu, A., Günther, M.: A generalized-structure approach to additive Runge–Kutta methods. SIAM J. Numer. Anal. 53(1), 17–42 (2015)
Savcenco, V.: Construction of high-order multirate Rosenbrock methods for stiff ODEs. In: Technical Report MAS-E0716, Centrum voor Wiskunde en Informatica (2007)
Savcenco, V.: Comparison of the asymptotic stability properties for two multirate strategies. J. Comput. Appl. Math. 220, 508–524 (2008)
Savcenco, V.: Construction of a multirate Rodas method for stiff ODEs. J. Comput. Appl. Math. 225, 323–337 (2009)
Savcenco, V., Hundsdorfer, W., Verwer, J.G.: A multirate time stepping strategy for parabolic PDE. In: Technical Report MAS-E0516, Centrum voor Wiskunde en Informatica (2005)
Savcenco, V., Hundsdorfer, W., Verwer, J.G.: A multirate time stepping strategy for stiff ODEs. BIT 47, 137–155 (2007)
Schlegel, M., Knoth, O., Arnold, M., Wolke, R.: Multirate Runge–Kutta schemes for advection equations. J. Comput. Appl. Math. 226, 345–357 (2009)
Schlegel, M., Knoth, O., Arnold, M., Wolke, R.: Multirate implicit–explicit time integration schemes in atmospheric modelling. In: AIP Conference Proceedings, vol. 1281. International Conference of Numerical Analysis and Applied Mathematics, ICNAAM 2010, p. 1831 (2010)
Schlegel, M., Knoth, O., Arnold, M., Wolke, R.: Implementation of splitting methods for air pollution modeling. Geosci. Model Dev. Discuss. 4, 2937–2972 (2011)
Schlegel, M., Knoth, O., Arnold, M., Wolke, R.: Numerical solution of multiscale problems in atmospheric modeling. Appl. Numer. Math. 62, 1531–1543 (2012)
Wensch, J., Knoth, O., Galant, A.: Multirate infinitesimal step methods for atmospheric flow simulation. BIT 49, 449–473 (2009)
Acknowledgments
The work of A. Sandu has been supported in part by NSF through awards NSF OCI-0904397, NSF CCF-0916493, NSF DMS-0915047, NSF CMMI-1130667, NSF CCF-1218454, AFOSR FA9550-12-1-0293-DEF, AFOSR 12-2640-06, and by the Computational Science Laboratory at Virginia Tech. The work of M. Günther has been supported in part by BMBF through grant 03MS648E.
Günther, M., Sandu, A. Multirate generalized additive Runge Kutta methods. Numer. Math. 133, 497–524 (2016). https://doi.org/10.1007/s00211-015-0756-z