1 Introduction

Generalized additive Runge–Kutta (GARK) methods were introduced in  [11] to solve initial value problems for additively partitioned systems of ordinary differential equations

$$\begin{aligned} y'= f(y) = \sum _{m=1}^N f^{\{m\}} (y), \quad y(t_0)=y_0, \end{aligned}$$
(1)

where the right-hand side \(f: {\mathbb R}^d \rightarrow {\mathbb R}^d\) is split into N different parts with respect to, for example, stiffness, nonlinearity, dynamical behavior, and evaluation cost. Additive partitioning includes the case of component partitioning as follows. The set of indices \(\{1,2,\ldots ,d\}\) that number the solution components \(y^i\) is split into N subsets \(\mathcal{I}^{\{m\}}\) to define

$$\begin{aligned} f^{\{m\}}:= \sum _{i \in \mathcal{I}^{\{m\}}} e_i^T e_i\, f(y), \quad \mathrm{i.e.,}\,\, f^{\{m\}i} (y) = \left\{ \begin{array}{ll} f^i(y), &{}\quad i \in \mathcal{I}^{\{m\}}, \\ 0, &{}\quad i \notin \mathcal{I}^{\{m\}} \end{array} \right. , \end{aligned}$$
(2)

where \(e_i\) denotes the ith unit vector in \({\mathbb R}^d\). A GARK method advances the numerical solution as follows [11]

$$\begin{aligned} Y_i^{\{q\}}= & {} y_{n} + h \sum _{m=1}^N \sum _{j=1}^{s^{\{m\}}} a_{i,j}^{\{q,m\}} \, f^{\{m\}}(Y_j^{\{m\}}), \quad q=1,\ldots ,N, \quad {i=1,\ldots ,s^{\{q\}}},\nonumber \\ \end{aligned}$$
(3a)
$$\begin{aligned} y_{n+1}= & {} y_n + h \sum _{q=1}^N\, \sum _{i=1}^{s^{\{q\}}} b_{i}^{\{q\}} \, f^{\{q\}}(Y_i^{\{q\}}), \end{aligned}$$
(3b)

and is characterized by the extended Butcher tableau

$$\begin{aligned} \begin{array}{cccc} \mathbf {A}^{\{1,1\}} &{} \mathbf {A}^{\{1,2\}} &{} \ldots &{} \mathbf {A}^{\{1,N\}} \\ \mathbf {A}^{\{2,1\}} &{}\mathbf {A}^{\{2,2\}} &{} \ldots &{} \mathbf {A}^{\{2,N\}} \\ \vdots &{} \vdots &{} &{} \vdots \\ \mathbf {A}^{\{N,1\}} &{} \mathbf {A}^{\{N,2\}} &{} \ldots &{} \mathbf {A}^{\{N,N\}} \\ \mathbf {b}^{\{1\}} &{} \mathbf {b}^{\{2\}} &{} \ldots &{}\mathbf {b}^{\{N\}} \end{array}. \end{aligned}$$
(4)

Generalized additive Runge–Kutta methods show excellent stability properties and flexibility to exploit the different behavior of the partitions. In contrast to additive Runge–Kutta schemes introduced in [7], GARK schemes allow for different stage values in the different partitions of f. Note that additive Runge–Kutta schemes are a special case of GARK with \({\mathbf {A}}^{\{m,\ell \}}:={\mathbf {A}}^{\{\ell \}}\) for all \(m,\ell =1,\ldots ,N\).
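This reduction can be checked numerically. The following sketch (our illustration, not part of [11]) implements one explicit GARK step (3a, 3b) for partitions with equal stage numbers; choosing all coupling blocks equal to a single tableau A must reproduce one classical Runge–Kutta step applied to \(f = f^{\{1\}} + f^{\{2\}}\).

```python
import numpy as np

def gark_step(y, h, fs, A, b, s):
    """One explicit GARK step (3a)-(3b): fs[m] are the partitions f^{m},
    A[q][m] the coupling matrices (assumed strictly lower triangular),
    b[q] the weights; all methods have s stages.  Illustrative sketch."""
    N = len(fs)
    K = [[None] * s for _ in range(N)]          # K[m][j] = f^{m}(Y_j^{m})
    for i in range(s):                          # stage i of every partition
        for q in range(N):
            Y = y + h * sum(A[q][m][i, j] * K[m][j]
                            for m in range(N) for j in range(i))
            K[q][i] = fs[q](Y)
    return y + h * sum(b[q][i] * K[q][i] for q in range(N) for i in range(s))

# ARK/RK special case: identical coupling blocks reduce GARK to RK4 on f1 + f2
A4 = np.array([[0., 0, 0, 0], [.5, 0, 0, 0], [0, .5, 0, 0], [0, 0, 1, 0]])
b4 = np.array([1/6, 1/3, 1/3, 1/6])
f1 = lambda y: -0.6 * y                         # additive splitting of f(y) = -y
f2 = lambda y: -0.4 * y
y0, h = np.array([1.0]), 0.1
y1 = gark_step(y0, h, [f1, f2], [[A4, A4], [A4, A4]], [b4, b4], 4)
```

Here the stages of both partitions coincide, so the step equals one classical RK4 step on the unpartitioned right-hand side.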

This study develops new multirate schemes in the generalized additive Runge–Kutta framework. The paper is organized as follows. Section 2 introduces the multirate GARK family and discusses their computational effort. The algebraic stability results for GARK schemes are transferred to multirate GARK schemes and order conditions are derived. Section 3 shows that many existing multirate Runge–Kutta schemes can be represented and analyzed in the GARK framework. The generalization of additive Runge–Kutta schemes to multirate versions is considered in Sect. 4. Section 5 discusses absolutely monotonic multirate GARK schemes and shows how to construct such schemes. Finally, conclusions are drawn in Sect. 6.

2 Generalized additive multirate schemes

2.1 Formulation of generalized additive multirate schemes

We consider a two-way partitioned system (1) with one slow component \(\{\mathfrak {s}\}\), and one active (fast) component \(\{\mathfrak {f}\}\). The slow component is solved with a large step H, and the fast one with small steps \(h=H/M\). Denote by \(\widetilde{y}\) the intermediate solutions computed by the fast micro-steps, starting with \(\widetilde{y}_n:= y_n\). A multirate generalization of (3a, 3b) with M micro-steps \(h=H/M\) proceeds as follows. The slow stage values are given by:

$$\begin{aligned} Y_i^{\{\mathfrak {s}\}}= & {} y_n + H \, \sum _{j=1}^{s^{\{\mathfrak {s}\}}} a_{i,j}^{\{\mathfrak {s},\mathfrak {s}\}} f^{\{\mathfrak {s}\}}(Y_j^{\{\mathfrak {s}\}}) \nonumber \\&+\, h \, \sum _{\lambda =1}^M \,\sum _{j=1}^{s^{\{\mathfrak {f}\}}} a_{i,j}^{\{\mathfrak {s},\mathfrak {f},\lambda \}} f^{\{\mathfrak {f}\}}(Y_j^{\{\mathfrak {f},\lambda \}}), \quad i=1,\dots , s^{\{\mathfrak {s}\}}. \end{aligned}$$
(5a)

The fast micro-steps are, for \(\lambda =1,\ldots ,M\):

$$\begin{aligned} Y_i^{\{\mathfrak {f},\lambda \}}= & {} \widetilde{y}_{n+\left( \lambda -1\right) /M} + H \, \sum _{j=1}^{s^{\{\mathfrak {s}\}}} a_{i,j}^{\{\mathfrak {f},\mathfrak {s},\lambda \}} f^{\{\mathfrak {s}\}}(Y_j^{\{\mathfrak {s}\}})\nonumber \\&+\, h \, \sum _{j=1}^{s^{\{\mathfrak {f}\}}} a_{i,j}^{\{\mathfrak {f},\mathfrak {f}\}} f^{\{\mathfrak {f}\}}(Y_j^{\{\mathfrak {f},\lambda \}}),\quad i=1,\dots , s^{\{\mathfrak {f}\}}, \end{aligned}$$
(5b)
$$\begin{aligned} \widetilde{y}_{n+\lambda /M}= & {} \widetilde{y}_{n+(\lambda -1)/M} + h \sum _{i=1}^{s^{\{\mathfrak {f}\}}} b_{i}^{\{\mathfrak {f}\}} f^{\{\mathfrak {f}\}}(Y_i^{\{\mathfrak {f},\lambda \}}). \end{aligned}$$
(5c)

The full (macro-step) solution is given by:

$$\begin{aligned} y_{n+1}= & {} \widetilde{y}_{n+M/M} + H \, \sum _{i=1}^{s^{\{\mathfrak {s}\}}} b_{i}^{\{\mathfrak {s}\}} f^{\{\mathfrak {s}\}}(Y_i^{\{\mathfrak {s}\}}). \end{aligned}$$
(5d)
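For explicit base schemes, the recursion (5a)–(5d) can be sketched directly. The code below is a minimal "slowest-first" illustration under the simplifying assumption \(A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}=0\) (the slow stages are computed first, then the fast micro-steps reuse the known slow stage slopes); the argument `A_fs` supplies the coupling matrices \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\). It is a sketch of the structure, not a production implementation.

```python
import numpy as np

def mr_gark_step(y, H, M, f_s, f_f, A_ss, b_s, A_ff, b_f, A_fs):
    """One macro-step of (5a)-(5d) in a 'slowest-first' sketch: the slow
    stages are decoupled from the fast ones (A^{s,f,lambda} = 0), while
    the fast micro-steps see the known slow stage slopes through the
    coupling matrices A_fs(lam) = A^{f,s,lambda}.  Explicit base schemes
    (strictly lower triangular A_ss, A_ff) are assumed."""
    h = H / M
    s_s, s_f = len(b_s), len(b_f)
    Ks = np.zeros((s_s,) + y.shape)          # slow slopes, (5a)
    for i in range(s_s):
        Ks[i] = f_s(y + H * np.tensordot(A_ss[i, :i], Ks[:i], axes=1))
    yt = y.copy()                            # micro-step solution ~y
    for lam in range(1, M + 1):              # micro-steps (5b)-(5c)
        C = A_fs(lam)
        Kf = np.zeros((s_f,) + y.shape)
        for i in range(s_f):
            Kf[i] = f_f(yt + H * np.tensordot(C[i], Ks, axes=1)
                           + h * np.tensordot(A_ff[i, :i], Kf[:i], axes=1))
        yt = yt + h * np.tensordot(b_f, Kf, axes=1)
    return yt + H * np.tensordot(b_s, Ks, axes=1)   # (5d)
```

With a vanishing slow part the step reduces to M micro-steps of the fast base scheme, which gives a simple consistency check.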

After eliminating the micro-step solutions \(\widetilde{y}\) from the multirate GARK method (5a, 5b, 5c, 5d) we arrive at the following form.

Definition 1

(Multirate GARK method) One macro-step of a generalized additive multirate Runge–Kutta method with M equal micro-steps reads

$$\begin{aligned} Y_i^{\{\mathfrak {s}\}}= & {} y_n + H \, \sum _{j=1}^{s^{\{\mathfrak {s}\}}} a_{i,j}^{\{\mathfrak {s},\mathfrak {s}\}} f^{\{\mathfrak {s}\}}(Y_j^{\{\mathfrak {s}\}}) \nonumber \\&+\, h \, \sum _{\lambda =1}^M \sum _{j=1}^{s^{\{\mathfrak {f}\}}} a_{i,j}^{\{\mathfrak {s},\mathfrak {f},\lambda \}} f^{\{\mathfrak {f}\}}(Y_j^{\{\mathfrak {f},\lambda \}}), {\quad i=1,\ldots ,s^{\{\mathfrak {s}\}}} \end{aligned}$$
(6a)
$$\begin{aligned} \quad Y_i^{\{\mathfrak {f},\lambda \}}= & {} y_n + h \sum _{l=1}^{\lambda -1} \sum _{j=1}^{s^{\{\mathfrak {f}\}}} b_{j}^{\{\mathfrak {f}\}} f^{\{\mathfrak {f}\}}(Y_j^{\{\mathfrak {f},l\}}) + H \, \sum _{j=1}^{s^{\{\mathfrak {s}\}}} a_{i,j}^{\{\mathfrak {f},\mathfrak {s},\lambda \}} f^{\{\mathfrak {s}\}}(Y_j^{\{\mathfrak {s}\}}) \nonumber \\&+\, h \, \sum _{j=1}^{s^{\{\mathfrak {f}\}}} a_{i,j}^{\{\mathfrak {f},\mathfrak {f}\}} f^{\{\mathfrak {f}\}}(Y_j^{\{\mathfrak {f},\lambda \}}), \quad \lambda =1,\ldots ,M, {\quad i=1,\ldots ,s^{\{\mathfrak {f}\}}} \end{aligned}$$
(6b)
$$\begin{aligned} y_{n+1}= & {} y_n + h \, \sum _{\lambda =1}^M \sum _{i=1}^{s^{\{\mathfrak {f}\}}} b_{i}^{\{\mathfrak {f}\}} f^{\{\mathfrak {f}\}}(Y_i^{\{\mathfrak {f},\lambda \}}) + H \, \sum _{i=1}^{s^{\{\mathfrak {s}\}}} b_{i}^{\{\mathfrak {s}\}} f^{\{\mathfrak {s}\}}(Y_i^{\{\mathfrak {s}\}}). \end{aligned}$$
(6c)

The base schemes are Runge–Kutta methods, \((A^{\{\mathfrak {s},\mathfrak {s}\}},b^{\{\mathfrak {s}\}})\) for the slow component and \((A^{\{\mathfrak {f},\mathfrak {f}\}},b^{\{\mathfrak {f}\}})\) for the fast component. The coefficients \(A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}\), \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\) realize the coupling between the two components.

The method (6a, 6b, 6c) can be written as a GARK scheme (3a, 3b) over the macro-step H with the fast stage vectors \(Y^{\{\mathfrak {f}\}}:=[Y^{\{\mathfrak {f},1\}}\,^T,\ldots ,Y^{\{\mathfrak {f},M\}}\,^T]^T\). The corresponding Butcher tableau (4) reads

$$\begin{aligned} \begin{array}{c|c} {\mathbf {A}}^{\{\mathfrak {f},\mathfrak {f}\}} &{} {\mathbf {A}}^{\{\mathfrak {f},\mathfrak {s}\}} \\ {\mathbf {A}}^{\{\mathfrak {s},\mathfrak {f}\}} &{}{\mathbf {A}}^{\{\mathfrak {s},\mathfrak {s}\}} \\ {\mathbf {b}}^{\{\mathfrak {f}\}} \,^T &{} {\mathbf {b}}^{\{\mathfrak {s}\}} \,^T \end{array} ~~ :=~~ \begin{array}{cccc|cccc} \frac{1}{M} A^{\{\mathfrak {f},\mathfrak {f}\}} &{} 0 &{} \cdots &{} 0 &{} A^{\{\mathfrak {f},\mathfrak {s},1\}} \\ \frac{1}{M} \mathbf {1} b^{\{\mathfrak {f}\}}\,^T &{} \frac{1}{M} A^{\{\mathfrak {f},\mathfrak {f}\}} &{} \cdots &{} 0 &{} A^{\{\mathfrak {f},\mathfrak {s},2\}} \\ \vdots &{} &{} \ddots &{} &{} \vdots \\ \frac{1}{M} \mathbf {1} b^{\{\mathfrak {f}\}}\,^T &{} \frac{1}{M} \mathbf {1} b^{\{\mathfrak {f}\}}\,^T &{} \ldots &{} \frac{1}{M} A^{\{\mathfrak {f},\mathfrak {f}\}} &{}A^{\{\mathfrak {f},\mathfrak {s},M\}} \\ \hline \frac{1}{M} A^{\{\mathfrak {s},\mathfrak {f},1\}} &{} \frac{1}{M} A^{\{\mathfrak {s},\mathfrak {f},2\}} &{} \cdots &{} \frac{1}{M} A^{\{\mathfrak {s},\mathfrak {f},M\}} &{} A^{\{\mathfrak {s},\mathfrak {s}\}} \\ \hline \frac{1}{M} b^{\{\mathfrak {f}\}}\,^T &{} \frac{1}{M} b^{\{\mathfrak {f}\}}\,^T &{} \ldots &{} \frac{1}{M} b^{\{\mathfrak {f}\}}\,^T &{} b^{\{\mathfrak {s}\}}\,^T \end{array} \end{aligned}$$
(7)
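The assembly of (7) from the base schemes and the per-micro-step coupling matrices is purely structural; the sketch below (names and the placeholder couplings are our own) builds the lifted blocks for an illustrative base scheme.

```python
import numpy as np

def lift_tableau(A_ff, b_f, A_ss, b_s, A_fs_list, A_sf_list, M):
    """Assemble the macro-step GARK tableau (7): block-diagonal fast part
    (1/M) A^{f,f}, completed micro-steps entering through (1/M) 1 b^{f,T},
    and the scaled coupling blocks.  Illustrative sketch."""
    s_f, s_s = len(b_f), len(b_s)
    AFF = np.kron(np.eye(M), A_ff / M)
    for lam in range(M):
        for mu in range(lam):                 # already completed micro-steps
            AFF[lam*s_f:(lam+1)*s_f, mu*s_f:(mu+1)*s_f] = \
                np.outer(np.ones(s_f), b_f) / M
    AFS = np.vstack(A_fs_list)                # (M s_f) x s_s
    ASF = np.hstack([A / M for A in A_sf_list])   # s_s x (M s_f)
    bF = np.tile(b_f / M, M)
    return AFF, AFS, ASF, A_ss, bF, b_s

# illustrative base scheme (explicit midpoint) with placeholder couplings
A_mid = np.array([[0.0, 0.0], [0.5, 0.0]]); b_mid = np.array([0.0, 1.0])
M = 3
cpl = [np.full((2, 2), 0.1 * (lam + 1)) for lam in range(M)]
AFF, AFS, ASF, ASS, bF, bS = lift_tableau(A_mid, b_mid, A_mid, b_mid, cpl, cpl, M)
```

The row sums of the lifted fast block recover the shifted abscissae of the micro-steps, consistent with the internal consistency conditions discussed below.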

Example 1

(Simple MR GARK) A simple version of (6a, 6b, 6c) uses the same coupling in all micro-steps,

$$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},1\}} = \cdots = A^{\{\mathfrak {f},\mathfrak {s},M\}} = A^{\{\mathfrak {f},\mathfrak {s}\}}. \end{aligned}$$

As we will see later, for stability reasons it is in general necessary to introduce additional freedom by using different coupling matrices in different micro-steps.

Example 2

(Telescopic MR GARK) Of particular interest are methods (6a, 6b, 6c) which use the same base scheme for both the slow and the fast components,

$$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {f}\}} = A^{\{\mathfrak {s},\mathfrak {s}\}} = A, \quad b^{\{\mathfrak {f}\}} = b^{\{\mathfrak {s}\}} = b. \end{aligned}$$
(8)

Such methods can be easily extended to systems with more than two scales by applying them in a telescopic fashion.

2.2 Computational considerations

The general formulation of the method (5a, 5b, 5c, 5d) leads to a system of coupled equations for all the fast and the slow stages, and the resulting computational effort is larger, not smaller, than that of solving the coupled system with the small step. For an efficient computational process the macro- and micro-steps need to stay mostly decoupled.

A very effective approach is to have the slow stages (5a) for \(i=1,\dots , s^{\{\mathfrak {s}\}}\) coupled only with the first fast micro-step,

$$\begin{aligned} Y_i^{\{\mathfrak {s}\}}= & {} y_n + H \, \sum _{j=1}^{s^{\{\mathfrak {s}\}}} a_{i,j}^{\{\mathfrak {s},\mathfrak {s}\}} f^{\{\mathfrak {s}\}}(Y_j^{\{\mathfrak {s}\}}) + h \, \sum _{j=1}^{s^{\{\mathfrak {f}\}}} a_{i,j}^{\{\mathfrak {s},\mathfrak {f}\}} f^{\{\mathfrak {f}\}}(Y_j^{\{\mathfrak {f},1\}}). \end{aligned}$$
(9)

Equations (9) and (5b) with \(\lambda =1\) are solved together. When both methods are implicit, this first computation has a cost similar to that of one step of the fully coupled system. The subsequent fast micro-steps (5b) with \(\lambda \ge 2\) are solved independently. The corresponding slow-fast coupling matrix is

$$\begin{aligned} {\mathbf {A}}^{\{\mathfrak {s},\mathfrak {f}\}} = \begin{bmatrix} \frac{1}{M} A^{\{\mathfrak {s},\mathfrak {f}\}}&0&\cdots&0 \end{bmatrix}. \end{aligned}$$
(10)

When the slow stages are computed in succession, e.g., when the slow method is explicit or diagonally implicit, a more complex approach to decouple the computations is possible. Namely, the slow stages are coupled only with the micro-steps that have been computed already, and vice-versa, the micro-steps are coupled only with the macro-stages whose solutions are already available. The fast and the slow methods proceed side by side. This decoupling can be achieved by choosing

$$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} = \begin{bmatrix} \bar{A}^{\{\mathfrak {f},\mathfrak {s},\lambda \}}&\mathbf {0} \end{bmatrix} \in {\mathbb R}^{s^{\{\mathfrak {f}\}} \times s^{\{\mathfrak {s}\}}}, \quad \bar{A}^{\{\mathfrak {f},\mathfrak {s},\lambda \}} \in {\mathbb R}^{s^{\{\mathfrak {f}\}} \times j(\lambda )}, \end{aligned}$$

where \(j(\lambda )\) is the number of slow stages that have been computed before the current micro-step, i.e., \(c^{\{\mathfrak {s}\}}_{j(\lambda )} \le (\lambda -1)/M\). The micro-step \(\lambda \) is only coupled to these (known) stages. Similarly,

$$\begin{aligned} A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} = \begin{bmatrix} \mathbf {0} \\ \bar{A}^{\{\mathfrak {s},\mathfrak {f},\lambda \}} \end{bmatrix} \in {\mathbb R}^{s^{\{\mathfrak {s}\}} \times s^{\{\mathfrak {f}\}}}, \quad \bar{A}^{\{\mathfrak {s},\mathfrak {f},\lambda \}} \in {\mathbb R}^{i(\lambda ) \times s^{\{\mathfrak {f}\}} }, \end{aligned}$$

where the first \(s^{\{\mathfrak {s}\}}-i(\lambda )\) slow stages are computed before the micro-step \(\lambda \), and therefore they are not coupled to the current micro-step. An example of such methods is discussed in detail in Sect. 3.3.

2.3 Nonlinear stability

We consider systems (1) where each of the component functions is dissipative:

$$\begin{aligned} \langle f^{\{m\}} (y)- f^{\{m\}} (z),\, y-z \rangle \le \nu ^{\{m\}}\, \left\| y-z \right\| ^2, \quad \nu ^{\{m\}} < 0 , \quad m \in \{\mathfrak {f},\mathfrak {s}\} \end{aligned}$$
(11)

with respect to the same scalar product \(\langle \cdot , \cdot \rangle \). For two solutions y(t) and \(\widetilde{y}(t)\) of (1), each starting from a different initial condition, the norm of the solution difference \(\Delta y(t) = \widetilde{y}(t)- y(t)\) is non-increasing in time, i.e., \(\left\| \Delta y(t+\varepsilon ) \right\| \le \left\| \Delta y(t) \right\| \) for all sufficiently small \(\varepsilon > 0\).

This section investigates the conditions under which the multirate scheme (6a, 6b, 6c) is nonlinearly stable, i.e., the inequality

$$\begin{aligned} \Vert y_{n+1} - \widetilde{y}_{n+1} \Vert \le \Vert y_{n} - \widetilde{y}_{n} \Vert \end{aligned}$$

holds for any two numerical approximations \(y_{n+1}\) and \(\widetilde{y}_{n+1}\) obtained by applying the scheme to the ODE (1) with (11) and with initial values \(y_n\) and \(\widetilde{y}_n\).

2.3.1 Algebraic stability

Following the stability analysis in [11], a multirate GARK scheme is algebraically stable if the matrix \(\mathbf {P}\) defined below is non-negative definite:

$$\begin{aligned} \mathbf {B}^{\{m\}}:= & {} \text{ diag }(\mathbf {b}^{\{m\}}), \end{aligned}$$
(12a)
$$\begin{aligned} \quad \mathbf {P}^{\{m,\ell \}}:= & {} \mathbf {A}^{\{m,\ell \}}\,^T \mathbf {B}^{\{m\}} + \mathbf {B}^{\{\ell \}} \mathbf {A}^{\{\ell ,m\}} - \mathbf {b}^{\{\ell \}} \mathbf {b}^{\{m\}}\,^T, \quad \forall ~ m,\ell \in \{\mathfrak {f},\mathfrak {s}\}, \end{aligned}$$
(12b)
$$\begin{aligned} \mathbf {P}= & {} \begin{bmatrix} \mathbf {P}^{\{\mathfrak {f},\mathfrak {f}\}}&\quad \mathbf {P}^{\{\mathfrak {f},\mathfrak {s}\}} \\ \mathbf {P}^{\{\mathfrak {s},\mathfrak {f}\}}&\quad \mathbf {P}^{\{\mathfrak {s},\mathfrak {s}\}} \end{bmatrix} \ge 0. \end{aligned}$$
(12c)

Algebraic stability guarantees unconditional nonlinear stability of the multirate GARK scheme [11]. If the base schemes \((A^{\{\mathfrak {f},\mathfrak {f}\}},b^{\{\mathfrak {f}\}})\) and \((A^{\{\mathfrak {s},\mathfrak {s}\}},b^{\{\mathfrak {s}\}})\) are algebraically stable, one can easily verify that \(\mathbf {P}^{\{\mathfrak {f},\mathfrak {f}\}} \ge 0\) and \(\mathbf {P}^{\{\mathfrak {s},\mathfrak {s}\}} \ge 0\) hold, since

$$\begin{aligned} \mathbf {P}^{\{\mathfrak {s},\mathfrak {s}\}}= & {} {P}^{\{\mathfrak {s},\mathfrak {s}\}}, \quad \mathbf {P}^{\{\mathfrak {f},\mathfrak {f}\}} = \frac{1}{M^2}\, \mathbf {I}_{M \times M} \otimes P^{\{\mathfrak {f},\mathfrak {f}\}}, \end{aligned}$$

where the matrices \({P}^{\{\mathfrak {s},\mathfrak {s}\}}\) and \(P^{\{\mathfrak {f},\mathfrak {f}\}}\) are defined for the two base schemes analogously to (12a, 12b, 12c). The scheme (6a, 6b, 6c) is called stability-decoupled [11] if \(\mathbf {P}^{\{\mathfrak {f},\mathfrak {s}\}}=\mathbf {0}\) (and therefore \(\mathbf {P}^{\{\mathfrak {s},\mathfrak {f}\}}= \mathbf {P}^{\{\mathfrak {f},\mathfrak {s}\}}\,^T=\mathbf {0}\)). In this case the individual stability of the slow and fast schemes is a sufficient condition for the stability of the overall multirate method. We have the following result.
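The Kronecker structure of \(\mathbf {P}^{\{\mathfrak {f},\mathfrak {f}\}}\) can be verified numerically. The sketch below (our illustration) uses the two-stage Gauss method, for which \(P = A^T B + B A - b\, b^T = 0\), and checks that the lifted fast block of (7) inherits \(\mathbf {P}^{\{\mathfrak {f},\mathfrak {f}\}} = \tfrac{1}{M^2}\, I \otimes P = 0\).

```python
import numpy as np

# two-stage Gauss method: algebraically stable with P = A^T B + B A - b b^T = 0
r3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - r3 / 6], [0.25 + r3 / 6, 0.25]])
b = np.array([0.5, 0.5]); B = np.diag(b)
P_base = A.T @ B + B @ A - np.outer(b, b)

M, s = 3, 2
AFF = np.kron(np.eye(M), A / M)               # fast block of tableau (7)
for lam in range(M):
    for mu in range(lam):                     # completed micro-steps
        AFF[lam*s:(lam+1)*s, mu*s:(mu+1)*s] = np.outer(np.ones(s), b) / M
bF = np.tile(b / M, M); BF = np.diag(bF)

# bold P^{f,f} from (12b) with m = l = f
P_ff = AFF.T @ BF + BF @ AFF - np.outer(bF, bF)
```

The strictly lower blocks contribute \(\tfrac{1}{M^2} b\,b^T\) from the transpose term, which exactly cancels the \(-\tfrac{1}{M^2} b\,b^T\) term, leaving only the diagonal Kronecker blocks.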

Theorem 1

(Stability of multirate GARK schemes) Consider a multirate GARK scheme (6a, 6b, 6c) with positive fast weights, \(b_i^{\{\mathfrak {f}\}} > 0\) for \(i=1,\dots ,s^{\{\mathfrak {f}\}}\). The scheme is stability-decoupled if and only if the coupling matrices \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\) are given by

$$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}:= & {} B^{\{\mathfrak {f}\}}\,^{-1} ( {b}^{\{\mathfrak {f}\}} {b}^{\{\mathfrak {s}\}}\,^T - A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}\,^T B^{\{\mathfrak {s}\}} ), \quad \lambda =1,\ldots ,M. \end{aligned}$$
(13)

Proof

Equation (13) follows directly from setting (12b) to zero and solving for \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\). \(\square \)
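Formula (13) can be checked numerically against the lifted tableau (7): choosing the slow-to-fast couplings freely and defining the fast-to-slow couplings by (13) must yield \(\mathbf {P}^{\{\mathfrak {f},\mathfrak {s}\}}=\mathbf {0}\) in (12b). The base schemes below are illustrative choices, not taken from the paper.

```python
import numpy as np

# illustrative base schemes (not from the paper); fast weights positive
A_ff = np.array([[0.25, 0.0], [0.5, 0.25]]); b_f = np.array([0.5, 0.5])
A_ss = np.array([[0.3, 0.0], [0.4, 0.3]]);  b_s = np.array([0.5, 0.5])
M = 3
s_f, s_s = len(b_f), len(b_s)
B_f, B_s = np.diag(b_f), np.diag(b_s)

rng = np.random.default_rng(0)
A_sf = [rng.random((s_s, s_f)) for _ in range(M)]        # free choices
A_fs = [np.linalg.inv(B_f) @ (np.outer(b_f, b_s) - A_sf[lam].T @ B_s)
        for lam in range(M)]                             # formula (13)

# bold coupling blocks of (7) and the block P^{f,s} of (12b)
bold_A_fs = np.vstack(A_fs)
bold_A_sf = np.hstack([A / M for A in A_sf])
bold_b_f = np.tile(b_f / M, M)
P_fs = bold_A_fs.T @ np.diag(bold_b_f) + B_s @ bold_A_sf \
       - np.outer(b_s, bold_b_f)
```

Each micro-step block of \(\mathbf {P}^{\{\mathfrak {f},\mathfrak {s}\}}\) is \(\tfrac{1}{M}\) times the per-step condition, so the per-step formula (13) is equivalent to the full decoupling condition.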

Remark 1

If we use component partitioning, no additional coupling conditions have to be fulfilled as shown in [11], provided that both base schemes \((A^{\{\mathfrak {f},\mathfrak {f}\}},b^{\{\mathfrak {f}\}})\) and \((A^{\{\mathfrak {s},\mathfrak {s}\}},b^{\{\mathfrak {s}\}})\) are algebraically stable.

2.3.2 Conditional stability for coercive problems

Following Higueras [4], consider partitioned systems (1) where each of the component functions is coercive:

$$\begin{aligned} \langle f^{\{\mathfrak {s}\}} (y)- f^{\{\mathfrak {s}\}} (z),\, y-z \rangle\le & {} \mu \, \Vert f^{\{\mathfrak {s}\}} (y)- f^{\{\mathfrak {s}\}} (z) \Vert ^2, \\ \nonumber \langle f^{\{\mathfrak {f}\}} (y)- f^{\{\mathfrak {f}\}} (z),\, y-z \rangle\le & {} \mu \, M\, \Vert f^{\{\mathfrak {f}\}} (y)- f^{\{\mathfrak {f}\}} (z) \Vert ^2, \quad \mu < 0. \end{aligned}$$
(14)

Assume that there exists \(r \ge 0\) such that the following matrix is non-negative definite:

$$\begin{aligned} \begin{bmatrix} \mathbf {P}^{\{\mathfrak {f},\mathfrak {f}\}} + r\, M\, \mathbf {B}^{\{\mathfrak {f}\}}&\quad \mathbf {P}^{\{\mathfrak {f},\mathfrak {s}\}} \\ \mathbf {P}^{\{\mathfrak {s},\mathfrak {f}\}}&\quad \mathbf {P}^{\{\mathfrak {s},\mathfrak {s}\}} + r\, \mathbf {B}^{\{\mathfrak {s}\}} \end{bmatrix} \ge 0. \end{aligned}$$
(15)

The next result extends the one in [11].

Theorem 2

(Conditional stability of multirate GARK methods) Consider a partitioned system (1) with coercive component functions (14) solved by a multirate GARK method, and assume that (15) holds. The solution is nonlinearly stable under the step size restriction

$$\begin{aligned} H \le \frac{-2\, \mu }{ r}. \end{aligned}$$

If the GARK method is stability decoupled then the weight r in (15) ensures stability of the slow component for \(H \le -2\mu /r\), and of the fast component under the step restriction \(h \le -2\mu /(r M)\). The multirate GARK method imposes no additional step size restrictions for conditional stability.

2.4 Order conditions

As the multirate method (6a, 6b, 6c) is a particular instance of a generalized additive Runge–Kutta scheme (3a, 3b), the order conditions follow directly from the derivation in [11]. The order conditions for the multirate GARK methods (6a, 6b, 6c) are given in Tables 1 and 2. Here \(I\) denotes the identity matrix of dimension \(s^{\{\mathfrak {s}\}}\) or \(s^{\{\mathfrak {f}\}}\), and \(1\!\!1=(1,\ldots ,1)^T\) the vector of ones of matching dimension.

Table 1 Slow order conditions for multirate GARK scheme (6a, 6b, 6c)
Table 2 Fast order conditions for multirate GARK scheme (6a, 6b, 6c)

2.4.1 Simplifying assumptions

Consider base schemes \((A^{\{\mathfrak {f},\mathfrak {f}\}},b^{\{\mathfrak {f}\}})\) and \((A^{\{\mathfrak {s},\mathfrak {s}\}},b^{\{\mathfrak {s}\}})\) of order three or higher. The multirate order conditions simplify considerably if the following internal consistency conditions [11] hold

$$\begin{aligned} {\mathbf {A}}^{\{\mathfrak {f},\mathfrak {f}\}} 1\!\!1= {\mathbf {A}}^{\{\mathfrak {f},\mathfrak {s}\}} 1\!\!1=:{\mathbf {c}^{\{\mathfrak {f}\}}} \quad \text{ and } \quad {\mathbf {A}}^{\{\mathfrak {s},\mathfrak {s}\}} 1\!\!1= {\mathbf {A}}^{\{\mathfrak {s},\mathfrak {f}\}} 1\!\!1=:{\mathbf {c}^{\{\mathfrak {s}\}}}, \end{aligned}$$

or, in equivalent form

$$\begin{aligned} \frac{1}{M} A^{\{\mathfrak {f},\mathfrak {f}\}} 1\!\!1+ \frac{\lambda -1}{M} 1\!\!1= & {} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} 1\!\!1= {\mathbf {c}^{\{\mathfrak {f},\lambda \}}}, \quad \lambda = 1,\ldots ,M,\end{aligned}$$
(16a)
$$\begin{aligned} \frac{1}{M} \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} 1\!\!1= & {} A^{\{\mathfrak {s},\mathfrak {s}\}} 1\!\!1= {\mathbf {c}^{\{\mathfrak {s}\}}}. \end{aligned}$$
(16b)

If (16a, 16b) hold then all order two conditions and most of the order three conditions are automatically fulfilled. The only remaining order three conditions are

$$\begin{aligned} ({\mathbf {b}}^{\{\mathfrak {s}\}})^T {\mathbf {A}}^{\{\mathfrak {s},\mathfrak {f}\}} {\mathbf {c}^{\{\mathfrak {f}\}}}= & {} \frac{1}{6} \quad \text{ and } \quad ({\mathbf {b}}^{\{\mathfrak {f}\}})^T {\mathbf {A}}^{\{\mathfrak {f},\mathfrak {s}\}} {\mathbf {c}^{\{\mathfrak {s}\}}} = \frac{1}{6}, \end{aligned}$$

or, in equivalent form,

$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} ( A^{\{\mathfrak {f},\mathfrak {f}\}} +(\lambda -1) I ) 1\!\!1= & {} \frac{M^2}{6}, \end{aligned}$$
(17a)
$$\begin{aligned} b^{\{\mathfrak {f}\}}\,^T \left( \sum _{\lambda =1}^M A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} \right) A^{\{\mathfrak {s},\mathfrak {s}\}} 1\!\!1= & {} \frac{M}{6}. \end{aligned}$$
(17b)

When only the first fast micro-step is coupled to the slow part (10), the second simplifying condition (16b) becomes

$$\begin{aligned} \frac{1}{M} A^{\{\mathfrak {s},\mathfrak {f},1\}} 1\!\!1= {\mathbf {c}^{\{\mathfrak {s}\}}}; \quad A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} = 0 ,\quad \lambda =2,\ldots ,M. \end{aligned}$$
(18a)

The first simplifying condition (16a) can be fulfilled by setting

$$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} = A^{\{\mathfrak {f},\mathfrak {s},1\}} + F(\lambda ),\quad \lambda =1,\ldots ,M \end{aligned}$$
(18b)

with

$$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},1\}} 1\!\!1= \frac{1}{M} A^{\{\mathfrak {f},\mathfrak {f}\}} 1\!\!1\quad \text{ and } \quad F(\lambda ) 1\!\!1= \frac{\lambda -1}{M} 1\!\!1, \end{aligned}$$
(18c)

which transforms the last order three coupling condition (17b) into

$$\begin{aligned} b^{\{\mathfrak {f}\}}\,^T \left( M A^{\{\mathfrak {f},\mathfrak {s},1\}} + \sum _{\lambda =1}^M F(\lambda ) \right) A^{\{\mathfrak {s},\mathfrak {s}\}} 1\!\!1= & {} \frac{M}{6}. \end{aligned}$$
(19)
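A concrete instance of the construction (18a, 18b, 18c), with hypothetical choices that place all row mass in the first column, together with a check of the resulting internal consistency (16a):

```python
import numpy as np

# fast base scheme (illustrative): explicit midpoint; M micro-steps
A_ff = np.array([[0.0, 0.0], [0.5, 0.0]])
M, s_f, s_s = 4, 2, 2
one_f, one_s = np.ones(s_f), np.ones(s_s)

# (18c): row sums of A^{f,s,1} equal (1/M) A^{f,f} 1, row sums of
# F(lambda) equal (lambda-1)/M; all mass placed in the first column
# (a hypothetical concrete choice)
A_fs1 = np.zeros((s_f, s_s)); A_fs1[:, 0] = (A_ff @ one_f) / M

def F(lam):
    Fl = np.zeros((s_f, s_s)); Fl[:, 0] = (lam - 1) / M
    return Fl

# (18b): the couplings, and the abscissae c^{f,lambda} of (16a)
A_fs = [A_fs1 + F(lam) for lam in range(1, M + 1)]
c_f = [(A_ff @ one_f) / M + (lam - 1) / M * one_f for lam in range(1, M + 1)]
```

Any other distribution of the row mass works equally well; only the row sums are constrained by (18c).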

2.4.2 Additive partitioning

We now consider the case of additive partitioning. If the base methods \((A^{\{\mathfrak {s},\mathfrak {s}\}},b^{\{\mathfrak {s}\}})\) and \((A^{\{\mathfrak {f},\mathfrak {f}\}},b^{\{\mathfrak {f}\}})\) are algebraically stable then (13) ensures the nonlinear stability of the overall method. However, the stability conditions (13) cannot be fulfilled when the simplifying conditions (16a, 16b) hold.

Theorem 3

(Internally consistent multirate GARK schemes are not stability decoupled) When only the first fast micro-step is coupled to the slow part (10), the stability conditions (13) are not compatible with the first simplifying condition (16a) for multirate GARK schemes with \(M>1\).

Proof

Assume that the multirate GARK scheme fulfills the simplifying conditions (16a, 16b) and the stability decoupling condition (13) for all \(1 \le \lambda \le M\):

$$\begin{aligned} \frac{1}{M} A^{\{\mathfrak {f},\mathfrak {f}\}} 1\!\!1+ \frac{\lambda -1}{M} 1\!\!1= & {} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} 1\!\!1, \end{aligned}$$
(20a)
$$\begin{aligned} 1\!\!1{b}^{\{\mathfrak {s}\}}\,^T - B^{\{\mathfrak {f}\}}\,^{-1} A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}\,^T B^{\{\mathfrak {s}\}}= & {} {A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}}. \end{aligned}$$
(20b)

Eliminating \({A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}}\) leads to

$$\begin{aligned} \frac{1}{M} A^{\{\mathfrak {f},\mathfrak {f}\}} 1\!\!1+ \frac{\lambda -1}{M} 1\!\!1= & {} ( I - B^{\{\mathfrak {f}\}}\,^{-1} A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}\,^T B^{\{\mathfrak {s}\}} ) 1\!\!1. \end{aligned}$$
(21a)

When only the first fast micro-step is coupled to the slow part (10), condition (21a) gives

$$\begin{aligned} \frac{1}{M} A^{\{\mathfrak {f},\mathfrak {f}\}} 1\!\!1+ \frac{\lambda -1}{M} 1\!\!1= 1\!\!1\quad \quad \forall \lambda \ge 2, \end{aligned}$$

yielding a contradiction for \(M > 1\). \(\square \)

Consider the case where the base schemes are of order three or higher and the stability decoupling conditions (13) hold. The coefficients of a stability decoupled multirate GARK scheme have to fulfill the following nine independent order conditions:

$$\begin{aligned} b^{\{\mathfrak {f}\}}\,^T \left( \sum _{\lambda =1}^M \text{ diag }(D^\lambda 1\!\!1) A^{\{\mathfrak {f},\mathfrak {f}\}} + \sum _{\lambda =1}^M (\lambda -1) D^\lambda \right) 1\!\!1= & {} \frac{M^2}{6}, \end{aligned}$$
(22a)
$$\begin{aligned} b^{\{\mathfrak {f}\}}\,^T \left( \sum _{\lambda =1}^M A^{\{\mathfrak {f},\mathfrak {f}\}} D^\lambda + \sum _{\mu =1}^{M-1} \sum _{\lambda =1}^\mu D^\lambda \right) 1\!\!1= & {} \frac{M^2}{3}, \end{aligned}$$
(22b)
$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T \left( \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} \right) 1\!\!1= & {} \frac{M}{2}, \end{aligned}$$
(22c)
$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T \text{ diag }(A^{\{\mathfrak {s},\mathfrak {s}\}} 1\!\!1) \left( \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} \right) 1\!\!1= & {} \frac{M}{3}, \end{aligned}$$
(22d)
$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T \text{ diag }\left( \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} 1\!\!1\right) A^{\{\mathfrak {s},\mathfrak {s}\}} 1\!\!1= & {} \frac{M}{3}, \end{aligned}$$
(22e)
$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T \text{ diag }(\sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} 1\!\!1) \left( \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} \right) 1\!\!1= & {} \frac{M^2}{3}, \end{aligned}$$
(22f)
$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T A^{\{\mathfrak {s},\mathfrak {s}\}} \left( \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} \right) 1\!\!1= & {} \frac{M}{6}, \end{aligned}$$
(22g)
$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T \left( \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} D^\lambda \right) 1\!\!1= & {} \frac{M}{3}, \end{aligned}$$
(22h)
$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T \left( \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} ( A^{\{\mathfrak {f},\mathfrak {f}\}} +(\lambda -1) I ) \right) 1\!\!1= & {} \frac{M^2}{6}, \end{aligned}$$
(22i)

where

$$\begin{aligned} D^\lambda := B^{\{\mathfrak {f}\}}\,^{-1}\; A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}\,^T\; B^{\{\mathfrak {s}\}}. \end{aligned}$$
(23)

Example 3

Consider the case (8) where the same order three scheme \((A,b)\) is used for both the fast and the slow components. To easily construct schemes of order higher than one we set \(A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}=0\) for \(\lambda =2,\ldots ,M\). For \(s=2\) stages, the only remaining second-order condition (22c) leads to

$$\begin{aligned} A^{\{\mathfrak {s},\mathfrak {f},1\}} = \begin{bmatrix} 0&\quad 0 \\ \frac{M}{2 b_2}&\quad 0 \end{bmatrix}, \quad D^1 = \begin{bmatrix} 0&\quad \frac{M}{2 b_1} \\ 0&\quad 0 \end{bmatrix} = \frac{b_2}{b_1}A^{\{\mathfrak {s},\mathfrak {f}\}}\,^T, \end{aligned}$$

assuming \(b_1 \ne 0\), and noting that \(b_2 \ne 0\) for a method of order two. This choice preserves the explicit structure of the scheme if the base scheme is explicit. Note that the basic method does not depend on M; only the coefficients of the coupling matrices \(A^{\{\mathfrak {s},\mathfrak {f}\}}\) and D depend on M.
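A quick numerical check of this example (with the illustrative weights \(b = (1/2, 1/2)\) and \(M=4\)): the matrix \(A^{\{\mathfrak {s},\mathfrak {f},1\}}\) satisfies the second-order condition (22c), and \(D^1\) computed from (23) matches \((b_2/b_1)\, A^{\{\mathfrak {s},\mathfrak {f},1\}}\,^T\).

```python
import numpy as np

M = 4
b = np.array([0.5, 0.5])                  # illustrative order-two weights
B = np.diag(b)
A_sf1 = np.array([[0.0, 0.0], [M / (2 * b[1]), 0.0]])
one = np.ones(2)

lhs_22c = b @ A_sf1 @ one                 # condition (22c): should be M/2
D1 = np.linalg.inv(B) @ A_sf1.T @ B       # definition (23)
```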

2.4.3 Component partitioning

For component partitioning the stability of the fast and slow base schemes ensures the overall stability, and no coupling conditions are needed. Consequently, there are additional degrees of freedom for choosing \(\mathbf {A^{\{\mathfrak {f},\mathfrak {s}\}}}\). The general order conditions for this case have been given in Tables 1 and 2.

Example 4

(First fast step coupling) We construct a multirate scheme from two arbitrary order three base schemes \((b^{\{\mathfrak {f}\}},A^{\{\mathfrak {f},\mathfrak {f}\}})\) and \((b^{\{\mathfrak {s}\}},A^{\{\mathfrak {s},\mathfrak {s}\}})\) by coupling only the first fast micro-step to the slow part and keeping the flexibility in the coupling matrices \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\). The order two coupling conditions are

$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T A^{\{\mathfrak {s},\mathfrak {f},1\}} 1\!\!1= & {} \frac{M}{2}, \end{aligned}$$
(24a)
$$\begin{aligned} b^{\{\mathfrak {f}\}}\,^T \left( \sum _{\lambda =1}^M A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} 1\!\!1\right)= & {} \frac{M}{2}. \end{aligned}$$
(24b)

The order three conditions read

$$\begin{aligned}&\displaystyle b^{\{\mathfrak {s}\}}\,^T \text{ diag }(A^{\{\mathfrak {s},\mathfrak {s}\}} 1\!\!1) A^{\{\mathfrak {s},\mathfrak {f},1\}} 1\!\!1= \frac{M}{3}, \end{aligned}$$
(24c)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {s}\}}\,^T \text{ diag }(A^{\{\mathfrak {s},\mathfrak {f},1\}} 1\!\!1) A^{\{\mathfrak {s},\mathfrak {s}\}} 1\!\!1= \frac{M}{3}, \end{aligned}$$
(24d)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {s}\}}\,^T \text{ diag }(A^{\{\mathfrak {s},\mathfrak {f},1\}} 1\!\!1) A^{\{\mathfrak {s},\mathfrak {f},1\}} 1\!\!1= \frac{M^2}{3}, \end{aligned}$$
(24e)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {s}\}}\,^T A^{\{\mathfrak {s},\mathfrak {s}\}} A^{\{\mathfrak {s},\mathfrak {f},1\}} 1\!\!1= \frac{M}{6}, \end{aligned}$$
(24f)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {s}\}}\,^T A^{\{\mathfrak {s},\mathfrak {f},1\}} A^{\{\mathfrak {f},\mathfrak {s},1\}} 1\!\!1= \frac{M}{6}, \end{aligned}$$
(24g)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {s}\}}\,^T A^{\{\mathfrak {s},\mathfrak {f},1\}} A^{\{\mathfrak {f},\mathfrak {f}\}} 1\!\!1= \frac{M^2}{6}, \end{aligned}$$
(24h)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {f}\}}\,^T \left( \text{ diag }(A^{\{\mathfrak {f},\mathfrak {f}\}} 1\!\!1) \sum _{\lambda =1}^M A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} + \sum _{\lambda =1}^M (\lambda -1) A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} \right) 1\!\!1= \frac{M^2}{3}, \end{aligned}$$
(24i)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {f}\}}\,^T \left( \sum _{\lambda =1}^M \text{ diag }(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} 1\!\!1)\{ A^{\{\mathfrak {f},\mathfrak {f}\}} + (\lambda -1) A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} \} \right) 1\!\!1= \frac{M^2}{3}, \end{aligned}$$
(24j)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {f}\}}\,^T \left( \sum _{\lambda =1}^M \text{ diag }(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} 1\!\!1) A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} \right) 1\!\!1= \frac{M}{3}, \end{aligned}$$
(24k)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {f}\}}\,^T \left( A^{\{\mathfrak {f},\mathfrak {f}\}} \sum _{\lambda =1}^M A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} + \sum _{\mu =1}^{M-1} \sum _{\lambda =1}^\mu A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} \right) 1\!\!1= \frac{M^2}{6}, \end{aligned}$$
(24l)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {f}\}}\,^T \left( \sum _{\lambda =1}^M A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\right) A^{\{\mathfrak {s},\mathfrak {f},1\}} 1\!\!1= \frac{M^2}{6}, \end{aligned}$$
(24m)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {f}\}}\,^T \left( \sum _{\lambda =1}^M A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} \right) A^{\{\mathfrak {s},\mathfrak {s}\}} 1\!\!1= \frac{M}{6}. \end{aligned}$$
(24n)

The only degrees of freedom to fulfill these conditions are the parameters of the coupling coefficient matrices \(A^{\{\mathfrak {s},\mathfrak {f},1\}}\) and \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\) (\(\lambda =1,\ldots ,M\)).
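In practice such conditions are convenient to verify numerically. The sketch below (our code; the data are illustrative, built from an explicit midpoint base scheme, an Example-3-style \(A^{\{\mathfrak {s},\mathfrak {f},1\}}\), and couplings of the form (18)) evaluates the residuals of the order-two conditions (24a) and (24b):

```python
import numpy as np

def order2_residuals(b_s, A_sf1, b_f, A_fs_list, M):
    """Residuals of the order-two coupling conditions (24a), (24b);
    both vanish when the conditions hold."""
    one_f = np.ones(A_sf1.shape[1])
    one_s = np.ones(A_fs_list[0].shape[1])
    return (b_s @ A_sf1 @ one_f - M / 2,
            b_f @ sum(A @ one_s for A in A_fs_list) - M / 2)

# illustrative data: explicit midpoint as both base schemes, M = 3
A = np.array([[0.0, 0.0], [0.5, 0.0]]); b = np.array([0.0, 1.0])
M = 3
A_sf1 = np.array([[0.0, 0.0], [M / (2 * b[1]), 0.0]])
A_fs = []
for lam in range(1, M + 1):
    Al = np.zeros((2, 2))
    Al[:, 0] = (A @ np.ones(2)) / M + (lam - 1) / M   # row sums as in (18)
    A_fs.append(Al)
ra, rb = order2_residuals(b, A_sf1, b, A_fs, M)
```

Extending the checker to the order-three conditions (24c)–(24n) is mechanical; each condition becomes one matrix-vector residual.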

3 Traditional multirate Runge–Kutta methods formulated in the GARK framework

In this section we discuss several important multirate Runge–Kutta methods proposed in the literature, and show how they can be represented and analyzed in the GARK framework.

3.1 Kvaerno–Rentrop methods

The mRK class of multirate Runge–Kutta methods proposed by Kvaerno and Rentrop [10] can be formulated in the GARK framework. Kvaerno and Rentrop obtain order conditions that are nearly independent of M by making the following choices of coefficients.

  (a)

    The mRK schemes are based on coupling only the first fast microstep to the slow part, which, in this paper’s notation, reads

    $$\begin{aligned} A^{\{\mathfrak {s},\mathfrak {f}\}}:=A^{\{\mathfrak {s},\mathfrak {f},1\}}, \quad A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}=0, \quad \lambda =2,\ldots ,M. \end{aligned}$$
  (b)

    The slow to fast coupling is

    $$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda +1\}}:= & {} A^{\{\mathfrak {f},\mathfrak {s}\}} + F(\lambda ) \\ F(\lambda )= & {} 1\!\!1^{\{\mathfrak {f},\mathfrak {s}\}}\, \begin{bmatrix} \eta _1(\lambda ) \dots \eta _{s^{\{\mathfrak {f},\mathfrak {s}\}}}(\lambda ) \end{bmatrix}, \quad \lambda =0,\ldots ,M-1, \end{aligned}$$

    i.e., \(F_{i,j}(\lambda )=\eta _j(\lambda )\). The scalar functions \(\eta _j(\lambda )\) fulfill the condition

    $$\begin{aligned} \sum _{j=1}^{s^{\{\mathfrak {f},\mathfrak {s}\}}} \eta _j(\lambda ) = \lambda \quad \Leftrightarrow \quad F(\lambda )\, 1\!\!1= \lambda \, 1\!\!1. \end{aligned}$$
    (25)
  (c)

    In [10] the matrix \(A^{\{\mathfrak {s},\mathfrak {f}\}}\) is scaled by M, and the matrix \(A^{\{\mathfrak {f},\mathfrak {s}\}}\) is scaled by 1/M, i.e., the function evaluations of the active part are always carried out with the micro-step size, and those of the slow part with the macro-step size.

Note that this choice corresponds to the simplifying conditions (16a, 16b) with the special choice (18b). With the notation

$$\begin{aligned} c^{\mathfrak {f}}:= A^{\{\mathfrak {f},\mathfrak {f}\}} 1\!\!1^{\mathfrak {f}}\quad and \quad c^{\mathfrak {s}}:= A^{\{\mathfrak {s},\mathfrak {s}\}} 1\!\!1^{\mathfrak {s}} \end{aligned}$$

the corresponding GARK order conditions are given in Table 3.

Table 3 Order conditions for the mRK [10] multirate GARK scheme with \(F(\lambda ) 1\!\!1=\lambda 1\!\!1\)

Choose two order three schemes \((b^{\{\mathfrak {f}\}},A^{\{\mathfrak {f},\mathfrak {f}\}})\) and \((b^{\{\mathfrak {s}\}},A^{\{\mathfrak {s},\mathfrak {s}\}})\). To obtain a multirate method of order three the free parameters \(A^{\{\mathfrak {f},\mathfrak {s}\}}\), \(A^{\{\mathfrak {s},\mathfrak {f}\}}\), and \(F(\lambda )\) have to fulfill the two remaining order three coupling conditions, together with the three simplifying conditions,

$$\begin{aligned}&\displaystyle b^{\{\mathfrak {s}\}}\,^T A^{\{\mathfrak {s},\mathfrak {f}\}} c^{\mathfrak {f}} = \frac{M}{6}, \end{aligned}$$
(26a)
$$\begin{aligned}&\displaystyle b^{\{\mathfrak {f}\}}\,^T \left( A^{\{\mathfrak {f},\mathfrak {s}\}} + \frac{1}{M} \sum _{\lambda =0}^{M-1} F(\lambda ) \right) c^{\mathfrak {s}} = \frac{M}{6}, \end{aligned}$$
(26b)
$$\begin{aligned}&\displaystyle A^{\{\mathfrak {f},\mathfrak {s}\}} 1\!\!1= c^{\mathfrak {f}}, \end{aligned}$$
(26c)
$$\begin{aligned}&\displaystyle A^{\{\mathfrak {s},\mathfrak {f}\}} 1\!\!1= c^{\mathfrak {s}}, \end{aligned}$$
(26d)
$$\begin{aligned}&\displaystyle F(\lambda ) 1\!\!1= \lambda \, 1\!\!1, \quad \lambda =0,\ldots ,M-1. \end{aligned}$$
(26e)

Note that the additional condition

$$\begin{aligned} b^{\{\mathfrak {f}\}}\,^T F(\lambda ) \, c^{\mathfrak {s}} = \frac{\lambda (\lambda +1)}{2M} \end{aligned}$$

imposed by Kvaerno and Rentrop [10], which transforms the second condition (26b) into

$$\begin{aligned} b^{\{\mathfrak {f}\}}\,^T A^{\{\mathfrak {f},\mathfrak {s}\}} c^{\mathfrak {s}} = \frac{1}{6M}, \end{aligned}$$

ensures that the active solution has order three at all microsteps.

Example 5

(A multirate GARK scheme of order three with only two stages) We are now interested in constructing schemes of order three with only two stages. Note that, due to the componentwise partitioning, the overall scheme is stable whenever the base schemes are stable. We use the simplifying conditions (26c) and (26d) together with

$$\begin{aligned} b:=b^{\{\mathfrak {f}\}}=b^{\{\mathfrak {s}\}},\quad A:=A^{\{\mathfrak {f}\}}=A^{\{\mathfrak {s}\}}, \quad c:=c^{\{\mathfrak {f}\}}=c^{\{\mathfrak {s}\}}, \quad \widetilde{A}:=A^{\{\mathfrak {f},\mathfrak {s}\}} = A^{\{\mathfrak {s},\mathfrak {f}\}}. \end{aligned}$$

If we use an order three base scheme \((b,A)\), the remaining order conditions (26a, 26b, 26c, 26d) for the free parameters \(\widetilde{A}\) and \(F(\lambda )\) read

$$\begin{aligned}&\displaystyle b^T \widetilde{A} \, c = \frac{M}{6}, \end{aligned}$$
(27a)
$$\begin{aligned}&\displaystyle b^T \left( \widetilde{A} + \frac{1}{M} \sum _{\lambda =0}^{M-1} F(\lambda ) \right) c = \frac{M}{6}, \end{aligned}$$
(27b)
$$\begin{aligned}&\displaystyle \widetilde{A}\, 1\!\!1= c, \end{aligned}$$
(27c)
$$\begin{aligned}&\displaystyle F(\lambda )\, 1\!\!1= \lambda \, 1\!\!1. \end{aligned}$$
(27d)

The first two conditions coincide if we set

$$\begin{aligned} b^T \sum _{\lambda =0}^{M-1} F(\lambda ) \, c = 0. \end{aligned}$$

When \(c_{2} \ne c_{1}\) the choice

$$\begin{aligned} \eta _1(\lambda ) = \frac{c_{2}}{c_{2} - c_{1}} \lambda , \quad \eta _2(\lambda ) = - \frac{c_{1}}{c_{2} - c_{1}} \lambda , \end{aligned}$$

fulfills this additional condition and condition (27d) at the same time. The remaining conditions (27a) and (27c) can be fulfilled, for example, by setting

$$\begin{aligned} \widetilde{A} = \begin{bmatrix} c_1&\quad 0 \\ c_2-p&\quad p \end{bmatrix} \quad \text{ with } \quad p = \frac{\frac{M}{6}- b_1 c_1^2 - b_2 c_1 c_2}{b_2(c_2-c_1)}. \end{aligned}$$

For the RADAU-IA scheme (\(p=3,s=2\)) we obtain

$$\begin{aligned} \begin{array}{c|cr} c &{} A \\ \hline &{} b^T \end{array} := \begin{array}{c|cr} 0 &{} \frac{1}{4} &{} - \frac{1}{4} \\ \frac{2}{3} &{} \frac{1}{4} &{} \frac{5}{12} \\ \hline &{} \frac{1}{4} &{} \frac{3}{4} \end{array}; \quad F(\lambda ) = \begin{bmatrix} \lambda&\quad 0 \\ \lambda&\quad 0 \end{bmatrix}, \quad \widetilde{A}= \begin{bmatrix} 0&\quad 0 \\ \frac{2}{3}-\frac{M}{3}&\quad \frac{M}{3} \end{bmatrix}, \end{aligned}$$

and for RADAU-IIA (\(p=3,s=2\))

$$\begin{aligned} \begin{array}{c|cr} c &{}\quad A \\ \hline &{}\quad b^T \end{array} := \begin{array}{c|cr} \frac{1}{3} &{} \frac{5}{12} &{} - \frac{1}{12} \\ 1 &{} \frac{3}{4} &{} \frac{1}{4} \\ \hline &{} \frac{3}{4} &{} \frac{1}{4} \end{array}; \quad F(\lambda ) = \begin{bmatrix} \frac{3}{2} \lambda&\quad - \frac{1}{2} \lambda \\ \frac{3}{2} \lambda&\quad - \frac{1}{2} \lambda \end{bmatrix}, \quad \widetilde{A}= \begin{bmatrix} \frac{1}{3}&\quad 0 \\ 1-(M-1)&\quad M-1 \end{bmatrix}. \end{aligned}$$
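The closed-form construction above can be sanity-checked in exact rational arithmetic. The following sketch (the variable names are ours, not from [10]) builds \(p\), \(\widetilde{A}\), and \(F(\lambda )\) for the RADAU-IA based scheme and evaluates the conditions (27a)–(27d) together with the extra condition \(b^T \sum _\lambda F(\lambda ) c = 0\):

```python
from fractions import Fraction as Fr

# Two-stage RADAU-IA base scheme: b = (1/4, 3/4), c = (0, 2/3).
b = [Fr(1, 4), Fr(3, 4)]
c = [Fr(0), Fr(2, 3)]
M = 4  # number of micro-steps; any M >= 1 works

# p and the coupling matrix \tilde A from the closed-form expressions above
p = (Fr(M, 6) - b[0]*c[0]**2 - b[1]*c[0]*c[1]) / (b[1]*(c[1] - c[0]))
At = [[c[0], Fr(0)], [c[1] - p, p]]

def F(lam):
    # eta_1, eta_2 as given in the text (requires c_2 != c_1)
    e1 = c[1] / (c[1] - c[0]) * lam
    e2 = -c[0] / (c[1] - c[0]) * lam
    return [[e1, e2], [e1, e2]]

# (27c): row sums of \tilde A equal c
ok_27c = [sum(row) for row in At] == c
# (27a): b^T \tilde A c = M/6
ok_27a = sum(b[i] * sum(At[i][j]*c[j] for j in range(2))
             for i in range(2)) == Fr(M, 6)
# (27d): F(lam) 1 = lam 1
ok_27d = all(sum(F(lam)[i]) == lam for lam in range(M) for i in range(2))
# extra condition: b^T sum_lam F(lam) c = 0
ok_extra = sum(b[i] * F(lam)[i][j] * c[j]
               for lam in range(M) for i in range(2) for j in range(2)) == 0
```

For RADAU-IA the computed \(p\) equals \(M/3\), reproducing the tableau shown above.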

3.2 Dense output coupling

The use of dense output interpolation for coupling the slow and fast components was developed by Savcenco, Hundsdorfer, and co-workers in the context of Rosenbrock methods [12–16]. This approach can be immediately extended to Runge–Kutta methods, and the overall scheme can be formulated in the multirate GARK framework.

For a traditional Runge–Kutta method with macro-step H, the dense output provides highly accurate approximations of the solution at intermediate points

$$\begin{aligned} y(t_{n}+\theta H) \approx y_n + H \, \sum _{j=1}^{s} b_{j}(\theta ) f\left( Y_j\right) , \quad 0 \le \theta \le 1, \end{aligned}$$
(28)

or highly accurate approximations of the function values at intermediate points

$$\begin{aligned} f( y(t_{n}+\theta H) ) \approx \, \sum _{j=1}^{s} d_{j}(\theta ) f(Y_j), \quad 0 \le \theta \le 1. \end{aligned}$$
(29)

The slow terms in the micro-steps (5b) can be viewed as approximations of the slow function values at the micro-step abscissae

$$\begin{aligned} H f^{\{\mathfrak {s}\}}( y(t_{n}+(\lambda -1+c_i^{\{\mathfrak {f},\mathfrak {f}\}})h )) \approx H \, \sum _{j=1}^{s^{\{\mathfrak {s}\}}} a_{i,j}^{\{\mathfrak {f},\mathfrak {s},\lambda \}} f^{\{\mathfrak {s}\}}(Y_j^{\{\mathfrak {s}\}}). \end{aligned}$$

Consequently, using dense output of function values (29) leads to the standard multirate GARK approach with the coupling given by the dense output coefficients

$$\begin{aligned} a_{i,j}^{\{\mathfrak {f},\mathfrak {s},\lambda \}} = d_j\left( \frac{\lambda -1+{c_i^{\{\mathfrak {f},\mathfrak {f}\}}}}{M}\right) . \end{aligned}$$
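To make the index bookkeeping concrete, here is a small sketch of how the coupling matrices are assembled from the formula above. The setup is our own illustration, not from [12–16]: a hypothetical two-stage fast method with abscissae \(c^{\{\mathfrak {f},\mathfrak {f}\}}=(0,1/2)\), slow stages at abscissae \((0,1)\), and the first-order dense output of function values \(d(\theta )=(1-\theta ,\theta )\) (linear interpolation):

```python
import numpy as np

# Assumed data for illustration only
cf = np.array([0.0, 0.5])   # fast abscissae c^{f,f}
cs = np.array([0.0, 1.0])   # slow abscissae

def d(theta):
    # linear interpolation weights for the two slow function values
    return np.array([1.0 - theta, theta])

def coupling_matrix(lam, M):
    """A^{f,s,lam} assembled entrywise: a_{ij} = d_j((lam - 1 + c_i)/M)."""
    return np.array([d((lam - 1 + ci) / M) for ci in cf])
```

Since \(\sum _j d_j(\theta )=1\) and the interpolant reproduces linear data, each \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\) has unit row sums and reproduces the micro-step abscissae \((\lambda -1+c_i^{\{\mathfrak {f},\mathfrak {f}\}})/M\), i.e., the internal consistency conditions hold for this coupling.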

Alternatively, one can use the dense solution values (28) in the micro-steps (5b)

$$\begin{aligned} Y_i^{\{\mathfrak {f},\lambda \}}= & {} \widetilde{y}_{n+(\lambda -1)/M} + H \, f^{\{\mathfrak {s}\}}(Y_i^{\{\mathfrak {s},\lambda \}}) \\ \nonumber&+\, h \, \sum _{j=1}^{s^{\{\mathfrak {f}\}}} a_{i,j}^{\{\mathfrak {f},\mathfrak {f}\}} f^{\{\mathfrak {f}\}}(Y_j^{\{\mathfrak {f},\lambda \}}),\quad i=1,\dots , s^{\{\mathfrak {f}\}} \end{aligned}$$
(30)

where

$$\begin{aligned} Y_i^{\{\mathfrak {s},\lambda \}}= & {} y_n + h \, \sum _{\mu =1}^M \sum _{j=1}^{s^{\{\mathfrak {f}\}}} b_{j}^{\{\mathfrak {f}\}}(\lambda ) f^{\{\mathfrak {f}\}}(Y_j^{\{\mathfrak {f},\mu \}}) + H \, \sum _{j=1}^{s^{\{\mathfrak {s}\}}} b_{j}^{\{\mathfrak {s}\}}(\lambda ) f^{\{\mathfrak {s}\}}(Y_j^{\{\mathfrak {s}\}}). \end{aligned}$$

The dense output of the fast variable can be applied only for the current micro-step

$$\begin{aligned} Y_i^{\{\mathfrak {s},\lambda \}}= & {} y_n + H \sum _{j=1}^{s^{\{\mathfrak {f}\}}} b_{j}^{\{\mathfrak {f}\}}(\lambda ) f^{\{\mathfrak {f}\}}(Y_j^{\{\mathfrak {f},\lambda \}}) + H \, \sum _{j=1}^{s^{\{\mathfrak {s}\}}} b_{j}^{\{\mathfrak {s}\}}(\lambda ) f^{\{\mathfrak {s}\}}(Y_j^{\{\mathfrak {s}\}}), \end{aligned}$$

or for the previous micro-step, i.e., in extrapolation mode

$$\begin{aligned} Y_i^{\{\mathfrak {s},\lambda \}}= & {} y_n + H \sum _{j=1}^{s^{\{\mathfrak {f}\}}} b_{j}^{\{\mathfrak {f}\}}(\lambda ) f^{\{\mathfrak {f}\}}(Y_j^{\{\mathfrak {f},\lambda -1\}}) + H \, \sum _{j=1}^{s^{\{\mathfrak {s}\}}} b_{j}^{\{\mathfrak {s}\}}(\lambda ) f^{\{\mathfrak {s}\}}(Y_j^{\{\mathfrak {s}\}}), \end{aligned}$$

where the dense output coefficients \(b^{\{\mathfrak {s}\}}(\lambda )\), \(b^{\{\mathfrak {f}\}}(\lambda )\) are appropriately redefined.

The solution interpolation approach (30) can be cast in the GARK framework by adding the additional slow stage values \(Y_i^{\{\mathfrak {s},\lambda \}}\), with no contribution to the output (\(b_i^{\{\mathfrak {s},\lambda \}}=0\)). This is less convenient for analysis, however, as the number of slow stages becomes equal to the number of fast stages.

3.3 Multirate infinitesimal step methods

Multirate infinitesimal step (MIS) methods [21] discretize the slow component with an explicit Runge–Kutta method. The fast component is advanced between consecutive stages of this method as the exact solution of a fast ODE system. The fast ODE has a right-hand side composed of the original fast component of the function, plus a piecewise constant “tendency” term representing the discretized slow component. The order conditions of the overall method assume that the fast ODE is solved exactly, which justifies the name “infinitesimal step”. We show here that a multirate infinitesimal step method can be cast in the GARK framework when the inner fast ODEs are solved by a Runge–Kutta method with small steps.

We focus on the particular method of Knoth and Wolke [8], which was the first MIS approach, and which has the best practical potential. This approach has been named recursive flux splitting multirate (RFSMR) in [18–20], and has been cast as a traditional partitioned Runge–Kutta method in [20]. Applications to the solution of atmospheric flows are discussed in [17–19]. The approach below can be applied to any MIS scheme where the internal ODEs are solved by Runge–Kutta methods.

Consider an outer (slow) explicit Runge–Kutta scheme with the abscissae \(c^\textsc {o}_1=0\), \(c^\textsc {o}_i < c^\textsc {o}_j\) for \(i<j\), and \(c^\textsc {o}_{s^\textsc {o}}<1\). The inner (fast) scheme can be explicit or implicit. If the same explicit scheme is used in both the inner and the outer loops then the method can be applied in a telescopic fashion, where an even faster method is obtained by sub-stepping recursively.

The scheme proceeds, in principle, by solving an ODE between each pair of consecutive stages of the slow explicit method:

$$\begin{aligned}&Y^{\{\mathfrak {s}\}}_1 = y_n \\&\text{ for } i=2,\dots ,s^\textsc {o}\!: \\&\quad \text{ solve } v_i' = \sum \limits _{j=1}^{i-1} ( a^\textsc {o}_{i,j} - a^\textsc {o}_{i-1,j} )\, f^{\{\mathfrak {s}\}}( Y^{\{\mathfrak {s}\}}_j ) + ( c^\textsc {o}_{i} - c^\textsc {o}_{i-1} ) \, f^{\{\mathfrak {f}\}} ( v_i ), \\&\qquad \text{ for } \theta \in [0, H], \quad \text{ starting with } v_i(0) = Y^{\{\mathfrak {s}\}}_{i-1}, \\&\quad \text{ set } Y^{\{\mathfrak {s}\}}_{i} := v_i(H) \\&\text{ end for } \\&\text{ solve } v' = \sum \limits _{j=1}^{s^\textsc {o}} (b^\textsc {o}_{j} - a^\textsc {o}_{s^\textsc {o},j})\, f^{\{\mathfrak {s}\}} ( Y^{\{\mathfrak {s}\}}_j ) + (1 - c^\textsc {o}_{s^\textsc {o}}) \, f^{\{\mathfrak {f}\}} ( v ), \\&\qquad \text{ for } \theta \in [0, H], \quad \text{ starting with } v(0) = Y^{\{\mathfrak {s}\}}_{s^\textsc {o}}, \\&\quad \text{ set } y_{n+1} := v(H). \end{aligned}$$

The numerical scheme solves the inner ODEs using several steps of an inner Runge–Kutta method [18]. For the present analysis we consider the case where only one step of the inner Runge–Kutta method \((A^\textsc {i},b^\textsc {i})\) is taken to solve the ODE for \(v_i\) on each subinterval \(i=2,\dots ,s^\textsc {o}\). This does not restrict generality, as any sequence of M sub-steps can be written as a single step of a larger Runge–Kutta method.
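As a sanity check of the loop above, the following sketch (our own construction, not from [8] or [18]) implements one MIS macro-step with the explicit midpoint rule as the outer scheme and forward-Euler micro-steps playing the role of the inner method. When the fast part vanishes, each inner ODE has a constant right-hand side, is integrated exactly even by Euler steps, and the macro-step collapses to one step of the outer Runge–Kutta method:

```python
import numpy as np

# Outer (slow) scheme: explicit midpoint rule (assumed example)
Ao = np.array([[0.0, 0.0], [0.5, 0.0]])
bo = np.array([0.0, 1.0])
co = Ao.sum(axis=1)
so = 2

def mis_step(fs, ff, yn, H, inner_steps=20):
    """One MIS macro-step: between consecutive slow stages, advance an ODE
    with a constant slow tendency plus the fast right-hand side, here with
    forward-Euler micro-steps standing in for the inner RK method."""
    h = H / inner_steps
    Y = [yn]
    for i in range(1, so):
        tend = sum((Ao[i, j] - Ao[i - 1, j]) * fs(Y[j]) for j in range(i))
        fac = co[i] - co[i - 1]
        v = Y[i - 1].copy()
        for _ in range(inner_steps):
            v = v + h * (tend + fac * ff(v))
        Y.append(v)
    # final micro-integration producing y_{n+1}
    tend = sum((bo[j] - Ao[so - 1, j]) * fs(Y[j]) for j in range(so))
    fac = 1.0 - co[so - 1]
    v = Y[so - 1].copy()
    for _ in range(inner_steps):
        v = v + h * (tend + fac * ff(v))
    return v
```

For instance, with \(f^{\{\mathfrak {f}\}}=0\) and \(f^{\{\mathfrak {s}\}}(y)=y^2\), one macro-step reproduces the explicit midpoint update \(y_{n+1} = y_n + H f^{\{\mathfrak {s}\}}\bigl (y_n + \tfrac{H}{2} f^{\{\mathfrak {s}\}}(y_n)\bigr )\).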

We interpret this scheme as a GARK method; note that we formally solve the ODEs for the active part with one step of the inner fast method \((A^\textsc {i},b^\textsc {i})\), where \(c^\textsc {i} = A^\textsc {i}1\!\!1\).

Theorem 4

(The MIS scheme is a particular instance of a GARK method)

  (i)

    The MIS scheme defined above can be written as a GARK method with the corresponding Butcher tableau (7) given by \(\mathbf {A}^{\{\mathfrak {s},\mathfrak {s}\}}=A^\textsc {o}\), \(\mathbf {b}^{\{\mathfrak {s}\}}=b^\textsc {o},\)

    where

    $$\begin{aligned} \mathbf {e}_i = \begin{bmatrix} 0&\dots&1&\dots&0 \end{bmatrix} ^T \in {\mathbb R}^{s^\textsc {o}}, \quad \mathbf {g}_i = \begin{bmatrix} 0&\dots&1&\dots&1 \end{bmatrix} ^T \in {\mathbb R}^{s^\textsc {o}} \end{aligned}$$
  (ii)

    The coefficients fulfill the simplifying “internal consistency” conditions (16a, 16b) given in matrix form by

    $$\begin{aligned} \mathbf {c}^{\{\mathfrak {s},\mathfrak {s}\}} = \mathbf {c}^{\{\mathfrak {s},\mathfrak {f}\}} = \mathbf {c}^{\{\mathfrak {s}\}} = c^\textsc {o}, \end{aligned}$$

    and

    $$\begin{aligned} \mathbf {c}^{\{\mathfrak {f},\mathfrak {s}\}} = \mathbf {c}^{\{\mathfrak {f},\mathfrak {f}\}} = \mathbf {c}^{\{\mathfrak {f}\}} = \begin{bmatrix} ( c^\textsc {o}_{2} )\,c^\textsc {i} \\ \vdots \\ c_{s^\textsc {o}}^\textsc {o} \, 1\!\!1+ ( b^\textsc {o}\,^T c^\textsc {o} -c^\textsc {o}_{s^\textsc {o}} )\,c^\textsc {i} \end{bmatrix} \in {\mathbb R}^{s^\textsc {o}s^\textsc {i}}. \end{aligned}$$
  (iii)

    Assuming that both the fast and the slow methods have order at least two, the simplifying assumptions imply that the overall scheme is second order.

  (iv)

    Assuming that both the fast and the slow methods have order at least three, the third order coupling conditions reduce to the single condition

    $$\begin{aligned} \frac{1}{3}= & {} \sum _{i=2}^{s^\textsc {o}} (c_i^\textsc {o}-c_{i-1}^\textsc {o})\, ( \mathbf {e}_{i}+\mathbf {e}_{i-1} )^T \, A^\textsc {o} c^\textsc {o} + (1-c_{s^\textsc {o}}^\textsc {o}) \, \left( \frac{1}{2} + \mathbf {e}_{s^\textsc {o}}^T\, A^\textsc {o} c^\textsc {o} \right) . \end{aligned}$$
    (31)

Proof

The proof is an application of the multirate GARK order conditions. Details are given in [2]. \(\square \)

Remark 2

The condition (31) corresponds to the additional order three condition derived in the original MIS paper [8].

4 Multirate (traditional) additive Runge–Kutta methods

A special case of the multirate GARK schemes (6a, 6b, 6c) is the class of multirate additive Runge–Kutta schemes. They are obtained in the GARK framework by setting

$$\begin{aligned}&A^{\{\mathfrak {s}\}}:=A^{\{\mathfrak {s},\mathfrak {s}\}}=A^{\{\mathfrak {f},\mathfrak {s},1\}}, \\&A^{\{\mathfrak {f}\}}:=A^{\{\mathfrak {f},\mathfrak {f}\}}=A^{\{\mathfrak {s},\mathfrak {f},1\}}, \\&A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}:=0 \quad and \quad A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}:=A^{\{\mathfrak {s},\lambda \}} \quad for \,\, \lambda =2,\ldots ,M. \end{aligned}$$

The scheme proceeds as follows

$$\begin{aligned} Y_i^{\lambda }= & {} y_n + h \sum _{l=1}^{\lambda -1} \sum _{j=1}^s b_{j}^{\{\mathfrak {f}\}} f^{\{\mathfrak {f}\}}(Y_j^{l}) + H \, \sum _{j=1}^s a_{i,j}^{\{\mathfrak {s},\lambda \}} f^{\{\mathfrak {s}\}}(Y_j^{1}) \nonumber \\&+\, h \, \sum _{j=1}^s a_{i,j}^{\{\mathfrak {f}\}} f^{\{\mathfrak {f}\}}(Y_j^{\lambda }), \quad \lambda =1,\ldots ,M, \end{aligned}$$
(32a)
$$\begin{aligned} y_{n+1}= & {} y_n + h \, \sum _{\lambda =1}^M \sum _{i=1}^s b_{i}^{\{\mathfrak {f}\}} f^{\{\mathfrak {f}\}}(Y_i^{\lambda }) + H \, \sum _{i=1}^s b_{i}^{\{\mathfrak {s}\}} f^{\{\mathfrak {s}\}}(Y_i^{1}), \end{aligned}$$
(32b)

and is characterized by a corresponding extended Butcher tableau of the form (4).

4.1 Additive partitioning

The coupling condition (13) for nonlinear stability yields

$$\begin{aligned} A^{\{\mathfrak {s}\}}= & {} 1\!\!1b^{\{\mathfrak {s}\}}\,^T - B^{\{\mathfrak {f}\}}\,^{-1} (A^{\{\mathfrak {f}\}})^T B^{\{\mathfrak {s}\}}, \\ A^{\{\mathfrak {s},\lambda \}}= & {} 1\!\!1b^{\{\mathfrak {s}\}}\,^T, \quad \lambda =2,\ldots ,M, \end{aligned}$$

and only \(b^{\{\mathfrak {s}\}},b^{\{\mathfrak {f}\}}\) and \(A^{\{\mathfrak {f}\}}\) remain as free parameters. As a consequence, the algebraic stability of the basic method \((b^{\{\mathfrak {s}\}},A^{\{\mathfrak {s}\}})\) is equivalent to the algebraic stability of \((b^{\{\mathfrak {s}\}},D)\), where \(D=B^{\{\mathfrak {f}\}}\,^{-1} (A^{\{\mathfrak {f}\}})^T B^{\{\mathfrak {s}\}} \).

Example 6

(A nonlinearly stable additive Runge–Kutta scheme of order two) Besides the algebraic stability of \((b^{\{\mathfrak {s}\}},A^{\{\mathfrak {s}\}})\) and \((b^{\{\mathfrak {s}\}},D)\), the following conditions have to be fulfilled for a method of order two:

$$\begin{aligned} b^{\{\mathfrak {f}\}}\,^T 1\!\!1= & {} 1, \quad b^{\{\mathfrak {f}\}}\,^T A^{\{\mathfrak {f}\}} 1\!\!1= \frac{1}{2}, \\ b^{\{\mathfrak {s}\}}\,^T 1\!\!1= & {} 1, \quad b^{\{\mathfrak {s}\}}\,^T D 1\!\!1= \frac{1}{2}, \quad b^{\{\mathfrak {s}\}}\,^T A^{\{\mathfrak {f}\}} 1\!\!1= \frac{M}{2}. \end{aligned}$$

A simple choice of parameters is

$$\begin{aligned} b^{\{\mathfrak {f}\}}= & {} \begin{bmatrix} \frac{1}{2} \\ \frac{1}{2} \end{bmatrix}, \quad b^{\{\mathfrak {s}\}} = \begin{bmatrix} \frac{3}{4M+2} \\ \frac{4M-1}{4M+2} \end{bmatrix}, \quad A^{\{\mathfrak {f}\}} = \begin{bmatrix} \frac{1}{4}&- \frac{M}{2} \\ \frac{M+1}{2}&\frac{1}{4} \end{bmatrix}, \end{aligned}$$

and

$$\begin{aligned} A^{\{\mathfrak {s}\}}= & {} \frac{1}{4M+2} \begin{bmatrix} \frac{3}{2}&\quad - M(4M-1) \\ 3(M+1)&\quad \frac{4M-1}{2} \end{bmatrix} \end{aligned}$$

In this case both base methods are not only algebraically stable but also symplectic.
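The order-two conditions above can be confirmed in exact rational arithmetic for any concrete M. The sketch below (the helper name `check_order_two` is ours) assembles the coefficients of Example 6 and evaluates all five conditions, forming \(D=B^{\{\mathfrak {f}\}}\,^{-1} (A^{\{\mathfrak {f}\}})^T B^{\{\mathfrak {s}\}}\) entrywise:

```python
from fractions import Fraction as Fr

def check_order_two(M):
    """Evaluate the five order-two conditions of Example 6 for a given M."""
    bf = [Fr(1, 2), Fr(1, 2)]
    bs = [Fr(3, 4*M + 2), Fr(4*M - 1, 4*M + 2)]
    Af = [[Fr(1, 4), -Fr(M, 2)], [Fr(M + 1, 2), Fr(1, 4)]]
    # D = (B^f)^{-1} (A^f)^T B^s, entrywise D[i][j] = Af[j][i] * bs[j] / bf[i]
    D = [[Af[j][i] * bs[j] / bf[i] for j in range(2)] for i in range(2)]
    return [
        sum(bf) == 1,                                              # b^f^T 1 = 1
        sum(bf[i] * sum(Af[i]) for i in range(2)) == Fr(1, 2),     # b^f^T A^f 1 = 1/2
        sum(bs) == 1,                                              # b^s^T 1 = 1
        sum(bs[i] * sum(D[i]) for i in range(2)) == Fr(1, 2),      # b^s^T D 1 = 1/2
        sum(bs[i] * sum(Af[i]) for i in range(2)) == Fr(M, 2),     # b^s^T A^f 1 = M/2
    ]
```

All five conditions hold identically in M, e.g. for M = 1, 2, 3, 5, 10.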

4.2 Componentwise partitioning

For component partitioning there are no additional nonlinear stability conditions. Following again the lines of [10], we set

$$\begin{aligned} A^{\{\mathfrak {s},\lambda +1\}}=A^{\{\mathfrak {s}\}} + F(\lambda )\quad with \quad F(\lambda ) 1\!\!1= \lambda 1\!\!1, \quad \lambda =0,\ldots ,M-1. \end{aligned}$$
(33a)

We also consider the simplifying assumption

$$\begin{aligned} c:= A^{\{\mathfrak {f}\}} 1\!\!1= A^{\{\mathfrak {s}\}} 1\!\!1. \end{aligned}$$
(33b)

The corresponding order conditions are given in Table 4.

Table 4 Order conditions for multirate GARK scheme (32a, 32b) when (33a, 33b) hold

If, in addition to (33a, 33b), the following second simplifying condition holds,

$$\begin{aligned} b^{\{\mathfrak {f}\}}\,^T \sum _{\lambda =0}^{M-1} F(\lambda ) \, c= & {} 0, \end{aligned}$$
(34)

any pair \((b^{\{\mathfrak {f}\}}, A^{\{\mathfrak {f}\}})\) and \((b^{\{\mathfrak {s}\}}, A^{\{\mathfrak {s}\}})\) of algebraically stable order three schemes leads to an order three multirate scheme, provided that the following compatibility conditions hold:

$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T A^{\{\mathfrak {f}\}} \,c= & {} \frac{M}{6}, \quad b^{\{\mathfrak {f}\}}\,^T A^{\{\mathfrak {s}\}} \,c = \frac{M}{6}. \end{aligned}$$

These compatibility conditions are fulfilled by a scheme with s stages if

$$\begin{aligned} \eta _1(\lambda ) = \frac{c_{s} \sum _{j=2}^{s-1} \eta _j - \sum _{j=2}^{s-1} c_{j} \eta _j - c_{s} \lambda }{c_{1} - c_{s}}, \quad \eta _s(\lambda )=\lambda - \sum _{j=1}^{s-1} \eta _j, \end{aligned}$$

with free parameters \(\eta _2(\lambda ),\ldots ,\eta _{s-1}(\lambda )\), provided that \(c_1 \ne c_s\).
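One can verify in exact arithmetic that the closed-form \(\eta _1\) and \(\eta _s\) enforce both \(\sum _j \eta _j(\lambda ) = \lambda \) (so that \(F(\lambda )1\!\!1= \lambda 1\!\!1\)) and \(\sum _j c_j \eta _j(\lambda ) = 0\) (which makes \(b^T F(\lambda )\, c\) vanish). The sketch below is our own, with an arbitrary abscissae vector satisfying \(c_1 \ne c_s\) and arbitrarily chosen free parameters:

```python
from fractions import Fraction as Fr

def eta_vector(c, lam, eta_free):
    """Build (eta_1, ..., eta_s) from free middle entries eta_2..eta_{s-1},
    using the closed-form expressions for eta_1 and eta_s (needs c[0] != c[-1])."""
    mid = list(eta_free)
    eta1 = (c[-1] * sum(mid)
            - sum(cj * ej for cj, ej in zip(c[1:-1], mid))
            - c[-1] * lam) / (c[0] - c[-1])
    etas = lam - eta1 - sum(mid)
    return [eta1] + mid + [etas]
```

The two identities hold for every \(\lambda \) and every choice of the free parameters, which is exactly what the compatibility construction requires.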

It is easy to see that the scheme must have at least four stages, as \(s=3\) would yield \(b^{\{\mathfrak {f}\}}=b^{\{\mathfrak {s}\}}\) and consequently \(M=1\).

Example 7

(An algebraically stable additive Runge–Kutta scheme of order three) To construct an algebraically stable scheme of order \(p=3\), we first choose a pair of algebraically stable schemes \((A,b^{\{\mathfrak {f}\}})\) and \((A,b^{\{\mathfrak {s}\}})\) with \(c:=A 1\!\!1\) and then define \(A^{\{\mathfrak {f}\}}:=A+(M-1) \widetilde{A}^{\{\mathfrak {f}\}}\) and \(A^{\{\mathfrak {s}\}}:=A+(M-1) \widetilde{A}^{\{\mathfrak {s}\}}\). It is straightforward to show that this yields a stable multirate additive Runge–Kutta scheme, if the following conditions hold:

$$\begin{aligned} \widetilde{A}^{\{\mathfrak {f}\}} 1\!\!1= & {} \widetilde{A}^{\{\mathfrak {s}\}} 1\!\!1= 0, \end{aligned}$$
(35a)
$$\begin{aligned} b^{\{\mathfrak {f}\}}\,^T \widetilde{A}^{\{\mathfrak {f}\}} c= & {} b^{\{\mathfrak {s}\}}\,^T \widetilde{A}^{\{\mathfrak {s}\}} c = 0, \end{aligned}$$
(35b)
$$\begin{aligned} b^{\{\mathfrak {s}\}}\,^T \widetilde{A}^{\{\mathfrak {f}\}} c= & {} b^{\{\mathfrak {f}\}}\,^T \widetilde{A}^{\{\mathfrak {s}\}} c = \frac{1}{6}, \end{aligned}$$
(35c)
$$\begin{aligned} \widetilde{A}^{\{\mathfrak {f}\}}\,^T B^{\{\mathfrak {f}\}} + B^{\{\mathfrak {f}\}} \widetilde{A}^{\{\mathfrak {f}\}}= & {} \widetilde{A}^{\{\mathfrak {s}\}}\,^T B^{\{\mathfrak {s}\}} + B^{\{\mathfrak {s}\}} \widetilde{A}^{\{\mathfrak {s}\}} = 0. \end{aligned}$$
(35d)

Such a pair can be constructed by extending (doubling) any algebraically stable scheme. For the RADAU-IA method, for example, the extension to the pair \((A,b^{\{\mathfrak {f}\}})\) and \((A,b^{\{\mathfrak {s}\}})\) is given by:

$$\begin{aligned} \begin{array}{c|r@{\quad }r@{\quad }r@{\quad }r@{\quad }r} 0 &{} \frac{1}{4} &{} -\frac{1}{4} &{} 0 &{} 0 \\ \frac{2}{3} &{} \frac{1}{4} &{} \frac{5}{12} &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} \frac{1}{4} &{} -\frac{1}{4} \\ \frac{2}{3} &{} 0 &{} 0 &{} \frac{1}{4} &{} \frac{5}{12} \\ \hline b^{\{\mathfrak {f}\}} &{} \frac{1}{4} &{} \frac{3}{4} &{} 0 &{} 0 \\ b^{\{\mathfrak {s}\}} &{} 0 &{} 0 &{} \frac{1}{4} &{} \frac{3}{4} \end{array}. \end{aligned}$$

A possible choice for \(\widetilde{A}^{\{\mathfrak {f}\}}\) and \(\widetilde{A}^{\{\mathfrak {s}\}}\) fulfilling all conditions above is

$$\begin{aligned} \widetilde{A}^{\{\mathfrak {f}\}} = \begin{bmatrix} 0&\quad 0&\quad 0&\quad 0 \\ 0&\quad 0&\quad 0&\quad 0 \\ 0&\quad 0&\quad 0&\quad 0 \\ 0&\quad 0&\quad -\frac{1}{3}&\quad \frac{1}{3} \end{bmatrix}, \qquad \widetilde{A}^{\{\mathfrak {s}\}} = \begin{bmatrix} 0&\quad 0&\quad 0&\quad 0 \\ -\frac{1}{3}&\quad \frac{1}{3}&\quad 0&\quad 0 \\ 0&\quad 0&\quad 0&\quad 0 \\ 0&\quad 0&\quad 0&\quad 0 \end{bmatrix}. \end{aligned}$$

Finally, we set

$$\begin{aligned} \eta _1(\lambda ) = \frac{c_{4} \sum _{j=2}^{3} \eta _j - \sum _{j=2}^{3} c_{j} \eta _j - c_{4} \lambda }{c_{1} - c_{4}}, \quad \eta _4(\lambda )=\lambda - \sum _{j=1}^{3} \eta _j, \end{aligned}$$

with \(\eta _2\) and \(\eta _3\) arbitrary. For \(\eta _2:=\eta _3:=0\) we get \(\eta _1=\frac{c_4}{c_4-c_1} \lambda \) and \(\eta _4=-\frac{c_1}{c_4-c_1} \lambda \). For the extended RADAU-IA scheme we have \(\eta _1=\lambda , \eta _2=\eta _3=\eta _4=0\).
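A quick exact-arithmetic check of Example 7 (our own sketch; the helper names are not from the paper) confirms the zero row sums (35a), the conditions (35b) and (35c), and the vanishing of \(\widetilde{A}^T B + B \widetilde{A}\) for both members of the extended RADAU-IA pair:

```python
from fractions import Fraction as Fr

def mat_vec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

Z = Fr(0)
c  = [Z, Fr(2, 3), Z, Fr(2, 3)]          # abscissae of the extended pair
bf = [Fr(1, 4), Fr(3, 4), Z, Z]
bs = [Z, Z, Fr(1, 4), Fr(3, 4)]
Atf = [[Z]*4, [Z]*4, [Z]*4, [Z, Z, -Fr(1, 3), Fr(1, 3)]]
Ats = [[Z]*4, [-Fr(1, 3), Fr(1, 3), Z, Z], [Z]*4, [Z]*4]

ok_a = all(sum(row) == 0 for row in Atf + Ats)                   # (35a)
ok_b = (dot(bf, mat_vec(Atf, c)) == 0
        and dot(bs, mat_vec(Ats, c)) == 0)                       # (35b)
ok_c = (dot(bs, mat_vec(Atf, c)) == Fr(1, 6)
        and dot(bf, mat_vec(Ats, c)) == Fr(1, 6))                # (35c)

def sym_ok(At, b):
    # entrywise check of (\tilde A^T B + B \tilde A)_{ij} = 0 with B = diag(b)
    return all(At[j][i]*b[j] + b[i]*At[i][j] == 0
               for i in range(4) for j in range(4))

ok_d = sym_ok(Atf, bf) and sym_ok(Ats, bs)
```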

5 Monotonicity properties

Consider the method (6a, 6b, 6c) in the general form, represented by the Butcher tableau (7), and let \(\widetilde{\mathbf {A}}\) denote the compound coefficient matrix collecting all blocks of (7).

We are concerned with partitioned systems (1) for which there exists \(\rho > 0\) such that, for a semi-norm \(\Vert \cdot \Vert \) and for any y,

$$\begin{aligned} \left\| y + \rho \, f^{\{\mathfrak {s}\}} (y) \right\| \le \left\| y \right\| , \quad \left\| y + \frac{\rho }{M}\, f^{\{\mathfrak {f}\}} (y) \right\| \le \left\| y \right\| . \end{aligned}$$
(36)

This implies that condition (36) holds with \(\rho \) replaced by any \(0 \le \tau \le \rho \), i.e., forward Euler steps with the slow and the fast subsystem, respectively, are monotone under this step size restriction. Condition (36) also implies that the system (1) has a solution of nonincreasing norm. To see this, consider \(\alpha ,\beta > 0\) with \(\alpha +\beta =1\) and write an Euler step with the full system as a convex combination

$$\begin{aligned} \Vert y + \theta \, ( f^{\{\mathfrak {s}\}} (y) + f^{\{\mathfrak {f}\}} (y)) \Vert= & {} \left\| \alpha \, \left( y + \frac{\theta }{\alpha }\, f^{\{\mathfrak {s}\}} (y)\right) + \beta \left( y + \frac{\theta }{\beta }\, f^{\{\mathfrak {f}\}} (y) \right) \right\| \\\le & {} \alpha \left\| y \right\| + \beta \left\| y \right\| = \left\| y \right\| , \end{aligned}$$

if \(\theta \le \min \{\alpha , \beta /M\}\, \rho \). The best choice \(\alpha = 1/(M+1)\), \(\beta = M/(M+1)\) shows that the aggregate Euler step is monotonic for time steps \(0 \le \theta \le \rho /(M+1)\); in particular, the solution of (1) has nonincreasing norm, \((d/dt) \Vert y \Vert \le 0\) [5].

We seek multirate schemes which guarantee a monotone numerical solution, \(\Vert y_{n+1} \Vert \le \Vert y_{n} \Vert \), for systems satisfying (36) under suitable step size restrictions. Specifically, we seek schemes whose macro-step H is not subject to the restrictive \(\mathcal {O}(\rho /M)\) Euler bound above.

The following definition and results follow from the general ones for GARK schemes in  [11].

Definition 2

(Absolutely monotonic multirate GARK) Let \(r>0\) and

$$\begin{aligned} \widetilde{\mathbf {R}}= diag \{ M\, r\,\mathbf {I}_{M s^{\{\mathfrak {f}\}} \times M s^{\{\mathfrak {f}\}}}, r\,\mathbf {I}_{s^{\{\mathfrak {s}\}} \times s^{\{\mathfrak {s}\}}} , 1 \}. \end{aligned}$$
(37)

A multirate GARK scheme (3a, 3b) defined by \(\widetilde{\mathbf {A}} \ge 0\) is called absolutely monotonic (a.m.) at \(r \in {\mathbb R}\) if

$$\begin{aligned} \alpha (r)= & {} (\mathbf {I}_{\hat{s} \times \hat{s}} + \widetilde{\mathbf {A}}\widetilde{\mathbf {R}} )^{-1} \cdot \mathbf {1}_{\hat{s} \times 1} \ge 0, ~~~ and \end{aligned}$$
(38a)
$$\begin{aligned} \beta (r)= & {} (\mathbf {I}_{\hat{s} \times \hat{s}} + \widetilde{\mathbf {A}}\widetilde{\mathbf {R}} )^{-1} \cdot \widetilde{\mathbf {A}}\widetilde{\mathbf {R}} = \mathbf {I}_{\hat{s} \times \hat{s}} - (\mathbf {I}_{\hat{s} \times \hat{s}} + \widetilde{\mathbf {A}}\widetilde{\mathbf {R}} )^{-1} \ge 0, \end{aligned}$$
(38b)

where \(\hat{s}=M s^{\{\mathfrak {f}\}}+ s^{\{\mathfrak {s}\}} +1\). Here all the inequalities are taken component-wise.

Let

$$\begin{aligned} \widehat{\mathbf {A}} = \widetilde{\mathbf {A}} \cdot diag \{ M\,\mathbf {I}_{M s^{\{\mathfrak {f}\}} \times M s^{\{\mathfrak {f}\}}}, \mathbf {I}_{s^{\{\mathfrak {s}\}} \times s^{\{\mathfrak {s}\}}} , 1 \}. \end{aligned}$$
(39)

We note that conditions (38a, 38b) are equivalent to

$$\begin{aligned} \alpha (r)= & {} (\mathbf {I}_{\hat{s} \times \hat{s}} + r\,\widehat{\mathbf {A}} )^{-1} \cdot \mathbf {1}_{\hat{s} \times 1} \ge 0, ~~~ and \end{aligned}$$
(40a)
$$\begin{aligned} \beta (r)= & {} (\mathbf {I}_{\hat{s} \times \hat{s}} + r\, \widehat{\mathbf {A}} )^{-1} \cdot \widehat{\mathbf {A}} \ge 0. \end{aligned}$$
(40b)

These are precisely the monotonicity relations for a simple Runge–Kutta matrix. Consequently, the machinery developed for assessing the monotonicity of single-rate Runge–Kutta schemes [3, 9] can be directly applied to the multirate GARK case as well.

Definition 3

(Radius of absolute monotonicity) The radius of absolute monotonicity of the multirate GARK scheme (3a, 3b) is the largest number \(\mathcal {R}\ge 0\) such that the scheme is absolutely monotonic (40a, 40b) for any \(r \in [0,\mathcal {R}]\).

Theorem 5

(Monotonicity of solutions) Consider the GARK scheme (3a,3b) applied to a partitioned system with the property (36). For any macro-step size obeying the restriction

$$\begin{aligned} H \le \mathcal {R}\, \rho \end{aligned}$$
(41)

the stage values and the solution are monotonic

$$\begin{aligned} \Vert Y_i^{\{q\}} \Vert\le & {} \Vert y_n \Vert , \quad q \in \{\mathfrak {f},\mathfrak {s}\}, \quad i=1,\dots ,s^{\{q\}}, \end{aligned}$$
(42a)
$$\begin{aligned} \Vert y_{n+1} \Vert\le & {} \Vert y_n \Vert . \end{aligned}$$
(42b)

In practice we are interested in the largest upper bound for the time step that ensures monotonicity.
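This largest bound can be estimated numerically from the equivalent conditions (40a, 40b). The sketch below (our own helpers; bisection is valid because absolute monotonicity holds on the whole interval \([0,\mathcal {R}]\)) approximates the radius, illustrated on the extended matrix of the single-rate SSP(2,2) scheme \(A=\bigl [{\begin{matrix}0&0\\1&0\end{matrix}}\bigr ]\), \(b=(\tfrac{1}{2},\tfrac{1}{2})\), whose known radius is \(\mathcal {R}=1\):

```python
import numpy as np

def is_abs_monotonic(A_hat, r, tol=1e-12):
    """Componentwise check of conditions (40a)-(40b) for a given r >= 0."""
    n = A_hat.shape[0]
    inv = np.linalg.inv(np.eye(n) + r * A_hat)
    alpha = inv @ np.ones(n)          # (40a)
    beta = inv @ A_hat                # (40b)
    return alpha.min() >= -tol and beta.min() >= -tol

def monotonicity_radius(A_hat, r_max=100.0, iters=60):
    """Approximate the radius of absolute monotonicity by bisection on [0, r_max]."""
    if not is_abs_monotonic(A_hat, 0.0):
        return 0.0
    lo, hi = 0.0, r_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if is_abs_monotonic(A_hat, mid):
            lo = mid
        else:
            hi = mid
    return lo

# Extended matrix [[A, 0], [b^T, 0]] of the SSP(2,2) scheme
A_hat_ssp22 = np.array([[0.0, 0.0, 0.0],
                        [1.0, 0.0, 0.0],
                        [0.5, 0.5, 0.0]])
```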

We next consider the particular case of telescopic multirate GARK methods, where both the fast and the slow components use the same irreducible monotonic base scheme \((A,b)\). The classical Runge–Kutta monotonicity theory [3, 9] states that an irreducible base scheme has a nonzero radius of absolute monotonicity if and only if

$$\begin{aligned} Inc\left( \begin{bmatrix} A&0 \\ b^T&0 \end{bmatrix}^2 \right) \le Inc\left( \begin{bmatrix} A&0 \\ b^T&0 \end{bmatrix} \right) , \end{aligned}$$
(43)

where Inc denotes the incidence matrix (i.e., a matrix with entries equal to one or zero for the non-zero and zero entries of the original matrix, respectively).
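The incidence test (43) is straightforward to evaluate mechanically; the following sketch (our own helper, with the SSP(2,2) scheme \(A=\bigl [{\begin{matrix}0&0\\1&0\end{matrix}}\bigr ]\), \(b=(\tfrac{1}{2},\tfrac{1}{2})\) as an assumed example) checks it componentwise:

```python
import numpy as np

def inc(X, tol=1e-14):
    """Incidence matrix: entry 1 where X is (numerically) nonzero, else 0."""
    return (np.abs(X) > tol).astype(int)

# Extended matrix [[A, 0], [b^T, 0]] of the SSP(2,2) scheme
K = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

satisfies_43 = bool(np.all(inc(K @ K) <= inc(K)))
```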

The matrix (39) of the resulting multirate scheme is

$$\begin{aligned} \widehat{\mathbf {A}} = \begin{bmatrix} A &{} &{} &{} &{} A^{\{\mathfrak {f},\mathfrak {s},1\}} &{} 0 \\ 1\!\!1b^T &{} A &{} &{} &{} A^{\{\mathfrak {f},\mathfrak {s},2\}} &{} 0 \\ \vdots &{} &{} \ddots &{} &{} \vdots &{} \vdots \\ 1\!\!1b^T &{} \cdots &{} 1\!\!1b^T &{} A &{} A^{\{\mathfrak {f},\mathfrak {s},M\}} &{} 0 \\ A^{\{\mathfrak {s},\mathfrak {f},1\}} &{} A^{\{\mathfrak {s},\mathfrak {f},2\}} &{} \cdots &{} A^{\{\mathfrak {s},\mathfrak {f},M\}} &{} A &{} 0 \\ b^T &{} b^T &{} \cdots &{} b^T &{} b^T &{} 0 \end{bmatrix}. \end{aligned}$$
(44)

The extra stages added in (44) relative to (39) make no contribution to the output, and therefore the numerical solution is unchanged. Denote the upper left block of (44) by \(A_M\). Note that (43) implies \(Inc(A_M^2) \le Inc(A_M)\), as \(A_M\) represents M concatenated steps of the base method.

By similar arguments as in the classical theory [3, 9], the multirate scheme is absolutely monotonic if the incidence of the extended matrix satisfies

$$\begin{aligned} Inc(\widehat{\mathbf {A}}^2) \le Inc(\widehat{\mathbf {A}}). \end{aligned}$$
(45)

Theorem 6

(Conditions for absolutely monotonic telescopic multirate GARK schemes) A multirate GARK scheme that uses the same base scheme for the fast and slow components is absolutely monotonic if the following conditions hold:

$$\begin{aligned} Inc( A_M^2 + ( A^{\{\mathfrak {f},\mathfrak {s},i\}} A^{\{\mathfrak {s},\mathfrak {f},j\}} )_{i,j=1,\dots ,M})\le & {} Inc( A_M ), \end{aligned}$$
(46a)
$$\begin{aligned} Inc\left( \begin{bmatrix} A&0 \\ b^T&0 \end{bmatrix}^2 + \begin{bmatrix} \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}&0 \\ \sum _{\lambda =1}^M b^T A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}&0 \end{bmatrix} \right)\le & {} Inc\left( \begin{bmatrix} A&0 \\ b^T&0 \end{bmatrix} \right) , \end{aligned}$$
(46b)
$$\begin{aligned} Inc\left( \sum _{\lambda =1}^{i-1} 1\!\!1b^T A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} + A \, A^{\{\mathfrak {f},\mathfrak {s},i\}} + A^{\{\mathfrak {f},\mathfrak {s},i\}} \, A \right)\le & {} Inc( A^{\{\mathfrak {f},\mathfrak {s},i\}}), \end{aligned}$$
(46c)
$$\begin{aligned} Inc\left( \sum _{\lambda =j+1}^{M} A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} 1\!\!1b^T + A^{\{\mathfrak {s},\mathfrak {f},j\}}\, A + A\, A^{\{\mathfrak {s},\mathfrak {f},j\}}\right)\le & {} Inc(A^{\{\mathfrak {s},\mathfrak {f},j\}}), \end{aligned}$$
(46d)
$$\begin{aligned} Inc((M-j) b^T + b^T\, A + b^T\, A^{\{\mathfrak {s},\mathfrak {f},j\}})\le & {} Inc(b^T), \end{aligned}$$
(46e)

for all \(i,j=1,\dots ,M\).

Proof

The square of matrix (44) is

$$\begin{aligned} \widehat{\mathbf {A}}^2 = \begin{bmatrix} \widehat{\mathbf {A}}^2_{1,1}&\quad \widehat{\mathbf {A}}^2_{1,2} \\ \widehat{\mathbf {A}}^2_{2,1}&\quad \widehat{\mathbf {A}}^2_{2,2} \end{bmatrix}, \end{aligned}$$

with the following blocks:

$$\begin{aligned} \widehat{\mathbf {A}}^2_{1,1}= & {} A_M^2 + ( A^{\{\mathfrak {f},\mathfrak {s},i\}} A^{\{\mathfrak {s},\mathfrak {f},j\}} )_{i,j=1,\dots ,M}, \\ \widehat{\mathbf {A}}^2_{2,2}= & {} \begin{bmatrix} A&\quad 0 \\ b^T&\quad 0 \end{bmatrix}^2 + \begin{bmatrix} \sum _{\lambda =1}^M A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}&\quad 0 \\ \sum _{\lambda =1}^M b^T A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}&\quad 0 \end{bmatrix}, \\ \widehat{\mathbf {A}}^2_{1,2}= & {} \left( \begin{bmatrix} \sum _{\lambda =1}^{i-1} 1\!\!1b^T A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} + A \, A^{\{\mathfrak {f},\mathfrak {s},i\}} + A^{\{\mathfrak {f},\mathfrak {s},i\}} \, A ,&\quad 0 \end{bmatrix} \right) _{i=1,\dots ,M}, \\ \widehat{\mathbf {A}}^2_{2,1}= & {} \left( \begin{bmatrix} \sum _{\lambda =j+1}^{M} A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} 1\!\!1b^T + A^{\{\mathfrak {s},\mathfrak {f},j\}}\, A + A\, A^{\{\mathfrak {s},\mathfrak {f},j\}} \\ (M-j) b^T + b^T\, A + b^T\, A^{\{\mathfrak {s},\mathfrak {f},j\}} \end{bmatrix} \right) _{j=1,\dots ,M}. \end{aligned}$$

Writing the incidence inequality (45) block by block yields conditions (46a)–(46e). \(\square \)

Remark 3

  1.

    A comparison of (46b) with (43) reveals that the monotonicity of the base scheme is a necessary, but not sufficient, condition for the absolute monotonicity of the multirate version. The coupling coefficients have to be chosen appropriately in order to preserve this property. For example, since \(A_M\) and \(A_M^2\) are block lower triangular, a necessary condition for (46a) is that \( A^{\{\mathfrak {f},\mathfrak {s},i\}} A^{\{\mathfrak {s},\mathfrak {f},j\}} = \mathbf {0}\) for \(i < j\).

  2.

    If all weights of the base method are nonzero then condition (46e) is automatically satisfied.

  3.

    An interesting choice of coupling coefficients is to use only the first micro-step solution in the slow calculation

    $$\begin{aligned} A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} = \mathbf {0}, \quad \lambda = 2, \dots , M, \end{aligned}$$

    and to include the slow term contribution only in the last micro-step

    $$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} = 0,\quad \lambda = 1,\dots ,M-1. \end{aligned}$$

    In this case the incidence conditions (46a)–(46d) take the much simpler form:

    $$\begin{aligned} Inc( A^2 + A^{\{\mathfrak {f},\mathfrak {s},M\}} A^{\{\mathfrak {s},\mathfrak {f},1\}} )\le & {} Inc( A ), \end{aligned}$$
    (47a)
    $$\begin{aligned} Inc( b^T A + b^T A^{\{\mathfrak {f},\mathfrak {s},M\}} )\le & {} Inc( b^T ), \end{aligned}$$
    (47b)
    $$\begin{aligned} Inc( A \, A^{\{\mathfrak {f},\mathfrak {s},M\}} + A^{\{\mathfrak {f},\mathfrak {s},M\}} \, A )\le & {} Inc( A^{\{\mathfrak {f},\mathfrak {s},M\}} ), \end{aligned}$$
    (47c)
    $$\begin{aligned} Inc( A^{\{\mathfrak {s},\mathfrak {f},1\}}\, A + A\, A^{\{\mathfrak {s},\mathfrak {f},1\}})\le & {} Inc(A^{\{\mathfrak {s},\mathfrak {f},1\}}). \end{aligned}$$
    (47d)

    Condition (47b) is automatically satisfied if \(b > 0\). Moreover, if the couplings are multiples of the base scheme, \(A^{\{\mathfrak {s},\mathfrak {f},1\}} = c_1\, A\) and \(A^{\{\mathfrak {f},\mathfrak {s},M\}} = c_2\, A\), then (43) implies that all conditions (47a)–(47d) are satisfied.

Example 8

(Monotonicity of an explicit multirate GARK scheme of order two) The base for both the fast and the slow schemes is the following explicit, second-order, strong stability preserving method with radius of absolute monotonicity \(\mathcal {R}=1\):

$$\begin{aligned} A = \begin{bmatrix} 0&\quad 0 \\ 1&\quad 0 \end{bmatrix}, \quad b = \begin{bmatrix} \frac{1}{2}\\ \frac{1}{2} \end{bmatrix} , \quad c = \begin{bmatrix} 0\\ 1 \end{bmatrix}. \end{aligned}$$
(48)

The general coupling conditions for a second-order multirate scheme read:

$$\begin{aligned} \frac{M}{2}= & {} M\, b^{\{\mathfrak {s}\}}\,^T A^{\{\mathfrak {s},\mathfrak {f}\}} 1\!\!1= \sum _{\lambda =1}^M b^T \, A^{\{\mathfrak {s},\mathfrak {f},\lambda \}}\, 1\!\!1, \\ \frac{1}{2}= & {} b^{\{\mathfrak {f}\}}\,^T A^{\{\mathfrak {f},\mathfrak {s}\}} 1\!\!1= \frac{1}{M} \sum _{\lambda =1}^M b^T \, A^{\{\mathfrak {f},\mathfrak {s},\lambda \}}\, 1\!\!1. \end{aligned}$$
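These two coupling conditions are easy to verify numerically for the base tableau (48). The sketch below (ours, not from the paper; \(M=3\) is an arbitrary illustrative choice) uses the couplings considered in the second bullet below, i.e., coupling (49) together with \(A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} = A\) for all \(\lambda\):

```python
import numpy as np

# Base tableau (48) and an illustrative number of micro-steps
M = 3
A = np.array([[0.0, 0.0],
              [1.0, 0.0]])
b = np.array([0.5, 0.5])
one = np.ones(2)

# Coupling (49): the slow stages see only the first micro-step
A_sf = [np.array([[0.0, 0.0], [M, 0.0]])] + [np.zeros((2, 2))] * (M - 1)
# Every fast micro-step uses the full base tableau for the slow terms
A_fs = [A.copy() for _ in range(M)]

slow_cond = sum(b @ A_sf[lam] @ one for lam in range(M))      # should equal M/2
fast_cond = sum(b @ A_fs[lam] @ one for lam in range(M)) / M  # should equal 1/2

print(slow_cond, fast_cond)  # 1.5 0.5
```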

We consider three different couplings.

  • The coupling coefficients that satisfy the nonlinear stability decoupling condition have negative entries, and the resulting GARK method is not absolutely monotonic.

  • Coupling the slow step with only the first fast step is achieved by

    $$\begin{aligned} A^{\{\mathfrak {s},\mathfrak {f},1\}} = \begin{bmatrix} 0&\quad 0 \\ M&\quad 0 \end{bmatrix} , \quad A^{\{\mathfrak {s},\mathfrak {f},\lambda \}} = \mathbf {0}, \quad \lambda \ge 2. \end{aligned}$$
    (49)

    The second order condition can be fulfilled by taking

    $$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} = A, \quad \forall \, \lambda . \end{aligned}$$

    For \(M \ge 2\) we have \(\mathcal {R}=0\), since (39) corresponds to an irreducible Runge–Kutta scheme and (46c) is not satisfied.

  • A scheme with (49) which includes the slow terms only in the last micro-step is

    $$\begin{aligned} A^{\{\mathfrak {f},\mathfrak {s},\lambda \}} = 0, \quad \lambda = 1,\dots ,M-1, \quad A^{\{\mathfrak {f},\mathfrak {s},M\}} = M\, A. \end{aligned}$$

    With this coupling the multirate scheme retains the absolute monotonicity radius of the base method for any M, as all conditions (47a)–(47d) are satisfied.
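As a sanity check (ours, not from the paper; \(M=3\) is an arbitrary choice), the simplified incidence conditions (47a)–(47d) can be verified numerically for this last coupling of the base tableau (48):

```python
import numpy as np

def inc(X, tol=1e-14):
    """Incidence (sign-pattern) matrix: entry 1 where X is nonzero, else 0."""
    return (np.abs(X) > tol).astype(int)

def inc_leq(X, Y):
    """Elementwise incidence comparison Inc(X) <= Inc(Y)."""
    return bool(np.all(inc(X) <= inc(Y)))

M = 3  # illustrative number of micro-steps
A = np.array([[0.0, 0.0],
              [1.0, 0.0]])
b = np.array([0.5, 0.5])

A_sf1 = np.array([[0.0, 0.0], [M, 0.0]])  # coupling (49)
A_fsM = M * A                             # slow terms only in the last micro-step

ok_47a = inc_leq(A @ A + A_fsM @ A_sf1, A)
ok_47b = inc_leq(b @ A + b @ A_fsM, b)
ok_47c = inc_leq(A @ A_fsM + A_fsM @ A, A_fsM)
ok_47d = inc_leq(A_sf1 @ A + A @ A_sf1, A_sf1)

print(ok_47a, ok_47b, ok_47c, ok_47d)  # True True True True
```

All four products on the left-hand sides vanish or have incidence patterns contained in the right-hand sides, consistent with the couplings being multiples of the base scheme.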

We note that monotonicity conditions for several multirate and partitioned explicit Runge–Kutta schemes are discussed by Hundsdorfer, Mozartova, and Savcenco in a recent report [6].

6 Conclusions and future work

This work develops multirate generalized additive Runge–Kutta schemes, which exploit the different levels of activity within the partitions of the right-hand side and/or the components by using an appropriate time step for each of them. Multirate GARK schemes inherit the high level of flexibility of GARK schemes [11], which allow for different stage values as arguments of different components of the right-hand side. Many well-known multirate Runge–Kutta schemes, such as the Kvaerno–Rentrop methods [10], the multirate infinitesimal step methods [8], and methods based on dense output coupling, are particular members of the new family of methods.

The paper develops the order conditions (up to order three) for the generalized additive multirate schemes. We extend the GARK algebraic stability and monotonicity analysis [11] to the new multirate family, and show that these properties are inherited from the base schemes provided that some coupling conditions hold.

Future work will construct multirate GARK methods tailored to special applications in, for example, circuit design, vehicle system dynamics, or air quality modeling, and will extend the new family of schemes to differential-algebraic equations. First numerical results obtained with multirate GARK schemes applied to an electro-thermal coupled problem from network analysis can be found in [1].