
1 Introduction

The (physical) Church-Turing Hypothesis [17] postulates that every physical phenomenon or effect can, at least in principle, be simulated by a sufficiently powerful digital computer up to any desired precision. Its validity has been challenged, though, in the sound setting of Recursive Analysis: a computable \(C^1\) initial condition to the Wave Equation can lead to an incomputable solution [11, 13]. The controversy was later resolved by demonstrating that, in both physically [1, 30] and mathematically more appropriate Sobolev space settings, the solution is computable uniformly in the initial data [23]. Recall that functions in a Sobolev space are not defined pointwise but by local averages in the \(L_q\) sense (in particular \(q=2\), corresponding to energy), with derivatives understood in the distributional sense. This led to a series of investigations on the computability of linear and nonlinear partial differential equations [24,25,26].

The (incompressible) Navier-Stokes Equation

$$\begin{aligned} \partial _t\varvec{u} -\bigtriangleup \varvec{ u} +(\varvec{ u}\cdot \nabla )\varvec{ u} + \nabla P= \varvec{f}, \quad \nabla \cdot \varvec{ u}=0, \quad \varvec{u}(0)=\varvec{ a}, \quad \varvec{ u}\big |_{\partial \varOmega }\equiv \varvec{ 0} \end{aligned}$$
(1)

describes the motion of a viscous incompressible fluid filling a rigid box \(\overline{\varOmega }\). The vector field \(\varvec{ u}=\varvec{ u}(\varvec{x},t)=\big (u_{1},u_{2},\ldots ,u_{d}\big )\) represents the velocity of the fluid and \(P=P(\varvec{ x}, t)\) is the scalar pressure with gradient \(\nabla P\); \(\bigtriangleup \) is the Laplace operator; \(\nabla \cdot \varvec{ u}\) denotes componentwise divergence; \(\varvec{ u}\cdot \nabla \) means, in Cartesian coordinates, \(u_{1}\partial _{x_1} + u_{2}\partial _{x_2} +\ldots +u_{d}\partial _{x_d}\); and the function \(\varvec{ a}=\varvec{ a}(\varvec{ x})\) with \(\nabla \cdot \varvec{ a}=0\) provides the initial velocity and \(\varvec{ f}\) is a given external force. Equation (1) thus constitutes a system of \(d+1\) partial differential equations for \(d+1\) functions.

The question of global existence and smoothness of its solutions, even in the homogeneous case \(\varvec{ f}\equiv \varvec{ 0}\), is one of the seven Millennium Prize Problems posed by the Clay Mathematics Institute at the beginning of the 21st century. Local existence has been established, though, in various \(L_q\) settings [5]; and uniqueness of weak solutions holds in dimension 2, but not in dimension 3 [18, §V.1.5], [2, §V.1.3.1]. Nevertheless, numerical solution methods have been devised in abundance, often based on pointwise (or even uniform, rather than \(L_q\)) approximation and struggling with nonphysical artifacts [14]. In fact, the very last of seven open problems listed in the addendum to [12] asks for a “recursion theoretic study of \(\dots \) the Navier-Stokes equation”. Moreover it has been suggested [16] that hydrodynamics could in principle be incomputable in the sense of allowing one to simulate universal Turing computation and thus to ‘solve’ the Halting problem. And indeed, recent progress towards (a negative answer to) the Millennium Problem [20] proceeds by simulating a computational process in the vorticity dynamics to construct a finite-time blowup for a PDE very similar to (1).

1.1 Overview

Using the sound framework of Recursive Analysis, we assert the computability of a local strong solution of (1) in the space \(L^{\sigma }_{2,0}(\varOmega )\) (see Sect. 2 for definition) from a given initial condition \(\varvec{a}\in L^{\sigma }_{2,0}(\varOmega )\); moreover, the computation is uniform in the initial data. We follow a common strategy used in the classical existence proofs [3,4,5, 18, 21]:

  1. (i)

    Eliminate the pressure \(P\) by applying, to both sides of Eq. (1), the Helmholtz projection \(\mathbb {P}:\big (L_{2}(\varOmega )\big )^2\rightarrow L^{\sigma }_{2,0}(\varOmega )\), thus arriving at the non-linear evolution equation

    $$\begin{aligned} \partial _t \varvec{u} + \mathbb {A}\varvec{ u} + \mathbb {B}\varvec{ u} = \varvec{ g} \quad (t>0), \qquad \varvec{ u}(0)= \varvec{ a} \in L^{\sigma }_{2,0}(\varOmega ) \end{aligned}$$
    (2)

    where \(L_{2}(\varOmega )\) is the set of all square-integrable real-valued functions defined on \(\varOmega \), \(\varvec{ g}=\mathbb {P}\varvec{f}\), \(\varvec{ u}=\mathbb {P}\varvec{ u}\in L^{\sigma }_{2,0}(\varOmega )\), \(\mathbb {A}=-\mathbb {P}\bigtriangleup \) denotes the Stokes operator, and \(\mathbb {B}\varvec{u}=\mathbb {P}\, (\varvec{ u}\cdot \nabla )\varvec{ u}\) is the nonlinearity.

  2. (ii)

    Construct a mild solution \(\varvec{ v}(t)\varvec{ a} = e^{-t\mathbb {A}}\varvec{ a}\) of the associated homogeneous linear equation

    $$\begin{aligned} \partial _t\varvec{ v} + \mathbb {A}\varvec{ v} = \varvec{ 0} \quad \text {for }t\ge 0, \qquad \varvec{ v}(0)=\varvec{ a}\in L^{\sigma }_{2,0}(\varOmega ) \end{aligned}$$
    (3)
  3. (iii)

    Rewrite (2) using (ii) in an integral form [5, §2]

    $$\begin{aligned} \varvec{ u}(t) = e^{-t\mathbb {A}}\varvec{ a} \;+\;\int _0^t e^{-(t-s)\mathbb {A}} \varvec{ g}(s)\,ds \;-\; \int ^t_0e^{-(t-s)\mathbb {A}}\mathbb {B}\varvec{ u}(s)\, ds \quad \text {for }\; t\ge 0 \end{aligned}$$
    (4)

    and solve it by a limit/fixed-point argument using the following iteration scheme [5, Eq. (2.1)]:

    $$\begin{aligned} \varvec{v}_0(t) = e^{-t\mathbb {A}}\varvec{ a} + \int _0^t e^{-(t-s)\mathbb {A}} \varvec{ g}(s)\,ds , \ \varvec{ v}_{n+1}(t) = \varvec{v}_0(t) - \int ^t_0e^{-(t-s)\mathbb {A}}\mathbb {B}\varvec{v}_n(s)\,ds \end{aligned}$$
    (5)
  4. (iv)

    Recover the pressure \(P\) from \(\varvec{u}\) by solving

    $$\begin{aligned} \nabla P\;=\; \varvec{f}-\partial _t\varvec{u} + \bigtriangleup \varvec{u} - (\varvec{u} \cdot \nabla ) \varvec{u} \end{aligned}$$
    (6)

To exploit the above strategy for deriving an algorithm that computes the solution of (1), several difficulties need to be dealt with. Firstly, a proper representation is needed for coding the solenoidal fields. The codes should be not only rich enough to capture the functional character of these vector fields but also robust enough to retain the coded information under elementary function operations, in particular integration. Secondly, since the Stokes operator \(\mathbb {A}: \hbox {dom}(\mathbb {A}) \rightarrow L^{\sigma }_{2, 0}(\varOmega )\) is not continuous and its graph is not dense in \((L^{\sigma }_{2, 0}(\varOmega ))^2\), there is no convenient way to directly code \(\mathbb {A}\) for computing the solution of the linear equation (3). This lack of computer-accessible information on \(\mathbb {A}\) makes the computation of the solution \(\varvec{v}(t)\varvec{a}=e^{-t\mathbb {A}}\varvec{a}\) of (3) much more intricate. Thirdly, since the nonlinear operator \(\mathbb {B}\) in the iteration (5) involves differentiation and multiplication, and a mere \(L^{\sigma }_{2, 0}\)-code of \(\varvec{v}_n\) is not rich enough for carrying out these operations, one needs to computationally derive a stronger code for \(\varvec{v}_n\) from any given \(L^{\sigma }_{2, 0}\)-code of \(\varvec{a}\) so that \(\mathbb {B}\varvec{v}_n\) can be computed. The iteration thus moves back and forth among different spaces, and additional care must be taken to keep the computations flowing from one space to another without glitches. To overcome these difficulties arising in the recursion theoretic study of the Navier-Stokes equation, many subtle and intricate estimates are established in addition to the classical ones.
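To make the fixed-point strategy of steps (ii) and (iii) concrete, here is a minimal numerical sketch of iteration (5) for a scalar caricature of (2): the ODE \(u' + \lambda u + u^2 = 0\), whose mild formulation replaces the semigroup \(e^{-t\mathbb {A}}\) by \(e^{-\lambda t}\) and the nonlinearity \(\mathbb {B}u\) by \(u^2\). All concrete choices below (the constant `lam`, the initial value `a`, the horizon `T`, the grid size `M`) are ours, for illustration only; they are not taken from the paper.

```python
import math

# Scalar caricature of iteration (5): solve u' + lam*u + u^2 = 0, u(0) = a,
# via the mild form  u(t) = e^{-lam t} a - int_0^t e^{-lam (t-s)} u(s)^2 ds.
lam, a, T, M = 1.0, 0.5, 1.0, 200
ts = [i * T / M for i in range(M + 1)]
h = T / M

def v0(t):
    """Mild solution of the linear part, step (ii)."""
    return math.exp(-lam * t) * a

def picard_step(v):
    """One application of the iteration map in (5), with B(u) = u^2."""
    out = []
    for i, t in enumerate(ts):
        if i == 0:
            out.append(v0(t))
            continue
        # trapezoid rule for int_0^t e^{-lam (t-s)} v(s)^2 ds
        vals = [math.exp(-lam * (t - ts[j])) * v[j] ** 2 for j in range(i + 1)]
        integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
        out.append(v0(t) - integral)
    return out

v = [v0(t) for t in ts]
err = float("inf")
for _ in range(60):
    v_new = picard_step(v)
    err = max(abs(x - y) for x, y in zip(v, v_new))
    v = v_new
print(err < 1e-6)   # successive iterates contract on [0, T]
```

The same scheme, with \(e^{-\lambda t}\) replaced by the Stokes semigroup and the quadrature performed effectively on names, is what Sect. 5 effectivizes.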

The paper is organized as follows. Presuming familiarity with Weihrauch’s Type-2 Theory of Effectivity [22], Sect. 2 recalls the standard representation \(\delta _{L_{2}}\) of \(L_{2}(\varOmega )\) and introduces a natural representation \(\delta _{L_{2,0}^{\sigma }}\) of \(L^{\sigma }_{2,0}(\varOmega )\). Section 3 proves that the Helmholtz projection \(\mathbb {P}:\big (L_{2}(\varOmega )\big )^2\rightarrow L^{\sigma }_{2,0}(\varOmega )\) is \(\big ((\delta _{L_{2}})^2,\delta _{L_{2,0}^{\sigma }}\big )\)-computable. Section 4 presents the proof that the solution to the linear homogeneous Dirichlet problem (3) is uniformly computable from the initial condition \(\varvec{ a}\). Section 5 is devoted to showing that the solution to the nonlinear Navier-Stokes problem (1) is uniformly computable locally. Subsection 5.1 recalls the Bessel (=fractional Sobolev) space \(H_{2}^{s}(\varOmega )\subseteq L_{2}(\varOmega )\) of s-fold weakly differentiable square-integrable functions on \(\varOmega \) and its associated standard representation \(\delta _{H_{2,0}^{s}}\), \(s\ge 0\). For \(s>1\) we assert differentiation \(H_{2}^{s}(\varOmega )\ni w\mapsto \partial _x w\in L_{2}(\varOmega )\) to be \(\big (\delta _{H_{2,0}^{s}},\delta _{L_{2}}\big )\)-computable and multiplication \(H_{2}^{s}(\varOmega )\times L_{2}(\varOmega )\ni (v,w)\mapsto vw\in L_{2}(\varOmega )\) to be \(\big (\delta _{H_{2,0}^{s}}\times \delta _{L_{2}},\delta _{L_{2}}\big )\)-computable. Based on these preparations, Subsect. 5.3 asserts that in the homogeneous case \(\varvec{ g}\equiv \varvec{0}\), the sequence generated by the iteration map

$$\begin{aligned} \mathbb {S}\;:\;C\big ([0;\infty ),L^{\sigma }_{2,0}(\varOmega )\big )\times C\big ([0;\infty ),&L^{\sigma }_{2,0}(\varOmega )\big ) \;\ni \;(\varvec{ v}_0,\varvec{ v}_n) \\&\;\mapsto \;\varvec{ v}_{n+1}\;\in \;C\big ([0;\infty ),L^{\sigma }_{2,0}(\varOmega )\big ) \end{aligned}$$

according to Eq. (5), converges effectively uniformly on some positive (but not necessarily maximal) time interval [0; T] whose length \(T=T(\varvec{ a})>0\) is computable from the initial condition \(\varvec{a}\). Subsection 5.4 proves that the iteration map \(\mathbb {S}\) is \(\big ([\rho \!\rightarrow \!\delta _{L_{2,0}^{\sigma }}]\times [\rho \!\rightarrow \!\delta _{L_{2,0}^{\sigma }}],[\rho \!\rightarrow \!\delta _{L_{2,0}^{\sigma }}]\big )\)-computable. We conclude in Subsect. 5.5 with the final extensions regarding the inhomogeneity \(\varvec{f}\) and pressure \(P\), thus establishing the main result of this work:

Theorem 1

There exists a \(\displaystyle \big (\delta _{L_{2,0}^{\sigma }}\!\!\times \!\big [\rho \!\rightarrow \!\delta _{L_{2,0}^{\sigma }}\big ]\,,\rho \big ) \text {-computable map } T\),

$$\begin{aligned} T:L^{\sigma }_{2,0}(\varOmega )\times C\big ([0;\infty ),L^{\sigma }_{2,0}(\varOmega )\big )\rightarrow (0;\infty ), \quad (\varvec{a}, \varvec{ f})\mapsto T(\varvec{a}, \varvec{ f}) \end{aligned}$$

and a \(\big (\delta _{L_{2,0}^{\sigma }}\!\!\!\times \!\big [\rho \!\rightarrow \!\delta _{L_{2,0}^{\sigma }}\big ]\times \rho \,, \delta _{L_{2,0}^{\sigma }}\big )\text {-computable partial map } \mathcal S\),

$$\begin{aligned} \mathcal {S}:\subseteq L^{\sigma }_{2,0}(\varOmega )\times C\big ([0;\infty ),L^{\sigma }_{2,0}(\varOmega )\big )\times [0;\infty ) \rightarrow L^{\sigma }_{2,0}(\varOmega )\!\times \!L_{2}(\varOmega ) \end{aligned}$$

such that, for every \(\varvec{ a}\in L^{\sigma }_{2,0}(\varOmega )\) and \(\varvec{f}\in C\big ([0;\infty ),L^{\sigma }_{2,0}(\varOmega )\big )\), the function \((\varvec{u},P):\big [0;T(\varvec{a},\varvec{ f})]\ni t\mapsto \mathcal {S}(\varvec{a},\varvec{ f},t)\) constitutes a (strong local in time and weak global in space) solution to Eq. (1).

Roughly speaking, a function is computable if it can be approximated by “computer-accessible” functions (such as rational numbers, polynomials with rational coefficients, and so forth) with arbitrary precision, where the precision is given as an input; such a sequence of approximations is called an effective approximation. In terms of effective approximations, the theorem thus states that the solution of Eq. (1) can be effectively approximated locally on the time interval \([0, T(\varvec{a}, \varvec{f})]\), where the time instant \(T(\varvec{a}, \varvec{f})\) is itself effectively approximable.

More precisely, in computable analysis, a map \(F:X\rightarrow Y\) from a space X with representation \(\delta _X\) to a space Y with representation \(\delta _Y\) is said to be \((\delta _X, \delta _Y)\)-computable if there exists a (Turing) algorithm (or any computer program) that computes a \(\delta _Y\)-name of F(x) from any given \(\delta _X\)-name of x. A metric space \((X, d)\), equipped with a partial enumeration \(\zeta :\subseteq \mathbb {N}\rightarrow X\) of some dense subset, gives rise to a canonical Cauchy representation \(\delta _\zeta \) by encoding each \(x\in X\) with a sequence \(\bar{s}=(s_0,s_1,s_2,\ldots )\in \hbox {dom}(\zeta )^\omega \subseteq \mathbb {N}^\omega \) such that \(d\big (x,\zeta (s_k)\big )\le 2^{-k}\) for all k [22, §8.1]; in other words, \(\{ \zeta (s_k)\}\) is an effective approximation of x. For example, approximating by (dyadic) rationals leads to the standard representation \(\rho \) of \(\mathbb {R}\); and for a fixed bounded \(\varOmega \subseteq \mathbb {R}^d\), the standard representation \(\delta _{L_{2}}\) of \(L_{2}(\varOmega )\) encodes \(f\in L_{2}(\varOmega )\) by a sequence \(\{p_k:k\in \mathbb {N}\}\subseteq \mathbb {Q}[\mathbb {R}^d]\) of d-variate polynomials with rational coefficients such that \(\Vert f-p_k\Vert _{2}\le 2^{-k}\), where \(\Vert \cdot \Vert _{2}=\Vert \cdot \Vert _{L_{2}}\). Thus if both spaces X and Y admit Cauchy representations, then a function \(f: X\rightarrow Y\) is computable if there is an algorithm that computes an effective approximation of f(x) from any given effective approximation of x. For represented spaces \((X,\delta _X)\) and \((Y,\delta _Y)\), \(\delta _X\!\times \!\delta _Y\) denotes the canonical representation of the Cartesian product \(X\!\times \! Y\). 
When X and Y are \(\sigma \)-compact metric spaces with respective canonical Cauchy representations \(\delta _X\) and \(\delta _Y\), \([\delta _X\!\rightarrow \!\delta _Y]\) denotes a canonical representation of the space \(C(X,Y)\) of continuous total functions \(f:X\rightarrow Y\), equipped with the compact-open topology [22, Theorem 3.2.11+Definition 3.3.13]. The representation \([\delta _X\!\rightarrow \!\delta _Y]\) supports type conversion in the following sense [22, Theorem 3.3.15]:

Fact 2

On the one hand, the evaluation \((f,x)\mapsto f(x)\) is \(([\delta _X\!\rightarrow \!\delta _Y]\!\times \!\delta _X\,,\,\delta _Y)\)-computable. On the other hand, a map \(f:X\times Y\rightarrow Z\) is \((\delta _X\!\times \!\delta _Y\,,\,\delta _Z)\)-computable iff the map \(X\ni x\mapsto \big (y\mapsto f(x,y)\big )\in C(Y,Z)\) is \((\delta _X\,,\,[\delta _Y\!\rightarrow \!\delta _Z])\)-computable.

We mention in passing that all spaces considered in this paper are equipped with a norm. Thus for any space X considered below, a \(\delta _{X}\)-name of \(f\in X\) is simply an effective approximation of f, despite the often cumbersome notation.
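As a concrete (and deliberately simplistic) model of the preceding discussion, one can realize a name of a real number as a function mapping a precision \(k\) to a rational within \(2^{-k}\), and implement operations on reals by querying argument names at higher precision. The sketch below is our own illustration of this Cauchy-name discipline, not the paper's formal TTE machinery; the float-based name of \(\sqrt{2}\) is only valid for moderate precisions.

```python
import math
from fractions import Fraction

# A toy delta-name of a real x: a function k -> rational q_k with |x - q_k| <= 2^-k.

def sqrt2_name(k):
    """Dyadic name of sqrt(2) (relies on float accuracy, so only moderate k)."""
    return Fraction(round(math.sqrt(2) * 2 ** (k + 1)), 2 ** (k + 1))

def const_name(q):
    """Name of an exactly known rational."""
    return lambda k: Fraction(q)

def add(xname, yname):
    """Addition is computable on names: query both arguments at precision k+1,
    so the output error is at most 2^-(k+1) + 2^-(k+1) = 2^-k."""
    return lambda k: xname(k + 1) + yname(k + 1)

s = add(sqrt2_name, const_name(Fraction(1, 4)))   # a name of sqrt(2) + 1/4
print(float(s(30)))
```

The representations \(\delta _{L_{2}}\) and \(\delta _{L_{2,0}^{\sigma }}\) below follow the same pattern, with rationals replaced by (divergence-free) rational polynomials and \(|\cdot |\) by the \(L_2\)-norm.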

2 Representing Divergence-Free \(L_{2}\) Functions on \(\varOmega \)

Let us call a vector field \(\varvec{ f}\) satisfying \(\nabla \cdot \varvec{ f}=0\) in \(\varOmega \) divergence-free. A vector-valued function \(\varvec{p}\) is called a polynomial of degree N if each of its components is a polynomial of degree at most N in each variable and at least one component is a polynomial of degree N. Let \(L^{\sigma }_{2,0}(\varOmega )\) (or \(L^{\sigma }_{2,0}\) if the context is clear) be the closure in \(L_{2}\)-norm of the set \(\{ \varvec{ u}\in (C^\infty _0(\varOmega ))^2 \, : \, \nabla \cdot \varvec{ u}=0\}\) of all smooth divergence-free functions whose support, together with that of all partial derivatives, is contained in some compact subset of \(\varOmega \). Let \(\mathbb {Q}[\mathbb {R}^2]\) be the set of all polynomials of two real variables with rational coefficients and \(\mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\) the subset of all 2-tuples of such polynomials which are divergence-free in \(\varOmega \) and vanish on \(\partial \varOmega \). We note that the boundary value \(\varvec{u}|_{\partial \varOmega }\) of an \(L^{\sigma }_{2, 0}(\varOmega )\)-function \(\varvec{u}\) is not defined unless \(\varvec{u}\) is (weakly) differentiable; if \(\varvec{u}\) is (weakly) differentiable, then \(\varvec{u}|_{\partial \varOmega }=0\).

Notation 3

Hereafter we use \(\Vert w \Vert _2\) for the \(L_2\)-norm \(\Vert w\Vert _{L_2(\varOmega )}\) if w is real-valued, or for \(\Vert w\Vert _{(L_2(\varOmega ))^2}\) if w is vector-valued (in \(\mathbb {R}^2\)). We note that \(\Vert \cdot \Vert _{L^{\sigma }_{2, 0}(\varOmega )}= \Vert \cdot \Vert _{(L_2(\varOmega ))^2}\). For any subset A of \(\mathbb {R}^n\), its closure is denoted as \(\overline{A}\).

Proposition 1

  1. (a)

     A polynomial tuple

    $$\begin{aligned} \varvec{p}=(p_1, p_2)=\big (\sum _{i, j=0}^{N}a^1_{i, j}x^iy^j, \sum _{i, j=0}^{N}a^2_{i, j}x^iy^j\big ) \end{aligned}$$

    is divergence-free and boundary-free if and only if its coefficients satisfy the following system of linear equations with integer coefficients:

    $$\begin{aligned} (i+1)a^1_{i+1, j}+(j+1)a^2_{i, j+1}=0,&0\le i, j\le N-1 \nonumber \\ (i+1)a^1_{i+1, N}=0,&0\le i\le N-1 \\ (j+1)a^2_{N, j+1}=0,&0\le j\le N-1 \nonumber \end{aligned}$$
    (7)

    and for all \(0\le i,j\le N\),

    $$\begin{aligned} \,\,\, \sum \nolimits _{i=0}^N a^1_{i, j}=\sum \nolimits _{i=0}^N a^2_{i, j}=\sum \nolimits _{i=0}^N (-1)^ia^1_{i, j}=\sum \nolimits _{i=0}^N (-1)^ia^2_{i, j}=0 \end{aligned}$$
    (8)
    $$\begin{aligned} \sum \nolimits _{j=0}^N a^1_{i, j}=\sum \nolimits _{j=0}^N a^2_{i, j}=\sum \nolimits _{j=0}^N (-1)^ja^1_{i, j}=\sum \nolimits _{j=0}^N (-1)^ja^2_{i, j}=0 \end{aligned}$$
    (9)
  2. (b)

    \(\mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\) is dense in \(L^{\sigma }_{2,0}(\varOmega )\) w.r.t. \(L_{2}\)-norm.

The proof of Proposition 1 is deferred to Appendix A.

We may be tempted to use \(\mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\) as a set of names for coding/approximating the elements in the space \(L^{\sigma }_{2,0}(\varOmega )\). However, since the closure of \(\mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\) in \(L_2\)-norm contains \(L^{\sigma }_{2,0}(\varOmega )\) as a proper subspace, \(\mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\) is “too big” to be used as a set of codes for representing \(L^{\sigma }_{2,0}(\varOmega )\); one has to “trim” polynomials in \(\mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\) so that any convergent sequence of “trimmed” polynomials converges to a limit in \(L^{\sigma }_{2,0}(\varOmega )\). The trimming process is shown below. For each \(k\in \mathbb {N}\) (where \(\mathbb {N}\) is the set of all positive integers), let \(\varOmega _{k}=(-1+2^{-k};1-2^{-k})^2\). And for each \(\varvec{p} =(p_1, p_2)\in \mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\), define \(\mathbb {T}_k \varvec{ p}=(\mathbb {T}_k p_1, \mathbb {T}_k p_2)\), where

$$\begin{aligned} \mathbb {T}_kp_j(x,y) \; =\; \left\{ \begin{array}{ll} p_j(\frac{x}{1-2^{-k}}, \frac{y}{1-2^{-k}}), &{} \; -1+2^{-k}\le x, y\le 1-2^{-k} \\ 0, &{} \hbox {otherwise} \end{array} \right. \end{aligned}$$
(10)

\(j=1, 2\). Then \(\mathbb {T}_kp_{j}\) and \(\mathbb {T}_k\varvec{p}\) have the following properties:

  1. (a)

    \(\mathbb {T}_k\varvec{p}\) has compact support \(\overline{\varOmega }_k\) contained in \(\varOmega \).

  2. (b)

    \(\mathbb {T}_k\varvec{p}\) is a polynomial with rational coefficients defined in \(\varOmega _k\).

  3. (c)

    \(\mathbb {T}_k\varvec{p}\) is continuous on \(\overline{\varOmega }=[-1, 1]^2\).

  4. (d)

    \(\mathbb {T}_k\varvec{p}=0\) on \(\partial \varOmega _k\), since \(\varvec{p}\) vanishes on the boundary of \(\varOmega \). Thus \(\mathbb {T}_k\varvec{p}\) vanishes in the exterior region of \(\varOmega _k\) including its boundary \(\partial \varOmega _k\).

  5. (e)

    \(\mathbb {T}_k\varvec{p}\) is divergence-free in \(\varOmega _k\) following the calculation below: for \((x, y)\in \varOmega _k\), we have \((\frac{x}{1-2^{-k}}, \frac{y}{1-2^{-k}})\in \varOmega \) and

    $$\begin{aligned} \frac{\partial \mathbb {T}_kp_1}{\partial x}(x, y)+ \frac{\partial \mathbb {T}_kp_2}{\partial y}(x, y)&= \frac{1}{1-2^{-k}}\frac{\partial p_1}{\partial x'}(x', y')+ \frac{1}{1-2^{-k}}\frac{\partial p_2}{\partial y'}(x', y') \\&= \frac{1}{1-2^{-k}}\left[ \frac{\partial p_1}{\partial x'}(x', y')+\frac{\partial p_2}{\partial y'}(x',y')\right] \quad =\; 0 \end{aligned}$$

    since \(\varvec{p}\) is divergence-free in \(\varOmega \), where \(x'=\frac{x}{1-2^{-k}}\) and \(y'=\frac{y}{1-2^{-k}}\).
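The trimming operator (10) and properties (a) through (e) can be checked concretely. The sketch below represents a polynomial by a coefficient dictionary and uses as test input the divergence-free pair \(\varvec{p}=(\partial _y\psi , -\partial _x\psi )\) with stream function \(\psi =(1-x^2)^2(1-y^2)^2\); this particular \(\varvec{p}\), which vanishes on \(\partial \varOmega \), is our own example and is not taken from the paper.

```python
# A polynomial is a dict {(i, j): c} meaning the sum of c * x^i * y^j.
def peval(p, x, y):
    return sum(c * x ** i * y ** j for (i, j), c in p.items())

def trim(p, k):
    """T_k p from (10): rescale into Omega_k = (-1+2^-k, 1-2^-k)^2, zero outside."""
    r = 1 - 2.0 ** -k
    def Tp(x, y):
        if -r <= x <= r and -r <= y <= r:
            return peval(p, x / r, y / r)
        return 0.0
    return Tp

# p1 = d(psi)/dy = -4 y (1 - y^2) (1 - x^2)^2, expanded into coefficients;
# p2 = -d(psi)/dx = 4 x (1 - x^2) (1 - y^2)^2.
p1 = {(0, 1): -4.0, (0, 3): 4.0, (2, 1): 8.0, (2, 3): -8.0, (4, 1): -4.0, (4, 3): 4.0}
p2 = {(1, 0): 4.0, (3, 0): -4.0, (1, 2): -8.0, (3, 2): 8.0, (1, 4): 4.0, (3, 4): -4.0}

Tp1, Tp2 = trim(p1, 3), trim(p2, 3)
print(Tp1(0.95, 0.0))   # outside Omega_3, so trimmed to 0.0 (property (a))
```

A finite-difference quotient confirms property (e) at interior points, and evaluation at \(x=\pm (1-2^{-k})\) confirms the vanishing in (d), up to rounding.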

It follows from the discussion above that every \(\mathbb {T}_k\varvec{p}\) is a divergence-free polynomial with rational coefficients on \(\varOmega _k\) that vanishes in \([-1, 1]^2\setminus \varOmega _k\) and is continuous on \([-1, 1]^2\). However, although the functions \(\mathbb {T}_k\varvec{p}\) are continuous on \([-1, 1]^2\) and differentiable in \(\varOmega _k\), they can be non-differentiable along the boundary \(\partial \varOmega _k\subseteq \varOmega \). To use these functions as names for coding elements in \(L^{\sigma }_{2,0}(\varOmega )\), it is desirable to smooth them along the boundary \(\partial \varOmega _k\) so that they are differentiable in the entire \(\varOmega \). A standard smoothing technique is to convolve with a \(C^\infty \) function. We use this technique to modify the functions \(\mathbb {T}_k\varvec{p}\) so that they remain divergence-free and become differentiable on all of \(\varOmega \). Let

$$\begin{aligned} \gamma (\varvec{ x}) \;:=\; \left\{ \begin{array}{ll}\gamma _0 \cdot \exp \Big (-\tfrac{1}{1-\Vert \varvec{ x}\Vert ^2}\Big ), &{} \text {if } 1>\Vert \varvec{ x}\Vert :=\max \{|x_1|, |x_2|\} \\ 0, &{} \hbox {otherwise} \end{array} \right. \end{aligned}$$
(11)

where \(\gamma _0\) is a constant such that \(\int _{\mathbb {R}^2}\gamma (\varvec{ x})\,d\varvec{ x}=1\) holds. The constant \(\gamma _0\) is computable, since integration on continuous functions is computable [22, §6.4]. Let \(\gamma _{k}(\varvec{ x})=2^{2k}\gamma (2^{k}\varvec{ x})\). Then, for all \(k\in \mathbb {N}\), \(\gamma _{k}\) is a \(C^\infty \) function having support in the closed square \([-2^{-k}, 2^{-k}]^2\) and \(\int _{\mathbb {R}^2}\gamma _{k}(\varvec{ x})\,d\varvec{ x}=1\). Recall that for differentiable functions \(f,g:\mathbb {R}^d\rightarrow \mathbb {R}\) with compact support, the convolution \(f*g\) is defined as follows:

$$\begin{aligned} \big (f*g\big )(\varvec{ x})\;=\; \int _{\mathbb {R}^d} f(\varvec{ x}-\varvec{ y})\cdot g(\varvec{ y})\,d\varvec{ y} \end{aligned}$$
(12)

It is easy to see that for \(n\ge k+1\) the support of \(\gamma _n*\mathbb {T}_k\varvec{p}:=(\gamma _n*\mathbb {T}_kp_1,\gamma _n*\mathbb {T}_kp_2)\) is contained in the closed square \([-1+2^{-(k+1)}, 1-2^{-(k+1)}]^2\). It is also known classically that \(\gamma _n*\mathbb {T}_k\varvec{p}\) is a \(C^\infty \) function. Since \(\gamma _n\) is a computable function and integration on compact domains is computable, the map \((n,k,\varvec{p})\mapsto \gamma _n*\mathbb {T}_k\varvec{p}\) is computable. Moreover the following metric is computable:

$$\begin{aligned} \big ((n,k,\varvec{p}),(n',k',\varvec{p}')\big ) \;\mapsto \; \left( \int \big |(\gamma _n*\mathbb {T}_k\varvec{p})(\varvec{ x})-(\gamma _{n'}*\mathbb {T}_{k'}\varvec{p}')(\varvec{ x})\big |^2 \,d\varvec{ x} \right) ^{1/2} \end{aligned}$$
(13)
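Both the normalising constant \(\gamma _0\) of (11) and the unit mass of the rescaled mollifiers \(\gamma _k\) can be approximated numerically. The midpoint quadrature and the grid size in the sketch below are arbitrary illustrative choices, standing in for the computable integration of [22, §6.4].

```python
import math

def gamma_unnorm(x1, x2):
    """exp(-1/(1-||x||^2)) with the max-norm of (11), before normalisation."""
    r = max(abs(x1), abs(x2))
    return math.exp(-1.0 / (1.0 - r * r)) if r < 1.0 else 0.0

def integrate(f, lo, hi, n=200):
    """Midpoint rule on the square [lo, hi]^2."""
    h = (hi - lo) / n
    pts = [lo + (i + 0.5) * h for i in range(n)]
    return h * h * sum(f(x, y) for x in pts for y in pts)

gamma0 = 1.0 / integrate(gamma_unnorm, -1.0, 1.0)   # normalising constant of (11)

def gamma_k(k):
    """The rescaled mollifier gamma_k(x) = 2^{2k} gamma(2^k x)."""
    return lambda x1, x2: 4.0 ** k * gamma0 * gamma_unnorm(2.0 ** k * x1, 2.0 ** k * x2)

mass = integrate(gamma_k(1), -0.5, 0.5)   # integrate over the support of gamma_1
print(round(mass, 3))                     # unit mass survives the rescaling
```

The metric (13) can be approximated by the same kind of quadrature applied to \(|(\gamma _n*\mathbb {T}_k\varvec{p})-(\gamma _{n'}*\mathbb {T}_{k'}\varvec{p}')|^2\).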

Lemma 1

Every function \(\gamma _{n}*\mathbb {T}_k\varvec{p}\) is divergence-free in \(\varOmega \), where \(n,k\in \mathbb {N}\), \(n\ge k\), and \(\varvec{p}\in \mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\).

Lemma 2

The set \(\mathcal {P}=\big \{ \gamma _{n}*\mathbb {T}_k\varvec{p} \,:\, n, k\in \mathbb {N}, n\ge k+1, \varvec{p}\in \mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\big \}\) is dense in \(L^{\sigma }_{2,0}(\varOmega )\).

See Appendices B and C for the proofs.

From Lemmas 1 and 2 it follows that \(\mathcal {P}\) is a countable set that is dense in \(L^{\sigma }_{2,0}(\varOmega )\) (in \(L_{2}\)-norm) and that every function in \(\mathcal {P}\) is \(C^\infty \), divergence-free on \(\varOmega \), and has compact support contained in \(\varOmega \); in other words, \(\mathcal {P}\subset \{ \varvec{ u}\in C^\infty _{0}(\varOmega )^2 \, : \, \nabla \cdot \varvec{ u}=0\}\). Thus \(L^{\sigma }_{2,0}(\varOmega )\) is the closure of \(\mathcal {P}\) in \(L_{2}\)-norm. This fact indicates that \(\mathcal {P}\) qualifies as a set of codes for representing \(L^{\sigma }_{2,0}(\varOmega )\).

Since the function \(\phi : \bigcup _{N=0}^{\infty }\mathbb {Q}^{(N+1)^2}\times \mathbb {Q}^{(N+1)^2}\rightarrow \{ 0, 1\}\), where

$$\begin{aligned} \phi \big ((r_{i, j})_{0\le i, j\le N}, (s_{i, j})_{0\le i, j\le N}\big )=\left\{ \begin{array}{ll} 1, &{} \hbox {if (7), (8), and (9) are satisfied} \\ &{} \text {(with} \, r_{i, j}=a^1_{i, j} \ \text {and} \ s_{i, j}=a^2_{i, j}) \\ 0, &{} \hbox {otherwise} \end{array} \right. \end{aligned}$$

is computable, there is a total computable function on \(\mathbb {N}\) that enumerates \(\mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\). Then it follows from the definition of \(\mathcal {P}\) that there is a total computable function \(\alpha : \mathbb {N}\rightarrow \mathcal {P}\) that enumerates \(\mathcal {P}\); thus, in view of the computable Eq. (13), \(\big (L^{\sigma }_{2,0}(\varOmega ), (\varvec{ u},\varvec{ v})\mapsto \Vert \varvec{ u}-\varvec{v}\Vert _{2}, \mathcal {P}, \alpha \big )\) is a computable metric space. Let \(\delta _{L_{2,0}^{\sigma }}: \mathbb {N}^{\omega }\rightarrow L^{\sigma }_{2,0}\) be the standard Cauchy representation of \(L^{\sigma }_{2,0}\); that is, every function \(\varvec{ u}\in L^{\sigma }_{2,0}(\varOmega )\) is encoded by a sequence \(\{ \varvec{p}_k:k\in \mathbb {N}\}\subseteq \mathcal {P}\), such that \(\Vert \varvec{ u}-\varvec{p}_k\Vert _{2}\le 2^{-k}\). The sequence \(\{ \varvec{ p}_k\}_{k\in \mathbb {N}}\) is called a \(\delta _{L_{2,0}^{\sigma }}\)-name of \(\varvec{ u}\), which is an effective approximation of \(\varvec{ u}\) (in \(L_2\)-norm).
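The computability of the predicate \(\phi \) amounts to checking finitely many integer-linear conditions on the coefficient matrices. The sketch below, with matrices as nested lists, is our own rendering of \(\phi \); the test field is again the pair derived from the stream function \((1-x^2)^2(1-y^2)^2\), our own example.

```python
def phi(a1, a2):
    """Decide conditions (7)-(9) for coefficient matrices a1, a2 of size (N+1)x(N+1)."""
    N = len(a1) - 1
    ok = all((i + 1) * a1[i + 1][j] + (j + 1) * a2[i][j + 1] == 0
             for i in range(N) for j in range(N))                    # (7): divergence-free
    ok &= all((i + 1) * a1[i + 1][N] == 0 for i in range(N))         # (7): top-degree terms
    ok &= all((j + 1) * a2[N][j + 1] == 0 for j in range(N))
    for j in range(N + 1):                                           # (8): vanishing at x = +-1
        ok &= sum(a1[i][j] for i in range(N + 1)) == 0
        ok &= sum(a2[i][j] for i in range(N + 1)) == 0
        ok &= sum((-1) ** i * a1[i][j] for i in range(N + 1)) == 0
        ok &= sum((-1) ** i * a2[i][j] for i in range(N + 1)) == 0
    for i in range(N + 1):                                           # (9): vanishing at y = +-1
        ok &= sum(a1[i][j] for j in range(N + 1)) == 0
        ok &= sum(a2[i][j] for j in range(N + 1)) == 0
        ok &= sum((-1) ** j * a1[i][j] for j in range(N + 1)) == 0
        ok &= sum((-1) ** j * a2[i][j] for j in range(N + 1)) == 0
    return bool(ok)

# Coefficients of (d psi/dy, -d psi/dx) for psi = (1-x^2)^2 (1-y^2)^2 (degree N = 4).
a1 = [[0] * 5 for _ in range(5)]
a2 = [[0] * 5 for _ in range(5)]
for (i, j), c in {(0, 1): -4, (0, 3): 4, (2, 1): 8, (2, 3): -8, (4, 1): -4, (4, 3): 4}.items():
    a1[i][j] = c
for (i, j), c in {(1, 0): 4, (3, 0): -4, (1, 2): -8, (3, 2): 8, (1, 4): 4, (3, 4): -4}.items():
    a2[i][j] = c
print(phi(a1, a2))   # True: this pair satisfies (7)-(9)
```

Enumerating all rational coefficient tuples and filtering by \(\phi \) yields the enumeration of \(\mathbb {Q}^{\sigma }_{0}[\mathbb {R}^2]\) used above.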

3 Computability of Helmholtz Projection

In this section, we show that the Helmholtz projection \(\mathbb {P}\) is computable.

Proposition 2

The projection \(\mathbb {P}: \big (L_{2}(\varOmega )\big )^2 \rightarrow L^{\sigma }_{2,0}(\varOmega )\) is \(\big ((\delta _{L_{2}})^2,\delta _{L_{2,0}^{\sigma }}\big )\)-computable.

Proof

For simplicity let us set \(\varOmega = (0, 1)^2\). The proof carries over to \(\varOmega =(-1, 1)^2\) by a scaling on sine and cosine functions. We begin with two classical facts which are used in the proof:

  1. (i)

    It follows from [6, p. 40]/[21] that for any \(\varvec{ u}=\big (u_{1}, u_{2}\big )\in \big (L_{2}(\varOmega )\big )^2\),

    $$\begin{aligned} \mathbb {P}\varvec{ u}= (-\partial _{y}\varphi , \partial _{x}\varphi ) \end{aligned}$$
    (14)

    where the scalar function \(\varphi \) is the solution of the following boundary value problem:

    $$\begin{aligned} \bigtriangleup \varphi = \partial _{x}u_{2}-\partial _{y}u_{1} \text { in } \varOmega , \qquad \varphi = 0 \text { on } \partial \varOmega \end{aligned}$$
    (15)

    We note that \(\mathbb {P}\) is a linear operator.

  2. (ii)

    Each of \(\{ \sin (n\pi x)\sin (m\pi y)\}_{n, m\ge 1}\),

    $$\begin{aligned} \{ \sin (n\pi x)\cos (m\pi y)\}_{n\ge 1, m\ge 0}, \ \hbox {or}\ \{ \cos (n\pi x) \sin (m\pi y)\}_{n\ge 0, m\ge 1} \end{aligned}$$

    is an orthogonal basis for \(L^2(\varOmega )\). Thus each component \(u_{i}\), \(i=1, 2\), of any \(\varvec{ u}=\big (u_{1}, u_{2}\big )\in \big (L_{2}(\varOmega )\big )^2\) can be written in the following form:

    $$\begin{aligned} u_{i}(x, y)\;=&\;\sum \limits _{n, m\ge 0}u_{i, n, m}\cos (n\pi x)\,\sin (m\pi y) \\ \;=&\; \sum \limits _{n, m\ge 0}\tilde{u}_{i, n, m}\sin (n\pi x)\,\cos (m\pi y) \end{aligned}$$

    where

    $$\begin{aligned}&u_{i, n, m}=\int ^1_0\int ^1_0u_{i}(x, y)\cos (n\pi x)\sin (m\pi y)dxdy, \quad \hbox {and}\\&\quad \, \ \tilde{u}_{i, n, m}=\int ^1_0\int ^1_0u_{i}(x, y)\sin (n\pi x)\cos (m\pi y)dxdy \end{aligned}$$

with the property that \(\Vert u_{i}\Vert _{2} = \left( \sum _{n, m\ge 0}|u_{i, n, m}|^2\right) ^{1/2} = \left( \sum _{n, m\ge 0}|\tilde{u}_{i, n, m}|^2\right) ^{1/2}\). We note that the sequences \(\{ u_{i, n, m}\}\), \(\{ \tilde{u}_{i, n, m}\}\), and \(\Vert u_{i}\Vert _{2}\) are computable from \(\varvec{u}\); cf. [22].

To prove that the projection is \(\big ((\delta _{L_{2}})^2, \delta _{L_{2,0}^{\sigma }}\big )\)-computable, it suffices to show that there is an algorithm computing, given any accuracy \(k\in \mathbb {N}\) and for any \(\varvec{u}\in (L^2(\varOmega ))^2\), a vector function \((p_k, q_k) \in \mathcal {P}\) such that \(\Vert \mathbb {P}\varvec{u} - (p_k, q_k)\Vert _{2}\le 2^{-k}\). Let us fix k and \(\varvec{ u}=\big (u_{1}, u_{2})\). Then a straightforward computation shows that the solution \(\varphi \) of (15) can be explicitly written as

$$\begin{aligned} \varphi (x, y)=\sum \nolimits _{n, m=1}^{\infty }\frac{-nu_{2, n, m}+m\tilde{u}_{1, n, m}}{(n^2+m^2)\pi }\sin (n\pi x)\,\sin (m\pi y) \end{aligned}$$

It then follows that

$$\begin{aligned} -\partial _{y}\varphi = \sum \nolimits _{n, m\ge 1}\frac{mnu_{2, n, m}-m^2\tilde{u}_{1, n, m}}{n^2+m^2}\sin (n\pi x)\,\cos (m\pi y) \end{aligned}$$
(16)

Similarly, we can obtain a formula for \(\partial _{x}\varphi \). Since we have an explicit expression for \((-\partial _{y}\varphi , \partial _{x}\varphi )\), a search algorithm is a natural choice for finding a \(2^{-k}\)-approximation \((p_k, q_k)\) of \(\mathbb {P}\varvec{u}\) by successively computing the norms

$$\Vert (-\partial _{y}\varphi , \partial _{x}\varphi ) - (p, q)\Vert _2, \qquad (p, q)\in \mathcal {P}.$$

However, since \(-\partial _{y}\varphi \) and \(\partial _{x}\varphi \) are infinite series which involve limit processes, a truncating algorithm is needed so that one can compute approximations of the two limits before a search program can be executed. The truncating algorithm will find some \(N(k, \varvec{u})\in \mathbb {N}\) such that the \(N(k, \varvec{u})\)-partial sum of \((-\partial _{y}\varphi , \partial _{x}\varphi )\) is a \(2^{-(k+1)}\)-approximation of the series; in other words, the algorithm chops off the infinite tails of the series within pre-assigned accuracy. The following estimate provides a basis for the desired truncating algorithm:

$$\begin{aligned}&\Vert -\partial _{y}\varphi - \sum _{n, m<N}\frac{mnu_{2, n, m}-m^2\tilde{u}_{1, n, m}}{n^2+m^2}\sin (n\pi x)\cos (m\pi y)\Vert ^2_{2} \\= & {} \Vert \sum _{n, m \ge N}\frac{mnu_{2, n, m}-m^2\tilde{u}_{1, n, m}}{n^2+m^2}\sin (n\pi x)\cos (m\pi y)\Vert ^2_{2} \\= & {} \sum _{n, m\ge N}\left| \frac{mnu_{2, n, m}-m^2\tilde{u}_{1, n, m}}{n^2+m^2}\right| ^2 \le 2\sum _{n, m\ge N}(|u_{2, n, m}|^2 + |\tilde{u}_{1, n, m}|^2) \\ \end{aligned}$$

A similar estimate applies to \(\partial _{x}\varphi \). Since

$$\begin{aligned} \Vert u_{i}\Vert ^{2}_{2}=\sum _{n, m\ge 1}|u_{i, n, m}|^2 = \sum _{n, m\ge 1}|\tilde{u}_{i, n, m}|^2\, ,\quad i=1, 2, \end{aligned}$$

is computable, it follows that there is an algorithm computing \(N(k, \varvec{u})\) from k and \(\varvec{u}\) such that the \(N(k, \varvec{u})\)-partial sum of \((-\partial _{y}\varphi , \partial _{x}\varphi )\) is a \(2^{-(k+1)}\)-approximation of the series. Now we can search for a \((p_k, q_k)\in \mathcal {P}\) that approximates the \(N(k, \varvec{u})\)-partial sum in \(L^2\)-norm within accuracy \(2^{-(k+1)}\) as follows: enumerate \(\mathcal {P}=\{ \tilde{p}_j\}\); compute the \(L^2\)-norm of the difference between the \(N(k, \varvec{u})\)-partial sum and \(\tilde{p}_j\); halt at the first \(\tilde{p}_j\) for which this norm is less than \(2^{-(k+1)}\); and set \((p_k, q_k)=\tilde{p}_j\). Each individual norm computation halts in finitely many steps, and the search succeeds since \(\mathbb {P}\varvec{u}=(-\partial _{y}\varphi , \partial _{x}\varphi )\in L^{\sigma }_{2, 0}\) and \(\mathcal {P}\) is dense in \(L^{\sigma }_{2, 0}\). It is then clear that \(\Vert \mathbb {P}\varvec{u} - (p_k, q_k)\Vert _2 \le 2^{-k}\).
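In coefficient space, the heart of this proof is a finite computation once the series are truncated. The following sketch applies the formulas (14) through (16) to finitely many Fourier coefficients; the finite-mode setting is our own simplification, playing the role of the \(N(k,\varvec{u})\)-partial sums.

```python
def helmholtz_coeffs(u2, ut1):
    """Coefficients of (-d_y phi, d_x phi) via (16): u2[(n, m)] holds the
    coefficients u_{2,n,m} of u_2 in the cos(n pi x) sin(m pi y) basis, and
    ut1[(n, m)] the coefficients of u_1 in the sin(n pi x) cos(m pi y) basis."""
    modes = {nm for nm in list(u2) + list(ut1) if nm[0] >= 1 and nm[1] >= 1}
    minus_dy, dx = {}, {}
    for (n, m) in modes:
        a, b = u2.get((n, m), 0.0), ut1.get((n, m), 0.0)
        minus_dy[(n, m)] = (m * n * a - m * m * b) / (n * n + m * m)  # sin cos basis
        dx[(n, m)] = (-n * n * a + n * m * b) / (n * n + m * m)       # cos sin basis
    return minus_dy, dx

# single mode u_{2,1,1} = 1, all other coefficients zero
c1, c2 = helmholtz_coeffs({(1, 1): 1.0}, {})
print(c1[(1, 1)], c2[(1, 1)])   # 0.5 -0.5
```

Searching the dense set \(\mathcal {P}\) then converts such truncated coefficient data into a bona fide \(\delta _{L_{2,0}^{\sigma }}\)-name, as in the proof above.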

4 Computability of the Linear Problem

In this section, we show that the solution operator for the linear homogeneous equation (3) is uniformly computable from the initial data. We begin by recalling the Stokes operator and some of its classical properties. Let \(\mathbb {A}=-\mathbb {P}\bigtriangleup \) be the Stokes operator as defined for instance in [3, §2] or [18, §III.2.1], where \(\mathbb {P}: \big (L_{2}(\varOmega )\big )^2 \rightarrow L^{\sigma }_{2,0}\) is the Helmholtz projection. Classically, \(\mathbb {A}\) is an unbounded but closed, positive self-adjoint linear operator whose domain is dense in \(L^{\sigma }_{2,0}\), and thus \(-\mathbb {A}\) is the infinitesimal generator of an analytic semigroup; cf. [18, Theorem III.2.1.1] or [2, §IV.5.2]. In this case, the linear homogeneous equation (3) has the solution \(\varvec{ u}(t)=e^{-\mathbb {A}t}\varvec{ a}\), where \(\varvec{ u}(0)=\varvec{ a}\), \(e^{-\mathbb {A}t}\) is the analytic semigroup generated by \(-\mathbb {A}\), and \(\varvec{u}(t)\in L^{\sigma }_{2, 0}(\varOmega )\) for \(t\ge 0\). Furthermore, the following lemma shows that the solution \(\varvec{u}(t)\) decays in \(L^2\)-norm as time t increases.

Lemma 3

For every \(\varvec{a}\in L^{\sigma }_{2, 0}(\varOmega )\) and \(t\ge 0\),

$$\begin{aligned} \Vert \varvec{u}(t)\Vert _2 = \Vert e^{-t\mathbb {A}}\varvec{ a}\Vert _{2}\le \Vert \varvec{ a}\Vert _{2}=\Vert \varvec{u}(0)\Vert _2 \end{aligned}$$
(17)

(Recall that \(\Vert \cdot \Vert _{2}=\Vert \cdot \Vert _{L^{\sigma }_{2,0}(\varOmega )}\); see Notation 3.)

Proof

Classically it is known that for any \(\varvec{a}\in L^{\sigma }_{2, 0}(\varOmega )\), \(\varvec{u}(t)=e^{-t\mathbb {A}}\varvec{ a}\) is in the domain of \(\mathbb {A}\) for \(t>0\). Thus if \(\varvec{ a}=\varvec{u}(0)\) itself is in the domain of \(\mathbb {A}\), then so is \(\varvec{u}(t)\) for \(t\ge 0\). Since \(\mathbb {A}\) is positive self-adjoint, we have \(\mathbb {A}^{*}=\mathbb {A}\) and \(\langle \mathbb {A}\varvec{ u}(t), \varvec{ u}(t)\rangle :=\int _{\varOmega } \mathbb {A}\varvec{u}(t)(\varvec{ x})\cdot \varvec{ u}(t)(\varvec{ x})\,d\varvec{ x} > 0\) for every \(\varvec{ a}\) in the domain of \(\mathbb {A}\) with \(\varvec{ a}\not \equiv 0\) and \(t\ge 0\). Now if we rewrite Eq. (3) in the form

$$\begin{aligned} \langle \varvec{ u}_t, \varvec{ u} \rangle + \langle \mathbb {A}\varvec{ u}, \varvec{ u} \rangle =0 \end{aligned}$$

or equivalently \(\frac{1}{2}\frac{d}{dt}\langle \varvec{ u}, \varvec{ u}\rangle + \langle \mathbb {A}\varvec{ u}, \varvec{ u} \rangle =0\), then \(\frac{d}{dt}\langle \varvec{ u}, \varvec{ u} \rangle \le 0\) and consequently \(\langle \varvec{ u}, \varvec{ u} \rangle (t)\le \langle \varvec{ u}, \varvec{ u} \rangle (0)\); thus (17) holds true for \(\varvec{a}\) in the domain of \(\mathbb {A}\). Since the domain of \(\mathbb {A}\) is dense in \(L^{\sigma }_{2, 0}(\varOmega )\), it follows that (17) holds true for all \(\varvec{a}\in L^{\sigma }_{2, 0}(\varOmega )\).    \(\square \)

Proposition 3

For the linear homogenous equation (3), the solution operator \(\mathcal {S}:L^{\sigma }_{2,0}(\varOmega )\rightarrow C\big ([0;\infty ), L^{\sigma }_{2,0}(\varOmega )\big )\), \(\varvec{ a}\mapsto (t\mapsto e^{-\mathbb {A}t}\varvec{ a})\), is \((\delta _{L_{2,0}^{\sigma }}, [\rho \rightarrow \delta _{L_{2,0}^{\sigma }}])\)-computable.

By the First Main Theorem of Pour-El and Richards [12, §II.3], the unbounded operator \(\mathbb {A}\) does not preserve computability. In particular, the naive exponential series \(\sum _n (-\mathbb {A}t)^n \varvec{ a}/n!\) does not establish Proposition 3.

Convention. For readability we will not notationally distinguish the spaces of vector-valued functions \(\varvec{ u},\varvec{ a}\) and scalar functions u, a in the proof below and the proof of Lemma 6.

Proof

We show how to compute a \(\delta _{L_{2,0}^{\sigma }}\)-name of \(e^{-t\mathbb {A}}a\) on inputs \(t\ge 0\) and \(a\in L^{\sigma }_{2,0}(\varOmega )\). Recall that a \(\delta _{L_{2,0}^{\sigma }}\)-name of \(e^{-t\mathbb {A}}a\) is a sequence \(\{ q_K\}\), \(q_K\in \mathcal {P}\), satisfying \(\Vert e^{-t\mathbb {A}}a - q_K\Vert _{2}\le 2^{-K}\) for all \(K\in \mathbb {N}\). Again, for readability, we assume that \(\varOmega =(0, 1)^2\).

We first consider the case where \(a\in \mathcal {P}\) and \(t>0\). The reason for starting with functions in \(\mathcal {P}\) is that these functions have a stronger convergence property: for any \(a\in \mathcal {P}\), if \(a=(a^1, a^2)\) is expressed in terms of the orthogonal basis \(\{ \sin (n\pi x)\sin (m\pi y)\}_{n, m\ge 1}\) for \(L_{2}(\varOmega )\), i.e. for \(i=1, 2\),

$$\begin{aligned} a^i=\sum _{n, m\ge 1}a^i_{n, m}\sin (n\pi x)\sin (m\pi y) \end{aligned}$$
(18)

where \(a^i_{n, m}=\int _0^1\int _0^1 a^i\sin (n\pi x)\sin (m\pi y)\,dx\,dy\), then the following series is convergent

$$\begin{aligned} \sum _{n, m\ge 1}(1+n^2+m^2) |a^i_{n, m}|^2 < \infty \end{aligned}$$
(19)

The inequality (19) holds true because functions in \(\mathcal {P}\) are \(C^\infty \). In fact, the series is not only convergent but its sum is also computable (from a) (see, for example, [28]).

Now let \(K\in \mathbb {N}\) be any given precision. Since \(-\mathbb {A}\) generates an analytic semigroup, it follows from [10, Section 2.5] that for \(t>0\),

$$\begin{aligned} e^{-t\mathbb {A}}a=\frac{1}{2\pi i}\int _{\varGamma }e^{\lambda t} (\lambda \mathbb {I}+\mathbb {A})^{-1}a\,d\lambda \end{aligned}$$
(20)

where \(\varGamma \) is the path composed from two rays \(re^{i\beta }\) and \(re^{-i\beta }\) with \(0<r<\infty \) and \(\beta =\frac{3\pi }{5}\). Thus we have an explicit expression for \(e^{-t\mathbb {A}}a\), which involves a limit process – an infinite integral – indicating that a search algorithm is applicable for finding a desirable K-approximation provided that a finite approximation of \(e^{-t\mathbb {A}}a\) can be computed by some truncating algorithm.

In the following, we construct such a truncating algorithm. We begin by writing the infinite integral in (20) as a sum of three integrals: two are finite and one infinite; the infinite one can be made arbitrarily small. Now for the details. Let l be a positive integer to be determined; let \(\varGamma _1\) be the path \(re^{i\beta }\) with \(0<r\le l\); \(\varGamma _2\) the path \(re^{-i\beta }\) with \(0<r\le l\); and \(\varGamma _3=\varGamma \setminus (\varGamma _1\cup \varGamma _2)\). Since \(a\in \mathcal {P}\), it follows that \(-\mathbb {A}a=\mathbb {P}\bigtriangleup a=\bigtriangleup a\), which further implies that

$$\begin{aligned}&(\lambda \mathbb {I}+\mathbb {A})^{-1} a = \nonumber \\&\left( \sum _{n, m\ge 1}\frac{a^1_{ n, m}\sin (n\pi x)\sin (m\pi y)}{\lambda + (n\pi )^2+(m\pi )^2}, \sum _{n, m\ge 1}\frac{a^2_{ n, m}\sin (n\pi x)\sin (m\pi y)}{\lambda + (n\pi )^2+(m\pi )^2}\right) \end{aligned}$$
(21)

Note that for any \(\lambda \in \varGamma \), \(|\lambda + (n\pi )^2+(m\pi )^2|\ne 0\). From (20) and (21) we can write \(e^{-t\mathbb {A}}a\) as a sum of three terms:

$$\begin{aligned} e^{-t\mathbb {A}}a&= \sum _{j=1}^{3}\frac{1}{2\pi i}\int _{\varGamma _j}\tilde{a}e^{\lambda t}d\lambda \\&= \sum _{j=1}^3\frac{1}{2\pi i}\sum _{n, m\ge 1}\left[ \int _{\varGamma _j}\frac{e^{\lambda t}}{\lambda + (n\pi )^2+(m\pi )^2}d\lambda \right] a_{n, m}\sin (n\pi x)\sin (m\pi y) \\&=: \beta _1+\beta _2+\beta _3 \end{aligned}$$

where \(\tilde{a}=(\lambda \mathbb {I}+\mathbb {A})^{-1} a\). The functions \(\beta _j\), \(j=1, 2, 3\), are in \(L^{\sigma }_{2,0}(\varOmega )\), which can be verified as follows: It follows from \(a=(\lambda \mathbb {I}+\mathbb {A})\tilde{a} = (\lambda \mathbb {I}-\mathbb {P}\bigtriangleup )\tilde{a}\) and \(\mathbb {P}\bigtriangleup \tilde{a}=\mathbb {P}(\bigtriangleup \tilde{a})\in L^{\sigma }_{2,0}(\varOmega )\) that \(\nabla \cdot (\mathbb {P}\bigtriangleup \tilde{a})=0\) and

$$\begin{aligned} 0=\nabla \cdot a=\lambda (\nabla \cdot \tilde{a}) - \nabla \cdot (\mathbb {P}\bigtriangleup \tilde{a})=\lambda (\nabla \cdot \tilde{a}) \end{aligned}$$
(22)

Since \(\lambda \in \varGamma \), it follows that \(\lambda \ne 0\); thus \(\nabla \cdot \tilde{a}=0\). This shows that \(\tilde{a}\in L^{\sigma }_{2,0}(\varOmega )\). Then it follows from (22) that

$$ \nabla \cdot \beta _j = \frac{1}{2\pi i}\int _{\varGamma _j}(\nabla \cdot \tilde{a})e^{\lambda t}d\lambda = 0$$

Hence \(\beta _j\in L^{\sigma }_{2,0}(\varOmega )\) for \(1\le j\le 3\).

Next we show that \(\beta _1\) and \(\beta _2\) can be effectively approximated by finite sums while \(\beta _3\) tends to zero effectively as \(l\rightarrow \infty \). We start with \(\beta _3\). Since \(t>0\) and \(\cos \beta =\cos \frac{3\pi }{5}<0\), it follows that

$$ \left| \int _{\varGamma _3}\frac{e^{\lambda t}}{\lambda +(n\pi )^2+(m\pi )^2}d\lambda \right| \le 2\int _l^{\infty }\frac{e^{tr\cos \beta }}{r}dr \rightarrow 0 $$

effectively as \(l\rightarrow \infty \). Thus there is some \(l_K\in \mathbb {N}\), computable from a, t, and K, such that the following estimate is valid for \(i=1, 2\) when we take l to be \(l_K\):

$$\begin{aligned} \Vert \beta ^i_3\Vert _{2}&= \left\| \frac{1}{2\pi i}\sum _{n, m\ge 1}\left[ \int _{\varGamma _3}\frac{e^{\lambda t}}{\lambda + (n\pi )^2+(m\pi )^2}d\lambda \right] a^i_{n, m}\sin (n\pi x)\sin (m\pi y)\right\| _{2} \\&\le \frac{1}{\pi }\int ^{\infty }_{l_K}\frac{e^{tr\cos \beta }}{r}dr \left( \sum _{n, m\ge 1}|a^i_{n, m}|^2\right) ^{1/2} = \frac{1}{\pi }\int ^{\infty }_{l_K}\frac{e^{tr\cos \beta }}{r}dr \cdot \Vert a\Vert _{2} \le 2^{-(K+7)} \end{aligned}$$

where \(\beta _3 = (\beta ^1_3, \beta ^2_3)\). Now let us set \(l=l_K\) and estimate \(\beta _1\). Since \(\beta =\frac{3\pi }{5}<\frac{3\pi }{4}\), it follows that \(\cos \beta <0\) and \(|\cos \beta |<\sin \beta \). Consequently, for any \(\lambda =re^{i\beta }\) on \(\varGamma _1\), if \(r\ge \frac{1}{\sin \beta }\), then \(|re^{i\beta }+(n\pi )^2+(m\pi )^2|\ge r\sin \beta \ge 1\). On the other hand, if \(0<r<\frac{1}{\sin \beta }\), then \(r\cos \beta +(n\pi )^2+(m\pi )^2 \ge \pi ^2(n^2+m^2)-r\sin \beta \ge 2\pi ^2 - 1 >1\), which implies that \(|re^{i\beta }+(n\pi )^2+(m\pi )^2|\ge |r\cos \beta +(n\pi )^2+(m\pi )^2|>1\). Thus \(|\lambda +(n\pi )^2+(m\pi )^2|\ge 1\) for every \(\lambda \in \varGamma _1\). And so

$$\begin{aligned} \left| \int _{\varGamma _1} \frac{e^{\lambda t}}{\lambda +(n\pi )^2+(m\pi )^2}d\lambda \right|&= \left| \int _0^l\frac{e^{tre^{i\beta }}}{re^{i\beta }+(n\pi )^2+(m\pi )^2}d(re^{i\beta })\right| \\&\le \int _0^l\frac{|e^{tre^{i\beta }}|}{|re^{i\beta }+(n\pi )^2+(m\pi )^2|}dr \le \int _0^l e^{tr\cos \beta }dr \le \int _0^le^{tl}dr = le^{tl} \end{aligned}$$

This estimate together with (19) implies that there exists a positive integer \(k=k(t, a, K)\), computable from \(t>0\), a and K, such that

$$\begin{aligned} \frac{1}{1+2k^2}\left( \frac{le^{lt}}{2\pi }\right) ^2\left( \sum _{n, m\ge 1}(1+n^2+m^2)(|a^1_{n, m}|^2+|a^2_{n, m}|^2)\right) < 2^{-2(K+7)} \end{aligned}$$

Write \(\beta _1(k)=(\beta ^1_1(k), \beta ^2_1(k))\) with

$$\begin{aligned} \beta ^i_1(k)=\sum _{1\le n, m\le k}\left( \frac{1}{2\pi i}\int _{\varGamma _1}\frac{e^{\lambda t}}{\lambda + (n\pi )^2 + (m\pi )^2}d\lambda \right) a^i_{n, m}\sin (n\pi x)\sin (m\pi y), \end{aligned}$$

\(i=1, 2\). Then

$$\begin{aligned} \Vert \beta _1 - \beta _1(k)\Vert ^2_{2}&\le \sum _{n, m>k}\left| \frac{1}{2\pi i}\int _{\varGamma _1}\frac{e^{\lambda t}}{\lambda + (n\pi )^2+(m\pi )^2}d\lambda \right| ^2 (|a^1_{n, m}|^2+|a^2_{n, m}|^2) \\&\le \sum _{n,m >k}\frac{1}{1+n^2+m^2}\cdot (1+n^2+m^2)\left( \frac{le^{lt}}{2\pi }\right) ^2(|a^1_{n, m}|^2+|a^2_{n, m}|^2) \\&\le \frac{1}{1+k^2+k^2}\left( \frac{le^{lt}}{2\pi }\right) ^2 \sum _{n,m \ge 1}(1+n^2+m^2)(|a^1_{n, m}|^2+|a^2_{n, m}|^2) \\&< 2^{-2(K+7)} \end{aligned}$$

Similarly, if we write \(\beta _2(k)=(\beta ^1_2(k), \beta ^2_2(k))\) with

$$\begin{aligned} \beta ^i_2(k)=\sum _{n, m\le k}\left( \frac{1}{2\pi i}\int _{\varGamma _2}\frac{e^{\lambda t}}{\lambda + (n\pi )^2 + (m\pi )^2}d\lambda \right) a^i_{n, m}\sin (n\pi x)\sin (m\pi y) \end{aligned}$$

then \(\Vert \beta _2 - \beta _2(k)\Vert _{2}\le 2^{-(K+7)}\). The construction of the truncating algorithm is now complete: on inputs \(a\in \mathcal {P}\), \(t>0\), and precision K, the algorithm (uniformly) outputs \(\beta _1(k) + \beta _2(k)\), a finite sum involving only finite integrals, which satisfies \(\Vert \beta _1(k)+\beta _2(k)-e^{-t\mathbb {A}}a\Vert _{2}\le 2^{-(K+4)}\).
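For a single Fourier mode the resolvent in (21) is scalar, and the full contour integral (20) evaluates to \(e^{-\mu t}\) with \(\mu =(n\pi )^2+(m\pi )^2\); the truncated contour \(\varGamma _1\cup \varGamma _2\) therefore already captures this value up to the \(\varGamma _3\)-tail. The following toy check of this scalar identity uses hypothetical names and plain midpoint quadrature, whereas the proof works with rigorous error bounds:

```python
import cmath
import math

def semigroup_coeff(mu, t, beta=3 * math.pi / 5, l=40.0, steps=40000):
    """Evaluate (1/(2 pi i)) * int over Gamma_1 + Gamma_2 of
    e^{lambda t}/(lambda + mu) dlambda, contour truncated at radius l.
    The integrand on the lower ray is the conjugate of the one on the
    upper ray, so the two ray integrals sum to 2i * Im(I) with I the
    upper-ray integral; the result is therefore Im(I)/pi."""
    dr = l / steps
    upper = 0.0 + 0.0j
    for j in range(steps):
        r = (j + 0.5) * dr                 # midpoint of the j-th cell
        lam = r * cmath.exp(1j * beta)     # lambda = r e^{i beta}
        upper += cmath.exp(lam * t) * cmath.exp(1j * beta) / (lam + mu) * dr
    return upper.imag / math.pi
```

With \(\mu =1\), \(t=1\) the truncated integral agrees with \(e^{-1}\) to several digits; as in the proof, the truncation radius l must grow as t decreases, since the \(\varGamma _3\)-tail is controlled by \(\int _l^\infty e^{tr\cos \beta }/r\,dr\).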

Now we are able to search for a desirable approximation in \(\mathcal {P}\). Let us list \(\mathcal {P}=\{ \phi _j \, : \, j\in \mathbb {N}\}\) and compute \(\Vert \phi _j - (\beta _1(k)+\beta _2(k))\Vert _{2}\). Halt the computation at \(j=j(K)\) when

$$\begin{aligned} \Vert \phi _j - (\beta _1(k)+\beta _2(k))\Vert _{2} < 2^{-(K+4)} \end{aligned}$$

The computation will halt since \(\beta _1, \beta _2\in L^{\sigma }_{2,0}(\varOmega )\), \(\Vert \beta _1 - \beta _1(k)\Vert _{2}\le 2^{-(K+7)}\), \(\Vert \beta _2 - \beta _2(k)\Vert _{2}\le 2^{-(K+7)}\), and \(\mathcal {P}\) is dense in \(L^{\sigma }_{2,0}(\varOmega )\) (in \(L^2\)-norm). Set \(q_K=\phi _{j(K)}\). Then

$$\begin{aligned} \Vert q_K - e^{-t\mathbb {A}}a\Vert _{2}&= \Vert q_K -(\beta _1+\beta _2+\beta _3)\Vert _{2} \\&\le \Vert q_K - (\beta _1(k)+\beta _2(k))\Vert _{2} + \Vert (\beta _1(k)+\beta _2(k)) - (\beta _1+\beta _2)\Vert _{2} + \Vert \beta _3\Vert _{2} \\&< 2^{-K} \end{aligned}$$

Next we consider the case where \(a\in L^{\sigma }_{2, 0}(\varOmega )\) and \(t>0\). In this case, the input a is presented by (any) one of its \(\delta _{L^{\sigma }_{2, 0}}\)-names, say \(\{ a_k\}\), where \(a_k\in \mathcal {P}\). Since the estimate (17) shows that \(e^{-t\mathbb {A}}\) is a contraction on \(L^{\sigma }_{2,0}(\varOmega )\), we have \(\Vert e^{-t\mathbb {A}}a - e^{-t\mathbb {A}}a_{K+1}\Vert _2\le \Vert a - a_{K+1}\Vert _2\le 2^{-(K+1)}\); applying the algorithm above to \(a_{K+1}\in \mathcal {P}\) with precision \(K+1\) thus yields a K-approximation \(p_K\in \mathcal {P}\) with \(\Vert p_K - e^{-t\mathbb {A}}a\Vert _{2}\le 2^{-K}\).

Finally we consider the case where \(t\ge 0\) and \(a\in L^{\sigma }_{2, 0}(\varOmega )\). Since \(e^{-t\mathbb {A}}a=a\) for \(t=0\) and we have already derived an algorithm for computing \(e^{-t\mathbb {A}}a\) for \(t>0\), it suffices to show that \(e^{-t\mathbb {A}}a \rightarrow a\) in \(L^2\)-norm effectively as \(t\rightarrow 0\). Let \(\{ a_k\}\) be a \(\delta _{L^{\sigma }_{2,0}}\)-name of a. It follows from Theorem 6.13 in Sect. 2.6 of [10] that \(\Vert e^{-t\mathbb {A}}a_k - a_k\Vert \le Ct^{1/2}\Vert \mathbb {A}^{1/2}a_k\Vert \). Thus

$$\begin{aligned} \Vert a - e^{-t\mathbb {A}}a\Vert \le \Vert a - a_k\Vert + \Vert a_k - e^{-t\mathbb {A}}a_k\Vert + \Vert e^{-t\mathbb {A}}a_k - e^{-t\mathbb {A}}a\Vert \end{aligned}$$

where, by (17), the first and third terms on the right-hand side are each bounded by \(\Vert a - a_k\Vert \); hence the right-hand side goes to 0 effectively as \(t \rightarrow 0\).    \(\square \)

We note that the computation of the approximations \(q_K\) of \(e^{-t\mathbb {A}}a\) does not require encoding \(\mathbb {A}\). Let \(W: L^{\sigma }_{2,0}(\varOmega )\times [0, \infty ) \rightarrow L^{\sigma }_{2,0}(\varOmega )\), \((a, t)\mapsto e^{-t\mathbb {A}}a\). Then it follows from the previous Proposition and Fact 2 that W is computable.

5 Extension to the Nonlinear Problem

We now proceed to the nonlinear problem (2) by solving its integral version (4) via the iteration scheme (5) but first restrict to the homogeneous case \(\varvec{ g}\equiv \varvec{0}\):

$$\begin{aligned} \varvec{ u}_0(t)\;=\;e^{-t\mathbb {A}}\varvec{ a} , \qquad \varvec{ u}_{m+1}(t)\;=\;\varvec{ u}_0(t)\;-\;\int ^t_0e^{-(t-s)\mathbb {A}}\mathbb {B}\varvec{ u}_m(s)\,ds \end{aligned}$$
(23)

Classically, it is known that for every initial condition \(\varvec{ a}\in L^{\sigma }_{2,0}(\varOmega )\) the sequence \(\varvec{ u}_m=\varvec{ u}_m(t)\) converges near \(t=0\) to a unique limit \(\varvec{ u}\) solving (4) and thus (2). Since there is no explicit formula for the solution \(\varvec{u}\), the truncation/search algorithms used in the proofs of Propositions 2 and 3 are no longer applicable in the nonlinear case. Instead, we use a fixed-point argument to establish the computability of \(\varvec{ u}\). We shall show that the limit of the above sequence \(\varvec{ u}_m=\varvec{ u}_m(t)\) has an effective approximation. The proof consists of two parts: first we study the rate of convergence and show that the sequence converges at a computable rate as \(m\rightarrow \infty \) for \(t\in [0;T]\) with some \(T=T_{\varvec{ a}}>0\) computable from \(\varvec{ a}\); then we show that the sequence – as one entity – can be effectively approximated starting from the given \(\varvec{ a}\). The precise statements of these two tasks are given in the following two propositions.
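The structure of the iteration (23) can be conveyed by a scalar toy analogue in which the Stokes semigroup becomes \(e^{-\mu t}\) and the nonlinearity \(\mathbb {B}\) becomes squaring. This is purely an illustrative model with hypothetical names, not the Navier-Stokes iteration itself:

```python
import math

def picard(a, mu, T, steps, iters):
    """Scalar toy analogue of iteration (23): solve u' + mu*u + u^2 = 0,
    u(0) = a, via
        u_{m+1}(t) = e^{-mu t} a - int_0^t e^{-mu (t-s)} u_m(s)^2 ds,
    discretized on a uniform grid with the trapezoid rule.
    Returns the list [u_0, u_1, ..., u_iters] of grid functions."""
    dt = T / steps
    grid = [i * dt for i in range(steps + 1)]
    us = [[a * math.exp(-mu * t) for t in grid]]       # u_0(t) = e^{-mu t} a
    for _ in range(iters):
        prev = us[-1]
        nxt = []
        for i, t in enumerate(grid):
            if i == 0:
                nxt.append(a)                          # u_{m+1}(0) = a
                continue
            vals = [math.exp(-mu * (t - grid[j])) * prev[j] ** 2
                    for j in range(i + 1)]
            integral = dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
            nxt.append(a * math.exp(-mu * t) - integral)
        us.append(nxt)
    return us
```

For small a and T the map is a contraction, so successive differences shrink geometrically, which is the scalar shadow of the effective convergence established in Proposition 4; here the exact solution \(u(t)=\mu a e^{-\mu t}/(\mu + a(1-e^{-\mu t}))\) is available for comparison.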

Proposition 4

There is a computable map \(\mathbb {T}:L^{\sigma }_{2,0}(\varOmega )\rightarrow (0, \infty )\), \(\varvec{a}\mapsto T_{\varvec{ a}}\), such that the sequence \(\{ {\varvec{ u}}_m\}\) converges effectively in m and uniformly for \(t\in [0;T_{\varvec{ a}}]\).

Recall that a sequence \(\{x_m\}\) in a metric space (X, d) is effectively convergent if \(d(x_m,x_{m+1})\le 2^{-m}\). In view of type conversion (Subsect. 1.1), the following proposition asserts (ii):

Proposition 5

The map \(\mathbb {S}: \mathbb {N}\times L^{\sigma }_{2,0}(\varOmega )\times [0, \infty )\rightarrow L^{\sigma }_{2,0}(\varOmega )\), \((m, \varvec{ a}, t)\mapsto \varvec{u}_m(t)\) according to Eq. (23), is \(\big (\nu \times \delta _{L_{2,0}^{\sigma }}\times \rho , \delta _{L_{2,0}^{\sigma }}\big )\)-computable.

The main difficulties in proving the two propositions are rooted in the nonlinearity of \(\mathbb {B}\): the nonlinear operator \(\mathbb {B}\) requires greater care in estimating the rate of convergence and demands richer codings for computation. Since information on \(\mathbb {B}\varvec{u}_m\) is required in order to compute \(\varvec{ u}_{m+1}\), but \(\mathbb {B}\varvec{u}_m=\mathbb {P}\, (\varvec{ u}_m\cdot \nabla )\varvec{ u}_m\) involves both differentiation and multiplication, it follows that a \(\delta _{L_{2,0}^{\sigma }}\)-name of \(\varvec{u}_m\) may not contain enough information for computing \(\mathbb {B}\varvec{u}_m\). Moreover, since estimates of type \(\Vert \mathbb {A}^{\alpha }\varvec{u}_m(t)\Vert _2\), \(0\le \alpha \le 1\), play a key role in proving Propositions 4 and 5, we need to computationally derive a richer code for \(\varvec{u}_m\) from a given \(\delta _{L^{\sigma }_{2, 0}}\)-name of \(\varvec{u}_m\) in order to capture the fact that \(\varvec{u}_m\) is in the domain of \(\mathbb {A}^{\alpha }\) for \(t>0\).

5.1 Representing and Operating on Space \(H^s_{2, 0}(\varOmega )\)

We begin by recalling several definitions and facts. Let \(\theta _{n,m}(x, y):=e^{i ( nx + my)\pi }\), \(n, m\ge 0\). Then the sequence \(\{\theta _{n, m}(x, y)\}_{n, m\ge 0}\) is a computable orthogonal basis of \(L_{2}(\varOmega )\). For any \(s\ge 0\), \(H_{2}^{s}(\varOmega )\) is the set of all (generalized) functions w(x, y) on \(\varOmega \) satisfying \(\sum _{n,m\ge 0}(1+n^2+m^2)^s|w_{n,m}|^2<\infty \), where \(w_{n,m}=\int _{-1}^1\int _{-1}^1w(x, y)\theta _{n,m}(x, y)\,dx\,dy\). \(H_{2}^{s}(\varOmega )\) is a Banach space with the norm \(\Vert w\Vert _{H_{2}^{s}}=(\sum _{n,m\ge 0}(1+n^2+m^2)^s|w_{n,m}|^2)^{1/2}\).
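For finitely supported coefficient sequences, the \(H^s_2\)-norm can be evaluated directly from this definition; a minimal sketch with a hypothetical helper name:

```python
import math

def hs_norm(coeffs, s):
    """H^s_2 norm of a finitely supported coefficient sequence
    {(n, m): w_nm} in the basis theta_{n,m}:
    ||w||_{H^s_2} = (sum (1 + n^2 + m^2)^s |w_nm|^2)^{1/2}."""
    return math.sqrt(sum((1 + n * n + m * m) ** s * abs(w) ** 2
                         for (n, m), w in coeffs.items()))
```

Since the weights \((1+n^2+m^2)^s\) grow with s, the norms are monotone in s, reflecting the inclusion \(H^{s'}_2\subseteq H^s_2\) for \(s'\ge s\).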

Let \(D(\mathbb {A}^{\alpha })\) be the domain of \(\mathbb {A}^{\alpha }\). Since

$$\begin{aligned} D(\mathbb {A})&= L^{\sigma }_{2, 0}(\varOmega )\bigcap \{ \varvec{u}\in (H^2_2(\varOmega ))^2 : \varvec{u}=0 \ \hbox {on} \ \partial \varOmega \}, \\ D(\mathbb {A}^{1/2})&= L^{\sigma }_{2, 0}(\varOmega )\bigcap \{ \varvec{u}\in (H^1_2(\varOmega ))^2 : \varvec{u}=0 \ \hbox {on} \ \partial \varOmega \}, \end{aligned}$$

and \(D(\mathbb {A}^{\alpha })\), \(0\le \alpha \le 1\), are the complex interpolation spaces of \(L^{\sigma }_{2, 0}(\varOmega )\) and \(D(\mathbb {A})\), we need to represent the subspace of \(H^s_2(\varOmega )\) in which the functions vanish on \(\partial \varOmega \). However, it is usually difficult to design a coding system for such subspaces. Fortunately, for \(0\le s<3/2\), it is known classically that

$$\begin{aligned} H^s_{2, 0}(\varOmega ) = \{ w\in H^s_{2}(\varOmega ) \, : \, w=0 \ \hbox {on} \ \partial \varOmega \} \end{aligned}$$
(24)

where \(H^s_{2, 0}(\varOmega )\) is the closure in \(H^s_2\)-norm of the set of all \(C^{\infty }\)-smooth functions with compact support in \(\varOmega \). For \(H^s_{2, 0}(\varOmega )\), there is a canonical coding system

$$ \mathcal {H} = \{ \gamma _n*q \, : \, n\in \mathbb {N}, q\in \mathbb {Q}[\mathbb {R}^2] \} $$

(see (11) and (12) for the definitions of \(\gamma _n\) and \(\gamma _n *q\)). Then every w in \(H^s_{2, 0}(\varOmega )\) can be encoded by a sequence \(\{ p_k\}\subset \mathcal {H}\) such that \(\Vert p_k - w\Vert _{H^s_2}\le 2^{-k}\); the sequence \(\{ p_k \}\), whose terms are mollified polynomials with rational coefficients, is called a \(\delta _{H^s_{2, 0}}\)-name of w. If \(w=(w_1, w_2)\in H^s_{2, 0}(\varOmega )\times H^s_{2, 0}(\varOmega )\), a \(\delta _{H^s_{2, 0}}\)-name of w is a sequence \(\{ (p_k, q_k)\}\), \(p_k, q_k\in \mathcal {H}\), such that \((\Vert w_1 - p_k\Vert ^2_{H^s_{2, 0}} + \Vert w_2 - q_k\Vert ^2_{H^s_{2, 0}})^{1/2}\le 2^{-k}\).

Notation 4

We use \(\Vert w\Vert _{H^s_2}\) to denote the \(H^s_2\)-norm of w if w is in \(H^s_2(\varOmega )\) or \(H^s_2\times H^s_2\)-norm of w if w is in \(H^s_2(\varOmega )\times H^s_2(\varOmega )\). Also for readability we use \([\rho \rightarrow \delta _{H_{2,0}^{s}}]\) to denote the canonical representation of either \(C\big ([0;T]; H_{2,0}^{s}(\varOmega )\big )\) or \(C\big ([0;T]; H_{2,0}^{s}(\varOmega )\times H_{2,0}^{s}(\varOmega )\big )\).

Recall that \(C\big ([0;T]; H_{2,0}^{s}(\varOmega )\big )\) is the set of all continuous functions from the interval [0; T] to \(H_{2,0}^{s}(\varOmega )\). A function \(u\in C\big ([0;T]; H_{2,0}^{s}(\varOmega )\big )\) is computable if there is a machine that computes a \(\delta _{H_{2,0}^{s}}\)-name of u(t) when given a \(\rho \)-name of t as input; and a map \(F: X\rightarrow C\big ([0;T]; H_{2,0}^{s}(\varOmega )\big )\) from a represented space \((X, \delta _X)\) to \(C\big ([0;T]; H_{2,0}^{s}(\varOmega )\big )\) is computable if there is a machine that computes a \(\delta _{H_{2,0}^{s}}\)-name of F(x)(t) when given a \(\delta _X\)-name of x and a \(\rho \)-name of t. Let X be either \(L_{2}(\varOmega )\), \(L^{\sigma }_{2, 0}(\varOmega )\), \(H_{2,0}^{s}(\varOmega )\), or \(C\big ([0;T]; H_{2,0}^{s}(\varOmega )\big )\). We remark again that a \(\delta _{X}\)-name of \(f\in X\) is simply an effective approximation of f because each space X is equipped with a norm.

Lemma 4

For \(s\ge 1\), differentiation \(\partial _x,\partial _y: H_{2,0}^{s}(\varOmega )\rightarrow L_{2}(\varOmega )\) is \((\delta _{H_{2,0}^{s}}, \delta _{L_{2}})\)-computable.

Proof

Let \(\{ p_k\}\) be a \(\delta _{H_{2,0}^{s}}\)-name of \(w\in H^s_{2, 0}(\varOmega )\). Since \(\partial _x (\gamma *q) = \gamma *\partial _xq\), the map \(p_k \mapsto \partial _x p_k\) is computable; hence a polynomial \(\tilde{p}_k\) in \(\mathbb {Q}[\mathbb {R}^2]\) can be computed from \(p_k\) such that \(\max _{-1\le x, y\le 1}|\partial _x p_k - \tilde{p}_k|<2^{-k}\). Next let us express w and \(p_k\) in the orthogonal basis \(\theta _{n, m}\): \(w(x, y)=\sum _{n,m\ge 0}w_{n, m}e^{in\pi x}e^{im\pi y}\) and \(p_k(x, y)= \sum _{n, m\ge 0}p_{k, n, m}e^{in\pi x}e^{im\pi y}\), where

$$\begin{aligned}&w_{n,m}=\int _0^1\int _0^1w(x, y)e^{in\pi x}e^{im\pi y}\,dx\,dy\, , \\&p_{k, n, m}=\int _0^1\int _0^1p_k(x, y)e^{in\pi x}e^{im\pi y}\,dx\,dy\, . \end{aligned}$$

Since \(s\ge 1\) and \(\{ p_k\}\) is a \(\delta _{H_{2,0}^{s}}\)-name of w, it follows that

$$\begin{aligned}&\Vert \partial _x p_k - \partial _x w\Vert ^2_2 \\&= \left\| \sum \nolimits _{n,m}in\pi (p_{k,n,m}-w_{n,m})e^{in\pi x}e^{im\pi y}\right\| ^2_2 \; =\; \pi ^2\sum \nolimits _{n,m}n^2|p_{k, n, m}-w_{n, m}|^2 \\&= \pi ^2\sum _{n, m}\frac{n^2}{(1+n^2+m^2)^s} (1+n^2+m^2)^s|p_{k, n, m}-w_{n, m}|^2 \\&\le \pi ^2\sum \nolimits _{n, m}(1+n^2+m^2)^s|p_{k, n, m}-w_{n, m}|^2 \;=\; \pi ^2\Vert p_k-w\Vert ^2_{H_{2}^{s}} \;\le \; \pi ^2\cdot 2^{-2k} \end{aligned}$$

which further implies that

$$ \Vert \tilde{p}_k - \partial _x w\Vert _2 \le \Vert \tilde{p}_k - \partial _x p_k\Vert _2 + \Vert \partial _x p_k - \partial _x w\Vert _2 \le 2^{-k} + \pi 2^{-k} $$

Thus, since \(2^{-k} + \pi 2^{-k}\le 2^{-(k-3)}\), the shifted sequence \(\{ \tilde{p}_{k+3} \}\) is, by definition, a \(\delta _{L_{2}}\)-name of \(\partial _x w\).
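The key estimate of this proof, namely that differentiation from \(H^s\) into \(L_2\) loses at most a factor \(\pi \) when \(s\ge 1\), can be checked on finitely supported coefficient dictionaries; all helper names below are hypothetical:

```python
import math

def l2_norm(coeffs):
    # L2 norm of a coefficient dict {(n, m): c}
    return math.sqrt(sum(abs(c) ** 2 for c in coeffs.values()))

def hs_norm(coeffs, s):
    # H^s_2 norm: weight (1 + n^2 + m^2)^s on the (n, m)-th coefficient
    return math.sqrt(sum((1 + n * n + m * m) ** s * abs(c) ** 2
                         for (n, m), c in coeffs.items()))

def d_dx(coeffs):
    # x-derivative in the basis e^{i n pi x} e^{i m pi y}: multiply by i*n*pi
    return {(n, m): 1j * math.pi * n * c for (n, m), c in coeffs.items()}
```

The bound \(\Vert \partial _x c\Vert _2\le \pi \Vert c\Vert _{H^s_2}\) holds for every \(s\ge 1\) because \(n^2\le (1+n^2+m^2)^s\) termwise.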

It is known classically that every polygonal domain in \(\mathbb {R}^2\) is Lipschitz (see, for example, [9]) and \(H_{2}^{s}(U)\) is continuously embedded in \(C(\overline{U})\) if \(s>1\) and U is a bounded Lipschitz domain, where \(\overline{U}\) is the closure of U in \(\mathbb {R}^2\) and \(C(\overline{U})\) is the set of all continuous functions on \(\overline{U}\). Since \(\varOmega \) is a bounded polygonal domain, it follows that for any \(s>1\), there is a constant \(C_s>0\) such that \(\Vert w\Vert _{C(\overline{\varOmega })}\le C_s\Vert w\Vert _{H_{2}^{s}(\varOmega )}\), where \(\Vert w\Vert _{C(\overline{\varOmega })}=\Vert w\Vert _\infty =\max \{ |w(x, y)| \, : \, (x, y)\in \overline{\varOmega }\}\).

Lemma 5

For \(s>1\), multiplication \(Mul: H_{2}^{s}(\varOmega )\times L_{2}(\varOmega )\rightarrow L_{2}(\varOmega )\), \((v, w)\mapsto vw\), is \((\delta _{H_{2,0}^{s}}\times \delta _{L_{2}}, \delta _{L_{2}})\)-computable.

Proof

Assume that \(\{ p_k\}\) is a \(\delta _{H_{2,0}^{s}}\)-name of v and \(\{ q_k\}\) is a \(\delta _{L_{2}}\)-name of w. For each \(n\in \mathbb {N}\), pick \(k(n)\in \mathbb {N}\) such that \(C_{s}\Vert v\Vert _{H_{2}^{s}}\Vert w-q_{k(n)}\Vert _2\le 2^{-(n+1)}\). Since \(\Vert v\Vert _{H_{2}^{s}}\) is computable from \(\{ p_k\}\), the function \(n\mapsto k(n)\) is computable from \(\{p_k\}\) and \(\{ q_k\}\). Next pick \(m(n)\in \mathbb {N}\) such that \(\Vert q_{k(n)}\Vert _{C(\overline{\varOmega })}\Vert v- p_{m(n)}\Vert _{H_{2}^{s}}\le 2^{-(n+1)}\). It is clear that m(n) is computable from k(n), \(\{ q_k\}\), and \(\{ p_k\}\). The sequence \(\{ p_{m(n)}q_{k(n)}\}_n\) is then a \(\delta _{L_{2}}\)-name of vw, for it is a sequence of polynomials with rational coefficients and \(\Vert vw - p_{m(n)} q_{k(n)}\Vert _2 \le \Vert v\Vert _{C(\overline{\varOmega })}\Vert w - q_{k(n)}\Vert _2 + \Vert q_{k(n)}\Vert _{C(\overline{\varOmega })}\Vert v- p_{m(n)}\Vert _{H_{2}^{s}}\le 2^{-n}\).
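The two-step choice of indices in this proof has a simple real-number analogue: to approximate a product from approximations of the factors, first make \(|v|\,|w-q|\) small using an a priori bound on |v|, then make \(|q|\,|v-p|\) small using the already-computed q. A sketch under these assumptions (hypothetical names; v_bound must satisfy \(v\_bound\ge \max (|v|,1)\)):

```python
import math

def product_name(v_name, w_name, v_bound, n):
    """Toy (real-number) version of the two-step index choice:
    v_name(k), w_name(k) are 2^{-k}-approximations of v, w.
    The returned product is a 2^{-n}-approximation of v*w, since
    |v*w - p*q| <= |v| |w - q| + |q| |v - p|."""
    k = n + 1 + math.ceil(math.log2(v_bound))   # |v| * 2^{-k} <= 2^{-(n+1)}
    q = w_name(k)
    q_bound = abs(q) + 1
    m = n + 1 + math.ceil(math.log2(q_bound))   # |q| * 2^{-m} <= 2^{-(n+1)}
    return v_name(m) * q
```

For example, with dyadic-rounding names of \(\pi \) and e and v_bound = 4, the result approximates \(\pi e\) within \(2^{-n}\).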

5.2 Some Classical Properties of Fractional Powers of \(\mathbb {A}\)

It is known that fractional powers of the Stokes operator \(\mathbb {A}\) are well defined; cf. [10, Section 2.6]. In the following, we summarize some classical properties of the Stokes operator and its fractional powers; these properties will be used in later proofs.

Fact 5

Let \(\mathbb {A}\) be the Stokes operator.

  (1)

    For every \(0\le \alpha \le 1\), let \(D(\mathbb {A}^{\alpha })\) be the domain of \(\mathbb {A}^{\alpha }\); this is a Banach space with the norm \(\Vert \varvec{u}\Vert _{D(\mathbb {A}^{\alpha })}:=\Vert \mathbb {A}^{\alpha }\varvec{u}\Vert _{L^{\sigma }_{2,0}(\varOmega )}=\Vert \mathbb {A}^{\alpha }\varvec{ u}\Vert _{2}\). In particular, \(D(\mathbb {A}^{\alpha })\) is continuously embedded in \(H_{2}^{2\alpha }\), that is, for every \(\varvec{ u}\in D(\mathbb {A}^{\alpha })\),

    $$\begin{aligned} \Vert \varvec{ u}\Vert _{H_{2}^{2\alpha }} \le C\Vert \mathbb {A}^{\alpha }\varvec{ u}\Vert _{2} = C\Vert \varvec{u}\Vert _{D(\mathbb {A}^{\alpha })} \end{aligned}$$
    (25)

    where C is a constant independent of \(\alpha \). Moreover, we have \(D(\mathbb {A}^{1/2})=L^{\sigma }_{2, 0}(\varOmega )\bigcap \{ \varvec{u}\in (H^1_2(\varOmega ))^2 : \varvec{u}=0 \ \hbox {on} \ \partial \varOmega \}\).

  (2)

    For every nonnegative \(\alpha \) the estimate

    $$\begin{aligned} \Vert \mathbb {A}^{\alpha }e^{-t\mathbb {A}}\varvec{ u}\Vert _{2}\le C_{\alpha } t^{-\alpha }\Vert \varvec{ u}\Vert _{2}, \quad t>0 \end{aligned}$$
    (26)

    is valid for all \(\varvec{ u}\in L^{\sigma }_{2,0}(\varOmega )\), where \(C_{\alpha }\) is a constant depending only on \(\alpha \); in particular, \(C_0=1\). Moreover, the estimate implicitly implies that for every \(\varvec{u}\in L^{\sigma }_{2, 0}(\varOmega )\) and \(t>0\), \(e^{-t\mathbb {A}}\varvec{u}\) is in the domain of \(\mathbb {A}\), and thus \(e^{-t\mathbb {A}}\varvec{u}\) vanishes on the boundary of \(\varOmega \).

  (3)

    If \(\alpha \ge \beta >0\), then \(D(\mathbb {A}^{\alpha })\subseteq D(\mathbb {A}^{\beta })\).

  (4)

    For \(0<\alpha <1\), if \(\varvec{ u}\in D(\mathbb {A})\), then

    $$\begin{aligned} \mathbb {A}^{\alpha }\varvec{ u}=\frac{\sin \pi \alpha }{\pi }\int _{0}^{\infty }t^{\alpha -1}\mathbb {A}(t\mathbb {I}+\mathbb {A})^{-1}\varvec{u}\,dt \end{aligned}$$
  (5)

    \(\Vert \mathbb {A}^{-1/4}\mathbb {P}(\varvec{ u}\cdot \nabla )\varvec{ v}\Vert _{2}\le M \Vert \mathbb {A}^{1/4}\varvec{ u}\Vert _{2}\Vert \mathbb {A}^{1/2}\varvec{ v}\Vert _{2}\) is valid for all \(\varvec{ u}, \varvec{ v}\) in the domain of \(\mathbb {A}^{3/5}\), where M is a constant independent of \(\varvec{ u}\) and \(\varvec{ v}\).

Proof

See Lemmas 2.1, 2.2 and 2.3 in [3] for (1) and (2), except for the claim \(C_0=1\), which is proved in Lemma 3 above. See Theorems 6.8 and 6.9 in Sect. 2.6 of [10] for (3) and (4), and Lemma 3.2 in [3] for (5).

We record, without going into the details, that the constants C, M, and \(C_{\alpha }\) (\(0\le \alpha \le 1\)) appearing in Fact 5 are in fact computable (a general discussion of the computability of Sobolev embedding constants, interpolation constants, and other constants in PDE theory is forthcoming).
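As a sanity check of estimate (26) in the simplest possible setting, consider the diagonal model in which \(\mathbb {A}\) acts on each mode by multiplication with \(\mu >0\); there \(\sup _{\mu >0}\mu ^{\alpha }e^{-t\mu }=(\alpha /(et))^{\alpha }\), attained at \(\mu =\alpha /t\), so \(C_{\alpha }=(\alpha /e)^{\alpha }\) works. A numerical sketch of this model (hypothetical names; not the Stokes operator itself):

```python
import math

def smoothing_sup(alpha, t, mus):
    """For the diagonal model (A u)_mu = mu * u_mu, estimate (26) reduces
    to bounding mu^alpha * e^{-t mu} over mu > 0; the supremum is
    (alpha/(e*t))^alpha, attained at mu = alpha/t."""
    return max(mu ** alpha * math.exp(-t * mu) for mu in mus)
```

Evaluating the supremum on a grid containing \(\mu =\alpha /t\) recovers the closed-form constant, confirming the \(t^{-\alpha }\) blow-up rate as \(t\rightarrow 0\).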

5.3 Proof of Proposition 4

In order to show that the iteration sequence is effectively convergent, we need to establish several estimates on quantities such as \(\Vert \mathbb {A}^{\beta }u_m(t)\Vert _2\) and \(\Vert \mathbb {A}^{\beta }(u_{m+1}(t)-u_m(t))\Vert _2\) for certain positive values of \(\beta \). Consequently, as a prerequisite, \(u_m(t)\) must be in the domain of \(\mathbb {A}^{\beta }\); thus the functions \(u_m(t)\) are required to have higher smoothness than the given initial function \(\varvec{a}\), according to Fact 5-(1). This is indeed the case: for functions \(u_m(t)\) obtained by the iteration (23), it is known classically that if \(u_m(0)\in L_2(\varOmega )\) then \(u_m(t)\in H^{2\alpha }_2(\varOmega )\) for \(t>0\), where \(0\le \alpha \le 1\). In other words, \(u_m(t)\) undergoes a jump in smoothness from \(t=0\) to \(t>0\) (due to the integration). In the following lemma, we present an algorithmic version of this gain in smoothness.

Lemma 6

Let \(\alpha =3/5\). Then for the iteration (23)

$$\begin{aligned} \varvec{ u}_0(t)\;=\;e^{-t\mathbb {A}}\varvec{ a} , \qquad \varvec{ u}_{m+1}(t)\;=\;\varvec{ u}_0(t)\;-\;\int ^t_0e^{-(t-s)\mathbb {A}}\mathbb {B}\varvec{ u}_m(s)\,ds \end{aligned}$$

the mapping \(\mathbb {S}_{H}: \mathbb {N}\times L^{\sigma }_{2,0}(\varOmega )\times (0, \infty )\rightarrow H_{2, 0}^{2\alpha }(\varOmega )\times H_{2, 0}^{2\alpha }(\varOmega )\), \((m, \varvec{ a}, t)\mapsto \varvec{u}_m(t)\), is well-defined and \((\nu \times \delta _{L_{2,0}^{\sigma }}\times \rho , \delta _{H_{2,0}^{2\alpha }})\)-computable.

We emphasize that the lemma holds true for \(t>0\) only. Also, the choice of \(\alpha = 3/5\) is somewhat arbitrary; in fact, \(\alpha \) can be selected to be any rational number strictly between \(\frac{1}{2}\) and \(\frac{3}{4}\). The requirement \(\alpha < \frac{3}{4}\) guarantees that \(D(\mathbb {A}^{\alpha })\subset H^{2\alpha }_{2, 0}(\varOmega )\times H^{2\alpha }_{2, 0}(\varOmega )\) because \(2\alpha < 3/2\) (see (24)). The other condition, \(\alpha > \frac{1}{2}\), ensures that Lemma 5 can be applied, since then \(2\alpha > 1\).

Proof

We induct on m. Note that for any \(t>0\) and any \(a\in L^{\sigma }_{2, 0}(\varOmega )\), the estimates (25) and (26) imply that

$$\begin{aligned} \Vert e^{-t\mathbb {A}}a\Vert _{H^{2\alpha }_{2}}\le C\Vert \mathbb {A}^{\alpha }e^{-t\mathbb {A}}a\Vert _{2}\le CC_{\alpha }t^{-\alpha }\Vert a\Vert _{2} \end{aligned}$$

Combining this inequality with the following strengthened version of (19): for any \(a\in \mathcal {P}\),

$$\begin{aligned} \sum _{n, m\ge 1}(1+n^2+m^2)^{2}|a_{n,m}|^2<\infty \end{aligned}$$

(the inequality is valid since a is \(C^{\infty }\)), an argument similar to the one used to prove Proposition 3 works for \(m=0\). Moreover, by type conversion (Fact 2), \(a\in L^{\sigma }_{2, 0}(\varOmega ) \mapsto u_0\in C((0, \infty ), H^{6/5}_{2, 0}(\varOmega )\times H^{6/5}_{2, 0}(\varOmega ))\) is \((\delta _{L^{\sigma }_{2, 0}}, [\rho \rightarrow \delta _{H^{6/5}_{2, 0}}])\)-computable.

Assume that \((j, a)\mapsto u_j\) is \((\nu , \delta _{L^{\sigma }_{2, 0}}, [\rho \rightarrow \delta _{H^{6/5}_{2, 0}}])\)-computable for \(0\le j\le m\), where \(a\in L^{\sigma }_{2, 0}(\varOmega )\), and \(u_j\in C((0, \infty ), (H^{6/5}_{2, 0}(\varOmega ))^2)\). We show how to compute a \(\delta _{H_{2,0}^{6/5}}\)-name for \(u_{m+1}(t)= e^{-t\mathbb {A}}a -\int ^t_0e^{-(t-s)\mathbb {A}}\mathbb {B}u_{m}(s)ds\) on inputs \(m+1\), a and \(t>0\). Let us first look into the nonlinear term \(\mathbb {B}u_m\). It is clear that \(\mathbb {B}u_m(s)\) lies in \(L^{\sigma }_{2, 0}(\varOmega )\) for \(s>0\). Moreover, it follows from Lemmas 4 and 5, and Proposition 2 that the map \((u_m, s)\mapsto \mathbb {B}u_m(s)\) is \(([\rho \!\rightarrow \!\delta _{H_{2,0}^{2\alpha }}], \rho , \delta _{L_{2,0}^{\sigma }})\)-computable for all \(s\in (0, t]\). Now since \(\mathbb {B}u_m(s)\) is in \(L^{\sigma }_{2, 0}(\varOmega )\) for \(s>0\), it follows from the case where \(m=0\) that \((u_m, s)\mapsto e^{-(t-s)\mathbb {A}}\mathbb {B}u_m(s)\) is \(([\rho \!\rightarrow \!\delta _{H_{2,0}^{2\alpha }}], \rho , \delta _{H_{2,0}^{6/5}})\)-computable for \(0< s < t\).

Next let us consider the integral \(\int ^t_0e^{-(t-s)\mathbb {A}}\mathbb {B}\varvec{u}_m(s)\,ds\); we wish to compute a \(\delta _{H^{6/5}_{2, 0}}\)-name of the integral from a and \(t>0\). We make use of the following fact: For \(\theta \ge 1\), the integration operator from \(C([a, b]; H^{\theta }_{2, 0}(\varOmega )\times H^{\theta }_{2, 0}(\varOmega ))\) to \(H^{\theta }_{2, 0}(\varOmega )\times H^{\theta }_{2, 0}(\varOmega )\), \(F\mapsto \int ^{b}_{a}F(t)(x)dt\), is computable from a, b, and F. This fact can be proved by an argument similar to the one used in the proof of Lemma 3.7 of [24]. However, since the function \(e^{-(t-s)\mathbb {A}}\mathbb {B}u_m(s)\) is not necessarily in \((H^{6/5}_{2}(\varOmega ))^2\) when \(s=0\) or \(s=t\), the stated fact cannot be directly applied to the given integral. To overcome the problem of possible singularities at the two endpoints, we use a sequence of closed subintervals \([t_n, t-t_n]\) to approximate the open interval (0, t), where \(t_n=t/2^n\), \(n\ge 1\). Then it follows from the stated fact and the induction hypothesis that a \(\delta _{H^{6/5}_{2, 0}}\)-name, say \(\{ p_{n, K}\}\), of \(u^n_{m+1}(t)=e^{-t\mathbb {A}}a -\int ^{t-t_n}_{t_n}e^{-(t-s)\mathbb {A}}\mathbb {B}u_m(s)ds\) can be computed from inputs n, \(u_m\), and \(t>0\), which satisfies the condition that \(\Vert u^n_{m+1}(t) - p_{n, K}\Vert _{H^{6/5}_{2}}\le 2^{-K}\). Thus if we can show that the integrals \(\int _{0}^{t_n}e^{-(t-s)\mathbb {A}}\mathbb {B}u_m(s)ds\) and \(\int _{t-t_n}^{t}e^{-(t-s)\mathbb {A}}\mathbb {B}u_m(s)ds\) tend to zero effectively in \(H_{2}^{6/5}\times H_{2}^{6/5}\)-norm as \(n\rightarrow \infty \), then we can effectively construct a \(\delta _{H_{2,0}^{6/5}}\)-name of \(u_{m+1}(t)\) from \(\{ p_{n, K}\}_{n, K}\).

It remains to show that both sequences of integrals tend to zero effectively in \(H_{2}^{6/5}\times H_{2}^{6/5}\)-norm as \(n\rightarrow \infty \). Since a similar argument works for both sequences, it suffices to show that the sequence \(\hbox {Int}_n := \int _{0}^{t_n}e^{-(t-s)\mathbb {A}}\mathbb {B}u_m(s)ds\) tends to zero effectively as \(n\rightarrow \infty \). We make use of Fact 5-(1), (2), (5) to establish the effective convergence. The following two claims comprise the proof.

Claim I. Let \(\beta =\frac{1}{2}\) or \(\frac{1}{4}\). Then the map \((a, t, m, \beta ) \mapsto M_{\beta , m}\) is computable, where \(M_{\beta , m}\) is a positive number satisfying the condition

$$\begin{aligned} \Vert \mathbb {A}^{\beta }u_m(s)\Vert _2\le M_{\beta , m}s^{-\beta } \quad \hbox {for all} \ 0<s<t \end{aligned}$$
(27)

(note that \(M_{\beta , m}\) is independent of s).

Proof. Again we induct on m. For \(m=0\), let \(M_{\beta , 0}=C_{\beta }\Vert a\Vert _2\), where \(C_{\beta }\) is the constant in estimate (26) with \(\alpha \) replaced by \(\beta \) and u by a. Then \(M_{\beta , 0}\) is computable from a and \(\beta \), and \(\Vert \mathbb {A}^{\beta }u_0(s)\Vert _{2}\le C_{\beta }s^{-\beta }\Vert a\Vert _2=M_{\beta , 0}s^{-\beta }\) for any \(s>0\). Assume that \(M_{\beta , k}\), \(0\le k\le m\), have been computed from \(k, \beta , a\), and \(t>0\). We show how to compute \(M_{\beta , m+1}\). Since \(u_{m+1}(s)\) has a singularity at \(s=0\), it may not be in \(H^{2\beta }_{2}(\varOmega )\times H^{2\beta }_{2}(\varOmega )\) at \(s=0\) (recall that \(D(\mathbb {A}^{1/2})=L^{\sigma }_{2, 0}(\varOmega )\bigcap \{\varvec{u}\in H^1_2(\varOmega )\times H^1_2(\varOmega ):\) \(\varvec{u}=0 \ \hbox {on} \ \partial \varOmega \}\)). Let us first compute a bound (in \(L_2\)-norm) for \(\mathbb {A}^{\beta }\int ^{s}_{\epsilon }e^{-(s-r)\mathbb {A}}\mathbb {B}u_{m}(r)dr\), where \(0<\epsilon < s\). It follows from the induction hypothesis, Fact 5-(1), (2), (5), and Theorems 6.8 and 6.13 in [10] that

$$\begin{aligned}&\Vert \mathbb {A}^{\beta } \int ^{s}_{\epsilon }e^{-(s-r)\mathbb {A}}\mathbb {B}u_{m}(r)dr\Vert _2 \nonumber \\&= \Vert \int ^{s}_{\epsilon }\mathbb {A}^{\beta +1/4}e^{-(s-r)\mathbb {A}}\mathbb {A}^{-1/4}\mathbb {B}u_{m}(r)dr\Vert _2 \nonumber \\&\le C_{\beta +1/4}\int ^{s}_{\epsilon }(s-r)^{-(\beta +1/4)}\Vert \mathbb {A}^{-1/4}\mathbb {B}u_{m}(r)\Vert _2dr \nonumber \\&\le C_{\beta +1/4}M\int ^{s}_{\epsilon }(s-r)^{-(\beta +1/4)}\Vert \mathbb {A}^{1/4} u_{m}(r)\Vert _2\Vert \mathbb {A}^{1/2}u_{m}(r)\Vert _2dr \nonumber \\&\le C_{\beta +1/4}MM_{1/4, m}M_{1/2, m}\int ^{s}_{\epsilon }(s-r)^{-(\beta +1/4)}r^{-3/4}dr \end{aligned}$$
(28)

Subsequently, we obtain that

$$\begin{aligned}&\Vert \mathbb {A}^{\beta }u_{m+1}(s)\Vert _{2} \nonumber \\&= \Vert \mathbb {A}^{\beta }u_0(s) - \int ^{s}_{0}\mathbb {A}^{\beta }e^{-(s-r)\mathbb {A}}\mathbb {B}u_{m}(r)dr \Vert _2 \nonumber \\&\le M_{\beta , 0}s^{-\beta } + \Vert \lim _{\epsilon \rightarrow 0}\int ^{s}_{\epsilon }\mathbb {A}^{\beta }e^{-(s-r)\mathbb {A}}\mathbb {B}u_{m}(r)dr \Vert _2 \nonumber \\&\le M_{\beta , 0}s^{-\beta } + C_{\beta + \frac{1}{4}}MM_{\frac{1}{4}, m}M_{\frac{1}{2}, m}\int ^{s}_{0}(s-r)^{-(\beta +\frac{1}{4})}r^{-3/4}dr \nonumber \\&= M_{\beta , 0}s^{-\beta } + C_{\beta + \frac{1}{4}}MM_{\frac{1}{4}, m}M_{\frac{1}{2}, m}B(\frac{3}{4}-\beta , \frac{1}{4})s^{-\beta } \end{aligned}$$
(29)

where \(B(\frac{3}{4}-\beta , 1/4)\) is the integral \(\int _{0}^{1}(1-\theta )^{(\frac{3}{4}-\beta )-1}\theta ^{\frac{1}{4}-1}d\theta \), which is the value of the Beta function \(B(x, y)=\int ^1_0 (1-\theta )^{x-1}\theta ^{y-1}d\theta \) at \(x=\frac{3}{4}-\beta \) and \(y=\frac{1}{4}\). It is clear that \(B(\frac{3}{4}-\beta , 1/4)\) is computable. Thus if we set

$$\begin{aligned} M_{\beta , m+1}=M_{\beta , 0} + C_{\beta + \frac{1}{4}}MM_{\frac{1}{4}, m}M_{\frac{1}{2}, m}B\left( \frac{3}{4}-\beta , \frac{1}{4}\right) \end{aligned}$$
(30)

then \(M_{\beta , m+1}\) is computable and satisfies the condition that \(\Vert \mathbb {A}^{\beta }u_{m+1}(s)\Vert _{2}\le M_{\beta , m+1}s^{-\beta }\) for all \(0<s<t\). The proof of Claim I is complete.
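The constant entering (30) involves the Beta function at rational arguments, which is indeed computable. As an illustrative check (not part of the proof), such values can be evaluated numerically via the standard identity \(B(x,y)=\Gamma (x)\Gamma (y)/\Gamma (x+y)\); the helper name `beta` below is our own:

```python
from math import gamma, isclose, pi

def beta(x: float, y: float) -> float:
    # Standard identity B(x, y) = Gamma(x) * Gamma(y) / Gamma(x + y), valid for x, y > 0.
    return gamma(x) * gamma(y) / gamma(x + y)

# Sanity checks against known values: B(1, 1) = 1 and B(1/2, 1/2) = pi.
assert isclose(beta(1.0, 1.0), 1.0)
assert isclose(beta(0.5, 0.5), pi)

# The value entering (29)/(30) for alpha = 3/5, namely B(3/4 - 3/5, 1/4) = B(3/20, 1/4),
# is finite because both arguments are strictly positive.
print(beta(3 / 4 - 3 / 5, 1 / 4))
```

In particular, \(B(\frac{3}{4}-\beta , \frac{1}{4})\) is finite precisely because \(\beta < \frac{3}{4}\) keeps both arguments strictly positive.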

Claim II. \(\left\| \int ^{t_n}_{0}e^{-(t-s)\mathbb {A}}\mathbb {B}u_m(s)ds \right\| _{H_{2}^{6/5}} \rightarrow 0\) effectively as \(n\rightarrow \infty \).

Proof. Once again, to avoid the singularity of \(u_m(s)\) at \(s=0\), we begin with the following estimate: Let \(0<\epsilon < t_n\). Then it follows from Fact 5-(1), (2), (5), (27), (30), and a calculation similar to that performed in Claim I that

$$\begin{aligned}&\Vert \int ^{t_n}_{\epsilon } e^{-(t-s)\mathbb {A}}\mathbb {B}u_{m}(s)ds \Vert _{H_{2}^{6/5}} \\\le & {} C\Vert \mathbb {A}^{3/5}\int ^{t_n}_{\epsilon } e^{-(t-s)\mathbb {A}}\mathbb {B}u_{m}(s)ds\Vert _2 \\\le & {} CC_{17/20}MM_{\frac{1}{4}, m}M_{\frac{1}{2}, m}\int ^{t_n}_{\epsilon }(t-s)^{-17/20}s^{-3/4}ds \\\le & {} CC_{17/20}MM_{\frac{1}{4}, m}M_{\frac{1}{2}, m}(t-t_n)^{-17/20}\cdot 4(t_{n}^{1/4}-\epsilon ^{1/4}) \end{aligned}$$

which then implies that

$$\begin{aligned}&\Vert \int ^{t_n}_{0} e^{-(t-s)\mathbb {A}}\mathbb {B}u_{m}(s)ds \Vert _{H_{2}^{6/5}} \\= & {} \Vert \lim _{\epsilon \rightarrow 0}\int ^{t_n}_{\epsilon } e^{-(t-s)\mathbb {A}}\mathbb {B}u_{m}(s)ds \Vert _{H_{2}^{6/5}} \\\le & {} \lim _{\epsilon \rightarrow 0}CC_{17/20}MM_{\frac{1}{4}, m}M_{\frac{1}{2}, m}(t-t_n)^{-17/20}\cdot 4(t_{n}^{1/4}-\epsilon ^{1/4}) \\= & {} CC_{17/20}MM_{\frac{1}{4}, m}M_{\frac{1}{2}, m}(t-t_n)^{-17/20}\cdot 4t_{n}^{1/4} \end{aligned}$$

It is readily seen that \(\left\| \int ^{t_n}_{0}e^{-(t-s)\mathbb {A}}\mathbb {B}u_m(s)ds\right\| _{H_{2}^{6/5}} \rightarrow 0\) effectively as \(n\rightarrow \infty \) (recall that \(t_n=t/2^n\)). The proof of Claim II, and thus of the lemma, is now complete.
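To see the effective rate concretely: the final bound of Claim II is \(C'(t-t_n)^{-17/20}\cdot 4t_n^{1/4}\) with \(t_n=t/2^n\), which decays like \(2^{-n/4}\). A small numerical sketch, with illustrative values \(t=1\) and all constants lumped into \(c=1\) (these are not the constants of the proof):

```python
def tail_bound(n: int, t: float = 1.0, c: float = 1.0) -> float:
    # Bound from Claim II: c * (t - t_n)^(-17/20) * 4 * t_n^(1/4), with t_n = t / 2^n.
    t_n = t / 2 ** n
    return c * (t - t_n) ** (-17 / 20) * 4 * t_n ** 0.25

bounds = [tail_bound(n) for n in range(1, 25)]
# (t - t_n)^(-17/20) and t_n^(1/4) both decrease in n, so the bound is strictly decreasing ...
assert all(b2 < b1 for b1, b2 in zip(bounds, bounds[1:]))
# ... at the effective rate 2^(-1/4) per step (up to the mildly varying first factor).
assert bounds[20] < bounds[0] / 16
```

Thus, given a target accuracy \(2^{-K}\), an index n achieving it can be computed in advance, which is exactly what effective convergence requires.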

Remark 1

In our effort to compute an upper bound for \(\Vert \mathbb {A}^{\beta }u_{m+1}(s)\Vert _2\), we start with the integral \(\int ^{s}_{\epsilon }e^{-(s-r)\mathbb {A}}\mathbb {B}u_{m}(r)dr\) because the integral might have a singularity at 0; then we take the limit as \(\epsilon \rightarrow 0\) to get the desired estimate (see computations of (28) and (29)). The limit exists because the bound, \(C_{\beta + \frac{1}{4}}MM_{\frac{1}{4}, m}M_{\frac{1}{2}, m}B\left( \frac{3}{4}-\beta , \frac{1}{4}\right) \), is uniform in r for \(0<r\le s\). In the rest of the paper, we will encounter several similar computations. In those later situations, we will derive the estimates starting with \(\int ^t_0\) instead of \(\int ^t_{\epsilon }\). There will be no loss of rigor because, as above, the resulting bounds are uniform in the variable of integration.

Corollary 1

For any \(\varvec{ a}\in L^{\sigma }_{2, 0}(\varOmega )\) and \(t>0\), let \(\{ u_m(t)\}\) be the sequence generated by the iteration scheme (23) based on \(\varvec{ a}\). Then \(u_m(t)\in Dom(\mathbb {A}^{3/5}) \subset Dom(\mathbb {A}^{1/2}) \subset Dom(\mathbb {A}^{1/4})\).

Proof

The corollary follows from Lemma 6 and Fact 5-(3).

Corollary 2

The map from \(\mathcal {P}\) to \(\mathbb {R}\), \(\varvec{u} \mapsto \Vert \mathbb {A}^{\alpha }\varvec{u}\Vert _2\), is \((\delta _{L_{2,0}^{\sigma }}, \rho )\)-computable, where \(\alpha =1/8, 1/4\), or 1/2.

Proof

We prove the case when \(\alpha = 1/4\); the other two cases can be proved in exactly the same way. Since \(\mathcal {P}\) is contained in the domain of \(\mathbb {A}\), it follows from Theorem 6.9, Sect. 2.6 of [10] that for every \(\varvec{u}\in \mathcal {P}\), \(\mathbb {A}^{1/4}\varvec{u}=\frac{\sin \pi /4}{\pi }\int ^{\infty }_{0}t^{-3/4}\mathbb {A}(t\mathbb {I}+\mathbb {A})^{-1}\varvec{u}dt\). By definition of \(\mathcal {P}\), if \(\varvec{u}\in \mathcal {P}\), then \(\varvec{u}\) is \(C^\infty \) with compact support in \(\varOmega \), and \(\mathbb {A}\varvec{u}=-\mathbb {P}\bigtriangleup \varvec{u} = -\bigtriangleup \varvec{u}\). Express each component of \(\varvec{u}=(u^1, u^2)\) in terms of the orthogonal basis \(\{ e^{in\pi x}e^{im\pi y}\}_{n,m}\) of \(L_{2}(\varOmega )\) in the form of \(u^i=\sum _{n, m\ge 0}u^i_{n, m}e^{i\pi nx}e^{i\pi my}\), where \(u^i_{n, m}=\int ^1_{-1}\int _{-1}^{1}u^i(x, y)e^{i\pi nx}e^{i\pi my}dxdy\). Then a straightforward calculation shows that

$$\begin{aligned}&\frac{\sin \pi /4}{\pi }\int ^{\infty }_{0}t^{-3/4}\mathbb {A}(t\mathbb {I}+\mathbb {A})^{-1}u^idt \\= & {} \frac{\sin \pi /4}{\pi }\sum _{n, m\ge 0}\left( \int _{0}^{\infty }t^{-3/4}\frac{(\pi n)^2+(\pi m)^2}{t+(\pi n)^2+(\pi m)^2}dt\right) u^i_{n, m}e^{i\pi nx}e^{i\pi my} \end{aligned}$$

Since the integral is convergent and computable, it follows that \(\mathbb {A}^{1/4}\varvec{u}\) is computable from \(\varvec{u}\) and, consequently, \(\Vert \mathbb {A}^{1/4}\varvec{u}\Vert _2\) is computable.

Proof

(Proof of Proposition 4). For each \(\varvec{a}\in L^{\sigma }_{2,0}\), let \(\{\varvec{a}_k\}\), \(\varvec{a}_k=(a^1_k, a^2_k)\in \mathcal {P}\), be a \(\delta _{L_{2,0}^{\sigma }}\)-name of \(\varvec{a}\); i.e. \(\Vert \varvec{a}-\varvec{a}_k\Vert _{2}\le 2^{-k}\). Let \(\widetilde{C}:=c_1MB_1\), where M is the constant in Fact 5(4), \(c_1=\max \{C_{1/4}, C_{1/2}, C_{3/4}, 1\}\), and

$$B_1=\max \{ B(1/2, 1/4), B(1/4, 1/4), 1\}$$

with \(B(a,b)=\int _{0}^{1}(1-t)^{a-1}t^{b-1}dt\), \(a, b>0\), being the beta function. Then M and \(c_1\) are computable by assumption, while \(B_1\) is computable because the beta function is computable at rational parameters. Note that \(c_1B_1\ge 1\). Let

$$\begin{aligned} v_m(t)=u_{m+1}(t)-u_m(t)=-\int _{0}^{t}e^{-(t-s)\mathbb {A}}(\mathbb {B}u_m(s)-\mathbb {B}u_{m-1}(s))ds, \quad m\ge 1 \end{aligned}$$
(31)

Our goal is to compute a constant \(\epsilon \), \(0<\epsilon <1\), such that near \(t=0\),

$$\begin{aligned} \Vert v_m(t)\Vert _2\le L\epsilon ^{m-1} \end{aligned}$$
(32)

where L is a constant. Once this is accomplished, the proof is complete.

It follows from Corollary 1 that Fact 5-(5) holds true for all \(u_m(t)\) and \(v_m(t)\) with \(t>0\). It is also known classically that

$$\begin{aligned}&\Vert \mathbb {A}^{-\frac{1}{4}}(\mathbb {B}u_{m+1 }(t) - \mathbb {B}u_{m}(t))\Vert _2 \nonumber \\= & {} \Vert \mathbb {A}^{-\frac{1}{4}}\mathbb {B}u_{m+1 }(t) - \mathbb {A}^{-\frac{1}{4}}\mathbb {B}u_{m}(t)\Vert _2 \nonumber \\\le & {} M\left( \Vert \mathbb {A}^{\frac{1}{4}}v_m(t)\Vert _2 \Vert \mathbb {A}^{\frac{1}{2}}u_{m+1}(t)\Vert _2 + \Vert \mathbb {A}^{\frac{1}{4}}u_m(t)\Vert _2 \Vert \mathbb {A}^{\frac{1}{2}}v_m(t)\Vert _2 \right) \end{aligned}$$
(33)

(see, for example, [5]). The equality in the above estimate holds true because \(\mathbb {A}^{-1/4}\) is a (bounded) linear operator. The estimate (33) indicates that, in order to achieve (32), we need to establish bounds on \(\Vert \mathbb {A}^{\beta }u_m(t)\Vert _2\) and \(\Vert \mathbb {A}^{\beta }v_m(t)\Vert _2\) which become ever smaller as m gets larger, uniformly for values of t near zero. The desired estimates are developed in a series of claims beginning with the following one.

Claim 1

Let \(\beta = \frac{1}{4}\) or \(\frac{1}{2}\); let

$$\begin{aligned} \tilde{K}^{\varvec{a}}_{\beta , 0}(T)=\max _{0\le t\le T}t^{\beta }\Vert \mathbb {A}^{\beta }e^{-t\mathbb {A}}\varvec{a}\Vert _{2} \end{aligned}$$

and

$$\begin{aligned} k^{\varvec{ a}}_0(T)=\max \{ \tilde{K}^{\varvec{ a}}_{\frac{1}{4}, 0}(T), \tilde{K}^{\varvec{ a}}_{\frac{1}{2},0}(T)\} \end{aligned}$$

Then there is a computable map from \(L^{\sigma }_{2, 0}(\varOmega )\) to (0, 1), \(\varvec{ a} \mapsto T_{\varvec{ a}}\), such that

$$\begin{aligned} k^{\varvec{ a}}_0(T_{\varvec{ a}}) < \frac{1}{8\widetilde{C}} \end{aligned}$$

Proof. First we note that \(t^{\beta }\Vert \mathbb {A}^{\beta }e^{-t\mathbb {A}}\varvec{a}\Vert _{2} = 0\) for any \(\varvec{a}\in L^{\sigma }_{2, 0}(\varOmega )\) if \(t=0\); cf. Theorem 6.1 in [3]. Furthermore, it follows from (17) that the operator norm of \(e^{-t\mathbb {A}}\), \(\Vert e^{-t\mathbb {A}}\Vert _{op}\), is bounded above by 1 for any \(t>0\). Since \(e^{-t\mathbb {A}}\) is the identity map on \(L^{\sigma }_{2, 0}(\varOmega )\) when \(t=0\), we conclude that \(\max _{0\le t\le T}\Vert e^{-t\mathbb {A}}\Vert _{op}\le 1\) for any \(T>0\). Now for any \(\varvec{ a}\in L^{\sigma }_{2, 0}(\varOmega )\), it follows from Fact 5-(2) and Theorems 6.8 and 6.13 in Sect. 2.6 of [10] (\(\mathbb {A}^{\alpha }\) and \(e^{-t\mathbb {A}}\) are interchangeable) that

$$\begin{aligned} \tilde{K}^{\varvec{a}}_{\beta , 0}(T)= & {} \max _{0 \le t \le T} t ^\beta \Vert \mathbb {A}^\beta e^{- t\mathbb {A}} \varvec{ a}\Vert _2 \\= & {} \sup _{0 \le t \le T} t ^\beta \Vert \mathbb {A}^\beta e^{- t\mathbb {A}} \varvec{a}\Vert _2 \\\le & {} \sup _{0< t \le T} t ^\beta \Vert \mathbb {A}^\beta e^{- t\mathbb {A}} ( \varvec{a} - \varvec{ a}_k ) \Vert _2 + \sup _{0 < t \le T} t ^\beta \Vert \mathbb {A}^\beta e^{- t\mathbb {A}} \varvec{ a}_k\Vert _2 \\\le & {} C_{\beta }\Vert \varvec{ a} - \varvec{a}_k \Vert _2 + T^\beta \max _{0\le t\le T}\Vert e^{-t\mathbb {A}}\Vert _{op} \Vert \mathbb {A}^\beta \varvec{ a}_k \Vert _2 \\\le & {} c_12^{-k } + \max \{ T^{1/4}, T^{1/2}\} \max \{\Vert \mathbb {A}^{1/4} \varvec{ a}_k\Vert _2, \Vert \mathbb {A}^{1/2} \varvec{a}_k\Vert _2\} \end{aligned}$$

We note that although \(\varvec{ a}\) is not necessarily in the domain of \(\mathbb {A}\), we have \(\varvec{ a}_k\in \mathcal {P}\) and \(\mathcal {P}\) is contained in the domain of \(\mathbb {A}\); thus \(\mathbb {A}^{\beta }\varvec{ a}_k\) is well defined. Furthermore, it follows from Corollary 2 that \(\Vert \mathbb {A}^{\beta }\varvec{a}_k\Vert _2\) is computable. Clearly one can compute a positive integer \(\hat{k}\) such that

$$\begin{aligned} 2^{-\hat{k}} < \frac{1}{16c_1\widetilde{C}} \end{aligned}$$

then compute a positive number \(T_{\varvec{a}}\) such that

$$\begin{aligned} \max \{ T^{1/4}_{\varvec{a}}, T^{1/2}_{\varvec{a}}\} \max \{\Vert \mathbb {A}^{1/4} \varvec{ a}_{\hat{k}}\Vert _2, \Vert \mathbb {A}^{1/2} \varvec{ a}_{\hat{k}}\Vert _2\} < \frac{1}{16\widetilde{C}} \end{aligned}$$

The computations are performed on the inputs \(\varvec{a}\) and the constants \(c_1\), M, and \(B_1\). Consequently, \(k^{\varvec{a}}_{0}(T_{\varvec{a}}) < 1/(8\widetilde{C})\). The proof of Claim 1 is complete.
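The final step of Claim 1, computing \(T_{\varvec{a}}\) from the computable quantity \(b=\max \{\Vert \mathbb {A}^{1/4}\varvec{a}_{\hat{k}}\Vert _2, \Vert \mathbb {A}^{1/2}\varvec{a}_{\hat{k}}\Vert _2\}\), amounts to solving a simple inequality: for \(T<1\) one has \(\max \{T^{1/4}, T^{1/2}\}=T^{1/4}\), so any \(T<\min \{1, (16\widetilde{C}b)^{-4}\}\) works. A minimal sketch under these assumptions (the halving is just one concrete way to make the inequality strict; the input values in the check are hypothetical):

```python
def choose_T(b: float, c_tilde: float) -> float:
    # Pick T in (0, 1) with max{T^(1/4), T^(1/2)} * b < 1 / (16 * c_tilde).
    # For T < 1 the maximum is T^(1/4), so any T below min(1, (1/(16*c_tilde*b))^4)
    # works; we take half that threshold to make the inequality strict.
    threshold = min(1.0, (1.0 / (16.0 * c_tilde * b)) ** 4)
    return threshold / 2

# Hypothetical inputs standing in for b and C-tilde:
T = choose_T(b=3.0, c_tilde=2.0)
assert 0 < T < 1
assert max(T ** 0.25, T ** 0.5) * 3.0 < 1 / (16 * 2.0)
```

The point is that \(T_{\varvec{a}}\) is obtained by evaluating a closed-form expression in computable data, so the map \(\varvec{a}\mapsto T_{\varvec{a}}\) is itself computable.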

We recall that, for a given \(\varvec{ a}\in L^{\sigma }_{2, 0}(\varOmega )\), the iteration scheme (23) is based on the “seed” function \(u_0(t)=e^{-t\mathbb {A}}\varvec{ a}\). Claim 1 asserts that the seed function has the property that \(t^{\beta }\Vert \mathbb {A}^{\beta }u_0(t)\Vert _2\) is bounded by \(1/(8\widetilde{C})\), uniformly for \(t\in [0, T_{\varvec{a}}]\). We extend this property to the iteration sequence \(\{u_m(t)\}\) in the next claim.

Claim 2

Let \(\beta =\frac{1}{4}\) or \(\frac{1}{2}\). Then there is a computable map \(\mathbb {N}\times L^{\sigma }_{2,0} \rightarrow (0, \infty )\), \((m, \varvec{ a})\mapsto K^{\varvec{ a}}_{\beta , m}\), such that

$$\begin{aligned} \max _{0\le t\le T_{\varvec{a}}} t^{\beta }\Vert \mathbb {A}^{\beta }u_m(t)\Vert _2 \le K^{\varvec{ a}}_{\beta , m} \end{aligned}$$
(34)

Proof. We induct on m. For \(m=0\), let \(K^{\varvec{a}}_{\beta , 0}=1/(8\widetilde{C})\). Then (34) follows from Claim 1. It is clear that \(K^{\varvec{ a}}_{\beta , 0}\) is computable.

For \(m\ge 0\) and \(t>0\), \(K^{\varvec{a}}_{\beta , m+1}\) is computed by the recursive formula:

$$\begin{aligned} K^{\varvec{a}}_{\beta , m+1}=K^{\varvec{a}}_{\beta , 0} + C_{\beta +\frac{1}{4}}MB(1-\beta -\frac{1}{4}, \frac{1}{4})K^{\varvec{a}}_{\frac{1}{4}, m}K^{\varvec{ a}}_{\frac{1}{2}, m} \end{aligned}$$
(35)

The recursive formula is derived similarly as that of (29). Since the upper bound is uniformly valid for all \(0<t\le T_{\varvec{a}}\), it follows that it is also valid for \(t=0\). The proof of Claim 2 is complete.

In the next claim, we show that the sequences \(\{ K^{\varvec{ a}}_{\beta , m}\}\), \(\beta =1/4\) or 1/2, are bounded above with an upper bound strictly less than \(1/(2\widetilde{C})\).

Claim 3

Let \(k^{\varvec{ a}}_m = \max \{ K^{\varvec{a}}_{\frac{1}{4}, m}, K^{\varvec{ a}}_{\frac{1}{2}, m}\}\) and let \(K=\frac{4k^{\varvec{a}}_{0}(\sqrt{2}-1)}{\sqrt{2}}\). Then \(k^{\varvec{a}}_{m}\le K < \frac{1}{2\widetilde{C}}\) for all \(m\ge 1\).

Proof. It follows from Claim 2 that \(k^{\varvec{a}}_{0}=\frac{1}{8\widetilde{C}}\) and \(k^{\varvec{ a}}_{m+1}\le k^{\varvec{a}}_{0}+\widetilde{C}(k^{\varvec{ a}}_{m})^2\) (recall that \(\widetilde{C}=c_1MB_1\)). To get a bound on \(k^{\varvec{a}}_{m}\), let us write \(k^{\varvec{ a}}_{m}=k^{\varvec{ a}}_{0}w_{m}\). Then \(w_m\) satisfies the following inequality:

$$\begin{aligned} k^{\varvec{ a}}_{0}w_{m+1}\le k^{\varvec{ a}}_{0} + \widetilde{C}(k^{\varvec{ a}}_{0})^2 w^2_{m} \end{aligned}$$

which implies that

$$\begin{aligned} w_{m+1}\le 1 + \widetilde{C}k^{\varvec{ a}}_{0}w^2_m = 1 + \frac{1}{8}w^2_m \end{aligned}$$

Then a direct calculation (an induction on m, using that \(\frac{4(\sqrt{2}-1)}{\sqrt{2}}=4-2\sqrt{2}\) is the smaller fixed point of \(w\mapsto 1+w^2/8\)) shows that

$$\begin{aligned} w_m \le \frac{4(\sqrt{2}-1)}{\sqrt{2}} \end{aligned}$$

Thus

$$\begin{aligned} k^{\varvec{ a}}_{m}=k^{\varvec{ a}}_{0}w_{m}\le \frac{4k^{\varvec{a}}_{0}(\sqrt{2}-1)}{\sqrt{2}} = \frac{\sqrt{2}-1}{2\sqrt{2}\widetilde{C}} < \frac{1}{2\widetilde{C}} \end{aligned}$$

And so if we pick \(K=\frac{4k^{\varvec{a}}_{0}(\sqrt{2}-1)}{\sqrt{2}}\), then \(k^{\varvec{ a}}_{m}\le K < \frac{1}{2\widetilde{C}}\) for all \(m\ge 1\). The proof of Claim 3 is complete.
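The recursion of Claim 3 can also be checked numerically: iterating the extremal equality case \(w_{m+1}=1+w_m^2/8\) from \(w=1\), the iterates stay below the smaller fixed point \(4-2\sqrt{2}\approx 1.172\) of \(w\mapsto 1+w^2/8\) and converge to it. A minimal sketch:

```python
from math import sqrt

def w_iterates(steps: int) -> list:
    # Extremal case of the Claim 3 recursion w_{m+1} <= 1 + w_m^2 / 8, started at w = 1.
    w, out = 1.0, [1.0]
    for _ in range(steps):
        w = 1.0 + w * w / 8.0
        out.append(w)
    return out

limit = 4 - 2 * sqrt(2)  # smaller root of w = 1 + w^2/8, i.e. of w^2 - 8w + 8 = 0
ws = w_iterates(50)
assert all(w <= limit + 1e-12 for w in ws)   # every iterate respects the Claim 3 bound
assert abs(ws[-1] - limit) < 1e-9            # the iterates converge to the fixed point
```

Since \(k^{\varvec{a}}_m=k^{\varvec{a}}_0 w_m\) with \(k^{\varvec{a}}_0=1/(8\widetilde{C})\), this is exactly the bound \(K=(4-2\sqrt{2})/(8\widetilde{C})<1/(2\widetilde{C})\).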

Next we present an upper bound for \(t^{\alpha }\Vert \mathbb {A}^{\alpha }v_m(t)\Vert _2\), \(m\ge 1\). Recall that \(v_m(t)=u_{m+1}(t) - u_m(t)\).

Claim 4

For \(t\in [0, T_{\varvec{ a}}]\), \(0\le \alpha <\frac{3}{4}\), and \(m\ge 1\),

$$\begin{aligned} t^{\alpha }\Vert \mathbb {A}^{\alpha }v_{m}(t)\Vert _2 \le 2KC_{\alpha +\frac{1}{4}}(2\tilde{C}K)^{m-1}B(1-\alpha -\frac{1}{4}, \frac{1}{4}) \end{aligned}$$
(36)

Proof. First we observe that (36) is true for \(t=0\). Next we assume that \(0<t\le T_{\varvec{ a}}\). Once again we induct on m. At \(m=1\): We recall from the definition of \(c_1\) and \(B_1\) that \(\frac{1}{2c_1B_1}\le \frac{1}{2}\). Also it follows from (33), Claims 2 and 3 that \(\Vert \mathbb {A}^{\frac{1}{2}}u_1(t)\Vert _2 \le K^{\varvec{ a}}_{\frac{1}{2}, 1}t^{-\frac{1}{2}}\le Kt^{-\frac{1}{2}}\), \(\Vert \mathbb {A}^{\frac{1}{4}}u_0(t)\Vert _2 \le Kt^{-\frac{1}{4}}\), \(\Vert \mathbb {A}^{\frac{1}{4}}v_0(t)\Vert _2 \le 2Kt^{-\frac{1}{4}}\), and \(\Vert \mathbb {A}^{\frac{1}{2}}v_0(t)\Vert _2 \le 2Kt^{-\frac{1}{2}}\). Making use of these inequalities we obtain the following estimate:

$$\begin{aligned} t^{\alpha }&\Vert \mathbb {A}^{\alpha }v_{1}(t)\Vert _2 \\&= t^{\alpha }\Vert \mathbb {A}^{\alpha }(u_{2}(t)-u_1(t))\Vert _2 \\&= t^{\alpha }\left\| \mathbb {A}^{\alpha }\int _{0}^{t}e^{-(t-s)\mathbb {A}}(\mathbb {B}u_1(s) - \mathbb {B}u_0(s))ds\right\| _2 \\&\le t^{\alpha }C_{\alpha +\frac{1}{4}}\int _{0}^{t}(t-s)^{-\alpha -\frac{1}{4}}\Vert \mathbb {A}^{-\frac{1}{4}}\mathbb {B}u_1(s) - \mathbb {A}^{-\frac{1}{4}}\mathbb {B}u_0(s)\Vert _2 ds \\&\le t^{\alpha }C_{\alpha +\frac{1}{4}}\int _{0}^{t}(t-s)^{-\alpha -\frac{1}{4}}M\bigg ( \Vert \mathbb {A}^{\frac{1}{4}}v_0(s)\Vert _2 \cdot \Vert \mathbb {A}^{\frac{1}{2}}u_{1}(s)\Vert _2 \\&\qquad \qquad \qquad \qquad \qquad + \Vert \mathbb {A}^{\frac{1}{4}}u_0(s)\Vert _2\cdot \Vert \mathbb {A}^{\frac{1}{2}}v_0(s)\Vert _2 \bigg ) ds \\&\le t^{\alpha }C_{\alpha +\frac{1}{4}}M2K^2\int _{0}^{t}(t-s)^{-\alpha -\frac{1}{4}}s^{-\frac{3}{4}}ds \\&= 2KC_{\alpha +\frac{1}{4}}MKB(1-\alpha -\frac{1}{4}, \frac{1}{4}) \\&< 2KC_{\alpha +\frac{1}{4}}\frac{M}{2c_1MB_1}B(1-\alpha -\frac{1}{4}, \frac{1}{4}) \quad (\hbox {recall that} \ K<\frac{1}{2\widetilde{C}}=\frac{1}{2c_1MB_1}) \\&< 2KC_{\alpha +\frac{1}{4}}(2\tilde{C}K)^0B(1-\alpha -\frac{1}{4}, \frac{1}{4}) \end{aligned}$$

Thus (36) is true for \(m=1\).

Now assuming that (36) holds for all \(1\le j\le m\), we show that (36) is also true for \(m+1\). First it follows from Claims 2 and 3, and the induction hypothesis that for any \(s\in (0, T_{\varvec{ a}})\),

$$\begin{aligned} \Vert \mathbb {A}^{\frac{1}{4}}&v_{m}(s)\Vert _2 \cdot \Vert \mathbb {A}^{\frac{1}{2}}u_{m+1}(s)\Vert _2 \\&\le 2KC_{\frac{1}{4}+\frac{1}{4}}(2\widetilde{C}K)^{m-1}B(1-\frac{1}{4}-\frac{1}{4}, \frac{1}{4})s^{-\frac{1}{4}}\cdot Ks^{-\frac{1}{2}} \\&\le 2Kc_1(2\widetilde{C}K)^{m-1}B_1Ks^{-\frac{3}{4}} \end{aligned}$$

Similarly,

$$\begin{aligned} \Vert \mathbb {A}^{\frac{1}{2}}v_m(s)\Vert _2\cdot \Vert \mathbb {A}^{\frac{1}{4}}u_{m}(s)\Vert _2 \le 2Kc_1(2\widetilde{C}K)^{m-1}B_1Ks^{-\frac{3}{4}} \end{aligned}$$

Thus,

$$\begin{aligned}&\Vert \mathbb {A}^{\frac{1}{2}}u_{m+1}(s)\Vert _2\cdot \Vert \mathbb {A}^{\frac{1}{4}}v_{m}(s)\Vert _2 + \Vert \mathbb {A}^{\frac{1}{2}}v_m(s)\Vert _2 \cdot \Vert \mathbb {A}^{\frac{1}{4}}u_{m}(s)\Vert _2 \\&\le 2Kc_1(2\widetilde{C}K)^{m-1}B_1\cdot 2Ks^{-\frac{3}{4}} \end{aligned}$$

These inequalities imply the desired estimate:

$$\begin{aligned} t^{\alpha }\Vert&\mathbb {A}^{\alpha } v_{m+1}(t)\Vert _2 \\&\le t^{\alpha }C_{\alpha +\frac{1}{4}}\int _{0}^{t}(t-s)^{-\alpha -\frac{1}{4}}\Vert \mathbb {A}^{-\frac{1}{4}}(\mathbb {B}u_{m+1}(s)-\mathbb {B}u_{m}(s))\Vert _2 ds \\&\le t^{\alpha }C_{\alpha +\frac{1}{4}}\int _{0}^{t}(t-s)^{-\alpha -\frac{1}{4}}M \Big (\Vert \mathbb {A}^{\frac{1}{2}}u_{m+1}(s)\Vert _2 \cdot \Vert \mathbb {A}^{\frac{1}{4}}v_{m}(s)\Vert _2 \\&\qquad \qquad \qquad + \Vert \mathbb {A}^{\frac{1}{2}}v_m(s)\Vert _2 \cdot \Vert \mathbb {A}^{\frac{1}{4}}u_{m}(s)\Vert _2 \Big )ds \\&\le t^{\alpha }C_{\alpha +\frac{1}{4}}\int _{0}^{t}(t-s)^{-\alpha -\frac{1}{4}}M\cdot 2Kc_{1}(2\widetilde{C}K)^{m-1}B_1\cdot 2Ks^{-\frac{3}{4}}ds \\&= t^{\alpha }2KC_{\alpha +\frac{1}{4}}\cdot 2c_{1}MB_{1}K(2\widetilde{C}K)^{m-1}\int _{0}^{t}(t-s)^{-\alpha -\frac{1}{4}}s^{-\frac{3}{4}}ds \\&= 2KC_{\alpha +\frac{1}{4}}(2\widetilde{C}K)^{m}B(1-\alpha -\frac{1}{4}, \frac{1}{4}) \end{aligned}$$

The proof for Claim 4 is complete.

We now set \(\alpha =0\), \(\epsilon = 2\widetilde{C}K\), and \(L=2KC_{\frac{1}{4}}B\left( \frac{3}{4}, \frac{1}{4}\right) \). Since \(K<\frac{1}{2\widetilde{C}}\) by Claim 3, it follows that \(0<\epsilon < 1\) and

$$\begin{aligned} \Vert u_{m+1}(t)-u_m(t)\Vert _2 \le L\epsilon ^{m-1} \end{aligned}$$

Consequently, the iteration sequence \(\{ u_m(t)\}\) converges to u(t) effectively and uniformly on \([0, T_{\varvec{ a}}]\).
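The bound (32) yields an explicit, computable modulus of convergence: summing the geometric tail gives \(\Vert u(t)-u_N(t)\Vert _2\le \sum _{m\ge N}L\epsilon ^{m-1}=L\epsilon ^{N-1}/(1-\epsilon )\), so the index needed for accuracy \(2^{-k}\) can be computed from L, \(\epsilon \), and k. A sketch with illustrative constants (not the ones of the proof):

```python
from math import ceil, log2

def modulus(L: float, eps: float, k: int) -> int:
    # Smallest N (up to rounding) with L * eps^(N-1) / (1 - eps) <= 2^(-k):
    # solving the geometric tail bound for N gives
    # N >= 1 + (k + log2(L / (1 - eps))) / log2(1 / eps).
    assert 0 < eps < 1 and L > 0
    n = 1 + (k + log2(L / (1 - eps))) / log2(1 / eps)
    return max(1, ceil(n))

# Illustrative constants L = 2, eps = 1/2, target accuracy 2^(-10):
N = modulus(2.0, 0.5, 10)
assert 2.0 * 0.5 ** (N - 1) / (1 - 0.5) <= 2 ** -10
```

This is precisely the sense in which the effective convergence makes u(t) computable from a name of \(\varvec{a}\).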

We mention in passing the following fact that can be proved by computations similar to those in Claims 1–3: On input \((\varvec{a}, m, n)\), a positive number \(T(\varvec{a}, m, n)\) can be computed such that \(k^{\varvec{a}}_0(T(\varvec{a}, m, n)) < (8\tilde{C})^{-1}\cdot 2^{-n}\), \(T(\varvec{a}, m, n+1) < T(\varvec{a}, m, n)\), and \(\max _{0\le t\le T(\varvec{a}, m, n)}t^{\beta }\Vert \mathbb {A}^{\beta }u_m(t)\Vert _2\le L^{\varvec{a}}_{ \beta , m}\cdot 2^{-n}\), where \(L^{\varvec{a}}_{\beta , m}\) is a constant independent of t and n, and computable from \(\varvec{a}\) and m.

5.4 Proof of Proposition 5

We now come to the proof of Proposition 5. We need to show that the map \(\mathbb {S}: \mathbb {N}\times L^{\sigma }_{2,0}\times [0, \infty ) \rightarrow L^{\sigma }_{2,0}\), \((m, \varvec{a}, t)\mapsto \varvec{u}_m(t)\), is \((\nu \times \delta _{L_{2,0}^{\sigma }}\times \rho , \delta _{L_{2,0}^{\sigma }})\)-computable. By an argument similar to the one used in proving Lemma 6, we can compute \(\varvec{u}_m(t)\) on input \((m, \varvec{a}, t)\), where \(m\in \mathbb {N}\), \(\varvec{a}\in L^{\sigma }_{2, 0}(\varOmega )\), and \(t>0\). We note that \(\varvec{u}_m(0)=\varvec{u}_0(0)=\varvec{a}\) for all \(m\in \mathbb {N}\). Thus, to complete the proof, it suffices to show that there is a modulus function \(\eta : \mathbb {N}\times \mathbb {N} \rightarrow \mathbb {N}\), computable from \(\varvec{a}\), such that \(\Vert \varvec{u}_{m+1}(t) - \varvec{a}\Vert _2 \le 2^{-k}\) whenever \(0<t<2^{-\eta (m+1, k)}\). Now for the details: given \(\varvec{a}\) and k, referring to the last paragraph of the previous subsection and to Fact 5-(2), (5), we obtain the following estimate: for \(0<t<T(\varvec{a}, m, n)\)

$$\begin{aligned}&\left\| \int _{0}^{t}e^{-(t-s)\mathbb {A}}\mathbb {B}u_m(s)ds\right\| _2 \\= & {} \left\| \mathbb {A}^{1/4}\int _{0}^{t}e^{-(t-s)\mathbb {A}}\mathbb {A}^{-1/4}\mathbb {B}u_m(s)ds\right\| _2 \\\le & {} C_{1/4}M\int _{0}^{t}(t-s)^{-1/4}\Vert \mathbb {A}^{1/4}u_m(s)\Vert _2\cdot \Vert \mathbb {A}^{1/2}u_m(s)\Vert _2ds \\\le & {} C_{1/4}M\int ^{t}_{0}(t-s)^{-1/4}\cdot s^{-1/4}\cdot L^{\varvec{a}}_{1/4, m}\cdot 2^{-n}\cdot s^{-1/2}\cdot L^{\varvec{a}}_{1/2, m}\cdot 2^{-n} ds \\\le & {} C_{1/4}ML^{\varvec{a}}_{1/4, m}L^{\varvec{a}}_{1/2, m}2^{-2n}\int ^{t}_{0}(t-s)^{-1/4}s^{-3/4}ds \\= & {} C_{1/4}ML^{\varvec{a}}_{1/4, m}L^{\varvec{a}}_{1/2, m}B(3/4, 1/4)\cdot 2^{-2n} \end{aligned}$$

Thus if \(\Vert e^{-t\mathbb {A}}\varvec{a} - \varvec{a}\Vert _2\le 2^{-(k+1)}\) and

$$\begin{aligned} 2^{-2n}C_{1/4}ML^{\varvec{a}}_{1/4, m}L^{\varvec{a}}_{1/2, m}B(3/4, 1/4)\le 2^{-(k+1)}\, , \end{aligned}$$

then

$$\begin{aligned} \Vert \varvec{u}_{m+1}(t)-\varvec{a}\Vert _2 \le \Vert e^{-t\mathbb {A}}\varvec{a} - \varvec{a}\Vert _2+\left\| \int _{0}^{t}e^{-(t-s)\mathbb {A}}\mathbb {B}\varvec{u}_m(s)ds\right\| _{2} \le 2^{-k} \end{aligned}$$

Since \(e^{-t\mathbb {A}}\varvec{a}\) is computable in t by Proposition 3 and \(\varvec{a}=e^{-0\mathbb {A}}\varvec{a}\), there is a computable function \(\theta _1: \mathbb {N}\rightarrow \mathbb {N}\) such that \(\Vert e^{-t\mathbb {A}}\varvec{a} - \varvec{a}\Vert _2\le 2^{-(k+1)}\) whenever \(0< t < 2^{-\theta _1(k)}\). Let \(\theta _2: \mathbb {N}\times \mathbb {N}\rightarrow \mathbb {N}\) be a computable function satisfying \(C_{1/4}ML^{\varvec{a}}_{1/4, m}L^{\varvec{a}}_{1/2, m}B(3/4, 1/4)\cdot 2^{-2\theta _2(m, k)}\le 2^{-(k+1)}\). Let \(\eta (m+1, k)\) be a positive integer such that \(2^{-\eta (m+1, k)}\le \min \{ 2^{-\theta _1(k)}, T(\varvec{a}, m, \theta _2(m, k))\}\). Then \(\eta \) is the desired modulus function. The proof of Proposition 5 is complete.
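For concreteness, \(\theta _2\) can be obtained by simply solving the displayed inequality for n: writing c for the (computable) constant \(C_{1/4}ML^{\varvec{a}}_{1/4, m}L^{\varvec{a}}_{1/2, m}B(3/4, 1/4)\), we need \(c\cdot 2^{-2n}\le 2^{-(k+1)}\), i.e. \(n\ge (k+1+\log _2 c)/2\). A minimal sketch, with c treated as an abstract positive input:

```python
from math import ceil, log2

def theta2(k: int, c: float) -> int:
    # Smallest non-negative n with c * 2^(-2n) <= 2^(-(k+1)),
    # i.e. n >= (k + 1 + log2(c)) / 2. Here c stands for the computable constant
    # C_{1/4} * M * L^a_{1/4,m} * L^a_{1/2,m} * B(3/4, 1/4) of the proof.
    return max(0, ceil((k + 1 + log2(c)) / 2))

# The defining inequality holds for a range of hypothetical constants c:
for k in range(20):
    for c in (0.5, 1.0, 7.3, 1000.0):
        n = theta2(k, c)
        assert c * 2 ** (-2 * n) <= 2 ** (-(k + 1))
```

Note the exponent \(2^{-2n}\) doubles the gain per step, which is why n grows only like \(k/2\).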

Propositions 4 and 5 show that the solution \(\varvec{ u}\) of the integral equation (4) is an effective limit of the computable iteration sequence \(\{ {\varvec{u}}_m\}\) starting with \({\varvec{ u}}_0(t) = e^{-t\mathbb {A}}\varvec{ a}\) on \([0, T_{\varvec{ a}}]\); consequently, \(\varvec{ u}\) itself is also computable. Thus we obtain the desired preliminary result:

Theorem 6

There is a computable map \(T:L^{\sigma }_{2,0}(\varOmega )\rightarrow (0, \infty )\), \(\varvec{a}\mapsto T(\varvec{ a})\), such that \(\varvec{ u}(t)\), the solution of the integral equation (4), is computable in \(L^{\sigma }_{2,0}\) from \(\varvec{ a}\) and t for \(\varvec{ a}\in L^{\sigma }_{2,0}\) and \(t\in [0;T(\varvec{ a})]\).

5.5 The Inhomogeneous Case and Pressure

It is known [5, Theorem 2.3] that, also in the presence of an inhomogeneity \(\varvec{ g}\in C\big ([0;T],L^{\sigma }_{2,0}(\varOmega )\big )\), the iteration sequence (5) converges to a unique solution \(\varvec{ u}\) of Eq. (2) near \(t=0\). Similarly to (the proofs of) Propositions 4 and 5 and [24, Lemma 3.7], this solution is seen to be computable. Moreover, \(\varvec{ g}=\mathbb {P}\varvec{ f}\) is computable from \(\varvec{ f}\in \big (L_{2}(\varOmega )\big )^2\) according to Proposition 2. Finally the right-hand side of Eq. (6) equals

$$\begin{aligned} \big (\mathbb {I}-\mathbb {P}\big )[\varvec{ f} + \bigtriangleup \varvec{ u} - (\varvec{ u}\cdot \nabla )\varvec{ u}\big ] \;=:\; \varvec{ h} \end{aligned}$$

which, by the definition of \(\mathbb {P}\) as the projection onto the solenoidal subspace, is conservative (=rotation-free, a pure gradient). Hence the path integral \(\int _{\varvec{ 0}}^{\varvec{ x}} \varvec{h}(\varvec{ y})\cdot d \varvec{\gamma } (\varvec{y})\) does not depend on the chosen path from \(\varvec{ 0}\) to \(\varvec{ x}\) and well-defines \(P(\varvec{ x})\). This concludes our proof of Theorem 1.