1 Introduction

This paper considers the Green tensor of the nonstationary Stokes system in the half space. A major goal is to derive its pointwise estimates. Denote \(x=(x_1,\ldots ,x_{n-1},x_n) = (x',x_n)\) and \(x^*=(x',-x_n)\) for \(x \in {\mathbb {R}}^n\), \(n \ge 2\), and the half space \({\mathbb {R}}^n_+ = \{ (x',x_n)\in {\mathbb {R}}^n \ | \ x_n>0\}\) with boundary \(\Sigma =\partial {\mathbb {R}}^n_+\).

1.1 Background

The nonstationary Stokes system in the half-space \({\mathbb {R}}_{+}^{n}\), \(n\ge 2\), reads

$$\begin{aligned} \begin{aligned} \left. \begin{aligned} u_{t}-\Delta u+\nabla \pi =f \\ \mathop {\textrm{div}}\nolimits u=0 \end{aligned}\ \right\} \ \ \text{ in }\ \ {\mathbb {R}}_{+}^{n}\times (0,\infty ), \end{aligned} \end{aligned}$$
(1.1)

with initial and boundary conditions

$$\begin{aligned} u(\cdot ,0)=u_0; \qquad u(x',0,t)=0\ \ \text{ on }\ \ \Sigma \times (0,\infty ). \end{aligned}$$
(1.2)

Here \(u=(u_{1},\ldots ,u_{n})\) is the velocity, \(\pi \) is the pressure, and \(f=(f_{1},\ldots ,f_{n})\) is the external force. They are defined for \((x,t)\in {\mathbb {R}}_{+}^{n}\times (0,\infty )\). The Green tensor \( G_{ij}(x,y,t)\) and its associated pressure tensor \(g_j(x,y,t)\) are defined for \((x,y,t) \in {\mathbb {R}}^n _+ \times {\mathbb {R}}^n_+ \times {\mathbb {R}}\) and \(1\le i,j\le n\) so that, for suitable f and \(u_0\), the solution of (1.1) is given by

$$\begin{aligned} u_i(x,t)&= \sum _{j=1}^n\int _{{\mathbb {R}}^n_+}G_{ij}(x,y,t)u_{0,j}(y)\,dy\\ &\quad + \sum _{j=1}^n\int _0^t \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t-s) f_j(y,s)\,dy\,ds. \end{aligned}$$
(1.3)

Another way to write a solution of (1.1) uses the Stokes semigroup \(e^{-t{\textbf{A}}}\), where \({\textbf{A}}= - {{\textbf{P}}}\Delta \) is the Stokes operator, and \({{\textbf{P}}}\) is the Helmholtz projection (see Remark 3.4)

$$\begin{aligned} u(t) = e^{-t{\textbf{A}}}{{\textbf{P}}}u_0 + \int _0^t e^{-(t-s){\textbf{A}}}{{\textbf{P}}}f(s)\,ds. \end{aligned}$$
(1.4)

We may regard the Green tensor \(G_{ij}\) as the kernel of \(e^{-t{\textbf{A}}}{{\textbf{P}}}\). In using (1.3) and (1.4), we already exclude weird solutions of (1.1) that are unbounded at spatial infinity, and can talk about “the” unique solution in suitable classes. For applications to Navier–Stokes equations,

$$\begin{aligned} \begin{aligned} \left. \begin{aligned} u_{t}-\Delta u+ u\cdot \nabla u+\nabla \pi =0 \\ \mathop {\textrm{div}}\nolimits u=0 \end{aligned}\ \right\} \ \ \text{ in }\ \ {\mathbb {R}}_{+}^{n}\times (0,\infty ), \end{aligned} \end{aligned}$$
(NS)

with zero boundary condition, a solution of (NS) is called a mild solution if it satisfies (1.3) or (1.4) with \(f = -u \cdot \nabla u\) and suitable estimates.

The Stokes semigroup \(e^{-t{\textbf{A}}}\) and the Helmholtz projection \({{\textbf{P}}}\) are only defined in suitable functional spaces. When defined, the image of \({{\textbf{P}}}\) is solenoidal. A vector field \(u=(u_1,\ldots ,u_n)\) in \({\mathbb {R}}^n_+\) is called solenoidal if

$$\begin{aligned} \mathop {\textrm{div}}\nolimits u=0,\ \ \ u_{n}|_{\Sigma }=0. \end{aligned}$$
(1.5)

An equivalent condition for \(u \in L^1_{\textrm{loc}}(\overline{{\mathbb {R}}^n_+})\) is

$$\begin{aligned} { \int _{{\mathbb {R}}^n_+} u \cdot \nabla \phi \,dx=0, \quad \forall \phi \in C^\infty _c(\overline{{\mathbb {R}}^n_+}). } \end{aligned}$$
(1.6)

For applications to Navier–Stokes equations, although we may assume \(u_0\) is solenoidal, we do not have \(\mathop {\textrm{div}}\nolimits f=0\) for \(f= -u\cdot \nabla u\). Hence we cannot omit \({{\textbf{P}}}\) in the integral of (1.4).

The initial condition \(u(\cdot ,0)=u_0\) in (1.2) is understood by the weak limit

$$\begin{aligned} { \lim _{t\rightarrow 0_+} (u(t),w) = (u_0,w) , \quad \forall w \in C^\infty _{c,\sigma }({\mathbb {R}}^n_+), } \end{aligned}$$
(1.7)

where \(C^\infty _{c,\sigma }({\mathbb {R}}^n_+)=\{ w \in C^\infty _c({\mathbb {R}}^n_+;{\mathbb {R}}^n): \mathop {\textrm{div}}\nolimits w=0\}.\) A strong limit is unavailable unless we further assume \(u_0\) is solenoidal, see Theorem 1.3. This agrees with the expectation that

$$\begin{aligned} \lim _{t\rightarrow 0_+} e^{-t{\textbf{A}}}{{\textbf{P}}}u_0 = {{\textbf{P}}}u_0. \end{aligned}$$

There are many results for (1.1) in Lebesgue and Sobolev spaces because the Stokes semigroup and the Helmholtz projection are bounded in \(L^q({\mathbb {R}}^n_+)\), \(1<q<\infty \). Solonnikov [46] expressed the solution u in terms of Oseen and Golovkin tensors (see Sect. 2) and proved estimates of \(u_t, \nabla ^2 u, \nabla p\) in \(L^q\) in \({\mathbb {R}}^3_+\times {\mathbb {R}}_+\), extending the 2D work by Golovkin [13]. Ukai [52] derived an explicit solution formula for (1.1) when \(f=0\) in \({\mathbb {R}}^n_+\), expressed in terms of Riesz operators and the solution operators for the heat and Laplace equations in \({\mathbb {R}}^n_+\). It is simpler than and different from that of [46], and it yields estimates in \(L^q\) spaces immediately. Cannone–Planchon–Schonbek [3] extended [52] to nonzero f using pseudo-differential operators. Estimates in the borderline \(L^1\) and \(L^\infty \) spaces were studied by Desch, Hieber, and Prüss [7]. Koch and Solonnikov [30] derived gradient estimates of u in \(L^q_{x,t}\) for \(q>1\) when f is the divergence of a tensor field. These results are applied to the study of (NS) in Lebesgue spaces.

The pointwise behavior of the solutions of (NS) is less studied, as the Helmholtz projection is not bounded in \(L^\infty \), and there have been no pointwise estimates for \(G_{ij}\) except for two special cases to be explained below. To circumvent this difficulty, many researchers expand explicitly

$$\begin{aligned} e^{-t{\textbf{A}}}{{\textbf{P}}}\partial _k (u_k u) \end{aligned}$$

to sums of estimable terms for the study of (NS). See also the literature review for mild solutions later, in particular (1.25). The drawback of this approach is that it does not apply to general nonlinearities \(f=f_0(u,\nabla u)\).

The pointwise estimates for \(G_{ij}\) and its derivatives will be useful in the following situations:

  1.

    It gives direct estimates of the Navier–Stokes nonlinearity without expanding its Helmholtz projection.

  2.

    It works for general nonlinearities, for example, those considered in Koba [27], and those from the coupling of the fluid velocity with another physical quantity such as

    $$\begin{aligned} f_j = \textstyle \sum _k \partial _k (b_k b_j),\qquad g_j = - \textstyle \sum _k \partial _k (\partial _k d \cdot \partial _j d), \end{aligned}$$

    where f is the coupling with the magnetic field \(b: {\mathbb {R}}^3_+ \times (0,\infty ) \rightarrow {\mathbb {R}}^3 \) in the magnetohydrodynamic equations in the half space \({\mathbb {R}}^3_+\) with boundary conditions \(b _3=0\) and \((\nabla \times b) \times e_3=0\) (see [15, 19, 20, 34]), and g is the coupling with the orientation field \(d: {\mathbb {R}}^3_+ \times (0,\infty ) \rightarrow {\mathbb {S}}^2 \) in the nematic liquid crystal flows with boundary conditions \(\partial _3 d|_\Sigma =0\) and \(\lim _{|x|\rightarrow \infty } d=e_3\) (see [16]).

  3.

    It allows to estimate the contribution from a non-solenoidal initial data, e.g., \(u_0 \in L^q\) and in particular when \(q=1\), as done by Maremonti [39] for bounded domains.

  4.

    Pointwise estimates are very useful for the study of the local and asymptotic behavior of the solutions of (NS), see e.g. [32] and our companion papers [21, 22].

In contrast to their absence in the time-dependent case, pointwise estimates for the stationary Stokes system in the half-space are known; see [23] for the literature and the most recent refinement.

We now describe the two special cases of known pointwise estimates for \(G_{ij}\). For the special case of solenoidal vector fields \(f\) satisfying (1.5), by applying the Fourier transform in \(x'\) and the Laplace transform in \(t\) to the system (1.1), Solonnikov [47, (3.12)] derived an explicit formula for the restricted Green tensor and its pointwise estimates for \(n=3\) (also see [48, 49] for \(n \ge 2\); the same method is used in [35]). Specifically, he showed that for \(u_0=0\) and \(f\) satisfying (1.5),

$$\begin{aligned} \begin{aligned} u_i(x,t)&= \sum _{j=1}^n\int _0^t \int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t-s) f_j(y,s)dy\,ds,\\ \pi (x,t)&=\sum _{j=1}^n\int _0^t \int _{{\mathbb {R}}^n_+}\breve{g}_j(x,y,t-s)f_j(y,s)dy\,ds, \end{aligned} \end{aligned}$$
(1.8)

with

$$\begin{aligned} \begin{aligned} \breve{G}_{ij}(x,y,t)&= \delta _{ij}\Gamma (x-y,t) + G_{ij}^*(x,y,t), \\ G_{ij}^*(x,y,t)&= -\delta _{ij}\Gamma (x-y^*,t) \\&\quad - 4(1-\delta _{jn})\frac{\partial }{\partial x_j} \int _{\Sigma \times [0,x_n]} \frac{\partial }{\partial x_i} E(x-z) \Gamma (z-y^*,t)\,dz, \\ \breve{g}_j(x,y,t)&=4(1-\delta _{jn})\partial _{x_j}\Big [\int _{\Sigma } E(x-\xi ')\partial _n\Gamma (\xi '-y,t)d\xi '\\ {}&\quad +\int _{\Sigma }\Gamma (x'-y'-\xi ',y_{n},t)\partial _nE(\xi ',x_n)\,d\xi '\Big ], \end{aligned} \end{aligned}$$
(1.9)

where \(y^*=(y',-y_n)\) for \(y=(y',y_n)\), and E(x) and \(\Gamma (x,t)\) are the fundamental solutions of the Laplace and heat equations in \({\mathbb {R}}^n\), respectively. (See Sect. 2. Our E(x) differs from [47] by a sign.) Moreover, \(G_{ij}^*\) and \(\breve{g}_j\) satisfy the pointwise bound ([49, (2.38), (2.32)]) for \(n \ge 2\),

$$\begin{aligned} \begin{aligned} |\partial _{x',y'}^l\partial _{x_n}^k \partial _{y_n}^q\partial _t^m G_{ij}^*(x,y,t)|&\lesssim \frac{e^{-\frac{cy_n^2}{t}}}{t^{m+\frac{q}{2}}(|x^*-y|^2+t)^{\frac{l+n}{2}}(x_n^2+t)^{\frac{k}{2}}};\\ |\partial _{x,y'}^l \partial _{y_n}^q\partial _t^m\breve{g}_j(x,y,t)|&\lesssim t^{-1-m-\frac{q}{2}}(|x-y^{*}|^{2}+t)^{-\frac{n-1+l}{2}}e^{-\frac{cy_n^2}{t}}. \end{aligned} \end{aligned}$$
(1.10)

His argument is also valid for \(n=2\) since the fundamental solution \(E\) in (1.9) always appears under a derivative and thus has the scaling property.

Another special case is the pointwise estimate of the Green tensor by Kang [17], valid only when the second variable y lies on the boundary, i.e., \(y_n=0\),

$$\begin{aligned} |\partial _x^l\partial _t^mG_{ij}(x,y',t)|\lesssim \frac{1}{t^{m+\frac{1+\alpha }{2}}(|x-y'|^2+t)^{\frac{l+n-2}{2}}x_n^{1-\alpha }}, \end{aligned}$$
(1.11)

where \(\alpha \) is any number with \(0<\alpha <1\), and we identify \(y'\) with \((y',0)\). Even for \(y_n=0\), this estimate does not seem optimal because we anticipate the symmetry of the Green tensor (see Proposition 1.4).

1.2 Results

The following are our first and key pointwise estimates of the (unrestricted) Green tensor and its derivatives. Even when restricted to \(y_n=0\), they improve upon (1.11) by removing the singularity at \(x_n=0\). They will be further improved in Theorem 1.5 after we show the symmetry of the Green tensor.

Proposition 1.1

(First estimates). Let \(n\ge 2\), \(x,y\in {\mathbb {R}}^n_+\), \(t>0\), \(i,j=1,\ldots ,n\), and \(l,k,q,m \in {\mathbb {N}}_0\). Let \(G_{ij}\) be the Green tensor for the time-dependent Stokes system (1.1) in the half-space \({\mathbb {R}}^n_+\), and \(g_j\) be the associated pressure tensor. We have

$$\begin{aligned} \begin{aligned} |\partial _{x',y'}^l \partial _{x_n}^k\partial _{y_n}^q\partial _t^mG_{ij}(x,y,t)|&{\lesssim }\frac{1}{\left( |x-y|^2+t\right) ^{\frac{l+k+q+n}{2}+m}}\\ {}&+\frac{\text {LN}_{ijkq}^{mn}}{t^{m}(|x^*-y|^2+t)^{\frac{l+k-k_i+n}{2} }(x_n^2+t)^{ \frac{k_i}{2} }(y_n^2+t)^{\frac{q}{2}}}, \end{aligned} \end{aligned}$$
(1.12)

where \(k_i=(k-\delta _{in})_+\),

$$\begin{aligned} { \textrm{LN}_{ijkq}^{mn} := 1+\delta _{n2}\mu _{ik}^m\left[ \log (\nu _{ijkq}^m|x'-y'|+x_n+y_n+\sqrt{t}) - \log (\sqrt{t}) \right] , } \end{aligned}$$
(1.13)

with \(\mu _{ik}^m= 1-(\delta _{k0}+\delta _{k1}\delta _{in})\delta _{m0}\), and \(\nu _{ijkq}^m = \delta _{q0} \delta _{jn} \delta _{k(1+\delta _{in})} \delta _{m0}+\delta _{m>0}\). Also,

$$\begin{aligned} { |\partial _{x',y'}^l\partial _{x_n}^k\partial _{y_n}^q g_j(x,y,t)| {\lesssim }t^{-\frac{1}{2}} \left[ \frac{1}{R^{l+q+n}} \left( \frac{1}{ x_n^{k}}+ \delta _{k0} \log {\frac{R}{x_n}} \right) + \frac{1}{R^{k+n-1}y_n^{l+q+1}} \right] , }\nonumber \\ \end{aligned}$$
(1.14)

where \(R=|x'-y'|+x_n+y_n+\sqrt{t}\sim |x-y^*|+\sqrt{t}\).
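
For readers who wish to unpack the Kronecker-delta bookkeeping in (1.13), the following short transcription (ours, purely for illustration and not used anywhere in the proofs) spells out \(\mu _{ik}^m\), \(\nu _{ijkq}^m\) and the log factor as plain functions; the argument `xp` stands for \(|x'-y'|\).

```python
import math

def delta(a, b):
    # Kronecker delta
    return 1 if a == b else 0

def mu(i, k, m, n):
    # mu_{ik}^m = 1 - (delta_{k0} + delta_{k1} delta_{in}) delta_{m0}
    return 1 - (delta(k, 0) + delta(k, 1) * delta(i, n)) * delta(m, 0)

def nu(i, j, k, q, m, n):
    # nu_{ijkq}^m = delta_{q0} delta_{jn} delta_{k,1+delta_{in}} delta_{m0} + delta_{m>0}
    return (delta(q, 0) * delta(j, n) * delta(k, 1 + delta(i, n)) * delta(m, 0)
            + (1 if m > 0 else 0))

def LN(i, j, k, q, m, n, xp, xn, yn, t):
    # the factor (1.13); it equals 1 unless n = 2
    if n != 2:
        return 1.0
    return 1.0 + mu(i, k, m, n) * (
        math.log(nu(i, j, k, q, m, n) * xp + xn + yn + math.sqrt(t))
        - math.log(math.sqrt(t)))

# example values: i=1, j=n=2, k=2, q=0, m=0 (two normal derivatives in x_n, n=2)
print(mu(1, 2, 0, 2), nu(1, 2, 2, 0, 0, 2), LN(1, 2, 2, 0, 0, 2, 1.0, 0.5, 0.5, 0.1))
```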

Comments on Proposition 1.1:

  1.

    The numerator \(\textrm{LN}_{ijkq}^{mn}\) is a log correction for \(n=2\), and equals 1 if \(n \ge 3\). The parameters \(\mu _{ik}^m,\nu _{ijkq}^m \in \{0,1\}\). For simplicity we may take \(\mu _{ik}^m=\nu _{ijkq}^m=1\) for most cases.

  2.

    As we will see in Proposition 3.5, the pressure tensor g contains a delta function supported at \(t=0\). This delta function does not appear in (1.14), which is for \(t>0\).

  3.

    The estimate (1.12) of \(\partial _t G_{ij}\) is not integrable for \(0<t<1\). It can be improved using the Green tensor equation (3.1) and estimates of \(\Delta _x G_{ij}\) and \(\nabla _x g_j\).

With the first estimates, we are able to prove the following theorems on restricted Green tensors, convergence to initial data, and symmetry of the Green tensor. We say a tensor \(\bar{G}_{ij}(x,y,t)\) is a restricted Green tensor if for any solenoidal \(u_0\), the vector field \(u_i(x,t) =\sum _{j=1}^n \int _{{\mathbb {R}}^n_+} {\bar{G}}_{ij}(x,y,t)u_{0,j}(y)\,dy\) is a solution of the Stokes system (1.1)–(1.2).

Theorem 1.2

(Restricted Green tensors). Let \(u_0 \in C^1_{c,\sigma }(\overline{{\mathbb {R}}^n_+})\), i.e., it is a vector field in \(C^1_c(\overline{{\mathbb {R}}^n_+};{\mathbb {R}}^n)\) with \(\mathop {\textrm{div}}\nolimits u_0=0\) and \(u_{0,n}|_\Sigma =0\). Then

$$\begin{aligned} \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t)u_{0,j}(y)\,dy&= \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t)u_{0,j}(y)\,dy \\&= \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} {\widehat{G}}_{ij}(x,y,t)u_{0,j}(y)\,dy \end{aligned}$$

as continuous functions in \(x\in {\mathbb {R}}^n_+\) and \(t>0\), where \( \breve{G}_{ij}(x,y,t)\) is the restricted Green tensor of Solonnikov given in (1.9), and

$$\begin{aligned} \begin{aligned} {\widehat{G}}_{ij}(x,y,t)&= \delta _{ij}\left[ \Gamma (x-y,t) - \Gamma (x-y^*,t) \right] - 4 \delta _{jn}C_i(x,y,t), \end{aligned} \end{aligned}$$
(1.15)

with \(C_i(x,y,t)=\int _0^{x_n}\int _\Sigma \partial _n\Gamma (x-y^*-z,t)\,\partial _iE(z)\,dz'\,dz_n\).

Comments on Theorem 1.2:

  1.

    The last term of \(\breve{G}_{ij}\) in (1.9) only acts on the tangential components \(u_{0,j}\), \(j<n\). In contrast, the last term of \({\widehat{G}}_{ij}\) in (1.15) only acts on the normal component \(u_{0,n}\). We do not know whether (1.15) has appeared in the literature. We will use both \(\breve{G}_{ij}\) and \({\widehat{G}}_{ij}\) in the proof of Lemma 6.1. \(C_i\) will be defined in (4.4) with estimates in Remark 5.2.

  2.

    We can get infinitely many restricted Green tensors by adding to \(\breve{G}_{ij}\) any tensor \(T_{ij}\) that vanishes on all solenoidal vector fields \(f=(f_j)\), \(\int _{{\mathbb {R}}^n_+} T_{ij}(x,y,t) f_j(y) dy = 0\), for example, a tensor of the form \(T_{ij} = \partial _{y_j} T_i(x,y,t)\) with suitable regularity and decay. We do not need \(\sum _i\partial _{x_i} T_{ij}(x,y,t)=0\) nor \(T_{ij}|_{x_n=0}=0\) since \(\int _{{\mathbb {R}}^n_+} T_{ij}(x,y,t) f_j(y) dy = 0\). In fact, if we denote

    $$\begin{aligned} C_{i}^\sharp (x,y,t) := \int _0^{x_n}\int _{\Sigma } \Gamma (x-y^*-z,t) \partial _iE(z)\, dz'dz_n, \end{aligned}$$

    then we have the (more symmetric) alternative forms:

    $$\begin{aligned} \begin{aligned} \breve{G}_{ij}(x,y,t)&= \delta _{ij}\left[ \Gamma (x-y,t) - \Gamma (x-y^*,t) \right] + 4(1-\delta _{jn}) \partial _{y_j} C_{i}^\sharp (x,y,t), \\ {\widehat{G}}_{ij}(x,y,t)&= \delta _{ij}\left[ \Gamma (x-y,t) - \Gamma (x-y^*,t) \right] - 4 \delta _{jn} \partial _{y_j} C_{i}^\sharp (x,y,t) \\&= \breve{G}_{ij}(x,y,t) + \partial _{y_j} 4C_{i}^\sharp (x,y,t). \end{aligned} \end{aligned}$$
    (1.16)
  3.

    In contrast, the unrestricted Green tensor \(G_{ij}\) is unique: We require it to satisfy the equation (3.1)\(_1\), the boundary condition (3.1)\(_2\), and the initial condition that the vector field \(u_i(x,t)=\sum _{j=1}^n\int _{{\mathbb {R}}^n_+}G_{ij}(x,y,t)(u_0)_j(y)\,dy\) satisfies \(\lim _{t\rightarrow 0_+}u(\cdot ,t) = {{\textbf{P}}}u_0\) for any initial data \(u_0\) not necessarily solenoidal. Suppose \(({\bar{G}}_{ij}(x,y,t), {\bar{g}}_j(x,y,t))\) is another pair of unrestricted Green tensor and pressure tensor. For fixed j and y, the difference \(u_i(x,t) = (G_{ij}-{\bar{G}}_{ij})(x,y,t)\) and its companion pressure \(p(x,t)=(g_j- {\bar{g}}_j)(x,y,t)\) satisfy the Stokes system (1.1) with zero boundary and initial values. Under bounds such as

    $$\begin{aligned} |u(x,t)| {\lesssim }\frac{1}{(y_n+|x|+\sqrt{t})^n}, \quad |p(x,t)| {\lesssim }\frac{1}{\sqrt{t}(y_n+ |x|+\sqrt{t})^{n-1}}, \quad \end{aligned}$$

    suggested by Proposition 1.1, we can show \(u=0\) by an energy estimate: test (1.1) against \(u \phi _R\) for a cut-off function \(\phi _R(x)=\Phi (x/R)\), integrate over \(t_0<t<t_1\), send \(R\rightarrow \infty \), and then send \(t_0 \rightarrow 0_+\) (see also [36, Theorem 5]). Hence \(G_{ij}={\bar{G}}_{ij}\).

  4.

    Theorem 1.2 is extended to \(u_0\in L^p_\sigma \) in Remark 9.2 for \(1\le p\le \infty \). When \(p=\infty \) we can only show the first equality, and we need \(u_0\) in the \(L^\infty \)-closure of \(C^1_c\).

Theorem 1.3

(Convergence to initial data). Let \(u_i(x,t)=\sum _{j=1}^n \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t)u_{0,j}(y)dy\) for a vector field \(u_0 \) in \({\mathbb {R}}^n_+\). Let \(\textbf{P}\) be the Helmholtz projection in \({\mathbb {R}}^n_+\) to be given in Remark 3.4.

  (a)

    If \(u_0 \in C^1_c({\mathbb {R}}^n_+)\), then \(u(x,t)\rightarrow ({{\textbf{P}}}u_0)(x)\) for all \(x\in {\mathbb {R}}^n_+\), and uniformly for all x with \(x_n\ge \delta \) for any \(\delta >0\).

  (b)

    If \(u_0 \in L^q({\mathbb {R}}^n_+)\), \(1<q<\infty \), then \(u(x,t)\rightarrow ({{\textbf{P}}}u_0)(x)\) in \(L^q({\mathbb {R}}^n_+)\).

  (c)

    If \(u_0 \in C^1_{c,\sigma }(\overline{{\mathbb {R}}^n_+})\), i.e., it is a vector field in \(C^1_c(\overline{{\mathbb {R}}^n_+};{\mathbb {R}}^n)\) with \(\mathop {\textrm{div}}\nolimits u_0=0\) and \(u_{0,n}|_\Sigma =0\), then \(u_0={{\textbf{P}}}u_0\) and \(u(x,t)\rightarrow u_0(x)\) in \(L^q({\mathbb {R}}^n_+)\) for \(1<q\le \infty \).

In Part (a), the support of \(u_0\) is away from the boundary. In Part (c), the tangential part of \(u_0\) may be nonzero on \(\Sigma \), and \(q=\infty \) is allowed.

Proposition 1.4

(Symmetry of Green tensor). Let \(G_{ij}\) be the Green tensor for the Stokes system in the half-space \({\mathbb {R}}^n_+\), \(n\ge 2\). Then for \(x,y\in {\mathbb {R}}^n_+\) and \(t\ne 0\) we have

$$\begin{aligned} { G_{ij}(x,y,t)=G_{ji}(y,x,t),\quad \forall x\not =y\in {\mathbb {R}}^n_+. } \end{aligned}$$
(1.17)

For the stationary case, the symmetry is known from Odqvist [43, p.358] for \(n=3\) and from [23, Lemma 2.1, (2.29)] for \(n \ge 2\). We are not aware of (1.17) in the literature for the nonstationary case. We will prove Proposition 1.4 in Sect. 7, after we have shown Proposition 1.1. It also gives an alternative proof of the stationary case for \(n\ge 3\); see Remark 3.7.

Although \(G_{ij}\) is symmetric by Proposition 1.4, the restricted Green tensors in (1.9) and (1.15) are not. For example, if \(i<n\) and \(j=n\),

$$\begin{aligned} \breve{G}_{in}(x,y,t)&= 0,\qquad \breve{G}_{ni}(y,x,t)= -4 \int _{\Sigma \times [0,y_n]} \partial _i\partial _n E(y-z) \Gamma (z-x^*,t)\,dz,\\ {\widehat{G}}_{in}(x,y,t)&= -4 C_i(x,y,t),\qquad {\widehat{G}}_{ni}(y,x,t)= 0. \end{aligned}$$

By the symmetry of the Green tensor in Proposition 1.4, the estimates in Proposition 1.1 can be improved. Our main estimates are the following:

Theorem 1.5

(Main estimates). Let \(n\ge 2\), \(x,y\in {\mathbb {R}}^n_+\), \(t>0\), \(i,j=1,\ldots ,n\), and \(l,k,q,m \in {\mathbb {N}}_0\). We have

$$\begin{aligned} \begin{aligned} |\partial _{x',y'}^l \partial _{x_n}^k \partial _{y_n}^q \partial _t^m G_{ij}(x,y,t)|{\lesssim }&\frac{1}{(|x-y|^2+t)^{\frac{l+k+q+n}{2}+m}} \\ {}&{ +\frac{\textrm{LN}_{ijkq}^{mn}+\textrm{LN}_{jiqk}^{mn}}{t^{m}(|x^*-y|^2+t)^{\frac{l+k-k_i+q-q_j+n}{2}}(x_n^2+t)^{\frac{k_i}{2}}(y_n^2+t)^{\frac{q_j}{2}} }, } \end{aligned} \end{aligned}$$
(1.18)

where \(\textrm{LN}_{ijkq}^{mn}\) is given in (1.13), \(k_i=(k-\delta _{in})_+\), and \(q_j=(q-\delta _{jn})_+\).

Comments on Theorem 1.5:

  1.

    Assume \( l + k + q + n \ge 3\). For the cases when \(k_i = q_j = 0\) and \(m=0\), the time integrals of the above estimates coincide with the well-known estimates of the stationary Green tensor given in [10, IV.3.52]. We lose tangential spatial decay in other cases.

  2.

    The estimates of the stationary Green tensor mentioned above have been improved by [23]. For example, when there is no normal derivative and \(n+l\ge 3\), [23, Theorems 2.4, 2.5] show

    $$\begin{aligned} { |\partial _{x',y'}^l G_{ij}^{0}(x,y)| {\lesssim }\frac{x_n y_n^{1+\delta _{jn}}}{|x-y|^{n-2+l}\, |x-y^*|^{2+\delta _{jn}}}. } \end{aligned}$$
    (1.19)

    (It can be improved using symmetry, but [23] does not cover the case \(i=j=n\).) The tangential decay rate is better than the normal decay rate and than in the whole space case, probably because of the zero boundary condition. Thus (1.18) may have room for improvement; compare Theorem 1.6.

The following estimates quantify the boundary vanishing of the Green tensor and its derivatives at \(x_n=0\) or \(y_n=0\).

Theorem 1.6

(Boundary vanishing). Let \(n\ge 2\), \(x,y\in {\mathbb {R}}^n_+\), \(t>0\), \(i,j=1,\ldots ,n\), and \(l,k,q,m \in {\mathbb {N}}_0\). Let \(0\le \alpha \le 1\). If \(k=0\), we have

$$\begin{aligned} \begin{aligned} \left| \partial _{x',y'}^l\partial _{y_n}^q\partial _t^mG_{ij}(x,y,t)\right|&\lesssim \frac{x_n^\alpha }{(|x-y|^2+t)^{\frac{l+q+n}{2}+m}(|x-y^*|^2+t)^{\frac{\alpha }{2}}} \\&\quad +\frac{x_n^\alpha \,\textrm{LN}}{t^{m{+\frac{\alpha }{2}}}(|x-y^*|^2+t)^{{ \frac{l+q-q_j+n}{2}} } {(y_n^2+t)^{\frac{q_j}{2}}} }, \end{aligned} \end{aligned}$$
(1.20)

with \(\textrm{LN}= {\textstyle \sum _{k=0}^1}( \textrm{LN}_{ijkq}^{mn} + \textrm{LN}_{jiqk}^{mn})(x,y,t) \). If \(q=0\), we have

$$\begin{aligned} \begin{aligned} \left| \partial _{x',y'}^l\partial _{x_n}^k\partial _t^mG_{ij}(x,y,t)\right|&\lesssim \frac{y_n^\alpha }{(|x-y|^2+t)^{\frac{l+k+n}{2}+m}(|x-y^*|^2+t)^{\frac{\alpha }{2}}} \\&\quad +\frac{y_n^\alpha \,\textrm{LN}}{t^{m{+\frac{\alpha }{2}}}(|x-y^*|^2+t)^{{ \frac{l+k-k_i+n}{2} }} {(x_n^2+t)^{\frac{k_i}{2}}} }, \end{aligned} \end{aligned}$$
(1.21)

with \(\textrm{LN}= {\textstyle \sum _{q=0}^1}( \textrm{LN}_{ijkq}^{mn} + \textrm{LN}_{jiqk}^{mn})(x,y,t)\).

1.3 Key ideas and the structure of the proof

Let us explain the idea behind our key result, Proposition 1.1. The major difficulty is to find a formula for the Green tensor in which each term has good estimates. Our first formula (3.10), with the correction term \(W_{ij}\) given by (3.9), is obtained from the definition using the Oseen and Golovkin tensors. The second formula for \(W_{ij}\) in Lemma 4.2 is obtained using Poisson’s formula for the heat equation to remove the time integration. The idea of using Poisson’s formula already appears in the stationary case in [23, 40]. Our final formula for the Green tensor in Lemma 4.3 is obtained by identifying the cancellation of terms in Lemma 4.2, maximizing the tangential decay. We further rewrite the term \({\widehat{H}}_{ij}\) in Lemma 4.3 in terms of \(D_{ijm}\) in Lemma 5.1, which are integrals over \(\Sigma \times [0,x_n]\). For \(D_{ijm}\), we use a spatial partition and integration by parts to estimate their tangential derivatives, and we exploit their algebraic properties, e.g., by computing their divergence, to convert normal derivatives into tangential derivatives. These ingredients enable us to prove Proposition 1.1.

Maximizing the tangential decay is essential: as seen in Proposition 1.1, normal derivatives do not increase tangential decay, and maximal tangential decay allows us to prove the integrability in y of all derivatives of the Green tensor (uniformly in x). This is used in the proofs of (9.3) of Lemma 9.1 and (9.20) of Lemma 9.4, both of which rely on the fact that the function \(H_1\) defined in (9.7) belongs to \(L^1\), for the construction of mild solutions of the Navier–Stokes equations. The maximal tangential decay is also used to prove that the Green tensor itself is integrable in y, but with an \(x_n\)-dependent constant,

$$\begin{aligned} \int _{{\mathbb {R}}^n_+} |G_{ij}(x,y,t)|\,dy {\lesssim }\ln (e+\frac{x_n}{\sqrt{t}}). \end{aligned}$$
(1.22)

This is proved in (9.10) of Remark 9.2 using Theorem 1.6, and used to prove an extension of Theorem 1.2 to the \(L^\infty \)-setting, see Remark 9.2. In this sense, the Green tensor in the half space has a stronger decay than the whole space case. This phenomenon is well known in the stationary case.

Having the first estimates of both the Green tensor and its associated pressure tensor at hand, we can investigate restricted Green tensors and convergence to initial values, and prove Proposition 1.4 on the symmetry of the Green tensor. Our main estimate, Theorem 1.5, is proved using Propositions 1.1 and 1.4. We then prove the boundary vanishing of Theorem 1.6 using the normal derivative estimates of Theorem 1.5.

1.4 Applications

As an application, we will construct mild solutions of the Navier–Stokes equations in the half space in various functional spaces. We will provide other applications in the forthcoming papers [21] and [22]. Since this is only for illustration, we consider only local-in-time solutions with zero external force. Fujita-Kato [9, 25] and Sobolevskii [45] transformed (NS) into an abstract initial value problem using the Stokes semigroup

$$\begin{aligned} u(t) = e^{-t{\textbf{A}}} u_0 -\int _0^t e^{-(t-s){\textbf{A}}}{{\textbf{P}}}\partial _k (u_ku)(s)\, ds, \end{aligned}$$
(1.23)

whose solution u(t) lies in some Banach spaces and is called a mild solution of (NS). In the whole space setting, there is an extensive literature on the unique existence of mild solutions of (NS). See e.g. [2, 8, 11, 12, 24, 31, 37, 42, 54] for the most relevant to our study.

For mild solutions of (NS) in the half-space, the unique local and global existence in \(L^q({\mathbb {R}}^n_+)\) was established by Weissler [53] for \(3\le n<q<\infty \), by Ukai [52] for \(2\le n\le q<\infty \), and by Kozono [33] for \(2\le n=q\). Cannone-Planchon-Schonbek [3] established the unique existence of solutions in \(L^\infty L^3\) with initial data in the homogeneous Besov space \(\dot{B}_{q,\infty }^{3/q-1}({\mathbb {R}}^3_+)\). For mild solutions in weighted \(L^q\) spaces, we refer the reader to [28, 29].

For solutions with pointwise decay, Crispo-Maremonti [6] proved the local existence of solutions controlled by \((1+|x|)^{-\alpha }(1+t)^{-\beta /2}\), \(\alpha +\beta =a\in (1/2,n)\) when \(u_0\in L^\infty ({\mathbb {R}}^n_+, (1+|x|)^a dx)\) and \(n\ge 3\). If \(a\in [1,n)\), they further showed the existence is global in time when \(u_0\) is small enough in \(L^\infty ({\mathbb {R}}^n_+, (1+|x|)^a dx)\). The constraints imposed in [6] on a and n are relaxed by Chang-Jin [5] to \(a\in (0,n]\) and \(n\ge 2\). They proved the existence of mild solutions to (NS) having the same weighted decay estimate as the Stokes solutions if \(a\in (0,n]\). Note that for the case \(a=n\), the mild solution is local in time because the weighted estimate of solutions to the Stokes system has an additional log factor. They also obtained the weighted decay estimates for \(n<a<n+1\) in [4] with an additional condition that \(R_j'u_0\in L^\infty ({\mathbb {R}}^n_+, (1+|x|)^a dx)\). Regarding solutions whose initial data has no spatial decay, the local existence and uniqueness of strong mild solutions with initial data in \(L^\infty \) were established by Bae-Jin [1], improving Solonnikov [50] and Maremonti [38] for continuous initial data. Recently, Maekawa-Miura-Prange [36] studied the analyticity of Stokes semigroup in uniformly local \(L^q\) space via the Stokes resolvent problem and constructed mild solutions in such spaces for \(q\ge n\).

In the following, Theorems 1.7, 1.8 and 1.10 are already known, while Theorem 1.9 is new. We will provide new proofs using the following solution formula for (NS) in terms of the Green tensor

$$\begin{aligned} \begin{aligned} u_i(x,t)&= \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) u_{0,j}(y)dy\\&\quad + \sum _{j,k=1}^n\int _0^t \int _{{\mathbb {R}}^n_+} \partial _{y_k} G_{ij}(x,y,t-s) (u_k u_j)(y,s)dy\,ds. \end{aligned} \end{aligned}$$
(1.24)

We use the restricted Green tensor \(\breve{G}_{ij}\) for the first term and the (unrestricted) Green tensor \(G_{ij}\) for the second term. Note that the second term is written as

$$\begin{aligned} \begin{aligned}{ -\sum _{j,k=1}^n\int _0^t \int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t-s) ({{\textbf{P}}}\partial _k u_ku )_j(y,s)dy\,ds } \end{aligned} \end{aligned}$$
(1.25)

and explicitly computed in [1, 6], as the Green tensor \(G_{ij}\) was unknown.

For \(1\le p\le \infty \), let

$$\begin{aligned} { L^p_\sigma ({\mathbb {R}}^n_+) = \left\{ f \in L^p({\mathbb {R}}^n_+;{\mathbb {R}}^n) : \mathop {\textrm{div}}\nolimits f=0, \ f_n(x',0)=0 \right\} . } \end{aligned}$$
(1.26)

Theorem 1.7

Let \(2\le n \le q\le \infty \) and \(u_0 \in L^q_{\sigma }({\mathbb {R}}^n_+)\). If \(q=\infty \), we also assume \(u_0\) in the \(L^\infty \)-closure of \(C^1_{c,\sigma }(\overline{{\mathbb {R}}^n_+})\) and \(n\ge 3\). There are \(T= T(n,q, u_0)>0\) and a unique mild solution \(u(t)\in C([0,T]; L^q)\) of (NS) in the class

$$\begin{aligned} \sup _{0<t<T} \left( \left\| u(t) \right\| _{L^q} + t^{\frac{n}{2q}} \left\| u(t) \right\| _{L^\infty } + t^{1/2}\left\| \nabla u(t) \right\| _{L^q} \right) \le C_* \left\| u_0 \right\| _{L^q}. \end{aligned}$$

We can take \(T= T(n,q, \left\| u_0 \right\| _{L^q})\) if \(n<q \le \infty \).

This is known in [33, 52, 53] for \(2\le n\le q<\infty \), and in [1] for \(q=\infty \).

For \(a\ge 0\), denote

$$\begin{aligned} { Y_a = \bigg \{ f \in L^\infty _{\textrm{loc}}({\mathbb {R}}^n_+) \, \bigg |\, \left\| f \right\| _{Y_a} = \sup _{x \in {\mathbb {R}}^n_+} |f(x)|{\langle x \rangle }^a<\infty \bigg \}, } \end{aligned}$$
(1.27)

and

$$\begin{aligned} Z_a = \bigg \{ f \in L^\infty _{\textrm{loc}}({\mathbb {R}}^n_+) \, \bigg |\, \left\| f \right\| _{Z_a} = \sup _{x \in {\mathbb {R}}^n_+} |f(x)|{\langle x_n \rangle }^a<\infty \bigg \}. \end{aligned}$$
(1.28)

Theorem 1.8

Let \(n\ge 2\) and \(0< a \le n\). For any vector field \(u_0 \in Y_a\) with \(\mathop {\textrm{div}}\nolimits u_0=0\) and \(u_{0,n}|_\Sigma =0\), there is a strong mild solution \(u\in L^\infty (0,T;Y_a)\) of (NS) for some time interval (0, T). Moreover, the mild solution is unique in the class \(L^\infty ({\mathbb {R}}^n_+\times (0,T))\).

Theorem 1.9

Let \(n\ge 2\) and \(0< a \le 1\). For any vector field \(u_0 \in Z_a\) with \(\mathop {\textrm{div}}\nolimits u_0=0\) and \(u_{0,n}|_\Sigma =0\), there is a strong mild solution \(u\in L^\infty (0,T;Z_a)\) of (NS) for some time interval (0, T). Moreover, the mild solution is unique in the class \(L^\infty ({\mathbb {R}}^n_+\times (0,T))\).

Theorem 1.8 corresponds to [5, Theorem 1] and [6, Theorem 2.1]. Theorem 1.9 is new. Its upper bound \(a\le 1\) is less than that in Theorem 1.8.

For \(1\le q\le \infty \), denote

$$\begin{aligned} \begin{aligned} L^q_{{\textrm{uloc}}}({\mathbb {R}}^n_+)&= \bigg \{u \in L^q_{{\textrm{loc}}}({\mathbb {R}}^n_+)\ \big |\ \sup _{x \in {\mathbb {R}}^n_+} \left\| u \right\| _{L^q(B_1(x) \cap {\mathbb {R}}^n_+ )} <\infty \bigg \}, \\ L^q_{{\textrm{uloc}},\sigma }({\mathbb {R}}^n_+)&= \left\{ u \in L^q_{{\textrm{uloc}}}({\mathbb {R}}^n_+;{\mathbb {R}}^n) \ \big |\ \mathop {\textrm{div}}\nolimits u=0, \ u_{n}|_\Sigma =0 \right\} . \end{aligned} \end{aligned}$$

Theorem 1.10

Let \(2\le n \le q\le \infty \) and \(u_0 \in L^q_{{\textrm{uloc}},\sigma }({\mathbb {R}}^n_+)\).

  (a)

    If \(n<q\le \infty \), assuming in addition \(n\ge 3\) if \(q=\infty \), then there are \(T= T(n,q, \left\| u_0 \right\| _{L^q_{\textrm{uloc}}})>0\) and a unique mild solution u of (NS) with

    $$\begin{aligned} \begin{aligned}&u(t)\in L^\infty (0,T; L^q_{{\textrm{uloc}},\sigma })\cap C((0,T); W^{1,q}_{{\textrm{uloc}},0}({\mathbb {R}}^n_+)^n\cap \textrm{BUC}_\sigma ({\mathbb {R}}^n_+)), \\&\quad \sup _{0<t<T} \left( \left\| u(t) \right\| _{L^q_{\textrm{uloc}}} + t^{\frac{n}{2q}} \left\| u(t) \right\| _{L^\infty } + t^{1/2}\left\| \nabla u(t) \right\| _{L^q_{\textrm{uloc}}} \right) \le C_* \left\| u_0 \right\| _{L^q_{\textrm{uloc}}}. \end{aligned} \end{aligned}$$
    (1.29)
  (b)

    If \(n=q\), for any \(0<T<\infty \), there are \(\epsilon (T),C_*(T)>0\) such that if \(\left\| u_0 \right\| _{L^n_{\textrm{uloc}}} \le \epsilon (T)\), then there is a unique mild solution u(t) of (NS) in the class (1.29).

This theorem is [36, Propositions 7.1 and 7.2]. Continuity at time zero requires further restrictions on \(u_0\).

In addition to the existence of mild solutions in various spaces, pointwise estimates of the Green tensor are useful for the study of the local and asymptotic behavior of solutions. In the forthcoming paper [21], we will use the Stokes flows of [18] as profiles to construct solutions of the Navier–Stokes equations in \({\mathbb {R}}^3_+ \times (0,2)\) with finite global energy such that they are globally bounded with spatial decay but their normal derivatives are unbounded near the boundary, due to Hölder continuous boundary fluxes which are not \(C^1\) in time. We will collect other applications in [22].

The rest of this paper is organized as follows. In Sect. 2, we give a few preliminaries and recall the Oseen tensor and the Golovkin tensor. In Sect. 3, we consider the Green tensor and its associated pressure tensor, and derive their first formulas. In Sect. 4, we derive a second formula for the Green tensor which has better estimates. In Sect. 5, we give the first estimates in Proposition 1.1 of the Green tensor and the pressure tensor. In Sect. 6, we study the restricted Green tensors, and how the solutions converge to the initial values. In Sect. 7, we prove the symmetry of the Green tensor in Proposition 1.4. In Sect. 8, the ultimate estimate in Theorem 1.5 is derived from Proposition 1.1 using the symmetry of the Green tensor and the divergence-free condition. We also estimate their vanishing at the boundary, proving Theorem 1.6. In Sect. 9, we prove the key estimates for the construction of mild solutions in various spaces for Theorems 1.7, 1.8, 1.9 and 1.10.

Notation. We denote \({\langle \xi \rangle }=(|\xi |^2+2)^{1/2}\) for any \(\xi \in {\mathbb {R}}^m\), \(m \in {\mathbb {N}}\). We denote \(f\lesssim g\) if there is a constant C such that \(|f|\le Cg\).

$$\begin{aligned} \begin{array}{ccc} \text {Green tensor} &{} \cdots &{} G_{ij} ,\ g_j \\ \text {Oseen tensor} &{} \cdots &{} S_{ij},\ s_j \\ \text {Golovkin tensor} &{} \cdots &{} K_{ij},\ k_j \\ \text {Fundamental solution of }-\Delta &{} \cdots &{} E \\ \text {Heat kernel} &{} \cdots &{} \Gamma \\ \text {Poisson kernel for heat equation} &{} \cdots &{} P \end{array} \end{aligned}$$

2 Preliminaries, Oseen and Golovkin Tensors

In this section, we first recall a few definitions and estimates from [46]. We then give two integral estimates. We next recall in Sect. 2.2 the Oseen tensor [44], which is the fundamental solution of the nonstationary Stokes system in \({\mathbb {R}}^n\). We finally recall in Sect. 2.3 the Golovkin tensor [14], which is the Poisson kernel of the nonstationary Stokes system in \({\mathbb {R}}^n_+\).

The heat kernel \(\Gamma \) and the fundamental solution E of \(-\Delta \) are given by

$$\begin{aligned} \Gamma (x,t)=\left\{ \begin{array}{ll}(4\pi t)^{-\frac{n}{2}}e^{\frac{-x^{2}}{4t}}&{}\ \text { for }t>0,\\ 0&{}\ \text { for }t\le 0,\end{array}\right. \ \text { and }\ E(x)=\left\{ \begin{array}{ll}\frac{1}{n\,(n-2)|B_1|}\,\frac{1}{|x|^{n-2}}&{}\ \text { for }n\ge 3,\\ -\frac{1}{2\pi }\,\log |x|&{}\ \text { if }n=2.\end{array}\right. \end{aligned}$$

The Poisson kernel of \(-\Delta \) in \({\mathbb {R}}^n_+\) is \(P_0 (x)= -2\partial _n E(x)\). We will use [23, (2.32)] for \(n \ge 2\),

$$\begin{aligned} { \int _\Sigma E(\xi '-y)P_0(x-\xi ')\,d\xi '=E(x-y^*), \quad P_0 (x)= -2\partial _n E(x). } \end{aligned}$$
(2.1)

This is because the integral is a harmonic function of x in \({\mathbb {R}}^n_+\) that equals \(E(x-y^*)\) when \(x_n=0\). The identity was first used in Maz\('\)ja–Plamenevskiĭ–Stupjalis [40, Appendix 1] to study the stationary Green tensor for \(n=2,3\).
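
For illustration only (it is not used in any proof), the identity (2.1) can be checked numerically. The following sketch does this for \(n=3\); the points x and y are arbitrary sample choices, and the integral over \(\Sigma \) is truncated to a large box, which is harmless since the integrand decays like \(|\xi '|^{-4}\).

```python
import numpy as np
from scipy import integrate

# E(x) = 1/(4 pi |x|) and P_0(x) = -2 d_3 E(x) = x_3/(2 pi |x|^3) for n = 3
E  = lambda w: 1.0 / (4.0 * np.pi * np.linalg.norm(w))
P0 = lambda w: w[2] / (2.0 * np.pi * np.linalg.norm(w) ** 3)

x = np.array([0.3, -0.2, 0.7])       # arbitrary sample points in the half space
y = np.array([-0.1, 0.4, 0.5])
y_star = np.array([y[0], y[1], -y[2]])

# left side of (2.1): integral over Sigma = {xi_3 = 0}, truncated to a box
f = lambda s, r: E(np.array([r, s, 0.0]) - y) * P0(x - np.array([r, s, 0.0]))
lhs, _ = integrate.dblquad(f, -60.0, 60.0, -60.0, 60.0)

print(lhs, E(x - y_star))            # the two values agree up to quadrature error
```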

We will use the following functions defined in [46, (60)–(61)]:

$$\begin{aligned} A(x,t)=\int _\Sigma \Gamma (z',0,t)E(x-z')\,dz'=\int _\Sigma \Gamma (x'-z',0,t)E(z',x_n)\,dz' \end{aligned}$$
(2.2)

and

$$\begin{aligned} B(x,t)=\int _\Sigma \Gamma (x-z',t)E(z',0)\,dz'=\int _\Sigma \Gamma (z',x_n,t)E(x'-z',0)\,dz'. \end{aligned}$$
(2.3)

They are defined only for \(n=3\) in [46] and differ from (2.2)–(2.3) by a factor of \(4\pi \). The estimates for A, B, and their derivatives are given in [46, (62, 63)] for \(n=3\). For the general case, we can use the same approach and derive the following estimates for \(l+n\ge 3\):

$$\begin{aligned} |\partial _x^l\partial _t^mA(x,t)|\lesssim \frac{1}{t^{m+\frac{1}{2}}(x^2+t)^{\frac{l+n-2}{2}}} \end{aligned}$$
(2.4)

and

$$\begin{aligned} |\partial _{x'}^l\partial _{x_n}^k\partial _t^mB(x,t)|\lesssim \frac{1}{(x^2+t)^{\frac{l+n-2}{2}}(x_n^2+t)^{\frac{k+1}{2}+m}}. \end{aligned}$$
(2.5)

In fact, the last line of [46, page 39] gives

$$\begin{aligned} |\partial _{x'}^l\partial _{x_n}^k B(x,t)|\lesssim \frac{1}{(x^2+t)^{\frac{l+n-2}{2}}t^{\frac{k+1}{2}}}\,e^{-\frac{x_n^2}{10t}}. \end{aligned}$$
(2.6)

Remark 2.1

For \(n=2\), the condition \(l\ge 1\) is needed as \(A(x,t)\) and \(B(x,t)\) grow logarithmically as \(|x|\rightarrow \infty \). In fact, one may prove for \(n=2\)

$$\begin{aligned} |A(x,t)|+|B(x,t)|\lesssim \frac{1 + |\log (|x_2|+\sqrt{t})|+|\log (|x_1|+|x_2|+\sqrt{t})|}{\sqrt{t}}. \end{aligned}$$

2.1 Integral estimates

We now give a few useful integral estimates.

Lemma 2.1

For positive \(L\), \(a\), \(d\), and \(k\) we have

$$\begin{aligned} \int _0^L\frac{r^{d-1}\,dr}{(r+a)^k}\lesssim \left\{ \begin{array}{ll}L^d(a+L)^{-k}&{} \text { if } k<d,\\ L^d(a+L)^{-d}(1+\log _+\frac{L}{a}) &{} \text { if } k=d,\\ L^d(a+L)^{-d}a^{-(k-d)} &{} \text { if } k>d.\end{array}\right. \end{aligned}$$

Proof

Denote the integral by I. If \(a\ge \frac{L}{2}\), then

$$\begin{aligned} I\lesssim a^{-k}\int _0^L r^{d-1}\,dr\sim L^da^{-k}. \end{aligned}$$

If \(a<\frac{L}{2}\), then

$$\begin{aligned} \begin{aligned} I=&\int _0^a\frac{r^{d-1}\,dr}{(r+a)^k}+\int _a^L\frac{r^{d-1}\,dr}{(r+a)^k}\\ \lesssim&~a^{d-k}+\int _a^Lr^{d-k-1}\,dr\\ \lesssim&~a^{d-k}+\left\{ \begin{array}{ll}L^{d-k}&{} \text { if } k<d,\\ \log \frac{L}{a}&{} \text { if } k=d,\\ a^{d-k}&{} \text { if } k>d.\end{array}\right. \end{aligned} \end{aligned}$$

For \(k<d\),

$$\begin{aligned} I\lesssim \left\{ \begin{array}{ll}L^da^{-k}&{} \text { if } a\ge \frac{L}{2}\\ L^{d-k}&{} \text { if } a<\frac{L}{2}\end{array}\right. \lesssim L^d\max (a,L)^{-k}\lesssim L^d(a+L)^{-k}, \end{aligned}$$

where we used the fact \(2\max (a,L)\ge a+L\). Next, for \(k=d\),

$$\begin{aligned} I\lesssim \left\{ \begin{array}{ll}\left( \frac{L}{a}\right) ^d&{} \text { if } a\ge \frac{L}{2}\\ 1+\log \frac{L}{a}&{} \text { if } a<\frac{L}{2}\end{array}\right. \lesssim \frac{L^d}{(a+L)^d}(1+ \log _+\frac{L}{a}), \end{aligned}$$

because \(a\ge \frac{L}{2}\) implies that \(\frac{L}{a}\lesssim 1\). Finally, for \(k>d\) we get

$$\begin{aligned} I\lesssim \left\{ \begin{array}{ll}L^da^{-k}&{} \text { if } a\ge \frac{L}{2} \\ a^{d-k}&{} \text { if } a<\frac{L}{2}\end{array}\right. \lesssim a^{-k}\min (a,L)^d \sim \frac{ L^d}{(a+L)^{d}a^{k-d}}. \square \end{aligned}$$
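
The bound of Lemma 2.1 is easy to test numerically. The following sketch (not part of the proof; the parameter choices are arbitrary) compares the integral with the claimed bound, with the implicit constant dropped, so the printed ratios should merely stay bounded.

```python
import numpy as np
from scipy.integrate import quad

def lhs(L, a, d, k):
    # the integral of r^{d-1} / (r + a)^k over (0, L)
    val, _ = quad(lambda r: r ** (d - 1) / (r + a) ** k, 0.0, L)
    return val

def rhs(L, a, d, k):
    # the three-case bound of Lemma 2.1, without the implicit constant
    if k < d:
        return L ** d * (a + L) ** (-k)
    if k == d:
        return L ** d * (a + L) ** (-d) * (1.0 + max(np.log(L / a), 0.0))
    return L ** d * (a + L) ** (-d) * a ** (d - k)

for (L, a, d, k) in [(10.0, 0.1, 2, 1), (10.0, 0.1, 2, 2), (10.0, 0.1, 2, 3),
                     (0.5, 3.0, 3, 4), (100.0, 0.01, 1, 1)]:
    print(L, a, d, k, lhs(L, a, d, k) / rhs(L, a, d, k))
```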

Lemma 2.2

Let \(a>0\), \(b>0\), \(k>0\), \(m>0\) and \(k+m>d\). Let \(0\not =x\in {\mathbb {R}}^d\) and

$$\begin{aligned} I:=\int _{{\mathbb {R}}^d}\frac{dz}{(|z|+a)^k(|z-x|+b)^m}. \end{aligned}$$

Then, with \(R=\max \{|x|,\,a,\,b\}\sim |x|+a+b\),

$$\begin{aligned} \begin{aligned} I\lesssim R^{d-k-m} + \delta _{kd} R^{-m} \log \frac{R}{a} + \delta _{md} R^{-k} \log \frac{R}{b} + \mathbb {1}_{k>d} R^{-m}a^{d-k} + \mathbb {1}_{m>d} R^{-k}b^{d-m}. \end{aligned} \end{aligned}$$

Proof

Decompose I into

$$\begin{aligned} I=\left( \int _{|z|<2R}+\int _{|z|>2R}\right) \frac{dz}{(|z|+a)^k(|z-x|+b)^m}:=I_1+I_2. \end{aligned}$$

For \(I_2\) we have

$$\begin{aligned} I_2 {\lesssim }\int _{|z|>2R}\frac{dz}{|z|^k|z|^m}\sim R^{d-k-m}. \end{aligned}$$

For \(I_1\) we consider the three cases concerning R: \(R=|x|\), \(R=a\), and \(R=b\).

  • If \(R=|x|\), we split \(I_1\) into

    $$\begin{aligned} I_1&=\left( \int _{|z|<\frac{R}{2}}+\int _{|z-x|<\frac{R}{2}}+\int _{\begin{array}{c} \frac{R}{2}<|z|<2R\\ |z-x|>\frac{R}{2} \end{array}}\right) \frac{dz}{(|z|+a)^k(|z-x|+b)^m}\\&=:I_{1,1}+I_{1,2}+I_{1,3}. \end{aligned}$$

    By Lemma 2.1 we obtain

    $$\begin{aligned} \begin{aligned} I_{1,1}\lesssim&\int _{|z|<\frac{R}{2}}\frac{dz}{(|z|+a)^kR^m}\\ \sim&~R^{-m}\int _0^{\frac{R}{2}}\frac{r^{d-1}\,dr}{(r+a)^k}{\lesssim }\left\{ \begin{array}{ll}R^{d-m}(a+R)^{-k}&{} \text { if } k<d,\\ R^{-m}\left( 1+\log _+\frac{R}{a}\right) &{} \text { if } k=d,\\ R^{-m}a^{-k}\min (a,R)^d&{} \text { if } k>d\end{array}\right. \end{aligned} \end{aligned}$$

    since \(|z-x|\ge |x|-|z|=R-|z|\ge \frac{R}{2}\). Also by Lemma 2.1,

    $$\begin{aligned} \begin{aligned} I_{1,2}{\lesssim }\int _{|z-x|<\frac{R}{2}}\frac{dz}{R^k(|z-x|+b)^m}\sim&~R^{-k}\int _0^{\frac{R}{2}}\frac{r^{d-1}\,dr}{(r+b)^m}\\ {\lesssim }&\left\{ \begin{array}{ll}R^{d-k}(b+R)^{-m}&{} \text { if } m<d,\\ R^{-k}\left( 1+\log _+\frac{R}{b}\right) &{} \text { if } m=d,\\ R^{-k}b^{-m}\min (b,R)^d&{} \text { if }m>d\end{array}\right. \end{aligned} \end{aligned}$$

    since \(|z|+a\ge |x|-|z-x|=R-|z-x|>\frac{R}{2}\), and

    $$\begin{aligned} I_{1,3}{\lesssim }\int _{\begin{array}{c} \frac{R}{2}<|z|<2R\\ |z-x|>\frac{R}{2} \end{array}}\frac{dz}{|z|^k|z-x|^m}{\lesssim }R^{-k-m}\int _{\frac{R}{2}}^{2R}r^{d-1}\,dr\sim R^{d-k-m}. \end{aligned}$$
  • If \(R=a>|x|\),

    $$\begin{aligned} \begin{aligned} I_{1}&\le \int _{|z|<2R}\frac{dz}{a^k(|z-x|+b)^m}\\&\le a^{-k}\int _{|z-x|<3R}\frac{dz}{(|z-x|+b)^m}\\&=~R^{-k}\int _0^{3R}\frac{r^{d-1}\,dr}{(r+b)^m}\\&\sim ~I_{1,2}. \end{aligned} \end{aligned}$$
  • If \(R=b>|x|\)

    $$\begin{aligned} \begin{aligned} I_1\le \int _{|z|<2R}\frac{dz}{(|z|+a)^kb^m}=R^{-m}\int _0^{2R}\frac{r^{d-1}\,dr}{(r+a)^k}\sim I_{1,1}. \end{aligned} \end{aligned}$$

Combining the above cases, the proof is complete. \(\square \)
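
Lemma 2.2 can be spot-checked numerically in the same spirit. The sketch below (again not part of the proof) takes \(d=2\), truncates the integral to a large box, which is harmless since \(k+m>d\), and compares it with the stated bound up to the implicit constant; the point x and the parameters are arbitrary samples.

```python
import numpy as np
from scipy import integrate

d = 2  # dimension used for this check

def lhs(x, a, b, k, m, box=50.0):
    # integral over R^2 (truncated to a box) of 1/((|z|+a)^k (|z-x|+b)^m)
    f = lambda z2, z1: 1.0 / ((np.hypot(z1, z2) + a) ** k
                              * (np.hypot(z1 - x[0], z2 - x[1]) + b) ** m)
    val, _ = integrate.dblquad(f, -box, box, -box, box)
    return val

def rhs(x, a, b, k, m):
    # the bound of Lemma 2.2 with the implicit constant dropped
    R = max(np.hypot(*x), a, b)
    out = R ** (d - k - m)
    if k == d:
        out += R ** (-m) * np.log(R / a)
    if m == d:
        out += R ** (-k) * np.log(R / b)
    if k > d:
        out += R ** (-m) * a ** (d - k)
    if m > d:
        out += R ** (-k) * b ** (d - m)
    return out

x = (1.0, 0.0)  # arbitrary nonzero sample point
for (a, b, k, m) in [(0.2, 0.5, 2, 3), (0.3, 0.3, 3, 3), (0.5, 2.0, 1, 4)]:
    print(a, b, k, m, lhs(x, a, b, k, m) / rhs(x, a, b, k, m))
```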

2.2 Oseen tensor

We first recall the Oseen tensor \(S_{ij}(x,y,t) = S_{ij}(x-y,t)\), derived by Oseen in [44]. For the Stokes system in \({\mathbb {R}}^{n}\):

$$\begin{aligned} \begin{aligned} \left. \begin{aligned} v_{t}-\Delta v+\nabla q=f\\ v(x,0)=0, \ \ \mathop {\textrm{div}}\nolimits v=0 \end{aligned}\ \right\} \ \ \text{ in }\ \ {\mathbb {R}}^{n}\times (0,+\infty ), \end{aligned} \end{aligned}$$
(2.7)

with \(f(\cdot ,t)=0\) for \(t<0\), the unknown v and q are given by (see e.g. [8] or [46, (46)]):

$$\begin{aligned} v_{i}(x,t) =&~\sum _{j=1}^n\int _{0}^{t}\int _{{\mathbb {R}}^{n}}S_{ij}(x-y,t-s)f_{j}(y,s)dyds, \end{aligned}$$

and

$$\begin{aligned} q(x,t)&=\sum _{j=1}^n\int _{-\infty }^{\infty }\int _{{\mathbb {R}}^{n}}s_j(x-y,t-s)f_{j}(y,s)dyds\\&=-\sum _{j=1}^n \int _{{\mathbb {R}}^{n}}\partial _j E(x-y)f_{j}(y,t)dy. \end{aligned}$$

Here \((S_{ij},s_j)\), the Oseen tensor, is the fundamental solution of the non-stationary Stokes system in \({\mathbb {R}}^{n}\), and

$$\begin{aligned} S_{ij}(x,t)= & {} \delta _{ij}\Gamma (x,t)+\Gamma _{ij}(x,t),\nonumber \\ \Gamma _{ij}(x,t)= & {} \partial _i\partial _j\int _{{\mathbb {R}}^{n}}\Gamma (x-z,t)E(z)dz, \end{aligned}$$
(2.8)
$$\begin{aligned} s_j(x,t)= & {} -\partial _jE(x)\delta (t). \end{aligned}$$
(2.9)

In [46, (41), (42), (44)] it is shown that (for \(n=3\), but the general case can be treated in the same way)

$$\begin{aligned} |\partial _x^l\partial _t^m\Gamma (x,t)| + |\partial _x^l\partial _t^m\Gamma _{ij}(x,t)| + \left| \partial _x^l\partial _t^mS_{ij}(x,t)\right| \lesssim \frac{1}{\left( x^2+t\right) ^{\frac{l+n}{2}+m}} \end{aligned}$$
(2.10)

for \(n\ge 2\). It holds for \(n=2\) since we can apply one derivative on E to remove the \(\log \).

Remark 2.2

Formally taking the zero time limit of (2.8), we get

$$\begin{aligned} { S_{ij}(x,0_+)=\delta _{ij}\delta (x)+\partial _i\partial _jE(x). } \end{aligned}$$
(2.11)

An exact meaning of (2.11) is given by Lemma 2.3. In other words, the zero time limit of the Oseen tensor is the kernel of the Helmholtz projection \({{\textbf{P}}}_{{\mathbb {R}}^n}\) in \({\mathbb {R}}^n\),

$$\begin{aligned} { ({{\textbf{P}}}_{{\mathbb {R}}^n} u)_i = u_i + \partial _i (-\Delta )^{-1} \nabla \cdot u. } \end{aligned}$$
(2.12)
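
Remark 2.2 identifies the zero-time limit of the Oseen tensor with the kernel of the whole-space Helmholtz projection (2.12). Purely as an illustration of how (2.12) acts, the following sketch realizes it as a Fourier multiplier on a periodic grid via the FFT and checks that the projected field is divergence free; this periodic, whole-space toy computation is not the half-space projection \({{\textbf{P}}}\) used elsewhere, and the test field is an arbitrary sample.

```python
import numpy as np

# (P u)^(xi) = u^(xi) - xi (xi . u^(xi)) / |xi|^2, realized on a periodic grid
N, L = 64, 2 * np.pi
k1d = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
KX, KY = np.meshgrid(k1d, k1d, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0  # avoid 0/0; the mean mode is left untouched

xg = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(xg, xg, indexing="ij")
# an arbitrary smooth, non-solenoidal test field
u = np.stack([np.sin(X) * np.cos(2 * Y), np.cos(3 * X) + np.sin(Y)])

uh = np.fft.fft2(u)                      # componentwise Fourier coefficients
div_h = 1j * (KX * uh[0] + KY * uh[1])   # Fourier coefficients of div u
Pu_h = uh + np.stack([1j * KX, 1j * KY]) * div_h / K2   # formula (2.12) in Fourier
divPu = np.fft.ifft2(1j * (KX * Pu_h[0] + KY * Pu_h[1])).real

print(np.abs(divPu).max())               # div(P u) vanishes up to round-off
```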

Lemma 2.3

Fix \(i,j\in \{1,\ldots ,n\}\), \(n \ge 2\). Suppose \(f\in C^1_c({\mathbb {R}}^n)\). Let \(v(x,t)=\int _{{\mathbb {R}}^n} S_{ij}(x-y,t) f(y)\, dy\) and \(v_0(x)= \delta _{ij} f(x) +\partial _i \int _{{\mathbb {R}}^n} \partial _j E(x-y) f(y) \,dy\). Then

$$\begin{aligned} \lim _{t\rightarrow 0_+} \sup _{x \in {\mathbb {R}}^n} {\langle x \rangle }^{n}\,|v(x,t)-v_0(x)|=0. \end{aligned}$$

Some regularity of f is needed to ensure \(L^\infty \) convergence because \(v_0\) may not be continuous if we only assume \(f\in C^0_c\). By Lemma 2.3 and approximation, the convergence \(v(\cdot ,t)\rightarrow v_0\) is also valid in \(L^q({\mathbb {R}}^n)\), \(1< q<\infty \), for \(f \in L^q({\mathbb {R}}^n)\).

Proof

We first consider \(u(x,t)=\int _{{\mathbb {R}}^n} \Gamma (x-y,t)a(y)\,dy\) for a bounded and uniformly continuous function a. Let \(M=\sup |a|\). For any \(\varepsilon >0\), by uniform continuity, there is \(r>0\) such that \(|a(x)-a(y)|\le \varepsilon \) if \(|x-y|\le r\). Using \(\int _{{\mathbb {R}}^n} \Gamma (x-y,t) \,dy=1\),

$$\begin{aligned} \begin{aligned} \left| u(x,t)-a(x) \right|&= \left| \left( \int _{B_r(x)} + \int _{B_r^c(x)} \right) \Gamma (x-y,t)[a(y) - a(x)]\,dy \right| \\ {}&\le \int _{B_r(x)}\Gamma (x-y,t) \varepsilon \,dy + \int _{B_r^c(x)} \Gamma (x-y,t) 2 M\,dy \\ {}&\le \varepsilon + CM \int _{|z|>r} t^{-n/2} e^{-z^2/4t}\,dz \le \varepsilon + CM e^{-r^2/8t}. \end{aligned} \end{aligned}$$

This shows \(\left\| u(x,t)-a(x) \right\| _{L^\infty ({\mathbb {R}}^n)} \rightarrow 0\) as \(t\rightarrow 0_+\). Suppose furthermore \(a\in C^0_c({\mathbb {R}}^n)\), \(a(y)=0\) if \(|y|>R\ge 1\). Then for \(|x|>2R\),

$$\begin{aligned} \begin{aligned} \left| u(x,t)-a(x) \right|&=\left| u(x,t) \right| \le \int _{B_R} \Gamma (x-y,t)M\,dy \\ {}&\le CM e^{-|x|^2/32t} \int _{{\mathbb {R}}^n} t^{-n/2} e^{-|x-y|^2/8t}\,dy = CM e^{-|x|^2/32t} . \end{aligned} \end{aligned}$$

We conclude for any \(\alpha \ge 0\)

$$\begin{aligned} { \left| u(x,t)-a(x) \right| \le \frac{o(1)}{(|x|+R)^\alpha }, \quad \forall x \in {\mathbb {R}}^n, } \end{aligned}$$
(2.13)

where \(o(1)\rightarrow 0\) as \(t\rightarrow 0_+\), uniformly in x. (Estimate (2.13) is valid for \(n\ge 1\).)

Recall the definition (2.8) of \(S_{ij} = \delta _{ij}\Gamma + \Gamma _{ij}\). For \(f\in C^1_c({\mathbb {R}}^n)\), by (2.13) with \(a=f\),

$$\begin{aligned} v(x,t)-v_0(x) = \frac{o(1)}{(|x|+R)^n} + v_1(x,t), \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} v_1(x,t)&= \int _{{\mathbb {R}}^n}\int _{{\mathbb {R}}^n} \Gamma (x-y-z,t)\partial _j E(z)dz \,\partial _i f(y) \,dy - \int _{{\mathbb {R}}^n} \partial _j E(x-w)\partial _i f(w) \,dw \\ {}&= \int _{{\mathbb {R}}^n} \left( \int _{{\mathbb {R}}^n} \Gamma (w-y,t)\partial _i f(y) \,dy - \partial _i f(w) \right) \partial _j E(x-w) \,dw . \end{aligned} \end{aligned}$$

For the second equality we used the change of variables \(z=x-w\) and Fubini's theorem. By (2.13) again with \(a=\partial _i f\),

$$\begin{aligned} |v_1(x,t) | \le \int _{{\mathbb {R}}^n} \frac{o(1)}{(|w|+R)^{n+1} } |x-w|^{1-n}\,dw \le \frac{o(1)}{R(|x|+R)^{n-1} }. \end{aligned}$$

We have used Lemma 2.2 for the second inequality.

We now improve its decay in |x| and assume \(|x|>R+1\). Decompose \({\mathbb {R}}^n = U\cup V\) where \(U=\{w: |w-x|<|x|/2\}\) and \(V=U^c\). Integrating by parts in \(w_i\) in V, we get

$$\begin{aligned} \begin{aligned} v_1(x,t)&= \int _{U} \left( \int _{{\mathbb {R}}^n} \Gamma (w-y,t)\partial _i f(y) \,dy - \partial _i f(w) \right) \partial _j E(x-w) \,dw \\ {}&\quad + \int _{V} \left( \int _{{\mathbb {R}}^n} \Gamma (w-y,t) f(y) \,dy - f(w) \right) \partial _i \partial _j E(x-w) \,dw \\&\quad + \int _{\partial V} \left( \int \Gamma (w-y,t) f(y) \,dy - f(w) \right) \partial _j E(x-w) \,n_i \,dS_w \\&= I_1+I_2+I_3. \end{aligned} \end{aligned}$$

By (2.13) with \(a=\partial _i f\),

$$\begin{aligned} |I_1 | \le \int _{U} \frac{o(1)}{(|x|+R)^{n+2} } |x-w|^{1-n}\,dw \le \frac{o(1)}{(|x|+R)^{n+1} }. \end{aligned}$$

By (2.13) with \(a=f\),

$$\begin{aligned} |I_2 |&\le \int _{V} \frac{o(1)}{(|w|+R)^{n+1} } |x|^{-n}\,dw \le \frac{o(1)}{R(|x|+R)^{n} },\\ |I_3 |&\le \int _{\partial V} \frac{o(1)}{(|x|+R)^{n+1} } |x|^{1-n}\,dS_w \le \frac{o(1)}{(|x|+R)^{n+1} }. \end{aligned}$$

The main term is \(I_2\). This shows the lemma. \(\square \)

2.3 Golovkin tensor

The Golovkin tensor \(K_{ij}(x,t): {\mathbb {R}}_{+}^{n}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is the Poisson kernel of the nonstationary Stokes system in \({\mathbb {R}}^n_+\), first constructed by Golovkin [14] for \({\mathbb {R}}^3_+\). Consider the boundary value problem of the Stokes system in the half-space:

$$\begin{aligned} \begin{aligned} \left. \begin{aligned} {\hat{v}}_{t}-\Delta {\hat{v}}+\nabla p=0\\ \mathop {\textrm{div}}\nolimits {\hat{v}}=0 \end{aligned}\ \right\} \ \ \text{ in }\ \ {\mathbb {R}}_{+}^{n}\times (0,\infty ), \\ {\hat{v}}(x',0,t)=\phi (x',t), \ \ \text{ on }\ \ \Sigma \times (0,\infty ). \end{aligned} \end{aligned}$$
(2.14)

We extend \(\phi (x',t)=0\) for \(t<0\). By Solonnikov [46, (82)], the Golovkin tensor \(K_{ij}(x,t)\) and its associated pressure tensor \(k_j\) are explicitly given by

$$\begin{aligned} K_{ij}(x,t)&= -2\,\delta _{ij}\,\partial _n\Gamma (x,t)-4\,\partial _j\int _0^{x_n}\int _\Sigma \partial _n\Gamma (z,t)\,\partial _iE(x-z)\,dz'\,dz_n\\ &\quad -2\,\delta _{nj}\partial _iE(x)\delta (t), \end{aligned}$$
(2.15)
$$\begin{aligned} k_j(x,t)= & {} 2\,\partial _j\partial _nE(x)\delta (t)+2\,\delta _{nj}E(x)\delta '(t)+\frac{2}{t}\,\partial _jA(x,t), \end{aligned}$$
(2.16)

where \(A(x,t)\) is defined in (2.2). A solution \(({\hat{v}},p)\) of (2.14) is represented by ([46, (84)]):

$$\begin{aligned} { {\hat{v}}_i(x,t)=\sum _{j=1}^n\int _{-\infty }^{\infty }\int _\Sigma K_{ij}(x-\xi ',t-s)\phi _j(\xi ',s)\,d\xi '\,ds } \end{aligned}$$
(2.17)

and

$$\begin{aligned} \begin{aligned} p(x,t)&=2\sum _{i=1}^n\partial _i\partial _n\int _{\Sigma }E(x-\xi ')\phi _i(\xi ',t)\,d\xi ' +2\int _{\Sigma }E(x-\xi ')\partial _t\phi _n(\xi ',t)\,d\xi ' \\&\quad + \sum _{i=1}^n \partial _{i} \int _{-\infty }^{\infty }\int _\Sigma \frac{2}{t-s} A(x-\xi ',t-s) [\phi _i(\xi ',s) - \phi _i(\xi ',t)]\,d\xi '\,ds. \end{aligned} \end{aligned}$$
(2.18)

Note that \( \phi _i(\xi ',t)\) is subtracted from the last integral to make it integrable. Alternatively, using \((\partial _t-\Delta _{x'})A=(-1/(2t))A\) (since \((\partial _t-\Delta _{x'}) \Gamma (x',0,t) = (\partial _t-\Delta _{x'})(4\pi t)^{-1/2} \Gamma _{{\mathbb {R}}^{n-1}}(x',t) = -(2t)^{-1} \Gamma (x',0,t) \)), \(p(x,t)\) can also be expressed as [46, (85)]

$$\begin{aligned} \begin{aligned} p(x,t)=&~2\sum _{i=1}^n\partial _i\partial _n\int _{\Sigma }E(x-\xi ')\phi _i(\xi ',t)\,d\xi ' +2\int _{\Sigma }E(x-\xi ')\partial _t\phi _n(\xi ',t)\,d\xi '\\&-4\sum _{i=1}^n(\partial _t-\Delta _{x'})\int _{-\infty }^\infty \int _{\Sigma }\partial _iA(x-\xi ',t-\tau )\phi _i(\xi ',\tau )\,d\xi 'd\tau . \end{aligned} \end{aligned}$$
(2.19)
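
The identity \((\partial _t-\Delta _{x'}) \Gamma (x',0,t) = -(2t)^{-1} \Gamma (x',0,t)\) invoked above is easy to confirm symbolically; here is a minimal sympy check for \(n=3\) (ours, for illustration only).

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)
t = sp.symbols("t", positive=True)

# Gamma(x', 0, t) for n = 3: the 3D heat kernel restricted to x_3 = 0
G = (4 * sp.pi * t) ** sp.Rational(-3, 2) * sp.exp(-(x1**2 + x2**2) / (4 * t))

heat = sp.diff(G, t) - sp.diff(G, x1, 2) - sp.diff(G, x2, 2)  # (d_t - Delta_{x'}) G
print(sp.simplify(heat + G / (2 * t)))                        # prints 0
```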

The last term of (2.16) is not integrable (hence not a distribution) and has to be understood in the sense of (2.18) or (2.19). By [46] (for \(n=3\), but the general case can be treated in the same manner), for \(n \ge 2\), the Golovkin tensor satisfies, for \(i,j=1,\ldots ,n\) and \(t>0\),

$$\begin{aligned} \left| \partial _{x'}^l\partial _{x_n}^k\partial _t^mK_{ij}(x,t)\right| \lesssim \frac{1}{t^{m+\frac{1}{2}}\left( x^2+t\right) ^{\frac{l+n-\sigma }{2}}(x_n^2+t)^{\frac{k+\sigma }{2}}}, \quad \sigma =\delta _{i<n} \delta _{jn}. \end{aligned}$$
(2.20)

Here \(\sigma =1\) if \(i<n=j\) and \(\sigma =0\) otherwise. Specifically, the case \(j<n\) is [46, (73)], while the case \(j=n\) uses the case \(j<n\), the formulas for \(K_{in}\) on [46, page 47], and [46, (69)].

Remark 2.3

  (i)

    In the proof of [46, (73)], in the equation after [46, (72)], there is at least one \(x'\)-derivative acting on B (defined in (2.3)) even if \(l=0\). The same is true for formulas for \(K_{in}\) on [46, page 47]. Hence we have estimate (2.20) for all \(n \ge 2\) and do not have a log factor for \(n=2\). Compare (2.5) and Remark 2.1.

  (ii)

    Solonnikov [46, pp.46-48] decomposes \({{\hat{v}}} = w + w'\) where

    $$\begin{aligned} \begin{aligned} w_i (x,t)&= \sum _{j<n} \iint K_{ij}(x-\xi ',t-s) \phi _j(\xi ',s)d\xi 'ds, \\ w_i' (x,t)&= \iint K_{in}(x-\xi ',t-s) \phi _n(\xi ',s)d\xi 'ds, \end{aligned} \end{aligned}$$

    and shows that \(w_i(x',0,t)=(1-\delta _{in})\phi _i(x',t)\) and \(w_i'(x',0,t)=\delta _{in}\phi _i(x',t)\).

  (iii)

    The limit of \({{\hat{v}}}(\cdot ,t)\) as \(t \rightarrow 0_+\) depends on the \(\lim _{t \rightarrow 0_+} \phi (\cdot ,t)\). It is in general nonzero unless \(\phi (\cdot ,t)=0\) for \(0<t<\delta \). See the following example.

Example 2.4

Let \(\rho (\xi ',t)\) be any continuous function defined on \(\Sigma \times {\mathbb {R}}\) with suitable decay. Let

$$\begin{aligned} u(x,t) = \nabla _x h(x,t),\quad h(x,t) = \int _\Sigma -2E(x-\xi ') \rho (\xi ',t)d\xi '. \end{aligned}$$

Let \({{\hat{v}}}(x,t)\) be defined by (2.17) with \(\phi (x',t) = u(x',0,t)\). We claim that \({{\hat{v}}}(x,t)=u(x,t)\). Note that h is harmonic in x and \(u_n|_\Sigma = \rho \) as \(-2\partial _n E(x)\) is the Poisson kernel of \(-\Delta \) in \({\mathbb {R}}^n_+\). Since \(\mathop {\textrm{div}}\nolimits u=0\) and \(\mathop {\textrm{curl}}u=0\), by Stein [51] Theorem III.3 on page 65, we have

$$\begin{aligned} u_n|_\Sigma = \rho , \quad u_i |_{\Sigma } = R_i' \rho \quad (i<n), \end{aligned}$$

where \(R'_j\) is the j-th Riesz transform on \({\mathbb {R}}^{n-1}\), \(\widehat{R'_j f}(\xi ') = \frac{i \xi _j}{|\xi '|} {{\hat{f}}}(\xi ')\). By (2.15) and (2.17),

$$\begin{aligned} \begin{aligned} {{\hat{v}}}_i(x,t)&=-2\int _{-\infty }^\infty \int _\Sigma \partial _n\Gamma (x-\xi ',t-s)\phi _i(\xi ',s)\,d\xi 'ds\\ {}&\quad -4\sum _{j=1}^{n-1}\int _{-\infty }^\infty \int _\Sigma \partial _{x_j}\left( \int _0^{x_n}\int _\Sigma \partial _n\Gamma (z,t-s)\,\partial _iE(x-\xi '-z)\,dz'\,dz_n \right) \phi _j(\xi ',s)\,d\xi 'ds\\ {}&\quad -4\int _{-\infty }^\infty \int _\Sigma \partial _{x_n}\left( \int _0^{x_n}\int _\Sigma \partial _n\Gamma (z,t-s)\,\partial _iE(x-\xi '-z)\,dz'\,dz_n \right) \phi _n(\xi ',s)\,d\xi 'ds\\ {}&\quad -2\int _\Sigma \partial _iE(x-\xi ')\phi _n(\xi ',t)\,d\xi ' = : I_1+I_2+I_3+I_4. \end{aligned} \end{aligned}$$

As \(\phi _n= \rho \), \(I_4=u_i(x,t)\) by definition. If \(i<n\), since \(\phi _j = R_j' \rho \), we can switch derivatives

$$\begin{aligned} I_2&= -4\sum _{j=1}^{n-1}\int _{-\infty }^\infty \int _\Sigma \partial _{x_j}\left( \int _0^{x_n}\int _\Sigma \partial _n\Gamma (z,t-s)\,\partial _jE(x-\xi '-z)\,dz'\,dz_n \right) \phi _i(\xi ',s)\,d\xi 'ds\\ &= 4\int _{-\infty }^\infty \int _\Sigma \partial _{x_n}\left( \int _0^{x_n}\int _\Sigma \partial _n\Gamma (z,t-s)\,\partial _nE(x-\xi '-z)\,dz'\,dz_n \right) \phi _i(\xi ',s)\,d\xi 'ds \\ &\quad + 2\int _{-\infty }^\infty \int _\Sigma \partial _{n}\Gamma (x-\xi ',t-s) \phi _i(\xi ',s)\,d\xi 'ds = I_{2a}+I_{2b}. \end{aligned}$$

The second equality is [46, (68)]. Note that \(I_{2b}\) cancels \(I_1\), and \(I_{2a}+I_3=0\) because

$$\begin{aligned} -2\int _\Sigma \partial _iE(x-\xi '-z)\phi _n(\xi ',s)\,d\xi '&= u_i(x-z,s) \\&= -2 \int _\Sigma \partial _nE(x-\xi '-z)\phi _i(\xi ',s)\,d\xi '. \end{aligned}$$

The first equality is by definition of \(u_i\). The second is because \(-2\partial _n E\) is the Poisson kernel. Thus \({{\hat{v}}}_i(x,t)=u_i(x,t)\) for \(i<n\). As they are harmonic conjugates of \({{\hat{v}}}_n\) and \(u_n\), and \({{\hat{v}}}_n\) and \(u_n\) have the same boundary value \(\rho \), we also have \({{\hat{v}}}_n(x,t)=u_n(x,t)\). \(\square \)

3 First Formula for the Green Tensor

In this section, we derive a formula for the Green tensor \(G_{ij}\) of the non-stationary Stokes system in the half-space. We decompose \(G_{ij}={\tilde{G}}_{ij}+W_{ij}\) with the explicit \({{\tilde{G}}}_{ij}\) given by (3.5), and derive a formula for the remainder term \(W_{ij}\).

For the nonstationary Stokes system in the half-space \({\mathbb {R}}^n_+\), \(n\ge 2\), the Green tensor \(G_{ij}(x,y,t)\) and its associated pressure tensor \(g_j(x,y,t)\), for each fixed \(j=1,\ldots ,n\) and \(y\in {\mathbb {R}}^n_+\), satisfy

$$\begin{aligned} \begin{aligned}&\partial _t G_{ij}-\Delta _xG_{ij}+\partial _{x_i}g_j=\delta _{ij}\delta _y(x)\delta (t),\quad \sum _{i=1}^n\partial _{x_i}G_{ij}=0,\quad \text { for }x\in {\mathbb {R}}^n_+\text { and } t\in {\mathbb {R}},\\&G_{ij}(x,y,t)|_{x_n=0}=0. \end{aligned} \end{aligned}$$
(3.1)

Recall the defining property that the solution \((u,\pi )\) of (1.1)–(1.2) with zero boundary condition is given by (1.3) and

$$\begin{aligned} \pi (x,t) = \int _{{\mathbb {R}}^n_+}g(x,y,t)\cdot u_0(y)\,dy+\int _{-\infty }^\infty \int _{{\mathbb {R}}^n_+} g(x,y,t-s)\cdot f(y,s)\,dy\,ds. \end{aligned}$$
(3.2)

The time interval in (3.2) is the entire \({\mathbb {R}}\) because, as we will see in Proposition 3.5, g contains a delta function in time, cf. (2.9). In contrast, \(G_{ij}\) is a function and we can define \(G_{ij}(x,y,t)=0\) for \(t \le 0\) in view of (1.3). Note that \(G_{ij}(x,y,0_+)\not =0\); see Lemma 3.4.

We now proceed to find a formula for \(G_{ij}\). Let \(u,\pi \) solve (1.1)–(1.2) with zero external force \(f=0\), and non-zero initial data \(u(x,0)=u_0(x)\), in the sense of (1.7). Then

$$\begin{aligned} { u_i(x,t)=\sum _{j=1}^n\int _{{\mathbb {R}}^n_+}G_{ij}(x,y,t)(u_0)_j(y)\,dy, } \end{aligned}$$
(3.3)

and \(\pi \) is given by (3.2) with \(f=0\). Let \(\textbf{E}u_0\) be an extension of \(u_0\) to \({\mathbb {R}}^n\) by

$$\begin{aligned} { \textbf{E}u_0(x',x_n)=(-u_0',u_0^n)(x',-x_n)\ \text { for }x_n<0. } \end{aligned}$$
(3.4)

Then \(\mathop {\textrm{div}}\nolimits \textbf{E}u_0(x',x_n)=-\mathop {\textrm{div}}\nolimits u_0(x',-x_n)\) for \(x_n<0\).
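A quick check for \(x_n<0\), differentiating the extension (3.4) componentwise:

$$\begin{aligned} \mathop {\textrm{div}}\nolimits \textbf{E}u_0(x) =\sum _{j<n}\partial _{x_j}\big [-(u_0)_j(x',-x_n)\big ]+\partial _{x_n}\big [(u_0)_n(x',-x_n)\big ] =-\sum _{j<n}(\partial _j (u_0)_j)(x',-x_n)-(\partial _n (u_0)_n)(x',-x_n), \end{aligned}$$

which is \(-\mathop {\textrm{div}}\nolimits u_0(x',-x_n)\).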

Remark 3.1

If \(\mathop {\textrm{div}}\nolimits u_0=0\) and \(u_0^n(x',0)=0\), then \(\mathop {\textrm{div}}\nolimits \textbf{E}u_0=0\) in \({\mathcal {D}}'({\mathbb {R}}^n)\).

Let \({\tilde{u}}\) be the solution to the homogeneous Stokes system in \({\mathbb {R}}^n\) with initial data \(\textbf{E}u_0\). Then

$$\begin{aligned} \begin{aligned} {\tilde{u}}_i(x,t)=&\sum _{j=1}^n\int _{{\mathbb {R}}^n}S_{ij}(x-y,t)(\textbf{E}u_0)_j(y)\,dy\\ =&\sum _{j=1}^n\int _{{\mathbb {R}}^n_+}(S_{ij}(x-y,t)-\epsilon _jS_{ij}(x-y^*,t))(u_0)_j(y)\,dy\\ =&\sum _{j=1}^n\int _{{\mathbb {R}}^n_+}{\tilde{G}}_{ij}(x,y,t)(u_0)_j(y)\,dy, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} {\tilde{G}}_{ij}(x,y,t)=S_{ij}(x-y,t)-\epsilon _jS_{ij}(x-y^*,t),\quad \epsilon _j=1-2\delta _{nj}. \end{aligned} \end{aligned}$$
(3.5)
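The second equality in the computation of \({\tilde{u}}_i\) above uses the reflection (3.4): for \(y_n<0\) we have \((\textbf{E}u_0)_j(y)=-\epsilon _j (u_0)_j(y^*)\), so the change of variables \(y=z^*\) gives, for each j,

$$\begin{aligned} \int _{\{y_n<0\}}S_{ij}(x-y,t)(\textbf{E}u_0)_j(y)\,dy =-\epsilon _j\int _{{\mathbb {R}}^n_+}S_{ij}(x-z^*,t)(u_0)_j(z)\,dz. \end{aligned}$$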

Note that the factor \(\epsilon _j\) is absent in the second term of Solonnikov’s restricted Green tensor (1.9). Eqn. (3.5) is closer to [23, (2.22)].

Lemma 3.1

We have

$$\begin{aligned} S_{ij}(x^*,t)= & {} \epsilon _i\epsilon _jS_{ij}(x,t), \end{aligned}$$
(3.6)
$$\begin{aligned} \tilde{G}_{ij}(x,y,t)\big |_{x_n=0}= & {} 2\,\delta _{in} \,S_{nj}(x'-y,t). \end{aligned}$$
(3.7)

Proof

If \(i=j\), then \(S_{ii}(x,t)\) is even in all \(x_k\). If \(i\ne j\), then \(S_{ij}(x,t)\) is odd in \(x_i\) and \(x_j\), but even in \(x_k\) if \(k\ne i,j\). In particular, applying these parities to the \(x_n\) variable (i.e., taking \(k=n\)), we get (3.6) for all \(i,j=1,\ldots ,n\). By (3.6),

$$\begin{aligned} \left. {\tilde{G}}_{ij}(x,y,t)\right| _{x_n=0}=S_{ij}(x'-y,t)-\epsilon _jS_{ij}(x'-y^*,t)= S_{ij}(x'-y,t)-\epsilon _iS_{ij}(x'-y,t) \end{aligned}$$

which gives (3.7). \(\square \)

Let \({\hat{u}}=u-{\tilde{u}}|_{{\mathbb {R}}^n_+}\). Then \({\hat{u}}\) solves the boundary value problem (2.14) with boundary data \({\hat{u}}|_{x_n=0}=-{\tilde{u}}(x,t)|_{x_n=0}\). By the Golovkin formula (2.17),

$$\begin{aligned} {\hat{u}}_i(x,t)= & {} ~\sum _{k=1}^n{\int _{-\infty }^{\infty }}\int _{\Sigma }K_{ik}(x-\xi ',t-s)(-{\tilde{u}}_k(\xi ',0,s))\,d\xi ' ds\\= & {} ~\sum _{k=1}^n{\int _{-\infty }^{\infty }}\int _{\Sigma }K_{ik}(x-\xi ',t-s) \left( -\sum _{j=1}^n\int _{{\mathbb {R}}^n_+}{\tilde{G}}_{kj}(\xi ',y,s)(u_0)_j(y)\,dy\right) \,d\xi ' ds\\= & {} ~\sum _{j=1}^n\int _{{\mathbb {R}}^n_+}\left( -\sum _{k=1}^n{\int _{-\infty }^{\infty }}\int _{\Sigma }K_{ik}(x-\xi ',t-s){\tilde{G}}_{kj}(\xi ',y,s)\,d\xi 'ds\right) (u_0)_j(y)dy\\= & {} ~\sum _{j=1}^n\int _{{\mathbb {R}}^n_+}W_{ij}(x,y,t)(u_0)_j(y)dy, \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} W_{ij}(x,y,t)&=-\sum _{k=1}^n{\int _{-\infty }^{\infty }}\int _{\Sigma }K_{ik}(x-\xi ',t-s){\tilde{G}}_{kj}(\xi ',y,s)\,d\xi 'ds. \end{aligned} \end{aligned}$$
(3.8)

By Lemma 3.1, since \({\tilde{G}}_{kj}(\xi ',y,s)=2\,\delta _{kn}S_{nj}(\xi '-y,s)\) by (3.7), we have the following first formula of \(W_{ij}\).

Lemma 3.2

(The first formula of \(W_{ij}\)). For \(x,y\in {\mathbb {R}}^n_+\), \(t>0\), and \(i,j=1,\ldots ,n\)

$$\begin{aligned} W_{ij}(x,y,t)=-2{\int _{-\infty }^{\infty }}\int _\Sigma K_{in}(x-\xi ',t-s)S_{nj}(\xi '-y,s)\,d\xi 'ds. \end{aligned}$$
(3.9)

Remark 3.2

Because of \(\delta (t)\) in the last term of the formula (2.15) of \(K_{ij}\), when we substitute (2.15) into the right side of (3.9), one of the resulting integrals is spatial only.

As \(u={\tilde{u}}|_{{\mathbb {R}}^n_+}+{\hat{u}}\), the Green tensor \(G_{ij}\) has the decomposition

$$\begin{aligned} \begin{aligned} G_{ij}(x,y,t)&={\tilde{G}}_{ij}(x,y,t)+W_{ij}(x,y,t)\\&=S_{ij}(x-y,t)-\epsilon _jS_{ij}(x-y^*,t) + W_{ij}(x,y,t). \end{aligned} \end{aligned}$$
(3.10)

All of them are zero for \(t \le 0\).

With the first formula of \(W_{ij}\), we have the scaling property of the Green tensor.

Corollary 3.3

For \(n\ge 2\) the Green tensor \(G_{ij}\) obeys the following scaling property

$$\begin{aligned} G_{ij}(x,y,t)=\lambda ^nG_{ij}(\lambda x,\lambda y, \lambda ^2t). \end{aligned}$$

Proof

Note that \(\Gamma (\lambda x,\lambda ^2 t)=\lambda ^{-n}\Gamma (x,t)\) and \(\delta (\lambda ^2 t)=\lambda ^{-2}\delta (t)\). The corollary then follows directly from (3.10), (3.9) and the scaling properties of \(K_{ij}\) and \(S_{ij}\); see the change of variables spelled out below. \(\square \)
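For the reader's convenience, here is the change of variables behind the scaling. One can check that each term of (2.15) scales like \(\lambda ^{-n-1}\) (using \(\Gamma (\lambda x,\lambda ^2 t)=\lambda ^{-n}\Gamma (x,t)\), \(\partial E(\lambda x)=\lambda ^{1-n}\partial E(x)\) and \(\delta (\lambda ^2 t)=\lambda ^{-2}\delta (t)\)), so \(K_{ij}(\lambda x,\lambda ^2 t)=\lambda ^{-n-1}K_{ij}(x,t)\), while \(S_{ij}(\lambda x,\lambda ^2 t)=\lambda ^{-n}S_{ij}(x,t)\). Hence \({\tilde{G}}_{ij}(\lambda x,\lambda y,\lambda ^2 t)=\lambda ^{-n}{\tilde{G}}_{ij}(x,y,t)\) by (3.5), and, substituting \(\xi '=\lambda \eta '\), \(s=\lambda ^2\tau \) in (3.9),

$$\begin{aligned} W_{ij}(\lambda x,\lambda y,\lambda ^2 t) =-2{\int _{-\infty }^{\infty }}\int _\Sigma \lambda ^{-n-1}K_{in}(x-\eta ',t-\tau )\,\lambda ^{-n}S_{nj}(\eta '-y,\tau )\,\lambda ^{n+1}\,d\eta '\,d\tau =\lambda ^{-n}W_{ij}(x,y,t). \end{aligned}$$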

Remark 3.3

In Lemma 2.1 of the stationary case of [23], the condition \(n\ge 3\) is needed for showing the scaling property of \(G_{ij}(x,y)\) because the 2D fundamental solution E does not have the scaling property. However, in the nonstationary case we do not have this issue, so the scaling property of the nonstationary Green tensor holds for all dimensions \(n\ge 2\).

Before we consider the zero time limit of \(G_{ij}\), we consider the Helmholtz projection.

Remark 3.4

(Helmholtz projection in \({\mathbb {R}}^n_+\)) For a vector field u in \({\mathbb {R}}^n_+\), its Helmholtz projection \({{\textbf{P}}}u\) is given by

$$\begin{aligned} ({{\textbf{P}}}u)_i = u_i - \partial _i p, \end{aligned}$$
(3.11)

where p satisfies \(-\Delta p =- \mathop {\textrm{div}}\nolimits u\), and \(\partial _n p=u_n\) on \(x_n=0\). Using the Green function of the Laplace equation with Neumann boundary condition, \(N(x,y)=E(x-y)+E(x-y^*)\), we have

$$\begin{aligned} p(x) = -\int _{{\mathbb {R}}^n_+} N(x,y) \mathop {\textrm{div}}\nolimits u(y)\,dy - \int _\Sigma u_n (y)N(x,y)\,dS_y. \end{aligned}$$
(3.12)

Note the unit outer normal \(\nu =-e_n\) and \(\frac{\partial p}{\partial \nu } = - \partial _n p=-u_n\). The second term is absent in [41, Appendix], [10, (III.1.18)], and [35, Lemma A.3] because they are concerned with \(L^q\) bounds of \({{\textbf{P}}}{{\tilde{u}}}\) with \({{\tilde{u}}}\in L^q\), for which (3.12) is undefined, and they approximate \({{\tilde{u}}}\) in \(L^q\) by \(u\in C^\infty _c({\mathbb {R}}^n_+)\), for which the second term in (3.12) is zero. For our purpose, we want pointwise bounds and hence we need to keep the boundary term. Integrating by parts,

$$\begin{aligned} { p(x)= \int _{{\mathbb {R}}^n_+} \partial _{y_j} N(x,y) u_j(y)\,dy . } \end{aligned}$$
(3.13)
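In detail, since the outer unit normal on \(\Sigma \) is \(\nu =-e_n\), so that \(u\cdot \nu =-u_n\) there, the divergence theorem gives

$$\begin{aligned} -\int _{{\mathbb {R}}^n_+} N(x,y) \mathop {\textrm{div}}\nolimits u(y)\,dy = \int _{{\mathbb {R}}^n_+} \partial _{y_j} N(x,y)\, u_j(y)\,dy - \int _\Sigma N(x,y)\, u\cdot \nu \,dS_y = \int _{{\mathbb {R}}^n_+} \partial _{y_j} N(x,y)\, u_j(y)\,dy + \int _\Sigma N(x,y)\, u_n(y)\,dS_y. \end{aligned}$$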

Hence the boundary terms on \(\Sigma \) cancel and (3.13) follows. Using the definition of \(N(x,y)\),

$$\begin{aligned} { \partial _{y_j} N(x,y) = - F_j^y(x), \quad F_j^y(x):= \partial _j E(x-y)+\epsilon _j \partial _j E(x-y^*) . } \end{aligned}$$
(3.14)

Thus

$$\begin{aligned} { ({{\textbf{P}}}u)_i (x)= u_i(x) + \partial _i \int _{{\mathbb {R}}^n_+} F_j^y(x) u_j(y)\,dy . } \end{aligned}$$
(3.15)
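As a quick formal check that \({{\textbf{P}}}u\) is solenoidal in the sense of (1.5), using \(-\Delta p =- \mathop {\textrm{div}}\nolimits u\) and \(\partial _n p=u_n\) on \(\Sigma \),

$$\begin{aligned} \mathop {\textrm{div}}\nolimits ({{\textbf{P}}}u) = \mathop {\textrm{div}}\nolimits u - \Delta p =0, \qquad ({{\textbf{P}}}u)_n\big |_{\Sigma } = u_n\big |_{\Sigma } - \partial _n p\big |_{\Sigma } =0. \end{aligned}$$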

We now consider the zero time limit of \(G_{ij}\).

Lemma 3.4

  1. (a)

    For \(x,y\in {\mathbb {R}}^n_+\), we have

    $$\begin{aligned} G_{ij}(x,y,0_+)=\delta _{ij}\delta (x-y) + \partial _{x_i}F^y_j(x), \end{aligned}$$
    (3.16)

    where \(F^y_j(x)\) is defined in (3.14), in the sense that, for any \(i,j\in \{1,\ldots ,n\}\) and \(f\in C^1_c({\mathbb {R}}^n_+)\), we have

    $$\begin{aligned} \lim _{t \rightarrow 0_+} \left( \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t) f(y) dy- \delta _{ij} f(x) - \partial _{x_i} \int _{{\mathbb {R}}^n_+} F^y_j(x) f(y) \,dy \right) =0, \nonumber \\ \end{aligned}$$
    (3.17)

    for all \(x \in {\mathbb {R}}^n_+\), and uniformly for \(x_{n}\ge \delta \), for any \(\delta >0\).

  2. (b)

    Let \(u_0 \in C^1_c({\mathbb {R}}^n_+;{\mathbb {R}}^n)\) be a vector field in \({\mathbb {R}}^n_+\) and let \(u(x,t)\) be given by (3.3). Then \(u(x,t)\rightarrow ({{\textbf{P}}}u_0)(x)\) as \(t \rightarrow 0_+\) for all \(x\in {\mathbb {R}}^n_+\), and uniformly for all \(x\) with \(x_n\ge \delta \) for any \(\delta >0\).

Note that \(\partial _{x_i}F^y_j(x)\) is a distribution since it may produce a delta function at \(x=y\). This lemma shows that the zero time limit of the Green tensor is exactly the Helmholtz projection in \({\mathbb {R}}^n_+\), given in (3.15). We will show uniform convergence in Lemma 6.1, where we assume \({{\textbf{P}}}u_0\in C^1_c(\overline{{\mathbb {R}}^n_+}) \), allowing nonzero tangential components of \(u_0|_\Sigma \), and show \(L^q\) convergence in Lemma 6.2, where we assume \( u_0\in L^q({\mathbb {R}}^n_+) \) but do not assume \(u_0={{\textbf{P}}}u_0\).

Proof

(a) We may extend f to \({\mathbb {R}}^n\) by setting \(f(y)=0\) for \(y_n \le 0\). Recall that

$$\begin{aligned} G_{ij}(x,y,t)= & {} {\tilde{G}}_{ij}(x,y,t)+W_{ij}(x,y,t), \ \ \text{ with }\ \\ {\tilde{G}}_{ij}(x,y,t)= & {} S_{ij}(x-y,t)-\epsilon _jS_{ij}(x-y^*,t). \end{aligned}$$

By (3.6) and Lemma 2.3,

$$\begin{aligned}{} & {} \lim _{t \rightarrow 0_+} \int _{{\mathbb {R}}^n_+} {{\tilde{G}}}_{ij}(x,y,t) f(y) dy\nonumber \\{} & {} \quad =\delta _{ij} f(x)+\partial _i\int _{{\mathbb {R}}^n_+}\partial _j\big [E(x-y)-\epsilon _{j}E(x-y^{*})\big ]f(y)\,dy, \end{aligned}$$
(3.18)

uniformly in \(x\in {\mathbb {R}}^n_+\). Now we consider the contribution from \(W_{ij}(x,y,t)\). By (3.9) and (2.15),

$$\begin{aligned} W_{ij}(x,y,t)= & {} -2\int _{-\infty }^\infty \int _{\Sigma } K_{in}(x-\xi ',t-s)S_{nj}(\xi '-y,s)\,d\xi 'ds \\= & {} W_{ij,1}(x,y,t) + W_{ij,2}(x,y,t), \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} W_{ij,1}(x,y,t)&=-2\int _{-\infty }^\infty \int _{\Sigma } \tilde{K}_{in}(x-\xi ',t-s)S_{nj}(\xi '-y,s)\,d\xi 'ds , \\ W_{ij,2}(x,y,t)&=4 \int _{\Sigma }\partial _iE(x-\xi ')S_{nj}(\xi '-y,t)\,d\xi ', \end{aligned} \end{aligned}$$

and \({{\tilde{K}}}_{ij}\) is the sum of the first two terms in the definition (2.15) of \(K_{ij}\). By (2.20), (2.10), change of variable \(s=u^2\) and Lemma 2.2,

$$\begin{aligned} \begin{aligned} |W_{ij,1}|&\lesssim \int _0^t\int _{\Sigma }\frac{1}{\sqrt{s}(|x-\xi '|+\sqrt{s})^{n-1} (x_n+\sqrt{s})}\,\frac{1}{(|\xi '-y|+\sqrt{t-s})^n}\,d\xi 'ds\\&\le \int _0^t\int _{\Sigma }\frac{1}{\sqrt{s} (x_n+\sqrt{s}) |x-\xi '|^{n-1}}\,\frac{1}{|\xi '-y|^n}\,d\xi 'ds\\&=2 \log \left( 1+\frac{\sqrt{t}}{x_n} \right) \int _{\Sigma }\frac{1}{|x-\xi '|^{n-1}}\,\frac{1}{|\xi '-y|^n}\,d\xi '\\&{\lesssim }\log \left( 1+\frac{\sqrt{t}}{x_n} \right) \left\{ |x-y^*|^{-n} + |x-y^*|^{-n} \log \frac{|x-y^*|}{x_n} + |x-y^*|^{-(n-1)}y_n^{-1} \right\} . \end{aligned} \end{aligned}$$

From this, one has

$$\begin{aligned} \lim _{t \rightarrow 0_+}\int _{{\mathbb {R}}^n_+} W_{ij,1}(x,y,t) f(y) dy=0 \end{aligned}$$

for all \(x \in {\mathbb {R}}^n_+\), and uniformly for \(x_{n}\ge \delta >0\). On the other hand, by Remark 2.2, for \(x_n,y_n>0\), \(W_{ij,2}(x,y,t)\) formally tends, as \(t\rightarrow 0_+\), to

$$\begin{aligned} \begin{aligned} 4\int _{\Sigma }\partial _iE(x-\xi ')\partial _n \partial _j E(\xi '-y)\,d\xi '&= -4\partial _{x_i}\partial _{y_j}\int _{\Sigma }E(x-\xi ')\partial _nE(\xi '-y)\,d\xi '\\ {}&= -2\partial _{x_i}\partial _{y_j}\int _{\Sigma }E(\xi '-x)P_0(y-\xi ')\,d\xi '\\ {}&= -2\frac{\partial }{\partial x_i}\frac{\partial }{\partial y_j} E(x-y^*)= 2\epsilon _j\partial _i\partial _j E(x-y^*), \end{aligned} \end{aligned}$$
(3.19)

where \(P_0=-2\partial _nE\) and we’ve used (2.1) for the third equality. The limit is in the sense of functions since the singularity of the right side of (3.19) is at \(y=x^* \not \in {\mathbb {R}}^n_+\). Thus

$$\begin{aligned} \begin{aligned}&\int _{{\mathbb {R}}^n_+} W_{ij,2}(x,y,t) f(y)\, dy - \int _{{\mathbb {R}}_+^{n}}2\epsilon _j\partial _i\partial _j E(x-y^{*}) f(y)\,dy \\&\quad =\int _{{\mathbb {R}}^n}\! 4 \int _{\Sigma } \partial _iE(x-\xi ')S_{nj}(\xi '-y,t)\,d\xi '\, f(y)\, dy \\&\qquad -\int _{{\mathbb {R}}^n}\! 4 \int _{\Sigma }\partial _iE(x-\xi ') \partial _j E(\xi '-y)\, d\xi '\, \partial _n f(y)\, dy \\ {}&\quad = 4\int _{\Sigma } \left\{ \int _{{\mathbb {R}}^n} S_{nj}(\xi '-y,t) f(y)\, dy - \int _{{\mathbb {R}}^n} \partial _j E(\xi '-y) \partial _n f(y)\, dy \right\} \partial _iE(x-\xi ')\,d\xi '. \end{aligned} \end{aligned}$$

For the first equality we used (3.19) and integrated by parts in \(y_n\) in the second integral using \(f\in C^1_c({\mathbb {R}}^n_+) \). For the second equality we used the Fubini theorem. By Lemmas 2.3 and 2.2, the above is bounded by

$$\begin{aligned} {\lesssim }\int _{\Sigma } \frac{o(1)}{{\langle \xi ' \rangle }^{n}}\frac{1}{|x-\xi '|^{n-1}} d\xi ' {\lesssim }\frac{o(1)}{{\langle x \rangle }^{n-1}}. \end{aligned}$$

The combination of the above and (3.18) gives Part (a).

Part (b) is a consequence of Part (a) and Remark 3.4. \(\square \)

Finally we derive a formula for the pressure tensor \(g_j\), to be used to estimate \(g_j\) in Sect. 5, and show symmetry of \(G_{ij}\) in Sect. 7.

Proposition 3.5

(The pressure tensor \(g_j\)). For \(x,y\in {\mathbb {R}}^n_+\), \(t\in {\mathbb {R}}\), and \(j=1,\ldots ,n\) we have

$$\begin{aligned} \begin{aligned} g_j(x,y,t) = {\widehat{w}}_j(x,y,t) - F^y_j(x) \delta (t), \end{aligned} \end{aligned}$$
(3.20)

where \({\widehat{w}}_j(x,y,t) \) is a function with \(\widehat{w}_j(x,y,t) =0\) for \(t \le 0\) and, for \(t>0\),

$$\begin{aligned} \begin{aligned} {\widehat{w}}_j(x,y,t) =&- \sum _{i<n} 8\int _0^t\int _{\Sigma }\partial _i \partial _nA(\xi ',x_n,\tau )\partial _nS_{ij}(x'-y'-\xi ',-y_n,t-\tau )\,d\xi 'd\tau \\&+ \sum _{i<n} 4\int _{\Sigma }\partial _i E(x-\xi ') \partial _nS_{ij}(\xi '-y,t)\,d\xi '\\&+ 8\int _{\Sigma } \partial _nA(\xi ',x_n,t) \partial _n\partial _jE(x'-y'-\xi ',-y_n)\, d\xi '. \end{aligned} \end{aligned}$$
(3.21)

Proof

For fixed j, the Green tensor \(({G}_{ij}, g_j)\) satisfies (3.1) in \({\mathbb {R}}^n_+\). Let

$$\begin{aligned} \begin{aligned} {\tilde{g}}_j(x,y,t)&=s_j(x-y,t)-\epsilon _j s_j(x-y^*,t)\\&= - \left[ \partial _j E(x-y)-\epsilon _j \partial _j E(x-y^*) \right] \delta (t). \end{aligned} \end{aligned}$$
(3.22)

The pair \(({\tilde{G}}_{ij},{{\tilde{g}}}_j)\) satisfies in \({\mathbb {R}}^n\)

$$\begin{aligned} \begin{aligned} (\partial _t-\Delta _x){\tilde{G}}_{ij}(x,y,t)+\partial _{x_i}\tilde{g}_j(x,y,t)&=\delta _{ij}\delta _y(x)\delta (t) - \epsilon _j \delta _{ij} \delta _{y^*}(x) \delta (t), \\ \textstyle \sum _{i=1}^n \partial _{x_i}{{\tilde{G}}}_{ij}&=0. \end{aligned} \end{aligned}$$
(3.23)

Thus the difference \((W_{ij}, w_j)= ({G}_{ij}, g_j) - ({\tilde{G}}_{ij},{{\tilde{g}}}_j)\) solves in \({\mathbb {R}}^n_+\)

$$\begin{aligned} { \left\{ \begin{array}{l} (\partial _t-\Delta _x)W_{ij}(x,y,t)+\partial _{x_i}w_j(x,y,t)=0,\quad \sum _{i=1}^n \partial _{x_i}W_{ij}=0, \\ W_{ij}(x,y,t)|_{x_n=0}=-2\,\delta _{in}S_{nj}(x'-y,t). \end{array}\right. } \end{aligned}$$
(3.24)

By (2.19), we have

$$\begin{aligned} \begin{aligned} w_j(x,y,t)&= -4\int _{\Sigma }\partial _n^2E(x-\xi ')S_{nj}(\xi '-y,t)\,d\xi ' \\&\quad -4\int _{\Sigma }E(x-\xi ')\partial _tS_{nj}(\xi '-y,t)\,d\xi '\\&\quad +8(\partial _t-\Delta _{x'})\int _{-\infty }^\infty \int _{\Sigma }\partial _nA(x-\xi ',t-\tau )S_{nj}(\xi '-y,\tau )\,d\xi 'd\tau \\&= I_1+I_2+I_3. \end{aligned} \end{aligned}$$
(3.25)

Using \((\partial _t -\Delta ) S_{ij} + \partial _i s_j = \delta _{ij}\delta (x) \delta (t)\), we have

$$\begin{aligned} \begin{aligned} I_2 =&-4\int _{\Sigma }E(x-\xi ')[\Delta S_{nj}(\xi '-y,t) - \partial _n s_j(\xi '-y,t)]\, d\xi '\\ =&-4\int _{\Sigma } \Delta _{x'} E(x-\xi ')S_{nj}(\xi '-y,t)\, d\xi '\\&-4\int _{\Sigma } E(x-\xi ') \partial _n^2 S_{nj}(\xi '-y,t)\, d\xi '\\&-4 \delta (t) \int _{\Sigma } E(x-\xi ') \partial _n\partial _j E(\xi '-y)\, d\xi ' \end{aligned} \end{aligned}$$
(3.26)

The first term of \(I_2\) in (3.26) cancels \(I_1\), since \(\Delta E(x-\xi ')=0\) for \(x_n>0\) gives \(\Delta _{x'} E(x-\xi ')=-\partial _n^2E(x-\xi ')\); the last term of (3.26) is \(\overline{w}_j(x,y) \delta (t)\) with

$$\begin{aligned} \begin{aligned} {\overline{w}}_j(x,y)&= 4\partial _{y_j} \int _{\Sigma }E(x-\xi ')\partial _n E(\xi '-y)\,d\xi ' = 2\partial _{y_j} \int _{\Sigma }E(\xi '-x)P_0(y-\xi ')\,d\xi ' \\ {}&= 2 \partial _{y_j} E(x-y^*) = -2 \epsilon _j \partial _j E(x-y^*) \end{aligned} \end{aligned}$$

using (2.1). Note that

$$\begin{aligned} {{\tilde{g}}}_j(x,y,t) + {\overline{w}}_j(x,y) \delta (t) = - F_j^y(x) \delta (t). \end{aligned}$$
(3.27)
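Indeed, by (3.22), the formula for \({\overline{w}}_j\) just derived, and the definition (3.14) of \(F^y_j\),

$$\begin{aligned} {{\tilde{g}}}_j(x,y,t) + {\overline{w}}_j(x,y) \delta (t) = - \left[ \partial _j E(x-y)-\epsilon _j \partial _j E(x-y^*) +2\epsilon _j \partial _j E(x-y^*) \right] \delta (t) = - F_j^y(x) \delta (t). \end{aligned}$$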

Using \((\partial _t -\Delta ) S_{ij} + \partial _i s_j = \delta _{ij}\delta (x) \delta (t)\) again, we have

$$\begin{aligned} \begin{aligned} I_3 =&~ 8(\partial _t-\Delta _{x'})\int _{-\infty }^\infty \int _{\Sigma }\partial _nA(\xi ',x_n,\tau )S_{nj}(x'-y'-\xi ',-y_n,t-\tau )\,d\xi 'd\tau \\ =&~ 8\int _{-\infty }^\infty \int _{\Sigma }\partial _nA(\xi ',x_n,\tau )\left[ \partial _n^2 S_{nj} - \partial _n s_j \right] (x'-y'-\xi ',-y_n,t-\tau )\, d\xi 'd\tau \\ =&~ 8\int _{-\infty }^\infty \int _{\Sigma }\partial _nA(\xi ',x_n,\tau )\partial _n^2 S_{nj}(x'-y'-\xi ',-y_n,t-\tau )\,d\xi 'd\tau \\&+ 8\int _{\Sigma } \partial _nA(\xi ',x_n,t) \partial _n\partial _jE(x'-y'-\xi ',-y_n)\, d\xi '. \end{aligned} \end{aligned}$$

Denote \({\widehat{w}}_j(x,y,t) = w_j(x,y,t) - {\overline{w}}_j(x,y) \delta (t) \). We conclude

$$\begin{aligned} \begin{aligned} {\widehat{w}}_j(x,y,t)&= 8\int _{-\infty }^\infty \int _{\Sigma }\partial _nA(\xi ',x_n,\tau )\partial _n^2 S_{nj}(x'-y'-\xi ',-y_n,t-\tau )\,d\xi 'd\tau \\&\quad -4\int _{\Sigma } E(x-\xi ') \partial _n^2 S_{nj}(\xi '-y,t) \,d\xi '\\&\quad + 8\int _{\Sigma } \partial _nA(\xi ',x_n,t) \partial _n\partial _jE(x'-y'-\xi ',-y_n)\, d\xi ' . \end{aligned} \end{aligned}$$
(3.28)

Using \(\partial _n^2 S_{nj} = -\sum _{i<n} \partial _i \partial _n S_{ij}\) and integrating by parts in \(\xi _i\) the first two terms, we get (3.21) for \({\widehat{w}}_j(x,y,t)\). Integration by parts is justified since the singularities of the integrands are outside of \(\Sigma \), and the integrands have sufficient decay as \(|\xi '| \rightarrow \infty \) by (2.10) and (2.4) even for \(n=2\). This and (3.27) prove the proposition. \(\square \)

Remark 3.5

  1. (i)

    Eq. (3.21) is better than (3.28) because its estimate allows more decay in \(|x-y^*|+\sqrt{t}\), i.e., in the tangential direction. However, it has a boundary singularity at \(x_n=0\); see Remark 7.1.

  2. (ii)

    With Proposition 3.5, the pressure formula (3.2) in the case \(u_0=0\) becomes

    $$\begin{aligned} \begin{aligned} \pi (x,t)&= \int _{-\infty }^\infty \int _{{\mathbb {R}}^n_+} g(x,y,t-s)\cdot f(y,s)\,dy\,ds \\&= \int _{0}^t \int _{{\mathbb {R}}^n_+} {\widehat{w}}(x,y,t-s)\cdot f(y,s)\,dy\,ds - \sum _{j=1}^n\int _{{\mathbb {R}}^n_+}F_j^y(x)\, f_j(y,t)\,dy. \end{aligned} \end{aligned}$$
    (3.29)

    The last term comes from the Helmholtz projection of f at time t (see (3.13)–(3.14)), and corresponds to the pressure formula above (2.8) in the whole space case. The first term of (3.29) shows that \(\pi (\cdot ,t)\) also depends on the value of f at times \(s<t\). There is no such term in the whole space case. This history-dependence property of the pressure in the half space case is well known, see e.g. [52].

Remark 3.6

(Kernel of Green tensor) Consider

$$\begin{aligned} {\textbf{G}} = \left\{ u= \nabla h\in C^0( {\mathbb {R}}^n_+ ; {\mathbb {R}}^n), \, \lim _{|x|\rightarrow \infty }h(x)=0 \right\} . \end{aligned}$$

If \(u_0 \in {\textbf{G}}\), then \(u(x,t)\) given by (3.3) is identically zero, by integration by parts in (3.3); see the computation below. Both the interior and the boundary contributions vanish because \(\sum _j \partial _{y_j}G_{ij}=0\) and \(G_{in}|_{y_n=0}=0\). Thus \({\textbf{G}}\) is contained in the kernel of the Green tensor. In fact, it is also inside the kernel of the Helmholtz projection in \(L^q({\mathbb {R}}^n_+)\), \(1<q<\infty \), if we impose suitable spatial decay on functions in \({\textbf{G}}\).
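In detail, assuming enough decay of h and of the Green tensor to integrate by parts and to discard the contribution from spatial infinity, and recalling that the outer normal on \(\Sigma \) is \(\nu =-e_n\),

$$\begin{aligned} u_i(x,t)=\sum _{j=1}^n\int _{{\mathbb {R}}^n_+}G_{ij}(x,y,t)\,\partial _{y_j}h(y)\,dy =-\int _{\Sigma }G_{in}(x,y,t)\big |_{y_n=0}\,h(y',0)\,dy' -\sum _{j=1}^n\int _{{\mathbb {R}}^n_+}\partial _{y_j}G_{ij}(x,y,t)\,h(y)\,dy =0. \end{aligned}$$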

Remark 3.7

(Relation between stationary and nonstationary Green tensors) Denote the Green tensor of the stationary Stokes system in the half space as \(G_{ij}^0(x,y)\). For \(n \ge 3\) we can show

$$\begin{aligned} { \int _{\mathbb {R}}G_{ij}(x,y,t)\,dt = G_{ij}^0(x,y). } \end{aligned}$$
(3.30)

The integral does not converge for \(n=2\). The idea is to decompose \(G_{ij}(x,y,t)={{\tilde{G}}}_{ij}(x,y,t)+W_{ij}(x,y,t)\) and show that their time integrals converge to the corresponding terms in [23, (2.25)]. This relation gives an alternative proof of the symmetry \(G_{ij}^0(x,y)=G_{ji}^0(y,x)\) for \(n\ge 3\) using Proposition 1.4.
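To see why the dimension restriction enters, note that with the usual normalization \(\Gamma (x,t)=(4\pi t)^{-n/2}e^{-|x|^2/4t}\) and the standard fundamental solution E of \(-\Delta \), the substitution \(u=|x|^2/(4t)\) gives

$$\begin{aligned} \int _0^\infty \Gamma (x,t)\,dt =\frac{|x|^{2-n}}{4\pi ^{n/2}}\int _0^\infty u^{\frac{n}{2}-2}e^{-u}\,du =E(x) \quad (n\ge 3), \end{aligned}$$

while for \(n=2\) the u-integral diverges at \(u=0\), i.e., the time integral diverges for large t; this is consistent with the non-convergence for \(n=2\) noted above.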

4 Revised Formula for the Green Tensor

In this section we derive a second formula for the remainder term \(W_{ij}\) which is suitable for pointwise estimates. We also use it to get a new formula for the Green tensor in Lemma 4.3.

We first recall the Poisson kernel \(P(x,\xi ',t)\) for \(\partial _t-\Delta \) in the half-space \({\mathbb {R}}^n_+\) for \(x\in {\mathbb {R}}^n_+\) and \(\xi '\in \Sigma \),

$$\begin{aligned} P(x,\xi ',t)=-2\,\partial _n\Gamma (x-\xi ',t). \end{aligned}$$
(4.1)
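For later reference we record the Poisson formula for the heat equation in the half-space: if \((\partial _t-\Delta )v=0\) in \({\mathbb {R}}^n_+\times (0,\infty )\), \(v(\cdot ,0)=0\), and \(v|_\Sigma =\phi \) with \(\phi \) bounded and continuous, then the bounded solution is

$$\begin{aligned} v(x,t)=\int _0^t\int _\Sigma P(x,\xi ',t-s)\,\phi (\xi ',s)\,d\xi '\,ds, \qquad P(x,\xi ',t)=\frac{x_n}{t}\,\Gamma (x-\xi ',t), \end{aligned}$$

where the second identity uses the Gaussian form of \(\Gamma \). This is the formula invoked in the proof of Lemma 4.1 below.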

The following lemma is based on Poisson’s formula, and can be used to remove the time integration in the first formula (3.9). It is the time-dependent version of (2.1).

Lemma 4.1

Let \(n \ge 2\). For \(x\in {\mathbb {R}}^n_+\), \(y\in {\mathbb {R}}^n\) and \(t>0\),

$$\begin{aligned} { \int _0^t\int _\Sigma \Gamma (\xi '-y,s)P(x,\xi ',t-s)\,d\xi '\,ds=\Gamma (x-y^\sharp ,t),\quad y^\sharp =(y',-|y_n|).} \end{aligned}$$
(4.2)

Note that \(y^\sharp =y^*\) if \(y\in {\mathbb {R}}^n_+\), and \(y^\sharp =y\) if \(y\in {\mathbb {R}}^n_-\).

Proof

First we consider \(y\in {\mathbb {R}}^n_+\). Since \(u(x,t)=\Gamma (x-y^*,t)\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll}(\partial _t-\Delta )u(x,t)=0&{}\ \text { for }(x,t)\in {\mathbb {R}}^n_+\times (0,\infty ), \\ u(x',t)=\Gamma (x'-y^*,t)=\Gamma (x'-y,t) &{}\ \text { for }(x',t)\in \partial {\mathbb {R}}^n_+\times (0,\infty ),\\ u(x,0)=\Gamma (x-y^*,0)=\delta (x-y^*)=0&{}\ \text { for }x\in {\mathbb {R}}^n_+,\end{array}\right. \end{aligned}$$

by Poisson’s formula for \(\partial _t-\Delta \) in \({\mathbb {R}}^n_+\), we have

$$\begin{aligned} \int _0^t\int _\Sigma \Gamma (\xi '-y,s)P(x,\xi ',t-s)\,d\xi '\,ds=\Gamma (x-y^*,t). \end{aligned}$$

For \(y\in {\mathbb {R}}^n_-\), \(y^*\in {\mathbb {R}}^n_+\). Since \(\Gamma (\xi '-y,s)=\Gamma (\xi '-y^*,s)\),

$$\begin{aligned} \begin{aligned} \int _0^t\int _\Sigma \Gamma (\xi '-y,s)P(x,\xi ',t-s)\,d\xi '\,ds&=\int _0^t\int _\Sigma \Gamma (\xi '-y^*,s)P(x,\xi ',t-s)\,d\xi '\,ds\\&=\Gamma (x-y^{**},t)=\Gamma (x-y,t). \end{aligned} \end{aligned}$$

The combination of the two cases \(y\in {\mathbb {R}}^n_+\) and \(y\in {\mathbb {R}}^n_-\) gives (4.2). \(\square \)

With Lemma 4.1 in hand, we are able to derive the second formula for \(W_{ij}\).

Lemma 4.2

(The second formula for \(W_{ij}\)). For \(x,y\in {\mathbb {R}}^n_+\) and \(i,j=1,\ldots ,n\),

$$\begin{aligned} \begin{aligned} W_{ij}(x,y,t)&=-2\delta _{in}\delta _{nj}\Gamma (x-y^*,t)+2\delta _{in}\epsilon _j\Gamma _{nj}(x-y^*,t)\\&\quad -4\delta _{nj}C_i(x,y,t)-4H_{ij}(x,y,t) + V_{ij}(x,y,t), \end{aligned} \end{aligned}$$
(4.3)

where

$$\begin{aligned} C_i(x,y,t)= & {} \int _0^{x_n}\int _\Sigma \partial _n\Gamma (x-y^*-z,t)\,\partial _iE(z)\,dz'\,dz_n, \end{aligned}$$
(4.4)
$$\begin{aligned} H_{ij}(x,y,t)= & {} -\int _{{\mathbb {R}}^n}\partial _{y_j}C_i(x,y+w,t)\partial _nE(w)dw, \end{aligned}$$
(4.5)

and

$$\begin{aligned} { V_{ij}(x,y,t) = -2\delta _{in} \Lambda _j(x,y,t) -4\int _0^{x_n} \int _{\Sigma } \partial _{x_n}\Lambda _j(x-z,y,t) \partial _iE(z)\, dz'dz_n. }\nonumber \\ \end{aligned}$$
(4.6)

Here

$$\begin{aligned} { \Lambda _j(x,y,t) = \partial _{y_n}\partial _{y_j} \int _{w_n<-y_n}G^{ht}(x,y+w,t) E(w)\, dw, } \end{aligned}$$
(4.7)

where \(G^{ht}(x,y,t) = \Gamma (x-y,t) - \Gamma (x-y^*,t)\) is the Green function of heat equation in \({\mathbb {R}}^n_+\times (0,\infty )\). Note that \(C_i(x,y,t)\) is defined in \({\mathbb {R}}^n_+ \times {\mathbb {R}}^n \times (0,\infty )\), and \(y_n \) is allowed to be negative.

Remark 4.1

We can show that \(C_i\), \(H_{ij}\), and \(V_{ij}\) are well defined using Lemma 2.2. The \(x'\)- and \(y'\)-derivatives are interchangeable for \(C_i\), \(H_{ij}\), and \(V_{ij}\), since each depends on \(x'\) and \(y'\) only through \(x'-y'\): \(\partial _{x'}^lC_i(x,y,t)=(-1)^l\partial _{y'}^lC_i(x,y,t)\) and similarly for \(H_{ij}\) and \(V_{ij}\).

Remark 4.2

The formula (4.3) is better than (3.9) because the definitions of the terms on the right side do not involve integration in time. If an integration in time were involved, there might be singularities at \(s=0,t\) when we use the estimates of \(K_{ij}\) and \(S_{ij}\) in (2.20) and (2.10), respectively. Their estimates would be worse and contain, for example, singularities in \(x_n\) for \(x_n\) small. The quantity \(C_i(x,t)\) studied by Solonnikov [46, (66)] corresponds to our \(C_i(x,0,t)\) with \(y=0\); he did not study the full \(C_i(x,y,t)\) with \(y\not =0\), nor \(H_{ij}(x,y,t)\).

Remark 4.3

The formula (4.3) corresponds to that of the stationary case in [23, (2.36)]:

$$\begin{aligned} W_{ij}(x,y)=-\left( \delta _{in}-x_n\partial _{x_i}\right) \left( \delta _{nj}-y_n\partial _{y_j}\right) E(x-y^*). \end{aligned}$$

Proof of Lemma 4.2

To obtain (4.3), we use the formulae (2.8) and (2.15) and split the integral of (3.9) into six parts as

$$\begin{aligned} \int _{-\infty }^{\infty }\int _\Sigma K_{in}(x-\xi ',t-s)S_{nj}(\xi '-y,s)\,d\xi '\,ds=I_1+I_2+I_3+I_4+I_5+I_6, \end{aligned}$$

where

$$\begin{aligned} I_1&=-2\,\delta _{in}\delta _{nj} \int _{-\infty }^{\infty }\int _\Sigma \partial _n\Gamma (x-\xi ',t-s)\Gamma (\xi '-y,s)\,d\xi 'ds,\\ I_2&=-4\,\delta _{nj} \int _{-\infty }^{\infty }\int _\Sigma \partial _{x_n}\left[ \int _0^{x_n}\!\int _\Sigma \partial _n\Gamma (z,t-s)\,\partial _iE(x-\xi '-z)\,dz'\,dz_n\right] \\&\quad \Gamma (\xi '-y,s)\,d\xi 'ds,\\ I_3&=-2\,\delta _{nj} \int _{-\infty }^{\infty }\int _\Sigma \partial _iE(x-\xi ')\delta (t-s)\Gamma (\xi '-y,s)\,d\xi 'ds,\\ I_4&=-2\,\delta _{in} \int _{-\infty }^{\infty }\int _\Sigma \partial _n\Gamma (x-\xi ',t-s)\int _{{\mathbb {R}}^n}\partial _n\partial _j\Gamma (\xi '-y-w,s)E(w)\,dwd\xi 'ds,\\ I_5&=-4 \int _{-\infty }^{\infty }\int _\Sigma \partial _{x_n}\left[ \int _0^{x_n}\int _\Sigma \partial _n\Gamma (z,t-s)\,\partial _iE(x-\xi '-z)\,dz'\,dz_n\right] \\&\quad \cdot \left[ \int _{{\mathbb {R}}^n}\partial _n\partial _j\Gamma (\xi '-y-w,s)E(w)\,dw\right] d\xi 'ds,\\ I_6&=-2 \int _{-\infty }^{\infty }\int _\Sigma \partial _iE(x-\xi ')\delta (t-s)\int _{{\mathbb {R}}^n}\partial _n\partial _j\Gamma (\xi '-y-w,s)E(w)\,dw\,d\xi 'ds. \end{aligned}$$

We use Lemma 4.1 to compute \(I_1,I_2,I_4,I_5\). Indeed, we have

$$\begin{aligned} \begin{aligned} I_1=&-2\delta _{in}\delta _{nj}\int _0^t\int _\Sigma \partial _n\Gamma (x-\xi ',t-s)\Gamma (\xi '-y,s)\,d\xi 'ds\\=&~\delta _{in}\delta _{nj}\int _0^t\int _\Sigma P(x,\xi ',t-s)\Gamma (\xi '-y,s)\,d\xi 'ds\\=&~{\delta _{in}\delta _{nj}\Gamma (x-y^\sharp ,t) = \delta _{in}\delta _{nj} \Gamma (x-y^*,t)}, \end{aligned} \end{aligned}$$

where we used (4.1), Lemma 4.1 and \(y\in {\mathbb {R}}^n_+\). By changing variables and Fubini’s theorem, we have

$$\begin{aligned} \begin{aligned} I_2=&-4\,\delta _{nj}\int _0^t\int _\Sigma \partial _{x_n}\left[ \int _0^{x_n}\int _\Sigma \partial _n\Gamma (x-\xi '-z,t-s)\,\partial _iE(z)\,dz'\,dz_n\right] \\&\quad \Gamma (\xi '-y,s)\,d\xi 'ds\\ =&-4\,\delta _{nj}\partial _{x_n}\left[ \int _0^{x_n}\int _\Sigma \int _0^t\left( \int _\Sigma \partial _n\Gamma (x-\xi '-z,t-s)\right. \right. \\&\quad \left. \left. \Gamma (\xi '-y,s)\,d\xi 'ds\right) \partial _iE(z)\,dz'dz_n\right] . \end{aligned} \end{aligned}$$

With the aid of (4.1) and Lemma 4.1, we actually get

$$\begin{aligned} \begin{aligned} I_2=&~2\,\delta _{nj}\partial _{x_n}\left[ \int _0^{x_n}\int _\Sigma \left( \int _0^t\int _\Sigma P(x-z,\xi ',t-s)\Gamma (\xi '-y,s)\,d\xi 'ds\right) \partial _iE(z)\,dz'dz_n\right] \\ =&~2\,\delta _{nj}\partial _{x_n}\left[ \int _0^{x_n}\int _\Sigma \Gamma (x-z-{y^\sharp },t)\,\partial _iE(z)\,dz'dz_n\right] \\ =&~{ 2\,\delta _{nj}\partial _{x_n}\left[ \int _0^{x_n}\int _\Sigma \Gamma (x-z-y^*,t)\,\partial _iE(z)\,dz'dz_n\right] \quad \text {(since } y\in {\mathbb {R}}^n_+) }\\ =&~2\,\delta _{nj}\int _\Sigma \Gamma (x'-z'-y^*,t)\,\partial _iE(z',x_n)\,dz' +2\,\delta _{nj} C_i(x,y,t), \end{aligned} \end{aligned}$$

where \(C_i(x,y,t)\) is as defined in (4.4).

Moreover, rearranging the integrals and derivatives and using (4.1), we obtain

$$\begin{aligned} \begin{aligned}I_4=&-2\,\delta _{in}\int _0^t\int _\Sigma \partial _n\Gamma (x-\xi ',t-s)\int _{{\mathbb {R}}^n}\partial _n\partial _j\Gamma (\xi '-y-w,s)E(w)\,dwd\xi 'ds\\ =&-2\,\delta _{in}\partial _{y_n}\partial _{y_j}\left[ \int _0^t\int _\Sigma \partial _n\Gamma (x-\xi ',t-s)\int _{{\mathbb {R}}^n}\Gamma (\xi '-y-w,s)E(w)\,dwd\xi 'ds\right] \\ =&~\delta _{in}\partial _{y_n}\partial _{y_j}\left[ \int _{{\mathbb {R}}^n}\left( \int _0^t\int _\Sigma P(x,\xi ',t-s)\Gamma (\xi '-(y+w),s)d\xi '\,ds\right) E(w)\,dw\right] .\end{aligned} \end{aligned}$$

Hence, applying Fubini’s theorem and Lemma 4.1, we have

$$\begin{aligned} \begin{aligned}I_4=&~\delta _{in}\partial _{y_n}\partial _{y_j}\left[ \int _{{\mathbb {R}}^n}\Gamma (x-(y+w)^\sharp ,t)E(w)\,dw\right] \\ =&~\delta _{in}\partial _{y_n}\partial _{y_j}\left[ \int _{w_n>-y_n}\Gamma (x-(y+w)^*,t)E(w)\,dw \right. \\&\left. + \int _{w_n<-y_n}\Gamma (x-y-w,t)E(w)\,dw\right] \\ =&~\delta _{in}\partial _{y_n}\partial _{y_j}\left[ \int _{{\mathbb {R}}^n}\Gamma (x-(y+w)^*,t)E(w)\,dw\right. \\&\left. + \int _{w_n<-y_n}(\Gamma (x-y-w,t) - \Gamma (x-(y+w)^*,t)) E(w)\,dw\right] \\ =&~-\delta _{in}\epsilon _j \Gamma _{nj}(x-y^*,t) + \delta _{in}\partial _{y_n}\partial _{y_j} \int _{w_n<-y_n} G^{ht}(x,y+w,t) E(w)\,dw\\ =&~-\delta _{in}\epsilon _j \Gamma _{nj}(x-y^*,t) + \delta _{in} \Lambda _j(x,y,t),\end{aligned} \end{aligned}$$

where \(G^{ht}(x,y,t)=\Gamma (x-y,t) - \Gamma (x-y^*,t)\) is the Green function of heat equation in \({\mathbb {R}}^n_+\times (0,\infty )\) and \(\Lambda _j(x,y,t)\) is as defined in (4.7).

In addition, by changing the variables, Fubini’s theorem and (4.1), we get

$$\begin{aligned} \begin{aligned}I_5 =&-4\int _0^t\int _\Sigma \partial _{x_n}\left[ \int _0^{x_n}\int _\Sigma \partial _n\Gamma (x-\xi '-z,t-s)\,\partial _iE(z)\,dz'dz_n\right] \\&\cdot \left[ \int _{{\mathbb {R}}^n}\partial _n\partial _j\Gamma (\xi '-y-w,s)E(w)\,dw\right] d\xi 'ds\\ =&~2\,\partial _{x_n}\left[ \int _0^{x_n}\int _\Sigma \int _{{\mathbb {R}}^n}\partial _{y_n}\partial _{y_j}\left( \int _0^t\int _\Sigma P(x-z,\xi ',t-s)\Gamma (\xi '-(y+w),s)\,d\xi 'ds\right) \right. \\&\cdot \partial _iE(z)E(w)\,dwdz'dz_n\Big ].\end{aligned} \end{aligned}$$

Thus, Lemma 4.1 implies

$$\begin{aligned} \begin{aligned} I_5=&~2\,\partial _{x_n}\left[ \int _0^{x_n}\int _\Sigma \int _{{\mathbb {R}}^n}\partial _{y_n}\partial _{y_j}\Gamma ((x-z)-(y+w)^\sharp ,t)\,\partial _iE(z)E(w)\,dwdz'dz_n\right] \\ =&~ 2 \int _\Sigma \int _{{\mathbb {R}}^n}\partial _{y_n}\partial _{y_j}\Gamma (x'-z'-(y+w)^\sharp ,t)\,\partial _iE(z',x_n)E(w)\,dwdz'\\&\quad + 2\,\int _0^{x_n}\int _\Sigma \partial _{x_n} \left( \int _{{\mathbb {R}}^n}\partial _{y_n}\partial _{y_j}\Gamma ((x-z)-(y+w)^\sharp ,t)E(w)\,dw\right) \partial _iE(z)\,dz'dz_n\\ =&~ 2\int _\Sigma \Gamma _{nj}(x'-z'-y,t)\,\partial _iE(z',x_n)\,dz' + 2H_{ij}^\sharp (x,y,t), \end{aligned} \end{aligned}$$

where we’ve used \(\Gamma (x'-z'-(y+w)^\sharp ,t)=\Gamma ((x'-z'-y)-w,t)\), the functions \(\Gamma _{ij}\) are defined in (2.8), and \(H_{ij}^\sharp \) is expanded as

$$\begin{aligned} \begin{aligned} H_{ij}^\sharp (x,y,t)&= \int _0^{x_n}\int _{\Sigma } \partial _{x_n} \partial _{y_n} \partial _{y_j} \left( \int _{w_n>-y_n} \Gamma ((x-z)-(y+w)^*,t) E(w)\, dw\right. \\&\quad \left. + \int _{w_n<-y_n} \Gamma ((x-z)-y-w,t) E(w)\, dw \right) \partial _iE(z)\, dz'dz_n\\&= \int _0^{x_n}\int _{\Sigma } \partial _{x_n} \partial _{y_n} \partial _{y_j} \left( \int _{{\mathbb {R}}^n} \Gamma ((x^*-z^*-y)-w,t) E(w)\, dw\right. \\&\quad \left. + \int _{w_n<-y_n} G^{ht}(x-z,y+w,t)E(w)\, dw\right) \partial _iE(z)\, dz'dz_n\\&= H_{ij}(x,y,t) + \int _0^{x_n} \int _{\Sigma } \partial _{x_n}\Lambda _j(x-z,y,t) \partial _iE(z)\, dz'dz_n, \end{aligned} \end{aligned}$$

where \(H_{ij}\) is defined in (4.5).

For \(I_3\) and \(I_6\), a direct computation gives

$$\begin{aligned} \begin{aligned} I_3 =&-2\,\delta _{nj}\int _\Sigma \partial _iE(x-\xi ')\Gamma (\xi '-y,t)\,d\xi '\\ =&-2\,\delta _{nj}\int _\Sigma \Gamma (x'-z'-y^*,t)\,\partial _iE(z',x_n)\,dz', \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}I_6 =&-2\int _\Sigma \partial _iE(x-\xi ')\int _{{\mathbb {R}}^n}\partial _n\partial _j\Gamma (\xi '-y-w,t)E(w)\,dwd\xi '\\ =&-2\int _\Sigma \partial _iE(x-\xi ')\Gamma _{nj}(\xi '-y,t)\,d\xi '\\=&-2\int _\Sigma \Gamma _{nj}(x'-z'-y,t)\,\partial _iE(z',x_n)\,dz'. \end{aligned} \end{aligned}$$

Combining the above computations of \(I_1,\cdots ,I_6\), and noting that \(I_3\) cancels the first term of \(I_2\) while \(I_6\) cancels the first term of \(I_5\), we get

$$\begin{aligned} \begin{aligned} \sum _{k=1}^6 I_k&=\delta _{in}\delta _{nj}\Gamma (x-y^*,t)+2\,\delta _{nj}C_i(x,y,t)-\delta _{in}\epsilon _j\Gamma _{nj}(x-y^*,t)\\&\quad + \delta _{in} \Lambda _j(x,y,t) + 2H_{ij}^\sharp (x,y,t). \end{aligned} \end{aligned}$$

Since \(W_{ij}=-2\sum _{k=1}^6 I_k\) by (3.9), and \(-2\delta _{in}\Lambda _j-4H_{ij}^\sharp =-4H_{ij}+V_{ij}\) by the expansion of \(H_{ij}^\sharp \) above and the definition (4.6) of \(V_{ij}\), this gives (4.3) and completes the proof. \(\square \)

We now explore a cancellation between \(C_i\) and \(H_{ij}\) in (4.3), and define

$$\begin{aligned} \begin{aligned} \widehat{H}_{ij}(x,y,t) = H_{ij}(x,y,t) + \delta _{nj} C_i (x,y,t). \end{aligned} \end{aligned}$$
(4.8)

Then (4.3) becomes

$$\begin{aligned} \begin{aligned} W_{ij}(x,y,t)&= -2\delta _{in} \delta _{nj} \Gamma (x-y^*,t) + 2 \delta _{in} \epsilon _j \Gamma _{nj}(x-y^*,t) \\&\quad - 4{\widehat{H}}_{ij}(x,y,t) { + V_{ij}(x,y,t).} \end{aligned} \end{aligned}$$
(4.9)

This formula will provide better estimates than summing estimates of individual terms in (4.3). See Remark 5.2 after Proposition 5.5.

We conclude with a second formula for the Green tensor.

Lemma 4.3

The Green tensor satisfies

$$\begin{aligned} G_{ij}(x,y,t)= & {} \delta _{ij} \left[ \Gamma (x-y,t) - \Gamma (x-y^*,t) \right] + \left[ \Gamma _{ij}(x-y,t) - \epsilon _i \epsilon _j \Gamma _{ij}(x-y^*,t) \right] \nonumber \\{} & {} - 4{\widehat{H}}_{ij}(x,y,t) { + V_{ij}(x,y,t).} \end{aligned}$$
(4.10)

Proof

Recall (3.10) that \(G_{ij}(x,y,t)=S_{ij}(x-y,t)-\epsilon _jS_{ij}(x-y^*,t)+ W_{ij}(x,y,t)\). By (2.8) and (4.9), we get the lemma; the terms combine as spelled out below. \(\square \)
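In more detail, recalling the decomposition \(S_{ij}(x,t)=\delta _{ij}\Gamma (x,t)+\Gamma _{ij}(x,t)\) from (2.8) and inserting (4.9) into (3.10), the \(\Gamma \)-parts and the \(\Gamma _{ij}\)-parts combine as

$$\begin{aligned} \delta _{ij}\Gamma (x-y,t)-\epsilon _j\delta _{ij}\Gamma (x-y^*,t)-2\delta _{in}\delta _{nj}\Gamma (x-y^*,t) =\delta _{ij}\left[ \Gamma (x-y,t)-\Gamma (x-y^*,t)\right] , \end{aligned}$$

$$\begin{aligned} \Gamma _{ij}(x-y,t)-\epsilon _j\Gamma _{ij}(x-y^*,t)+2\delta _{in}\epsilon _j\Gamma _{nj}(x-y^*,t) =\Gamma _{ij}(x-y,t)-\epsilon _i\epsilon _j\Gamma _{ij}(x-y^*,t), \end{aligned}$$

using \(\epsilon _j\delta _{ij}+2\delta _{in}\delta _{nj}=\delta _{ij}\), \(\delta _{in}\Gamma _{nj}=\delta _{in}\Gamma _{ij}\), and \(\epsilon _j(1-2\delta _{in})=\epsilon _i\epsilon _j\).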

5 First Estimates of the Green Tensor

In this section, we first estimate \({\widehat{H}}_{ij}\), then estimate \(V_{ij}\), and finally prove the Green tensor estimates in Proposition 1.1.

5.1 Estimates of \({\widehat{H}}_{ij}\)

Lemma 5.1

For \(i,j=1,\ldots ,n\), we have

$$\begin{aligned} \begin{aligned} \left\{ \begin{aligned} {\widehat{H}}_{ij}(x,y,t)&= - D_{ijn}(x,y,t) \quad \text { if } j< n, \quad \\ \widehat{H}_{in}(x,y,t)&= \textstyle \sum _{\beta <n} D_{i\beta \beta }(x,y,t) \quad \text { if }j = n, \end{aligned} \right. \end{aligned} \end{aligned}$$
(5.1)

where for \(m = 1,\ldots , n\),

$$\begin{aligned} { D_{i\beta m}(x,y,t) = \int _0^{x_n}\int _{\Sigma }\partial _\beta \Gamma _{m n}(x^*-y-z^*,t)\,\partial _iE(z)\,dz'dz_n. } \end{aligned}$$
(5.2)

Proof

By definition,

$$\begin{aligned} \begin{aligned} H_{ij}(x,y,t)&=-\int _{{\mathbb {R}}^n}\partial _{y_j}C_i(x,y+w,t)\partial _nE(w)\,dw\\&=-\int _{{\mathbb {R}}^n}\partial _{y_j}\left( \int _0^{x_n}\!\int _{\Sigma }\partial _n\Gamma (x-(y+w)^*-z,t)\partial _iE(z)\,dz'dz_n \right) \\ {}&\quad \partial _nE(w)\,dw. \end{aligned} \end{aligned}$$

Integrating by parts in \(w_n\) and applying Fubini’s theorem give, for \(j=1,\ldots ,n\),

$$\begin{aligned} \begin{aligned} H_{ij}(x,y,t) =&\int _0^{x_n}\int _{\Sigma }\partial _{y_j}\int _{{\mathbb {R}}^n}\partial _n^2\Gamma ((x^*-y-z^*)-w,t)E(w)\,dw\,\partial _iE(z)\,dz'dz_n\\ =&-\int _0^{x_n}\int _{\Sigma }\partial _j\Gamma _{nn}(x^*-y-z^*,t)\partial _iE(z)\,dz'dz_n=-D_{ijn}(x,y,t). \end{aligned} \end{aligned}$$

This proves (5.1) when \(j<n\). For \(j=n\), we use the fact that \(-\Delta E=\delta \) to obtain

$$\begin{aligned} \begin{aligned} H_{in}(x,y,t)=&-\int _{{\mathbb {R}}^n}\partial _{y_n}C_i(x,y+w,t)\partial _nE(w)dw\\ =&- C_i(x,y,t)+\sum _{\beta =1}^{n-1}\int _{{\mathbb {R}}^n}\partial _{y_\beta }C_i(x,y+w,t)\partial _\beta E(w)\,dw. \end{aligned} \end{aligned}$$

Using the same argument as above, we get

$$\begin{aligned} \begin{aligned} H_{in}(x,y,t)= - C_i(x,y,t)+\sum _{\beta =1}^{n-1}D_{i\beta \beta }(x,y,t). \end{aligned} \end{aligned}$$

This proves (5.1) when \(j=n\). \(\square \)

The following lemma enables us to change \(x_n\)-derivatives to \(x'\)-derivatives.

Lemma 5.2

Let \(i, j , m=1,\ldots ,n\). For \(i<n\),

$$\begin{aligned} { \partial _{x_n}D_{ijm}(x,y,t)=\partial _{x_i}D_{njm}(x,y,t) + \int _{{\mathbb {R}}^n}\partial _i\partial _j\partial _mB(x^*-y-w,t)\partial _nE(w)\,dw, }\nonumber \\ \end{aligned}$$
(5.3)

and for \(i=n\),

$$\begin{aligned} { \partial _{x_n}D_{njm}(x,y,t)= -\sum _{\beta =1}^{n-1}\partial _{x_\beta }D_{\beta jm}(x,y,t)-\frac{1}{2}\,\partial _n\Gamma _{jm}(x^*-y,t). } \end{aligned}$$
(5.4)

Proof

After changing variables, \(D_{ijm}\) becomes

$$\begin{aligned} D_{ijm}(x,y,t)=\int _{-x_n-y_n}^{-y_n}\int _\Sigma \partial _j\Gamma _{mn}(z,t)\, \partial _iE(x-y^*-z^*)\,dz'dz_n. \end{aligned}$$

For \(i<n\) we have

$$\begin{aligned} D_{ijm}(x,y,t)=\partial _{x_i}\int _{-x_n-y_n}^{-y_n}\int _\Sigma \partial _j\Gamma _{mn}(z,t)\, E(x-y^*-z^*)\,dz'dz_n. \end{aligned}$$

Hence

$$\begin{aligned} \begin{aligned} \partial _{x_n}D_{ijm}(x,y,t)=&~\partial _{x_i}\int _\Sigma \partial _j\Gamma _{mn}(z',-x_n-y_n,t) E(x'-y'-z',0)\,dz'\\&+\partial _{x_i}\int _{-x_n-y_n}^{-y_n}\int _\Sigma \partial _j\Gamma _{mn}(z,t) \partial _nE(x-y^*-z^*)\,dz'dz_n\\ =&~I + \partial _{x_i}D_{njm}(x,y,t), \end{aligned} \end{aligned}$$

where

$$\begin{aligned} I= & {} \partial _{x_i}\int _\Sigma \left( \int _{{\mathbb {R}}^n}\partial _j\partial _m\Gamma (z'-w',-x_n-y_n-w_n,t)\partial _nE(w)\,dw \right) \\ {}{} & {} \quad E(x'-y'-z',0)\,dz'. \end{aligned}$$

After changing variables \(\xi '=x'-y'-z'\) and applying Fubini theorem,

$$\begin{aligned} \begin{aligned} I&=\int _{{\mathbb {R}}^n} \left( \partial _{x_i}\int _\Sigma \partial _j\partial _m\Gamma (x'-y'-\xi '-w',-x_n\right. \\ {}&\quad \left. -y_n-w_n,t) E(\xi ',0)\,d\xi ' \right) \partial _nE(w)\,dw\\&=\int _{{\mathbb {R}}^n}\partial _{x_i}\partial _{y_j}\partial _{y_m}B(x^*-y-w,t)\partial _nE(w)\,dw\\&=\int _{{\mathbb {R}}^n}\partial _i\partial _j\partial _mB(x^*-y-w,t)\partial _nE(w)\,dw \end{aligned} \end{aligned}$$

using \(i<n\) again. This proves (5.3).

For (5.4), we first move normal derivatives in the definition (5.2) of \(D_{njm}\) to tangential derivatives. Observe that, using \(\partial _j \Gamma _{mn} = \partial _n \Gamma _{jm}\),

$$\begin{aligned} \begin{aligned} D_{njm}(x,y,t) =&~\lim _{\varepsilon \rightarrow 0_+}\left[ \int _{\varepsilon }^{x_n}\int _\Sigma \partial _{z_n}\Gamma _{jm}(x^*-y-z^*,t)\partial _nE(z)\,dz'dz_n\right] \\ =&~\lim _{\varepsilon \rightarrow 0_+} \left[ \int _\Sigma \Gamma _{jm}(x'-y'-z',-y_n,t)\partial _nE(z',x_n)\,dz'\right. \\&-\int _\Sigma \Gamma _{jm}(x'-y'-z',-x_n-y_n+\varepsilon ,t)\partial _nE(z',\varepsilon )\,dz'\\&\left. -\int _{\varepsilon }^{x_n}\int _\Sigma \Gamma _{jm}(x^*-y-z^*,t)\partial _n^2E(z)\,dz'dz_n\right] , \end{aligned} \end{aligned}$$

by integration by parts in the \(z_n\)-variable. Using the fact that \(-\Delta E=\delta \), we obtain

$$\begin{aligned} \begin{aligned} D_{njm}(x,y,t)=&~\partial _{y_j}\partial _{y_m}\int _{{\mathbb {R}}^n}e^{\frac{-(y_n+w_n)^2}{4t}}\,\partial _nA(x'-y'-w',x_n,t) E(w)\,dw\\&-\partial _{y_j}\partial _{y_m}\int _{{\mathbb {R}}^n}e^{-\frac{(x_n+y_n+w_n)^2}{4t}}\,\partial _nA(x'-y'-w',0_+,t) E(w)\,dw +J, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} J&=\sum _{\beta =1}^{n-1}\lim _{\varepsilon \rightarrow 0_+}\int _{\varepsilon }^{x_n}\int _\Sigma \Gamma _{mj}(x^*-y-z^*,t)\partial _\beta ^2E(z)\,dz'dz_n\\&= \sum _{\beta =1}^{n-1}\int _0^{x_n}\int _\Sigma \partial _\beta \Gamma _{mj}(x^*-y-z^*,t)\partial _\beta E(z)\,dz'dz_n, \end{aligned} \end{aligned}$$

by integration by parts in the \(z'\)-variable. Note that

$$\begin{aligned} \partial _n A(x',0_+,t) = \lim _{\varepsilon \rightarrow 0_+} \int _\Sigma \Gamma (x'-z',0,t)\partial _n E(z',\varepsilon )dz' = - \frac{1}{2}\, \Gamma (x',0,t) \end{aligned}$$

since \(-2\partial _nE(x)\) is the Poisson kernel for the Laplace equation in \({\mathbb {R}}^n_+\). Using \(e^{-\frac{(x_n+y_n+w_n)^2}{4t}}\Gamma (x'-y'-w',0,t)=\Gamma (x^*-y-w,t)\), we get

$$\begin{aligned} D_{njm}(x,y,t)=&~\partial _{y_j}\partial _{y_m}\int _{{\mathbb {R}}^n}e^{\frac{-(y_n+w_n)^2}{4t}}\,\partial _nA(x'-y'-w',x_n,t) E(w)\,dw\nonumber \\&+\frac{1}{2}\,\Gamma _{mj}(x^*-y,t) \nonumber \\&+\sum _{\beta =1}^{n-1}\int _0^{x_n}\int _\Sigma \partial _\beta \Gamma _{mj}(x^*-y-z^*,t)\partial _\beta E(z)\,dz'dz_n. \end{aligned}$$
(5.5)

In this form we have moved normal derivatives in the definition (5.2) of \(D_{njm}\) to tangential derivatives. Consequently,

$$\begin{aligned}&\partial _{x_n}D_{njm}(x,y,t)\\&\quad =~\partial _{y_j}\partial _{y_m}\int _{{\mathbb {R}}^n}e^{\frac{-(y_n+w_n)^2}{4t}}\,\partial _n^2A(x'-y'-w',x_n,t) E(w)\,dw - \frac{1}{2}\,\partial _n\Gamma _{mj}(x^*-y,t) \\&\qquad +\sum _{\beta =1}^{n-1}\int _\Sigma \partial _\beta \Gamma _{mj}(x'-y'-z',-y_n,t)\partial _\beta E(z',x_n)\,dz'\\&\qquad -\sum _{\beta =1}^{n-1}\int _0^{x_n}\int _\Sigma \partial _n\partial _\beta \Gamma _{mj}(x^*-y-z^*,t)\partial _\beta E(z)\,dz'dz_n\\&\quad =~\partial _{y_j}\partial _{y_m}\int _{{\mathbb {R}}^n}e^{\frac{-(y_n+w_n)^2}{4t}}\,\partial _n^2A(x'-y'-w',x_n,t) E(w)\,dw - \frac{1}{2}\,\partial _n\Gamma _{mj}(x^*-y,t)\\&\qquad +\sum _{\beta =1}^{n-1}\partial _{y_j}\partial _{y_m}\int _{{\mathbb {R}}^n}e^{-\frac{(y_n+w_n)^2}{4t}}\,\partial _\beta ^2A(x'-y'-w',x_n,t) E(w)\,dw \\&\qquad -\sum _{\beta =1}^{n-1}\partial _{x_\beta } D_{\beta jm}(x,y,t). \end{aligned}$$

The first term cancels the third term since \(\Delta _x A(x,t)=0\) for \(x_n>0\). This proves (5.4). \(\square \)

Remark 5.1

Note that Lemma 5.1 and (5.4) imply

$$\begin{aligned} \textstyle \sum _{i=1}^n \partial _{x_i}{\widehat{H}}_{ij}(x,y,t) = \frac{1}{2}\, \epsilon _j\partial _n\Gamma _{nj}(x-y^*,t) - \frac{1}{2}\,\delta _{nj} \partial _n\Gamma (x-y^*,t), \end{aligned}$$

which is equivalent to \(\sum _{i=1}^n \partial _{x_i}G_{ij}(x,y,t)=0\) using Lemma 4.3. Since we will use (5.4) to prove (1.12), the property \(\sum _{i=1}^n \partial _{x_i}G_{ij}(x,y,t)=0\) cannot be used to improve (1.12). However, we will use it to prove (1.18).

The following lemma will be used in the \(x_n\)-derivative estimate of Proposition 5.5.

Lemma 5.3

For B(xt) defined by (2.3), for \(l,k \in {\mathbb {N}}_0\),

$$\begin{aligned} \begin{aligned} \left| \int _{{\mathbb {R}}^n}\partial _{x'}^{l+1}\partial _{x_n}^{k}B(x-w,1)\partial _n E(w)\,dw \right| {\lesssim }\frac{1+\delta _{n2} \log {\langle \delta _{k0}|x'|+ |x_n| \rangle }}{{\langle x \rangle }^{l+n-1} {\langle x_n \rangle }^k}. \end{aligned} \end{aligned}$$
(5.6)

Note in (5.6) \(\delta _{k0}|x'|=0\) for \(k>0\).

Recall that \(\partial _{x'}^{l}\partial _{x_n}^{k}B\) satisfies (2.5)–(2.6) if \(l+n\ge 3\), which is invalid if \(l=0\) and \(n=2\).

Proof

We prove (5.6) by induction on k. First consider \(k=0\) and full \(\partial E\) instead of just \(\partial _n E\). Change variables and denote \( J=\int _{{\mathbb {R}}^n}\partial _{w'}^{l+1} B(w,1)\partial E(x-w)\,dw\). By (2.5),

$$\begin{aligned} |J| {\lesssim }\int _{{\mathbb {R}}^n} \frac{dw}{{\langle w \rangle }^{l+n-1}{\langle w_n \rangle } |x-w|^{n-1}}, \end{aligned}$$

which is bounded for all x. We now assume \(|x|>10\) to show its decay. Decompose \({\mathbb {R}}^n\) into 4 regions: \(I =\{w:|w'|>2|x|\}\), \(II =\{w:|w'|<2|x| ,\, |w_n|>|x|/2\}\), \(III =\{w:|x|/2<|w'|<2|x|,\, |w_n|<|x|/2\}\), and \(IV =\{w:|w'|<|x|/2,\, |w_n|<|x|/2\}\). Decompose

$$\begin{aligned} J= \left( \int _{I } + \int _{II } + \int _{III } + \int _{IV } \right) (\partial _{w'}^{l+1}B)(w,1)\partial E(x-w) \,dw = J_1 + J_2+J_3+J_4. \end{aligned}$$

Using (2.6),

$$\begin{aligned} |J_1| {\lesssim }\int _{I }\frac{e^{-w_n^2/10}}{|w|^{l+n-1}\, |x-w|^{n-1}}\,dw {\lesssim }\int _{|w'|>2|x|}\frac{e^{-w_n^2/10}}{|w'|^{l+n-1}\, |w'|^{n-1}}\,dw = \frac{C}{|x|^{l+n-1}}. \end{aligned}$$

Also by (2.6), and with \(z'=x'-w'\),

$$\begin{aligned} \begin{aligned} |J_2|&{\lesssim }\int _{II }\frac{e^{-w_n^2/10}}{|x|^{l+n-1}\, |x-w|^{n-1}}\,dw \\&{\lesssim }\frac{1}{|x|^{l+n-1}} \int _{|w_n|\ge |x|/2} \int _{|z'|<3|x|} \frac{e^{-w_n^2/10}}{(|x_n-w_n|+|z'|)^{n-1}}\,dz'dw_n \\&= \frac{1}{|x|^{l+n-1}} \int _{|w_n|\ge |x|/2} \int _0^{3|x|} \frac{r^{n-2}}{(|x_n-w_n|+r)^{n-1}}\,dr\, e^{-w_n^2/10}\,dw_n. \end{aligned} \end{aligned}$$

By Lemma 2.1, the inner integral is bounded by \(1+\log _+\frac{3|x|}{|x_n-w_n|}\).

$$\begin{aligned} |J_2| {\lesssim }\frac{1}{|x|^{l+n-1}} \int _{|w_n|\ge |x|/2} \left( 1+\log _+\frac{3|x|}{|x_n-w_n|} \right) \, e^{-w_n^2/10}\,dw_n {\lesssim }\frac{1}{|x|^{l+n-1}}. \end{aligned}$$

For \(J_3\), if we have \(\partial _nE(x-w)\sim \frac{x_n-w_n}{|x-w|^n}\) in the integrand, using (2.6) and Lemma 2.1,

$$\begin{aligned} \begin{aligned} |J_{3}|\lesssim&~\int _{III }\frac{e^{-w_n^2/10}}{|x|^{l+n-1}}\frac{|x_n-w_n|}{(|x'-w'|+|x_n-w_n|)^n}\,dw\\ \lesssim&~\frac{1}{|x|^{l+n-1}}\int _{{\mathbb {R}}}|x_n-w_n|e^{-w_n^2/10}\int _0^{3|x|}\frac{r^{n-2}}{(|x_n-w_n|+r)^n}\,drdw_n\\ \lesssim&~\frac{1}{|x|^{l+n-1}}\int _{{\mathbb {R}}}|x_n-w_n|e^{-w_n^2/10}\,\frac{\min (|x_n-w_n|,3|x|)^{n-1}}{|x_n-w_n|^n}\,dw_n\\ \lesssim&\frac{1}{|x|^{l+n-1}}. \end{aligned} \end{aligned}$$

If we have \(\partial _\beta E(x-w)\) with \(\beta <n\) in \(J_3\), and if \(n\ge 3\), we integrate \(J_3\) by parts in \(w_\beta \),

$$\begin{aligned} \begin{aligned} J_{3}&=\int _{III }\partial _{w'}^{l+2}B(w,1)E(x-w)\,dw + \int _\Gamma \partial _{w'}^{l+1}B(w,1)E(x-w)\,dS_w, \end{aligned} \end{aligned}$$

where \(\Gamma =\left\{ (w',w_n)\mid {|w'|=|x|/2 \text { or } |w'|=2|x|,\, |w_n|<|x|/2} \right\} \) is the lateral boundary of III. Now using (2.6) and that \(|x-w|> c|x|\) on \(\Gamma \),

$$\begin{aligned} \begin{aligned} |J_3|&\le \int _{III }\frac{e^{-w_n^2/10}}{|x|^{l+n}}\frac{1}{|x-w|^{n-2}}\,dw + \int _{\Gamma }\frac{e^{-w_n^2/10}}{|x|^{l+n-1}}\frac{1}{|x-w|^{n-2}}dS_w \\&{\lesssim }\int _{|w_n|<|x|/2} \frac{e^{-w_n^2/10}}{|x|^{l+n}} \left( \int _{|z'|<3|x|} \frac{dz'}{|z'|^{n-2}} \right) dw_n + \int _{\Gamma }\frac{e^{-w_n^2/10}}{|x|^{l+n-1}}\frac{1}{|x|^{n-2}}\,dS_w\\&\lesssim \frac{1}{|x|^{l+n-1}}. \end{aligned} \end{aligned}$$

If \(\beta <n=2\), integration by parts does not help. Directly estimating using Lemma 2.1 gives

$$\begin{aligned} \begin{aligned} |J_{3}|&{\lesssim }\frac{1}{|x|^{l+n-1}}\int _{|w_n|<|x|/2} e^{-w_n^2/10} \int _0^{3|x|}\frac{1}{(|x_n-w_n|+r)}\,drdw_n \\&{\lesssim }\frac{1}{|x|^{l+n-1}}\int _{|w_n|<|x|/2} e^{-w_n^2/10} \left( 1+ \log \frac{3|x|}{|x_n-w_n|} \right) \,dw_n. \end{aligned} \end{aligned}$$

If \(|x_n| \ge \frac{3}{4} |x|\) so that \(|x_n-w_n|\ge \frac{1}{4}|x|\), the integral is of order one. If \(|x_n|<\frac{3}{4}|x|\) so that \(|x'| \ge c |x|\), the integral is bounded by \(\log {\langle x' \rangle }\). Thus

$$\begin{aligned} |J_3| {\lesssim }\frac{1}{|x|^{l+n-1}} \left( 1 + \delta _{n2} \log {\langle x' \rangle } \right) . \end{aligned}$$

Finally we consider \(J_4\) in region \(IV \). Denote \(\Gamma =\{(w',w_n): |w'|=|x|/2 \ge |w_n|\}\) the lateral boundary of \(IV \). Integrating by parts repeatedly,

$$\begin{aligned} J_4= \int _{IV }B(w,1)\partial _{w'}^{l+1}\partial E(x-w)\,dw + \sum _{p=0}^{l} \int _{\Gamma }\partial _{w'}^{l-p}B(w,1)\,\partial _{w'}^p\partial E(x-w)\cdot \chi _p(w)dS_w \end{aligned}$$

where \(\chi _p\) are uniformly bounded functions on \(\Gamma \) depending on multi-index p. By (2.6), that \(|x-w|> c|x|\) on IV and \(\Gamma \), and \(|w|> c|x|\) on \(\Gamma \), and Lemma 2.1,

$$\begin{aligned} \begin{aligned} |J_4|&\le \int _{IV }\frac{e^{-w_n^2/10}}{{\langle w \rangle }^{n-2}}\frac{1}{|x|^{l+n}}\,dw + \sum _{p=0}^l \int _{\Gamma }\frac{e^{-w_n^2/10}}{|x|^{l-p+n-2}}\frac{1}{|x|^{p+n-1}}dS_w \\&{\lesssim }\int _{|w_n|<|x|/2} \frac{e^{-w_n^2/10}}{|x|^{l+n}} \left( \int _{|z'|<3|x|} \frac{dz'}{|z'|^{n-2}} \right) dw_n + \int _{\Gamma }\frac{e^{-w_n^2/10}}{|x|^{l+2n-3}}\,dS_w \lesssim \frac{1}{|x|^{l+n-1}}. \end{aligned} \end{aligned}$$

If \(n=2\), we do one fewer step of integration by parts,

$$\begin{aligned} J_4= & {} \int _{IV }\partial _{w'}B(w,1)\partial _{w'}^{l}\partial E(x-w)\,dw \\{} & {} + \sum _{p=0}^{l-1} \int _{\Gamma }\partial _{w'}^{l-p}B(w,1)\,\partial _{w'}^p\partial E(x-w)\cdot \chi _p(w)dS_w \end{aligned}$$

Thus for \(n=2\), by (2.6) and Lemma 2.1,

$$\begin{aligned} \begin{aligned} |J_4|&\le \int _{IV }\frac{e^{-w_n^2/10}}{|w|}\frac{1}{|x|^{l+n-1}}\,dw + \sum _{p=0}^{l-1} \int _{\Gamma }\frac{e^{-w_n^2/10}}{|x|^{l-p+n-2}}\frac{1}{|x|^{p+n-1}}dS_w \\&{\lesssim }\int _{|w_n|<|x|/2} \frac{e^{-w_n^2/10}}{|x|^{l+n-1}} \left( \int _0^{|x|/2} \frac{dr}{|w_n|+r} \right) dw_n + \frac{1}{|x|^{l+n-1}} \\&{\lesssim }\frac{1}{|x|^{l+n-1}} \left( 1+ \int _{|w_n|<|x|/2} e^{-w_n^2/10} \left( 1+ \log \frac{|x|}{|w_n|} \right) dw_n \right) \lesssim \frac{ \log {\langle x \rangle }}{|x|^{l+n-1}}. \end{aligned} \end{aligned}$$

Unlike \(\log {\langle x' \rangle }\) for \(J_3\), we need \( \log {\langle x \rangle }\) for \(J_4\).

Summing the estimates, we conclude for \(k=0\), for all \(x\in {\mathbb {R}}^n\) and \(n \ge 2\),

$$\begin{aligned} \begin{aligned} \left| \int _{{\mathbb {R}}^n}\partial _{x'}^{l+1}B(x-w,1)\partial E(w)\,dw \right| \lesssim \frac{1 + \delta _{n2} \log {\langle x \rangle }}{{\langle x \rangle }^{l+n-1}}. \end{aligned} \end{aligned}$$
(5.7)

Suppose now \(k \ge 1\) and (5.6) has been proved for all \(k' \le k-1\). Thanks to \(-\Delta E=\delta \), we can reduce the order of the \(x_n\)-derivative in the integral as

$$\begin{aligned} \begin{aligned} J&=\int _{{\mathbb {R}}^n}(\partial _{x'}^{l+1} \partial _{x_n}^{k}B)(x-w,1)\partial _nE(w)\,dw\\&=(\partial _{x'}^{l+1} \partial _{x_n}^{k-1}B)(x,1)-\sum _{\beta _1=1}^{n-1}\int _{{\mathbb {R}}^n}(\partial _{x'}^{l+1} \partial _{x_n}^{k-1}\partial _{w_{\beta _1}}B)(x-w,1)\partial _{\beta _1}E(w)\,dw. \end{aligned} \end{aligned}$$

If \(k=1\), (5.6) follows from (2.5) and (5.7),

$$\begin{aligned} \begin{aligned} |J|&{\lesssim }|\partial _{x'}^{l+1} B(x,1)| + \frac{1+ \delta _{n2} \log {\langle x \rangle }}{{\langle x \rangle }^{l+n}} \\&{\lesssim }\frac{e^{-x_n^2/10}}{{\langle x \rangle }^{l+n-1}}+\frac{1+ \delta _{n2} \log (|x|+e)}{(|x|+e)^{l+n}} {\lesssim }\frac{1+ \delta _{n2} \log (|x_n|+e)}{{\langle x \rangle }^{l+n-1}\,(|x_n|+e)}. \end{aligned} \end{aligned}$$

In the last inequality we have used that for \(m \ge 1\)

$$\begin{aligned} \begin{aligned} f(t) =t^{-m} \log t \quad \text { is decreasing in }t>e. \end{aligned} \end{aligned}$$
(5.8)
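For completeness, a one-line check of (5.8):

$$\begin{aligned} f'(t)=-m\,t^{-m-1}\log t+t^{-m-1} =t^{-m-1}\,(1-m\log t)<0 \quad \text {for } t>e^{1/m}, \end{aligned}$$

and \(e^{1/m}\le e\) when \(m\ge 1\).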

If \(k\ge 2\), by integrating by parts, the second term becomes

$$\begin{aligned} \begin{aligned}&\int _{{\mathbb {R}}^n}(\partial _{x'}^{l+1} \partial _{x_n}^{k-1}\partial _{w_{\beta _1}}B)(x-w,1)\partial _{\beta _1}E(w)\,dw\\&\quad =\int _{{\mathbb {R}}^n}(\partial _{x'}^{l+1} \partial _{x_n}^{k-2}\partial _{w_{\beta _1}}^2B)(x-w,1)\partial _nE(w)\,dw. \end{aligned} \end{aligned}$$

By (5.6) for \(k'=k-2\), and (5.8) with \(m=2\),

$$\begin{aligned} \begin{aligned} |J|&{\lesssim }|\partial _{x'}^{l+1} \partial _{x_n}^{k-1} B(x,1)| + \frac{1+ \delta _{n2} \log {\langle x \rangle }}{{\langle x \rangle }^{l+n+1}{\langle x_n \rangle }^{k-2}} \\&{\lesssim }\frac{e^{-x_n^2/10}}{{\langle x \rangle }^{l+n-1}} + \frac{1+ \delta _{n2} \log (|x|+e)}{{\langle x \rangle }^{l+n+1}(|x_n|+e)^{k-2}} {\lesssim }\frac{1+ \delta _{n2} \log {\langle x_n \rangle }}{{\langle x \rangle }^{l+n-1}{\langle x_n \rangle }^k}.\qquad \qquad \qquad \qquad \qquad \qquad \,\,\quad \square \end{aligned} \end{aligned}$$

Lemma 5.4

For B(xt) defined by (2.3), for \(l,k \in {\mathbb {N}}_0\), for \(\beta <n\),

$$\begin{aligned} \begin{aligned} \left| \int _{{\mathbb {R}}^n}\partial _{x'}^{l+1}\partial _{x_n}^{k}B(x-w,1)\partial _\beta E(w)\,dw \right| {\lesssim }\frac{1+\delta _{n2} \log {\langle \delta _{k\le 1}|x'|+|x_n| \rangle }}{{\langle x \rangle }^{l+n-\delta _{k0}} {\langle x_n \rangle }^{(k-1)_+}}. \end{aligned} \end{aligned}$$
(5.9)

Note in (5.9) \(\delta _{k\le 1}|x'|=0\) for \(k>1\).

Proof

The case \(k=0\) is proved in the proof of Lemma 5.3. When \(k \ge 1\), we integrate by parts

$$\begin{aligned} \begin{aligned} J=\int _{{\mathbb {R}}^n}\partial _{x'}^{l+1}\partial _{x_n}^{k}B(x-w,1)\partial _\beta E(w)\,dw = \int _{{\mathbb {R}}^n}\partial _{x'}^{l+1}\partial _{x_n}^{k-1}\partial _\beta B(x-w,1)\partial _n E(w)\,dw. \end{aligned} \end{aligned}$$

By Lemma 5.3,

$$\begin{aligned} |J| {\lesssim }\frac{1+\delta _{n2} \log {\langle \delta _{k\le 1}|x'|+|x_n| \rangle }}{{\langle x \rangle }^{l+n} {\langle x_n \rangle }^{k-1}}.\square \end{aligned}$$

The following proposition gives our estimates of the derivatives of \(D_{ijm}\).

Proposition 5.5

For \(x,y\in {\mathbb {R}}^n_+\), \(l,k,q \in {\mathbb {N}}_0\), \(i,m=1,\ldots ,n\), and \(j<n\), we have

$$\begin{aligned} { |\partial _{x',y'}^l\partial _{x_n}^k\partial _{y_n}^qD_{ijm}(x,y,1)|{\lesssim }\frac{ 1+ \mu \delta _{n2} \log {\langle \nu |x'-y'|+x_n+y_n \rangle }}{{\langle x-y^* \rangle }^{l+k+n-\sigma } {\langle x_n+y_n \rangle }^\sigma {\langle y_n \rangle }^q}, } \end{aligned}$$
(5.10)

where \(\sigma = (k+ \delta _{m n}- \delta _{in} -1)_+\), \(\mu =1-\delta _{k0}-\delta _{k1}\delta _{in}\), and \(\nu = \delta _{q0} \delta _{m<n} \delta _{k(1+\delta _{in})}\).

Remark 5.2

By a similar proof, we can show

$$\begin{aligned} { |\partial _{x',y'}^l\partial _{x_n}^k\partial _{y_n}^q C_i(x,y,1)|\lesssim \frac{e^{-\frac{1}{30}{ y_n^2}}}{{\langle x-y^* \rangle }^{l+n-1}{\langle x_n+y_n \rangle }^k{\langle y_n \rangle }^{q+1}} , } \end{aligned}$$
(5.11)

whose decay in \(x'\) is not as good as (5.10), since \(\partial _n \Gamma \) in the definition of \(C_i\) carries one more normal derivative than \(\partial _j \Gamma _{mn}\) in the definition of \(D_{ijm}\). This is why the formula (4.9) for \(W_{ij}\) is preferred over (4.3). It is worth noting that the main term of \(G^*_{ij}\) in (1.9) is closely related to \(\partial _{y_j}C_i\) (compare (1.16)). Hence their estimates (1.10) and (5.11) are similar.

Proof

\(\bullet \,\varvec{\partial _{x',y'}, \partial _{y_n}}\)-estimate: Recall the definition (5.2) of \(D_{ijm}\). Changing the variables \(w=x-y^*-z\) after taking derivatives, and using \(j<n\),

$$\begin{aligned} \partial _{x',y'}^l\partial _{y_n}^q D_{ijm} (x,y,1) = \int _\Pi \partial _{w'}^{l+1}\partial _n^{q}\Gamma _{mn}(w,1)\,\partial _iE(x-y^*-w)\,dw \end{aligned}$$

up to a sign, where \(\Pi =\{w\in {\mathbb {R}}^n:y_n\le w_n\le x_n+y_n\}\). It is bounded for finite \(|x-y^*|\), and to prove the estimate, we may assume \(R=|x-y^*|>100\). Decompose \(\Pi =\Pi _1+\Pi _2\) where

$$\begin{aligned}\Pi _1=\Pi \cap \left\{ |w|<\tfrac{3}{4}R\right\} ,\ \ \ \Pi _2=\Pi \cap \left\{ |w|>\tfrac{3}{4}R\right\} . \end{aligned}$$

Integrating by parts in \(\Pi _1\) with respect to \(w'\) iteratively, it equals

$$\begin{aligned} \begin{aligned}&=\int _{\Pi _1}\left( \partial _{w_n}^{q}\Gamma _{mn}(w,1) \right) \,\partial _{w'}^{l+1}\partial _iE(x-y^*-w)\,dw'dw_n\\&\quad +\sum _{p=0}^{l}\int _{\Pi \cap \{|w|=\frac{3}{4}R\}}\left( \partial _{w'}^{l-p}\partial _{w_n}^{q}\Gamma _{mn}(w,1) \right) \,\partial _{w'}^p\partial _iE\cdot \chi _p(x-y^*-w)\,dS_w\\&\quad +\int _{\Pi _2}\left( \partial _{w'}^{l+1}\partial _{w_n}^q\Gamma _{mn}(w,1) \right) \,\partial _iE(x-y^*-w)\,dw=I_1+I_2+I_3, \end{aligned} \end{aligned}$$

where \(\chi _p\) are bounded functions on the boundary. Estimate (2.10) and Lemma 2.1 imply

$$\begin{aligned} \begin{aligned} |I_1|\lesssim&~\int _{y_n}^{x_n+y_n}\int _{{\mathbb {R}}^{n-1}}\frac{1}{(|w'|+w_n+1)^{q+n}R^{l+n}}\,dw'dw_n \\ \lesssim&~\frac{1}{R^{l+n}}\int _{y_n}^{x_n+y_n}\frac{1}{(w_n+1)^{q+1}}\,dw_n \lesssim \frac{x_n}{R^{l+n}(y_n+1)^{q}(x_n+y_n+1)}. \end{aligned} \end{aligned}$$

For \(I_2\), estimate (2.10) gives

$$\begin{aligned} \begin{aligned} |I_2|\lesssim&~\sum _{p=0}^{l}\int _{|w|=\frac{3}{4}R}\frac{1}{{\langle w \rangle }^{l+q-p+n}}\,\frac{1}{|x-y^*-w|^{n+p-1}}\,dS_w \\ \lesssim&~\sum _{p=0}^{l}\frac{1}{R^{l+q-p+n}R^{n+p-1}}\,R^{n-1} \sim \frac{1}{R^{l+q+n}} \end{aligned} \end{aligned}$$

Using the estimate (2.10) and Lemma 2.2,

$$\begin{aligned} \begin{aligned} |I_3|\lesssim&~\int _{\Pi _2}\frac{1}{{\langle w \rangle }^{l+q+n+1}}\,\frac{1}{|x-y^*-w|^{n-1}}\,dw \\ \lesssim&\frac{1}{R^{l+q+n+1/2}}\int _{y_n}^{x_n+y_n}\int _{{\mathbb {R}}^{n-1}}\frac{1}{(|w'|+w_n+1)^{1/2}(|x'-y'-w'|+(x_n+y_n-w_n))^{n-1}}\,dw'dw_n\\ \lesssim&~\frac{1}{R^{l+q+n+1/2}}\int _{y_n}^{x_n+y_n}\left( R^{-1/2}+R^{-1/2}\log \frac{R}{(x_n+y_n-w_n)} \right) dw_n\\ \sim&~\frac{x_n}{R^{l+q+n+1}}\left( 1+\log \frac{R}{x_n} \right) {\lesssim }\frac{1}{R^{l+q+n}} , \end{aligned} \end{aligned}$$

noting \(|x'-y'|+w_n+1+x_n+y_n-w_n\sim R\). Therefore, we conclude that for \(i,m=1,\ldots ,n\) and \(j<n\),

$$\begin{aligned} \begin{aligned} |\partial _{x',y'}^l\partial _{y_n}^qD_{ijm}(x,y,1)|\lesssim&\frac{1}{{\langle x-y^* \rangle }^{l+n}{\langle y_n \rangle }^{q}} . \end{aligned} \end{aligned}$$
(5.12)

\(\bullet \,\varvec{\partial _{x_n}}\)-estimate: Note that \(j<n\) throughout. Also note that the indices \(j\) and \(m\) in \(D_{ijm}\) are unchanged in (5.3) and (5.4). For \(k \ge 1\) and \(i<n\), by (5.3) and Lemma 5.3,

$$\begin{aligned} \partial _{x',y'}^l \partial _{x_n}^k\partial _{y_n}^q D_{ijm}(x,y,1)&{\lesssim }\partial _{x',y'}^{l+1} \partial _{x_n}^{k-1}\partial _{y_n}^q D_{njm}(x,y,1) \nonumber \\&+ \frac{\textrm{LN}'}{ {\langle x-y^* \rangle }^{l+n+1-\delta _{m n}} {\langle x_n+y_n \rangle }^{k+q-1+\delta _{m n}}}, \end{aligned}$$
(5.13)

where

$$\begin{aligned} \textrm{LN}' = 1+ \delta _{n2} \log {\langle \nu |x'-y'|+x_n+y_n \rangle },\quad \nu =\delta _{0(k+q-1+\delta _{m n})} = \delta _{k1} \delta _{q0} \delta _{m<n}. \end{aligned}$$

For \(k \ge 1\) and \(i=n\), by (5.4) and (2.10),

$$\begin{aligned} { \partial _{x',y'}^l \partial _{x_n}^k\partial _{y_n}^q D_{njm}(x,y,1){\lesssim }\partial _{x',y'}^{l+1} \partial _{x_n}^{k-1}\partial _{y_n}^q D_{\beta jm}(x,y,1) + \frac{1}{ {\langle x-y^* \rangle }^{l+n+k+q}}, } \end{aligned}$$
(5.14)

where \(\beta <n\). The proof of (5.10) is then completed by induction on \(k\) using (5.13), (5.14) and the base case (5.12). \(\square \)

Proposition 5.6

For \(x,y\in {\mathbb {R}}^n_+\), \(t>0\), \(l,k,q,m \in {\mathbb {N}}_0\), \(i,j=1,\ldots ,n\), we have

$$\begin{aligned} \begin{aligned}&|\partial _{x',y'}^l\partial _{x_n}^k\partial _{y_n}^q\partial _t^m \widehat{H}_{ij}(x,y,t)|\\&\quad {\lesssim }\frac{1+\mu \,\delta _{n2}\left[ \log (\nu |x'-y'|+x_n+y_n+\sqrt{t})-\log (\sqrt{t}) \right] }{t^{m}(|x^*-y|^2+t)^{\frac{l+k+n-\sigma }{2}}((x_n+y_n)^2+t)^{\frac{\sigma }{2}}(y_n^2+t)^{\frac{q}{2}}}, \end{aligned} \end{aligned}$$
(5.15)

where \(\sigma = (k- \delta _{in}-\delta _{jn} )_+\), \(\mu =1-(\delta _{k0}+\delta _{k1}\delta _{in})\delta _{m0}\), and \(\nu = \delta _{q0} \delta _{jn} \delta _{k(1+\delta _{in})} \delta _{m0}+\delta _{m>0}\).
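In particular, if only tangential derivatives in \(x\) are taken (\(k=m=0\)), then \(\sigma =\mu =0\) and no logarithm appears; (5.15) specializes to

$$\begin{aligned} |\partial _{x',y'}^l\partial _{y_n}^q \widehat{H}_{ij}(x,y,t)| {\lesssim }\frac{1}{(|x^*-y|^2+t)^{\frac{l+n}{2}}(y_n^2+t)^{\frac{q}{2}}}. \end{aligned}$$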

Proof

From (5.1) and (5.10),

$$\begin{aligned} \begin{aligned} |\partial _{x',y'}^l\partial _{x_n}^k\partial _{y_n}^q {\widehat{H}}_{ij}(x,y,1)|{\lesssim }\frac{1+\mu \,\delta _{n2}\log {\langle \nu |x'-y'|+x_n+y_n \rangle }}{{\langle x^*-y \rangle }^{l+k+n-\sigma }{\langle x_n+y_n \rangle }^{\sigma }{\langle y_n \rangle }^{q}}, \end{aligned} \end{aligned}$$
(5.16)

with corresponding \(\sigma \), \(\mu \) and \(\nu \). Note that \(\widehat{H}_{ij}\) satisfies the scaling property

$$\begin{aligned} {\widehat{H}}_{ij}(x,y,t)=\frac{1}{t^{\frac{n}{2}}}\,\widehat{H}_{ij}\left( \frac{x}{\sqrt{t}},\frac{y}{\sqrt{t}},1\right) . \end{aligned}$$
(5.17)

Therefore, (5.15) can be obtained by differentiating (5.17) in t and using (5.16). Indeed,

$$\begin{aligned}&\partial _{x',y'}^l \partial _{x_n}^k\partial _{y_n}^q\partial _t^m {\widehat{H}}_{ij}(x,y,t) =\left( \frac{\partial }{\partial t} \right) ^m \left( t^{-\frac{l+k+q+n}{2}} \partial _{X',Y'}^l \partial _{X_n}^k\partial _{Y_n}^q {\widehat{H}}_{ij} \left( \frac{x}{\sqrt{t}} , \frac{y}{\sqrt{t}} ,1 \right) \right) \\&\quad \sim t^{-\frac{l+k+q+n}{2}-m} \left( 1+ \textstyle \sum _{p=1}^n\frac{x_p}{\sqrt{t}} \partial _{X_p} + \frac{y_p}{\sqrt{t}} \partial _{Y_p} \right) ^m \partial _{X',Y'}^l \partial _{X_n}^k\partial _{Y_n}^q {\widehat{H}}_{ij} \left( \frac{x}{\sqrt{t}} , \frac{y}{\sqrt{t}} ,1 \right) . \end{aligned}$$
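For instance, in the case \(m=1\), writing \(a=\frac{l+k+q+n}{2}\) and \(F=\partial _{X',Y'}^l \partial _{X_n}^k\partial _{Y_n}^q {\widehat{H}}_{ij} \left( \frac{x}{\sqrt{t}} , \frac{y}{\sqrt{t}} ,1 \right) \), the chain rule gives exactly

$$\begin{aligned} \frac{\partial }{\partial t}\left( t^{-a}F \right) = -t^{-a-1}\left( a\,F+\frac{1}{2}\sum _{p=1}^n\frac{x_p}{\sqrt{t}}\,\partial _{X_p}F+\frac{1}{2}\sum _{p=1}^n\frac{y_p}{\sqrt{t}}\,\partial _{Y_p}F \right) , \end{aligned}$$

which is the \(m=1\) instance of the second line above, with the constants \(-a\) and \(-\frac{1}{2}\) absorbed into \(\sim \).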

Here we use \(\frac{\partial }{\partial t}\) to indicate a total derivative, and \(\partial _{X_p}\) for a partial derivative in that position, e.g., \(\frac{\partial }{\partial x} (f(ax,by)) = a \partial _{X} f(ax,by) \). Note that the factors \(\frac{x_p}{\sqrt{t}} \partial _{X_p} \) and \(\frac{y_p}{\sqrt{t}} \partial _{Y_p}\) do not change the decay estimate whether \(p<n\) or \(p=n\), except that we take \(\mu =\nu =1\) when \(m>0\) for simplicity. This completes the proof of Proposition 5.6. \(\square \)

5.2 Estimates of \(V_{ij}\)

Lemma 5.7

Let \(V_{ij}(x,y,t)\) be defined by (4.6), \(x,y\in {\mathbb {R}}^n_+\), \(t>0\). For \(i<n\),

$$\begin{aligned} V_{ij}(x,y,t)= & {} {} 2\epsilon _j\int _0^{x_n} \int _{{\mathbb {R}}^n_+} \partial _{x_n} G^{ht}((x_n-z_n)e_n, w,t)\,\nonumber \\{}{} & {} {}\partial _j\partial _i E(w+x'-y^* +z_ne_n )\, dw\, dz_n. \end{aligned}$$
(5.18)

For \(i=n\),

$$\begin{aligned} V_{nj}(x,y,t)= & {} {} -2\epsilon _j\sum _{\beta <n} \int _0^{x_n} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,t)\nonumber \\{}{} & {} {}\,\partial _j\partial _\beta ^2 E(w+x'-y^* +z_ne_n )\, dw\, dz_n. \end{aligned}$$
(5.19)

Proof

First of all, by changing variables \(\tilde{w} = (y+w)^*\) in definition (4.7),

$$\begin{aligned} \begin{aligned} \Lambda _j(x,y,t)&=\partial _{y_n}\partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}(x,\tilde{w}^*,t) E(\tilde{w}^*-y)\, d\tilde{w}\\&=-\partial _{y_n}\partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}(x,\tilde{w},t) E(\tilde{w}-y^*)\, d\tilde{w}\\&=-\partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}(x,\tilde{w},t) \partial _{n}E(\tilde{w}-y^*)\, d\tilde{w}. \end{aligned} \end{aligned}$$
(5.20)

Decompose \(V_{ij}(x,y,t) = V_{ij,1}(x,y,t) + V_{ij,2}(x,y,t)\), where \(V_{ij,1}(x,y,t) = -2\delta _{in} \Lambda _j(x,y,t)\) and

$$\begin{aligned} { V_{ij,2}(x,y,t) = -4\int _0^{x_n} \int _{\Sigma } \partial _{x_n}\Lambda _j(x-z,y,t) \partial _iE(z)\, dz'dz_n. } \end{aligned}$$
(5.21)

If \(i<n\), integrating by parts,

$$\begin{aligned} \begin{aligned} V_{ij,2}(x,y,t)&= 4\int _0^{x_n}\!\!\! \int _\Sigma \partial _{z_i} \partial _{x_n}\Lambda _j(x-z,y,t) E(z)\, dz'dz_n\\&= -4 \partial _{x_i}\int _0^{x_n}\!\!\! \int _\Sigma \partial _{x_n}\Lambda _j(x-z,y,t) E(z)\, dz'dz_n. \end{aligned} \end{aligned}$$

From the third line of (5.20), changing variable \(w=\tilde{w}-x'\) and using \(G^{ht}(x,w+p',t)=G^{ht}(x-p',w,t)\) for any \(p'\in \Sigma \),

$$\begin{aligned} { \Lambda _j(x,y,t) =-\partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}(x_ne_n, w,t) \partial _{n}E(w+x'-y^*)\, dw. } \end{aligned}$$
(5.22)

Using (5.22),

$$\begin{aligned} \begin{aligned}&V_{ij,2}(x,y,t)\\&\quad = 4\partial _{x_i}\int _0^{x_n}\!\!\! \int _\Sigma \partial _{x_n}\left( \partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,t) \partial _{n} E(w+x'-z'-y^*)\, dw \right) \\&\qquad E(z)\, dz'dz_n \\&\quad = 2\partial _{x_i}\int _0^{x_n} \partial _{x_n}\partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,t)\\&\quad \left( 2\int _\Sigma \partial _{n}E(w+x'-z'-y^*) E(z)\, dz' \right) \, dw\, dz_n. \end{aligned} \end{aligned}$$

Using the stationary Poisson formula (2.1), \(w+x'-z'-y^*\in {\mathbb {R}}^n_+\) and \(E(z) = E(z' - (z_ne_n))\),

$$\begin{aligned} \begin{aligned} V_{ij,2}(x,y,t)&=- 2\partial _{x_i}\int _0^{x_n} \partial _{x_n}\partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,t)\,\\ {}&\quad E(w+x'-y^* +z_ne_n )\, dw\, dz_n \\&=- 2\int _0^{x_n} \partial _{y_j} \int _{{\mathbb {R}}^n_+} \partial _{x_n} G^{ht}((x_n-z_n)e_n, w,t) \\&\quad \,\partial _iE(w+x'-y^* +z_ne_n )\, dw\, dz_n. \end{aligned} \end{aligned}$$
(5.23)

Since \(V_{ij,1}=0\) when \(i<n\), we get (5.18).

If \(i=n\),

$$\begin{aligned} V_{ij,2}(x,y,t) = -4\int _0^{x_n}\!\!\! \int _\Sigma \partial _{x_n}\Lambda _j(x-z,y,t) \partial _nE(z)\, dz'dz_n. \end{aligned}$$

From the second line of (5.20), changing variable \(w=\tilde{w}-x'\) and using \(G^{ht}(x,w+p',t)=G^{ht}(x-p',w,t)\) for any \(p'\in \Sigma \),

$$\begin{aligned} \begin{aligned} \Lambda _j(x,y,t)&=-\partial _{y_n}\partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}(x,\tilde{w},t) E(\tilde{w}-y^*)\, d\tilde{w} \\&=-\partial _{y_n}\partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}(x_ne_n, w,t) E(w+x'-y^*)\, dw. \end{aligned} \end{aligned}$$
(5.24)

Using (5.24),

$$\begin{aligned} \begin{aligned}&V_{ij,2}(x,y,t)\\ {}&\quad = 4\int _0^{x_n}\!\!\! \int _\Sigma \partial _{x_n}\left( \partial _{y_n} \partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,t) E(w+x'-z'-y^*)\, dw \right) \\ {}&\qquad \partial _nE(z)\, dz'dz_n \\ {}&\quad = 2\int _0^{x_n} \partial _{x_n}\partial _{y_n} \partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,t) \left( 2\int _\Sigma E(w+x'-z'-y^*)\partial _{n} E(z)\, dz' \right) \, dw\, dz_n.\end{aligned} \end{aligned}$$

By (2.1), one has

$$\begin{aligned} \begin{aligned} 2\int _\Sigma E(w+x'-z'-y^*)\partial _{n} E(z)\, dz'&=2\int _\Sigma E(z'-(w+x'-y^*))\partial _{n} E(z_ne_n-z')\, dz' \\ {}&= -E(w+x'-y^*+z_ne_n) \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} V_{ij,2}(x,y,t)&=-2\int _0^{x_n} \partial _{x_n}\partial _{y_n} \partial _{y_j} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,t)\\&\quad E(w+x'-y^*+z_ne_n)\, dw\, dz_n \\&=-2\int _0^{x_n} \partial _{y_j} \int _{{\mathbb {R}}^n_+} \partial _{x_n}G^{ht}((x_n-z_n)e_n, w,t)\partial _{n} \\&\quad E(w+x'-y^*+z_ne_n)\, dw\, dz_n. \end{aligned} \end{aligned}$$
(5.25)

Therefore, for all \(1\le i ,j\le n\), including \(i=n\) or \(j=n\), we have (5.23). Integrating (5.23) by parts for \(i=n\),

$$\begin{aligned}&V_{ij,2}(x,y,t) \\&\quad = 2\epsilon _j\int _0^{x_n} \int _{{\mathbb {R}}^n_+} \partial _{x_n} G^{ht}((x_n-z_n)e_n, w,t)\,\partial _j\partial _n E(w+x'-y^* +z_ne_n )\, dw\, dz_n\\&\quad = - 2\epsilon _j\int _0^{x_n} \int _{{\mathbb {R}}^n_+} \partial _{z_n} G^{ht}((x_n-z_n)e_n, w,t)\,\partial _j\partial _n E(w+x'-y^* +z_ne_n )\, dw\, dz_n\\&\quad = - 2\epsilon _j\int _{{\mathbb {R}}^n_+} G^{ht}(0, w,t)\,\partial _j\partial _n E(w+x-y^*)\, dw\\&\qquad + 2\epsilon _j\int _{{\mathbb {R}}^n_+} G^{ht}(x_ne_n, w,t)\,\partial _j\partial _n E(w+x'-y^*)\, dw\\&\qquad + 2\epsilon _j\int _0^{x_n} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,t)\,\partial _j\partial _n^2 E(w+x'-y^* +z_ne_n )\, dw\, dz_n\\&\quad = 0 - V_{ij,1}(x,y,t) + 2\epsilon _j\int _0^{x_n} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,t) \\&\qquad \quad \sum \limits _{\beta < n}\partial _j\partial _n^2 E(w+x'-y^* +z_ne_n )\, dw\, dz_n. \end{aligned}$$

Then (5.19) follows from the above equation and \(\partial _n^2 E=-\sum _{\beta <n}\partial _\beta ^2 E\), completing the proof of the lemma. \(\square \)

Proposition 5.8

For \(x,y\in {\mathbb {R}}^n_+\), \(t>0\), \(l,k,q,m \in {\mathbb {N}}_0\), \(i,j=1,\ldots ,n\), we have

$$\begin{aligned} | \partial _{x',y'}^l \partial _{x_n}^k \partial _{y_n}^q \partial _t^m V_{ij}(x,y,t) | {\lesssim }\frac{1}{t^m (|x-y^*|^2 + t)^{\frac{l+k-k_i+q+n}{2}} (x_n^2 + t)^{\frac{k_i}{2}}},\quad {}k_i={(k-\delta _{in})}_{+}.\nonumber \\ \end{aligned}$$
(5.26)

Proof

\(\bullet \,\varvec{\partial _{x',y'}, \partial _{y_n}}\)-estimate: We first estimate \(V_{ij}(x,y,1)\).

For \(i=n\) and \(j<n\), changing variable in (5.19), it follows that

$$\begin{aligned} V_{nj}(x,y,1)= & {} -2\sum _{\beta<n} \int _0^{x_n} \int _{w_n<x_n-z_n} G^{ht}((x_n-z_n)e_n, (x_n-z_n)e_n-w,1)\nonumber \\{} & {} \,\partial _j\partial _\beta ^2 E(w-x+y^* )\, dw\, dz_n. \end{aligned}$$
(5.27)

We split the set \(A:=\left\{ w\in {\mathbb {R}}^n: w_n<x_n-z_n \right\} \) into two disjoint sets denoted by

$$\begin{aligned} A_L= & {} \left\{ w: |w-x+y^*|>\frac{|x-y^*|}{2} \right\} \cap A, \\ A_S= & {} \left\{ w: |w-x+y^*|\le \frac{|x-y^*|}{2} \right\} \cap A. \end{aligned}$$

On the region \(A_L\), it is straightforward that

$$\begin{aligned} \begin{aligned}&\left| \int _0^{x_n} \int _{A_L} G^{ht}((x_n-z_n)e_n, (x_n-z_n)e_n-w,1)\,\partial _j\partial _\beta ^2 E(w-x+y^* )\, dw\, dz_n \right| \\&\quad \le \frac{c}{|x-y^*|^{n+1}}\int _0^{x_n} \int _{A_L} \left| G^{ht}((x_n-z_n)e_n, (x_n-z_n)e_n-w,t) \right| \, dw\, dz_n. \end{aligned}\nonumber \\ \end{aligned}$$
(5.28)

On the other hand, on \(A_S\), noting that \(|w|>\frac{|x-y^*|}{2}\) and using integration by parts, we estimate

$$\begin{aligned} \begin{aligned}&\left| \int _0^{x_n} \int _{A_S} \partial _j\partial _\beta ^2G^{ht}((x_n-z_n)e_n, (x_n-z_n)e_n-w,1)\, E(w-x+y^* )\, dw\, dz_n \right| \\&\quad \le ce^{-c|x-y^*|^2} |x-y^*|^2 x_n \le \frac{c}{|x-y^*|^{n}}. \end{aligned} \end{aligned}$$

For \(i=n\) and \(j=n\), similarly to the case \(j \ne n\), we split the integral as follows:

$$\begin{aligned} \begin{aligned} V_{ij}(x,y,1)&=2\sum _{\beta<n} \int _0^{x_n} \int _{w_n<x_n-z_n} G^{ht}((x_n-z_n)e_n, (x_n-z_n)e_n-w,1)\\&\quad \partial _n\partial _\beta ^2 \, E(w-x+y^* ) \,dw\, dz_n\\&=2\sum _{\beta<n} \int _0^{x_n} \int _{A_L} \cdots dw\, dz_n+2\sum _{\beta <n} \int _0^{x_n} \int _{A_S} \cdots dw\, dz_n. \end{aligned} \end{aligned}$$

The first term can be treated in exactly the same way as (5.28), and thus the details are skipped. For the second term, we integrate by parts only in the tangential derivatives, which gives

$$\begin{aligned} \begin{aligned}&\left| 2\sum _{\beta <n} \int _0^{x_n} \int _{A_S} \partial _\beta ^2 G^{ht}((x_n-z_n)e_n, (x_n-z_n)e_n-w,1)\,\partial _n E(w-x+y^* )\, dw\, dz_n \right| \\&\quad \le ce^{-c|x-y^*|^2} |x-y^*| x_n \le \frac{c}{|x-y^*|^{n}}. \end{aligned} \end{aligned}$$

For \(i<n\), noting that \(G^{ht}((x_n-z_n)e_n, w,1)=0\) if \(z_n =x_n\), it follows via integration by parts in (5.18) that

$$\begin{aligned} \begin{aligned} V_{ij}(x,y,1)&=2\epsilon _j\int _{{\mathbb {R}}^n_+} G^{ht}(x_ne_n, w,1)\,\partial _j\partial _i E(w+x'-y^* )\, dw\\&\quad +2\epsilon _j\int _0^{x_n} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,1)\,\partial _n\partial _j\partial _i \\&\quad E(w+x'-y^* +z_ne_n )\, dw\, dz_n. \end{aligned} \end{aligned}$$
(5.29)

If \(j=n\), then the second term can be treated in exactly the same way as the case \(i=n\) and \(j<n\), since \(\partial ^2_{n}\partial _i E(w+x'-y^* +z_ne_n )=-\sum _{k=1}^{n-1}\partial ^2_{k}\partial _i E(w+x'-y^* +z_ne_n )\), and thus it remains to consider the first term. As before, after a change of variables, we rewrite it as

$$\begin{aligned} \begin{aligned}&-2\int _{{\mathbb {R}}^n_+} G^{ht}(x_ne_n, w,1)\,\partial _{n}\partial _i E(w+x'-y^* )\, dw\\&\quad =-2\int _{w_n<x_n} G^{ht}(x_ne_n, x_ne_n -w,1)\,\partial _{n}\partial _i E(w-x+y^* )\, dw\\&\quad =-2\int _{A_L} \cdots \, dw-2\int _{A_S} \cdots \, dw. \end{aligned} \end{aligned}$$

Here we split the integral into the two regions \(A_L\) and \(A_S\), now with \(A:=\left\{ w\in {\mathbb {R}}^n: w_n<x_n \right\} \). The first term is estimated directly:

$$\begin{aligned} \left| -2\int _{A_L} G^{ht}(x_ne_n, x_ne_n -w,1)\,\partial _{n}\partial _i E(w-x+y^* )\, dw \right| \le \frac{c}{|x-y^*|^{n}}. \end{aligned}$$

For the second term, since \(i<n\), by integration by parts, we have

$$\begin{aligned} -2\int _{A_S} \cdots \, dw=2\int _{A_S} \partial _{w_i} G^{ht}(x_ne_n, x_ne_n -w,1)\,\partial _{n} E(w-x+y^* )\, dw. \end{aligned}$$

Therefore, we obtain

$$\begin{aligned} \left| 2\int _{A_S} \partial _{w_i} G^{ht}(x_ne_n, x_ne_n -w,1)\,\partial _{n} E(w-x+y^* )\, dw \right|\le & {} ce^{-c|x-y^*|^2} |x-y^*| \\\le & {} \frac{c}{|x-y^*|^{n}}. \end{aligned}$$

If \(j<n\), then the first term in (5.29) can be estimated similarly to the boundary term in the case \(i<n\), \(j=n\), and thus we omit the details. It remains to estimate the second term in (5.29). Changing variables and separating the domain, we have

$$\begin{aligned} \begin{aligned}&2\int _0^{x_n} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, w,1)\, \partial _{z_n}\partial _{w_j}\partial _i E(w+x'-y^* +z_ne_n )\, dw\, dz_n\\&\quad =2\int _0^{x_n} \int _{{\mathbb {R}}^n_+} G^{ht}((x_n-z_n)e_n, (x_n-z_n)e_n-w,1)\, \partial _{z_n}\partial _{w_j}\partial _i E(w-x+y^*)\, dw\, dz_n\\&\quad =2\int _0^{x_n} \int _{A_L} \cdots \, dw\, dz_n+2\int _0^{x_n} \int _{A_S} \cdots \, dw\, dz_n. \end{aligned} \end{aligned}$$

The first part is controlled by \(|x-y^*|^{-n}\), which can be shown as before, and thus we consider only the second term. Since \(i, j<n\), integrating by parts, we obtain

$$\begin{aligned} \begin{aligned}&\left| 2\int _0^{x_n} \int _{A_S} \partial _{w_j}\partial _{w_i} G^{ht}((x_n-z_n)e_n, (x_n-z_n)e_n-w,1)\, \partial _{z_n} E(w-x+y^*)\, dw\, dz_n \right| \\&\quad \le ce^{-c|x-y^*|^2} |x-y^*| x_n\le \frac{c}{|x-y^*|^{n}}. \end{aligned} \end{aligned}$$

Hence, for \(i,j=1,\ldots ,n\), we have

$$\begin{aligned} \begin{aligned} |V_{ij}(x,y,1)|\lesssim \frac{1}{{\langle x-y^* \rangle }^n} . \end{aligned} \end{aligned}$$
(5.30)

Any higher tangential derivative can be treated in a similar way as above. Furthermore, normal derivatives in \(y_n\) of any order work out as well, with the aid of \(\Delta E=0\). Therefore, we conclude that for \(i,j=1,\ldots ,n\),

$$\begin{aligned} \begin{aligned} |\partial _{x',y'}^l\partial _{y_n}^qV_{ij}(x,y,1)|\lesssim \frac{1}{{\langle x-y^* \rangle }^{l+q+n}} . \end{aligned} \end{aligned}$$
(5.31)

\(\bullet \,\varvec{\partial _{x_n}}\)-estimate:

For \(i<n\), it follows from (5.29) and \(G^{ht}((x_n-z_n)e_n, w,1)=0\) if \(z_n =x_n\) that

$$\begin{aligned} \begin{aligned}&|\partial _{x_n}^{k}V_{ij}(x,y,1)|\\&\quad \le \left| \int _{{\mathbb {R}}^n_+}\partial _{x_n}^k G^{ht}(x_ne_n, w,1)\,\partial _j\partial _i E(w+x'-y^*)\, dw \right| \\&\qquad + \left| \int _{{\mathbb {R}}^n_+} \partial _{x_n}G^{ht}(0, w,1)\,\partial _{x_n}^{k-2}\partial _{z_n}\partial _j\partial _i E(w+x-y^* )\, dw \right| + \cdots \\&\qquad +\left| \int _{{\mathbb {R}}^n_+} \partial _{x_n}^{k-1}G^{ht}(0, w,1)\,\partial _{z_n}\partial _j\partial _i E(w+x-y^*)\, dw \right| \\&\qquad + \left| \int _0^{x_n}\int _{{\mathbb {R}}^n_+} \partial _{x_n}^{k}G^{ht}((x_n-z_n)e_n, w,1)\,\partial _{z_n}\partial _j\partial _i E(w+x'-y^* +z_ne_n )\, dw\, dz_n \right| \\&\quad {\lesssim }e^{-\frac{x_n^2}{8}}|x-y^*|^{-n}+\cdots + |x-y^*|^{-n-k} {\lesssim }|x-y^*|^{-n}x_n^{-k}. \end{aligned} \end{aligned}$$

For \(i=n\), using \(G^{ht}((x_n-z_n)e_n, w,1)=0\) if \(z_n =x_n\) in (5.19), we deduce

$$\begin{aligned} \partial _{x_n} V_{nj}(x,y,1)= & {} -2\epsilon _j\sum _{\beta <n} \int _0^{x_n} \int _{w_n>0} \partial _{x_n} G^{ht}((x_n-z_n)e_n, w,1)\,\partial _j\partial _\beta ^2\\{} & {} E(w+x'-y^* +z_ne_n )\, dw\, dz_n. \end{aligned}$$

Thus, it is readily shown that

$$\begin{aligned} |\partial _{x_n} V_{nj}(x,y,1)|{\lesssim }|x-y^*|^{-n-1}. \end{aligned}$$

Similarly, for \(k\ge 2\),

$$\begin{aligned} \begin{aligned}&\partial _{x_n}^k V_{nj}(x,y,1) \\&\quad = -2\epsilon _j \sum _{\beta<n} \int _{w_n>0} \partial _{x_n}^{k-1} G^{ht}(x_ne_n,w,1)\,\partial _j\partial _\beta ^2 E(w+x'-y^*)\, dw\\&\qquad -2\epsilon _j \sum _{\beta<n} \int _{w_n>0} \partial _{x_n}^{k-2} G^{ht}(x_ne_n,w,1)\,\partial _j\partial _\beta ^2 \partial _nE(w+x'-y^*)\, dw - \cdots \\&\qquad -2\epsilon _j \sum _{\beta<n} \int _{w_n>0} \partial _{x_n} G^{ht}(x_ne_n,w,1)\,\partial _j\partial _\beta ^2 \partial _n^{k-2}E(w+x'-y^*)\, dw\\&\qquad -2\epsilon _j \sum _{\beta <n} \int _0^{x_n} \int _{w_n>0} \partial _{x_n} G^{ht}((x_n-z_n)e_n, w,1)\,\partial _j\partial _\beta ^2 \partial _n^2\\&\qquad E(w+x'-y^* +z_ne_n )\, dw\, dz_n, \end{aligned} \end{aligned}$$

and thus,

$$\begin{aligned} | \partial _{x_n}^k V_{nj}(x,y,1) |&{\lesssim }&e^{-\frac{x_n^2}{8}} |x-y^*|^{-n-1} + e^{-\frac{x_n^2}{8}} |x-y^*|^{-n-2} + \cdots + |x-y^*|^{-n-k} \\&{\lesssim }&|x-y^*|^{-n} x_n^{-k}. \end{aligned}$$

Therefore, we obtain

$$\begin{aligned} \begin{aligned} | \partial _{x_n}^k V_{ij}(x,y,1) | {\lesssim }\frac{1}{{\langle x-y^* \rangle }^{k-k_i+n} x_n^{k_i}}, \end{aligned} \end{aligned}$$
(5.32)

where \(k_i=(k-\delta _{in})_+\).

Finally, Proposition 5.8 follows from (5.31), (5.32), and the scaling property

$$\begin{aligned} V_{ij}(x,y,t)=\frac{1}{t^{\frac{n}{2}}}\,V_{ij}\left( \frac{x}{\sqrt{t}},\frac{y}{\sqrt{t}},1\right) . \end{aligned}$$
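For illustration, the case \(k=m=0\) of (5.26) follows at once: since \({\langle z \rangle }=(1+|z|^2)^{1/2}\), the scaling property and (5.31) give

$$\begin{aligned} |\partial _{x',y'}^l\partial _{y_n}^q V_{ij}(x,y,t)| = t^{-\frac{l+q+n}{2}} \left| \left( \partial _{X',Y'}^l\partial _{Y_n}^q V_{ij}\right) \left( \frac{x}{\sqrt{t}},\frac{y}{\sqrt{t}},1\right) \right| {\lesssim }\frac{t^{-\frac{l+q+n}{2}}}{{\langle \frac{x-y^*}{\sqrt{t}} \rangle }^{l+q+n}} = \frac{1}{(|x-y^*|^2+t)^{\frac{l+q+n}{2}}}, \end{aligned}$$

and the cases with \(k\ge 1\) follow in the same way from (5.32).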

\(\square \)

5.3 Proof of Proposition 1.1

We now prove Proposition 1.1.

Proof of Proposition 1.1

We first estimate the Green tensor \(G_{ij}\), which satisfies the formula in Lemma 4.3. By (2.10), Proposition 5.6 and Proposition 5.8, the derivatives of \(G_{ij}\) are bounded by the sum of \(\left( |x-y|^2+t\right) ^{-\frac{l+k+q+n}{2}-m}\), the bounds in Proposition 5.6 for \({\widehat{H}}_{ij}\), and those in Proposition 5.8 for \(V_{ij}\). This shows (1.12).

We now estimate the pressure tensor \(g_j\). Recall from Proposition 3.5 the decomposition formula (3.20), \(g_j = -F_j^y(x)\delta (t) +\widehat{w}_j\). For \(t>0\), it suffices to estimate

$$\begin{aligned} \begin{aligned} \partial _{x',y'}^l\partial _{x_n}^k\partial _{y_n}^q {\widehat{w}}_j(x,y,t) \sim&- \sum _{i<n} 8\int _0^t\int _{\Sigma }\partial _i \partial _n^{k+1}A(\xi ',x_n,\tau )\partial _{x'}^l\partial _n^{q+1}\\&S_{ij}(x'-y'-\xi ',-y_n,t-\tau )\,d\xi 'd\tau \\&+ \sum _{i<n} 4\int _{\Sigma }\partial _{x'}^l\partial _{x_n}^k\partial _i E(x-\xi ') \partial _n^{q+1}S_{ij}(\xi '-y,t)\,d\xi ' \\&+ 8\int _{\Sigma } \partial _n^{k+1}A(\xi ',x_n,t) \partial _{x'}^l \partial _n^{q+1}\partial _jE(x'-y'-\xi ',-y_n)\, d\xi '\\ =:&I +II +III . \end{aligned} \end{aligned}$$

We first estimate \(I \). Using (2.4) and (2.10), we get

$$\begin{aligned} \begin{aligned} I \lesssim&\int _0^t\int _{\Sigma }\frac{1}{\tau ^{\frac{1}{2}}(|\xi '|+x_n+\sqrt{\tau })^{k+n}}\,\frac{1}{(|\xi '-(x'-y')|+y_n+\sqrt{t-\tau })^{l+q+n+1}}\,d\xi 'd\tau \\ =&~\left( \int _0^{t/2}\! \int _{\Sigma } + \int _{t/2}^t \int _{\Sigma } \right) \left\{ \cdots \right\} \,d\xi 'd\tau =:~I _1+I _2. \end{aligned} \end{aligned}$$

We have

$$\begin{aligned} |I _1|\lesssim \int _{\Sigma } \left( \int _0^{t/2}\frac{1}{\tau ^{\frac{1}{2}}(|\xi '|+x_n+\sqrt{\tau })^{k+n}}\,d\tau \right) \frac{1}{(|\xi '-(x'-y')|+y_n+\sqrt{t})^{l+q+n+1}}\,d\xi '. \end{aligned}$$

Let

$$\begin{aligned} R=|x-y^*|+\sqrt{t}. \end{aligned}$$

By Lemma 2.1 (\(k>d\) case),

$$\begin{aligned} \begin{aligned} |I _1|&{\lesssim }\int _{\Sigma }\frac{\sqrt{t}}{(|\xi '|+x_n)^{k+n-1}(|\xi '|+x_n+\sqrt{t})}\,\frac{1}{(|\xi '-(x'-y')|+y_n+\sqrt{t})^{l+q+n+1}}\,d\xi ' \\&{\lesssim }\int _{\Sigma }\frac{1}{(|\xi '|+x_n)^{k+n-1}}\,\frac{1}{(|\xi '-(x'-y')|+y_n+\sqrt{t})^{l+q+n+1}}\,d\xi '. \end{aligned} \end{aligned}$$

By Lemma 2.2,

$$\begin{aligned} |I _1|&{\lesssim }R^{-l-q-k-n-1} + \delta _{k0} R^{-l-q-n-1} \log \left( \frac{R}{x_n} \right) + \mathbb {1}_{k>0} R^{-l-q-n-1} x_n^{-k} \\&\quad + R^{-k-n+1} (y_n + \sqrt{t})^{-l-q-2}. \end{aligned}$$

For \(I _2\) and all \(n\ge 2\), by Lemma 2.1,

$$\begin{aligned} \begin{aligned} |I _2|\lesssim&\int _{\Sigma }\frac{1}{t^{\frac{1}{2}}(|\xi '|+x_n+\sqrt{t})^{k+n}} \left( \int _{\frac{t}{2}}^t\frac{1}{(|\xi '-(x'-y')|+y_n+\sqrt{t-\tau })^{l+q+n+1}}\,d\tau \right) d\xi '\\ \lesssim&\int _{\Sigma }\frac{1}{t^{\frac{1}{2}}(|\xi '|+x_n+\sqrt{t})^{k+n}}\,\\&\frac{t}{(|\xi '-(x'-y')|+y_n)^{l+q+n-1}(|\xi '-(x'-y')|^2+y_n^2+ t) }\,d\xi ' \\ \lesssim&\int _{\Sigma }\frac{1}{(|\xi '|+x_n+\sqrt{t})^{k+n}}\,\frac{1}{(|\xi '-(x'-y')|+y_n)^{l+q+n} }\,d\xi ' . \end{aligned} \end{aligned}$$

By Lemma 2.2,

$$\begin{aligned} |I _2| {\lesssim }R^{-l-q-k-n-1} + R^{-l-q-n} (x_n + \sqrt{t})^{-k-1} + R^{-k-n} y_n^{-l-q-1}. \end{aligned}$$

Now, we estimate \(II \). Using the definition of E, (2.10) and Lemma 2.2, after integrating by parts, we get

$$\begin{aligned} \begin{aligned} |II |\lesssim&\int _{\Sigma }\frac{1}{(|\xi '|+x_n)^{k+n-1}}\,\frac{1}{(|\xi '-(x'-y')|+y_n+\sqrt{t})^{l+q+n+1}}\,d\xi ', \end{aligned} \end{aligned}$$

which is similar to \(I_1\). Hence

$$\begin{aligned} \begin{aligned} |I +II |&{\lesssim }\delta _{k0} R^{-l-q-n-1} \log \left( \frac{R}{x_n} \right) + \mathbb {1}_{k>0} R^{-l-q-n-1} x_n^{-k}+ R^{-l-q-n} (x_n + \sqrt{t})^{-k-1} \\&\quad + R^{-k-n+1} (y_n + \sqrt{t})^{-l-q-2} + R^{-k-n} y_n^{-l-q-1}. \end{aligned} \end{aligned}$$

Using (2.4) and the definition of E, we have

$$\begin{aligned} \begin{aligned} |III |\lesssim&\int _{\Sigma }\frac{1}{t^{\frac{1}{2}}(|\xi '|+x_n+\sqrt{t})^{k+n-1}}\,\frac{1}{(|\xi '-(x'-y')|+y_n)^{l+q+n}}\,d\xi '. \end{aligned} \end{aligned}$$

By Lemma 2.2,

$$\begin{aligned} |III | {\lesssim }t^{-\frac{1}{2}}\left( \delta _{k0} R^{-l-q-n}\log \frac{R}{x_n + \sqrt{t}} + \mathbb {1}_{k>0} R^{-l-q-n}(x_n+\sqrt{t})^{-k} + R^{-k-n+1} y_n^{-l-q-1} \right) . \end{aligned}$$

We conclude

$$\begin{aligned} |\partial _{x',y'}^l\partial _{x_n}^k\partial _{y_n}^q{\widehat{w}}_j(x,y,t)| {\lesssim }t^{-\frac{1}{2}} \left( \delta _{k0} \frac{1}{R^{l+q+n}} \log {\frac{R}{x_n}}+ \frac{1}{R^{l+q+n} x_n^{k}} + \frac{1}{R^{k+n-1}y_n^{l+q+1}} \right) . \end{aligned}$$

This proves estimate (1.14) and completes the proof of Proposition 1.1. \(\square \)

Remark 5.3

The pressure tensor estimate (1.14) is sufficient for our proof of Proposition 1.4, and it can be improved in several ways: one can get alternative estimates by integrating by parts in \(\xi '\) in all three terms I, II and III to move decay exponents from \(y_n\) to \(x_n\). Furthermore, we can rewrite the last term III using integration by parts and \(\Delta E = 0\) as

$$\begin{aligned} III = \left\{ \begin{aligned} 8\int _{\Sigma } \partial _j \partial _nA(\xi ',x_n,t) \partial _n E(x'-y'-\xi ',-y_n)\, d\xi '\qquad \text {if } j<n, \\ \textstyle \sum _{i<n} 8\int _{\Sigma } \partial _i \partial _nA(\xi ',x_n,t) \partial _i E(x'-y'-\xi ',-y_n)\, d\xi '\qquad \text {if } j=n. \end{aligned} \right. \end{aligned}$$

6 Restricted Green Tensors and Convergence to Initial Data

In this section we first study the restricted Green tensors acting on solenoidal vector fields, showing Theorem 1.2. In addition to the restricted Green tensor \(\breve{G}_{ij}\) of Solonnikov given in (1.9), we also identify another restricted Green tensor \({\widehat{G}}_{ij}\) in (1.15).

We then use them to show the convergence to initial data in pointwise and \(L^q\) sense for solenoidal and general \(u_0\) in Lemma 6.1 and Lemma 6.2, respectively. These show Theorem 1.3.

Proof of Theorem 1.2

Suppose \(\mathop {\textrm{div}}\nolimits u_0 = 0\) and \(u_{0,n}|_{\Sigma } = 0\). Let

$$\begin{aligned} \begin{aligned} u_i^L(x,t)&= \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t) u_{0,j}(y)\, dy,\\ \breve{u}_i^L(x,t)&= \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) u_{0,j}(y)\, dy,\quad {\widehat{u}}_i^L(x,t) \\&= \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} {\widehat{G}}_{ij}(x,y,t)u_{0,j}(y)\,dy. \end{aligned} \end{aligned}$$
(6.1)

By Lemma 4.3,

$$\begin{aligned} \begin{aligned} u_i^L (x,t) =&~ \int _{{\mathbb {R}}^n_+} ( \Gamma (x-y,t) - \Gamma (x-y^*,t)) (u_0)_i(y) \, dy \\&+ \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} \left( \Gamma _{ij}(x-y,t) - \epsilon _i \epsilon _j \Gamma _{ij}(x-y^*,t) \right) (u_0)_j(y) \, dy\\&-4 \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} {\widehat{H}}_{ij}(x,y,t) (u_0)_j(y) \, dy {+ \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} V_{ij}(x,y,t) (u_0)_j(y) \, dy}\\ =&: I_1+I_2+I_3 {+ I_4}. \end{aligned} \end{aligned}$$
(6.2)

Note that \(I_1\) corresponds to the tensor \(\delta _{ij}\left[ \Gamma (x-y,t) - \Gamma (x-y^*,t) \right] \) in both (1.9) and (1.15). We claim \(I_2{+I_4}=0\). Indeed, since \(\Gamma _{ij}(x-y,t) - \epsilon _i\epsilon _j\Gamma _{ij}(x-y^*,t) + V_{ij}(x,y,t) = \partial _{y_j} T_i(x,y,t)\) with

$$\begin{aligned} \begin{aligned} T_i(x,y,t)&= \int _{{\mathbb {R}}^n}\partial _{y_i}\left[ \Gamma (x-y-w,t)-\Gamma (x-y^*-w,t) \right] E(w)\,dw\\&\quad + \left[ -2\delta _{in} \partial _{y_n} \int _{w_n<-y_n} G^{ht}(x,y+w,t) E(w)\, dw \right. \\&\quad \left. -4 \int _0^{x_n} \int _{\Sigma } \partial _{x_n} \left( \partial _{y_n} \int _{w_n<-y_n} G^{ht}(x-z,y+w,t) E(w)\, dw \right) dz'dz_n \right] , \end{aligned} \end{aligned}$$

by (2.8), (4.6) and (4.7),

$$\begin{aligned} I_2 + I_4 = \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} \partial _{y_j} T_i(x,y,t) (u_0)_j(y) \, dy =- \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} T_i(x,y,t) \partial _{y_j} (u_0)_j(y) \, dy=0. \end{aligned}$$

For \(I_3\), by separating the sum over \(j<n\) and \(j=n\), and using Lemma 5.1,

$$\begin{aligned} I_3 =-4 \sum _{j<n} \int _{{\mathbb {R}}^n_+} (-D_{ijn})(x,y,t) (u_0)_j(y) \, dy - 4 \int _{{\mathbb {R}}^n_+} \sum _{\beta <n} D_{i\beta \beta }(x,y,t) (u_0)_n(y) \, dy. \end{aligned}$$

Note that

$$\begin{aligned}&\sum _{j<n} \int _{{\mathbb {R}}^n_+} (-D_{ijn})(x,y,t) (u_0)_j(y) \, dy\\&\quad =\sum _{j<n} \int _{{\mathbb {R}}^n_+} \partial _{y_j}\left( \int _0^{x_n}\int _{\Sigma } \Gamma _{nn}(x^*-y-z^*,t)\partial _iE(z)\,dz'dz_n \right) (u_0)_j(y) \, dy \\&\quad =-\sum _{j<n} \int _{{\mathbb {R}}^n_+} \left( \int _0^{x_n}\int _{\Sigma } \Gamma _{nn}(x^*-y-z^*,t)\partial _iE(z)\,dz'dz_n \right) \partial _{y_j} (u_0)_j(y) \, dy \\&\quad = \int _{{\mathbb {R}}^n_+} \left( \int _0^{x_n}\int _{\Sigma } \Gamma _{nn}(x^*-y-z^*,t)\partial _iE(z)\,dz'dz_n \right) \partial _{y_n} (u_0)_n(y) \, dy \\&\quad = \int _{{\mathbb {R}}^n_+}\int _0^{x_n}\int _{\Sigma } \partial _n\Gamma _{nn}(x^*-y-z^*,t)\partial _iE(z)\,dz'dz_n\, (u_0)_n(y) \, dy\\&\quad = \int _{{\mathbb {R}}^n_+} D_{inn}(x,y,t) (u_0)_n(y) \, dy. \end{aligned}$$

Hence

$$\begin{aligned} I_3 = - 4 \int _{{\mathbb {R}}^n_+} \sum _{\beta =1}^n D_{i\beta \beta }(x,y,t)\,(u_0)_n(y) \, dy. \end{aligned}$$

Since

$$\begin{aligned} \begin{aligned} \sum _{\beta =1}^n D_{i\beta \beta }(x,y,t) =&~ \int _0^{x_n} \int _{\Sigma } \sum _{\beta =1}^n \partial _{\beta } \Gamma _{\beta n}(x^*-y-z^*,t)\partial _iE(z)\,dz'dz_n \\ =&~ \int _0^{x_n} \int _{\Sigma } \sum _{\beta =1}^n \partial _{y_\beta }^2 \int _{{\mathbb {R}}^n} \partial _n\Gamma (x^*-y-z^*-w,t)\\ {}&\quad E(w) \, dw\,\partial _iE(z)\,dz'dz_n\\ =&~ -\int _0^{x_n} \int _{\Sigma } \partial _n \Gamma (x^*-y-z^*,t) \partial _iE(z)\,dz'dz_n =+ C_i(x,y,t), \end{aligned} \end{aligned}$$

where we used \(-\Delta E=\delta \), (6.2) becomes

$$\begin{aligned} \begin{aligned}&\sum _{j=1}^n \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t) (u_0)_j(y) \, dy\\&\quad = \int _{{\mathbb {R}}^n_+} ( \Gamma (x-y,t) - \Gamma (x-y^*,t)) (u_0)_i(y) \, dy - 4 \int _{{\mathbb {R}}^n_+} C_i(x,y,t)\, (u_0)_n(y)\, dy. \end{aligned} \end{aligned}$$
(6.3)

This gives (1.15). On the other hand,

$$\begin{aligned} \begin{aligned} I_3&= 4\int _{{\mathbb {R}}^n_+} \int _0^{x_n} \int _{\Sigma } \partial _n \Gamma (x^*-y-z^*,t) \partial _iE(z)\,dz'dz_n\, (u_0)_n(y)\, dy\\&= 4\int _{{\mathbb {R}}^n_+} \int _0^{x_n} \int _{\Sigma } \Gamma (x^*-y-z^*,t) \partial _iE(z)\,dz'dz_n\, \partial _n(u_0)_n(y)\, dy\\&= -4\sum _{\beta<n} \int _{{\mathbb {R}}^n_+} \int _0^{x_n} \int _{\Sigma } \Gamma (x^*-y-z^*,t) \partial _iE(z)\,dz'dz_n\, \partial _\beta (u_0)_\beta (y)\, dy\\&= -4\sum _{\beta <n} \int _{{\mathbb {R}}^n_+} J_{i\beta }(x,y,t) \cdot (u_0)_\beta (y)\, dy, \end{aligned} \end{aligned}$$

where for \(\beta <n\)

$$\begin{aligned} \begin{aligned} J_{i\beta }&= \partial _{x_\beta }\int _0^{x_n} \int _{\Sigma } \Gamma (x^*-y-z^*,t) \partial _iE(z)\,dz'dz_n \\&=\partial _{x_\beta } \int _0^{x_n} \int _{\Sigma } \Gamma (z-y^*,t) \partial _iE(x-z)\,dz'dz_n. \end{aligned} \end{aligned}$$

Therefore, we conclude that

$$\begin{aligned} \begin{aligned}&\sum _{j=1}^n \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t) (u_0)_j(y) \, dy \\&\quad = \int _{{\mathbb {R}}^n_+} ( \Gamma (x-y,t) - \Gamma (x-y^*,t)) (u_0)_i(y) \, dy \\&\qquad - 4 \sum _{\beta <n} \int _{{\mathbb {R}}^n_+} J_{i\beta }(x,y,t) \cdot (u_0)_\beta (y)\, dy, \end{aligned} \end{aligned}$$
(6.4)

which gives (1.9). This completes the proof of Theorem 1.2. \(\square \)

Remark 6.1

Similar to Theorem 1.2, we have restricted pressure tensors. Let \(f \in C^1_c(\overline{{\mathbb {R}}^n_+}\times {\mathbb {R}};{\mathbb {R}}^n)\) be a vector field in \({\mathbb {R}}^n_+\times {\mathbb {R}}\) and \(f = {{\textbf{P}}}f\), i.e., \(\mathop {\textrm{div}}\nolimits f=0\) and \(f_{n}|_\Sigma =0\). Then

$$\begin{aligned} \begin{aligned}&\sum _{j=1}^n \int _{-\infty }^\infty \int _{{\mathbb {R}}^n_+} g_j(x,y,t-s) f_j(y,s)\,dy\,ds\\&\quad = \sum _{j=1}^n \int _{-\infty }^t \int _{{\mathbb {R}}^n_+} \breve{g}_j(x,y,t-s) f_j(y,s)\,dy\,ds\\&\quad = \sum _{j=1}^n \int _{-\infty }^t \int _{{\mathbb {R}}^n_+} {\widehat{g}}_j(x,y,t-s) f_j(y,s)\,dy\,ds, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \breve{g}_j(x,y,t) = (\delta _{jn}-1)\partial _{y_j} Q(x,y,t) ,\quad {\widehat{g}}_j(x,y,t)= \delta _{jn}\partial _{y_j} Q(x,y,t) , \end{aligned}$$

and

$$\begin{aligned} Q(x,y,t) = 4 \int _{\Sigma } \bigg [ E(x-\xi ')\partial _n\Gamma (\xi '-y,t) + \Gamma (x'-y'-\xi ',y_{n},t)\partial _nE(\xi ',x_n)\bigg ] \,d\xi '. \end{aligned}$$
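In particular, \(\breve{g}_n=0\) and \({\widehat{g}}_j=0\) for \(j<n\), so in the identity above \(\breve{g}_j\) pairs only with the tangential components of \(f\) while \({\widehat{g}}_j\) pairs only with its normal component:

$$\begin{aligned} \sum _{j=1}^n \breve{g}_j\, f_j = -\sum _{j<n} \partial _{y_j} Q\, f_j, \qquad \sum _{j=1}^n {\widehat{g}}_j\, f_j = \partial _{y_n} Q\, f_n. \end{aligned}$$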

An equivalent formula for \(\breve{g}_j\) appeared in Solonnikov [49, (2.4)], but not one for \({\widehat{g}}_j\). Both \(\breve{g}_j\) and \({\widehat{g}}_j\) are functions and do not contain a delta function in time. Note that \({\widehat{g}}_j(x,y,t)= \breve{g}_j(x,y,t) + \partial _{y_j} Q(x,y,t)\). We can get infinitely many restricted pressure tensors by adding to them any gradient field \(\partial _{y_j} P(x,y,t)\). \(\square \)

Lemma 6.1

Let \(u_0 \in C^1_c(\overline{{\mathbb {R}}^n_+};{\mathbb {R}}^n)\) be a vector field in \({\mathbb {R}}^n_+\) and \(u_0 = {{\textbf{P}}}u_0\), i.e., \(\mathop {\textrm{div}}\nolimits u_0=0\) and \(u_{0,n}|_\Sigma =0\). Then for all \(i=1,\ldots ,n\), and \(1< q\le \infty \),

$$\begin{aligned} \begin{aligned} \lim _{t\rightarrow 0_+}\left\| u_{0,i}(x) - \textstyle \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t)u_{0,j}(y)\,dy \right\| _{L^q_x({\mathbb {R}}^n_+)}=0. \end{aligned} \end{aligned}$$
(6.5)

Note that the exponent q in (6.5) includes \(\infty \) but not 1.

Proof

Choose \(R>0\) so that \(K= \left\{ (x',x_n)\in {\mathbb {R}}^n: |x'|\le R, 0\le x_n\le R \right\} \) contains the support of \(u_0\). Since \(u_0\) is uniformly continuous with compact support inside K,

$$\begin{aligned} { \left\| \int _{{\mathbb {R}}^n_+}\Gamma (x-y,t) u_{0,i}(y)\, dy - u_{0,i}(x)\right\| _{L^q_x({\mathbb {R}}^n_+)} + \left\| \int _{{\mathbb {R}}^n_+}\Gamma (x^*-y,t) u_{0,i}(y)\, dy \right\| _{L^q_x({\mathbb {R}}^n_+)} \rightarrow 0 }\nonumber \\ \end{aligned}$$
(6.6)

as \( t\rightarrow 0_+\) for all i. In view of (6.3), to show (6.5), it suffices to show

$$\begin{aligned} { \lim _{t\rightarrow 0_+}\sup _{x\in {\mathbb {R}}^n_+} \left\| v_i(\cdot ,t) \right\| _{L^q({\mathbb {R}}^n_+)}=0, } \end{aligned}$$
(6.7)

where

$$\begin{aligned} v_i(x,t)=\int _{{\mathbb {R}}^n_+} C_i(x,y,t) u_{0,n}(y)\, dy. \end{aligned}$$

Note that \(|u_{0,n}(y)| \le Cy_n\), for some \(C>0\), since \(u_{0,n}|_\Sigma =0\) and \(u_0 \in C^1_c(\overline{{\mathbb {R}}^n_+})\). Using estimate (5.11) for \(C_i\), we have

$$\begin{aligned} \begin{aligned} |v_i(x,t)|&\le \int _K \frac{e^{-\frac{ y_n^2}{30t}}}{\left( y_n+\sqrt{t} \right) \left( |x-y^*|+\sqrt{t} \right) ^{n-1}}\, Cy_n \,dy \\&{\lesssim }\int _{K} \frac{1}{(|x-y|+\sqrt{t})^{n-1}} e^{-\frac{ y_n^2}{30t}} \, dy = \int _{{\mathbb {R}}^n} f(x-y{, t})g(y{, t})\,dy, \end{aligned} \end{aligned}$$
(6.8)

where

$$\begin{aligned} f(x{, t})= \frac{1}{(|x|+\sqrt{t})^{n-1}},\quad g(x{, t})=e^{-\frac{ x_n^2}{30t}} \mathbb {1}_K(x). \end{aligned}$$

By Young’s convolution inequality,

$$\begin{aligned} \left\| v_i(\cdot ,t) \right\| _{L^q({\mathbb {R}}^n_+)}{\lesssim }\Vert {(f*g)(\cdot ,t)}\Vert _{L^q({\mathbb {R}}^n)} \le \Vert f{(\cdot ,t)}\Vert _{L^p({\mathbb {R}}^n)}\Vert g{(\cdot ,t)}\Vert _{L^r({\mathbb {R}}^n)} \end{aligned}$$

where

$$\begin{aligned} \frac{1}{p}+\frac{1}{r} = \frac{1}{q} + 1, \qquad 1\le p,q,r\le \infty . \end{aligned}$$

We first compute the \(L^p\)-norm of \(f\). If \(p>\frac{n}{n-1}\),

$$\begin{aligned} \Vert f{(\cdot ,t)}\Vert _{L^p({\mathbb {R}}^n)}=\left( \int _{{\mathbb {R}}^n} \frac{1}{(|z|+\sqrt{t})^{(n-1)p}}\,dz \right) ^{1/p} =C \sqrt{t}^{\frac{n}{p}-(n-1)}. \end{aligned}$$

Next, we compute the \(L^r\)-norm of \(g\). We need \(0\le \frac{1}{p}- \frac{1}{q}< 1\) so that \(1\le r<\infty \).

$$\begin{aligned} \int _{{\mathbb {R}}^n} |g|^r \le \int _0^{R} \int _{B_R'} e^{-\frac{z_n^2}{30t}}\,dz'dz_n = CR^{n-1}\sqrt{t} \int _0^{\frac{R}{\sqrt{t}}} e^{-\frac{u^2}{30}}\,du \lesssim \sqrt{t}. \end{aligned}$$

Hence \(\Vert g(\cdot ,t)\Vert _{L^r}\lesssim \sqrt{t}^{\frac{1}{r}}\), and

$$\begin{aligned} \Vert {(f*g)(\cdot ,t)}\Vert _{L^q}\lesssim \sqrt{t}^{\frac{n}{p}-(n-1)+\frac{1}{r}} = \sqrt{t}^{\frac{1}{q}+1+(n-1)\left( \frac{1}{p}-1 \right) }. \end{aligned}$$

To have a vanishing limit as \(t\rightarrow 0_+\), we require \(\frac{1}{q}+1+(n-1)\left( \frac{1}{p}-1 \right) >0\).

When \(q \in (\frac{n}{n-1},\infty ]\), we can choose \(p \in (\frac{n}{n-1},\min (q,\frac{n-1}{n-2}))\) so that all conditions on p,

$$\begin{aligned} p>\frac{n}{n-1}, \quad 0\le \frac{1}{p}- \frac{1}{q}< 1, \quad \frac{1}{q}+1+(n-1)\left( \frac{1}{p}-1 \right) >0 \end{aligned}$$

are satisfied. This shows (6.7) for all \(q \in (\frac{n}{n-1},\infty ]\).
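For instance, when \(n=3\) and \(q=\infty \) these conditions become \(p>\frac{3}{2}\), \(0\le \frac{1}{p}<1\), and \(1+2(\frac{1}{p}-1)>0\), i.e. \(p<2\); any \(p\in (\frac{3}{2},2)\) works, say \(p=\frac{7}{4}\), which gives \(\Vert v_i(\cdot ,t)\Vert _{L^\infty }\lesssim t^{\frac{1}{14}}\rightarrow 0\).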

For the small q case, let

$$\begin{aligned} u^*_i(x,t) = \int _{{\mathbb {R}}^n_+} G^*_{ij}(x,y,t) u_{0, j}(y)\,dy, \end{aligned}$$

where \(G^*_{ij}\), given in (1.9), is the sum of the last terms of (1.9). It suffices to show

$$\begin{aligned} \lim _{t \rightarrow 0} \left\| u^*_i(x,t) \right\| _{L^q_x ({\mathbb {R}}^n_+)}=0. \end{aligned}$$

By estimate (1.10), \(|G^*_{ij}(x,y,t)|\lesssim e^{-\frac{Cy_n^2}{t}}\left( |x^*-y|^2+t \right) ^{-\frac{n}{2}}\). For \(1<q<\infty \), using Minkowski’s inequality,

$$\begin{aligned} \begin{aligned} \left\| u^*_i(x,t) \right\| _{L^q_x({\mathbb {R}}^n_+)}&\lesssim \int _{{\mathbb {R}}^n_+}\left( \int _{{\mathbb {R}}^n_+}\left| G^*_{ij}(x,y,t) \right| ^q dx \right) ^{\frac{1}{q}} \left| u_{0}(y) \right| dy \\&\lesssim \int _{{\mathbb {R}}^n_+}\left( \int _{{\mathbb {R}}^n_+}\frac{dx}{\left( |x^*-y|^2+t \right) ^{\frac{nq}{2}} } \right) ^{\frac{1}{q}} e^{-\frac{Cy_n^2}{t}}\left| u_{0}(y) \right| dy \\&\lesssim \int _0^R \int _{|y'|<R}\frac{1}{(y_n +\sqrt{t})^{\frac{n(q-1)}{q}}}\,e^{-\frac{Cy_n^2}{t}}\,dy'\,dy_n \\&\lesssim t^{\frac{1}{2}\left( 1-\frac{n(q-1)}{q} \right) }\int _0^{R/\sqrt{t}}\frac{1}{(z_n +1)^{\frac{n(q-1)}{q}}}\,e^{-Cz_n^2} \,dz_n, \end{aligned} \end{aligned}$$

where \(y_n=\sqrt{t} z_n\). Therefore, if \(1<q< \frac{n}{n-1}\), then the right hand side goes to zero as \( t\rightarrow 0_+\).
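Indeed, the exponent of \(t\) is \(\frac{1}{2}\left( 1-\frac{n(q-1)}{q} \right) \), which is positive if and only if \(n(q-1)<q\), i.e. \(q<\frac{n}{n-1}\).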

The case \(q=\frac{n}{n-1}\) can be obtained using the previous cases and the Hölder inequality.

This finishes the proof of Lemma 6.1. \(\square \)

Remark 6.2

In the proof of Lemma 6.1, we have used \({\widehat{G}}_{ij}\) for large \(q\) and \(\breve{G}_{ij}\) for small \(q\). We do not use \({\widehat{G}}_{ij}\) for small \(q\) because the estimate (6.8) for \(v_i\) does not have enough decay in \(x\). We cannot use \(\breve{G}_{ij}\) for \(q=\infty \) because, although the pointwise estimate of \(u_i^*(x,t)\) using (1.10) converges to 0 as \(t\rightarrow 0\) for each \(x\in {\mathbb {R}}^n_+\), it is not uniform in \(x\). In contrast, it is uniform for \(v_i\) thanks to \(|u_{0,n}(y)| \le C y_n\).

Lemma 6.2

Let \(u_0\) be a vector field in \({\mathbb {R}}^n_+\), \(u_0\in L^q({\mathbb {R}}^n_+)\), \(1<q<\infty \), and let \(u_i(x,t)=\textstyle \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t)u_{0,j}(y)\,dy\). Then \(u(x,t)\rightarrow ({{\textbf{P}}}u_0)(x)\) in \(L^q({\mathbb {R}}^n_+)\).

This lemma does not assume \(u_0 = {{\textbf{P}}}u_0\), and implies (1.7).

Proof

Since the Helmholtz projection \({{\textbf{P}}}\) is bounded in \(L^q({\mathbb {R}}^n_+)\), we also have \({{\textbf{P}}}u_0 \in L^q({\mathbb {R}}^n_+)\). For any \(\varepsilon >0\), choose \(a ={{\textbf{P}}}a \in C^\infty _c(\overline{{\mathbb {R}}^n_+};{\mathbb {R}}^n)\) with \(\left\| a-{{\textbf{P}}}u_0 \right\| _{L^q} \le \varepsilon \). Such an \(a\) may be obtained by first localizing \({{\textbf{P}}}u_0\) using a Bogovskii map, and then mollifying the extension defined in (3.4) of the localized vector field. Let \(v_i(x,t)=\sum _{j=1}^n\int _{{\mathbb {R}}^n_+}G_{ij}(x,y,t)a_j(y)\,dy\). By Lemma 6.1, there is \(t_\varepsilon >0\) such that

$$\begin{aligned} \left\| v(\cdot ,t) -a \right\| _{L^q({\mathbb {R}}^n)} \le \varepsilon , \quad \forall t \in (0,t_\varepsilon ). \end{aligned}$$

By \(L^q\) estimate (9.1) in Lemma 9.1, \(\left\| u(t) - v(t) \right\| _{L^q} \le C \left\| {{\textbf{P}}}u_0-a \right\| _{L^q}\le C\varepsilon \). Hence

$$\begin{aligned} \left\| u(t)-{{\textbf{P}}}u_0 \right\| _{L^q} \le \left\| u(t)-v(t) \right\| _{L^q} + \left\| v(t)-a \right\| _{L^q} + \left\| a-{{\textbf{P}}}u_0 \right\| _{L^q} \le C\varepsilon \end{aligned}$$

for \(t \in (0,t_\varepsilon )\). This shows \(L^q\)-convergence of u(t) to \({{\textbf{P}}}u_0\). \(\square \)

Proof of Theorem 1.3

Part (a) is by Lemma 3.4. Part (b) is by Lemma 6.2. Part (c) is by Lemma 6.1. \(\square \)

7 The Symmetry of the Green Tensor

In this section we prove Proposition 1.4, i.e., the symmetry of the Green tensor of the Stokes system in the half-space,

$$\begin{aligned} \begin{aligned} G_{ij}(x,y,t)=G_{ji}(y,x,t),\quad \forall x,y\in {\mathbb {R}}^n_+,\ \forall t\in {\mathbb {R}}\setminus \{0\}. \end{aligned} \end{aligned}$$
(7.1)

In the Green tensor formula in Lemma 4.3, this symmetry property is valid for the first three terms but unclear for the last two terms \( - \epsilon _i \epsilon _j \Gamma _{ij}(x-y^*,t) - 4\widehat{H}_{ij}(x,y,t)\). To prove it rigorously, we will use its regularity away from the singularity, bounds on spatial decay, and estimates near the singularity from the previous sections. For example, without the pointwise bound in Proposition 1.1, the bound (7.4) would be unclear, and it would take extra effort to show that the corresponding terms vanish as \(\epsilon \rightarrow 0\).

Denote \(G^y_{ij}(z,\tau )=G_{ij}(z, y, \tau )\) and \(g^y_j(z,\tau )=g_j(z,y,\tau )={\widehat{w}}_j^y(z,\tau ) - F_j^y(z)\delta (\tau )\) by Proposition 3.5. Equation (3.1) reads: For fixed \(j=1,2,\cdots , n\) and \(y\in {\mathbb {R}}^n_+\),

$$\begin{aligned} \begin{aligned} \partial _{\tau } G^y_{ij}-\Delta _z G^y_{ij}+\partial _{z_i}g^y_j=\delta _{ij}\delta _y(z)\delta (\tau ), \quad \sum _{i=1}^n \partial _{z_i}G^y_{ij}=0,\quad (z,\tau )\in {\mathbb {R}}^n_+\times {\mathbb {R}}, \end{aligned} \end{aligned}$$
(7.2)

and \(G^y_{ij}(z',0,\tau )=0\). Denote \(U:={\mathbb {R}}^{n}_+\times {\mathbb {R}}\) and

$$\begin{aligned} Q^{y,t}_{\epsilon }=B^y_\epsilon \times (t-\epsilon , \, t+ \epsilon ). \end{aligned}$$

The inward normal \(\nu _z\) on \(\partial Q^{y,t}_{\epsilon }\) is defined on its lateral boundary as

$$\begin{aligned} \nu _i (z,\tau )= -\frac{z_i-y_i}{|z-y|}. \end{aligned}$$

Lemma 7.1

For \(j=1,\ldots ,n\), \(y\in {\mathbb {R}}^n_+\), \(t>0\), and all \(f\in C^\infty ({\mathbb {R}}^n_+\times [0,t];{\mathbb {R}}^n)\), we have

$$\begin{aligned} f_j(y,0)&=\lim _{\epsilon \rightarrow 0_+}\sum _{k=1}^n\left[ \int _{|z-y|=\epsilon }F^y_j(z)f_k(z,0)\nu _k\,dS_z\right. \nonumber \\&\quad -\int _0^\epsilon \!\int _{|z-y|=\epsilon }G^y_{kj}(z,\tau )\nabla _zf_k(z,\tau )\cdot \nu _z\,dS_zd\tau \nonumber \\&\quad +\int _0^\epsilon \!\int _{|z-y|=\epsilon }(\nabla _zG^y_{kj}(z,\tau )\cdot \nu _z)f_k(z,\tau )\,dS_zd\tau \nonumber \\&\quad \left. - \int _0^\epsilon \!\int _{|z-y|=\epsilon } {\widehat{w}}^y_j (z,\tau )f_k(z,\tau )\nu _k\,dS_zd\tau \right] . \end{aligned}$$
(7.3)

Proof

We first assume \(f\in C^\infty _c({\mathbb {R}}^n_+\times {\mathbb {R}};{\mathbb {R}}^n)\). By the defining property (7.2) of the Green tensor, we have

$$\begin{aligned} \begin{aligned} f_j(y,0)=&\sum _{k=1}^n\int _{U}\left[ G^y_{kj}(z,\tau )(-\partial _{\tau } f_k(z,\tau )-\Delta _zf_k(z,\tau ))-{\widehat{w}}^y_j (z,\tau )\partial _{z_k}f_k(z,\tau )\right] dz\,d\tau \\&+\int _{{\mathbb {R}}^n_+} {F_j^y} (z)\mathop {\textrm{div}}\nolimits f(z,0)\,dz. \end{aligned} \end{aligned}$$

Separating the domain of the first integral, we have

$$\begin{aligned} \begin{aligned} f_j(y,0)=&\lim _{\epsilon \rightarrow 0_+}\sum _{k=1}^n\int _{U\setminus Q^{y,0}_\epsilon }\left[ G^y_{kj}(z,\tau )(-\partial _\tau f_k(z,\tau )-\Delta _zf_k(z,\tau )) \right. \\&\left. - {\widehat{w}}_j^y(z,\tau )\partial _{z_k} f_k(z,\tau )\right] dz\,d\tau \\&+\int _{{\mathbb {R}}^n_+}{F_j^y}\mathop {\textrm{div}}\nolimits f(z,0)\,dz\\ =&\lim _{\epsilon \rightarrow 0_+}\sum _{k=1}^n\left( \int _0^\infty \int _{|z-y|>\epsilon }+\int _{\epsilon }^\infty \int _{|z-y|<\epsilon }\right) [\cdots ]\,dz\,d\tau \\&+\int _{{\mathbb {R}}^n_+}{F_j^y} (z)\mathop {\textrm{div}}\nolimits f(z,0)\,dz. \end{aligned} \end{aligned}$$

Here we have used the fact that \(G^y_{kj}(z,\tau )={\widehat{w}}^y_j(z,\tau )=0\) for \(\tau <0\).

Integrating by parts and using \(f\in C^\infty _c({\mathbb {R}}^n_+\times {\mathbb {R}})\), we get

$$\begin{aligned} f_j(y,0)=&\lim _{\epsilon \rightarrow 0_+}\sum _{k=1}^n\left[ \int _{|z-y|>\epsilon }G^y_{kj}(z, 0_+)f_k(z,0)\,dz\right. \\&\left. +\int _0^\infty \int _{|z-y|>\epsilon }\partial _\tau G^y_{kj}(z,\tau )f_k(z,\tau )\,dzd\tau \right. \\&+\int _0^\infty \int _{|z-y|=\epsilon }\left\{ -G^y_{kj}(z,\tau )\nabla _zf_k(z,\tau ) + f_k(z,\tau ) \nabla _z G^y_{kj}(z,\tau ) \right\} \cdot \nu _z\,dS_zd\tau \\&-\int _0^\infty \int _{|z-y|>\epsilon }\Delta _zG^y_{kj}(z,\tau )f_k(z,\tau )\,dzd\tau \\&-\int _0^\infty \int _{|z-y|=\epsilon } {\widehat{w}}^y_j (z,\tau )f_k(z,\tau )\nu _k\,dS_zd\tau \\&+\int _0^\infty \int _{|z-y|>\epsilon }\partial _{z_k} {\widehat{w}}^y_j (z,\tau )f_k(z,\tau )\,dzd\tau \\&+\int _{|z-y|<\epsilon } G^y_{kj}(z,\epsilon )f_k(z,\epsilon )\,dz +\int _\epsilon ^\infty \int _{|z-y|<\epsilon }\partial _\tau G^y_{kj}(z,\tau )f_k(z,\tau )\,dzd\tau \\&+\int _\epsilon ^\infty \int _{|z-y|=\epsilon } \left\{ -G^y_{kj}(z,\tau )\nabla _zf_k(z,\tau ) + f_k(z,\tau ) \nabla _z G^y_{kj}(z,\tau ) \right\} \\&\cdot (-\nu _z)\,dS_zd\tau -\int _\epsilon ^\infty \int _{|z-y|<\epsilon }\Delta _zG^y_{kj}(z,\tau )f_k(z,\tau )\,dzd\tau \\&-\int _\epsilon ^\infty \int _{|z-y|=\epsilon } {\widehat{w}}^y_j (z,\tau )f_k(z,\tau )(-\nu _k)\,dzd\tau \\&+\int _\epsilon ^\infty \int _{|z-y|<\epsilon }\partial _{z_k} {\widehat{w}}^y_j (z,\tau )f_k(z,\tau )\,dzd\tau \\&+ \left( \int _{|z-y|<\epsilon }+ \int _{|z-y|>\epsilon } \right) {F_j^y}(z)\partial _{z_k} f_k(z,0)\,dz\bigg ]. \end{aligned}$$

Note that \(\partial _\tau G^y_{kj}-\Delta _zG^y_{kj}+\partial _{z_k}{\widehat{w}}^y_j =\partial _\tau G^y_{kj}-\Delta _zG^y_{kj}+\partial _{z_k}g^y_j=0\) for \(\tau >0\) and that \(G^y_{kj}(z,0_+)=\partial _kF^y_j(z)\) if \(y\not =z\). Therefore, after combining the first and last terms and integrating by parts,

$$\begin{aligned} \begin{aligned} f_j(y,0)=&\lim _{\epsilon \rightarrow 0_+}\sum _{k=1}^n\left[ \int _{|z-y|=\epsilon }F^y_j(z)f_k(z,0)\nu _k\,dS_z\right. \\&-\int _0^\epsilon \int _{|z-y|=\epsilon }G^y_{kj}(z,\tau )\nabla _zf_k(z,\tau )\cdot \nu _z\,dS_zd\tau \\&+\int _0^\epsilon \int _{|z-y|=\epsilon }(\nabla _zG^y_{kj}(z,\tau )\cdot \nu _z)f_k(z,\tau )\,dS_zd\tau \\&-\int _0^\epsilon \int _{|z-y|=\epsilon } {\widehat{w}}^y_j (z,\tau )f_k(z,\tau )\nu _k\,dS_zd\tau \\&+\int _{|z-y|<\epsilon } G^y_{kj}(z,\epsilon )f_k(z,\epsilon )\,dz +\left. \int _{|z-y|<\epsilon } {F_j^y}(z)\partial _{z_k}f_k(z,0)\,dz \right] . \end{aligned} \end{aligned}$$

The last two terms vanish as \(\epsilon \rightarrow 0_+\) since

$$\begin{aligned} \begin{aligned} \left| \int _{|z-y|<\epsilon }G^y_{kj}(z,\epsilon )f_k(z,\epsilon )\,dz\right| \lesssim \int _0^\epsilon \frac{\Vert f\Vert _\infty }{(r+\sqrt{\epsilon })^{n}}\,r^{n-1}\,dr\lesssim \epsilon ^{\frac{n+1}{2}}\rightarrow 0\ \text { as }\epsilon \rightarrow 0_+ \end{aligned}\nonumber \\ \end{aligned}$$
(7.4)

by (1.12) and Lemma 2.1, and

$$\begin{aligned} \left| \int _{|z-y|<\epsilon } {F_j^y}(z)\partial _{z_k}f_k(z,0)\,dz\right| \lesssim \int _0^\epsilon \frac{ \Vert \nabla f\Vert _\infty }{r^{n-1}}\,r^{n-1}\,dr\lesssim \epsilon \rightarrow 0\ \text { as }\epsilon \rightarrow 0_+. \end{aligned}$$

Hence (7.3) is valid for all \(f\in C^\infty _c({\mathbb {R}}^n_+\times {\mathbb {R}})\).

If \(f\in C^\infty _c({\mathbb {R}}^n_+\times [0,t])\), we can extend it to \(\tilde{f}\in C^\infty _c({\mathbb {R}}^n_+\times {\mathbb {R}})\). Hence (7.3) is valid for all such f. Finally, if \(f\in C^\infty ({\mathbb {R}}^n_+\times [0,t])\) for some \(t>0\), let \({{\tilde{f}}} = f \zeta \) where \(\zeta (z,\tau )\) is a smooth cut-off function which equals 1 in \(Q^{y,0}_{2\epsilon }\). Then (7.3) is valid for \({{\tilde{f}}}\in C^\infty _c({\mathbb {R}}^n_+\times [0,\infty ))\) and hence also for f. This completes the proof of the lemma. \(\square \)

We now prove the symmetry.

Proof of Proposition 1.4

Fix \(\Phi \in C^\infty _c({\mathbb {R}})\), \(\Phi (s)=1\) for \(s \le 1\), and \(\Phi (s)=0\) for \(s\ge 2\). For fixed \(x\not = y\in {\mathbb {R}}^n_+\), \(t>0\), and \(i,j=1,\ldots ,n\), by choosing \(f_k(z,\tau )=G^x_{ki}(z,t-\tau )\eta ^{x,t}(z,\tau )\) in (7.3) of Lemma 7.1, where \(\eta ^{x,t}\) is a smooth cut-off function defined by

$$\begin{aligned} \eta ^{x,t}(z,\tau )=1-\Phi \left( \frac{|x-z|}{\epsilon }\right) \Phi \left( \frac{|t-\tau |}{\epsilon }\right) , \end{aligned}$$

and using that \(\eta ^{x,t}(z,\tau )=1\) on \(\{(z,\tau ):0\le \tau \le \epsilon ,\,|z-y|\le \epsilon \}\) for \(\epsilon <|x-y|/3\), we obtain

$$\begin{aligned} G^x_{ji}(y,t)=&\lim _{\epsilon \rightarrow 0_+}\sum _{k=1}^n\left[ \int _{|z-y|=\epsilon }F^y_j(z)G^x_{ki}(z,t)\nu _k\,dS_z\right. \nonumber \\&-\int _0^\epsilon \int _{|z-y|=\epsilon }G^y_{kj}(z,\tau )\nabla _zG^x_{ki}(z,t-\tau )\cdot \nu _z\,dS_zd\tau \nonumber \\&+\int _0^\epsilon \int _{|z-y|=\epsilon }(\nabla _zG^y_{kj}(z,\tau )\cdot \nu _z)G^x_{ki}(z,t-\tau )\,dS_zd\tau \nonumber \\&-\left. \int _0^\epsilon \int _{|z-y|=\epsilon }{\widehat{w}}^y_j (z,\tau )G^x_{ki}(z,t-\tau )\nu _k\,dS_zd\tau \right] . \end{aligned}$$
(7.5)

Switching \(y\) and \(j\) in the above identity with \(x\) and \(i\), respectively, and changing the variable in \(\tau \), we get

$$\begin{aligned} \begin{aligned} G^y_{ij}(x,t)=&\lim _{\epsilon \rightarrow 0_+}\sum _{k=1}^n\left[ \int _{|z-x|=\epsilon }F^x_i(z)G^y_{kj}(z,t)\nu _k\,dS_z\right. \\&-\int _{t-\epsilon }^t\int _{|z-x|=\epsilon }G^x_{ki}(z,t-\tau )\nabla _zG^y_{kj}(z,\tau )\cdot \nu _z\,dS_zd\tau \\&+\int _{t-\epsilon }^t\int _{|z-x|=\epsilon }(\nabla _zG^x_{ki}(z,t-\tau )\cdot \nu _z)G^y_{kj}(z,\tau )\,dS_zd\tau \\&\left. -\int _{t-\epsilon }^t\int _{|z-x|=\epsilon } {\widehat{w}^x_i}(z,t-\tau )G^y_{kj}(z,\tau )\nu _k\,dS_zd\tau \right] . \end{aligned} \end{aligned}$$
(7.6)

Denote

$$\begin{aligned} U_\epsilon ^{L,\delta }:=\left\{ ({\mathbb {R}}^n_+\cap \left\{ |z|<L, L z_n>1 \right\} ) \times [\delta ,t-\delta ] \right\} \setminus (Q^{x,t}_{\epsilon }\cup Q^{y,0}_{\epsilon }) \end{aligned}$$

for \(0<\delta<\epsilon <\min (t,|x-y|)/2\) and \(L > 2(|x|+|y|+1)\). Since \(G^x_{ki}(z, t-\tau )\) and \(G^y_{kj}(z, \tau )\) are smooth in \(U_\epsilon ^{L,\delta }\), \(\left[ (\partial _{\tau }-\Delta _z)G^y_{kj}+\partial _{z_k}g^y_j \right] (z,\tau )\) and \(\left[ (-\partial _{\tau }-\Delta _z)G^x_{ki} \right] {+\partial _{z_k}g^x_i}(z,t-\tau )\) vanish in \(U_\epsilon ^{L,\delta }\), and \(g_j(x,y,t)= {\widehat{w}}_j(x,y,t)\) for \(t>0\),

$$\begin{aligned} \begin{aligned} 0 =&~\sum _{k=1}^n \int _{U_\epsilon ^{L,\delta }}G^x_{ki}(z, t-\tau )\left[ (\partial _{\tau }-\Delta _z)G^y_{kj}+\partial _{z_k} {\widehat{w}}^y_j \right] (z,\tau )\,dz\,d\tau \\&~-\sum _{k=1}^n \int _{U_\epsilon ^{L,\delta }}G^y_{kj}(z,\tau )\left[ (-\partial _{\tau }-\Delta _z)G^x_{ki}+\partial _{z_k} {{\widehat{w}}^x_i} \right] (z,t-\tau )\,dz\,d\tau \\ =&~\sum _{k=1}^n\left( \int _\delta ^\epsilon \int _{|z-y|>\epsilon }+\int _\epsilon ^{t-\epsilon }\int _{|z|<L}+\int _{t-\epsilon }^{t-\delta }\int _{|z-x|>\epsilon }\right) [\cdots ]\,dz\,d\tau . \end{aligned} \end{aligned}$$

By integration by parts, \(G^y_{kj}(z',0,\tau )=0\), \(G^y_{kj}(z,0_+)=\partial _kF_j^y(z)\) if \(y\not =z\), \(\sum _{k=1}^n \partial _{z_k}G^x_{kj}=0\), and taking the limits \(L\rightarrow \infty \) and \(\delta \rightarrow 0_+\) (with \(\epsilon >0\) fixed), we get

$$\begin{aligned} \sum _{k=1}^n{} & {} \left[ -\int _{|z-y|<\epsilon }G^x_{ki}(z,t-\epsilon )G^y_{kj}(z,\epsilon )\,dz-\int _{|z-y|=\epsilon }G^x_{ki}(z,t)F^y_j(z)\nu _k\,dS_z\right. \nonumber \\{} & {} +\int _{|z-x|<\epsilon }G^x_{ki}(z,\epsilon )G_{kj}^y(z,t-\epsilon )\,dz+\int _{|z-x|=\epsilon }G_{kj}^y(z,t)F^x_i(z)\nu _k\,dS_z\nonumber \\{} & {} -\int _0^\epsilon \int _{|z-y|=\epsilon }\left[ G^x_{ki}(z,t-\tau )\nabla _zG^y_{kj}(z,\tau )-G^y_{kj}(z,\tau )\nabla _zG^x_{ki}(z,t-\tau ) \right] \cdot \nu _z\,dS_zd\tau \nonumber \\{} & {} -\int _{t-\epsilon }^t\int _{|z-x|=\epsilon }\left[ G^x_{ki}(z,t-\tau )\nabla _zG^y_{kj}(z,\tau )-G^y_{kj}(z,\tau )\nabla _zG^x_{ki}(z,t-\tau ) \right] \cdot \nu _z\,dS_zd\tau \nonumber \\{} & {} +\int _0^\epsilon \int _{|z-y|=\epsilon }\left[ G^x_{ki}(z,t-\tau ){\widehat{w}}^y_j(z,\tau )-G^y_{kj}(z,\tau ){\widehat{w}}^x_i(z,t-\tau ) \right] \nu _k\,dS_zd\tau \nonumber \\{} & {} \left. +\int _{t-\epsilon }^t\int _{|z-x|=\epsilon }\left[ G^x_{ki}(z,t-\tau )\widehat{w}^y_j(z,\tau )-G^y_{kj}(z,\tau )\widehat{w}^x_i(z,t-\tau ) \right] \nu _k\,dS_zd\tau \right] =0. \end{aligned}$$
(7.7)

Note that the above integrals are over finite regions. We can take the limit \(\delta \rightarrow 0_+\) because in these regions we do not evaluate \(G_{kj}^y(z,\tau )\) and \({\widehat{w}}_j^y(z,\tau )\) at their singularity \((y,0)\), nor \(G_{ki}^x(z,t-\tau )\) and \(\widehat{w}_i^x(z,t-\tau )\) at their singularity \((x,t)\). To justify the limits \(L \rightarrow \infty \), we first need to show that the far-field integrals

$$\begin{aligned} \begin{aligned} J_1&= \int _{{\mathbb {R}}^n_+\cap \{|z|=L\}}\left[ G^x_{ki}(z,t)F^y_j(z)-G_{kj}^y(z,t)F^x_i(z) \right] \nu _k\,dS_z\\ J_2&=\int _0^t\int _{{\mathbb {R}}^n_+\cap \{|z|=L\}}\left[ G^x_{ki}(z,t-\tau )\nabla _zG^y_{kj}(z,\tau )-G^y_{kj}(z,\tau )\nabla _zG^x_{ki}(z,t-\tau ) \right] \cdot \nu _z\,dS_zd\tau \\ J_3&=\int _0^t\int _{{\mathbb {R}}^n_+\cap \{|z|=L\}} \left[ G^x_{ki}(z,t-\tau )\widehat{w}^y_j(z,\tau )-G^y_{kj}(z,\tau ){\widehat{w}}^x_i(z,t-\tau ) \right] \nu _k\,dS_zd\tau \end{aligned} \end{aligned}$$

vanish as \(L\rightarrow \infty \). By (1.12),

$$\begin{aligned} |J_1| {\lesssim }\int _{{\mathbb {R}}^n_+\cap \{|z|=L\}} L^{-n}\,L^{1-n} \,dS_z = CL^{-n}\rightarrow 0. \end{aligned}$$

For \(J_2\) with \(L>2(|x|+|y|+\sqrt{t})\), the worst estimate of \(\nabla _zG_{kj}^y(z,\tau )\) by (1.12) is \(L^{-n} (z_n+\sqrt{\tau })^{-1} \log \frac{L}{\sqrt{\tau }}\). Thus

$$\begin{aligned}{} & {} |J_2| {\lesssim }\int _0^t \int _{{\mathbb {R}}^n_+\cap \{|z|=L\}}L^{-n}\,L^{-n} \tau ^{-1/2} (\log L + |\log \tau | \mathbb {1}_{\tau <1} )\,dS_z d\tau \\ {}{} & {} \lesssim \sqrt{t}\,L^{-(n+1)}\log L \rightarrow 0. \end{aligned}$$

For the integral \(J_3\), by (1.14) with \(r=\min (x_n,y_n)>0\),

$$\begin{aligned} |J_3| {\lesssim }\int _0^t \int _{{\mathbb {R}}^n_+\cap \{|z|=L\}} L^{-n} \tau ^{-1/2} \left[ \frac{1}{L^{n}} \log \frac{L}{z_n} + \frac{1}{L^{n-1}r} \right] \,dS_z d\tau . \end{aligned}$$

Using

$$\begin{aligned} \int _{|z|=L,\, z_n<1} |\log z_n| \,dS_z= & {} \int _0^1\!\int _{|z'|={\sqrt{L^2-z_n^2}}} |\log z_n| \, dS_{z'} dz_n\\\lesssim & {} \int _0^1L^{n-2} \, |\log z_n|\,dz_n {\lesssim }L^{n-2}, \end{aligned}$$

we get

$$\begin{aligned} |J_3| {\lesssim }\sqrt{t} \left( L^{-n-1}\log L + L^{-n-2} + L^{-n} r^{-1} \right) \rightarrow 0,\quad \text {as } L \rightarrow \infty . \end{aligned}$$

We also need to show that the boundary integrals similar to \(J_1\), \(J_2\) and \(J_3\) at \(z_n=1/L\) (instead of \(|z|=L\)) vanish as \(L \rightarrow \infty \). This is clear for \(J_1\) and \( J_2 \) since \(G_{ki}^x(z',0,t)=0\) and the factors \(F_j^y\) and \(\nabla _z G^y_{kj}\) are bounded near \(z_n=0\). For \(J_3\), estimate (1.14) of the factor \({\widehat{w}}_j^y\) has a log singularity \(\log z_n\), and we use the boundary vanishing estimate (1.20) of \(G_{ki}^x\),

$$\begin{aligned} \begin{aligned} |J_3|{\lesssim }&\int _0^{t} \int _{z_n = 1/L} \frac{z_n \log \left( e+\frac{|x^*-z|}{\sqrt{t-\tau }} \right) }{\sqrt{t-\tau }(|z'-x'|+|z_n-x_n|+\sqrt{t-\tau })^n}\\&\cdot \tau ^{-\frac{1}{2}}\,\frac{1}{(|z'-y'|+z_n+y_n+\sqrt{\tau })^n}\,\log \left( 1+\frac{|z'-y'|+y_n+\sqrt{\tau }}{z_n} \right) dS_zd\tau , \end{aligned} \end{aligned}$$

which vanishes as \(L\rightarrow \infty \). Note that the proof of the base case (no derivatives) of (1.20), to be given in Sect. 8, does not rely on the symmetry, so there is no circular reasoning.

The above arguments establish (7.7).

Now take \(\epsilon \rightarrow 0\). Using (7.5) and (7.6), the identity (7.7) becomes

$$\begin{aligned} \begin{aligned}&\lim _{\epsilon \rightarrow 0_+}\sum _{k=1}^n\left[ -\int _{|z-y|<\epsilon }G^x_{ki}(z,t-\epsilon )G^y_{kj}(z,\epsilon )\,dz+\int _{|z-x|<\epsilon }G^x_{ki}(z,\epsilon )G^y_{kj}(z,t-\epsilon )\,dz\right. \\&\quad -\int _0^\epsilon \int _{|z-y|=\epsilon }G^y_{kj}(z,\tau ){{\widehat{w}}^x_i} (z,t-\tau )\nu _k\,dS_zd\tau \\&\quad \left. +\int _{t-\epsilon }^t\int _{|z-x|=\epsilon }G^x_{ki}(z,t-\tau )\widehat{w}^y_j (z,\tau )\nu _k\,dS_zd\tau \right] -G^x_{ji}(y,t) +G^y_{ij}(x,t) =0. \end{aligned} \end{aligned}$$
(7.8)

The first two terms tend to zero as \(\epsilon \rightarrow 0_+\) for the same reason as in (7.4). Moreover, since \({\widehat{w}}^x_i(z,t-\tau )\) is uniformly bounded (independent of \(\epsilon \)) for \((z,\tau )\in \{(z,\tau ):|z-y|=\epsilon ,\,0<\tau <\epsilon \}\) by (1.14), we obtain from (1.12) that

$$\begin{aligned} \begin{aligned}&\left| \int _0^\epsilon \!\int _{|z-y|=\epsilon }G^y_{kj}(z,\tau ){{\widehat{w}}^x_i}(z,t-\tau )\nu _k\,dS_zd\tau \right| \lesssim \int _0^\epsilon \!\int _{|z-y|=\epsilon }\frac{1}{(|z-y|+\sqrt{\tau })^{n}}\,dS_zd\tau \\&\quad \lesssim \int _0^\epsilon \frac{1}{(\epsilon +\sqrt{\tau })^{n}}\,\epsilon ^{n-1}\,d\tau \lesssim \epsilon + \delta _{n2} \epsilon \log \frac{1}{\epsilon }\rightarrow 0\ \ \ \text { as }\epsilon \rightarrow 0_+. \end{aligned} \end{aligned}$$
(7.9)
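
For the last bound, one may split the time integral at \(\tau =\epsilon ^2\); a brief sketch, valid for \(0<\epsilon <1\):

$$\begin{aligned} \int _0^\epsilon \frac{\epsilon ^{n-1}}{(\epsilon +\sqrt{\tau })^{n}}\,d\tau \le \epsilon ^{n-1}\left( \int _0^{\epsilon ^2} \epsilon ^{-n}\,d\tau + \int _{\epsilon ^2}^{\epsilon } \tau ^{-\frac{n}{2}}\,d\tau \right) \lesssim \epsilon ^{n-1}\left( \epsilon ^{2-n} + \delta _{n2}\log \frac{1}{\epsilon } \right) = \epsilon + \delta _{n2}\, \epsilon \log \frac{1}{\epsilon }. \end{aligned}$$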

Similarly, \(\int _{t-\epsilon }^t\int _{|z-x|=\epsilon }G^x_{ki}(z,t-\tau )\widehat{w}^y_j(z,\tau )\nu _k\,dS_zd\tau \) goes to zero as \(\epsilon \rightarrow 0_+\). By (7.4) and (7.9), the equation (7.8) turns into

$$\begin{aligned} \begin{aligned} -G^x_{ji}(y,t)+G^y_{ij}(x,t)=0. \end{aligned} \end{aligned}$$

This completes the proof of Proposition 1.4, i.e., the symmetry (7.1) of the Green tensor. \(\square \)

Remark 7.1

We can in fact derive an alternative estimate of \(\widehat{w}_j^y(z,\tau )\) which has no singularity as \(z_n \rightarrow 0_+\) by estimating (3.28) instead of (3.21), cf. Remark 3.5(i). With that estimate, the vanishing estimate (1.20) would not be needed here. We do not present the proof in this way since it is more involved, in particular in the case \(n=2\).

8 The Main Estimates of the Green Tensor

In this section we prove the main estimates in Theorems 1.5 and 1.6.

Proof of Theorem 1.5

From (1.12) we have that

$$\begin{aligned} \begin{aligned} |\partial _{x',y'}^l\partial _{x_n}^k\partial _{y_n}^q\partial _t^mG_{ij}(x,y,t)|&\lesssim \frac{1}{(|x-y|^2+t)^{\frac{l+k+q+n}{2}+m}}\\&\quad +\frac{\textrm{LN}_{ijkq}^{mn}}{t^{m}(|x^*-y|^2+t)^{{\frac{ l+k-k_i+n }{2}}}{ (x_n^2+t)^{\frac{k_i}{2}} }(y_n^2+t)^{\frac{q}{2}}}, \end{aligned} \end{aligned}$$
(8.1)

where \(k_i = (k - \delta _{in})_+\), and

$$\begin{aligned} \textrm{LN}_{ijkq}^{mn}&:= 1+ \delta _{n2}\mu _{ik}^m\left[ \log (\nu _{ijkq}^m|x'-y'|+x_n+y_n+\sqrt{t}) - \log (\sqrt{t}) \right] ,\\ \mu _{ik}^m&=1-(\delta _{k0}+\delta _{k1}\delta _{in})\delta _{m0},\quad \nu _{ijkq}^m = \delta _{q0} \delta _{jn} \delta _{k(1+\delta _{in})} \delta _{m0}+\delta _{m>0}. \end{aligned}$$
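
For orientation, two sample evaluations of the indicator factors follow directly from these definitions: for \(m\ge 1\) one has \(\mu _{ik}^m=1\) and \(\nu _{ijkq}^m=1\), while for \(m=0\) and \(k=0\) one has \(\mu _{i0}^0=0\), so that

$$\begin{aligned} \textrm{LN}_{ijkq}^{mn} = 1+ \delta _{n2} \log \frac{|x'-y'|+x_n+y_n+\sqrt{t}}{\sqrt{t}} \quad (m\ge 1), \qquad \textrm{LN}_{ij0q}^{0n} = 1 \quad (m=0,\ k=0). \end{aligned}$$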

On the other hand, by the symmetry of the Green tensor (Proposition 1.4) and (1.12),

$$\begin{aligned} \begin{aligned}&|\partial _{x',y'}^l\partial _{x_n}^k\partial _{y_n}^q\partial _t^mG_{ij}(x,y,t)|=|\left( \partial _{x',y'}^l\partial _{Y_n}^k\partial _{X_n}^q\partial _t^mG_{ji} \right) (y,x,t)|\\&\quad \lesssim \frac{1}{(|x-y|^2+t)^{\frac{l+k+q+n}{2}+m}} +\frac{\textrm{LN}_{jiqk}^{mn}}{t^{m}(|x^*-y|^2+t)^{{ \frac{l+q-q_j+n}{2} }}{ (y_n^2+t)^{\frac{q_j}{2}} }(x_n^2+t)^{\frac{k}{2}}}, \end{aligned} \end{aligned}$$
(8.2)

where \(q_j = (q-\delta _{jn})_+\), \(\partial _{X_n}\) denotes the partial derivative in the n-th variable, and \(\partial _{Y_n}\) denotes the partial derivative in the 2n-th variable. The combination of (8.1) and (8.2) gives

$$\begin{aligned} \begin{aligned} |\partial _{x',y'}^l \partial _{x_n}^k \partial _{y_n}^q \partial _t^m G_{ij}(x,y,t)|&\lesssim \frac{1}{(|x-y|^2+t)^{\frac{l+k+q+n}{2}+m}} \\ {}&\quad +\frac{\textrm{LN}_{ijkq}^{mn}+\textrm{LN}_{jiqk}^{mn}}{t^{m}(|x^*-y|^2+t)^{\frac{l+k-k_i+q-q_j+n}{2}}(x_n^2+t)^{\frac{k_i}{2}}(y_n^2+t)^{\frac{q_j}{2}} }. \end{aligned} \end{aligned}$$

This shows (1.18) and completes the proof of Theorem 1.5. \(\square \)

We next show the boundary vanishing of derivatives of \(G_{ij}\) at \(x_n=0\) or \(y_n=0\).

Proof of Theorem 1.6

Denote

$$\begin{aligned} \textrm{LN}= {\textstyle \sum _{k=0}^1}( \textrm{LN}_{ijkq}^{mn} + \textrm{LN}_{jiqk}^{mn})(x,y,t) . \end{aligned}$$

By \(\partial _{x',y'}^l\partial _{y_n}^q\partial _t^mG_{ij}|_{x_n=0}=0\) and (1.18) with \(k=1\), we have

$$\begin{aligned} \begin{aligned}&\left| \partial _{x',y'}^l\partial _{y_n}^q \partial _t^mG_{ij}(x,y,t)\right| \\ {}&\le \int _0^{x_n}\left| \partial _{x',y'}^l\partial _{x_n}\partial _{y_n}^q\partial _t^mG_{ij}(x',z_n,y,t)\right| \,dz_n \\&\lesssim \int _0^{x_n}\left[ \frac{1}{(|x'-y'|^2+|z_n-y_n|^2+t)^{\frac{l+q+n+1}{2}+m}}\right. \\&\quad \left. +\frac{\textrm{LN}}{t^{m}(|x'-y'|^2+(z_n+y_n)^2+t)^{ { \frac{l+q-q_j+n}{2} }}{ (z_n^2+t)^{\frac{1}{2} } (y_n^2 + t)^{\frac{q_j}{2}}} }\right] \,dz_n \\&=: I_1 + I_2. \end{aligned} \end{aligned}$$
(8.3)

Above we have used that \(\textrm{LN}_{ijkq}^{mn}(x',z_n,y,t)\) is nondecreasing in \(z_n\).

We first estimate \(I_1\).

Case 1. If \(3x_n<y_n\), then \(|z_n-y_n|>\frac{1}{2} (x_n + y_n)\) and \(z_n+y_n>\frac{1}{4}(x_n+y_n)\) for \(0<z_n<x_n\). Thus, (8.3) gives

$$\begin{aligned} \begin{aligned} I_1&\lesssim \frac{x_n}{(|x-y^*|^2+t)^{\frac{l+q+n+1}{2}+m}}. \end{aligned} \end{aligned}$$

Case 2. If \(y_n<3x_n < \frac{1}{2} \left( |x'-y'|+y_n+\sqrt{t} \right) \), then \(x_n+y_n<\frac{4}{3}(|x'-y'|+\sqrt{t})\), which implies \(|x-y^*|+\sqrt{t}{\lesssim }|x'-y'|+\sqrt{t}\). We drop \(|z_n-y_n|\) in the integrand of (8.3) to get

$$\begin{aligned} \begin{aligned} I_1 \lesssim \frac{x_n}{(|x'-y'|^2+t)^{\frac{l+q+n+1}{2}+m}} \lesssim \frac{x_n}{(|x-y^*|^2+t)^{\frac{l+q+n+1}{2}+m}}. \end{aligned} \end{aligned}$$

Case 3. If \(3 x_n> y_n > \frac{1}{2} \left( |x'-y'|+y_n+\sqrt{t} \right) \) or \(3 x_n> \frac{1}{2} \left( |x'-y'|+y_n+\sqrt{t} \right) > y_n\), then \(x_n\approx |x-y^*|+\sqrt{t}\). By (1.18) with \(k=0\),

$$\begin{aligned} \begin{aligned} I_1 \lesssim \frac{1}{(|x-y|^2+t)^{\frac{l+q+n}{2}+m}} \lesssim \frac{x_n}{(|x-y|^2+t)^{\frac{l+q+n}{2}+m}(|x-y^*|^2+t)^{\frac{1}{2}}} . \end{aligned} \end{aligned}$$

Thus, in all three cases, using \(|x-y^*|\ge |x-y|\), we have

$$\begin{aligned} I_1 \lesssim \frac{x_n}{(|x-y|^2+t)^{\frac{l+q+n}{2}+m}(|x-y^*|^2+t)^{\frac{1}{2}}}. \end{aligned}$$

Next, we estimate \(I_2\).

If \(x_n< y_n+ \frac{1}{2} \sqrt{t}\), then \(|x-y^*|^2+t\approx |x'-y'|^2+y_n^2+t\). We drop \(z_n\) in the integrand of (8.3) to get

$$\begin{aligned} \begin{aligned} I_2&\lesssim \frac{x_n\,\textrm{LN}}{t^{m+\frac{1}{2}}(|x'-y'|^2+y_n^2+t)^{ \frac{l+q-q_j+n}{2} } (y_n^2+t)^{ \frac{q_j}{2}}} \\&\lesssim \frac{x_n\,\textrm{LN}}{t^{m+\frac{1}{2}}(|x-y^*|^2+t)^{ \frac{l+q-q_j+n}{2} } (y_n^2+t)^{\frac{q_j}{2}}}. \end{aligned} \end{aligned}$$

If \(x_n>y_n+ \frac{1}{2}\sqrt{t}\), then \(x_n \gtrsim \sqrt{t}\). By (1.18) with \(k=0\),

$$\begin{aligned} \begin{aligned} I_2&\lesssim \frac{\textrm{LN}}{t^{m}(|x-y^*|^2+t)^{ \frac{l+q-q_j+n}{2} } (y_n^2+t)^{\frac{q_j}{2}}} \lesssim \frac{x_n\,\textrm{LN}}{t^{m+\frac{1}{2}}(|x-y^*|^2+t)^{ \frac{l+q-q_j+n}{2} }(y_n^2+t)^{\frac{q_j}{2}}}. \end{aligned} \end{aligned}$$

Combining the above cases, we derive

$$\begin{aligned} \begin{aligned} \left| \partial _{x',y'}^l\partial _{y_n}^q\partial _t^mG_{ij}(x,y,t)\right|&\lesssim \frac{x_n}{(|x-y|^2+t)^{\frac{l+q+n}{2}+m}(|x-y^*|^2+t)^{\frac{1}{2}}} \\&\quad +\frac{x_n\,\textrm{LN}}{t^{m{+\frac{1}{2}}}(|x-y^*|^2+t)^{{ \frac{l+q-q_j+n}{2} }}{(y_n^2+t)^{\frac{q_j}{2}}}}, \end{aligned} \end{aligned}$$

which is (1.20) for \(\alpha =1\). Since (1.20) also holds for \(\alpha =0\) by (1.18), it holds for all \(0\le \alpha \le 1\). Finally, (1.21) follows from the symmetry. This completes the proof of Theorem 1.6. \(\square \)

9 Mild Solutions of Navier–Stokes Equations

In this section we apply our linear estimates to the construction of mild solutions of Navier–Stokes equations (NS).

9.1 Mild solutions in \(L^q\)

In this subsection we prove Lemma 9.1. It is standard to prove Theorem 1.7 using estimates in Lemma 9.1 and a fixed point argument. We skip the proof of Theorem 1.7.

Lemma 9.1

Let \(n \ge 2\), \(1\le p\le q\le \infty \) and \(1<q\).

  1. (a)

    If \(u_0 \in L^p_\sigma ({\mathbb {R}}^n_+)\) and \(\breve{u}_i(x,t)=\sum _{j=1}^n\int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) u_{0,j}(y) dy\), then

    $$\begin{aligned} \left\| \breve{u}(\cdot ,t) \right\| _{L^q({\mathbb {R}}^n_+)}\le & {} C t^{-\frac{n}{2}(\frac{1}{p}-\frac{1}{q})}\left\| u_0 \right\| _{L^p({\mathbb {R}}^n_+)}, \quad \text {if } u_0 = {{\textbf{P}}}u_0, \end{aligned}$$
    (9.1)
    $$\begin{aligned} L^q\text {-} \lim _{t \rightarrow 0_+} t^{\frac{n}{2}(\frac{1}{p}-\frac{1}{q})} \breve{u}(\cdot ,t)= & {} \left\{ \begin{aligned} 0 , \quad \text {if } 1\le p<q\le \infty , \\ u_0, \quad \text {if } 1<p=q<\infty . \end{aligned}\right. \end{aligned}$$
    (9.2)

(9.2)\(_2\) is also valid for \(p=q=\infty \) if \(u_0 \) is in the \(L^\infty \)-closure of \(C^1_{c,\sigma }(\overline{{\mathbb {R}}^n_+})\).

  2. (b)

    Let \(F \in L^p({\mathbb {R}}^n_+)\), \(a,b\in {\mathbb {N}}_0\), and \(1 \le a+b\). Assume \(b\ge 1\) and \(n\ge 3\) if \(p=q=\infty \). Then

    $$\begin{aligned} { \left\| \int _{{\mathbb {R}}^n_+} \partial _x^a \partial _{y}^b G_{ij}(x,y,t) F(y) dy \right\| _{L^q({\mathbb {R}}^n_+)} \le C t^{-\frac{a+b}{2}-\frac{n}{2}(\frac{1}{p}-\frac{1}{q})} \left\| F \right\| _{L^p({\mathbb {R}}^n_+)}. } \end{aligned}$$
    (9.3)

Proof

We consider (9.1) and decompose \(\breve{u}_i(x,t)\) defined in (6.1) as

$$\begin{aligned} \begin{aligned} \breve{u}_i(x,t)&=\int _{{\mathbb {R}}^n_+} \Gamma (x-y,t) u_{0,i}(y) dy + \int _{{\mathbb {R}}^n_+} G_{ij}^*(x,y,t) u_{0,j}(y) dy \\&= : u^{heat}_i(x,t) + u_i^*(x,t). \end{aligned} \end{aligned}$$

Basic properties of the heat kernel yield

$$\begin{aligned} \Vert u^{heat}\Vert _{L^{q}({\mathbb {R}}_{+}^{n})}\le C t^{\frac{n}{2}(\frac{1}{q}-\frac{1}{p})} \left\| u_0 \right\| _{L^p({\mathbb {R}}_{+}^{n})}. \end{aligned}$$

By (1.10), \(u^*(x,t)\) is bounded by

$$\begin{aligned} \begin{aligned} J_t(x)&= \int _{{\mathbb {R}}^n_+}\frac{e^{-\frac{cy_n^2}{t}}}{(|x'-y'| + x_n+y_n+\sqrt{t})^n} |u_0(y)| \,dy \\&= \int _0^\infty \frac{1}{(|x'| + x_n+y_n+\sqrt{t})^n} *_{\Sigma } |u_0(x',y_n)| e^{-\frac{cy_n^2}{t}}\,dy_n, \end{aligned} \end{aligned}$$

where \(*_\Sigma \) indicates convolution over \(\Sigma \). By Minkowski and Young inequalities,

$$\begin{aligned} \begin{aligned} \left\| J_t(\cdot ,x_n) \right\| _{L^q(\Sigma )}&{\lesssim }\int _0^\infty \left\| \frac{1}{(|x'| + x_n+y_n+\sqrt{t})^n} *_{\Sigma } |u_0(x',y_n)| \right\| _{L^q(\Sigma )} e^{-\frac{cy_n^2}{t}}\,dy_n \\&{\lesssim }\int _0^\infty \left\| \frac{1}{(|x'| + x_n+y_n+\sqrt{t})^n} \right\| _{L^r(\Sigma )} \cdot \\&\quad \left\| u_0(\cdot ,y_n) \right\| _{L^p(\Sigma )} e^{-\frac{cy_n^2}{t}}\,dy_n,\quad \frac{1}{q} + 1 = \frac{1}{r} + \frac{1}{p}, \\&{\lesssim }\int _0^\infty \frac{1}{ (x_n+y_n+\sqrt{t})^{1-(n-1)\left( \frac{1}{q} - \frac{1}{p} \right) }} \cdot \left\| u_0(\cdot ,y_n) \right\| _{L^p(\Sigma )} e^{-\frac{cy_n^2}{t}}\,dy_n. \end{aligned} \end{aligned}$$

By Minkowski inequality again (here we need \(q>1\)),

$$\begin{aligned} \left\| J_t \right\| _{L^q({\mathbb {R}}^n_+)}&=\left\| \left\| J_t(\cdot ,x_n) \right\| _{L^q(\Sigma )} \right\| _{L^q(0,\infty )} \nonumber \\&{\lesssim }\int _0^\infty \left\| \frac{1}{ (x_n+y_n+\sqrt{t})^{1-(n-1)\left( \frac{1}{q} - \frac{1}{p} \right) }} \right\| _{ L^q(x_n\in (0,\infty ))} \nonumber \\&\quad \cdot \left\| u_0(\cdot ,y_n) \right\| _{L^p(\Sigma )} e^{-\frac{cy_n^2}{t}}\,dy_n \nonumber \\&{\lesssim }\int _0^\infty \frac{1}{(y_n+\sqrt{t})^{1-\frac{1}{q} - (n-1)\left( \frac{1}{q} - \frac{1}{p} \right) } } \cdot \left\| u_0(\cdot ,y_n) \right\| _{L^p(\Sigma )} e^{-\frac{cy_n^2}{t}}\,dy_n. \end{aligned}$$
(9.4)
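
The \(x_n\)-integration above is where the assumption \(q>1\) enters; a brief check, writing \(\beta =1-(n-1)(\frac{1}{q}-\frac{1}{p})\ge 1\) and \(B=y_n+\sqrt{t}\):

$$\begin{aligned} \left\| \frac{1}{(x_n+B)^{\beta }} \right\| _{L^q_{x_n}(0,\infty )} = \left( \int _0^\infty \frac{dx_n}{(x_n+B)^{\beta q}} \right) ^{\frac{1}{q}} = \frac{(\beta q-1)^{-\frac{1}{q}}}{B^{\,\beta -\frac{1}{q}}}, \qquad \beta q\ge q>1, \end{aligned}$$

and \(\beta -\frac{1}{q}=1-\frac{1}{q}-(n-1)(\frac{1}{q}-\frac{1}{p})\), which is the exponent in (9.4).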

By Hölder inequality,

$$\begin{aligned} \begin{aligned} \left\| J_t \right\| _{L^q({\mathbb {R}}^n_+)}&{\lesssim }\left\| u_0 \right\| _{L^p({\mathbb {R}}^n_+)} \left( \int _0^\infty \left( \frac{1}{(y_n+\sqrt{t})^{1-\frac{1}{q} - (n-1)\left( \frac{1}{q} - \frac{1}{p} \right) } }\, e^{-\frac{cy_n^2}{t}} \right) ^{\frac{p}{p-1}}\,dy_n \right) ^{\frac{p-1}{p}}\\&{\lesssim }t^{\frac{n}{2}\left( \frac{1}{q} - \frac{1}{p} \right) } \left\| u_0 \right\| _{L^p({\mathbb {R}}^n_+)} \end{aligned} \end{aligned}$$

by the change of variables \(y_n = \sqrt{t} z\). This proves (9.1).
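
To make the last step explicit, a brief sketch of the scaling for \(1<p\) (the endpoint \(p=1\) is analogous, taking the sup in \(y_n\) instead): with \(\gamma =1-\frac{1}{q}-(n-1)(\frac{1}{q}-\frac{1}{p})\) and the substitution \(y_n=\sqrt{t}\,z\),

$$\begin{aligned} \left( \int _0^\infty \left( \frac{e^{-\frac{cy_n^2}{t}}}{(y_n+\sqrt{t})^{\gamma }} \right) ^{\frac{p}{p-1}} dy_n \right) ^{\frac{p-1}{p}} = \left( t^{\frac{1}{2}-\frac{\gamma p}{2(p-1)}} \int _0^\infty \frac{e^{-\frac{cpz^2}{p-1}}}{(z+1)^{\frac{\gamma p}{p-1}}}\,dz \right) ^{\frac{p-1}{p}} = C\, t^{\frac{1}{2}\left( \frac{p-1}{p}-\gamma \right) } = C\, t^{\frac{n}{2}\left( \frac{1}{q}-\frac{1}{p} \right) }, \end{aligned}$$

since \(\frac{p-1}{p}-\gamma = n(\frac{1}{q}-\frac{1}{p})\) and the z-integral converges thanks to the Gaussian factor.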

For (9.2), denote \(\sigma = \frac{n}{2}(\frac{1}{p}-\frac{1}{q})\). If \(1\le p < q \le \infty \), then \(\sigma >0\). For any \(\varepsilon >0\), we can choose \(b \in L^p_\sigma \cap L^q_\sigma \) with \(\left\| u_0 - b \right\| _{L^p} \le \varepsilon \). Let \(v_i(x,t)=\sum _{j=1}^n\int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) b_{j}(y) dy\). Then by (9.1),

$$\begin{aligned} \begin{aligned} t^\sigma \left\| \breve{u}(\cdot ,t) \right\| _{L^q}&\le t^\sigma \left\| v(\cdot ,t) \right\| _{L^q} + t^\sigma \left\| \breve{u}(\cdot ,t)-v(\cdot ,t) \right\| _{L^q} \\&{\lesssim }t^\sigma \left\| b \right\| _{L^q} + \left\| u_0-b \right\| _{L^p} \end{aligned} \end{aligned}$$

which is less than \(C\varepsilon \) for t sufficiently small. This shows (9.2)\(_1\).

If \(1<p=q<\infty \), then for any \(\varepsilon >0\) there is \(M>0\) such that \(\big \Vert \left\| u_0(\cdot ,y_n) \right\| _{L^q(\Sigma )} \mathbb {1}_M\big \Vert _{L^q(0,\infty )} \le \varepsilon \), where \(\mathbb {1}_{M}(y_n) = 1\) if \(\left\| u_0(\cdot ,y_n) \right\| _{L^q(\Sigma )}\ge M\), and \(\mathbb {1}_{M}(y_n) = 0\) otherwise. Then

$$\begin{aligned} \left\| u_0(\cdot ,y_n) \right\| _{L^q(\Sigma )} \le M + \left\| u_0(\cdot ,y_n) \right\| _{L^q(\Sigma )}\mathbb {1}_M. \end{aligned}$$

Applying Hölder inequality to (9.4),

$$\begin{aligned} \begin{aligned} \left\| J_t \right\| _{L^q({\mathbb {R}}^n_+)}&{\lesssim }\big \Vert \left\| u_0(\cdot ,y_n) \right\| _{L^q(\Sigma )} \mathbb {1}_M\big \Vert _{L^q(0,\infty )} + \int _0^\infty \frac{1}{(y_n+\sqrt{t})^{1-1/q} } \cdot M e^{-\frac{cy_n^2}{t}}\,dy_n\\&{\lesssim }\varepsilon + M t^{1/2q} \end{aligned} \end{aligned}$$

which is bounded by \(C\varepsilon \) for t sufficiently small. Since \(u^{heat}(\cdot ,t) \rightarrow u_0\) in \(L^q\) as \( t\rightarrow 0_+\), this shows \(\breve{u}(\cdot ,t) \rightarrow u_0\) in \(L^q\) as \( t\rightarrow 0_+\), which is (9.2)\(_2\).

If \(p=q=\infty \) and \(u_0 \) is in the \(L^\infty \)-closure of \(C^1_{c,\sigma }(\overline{{\mathbb {R}}^n_+})\), for any \(\varepsilon >0\), we can choose \(b \in C^1_{c,\sigma }(\overline{{\mathbb {R}}^n_+})\) with \(\left\| u_0 - b \right\| _{L^\infty } \le \varepsilon \). Let \(v_i(x,t)=\sum _{j=1}^n\int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) b_{j}(y) dy\). By Lemma 6.1, \(\lim _{t \rightarrow 0_+}\left\| v_i(\cdot ,t) - b \right\| _{L^\infty ({\mathbb {R}}^n_+)}=0\). Then by (9.1),

$$\begin{aligned} \left\| \breve{u}(\cdot ,t)-u_0 \right\| _{L^\infty }\le \left\| \breve{u}(\cdot ,t)-v(\cdot ,t) \right\| _{L^\infty } + \left\| v(\cdot ,t) -b \right\| _{L^\infty } + \left\| b-u_0 \right\| _{L^\infty } {\lesssim }\varepsilon + o(1), \end{aligned}$$

which is less than \(C\varepsilon \) for t sufficiently small. This shows the remark after (9.2)\(_2\).

For (9.3), denote

$$\begin{aligned} w(x,t) = \int _{{\mathbb {R}}^n_+} \partial _{x}^a\partial _{y}^b G_{ij}(x,y,t) F(y) dy ,\quad m=a+b. \end{aligned}$$

By Theorem 1.5,

$$\begin{aligned} |\partial _{x}^a\partial _{y}^b G_{ij}(x,y,t)|\lesssim \frac{1}{(|x-y|^2+t)^{\frac{n+m}{2}}} +\frac{1+ \delta _{n2} \log (1+ \frac{|x^*-y|}{\sqrt{t}})}{(|x^*-y|^2+t)^{\frac{n}{2}} { (x_n^2+t)^{\frac{a}{2}} (y_n^2+t)^{\frac{b}{2}} }}. \end{aligned}$$

Using

$$\begin{aligned} \frac{\log (e+r)}{(e+r)^n} \le \frac{\log (e+s)}{(e+s)^n} , \quad \forall 0\le s\le r, \end{aligned}$$

we have

$$\begin{aligned} |\partial _{x}^a\partial _{y}^b G_{ij}(x,y,t)| \lesssim \frac{1}{(|x-y|^2+t)^{\frac{n+m}{2}}} + \frac{\log ^{\delta _{n2}}(e+ \frac{|x-y|}{\sqrt{t}})}{(|x-y|^2+t)^{\frac{n}{2}} { (x_n^2+t)^{\frac{a}{2}} (y_n^2+t)^{\frac{b}{2}} }}. \end{aligned}$$
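
The monotonicity of \(r\mapsto \log (e+r)/(e+r)^n\) used above can be verified by differentiation; a one-line check:

$$\begin{aligned} \frac{d}{dr}\, \frac{\log (e+r)}{(e+r)^n} = \frac{1-n\log (e+r)}{(e+r)^{n+1}} \le 0 \qquad \text {for } r\ge 0,\ n\ge 1, \end{aligned}$$

since \(\log (e+r)\ge 1\) for \(r\ge 0\).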

Extend F(y) to \(y \in {\mathbb {R}}^n\) by zero for \(y_n<0\). We have

$$\begin{aligned} \begin{aligned} |w(x,t)|&{\lesssim }\int _{{\mathbb {R}}^n} H_t^0(x-y) |F(y)|dy \\&\quad + \int _{{\mathbb {R}}^n} H_t(x-y^*) |F(y)|\, \frac{1}{(y_n^2+t)^{\frac{b}{2}}}\, dy\, \frac{1}{(x_n^2+t)^{\frac{a}{2}}}\\&:= w_1(x,t) + w_2(x,t), \end{aligned} \end{aligned}$$
(9.5)

where

$$\begin{aligned} H_{t}^0(x)= & {} t^{-\frac{n+m}{2}} H_1^0\left( \frac{x}{\sqrt{t}} \right) ,\quad H_1^0(x) = \frac{1}{(|x|^2+1)^{\frac{n+m}{2}}} \in L^1\cap L^\infty ({\mathbb {R}}^n), \end{aligned}$$
(9.6)
$$\begin{aligned} H_{t}(x)= & {} t^{-\frac{n}{2}} H_1\left( \frac{x}{\sqrt{t}} \right) ,\quad H_1(x) = \frac{\log ^{\delta _{n2}}(e+ |x|)}{(|x|^2+1)^{\frac{n}{2}} } . \end{aligned}$$
(9.7)

By Young’s convolution inequality with \(\frac{1}{q}=\frac{1}{r}+\frac{1}{p}-1\),

$$\begin{aligned} \begin{aligned} \Vert w_1(\cdot ,t)\Vert _{L^q} {\lesssim }\left\| H_t^0 \right\| _{L^r({\mathbb {R}}^n)} \left\| F \right\| _{L^p} = t^{-\frac{m}{2}+\frac{n}{2}(\frac{1}{q}-\frac{1}{p})} \left\| H_1^0 \right\| _{L^r({\mathbb {R}}^n)} \left\| F \right\| _{L^p}. \end{aligned} \end{aligned}$$
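
The scaling used in the equality above is a one-line check (with \(\frac{1}{r}=1+\frac{1}{q}-\frac{1}{p}\)):

$$\begin{aligned} \left\| H_t^0 \right\| _{L^r({\mathbb {R}}^n)} = t^{-\frac{n+m}{2}} \left( \int _{{\mathbb {R}}^n} \left| H_1^0\left( \frac{x}{\sqrt{t}} \right) \right| ^r dx \right) ^{\frac{1}{r}} = t^{-\frac{n+m}{2}+\frac{n}{2r}} \left\| H_1^0 \right\| _{L^r({\mathbb {R}}^n)} = t^{-\frac{m}{2}+\frac{n}{2}\left( \frac{1}{q}-\frac{1}{p} \right) } \left\| H_1^0 \right\| _{L^r({\mathbb {R}}^n)}, \end{aligned}$$

since \(\frac{n}{2r} = \frac{n}{2}+\frac{n}{2}(\frac{1}{q}-\frac{1}{p})\).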

It remains to estimate \(w_2(\cdot ,t)\).

If \(p<q\), we bound the factors \((y_n^2+t)^{-\frac{b}{2}}\) and \((x_n^2+t)^{-\frac{a}{2}}\) in (9.5) by \(t^{-\frac{b}{2}}\) and \(t^{-\frac{a}{2}}\), replace \(H_t(x-y^*)\) by \(H_t(x-y)\), and apply Young’s convolution inequality with \(\frac{1}{q} = \frac{1}{r} + \frac{1}{p} - 1\) to get

$$\begin{aligned} \left\| t^{-\frac{m}{2}} \int _{{\mathbb {R}}^n} H_t(x-y) |F(y)|\, dy \right\| _{L^q}&{\lesssim }&t^{-\frac{m}{2}} \left\| H_t \right\| _{L^r({\mathbb {R}}^n)} \left\| F \right\| _{L^p} \\&{\lesssim }&t^{-\frac{m}{2} + \frac{n}{2}\left( \frac{1}{q} - \frac{1}{p} \right) } \left\| H_1 \right\| _{L^r({\mathbb {R}}^n)} \left\| F \right\| _{L^p}. \end{aligned}$$

Note that \(H_1\in L^r\) since \(r>1\) when \(p<q\). Thus, we get for \(p<q\) that

$$\begin{aligned} \left\| w_2(\cdot ,t) \right\| _{L^q} {\lesssim }t^{-\frac{m}{2} + \frac{n}{2}\left( \frac{1}{q} - \frac{1}{p} \right) } \left\| F \right\| _{L^p}. \end{aligned}$$

If \(p=q=\infty \), then by hypothesis \(b\ge 1\) and \(n\ge 3\), so there is no log term in (9.7). In this case,

$$\begin{aligned} \begin{aligned} w_2(x,t)&{\lesssim }\int _{{\mathbb {R}}^n_+} H_t(x-y^*) |F(y)|\, \frac{1}{(y_n^2 + t)^{\frac{b}{2}}}\, dy\, \frac{1}{(x_n^2 + t)^{\frac{a}{2}}}\\&{\lesssim }\left\| F \right\| _{L^\infty }\, \frac{1}{(x_n^2 + t)^{\frac{a}{2}}} \int _{{\mathbb {R}}^n_+} \frac{1}{(|x-y^*|^2 + t)^{\frac{n}{2}} (y_n^2 + t)^{\frac{b}{2}}}\, dy\\&{\lesssim }\left\| F \right\| _{L^\infty }\, \frac{1}{(x_n^2 + t)^{\frac{a}{2}}} \int _0^\infty \frac{1}{(x_n^2 + y_n^2 + t)^{\frac{1}{2}} (y_n^2 + t)^{\frac{b}{2}}}\, dy_n\\&\le \left\| F \right\| _{L^\infty }\, \frac{1}{(x_n^2 + t)^{\frac{a}{2}}} \int _0^\infty \frac{1}{( y_n^2 + t)^{\frac{b+1}{2}} }\, dy_n {\lesssim }t^{-\frac{m}{2}} \left\| F \right\| _{L^\infty }. \end{aligned} \end{aligned}$$

This proves (9.3). \(\square \)

Remark 9.1

Let \(1\le p<q\le \infty \) and \(u_0 \in L^p_\sigma ({\mathbb {R}}^n_+)\). We claim that

$$\begin{aligned} { u^L_i (x,t)= \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t) u_{0,j}(y)\, dy,\quad \widehat{u}^L_i (x,t)= \int _{{\mathbb {R}}^n_+} {\widehat{G}}_{ij}(x,y,t) u_{0,j}(y)\, dy, } \end{aligned}$$
(9.8)

are also defined in \(L^q({\mathbb {R}}^n_+)\) for fixed \(t>0\) and (9.1) holds for \(u^L\) and \({\widehat{u}}^L\):

$$\begin{aligned} { \left\| u^L(\cdot ,t) \right\| _{L^q({\mathbb {R}}^n_+)} + \left\| {\widehat{u}}^L(\cdot ,t) \right\| _{L^q({\mathbb {R}}^n_+)} \le C t^{-\frac{n}{2}(\frac{1}{p}-\frac{1}{q})}\left\| u_0 \right\| _{L^p({\mathbb {R}}^n_+)}. } \end{aligned}$$
(9.9)

Our claim does not include the case \(p=q\) as in (9.1). For \(u^L\), this is because \(|G_{ij}(x,y,t)|{\lesssim }(|x-y|+\sqrt{t})^{-n}\) and, by Young’s convolution inequality with \(1+\frac{1}{q} = \frac{1}{r} + \frac{1}{p}\),

$$\begin{aligned} \begin{aligned} \left\| u^L(\cdot ,t) \right\| _{L^{q}}&{\lesssim }\left\| (|x|+\sqrt{t})^{-n} * u_0 \right\| _{L^q} {\lesssim }\Big (\int _{{\mathbb {R}}^n}(|x|+\sqrt{t})^{-nr}dx\Big )^{\frac{1}{r}}\left\| u_0 \right\| _{L^p}\\&{\lesssim }t^{-\frac{n}{2}(\frac{1}{p}-\frac{1}{q})}\left\| u_0 \right\| _{L^p} \end{aligned} \end{aligned}$$

where we used \(q>p\) so that \(r>1\). For \({\widehat{u}}^L\), by (1.15), we can decompose

$$\begin{aligned} \begin{aligned} {\widehat{u}}_i^L(x,t)&=\int _{{\mathbb {R}}^n_+} \Gamma (x-y,t) u_{0,i}(y) dy \\ {}&\quad + \int _{{\mathbb {R}}^n_+} {\widehat{G}}_{ij}^*(x,y,t) u_{0,j}(y) dy = : u^{heat}_i(x,t) + {\widehat{u}}_i^*(x,t), \end{aligned} \end{aligned}$$

where \({\widehat{G}}_{ij}^*(x,y,t) = -\delta _{ij} \Gamma (x-y^*,t) - 4 \delta _{jn}C_i(x,y,t)\). The first term \(u^{heat}\) satisfies (9.9) by the basic property of heat kernel. The second term \({\widehat{u}}^*(x,t)\) is bounded by

$$\begin{aligned} |{\widehat{u}}^*(x,t)|{\lesssim }\int _{{\mathbb {R}}^n_+}\frac{e^{-\frac{cy_n^2}{t}}}{(|x'-y'| + x_n+y_n+\sqrt{t})^{n-1} (y_n+\sqrt{t})} |u_0(y)| \,dy \end{aligned}$$

using (5.11). Similar to the proof of (9.1), we can first apply Minkowski and Young inequalities in \(x'\) (using \(q>p\) so that \(r>1\)), and then Minkowski and Hölder inequalities in \(x_n\) to bound \(\left\| {\widehat{u}}^*(\cdot ,t) \right\| _{L^q({\mathbb {R}}^n_+)}\) by \(t^{\frac{n}{2}\left( \frac{1}{q} - \frac{1}{p} \right) } \left\| u_0 \right\| _{L^p({\mathbb {R}}^n_+)} \). The above shows (9.9) for \(1\le p<q\le \infty \) and \(u_0 \in L^p_\sigma ({\mathbb {R}}^n_+)\).

Remark 9.2

The following extends Theorem 1.2. Assume \(u_0\in L^p_{\sigma }({\mathbb {R}}^n_+)\), \(1\le p<\infty \), and \(u^L\), \(\breve{u}^L\) and \({\widehat{u}}^L\) are defined as in (9.8). There exist \(u^k_0\in C^1_{c,\sigma }(\overline{{\mathbb {R}}^n_+})\) such that \(u^k_0\rightarrow u_0\) in \(L^p({\mathbb {R}}^n_+)\) as \(k\rightarrow \infty \). Let

$$\begin{aligned} u^k_i (x,t)= & {} \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t) u_{0,j}^k(y)\, dy,\quad \breve{u}^k_i (x,t)= \int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) u_{0,j}^k(y)\, dy,\\ {\widehat{u}}^k_i (x,t)= & {} \int _{{\mathbb {R}}^n_+} {\widehat{G}}_{ij}(x,y,t) u_{0,j}^k(y)\, dy. \end{aligned}$$

They are equal by Theorem 1.2 since \(u_{0}^k\in C^1_{c,\sigma }(\overline{{\mathbb {R}}^n_+})\). On the other hand, by (9.1) and (9.9),

$$\begin{aligned}{} & {} \left\| u^L(t) - u^k(t) \right\| _{ L^q} +\left\| \breve{u}^L(t) - \breve{u}^k(t) \right\| _{ L^q} +\left\| \widehat{u}^L(t) - \widehat{u}^k(t) \right\| _{ L^q}\\{} & {} \quad \le C t^{-\frac{n}{2}(\frac{1}{p}-\frac{1}{q})} \left\| u_{0}-u_{0}^k \right\| _{L^p} \end{aligned}$$

which vanishes as \(k\rightarrow \infty \). This shows \(u^L(t) = \breve{u}^L(t) =\widehat{u}^L(t) \) in \(L^q\) for any \(q\in (p, \infty ]\) and fixed t.

For \(u_0\in L^\infty _{\sigma }({\mathbb {R}}^n_+)\) and \(u_0\) in the \(L^\infty \)-closure of \(C^1_{c,\sigma }(\overline{{\mathbb {R}}^n_+})\), we can also show \(u^L(t) = \breve{u}^L(t)\) (but we do not know about \(\widehat{u}^L(t) \)). We use the boundary vanishing (1.20) to get

$$\begin{aligned} \begin{aligned} |u^L(x,t) - u^k(x,t)|&=\left| \int _{{\mathbb {R}}^n_+} G_{ij}(x,y,t) \left( u_{0,j}(y) - u_{0,j}^k(y) \right) dy \right| \le C_1 \left\| u_0 - u_0^k \right\| _{L^\infty } \end{aligned} \end{aligned}$$

where

$$\begin{aligned} C_1&= \int _{{\mathbb {R}}^n_+} \frac{x_n}{(|x-y|+\sqrt{t})^n (x_n+y_n+\sqrt{t})}\, dy\nonumber \\&{\lesssim }\int _0^\infty \frac{x_n}{(|x_n-y_n|+\sqrt{t}) (x_n+y_n+\sqrt{t})}\, dy_n\nonumber \\&{\lesssim }\int _0^{2(x_n+\sqrt{t})}\frac{x_n}{(|x_n-y_n|+\sqrt{t}) (x_n+\sqrt{t})}\, dy_n \nonumber \\&\quad + \int _{2(x_n+\sqrt{t})}^ \infty \frac{x_n}{y_n^2}\, dy_n {\lesssim }\ln (e+\frac{x_n}{\sqrt{t}}). \end{aligned}$$
(9.10)

Hence \(|u^L(x,t) - u^k(x,t)|\) converges to 0 as \(k\rightarrow \infty \), uniformly in \(\{x_n\le M \sqrt{t}\}\) for any fixed \(t,M>0\). As \(u^k (t)=\breve{u}^k(t)\rightarrow \breve{u}^L(t)\) in \(L^\infty \) by (9.1), this shows \(u^L(x,t)=\breve{u}^L(x,t)\). \(\square \)

9.2 Mild solutions with pointwise decay

In this subsection we prove Theorems 1.8 and 1.9. We first consider Theorem 1.8. Recall that Theorem 1.8 is a direct consequence of [5, Theorem 1] using the estimates in [4, Theorem 1] for \(0<a<n\). For \(a=n\), the hypothesis of [5, Theorem 1] is not satisfied: \((1+|x|+\sqrt{t})^n e^{-tA} u_0 \sim \log (2+t) \not \in L^\infty ({\mathbb {R}}^n_+\times (0,\infty ))\) (see [4, Theorem 1] and (9.11)). Nonetheless, the proof of local existence still works if \(\left\| (1+|x|+\sqrt{t})^n e^{-tA} u_0 \right\| _{L^\infty ({\mathbb {R}}^n_+\times (0,T))}\le C(T)\), which is true for \(u_0\in Y_n\). Theorem 1.8 can be proved using the estimates in Lemma 9.2 below and the same iteration argument as in [5]. We omit its proof and focus on Lemma 9.2.

Lemma 9.2

Let \(n\ge 2\) and \(0 \le a \le n\). For \(u_0\in Y_a\) with \(\mathop {\textrm{div}}\nolimits u_0 =0\) and \(u_{0,n}|_\Sigma =0\),

$$\begin{aligned} { \left\| \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) u_{0,j}(y) dy \right\| _{Y_a} \le C (1 + \delta _{an} \log _+t) \left\| u_0 \right\| _{Y_a}. } \end{aligned}$$
(9.11)

For \(F\in Y_{2a}\),

$$\begin{aligned} { \left\| \int _{{\mathbb {R}}^n_+} \partial _{y_p} G_{ij}(x,y,t) F_{pj}(y)\,dy \right\| _{Y_a} \le C t^{-1/2} \left\| F \right\| _{Y_{2a}}. } \end{aligned}$$
(9.12)

The estimate (9.11) is proved in [4, Theorem 1] with space-time decay (see also [6, Theorem 4.2]), whereas (9.12) is not known in [6] and [4] since the pointwise estimates of the Green tensor \(G_{ij}\) were not available. Instead, they used (1.25) for the bilinear form in Duhamel’s formula when constructing mild solutions.

Note that \(Y_0=L^\infty \) and \(a\le n\) in (9.11) since the decay cannot be faster than that of the Green tensor. The case \(a=0\) is a special case of (9.1). It is similar to [50, Theorem 1.1] which further assumes continuity. We do not assume any boundary condition on \(F_{pj}\). Also note

$$\begin{aligned} \left\| |u|^2 \right\| _{Y_{2a}} = \sup _{x \in {\mathbb {R}}^n_+} |u(x)|^2{\langle x \rangle }^{2a} = \left\| u \right\| _{Y_{a}}^2. \end{aligned}$$

Proof

If \(a=0\), the lemma follows from (9.1) and (9.3) with \(p=q=\infty \). Thus, we consider \(a>0\). For (9.11), write

$$\begin{aligned} \begin{aligned} \sum _{j=1}^n\int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) u_{0,j}(y) dy&=\int _{{\mathbb {R}}^n_+} \Gamma (x-y,t) u_{0,i}(y) dy + \int _{{\mathbb {R}}^n_+} G_{ij}^*(x,y,t) u_{0,j}(y) dy\\&= : u^{heat}_i(x,t) + u_i^*(x,t). \end{aligned} \end{aligned}$$

It is known that for \(0\le a \le n\)

$$\begin{aligned} { \left\| u^{heat} \right\| _{Y_a} {\lesssim }(1+\delta _{an}\log _+ t) \left\| u_0 \right\| _{Y_a}. } \end{aligned}$$
(9.13)

See e.g. [26, Lemma 1] for the case \(n=3\). Its statement corresponds to \(1\le a \le n\) but its proof also works for \(0\le a <1\).

For \(u^*\) with \(|u_0(y)| {\lesssim }{\langle y \rangle }^{-a}\), by (1.10), (for both \(n \ge 3\) and \(n=2\))

$$\begin{aligned} |u^*(x,t)| {\lesssim }J(x)= \int _{{\mathbb {R}}^n_+}\frac{e^{-\frac{cy_n^2}{t}}}{(|x^*-y|^2+t)^{\frac{n}{2}} {\langle y \rangle }^a} \,dy. \end{aligned}$$

Suppose \(0< a<n-1\). By Lemma 2.2,

$$\begin{aligned} \begin{aligned} J {\lesssim }&\int _0^\infty e^{-\frac{cy_n^2}{t}} \int _{\Sigma } \frac{1}{(|x'-y'|+x_n+y_n+\sqrt{t})^n(|y'|+y_n+1)^a}\, dy'dy_n\\ {\lesssim }&\int _0^\infty \left[ \frac{1}{(|x|+y_n+\sqrt{t}+1)^{a+1}} + \frac{e^{-\frac{cy_n^2}{t}}}{(|x|+y_n+\sqrt{t}+1)^a(x_n+y_n+\sqrt{t})} \right] dy_n\\ {\lesssim }&\frac{1}{(|x|+\sqrt{t}+1)^{a}} + \frac{1}{(|x|+\sqrt{t}+1)^{a}} \int _0^\infty \frac{e^{-u^2}}{\left( \frac{x_n}{\sqrt{t}} \right) +1}\, du\\ {\lesssim }&\frac{1}{(|x|+\sqrt{t}+1)^{a}}. \end{aligned} \end{aligned}$$

This proves

$$\begin{aligned} \left\| u^* \right\| _{Y_a} {\lesssim }\left\| u_0 \right\| _{Y_a},\ \ \ 0< a<n-1. \end{aligned}$$

If \(a=n-1\), we have an additional term from Lemma 2.2,

$$\begin{aligned} \begin{aligned}&\int _0^\infty e^{-\frac{cy_n^2}{t}}\, \frac{1}{(|x|+y_n+\sqrt{t}+1)^{n}}\, \log \left( 1 + \frac{|x|+\sqrt{t}}{y_n+1} \right) dy_n\\&\quad {\lesssim }\frac{1}{(|x|+\sqrt{t}+1)^{n}} \int _0^\infty e^{-\frac{cy_n^2}{t}} \left( \frac{|x|+\sqrt{t}}{y_n+1} \right) ^\varepsilon \, dy_n, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned}&\int _0^\infty e^{-\frac{cy_n^2}{t}} \left( \frac{|x|+\sqrt{t}}{y_n+1} \right) ^\varepsilon \, dy_n\\&\quad \le \int _0^{|x|+\sqrt{t}} \left( \frac{|x|+\sqrt{t}}{y_n+1} \right) ^\varepsilon dy_n + \int _{|x|+\sqrt{t}}^\infty e^{-\frac{cy_n^2}{t}} \left( \frac{|x|+\sqrt{t}}{y_n+1} \right) ^\varepsilon dy_n\\&\quad \le (|x|+\sqrt{t})^\varepsilon (|x|+\sqrt{t}+1)^{1-\varepsilon } + \frac{(|x|+\sqrt{t})^\varepsilon }{(|x|+\sqrt{t}+1)^{\varepsilon }} \int _0^\infty e^{-u^2} \sqrt{t}\, du\\&\quad \le |x|+\sqrt{t}+1. \end{aligned} \end{aligned}$$

So the additional term is bounded by \((|x|+\sqrt{t}+1)^{1-n} = (|x|+\sqrt{t}+1)^{-a}\).

If \(n-1<a<n\), we have an additional term from Lemma 2.2,

$$\begin{aligned} \begin{aligned}&\int _0^\infty \frac{e^{-\frac{cy_n^2}{t}}}{(|x|+y_n+\sqrt{t}+1)^{n}(y_n+1)^{a-n+1}}\, dy_n \\&\quad {\lesssim }\frac{1}{(|x|+\sqrt{t}+1)^{n}} \left[ \int _0^{|x|+\sqrt{t}+1} \frac{1}{(y_n+1)^{a-n+1}}\, dy_n \right. \\&\qquad \left. + \int _{|x|+\sqrt{t}+1}^\infty \frac{e^{-\frac{cy_n^2}{t}}}{(|x|+\sqrt{t}+1)^{a-n+1}}\, dy_n \right] \\&\quad {\lesssim }\frac{1}{(|x|+\sqrt{t}+1)^{n}} \left[ \frac{1}{(|x|+\sqrt{t}+1)^{a-n}} + \frac{1}{(|x|+\sqrt{t}+1)^{a-n+1}} \int _0^\infty e^{-u^2}\sqrt{t}\,du \right] \\&\quad {\lesssim }\frac{1}{(|x|+\sqrt{t}+1)^{a}}. \end{aligned} \end{aligned}$$

If \(a=n\), we have the same additional term from Lemma 2.2,

$$\begin{aligned} \begin{aligned}&\int _0^\infty \frac{e^{-\frac{cy_n^2}{t}}}{(|x|+y_n+\sqrt{t}+1)^{n}(y_n+1)}\, dy_n {\lesssim }\frac{1}{(|x|+\sqrt{t}+1)^{n}} \int _0^\infty \frac{e^{-\frac{cy_n^2}{t}}}{y_n+1}\, dy_n \\&\quad {\lesssim }\frac{1}{(|x|+\sqrt{t}+1)^{n}}\left( \int _0^{\sqrt{t}} \frac{1}{y_n+1}\, dy_n + \int _{\sqrt{t}}^\infty \frac{e^{-\frac{cy_n^2}{t}}}{y_n}\, dy_n \right) \\&\quad {\lesssim }\frac{1}{(|x|+\sqrt{t}+1)^{n}}\left( \log (1+\sqrt{t}) + 1 \right) . \end{aligned} \end{aligned}$$

We have proved

$$\begin{aligned} \left\| u^* \right\| _{Y_a} {\lesssim }\left\| u_0 \right\| _{Y_a},\ \ \ 0< a<n;\quad \left\| u^*(t) \right\| _{Y_n} {\lesssim }\log (2+t) \left\| u_0 \right\| _{Y_n}, \end{aligned}$$

and hence (9.11).

We next consider (9.12). For \(k=0\) and \(l+q=1\), by Proposition 1.1 with \(k=k_i=0\) and \(q=1\) we have

$$\begin{aligned} { |\partial _{y'}^l \partial _{y_n}^q G_{ij}(x,y,t)|{\lesssim }\frac{1}{(|x-y|^2+t)^{\frac{n+1}{2}}} +\frac{1}{(|x^*-y|^2+t)^{\frac{n}{2}}(y_n^2+t)^{\frac{1}{2}} }. } \end{aligned}$$
(9.14)

It suffices to show

$$\begin{aligned} I_1+I_2{\lesssim }t^{-1/2} \frac{1}{{\langle x \rangle }^{a}} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} I_1&= \int _{{\mathbb {R}}^n_+} \frac{1}{(|x-y|+\sqrt{t})^{n+1} } \frac{1}{{\langle y \rangle }^{2a}}\,dy, \\ I_2&= \int _{{\mathbb {R}}^n_+} \frac{1}{(|x^*-y|+\sqrt{t})^n (y_n+\sqrt{t})} \frac{1}{{\langle y \rangle }^{2a}}\,dy . \end{aligned} \end{aligned}$$

For \(I_1\), by Lemma 2.2, we have

$$\begin{aligned} \begin{aligned} I_1&\le \int _{{\mathbb {R}}^n} \frac{1}{(|x-y|+\sqrt{t})^{n+1} } \frac{1}{(|y|+1)^{2a}}\,dy \\&{\lesssim }\frac{1}{(|x|+\sqrt{t} + 1)^{2a} \sqrt{t} } + \frac{1}{(|x|+\sqrt{t} + 1)^{n+1}} \left( \mathbb {1}_{2a=n}\log (|x|+\sqrt{t} + 1) + \mathbb {1}_{2a>n} \right) \end{aligned} \end{aligned}$$

Thus, if \(0<a \le n\),

$$\begin{aligned} { I_1 {\lesssim }\frac{1}{(|x|+\sqrt{t} + 1)^{a} \sqrt{t} } . } \end{aligned}$$
(9.15)
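
The second group of terms in the bound for \(I_1\) above is absorbed into (9.15) as follows (a sketch, using \(a\le n\), \((|x|+\sqrt{t}+1)^{-1}\le t^{-1/2}\), and \(\sup _{s\ge 1} s^{-\frac{n}{2}}\log s<\infty \)):

$$\begin{aligned} \frac{\mathbb {1}_{2a=n}\log (|x|+\sqrt{t} + 1) + \mathbb {1}_{2a>n}}{(|x|+\sqrt{t} + 1)^{n+1}} \le \frac{1}{(|x|+\sqrt{t} + 1)^{a}\, \sqrt{t}}\cdot \frac{\mathbb {1}_{2a=n}\log (|x|+\sqrt{t} + 1) + \mathbb {1}_{2a>n}}{(|x|+\sqrt{t} + 1)^{n-a}} \lesssim \frac{1}{(|x|+\sqrt{t} + 1)^{a}\, \sqrt{t}}, \end{aligned}$$

while the first term satisfies \((|x|+\sqrt{t}+1)^{-2a}\, t^{-1/2} \le (|x|+\sqrt{t}+1)^{-a}\, t^{-1/2}\).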

For \(I_2\), let \(A=x_n+y_n+\sqrt{t}\). We have

$$\begin{aligned} I_2 {\lesssim }\int _0^\infty \left( \int _\Sigma \frac{1}{(|x'-y'|+ A)^n(|y'|+ y_n+1)^{2a}}\,dy' \right) \,\frac{dy_n}{y_n+\sqrt{t}}. \end{aligned}$$

Let \(R=|x'|+A+(y_n+1) \sim |x|+y_n + 1+\sqrt{t}\). By Lemma 2.2,

$$\begin{aligned} \begin{aligned} I_2&{\lesssim }\int _0^\infty \left( R^{-2a} A^{-1} + R^{-n} \left( \mathbb {1}_{2a=n-1} \log \frac{R}{y_n+1} + \frac{\mathbb {1}_{2a>n-1}}{ (y_n+1)^{2a+1-n}} \right) \right) \frac{dy_n}{y_n+\sqrt{t}} \\&=I_3+I_4+I_5. \end{aligned} \end{aligned}$$

We have

$$\begin{aligned} I_3 {\lesssim }\int _0^\infty \frac{dy_n}{( |x|+1+\sqrt{t})^{2a}(y_n+\sqrt{t})^2} {\lesssim }\frac{1}{(|x|+1+\sqrt{t})^{2a}\sqrt{t}}. \end{aligned}$$

If \(2a=n-1\), for any \(0<\epsilon <a\), we have \(n-1-\epsilon >a\) and

$$\begin{aligned} \begin{aligned} I_4&{\lesssim }\int _0^\infty \frac{\log (y_n + |x|+1+\sqrt{t}) }{(y_n + |x|+1+\sqrt{t})^{n}\sqrt{t}} dy_n \\&{\lesssim }\frac{1 }{( |x|+1+\sqrt{t})^{n-1-\epsilon }\sqrt{t}} {\lesssim }\frac{1 }{( |x|+1+\sqrt{t})^{a}\sqrt{t}} . \end{aligned} \end{aligned}$$

If \(\frac{n-1}{2} < a \le n\),

$$\begin{aligned} \begin{aligned} I_5&{\lesssim }\frac{1}{\sqrt{t}} \left( \int _0^{|x| + 1 + \sqrt{t}} + \int _{|x| + 1 + \sqrt{t}}^\infty \right) \frac{dy_n}{(y_n + |x|+1+\sqrt{t})^{n} (y_n+1)^{2a+1-n}} \\&{\lesssim }\frac{1}{\sqrt{t}}\int _0^{|x| +1+ \sqrt{t}} \frac{dy_n}{( |x|+1+\sqrt{t})^{n} (y_n+1)^{2a+1-n}} + \frac{1}{\sqrt{t}}\int _{|x| + 1+\sqrt{t}}^\infty \frac{dy_n}{y_n^{2a+1}} \\&{\lesssim }\frac{1}{\sqrt{t}} \left( \frac{1 }{( |x|+1+\sqrt{t})^{2a}} + \frac{\mathbb {1}_{2a=n} \log ( |x|+1+\sqrt{t})}{( |x|+1+\sqrt{t})^{2a}} + \frac{\mathbb {1}_{2a>n} }{( |x|+1+\sqrt{t})^{n}} \right) . \end{aligned} \end{aligned}$$

Thus, if \(0<a \le n\),

$$\begin{aligned} I_2 {\lesssim }I_3+I_4+I_5{\lesssim }\frac{1}{(|x|+\sqrt{t} + 1)^{a} \sqrt{t} } . \end{aligned}$$

This and the \(I_1\) estimate (9.15) show (9.12). \(\square \)

Remark

In the proof of (9.12), we use Proposition 1.1 instead of Theorem 1.5 to avoid \(\textrm{LN}\) since \(\mu _{jq}\) may be 1 when \(q=1\).

We next consider Theorem 1.9. It can be proved using the same iteration argument as in [5] and the estimates in the following lemma.

Lemma 9.3

Let \(n\ge 2\) and \(0 \le a \le 1\). For \(u_0\in Z_a\) with \(\mathop {\textrm{div}}\nolimits u_0 =0\) and \(u_{0,n}|_\Sigma =0\),

$$\begin{aligned} \left\| \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) u_{0,j}(y) dy \right\| _{Z_a} \le C (1 + \delta _{a1} \log _+t) \left\| u_0 \right\| _{Z_a}. \end{aligned}$$
(9.16)

For \(F\in Z_{2a}\),

$$\begin{aligned} \left\| \int _{{\mathbb {R}}^n_+} \partial _{y_p} G_{ij}(x,y,t) F_{pj}(y)\,dy \right\| _{Z_a} \le C t^{-1/2} \left\| F \right\| _{Z_{2a}}. \end{aligned}$$
(9.17)

Our estimates for both inequalities fail for \(a >1\). See Remark 9.3 after the proof.

Proof

If \(a=0\), the lemma follows from (9.1) and (9.3) with \(p=q=\infty \). Thus, we only consider \(0<a\le 1\). We may suppose that \(\left\| u_0 \right\| _{Z_a}=1\) without loss of generality. For (9.16), write

$$\begin{aligned} \begin{aligned} \sum _{j=1}^n\int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) u_{0,j}(y) dy&=\int _{{\mathbb {R}}^n_+} \Gamma (x-y,t) u_{0,i}(y) dy + \int _{{\mathbb {R}}^n_+} G_{ij}^*(x,y,t) u_{0,j}(y) dy\\&= : u^{heat}_i(x,t) + u_i^*(x,t). \end{aligned} \end{aligned}$$

Denote by \(\Gamma _k\) the k-dimensional heat kernel. When \(|u_0(y)| \le {\langle y_n \rangle }^{-a}\), we have

$$\begin{aligned} \begin{aligned} |u^{heat}_i(x,t) | {\lesssim }&\int _0^\infty \frac{\Gamma _{1}(x_n-y_n,t)}{(y_n+1)^a} \int _{\Sigma } \Gamma _{n-1}(x'-y',t)\, dy'dy_n\\ {\lesssim }&\int _0^\infty \frac{\Gamma _{1}(x_n-y_n,t)}{(y_n+1)^a} dy_n\\ {\lesssim }&(1+\delta _{a=1}\log _+ t)(x_n+1)^{-a}. \end{aligned} \end{aligned}$$

We have used the one dimensional version of (9.13) for the last inequality and \(0<a\le 1\).

For \(u^*\) with \(|u_0(y)| \le {\langle y_n \rangle }^{-a}\), by (1.10), (for both \(n \ge 3\) and \(n=2\)) we get

$$\begin{aligned} |u^*(x,t)| {\lesssim }J(x,t)= \int _{{\mathbb {R}}^n_+}\frac{e^{-\frac{cy_n^2}{t}}}{(|x^*-y|^2+t)^{\frac{n}{2}} {\langle y_n \rangle }^a} \,dy. \end{aligned}$$

For \(0< a<\infty \), we have

$$\begin{aligned} \begin{aligned} J&{\lesssim }\int _0^\infty \frac{e^{-\frac{cy_n^2}{t}}}{(y_n+1)^a} \int _{\Sigma } \frac{1}{(|x'-y'|+x_n+y_n+\sqrt{t})^n}\, dy'dy_n\\&{\lesssim }\int _0^\infty { \frac{e^{-\frac{cy_n^2}{t}}}{(y_n+1)^a(x_n+y_n+\sqrt{t})}}\, dy_n \\&{\lesssim }\frac{1}{x_n+\sqrt{t}} \int _0^{x_n+\sqrt{t}} \frac{1}{(y_n+1)^a}\, dy_n + \frac{1}{(x_n+\sqrt{t}+1)^a} \int _{x_n+\sqrt{t}}^\infty \frac{e^{-c\frac{y_n^2}{t}}}{y_n+\sqrt{t}}\, dy_n. \end{aligned} \end{aligned}$$

Using Lemma 2.1 to bound the first integral, we have

$$\begin{aligned} \begin{aligned} J&{\lesssim }\frac{1}{x_n+\sqrt{t}}\, \frac{(x_n+\sqrt{t})\left( 1 + \delta _{a1} \log _+(x_n+\sqrt{t}) \right) }{(1+x_n+\sqrt{t})^{\min (a,1)}} + \frac{1}{(x_n+\sqrt{t}+1)^a} \int _0^\infty \frac{e^{-u^2}}{u+1}\, du\\&{\lesssim }\frac{1 + \delta _{a1} \log _+(x_n+\sqrt{t})}{(x_n+\sqrt{t}+1)^{\min (a,1)}}. \end{aligned} \end{aligned}$$

When \(a=1\), we want to improve the above numerator \(1 + \delta _{a1} \log _+(x_n+\sqrt{t})\) to a function of t independent of \(x_n\). It suffices to consider the case \(x_n > 10+\sqrt{t}\). In this case,

$$\begin{aligned} J&{\lesssim }&~\frac{1}{x_n} \int _{0}^\infty \frac{e^{-c\frac{y_n^2}{t}}}{y_n+1}\, dy_n {\lesssim }\frac{1}{x_n} \int _{0}^{\sqrt{t}} \frac{1}{y_n+1}\, dy_n+ \frac{1}{x_n} \int _{\sqrt{t}}^\infty \frac{e^{-c\frac{y_n^2}{t}}}{y_n}\, dy_n \\&{\lesssim }&~ \frac{1}{x_n} \log (\sqrt{t} +1)+ \frac{1}{x_n}. \end{aligned}$$

We conclude that, when \(a=1\), whether or not \(x_n > 10+\sqrt{t}\),

$$\begin{aligned} J ~{\lesssim }~ \frac{\log (2+\sqrt{t})}{x_n+\sqrt{t}+1}. \end{aligned}$$

Combining the above estimates of \(u^{heat}\) and J, we deduce (9.16).

Next, we will show (9.17). For \(k=0\) and \(l+q=1\), by Proposition 1.1 with \(k=k_i=0\) and \(q=1\), we have

$$\begin{aligned} { |\partial _{y'}^l \partial _{y_n}^q G_{ij}(x,y,t)|{\lesssim }\frac{1}{(|x-y|^2+t)^{\frac{n+1}{2}}} +\frac{1}{(|x^*-y|^2+t)^{\frac{n}{2}}(y_n^2+t)^{\frac{1}{2}} }. } \end{aligned}$$
(9.18)

It suffices to show, for \(a>0\),

$$\begin{aligned} I_1+I_2{\lesssim }t^{-1/2} \frac{1}{{\langle x_n \rangle }^{a}} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} I_1&= \int _{{\mathbb {R}}^n_+} \frac{1}{(|x-y|+\sqrt{t})^{n+1} } \frac{1}{{\langle y_n \rangle }^{2a}}\,dy, \\ I_2&= \int _{{\mathbb {R}}^n_+} \frac{1}{(|x^*-y|+\sqrt{t})^n (y_n+\sqrt{t})} \frac{1}{{\langle y_n \rangle }^{2a}}\,dy . \end{aligned} \end{aligned}$$

Indeed, via Lemma 2.2, we have

$$\begin{aligned} \begin{aligned} I_1&{\lesssim }\int _0^\infty \frac{1}{(y_n+1)^{2a}} \int _{\Sigma } \frac{1}{(|x-y|+\sqrt{t})^{n+1}}\, dy'dy_n\\&{\lesssim }\int _0^\infty \frac{1}{(y_n+1)^{2a}(|x_n-y_n|+\sqrt{t})^{2}}\, dy_n\\&{\lesssim }R^{-1-2a} + \delta _{2a=1} R^{-2} \log R + \mathbb {1}_{2a>1} R^{-2} + R^{-2a} t^{-1/2}\\&{\lesssim }t^{-1/2} \frac{1}{{\langle x_n \rangle }^{a}}, \end{aligned} \end{aligned}$$

where \(R=x_n+\sqrt{t}+1\). We have used \(a\le 1\) to bound \(\mathbb {1}_{2a>1} R^{-2}{\lesssim }t^{-1/2} \frac{1}{{\langle x_n \rangle }^{a}}\). On the other hand,

$$\begin{aligned} \begin{aligned} I_2&{\lesssim }\int _0^\infty \frac{1}{(y_n+1)^{2a}(y_n+\sqrt{t})} \int _{\Sigma } \frac{1}{(|x^*-y|+\sqrt{t})^{n}}\, dy'dy_n\\&{\lesssim }\int _0^\infty \frac{1}{(y_n+1)^{2a}(y_n+\sqrt{t})(x_n+y_n+\sqrt{t})}\, dy_n. \end{aligned} \end{aligned}$$

If \(x_n \le 1\), we have

$$\begin{aligned} I_2{\lesssim }\int _0^1\frac{1}{(y_n+\sqrt{t})^2}\, dy_n+ \int _1^\infty \frac{1}{y_n^{2a+1}\sqrt{t}}\, dy_n {\lesssim }\frac{1}{\sqrt{t}} . \end{aligned}$$

If \(x_n \ge 1\), using \(0<a\le 1\) we have

$$\begin{aligned} \begin{aligned} I_2&{\lesssim }\int _0^\infty \frac{1}{(y_n+1)^{2a}(\sqrt{t})\, x_n^a (y_n+1)^{1-a}}\, dy_n \\&= \frac{1}{x_n^a \sqrt{t}}\int _0^\infty \frac{1}{(y_n+1)^{1+a}}\, dy_n = \frac{c}{x_n^a \sqrt{t}}. \end{aligned} \end{aligned}$$
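
The first inequality above uses the weighted AM–GM inequality; a brief sketch for \(x_n\ge 1\) and \(0<a\le 1\):

$$\begin{aligned} x_n^a\, (y_n+1)^{1-a} \le a\, x_n + (1-a)(y_n+1) \le x_n+y_n+1 \le 2\, (x_n+y_n+\sqrt{t}), \end{aligned}$$

so that \((x_n+y_n+\sqrt{t})^{-1}\lesssim x_n^{-a}\,(y_n+1)^{-(1-a)}\).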

Combining the above estimates of \(I_1\) and \(I_2\), we obtain (9.17). \(\square \)

Remark 9.3

The restriction \(a\le 1\) is used for both estimates of \(u^{heat}\) and J for (9.16) and for both \(I_1\) and \(I_2\) for (9.17) in the above proof. In fact, for \(t=1\) and all \(a>0\), J has the lower bound

$$\begin{aligned} J(x,1) {\gtrsim }\int _{0<y_n<1} \int _\Sigma \frac{dy'\,dy_n}{(|y'|+x_n+1)^n} {\gtrsim }\frac{1}{1+x_n}. \end{aligned}$$

9.3 Mild solutions in \(L^q_{\textrm{uloc}}\)

In this subsection we prove Lemma 9.4. The estimates in Lemma 9.4 are used by Maekawa, Miura and Prange to construct local-in-time mild solutions of (NS) in \(L^q_{\textrm{uloc}}({\mathbb {R}}^n_+)\) in [36, Prop 7.1] for \(n<q \le \infty \) and [36, Prop 7.2] for \(q=n\). The same proofs give Theorem 1.10.

Lemma 9.4

Let \(n \ge 2\). Let \(1 \le p \le q \le \infty \). For \(u_0\in L^p_{{\textrm{uloc}},\sigma }\),

$$\begin{aligned}{} & {} \left\| \sum _{j=1}^n \int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) u_{0,j}(y) dy \right\| _{L^q_{\textrm{uloc}}} \nonumber \\{} & {} \quad \le C \left( 1+t^{-\frac{n}{2}(\frac{1}{p}-\frac{1}{q})}+ \mathbb {1}_{p=q=1} \ln _+ \frac{1}{ t} \right) \left\| u_0 \right\| _{L^p_{\textrm{uloc}}}. \end{aligned}$$
(9.19)

Let \(F\in L^p_{\textrm{uloc}}\), \(a,b\in {\mathbb {N}}_0\) and \(1\le a+b\). Assume \(b\ge 1\) and \(n\ge 3\) if \(p=q=\infty \). Then

$$\begin{aligned} { \left\| \int _{{\mathbb {R}}^n_+} \partial _{x}^a\partial _{y}^b G_{ij}(x,y,t) F(y) dy \right\| _{L^q_{\textrm{uloc}}} \le C t^{-\frac{a+b}{2}} \big (1+t^{-\frac{n}{2}(\frac{1}{p}-\frac{1}{q})}\big ) \left\| F \right\| _{L^p_{\textrm{uloc}}}. } \end{aligned}$$
(9.20)

These estimates correspond to [36, Proposition 5.3] and [36, Theorem 3]. Their proof is based on resolvent estimates in [36, Theorem 1], which does not allow \(q=1\). Thus our estimates for \(p=q=1\) are new. Also note that we do not restrict \(a,b\le 1\) as in [36, Theorem 3].

Proof

First consider (9.19). The endpoint case \(p=q=\infty \) follows from (9.1). Let \(p<\infty \). The formula (1.8) gives

$$\begin{aligned} \begin{aligned} \sum _{j=1}^n\int _{{\mathbb {R}}^n_+} \breve{G}_{ij}(x,y,t) u_{0,j}(y) dy&=\int _{{\mathbb {R}}^n} \Gamma (x-y,t) \mathbb {1}_{y_n>0}u_{0,i}(y) dy \\&\quad + \int _{{\mathbb {R}}^n_+} G_{ij}^*(x,y,t) u_{0,j}(y) dy\\&= : u^{heat}_i(x,t) + u_i^{*}(x,t). \end{aligned} \end{aligned}$$

Since \(u^{heat}\) is a convolution with the heat kernel in \({\mathbb {R}}^n\), it satisfies the estimate in (9.19) by Maekawa–Terasawa [37, (3.18)]. It suffices now to show that \(u^*(x,t)\) also satisfies the same estimate. By (1.10), \(u^*(x,t)\) is bounded by

$$\begin{aligned} \begin{aligned} J_t(x)&= \int _{{\mathbb {R}}^n_+}\frac{e^{-\frac{cy_n^2}{t}}}{(|x'-y'|+x_n+y_n+\sqrt{t})^n } |u_0(y)| \,dy \\&= \int _0^\infty \frac{1}{(|x'|+x_n+y_n+\sqrt{t})^n }*_\Sigma |u_0(x',y_n)| e^{-\frac{cy_n^2}{t}} \,dy_n, \end{aligned} \end{aligned}$$

where \(*_\Sigma \) indicates convolution over \(\Sigma \). Denote

$$\begin{aligned} Q= [-\tfrac{1}{2},\tfrac{1}{2}]^{n-1} \subset \Sigma , \quad Q_k = k + Q, \quad k \in {\mathbb {Z}}^{n-1}. \end{aligned}$$

Our goal is to bound

$$\begin{aligned} \left\| J_t \right\| _{L^q(Q_{j'} \times (j_n,j_n+1))} \end{aligned}$$

by the right side of (9.19), uniformly for all \(j' \in {\mathbb {Z}}^{n-1}\) and \(j_n \in {\mathbb {N}}_0\). By translation, we may assume \(j'=0\). Decompose

$$\begin{aligned} J_t(x) = \sum _{k,l\in {\mathbb {Z}}^{n-1}} \int _0^\infty \frac{\mathbb {1}_{Q_k}(x')}{(|x'|+x_n+y_n+\sqrt{t})^n }*_{\Sigma _{x'}} \left( \mathbb {1}_{Q_l}(x') |u_0(x',y_n)| \right) e^{-\frac{cy_n^2}{t}} \,dy_n. \end{aligned}$$

By Minkowski and Young inequalities with \(1+\frac{1}{q}=\frac{1}{p}+\frac{1}{r}\),

$$\begin{aligned} \begin{aligned}&\left\| J_t(\cdot ,x_n) \right\| _{L^q(Q)}\\&\quad {\lesssim }\sum _{k,l\in {\mathbb {Z}}^{n-1},\, k-l \in 3Q} \int _0^\infty \left\| \frac{\mathbb {1}_{Q_k}(x')}{(|x'| + x_n+y_n+\sqrt{t})^n} *_{\Sigma _{x'}}\left( \mathbb {1}_{Q_l}(x') |u_0(x',y_n)| \right) \right\| _{L^q_{x'}(Q)}\\&\qquad e^{-\frac{cy_n^2}{t}}\,dy_n \\&\quad {\lesssim }\sum _{k,l\in {\mathbb {Z}}^{n-1},\, k-l \in 3Q} \int _0^\infty I_k \cdot \left\| u_0(\cdot ,y_n) \right\| _{L^p(Q_l)} e^{-\frac{cy_n^2}{t}}\,dy_n, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} I_k&= \left\| \frac{1}{(|x'| + x_n+y_n+\sqrt{t})^n} \right\| _{L^r_{x'}(Q_k)}. \end{aligned} \end{aligned}$$

We have \(I_k \approx (1+|k|+ x_n+y_n+\sqrt{t})^{-n}\) when \(k \not =0\), and \(I_0 {\lesssim }(1+ x_n+y_n+\sqrt{t})^{-\frac{n-1}{r}} ( x_n+y_n+\sqrt{t})^{-n+\frac{n-1}{r}}\) by Lemma 2.1.

By Minkowski inequality again with \(I=(j_n,j_n+1)\),

$$\begin{aligned} \left\| J_t \right\| _{L^q(Q\times I)}= & {} \left\| \left\| J_t(\cdot ,x_n) \right\| _{L^q(Q)} \right\| _{L^q_{x_n}(I)} \\&{\lesssim }&\sum _{k\in {\mathbb {Z}}^{n-1}} \int _0^\infty \left\| I_k \right\| _{L^q_{x_n}(I)} \left\| u_0(\cdot ,y_n) \right\| _{L^p(k+4Q)} e^{-\frac{cy_n^2}{t}}\,dy_n . \end{aligned}$$

We have \(\left\| I_k \right\| _{L^q_{x_n}(I)} {\lesssim }\frac{1}{(1+|k| +y_n+\sqrt{t})^n}\) except when \(k=0\) and \(y_n+\sqrt{t}<1\). For \(k=0\),

$$\begin{aligned} \begin{aligned} \left\| I_0 \right\| _{L^q_{x_n}(I)}&{\lesssim }\left\| \frac{1}{ (x_n+y_n+\sqrt{t})^{n-\frac{n-1}{r}}} \right\| _{L^q_{x_n}(0,1)} \\ {}&{\lesssim }\frac{1}{ (y_n+\sqrt{t})^{1+\frac{n-1}{p} - \frac{n}{q}}} + \mathbb {1}_{p=q=1} \ln _+ \frac{1}{y_n+\sqrt{t}}, \end{aligned} \end{aligned}$$

using \(n-\frac{n-1}{r} = 1 + (n-1)(\frac{1}{p}-\frac{1}{q})\ge 1\). Thus

$$\begin{aligned} \begin{aligned} \left\| J_t \right\| _{L^q(Q\times I)}&{\lesssim }\sum _{k\in {\mathbb {Z}}^{n-1}} \sum _{j=0}^\infty \int _j^{j+1} \frac{1}{(1+|k| +y_n+\sqrt{t})^n} \left\| u_0(\cdot ,y_n) \right\| _{L^p(k+4Q)} e^{-\frac{cy_n^2}{t}}\,dy_n+M, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} M= \int _0^1 \left\| I_0 \right\| _{L^q_{x_n}(I)} \left\| u_0(\cdot ,y_n) \right\| _{L^p(4Q)} e^{-\frac{cy_n^2}{t}}\,dy_n. \end{aligned} \end{aligned}$$

By Hölder inequality with \(p'=\frac{p}{p-1}\),

$$\begin{aligned} \begin{aligned} \left\| J_t \right\| _{L^q(Q\times I)}&{\lesssim }\sum _{k\in {\mathbb {Z}}^{n-1}} \sum _{j=0}^\infty \left\| u_0 \right\| _{L^p((k+4Q)\times (j,j+1))}\\&\quad \cdot \left\| \frac{1}{(1+|k| +y_n+\sqrt{t})^n} e^{-\frac{cy_n^2}{t}} \right\| _ {L^{p'}_{y_n} (j,j+1)}+M \\&{\lesssim }\sum _{k\in {\mathbb {Z}}^{n-1}} \sum _{j=0}^\infty \left\| u_0 \right\| _{L^p_{\textrm{uloc}}} \frac{1}{(1+|k| +j+\sqrt{t})^n} e^{-\frac{cj^2}{t}}+M \\&{\lesssim }\left\| u_0 \right\| _{L^p_{\textrm{uloc}}} \int _{{\mathbb {R}}^n_+} \frac{ e^{-\frac{cy_n^2}{t}}}{(1+|y| +\sqrt{t})^n} dy+M {\lesssim }\left\| u_0 \right\| _{L^p_{\textrm{uloc}}}+M. \end{aligned} \end{aligned}$$

Also by Hölder inequality, when \((p,q)\not =(1,1)\),

$$\begin{aligned} M {\lesssim }\left\| u_0 \right\| _{L^p_{\textrm{uloc}}}\cdot \left\| \frac{1}{ (y_n+\sqrt{t})^{1+\frac{n-1}{p} - \frac{n}{q}}} e^{-\frac{cy_n^2}{t}} \right\| _ {L^{p'}_{y_n}(0,1)} {\lesssim }t^{-\frac{n}{2}(\frac{1}{p}-\frac{1}{q})} \left\| u_0 \right\| _{L^p_{\textrm{uloc}}}, \end{aligned}$$

while when \(p=q=1\),

$$\begin{aligned} M {\lesssim }\left\| u_0 \right\| _{L^1_{\textrm{uloc}}}\cdot \left\| \left( 1+ \ln _+ \frac{1}{y_n+\sqrt{t}} \right) e^{-\frac{cy_n^2}{t}} \right\| _{L^\infty (0,1) } {\lesssim }\left( 1+ \ln _+ \frac{1}{ t} \right) \left\| u_0 \right\| _{L^1_{\textrm{uloc}}}. \end{aligned}$$

We have shown

$$\begin{aligned} { \left\| J_t \right\| _{L^q_{\textrm{uloc}}} {\lesssim }\left( 1+ t^{-\frac{n}{2}(\frac{1}{p}-\frac{1}{q})}+ \mathbb {1}_{p=q=1} \ln _+ \frac{1}{ t} \right) \left\| u_0 \right\| _{L^p_{\textrm{uloc}}}. } \end{aligned}$$
(9.21)

This proves (9.19).

For (9.20), denote

$$\begin{aligned} w(x,t) = \int _{{\mathbb {R}}^n_+} \partial _{x}^a\partial _{y}^b G_{ij}(x,y,t) F(y) dy ,\quad m=a+b. \end{aligned}$$

By (9.5), \(|w(x,t)|{\lesssim }w_1(x,t) + w_2(x,t)\), where \(w_1(x,t) = \int _{{\mathbb {R}}^n} H_t^0(x-y) |F(y)|dy\) with \(H_t^0(x)\) given by (9.6), and \(w_2(x,t) = (x_n^2 + t)^{-a/2}\int _{{\mathbb {R}}^n} H_t(x-y^*) |F(y)|\, (y_n^2+t)^{-b/2}\,dy\) with \(H_t(x)\) given by (9.7).

For \(w_1(x,t)\), by Maekawa-Terasawa [37, Theorem 3.1] with \(\frac{1}{q}=\frac{1}{r}+\frac{1}{p}-1\),

$$\begin{aligned} \begin{aligned} \Vert w_1(\cdot ,t)\Vert _{L^q_{{\textrm{uloc}}}}&{\lesssim }t^{-\frac{m}{2}}\left( t^{\frac{n}{2}(\frac{1}{q}-\frac{1}{p})} \left\| H_1^0 \right\| _{L^r({\mathbb {R}}^n)} +\left\| H_1^0 \right\| _{L^1({\mathbb {R}}^n)} \right) \left\| F \right\| _{L^p_{\textrm{uloc}}}. \end{aligned} \end{aligned}$$

It remains to estimate \(w_2(x,t)\). When \(p=q=\infty \), noting that \(L^\infty _{{\textrm{uloc}}} = L^\infty \), (9.20) follows from (9.3). For \(p<q\), we bound the factors \((y_n^2+t)^{-\frac{b}{2}}\) and \((x_n^2+t)^{-\frac{a}{2}}\) in (9.5) by \(t^{-\frac{b}{2}}\) and \(t^{-\frac{a}{2}}\), replace \(H_t(x-y^*)\) by \(H_t(x-y)\), and apply Maekawa–Terasawa [37, Theorem 3.1] with \(\frac{1}{q}=\frac{1}{r}+\frac{1}{p}-1\) to get

$$\begin{aligned} \begin{aligned} \Vert w_2(\cdot ,t)\Vert _{L^q_{{\textrm{uloc}}}}&{\lesssim }t^{-\frac{m}{2}}\left( t^{\frac{n}{2}(\frac{1}{q}-\frac{1}{p})} \left\| H_1 \right\| _{L^r({\mathbb {R}}^n)} +\left\| H_1 \right\| _{L^1({\mathbb {R}}^n)} \right) \left\| F \right\| _{L^p_{\textrm{uloc}}}. \end{aligned} \end{aligned}$$

Note that \(H_1\in L^r\) since \(r>1\) when \(p<q\). Thus, we get for \(p<q\) that

$$\begin{aligned} \left\| w_2(\cdot ,t) \right\| _{L^q_{{\textrm{uloc}}}} {\lesssim }t^{-\frac{m}{2}}\left( t^{\frac{n}{2}(\frac{1}{q}-\frac{1}{p})} + 1 \right) \left\| F \right\| _{L^p_{\textrm{uloc}}}. \end{aligned}$$

This shows (9.20). \(\square \)