1 Introduction

In this paper we study traveling wave solutions to the one-phase Muskat problem, which concerns the dynamics of the free boundary of a viscous fluid in homogeneously permeable porous media. The n-dimensional (\(n\geqq 2\)) wet region \(\Omega _{\zeta (\cdot ,t)}\) lies above the flat bed of depth \(b>0\) and below the free boundary that is the graph of an unknown time-dependent function \(\zeta \), i.e.

$$\begin{aligned} \Omega _{\zeta (\cdot ,t)} = \{x \in \Gamma \times \mathbb {R}\;\vert \;-b< x_n < \zeta (x',t)\}, \end{aligned}$$
(1.1)

where the cross-section \(\Gamma \) is either \(\mathbb {R}^{n-1}\) or \(\mathbb {T}^{n-1}:= \mathbb {R}^{n-1} / (2\pi \mathbb {Z}^{n-1})\). Here, for any point \(x\in \Gamma \times \mathbb {R}\) we split its horizontal and vertical coordinates as \(x=(x', x_n)\). We denote the free boundary and the flat bed respectively by \(\Sigma _{\zeta (\cdot ,t)}\) and \(\Sigma _{-b}\); that is,

$$\begin{aligned} \Sigma _{\zeta (\cdot ,t)} = \{x \in \Gamma \times \mathbb {R}\;\vert \;x_n = \zeta (x',t)\} \text { and } \Sigma _{-b} = \{x \in \Gamma \times \mathbb {R}\;\vert \;x_n = -b\}. \end{aligned}$$
(1.2)

We posit that when the cross-section is \(\mathbb {R}^{n-1}\), the free boundary \(\zeta (x', t)\) decays as \(|x'|\rightarrow \infty \).

The fluid is acted upon in the bulk by a uniform gravitational field \(-e_n\) pointing downward, where \(e_n\) is the upward pointing unit vector in the vertical direction, and a generic body force \(\tilde{\mathfrak {f}}(\cdot , t): \Omega _{\zeta (\cdot , t)}\rightarrow \mathbb {R}^n\). Then the fluid motion in the porous medium is modeled by the Darcy law

$$\begin{aligned} w + \nabla P = - e_n + \tilde{\mathfrak {f}} \quad \text {and} \quad {{\,\textrm{div}\,}}{w} = 0\quad \text {in } \Omega _{\zeta (\cdot ,t)}, \end{aligned}$$
(1.3)

where, for the sake of simplicity, we have normalized the dynamic viscosity, the fluid density, and the permeability of the medium to unity. Here, w and P respectively denote the fluid velocity and pressure. On the surface, the fluid is acted upon by a constant pressure \(P_0\) from the dry region above \(\Omega _{\zeta (\cdot ,t)}\) and an externally applied pressure \(\phi (\cdot , t): \Sigma _{\zeta (\cdot , t)}\rightarrow \mathbb {R}\). This leads to the boundary condition

$$\begin{aligned} P=P_0+\phi \quad \text {on } \Sigma _{\zeta (\cdot ,t)}. \end{aligned}$$
(1.4)

The no-penetration boundary condition is assumed on the flat bed:

$$\begin{aligned} w_n=0\quad \text {on }\Sigma _{-b}. \end{aligned}$$
(1.5)

Finally, the free boundary evolves according to the kinematic boundary condition

$$\begin{aligned} \partial _t\zeta = w \cdot (-\nabla ' \zeta ,1) \quad \text {on } \Sigma _{\zeta (\cdot ,t)}. \end{aligned}$$
(1.6)

We shall refer to the following system as the (one-phase) Muskat problem

$$\begin{aligned} {\left\{ \begin{array}{ll} w + \nabla P = - e_n + \tilde{\mathfrak {f}} &{} \text {in }\Omega _{\zeta (\cdot ,t)} \\ {{\,\textrm{div}\,}}{w} = 0 &{}\text {in } \Omega _{\zeta (\cdot ,t)} \\ \partial _t\zeta = w \cdot (-\nabla ' \zeta ,1) &{}\text {on } \Sigma _{\zeta (\cdot ,t)} \\ P = P_0 + \phi &{}\text {on } \Sigma _{\zeta (\cdot ,t)} \\ w_n =0 &{} \text {on } \Sigma _{-b}. \end{array}\right. } \end{aligned}$$
(1.7)

In the absence of the body force and the external pressure, i.e. \(\tilde{\mathfrak {f}} =0\) and \(\phi =0\), (1.7) is called the free Muskat problem.

The free Muskat problem can be recast as a nonlocal equation for the free boundary function \(\eta \) (see (2.17)). It was proved in [12] that the problem is locally-in-time well-posed for large data \(\eta _0\in H^s(\Gamma )\) for any \(1+\frac{n-1}{2}<s\in \mathbb {R}\), which is the lowest Sobolev index guaranteeing that \(\eta _0\in W^{1, \infty }(\Gamma )\). We also refer to [3] for local well-posedness for the case of non-graph free boundary. The free Muskat problem admits the following trivial stationary solutions

$$\begin{aligned} (\overline{w}, \overline{P}, \overline{\zeta })= {\left\{ \begin{array}{ll} (0, -x_n+P_0, 0)\quad \text {if } \Gamma =\mathbb {R}^{n-1},\\ (0, -x_n + c+P_0, c),~c\in \mathbb {R}\quad \text {if } \Gamma =\mathbb {T}^{n-1}. \end{array}\right. } \end{aligned}$$
(1.8)
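
As a quick check that (1.8) indeed solves the free problem, note that \(\overline{w} + \nabla \overline{P} = 0 - e_n\) and \({{\,\textrm{div}\,}}{\overline{w}} = 0\) in \(\Omega _{\overline{\zeta }}\), that \(\partial _t \overline{\zeta } = 0 = \overline{w}\cdot (-\nabla '\overline{\zeta },1)\) and \(\overline{w}_n = 0\), and that on \(\Sigma _{\overline{\zeta }}\) we have

$$\begin{aligned} \overline{P} = -\overline{\zeta } + P_0 = P_0 \quad (\Gamma = \mathbb {R}^{n-1}) \qquad \text {and} \qquad \overline{P} = -c + c + P_0 = P_0 \quad (\Gamma = \mathbb {T}^{n-1}), \end{aligned}$$

so all five equations in (1.7) with \(\tilde{\mathfrak {f}} = 0\) and \(\phi = 0\) are satisfied.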

In fact, under mild regularity and decay assumptions, (1.8) are the only stationary solutions. They have been proved to be stable in various norms [4, 5, 11, 13]. To the best of our knowledge, (1.8) are the only solutions that are known to be stable.

In this paper we are interested in the construction of nontrivial special solutions and their stability. In view of the translation invariance of (1.7) in the horizontal directions, it is natural to consider traveling wave solutions. These are solutions that propagate along a fixed direction, which without loss of generality we may assume is the \(x_1\)-direction, with constant velocity \(\gamma \). To this end, we assume that

$$\begin{aligned} \phi (x,t) = \varphi (x-\gamma t e_1) \text { and } \tilde{\mathfrak {f}}(x,t) = \mathfrak {f}(x-\gamma t e_1) \end{aligned}$$
(1.9)

and make the traveling wave ansatz

$$\begin{aligned} \zeta (x',t) = \eta (x' - \gamma t e_1 ). \end{aligned}$$
(1.10)

This determines the unknown domain \(\Omega _\eta = \{x \in \Gamma \times \mathbb {R}\;\vert \;-b< x_n < \eta (x')\}\) as well as the free boundary \(\Sigma _\eta = \{x \in \Gamma \times \mathbb {R}\;\vert \;x_n = \eta (x')\}\) as before. We then define the traveling wave unknowns \(v: \Omega _\eta \rightarrow \mathbb {R}^n\) and \(q: \Omega _\eta \rightarrow \mathbb {R}\) via

$$\begin{aligned} w(x,t) = v(x-\gamma t e_1 ) \text { and } P(x,t) = P_0 - x_n + q(x - \gamma t e_1 ). \end{aligned}$$
(1.11)

In the latter we have subtracted off the hydrostatic pressure, as is often convenient. The new equations for \((v,q, \eta )\) read

$$\begin{aligned} {\left\{ \begin{array}{ll} v + \nabla q = \mathfrak {f} &{} \text {in }\Omega _{\eta } \\ {{\,\textrm{div}\,}}{v} = 0 &{}\text {in } \Omega _{\eta } \\ -\gamma \partial _1 \eta = v \cdot N&{}\text {on } \Sigma _{\eta } \\ q - \eta = \varphi &{}\text {on } \Sigma _{\eta } \\ v_n =0 &{} \text {on } \Sigma _{-b}, \end{array}\right. } \end{aligned}$$
(1.12)

where

$$\begin{aligned} N=(-\nabla '\eta ,1). \end{aligned}$$
(1.13)
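
To indicate how (1.12) arises from (1.7) under the ansatz (1.9)–(1.11), consider for instance the kinematic and pressure boundary conditions: writing \(y = x - \gamma t e_1\), on \(\Sigma _{\zeta (\cdot ,t)}\) we have

$$\begin{aligned} \partial _t \zeta (x',t) = -\gamma \partial _1 \eta (y') = w\cdot (-\nabla '\zeta , 1) = v(y)\cdot N(y') \quad \text {and} \quad P_0 + \varphi (y) = P(x,t) = P_0 - \eta (y') + q(y), \end{aligned}$$

which are precisely the third and fourth equations of (1.12) on \(\Sigma _\eta \); the remaining equations of (1.7) transform in the same way.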

We pause to remark that the only solutions to the free version of (1.12) (\( \mathfrak {f}=0\) and \(\varphi =0\)) are the trivial solutions as given in (1.8). Indeed, if \((v, q, \eta )\) is a decaying solution, then using Green’s theorem and the boundary conditions for v and q, we obtain

$$\begin{aligned}{} & {} \int _{\Omega _\eta }|v|^2\textrm{d}x =-\int _{\Omega _\eta } v\cdot \nabla q \textrm{d}x = -\int _{\Sigma _\eta } q\left( v\cdot \frac{N}{|N|}\right) \textrm{d}S\nonumber \\{} & {} -\int _{\Sigma _{-b}}qv_n \textrm{d}S+\int _{\Omega _\eta } q{{\,\textrm{div}\,}}{v}\textrm{d}x=\gamma \int _{\Gamma }\eta \partial _1\eta \textrm{d}x=0. \end{aligned}$$
(1.14)

It follows that \(v=0\) and hence \(q=c\), a constant. Consequently \(\eta (x')=q(x', \eta (x'))=c\). When \(\Gamma = \mathbb {R}^{n-1}\) this implies that \(\eta =q=0\) since \(\eta \) decays. Thus \((v, q, \eta )=(0, 0, 0)\) is the trivial solution when \(\Gamma = \mathbb {R}^{n-1}\). In the periodic case, \(\Gamma = \mathbb {T}^{n-1}\), we obtain the trivial solutions \((v, q, \eta )=(0, c, c)\), for \(c\in \mathbb {R}\) (one can uniquely determine c by fixing the mass of the fluid). This is not a surprise since for the free Muskat problem the energy dissipates, so it cannot sustain the permanent structure of traveling waves. It is therefore necessary to have some sort of external energy input in order for traveling wave solutions to exist. In the context of (1.12), this is provided by the external bulk force \( \mathfrak {f} \) and the external pressure \(\varphi \).

Our first main result states that for suitably small \(\mathfrak {f}\) and \(\phi \), there exists a locally unique traveling wave solution to (1.12) in Sobolev-type spaces. Note that in the following statement we employ a reformulation of the problem (1.12) as well as some nonstandard function spaces; these will be explained after the theorem statement:

Theorem 1.1

(Proved in Sect. 4.2) Let \(\frac{n}{2} - 1 < s \in \mathbb {N}\) and consider the open set

$$\begin{aligned} U^s_\delta = \{(u,p,\eta ) \in {_n}H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s+2}(\Omega ) \times \mathcal {H}^{s+3/2}(\Sigma ) \;\vert \;\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta \} \end{aligned}$$
(1.15)

with \(\delta >0\) as constructed in Theorem 4.3. Define the open set \(\mathfrak {C}\subseteq \mathbb {R}\) to be \(\mathbb {R}\) if \(\Gamma = \mathbb {T}^{n-1}\) and \(\mathbb {R}\backslash \{0\}\) if \(\Gamma = \mathbb {R}^{n-1}\). Then there exist open sets

$$\begin{aligned} \mathcal {D}^s \subseteq \mathfrak {C} \times H^{s+3/2}(\Sigma ) \times H^{s+3}(\Gamma \times \mathbb {R}) \times H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s+2}(\Gamma \times \mathbb {R};\mathbb {R}^n) \text { and } \mathcal {S}^s \subseteq U^s_\delta \end{aligned}$$
(1.16)

such that the following hold:

  1. (1)

    \(\mathfrak {C} \times \{0\} \times \{0\} \times \{0\} \times \{0\} \subseteq \mathcal {D}^s\) and \((0,0,0) \in \mathcal {S}^s\).

  2. (2)

    For each \((\gamma , \varphi _0, \varphi _1, \mathfrak {f}_0, \mathfrak {f}_1) \in \mathcal {D}^s\) there exists a locally unique \((u,p,\eta ) \in \mathcal {S}^s\) classically solving

    $$\begin{aligned} {\left\{ \begin{array}{ll} u + \nabla _{\mathcal {A}} p + \nabla _{\mathcal {A}} \mathfrak {P}\eta = \mathfrak {J}\mathcal {M}^T[ \mathfrak {f}_0 + \mathfrak {f}_1\circ \mathfrak {F}_\eta ] &{} \text {in }\Omega :=\Gamma \times (-b, 0) \\ {{\,\textrm{div}\,}}{u} = 0 &{}\text {in } \Omega \\ -\gamma \partial _1 \eta = u _n &{}\text {on } \Sigma :=\Gamma \times \{0\} \\ p = \varphi _0 + \varphi _1 \circ \mathfrak {F}_\eta &{}\text {on } \Sigma \\ u_n =0 &{} \text {on } \Sigma _{-b}. \end{array}\right. } \end{aligned}$$
    (1.17)
  3. (3)

    The map \(\mathcal {D}^s \ni (\gamma , \varphi _0, \varphi _1, \mathfrak {f}_0, \mathfrak {f}_1) \mapsto (u,p,\eta ) \in \mathcal {S}^s\) is \(C^1\) and locally Lipschitz.

Some remarks are in order.

  1. (1)

    (1.17) is a reformulation of (1.12) in the fixed domain \(\Omega \) and with \(\mathfrak {f}(x)=\mathfrak {f}_0(x')+\mathfrak {f}_1(x)\) and \(\varphi (x)=\varphi _0(x')+\varphi _1(x)\). See Sect. 2.1 for the derivation of (1.17) and for the precise meaning of \(\mathcal {A}\), \(\mathfrak {J}\), \(\mathcal {M}\) and \(\mathfrak {F}_\eta \). The preceding forms of \(\mathfrak {f}\) and \(\varphi \) are imposed since we assume less regularity for \(\mathfrak {f}\) and \(\varphi \) when they are independent of the vertical variable \(x_n\). Note also that the integer constraint for the regularity parameter s comes from the need to verify that the maps \((\mathfrak {f}_1,\eta ) \mapsto \mathfrak {f}_1 \circ \mathfrak {F}_\eta \) and \((\varphi _1,\eta ) \mapsto \varphi _1 \circ \mathfrak {F}_\eta \) are \(C^1\). If these forcing terms are ignored, then we may relax this requirement for s: see Theorems 6.1 and 6.3.

  2. (2)

    The space \({_n}H^{s+1}(\Omega ;\mathbb {R}^n)\) is defined in Definition 3.13.

  3. (3)

    When the cross-section is \(\mathbb {T}^{n-1}\), the boundary function \(\eta \) is constructed in \(\mathring{H}^s(\mathbb {T}^{n-1})\), the usual Sobolev space of zero-mean functions. On the other hand, when the cross-section is \(\mathbb {R}^{n-1}\), then \(\eta \) belongs to the anisotropic Sobolev space \(\mathcal {H}^{s+\frac{3}{2}}(\mathbb {R}^{n-1})\), as defined in Definition A.1. At high frequencies this space provides standard \(H^{s+3/2}\) Sobolev control, but at low frequencies it only controls

    $$\begin{aligned} \int _{B(0,1)} \frac{\xi _1^2+|\xi |^4}{|\xi |^2}|\widehat{\eta }(\xi )|^2\textrm{d}\xi . \end{aligned}$$
    (1.18)

    The modulus \(\frac{\xi _1^2+|\xi |^4}{|\xi |^2}\) naturally arises from the linearized operator \(\gamma \partial _1-|D|\tanh (b|D|)\) and the following structure of the nonlinearity at low frequencies: \(\mathcal {N}=|D|\widetilde{\mathcal {N}}\). The anisotropic Sobolev space \(\mathcal {H}^s(\mathbb {R}^d)\), which satisfies the inclusions \(H^s(\mathbb {R}^d) \subset \mathcal {H}^s(\mathbb {R}^d) \subseteq H^s(\mathbb {R}^d) + C^\infty _0(\mathbb {R}^d)\), was introduced in [9] for the construction of traveling wave solutions to the free boundary Navier–Stokes equations and plays a key role in our construction here. We recall the definition and basic properties of \(\mathcal {H}^s\) in Appendix A.1.

  4. (4)

    Theorem 1.1 asserts the uniqueness of traveling wave solutions in the small but does not exclude the possibility of nonuniqueness in the large.

Our proof of Theorem 1.1 is based on the implicit function theorem, applied in a neighborhood of the trivial solutions obtained with \(\gamma \in \mathfrak {C}\), \(\mathfrak {f}_0 = \mathfrak {f}_1 = 0\), \(\varphi _0 = \varphi _1 =0\), \(u=0\), \(p=0\), and \(\eta =0\). In order for this strategy to work, we need a good understanding of the solvability of the linear problem

$$\begin{aligned} {\left\{ \begin{array}{ll} u + \nabla p + \nabla \mathfrak {P}\eta = F &{} \text {in }\Omega \\ {{\,\textrm{div}\,}}{u} = G &{}\text {in } \Omega \\ u_n +\gamma \partial _1 \eta = H &{}\text {on } \Sigma \\ p = K &{}\text {on } \Sigma \\ u_n = 0 &{} \text {on } \Sigma _{-b}, \end{array}\right. } \end{aligned}$$
(1.19)

which is obtained by linearizing the flattened reformulation of (1.12) given in (1.17) around the trivial solutions. More precisely, we need to identify appropriate function spaces \(\mathbb {E}\) and \(\mathbb {F}\) for which the linear map \(\mathbb {E} \ni (u,p,\eta ) \mapsto (F,G,H,K) \in \mathbb {F}\) induced by (1.19) is an isomorphism. When \(\Gamma = \mathbb {T}^{n-1}\) the function spaces we employ are standard \(L^2-\)Sobolev spaces, but when \(\Gamma = \mathbb {R}^{n-1}\) even identifying appropriate spaces turns out to be quite delicate for a couple of reasons. First, in \(L^2-\)Sobolev spaces on the infinite domain \(\Omega = \mathbb {R}^{n-1} \times (-b,0)\) there are some subtle compatibility conditions that the data tuple \((F,G,H,K)\) need to satisfy, and these need to be encoded in \(\mathbb {F}\). Second, as mentioned in the above remarks, even when the data satisfy the appropriate compatibility conditions, the free surface function \(\eta \) necessarily lives in the strange anisotropic Sobolev spaces given in Definition A.1, which behave like standard \(L^2-\)based Sobolev spaces at large frequencies but have unusual anisotropic behavior at low frequencies (for instance, these spaces are not closed under composition with rotations). Similar issues arose in the second author’s recent work on the construction of traveling wave solutions to the incompressible Navier–Stokes system [7, 9, 14], and fortunately, we were able to adapt some of the techniques used in those works to handle the Muskat construction of Theorem 1.1.

In identifying the appropriate function spaces, we also uncover the method for showing that (1.19) induces an isomorphism. We first take the divergence of the first equation and eliminate u to arrive at a problem for p and \(\eta \) only, (3.1). To solve this problem we initially ignore the \(\eta \) terms and view the resulting problem as an overdetermined problem for p, (3.13). This overdetermined problem is only solvable for data satisfying certain compatibility conditions, reminiscent of those from the closed range theorem, which we identify in Sect. 3.2. These turn out to be the key to solving (3.1), as they lead us to a pseudodifferential equation for \(\eta \) that can be solved independently of p:

$$\begin{aligned} \left[ - i \gamma \xi _1 + \left| \xi \right| \tanh ( \left| \xi \right| b)\right] \hat{\eta }(\xi ) = \psi (\xi ). \end{aligned}$$
(1.20)

Here \(\psi \) is a specific function determined linearly by the data in (3.1) (see (3.59) for the precise definition). It is this equation that forces \(\eta \) into the anisotropic Sobolev spaces, but in turn the spaces allow us to construct \(\eta \) and verify that it is a reasonably nice function. Note that when \(\Gamma = \mathbb {R}^{n-1}\) we require \(\gamma \ne 0\) precisely because this term is responsible for ensuring that \(\eta \) is a nice function; in the case \(\gamma =0\) we lose the ability to verify this. With \(\eta \) in hand, we can then solve for p and show that (3.1) induces an isomorphism (see Theorem 3.12). Then in Theorem 3.17 we show that we can return to (1.19) and uncover an isomorphism. Finally, in Sect. 4, we verify that our function spaces are nice enough to be used in an implicit function theorem argument and then employ the IFT to prove Theorem 1.1.
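
To give a heuristic sense of why (1.20) forces \(\eta \) into these spaces and why \(\gamma \ne 0\) matters when \(\Gamma = \mathbb {R}^{n-1}\) (the precise statements are in Sect. 3), divide by the symbol:

$$\begin{aligned} \hat{\eta }(\xi ) = \frac{\psi (\xi )}{- i \gamma \xi _1 + \left| \xi \right| \tanh ( \left| \xi \right| b)}, \qquad \left| - i \gamma \xi _1 + \left| \xi \right| \tanh ( \left| \xi \right| b)\right| ^2 = \gamma ^2 \xi _1^2 + \left| \xi \right| ^2 \tanh ^2( \left| \xi \right| b). \end{aligned}$$

For \(|\xi | \leqq 1\) we have \(\tanh ( \left| \xi \right| b) \simeq \left| \xi \right| \) (with constants depending on b), so for \(\gamma \ne 0\) the squared symbol is comparable to \(\xi _1^2 + |\xi |^4\), and consequently

$$\begin{aligned} \int _{B(0,1)} \frac{\xi _1^2+|\xi |^4}{|\xi |^2}|\hat{\eta }(\xi )|^2\,\textrm{d}\xi \lesssim \int _{B(0,1)} \frac{|\psi (\xi )|^2}{|\xi |^2}\,\textrm{d}\xi , \end{aligned}$$

which is exactly the low-frequency contribution (1.18) to the \(\mathcal {H}^{s+3/2}\) norm; the right-hand side is in turn bounded by the data (see (3.61)). When \(\Gamma = \mathbb {R}^{n-1}\) and \(\gamma = 0\), the symbol degenerates like \(b|\xi |^2\) near \(\xi = 0\), and this low-frequency control is lost.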

It is natural to investigate the stability of the traveling wave solutions constructed in Theorem 1.1, and we next turn to this topic. We expect that the stability analysis depends on the type and form of external forces. In our second main result, we prove that under the sole effect of the external pressure (i.e. \(\mathfrak {f}_0=\mathfrak {f}_1=0\)), the small periodic traveling wave solutions constructed in Theorem 1.1 are asymptotically stable. For simplicity we state the result for \(\varphi (x)=\varphi _0(x')\).

Theorem 1.2

(Proved in Sect. 6.2 ) Let \(\gamma \in \mathbb {R}\) and \(1+\frac{n-1}{2}<s\in \mathbb {R}\). There exists a small positive constant \(\varepsilon _*=\varepsilon _*(s, b, n)\) such that if \(\Vert \nabla \varphi _0\Vert _{H^{s-\frac{1}{2}}(\mathbb {T}^{n-1})}<\varepsilon _*\), then the unique steady solution \(\eta _*\in \mathring{H}^s(\mathbb {T}^{n-1})\) of (2.17) with \(\varphi (x)=\varphi _0(x')\) is asymptotically stable in \(\mathring{H}^s(\mathbb {T}^{n-1})\). More precisely, there exist positive constants \(\nu \) and \(\delta \), both depending only on (sbn), such that if \(\eta _0 \in \mathring{H}^s(\mathbb {T}^{n-1})\) satisfies \(\Vert \eta _0-\eta _*\Vert _{\mathring{H}^s}<\delta \), then the dynamic problem (2.17) with initial data \(\eta _0\) has a unique solution \(\eta \in \eta _*+B_{Y^s([0, T])}(0, \nu )\) for all \(T>0\), where

$$\begin{aligned} Y^s([0, T])=\widetilde{L}^\infty ([0, T]; \mathring{H}^s(\mathbb {T}^{n-1}) )\cap L^2([0, T]; \mathring{H}^{s+\frac{1}{2}}(\mathbb {T}^{n-1})); \end{aligned}$$
(1.21)

moreover, we have the estimates

$$\begin{aligned} \Vert \eta (t)-\eta _*\Vert _{H^s}\leqq \Vert \eta _0-\eta _*\Vert _{H^s}e^{-c_0t}\quad \forall t>0 \end{aligned}$$
(1.22)

and

$$\begin{aligned} \int _0^\infty \Vert \eta (t)-\eta _*\Vert _{H^{s+\frac{1}{2}}}^2\textrm{d}t\leqq \frac{1}{2c_0}\Vert \eta _0-\eta _*\Vert ^2_{H^s}, \end{aligned}$$
(1.23)

where \(c_0=c_0(s, b, n)\).

To the best of our knowledge, Theorem 1.2 provides the first class of nontrivial stable solutions to the one-phase Muskat problem with graph free boundary.

Inspired by the proof of stability of the trivial solution for the Muskat problem in [11], we obtain the stability of small periodic traveling wave solutions by linearizing the Dirichlet–Neumann operator about the flat surface,

$$\begin{aligned} G(\eta )h=m(D)h+R(\eta )h,\quad m(D)=|D|\tanh (b|D|), \end{aligned}$$
(1.24)

and establishing good boundedness and contraction estimates for the remainder \(R(\eta )\). More precisely, the results obtained in Sect. 5 imply the estimates

$$\begin{aligned} \Vert |D|^{-\frac{1}{2}}R(\eta )h\Vert _{H^{s}(\mathbb {T}^d)}\lesssim \Vert \eta \Vert _{H^{s}(\mathbb {T}^d)}\Vert \nabla h\Vert _{H^{s-\frac{1}{2}}(\mathbb {T}^d)}+ \Vert \eta \Vert _{H^{s+\frac{1}{2}}(\mathbb {T}^d)}\Vert \nabla h\Vert _{H^{s-1}(\mathbb {T}^d)} \end{aligned}$$
(1.25)

and

$$\begin{aligned} \begin{aligned}&\Vert |D|^{-\frac{1}{2}}\left\{ R(\eta _1)h-R(\eta _2)h\right\} \Vert _{H^{s}(\mathbb {T}^d)}\\&\quad \lesssim \Vert \eta _\delta \Vert _{H^{s}(\mathbb {T}^d)}\Big (\Vert \nabla h\Vert _{H^{s-\frac{1}{2}}(\mathbb {T}^d)}+\Vert \eta _1\Vert _{H^{s+\frac{1}{2}}(\mathbb {T}^d)}\Vert \nabla h\Vert _{H^{s-1}(\mathbb {T}^d)}\Big )\\&\quad +\Vert \eta _\delta \Vert _{H^{s+\frac{1}{2}}(\mathbb {T}^d)} \Vert \nabla h\Vert _{H^{s-1}(\mathbb {T}^d)}, \end{aligned} \end{aligned}$$
(1.26)

where \(\eta _\delta =\eta _1-\eta _2\) and \(d=n-1\). The estimates in Sect. 5 for the Dirichlet–Neumann operator are obtained for the free boundary belonging to the anisotropic Sobolev spaces \(\mathcal {H}^s(\Gamma )\), \(\Gamma \in \{\mathbb {R}^d, \mathbb {T}^d\}\), and are of independent interest.

Now, fix a traveling wave solution \(\eta _*\) with data \(\varphi _0\). The perturbation \(f=\eta -\eta _*\) then satisfies

$$\begin{aligned} \partial _tf=\gamma \partial _1 f-m(D)f+\left[ R(\eta _*)(\eta _*+{\varphi _0})-R(\eta _*+f)(\eta _*+{\varphi _0})\right] -R(\eta _*+f)f. \end{aligned}$$
(1.27)
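
Indeed, subtracting the steady equation \(0 = \gamma \partial _1 \eta _* - G(\eta _*)(\eta _*+\varphi _0)\) from (2.17) for \(\eta = \eta _* + f\) and using the decomposition (1.24), we find

$$\begin{aligned} \partial _t f = \gamma \partial _1 f - m(D) f - R(\eta _*+f)(\eta _*+f+\varphi _0) + R(\eta _*)(\eta _*+\varphi _0), \end{aligned}$$

and regrouping the last two terms yields (1.27).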

If \(f_0 \in H^s(\mathbb {T}^d)\) has zero mean, then f(t) has zero mean for all \(t>0\). When performing the \(H^s\) energy estimate, the dissipation term m(D) yields a gain of \(\frac{1}{2}\) derivative:

$$\begin{aligned} (m(D)f, f)_{H^s(\mathbb {T}^d)}\geqq c_0(s, b, d)\Vert f\Vert _{H^{s+\frac{1}{2}}(\mathbb {T}^d)}^2. \end{aligned}$$
(1.28)
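
Indeed, since f has zero mean, its Fourier coefficients are supported on \(\mathbb {Z}^d\setminus \{0\}\), where \(|\xi | \geqq 1\) and hence \(|\xi |\tanh (b|\xi |) \geqq \tanh (b)|\xi | \geqq \frac{\tanh (b)}{\sqrt{2}}(1+|\xi |^2)^{1/2}\). By Parseval (with the Fourier coefficients normalized so that Parseval is an isometry),

$$\begin{aligned} (m(D)f, f)_{H^s(\mathbb {T}^d)} = \sum _{0\ne \xi \in \mathbb {Z}^d} |\xi |\tanh (b|\xi |)\,(1+|\xi |^2)^{s}\,|\hat{f}(\xi )|^2 \geqq \frac{\tanh (b)}{\sqrt{2}}\,\Vert f\Vert _{H^{s+\frac{1}{2}}(\mathbb {T}^d)}^2, \end{aligned}$$

so (1.28) holds with, for instance, the crude constant \(c_0 = \tanh (b)/\sqrt{2}\).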

On the other hand, by virtue of (1.25) and (1.26), the contribution of the nonlinear terms in (1.27) to the \(H^s\) energy estimate is bounded by

$$\begin{aligned} C\big (\alpha +\Vert f\Vert _{H^s(\mathbb {T}^d)}\big )\Vert f\Vert _{H^{s+\frac{1}{2}}(\mathbb {T}^d)}^2, \end{aligned}$$
(1.29)

where the coefficient \(\alpha \) is small when \(\varphi _0\) and \(\eta _*\) are small. Therefore, if \(\Vert f(t)\Vert _{H^s}\) is small globally, then it decays exponentially. On the other hand, the global existence and smallness of \(\Vert f(t)\Vert _{H^s}\) are proved by appealing to the estimates (1.25) and (1.26) again for the mild-solution formulation of (1.27).
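
Schematically, and up to the precise values of the constants (the rigorous argument in Sect. 6 works with the mild-solution formulation), the \(H^s\) energy estimate for (1.27) combines (1.28) and (1.29) as follows: since \((\gamma \partial _1 f, f)_{H^s} = 0\),

$$\begin{aligned} \frac{1}{2}\frac{\textrm{d}}{\textrm{d}t}\Vert f\Vert _{H^s}^2 + (m(D)f, f)_{H^s} \leqq C\big (\alpha +\Vert f\Vert _{H^s}\big )\Vert f\Vert _{H^{s+\frac{1}{2}}}^2, \end{aligned}$$

so as long as \(C(\alpha +\Vert f\Vert _{H^s}) \leqq c_0/2\) we obtain \(\frac{\textrm{d}}{\textrm{d}t}\Vert f\Vert _{H^s}^2 \leqq -c_0\Vert f\Vert _{H^{s+\frac{1}{2}}}^2 \leqq -c_0\Vert f\Vert _{H^{s}}^2\) (the last inequality again uses the zero mean of f); estimates of the type (1.22) and (1.23) then follow by Grönwall and by integrating the dissipation in time.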

2 Problem Reformulations

In this section we present the two reformulations we will use in proving our two main theorems. The first is a reformulation of the general traveling wave system (1.12) in a flattened domain. The second, used when the generic body force \(\mathfrak {f}\) is absent, recasts the dynamic problem (1.7) in terms of the Dirichlet–Neumann operator.

2.1 Flattening the Traveling Wave System

Consider the flat domain \(\Omega := \Omega _0 = \Gamma \times (-b,0)\) and write \(\Sigma = \Sigma _0 = \Gamma \times \{0\}\). We define the Poisson extension operator \(\mathfrak {P}\) as in Appendix 7. Assuming that \(\eta \in \mathcal {H}^{s+3/2}(\Sigma )\) (see Definition A.1 for the precise definition of this anisotropic Sobolev space), we define the flattening map \(\mathfrak {F}_\eta : \bar{\Omega } \rightarrow \bar{\Omega }_\eta \) via

$$\begin{aligned} \mathfrak {F}_\eta (x) = (x', x_n + \mathfrak {P}\eta (x)(1+x_n/b)) = x + \mathfrak {P}\eta (x)\left( 1+x_n/b\right) e_n. \end{aligned}$$
(2.1)

Note that \(\mathfrak {F}_\eta \vert _{\Sigma _{-b}} = I\) and \(\mathfrak {F}_\eta (\Sigma ) = \Sigma _{\eta }\). We compute

$$\begin{aligned} \nabla \mathfrak {F}_\eta (x) = \begin{pmatrix} I_{n-1 } &{} 0_{(n-1) \times 1} \\ (1+x_n/b) \nabla '\mathfrak {P}\eta (x) &{} 1 + \mathfrak {P}\eta (x) /b + \partial _n \mathfrak {P}\eta (x) (1+x_n/b) \end{pmatrix}. \end{aligned}$$
(2.2)

We define the functions \(\mathfrak {J},\mathfrak {K}: \Omega \rightarrow (0,\infty )\) via

$$\begin{aligned} \mathfrak {J}(x) = \det \nabla \mathfrak {F}_\eta (x) = 1 + \mathfrak {P}\eta (x) /b + \partial _n \mathfrak {P}\eta (x) (1+x_n/b) \text { and } \mathfrak {K}(x) = 1/\mathfrak {J}(x). \end{aligned}$$
(2.3)

It will be useful to introduce the matrix field \(\mathcal {M}: \Omega \rightarrow \mathbb {R}^{n \times n}\) via

$$\begin{aligned} \mathcal {M}(x) = (\nabla \mathfrak {F}_\eta (x))^{-T} = \begin{pmatrix} I_{n-1} &{} - \mathfrak {K}(x)(1+x_n/b) \nabla ' \mathfrak {P}\eta (x) \\ 0_{1 \times (n-1)} &{} \mathfrak {K}(x) \end{pmatrix}. \end{aligned}$$
(2.4)

Our interest in the field \(\mathcal {M}\) comes from a trio of useful identities it satisfies. The first is the Piola identity,

$$\begin{aligned} \partial _j[\mathfrak {J}\mathcal {M}_{ij} ] =0 \text { for } 1\leqq i \leqq n. \end{aligned}$$
(2.5)
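
For the specific \(\mathcal {M}\) above, (2.5) can be verified by hand: using (2.3), for \(1\leqq i \leqq n-1\) we compute

$$\begin{aligned} \partial _j[\mathfrak {J}\mathcal {M}_{ij}] = \partial _i \mathfrak {J} + \partial _n\big [-(1+x_n/b)\partial _i \mathfrak {P}\eta \big ] = \Big (\tfrac{1}{b}\partial _i \mathfrak {P}\eta + (1+x_n/b)\partial _i\partial _n \mathfrak {P}\eta \Big ) - \Big (\tfrac{1}{b}\partial _i \mathfrak {P}\eta + (1+x_n/b)\partial _n\partial _i \mathfrak {P}\eta \Big ) = 0, \end{aligned}$$

while \(\mathfrak {J}\mathcal {M}_{nj} = \delta _{nj}\), so the identity also holds for \(i = n\).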

The second and third are a pair of identities on \(\Sigma \) and \(\Sigma _{-b}\):

$$\begin{aligned} \mathfrak {J}\mathcal {M}e_n \vert _{\Sigma } = (-\nabla ' \eta ,1) \text { and } \mathfrak {J}\mathcal {M}e_n \vert _{\Sigma _{-b}} = e_n. \end{aligned}$$
(2.6)

To see the utility of the Piola identity note that \(v: \Omega _\eta \rightarrow \mathbb {R}^n\) satisfies \({{\,\textrm{div}\,}}{v}=0\) if and only if \(\hat{v} = v \circ \mathfrak {F}_\eta : \Omega \rightarrow \mathbb {R}^n\) satisfies \(\mathfrak {J}\mathcal {M}_{ij} \partial _j \hat{v}_i =0\) (the summation convention is used here), but

$$\begin{aligned} \mathfrak {J}\mathcal {M}_{ij} \partial _j \hat{v}_i = \partial _j[\mathfrak {J}\mathcal {M}_{ij} \hat{v}_i ] = \partial _j[\mathfrak {J}\mathcal {M}^T_{ji} \hat{v}_i ], \end{aligned}$$
(2.7)

so a further equivalent condition is that \(u = \mathfrak {J}\mathcal {M}^T \hat{v}: \Omega \rightarrow \mathbb {R}^n\) satisfies \({{\,\textrm{div}\,}}{u}=0\).

In light of the previous calculations, we use \(\mathfrak {F}_\eta \) and \(\mathcal {M}\) to rephrase the traveling wave Muskat system (1.12) in the fixed domain \(\Omega \) by defining \(u = \mathfrak {J}\mathcal {M}^T v \circ \mathfrak {F}_\eta \) and \(p = -\mathfrak {P}\eta + q \circ \mathfrak {F}_\eta \). The new system reads

$$\begin{aligned} {\left\{ \begin{array}{ll} u + \nabla _{\mathcal {A}} p + \nabla _{\mathcal {A}} \mathfrak {P}\eta = \mathfrak {J}\mathcal {M}^T \mathfrak {f}\circ \mathfrak {F}_\eta &{} \text {in }\Omega \\ {{\,\textrm{div}\,}}{u} = 0 &{}\text {in } \Omega \\ -\gamma \partial _1 \eta = u_n &{}\text {on } \Sigma \\ p = \varphi \circ \mathfrak {F}_\eta &{}\text {on } \Sigma \\ u_n =0 &{} \text {on } \Sigma _{-b}, \end{array}\right. } \end{aligned}$$
(2.8)

where \(\mathcal {A}: \Omega \rightarrow \mathbb {R}^{n \times n}_{sym}\) is defined by

$$\begin{aligned} \mathcal {A}(x){} & {} = \mathfrak {J}(x) \mathcal {M}^T(x) \mathcal {M}(x)\nonumber \\{} & {} = \begin{pmatrix} \mathfrak {J}(x) I_{n-1} &{} - (1+x_n/b) \nabla ' \mathfrak {P}\eta (x) \\ - (1+x_n/b) \nabla ' \mathfrak {P}\eta (x) &{} \mathfrak {K}(x) + \mathfrak {K}(x) (1+x_n/b)^2 \left| \nabla ' \mathfrak {P}\eta (x)\right| ^2 \end{pmatrix},\nonumber \\ \end{aligned}$$
(2.9)

and we write

$$\begin{aligned} \nabla _{\mathcal {A}}\psi = \mathcal {A}\nabla \psi . \end{aligned}$$
(2.10)
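
Let us briefly indicate how (2.8) arises. Writing \(\hat{v} = v \circ \mathfrak {F}_\eta \) and using the chain rule \((\nabla q)\circ \mathfrak {F}_\eta = \mathcal {M}\nabla (q\circ \mathfrak {F}_\eta ) = \mathcal {M}\nabla (p + \mathfrak {P}\eta )\), composing the first equation of (1.12) with \(\mathfrak {F}_\eta \) and multiplying by \(\mathfrak {J}\mathcal {M}^T\) gives

$$\begin{aligned} u + \mathfrak {J}\mathcal {M}^T\mathcal {M}\nabla (p + \mathfrak {P}\eta ) = \mathfrak {J}\mathcal {M}^T \mathfrak {f}\circ \mathfrak {F}_\eta , \end{aligned}$$

which is the first equation of (2.8) since \(\mathcal {A} = \mathfrak {J}\mathcal {M}^T\mathcal {M}\). For the boundary conditions, (2.6) gives \(u_n = u\cdot e_n = \hat{v}\cdot \mathfrak {J}\mathcal {M}e_n = \hat{v}\cdot (-\nabla '\eta , 1)\) on \(\Sigma \) and \(u_n = \hat{v}\cdot e_n\) on \(\Sigma _{-b}\), so the kinematic and no-penetration conditions in (1.12) become \(-\gamma \partial _1 \eta = u_n\) on \(\Sigma \) and \(u_n = 0\) on \(\Sigma _{-b}\). Finally, since \(\mathfrak {P}\eta = \eta \) on \(\Sigma \), the condition \(q - \eta = \varphi \) on \(\Sigma _\eta \) becomes \(p = q\circ \mathfrak {F}_\eta - \eta = \varphi \circ \mathfrak {F}_\eta \) on \(\Sigma \).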

2.2 Dirichlet–Neumann Reformulation

We consider the dynamic Muskat problem (1.7) with \(\widetilde{\mathfrak {f}}=0\) and \(\phi (x,t) = \varphi (x-\gamma t e_1)\). In the moving frame \(x\mapsto x-\gamma t e_1\), we make the change of variables

$$\begin{aligned}{} & {} \zeta (x', t)=\eta (x'-\gamma t e_1, t),\quad w(x,t) = v(x-\gamma t e_1, t),\nonumber \\{} & {} P(x,t) = P_0 - x_n + q(x - \gamma t e_1, t) \end{aligned}$$
(2.11)

to obtain the system

$$\begin{aligned} {\left\{ \begin{array}{ll} v+\nabla q=0\quad \text {in } \Omega _\eta ,\\ {{\,\textrm{div}\,}}v=0\quad \text {in } \Omega _\eta ,\\ \partial _t \eta -\gamma \partial _1 \eta =v\cdot (-\nabla ' \eta , 1)\quad \text {on } \Sigma _\eta ,\\ q=\eta +\varphi \quad \text {on } \Sigma _\eta ,\\ v_n=0\quad \text {on } \Sigma _{-b}. \end{array}\right. } \end{aligned}$$
(2.12)

This problem can be recast on the free boundary by means of the Dirichlet–Neumann operator (2.14) defined as follows. Let \(\psi \) be the solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta \psi =0\quad \text {in } \Omega _\eta ,\\ \psi =f\quad \text {on } \Sigma _\eta ,\\ \partial _n\psi =0\quad \text {on } \Sigma _{-b}. \end{array}\right. } \end{aligned}$$
(2.13)

The Dirichlet–Neumann operator associated to \(\Omega _\eta \) is denoted by \(G(\eta )\) and

$$\begin{aligned} {[}G(\eta )f](x'):=N(x')\cdot (\nabla \psi )(x', \eta (x')),\quad N(x')=(-\nabla ' \eta (x'), 1). \end{aligned}$$
(2.14)
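
For example, when \(\eta = 0\) the fluid domain is the flat strip \(\Omega \) and (2.13) can be solved explicitly by the horizontal Fourier transform: \(\hat{\psi }(\xi , x_n) = \hat{f}(\xi )\cosh (|\xi |(x_n+b))/\cosh (|\xi |b)\) (compare Theorem 3.4), so that

$$\begin{aligned} G(0)f = \partial _n \psi (\cdot , 0) = |D|\tanh (b|D|) f = m(D) f, \end{aligned}$$

the Fourier multiplier appearing in the decomposition (1.24).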

By taking the divergence of the first equation in (2.12), we deduce that q satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta q=0\quad \text {in } \Omega _\eta ,\\ q=\eta +\varphi (\cdot , \eta (\cdot ))\quad \text {on } \Sigma _\eta ,\\ \partial _nq=0\quad \text {on } \Sigma _{-b}. \end{array}\right. } \end{aligned}$$
(2.15)

It follows from the first equation in (2.12), together with (2.15) and the definition (2.14), that

$$\begin{aligned} v\cdot (-\nabla '\eta , 1)=-\nabla q\cdot (-\nabla '\eta , 1)=-G(\eta )\big (\eta +\varphi (\cdot , \eta (\cdot ))\big ), \end{aligned}$$
(2.16)

where \(G(\eta )\) denotes the Dirichlet–Neumann operator for \(\Omega _\eta \). Combining this with the third equation in (2.12), we conclude that \(\eta \) obeys the equation

$$\begin{aligned} \partial _t\eta =\gamma \partial _1\eta -G(\eta )\big (\eta +\varphi (\cdot , \eta (\cdot ))\big )\quad \text {on } \Gamma \times \mathbb {R}_+. \end{aligned}$$
(2.17)

3 Linear Analysis for the Traveling Wave System

In this section we study the linearization of (2.8) around the trivial solution, which is the system (1.19), where (FGHK) are given data. Note that for the purposes of studying the linearization of (2.8) we could reduce to the case \(G=0\) and \(H=0\); we have retained these terms here for the sake of generality.

We can eliminate u to get an equivalent formulation of the problem. Indeed, we take the divergence of the first equation and then use the first equation to remove u from the boundary conditions. This results in the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta p = G-{{\,\textrm{div}\,}}{F} &{} \text {in }\Omega \\ -\partial _n p -\partial _n \mathfrak {P}\eta +\gamma \partial _1 \eta = H - F_{n}(\cdot ,0) &{}\text {on } \Sigma \\ p = K &{}\text {on } \Sigma \\ -\partial _n p - \partial _n \mathfrak {P}\eta = -F_{n}(\cdot ,-b) &{} \text {on } \Sigma _{-b}. \end{array}\right. } \end{aligned}$$
(3.1)

We will study this form of the problem and eventually show that it is equivalent to (1.19).

Remark 3.1

Throughout what follows we will often abuse notation by identifying

$$\begin{aligned} \Sigma \simeq \Sigma _{-b} \simeq \Gamma \in \{\mathbb {R}^{n-1}, \mathbb {T}^{n-1}\} \end{aligned}$$
(3.2)

in order to allow us to handle linear combinations of functions defined on \(\Sigma \), \(\Sigma _{-b}\), and \(\Gamma \) in a simple way. In reality we actually identify these through the natural isometric isomorphism, but this is obvious and the corresponding notation is too cumbersome to introduce.

3.1 The upper-Dirichlet–lower-Neumann Isomorphism

Consider the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta p = f &{} \text {in } \Omega \\ p = k &{} \text {on } \Sigma \\ -\partial _n p = l &{}\text {on } \Sigma _{-b} \end{array}\right. } \end{aligned}$$
(3.3)

for given \((f,k,l) \in H^s(\Omega ) \times H^{s+3/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b})\). Associated to this PDE is the bounded linear map

$$\begin{aligned} T_0 : H^{s+2}(\Omega ) \rightarrow H^s(\Omega ) \times H^{s+3/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b}) \end{aligned}$$
(3.4)

given by

$$\begin{aligned} T_0 p = (-\Delta p, p \vert _{\Sigma }, -\partial _n p \vert _{\Sigma _{-b}}). \end{aligned}$$
(3.5)

Theorem 3.2

The map \(T_0\) is an isomorphism for every \(0 \leqq s \in \mathbb {R}\).

Proof

To see that \(T_0\) is injective we suppose that \(T_0 p =0\), multiply the resulting equation \(-\Delta p =0\) by p and integrate by parts. Using the boundary conditions contained in the identity \(T_0 p =0\), we deduce that \(\int _{\Omega } \left| \nabla p\right| ^2 = 0,\) and so p is constant in \(\Omega \); since \(p=0\) on \(\Sigma \), we conclude that \(p=0\), and injectivity is proved.

It remains to prove that \(T_0\) is surjective, and this ultimately boils down to the weak solvability and elliptic regularity associated to the problem (3.3), which we will briefly sketch. We initially define the space \({^0}H^1(\Omega ) = \{f \in H^1(\Omega ) \;\vert \;f=0 \text { on } \Sigma \},\) which we can equip with the inner-product \(\left( f,g \right) _{{^0}H^1} = \int _\Omega \nabla f \cdot \nabla g\). This is indeed an inner-product and generates the usual \(H^1\) topology thanks to a Poincaré-type inequality provided by the vanishing on \(\Sigma \). Then by Riesz representation, for any \(\mathcal {F} \in ( {^0}H^1(\Omega ))^*\), there exists a unique \(p \in {^0}H^1(\Omega )\) such that

$$\begin{aligned} \int _{\Omega } \nabla p \cdot \nabla q = \langle \mathcal {F},q \rangle \text { for all } q \in {^0}H^1(\Omega ), \text { and } \left\| p\right\| _{ {^0}H^1} = \left\| \mathcal {F}\right\| _{( {^0}H^1(\Omega ))^*}. \end{aligned}$$
(3.6)

Next we consider \(s \in \mathbb {N}\) and data \(f \in H^{s}(\Omega )\) and \(l \in H^{s+1/2}(\Sigma _{-b})\). According to standard trace theory and the above, we can then find a unique \(p \in {^0}H^1(\Omega )\) such that

$$\begin{aligned} \int _{\Omega } \nabla p \cdot \nabla q = \int _{\Omega } fq + \int _{\Sigma _{-b}} l q \text { for all } q \in {^0}H^1(\Omega ), \text { and } \left\| p\right\| _{ {^0}H^1} \lesssim \left\| f\right\| _{H^s} + \left\| l\right\| _{H^{s+1/2}}. \end{aligned}$$
(3.7)

Standard interior elliptic regularity shows that \(p \in H^{s+2}_{\text {loc}}(\Omega )\) and \(-\Delta p =f\) in \(\Omega \). Using horizontal difference quotients, we may deduce in turn that

$$\begin{aligned} \sum _{\left| \alpha \right| \leqq s+1, \alpha _n=0} \left\| \partial ^\alpha p\right\| _{ {^0}H^1} \lesssim \left\| f\right\| _{H^s} + \left\| l\right\| _{H^{s+1/2}}. \end{aligned}$$
(3.8)

We then recover control of the vertical derivatives by using the identity \(-\partial _n^2 p = \Delta ' p + f\) together with a simple iteration argument; this yields the inclusion \(p \in H^{s+2}(\Omega )\) with the estimate

$$\begin{aligned} \left\| p\right\| _{H^{s+2}} \lesssim \left\| f\right\| _{H^s} + \left\| l\right\| _{H^{s+1/2}}. \end{aligned}$$
(3.9)

Returning to the weak formulation and integrating by parts, we find that

$$\begin{aligned} \int _{\Omega } (-\Delta p - f) q = \int _{\Sigma _{-b}} (l+\partial _n p) q \text { for all } q \in {^0}H^1(\Omega ), \end{aligned}$$
(3.10)

and hence that \(-\partial _n p = l\) on \(\Sigma _{-b}\). Thus, \(p \in H^{s+2}(\Omega ) \cap {^0}H^1(\Omega )\) satisfies \(T_0 p = (f,0,l)\).

For each \(s \in \mathbb {N}\) this analysis defines a bounded linear map \(S_0: H^{s}(\Omega ) \times H^{s+1/2}(\Sigma _{-b}) \rightarrow H^{s+2}(\Omega ) \cap {^0}H^1(\Omega )\) via \(S_0(f,l) = p\). Employing the usual Sobolev interpolation theory (see, for instance, [2, 15]), we deduce that \(S_0\) extends to a map between the same spaces but for all \(0 \leqq s \in \mathbb {R}\).

Now suppose that \(f \in H^s(\Omega )\), \(k \in H^{s+3/2}(\Sigma )\), and \(l \in H^{s+1/2}(\Sigma _{-b})\) for some \(0\leqq s \in \mathbb {R}\). By trace theory, we can pick \(K \in H^{s+2}(\Omega )\) such that \(K=k\) on \(\Sigma \). Using the above, we then find \(P = S_0(f+\Delta K,l+\partial _n K) \in H^{s+2}(\Omega )\cap {^0}H^1(\Omega )\), which satisfies \(T_0 P = (f + \Delta K, 0, l + \partial _n K)\). Then \(p:= P+K \in H^{s+2}(\Omega )\) satisfies \(T_0 p = (f,k,l)\), and we conclude that \(T_0\) is surjective. \(\square \)

Later in our analysis we will need to consider the following bounded linear operator:

Definition 3.3

We define the bounded linear map \(\Xi : H^{s+3/2}(\Sigma ) \rightarrow H^{s+2}(\Omega )\) via \(\Xi k = T_0^{-1}(0,k,0)\).

The next result records a crucial property of \(\Xi \).

Theorem 3.4

The map \(\Xi \) from Definition 3.3 satisfies \(\widehat{\Xi k} (\xi ,x_n) = \hat{k}(\xi ) Q(\xi ,x_n)\) for \(\xi \in \hat{\Gamma }\), where \(Q: \mathbb {R}^{n-1} \times (-b,0) \rightarrow \mathbb {R}\) is defined by

$$\begin{aligned} Q(\xi ,x_n) = \frac{\cosh ( \left| \xi \right| (x_n+b))}{\cosh ( \left| \xi \right| b)}. \end{aligned}$$
(3.11)

Note that in the case \(\Gamma = \mathbb {T}^{n-1}\) we have that the dual group is \(\hat{\Gamma } = \mathbb {Z}^{n-1} \subset \mathbb {R}^{n-1}\) and Q is given by restriction to \(\hat{\Gamma }\).

Proof

Write \(p = T_0^{-1}(0,k,0) \in H^{s+2}(\Omega )\), and let \(\hat{p}\) denote its horizontal Fourier transform. Then \(\hat{p}\) satisfies the ordinary differential boundary value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} (- \left| \xi \right| ^2 + \partial _n^2) \hat{p}(\xi ,x_n) =0 &{}\text {for }x_n \in (-b,0) \\ \hat{p}(\xi ,0) = \hat{k}(\xi ) \\ \partial _n\hat{p}(\xi ,-b) = 0. \end{array}\right. } \end{aligned}$$
(3.12)

From this it is an elementary exercise to verify that \(\hat{p}(\xi ,x_n) = \hat{k}(\xi ) Q(\xi ,x_n)\): for \(\xi \ne 0\) the general solution of the ODE is \(A(\xi ) \cosh ( \left| \xi \right| (x_n+b)) + B(\xi ) \sinh ( \left| \xi \right| (x_n+b))\), the Neumann condition at \(x_n = -b\) forces \(B(\xi ) = 0\), and the Dirichlet condition at \(x_n = 0\) then gives \(A(\xi ) = \hat{k}(\xi )/\cosh ( \left| \xi \right| b)\) (the case \(\xi =0\) is immediate). The result follows. \(\square \)

3.2 The Over-Determined Problem: Compatibility Conditions

Consider the over-determined problem

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta p = f &{} \text {in } \Omega \\ p = k &{} \text {on } \Sigma \\ -\partial _n p = h_+ &{}\text {on } \Sigma \\ -\partial _n p = h_- &{}\text {on } \Sigma _{-b} \end{array}\right. } \end{aligned}$$
(3.13)

for given \((f,h_+,h_-,k) \in H^s(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b}) \times H^{s+3/2}(\Sigma )\).

Associated to (3.13) are a pair of compatibility conditions. The first is actually associated to a sub-system of (3.13).

Proposition 3.5

Suppose that \((f,h_+,h_-) \in H^s(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b})\) and \(p \in H^{s+2}(\Omega )\) satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta p = f &{} \text {in } \Omega \\ -\partial _n p = h_+ &{}\text {on } \Sigma \\ -\partial _n p = h_- &{}\text {on } \Sigma _{-b}. \end{array}\right. } \end{aligned}$$
(3.14)

Then the following hold:

  1. (1)

    If \(\Gamma = \mathbb {R}^{n-1}\), then

    $$\begin{aligned} \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ - h_-) \in \dot{H}^{-2}(\Sigma ) \cap \dot{H}^{-1}(\Sigma ) \end{aligned}$$
    (3.15)

    and we have the bounds

    $$\begin{aligned}{} & {} \left[ \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ - h_-) \right] _{\dot{H}^{-2}} \lesssim \left\| p\right\| _{L^2}\nonumber \\{} & {} \qquad \text { and } \left[ \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ - h_-) \right] _{\dot{H}^{-1}} \lesssim \left\| \nabla ' p\right\| _{L^2}. \end{aligned}$$
    (3.16)
  2. (2)

    If \(\Gamma = \mathbb {T}^{n-1}\), then

    $$\begin{aligned} \int _{-b}^0 \hat{f}(0,x_n) \textrm{d}x_n - (\hat{h}_+(0) - \hat{h}_-(0)) =0. \end{aligned}$$
    (3.17)

Proof

We will only record the proof for \(\Gamma = \mathbb {R}^{n-1}\), as the other case follows from similar but simpler analysis. We have that

$$\begin{aligned} \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n = \int _{-b}^0 -\Delta ' p(\cdot ,x_n) \textrm{d}x_n - \int _{-b}^0 \partial _n^2 p(\cdot ,x_n) \textrm{d}x_n. \end{aligned}$$
(3.18)

We then compute

$$\begin{aligned} \int _{-b}^0 -\Delta ' p(\cdot ,x_n) \textrm{d}x_n = -\Delta ' \int _{-b}^0 p(\cdot ,x_n) \textrm{d}x_n \end{aligned}$$
(3.19)

and

$$\begin{aligned} \int _{-b}^0 \partial _n^2 p(\cdot ,x_n) \textrm{d}x_n = \partial _n p(\cdot ,0) - \partial _n p(\cdot ,-b) = -(h_+ - h_-). \end{aligned}$$
(3.20)

Combining these, we see that

$$\begin{aligned} \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ - h_-) = -\Delta ' \int _{-b}^0 p(\cdot ,x_n) \textrm{d}x_n, \end{aligned}$$
(3.21)

and the \(\dot{H}^{-2}\) inclusion and estimate then follow from an application of Cauchy–Schwarz, Fubini–Tonelli, and Parseval:

$$\begin{aligned}{} & {} \left[ -\Delta ' \int _{-b}^0 p(\cdot ,x_n) \textrm{d}x_n\right] _{\dot{H}^{-2}}^2 = \int _{\mathbb {R}^{n-1}} \frac{( \left| \xi \right| ^2)^2}{\left| \xi \right| ^4} \left| \int _{-b}^0 \hat{p}(\xi ,x_n) \textrm{d}x_n\right| ^2 \textrm{d}\xi \nonumber \\{} & {} \quad \leqq b^2 \int _{\mathbb {R}^{n-1}} \int _{-b}^0 \left| \hat{p}(\xi ,x_n)\right| ^2 \textrm{d}x_n \textrm{d}\xi \lesssim \int _{\Omega } \left| p(x)\right| ^2 \textrm{d}x. \end{aligned}$$
(3.22)

The \(\dot{H}^{-1}\) inclusion and estimate follow similarly. \(\square \)

Next we identify the formal adjoint of the over-determined problem as an under-determined problem, given here in homogeneous form:

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta q = 0 &{} \text {in } \Omega \\ -\partial _n q = 0 &{}\text {on } \Sigma _{-b}. \end{array}\right. } \end{aligned}$$
(3.23)

We can augment this problem with an extra Dirichlet condition at the upper boundary in order to introduce the upper-Dirichlet–lower-Neumann problem (3.3). Indeed, we can parameterize solutions to (3.23) by letting \(q = \Xi \psi \) for some \(\psi \in H^{s+3/2}(\Sigma )\), where \(\Xi \) is as in Definition 3.3. With this in mind we borrow an idea from the closed range theorem to deduce a second compatibility condition.

Proposition 3.6

Suppose that \((f,h_+,h_-,k) \in H^s(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b}) \times H^{s+3/2}(\Sigma )\) and \(p \in H^{s+2}(\Omega )\) satisfy (3.13). Then the data \((f,h_+,h_-,k)\) satisfy both of the following equivalent conditions:

  1. (1)

    For every \(\psi \in H^{s+3/2}(\Sigma )\), if we let \(q = \Xi \psi \in H^{s+2}(\Omega )\) for \(\Xi \) as in Definition 3.3, then

    $$\begin{aligned} \int _{\Omega } f q - \int _{\Sigma } \left( k \partial _n q + h_+ \psi \right) + \int _{\Sigma _{-b}} h_- q = 0. \end{aligned}$$
    (3.24)
  2. (2)

    For a.e. \(\xi \in \hat{\Gamma }\) we have that

    $$\begin{aligned} 0{} & {} = \int _{-b}^0 \hat{f}(\xi , x_n) \frac{\cosh ( \left| \xi \right| (x_n+b))}{\cosh ( \left| \xi \right| b)} \textrm{d}x_n - \hat{k}(\xi ) \left| \xi \right| \tanh ( \left| \xi \right| b) - \hat{h}_+(\xi )\nonumber \\{} & {} \quad + \hat{h}_-(\xi ) {{\,\textrm{sech}\,}}( \left| \xi \right| b). \end{aligned}$$
    (3.25)

Proof

Let \(\psi \in H^{s+3/2}(\Sigma )\) and write \(q = \Xi \psi \in H^{s+2}(\Omega )\). Multiplying the first equation in (3.13) by q and integrating by parts, we find that

$$\begin{aligned} \int _{\Omega } f q= & {} \int _{\Omega } -\Delta p q = \int _{\Omega } -\Delta q p + \int _{\partial \Omega } p \partial _\nu q - \partial _\nu p q\nonumber \\= & {} \int _{\Sigma } p \partial _n q - \partial _n p q - \int _{\Sigma _{-b}} p \partial _n q - \partial _n p q \nonumber \\= & {} \int _{\Sigma } k \partial _n q + h_+ \psi - \int _{\Sigma _{-b}} h_- q. \end{aligned}$$
(3.26)

Rearranging yields (3.24). It remains to prove that (3.25) is equivalent to this.

Viewing k, \(h_+\), and \(h_-\) as functions on \(\Gamma \) in the natural way, we may rearrange (3.26) and apply Fubini–Tonelli to see that

$$\begin{aligned} \int _{\Gamma } \left[ \int _{-b}^0 f(\cdot , x_n) q(\cdot ,x_n) \textrm{d}x_n - k \partial _n q(\cdot ,0) - h_+ \psi + h_- q(\cdot ,-b) \right] =0. \end{aligned}$$
(3.27)

From this, Parseval’s theorem, and Theorem 3.4, we then find that

$$\begin{aligned}{} & {} \int _{\hat{\Gamma }} \left[ \int _{-b}^0 \hat{f}(\xi , x_n) \overline{Q(\xi ,x_n)} \textrm{d}x_n - \hat{k}(\xi ) \overline{\partial _n Q(\xi ,0)} - \hat{h}_+(\xi ) + \hat{h}_-(\xi ) \overline{Q(\xi ,-b)} \right] \overline{\hat{\psi }}(\xi ) d\xi \nonumber \\{} & {} \qquad =0 \end{aligned}$$
(3.28)

for all \(\psi \in H^{s+3/2}(\Sigma )\). This implies the identity

$$\begin{aligned} 0= \int _{-b}^0 \hat{f}(\xi , x_n) \overline{Q(\xi ,x_n)} \textrm{d}x_n - \hat{k}(\xi ) \overline{\partial _n Q(\xi ,0)} - \hat{h}_+(\xi ) + \hat{h}_-(\xi ) \overline{Q(\xi ,-b)} \nonumber \\ \end{aligned}$$
(3.29)

for a.e. \(\xi \in \hat{\Gamma }\), and (3.25) then follows by employing the formula for \(Q(\xi ,x_n)\) from Theorem 3.4. The fact that (3.25) implies (3.24) is readily seen by multiplying (3.25) by \(\overline{\hat{\psi }}\) and then working backward through the above argument with Parseval and Fubini–Tonelli. \(\square \)
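
As a simple illustration of (3.25), take \(f = 0\) and \(h_- = 0\): then (3.13) can admit a solution \(p \in H^{s+2}(\Omega )\) only if

$$\begin{aligned} \hat{h}_+(\xi ) = - \left| \xi \right| \tanh ( \left| \xi \right| b) \hat{k}(\xi ), \quad \text {i.e. } h_+ = -m(D)k \end{aligned}$$

for the multiplier \(m(D)\) from (1.24); in other words, the Neumann trace of p on \(\Sigma \) is determined by its Dirichlet trace k through the flat Dirichlet–Neumann multiplier \(m(D)\).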

Next we show that data obeying the conditions identified in this result must also obey an estimate in \(\dot{H}^{-2}\) as in Proposition 3.5.

Proposition 3.7

Suppose that \((f,h_+,h_-,k) \in H^s(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b}) \times H^{s+3/2}(\Sigma )\) satisfies either (and thus both) of the conditions in Proposition 3.6. Then the following hold.

  1. (1)

    If \(\Gamma = \mathbb {R}^{n-1}\), then

    $$\begin{aligned} \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ - h_-) \in \dot{H}^{-2}(\mathbb {R}^{n-1}) \end{aligned}$$
    (3.30)

    and

    $$\begin{aligned} \left[ \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ - h_-) \right] _{\dot{H}^{-2}} \lesssim \left\| f\right\| _{L^2} + \left\| h_+\right\| _{L^2} + \left\| h_-\right\| _{L^2} + \left\| k\right\| _{L^2}. \end{aligned}$$
    (3.31)
  2. (2)

    If \(\Gamma = \mathbb {T}^{n-1}\), then

    $$\begin{aligned} \int _{-b}^0 \hat{f}(0,x_n) \textrm{d}x_n - (\hat{h}_+(0) - \hat{h}_-(0)) =0. \end{aligned}$$
    (3.32)

Proof

We will only record the proof when \(\Gamma = \mathbb {R}^{n-1}\) as the other case is simpler. The condition (3.25) implies that

$$\begin{aligned}{} & {} \int _{-b}^0 \hat{f}(\xi ,x_n) \textrm{d}x_n - (\hat{h}_+(\xi ) - \hat{h}_-(\xi )) = \int _{-b}^0 \hat{f}(\xi , x_n) \left[ 1-\frac{\cosh ( \left| \xi \right| (x_n+b))}{\cosh (\left| \xi \right| b)} \right] \textrm{d}x_n \nonumber \\{} & {} \quad + \hat{k}(\xi ) \left| \xi \right| \tanh (\left| \xi \right| b) + \hat{h}_-(\xi ) \left[ 1- {{\,\textrm{sech}\,}}( \left| \xi \right| b)\right] . \end{aligned}$$
(3.33)

Upon making routine Taylor expansions and applying Cauchy–Schwarz and Parseval, we see that

$$\begin{aligned}{} & {} \int _{B(0,1)} \frac{1}{\left| \xi \right| ^4} \left| \int _{-b}^0 \hat{f}(\xi ,x_n) \textrm{d}x_n - (\hat{h}_+(\xi ) - \hat{h}_-(\xi )) \right| ^2 \textrm{d}\xi \nonumber \\{} & {} \quad \lesssim \int _{B(0,1)} \left[ \left| \hat{k}(\xi )\right| ^2 + \left| \hat{h}_-(\xi )\right| ^2 + \int _{-b}^0 x_n^2 \left| \hat{f}(\xi ,x_n)\right| ^2 \textrm{d}x_n \right] \textrm{d}\xi \nonumber \\{} & {} \quad \lesssim \left\| k\right\| _{L^2}^2 + \left\| h_-\right\| _{L^2}^2 + \left\| f\right\| _{L^2}^2. \end{aligned}$$
(3.34)

This yields the low frequency control of the \(\dot{H}^{-2}\) seminorm, but the high frequency control comes directly from Cauchy–Schwarz and Parseval:

$$\begin{aligned} \int _{B(0,1)^c} \frac{1}{\left| \xi \right| ^4} \left| \int _{-b}^0 \hat{f}(\xi ,x_n) \textrm{d}x_n - (\hat{h}_+(\xi ) - \hat{h}_-(\xi )) \right| ^2 \textrm{d}\xi \lesssim \left\| f\right\| _{L^2}^2 + \left\| h_+\right\| _{L^2}^2 + \left\| h_-\right\| _{L^2}^2. \end{aligned}$$
(3.35)

Thus, the inclusion (3.30) holds, and upon summing (3.34) and (3.35) we deduce the estimate (3.31). \(\square \)

3.3 A Pair of Useful Function Spaces

We now introduce a couple of function spaces that will be useful in our study of the over-determined problem (3.13).

Definition 3.8

For \(0 \leqq s \in \mathbb {R}\) we define the following spaces:

  1. (1)

    For \(\Gamma = \mathbb {R}^{n-1}\) and \(0 < t \in \mathbb {R}\) we define the space

    $$\begin{aligned} Y^s_{t}= & {} \{(f,h_+,h_-,k) \in H^s(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b}) \times H^{s+3/2}(\Sigma ) \;\vert \;\nonumber \\{} & {} \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ - h_-) \in \dot{H}^{-t}(\Sigma )\} \end{aligned}$$
    (3.36)

    and endow this space with the square-norm

    $$\begin{aligned}{} & {} \left\| (f,h_+,h_-,k)\right\| _{Y^s_t}^2 = \left\| f\right\| _{H^s}^2 + \left\| h_+\right\| _{H^{s+1/2}}^2 + \left\| h_-\right\| _{H^{s+1/2}}^2 + \left\| k\right\| _{H^{s+3/2}}^2\nonumber \\{} & {} \quad + \left[ \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ - h_-)\right] _{\dot{H}^{-t}}^2 \end{aligned}$$
    (3.37)

    and its associated inner-product.

  2. (2)

    For \(\Gamma = \mathbb {T}^{n-1}\) and \(0 < t \in \mathbb {R}\) we define the space

    $$\begin{aligned} Y^s_{t}= & {} \{(f,h_+,h_-,k) \in H^s(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b}) \times H^{s+3/2}(\Sigma ) \;\vert \;\nonumber \\{} & {} \int _{-b}^0 \hat{f}(0,x_n) \textrm{d}x_n - (\hat{h}_+(0) - \hat{h}_-(0)) =0\} \end{aligned}$$
    (3.38)

    and endow this space with the square-norm

    $$\begin{aligned} \left\| (f,h_+,h_-,k)\right\| _{Y^s_t}^2 = \left\| f\right\| _{H^s}^2 + \left\| h_+\right\| _{H^{s+1/2}}^2 + \left\| h_-\right\| _{H^{s+1/2}}^2 + \left\| k\right\| _{H^{s+3/2}}^2 \nonumber \\ \end{aligned}$$
    (3.39)

    and its associated inner-product.

  3. (3)

    We define the space

    $$\begin{aligned} Z^s= & {} \{(f,h_+,h_-,k) \in H^s(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b}) \times H^{s+3/2}(\Sigma ) \;\vert \;\nonumber \\{} & {} \int _{\Omega } f q - \int _{\Sigma } \left( k \partial _n q + h_+ \psi \right) + \int _{\Sigma _{-b}} h_- q = 0 \text { for every } \psi \in H^{s+3/2}(\Sigma ),\nonumber \\{} & {} \text { where } q = \Xi \psi \}. \end{aligned}$$
    (3.40)

    Here we recall that \(\Xi : H^{s+3/2}(\Sigma ) \rightarrow H^{s+2}(\Omega )\) is defined in Definition 3.3. We endow \(Z^s\) with the square norm

    $$\begin{aligned} \left\| (f,h_+,h_-,k)\right\| _{Z^s}^2 = \left\| f\right\| _{H^s}^2 + \left\| h_+\right\| _{H^{s+1/2}}^2 + \left\| h_-\right\| _{H^{s+1/2}}^2 + \left\| k\right\| _{H^{s+3/2}}^2 \nonumber \\ \end{aligned}$$
    (3.41)

    and its associated inner-product.

The next result establishes some key properties of these spaces.

Proposition 3.9

Let \(0 \leqq s \in \mathbb {R}\), \(0 < t \in \mathbb {R}\), and let \(Y^s_t\) and \(Z^s\) be as in Definition 3.8. Then the following hold.

  1. (1)

    \(Y^s_t\) and \(Z^s\) are Hilbert spaces.

  2. (2)

    If \(t < r \in \mathbb {R}\) then we have the continuous inclusion \(Y^s_r \hookrightarrow Y^s_t\).

  3. (3)

    We have the continuous inclusion \(Z^s \hookrightarrow Y^s_2\).

Proof

The completeness of \(Y^s_t\) is routine to verify, and since \(\Xi \) is a bounded linear map it is easy to see that \(Z^s\) is a closed subspace of \(H^s(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b}) \times H^{s+3/2}(\Sigma )\) and thus complete. This proves the first item. The second item is trivial when \(\Gamma = \mathbb {T}^{n-1}\), and when \(\Gamma = \mathbb {R}^{n-1}\) it follows from the fact that

$$\begin{aligned} \int _{B(0,1)} \frac{1}{\left| \xi \right| ^{2t}} \left| \psi (\xi )\right| ^2 \textrm{d}\xi \leqq \int _{B(0,1)} \frac{1}{\left| \xi \right| ^{2r}} \left| \psi (\xi )\right| ^2 \textrm{d}\xi \end{aligned}$$
(3.42)

when \(t < r\) and \(\psi \) is measurable. The continuous inclusion \(Z^s \hookrightarrow Y^s_2\) follows from Proposition 3.7. \(\square \)

3.4 The Over-Determined Problem: Isomorphism

We now aim to establish an isomorphism associated to the over-determined problem (3.13).

Theorem 3.10

The bounded linear map \(T_1: H^{s+2}(\Omega ) \rightarrow Z^s\) associated to (3.13), which is given by

$$\begin{aligned} T_1 p = (-\Delta p, -\partial _n p \vert _{\Sigma }, -\partial _n p\vert _{\Sigma _{-b}}, p\vert _{\Sigma }), \end{aligned}$$
(3.43)

is well-defined and is an isomorphism for every \(0 \leqq s \in \mathbb {R}\).

Proof

The map \(T_1\) is obviously a bounded linear map into \(H^s(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b}) \times H^{s+3/2}(\Sigma )\), but the range of \(T_1\) lies in \(Z^s\) by virtue of Proposition 3.6. Thus, \(T_1: H^{s+2}(\Omega )\rightarrow Z^s\) is a well-defined and bounded linear map. If \(T_1 p =0\), then in particular \(T_0 p =0\), where \(T_0\) is the isomorphism from Theorem 3.2, and so \(p=0\). This means that \(T_1\) is injective.

Now let \((f,h_+,h_-,k) \in Z^s\). Then Theorem 3.2 allows us to set \(p = T_0^{-1}(f,k,h_-) \in H^{s+2}(\Omega )\). Set \(H_+ = -\partial _n p \vert _{\Sigma } \in H^{s+1/2}(\Sigma )\). Then p solves the over-determined problem

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta p = f &{} \text {in } \Omega \\ p = k &{} \text {on } \Sigma \\ -\partial _n p = H_+ &{}\text {on } \Sigma \\ -\partial _n p = h_- &{}\text {on } \Sigma _{-b}, \end{array}\right. } \end{aligned}$$
(3.44)

and so Proposition 3.6 tells us that

$$\begin{aligned} \int _{\Omega } f q - \int _{\Sigma } k \partial _n q + \int _{\Sigma _{-b}} h_- q = \int _{\Sigma } H_+ \psi \end{aligned}$$
(3.45)

for every \(\psi \in H^{s+3/2}(\Sigma )\), where \(q = \Xi \psi \in H^{s+2}(\Omega )\) for \(\Xi \) defined by Definition 3.3. On the other hand, the compatibility condition on \((f,h_+,h_-,k)\) built into the definition of \(Z^s\) requires that

$$\begin{aligned} \int _{\Omega } f q - \int _{\Sigma } k \partial _n q + \int _{\Sigma _{-b}} h_- q = \int _{\Sigma } h_+ \psi \end{aligned}$$
(3.46)

for all such \(\psi \) and q. Equating these then shows that

$$\begin{aligned} \int _{\Sigma } (h_+ - H_+) \psi =0 \text { for all } \psi \in H^{s+3/2}(\Sigma ), \end{aligned}$$
(3.47)

from which we conclude that \(h_+ = H_+\). Hence p solves (3.13), or equivalently \(T_1 p = (f,h_+,h_-,k)\). Thus \(T_1\) is surjective and so defines an isomorphism. \(\square \)

3.5 The Isomorphism for the Pressure-Free Surface System

Next we aim to show that the PDE system

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta p = f &{} \text {in }\Omega \\ -\partial _n p -\partial _n \mathfrak {P}\eta +\gamma \partial _1 \eta = h_+ &{}\text {on } \Sigma \\ p = k &{}\text {on } \Sigma \\ -\partial _n p -\partial _n \mathfrak {P}\eta = h_- &{} \text {on } \Sigma _{-b} \end{array}\right. } \end{aligned}$$
(3.48)

induces an isomorphism between appropriate Banach spaces. As a first step, in the next lemma we establish that the linear mapping associated to our PDE system actually takes values in \(Y^s_1\) and is bounded.

Lemma 3.11

Let \(0 \leqq s \in \mathbb {R}\). If \((p,\eta ) \in H^{s+2}(\Omega ) \times \mathcal {H}^{s+3/2}(\Sigma )\) then we have the inclusion

$$\begin{aligned} (-\Delta p, -\partial _n p \vert _{\Sigma } -\partial _n \mathfrak {P}\eta \vert _{\Sigma } + \gamma \partial _1 \eta , -\partial _n p \vert _{\Sigma _{-b}} -\partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}}, p\vert _{\Sigma }) \in Y^s_1, \end{aligned}$$
(3.49)

and

$$\begin{aligned}{} & {} \left\| (-\Delta p, -\partial _n p \vert _{\Sigma } -\partial _n \mathfrak {P}\eta \vert _{\Sigma } + \gamma \partial _1 \eta , -\partial _n p \vert _{\Sigma _{-b}} -\partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}} , p\vert _{\Sigma })\right\| _{Y^s_1}\nonumber \\{} & {} \qquad \lesssim \left\| p\right\| _{H^{s+2}} + \left\| \eta \right\| _{\mathcal {H}^{s+3/2}}. \end{aligned}$$
(3.50)

Proof

Write the tuple in (3.49) as \((f,h_+,h_-,k)\). From Theorems A.2, A.4, A.7 and standard trace theory we see that

$$\begin{aligned} \left\| f\right\| _{H^s} + \left\| h_+\right\| _{H^{s+1/2}} + \left\| h_-\right\| _{H^{s+1/2}} + \left\| k\right\| _{H^{s+3/2}} \lesssim \left\| p\right\| _{H^{s+2}} + \left\| \eta \right\| _{\mathcal {H}^{s+3/2}}, \nonumber \\ \end{aligned}$$
(3.51)

and so in particular \((f,h_+,h_-,k) \in H^s(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+1/2}(\Sigma _{-b}) \times H^{s+3/2}(\Sigma )\).

Suppose now that \(\Gamma = \mathbb {R}^{n-1}\). Proposition 3.5 implies that

$$\begin{aligned} \left[ \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ + \partial _n \mathfrak {P}\eta \vert _{\Sigma } - \gamma \partial _1 \eta - h_- - \partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}}) \right] _{\dot{H}^{-1}} \lesssim \left\| p\right\| _{H^{s+2}}. \end{aligned}$$
(3.52)

We know from Theorem A.2 that \(\left[ \partial _1 \eta \right] _{\dot{H}^{-1}} \lesssim \left\| \eta \right\| _{\mathcal {H}^{s+3/2}}\), and we know from Proposition A.8 that

$$\begin{aligned} \left[ \partial _n \mathfrak {P}\eta \vert _{\Sigma } - \partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}} \right] _{\dot{H}^{-1}} \lesssim \left\| \eta \right\| _{\mathcal {H}^{s+3/2}}, \end{aligned}$$
(3.53)

so we deduce that

$$\begin{aligned} \left[ \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ - h_-) \right] _{\dot{H}^{-1}} \lesssim \left\| p\right\| _{H^{s+2}} + \left\| \eta \right\| _{\mathcal {H}^{s+3/2}}. \end{aligned}$$
(3.54)

Thus \((f,h_+,h_-,k) \in Y^s_1\) and the estimate (3.50) holds when \(\Gamma = \mathbb {R}^{n-1}\).

Now consider the case \(\Gamma =\mathbb {T}^{n-1}\). In this case Proposition 3.5 shows that

$$\begin{aligned} \int _{-b}^0 \hat{f}(0,x_n) \textrm{d}x_n - (\hat{h}_+(0) + \widehat{\partial _n \mathfrak {P}\eta \vert _{\Sigma }}(0) - \gamma \widehat{\partial _1 \eta }(0) - \hat{h}_-(0) - \widehat{\partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}}}(0) ) =0, \end{aligned}$$
(3.55)

but Proposition A.8 shows \(\widehat{\partial _n \mathfrak {P}\eta \vert _{\Sigma }}(0) = \widehat{\partial _1 \eta }(0) = \widehat{\partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}}}(0)=0,\) so

$$\begin{aligned} \int _{-b}^0 \hat{f}(0,x_n) \textrm{d}x_n - (\hat{h}_+(0) - \hat{h}_-(0) ) =0. \end{aligned}$$
(3.56)

Thus, \((f,h_+,h_-,k) \in Y^s_1\) and the estimate (3.50) holds when \(\Gamma = \mathbb {T}^{n-1}\). \(\square \)

We can now state our isomorphism theorem associated to (3.48).

Theorem 3.12

When \(\Gamma = \mathbb {R}^{n-1}\), assume that \(\gamma \ne 0\). Then the bounded linear map \(T_2: H^{s+2}(\Omega ) \times \mathcal {H}^{s+3/2}(\Sigma ) \rightarrow Y^s_1\) associated to (3.48), which is defined by

$$\begin{aligned} T_2(p,\eta ) = (-\Delta p, -\partial _n p \vert _{\Sigma } -\partial _n \mathfrak {P}\eta \vert _{\Sigma } + \gamma \partial _1 \eta , -\partial _n p \vert _{\Sigma _{-b}} -\partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}}, p\vert _{\Sigma }), \end{aligned}$$
(3.57)

is an isomorphism for every \(0 \leqq s \in \mathbb {R}\).

Proof

First note that Lemma 3.11 tells us that \(T_2\) is a well-defined bounded linear map. If \(T_2(p,\eta ) =0\), then

$$\begin{aligned} 0= & {} \int _{\Omega } {{\,\textrm{div}\,}}(-\nabla p - \nabla \mathfrak {P}\eta ) (p+\mathfrak {P}\eta ) = \int _{\Omega } \left| \nabla p +\nabla \mathfrak {P}\eta \right| ^2\nonumber \\{} & {} \qquad - \int _{\partial \Omega } \partial _\nu (p+\mathfrak {P}\eta ) (p+ \mathfrak {P}\eta ) \nonumber \\= & {} \int _{\Omega } \left| \nabla p + \nabla \mathfrak {P}\eta \right| ^2 + \int _{\Sigma } -\partial _n (p+\mathfrak {P}\eta ) (p+ \mathfrak {P}\eta ) \nonumber \\= & {} \int _{\Omega } \left| \nabla p + \nabla \mathfrak {P}\eta \right| ^2 + \int _{\Sigma } -\gamma \partial _1 \eta \eta = \int _{\Omega } \left| \nabla p + \nabla \mathfrak {P}\eta \right| ^2, \end{aligned}$$
(3.58)

and so \(p + \mathfrak {P}\eta =C\) for some constant \(C \in \mathbb {R}\). However, on \(\Sigma \) we have that \(p=0\) and \(\mathfrak {P}\eta =\eta \), so \(\eta =C\). In turn this requires that \(\eta =0\) (since \(\eta \in \mathcal {H}^{s+3/2}(\Sigma )\)) and \(p =0\), and so \(T_2\) is injective.

Now let \((f,h_+,h_-,k) \in Y^s_1\). Define the function \(\psi : \hat{\Gamma } \rightarrow \mathbb {C}\) via

$$\begin{aligned} \psi (\xi ){} & {} = \int _{-b}^0 \hat{f}(\xi , x_n) \frac{\cosh ( \left| \xi \right| (x_n+b))}{\cosh ( \left| \xi \right| b)} \textrm{d}x_n - \hat{k}(\xi ) \left| \xi \right| \tanh ( \left| \xi \right| b) - \hat{h}_+(\xi )\nonumber \\{} & {} \quad + \hat{h}_-(\xi ) {{\,\textrm{sech}\,}}( \left| \xi \right| b). \end{aligned}$$
(3.59)

Note that we may rewrite

$$\begin{aligned} \psi (\xi )= & {} \int _{-b}^0 \hat{f}(\xi ,x_n) \textrm{d}x_n - (\hat{h}_+(\xi ) - \hat{h}_-(\xi )) \nonumber \\{} & {} + \int _{-b}^0 \hat{f}(\xi , x_n) \left[ \frac{\cosh ( \left| \xi \right| (x_n+b))}{\cosh (\left| \xi \right| b)}-1 \right] \textrm{d}x_n \nonumber \\{} & {} - \hat{k}(\xi ) \left| \xi \right| \tanh ( \left| \xi \right| b) + \hat{h}_-(\xi ) \left[ {{\,\textrm{sech}\,}}( \left| \xi \right| b) -1 \right] . \end{aligned}$$
(3.60)

When \(\Gamma = \mathbb {R}^{n-1}\), we readily deduce from this and standard Taylor expansion that

$$\begin{aligned}{} & {} \int _{B(0,1)} \left| \xi \right| ^{-2} \left| \psi (\xi )\right| ^2 \textrm{d}\xi \lesssim \left[ \int _{-b}^0 f(\cdot ,x_n) \textrm{d}x_n - (h_+ - h_-) \right] _{\dot{H}^{-1}}^2 \nonumber \\{} & {} \qquad + \left\| f\right\| _{L^2}^2 + \left\| h_-\right\| _{L^2}^2 + \left\| k\right\| _{L^2}^2 \nonumber \\{} & {} \quad \lesssim \left\| (f,h_+,h_-,k)\right\| _{Y^s_1}^2. \end{aligned}$$
(3.61)

Similarly, when \(\Gamma = \mathbb {T}^{n-1}\), we must have that \(\psi (0) =0\). On the other hand, in both cases we can bound

$$\begin{aligned}{} & {} \int _{B(0,1)^c} (1+\left| \xi \right| ^2)^{s+1/2} \left| \psi (\xi )\right| ^2 \textrm{d}\xi \lesssim \left\| f\right\| _{H^s}^2 + \left\| h_+\right\| _{H^{s+1/2}}^2 \nonumber \\{} & {} \quad + \left\| h_-\right\| _{H^{s+1/2}}^2 + \left\| k\right\| _{H^{s+3/2}}^2 \lesssim \left\| (f,h_+,h_-,k)\right\| _{Y^s_1}^2. \end{aligned}$$
(3.62)

Combining these bounds shows that

$$\begin{aligned} \int _{B(0,1)} \left| \xi \right| ^{-2} \left| \psi (\xi )\right| ^2 \textrm{d}\xi + \int _{B(0,1)^c} (1+\left| \xi \right| ^2)^{s+1/2} \left| \psi (\xi )\right| ^2 \textrm{d}\xi \lesssim \left\| (f,h_+,h_-,k)\right\| _{Y^s_1}^2 \end{aligned}$$
(3.63)

with the understanding that the first integral is replaced with 0 when \(\Gamma = \mathbb {T}^{n-1}\).

Next note that for \(\xi \in \hat{\Gamma }\) we have that

$$\begin{aligned} \left| - i \gamma \xi _1 + \left| \xi \right| \tanh (\left| \xi \right| b)\right| ^2{} & {} = \gamma ^2 \xi _1^2 + \left| \xi \right| ^2 \tanh ^2( \left| \xi \right| b)\nonumber \\{} & {} \asymp {\left\{ \begin{array}{ll} \gamma ^2 \xi _1^2 + \left| \xi \right| ^4 b^2 &{} \text {for } \left| \xi \right| \asymp 0 \\ (1+\gamma ^2) \left| \xi \right| ^2 &{}\text {for } \left| \xi \right| \asymp \infty , \end{array}\right. } \end{aligned}$$
(3.64)

and in particular the quantity on the left side vanishes if and only if \(\xi =0\). Consequently, we can define the measurable function \(\hat{\eta }: \hat{\Gamma } \rightarrow \mathbb {C}\) via the identity

$$\begin{aligned} \left[ - i \gamma \xi _1 + \left| \xi \right| \tanh ( \left| \xi \right| b)\right] \hat{\eta }(\xi ) = \psi (\xi ) \end{aligned}$$
(3.65)

for \(\xi \ne 0\) and \(\hat{\eta }(0)=0\). It may be easily checked that since the data are real-valued we have that \(\overline{\psi (\xi )} = \psi (-\xi )\). The multiplier on the left side of (3.65) satisfies the same identity, and so we conclude that \(\overline{\hat{\eta }(\xi )} = \hat{\eta }(-\xi )\), which means that \(\eta \) is also real-valued. Synthesizing (3.63) and (3.64), we see from (3.65) that \(\eta \in \mathcal {H}^{s+3/2}(\Sigma )\) and

$$\begin{aligned} \left\| \eta \right\| _{\mathcal {H}^{s+3/2}}^2{} & {} \asymp \int _{B(0,1)} \frac{\gamma ^2 \xi _1^2 + \left| \xi \right| ^4 }{\left| \xi \right| ^2} \left| \hat{\eta }(\xi )\right| ^2 \textrm{d}\xi + \int _{B(0,1)^c} (1+\left| \xi \right| ^2)^{s+3/2} \left| \hat{\eta }(\xi )\right| ^2 \textrm{d}\xi \nonumber \\{} & {} \asymp \int _{B(0,1)} \left| \xi \right| ^{-2} \left| \psi (\xi )\right| ^2 \textrm{d}\xi + \int _{B(0,1)^c} (1+\left| \xi \right| ^2)^{s+1/2} \left| \psi (\xi )\right| ^2 \textrm{d}\xi \nonumber \\{} & {} \lesssim \left\| (f,h_+,h_-,k)\right\| _{Y^s_1}^2, \end{aligned}$$
(3.66)

again with the understanding that the integrals over B(0, 1) are replaced by 0 when \(\Gamma = \mathbb {T}^{n-1}\), and recalling that \(\mathcal {H}^{s+3/2}(\mathbb {T}^{n-1}) = H^{s+3/2}(\mathbb {T}^{n-1})\).
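The comparison (3.64) and the division step (3.65) are elementary, and they are easy to sanity-check numerically. The following Python snippet is a minimal sketch of such a check (the values \(\gamma =1\), \(b=1\) and the frequency grids are our own illustrative choices, not data from the paper): it evaluates \(\gamma ^2\xi _1^2+\left| \xi \right| ^2\tanh ^2(\left| \xi \right| b)\) and confirms that its ratio against the low-frequency model \(\gamma ^2\xi _1^2+b^2\left| \xi \right| ^4\) and the high-frequency model \((1+\gamma ^2)\left| \xi \right| ^2\) stays bounded above and below in the respective regimes.

```python
import numpy as np

# Sketch: numerically check the two-sided comparison (3.64) for
#   |M(xi)|^2 = gamma^2 xi_1^2 + |xi|^2 tanh^2(b |xi|)
# against the low-frequency model gamma^2 xi_1^2 + b^2 |xi|^4 and the
# high-frequency model (1 + gamma^2) |xi|^2.  gamma, b and the frequency
# grids below are illustrative choices, not values taken from the paper.
gamma, b = 1.0, 1.0

def sq_modulus(xi1, xi2):
    r = np.hypot(xi1, xi2)                       # |xi|
    return gamma**2 * xi1**2 + (r * np.tanh(b * r))**2

# Low-frequency regime |xi| << 1.
xi = np.linspace(-1e-2, 1e-2, 41)
X1, X2 = np.meshgrid(xi, xi)
R2 = X1**2 + X2**2
mask = R2 > 0                                    # exclude xi = 0
low_model = gamma**2 * X1**2 + b**2 * R2**2
ratio_low = sq_modulus(X1, X2)[mask] / low_model[mask]
print("low-frequency ratio range: ", ratio_low.min(), ratio_low.max())

# High-frequency regime |xi| >> 1.
xi = np.linspace(50.0, 500.0, 40)
X1, X2 = np.meshgrid(xi, xi)
high_model = (1.0 + gamma**2) * (X1**2 + X2**2)
ratio_high = sq_modulus(X1, X2) / high_model
print("high-frequency ratio range:", ratio_high.min(), ratio_high.max())
```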

We now know that \(\eta \in \mathcal {H}^{s+3/2}(\Sigma )\), so we can use Theorem A.7 to see that \(\mathfrak {P}\eta \in \mathbb {P}^{s+2}(\Omega )\), as defined in Definition A.3. In particular, this, Theorem A.4, and standard trace theory show that \(\partial _n \mathfrak {P}\eta \vert _{\Sigma } \in H^{s+1/2}(\Sigma )\) and \(\partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}} \in H^{s+1/2}(\Sigma _{-b})\). Moreover, a simple computation shows that

$$\begin{aligned} \widehat{ \partial _n \mathfrak {P}\eta } \vert _{\Sigma }(\xi ) = \left| \xi \right| \hat{\eta }(\xi ) \text { and } \widehat{ \partial _n \mathfrak {P}\eta } \vert _{\Sigma _{-b}}(\xi ) = \left| \xi \right| e^{- \left| \xi \right| b} \hat{\eta }(\xi ) \end{aligned}$$
(3.67)

for \(\xi \in \hat{\Gamma }\). From these and the properties of \(\mathcal {H}^{s+3/2}(\Sigma )\) given in Theorem A.2, we readily deduce that we have the inclusion

$$\begin{aligned} (f,h_+ -\gamma \partial _1 \eta + \partial _n \mathfrak {P}\eta \vert _\Sigma , h_- + \partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}}, k){} & {} \in H^s(\Omega ) \times H^{s+1/2}(\Sigma )\nonumber \\{} & {} \times H^{s+1/2}(\Sigma _{-b}) \times H^{s+3/2}(\Sigma ).\nonumber \\ \end{aligned}$$
(3.68)

We claim that, in fact, this modified tuple belongs to the space \(Z^s\). To show this it suffices to check that the modified tuple satisfies the compatibility condition of Proposition 3.6. Using the identities (3.67), we compute

$$\begin{aligned}{} & {} - (\hat{h}_+(\xi ) -\gamma \widehat{\partial _1 \eta }(\xi ) + \widehat{\partial _n \mathfrak {P}\eta \vert _\Sigma }(\xi )) + (\hat{h}_-(\xi ) + \widehat{\partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}}}(\xi ) ) {{\,\textrm{sech}\,}}(\left| \xi \right| b) \nonumber \\{} & {} \quad = - \hat{h}_+(\xi ) + \hat{h}_-(\xi ) {{\,\textrm{sech}\,}}( \left| \xi \right| b) - \left[ - i \gamma \xi _1 + \left| \xi \right| \tanh ( \left| \xi \right| b) \right] \hat{\eta }(\xi ). \end{aligned}$$
(3.69)

Thus, the identity

$$\begin{aligned} 0= & {} \int _{-b}^0 \hat{f}(\xi , x_n) \frac{\cosh ( \left| \xi \right| (x_n+b))}{\cosh (\left| \xi \right| b)} \textrm{d}x_n - \hat{k}(\xi ) \left| \xi \right| \tanh ( \left| \xi \right| b) \nonumber \\{} & {} - (\hat{h}_+(\xi ) -\gamma \widehat{\partial _1 \eta }(\xi ) + \widehat{\partial _n \mathfrak {P}\eta \vert _\Sigma }(\xi )) + (\hat{h}_-(\xi ) + \widehat{\partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}}}(\xi ) ) {{\,\textrm{sech}\,}}( \left| \xi \right| b)\nonumber \\ \end{aligned}$$
(3.70)

is equivalent to the identity (3.65), which is satisfied by the construction of \(\eta \). Thus, for the modified tuple we have the inclusion

$$\begin{aligned} (f,h_+ -\gamma \partial _1 \eta + \partial _n \mathfrak {P}\eta \vert _\Sigma , h_- + \partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}}, k) \in Z^s \end{aligned}$$
(3.71)

as claimed.

In light of (3.71) and Theorem 3.10 we may then define

$$\begin{aligned} p = T_1^{-1}(f,h_+ -\gamma \partial _1 \eta + \partial _n \mathfrak {P}\eta \vert _\Sigma , h_- + \partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}}, k) \in H^{s+2}(\Omega ), \end{aligned}$$
(3.72)

which satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta p = f &{} \text {in } \Omega \\ p = k &{} \text {on } \Sigma \\ -\partial _n p = h_+ -\gamma \partial _1 \eta + \partial _n \mathfrak {P}\eta \vert _\Sigma &{}\text {on } \Sigma \\ -\partial _n p = h_- + \partial _n \mathfrak {P}\eta \vert _{\Sigma _{-b}} &{}\text {on } \Sigma _{-b}. \end{array}\right. } \end{aligned}$$
(3.73)

Thus, \(T_2(p,\eta ) = (f,h_+,h_-,k)\), and we conclude that \(T_2\) is surjective and hence an isomorphism. \(\square \)

3.6 The Isomorphism for the Velocity-Pressure-Free Surface System

Finally, we aim to show that the PDE system (1.19) induces an isomorphism between appropriate Hilbert spaces. First we must identify the domain and codomain by introducing two definitions. The first defines a closed subspace of \(H^s(\Omega ;\mathbb {R}^n)\).

Definition 3.13

For \(1/2 < s \in \mathbb {R}\) we define the space

$$\begin{aligned} {_n}H^s(\Omega ;\mathbb {R}^n) = \{u \in H^s(\Omega ;\mathbb {R}^n) \;\vert \;u_n \vert _{\Sigma _{-b}} =0 \}. \end{aligned}$$
(3.74)

Standard trace theory shows that this is a closed subspace of \(H^s(\Omega ;\mathbb {R}^n)\) and thus a Hilbert space.

The second definition introduces a container space for the data in the problem (1.19).

Definition 3.14

Let \(0 \leqq s \in \mathbb {R}\). For \(\Gamma = \mathbb {R}^{n-1}\) we define the space

$$\begin{aligned} V^s= & {} \{ (F,G,H,K) \in H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s}(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+3/2}(\Sigma ) \;\vert \;\nonumber \\{} & {} \int _{-b}^0 (G-{{\,\textrm{div}\,}}{F})(\cdot ,x_n) \textrm{d}x_n - (H - F_n\vert _{\Sigma } + F_n\vert _{\Sigma _{-b}}) \in \dot{H}^{-1}(\Sigma )\} \nonumber \\ \end{aligned}$$
(3.75)

and endow it with the square norm

$$\begin{aligned} \left\| (F,G,H,K)\right\| _{V^s}^2= & {} \left\| F\right\| _{H^{s+1}}^2 + \left\| G\right\| _{H^s}^2 + \left\| H\right\| _{H^{s+1/2}}^2 + \left\| K\right\| _{H^{s+3/2}}^2 \nonumber \\{} & {} + \left[ \int _{-b}^0 (G-{{\,\textrm{div}\,}}{F})(\cdot ,x_n) \textrm{d}x_n - (H - F_n\vert _{\Sigma } + F_n\vert _{\Sigma _{-b}} )\right] _{\dot{H}^{-1}}^2\nonumber \\ \end{aligned}$$
(3.76)

and the associated inner-product. On the other hand, for \(\Gamma = \mathbb {T}^{n-1}\) we define the space

$$\begin{aligned} V^s= & {} \{ (F,G,H,K) \in H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s}(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+3/2}(\Sigma ) \;\vert \;\nonumber \\{} & {} \int _{-b}^0 (\hat{G}-\widehat{{{\,\textrm{div}\,}}{F}})(0,x_n) \textrm{d}x_n - (\hat{H}(0) - \hat{F}_n\vert _{\Sigma }(0) + \hat{F}_n\vert _{\Sigma _{-b}}(0)) =0 \} \end{aligned}$$
(3.77)

and endow it with the square norm

$$\begin{aligned} \left\| (F,G,H,K)\right\| _{V^s}^2 = \left\| F\right\| _{H^{s+1}}^2 + \left\| G\right\| _{H^s}^2 + \left\| H\right\| _{H^{s+1/2}}^2 + \left\| K\right\| _{H^{s+3/2}}^2 \end{aligned}$$
(3.78)

and the associated inner-product. It’s easy to see that in both cases \(V^s\) is a Hilbert space.

Remark 3.15

Note that

$$\begin{aligned} \int _{-b}^0 -{{\,\textrm{div}\,}}{F}(\cdot ,x_n) \textrm{d}x_n = -{{\,\textrm{div}\,}}'\int _{-b}^0 F'(\cdot ,x_n) \textrm{d}x_n - F_n\vert _{\Sigma } + F_n \vert _{\Sigma _{-b}} \end{aligned}$$
(3.79)

and so

$$\begin{aligned} \int _{-b}^0 -{{\,\textrm{div}\,}}{F}(\cdot ,x_n) \textrm{d}x_n + F_n \vert _{\Sigma } - F_n \vert _{\Sigma _{-b}} = -{{\,\textrm{div}\,}}'\int _{-b}^0 F'(\cdot ,x_n) \textrm{d}x_n. \end{aligned}$$
(3.80)

When \(\Gamma = \mathbb {R}^{n-1}\) this provides the estimate

$$\begin{aligned} \left[ \int _{-b}^0 -{{\,\textrm{div}\,}}{F}(\cdot ,x_n) \textrm{d}x_n + F_n \vert _{\Sigma } - F_n \vert _{\Sigma _{-b}}\right] _{\dot{H}^{-1}} \lesssim \left\| F'\right\| _{L^2}, \end{aligned}$$
(3.81)

and this means that the term appearing in the \(\dot{H}^{-1}\) seminorm in the definition of the \(V^s\) norm can be replaced with

$$\begin{aligned} \left[ \int _{-b}^0 G(\cdot ,x_n) \textrm{d}x_n - H \right] _{\dot{H}^{-1}} \end{aligned}$$
(3.82)

to produce an equivalent norm. Similarly, when \(\Gamma = \mathbb {T}^{n-1}\) these calculations show that a data tuple \((F,G,H,K) \in H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s}(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+3/2}(\Sigma )\) belongs to \(V^s\) if and only if

$$\begin{aligned} \int _{-b}^0 \hat{G}(0,x_n) \textrm{d}x_n - \hat{H}(0) =0. \end{aligned}$$
(3.83)

Our next lemma shows that the linear map associated to (1.19) takes values in \(V^s\) and is bounded.

Lemma 3.16

Let \(0 \leqq s \in \mathbb {R}\). Suppose \((u,p,\eta ) \in {_n}H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s+2}(\Omega ) \times \mathcal {H}^{s+3/2}(\Sigma )\) and set

$$\begin{aligned} F = u + \nabla p + \nabla \mathfrak {P}\eta , G = {{\,\textrm{div}\,}}{u}, H = u_n + \gamma \partial _1 \eta , \text { and } K = p. \end{aligned}$$
(3.84)

Then \((F,G,H,K) \in V^s\), \((G - {{\,\textrm{div}\,}}{F}, H - F_n \vert _{\Sigma }, - F_n\vert _{\Sigma _{-b}}, K) \in Y^s_1\), and we have the bound

$$\begin{aligned}{} & {} \left\| (F,G,H,K)\right\| _{V^s} + \left\| (G - {{\,\textrm{div}\,}}{F}, H - F_n \vert _{\Sigma } , - F_n\vert _{\Sigma _{-b}}, K)\right\| _{Y^s_1}\nonumber \\{} & {} \quad \lesssim \left\| u\right\| _{H^{s+1}} + \left\| p\right\| _{H^{s+2}} + \left\| \eta \right\| _{\mathcal {H}^{s+3/2}}. \end{aligned}$$
(3.85)

Proof

We may readily bound

$$\begin{aligned} \left\| F\right\| _{H^{s+1}} + \left\| G\right\| _{H^s} + \left\| H\right\| _{H^{s+1/2}} + \left\| K\right\| _{H^{s+3/2}} \lesssim \left\| u\right\| _{H^{s+1}} + \left\| p\right\| _{H^{s+2}} + \left\| \eta \right\| _{\mathcal {H}^{s+3/2}} . \end{aligned}$$
(3.86)

On the other hand, if we define \(f = G- {{\,\textrm{div}\,}}{F} \in H^{s}(\Omega )\), \(h_+ = H - F_n \vert _{\Sigma } \in H^{s+1/2}(\Sigma )\), \(h_- = - F_n \vert _{\Sigma _{-b}} \in H^{s+1/2}(\Sigma _{-b})\), and \(k=K \in H^{s+3/2}(\Sigma )\), then we see that

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta p = G-{{\,\textrm{div}\,}}{F} = f &{} \text {in }\Omega \\ -\partial _n p -\partial _n \mathfrak {P}\eta +\gamma \partial _1 \eta = H - F_{n}(\cdot ,0) = h_+&{}\text {on } \Sigma \\ p = K =k &{}\text {on } \Sigma \\ -\partial _n p -\partial _n \mathfrak {P}\eta = -F_{n}(\cdot ,-b) = h_-&{} \text {on } \Sigma _{-b}, \end{array}\right. } \end{aligned}$$
(3.87)

and so Lemma 3.11 implies that \(\left\| (f,h_+,h_-,k) \right\| _{Y^s_1} \lesssim \left\| p\right\| _{H^{s+2}} + \left\| \eta \right\| _{\mathcal {H}^{s+3/2}}\). When \(\Gamma = \mathbb {R}^{n-1}\), the \(\dot{H}^{-1}\) control provided by the \(Y^s_1\) norm is exactly the \(\dot{H}^{-1}\) control in the \(V^s\) norm, and the stated estimate follows by summing our two bounds. Similarly, when \(\Gamma = \mathbb {T}^{n-1}\), the vanishing zero mode condition required for inclusion in \(Y^s_1\) corresponds with the vanishing condition needed for inclusion in \(V^s\). \(\square \)

Finally, we can state the isomorphism theorem for the map associated to (1.19).

Theorem 3.17

Assume that \(0 \leqq s \in \mathbb {R}\), and if \(\Gamma = \mathbb {R}^{n-1}\), then assume that \(\gamma \ne 0\). Then the bounded linear map \(T_3: {_n}H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s+2}(\Omega ) \times \mathcal {H}^{s+3/2}(\Sigma ) \rightarrow V^s\) associated to (1.19), which is defined by

$$\begin{aligned} T_3(u,p,\eta ) = (u+\nabla p +\nabla \mathfrak {P}\eta , {{\,\textrm{div}\,}}{u}, u_n \vert _{\Sigma } + \gamma \partial _1 \eta , p\vert _{\Sigma }), \end{aligned}$$
(3.88)

is an isomorphism.

Proof

Lemma 3.16 tells us that \(T_3\) is well-defined and bounded. If \(T_3(u,p,\eta ) =0\), then in particular \(u+ \nabla p + \nabla \mathfrak {P}\eta =0\), and in turn this means that \(T_2(p,\eta ) = 0\). Theorem 3.12 then implies that \(p=0\) and \(\eta =0\), and so \(u=0\) as well. Thus, \(T_3\) is injective.

Now let \((F,G,H,K) \in V^s\). Lemma 3.16 shows that

$$\begin{aligned} (f,h_+,h_-,k) :=(G - {{\,\textrm{div}\,}}{F}, H - F_n \vert _{\Sigma } , - F_n\vert _{\Sigma _{-b}}, K) \in Y^s_1, \end{aligned}$$
(3.89)

and so we may use Theorem 3.12 to define \((p,\eta ) = T_2^{-1}(f,h_+,h_-,k) \in H^{s+2}(\Omega ) \times \mathcal {H}^{s+3/2}(\Sigma )\). In other words, \((p,\eta )\) satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta p = G-{{\,\textrm{div}\,}}{F} &{} \text {in }\Omega \\ -\partial _n p -\partial _n \mathfrak {P}\eta +\gamma \partial _1 \eta = H - F_{n}(\cdot ,0) &{}\text {on } \Sigma \\ p = K &{}\text {on } \Sigma \\ -\partial _n p -\partial _n \mathfrak {P}\eta = -F_{n}(\cdot ,-b) &{} \text {on } \Sigma _{-b}, \end{array}\right. } \end{aligned}$$
(3.90)

and upon setting \(u = F - \nabla p -\nabla \mathfrak {P}\eta \in {_n}H^{s+1}(\Omega ;\mathbb {R}^n)\) (where we have used Theorems A.2, A.4, A.7 to handle the \(\nabla \mathfrak {P}\eta \) term) we deduce that \(T_3(u,p,\eta ) = (F,G,H,K)\). Thus, \(T_3\) is surjective and so is an isomorphism. \(\square \)

4 Nonlinear Analysis for the Traveling Wave System

Now we aim to invoke the implicit function theorem to solve (2.8).

4.1 The Nonlinear Mapping

To employ the implicit function theorem we first check that a number of basic nonlinear maps are well-defined.

Proposition 4.1

Let \(s > n/2 -1\). Then there exists \(\delta _0 >0\) such that the following hold:

  1. (1)

    If \(\eta \in \mathcal {H}^{s+3/2}(\Sigma )\) and \(\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta _0\), then

    $$\begin{aligned} \left\| b^{-1} \mathfrak {P}\eta + \tilde{b} \partial _n \mathfrak {P}\eta \right\| _{C^0_b} \leqq \frac{1}{2}, \end{aligned}$$
    (4.1)

    where \(\tilde{b}(x) = 1+x_n/b\). In particular, for such \(\eta \) the functions \(\mathfrak {K}\) and \(\mathcal {A}\), defined in terms of \(\eta \) via (2.3) and (2.9), are well-defined.

  2. (2)

    If \(\eta \in \mathcal {H}^{s+3/2}(\Sigma )\) and \(\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta _0\), then the flattening map \(\mathfrak {F}_\eta : \Omega \rightarrow \Omega _\eta \) given by (2.1) is a \(C^{1 + \lfloor s-n/2 +1 \rfloor }\) orientation-preserving diffeomorphism.

  3. (3)

    For \(\eta \in \mathcal {H}^{s+3/2}(\Sigma )\) such that \(\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta _0\), the functions \(\mathfrak {J}\) and \(\mathfrak {K}\), given in terms of \(\eta \) as in (2.3), define \(H^{s+1}(\Omega )\) multipliers. Moreover, the maps

    $$\begin{aligned} \begin{aligned} B_{\mathcal {H}^{s+3/2}(\Sigma )}(0,\delta _0)&\ni \eta \mapsto \mathfrak {J}\in \mathcal {L}(H^{s+1}(\Omega )) \text { and } \\ B_{\mathcal {H}^{s+3/2}(\Sigma )}(0,\delta _0)&\ni \eta \mapsto \mathfrak {K}\in \mathcal {L}(H^{s+1}(\Omega )) \end{aligned} \end{aligned}$$
    (4.2)

    are smooth.

  4. (4)

    For \(\eta \in \mathcal {H}^{s+3/2}(\Sigma )\) such that \(\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta _0\), the functions \(\mathcal {M}\) and \(\mathcal {A}\), given in terms of \(\eta \) as in (2.4) and (2.9), define \(H^{s+1}(\Omega ;\mathbb {R}^n)\) multipliers. Moreover, the maps

    $$\begin{aligned} \begin{aligned} B_{\mathcal {H}^{s+3/2}(\Sigma )}(0,\delta _0)&\ni \eta \mapsto \mathcal {M}\in \mathcal {L}(H^{s+1}(\Omega ;\mathbb {R}^n)) \text { and }\\ B_{\mathcal {H}^{s+3/2}(\Sigma )}(0,\delta _0)&\ni \eta \mapsto \mathcal {A}\in \mathcal {L}(H^{s+1}(\Omega ;\mathbb {R}^n)) \end{aligned} \end{aligned}$$
    (4.3)

    are smooth.

Proof

We will only provide the proof for the case \(\Gamma = \mathbb {R}^{n-1}\), as the case \(\Gamma = \mathbb {T}^{n-1}\) is similar but simpler. Since \(s+1 > n/2\), the existence of a \(\delta _1 >0\) such that \(\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta _1\) implies the bound (4.1) follows readily from the results in Theorems A.4 and A.7, which show that \(\mathfrak {P}\eta \in \mathbb {P}^{s+2}(\Omega ) \hookrightarrow C^1_b(\Omega )\). With this bound in hand, we may appeal to (2.3) to write

$$\begin{aligned} \mathfrak {K}= \mathfrak {J}^{-1} = \left( 1 + b^{-1} \mathfrak {P}\eta + \tilde{b}\partial _n \mathfrak {P}\eta \right) ^{-1} = \sum _{m=0}^\infty (-1)^m (b^{-1} \mathfrak {P}\eta + \tilde{b}\partial _n \mathfrak {P}\eta )^m. \nonumber \\ \end{aligned}$$
(4.4)

In turn, (2.4) allows us to write

$$\begin{aligned} \mathcal {B} := \mathcal {M}-I = (-\mathfrak {K}\tilde{b} \nabla ' \mathfrak {P}\eta , \mathfrak {K}-1) \otimes e_n, \end{aligned}$$
(4.5)

and we conclude that \(\mathfrak {K}\) and \(\mathcal {B}\), and hence \(\mathfrak {J}= \mathfrak {K}^{-1}\), \(\mathcal {M}=I + \mathcal {B}\), and \(\mathcal {A}= \mathfrak {J}(I+\mathcal {B})^T (I+ \mathcal {B})\) are well-defined when \(\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta _1\).
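To see the mechanism behind the geometric series (4.4) in its simplest form, one can check pointwise that a truncated Neumann series recovers \((1+a)^{-1}\) whenever \(\left\| a\right\| _{C^0_b}\leqq 1/2\). The short Python sketch below is only an illustration of this scalar mechanism: the function \(a\) stands in for \(b^{-1}\mathfrak {P}\eta +\tilde{b}\partial _n \mathfrak {P}\eta \) and is an arbitrary choice of ours, not an object computed from the paper's operators.

```python
import numpy as np

# Sketch of the geometric series (4.4) in its simplest pointwise form.
# Here a(x) plays the role of b^{-1} P eta + btilde * d_n P eta and is
# assumed to obey |a| <= 1/2 as in (4.1); the particular a below is an
# arbitrary illustrative choice.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
a = 0.4 * np.cos(x)                            # sup |a| = 0.4 < 1/2

K_series = np.zeros_like(a)
for m in range(30):                            # truncated sum of (-a)^m
    K_series += (-a) ** m

K_exact = 1.0 / (1.0 + a)
print("max truncation error:", np.max(np.abs(K_series - K_exact)))
```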

Next we use Theorem A.5 (again noting that \(s+1> n/2\)) to see that the power series

$$\begin{aligned} \mathbb {P}^{s+1}(\Omega ) \ni z \mapsto \sum _{m=0}^\infty (-1)^m z^m \in \mathcal {L}(H^{s+1}(\Omega )) \end{aligned}$$
(4.6)

converges and defines an analytic function for \(\left\| z\right\| _{\mathbb {P}^{s+1}} < \delta _2\), for some \(\delta _2 >0\). Again employing Theorems A.4 and A.7, we may choose \(\delta _3 >0\) such that \(\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta _3\) implies that \(\left\| b^{-1} \mathfrak {P}\eta + \tilde{b}\partial _n \mathfrak {P}\eta \right\| _{\mathbb {P}^{s+1}} < \delta _2\).

Set \(\delta _0 = \min \{\delta _1,\delta _2,\delta _3\}\). Then for \(\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta _0\) we have that \(\mathfrak {K}\), \(\mathfrak {J}\), \(\mathcal {M}\), and \(\mathcal {A}\) are well-defined pointwise, and the formulas (4.4) and (4.5) then show that the maps given in (4.2) and (4.3) are smooth.

Finally, suppose \(\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta _0\). Then according to Theorems A.7 and A.4, the map \(\mathfrak {F}_\eta \) is \(C^{1 + \lfloor s-n/2+1 \rfloor }\). Moreover, the bound (4.1) implies that for each \(x' \in \mathbb {R}^{n-1}\), the map \((-b,0) \ni x_n \mapsto e_n \cdot \mathfrak {F}_\eta (x',x_n) \in (-b, \eta (x'))\) is increasing since its derivative is at least 1/2 everywhere. From this and the fact that \((\mathfrak {F}_\eta (x))' = x'\) we conclude that \(\mathfrak {F}_\eta \) is a bijection from \(\Omega \) to \(\Omega _\eta \). On the other hand, the bound (4.1) also shows that \(\det \nabla \mathfrak {F}_\eta (x) \geqq 1/2\) for \(x \in \Omega \), so \(\mathfrak {F}_\eta \) is a \(C^{1 + \lfloor s-n/2 +1\rfloor }\) diffeomorphism by virtue of the inverse function theorem. \(\square \)

We next introduce some useful notation.

Definition 4.2

Let \(0 \leqq s \in \mathbb {R}\) and V be a finite dimensional real inner-product space. We define the bounded linear map \(L_\Omega : H^s(\Gamma ;V) \rightarrow H^s(\Omega ;V)\) via \(L_\Omega f(x) = f(x')\).

The next theorem verifies that the nonlinear maps associated to the problem (2.8) are well-defined and \(C^1\), which is essential for our subsequent use of the implicit function theorem.

Theorem 4.3

Let \(n/2-1 < s \in \mathbb {N}\) and for \(\delta >0\) define the set

$$\begin{aligned} U^s_\delta = \{(u,p,\eta ) \in {_n}H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s+2}(\Omega ) \times \mathcal {H}^{s+3/2}(\Sigma ) \;\vert \;\left\| \eta \right\| _{\mathcal {H}^{s+3/2}} < \delta \}. \end{aligned}$$
(4.7)

There exists a constant \(\delta >0\) such that if \(\gamma \in \mathbb {R}\), \(\varphi _0 \in H^{s+3/2}(\Sigma )\), \(\varphi _1 \in H^{s+3}(\Gamma \times \mathbb {R})\), \(\mathfrak {f}_0 \in H^{s+1}(\Omega ;\mathbb {R}^n)\), \(\mathfrak {f}_1 \in H^{s+2}(\Gamma \times \mathbb {R};\mathbb {R}^n)\), and \((u,p,\eta ) \in U^s_\delta \), and we define \(F: \Omega \rightarrow \mathbb {R}^n\), \(G: \Omega \rightarrow \mathbb {R}\), and \(H,K: \Sigma \rightarrow \mathbb {R}\) via

$$\begin{aligned} \begin{aligned} F&= u + \nabla _{\mathcal {A}}p + \nabla _{\mathcal {A}}\mathfrak {P}\eta - \mathfrak {J}\mathcal {M}^T \left[ L_\Omega \mathfrak {f}_0 + \mathfrak {f}_1 \circ \mathfrak {F}_\eta \right] ,{} & {} {}&G&= {{\,\textrm{div}\,}}{u}, \\ H&= u_n + \gamma \partial _1 \eta ,{} & {} {}&K&= p -\varphi _0 - \varphi _1 \circ \mathfrak {F}_\eta \vert _{\Sigma }, \end{aligned} \end{aligned}$$
(4.8)

where \(\mathfrak {F}_\eta \), \(\mathfrak {J}\), \(\mathcal {M}\), and \(\mathcal {A}\) are determined by \(\eta \) via (2.1), (2.3), (2.4), and (2.9), then \((F,G,H,K) \in V^s\), where \(V^s\) is as in Definition 3.14. Moreover, the map

$$\begin{aligned} \Psi : \mathbb {R}\times H^{s+3/2}(\Sigma ) \times H^{s+3}(\Gamma \times \mathbb {R}) \times H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s+2}(\Gamma \times \mathbb {R};\mathbb {R}^n) \times U^s_\delta \rightarrow V^s \end{aligned}$$
(4.9)

defined by \(\Psi (\gamma , \varphi _0, \varphi _1, \mathfrak {f}_0, \mathfrak {f}_1, u,p,\eta ) = (F,G,H,K)\) is \(C^1\).

Proof

Again, we will only write the proof for the case \(\Gamma = \mathbb {R}^{n-1}\), as the case \(\Gamma = \mathbb {T}^{n-1}\) is similar but simpler.

Let \(\delta >0\) be the smaller of \(\delta _0>0\) from Proposition 4.1 and \(\delta _*>0\) from Corollary A.12. Proposition 4.1, Theorems A.4 and A.7, and the first item of Corollary A.12, applied with \(r = \sigma = s+1\), show that the map

$$\begin{aligned} (\mathfrak {f}_0, \mathfrak {f}_1, u,p,\eta ) \mapsto u + \nabla _{\mathcal {A}}p + \nabla _{\mathcal {A}}\mathfrak {P}\eta - \mathfrak {J}\mathcal {M}^T \left[ L_\Omega \mathfrak {f}_0 + \mathfrak {f}_1 \circ \mathfrak {F}_\eta \right] \in H^{s+1}(\Omega ;\mathbb {R}^n) \end{aligned}$$
(4.10)

is well-defined and \(C^1\). Next, we note that the maps

$$\begin{aligned} u \mapsto {{\,\textrm{div}\,}}{u} \in H^s(\Omega ) \text { and } (\gamma ,u,\eta ) \mapsto u_n\vert _{\Sigma } + \gamma \partial _1 \eta \in H^{s+1/2}(\Sigma ) \end{aligned}$$
(4.11)

are smooth since the former is linear and the latter is a sum of the linear trace map and a quadratic map. Finally, Proposition 4.1 and the second item of Corollary A.12, applied with \(r = \sigma +1 = s+2\), show that the map \((\varphi _0,\varphi _1,p,\eta ) \mapsto p -\varphi _0 - \varphi _1 \circ \mathfrak {F}_\eta \vert _{\Sigma }\) is \(C^1\). These combine to show initially that \(\Psi \) is well-defined and \(C^1\) as a map into \(H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s}(\Omega ) \times H^{s+1/2}(\Sigma ) \times H^{s+3/2}(\Sigma )\).

On the other hand, the map

$$\begin{aligned} (\gamma ,u,\eta ){} & {} \mapsto \int _{-b}^0 ({{\,\textrm{div}\,}}{u})(\cdot ,x_n) \textrm{d}x_n - (u_n\vert _{\Sigma } + \gamma \partial _1 \eta )\nonumber \\{} & {} = {{\,\textrm{div}\,}}' \int _{-b}^0 u'(\cdot ,x_n) \textrm{d}x_n - \gamma \partial _1 \eta \in \dot{H}^{-1}(\Sigma ) \end{aligned}$$
(4.12)

is well-defined (thanks to Theorem A.2) and quadratic, and thus smooth. Combining the above observations with Remark 3.15, we conclude that \(\Psi \) is actually a \(C^1\) mapping into the space \(V^s\). \(\square \)

4.2 Invoking the Implicit Function Theorem: Proof of the Main Existence Theorem

Finally, we are ready to invoke the implicit function theorem to prove the existence of solutions to the system (2.8).

Proof of Theorem 1.1

For the sake of brevity, write \(X^s = \mathbb {R}\times H^{s+3/2}(\Sigma ) \times H^{s+3}(\Gamma \times \mathbb {R}) \times H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s+2}(\Gamma \times \mathbb {R};\mathbb {R}^n)\) and \(W^s ={_n}H^{s+1}(\Omega ;\mathbb {R}^n) \times H^{s+2}(\Omega ) \times \mathcal {H}^{s+3/2}(\Sigma )\) and note that \(U^s_\delta \subseteq W^s\) is an open subset. Let \(\Psi : X^s \times U^s_\delta \rightarrow V^s\) be the \(C^1\) map given in Theorem 4.3 and note that a given tuple \((\gamma , \varphi _0, \varphi _1, \mathfrak {f}_0, \mathfrak {f}_1, u,p,\eta ) \in X^s \times U^s_\delta \) satisfies (1.17) if and only if \(\Psi (\gamma , \varphi _0, \varphi _1, \mathfrak {f}_0, \mathfrak {f}_1, u,p,\eta ) = (0,0,0,0) \in V^s\).

Since the domain of \(\Psi \) has the product structure \(X^s \times U^s_\delta \), we may consider the partial derivatives of \(\Psi \) with respect to each factor:

$$\begin{aligned} D_1 \Psi : X^s \times U^s_\delta \rightarrow \mathcal {L}(X^s; V^s) \text { and } D_2 \Psi : X^s \times U^s_\delta \rightarrow \mathcal {L}(W^s; V^s). \end{aligned}$$
(4.13)

It is then a simple matter to check that, for \(\gamma \in \mathfrak {C}\),

$$\begin{aligned} \Psi (\gamma ,0,0,0,0,0,0,0) = (0,0,0,0) \text { and that } D_2 \Psi (\gamma ,0,0,0,0,0,0,0) = T_3, \end{aligned}$$
(4.14)

where \(T_3: W^s \rightarrow V^s\) is the isomorphism constructed in Theorem 3.17. For any \(\gamma _*\in \mathfrak {C}\) we may then employ the implicit function theorem to find open sets \(\mathcal {D}^s(\gamma _*) \subseteq X^s\) and \(\mathcal {S}^s(\gamma _*) \subseteq U^s_\delta \) and a \(C^1\) and Lipschitz function \(\Xi _{\gamma _*}: \mathcal {D}^s(\gamma _*) \rightarrow \mathcal {S}^s(\gamma _*)\) such that \((\gamma _*,0,0,0,0) \in \mathcal {D}^s(\gamma _*)\), \((0,0,0) \in \mathcal {S}^s(\gamma _*)\), and

$$\begin{aligned} \Psi (\gamma , \varphi _0,\varphi _1, \mathfrak {f}_0, \mathfrak {f}_1, \Xi _{\gamma _*}(\gamma , \varphi _0,\varphi _1, \mathfrak {f}_0, \mathfrak {f}_1)) = (0,0,0,0) \end{aligned}$$
(4.15)

for all \((\gamma , \varphi _0,\varphi _1, \mathfrak {f}_0, \mathfrak {f}_1) \in \mathcal {D}^s(\gamma _*)\). The implicit function theorem also guarantees that \(\Xi _{\gamma _*}\) parameterizes the unique such solution within \(\mathcal {S}^s(\gamma _*)\).

We now define the open sets

$$\begin{aligned} \mathcal {D}^s = \bigcup _{\gamma _*\in \mathfrak {C}} \mathcal {D}^s(\gamma _*) \subseteq X^s \text { and } \mathcal {S}^s = \bigcup _{\gamma _*\in \mathfrak {C}} \mathcal {S}^s(\gamma _*) \subseteq U^s_\delta . \end{aligned}$$
(4.16)

By construction, we have the inclusions listed in the first item. Using the above analysis, we can define the map \(\Xi : \mathcal {D}^s \rightarrow \mathcal {S}^s\) via \(\Xi (\gamma , \varphi _0,\varphi _1, \mathfrak {f}_0, \mathfrak {f}_1) = \Xi _{\gamma _*} (\gamma , \varphi _0,\varphi _1, \mathfrak {f}_0, \mathfrak {f}_1)\) whenever \((\gamma , \varphi _0,\varphi _1, \mathfrak {f}_0, \mathfrak {f}_1) \in \mathcal {D}^s(\gamma _*)\). This is well-defined, \(C^1\), and locally Lipschitz by the above consequences of the implicit function theorem. The second and third items then follow by setting \((u,p,\eta ) = \Xi (\gamma , \varphi _0,\varphi _1, \mathfrak {f}_0, \mathfrak {f}_1)\). \(\quad \square \)
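The abstract argument above conceals a concrete iteration: the implicit function theorem produces \(\Xi _{\gamma _*}\) from the contraction \(x\mapsto x - D_2\Psi (\gamma _*,0,\dots ,0)^{-1}\Psi (q,x)\), where the frozen linearization is precisely the isomorphism \(T_3\). The finite-dimensional Python toy below is entirely our own construction (the matrix \(A\) and the quadratic nonlinearity are stand-ins, not the actual \(\Psi \) of Theorem 4.3); it only illustrates how this simplified Newton iteration converges for small data.

```python
import numpy as np

# Toy illustration of the iteration behind the implicit function theorem
# (this Psi is a two-dimensional stand-in, not the map of Theorem 4.3).
# The linearization at the trivial solution is frozen, mirroring the role
# of the isomorphism T_3 above.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                     # stand-in for D_2 Psi at (gamma_*, 0)

def N(x):
    # quadratic nonlinearity with N(0) = 0 and DN(0) = 0
    return np.array([x[0] * x[1], x[1] ** 2])

def Psi(q, x):
    return A @ x + N(x) - q

q = np.array([0.05, -0.02])                    # small data
x = np.zeros(2)
A_inv = np.linalg.inv(A)
for _ in range(20):
    # simplified Newton step: x <- x - D_2 Psi(0, 0)^{-1} Psi(q, x)
    x = x - A_inv @ Psi(q, x)

print("solution:", x, " residual:", np.linalg.norm(Psi(q, x)))
```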

5 Analysis of the Dirichlet–Neumann Operator

This section is devoted to the analysis of the Dirichlet–Neumann operator \(G(\eta )\) when \(\eta \) is small in \(\mathcal {H}^s\). Since we will primarily work with the horizontal coordinates, it is more convenient to denote a point in \(\Omega _\eta \) by \((x, y)\), where \(x\in \mathbb {R}^d\), \(d=n-1\geqq 1\), and \(y\in \mathbb {R}\). Then, we recall that

$$\begin{aligned} {[}G(\eta )f](x)=N(x)\cdot (\nabla _{x, y}\psi )(x, \eta (x)),\quad N(x)=(-\nabla \eta (x), 1), \end{aligned}$$
(5.1)

where \(\psi \) solves the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta _{x, y} \psi =0 &{}\text {in } \Omega _\eta ,\\ \psi =f &{}\text {on } \Sigma _\eta ,\\ \partial _y\psi =0 &{}\text {on } \Sigma _{-b}. \end{array}\right. } \end{aligned}$$
(5.2)

We straighten the domain \(\Omega _\eta =\{(x, y)\in \mathbb {R}^d\times \mathbb {R}: -b<y<\eta (x)\}\) using the mapping

$$\begin{aligned} \mathfrak {F}_\eta : \Omega = M^d\times (-b, 0)\ni (x, z)\mapsto (x, \varrho (x, z))\in \Omega _\eta , \quad \varrho (x, z)=\frac{z+b}{b}e^{z|D|}\eta (x)+z. \end{aligned}$$
(5.3)

Note that \(\mathfrak {F}_\eta \) is the mapping (2.1) written in our new notation. Since

$$\begin{aligned} \partial _z\varrho (x, z)=\frac{1}{b}e^{z|D|}\eta (x)+\frac{z+b}{b}e^{z|D|}|D|\eta (x)+1, \end{aligned}$$
(5.4)

if

$$\begin{aligned} \Vert e^{z|D|}\eta \Vert _{L^\infty }+ b\Vert e^{z|D|}|D|\eta \Vert _{L^\infty }<b \end{aligned}$$
(5.5)

then \(\mathfrak {F}_\eta \) is a Lipschitz diffeomorphism. A direct calculation shows that if \(g:\Omega _\eta \rightarrow \mathbb {R}\) then \(\widetilde{g}(x, z):=(g\circ \mathfrak {F}_\eta )(x, z)=g(x, \varrho (x, z))\) satisfies

$$\begin{aligned} {{\,\textrm{div}\,}}_{x, z}(\mathcal {A}\nabla _{x, z}\widetilde{g})(x, z)=\partial _z\varrho \,(\Delta _{x, y}g)(x, \varrho (x, z)), \end{aligned}$$
(5.6)

where

$$\begin{aligned} \mathcal {A}= \begin{bmatrix} \partial _z\varrho I_d&{} -\nabla _x\varrho \\ -(\nabla _x\varrho )^T &{} \frac{1+|\nabla _x\varrho |^2}{\partial _z\varrho } \end{bmatrix}, \end{aligned}$$
(5.7)

\(I_d\) being the \(d\times d\) identity matrix. \(\mathcal {A}\) is the matrix (2.9) written in our new notation. Since \(\psi \) is harmonic in \(\Omega _\eta \), \(v=\psi \circ \mathfrak {F}_\eta \) satisfies \( {{\,\textrm{div}\,}}_{x, z}(\mathcal {A}\nabla _{x, z}v)=0\) in \(\Omega \). We write \(\mathcal {A}\) as a perturbation of the identity matrix

$$\begin{aligned} \mathcal {A}=I_{d+1}+\begin{bmatrix}\frac{1}{b}e^{z|D|}\eta (x)+\frac{z+b}{b}e^{z|D|}|D|\eta (x)&{} -\frac{z+b}{b}e^{z|D|}\nabla \eta \\ -\frac{z+b}{b}e^{z|D|}\nabla \eta ^T &{} \frac{\left( \frac{z+b}{b}\right) ^2|e^{z|D|}\nabla \eta |^2-\frac{1}{b}e^{z|D|} \eta (x)-\frac{z+b}{b}e^{z|D|}|D|\eta (x)}{\frac{1}{b}e^{z|D|}\eta (x)+\frac{z+b}{b}e^{z|D|}|D|\eta (x)+1},\end{bmatrix} \end{aligned}$$
(5.8)

Consequently, v satisfies

$$\begin{aligned} \Delta _{x, z}v=\partial _zQ_a[v]+{{\,\textrm{div}\,}}_x Q_b[v], \end{aligned}$$
(5.9)

where

$$\begin{aligned} \begin{aligned} Q_a[v]&=\frac{z+b}{b}e^{z|D|}\nabla \eta \cdot \nabla _x v-\frac{(\frac{z+b}{b})^2|e^{z|D|}\nabla \eta |^2-\frac{1}{b}e^{z|D|}\eta (x)-\frac{z+b}{b}e^{z|D|}|D|\eta (x)}{\frac{1}{b}e^{z|D|}\eta (x)+\frac{z+b}{b}e^{z|D|}|D|\eta (x)+1}\partial _zv,\\ Q_b[v]&=-\left( \frac{1}{b}e^{z|D|}\eta (x)+\frac{z+b}{b}e^{z|D|}|D|\eta (x)\right) \nabla _xv+\frac{z+b}{b}e^{z|D|}\nabla \eta \partial _zv. \end{aligned} \end{aligned}$$
(5.10)
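Since \(e^{z|D|}\) is a Fourier multiplier, the straightening (5.3) and the coefficients appearing in (5.8)–(5.10) are straightforward to realize numerically on a periodic grid. The Python sketch below (our own illustration in the case \(d=1\); the depth \(b\), the profile \(\eta \), and the grid sizes are ad hoc choices rather than data from the paper) computes \(\varrho (x,z)\) and \(\partial _z\varrho \) via the FFT and checks the positivity of \(\partial _z\varrho \) guaranteed by the smallness condition (5.5).

```python
import numpy as np

# Sketch: realize rho(x, z) = ((z + b)/b) e^{z|D|} eta(x) + z from (5.3)
# on a periodic grid with d = 1 and check d_z rho > 0, cf. (5.4)-(5.5).
# The depth b, the profile eta and the grid sizes are illustrative only.
b = 1.0
Nx, Nz = 256, 64
x = np.linspace(0.0, 2.0 * np.pi, Nx, endpoint=False)
z = np.linspace(-b, 0.0, Nz)
eta = 0.05 * np.cos(x) + 0.03 * np.sin(2.0 * x)             # small free boundary

k = 2.0 * np.pi * np.fft.rfftfreq(Nx, d=2.0 * np.pi / Nx)   # wavenumbers 0, 1, ..., Nx/2
eta_hat = np.fft.rfft(eta)

def apply_exp_zD(zz, g_hat):
    """Apply the multiplier e^{zz |D|} (zz <= 0) to a function given by its rFFT."""
    return np.fft.irfft(np.exp(zz * k) * g_hat, n=Nx)

rho = np.empty((Nz, Nx))
dz_rho = np.empty((Nz, Nx))
for j, zz in enumerate(z):
    smoothed = apply_exp_zD(zz, eta_hat)                    # e^{z|D|} eta
    smoothed_D = apply_exp_zD(zz, k * eta_hat)              # e^{z|D|} |D| eta
    rho[j] = (zz + b) / b * smoothed + zz
    dz_rho[j] = smoothed / b + (zz + b) / b * smoothed_D + 1.0   # formula (5.4)

print("min d_z rho       :", dz_rho.min())                  # positive for small eta
print("rho = eta on z = 0:", np.allclose(rho[-1], eta))
```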

Setting

$$\begin{aligned} \mathcal {D}(z)=|D|\tanh ((z+b)|D|), \end{aligned}$$
(5.11)

we decompose \(\Delta _{x, z}=(\partial _z+\mathcal {D}(z))(\partial _z-\mathcal {D}(z))\). Then, (5.9) is equivalent to the following system of forward and backward parabolic equations

$$\begin{aligned}&(\partial _z+\mathcal {D}(z))w={{\,\textrm{div}\,}}_x Q_b[v]-\mathcal {D}(z)Q_a[v], \end{aligned}$$
(5.12)
$$\begin{aligned}&(\partial _z-\mathcal {D}(z))v=w+Q_a[v]. \end{aligned}$$
(5.13)

Since \(\mathcal {D}(-b)=0\) and \(\partial _zv(x, -b)=\partial _z\varrho (x, -b)\partial _y\psi (x, -b)=0\), we have \(Q_a[v](x, -b)=0\) and \(w(x, -b)=0\).

By the chain rule and (5.13), the Dirichlet–Neumann operator can be written in terms of f and w as

$$\begin{aligned} \begin{aligned} G(\eta )f&=\Big (-\nabla _x\varrho \cdot \nabla _xv+\frac{1+|\nabla _x\varrho |^2}{\partial _z\varrho }\partial _zv\Big )\vert _{z=0} =\partial _zv\vert _{z=0}\\&\quad +\Big (-\nabla _x\varrho \cdot \nabla _xv+\big (\frac{1+|\nabla _x\varrho |^2}{\partial _z\varrho }-1\big )\partial _zv\Big )\vert _{z=0}\\&=\Big (\mathcal {D}(z)v+w+Q_a[v]\Big )\vert _{z=0}-Q_a[v]\vert _{z=0} =m(D)f+w\vert _{z=0}, \end{aligned} \end{aligned}$$
(5.14)

where we have denoted

$$\begin{aligned} m(D)=|D|\tanh (b|D|). \end{aligned}$$
(5.15)
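The principal part \(m(D)\) in (5.15) is exactly the Dirichlet–Neumann operator of the flat strip: when \(\eta =0\) and \(f(x)=\cos (k_0x)\), the harmonic extension with \(\partial _y\psi =0\) at \(y=-b\) is \(\psi (x,y)=\frac{\cosh (k_0(y+b))}{\cosh (k_0b)}\cos (k_0x)\), whose vertical derivative at \(y=0\) is \(k_0\tanh (k_0b)\cos (k_0x)\). The following Python sketch (with our own grid and parameter choices) verifies that an FFT realization of \(m(D)\) reproduces this exact answer.

```python
import numpy as np

# Sketch: for eta = 0 one has G(0)f = m(D)f with m(D) = |D| tanh(b|D|).
# For f(x) = cos(k0 x) the flat-strip harmonic extension is
#   psi(x, y) = cosh(k0 (y + b)) / cosh(k0 b) * cos(k0 x),
# so d_y psi at y = 0 equals k0 tanh(k0 b) cos(k0 x).  We check that an
# FFT realization of the symbol (5.15) reproduces this exact answer.
# b, k0 and the grid are illustrative choices.
b, k0 = 1.0, 3
Nx = 256
x = np.linspace(0.0, 2.0 * np.pi, Nx, endpoint=False)
f = np.cos(k0 * x)

k = 2.0 * np.pi * np.fft.rfftfreq(Nx, d=2.0 * np.pi / Nx)
m = k * np.tanh(b * k)                                   # the symbol m(xi)
mDf = np.fft.irfft(m * np.fft.rfft(f), n=Nx)

exact = k0 * np.tanh(k0 * b) * np.cos(k0 * x)            # from the explicit extension
print("max error:", np.max(np.abs(mDf - exact)))         # machine precision
```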

Using the identities

$$\begin{aligned} \begin{aligned} (\partial _z+\mathcal {D}(z))w&=[\cosh ((z+b)|D|)]^{-1}\partial _z\Big \{\cosh ((z+b)|D|) w\Big \}\\ (\partial _z-\mathcal {D}(z))v&=\cosh ((z+b)|D|)\partial _z\Big \{[\cosh ((z+b)|D|)]^{-1}v\Big \}, \end{aligned} \end{aligned}$$
(5.16)

we can integrate (5.12) and (5.13) to obtain

$$\begin{aligned}&w(z)=\int _{-b}^z\frac{\cosh ((z'+b)|D|)}{\cosh ((z+b)|D|)}\left\{ {{\,\textrm{div}\,}}_x Q_b[v](z')-\mathcal {D}(z)Q_a[v](z')\right\} \textrm{d}z', \end{aligned}$$
(5.17)
$$\begin{aligned}&v(z)=\frac{\cosh ((z+b)|D|)}{\cosh (b|D|)}f-\int _z^0 \frac{\cosh ((z+b)|D|)}{\cosh ((z'+b)|D|)}\left\{ w(z')+Q_a[v](z')\right\} \textrm{d}z' \end{aligned}$$
(5.18)

for \(z\in [-b, 0]\). It then follows from (5.14) that

$$\begin{aligned}&G(\eta )f=m(D)f+R(\eta )f, \end{aligned}$$
(5.19)
$$\begin{aligned}&R(\eta )f=\int _{-b}^0\frac{\cosh ((z'+b)|D|)}{\cosh (b|D|)}\left\{ {{\,\textrm{div}\,}}_x Q_b[v](z')-|D|\tanh (b|D|)Q_a[v](z')\right\} \textrm{d}z'. \end{aligned}$$
(5.20)

Clearly \(R(\eta )f\) has the structure of a horizontal derivative; in particular, its zero frequency mode vanishes.

We denote \(I=[-b, 0]\) and

$$\begin{aligned} U^r=\widetilde{L}^\infty (I; H^r(M^d))\cap \widetilde{L}^1(I; H^{r+1}(M^d)), \end{aligned}$$
(5.21)

where the definition of the Chemin–Lerner spaces \(\widetilde{L}^p H^s\) is recalled in Definition A.15. In Appendix A.4, we establish estimates in Chemin–Lerner norms for the operators appearing in (5.17) and (5.18). The next proposition uncovers the low-frequency structure and provides the boundedness of the remainder \(R(\eta )\).

Proposition 5.1

Let \(\sigma \geqq \sigma _0> 1+\frac{d}{2}\). There exists a small positive constant \(c_1=c_1(\sigma , \sigma _0, b, d)\) such that if \(\Vert \eta \Vert _{\mathcal {H}^{\sigma _0}}<c_1\) then the following assertions hold.

(1) We have

$$\begin{aligned} \Vert \nabla _{x,z}v\Vert _{U^{\sigma -1}}\leqq C\Vert \nabla _xf\Vert _{H^{\sigma -1}}+C\Vert \eta \Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}}, \end{aligned}$$
(5.22)

where \(C=C(\sigma , \sigma _0, b, d)\).

(2) For any continuous symbol \(\ell :\mathbb {R}\rightarrow \mathbb {R}\) satisfying

$$\begin{aligned} \ell (\xi )\asymp {\left\{ \begin{array}{ll} |\xi | &{}\text {if } |\xi |\asymp \infty ,\\ |\xi |^2 &{}\text {if } |\xi |\asymp 0\end{array}\right. }, \end{aligned}$$
(5.23)

we have

$$\begin{aligned} \Vert \ell ^{-\frac{1}{2}}(D) R(\eta )f\Vert _{H^{\sigma -\frac{1}{2}}(\mathbb {R}^d)}{} & {} \leqq C\Vert \eta \Vert _{\mathcal {H}^{\sigma _0}(\mathbb {R}^d)}\Vert \nabla _xf\Vert _{H^{\sigma -1}(\mathbb {R}^d)}\nonumber \\{} & {} \quad + C\Vert \eta \Vert _{\mathcal {H}^\sigma (\mathbb {R}^d)}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}(\mathbb {R}^d)}, \end{aligned}$$
(5.24)

where \(C=C(\ell , \sigma , \sigma _0, b, d)\). Moreover, for \(\eta ,~f: \mathbb {T}^d\rightarrow \mathbb {R}\) we have \(\widehat{R(\eta )f}(0)=0\) and

$$\begin{aligned}{} & {} \Vert |D|^{-\frac{1}{2}}R(\eta )f\Vert _{H^{\sigma -\frac{1}{2}}(\mathbb {T}^d)}\nonumber \\{} & {} \quad \leqq C\Vert \eta \Vert _{H^{\sigma _0}(\mathbb {T}^d)}\Vert \nabla _xf\Vert _{H^{\sigma -1}(\mathbb {T}^d)}+ C\Vert \eta \Vert _{H^\sigma (\mathbb {T}^d)}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}(\mathbb {T}^d)}, \end{aligned}$$
(5.25)

where \(C=C(\sigma , \sigma _0, b, d)\).

Proof

Applying the estimates (A.42) and (A.43) to (5.18) and (5.17), we obtain

$$\begin{aligned} \begin{aligned} \Vert \nabla _xv\Vert _{U^{\sigma -1}}&\leqq C\Vert \nabla _xf\Vert _{H^{\sigma -1}}+C\Vert \nabla _xw\Vert _{\widetilde{L}^1(I; H^{\sigma -1})}+C\Vert \nabla _xQ_a[v]\Vert _{\widetilde{L}^1(I; H^{\sigma -1})}\\&\leqq C\Vert \nabla _xf\Vert _{H^{\sigma -1}}+C\Vert w\Vert _{\widetilde{L}^1(I; H^{\sigma })}+C\Vert Q_a[v]\Vert _{\widetilde{L}^1(I; H^{\sigma })} \end{aligned} \end{aligned}$$
(5.26)

and

$$\begin{aligned} \begin{aligned} \Vert w\Vert _{U^{\sigma -1}}&\leqq C\Vert {{\,\textrm{div}\,}}_x Q_b[v]\Vert _{\widetilde{L}^1(I; H^{\sigma -1})}+ C\Vert \mathcal {D}(z)Q_a[v]\Vert _{\widetilde{L}^1(I; H^{\sigma -1})}\\&\leqq C\Vert Q_b[v]\Vert _{\widetilde{L}^1(I; H^{\sigma })}+ C\Vert Q_a[v]\Vert _{\widetilde{L}^1(I; H^{\sigma })}, \end{aligned} \end{aligned}$$
(5.27)

where \(C=C(b)\). It follows that

$$\begin{aligned} \Vert \nabla _xv\Vert _{U^{\sigma -1}}&\leqq C\Vert \nabla _xf\Vert _{H^{\sigma -1}}+C\Vert Q_a[v]\Vert _{\widetilde{L}^1(I; H^{\sigma })}\nonumber \\&\quad + C\Vert Q_b[v]\Vert _{\widetilde{L}^1(I; H^{\sigma })},\quad C=C(b). \end{aligned}$$
(5.28)

On the other hand, equation (5.13) gives \(\partial _zv= \mathcal {D}(z)v+w+Q_a[v]\). Since \(\mathcal {D}(z)=|D|\tanh ((z+b)|D|)\) and \(0\leqq \tanh ((z+b)|\xi |)\leqq 1\) for \(z\in [-b, 0]\), we obtain \(\Vert \mathcal {D}(z)v\Vert _{U^{\sigma -1}}\leqq \Vert \nabla _xv\Vert _{U^{\sigma -1}}\). Combining this with (5.26) and (5.27), we deduce

$$\begin{aligned} \Vert \partial _zv\Vert _{U^{\sigma -1}}\leqq C\Vert \nabla _xf\Vert _{H^{\sigma -1}}+C\Vert Q_a[v]\Vert _{U^{\sigma -1}}+ C\Vert Q_b[v]\Vert _{U^{\sigma -1}},\quad C=C(b). \end{aligned}$$
(5.29)

For \(\sigma \geqq \sigma _0>1+\frac{d}{2}\) and \(\Vert \eta \Vert _{\mathcal {H}^{\sigma _0}}<c_1\) small enough, we can apply the product estimate (A.57) and the nonlinear estimate (A.68) to obtain

$$\begin{aligned} \Vert (Q_a[v], Q_b[v])\Vert _{U^{\sigma -1}}\leqq \mathcal {F}(\Vert \eta \Vert _{\mathcal {H}^{\sigma _0}})\left\{ \Vert \eta \Vert _{\mathcal {H}^{\sigma _0}}\Vert \nabla _{x,z}v\Vert _{U^{\sigma -1}}+\Vert \eta \Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _{x,z}v\Vert _{U^{\sigma _0-1}}\right\} , \end{aligned}$$
(5.30)

where \(\mathcal {F}:\mathbb {R}^+\rightarrow \mathbb {R}^+\) is nondecreasing and depends only on \((\sigma , \sigma _0, b, d)\).

A combination of (5.28), (5.29) and (5.30) yields

$$\begin{aligned}{} & {} \Vert \nabla _{x,z}v\Vert _{U^{\sigma -1}}\nonumber \\{} & {} \quad \leqq C\Vert \nabla _xf\Vert _{H^{\sigma -1}}+ \mathcal {F}(\Vert \eta \Vert _{\mathcal {H}^{\sigma _0}})\left\{ \Vert \eta \Vert _{\mathcal {H}^{\sigma _0}}\Vert \nabla _{x,z}v\Vert _{U^{\sigma -1}}+\Vert \eta \Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _{x,z}v\Vert _{U^{\sigma _0-1}}\right\} , \nonumber \\ \end{aligned}$$
(5.31)

where \(C=C(\sigma , \sigma _0, b, d)\). Applying (5.31) with \(\sigma =\sigma _0\) we deduce that there exists \(c_0=c_0(\sigma _0, b, d)>0\) small enough such that if \(\Vert \eta \Vert _{\mathcal {H}^{\sigma _0}}<c_0\), then \(\Vert \nabla _{x,z}v\Vert _{U^{\sigma _0-1}}\leqq C(\sigma _0, b, d)\Vert \nabla _xf\Vert _{H^{\sigma _0-1}}\). Inserting this into (5.31) yields

$$\begin{aligned}{} & {} \Vert \nabla _{x,z}v\Vert _{U^{\sigma -1}}\leqq C\Vert \nabla _xf\Vert _{H^{\sigma -1}}+ C\Vert \eta \Vert _{\mathcal {H}^{\sigma _0}}\Vert \nabla _{x,z}v\Vert _{U^{\sigma -1}}\nonumber \\{} & {} +C\Vert \eta \Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}},\quad C=C(\sigma , \sigma _0, b, d). \end{aligned}$$
(5.32)

Therefore, for some \(c_1=c_1(\sigma , \sigma _0, b, d)\leqq c_0\) small enough, we have

$$\begin{aligned} \Vert \nabla _{x,z}v\Vert _{U^{\sigma -1}}\leqq C\Vert \nabla _xf\Vert _{H^{\sigma -1}}+C\Vert \eta \Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}} \end{aligned}$$
(5.33)

provided that \(\Vert \eta \Vert _{\mathcal {H}^{\sigma _0}}<c_1\). This concludes the proof of (5.22).

We turn to prove (5.24). Using the formula (5.20) and the estimate (A.44) we obtain

$$\begin{aligned} \Vert \ell ^{-\frac{1}{2}}(D)R(\eta )f\Vert _{H^{\sigma -\frac{1}{2}}}{} & {} \lesssim \Vert \ell ^{-\frac{1}{2}}(D){{\,\textrm{div}\,}}_x Q_b[v]\Vert _{\widetilde{L}^1(I; H^{\sigma -\frac{1}{2}})}\nonumber \\{} & {} \quad + \Vert \ell ^{-\frac{1}{2}}(D)|D|\tanh (b|D|)Q_a[v]\Vert _{\widetilde{L}^1(I; H^{\sigma -\frac{1}{2}})} \end{aligned}$$
(5.34)

Noticing that

$$\begin{aligned} \ell ^{-\frac{1}{2}}(\xi )|\xi |\asymp {\left\{ \begin{array}{ll} 1 &{} \text {if } |\xi |\asymp 0,\\ |\xi |^\frac{1}{2}&{}\text {if } |\xi |\asymp \infty \end{array}\right. }, \end{aligned}$$
(5.35)

we deduce

$$\begin{aligned} \begin{aligned} \Vert \ell ^{-\frac{1}{2}}(D)R(\eta )f\Vert _{H^{\sigma -\frac{1}{2}}}&\lesssim \Vert Q_b[v]\Vert _{\widetilde{L}^1(I; H^{\sigma })}+ \Vert \tanh (b|D|)Q_a[v]\Vert _{\widetilde{L}^1(I; H^{\sigma })}\\&\lesssim \Vert Q_b[v]\Vert _{\widetilde{L}^1(I; H^{\sigma })}+ \Vert Q_a[v]\Vert _{\widetilde{L}^1(I; H^{\sigma })}. \end{aligned} \nonumber \\ \end{aligned}$$
(5.36)

Therefore, (5.24) follows from (5.36), (5.30) and (5.33). Finally, (5.25) can be proved analogously. \(\square \)

Next we establish the contraction estimate for \(R(\eta )\).

Proposition 5.2

Let \(\sigma \geqq \sigma _0>1+\frac{d}{2}\) and consider \(f\in H^\sigma \) and \(\eta _j\in H^\sigma \), \(j=1, 2\). Set \(\eta _\delta =\eta _1-\eta _2\). There exists a positive constant \(c_2=c_2(\sigma , \sigma _0, b, d)\leqq c_1\) such that if \(\Vert \eta _j\Vert _{\mathcal {H}^{\sigma _0}}<c_2\), \(j=1, 2\), then the following estimates hold:

(1) For any symbol \(\ell \) satisfying (5.23), we have

$$\begin{aligned} \begin{aligned}&\Vert \ell ^{-\frac{1}{2}}(D)\left\{ R(\eta _1)f-R(\eta _2)f\right\} \Vert _{H^{\sigma -\frac{1}{2}}(\mathbb {R}^d)}\\&\quad \leqq C\Vert \eta _\delta \Vert _{\mathcal {H}^{\sigma _0}(\mathbb {R}^d)}\Big (\Vert \nabla _xf\Vert _{H^{\sigma -1}(\mathbb {R}^d)}+\Vert \eta _1\Vert _{\mathcal {H}^{\sigma }(\mathbb {R}^d)}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}(\mathbb {R}^d)}\Big )\\&\qquad +C\Vert \eta _\delta \Vert _{\mathcal {H}^\sigma (\mathbb {R}^d)}\Vert \nabla _x f\Vert _{H^{\sigma _0-1}(\mathbb {R}^d)}, \end{aligned} \nonumber \\ \end{aligned}$$
(5.37)

where \(C=C(\ell , \sigma , \sigma _0, b, d)\).

(2) We have

$$\begin{aligned} \begin{aligned}&\Vert |D|^{-\frac{1}{2}}\left\{ R(\eta _1)f-R(\eta _2)f\right\} \Vert _{H^{\sigma -\frac{1}{2}}(\mathbb {T}^d)}\\&\quad \leqq C\Vert \eta _\delta \Vert _{H^{\sigma _0}(\mathbb {T}^d)}\Big (\Vert \nabla _xf\Vert _{H^{\sigma -1}(\mathbb {T}^d)}+\Vert \eta _1\Vert _{H^{\sigma }(\mathbb {T}^d)}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}(\mathbb {T}^d)}\Big )\\&\qquad +C\Vert \eta _\delta \Vert _{H^\sigma (\mathbb {T}^d)}\Vert \nabla _x f\Vert _{H^{\sigma _0-1}(\mathbb {T}^d)}, \end{aligned} \nonumber \\ \end{aligned}$$
(5.38)

where \(C=C(\sigma , \sigma _0, b, d)\).

Proof

We shall only prove (5.37) since the proof of (5.38) is similar. Consider \(\eta _j,~f: \mathbb {R}^d\rightarrow \mathbb {R}\) such that \(\Vert \eta _j\Vert _{\mathcal {H}^{\sigma _0}}<c_1\). We recall from (5.20) that

$$\begin{aligned} R(\eta _j)f(x)=\int _{-b}^0\frac{\cosh ((z'+b)|D|)}{\cosh (b|D|)}\left\{ {{\,\textrm{div}\,}}_x Q_b[v_j](z')-|D|\tanh (b|D|)Q_a[v_j](z')\right\} \textrm{d}z', \nonumber \\ \end{aligned}$$
(5.39)

where \(v_j\) is the straightened potential associated with \(\eta _j\) and \(\Vert \nabla _{x,z}v_j\Vert _{U^{\sigma -1}}\lesssim \Vert \nabla _xf\Vert _{H^{\sigma -1}}+\Vert \eta _j\Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}}\) by virtue of (5.22).

We shall adopt the notation \(g_\delta =g_1-g_2\). Arguing as in (5.36), we obtain

$$\begin{aligned} \Vert \ell ^{-\frac{1}{2}}(D)\left\{ R(\eta _1)f-R(\eta _2)f\right\} \Vert _{H^{\sigma -\frac{1}{2}}} \lesssim \Vert \big ((Q_b[v])_\delta , (Q_a[v])_\delta \big )\Vert _{\widetilde{L}^1(I; H^\sigma )}:=A_\sigma . \nonumber \\ \end{aligned}$$
(5.40)

Using the product estimate (A.57), the nonlinear estimate (A.68) and the bound

$$\begin{aligned} \Vert \big (e^{z|D|}\eta _1, e^{z|D|}\eta _2, e^{z|D|}\nabla \eta _1, e^{z|D|}\nabla \eta _2\big )\Vert _{L^\infty (I; \mathcal {H}^{\sigma _0-1})}\lesssim \Vert \eta _1\Vert _{\mathcal {H}^{\sigma _0}} +\Vert \eta _2\Vert _{\mathcal {H}^{\sigma _0}}\lesssim 1, \nonumber \\ \end{aligned}$$
(5.41)

one can prove that

$$\begin{aligned} A_\sigma{} & {} \lesssim \Vert \big ((e^{z|D|}\eta )_\delta , (e^{z|D|}\nabla \eta )_\delta \big )\Vert _{L^\infty (I; \mathcal {H}^{\sigma _0-1})}\Vert \nabla _{x,z}v_1\Vert _{\widetilde{L}^1(I; H^\sigma )} \nonumber \\{} & {} \quad +\Vert \big ((e^{z|D|}\eta )_\delta , (e^{z|D|}\nabla \eta )_\delta \big )\Vert _{\widetilde{L}^1(I; H^\sigma _\sharp )}\Vert \nabla _{x,z}v_1\Vert _{L^\infty (I; L^\infty )}\nonumber \\{} & {} \quad +\Vert \big (e^{z|D|}\eta _1, e^{z|D|}\nabla \eta _1\big )\Vert _{L^\infty (I; \mathcal {H}^{\sigma _0-1})}\Vert \nabla _{x,z}v_\delta \Vert _{\widetilde{L}^1(I; H^\sigma )}\nonumber \\{} & {} \quad + \Vert \big (e^{z|D|}\eta _1, e^{z|D|}\nabla \eta _1\big )\Vert _{\widetilde{L}^1(I; H^\sigma _\sharp )}\Vert \nabla _{x,z}v_\delta \Vert _{L^\infty (I; L^\infty )}. \end{aligned}$$
(5.42)

Combining this with the easy inequalities

$$\begin{aligned}{} & {} \Vert \big (e^{z|D|}g, e^{z|D|}\nabla g\big )\Vert _{\widetilde{L}^1(I; H_\sharp ^\sigma )}\nonumber \\{} & {} \quad \lesssim \Vert g\Vert _{H^\sigma _\sharp }\lesssim \Vert g\Vert _{\mathcal {H}^\sigma } \text { and } \Vert \big (e^{z|D|}g, e^{z|D|}\nabla g\big )\Vert _{L^\infty (I; \mathcal {H}^{\sigma _0-1})}\nonumber \\{} & {} \quad \lesssim \Vert g\Vert _{\mathcal {H}^{\sigma _0}}, \end{aligned}$$
(5.43)

we obtain

$$\begin{aligned} \begin{aligned} A_\sigma&\lesssim \Vert \eta _\delta \Vert _{\mathcal {H}^{\sigma _0}}\Vert \nabla _{x,z}v_1\Vert _{\widetilde{L}^1(I; H^\sigma )}+\Vert \eta _\delta \Vert _{\mathcal {H}^\sigma }\Vert \nabla _{x,z}v_1\Vert _{L^\infty (I; L^\infty )}\\&\quad +\Vert \eta _1\Vert _{\mathcal {H}^{\sigma _0}}\Vert \nabla _{x,z}v_\delta \Vert _{\widetilde{L}^1(I; H^\sigma )}+\Vert \eta _1\Vert _{\mathcal {H}^\sigma }\Vert \nabla _{x,z}v_\delta \Vert _{L^\infty (I; L^\infty )}. \end{aligned} \end{aligned}$$
(5.44)

Assuming that \(\Vert \eta _j\Vert _{\mathcal {H}^{\sigma _0}}<c_1\), we can invoke the estimate (5.22) to have

$$\begin{aligned} \begin{aligned}&\Vert \nabla _{x, z}v_j\Vert _{L^\infty (I; L^\infty )}\lesssim \Vert \nabla _{x, z}v_j\Vert _{\widetilde{L}^\infty (I; H^{\sigma _0})}\lesssim \Vert \nabla _x f\Vert _{H^{\sigma _0-1}},\\&\Vert \nabla _{x, z}v_j\Vert _{\widetilde{L}^1(I; H^\sigma )}\lesssim \Vert \nabla _xf\Vert _{H^{\sigma -1}}+\Vert \eta _j\Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}}. \end{aligned}\end{aligned}$$
(5.45)

Consequently,

$$\begin{aligned} \begin{aligned} A_\sigma&\lesssim \Vert \eta _\delta \Vert _{\mathcal {H}^{\sigma _0}}(\Vert \nabla _xf\Vert _{H^{\sigma -1}}+\Vert \eta _1\Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}})+\Vert \eta _\delta \Vert _{\mathcal {H}^\sigma } \Vert \nabla _x f\Vert _{H^{\sigma _0-1}}\\&\quad +\Vert \eta _1\Vert _{\mathcal {H}^{\sigma _0}}\Vert \nabla _{x,z}v_\delta \Vert _{U^{\sigma -1}}+\Vert \eta _1\Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _{x,z}v_\delta \Vert _{U^{\sigma _0-1}}. \end{aligned} \nonumber \\ \end{aligned}$$
(5.46)

On the other hand, the proof of (5.28) and (5.29) yields

$$\begin{aligned} \Vert \nabla _{x,z}v_\delta \Vert _{U^{\sigma -1}}\lesssim \Vert ((Q_a[v])_\delta , (Q_b[v])_\delta )\Vert _{U^{\sigma -1}}. \end{aligned}$$
(5.47)

Arguing as in the proof of (5.46) one can show that

$$\begin{aligned}{} & {} \Vert ((Q_a[v])_\delta , (Q_b[v])_\delta )\Vert _{U^{\sigma -1}} \lesssim \Vert \eta _\delta \Vert _{\mathcal {H}^{\sigma _0}}(\Vert \nabla _xf\Vert _{H^{\sigma -1}}+\Vert \eta _1\Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}})\nonumber \\{} & {} \quad +\Vert \eta _\delta \Vert _{\mathcal {H}^\sigma } \Vert \nabla _x f\Vert _{H^{\sigma _0-1}}+\Vert \eta _1\Vert _{\mathcal {H}^{\sigma _0}}\Vert \nabla _{x,z}v_\delta \Vert _{U^{\sigma -1}} +\Vert \eta _1\Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _{x,z}v_\delta \Vert _{U^{\sigma _0-1}}. \nonumber \\ \end{aligned}$$
(5.48)

It follows from (5.47) and (5.48) that

$$\begin{aligned} \begin{aligned} \Vert \nabla _{x,z}v_\delta \Vert _{U^{\sigma -1}}&\lesssim \Vert \eta _\delta \Vert _{\mathcal {H}^{\sigma _0}}(\Vert \nabla _xf\Vert _{H^{\sigma -1}}+\Vert \eta _1\Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}})+\Vert \eta _\delta \Vert _{\mathcal {H}^\sigma } \Vert \nabla _x f\Vert _{H^{\sigma _0-1}}\\&\quad +\Vert \eta _1\Vert _{\mathcal {H}^{\sigma _0}}\Vert \nabla _{x,z}v_\delta \Vert _{U^{\sigma -1}}+\Vert \eta _1\Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _{x,z}v_\delta \Vert _{U^{\sigma _0-1}}. \end{aligned} \nonumber \\ \end{aligned}$$
(5.49)

With \(\sigma =\sigma _0\), (5.49) implies that if \(\Vert \eta _j\Vert _{\mathcal {H}^{\sigma _0}}<\widetilde{c_1}\leqq c_1\) then

$$\begin{aligned} \Vert \nabla _{x,z}v_\delta \Vert _{U^{\sigma _0-1}}\lesssim \Vert \eta _\delta \Vert _{\mathcal {H}^{\sigma _0}} \Vert \nabla _x f\Vert _{H^{\sigma _0-1}}. \end{aligned}$$
(5.50)

We then insert the preceding estimate into (5.49) to obtain that if \(\Vert \eta _1\Vert _{\mathcal {H}^{\sigma _0}}<c_2\leqq \widetilde{c_1}\) then

$$\begin{aligned} \Vert \nabla _{x,z}v_\delta \Vert _{U^{\sigma -1}}\lesssim & {} \Vert \eta _\delta \Vert _{\mathcal {H}^{\sigma _0}}(\Vert \nabla _xf\Vert _{H^{\sigma -1}}+\Vert \eta _1\Vert _{\mathcal {H}^{\sigma }}\Vert \nabla _xf\Vert _{H^{\sigma _0-1}})\nonumber \\{} & {} +\Vert \eta _\delta \Vert _{\mathcal {H}^\sigma } \Vert \nabla _x f\Vert _{H^{\sigma _0-1}}. \end{aligned}$$
(5.51)

Finally, inserting (5.51) into (5.46) we arrive at (5.37). \(\square \)

6 Stability of Traveling Wave Solutions

In this section, we consider the Muskat problem without external bulk force (\(\mathfrak {f}=0\)). In order to simplify the presentation, we shall assume that the external pressure \(\varphi \) is independent of the vertical variable, i.e. \(\varphi (x, y)=\varphi _0(x)\), where we adopt the notation \((x, y)\) for the horizontal and vertical components of a point in the fluid domain. More precisely, we study equation (2.17) for the free boundary \(\eta \):

$$\begin{aligned} \partial _t\eta =\gamma \partial _1\eta -G(\eta )(\eta +{\varphi _0}),\quad (x, t)\in \mathbb {R}^d\times \mathbb {R}_+,\quad d=n-1\geqq 1. \end{aligned}$$
(6.1)

The proofs of all the results in this section extend to the more general case \(\varphi (x, y)=\varphi _0(x)+\varphi _1(x, y)\) under the extra regularity assumptions on \(\varphi _1\) stated in Theorem 1.1.

6.1 Existence of Traveling Wave Solutions

The existence and uniqueness of steady solutions to (6.1) have been obtained in Theorem 1.1 by means of the implicit function theorem. In this subsection, we shall apply the results in Sect. 5 for the Dirichlet–Neumann operator to provide an alternative proof in this special case of the data, which serves to motivate and inform the strategy we will employ in studying the time-dependent problem (6.1).

Solutions to the steady equation

$$\begin{aligned} \gamma \partial _1\eta -G(\eta )(\eta +{\varphi _0})=0 \end{aligned}$$
(6.2)

shall be constructed by a fixed point argument. To this end, we first use the expansion \(G(\eta )=m(D)+R(\eta )\) to equivalently rewrite (6.2) as

$$\begin{aligned} (\gamma \partial _1-m(D))\eta =R(\eta )(\eta +{\varphi _0})+m(D){\varphi _0}. \end{aligned}$$
(6.3)

We note that the symbol \(\gamma i\xi _1-m(\xi )\) vanishes only at \(\xi =0\), and it follows from the definition (5.20) of \(R(\eta )\) that the right-hand side of (6.3) vanishes at zero frequency. Therefore, we may seek solutions that vanish at zero frequency by solving the fixed point problem

$$\begin{aligned} \eta =\mathcal {T}_{\varphi _0} (\eta ):=(\gamma \partial _1-m(D))^{-1}\left\{ R(\eta )(\eta +{\varphi _0})+m(D){\varphi _0}\right\} , \end{aligned}$$
(6.4)

where we adopt the convention that \(\widehat{\mathcal {T}_{\varphi _0}(\eta )}(0)=0\).
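It may help to note that the first Picard iterate of (6.4) is purely linear: since \(R(0)=0\), we have \(\mathcal {T}_{\varphi _0}(0)=(\gamma \partial _1-m(D))^{-1}m(D){\varphi _0}\), which on the Fourier side is division by the symbol \(i\gamma \xi _1-m(\xi )\), non-vanishing away from \(\xi =0\). The Python sketch below computes this iterate on a periodic grid with \(d=1\); the values of \(\gamma \), \(b\), the grid, and the applied pressure \(\varphi _0\) are illustrative choices of ours, and the convention \(\widehat{\mathcal {T}_{\varphi _0}(\eta )}(0)=0\) is enforced by skipping the zero mode.

```python
import numpy as np

# Sketch: the first Picard iterate of the fixed-point map (6.4),
#   eta_1 = T_{phi_0}(0) = (gamma d_1 - m(D))^{-1} m(D) phi_0,
# computed on a periodic grid with d = 1 by dividing Fourier coefficients
# by the symbol i gamma xi - m(xi); the zero mode is skipped, enforcing the
# convention hat(eta)(0) = 0.  gamma, b, the grid and phi_0 are ad hoc.
gamma, b = 1.0, 1.0
Nx = 512
x = np.linspace(0.0, 2.0 * np.pi, Nx, endpoint=False)
phi0 = 0.1 * np.exp(np.cos(x) - 1.0)                     # a smooth applied pressure

k = 2.0 * np.pi * np.fft.rfftfreq(Nx, d=2.0 * np.pi / Nx)
m = k * np.tanh(b * k)
symbol = 1j * gamma * k - m                              # symbol of gamma d_1 - m(D)

rhs_hat = m * np.fft.rfft(phi0)                          # Fourier side of m(D) phi_0
eta1_hat = np.zeros_like(rhs_hat)
eta1_hat[1:] = rhs_hat[1:] / symbol[1:]                  # skip xi = 0
eta1 = np.fft.irfft(eta1_hat, n=Nx)

print("mean of eta_1 (should vanish):", eta1.mean())
print("sup-norm of eta_1            :", np.max(np.abs(eta1)))
```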

Theorem 6.1

Let \(d\geqq 1\), \(s>1+\frac{d}{2}\) and \(\gamma \in \mathbb {R}{\setminus }\{0\}\). There exist small positive constants \(r_0\) and \(r_1\), both depending only on \((\gamma , s, b, d)\), such that for \(\Vert \nabla {\varphi _0}\Vert _{H^{s-1}(\mathbb {R}^d)}<r_1\), \(\mathcal {T}_{\varphi _0}\) is a contraction mapping on \(B_{\mathcal {H}^s(\mathbb {R}^d)}(0, r_0)\). Moreover, the mapping that maps \({\varphi _0}\) to the unique fixed point of \(\mathcal {T}_{\varphi _0}\) in \(B_{\mathcal {H}^s(\mathbb {R}^d)}(0, r_0)\) is Lipschitz continuous.

Proof

In the finite depth case, \(m(D)=|D|\tanh (b|D|)\) satisfies

$$\begin{aligned} m(\xi )\asymp {\left\{ \begin{array}{ll} |\xi |^2 &{} \text {for } |\xi |\asymp 0,\\ |\xi | &{} \text {for } |\xi |\asymp \infty . \end{array}\right. } \end{aligned}$$
(6.5)
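
For the reader's convenience, (6.5) follows from the elementary behavior of the hyperbolic tangent,

$$\begin{aligned} \tanh (b|\xi |)=b|\xi |+O(|\xi |^3)\ \text { as } |\xi |\rightarrow 0 \quad \text {and}\quad \tanh (b|\xi |)\rightarrow 1\ \text { as } |\xi |\rightarrow \infty , \end{aligned}$$

so that \(m(\xi )\) behaves like \(b|\xi |^2\) near the origin and like \(|\xi |\) at infinity.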

Consequently, for low frequencies \(|\xi |<1\), we have

$$\begin{aligned} \begin{aligned}&\int _{|\xi |<1}\omega (\xi )\left| \mathscr {F}\left\{ (\gamma \partial _1-m(D))^{-1}g\right\} (\xi )\right| ^2 \textrm{d}\xi \\&\quad =\int _{|\xi |<1}\frac{\xi _1^2+|\xi |^4}{|\xi |^2}\frac{1}{|\gamma |^2\xi _1^2+m^2(\xi )}|\widehat{g}(\xi )|^2 \textrm{d}\xi \\&\quad \leqq C(\gamma , b)\int _{|\xi |<1}\frac{1}{|\xi |^2}|\widehat{g}(\xi )|^2 \textrm{d}\xi . \end{aligned} \end{aligned}$$
(6.6)

Here \(\mathscr {F}\) denotes the Fourier transform. On the other hand, for high frequencies \(|\xi |\geqq 1\), we have

$$\begin{aligned} \begin{aligned}&\int _{|\xi |\geqq 1}\langle \xi \rangle ^{2s}\left| \mathscr {F}\left\{ (\gamma \partial _1-m(D))^{-1}g\right\} (\xi )\right| ^2 \textrm{d}\xi \\&\quad \leqq C(b) \int _{|\xi |\geqq 1}\langle \xi \rangle ^{2s}\frac{1}{|\xi |^2}|\widehat{g}(\xi )|^2 \textrm{d}\xi \\&\quad \leqq C(b) \int _{|\xi |\geqq 1}\langle \xi \rangle ^{2(s-\frac{1}{2})}\frac{1}{|\xi |}|\widehat{g}(\xi )|^2 \textrm{d}\xi . \end{aligned} \end{aligned}$$
(6.7)

We note that the condition \(\gamma \ne 0\) was used in the low-frequency estimate (6.6) but not in the high-frequency estimate (6.7).

It follows from (6.5), (6.6) and (6.7) that

$$\begin{aligned} \Vert (\gamma \partial _1-m(D))^{-1}g\Vert _{\mathcal {H}^s}\leqq C(\gamma , b) \Vert m^{-\frac{1}{2}}(D)g\Vert _{H^{s-\frac{1}{2}}}. \end{aligned}$$
(6.8)

From (6.8) and the definition of \(\mathcal {T}_{\varphi _0}(\eta )\), we deduce

$$\begin{aligned} \begin{aligned} \Vert \mathcal {T}_{\varphi _0}(\eta )\Vert _{\mathcal {H}^s}&\leqq C(\gamma , b) \Vert m^{-\frac{1}{2}}(D)\left\{ R(\eta )(\eta +{\varphi _0})+m(D){\varphi _0}\right\} \Vert _{H^{s-\frac{1}{2}}}\\&\leqq C(\gamma , b)\left\{ \Vert m^{-\frac{1}{2}}(D)R(\eta )(\eta +{\varphi _0})\Vert _{H^{s-\frac{1}{2}}}+\Vert m^\frac{1}{2}(D){\varphi _0}\Vert _{H^{s-\frac{1}{2}}}\right\} \\&\leqq C(\gamma , b)\left\{ \Vert m^{-\frac{1}{2}}(D)R(\eta )(\eta +{\varphi _0})\Vert _{H^{s-\frac{1}{2}}}+\Vert \nabla {\varphi _0}\Vert _{H^{s-1}}\right\} , \end{aligned} \nonumber \\ \end{aligned}$$
(6.9)

where we have used the fact that the norms \(\Vert m^\frac{1}{2}(D){\varphi _0}\Vert _{H^{s-\frac{1}{2}}}\) and \(\Vert \nabla {\varphi _0}\Vert _{H^{s-1}}\) are equivalent.
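
For completeness, this equivalence is again a consequence of (6.5): the ratio of the Fourier weights of the two norms satisfies

$$\begin{aligned} \frac{m(\xi )\langle \xi \rangle ^{2s-1}}{|\xi |^2\langle \xi \rangle ^{2s-2}}=\frac{\tanh (b|\xi |)\langle \xi \rangle }{|\xi |}\asymp {\left\{ \begin{array}{ll} b &{} \text {for } |\xi |\asymp 0,\\ 1 &{} \text {for } |\xi |\asymp \infty , \end{array}\right. } \end{aligned}$$

and is bounded above and below on all of \(\mathbb {R}^d{\setminus }\{0\}\).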

Suppose that \(\Vert \eta \Vert _{\mathcal {H}^s}<c_1\), where \(c_1\) is given in Proposition 5.1. Then the estimate (5.24) with \(\sigma =\sigma _0=s\) yields

$$\begin{aligned} \Vert m^{-\frac{1}{2}}(D) R(\eta )(\eta +{\varphi _0})\Vert _{H^{s-\frac{1}{2}}}\leqq & {} C(s, b, d)\Vert \eta \Vert _{\mathcal {H}^s}\Vert \nabla (\eta +{\varphi _0})\Vert _{H^{s-1}}\nonumber \\\leqq & {} C(s, b, d) \Vert \eta \Vert _{\mathcal {H}^s}\left( \Vert \eta \Vert _{\mathcal {H}^s}+\Vert \nabla {\varphi _0} \Vert _{H^{s-1}}\right) ,\nonumber \\ \end{aligned}$$
(6.10)

where we have used the inequality \(\Vert \nabla \eta \Vert _{H^{s-1}}\leqq C\Vert \eta \Vert _{\mathcal {H}^s}\).

It follows from (6.9) and (6.10) that for \(\Vert \eta \Vert _{\mathcal {H}^s}<c_1\) we have

$$\begin{aligned} \Vert \mathcal {T}_{\varphi _0}(\eta )\Vert _{\mathcal {H}^s}\leqq C_1\left\{ \Vert \eta \Vert _{\mathcal {H}^s}\left( \Vert \eta \Vert _{\mathcal {H}^s}+\Vert \nabla {\varphi _0} \Vert _{H^{s-1}}\right) +\Vert \nabla {\varphi _0}\Vert _{H^{s-1}}\right\} , \end{aligned}$$
(6.11)

where \(C_1=C_1(\gamma , s, b, d)\). If \(\Vert \nabla {\varphi _0}\Vert _{H^{s-1}}<\frac{1}{2C_1(2C_1+1)}\), then \(\mathcal {T}_{\varphi _0}\) maps the ball \(B_{\mathcal {H}^s}(0, r_0)\subset \mathcal {H}^s\) to itself, where \(r_0=\min \{c_1, \frac{1}{2C_1+1}\}\). By virtue of Proposition 5.2, one can reduce the size of \(\Vert \nabla {\varphi _0}\Vert _{H^{s-1}}\) and the radius \(r_0\) so that \(\mathcal {T}_{\varphi _0}\) is a contraction on \(B_{\mathcal {H}^s}(0, r_0)\). By the Banach contraction mapping principle, \(\mathcal {T}_{\varphi _0}\) has a unique fixed point \(\eta \) in \( B_{\mathcal {H}^s}(0, r_0)\). The Lipschitz continuous dependence of \(\eta \) on \({\varphi _0}\) again follows from Proposition 5.2. \(\square \)

We now define the space of Sobolev functions with average zero on the torus.

Definition 6.2

For \(0 \leqq s \in \mathbb {R}\) we define the space \(\mathring{H}^s(\mathbb {T}^d)=\big \{f\in H^s(\mathbb {T}^d) \;\vert \;\int _{\mathbb {T}^d} f=0\big \}\).

We now record a variant of Theorem 6.1 for the torus case.

Theorem 6.3

Let \(d\geqq 1\), \(s>1+\frac{d}{2}\) and \(\gamma \in \mathbb {R}\). There exist small positive constants \(r_0\) and \(r_1\), both depending only on \((s, b, d)\), such that for \(\Vert \nabla {\varphi _0}\Vert _{H^{s-1}(\mathbb {T}^d)}<r_1\), \(\mathcal {T}_{\varphi _0}\) is a contraction mapping on \(B_{\mathring{H}^s(\mathbb {T}^d)}(0, r_0)\). Moreover, the map sending \({\varphi _0}\) to the unique fixed point of \(\mathcal {T}_{\varphi _0}\) in \(B_{\mathring{H}^s(\mathbb {T}^d)}(0, r_0)\) is Lipschitz continuous.

Proof

The proof follows from straightforward modifications of the proof of Theorem 6.1, with the following caveat. Since the zero mode of \(\mathcal {T}_{\varphi _0}\) vanishes in the periodic setting, there is no need for a low-frequency estimate such as (6.6); only the high-frequency estimate (6.7) is needed. Consequently, the condition \(\gamma \ne 0\) is not needed, and the constants do not depend on \(\gamma \). \(\square \)

6.2 Stability of Periodic Traveling Wave Solutions

In this subsection, we prove that small periodic traveling wave solutions obtained in Theorem 6.3 are nonlinearly asymptotically stable. The remainder of this section is devoted to the proof of Theorem 1.2.

Suppose that, for fixed \((\gamma , {\varphi _0})\), \(\eta _*\) is a steady solution of (2.17). We perturb \(\eta _*\) by \(f_0\) and set \(f(x, t)=\eta (x, t)-\eta _*(x)\), where \(\eta \) is the solution of (2.17) with initial data \(\eta _0=\eta _*+f_0\). We have that

$$\begin{aligned} \partial _t f=\gamma \partial _1 f-\big \{G(\eta _*+f)(\eta _*+f+{\varphi _0})-G(\eta _*)(\eta _*+{\varphi _0})\big \}. \end{aligned}$$
(6.12)

Using the expansion \(G(\eta )g=m(D)g+R(\eta )g\) we rewrite (6.12) as

$$\begin{aligned} \begin{aligned} \partial _t f&=\gamma \partial _1 f-G(\eta _*+f)f-\big \{G(\eta _*+f)(\eta _*+{\varphi _0})-G(\eta _*)(\eta _*+{\varphi _0})\big \}\\&=\gamma \partial _1 f-m(D)f-R(\eta _*+f)f\\&\quad -\left\{ m(D)(\eta _*+{\varphi _0})+R(\eta _*+f)(\eta _*+{\varphi _0})-m(D)(\eta _*+{\varphi _0})-R(\eta _*)(\eta _*+{\varphi _0})\right\} \\&=\gamma \partial _1 f-m(D)f+\left[ R(\eta _*)(\eta _*+{\varphi _0})-R(\eta _*+f)(\eta _*+{\varphi _0})\right] -R(\eta _*+f)f. \end{aligned} \nonumber \\ \end{aligned}$$
(6.13)

The solution of (6.13) with initial data \(f_0\) will be sought, via Duhamel's formula, as the fixed point of the mapping

$$\begin{aligned} \mathcal {N}(f):=e^{(\gamma \partial _1 -m(D))t}f_0+\mathcal {L}(f)(t)-\mathcal {B}(f, f)(t), \end{aligned}$$
(6.14)

where

$$\begin{aligned}&\mathcal {L}(f)(t)=\int _0^t e^{(\gamma \partial _1 -m(D))(t-\tau )} \left[ R(\eta _*)(\eta _*+{\varphi _0})-R(\eta _*+f)(\eta _*+{\varphi _0})\right] (\tau )\textrm{d}\tau , \end{aligned}$$
(6.15)
$$\begin{aligned}&\mathcal {B}(g, f)(t)=\int _0^t e^{(\gamma \partial _1 -m(D))(t-\tau )}[R(\eta _*+g)f](\tau ) \textrm{d}\tau . \end{aligned}$$
(6.16)

To that end, we shall appeal to the following fixed point lemma:

Lemma 6.4

Let \((E, \Vert \cdot \Vert )\) be a Banach space and let \(\nu >0\). Denote by \(B_\nu \) the open ball of radius \(\nu \) centered at 0 in E. Assume that \(\mathcal {L}:B_\nu \rightarrow E\) and \(\mathcal {B}:B_\nu \times E\rightarrow E\) satisfy the following conditions.

  • For all \(x\in B_\nu \), \(\mathcal {B}(x, \cdot )\) is linear.

  • There exists a constant \(C_\mathcal {L}\in (0, 1)\) such that \(\Vert \mathcal {L}(x)\Vert \leqq C_\mathcal {L}\Vert x\Vert \) for all \(x\in B_\nu \).

  • There exists an increasing function \(\mathcal {G}_\mathcal {B}\) such that \(\Vert \mathcal {B}(x, y)\Vert \leqq \mathcal {G}_\mathcal {B}(\Vert x\Vert )\Vert y\Vert \) for all \(x\in B_\nu \) and \(y\in E\). There exists \(r_*>0\) such that

    $$\begin{aligned} C_\mathcal {L}+\mathcal {G}_\mathcal {B}(r_*)<\frac{1}{2}. \end{aligned}$$
    (6.17)
  • There exists an increasing function \(\mathcal {F}_\mathcal {L}:\mathbb {R}_+\rightarrow \mathbb {R}_+\) such that

    $$\begin{aligned} \forall x_1, x_2\in B_\nu ,~\Vert \mathcal {L}(x_1)-\mathcal {L}(x_2)\Vert \leqq \Vert x_1-x_2\Vert \mathcal {F}_\mathcal {L}(\Vert x_1\Vert +\Vert x_2\Vert ). \end{aligned}$$
    (6.18)
  • There exists an increasing function \(\mathcal {F}_\mathcal {B}:\mathbb {R}_+\rightarrow \mathbb {R}_+\) such that

    $$\begin{aligned}{} & {} \forall x_1, x_2\in B_\nu ,~\forall y\in E,~\Vert \mathcal {B}(x_1, y)-\mathcal {B}(x_2, y)\Vert \nonumber \\{} & {} \quad \leqq \Vert x_1-x_2\Vert \Vert y\Vert \mathcal {F}_\mathcal {B}(\Vert x_1\Vert +\Vert x_2\Vert ). \end{aligned}$$
    (6.19)

Assume, moreover, that

$$\begin{aligned} \mathcal {F}_\mathcal {L}(2\nu )+\mathcal {G}_\mathcal {B}(\nu )+\nu \mathcal {F}_\mathcal {B}(2\nu )<\frac{1}{2}. \end{aligned}$$
(6.20)

Then, for all \(x_0\in E\) with \(\Vert x_0\Vert <\frac{1}{2}\min \{\nu , r_*\}\), the mapping \(B_\nu \ni x\mapsto \mathcal {N}(x):=x_0+\mathcal {L}(x)+\mathcal {B}(x, x) \in B_\nu \) has a unique fixed point \(x_*\) in \(B_\nu \) with \(\Vert x_*\Vert \leqq 2\Vert x_0\Vert \).

Proof

Let \(x_0\in E\), \(\Vert x_0\Vert <\frac{1}{2}\min \{\nu , r_*\}\). The fixed point of \(\mathcal {N}\) will be obtained by the Picard iteration \(x_{n+1}=\mathcal {N}(x_n)\), \(n\geqq 0\). It can be shown by induction, with the aid of (6.17), that \(\Vert x_n\Vert <2\Vert x_0\Vert \) for all \(n\geqq 0\), hence \((x_n)\subset B_\nu \). Using the assumptions on \(\mathcal {L}\) and \(\mathcal {B}\), we obtain

$$\begin{aligned} \Vert \mathcal {N}(x)-\mathcal {N}(y)\Vert{} & {} \leqq \Vert x-y\Vert \left\{ \mathcal {F}_\mathcal {L}(\Vert x\Vert +\Vert y\Vert )+\mathcal {G}_\mathcal {B}(\Vert x\Vert )+\Vert y\Vert \mathcal {F}_\mathcal {B}(\Vert x\Vert +\Vert y\Vert )\right\} \nonumber \\{} & {} \quad \forall x, y\in B_\nu . \end{aligned}$$
(6.21)

Combining (6.21) with (6.20) yields

$$\begin{aligned} \Vert x_{n+1}-x_n\Vert \leqq \Vert x_n-x_{n-1}\Vert \left\{ \mathcal {F}_\mathcal {L}(2\nu )+\mathcal {G}_\mathcal {B}(\nu )+\nu \mathcal {F}_\mathcal {B}(2\nu )\right\} \leqq \frac{1}{2}\Vert x_n-x_{n-1}\Vert . \nonumber \\ \end{aligned}$$
(6.22)

It follows that \((x_n)\) is a Cauchy sequence, hence \(x_n\rightarrow x_*\in E\). In particular, we have \(\Vert x_*\Vert \leqq 2\Vert x_0\Vert <\nu \), and thus (6.21) implies that \(\mathcal {N}(x_n)\rightarrow \mathcal {N}(x_*)\). Passing to the limit in the scheme \(x_{n+1}=\mathcal {N}(x_n)\) yields \(x_*=\mathcal {N}(x_*)\). The uniqueness of \(x_*\) in \(B_\nu \) again follows from (6.21). \(\square \)
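
As an aside, the scheme in Lemma 6.4 is easy to visualize in finite dimensions. The following minimal Python sketch runs the Picard iteration for toy (hypothetical) maps \(\mathcal {L}\) and \(\mathcal {B}\) chosen to satisfy (6.17) and (6.20); it is illustrative only and is not used in the proofs.

```python
import numpy as np

def picard(x0, L, B, max_iter=200, tol=1e-14):
    # Picard scheme x_{n+1} = x0 + L(x_n) + B(x_n, x_n) from Lemma 6.4
    x = x0.copy()
    for _ in range(max_iter):
        x_next = x0 + L(x) + B(x, x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy data (hypothetical): with nu = r_* = 1 these choices satisfy (6.17) and (6.20):
# C_L = 0.2, G_B(r) = 0.05 r, F_L = 0.2, F_B = 0.05, so C_L + G_B(r_*) = 0.25 < 1/2
# and F_L(2 nu) + G_B(nu) + nu F_B(2 nu) = 0.3 < 1/2.
L = lambda x: 0.2 * x
B = lambda x, y: 0.05 * x * y      # componentwise product, linear in y
x0 = np.array([0.1, -0.05, 0.2])   # ||x0|| < (1/2) min{nu, r_*}
x_star = picard(x0, L, B)
print(np.linalg.norm(x_star - (x0 + L(x_star) + B(x_star, x_star))))   # ~ 0
```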

We now have all the tools needed to prove Theorem 1.2.

Proof of Theorem 1.2

We consider \(\eta _*\) and \({\varphi _0}\) in \(H^s(\mathbb {T}^d)\) with \(s>1+\frac{d}{2}\), \(\Vert {\varphi _0}\Vert _{H^s}<\frac{c_2}{3}\), and \(\Vert \eta _*\Vert _{H^s}<\frac{c_2}{3}\), where \(c_2\) is the constant given in Proposition 5.2.

We note that if \(\int _{\mathbb {T}^d}u=0\) then \(\Delta _0 u=0\) (see (A.33)). Consequently, for \(1\leqq q_2\leqq q_1\leqq \infty \), we have

$$\begin{aligned}{} & {} \Vert e^{(\gamma \partial _1 -m(D))t}u\Vert _{\widetilde{L}^{q_1}\left( [\alpha , \beta ]; H^{\mu +\frac{1}{q_1}}(\mathbb {T}^d)\right) }\leqq C(b, d)\Vert u\Vert _{H^\mu }, \end{aligned}$$
(6.23)
$$\begin{aligned}{} & {} \left\| \int _\alpha ^t e^{(\gamma \partial _1 -m(D))(t-\tau )}g(\tau )\,\textrm{d}\tau \right\| _{\widetilde{L}^{q_1}\left( [\alpha , \beta ]; H^{\mu +\frac{1}{q_1}}(\mathbb {T}^d)\right) }\leqq C(b, d)\Vert g\Vert _{\widetilde{L}^{q_2}\left( [\alpha , \beta ]; H^{\mu -1+\frac{1}{q_2}}(\mathbb {T}^d)\right) }, \nonumber \\ \end{aligned}$$
(6.24)

provided that u and \(g(\cdot , t)\) have zero mean. These estimates can be proved as in Proposition A.17 with the aid of the dyadic estimate

$$\begin{aligned} \Vert \Delta _j e^{(\gamma \partial _1 -m(D))t}u\Vert _{L^2_x}\leqq C(d)e^{-c(b, d)2^jt}\Vert \Delta _j u\Vert _{L^2_x},\quad j\geqq 1. \end{aligned}$$
(6.25)
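
We also note that (6.25) simply reflects the fact that the transport part \(\gamma \partial _1\) is skew-adjoint and does not affect the modulus of the Fourier multiplier, while the dissipative part is bounded below at high frequencies: since (6.5) gives \(m(\xi )\geqq c(b)|\xi |\) for \(|\xi |\gtrsim 1\), we have

$$\begin{aligned} \left| e^{(\gamma i\xi _1-m(\xi ))t}\right| =e^{-m(\xi )t}\leqq e^{-c(b)|\xi |t}\leqq e^{-c(b, d)2^jt}\quad \text {for } |\xi |\asymp 2^j,\ j\geqq 1, \end{aligned}$$

and (6.25) then follows from Plancherel's theorem.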

Set \(Y^\mu ([\alpha , \beta ])=\widetilde{L}^\infty ([\alpha , \beta ]; H^{\mu }(\mathbb {T}^d))\cap \widetilde{L}^2([\alpha , \beta ]; H^{\mu +\frac{1}{2}}(\mathbb {T}^d))\) and

$$\begin{aligned} \Vert \cdot \Vert _{Y^\mu ([\alpha , \beta ])}=\Vert \cdot \Vert _{\widetilde{L}^\infty ([\alpha , \beta ]; H^{\mu }(\mathbb {T}^d))}+\Vert \cdot \Vert _{\widetilde{L}^2\left( [\alpha , \beta ]; H^{\mu +\frac{1}{2}}(\mathbb {T}^d)\right) }. \end{aligned}$$
(6.26)

Consider \(f_0\in \mathring{H}^s(\mathbb {T}^d)\) and let \(T>0\) be arbitrary. By Proposition 5.1 2), \(R(\eta )g\) always has zero mean, and hence so do \(\mathcal {L}(f)(t)\) and \(\mathcal {B}(g, f)(t)\). Therefore, using (6.23), (6.24) and the fact that

$$\begin{aligned} \Vert \cdot \Vert _{\widetilde{L}^2([\alpha , \beta ]; H^{\mu }(\mathbb {T}^d))}=\Vert \cdot \Vert _{L^2([\alpha , \beta ]; H^{\mu }(\mathbb {T}^d))}, \end{aligned}$$
(6.27)

we obtain

$$\begin{aligned}&\Vert e^{(\gamma \partial _1 -m(D))t}f_0\Vert _{Y^s([0, T])}\leqq C(b, d)\Vert f_0\Vert _{H^s}, \end{aligned}$$
(6.28)
$$\begin{aligned}&\Vert \mathcal {L}(f)\Vert _{Y^s([0, T])}\leqq C(b, d)\Vert R(\eta _*)(\eta _*+{\varphi _0})-R(\eta _*+f)(\eta _*+{\varphi _0})\Vert _{L^2\left( [0, T]; H^{s-\frac{1}{2}}\right) }, \end{aligned}$$
(6.29)
$$\begin{aligned}&\Vert \mathcal {B}(g, f)\Vert _{Y^s([0, T])}\leqq C(b, d)\Vert R(g)f\Vert _{L^2([0, T]; H^{s-\frac{1}{2}})}. \end{aligned}$$
(6.30)

We want to apply Lemma 6.4 with

$$\begin{aligned} E\equiv E_T:=\left\{ f\in Y^s([0, T]) \;\vert \;\int _{\mathbb {T}^d}f(x, t)dx=0~\text {a.e.~}t\in [0, T]\right\} . \end{aligned}$$
(6.31)

Let \(B_\nu \) be the open ball in \(E_T\) with center 0 and radius \(\nu <\frac{2c_2}{3}\).

Let \(f\in B_\nu \). We have \(\Vert f\Vert _{L^\infty ([0, T]; H^s)}\leqq \Vert f\Vert _{\widetilde{L}^\infty ([0, T]; H^s)}\), hence \(\Vert f(t)\Vert _{H^s}<\nu <\frac{2c_2}{3}\) a.e. \(t\in [0, T]\). Consequently, \(\Vert \eta _*+f(t)\Vert _{H^s}<c_2\) a.e. \(t\in [0, T]\), and thus we can apply the contraction estimate (5.37) with \(\eta _1=\eta _*\), \(\eta _2=\eta _*+f\), \(\sigma =s+\frac{1}{2}\) and \(\sigma _0=s\),

$$\begin{aligned}{} & {} \Vert R(\eta _*)(\eta _*+{\varphi _0})-R(\eta _*+f)(\eta _*+{\varphi _0})\Vert _{H^{s-\frac{1}{2}}} \nonumber \\{} & {} \quad \leqq C\Vert f\Vert _{\mathcal {H}^s}\left\{ \Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-\frac{1}{2}}} +\Vert \eta _*\Vert _{\mathcal {H}^{s+\frac{1}{2}}}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\right\} \nonumber \\{} & {} \qquad +C\Vert f\Vert _{\mathcal {H}^{s+\frac{1}{2}}} \Vert \nabla _x (\eta _*+{\varphi _0})\Vert _{H^{s-1}}\nonumber \\{} & {} \quad \leqq C\Vert f\Vert _{H^{s+\frac{1}{2}}}\left\{ \Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-\frac{1}{2}}}+\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\right\} \nonumber \\ \end{aligned}$$
(6.32)

a.e. \(t\in [0, T]\), where \(C=C(s, b, d)\) and we have used the embedding \(H^\mu \subset \mathcal {H}^\mu \) for \(\mu \geqq 0\). Combining (6.29) and (6.32) yields

$$\begin{aligned}{} & {} \Vert \mathcal {L}(f)\Vert _{Y^s([0, T])} \nonumber \\{} & {} \quad \leqq C\Vert f\Vert _{L^2([0, T]; H^{s+\frac{1}{2}})}\left\{ \Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-\frac{1}{2}}}+\Vert \eta _*\Vert _{\mathcal {H}^{s+\frac{1}{2}}}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\right\} ,\nonumber \\ \end{aligned}$$
(6.33)

where \(C=C(s, b, d)\).

Next, for \(g\in B_\nu \), we can apply the remainder estimate (5.24) with \(\sigma =s+\frac{1}{2}\) and \(\sigma _0=s\) to have

$$\begin{aligned} \begin{aligned} \Vert R(\eta _*+g)f\Vert _{H^{s-\frac{1}{2}}}&\leqq C\Vert \eta _*+g\Vert _{\mathcal {H}^s}\Vert \nabla _xf\Vert _{H^{s-\frac{1}{2}}} + C\Vert \eta _*+g\Vert _{\mathcal {H}^{s+\frac{1}{2}}}\Vert \nabla _xf\Vert _{H^{s-1}}\\&\leqq C\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _xf\Vert _{H^{s-\frac{1}{2}}}+ C\Vert g\Vert _{H^s}\Vert \nabla _xf\Vert _{H^{s-\frac{1}{2}}}\\&\quad +C \Vert g\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _xf\Vert _{H^{s-1}} \end{aligned} \nonumber \\ \end{aligned}$$
(6.34)

a.e. \(t\in [0, T]\), where \(C=C(s, b, d)\). It follows from (6.30) and (6.34) that

$$\begin{aligned} \Vert \mathcal {B}(g, f)\Vert _{Y^s([0, T])}{} & {} \leqq C\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _xf\Vert _{L^2\left( [0, T]; H^{s-\frac{1}{2}}\right) }\nonumber \\{} & {} \quad +C\Vert g\Vert _{L^\infty ([0, T]; H^s)}\Vert \nabla _xf\Vert _{L^2\left( [0, T]; H^{s-\frac{1}{2}}\right) }\nonumber \\{} & {} \quad + C\Vert g\Vert _{L^2\left( [0, T]; H^{s+\frac{1}{2}}\right) }\Vert \nabla _xf\Vert _{L^\infty ([0, T]; H^{s-1})}\nonumber \\{} & {} \leqq C\big (\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}+\Vert g\Vert _{Y^s([0, T])}\big )\Vert f\Vert _{Y^s([0, T])}, \end{aligned}$$
(6.35)

where \(C=C(s, b, d)\). By a completely analogous argument, we obtain that

$$\begin{aligned} \begin{aligned}&\Vert \mathcal {L}(f_1)-\mathcal {L}(f_2)\Vert _{Y^s([0, T])}\\&\quad \leqq C\Vert R(\eta _*+f_1)(\eta _*+{\varphi _0})-R(\eta _*+f_2)(\eta _*+{\varphi _0})\Vert _{L^2\left( [0, T]; H^{s-\frac{1}{2}}\right) }\\&\quad \leqq C\Vert f_1-f_2\Vert _{L^2([0, T]; H^s)}\left\{ \Vert \nabla _x(\eta _*+{\varphi _0}) \Vert _{H^{s-\frac{1}{2}}}+\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\right\} \\&\qquad +C\Vert f_1-f_2\Vert _{L^\infty ([0, T]; H^s)}\Vert f_1\Vert _{L^2\left( [0, T]; H^{s+\frac{1}{2}}\right) }\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\\&\qquad +C\Vert f_1-f_2\Vert _{L^2([0, T]; H^{s+\frac{1}{2}})}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\\&\quad \leqq C\Vert f_1-f_2\Vert _{Y^s([0, T])}\left\{ \Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-\frac{1}{2}}}+\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\right. \\&\qquad \left. +\Vert f_1\Vert _{Y^s([0, T])}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\right\} \end{aligned} \nonumber \\ \end{aligned}$$
(6.36)

for \(f_1\), \(f_2\in B_\nu \), and

$$\begin{aligned} \begin{aligned}&\Vert \mathcal {B}(g_1, f)-\mathcal {B}(g_2, f)\Vert _{Y^s([0, T])}\\&\quad \leqq C\Vert R(\eta _*+g_1)f-R(\eta _*+g_2)f\Vert _{L^2\left( [0, T]; H^{s-\frac{1}{2}}\right) }\\&\quad \leqq C\Vert g_1-g_2\Vert _{L^\infty ([0, T]; H^s)}\\&\qquad \times \left\{ \Vert \nabla _xf\Vert _{L^2\left( [0, T]; H^{s-\frac{1}{2}}\right) } +\Vert g_1\Vert _{L^2\left( [0, T]; H^{s+\frac{1}{2}}\right) }\Vert \nabla _xf\Vert _{L^\infty ([0, T]; H^{s-1})}\right\} \\&\qquad +C\Vert g_1-g_2\Vert _{L^2([0, T]; H^s)}\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _xf\Vert _{L^\infty ([0, T]; H^{s-1})}\\&\qquad +C\Vert g_1-g_2\Vert _{L^2([0, T]; H^{s+\frac{1}{2}})}\Vert \nabla _xf\Vert _{L^\infty ([0, T]; H^{s-1})}\\&\quad \leqq C\Vert g_1-g_2\Vert _{Y^s([0, T])}\Vert f\Vert _{Y^s([0, T])}\left( 1+\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}} +\Vert g_1\Vert _{Y^s([0, T])}\right) \end{aligned} \nonumber \\ \end{aligned}$$
(6.37)

for \(g_1\), \(g_2\in B_\nu \). In both (6.36) and (6.37), \(C=C(s, b, d)\).

In view of (6.33), (6.35), (6.36) and (6.37), we find that the conditions in Lemma 6.4 are satisfied with

$$\begin{aligned} \begin{aligned} C_\mathcal {L}&= C\left\{ \Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-\frac{1}{2}}}+\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\right\} ,\\ \mathcal {G}_\mathcal {B}(z)&= C\left( \Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}+z\right) ,\\ \mathcal {F}_\mathcal {L}(z)&= C\Big \{\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-\frac{1}{2}}}+\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\\&\quad +z\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\Big \},\\ \mathcal {F}_\mathcal {B}(z)&=C\left( 1+\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}+z\right) , \end{aligned} \end{aligned}$$
(6.38)

where \(C=C(s, b, d)\). According to Theorem 6.3, if \(\Vert \nabla {\varphi _0}\Vert _{H^{s-\frac{1}{2}}}\) is small enough then \(\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\leqq C(s, b, d)\Vert \nabla {\varphi _0}\Vert _{H^{s-\frac{1}{2}}}\). Therefore, for sufficiently small \(\Vert \nabla {\varphi _0}\Vert _{H^{s-\frac{1}{2}}}\), we have \(C_\mathcal {L}<\frac{1}{4}\), and the conditions (6.17) and (6.20) hold for sufficiently small \(r_*\) and \(\nu \). Hence, by virtue of Lemma 6.4 and with the aid of (6.28), if \(\Vert f_0\Vert _{H^s}<\frac{1}{2}\min \{\nu , r_*\}=:\delta \) then \(\mathcal {N}\) has a unique fixed point f in \(B_\nu \), and

$$\begin{aligned} \Vert f\Vert _{Y^s([0, T])}\leqq C(b, d)\Vert f_0\Vert _{H^s}\quad \forall T>0. \end{aligned}$$

Since the smallness conditions on \({\varphi _0}\), \(\nu \), and \(r_*\) are independent of T, f is a global solution.

Next, we prove that f decays exponentially in \(H^s\). Since \(f\in Y^s([0, T])\) for all \(T>0\), using (6.32) and (6.34) we deduce that \(\partial _t f\in L^2([0, T]; H^{s-\frac{1}{2}})\) for all \(T>0\). Then applying Theorem 3.1 in [10] yields \(f\in C([0, T]; H^s)\) for all \(T>0\). Appealing to (6.32) and (6.34) again, we deduce that

$$\begin{aligned} \begin{aligned}&\frac{1}{2}\frac{\textrm{d}}{\textrm{d}t}\Vert f(t)\Vert _{H^s}^2\\&\quad =\langle \partial _t f, f\rangle _{H^{s-\frac{1}{2}}, H^{s+\frac{1}{2}}}\\&\quad =\langle \gamma \partial _1f, f\rangle _{H^{s-\frac{1}{2}}, H^{s+\frac{1}{2}}}-\langle m(D)f, f\rangle _{H^{s-\frac{1}{2}}, H^{s+\frac{1}{2}}}\\&\qquad -\langle R(\eta _*+f)f, f\rangle _{H^{s-\frac{1}{2}}, H^{s+\frac{1}{2}}}+\langle R(\eta _*)(\eta _*+{\varphi _0})\\&\qquad -R(\eta _*+f)(\eta _*+{\varphi _0}), f\rangle _{H^{s-\frac{1}{2}}, H^{s+\frac{1}{2}}}\\&\quad \leqq -c_0\Vert f\Vert _{H^{s+\frac{1}{2}}}^2+C\Vert f\Vert ^2_{H^{s+\frac{1}{2}}}\Big \{\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-\frac{1}{2}}}\\&\qquad +\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}\Big \}\\&\qquad +C\left\{ \Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert f\Vert _{H^{s+\frac{1}{2}}}+\Vert f\Vert _{H^s}\Vert f\Vert _{H^{s+\frac{1}{2}}}\right\} \Vert f\Vert _{H^{s+\frac{1}{2}}}, \end{aligned} \end{aligned}$$
(6.39)

where we have used the fact that \(f(\cdot , t)\) has zero mean to obtain

$$\begin{aligned} \langle m(D)f, f\rangle _{H^{s-\frac{1}{2}}, H^{s+\frac{1}{2}}}\geqq c_0\Vert f\Vert _{H^{s+\frac{1}{2}}}^2,\quad c_0=c_0(s, b, d). \end{aligned}$$
(6.40)
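
Indeed, realizing the duality pairing through the \(H^s\) inner product and using that \(\widehat{f}(0, t)=0\), the bound (6.40) reduces to a lower bound for the symbol at nonzero integer frequencies:

$$\begin{aligned} m(\xi )=|\xi |\tanh (b|\xi |)\geqq \tanh (b)\,|\xi |\geqq \frac{\tanh (b)}{\sqrt{2}}\langle \xi \rangle \quad \text {for all } \xi \in \mathbb {Z}^d{\setminus }\{0\}, \end{aligned}$$

so that (6.40) holds with, for instance, \(c_0=\tanh (b)/\sqrt{2}\).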

It follows that

$$\begin{aligned} \frac{1}{2}\frac{\textrm{d}}{\textrm{d}t}\Vert f(t)\Vert _{H^s}^2\leqq -c_0\Vert f\Vert _{H^{s+\frac{1}{2}}}^2+C\Vert f\Vert ^2_{H^{s+\frac{1}{2}}}\left\{ \mu +\Vert f\Vert _{H^s}\right\} , \end{aligned}$$
(6.41)

where

$$\begin{aligned} \mu =\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-\frac{1}{2}}}+\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}\Vert \nabla _x(\eta _*+{\varphi _0})\Vert _{H^{s-1}}+\Vert \eta _*\Vert _{H^{s+\frac{1}{2}}}. \nonumber \\ \end{aligned}$$
(6.42)

We choose \(\Vert \nabla _x {\varphi _0}\Vert _{H^{s-\frac{1}{2}}}\) small enough so that \(\mu <\frac{c_0}{3C}\), and assume that \(\Vert f_0\Vert _{H^s}<\min \{\delta , \frac{c_0}{3C}\}\). Using the continuity of \(t\mapsto \Vert f(t)\Vert _{H^s}\), we deduce from (6.41) that \(\Vert f(t)\Vert _{H^s}<\frac{c_0}{3C}\) for all \(t>0\), and thus

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t}\Vert f(t)\Vert _{H^s}^2\leqq -\frac{c_0}{3}\Vert f(t)\Vert _{H^{s+\frac{1}{2}}}^2\leqq -\frac{c_0}{3}\Vert f(t)\Vert _{H^s}^2. \end{aligned}$$
(6.43)
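
For completeness: Grönwall's inequality applied to the outer inequality in (6.43), and a direct time integration of the first inequality in (6.43) together with \(\Vert f(T)\Vert _{H^s}^2\geqq 0\), give

$$\begin{aligned} \Vert f(t)\Vert _{H^s}^2\leqq e^{-\frac{c_0}{3}t}\Vert f_0\Vert _{H^s}^2 \quad \text {and}\quad \frac{c_0}{3}\int _0^T\Vert f(t)\Vert _{H^{s+\frac{1}{2}}}^2\,\textrm{d}t\leqq \Vert f_0\Vert _{H^s}^2-\Vert f(T)\Vert _{H^s}^2\leqq \Vert f_0\Vert _{H^s}^2 \end{aligned}$$

for every \(T>0\).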

Therefore, we obtain the exponential decay

$$\begin{aligned} \Vert f(t)\Vert _{H^s}\leqq \Vert f_0\Vert _{H^s}e^{-\frac{c_0}{6}t}\quad \forall t>0 \end{aligned}$$
(6.44)

and the global dissipation bound

$$\begin{aligned} \int _0^\infty \Vert f(t)\Vert _{H^{s+\frac{1}{2}}}^2\textrm{d}t\leqq \frac{3}{c_0}\Vert f_0\Vert ^2_{H^s}. \end{aligned}$$
(6.45)

This completes the proof of Theorem 1.2. \(\square \)