1 Introduction

The strong unique continuation property and the related quantitative estimates are well understood for second order equations of elliptic [1, 6, 22, 27] and parabolic type [5, 15, 28]. Three sphere inequalities [30], doubling inequalities [20] and the two-sphere one-cylinder inequality [16] are the typical forms in which such quantitative estimates of unique continuation occur in the elliptic and parabolic contexts. We refer to [4, 36] for a more extensive literature on these subjects. By contrast, strong unique continuation properties are much less studied in the context of hyperbolic equations [7, 31, 32].

To the author's knowledge, there exists no result in the literature concerning quantitative estimates of unique continuation in the framework of hyperbolic equations. In this paper we study this issue for the wave equation

$$\begin{aligned} \partial ^2_{t}u-\text{ div }\left( A(x)\nabla _x u\right) =0, \end{aligned}$$
(1.1)

(here \(\text{ div }\) is taken with respect to the x variables, \(\text{ div } F:=\sum _{j=1}^n\partial _{x_j}F_j\)), where A(x) is a real-valued symmetric \(n\times n\) matrix, \(n\ge 2\), whose entries are Lipschitz continuous functions satisfying a uniform ellipticity condition.

The quantitative estimates of unique continuation for the Eq. (1.1) represent the quantitative counterparts of the following strong unique continuation property. Let u be a weak solution to (1.1) and assume that

$$\begin{aligned} \sup _{t\in J}\left\| u(\cdot ,t) \right\| _{L^2\left( B_{r}\right) }=C_Nr^N\text{, } \quad \forall N\in \mathbb {N} \text{, } \quad \forall r<1, \end{aligned}$$

where \(C_N\) is arbitrary and independent of r, and \(J=(-T,T)\) is an interval of \(\mathbb {R}\). Then we have

$$\begin{aligned} u= 0 \quad \hbox {in }\; \mathcal {U}, \end{aligned}$$

where \(\mathcal {U}\) is a neighborhood of \(\{0\}\times J\). The above property was proved by Lebeau in [31]. As a consequence of such a result, and using the weak unique continuation property proved in [23, 34, 35], see also [24], we have that if the entries of A are functions in \(C^{\infty }(\mathbb {R}^n)\) then \(u=0\) in the domain of dependence of a cylinder \(B_{\delta }\times J\), where \(B_{\delta }\) is the ball of \(\mathbb {R}^n\), \(n\ge 2\), centered at 0 with a small radius \(\delta \). Previously, the strong unique continuation property had been proved by Masuda [32] whenever \(J=\mathbb {R}\) and the entries of the matrix A are functions of \(C^2\) class, and by Baouendi and Zachmanoglou [7] whenever the entries of A are analytic functions. In both [7, 32] the above property was proved also for first order perturbations of the operator \(\partial ^2_{t}-\text{ div }(A(x)\nabla _x)\). We also recall the papers [11, 12, 33], in which unique continuation properties along and across lower dimensional manifolds are proved for the wave equation.

The quantitative estimate of strong unique continuation (in the interior) that we prove is, roughly speaking, the following one (for the precise statement see Theorem 2.1). Let u be a solution to (1.1) in the cylinder \(B_1\times J\) and let \(r\in (0,1)\). Assume that

$$\begin{aligned} \sup _{t\in J}\left\| u(\cdot ,t) \right\| _{L^2\left( B_{r}\right) }\le \varepsilon \quad \text{ and } \; \left\| u(\cdot ,0) \right\| _{H^2\left( B_{1}\right) }\le 1, \end{aligned}$$

where \(\varepsilon <1\). Then

$$\begin{aligned} \left\| u(\cdot ,0) \right\| _{L^2\left( B_{s_0}\right) } \le C\left| \log \left( \varepsilon ^{\theta }\right) \right| ^{-1/6}, \end{aligned}$$
(1.2)

where \(s_0\in (0,1)\) and \(C\ge 1\) are constants independent of u and r, and

$$\begin{aligned} \theta = |\log r|^{-1}. \end{aligned}$$
(1.3)

The estimate (1.2) is sharp from two points of view:

  1. (i)

    The logarithmic character of the estimate cannot be improved, as shown by a well-known counterexample of John for the wave equation [26];

  2. (ii)

    The dependence of \(\theta \) on r is sharp. Indeed, it is easy to check that estimate (1.2) implies the strong unique continuation property for Eq. (1.1) (see Remark 2.2 for more details).

As a consequence of estimate (1.2) and of a reflection transformation introduced in [1], we derive a quantitative estimate of unique continuation at the boundary (Theorem 2.3). We also extend (1.2) to first order perturbations of the wave operator (Sect. 4).

One of the main purposes that led us to derive the above estimates is their application to stability for inverse hyperbolic problems with time independent unknown boundaries from transient data with a finite time of observation. Some uniqueness results have been proved in [25]. In the paper [37] the most important tools used to prove a sharp stability estimate are precisely the quantitative estimates of unique continuation (at the interior and at the boundary) for Eq. (1.1). Quantitative estimates of strong unique continuation were applied for the first time to elliptic inverse problems with unknown boundaries in [3]. Concerning parabolic inverse problems with unknown boundaries, such estimates were applied in [9, 10, 14, 18, 36]. In both cases, elliptic and parabolic, the stability estimates obtained are optimal; see [13] and [2] for the elliptic case and [14] for the parabolic case.

The proof of (1.2) follows a strategy, and uses ingredients, similar to those of [31]. In particular, in order to transform the wave equation into a nonhomogeneous second order elliptic equation we use the Boman transformation [8]; then, for the resulting elliptic equation, we use a Carleman estimate with singular weight [6, 17, 22] and stability estimates for the Cauchy problem. The main difference between our proof and the one of [31] lies in the different nature of the results: in our case the results are quantitative, while in [31] they are only qualitative. More precisely, in [31] the parameter \(\varepsilon \) has the particular form \(\varepsilon =C_Nr^N\), while in the present paper \(\varepsilon \) is a free parameter. An important consequence of this fact is that we need to control very accurately how much the error \(\varepsilon \) affects the growth of the solution to (1.1), in order to obtain the sharpness features (i) and (ii) above.

The plan of the paper is as follows. In Sect. 2 we state the main results of this paper, in Sect. 3 we prove the theorems of Sect. 2, in Sect. 4 we consider the case of the more general equation \(q(x)\partial ^2_{t}u-\text{ div }(A(x)\nabla _x u)=b(x)\cdot \nabla _x u+c(x)u\).

2 The main results

2.1 Notation and definition

Let \(n\in \mathbb {N}\), \(n\ge 2\). For any \(x\in \mathbb {R}^n\), we will write \(x=(x',x_n)\), where \(x'=(x_1,\ldots ,x_{n-1})\in \mathbb {R}^{n-1}\), \(x_n\in \mathbb {R}\), and \(|x|=(\sum _{j=1}^nx_j^2)^{1/2}\). Given \(r>0\), we will denote by \(B_r\), \(B'_r\) and \(\widetilde{B}_r\) the balls of \(\mathbb {R}^{n}\), \(\mathbb {R}^{n-1}\) and \(\mathbb {R}^{n+1}\), respectively, of radius r centered at 0. For any open set \(\Omega \subset \mathbb {R}^n\) and any (smooth enough) function u we denote by \(\nabla _x u=(\partial _{x_1}u,\ldots , \partial _{x_n}u)\) the gradient of u; for the gradient of u we also use the notation \(D_xu\). For \(j=0,1,2\) we denote by \(D^j_x u\) the set of the derivatives of u of order j, so \(D^0_x u=u\), \(D^1_x u=\nabla _x u\) and \(D^2_xu\) is the Hessian matrix \(\{\partial _{x_ix_j}u\}_{i,j=1}^n\). Similar notation is used when other variables occur and when \(\Omega \) is an open subset of \(\mathbb {R}^{n-1}\) or of \(\mathbb {R}^{n+1}\). By \(H^{\ell }(\Omega )\), \(\ell =0,1,2\), we denote the usual Sobolev space of order \(\ell \); in particular \(H^0(\Omega )=L^2(\Omega )\).

For any interval \(J\subset \mathbb {R}\) and \(\Omega \) as above we denote by

$$\begin{aligned} \mathcal {W}(J;\Omega )=\{u\in C^0(J;H^2(\Omega )): \partial _t^\ell u\in C^0(J;H^{2-\ell }(\Omega )), \ell =1,2\}. \end{aligned}$$

We shall use the letters \(C,C_0,C_1,\ldots \) to denote constants. The value of the constants may change from line to line, but we shall specify their dependence wherever they appear.

2.2 Statements of the main results

Let \(A(x)=\{a^{ij}(x)\}^n_{i,j=1}\) be a real-valued symmetric \(n\times n\) matrix whose entries are measurable functions satisfying the following conditions for given constants \(\rho _0>0\), \(\lambda \in (0,1]\) and \(\Lambda >0\),

$$\begin{aligned} \lambda \left| \xi \right| ^2\le A(x)\xi \cdot \xi \le \lambda ^{-1}\left| \xi \right| ^2, \quad \hbox {for every }\; x, \xi \in \mathbb {R}^n, \end{aligned}$$
(2.1a)
$$\begin{aligned} \left| A(x)-A(y)\right| \le \frac{\Lambda }{\rho _0} \left| x-y \right| , \quad \hbox {for every }\; x, y\in \mathbb {R}^n. \end{aligned}$$
(2.1b)

Let \(q=q(x)\) be a real-valued measurable function that satisfies

$$\begin{aligned} \lambda \le q(x)\le \lambda ^{-1}, \quad \hbox {for every }\; x\in \mathbb {R}^n, \end{aligned}$$
(2.2a)
$$\begin{aligned} \left| q(x)-q(y)\right| \le \frac{\Lambda }{\rho _0} \left| x-y \right| , \quad \hbox {for every}\; x, y\in \mathbb {R}^n. \end{aligned}$$
(2.2b)

Let \(u\in \mathcal {W}([-\lambda \rho _0,\lambda \rho _0];B_{\rho _0})\) be a weak solution to

$$\begin{aligned} q(x)\partial ^2_{t}u-\text{ div }\left( A(x)\nabla _x u\right) =0, \quad \hbox {in }\; B_{\rho _0}\times (-\lambda \rho _0,\lambda \rho _0). \end{aligned}$$
(2.3)

Let \(r_0\in (0,\rho _0]\) and set

$$\begin{aligned} \varepsilon :=\sup _{t\in (-\lambda \rho _0,\lambda \rho _0)}\left( \rho _0^{-n}\int _{B_{r_0}}u^2(x,t)dx\right) ^{1/2} \end{aligned}$$
(2.4)

and

$$\begin{aligned} H:=\left( \sum _{j=0}^2\rho _0^{j-n}\int _{B_{\rho _0}}\left| D_x^ju(x,0)\right| ^2 dx\right) ^{1/2}. \end{aligned}$$
(2.5)

Theorem 2.1

(estimate at the interior) Let \(u\in \mathcal {W}([-\lambda \rho _0,\lambda \rho _0];B_{\rho _0})\) be a weak solution to (2.3) and let (2.1) and (2.2) be satisfied. There exist constants \(s_0\in (0,1)\) and \(C\ge 1\) depending on \(\lambda \) and \(\Lambda \) only such that for every \(0<r_0\le \rho \le s_0 \rho _0\) the following inequality holds true

$$\begin{aligned} \left\| u(\cdot ,0) \right\| _{L^2\left( B_{\rho }\right) } \le \frac{C\left( \rho _0\rho ^{-1}\right) ^{C}(H+e\varepsilon )}{\left( \theta \log \left( \frac{H+e\varepsilon }{\varepsilon }\right) \right) ^{1/6}}, \end{aligned}$$
(2.6)

where

$$\begin{aligned} \theta =\frac{\log (\rho _0/C\rho )}{\log (\rho _0/r_0)}. \end{aligned}$$
(2.7)

The proof of Theorem 2.1 is given in Sect. 3.

Remark 2.2

Observe that estimate (2.6) implies the following property of strong unique continuation. Let \(u\in \mathcal {W}([-\lambda \rho _0,\lambda \rho _0];B_{\rho _0})\) be a weak solution to (2.3) and assume that

$$\begin{aligned} \sup _{t\in (-\lambda \rho _0,\lambda \rho _0)}\left( \rho _0^{-n}\int _{B_{r_0}}u^2(x,t)dx\right) ^{1/2}=O(r_0^N)\text{, } \quad \forall N\in \mathbb {N},\quad \text{ as } \; r_0\rightarrow 0, \end{aligned}$$

then

$$\begin{aligned} u(x,t)=0,\quad \text{ for } \; |x|+\lambda ^{-1}s_0|t|\le s_0 \rho _0. \end{aligned}$$
(2.8)

It is enough to consider the case \(t=0\). If \(\Vert u(\cdot ,0) \Vert _{L^2(B_{s_0\rho _0})}=0\) there is nothing to prove; otherwise, if

$$\begin{aligned} \left\| u(\cdot ,0) \right\| _{L^2\left( B_{s_0\rho _0}\right) }>0, \end{aligned}$$
(2.9)

we argue by contradiction. By (2.9) it is not restrictive to assume that \(H=\Vert u(\cdot ,0) \Vert _{H^2(B_{\rho _0})}=1\). Now we apply inequality (2.6) with \(\varepsilon =C_Nr_0^N\), \(N\in \mathbb {N}\), and, passing to the limit as \(r_0\rightarrow 0\), we see that (2.6) implies

$$\begin{aligned} \left\| u(\cdot ,0) \right\| _{L^2\left( B_{s_0\rho _0}\right) }\le Cs_0^{-C}N^{-1/6}, \quad \forall N\in \mathbb {N}, \end{aligned}$$

and, passing now to the limit as \(N\rightarrow \infty \), we get \(\Vert u(\cdot ,0) \Vert _{L^2(B_{s_0\rho _0})}=0\), which contradicts (2.9).
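
For the reader's convenience, here is the elementary computation behind the limit above (a sketch, in which we do not track the constants precisely and tacitly assume \(C\rho <\rho _0\), so that \(\theta >0\)). With \(\varepsilon =C_Nr_0^N\) and \(H=1\), for fixed \(\rho \le s_0\rho _0\) we have, by (2.7),

$$\begin{aligned} \theta \log \left( \frac{H+e\varepsilon }{\varepsilon }\right) =\frac{\log (\rho _0/C\rho )}{\log (\rho _0/r_0)}\left( N\log (1/r_0)-\log C_N+\log (1+eC_Nr_0^N)\right) \longrightarrow N\log (\rho _0/C\rho )\quad \text{ as } \; r_0\rightarrow 0, \end{aligned}$$

so that the right hand side of (2.6) tends to \(C\left( \rho _0\rho ^{-1}\right) ^{C}\left( N\log (\rho _0/C\rho )\right) ^{-1/6}\); taking \(\rho =s_0\rho _0\) and absorbing the factor \(\left( \log (1/Cs_0)\right) ^{-1/6}\) into the constant gives the inequality displayed above.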

In order to state Theorem 2.3 below let us introduce some notation. Let \(\phi \) be a function belonging to \(C^{1,1}(B^{\prime }_{\rho _0})\) that satisfies

$$\begin{aligned} \phi (0)=\left| \nabla _{x'}\phi (0)\right| =0 \end{aligned}$$
(2.10)

and

$$\begin{aligned} \left\| \phi \right\| _{C^{1,1}\left( B^{\prime }_{\rho _0}\right) }\le E\rho _0, \end{aligned}$$
(2.11)

where

$$\begin{aligned} \left\| \phi \right\| _{C^{1,1}\left( B^{\prime }_{\rho _0}\right) }=\left\| \phi \right\| _{L^{\infty }\left( B^{\prime }_{\rho _0}\right) } +\rho _0\left\| \nabla _{x'}\phi \right\| _{L^{\infty }\left( B^{\prime }_{\rho _0}\right) }+ \rho _0^2\left\| D_{x'}^2\phi \right\| _{L^{\infty }\left( B^{\prime }_{\rho _0}\right) }. \end{aligned}$$

For any \(r\in (0,\rho _0]\) set

$$\begin{aligned} K_{r}:=\{(x',x_n)\in B_{r}: x_n>\phi (x')\} \end{aligned}$$

and

$$\begin{aligned} Z:=\{(x',\phi (x')): x' \in B^{\prime }_{\rho _0}\}. \end{aligned}$$

Let \(u\in \mathcal {W}([-\lambda \rho _0,\lambda \rho _0];K_{\rho _0})\) be a solution to

$$\begin{aligned} \partial ^2_{t}u-\text{ div }\left( A(x)\nabla _x u\right) =0, \quad \hbox {in }\; K_{\rho _0}\times (-\lambda \rho _0,\lambda \rho _0), \end{aligned}$$
(2.12)

satisfying one of the following conditions

$$\begin{aligned} u=0, \quad \hbox {on }\; Z\times (-\lambda \rho _0,\lambda \rho _0) \end{aligned}$$
(2.13)

or

$$\begin{aligned} A\nabla _x u\cdot \nu =0, \quad \hbox {on }\; Z\times (-\lambda \rho _0,\lambda \rho _0), \end{aligned}$$
(2.14)

where \(\nu \) denotes the outer unit normal to Z.

Let \(r_0\in (0,\rho _0]\) and set

$$\begin{aligned} \varepsilon =\sup _{t\in (-\lambda \rho _0,\lambda \rho _0)}\left( \rho _0^{-n}\int _{K_{r_0}}u^2(x,t)dx\right) ^{1/2} \end{aligned}$$
(2.15)

and

$$\begin{aligned} H=\left( \sum _{j=0}^2\rho _0^{j-n}\int _{K_{\rho _0}}\left| D_x^ju(x,0)\right| ^2 dx\right) ^{1/2}. \end{aligned}$$
(2.16)

Theorem 2.3

(estimate at the boundary) Let (2.1) be satisfied. Let \(u\in \mathcal {W}([-\lambda \rho _0,\lambda \rho _0];K_{\rho _0})\) be a solution to (2.12), with \(\varepsilon \) and H defined by (2.15) and (2.16). Assume that u satisfies either (2.13) or (2.14). There exist constants \(\overline{s}_0\in (0,1)\) and \(C\ge 1\) depending on \(\lambda \), \(\Lambda \) and E only such that for every \(0<r_0\le \rho \le \overline{s}_0 \rho _0\) the following inequality holds true

$$\begin{aligned} \left\| u(\cdot ,0) \right\| _{L^2\left( K_{\rho }\right) }\le \frac{C\left( \rho _0\rho ^{-1}\right) ^{C}(H+e\varepsilon )}{\left( \widetilde{\theta }\log \left( \frac{H+e\varepsilon }{\varepsilon }\right) \right) ^{1/6}} , \end{aligned}$$
(2.17)

where

$$\begin{aligned} \widetilde{\theta }=\frac{\log (\rho _0/C\rho )}{\log (\rho _0/r_0)}. \end{aligned}$$
(2.18)

The proof of Theorem 2.3 is given in Sect. 3.2.

Remark 2.4

Arguing as in Remark 2.2, we have that estimate (2.17) implies the following strong unique continuation property at the boundary. Let \(u\in \mathcal {W}([-\lambda \rho _0,\lambda \rho _0];K_{\rho _0})\) be a solution to (2.12) satisfying either (2.13) or (2.14) and assume that

$$\begin{aligned} \sup _{t\in (-\lambda \rho _0,\lambda \rho _0)}\left( \rho _0^{-n}\int _{K_{r_0}}u^2(x,t)dx\right) ^{1/2}=O(r_0^N),\quad \forall N\in \mathbb {N},\quad \text{ as } \; r_0\rightarrow 0, \end{aligned}$$

then

$$\begin{aligned} u(x,t)=0,\quad \text{ for }\; x\in K_{\rho (t)},\quad t\in (-\lambda \rho _0,\lambda \rho _0), \end{aligned}$$

where \(\rho (t)=\overline{s}_0(\rho _0-\lambda ^{-1}|t|)\).

3 Proof of Theorems 2.1 and 2.3

3.1 Proof of Theorem 2.1

Observe that to prove Theorem 2.1 we can assume that u(x, t) is even with respect to the variable t. Indeed, defining

$$\begin{aligned} u_+(x,t)=\frac{u(x,t)+u(x,-t)}{2}, \end{aligned}$$

we see that \(u_+\) satisfies all the hypotheses of Theorem 2.1 and, in particular, we have

$$\begin{aligned} u_+(x,0)=u(x,0), \end{aligned}$$
$$\begin{aligned} \sup _{t\in (-\lambda \rho _0,\lambda \rho _0)}\left( \rho _0^{-n}\int _{B_{r_0}}u_+^2(x,t)dx\right) ^{1/2}\le \varepsilon , \end{aligned}$$

and

$$\begin{aligned} \left( \sum _{j=0}^2\rho _0^{j-n}\int _{B_{\rho _0}}\left| D_x^ju_+(x,0)\right| ^2 dx\right) ^{1/2}=H, \end{aligned}$$

Notice also that the right hand side of (2.6) is a nondecreasing function of \(\varepsilon \). Hence, from now on we assume that u(x, t) is even with respect to the variable t. Moreover, it is not restrictive to assume \(\rho _0=1\).

In order to prove Theorem 2.1 we proceed in the following way.

First step. After a standard extension of \(u(\cdot ,0)\) to a function in \(H^2(B_2)\cap H_0^1 (B_2)\) we will construct, similarly to [31], a sequence of functions \(\{v_k(x,y)\}_{k\in \mathbb {N}}\) with the following properties:

  1. (i)

    for every \(k\in \mathbb {N}\) and every \(y\in \mathbb {R}\) the function \(v_k(\cdot ,y)\) belongs to \(H^2(B_2)\cap H_0^1 (B_2)\); in addition, \(v_k(x,y)\) is even with respect to the variable \(y\in \mathbb {R}\),

  2. (ii)

    the sequence \(\{v_k(\cdot ,0)\}_{k\in \mathbb {N}}\) approximates \(u(\cdot ,0)\) in \(L^2(B_1)\); more precisely we have

    $$\begin{aligned} \left\| u(\cdot ,0)-v_{k}(\cdot ,0) \right\| _{L^2 \left( B_{1}\right) }\le C H k^{-1/6}. \end{aligned}$$

    Moreover, for every \(k\in \mathbb {N}\) the function \(v_k(x,y)\) is a solution to the elliptic problem,

    $$\begin{aligned} \left\{ \begin{array}{ll} q(x)\partial ^2_{y}v_{k}+\text{ div }\left( A(x)\nabla _x v_{k}\right) =f_{k}(x,y), \quad \hbox {in } B_2\times \mathbb {R},\\ \Vert v_k(\cdot ,0)\Vert _{L^2\left( B_{r_0}\right) }\le \varepsilon , \end{array}\right. \end{aligned}$$

    where \(f_k\) satisfies

    $$\begin{aligned} \Vert f_k(\cdot ,y)\Vert _{L^2\left( B_2\right) }\le CH(C|y|)^{2k}, \quad \forall k\in \mathbb {N},\ \forall y\in \mathbb {R}. \end{aligned}$$

Second step. Here we derive a stability estimate for the Cauchy problem for the above elliptic equation, which yields an estimate of \(v_k\) in the ball of \(\mathbb {R}^{n+1}\) centered at 0 with radius \(r_0/4\) (Proposition 3.6). Then we apply a Carleman estimate with singular weight (Theorem 3.7) to the elliptic equation, together with the above estimate of \(\Vert u(\cdot ,0)-v_{k}(\cdot ,0) \Vert _{L^2 (B_{1})}\). Finally, we choose the parameter k and obtain estimate (2.6).

First step.

Let us start by introducing some notation. Let \(\widetilde{u}_0\) be an extension of the function \(u_0:=u(\cdot ,0)\) such that \(\widetilde{u}_0\in H^2(B_2)\cap H_0^1(B_2)\) and

$$\begin{aligned} \Vert \widetilde{u}_0\Vert _{H^2\left( B_2\right) }\le CH, \end{aligned}$$
(3.1)

where C is an absolute constant.

Let us denote by \(\lambda _j\), with \(0<\lambda _1\le \lambda _2\le \cdots \le \lambda _j\le \cdots \) the eigenvalues associated to the Dirichlet problem

$$\begin{aligned} \left\{ \begin{array}{ll} \text{ div }\left( A(x)\nabla _x v\right) +\omega q(x)v=0, &{}\quad \text {in }B_2,\\ v\in H_0^1\left( B_2\right) , \end{array}\right. \end{aligned}$$
(3.2)

and by \(e_j(\cdot )\) the corresponding eigenfunctions normalized by

$$\begin{aligned} \int _{B_2}e^2_j(x)q(x)dx=1. \end{aligned}$$
(3.3)

By (2.1a), (2.2) and the Poincaré inequality we have, for every \(j\in \mathbb {N}\),

$$\begin{aligned} \lambda _j=\int _{B_2} A(x)\nabla _x e_j(x)\cdot \nabla _x e_j(x) dx\ge c\lambda ^2 \int _{B_2}e^2_j(x)q(x)dx=c\lambda ^2 \end{aligned}$$
(3.4)

where c is an absolute constant. Denote by

$$\begin{aligned} \alpha _j :=\int _{B_2}\widetilde{u}_0(x) e_j(x)q(x)dx, \end{aligned}$$
(3.5)

and let

$$\begin{aligned} \widetilde{u}(x,t):=\sum _{j=1}^{\infty }\alpha _j e_j(x)\cos \sqrt{\lambda _j} t. \end{aligned}$$
(3.6)

Proposition 3.1

We have

$$\begin{aligned} \sum _{j=1}^{\infty }\left( 1+\lambda _j^2\right) \alpha ^2_j\le C H^2, \end{aligned}$$
(3.7)

where C depends on \(\lambda , \Lambda \) only. Moreover, \(\widetilde{u}\in \mathcal {W}(\mathbb {R};B_2)\cap C^0(\mathbb {R};H^2(B_2)\cap H^1_0(B_2))\) is an even function with respect to the variable t and it satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} q(x)\partial ^2_{t}\widetilde{u}-\text{ div }\left( A(x)\nabla _x \widetilde{u}\right) =0, &{}\quad \hbox {in }\, B_2\times \mathbb {R},\\ \widetilde{u}(\cdot ,0)=\widetilde{u}_0, &{}\quad \hbox {in }\, B_2,\\ \partial _t\widetilde{u}(\cdot ,0)=0, &{}\quad \hbox {in }\, B_2. \end{array}\right. \end{aligned}$$
(3.8)

Proof

By (3.2) and (3.3) we have

$$\begin{aligned} \lambda _j\alpha _j=\int _{B_2}\widetilde{u}_0(x) \lambda _jq(x)e_j(x)dx =-\int _{B_2}\text{ div }\left( A(x)\nabla _x \widetilde{u}_0(x)\right) e_j(x)dx. \end{aligned}$$

Hence, by (2.1), (2.2) and (3.1) we have

$$\begin{aligned} \sum _{j=1}^{\infty }\left( 1+\lambda _j^2\right) \alpha ^2_j=\Vert \widetilde{u}_0\Vert ^2_{L^2\left( B_2;qdx\right) }+\left\| {\frac{1}{q}\text{ div }\left( A\nabla _x \widetilde{u}_0\right) }\right\| ^2_{L^2\left( B_2;qdx\right) }\le C H^2, \end{aligned}$$

where C depends on \(\lambda , \Lambda \) only and (3.7) follows. \(\square \)
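
For completeness, we record the Parseval step used above: by standard spectral theory, the normalization (3.3) allows us to choose \(\{e_j\}_{j\in \mathbb {N}}\) as an orthonormal basis of \(L^2(B_2;q\,dx)\); by (3.5) the \(\alpha _j\) are the Fourier coefficients of \(\widetilde{u}_0\) in this basis and, by the identity at the beginning of the proof, \(\lambda _j\alpha _j\) are the Fourier coefficients of \(-\frac{1}{q}\text{ div }\left( A\nabla _x \widetilde{u}_0\right) \). Hence

$$\begin{aligned} \sum _{j=1}^{\infty }\alpha ^2_j=\Vert \widetilde{u}_0\Vert ^2_{L^2\left( B_2;qdx\right) },\qquad \sum _{j=1}^{\infty }\lambda _j^2\alpha ^2_j=\left\| \frac{1}{q}\text{ div }\left( A\nabla _x \widetilde{u}_0\right) \right\| ^2_{L^2\left( B_2;qdx\right) }, \end{aligned}$$

which is exactly the identity used in the proof of Proposition 3.1.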

Notice that, since \(\widetilde{u}(\cdot ,0)=u_+(\cdot ,0)\) and \(\partial _t\widetilde{u}(\cdot ,0)=0=\partial _t u_+(\cdot ,0)\) in \(B_1\), by the uniqueness for the Cauchy problem for Eq. (2.3) (see, for instance, [19]) we have

$$\begin{aligned} \widetilde{u}(x,t)=u_+(x,t), \quad \text{ for } \; |x|+\lambda ^{-1}|t|< 1. \end{aligned}$$
(3.9)

Let us introduce the nonnegative, even function \(\psi \) defined by

$$\begin{aligned} \psi (t)=\left\{ \begin{array}{ccc} \frac{1}{2}\left( 1+\cos \pi t\right) , &{}\quad \text{ for }&{} |t|\le 1, \\ 0, &{}\quad \text{ for }&{}|t|>1. \end{array} \right. \end{aligned}$$
(3.10)

Notice that \(\psi \in C^{1,1}\), \(\text{ supp } \psi \subseteq [-1,1]\) and

$$\begin{aligned} \int _{\mathbb {R}}\psi (t)dt=1. \end{aligned}$$
(3.11)

Let

$$\begin{aligned} \widehat{\psi }(\tau )=\int _{\mathbb {R}}\psi (t)e^{-i\tau t}dt=\int _{\mathbb {R}}\psi (t) \cos \tau t dt, \quad \tau \in \mathbb {R}. \end{aligned}$$
(3.12)

Since \(\psi \) has compact support, \(\widehat{\psi }\) is an entire function. By (3.11) we have

$$\begin{aligned} \left| \widehat{\psi }(\tau )\right| \le \int _{\mathbb {R}}\psi (t)dt=1,\quad \text{ for } \text{ every } \; \tau \in \mathbb {R}, \end{aligned}$$

and

$$\begin{aligned} \left| \tau ^2\widehat{\psi }(\tau )\right| =\left| -\int _{\mathbb {R}}\psi (t)\frac{d^2}{dt^2} \cos \tau tdt\right| =\left| -\int _{\mathbb {R}}\psi ^{''}(t) \cos \tau tdt\right| \le \pi ^2,\quad \text{ for } \text{ every }\; \tau \in \mathbb {R}, \end{aligned}$$

hence we have

$$\begin{aligned} \left| \widehat{\psi }(\tau )\right| \le \min \left\{ 1,\pi ^2\tau ^{-2}\right\} ,\quad \text{ for } \text{ every }\; \tau \in \mathbb {R}. \end{aligned}$$
(3.13)

Let

$$\begin{aligned} \vartheta (t)=4\lambda ^{-1}\psi (4\lambda ^{-1}t),\quad t\in \mathbb {R}. \end{aligned}$$
(3.14)

In the following proposition we collect the elementary properties of \(\vartheta \) that we need.

Proposition 3.2

The function \(\vartheta \) is an even, nonnegative function such that \(\vartheta \in C^{1,1}\), \(\text{ supp } \vartheta =[-\frac{\lambda }{4},\frac{\lambda }{4}]\), \(\int _{\mathbb {R}}\vartheta (t)dt=1\), \(\widehat{\vartheta }(\tau )=\widehat{\psi }(\frac{\lambda \tau }{4})\) and

$$\begin{aligned} \int _{\mathbb {R}}\left| \vartheta ^{\prime }(t)\right| dt=8\lambda ^{-1}, \end{aligned}$$
(3.15)
$$\begin{aligned} \left| \widehat{\vartheta }(\tau )\right| \le \min \left\{ 1,16\pi ^2(\tau \lambda )^{-2}\right\} ,\quad \text{ for } \text{ every }\; \tau \in \mathbb {R}, \end{aligned}$$
(3.16)
$$\begin{aligned} \left| \widehat{\vartheta }(\tau )-1\right| \le \left( \frac{\lambda \tau }{4}\right) ^2,\quad \hbox {for }\; \left| \frac{\lambda \tau }{4} \right| \le \frac{\pi }{2}, \end{aligned}$$
(3.17)
$$\begin{aligned} \frac{1}{2}\le \widehat{\vartheta }(\tau ), \quad \hbox {for }\; \left| \frac{\lambda \tau }{4} \right| \le \frac{1}{\sqrt{2}}. \end{aligned}$$
(3.18)

Proof

We limit ourselves to proving properties (3.17) and (3.18), since the other properties are immediate consequences of (3.12), (3.13) and (3.14). We have

$$\begin{aligned} \left| \widehat{\vartheta }(\tau )-1\right| \le \int ^1_{-1} \psi (s)\left( 1-\cos \left( \frac{\lambda s \tau }{4}\right) \right) ds. \end{aligned}$$
(3.19)

Now, if \(s\in [-1,1]\) and \(\vert \frac{\lambda \tau }{4} \vert \le \frac{\pi }{2}\) then

$$\begin{aligned} 1-\cos \left( \frac{\lambda s \tau }{4}\right) \le \left( \frac{\lambda \tau }{4}\right) ^2 . \end{aligned}$$

Hence by (3.19) we get (3.17). Finally, (3.18) is an immediate consequence of (3.17). \(\square \)
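
For completeness, the computation behind (3.15) is the following: by (3.10) and (3.14), \(\psi '(s)=-\frac{\pi }{2}\sin \pi s\) for \(|s|\le 1\), hence

$$\begin{aligned} \int _{\mathbb {R}}\left| \vartheta ^{\prime }(t)\right| dt=4\lambda ^{-1}\int _{\mathbb {R}}\left| \psi ^{\prime }(s)\right| ds=4\lambda ^{-1}\,\frac{\pi }{2}\int ^1_{-1}\left| \sin \pi s\right| ds=4\lambda ^{-1}\cdot 2=8\lambda ^{-1}. \end{aligned}$$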

As usual, if \(f,g\in L^1(\mathbb {R})\), we set \((f*g)(t):=\int _{\mathbb {R}}f(t-s)g(s)ds\). Moreover, we set \(f^{*(k)}:=f*f^{*(k-1)}\) for \(k\ge 2\), where \(f^{*(1)}:=f\).

Let us define

$$\begin{aligned} \vartheta _k(t):=\left( k\vartheta (kt)\right) ^{*(k)},\quad \text{ for } \text{ every }\; k\in \mathbb {N}. \end{aligned}$$
(3.20)

Notice that \(\vartheta _k\ge 0\), \(\text{ supp } \vartheta _k\subset [-\frac{\lambda }{4},\frac{\lambda }{4}]\), \(\int _{\mathbb {R}}\vartheta _k(t)dt=1\), for every \(k\in \mathbb {N}\) and

$$\begin{aligned} \widehat{\vartheta }_k(\tau )=\left( \widehat{\vartheta }(k^{-1} \tau )\right) ^k,\quad \text{ for } \text{ every }\; k\in \mathbb {N}, \tau \in \mathbb {R}. \end{aligned}$$
(3.21)

Moreover, by (3.17) and (3.21) we have

$$\begin{aligned} \lim \limits _{k\rightarrow +\infty } \widehat{\vartheta }_k(\tau )=1,\quad \text{ for } \text{ every }\; \tau \in \mathbb {R}. \end{aligned}$$
(3.22)

For any number \(\mu \in (0,1]\) and any \(k\in \mathbb {N}\) let us set

$$\begin{aligned} \varphi _{\mu ,k}=\left( \vartheta _k*\varphi _{\mu }\right) , \end{aligned}$$
(3.23)

where

$$\begin{aligned} \varphi _{\mu }(t)=\mu ^{-1}\vartheta \left( \mu ^{-1}t\right) ,\quad \text{ for } \text{ every }\; t\in \mathbb {R}. \end{aligned}$$
(3.24)

We have \(\text{ supp } \varphi _{\mu ,k}\subset [-\frac{\lambda (\mu +1)}{4},\frac{\lambda (\mu +1)}{4}]\), \(\varphi _{\mu ,k}\ge 0\) and \(\int _{\mathbb {R}}\varphi _{\mu ,k}(t)dt=1\). Moreover \(\varphi _{\mu ,k}\) is an even function.

Now, let us define the following mollified form of the Boman transformation of \(\widetilde{u}(x,\cdot )\) [8],

$$\begin{aligned} \widetilde{u}_{\mu ,k}(x)=\int _{\mathbb {R}}\widetilde{u}(x,t)\varphi _{\mu ,k}(t)dt,\quad \text{ for } \; x\in B_2. \end{aligned}$$
(3.25)

Proposition 3.3

If \(k\in \mathbb {N}\) and \(\mu =k^{-1/6}\) then the following inequality holds true

$$\begin{aligned} \left\| u(\cdot ,0)-\widetilde{u}_{\mu ,k} \right\| _{L^2 \left( B_{1}\right) }\le C H k^{-1/6}, \end{aligned}$$
(3.26)

where C depends on \(\lambda \) only.

Proof

Let \(\mu \in (0,1]\). By applying the triangle inequality and taking into account (3.11) and (3.24) we have

$$\begin{aligned}&\left\| u(\cdot ,0)-\widetilde{u}_{\mu ,k}(\cdot )\right\| _{L^2 \left( B_1\right) }\le \left( \int _{B_1}dx\int _{-\lambda \mu /4}^{\lambda \mu /4}\left| u(x,0)- \widetilde{u}(x,t)\right| ^{2}\varphi _{\mu }(t)dt\right) ^{1/2}\nonumber \\&\quad +\,\left( \int _{B_1}dx \int _{-\lambda \left( \mu +1\right) /4}^{\lambda \left( \mu +1\right) /4}\left| \widetilde{u}(x,t)\right| ^{2}dt\right) ^{1/2}\left\| \varphi _{\mu }-\varphi _{\mu ,k}\right\| _{L^{2}\left( \mathbb {R}\right) }:=I_1+I_2. \end{aligned}$$
(3.27)

In order to estimate \(I_1\) from above we observe that by the energy inequality, (3.1), and taking into account that \(\partial _t \widetilde{u}(x,0)=0\), we have

$$\begin{aligned}&\int _{B_2}\left| \partial _t \widetilde{u}(x,t) \right| ^{2}dx\le \int _{B_2}\left( \left| \partial _t \widetilde{u}(x,t) \right| ^{2}+\left| \nabla _x \widetilde{u}(x,t) \right| ^{2}\right) dx \\&\quad \le \lambda ^{-2}\int _{B_2}\left( \left| \partial _t \widetilde{u}(x,0) \right| ^{2}+\left| \nabla _x \widetilde{u}(x,0) \right| ^{2}\right) dx\le CH^2, \end{aligned}$$

where C depends on \(\lambda \) only. Therefore

$$\begin{aligned} I^2_1\le 2\int _{B_{1}}dx\left| \int _{0}^{\lambda \mu /4}\partial _{\eta } \widetilde{u}(x,\eta )d\eta \right| ^{2}\le \frac{\lambda \mu }{2} \int _{B_{1}}dx\int _{0}^{\lambda \mu /4}\left| \partial _{\eta } \widetilde{u}(x,\eta )\right| ^{2}d\eta \le CH^2\mu ^{2}. \end{aligned}$$

Hence

$$\begin{aligned} I_1\le CH\mu , \end{aligned}$$
(3.28)

where C depends on \(\lambda \) only.

Concerning \(I_2\), first we observe that, by the Poincaré inequality, the energy inequality and (3.1) (recalling that \(\mu \in (0,1]\)), we have

$$\begin{aligned}&\int _{-\lambda \left( \mu +1\right) /4}^{\lambda \left( \mu +1\right) /4}dt\int _{B_1}\left| \widetilde{u}(x,t)\right| ^{2}dx\le \int _{-\lambda /2 }^{\lambda /2}dt\int _{B_2}\left| \widetilde{u}(x,t)\right| ^{2}dx\nonumber \\&\quad \le C\int _{-\lambda /2 }^{\lambda /2}dt\int _{B_2}\left| \nabla _x\widetilde{u}(x,t)\right| ^{2}dx\le CH^2, \end{aligned}$$
(3.29)

where C depends on \(\lambda \) only.

In order to estimate from above \(\Vert \varphi _{\mu }-\varphi _{\mu ,k}\Vert _{L^{2}( \mathbb {R})}\) we recall that \(\widehat{\varphi }_\mu (\tau )=\widehat{\vartheta }(\mu \tau )\) and \(\widehat{\varphi }_{\mu ,k}(\tau )=\widehat{\vartheta }(\mu \tau )(\widehat{\vartheta }(k^{-1}\tau ))^k\), hence the Parseval identity and a change of variable give

$$\begin{aligned} 2\pi \left\| \varphi _{\mu }-\varphi _{\mu ,k}\right\| _{L^{2}\left( \mathbb {R}\right) }^2=\frac{1}{\mu }\int _{\mathbb {R}}\left| \left( \widehat{\vartheta }((\mu k)^{-1}\tau )\right) ^k-1\right| ^2 \left| \widehat{\vartheta }(\tau )\right| ^2 d\tau . \end{aligned}$$
(3.30)

By (3.16), (3.17) and (3.18) and by using the elementary inequalities \(1-e^{-z}\le z\), for every \(z\in \mathbb {R}\), and \(\log s\le s-1\), for every \(s>0\), we have, whenever \(\vert \frac{\lambda \tau }{4\mu k}\vert \le \frac{1}{\sqrt{2}}\),

$$\begin{aligned} 0\le 1-\left( \widehat{\vartheta }((\mu k)^{-1}\tau )\right) ^k=1-e^{k\log \widehat{\vartheta }((\mu k)^{-1}\tau )}\le \frac{\lambda ^2\tau ^2}{8\mu ^2k}. \end{aligned}$$
(3.31)
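
In more detail, writing \(s=\widehat{\vartheta }((\mu k)^{-1}\tau )\) (so that \(s\ge \frac{1}{2}\) by (3.18)), the chain of inequalities behind (3.31) is

$$\begin{aligned} 1-s^k=1-e^{-k\log (1/s)}\le k\log (1/s)\le k\left( \frac{1}{s}-1\right) =k\,\frac{1-s}{s}\le 2k(1-s)\le 2k\left( \frac{\lambda \tau }{4\mu k}\right) ^2=\frac{\lambda ^2\tau ^2}{8\mu ^2k}, \end{aligned}$$

where the last inequality follows from (3.17).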

Now let \(\delta \in (0,1]\) be a number that we shall choose later and denote \(\beta =\frac{4\mu k}{\sqrt{2}\lambda }\delta \). By (3.30), (3.16) and (3.31) we have

$$\begin{aligned}&2\pi \left\| \varphi _{\mu }-\varphi _{\mu ,k}\right\| _{L^{2}\left( \mathbb {R}\right) }^2=\frac{1}{\mu }\int _{|\tau |\le \beta }\left| \left( \widehat{\vartheta }((\mu k)^{-1}\tau )\right) ^k-1\right| ^2 \left| \widehat{\vartheta }(\tau )\right| ^2 d\tau \nonumber \\&\quad +\frac{1}{\mu }\int _{|\tau |\ge \beta }\left| \left( \widehat{\vartheta }((\mu k)^{-1}\tau )\right) ^k-1\right| ^2 \left| \widehat{\vartheta }(\tau )\right| ^2 d\tau \nonumber \\&\quad \le \frac{1}{\mu }\int _{|\tau |\le \beta }\left( \frac{\lambda ^2\tau ^2}{8\mu ^2 k}\right) ^2 d\tau +\frac{1}{\mu }\int _{|\tau |>\beta }\left( \frac{32\pi ^2}{\lambda ^2\tau ^2}\right) ^2 d\tau \le C\left( k^3\delta ^5+\frac{1}{\delta ^3\mu ^4 k^3}\right) ,\nonumber \\ \end{aligned}$$
(3.32)

where C depends on \(\lambda \) only. If \(\mu ^2 k^{3/5}\ge 1\), we choose \(\delta =(\mu ^2 k^3)^{-1/4}\) and by (3.32) we have

$$\begin{aligned} \left\| \varphi _{\mu }-\varphi _{\mu ,k}\right\| _{L^{2}\left( \mathbb {R}\right) }\le C \left( k^{3/5}\mu ^{2}\right) ^{-5/8}, \end{aligned}$$
(3.33)

where C depends on \(\lambda \) only. Hence recalling (3.29) we have

$$\begin{aligned} I_2 \le C H\left( k^{3/5}\mu ^{2}\right) ^{-5/8}. \end{aligned}$$
(3.34)

By (3.27), (3.28) and (3.34) we obtain

$$\begin{aligned} \Vert u(\cdot ,0)-\widetilde{u}_{\mu ,k} \Vert _{L^2 (B_{1})}\le C H(\mu +(k^{3/5}\mu ^{2})^{-5/8}). \end{aligned}$$
(3.35)

Now, if \(\mu =k^{-\frac{1}{6}}\), \(k\ge 1\) then (3.35) implies (3.26). \(\square \)
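
The choice \(\mu =k^{-1/6}\) is precisely the one that balances the two terms in (3.35): indeed

$$\begin{aligned} \mu ^2k^{3/5}=k^{-\frac{1}{3}+\frac{3}{5}}=k^{\frac{4}{15}}\ge 1\quad \text{ for } \; k\ge 1, \qquad \left( k^{3/5}\mu ^{2}\right) ^{-5/8}=k^{-\frac{4}{15}\cdot \frac{5}{8}}=k^{-1/6}=\mu , \end{aligned}$$

so that the condition required for (3.33) is satisfied and both terms on the right hand side of (3.35) are of order \(k^{-1/6}\).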

From now on we fix \(\overline{\mu }:=k^{-\frac{1}{6}}\) for \(k\ge 1\) and we denote

$$\begin{aligned} \widetilde{u}_k:=\widetilde{u}_{\overline{\mu },k}. \end{aligned}$$
(3.36)

Let us now introduce, for every \(k\in \mathbb {N}\), an even function \(g_k\in C^{1,1}(\mathbb {R})\) such that \(g_k(z)=\cosh z\) for \(|z|\le k\), \(g_k(z)=\cosh 2k\) for \(|z|\ge 2k\), and such that

$$\begin{aligned} \left| g_k(z) \right| +\left| g^{\prime }_k(z) \right| +\left| g^{\prime \prime }_k(z) \right| \le ce^{2k},\quad \text{ for } \text{ every }\; z\in \mathbb {R}, \end{aligned}$$
(3.37)

where c is an absolute constant.
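
One admissible construction of such a family (a sketch, for the reader's convenience; any choice with the properties listed above works equally well and we do not claim that this is the one used in the original argument) is \(g_k(z)=\cosh \left( h_k(|z|)\right) \), where \(h_k(s)=s\) for \(0\le s\le k\), \(h_k(s)=2k\) for \(s\ge 2k\) and, for \(k\le s\le 2k\),

$$\begin{aligned} h_k(s)=k+(s-k)+\frac{(s-k)^2}{k}-\frac{(s-k)^3}{k^2}. \end{aligned}$$

A direct computation shows that \(h_k\in C^{1,1}([0,+\infty ))\), that \(h_k\) is nondecreasing with \(0\le h_k\le 2k\), \(0\le h'_k\le \frac{4}{3}\) and \(|h''_k|\le \frac{4}{k}\); consequently \(g_k\) is even, belongs to \(C^{1,1}(\mathbb {R})\), coincides with \(\cosh z\) for \(|z|\le k\), equals \(\cosh 2k\) for \(|z|\ge 2k\) and satisfies (3.37) with an absolute constant c.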

The following proposition is the main result of this first step.

Proposition 3.4

Let

$$\begin{aligned} v_{k}(x,y):=\sum _{j=1}^{\infty }\alpha _j \widehat{\varphi }_{\overline{\mu },k}\left( \sqrt{\lambda _j}\right) g_k\left( y\sqrt{\lambda _j}\right) e_j(x),\quad \text{ for }\; (x,y)\in B_2\times \mathbb {R}. \end{aligned}$$
(3.38)

We have that \(v_{k}(\cdot ,y)\) belongs to \(H^2(B_2)\cap H_0^1(B_2)\) for every \(y\in \mathbb {R}\), \(v_{k}(x,y)\) is an even function with respect to y and it satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} q(x)\partial ^2_{y}v_{k}+\text{ div }\left( A(x)\nabla _x v_{k}\right) =f_{k}(x,y), \quad \hbox {in } B_2\times \mathbb {R},\\ v_{k}(\cdot ,0)=\widetilde{u}_{k},\quad \hbox {in } B_2. \end{array}\right. \end{aligned}$$
(3.39)

and

$$\begin{aligned} \Vert v_{k}(\cdot ,0)\Vert _{L^2\left( B_{r_0}\right) }\le \varepsilon . \end{aligned}$$
(3.40)

where

$$\begin{aligned} f_{k}(x,y)=\sum _{j=1}^{\infty }\lambda _j\alpha _j \widehat{\varphi }_{\overline{\mu },k}\left( \sqrt{\lambda _j}\right) \left( g^{\prime \prime }_k\left( y\sqrt{\lambda _j}\right) - g_k\left( y\sqrt{\lambda _j}\right) \right) q(x)e_j(x). \end{aligned}$$
(3.41)

Moreover we have

$$\begin{aligned} \sum _{j=0}^{2}\Vert \partial ^{j}_yv_{k}(\cdot ,y)\Vert _{H^{2-j}\left( B_2\right) }\le CH e^{2k},\quad \text{ for } \text{ every }\; y\in \mathbb {R}, \end{aligned}$$
(3.42)
$$\begin{aligned} \Vert f_{k}(\cdot ,y)\Vert _{L^2\left( B_2\right) }\le CH e^{2k}\min \left\{ 1,\left( 4\pi \lambda ^{-1}|y|\right) ^{2k}\right\} ,\quad \text{ for } \text{ every }\; y\in \mathbb {R}, \end{aligned}$$
(3.43)

where C depends on \(\lambda \) and \(\Lambda \) only.

Proof

First of all observe that

$$\begin{aligned} \left| \widehat{\varphi }_{\overline{\mu },k}\left( \sqrt{\lambda _j}\right) \right| \le \Vert \varphi _{\overline{\mu },k}\Vert _{L^1\left( \mathbb {R}\right) }=1. \end{aligned}$$
(3.44)

For the sake of brevity, in what follows we shall drop the subscript and write v for \(v_{k}\).

In order to prove that \(v(\cdot ,y)\in H^2\left( B_2\right) \cap H_0^1\left( B_2\right) \) for every \(y\in \mathbb {R}\), let \(M,N\in \mathbb {N}\) with \(M>N\) and set

$$\begin{aligned} V_{M,N}(x,y):=\sum _{j=N+1}^{M}\alpha _j \widehat{\varphi }_{\overline{\mu },k}\left( \sqrt{\lambda _j}\right) g_k\left( y\sqrt{\lambda _j}\right) e_j(x). \end{aligned}$$
(3.45)

By (3.37) and (3.44) we have, for every \(y\in \mathbb {R}\),

$$\begin{aligned}&\lambda \int _{B_2}\left| \nabla _x V_{M,N}(x,y)\right| ^2 dx\le \int _{B_2}A(x)\nabla _x V_{M,N}(x,y)\cdot \nabla _x V_{M,N}(x,y) dx\\&\quad =\sum _{j=N+1}^{M} \left( \int _{B_2}A(x)\nabla _x e_j(x)\cdot \nabla _x V_{M,N}(x,y) dx\right) \widehat{\varphi }_{\overline{\mu },k}\left( \sqrt{\lambda _j}\right) g_k\left( y\sqrt{\lambda _j}\right) \alpha _j\\&\quad =\sum _{j=N+1}^{M}\lambda _j\alpha ^2_j\widehat{\varphi }^2_{\overline{\mu },k}\left( \sqrt{\lambda _j}\right) g^2_k\left( y\sqrt{\lambda _j}\right) \le c e^{4k}\sum _{j=N+1}^{M}\lambda _j\alpha ^2_j. \end{aligned}$$

Therefore, since \(V_{M,N}(\cdot ,y)\in H^1_0(B_2)\) we have

$$\begin{aligned} \Vert V_{M,N}(\cdot ,y)\Vert ^2_{H^1_0\left( B_2\right) }\le c e^{4k}\sum _{j=N+1}^{M}\lambda _j\alpha ^2_j,\quad \text{ for } \text{ every } \; y\in \mathbb {R}. \end{aligned}$$
(3.46)

The inequality above and (3.7) give

$$\begin{aligned} \Vert V_{M,N}(\cdot ,y)\Vert _{H^1_0\left( B_2\right) }\rightarrow 0,\quad \text{ as } \; M,N\rightarrow \infty ,\quad \text{ for } \text{ every } \; y\in \mathbb {R}, \end{aligned}$$

hence \(v(\cdot ,y)\in H^1_0\left( B_2\right) \) for every \(y\in \mathbb {R}\).

In order to prove that \(v(\cdot ,y)\in H^2(B_2)\), first observe that by (3.37), (3.44) and (3.45) we have

$$\begin{aligned} \Vert \text{ div }\left( A\nabla _x V_{M,N}\right) \Vert ^2_{L^2\left( B_2\right) }\le c\lambda ^{-1} e^{4k}\sum _{j=N+1}^{M}\lambda ^2_j\alpha ^2_j,\quad \text{ for } \text{ every } \; y\in \mathbb {R}, \end{aligned}$$

then by the above inequality and standard \(L^2\) regularity estimate [21] we obtain

$$\begin{aligned}&\Vert D^2_x V_{M,N}(\cdot ,y)\Vert ^2_{L^2\left( B_2\right) }\le C \Vert \text{ div }\left( A\nabla _x V_{M,N}\right) \Vert ^2_{L^2\left( B_2\right) } \nonumber \\&\quad \le Ce^{4k}\sum _{j=N+1}^{M}\lambda ^2_j\alpha ^2_j,\quad \text{ for } \text{ every } \; y\in \mathbb {R}, \end{aligned}$$
(3.47)

where C depends on \(\lambda \) and \(\Lambda \) only. Hence \(v(\cdot ,y)\in H^2(B_2)\) for every \(y\in \mathbb {R}\). Moreover, by (3.7), (3.46) and (3.47) we have

$$\begin{aligned}&\Vert v(\cdot ,y)\Vert _{L^2\left( B_2\right) }+\Vert \nabla _x v(\cdot ,y)\Vert _{L^2\left( B_2\right) }+\Vert D^2_x v(\cdot ,y)\Vert _{L^2\left( B_2\right) } \nonumber \\&\quad \le C H e^{2k},\quad \text{ for } \text{ every } \; y\in \mathbb {R}, \end{aligned}$$
(3.48)

where C depends on \(\lambda \) and \(\Lambda \) only. Similarly we have \(\partial _yv (\cdot ,y) ,\partial ^2_y v (\cdot ,y), \partial _y \nabla _x v(\cdot ,y)\in L^2(B_2)\) and

$$\begin{aligned} \sum _{j=1}^{2}\Vert \partial ^j_y D^{2-j}_x v(\cdot ,y)\Vert _{L^2\left( B_2\right) }\le CH e^{2k},\quad \text{ for } \text{ every } \; y\in \mathbb {R}, \end{aligned}$$
(3.49)

where C depends on \(\lambda \) and \(\Lambda \) only.

Inequalities (3.48) and (3.49) yield (3.42). By (3.38) we immediately have that v is an even function of y and that it satisfies (3.39).
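
For the reader's convenience, the verification of (3.39) is the following term-by-term computation (formally on the series, rigorously on the partial sums \(V_{M,N}\), passing then to the limit): since \(-\text{ div }\left( A(x)\nabla _x e_j\right) =\lambda _jq(x)e_j\) by (3.2), we have

$$\begin{aligned} q(x)\partial ^2_{y}v_{k}+\text{ div }\left( A(x)\nabla _x v_{k}\right) =\sum _{j=1}^{\infty }\lambda _j\alpha _j \widehat{\varphi }_{\overline{\mu },k}\left( \sqrt{\lambda _j}\right) \left( g^{\prime \prime }_k\left( y\sqrt{\lambda _j}\right) - g_k\left( y\sqrt{\lambda _j}\right) \right) q(x)e_j(x)=f_{k}(x,y); \end{aligned}$$

moreover, since \(g_k(0)=1\) and \(\varphi _{\overline{\mu },k}\) is even, by (3.6), (3.25) and (3.36) we have \(v_{k}(x,0)=\sum _{j=1}^{\infty }\alpha _j \widehat{\varphi }_{\overline{\mu },k}(\sqrt{\lambda _j})e_j(x)=\int _{\mathbb {R}}\widetilde{u}(x,t)\varphi _{\overline{\mu },k}(t)dt=\widetilde{u}_{k}(x)\).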

Concerning (3.40), by \(\Vert \varphi _{\overline{\mu },k}\Vert _{L^1(\mathbb {R})}=1\), the Schwarz inequality, (2.4) and (3.25), we have

$$\begin{aligned} \Vert v_k(\cdot ,0)\Vert ^2_{L^2\left( B_{r_0}\right) }\!=\!\int _{B_{r_0}} \left| \widetilde{u}_{k}(x)\right| ^{2}dx\le \int _{-\lambda \left( \overline{\mu } \!+\!1\right) /4}^{\lambda \left( \overline{\mu } +1\right) /4}\left( \int _{B_{r_0}}\left| u(x,t)\right| ^{2}dx\right) \varphi _{\overline{\mu },k}(t)dt\le \varepsilon ^2. \end{aligned}$$

Concerning (3.43), first observe that by the definition of \(g_k\) we have that \(g''_k (y\sqrt{\lambda _j})-g_k(y\sqrt{\lambda _j})=0\), for \(|y|\sqrt{\lambda _j}\le k\) and \(\vert g''_k(y\sqrt{\lambda _j})-g_k(y\sqrt{\lambda _j}) \vert \le ce^{2k}\), for \(|y|\sqrt{\lambda _j}\ge k\). Hence, taking into account (3.16) and (3.21), we have, for every \(y\in \mathbb {R}\) and for every \(k\in \mathbb {N}\),

$$\begin{aligned}&\left| g''_k(y\sqrt{\lambda _j})-g_k(y\sqrt{\lambda _j}) \right| \left| \widehat{\varphi }_{\overline{\mu }, k}(\sqrt{\lambda _j}) \right| \le c e^{2k}\left| \widehat{\vartheta }(k^{-1}\sqrt{\lambda _j}) \right| ^k \chi _{\{y:|y|\sqrt{\lambda _j}\ge k\}}\nonumber \\&\quad \le c e^{2k}\sup \left\{ \left| \widehat{\vartheta }(k^{-1}\sqrt{\lambda _j}) \right| ^k : |y|\sqrt{\lambda _j}\ge k\right\} \le c e^{2k} \min \left\{ 1,\left( 4\pi \lambda ^{-1}|y|\right) ^{2k}\right\} .\nonumber \\ \end{aligned}$$
(3.50)

By (3.42) and (3.50) we have

$$\begin{aligned} \Vert f_{k}(\cdot ,y)\Vert _{L^2\left( B_2\right) }\le c e^{2k} \min \left\{ 1,\left( 4\pi \lambda ^{-1}|y|\right) ^{2k}\right\} \left( \sum _{j=1}^{\infty }\lambda ^2_j\alpha ^2_j\right) ^{1/2},\quad \text{ for } \text{ every } \; y\in \mathbb {R}. \end{aligned}$$

By the above inequality and by (3.7) we obtain (3.43). \(\square \)

Second step.

Recall that \(\widetilde{B}_r\) denotes the ball of \(\mathbb {R}^{n+1}\) of radius r centered at 0. In order to prove Proposition 3.6 stated below we need the following lemma.

Lemma 3.5

Let r be a positive number and let \(w\in H^2(\widetilde{B}_r)\) be a solution to the problem

$$\begin{aligned} \left\{ \begin{array}{ll} q(x)\partial ^2_{y}w(x,y)+\text{ div }\left( A(x)\nabla _x w(x,y)\right) =0, &{}\quad \hbox {in }\; \widetilde{B}_r, \\ \partial _{y}w(\cdot ,0)=0 , &{}\quad \hbox {in }\; B_r, \end{array}\right. \end{aligned}$$
(3.51)

where A satisfies (2.1) and q satisfies (2.2).

Then there exist \(\beta \in (0,1)\) and \(C\ge 1\) depending on \(\lambda \) and \(\Lambda \) only such that

$$\begin{aligned} \int _{\widetilde{B}_{r/4}}w^2dxdy\le C\left( \int _{\widetilde{B}_{r}}w^2dxdy\right) ^{1-\beta }\left( r\int _{B_{r/2}}w^2(x,0)dx\right) ^{\beta }. \end{aligned}$$
(3.52)

Proof

After scaling, we may assume \(r=1\). By [4, Theorem 1.7] we have

$$\begin{aligned} \Vert w\Vert _{L^2\left( \widetilde{B}_{1/4}\right) }\le C\left( \Vert w\Vert _{L^2\left( \widetilde{B}_{1}\right) }\right) ^{1-\widetilde{\beta }}\left( \Vert w\Vert _{H^{1/2}\left( B_{1/2}\right) }\right) ^{\widetilde{\beta }}, \end{aligned}$$
(3.53)

where C and \(\widetilde{\beta }\in (0,1)\) depend on \(\lambda \) and \(\Lambda \) only. Now, by the interpolation inequality, the trace inequality and standard regularity estimates for elliptic equations [21] we have

$$\begin{aligned} \Vert w\Vert _{H^{1/2}\left( B_{1/2}\right) }\le & {} C \Vert w\Vert ^{2/3}_{L^2\left( B_{1/2}\right) }\Vert w\Vert ^{1/3}_{H^{3/2}\left( B_{1/2}\right) }\le C\Vert w\Vert ^{2/3}_{L^2\left( B_{1/2}\right) }\Vert w\Vert ^{1/3}_{H^{2}\left( \widetilde{B}_{3/4}\right) } \nonumber \\\le & {} C'\Vert w\Vert ^{2/3}_{L^2\left( B_{1/2}\right) }\Vert w\Vert ^{1/3}_{L^{2}\left( \widetilde{B}_{1}\right) }, \end{aligned}$$
(3.54)

where \(C'\) depends on \(\lambda \) and \(\Lambda \) only. By (3.53) and (3.54) we get (3.52) with \(\beta =\frac{2\widetilde{\beta }}{3}\). \(\square \)

Proposition 3.6

Let \(v_{k}\) be defined in (3.38) and let \(r_0\le \frac{\lambda }{8}\). Then we have

$$\begin{aligned} \Vert v_{k}\Vert _{L^2\left( \widetilde{B}_{r_0/4}\right) }\le C\sqrt{r_0}\left( \varepsilon +H\left( C_0r_0\right) ^{2k}\right) ^{\beta }\left( He^{2k}+H\left( C_0r_0\right) ^{2k}\right) ^{1-\beta }. \end{aligned}$$
(3.55)

where \(\beta \in (0,1)\) and C depend on \(\lambda \) and \(\Lambda \) only, and \(C_0=4\pi e \lambda ^{-1}\).

Proof

Let \(w_{k}\in H^2\left( \widetilde{B}_{r_0}\right) \) be the solution to the following Dirichlet problem

$$\begin{aligned} \left\{ \begin{array}{ll} q(x)\partial ^2_{y}w_{k}+\text{ div }\left( A(x)\nabla _x w_{k}\right) =f_{k}, &{}\quad \hbox {in }\; \widetilde{B}_{r_0},\\ w_{k}=0, &{}\quad \hbox {on }\; \partial \widetilde{B}_{r_0}. \end{array}\right. \end{aligned}$$
(3.56)

Notice that, since \(f_{k}\) is an even function with respect to y, by the uniqueness for the Dirichlet problem (3.56) we have that \(w_{k}\) is an even function with respect to y.

By standard regularity estimates we have

$$\begin{aligned} \Vert w_{k}\Vert _{L^2\left( \widetilde{B}_{r_0}\right) }+r_0\Vert \nabla _{x,y}w_{k}\Vert _{L^2\left( \widetilde{B}_{r_0}\right) }\le Cr_0^2\Vert f_{k}\Vert _{L^2\left( \widetilde{B}_{r_0}\right) }, \end{aligned}$$
(3.57)

where C depends on \(\lambda \) only. By the above inequality and by the trace inequality we get

$$\begin{aligned} \Vert w_{k}(\cdot ,0)\Vert _{L^2\left( B_{r_0/2}\right) }\le & {} C\left( r^{-1/2}_0\Vert w_{k}\Vert _{L^2\left( \widetilde{B}_{r_0}\right) }+r^{1/2}_0\Vert \nabla _{x,y}w_{k}\Vert _{L^2\left( \widetilde{B}_{r_0}\right) }\right) \nonumber \\\le & {} C r^{3/2}_0\Vert f_{k}\Vert _{L^2\left( \widetilde{B}_{r_0}\right) }, \end{aligned}$$
(3.58)

where C depends on \(\lambda \) only.

Now, denoting

$$\begin{aligned} z_{k}=v_{k}-w_{k}, \end{aligned}$$
(3.59)

by (3.43), (3.40), (3.57) and (3.58) we have

$$\begin{aligned} \Vert z_{k}(\cdot ,0)\Vert _{L^2\left( B_{r_0/2}\right) }\le \varepsilon +Cr^2_0H\left( C_0r_0\right) ^{2k}, \end{aligned}$$
(3.60)

and

$$\begin{aligned} \Vert z_{k}\Vert _{L^2\left( \widetilde{B}_{r_0}\right) }\le Cr^{1/2}_0H\left( e^{2k}+r^2_0\left( C_0r_0\right) ^{2k}\right) , \end{aligned}$$
(3.61)

where C depends on \(\lambda \) only.

Now by (3.56) we have

$$\begin{aligned} \left\{ \begin{array}{ll} q(x)\partial ^2_{y}z_{k}+\text{ div }\left( A(x)\nabla _x z_{k}\right) =0, &{}\quad \hbox {in }\; \widetilde{B}_{r_0},\\ \partial _y z_{k}(\cdot ,0)=0, &{}\quad \hbox {on }\; B_{r_0}, \end{array}\right. \end{aligned}$$

hence, by applying Lemma 3.5 to the function \(z_{k}\) and by using (3.42), (3.59), (3.60) and (3.61), the conclusion follows. \(\square \)
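
More explicitly (a sketch, with constants that may change from step to step): applying (3.52) to \(z_{k}\) with \(r=r_0\) gives

$$\begin{aligned} \Vert v_{k}\Vert _{L^2\left( \widetilde{B}_{r_0/4}\right) }\le \Vert z_{k}\Vert _{L^2\left( \widetilde{B}_{r_0/4}\right) }+\Vert w_{k}\Vert _{L^2\left( \widetilde{B}_{r_0/4}\right) }\le C\Vert z_{k}\Vert ^{1-\beta }_{L^2\left( \widetilde{B}_{r_0}\right) }\left( \sqrt{r_0}\,\Vert z_{k}(\cdot ,0)\Vert _{L^2\left( B_{r_0/2}\right) }\right) ^{\beta }+Cr_0^{5/2}H\left( C_0r_0\right) ^{2k}, \end{aligned}$$

where the bound on \(\Vert w_{k}\Vert \) follows from (3.57) and (3.43); inserting (3.60) and (3.61) into the right hand side and using \(r_0\le 1\), one obtains (3.55).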

In order to prove Theorem 2.1 we use a Carleman estimate with singular weight, first proved in [6]. In order to control the dependence of the various constants, we use here the following version of such a Carleman estimate, which was proved, in the context of parabolic operators, in [17]. First we introduce some notation. Let P be the elliptic operator

$$\begin{aligned} P:=q(x)\partial ^2_{y}+\text{ div }\left( A(x)\nabla _x \right) . \end{aligned}$$
(3.62)

Denote

$$\begin{aligned} \sigma (x,y)=\left( A^{-1}(0)x\cdot x+\left( q(0)\right) ^{-1}y^2\right) ^{1/2}, \end{aligned}$$
(3.63)
$$\begin{aligned} \widetilde{B}^{\sigma }_{r}= \left\{ (x,y)\in \mathbb {R}^{n+1}: \sigma (x,y)\le r\right\} , \quad \hbox {for } r>0. \end{aligned}$$
(3.64)

Notice that

$$\begin{aligned} \widetilde{B}^{\sigma }_{\sqrt{\lambda } r}\subset \widetilde{B}_r \subset \widetilde{B}^{\sigma }_{r/\sqrt{\lambda }}, \quad \hbox {for every }r>0. \end{aligned}$$
(3.65)

Theorem 3.7

Let P be the operator (3.62) and assume that (2.1) and (2.2) are satisfied. There exists a constant \(C_{*}>1\) depending on \(\lambda \) and \(\Lambda \) only such that, denoting

$$\begin{aligned} \phi (s)=s\exp \left( \int ^s_0\frac{e^{-C_{*}\eta }-1}{\eta }d\eta \right) , \end{aligned}$$
(3.66a)
$$\begin{aligned} \delta (x,y)=\phi \left( \sigma (x,y)/2\sqrt{\lambda }\right) , \end{aligned}$$
(3.66b)

for every \(\tau \ge C_{*}\) and \(U\in C^{\infty }_0\left( \widetilde{B}^{\sigma }_{2\sqrt{\lambda }/C_{*}}{\setminus }\{0\}\right) \) we have

$$\begin{aligned}&\tau \int _{\mathbb {R}^{n+1}}\delta ^{1-2\tau }(x,y)\left| \nabla _{x,y}U\right| ^2 dxdy+ \tau ^3\int _{{\mathbb {R}^{n+1}}}\delta ^{-1-2\tau }(x,y) U^2 dxdy\nonumber \\&\quad \le C_{*}\int _{{\mathbb {R}^{n+1}}}\delta ^{2-2\tau }(x,y) \left| PU\right| ^2dxdy. \end{aligned}$$
(3.67)

Conclusion of the proof of Theorem 2.1

Set

$$\begin{aligned} r_1=\frac{\sqrt{\lambda } r_0}{16} \end{aligned}$$

By (3.55) and (3.65) we have

$$\begin{aligned} \Vert v_{k}\Vert _{L^2\left( \widetilde{B}^{\sigma }_{4r_1}\right) }\le C \sqrt{r_1} S_k, \end{aligned}$$
(3.68)

where C depends on \(\lambda \) and \(\Lambda \) only and

$$\begin{aligned} S_k=\left( \varepsilon +H\left( C_1r_1\right) ^{2k}\right) ^{\beta }\left( He^{2k}+H\left( C_1r_1\right) ^{2k}\right) ^{1-\beta }, \end{aligned}$$
(3.69)

where \(C_1=16C_0/\sqrt{\lambda }\) and \(C_0\) is the constant introduced in Proposition 3.6.

Denote

$$\begin{aligned} \delta _0(r):=\phi (r/2\sqrt{\lambda }),\quad \hbox {for every }\; r>0 \end{aligned}$$

and

$$\begin{aligned} R=\frac{\sqrt{\lambda }}{C_{*}}. \end{aligned}$$

Let us consider a function \(h\in C^2_0\left( 0, \delta _0\left( 2R\right) \right) \) such that \(0\le h\le 1\) and

$$\begin{aligned}&h(s)=1 ,\quad \hbox {for every } s\in \left[ \delta _0\left( 2r_1\right) , \delta _0\left( R\right) \right] ,\\&h(s)=0, \quad \hbox {for every } s\in \left[ 0,\delta _0\left( r_1\right) \right] \cup \left[ \delta _0\left( 3R/2\right) , \delta _0\left( 2R\right) \right] ,\\&r_1\left| h'(s)\right| +r_1^2\left| h''(s)\right| \le c, \quad \hbox {for every } s\in \left[ \delta _0\left( r_1\right) , \delta _0\left( 2r_1\right) \right] ,\\&\left| h'(s)\right| +\left| h''(s)\right| \le c, \quad \hbox {for every } s\in \left[ \delta _0\left( R\right) , \delta _0\left( 3R/2\right) \right] , \end{aligned}$$

where c is an absolute constant.

Moreover, let us define

$$\begin{aligned} \zeta (x,y)=h\left( \delta (x,y)\right) . \end{aligned}$$

Notice that if \(2r_1\le \sigma (x,y)\le R\) then \(\zeta (x,y)=1\) and if \(\sigma (x,y)\ge 2R\) or \(\sigma (x,y)\le r_1\) then \(\zeta (x,y)=0\).
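
Let us also record the bounds on the cutoff that will be used in estimating \(I_2\) and \(I_3\) below. Since \(\zeta =h(\delta )\), we have \(\nabla _{x,y}\zeta =h'(\delta )\nabla _{x,y}\delta \) and \(P\zeta =h''(\delta )\left( q\,(\partial _y\delta )^2+A\nabla _x\delta \cdot \nabla _x\delta \right) +h'(\delta )P\delta \); moreover, by (2.1), (2.2), (3.63) and (3.66) one can check that \(|\nabla _{x,y}\delta |\le C\) and \(|P\delta |\le C/\sigma \). Hence, by the properties of h,

$$\begin{aligned} \left| \nabla _{x,y}\zeta \right| \le \frac{C}{r_1},\quad \left| P\zeta \right| \le \frac{C}{r_1^2}\quad \text{ on } \; \left\{ r_1\le \sigma \le 2r_1\right\} ,\qquad \left| \nabla _{x,y}\zeta \right| +\left| P\zeta \right| \le C\quad \text{ on } \; \left\{ R\le \sigma \le 3R/2\right\} , \end{aligned}$$

with C depending on \(\lambda \) and \(\Lambda \) only; outside these two annuli \(\zeta \) is constant, so that \(\nabla _{x,y}\zeta \) and \(P\zeta \) vanish there.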

For the sake of brevity, in what follows we shall write v and f for \(v_{k}\) and \(f_k\), respectively. By density, we can apply (3.67) to the function \(U=\zeta v\) and we have, for every \(\tau \ge C_{*}\),

$$\begin{aligned}&\tau \int _{\widetilde{B}^{\sigma }_{2R}}\delta ^{1-2\tau }(x,y)\left| \nabla _{x,y}\left( \zeta v\right) \right| ^2+ \tau ^3\int _{\widetilde{B}^{\sigma }_{2R}}\delta ^{-1-2\tau }(x,y) \left| \zeta v\right| ^2 \nonumber \\&\quad \le C\int _{\widetilde{B}^{\sigma }_{2R}}\delta ^{2-2\tau }(x,y) \left| f\right| ^2 \zeta ^2+C\int _{\widetilde{B}^{\sigma }_{2R}}\delta ^{2-2\tau }(x,y) \left| P\zeta \right| ^2 v^2 \nonumber \\&\quad \quad +\,C\int _{\widetilde{B}^{\sigma }_{2R}}\delta ^{2-2\tau }(x,y) \left| \nabla _{x,y} v\right| ^2\left| \nabla _{x,y}\zeta \right| ^2:=I_1+I_2+I_3, \end{aligned}$$
(3.70)

where C depends on \(\lambda \) and \(\Lambda \) only.

Estimate of \(I_1\).

Notice that

$$\begin{aligned} \frac{\sqrt{|x|^2+y^2}}{2C_2}\le \delta (x,y)\le \frac{C_2\sqrt{|x|^2+y^2}}{2} \quad \text{ for } \text{ every } (x,y)\in \widetilde{B}_2, \end{aligned}$$
(3.71)

where \(C_2>1\) depends on \(\lambda \) and \(\Lambda \) only.

By (3.43), (3.65) and (3.71) we have

$$\begin{aligned}&\int _{\widetilde{B}^{\sigma }_{2\sqrt{\lambda }/C_{*}}}\delta ^{2-2\tau }(x,y) \left| f\right| ^2 \zeta ^2dxdy\le \int _{\widetilde{B}_{2}}\left( 2C_2 |y|^{-1}\right) ^{-2+2\tau } \left| f\right| ^2 dxdy\nonumber \\&\quad \le \int ^{2}_{-2}\left[ \left( 2C_2 |y|^{-1}\right) ^{-2+2\tau }\int _{B_2}\left| f(x,y)\right| ^2 dx\right] dy\le CH^2 \int ^{2}_{-2}\left( 2C_2 |y|^{-1}\right) ^{-2+2\tau }\left( C_0|y|\right) ^{4k}dy,\nonumber \\ \end{aligned}$$
(3.72)

where C depends on \(\lambda \) and \(\Lambda \) only.

Now let k and \(\tau \) satisfy the relation

$$\begin{aligned} \frac{\tau -1}{2}\le k. \end{aligned}$$
(3.73)

By (3.72) and (3.73) we get

$$\begin{aligned} I_1\le C H^2 \left( C_3\right) ^{4k}, \end{aligned}$$
(3.74)

where \(C_3=2 C_0 C_2\).
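
The last integral in (3.72) is estimated as follows: since by (3.73) the exponent \(4k+2-2\tau \) is nonnegative and \(C_2\ge 1\),

$$\begin{aligned} \int ^{2}_{-2}\left( 2C_2 |y|^{-1}\right) ^{-2+2\tau }\left( C_0|y|\right) ^{4k}dy=\left( 2C_2\right) ^{2\tau -2}C_0^{4k}\int ^{2}_{-2}|y|^{4k+2-2\tau }dy\le \left( 2C_2\right) ^{2\tau -2}C_0^{4k}\,2^{4k+4-2\tau }\le 4\left( 2C_0C_2\right) ^{4k}, \end{aligned}$$

which, together with (3.72), gives (3.74).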

Estimate of \(I_2\)

By (3.42) and (3.68) and (3.70) we have

$$\begin{aligned} I_2\le & {} Cr_1^{-4}\int _{\widetilde{B}^{\sigma }_{2r_1}{\setminus }\widetilde{B}^{\sigma }_{r_1}}\delta ^{2-2\tau }(x,y) v^2 dxdy +C\int _{\widetilde{B}^{\sigma }_{3R/2}{\setminus }\widetilde{B}^{\sigma }_{R}}\delta ^{2-2\tau }(x,y) v^2 dxdy\\\le & {} C\left( r_1^{-3}\delta ^{2-2\tau }_0(r_1)S^2_k+e^{4k}H^2\delta ^{2-2\tau }_0(R)\right) , \end{aligned}$$

hence (3.71) gives

$$\begin{aligned} I_2 \le C\left( \delta ^{-1-2\tau }_0(r_1)S^2_k+e^{4k}H^2\delta ^{-1-2\tau }_0(R)\right) . \end{aligned}$$
(3.75)

Estimate of \(I_3\)

By (3.70) we have

$$\begin{aligned} I_3 \le Cr_1^{-2}\delta ^{2-2\tau }_0(r_1)\int _{\widetilde{B}^{\sigma }_{2r_1}{\setminus }\widetilde{B}^{\sigma }_{r_1}} \left| \nabla _{x,y} v\right| ^2 dxdy+C\delta ^{2-2\tau }_0(R)\int _{\widetilde{B}^{\sigma }_{3R/2}{\setminus }\widetilde{B}^{\sigma }_{R}} \left| \nabla _{x,y} v\right| ^2 dxdy.\nonumber \\ \end{aligned}$$
(3.76)

Now, in order to estimate from above the right-hand side of (3.76), we use the Caccioppoli inequality, (3.42), (3.43) and (3.68), and we get

$$\begin{aligned}&I_3 \le C\delta ^{2-2\tau }_0(r_1)\left( r_1^{-4}\int _{\widetilde{B}^{\sigma }_{4r_1}{\setminus }\widetilde{B}^{\sigma }_{r_1/2}} v^2 dxdy+\int _{\widetilde{B}^{\sigma }_{4r_1}{\setminus }\widetilde{B}^{\sigma }_{r_1/2}} f^2 dxdy\right) \nonumber \\&\quad +\,C\delta ^{2-2\tau }_0(R)\int _{\widetilde{B}^{\sigma }_{3R/2}{\setminus }\widetilde{B}^{\sigma }_{R}} \left| \nabla _{x,y} v\right| ^2 dxdy\le C \left( S_k^2+H^2\left( C_1r_1\right) ^{4k}\right) \delta ^{-1-2\tau }_0(r_1)\nonumber \\&\quad +\,CH^2e^{4k}\delta ^{1-2\tau }_0(R):=\widetilde{I}_3 \end{aligned}$$
(3.77)

Now let \(r_1\le \frac{R}{2}\), let \(\rho \) be such that \(\frac{2r_1}{\sqrt{\lambda }}\le \rho \le \frac{R}{\sqrt{\lambda }}\) and set \(\widetilde{\rho }=\sqrt{\lambda }\rho \). By trivially estimating from below the left-hand side of (3.70) and taking into account (3.77) we have

$$\begin{aligned} \delta ^{1-2\tau }_0(\widetilde{\rho })\int _{\widetilde{B}^{\sigma }_{\widetilde{\rho }}{\setminus }\widetilde{B}^{\sigma }_{2r_1}}\left| \nabla _{x,y} v\right| ^2+ \delta ^{-1-2\tau }_0(\widetilde{\rho })\int _{\widetilde{B}^{\sigma }_{\widetilde{\rho }}{\setminus }\widetilde{B}^{\sigma }_{2r_1}}\left| v\right| ^2\le I_1+I_2+\widetilde{I}_3. \end{aligned}$$
(3.78)

Now let us add to both sides of (3.78) the quantity

$$\begin{aligned} \delta ^{1-2\tau }_0(\widetilde{\rho })\int _{\widetilde{B}^{\sigma }_{2r_1}}\left| \nabla _{x,y} v\right| ^2+ \delta ^{-1-2\tau }_0(\widetilde{\rho })\int _{\widetilde{B}^{\sigma }_{2r_1}} v^2, \end{aligned}$$

since this term can be estimated from above by \(\widetilde{I}_3\), by using standard estimates for second order elliptic equations and by taking into account that \(\delta _0(\widetilde{\rho })\ge \delta _0(r_1)\), we have

$$\begin{aligned} \rho ^2\int _{\widetilde{B}^{\sigma }_{\widetilde{\rho }}}\left| \nabla _{x,y} v\right| ^2+ \int _{\widetilde{B}^{\sigma }_{\widetilde{\rho }}} v^2 \le \delta ^{1+2\tau }_0(\widetilde{\rho }) \left( I_1+I_2+C\widetilde{I}_3\right) , \end{aligned}$$
(3.79)

where C depends on \(\lambda \) and \(\Lambda \) only.

Now by (3.71), (3.74), (3.75), (3.77) and (3.79) it is simple to derive that if (3.73) is satisfied then we have

$$\begin{aligned} \rho ^2\int _{\widetilde{B}_{\lambda \rho }}\left| \nabla _{x,y} v\right| ^2+ \int _{\widetilde{B}_{\lambda \rho }} v^2 \le C \left[ S^2_k\left( \frac{\delta _0(\widetilde{\rho })}{\delta _0(r_1)}\right) ^{1+2\tau }+H^2C_4^k\left( \frac{\delta _0(\widetilde{\rho })}{\delta _0(R)}\right) ^{1+2\tau }\right] ,\nonumber \\ \end{aligned}$$
(3.80)

where \(C_4>1\) depends on \(\lambda \) and \(\Lambda \) only.

Now, by applying a standard trace inequality and by recalling that \(v(\cdot ,0)=\widetilde{u}_{k}\) in \(B_2\) (where \(\widetilde{u}_{k}\) is defined by (3.36)) we have

$$\begin{aligned} \int _{B_{\lambda \rho /2}} \left| \widetilde{u}_{k}(\cdot ,0) \right| ^2\le C\rho ^{-1}\left[ S^2_k\left( \frac{\delta _0(\widetilde{\rho })}{\delta _0(r_1)}\right) ^{1+2\tau }+H^2C_4^k\left( \frac{\delta _0(\widetilde{\rho })}{\delta _0(R)}\right) ^{1+2\tau }\right] .\qquad \end{aligned}$$
(3.81)

By Proposition 3.3, by (3.69) and (3.81) we have, for \(r_1\le \frac{R}{2}\)

$$\begin{aligned}&\rho \int _{B_{\lambda \rho /2}} \left| u(\cdot ,0) \right| ^2\le C \left( H_{k,\tau }+H^2k^{-1/3}\right) \nonumber \\&\quad +\,C \left[ C_5^{k}\left( \frac{\delta _0(\widetilde{\rho })}{\delta _0(r_1)}\right) ^{1+2\tau }H^{2(1-\beta )} \varepsilon ^{2\beta }+H^2C_4^k\left( \frac{\delta _0(\widetilde{\rho })}{\delta _0(R)}\right) ^{1+2\tau } \right] , \end{aligned}$$
(3.82)

where

$$\begin{aligned} H_{k,\tau }:=H^2 \left( \frac{\delta _0(\widetilde{\rho })}{\delta _0(r_1)}\right) ^{1+2\tau }C_5^{k}r_1^{4\beta k}, \end{aligned}$$

and C, \(C_5\) depend on \(\lambda , \Lambda \) only.

Now let us choose \(\tau =\frac{4\beta k-1}{2}\). Then (3.73) is satisfied and, by (3.71) and (3.82), there exist constants \(C_6>1\) and \(k_0\) depending on \(\lambda \) and \(\Lambda \) only such that for every \(k\ge k_0\) we have

$$\begin{aligned} \rho \int _{B_{\lambda \rho /2}} \left| u(\cdot ,0) \right| ^2\le C_6 H_1^2\left[ \left( C_6\rho r_1^{-1}\right) ^{4\beta k}\varepsilon _1^{2\beta }+\left( C_6\rho \right) ^{4\beta k}+k^{-1/3}\right] , \end{aligned}$$
(3.83)

where

$$\begin{aligned} H_1:=H+e\varepsilon \quad \text{ and } \varepsilon _1:=\frac{\varepsilon }{H+e\varepsilon }. \end{aligned}$$

Now, let us denote by

$$\begin{aligned} \overline{k}:= \left[ \frac{\log \varepsilon _1}{2\log r_1}\right] +1, \end{aligned}$$

where, for any \(s\in \mathbb {R}\), we set \([s]:=\max \left\{ p\in \mathbb {Z}:p \le s\right\} \). If \(\overline{k}\ge k_0\) we choose \(k=\overline{k}\) so that by (3.83) we have, for \(\rho \le 1/C_6\),

$$\begin{aligned} \rho \int _{B_{\lambda \rho /2}} \left| u(\cdot ,0) \right| ^2\le C_6 H_1^2\left( \varepsilon _1^{2\beta \theta _0}+\left( \frac{2\log (1/r_1)}{\log (1/\varepsilon _1)}\right) ^{1/3}\right) , \end{aligned}$$
(3.84)

where

$$\begin{aligned} \theta _0=\frac{\log (1/C_6\rho )}{2\log (1/r_1)}. \end{aligned}$$
(3.85)
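
Let us point out the two elementary facts behind the passage from (3.83) to (3.84) with the choice \(k=\overline{k}\) (we do not reproduce here the analogous treatment of the first term of (3.83)): since \(\overline{k}\ge \frac{\log (1/\varepsilon _1)}{2\log (1/r_1)}\), \(\varepsilon _1<1\) and \(C_6\rho \le 1\), we have

$$\begin{aligned} \left( C_6\rho \right) ^{4\beta \overline{k}}\le \exp \left( -4\beta \,\frac{\log (1/\varepsilon _1)}{2\log (1/r_1)}\,\log \frac{1}{C_6\rho }\right) =\varepsilon _1^{4\beta \theta _0}\le \varepsilon _1^{2\beta \theta _0} \qquad \text{ and }\qquad \overline{k}^{-1/3}\le \left( \frac{2\log (1/r_1)}{\log (1/\varepsilon _1)}\right) ^{1/3}. \end{aligned}$$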

Otherwise, if \(\overline{k} < k_0\), then multiplying both sides of this inequality by \(\log (1/C_6\rho )\) and using (3.85) we get \(\theta _0 \log (1/\varepsilon _1)\le k_0 \log (1/C_6\rho )\). Hence

$$\begin{aligned} (H+e\varepsilon )^{2\beta \theta _0}\le (C_6\rho )^{-2\beta k_0}\varepsilon ^{2\beta \theta _0}. \end{aligned}$$

By this inequality and by (2.5) we have trivially

$$\begin{aligned} \int _{B_{\lambda \rho /2}} \left| u(\cdot ,0) \right| ^2\le (H+e\varepsilon )^2= & {} (H+e\varepsilon )^{2(1-\beta \theta _0)}(H+e\varepsilon )^{2\beta \theta _0}\nonumber \\\le & {} (H+e\varepsilon )^{2(1-\beta \theta _0)}(C_6\rho )^{-2\beta k_0}\varepsilon ^{2\beta \theta _0}. \end{aligned}$$
(3.86)

Finally by (3.84) and (3.86) we obtain (2.6). \(\square \)

3.2 Proof of Theorem 2.3

First, let us assume \(A(0)=I\), where I is the \(n\times n\) identity matrix. Following the arguments of [1] or [3], there exist \(\rho _1, \rho _2\in (0,\rho _0]\), with \(\frac{\rho _1}{\rho _0},\frac{\rho _2}{\rho _0}\) depending on \(\lambda ,\Lambda , E\) only, and a function \(\Phi \in C^{1,1}(\overline{B}_{\rho _2}(0),\mathbb {R}^n)\) such that

$$\begin{aligned} \Phi \left( B_{\rho _2}\right) \subset B_{\rho _1}, \end{aligned}$$
(3.87a)
$$\begin{aligned} \Phi (y',0)=(y',\phi (y')),\quad \hbox {for every } y'\in B^{\prime }_{\rho _2}, \end{aligned}$$
(3.87b)
$$\begin{aligned} \Phi \left( B^+_{\rho _2}\right) \subset K_{\rho _1}, \end{aligned}$$
(3.87c)
$$\begin{aligned} C_1^{-1} |y-z|\le |\Phi (y)-\Phi (z)|\le C_1|y-z|,\quad \hbox {for every } y,z\in B_{\rho _2}, \end{aligned}$$
(3.87d)
$$\begin{aligned} C_2^{-1}\le |\text {det}D\Phi (y)|\le C_2,\quad \hbox {for every } y\in B_{\rho _2}, \end{aligned}$$
(3.87e)
$$\begin{aligned} |\text {det}D\Phi (y)-\text {det}D\Phi (z)|\le C_3|y-z|,\quad \hbox {for every } y,z\in B_{\rho _2}, \end{aligned}$$
(3.87f)

where \(C_1,C_2,C_3\ge 1\) depend on \(\lambda ,\Lambda , E\) only.
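For orientation only, and not as a substitute for the construction of [1, 3]: assuming that \(\phi \) is the function whose graph describes the boundary near the origin, with \(\phi (0)=0\), and that \(K_{\rho _1}\) denotes the portion of \(B_{\rho _1}\) lying above this graph, the elementary graph map

$$\begin{aligned} \Phi _0(y',y_n):=\left( y',\phi (y')+y_n\right) \end{aligned}$$

already satisfies \(\Phi _0(y',0)=(y',\phi (y'))\), maps \(B^{+}_{\rho _2}\) into \(\{x_n>\phi (x')\}\), is bi-Lipschitz with constants depending only on the Lipschitz constant of \(\phi \), and has \(\text {det}D\Phi _0\equiv 1\), so that (3.87a)–(3.87f) hold for it provided \(\rho _2\) is small enough. The refined construction of [1, 3] is needed to obtain, in addition, the structure (3.89b) below for the transformed matrix.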

Denoting

$$\begin{aligned} \overline{A}(y)=|\text {det}D\Phi (y)|(D\Phi ^{-1})(\Phi (y)) A(\Phi (y))(D\Phi ^{-1})^{*}(\Phi (y)), \end{aligned}$$
$$\begin{aligned} v(y,t)=u(\Phi (y),t), \end{aligned}$$
(3.88)

we have

$$\begin{aligned} \overline{A}(0)=I \end{aligned}$$
(3.89a)
$$\begin{aligned} \overline{a}^{nk}(y',0)=\overline{a}^{kn}(y',0)=0,\quad k=1,\ldots ,n-1. \end{aligned}$$
(3.89b)
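The role of the Jacobian factor in (3.88) can be read off from the weak formulation (a sketch: take a test function \(\psi (y,t)\) compactly supported in \(B^{+}_{\rho _2}\times (-\lambda \rho _2,\lambda \rho _2)\) and set \(w(x,t):=\psi (\Phi ^{-1}(x),t)\)); the change of variables \(x=\Phi (y)\) gives

$$\begin{aligned} \int _{\Phi (B^{+}_{\rho _2})} A(x)\nabla _x u\cdot \nabla _x w\,dx=\int _{B^{+}_{\rho _2}}\overline{A}(y)\nabla _y v\cdot \nabla _y\psi \,dy,\qquad \int _{\Phi (B^{+}_{\rho _2})}\partial ^2_t u\,w\,dx=\int _{B^{+}_{\rho _2}}|\text {det}D\Phi (y)|\,\partial ^2_t v\,\psi \,dy, \end{aligned}$$

which is why the Jacobian weight (evenly reflected in \(y_n\)) appears as \(\widetilde{q}\) in the Eq. (3.90) below.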

Moreover, we have that the ellipticity and Lipschitz constants of \(\overline{A}\) depend on \(\lambda ,\Lambda , E\) only. For every \(y\in B_{\rho _2}(0)\), let us denote by \(\tilde{A}(y)=\{\tilde{a}^{ij}(y)\}_{i,j=1}^n\) the matrix with entries given by

$$\begin{aligned} \tilde{a}^{ij}(y',y_n)=\overline{a}^{ij}(y',|y_n|),\quad \hbox {if either }\; i,j\in \{1,\ldots ,n-1\},\quad \hbox {or }\; i=j=n, \end{aligned}$$
$$\begin{aligned} \tilde{a}^{nj}(y',y_n)=\tilde{a}^{jn}(y',y_n)=\text {sgn}(y_n)\overline{a}^{nj}(y',|y_n|),\quad \hbox {if } 1\le j\le n-1. \end{aligned}$$

We have that \(\tilde{A}\) satisfies the same ellipticity and Lipschitz continuity conditions as \(\overline{A}\).
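A short check of this (a sketch, with \(J:=I-2e_n\otimes e_n\) the reflection changing the sign of the last coordinate): for \(y_n<0\) the definition above gives \(\tilde{A}(y)=J\,\overline{A}(y',-y_n)\,J\), so that

$$\begin{aligned} \tilde{A}(y)\zeta \cdot \zeta =\overline{A}(y',-y_n)(J\zeta )\cdot (J\zeta ),\qquad |J\zeta |=|\zeta |, \end{aligned}$$

and the ellipticity bounds are preserved. The Lipschitz bound across \(\{y_n=0\}\) uses (3.89b): for \(1\le j\le n-1\) and \(y_n>0>z_n\),

$$\begin{aligned} \left| \tilde{a}^{nj}(y)-\tilde{a}^{nj}(z)\right| \le \left| \overline{a}^{nj}(y',y_n)-\overline{a}^{nj}(y',0)\right| +\left| \overline{a}^{nj}(z',-z_n)-\overline{a}^{nj}(z',0)\right| \le \Lambda '\left( y_n-z_n\right) \le \Lambda '|y-z|, \end{aligned}$$

where \(\Lambda '\) denotes the Lipschitz constant of \(\overline{A}\).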

Now, if u satisfies the boundary condition (2.13) then we define

$$\begin{aligned} U(y,t)=\text {sgn}(y_n)v(y',|y_n|,t),\quad \hbox {for } (y,t)\in B_{\rho _2}\times (-\lambda \rho _2,\lambda \rho _2), \end{aligned}$$
$$\begin{aligned} \widetilde{q}(y)=|\text {det}D\Phi (y',|y_n|)|,\quad \hbox {for } y\in B_{\rho _2}, \end{aligned}$$

With these definitions, \(U\in \mathcal {W}\left( (-\lambda \rho _2,\lambda \rho _2);B_{\rho _2}\right) \) is a solution to

$$\begin{aligned} \widetilde{q}(y)\partial ^2_{t}U-\text{ div }\left( \widetilde{A}(y)\nabla U\right) =0, \quad \hbox {in } B_{\rho _2}\times (-\lambda \rho _2,\lambda \rho _2). \end{aligned}$$
(3.90)

Moreover, by (3.87d) we have that

$$\begin{aligned} K_{r/C_1}\subset \Phi \left( B^+_{r}\right) \subset K_{C_1r},\quad \hbox {for every }\; r\le \rho _2. \end{aligned}$$

Now we can apply Theorem 2.1 to the function U and then, by simple changes of variables in the integrals, we obtain (2.17). In the general case \(A(0)\ne I\) we can consider a linear transformation \(G:\mathbb {R}^n\rightarrow \mathbb {R}^n\) such that, setting \(A'(Gx)=\frac{GA(x)G^{*}}{\text {det}G}\), we have \(A'(0)=I\). Therefore, noticing that

$$\begin{aligned} B_{\sqrt{\lambda } r}\subset G\left( B_{r}\right) \subset B_{\sqrt{\lambda ^{-1}} r},\quad \hbox {for every } r>0, \end{aligned}$$

it is a simple matter to get (2.17) in the general case.

If u satisfies the boundary condition (2.14) then we define

$$\begin{aligned} V(y,t)=v(y',|y_n|,t),\quad \hbox {for } (y,t)\in B_{\rho _2}\times (-\lambda \rho _2,\lambda \rho _2), \end{aligned}$$

and we get that V is a solution to (2.12). Therefore, arguing as before, we obtain (2.17) again.\(\square \)

4 Concluding remark: a first order perturbation

In this section we outline the proof of an extension of Theorems 2.1 and 2.3 to solutions of the equation

$$\begin{aligned} q(x)\partial ^2_{t}u-Lu=0, \quad \hbox {in } B_{\rho _0}\times (-\lambda \rho _0,\lambda \rho _0), \end{aligned}$$
(4.1)

where

$$\begin{aligned} Lu=\text{ div }\left( A(x)\nabla _x u\right) +b(x)\cdot \nabla _x u+c(x)u, \end{aligned}$$
(4.2)

and A, q satisfy (2.1), (2.2), \(b=(b^1,\ldots ,b^n)\) with \(b^j\in C^{0,1}(\mathbb {R}^n)\), \(c\in L^{\infty }(\mathbb {R}^n)\), and b(x), c(x) are real-valued. Moreover, we assume

$$\begin{aligned} \left| b(x)\right| \le \lambda ^{-1}\rho _0^{-1}, \quad \hbox {for every } x\in \mathbb {R}^n, \end{aligned}$$
(4.3a)
$$\begin{aligned} \left| b(x)-b(y)\right| \le \frac{\Lambda }{\rho ^2_0} \left| x-y \right| , \quad \hbox {for every } x, y\in \mathbb {R}^n. \end{aligned}$$
(4.3b)

and

$$\begin{aligned} \left| c(x)\right| \le \lambda ^{-1}\rho _0^{-2}, \quad \hbox {for every } x\in \mathbb {R}^n. \end{aligned}$$
(4.4)

In what follows we assume \(\rho _0=1\).

First of all we consider the case in which

$$\begin{aligned} b\equiv 0 \end{aligned}$$
(4.5)

and we set

$$\begin{aligned} L_0u=\text{ div }\left( A(x)\nabla _x u\right) +c(x)u. \end{aligned}$$
(4.6)

Let us denote by \(\lambda _j\), with \(\lambda _1\le \cdots \le \lambda _m\le 0<\lambda _{m+1}\le \cdots \le \lambda _j\le \cdots \), the eigenvalues associated with the problem

$$\begin{aligned} \left\{ \begin{array}{ll} L_0v+\omega q(x)v=0, &{}\quad \text {in }B_2,\\ v\in H^1_0\left( B_2\right) , \end{array}\right. \end{aligned}$$
(4.7)

and by \(e_j(\cdot )\) the corresponding eigenfunctions normalized by

$$\begin{aligned} \int _{B_2}e^2_j(x)q(x)dx=1. \end{aligned}$$
(4.8)

In this case the main difference with respect to the case considered above is the presence of the nonpositive eigenvalues \(\lambda _1\le \cdots \le \lambda _m\). In what follows we indicate the simple changes in the proof of Theorem 2.1 needed to obtain the same estimate (2.6) (with possibly different constants \(s_0\) and C). Let \(\varepsilon \) and H be the same as in (2.4) and (2.5).

As in the case \(c\equiv 0\), the proof can be reduced to the even part \(u_+\), with respect to t, of the solution u of Eq. (4.1). Moreover, denoting again by

$$\begin{aligned} \widetilde{u}(x,t):=\sum _{j=1}^{\infty }\alpha _j e_j(x)\cos \sqrt{\lambda _j} t, \end{aligned}$$
(4.9)

it is easy to check that instead of Proposition 3.1 we have

Proposition 4.1

We have

$$\begin{aligned} \sum _{j=1}^{\infty }\left( 1+|\lambda _j|+\lambda _j^2\right) \alpha ^2_j\le C H^2, \end{aligned}$$
(4.10)

where C depends on \(\lambda , \Lambda \) only. Moreover, \(\widetilde{u}\in \mathcal {W}\left( \mathbb {R};B_2\right) \cap C^0\left( \mathbb {R};H^2\left( B_2\right) \cap H^1_0\left( B_2\right) \right) \) is an even function with respect to variable t and it satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} q(x)\partial ^2_{t}\widetilde{u}-L_0 \widetilde{u}=0, &{}\quad \hbox {in } B_2\times \mathbb {R},\\ \widetilde{u}(\cdot ,0)=\widetilde{u}_0, &{}\quad \hbox {in } B_2,\\ \partial _t\widetilde{u}(\cdot ,0)=0, &{}\quad \hbox {in } B_2. \end{array}\right. \end{aligned}$$
(4.11)
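For the reader's convenience, the formal computation behind (4.11) is the following (a sketch, ignoring convergence issues, which are handled as in Sect. 3). By (4.7) we have \(L_0e_j=-\lambda _jq\,e_j\), hence

$$\begin{aligned} q(x)\partial ^2_{t}\widetilde{u}-L_0\widetilde{u}=\sum _{j=1}^{\infty }\alpha _j\left( -\lambda _jq(x)e_j(x)-L_0e_j(x)\right) \cos \left( \sqrt{\lambda _j}\,t\right) =0, \end{aligned}$$

and for \(j\le m\), since \(\lambda _j\le 0\), \(\cos \left( \sqrt{\lambda _j}\,t\right) =\cosh \left( \sqrt{|\lambda _j|}\,t\right) \) is real-valued and even in t, so that \(\widetilde{u}\) is indeed even in t and \(\partial _t\widetilde{u}(\cdot ,0)=0\).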

Similarly to (3.9), the uniqueness for the Cauchy problem for the equation \(q(x)\partial ^2_{t}u-L_0u=0\) implies

$$\begin{aligned} \widetilde{u}(x,t)=u_+(x,t), \quad \text{ for } |x|+\lambda ^{-1}|t|< 1. \end{aligned}$$

As in Sect. 3, we set

$$\begin{aligned} \widetilde{u}_k:=\widetilde{u}_{\overline{\mu },k}, \end{aligned}$$

where \(\overline{\mu }:=k^{-\frac{1}{6}}\), \(k\ge 1\) and \(\widetilde{u}_{\mu ,k}\) is defined by (3.25). In the present case we set, instead of (3.38),

$$\begin{aligned} v_{k}(x,y):=v^{(1)}_{k}(x,y)+v^{(2)}_{k}(x,y), \end{aligned}$$
(4.12)

where

$$\begin{aligned} v^{(1)}_{k}(x,y)=\sum _{j=1}^{m}\alpha _j \widehat{\varphi }_{\overline{\mu },k}\left( i\sqrt{|\lambda _j|}\right) \cos \left( \sqrt{|\lambda _j|}\,y\right) e_j(x),\quad \text{ for } \; (x,y)\in B_2\times \mathbb {R}, \end{aligned}$$
(4.13a)
$$\begin{aligned} v^{(2)}_{k}(x,y)=\sum _{j=m+1}^{\infty }\alpha _j \widehat{\varphi }_{\overline{\mu },k}\left( \sqrt{\lambda _j}\right) g_k\left( y\sqrt{\lambda _j}\right) e_j(x),\quad \text{ for } \; (x,y)\in B_2\times \mathbb {R}, \end{aligned}$$
(4.13b)

and \(g_k(z)\) is the same function introduced in Sect. 3; in particular, it satisfies (3.37).
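The cosine profile in (4.13a) is what makes the modes with nonpositive eigenvalues harmless. Formally (a sketch, written with the full operator \(L_0\) and ignoring convergence issues), for \(j\le m\) we have \(\lambda _j=-|\lambda _j|\) and \(L_0e_j=-\lambda _jq\,e_j\), hence

$$\begin{aligned} q(x)\partial ^2_{y}\left[ \cos \left( \sqrt{|\lambda _j|}\,y\right) e_j(x)\right] +L_0\left[ \cos \left( \sqrt{|\lambda _j|}\,y\right) e_j(x)\right] =-\left( |\lambda _j|+\lambda _j\right) q(x)\cos \left( \sqrt{|\lambda _j|}\,y\right) e_j(x)=0, \end{aligned}$$

which is why only the indices \(j\ge m+1\) appear in the right-hand side of the equation satisfied by \(v_k\) (see (4.16) below).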

Instead of Proposition 3.4 we have

Proposition 4.2

Let \(v_k\) be defined by (4.12). We have that \(v_{k}(\cdot ,y)\) belongs to \(H^2\left( B_2\right) \cap H_0^1\left( B_2\right) \) for every \(y\in \mathbb {R}\), \(v_{k}(x,y)\) is an even function with respect to y, and it satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} q(x)\partial ^2_{y}v_{k}+\text{ div }\left( A(x)\nabla _x v_{k}\right) =f_{k}(x,y), &{}\quad \hbox {in } B_2\times \mathbb {R},\\ v_{k}(\cdot ,0)=\widetilde{u}_{k},&{}\quad \hbox {in }\; B_2. \end{array}\right. \end{aligned}$$
(4.14)

and

$$\begin{aligned} \Vert v_{k}(\cdot ,0)\Vert _{L^2\left( B_{r_0}\right) }\le \varepsilon . \end{aligned}$$
(4.15)

where

$$\begin{aligned} f_{k}(x,y)=\sum _{j=m+1}^{\infty }\lambda _j\alpha _j \widehat{\varphi }_{\overline{\mu },k}\left( \sqrt{\lambda _j}\right) \left( g^{\prime \prime }_k\left( y\sqrt{\lambda _j}\right) - g_k\left( y\sqrt{\lambda _j}\right) \right) e_j(x). \end{aligned}$$
(4.16)

Moreover we have

$$\begin{aligned} \sum _{j=0}^{2}\Vert \partial ^{j}_yv_{k}(\cdot ,y)\Vert _{H^{2-j}\left( B_2\right) }\le Ce^{\lambda \sqrt{|\lambda _1|}}H e^{2k},\quad \text{ for } \text{ every } \; y\in \mathbb {R}, \end{aligned}$$
(4.17)
$$\begin{aligned} \Vert f_{k}(\cdot ,y)\Vert _{L^2(B_2)}\le CH e^{2k}\min \{1,(4\pi \lambda ^{-1}|y|)^{2k}\},\quad \text{ for } \text{ every } \; y\in \mathbb {R}, \end{aligned}$$
(4.18)

where C depends on \(\lambda \) and \(\Lambda \) only.

Instead of Proposition 3.6 we have

Proposition 4.3

Let \(v_{k}\) be defined in (4.12). Then there exists a constant c, \(0<c<1\), depending on \(\lambda \) only such that if \(r_0\le c\), we have

$$\begin{aligned} \Vert v_{k}\Vert _{L^2(\widetilde{B}_{r_0/4})}\le C\sqrt{r_0}e^{\lambda \sqrt{|\lambda _1|}}(\varepsilon +H(C_0r_0)^{2k})^{\beta }(He^{2k}+H(C_0r_0)^{2k})^{1-\beta }, \end{aligned}$$
(4.19)

where \(\beta \in (0,1)\) and C depend on \(\lambda \) and \(\Lambda \) only, and \(C_0=4\pi e \lambda ^{-1}\).

With Propositions 4.1, 4.2 and 4.3 at hand and by using the Carleman estimate (3.67), the proofs of estimates (2.6) and (2.17) are straightforward whenever (4.5) is satisfied.

In the more general case we use a well-known trick, see for instance [29], to transform Eq. (4.1) into a self-adjoint equation. Let z be a new variable and denote by \(A_{0}(x,z)=\{a_{0}^{ij}(x,z)\}^{n+1}_{i,j=1}\) the real-valued symmetric \((n+1)\times (n+1)\) matrix whose entries are defined as follows. Let \(\eta \in C^{1}(\mathbb {R})\) be a function such that \(\eta (z)=z\), for \(z\in (-1,1)\), and \(|\eta (z)|+|\eta '(z)|\le 2\lambda ^{-1}\), and set

$$\begin{aligned} a_{0}^{ij}(x,z)=a^{ij}(x),\quad \hbox {if }\; i,j\in \{1,\ldots ,n\}, \end{aligned}$$
$$\begin{aligned} a_{0}^{(n+1)j}(x,z)= a_{0}^{j(n+1)}(x,z)=\eta (z)b^j(x),\quad \hbox {if }\; 1\le j\le n, \end{aligned}$$
$$\begin{aligned} a_{0}^{(n+1)(n+1)}(x,z)=K_0, \end{aligned}$$

where \(K_0=8\lambda ^{-3}+1\). We have that \(A_0\) satisfies

$$\begin{aligned} \lambda _0|\zeta |^2\le A_0(x,z)\zeta \cdot \zeta \le \lambda _0^{-1}|\zeta |^2,\quad \hbox {for every }\;\zeta \in \mathbb {R}^{n+1} \end{aligned}$$

and

$$\begin{aligned} \left| A_0(x,z)-A_0(y,w)\right| \le \Lambda _0 \left( \left| x-y \right| +|z-w|\right) , \quad \hbox {for every }\; (x,z), (y,w)\in \mathbb {R}^{n+1} \end{aligned}$$

where \(\lambda _0\) depends on \(\lambda \) only and \(\Lambda _0\) depends on \(\lambda , \Lambda \) only. Denote

$$\begin{aligned} \mathcal {L}U:=\text{ div }_{x,z}\left( A_0(x,z)\nabla _{x,z} U\right) +c(x)U. \end{aligned}$$

It is easy to check that if u(x, t) is a solution of (4.1) (\(\rho _0=1\)) then \(U(x,z,t):=u(x,t)\) is a solution to

$$\begin{aligned} q(x)\partial ^2_{t}U-\mathcal {L}U=0, \quad \hbox {in }\; \widetilde{B}_{1}\times (-\lambda ,\lambda ). \end{aligned}$$
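Indeed, a short formal check using only the definitions above: since \(\partial _zU=0\) and \(\eta '(z)=1\) for \(|z|<1\),

$$\begin{aligned} \text{ div }_{x,z}\left( A_0(x,z)\nabla _{x,z} U\right) =\text{ div }\left( A(x)\nabla _x u\right) +\partial _z\left( \eta (z)\,b(x)\cdot \nabla _x u\right) =\text{ div }\left( A(x)\nabla _x u\right) +b(x)\cdot \nabla _x u, \end{aligned}$$

so that \(\mathcal {L}U=Lu\) and \(q\partial ^2_{t}U-\mathcal {L}U=q\partial ^2_{t}u-Lu=0\) at every point of \(\widetilde{B}_{1}\times (-\lambda ,\lambda )\).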

Therefore we are reduced to the case considered previously in this section, and again the proofs of estimates (2.6) and (2.17) are straightforward.