1 Introduction

The telegraph equations are a system of coupled linear partial differential equations governing the voltage and the current on a transmission line. With t denoting time, x the distance from a fixed reference point, and \({\nu },\zeta \) the voltage and the current, respectively, the equations read

$$\begin{aligned} \bigg \lbrace \begin{array}{l} \partial _{x}\nu (t,x)=-L\partial _{t}\zeta (t,x)-R\zeta (t,x),\\ \partial _{x}\zeta (t,x)=-C\partial _{t}\nu (t,x)-G\nu (t,x), \end{array} \end{aligned}$$

where L is the inductance, C the capacitance, R the resistance and G the conductance. Combining the two equations yields a hyperbolic partial differential equation of the form

$$\begin{aligned} \partial _{x}^2u(t,x) - LC\partial _{t}^2u(t,x) = (RC+GL)\partial _{t}u(t,x) + GR u(t,x), \end{aligned}$$
(1.1)

where u represents either the voltage \(\nu \) or the current \(\zeta \). For the derivation of these equations, we refer the reader to [1]. The form (1.1) can be regarded as a wave equation with additional mass and dissipation terms, and it is widely used in the literature to study wave propagation phenomena and random walk theory; see, for instance, [2,3,4,5,6] and the references therein.
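As an illustration, the elimination of the current \(\zeta \) leading to (1.1) can be verified symbolically; the following is a sketch using sympy, with \(\nu \) as the unknown.

```python
import sympy as sp

t, x = sp.symbols('t x')
L, C, R, G = sp.symbols('L C R G', positive=True)
nu = sp.Function('nu')(t, x)                     # the voltage

# From the second equation of the system: zeta_x = -C*nu_t - G*nu.
zeta_x = -C * sp.diff(nu, t) - G * nu

# Differentiating the first equation in x: nu_xx = -L*(zeta_x)_t - R*zeta_x.
nu_xx = -L * sp.diff(zeta_x, t) - R * zeta_x

# Residual of (1.1): nu_xx - L*C*nu_tt - (R*C + G*L)*nu_t - G*R*nu.
residual = sp.expand(
    nu_xx - L * C * sp.diff(nu, t, 2)
    - (R * C + G * L) * sp.diff(nu, t) - G * R * nu
)
print(residual)  # 0
```

The same computation with the roles of \(\nu \) and \(\zeta \) exchanged gives the equation for the current.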

In the present paper, we consider the telegraph equation in a more general setting: we replace the classical Laplacian by the fractional one and, for fixed \(T>0\), consider the Cauchy problem:

$$\begin{aligned} \bigg \lbrace \begin{array}{l} u_{tt}(t,x) + (-\Delta )^{s}u(t,x) + a(x)u(t,x) + b(x)u_{t}(t,x)=0,\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x), \end{array} \end{aligned}$$
(1.2)

where \((t,x)\in [0,T]\times {\mathbb {R}}^d\) and \(s>0\). Motivated by the fact that the mechanical and physical properties of modern materials cannot be described by smooth functions, owing to the non-homogeneity of the material structure, the spatially dependent mass a and the dissipation coefficient b in (1.2) are assumed to be non-negative and singular, in particular to exhibit \(\delta \)-like behaviours. Our aim is to prove that this problem is well-posed in the sense of the very weak solution concept introduced in [7] by Garetto and the first author, in order to give a neat answer to the problem of multiplication of distributions in the Schwartz theory, see [8], and to provide a framework in which partial differential equations involving coefficients and data of low regularity can be rigorously studied. Let us give a brief literature review of this concept of solutions. After the original work of Garetto and Ruzhansky [7], many researchers adopted this notion of solution in different situations, either for abstract mathematical problems, as in [9,10,11], or for physical models, as in [12,13,14] and [15,16,17,18,19], where it is shown that the concept of very weak solutions is well suited to numerical modelling, and in [20], where the propagation of the coefficients' singularities in the very weak solution is studied. More recently, we cite [21,22,23,24,25].

The novelty of this work lies in the fact that we consider equations that cannot be formulated in the classical or the distributional sense. We employ the concept of very weak solutions, which allows us to overcome the impossibility of multiplying distributions. Furthermore, the results obtained in this paper extend those of [16], firstly by incorporating a dissipation term, and secondly by relaxing the assumptions on the Cauchy data, allowing them to be as singular as the equation coefficients, whereas in [16] they were assumed to be smooth functions.

2 Preliminaries

For the reader’s convenience, we review in this section notations and notions that are frequently used in the sequel.

2.1 Notation

  • By the notation \(f\lesssim g\), we mean that there exists a positive constant C, independent of the relevant parameters, such that \(f \le Cg\).

  • We also define

    $$\begin{aligned} \Vert u(t,\cdot )\Vert _1:= \Vert u(t,\cdot )\Vert _{L^2} + \Vert (-\Delta )^{\frac{s}{2}}u(t,\cdot )\Vert _{L^2} + \Vert u_t(t,\cdot )\Vert _{L^2}, \end{aligned}$$

    and

    $$\begin{aligned} \Vert u(t,\cdot )\Vert _2:= \Vert u(t,\cdot )\Vert _{L^2} + \Vert (-\Delta )^{\frac{s}{2}}u(t,\cdot )\Vert _{L^2} + \Vert (-\Delta )^{s}u(t,\cdot )\Vert _{L^2} + \Vert u_t(t,\cdot )\Vert _{L^2}. \end{aligned}$$

We also recall the well-known Hölder inequality.

Proposition 2.1

Let \(r\in (0,\infty )\) and \(p,q\in (0,\infty )\) be such that \(\frac{1}{r}=\frac{1}{p} + \frac{1}{q}\). If \(f\in L^{p}({\mathbb {R}}^d)\) and \(g\in L^{q}({\mathbb {R}}^d)\), then \(fg\in L^{r}({\mathbb {R}}^d)\) and

$$\begin{aligned} \Vert fg\Vert _{L^r}\le \Vert f\Vert _{L^p}\Vert g\Vert _{L^q}. \end{aligned}$$
(2.1)
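For illustration, the inequality holds for any measure, in particular the counting measure, so it can be tested on vectors; a sketch with randomly chosen discrete "functions":

```python
import random

random.seed(0)

def lp_norm(v, p):
    # discrete L^p norm with respect to the counting measure
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

p, q = 3.0, 6.0
r = 1.0 / (1.0 / p + 1.0 / q)   # here r = 2, since 1/3 + 1/6 = 1/2

f = [random.uniform(0.0, 1.0) for _ in range(1000)]
g = [random.uniform(0.0, 1.0) for _ in range(1000)]
fg = [a * b for a, b in zip(f, g)]

lhs = lp_norm(fg, r)
rhs = lp_norm(f, p) * lp_norm(g, q)
print(lhs <= rhs)  # True
```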

2.2 The fractional Sobolev space \(H^s\) and the fractional Laplacian

Definition 1

(Fractional Sobolev space) Given \(s>0\), the fractional Sobolev space is defined by

$$\begin{aligned} H^s({\mathbb {R}}^d) = \big \{f\in L^2({\mathbb {R}}^d): \mathop {\int }\limits _{{\mathbb {R}}^d}(1+\vert \xi \vert ^{2s})|{\widehat{f}}(\xi )|^2\mathrm d\xi < +\infty \big \}, \end{aligned}$$

where \({\widehat{f}}\) denotes the Fourier transform of f.

We note that the fractional Sobolev space \(H^s\), endowed with the norm

$$\begin{aligned} \Vert f\Vert _{H^s}:=\left( \mathop {\int }\limits _{{\mathbb {R}}^d}(1+\vert \xi \vert ^{2s})|{\widehat{f}}(\xi )|^2\mathrm d\xi \right) ^{\frac{1}{2}},\quad \text {for } f\in H^s({\mathbb {R}}^d), \end{aligned}$$
(2.2)

is a Hilbert space.

Definition 2

(Fractional Laplacian) For \(s>0\), \((-\Delta )^s\) denotes the fractional Laplacian defined by

$$\begin{aligned} (-\Delta )^{s}f = {\mathcal {F}}^{-1}(\vert \xi \vert ^{2s}({\widehat{f}})), \end{aligned}$$

where \({\mathcal {F}}^{-1}\) denotes the inverse Fourier transform.

In other words, the fractional Laplacian \((-\Delta )^s\) can be viewed as the pseudo-differential operator with symbol \(\vert \xi \vert ^{2s}\). With this definition and the Plancherel theorem, the fractional Sobolev space can be defined as:

$$\begin{aligned} H^{s}({\mathbb {R}}^{d})=\big \{ f\in L^{2}({\mathbb {R}}^{d}): (-\Delta )^{\frac{s}{2}}f \in L^{2}({\mathbb {R}}^{d})\big \}, \end{aligned}$$
(2.3)

moreover, the norm

$$\begin{aligned} \Vert f\Vert _{H^{s}}:=\Vert f\Vert _{L^2}+\Vert (-\Delta )^{\frac{s}{2}}f\Vert _{L^2}, \end{aligned}$$
(2.4)

is equivalent to the one defined in (2.2).
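For illustration, the Fourier-multiplier definition of \((-\Delta )^s\) can be realised numerically with the discrete Fourier transform on a periodic grid; a sketch in \(d=1\), where for \(s=1\) the operator must reproduce \(-f''\) (for the Gaussian \(f(x)=\mathrm e^{-x^2/2}\), one has \(-f''(x)=(1-x^2)\mathrm e^{-x^2/2}\)):

```python
import numpy as np

n, half_width = 1024, 20.0
x = np.linspace(-half_width, half_width, n, endpoint=False)
dx = x[1] - x[0]
xi = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)      # angular frequencies

def fractional_laplacian(f, s):
    # (-Delta)^s f = F^{-1}( |xi|^{2s} F f ), discretised with the FFT
    return np.real(np.fft.ifft(np.abs(xi) ** (2.0 * s) * np.fft.fft(f)))

f = np.exp(-x ** 2 / 2.0)
exact = (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)  # -f'' for the Gaussian
err = np.max(np.abs(fractional_laplacian(f, 1.0) - exact))
print(err)  # spectrally small
```

The Gaussian is numerically compactly supported in the chosen window, so the periodisation error is negligible.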

Remark 2.1

We note that the fractional Sobolev space \(H^s({\mathbb {R}}^d)\) can also be defined via the Gagliardo norm; however, we choose the present approach since it is valid for any real \(s>0\), unlike the definition via the Gagliardo norm, which applies only for \(s\in (0,1)\). We refer the reader to [26,27,28] for more details and alternative definitions.

Proposition 2.2

(Fractional Sobolev inequality, e.g. Theorem 1.1 in [29]) Let \(d\in {\mathbb {N}}\) and \(s\in {\mathbb {R}}_+\) with \(d>2s\), and let \(q=\frac{2d}{d-2s}\). Then, the estimate

$$\begin{aligned} \Vert f\Vert _{L^q} \le C(d,s)\Vert (-\Delta )^{\frac{s}{2}}f\Vert _{L^2}, \end{aligned}$$
(2.5)

holds for all \(f\in H^{s}({\mathbb {R}}^d)\), where the constant C depends only on the dimension d and the order s.

2.3 Duhamel’s principle

We prove the following special version of Duhamel’s principle, which will be used frequently throughout this paper. For more general versions of this principle, we refer the reader to [30]. Let us consider the following Cauchy problem,

$$\begin{aligned} \left\{ \begin{array}{l} u_{tt}(t,x)+\lambda (x)u_{t}(t,x)+Lu(t,x)=f(t,x),~~~(t,x)\in \left( 0,\infty \right) \times {\mathbb {R}}^{d},\\ u(0,x)=u_{0}(x),\,\,\, u_{t}(0,x)=u_{1}(x),\,\,\, x\in {\mathbb {R}}^{d}, \end{array} \right. \end{aligned}$$
(2.6)

where \(\lambda \) is a given function and L is a linear partial differential operator acting in the spatial variable.

Proposition 2.3

The solution to the Cauchy problem (2.6) is given by

$$\begin{aligned} u(t,x)= w(t,x) + \mathop {\int }\limits _0^t v(t,x;\tau )\mathrm d\tau , \end{aligned}$$
(2.7)

where \(w(t,x)\) is the solution to the homogeneous problem

$$\begin{aligned} \left\{ \begin{array}{l} w_{tt}(t,x)+\lambda (x)w_{t}(t,x)+Lw(t,x)=0,~~~(t,x)\in \left( 0,\infty \right) \times {\mathbb {R}}^{d},\\ w(0,x)=u_{0}(x),\,\,\, w_{t}(0,x)=u_{1}(x),\,\,\, x\in {\mathbb {R}}^{d}, \end{array} \right. \end{aligned}$$
(2.8)

and \(v(t,x;\tau )\) solves the auxiliary Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{l} v_{tt}(t,x;\tau )+\lambda (x)v_{t}(t,x;\tau )+Lv(t,x;\tau )=0,~~~(t,x)\in \left( \tau ,\infty \right) \times {\mathbb {R}}^{d},\\ v(\tau ,x;\tau )=0,\,\,\, v_{t}(\tau ,x;\tau )=f(\tau ,x),\,\,\, x\in {\mathbb {R}}^{d}, \end{array} \right. \end{aligned}$$
(2.9)

where \(\tau \) is a parameter varying over \(\left( 0,\infty \right) \).

Proof

Firstly, we apply \(\partial _{t}\) to u in (2.7). We get

$$\begin{aligned} \partial _t u(t,x)=\partial _t w(t,x) + \mathop {\int }\limits _0^t \partial _t v(t,x;\tau )\mathrm d\tau , \end{aligned}$$
(2.10)

where we used that \(v(t,x;t)=0\), by the initial condition imposed in (2.9). Multiplying (2.10) by \(\lambda (x)\) gives

$$\begin{aligned} \lambda (x)\partial _t u(t,x)=\lambda (x)\partial _t w(t,x) + \mathop {\int }\limits _0^t \lambda (x)\partial _t v(t,x;\tau )\mathrm d\tau . \end{aligned}$$
(2.11)

We differentiate (2.10) again with respect to t to get

$$\begin{aligned} \partial _{tt} u(t,x)=\partial _{tt} w(t,x) + f(t,x) + \mathop {\int }\limits _0^t \partial _{tt} v(t,x;\tau )\mathrm d\tau , \end{aligned}$$
(2.12)

where we used that \(\partial _t v(t,x;t) = f(t,x)\). Now, applying L to u in (2.7) gives

$$\begin{aligned} L u(t,x)=L w(t,x) + \mathop {\int }\limits _0^t L v(t,x;\tau )\mathrm d\tau . \end{aligned}$$
(2.13)

By adding (2.12), (2.13) and (2.11), and by taking into consideration that w and v satisfy the equations in (2.8) and (2.9), we get

$$\begin{aligned} u_{tt}(t,x)+\lambda (x)u_{t}(t,x)+Lu(t,x)=f(t,x). \end{aligned}$$

It remains to prove that u satisfies the initial conditions. Indeed, from (2.7) and (2.10), we have that \(u(0,x)=w(0,x)=u_0(x)\) and that \(u_t(0,x)=\partial _t w(0,x)=u_1(x)\). This concludes the proof. \(\square \)
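In the space-independent case \(\lambda \equiv 0\), \(L=\omega ^2\), the construction above can be checked symbolically; a sketch with sympy, taking the constant source \(f\equiv 1\) for simplicity:

```python
import sympy as sp

t, tau = sp.symbols('t tau')
omega, u0, u1 = sp.symbols('omega u_0 u_1', positive=True)
f = sp.Integer(1)                                             # constant source

w = u0 * sp.cos(omega * t) + u1 * sp.sin(omega * t) / omega   # solves (2.8)
v = f * sp.sin(omega * (t - tau)) / omega                     # solves (2.9)
u = w + sp.integrate(v, (tau, 0, t))                          # formula (2.7)

residual = sp.simplify(sp.diff(u, t, 2) + omega**2 * u - f)
ic0 = sp.simplify(u.subs(t, 0) - u0)
ic1 = sp.simplify(sp.diff(u, t).subs(t, 0) - u1)
print(residual, ic0, ic1)  # 0 0 0
```

All three residuals vanish: u solves the inhomogeneous equation and attains the prescribed data.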

Remark 2.2

We note that the above statement of Duhamel’s principle extends to equations with time derivatives of order \(k\in {\mathbb {N}}\). Indeed, if we consider the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{l} \partial _{t}^{k}u(t,x)+\sum _{j=1}^{k-1}\lambda _{j}(x)\partial _{t}^{j} u(t,x)+Lu(t,x)=f(t,x),~~~(t,x)\in \left( 0,\infty \right) \times {\mathbb {R}}^{d},\\ \partial _{t}^{j}u(0,x)=u_{j}(x),\,\text {for}\quad j=0,\cdots ,k-1, \quad x\in {\mathbb {R}}^{d}, \end{array} \right. \end{aligned}$$

then, the solution is given by

$$\begin{aligned} u(t,x)= w(t,x) + \mathop {\int }\limits _0^t v(t,x;\tau )\mathrm d\tau , \end{aligned}$$

where \(w(t,x)\) is the solution to the homogeneous problem

$$\begin{aligned} \left\{ \begin{array}{l} \partial _{t}^{k}w(t,x)+\sum _{j=1}^{k-1}\lambda _{j}(x)\partial _{t}^{j}w(t,x)+Lw(t,x)=0,~~~(t,x)\in \left( 0,\infty \right) \times {\mathbb {R}}^{d},\\ \partial _{t}^{j}w(0,x)=u_{j}(x),\,\text {for}\quad j=0,\cdots ,k-1, \quad x\in {\mathbb {R}}^{d}, \end{array} \right. \end{aligned}$$

and \(v(t,x;\tau )\) solves the auxiliary Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{l} \partial _{t}^{k}v(t,x;\tau )+\sum _{j=1}^{k-1}\lambda _{j}(x)\partial _{t}^{j}v(t,x;\tau )+Lv(t,x;\tau )=0,~~~(t,x)\in \left( \tau ,\infty \right) \times {\mathbb {R}}^{d},\\ \partial _{t}^{j}v(\tau ,x;\tau )=0,\,\text {for}\quad j=0,\cdots ,k-2,\,\,\, \partial _{t}^{k-1}v(\tau ,x;\tau )=f(\tau ,x),\,\,\, x\in {\mathbb {R}}^{d}, \end{array} \right. \end{aligned}$$

where \(\tau \in \left( 0,\infty \right) \).

2.4 Energy estimates for the classical solution

In order to prove the existence and uniqueness of a very weak solution to the Cauchy problem (1.2), as well as the consistency with the classical theory, we will often use the following lemmas, stated for the case when the mass a and the dissipation coefficient b are regular functions. The lemmas are given under different assumptions on a and b.

Lemma 2.4

Let \(a,b\in L^{\infty }({\mathbb {R}}^d)\) be non-negative and suppose that \(u_0 \in H^{s}({\mathbb {R}}^d)\) and \(u_1 \in L^{2}({\mathbb {R}}^d)\). Then the unique solution \(u\in C([0,T];H^{s}({\mathbb {R}}^d))\cap C^1([0,T];L^{2}({\mathbb {R}}^d))\) to the Cauchy problem (1.2) satisfies the estimate

$$\begin{aligned} \Vert u(t,\cdot )\Vert _1 \lesssim \Big (1+\Vert a\Vert _{L^{\infty }}\Big )\Big (1+\Vert b\Vert _{L^{\infty }}\Big )\bigg [\Vert u_0\Vert _{H^s} + \Vert u_{1}\Vert _{L^2}\bigg ], \end{aligned}$$
(2.14)

for all \(t\in [0,T]\).

Proof

Multiplying the equation in (1.2) by \(u_t\) and integrating with respect to the variable x over \({\mathbb {R}}^d\) and taking the real part, we get

$$\begin{aligned} Re\Bigg (\langle u_{tt}(t,\cdot ),&u_{t}(t,\cdot )\rangle _{L^2} + \langle (-\Delta )^{s}u(t,\cdot ),u_{t}(t,\cdot )\rangle _{L^2}\\&+ \langle a(\cdot )u(t,\cdot ),u_{t}(t,\cdot )\rangle _{L^2} + \langle b(\cdot )u_{t}(t,\cdot ),u_{t}(t,\cdot )\rangle _{L^2}\Bigg ) = 0.\nonumber \end{aligned}$$
(2.15)

We easily see that

$$\begin{aligned} Re\langle u_{tt}(t,\cdot ),u_{t}(t,\cdot )\rangle _{L^2} = \frac{1}{2}\partial _{t}\langle u_{t}(t,\cdot ),u_{t}(t,\cdot )\rangle _{L^2} = \frac{1}{2}\partial _{t}\Vert u_{t}(t,\cdot )\Vert _{L^2}^2, \end{aligned}$$
(2.16)

and

$$\begin{aligned} Re\langle (-\Delta )^{s}u(t,\cdot ),u_{t}(t,\cdot )\rangle _{L^2}&= \frac{1}{2}\partial _{t}\langle (-\Delta )^{\frac{s}{2}}u(t,\cdot ),(-\Delta )^{\frac{s}{2}}u(t,\cdot )\rangle _{L^2} \nonumber \\&= \frac{1}{2}\partial _{t}\Vert (-\Delta )^{\frac{s}{2}}u(t,\cdot )\Vert _{L^2}^2, \end{aligned}$$
(2.17)

where we used the self-adjointness of the operator \((-\Delta )^s\). For the remaining terms in (2.15), we have

$$\begin{aligned} Re\langle a(\cdot )u(t,\cdot ),u_{t}(t,\cdot )\rangle _{L^2} = \frac{1}{2}\partial _t\Vert a^{\frac{1}{2}}(\cdot )u(t,\cdot )\Vert _{L^2}^2, \end{aligned}$$
(2.18)

and

$$\begin{aligned} Re\langle b(\cdot )u_{t}(t,\cdot ),u_{t}(t,\cdot )\rangle _{L^2} = \Vert b^{\frac{1}{2}}(\cdot )u_{t}(t,\cdot )\Vert _{L^2}^2. \end{aligned}$$
(2.19)

By substituting (2.16),(2.17),(2.18) and (2.19) in (2.15), we get

$$\begin{aligned} \partial _{t}\Big [ \Vert u_{t}(t,\cdot )\Vert _{L^2}^2 + \Vert (-\Delta )^{\frac{s}{2}}u(t,\cdot )\Vert _{L^2}^2 + \Vert a^{\frac{1}{2}}(\cdot )u(t,\cdot )\Vert _{L^2}^2\Big ] = -2\Vert b^{\frac{1}{2}}(\cdot )u_{t}(t,\cdot )\Vert _{L^2}^2. \end{aligned}$$
(2.20)

Let us denote

$$\begin{aligned} E(t):= \Vert u_{t}(t,\cdot )\Vert _{L^2}^2 + \Vert (-\Delta )^{\frac{s}{2}}u(t,\cdot )\Vert _{L^2}^2 + \Vert a^{\frac{1}{2}}(\cdot )u(t,\cdot )\Vert _{L^2}^2, \end{aligned}$$
(2.21)

the energy function of the system (1.2). It follows from (2.20) that \(\partial _{t}E(t)\le 0\), hence the energy decays: \(E(t)\le E(0)\) for all \(t\in [0,T]\). Taking into consideration the estimate

$$\begin{aligned} \Vert a^{\frac{1}{2}}(\cdot )u_{0}\Vert _{L^2}^2 \le \Vert a\Vert _{L^{\infty }}\Vert u_0\Vert _{L^2}^2, \end{aligned}$$
(2.22)

it follows that all terms in E(t) satisfy the estimates:

$$\begin{aligned} \Vert a^{\frac{1}{2}}(\cdot )u(t,\cdot )\Vert _{L^2}^2&\lesssim \Vert u_{1}\Vert _{L^2}^2 + \Vert (-\Delta )^{\frac{s}{2}}u_0\Vert _{L^2}^2 + \Vert a\Vert _{L^{\infty }}\Vert u_0\Vert _{L^2}^2\nonumber \\&\lesssim \Vert u_{1}\Vert _{L^2}^2 + \Vert u_0\Vert _{H^s}^2 + \Vert a\Vert _{L^{\infty }}\Vert u_0\Vert _{H^s}^2\nonumber \\&\lesssim \big (1+\Vert a\Vert _{L^{\infty }}\big )\big [\Vert u_0\Vert _{H^s}^2 + \Vert u_{1}\Vert _{L^2}^2\big ]\nonumber \\&\lesssim \big (1+\Vert a\Vert _{L^{\infty }}\big )\big [\Vert u_0\Vert _{H^s} + \Vert u_{1}\Vert _{L^2}\big ]^2, \end{aligned}$$
(2.23)

as well as

$$\begin{aligned} \bigg \{\Vert u_{t}(t,\cdot )\Vert _{L^2}^2, \Vert (-\Delta )^{\frac{s}{2}}u(t,\cdot )\Vert _{L^2}^2\bigg \} \lesssim \big (1+\Vert a\Vert _{L^{\infty }}\big )\big [\Vert u_0\Vert _{H^s} + \Vert u_{1}\Vert _{L^2}\big ]^2, \end{aligned}$$
(2.24)

uniformly in \(t\in [0,T]\), where we use the fact that:

$$\begin{aligned} \Big \{ \Vert (-\Delta )^{\frac{s}{2}}u_0\Vert _{L^2}, \Vert u_0\Vert _{L^2}\Big \} \le \Vert u_0\Vert _{H^s}. \end{aligned}$$

We now need to estimate u. For this purpose, we apply the Fourier transform to (1.2) with respect to the variable x to get the non-homogeneous ordinary differential equation

$$\begin{aligned} {\widehat{u}}_{tt}(t,\xi )+\vert \xi \vert ^{2s}{\widehat{u}}(t,\xi )={\widehat{f}}(t,\xi ), \quad (t,\xi )\in [0,T]\times {\mathbb {R}}^d, \end{aligned}$$
(2.25)

with the initial conditions \({\widehat{u}}(0,\xi )={\widehat{u}}_{0}(\xi )\) and \({\widehat{u}}_{t}(0,\xi )={\widehat{u}}_{1}(\xi )\), where \({\widehat{f}}\) and \({\widehat{u}}\) denote the Fourier transforms of f and u, respectively, with \(f(t,x):=-a(x)u(t,x)-b(x)u_{t}(t,x)\). Treating \({\widehat{f}}(t,\xi )\) as a source term and solving (2.25) by Duhamel’s principle (Proposition 2.3 with \(\lambda \equiv 0\)), we obtain the representation

$$\begin{aligned} {\widehat{u}}(t,\xi )=\cos (t\vert \xi \vert ^{s}) {\widehat{u}}_{0}(\xi ) + \frac{\sin (t\vert \xi \vert ^{s})}{\vert \xi \vert ^{s}} {\widehat{u}}_{1}(\xi ) + \mathop {\int }\limits _{0}^{t}\frac{\sin ((t-\tau )\vert \xi \vert ^{s})}{\vert \xi \vert ^{s}} {\widehat{f}}(\tau ,\xi ) \mathrm d\tau . \end{aligned}$$
(2.26)
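As a sanity check, the representation (2.26) can be compared, for a fixed frequency, with a direct numerical integration of the mode equation; a sketch writing \(\mu =\vert \xi \vert ^{s}\) and choosing an arbitrary smooth source:

```python
import numpy as np

mu = 2.5                          # plays the role of |xi|^s
f = lambda t: np.exp(-t)          # an arbitrary smooth source
v0, v1 = 1.0, 0.5                 # the data \hat u_0(xi), \hat u_1(xi)
T, n = 1.0, 20000
dt = T / n
ts = np.linspace(0.0, T, n + 1)

# Direct integration of v'' + mu^2 v = f by the leapfrog scheme.
v = np.empty(n + 1)
v[0] = v0
v[1] = v0 + dt * v1 + 0.5 * dt**2 * (f(0.0) - mu**2 * v0)
for k in range(1, n):
    v[k + 1] = 2.0 * v[k] - v[k - 1] + dt**2 * (f(ts[k]) - mu**2 * v[k])

# Formula (2.26) at t = T, the tau-integral by the trapezoidal rule.
integrand = np.sin((T - ts) * mu) / mu * f(ts)
duhamel = (np.cos(T * mu) * v0 + np.sin(T * mu) / mu * v1
           + 0.5 * dt * np.sum(integrand[1:] + integrand[:-1]))
err = abs(v[-1] - duhamel)
print(err < 1e-5)  # True: both are O(dt^2)-accurate approximations
```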

Taking the \(L^2\) norm in (2.26) and using the estimates:

  1. \(\vert \cos (t\vert \xi \vert ^{s})\vert \le 1\), for \(t\in \left[ 0,T\right] \) and \(\xi \in {\mathbb {R}}^{d}\),

  2. \(\vert \sin (t\vert \xi \vert ^{s})\vert \le 1\), for large frequencies and \(t\in \left[ 0,T\right] \), and

  3. \(\vert \sin (t\vert \xi \vert ^{s})\vert \le t\vert \xi \vert ^{s} \le T\vert \xi \vert ^{s}\), for small frequencies and \(t\in \left[ 0,T\right] \),

leads to

$$\begin{aligned} \Vert {\widehat{u}}(t,\cdot )\Vert _{L^2}^{2} \lesssim \Vert {\widehat{u}}_{0}\Vert _{L^2}^{2} + \Vert {\widehat{u}}_{1}\Vert _{L^2}^{2} + \mathop {\int }\limits _{0}^{t}\Vert {\widehat{f}}(\tau ,\cdot )\Vert _{L^2}^{2}\mathrm d\tau , \end{aligned}$$

and by using the Parseval–Plancherel identity, we get

$$\begin{aligned} \Vert u(t,\cdot )\Vert _{L^2}^{2} \lesssim \Vert u_{0}\Vert _{L^2}^{2} + \Vert u_{1}\Vert _{L^2}^{2} + \mathop {\int }\limits _{0}^{t}\Vert f(\tau ,\cdot )\Vert _{L^2}^{2}\mathrm d\tau , \end{aligned}$$
(2.27)

for all \(t\in [0,T]\). To estimate \(\Vert f(\tau ,\cdot )\Vert _{L^2}\), the last term in the above inequality, we use the triangle inequality and the estimates

$$\begin{aligned} \Vert a(\cdot )u(\tau ,\cdot )\Vert _{L^2}&\le \Vert a\Vert _{L^{\infty }}^{\frac{1}{2}} \Vert a^{\frac{1}{2}}(\cdot )u(\tau ,\cdot )\Vert _{L^2}\nonumber \\&\lesssim \Vert a\Vert _{L^{\infty }}^{\frac{1}{2}} \big (1+\Vert a\Vert _{L^{\infty }}\big )^{\frac{1}{2}}\big [\Vert u_0\Vert _{H^s} + \Vert u_{1}\Vert _{L^2}\big ] \nonumber \\&\lesssim \big (1+\Vert a\Vert _{L^{\infty }}\big )\big [\Vert u_0\Vert _{H^s} + \Vert u_{1}\Vert _{L^2}\big ], \end{aligned}$$
(2.28)

resulting from (2.23), and similarly

$$\begin{aligned} \Vert b(\cdot )u_{t}(\tau ,\cdot )\Vert _{L^2}&\le \Vert b\Vert _{L^{\infty }} \Vert u_{t}(\tau ,\cdot )\Vert _{L^2}\nonumber \\&\lesssim \Vert b\Vert _{L^{\infty }}\big (1+\Vert a\Vert _{L^{\infty }}\big )\big [\Vert u_0\Vert _{H^s} + \Vert u_{1}\Vert _{L^2}\big ], \end{aligned}$$
(2.29)

resulting from (2.24), to get

$$\begin{aligned} \Vert f(\tau ,\cdot )\Vert _{L^2} \lesssim \big (1+\Vert a\Vert _{L^{\infty }}\big )\big (1+\Vert b\Vert _{L^{\infty }}\big )\big [\Vert u_0\Vert _{H^s} + \Vert u_{1}\Vert _{L^2}\big ]. \end{aligned}$$
(2.30)

The desired estimate for u follows by substituting (2.30) into (2.27), finishing the proof. \(\square \)
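The mechanism behind the proof, namely the identity \(\partial _{t}E(t)=-2\Vert b^{\frac{1}{2}}u_{t}(t,\cdot )\Vert _{L^2}^2\le 0\) in (2.20), can be observed numerically on a single mode with constant coefficients; an illustrative sketch:

```python
mu, a, b = 1.5, 0.7, 0.4

def rhs(u, up):
    # first-order system for the mode equation u'' + mu^2 u + a u + b u' = 0
    return up, -(mu**2 + a) * u - b * up

def energy(u, up):
    # mode analogue of (2.21): E = |u'|^2 + mu^2 |u|^2 + a |u|^2
    return up**2 + (mu**2 + a) * u**2

u, up = 1.0, 0.0
dt, steps = 1e-3, 5000          # integrate up to t = 5 with classical RK4
energies = [energy(u, up)]
for _ in range(steps):
    k1 = rhs(u, up)
    k2 = rhs(u + 0.5*dt*k1[0], up + 0.5*dt*k1[1])
    k3 = rhs(u + 0.5*dt*k2[0], up + 0.5*dt*k2[1])
    k4 = rhs(u + dt*k3[0], up + dt*k3[1])
    u  += dt/6.0 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    up += dt/6.0 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    energies.append(energy(u, up))

decreasing = all(e2 <= e1 + 1e-10 for e1, e2 in zip(energies, energies[1:]))
print(decreasing)  # True: the energy never increases
```

With \(b=0\) the same computation shows conservation of the energy instead of decay.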

Lemma 2.5

Let \(d>2s\) and let \(a\in L^{\frac{d}{s}}({\mathbb {R}}^d) \cap L^{\frac{d}{2s}}({\mathbb {R}}^d)\) and \(b\in L^{\frac{d}{s}}({\mathbb {R}}^d)\) be non-negative. If \(u_0\in H^{2s}({\mathbb {R}}^d)\) and \(u_1\in H^{s}({\mathbb {R}}^d)\), then there is a unique solution \(u\in C([0,T]; H^{2s}({\mathbb {R}}^d))\cap C^{1}([0,T]; H^{s}({\mathbb {R}}^d))\) to (1.2), and it satisfies the estimate

$$\begin{aligned} \Vert u(t,\cdot )\Vert _2 \lesssim \Big (1+\Vert a\Vert _{L^{\frac{d}{s}}}\Big )\Big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\Big )\Big (1+\Vert b\Vert _{L^{\frac{d}{s}}}\Big )^2\bigg [\Vert u_0\Vert _{H^{2s}} + \Vert u_{1}\Vert _{H^s}\bigg ], \end{aligned}$$
(2.31)

uniformly in \(t\in [0,T]\).

Proof

Proceeding as in the proof of Lemma 2.4, we get

$$\begin{aligned} \partial _{t}E(t) = -2\Vert b^{\frac{1}{2}}(\cdot )u_{t}(t,\cdot )\Vert _{L^2}^2\le 0, \end{aligned}$$
(2.32)

for the energy function of the system defined by

$$\begin{aligned} E(t)= \Vert u_{t}(t,\cdot )\Vert _{L^2}^2 + \Vert (-\Delta )^{\frac{s}{2}}u(t,\cdot )\Vert _{L^2}^2 + \Vert a^{\frac{1}{2}}(\cdot )u(t,\cdot )\Vert _{L^2}^2, \end{aligned}$$
(2.33)

which implies that the energy decays in t, that is,

$$\begin{aligned} E(t)\le \Vert u_{1}\Vert _{L^2}^2 + \Vert (-\Delta )^{\frac{s}{2}}u_{0}\Vert _{L^2}^2 + \Vert a^{\frac{1}{2}}(\cdot )u_{0}\Vert _{L^2}^2, \end{aligned}$$
(2.34)

for all \(t\in [0,T]\). Using Hölder’s inequality (see Proposition 2.1) for the last term in (2.34) together with \(\Vert a^{\frac{1}{2}}\Vert _{L^p}^2 = \Vert a\Vert _{L^{\frac{p}{2}}}\), gives

$$\begin{aligned} \Vert a^{\frac{1}{2}}(\cdot )u_{0}(\cdot )\Vert _{L^2}^2 \le \Vert a\Vert _{L^{\frac{p}{2}}}\Vert u_0\Vert _{L^q}^2, \end{aligned}$$
(2.35)

for \(1<p,q<\infty \), satisfying \(\frac{1}{p}+\frac{1}{q}=\frac{1}{2}\). Now, if we choose \(q=\frac{2d}{d-2s}\) and consequently \(p=\frac{d}{s}\), it follows from Proposition 2.2 that

$$\begin{aligned} \Vert u_0\Vert _{L^q} \lesssim \Vert (-\Delta )^{\frac{s}{2}}u_{0}(\cdot )\Vert _{L^2}\le \Vert u_{0}\Vert _{H^s}, \end{aligned}$$
(2.36)

and thus

$$\begin{aligned} \Vert a^{\frac{1}{2}}(\cdot )u_{0}(\cdot )\Vert _{L^2}^2 \lesssim \Vert a\Vert _{L^{\frac{d}{2s}}}\Vert u_{0}\Vert _{H^s}^2. \end{aligned}$$
(2.37)

Substituting (2.37) in (2.34), we get the estimates

$$\begin{aligned} \bigg \{\Vert u_{t}(t,\cdot )\Vert _{L^2}^2, \Vert (-\Delta )^{\frac{s}{2}}u(t,\cdot )\Vert _{L^2}^2, \Vert a^{\frac{1}{2}}(\cdot )u(t,\cdot )\Vert _{L^2}^2\bigg \} \lesssim \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )\big [\Vert u_0\Vert _{H^s} + \Vert u_{1}\Vert _{L^2}\big ]^2, \end{aligned}$$
(2.38)

uniformly in \(t\in [0,T]\). To prove the estimate for the solution u, we argue as in the proof of Lemma 2.4 to get

$$\begin{aligned} \Vert u(t,\cdot )\Vert _{L^2}^{2} \lesssim \Vert u_{0}\Vert _{L^2}^{2} + \Vert u_{1}\Vert _{L^2}^{2} + \mathop {\int }\limits _{0}^{t}\Vert f(\tau ,\cdot )\Vert _{L^2}^{2}\mathrm d\tau , \end{aligned}$$
(2.39)

for all \(t\in [0,T]\), with \(f(t,x):=-a(x)u(t,x)-b(x)u_{t}(t,x)\). In order to estimate \(\Vert f(\tau ,\cdot )\Vert _{L^2}\), we use the triangle inequality to get

$$\begin{aligned} \Vert f(t,\cdot )\Vert _{L^2}\le \Vert a(\cdot )u(t,\cdot )\Vert _{L^2} + \Vert b(\cdot )u_{t}(t,\cdot )\Vert _{L^2}. \end{aligned}$$
(2.40)

To estimate the first term in (2.40), we first use Hölder’s inequality together with \(\Vert a^{2}\Vert _{L^{\frac{p}{2}}} = \Vert a\Vert _{L^p}^2\), to get

$$\begin{aligned} \Vert a(\cdot )u(t,\cdot )\Vert _{L^2} \le \Vert a\Vert _{L^p} \Vert u(t,\cdot )\Vert _{L^q}, \end{aligned}$$
(2.41)

for \(1<p,q<\infty \), satisfying \(\frac{1}{p}+\frac{1}{q}=\frac{1}{2}\), and we choose \(q=\frac{2d}{d-2s}\) and consequently \(p=\frac{d}{s}\), in order to get (from Proposition 2.2)

$$\begin{aligned} \Vert u(t,\cdot )\Vert _{L^q} \lesssim \Vert (-\Delta )^{\frac{s}{2}}u(t,\cdot )\Vert _{L^2}, \end{aligned}$$
(2.42)

and thus

$$\begin{aligned} \Vert a(\cdot )u(t,\cdot )\Vert _{L^2} \lesssim \Vert a\Vert _{L^{\frac{d}{s}}} \Vert (-\Delta )^{\frac{s}{2}}u(t,\cdot )\Vert _{L^2}, \end{aligned}$$
(2.43)

for all \(t\in [0,T]\). Using the estimate (2.38), we arrive at

$$\begin{aligned} \Vert a(\cdot )u(t,\cdot )\Vert _{L^2}&\lesssim \Vert a\Vert _{L^{\frac{d}{s}}} \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )^{\frac{1}{2}}\big [\Vert u_0\Vert _{H^s} + \Vert u_{1}\Vert _{L^2}\big ]\nonumber \\&\lesssim \Vert a\Vert _{L^{\frac{d}{s}}} \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )\big [\Vert u_0\Vert _{H^s} + \Vert u_{1}\Vert _{L^2}\big ]. \end{aligned}$$
(2.44)

For the second term in (2.40), we argue as above, to get

$$\begin{aligned} \Vert b(\cdot )u_t(t,\cdot )\Vert _{L^2} \lesssim \Vert b\Vert _{L^{\frac{d}{s}}} \Vert (-\Delta )^{\frac{s}{2}}u_t(t,\cdot )\Vert _{L^2}, \end{aligned}$$
(2.45)

for all \(t\in [0,T]\). We now need to estimate \(\Vert (-\Delta )^{\frac{s}{2}}u_t(t,\cdot )\Vert _{L^2}\). For this, we note that if u solves the Cauchy problem

$$\begin{aligned} \bigg \lbrace \begin{array}{l} u_{tt}(t,x) + (-\Delta )^{s}u(t,x) + a(x)u(t,x) + b(x)u_{t}(t,x)=0, \,\, (t,x)\in [0,T]\times {\mathbb {R}}^d,\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x), \quad x\in {\mathbb {R}}^d, \end{array} \end{aligned}$$

then \(u_t\) solves

$$\begin{aligned} \bigg \lbrace \begin{array}{l} (u_t)_{tt}(t,x) + (-\Delta )^{s}u_t(t,x) + a(x)u_t(t,x) + b(x)(u_t)_{t}(t,x)=0, \,\, (t,x)\in [0,T]\times {\mathbb {R}}^d,\\ u_t(0,x)=u_{1}(x),\quad u_{tt}(0,x)=-(-\Delta )^{s}u_0(x) - a(x)u_0(x) - b(x)u_1(x), \quad x\in {\mathbb {R}}^d. \end{array} \end{aligned}$$

Thanks to (2.43) and (2.45), one has

$$\begin{aligned} \Vert a(\cdot )u_0(\cdot )\Vert _{L^2} \lesssim \Vert a\Vert _{L^{\frac{d}{s}}} \Vert u_0\Vert _{H^s},\quad \Vert b(\cdot )u_1(\cdot )\Vert _{L^2} \lesssim \Vert b\Vert _{L^{\frac{d}{s}}} \Vert u_1\Vert _{H^s}. \end{aligned}$$
(2.46)

The estimate for \(\Vert (-\Delta )^{\frac{s}{2}}u_t(t,\cdot )\Vert _{L^2}\) follows by applying (2.38) to the above Cauchy problem for \(u_t\), to get

$$\begin{aligned} \Vert (-\Delta )^{\frac{s}{2}}u_t(t,\cdot )\Vert _{L^2}&\lesssim \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )^{\frac{1}{2}}\big [\Vert u_1\Vert _{H^s} + \Vert u_{tt}(0,\cdot )\Vert _{L^2}\big ]\nonumber \\&\lesssim \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )^{\frac{1}{2}}\big [\Vert u_1\Vert _{H^s} + \Vert u_0\Vert _{H^{2s}} + \Vert a\Vert _{L^{\frac{d}{s}}} \Vert u_0\Vert _{H^s} + \Vert b\Vert _{L^{\frac{d}{s}}} \Vert u_1\Vert _{H^s}\big ]\nonumber \\&\lesssim \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )\big (1+\Vert a\Vert _{L^{\frac{d}{s}}}\big )\big (1+\Vert b\Vert _{L^{\frac{d}{s}}}\big )\big [\Vert u_0\Vert _{H^{2s}} + \Vert u_1\Vert _{H^s}\big ]. \end{aligned}$$
(2.47)

By substituting (2.47) in (2.45), we get

$$\begin{aligned} \Vert b(\cdot )u_t(t,\cdot )\Vert _{L^2}&\lesssim \Vert b\Vert _{L^{\frac{d}{s}}}\big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )\big (1+\Vert a\Vert _{L^{\frac{d}{s}}}\big )\big (1+\Vert b\Vert _{L^{\frac{d}{s}}}\big )\big [\Vert u_0\Vert _{H^{2s}} + \Vert u_1\Vert _{H^s}\big ]\nonumber \\&\lesssim \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )\big (1+\Vert a\Vert _{L^{\frac{d}{s}}}\big )\big (1+\Vert b\Vert _{L^{\frac{d}{s}}}\big )^2\big [\Vert u_0\Vert _{H^{2s}} + \Vert u_1\Vert _{H^s}\big ], \end{aligned}$$
(2.48)

and the estimate for \(\Vert f(t,\cdot )\Vert _{L^2}\) follows from (2.40) together with (2.44) and (2.48), yielding

$$\begin{aligned} \Vert f(t,\cdot )\Vert _{L^2} \lesssim \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )\big (1+\Vert a\Vert _{L^{\frac{d}{s}}}\big )\big (1+\Vert b\Vert _{L^{\frac{d}{s}}}\big )^2\big [\Vert u_0\Vert _{H^{2s}} + \Vert u_1\Vert _{H^s}\big ]. \end{aligned}$$
(2.49)

Combining these estimates, we get the estimate for the solution u. Now, to estimate \(\Vert (-\Delta )^s u\Vert _{L^2}\), we first need to estimate \(u_{tt}\). Reasoning as in (2.47), the first estimate in (2.38), applied to \(u_t\), the solution to the above Cauchy problem, instead of u, gives

$$\begin{aligned} \Vert u_{tt}(t,\cdot )\Vert _{L^2}&\lesssim \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )^{\frac{1}{2}}\big [\Vert u_1\Vert _{H^s} + \Vert u_{tt}(0,\cdot )\Vert _{L^2}\big ]\nonumber \\&\lesssim \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )^{\frac{1}{2}}\big [\Vert u_1\Vert _{H^s} + \Vert u_0\Vert _{H^{2s}} + \Vert a\Vert _{L^{\frac{d}{s}}} \Vert u_0\Vert _{H^s} + \Vert b\Vert _{L^{\frac{d}{s}}} \Vert u_1\Vert _{H^s}\big ]\nonumber \\&\lesssim \big (1+\Vert a\Vert _{L^{\frac{d}{2s}}}\big )\big (1+\Vert a\Vert _{L^{\frac{d}{s}}}\big )\big (1+\Vert b\Vert _{L^{\frac{d}{s}}}\big )\big [\Vert u_0\Vert _{H^{2s}} + \Vert u_1\Vert _{H^s}\big ]. \end{aligned}$$
(2.50)

The estimate for \(\Vert (-\Delta )^s u\Vert _{L^2}\) follows by taking the \(L^2\) norm in the equality

$$\begin{aligned} (-\Delta )^{s}u(t,x)=-u_{tt}(t,x) - a(x)u(t,x) - b(x)u_{t}(t,x), \end{aligned}$$

and using the triangle inequality on the right-hand side, taking into account the estimates (2.44), (2.48) and (2.50) obtained so far. This completes the proof. \(\square \)

3 Very weak well-posedness

Here and in the sequel, we consider the case when the equation coefficients a, b and the Cauchy data \(u_0\) and \(u_1\) are irregular (functions) and prove that the Cauchy problem

$$\begin{aligned} \bigg \lbrace \begin{array}{l} u_{tt}(t,x) + (-\Delta )^{s}u(t,x) + a(x)u(t,x) + b(x)u_{t}(t,x)=0,\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x), \end{array} \end{aligned}$$
(3.1)

for \((t,x)\in [0,T]\times {\mathbb {R}}^d\), has a unique very weak solution. We have in mind “functions” with \(\delta \)- or \(\delta ^2\)-like behaviours. We note that we understand multiplication of distributions as multiplication of approximating families, in particular of their representatives in the Colombeau algebra.

3.1 Existence of very weak solutions

In order to prove existence of very weak solutions to (3.1), we need the following definitions.

Definition 3

(Friedrichs mollifier) A function \(\psi \in C_0^{\infty }({\mathbb {R}}^d)\) is said to be a Friedrichs mollifier if \(\psi \) is non-negative and \(\mathop {\int }\limits _{{\mathbb {R}}^d}\psi (x)\mathrm dx=1\).

Example 3.1

An example of a Friedrichs mollifier is given by:

$$\begin{aligned} \psi (x)=\left\{ \begin{array}{l} \alpha \mathrm e^{-\frac{1}{1-{\vert x\vert ^2}}}\quad \vert x\vert <1,\\ 0\qquad \qquad \vert x\vert \ge 1, \end{array} \right. \end{aligned}$$

where the constant \(\alpha \) is chosen in such a way that \(\mathop {\int }\limits _{{\mathbb {R}}^d}\psi (x)\mathrm dx=1\).
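For illustration, in \(d=1\) the normalising constant \(\alpha \) can be computed numerically (\(\alpha \approx 2.25\)); a sketch:

```python
import math

def bump(x):
    # the un-normalised bump exp(-1/(1-|x|^2)) on (-1, 1), zero outside
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

# Simpson quadrature of the un-normalised bump on [-1, 1]
n = 2000                      # even number of subintervals
h = 2.0 / n
total = bump(-1.0) + bump(1.0)
for k in range(1, n):
    total += (4.0 if k % 2 else 2.0) * bump(-1.0 + k * h)
integral = total * h / 3.0    # approx 0.444 in d = 1
alpha = 1.0 / integral

psi = lambda x: alpha * bump(x)
check = sum(psi(-1.0 + k * h) * h for k in range(n + 1))  # Riemann sum
print(abs(check - 1.0) < 1e-3)  # True: psi integrates to 1
```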

From now on, let \(\psi \) be a Friedrichs mollifier as defined above.

Definition 4

(Mollifying net) For \(\varepsilon \in (0,1]\), and \(x\in {\mathbb {R}}^d\), a net of functions \(\left( \psi _\varepsilon \right) _{\varepsilon \in (0,1]}\) is called a mollifying net if

$$\begin{aligned} \psi _\varepsilon (x)=\omega (\varepsilon )^{-1}\psi \left( x/\omega (\varepsilon )\right) , \end{aligned}$$

where \(\omega (\varepsilon )\) is a positive function converging to 0 as \(\varepsilon \rightarrow 0\) and \(\psi \) is a Friedrichs mollifier. In particular, if we take \(\omega (\varepsilon )=\varepsilon \), then, we get

$$\begin{aligned} \psi _\varepsilon (x)=\varepsilon ^{-1}\psi \left( x/\varepsilon \right) . \end{aligned}$$

Given a function (distribution) f, regularising it by convolution with a mollifying net \(\left( \psi _\varepsilon \right) _{\varepsilon \in (0,1]}\) yields a net of smooth functions, namely

$$\begin{aligned} (f_\varepsilon )_{\varepsilon \in (0,1]}=(f*\psi _\varepsilon )_{\varepsilon \in (0,1]}. \end{aligned}$$
(3.2)
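For illustration, regularising the Heaviside function with the net \(\psi _\varepsilon \) produces smooth functions that agree with it outside an \(\varepsilon \)-neighbourhood of the jump; a numerical sketch in \(d=1\):

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]                       # grid step 0.001
H = (x >= 0.0).astype(float)           # the Heaviside function

def psi_eps(eps):
    # discretised psi_eps = eps^{-1} psi(x/eps); the prefactor is absorbed
    # by normalising the discrete integral to 1
    y = x / eps
    out = np.zeros_like(x)
    inside = np.abs(y) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out / (np.sum(out) * dx)

def regularise(f, eps):
    # discrete version of (3.2): f_eps = f * psi_eps
    return np.convolve(f, psi_eps(eps), mode='same') * dx

H_eps = regularise(H, 0.1)
i0 = np.argmin(np.abs(x))              # x = 0: the jump, smoothed to ~1/2
i1 = np.argmin(np.abs(x - 0.5))        # x = 0.5: outside supp(psi_eps(x - .))
print(round(float(H_eps[i0]), 1), round(float(H_eps[i1]), 1))  # 0.5 1.0
```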

Remark 3.2

The term “regularisation” of a function or distribution f, when used, refers to a net of smooth functions \((f_{\varepsilon })_{\varepsilon \in (0,1]}\) arising from convolution with a mollifying net (as in Definition 4). The term “approximation” is more general, in the sense that approximations do not necessarily arise from convolution with mollifying nets. For instance, if \((f_{\varepsilon })_{\varepsilon \in (0,1]}\) is a regularisation of f, then the net of functions \((\tilde{f}_{\varepsilon })_{\varepsilon \in (0,1]}\) defined by

$$\begin{aligned} \tilde{f}_{\varepsilon }=f_{\varepsilon } + \mathrm e^{-\frac{1}{\varepsilon }}, \end{aligned}$$
(3.3)

is an approximation of f, but it does not result from a regularisation.
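The perturbation in (3.3) is harmless because \(\mathrm e^{-1/\varepsilon }\) tends to zero faster than any power of \(\varepsilon \), so the two nets differ by an error that is negligible in the sense of Definition 8 below. A quick numerical check (illustrative Python, our own naming):

```python
import math

def negligible_vs_power(eps, k):
    # e^{-1/eps} eventually falls below eps^k for every fixed k > 0.
    return math.exp(-1.0 / eps) < eps ** k

# For small eps the exponential term is far below any fixed power of eps.
checks = [negligible_vs_power(1e-2, k) for k in (1, 5, 10)]
```

The comparison is only asymptotic: for moderate \(\varepsilon \) (say \(\varepsilon = 0.5\)) and large k the power can still win, which is why negligibility is a statement about \(\varepsilon \rightarrow 0\).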

Now, for a function (distribution) f, let \((f_\varepsilon )_{\varepsilon \in (0,1]}\) be a net of smooth functions approximating f, not necessarily coming from regularisation.

Definition 5

(Moderateness) Let X be a normed space of functions on \({\mathbb {R}}^d\) endowed with the norm \(\Vert \cdot \Vert _X\).

  1.

    A net of functions \((f_\varepsilon )_{\varepsilon \in (0,1]}\) from X is said to be X-moderate if there exists \(N\in {\mathbb {N}}_0\) such that

    $$\begin{aligned} \Vert f_\varepsilon \Vert _X \lesssim \omega (\varepsilon )^{-N}. \end{aligned}$$
    (3.4)
  2.

    For \(T>0\), a net of functions \((u_\varepsilon (\cdot ,\cdot ))_{\varepsilon \in (0,1]}\) from \(C\big ([0,T];H^{s}({\mathbb {R}}^d)\big )\cap C^1\big ([0,T]; L^{2}({\mathbb {R}}^d)\big )\) is said to be \(C\big ([0,T];H^{s}({\mathbb {R}}^d)\big )\cap C^1\big ([0,T];L^{2}({\mathbb {R}}^d)\big )\)-moderate if there exists \(N\in {\mathbb {N}}_0\) such that

    $$\begin{aligned} \sup _{t\in [0,T]}\Vert u_\varepsilon (t,\cdot )\Vert _1 \lesssim \omega (\varepsilon )^{-N}. \end{aligned}$$
    (3.5)
  3.

    For \(T>0\), a net of functions \((u_\varepsilon (\cdot ,\cdot ))_{\varepsilon \in (0,1]}\) from \(C\big ([0,T];H^{2s}({\mathbb {R}}^d)\big )\cap C^1\big ([0,T];H^{s}({\mathbb {R}}^d)\big )\) is said to be \(C\big ([0,T];H^{2s}({\mathbb {R}}^d)\big )\cap C^1\big ([0,T];H^{s}({\mathbb {R}}^d)\big )\)-moderate if there exists \(N\in {\mathbb {N}}_0\) such that

    $$\begin{aligned} \sup _{t\in [0,T]}\Vert u_\varepsilon (t,\cdot )\Vert _2 \lesssim \omega (\varepsilon )^{-N}. \end{aligned}$$
    (3.6)

For the second and third notions of moderateness, we will write \(C_1\)-moderate and \(C_2\)-moderate for short.

The following proposition states that moderateness as defined above is a natural assumption for compactly supported distributions. Indeed, we have:

Proposition 3.1

Let \(f\in {\mathcal {E}}^{'}({\mathbb {R}}^d)\) and let \((f_\varepsilon )_{\varepsilon \in (0,1]}\) be a regularisation of f obtained via convolution with a mollifying net \(\left( \psi _\varepsilon \right) _{\varepsilon \in (0,1]}\) (see Definition 4). Then, the net \((f_\varepsilon )_{\varepsilon \in (0,1]}\) is \(L^{p}({\mathbb {R}}^d)\)-moderate for any \(1\le p\le \infty \).

Proof

Fix \(p\in [1,\infty ]\) and let \(f\in {\mathcal {E}}^{'}({\mathbb {R}}^d)\). By the structure theorem for compactly supported distributions (see [31, Corollary 5.4.1]), there exist \(n\in {\mathbb {N}}\) and compactly supported functions \(f_{\alpha }\in C({\mathbb {R}}^{d})\) such that

$$\begin{aligned} {f}=\sum _{|\alpha | \le n}\partial ^{\alpha }f_{\alpha }, \end{aligned}$$

where \(|\alpha |\) is the length of the multi-index \(\alpha \). The convolution of f with a mollifying net \(\left( \psi _\varepsilon \right) _{\varepsilon \in (0,1]}\) yields

$$\begin{aligned} f*\psi _{\varepsilon }=\sum _{\vert \alpha \vert \le n}\partial ^{\alpha }f_{\alpha }*\psi _{\varepsilon }=\sum _{\vert \alpha \vert \le n}f_{\alpha }*\partial ^{\alpha }\psi _{\varepsilon }=\sum _{\vert \alpha \vert \le n}\varepsilon ^{-d-\vert \alpha \vert }f_{\alpha }*\partial ^{\alpha }\psi (x/\varepsilon ). \end{aligned}$$
(3.7)

Taking the \(L^p\) norm in (3.7) gives

$$\begin{aligned} \Vert f*\psi _{\varepsilon }\Vert _{L^p} \le \sum _{\vert \alpha \vert \le n}\varepsilon ^{-d-\vert \alpha \vert }\Vert f_{\alpha }*\partial ^{\alpha }\psi (x/\varepsilon )\Vert _{L^p}. \end{aligned}$$
(3.8)

Since \(f_{\alpha }\) and \(\psi \) are compactly supported, they belong to \(L^{q}({\mathbb {R}}^d)\) for every \(q\in [1,\infty ]\), and Young's convolution inequality applies for any \(p_1,p_2 \in [1,\infty ]\) satisfying \(\frac{1}{p_1}+\frac{1}{p_2}=\frac{1}{p}+1\). That is

$$\begin{aligned} \Vert f_{\alpha }*\partial ^{\alpha }\psi (x/\varepsilon )\Vert _{L^p} \le \Vert f_{\alpha }\Vert _{L^{p_1}}\Vert \partial ^{\alpha }\psi (x/\varepsilon )\Vert _{L^{p_2}} <\infty . \end{aligned}$$

It follows from (3.8) that \((f_\varepsilon )_{\varepsilon \in (0,1]}\) is \(L^{p}({\mathbb {R}}^d)\)-moderate. \(\square \)

Example 3.3

Let \((\psi _\varepsilon )_\varepsilon \) be the mollifying net given by \(\psi _\varepsilon (x) = \varepsilon ^{-d}\psi (\varepsilon ^{-1}x)\). Since \(\psi \) is bounded and compactly supported, we have:

  (1)

    For \(f(x)=\delta _{0}(x)\), we have \(f_{\varepsilon }(x) = \varepsilon ^{-d}\psi (\varepsilon ^{-1}x)\le C\varepsilon ^{-d}.\)

  (2)

    For \(f(x)=\delta _{0}^{2}(x)\), we can take \(f_{\varepsilon }(x) = \varepsilon ^{-2d}\psi ^{2}(\varepsilon ^{-1}x) \le C\varepsilon ^{-2d}.\)
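In dimension \(d=1\), these growth rates can be observed numerically. The sketch below (illustrative only; grid-based sup norms and all names are our own) checks that halving \(\varepsilon \) doubles the sup of the regularised \(\delta _0\) and quadruples that of the candidate for \(\delta _0^2\):

```python
import math

def bump(x):
    # Unnormalised 1-d Friedrichs mollifier; the constant is irrelevant for growth rates.
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def sup_norm(f, n=10_000):
    # Crude sup over an equispaced grid on [-1, 1] (the grid contains x = 0).
    return max(f(-1.0 + 2.0 * i / n) for i in range(n + 1))

def delta_reg(eps):
    # Regularisation of delta_0 in d = 1: psi_eps(x) = eps^{-1} psi(x/eps).
    return lambda x: bump(x / eps) / eps

def delta_sq_reg(eps):
    # Candidate approximation of delta_0^2 in d = 1: eps^{-2} psi^2(x/eps).
    return lambda x: (bump(x / eps) / eps) ** 2

# Halving eps should double one sup norm and quadruple the other.
r1 = sup_norm(delta_reg(0.1)) / sup_norm(delta_reg(0.2))
r2 = sup_norm(delta_sq_reg(0.1)) / sup_norm(delta_sq_reg(0.2))
```

Here r1 is 2 and r2 is 4 up to rounding, matching the one-dimensional moderateness rates \(\varepsilon ^{-1}\) and \(\varepsilon ^{-2}\).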

Now we are ready to introduce the notion of very weak solutions adapted to our problem. Here and in the sequel, we take \(\omega (\varepsilon )=\varepsilon \) in all the above definitions.

Definition 6

(Very weak solution) A net of functions \((u_{\varepsilon })_{\varepsilon }\in C([0,T];H^{s}({\mathbb {R}}^d))\cap C^1([0,T];L^{2}({\mathbb {R}}^d))\) is said to be a very weak solution to the Cauchy problem (3.1), if there exist

  • \(L^{\infty }({\mathbb {R}}^d)\)-moderate approximations \((a_{\varepsilon })_{\varepsilon }\) and \((b_{\varepsilon })_{\varepsilon }\) to a and b, with \(a_{\varepsilon } \ge 0\) and \(b_{\varepsilon } \ge 0\),

  • \(H^{s}({\mathbb {R}}^d)\)-moderate approximation \((u_{0,\varepsilon })_{\varepsilon }\) to \(u_0\),

  • \(L^{2}({\mathbb {R}}^d)\)-moderate approximation \((u_{1,\varepsilon })_{\varepsilon }\) to \(u_1\),

such that \((u_{\varepsilon })_{\varepsilon }\) solves the approximating problems

$$\begin{aligned} \bigg \{ \begin{array}{l} \partial _{t}^2u_{\varepsilon }(t,x) + (-\Delta )^{s}u_{\varepsilon }(t,x) + a_{\varepsilon }(x)u_{\varepsilon }(t,x) + b_{\varepsilon }(x)\partial _{t}u_{\varepsilon }(t,x)=0, \,\, (t,x)\in [0,T]\times {\mathbb {R}}^d,\\ u_{\varepsilon }(0,x)=u_{0,\varepsilon }(x),\quad \partial _{t}u_{\varepsilon }(0,x)=u_{1,\varepsilon }(x), \quad x\in {\mathbb {R}}^d, \end{array} \end{aligned}$$
(3.9)

for all \(\varepsilon \in (0,1]\), and is \(C_1\)-moderate.

Under the assumptions of Lemma 2.5, we also have the following alternative definition of a very weak solution to (3.1).

Definition 7

Let \(d>2s\). A net of functions \((u_{\varepsilon })_{\varepsilon }\in C([0,T];H^{2s}({\mathbb {R}}^d))\cap C^1([0,T];H^{s}({\mathbb {R}}^d))\) is said to be a very weak solution to the Cauchy problem (3.1), if there exist

  • \((L^{\frac{d}{s}}({\mathbb {R}}^d)\cap L^{\frac{d}{2s}}({\mathbb {R}}^d))\)-moderate approximation \((a_{\varepsilon })_{\varepsilon }\) to a, with \(a_{\varepsilon } \ge 0\),

  • \(L^{\frac{d}{s}}({\mathbb {R}}^d)\)-moderate approximation \((b_{\varepsilon })_{\varepsilon }\) to b, with \(b_{\varepsilon } \ge 0\),

  • \(H^{2s}({\mathbb {R}}^d)\)-moderate approximation \((u_{0,\varepsilon })_{\varepsilon }\) to \(u_0\),

  • \(H^{s}({\mathbb {R}}^d)\)-moderate approximation \((u_{1,\varepsilon })_{\varepsilon }\) to \(u_1\),

such that \((u_{\varepsilon })_{\varepsilon }\) solves the approximating problems (as in Definition 6) for all \(\varepsilon \in (0,1]\), and is \(C_2\)-moderate.

Under the assumptions in Definitions 6 and 7, the existence of a very weak solution is straightforward.

Theorem 3.2

Assume that there exist \(\big \{L^{\infty }({\mathbb {R}}^d),L^{\infty }({\mathbb {R}}^d),H^{s}({\mathbb {R}}^d),L^2({\mathbb {R}}^d)\big \}\)-moderate approximations to \(a,b,u_0\) and \(u_1\), respectively, with \(a_{\varepsilon } \ge 0\) and \(b_{\varepsilon } \ge 0\). Then, the Cauchy problem (3.1) has a very weak solution.

Proof

Let \(a,b,u_0\) and \(u_1\) be as in the assumptions. Then there exist \(N_1,N_2,N_3,N_4 \in {\mathbb {N}}\) such that

$$\begin{aligned} \Vert a_{\varepsilon }\Vert _{L^{\infty }} \lesssim \varepsilon ^{-N_1},\quad \Vert b_{\varepsilon }\Vert _{L^{\infty }} \lesssim \varepsilon ^{-N_2}, \end{aligned}$$

and

$$\begin{aligned} \Vert u_{0,\varepsilon }\Vert _{H^s} \lesssim \varepsilon ^{-N_3},\quad \Vert u_{1,\varepsilon }\Vert _{L^2} \lesssim \varepsilon ^{-N_4}. \end{aligned}$$

It follows from the energy estimate (2.14), that

$$\begin{aligned} \Vert u_{\varepsilon }(t,\cdot )\Vert _1 \lesssim \varepsilon ^{-N_1-N_2-\max \{N_3,N_4\}}, \end{aligned}$$

uniformly in \(t\in [0,T]\), which means that the net \((u_{\varepsilon })_{\varepsilon }\) is \(C_1\)-moderate. This concludes the proof. \(\square \)
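The exponent bookkeeping behind the last estimate is elementary: for \(\varepsilon \in (0,1]\) one has \(1+\varepsilon ^{-N}\le 2\varepsilon ^{-N}\) and \(\varepsilon ^{-N_3}+\varepsilon ^{-N_4}\le 2\varepsilon ^{-\max \{N_3,N_4\}}\), so the product of the bounds is at most \(8\,\varepsilon ^{-N_1-N_2-\max \{N_3,N_4\}}\). A numerical sanity check (illustrative only; the exponents below are arbitrary choices of ours):

```python
def bound_holds(eps, n1, n2, n3, n4):
    # (1 + eps^{-N1})(1 + eps^{-N2})(eps^{-N3} + eps^{-N4})
    # <= 8 eps^{-N1-N2-max(N3,N4)} for eps in (0,1], since
    # 1 + eps^{-N} <= 2 eps^{-N} and eps^{-N3} + eps^{-N4} <= 2 eps^{-max(N3,N4)}.
    lhs = (1 + eps ** -n1) * (1 + eps ** -n2) * (eps ** -n3 + eps ** -n4)
    rhs = 8 * eps ** -(n1 + n2 + max(n3, n4))
    return lhs <= rhs

checks = [bound_holds(eps, 2, 1, 3, 2) for eps in (1.0, 0.5, 0.1, 1e-3)]
```

The constant 8 is absorbed by the symbol \(\lesssim \), so only the exponent \(N_1+N_2+\max \{N_3,N_4\}\) survives in the moderateness estimate.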

As an alternative to Theorem 3.2, in the case when \(d>2s\) and the equation coefficients and data satisfy the hypotheses of Definition 7, we have the following theorem, whose proof we omit since it is similar to that of Theorem 3.2.

Theorem 3.3

Assume that there exist \(\big \{(L^{\frac{d}{s}}({\mathbb {R}}^d)\cap L^{\frac{d}{2s}}({\mathbb {R}}^d)),L^{\frac{d}{s}}({\mathbb {R}}^d),H^{2s}({\mathbb {R}}^d),H^{s}({\mathbb {R}}^d)\big \}\)-moderate approximations to \(a,b,u_0\) and \(u_1\), respectively, with \(a_{\varepsilon } \ge 0\) and \(b_{\varepsilon } \ge 0\). Then, the Cauchy problem (3.1) has a very weak solution.

3.2 Uniqueness

In what follows we prove the uniqueness of the very weak solution to the Cauchy problem (3.1) in both situations: when very weak solutions exist under the assumptions of Theorem 3.2, and under those of Theorem 3.3. We need the following definition.

Definition 8

(Negligibility) Let X be a normed space endowed with the norm \(\Vert \cdot \Vert _X\). A net of functions \((f_\varepsilon )_{\varepsilon \in (0,1]}\) from X is said to be X-negligible, if the estimate

$$\begin{aligned} \Vert f_\varepsilon \Vert _X \lesssim \varepsilon ^k, \end{aligned}$$
(3.10)

is valid for all \(k>0\).

Roughly speaking, we understand uniqueness of the very weak solution to the Cauchy problem (3.1) in the sense that negligible changes in the approximations of the equation coefficients and initial data lead to negligible changes in the corresponding very weak solutions. More precisely:

Definition 9

(Uniqueness) We say that the Cauchy problem (3.1) has a unique very weak solution, if for all families of approximations \((a_{\varepsilon })_{\varepsilon }\), \((\tilde{a}_{\varepsilon })_{\varepsilon }\) and \((b_{\varepsilon })_{\varepsilon }\), \((\tilde{b}_{\varepsilon })_{\varepsilon }\) for the equation coefficients a and b, and families of approximations \((u_{0,\varepsilon })_{\varepsilon }\), \((\tilde{u}_{0,\varepsilon })_{\varepsilon }\) and \((u_{1,\varepsilon })_{\varepsilon }\), \((\tilde{u}_{1,\varepsilon })_{\varepsilon }\) for the Cauchy data \(u_0\) and \(u_1\), such that the nets \((a_{\varepsilon }-\tilde{a}_{\varepsilon })_{\varepsilon }\), \((b_{\varepsilon }-\tilde{b}_{\varepsilon })_{\varepsilon }\), \((u_{0,\varepsilon }-\tilde{u}_{0,\varepsilon })_{\varepsilon }\) and \((u_{1,\varepsilon }-\tilde{u}_{1,\varepsilon })_{\varepsilon }\) are \(\big \{L^{\infty }({\mathbb {R}}^d),L^{\infty }({\mathbb {R}}^d),H^{s}({\mathbb {R}}^d),L^2({\mathbb {R}}^d)\big \}\)-negligible, it follows that the net

$$\begin{aligned} \big (u_{\varepsilon }(t,\cdot )-\tilde{u}_{\varepsilon }(t,\cdot )\big )_{\varepsilon } \end{aligned}$$

is \(L^2({\mathbb {R}}^d)\)-negligible for all \(t\in [0,T]\), where \((u_{\varepsilon })_{\varepsilon }\) and \((\tilde{u}_{\varepsilon })_{\varepsilon }\) are the families of solutions to the approximating Cauchy problems

$$\begin{aligned} \bigg \{ \begin{array}{l} \partial _{t}^2u_{\varepsilon }(t,x) + (-\Delta )^{s}u_{\varepsilon }(t,x) + a_{\varepsilon }(x)u_{\varepsilon }(t,x) + b_{\varepsilon }(x)\partial _{t}u_{\varepsilon }(t,x)=0, \,\, (t,x)\in [0,T]\times {\mathbb {R}}^d,\\ u_{\varepsilon }(0,x)=u_{0,\varepsilon }(x),\quad \partial _{t}u_{\varepsilon }(0,x)=u_{1,\varepsilon }(x), \quad x\in {\mathbb {R}}^d, \end{array} \end{aligned}$$
(3.11)

and

$$\begin{aligned} \bigg \{ \begin{array}{l} \partial _{t}^2\tilde{u}_{\varepsilon }(t,x) + (-\Delta )^{s}\tilde{u}_{\varepsilon }(t,x) + \tilde{a}_{\varepsilon }(x)\tilde{u}_{\varepsilon }(t,x) + \tilde{b}_{\varepsilon }(x)\partial _{t}\tilde{u}_{\varepsilon }(t,x)=0, \,\, (t,x)\in [0,T]\times {\mathbb {R}}^d,\\ \tilde{u}_{\varepsilon }(0,x)=\tilde{u}_{0,\varepsilon }(x),\quad \partial _{t}\tilde{u}_{\varepsilon }(0,x)=\tilde{u}_{1,\varepsilon }(x), \quad x\in {\mathbb {R}}^d, \end{array} \end{aligned}$$
(3.12)

respectively.

Theorem 3.4

Assume that \(a,b \ge 0\), in the sense that their approximating nets are non-negative. Under the conditions of Theorem 3.2, the very weak solution to the Cauchy problem (3.1) is unique.

Proof

Let \((u_{\varepsilon })_{\varepsilon }\) and \((\tilde{u}_{\varepsilon })_{\varepsilon }\) be the families of solutions to (3.11) and (3.12) and assume that the nets \((a_{\varepsilon }-\tilde{a}_{\varepsilon })_{\varepsilon }\), \((b_{\varepsilon }-\tilde{b}_{\varepsilon })_{\varepsilon }\), \((u_{0,\varepsilon }-\tilde{u}_{0,\varepsilon })_{\varepsilon }\) and \((u_{1,\varepsilon }-\tilde{u}_{1,\varepsilon })_{\varepsilon }\) are \(L^{\infty }({\mathbb {R}}^d)\), \(L^{\infty }({\mathbb {R}}^d)\), \(H^{s}({\mathbb {R}}^d)\), \(L^2({\mathbb {R}}^d)\)-negligible, respectively. The function \(U_{\varepsilon }(t,x)\) defined by

$$\begin{aligned} U_{\varepsilon }(t,x):=u_{\varepsilon }(t,x)-\tilde{u}_{\varepsilon }(t,x), \end{aligned}$$

satisfies

$$\begin{aligned} \bigg \{ \begin{array}{l} \partial _{t}^2U_{\varepsilon }(t,x) + (-\Delta )^{s}U_{\varepsilon }(t,x) + a_{\varepsilon }(x)U_{\varepsilon }(t,x) + b_{\varepsilon }(x)\partial _{t}U_{\varepsilon }(t,x)=f_{\varepsilon }(t,x),\\ U_{\varepsilon }(0,x)=(u_{0,\varepsilon }-\tilde{u}_{0,\varepsilon })(x),\quad \partial _{t}U_{\varepsilon }(0,x)=(u_{1,\varepsilon }-\tilde{u}_{1,\varepsilon })(x), \end{array} \end{aligned}$$
(3.13)

for \((t,x)\in [0,T]\times {\mathbb {R}}^d\), where,

$$\begin{aligned} f_{\varepsilon }(t,x):=\big (\tilde{a}_{\varepsilon }(x)-a_{\varepsilon }(x)\big )\tilde{u}_{\varepsilon }(t,x) + \big (\tilde{b}_{\varepsilon }(x)-b_{\varepsilon }(x)\big )\partial _{t}\tilde{u}_{\varepsilon }(t,x). \end{aligned}$$

According to Duhamel’s principle (see Proposition 2.3), the solution to (3.13) has the following representation

$$\begin{aligned} U_{\varepsilon }(t,x) = W_{\varepsilon }(t,x) + \mathop {\int }\limits _{0}^{t}V_{\varepsilon }(t,x;\tau )\mathrm d\tau , \end{aligned}$$
(3.14)

where \(W_{\varepsilon }(t,x)\) is the solution to the homogeneous problem

$$\begin{aligned} \bigg \{ \begin{array}{l} \partial _{t}^2W_{\varepsilon }(t,x) + (-\Delta )^{s}W_{\varepsilon }(t,x) + a_{\varepsilon }(x)W_{\varepsilon }(t,x) + b_{\varepsilon }(x)\partial _{t}W_{\varepsilon }(t,x)=0,\\ W_{\varepsilon }(0,x)=(u_{0,\varepsilon }-\tilde{u}_{0,\varepsilon })(x),\quad \partial _{t}W_{\varepsilon }(0,x)=(u_{1,\varepsilon }-\tilde{u}_{1,\varepsilon })(x), \end{array} \end{aligned}$$
(3.15)

for \((t,x)\in [0,T]\times {\mathbb {R}}^d\), and \(V_{\varepsilon }(t,x;\tau )\) solves

$$\begin{aligned} \bigg \{ \begin{array}{l} \partial _{t}^2V_{\varepsilon }(t,x;\tau ) + (-\Delta )^{s}V_{\varepsilon }(t,x;\tau ) + a_{\varepsilon }(x)V_{\varepsilon }(t,x;\tau ) + b_{\varepsilon }(x)\partial _{t}V_{\varepsilon }(t,x;\tau )=0,\\ V_{\varepsilon }(\tau ,x;\tau )=0,\quad \partial _{t}V_{\varepsilon }(\tau ,x;\tau )=f_{\varepsilon }(\tau ,x), \end{array} \end{aligned}$$
(3.16)

for \((t,x)\in [\tau ,T]\times {\mathbb {R}}^d\) and \(\tau \in [0,T]\). By taking the \(L^2\)-norm on both sides of (3.14) and using Minkowski’s integral inequality, we get

$$\begin{aligned} \Vert U_{\varepsilon }(t,\cdot )\Vert _{L^2} \le \Vert W_{\varepsilon }(t,\cdot )\Vert _{L^2} + \mathop {\int }\limits _{0}^{t}\Vert V_{\varepsilon }(t,\cdot ;\tau )\Vert _{L^2}\mathrm d\tau . \end{aligned}$$
(3.17)

The energy estimate (2.14) allows us to control \(\Vert W_{\varepsilon }(t,\cdot )\Vert _{L^2}\) and \(\Vert V_{\varepsilon }(t,\cdot ;\tau )\Vert _{L^2}\) to get

$$\begin{aligned} \Vert W_{\varepsilon }(t,\cdot )\Vert _{L^2} \lesssim \Big (1+\Vert a_{\varepsilon }\Vert _{L^{\infty }}\Big )\Big (1+\Vert b_{\varepsilon }\Vert _{L^{\infty }}\Big )\bigg [\Vert u_{0,\varepsilon }-\tilde{u}_{0,\varepsilon }\Vert _{H^s} + \Vert u_{1,\varepsilon }-\tilde{u}_{1,\varepsilon }\Vert _{L^2}\bigg ], \end{aligned}$$

and

$$\begin{aligned} \Vert V_{\varepsilon }(t,\cdot ;\tau )\Vert _{L^2} \lesssim \Big (1+\Vert a_{\varepsilon }\Vert _{L^{\infty }}\Big )\Big (1+\Vert b_{\varepsilon }\Vert _{L^{\infty }}\Big )\bigg [\Vert f_{\varepsilon }(\tau ,\cdot )\Vert _{L^2}\bigg ]. \end{aligned}$$

By taking into consideration that \(t\in [0,T]\), it follows from (3.17) that

$$\begin{aligned}&\Vert U_{\varepsilon }(t,\cdot )\Vert _{L^2} \lesssim \Big (1+\Vert a_{\varepsilon }\Vert _{L^{\infty }}\Big )\Big (1+\Vert b_{\varepsilon }\Vert _{L^{\infty }}\Big ) \bigg [\Vert u_{0,\varepsilon }-\tilde{u}_{0,\varepsilon }\Vert _{H^s} +\nonumber \\&\quad \Vert u_{1,\varepsilon }-\tilde{u}_{1,\varepsilon }\Vert _{L^2} + \mathop {\int }\limits _{0}^{T}\Vert f_{\varepsilon }(\tau ,\cdot )\Vert _{L^2}\mathrm d\tau \bigg ], \end{aligned}$$
(3.18)

where \(\Vert f_{\varepsilon }(\tau ,\cdot )\Vert _{L^2}\) is estimated as follows,

$$\begin{aligned} \Vert f_{\varepsilon }(\tau ,\cdot )\Vert _{L^2}&\le \Vert (\tilde{a}_{\varepsilon }(\cdot )-a_{\varepsilon }(\cdot ))\tilde{u}_{\varepsilon }(\tau ,\cdot )\Vert _{L^2} + \Vert (\tilde{b}_{\varepsilon }(\cdot )-b_{\varepsilon }(\cdot ))\partial _{t}\tilde{u}_{\varepsilon }(\tau ,\cdot )\Vert _{L^2}\\&\nonumber \le \Vert \tilde{a}_{\varepsilon } - a_{\varepsilon }\Vert _{L^{\infty }}\Vert \tilde{u}_{\varepsilon }(\tau ,\cdot )\Vert _{L^2} + \Vert \tilde{b}_{\varepsilon } - b_{\varepsilon }\Vert _{L^{\infty }}\Vert \partial _{t}\tilde{u}_{\varepsilon }(\tau ,\cdot )\Vert _{L^2}. \end{aligned}$$
(3.19)

On the one hand, the nets \((a_{\varepsilon })_{\varepsilon }\) and \((b_{\varepsilon })_{\varepsilon }\) are \(L^{\infty }\)-moderate by assumption, and the net \((\tilde{u}_{\varepsilon })_{\varepsilon }\) is \(C_1\)-moderate being a very weak solution to (3.12). On the other hand, the nets \((a_{\varepsilon }-\tilde{a}_{\varepsilon })_{\varepsilon }\), \((b_{\varepsilon }-\tilde{b}_{\varepsilon })_{\varepsilon }\), \((u_{0,\varepsilon }-\tilde{u}_{0,\varepsilon })_{\varepsilon }\) and \((u_{1,\varepsilon }-\tilde{u}_{1,\varepsilon })_{\varepsilon }\) are \(L^{\infty }({\mathbb {R}}^d)\), \(L^{\infty }({\mathbb {R}}^d)\), \(H^{s}({\mathbb {R}}^d)\), \(L^2({\mathbb {R}}^d)\)-negligible. It follows from (3.18) combined with (3.19) that

$$\begin{aligned} \Vert U_{\varepsilon }(t,\cdot )\Vert _{L^2} \lesssim \varepsilon ^{k}, \end{aligned}$$

for all \(k>0\), showing the uniqueness of the very weak solution. \(\square \)

The analogues of Definition 9 and Theorem 3.4 in the case when \(d>2s\), in the setting of Theorem 3.3, read as follows.

Definition 10

We say that the Cauchy problem (3.1) has a unique very weak solution, if for all families of approximations \((a_{\varepsilon })_{\varepsilon }\), \((\tilde{a}_{\varepsilon })_{\varepsilon }\) and \((b_{\varepsilon })_{\varepsilon }\), \((\tilde{b}_{\varepsilon })_{\varepsilon }\) for the equation coefficients a and b, and families of approximations \((u_{0,\varepsilon })_{\varepsilon }\), \((\tilde{u}_{0,\varepsilon })_{\varepsilon }\) and \((u_{1,\varepsilon })_{\varepsilon }\), \((\tilde{u}_{1,\varepsilon })_{\varepsilon }\) for the Cauchy data \(u_0\) and \(u_1\), such that the nets \((a_{\varepsilon }-\tilde{a}_{\varepsilon })_{\varepsilon }\), \((b_{\varepsilon }-\tilde{b}_{\varepsilon })_{\varepsilon }\), \((u_{0,\varepsilon }-\tilde{u}_{0,\varepsilon })_{\varepsilon }\) and \((u_{1,\varepsilon }-\tilde{u}_{1,\varepsilon })_{\varepsilon }\) are \(\big \{(L^{\frac{d}{s}}({\mathbb {R}}^d)\cap L^{\frac{d}{2s}}({\mathbb {R}}^d)),L^{\frac{d}{s}}({\mathbb {R}}^d),H^{2s}({\mathbb {R}}^d),H^{s}({\mathbb {R}}^d)\big \}\)-negligible, it follows that the net \(\big (u_{\varepsilon }(t,\cdot )-\tilde{u}_{\varepsilon }(t,\cdot )\big )_{\varepsilon \in (0,1]}\), is \(L^2({\mathbb {R}}^d)\)-negligible for all \(t\in [0,T]\), where \((u_{\varepsilon })_{\varepsilon }\) and \((\tilde{u}_{\varepsilon })_{\varepsilon }\) are the families of solutions to the corresponding approximating Cauchy problems.

Theorem 3.5

Let \(d>2s\) and assume that \(a,b \ge 0\), in the sense that their approximating nets are non-negative. Under the assumptions of Theorem 3.3, the very weak solution to the Cauchy problem (3.1) is unique.

4 Coherence with classical theory

The question to be answered here is the following: in the case when \(a,b\in L^{\infty }({\mathbb {R}}^d)\), \(u_0 \in H^{s}({\mathbb {R}}^d)\) and \(u_1 \in L^{2}({\mathbb {R}}^d)\), or alternatively when \((a,b)\in (L^{\frac{d}{s}}({\mathbb {R}}^d)\cap L^{\frac{d}{2s}}({\mathbb {R}}^d))\times L^{\frac{d}{s}}({\mathbb {R}}^d)\), \(u_0 \in H^{2s}({\mathbb {R}}^d)\) and \(u_1 \in H^{s}({\mathbb {R}}^d)\), and a classical solution to the Cauchy problem

$$\begin{aligned} \bigg \lbrace \begin{array}{l} u_{tt}(t,x) + (-\Delta )^{s}u(t,x) + a(x)u(t,x) + b(x)u_{t}(t,x)=0, \,\, (t,x)\in [0,T]\times {\mathbb {R}}^d,\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x), \quad x\in {\mathbb {R}}^d, \end{array} \end{aligned}$$
(4.1)

exists, does the very weak solution obtained via regularisation techniques recapture it?

Theorem 4.1

Let \(\psi \) be a Friedrichs mollifier. Let \(a,b\in L^{\infty }({\mathbb {R}}^d)\) be non-negative and suppose that \(u_0 \in H^{s}({\mathbb {R}}^d)\) and \(u_1 \in L^{2}({\mathbb {R}}^d)\). Then, for any regularising families \((a_{\varepsilon })_{\varepsilon }=(a*\psi _{\varepsilon })_{\varepsilon }\) and \((b_{\varepsilon })_{\varepsilon }=(b*\psi _{\varepsilon })_{\varepsilon }\) for the equation coefficients, satisfying

$$\begin{aligned} \Vert a_{\varepsilon } - a\Vert _{L^{\infty }} \rightarrow 0,\quad \text {and}\quad \Vert b_{\varepsilon } - b\Vert _{L^{\infty }} \rightarrow 0, \end{aligned}$$
(4.2)

and any regularising families \((u_{0,\varepsilon })_{\varepsilon }=(u_{0}*\psi _{\varepsilon })_{\varepsilon }\) and \((u_{1,\varepsilon })_{\varepsilon }=(u_{1}*\psi _{\varepsilon })_{\varepsilon }\) for the initial data, the net \((u_{\varepsilon })_{\varepsilon }\) converges to the classical solution (given by Lemma 2.4) of the Cauchy problem (4.1) in \(L^{2}\) as \(\varepsilon \rightarrow 0\).

Proof

Let \((u_{\varepsilon })_{\varepsilon }\) be the very weak solution given by Theorem 3.2 and u the classical one, as in Lemma 2.4. The classical solution satisfies

$$\begin{aligned} \bigg \lbrace \begin{array}{l} u_{tt}(t,x) + (-\Delta )^{s}u(t,x) + a(x)u(t,x) + b(x)u_{t}(t,x)=0, \,\, (t,x)\in [0,T]\times {\mathbb {R}}^d,\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x), \quad x\in {\mathbb {R}}^d, \end{array} \end{aligned}$$
(4.3)

and \((u_{\varepsilon })_{\varepsilon }\) solves

$$\begin{aligned} \bigg \{ \begin{array}{l} \partial _{t}^2u_{\varepsilon }(t,x) + (-\Delta )^{s}u_{\varepsilon }(t,x) + a_{\varepsilon }(x)u_{\varepsilon }(t,x) + b_{\varepsilon }(x)\partial _{t}u_{\varepsilon }(t,x)=0, \,\, (t,x)\in [0,T]\times {\mathbb {R}}^d,\\ u_{\varepsilon }(0,x)=u_{0,\varepsilon }(x),\quad \partial _{t}u_{\varepsilon }(0,x)=u_{1,\varepsilon }(x), \quad x\in {\mathbb {R}}^d. \end{array} \end{aligned}$$
(4.4)

Denoting \(U_{\varepsilon }(t,x):=u_{\varepsilon }(t,x)-u(t,x)\), we have that \(U_{\varepsilon }\) solves the Cauchy problem

$$\begin{aligned} \bigg \{ \begin{array}{l} \partial _{t}^2U_{\varepsilon }(t,x) + (-\Delta )^{s}U_{\varepsilon }(t,x) + a_{\varepsilon }(x)U_{\varepsilon }(t,x) + b_{\varepsilon }(x)\partial _{t}U_{\varepsilon }(t,x)=\Theta _{\varepsilon }(t,x),\\ U_{\varepsilon }(0,x)=(u_{0,\varepsilon }-u_0)(x),\quad \partial _{t}U_{\varepsilon }(0,x)=(u_{1,\varepsilon }-u_1)(x), \end{array} \end{aligned}$$
(4.5)

where

$$\begin{aligned} \Theta _{\varepsilon }(t,x):= -\big (a_{\varepsilon }(x)-a(x)\big )u(t,x) - \big (b_{\varepsilon }(x)-b(x)\big )\partial _{t}u(t,x). \end{aligned}$$
(4.6)

Thanks to Duhamel’s principle, \(U_{\varepsilon }\) can be represented by

$$\begin{aligned} U_{\varepsilon }(t,x) = W_{\varepsilon }(t,x) + \mathop {\int }\limits _{0}^{t}V_{\varepsilon }(t,x;\tau )\mathrm d\tau , \end{aligned}$$
(4.7)

where \(W_{\varepsilon }(t,x)\) is the solution to the homogeneous problem

$$\begin{aligned} \bigg \{ \begin{array}{l} \partial _{t}^2W_{\varepsilon }(t,x) + (-\Delta )^{s}W_{\varepsilon }(t,x) + a_{\varepsilon }(x)W_{\varepsilon }(t,x) + b_{\varepsilon }(x)\partial _{t}W_{\varepsilon }(t,x)=0,\\ W_{\varepsilon }(0,x)=(u_{0,\varepsilon }-u_0)(x),\quad \partial _{t}W_{\varepsilon }(0,x)=(u_{1,\varepsilon }-u_1)(x), \end{array} \end{aligned}$$
(4.8)

for \((t,x)\in [0,T]\times {\mathbb {R}}^d\), and \(V_{\varepsilon }(t,x;\tau )\) solves

$$\begin{aligned} \bigg \{ \begin{array}{l} \partial _{t}^2V_{\varepsilon }(t,x;\tau ) + (-\Delta )^{s}V_{\varepsilon }(t,x;\tau ) + a_{\varepsilon }(x)V_{\varepsilon }(t,x;\tau ) + b_{\varepsilon }(x)\partial _{t}V_{\varepsilon }(t,x;\tau )=0,\\ V_{\varepsilon }(\tau ,x;\tau )=0,\quad \partial _{t}V_{\varepsilon }(\tau ,x;\tau )=\Theta _{\varepsilon }(\tau ,x), \end{array} \end{aligned}$$
(4.9)

for \((t,x)\in [\tau ,T]\times {\mathbb {R}}^d\) and \(\tau \in [0,T]\). We take the \(L^2\)-norm in (4.7) and we argue as in the proof of Theorem 3.4. We obtain

$$\begin{aligned} \Vert U_{\varepsilon }(t,\cdot )\Vert _{L^2} \le \Vert W_{\varepsilon }(t,\cdot )\Vert _{L^2} + \mathop {\int }\limits _{0}^{t}\Vert V_{\varepsilon }(t,\cdot ;\tau )\Vert _{L^2}\mathrm d\tau , \end{aligned}$$
(4.10)

where

$$\begin{aligned} \Vert W_{\varepsilon }(t,\cdot )\Vert _{L^2} \lesssim \Big (1+\Vert a_{\varepsilon }\Vert _{L^{\infty }}\Big )\Big (1+\Vert b_{\varepsilon }\Vert _{L^{\infty }}\Big )\bigg [\Vert u_{0,\varepsilon }-u_{0}\Vert _{H^s} + \Vert u_{1,\varepsilon }-u_{1}\Vert _{L^2}\bigg ], \end{aligned}$$

and

$$\begin{aligned} \Vert V_{\varepsilon }(t,\cdot ;\tau )\Vert _{L^2} \lesssim \Big (1+\Vert a_{\varepsilon }\Vert _{L^{\infty }}\Big )\Big (1+\Vert b_{\varepsilon }\Vert _{L^{\infty }}\Big )\bigg [\Vert \Theta _{\varepsilon }(\tau ,\cdot )\Vert _{L^2}\bigg ], \end{aligned}$$

by the energy estimate from Lemma 2.4, and \(\Theta _{\varepsilon }\) is estimated by

$$\begin{aligned} \Vert \Theta _{\varepsilon }(\tau ,\cdot )\Vert _{L^2} \le \Vert a_{\varepsilon } - a\Vert _{L^{\infty }}\Vert u(\tau ,\cdot )\Vert _{L^2} + \Vert b_{\varepsilon } - b\Vert _{L^{\infty }}\Vert \partial _{t}u(\tau ,\cdot )\Vert _{L^2}. \end{aligned}$$
(4.11)

First, one observes that \(\Vert a_{\varepsilon }\Vert _{L^{\infty }}<\infty \) and \(\Vert b_{\varepsilon }\Vert _{L^{\infty }}<\infty \) uniformly in \(\varepsilon \), since \(a,b\in L^{\infty }({\mathbb {R}}^d)\); moreover, \(\Vert u(\tau ,\cdot )\Vert _{L^2}\) and \(\Vert \partial _{t}u(\tau ,\cdot )\Vert _{L^2}\) are bounded as well, since u is a classical solution to (4.1). This, together with

$$\begin{aligned} \Vert a_{\varepsilon } - a\Vert _{L^{\infty }} \rightarrow 0,\quad \text {and}\quad \Vert b_{\varepsilon } - b\Vert _{L^{\infty }} \rightarrow 0,\quad \text {as } \varepsilon \rightarrow 0, \end{aligned}$$

from the assumptions, and

$$\begin{aligned} \Vert u_{0,\varepsilon }-u_{0}\Vert _{H^s} \rightarrow 0,\quad \Vert u_{1,\varepsilon }-u_{1}\Vert _{L^2} \rightarrow 0,\quad \text {as } \varepsilon \rightarrow 0, \end{aligned}$$

shows that

$$\begin{aligned} \Vert U_{\varepsilon }(t,\cdot )\Vert _{L^2} \rightarrow 0,\quad \text {as } \varepsilon \rightarrow 0, \end{aligned}$$

uniformly in \(t\in [0,T]\), and this finishes the proof. \(\square \)

In the case when a classical solution exists in the sense of Lemma 2.5, the coherence theorem reads as follows. We omit the proof, since it is similar to that of Theorem 4.1.

Theorem 4.2

Let \(\psi \) be a Friedrichs mollifier. Let \((a,b)\in (L^{\frac{d}{s}}({\mathbb {R}}^d)\cap L^{\frac{d}{2s}}({\mathbb {R}}^d))\times L^{\frac{d}{s}}({\mathbb {R}}^d)\) be non-negative and suppose that \(u_0 \in H^{2s}({\mathbb {R}}^d)\) and \(u_1 \in H^{s}({\mathbb {R}}^d)\). Then, for any regularising families \((a_{\varepsilon })_{\varepsilon }=(a*\psi _{\varepsilon })_{\varepsilon }\) and \((b_{\varepsilon })_{\varepsilon }=(b*\psi _{\varepsilon })_{\varepsilon }\) for the equation coefficients, and any regularising families \((u_{0,\varepsilon })_{\varepsilon }=(u_{0}*\psi _{\varepsilon })_{\varepsilon }\) and \((u_{1,\varepsilon })_{\varepsilon }=(u_{1}*\psi _{\varepsilon })_{\varepsilon }\) for the initial data, the net \((u_{\varepsilon })_{\varepsilon }\) converges to the classical solution (given by Lemma 2.5) of the Cauchy problem (4.1) in \(L^{2}\) as \(\varepsilon \rightarrow 0\).

Remark 4.1

In Theorem 4.1, we proved the coherence result, provided that

$$\begin{aligned} \Vert a_{\varepsilon } - a\Vert _{L^{\infty }} \rightarrow 0,\quad \text {and}\quad \Vert b_{\varepsilon } - b\Vert _{L^{\infty }} \rightarrow 0, \end{aligned}$$

as \(\varepsilon \rightarrow 0\). This holds in particular if the coefficients are taken in \(C_0 ({\mathbb {R}}^d)\), the space of continuous functions on \({\mathbb {R}}^d\) vanishing at infinity, which is a Banach space when endowed with the \(L^{\infty }\)-norm. For more details, see Section 3.1.10 in [32].
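As a numerical companion to this remark (a one-dimensional sketch with our own choice of grid, discretisation and test functions), mollifying the \(C_0\) coefficient \(a(x)=\mathrm e^{-x^2}\) gives sup-norm convergence as \(\varepsilon \) shrinks, while a jump discontinuity keeps a sup-norm error of about 1/2 near the jump:

```python
import math

def bump(x):
    # Smooth bump supported in |x| < 1 (unnormalised).
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def mollify(f, eps, x, n=1000):
    # Discrete, grid-normalised (f * psi_eps)(x) over the support [-eps, eps].
    h = 2.0 * eps / n
    ys = [-eps + (i + 0.5) * h for i in range(n)]
    ws = [bump(y / eps) for y in ys]
    return sum(f(x - y) * w for y, w in zip(ys, ws)) / sum(ws)

def sup_error(f, eps, grid):
    # Sampled sup-norm distance between f and its mollification.
    return max(abs(mollify(f, eps, x) - f(x)) for x in grid)

a = lambda x: math.exp(-x * x)            # continuous coefficient in C_0(R)
step = lambda x: 1.0 if x >= 0 else 0.0   # jump discontinuity at the origin
grid = [-3.0 + 6.0 * i / 150 for i in range(151)]

errs = [sup_error(a, eps, grid) for eps in (0.4, 0.1)]
jump_err = sup_error(step, 0.1, grid)
```

For the continuous coefficient the error shrinks with \(\varepsilon \), in line with (4.2); for the step function the sup-norm error stays bounded away from zero, which is why \(L^{\infty }\)-convergence of the regularised coefficients is a genuine assumption.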