
1 Introduction

The non-linear evolution equation

$$\displaystyle{ \partial _{x}(\partial _{t}u + \partial _{x}f(u) - \beta \partial _{xxx}^{3}u) = \gamma u, }$$
(1)

with \(\beta,\,\gamma \in \mathbb{R}\) and \(f(u) = \frac{{u}^{2}} {2}\) was derived by Ostrovsky [20] to model small-amplitude long waves in a rotating fluid of finite depth. This equation generalizes the Korteweg-de Vries equation (corresponding to γ = 0) by an additional term induced by the Coriolis force. It is deduced by considering two asymptotic expansions of the shallow water equations, first with respect to the rotation frequency and then with respect to the amplitude of the waves [8].

Mathematical properties of the Ostrovsky equation (1) have recently been studied in great depth, including local and global well-posedness in the energy space [7, 12, 14, 25], stability of solitary waves [10, 13, 15], and convergence of solutions in the Korteweg-de Vries limit [11, 15]. We shall consider the limit of no high-frequency dispersion, β = 0, in which case (1) reads

$$\displaystyle{ \partial _{x}(\partial _{t}u + \partial _{x}f(u)) = \gamma u. }$$
(2)

In this form, Eq. (2) is known under various names, such as the reduced Ostrovsky equation [21, 23], the Ostrovsky-Hunter equation [3], the short-wave equation [8], and the Vakhnenko equation [18, 22].

Integrating (2) with respect to x we obtain the integro-differential formulation of (2) (see [16])

$$\displaystyle{ \partial _{t}u + \partial _{x}f(u) = {\gamma \int }^{x}u(t,y)\mathit{dy}, }$$

which is equivalent to

$$\displaystyle{ \partial _{t}u + \partial _{x}f(u) = \gamma P,\qquad \partial _{x}P = u. }$$

Due to the regularizing effect of the P equation we have that

$$\displaystyle{ u \in L_{\mathit{loc}}^{\infty }\Longrightarrow P \in {L}^{\infty }((0,T);W_{\mathit{ loc}}^{1,\infty }), T > 0. }$$

The flux f is assumed to be smooth, Lipschitz continuous, and genuinely nonlinear, i.e.:

$$\displaystyle\begin{array}{rcl} f \in {C}^{2}(\mathbb{R}),\qquad \vert \{u \in \mathbb{R};f^{\prime\prime}(u) = 0\}\vert = 0,\qquad f^{\prime}(0) = 0,\qquad \vert f^{\prime}(\cdot )\vert \leq L,& &{}\end{array}$$
(3)

and the constant γ is assumed to be real.

Since the solutions are merely locally bounded, the Lipschitz continuity of the flux f assumed in (3) guarantees the finite speed of propagation of the solutions of (2).
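
For instance, the quadratic flux appearing in (1) satisfies the genuine nonlinearity and normalization requirements in (3) and is Lipschitz continuous on every bounded range of values of u:

$$\displaystyle{ f(u) = \frac{{u}^{2}} {2},\qquad f^{\prime\prime}(u) = 1\neq 0,\qquad f^{\prime}(0) = 0,\qquad \vert f^{\prime}(u)\vert = \vert u\vert \leq L\quad \mathrm{for}\ \vert u\vert \leq L. }$$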

This paper is devoted to the well-posedness of the initial-boundary value problem (see Sect. 2) and of the Cauchy problem (see Sect. 3) for (2). Our existence argument is based on a passage to the limit, via a compensated compactness argument [24], in a vanishing viscosity approximation of (8):

$$\displaystyle{ \partial _{t}u_{\varepsilon } + \partial _{x}f(u_{\varepsilon }) = \gamma P_{\varepsilon } +\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon },\qquad \partial _{ x}P_{\varepsilon } = u_{\varepsilon }. }$$

On the other hand, we use the method of [9] for the uniqueness and stability of the solutions of (2).

2 The Initial Boundary Value Problem

In this section, we augment (2) with the boundary condition

$$\displaystyle{ u(t,0) = 0,\qquad t > 0, }$$
(4)

and the initial datum

$$\displaystyle{ u(0,x) = u_{0}(x),\qquad x > 0. }$$
(5)

We assume that

$$\displaystyle{ u_{0} \in {L}^{2}(0,\infty ) \cap L_{\mathit{ loc}}^{\infty }(0,\infty ),\quad \int _{ 0}^{\infty }u_{ 0}(x)\mathit{dx} = 0. }$$
(6)

The zero mean assumption on the initial datum is motivated by (2) itself: as we now sketch, integrating both sides of (2) over \((0,\infty )\) shows that \(u(t,\cdot )\) has zero mean for every t > 0, so it is natural to require the same of \(u_{0}\).
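
Indeed, formally, for smooth solutions decaying as \(x \rightarrow \infty \),

$$\displaystyle{ \gamma \int _{0}^{\infty }u(t,x)\mathit{dx} =\int _{0}^{\infty }\partial _{x}\big(\partial _{t}u + \partial _{x}f(u)\big)\mathit{dx} = -\big(\partial _{t}u + \partial _{x}f(u)\big)\big\vert _{x=0} = 0, }$$

since, by (4), \(\partial _{t}u(t,0) = 0\) and, by (3), \(\partial _{x}f(u)(t,0) = f^{\prime}(0)\partial _{x}u(t,0) = 0\); hence, for \(\gamma \neq 0\), \(u(t,\cdot )\) has zero mean.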

Integrating (2) on (0,x) we obtain the integro-differential formulation of the initial-boundary value problem (2), (4), (5) (see [16])

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}u + \partial _{x}f(u) = \gamma \int _{0}^{x}u(t,y)\mathit{dy},\quad &\qquad t > 0,\ x > 0, \\ u(t,0) = 0, \quad &\qquad t > 0, \\ u(0,x) = u_{0}(x), \quad &\qquad x > 0. \end{array} \right. }$$
(7)

This is equivalent to

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}u + \partial _{x}f(u) = \gamma P, \quad &\qquad t > 0,\ x > 0, \\ \partial _{x}P = u, \quad &\qquad t > 0,\ x > 0, \\ u(t,0) = P(t,0) = 0,\quad &\qquad t > 0, \\ u(0,x) = u_{0}(x), \quad &\qquad x > 0. \end{array} \right. }$$
(8)

Due to the regularizing effect of the P equation in (8) we have that

$$\displaystyle{ u \in L_{\mathit{loc}}^{\infty }({(0,\infty )}^{2})\Longrightarrow P \in L_{\mathit{loc}}^{\infty }((0,\infty );W_{\mathit{loc}}^{1,\infty }(0,\infty )). }$$
(9)
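
This is a direct consequence of the relation \(\partial _{x}P = u\); a sketch of the estimate behind (9): for all \(x_{1},x_{2}\) in a compact interval \(K \subset (0,\infty )\) and for a.e. t in a compact interval \(J \subset (0,\infty )\),

$$\displaystyle{ \vert P(t,x_{1}) - P(t,x_{2})\vert =\Big\vert \int _{x_{2}}^{x_{1}}u(t,y)\mathit{dy}\Big\vert \leq \left \|u\right \|_{{L}^{\infty }(J\times K)}\,\vert x_{1} - x_{2}\vert, }$$

so P is locally Lipschitz in x, with local Lipschitz constant controlled by the local bound on u.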

Therefore, if a map \(u \in L_{\mathit{loc}}^{\infty }({(0,\infty )}^{2})\) satisfies, for every convex map \(\eta \in {C}^{2}(\mathbb{R})\),

$$\displaystyle{ \partial _{t}\eta (u) + \partial _{x}q(u) - \gamma \eta ^{\prime}(u)P \leq 0,\qquad q(u) {=\int }^{u}f^{\prime}(\xi )\eta ^{\prime}(\xi )\,d\xi, }$$
(10)

in the sense of distributions, then [5, Theorem 1.1] provides the existence of a strong trace \(u_{0}^{\tau }\) on the boundary x = 0.

Definition 1.

We say that \(u \in L_{\mathit{loc}}^{\infty }({(0,\infty )}^{2})\) is an entropy solution of the initial-boundary value problem (2), (4), and (5) if:

  1. (i)

    u is a distributional solution of (7) or equivalently of (8);

  2. (ii)

    for every convex function \(\eta \in {C}^{2}(\mathbb{R})\) the entropy inequality (10) holds in the sense of distributions in \((0,\infty ) \times (0,\infty )\);

  3. (iii)

    for every convex function \(\eta \in {C}^{2}(\mathbb{R})\) with corresponding q defined by q′ = f′η′, the boundary entropy condition

    $$\displaystyle{ \begin{array}{c} q(u_{0}^{\tau }(t)) - q(0) - \eta ^{\prime}(0)\frac{{(u_{0}^{\tau }(t))}^{2}} {2} \leq 0\end{array} }$$
    (11)

    holds for a.e. \(t \in (0,\infty )\), where \(u_{0}^{\tau }(t)\) is the trace of u on the boundary x = 0.

We observe that the previous definition is equivalent to the following inequality (see [2]):

$$\displaystyle{ \begin{array}{cl} \int _{0}^{\infty }\int _{0}^{\infty }(\vert u - c\vert \partial _{t}\phi & +\mathrm{ sign}\left (u - c\right )(f(u) - f(c))\partial _{x}\phi )\mathit{dt}\,\mathit{dx} \\ & + \gamma \int _{0}^{\infty }\int _{0}^{\infty }\mathrm{sign}\left (u - c\right )P\,\phi \,\mathit{dt}\,\mathit{dx} \\ & -\int _{0}^{\infty }\mathrm{sign}\left (c\right )(f(u_{0}^{\tau }(t)) - f(c))\phi (t,0)\mathit{dt} \\ & +\int _{ 0}^{\infty }\vert u_{0}(x) - c\vert \phi (0,x)\mathit{dx} \geq 0, \end{array} }$$

for every non-negative \(\phi \in {C}^{\infty }({\mathbb{R}}^{2})\) with compact support, and for every \(c \in \mathbb{R}\).
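
Indeed, taking in (10) smooth convex approximations of the Kruzhkov entropies \(\eta _{c}(u) = \vert u - c\vert \), \(c \in \mathbb{R}\), and choosing the base point of q at c, a standard computation gives the entropy flux appearing in the inequality above:

$$\displaystyle{ q_{c}(u) =\int _{ c}^{u}f^{\prime}(\xi )\,\mathrm{sign}\left (\xi -c\right )d\xi = \mathrm{sign}\left (u - c\right )(f(u) - f(c)). }$$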

The main result of this section is the following theorem.

Theorem 1.

Assume (3), (5), and (6). The initial-boundary value problem (2), (4), and (5) possesses a unique entropy solution u in the sense of Definition 1. Moreover, if u and v are two entropy solutions of (2), (4), and (5) in the sense of Definition 1, the following inequality holds

$$\displaystyle{ \left \|u(t,\cdot ) - v(t,\cdot )\right \|_{{L}^{1}(0,R)} \leq {e}^{Ct}\left \|u(0,\cdot ) - v(0,\cdot )\right \|_{{ L}^{1}(0,R+Lt)}, }$$
(12)

for almost every 0 < t < T, every R, T > 0, and a suitable constant C > 0 depending on R and T.

Our existence argument is based on a passage to the limit in a vanishing viscosity approximation of (8). Fix a small number \(\varepsilon > 0\), and let \(u_{\varepsilon } = u_{\varepsilon }(t,x)\) be the unique classical solution of the following mixed problem

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}u_{\varepsilon } + \partial _{x}f(u_{\varepsilon }) = \gamma P_{\varepsilon } +\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon },\quad &\quad t > 0,\ x > 0, \\ \partial _{x}P_{\varepsilon } = u_{\varepsilon }, \quad &\quad t > 0,\ x > 0, \\ u_{\varepsilon }(t,0) = P_{\varepsilon }(t,0) = 0, \quad &\quad t > 0, \\ u_{\varepsilon }(0,x) = u_{\varepsilon,0}(x), \quad &\quad x > 0,\end{array} \right. }$$
(13)

where \(u_{\varepsilon,0}\) is a \({C}^{\infty }(0,\infty )\) approximation of \(u_{0}\) such that

$$\displaystyle{ \left \|u_{\varepsilon,0}\right \|_{{L}^{2}(0,\infty )} \leq \left \|u_{0}\right \|_{{L}^{2}(0,\infty )},\quad \int _{0}^{\infty }u_{\varepsilon,0}(x)\mathit{dx} = 0. }$$
(14)

Clearly, (13) is equivalent to the integro-differential problem

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}u_{\varepsilon } + \partial _{x}f(u_{\varepsilon }) = \gamma \int _{0}^{x}u_{\varepsilon }(t,y)\mathit{dy} +\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon },\quad &\quad t > 0,\ x > 0, \\ u_{\varepsilon }(t,0) = 0, \quad &\quad t > 0, \\ u_{\varepsilon }(0,x) = u_{\varepsilon,0}(x), \quad &\quad x > 0.\end{array} \right. }$$
(15)

The existence of such solutions can be obtained by fixing a small number δ > 0 and considering the further approximation of (13) (see [4])

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}u_{\varepsilon,\delta } + \partial _{x}f(u_{\varepsilon,\delta }) = \gamma P_{\varepsilon,\delta } +\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon,\delta }, \quad &\quad t > 0,\ x > 0, \\ -\delta \partial _{\mathit{xx}}^{2}P_{\varepsilon,\delta } + \partial _{x}P_{\varepsilon,\delta } = u_{\varepsilon,\delta }, \quad &\quad t > 0,\ x > 0, \\ u_{\varepsilon,\delta }(t,0) = P_{\varepsilon,\delta }(t,0) = \partial _{x}P_{\varepsilon,\delta }(t,0) = 0,\quad &\quad t > 0, \\ u_{\varepsilon,\delta }(0,x) = u_{\varepsilon,0}(x), \quad &\quad x > 0,\end{array} \right. }$$

and then sending δ → 0.

Let us prove some a priori estimates on \(u_{\varepsilon }\).

Lemma 1.

The following statements are equivalent

$$\displaystyle\begin{array}{rcl} \int _{0}^{\infty }u_{\varepsilon }(t,x)\mathit{dx}& =& 0,\quad t \geq 0,{}\end{array}$$
(16)
$$\displaystyle\begin{array}{rcl} \frac{d} {\mathit{dt}}\int _{0}^{\infty }u_{\varepsilon }^{2}\mathit{dx} + 2\varepsilon \int _{ 0}^{\infty }{(\partial _{ x}u_{\varepsilon })}^{2}\mathit{dx}& =& 0,\quad t > 0.{}\end{array}$$
(17)

Proof.

Let t > 0. We begin by proving that (16) implies (17). Multiplying (15) by \(u_{\varepsilon }\) and integrating over \((0,\infty )\) gives

$$\displaystyle\begin{array}{rcl} \frac{1} {2} \frac{d} {\mathit{dt}}\int _{0}^{\infty }u_{\varepsilon }^{2}\mathit{dx}& =& \int _{ 0}^{\infty }u_{\varepsilon }\partial _{ t}u_{\varepsilon }\mathit{dx} {}\\ & =& \varepsilon \int _{0}^{\infty }u_{\varepsilon }\partial _{\mathit{ xx}}^{2}u_{\varepsilon }\mathit{dx} -\int _{ 0}^{\infty }u_{\varepsilon }f^{\prime}(u_{\varepsilon })\partial _{ x}u_{\varepsilon }\mathit{dx} + \gamma \int _{0}^{\infty }u_{\varepsilon }\Big(\int _{ 0}^{x}u_{\varepsilon }\mathit{dy}\Big)\mathit{dx} {}\\ & =& -\varepsilon \int _{0}^{\infty }{(\partial _{ x}u_{\varepsilon })}^{2}\mathit{dx} + \gamma \int _{ 0}^{\infty }u_{\varepsilon }\Big(\int _{ 0}^{x}u_{\varepsilon }\mathit{dy}\Big)\mathit{dx}. {}\\ \end{array}$$

By (13), since \(\partial _{x}P_{\varepsilon } = u_{\varepsilon }\) and \(P_{\varepsilon }(t,0) = 0\),

$$\displaystyle{ \int _{0}^{\infty }u_{\varepsilon }\Big(\int _{ 0}^{x}u_{\varepsilon }\mathit{dy}\Big)\mathit{dx} =\int _{ 0}^{\infty }P_{\varepsilon }(t,x)\partial _{ x}P_{\varepsilon }(t,x)\mathit{dx} = \frac{1} {2}P_{\varepsilon }^{2}(t,\infty ). }$$

Then,

$$\displaystyle{ \frac{d} {\mathit{dt}}\int _{0}^{\infty }u_{\varepsilon }^{2}\mathit{dx} + 2\varepsilon \int _{ 0}^{\infty }{(\partial _{ x}u_{\varepsilon })}^{2}\mathit{dx} = \gamma P_{\varepsilon }^{2}(t,\infty ). }$$
(18)

Thanks to (16),

$$\displaystyle{ \lim _{x\rightarrow \infty }P_{\varepsilon }^{2}(t,x) ={\left(\int _{ 0}^{\infty }u_{\varepsilon }(t,x)\mathit{dx}\right)}^{2} = 0. }$$
(19)

Now (18) and (19) give (17).

Let us now show that (17) implies (16). We argue by contradiction and assume that (16) does not hold, namely that, for some t > 0,

$$\displaystyle{ \int _{0}^{\infty }u_{\varepsilon }(t,x)\mathit{dx}\neq 0. }$$

By (13),

$$\displaystyle{ P_{\varepsilon }^{2}(t,\infty ) ={\left(\int _{ 0}^{\infty }u_{\varepsilon }(t,x)\mathit{dx}\right)}^{2}\neq 0. }$$

Therefore, (18) gives

$$\displaystyle{ \frac{d} {\mathit{dt}}\int _{0}^{\infty }u_{\varepsilon }^{2}\mathit{dx} + 2\varepsilon \int _{ 0}^{\infty }{(\partial _{ x}u_{\varepsilon })}^{2}\mathit{dx}\neq 0, }$$

which contradicts (17).

Lemma 2.

For each t ≥ 0, (16) holds true. In particular, we have that

$$\displaystyle{ \left \|u_{\varepsilon }(t,\cdot )\right \|_{{L}^{2}(0,\infty )}^{2} + 2\varepsilon \int _{ 0}^{t}\left \|\partial _{ x}u_{\varepsilon }(s,\cdot )\right \|_{{L}^{2}(0,\infty )}^{2}\mathit{ds} \leq \left \|u_{ 0}\right \|_{{L}^{2}(0,\infty )}^{2}. }$$
(20)

Proof.

We begin by observing that \(u_{\varepsilon }(t,0) = 0\) implies \(\partial _{t}u_{\varepsilon }(t,0) = 0\). Thus, thanks to (3),

$$\displaystyle{ \varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon }(t,0) = \partial _{ t}u_{\varepsilon }(t,0) + f^{\prime}(u_{\varepsilon }(t,0))\partial _{x}u_{\varepsilon }(t,0) - \gamma \int _{0}^{0}u_{\varepsilon }(t,x)\mathit{dx} = 0. }$$
(21)

Differentiating (15) with respect to x, we have

$$\displaystyle{ \partial _{x}(\partial _{t}u_{\varepsilon } + \partial _{x}f(u_{\varepsilon }) -\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon }) = \gamma u_{\varepsilon }. }$$

By (21) and the smoothness of \(u_{\varepsilon }\), an integration over \((0,\infty )\) gives (16). Lemma 1 then says that (17) also holds true. Therefore, integrating (17) over (0,t) and using (14), we obtain (20).

Lemma 3.

We have that

$$\displaystyle{ \{u_{\varepsilon }\}_{\varepsilon >0}\quad {is \ bounded \ in}\quad L_{\mathit{loc}}^{\infty }({(0,\infty )}^{2}). }$$
(22)

Consequently,

$$\displaystyle{ \{P_{\varepsilon }\}_{\varepsilon >0}\quad {is \ bounded \ in}\quad L_{\mathit{loc}}^{\infty }({(0,\infty )}^{2}). }$$
(23)

Proof.

Thanks to (15), (20), and the Hölder inequality,

$$\displaystyle\begin{array}{rcl} \partial _{t}u_{\varepsilon } + \partial _{x}f(u_{\varepsilon }) -\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon }& =& \gamma \int _{ 0}^{x}u_{\varepsilon }(t,y)\mathit{dy} \leq \gamma \Big\vert \int _{ 0}^{x}u_{\varepsilon }(t,y)\mathit{dy}\Big\vert {}\\ & \leq & \gamma \int _{0}^{x}\vert u_{\varepsilon }(t,y)\vert \mathit{dy} \leq \gamma \sqrt{x}\left \|u_{\varepsilon }(t,\cdot )\right \|_{{ L}^{2}(0,\infty )} {}\\ & \leq & \gamma \sqrt{x}\left \|u_{0}\right \|_{{L}^{2}(0,\infty )}. {}\\ \end{array}$$

Let v, w, \(v_{\varepsilon }\), and \(w_{\varepsilon }\) be the solutions of the following equations:

$$\displaystyle\begin{array}{rcl} & & \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}v + \partial _{x}f(v) = \gamma \left \|u_{0}\right \|_{{L}^{2}(0,\infty )}\sqrt{x},\quad &t > 0,x > 0, \\ v(t,0) = 0, \quad &t > 0, \\ v(0,x) = u_{0}(x), \quad &x > 0, \end{array} \right. {}\\ & & \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}w + \partial _{x}f(w) = -\gamma \left \|u_{0}\right \|_{{L}^{2}(0,\infty )}\sqrt{x},\quad &t > 0,x > 0, \\ w(t,0) = 0, \quad &t > 0, \\ w(0,x) = u_{0}(x), \quad &x > 0, \end{array} \right. {}\\ & & \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}v_{\varepsilon } + \partial _{x}f(v_{\varepsilon }) = \gamma \left \|u_{0}\right \|_{{L}^{2}(0,\infty )}\sqrt{x} +\varepsilon \partial _{\mathit{xx}}^{2}v_{\varepsilon },\quad &t > 0,x > 0, \\ v_{\varepsilon }(t,0) = 0, \quad &t > 0, \\ v_{\varepsilon }(0,x) = u_{\varepsilon,0}(x), \quad &x > 0, \end{array} \right. {}\\ & & \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}w_{\varepsilon } + \partial _{x}f(w_{\varepsilon }) = -\gamma \left \|u_{0}\right \|_{{L}^{2}(0,\infty )}\sqrt{x} +\varepsilon \partial _{\mathit{xx}}^{2}w_{\varepsilon },\quad &t > 0,x > 0, \\ w_{\varepsilon }(t,0) = 0, \quad &t > 0, \\ w_{\varepsilon }(0,x) = u_{\varepsilon,0}(x), \quad &x > 0, \end{array} \right.{}\\ \end{array}$$

respectively. Then \(u_{\varepsilon }\), \(v_{\varepsilon }\), and \(w_{\varepsilon }\) are respectively a solution, a supersolution, and a subsolution of the parabolic problem

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}q + \partial _{x}f(q) = \gamma \int _{0}^{x}u_{\varepsilon }(t,y)\mathit{dy} +\varepsilon \partial _{\mathit{xx}}^{2}q,\quad &\quad t > 0,\ x > 0, \\ q(t,0) = 0, \quad &\quad t > 0, \\ q(0,x) = u_{\varepsilon,0}(x), \quad &\quad x > 0.\end{array} \right. }$$

Thus, see [6, Chap. 2, Theorem 9],

$$\displaystyle{ w_{\varepsilon } \leq u_{\varepsilon } \leq v_{\varepsilon }. }$$

Moreover, \(\{w_{\varepsilon }\}_{\varepsilon >0}\) and \(\{v_{\varepsilon }\}_{\varepsilon >0}\) are uniformly bounded in \(L_{\mathit{loc}}^{\infty }({(0,\infty )}^{2})\) and converge to w and v respectively, see [1, 17]. Therefore the two functions

$$\displaystyle{ W =\inf _{\varepsilon >0}w_{\varepsilon },\qquad V =\sup _{\varepsilon >0}v_{\varepsilon } }$$

belong to \(L_{\mathit{loc}}^{\infty }({(0,\infty )}^{2})\) and satisfy

$$\displaystyle{ W \leq w_{\varepsilon } \leq u_{\varepsilon } \leq v_{\varepsilon } \leq V. }$$
(24)

This gives (22). Since

$$\displaystyle{ \vert P_{\varepsilon }(t,x)\vert =\Big\vert \int _{ 0}^{x}u_{\varepsilon }(t,y)\mathit{dy}\Big\vert \leq \int _{ 0}^{x}\vert u_{\varepsilon }(t,y)\vert \mathit{dy}, }$$

(23) follows from (22).
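
In fact, the same Hölder argument used at the beginning of the proof gives, via (20), the explicit bound

$$\displaystyle{ \vert P_{\varepsilon }(t,x)\vert \leq \int _{0}^{x}\vert u_{\varepsilon }(t,y)\vert \mathit{dy} \leq \sqrt{x}\left \|u_{\varepsilon }(t,\cdot )\right \|_{{L}^{2}(0,\infty )} \leq \sqrt{x}\left \|u_{0}\right \|_{{L}^{2}(0,\infty )}. }$$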

Let us continue by proving the existence of a distributional solution to (2), (4), and (5) satisfying (10).

Lemma 4.

There exists a function \(u \in L_{\mathit{loc}}^{\infty }({(0,\infty )}^{2})\) that is a distributional solution of (8) and satisfies (10) for every convex entropy \(\eta \in {C}^{2}(\mathbb{R})\) .

We construct a solution by passing to the limit in a sequence \(\left \{u_{\varepsilon }\right \}_{\varepsilon >0}\) of viscosity approximations (13). We use the compensated compactness method [24].

Lemma 5.

There exist a subsequence \(\{u_{\varepsilon _{k}}\}_{k\in \mathbb{N}}\) of \(\{u_{\varepsilon }\}_{\varepsilon >0}\) and a limit function \(u \in L_{\mathit{loc}}^{\infty }({(0,\infty )}^{2})\) such that

$$\displaystyle{ u_{\varepsilon _{k}} \rightarrow u\ \mathrm{a.e.\ and\ in}\ L_{\mathit{loc}}^{p}({(0,\infty )}^{2}),\ 1 \leq p < \infty. }$$
(25)

Moreover, we have

$$\displaystyle{ P_{\varepsilon _{k}} \rightarrow P\ \mathrm{a.e.\ and\ in}\ L_{\mathit{loc}}^{p}((0,\infty );W_{\mathit{loc}}^{1,p}(0,\infty )),\ 1 \leq p < \infty, }$$
(26)

where

$$\displaystyle{ P(t,x) =\int _{ 0}^{x}u(t,y)\mathit{dy},\qquad t \geq 0,\quad x \geq 0. }$$

Proof.

Let \(\eta: \mathbb{R} \rightarrow \mathbb{R}\) be any convex C 2 entropy function, and \(q: \mathbb{R} \rightarrow \mathbb{R}\) be the corresponding entropy flux defined by q′ = f′η′. By multiplying the first equation in (13) by \(\eta ^{\prime}(u_{\varepsilon })\) and using the chain rule, we get

$$\displaystyle{ \partial _{t}\eta (u_{\varepsilon }) + \partial _{x}q(u_{\varepsilon }) =\underbrace{\mathop{ \varepsilon \partial _{\mathit{xx}}^{2}\eta (u_{\varepsilon })}}\limits _{=:\mathcal{L}_{1,\varepsilon }}\,\underbrace{\mathop{ -\varepsilon \eta ^{\prime\prime}(u_{\varepsilon }){\left (\partial _{x}u_{\varepsilon }\right )}^{2}}}\limits _{=:\mathcal{L}_{2,\varepsilon }}\,\underbrace{\mathop{ +\gamma \eta ^{\prime}(u_{\varepsilon })P_{\varepsilon }}}\limits _{=:\mathcal{L}_{3,\varepsilon }}, }$$

where \(\mathcal{L}_{1,\varepsilon }\), \(\mathcal{L}_{2,\varepsilon }\), \(\mathcal{L}_{3,\varepsilon }\) are distributions.

Thanks to Lemma 2

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{1,\varepsilon } \rightarrow 0\ \mathrm{in}\ H_{\mathit{loc}}^{-1}({(0,\infty )}^{2}), {}\\ & & \{\mathcal{L}_{2,\varepsilon }\}_{\varepsilon >0}\ \mathrm{is\ uniformly\ bounded\ in}\ L_{\mathit{loc}}^{1}({(0,\infty )}^{2}). {}\\ \end{array}$$
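
For the reader's convenience, here is a sketch of the standard estimates behind these two claims; they use (20) together with the fact that, by Lemma 3, \(\eta ^{\prime}(u_{\varepsilon })\) and \(\eta ^{\prime\prime}(u_{\varepsilon })\) are bounded on every compact set \(K \subset (0,T) \times (0,\infty )\), uniformly in \(\varepsilon \):

$$\displaystyle\begin{array}{rcl} \left \|\varepsilon \eta ^{\prime}(u_{\varepsilon })\partial _{x}u_{\varepsilon }\right \|_{{L}^{2}(K)}& \leq & \left \|\eta ^{\prime}(u_{\varepsilon })\right \|_{{L}^{\infty }(K)}\,\varepsilon {\Big(\int _{0}^{T}\int _{0}^{\infty }{(\partial _{x}u_{\varepsilon })}^{2}\mathit{dx}\,\mathit{dt}\Big)}^{1/2} \leq \left \|\eta ^{\prime}(u_{\varepsilon })\right \|_{{L}^{\infty }(K)}\sqrt{\frac{\varepsilon }{2}}\left \|u_{0}\right \|_{{L}^{2}(0,\infty )}, {}\\ \left \|\varepsilon \eta ^{\prime\prime}(u_{\varepsilon }){(\partial _{x}u_{\varepsilon })}^{2}\right \|_{{L}^{1}(K)}& \leq & \left \|\eta ^{\prime\prime}(u_{\varepsilon })\right \|_{{L}^{\infty }(K)}\,\varepsilon \int _{0}^{T}\int _{0}^{\infty }{(\partial _{x}u_{\varepsilon })}^{2}\mathit{dx}\,\mathit{dt} \leq \frac{1} {2}\left \|\eta ^{\prime\prime}(u_{\varepsilon })\right \|_{{L}^{\infty }(K)}\left \|u_{0}\right \|_{{L}^{2}(0,\infty )}^{2}, {}\\ \end{array}$$

so that \(\mathcal{L}_{1,\varepsilon } = \partial _{x}\big(\varepsilon \eta ^{\prime}(u_{\varepsilon })\partial _{x}u_{\varepsilon }\big) \rightarrow 0\) in \(H_{\mathit{loc}}^{-1}({(0,\infty )}^{2})\), while \(\{\mathcal{L}_{2,\varepsilon }\}_{\varepsilon >0}\) is uniformly bounded in \(L_{\mathit{loc}}^{1}({(0,\infty )}^{2})\).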

We prove that

$$\displaystyle{ \{\mathcal{L}_{3,\varepsilon }\}_{\varepsilon >0}\ \mathrm{is \ uniformly \ bounded \ in}\ L_{\mathit{loc}}^{1}({(0,\infty )}^{2}). }$$

Let K be a compact subset of \({(0,\infty )}^{2}\). By Lemma 3,

$$\displaystyle\begin{array}{rcl} \left \|\gamma \eta ^{\prime}(u_{\varepsilon })P_{\varepsilon }\right \|_{{L}^{1}(K)}& =& \gamma \int \int _{K}\vert \eta ^{\prime}(u_{\varepsilon })\vert \vert P_{\varepsilon }\vert \mathit{dt}\,\mathit{dx} {}\\ & \leq & \gamma \left \|\eta ^{\prime}(u_{\varepsilon })\right \|_{{L}^{\infty }(K)}\left \|P_{\varepsilon }\right \|_{{L}^{\infty }(K)}\vert K\vert. {}\\ \end{array}$$

Therefore, Murat’s lemma [19] implies that

$$\displaystyle{ \mbox{ $\left \{\partial _{t}\eta (u_{\varepsilon }) + \partial _{x}q(u_{\varepsilon })\right \}_{\varepsilon >0}$ lies in a compact subset of $H_{\mathrm{loc}}^{-1}({(0,\infty )}^{2})$.} }$$
(27)

The \(L_{\mathit{loc}}^{\infty }\) bound stated in Lemma 3, (27), and Tartar’s compensated compactness method [24] give the existence of a subsequence \(\{u_{\varepsilon _{k}}\}_{k\in \mathbb{N}}\) and a limit function \(u \in L_{\mathit{loc}}^{\infty }({(0,\infty )}^{2})\) such that (25) holds.

Finally, (26) follows from (25), the Hölder inequality, and the identities

$$\displaystyle{ P_{\varepsilon _{k}}(t,x) =\int _{ 0}^{x}u_{\varepsilon _{ k}}(t,y)\mathit{dy},\qquad \partial _{x}P_{\varepsilon _{k}} = u_{\varepsilon _{k}}. }$$
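
For instance, for x in a bounded interval (0,X) and \(1 \leq p < \infty \), the Hölder inequality gives

$$\displaystyle{ \vert P_{\varepsilon _{k}}(t,x) - P(t,x)\vert \leq \int _{0}^{X}\vert u_{\varepsilon _{k}}(t,y) - u(t,y)\vert \mathit{dy} \leq {X}^{1-1/p}{\Big(\int _{0}^{X}\vert u_{\varepsilon _{k}}(t,y) - u(t,y){\vert }^{p}\mathit{dy}\Big)}^{1/p}, }$$

which, together with (25) and \(\partial _{x}P_{\varepsilon _{k}} = u_{\varepsilon _{k}} \rightarrow u = \partial _{x}P\), gives (26).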

Moreover, [5, Theorem 1.1] tells us that the limit u admits a strong boundary trace \(u_{0}^{\tau }\) on \((0,\infty ) \times \{ x = 0\}\). Arguing as in [5, Sect. 3.1] (indeed, our solution is obtained as the vanishing viscosity limit of (8)), [5, Lemma 3.2] and the boundedness of the source term P (cf. (9)) imply (11).

We are now ready for the proof of Theorem 1.

Proof (Proof of Theorem 1).

Lemma 5 gives the existence of an entropy solution u(t,x) of (7), or equivalently of (8).

Let us show that u(t,x) is unique, and that (12) holds true. Since our solutions are only locally bounded, we use the doubling of variables method and obtain local estimates based on the finite speed of propagation of the waves generated by (2). Let u, v be two entropy solutions of (7), or equivalently of (8), and let 0 < t < T. Arguing as in [2, 9], and using the fact that the two solutions satisfy the same boundary condition, we can prove that

$$\displaystyle{ \partial _{t}(\vert u - v\vert ) + \partial _{x}((f(u) - f(v))\mathrm{sign}\left (u - v\right )) - \gamma \mathrm{sign}\left (u - v\right )(P_{u} - P_{v}) \leq 0 }$$

holds in the sense of distributions in \((0,\infty ) \times (0,\infty )\), and

$$\displaystyle{ \begin{array}{rl} \left \|u(t,\cdot ) - v(t,\cdot )\right \|_{{L}^{1}(I(t))} & \leq \left \|u_{0} - v_{0}\right \|_{{L}^{1}(I(0))} \\ & \quad + \gamma \int _{0}^{t}\int _{I(s)}\mathrm{sign}\left (u - v\right )(P_{u} - P_{v})\mathit{ds}\,\mathit{dx}, \end{array} \quad 0 < t < T, }$$
(28)

where

$$\displaystyle{ P_{u}(t,x) =\int _{ 0}^{x}u(t,y)\mathit{dy},\quad P_{ v} =\int _{ 0}^{x}v(t,y)\mathit{dy},\quad I(s) = (0,R + L(t - s)), }$$

and L is the Lipschitz constant of the flux f.

Since

$$\displaystyle\begin{array}{rcl} \gamma \int _{0}^{t}\int _{ I(s)}\mathrm{sign}\left (u - v\right )(P_{u} - P_{v})\mathit{ds}\,\mathit{dx}& \leq & \gamma \int _{0}^{t}\int _{ I(s)}\vert P_{u} - P_{v}\vert \mathit{ds}\,\mathit{dx} \\ & \leq & \gamma \int _{0}^{t}\int _{ I(s)}\Big(\Big\vert \int _{0}^{x}\vert u - v\vert \mathit{dy}\Big\vert \Big)\mathit{ds}\,\mathit{dx} \\ & \leq & \gamma \int _{0}^{t}\int _{ I(s)}\Big(\Big\vert \int _{I(s)}\vert u - v\vert \mathit{dy}\Big\vert \Big)\mathit{ds}\,\mathit{dx} \\ & =& \gamma \int _{0}^{t}\vert I(s)\vert \left \|u(s,\cdot ) - v(s,\cdot )\right \|_{{ L}^{1}(I(s))}\mathit{ds}, \\ & & {}\end{array}$$
(29)

and

$$\displaystyle{ \vert I(s)\vert = R + L(t - s) \leq R + Lt \leq R + LT, }$$
(30)

we can consider the following continuous function:

$$\displaystyle{ G(t) = \left \|u(t,\cdot ) - v(t,\cdot )\right \|_{{L}^{1}(I(t))},\quad t \geq 0. }$$
(31)

Using this notation, it follows from (28)–(30) that

$$\displaystyle{ G(t) \leq G(0) + C\int _{0}^{t}G(s)\mathit{ds}, }$$

where \(C = \gamma (R + LT)\). Since \(I(t) = (0,R)\) and \(I(0) = (0,R + Lt)\), Gronwall's inequality and (31) give

$$\displaystyle{ \left \|u(t,\cdot ) - v(t,\cdot )\right \|_{{L}^{1}(0,R)} \leq {e}^{Ct}\left \|u_{ 0} - v_{0}\right \|_{{L}^{1}(0,R+Lt)}, }$$

that is (12).

3 The Cauchy Problem

Let us consider now the Cauchy problem associated to (2). Since the arguments are similar to those of the previous section we simply sketch them, highlighting only the differences between the two problems.

In this section we augment (2) with the initial datum

$$\displaystyle{ u(0,x) = u_{0}(x),\qquad x \in \mathbb{R}. }$$
(32)

We assume that

$$\displaystyle{ u_{0} \in {L}^{2}(\mathbb{R}) \cap L_{\mathit{ loc}}^{\infty }(\mathbb{R}),\quad \int _{ \mathbb{R}}u_{0}(x)\mathit{dx} = 0. }$$
(33)

As in the previous section, the zero mean assumption is motivated by (2): integrating both sides of (2), we see that \(u(t,\cdot )\) has zero mean for every t > 0, so it is natural to assume the same on the initial datum. We rewrite the Cauchy problem (2), (32) in the following way

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}u + \partial _{x}f(u) = \gamma \int _{0}^{x}u(t,y)\mathit{dy},\quad &\qquad t > 0,\ x \in \mathbb{R}, \\ u(0,x) = u_{0}(x), \quad &\qquad x \in \mathbb{R},\end{array} \right. }$$
(34)

or equivalently

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}u + \partial _{x}f(u) = \gamma P,\quad &\qquad t > 0,\ x \in \mathbb{R}, \\ \partial _{x}P = u, \quad &\qquad t > 0,\ x \in \mathbb{R}, \\ P(t,0) = 0, \quad &\qquad t > 0, \\ u(0,x) = u_{0}(x), \quad &\qquad x \in \mathbb{R}. \end{array} \right. }$$
(35)

Due to the regularizing effect of the P equation in (35) we have that

$$\displaystyle{ u \in L_{\mathit{loc}}^{\infty }((0,\infty ) \times \mathbb{R})\Longrightarrow P \in L_{\mathit{loc}}^{\infty }((0,\infty );W_{\mathit{loc}}^{1,\infty }(\mathbb{R})). }$$

Definition 2.

We say that \(u \in L_{\mathit{loc}}^{\infty }((0,\infty ) \times \mathbb{R})\) is an entropy solution of the initial value problem (2) and (32) if:

  1. (i)

    u is a distributional solution of (34) or equivalently of (35);

  2. (ii)

    For every convex function \(\eta \in {C}^{2}(\mathbb{R})\) the entropy inequality

    $$\displaystyle{ \partial _{t}\eta (u) + \partial _{x}q(u) - \gamma \eta ^{\prime}(u)P \leq 0,\qquad q(u) {=\int }^{u}f^{\prime}(\xi )\eta ^{\prime}(\xi )\,d\xi, }$$
    (36)

    holds in the sense of distributions in \((0,\infty ) \times \mathbb{R}\).

The main result of this section is the following theorem.

Theorem 2.

Assume (3), (32), and (33). The initial value problem (2) and (32) possesses a unique entropy solution u in the sense of Definition 2. Moreover, if u and v are two entropy solutions of (2) and (32) in the sense of Definition 2, the following inequality holds

$$\displaystyle{ \left \|u(t,\cdot ) - v(t,\cdot )\right \|_{{L}^{1}(-R,R)} \leq {e}^{Ct}\left \|u(0,\cdot ) - v(0,\cdot )\right \|_{{ L}^{1}(-R-Lt,R+Lt)}, }$$
(37)

for almost every 0 < t < T, every R, T > 0, and a suitable constant C > 0 depending on R and T.

Our existence argument is based on a passage to the limit in a vanishing viscosity approximation of (35).

Fix a small number \(\varepsilon > 0\), and let \(u_{\varepsilon } = u_{\varepsilon }(t,x)\) be the unique classical solution of the following mixed problem

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}u_{\varepsilon } + \partial _{x}f(u_{\varepsilon }) = \gamma P_{\varepsilon } +\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon },\quad &\quad t > 0,\ x \in \mathbb{R}, \\ \partial _{x}P_{\varepsilon } = u_{\varepsilon }, \quad &\quad t > 0,\ x \in \mathbb{R}, \\ P_{\varepsilon }(t,0) = 0, \quad &\quad t > 0, \\ u_{\varepsilon }(0,x) = u_{\varepsilon,0}(x), \quad &\quad x \in \mathbb{R},\end{array} \right. }$$
(38)

where \(u_{\varepsilon,0}\) is a \({C}^{\infty }(\mathbb{R})\) approximation of \(u_{0}\) such that

$$\displaystyle{ \left \|u_{\varepsilon,0}\right \|_{{L}^{2}(\mathbb{R})} \leq \left \|u_{0}\right \|_{{L}^{2}(\mathbb{R})},\quad \int _{\mathbb{R}}u_{\varepsilon,0}(x)\mathit{dx} = 0. }$$
(39)

Clearly, (38) is equivalent to the integro-differential problem

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}u_{\varepsilon } + \partial _{x}f(u_{\varepsilon }) = \gamma \int _{0}^{x}u_{\varepsilon }(t,y)\mathit{dy} +\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon },\quad &\quad t > 0,\ x \in \mathbb{R}, \\ u_{\varepsilon }(0,x) = u_{\varepsilon,0}(x), \quad &\quad x \in \mathbb{R}.\end{array} \right. }$$
(40)

The existence of such solutions can be obtained by fixing a small number δ > 0 and considering the further approximation of (38) (see [4])

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \partial _{t}u_{\varepsilon,\delta } + \partial _{x}f(u_{\varepsilon,\delta }) = \gamma P_{\varepsilon,\delta } +\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon,\delta },\quad &\quad t > 0,\ x \in \mathbb{R}, \\ -\delta \partial _{\mathit{xx}}^{2}P_{\varepsilon,\delta } + \partial _{x}P_{\varepsilon,\delta } = u_{\varepsilon,\delta }, \quad &\quad t > 0,\ x \in \mathbb{R}, \\ P_{\varepsilon,\delta }(t,0) = 0, \quad &\quad t > 0, \\ u_{\varepsilon,\delta }(0,x) = u_{\varepsilon,0}(x), \quad &\quad x \in \mathbb{R},\end{array} \right. }$$

and then sending δ → 0.

Let us prove some a priori estimates on \(u_{\varepsilon }\). Arguing as in Lemma 1 we have the following.

Lemma 6.

Let us suppose that

$$\displaystyle{ P_{\varepsilon }(t,-\infty ) = 0,\quad t \geq 0,\quad (\mathrm{or}\quad P_{\varepsilon }(t,\infty ) = 0), }$$
(41)

where \(P_{\varepsilon }(t,x)\) is defined in (38) . Then the following statements are equivalent

$$\displaystyle\begin{array}{rcl} \int _{\mathbb{R}}u_{\varepsilon }(t,x)\mathit{dx}& =& 0,\quad t \geq 0,{}\end{array}$$
(42)
$$\displaystyle\begin{array}{rcl} \frac{d} {\mathit{dt}}\int _{\mathbb{R}}u_{\varepsilon }^{2}\mathit{dx} + 2\varepsilon \int _{ \mathbb{R}}{(\partial _{x}u_{\varepsilon })}^{2}\mathit{dx}& =& 0,\quad t > 0.{}\end{array}$$
(43)
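
The only change with respect to the proof of Lemma 1 is in the integration by parts: under (41), say \(P_{\varepsilon }(t,-\infty ) = 0\), one has

$$\displaystyle{ \int _{\mathbb{R}}u_{\varepsilon }\Big(\int _{0}^{x}u_{\varepsilon }\mathit{dy}\Big)\mathit{dx} =\int _{\mathbb{R}}P_{\varepsilon }\partial _{x}P_{\varepsilon }\mathit{dx} = \frac{1} {2}\big(P_{\varepsilon }^{2}(t,\infty ) - P_{\varepsilon }^{2}(t,-\infty )\big) = \frac{1} {2}{\Big(\int _{\mathbb{R}}u_{\varepsilon }(t,x)\mathit{dx}\Big)}^{2}, }$$

so that the analogue of (18) reads \(\frac{d} {\mathit{dt}}\int _{\mathbb{R}}u_{\varepsilon }^{2}\mathit{dx} + 2\varepsilon \int _{\mathbb{R}}{(\partial _{x}u_{\varepsilon })}^{2}\mathit{dx} = \gamma {\big(\int _{\mathbb{R}}u_{\varepsilon }(t,x)\mathit{dx}\big)}^{2}\), and the equivalence of (42) and (43) follows exactly as before.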

Lemma 7.

For each t ≥ 0, (42) holds true, and

$$\displaystyle{ P_{\varepsilon }(t,\infty ) = P_{\varepsilon }(t,-\infty ) = 0. }$$
(44)

In particular, we have that

$$\displaystyle{ \left \|u_{\varepsilon }(t,\cdot )\right \|_{{L}^{2}(\mathbb{R})}^{2} + 2\varepsilon \int _{ 0}^{t}\left \|\partial _{ x}u_{\varepsilon }(s,\cdot )\right \|_{{L}^{2}(\mathbb{R})}^{2}\mathit{ds} \leq \left \|u_{ 0}\right \|_{{L}^{2}(\mathbb{R})}^{2}. }$$
(45)

Proof.

Differentiating (40) with respect to x, we have

$$\displaystyle{ \partial _{x}(\partial _{t}u_{\varepsilon } + \partial _{x}f(u_{\varepsilon }) -\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon }) = \gamma u_{\varepsilon }. }$$

Since \(u_{\varepsilon }\) is a smooth solution of (40), an integration over \(\mathbb{R}\) gives (42). Again, by the regularity of \(u_{\varepsilon }\) and (38), we get

$$\displaystyle\begin{array}{rcl} \lim _{x\rightarrow -\infty }(\partial _{t}u_{\varepsilon } + \partial _{x}f(u_{\varepsilon }) -\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon })& =& \gamma \int _{ 0}^{-\infty }u_{\varepsilon }(t,x)\mathit{dx} = \gamma P_{\varepsilon }(t,-\infty ) = 0, {}\\ \lim _{x\rightarrow \infty }(\partial _{t}u_{\varepsilon } + \partial _{x}f(u_{\varepsilon }) -\varepsilon \partial _{\mathit{xx}}^{2}u_{\varepsilon })& =& \gamma \int _{ 0}^{\infty }u_{\varepsilon }(t,x)\mathit{dx} = \gamma P_{\varepsilon }(t,\infty ) = 0, {}\\ \end{array}$$

that is (44).

Lemma 6 then says that (43) also holds true. Therefore, integrating (43) over (0,t) and using (39), we obtain (45).

Arguing as in Lemma 3 we obtain the following lemma:

Lemma 8.

We have that

$$\displaystyle{ \{u_{\varepsilon }\}_{\varepsilon >0}\quad {is\,\,bounded\,\,in}\quad L_{\mathit{loc}}^{\infty }((0,\infty ) \times \mathbb{R}). }$$
(46)

Consequently,

$$\displaystyle{ \{P_{\varepsilon }\}_{\varepsilon >0}\quad {is\,\,bounded\,\,in}\quad L_{\mathit{loc}}^{\infty }((0,\infty ) \times \mathbb{R}). }$$
(47)

Let us continue by proving the existence of a distributional solution to (2) and (32) satisfying (36).

Lemma 9.

There exists a function \(u \in L_{\mathit{loc}}^{\infty }((0,\infty ) \times \mathbb{R})\) that is a distributional solution of (35) and satisfies (36) for every convex entropy \(\eta \in {C}^{2}(\mathbb{R})\) .

We construct a solution by passing to the limit in a sequence \(\left \{u_{\varepsilon }\right \}_{\varepsilon >0}\) of viscosity approximations (38). We use the compensated compactness method [24].

Lemma 10.

There exist a subsequence \(\{u_{\varepsilon _{k}}\}_{k\in \mathbb{N}}\) of \(\{u_{\varepsilon }\}_{\varepsilon >0}\) and a limit function \(u \in L_{\mathit{loc}}^{\infty }((0,\infty ) \times \mathbb{R})\) such that

$$\displaystyle{ u_{\varepsilon _{k}} \rightarrow u\ \mathrm{a.e.\ and\ in}\ L_{\mathit{loc}}^{p}((0,\infty ) \times \mathbb{R}),\ 1 \leq p < \infty. }$$
(48)

Moreover, we have

$$\displaystyle{ P_{\varepsilon _{k}} \rightarrow P\ \mathrm{a.e.\ and\ in}\ L_{\mathit{loc}}^{p}((0,\infty );W_{\mathit{ loc}}^{1,p}(\mathbb{R})),\ 1 \leq p < \infty, }$$
(49)

where

$$\displaystyle{ P(t,x) =\int _{ 0}^{x}u(t,y)\mathit{dy},\qquad t \geq 0,\quad x \in \mathbb{R}. }$$

Proof.

Let \(\eta: \mathbb{R} \rightarrow \mathbb{R}\) be any convex C 2 entropy function, and \(q: \mathbb{R} \rightarrow \mathbb{R}\) be the corresponding entropy flux defined by q′ = f′η′. By multiplying the first equation in (38) by \(\eta ^{\prime}(u_{\varepsilon })\) and using the chain rule, we get

$$\displaystyle{ \partial _{t}\eta (u_{\varepsilon }) + \partial _{x}q(u_{\varepsilon }) =\underbrace{\mathop{ \varepsilon \partial _{\mathit{xx}}^{2}\eta (u_{\varepsilon })}}\limits _{=:\mathcal{L}_{1,\varepsilon }}\,\underbrace{\mathop{ -\varepsilon \eta ^{\prime\prime}(u_{\varepsilon }){\left (\partial _{x}u_{\varepsilon }\right )}^{2}}}\limits _{=:\mathcal{L}_{2,\varepsilon }}\,\underbrace{\mathop{ +\gamma \eta ^{\prime}(u_{\varepsilon })P_{\varepsilon }}}\limits _{=:\mathcal{L}_{3,\varepsilon }}, }$$

where \(\mathcal{L}_{1,\varepsilon }\), \(\mathcal{L}_{2,\varepsilon }\), \(\mathcal{L}_{3,\varepsilon }\) are distributions.

Arguing as in Lemma 5, we have that

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{1,\varepsilon } \rightarrow 0\ \mathrm{in}\ H_{\mathit{loc}}^{-1}((0,\infty ) \times \mathbb{R}), {}\\ & & \{\mathcal{L}_{2,\varepsilon }\}_{\varepsilon >0}\ \mathrm{and}\ \{\mathcal{L}_{3,\varepsilon }\}_{\varepsilon >0}\ \mathrm{are\ uniformly\ bounded\ in}\ L_{\mathit{loc}}^{1}((0,\infty ) \times \mathbb{R}). {}\\ \end{array}$$

Therefore, Murat’s lemma [19] implies that

$$\displaystyle{ \mbox{ $\left \{\partial _{t}\eta (u_{\varepsilon }) + \partial _{x}q(u_{\varepsilon })\right \}_{\varepsilon >0}$ lies in a compact subset of $H_{\mathrm{loc}}^{-1}((0,\infty ) \times \mathbb{R})$.} }$$
(50)

The \(L_{\mathit{loc}}^{\infty }\) bound stated in Lemma 8, (50), and Tartar’s compensated compactness method [24] imply the existence of a subsequence \(\{u_{\varepsilon _{k}}\}_{k\in \mathbb{N}}\) and a limit function \(u \in L_{\mathit{loc}}^{\infty }((0,\infty ) \times \mathbb{R})\) such that (48) holds.

Finally, (49) follows from (48), the Hölder inequality, and the identities

$$\displaystyle{ P_{\varepsilon _{k}}(t,x) =\int _{ 0}^{x}u_{\varepsilon _{ k}}(t,y)\mathit{dy},\qquad \partial _{x}P_{\varepsilon _{k}} = u_{\varepsilon _{k}}. }$$

We are now ready for the proof of Theorem 2.

Proof (Proof of Theorem 2).

Lemma 10 gives the existence of an entropy solution u of (34), or equivalently of (35).

Let us show that u is unique and that (37) holds true. Let u, v be two entropy solutions of (34), or equivalently of (35), and let 0 < t < T. Arguing as in [9] we can prove that

$$\displaystyle{ \begin{array}{rl} \left \|u(t,\cdot ) - v(t,\cdot )\right \|_{{L}^{1}(I(t))} & \leq \left \|u_{0} - v_{0}\right \|_{{L}^{1}(I(0))} \\ & \quad + \gamma \int _{0}^{t}\int _{I(s)}\mathrm{sign}\left (u - v\right )(P_{u} - P_{v})\mathit{ds}\,\mathit{dx}, \end{array} \quad 0 < t < T, }$$
(51)

where

$$\displaystyle{ P_{u}(t,x) =\int _{ 0}^{x}u(t,y)\mathit{dy},\quad P_{ v} =\int _{ 0}^{x}v(t,y)\mathit{dy},\quad I(s) = (-R-L(t-s),R+L(t-s)), }$$

and L is the Lipschitz constant of the flux f.

Since

$$\displaystyle\begin{array}{rcl} \gamma \int _{0}^{t}\int _{ I(s)}\mathrm{sign}\left (u - v\right )(P_{u} - P_{v})\mathit{ds}\,\mathit{dx}& \leq & \gamma \int _{0}^{t}\int _{ I(s)}\vert P_{u} - P_{v}\vert \mathit{ds}\,\mathit{dx} \\ & \leq & \gamma \int _{0}^{t}\int _{ I(s)}\Big(\Big\vert \int _{0}^{x}\vert u - v\vert \mathit{dy}\Big\vert \Big)\mathit{ds}\,\mathit{dx} \\ & \leq & \gamma \int _{0}^{t}\int _{ I(s)}\Big(\Big\vert \int _{I(s)}\vert u - v\vert \mathit{dy}\Big\vert \Big)\mathit{ds}\,\mathit{dx} \\ & =& \gamma \int _{0}^{t}\vert I(s)\vert \left \|u(s,\cdot ) - v(s,\cdot )\right \|_{{ L}^{1}(I(s))}\mathit{ds}, \\ & & {}\end{array}$$
(52)

and

$$\displaystyle{ \vert I(s)\vert = 2R + 2L(t - s) \leq 2R + 2Lt \leq 2R + 2LT, }$$
(53)

we can consider the following continuous function:

$$\displaystyle{ G(t) = \left \|u(t,\cdot ) - v(t,\cdot )\right \|_{{L}^{1}(I(t))},\quad t \geq 0. }$$
(54)

It follows from (51)–(53) that

$$\displaystyle{ G(t) \leq G(0) + C\int _{0}^{t}G(s)\mathit{ds}, }$$

where \(C = \gamma (2R + 2LT)\).

Gronwall’s inequality and (54) give

$$\displaystyle{ \left \|u(t,\cdot ) - v(t,\cdot )\right \|_{{L}^{1}(-R,R)} \leq {e}^{Ct}\left \|u_{ 0} - v_{0}\right \|_{{L}^{1}(-R-Lt,R+Lt)}, }$$

that is (37).